## Database migrations

The history service uses knex to manage PostgreSQL migrations.

To create a new migration, run:

    npx knex migrate:make migration_name
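The generated file exports `up` and `down` functions that receive a knex instance. As a minimal sketch (the file, table, and column names below are purely illustrative, not taken from this service's actual migrations):

```js
// migrations/20240101000000_create_example_table.js (illustrative name)

exports.up = async function (knex) {
  // Apply the change: create a hypothetical table.
  await knex.schema.createTable('example', table => {
    table.increments('id')
    table.string('name').notNullable()
    table.timestamps(true, true) // created_at / updated_at with defaults
  })
}

exports.down = async function (knex) {
  // Revert the change so the migration can be rolled back.
  await knex.schema.dropTableIfExists('example')
}
```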
To apply migrations, run:

    npx knex migrate:latest
For more information, consult the knex migrations guide.
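Both commands read the database connection and migrations directory from the service's `knexfile.js`. A rough sketch of what such a file can look like (the connection setting shown here is an assumption, not the service's actual configuration):

```js
// knexfile.js (sketch only; check the real file for the actual settings)
module.exports = {
  client: 'pg', // PostgreSQL, per the section above
  connection: process.env.POSTGRES_CONNECTION_STRING, // assumed variable name
  migrations: {
    directory: './migrations',
  },
}
```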
## Global blobs

Global blobs are blobs that are shared between projects. The list of global blobs is stored in the `projectHistoryGlobalBlobs` Mongo collection and is read when the service starts. Changing the list of global blobs needs to be done carefully.
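The collection can be inspected directly with `mongosh` (the collection name comes from above; which database it lives in depends on the service's Mongo configuration):

```js
// In mongosh, connected to the database used by the history service:
db.projectHistoryGlobalBlobs.countDocuments() // how many global blobs exist
db.projectHistoryGlobalBlobs.find().limit(5)  // peek at a few records
```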
### Adding a blob to the global blobs list
If we identify a blob that appears in many projects, we might want to move that blob to the global blobs list.
1. Add a record for the blob to the `projectHistoryGlobalBlobs` collection (sketched below).
2. Restart the history service.
3. Delete any corresponding project blobs.
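A hedged sketch of step 1 in `mongosh`. Only the collection name comes from this README; the `_id`-as-hash convention and the size fields are assumptions for illustration and must be checked against the real schema before use:

```js
// mongosh sketch: add a blob to the global blobs list (step 1 above).
// WARNING: field names are assumptions; verify against the actual schema.
db.projectHistoryGlobalBlobs.insertOne({
  _id: 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa', // assumed: the blob hash
  byteLength: 1024,   // assumed field
  stringLength: 1024, // assumed field
})
```

Steps 2 and 3 (restarting the service and deleting the corresponding project blobs) are operational steps and are not covered by this sketch.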
### Removing a blob from the global blobs list
Removing a blob from the global blobs list is trickier. As soon as the global blob is made unavailable, every project that needs the blob will have to get its own copy. To avoid disruptions, follow these steps:
1. In the `projectHistoryGlobalBlobs` collection, set the `demoted` property to `false` on the global blob to remove. This will make the history system write new instances of this blob to project blobs, but still read from the global blob (see the sketch after this list).
2. Restart the history service.
3. Copy the blob to all projects that need it.
4. Remove the blob from the `projectHistoryGlobalBlobs` collection.
5. Restart the history service.
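Steps 1 and 4 are plain `mongosh` updates. A rough sketch, keeping the `demoted` value exactly as stated in step 1 and assuming records are keyed by blob hash (an assumption, as above):

```js
// Step 1: flag the global blob so new writes go to project blobs.
db.projectHistoryGlobalBlobs.updateOne(
  { _id: 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' }, // assumed: the blob hash
  { $set: { demoted: false } } // value as given in step 1
)

// Step 4, only after every project that needs the blob has its own copy:
db.projectHistoryGlobalBlobs.deleteOne({
  _id: 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa',
})
```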