Cloning a folder getting increasingly slower
-
I have a default set of templates in a folder structure that I want duplicated for each new user.
I'm currently creating a new folder and then using the `/studio/hierarchyMove` endpoint to "clone" from the source folder into the new folder, e.g.

```js
// Do the clone - from the existing folder to the newly created folder
const cloneUrl = `${jsReportSettings.url}/studio/hierarchyMove`;
const clonePayload = {
  source: {
    id: jsReportSettings.cloneFolderId,
    entitySet: "folders",
    onlyChildren: true,
  },
  target: {
    shortid: newFolderResult.shortid,
    updateReferences: true,
  },
  copy: true,
  replace: false,
};
```
(Not sure if there's a better approach, I got the above from examining the network calls when cloning a folder in the Studio...)
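In case the surrounding code matters, the whole flow looks roughly like this. It's a trimmed sketch rather than my exact code - it assumes Node 18+ with the global `fetch`, basic auth, that `jsReportSettings` is my own config object, and that the POST to `/odata/folders` returns the new folder's `shortid`:

```js
// Sketch of the per-user flow: create the user's folder via the OData API,
// then ask the Studio endpoint to copy the template folder's children into it.
async function cloneTemplatesForUser(userName) {
  const headers = {
    "Content-Type": "application/json",
    // assuming basic auth; adjust to whatever auth you use
    Authorization:
      "Basic " +
      Buffer.from(`${jsReportSettings.user}:${jsReportSettings.password}`).toString("base64"),
  };

  // 1. create the target folder for the new user
  const folderRes = await fetch(`${jsReportSettings.url}/odata/folders`, {
    method: "POST",
    headers,
    body: JSON.stringify({ name: userName }),
  });
  const newFolderResult = await folderRes.json();

  // 2. copy the template folder's children into the new folder
  const clonePayload = {
    source: {
      id: jsReportSettings.cloneFolderId,
      entitySet: "folders",
      onlyChildren: true,
    },
    target: {
      shortid: newFolderResult.shortid,
      updateReferences: true,
    },
    copy: true,
    replace: false,
  };

  const cloneRes = await fetch(`${jsReportSettings.url}/studio/hierarchyMove`, {
    method: "POST",
    headers,
    body: JSON.stringify(clonePayload),
  });

  if (!cloneRes.ok) {
    throw new Error(`hierarchyMove failed: ${cloneRes.status} ${await cloneRes.text()}`);
  }
}
```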
This does work, but the cloning process outlined above is taking longer and longer with each new user.
Any ideas? Is there a better way to "clone" a folder? I couldn't find any documentation on the `/studio/hierarchyMove` endpoint, so I'm not sure what the parameters do or whether one of them might help - `updateReferences` maybe?

Also, I'm currently using the default file system store (`jsreport-fs-store`) for this, running a Docker image on Azure Container Apps with a "Premium" performance (SSD-backed) storage account. Would switching to a database-backed store be faster, e.g. MongoDB or PostgreSQL? Are there any benchmarks around what's fastest?
-
The `hierarchyMove` endpoint uses transactions for data consistency. Unfortunately, transactions in the fs store are currently poorly implemented with regard to performance. With a lot of data in the store, it always needs to do many copy operations, which can be time-consuming on slower disks.

Switching to a full-blown database will make a big difference. jsreport is well tested with MongoDB; we use it in all our services.
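For reference, the switch itself is mostly configuration - something along these lines in `jsreport.config.json`. This is a rough sketch; the exact option names can vary between store versions, so check the mongodb-store extension docs and substitute your own connection string:

```json
{
  "store": {
    "provider": "mongodb"
  },
  "extensions": {
    "mongodb-store": {
      "uri": "mongodb+srv://user:password@cluster0.example.mongodb.net/jsreport"
    }
  }
}
```

You also need the mongodb store extension installed alongside the main package (`jsreport-mongodb-store`, or the `@jsreport/jsreport-mongodb-store` package on newer versions).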
-
Ah, makes sense - thanks very much @admin.
Am in the midst of migrating to Mongo now, so hopefully that'll help matters!
-
Just wanted to say thanks @admin - moving to Mongo brought the folder cloning down from upwards of 2 minutes to 11 seconds! The UI and API seem much snappier as well.
(Note: I'm using MongoDB Atlas Serverless; it seems to work fine so far.)
One thing I found was that using a US-hosted database from the UK for development was crazy slow, presumably down to network latency, but when I connected from the Docker image hosted in the same region it was all good.
Next up, migrating the rendered reports. :)