Not able to generate reports with 2 million rows and filesize > 80MB.
-
Hi,
We are trying to create a report with the text recipe and the handlebars engine, with around 2 million rows. We make an RPC call in the beforeRender function to fetch the data, and the request returns the data in around one minute.
But then the report is never generated and our afterRender function is never called.
The report sometimes gets generated successfully in around 3 minutes, but most of the time the execution stalls between the two scripts. Below is our code for the beforeRender script:
```js
function beforeRender(req, res, done) {
  dmLog.info(`starting before render script for : ${req.data}.`);
  // database query to fetch data rows for report.
  return getData()
    .then((result) => {
      dmLog.info("got data");
      req.data["values"] = result;
      dmLog.info("data processed");
      done();
    })
    .catch((e) => {
      dmLog.error("Error in before render script for Wifi Read Report.");
      dmLog.error(e);
      req.data["error"] = true;
      done();
    });
}
```
And this is the afterRender script:
```js
function afterRender(req, res, done) {
  dmLog.info("reached after render");
  // saving generated report into filesystem.
  saveReport(req.data, res, done)
    .then((x) => {
      dmLog.info(`afterRender for save completed for data ${req.data}- `);
      done();
    })
    .catch((x) => {
      dmLog.info(`afterRender for save failed for data ${req.data}- `);
      done();
    });
}
```
We have set both `scripts.timeout` and `templatingEngines.timeout` in our config to 10 minutes.
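For reference, this is roughly how those two timeouts look in our jsreport config file (a sketch; 10 minutes = 600000 ms):

```json
{
  "scripts": {
    "timeout": 600000
  },
  "templatingEngines": {
    "timeout": 600000
  }
}
```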
We are able to generate report with around 1.7 million rows successfully but faced issues with 2 million.
Can you suggest what we can do in order to prevent this stalling?
-
The node process likely reaches Node's default ~1.5GB heap limit and starts to run GC aggressively.
You should increase the memory limit for the node.js process using the `--max-old-space-size` flag:

```
node --max_old_space_size=6096 server.js
```
or if you use the jsreport CLI, set the environment variable

```
NODE_OPTIONS="--max-old-space-size=6096"
```

and run

```
jsreport start
```
This will get propagated by default just to the main process. If you use the `dedicated-process` or `http-server` strategy for templating engines/scripts, you also need to set this config:

```json
"templatingEngines": {
  "timeout": 1000000,
  "strategy": "dedicated-process",
  "forkOptions": {
    "execArgv": ["--max-old-space-size=6096"]
  }
}
```
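To get a feel for the serialization cost these out-of-process strategies pay, here is a rough Node.js sketch (the row count and shape are illustrative, not your actual data):

```javascript
// Rough illustration of the IPC cost a child-process strategy incurs:
// the whole req.data payload is stringified, sent to the worker process,
// and parsed back again. (100k rows here just to keep the demo quick.)
const rows = Array.from({ length: 100000 }, (_, i) => ({
  id: i,
  value: `row-${i}`
}));

const start = process.hrtime.bigint();
const serialized = JSON.stringify({ values: rows }); // main -> worker
const roundTripped = JSON.parse(serialized);         // worker parses it back
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;

console.log(`payload size: ${(serialized.length / 1024 / 1024).toFixed(1)} MB`);
console.log(`stringify + parse: ${elapsedMs.toFixed(0)} ms`);
```

With 2 million real rows the payload is far larger, so this round trip happens on a much bigger JSON each render.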
However, the `in-process` strategy will give you the best performance, because it avoids serializing and deserializing such a giant JSON payload between processes. With the increased memory limits, I am able to produce a 200MB output with 15 million records.