Can we keep the page numbers when merging PDFs using pdf-utils?
-
Hi,
I have tried to call the same template recursively, as a separate template, to handle bulk data. But when I prepend it to the actual template, the header, footer and page numbers get reset. Is there any way to handle this with pdf-utils append?
-
Please share a playground demo to give us an idea of what you are doing.
-
To overcome the issue of generating a report with a long table, I have tried to call the same template in parallel based on a chunk size. But when I do that, I'm not able to properly append the table of contents to the generated PDF. Also, performance-wise it takes 30-45 minutes, whereas Crystal Reports generates the same report within 1 minute even for 50k records. Please find the playground link here: https://playground.jsreport.net/w/anon/~duy8f7u. Could you please help with the TOC part? For performance, I will also try using divs and changing the strategy to chrome-pool with the number of workers set to 4 or 5.
-
The call to
{{{pdfAddPageItem}}}
adds a "hidden" element into the html which is later parsed and you can find it on$pdf.pages[x].items
. However, these marks are also removed before the scriptafterRender
to avoid polluting the final pdf with it. In case you need to preserve them for the post-processing, you can disable the clearing like this.async function beforeRender(req, res) { req.data = req.data || {}; req.data.chapters = chapters[`${req.data.reportTitle}`]; req.options = { ...req.options, pdfUtils: { removeHiddenMarks: false } } }
And also pass it to the dynamic render calls where you need to preserve them:

jsreport.render({
  template: {},
  options: { pdfUtils: { removeHiddenMarks: false } }
})
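For illustration, a minimal afterRender sketch of what the post-processing could look like once the marks are preserved (it assumes jsreport-proxy's pdfUtils.parse exposes the items the same way the merge templates see them; the TOC collection is just a placeholder):

const jsreport = require('jsreport-proxy')

async function afterRender(req, res) {
  // with removeHiddenMarks: false the marks survive into the final pdf,
  // so parsing it here still exposes them on $pdf.pages[x].items
  const $pdf = await jsreport.pdfUtils.parse(res.content, true)

  for (const page of $pdf.pages) {
    for (const item of page.items || []) {
      // hypothetical post-processing, e.g. collecting TOC entries per page
      console.log(item)
    }
  }
}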
-
Thank you. I have tried it, but for the dynamically appended part the links are not clickable.
-
Yes, this is unfortunately a known problem that I already linked in your previous topic:
https://github.com/jsreport/jsreport/issues/771
It's in our backlog, but I'm not sure when it will get implemented.
Did you measure a significant performance improvement when using parallelization?
-
Yes.
"chrome": {
"timeout": 3600000,
"strategy": "chrome-pool",
"numberOfWorkers": "5"
},
Changing from dedicated-process to chrome-pool and merging the PDFs in parallel from the afterRender hook increased the performance. It now takes 5 minutes for 50k records with 4k pages, and 30 minutes for 17k pages, but the header/footer merge is taking longer. Is there a way to optimize it?
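For reference, a rough sketch of the kind of afterRender parallelization described here (the template name 'report-chunk', the rows data shape and the chunk size are assumptions, not the actual implementation):

const jsreport = require('jsreport-proxy')

async function afterRender(req, res) {
  // split the (assumed) req.data.rows into fixed-size chunks
  const size = 1000
  const rows = req.data.rows || []
  const chunks = []
  for (let i = 0; i < rows.length; i += size) {
    chunks.push(rows.slice(i, i + size))
  }

  // render every chunk in parallel with the same child template
  const renders = await Promise.all(chunks.map((chunk) =>
    jsreport.render({
      template: { name: 'report-chunk' },
      data: { rows: chunk },
      options: { pdfUtils: { removeHiddenMarks: false } }
    })
  ))

  // append each chunk's pdf to the main output via the pdf-utils proxy;
  // the header/footer merge over the whole document remains the slow part
  let content = res.content
  for (const r of renders) {
    content = await jsreport.pdfUtils.append(content, r.content)
  }
  res.content = content
}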
-
It now takes 5 minutes for 50k records with 4k pages, and 30 minutes for 17k pages, but the header/footer merge is taking longer.
Could you share the original times before and after the parallelization, including the header merge and all other operations? So we know how useful this can be and how to prioritize the 771 fix.
the header/footer merge is taking longer. Is there a way to optimize it?
I don't know how you could optimize it on your end. Maybe pdf-utils itself can be optimized, but we haven't dug deep there.
-
Before parallelization it used to take 35-40 minutes, or sometimes it would time out. I also figured out that the header/footer append takes a long time when there are a lot of pages, like 15k and above. So is there any way to parallelize the header/footer as well?
-
I apologize for the trouble. A 17k-page report is a super rare case and things may not be optimized for it.
So is there any way to parallelize the header/footer as well?
I don't know how to improve this off the top of my head. I've submitted a task to our backlog to analyze it:
https://github.com/jsreport/jsreport/issues/933