Database Administrators
Asked by Mark McWiggins on January 9, 2021
I am backing up and restoring a large DB (7+TB) … I am using
pg_dump -j 8 -Fd
(8 parallel dump jobs with the directory output format).
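Roughly, the full invocation looks like this (the database name and output path are placeholders, not the real ones):

# parallel dump in directory format; bigdb and the path are placeholders
pg_dump -Fd -j 8 -f /backups/bigdb.dir bigdb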
I just wondered if anybody has tested with a higher -j value like 16 or 32 or even 64?
Please advise … Thanks much!
Yes, I started the restore last evening around 7pm PST using the -j 32 flag, and in 12 hours it had restored 3.5 TB of the 7 TB total: much faster than the previous trial, where I used the -j 8 flag on an admittedly much smaller (267 GB) database.
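A restore invocation along these lines (database name and dump directory are placeholders):

# parallel restore from the directory-format dump; names are placeholders
pg_restore -d bigdb -j 32 /backups/bigdb.dir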
Hope this is helpful!
Answered by Mark McWiggins on January 9, 2021
You will probably not see a performance increase.
pg_dump only dumps multiple tables in parallel; it does not parallelize the dumping of data for any single table. So the entire process will always take at least as long as it takes to dump the single slowest table, no matter how much parallelism you throw at it. Most applications are dominated by a handful of large tables, so they won't see much benefit from assigning more jobs than the number of large tables plus one.
Beyond that, you need enough CPU and I/O capacity. -Fd uses compression by default, and that compression is often the bottleneck (on the CPU) rather than I/O. But if you have enough large tables and enough cores, then I/O will probably become the bottleneck instead.
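A rough way to check how many jobs can actually help is to see how many large tables dominate the database; the commands below are a sketch (the database name and paths are placeholders), and the -Z 0 variant simply shows how compression could be disabled if the CPU turns out to be the bottleneck:

# list the ten largest tables by total size (placeholder database name)
psql -d bigdb -c "
  SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
  FROM pg_class
  WHERE relkind = 'r'
  ORDER BY pg_total_relation_size(oid) DESC
  LIMIT 10;"

# dump without compression if compression is the CPU bottleneck (placeholder names)
pg_dump -Fd -Z 0 -j 8 -f /backups/bigdb.dir bigdb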
Answered by jjanes on January 9, 2021