Stack Overflow Asked by kadope on December 19, 2020
I have had to redefine the description of the problem.
I have a cloud-based PostgreSQL database handling about 1.5M requests per day. I checked the statistics of the individual queries with different variants of the extracted data. In general, the individual queries seem fine (they are really simple and unlikely to be slow on their own). The problem occurs while the application is running.

The application is an online game. During a gaming session, new records (with the current state of the game) are constantly written to the database, so a lot of inserts happen at that time. A user may want to see the history of the game at any moment, while those inserts are in progress: as the write service is adding new records, the read service reads the data. Such reads are rare compared to writes, roughly a 1:100 ratio (though they will occur more often in the future). The read service usually returns data in 0-6 seconds, but sometimes the read time jumps to over 40 or even 100 seconds. Rare spikes of 10-20 seconds would be acceptable, but I absolutely need to get rid of spikes over 40 seconds.
For this particular problem I am considering master-slave replication (a write-only primary and a read-only replica).
Additional information requested by a commenter: if it would help, I can present the structure of the queries and tables. Everything is written in Spring.
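If I go the primary/replica route, one way to split the traffic in Spring would be something like the following sketch built on AbstractRoutingDataSource. The class, the thread-local flag, and the connection URLs are hypothetical placeholders, not my actual code:

```java
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Routes each query to the primary or to a read replica based on a thread-local flag.
// URLs and credentials below are placeholders.
public class ReplicaRoutingDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<Boolean> READ_ONLY =
            ThreadLocal.withInitial(() -> Boolean.FALSE);

    // The read service would call markReadOnly(true) before fetching game history.
    public static void markReadOnly(boolean readOnly) {
        READ_ONLY.set(readOnly);
    }

    @Override
    protected Object determineCurrentLookupKey() {
        return READ_ONLY.get() ? "replica" : "primary";
    }

    public static DataSource build() {
        DataSource primary = new DriverManagerDataSource(
                "jdbc:postgresql://primary-host:5432/game", "app", "secret");
        DataSource replica = new DriverManagerDataSource(
                "jdbc:postgresql://replica-host:5432/game", "app", "secret");

        Map<Object, Object> targets = new HashMap<>();
        targets.put("primary", primary);
        targets.put("replica", replica);

        ReplicaRoutingDataSource routing = new ReplicaRoutingDataSource();
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(primary);
        routing.afterPropertiesSet();
        return routing;
    }
}
```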
Simply put, you have to find and remove the bottleneck.
A few pointers:
Look at the operating system and see how the I/O system and the CPU are doing.
Reduce the number of concurrent database connections, perhaps by using a connection pool (a minimal pool configuration sketch follows these pointers).
Employ pg_stat_statements to find the statements that cause the most load (an example query also follows below).
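As an illustration of the connection-pool pointer, here is a minimal HikariCP sketch; the JDBC URL, credentials, and pool size are assumptions to be adjusted for your environment:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource buildPool() {
        HikariConfig config = new HikariConfig();
        // Placeholder connection details; adjust for your environment.
        config.setJdbcUrl("jdbc:postgresql://db-host:5432/game");
        config.setUsername("app");
        config.setPassword("secret");
        // Capping the pool limits the number of concurrent backends on the server,
        // which reduces contention between the write and read services.
        config.setMaximumPoolSize(20);
        return new HikariDataSource(config);
    }
}
```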
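And a sketch of querying pg_stat_statements from Java to find the most expensive statements. It assumes the extension is already installed (CREATE EXTENSION pg_stat_statements) and uses the total_exec_time column name from PostgreSQL 13+; older releases call it total_time:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TopStatements {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://db-host:5432/game", "app", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT query, calls, total_exec_time, mean_exec_time " +
                 "FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT 10")) {
            while (rs.next()) {
                System.out.printf("%.1f ms total, %d calls: %s%n",
                        rs.getDouble("total_exec_time"),
                        rs.getLong("calls"),
                        rs.getString("query"));
            }
        }
    }
}
```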
Correct answer by Laurenz Albe on December 19, 2020