Server Fault Asked on November 22, 2021
I’m writing a business plan and I need to estimate the cost of running my website once it is reached by 500,000 unique visitors.
Each page performs roughly 50 queries.
By that calculation I would need to handle about 3,000 queries per second… what kind of server can handle that?
The problem is: my site currently gets about 2,000 visits per day and runs at roughly 150–200 queries per second…
Extrapolating from that point, I would expect around 50,000 queries per second.
How many servers would I need, in a cluster or with replication, to manage this load?
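For reference, here is a rough back-of-envelope version of that estimate; the pages-per-visit figure and the peak factor are assumptions picked for illustration, not measured values:

```python
# Back-of-envelope traffic estimate (illustrative assumptions only).

daily_visitors = 500_000      # projected unique visitors per day
pages_per_visit = 5           # assumed pages viewed per visit
queries_per_page = 50         # queries issued per page (from the question)
peak_factor = 3               # assume peak traffic is ~3x the daily average

queries_per_day = daily_visitors * pages_per_visit * queries_per_page
avg_qps = queries_per_day / 86_400          # seconds in a day
peak_qps = avg_qps * peak_factor

print(f"average: {avg_qps:,.0f} qps, peak: {peak_qps:,.0f} qps")
# With these assumptions: ~1,447 qps average, ~4,340 qps peak.
```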
There are too many things that can affect your queries per second; please do not trust my data without testing it yourself. I’m posting my speed-test results here to help others estimate the qps of a current (2018-09) MySQL database and machine. In my test the data size was smaller than the server memory (which dramatically reduces I/O and improves performance a lot).
I used a GCP Cloud SQL for MySQL instance with one CPU, 3.75 GB of memory, and a 100 GB SSD, and got:
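If you want to run a similar rough measurement yourself, here is a minimal single-connection sketch, assuming a reachable MySQL instance and the PyMySQL client; the credentials and the test query are placeholders:

```python
# Minimal qps measurement: fire a fixed number of simple queries and
# divide by elapsed wall-clock time. Credentials and query are placeholders.
import time
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="bench", password="secret", database="test")
cur = conn.cursor()

n_queries = 10_000
start = time.perf_counter()
for i in range(n_queries):
    # A trivial indexed point lookup; substitute a query representative of your workload.
    cur.execute("SELECT id FROM users WHERE id = %s", (i % 1000 + 1,))
    cur.fetchall()
elapsed = time.perf_counter() - start

print(f"{n_queries / elapsed:,.0f} queries/second (single connection)")
conn.close()
```

Note that a single connection mostly measures per-query latency; to approach the server's real throughput limit you would run many such clients in parallel.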
Answered by bronze man on November 22, 2021
For large "hot" datasets, it is probably worth the investment in time to convert to a "big data" scheme, it is what they are for. For example, if you have a vast quantity of data to retrieve, but you never rewrite, but only append new data, look at Apache Hive. Browse around, their is usually a flavor you can interface easily enough to existing code, that will also prevent the heartburn of running out of cache space.
Answered by BHGalyean on November 22, 2021
Judging by your comments, the biggest factor will be your data set size, or at least the size of the "hot" data set. 3,000 qps or even 8,000 qps on a 16-core server is not a problem at all, as long as the server rarely has to go to disk to satisfy a query. Once the active data set exceeds the amount of memory InnoDB is using to cache it, your performance will drop off rapidly.
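One quick way to sanity-check that is to compare the InnoDB buffer pool size with the total size of the data and indexes it is expected to hold. A rough sketch, with placeholder connection details:

```python
# Compare the InnoDB buffer pool size against total data + index size.
# If the hot data outgrows the buffer pool, expect disk reads and a sharp qps drop.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="admin", password="secret")
cur = conn.cursor()

cur.execute("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")
buffer_pool_bytes = int(cur.fetchone()[1])

cur.execute(
    "SELECT SUM(data_length + index_length) "
    "FROM information_schema.tables WHERE engine = 'InnoDB'"
)
data_bytes = int(cur.fetchone()[0] or 0)

print(f"buffer pool:           {buffer_pool_bytes / 2**30:.1f} GiB")
print(f"InnoDB data + indexes: {data_bytes / 2**30:.1f} GiB")
conn.close()
```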
Answered by Elliott on November 22, 2021
I used to work for an e-commerce company with a website that had several million page hits per day. We had a single DELL PE 1750 with two single-core CPUs and 2 GB of RAM, and a database size of approx. 4 GB. At peak times this server handled up to 50k+ queries per second.
Having said this: the database was well structured, all queries were finely tuned (we had weekly sessions analysing the slow query logs and fixing up queries and indexes), and the server setup was also fine-tuned. Caching is definitely a good idea, but MySQL does that anyway; you just have to analyse the performance and then fine-tune how your memory is used (query cache vs. other options).
From that experience I can tell you that the highest impact is caused by missing indexes, wrong indexes and bad database design (e.g. long string fields as primary keys and similar nonsense).
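As a starting point for that kind of weekly analysis, something along these lines turns on the slow query log and asks the server how it plans to execute a suspect query; the threshold, table, and column names are made up for illustration:

```python
# Enable the slow query log and inspect the plan of a suspect query.
# long_query_time and the example query are illustrative values only.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="admin", password="secret", database="shop")
cur = conn.cursor()

# Log every statement slower than 0.5 s (requires sufficient privileges).
cur.execute("SET GLOBAL slow_query_log = 'ON'")
cur.execute("SET GLOBAL long_query_time = 0.5")

# EXPLAIN shows whether the query uses an index ('key') or scans the table ('type' = ALL).
cur.execute("EXPLAIN SELECT * FROM orders WHERE customer_email = %s", ("a@example.com",))
for row in cur.fetchall():
    print(row)
conn.close()
```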
Answered by wolfgangsz on November 22, 2021
As Ignacio remarked, you may want to look into caching, either in the CMS or perhaps even in front of the stack. 50+ queries for every (every!) page truly is a lot.
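As one illustration of the idea, here is a toy in-process cache in front of the expensive query work; the fetch_page_data() helper is hypothetical, and a real deployment would more likely use memcached, Redis, or a caching reverse proxy:

```python
# Toy time-based cache in front of an expensive query function.
# In production you would typically use memcached/Redis or a reverse proxy instead.
import time

CACHE_TTL = 60          # seconds a cached result stays valid
_cache = {}             # key -> (expiry_timestamp, value)

def cached(key, compute):
    """Return a cached value for `key`, recomputing it when the entry expires."""
    now = time.time()
    entry = _cache.get(key)
    if entry is not None and entry[0] > now:
        return entry[1]
    value = compute()                       # the expensive part: the 50 queries
    _cache[key] = (now + CACHE_TTL, value)
    return value

# Hypothetical usage: fetch_page_data(page_id) would run the page's queries.
# page = cached(f"page:{page_id}", lambda: fetch_page_data(page_id))
```

Even a short TTL like this can turn 50 queries per page view into 50 queries per minute per page.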
Answered by Joris on November 22, 2021
This really can't be estimated without knowing the specific queries you're running, the database schema, and its size.
A simple SELECT on an indexed column is quite a different beast from a couple of JOINs based on non-indexed ones... and of course things change a lot if the involved tables contain 1K records or 1M.
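To make that contrast concrete, here is a sketch (with invented table and column names) that compares the plan MySQL chooses for a join before and after indexing the join column:

```python
# Compare the execution plan of a join before and after indexing the join column.
# Table and column names are invented for illustration.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="admin", password="secret", database="shop")
cur = conn.cursor()

join_query = (
    "SELECT o.id, c.name FROM orders o "
    "JOIN customers c ON c.id = o.customer_id "
    "WHERE c.country = %s"
)

def show_plan(label):
    cur.execute("EXPLAIN " + join_query, ("IT",))
    print(label)
    for row in cur.fetchall():
        print(" ", row)   # look at 'type' (ALL = full scan) and 'key' (index used)

show_plan("before index:")
cur.execute("CREATE INDEX idx_orders_customer_id ON orders (customer_id)")
show_plan("after index:")
conn.close()
```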
Also:
Answered by Massimo on November 22, 2021
It all depends on how complex the queries are, how much memory the servers have, and how fast the disks are.
If the queries are very simple or very well tuned, then a single large database server can handle that. If, however, the queries are very complex (or simple but poorly tuned), then you'll need several servers.
Answered by mrdenny on November 22, 2021