How do I speed up queries on a table that has 100 million rows?
I tried using indexes to optimize the queries, but they only got slower.
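Before concluding indexes can't help, it's worth checking that the index actually matches the query's filter and covers the selected columns, and that the plan uses it. A minimal T-SQL sketch, with a made-up `orders` table and column names (not from this thread):

```sql
-- Hypothetical table/columns, for illustration only.
-- A nonclustered index keyed on the filtered columns, with INCLUDE columns
-- so the query can be answered from the index alone (no key lookups).
CREATE NONCLUSTERED INDEX IX_orders_customer_date
    ON dbo.orders (customer_id, order_date)
    INCLUDE (total_amount);

-- Run with the actual execution plan enabled and look for an Index Seek;
-- a Clustered Index Scan means the index is not being used for this query.
SELECT order_date, total_amount
FROM dbo.orders
WHERE customer_id = 42
  AND order_date >= '2015-01-01';
```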
Have you tried sharding?
Shard the database into multiple sections and index each shard. Then a query only has to scan the relevant shard instead of iterating through all the records, so it takes less time.
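Within a single SQL Server instance, the closest built-in analogue to this is table partitioning (true sharding splits the data across separate databases or servers, which you route to in the application). A rough sketch with hypothetical names; note that in SQL Server 2008 table partitioning is an Enterprise edition feature:

```sql
-- Hypothetical range partitioning on the surrogate key, illustration only.
CREATE PARTITION FUNCTION pf_orders_by_id (BIGINT)
    AS RANGE RIGHT FOR VALUES (25000000, 50000000, 75000000);

CREATE PARTITION SCHEME ps_orders_by_id
    AS PARTITION pf_orders_by_id ALL TO ([PRIMARY]);

CREATE TABLE dbo.orders_partitioned (
    id          BIGINT NOT NULL,
    customer_id INT    NOT NULL,
    order_date  DATE   NOT NULL,
    CONSTRAINT PK_orders_partitioned PRIMARY KEY CLUSTERED (id)
) ON ps_orders_by_id (id);

-- A query that filters on the partitioning column gets partition elimination,
-- so it scans one partition instead of all 100M rows.
SELECT COUNT(*) FROM dbo.orders_partitioned WHERE id BETWEEN 100 AND 200;
```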
I had that situation on a database with about 300 million rows. Any request that did recursive joins got extremely slow, so I had to build a tool to remove the need for those requests. Latency was generally rather high; throughput was not affected in any significant way. The database was MS SQL Server 2008.
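For reference, the kind of recursive join that tends to blow up at that scale is a recursive CTE walking a parent/child hierarchy, since every recursion level is another join against the big table. A hypothetical sketch (the `parts` table and its columns are invented, not the actual schema from this story):

```sql
-- Hypothetical self-referencing table: each row points at its parent row.
WITH hierarchy AS (
    -- Anchor: the roots of the hierarchy.
    SELECT id, parent_id, 0 AS depth
    FROM dbo.parts
    WHERE parent_id IS NULL

    UNION ALL

    -- Recursive step: joins the large table again for every level of depth.
    SELECT p.id, p.parent_id, h.depth + 1
    FROM dbo.parts AS p
    JOIN hierarchy AS h ON p.parent_id = h.id
)
SELECT id, parent_id, depth
FROM hierarchy
OPTION (MAXRECURSION 100);
```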