
Hi guys, does anyone have experience querying a table that has 100 million rows?
I tried using indexes to optimize the query, but it only became slower.
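One common reason an added index makes a query slower is that it is not selective enough: the engine performs an index seek plus a per-row key lookup back into the table, which on millions of matching rows costs more than a single scan. A minimal T-SQL sketch of how to diagnose and address that, assuming a hypothetical orders table and filter column status (all names here are illustrative, not from the thread):

```sql
-- Report IO and timing statistics for the queries that follow.
SET STATISTICS IO, TIME ON;

-- Run the slow query with the actual execution plan enabled
-- (Ctrl+M in SSMS); an Index Seek feeding a Key Lookup over many
-- rows is the usual sign the index is hurting, not helping.
SELECT order_id, customer_id, total
FROM orders
WHERE status = 'shipped';

-- A covering index carries the selected columns in its leaf pages,
-- eliminating the per-row lookup into the base table.
CREATE NONCLUSTERED INDEX ix_orders_status_covering
ON orders (status)
INCLUDE (customer_id, total);

-- Refresh statistics so the optimizer has current row-count estimates.
UPDATE STATISTICS orders;
```

If the plan still chooses a full scan afterwards, the filter may simply match too large a fraction of the 100 million rows for any index to pay off.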

3 answers

9 views

Have you tried sharding?

Shard the database into multiple sections and index each shard. A query then only has to search the relevant shard rather than iterating through all the records, so it executes in less time.
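On a single SQL Server instance, table partitioning gives a similar effect to what is described above: the table is split into sections, each with its own smaller index trees, and queries that filter on the partitioning column touch only the relevant section. True sharding goes further and places the sections on separate servers. A rough T-SQL sketch, with every object name hypothetical:

```sql
-- Split rows by year so that queries filtering on order_date
-- read only the matching partition (partition elimination).
CREATE PARTITION FUNCTION pf_by_year (datetime2)
AS RANGE RIGHT FOR VALUES ('2021-01-01', '2022-01-01', '2023-01-01');

-- Map each partition to a filegroup (all on PRIMARY here for brevity).
CREATE PARTITION SCHEME ps_by_year
AS PARTITION pf_by_year ALL TO ([PRIMARY]);

-- Create the large table on the scheme; indexes built on it are
-- partition-aligned, so each section gets its own smaller B-tree.
CREATE TABLE orders_part (
    order_id   bigint    NOT NULL,
    order_date datetime2 NOT NULL,
    total      money     NOT NULL,
    CONSTRAINT pk_orders_part PRIMARY KEY (order_date, order_id)
) ON ps_by_year (order_date);
```

A query with WHERE order_date >= '2023-01-01' then scans one partition instead of the whole 100M-row table.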

I had that on a database with about 300 million rows. Any request that did recursive joins got extremely slow, so I had to build a tool that removed the need for those requests. Latency was generally rather high; throughput was not affected in any significant way. The database I was using was MS SQL Server 2008.
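"Recursive joining" here typically means walking a parent/child hierarchy at query time, e.g. with a recursive CTE, which SQL Server 2008 supports but which gets expensive at this scale. One standard way to remove such requests is to precompute the hierarchy into a closure table so lookups become flat, indexed joins. A hedged sketch, assuming a hypothetical categories(id, parent_id) table:

```sql
-- One row per (ancestor, descendant) pair, so hierarchy lookups
-- become ordinary joins with no recursion at request time.
CREATE TABLE category_closure (
    ancestor_id   int NOT NULL,
    descendant_id int NOT NULL,
    depth         int NOT NULL,
    PRIMARY KEY (ancestor_id, descendant_id)
);

-- Populate it once (or in maintenance batches) with a recursive CTE.
WITH tree AS (
    SELECT id AS ancestor_id, id AS descendant_id, 0 AS depth
    FROM categories
    UNION ALL
    SELECT t.ancestor_id, c.id, t.depth + 1
    FROM tree t
    JOIN categories c ON c.parent_id = t.descendant_id
)
INSERT INTO category_closure (ancestor_id, descendant_id, depth)
SELECT ancestor_id, descendant_id, depth
FROM tree
OPTION (MAXRECURSION 0);  -- allow hierarchies deeper than 100 levels

-- "All descendants of a given node" is now a plain indexed join.
DECLARE @root int = 1;
SELECT c.*
FROM category_closure cc
JOIN categories c ON c.id = cc.descendant_id
WHERE cc.ancestor_id = @root;
```

The trade-off is write-time maintenance of the closure table in exchange for read queries that no longer recurse.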
