to implement a dynamic block size increase, scaling in response to growing #economic activities.
#BCH #crypto #educational #meetup
https://youtu.be/-ltRrXRcrpA?feature=shared
https://www.meetup.com/in_crypto_we_trust/
This could end up being a bad idea because it could mean we become BSV in the long run. 1 GB blocks when current retail hardware only allows for 128 MB max. You can only have 1 GB blocks if the nodes are all Amazon-style hosted servers.
Have you calculated when that could happen? This topic has been talked about a lot. Yes, if there is sustained activity for several years, the blocksize would eventually grow, so if someone wants to run a spam attack for the next decade… with enough hash to be mining the vast majority of blocks, then… go for it 😎
I think in 5 years normal everyday computers will be able to run 128 MB blocks like nothing. Right now 20 TB HDDs are $300. A 200 Mbps internet connection costs $100 or less. A 100 TB NAS costs $3,000 today, and a 1 Gbps internet connection costs $200 if you can find it. So in 5 years that 100 TB NAS will cost less than $1,000 and 1 Gbps internet connections will be the norm in every major city.
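For a rough sense of scale, here is a back-of-envelope sketch in Python. The assumptions are illustrative only: ~144 blocks per day at the 10-minute target and every block completely full at 128 MB; real relay uses compact blocks and nodes also upload to peers, so treat these as ballpark lower bounds.

```python
# Back-of-envelope, assuming ~144 blocks/day at the 10-minute target and
# every block completely full at 128 MB (illustrative assumptions).

BLOCK_SIZE_MB = 128
BLOCKS_PER_DAY = 24 * 60 / 10          # ≈ 144 blocks per day

daily_gb = BLOCK_SIZE_MB * BLOCKS_PER_DAY / 1000
yearly_tb = daily_gb * 365 / 1000
print(f"Storage growth: ~{daily_gb:.1f} GB/day, ~{yearly_tb:.1f} TB/year")

# Sustained download rate just to receive one full block per 10 minutes,
# ignoring compact-block relay, protocol overhead and upload to peers.
mbps_needed = BLOCK_SIZE_MB * 8 / (10 * 60)
print(f"Sustained download: ~{mbps_needed:.1f} Mbit/s")
```

Under those assumptions that is roughly 18 GB/day, about 6.7 TB/year and under 2 Mbit/s of steady-state download, so a 20 TB drive covers around three years of full 128 MB blocks and a 200 Mbps line has plenty of headroom.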
Today that’s already the case! And there is no good reason for a normal node operator to keep the entire history of the blockchain; they just need to verify the transactions that are happening in the blocks. HAMR does look set to seriously jump the HDD market forward though… 50 TB drives should be out within the next 2-3 years, with 32 TB next year 🤤
It’s great seeing the movement of tech. Quite insane what we have achieved in the last 20-40 years.
the maximum moves very slowly, so no, it does not pose such a risk
more than 50% of the hashrate needs to be mining increasingly bigger blocks if the limit is to continue increasing. https://gitlab.com/0353F40E/ebaa#algorithm-too-fast
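As a toy illustration of why a minority miner can't drag the limit up on their own, here is a simplified simulation in Python. This is not the actual EBAA control function (see the GitLab link above for the real spec); it just assumes the limit only grows on blocks above half the current limit, is capped at roughly a doubling per year of consistently big blocks, shrinks at the same rate otherwise, and never drops below a 32 MB floor.

```python
import random

# Toy simulation, NOT the actual EBAA algorithm (see the GitLab spec).
# Illustrative assumptions: the limit can at most double over one year
# (~52,560 blocks) of consistently big blocks, shrinks at the same rate
# otherwise, and never drops below a 32 MB floor.

BLOCKS_PER_YEAR = 52_560
GROWTH = 2 ** (1 / BLOCKS_PER_YEAR)   # per-block factor when a block is "big"
NEUTRAL = 0.5                         # hypothetical: "big" = above 50% of the limit
FLOOR_MB = 32.0

def simulate(attacker_share, years=5, limit=FLOOR_MB):
    """attacker_share = fraction of blocks an attacker mines completely full;
    everyone else mines small blocks (5% of the limit)."""
    for _ in range(int(years * BLOCKS_PER_YEAR)):
        size = limit if random.random() < attacker_share else 0.05 * limit
        factor = GROWTH if size > NEUTRAL * limit else 1 / GROWTH
        limit = max(limit * factor, FLOOR_MB)
    return limit

for share in (0.3, 0.5, 0.9):
    print(f"attacker with {share:.0%} of hashrate: "
          f"limit after 5 years ≈ {simulate(share):.1f} MB")
```

With those assumptions, an attacker at 30% of hashrate never moves the limit off the floor, 50% holds it roughly flat, and only a large sustained majority pushes it up over a period of years.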
Last spring I built a mid-range home computer with a 2 TB SSD for $800. It would process 1 GB blocks in 2 minutes, so it could keep up, but it wouldn’t be fast enough to run a mining pool. My $100/month home internet could easily handle a mining pool, and I suspect a fast gaming computer would be OK as well. With a little work, BCHN node software could multithread all processing, which could then be split across a cluster of home computers. One could, in principle, run 1,000,000 transactions a second on a cluster of hundreds of Raspberry Pis connected by a fast ethernet switch, but a server computer would be more cost effective.

But node development has been focused on features, especially tokens and scripting. My personal belief is that an equal level of development should be spent on performance, which requires significant refactoring of node software so that it can be highly multithreaded. Currently the only significant multithreading seems to be signature checking, but this does not solve the IO problem to the UTXO database. Current NVMe SSD hardware is more than fast enough to support a million TPS with multiple threads and a high queue depth, as can be seen by running benchmarks such as CrystalDiskMark or fio, but a custom database is probably needed.
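To put rough numbers on the UTXO IO point, here is a small estimate in Python. The per-transaction shape and the per-drive IOPS figure are assumptions for illustration, not measurements; run fio or CrystalDiskMark on your own hardware for real numbers.

```python
# Rough IOPS estimate for hitting the UTXO database at 1,000,000 TPS.
# Assumptions for illustration: ~2 inputs and ~2 outputs per transaction,
# one 4K random read per input lookup, one write per new output, and an
# NVMe drive doing ~1,000,000 4K random-read IOPS at high queue depth
# (the sort of figure fio or CrystalDiskMark reports for current drives).

TPS = 1_000_000
INPUTS_PER_TX = 2                     # assumed average
OUTPUTS_PER_TX = 2                    # assumed average
DRIVE_RANDOM_READ_IOPS = 1_000_000    # assumed; measure your own drive

reads_per_sec = TPS * INPUTS_PER_TX    # look up every spent UTXO
writes_per_sec = TPS * OUTPUTS_PER_TX  # insert every new UTXO
print(f"UTXO lookups: {reads_per_sec:,}/s, inserts: {writes_per_sec:,}/s")
print(f"Drives needed for lookups alone: "
      f"~{reads_per_sec / DRIVE_RANDOM_READ_IOPS:.1f}")
```

Under those assumptions you need on the order of a couple million random reads per second, which is only reachable the same way the benchmarks reach it: many outstanding requests at once, i.e. multiple threads and a high queue depth, which is exactly the refactoring argument above.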
Performance will become the priority when it's needed
Being discussed today