as FUD. Trying to explain DAG's claim of infinite scalability, but I am running into the limit of my little understanding. How can the network be infinitely scalable and faster as more users join when there is a finite amount of DAG, and therefore a finite number of nodes (at 250k DAG each) validating the network within that supply, each of which represents a finite amount of computing power on its VM?
Would this mean either an increase in supply or more lite nodes committed to the network?
The amount required to run a node could be decreased, or the supply could be increased like a stock split. So if you had 500k DAG, you might now have 1 million DAG and could run 4 nodes instead of 2, and you wouldn't be diluted in this scenario.
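A quick back-of-the-envelope sketch of that split scenario; the 250k collateral, the 2:1 split ratio, and the 500k holding are just the numbers from the message above, not official figures:

```python
# Hypothetical numbers from the example above: 250k DAG collateral per node,
# a 2:1 "stock split" of the supply, and a holder with 500k DAG.
COLLATERAL = 250_000      # assumed DAG required per node
SPLIT_RATIO = 2           # supply doubles, like a 2:1 stock split

holding_before = 500_000
nodes_before = holding_before // COLLATERAL        # 2 nodes

holding_after = holding_before * SPLIT_RATIO       # 1,000,000 DAG after the split
nodes_after = holding_after // COLLATERAL          # 4 nodes

# The holder's share of total supply is unchanged (no dilution),
# but they can now collateralise twice as many nodes.
print(nodes_before, nodes_after)   # 2 4
```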
Ah yeah, that's what I was thinking. Could a large part of the scalability be in the DAG data structure as well? Thanks for engaging with the question @SSBVegeta
Yes, and also the custom consensus mechanism that Constellation has built, Proof of Reputable Observation.
If the amount required to run a node were decreased, for example cut in half, one node would only need 125k DAG. How come the TPS/bandwidth of that node stays the same?
Can somebody explain this to me?
My understanding is that each node's TPS is just a fraction, 1/total nodes. So regardless of how many tokens a node costs, the TPS is still divided among the nodes pointed at the state channel(s) @SSBVegeta
So that means if the node requirement is reduced by 1/2, the total number of nodes doubles, which I believe means the overall TPS stays the same?
Following this reasoning, it sounds like the TPS of the DAG network is basically "set", which is incorrect, no?
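A minimal sketch of the arithmetic being debated here, assuming the "1/total nodes" model described above; the network TPS and supply figures are purely illustrative, not real numbers:

```python
# Hypothetical model from the discussion: network TPS is a fixed pool
# divided evenly among the nodes pointed at a state channel.
NETWORK_TPS = 10_000          # assumed figure, purely illustrative
SUPPLY = 3_000_000_000        # illustrative DAG supply, not the real number

def per_node_tps(collateral: int) -> float:
    """Per-node share of TPS if every coin were used as node collateral."""
    max_nodes = SUPPLY // collateral   # upper bound on node count under this model
    return NETWORK_TPS / max_nodes

print(per_node_tps(250_000))   # per-node share at 250k collateral
print(per_node_tps(125_000))   # halving collateral doubles nodes, halves each share

# Under this model the total stays "set" at NETWORK_TPS no matter the collateral,
# which is exactly the conclusion being questioned above.
```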