
Has Bitcoin Cash ever considered including the UTXO set merkle root in each block? It would allow for immediate sync.



Yeah, something like this is in the works: https://bitcoincashresearch.org/t/chip-2021-07-utxo-fastsync/502/72 It's called UTXO commitments.

ErdoganTalk jackson

Yes, but there is no detailed plan for its introduction. There have been some experiments within specific node implementations, which must be regarded as experiments in how it can be done. Any node operator can use it for his own purposes, if it is complete. But the real value comes when the merkle root (or any identifier of the complete set) is included in the block header and validated by each miner. Satoshi called it utxo-set commitments. The whitepaper: "Once the latest transaction in a coin is buried under enough blocks, the spent transactions before it can be discarded to save disk space. To facilitate this without breaking the block's hash, transactions are hashed in a Merkle Tree [7][2][5], with only the root included in the block's hash." So not new, and not magic, but an agreement on how it should be done in an effective way. It is also not extremely useful with the amount of data we have on the blockchain now. But as always, it is good to have a plan for the future.
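[Editor's note: a minimal sketch, in Python, of the Merkle construction that whitepaper quote describes: transaction ids are hashed pairwise up to a single root, and only the root needs to survive in the header. Double-SHA256 matches Bitcoin's tree; serialization and byte-order details are simplified, so this is illustrative, not consensus code.]

```python
# Illustrative only, not consensus code: build a Bitcoin-style Merkle
# root (double SHA-256; odd levels duplicate their last hash).
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list[bytes]) -> bytes:
    assert txids, "a block always has at least a coinbase transaction"
    level = list(txids)  # copy so the caller's list is not mutated
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate last hash on odd levels
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Spent transactions below the root can later be discarded; the root
# kept in the header still commits to all of them.
txids = [sha256d(bytes([i])) for i in range(5)]
print(merkle_root(txids).hex())
```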

Michael-Fletcher (question author)
ErdoganTalk jackson
Yes, but there is no detailed plan for its introd...

Satoshi actually talked about committing the UTXO set hash within the header? What are some of the issues stopping this right now?! Building the UTXO set based on some order and including it in the header.
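[Editor's note: a minimal sketch of what "building the UTXO set based on some order" could look like: serialize each unspent output, sort by outpoint so every node derives the same sequence, and hash the result into one identifier. The field layout and the flat hash are assumptions for illustration; real proposals tend to prefer incrementally updatable multiset hashes or a Merkle tree so a block's worth of changes doesn't force rehashing the whole set.]

```python
# Hypothetical sketch: commit to the UTXO set by hashing it in a
# canonical (sorted) order. Field layout and the flat SHA-256 are
# illustrative assumptions, not any actual proposal's format.
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Utxo:
    txid: bytes    # 32-byte transaction id
    vout: int      # output index
    amount: int    # satoshis
    script: bytes  # locking script

def serialize(u: Utxo) -> bytes:
    return (u.txid + u.vout.to_bytes(4, "little")
            + u.amount.to_bytes(8, "little") + u.script)

def utxo_commitment(utxos: set[Utxo]) -> bytes:
    h = hashlib.sha256()
    # Sort by outpoint so the commitment is independent of insertion order.
    for u in sorted(utxos, key=lambda u: (u.txid, u.vout)):
        h.update(serialize(u))
    return h.digest()

# Same set, different insertion order, same commitment.
a = Utxo(b"\x01" * 32, 0, 5000, b"\x51")
b = Utxo(b"\x02" * 32, 1, 7000, b"\x51")
assert utxo_commitment({a, b}) == utxo_commitment({b, a})
```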

Michael Fletcher
Satoshi actually talked about committing the UTXO ...

Mostly a lack of urgency. It's a lot of work to iron out all the worst-case performance kinks, work out exactly how to activate it, and settle other possible features that could be tacked on. An extremely minimalistic (and, in my opinion, quite elegant) version exists here: https://github.com/softwareverde/bitcoin-verde/blob/development/specification/utxo-fastsync-chip-20210625.md

That's the chapter about reclaiming disk space, which is a concept still lacking in any full node on BCH; I hope someone will code it. But, that point aside, that chapter does not actually talk about commitments in any form.

Michael-Fletcher (question author)

That's referring to the tx set commitment, not the UTXO set commitment, which is what I'm referring to.

Michael Fletcher
That's referring to the tx set commitment, not the...

The short answer is: some people have a good idea of how to do it. A proof-of-concept has been available for a while (you were linked to it above). BUT, we can't actually benefit from such a scheme until better pruning comes along, as a restored node without it won't be able to do anything useful for anyone but its owner. For instance, you would not be able to connect a wallet to it without trusting the full node. So: nice idea, lots of work to be done to actually use it.

Tom
That's the chapter about reclaiming disk space, wh...

It doesn't use the word "commitments". Maybe he used that in later e-mails, not sure where it came from. To be able to shrink the chain from the beginning, you ask yourself "do you think the chain was correct a year ago?" (correct in the sense that the correct amounts exist on the correct outputs). If that is the general agreement, you can cut off everything before that point. You lose the historic transactions of course, which is a different problem. To have the same consensus we have elsewhere in the chain, the utxo-set at that point must be built and published, and its identifier (the merkle root of the utxo-set) must be a part of the block hash. If I am totally confused here, please tell me.

If we have this implemented, it would address a problem commonly known with computers: a forever-expanding dataset cannot be stored forever, as it will eventually take up all available space. Using commitments, in the endgame, the data set size will be more or less static. It does not solve everything; for example, the utxo-set itself will probably grow forever, due to lost coins. I am sure other things will also tend to grow. Depending on how fast it grows, it is either a problem or not a problem. For example, we would like, in society, to keep a copy of each book ever produced. In the far future, those books will cover the whole globe, leaving no space for humans. So it is not smart to take just any ever-increasing data-set and make a problem out of it.
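[Editor's note: a sketch of the sync flow described above, under the assumption of a hypothetical utxo_commitment header field at the agreed cut-off point; the field name and structure are invented. The snapshot can come from any untrusted peer, because the commitment in the miner-validated header is what authenticates it.]

```python
# Hypothetical sketch: bootstrap from a UTXO snapshot authenticated by
# a commitment in the header chain. The `utxo_commitment` field is
# invented for illustration.
import hashlib
from dataclasses import dataclass

@dataclass
class Header:
    utxo_commitment: bytes  # merkle root or other id of the full set

def compute_commitment(utxos: list[bytes]) -> bytes:
    h = hashlib.sha256()
    for u in sorted(utxos):  # canonical order, as sketched earlier
        h.update(u)
    return h.digest()

def bootstrap_from_snapshot(header: Header, snapshot: list[bytes]) -> list[bytes]:
    # The snapshot may come from an untrusted peer; the header, buried
    # under proof-of-work, is what vouches for it.
    if compute_commitment(snapshot) != header.utxo_commitment:
        raise ValueError("snapshot does not match the committed UTXO set")
    # Everything before the cut-off can now be discarded; full validation
    # resumes from this point. Historic transactions are lost, as noted.
    return snapshot

snap = [b"utxo-1", b"utxo-2"]
hdr = Header(utxo_commitment=compute_commitment(snap))
assert bootstrap_from_snapshot(hdr, snap) == snap
```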

ErdoganTalk jackson
It doesn't use the word "commitments". Maybe he us...

A couple of clarifications.

- A scheme like a commitment is essentially the same as what the current Bitcoin Core node thinks of as checkpoints: a hash hard-coded by the devs, known to be good. This checkpoint is used to avoid full tx validation on Core. A commitment is very similar, and it would indeed allow skipping not just validation but also downloading. The clarification is that we already have a way to look at it and do this as permissionless innovation, when someone wants to code it, naturally.

- The 'forever expanding dataset' problem is, practically speaking, not nearly as certain to be an issue as people think, because the data is digital. Our storage space availability has expanded much faster than people seem to think. No, you're not going to run an actual production full node on a rPi, but even my laptop now has 1TB of the fastest drive possible. The clarification is that worrying about this while we see 30TB drives in the store is premature optimization. (But, for sure, if someone wants to build it, let them do so!)

Tom
A couple of clarifications. - A scheme like a com...

I think I covered most of that, except that checkpoints coded into the client have a different security model. The developers insert the checkpoints, while in the commitments idea the miners are responsible, just like they are for the validity of the transactions and blocks. If a miner doesn't do it right, his new block will be invalid. We agree on the usefulness of it. What I am thinking of is that the btc people believe they can handle their small blocks, and they argue we can not handle our large blocks. Normally I would ignore that, because this argument, and most other arguments tbh, are not honestly reasoned; they are reduced to memes. Normally I ignore that, but there is a "customer relations" effect.
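[Editor's note: a sketch of that security-model difference, with made-up values: a checkpoint is a hash the developers hard-code into a release, while a commitment would be recomputed by every miner and enforced by consensus, so a block carrying a wrong value is simply invalid.]

```python
# Made-up values throughout; the contrast is the point, not the data.
from dataclasses import dataclass

# Checkpoint: a height -> hash map hard-coded by developers at release
# time. Trust rests with whoever ships the binary.
DEV_CHECKPOINTS = {210000: bytes.fromhex("ab" * 32)}

def passes_checkpoint(height: int, block_hash: bytes) -> bool:
    expected = DEV_CHECKPOINTS.get(height)
    return expected is None or expected == block_hash

@dataclass
class Block:
    utxo_commitment: bytes  # hypothetical field, as sketched above

# Commitment: every miner recomputes the value under consensus rules.
# A block that carries the wrong value is invalid, no developer involved.
def commitment_valid(block: Block, recomputed: bytes) -> bool:
    return block.utxo_commitment == recomputed
```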

[Sticker]

Any node, fast sync or not, will always need to have the full set of headers. That doesn't change.
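[Editor's note: a minimal sketch of that point: each header commits to its parent, so validating the chain of small headers (prev-hash linkage plus proof-of-work) is what anchors any snapshot, fast-synced or not. The difficulty check is simplified to a single fixed target; real rules use the compact bits encoding and retargeting.]

```python
# Simplified header-chain validation: linkage plus proof-of-work
# against one fixed target (real rules use compact bits + retargeting).
import hashlib
from dataclasses import dataclass

@dataclass
class Header:
    prev_hash: bytes
    merkle_root: bytes
    nonce: int

    def hash(self) -> bytes:
        data = (self.prev_hash + self.merkle_root
                + self.nonce.to_bytes(8, "little"))
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def validate_chain(headers: list[Header], target: int) -> bool:
    # Each header must commit to its parent's hash...
    for prev, cur in zip(headers, headers[1:]):
        if cur.prev_hash != prev.hash():
            return False
    # ...and every header must meet the proof-of-work target.
    return all(int.from_bytes(h.hash(), "big") <= target for h in headers)

g = Header(b"\x00" * 32, b"\x00" * 32, 0)
nxt = Header(g.hash(), b"\x00" * 32, 1)
assert validate_chain([g, nxt], target=2**256 - 1)  # trivial demo target
```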

[Sticker]

