
The Bitcoin Unlimited team has tested 1 GB blocks and presented their research and findings at conferences already.

That being said, the least sustainable solution is to keep blocks at 1 MB for BTC. The Core group has ousted and alienated everyone who made Bitcoin work originally, and the fees have priced out the people who built the ecosystem. It is crystal clear to anyone paying even slight attention that they have done nothing but lie and censor.

If you are getting all your information from /r/bitcoin, you should know that it is censored into oblivion and has been nothing but propaganda for years now. Bigger blocks obviously work, while the Lightning Network not only raises enormous questions about how it can work, it has also been promised as just around the corner for years now.



The Bitcoin Unlimited team tested on a tiny network: ~6 miners with a highly simplified set of transactions, which made some of the statistics they collected so meaningless that they explicitly left them out of the talk. Under these conditions, they found that 1 GB was the point where the network broke under its own weight [0]. If you were to run the full-sized Bitcoin network, you would likely see problems well before 1 GB. As far as I recall, they did not even address the centralization argument (e.g. the network may "work", but give a disproportionate advantage to large miners).

[0] This actually happened a couple of times earlier, but those were fixable with straightforward software optimizations.


Apparently we're already seeing block size increases give a disproportionate advantage to large miners on Ethereum, which allows miner voting on block size (similar to Bitcoin Unlimited's proposal) and currently processes the most transactions of any major coin, at much smaller transaction rates than that: https://www.reddit.com/r/ethereum/comments/7pfshh/why_is_8m_... (Ethereum probably isn't as highly optimized as Bitcoin, though.)


This is a generalization that is meaningless without the context of what the bottleneck actually is. Bandwidth works, processing blocks 1,000 times bigger works; what exactly do you think would be the problem? 1 GB every 10 minutes is roughly 1.7 MB/s. The DOCSIS 3.0 standard supports downstream rates above 100 MB/s, and anyone can rent a VPS with a gigabit connection for $15-$20 USD per month.
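For a rough sense of the raw numbers being argued here, a back-of-the-envelope sketch in Python (the block size and interval are the figures from this thread; the 8-peer fan-out is an assumption for illustration, not a measured figure, and relay overhead and mempool sync are ignored):

    # Sustained bandwidth implied by 1 GB blocks every 10 minutes.
    BLOCK_SIZE_MB = 1000       # 1 GB block, taking 1 GB = 1000 MB
    BLOCK_INTERVAL_S = 600     # one block every ten minutes

    download = BLOCK_SIZE_MB / BLOCK_INTERVAL_S
    print(f"sustained download: {download:.2f} MB/s")              # ~1.67 MB/s

    # A node relaying each block to, say, 8 peers (assumed) needs
    # roughly 8x that in sustained upload capacity.
    PEERS = 8
    print(f"upload to {PEERS} peers: {PEERS * download:.1f} MB/s")  # ~13 MB/s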


https://youtu.be/LDF8bOEqXt4?t=4722

The bottleneck is propagation time. Also: "The propagation time did not depend strongly on the network bandwidth for the given nodes."

Keep in mind that it is not sufficient for each node to have the bare minimum amount of bandwidth to download one block every ten minutes. When a node mines a block, we need that block to propagate across the entire network (~11,000 nodes [0]). Further, we want this propagation time to be relatively trivial; otherwise the number of orphan blocks would increase, giving an advantage to large mining clusters and reducing the overall security of the network.

[0] https://bitnodes.earn.com/dashboard/
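To see why propagation delay (rather than raw bandwidth) is the thing people worry about, here is a minimal sketch assuming Poisson block arrivals at one block per 600 seconds and a uniform propagation delay; it is the standard back-of-the-envelope orphan-rate model, not something from the BU talk:

    import math

    BLOCK_INTERVAL_S = 600.0   # average time between blocks

    def stale_rate(propagation_delay_s: float) -> float:
        # Probability that a competing block is found somewhere else
        # while your block is still propagating for this many seconds.
        return 1.0 - math.exp(-propagation_delay_s / BLOCK_INTERVAL_S)

    for delay in (2, 10, 60, 120):   # hypothetical propagation delays, seconds
        print(f"{delay:>4} s delay -> ~{stale_rate(delay):.1%} orphan rate")

A miner producing a large share of blocks never races against its own blocks, so the rest of the network eats this orphan rate while the big miner largely doesn't; that is the centralization pressure mentioned upthread.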



