Settling the Block Size Debate

This is a guest post by Eric Lombrozo, the Co-CEO and CTO of Ciphrex Corp., a software company pioneering decentralized consensus network technology. Lombrozo is also a founding member of the CryptoCurrency Security Standards Steering Committee and has been a longtime contributor to the open source Bitcoin Core development effort.

Introduction

In the last few months, a contentious debate has arisen surrounding the issue of a hardcoded constant in the consensus rules of the Bitcoin network. While on the surface it appears to be a simple enough change, this single issue has opened up a veritable Pandora’s box.

What is the block size limit and why is it there?

When the Bitcoin network was first created, several assumptions had to be made about the computational resources a typical Bitcoin node would have, among them network bandwidth, storage space and processor speed. If blocks were allowed to grow too big, they would swamp these resources, making it easy to attack the network or discouraging people from running a node. On the other hand, if blocks were too small, network resources would go underutilized, needlessly capping the number of transactions the network could process. Although available computational resources vary widely between devices and computer technology continues to evolve, for the sake of simplicity a single size was chosen: one megabyte.
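To make the rule concrete, here is a simplified sketch of how a node might enforce such a cap when validating a block. It is loosely modeled on the spirit of the consensus check in Bitcoin Core, but the names used here (MAX_BLOCK_SIZE, Block, CheckBlockSize) are illustrative rather than the actual source code.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative constant: the one-megabyte cap discussed above.
static const size_t MAX_BLOCK_SIZE = 1000000;

// Illustrative stand-in for a fully serialized block.
struct Block {
    std::vector<uint8_t> serialized; // raw serialized block bytes
};

// A node enforcing this consensus rule would reject any block
// for which this check returns false.
bool CheckBlockSize(const Block& block) {
    return block.serialized.size() <= MAX_BLOCK_SIZE;
}
```

Because every validating node applies the same check, a block exceeding the limit is rejected by the whole network, which is what makes the constant a consensus rule rather than a local policy setting.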

It all comes down to economics

The block size limit is, at its core, an economic decision. It balances transaction load against the computational resources available to handle it. Block space is subject to the same economic principles of supply and demand as any other scarce resource.
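One way to see the supply-and-demand dynamic is to sketch how a profit-seeking miner might fill limited block space when there are more pending transactions than room for them: the highest fee-per-byte transactions get in first, and the rest wait. This is a minimal, hypothetical illustration, not code from any real miner or from Bitcoin Core; all names here (Tx, SelectForBlock, and so on) are assumptions made for the example.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// A pending transaction: how much space it wants and what it pays.
struct Tx {
    size_t size_bytes;
    uint64_t fee_satoshis;
    double FeeRate() const { return double(fee_satoshis) / size_bytes; }
};

// Greedily pack transactions into scarce block space, best fee rate first.
std::vector<Tx> SelectForBlock(std::vector<Tx> mempool, size_t max_block_bytes) {
    std::sort(mempool.begin(), mempool.end(),
              [](const Tx& a, const Tx& b) { return a.FeeRate() > b.FeeRate(); });
    std::vector<Tx> selected;
    size_t used = 0;
    for (const Tx& tx : mempool) {
        if (used + tx.size_bytes <= max_block_bytes) {
            selected.push_back(tx);
            used += tx.size_bytes;
        }
    }
    return selected;
}
```

In this toy model, the block size cap is the supply, the mempool is the demand, and fees are the price signal that rations the scarce space between them.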

In the early days of the Bitcoin network, it was expected that transaction load would remain well below the prescribed limit for some time. However, it was anticipated that blocks would eventually fill up as more and more nodes joined the network and transaction volume grew.

Originally appeared at: https://bitcoinmagazine.com/21377/settling-block-size-debate/