In this article I will focus on general directions in scaling that seem most viable at present. The ideas presented here stem from over two years of discussion among people actively working on the Bitcoin protocol.
Scaling, on-chain and off-chain…
Validating and storing blocks is computationally expensive. It consumes a lot of CPU, memory, bandwidth, and storage space. Validation is a fundamental operation nodes on the network must perform. The proper functioning of validating nodes is essential to the security of the entire network.
This means there are resource constraints that must be addressed for anything we want to do on-chain. A number of optimizations have already been implemented in Bitcoin Core, and others are planned, to reduce these resource costs and make it safer to increase the block size in the future.
For off-chain scaling, the constraints are far looser since the transactions don’t require the entire network to process them and store them for all eternity. This means off-chain protocols can provide significantly larger gains in transaction throughput with far less risk to the security of the base layer.
There’s a widespread misconception that Bitcoin Core developers are fundamentally opposed to block size increases or hard forks – this is not the case. Rather, most Bitcoin protocol developers (and not just those working on Bitcoin Core) seem to think that before increasing the block size we must make sure the resource costs can be properly managed, and that doing a hard fork requires a much more civil, less contentious environment.
Compatible soft forks are specially designed upgrades that, when properly deployed and activated, avoid many of the issues with hard forks. In particular, they allow people to continue using old software, permit transactions between old and new software, and do not require everyone on the network to upgrade at once. All else being equal, the risk of a chain split with a properly deployed and activated compatible soft fork is substantially lower than that of a hard fork.
Within the Bitcoin protocol development community, compatible soft forks tend to be preferred over hard forks as a means of upgrading the protocol’s consensus rules. Several compatible soft forks have been deployed and activated successfully, but no planned hard fork has ever been deployed on the Bitcoin main network to date. A considerable body of engineering practice has been developed to help ensure the safe deployment of compatible soft forks with minimal network disruption.
Furthermore, to achieve widespread agreement among the technical community, all proposals must pass a rigorous peer review process. Otherwise we risk the security of the entire network (worth many billions of USD) and will likely end up with multiple incompatible chains and/or an otherwise broken network. Hence, there’s a strong focus on fixing the issues that make it safer and easier to scale in different ways, which are prerequisites for future proposals.
The Bitcoin Core project has a long history of performance improvements that make on-chain scaling more feasible, including signature caching, ultraprune, parallel script verification, headers-first synchronization, libsecp256k1 (a crypto library that accelerates signature validation 5-7x), mempool limiting, compact blocks, assumed-valid blocks, a cuckoo cache that further speeds up signature validation, and optimizations to the networking code. The latest release of Bitcoin Core, 0.14.0, synchronizes many times faster than the Bitcoin Core of just a couple of years ago, and a few more optimizations are still thought possible.
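To illustrate the idea behind one of these optimizations, signature caching, here is a minimal Python sketch. The `expensive_verify` function and the cache structure are hypothetical stand-ins: Bitcoin Core’s actual cache keys on the real signature, public key, and sighash, and the real verification is ECDSA math in libsecp256k1. The point is only that a signature checked once (for example, at mempool acceptance) need not be re-verified when the same transaction shows up in a block.

```python
import hashlib

# Hypothetical, simplified stand-in for an expensive signature check.
# Real code would perform ECDSA verification via libsecp256k1; here we
# just do a hash comparison so the sketch is self-contained.
def expensive_verify(sig: bytes, pubkey: bytes, msg: bytes) -> bool:
    return hashlib.sha256(sig + pubkey).digest()[:4] == msg[:4]

class SigCache:
    """Memoize verification results so each (sig, pubkey, msg) triple
    is only checked once."""
    def __init__(self):
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def verify(self, sig: bytes, pubkey: bytes, msg: bytes) -> bool:
        key = (sig, pubkey, msg)
        if key in self._cache:
            self.hits += 1            # already verified: skip the work
            return self._cache[key]
        self.misses += 1
        result = expensive_verify(sig, pubkey, msg)
        self._cache[key] = result
        return result

cache = SigCache()
sig, pub = b"\x01" * 64, b"\x02" * 33
msg = hashlib.sha256(sig + pub).digest()

# First check does the expensive work; the second (e.g. when the
# transaction is seen again inside a block) is a cache hit.
assert cache.verify(sig, pub, msg) is True
assert cache.verify(sig, pub, msg) is True
print(cache.hits, cache.misses)
```

In the real implementation the cache is bounded in size (hence the cuckoo cache mentioned above, which provides an efficient fixed-memory structure for exactly this purpose), but the memoization principle is the same.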
Segregated Witness (Segwit), a compatible soft fork that has already been active on three dedicated test networks as well as the Bitcoin testnet and is awaiting activation on the Bitcoin mainnet, is considered to be a critical component in the next stage of both on-chain and off-chain scaling solutions. For on-chain scaling, it reduces UTXO bloat and provides more node synchronization options as well as solving the quadratic sighash problem that currently makes some large transactions take a long time to validate. For off-chain scaling, it fixes transaction malleability, allowing for much cleaner implementations of off-chain protocols such as the Lightning Network.
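The quadratic sighash problem can be shown with a toy cost model (not real transaction serialization; the byte constants are illustrative). Under legacy signature hashing, signing each of a transaction’s n inputs requires hashing a serialization roughly proportional to the size of the whole transaction, so total hashing work grows quadratically with the number of inputs. Segwit’s signature-hashing scheme (BIP143) reuses precomputed intermediate hashes, making per-input work roughly constant:

```python
BYTES_PER_INPUT = 150  # illustrative size only, not a consensus value

def legacy_sighash_bytes(n_inputs: int) -> int:
    # Each input's sighash covers (roughly) the whole transaction,
    # so total bytes hashed ~ n_inputs * tx_size ~ O(n^2).
    tx_size = n_inputs * BYTES_PER_INPUT
    return n_inputs * tx_size

def segwit_sighash_bytes(n_inputs: int) -> int:
    # BIP143-style hashing amortizes shared data, so total bytes
    # hashed grows only linearly with the number of inputs.
    return n_inputs * BYTES_PER_INPUT

for n in (10, 100, 1000):
    print(n, legacy_sighash_bytes(n), segwit_sighash_bytes(n))
```

Going from 100 to 1000 inputs multiplies the legacy hashing cost by 100 but the segwit cost by only 10, which is why some large legacy transactions can take minutes to validate while the same transactions under segwit remain cheap.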
Segwit also allows for the addition of other features in the future, such as Schnorr signatures and MAST (Merkleized Abstract Syntax Trees), which could substantially reduce the size of signatures and scripts in a block.
While work continues on figuring out how to further increase block size and use existing block space more efficiently, off-chain protocol ideas can be explored concurrently. Having teams working in these different areas, both on-chain and off-chain in parallel, will likely lead to a rapid pace of innovation. I believe realistically we could deploy a hard fork that further increases block size within the next year or two, assuming the contention in the community has settled down.
On-chain, we could realistically see a small but significant increase in transaction throughput in the near to mid term; however, there are diminishing returns. It becomes progressively harder to squeeze out more without further improvements to the underlying technology. Specifically, reductions in validation costs as well as a more sophisticated network architecture will be required.
Dynamic or adaptive block size proposals have been made, but how best to design one remains an open problem, as all the proposals to date seem to suffer from security issues. However, we should not hold back progress elsewhere just because a solution to this specific problem isn’t yet known.
Off-chain, we could realistically see much larger gains within the next few years for certain use cases: 1000x or more in transaction throughput, without loss of base-layer security.
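Where such gains come from can be sketched with a toy payment channel, the core idea behind protocols like the Lightning Network: two parties exchange many signed balance updates privately, and only the opening and closing states touch the chain. The class and method names below are illustrative, not any real implementation, and real channels involve signed commitment transactions and penalty mechanisms omitted here.

```python
class Channel:
    """Toy two-party payment channel: many off-chain balance updates,
    only two on-chain transactions (funding and settlement)."""
    def __init__(self, alice_sats: int, bob_sats: int):
        self.balances = {"alice": alice_sats, "bob": bob_sats}
        self.updates = 0       # off-chain state transitions
        self.onchain_txs = 1   # the funding transaction

    def pay(self, frm: str, to: str, amount: int) -> None:
        assert self.balances[frm] >= amount, "insufficient channel balance"
        self.balances[frm] -= amount
        self.balances[to] += amount
        self.updates += 1      # consumes no block space

    def close(self) -> dict:
        self.onchain_txs += 1  # single settlement transaction
        return dict(self.balances)

ch = Channel(100_000, 100_000)
for _ in range(1000):
    ch.pay("alice", "bob", 10)  # 1000 payments, zero block space used
final = ch.close()
print(ch.updates, ch.onchain_txs, final)
```

A thousand payments settle into two on-chain transactions, a 500x reduction in chain footprint in this toy case; with longer-lived channels and multi-hop routing, the ratio of off-chain payments to on-chain transactions can be far higher, which is the source of the 1000x-or-more figure above.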
DISCLAIMER: The views expressed here are my own although I am trying to reflect as accurately as possible the general understanding that exists in the Bitcoin protocol development community.