A brief history of Ethereum
When Ethereum came out, back in 2015, its fans were crazy for its Turing-complete scripting capabilities. It felt somewhat magical: submitting pieces of code to the almighty blockchain, letting the world’s most resilient computer interpret that code, perform financial operations, and store the results forever. All this without KYC, and at the cost of a few cents.
People were fascinated, and everyone was guessing what the first practical use case would be. Then tokens came. They were a joke. Seriously, do we need a coin for dentists? Or Bananacoin? But from an engineering perspective, everything worked fine, and after all, some use case is better than no use case.
A few years and one crypto-bubble burst later, the first big decentralized exchange and lending platforms were built. Many similar projects followed soon after, and the umbrella term DeFi was coined. And since Ethereum supports as many as (hold your breath) 10 transactions per second (tps), transaction fees climbed as high as $50 per transaction. The sentiment is that the Ethereum blockchain is clogged and all but unusable.
The mainstream solution
So here we are: Ethereum finally discovering its first real use case, but almost immediately getting close to unusable under the load.
Why is Ethereum so slow? With Ethereum, Bitcoin, and the majority of blockchains, every node receives, processes, and forever stores every transaction in the system. This huge redundancy is the cost of the enormous stability and fault tolerance that blockchains such as Ethereum guarantee.
The high-level solution probably won’t surprise you much: Let the nodes work more in parallel. This means that every transaction will be processed by just a small fraction of all nodes. These nodes will find an agreement on what the transaction results are and update the balances on affected addresses. Inspired by database design, the system is called sharding and has become one of the most common buzzwords related to blockchain scaling.
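The core idea behind sharding can be sketched in a few lines. This is a toy illustration only, not any real protocol: the shard count and the hash-based assignment are my own assumptions, chosen to show how each node would end up processing just a fraction of all transactions.

```python
import hashlib

NUM_SHARDS = 64  # hypothetical shard count, not taken from any real spec


def shard_for_address(address: str) -> int:
    """Assign an account address to a shard by hashing it.

    Deterministic, so every node agrees which shard owns which address
    without any coordination.
    """
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS


# A node assigned to one shard only processes that shard's transactions
# (tx = sender, recipient, amount) instead of the entire stream:
txs = [("0xalice", "0xbob", 10), ("0xcarol", "0xdave", 5)]
my_shard = shard_for_address("0xalice")
my_txs = [tx for tx in txs if shard_for_address(tx[0]) == my_shard]
print(len(my_txs), "of", len(txs), "transactions are mine to process")
```

The hard part, of course, is everything this sketch leaves out: cross-shard transactions, reshuffling validators, and proving to other shards that a state transition was valid.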
Let me come out of the closet now: I’m really skeptical about sharding and I’m really skeptical about horizontal scaling proposals in general. While it creates an interesting playground for scientists, it may be beyond human power to engineer such a complex system.
Let me present three arguments why I think this is the case:
First, let us compare with existing systems that try to achieve roughly the same thing: distributed databases. Reading through Amazon's DynamoDB or Google's Bigtable design papers, these systems are an order of magnitude simpler than a sharded blockchain. Yet it took a tremendous effort to make them work properly. And their creators didn't even need to design for malicious actors who will do everything possible to sabotage the protocol!
Second, the scaling issue is nothing new, and history shows it is indeed a hard problem. You may recall projects such as RaiBlocks (later Nano), EOS, or IOTA. They all claimed to have fixed scaling, and they are all pretty much dead now. Ethereum's own scaling effort, Plasma, spawned more than 20 different design proposals, yet the project now seems abandoned.
Finally, it takes years from the “spec ready” state to the “product ready” state. If the researchers cannot even write the specs, how many years are we from the launch?
Polkadot, a new kid on the sharding block, brings some really refreshing ideas to the table and I'd like to see them succeed. They, however, don't give a definitive answer on how to implement their no. 1 feature: validators checking the validity of parachain state transitions. The whitepaper mentions zero-knowledge proofs, which are a fine solution for token transfers. However, as Vitalik Buterin noted, zk-SNARKs powerful enough to drive a smart-contract blockchain haven't even been invented yet! This is certainly not a small missing detail in a protocol design.
With the Avalanche ecosystem, you can create as many separate zones as you want, but there is no solution for zones to communicate with each other, so it’s as good as “scaling” Ethereum by launching multiple separate ETH blockchains.
The Near project has a very nice whitepaper, and in my opinion, they get farthest in producing a proper high-level spec. However, as an engineer, I get the creeps reading it: every other sentence in the design section represents months of engineering work. It'll take ages before anyone implements Near properly; Near is certainly not near.
Vertical scaling for the win
Imagine this situation: you've developed software for a client, and now it cannot handle the current load of transactions. All the low-hanging performance fixes are already done. You're at the client's office and have 12 hours to solve this. What would you do? The answer is simple: spend 2 hours migrating the system to a more powerful AWS machine, and spend the remaining 10 hours in the pub tasting the local beer.
This approach is quite opportunistic and not very noble in my fellow scientists' eyes. But it has a few advantages: it works, it probably doesn't break anything, and it's compatible with your taste for a good beer.
A powerful personal notebook can easily transmit, process, and store 1,000 tps. That’s a very practical 100x improvement over the current Ethereum throughput. There is no fundamental reason why Ethereum couldn’t simply do the same. Fundamental is an important word here: it’s certainly not as easy as turning the switch to 1,000 tps!
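A quick back-of-envelope calculation supports the claim, at least for bandwidth. The average transaction size below is my own assumption (roughly the size of a simple Ethereum transfer); the point is the order of magnitude, not the exact figures.

```python
# Rough feasibility check for a single machine handling 1,000 tps.
TX_SIZE_BYTES = 250  # assumed average transaction size
TPS = 1_000

# Network: bits per second needed just to receive the transaction stream.
bandwidth_mbps = TPS * TX_SIZE_BYTES * 8 / 1e6

# Storage: bytes accumulated over a year of sustained load.
storage_gb_per_year = TPS * TX_SIZE_BYTES * 60 * 60 * 24 * 365 / 1e9

print(f"bandwidth: {bandwidth_mbps:.1f} Mbit/s")        # 2.0 Mbit/s
print(f"storage:   {storage_gb_per_year:.0f} GB/year")  # 7884 GB/year
```

Bandwidth is trivial (a couple of Mbit/s), while storage is the real constraint: roughly 8 TB per year under these assumptions, which is why "powerful notebook" implies a large disk and why pruning and state-size management matter.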
Solana is one project taking this approach. Its requirements for running a node are much more demanding than those for an Ethereum node. Apart from relying on sheer hardware power, Solana's engineers optimized like crazy on all levels of the system: they write smart contracts in low-level languages such as Rust, they replaced the mempool with a faster alternative, and they even came up with a novel consensus approach designed for speed. With all of this combined, they brag about supporting 50,000 tps. Based on my quick research, no one seems to contest this claim, so it's probably roughly correct. After all, project Serum is an order-book-based (read: extremely transaction-heavy) exchange already operating on top of Solana.
Another example: Binance launched its version of an Ethereum-compatible smart-contract chain called Binance Smart Chain. While not even properly decentralized, people flock to it. The reason? It works now, for a fraction of the cost.
Let me end with a prediction: by the end of 2021, people will be fed up with sharding solutions not progressing fast enough, and new vertical-scaling solutions will emerge. Solana and Binance Smart Chain will attract even more attention, and so will non-sharding scaling solutions on Ethereum such as Rollups. Sure, these may be a temporary solution, but sometimes nothing is more permanent than a temporary solution.
At Vacuumlabs we are conservative about the choice of technologies. Using overhyped stuff often leads to pain, misery, and weekend overtimes, and no one needs that. But to innovate, sometimes you need to mindfully bet on something new and promising. That’s what we did with Cardano years ago, and we’re doing with Solana now.
If you’re interested in developing for these platforms, we’re hiring!
Disclaimer: The article is not investment advice. Do your own research before putting your money somewhere. The author has some positions in most of the projects mentioned, with the exception of Bananacoin.