ETH 2.0: What you need to know

From the CMC editorial desk: You’ve seen lots of coverage about ETH 2.0, but what will it end up looking like? To answer this question, we ask Julien Klepatch, who provides an approachable explanation of the technical details.

A couple of weeks ago, Vitalik Buterin unveiled ETH 2.0 at Devcon4. ETH 2.0, also known as Serenity (or "Shasper", for its combination of Sharding and Casper), is an ambitious plan to dramatically increase the scalability of Ethereum. It is fascinating but also complex, and there is a lot of confusion around what we can expect once it is implemented. ETH 2.0 is actually an umbrella term that regroups three sub-projects: Sharding, Casper and eWASM.

In this article we will go through a high-level overview of these three sub-projects and how they promise to make Ethereum more scalable. But before this, let’s first understand what scalability really brings to the table.


What is scalability?

Scalability has become a buzzword in the blockchain industry. Everybody talks about it, and it is cited as the main challenge for most blockchain projects. But what is scalability, and why does it matter so much?

Scalability is formally defined as the ability of a system to increase its output proportionally to its input. For blockchains specifically, scalability is the capacity to keep processing transactions at the same speed (output) as the number of transactions increases (input). The maximum throughput of Bitcoin and Ethereum is currently about 7 and 15 transactions per second, respectively.
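To get a feel for what these numbers mean in practice, here is a quick back-of-the-envelope calculation (the backlog size is an invented example; the 7 and 15 TPS figures come from the article):

```python
# Rough, illustrative throughput comparison: how long a backlog of
# transactions takes to clear at a fixed transactions-per-second rate.

def seconds_to_clear(backlog_txs: int, tps: float) -> float:
    """Time (in seconds) needed to process a backlog at a constant rate."""
    return backlog_txs / tps

backlog = 1_000_000  # e.g. a burst of transactions during a popular ICO
for name, tps in [("Bitcoin", 7), ("Ethereum", 15)]:
    hours = seconds_to_clear(backlog, tps) / 3600
    print(f"{name}: {hours:.1f} hours to clear {backlog:,} transactions")
```

At 15 TPS, a million-transaction backlog takes more than 18 hours to drain, which is exactly the kind of congestion described below.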

These numbers are unfortunately far too low to support mass adoption and everyday usage. The Ethereum network famously became congested on multiple occasions in 2017 when popular ICOs drove many transactions on the network, or when CryptoKitties users rushed to buy, sell and breed their favorite kitties.

Why is Ethereum currently slow?

In Ethereum, every transaction sent to the network has to be processed by ALL the computers of the network, and blockchain data has to be stored by all the computers as well.

Even though Ethereum transactions don’t require a lot of computing power to process and dApps typically don’t store a lot of data on the blockchain, when you multiply this by the number of computers of the network (20,000 as of November 2018), the processing power needed climbs rapidly.

But why do we duplicate the effort across all the computers of the network? Because that's required by the Proof-Of-Work (POW) algorithm of Ethereum. POW is also used in Bitcoin, but in Ethereum, instead of processing simple transactions, we also run computations defined in small programs called smart contracts. This makes the computational load even bigger.

Could we use another system that doesn’t require this costly duplication?  Yes, that’s the whole point of Ethereum 2.0.

Overview of Ethereum 2.0

Source: Hsiao-Wei Wang

This diagram looks complicated but if we break it down and explain it piece-by-piece it will be easier to understand. First, there are three main parts:

  • Main chain (i.e. the current Ethereum blockchain)
  • Beacon chain (i.e. Casper)
  • Shard chains (i.e. Sharding)

Most of the action will take place in the Shard chains and the Beacon chain. The Shard chains are where smart contracts will be run and where their data will be stored. The Beacon chain is a coordination and validation layer. As for the Main chain, it is used only for keeping Ethereum 2.0 "miners" accountable by staking their Ether (more on that in the next section).

You might be surprised to still see the Main Ethereum Blockchain in Ethereum 2.0. But that is only temporary, during the transition from Ethereum 1.0 to Ethereum 2.0. The long-term plan is to get rid of the main chain entirely and to only keep the Beacon chain (Proof-Of-Stake) and the Shard chains.

Before we get into the individual components and explain how they work together, let's dive into Proof-Of-Stake (POS), one of the fundamental ideas of Ethereum 2.0.

Casper & Proof-Of-Stake

The original idea of Casper was to switch the consensus algorithm of the Ethereum Main chain from Proof-Of-Work (POW) to Proof-Of-Stake (POS). The main motivation was to reduce the centralization risks and environmental costs of POW, which requires a lot of resources for mining.

With POS, the blockchain still has blocks, but there are no miners; instead there are validators. Validators are split into block proposers and attesters, who respectively create and validate new blocks. Validators stake some Ether in a smart contract and vote to attest that new blocks are valid. If they fail to participate in the vote, or if they are found to have validated incorrect blocks, their Ether stake is either reduced or destroyed entirely. These are what we call the slashing conditions, and they are the mechanism that keeps the blockchain secure. The POW equivalent would be a miner who spends a lot of money on electricity to mine an incorrect block, only to have other miners reject it. With POS, we can produce the same incentive to behave honestly without spending any electricity.
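The incentive mechanism can be sketched in a few lines. This is a toy model, not the real Casper slashing rules: the class name and penalty fractions are illustrative assumptions.

```python
# Toy model of stake-based incentives: a validator locks up a deposit,
# and misbehaviour reduces or destroys it (the "slashing conditions").

class Validator:
    def __init__(self, stake_eth: float):
        self.stake = stake_eth

    def slash(self, fraction: float):
        """Destroy part of the stake, e.g. for attesting an invalid block."""
        self.stake -= self.stake * fraction

v = Validator(stake_eth=32.0)
v.slash(0.5)        # caught validating an incorrect block (penalty is illustrative)
print(v.stake)      # 16.0
v.slash(1.0)        # a severe offence destroys the remaining stake
print(v.stake)      # 0.0
```

The key point is that losing the stake plays the same economic role as wasted electricity in POW: dishonesty is expensive, honesty is not.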

Beacon Chain

The Beacon chain is a new POS blockchain that re-uses the Casper research for its consensus mechanism. Its main jobs are to:

  • maintain the validator set
  • store attestations, which are hashes of shard data attested by validators

When a validator wants to participate in the verification of shard blocks, they first need to deposit 32 Ether (staking) in the Validator Main Contract (VMC), a smart contract on the Main chain.

The Beacon chain periodically checks this smart contract and when it finds new validators it adds them to a list of pending validators.
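The deposit flow above can be modelled with a toy sketch. The contract and method names follow the article's terminology (VMC), but the logic is an illustrative assumption, not the real protocol:

```python
# Toy model of the staking flow: a deposit contract on the Main chain records
# 32-ETH deposits, and the Beacon chain periodically scans it for new
# depositors, adding them to its list of pending validators.

DEPOSIT_SIZE = 32  # ETH required to become a validator

class ValidatorMainContract:
    def __init__(self):
        self.deposits = {}                # address -> deposited ETH

    def deposit(self, address: str, amount: int):
        if amount != DEPOSIT_SIZE:
            raise ValueError("exactly 32 ETH must be staked")
        self.deposits[address] = amount

class BeaconChain:
    def __init__(self):
        self.pending_validators = []

    def scan_deposits(self, vmc: ValidatorMainContract):
        """Pick up depositors not yet registered as pending validators."""
        for address in vmc.deposits:
            if address not in self.pending_validators:
                self.pending_validators.append(address)

vmc = ValidatorMainContract()
vmc.deposit("0xAlice", 32)               # hypothetical address, for illustration
beacon = BeaconChain()
beacon.scan_deposits(vmc)
print(beacon.pending_validators)         # ['0xAlice']
```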

Regularly, the Beacon chain assigns pending validators to random shards and re-shuffles the set of already active validators. This random sampling process is crucial to the security of ETH 2.0, as it prevents validators from colluding and carrying out a 51% attack on individual shards.
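The shuffling step can be illustrated like this. Note that the real protocol derives its randomness from an on-chain source, not Python's `random` module; this sketch only shows why shuffling matters:

```python
# Illustrative random sampling: shuffle validators and deal them out across
# shards, so no small group can predictably control a single shard.

import random

def assign_to_shards(validators, num_shards, seed=None):
    rng = random.Random(seed)        # stand-in for the protocol's randomness
    shuffled = validators[:]
    rng.shuffle(shuffled)
    shards = {i: [] for i in range(num_shards)}
    for i, v in enumerate(shuffled):
        shards[i % num_shards].append(v)
    return shards

validators = [f"validator_{i}" for i in range(8)]
print(assign_to_shards(validators, num_shards=4, seed=42))
```

Because assignments are unpredictable, an attacker controlling a fraction of all validators cannot concentrate their power on one chosen shard.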

Proposers send hashes of shard data to the Beacon chain, and attesters sign these hashes if they believe they are valid. These attestations are stored in the blocks of the Beacon chain, but not the shard data itself.
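The proposer/attester exchange above can be sketched as follows. The function names and dictionary format are illustrative assumptions; the point is that the Beacon chain stores hashes and attestations, never the shard data itself:

```python
# Sketch of the attestation flow: a proposer submits the hash of shard data,
# attesters check it against the data they validated, and only the hash plus
# attestations end up stored on the Beacon chain.

import hashlib

def shard_block_hash(shard_data: bytes) -> str:
    """Hash of a shard block's contents (the only thing the Beacon chain keeps)."""
    return hashlib.sha256(shard_data).hexdigest()

def attest(attester: str, shard_data: bytes, claimed_hash: str):
    """An attester signs off only if the claimed hash matches the data."""
    if shard_block_hash(shard_data) == claimed_hash:
        return {"hash": claimed_hash, "attester": attester}
    return None

data = b"shard 3: transfer 1 ETH from A to B"
proposed = shard_block_hash(data)          # submitted by the block proposer

beacon_block = []                          # holds attestations, not shard data
for name in ["validator_1", "validator_2"]:
    attestation = attest(name, data, proposed)
    if attestation is not None:
        beacon_block.append(attestation)

print(len(beacon_block))                   # 2
```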


Sharding

Inspired by the sharding techniques used in traditional databases to split large volumes of data across several database servers, Ethereum sharding proposes to split both transaction processing and blockchain data storage across several groups of computers called "shards".

In total, there would be 1,024 shards. Each of them is like its own mini-blockchain, with block proposers creating new blocks and attesters validating them.

Ethereum addresses, balances and smart contract data will be split across all these shards. For example, all addresses starting with '0xA…' could be stored in the first shard, all addresses starting with '0xB…' would be stored in the second shard, and so on. (At this point there is no specification; the mechanism described is illustrative only.)

When a transaction is sent to the network, it would be executed in the shard that has the address that signed the transaction. As a result, only a subset of all the computers involved in the overall network will have to deal with this specific transaction, which will greatly reduce the load on participants.
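As the article notes, there is no finalized specification for how addresses map to shards; a simple modulo rule is one way to picture it. The rule below (and the example address) is purely an assumption for illustration:

```python
# Hypothetical address-to-shard routing: derive the shard from the sender's
# address, so only that shard has to process the transaction.

NUM_SHARDS = 1024

def shard_for_address(address: str) -> int:
    """Map a hex address to one of the 1,024 shards (illustrative rule)."""
    return int(address, 16) % NUM_SHARDS

# An example (made-up) address: only 1 shard out of 1,024 handles its txs.
print(shard_for_address("0xA1b2C3d4E5f60718293a4b5C6d7E8f9012345678"))
```

With any deterministic rule like this, every node can tell which shard is responsible for a transaction without consulting the other 1,023.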

Most of the scalability improvements of ETH 2.0 will be brought by Sharding. But there is still an interesting extra optimization that is provided by another project called eWASM.


eWASM

The Ethereum Virtual Machine (EVM) is the central component of Ethereum that runs the smart contracts. When the EVM runs a smart contract, it charges Ether to the sender of the transaction. The more computing power a smart contract requires, the more the EVM will charge.

When smart contracts are deployed on the blockchain, they are compiled into a series of "opcodes", or in more familiar terms, basic instructions that the EVM can understand. Each opcode is associated with an abstract unit called "gas".

Each block limits the amount of computational work that can be put into it by imposing a limit on the total amount of “gas” for the current block.

By improving the efficiency of the EVM, we can reduce the gas cost of each opcode, which will allow us to fit more transactions in a block.
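The relationship between gas costs and block capacity can be shown with a toy packing example. The 8,000,000 gas limit and 21,000-gas transfer cost roughly match Ethereum's values at the time of writing, but the packing logic itself is a simplification:

```python
# Toy gas accounting: a block only fits as many transactions as its gas limit
# allows, so cheaper opcodes (the eWASM goal) mean more transactions per block.

BLOCK_GAS_LIMIT = 8_000_000

def fit_transactions(tx_gas_costs):
    """Greedily pack transactions until the block gas limit is reached."""
    used, included = 0, []
    for gas in tx_gas_costs:
        if used + gas <= BLOCK_GAS_LIMIT:
            used += gas
            included.append(gas)
    return included

txs = [21_000] * 500                 # 500 simple transfers at 21,000 gas each
print(len(fit_transactions(txs)))    # 380 fit: 8,000,000 // 21,000
```

Halving every gas cost would double the number of transactions that fit in the same block, which is exactly the kind of gain a more efficient virtual machine targets.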

EVM 1.5 was the first proposed initiative to improve the efficiency of the EVM.

Creating an efficient and secure virtual machine from scratch is a very difficult task. Virtual machines used in production have required the combined efforts of hundreds of talented engineers, working together for years or even decades to bring them to fruition. The total costs of such projects can be in the hundreds of millions of dollars. Think of the JavaScript virtual machines in modern browsers, for example.

Couldn’t we re-use an existing virtual machine that has been already optimized instead?

That's the idea of eWASM: a plan to build a new EVM based on WebAssembly, a cutting-edge virtual machine developed by teams of Google and Mozilla engineers. Its original purpose is to allow any programming language to run (efficiently) in web browsers. One benefit of re-using an existing virtual machine is leveraging the gigantic effort that has already gone into this technology. Another huge benefit is the superior security offered by WebAssembly. So far, the EVM hasn't been hacked, but as network usage grows, the incentive to hack it will grow as well.

Casper, the Beacon chain, Sharding, eWASM… all of this is great, but how do they all work together?

Is this enough to solve Ethereum’s scaling woes?

Vitalik Buterin stated that ETH 2.0 will be able to process 15,000 transactions per second, instead of the current 15 transactions per second. That’s a massive 1,000x increase.

The number of transactions per second of a blockchain is often compared to credit card payment networks, like Visa, which can process several thousand transactions per second. AliPay, the Chinese financial powerhouse, has an even larger capacity, with more than 10,000 transactions processed per second during Chinese holidays.

It looks like 15,000 transactions per second for Ethereum would be enough. But that’s only if you consider Ethereum as a payment network, and assuming current demands remain stable.

First, contrary to credit card payment processors, Ethereum allows not only normal payments but also micro-payments. This extra capability will likely fuel new demand, resulting in many more transactions per second than what we currently see on credit card payment processors.

In addition, Ethereum is much more than just a payment network. It's a platform that enables new kinds of decentralized applications (dApps), with the potential to grow as much as, or even more than, mobile platforms did. The transactions generated by the usage of these dApps will further add to the demand for micro-payments.

Even though ETH 2.0 is impressive, it might not be enough. To scale Ethereum even further, Ethereum researchers have thought of "quadratic sharding", which involves making shards of shards and would theoretically allow Ethereum to scale without limit. However, this is a very early idea, and no one is sure it would actually work or could even be implemented.

Ethereum 2.0 will be rolled out in several phases, with the final phase expected in 2019 or 2020. In the meantime, dApps probably won't wait that long and will start to explore other scalability solutions such as sidechains (Loom, Raiden, POA Network…). When ETH 2.0 is ready, these sidechains can be attached to individual shards, taking the scalability of Ethereum even further.

About Julien Klepatch 

If you want to learn how to develop Ethereum dApps and smart contracts, go to my website and YouTube channel, EatTheBlocks. You will find many free tutorials about Ethereum, Solidity, Truffle and other tools for dApps. I also released a video course with Manning on building a full Decentralized Exchange (DEX) for ERC20 tokens on Ethereum.