Blockchain Applications Scalability Problems and Solutions

Tech Talks

By George Spasov

June 12, 2020

8 Min Read

Every dapp creator dreams of hundreds of thousands or even millions of users for their system. The few that have actually reached this scale have quickly run into the blockchain scalability problem. In this article, we will clarify what blockchain scalability means and how to think about it. We will also show you how to avoid or solve the problem by means of software architecture.

What Is Blockchain Scalability

Blockchain scalability refers to the ability of a blockchain network or decentralized application to work well at scale. Working well can be measured by different metrics – resource efficiency, time efficiency, and service quality. Scale, on the other hand, can mean high volumes of interactions, high volumes of data exchange, or high-speed requirements.

What Is the Blockchain Scalability Problem

The problem with blockchain scalability is that the blockchain is inherently slow and expensive compared to traditional centralized systems. The more frequently you use the network, the more you have to pay, and the higher the cost per transaction goes. The more data you need to exchange, the higher your cost per byte goes. Sometimes business use cases become unreasonably expensive, or are simply impossible in terms of speed.

Blockchain Scalability Solutions

Fortunately, the blockchain community is a highly innovative and creative one, and various solutions to these problems have gained popularity lately. Normally these solutions involve moving data exchange to other parts of the software architecture or reducing the number of blockchain interactions needed, without introducing additional trust issues.

In the next sections, you will see the five most popular solutions to the most common scalability problems for dapps.

Offchain Storage Solution

Storing data on the chain is costly, even for a couple of hundred kilobytes. There are use cases, though, in which our business logic requires us to refer to files or large amounts of data on-chain. We might have important metadata that must exist on the chain.

The solution to this problem is to store only a lightweight proof of the data on-chain, rather than the raw data itself. By applying a hash function to the raw data, we can generate a hash that represents the data on-chain as a proof of existence.

Common examples of storing data off-chain while preserving its proof of existence on-chain can be found in marketplaces, intellectual property claims, or real estate tokenization.

For example, if we have a marketplace for registering and buying patents (intellectual property), we must be able to check the exact patent that is being registered on-chain. Sometimes the data might be hundreds of gigabytes. In that case, we could upload the data to IPFS and store the hash of the data on-chain. Using this pattern, we are able to store proof of the existence of the patent without storing the actual patent itself on-chain. If someone wants to check the content of the patent, they can get the hash from the blockchain and download the contents from IPFS using that hash (IPFS uses the hash of the stored data as an ID).

Keep in mind that we need to take care of the data availability problem that comes with storing data on IPFS. Having a hash on-chain that refers to data on IPFS does not mean the data will be available when we request it from IPFS nodes. That is why we, as dapp developers, need to make sure that we have IPFS nodes storing our data.
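To make the pattern concrete, here is a minimal Python sketch of the proof-of-existence idea. The file name and the commented contract call are assumptions for illustration; note that in practice IPFS derives its own content identifier (CID) from the data, which is what you would typically store on-chain.

```python
import hashlib

def proof_of_existence(path: str) -> str:
    """Hash a (potentially huge) file in chunks and return a hex digest
    that can be stored on-chain as a lightweight proof of existence."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

# Hypothetical flow for the patent marketplace example:
# 1. upload the raw document to IPFS (e.g. via a pinning node you control)
# 2. store only the fingerprint on-chain
patent_file = "patent_application.pdf"            # assumed local file
fingerprint = proof_of_existence(patent_file)
print("store this on-chain:", fingerprint)
# marketplace_contract.register_patent(fingerprint)   # hypothetical contract call
```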

Offchain Negotiation Solution

Every on-chain interaction costs some form of money. The more transactions we need in order to reach the value-generating event, the higher the cost of that event. This is quite common with handshakes and negotiations, where a lot of offers and counter-offers might go back and forth until an agreement is reached (if ever).

The solution to this problem is to let the negotiation happen off-chain and to architect the smart contracts in such a way that the buyer can submit a cryptographic proof of the negotiation alongside the payment and receive the good in a single transaction.

A common example of such a negotiation is a peer-to-peer NFT marketplace. The price of the NFT at hand is subject to negotiation. If the counterparties negotiate on-chain, this might lead to tens of on-chain transactions, with the cost of the deal growing with every counter-offer.

A solution to this problem is to build the architecture so that the counterparties can negotiate off-chain. When a decision is reached, the selling party sends a signed approval to the buyer authorizing the sale to go through. This signed approval needs to contain information about the NFT being sold, the buyer, and the price. The buyer can then submit the signature and the payment to the carefully designed smart contract and complete the trade.

The trustlessness of the process is maintained by having the smart contract check that the conditions the seller agreed to are met, and only then transfer the NFT.
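Below is a minimal sketch of the off-chain approval, assuming the Python eth_account library; the marketplace contract, NFT id, and price are hypothetical. The seller signs a message binding the NFT, the buyer, and the agreed price, and the buyer later submits that signature together with the payment, with the contract recovering the signer to check it really is the seller.

```python
import hashlib
from eth_account import Account
from eth_account.messages import encode_defunct

# Hypothetical negotiation outcome reached off-chain
nft_id = 42
price_wei = 10**18          # 1 ETH
seller = Account.create()   # stand-in for the seller's wallet
buyer = Account.create()    # stand-in for the buyer's wallet

# The approval binds NFT, buyer, and price so it cannot be reused for another deal.
# (A real contract would typically hash these with keccak256/abi.encode on-chain
# and recover the signer with ecrecover; this is the off-chain mirror of that.)
payload = f"{nft_id}:{buyer.address}:{price_wei}".encode()
message = encode_defunct(hashlib.sha256(payload).digest())
signed = Account.sign_message(message, private_key=seller.key)

# The buyer submits (nft_id, price, signed.signature) plus payment in one transaction.
# The contract-side check mirrors this recovery:
recovered = Account.recover_message(message, signature=signed.signature)
assert recovered == seller.address, "approval was not signed by the seller"
print("approval valid, trade can settle in a single transaction")
```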


Aggregate-Commit Pattern Solution

A lot of times, DLT-based applications need the high speeds of off-chain systems while maintaining the trustlessness of DLT and the auditability of the fact that certain events happened.

Current DLT implementations don't offer high enough speeds to cater to real-time, high-throughput applications, or simply inputting all of the real-time data on-chain ends up being very costly.

DLT-based systems can use the aggregate-commit pattern to overcome this problem.

The pattern consists of two stages. The first is the aggregate phase, where the system aggregates and stores the data off-chain. The second is the commit phase, where a cryptographic fingerprint of the current state of the data is generated and stored on-chain. The fingerprint needs to be cryptographically strong enough to allow the end-user to verify that the data has not been tampered with, modified, or reordered.

Several versions of this pattern are widely used in the DLT world. Most of them are based on a data structure known as a Merkle tree (or a variation of it). The Merkle tree root changes every time a new leaf is added, so the root acts as a fingerprint for the whole dataset. In addition, by supplying just one leaf and the node hashes leading to the root, one can trustlessly prove that this data is contained in that exact leaf and has not been changed or swapped.
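Here is a compact Python sketch of the idea, using SHA-256 for readability (Ethereum tooling usually uses keccak256). `merkle_root` fingerprints the whole dataset, while `merkle_proof` and `verify_proof` show how a single leaf can be proven against that root.

```python
import hashlib

def _hash(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _parent(left: bytes, right: bytes) -> bytes:
    return _hash(left + right)

def build_levels(leaves: list[bytes]) -> list[list[bytes]]:
    """Return all levels of the tree, hashed leaves first, root last."""
    level = [_hash(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [_parent(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def merkle_root(leaves: list[bytes]) -> bytes:
    return build_levels(leaves)[-1][0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes from leaf to root; the bool says whether the sibling is on the right."""
    proof = []
    for level in build_levels(leaves)[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling > index))
        index //= 2
    return proof

def verify_proof(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    node = _hash(leaf)
    for sibling, sibling_is_right in proof:
        node = _parent(node, sibling) if sibling_is_right else _parent(sibling, node)
    return node == root

leaves = [b"leaf-0", b"leaf-1", b"leaf-2", b"leaf-3", b"leaf-4"]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 2)
assert verify_proof(b"leaf-2", proof, root)   # proves leaf-2 is in the committed set
```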


As a first hypothetical use case, let's take giveaways and airdrops. Say you want to give 1 token to every user who registers on your website. If you approach this problem the traditional way, every new user means one new transaction that you have to pay the DLT network for. This becomes unscalable very fast.

Using the concept of “monoplasma”, one can architect a system where every gifted token is written as a new leaf in a Merkle tree. The data written in the leaf would be some form of the statement “address x is owed y tokens”.

Every time a new leaf is inserted, the root hash of the tree is recalculated, and this root hash is periodically updated in a smart contract. As another change from the conventional way of airdropping, instead of pushing the tokens directly to the users' wallets, users are asked to submit a transaction to the smart contract holding the Merkle root and “claim” their tokens. The claim is made by submitting how much the address is owed and the node hashes necessary for the smart contract to independently compute the root of the tree. If the computed root matches the stored root, the claim is correct and the tokens are disbursed.
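Continuing the sketch above, this is roughly what the claim check looks like. The contract is simulated as a plain Python class for illustration, and `merkle_root`, `merkle_proof`, and `verify_proof` are the helpers from the previous snippet; the addresses and amounts are made up.

```python
# Reuses merkle_root / merkle_proof / verify_proof from the previous snippet.

def leaf(address: str, owed_tokens: int) -> bytes:
    # "address x is owed y tokens", encoded as a Merkle leaf
    return f"{address}:{owed_tokens}".encode()

class AirdropContract:
    """Stand-in for the on-chain contract that stores only the Merkle root."""
    def __init__(self):
        self.root = None
        self.claimed = set()

    def commit_root(self, new_root: bytes):
        self.root = new_root               # periodic, operator-only in reality

    def claim(self, address: str, owed_tokens: int, proof) -> bool:
        if address in self.claimed:
            return False                   # no double claims
        if not verify_proof(leaf(address, owed_tokens), proof, self.root):
            return False                   # proof does not lead to the committed root
        self.claimed.add(address)
        # ...transfer `owed_tokens` to `address` here...
        return True

# Operator side: aggregate off-chain, commit the fingerprint on-chain.
entitlements = [("0xAlice", 1), ("0xBob", 1), ("0xCarol", 1)]
leaves = [leaf(addr, amount) for addr, amount in entitlements]
contract = AirdropContract()
contract.commit_root(merkle_root(leaves))

# User side: claim with amount + proof instead of one operator-paid tx per user.
assert contract.claim("0xBob", 1, merkle_proof(leaves, 1))
assert not contract.claim("0xBob", 1, merkle_proof(leaves, 1))   # second claim rejected
```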

It is important to note that aggregate-commit patterns normally operate via an off-chain aggregation operator. In the airdrop use case, this is the system giving away the tokens.

Another example of aggregate-commit is the new version of the old concept known as Plasma. Plasma is similar to monoplasma, with the main difference that the events written in the Merkle tree are not generated by a single operator but are produced in response to users transacting with each other. The Plasma operator catches the events, inserts them into the Merkle tree, and periodically pushes the tree root to the blockchain. One popular data structure for the leaves is the Unspent Transaction Output scheme, known as UTXO.

When a user wants to get into this “chain inside the chain”, they deposit their funds into the contract maintaining the Merkle tree root. The operator sees the deposit and generates a new leaf stating that this address now holds the deposited amount in the Plasma network.

When a user wants to exit the Plasma network, they submit cryptographic evidence showing their latest balance. The Merkle root smart contract checks the validity of the evidence and disburses the balance back to the user.

ZK-Rollups are another variation of Plasma. While in Plasma all user transactions are committed one by one, ZK-Rollups batch hundreds of them into a single transaction. The transactions are bundled into a block and a zero-knowledge proof is generated, proving that the rules of the transfers have been followed. The on-chain smart contract can verify the validity of these transactions without even disclosing the transaction counterparties or amounts.


State Channels

A lot of business cases call for two (or more) parties to constantly communicate with each other and agree on the latest state of their interaction until a meaningful end result is reached. While the communication is peer-to-peer, the business logic of the case might make it favorable for one of the parties to act maliciously in order to gain an advantage.

Such cases can be tackled by designing your system around the state channels pattern. First, the parties involved in the communication need to cryptographically sign all sent and received data and the resulting state of the interaction.

Second, a smart contract is written that encodes the rules of the business case and can make trustless decisions about who is right and who is wrong based solely on the latest signed states.

Examples of such scenarios are player-versus-player games or even peer-to-peer credit lines.

Let's take a hypothetical use case of two players who are playing a game of chess and have placed bets against each other. The game calls for a lot of interactions (moves) that are not final (the game does not end and the winner cannot be decided yet), so sending every single one of them to the blockchain becomes very unscalable – the two would be playing a very expensive game of chess.

Let's see how we can tackle this scenario using state channels. First, we can write a smart contract that takes the wagers of the two players and holds them until the game is resolved. This smart contract also needs to be able to recognize the winner based on a submitted state, if that state is final.

Next, we modify the architecture so that the players exchange moves over peer-to-peer communication. After a player moves, they send their move to the other player, along with a signed version of the latest state of the game – the state being the position of all the pieces on the 64 squares. When the game ends, regardless of the result, either party can take the latest signed state of the game and submit it to the smart contract. The smart contract checks the validity of the game and disburses the reward.
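Here is a minimal sketch of that signing flow, assuming the Python eth_account library. The state encoding (a move counter plus the board position as a FEN string) is a hypothetical choice for illustration; the contract side would perform the equivalent signature recovery on-chain.

```python
import json
from eth_account import Account
from eth_account.messages import encode_defunct

white, black = Account.create(), Account.create()   # stand-ins for the two players

def encode_state(move_number: int, board_fen: str) -> bytes:
    # Hypothetical encoding: move counter + board position (FEN string).
    # The move counter lets the arbiter prefer the state with the highest number.
    return json.dumps({"move": move_number, "board": board_fen}).encode()

def sign_state(state: bytes, player) -> bytes:
    return Account.sign_message(encode_defunct(state), private_key=player.key).signature

def signed_by(state: bytes, signature: bytes) -> str:
    return Account.recover_message(encode_defunct(state), signature=signature)

# After every move, the moving player sends the move plus their signature of the new
# state; the opponent counter-signs when accepting it.
start_fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
state = encode_state(move_number=0, board_fen=start_fen)
sig_white = sign_state(state, white)
sig_black = sign_state(state, black)

# What the arbiter contract would check when a state is submitted:
assert signed_by(state, sig_white) == white.address
assert signed_by(state, sig_black) == black.address
# ...then, if the state is final, decide the winner from the board and pay out...
```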

Several scenarios exist where the actors can act dishonestly in order to gain an advantage. Let's look at some of them and how they are mitigated.

First, the losing player, seeing that they are probably going to lose the game, might decide to act as if they are offline and cannot continue the game. To tackle this problem, the honest player can trigger a function in the smart contract, submitting the latest state and starting a time interval within which the losing player must submit their next move. If the losing player fails to do so, they lose the game automatically.

Another scenario is a dishonest player forging an end state in which they win, one that has nothing to do with the game actually being played. To fight this, when a winning state is submitted, the opposing player gets a time period to submit previous moves proving that the end state is forged, which rewards the honest player.
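To make the mitigations above more tangible, here is a rough Python sketch of the arbiter, where the names, time window, and dispute rule are all assumptions. Inactivity is handled with a response deadline, and the forged-end-state attack is simplified into a "highest properly signed move number wins" rule, which is one common way to implement the dispute described above.

```python
import time

CHALLENGE_WINDOW = 24 * 60 * 60   # assumed 24-hour response period

class ChessArbiter:
    """Simplified stand-in for the on-chain arbiter. Signature checks are
    omitted here; see the previous snippet for how they would be verified."""

    def __init__(self, white: str, black: str):
        self.players = {white, black}
        self.best_state = None          # highest-move-number signed state seen so far
        self.challenged_player = None
        self.deadline = None

    def challenge_inactivity(self, latest_state: dict, opponent: str) -> None:
        # The honest player posts the latest co-signed state; the opponent
        # must respond with their next move before the deadline.
        self._accept_if_newer(latest_state)
        self.challenged_player = opponent
        self.deadline = time.time() + CHALLENGE_WINDOW

    def respond_with_move(self, newer_state: dict, responder: str) -> None:
        # The challenged player answers by posting a newer signed state.
        assert responder == self.challenged_player
        self._accept_if_newer(newer_state)
        self.challenged_player, self.deadline = None, None

    def claim_timeout_win(self) -> str:
        # If the window expired with no response, the challenger wins automatically.
        assert self.deadline is not None and time.time() > self.deadline
        return (self.players - {self.challenged_player}).pop()

    def _accept_if_newer(self, state: dict) -> None:
        # A forged "final" state is overridden during the dispute window by any
        # properly signed state with a higher move number.
        if self.best_state is None or state["move"] > self.best_state["move"]:
            self.best_state = state
```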

Depending on the business case, different attacks might be possible, but as a general rule of thumb, build safeguards against them in a way that lets the smart contract verify the outcome and act as an arbiter of correctness.

Thin Proxy

Many business cases call for one and the same business logic smart contract to be deployed multiple times and used by different users. As business logic smart contracts tend to be quite big, this quickly adds up in deployment costs.

The solution is to deploy the business logic smart contract once and allow the deployment of thin smart contracts that "borrow" the business logic from it.

Let's take the example from the state channels pattern of two players playing chess. The arbiter smart contract is quite a fat one, as it needs to carry the logic of the game and all its edge cases. If we keep deploying this fat smart contract, we raise the cost per played game quite a lot.

Using the thin proxy pattern, we can tackle this problem. We deploy the business logic smart contract once and, for every new game, deploy a special thin proxy contract. This thin proxy contract leverages the ability of the specific DLT network you are using to execute the business logic of another smart contract in its own memory space. If we take Ethereum as an example, this is the "delegatecall" instruction. Our thin contracts carry only a mechanism that forwards execution to the business logic of the bigger smart contract.
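On Ethereum this idea is standardized as the EIP-1167 "minimal proxy". The Python sketch below simply assembles its well-known creation bytecode around an implementation address; the chess-logic address is hypothetical, and deploying the resulting bytecode (with whatever web3 tooling you use) yields a tiny contract that delegatecalls every call to the shared business logic contract.

```python
# EIP-1167 minimal proxy: creation code = prefix + implementation address + suffix.
# The deployed runtime delegatecalls every incoming call to `implementation`.
PREFIX = "3d602d80600a3d3981f3363d3d373d3d3d363d73"
SUFFIX = "5af43d82803e903d91602b57fd5bf3"

def minimal_proxy_creation_code(implementation: str) -> str:
    addr = implementation.lower().removeprefix("0x")
    assert len(addr) == 40, "expected a 20-byte hex address"
    return "0x" + PREFIX + addr + SUFFIX

# Hypothetical shared chess-logic contract deployed once:
chess_logic = "0x1111111111111111111111111111111111111111"
print(minimal_proxy_creation_code(chess_logic))
# Deploy this bytecode once per game; each game gets its own cheap proxy whose
# storage is separate but whose code is borrowed via delegatecall.
```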

One disadvantage of this method is that, while it is initially much cheaper to deploy, proxying constantly adds a bit of cost on top of every transaction. This means that if your contracts are going to be used hundreds of times or more, you might be better off deploying the fat contract. In the chess use case, the lifecycle of a game normally involves far fewer transactions, which makes the thin proxy worth implementing.
