Web3 has a memory problem. Not in the “we forgot something” sense, but in a core architectural sense: it doesn’t have a real memory layer.

Blockchains today do not look all that alien next to traditional computers, but one basic element of older computing is still missing: a memory layer built for decentralization that can support the next iteration of the Internet.
After the Second World War, John von Neumann laid out the architecture of modern computers: each computer needs input and output, a CPU for control and arithmetic, and memory for storing the latest version of data, along with a “bus” to fetch and update that data in memory. Generally known as RAM, this architecture has been the basis of computing for decades.
At its core, web3 is a decentralized computer – a “world computer.” At the higher layers, it is fairly recognizable: operating systems (the EVM, SVM) running on thousands of decentralized nodes, powering decentralized applications and protocols.
But dig deeper and something is missing. The memory layer needed for storing, accessing and updating short-term and long-term data looks nothing like the memory bus or memory unit that von Neumann envisioned.

Instead, it is a mashup of different best-effort approaches to that purpose, and the results are generally messy, inefficient and hard to navigate.
Here’s the problem: if we are going to build a world computer that departs fundamentally from the von Neumann model, there had better be a very good reason to do so. Right now, web3’s memory layer is not just different, it is convoluted and inefficient. Transactions are slow. Storage is sluggish and expensive. Scaling to mass adoption with the current approach is next to impossible. And that is not what decentralization was supposed to be about.
But there is another way.
Many people in this space are doing their best to work around this limitation, and we have reached the point where the current solutions simply can’t keep up. This is where algebraic coding, which uses equations to represent data for efficiency, resilience and flexibility, comes in.

The core problem is this: how do we implement decentralized coding for web3?
A new memory infrastructure
That’s why I took the leap from academia, where I held the role of NEC Professor of Software Science and Engineering, to dedicate myself and a team of experts to advancing high-performance memory for web3.

I saw something bigger: the potential to redefine how we think about computing in a decentralized world.
My team at Optimum is building decentralized memory that works the way it would in a dedicated computer. Our approach is powered by random linear network coding (RLNC), a technology developed in my lab over nearly two decades. It is a proven data coding method that maximizes throughput and resilience in high-reliability networks, from industrial systems to the internet.
Data coding is the process of converting information from one format to another for efficient storage, transmission or processing. Data coding has been around for decades, and many iterations of it are used in today’s networks. RLNC is a modern approach to data coding built specifically for decentralized computing. The scheme transforms data into packets for transmission across a network of nodes, ensuring high speed and efficiency.
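To make the idea concrete, here is a minimal sketch of RLNC-style encoding in Python: every coded packet is a random linear combination of the source packets over a finite field, shipped together with the coefficients used. The tiny prime field GF(257) and all names here are choices made only for this illustration, not Optimum’s actual scheme, which would use a field such as GF(2^8) with optimized arithmetic.

```python
# Illustrative RLNC encoding sketch (a toy, not a production codec).
import random

P = 257  # a small prime field, chosen only to keep the arithmetic readable

def encode(packets: list[list[int]]) -> tuple[list[int], list[int]]:
    """Build one coded packet: a random linear combination of all
    source packets, shipped together with its coefficient vector."""
    coeffs = [random.randrange(P) for _ in packets]
    coded = [0] * len(packets[0])
    for c, pkt in zip(coeffs, packets):
        for i, symbol in enumerate(pkt):
            coded[i] = (coded[i] + c * symbol) % P
    return coeffs, coded

# Four source packets of five field symbols each.
source = [[random.randrange(P) for _ in range(5)] for _ in range(4)]
coeffs, coded = encode(source)
print("coefficients:", coeffs)
print("coded packet:", coded)
```

A useful property of this construction is that any node can generate fresh coded packets on the fly, even by recombining coded packets it has already received; that recoding step is what distinguishes RLNC from fixed, end-to-end codes.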
With multiple engineering awards from top global institutions, more than 80 patents and numerous real-world deployments, RLNC is no longer just a theory. It has received considerable recognition, including the IEEE Communications Society and Information Theory Society Joint Paper Award for the paper “A Random Linear Network Coding Approach to Multicast.” RLNC’s impact was further recognized with the IEEE Koji Kobayashi Computers and Communications Award in 2022.

RLNC is now ready for decentralized systems, enabling faster data propagation, efficient storage and real-time access, making it a key solution for web3’s scalability and efficiency challenges.
Why this matters
Let’s take a step back. Why does all of this matter? Because we need memory for the world computer that is not only decentralized but also efficient, scalable and reliable.

Currently, blockchains depend on best-effort, ad hoc solutions that only partly achieve what memory does in high-performance computing. What they lack is a unified memory layer that covers both the memory bus for data propagation and the RAM for data storage and access.
The bus part of the computer should not become the bottleneck, yet that is exactly what is happening now. Let me explain.
“Gossip” is the common method of data propagation in blockchain networks. It is a peer-to-peer communication protocol in which nodes exchange information with random peers to spread data across the network. In its current implementations, it struggles at scale.
Imagine you need 10 pieces of information from neighbors who repeat what they have heard. At first, each conversation teaches you something new. But as you approach nine pieces out of 10, the chance that a neighbor tells you something new drops, making the last piece of information the hardest to get: the odds are 90% that the next thing you hear is something you already know.

This is how blockchain gossip works today: efficient early on, but redundant and slow when you try to complete the information sharing. You would have to be extraordinarily lucky to hear something new every time.
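This is the classic coupon-collector effect, and a few lines of Python (purely for illustration) reproduce the neighbor analogy:

```python
# Simulate naive gossip: each message delivers one random piece of
# information out of n; count how many messages it takes to hear all n.
import random

def draws_to_collect(n: int) -> int:
    heard: set[int] = set()
    draws = 0
    while len(heard) < n:
        heard.add(random.randrange(n))  # a neighbor repeats a random piece
        draws += 1
    return draws

trials = 10_000
avg = sum(draws_to_collect(10) for _ in range(trials)) / trials
print(f"average messages to hear all 10 pieces: {avg:.1f}")  # ~29.3
```

On average it takes about 29 messages to collect 10 pieces of information, nearly three times the minimum, and the overhead only grows as the number of pieces increases.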
With RLNC, we get around the core scalability problem of current gossip. RLNC works as if you had managed to be extremely lucky: every time you hear something, it just happens to be new to you. That means much higher throughput and much lower latency. This RLNC-powered gossip is our first product, which validators can adopt through a simple API call to optimize data propagation for their nodes.
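Continuing the toy sketch from above: a receiver can decode as soon as it holds k linearly independent coded packets, and with random coefficients over even a modest field, nearly every packet it receives is independent of the ones before (“innovative”). Again, this is a schematic illustration of the principle, not Optimum’s API.

```python
# Count how many random coded packets a receiver needs before the
# coefficient matrix reaches full rank, i.e., before it can decode.
import random

P = 257  # same toy field as in the encoding sketch

def rank_mod_p(rows: list[list[int]]) -> int:
    """Rank of a matrix over GF(P), by Gaussian elimination."""
    m = [row[:] for row in rows]
    cols = len(m[0]) if m else 0
    rank = 0
    for col in range(cols):
        pivot = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col], P - 2, P)  # modular inverse (P is prime)
        m[rank] = [(x * inv) % P for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % P for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

k = 10  # ten pieces of information, as in the neighbor analogy
received: list[list[int]] = []
while rank_mod_p(received) < k:
    # each gossip message carries a fresh random coefficient vector
    received.append([random.randrange(P) for _ in range(k)])
print(f"decoded after {len(received)} coded packets (minimum possible: {k})")
```

Almost every run decodes after exactly 10 packets, versus roughly 29 messages for the uncoded gossip simulated above.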
Now let’s examine the memory part. It helps to think of memory as dynamic storage, like RAM in a computer or, for that matter, our closet. Decentralized RAM should mimic a closet: it must be structured, reliable and consistent. A piece of data is either there or it isn’t, no half-bits, no missing sleeves. That’s atomicity. Items remain in the order they were placed – you may see an older version, but never a wrong one. That’s consistency. And unless something has been moved, everything stays put; data does not disappear. That’s durability.
Instead of the closet, what do we have? Mempools are not something we keep around in computers, so why do we keep them in web3? The main reason is the lack of a proper memory layer. If we think of data management in blockchains as managing clothes in our closet, a mempool is like a pile of laundry on the floor: you are never quite sure what is in there, and you have to rummage through it.
Current delays in transaction processing can be extremely high for any single chain. Taking Ethereum as an example, it takes two epochs, or 12.8 minutes, to finalize any single transaction. Without decentralized RAM, web3 depends on mempools, where transactions sit until they are processed, resulting in delays, congestion and unpredictability.
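That number follows directly from Ethereum’s current consensus parameters: an epoch is 32 slots of 12 seconds each, so two epochs come to 2 × 32 × 12 = 768 seconds, or 12.8 minutes.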
Full nodes store everything, bloating the system and making retrieval complex and expensive. In a computer, RAM holds what is currently needed, while less-used data moves to cold storage, perhaps in the cloud or on disk. Full nodes are like a closet stuffed with every piece of clothing you have ever worn, from infancy until today.
This is not something we do with our computers, and it exists in web3 only because storage and read/write access are not optimized. With RLNC, we create decentralized RAM (deRAM) for timely, up-to-date state in a way that is economical, resilient and scalable.
deRAM and RLNC-powered data propagation can address web3’s biggest bottlenecks by making memory faster, more efficient and more scalable. Together they optimize data propagation, reduce storage bloat and enable real-time access without compromising decentralization. This has long been the key missing piece of the world computer, but not for much longer.