Hoskinson may be wrong about the future of decentralized computing

The blockchain trilemma reared its head once again at Consensus in Hong Kong in February, putting Cardano founder Charles Hoskinson somewhat on the back foot as he reassured attendees that hyperscalers like Google Cloud and Microsoft Azure are not a risk to decentralization.

The point was made that large blockchain projects need hyperscalers, and that you don’t have to worry about a single point of failure because:

  • Advanced cryptography neutralizes the risk
  • Multi-party computation distributes key material
  • Confidential data processing protects data in use

The argument rested on the idea that “if the cloud can’t see the data, the cloud can’t control the system,” and it was left there due to time constraints.

But there is a counterargument to Hoskinson’s case for hyperscalers that deserves more attention.

MPC and confidential data processing reduce exposure

This was something of a strategic bastion in Hoskinson’s argument: that technologies such as multi-party computation (MPC) and confidential data processing ensure that hardware vendors do not have access to the underlying data.

They are powerful tools. But they do not dissolve the underlying risk.

MPC distributes key material across multiple parties so that no single participant can reconstruct a secret. It meaningfully reduces the risk of a single compromised node. However, the attack surface expands in other directions: the coordination layer, the communication channels, and the management of the participating nodes all become critical.

Instead of relying on a single key holder, the system now depends on a distributed set of actors behaving correctly and on the protocol being implemented correctly. The single point of failure does not disappear; it becomes a distributed trust surface.
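
To make the trade-off concrete, here is a minimal sketch of additive secret sharing, one common building block behind MPC-style key distribution (an illustration under simplified assumptions, not any specific production protocol):

```python
import secrets

PRIME = 2**127 - 1  # toy field modulus; real protocols use scheme-specific fields

def split_key(key: int, n_parties: int) -> list[int]:
    """Additively share a secret: any subset short of all shares reveals nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((key - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Reconstruction needs every share, so the trust surface is every party
    plus the coordination layer that moves shares around."""
    return sum(shares) % PRIME

key = secrets.randbelow(PRIME)
shares = split_key(key, 5)
assert reconstruct(shares) == key
```

No single share leaks anything about the key, but the system now depends on five operators, their channels, and their software all behaving correctly: exactly the distributed trust surface described above.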

Confidential computing, especially trusted execution environments, introduces another trade-off. Data is encrypted during execution, limiting exposure to the hosting provider.

But trusted execution environments (TEEs) rely on hardware assumptions. They depend on microarchitectural isolation, firmware integrity, and proper implementation. Academic literature has repeatedly shown that side-channel and architectural vulnerabilities continue to appear across enclave technologies. The exposure is narrower than in a traditional cloud deployment, but it is not eliminated.
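
To see where the trust bottoms out, consider a deliberately simplified simulation of remote attestation. The names and the symmetric-key signing here are stand-ins; real schemes such as Intel SGX’s DCAP use vendor-signed asymmetric certificate chains:

```python
import hashlib
import hmac

# Hypothetical stand-in for the vendor's root of trust, which in reality
# lives with the chip maker and is expressed as a certificate chain.
VENDOR_ROOT_KEY = b"hardware-vendor-root-key"

def enclave_measurement(code: bytes) -> bytes:
    """Hash of the code loaded into the enclave."""
    return hashlib.sha256(code).digest()

def vendor_sign_quote(measurement: bytes) -> bytes:
    """The hardware attests the measurement using vendor-rooted keys."""
    return hmac.new(VENDOR_ROOT_KEY, measurement, hashlib.sha256).digest()

def verify_quote(measurement: bytes, quote: bytes) -> bool:
    # The verifier's check reduces to trusting VENDOR_ROOT_KEY: if firmware,
    # microcode, or the vendor's key chain is compromised, this proves nothing.
    expected = hmac.new(VENDOR_ROOT_KEY, measurement, hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote)

m = enclave_measurement(b"prover-binary-v1")
assert verify_quote(m, vendor_sign_quote(m))
```

The verifier never inspects the hardware; it trusts a vendor-rooted key. A flaw anywhere in that chain silently voids the guarantee.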

More importantly, both MPC and TEEs often operate on top of hyperscaler infrastructure. The physical hardware, the virtualization layer and the supply chain remain concentrated. If an infrastructure provider controls access to machines, bandwidth or geographic areas, it retains operational leverage. Cryptography can prevent data inspection, but it does not prevent throughput throttling, shutdowns, or political interference.

Advanced cryptographic tools make specific attacks more difficult, but they still do not eliminate the risk of failure at the infrastructure level. They simply replace a visible concentration with a more complex one.

The ‘no L1 can handle global compute’ argument

Hoskinson argued that hyperscalers are necessary because no single layer 1 can handle the computational demands of global systems, citing the trillions of dollars that have gone into building such data centers.

Of course, layer 1 networks weren’t built to run AI training loops, high-frequency trading engines, or business analytics pipelines. They exist to maintain consensus, verify state transitions, and provide persistent data availability.

He is right about what layer 1 is for. But global systems mainly need results that everyone can verify, even if the computation happens elsewhere.

In modern crypto infrastructure, heavy computation is increasingly done off-chain. What matters is that results can be proven and verified on-chain. This is the basis of rollups, zero-knowledge systems, and verifiable compute networks.
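
The asymmetry is easy to see in miniature. A Merkle commitment, the simplest ancestor of rollup-style verification, lets a verifier check one result with logarithmic work while all the heavy lifting stays off-chain (a toy sketch; production systems use validity or fraud proofs):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Off-chain side: hashes every result into one 32-byte commitment."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Path of sibling hashes proving one leaf belongs to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, is_left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root: bytes, leaf: bytes, proof: list[tuple[bytes, bool]]) -> bool:
    """On-chain side: O(log n) hashing, independent of how big the job was."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

results = [f"job-{i}:ok".encode() for i in range(8)]  # computed off-chain
root = merkle_root(results)                           # only this goes on-chain
proof = merkle_proof(results, 5)
assert verify(root, results[5], proof)                # cheap to check anywhere
```

The verifier touches a handful of 32-byte hashes, never the workload itself; that is the property rollups and zero-knowledge systems generalize.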

Focusing on whether an L1 can run global compute misses the core question of who controls the execution and storage infrastructure behind verification.

If the computation happens off-chain but relies on centralized infrastructure, the system inherits centralized failure modes. Settlement remains decentralized in theory, but the path to producing valid state transitions is concentrated in practice.

The question should be about dependency at the infrastructure layer, not compute capacity inside layer 1.

Cryptographic neutrality is not the same as participatory neutrality

Cryptographic neutrality is a powerful idea, and one Hoskinson leaned on in his argument. It means that rules cannot be changed arbitrarily, hidden backdoors cannot be introduced, and the protocol remains fair.

But cryptography runs on hardware.

The physical layer determines who can participate, who can afford it, and who ends up excluded because throughput and latency are ultimately limited by real machines and the infrastructure they run on. If hardware production, distribution and hosting remain centralized, participation becomes economically closed, even when the protocol itself is mathematically neutral.

In high-compute systems, the hardware is the deciding factor. It determines the cost structure, who can scale, and how resilient the system is under censorship pressure. A neutral protocol running on centralized infrastructure is neutral in theory but constrained in practice.

The priority should shift towards cryptography combined with diversified hardware ownership.

Without diversity in the infrastructure, neutrality becomes fragile under stress. If a small set of providers can rate-limit workloads, restrict regions, or impose compliance gates, the system inherits their leverage. Protocol-level fairness alone does not guarantee participatory fairness.

Specialization beats generalization in compute markets

Competing with AWS is often framed as a matter of scale, but this too is misleading.

Hyperscalers optimize for flexibility. Their infrastructure is designed to serve thousands of workloads simultaneously. Virtualization layers, orchestration systems, enterprise compliance tools, and elasticity guarantees are strengths of general-purpose computing, but they are also cost layers.

Zero-knowledge proving and verifiable computation are deterministic, computationally dense, memory-bandwidth-bound, and pipeline-sensitive. In other words, they reward specialization.

A purpose-built proof network competes on proofs per dollar, proofs per watt, and proofs per unit of latency. When hardware, prover software, circuit design, and aggregation logic are vertically integrated, the efficiencies compound. Removing unnecessary abstraction layers reduces overhead, and sustained throughput on dedicated clusters outperforms elastic scaling for tight, constant workloads.

In compute markets, specialization consistently outperforms generalization for fixed, high-volume tasks. AWS optimizes for freedom of choice. A dedicated proof network optimizes for one class of workload.

The financial structure is also different. Hyperscalers price for enterprise margins and wide demand variability. A protocol-incentivized network can depreciate hardware differently and tune performance around sustained utilization rather than short-term rental models.
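
A back-of-the-envelope model shows why this matters. All figures below are hypothetical placeholders, not vendor pricing; the point is the structure of the comparison, not the numbers:

```python
# Hypothetical numbers, purely illustrative -- not real vendor pricing.
HOURS_PER_YEAR = 365 * 24

def rented_cost_per_proof(hourly_rate: float, proofs_per_hour: float) -> float:
    """Elastic rental: the hourly rate embeds margin and idle-capacity risk."""
    return hourly_rate / proofs_per_hour

def owned_cost_per_proof(capex: float, lifetime_years: float, power_kw: float,
                         power_price_kwh: float, utilization: float,
                         proofs_per_hour: float) -> float:
    """Owned hardware: depreciate capex over its life, pay only for power."""
    hourly = capex / (lifetime_years * HOURS_PER_YEAR) + power_kw * power_price_kwh
    return hourly / (proofs_per_hour * utilization)

# Same machine, same sustained proving workload:
print(rented_cost_per_proof(4.00, 1_000))                      # ~$0.0040/proof
print(owned_cost_per_proof(30_000, 3, 0.7, 0.10, 0.9, 1_000))  # ~$0.0013/proof
```

At high sustained utilization, owned hardware amortizes its cost across nearly every hour of its life, while rental rates must carry the provider’s margin and the risk of idle capacity.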

The competition is about structural efficiency for a defined workload.

Use hyperscalers, but don’t depend on them

Hyperscalers are not the enemy. They are efficient, reliable, and globally distributed infrastructure providers. The problem is dependency.

A resilient architecture uses large vendors for burst capacity, geographic redundancy, and edge distribution, but it does not anchor core functions to a single provider or a small cluster of providers.

Settlement, final verification, and the availability of critical artifacts should remain intact even if a cloud region fails, a vendor leaves a market, or policy constraints tighten.

This is where decentralized storage and compute infrastructure becomes a viable alternative. Proof artifacts, historical records, and verification inputs should not be revocable at a provider’s discretion. Instead, they should live on infrastructure that is economically aligned with the protocol and structurally difficult to switch off.
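
In practice, that can be as simple as content-addressing artifacts and never trusting a single host to serve them. A minimal sketch, with placeholder URLs standing in for one cloud bucket plus independent gateways:

```python
import hashlib
import urllib.request

# Hypothetical mirror list: one hyperscaler bucket plus independent gateways.
# These URLs are placeholders, not real endpoints.
MIRRORS = [
    "https://cloud-bucket.example.com/artifacts/",
    "https://gateway-a.example.org/artifacts/",
    "https://gateway-b.example.net/artifacts/",
]

def fetch_artifact(name: str, expected_sha256: str) -> bytes:
    """Try each backend in turn; content addressing makes any honest copy valid."""
    for base in MIRRORS:
        try:
            data = urllib.request.urlopen(base + name, timeout=10).read()
        except OSError:
            continue  # provider down, throttled, or geo-blocked: move on
        if hashlib.sha256(data).hexdigest() == expected_sha256:
            return data  # integrity comes from the hash, not the host
    raise RuntimeError("no mirror served a valid copy")
```

Because integrity comes from the hash rather than the host, any honest mirror, hyperscaler or not, can serve the artifact; losing one provider degrades latency, not correctness.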

Hyperscalers should be used as an optional accelerator rather than something fundamental to the product. Cloud can still be useful for reach and for bursts, but the system’s ability to produce proofs and to keep verification running must not be gated by a single vendor.

In such a system, if a hyperscaler disappeared tomorrow, the network would merely slow down, because the parts that matter most are owned and operated by a broader network rather than rented from a big-brand choke point.

This is how crypto’s ethos of decentralization is strengthened.
