4.2. Performance and Reliability
Terra Classic is operationally live and, most of the time, behaves like a normal Cosmos-family PoS chain: blocks are produced consistently, transactions settle, and planned upgrades are executed through governance. The reliability story is therefore not one of constant outages but of change-management maturity and tail risk: the network has experienced a small number of high-impact reliability events (notably, a prolonged unplanned halt during an upgrade) and continues to carry dependency-driven DoS/halt risk that must be managed through timely patching and disciplined upgrade execution.
4.2.1 What “performance & reliability” means for an L1 (and what we measure)
For a PoS L1, reliability must be assessed across four layers—because “the chain is producing blocks” is necessary but not sufficient:
Consensus liveness – blocks continue to be produced (no halt).
Transaction inclusion quality – txs are included within a reasonable time; no prolonged mempool starvation.
Upgrade execution reliability – scheduled upgrades complete without extended unplanned downtime.
Service-layer reliability – public RPC/LCD/gRPC + indexers/explorers stay usable; otherwise users perceive “outage” even if consensus is healthy.
This chapter focuses on (1)–(3) as core chain reliability, and flags (4) as an operational maturity dependency (deep dive in 4.4).
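The first two layers above lend themselves to simple, measurable probes. As a minimal sketch (the thresholds are hypothetical and would need tuning per chain; in practice the block timestamps would come from a CometBFT RPC endpoint such as /block):

```python
from datetime import datetime, timedelta

# Hypothetical alerting thresholds -- tune per chain.
HALT_THRESHOLD = timedelta(minutes=5)   # no block for 5 min => liveness alert
CADENCE_TARGET = timedelta(seconds=6)   # typical Cosmos-like block time

def check_liveness(block_times: list[datetime], now: datetime) -> dict:
    """Classify chain health from a window of recent block timestamps."""
    if not block_times:
        return {"status": "unknown"}
    gap_since_last = now - block_times[-1]
    if gap_since_last > HALT_THRESHOLD:
        # Consensus liveness failure: the chain has stopped producing blocks.
        return {"status": "halted", "gap_seconds": gap_since_last.total_seconds()}
    # Average cadence over the window; sustained slowdowns signal degraded
    # inclusion quality even when the chain is technically "up".
    spans = [(b - a) for a, b in zip(block_times, block_times[1:])]
    avg = sum(spans, timedelta()) / len(spans) if spans else CADENCE_TARGET
    status = "degraded" if avg > 2 * CADENCE_TARGET else "healthy"
    return {"status": status, "avg_block_seconds": avg.total_seconds()}
```

The point of the sketch is that layers (1) and (2) are objectively monitorable, whereas layers (3) and (4) require process-level evidence rather than a metric.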
4.2.2 Current operational state (what we can claim without overreach)
Based on the evidence corpus, Terra Classic’s recent consensus-level operation is stable, with typical Cosmos-like block cadence and routine chain activity. Where the chain has shown fragility historically is not day-to-day liveness, but edge cases around upgrades and dependency mismatches.
4.2.3 Reliability of change: planned halts vs unplanned downtime
Terra Classic uses governance-driven upgrades, and planned chain halts are part of the operational model for many upgrades. This is normal in Cosmos-style upgrade plans: the chain intentionally stops at a defined height so validators can switch binaries and resume.
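The "intentional stop at a defined height" pattern can be illustrated with a simplified model of how Cosmos SDK-style upgrade modules behave at the upgrade height (this is a sketch of the general mechanism, not Terra Classic's actual module code):

```python
class UpgradeHaltError(Exception):
    """Raised to deliberately stop the node at the upgrade height."""

def begin_block(height: int, plan_height: int, plan_name: str,
                registered_handlers: set[str]) -> str:
    """Simplified model of Cosmos-style upgrade behavior at BeginBlock.

    - Old binary reaches the plan height with no handler => deliberate halt,
      so validators can swap binaries without forking the chain.
    - New binary has a handler registered => migration runs, chain resumes.
    """
    if height == plan_height:
        if plan_name not in registered_handlers:
            # This is the "planned halt": every honest node stops here.
            raise UpgradeHaltError(
                f"UPGRADE '{plan_name}' NEEDED at height {plan_height}")
        return f"applied upgrade '{plan_name}' at height {height}"
    return "normal block"
```

Because every correct node halts at the same height, a planned upgrade produces a coordinated pause rather than a fork; the risk, as discussed below, is what happens when the resume step fails.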
That said, the most investor-relevant question is: do planned upgrades stay planned, or do they convert into extended unplanned downtime?
The evidence pack documents multiple upgrades treated as "documented / executed" (e.g., v3.1.6, v3.3.0, v3.4.0, v3.5.0, v3.6.0, v3.6.1) and distinguishes "scheduled halt" from "unplanned outage."
Why this matters:
In mature L1 operations, upgrades should be “boring”: rehearsed on testnet, with reproducible builds, clear validator runbooks, and rollback paths.
In less mature operations, upgrade days become systemic risk events.
4.2.4 Confirmed high-impact reliability incidents (core chain)
This section is deliberately short and evidence-backed—we only list events with verifiable sources.
A) May 2022 emergency halt (crash-era)
The chain was halted during the May 2022 collapse as an emergency measure. The evidence pack notes the approximate timing and rationale (a halt to reduce the governance attack surface amid extreme volatility).
Reliability interpretation: this is an “extraordinary conditions” halt, less useful as a signal of normal operational maturity, but it sets the baseline for the post-crash era.
B) IBC disabled for months (availability of cross-chain functionality)
IBC channels were disabled during the collapse period and later re-enabled through governance-driven action (proposal-guided process; upgrade to v1.0.4 before a defined block height).
Reliability interpretation: consensus can be “up” while core ecosystem functionality (IBC mobility) is effectively “down.” This has real investor impact: liquidity and exit paths are constrained.
C) March 2023 unplanned chain halt during v1.1.0 upgrade (~8 hours)
This is the most important "normal-operations" reliability incident of the post-crash era: the chain halted at the scheduled upgrade height and remained down for roughly eight hours because of a validator-side build mismatch. Validators compiled against the wrong libwasmvm library version, producing divergent app_hash values and preventing consensus.
The evidence pack describes mitigations: L1 Task Force coordination, validator re-upgrades, snapshots for resync, and an after-action review with procedural recommendations.
Reliability interpretation (state-level):
The chain is not fragile by default, but upgrade-day operational discipline has historically been a point of failure.
This is not a “code exploit” story. It’s a build reproducibility / release process story—an operational maturity gap.
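One concrete mitigation for this failure class is pinning and verifying dependency checksums in the validator runbook before the upgrade height. A minimal sketch (the expected hash below is a placeholder, not the real libwasmvm checksum; a real runbook would pin the value published with the release):

```python
import hashlib
from pathlib import Path

# PLACEHOLDER -- a real runbook pins the published checksum for the exact
# libwasmvm build shipped with the release.
EXPECTED_SHA256 = "0" * 64

def verify_library(path: Path, expected_sha256: str = EXPECTED_SHA256) -> bool:
    """Compare a local shared library's SHA-256 against the pinned release hash.

    Run by each validator before the upgrade height; a mismatch means the
    node would compute a divergent app_hash after the upgrade.
    """
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256
```

Had every validator run an equivalent check against a pinned hash, the wrong-libwasmvm builds would have been caught before the halt rather than diagnosed during it.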
4.2.5 Degraded performance risks: DoS and “halt-class” vulnerabilities (patched, but illustrative)
Reliability is not only about actual outages—it is also about credible pathways to disruption. The corpus documents a cluster of halt-class or network-wide degradation risks disclosed in the context of v3.6.1, including:
CometBFT BitArray crash-class vulnerability (ASA-2025-003) — malformed BitArray structures could crash nodes and potentially halt the network if exploited.
Oracle gas-limit DoS — oracle vote transactions could set gas limit equal to block gas limit, monopolizing block gas and excluding legitimate transactions.
Legacy WASM query DoS — oversized legacy query responses could degrade performance; mitigated via size limits and improved handling.
These were treated as serious enough to warrant a hot-fix release, with explicit patch actions (CometBFT upgrade to v0.37.16; oracle module constraints; WASM query limits) and a scheduled upgrade requirement.
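The oracle gas-limit constraint illustrates the general shape of this class of mitigation: bound what any single transaction type can consume of a shared resource. As a sketch (the cap fraction and gas figures below are hypothetical, not the actual shipped module parameters):

```python
BLOCK_GAS_LIMIT = 50_000_000
# Hypothetical cap -- the real oracle-module constraint may differ.
MAX_ORACLE_GAS_FRACTION = 0.01

def accept_oracle_vote(tx_gas_limit: int,
                       block_gas_limit: int = BLOCK_GAS_LIMIT) -> bool:
    """Reject oracle vote txs whose declared gas limit could monopolize
    block gas and starve ordinary transactions of inclusion."""
    return tx_gas_limit <= block_gas_limit * MAX_ORACLE_GAS_FRACTION
```

Under the pre-patch behavior described above, a vote declaring gas equal to the full block limit would pass; under any cap of this shape it is rejected, so legitimate transactions retain guaranteed block space.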
Reliability interpretation (state-level):
This is an example of dependency-driven reliability risk: even if Terra Classic core logic is stable, upstream Cosmos/CometBFT/CosmWasm issues can create halt or DoS pathways.
The positive signal is that the ecosystem demonstrated the ability to patch and ship a hot-fix upgrade in response to disclosed risk.
4.2.6 Reliability implications for investors and builders (why this belongs in a “state of the chain” report)
For investors, “reliability” translates into two measurable outcomes:
Trust premium / listing and integration comfort – chains with frequent or poorly handled upgrades pay a reputational tax (partners price in operational risk).
Builder cost – unreliable endpoints, breaking changes, or unclear upgrade runbooks increase engineering overhead and reduce app velocity.
For builders specifically:
Upgrades like v3.6.0 introduced changes that required developers using gRPC contract queries to update their code; in other words, "reliability" also includes API stability and breaking-change management.
4.2.7 What this chapter deliberately does not do (and where it goes instead)
This chapter does not do root-cause governance analysis (e.g., “why maturity is low,” “who failed,” “why upgrades were contested”). That belongs in:
5.x (validator operational reality and coordination)
7.x (delivery capacity, dev process, tooling)
11.x (diagnosis & root cause synthesis)
Here we keep the scope: what reliability looks like today and what concrete evidence says about where it has failed in the past.