12.5. How This Report Stays Alive: Accountability Workflow (Not Roadmap)
12.5.1 Purpose and scope
This report was designed as a diagnostic and measurement instrument, not a recovery roadmap. Its value persists only if it remains falsifiable: claims can be challenged, corrected, and re-verified by third parties using evidence.
Accordingly, this section defines an accountability workflow—a lightweight operating model that keeps the report and the Truth Dashboard usable as an investor tool without requiring (a) a validator-authored roadmap, or (b) any assumption that the report author will continue updating the report or the dashboard after completion.
What this workflow is:
A public QA loop for claims, sources, and metrics.
A method to convert disputes into evidence packets and dispositions (accepted / corrected / rejected / unresolved).
What this workflow is not:
A substitute for the A/B accountability fork in 12.2.
A commitment to “continuous maintenance.”
A roadmap.
12.5.2 What “stays alive” means in operational terms
A report “stays alive” when three conditions remain true, even if no one actively expands it:
Claims remain falsifiable.
A third party can test whether a claim is true using the cited sources and the stated method. This is why the report uses an explicit evidence hierarchy and requires claim labeling (Measured / Documented / Reported / Inferred).
Key metrics remain recomputable.
The Truth Dashboard is framed as a data layer created during the research/writing process, and the report explicitly favors time-series exports and defined metric dictionaries over screenshots and anecdotes.
Accountability has a public surface.
Terra Classic’s governance participation is structurally thin (e.g., voter wallets as a tiny fraction of active wallets), which increases narrative drift risk. Therefore, the system must have a public place where issues are logged, triaged, and resolved (or explicitly left unresolved).
Practical implication for investors:
If the workflow exists, investors can detect whether Terra Classic is improving or simply cycling narratives—without relying on personalities.
12.5.3 System overview: the accountability loop
This workflow is built as a simple loop:
Evidence → Claim → Challenge → Triage → Disposition → Record
Evidence: dashboards, forum threads, governance records, repo artifacts, exchange notices, dated snapshots.
Claims: statements made in the report or implied by visuals.
Challenges: submissions that assert a claim is wrong, incomplete, misleading, or missing context.
Triage: sorting by severity and required effort.
Disposition: accepted correction, clarification, rejection, or unresolved (with reason).
Record: an errata entry / version note that preserves the chain of custody.
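The loop can be sketched as a single record that accumulates state as it moves through the stages; the class and field names below are illustrative, not part of the report's specification.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ChallengeRecord:
    """One pass through Evidence → Claim → Challenge → Triage → Disposition → Record."""
    evidence: List[str]                 # dated sources: dashboards, threads, snapshots
    claim_ref: str                      # chapter/section of the contested claim
    challenge: str                      # why the claim is wrong, incomplete, or misleading
    severity: Optional[str] = None      # assigned at triage (S1–S4)
    disposition: Optional[str] = None   # one of the allowed outcomes in 12.5.6
    errata_note: Optional[str] = None   # versioned record preserving the chain of custody
```

A record enters the loop with only evidence, claim reference, and challenge populated; triage and disposition fill the remaining fields.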
12.5.4 Inputs: what can be submitted into the workflow
To keep this system investor-grade (not social-driven), inputs are restricted to categories that can be adjudicated with evidence:
12.5.4.1 Correction request (hard error)
Examples:
wrong number, wrong date, wrong proposal outcome, wrong ownership attribution (with proof).
Minimum submission packet
Claim reference (chapter/section)
Proposed correction text
Source proof (link/screenshot/PDF excerpt)
Timestamp window (when the proof was true)
12.5.4.2 Missing evidence / missing context
Examples:
the claim may be directionally correct, but a key constraint, counterexample, or confounder is missing.
Minimum submission packet
Claim reference
What’s missing + why it changes interpretation
Evidence source(s)
12.5.4.3 Dispute (interpretation conflict)
Examples:
disagreement with inference or framing, not with raw data.
Minimum submission packet
Identify which part is inference (not measurement)
Provide alternative interpretation
Provide new evidence (if any)
This aligns with the report’s explicit separation of “what we can prove vs infer.”
12.5.4.4 Data source / metric definition request
Examples:
request for metric definition, formula, or export.
Minimum submission packet
Metric name
Why it matters for an investor decision
Proposed source if known
The report’s KPI governance approach is based on explicit metric definitions and reproducibility.
12.5.4.5 “Improvement proposal” (system change request)
This is not a roadmap request. It is a request to improve:
measurement surfaces,
disclosure standards,
proof artifacts,
governance transparency.
(Example: “publish a dated control-plane registry”, “publish an execution register”, “add export endpoints.”)
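Assuming submissions are collected as plain records, the minimum packets above can be expressed as a required-field table plus a completeness gate; the category keys and field names are illustrative, not a prescribed schema.

```python
# Minimum submission packets per input category (12.5.4.1–12.5.4.5).
PACKET_FIELDS = {
    "correction":      ["claim_ref", "proposed_correction", "source_proof", "timestamp_window"],
    "missing_context": ["claim_ref", "whats_missing", "evidence_sources"],
    "dispute":         ["inference_part", "alternative_interpretation"],
    "metric_request":  ["metric_name", "investor_relevance"],
    "improvement":     ["target_surface", "proposed_change"],
}

def is_complete(category: str, packet: dict) -> bool:
    """A packet is submittable only if every required field is present and non-empty."""
    required = PACKET_FIELDS.get(category, [])
    return bool(required) and all(packet.get(f) for f in required)
```

A correction request missing its source proof, for example, fails the gate before it ever reaches triage.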
12.5.5 Triage rules: how items get prioritized without politics
Because Terra Classic discourse can be high-volume, the workflow must be able to reject noise without censorship.
Use a simple triage grid grounded in the report’s own prioritization logic (impact, feasibility, risk reduction, time-to-effect).
Severity levels
S1 — Critical (investor risk / integrity)
Numeric errors in headline KPIs
Ownership/control-plane errors
Security assurance misstatements
Governance capture misrepresentation
Market access claim errors (delisting/suspension)
S2 — Material (changes interpretation)
Missing constraints that change the direction of conclusions
Major omissions in method that affect comparability
S3 — Minor (clarity / completeness)
Typos, phrasing, readability
Additional examples that do not change conclusions
S4 — Non-actionable (insufficient evidence)
Pure opinion without sources
Unverifiable assertions
Duplicate items already resolved
Triage acceptance criteria (“what counts”)
An item is triage-eligible only if it:
references a specific claim or metric,
provides evidence or a testable path to evidence,
and is time-bounded (states the window it pertains to).
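The eligibility gate above can be sketched as a single predicate; the dictionary keys are illustrative.

```python
def triage_eligible(item: dict) -> bool:
    """Eligibility gate from 12.5.5: a specific claim reference, an evidence
    path, and a time window must all be present before severity is assigned."""
    return bool(
        item.get("claim_ref")        # references a specific claim or metric
        and item.get("evidence")     # evidence, or a testable path to evidence
        and item.get("window")       # the time window the item pertains to
    )
```

Anything failing this predicate is filed as S4 (non-actionable) without further debate.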
12.5.6 Dispositions: the only allowed outcomes
To preserve integrity, every triaged item must end in one of these dispositions:
Accepted — corrected
The claim was wrong; the correction is applied and logged.
Accepted — clarified
The claim was technically true but missing key context; add clarification and log.
Rejected — insufficient evidence
The submission fails the evidence threshold; log the rejection reason.
Rejected — not in scope
The request is a roadmap demand, a political demand, or a non-falsifiable opinion; log and close.
Deferred — requires primary source
The claim might be wrong but cannot be resolved without a missing primary source; record as an open gap.
Split decision
The numeric part is corrected while the interpretation is retained (or vice versa). This keeps the “evidence vs. inference” discipline intact.
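Because the outcome set is closed, it is natural to model it as an enumeration; the identifier names below are illustrative.

```python
from enum import Enum

class Disposition(Enum):
    """The only allowed outcomes for a triaged item (12.5.6)."""
    ACCEPTED_CORRECTED = "accepted-corrected"
    ACCEPTED_CLARIFIED = "accepted-clarified"
    REJECTED_INSUFFICIENT_EVIDENCE = "rejected-insufficient-evidence"
    REJECTED_NOT_IN_SCOPE = "rejected-not-in-scope"
    DEFERRED_PRIMARY_SOURCE = "deferred-requires-primary-source"
    SPLIT_DECISION = "split-decision"
```

Constraining the record to these six values prevents ad-hoc outcomes ("under discussion", "noted") from eroding the discipline.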
12.5.7 Versioning without “rewriting history”
This report must remain credible as an archival document. Corrections must not silently rewrite past claims.
Minimum versioning rules
Version tags (e.g., v1.0, v1.1) for material changes.
Errata log for S1/S2 issues: what changed, why, and the proof reference.
Dated snapshots: where data is time-windowed, corrections must preserve the original snapshot window and explain deltas.
This aligns with the report spec’s explicit “versioning + corrections policy” and its “no ongoing update commitment” boundary.
Important: This workflow does not require ongoing updates. It defines how updates would be handled if corrections are made—so that the document remains defensible.
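The versioning rules imply an append-only errata record; a minimal sketch, with illustrative field names, uses an immutable entry so a logged correction can never be silently edited.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: entries are append-only, never edited in place
class ErrataEntry:
    """One errata-log line per 12.5.7: what changed, why, and the proof reference."""
    version: str          # version tag of the correction, e.g. "v1.1"
    claim_ref: str        # what changed (chapter/section)
    reason: str           # why it changed
    proof_ref: str        # evidence reference backing the correction
    original_window: str  # the original snapshot window, preserved
    delta_note: str       # explanation of the delta against the original
```

Attempting to mutate an entry raises an error, which mirrors the "no rewriting history" rule.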
12.5.8 Truth Dashboard: how it functions in this workflow
The Truth Dashboard is best treated as:
a snapshot evidence layer (captured during research/writing), and
a verification interface for rechecking key claims and time-series.
It should be referenced with:
the source (where the data came from),
the window (dates / time range),
and the definition (metric dictionary).
This mirrors the dashboard’s own framing as research-derived data and the report’s methodological emphasis on transparent sourcing.
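The three required components of a dashboard reference can be enforced with a small formatter; the function name and output format are illustrative.

```python
def dashboard_citation(source: str, window: str, definition: str) -> str:
    """Render a Truth Dashboard reference with all three required components:
    where the data came from, the time range, and the metric-dictionary entry."""
    return f"{source} | window: {window} | definition: {definition}"
```

Requiring all three arguments makes an undated or undefined metric citation impossible to construct.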
12.5.9 Practical investor workflow (monthly, 30 minutes)
This is the simplest “investor operational loop” that uses Chapter 12 without requiring a roadmap:
Recompute the KPI stack (12.4) for the last 30/90 days
Look for directionality (trend), not single spikes.
Check governance execution surfaces
did funded initiatives publish proof-of-delivery?
are execution registers updated?
Check integrity surfaces (12.3)
control-plane registry changed?
security assurance delta changed?
any market-access incidents?
Review open S1/S2 items in the issue workflow
unresolved S1/S2 items should be treated as inflating risk.
Update confidence tier
Upgrade confidence only when adjacent layers improve together (e.g., governance execution + demand).
Downgrade when integrity layers regress (control-plane opacity, missing proofs, repeated unverified claims).
This approach is specifically designed for Terra Classic’s reality: low broad participation and concentrated responsibility.
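The tier-update rule in the last step of the monthly loop can be sketched as a pure function; the 0–3 tier scale and layer names are assumptions for illustration, not defined by the report.

```python
def update_confidence(tier: int, improving_layers: set, integrity_regressed: bool) -> int:
    """Monthly confidence update per 12.5.9, on an assumed 0–3 scale:
    downgrade on any integrity regression; upgrade only when adjacent
    layers (here: governance execution and demand) improve together."""
    if integrity_regressed:
        return max(0, tier - 1)
    if {"governance_execution", "demand"} <= improving_layers:
        return min(3, tier + 1)
    return tier
```

Note the asymmetry: a single regressing integrity layer forces a downgrade, while an upgrade requires joint improvement, which matches the report's bias toward proof over narrative.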
12.5.10 Integrity safeguards: preventing “process capture”
Any public workflow can be gamed. To reduce capture and narrative manipulation:
Evidence threshold enforcement: no evidence → no escalation.
Single source of truth for definitions: metric dictionary controls meaning drift.
Windowed claims only: all claims are time-bounded.
No anonymous “official” status: if Terra Classic has no authorized spokesperson, the workflow must not imply one (see 12.3 authority gap).
Preserve dissent: “rejected” items stay visible with reasons (prevents quiet censorship accusations).
12.5.11 Key takeaways
This report stays alive when it remains falsifiable. The accountability workflow is a proof loop, not a roadmap.
Terra Classic’s structural governance realities make a public QA loop necessary. Thin participation amplifies narrative drift; investors need proof surfaces.
The workflow is tool-agnostic and does not require ongoing author involvement. It defines standards and dispositions, not a promise of continuous updates.
Truth Dashboard + metric dictionary + errata log = credibility substrate. If these are present, claims can be tested; if not, “recovery” remains non-verifiable.
If validators want credibility, they should embrace verifiability. If they reject proof surfaces, investors should treat Terra Classic as structurally fragile regardless of narrative.