Avalanche Economic Data Snapshots
This directory contains timestamped data snapshots that serve as the single source of truth for Avalanche network metrics used throughout the economic modeling project.
Purpose
The snapshot system addresses the problem of data inconsistency across multiple documents by:
- Centralizing all network metrics in timestamped files
- Providing clear source citations for each data point
- Documenting data quality and known gaps
- Enabling temporal analysis of network evolution
Structure
```
content/data/
├── README.md              # This file
├── snapshot-YYYY-MM-DD.md # Individual timestamped snapshots
└── snapshot-latest.md     # Symlink to most recent snapshot (optional)
```
Using Snapshots
For Reading Data
When you need Avalanche network metrics for your work:
- Find the most recent snapshot matching your analysis period
- Check the data freshness table to understand recency of each metric
- Review the data quality assessment to understand confidence levels
- Cite the snapshot in your document using this format:

```
Staking ratio: 40.7% [Source: snapshot-2025-11-28.md]
```

For Creating Documents
When creating new analysis documents, economic models, or research:
- Always reference a specific snapshot rather than copying data
- Link to the snapshot file so readers can access full context
- Note which metrics are critical to your analysis
- Document assumptions made when data has gaps
Example References
```
According to the [November 2025 snapshot](../data/snapshot-2025-11-28.md),
the network has ~855 Primary Network validators staking 189M AVAX
(40.7% of circulating supply).
```

Creating New Snapshots
When to Create
Create a new snapshot when:
- Weekly routine: Every 7 days for regular tracking
- Major events: Network upgrades, significant metric changes
- Before modeling: Start of new economic analysis work
- Data updates: When critical metrics have materially changed
How to Create
1. Prepare Data Collection
Gather data from these primary categories (specific providers are listed under "Manual Collection Required" below):
- Market data (price, market cap)
- Network metrics (validators, staking)
- Burn data
- L1 ecosystem
- Protocol updates
2. Use the Template
Copy the most recent snapshot and update all sections:
```shell
# Copy previous snapshot
cp snapshot-2025-11-28.md snapshot-$(date +%Y-%m-%d).md

# Edit the new file, updating:
# - Snapshot date in header
# - All metric values
# - Source links and dates
# - Data freshness table
# - Any new upgrades or developments
```

3. Required Sections
Every snapshot must include:
- Token Supply Metrics
  - Max supply, total supply, circulating supply, staked supply
  - Percentages and calculations
- Staking Metrics
  - Validator count, delegator count, total staked, staking ratio, APR
- L1 Ecosystem
  - Active L1 count, categories, notable projects
  - Validator participation in L1s
- Fee Burning Metrics
  - Total burned, daily burn rate, annual burn rate
  - Recent burn activity trends
- Inflation & Emission
  - Gross inflation, burn offset, net inflation
  - Daily emission, years to max supply
- Network Activity
  - Daily transactions, active addresses, TVL
  - Quarter-over-quarter growth
- Recent Upgrades
  - New upgrades since last snapshot
  - Activation dates and key changes
- Data Quality Assessment
  - Confidence levels for each category
  - Known gaps and limitations
  - Data freshness by category
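A simple structural check can catch an accidentally omitted section before a snapshot is committed. This sketch matches section titles by substring; the titles come from the list above, and the function name is illustrative:

```python
# Sketch: verify a snapshot's text mentions all eight required sections.
# Titles are taken from this README; substring matching is a deliberate
# simplification (it ignores heading levels).
REQUIRED_SECTIONS = [
    "Token Supply Metrics",
    "Staking Metrics",
    "L1 Ecosystem",
    "Fee Burning Metrics",
    "Inflation & Emission",
    "Network Activity",
    "Recent Upgrades",
    "Data Quality Assessment",
]

def missing_sections(snapshot_text: str) -> list[str]:
    """Return the required section titles not found in the snapshot text."""
    return [s for s in REQUIRED_SECTIONS if s not in snapshot_text]
```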
4. Data Quality Guidelines
Mark confidence levels:
- High Confidence: Protocol-defined, exchange-reported, blockchain-verified
- Moderate Confidence: Third-party analytics, varying sources
- Low Confidence: Estimates, incomplete data, API gaps
Document discrepancies: When sources conflict, document:
- The range of values reported
- Which source you selected and why
- The resolution methodology used
Note data freshness:
- Fresh: < 6 hours old
- Recent: < 7 days old
- Aging: 7-30 days old
- Stale: > 30 days old
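These thresholds translate directly into a classifier, which could back an automated freshness table. A minimal sketch of the rule above (function name is illustrative):

```python
# Classify a data point's age using the thresholds above:
# Fresh < 6h, Recent < 7d, Aging 7-30d, Stale > 30d.
from datetime import datetime

def freshness(data_time: datetime, now: datetime) -> str:
    hours = (now - data_time).total_seconds() / 3600
    if hours < 6:
        return "Fresh"
    if hours < 7 * 24:
        return "Recent"
    if hours <= 30 * 24:
        return "Aging"
    return "Stale"
```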
5. Calculation Standards
Use these standard formulas:
```
Staking Ratio   = (Staked Supply / Circulating Supply) × 100
Net Inflation   = Gross Inflation % - Burn Rate %
Annual Burn %   = (Daily Burn × 365 / Total Supply) × 100
% of Max Issued = (Total Supply / 720,000,000) × 100
Daily Emission  = (Total Staked × APR) / 365
```
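Expressed as plain functions, the standard formulas are easy to reuse and to unit-test; the 720M constant is the max supply stated in the "% of Max Issued" formula, and the APR convention (decimal fraction) is an explicit assumption here:

```python
# The five standard formulas above as functions. All percentages are
# returned as numbers like 40.7 (not 0.407); apr is a decimal fraction.
MAX_SUPPLY = 720_000_000  # AVAX max supply, per the formula above

def staking_ratio(staked: float, circulating: float) -> float:
    return staked / circulating * 100

def net_inflation(gross_inflation_pct: float, burn_rate_pct: float) -> float:
    return gross_inflation_pct - burn_rate_pct

def annual_burn_pct(daily_burn: float, total_supply: float) -> float:
    return daily_burn * 365 / total_supply * 100

def pct_of_max_issued(total_supply: float) -> float:
    return total_supply / MAX_SUPPLY * 100

def daily_emission(total_staked: float, apr: float) -> float:
    return total_staked * apr / 365
```

For example, 189M AVAX staked against roughly 464.4M circulating reproduces the ~40.7% staking ratio cited earlier in this README.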
6. Source Citation
Every data point must have:
- Source: Organization or platform
- Link: Direct URL to data (in Sources section)
- Date: When the data was current
Use this format in tables:
| Metric | Value | Source | Date |
|--------|-------|--------|------|
| Total Validators | 1,539 | Avascan | Nov 2025 |

Validation Checklist
Before committing a new snapshot, verify:
- All tables have Source and Date columns filled
- All calculated values have formulas documented
- Data quality assessment is complete
- Discrepancies are documented with resolution
- Links in Sources section are functional
- Snapshot date in header is correct
- Previous snapshot data is preserved (don’t modify old snapshots)
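The first checklist item (Source and Date columns filled) lends itself to a quick automated check. This sketch assumes the four-column `| Metric | Value | Source | Date |` layout shown above; the function name and parsing approach are illustrative:

```python
# Sketch: flag metric-table rows whose Source or Date cell is empty,
# assuming the four-column layout from this README. Header and
# separator rows are skipped.
def rows_missing_source_or_date(markdown: str) -> list[str]:
    bad = []
    for line in markdown.splitlines():
        if not line.strip().startswith("|"):
            continue
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # Skip non-4-column rows, the header row, and |---| separators.
        if len(cells) != 4 or cells[0] == "Metric" or set(cells[0]) <= {"-"}:
            continue
        metric, value, source, date = cells
        if not source or not date:
            bad.append(metric)
    return bad
```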
Documents Referencing This Data
The following project documents should reference data snapshots rather than embedding static data:
Economic Models
/home/ygg/Workspace/BCRG/Avalanche/avalanche-research/resources/avalanche/avalanche_economic_model.md
- References: Token supply, staking metrics, burn rates, inflation
Literature Reviews & Analysis
/home/ygg/Workspace/BCRG/Avalanche/avalanche-research/resources/avalanche-data/CLAUDE.md
- References: Network growth, validator counts, foundation metrics
Future Documents
When creating new analysis or modeling documents:
- Link to the relevant snapshot(s)
- Document which snapshot date you used
- Note if you made interpolations or adjustments
- Update references when using newer snapshots
Data Collection System
Automated Collection (Snowpeer)
The project includes a Snowpeer API data collection system at:
/home/ygg/Workspace/BCRG/Avalanche/avalanche-research/resources/avalanche-data/resources/snowpeer/
Capabilities:
- Automated collection from Snowpeer API endpoints
- Local SQLite database storage
- Scheduled updates with configurable frequencies
- Data analysis and reporting tools
Limitations:
- Validator endpoint returns empty (API gap)
- Delegator endpoint returns empty (API gap)
- No ICM messaging data available
- No LSP data available via API
See /resources/avalanche-data/resources/snowpeer/DATA_GAPS.md for details.
Manual Collection Required
Due to Snowpeer API limitations, these must be collected manually:
- Current validator counts → Use Avascan
- Delegator statistics → Use Avalanche Stats Dashboard
- Real-time staking metrics → Use multiple sources, take consensus
- Market data (price, market cap) → Use CoinGecko/CoinMarketCap
- TVL → Use DefiLlama
- Burn statistics → Use burnedavax.com
Future Automation
Consider implementing:
- Direct RPC queries to Avalanche nodes for validator/delegation data
- Custom blockchain indexer for precise on-chain metrics
- API aggregation combining multiple data sources
- Automated snapshot generation via scheduled scripts
- Data validation comparing multiple sources automatically
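The last item, automated cross-source validation, can start very small: accept a metric when several sources agree within a tolerance, and flag it for manual resolution otherwise. The 2% tolerance and source names below are illustrative assumptions:

```python
# Sketch of automated cross-source validation: True when all sources
# agree within a relative tolerance, False when the spread is too wide
# and the conflict should be documented per this README's guidelines.
def sources_agree(values_by_source: dict[str, float],
                  tolerance: float = 0.02) -> bool:
    values = list(values_by_source.values())
    lo, hi = min(values), max(values)
    if hi == 0:
        return lo == 0
    return (hi - lo) / hi <= tolerance
```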
Snapshot Comparison
To track network evolution, compare metrics across snapshots:
```shell
# Example: Compare staking ratios over time
grep "Staking Ratio" snapshot-2025-11-*.md

# Example: Track validator growth
grep "Total Validators" snapshot-2025-*.md | sort
```

Consider creating analysis scripts to:
- Plot metric evolution over time
- Detect significant changes or anomalies
- Generate trend reports
- Validate data consistency
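As a starting point for such scripts, the grep examples above can be generalized to extract one metric across all snapshots in a directory. This sketch assumes the metric appears in a table row like `| Staking Ratio | 40.7% | ... |` (the layout this README prescribes); the function name is illustrative:

```python
# Sketch: map snapshot date -> first value found for `metric` in each
# snapshot-YYYY-MM-DD.md file, assuming the metric sits in a table row
# such as "| Staking Ratio | 40.7% | Avascan | Nov 2025 |".
import re
from pathlib import Path

def metric_series(data_dir: str, metric: str) -> dict[str, str]:
    series = {}
    pattern = re.compile(r"\|\s*" + re.escape(metric) + r"\s*\|\s*([^|]+?)\s*\|")
    for path in sorted(Path(data_dir).glob("snapshot-????-??-??.md")):
        m = pattern.search(path.read_text())
        if m:
            series[path.stem.removeprefix("snapshot-")] = m.group(1)
    return series
```

The resulting date-keyed mapping feeds directly into plotting or anomaly detection.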
Best Practices
For Analysts
- Always check data freshness before using metrics in critical analysis
- Understand confidence levels and document uncertainty in your work
- Use conservative estimates when data sources conflict
- Cite specific snapshots rather than “current data”
- Note when you interpolate or make assumptions
For Modelers
- Document which snapshot(s) your model parameters come from
- Track sensitivity to data uncertainty (confidence intervals)
- Validate models against multiple snapshot periods
- Update models when new snapshots reveal material changes
- Archive model runs with snapshot references for reproducibility
For Researchers
- Create new snapshots before major research milestones
- Contribute data sources when you find better/newer sources
- Document methodology for any derived metrics
- Cross-reference with original protocol documentation
- Share findings that reveal data gaps or inconsistencies
Troubleshooting
Data Conflicts
Problem: Different sources report different values
Solution:
- Check date of each source (may be from different times)
- Understand methodology differences (e.g., “active” vs “deployed”)
- Prioritize: Protocol docs > Blockchain explorers > Analytics > News
- Document the conflict and your resolution in the snapshot
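The priority rule above ("Protocol docs > Blockchain explorers > Analytics > News") is mechanical enough to encode as a tie-breaker. Category labels are this README's; the function itself is an illustrative sketch, not project tooling:

```python
# Sketch: pick the value from the highest-priority source category
# present, per the priority order above. Lower index = higher priority.
PRIORITY = ["protocol docs", "blockchain explorer", "analytics", "news"]

def resolve_conflict(values_by_category: dict[str, float]) -> float:
    for category in PRIORITY:
        if category in values_by_category:
            return values_by_category[category]
    raise ValueError("no recognized source category")
```

Even with an automatic tie-breaker, the conflict and the chosen resolution should still be written into the snapshot, as the last bullet requires.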
Missing Data
Problem: Required metric not available
Solution:
- Check if Snowpeer API has added new endpoints
- Query blockchain directly via RPC if possible
- Use proxy metrics or estimates (clearly documented)
- Mark as “Not Available” rather than guessing
- Document the gap in Data Quality Assessment
Stale Data
Problem: Some metrics are weeks/months old
Solution:
- Note staleness in the Data Freshness table
- Check whether the metric changes slowly (e.g., max supply is static)
- Seek alternative sources for fast-moving metrics
- Consider direct blockchain queries for critical data
- Document limitations in your analysis
Contributing
When contributing to the snapshot system:
- Follow the template structure from existing snapshots
- Document your sources with direct links
- Explain calculations so others can verify
- Note data quality honestly (don’t hide gaps)
- Preserve history (don’t modify past snapshots)
Version History
- 2025-11-28: Initial snapshot system created
- Established template structure
- Documented 7 major metric categories
- Identified Snowpeer API data gaps
- Created comprehensive November 2025 baseline
Contact & Support
For questions about the snapshot system:
- Review /resources/avalanche-data/CLAUDE.md for project context
- Check /resources/avalanche-data/resources/snowpeer/README.md for data collection
- Consult /resources/avalanche-data/resources/snowpeer/DATA_GAPS.md for known limitations
Remember: The goal is consistency and traceability, not perfection. Document what you know, acknowledge what you don’t, and always cite your sources.