AWS S3 - Cronos EVM Public Dataset


The Cronos public blockchain dataset is hosted on Amazon S3 and is freely queryable via Athena, ClickHouse, Presto/Trino, DuckDB, or any engine that supports S3-backed Parquet or CSV data.

Dataset Overview

Base S3 path:

s3://aws-public-blockchain/v1.1/cronos/evm/

Public index page: https://aws-public-blockchain.s3.us-east-2.amazonaws.com/index.html#v1.1/cronos/evm/

Region: us-east-2
Update frequency: nightly sync at +2 UTC

Available Tables

| Table | Description |
| --- | --- |
| blocks | One row per block – includes block metadata like hash, miner, and gas usage. |
| transactions | One row per transaction – includes sender, recipient, gas, and value. |
| receipts | One row per transaction receipt – includes status, gas used, and logs count. |
| logs | One row per emitted EVM log – includes address, topics, and data. |
| decoded_events | Human-readable decoded events (topics and data mapped to ABI definitions). |

Common Prerequisites (Step 0)

Before you start, make sure you:

  • Have internet access (dataset is public, no credentials needed; see the quick check below).

  • Use region us-east-2 for AWS-based tools.

  • Have a basic understanding of SQL (optional).
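To confirm anonymous access works, you can run a quick check with the AWS CLI (the CLI itself is optional; --no-sign-request makes the request unsigned):

```bash
# List the top-level table directories anonymously (no AWS account needed):
aws s3 ls --no-sign-request --region us-east-2 \
  s3://aws-public-blockchain/v1.1/cronos/evm/
```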

Option 1: Amazon Athena (Console Method)

Step 0 – Create or Log In to an AWS Account

Sign in at https://aws.amazon.com/.

Step 1 – Open Athena in the us-east-2 Region

Go to https://console.aws.amazon.com/athena/.

Step 2 – Create a Database
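In the Athena query editor, run a one-line DDL statement; the database name cronos below is an arbitrary placeholder:

```sql
CREATE DATABASE cronos;
```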

Step 3 – Create Tables

Create one external table per dataset directory; a sketch for blocks follows the list.

blocks

transactions

receipts

logs

decoded_events
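A minimal sketch for blocks. The column names (block_number, hash, miner, gas_used) are assumptions based on the table descriptions above, so check the Parquet schema for exact names and add the remaining columns you need:

```sql
CREATE EXTERNAL TABLE cronos.blocks (
  block_number BIGINT,
  hash         STRING,
  miner        STRING,
  gas_used     BIGINT
)
PARTITIONED BY (`date` STRING)
STORED AS PARQUET
LOCATION 's3://aws-public-blockchain/v1.1/cronos/evm/blocks/';

-- Load the existing date= partitions into the metastore:
MSCK REPAIR TABLE cronos.blocks;
```

Repeat the same pattern for transactions, receipts, logs, and decoded_events, adjusting the columns and the S3 location.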

Step 4 – Query Examples
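For example, a per-day transaction count, assuming the table definitions sketched above (date is the partition column):

```sql
SELECT "date", COUNT(*) AS tx_count
FROM cronos.transactions
GROUP BY "date"
ORDER BY "date" DESC
LIMIT 7;
```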

Option 2: ClickHouse

Step 0 – Install ClickHouse

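Either the official single-binary installer or a package manager works:

```bash
# Official installer - downloads a self-contained ./clickhouse binary:
curl https://clickhouse.com/ | sh

# Or, on macOS, via Homebrew:
brew install clickhouse
```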

Step 1 – Connect
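For ad-hoc reads of remote files, no server is needed; an interactive clickhouse local session is enough:

```bash
# Interactive session with no server (sufficient for querying remote Parquet):
./clickhouse local

# Or, if you run a ClickHouse server, connect with the client instead:
./clickhouse client
```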

Step 2 – Query Public Parquet Data
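A sketch using the s3 table function; the NOSIGN keyword (supported in recent ClickHouse versions) sends anonymous requests:

```sql
-- Count all blocks across the daily partitions:
SELECT count()
FROM s3(
    'https://aws-public-blockchain.s3.us-east-2.amazonaws.com/v1.1/cronos/evm/blocks/date=*/*.parquet',
    NOSIGN,
    'Parquet'
);
```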

Repeat similarly for other tables by adjusting the path:

  • /transactions/date=*/*.parquet

  • /receipts/date=*/*.parquet

  • /logs/date=*/*.parquet

  • /decoded_events/date=*/*.parquet

No credentials or setup required.

Option 3: Presto / Trino

Step 0 – Install

Follow: https://trino.io/docs/current/installation.html

Step 1 – Configure Hive Connector

Edit etc/catalog/hive.properties:
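A minimal sketch that avoids running a separate Hive metastore by using Trino's file-based metastore; the exact filesystem properties vary across Trino versions (fs.native-s3.enabled and s3.region apply to recent releases):

```properties
connector.name=hive

# File-based metastore, so no external Hive metastore service is required:
hive.metastore=file
hive.metastore.catalog.dir=file:///tmp/trino-metastore

# Native S3 filesystem; the bucket lives in us-east-2:
fs.native-s3.enabled=true
s3.region=us-east-2
```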

Step 2 – Start Trino
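From the installation directory created in Step 0:

```bash
bin/launcher run    # run in the foreground, logging to the console
bin/launcher start  # or run as a background daemon
```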

Step 3 – Create Schema & Tables

Create a schema, then one table per dataset directory; a sketch for blocks follows the list.

blocks

transactions

receipts

logs

decoded_events
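A sketch for the schema and the blocks table. As with Athena, the column names are assumptions, and the partition column must be listed last:

```sql
CREATE SCHEMA hive.cronos;

CREATE TABLE hive.cronos.blocks (
  block_number BIGINT,
  hash         VARCHAR,
  miner        VARCHAR,
  gas_used     BIGINT,
  "date"       VARCHAR
)
WITH (
  external_location = 's3://aws-public-blockchain/v1.1/cronos/evm/blocks/',
  format            = 'PARQUET',
  partitioned_by    = ARRAY['date']
);

-- Discover the existing date= partitions:
CALL hive.system.sync_partition_metadata('cronos', 'blocks', 'FULL');
```

Repeat for the other four tables, changing the columns and the external_location.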

Step 4 – Query
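For example, assuming the blocks definition sketched above:

```sql
-- Ten most recent blocks:
SELECT block_number, hash, miner
FROM hive.cronos.blocks
ORDER BY block_number DESC
LIMIT 10;
```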

Option 4: DuckDB (Local)

Step 0 – Install
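Either the Python package or the standalone CLI works:

```bash
pip install duckdb    # Python package (includes the embedded engine)
brew install duckdb   # or the standalone CLI on macOS
```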

Step 1 – Run Query (CLI)
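A sketch for the duckdb CLI. The httpfs extension provides s3:// support, and since the bucket is public no credentials are configured; narrow the date=* glob to a single day to limit the scan:

```sql
INSTALL httpfs;
LOAD httpfs;
SET s3_region = 'us-east-2';

-- Count blocks across all daily partitions:
SELECT count(*)
FROM read_parquet('s3://aws-public-blockchain/v1.1/cronos/evm/blocks/date=*/*.parquet');
```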

Step 2 – Python Example
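The same query from Python, again with no credentials:

```python
import duckdb

con = duckdb.connect()  # in-memory database
con.execute("INSTALL httpfs;")
con.execute("LOAD httpfs;")
con.execute("SET s3_region = 'us-east-2';")

# The bucket is public, so anonymous reads work.
count = con.execute("""
    SELECT count(*)
    FROM read_parquet(
        's3://aws-public-blockchain/v1.1/cronos/evm/transactions/date=*/*.parquet'
    )
""").fetchone()[0]
print(count)
```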

Works out of the box – no AWS setup needed.

Option 5: Other Tools

| Tool | Connection Example |
| --- | --- |
| Spark | spark.read.parquet("s3a://aws-public-blockchain/v1.1/cronos/evm/transactions/date=*/") |
| Dask / Polars | dd.read_parquet("https://.../*.parquet") or pl.scan_parquet() |
| AWS Glue | Create a crawler with s3://aws-public-blockchain/v1.1/cronos/evm/ |
| Redshift Spectrum | Create an external schema and map Parquet tables to S3 paths |

Summary

| Engine | Auth Needed | Works With | Notes |
| --- | --- | --- | --- |
| Athena | No | AWS Console | Easiest for browser users |
| ClickHouse | No | Local or Cloud | Fast columnar queries |
| Presto / Trino | No | Cluster setup | Integrates with Hive |
| DuckDB | No | Local, Python | Lightweight and portable |

Data Model – Cronos Public Dataset

The Cronos dataset consists of five interrelated tables: blocks, transactions, receipts, logs, and decoded_events.

Each layer represents a deeper level of blockchain execution – from the block level down to decoded smart contract events.

Relationship Diagram (Text-Based)
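Sketched from the key relationships listed below (└─< marks a one-to-many link):

```
blocks
  └─< transactions             (block_number)
        ├── receipts           (1:1, transaction_hash)
        └─< logs               (1:N, transaction_hash)
              └── decoded_events  (1:1, transaction_hash + log_index)
```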

Key Relationships

| Parent | Child | Join Columns | Relationship |
| --- | --- | --- | --- |
| blocks | transactions | block_number | One block has many transactions |
| transactions | receipts | transaction_hash | One-to-one |
| transactions | logs | transaction_hash | One-to-many |
| logs | decoded_events | transaction_hash, log_index | One-to-one |
| blocks | receipts / logs / decoded_events | block_number | Cross-layer link for time context |
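Joining down the full chain then looks like this (engine-agnostic SQL; only the join columns listed above are assumed):

```sql
-- Decoded events with their transaction and block context:
SELECT
  b.block_number,
  t.transaction_hash,
  d.log_index
FROM blocks b
JOIN transactions   t ON t.block_number = b.block_number
JOIN logs           l ON l.transaction_hash = t.transaction_hash
JOIN decoded_events d ON d.transaction_hash = l.transaction_hash
                     AND d.log_index = l.log_index;
```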
