docs(changelog): fix typos and improve clarity
djwhitt committed Dec 6, 2024
1 parent b10a8e8 commit 27d3a8b
Showing 1 changed file with 4 additions and 4 deletions.
CHANGELOG.md
@@ -8,24 +8,24 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).

### Added

-- Added a ClickHouse auto-import service. When enabled it calls the Parquet
+- Added a ClickHouse auto-import service. When enabled, it calls the Parquet
export API, imports the exported Parquet into ClickHouse, moves the Parquet
files to an `imported` subdirectory, and deletes data items in SQLite up to
where the Parquet export ended. To use it, run Docker Compose with the
`clickhouse` profile, set the `CLICKHOUSE_URL` to `http://clickhouse:8123`,
and ensure you have set an `ADMIN_KEY`.
-Using this configuration the core service will also combine results from
+Using this configuration, the core service will also combine results from
ClickHouse and SQLite when querying transaction data via GraphQL. Note: if
you have a large number of data items in SQLite, the first export and
-subsequent delete may take an exteneded period. Also, this functionality is
+subsequent delete may take an extended period. Also, this functionality is
considered **experimental**. We expect there are still bugs to be found in it
and we may make breaking changes to the ClickHouse schema in the future. If
you choose to use it in production (not yet recommended), we suggest backing
up copies of the Parquet files found in `data/parquet/imported` so that they
can be reimported if anything goes wrong or future changes require it.
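Putting the settings from this entry together, enabling the service might look like the following sketch. `CLICKHOUSE_URL`, `ADMIN_KEY`, and the `clickhouse` profile come from the entry above; the use of environment exports and this exact `docker compose` invocation are assumptions about the deployment, not a documented procedure.

```shell
# Hedged sketch: enable the ClickHouse auto-import service.
# Variable names are from the changelog entry; the invocation may differ per setup.
export CLICKHOUSE_URL=http://clickhouse:8123
export ADMIN_KEY=change-me   # placeholder; use a strong secret
docker compose --profile clickhouse up -d
```

If the import goes wrong, the Parquet files under `data/parquet/imported` are the recovery point, per the note above.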
- Added a background data verification process that will attempt to recompute
data roots for bundles and compare them to data roots indexed from Arweave
-nodes. When the data roots match, all descendent data items will be marked as
+nodes. When the data roots match, all descendant data items will be marked as
verified. This enables verification of data initially retrieved from sources,
like other gateways, that serve contiguous data instead of verifiable chunks.
Data verification can be enabled by setting the
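The verification flow described in the second entry (recompute a root over contiguous data and compare it to the root indexed from Arweave nodes) can be illustrated with a generic Merkle-root sketch. This is a simplified illustration only, not Arweave's actual chunking or hashing scheme; `merkle_root`, `verify_data`, and the tiny chunk size are hypothetical names and parameters.

```python
import hashlib

def merkle_root(chunks: list[bytes]) -> bytes:
    """Fold leaf hashes pairwise into a single root (generic sketch)."""
    level = [hashlib.sha256(c).digest() for c in chunks]
    if not level:
        return hashlib.sha256(b"").digest()
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last hash on odd-sized levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def verify_data(data: bytes, expected_root: bytes, chunk_size: int = 4) -> bool:
    """Re-chunk contiguous data, recompute the root, and compare to the indexed one."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return merkle_root(chunks) == expected_root
```

When the recomputed root matches the indexed one, everything derived from that data can be treated as verified, which is the idea behind marking descendant data items verified in bulk.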
