Releases: streamingfast/substreams
v1.15.1
v1.15.0
Server
- Save deterministic failures in WASM in the module cache (under a file named `errors.0123456789.zst` at the failed block number), so further requests depending on this module at the same block can return the error immediately without re-executing the module.
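As a rough illustration of that lookup path, here is a minimal Go sketch; the directory layout, file-name padding, and helper names are assumptions for illustration only, not the actual server code.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

// errorCachePath builds a hypothetical cache file name for a deterministic
// failure at a given block, e.g. "errors.0123456789.zst" (block number
// zero-padded to 10 digits in this sketch).
func errorCachePath(cacheDir, moduleHash string, blockNum uint64) string {
	return filepath.Join(cacheDir, moduleHash, fmt.Sprintf("errors.%010d.zst", blockNum))
}

// cachedFailure returns a previously recorded deterministic error for the
// module at that block, if any, so the caller can fail fast without
// re-executing the WASM module.
func cachedFailure(cacheDir, moduleHash string, blockNum uint64) (error, bool) {
	data, err := os.ReadFile(errorCachePath(cacheDir, moduleHash, blockNum))
	if err != nil {
		return nil, false // no cached failure: execute the module normally
	}
	// The real files are zstd-compressed; this sketch treats the content as plain text.
	return errors.New(string(data)), true
}

func main() {
	if err, ok := cachedFailure("/tmp/substreams-cache", "abc123", 123456789); ok {
		fmt.Println("returning cached deterministic failure:", err)
		return
	}
	fmt.Println("no cached failure, executing module")
}
```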
CLI
- `substreams init`: add Stellar to the list of supported grouped chains (this will require everyone to upgrade the CLI version to use codegen)
- `substreams init`: create the project in a new directory, not in the user's current directory.
- `substreams init`: new Protobuf field to enforce versions with the codegen.
v1.14.6
Server
- Tier2 now returns GRPC error codes: `DeadlineExceeded` when it times out, and `ResourceExhausted` when a request is rejected due to overload
- Tier1 now correctly reports tier2 job outcomes in the `substreams request stats` log
- Added jitter in the "retry" logic to prevent all workers from retrying at the same time when tier2s are overloaded
- Fix panic on tier2 when hitting a timeout for requests running from pre-cached module outputs
- Add environment variables to control retry behavior: `SUBSTREAMS_WORKER_MAX_RETRIES` (default 10) and `SUBSTREAMS_WORKER_MAX_TIMEOUT_RETRIES` (default 2), changing from the previous defaults (720 and 3).
  The `worker_max_timeout_retries` value is the number of retries applied specifically to block executions timing out (ex: because of external calls). See the sketch after this list.
- The mechanism that slows down processing segments "ahead of blocks being sent to the user" has been disabled on "noop-mode" requests, since these requests are used to pre-cache data and should not be slowed down.
- The "number of segments ahead" in this mechanism has been increased from `<number of parallel workers>` to `<number of parallel workers> * 1.5`
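The retry settings above can be pictured with a small Go sketch; only the two environment variable names and their defaults come from this release, while the surrounding retry loop and jitter math are assumptions for illustration.

```go
package main

import (
	"fmt"
	"math/rand"
	"os"
	"strconv"
	"time"
)

// envInt reads an integer environment variable, falling back to a default.
func envInt(name string, def int) int {
	if v, err := strconv.Atoi(os.Getenv(name)); err == nil {
		return v
	}
	return def
}

func main() {
	maxRetries := envInt("SUBSTREAMS_WORKER_MAX_RETRIES", 10)               // general retry cap (was 720)
	maxTimeoutRetries := envInt("SUBSTREAMS_WORKER_MAX_TIMEOUT_RETRIES", 2) // cap for block-execution timeouts (was 3)

	for attempt := 1; attempt <= maxRetries; attempt++ {
		// A base delay plus random jitter spreads out retries so that all
		// workers don't hammer an overloaded tier2 at the same moment.
		delay := time.Second + time.Duration(rand.Int63n(int64(time.Second)))
		fmt.Printf("attempt %d/%d (timeout retries capped at %d), waiting %v before retrying\n",
			attempt, maxRetries, maxTimeoutRetries, delay)
		time.Sleep(delay)
		break // placeholder: a real loop would re-issue the tier2 request here
	}
}
```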
v1.14.5
- Bugfix on server: fix panic on requests disconnecting before the resolvedStartBlock is set.
v1.14.4
🚫 DO NOT USE IN SERVER - panics on some requests
Server
- Properly reject requests with a stop-block below the "resolved" StartBlock (caused by module initialBlocks or a chain's firstStreamableBlock). See the sketch after this list.
- Added the `resolved-start-block` field to the `substreams request stats` log
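A minimal Go sketch of the rejection rule described above; the resolution logic and names below are assumptions based on the wording of the note, not the server's actual code.

```go
package main

import "fmt"

// resolveStartBlock mimics how a requested start block is bumped up by a
// module's initialBlock and the chain's firstStreamableBlock (assumed logic).
func resolveStartBlock(requested, moduleInitialBlock, firstStreamableBlock uint64) uint64 {
	resolved := requested
	if moduleInitialBlock > resolved {
		resolved = moduleInitialBlock
	}
	if firstStreamableBlock > resolved {
		resolved = firstStreamableBlock
	}
	return resolved
}

// validateStopBlock rejects a request whose stop block falls below the
// resolved start block (a stop block of 0 means "no stop block" in this sketch).
func validateStopBlock(stopBlock, resolvedStartBlock uint64) error {
	if stopBlock != 0 && stopBlock < resolvedStartBlock {
		return fmt.Errorf("stop block %d is below the resolved start block %d", stopBlock, resolvedStartBlock)
	}
	return nil
}

func main() {
	resolved := resolveStartBlock(100, 5_000, 1) // module initialBlock bumps the start to 5_000
	fmt.Println("resolved start block:", resolved)
	fmt.Println("validation:", validateStopBlock(2_000, resolved)) // rejected: 2_000 < 5_000
}
```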
CLI
- Fix the 'Hint' shown when `--limit-processed-blocks` is too low, which sometimes suggested "0 or 0", and fix some typos
v1.14.3
CLI
- The `substreams gui` flag `--debug-modules-output` has been removed; it had zero effect.
- The `substreams run` flag `--debug-modules-output` now accepts regular expressions, like `substreams run --debug-modules-output=".*"`.
- Fixed `--skip-package-validation` to also skip sub-packages being imported.
- Added a `--limit-processed-blocks` flag to `substreams run` and `substreams gui` to set the `limit_processed_blocks` field in the request.
- The information messages in `substreams run` now print to STDERR instead of STDOUT.
Server
- Added a mechanism for 'production-mode' requests where tier1 will not schedule tier2 jobs more than `{ max_parallel_subrequests }` segments above the current block being streamed to the user.
  This ensures that a user slowly reading blocks 1, 2, 3... will not trigger a flood of tier2 jobs for much higher blocks, say 300_000_000, that might never get read.
- Added a validation on a module for the existence of 'triggering' inputs: the server now fails with a clear error message when the only available inputs are stores used with mode 'get' (not 'deltas'), instead of silently skipping the module on every block.
- Fixed a `runtime error: slice bounds out of range` error on heavy memory usage with the wasmtime engine
- Added information about the number of blocks that need to be processed for a given request in the `sf.substreams.rpc.v2.SessionInit` message
- Added an optional field `limit_processed_blocks` to the `sf.substreams.rpc.v2.Request`. When set to a non-zero value, the server will reject a request that would process more blocks than the given value with the `FailedPrecondition` GRPC error code (see the sketch after this list).
- Improved error messages when a module execution times out on a block (ex: due to a slow external call); this now returns a `DeadlineExceeded` Connect/GRPC error code instead of `Internal`. Removed 'panic' from the wording.
- Improved connection draining on shutdown: the server now waits until the end of the 'shutdown-delay' before draining and refusing new connections, then waits for 'quicksaves' and successful signaling of clients, up to a maximum of 30 seconds.
- In the `substreams request stats` log, added fields: `remote_jobs_completed`, `remote_blocks_processed` and `total_uncompressed_read_bytes`
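To show what the new `limit_processed_blocks` field looks like from a client's perspective, here is a hedged Go sketch; the import path and the generated field name `LimitProcessedBlocks` assume the repository's usual protobuf layout and the standard protoc-gen-go naming, so treat them as assumptions rather than confirmed API.

```go
package main

import (
	"fmt"

	// Assumed import path for the generated sf.substreams.rpc.v2 protobuf code.
	pbsubstreamsrpc "github.com/streamingfast/substreams/pb/sf/substreams/rpc/v2"
)

func main() {
	req := &pbsubstreamsrpc.Request{
		StartBlockNum:  12_000_000,
		StopBlockNum:   12_100_000,
		OutputModule:   "map_events", // hypothetical module name
		ProductionMode: true,
		// New optional field: when non-zero, the server rejects the request with
		// FailedPrecondition if it would process more blocks than this value.
		LimitProcessedBlocks: 500_000,
	}
	fmt.Println("requesting at most", req.LimitProcessedBlocks, "processed blocks")
}
```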
v1.14.2
v1.14.1
- Fix another `cannot resolve 'old cursor' from files in passthrough mode -- not implemented` bug when receiving a request in production-mode with a cursor that is below the "linear handoff" block
v1.14.0
This release brings performance improvements to the substreams engine through the introduction of a new "QuickSave" feature and a switch to `wasmtime` as the default runtime for Rust modules.
Server
- Implement the "QuickSave" feature to save the state of "live running" substreams stores when shutting down, and then resume processing from that point if the cursor matches.
  - Enabled if the `QuickSaveStoreURL` attribute is not empty in the tier1 config
  - Requires the `CheckPendingShutdown` module to be passed to the app via `NewTier1()`
- Rust modules are now executed with `wasmtime` by default instead of `wazero`.
  - Prevents the whole server from stalling on certain memory-intensive operations in wazero.
  - Speed improvement: cuts the execution time in half in some circumstances.
  - Wazero is still used for modules with `wbindgen` and modules compiled with `tinygo`.
  - Set the env var `SUBSTREAMS_WASM_RUNTIME=wazero` to revert to the previous behavior (sketched below).
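A trivial Go sketch of the runtime selection described above (not the actual engine code; only the variable name, its `wazero` value, and the wasmtime default come from this release):

```go
package main

import (
	"fmt"
	"os"
)

// selectWasmRuntime mirrors the documented behavior: wasmtime by default,
// wazero when explicitly requested via the environment variable (and, per the
// notes above, wazero is still used for wbindgen/tinygo modules regardless).
func selectWasmRuntime() string {
	if os.Getenv("SUBSTREAMS_WASM_RUNTIME") == "wazero" {
		return "wazero"
	}
	return "wasmtime"
}

func main() {
	fmt.Println("selected WASM runtime:", selectWasmRuntime())
}
```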
CLI
- Fixed `--skip-package-validation` to also skip sub-packages being imported.
- Trim down packages when using 'imports': only the modules explicitly defined in the YAML manifest and their dependencies will end up in the final spkg.
v1.13.0
Server
Request Pool and Worker Pool
- Added `GlobalRequestPool` to the `Tier1Modules` struct in `app/tier1.go` and integrated it into the `Run` method to enhance request lifecycle management.
  When set, the `GlobalRequestPool` will manage the borrowing, quotas, and keep-alive mechanisms for user requests via requests to a GRPC remote server.
- Added `WorkerPoolFactory` to the `Tier1Modules` struct in `app/tier1.go` and integrated it into the `Run` method to enhance worker lifecycle management.
  When set, the `WorkerPool` will manage the borrowing, quotas, and keep-alive mechanisms for worker subrequests on tier2, via requests to a GRPC remote server.
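A hedged wiring sketch of the two new hooks; only the names `Tier1Modules`, `GlobalRequestPool` and `WorkerPoolFactory` come from the notes above, while the interface shapes and field types below are stand-ins invented for illustration, not the real definitions in `app/tier1.go`.

```go
package main

import "fmt"

// Stand-in shapes, not the real types from app/tier1.go.
type GlobalRequestPool interface {
	// Borrow reserves a slot for a user request and returns a release function.
	Borrow(requestID string) (release func(), err error)
}

type WorkerPoolFactory func(tier2Endpoint string) (workerCount int, err error)

type Tier1Modules struct {
	GlobalRequestPool GlobalRequestPool // borrowing, quotas, keep-alive for user requests
	WorkerPoolFactory WorkerPoolFactory // borrowing, quotas, keep-alive for tier2 worker subrequests
}

// noopPool is a do-nothing pool, useful when no remote GRPC quota server is configured.
type noopPool struct{}

func (noopPool) Borrow(string) (func(), error) { return func() {}, nil }

func main() {
	modules := &Tier1Modules{
		GlobalRequestPool: noopPool{},
		WorkerPoolFactory: func(string) (int, error) { return 4, nil },
	}
	release, _ := modules.GlobalRequestPool.Borrow("request-1")
	defer release()
	fmt.Println("tier1 modules configured with request and worker pools")
}
```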
Performance
- Added a 'shared cache' on tier1: execution of modules near the HEAD of the chain will be done once for a given module hash and the result shared between requests.
  This will reduce CPU usage and increase performance when many requests are using the same modules (ex: foundational modules)
- Improved "time to first block" when a lot of cached files exist on dependency substreams modules, by skipping reads of segments that won't be used and assuming store "full KVs" are always filled sequentially (since they are!)
- Limit parallel execution of a stage's layer.
  Previously, the engine executed all modules in a stage's layer in parallel. This behavior has changed: development mode now executes them sequentially, and production mode limits parallelism to 2 (hard-coded) for now.
  The auth plugin can control that value dynamically by providing the trusted header `X-Sf-Substreams-Stage-Layer-Parallel-Executor-Max-Count` (see the sketch after this list).
- Fixed a regression since "v1.12.2" where the SkipEmptyOutput instruction was ignored in substreams mappers
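As a sketch of how the trusted header interacts with the defaults mentioned above (the fallback logic and the use of `net/http` headers here are assumptions; only the header name and the sequential/2 defaults come from the note):

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
)

// stageLayerParallelism returns how many modules of a stage's layer may run in
// parallel: sequential in development mode, 2 in production mode (the current
// hard-coded default), unless the auth plugin supplied the trusted header.
func stageLayerParallelism(h http.Header, productionMode bool) int {
	if v, err := strconv.Atoi(h.Get("X-Sf-Substreams-Stage-Layer-Parallel-Executor-Max-Count")); err == nil && v > 0 {
		return v
	}
	if productionMode {
		return 2
	}
	return 1
}

func main() {
	headers := http.Header{}
	headers.Set("X-Sf-Substreams-Stage-Layer-Parallel-Executor-Max-Count", "8")
	fmt.Println("with trusted header:", stageLayerParallelism(headers, true))      // 8
	fmt.Println("production default:", stageLayerParallelism(http.Header{}, true)) // 2
}
```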
CLI
- Removed enforcement of the `BUFBUILD_AUTH_TOKEN` environment variable when using descriptor sets. There now appears to be a free public tier for querying those, which should work in most cases.
- When running a Solana package, set base58 encoding by default in the GUI.
- Add Sei Mainnet to the `ChainConfigByID` map.