
Rebase #4

Merged
merged 1,528 commits into ideal-lab5:master on May 22, 2024
Conversation

juangirini

No description provided.

svyatonik and others added 30 commits April 10, 2024 10:28
* Fix spelling mistakes across the whole repository (#3808)

**Update:** Pushed additional changes based on the review comments.

**This pull request fixes various spelling mistakes in this
repository.**

Most of the changes are contained in the first **3** commits:

- `Fix spelling mistakes in comments and docs`

- `Fix spelling mistakes in test names`

- `Fix spelling mistakes in error messages, panic messages, logs and
tracing`

Other source code spelling mistakes are separated into individual
commits for easier reviewing:

- `Fix the spelling of 'authority'`

- `Fix the spelling of 'REASONABLE_HEADERS_IN_JUSTIFICATION_ANCESTRY'`

- `Fix the spelling of 'prev_enqueud_messages'`

- `Fix the spelling of 'endpoint'`

- `Fix the spelling of 'children'`

- `Fix the spelling of 'PenpalSiblingSovereignAccount'`

- `Fix the spelling of 'PenpalSudoAccount'`

- `Fix the spelling of 'insufficient'`

- `Fix the spelling of 'PalletXcmExtrinsicsBenchmark'`

- `Fix the spelling of 'subtracted'`

- `Fix the spelling of 'CandidatePendingAvailability'`

- `Fix the spelling of 'exclusive'`

- `Fix the spelling of 'until'`

- `Fix the spelling of 'discriminator'`

- `Fix the spelling of 'nonexistent'`

- `Fix the spelling of 'subsystem'`

- `Fix the spelling of 'indices'`

- `Fix the spelling of 'committed'`

- `Fix the spelling of 'topology'`

- `Fix the spelling of 'response'`

- `Fix the spelling of 'beneficiary'`

- `Fix the spelling of 'formatted'`

- `Fix the spelling of 'UNKNOWN_PROOF_REQUEST'`

- `Fix the spelling of 'succeeded'`

- `Fix the spelling of 'reopened'`

- `Fix the spelling of 'proposer'`

- `Fix the spelling of 'InstantiationNonce'`

- `Fix the spelling of 'depositor'`

- `Fix the spelling of 'expiration'`

- `Fix the spelling of 'phantom'`

- `Fix the spelling of 'AggregatedKeyValue'`

- `Fix the spelling of 'randomness'`

- `Fix the spelling of 'defendant'`

- `Fix the spelling of 'AquaticMammal'`

- `Fix the spelling of 'transactions'`

- `Fix the spelling of 'PassingTracingSubscriber'`

- `Fix the spelling of 'TxSignaturePayload'`

- `Fix the spelling of 'versioning'`

- `Fix the spelling of 'descendant'`

- `Fix the spelling of 'overridden'`

- `Fix the spelling of 'network'`

Let me know if this structure is adequate.

**Note:** The usage of the words `Merkle`, `Merkelize`, `Merklization`, `Merkelization`, and `Merkleization` is somewhat inconsistent, but I left it as it is.

~~**Note:** In some places the term `Receival` is used to refer to
message reception, IMO `Reception` is the correct word here, but I left
it as it is.~~

~~**Note:** In some places the term `Overlayed` is used instead of the
more acceptable version `Overlaid` but I also left it as it is.~~

~~**Note:** In some places the term `Applyable` is used instead of the
correct version `Applicable` but I also left it as it is.~~

**Note:** Both British and American English spellings (e.g. `judgement` vs `judgment`, `initialise` vs `initialize`, `optimise` vs `optimize`) are present in different places, but I suppose that's understandable given the number of contributors.

~~**Note:** There is a spelling mistake in `.github/CODEOWNERS` but it
triggers errors in CI when I make changes to it, so I left it as it
is.~~

(cherry picked from commit 002d926)

* Fix

---------

Co-authored-by: Dcompoze <contact@dcsoftware.xyz>
* taplo

* markdown

* publish = false

* feature propagation
Bumps [scale-info](https://github.com/paritytech/scale-info) from 2.11.0 to 2.11.1.
- [Release notes](https://github.com/paritytech/scale-info/releases)
- [Changelog](https://github.com/paritytech/scale-info/blob/master/CHANGELOG.md)
- [Commits](paritytech/scale-info@v2.11.0...v2.11.1)

---
updated-dependencies:
- dependency-name: scale-info
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.36.0 to 1.37.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](tokio-rs/tokio@tokio-1.36.0...tokio-1.37.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
* added CLI arguments: full WS URI + a separate one for the WS path URI component + additional log

* URI -> URL?

* added TODO

* fmt
* Use workspace.[authors|edition]

* Add repository.workspace = true

* Upgrade dependencies to the polkadot-sdk versions

* Upgrade async-std version

* Update jsonrpsee version

* cargo update

* use ci-unified image
* Migrate fee payment from `Currency` to `fungible` (#2292)

Part of #226
Related #1833

- Deprecate `CurrencyAdapter` and introduce `FungibleAdapter`
- Deprecate `ToStakingPot` and replace usage with `ResolveTo`
- Required creating a new `StakingPotAccountId` struct that implements
`TypedGet` for the staking pot account ID
- Update parachain common utils `DealWithFees`, `ToAuthor` and
`AssetsToBlockAuthor` implementations to use `fungible`
- Update runtime XCM Weight Traders to use `ResolveTo` instead of
`ToStakingPot`
- Update runtime Transaction Payment pallets to use `FungibleAdapter`
instead of `CurrencyAdapter`
- [x] Blocked by #1296,
needs the `Unbalanced::decrease_balance` fix
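
A minimal sketch of what the adapter swap looks like in a runtime config (illustrative only; `Runtime`, `Balances`, and `StakingPotAccountId` are assumed names, and the import paths may differ by version):

```rust
// Hypothetical excerpt from a parachain runtime; not this PR's literal diff.
use frame_support::traits::tokens::fungible::ResolveTo;
use pallet_transaction_payment::FungibleAdapter;

impl pallet_transaction_payment::Config for Runtime {
    // Before: CurrencyAdapter<Balances, ToStakingPot<Runtime>>
    // After: charge fees via the `fungible` traits and resolve them to the
    // staking pot account (a `TypedGet` implementation).
    type OnChargeTransaction =
        FungibleAdapter<Balances, ResolveTo<StakingPotAccountId, Balances>>;
    // ...remaining associated types unchanged...
}
```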

(cherry picked from commit bda4e75)

* Upgrade `trie-db` from `0.28.0` to `0.29.0` (#3982)

- What does this PR do?
1. Upgrades `trie-db` to the latest release. This release includes, among others, an implementation of `DoubleEndedIterator` for the `TrieDB` struct, allowing iteration both backwards and forwards over the leaves of a trie.
2. Upgrades `trie-bench` to `0.39.0` for compatibility.
3. Upgrades `criterion` to `0.5.1` for compatibility.
- Why are these changes needed?
Besides keeping up with the upgrade of `trie-db`, this specifically adds the functionality of iterating backwards over the leaves of a trie with `sp-trie`. In a project we're currently working on, this comes in very handy for verifying a Merkle proof that is the response to a challenge. The challenge is a random hash that (most likely) will not be an existing leaf in the trie, so the challenged user has to provide a Merkle proof of the previous and next existing leaves in the trie that surround the random challenged hash.

Without double-ended iterators, we're forced to iterate until we find the first existing leaf, like so:
```rust
        // ************* VERIFIER (RUNTIME) *************
        // Verify proof. This generates a partial trie based on the proof and
        // checks that the root hash matches the `expected_root`.
        let (memdb, root) = proof.to_memory_db(Some(&root)).unwrap();
        let trie = TrieDBBuilder::<LayoutV1<RefHasher>>::new(&memdb, &root).build();

        // Print all leaf node keys and values.
        println!("\nPrinting leaf nodes of partial tree...");
        for key in trie.key_iter().unwrap() {
            if key.is_ok() {
                println!("Leaf node key: {:?}", key.clone().unwrap());

                let val = trie.get(&key.unwrap());

                if val.is_ok() {
                    println!("Leaf node value: {:?}", val.unwrap());
                } else {
                    println!("Leaf node value: None");
                }
            }
        }

        println!("RECONSTRUCTED TRIE {:#?}", trie);

        // Create an iterator over the leaf nodes.
        let mut iter = trie.iter().unwrap();

        // First element with a value should be the previous existing leaf to the challenged hash.
        let mut prev_key = None;
        for element in &mut iter {
            if element.is_ok() {
                let (key, _) = element.unwrap();
                prev_key = Some(key);
                break;
            }
        }
        assert!(prev_key.is_some());

        // Since hashes are `Vec<u8>` ordered in big-endian, we can compare them directly.
        assert!(prev_key.unwrap() <= challenge_hash.to_vec());

        // The next element should exist (meaning there is no other existing leaf between the
        // previous and next leaf) and it should be greater than the challenged hash.
        let next_key = iter.next().unwrap().unwrap().0;
        assert!(next_key >= challenge_hash.to_vec());
```

With DoubleEnded iterators, we can avoid that, like this:
```rust
        // ************* VERIFIER (RUNTIME) *************
        // Verify proof. This generates a partial trie based on the proof and
        // checks that the root hash matches the `expected_root`.
        let (memdb, root) = proof.to_memory_db(Some(&root)).unwrap();
        let trie = TrieDBBuilder::<LayoutV1<RefHasher>>::new(&memdb, &root).build();

        // Print all leaf node keys and values.
        println!("\nPrinting leaf nodes of partial tree...");
        for key in trie.key_iter().unwrap() {
            if key.is_ok() {
                println!("Leaf node key: {:?}", key.clone().unwrap());

                let val = trie.get(&key.unwrap());

                if val.is_ok() {
                    println!("Leaf node value: {:?}", val.unwrap());
                } else {
                    println!("Leaf node value: None");
                }
            }
        }

        // println!("RECONSTRUCTED TRIE {:#?}", trie);
        println!("\nChallenged key: {:?}", challenge_hash);

        // Create an iterator over the leaf nodes.
        let mut double_ended_iter = trie.into_double_ended_iter().unwrap();

        // Seek to the challenged key, then iterate backwards: the first `next_back` yields the next existing leaf, the second the previous one.
        double_ended_iter.seek(&challenge_hash.to_vec()).unwrap();
        let next_key = double_ended_iter.next_back().unwrap().unwrap().0;
        let prev_key = double_ended_iter.next_back().unwrap().unwrap().0;

        // Since hashes are `Vec<u8>` ordered in big-endian, we can compare them directly.
        println!("Prev key: {:?}", prev_key);
        assert!(prev_key <= challenge_hash.to_vec());

        println!("Next key: {:?}", next_key);
        assert!(next_key >= challenge_hash.to_vec());
```
- How were these changes implemented and what do they affect?
All that is needed to expose this functionality is bumping the `trie-db` version number in all the applicable `Cargo.toml`s and re-exporting some additional structs from `trie-db` in `sp-trie`.
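
For instance, the `sp-trie` side could be as small as a re-export (a sketch; the exact item name re-exported is an assumption):

```rust
// sp-trie/src/lib.rs (illustrative): surface trie-db's double-ended
// iterator support so downstream crates can use it via sp-trie.
pub use trie_db::TrieDBDoubleEndedIterator;
```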

---------

Co-authored-by: Bastian Köcher <git@kchr.de>
(cherry picked from commit 4e73c0f)

* Update polkadot-sdk refs

* Fix Cargo.lock

---------

Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>
Co-authored-by: Facundo Farall <37149322+ffarall@users.noreply.github.com>
The plain `people-kusama` and `coretime-kusama` chainspecs were uploaded in #3961. Only binaries with compatible runtime versions can run with a plain chainspec. For example:

One of the latest master builds fails:

```
docker run paritypr/polkadot-parachain-debug:master-216509db --chain coretime-kusama
...
Error: Service(Client(Storage("wasm call error Other: Exported method GenesisBuilder_get_preset is not found"))
```

Master build from 5 days ago:

```
docker run paritypr/polkadot-parachain-debug:master-68cdb126 --chain coretime-kusama
...
2024-04-08 16:51:02 [Parachain] 🔨 Initializing Genesis block/state (state: 0xc418…889c, header-hash: 0x638c…d050)
2024-04-08 16:51:03 [Relaychain] 🔨 Initializing Genesis block/state (state: 0xb000…ef6b, header-hash: 0xb0a8…dafe)
2024-04-08 16:51:03 [Relaychain] 👴 Loading GRANDPA authority set from genesis on what appears to be first startup.
2024-04-08 16:51:03 [Relaychain] 👶 Creating empty BABE epoch changes on what appears to be first startup.
2024-04-08 16:51:03 [Relaychain] 🏷 Local node identity is: 12D3KooWQEp2uPow3FnngGmy9dYQ3qxY1GkmumS5MqBWEQscwTyy
2024-04-08 16:51:03 [Relaychain] 📦 Highest known block at #0
...
```

## Changes:

1. Rename:
```
coretime-kusama.json -> coretime-kusama-genesis.json
people-kusama.json -> people-kusama-genesis.json
```

2. Generate raw chainspec:

```
docker run --rm -v $(pwd):/dir paritypr/polkadot-parachain-debug:master-68cdb126 build-spec --raw --chain /dir/people-kusama-genesis.json > people-kusama.json 
docker run --rm -v $(pwd):/dir paritypr/polkadot-parachain-debug:master-68cdb126 build-spec --raw --chain /dir/coretime-kusama-genesis.json > coretime-kusama.json
```

## Tests:

New chainspec can run on the latest master build:

```
docker run -it --rm -v $(pwd):/dir paritypr/polkadot-parachain-debug:master-216509db --chain /dir/coretime-kusama.json
...
2024-04-09 16:44:39 [Relaychain] ⚙️ Syncing, target=#22665488 (8 peers), best: #2371 (0x19f8…5f3a), finalized #2048 (0xede6…f879), ⬇ 388.6kiB/s ⬆ 87.0kiB/s
2024-04-09 16:44:39 [Parachain] 💤 Idle (6 peers), best: #0 (0x638c…d050), finalized #0 (0x638c…d050), ⬇ 6.3kiB/s ⬆ 2.8kiB/s
```

Plain chainspec will fail:

```
docker run -it --rm -v $(pwd):/dir paritypr/polkadot-parachain-debug:master-216509db --chain /dir/coretime-kusama-genesis.json
... 
Error: Service(Client(Storage("wasm call error Other: Exported method GenesisBuilder_get_preset is not found")))
```
Part of #3326 

@kianenigma @ggwpez 

polkadot address: 12poSUQPtcF1HUPQGY3zZu2P8emuW9YnsPduA4XG3oCEfJVp

---------

Signed-off-by: Matteo Muraca <mmuraca247@gmail.com>
Co-authored-by: ordian <write@reusable.software>
[Weights
compare](https://weights.tasty.limo/compare?unit=weight&ignore_errors=true&threshold=10&method=asymptotic&repo=polkadot-sdk&old=master&new=pg%2Fbench_tweaks&path_pattern=substrate%2Fframe%2F**%2Fsrc%2Fweights.rs%2Cpolkadot%2Fruntime%2F*%2Fsrc%2Fweights%2F**%2F*.rs%2Cpolkadot%2Fbridges%2Fmodules%2F*%2Fsrc%2Fweights.rs%2Ccumulus%2F**%2Fweights%2F*.rs%2Ccumulus%2F**%2Fweights%2Fxcm%2F*.rs%2Ccumulus%2F**%2Fsrc%2Fweights.rs)

Note: The raw weights change does not mean much here: this PR reduces the scope of what is benchmarked, so the weights decrease by a good margin. One should instead print the Schedule using `cargo test --features runtime-benchmarks bench_print_schedule -- --nocapture` or follow the instructions from the [README](https://github.com/paritytech/polkadot-sdk/tree/pg/bench_tweaks/substrate/frame/contracts#schedule) for looking at the Schedule of a specific runtime

---------

Co-authored-by: command-bot <>
This tiny PR extends the `on_validated_block_announce` log with the bad PeerID, used to identify whether the PeerID is malicious by correlating with other logs (i.e. peer-set).

While at it, I removed the `\n` from a multiline log, which did not play well with [sub-triage-logs](https://github.com/lexnv/sub-triage-logs/tree/master).

cc @paritytech/networking

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
Reuse the wasmi `Module` when validating.
Prepare the code for wasmi 0.32 and the addition of `Module::new_unchecked`.
Dear team, dear @NachoPal @joepetrowski @bkchr @ggwpez,

This is a retry of #3957, after merging master as advised!,

Many thanks!

**_Milos_**

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
- Update internal logic so that the `storage_base_deposit` does not include the ED
- Add a v16 migration to update the `ContractInfo` struct with this change

Before:
<img width="820" alt="Screenshot 2024-03-21 at 11 23 29"
src="https://github.com/paritytech/polkadot-sdk/assets/521091/a0a8df0d-e743-42c5-9e16-cf2ec1aa949c">

After:
![Screenshot 2024-03-21 at 11 23
42](https://github.com/paritytech/polkadot-sdk/assets/521091/593235b0-b866-4915-b653-2071d793228b)

---------

Co-authored-by: Cyrill Leutwiler <cyrill@parity.io>
Co-authored-by: command-bot <>
The same issue, but for av-cores, was fixed in
#1403

Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
…ancient (#4070)

fixes #4067

Also add an early bail-out to the lookahead collator so that we don't waste time if a `CollatorFn` is not set.

TODO:
- [x] add test.
- [x] Polkadot System Parachain burn-in.

---------

Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Closes #4041

Changes:
- Increase cache size and reduce retries.
- Ignore Substrate SE links :(
- Fix broken link.

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
#3630)

This is phase 2 of async backing enablement for the system parachains on
the production networks. ~~It should be merged after
polkadot-fellows/runtimes#228 is enacted. After it is released,~~ all
the system parachain collators should be upgraded, and then we can
proceed with phase 3, which will enable async backing in the runtimes.

UPDATE: Indeed, we don't need to wait for the runtime upgrade enactments. The lookahead collator handles the transition by itself, so we can upgrade ASAP.

## Scope of changes

Here, we eliminate the dichotomy of having "generic Aura collators" for the production system parachains and "lookahead Aura collators" for the testnet system parachains. Now, all the collators are started as lookahead ones, preserving the logic of transitioning from the shell node to Aura-enabled collators for the asset hubs. So, indeed, it simplifies the parachain service logic, which is a welcome side effect.
Currently, the `subsystem-regression-tests` job fails if the first benchmark fails, leaving no result for the second benchmark. Dividing the job also makes the pipeline faster (it is currently the longest job).

cc paritytech/ci_cd#969
cc @AndreiEres

---------

Co-authored-by: Andrei Eres <eresav@me.com>
Implements the idea from
#3899
- Removed latencies
- Number of runs reduced from 50 to 5; according to local runs this is quite enough
- Network messages are always sent in a spawned task, even if latency is zero. Without this, CPU time sometimes spikes.
- Removed the `testnet` profile because we probably don't need those debug additions.

After the local tests I can't say that this brings a significant improvement in the stability of the results. However, I believe it is worth trying and looking at the results over time.
Completes the removal of `try-runtime-cli` logic from `polkadot-sdk`.
ggwpez and others added 29 commits May 16, 2024 08:55
`cumulus-pallet-dmp-queue` is not needed anymore since
#1246.

The only logic that remains in the pallet is a lazy migration in the
[`on_idle`](https://github.com/paritytech/polkadot-sdk/blob/8d62c13b2541920c37fb9d9ca733fcce91e96573/cumulus/pallets/dmp-queue/src/lib.rs#L158)
hook.

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
…ets (#4392)

As per #3326, removes pallet::getter usage from the bounties and
child-bounties pallets. The syntax `StorageItem::<T, I>::get()` should
be used instead.
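
For illustration, a call site migrates like this (a sketch; `BountyCount` is the storage item behind the old `bounty_count` getter, the surrounding code is hypothetical):

```rust
// Before: via the macro-generated getter.
let count = pallet_bounties::Pallet::<T>::bounty_count();
// After: read the storage item directly.
let count = pallet_bounties::BountyCount::<T>::get();
```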

Changes to one pallet involved changes in the other, so I figured it'd
be best to combine these two.

cc @muraca

---------

Co-authored-by: Bastian Köcher <git@kchr.de>
Re-applying #2302 after increasing the `MaxPageSize`.  

Remove `without_storage_info` from the XCMP queue pallet. Part of
#323

Changes:
- Limit the number of messages and signals an HRMP channel can hold at most.
- Limit the number of HRMP channels.

A no-op migration is put in place to ensure that all `BoundedVec`s still decode and do not truncate after the upgrade. The storage version is thereby bumped to 5 so that our tooling reminds us to deploy that migration.
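
The decode check itself can be as simple as iterating the storage under try-runtime (a sketch with a hypothetical `Pages` storage map, not the pallet's actual migration):

```rust
// Illustrative post-upgrade check: iterating a map decodes every value,
// so a `BoundedVec` that no longer fits its bound would surface as a
// decoding error here rather than silently truncating.
#[cfg(feature = "try-runtime")]
fn post_upgrade(_state: Vec<u8>) -> Result<(), sp_runtime::TryRuntimeError> {
    for (_key, _value) in Pages::<T>::iter() {
        // Reaching this point means the value decoded within its bound.
    }
    Ok(())
}
```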

## Integration

If you see this error in your try-runtime-cli:  
```pre
Max message size for channel is too large. This means that the V5 migration can be front-run and an
attacker could place a large message just right before the migration to make other messages un-decodable.
Please either increase `MaxPageSize` or decrease the `max_message_size` for this channel. Channel max:
102400, MaxPageSize: 65535
```

Then increase the `MaxPageSize` of the `cumulus_pallet_xcmp_queue` to
something like this:
```rust
type MaxPageSize = ConstU32<{ 103 * 1024 }>;
```

There is currently no easy way for on-chain governance to adjust the
HRMP max message size of all channels, but it could be done:
#3145.

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
…rotocol (#3833)

This PR adds to the DHT only the peers that support the genesis/fork/kad
protocol.
Before this PR, any peer that supported the legacy `/kad/[id]` protocol
was added to the DHT.

This is the first step in removing the support for the legacy kad
protocols.

While I have adjusted unit tests to validate the appropriate behavior,
this still needs proper testing in our stack.

Part of #504.

cc @paritytech/networking

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
The bridge relay is **not** using `tokio`, while `jsonrpsee` does. To make them work together, we spawn a separate tokio task for every jsonrpsee subscription, which holds a subscription reference. It looks like we were not stopping those tasks when we no longer needed them, and when there are more than `1024` active subscriptions, `jsonrpsee` stops opening new ones. This PR adds a `cancel` signal that is sent to the background task when we no longer need a subscription.
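
Schematically, the fix follows this pattern (a sketch, not the relay's actual code; the function and channel names are assumptions):

```rust
// The task exits either when the stream ends or when the relay signals
// cancellation; dropping `subscription` on exit unsubscribes server-side
// and frees one of the 1024 subscription slots.
async fn run_subscription(
    mut subscription: jsonrpsee::core::client::Subscription<serde_json::Value>,
    mut cancel_rx: tokio::sync::oneshot::Receiver<()>,
) {
    loop {
        tokio::select! {
            _ = &mut cancel_rx => break, // relay no longer needs this subscription
            item = subscription.next() => match item {
                Some(Ok(_value)) => { /* forward the item to the relay */ }
                _ => break, // stream ended or errored
            },
        }
    }
}
```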
Demote the `Ignored block announcement because all validation slots for this peer are occupied.` message to debug level.

This is mostly an indicator of somebody spamming the node or (more likely) of some node actively keeping up with the network but not recognizing it's in a major sync mode, so it sends zillions of block announcements (we have seen this on Versi).

This warning shouldn't be considered an error by the end user, so let's make it debug.

Ref. #1929.
As per #3326, removes usage of the pallet::getter macro from the
democracy pallet. The syntax `StorageItem::<T, I>::get()` should be used
instead.

cc @muraca
…c committee in next store period (#4478)

While syncing Ethereum consensus updates to the Snowbridge Ethereum
light client, the syncing process stalled due to error
`InvalidSyncCommitteeUpdate` when importing the next sync committee for
period `1087`.

This bug manifested specifically because our light client checkpoint is a few weeks old (submitted to governance weeks ago) and had to catch up to a recent block. Since then, we have done thorough testing of the catch-up sync process.

### Symptoms
- Import next sync committee for period `1086` (essentially period
`1087`). Light client store period = `1086`.
- Import header in period `1087`. Light client store period = `1087`. The current and next sync committees are not updated and are now in an outdated state (current sync committee = `1086` and next sync committee = `1087`, where it should be current sync committee = `1087` and next sync committee = `None`).
- Importing the next sync committee for period `1087` (essentially period `1088`) fails because the expected next sync committee's roots don't match.

### Bug
The bug here is that the current and next sync committees didn't hand over when an update in the next period was received.

### Fix
There are two possible fixes here:
1. Correctly hand over the sync committees when a header in the next period is received.
2. Reject updates in the next period until the next sync committee period is known.

We opted for solution 2, which is more conservative and requires fewer changes.
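
As a sketch of what option 2 amounts to (illustrative only; the variable and error names are assumptions, not the pallet's literal code):

```rust
// Reject a finalized update that belongs to the period after the store
// while the handover has not happened yet, i.e. while the light client
// does not yet know the next sync committee.
if update_period == store_period + 1 && store.next_sync_committee.is_none() {
    return Err(Error::<T>::SkippedSyncCommitteePeriod.into());
}
```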

### Polkadot-sdk versions
This fix should be backported in polkadot-sdk versions 1.7 and up.

Snowfork PR: Snowfork#145

---------

Co-authored-by: Vincent Geddes <117534+vgeddes@users.noreply.github.com>
resolves #3315

---------

Co-authored-by: doordashcon <jessechejieh@doordashcon.local>
Co-authored-by: command-bot <>
Co-authored-by: Bastian Köcher <git@kchr.de>
Using Dynamic Parameters for contracts seems like a bad idea for now.

Given that we have benchmarks for each host function (in addition to our
extrinsics), parameter storage reads will be counted multiple times. We
will work on updates to the benchmarking framework to mitigate this
issue in future iterations.

---------

Co-authored-by: command-bot <>
Before the relayer crates were moved and merged, the `MetricsParams` type was created from the `substrate-relay` crate (the binary) and hence set the `substrate_relay_build_info` metric value properly, to the binary version. Now it is created from the `substrate-relay-helper` crate, which has the fixed (it isn't published) version `0.1.0`, so our relay reports an incorrect metric value. This 'breaks' our monitoring tools: we see that all relayers have that incorrect version, which is not cool.

The idea is to have a global static variable (shame on me) that is initialized by the binary during startup, like we already do with logger initialization. I considered some alternative options:
- adding a separate argument to every relayer subcommand and propagating it to `MetricsParams::new()` causes a lot of changes and introduces even more noise into the binary code, which is supposed to be as small as possible in the new design. But I could do that if the team thinks it is better;
- adding a `structopt(skip) pub relayer_version: RelayerVersion` argument to all subcommand params won't work, because it would be initialized by default, and `RelayerVersion` needs to reside in some util crate (not the binary), so it would have the wrong value again.
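
A minimal sketch of the chosen approach (assumed shape; the actual type and setter live in the relay crates):

```rust
// Global set exactly once by the binary at startup, read by the helper
// crate when building `MetricsParams`.
use std::sync::OnceLock;

pub struct RelayerVersion {
    pub version: &'static str,
    pub commit: &'static str,
}

pub static RELAYER_VERSION: OnceLock<RelayerVersion> = OnceLock::new();

// In the `substrate-relay` binary's main():
// let _ = RELAYER_VERSION.set(RelayerVersion {
//     version: env!("CARGO_PKG_VERSION"),
//     commit: "unknown", // e.g. injected by a build script
// });
```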
Changes the Ethereum client storage scope to public, so it can be set in
a migration.

When merged, we should backport to all the other release branches:

- [ ] release-crates-io-v1.7.0 - patch release the fellows BridgeHubs
runtimes #4504
- [ ] release-crates-io-v1.8.0 -
#4505
- [ ] release-crates-io-v1.9.0 -
#4506
- [ ] release-crates-io-v1.10.0 -
#4507
- [ ] release-crates-io-v1.11.0 -
#4508
- [ ] release-crates-io-v1.12.0 (commit soon)
…ce on the pool account (#4503)

addresses #4440 (will
close once we have this in prod runtimes).
related: #2037.

An extra consumer reference is preventing pools from being destroyed. When a pool is ready to be destroyed, we can safely clear the consumer references, if any. Notably, I only check for one extra consumer reference, since that is a known bug. Anything more indicates possibly another issue, and we probably don't want to silently absorb those errors as well.

After this change, pools with an extra consumer reference should be able to be destroyed normally.
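
The gist, as a sketch (the real code lives in `pallet-nomination-pools`; the account lookup shown here is simplified):

```rust
// When dissolving a pool, defensively drop a single dangling consumer
// reference on the bonded account; anything beyond one is unexpected
// and should not be silently absorbed.
let bonded_account = Self::create_bonded_account(pool_id);
if frame_system::Pallet::<T>::consumers(&bonded_account) == 1 {
    frame_system::Pallet::<T>::dec_consumers(&bonded_account);
}
```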
Co-authored-by: jimwfs <169986508+jimwfs@users.noreply.github.com>
…4512)

Replace usage of deprecated
`substrate_rpc_client::ChildStateApi::storage_keys` with
`substrate_rpc_client::ChildStateApi::storage_keys_paged`.

Required for successful scraping of Aleph Zero state.
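
The replacement is typically driven in a loop (a sketch; the argument order of `storage_keys_paged` is assumed here, so treat this as pseudocode for the paging pattern):

```rust
// Fetch child-trie keys one page at a time instead of in a single
// (potentially oversized) request.
let mut last_key = None;
let mut all_keys = Vec::new();
loop {
    let page = ChildStateApi::storage_keys_paged(
        &client,
        child_key.clone(), // prefixed storage key of the child trie
        None,              // no prefix filter
        1000,              // page size
        last_key.clone(),  // resume after the last key of the previous page
        Some(at),          // block hash to read at
    )
    .await?;
    if page.is_empty() {
        break;
    }
    last_key = page.last().cloned();
    all_keys.extend(page);
}
```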
)

As per #3326, removes pallet::getter macro usage from
pallet-fast-unstake. The syntax `StorageItem::<T, I>::get()` should be
used instead.

cc @muraca

---------

Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>
Implements #4429

Collators only need to maintain the implicit view for the para ID they are collating on. In this case, bypass prospective-parachains entirely. It's still useful for validators to use the `GetMinimumRelayParents` message from prospective-parachains, because the data is already present there.

This enables us to entirely remove the subsystem from collators, which consumed resources needlessly.

Aims to resolve #4167 

TODO:
- [x] fix unit tests
)

Co-authored-by: Svyatoslav Nikolsky <svyatonik@gmail.com>
)

closes paritytech/parity-bridges-common#3000

Recently we've changed our bridge configuration for Rococo <> Westend, and our new relayer has started to submit transactions every ~`30` seconds. Eventually, it switches itself into a limbo state, where it can't submit more transactions: all `author_submitAndWatchExtrinsic` calls fail with the following error: `ERROR bridge Failed to send transaction to BridgeHubRococo node: Call(ErrorObject { code: ServerError(-32006), message: "Too many subscriptions on the connection", data: Some(RawValue("Exceeded max limit of 1024")) })`.

Some links for those who want to explore:
- server side (node) has a strict limit on a number of active
subscriptions. It fails to open a new subscription if this limit is hit:
https://github.com/paritytech/jsonrpsee/blob/a4533966b997e83632509ad97eea010fc7c3efc0/server/src/middleware/rpc/layer/rpc_service.rs#L122-L132.
The limit is set to `1024` by default;
- internally this limit is a semaphore with `limit` permits:
https://github.com/paritytech/jsonrpsee/blob/a4533966b997e83632509ad97eea010fc7c3efc0/core/src/server/subscription.rs#L461-L485;
- semaphore permit is acquired in the first link;
- the permit is "returned" when the `SubscriptionSink` is dropped:
https://github.com/paritytech/jsonrpsee/blob/a4533966b997e83632509ad97eea010fc7c3efc0/core/src/server/subscription.rs#L310-L325;
- the `SubscriptionSink` is dropped when [this `polkadot-sdk` function](https://github.com/paritytech/polkadot-sdk/blob/278486f9bf7db06c174203f098eec2f91839757a/substrate/client/rpc/src/utils.rs#L58-L94) returns, in other words, when the connection is closed, the stream is finished, or the internal subscription buffer limit is hit;
- the subscription has an internal buffer, so sending an item consists of two steps: [reading an item from the underlying stream](https://github.com/paritytech/polkadot-sdk/blob/278486f9bf7db06c174203f098eec2f91839757a/substrate/client/rpc/src/utils.rs#L125-L141) and [sending it over the connection](https://github.com/paritytech/polkadot-sdk/blob/278486f9bf7db06c174203f098eec2f91839757a/substrate/client/rpc/src/utils.rs#L111-L116);
- when the underlying stream is finished, `inner_pipe_from_stream` wants to ensure that all items are sent to the subscriber. So it [waits until the current send operation completes](https://github.com/paritytech/polkadot-sdk/blob/278486f9bf7db06c174203f098eec2f91839757a/substrate/client/rpc/src/utils.rs#L146-L148) and then [sends all remaining items from the internal buffer](https://github.com/paritytech/polkadot-sdk/blob/278486f9bf7db06c174203f098eec2f91839757a/substrate/client/rpc/src/utils.rs#L150-L155). Once that is done, the function returns, the `SubscriptionSink` is dropped, the semaphore permit is released, and we are ready to accept new subscriptions;
- unfortunately, the code just calls `pending_fut.await.is_err()` to ensure that [the current send operation completes](https://github.com/paritytech/polkadot-sdk/blob/278486f9bf7db06c174203f098eec2f91839757a/substrate/client/rpc/src/utils.rs#L146-L148). But if there is no current send operation (which is normal), then `pending_fut` is set to a terminated future and the `await` never completes. Hence, no return from the function, no drop of the `SubscriptionSink`, no drop of the semaphore permit, and no new subscriptions allowed (once the number of subscriptions hits the limit).

I've illustrated the issue with a small test: you may verify that if e.g. the stream is initially empty, the `subscription_is_dropped_when_stream_is_empty` test will hang, because `pipe_from_stream` never exits.
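
The hang reduces to a few lines (an illustrative sketch using the `futures` crate's `Fuse`; not the actual `pipe_from_stream` code):

```rust
use futures::future::Fuse;

#[tokio::main]
async fn main() {
    // A terminated `Fuse` reports `Poll::Pending` forever when polled,
    // mirroring `pending_fut` when no send operation is in flight.
    let pending_fut: Fuse<futures::future::Ready<()>> = Fuse::terminated();
    // This await never completes, so in `pipe_from_stream` the function
    // never returns and the `SubscriptionSink` is never dropped.
    pending_fut.await;
}
```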
…4465)

closes paritytech/parity-bridges-common#2963

See the issue above for the rationale.
I've been thinking about adding similar calls to other pallets, but:
- for the parachains pallet, I haven't been able to think of a case where we would need that, given how long a referendum takes. I.e. if the storage proof format changes and we want to unstick the bridge, it'll take a few weeks to sync a single parachain header, then more weeks for another, and so on;
- for the messages pallet, I initially made a similar call, but it just changes a storage key (`OutboundLanes` and/or `InboundLanes`), so there's no logic there and it may simply be done using `system.set_storage`.

---------

Co-authored-by: command-bot <>
)

This PR introduces custom types / substrate wrappers for `Multiaddr`,
`multiaddr::Protocol`, `Multihash`, `ed25519::*` and supplementary types
like errors and iterators.

This is needed to unblock `libp2p` upgrade PR
#1631 after
#2944 was merged.
`libp2p` and `litep2p` currently depend on different versions of the `multiaddr` crate, and the introduction of these "common ground" types is needed to support independent version upgrades of `multiaddr` and dependent crates in `libp2p` & `litep2p`.

While it is merely convenient not to tie the versions of the `libp2p` & `litep2p` dependencies together, it's currently not even possible to keep them updated to the same versions, as `multiaddr` in `libp2p` depends on `libp2p-identity`, which we can't include as a dependency of `litep2p`, which has its own `PeerId` type. In the future, to keep things updated on the `litep2p` side, we will likely need to fork `multiaddr` and make it use the `litep2p` `PeerId` as the payload of the `/p2p/...` protocol.

With these changes, common code in substrate uses these custom types, and the `litep2p` & `libp2p` backends use the corresponding libraries' types.
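
Schematically, the wrapper pattern looks like this (a sketch; the real types and their conversions are more complete):

```rust
// A substrate-owned `Multiaddr` newtype that each networking backend
// converts at its boundary, so common code no longer leaks one particular
// `multiaddr` crate version.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct Multiaddr(multiaddr::Multiaddr);

impl From<multiaddr::Multiaddr> for Multiaddr {
    fn from(addr: multiaddr::Multiaddr) -> Self {
        Multiaddr(addr)
    }
}

impl std::fmt::Display for Multiaddr {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        self.0.fmt(f)
    }
}
```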
This version includes the latest release of pjs/api
(https://github.com/polkadot-js/api/releases/tag/v11.1.1).
Thx!
This PR backports version bumps and the reorganisation of the prdoc files from the `1.12.0` release branch back to `master`.
juangirini merged commit 005083d into ideal-lab5:master on May 22, 2024
12 of 17 checks passed