
pallet: Referenda Tracks #2

Merged
merged 2 commits into from
Dec 28, 2023

Conversation

pandres95
Member

No description provided.

@pandres95 pandres95 self-assigned this Dec 22, 2023
@olanod olanod changed the base branch from master to release-virto-v1.5.0 December 22, 2023 16:13
fix(pallet-referenda): impl Debug + PartialEq on TrackInfo

chore(impl_tracks_info): split codebase

chore(referenda-tracks): add test suite
info: TrackInfoOf<T, I>,
pallet_origin: PalletsOriginOf<T>,
) -> DispatchResultWithPostInfo {
T::UpdateOrigin::ensure_origin(origin)?;
Member

As mentioned in one of our talks, for `insert` and `remove` we will need a different origin (`AdminOrigin`?), since the entity adding and removing tracks is of higher privilege (e.g. root) than the origin updating the track info (e.g. a community updating the info for its own origins).
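The split suggested above can be modeled in a standalone sketch. The names `AdminOrigin`/`UpdateOrigin` mirror the discussion, but the `Origin` enum and check functions here are illustrative stand-ins, not the pallet's actual Substrate types:

```rust
// Toy model of the proposed two-origin split (assumed names, not pallet code).
#[derive(Debug, PartialEq, Clone, Copy)]
enum Origin {
    Root,
    Community(u32),
}

// AdminOrigin: only a high-privilege origin (root) may insert or remove tracks.
fn ensure_admin(origin: Origin) -> Result<(), &'static str> {
    match origin {
        Origin::Root => Ok(()),
        _ => Err("BadOrigin"),
    }
}

// UpdateOrigin: root, or the community that owns the track, may update its info.
fn ensure_update(origin: Origin, track_owner: u32) -> Result<(), &'static str> {
    match origin {
        Origin::Root => Ok(()),
        Origin::Community(id) if id == track_owner => Ok(()),
        _ => Err("BadOrigin"),
    }
}

fn main() {
    assert!(ensure_admin(Origin::Root).is_ok());
    assert!(ensure_admin(Origin::Community(1)).is_err());
    assert!(ensure_update(Origin::Community(1), 1).is_ok());
    assert!(ensure_update(Origin::Community(2), 1).is_err());
    println!("ok");
}
```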

/// A new track has been inserted
Created { id: TrackIdOf<T, I> },
/// The information for a track has been updated
Updated { id: TrackIdOf<T, I>, info: TrackInfoOf<T, I> },
Member


As events end up in storage, it's usually better to keep them small. `TrackInfo` can be bulky, so instead we could include the origin in the events, giving clients what they need to construct the storage key and easily fetch the info from storage when the event happens.
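A standalone sketch of the lean-event pattern being suggested (the types here are simplified stand-ins for the pallet's `TrackIdOf`/`TrackInfoOf`): the event carries only the small id, and the client fetches the bulky `TrackInfo` from storage itself.

```rust
// Sketch: events stay small; clients look up the bulky info in storage.
// Types are illustrative, not the pallet's actual generics.
use std::collections::HashMap;

type TrackId = u16;

#[derive(Debug, Clone, PartialEq)]
struct TrackInfo {
    name: String,
    max_deciding: u32,
    // ...the real TrackInfo has many more fields, which is why it is bulky.
}

#[derive(Debug, PartialEq)]
enum Event {
    Created { id: TrackId },
    // Small: only the id, not the whole TrackInfo.
    Updated { id: TrackId },
}

fn main() {
    let mut tracks: HashMap<TrackId, TrackInfo> = HashMap::new();
    tracks.insert(7, TrackInfo { name: "treasury".into(), max_deciding: 10 });

    // A client receiving Event::Updated { id } fetches the info itself:
    let ev = Event::Updated { id: 7 };
    if let Event::Updated { id } = ev {
        let info = tracks.get(&id).expect("track exists in storage");
        assert_eq!(info.name, "treasury");
    }
    println!("ok");
}
```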

id: TrackIdOf<T, I>,
info: TrackInfoOf<T, I>,
) -> DispatchResultWithPostInfo {
T::UpdateOrigin::ensure_origin(origin)?;
Member
@olanod olanod Dec 22, 2023


Thinking that it wouldn't hurt if `UpdateOrigin` (different from `AdminOrigin`) were an `EnsureOriginWithArg<Self::RuntimeOrigin, TrackIdOf<T, I>>`. I could see us using some special track ids that encode information about the community, to be able to easily determine later whether the community (or a subgroup) is allowed to edit the given origin.
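One way to read the "track ids that encode community information" idea, sketched standalone: pack the community id into the high bits of the track id, so a check taking the id as an argument (in the spirit of `EnsureOriginWithArg`) can decide ownership without extra storage reads. The bit layout below is an assumption for illustration only:

```rust
// Assumed layout: high 16 bits = community id, low 16 bits = local track index.
type TrackId = u32;

fn community_of(track: TrackId) -> u32 {
    track >> 16
}

// Toy stand-in for an EnsureOriginWithArg-style check keyed on the track id.
fn ensure_update_with_arg(caller_community: u32, track: TrackId) -> Result<(), &'static str> {
    if community_of(track) == caller_community {
        Ok(())
    } else {
        Err("BadOrigin")
    }
}

fn main() {
    let track: TrackId = (42 << 16) | 3; // community 42, track index 3
    assert!(ensure_update_with_arg(42, track).is_ok());
    assert!(ensure_update_with_arg(7, track).is_err());
    println!("ok");
}
```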


ensure!(Tracks::<T, I>::contains_key(id), Error::<T, I>::TrackIdNotFound);

OriginToTrackId::<T, I>::iter_keys().for_each(|origin| {
Member


Iterating over all the origins looks unnecessarily expensive; I think it's reasonable to expect the extrinsic to be called with the corresponding `pallet_origin`.

Member Author


I was thinking the same.

However, I noticed this structure allows for a multiple-origins-to-single-track setup, like the one that currently exists on OpenGov.

My guess is that, unless we figure out how to make that setup work, this solution has a low chance of being accepted, as it would turn out to be a worse solution than what's already out there (?)

Maybe I'm wrong on the last part, but it's worth exploring those scenarios, and in that case this expense is necessary.
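The trade-off in this exchange can be sketched standalone with a plain map (types are illustrative stand-ins for `OriginToTrackId`): scanning every mapping supports many origins pointing at one track, while removing by an explicitly supplied origin is a single lookup but handles only that one mapping.

```rust
// Sketch contrasting the two removal strategies discussed above.
use std::collections::HashMap;

type Origin = &'static str;
type TrackId = u16;

// Scan: removes every origin mapped to `id`, whatever the fan-in,
// at the cost of iterating the whole map.
fn remove_by_scan(map: &mut HashMap<Origin, TrackId>, id: TrackId) {
    map.retain(|_, t| *t != id);
}

// Direct: the caller supplies the pallet_origin; a single O(1) removal.
fn remove_by_origin(map: &mut HashMap<Origin, TrackId>, origin: Origin) {
    map.remove(origin);
}

fn main() {
    let mut map = HashMap::new();
    // Two origins pointing at the same track, as on OpenGov.
    map.insert("root", 0);
    map.insert("whitelisted_caller", 0);
    map.insert("treasurer", 1);

    remove_by_scan(&mut map, 0);
    assert_eq!(map.len(), 1); // both origins for track 0 are gone

    remove_by_origin(&mut map, "treasurer");
    assert!(map.is_empty());
    println!("ok");
}
```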

@olanod olanod merged commit 2fb9e66 into release-virto-v1.5.0 Dec 28, 2023
6 of 10 checks passed
pandres95 pushed a commit that referenced this pull request Feb 15, 2024
1. Benchmark results are collected in a single struct.
2. The output of the results is prettified.
3. The result struct is used to save the output as YAML and store it as an artifact in a CI job.

```
$ cargo run -p polkadot-subsystem-bench --release -- test-sequence --path polkadot/node/subsystem-bench/examples/availability_read.yaml | tee output.txt
$ cat output.txt

polkadot/node/subsystem-bench/examples/availability_read.yaml #1

Network usage, KiB                     total   per block
Received from peers               510796.000  170265.333
Sent to peers                        221.000      73.667

CPU usage, s                           total   per block
availability-recovery                 38.671      12.890
Test environment                       0.255       0.085


polkadot/node/subsystem-bench/examples/availability_read.yaml #2

Network usage, KiB                     total   per block
Received from peers               413633.000  137877.667
Sent to peers                        353.000     117.667

CPU usage, s                           total   per block
availability-recovery                 52.630      17.543
Test environment                       0.271       0.090


polkadot/node/subsystem-bench/examples/availability_read.yaml #3

Network usage, KiB                     total   per block
Received from peers               424379.000  141459.667
Sent to peers                        703.000     234.333

CPU usage, s                           total   per block
availability-recovery                 51.128      17.043
Test environment                       0.502       0.167

```

```
$ cargo run -p polkadot-subsystem-bench --release -- --ci test-sequence --path polkadot/node/subsystem-bench/examples/availability_read.yaml | tee output.txt
$ cat output.txt
- benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #1'
  network:
  - resource: Received from peers
    total: 509011.0
    per_block: 169670.33333333334
  - resource: Sent to peers
    total: 220.0
    per_block: 73.33333333333333
  cpu:
  - resource: availability-recovery
    total: 31.845848445
    per_block: 10.615282815
  - resource: Test environment
    total: 0.23582828799999941
    per_block: 0.07860942933333313

- benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #2'
  network:
  - resource: Received from peers
    total: 411738.0
    per_block: 137246.0
  - resource: Sent to peers
    total: 351.0
    per_block: 117.0
  cpu:
  - resource: availability-recovery
    total: 18.93596025099999
    per_block: 6.31198675033333
  - resource: Test environment
    total: 0.2541994199999979
    per_block: 0.0847331399999993

- benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #3'
  network:
  - resource: Received from peers
    total: 424548.0
    per_block: 141516.0
  - resource: Sent to peers
    total: 703.0
    per_block: 234.33333333333334
  cpu:
  - resource: availability-recovery
    total: 16.54178526900001
    per_block: 5.513928423000003
  - resource: Test environment
    total: 0.43960946299999537
    per_block: 0.14653648766666513
```
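A minimal sketch of the flow described above: collect the results in one struct and emit them in the YAML shape shown. The field names mirror the sample output, but the struct and hand-rolled formatter are assumptions for illustration; the real tool's types differ and would likely serialize via serde rather than by hand.

```rust
// Illustrative result struct and YAML emitter matching the sample output shape.
struct Usage {
    resource: &'static str,
    total: f64,
    per_block: f64,
}

struct BenchmarkOutput {
    benchmark_name: String,
    network: Vec<Usage>,
    cpu: Vec<Usage>,
}

fn to_yaml(out: &BenchmarkOutput) -> String {
    let mut s = format!("- benchmark_name: '{}'\n", out.benchmark_name);
    for (key, usages) in [("network", &out.network), ("cpu", &out.cpu)] {
        s.push_str(&format!("  {key}:\n"));
        for u in usages {
            s.push_str(&format!(
                "  - resource: {}\n    total: {}\n    per_block: {}\n",
                u.resource, u.total, u.per_block
            ));
        }
    }
    s
}

fn main() {
    let out = BenchmarkOutput {
        benchmark_name: "example.yaml #1".into(),
        network: vec![Usage { resource: "Sent to peers", total: 220.0, per_block: 73.3 }],
        cpu: vec![Usage { resource: "Test environment", total: 0.25, per_block: 0.08 }],
    };
    let yaml = to_yaml(&out);
    assert!(yaml.contains("- benchmark_name: 'example.yaml #1'"));
    assert!(yaml.contains("  - resource: Sent to peers\n"));
    println!("{yaml}");
}
```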

---------

Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>
@pandres95 pandres95 deleted the virto-pallet-referenda-tracks branch April 2, 2024 15:51