This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

FRAME: Add new poll hook + various other improvements to pallet hooks #14279

Draft
wants to merge 3 commits into base: master

Conversation

@kianenigma kianenigma added the A3-in_progress Pull request is in progress. No review needed at this stage. label May 31, 2023
@kianenigma kianenigma changed the title Add new poll hook + various other improvements to pallet hooks FRAME: Add new poll hook + various other improvements to pallet hooks May 31, 2023
///
/// This function can freely read from the state, but any change it makes to the state is
/// meaningless. Writes can be pushed back to the chain by submitting extrinsics from the
/// offchain worker to the transaction pool. See `pallet-example-offchain-worker` for more
Contributor

Suggested change
/// offchain worker to the transaction pool. See `pallet-example-offchain-worker` for more
/// offchain worker to the transaction pool. See `pallet-example-offchain-worker-ping-pong` and `pallet-example-offchain-worker-price-oracle` for more

/// that make (by default) validators generate transactions that feed results
/// of those long-running computations back on chain.
/// Implementing this function on a pallet allows you to perform long-running tasks that are
/// dispatched as separate threads, and entirely independent of the main wasm runtime.
Contributor

Suggested change
/// dispatched as separate threads, and entirely independent of the main wasm runtime.
/// executed as separate threads, and entirely independent of the main state transition function (even though its functionality is also defined by the runtime wasm blob).

this comes back to what we define as "runtime": wasm blob or STF?

Comment on lines +360 to +361
/// This function can freely read from the state, but any change it makes to the state is
/// meaningless. Writes can be pushed back to the chain by submitting extrinsics from the
Contributor

Suggested change
/// This function can freely read from the state, but any change it makes to the state is
/// meaningless. Writes can be pushed back to the chain by submitting extrinsics from the
/// This function can freely read from the state, but any change it makes to the state can only be achieved by submitting extrinsics from the
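
For concreteness, this is roughly how such writes are "pushed back": a fragment of a pallet's `offchain_worker` hook in the style of `pallet-example-offchain-worker`. The unsigned `record_result` call and its pallet are made up, and the pallet's `Config` is assumed to require `SendTransactionTypes<Call<Self>>`:

```rust
use frame_system::offchain::SubmitTransaction;

// Inside `impl<T: Config> Hooks<T::BlockNumber> for Pallet<T>`:
fn offchain_worker(block_number: T::BlockNumber) {
    // Any storage writes performed here are discarded when the worker exits.
    let result: u32 = 42; // stand-in for a long-running off-chain computation

    // The only way to persist the result is to feed it back as an extrinsic.
    let call = Call::<T>::record_result { result };
    if SubmitTransaction::<T, Call<T>>::submit_unsigned_transaction(call.into()).is_err() {
        log::warn!("could not submit unsigned transaction at block {:?}", block_number);
    }
}
```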

/// to perform off-chain computations, calls and submit transactions
/// with results to trigger any on-chain changes.
/// Any state alterations are lost and are not persisted.
/// TODO: ask for review from bernardo
Contributor

left some suggestions, LGTM overall

///
/// The latter point is usually correlated with the block resources being available, for example
/// `Weight` leftover.
fn poll(_n: BlockNumber) -> Weight {
Contributor

should it take the remaining weight as an argument, similarly to on_idle?

Contributor

How exactly does this work? This is skipped when on_initialize consumes all the weight? This sounds like on_idle but before extrinsics.

Also, the name is not consistent with all the other hooks, which are on_xxx.

Member

on_initialize and on_finalize will be deprecated. on_idle() happens at the end of a block and only if it has weight left. poll() happens at the beginning, prior to non-mandatory transaction inclusion.

So:

  • initialize_block
    • Pallets' on_initialize code, though this will either be migrated to poll or moved into a hard-deadline hook, called by an explicit configuration of the System pallet.
  • If no migrations scheduled or on-going: poll
  • Mandatory extrinsics (inherents)
  • Multi-block migrations
  • If no migrations on-going: other extrinsics/transactions
  • If weight is left: on_idle
  • finalize_block
    • Pallets' on_finalize code, though this will either be migrated to poll or moved into a hard-deadline hook, called by an explicit configuration of the System pallet.
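
To make the pallet-side usage concrete, a minimal stand-alone sketch of a stripped-down `Hooks` trait carrying the proposed `poll` default, plus a dummy implementor. The signature is taken from this diff; the weight value and housekeeping logic are made up:

```rust
use frame_support::weights::Weight;

/// Stripped-down sketch of the trait with the proposed `poll` hook.
pub trait Hooks<BlockNumber> {
    /// Runs near the start of the block, before non-mandatory extrinsics,
    /// and is skipped while a multi-block migration is scheduled or ongoing.
    fn poll(_n: BlockNumber) -> Weight {
        Weight::zero()
    }
}

pub struct MyPallet;

impl Hooks<u32> for MyPallet {
    fn poll(_n: u32) -> Weight {
        // Best-effort housekeeping only: nothing here may be a hard
        // requirement for block validity. Return the weight actually used.
        Weight::from_parts(10_000, 0)
    }
}
```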

Contributor

while we are here, what's the plan for the post-inherent hook?

Member

Not sure about this - got a link?

Contributor Author

should it take the remaining weight as an argument, similarly to on_idle?

I considered this and am still open to it. At the end of the day, the developer can always query the remaining weight at the beginning of the code and make an informed choice not to overfill the block.

I have less of a strong opinion about which to choose, and more about unifying them: in all Hooks functions, block_number and remaining_weight should either be passed in for convenience, or not passed in at all, based on the assumption that you can always read them from the System pallet.
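
For reference, the two shapes under discussion side by side; a sketch with placeholder names, where only the single-argument form matches the current diff:

```rust
use frame_support::weights::Weight;

// Placeholder block number type so the sketch stands alone.
type BlockNumber = u32;

// Option A: mirror `on_idle` and hand the leftover weight in explicitly.
fn poll_with_budget(_n: BlockNumber, remaining_weight: Weight) -> Weight {
    // Do at most `remaining_weight` worth of work and report what was used.
    let _ = remaining_weight;
    Weight::zero()
}

// Option B (as in this draft): take only the block number; an implementation
// that cares about the budget reads it from the System pallet (e.g. the
// `BlockWeight` storage item) before deciding how much work to do.
fn poll(_n: BlockNumber) -> Weight {
    Weight::zero()
}
```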

Contributor Author

What would happen if there is a faulty MBM at hand, and it never stops? Should MBMs have a deadline, after which we (partially) resume the chain? Ideally we want a few governance calls to be resumed, but not everything else.

Contributor Author

Side note, I wonder what people think about my comment here. I sometimes find such discussions confusing because, to the best of my knowledge, we use the term DispatchClass even for things that are not dispatchable (like Hooks). Moreover, something like OnIdle is declared Mandatory, even though it is clearly not.

<frame_system::Pallet<System>>::register_extra_weight_unchecked(

We can possibly formulate a more comprehensive (and ideally extensible) enum WeightClass, the variants of which encapsulate different weights that may or may not be permitted to be consumed. For example, each variant of this enum is encoded at a custom codec_index(x), where x denotes its "priority".

  • MBMs are then configured to block all weight classes with "priority" less than some y.
  • Executive/System is configured to never permit over-consumption of weights of certain classes (extrinsics), but to permit others to exceed their limits (hooks, perhaps except on_idle).

Things like frame_system::Config::PreInherents or on_initialize are set to codec_index(255) which by definition means they cannot be blocked.

Possibly, one could argue any code that has priority other than 255 needs to be benchmarked and be able to predict its weight consumption prior to execution. This is something that we have a soft-expectation about in on_idle now, but don't and cannot enforce.

This also allows things like "permit sudo or governance calls during MBMs" to be expressed.
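
A very rough sketch of how such a `WeightClass` could look; every name, index, and the blocking rule are illustrative only, nothing here exists in FRAME:

```rust
use codec::{Decode, Encode};

/// Illustrative only: the codec index doubles as a "priority".
#[derive(Encode, Decode, Clone, Copy, PartialEq, Eq, Debug)]
pub enum WeightClass {
    /// Ordinary signed extrinsics; blocked while an MBM is ongoing.
    #[codec(index = 0)]
    Normal,
    /// Operational extrinsics; blocked only by stricter MBM configurations.
    #[codec(index = 10)]
    Operational,
    /// Best-effort hooks such as `poll`/`on_idle`.
    #[codec(index = 20)]
    Hook,
    /// `PreInherents`/`on_initialize`-style code that can never be blocked.
    #[codec(index = 255)]
    Mandatory,
}

impl WeightClass {
    /// An MBM configured with `min_priority = y` blocks every class whose
    /// codec index (its "priority") is below `y`.
    pub fn is_blocked_by(&self, min_priority: u8) -> bool {
        self.encode()[0] < min_priority
    }
}
```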

Possibly over-engineering, but I feel like this can:

  1. Improve poll+MBM
  2. close https://github.com/paritytech/substrate/issues/13731
  3. close https://github.com/paritytech/substrate/issues/11309

all at once.

Member
@ggwpez ggwpez Jun 12, 2023

What would happen if there is a faulty MBM at hand, and it never stops? Should MBMs have a deadline, after which we (partially) resume the chain? Ideally we want a few governance calls to be resumed, but not everything else.

Yes, MBMs have a max_steps in the current design, after which they are treated as a failure.
Governance can be resumed through the failed callback of the migrations pallet, but it is currently up to the implementor to do so, possibly by using the SafeMode pallet.
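
Purely to illustrate the wiring described above; the trait, its signature, and the SafeMode hand-off are hypothetical placeholders rather than the actual migrations-pallet interface:

```rust
/// Hypothetical callback a runtime could plug into the migrations pallet so
/// that an MBM which exhausts its `max_steps` flips the chain into a
/// restricted mode where only whitelisted (governance) calls get through.
pub trait FailedMigrationHandler {
    fn on_failed(migration_index: u32);
}

pub struct EnterSafeModeOnFailure;

impl FailedMigrationHandler for EnterSafeModeOnFailure {
    fn on_failed(migration_index: u32) {
        // In a real runtime this would call into something like the SafeMode
        // pallet to suspend all non-whitelisted dispatchables until governance
        // resolves the failed migration.
        let _ = migration_index;
    }
}
```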

This also allows things like "permit sudo or governance calls during MBMs" to be expressed.

But governance calls should in general not be possible during MBMs - only once they fail.

I am not sure if I quite understood the advantage of a WeightClass. To me it still looks pretty similar to DispatchClass, which could be extended to have more levels.
But ultimately it comes down to: is it a hard requirement to execute in a block or is it not? And I think Mandatory already makes that distinction quite clear. on_idle is not mandatory, but I think it is needed to call the weight consumption function with that argument to prevent it from erroring.

Member

We also have the open question about the parachains_inherent and whether it's really mandatory; if it is, then it would imply that pallet_message_queue could never have an MBM in the model, since it is called into by a ME. We might need to split the inherent into two parts in that case: one which is mandatory but doesn't contain any messages, and a second which is optional (and would not be included in the case of an MBM) and only does the message transport.

The messages could stay part of the inherent? However, when there is an MBM ongoing, we would need to buffer the inbound messages by default. Writing to the state can be done in the new format (assuming there would be some MBM running for pallet-message-queue). It would be good if we could announce to other chains that the message queue is full and we cannot process more messages right now.

In general, we will always require that some parts of the state are migrated in one go. If, for example, pallet-message-queue has some counter for stored messages, we need to migrate this counter directly when starting the MBM; this cannot be done later. I think this pattern of certain values requiring an instant migration will be found in other pallets as well. Another example would be the list of authorities, which would probably also need to be migrated at the start of a potential MBM.

@kianenigma kianenigma mentioned this pull request Jun 20, 2023
@paritytech-cicd-pr

The CI pipeline was cancelled due to the failure of one of the required jobs.
Job name: test-linux-stable
Logs: https://gitlab.parity.io/parity/mirrors/substrate/-/jobs/3065529

/// of [`construct_runtime`]'s expansion. Look for a test case with a name along the lines of:
/// `__construct_runtime_integrity_test`.
///
/// This hook is suitable location to ensure value values/types provided to the `Config` trait
Contributor

Suggested change
/// This hook is suitable location to ensure value values/types provided to the `Config` trait
/// This hook is a suitable location to ensure values/types provided to the `Config` trait
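
For example, the kind of sanity check this hook is meant to host; a standalone sketch where `MaxVoters` and the surrounding types are made up:

```rust
use frame_support::traits::Get;

pub trait Config {
    /// Hypothetical configuration constant supplied by the runtime.
    type MaxVoters: Get<u32>;
}

pub struct Pallet<T>(core::marker::PhantomData<T>);

impl<T: Config> Pallet<T> {
    /// Exercised by the generated `__construct_runtime_integrity_test`, so it
    /// is free to panic on nonsensical configuration values.
    fn integrity_test() {
        assert!(T::MaxVoters::get() > 0, "`MaxVoters` must be greater than zero");
    }
}
```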
