Fast Validator Discussions! #3
Replies: 6 comments 1 reply
-
The sequencer is not strictly needed for the MVP, as all transactions can be processed on our router for now, and we can leave out rollback for now. Related to that, I am already building cost calculation into the current implementation, but we have no way of actually debiting those accounts on mainnet unless we control per-user accounts that we debit from. Users would have to fund those before being able to play using our validator.
Could even be a locally running validator while we are testing this, and possibly even for the demo.
I think the quickest solution would be to figure this out locally first, but I agree that we don't need it at all and can just send all txs to our validator for the demo.
That will not affect latency at all, as we declare txs finalized as soon as we have committed account state changes to the bank.
Agreed, but we should get the latency to <100ms even for the demo, so there is a visible difference to the main validator.
I'm not even including them to begin with, and am removing vote/stake related code as I port things over. However, WRT crates, that will be a longer endeavor, as they aren't very clearly structured, and sometimes a whole crate is imported for now just to get access to a single type.
As I'm understanding things more, it'd be easier to just stick with the 400ms slot time. It's less confusing to users. We can include more transactions in a single block though; i.e., we don't even produce blocks, so it is simply a matter of increasing the slot and pretending that a specific transaction was executed as part of it. We can do that since we only have one bank. I can elaborate on that if needed.
I'm not sure which reads we are referring to here. I'm using the accounts-db for account state on the validator itself.
-
We don't produce blocks (at least for now) until we find a need to do so. It's ephemeral, so all state/ledger, etc. that we maintain while it's running will disappear anyway once we roll back the changes. Basically, we're taking a single-bank, blockless approach and just fill in things like slots to be compatible. Since there are no blocks, the slot time (400ms) is only meaningful to explorers; in our case it makes no difference at all, since as soon as a transaction is persisted to the accounts-db it is considered finalized, i.e. it is never finalized into a block.
Forgetting about blocks, then yes, we can batch such that each thread takes on a max of 128 (or more if we want) transactions per batch. However, we could make batches smaller in order to persist account states more frequently; again, this is not related to a block at all, just to however large we allow the batches to be.
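As a minimal sketch of the batching described above (the `Tx` type, `MAX_BATCH` constant, and function names are illustrative assumptions, not the actual implementation):

```rust
// Sketch: split the pending transaction queue into per-thread batches.
// MAX_BATCH mirrors the 128-transaction maximum mentioned above; a
// smaller value means account states get persisted more frequently.

const MAX_BATCH: usize = 128;

/// Placeholder for a sanitized transaction.
#[derive(Debug, Clone)]
struct Tx(u64);

/// Split the pending queue into batches of at most `max_batch` transactions.
fn into_batches(pending: Vec<Tx>, max_batch: usize) -> Vec<Vec<Tx>> {
    pending
        .chunks(max_batch)
        .map(|chunk| chunk.to_vec())
        .collect()
}

fn main() {
    let pending: Vec<Tx> = (0..300).map(Tx).collect();
    let batches = into_batches(pending, MAX_BATCH);
    // 300 pending transactions -> batches of 128, 128, 44
    println!(
        "{} batches, sizes: {:?}",
        batches.len(),
        batches.iter().map(|b| b.len()).collect::<Vec<_>>()
    );
}
```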
WRT ordering, I'm using the central scheduler, which considers compute units (we don't support prioritization fees, to avoid cheating/arbitrage for now). So, as on mainnet, there is no guarantee of order, also since it's hard to determine which thread a transaction will run on (in parallel with other non-conflicting transactions). However, if we keep batches small enough, the max latency should stay small.
Agreed! That's why we do this as a quick-and-dirty prototype, to then have a look and decide how to re-implement it better.
Do we need that in the MVP? I was under the impression that for now the RPC runs on the same box as the validator and will provide the data by querying it?
That part I don't understand, as we are going for a single-node approach for now. Why do we need to manage concurrency in a distributed manner? Unless you're referring to the accounts we lock on mainnet; in that case whoever locks them first wins, i.e. the Solana cluster already is the system managing locks for us.
-
NOTE: if we want to align batch completion with slot increases, we can do that (it may make more sense for explorers and such). In that case we can make slots go as fast as we need, i.e. the slot time shouldn't be larger than the max latency we want to allow.
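To make the note above concrete, here is a sketch of a slot counter that ticks independently of any block production, with the slot time bounded by the target latency (all names are illustrative assumptions, not the actual codebase):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::Duration;

/// Sketch: a slot counter decoupled from block production. Slots exist
/// only for compatibility (explorers, sysvars), not because blocks are
/// actually produced.
struct SlotTicker {
    slot: AtomicU64,
    slot_time: Duration,
}

impl SlotTicker {
    /// The slot time should not exceed the max latency we want to allow,
    /// so a batch aligned with a slot always completes within it.
    fn new(max_latency: Duration) -> Self {
        Self {
            slot: AtomicU64::new(0),
            slot_time: max_latency,
        }
    }

    /// Advance to the next slot; called once per `slot_time` interval
    /// (or per completed batch, if batches are aligned with slots).
    fn tick(&self) -> u64 {
        self.slot.fetch_add(1, Ordering::SeqCst) + 1
    }

    fn current(&self) -> u64 {
        self.slot.load(Ordering::SeqCst)
    }
}

fn main() {
    // Target <100ms end-to-end => slot time of at most 100ms.
    let ticker = SlotTicker::new(Duration::from_millis(100));
    ticker.tick();
    ticker.tick();
    println!("slot = {} (slot time {:?})", ticker.current(), ticker.slot_time);
}
```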
-
Questions/notes for discussion:
We don't; let's forget about this for the MVP.
To clarify with one example the situation I'm thinking of (with both write and read issues):
Even with a single node, there is going to be concurrency/locking, as multiple users will hit the RPC with transactions trying to change the same state. Also, users have to read the previous account state. Assuming case 1), where we have some sort of batches/blocks:
Assuming case 2) we don't have batches/blocks (streaming?):
Questions: 1 or 2? Pros, cons? Up for discussion
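One way to picture the single-node concurrency raised above (a sketch under assumed names, not the actual mechanism): per-account read/write locks, so that transactions writing the same account serialize while non-conflicting ones can proceed in parallel.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

/// Sketch: per-account locks on a single node. Writers to the same
/// account serialize; readers and writers to other accounts proceed
/// in parallel. The u64 stands in for arbitrary account state.
type Pubkey = String;

struct AccountLocks {
    accounts: HashMap<Pubkey, Arc<RwLock<u64>>>,
}

impl AccountLocks {
    fn new() -> Self {
        Self { accounts: HashMap::new() }
    }

    /// Get (or create) the lock guarding one account's state.
    fn account(&mut self, key: &str) -> Arc<RwLock<u64>> {
        self.accounts
            .entry(key.to_string())
            .or_insert_with(|| Arc::new(RwLock::new(0)))
            .clone()
    }
}

fn main() {
    let mut locks = AccountLocks::new();
    let acc = locks.account("shared_state");

    // Two "transactions" contending for the same account: whoever
    // acquires the write lock first wins; the other waits its turn.
    let a = acc.clone();
    let handle = std::thread::spawn(move || *a.write().unwrap() += 1);
    *acc.write().unwrap() += 1;
    handle.join().unwrap();

    println!("final value: {}", *acc.read().unwrap());
}
```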
-
Terminology:
-
🚀 Fast Validator MVP
Initial discussion for components and architecture needed for the Fast Validator MVP.
1: Delegation Module
The delegation module/program locks accounts on the mainnet and delegates authority over them to the identity (i.e., keypair) of the validator/sequencer.
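A sketch of the state such a delegation program might keep per locked account; all field and function names here are assumptions for illustration, not the actual program layout:

```rust
/// Sketch of a per-account delegation record. Field names are
/// hypothetical, not the actual on-chain layout.
#[derive(Debug, Clone, PartialEq)]
struct DelegationRecord {
    /// The mainnet account being locked/delegated.
    delegated_account: [u8; 32],
    /// Identity (public key) of the validator/sequencer granted authority.
    authority: [u8; 32],
    /// Slot at which the delegation was created.
    delegated_at_slot: u64,
}

/// Only the recorded authority may commit state changes or undelegate.
fn is_authorized(record: &DelegationRecord, signer: &[u8; 32]) -> bool {
    &record.authority == signer
}

fn main() {
    let validator_id = [1u8; 32];
    let record = DelegationRecord {
        delegated_account: [7u8; 32],
        authority: validator_id,
        delegated_at_slot: 42,
    };
    println!("authorized: {}", is_authorized(&record, &validator_id));
}
```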
2: Validator
The fast validator is SVM compatible and can execute any transaction that would work on the mainnet.
Websocket RPC Subscriptions
HTTP RPC Methods
Targets:
Fast: We don't need crazy performance at this stage; more optimization can be done after the MVP. We are doing quick optimizations for speed-ups, and we can reduce the target block time.
Considerations: Running the solana-test-validator locally as-is is already quite fast. It's not a priority to be 10x faster at this stage.
Targeting 100ms end-to-end
Minimal Footprint: We decided to start from a bottom-up approach and gradually add the crates we need. This allows us to package and containerize a minimal SVM runtime.
Considerations/Questions:
Ability to Clone Accounts on Demand: The validator needs to expose an RPC method (or have a built-in program) so that it can be instructed from the outside to clone accounts on demand.
Consideration: This is perhaps one of the most important features of the validator, as it allows executing transactions that rely on accounts living on the mainnet.
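The clone-on-demand flow could look roughly like this (a sketch; the cache type, `Pubkey` alias, and the `fetch_from_mainnet` callback standing in for a real RPC client are all assumptions):

```rust
use std::collections::HashMap;

/// Sketch of clone-on-demand: before executing a transaction, any
/// account not yet present locally is fetched from mainnet and cached.
type Pubkey = String;

#[derive(Debug, Clone, PartialEq)]
struct Account {
    lamports: u64,
    data: Vec<u8>,
}

struct AccountsCache {
    local: HashMap<Pubkey, Account>,
}

impl AccountsCache {
    fn new() -> Self {
        Self { local: HashMap::new() }
    }

    /// Return the account, cloning it from mainnet on first access.
    /// `fetch_from_mainnet` stands in for a real RPC client call.
    fn get_or_clone<F>(&mut self, key: &str, fetch_from_mainnet: F) -> &Account
    where
        F: FnOnce(&str) -> Account,
    {
        self.local
            .entry(key.to_string())
            .or_insert_with(|| fetch_from_mainnet(key))
    }
}

fn main() {
    let mut cache = AccountsCache::new();
    let acc = cache.get_or_clone("SomeMainnetAccount", |_key| Account {
        lamports: 1_000,
        data: vec![],
    });
    println!("cloned account with {} lamports", acc.lamports);
}
```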
Easy Startup Configuration:
Consideration:
For the MVP, the most likely useful configuration will be:
3: Sequencer
The sequencer module writes the state-diff of delegated accounts on the mainnet and runs the undelegation process.
Considerations: It's key to understand the frequency, policy, and mechanism of sequencing (e.g., periodic vs. end of the session).
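Whatever frequency is chosen, the core of the sequencing step is computing a state-diff to write back. A sketch of that computation, comparing account snapshots taken at delegation time against the current state (types and names are illustrative assumptions):

```rust
use std::collections::HashMap;

/// Sketch: the state-diff the sequencer would commit to mainnet is
/// just the set of delegated accounts whose data changed since the
/// snapshot taken at delegation time.
type Pubkey = String;
type AccountData = Vec<u8>;

fn state_diff(
    before: &HashMap<Pubkey, AccountData>,
    after: &HashMap<Pubkey, AccountData>,
) -> HashMap<Pubkey, AccountData> {
    after
        .iter()
        .filter(|(key, data)| before.get(*key) != Some(*data))
        .map(|(key, data)| (key.clone(), data.clone()))
        .collect()
}

fn main() {
    let mut before = HashMap::new();
    before.insert("player".to_string(), vec![0, 0]);
    before.insert("world".to_string(), vec![1]);

    let mut after = before.clone();
    after.insert("player".to_string(), vec![0, 9]); // changed in-session

    let diff = state_diff(&before, &after);
    // Only "player" needs to be written back to mainnet.
    println!("accounts to commit: {:?}", diff.keys().collect::<Vec<_>>());
}
```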
Notes:
Mainnet could be any reference SVM cluster, including devnet or others (e.g., Eclipse SVM).
The RPC router can be faked for the demo if there isn't enough time. A quick/easy implementation is probably doable with Cloudflare Functions.
(Not needed for the MVP) For fast reads, we could use a database on the edge, something like:
from where clients can easily get < 100ms updates. We still need to carefully optimize for fast writes with another approach.
4: Luzid
5: RPC Router
The RPC router determines where to route the transaction, whether on the mainnet or the auxiliary validator, depending on the account needed in the transaction. See section 2.2.2 of https://arxiv.org/pdf/2311.02650.pdf.
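The routing decision itself can be sketched as follows (an assumption consistent with the delegation model above, not the actual router): a transaction goes to the fast validator only if every writable account it touches is currently delegated; otherwise it goes to mainnet.

```rust
use std::collections::HashSet;

/// Sketch of the routing decision: route to the fast validator only
/// when all writable accounts in the transaction are delegated.
#[derive(Debug, PartialEq)]
enum Route {
    FastValidator,
    Mainnet,
}

fn route(writable_accounts: &[&str], delegated: &HashSet<&str>) -> Route {
    if writable_accounts.iter().all(|a| delegated.contains(a)) {
        Route::FastValidator
    } else {
        Route::Mainnet
    }
}

fn main() {
    let delegated: HashSet<&str> = ["player1", "game_state"].into_iter().collect();

    // All writable accounts delegated -> fast validator.
    println!("{:?}", route(&["player1", "game_state"], &delegated));
    // "treasury" is not delegated -> must go to mainnet.
    println!("{:?}", route(&["player1", "treasury"], &delegated));
}
```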
6: Gasless API
7: Ticking
Add sysvar to the validator
BOLT System that provides Delta time
8: Systems/Components Registry