On Fri, 6 Mar 2026 12:08:07 +1100, David Gibson wrote:
Stefano convinced me that my earlier proposal for the dynamic update protocol was unnecessarily complex. Plus, I saw a much better way of handling socket continuity in the context of a "whole table" replacement. So here's an entirely revised protocol suggestion.
# Outline
I suggest that each connection to the control socket handles a single transaction.
1. Server hello
   - Server sends magic number, version
   - Possibly feature flags / limits (e.g. max number of rules allowed)
Feature flags and limits could be fixed depending on the version, for simplicity. If pifs are unexpected (somebody trying to forward ports to a container and touching passt instead), we should find out as part of step (3). I can't think of other substantial types of mismatches.
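As a sketch, the server hello could be a small fixed-size frame. The C layout below is purely illustrative: the magic value, field names and sizes are assumptions, not part of the proposal.

```c
#include <stdint.h>

/* Hypothetical wire format for the server hello (step 1).  All
 * names and values here are placeholders; the real protocol would
 * fix the magic number, versioning scheme and flag bits.
 */
#define DU_MAGIC	0x70617373u	/* "pass", placeholder value */

struct du_server_hello {
	uint32_t magic;		/* DU_MAGIC */
	uint16_t version;	/* protocol version, starting at 1 */
	uint16_t flags;		/* feature flags, 0 for now */
	uint32_t max_rules;	/* limit advertised by the server */
} __attribute__((packed));
```

Fixing the frame size up front keeps step (1) a single read of `sizeof(struct du_server_hello)` bytes on the client side.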
2. Client hello
   - Client sends magic number
   - Do we need anything else?
As long as we have a version reported by the server, we should be fine. We'll just increase it if we need something else. Do we want a client version too?
3. Server lists pifs
   - Server sends the number of pifs, their indices and names
Up to here, I guess we can skip all this for an initial Podman-side-complete implementation.
4. Server lists rules
   - Server sends the list of rules, one pif at a time
Could this be a fixed-size blob with up to, say, 16 pifs? We'll need to generalise pifs at some point. I'm not sure if it makes things simpler. I would defer this to the implementation.
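A fixed-size exchange as suggested could look something like the layout below; all names, field choices and limits are placeholders for illustration, not a proposal for the actual format.

```c
#include <stdint.h>

/* Hypothetical fixed-size rule blob for steps (4)/(5): one table
 * per pif, with hard caps on pifs and rules.  All names and limits
 * are invented for illustration.
 */
#define DU_MAX_PIFS	16
#define DU_MAX_RULES	128

struct du_rule {
	uint16_t port;		/* forwarded port, 0 if slot unused */
	uint8_t proto;		/* e.g. IPPROTO_TCP / IPPROTO_UDP */
	uint8_t flags;		/* per-rule flags (e.g. "weak") */
};

struct du_pif_table {
	uint32_t pif_index;
	uint32_t n_rules;
	struct du_rule rules[DU_MAX_RULES];
};

struct du_rule_blob {
	uint32_t n_pifs;
	struct du_pif_table pifs[DU_MAX_PIFS];
};
```

With caps like these the whole blob stays around 8 KiB, so "blob of known size" is a single fixed-length read in both directions.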
5. Client gives new rules
   - Client sends the new list of rules, one pif at a time
   - Server loads them into the shadow table, and validates (no socket operations)
Is it one shadow table per pif or one with everything? If it's one per pif, do we want to have the whole exchange prepended by "load table for pif x" or "store table for pif y" commands? I would suggest not, at the moment, as it looks slightly complicated, but eventually in a later version we could switch to that.
6. Server acknowledges
   - Either reports an error and disconnects, or acks, waiting for the client
7. Client signals apply
   - Server swaps shadow and active tables, and syncs sockets with the new active table
8. Server gives error summary
   - Server reports bind/listen/whatever errors
9a. Client signals commit
   - Shadow table (now the old table) discarded
or
9b. Client signals rollback
   - Shadow and active tables swapped back, syncs sockets
   - Discard shadow table (now the "new" table again)
   - New bind error report?
Do we need these as five separate steps? Couldn't the server simply apply, or try to apply, as soon as the client is done, and acknowledge or return an error once everything is done? What about this instead:

5. Client sends new rules (blob of known size)
6. Server receives, loads into shadow table, swaps tables and syncs sockets, with rollback to the old table on error
8. Server sends error / success summary (single byte, at least in this version)
10. Server closes control connection
...if we keep my 8. above, it would be more logical for the client to close the connection.
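The collapsed apply-with-rollback step above could look roughly like this in C; `struct fwd_table`, `fwd_apply()` and the sync stand-ins are invented for illustration, not actual passt code.

```c
#include <stdbool.h>

/* Sketch of the collapsed apply step (5/6): swap shadow and active
 * tables, sync sockets against the new active table, and swap back
 * on failure.  struct fwd_table and the sync callback are stand-ins,
 * not actual passt code.
 */
struct fwd_table { int id; };	/* placeholder for the real table */

/* Callback re-binding/closing sockets to match the active table;
 * returns false on any bind/listen error (placeholder semantics). */
typedef bool (*fwd_sync_fn)(struct fwd_table *active);

/* Trivial stand-in sync implementations, for demonstration only */
static bool sync_ok(struct fwd_table *t)   { (void)t; return true; }
static bool sync_fail(struct fwd_table *t) { (void)t; return false; }

static bool fwd_apply(struct fwd_table **active, struct fwd_table **shadow,
		      fwd_sync_fn sync)
{
	struct fwd_table *tmp = *active;

	*active = *shadow;		/* swap in the new table */
	*shadow = tmp;

	if (!sync(*active)) {		/* bind/listen failed... */
		tmp = *active;		/* ...roll back the swap */
		*active = *shadow;
		*shadow = tmp;
		sync(*active);		/* restore the old sockets */
		return false;		/* single error byte to client */
	}
	return true;			/* success summary to client */
}
```

Collapsing steps (6) to (9) into one function like this is what makes the single-byte summary in Stefano's step 8 sufficient: by the time anything is reported, the tables are already in their final state.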
# Client disconnects
A client disconnect before step (7) is straightforward: discard the shadow table, nothing has changed.
A client disconnect between (7) and (9) triggers a rollback, same as (9b).
In my modified version, a client disconnect during 5. would trigger discarding of the shadow table that's being filled (essentially a no-op). A disconnect after that doesn't affect the following steps, but the server won't report errors or success.
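The disconnect rules above amount to a small state-to-action table; the state and action names in this sketch are invented for illustration.

```c
/* Sketch of disconnect handling per transaction state, following
 * the rules above.  All names are invented for illustration.
 */
enum du_state {
	DU_LISTING,	/* steps (1)-(4): nothing has changed yet */
	DU_LOADING,	/* steps (5)-(6): shadow table being filled */
	DU_APPLIED,	/* steps (7)-(8): new table already active */
	DU_COMMITTED,	/* step (9a)/(9b) completed */
};

enum du_disconnect_action {
	DU_NOOP,		/* nothing to undo */
	DU_DISCARD_SHADOW,	/* drop the half-filled shadow table */
	DU_ROLLBACK,		/* swap back and re-sync, as in (9b) */
};

static enum du_disconnect_action du_on_disconnect(enum du_state s)
{
	switch (s) {
	case DU_LOADING:
		return DU_DISCARD_SHADOW;
	case DU_APPLIED:
		return DU_ROLLBACK;	/* same as an explicit (9b) */
	case DU_LISTING:
	case DU_COMMITTED:
	default:
		return DU_NOOP;
	}
}
```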
# Error reporting
Error reporting at step (6) is fairly straightforward: we can send an error code and/or an error message.
Error reporting at (8) is trickier. As a first cut, we could just report "yes" or "no", taking into account the FWD_WEAK flag. But the client might be able to make better decisions, or at least give better messages to the user, if we report more detailed information. Exactly how detailed is an open question: the number of bind failures? The number of failures per rule? The specific ports which failed?
For the moment I would report a single byte. Later, we could probably send back the list of rules with a success / error status for each of them. Think of just sending the same type of fixed-size table back and forth.
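A later, more detailed report could echo the rule table back with a status byte per rule; a hypothetical layout follows, where all names, limits and status codes are made up.

```c
#include <stdint.h>

/* Hypothetical per-rule result report for a later protocol
 * version: the submitted rules echoed back with a status byte
 * each.  Names, limits and codes are placeholders.
 */
#define DU_MAX_RULES	128

enum du_rule_status {
	DU_RULE_OK = 0,
	DU_RULE_EADDRINUSE,	/* bind() failed: port taken */
	DU_RULE_EACCES,		/* bind() failed: privileged port */
	DU_RULE_EOTHER,		/* any other failure */
};

struct du_rule_result {
	uint16_t port;
	uint8_t proto;
	uint8_t status;		/* enum du_rule_status */
};

struct du_result_blob {
	uint32_t n_rules;
	struct du_rule_result results[DU_MAX_RULES];
};
```

Since this mirrors the fixed-size rule table going the other way, the client can match results to rules by position without any extra framing.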
# Interim steps
I propose these steps toward implementing this:
i. Merge TCP and UDP rule tables. The protocol above assumes a single rule table per pif, which I think is an easier model to understand and more extensible for future protocol support.
ii. Read-only client. Implement steps (1) to (4). The client can query and list the current rules, but not change them.
iii. Rule updates. Implement the remaining protocol steps, but with a "close and re-open" approach on the server, so unaltered listening sockets might briefly disappear.
iv. Socket continuity. Have the socket sync "steal" sockets from the old table in preference to re-opening them.
If you have any time to work on (ii) while I work on (i), those should be parallelizable.
Yes, I'll start adapting the existing draft as soon as possible. I think (ii) could go in parallel with all the other steps; I can just call some stubs meanwhile.
# Concurrent updates
Server guarantees that a single transaction as above is atomic in the sense that nothing else is allowed to change the rules between (4) and (9). The easiest way to do that initially is probably to only allow a single client connection at a time.
I would call this a feature...
If there's a reason to, we could alter that so that concurrent connections are allowed, but if another client changed anything after step (4), then we give an error on the next op (or maybe just close the control socket from the server side).
...even if we go for my modified version.
# Tweaks / variants
- I'm not sure that step (2) is necessary
I would skip it. The only reason we might want it is to send a client version, but we can also introduce sending a client version starting from a newer server version.
- I'm not certain that step (7) is necessary, although I do kind of prefer the client getting a chance to see a "so far, so good" before any socket operations happen.
I think it's quite unrealistic that we'll ever manage to build some sensible logic to decide what to do depending on partial failures. If "so far" is not good, the server should just abort, and the user will have to fix mistakes and try again.

-- Stefano