On Mon, Oct 20, 2025 at 07:11:07AM +0200, Stefano Brivio wrote:
On Mon, 20 Oct 2025 11:20:19 +1100 David Gibson wrote:
On Fri, Oct 17, 2025 at 08:28:12PM +0200, Stefano Brivio wrote:
On Thu, 16 Oct 2025 09:54:25 +1100 David Gibson wrote:
On Wed, Oct 15, 2025 at 02:31:27PM +0800, Yumei Huang wrote:
On Wed, Oct 15, 2025 at 8:05 AM David Gibson wrote:
On Tue, Oct 14, 2025 at 03:38:36PM +0800, Yumei Huang wrote:
> According to RFC 2988 and RFC 6298, we should use an exponential
> backoff timeout for data retransmission starting from one second
> (see Appendix A in RFC 6298), and limit it to about 60 seconds
> as allowed by the same RFC:
>
>     (2.5) A maximum value MAY be placed on RTO provided it is at
>           least 60 seconds.
The interpretation of this isn't entirely clear to me. Does it mean that if the total retransmit delay exceeds 60s, we give up and RST (what this patch implements)? Or does it mean that once the retransmit delay reaches 60s, we keep retransmitting but don't increase the delay any further?
Looking at tcp_bound_rto() and related code in the kernel suggests the second interpretation.
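As a minimal sketch of the difference between the two readings (the macro and function names here are made up for illustration, they're not taken from the kernel or from this patch):

	/* Reading 1: track the total elapsed time and give up (RST) once it
	 * passes 60 s.  Reading 2: keep retransmitting, but stop growing the
	 * RTO once it reaches the cap, which is roughly what the kernel's
	 * tcp_bound_rto() does:
	 */
	#define RTO_INIT	1	/* s, RFC 6298 initial RTO */
	#define RTO_MAX		60	/* s, smallest cap RFC 6298 (2.5) allows */

	static int next_rto(int rto)
	{
		rto *= 2;				/* exponential backoff */
		return rto < RTO_MAX ? rto : RTO_MAX;	/* clamp, don't reset */
	}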
> Combine the macros defining the initial timeout for both SYN and ACK.
> And add a macro ACK_RETRIES to limit the total timeout to about 60s.
>
> Signed-off-by: Yumei Huang
> ---
>  tcp.c | 32 ++++++++++++++++----------------
>  1 file changed, 16 insertions(+), 16 deletions(-)
>
> diff --git a/tcp.c b/tcp.c
> index 3ce3991..84da069 100644
> --- a/tcp.c
> +++ b/tcp.c
> @@ -179,16 +179,12 @@
>   *
>   * Timeouts are implemented by means of timerfd timers, set based on flags:
>   *
> - * - SYN_TIMEOUT_INIT: if no ACK is received from tap/guest during handshake
> - *   (flag ACK_FROM_TAP_DUE without ESTABLISHED event) within this time, resend
> - *   SYN. It's the starting timeout for the first SYN retry. If this persists
> - *   for more than TCP_MAX_RETRIES or (tcp_syn_retries +
> - *   tcp_syn_linear_timeouts) times in a row, reset the connection
> - *
> - * - ACK_TIMEOUT: if no ACK segment was received from tap/guest, after sending
> - *   data (flag ACK_FROM_TAP_DUE with ESTABLISHED event), re-send data from the
> - *   socket and reset sequence to what was acknowledged. If this persists for
> - *   more than TCP_MAX_RETRIES times in a row, reset the connection
> + * - ACK_TIMEOUT_INIT: if no ACK segment was received from tap/guest, eiher
> + *   during handshake(flag ACK_FROM_TAP_DUE without ESTABLISHED event) or after
> + *   sending data (flag ACK_FROM_TAP_DUE with ESTABLISHED event), re-send data
> + *   from the socket and reset sequence to what was acknowledged. It's the
> + *   starting timeout for the first retry. If this persists for more than
> + *   allowed times in a row, reset the connection
>   *
>   * - FIN_TIMEOUT: if a FIN segment was sent to tap/guest (flag ACK_FROM_TAP_DUE
>   *   with TAP_FIN_SENT event), and no ACK is received within this time, reset
> @@ -342,8 +338,7 @@ enum {
>  #define WINDOW_DEFAULT 14600 /* RFC 6928 */
>
>  #define ACK_INTERVAL 10 /* ms */
> -#define SYN_TIMEOUT_INIT 1 /* s */
> -#define ACK_TIMEOUT 2
> +#define ACK_TIMEOUT_INIT 1 /* s, RFC 6298 */

I'd suggest calling this RTO_INIT to match the terminology used in the RFCs.
Sure.
>  #define FIN_TIMEOUT 60
>  #define ACT_TIMEOUT 7200
>
> @@ -352,6 +347,11 @@ enum {
>
>  #define ACK_IF_NEEDED 0 /* See tcp_send_flag() */
>
> +/* Number of retries calculated from the exponential backoff formula, limited
> + * by a total timeout of about 60 seconds.
> + */
> +#define ACK_RETRIES 5
> +
As noted above, I think this is based on a misunderstanding of what the RFC is saying. TCP_MAX_RETRIES should be fine as it is, I think. We could implement the clamping of the RTO, but it's a "MAY" in the RFC, so we don't have to, and I don't really see a strong reason to do so.
If we use TCP_MAX_RETRIES and don't clamp the RTO, the total timeout could be 255 seconds.
Stefano mentioned "Retransmitting data after 256 seconds doesn't make a lot of sense to me" in the previous comment.
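For reference, with a 1 s initial RTO and plain doubling, the total timeout after n expiries works out to 2^n - 1 seconds, assuming the connection is reset when the last timeout expires:

	expiries (n):   1   2   3   4    5    6    7    8
	last RTO (s):   1   2   4   8   16   32   64  128
	total (s):      1   3   7  15   31   63  127  255

So the 255 seconds above corresponds to n = 8 (inferred from that figure, not quoted from the code), while ACK_RETRIES 5, that is, five retransmissions plus the final wait, gives about 63 s, hence the "about 60 seconds" in the patch.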
That's true, but it's pretty much true for 60s as well. For the local link we usually have between passt and guest, even 1s is an eternity.
Rather than the local link, I was thinking of whatever monitor or liveness probe in KubeVirt might have a 60-second period, or of some firewall agent, or of how long it typically takes for guests to stop and resume again in KubeVirt.
Right, I hadn't considered those. Although... do those actually re-use a single connection? I would have guessed they use a new connection each time, making the timeouts here irrelevant.
It depends on the definition of "each time", because we don't time out host-side connections immediately.
Hm, ok. Is your concern that getting a negative answer from the probe will take too long?
Pretending passt isn't there, the timeout would come from the default values for TCP connections. It looks like there's no specific SO_SNDTIMEO value set for those probes, and you can't configure the timeout, at least according to:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-...
My guess is that the probe would time out at the application level long before the TCP layer does, but I don't know for sure.
and for tcp_syn_retries, tcp(7) says:
The default value is 6, which corresponds to retrying for up to approximately 127 seconds.
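(That matches the same doubling series: an initial 1 s timeout plus six retries at 2, 4, 8, 16, 32 and 64 s adds up to 127 s.)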
In this series, to make things transparent, we read out those values, so that part is fine. But does the Linux kernel clamp the RTO?
It turns out that yes, it does: TCP_RTO_MAX_SEC is 120 seconds (before commit 1280c26228bd ("tcp: add tcp_rto_max_ms sysctl") it was TCP_RTO_MAX, with the same value), and it's used by tcp_retransmit_timer() via tcp_rto_max(). That change also makes the maximum configurable.
I'm tempted to suggest that we should read out that value as well (with a 120-second fallback for older kernels) to make our behaviour as transparent as possible.
It's slightly more complicated and perhaps not strictly needed, but we've been bitten a few times by cases where applications and users expected us to behave like the Linux kernel and we didn't... so maybe we could do this as well while we're at it? Given the rest of this series, it looks like a relatively small addition.
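Just to make the idea concrete, a minimal sketch of that readout (the helper name, and reading /proc directly, are my assumptions, not something from this series):

	/* Sketch: read net.ipv4.tcp_rto_max_ms, falling back to the old
	 * TCP_RTO_MAX_SEC (120 s) on kernels without 1280c26228bd.
	 */
	#include <stdio.h>

	#define RTO_MAX_FALLBACK_MS	(120 * 1000)	/* TCP_RTO_MAX_SEC */

	static long tcp_rto_max_ms(void)
	{
		FILE *f = fopen("/proc/sys/net/ipv4/tcp_rto_max_ms", "r");
		long val = RTO_MAX_FALLBACK_MS;

		if (f) {
			if (fscanf(f, "%ld", &val) != 1)
				val = RTO_MAX_FALLBACK_MS;
			fclose(f);
		}

		return val;
	}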
I think that's a good idea. It's a bit more work, but it doesn't greatly increase the conceptual complexity and will more closely match the kernel's behaviour.

-- 
David Gibson (he or they)		| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au		| minimalist, thank you, not the other way
					| around.  http://www.ozlabs.org/~dgibson