On Fri, Dec 05, 2025 at 02:20:12AM +0100, Stefano Brivio wrote:
On Fri, 5 Dec 2025 11:08:06 +1100 David Gibson wrote:
On Thu, Dec 04, 2025 at 08:45:37AM +0100, Stefano Brivio wrote:
...instead of checking whether it's less than SNDBUF_SMALL, because this isn't simply an optimisation to coalesce ACK segments: we rely on getting enough data at once from the sender to make the buffer grow via the TCP buffer size auto-tuning implemented in the Linux kernel.
Use SNDBUF_BIG: above that, we don't need auto-tuning (even though it might happen). SNDBUF_SMALL is too... small.
Do you have an idea of how often sndbuf exceeds SNDBUF_BIG? I'm wondering if by making this change we might have largely eliminated the first branch in practice.
Before this series, or after 6/8 in this series, it happens quite often. It depends on the bandwidth-delay product, of course, but at 1 Gbps and 20 ms RTT we get there within a couple of seconds.
Maybe 1 MiB would make more sense for typical conditions, but I'd defer this to a more adaptive implementation of the whole thing. I think it should also depend on the RTT, ideally.
Ok. Adding that context to the commit message might be useful.
Otherwise,
Reviewed-by: David Gibson
Signed-off-by: Stefano Brivio
---
 tcp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tcp.c b/tcp.c
index e4c5a5b..fbf97a0 100644
--- a/tcp.c
+++ b/tcp.c
@@ -1079,7 +1079,7 @@ int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
 	if (bytes_acked_cap && !force_seq && !CONN_IS_CLOSING(conn) &&
 	    !(conn->flags & LOCAL) && !tcp_rtt_dst_low(conn) &&
-	    (unsigned)SNDBUF_GET(conn) >= SNDBUF_SMALL) {
+	    (unsigned)SNDBUF_GET(conn) >= SNDBUF_BIG) {
 		if (!tinfo) {
 			tinfo = &tinfo_new;
 			if (getsockopt(s, SOL_TCP, TCP_INFO, tinfo, &sl))
-- 
2.43.0
-- 
Stefano
-- 
David Gibson (he or they)	| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you, not the other way
				| around.
http://www.ozlabs.org/~dgibson