On Thu, Dec 04, 2025 at 08:45:37AM +0100, Stefano Brivio wrote:
> ...instead of checking whether it's less than SNDBUF_SMALL, because
> this isn't simply an optimisation to coalesce ACK segments: we rely
> on getting enough data at once from the sender for the buffer to grow
> through the TCP buffer size auto-tuning implemented in the Linux
> kernel.
>
> Use SNDBUF_BIG: above that, we don't need auto-tuning (even though it
> might still happen). SNDBUF_SMALL is too... small.
Do you have an idea of how often sndbuf exceeds SNDBUF_BIG? I'm wondering if by making this change we might have largely eliminated the first branch in practice.
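For what it's worth, something as simple as the throwaway probe below would
give a rough feel for it outside of passt: connect to a local sink (iperf3 -s
on 5201 in this example), keep writing, and count how often SO_SNDBUF is at or
above the threshold as the kernel auto-tunes the send buffer. Untested sketch,
and the 4 MiB value is only a stand-in I picked, not taken from tcp.c:

/* Throwaway probe, not passt code: write to a local TCP sink and watch
 * the kernel auto-tune the send buffer, reporting how often SO_SNDBUF
 * is at or above an assumed 4 MiB threshold (stand-in for SNDBUF_BIG).
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define ASSUMED_SNDBUF_BIG	(4 * 1024 * 1024)	/* guess, see tcp.c */

int main(void)
{
	struct sockaddr_in a = { .sin_family = AF_INET,
				 .sin_port = htons(5201) };	/* e.g. iperf3 -s */
	char buf[64 * 1024];
	int s, i, hits = 0, samples = 0;

	memset(buf, 0, sizeof(buf));
	inet_pton(AF_INET, "127.0.0.1", &a.sin_addr);

	if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0 ||
	    connect(s, (struct sockaddr *)&a, sizeof(a)))
		return 1;

	for (i = 0; i < 1000; i++) {
		int v;
		socklen_t sl = sizeof(v);

		if (write(s, buf, sizeof(buf)) < 0)
			break;

		if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, &v, &sl))
			break;

		samples++;
		if ((unsigned)v >= ASSUMED_SNDBUF_BIG)
			hits++;
	}

	printf("SO_SNDBUF >= 4 MiB in %d of %d samples\n", hits, samples);
	close(s);

	return 0;
}

Obviously no substitute for seeing what SNDBUF_GET() reports on real guest
traffic, but it shows how quickly a loopback bulk transfer gets there.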
> Signed-off-by: Stefano Brivio
> ---
>  tcp.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tcp.c b/tcp.c
> index e4c5a5b..fbf97a0 100644
> --- a/tcp.c
> +++ b/tcp.c
> @@ -1079,7 +1079,7 @@ int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
>  	if (bytes_acked_cap && !force_seq &&
>  	    !CONN_IS_CLOSING(conn) && !(conn->flags & LOCAL) &&
>  	    !tcp_rtt_dst_low(conn) &&
> -	    (unsigned)SNDBUF_GET(conn) >= SNDBUF_SMALL) {
> +	    (unsigned)SNDBUF_GET(conn) >= SNDBUF_BIG) {
>  		if (!tinfo) {
>  			tinfo = &tinfo_new;
>  			if (getsockopt(s, SOL_TCP, TCP_INFO, tinfo, &sl))
> -- 
> 2.43.0
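As an aside, for anyone reading the hunk out of context: the branch this
threshold guards is the one that fetches TCP_INFO itself when the caller
didn't pass a tinfo in. Outside of passt, that query boils down to something
like the untested sketch below; field names are the long-standing ones from
<netinet/tcp.h>, and s is assumed to be an already connected TCP socket:

/* Untested sketch, not passt code: dump a few TCP_INFO fields on an
 * already connected TCP socket s, i.e. the same kind of query the
 * guarded branch above falls back to when no tinfo was passed in.
 */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>	/* struct tcp_info, TCP_INFO, SOL_TCP */

static int dump_tcp_info(int s)
{
	struct tcp_info tinfo;
	socklen_t sl = sizeof(tinfo);

	if (getsockopt(s, SOL_TCP, TCP_INFO, &tinfo, &sl))
		return -1;

	printf("rtt: %u us, snd_mss: %u, snd_cwnd: %u, retrans: %u\n",
	       tinfo.tcpi_rtt, tinfo.tcpi_snd_mss, tinfo.tcpi_snd_cwnd,
	       tinfo.tcpi_total_retrans);

	return 0;
}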
-- 
David Gibson (he or they)          | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au     | minimalist, thank you, not the other way
                                   | around.
http://www.ozlabs.org/~dgibson