...instead of checking if it's less than SNDBUF_SMALL, because this
isn't simply an optimisation to coalesce ACK segments: we rely on
getting enough data at once from the sender so that the TCP send
buffer auto-tuning implemented in the Linux kernel can grow the
buffer. Use SNDBUF_BIG: above that, we no longer depend on
auto-tuning (even though it might still happen). SNDBUF_SMALL is
too... small.
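
As a minimal sketch of the idea (not passt code): only pay for the
TCP_INFO query once the kernel has already auto-tuned the send buffer
to a large size. The 4 MiB threshold and the sndbuf_is_big() helper
below are illustrative assumptions standing in for SNDBUF_BIG and
SNDBUF_GET(), whose real definitions live in tcp.c.

	/* Sketch only: query SO_SNDBUF and decide whether the full
	 * TCP_INFO lookup (and possible window shrink) is worthwhile.
	 */
	#include <stdio.h>
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <netinet/tcp.h>
	#include <unistd.h>

	/* Assumed threshold, stand-in for SNDBUF_BIG */
	#define SNDBUF_BIG_GUESS	(4UL * 1024 * 1024)

	static int sndbuf_is_big(int s)
	{
		int v = 0;
		socklen_t sl = sizeof(v);

		/* SO_SNDBUF as reported by the kernel grows with
		 * auto-tuning (net.ipv4.tcp_wmem) as long as the
		 * sender keeps the buffer busy with large writes.
		 */
		if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, &v, &sl))
			return 0;

		return (unsigned)v >= SNDBUF_BIG_GUESS;
	}

	int main(void)
	{
		int s = socket(AF_INET, SOCK_STREAM, 0);
		struct tcp_info ti;
		socklen_t sl = sizeof(ti);

		if (s < 0)
			return 1;

		if (sndbuf_is_big(s)) {
			/* Buffer already grew: auto-tuning no longer
			 * depends on keeping the advertised window
			 * wide open, so querying TCP_INFO is fine.
			 */
			if (!getsockopt(s, SOL_TCP, TCP_INFO, &ti, &sl))
				printf("TCP_INFO fetched\n");
		} else {
			printf("send buffer still small, skip TCP_INFO\n");
		}

		close(s);
		return 0;
	}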
Signed-off-by: Stefano Brivio
---
tcp.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tcp.c b/tcp.c
index e4c5a5b..fbf97a0 100644
--- a/tcp.c
+++ b/tcp.c
@@ -1079,7 +1079,7 @@ int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
if (bytes_acked_cap && !force_seq &&
!CONN_IS_CLOSING(conn) &&
!(conn->flags & LOCAL) && !tcp_rtt_dst_low(conn) &&
- (unsigned)SNDBUF_GET(conn) >= SNDBUF_SMALL) {
+ (unsigned)SNDBUF_GET(conn) >= SNDBUF_BIG) {
if (!tinfo) {
tinfo = &tinfo_new;
if (getsockopt(s, SOL_TCP, TCP_INFO, tinfo, &sl))
--
2.43.0