Nit: v1 in the subject tag is not necessary (not harmful either): if
there are no version tags, it's implicit we're talking about version 1.
On Fri, 10 Apr 2026 13:25:39 +0530
Anshu Kumari wrote:
tcp_send_flag() can return error codes: from tcp_prepare_flags() failing the TCP_INFO query, or from a failure to collect buffers on the vhost-user path. These errors indicate that the connection needs to be reset.
Most callers of tcp_send_flag() were ignoring the error code and carrying on as if nothing was wrong. Check the return value at each call site and handle the error appropriately:

- in tcp_data_from_tap(), return -1 so the caller resets
- in tcp_tap_handler(), goto reset
- in tcp_timer_handler()/tcp_sock_handler()/tcp_conn_from_sock_finish(), call tcp_rst() and return
- in tcp_tap_conn_from_sock(), set CLOSING flag (flow not yet active)
- in tcp_keepalive(), call tcp_rst() and continue the loop
- in tcp_flow_migrate_target_ext(), goto fail
The call in tcp_rst_do() is left unchecked: we are already resetting, and tcp_sock_rst() still needs to run regardless.
Nit: we always use Link: tags (CONTRIBUTING.md uses the plural, which might be a bit confusing; I guess we should fix that). Rationale:

https://archives.passt.top/passt-dev/20230704132104.48106368@elisabeth/
https://archives.passt.top/passt-dev/20251105163137.424a6537@elisabeth/

But I fix up these tags on merge anyway, no need to re-send (in general).
Signed-off-by: Anshu Kumari
---
 tcp.c | 59 ++++++++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 44 insertions(+), 15 deletions(-)

diff --git a/tcp.c b/tcp.c
index 8ea9be8..9ce671a 100644
--- a/tcp.c
+++ b/tcp.c
@@ -1917,7 +1917,9 @@ static int tcp_data_from_tap(const struct ctx *c, struct tcp_tap_conn *conn,
 			 "keep-alive sequence: %u, previous: %u",
 			 seq, conn->seq_from_tap);
-		tcp_send_flag(c, conn, ACK);
+		if (tcp_send_flag(c, conn, ACK))
+			return -1;
A general comment: in _some_ of these cases where we fail to send ACK segments, I intentionally didn't check for errors and let the connection live on, because that looked like the most graceful failure handling to me. After all, ACK segments without data are not assumed to be reliably transmitted (RFC 9293, 3.8.4), so, given that failing to send some should have a similar outcome as the peer missing some, I guess we're always expected to recover from a situation like that.

This doesn't apply to other occurrences below where we fail to send a SYN segment, or where failure to send ACK segments might mean we end up in some unexpected state (including a connection that might get stuck forever).

But reading David's description of bug #194, I wonder if he had something else in mind. That is, I don't have a strong preference against resetting the connection whenever we fail to prepare buffers, but in many of these cases we don't really _have to_ reset the connection. David, do you see this differently?
+
 		tcp_timer_ctl(c, conn);
 		if (setsockopt(conn->sock, SOL_SOCKET, SO_KEEPALIVE,

@@ -2043,14 +2045,16 @@ eintr:
 		 * Then swiftly looked away and left. */
 		conn->seq_from_tap = seq_from_tap;
-		tcp_send_flag(c, conn, ACK);
+		if (tcp_send_flag(c, conn, ACK))
+			return -1;
 	}
 	if (errno == EINTR)
 		goto eintr;
 	if (errno == EAGAIN || errno == EWOULDBLOCK) {
-		tcp_send_flag(c, conn, ACK | DUP_ACK);
+		if (tcp_send_flag(c, conn, ACK | DUP_ACK))
+			return -1;
 		return p->count - idx;
 	}

@@ -2070,7 +2074,8 @@ out:
 	 */
 	if (conn->seq_dup_ack_approx != (conn->seq_from_tap & 0xff)) {
 		conn->seq_dup_ack_approx = conn->seq_from_tap & 0xff;
-		tcp_send_flag(c, conn, ACK | DUP_ACK);
+		if (tcp_send_flag(c, conn, ACK | DUP_ACK))
+			return -1;
 	}
 	return p->count - idx;
 }

@@ -2084,7 +2089,8 @@ out:
 		conn_event(c, conn, TAP_FIN_RCVD);
 	} else {
-		tcp_send_flag(c, conn, ACK_IF_NEEDED);
+		if (tcp_send_flag(c, conn, ACK_IF_NEEDED))
+			return -1;
 	}
 	return p->count - idx;

@@ -2122,7 +2128,10 @@ static void tcp_conn_from_sock_finish(const struct ctx *c,
 		return;
 	}
-	tcp_send_flag(c, conn, ACK);
+	if (tcp_send_flag(c, conn, ACK)) {
+		tcp_rst(c, conn);
+		return;
+	}
 	/* The client might have sent data already, which we didn't
 	 * dequeue waiting for SYN,ACK from tap -- check now.

@@ -2308,7 +2317,9 @@ int tcp_tap_handler(const struct ctx *c, uint8_t pif, sa_family_t af,
 		goto reset;
 	}
-		tcp_send_flag(c, conn, ACK);
+		if (tcp_send_flag(c, conn, ACK))
+			goto reset;
+
 		conn_event(c, conn, SOCK_FIN_SENT);
 		return 1;

@@ -2388,7 +2399,9 @@ int tcp_tap_handler(const struct ctx *c, uint8_t pif, sa_family_t af,
 	}
 		conn_event(c, conn, SOCK_FIN_SENT);
-		tcp_send_flag(c, conn, ACK);
+		if (tcp_send_flag(c, conn, ACK))
+			goto reset;
+
 		ack_due = 0;
 	/* If we received a FIN, but the socket is in TCP_ESTABLISHED

@@ -2478,7 +2491,11 @@ static void tcp_tap_conn_from_sock(const struct ctx *c, union flow *flow,
conn->wnd_from_tap = WINDOW_DEFAULT;
-	tcp_send_flag(c, conn, SYN);
+	if (tcp_send_flag(c, conn, SYN)) {
+		conn_flag(c, conn, CLOSING);
I would wait for David to confirm, but I'm fairly sure that this needs FLOW_ACTIVATE(conn); before returning, just like in the other error path of this function, because otherwise we'll leave the newly created flow in an "incomplete" state.

Due to flow table restrictions we adopted to keep the implementation simple (see "Theory of Operation - allocating and freeing flow entries" in flow.c), quoting from the documentation to enum flow_state in flow.h:

 * Caveats:
 *    - At most one entry may be NEW, INI, TGT or TYPED at a time, so
 *      it's unsafe to use flow_alloc() again until this entry moves to
 *      ACTIVE or FREE

so, if we create a second connection within the same epoll cycle (for example by calling tcp_tap_conn_from_sock() again), we'll now have two entries in state TYPED, which breaks this assumption, and things will break.

David, I think this isn't documented very obviously, even though it's all there in flow.h. This just occurred to me because of commit 52419a64f2df ("migrate, tcp: Don't flow_alloc_cancel() during incoming migration"), but we can't expect others to know about past commits. I wonder if you could think of a quick way to make this more prominent... should we perhaps state return conditions in functions, like you already added for isolation.c?
+		return;
+	}
+
 	conn_flag(c, conn, ACK_FROM_TAP_DUE);
 	tcp_get_sndbuf(conn);

@@ -2585,7 +2602,10 @@ void tcp_timer_handler(const struct ctx *c, union epoll_ref ref)
 		return;
 	if (conn->flags & ACK_TO_TAP_DUE) {
-		tcp_send_flag(c, conn, ACK_IF_NEEDED);
+		if (tcp_send_flag(c, conn, ACK_IF_NEEDED)) {
+			tcp_rst(c, conn);
+			return;
+		}
 		tcp_timer_ctl(c, conn);
 	} else if (conn->flags & ACK_FROM_TAP_DUE) {
 		if (!(conn->events & ESTABLISHED)) {

@@ -2598,7 +2618,10 @@ void tcp_timer_handler(const struct ctx *c, union epoll_ref ref)
 			tcp_rst(c, conn);
 		} else {
 			flow_trace(conn, "SYN timeout, retry");
-			tcp_send_flag(c, conn, SYN);
+			if (tcp_send_flag(c, conn, SYN)) {
+				tcp_rst(c, conn);
+				return;
+			}
 			conn->retries++;
 			conn_flag(c, conn, SYN_RETRIED);
 			tcp_timer_ctl(c, conn);

@@ -2662,8 +2685,11 @@ void tcp_sock_handler(const struct ctx *c, union epoll_ref ref,
 		tcp_data_from_sock(c, conn);
 	if (events & EPOLLOUT) {
-		if (tcp_update_seqack_wnd(c, conn, false, NULL))
-			tcp_send_flag(c, conn, ACK);
+		if (tcp_update_seqack_wnd(c, conn, false, NULL) &&
+		    tcp_send_flag(c, conn, ACK)) {
+			tcp_rst(c, conn);
+			return;
+		}
 	}
 	return;

@@ -2903,7 +2929,8 @@ static void tcp_keepalive(struct ctx *c, const struct timespec *now)
 		if (conn->tap_inactive) {
 			flow_dbg(conn,
 				 "No tap activity for least %us, send keepalive",
 				 KEEPALIVE_INTERVAL);
-			tcp_send_flag(c, conn, KEEPALIVE);
+			if (tcp_send_flag(c, conn, KEEPALIVE))
+				tcp_rst(c, conn);
 		}
 	/* Ready to check fot next interval */

@@ -3926,7 +3953,9 @@ int tcp_flow_migrate_target_ext(struct ctx *c, struct tcp_tap_conn *conn, int fd
 	if (tcp_set_peek_offset(conn, peek_offset))
 		goto fail;
-	tcp_send_flag(c, conn, ACK);
+	if (tcp_send_flag(c, conn, ACK))
+		goto fail;
+
 	tcp_data_from_sock(c, conn);
if ((rc = tcp_epoll_ctl(conn))) {
-- 
Stefano