On Fri, May 03, 2024 at 03:42:55PM +0200, Stefano Brivio wrote:
On Thu, 2 May 2024 11:31:52 +1000 David Gibson wrote:
On Wed, May 01, 2024 at 04:28:38PM -0400, Jon Maloy wrote:
[snip]
 	/* Receive into buffers, don't dequeue until acknowledged by guest. */
 	do
 		len = recvmsg(s, &mh_sock, MSG_PEEK);
@@ -2195,7 +2220,10 @@ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
 		return 0;
 	}
 
-	sendlen = len - already_sent;
+	sendlen = len;
+	if (!peek_offset_cap)
+		sendlen -= already_sent;
+
 	if (sendlen <= 0) {
 		conn_flag(c, conn, STALLED);
 		return 0;
@@ -2365,9 +2393,17 @@ static int tcp_data_from_tap(struct ctx *c, struct tcp_tap_conn *conn,
 			flow_trace(conn,
 				   "fast re-transmit, ACK: %u, previous sequence: %u",
 				   max_ack_seq, conn->seq_to_tap);
+
+			/* Ensure seq_from_tap isn't updated twice after call */
+			tcp_l2_data_buf_flush(c);
tcp_l2_data_buf_flush() was replaced by tcp_payload_flush() in a recently merged change from Laurent.
IIUC, this is necessary because otherwise our update to seq_to_tap can be clobbered from tcp_payload_flush() when we process the queued-but-not-sent frames.

...but Jon's comment refers to seq_from_tap (not seq_to_tap)? I'm confused.

I missed this, but as Jon clarified, it's supposed to be seq_to_tap here.

...how? I don't quite understand the issue here: tcp_payload_flush() updates seq_to_tap once we send the frames, not before, right?
Yes, but that's not relevant. The problem arises when (1) we queue some frames without knowledge of the fast re-transmit, *then* (2) we realise we need to re-transmit. This can happen if we have activity on both the sock side and the tap side of the same connection in the same epoll cycle: if we process the socket side first, we do (1), then while processing the tap side we can see dup ACKs triggering (2). At (2) we rewind seq_to_tap, but when we flush the frames queued at (1) we advance it again, incorrectly - we should only be advancing it when we (re-)transmit data from the rewound point. So, either we flush out everything *before* we wind back seq_to_tap (Jon's approach), or we cancel those queued frames (more optimal, but more complex to implement).
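Spelled out as code, the ordering looks roughly like this (a sketch only, not the exact hunk, and keeping the tcp_l2_data_buf_flush() name the patch uses rather than the new tcp_payload_flush()):

	/* Sketch: fast re-transmit branch of tcp_data_from_tap().
	 * Frames queued earlier in this epoll cycle -- step (1) above --
	 * must leave the buffers before we rewind; flushing them afterwards
	 * would advance seq_to_tap past the rewound value again.
	 */
	tcp_l2_data_buf_flush(c);		/* drain frames queued at (1) */

	conn->seq_ack_from_tap = max_ack_seq;
	conn->seq_to_tap = max_ack_seq;		/* (2) rewind, now safe */
	set_peek_offset(conn, 0);		/* peek again from the ACKed point */
	tcp_data_from_sock(c, conn);		/* re-queue data from the rewound point */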
This seems like a correct fix, but not an optimal one: we're flushing out data we've already determined we're going to retransmit. Instead, I think we want a different helper that simply discards the queued frames - I'm thinking maybe we actually want a helper that's called from both the fast and slow retransmit paths and handles that.

Don't we always send (within the same epoll_wait() cycle) what we queued? What am I missing?

We do (or at least should), yes. But in a single epoll cycle we can queue frames (triggered by socket activity) before we realise we have to retransmit (triggered by tap or timer activity).

Ah, wait, we only want to discard queued frames that belong to this connection, and that's trickier.
It seems to me this is a pre-existing bug that we just managed to get away with previously. I think it's at least one cause of the weirdly jumping forwarding sequence numbers you observed. So I think we want a patch fixing this that goes before the SO_PEEK_OFF changes.
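For what it's worth, the discard variant I have in mind would look something like the sketch below. The bookkeeping names (pending_frame_conn[], pending_frame_iov[], pending_frame_used, TCP_FRAMES_MEM as the array size) are made up for illustration only, and it assumes passt's struct tcp_tap_conn plus <sys/uio.h>'s struct iovec - the real queued-frame state is tracked differently - but it shows why per-connection discard is trickier than a blanket flush: frames for other connections have to be kept and compacted.

	/* Hypothetical sketch of a per-connection discard helper, to be
	 * called from both the fast and slow retransmit paths before
	 * rewinding seq_to_tap. All names below are illustrative only.
	 */
	static const struct tcp_tap_conn *pending_frame_conn[TCP_FRAMES_MEM];
	static struct iovec pending_frame_iov[TCP_FRAMES_MEM];
	static unsigned pending_frame_used;

	static void tcp_conn_drop_queued(const struct tcp_tap_conn *conn)
	{
		unsigned i, j = 0;

		for (i = 0; i < pending_frame_used; i++) {
			if (pending_frame_conn[i] == conn)
				continue;	/* drop this connection's frame */

			/* keep and compact frames for other connections */
			pending_frame_conn[j] = pending_frame_conn[i];
			pending_frame_iov[j]  = pending_frame_iov[i];
			j++;
		}
		pending_frame_used = j;
	}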
+
 			conn->seq_ack_from_tap = max_ack_seq;
 			conn->seq_to_tap = max_ack_seq;
+			set_peek_offset(conn, 0);
 			tcp_data_from_sock(c, conn);
+
+			/* Empty queue before any POLL event tries to send it again */
+			tcp_l2_data_buf_flush(c);
I'm not clear on what the second flush call is for. The only frames queued should be those added by the tcp_data_from_sock() just above, and those should be flushed when we get to tcp_defer_handler() before we return to the epoll loop.
 		}
 
 	if (!iov_i)
@@ -2459,6 +2495,7 @@ static void tcp_conn_from_sock_finish(struct ctx *c, struct tcp_tap_conn *conn,
 	conn->seq_ack_to_tap = conn->seq_from_tap;
 
 	conn_event(c, conn, ESTABLISHED);
+	set_peek_offset(conn, 0);
 
 	/* The client might have sent data already, which we didn't
 	 * dequeue waiting for SYN,ACK from tap -- check now.
@@ -2539,6 +2576,7 @@ int tcp_tap_handler(struct ctx *c, uint8_t pif, int af,
 			goto reset;
 
 		conn_event(c, conn, ESTABLISHED);
+		set_peek_offset(conn, 0);
The set_peek_offset() could go into conn_event() to avoid the duplication. Not sure if it's worth it or not.
I wouldn't do that in conn_event(): the existing side effects there are kind of expected, but set_peek_offset() isn't so closely related to TCP events, I'd say.
Fair enough.
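For reference, my understanding is that set_peek_offset() amounts to a thin wrapper along these lines (a sketch, not the exact code from the patch; it assumes the socket fd lives in conn->sock):

	/* Sketch: no-op when the tcp_init() probe found no SO_PEEK_OFF
	 * support, otherwise tell the kernel where the next MSG_PEEK on
	 * this socket should start reading from.
	 */
	static void set_peek_offset(const struct tcp_tap_conn *conn, int offset)
	{
		if (!peek_offset_cap)
			return;

		if (setsockopt(conn->sock, SOL_SOCKET, SO_PEEK_OFF,
			       &offset, sizeof(offset)))
			flow_dbg(conn, "failed to set SO_PEEK_OFF to %i", offset);
	}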
 	if (th->fin) {
 		conn->seq_from_tap++;
@@ -2705,7 +2743,7 @@ void tcp_listen_handler(struct ctx *c, union epoll_ref ref,
 	struct sockaddr_storage sa;
 	socklen_t sl = sizeof(sa);
 	union flow *flow;
-	int s;
+	int s = 0;
 
 	if (c->no_tcp || !(flow = flow_alloc()))
 		return;
@@ -2767,7 +2805,10 @@ void tcp_timer_handler(struct ctx *c, union epoll_ref ref)
 			flow_dbg(conn, "ACK timeout, retry");
 			conn->retrans++;
 			conn->seq_to_tap = conn->seq_ack_from_tap;
+			set_peek_offset(conn, 0);
+			tcp_l2_data_buf_flush(c);
Uh.. doesn't this flush have to go *before* the seq_to_tap update, for the reasons discussed above?
 			tcp_data_from_sock(c, conn);
+			tcp_l2_data_buf_flush(c);
I don't understand the purpose of these new tcp_l2_data_buf_flush() calls. If they fix a pre-existing issue (but I'm not sure which one), they should be in a different patch.
As noted above I understand the purpose of the first one, but not the second.
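Concretely, I'd expect the timeout branch to end up ordered like this (a sketch, mirroring the fast re-transmit case above), with the second flush dropped unless it turns out to address something separate:

	/* Sketch: ACK timeout (slow re-transmit), flush moved before the
	 * rewind so frames queued earlier in the cycle can't re-advance
	 * seq_to_tap after we've wound it back.
	 */
	flow_dbg(conn, "ACK timeout, retry");
	conn->retrans++;
	tcp_l2_data_buf_flush(c);		/* drain frames queued earlier this cycle */
	conn->seq_to_tap = conn->seq_ack_from_tap;
	set_peek_offset(conn, 0);
	tcp_data_from_sock(c, conn);
	tcp_timer_ctl(c, conn);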
 			tcp_timer_ctl(c, conn);
 		}
 	} else {
@@ -3041,7 +3082,8 @@ static void tcp_sock_refill_init(const struct ctx *c)
  */
 int tcp_init(struct ctx *c)
 {
-	unsigned b;
+	unsigned int b, optv = 0;
+	int s;
 
 	for (b = 0; b < TCP_HASH_TABLE_SIZE; b++)
 		tc_hash[b] = FLOW_SIDX_NONE;
@@ -3065,6 +3107,16 @@ int tcp_init(struct ctx *c)
 		NS_CALL(tcp_ns_socks_init, c);
 	}
 
+	/* Probe for SO_PEEK_OFF support */
+	s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);
+	if (s < 0) {
+		warn("Temporary TCP socket creation failed");
+	} else {
+		if (!setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &optv, sizeof(int)))
+			peek_offset_cap = true;
+		close(s);
+	}
+	debug("SO_PEEK_OFF%ssupported", peek_offset_cap ? " " : " not ");
 	return 0;
 }
-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson