On Fri, Dec 05, 2025 at 01:51:43AM +0100, Stefano Brivio wrote:
On Wed, 5 Nov 2025 14:49:59 +1100, David Gibson wrote:
On Mon, Nov 03, 2025 at 11:16:12AM +0100, Stefano Brivio wrote:
For both TCP and UDP, we request vhost-user buffers that are large enough to reach ETH_ZLEN (60 bytes), so padding is just a matter of increasing the appropriate iov_len and clearing bytes in the buffer as needed.
Link: https://bugs.passt.top/show_bug.cgi?id=166
Signed-off-by: Stefano Brivio
I think this is correct, apart from the nasty bug Laurent spotted.
I'm less certain whether this is the most natural way to do it.
---
 tcp.c          |  2 --
 tcp_internal.h |  1 +
 tcp_vu.c       | 27 +++++++++++++++++++++++++++
 udp_vu.c       | 11 ++++++++++-
 4 files changed, 38 insertions(+), 3 deletions(-)

diff --git a/tcp.c b/tcp.c
index e91c0cf..039688d 100644
--- a/tcp.c
+++ b/tcp.c
@@ -335,8 +335,6 @@ enum {
 };
 #endif
 
-/* MSS rounding: see SET_MSS() */
-#define MSS_DEFAULT 536
 #define WINDOW_DEFAULT 14600 /* RFC 6928 */
 
 #define ACK_INTERVAL 10 /* ms */

diff --git a/tcp_internal.h b/tcp_internal.h
index 5f8fb35..d2295c9 100644
--- a/tcp_internal.h
+++ b/tcp_internal.h
@@ -12,6 +12,7 @@
 #define BUF_DISCARD_SIZE (1 << 20)
 #define DISCARD_IOV_NUM DIV_ROUND_UP(MAX_WINDOW, BUF_DISCARD_SIZE)
 
+#define MSS_DEFAULT /* and minimum */ 536 /* as it comes from minimum MTU */
 #define MSS4 ROUND_DOWN(IP_MAX_MTU -		\
 		sizeof(struct tcphdr) -		\
 		sizeof(struct iphdr),		\

diff --git a/tcp_vu.c b/tcp_vu.c
index 1c81ce3..7239401 100644
--- a/tcp_vu.c
+++ b/tcp_vu.c
@@ -60,6 +60,29 @@ static size_t tcp_vu_hdrlen(bool v6)
 	return hdrlen;
 }
 
+/**
+ * tcp_vu_pad() - Pad 802.3 frame to minimum length (60 bytes) if needed
+ * @iov:	iovec array storing 802.3 frame with TCP segment inside
+ * @cnt:	Number of entries in @iov
+ */
+static void tcp_vu_pad(struct iovec *iov, size_t cnt)
+{
+	size_t l2len, pad;
+
+	ASSERT(iov_size(iov, cnt) >= sizeof(struct virtio_net_hdr_mrg_rxbuf));
+	l2len = iov_size(iov, cnt) - sizeof(struct virtio_net_hdr_mrg_rxbuf);
Re-obtaining l2len from iov_size() seems kind of awkward, since the callers should already know the length - they've just used it to populate iov_len.
That's only the case for tcp_vu_send_flag(), though: tcp_vu_data_from_sock() can use split buffers, and the iov_len of the first element is not the same as the whole frame length.
Yes, but..
That is, you could (very much in theory) have iov_len set to 50 for the first iov item, set to 4 for the second iov item, and the frame needs padding, but you can't tell from the first iov item itself.
Before we call tcp_vu_prepare() on that path, we've already calculated 'dlen' from iov_size, here we're calling iov_size a second time.
+	if (l2len >= ETH_ZLEN)
+		return;
+
+	pad = ETH_ZLEN - l2len;
+
+	/* tcp_vu_sock_recv() requests at least MSS-sized vhost-user buffers */
+	static_assert(ETH_ZLEN <= MSS_DEFAULT);
So, this is true for the data path, but not AFAICT for the flags path.
There _is_ still enough space in this case, because we request space for (tcp_vu_hdrlen() + sizeof(struct tcp_syn_opts)), which works out to:

  ETH_HLEN      14
  IP header     20
  TCP header    20
  tcp_syn_opts   8
               ----
                62 > ETH_ZLEN
But the comment and assert are misleading.
Dropped, in favour of:
It seems like it would make more sense to clamp the length to ETH_ZLEN as a lower bound before we vu_collect() the buffers.
this.
👍
Or indeed, like we should be calculating l2len already including the clamping.
That's not trivial to do for the data path, I think (see above). I think it would be doable with a rework of the tcp_vu_data_from_sock() loop but I'd say it's beyond the scope of this series.
Yeah, fair enough.
+	memset(&iov[cnt - 1].iov_base + iov[cnt - 1].iov_len, 0, pad);
+	iov[cnt - 1].iov_len += pad;
+}
+
 /**
  * tcp_vu_send_flag() - Send segment with flags to vhost-user (no payload)
  * @c:		Execution context
@@ -138,6 +161,8 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 	tcp_fill_headers(c, conn, NULL, eh, ip4h, ip6h, th, &payload, NULL,
 			 seq, !*c->pcap);
 
+	tcp_vu_pad(&flags_elem[0].in_sg[0], 1);
+
 	if (*c->pcap) {
 		pcap_iov(&flags_elem[0].in_sg[0], 1,
 			 sizeof(struct virtio_net_hdr_mrg_rxbuf));
@@ -456,6 +481,8 @@ int tcp_vu_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn)
 
 	tcp_vu_prepare(c, conn, iov, buf_cnt, &check, !*c->pcap, push);
 
+	tcp_vu_pad(iov, buf_cnt);
+
 	if (*c->pcap) {
 		pcap_iov(iov, buf_cnt,
 			 sizeof(struct virtio_net_hdr_mrg_rxbuf));

diff --git a/udp_vu.c b/udp_vu.c
index 099677f..1b60860 100644
--- a/udp_vu.c
+++ b/udp_vu.c
@@ -72,8 +72,8 @@ static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
 {
 	const struct vu_dev *vdev = c->vdev;
 	int iov_cnt, idx, iov_used;
+	size_t off, hdrlen, l2len;
 	struct msghdr msg = { 0 };
-	size_t off, hdrlen;
 
 	ASSERT(!c->no_udp);

@@ -116,6 +116,15 @@ static int udp_vu_sock_recv(const struct ctx *c, struct vu_virtq *vq, int s,
 	iov_vu[idx].iov_len = off;
 	iov_used = idx + !!off;
 
+	/* pad 802.3 frame to 60 bytes if needed */
+	l2len = *dlen + hdrlen - sizeof(struct virtio_net_hdr_mrg_rxbuf);
+	if (l2len < ETH_ZLEN) {
+		size_t pad = ETH_ZLEN - l2len;
+
+		iov_vu[idx].iov_len += pad;
+		memset(&iov_vu[idx].iov_base + off, 0, pad);
+	}
+
 	vu_set_vnethdr(vdev, iov_vu[0].iov_base, iov_used);
 
 	/* release unused buffers */
-- 
2.43.0
-- 
Stefano
-- 
David Gibson (he or they)	| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you, not the other way
				| around.
http://www.ozlabs.org/~dgibson