On 4/10/26 09:28, David Gibson wrote:
On Fri, Apr 03, 2026 at 06:38:11PM +0200, Laurent Vivier wrote:
The per-protocol padding previously done by vu_pad() in tcp_vu.c and udp_vu.c was only correct for single-buffer frames: it assumed the padding area always fell within the first iov and used a plain memset(), which could write past that buffer's end.
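For reference, a minimal sketch of what multi-buffer-safe padding looks like, with a simplified stand-in for iov_memset() (the real helper lives in iov.c) and illustrative ETH_ZLEN/VNET_HLEN values; not the exact passt code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <sys/uio.h>

#define ETH_ZLEN  60	/* illustrative: minimum Ethernet frame length */
#define VNET_HLEN 12	/* illustrative: virtio-net header length */

/* Like memset(), but spanning an iovec array, starting @offset bytes
 * into the area; stops early if the iovec runs out of space.
 */
static void iov_memset(const struct iovec *iov, size_t iov_cnt,
		       size_t offset, int c, size_t length)
{
	size_t i;

	for (i = 0; i < iov_cnt && length; i++) {
		size_t len = iov[i].iov_len;

		if (offset >= len) {
			offset -= len;	/* padding starts in a later buffer */
			continue;
		}

		len -= offset;
		if (len > length)
			len = length;

		memset((char *)iov[i].iov_base + offset, c, len);
		length -= len;
		offset = 0;
	}
}

/* Zero-fill from @len up to the Ethernet minimum plus virtio-net
 * header, touching whichever buffers the padding area falls into.
 */
static void vu_pad(const struct iovec *iov, size_t iov_cnt, size_t len)
{
	if (len < ETH_ZLEN + VNET_HLEN)
		iov_memset(iov, iov_cnt, len, 0, ETH_ZLEN + VNET_HLEN - len);
}
```

With two 40-byte buffers and a 30-byte frame, the padding (bytes 30 to 71) spans both buffers, which the old single-buffer memset() could not handle.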
It also required each caller to compute MAX(..., ETH_ZLEN + VNET_HLEN) for vu_collect() and to call vu_pad() at the right point, duplicating the minimum-size logic across protocols.
Move the Ethernet minimum size enforcement into vu_collect() itself, so that enough buffer space is always reserved for padding regardless of the requested frame size.
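The reservation moved into vu_collect() amounts to clamping the requested size; a sketch of just that clamp, with a hypothetical helper name and illustrative constants:

```c
#include <assert.h>
#include <stddef.h>

#define ETH_ZLEN  60	/* illustrative: minimum Ethernet frame length */
#define VNET_HLEN 12	/* illustrative: virtio-net header length */
#define MAX(a, b) ((a) > (b) ? (a) : (b))

/* Hypothetical helper: whatever frame size the caller asks for,
 * reserve at least enough buffer space for a minimum-length Ethernet
 * frame plus the virtio-net header, so vu_pad() always has room to
 * write its padding without the caller repeating this MAX() itself.
 */
static size_t vu_reserve_size(size_t frame_size)
{
	return MAX(frame_size, ETH_ZLEN + VNET_HLEN);
}
```

This is the logic each caller previously duplicated as MAX(..., ETH_ZLEN + VNET_HLEN) in its vu_collect() call.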
Rewrite vu_pad() to take a full iovec array and use iov_memset(), making it safe for multi-buffer (mergeable rx buffer) frames.
In tcp_vu_sock_recv(), replace iov_truncate() with iov_skip_bytes(): now that all consumers receive explicit data lengths, truncating the iovecs is no longer needed. In tcp_vu_data_from_sock(), cap each frame's data length against the remaining bytes actually received from the socket, so that the last partial frame gets correct headers and sequence number advancement.
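The per-frame capping can be sketched as follows; the function name and shape are hypothetical, only the capping rule comes from the description above:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: distribute @received bytes from the socket
 * across per-frame capacities @cap[], capping the last (partial)
 * frame at the bytes actually remaining, so its headers and sequence
 * number advancement are computed from the real data length.
 * Returns the number of frames that carry data.
 */
static size_t frame_lengths(const size_t *cap, size_t n_frames,
			    size_t received, size_t *dlen)
{
	size_t i, used = 0;

	for (i = 0; i < n_frames && received; i++) {
		dlen[i] = cap[i] < received ? cap[i] : received;
		received -= dlen[i];
		used++;
	}
	return used;
}
```

For example, 2300 bytes received over three 1000-byte frames yields lengths 1000, 1000 and 300: without the cap, the last frame would be described as 1000 bytes long.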
Signed-off-by: Laurent Vivier
---
 iov.c       |  1 -
 tcp_vu.c    | 34 ++++++++++++++++++----------------
 udp_vu.c    | 14 ++++++++------
 vu_common.c | 31 +++++++++++++++----------------
 vu_common.h |  2 +-
 5 files changed, 42 insertions(+), 40 deletions(-)

diff --git a/iov.c b/iov.c
index dabc4f1ceea3..28c6d40d2986 100644
--- a/iov.c
+++ b/iov.c
@@ -180,7 +180,6 @@ size_t iov_truncate(struct iovec *iov, size_t iov_cnt, size_t size)
  * Will write less than @length bytes if it runs out of space in
  * the iov
  */
-/* cppcheck-suppress unusedFunction */
 void iov_memset(const struct iovec *iov, size_t iov_cnt, size_t offset,
 		int c, size_t length)
 {
diff --git a/tcp_vu.c b/tcp_vu.c
index 8c1894dca7fe..2dfe14485eee 100644
--- a/tcp_vu.c
+++ b/tcp_vu.c
@@ -88,7 +88,7 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 	elem_cnt = vu_collect(vdev, vq, &flags_elem[0], 1, &flags_iov[0], 1, NULL,
-			      MAX(hdrlen + sizeof(*opts), ETH_ZLEN + VNET_HLEN), NULL);
+			      hdrlen + sizeof(*opts), NULL);
 	if (elem_cnt != 1)
 		return -1;
@@ -128,8 +128,6 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 		return ret;
 	}

-	l2len = hdrlen + optlen - VNET_HLEN;
-	iov_truncate(&flags_iov[0], 1, l2len + VNET_HLEN);
 	payload = IOV_TAIL(flags_elem[0].in_sg, 1, hdrlen);

 	if (flags & KEEPALIVE)
@@ -138,17 +136,17 @@ int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn, int flags)
 	tcp_fill_headers(c, conn, eh, ip4h, ip6h, th, &payload, optlen,
 			 NULL, seq, !*c->pcap);

-	vu_pad(&flags_elem[0].in_sg[0], l2len);
-
+	vu_pad(flags_elem[0].in_sg, 1, hdrlen + optlen);
Is there a reason not to fold vu_pad() into vu_flush()?
Yes: vu_pad() needs an iovec array, while vu_flush() takes elements. See, in the TCP series, '[PATCH v5 3/4] tcp_vu: Support multibuffer frames in tcp_vu_sock_recv()' (20260403170419.3233031-4-lvivier@redhat.com):

	vu_pad(&iov[frame[i].idx_iovec], frame[i].num_iovec,
	       dlen + hdrlen);
	vu_flush(vdev, vq, &elem[frame[i].idx_element],
		 frame[i].num_element, dlen + hdrlen);
vu_flush(vdev, vq, flags_elem, 1, hdrlen + optlen);
Thanks, Laurent