On Tue, Feb 13, 2024 at 11:49 AM Paolo Abeni <pabeni@redhat.com> wrote:
> @@ -2508,7 +2508,10 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len,
> WRITE_ONCE(*seq, *seq + used);
> copied += used;
> len -= used;
> -
> + if (flags & MSG_PEEK)
> + sk_peek_offset_fwd(sk, used);
> + else
> + sk_peek_offset_bwd(sk, used);
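For anyone unfamiliar with the semantics being wired up here: sk_peek_offset_fwd()/sk_peek_offset_bwd() maintain the SO_PEEK_OFF bookkeeping that AF_UNIX sockets already have. A rough userspace sketch of the intended behaviour, over a socketpair since TCP support is exactly what this patch would add (the fallback #define is the Linux asm-generic value, in case the libc headers don't expose it):

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef SO_PEEK_OFF
#define SO_PEEK_OFF 42	/* Linux asm-generic value; not exposed by all libcs */
#endif

/* Demonstrate the SO_PEEK_OFF bookkeeping: peeks advance the offset
 * (the _fwd case), consuming reads rewind it (the _bwd case).
 * Returns 0 on success, -1 on failure. */
static int peek_off_demo(void)
{
	int fds[2], off = 0;
	char buf[8];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds))
		return -1;
	if (write(fds[1], "abcdef", 6) != 6)
		return -1;
	/* Opt in to peek offsets; the offset starts at 0. */
	if (setsockopt(fds[0], SOL_SOCKET, SO_PEEK_OFF, &off, sizeof(off)))
		return -1;

	/* Successive peeks see successive data: offset goes 0 -> 2 -> 4. */
	if (recv(fds[0], buf, 2, MSG_PEEK) != 2 || memcmp(buf, "ab", 2))
		return -1;
	if (recv(fds[0], buf, 2, MSG_PEEK) != 2 || memcmp(buf, "cd", 2))
		return -1;

	/* A consuming read of 2 bytes rewinds the offset from 4 to 2, so the
	 * next peek (relative to the new queue head "cdef") yields "ef". */
	if (recv(fds[0], buf, 2, 0) != 2 || memcmp(buf, "ab", 2))
		return -1;
	if (recv(fds[0], buf, 2, MSG_PEEK) != 2 || memcmp(buf, "ef", 2))
		return -1;

	close(fds[0]);
	close(fds[1]);
	return 0;
}
```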
> Yet another cache miss in TCP fast path...
> We need to move sk_peek_off to a better location before we accept this patch.
> I always thought MSG_PEEK was very inefficient; I am surprised we
> allow arbitrary loops in recvmsg().
Let me double check that I read the above correctly: are you concerned
about the 'skb_queue_walk(&sk->sk_receive_queue, skb) {' loop, which could
touch a lot of skbs/cachelines before reaching the relevant skb?
The end goal here is to allow a user-space application to read the
received data incrementally/sequentially while leaving it in the
receive buffer.
I don't see a better option than MSG_PEEK; am I missing something?
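To illustrate the point, a sketch over an AF_UNIX socketpair (rather than TCP, where this patch would apply, but plain MSG_PEEK behaves the same way): without a peek offset, every peek re-reads from the start of the receive queue, so sequential peeking needs SO_PEEK_OFF support to make progress:

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Without SO_PEEK_OFF, every MSG_PEEK starts over at the head of the
 * receive queue, so an incremental reader keeps re-copying the same
 * bytes. Returns 0 on success, -1 on failure. */
static int plain_peek_demo(void)
{
	int fds[2];
	char buf[8];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds))
		return -1;
	if (write(fds[1], "hello", 5) != 5)
		return -1;

	/* Two peeks in a row return the same data... */
	if (recv(fds[0], buf, 5, MSG_PEEK) != 5 || memcmp(buf, "hello", 5))
		return -1;
	if (recv(fds[0], buf, 5, MSG_PEEK) != 5 || memcmp(buf, "hello", 5))
		return -1;

	/* ...and the data is still queued for a normal, consuming read. */
	if (recv(fds[0], buf, 5, 0) != 5 || memcmp(buf, "hello", 5))
		return -1;

	close(fds[0]);
	close(fds[1]);
	return 0;
}
```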
Thanks,
Paolo