[net-next 0/2] tcp: add support for SO_PEEK_OFF socket option
We add support for the SO_PEEK_OFF socket option as a new feature in TCP.
In a separate patch, we fix a bug that was revealed while testing this
feature.

Jon Maloy (2):
  tcp: add support for SO_PEEK_OFF
  tcp: correct handling of extreme memory squeeze

 net/ipv4/af_inet.c    |  1 +
 net/ipv4/tcp.c        | 16 ++++++++++------
 net/ipv4/tcp_output.c |  5 ++++-
 3 files changed, 15 insertions(+), 7 deletions(-)

-- 
2.42.0
When reading received messages from a socket with MSG_PEEK, we may want
to read the contents with an offset, like we can do with pread/preadv()
when reading files. Currently, it is not possible to do that.
In this commit, we add support for the SO_PEEK_OFF socket option for TCP,
in a similar way to how it is already done for Unix domain sockets.
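As a minimal, hypothetical sketch (not part of the patch; 'fd' and the
buffer size are made up, and the socket setup is omitted), a receiver
could use the option from user space like this, following the semantics
that unix(7) already documents for Unix domain sockets:

#include <stdio.h>
#include <sys/socket.h>

static void peek_with_offset(int fd)
{
	char buf[4096];
	int off = 0;
	ssize_t n;

	/* Enable peek-offset tracking on the socket. */
	if (setsockopt(fd, SOL_SOCKET, SO_PEEK_OFF, &off, sizeof(off)) < 0)
		perror("setsockopt(SO_PEEK_OFF)");

	/* The first peek starts at offset 0 and advances the stored offset. */
	n = recv(fd, buf, sizeof(buf), MSG_PEEK);
	printf("peeked %zd bytes\n", n);

	/* A second peek continues where the first one stopped, without
	 * re-copying the bytes that were already peeked.
	 */
	n = recv(fd, buf, sizeof(buf), MSG_PEEK);
	printf("peeked %zd more bytes\n", n);
}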
In the iperf3 log examples shown below, we can observe a throughput
improvement of 15-20% in the direction host->namespace when using the
protocol splicer 'pasta' (https://passt.top).
This is a consistent result.
pasta(1) and passt(1) implement user-mode networking for network
namespaces (containers) and virtual machines by means of a translation
layer between Layer-2 network interface and native Layer-4 sockets
(TCP, UDP, ICMP/ICMPv6 echo).
Received TCP data pending delivery to the container/guest is kept in
kernel buffers until acknowledged, so the tool routinely needs to fetch
new data from the socket, skipping data that was already sent.
At the moment, this is implemented by passing a dummy buffer to
recvmsg(). With this change, we no longer need the dummy buffer and the
related copy to user space (copy_to_user()).
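As a rough, hypothetical illustration of the difference (this is not
pasta's actual code; buffer sizes and function names are made up):

#include <sys/socket.h>
#include <sys/uio.h>

static char discard[1 << 20];	/* dummy buffer for already-forwarded bytes */
static char fresh[1 << 20];	/* bytes not yet forwarded */

/* Old pattern: peek past the already-sent bytes into a throwaway buffer,
 * paying for a copy_to_user() of data that is immediately discarded.
 */
static ssize_t peek_new_data_dummy(int fd, size_t already_sent)
{
	struct iovec iov[2] = {
		{ .iov_base = discard, .iov_len = already_sent },
		{ .iov_base = fresh,   .iov_len = sizeof(fresh) },
	};
	struct msghdr mh = { .msg_iov = iov, .msg_iovlen = 2 };

	return recvmsg(fd, &mh, MSG_PEEK);
}

/* New pattern: tell the kernel where to start peeking, so only new data
 * is copied to user space.
 */
static ssize_t peek_new_data_offset(int fd, size_t already_sent)
{
	int off = already_sent;

	if (setsockopt(fd, SOL_SOCKET, SO_PEEK_OFF, &off, sizeof(off)) < 0)
		return -1;
	return recv(fd, fresh, sizeof(fresh), MSG_PEEK);
}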
passt and pasta are supported in KubeVirt and libvirt/qemu.
jmaloy@freyr:~/passt$ perf record -g ./pasta --config-net -f
SO_PEEK_OFF not supported by kernel.
jmaloy@freyr:~/passt# iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from 192.168.122.1, port 44822
[ 5] local 192.168.122.180 port 5201 connected to 192.168.122.1 port 44832
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.02 GBytes 8.78 Gbits/sec
[ 5] 1.00-2.00 sec 1.06 GBytes 9.08 Gbits/sec
[ 5] 2.00-3.00 sec 1.07 GBytes 9.15 Gbits/sec
[ 5] 3.00-4.00 sec 1.10 GBytes 9.46 Gbits/sec
[ 5] 4.00-5.00 sec 1.03 GBytes 8.85 Gbits/sec
[ 5] 5.00-6.00 sec 1.10 GBytes 9.44 Gbits/sec
[ 5] 6.00-7.00 sec 1.11 GBytes 9.56 Gbits/sec
[ 5] 7.00-8.00 sec 1.07 GBytes 9.20 Gbits/sec
[ 5] 8.00-9.00 sec 667 MBytes 5.59 Gbits/sec
[ 5] 9.00-10.00 sec 1.03 GBytes 8.83 Gbits/sec
[ 5] 10.00-10.04 sec 30.1 MBytes 6.36 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.04 sec 10.3 GBytes 8.78 Gbits/sec receiver
-----------------------------------------------------------
Server listening on 5201 (test #2)
-----------------------------------------------------------
^Ciperf3: interrupt - the server has terminated
jmaloy@freyr:~/passt#
logout
[ perf record: Woken up 23 times to write data ]
[ perf record: Captured and wrote 5.696 MB perf.data (35580 samples) ]
jmaloy@freyr:~/passt$
jmaloy@freyr:~/passt$ perf record -g ./pasta --config-net -f
SO_PEEK_OFF supported by kernel.
jmaloy@freyr:~/passt# iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from 192.168.122.1, port 52084
[ 5] local 192.168.122.180 port 5201 connected to 192.168.122.1 port 52098
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.32 GBytes 11.3 Gbits/sec
[ 5] 1.00-2.00 sec 1.19 GBytes 10.2 Gbits/sec
[ 5] 2.00-3.00 sec 1.26 GBytes 10.8 Gbits/sec
[ 5] 3.00-4.00 sec 1.36 GBytes 11.7 Gbits/sec
[ 5] 4.00-5.00 sec 1.33 GBytes 11.4 Gbits/sec
[ 5] 5.00-6.00 sec 1.21 GBytes 10.4 Gbits/sec
[ 5] 6.00-7.00 sec 1.31 GBytes 11.2 Gbits/sec
[ 5] 7.00-8.00 sec 1.25 GBytes 10.7 Gbits/sec
[ 5] 8.00-9.00 sec 1.33 GBytes 11.5 Gbits/sec
[ 5] 9.00-10.00 sec 1.24 GBytes 10.7 Gbits/sec
[ 5] 10.00-10.04 sec 56.0 MBytes 12.1 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.04 sec 12.9 GBytes 11.0 Gbits/sec receiver
-----------------------------------------------------------
Server listening on 5201 (test #2)
-----------------------------------------------------------
^Ciperf3: interrupt - the server has terminated
logout
[ perf record: Woken up 20 times to write data ]
[ perf record: Captured and wrote 5.040 MB perf.data (33411 samples) ]
jmaloy@freyr:~/passt$
The perf recordings confirm this result. Below, we can observe that the
CPU spends significantly less time in the function ____sys_recvmsg()
when we have offset support.
Without offset support:
----------------------
jmaloy@freyr:~/passt$ perf report -q --symbol-filter=do_syscall_64 \
-p ____sys_recvmsg -x --stdio -i perf.data | head -1
46.32% 0.00% passt.avx2 [kernel.vmlinux] [k] do_syscall_64 ____sys_recvmsg
With offset support:
----------------------
jmaloy@freyr:~/passt$ perf report -q --symbol-filter=do_syscall_64 \
-p ____sys_recvmsg -x --stdio -i perf.data | head -1
28.12% 0.00% passt.avx2 [kernel.vmlinux] [k] do_syscall_64 ____sys_recvmsg
Suggested-by: Paolo Abeni
Testing of the previous commit ("tcp: add support for SO_PEEK_OFF")
in this series along with the pasta protocol splicer revealed a bug in
the way TCP handles window advertising in extreme memory squeeze
situations.
The excerpt of the logging session below shows what is happening:
[5201<->54494]: ==== Activating log @ tcp_select_window()/268 ====
[5201<->54494]: (inet_csk(sk)->icsk_ack.pending & ICSK_ACK_NOMEM) --> TRUE
[5201<->54494]: tcp_select_window(<-) tp->rcv_wup: 2812454294, tp->rcv_wnd: 5812224, tp->rcv_nxt 2818016354, returning 0
[5201<->54494]: ADVERTISING WINDOW SIZE 0
[5201<->54494]: __tcp_transmit_skb(<-) tp->rcv_wup: 2812454294, tp->rcv_wnd: 5812224, tp->rcv_nxt 2818016354
[5201<->54494]: tcp_recvmsg_locked(->)
[5201<->54494]: __tcp_cleanup_rbuf(->) tp->rcv_wup: 2812454294, tp->rcv_wnd: 5812224, tp->rcv_nxt 2818016354
[5201<->54494]: (win_now: 250164, new_win: 262144 >= (2 * win_now): 500328))? --> time_to_ack: 0
[5201<->54494]: NOT calling tcp_send_ack()
[5201<->54494]: __tcp_cleanup_rbuf(<-) tp->rcv_wup: 2812454294, tp->rcv_wnd: 5812224, tp->rcv_nxt 2818016354
[5201<->54494]: tcp_recvmsg_locked(<-) returning 131072 bytes, window now: 250164, qlen: 83
[...]
[5201<->54494]: tcp_recvmsg_locked(->)
[5201<->54494]: __tcp_cleanup_rbuf(->) tp->rcv_wup: 2812454294, tp->rcv_wnd: 5812224, tp->rcv_nxt 2818016354
[5201<->54494]: (win_now: 250164, new_win: 262144 >= (2 * win_now): 500328))? --> time_to_ack: 0
[5201<->54494]: NOT calling tcp_send_ack()
[5201<->54494]: __tcp_cleanup_rbuf(<-) tp->rcv_wup: 2812454294, tp->rcv_wnd: 5812224, tp->rcv_nxt 2818016354
[5201<->54494]: tcp_recvmsg_locked(<-) returning 131072 bytes, window now: 250164, qlen: 1
[5201<->54494]: tcp_recvmsg_locked(->)
[5201<->54494]: __tcp_cleanup_rbuf(->) tp->rcv_wup: 2812454294, tp->rcv_wnd: 5812224, tp->rcv_nxt 2818016354
[5201<->54494]: (win_now: 250164, new_win: 262144 >= (2 * win_now): 500328))? --> time_to_ack: 0
[5201<->54494]: NOT calling tcp_send_ack()
[5201<->54494]: __tcp_cleanup_rbuf(<-) tp->rcv_wup: 2812454294, tp->rcv_wnd: 5812224, tp->rcv_nxt 2818016354
[5201<->54494]: tcp_recvmsg_locked(<-) returning 57036 bytes, window now: 250164, qlen: 0
[5201<->54494]: tcp_recvmsg_locked(->)
[5201<->54494]: __tcp_cleanup_rbuf(->) tp->rcv_wup: 2812454294, tp->rcv_wnd: 5812224, tp->rcv_nxt 2818016354
[5201<->54494]: NOT calling tcp_send_ack()
[5201<->54494]: __tcp_cleanup_rbuf(<-) tp->rcv_wup: 2812454294, tp->rcv_wnd: 5812224, tp->rcv_nxt 2818016354
[5201<->54494]: tcp_recvmsg_locked(<-) returning -11 bytes, window now: 250164, qlen: 0
We can see that although we are advertising a window size of zero,
tp->rcv_wnd is not updated accordingly. This leads to a discrepancy
between this side's and the peer's view of the current window size.
- The peer thinks the window is zero, and stops sending.
- This side ends up in a cycle where it repeatedly calculates a new
  window size it finds too small to advertise.
Hence no messages are received, no acknowledgments are sent, and the
situation remains locked even after the last queued receive buffer has
been consumed.
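For reference, the window this side keeps comparing against is derived
from exactly the fields shown in the log, via the tcp_receive_window()
helper in include/net/tcp.h (reproduced here for convenience); plugging
in the logged values shows why win_now stays stuck at 250164:

static inline u32 tcp_receive_window(const struct tcp_sock *tp)
{
	s32 win = tp->rcv_wup + tp->rcv_wnd - tp->rcv_nxt;

	if (win < 0)
		win = 0;
	return (u32) win;
}

/* With the values from the log above:
 *
 *   win_now = 2812454294 + 5812224 - 2818016354 = 250164
 *
 * Because tp->rcv_wnd was never zeroed when the zero window was sent,
 * new_win (262144) never reaches 2 * win_now (500328), so
 * __tcp_cleanup_rbuf() keeps deciding that no window update is worth
 * sending, and the announced zero window is never reopened.
 */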
We fix this by setting tp->rcv_wnd to 0 before we return from the
function tcp_select_window() in this particular case.
Further testing shows that the connection recovers neatly from the
squeeze situation, and traffic can continue indefinitely.
Signed-off-by: Jon Maloy
On Wed, 3 Apr 2024 18:58:33 -0400, Jon Maloy wrote:
[...]
> ---
>  net/ipv4/tcp_output.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
> index e3167ad96567..5803fd402708 100644
> --- a/net/ipv4/tcp_output.c
> +++ b/net/ipv4/tcp_output.c
> @@ -264,8 +264,11 @@ static u16 tcp_select_window(struct sock *sk)
>  	 * are out of memory. The window is temporary, so we don't store
>  	 * it on the socket.
One nit: now that you do store it on the socket, you should probably change this comment as well.
>  	 */
> -	if (unlikely(inet_csk(sk)->icsk_ack.pending & ICSK_ACK_NOMEM))
> +	if (unlikely(inet_csk(sk)->icsk_ack.pending & ICSK_ACK_NOMEM)) {
> +		tp->rcv_wnd = 0;
> +		tp->rcv_wup = tp->rcv_nxt;
...I'm wondering if you should set 'pred_flags' to 0, as it's done at the
end of the function for other cases where the window is advertised as
zero. At least according to the comment to tcp_rcv_established() it looks
like it's needed:

 *	- A zero window was announced from us - zero window probing
 *	  is only handled properly in the slow path.
>  		return 0;
> +	}
>
>  	cur_win = tcp_receive_window(tp);
>  	new_win = __tcp_select_window(sk);
The rest, including 1/2, looks good to me.

-- 
Stefano
On 2024-04-05 13:55, Stefano Brivio wrote:
[...]
> One nit: now that you do store it on the socket, you should probably
> change this comment as well.
[...]
> ...I'm wondering if you should set 'pred_flags' to 0, as it's done at
> the end of the function for other cases where the window is advertised
> as zero.
[...]
> The rest, including 1/2, looks good to me.
Good points. I'll fix those and post the patches with your "Reviewed-by:" /thx