Re: Connecting back to the host through a dummy veth interface
Hi Felix,
On Thu, 18 Dec 2025 13:32:36 +0100, Felix Rubio wrote:
Hi everybody,
I am trying to run a number of rootless podman pods and containers as different users, while still allowing them to talk to each other. To this end, I am creating a dummy veth interface and publishing all the exposed ports there (this works: I can communicate from other host services with those containers), and I am also trying to set that dummy veth interface as the default gateway for the pods/containers (with the expectation that they will then be able to reach each other). However, this is not working... and I am pretty lost.
For example, I am running the following command, trying (unsuccessfully) to connect an LDAP client container to an LDAP server container:
  podman run --rm --dns=10.255.255.1 --network=pasta:--outbound-if4=cluster_dns0,--gateway=10.255.255.1 --add-host=ldap.host.internal:host-gateway sh -c "ip add && ip route && ldapwhoami -H ldaps://ldap.host.internal:1636"
Is this something impossible to do, or am I doing something wrong?
Sorry, I'm a bit swamped at the moment, and I plan to get back to you in a bit, but meanwhile, I think the dummy veth trick is unnecessarily complicated. I think you could simply connect "to the host" and redirect from there to the containers by means of mapped ports. See:

https://blog.podman.io/2024/10/podman-5-3-changes-for-improved-networking-ex...

for a couple of details. But I'll try to come up with a full example next.

--
Stefano
Hey Stefano,

Thank you for your answer! I know I can run rootful containers, and that then I can access the host's network ns. However, this exposes a number of potential issues:
* in case an attacker manages to break out of the container, they get root;
* that enables connecting back to the host loopback, but then, from that container, any service listening on the loopback can be reached as well.

The reason for looking for a way of binding those services to 10.255.255.1 (so that only exposed services are on that interface) while running fully rootless is that, if it works, it provides a more secure system... in general.

About the mapped ports, I am a bit lost: from what I have tested, running rootless disables the possibility to connect back to the host, right?

Regards, and thank you!
Felix
On Sat, 20 Dec 2025 15:28:43 +0100, Felix Rubio wrote:
Hey Stefano,
Thank you for your answer! I know I can run rootful containers, and that then I can access the host's network ns. However, this exposes a number of potential issues:
* in case an attacker manages to break out of the container, they get root;
* that enables connecting back to the host loopback, but then, from that container, any service listening on the loopback can be reached as well.
Sure. That's the whole point behind pasta(1) and rootless containers with Podman / rootlesskit. I certainly won't be the one suggesting that you'd run anything as root. :)
The reason for looking for a way of binding those services to 10.255.255.1 (so that only exposed services will be in that interface) and running fully rootless, if works, provides a more secure system... in general.
Indeed.
About the mapped ports, I am a bit lost: for what I have tested, running rootless disables the possibility to connect back to the host, right?
Hah, I see now. No, that's not the case. You can run rootless containers and connect to the host from them, in two ways:

1. disabled by default in Podman's pasta integration, and not what you want: via the loopback interface, see -U / -T in 'man pasta', and --host-lo-to-ns-lo for the other way around. In that case, packets appear to be local (the source address is loopback) in the other namespace ("host" or initial namespace for packets from a container, and container for packets from the host). This gives you better throughput, but making connections appear as if they were local is risky (cf. CVE-2021-20199), so it's disabled by default, and not what I'm suggesting (at least in general).

2. what you get by default in Podman: pasta's --map-guest-addr. The current description of this option in pasta(1) isn't great, hence https://bugs.passt.top/show_bug.cgi?id=132, but the idea is that you reach the host from the container through a non-loopback address, as if the connection were coming from another host (which should represent the expected container usage).

So here's an example:

  $ podman run --rm -ti -p 8089:80 traefik/whoami
  2025/12/21 10:42:16 Starting up on port 80

  [in another terminal]
  $ podman run --rm -ti fedora curl host.containers.internal:8089
  Hostname: ab94f49b5042
  IP: 127.0.0.1
  IP: ::1
  IP: **.***.*.***
  IP: ****:***:***:***::*
  IP: ****::****:****:****:****
  RemoteAddr: 169.254.1.2:46592
  GET / HTTP/1.1
  Host: host.containers.internal:8089
  User-Agent: curl/8.15.0
  Accept: */*

...doesn't that work for you? Note that you'll need somewhat recent versions of pasta (>= 2024_08_21.1d6142f) and Podman (>= 5.3).

--
Stefano
Ciao, Stefano

I have just discovered how little I know about rootless networking in containers: I thought that when using host.containers.internal I was really connecting back to the loopback interface (127.0.0.1). Indeed, this works:

- Terminal 1, user 1: podman run --rm -ti -p 8089:80 traefik/whoami
- Terminal 2, user 2: podman run --rm -ti alpine /bin/sh -c "apk add curl; curl host.containers.internal:8089"

As I have an SMTP server listening on that interface, on port 25, I have run this experiment, which does not work:

  podman run --rm -ti alpine /bin/sh -c "apk add busybox-extras; telnet host.containers.internal 25"
  telnet: can't connect to remote host (169.254.1.2): Connection refused

I only seem to be able to connect, using rootless pasta, to ports that are published by other containers. So, if any container gets compromised, connections from that container could only be established to services run by other containers, then?

Similarly... could I create another "network of pods" by using --map-guest-addr with another IP (say 169.254.1.3), so that the pods on 169.254.1.2 and the pods on 169.254.1.3 would not be able to talk to each other?

So the solution for my use case is then to bind e.g. port 1636 to both 10.255.255.1 and 169.254.1.2, so that external connections to it can get through, but also connections from other rootless pods?

Really: thank you very much for your answer!
Felix
Something more: I see that pasta is binding to 0.0.0.0. This means that, while it allows other pods to connect to the published port of a container through 169.254.1.2, it also makes that port reachable from the network. Is there any way to prevent that?

Regards!
Felix
Ok, things are starting to get clear. The problem was, I think, between the desk and the keyboard.

* I have everything on a VM that I configure with Ansible. I have just taken everything down and started from scratch.
* I still have my containers without any ad-hoc network. They bind only to 10.255.255.1, which is on a dummy interface.
* My error was the following: I am running an LDAP server in one of these containers, and I was checking whether it worked with ldapwhoami. The client replied that it could not reach the server, which triggered all the subsequent investigation, but the real cause was that the certificate offered by the server was not trusted by the client, and the latter broke the connection (without giving a more proper message - facepalm).

Once the problem with the certificates was fixed, everything seems to work. This means that:

* I have a DNS server on 10.255.255.1 that resolves ldap.host.internal to 10.255.255.1.
* The LDAP server rootless container is listening on 10.255.255.1:1636.
* The LDAP client is in another rootless container, and can reach ldap.host.internal:1636 directly.

... Is this last point expected? The LDAP server is started through podman as a regular user, without any network options... nothing fancy. The reason I ask is that everything I have read points in the direction that from a rootless container I should not be able to loop back to the host... but maybe this dummy interface is not identified as "the host", and therefore I can connect to services bound to it? On the LDAP side, the logs show these connections coming from the same 10.255.255.1. That would actually be convenient, because then I can put firewall rules in place that prevent connecting from that dummy interface back to the host at all.

Thank you very much, and sorry for the initial confusing messages.
Felix
Let me answer your latest three emails separately, because actually
there are valid open questions in all of them (and yes, we need
https://bugs.passt.top/show_bug.cgi?id=144 and some "howto" section
beyond man pages and Podman documentation, but it won't be for this year
either...)
On Sun, 21 Dec 2025 16:17:37 +0100, Felix Rubio wrote:
Ciao, Stefano
I have just discovered how little I know about rootless networking in containers: I thought that when using host.containers.internal I was really connecting back to the loopback interface (127.0.0.1).
Indeed, this works:
- Terminal 1, user 1: podman run --rm -ti -p 8089:80 traefik/whoami
- Terminal 2, user 2: podman run --rm -ti alpine /bin/sh -c "apk add curl; curl host.containers.internal:8089"
As I have an SMTP server listening on that interface, on port 25, I have run this experiment, which does not work:

  podman run --rm -ti alpine /bin/sh -c "apk add busybox-extras; telnet host.containers.internal 25"
  telnet: can't connect to remote host (169.254.1.2): Connection refused
Because it's probably binding to localhost (something in 127.0.0.0/8, or ::1, or both), but the destination of this connection attempt is not a loopback address.
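[Editor's note: the "Connection refused" above can be reproduced without podman at all; it is plain TCP socket semantics. A minimal sketch, using 127.0.0.1 vs 127.0.0.2 on Linux (where all of 127.0.0.0/8 sits on the loopback interface) to stand in for the loopback address vs 169.254.1.2:]

```python
# A TCP server only accepts connections addressed to the address it is
# bound to: a daemon bound to 127.0.0.1 is unreachable through any other
# local address, even on the same port.
import socket

def try_connect(dest, port):
    """Return True if a TCP connection to (dest, port) succeeds."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1)
    try:
        s.connect((dest, port))
        return True
    except OSError:          # e.g. ECONNREFUSED
        return False
    finally:
        s.close()

# Server bound to 127.0.0.1 only (like the SMTP server in this thread):
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the kernel pick a free port
srv.listen(1)
port = srv.getsockname()[1]
print(try_connect("127.0.0.1", port))  # True: matches the bound address
print(try_connect("127.0.0.2", port))  # False: "Connection refused"
srv.close()

# Server bound to the 0.0.0.0 wildcard (what pasta does by default):
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 0))
srv.listen(1)
port = srv.getsockname()[1]
print(try_connect("127.0.0.2", port))  # True: any local address works
srv.close()
```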
I only seem to be able to connect, using rootless pasta, to ports that are published by other containers. In case any container gets compromised connections from that container could only be established to services run by other containers, then?
...or other hosts. But there's a way to override that. From pasta(1), emphasis mine:

  --map-host-loopback addr
      Translate addr to refer to the host. Packets from the guest to
      addr will be redirected to the host. ** On the host such packets
      will appear to have both source and destination of 127.0.0.1 or
      ::1. **

...and yes, I guess we should rephrase this as well, but with this option you would be able to connect to services that bind to loopback addresses (too). Podman doesn't enable this by default (it would be a bad default for security), so you would need to issue something like:

  podman run --net=pasta:--map-host-loopback,169.254.1.2 ...
Similarly... Could I create another "network of pods" by using map-guest-addr with another ip (say 169.254.1.3) and the pods in 169.254.1.2 and 169.254.1.3 would not be able to talk to each other?
It all depends on what ports are exposed and what interface and address they are bound to, on the host. But yes, you could do something like that. Eventually, *after* https://bugs.passt.top/show_bug.cgi?id=140 is done, we might consider implementing proper inter-container communication with a single instance of pasta. That would make things easier... but we're not quite there yet.
So the solution for my use case is then to bind e.g., port 1636 to both 10.255.255.1 and to 169.254.1.2, so that external connections to it can get through, but also connections from other rootless pods?
You could do that, yes.

--
Stefano
On Sun, 21 Dec 2025 16:32:23 +0100, Felix Rubio wrote:
Something more: I see that pasta is binding to 0.0.0.0. This means that, while it allows other pods to connect to the published port of a container through 169.254.1.2, it also makes that port reachable from the network.
Is there any way to prevent that?
Yes, you can specify addresses or interfaces to bind to. Relevant examples from pasta(1):

  -t 192.0.2.1/22
      Forward local port 22, bound to 192.0.2.1, to port 22 on the guest

  -t 192.0.2.1%eth0/22
      Forward local port 22, bound to 192.0.2.1 and interface eth0, to
      port 22

  -t %eth0/22
      Forward local port 22, bound to any address on interface eth0, to
      port 22

Podman supports part of that as well, see podman-run(1) (--publish) or:
https://github.com/containers/podman/blob/2fbecb48e166ed79662ea5e45f2d56081a...
for a summary.

--
Stefano
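[Editor's note: in Podman terms, the same restriction is expressed through --publish, which accepts an address prefix. A sketch based on the port and addresses from this thread; my-ldap-image is a placeholder, publishing on 169.254.1.2 follows the idea confirmed later in the thread, and none of this is a tested recipe:]

```shell
# Publish container port 1636 only on host address 10.255.255.1,
# instead of the default 0.0.0.0 (all host addresses):
podman run --rm -p 10.255.255.1:1636:1636 my-ldap-image

# Repeating -p publishes the same container port on several host
# addresses, e.g. both the dummy-interface address and the
# --map-guest-addr address, so that other rootless pods can reach it:
podman run --rm \
    -p 10.255.255.1:1636:1636 \
    -p 169.254.1.2:1636:1636 \
    my-ldap-image
```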
On Mon, 22 Dec 2025 13:48:03 +0100, Felix Rubio wrote:
Ok, things are starting to get clear. The problem was, I think, between the desk and the keyboard.
The chair! I think it was the chair. :)
* I have everything on a VM that I configure with Ansible. I have just taken everything down and started from scratch
* I still have my containers without any ad-hoc network. They are binding only to network interface 10.255.255.1, which is a dummy ethernet.
* My error was that I am running an LDAP server in one of these containers, and I was checking if it was working with a ldapwhoami. The client was replying that could not reach the server, which triggered all subsequent investigation, but the real cause was that the certificate offered by the server was not trusted by the client, and the latter broke the connection (without giving a more proper message - facepalm).
Once fixed the problem with the certificates, everything seems to work. This means that: * I have a dns server in 10.255.255.1 that resolves ldap.host.internal to 10.255.255.1 * ldap server rootless container is listening to 10.255.255.1:1636 * ldap client is in another rootless container, and can reach directly ldap.host.internal:1636.
... Is this last point expected? the ldap server is started through podman as a regular user, without any network options... nothing fancy.
Yes, it's expected, because 10.255.255.1 is not a loopback address.
The reason for me asking is that all I have read points in the direction that from a rootless container I should not be able to loopback to the host... but maybe this dummy interface is not identified as "the host" and therefore I can
It's rather not identified as "loopback".
connect to services bound to it? On the LDAP side, the logs show that these connections are coming from the same 10.255.255.1. That would be actually convenient, because then I can put firewall rules in place that prevent connecting from that dummy ethernet back to the host at all.
You don't need a whole new interface for that, by the way. You could just add that address to an existing interface, assuming that the LDAP server lets you bind to a specific address and not just a specific interface.

--
Stefano
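[Editor's note: for reference, both alternatives are a couple of iproute2 commands. The interface name eth0 is an assumption, cluster_dns0 is the name used earlier in the thread, and either variant needs root, so this is an untested config fragment:]

```shell
# Variant 1 (Stefano's suggestion): add the address to an existing
# interface (eth0 is an assumption; substitute the real host interface):
ip addr add 10.255.255.1/32 dev eth0

# Variant 2 (the approach used in this thread): a dedicated dummy
# interface, convenient as a single anchor for firewall rules:
ip link add cluster_dns0 type dummy
ip addr add 10.255.255.1/32 dev cluster_dns0
ip link set cluster_dns0 up
```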
Damn... I knew I had to get rid of that chair... xD

My setup is a bit complex: I am running a k3s cluster with some services outside it, but running on the same host. The purpose is to have some central services common to all the applications I am running (e.g., authentication) on these rootless containers. This way I can take down the whole cluster without losing services that are required by other parties... at the expense of properly protecting them.

The reason for using a dummy interface is that I can then implement simple, wide rules, stating that this interface can only receive connections from the k3s cluster or on specific ports, and that connections from that interface can only be established to the cluster or to specific ports. I am doing this because, should a malicious actor manage to run code on those services or break out of the container, they would otherwise be able to establish connections anywhere. I know I can use an existing interface for all this, but then I would have to be much more careful about how these firewall rules are implemented... whereas with this dummy interface I can deny by default and only allow as required.

Stefano, thank you very much for your answers. I really appreciate the time you took writing them.

Regards!
Felix
Participants (2):
- Felix Rubio
- Stefano Brivio