Proposed guest blog post about containers, public IPs and Firewalld port forwarding #29
Open
dsofeir
wants to merge 2 commits into firewalld:master from dsofeir:master
54 changes: 54 additions & 0 deletions
...posts/2022-12-13-tut-access-to-public-ip-from-vms-containers-using-firewalld.md
---
layout: post
title: "Access to public IP from VMs/Containers using Firewalld"
section: Blog
date: 2022-12-13T13:00:00
author: David Foley
category: tutorial
---

You are running some LXC containers on a host and you use Firewalld to forward ports from the public internet to the containers. How do you enable access to the public IP of the LXD host from the LXC containers?

## TL;DR
- Ordinarily, connections from containers to services reached via port forwarding on Firewalld fail: after the packet is output to the public interface, it never returns, so Firewalld cannot process the port forward.
- The solution is destination NAT: `sudo firewall-cmd --zone=trusted --add-rich-rule='rule family="ipv4" destination address="203.0.113.1" forward-port port="80" protocol="tcp" to-port="80" to-addr="10.10.1.20"'`

## The Scenario
You have some LXC containers running on a host. The default LXD setup creates a virtual bridge to which all the containers are connected; they have their own private network, say in the 10.10.1.0/24 subnet.

You use Firewalld to forward ports from the public internet to the containers. In this scenario, when a container performs a DNS lookup that resolves to the public IP address of the LXD host and then tries to connect to, say, port 80 on that public IP, the connection fails.

Why? The HTTP request is received on the input chain by Firewalld, and the packet is then output to the public interface of the host. Since the HTTP proxy is not bound to the public interface, the connection fails; it must instead be reached via the port forwards in Firewalld. But after the packet is output to the public interface, it never returns, so Firewalld cannot process the port forward.

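The failure can be reproduced from the command line. This is a sketch using the example addresses; the container name `web1` is an assumption, and `lxc exec` is used to run a command inside the container:

```shell
# From an external client, the port forward on the host works as expected
curl -sI http://203.0.113.1/

# From inside a container on the 10.10.1.0/24 bridge, the same request
# times out: the packet is output to the public interface and never
# returns through Firewalld's port forward
lxc exec web1 -- curl -sI --max-time 5 http://203.0.113.1/
```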
## The Solution
Destination NAT rules in firewalld are the solution here.

I understand NAT would appear to be an obvious answer. Indeed, if you simply enable masquerading on the zone which contains the container virtual network, this will begin to work, but it has the unintended consequence of also source-NATing all incoming requests to the containers. This means client IPs will no longer be visible to applications running in containers.

Destination NAT is applied on the prerouting chain, before the routing decision, where it modifies the destination IP address of the packet. In the example the diagram describes, Firewalld recognises that the destination IP of the HTTP request is the public IP of the host, and it then rewrites the destination IP address to the internal IP address of the web proxy container.

To set up DST-NAT on the example host, you would follow this procedure:

1. Execute these commands on the Firewalld host:
```
sudo firewall-cmd --zone=trusted --add-rich-rule='rule family="ipv4" destination address="203.0.113.1" forward-port port="80" protocol="tcp" to-port="80" to-addr="10.10.1.20"'
sudo firewall-cmd --zone=trusted --add-rich-rule='rule family="ipv4" destination address="203.0.113.1" forward-port port="443" protocol="tcp" to-port="443" to-addr="10.10.1.20"'
```
__Explanation:__
- These two rules apply DST-NAT to packets destined for 203.0.113.1 on ports 80 and 443.
- Make sure `--zone=` matches the zone your container virtual network is bound to.
- In this example, 203.0.113.1 is our public IP address; change this to match yours.
- In this example, 10.10.1.20 is the internal IP address of the container running the HTTP reverse proxy; change this IP to match your setup.
- If other protocols are handled by port forwarding, simply continue adding rules with the appropriate port numbers.

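You can confirm the runtime rules took effect before moving on. A sketch, again assuming the example container name `web1`:

```shell
# List the active rich rules in the zone; the two DST-NAT rules
# added above should appear in the output
sudo firewall-cmd --zone=trusted --list-rich-rules

# From inside a container, the host's public IP should now be reachable
lxc exec web1 -- curl -sI --max-time 5 http://203.0.113.1/
```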
2. Verify the setup works. If it does, make the changes permanent by executing the following:
```
sudo firewall-cmd --permanent --zone=trusted --add-rich-rule='rule family="ipv4" destination address="203.0.113.1" forward-port port="80" protocol="tcp" to-port="80" to-addr="10.10.1.20"'
sudo firewall-cmd --permanent --zone=trusted --add-rich-rule='rule family="ipv4" destination address="203.0.113.1" forward-port port="443" protocol="tcp" to-port="443" to-addr="10.10.1.20"'
```
__Explanation:__
- These two commands are the same as those in the first step, with the addition of the `--permanent` flag so the rules persist across reloads and reboots.

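As an alternative to repeating each command with `--permanent`, firewalld can copy the whole runtime configuration into the permanent configuration in one step:

```shell
# Persist all current runtime configuration at once
sudo firewall-cmd --runtime-to-permanent

# Confirm the rich rules made it into the permanent configuration
sudo firewall-cmd --permanent --zone=trusted --list-rich-rules
```

Note that `--runtime-to-permanent` persists every runtime change in every zone, not just these two rules, so prefer the explicit `--permanent` commands if you have other experimental runtime changes in flight.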
-----

* Originally posted at [David Foley's Blog](https://www.dfoley.ie/blog/access-to-public-ip-from-vms-containers-using-firewalld), complete with diagrams.
That's surprising. It should not be the case. Source NAT (masquerade) should only happen for traffic leaving the LXC host and destined for the internet/LAN.
I am almost certain this is what is happening since this is what I was initially trying to solve.
When I enabled masquerading on the zone which dealt with the container virtual network, I was able to reach the port forwards in Firewalld, specifically the forwarding of ports 80 and 443 to the NGINX container. However, there was a problem: NGINX could not ascertain the HTTP client IP address; instead, every request appeared to originate from the Firewalld host.
When I turned off masquerading, NGINX was still accessible from the public internet via the Firewalld port forwarding and the HTTP client IP address was correct. Obviously it was no longer accessible from the internal virtual container network.
This is why I concluded that masquerading was also resulting in SRC-NAT.
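For reference, the masquerade toggling described above corresponds to these commands (assuming the container network's zone is `trusted`):

```shell
# Enable masquerading on the zone: port forwards become reachable from
# the containers, but client IPs are source-NATed to the host's address
sudo firewall-cmd --zone=trusted --add-masquerade

# Check whether masquerading is currently enabled on the zone
sudo firewall-cmd --zone=trusted --query-masquerade

# Turn it back off
sudo firewall-cmd --zone=trusted --remove-masquerade
```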
Of course, I could be wrong. I am happy to make any edit you may suggest. I think an explanation of why `--add-masquerade` doesn't work while the rich rule does is necessary, though.
Thank you.
I follow you now. I reworded it a bit below. What do you think?

> NAT would appear to be an obvious answer. If you enable masquerading (source NAT) on the zone which contains the container virtual network, e.g. `trusted`, the traffic will pass. Unfortunately it has the unintended consequence of source-NATing all incoming requests to the containers. This means client IPs will no longer be visible to applications running in containers.

However, I'm still not convinced the traffic should have worked with `--zone trusted --add-masquerade`. What version of firewalld are you using?