Don’t expose your Redis or SQL ports to the world: limit which Docker containers can access them ( legacy solution )
A long time ago we had a temporary security breach. It was never fully exploited, as it was a fairly shielded use case, but it could have been, had it been used in production.
When you run Docker containers that publish ports on the host, some of those ports may end up exposed on your public network. In this particular setup, the vendor’s five Docker containers relied on a Redis DB that was exposed on the standard Redis port, 6379.
A world pre-docker-compose
Nowadays, if you define these containers in a docker-compose.yml, they usually get their own little private network, unless you switch them back onto the shared default bridge ( network_mode: "bridge" ).
In our case, that was pre-docker-compose, and so this port was exposed to the world.
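For comparison, here is a minimal docker-compose.yml sketch ( service and image names are made up ) where Redis stays unreachable from outside the host, because it has no ports: section and lives only on the compose project’s private network:

```yaml
version: "3"
services:
  app1:
    image: vendor/app1    # hypothetical image name
    depends_on:
      - redis
  redis:
    image: redis:5
    # no "ports:" entry here, so 6379 is only reachable
    # from other services on this compose network
```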
Redis is by default shipped for speed, not security
If you’re familiar with Redis, you know it is focused on speed, not security. Any local service can usually connect to your Redis instance, and authentication is off by default. This can lead to vulnerable situations like this one, where the software architects of the solution relied on insecure default settings.
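If you can’t isolate the network, Redis itself can at least be hardened through redis.conf; the password below is a placeholder:

```
# redis.conf
bind 127.0.0.1          # only listen on loopback
protected-mode yes      # refuse remote clients without auth ( Redis >= 3.2 )
requirepass change-me   # placeholder; clients must AUTH before any command
```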
So we had a redis port exposed to the world, now what?
As we double-checked the data we received from the Redis instance, the breach didn’t cause many issues beyond hang-ups in our routes. A simple restart of the services emptied the Redis instance and everything ran smoothly again, but that was obviously not a long-term solution.
Malicious bots scanning your Redis ports and injecting malicious data into them is, of course, never a good place to be.
On the other hand, we were stuck with these running containers: being a legacy solution, the vendor’s product was discontinued. It wasn’t a production-relevant make-or-break situation, so it didn’t get much priority. We couldn’t ( or didn’t want to ) refactor it. We needed a quick fix.
iptables is the doorman of your server
We wrote a small cronjob that reads the faulty container’s IP address and restricts traffic so that this container can only talk to the containers it is supposed to talk to.
We did the same for the PostgreSQL instance, just in case.
Now please, in case you ever need this ( which we hope you don’t ): before you touch any iptables rules, add a script that resets them every 5 minutes. iptables is the single most efficient way of locking yourself out of a server.
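A minimal sketch of such a safety net, with path and schedule left up to you: it writes a reset script that re-opens everything, and prints the crontab line you’d add while experimenting ( remove that line again once your rules are stable ):

```shell
# write a reset script that restores permissive defaults
# and flushes all rules; schedule it every 5 minutes
# while you experiment with iptables
cat > ./iptables-reset.sh <<'EOF'
#!/bin/sh
# set permissive default policies, then flush all rules
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F
EOF
chmod +x ./iptables-reset.sh

# crontab line to add ( and remove again when you're done ):
echo '*/5 * * * * /path/to/iptables-reset.sh'
```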
This way you can limit containers to only talk to each other, even if they expose ports publicly. It works for Redis, PostgreSQL, MySQL, Memcached and many other tools.
Simply try to connect to your Docker host’s ports from outside, and ask yourself whether you’d actually like anybody to connect to them. MySQL and PostgreSQL obviously have their own security mechanisms, but “no outside traffic allowed” is more secure than “any outside traffic allowed”.
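A quick way to probe, using bash’s built-in /dev/tcp so you don’t even need netcat ( 127.0.0.1 is just a stand-in; substitute your Docker host’s public IP and run it from another machine ):

```shell
# probe the typical Redis and PostgreSQL ports; prints
# "open" if something answers, "closed" otherwise
HOST=127.0.0.1   # stand-in; use your Docker host's public IP
for PORT in 6379 5432; do
  if timeout 2 bash -c "</dev/tcp/$HOST/$PORT" 2>/dev/null; then
    echo "$PORT open"
  else
    echo "$PORT closed"
  fi
done
```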
Here goes:
# 0. Define the container IPs that we want to
# communicate with each other
REDIS_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' redis_container)
APP_IP1=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' app1)
APP_IP2=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' app2)
APP_IP3=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' app3)
PSQL_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' db_container)
echo "redis $REDIS_IP"
echo "app1  $APP_IP1"
echo "app2  $APP_IP2"
echo "app3  $APP_IP3"
echo "db    $PSQL_IP"
Secure the Redis instance by limiting which IPs can communicate with the Redis port
# 1. delete the old forwarding rule for this redis
# ip, in case a previous run already attached our
# chain ( fails harmlessly on the first run )
iptables -D FORWARD -p tcp --source 0.0.0.0/0 --destination $REDIS_IP --dport 6379 -j CUSTOM_REDIS 2>/dev/null
# 2. create a new chain called CUSTOM_REDIS
# ( fails harmlessly if the chain already exists )
iptables -N CUSTOM_REDIS 2>/dev/null
# 3. flush the new chain ( delete old rules
# if they exist, e.g. if we changed ips )
iptables -F CUSTOM_REDIS
# 4. allow app1, app2 & app3 to talk
# to the redis container
iptables -A CUSTOM_REDIS -p tcp --dport 6379 --source $APP_IP1 --destination $REDIS_IP -j ACCEPT
iptables -A CUSTOM_REDIS -p tcp --dport 6379 --source $APP_IP2 --destination $REDIS_IP -j ACCEPT
iptables -A CUSTOM_REDIS -p tcp --dport 6379 --source $APP_IP3 --destination $REDIS_IP -j ACCEPT
# 5. don't allow any other IPs to access
# this port for this ip
iptables -A CUSTOM_REDIS -p tcp --dport 6379 --source 0.0.0.0/0 --destination $REDIS_IP -j DROP
# 6. attach the chain to the REDIS IP
iptables -I FORWARD 1 -p tcp --source 0.0.0.0/0 --destination $REDIS_IP --dport 6379 -j CUSTOM_REDIS
# 7. list the results
iptables --list CUSTOM_REDIS
Secure the PostgreSQL instance by limiting which IPs can talk to this container
# 1. delete the old forwarding rule for this psql
# ip, in case a previous run already attached our
# chain ( fails harmlessly on the first run )
iptables -D FORWARD -p tcp --source 0.0.0.0/0 --destination $PSQL_IP --dport 5432 -j CUSTOM_PSQL 2>/dev/null
# 2. create a new chain called CUSTOM_PSQL
# ( fails harmlessly if the chain already exists )
iptables -N CUSTOM_PSQL 2>/dev/null
# 3. flush the new chain ( delete old rules
# if they exist, e.g. if we changed ips )
iptables -F CUSTOM_PSQL
# 4. allow app1 to talk to psql ( a reverse rule is
# not needed here: this chain only ever sees traffic
# headed to $PSQL_IP:5432, and reply packets are
# typically accepted by Docker's ESTABLISHED
# conntrack rule in FORWARD )
iptables -A CUSTOM_PSQL -p tcp --dport 5432 --source $APP_IP1 --destination $PSQL_IP -j ACCEPT
# 5. don't allow any other IPs to access
# this port for this ip
iptables -A CUSTOM_PSQL -p tcp --dport 5432 --source 0.0.0.0/0 --destination $PSQL_IP -j DROP
# 6. attach the chain to the PSQL IP
iptables -I FORWARD 1 -p tcp --source 0.0.0.0/0 --destination $PSQL_IP --dport 5432 -j CUSTOM_PSQL
# 7. list the results
iptables --list CUSTOM_PSQL
And that’s it. Add this as a cronjob to run every hour or so, or as a task that runs on every reboot / Docker restart, and you’re good to go!
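For example, the corresponding crontab entries could look like this ( the script name and path are assumptions ):

```
# /etc/crontab -- run hourly and on every reboot
0 * * * *  root  /usr/local/sbin/limit-docker-ports.sh
@reboot    root  /usr/local/sbin/limit-docker-ports.sh
```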
Test the rules by trying to connect to these ports from outside the host, both before and after you apply the iptables rules.
Note: This is obviously a hack for legacy systems. You’re better off using docker-compose networks or linking containers directly, instead of exposing the ports to the world.