Docker - the good, the bad and the ugly

mad_ady
Posts: 6783
Joined: Wed Jul 15, 2015 5:00 pm
languages_spoken: english
ODROIDs: XU4, C1+, C2, N1, H2, N2
Location: Bucharest, Romania
Has thanked: 215 times
Been thanked: 164 times
Contact:

Docker - the good, the bad and the ugly

Unread post by mad_ady » Thu Nov 07, 2019 9:46 pm

I think older Linux sysadmins see it as wasteful, while new users embrace it as a package manager.

After going through an audit and having to explain why I still have unpatched servers from 10 years ago, having dockerised apps doesn't seem like such a bad thing. The reason is often that OS upgrades lead to downtime and need testing, while having your apps in containers allows you to upgrade your OS without breaking the apps.
Running a 10-year-old Apache inside your container is bad, but at least the container can add a new layer of protection - the attacker can break into your old container, but not into your host OS (or not directly, anyway).

But I do have concerns about memory use. If you run two containers (identical or different) whose apps load the same libfoo.so, does your OS load it once into RAM? I suspect it gets loaded twice, which impacts RAM, caches, etc.
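For what it's worth, the answer seems to hinge on whether the two containers map the same underlying file. With the overlay2 storage driver, containers started from the same image share the read-only image layers, so the library resolves to the same inode and the kernel's page cache holds its text pages only once; containers from different images each bring their own copy. A rough diagnostic sketch (assumes a Docker host with overlay2; the library path is an assumption for a Debian-based image):

```shell
# Start two containers from the same image.
docker run -d --name c1 debian:bookworm sleep 300
docker run -d --name c2 debian:bookworm sleep 300

# Compare the inode of the same shared library in both containers.
# Matching inode numbers suggest both map the same underlying layer
# file, so its pages live in the page cache only once.
docker exec c1 stat -Lc '%i' /lib/x86_64-linux-gnu/libc.so.6
docker exec c2 stat -Lc '%i' /lib/x86_64-linux-gnu/libc.so.6

docker rm -f c1 c2
```

If the containers are built from different base images, the inodes differ and the library really is cached twice.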

Disk space usage can at least be mitigated with a filesystem like ZFS, which can do deduplication...
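As a hedged sketch of that idea (pool, device, and dataset names are placeholders; ZFS dedup is RAM-hungry, and Docker would need its zfs storage driver rather than overlay2 on such a dataset):

```shell
# Create a pool and a deduplicated dataset for Docker's storage.
zpool create tank /dev/sdb
zfs create -o dedup=on -o mountpoint=/var/lib/docker tank/docker

# Verify the property took effect.
zfs get dedup tank/docker
```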

So - what are your thoughts?

Containers have been around long enough that they are considered the successors of virtualization. And I'm old enough to remember shouting and screaming at virtualization when it first surfaced...

powerful owl
Posts: 105
Joined: Thu Mar 28, 2019 8:57 pm
languages_spoken: english
ODROIDs: 6 x HC1, H2
Has thanked: 20 times
Been thanked: 9 times
Contact:

Re: Docker - the good, the bad and the ugly

Unread post by powerful owl » Fri Nov 08, 2019 12:45 am

mad_ady wrote:
Thu Nov 07, 2019 9:46 pm
Containers have been around long enough that they are considered the successors of virtualization.
They are? I thought they were complementary.

I've never managed to "get" Docker. I'm not saying it's bad, just that I don't understand it. At least for what I do, it just seems simpler to install what I need. I've started using containers with LXC, but that's almost the same as working with a VM. I'm thinking I'll try RancherOS (in a VM cough), perhaps that will finally make the Docker penny drop for me.

elatllat
Posts: 1573
Joined: Tue Sep 01, 2015 8:54 am
languages_spoken: english
ODROIDs: XU4, N1, N2
Has thanked: 24 times
Been thanked: 64 times
Contact:

Re: Docker - the good, the bad and the ugly

Unread post by elatllat » Fri Nov 08, 2019 1:56 am

Virtualization (KVM, etc) adds security.
Containerization (LXC, etc) adds portability.
Both are often used as a more user friendly replacement for chroot.

Personally, I'm not a fan of containerization (while it helps enable some security settings, it also introduces more security problems).
Neither technology is required for continuous availability (BIND, BitTorrent, CockroachDB, Dovecot, git, etc.), so neither is actually significant for keeping things up to date.

https://external-preview.redd.it/tPgaGg ... a0ba3b58b2
These users thanked the author elatllat for the post:
mad_ady (Fri Nov 08, 2019 3:54 am)

meveric
Posts: 10527
Joined: Mon Feb 25, 2013 2:41 pm
languages_spoken: german, english
ODROIDs: X2, U2, U3, XU-Lite, XU3, XU3-Lite, C1, XU4, C2, C1+, XU4Q, HC1, N1, Go, H2 (N4100), N2, H2 (J4105)
Has thanked: 17 times
Been thanked: 149 times
Contact:

Re: Docker - the good, the bad and the ugly

Unread post by meveric » Fri Nov 08, 2019 3:34 am

Docker has its uses; it's especially useful in development, for rapid deployment of test and development environments.
But running a production environment off of Docker is not something I would suggest.
In fact, even software development in Docker often leads to major issues, as memory leaks are not directly visible.
Also, running Docker as a "package manager" is pretty stupid.
Running something like Transmission in a Docker container, when all you need to do is apt-get install transmission-gtk, makes you wonder why people waste resources running a Docker container for a simple application.
If you want "containerized" software installation, use snap packages or something similar; that's what they are made for.

On the other hand, Docker did have lots of issues with memory management in the past. Older versions always utilized swap if you wanted to limit memory, because the --memory-swap option didn't exist. That means instead of limiting your RAM, Docker simply started writing data to the HDD, and this could not be prevented (newer versions have this fixed).
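With a current Docker engine the two limits can be set independently; setting --memory-swap equal to --memory disallows swap entirely. A sketch (assumes cgroup memory accounting is enabled on the host; the cgroup file paths cover v2 and v1 respectively):

```shell
# Cap the container at 256 MB of RAM and forbid any swap usage.
docker run --rm --memory=256m --memory-swap=256m alpine:3 \
    sh -c 'cat /sys/fs/cgroup/memory.max 2>/dev/null \
        || cat /sys/fs/cgroup/memory/memory.limit_in_bytes'
```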
mad_ady wrote:
Thu Nov 07, 2019 9:46 pm
Running 10 year old apache inside your container is bad, but at least the container can add a new layer of protection - the attacker can break into your old container, but not in your host os (or not directly anyway).
I think the opposite is the case, since this gives a false sense of security. If you run old software with known vulnerabilities, it's more likely that one of them can affect your system.

Also you might want to consider the things mentioned here:
https://www.oreilly.com/ideas/five-secu ... ing-docker
The first two especially are, I think, the most problematic.

(and there are other people raising concerns about Docker and container as well).
https://seclists.org/oss-sec/2019/q1/119
https://techbeacon.com/security/hackers ... ophe-3-2-1
These users thanked the author meveric for the post:
mad_ady (Fri Nov 08, 2019 3:54 am)
Donate to support my work on the ODROID GameStation Turbo Image for U2/U3 XU3/XU4 X2 X C1 as well as many other releases.
Check out the Games and Emulators section to find some of my work or check the files in my repository to find the software i build for ODROIDs.
If you want to add my repository to your image read my HOWTO integrate my repo into your image.

Tvan
Posts: 10
Joined: Sun Jul 21, 2019 8:35 am
languages_spoken: english
ODROIDs: xu4,hc2,n2
Has thanked: 0
Been thanked: 3 times
Contact:

Re: Docker - the good, the bad and the ugly

Unread post by Tvan » Fri Nov 08, 2019 3:57 am

Ouch, old Linux sysadmin here.
Virtualization is for running more systems on fewer resources, which doesn't do a thing for security.
Containers are like an offshoot of what we used to call sandboxes: isolate, develop, test, then into production. Still nothing for security.
The simple rule is: the more complex the system and the more software installed, the harder it is to secure.
Just my 2 cents
Tvan

mad_ady
Posts: 6783
Joined: Wed Jul 15, 2015 5:00 pm
languages_spoken: english
ODROIDs: XU4, C1+, C2, N1, H2, N2
Location: Bucharest, Romania
Has thanked: 215 times
Been thanked: 164 times
Contact:

Re: Docker - the good, the bad and the ugly

Unread post by mad_ady » Fri Nov 08, 2019 4:36 am

Thank you for your thoughts.
@meveric: kernel exploits pose the same risk for apps running on bare metal. VMs do offer more protection, you're right. Regarding getting DoS'ed - you could run the container in a cgroup, I think, which would limit CPU/memory/IO and the maximum number of processes. That could limit the effect of an attack.
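Docker exposes those cgroup controls directly as run flags, so no manual cgroup setup is needed. A sketch (image name and block device are placeholders):

```shell
# --cpus: throttle to 1.5 cores; --memory: hard RAM cap;
# --pids-limit: cap the process/thread count (mitigates fork bombs);
# --device-read-bps: throttle read bandwidth on a block device.
docker run --rm \
    --cpus=1.5 --memory=512m --pids-limit=100 \
    --device-read-bps /dev/sda:10mb \
    myapp:latest
```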

Let me ask you all for your advice:
I have a bunch of old servers running RHEL 4, RHEL 5, and CentOS 6, with all sorts of undocumented, custom production code (on old, physical hardware). This all needs to be modernised, even if the apps remain largely the same.
I was thinking of deploying some VMs with a modern OS and moving the apps into docker containers. The reason for separating the apps from the OS is to allow the OS to be updated more frequently and to be able to move the apps if needed to other systems.
If I move them directly onto the OS, I lose confidence in doing system upgrades.
With Docker, I could take the base code, rebuild the image with newer packages, and run tests without affecting production.

I still need to do a lot of digital archaeology to:
1. find out which apps/scripts are still used
2. find out the connectivity/network requirements of these scripts
3. make a list of dependencies for the script (modules, files, etc)
4. add custom application code to a source control system (bye bye working directly on production...)
5. create the Dockerfile to build everything.

I still need to do steps 1-4 for each application, so step 5 can be either a dockerfile or an ansible playbook...
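Once steps 1-4 have produced a dependency list and a repo, step 5 could look something like the following. This is purely a hypothetical sketch; the base image, packages, and paths are placeholders standing in for whatever the archaeology turns up:

```dockerfile
FROM centos:7
# Dependencies discovered in step 3 (placeholder packages).
RUN yum -y install httpd mod_perl && yum clean all
# Application code pulled from the source control added in step 4.
COPY ./legacy-app/ /var/www/legacy-app/
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
```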

What do you guys recommend as a future-proof solution that balances system administration, security and performance?

elatllat
Posts: 1573
Joined: Tue Sep 01, 2015 8:54 am
languages_spoken: english
ODROIDs: XU4, N1, N2
Has thanked: 24 times
Been thanked: 64 times
Contact:

Re: Docker - the good, the bad and the ugly

Unread post by elatllat » Fri Nov 08, 2019 5:44 am

Tvan wrote:
Fri Nov 08, 2019 3:57 am
...Virtualization ... doesn't do a thing for security...
The general rule of thumb is the smaller and more stable the code base the more secure it is, so I'd rank VM>BSD>Linux in order of security.
I'm sure the cloud heavyweights like Google, Amazon, and Microsoft would agree (because they use VMs on big hardware instead of many discrete hardware units).
Granted, Microsoft has dropped the security ball on its cloud offerings a few times, but they do that with everything.

To be clear, we are talking about securing one app from another on the same hardware... though ODROID/Scaleway/etc. are making discrete hardware for each app an option.

elatllat
Posts: 1573
Joined: Tue Sep 01, 2015 8:54 am
languages_spoken: english
ODROIDs: XU4, N1, N2
Has thanked: 24 times
Been thanked: 64 times
Contact:

Re: Docker - the good, the bad and the ugly

Unread post by elatllat » Fri Nov 08, 2019 6:03 am

mad_ady wrote:
Fri Nov 08, 2019 4:36 am
...all sorts of undocumented, custom production code...
I'd start by listing the end goal of each component,
then move a piece at a time into individual CentOS 8 VMs (on GCP, AWS, Rackspace, or self-hosted).
Anything that's abandonware, try to replace with a modern alternative;
anything that can have a multi-master configuration or customization, do that.
Anything really hairy, with incompatibilities with the rest of the system, I might put in a container - but more likely I'd just make an install script for it.

mad_ady
Posts: 6783
Joined: Wed Jul 15, 2015 5:00 pm
languages_spoken: english
ODROIDs: XU4, C1+, C2, N1, H2, N2
Location: Bucharest, Romania
Has thanked: 215 times
Been thanked: 164 times
Contact:

Re: Docker - the good, the bad and the ugly

Unread post by mad_ady » Fri Nov 08, 2019 2:56 pm

Thanks for the advice, but wouldn't moving (internal) apps to a new VM land me in the same update nightmare 10 years from now (assuming I'd still be working in the same place)?
At least with containers I could presumably upgrade the OS without breaking the apps (much), or upgrade the containers one at a time, when needed.
Is my thought process wrong, or am I missing something?

meveric
Posts: 10527
Joined: Mon Feb 25, 2013 2:41 pm
languages_spoken: german, english
ODROIDs: X2, U2, U3, XU-Lite, XU3, XU3-Lite, C1, XU4, C2, C1+, XU4Q, HC1, N1, Go, H2 (N4100), N2, H2 (J4105)
Has thanked: 17 times
Been thanked: 149 times
Contact:

Re: Docker - the good, the bad and the ugly

Unread post by meveric » Fri Nov 08, 2019 4:05 pm

mad_ady wrote:
Fri Nov 08, 2019 4:36 am
Thank you for your thoughts.
@meveric : kernel exploits have the same risk for apps running on bare metal.
Correct, but if your excuse for running old software in a Docker container is that it's safer, this risk multiplies: it's easier to take over a Docker container running old, unpatched software and then attack the kernel from there, where you already have root privileges.
mad_ady wrote:
Fri Nov 08, 2019 4:36 am
Let me ask you all for your advice:
I have a bunch of old servers running RHEL4, 5, Centos 6 running all sorts of undocumented, custom production code (on old, physical hardware). This all needs to be modernised, even if the apps remain largely the same.
I was thinking of deploying some VMs with a modern OS and moving the apps into docker containers. The reason for separating the apps from the OS is to allow the OS to be updated more frequently and to be able to move the apps if needed to other systems.
If I move them over directly on the OS I lose confidence in doing system upgrades.
I could upgrade the docker instance by adding the base code and rebuilding the container with newer packages and doing tests without affecting production.
Since Docker shares resources, and especially the kernel, with the host system, if your application is incompatible with newer kernel versions, a Docker container won't help you either - you still break your programs.
This has already been seen with applications like firewalls (ufw, for example), where a change in the kernel code breaks command-line invocations that some applications rely on, because the kernel no longer supports them. (https://bugs.debian.org/cgi-bin/bugrepo ... bug=915627)
mad_ady wrote:
Fri Nov 08, 2019 4:36 am
I still need to do a lot of digital archaeology to:
1. find out which apps/scripts are still used
2. find out the connectivity/network requirements of these scripts
3. make a list of dependencies for the script (modules, files, etc)
4. add custom application code to a source control system (bye bye working directly on production...)
5. create the docker file to build everything.

I still need to do steps 1-4 for each application, so step 5 can be either a dockerfile or an ansible playbook...

What do you guys recommend as a future-proof solution that balances system administration, security and performance?
That's what test systems are normally for: a VM or second machine running the same software, where you perform a system upgrade and test whether it's working.
Or snapshots, if you're using a VM or LVM solution, where you can simply revert to a state you know was working if something goes wrong.
Also, checking dependencies often reveals how far you can go with the upgrades without breaking the system.
Keeping your system (Docker or real system) in an old state will always pose a security risk. Who knows what will be found from now on in 10-year-old code that no one ever considered?
Check the last huge security issues with Linux: the problems were all several years old, not just "recent bugs". So you might end up with drivers on a system that in 4 years turn out to be able to take down your entire production system, with no way to upgrade it, as you are trapped in your Docker deployment.
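The snapshot-and-revert approach mentioned above can be sketched with LVM like this (volume group and LV names are placeholders; the snapshot must be sized to hold the writes made during the upgrade):

```shell
# Take a snapshot of the root LV before upgrading.
lvcreate --size 10G --snapshot --name root-pre-upgrade /dev/vg0/root

# ... perform the system upgrade and test ...

# If it went wrong, merge the snapshot back to roll the LV back to
# its pre-upgrade state (the merge completes on the next activation).
lvconvert --merge /dev/vg0/root-pre-upgrade
```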
