Docker – How and why I use it

Docker is a fantastic way to run applications in containers, isolated from your host OS. But why bother doing this, and why not just use VMs instead?

Docker is not a replacement for virtual machines or all local applications, but it can help you modularize your system and keep your host OS neat and tidy. I primarily use Docker to run small applications from a nice tidy folder structure without installing a bunch of dependencies on my host OS. Here are some of the things that I run in containers at home.

  • Couchpotato
  • Crashplan
  • Muximux
  • NZBget
  • Observium
  • Plex Media Server
  • Plexpy
  • Ruby on Rails development environment
  • Sonarr
  • Unifi Controller

Running these things in containers keeps things like the Mono libraries, Ruby dependencies (and gems), Java, etc. from being installed on my host OS. I can also easily port these containers to a new system by stopping the container, rsyncing/ZFS sending it over, and running docker create on the other end. Here’s a snippet of how I set up the folder structure for my containers and what I use to create most of them.

All of this is running on my Ubuntu 16.04 server. The first thing I do is set up the folder structure for all of these apps. It’s a crazy long command, but if you take a step back you will see that I first switch to the root user, then make the folders for all my apps in /docker/containers with a config directory in each to house their configuration files. Then I create a shared downloads directory and some directories for specialized containers like Observium and Plex. Finally, I change the owner and group to my zack user to prevent any permissions issues between my containers. As a sidenote, the /docker path lives on a pair of ZFS mirrors made up of (4) 400GB HGST SAS disks, which makes taking snapshots of my containers’ configurations and content super easy.
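The original command didn’t survive here, but based on that description it would look something like this (the app names, the shared downloads directory, and the zack:zack ownership come from the post; the exact specialized subdirectory names are my guesses):

```shell
# Switch to root, then build the container directory tree
sudo -i
mkdir -p /docker/containers/{couchpotato,crashplan,muximux,nzbget,observium,plex,plexpy,rails,sonarr,unifi}/config
# Shared download directories used by NZBget, Couchpotato, and Sonarr
mkdir -p /docker/downloads/completed/{Movies,TV}
# Specialized directories for containers like Observium and Plex
mkdir -p /docker/containers/observium/{logs,rrd}
mkdir -p /docker/containers/plex/transcode
# Hand everything to the zack user to avoid permission issues
chown -R zack:zack /docker
```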

For future reference in the examples below, my user zack has a user id of 1000 and a group id of 1000.

Here I’m passing through the localtime from the host machine to the container. I’m also passing through the /docker/containers/couchpotato/config folder for the container’s configuration, along with the download directory /docker/downloads/completed/Movies, which is shared with the NZBget container. Finally, I share my mergerfs pool with the container so it can move completed files there. I’m passing through port 5050 from the host to the container, which lets me connect to Couchpotato from my network by going to the host IP on port 5050.
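The docker create line itself is gone from this copy of the post, but matching that description it would be along these lines (the linuxserver/couchpotato image name, the in-container mountpoints, and the /storage pool path are my assumptions; the ports, host paths, and 1000:1000 ids come from the text):

```shell
docker create \
  --name couchpotato \
  -e PUID=1000 -e PGID=1000 \
  -v /etc/localtime:/etc/localtime:ro \
  -v /docker/containers/couchpotato/config:/config \
  -v /docker/downloads/completed/Movies:/downloads \
  -v /storage:/storage \
  -p 5050:5050 \
  linuxserver/couchpotato
```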

Crashplan is receiving its name from the host, and I’m also setting the timezone so that the container has accurate time. I’m again passing through a config directory, plus my entire mergerfs pool so that I can back up specific directories to Crashplan Central. I’m passing through ports 4242 and 4243, which Crashplan needs to function. This runs Crashplan in headless mode on the server. I connect to this instance from my MacBook Air, configured with the server’s IP address and the ui_info and identity files, so that I can manage it remotely. This uses Java, so I’m glad it isn’t on my OS.
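A sketch of what that create command might look like (the image name, timezone value, and in-container mountpoints are assumptions; the hostname pass-through, ports, and pool path follow the description):

```shell
docker create \
  --name crashplan \
  --hostname $(hostname) \
  -e TZ=America/New_York \
  -v /docker/containers/crashplan/config:/config \
  -v /storage:/storage \
  -p 4242:4242 -p 4243:4243 \
  jrcs/crashplan
```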

Muximux is a nice aggregator for all of these services. It gives me one landing page so that I don’t have to keep 10 different tabs open, one for each service. I have also added things like my EdgeOS login page and IPMI devices to this page as well. This runs on port 80.

NZBget runs as the zack user and group. I’m passing through port 6789 and a couple of directories for files following the same pattern as above.
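Following the same pattern, the NZBget create command would look roughly like this (image name and in-container paths are my assumptions; the port and user/group ids come from the text):

```shell
docker create \
  --name nzbget \
  -e PUID=1000 -e PGID=1000 \
  -v /docker/containers/nzbget/config:/config \
  -v /docker/downloads:/downloads \
  -p 6789:6789 \
  linuxserver/nzbget
```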

OpenVPN Access Server allows me to easily connect remotely. I port forwarded port 1194 to this host to support this container.

I use Observium to monitor SNMP data from a few of my networking devices as well as my firewall. This has port 8668 passed through, along with the timezone from the host and a few directories that it needs to function.

Plex is a beast when you factor in all of the metadata and artwork that it sucks in. This keeps everything in one nice tidy directory structure that is easily backed up. I’m using the host networking option so that Plex can function correctly, along with running the plexpass version instead of stable. I’m also passing through configuration/transcode directories, my mergerfs pool, and the underlying individual disks in the pool. That last part allows me to set up Plex folders for each disk, so playing a file only spins up that one disk, instead of potentially spinning up a few disks or the whole pool as Plex “searches” for the file to play back.
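Put together, that might look like the following (the linuxserver/plex image, its VERSION=plexpass switch, and the /mnt/diskN paths for the individual pool members are my assumptions; host networking and the config/transcode/pool mounts come from the description):

```shell
docker create \
  --name plex \
  --net=host \
  -e PUID=1000 -e PGID=1000 \
  -e VERSION=plexpass \
  -v /docker/containers/plex/config:/config \
  -v /docker/containers/plex/transcode:/transcode \
  -v /storage:/storage \
  -v /mnt/disk1:/disk1 \
  -v /mnt/disk2:/disk2 \
  linuxserver/plex
```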

Plexpy is a great way to gather stats from the Plex host. This runs on port 8181 and again runs as my user and group (1000).

This is getting repetitive 🙂 Sonarr runs on port 8989, again as my user and group. I pass through a few specific directories from my mergerfs pool as well as a folder shared with NZBget. This has a bunch of Mono dependencies, so I’m glad those aren’t crufting up my OS.
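Again as a sketch (image name and in-container paths are assumptions; /storage/tv_shows is the final path mentioned in the comments below):

```shell
docker create \
  --name sonarr \
  -e PUID=1000 -e PGID=1000 \
  -v /docker/containers/sonarr/config:/config \
  -v /docker/downloads/completed/TV:/downloads \
  -v /storage/tv_shows:/tv \
  -p 8989:8989 \
  linuxserver/sonarr
```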

Finally, Unifi. This uses Java again and requires a bunch of open ports, so it’s a great thing to containerize. You can read about connecting your AP to the Unifi Controller in my older article.
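The original command isn’t preserved here; a plausible version, assuming the linuxserver/unifi image and the controller’s usual ports (8080 for device inform, 8443 for the web UI, 3478/udp for STUN, 10001/udp for discovery), would be:

```shell
docker create \
  --name unifi \
  -v /docker/containers/unifi/config:/config \
  -p 8080:8080 \
  -p 8443:8443 \
  -p 3478:3478/udp \
  -p 10001:10001/udp \
  linuxserver/unifi
```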

Managing Containers
This is super easy. You can view your containers like this. It will show you what containers are running and for how long.
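That listing comes from docker ps:

```shell
docker ps       # running containers: image, ports, and uptime
docker ps -a    # include stopped containers as well
```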

You can start and stop them like this.
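Using Sonarr as the example container name:

```shell
docker stop sonarr    # stop a running container by name
docker start sonarr   # start it back up
```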

If you need to make a change to a container (add/remove a volume, add a port, etc.), you can easily remove the current container and re-run your docker create line.
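For example, again using Sonarr as the container name:

```shell
docker stop sonarr
docker rm sonarr
# ...then re-run the original docker create command with the
# changed volumes/ports, and docker start the fresh container
```

Because all of the state lives in the bind-mounted config directory, recreating the container this way loses nothing.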

If you ever need/want to completely remove a container image, you just stop the container, remove it, and then remove the image.
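The three steps, with the image name as an example:

```shell
docker stop sonarr
docker rm sonarr                # remove the container
docker rmi linuxserver/sonarr   # remove the underlying image
```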

You can view the log files of a container like this.
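With docker logs:

```shell
docker logs sonarr      # dump the container's log output
docker logs -f sonarr   # follow it live, like tail -f
```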

Or, you can even enter the container if you’d like to.
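With docker exec (assuming the image ships bash):

```shell
docker exec -it sonarr /bin/bash   # interactive shell inside the container
```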

This only scratches the surface of what you can do with Docker. It’s an awesome technology and I encourage you to check it out. Also, the people who maintain these images have a huge list of awesome containers and are happy to assist with any issues you might have via their forums or IRC.

Permission denied on Docker

If you have all of your users and permissions set correctly and are still seeing permission errors, you may want to check whether SELinux is causing the issue. You can read more about a possible solution here.





17 Responses

  1. rebels1405 says:

    Hi. First off, let me give you a huge thank you for what you do on this site. I love the new look by the way. I have been following for the past few months and have just finished building a headless linux server using ubuntu 16.04. I have it set up with snapraid, MergerFS, couch potato, nzbget and others all thanks to you! Seriously, you have the best guides on the internet for what you do.

    My question is about VPN for usenet and things like nzbget, couch potato, sonarr, etc. Do you use one? I have a PIA subscription, but I can’t seem to get it auto-configured without having to manually log in and leave the terminal window open to actually use it, and even then I am not sure if it is working, leaking DNS, or the like. It seems like you are also running all of these programs, so I would like to ask what your method is. Note – I tried to send this via the contact page, but I got an error when trying to send.

    • Zack says:

      First of all, thanks for the kind words and the heads up on the contact form (I just fixed it). I don’t use a VPN, as all communication is done via SSL. It would probably be worth investigating, though. That being said, you could certainly do this on the Ubuntu host with a Docker container like this.

      But, if I were to pay for a VPN, I would rather put it on my router and force all traffic on the VPN VLAN to use the PIA tunnel (this would be hard to write a tutorial for, though, as there are TONS of router options, from hardware devices like aftermarket routers or Edgerouters to software firewalls like PfSense or Sophos).

      Maybe let me know what tutorial you are following (or directions you are using) and I’ll take a look.

  2. rebels1405 says:

    Thanks for the reply! I was using this tutorial to set it up, and it worked, but I had to keep the terminal window open to stay connected, and that isn’t ideal. I tried following the instructions for the Docker container that you linked, but I couldn’t get it to work either. I got it installed, but it wouldn’t work for some reason. I do have a PfSense router that I am routing all traffic through. Is there a way to set it up there so that only traffic from my Ubuntu server goes through the VPN, unless I use the app to connect on my other non-headless machines? Does that even make sense? Sorry, I am still learning.

    • Zack says:

      No need to apologize for questions 🙂 To route traffic automatically by host in your PfSense router, you would need to set up a VLAN or a separate subnet on a different network interface. Those directions are super easy for getting PIA working in Ubuntu. I would just create an init script that runs at login. These directions should accomplish this task easily (also note the first comment).

      PIA VPN Setup Directions for Headless Server

      You may also want to consider setting up a killswitch so that if the VPN Tunnel is down, your traffic won’t go out to the internet.

  3. chad says:

    Ok having trouble with the Couchpotato Docker setup. When I start it, the logs indicate

    Not sure what I’m doing wrong. Obviously it’s a permissions issue, but everything under /docker (chown -R) is owned by the correct user. The PUID and PGID correspond correctly to the owner and group of the entire /docker folder, recursively. Here is my docker create command

    I’ve tried several combinations of permissions and can’t seem to get the error to go away. I’m on CentOS 7.2, 3.10.0-327.28.3.el7.x86_64. Docker version is 1.10.3.

    Help? Any ideas?

    • Zack says:

      Definitely appears to be a permissions issue. Have you tried to see if it works with 777 permissions as a starting point?

      • chad says:

        Yes, tried all of that – got the t-shirt.

        Good news, though. It turned out to be an issue with SELinux being enabled. I found the solution and chose to implement solution #2; once I executed ‘sudo setenforce 0’ the container started working normally. I also created the Plex container and got normal operation there too, after hitting the same permission issue as before. I can’t reboot my server right now, so I can’t test whether it will survive a reboot, but it’s a step in the right direction.

        Might want to keep this posted for us CentOS users! 😉

  4. codgedodger says:

    I’m having a hard time getting the dockerized Sonarr and Couchpotato to talk to the dockerized rutorrent. Is there a way to link the containers so they can read off rutorrent and add torrents?

    I have port 5050 open for Couchpotato, 8989 for Sonarr, and 443, 80, and 51412 open for rutorrent. I’m trying to link the RPC communication to both services (couchpotato and sonarr), but am wondering if, since they are all in containers, they can’t talk back and forth. Any ideas?

    If I could get both couchpotato and sonarr working with rutorrent I’d be set. I’ve provided a link for the image when I run “docker ps -a”.

  5. crsavage1 says:

    Just a quick question, as I modeled my build from your site. Thank you, by the way, pretty awesome. I also used this article to build sonarr etc., and used your paths /docker/downloads… but it is an independent path rather than pointing to the mergerfs pool (/storage/TV, for example). Not a big deal, but I then have to go in and scrape the shows from the /docker/downloads/tv directory and copy them to /storage/TV. Did I miss a symlink somewhere or build the container incorrectly? Thanks for any help.

    • Zack says:

      Hello, thanks for the kind words. To answer your question, no, I didn’t miss a symlink. I don’t use the /storage directory for downloads; I only move finished downloads there. Notice that the Sonarr directions contain mountpoints for both /docker/downloads and /storage/tv_shows, etc. You need to turn on completed download handling in Sonarr, and you need to set the proper final path for your shows when you add them (or edit them) to point to /storage/tv_shows. Also, make sure your category in NZBget matches the category for your Download Client in Sonarr.

      • crsavage1 says:

        Oh, I meant whether I missed a symlink. Not you. I see /storage/TV as the path within Sonarr, but it continues to put everything under the /docker/downloads/tv folder. When I add a series, I add it as /storage/TV, but when I built the container I have a feeling I defined something incorrectly. When it gets the episodes, even though the path is defined as /storage/TV, it adds them to the download folder.

        • Zack says:

          I have NZBget set up to grab downloads and put the finished downloads in /docker/downloads/TV/completed. If you have Sonarr set up with the same category, it should know the finished downloads are there. Next, you set up the path in Sonarr so that once the downloads are done, it will rename and move them to the final path (/storage/TV). I hope that helps 🙂

  6. phuriousgeorge says:

    Hello again, and once again, thank you for your blog! My home setup is modeled pretty closely on yours. I’ve been having a specific problem lately with samba/mergerfs permissions and Docker containers that I was hoping you might have some insight into.

    My grab/DL setup is a bit different, as I run a dedicated server I pool downloads on due to privacy/limited bandwidth at home, so my local apps all drop into a blackhole location for me to upload to my server (manually or automatically – I know I can automate, but I like the user input). The problem I’m having is it seems everything created in my blackhole location (and other places in my pool) are created with permissions I cannot access from my PC:

    -r—-x–t 1 phuriousgeorge nogroup 30106 Dec 29 14:08 Cri.torrent
    -r—-x–t 1 phuriousgeorge nogroup 866145 Dec 29 05:59 The.nzb

    I know I can chmod/chown as needed or set up a cron job, but that’s a hassle and it seems it shouldn’t be required. Any file created by my local Ubuntu user is unreadable by my Windows PC =/. I had this working previously, but I didn’t document what I did and have had no luck with this recent fresh install.

    • Zack says:

      I assume you are connecting with the guest user on your Windows box instead of logging in with the user/password of your Ubuntu user. If so, that’s why you can’t use the files. I would either connect as the Ubuntu user (use smbpasswd to create a user and password that matches your Ubuntu user) or chmod /storage to 777.

      • phuriousgeorge says:

        Thanks, I forgot about the user auth. I don’t think I did that before because of constant issues with Windows 7, but I believe I’ve got things working, at least for the moment lol.
