Docker – How and why I use it

Docker is a fantastic way to run applications in containers, separate from your host OS. But you may ask: why bother doing this, and why not just use VMs?

Docker is not a replacement for virtual machines or all local applications, but it can help you modularize your system and keep your host OS neat and tidy. I primarily use Docker for small applications, so that everything lives in a nice, tidy folder structure without a bunch of dependencies being installed on my host OS. Here are some of the things that I run in containers at home.

  • Couchpotato
  • Crashplan
  • Muximux
  • NZBget
  • Observium
  • Plex Media Server
  • Plexpy
  • Radarr
  • Sonarr
  • Unifi Controller
  • Watchtower Docker Automatic Updater

Running these things in containers keeps things like the Mono libraries, Ruby dependencies (and gems), Java, etc., from being stored on my host OS. I can also easily port these containers to a new system by stopping the container, rsyncing/ZFS sending it over, and running docker create on the other end. Here’s how I set up the folder structure for my containers and what I use to set most of them up.

All of this is running on my Ubuntu 16.04 server. The first thing I do is set up the folder structure for all of these apps, with the command shown below. Holy crap! That is a crazy long command. If you take a step back, you will see that I first switch to the root user, and then make the folders for all my apps in /docker/containers, with a config directory in each to house their configuration files. Then I create a shared downloads directory and some directories for specialized containers like Observium and Plex. Finally, I change the owner and group to my zack user to prevent any permission issues between my containers. As a side note, the /docker path is on a pair of ZFS mirrors made up of (4) 400GB HGST SAS disks, which makes taking snapshots of my containers’ configurations and content super easy.
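
Here is a reconstruction of that command; the app list and extra directories follow the description above, so treat the exact names as assumptions:

    # Switch to root, create a config dir for each app, a shared downloads
    # tree, and the extra dirs Plex and Observium need, then hand the whole
    # tree to the zack user (UID/GID 1000).
    sudo -i
    mkdir -p /docker/containers/{couchpotato,crashplan,muximux,nzbget,observium,plex,plexpy,radarr,sonarr,unifi}/config
    mkdir -p /docker/downloads/completed/{Movies,TV}
    mkdir -p /docker/containers/plex/transcode /docker/containers/observium/logs
    chown -R zack:zack /docker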

For future reference in the examples below, my user zack has a user ID of 1000 and a group ID of 1000.

First up is Couchpotato. Here I’m passing through the localtime from the host machine to the container. I’m also passing through the /docker/containers/couchpotato/config folder, mounted at /config inside the container. This also passes through a download directory, /docker/downloads/completed/Movies, that is shared with the NZBget container. Finally, I share my mergerfs pool with the container so it can move completed files there. I’m passing through port 5050 from the host to the container, which allows me to connect to it from my network by going to the host IP on port 5050.
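
A sketch of that create command, assuming the linuxserver.io image and its usual /config and /downloads mountpoints (my mergerfs pool lives at /storage):

    # Couchpotato: UID/GID 1000, host localtime, config + downloads + pool
    docker create \
      --name couchpotato \
      -e PUID=1000 -e PGID=1000 \
      -v /etc/localtime:/etc/localtime:ro \
      -v /docker/containers/couchpotato/config:/config \
      -v /docker/downloads/completed/Movies:/downloads \
      -v /storage:/storage \
      -p 5050:5050 \
      linuxserver/couchpotato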

Crashplan receives its hostname from the host, and also sets the timezone so that the container has accurate time. I’m again passing through a config directory to the container, plus my entire mergerfs pool so that I can back up specific directories to Crashplan Central. I’m passing through ports 4242 and 4243, which Crashplan needs to function. This runs Crashplan in headless mode on the server. I connect to this instance from my MacBook Air, which I have configured with the server’s IP address and the ui_info and identity files, so that I can manage it remotely. This uses Java, so I’m glad it isn’t on my host OS.
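
Something along these lines; the jrcs/crashplan image, the container-side paths, and the timezone value are assumptions:

    # Crashplan headless: host's hostname, explicit TZ, config + pool
    docker create \
      --name crashplan \
      -h $(hostname) \
      -e TZ=America/Chicago \
      -v /docker/containers/crashplan/config:/config \
      -v /storage:/storage \
      -p 4242:4242 -p 4243:4243 \
      jrcs/crashplan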

Muximux is a nice aggregator for all of these services. It gives me one landing page so that I don’t have to keep 10 different tabs open, one for each service. I have also added things like my EdgeOS login page and IPMI devices to this page. This runs on port 80.
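
A sketch (linuxserver.io image assumed):

    docker create \
      --name muximux \
      -e PUID=1000 -e PGID=1000 \
      -v /docker/containers/muximux/config:/config \
      -p 80:80 \
      linuxserver/muximux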

NZBget runs as the zack user and group. I’m passing through port 6789 and a couple of directories for files, following the same pattern as above.
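
Along these lines (the linuxserver.io image is an assumption):

    docker create \
      --name nzbget \
      -e PUID=1000 -e PGID=1000 \
      -v /docker/containers/nzbget/config:/config \
      -v /docker/downloads:/downloads \
      -p 6789:6789 \
      linuxserver/nzbget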

OpenVPN Access Server allows me to easily connect to my network remotely. I forwarded port 1194 to this host to support this container.
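
A sketch; the linuxserver/openvpn-as image and its flags are assumptions (this image has historically wanted host networking and elevated privileges):

    docker create \
      --name openvpn-as \
      --net=host \
      --privileged \
      -e PUID=1000 -e PGID=1000 \
      -e INTERFACE=eth0 \
      -v /docker/containers/openvpn-as/config:/config \
      linuxserver/openvpn-as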

I use Observium to monitor SNMP data from a few of my networking devices as well as my firewall. This has port 8668 passed through, along with the timezone from the host and a few directories that it needs to function.
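
A sketch of the Observium container; the uberchuckie/observium image, the container-side paths, and the timezone value are assumptions:

    docker create \
      --name observium \
      -e TZ=America/Chicago \
      -v /docker/containers/observium/config:/config \
      -v /docker/containers/observium/logs:/opt/observium/logs \
      -p 8668:8668 \
      uberchuckie/observium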

Plex is a beast when you factor in all of the metadata and artwork that it sucks in. This setup keeps everything in one tidy directory structure that is easily backed up. I’m using the host networking option so that Plex can function correctly, and I’m running the Plex Pass version instead of stable. I’m also passing through configuration and transcode directories, my mergerfs pool, as well as the underlying individual disks in the pool. This last part allows me to set up Plex folders for each disk, so viewing a file spins up only that one disk vs. potentially having to spin up a few disks, or the whole pool, as Plex “searches” for the file to play back.
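
A sketch; the linuxserver/plex image is an assumption, as are the individual disk mountpoints (/mnt/disk1, /mnt/disk2, and so on). On that image, VERSION=latest tracks the newest (Plex Pass) build:

    docker create \
      --name plex \
      --net=host \
      -e PUID=1000 -e PGID=1000 \
      -e VERSION=latest \
      -v /docker/containers/plex/config:/config \
      -v /docker/containers/plex/transcode:/transcode \
      -v /storage:/storage \
      -v /mnt/disk1:/disk1 \
      -v /mnt/disk2:/disk2 \
      linuxserver/plex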

Radarr is a fork of Sonarr that handles movie downloads, similar to Couchpotato. This runs on port 7878.
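
A sketch (linuxserver.io image; the /storage/Movies path is an assumption):

    docker create \
      --name radarr \
      -e PUID=1000 -e PGID=1000 \
      -v /docker/containers/radarr/config:/config \
      -v /docker/downloads/completed/Movies:/downloads \
      -v /storage/Movies:/movies \
      -p 7878:7878 \
      linuxserver/radarr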

Plexpy is a great way to gather stats from the Plex host. This runs on port 8181 and again runs as my user and group (1000).
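
A sketch (linuxserver.io image assumed):

    docker create \
      --name plexpy \
      -e PUID=1000 -e PGID=1000 \
      -v /docker/containers/plexpy/config:/config \
      -p 8181:8181 \
      linuxserver/plexpy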

This is getting repetitive 🙂 Sonarr runs on port 8989, again as my user and group. I pass through a few specific directories from my mergerfs pool as well as the download folder shared with NZBget. Sonarr has a bunch of Mono dependencies, so I’m glad they aren’t crufting up my host OS.
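
A sketch; the /storage/TV path inside the pool is an assumption:

    docker create \
      --name sonarr \
      -e PUID=1000 -e PGID=1000 \
      -v /docker/containers/sonarr/config:/config \
      -v /docker/downloads:/downloads \
      -v /storage/TV:/tv \
      -p 8989:8989 \
      linuxserver/sonarr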

Finally, Unifi. This uses Java again and requires a bunch of open ports, so it is a great thing to containerize. You can read about connecting your AP to the Unifi Controller in my older article.
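
A sketch with the ports the linuxserver/unifi image commonly maps (the exact port list is an assumption):

    docker create \
      --name unifi \
      -e PUID=1000 -e PGID=1000 \
      -v /docker/containers/unifi/config:/config \
      -p 8080:8080 -p 8443:8443 \
      -p 3478:3478/udp -p 10001:10001/udp \
      linuxserver/unifi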

Updating Docker container images is a bit weird when you first learn about it. First you must stop your container, remove it, pull the updated image, and then re-create the container with the same options. You also need to keep track of when the maintainer updates the images. Sure, you could write a script to do this, but luckily there is another Docker container that does it for you: Watchtower. Here’s how you set it up.
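
A minimal sketch; Watchtower only needs access to the Docker socket (v2tec/watchtower was the image name at the time, so treat it as an assumption):

    docker create \
      --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      v2tec/watchtower
    docker start watchtower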

That’s it. It will periodically check for updates to your Docker container images, and if there is a newer version, it will pull the image, and re-create the container. All without you lifting a finger.

Managing Containers
This is super easy. You can view your containers like this. It will show you what containers are running and for how long.
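
    docker ps        # running containers, with uptime
    docker ps -a     # all containers, including stopped ones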

You can start and stop them like this.
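
    # container names are whatever you set with --name
    docker stop couchpotato
    docker start couchpotato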

If you need to make a change to a container (add/remove a volume, add a port, etc.), you can easily remove the current container and re-run your docker create line with the new options.
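
    docker stop couchpotato
    docker rm couchpotato
    # ...then re-run the original docker create with the new volume/port flags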

If you ever need/want to completely remove a container image, you just stop the container, remove it, and then remove the image.
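
    docker stop couchpotato
    docker rm couchpotato
    docker rmi linuxserver/couchpotato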

You can view the log files of a container like this.
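
    docker logs couchpotato        # dump the log
    docker logs -f couchpotato     # follow it live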

Or, you can even enter the container if you’d like to.
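
    docker exec -it couchpotato /bin/bash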

This only scratches the surface of what you can do with Docker. It’s an awesome technology and I encourage you to check it out. Also, the people over at linuxserver.io have a huge list of awesome containers and are happy to assist with any issues you might have via their forums or IRC.

Permission denied on Docker

If you have all of your users and permissions set correctly and still see “permission denied” errors, you may want to check whether SELinux is causing the issue. You can read more about a possible solution here.
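
In short (this is the fix a commenter below confirmed on CentOS):

    sestatus             # check whether SELinux is enforcing
    sudo setenforce 0    # temporarily switch to permissive to test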

Zack


I love learning new things and trying out the latest technology.


35 Responses

  1. rebels1405 says:

    Hi. First off, let me give you a huge thank you for what you do on this site. I love the new look by the way. I have been following for the past few months and have just finished building a headless linux server using ubuntu 16.04. I have it set up with snapraid, MergerFS, couch potato, nzbget and others all thanks to you! Seriously, you have the best guides on the internet for what you do.

    My question is about VPN for usenet and things like nzbget, couchpotato, sonarr, etc. Do you use one? I have a PIA subscription, but I can’t seem to get it auto configured without having to manually log in and leave the terminal window open to actually use it, and even then, I am not sure if it is working, leaking DNS, or the like. It seems like you are also running all of these programs, so I would like to ask what your method is. Note – I tried to send this via the contact page, but I got an error when trying to send.

    • Zack says:

      First of all, thanks for the kind words and the heads up on the contact form (I just fixed it). I don’t use a VPN, as all communication is done via SSL. It would probably be worth investigating though. That being said, you could certainly do this on the Ubuntu host with a Docker container like this.

      But, if I were to pay for a VPN, I would rather put it on my router and force all traffic on the VPN VLAN to use the PIA tunnel (this would be hard to write a tutorial for, though, as there are TONS of router options, from hardware devices (aftermarket routers, Edgerouters, etc.) to software firewalls like PfSense or Sophos).

      Maybe let me know what tutorial you are following (or directions you are using) and I’ll take a look.

  2. rebels1405 says:

    Thanks for the reply! I was using this tutorial to set it up https://helpdesk.privateinternetaccess.com/hc/en-us/articles/219438247-Installing-OpenVPN-PIA-on-Linux and it worked, but I had to keep the terminal window open to stay connected and that isn’t ideal. I tried following the instructions on the docker container that you linked, but I couldn’t get it to work either. I got it installed, but it wouldn’t work for some reason. I do have a PfSense router that I am routing all traffic through. Is there a way to set it up there so that only traffic from my Ubuntu server goes through the VPN, unless I use the app to connect on my other non-headless machines? Does that even make sense? Sorry, I am still learning.

    • Zack says:

      No need to apologize for questions 🙂 To route traffic automatically by host in your PfSense router, you would need to set up a VLAN or a separate subnet on a different network interface. Those directions make it super easy to get PIA working in Ubuntu. I would just create an init script that runs at boot. These directions should accomplish this task easily (also note the first comment).

      PIA VPN Setup Directions for Headless Server

      You may also want to consider setting up a killswitch so that if the VPN Tunnel is down, your traffic won’t go out to the internet.

  3. chad says:

    Ok having trouble with the Couchpotato Docker setup. When I start it, the logs indicate a permission denied error.

    Not sure what I’m doing wrong. Obviously it’s a permissions issue, but everything under /docker is owned by the correct user (chown -R). PUID and PGID correspond correctly to the owner and group of the entire /docker folder, recursively. Here is my docker create command

    I’ve tried several combinations of permissions and can’t seem to get the error to go away. I’m on CentOS 7.2, kernel 3.10.0-327.28.3.el7.x86_64. Docker version is 1.10.3.

    Help? Any ideas?

    • Zack says:

      Definitely appears to be a permissions issue. Have you tried to see if it works with 777 permissions as a starting point?

      • chad says:

        Yes, tried all of that – got the t-shirt.

        Good news tho. Turned out to be an issue with SELinux being set to enabled. I found the solution at http://nanxiao.me/en/selinux-cause-permission-denied-issue-in-using-docker/. I chose to implement solution #2, and once I executed ‘sudo setenforce 0’ the container started working normally. I created the Plex container too and got normal operation after having the same permission issue as before. Can’t reboot my server right now, so I can’t test whether it will survive a reboot, but it’s a step in the right direction.

        Might want to keep this posted for us CentOS users! 😉

  4. codgedodger says:

    I’m having a hard time getting the dockerized Sonarr and Couchpotato added to the dockerized rutorrent. Is there a way I have to link the containers so they can read off rutorrent and add torrents?

    I have port 5050 open for Couchpotato, 8989 for Sonarr and 443, 80, 51412 open for rutorrent. I’m trying to link the RPC communication to both services (couchpotato and sonarr) but am wondering if, since they are all in containers, they can’t go back and forth. Any ideas?

    If I could get both couchpotato and sonarr working with rutorrent I’d be set. I’ve provided a link for the image when I run “docker ps -a”.

    http://imgur.com/xdrSveo

  5. crsavage1 says:

    Just a quick question, as I modeled my build from your site. Thank you by the way, pretty awesome. I also used this article to build Sonarr etc., and used your paths (/docker/downloads…), but that is an independent path rather than pointing to the mergerfs pool (/storage/TV, for example). Not a big deal, but I then have to go in and scrape the shows from the /docker/downloads/tv directory and copy them to /storage/TV. Did I miss a symlink somewhere or build the container incorrectly? Thanks for any help.

    • Zack says:

      Hello, thanks for the kind words. To answer your question: no, I didn’t miss a symlink. I don’t use the /storage directory for downloads; I only move finished downloads there. Notice that the Sonarr directions contain mountpoints for both /docker/downloads and /storage/tv_shows, etc. You need to turn on completed download handling in Sonarr, and you need to set the proper final path for your shows when you add them (or edit them) to point to /storage/tv_shows. Also, make sure your category in NZBget matches the category for your Download Client in Sonarr.

      • crsavage1 says:

        Oh, I meant whether I missed a symlink. Not you. I see /storage/tv as the path within Sonarr, but it continues to put everything under the /docker/download/tv folder. When I add a series, I add it under /storage/TV, but when I built the container I have a feeling I defined something incorrectly. When it gets the episodes, even though the path is defined as /storage/TV, it adds them to the download folder.

        • Zack says:

          I have NZBget set up to grab downloads and put the finished downloads in /docker/downloads/TV/completed. If you have Sonarr set up with the same category, it should know the finished downloads are there. Next, you set up the path in Sonarr so that once the downloads are done, it will rename and move them to the final path (/storage/TV). I hope that helps 🙂

  6. phuriousgeorge says:

    Hello again and once again, thank you for your blog! My home setup is modeled pretty closely on yours. I’m having a specific problem lately with samba/mergerfs permissions and Docker containers that I was curious if you might have some insight into?

    My grab/DL setup is a bit different, as I run a dedicated server I pool downloads on due to privacy/limited bandwidth at home, so my local apps all drop into a blackhole location for me to upload to my server (manually or automatically – I know I can automate, but I like the user input). The problem I’m having is that everything created in my blackhole location (and other places in my pool) is created with permissions I cannot access from my PC:

    -r----x--t 1 phuriousgeorge nogroup  30106 Dec 29 14:08 Cri.torrent
    -r----x--t 1 phuriousgeorge nogroup 866145 Dec 29 05:59 The.nzb

    I know I can chmod/chown as needed or set up a cron, but that’s a hassle and it seems it shouldn’t be required. Any file created by my local Ubuntu user is unreadable by my Windows PC =/. I had this working previously, but I didn’t document what I did and have had no luck with this recent fresh install.

    • Zack says:

      I assume you are connecting with the guest user on your Windows box instead of logging in with the user/password of your Ubuntu user. If so, that’s why you can’t use the files. I would either connect as the Ubuntu user (use smbpasswd to create a user and password that matches your Ubuntu user) or chmod /storage as 777.

      • phuriousgeorge says:

        Thanks, I forgot about the user auth. I don’t think I did that before because of constant issues with Windows 7, but I believe I’ve got things working, at least for the moment lol.

  7. charlieny100 says:

    I like the idea of putting everything in containers and want to give this a try. But like others, I want some traffic to go out over my router’s VPN. I currently route traffic by IP address. Can I have a Plex container with the IP of the host and an NZBget container with a different IP address?

    • Zack says:

      As long as you have a second NIC in the host, and can deal with configuring Docker’s networking, then anything is possible. You would just attach the NZBget container to the second network interface.

  8. Dulanic says:

    So here is a question that maybe I am just completely missing. How do you keep the containers up to date? Do you need to recreate them? Is there a command to update them?

    • Zack says:

      Good question. To update the image of a running Docker container, you need to remove and re-create the container. The good thing is that any volumes will still hold all your data and configurations, so this is very easy to do.
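
      For example (using NZBget; the image name matches the linuxserver.io containers above):

          docker stop nzbget
          docker rm nzbget
          docker pull linuxserver/nzbget
          # re-run your original docker create command, then:
          docker start nzbget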

      And you are back in business. The nice thing about many of these containers is that the application you are running will be updated if you just restart the container. Things like Plex, NZBget, and Sonarr all work like that. So, it could be as easy as this if you just wanted the latest version of Plex.
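
          docker restart plex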

      There is also Docker Compose, but I have not shown how to use that here. If you have built a compose.yml file, you could also update like this.
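
          docker-compose pull
          docker-compose up -d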

      • Dulanic says:

        So, a follow-up to this: what can I do to determine which of these update on restart and which need to be re-created? I didn’t keep track of all of my dockers, as I flagged all of them with --restart=always so they autostart. So a simple reboot would restart all the dockers I use, but I have no idea which actually need a re-create.

        • Zack says:

          You could write a script that checks for an update to the image, or use watchtower.

          Or, you can use it like this…
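
          For example, pointing Watchtower at just a few containers (the original snippet isn’t shown; names passed as arguments restrict what it watches):

              docker run -d --name watchtower \
                -v /var/run/docker.sock:/var/run/docker.sock \
                v2tec/watchtower nzbget sonarr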

          • Dulanic says:

            Awesome! Watchtower works perfect. I searched and searched and never saw that. That is a perfect choice to keep my dockers updated.

          • Dulanic says:

            OK, I did that and it seems Watchtower is bugged… I’ll report it. It flipped my internal and external ports on Muximux.

  9. Zack says:

    Hmmm, I’m glad it almost worked for you. I haven’t seen any weirdness from Watchtower, including with Muximux. Keep me posted on that. Did your other containers get updated correctly?

  10. haljordan says:

    How are you handling reboots of the host OS, and having each container start upon a successful reboot?

    • Zack says:

      Hello! Great question. There are lots of ways to do this, but I like to just use a simple BASH script that I call via crontab with the @reboot option. That way it always starts up. Here’s my script (/root/scripts/docker/starter.sh).
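
      A reconstruction of the idea (the actual script isn’t shown; the container names are examples):

          #!/bin/bash
          # /root/scripts/docker/starter.sh - start my containers after boot
          sleep 30   # give the Docker daemon a moment to come up
          for name in couchpotato crashplan muximux nzbget plex plexpy radarr sonarr unifi watchtower; do
            docker start "$name"
          done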

      And, my crontab line for this item (crontab -e as root).
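
          @reboot /root/scripts/docker/starter.sh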

    • Dulanic says:

      A way that I found works well also is, when you load the dockers originally, add --restart=always. This will automatically start dockers upon reboot, etc… so an example would be docker create --restart=always and then the rest…

      • Zack says:

        Yes, thanks for sharing. As I mentioned, there are tons of ways to do this, including using docker-compose. The reason I like my method is that if I don’t want a container to start after a reboot, I just comment it out in starter.sh. With --restart=always, I would either need to stop and rm the container and then reconfigure it (unless I have a compose.yml file) or use docker update --restart=no container_name just to avoid it auto-starting at boot. Perhaps a better way is to create your Docker container with --restart=unless-stopped instead. That way, if I have stopped a container before a restart, it won’t auto-restart.

  11. Hildebrau says:

    Zack, great stuff here. My old non-Docker media server has reached the end of its life span. I’m going to re-create it like you laid out here. Thank you!

    I was wondering if you’d be willing to share your config files as well? I know there is some sensitive stuff in there, but perhaps replacing that with a CHANGEME string or similar could work? I’m just looking for something that has all the plumbing worked out already between NZBGet, Sonarr, Radarr, etc. If that doesn’t interest you, no prob. I’ll hash it out manually this afternoon, I hope.

    Do you use any of the nzbToMedia scripts?

  12. codgedodger says:

    I just want to let you know that Watchtower is the best thing I’ve ever discovered. Thank you so much, Zack, for this amazing knowledge!

    • Zack says:

      I have been watching Organizr, but haven’t tried it yet. It would be super easy to try out though.
