Docker – How and why I use it

Docker is a fantastic way to run applications in containers, separate from your host OS. But you may ask: why bother doing this, and why not use VMs instead?

Docker is not a replacement for virtual machines or all local applications, but it can help you modularize your system and keep your host OS neat and tidy. I primarily use Docker for small applications, to keep everything in a nice tidy folder structure without installing a bunch of dependencies on my host OS. Here are some of the things that I run in containers at home.

Running these things in containers keeps things like the Mono libraries, Ruby dependencies (and gems), Java, etc., from being stored on my host OS. I can also easily port these containers to a new system by stopping the container, rsyncing/ZFS sending it over, and running docker create on the other end. Here’s a snippet of how I set up the folder structure for my containers and what I use to set most of them up.
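The stop/rsync/create dance can be sketched as a small helper script. This is a hypothetical sketch only: the destination host and container name are placeholders, and it assumes the /docker/containers layout created below.

```shell
# Hypothetical helper for moving one container's config tree to a new host.
# Usage and hostnames are placeholders; adapt before trusting it.
cat > /tmp/migrate-container.sh <<'EOF'
#!/bin/bash
# usage: migrate-container.sh <container-name> <destination-host>
set -e
name=$1
dest=$2
docker stop "$name"    # quiesce the app so its config files are consistent
rsync -a "/docker/containers/$name/" "$dest:/docker/containers/$name/"
echo "Now re-run the matching 'docker create' line on $dest and start it."
EOF
chmod +x /tmp/migrate-container.sh
```

On the other end you re-issue the same docker create command shown for that container, pointed at the copied config directory.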

All of this is running on my Ubuntu 16.04 server. The first thing I do is set up the folder structure for all of these apps. Holy crap, that is a crazy long command! If you take a step back, you will see that I first switch to the root user, then make the folders for all my apps in /docker/containers, with a config directory in each to house their configuration files. Then I create a shared downloads directory and some directories for specialized containers like Observium and Plex. Finally, I change the owner and group to my zack user to prevent any permissions issues between my containers. As a sidenote, the /docker path is on a pair of ZFS mirrors made up of eight 400GB HGST SAS disks plus an Intel S3700 ZIL, which makes taking snapshots of my containers’ configurations and content super easy.

sudo -i
mkdir -p /docker/containers/{couchpotato,crashplan,muximux,nzbget,observium,plex,plexpy,portainer,radarr,sonarr,unifi}/config
mkdir -p /docker/downloads/{completed/Movies,completed/TV}
mkdir -p /docker/containers/observium/{config,logs,rrd}
mkdir -p /docker/containers/plex/{config,transcode}
chown -R zack:zack /docker
chmod -R 777 /docker/containers/observium

For future reference in the examples below, my user zack has a user id (PUID) of 1000 and a group id (PGID) of 1000.
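If you are not sure what your own IDs are, the id command will tell you. This example prints them for the current user; substitute your own login where I use zack:

```shell
# Print the PUID/PGID values to pass into the container commands below.
# Uses the current user; run `id -u someuser` / `id -g someuser` to query another.
uid=$(id -u)
gid=$(id -g)
echo "PUID=$uid PGID=$gid"
```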


cAdvisor is a simple monitor for Docker containers. It’s an easy way to check utilization without needing to SSH into your host. It will be available at http://<host-ip>:7070 after it starts.

docker run                                      \
  --volume=/:/rootfs:ro                         \
  --volume=/var/run:/var/run:rw                 \
  --volume=/sys:/sys:ro                         \
  --volume=/var/lib/docker/:/var/lib/docker:ro  \
  --publish=7070:8080                           \
  --detach=true                                 \
  --name=cadvisor                               \
  google/cadvisor:latest


Here I’m passing through the localtime from the host machine to the container. I’m also passing through the /docker/containers/couchpotato/config folder, which is mounted at /config inside the container. This also passes through a download directory, /docker/downloads/completed/Movies, that is shared with the NZBget container. Finally, I have my mergerfs pool shared to the container to move completed files to. I’m passing through port 5050 from the host to the container, which allows me to connect to the container from my network by going to the host IP on port 5050.

docker run \
--name=couchpotato \
-v /etc/localtime:/etc/localtime:ro \
-v /docker/containers/couchpotato/config:/config \
-v /docker/downloads/completed/Movies:/downloads \
-v /storage/videos:/movies \
-e PGID=1000 -e PUID=1000  \
-p 5050:5050 \
linuxserver/couchpotato


Crashplan receives its name from the host, and I also set the timezone so that the container has accurate time. I’m again passing through a config directory to the container, plus my entire mergerfs pool so that I can back up specific directories to Crashplan Central. I’m passing through ports 4242 and 4243, which Crashplan needs to function. This runs Crashplan in headless mode on the server. I connect to this instance from my Macbook Air, which I have configured with the server’s IP address and the ui_info and identity files so that I can manage it remotely. This uses Java, so I’m glad it isn’t on my host OS.

docker run \
--name crashplan \
-e TZ=America/Detroit \
--publish 4242:4242 --publish 4243:4243 \
--volume /docker/containers/crashplan/config:/var/crashplan \
--volume /storage:/storage \
jrcs/crashplan   # image name assumed


A Minecraft server for my son.

docker run \
    --name minecraft-vanilla \
    -p 25565:25565 \
    -d \
    -it \
    -v /docker/containers/minecraft-vanilla/data:/data \
    -e EULA=TRUE \
    -e WHITELIST=username \
    -e OPS=username \
    -e DIFFICULTY=easy \
    -e MAX_PLAYERS=3 \
    -e ALLOW_NETHER=true \
    -e SPAWN_ANIMALS=true \
    -e SPAWN_MONSTERS=true \
    -e SPAWN_NPCS=true \
    -e MODE=creative \
    -e PVP=false \
    itzg/minecraft-server   # image name assumed


Muximux is a nice aggregator for all of these services. It gives me one landing page so that I don’t have to keep 10 different tabs open, one for each service. I have also added things like my EdgeOS login page and IPMI devices to this page. This runs on port 80.

docker run \
--name=muximux \
-p 80:80 \
-p 443:443 \
-v /docker/containers/muximux/config:/config \
linuxserver/muximux   # image name assumed


NZBget runs as the zack user and group. I’m passing through port 6789 and a couple of directories for files following the same pattern as above.

docker run \
--name nzbget \
-p 6789:6789 \
-e PUID=1000 -e PGID=1000 \
-v /docker/containers/nzbget/config:/config \
-v /docker/downloads:/downloads \
-v /storage/videos:/movies \
-v /storage:/storage \
linuxserver/nzbget


I use Observium to monitor SNMP data from a few of my networking devices as well as my firewall. This has port 8668 passed through, along with the timezone from the host and a few directories that it needs to function.

docker run \
--name=observium \
-p 8668:8668 \
-e TZ="America/Detroit" \
-v /docker/containers/observium/config:/config \
-v /docker/containers/observium/logs:/opt/observium/logs \
-v /docker/containers/observium/rrd:/opt/observium/rrd \
somsakc/observium   # image name assumed


OpenVPN Access Server allows me to easily connect remotely. I port forwarded port 1194 to this host to support this container.

docker run \
--name=openvpn-as \
-v /docker/containers/openvpn-as/config:/config \
-e PGID=1000 -e PUID=1000 \
-e TZ=America/Detroit \
-e INTERFACE=enp0s25 \
--net=host \
--privileged \
linuxserver/openvpn-as   # image name assumed


An adblocking DNS server for my house.

docker run -d \
    --name pihole \
    -p 53:53/tcp -p 53:53/udp -p 8082:80 \
    -v /docker/containers/pihole/config/etc/pihole:/etc/pihole/ \
    -v /docker/containers/pihole/config/dnsmasq.d/:/etc/dnsmasq.d/ \
    -e ServerIP= \
    -e ServerIPv6= \
    -e TZ=America/Detroit \
    --restart=always \
    pihole/pihole   # image name assumed


Plex is a beast when you factor in all of the metadata and artwork that it sucks in. This keeps everything in one nice tidy directory structure that is easily backed up. I’m using the host networking option so that Plex can function correctly, and I’m running the plexpass version instead of stable. I’m also passing through configuration/transcode directories, my mergerfs pool, and the underlying individual disks in the pool. This last part allows me to set up Plex folders for each disk and spin up only that one disk to view a file, vs. potentially having to spin up a few disks or the whole pool as Plex “searches” for the file to play back.

docker create \
--name plex \
--net=host \
-e TZ="America/Detroit" \
-e PLEX_UID=1000 -e PLEX_GID=1000 \
-v /docker/containers/plex/config:/config \
-v /storage:/storage \
-v /docker/containers/plex/transcode:/transcode \
--device /dev/dri:/dev/dri \
plexinc/pms-docker:plexpass   # image name assumed


Alternatively, here is a variant of the Plex container that uses the NVIDIA runtime for GPU transcoding (apparently the linuxserver image, given the PUID/PGID and VERSION variables):

docker create \
--runtime=nvidia \
--name=plex \
--net=host \
-e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
-e VERSION=latest \
-e PUID=1000 -e PGID=1000 \
-e TZ=America/Detroit \
-v /docker/containers/plex/config:/config \
-v /storage:/storage \
-v /transcode:/transcode \
linuxserver/plex   # image name assumed


Portainer is a nice management GUI for Docker containers. It allows you to view running containers and start/stop/destroy them. You can also create new containers, pulling from LS.IO repositories or any other Docker repo.

docker run -d -p 9000:9000 --name=portainer -v /docker/containers/portainer/config:/data -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer


Radarr is a fork of Sonarr that provides movie downloading similar to Couchpotato. This runs on port 7878.

docker create \
  --name=radarr \
    -v /docker/containers/radarr/config:/config \
    -v /storage/videos:/storage/videos \
    -v /docker/downloads/completed/RadarrMovies:/downloads/completed/RadarrMovies \
    -e PGID=1000 -e PUID=1000  \
    -e TZ="America/Detroit" \
    -p 7878:7878 \
    -p 9899:9899 \
    linuxserver/radarr   # image name assumed


This is getting repetitive 🙂 Sonarr runs on port 8989 and as my user and group again. I pass through a few specific directories from my mergerfs pool as well as a shared folder from NZBget. This has a bunch of Mono dependencies, so I’m glad this isn’t crufting up my OS.

docker create \
--name sonarr \
-p 8989:8989 \
-p 9898:9898 \
-e PUID=1000 -e PGID=1000 \
-v /etc/localtime:/etc/localtime:ro \
-v /docker/containers/sonarr/config:/config \
-v /storage/tv_shows:/storage/tv_shows \
-v /storage/anime:/storage/anime \
-v /docker/downloads/completed/TV:/downloads/completed/TV \
linuxserver/sonarr   # image name assumed


Finally, Unifi. This uses Java again and requires a bunch of open ports, so it’s a great thing to containerize. You can read about connecting your AP to the Unifi Controller in my older article.

docker create \
--name=unifi-controller \
-e PGID=1000 \
-e PUID=1000  \
-p 3478:3478/udp \
-p 10001:10001/udp \
-p 8080:8080 \
-p 8081:8081 \
-p 8443:8443 \
-p 8843:8843 \
-p 8880:8880 \
-v /etc/localtime:/etc/localtime:ro \
-v /docker/containers/unifi/config:/config \
--restart unless-stopped \
linuxserver/unifi   # image name assumed


Tautulli is a great way to gather stats from the Plex host. This runs on port 8181 and again runs as my user and group (1000).

docker create \
  --name=tautulli \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=America/Detroit \
  -p 8181:8181 \
  -v /docker/containers/tautulli/config:/config \
  -v /docker/containers/plex/config/Library/Application\ Support/Plex\ Media\ Server/Logs:/logs \
  --restart unless-stopped \
  linuxserver/tautulli   # image name assumed


Updating Docker container images is a bit weird when you first learn about it. First, you must stop your container, remove it, pull the updated image, and then re-create the container with the same options. You also need to keep track of when the maintainer updates the image. Sure, you could write a script to do this, but luckily there is another Docker container that does it for you. It’s called Watchtower. Here’s how you set it up.

docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --cleanup

That’s it. It will periodically check for updates to your Docker container images, and if there is a newer version, it will pull the image, and re-create the container. All without you lifting a finger.

Managing Containers
This is super easy. You can view your containers like this. It will show you all of your containers, whether they are running, and for how long.

docker ps -a

You can start and stop them like this.

docker start unifi
docker stop unifi

If you need to make a change to a container (add/remove a volume, add a port, etc.), you can easily remove the current container and re-run your docker create line again.

# Stop the running container
docker stop nzbget

# Remove the container
docker rm nzbget

# Re-Run the create line here...
docker run --name nzbget ... the rest

If you ever need/want to completely remove a container image, you just stop the container, remove it, and then remove the image.

docker stop nzbget
docker rm nzbget
docker rmi linuxserver/nzbget

You can view the log files of a container like this.

docker logs -f plexpy

Or, you can even enter the container if you’d like to.

docker exec -it crashplan /bin/bash

This only scratches the surface of what you can do with Docker. It’s an awesome technology and I encourage you to check it out. Also, the people over at LinuxServer.io have a huge list of awesome containers and are happy to assist with any issues you might have via their forums or IRC.

Permission denied on Docker

If you have all of your users and permissions set correctly and still get Permission denied errors, check whether SELinux is causing the issue. You can read more about a possible solution here.
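A quick way to check is the getenforce command, which only exists on SELinux-enabled hosts, so this sketch falls back gracefully elsewhere:

```shell
# Check whether SELinux is enforcing, a common cause of container
# "Permission denied" errors on CentOS/RHEL hosts.
if command -v getenforce >/dev/null 2>&1; then
  getenforce    # prints Enforcing, Permissive, or Disabled
else
  echo "SELinux tools not installed; likely not the problem"
fi
```

If it prints Enforcing, try `sudo setenforce 0` as a temporary test, or append `:z` to the volume mounts (e.g. `-v /docker/containers/foo/config:/config:z`) so Docker relabels the directory for container access.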



48 Responses

  1. rebels1405 says:

    Hi. First off, let me give you a huge thank you for what you do on this site. I love the new look by the way. I have been following for the past few months and have just finished building a headless linux server using ubuntu 16.04. I have it set up with snapraid, MergerFS, couch potato, nzbget and others all thanks to you! Seriously, you have the best guides on the internet for what you do.

    My question is about VPN for usenet and things like nzbget, couch potato, sonarr, etc. Do you use one? I have a PIA subscription, but I can’t seem to get it auto configured without having to manually log in and leave the terminal window open to actually use it, and even then, I am not sure if it is working, leaking DNS, or the like. It seems like you are also running all of these programs, so I would like to ask what your method is. Note – I tried to send this via the contact page, but I got an error when trying to send.

    • Zack says:

      First of all, thanks for the kind words and the heads up on the contact form (I just fixed it). I don’t use a VPN, as all communication is done via SSL. It would probably be worth investigating though. That being said, you could certainly do this on the Ubuntu host with a Docker container like this.

      But, if I were to pay for a VPN, I would rather put it on my router and force all traffic on the VPN VLAN to use the PIA tunnel (this would be hard to write a tutorial for everyone though, as there are TONS of router options, from hardware devices like aftermarket routers or Edgerouters to software firewalls like PfSense or Sophos).

      Maybe let me know what tutorial you are following (or directions you are using) and I’ll take a look.

  2. rebels1405 says:

    Thanks for the reply! I was using this tutorial to set it up and it worked, but I had to keep the terminal window open to stay connected and that isn’t ideal. I tried following the instructions on the docker container that you linked, but I couldn’t get it to work either. I got it installed, but it wouldn’t work for some reason. I do have a PfSense router that i am routing all traffic through. Is there a way to set it up there to where only traffic from my ubuntu server goes through that unless I use the app to connect on my other non-headless machines? Does that even make sense? Sorry, I am still learning.

    • Zack says:

      No need to apologize for questions 🙂 To route traffic automatically by host in your PfSense router, you would need to set up a VLAN or a separate subnet on a different network interface. Those directions for getting PIA working in Ubuntu are super easy. I would just create an init script that runs at login. These directions should accomplish this task easily (also note the first comment).

      PIA VPN Setup Directions for Headless Server

      You may also want to consider setting up a killswitch so that if the VPN Tunnel is down, your traffic won’t go out to the internet.

  3. chad says:

    Ok having trouble with the Couchpotato Docker setup. When I start it, the logs indicate

    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] 10-adduser: executing... 
              _     _ _
             | |___| (_) ___
             | / __| | |/ _ \ 
             | \__ \ | | (_) |
             |_|___/ |_|\___/
    Brought to you by
    We do accept donations at:
    User uid:    1000
    User gid:    1000
    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 30-install: executing... 
    usermod: Failed to change ownership of the home directory chown: /config: Permission denied
    Cloning into '/app/couchpotato'...
    [cont-init.d] 30-install: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.
    tail: can't open '/config/data/logs/CouchPotato.log': No such file or directory
    tail: can't open '/config/data/logs/error.log': No such file or directory
    Traceback (most recent call last):
    Traceback (most recent call last):
      File "/app/couchpotato/", line 133, in 
        l = Loader()
      File "/app/couchpotato/", line 52, in __init__
      File "/usr/lib/python2.7/", line 157, in makedirs
        mkdir(name, mode)
    OSError: [Errno 13] Permission denied: '/config/data'

    Not sure what I’m doing wrong. Obviously it’s a permissions issue, but everything under /docker is owned by the correct user. PUID and PGID correspond correctly to the owner and group of the entire /docker folder, recursively. Here is my docker create command

    docker create \
    --name=couchpotato \
    -v /etc/localtime:/etc/localtime:ro \
    -v /docker/containers/couchpotato/config:/config \
    -v /mnt/storage/downloads/completed/Movies:/downloads \
    -v /mnt/storage/videos:/movies \
    -e PGID=1000 \
    -e PUID=1000 \
    -p 5050:5050 linuxserver/couchpotato

    I’ve tried several combinations of permissions and can’t seem to get the error to go away. I’m on CentOS 7.2, 3.10.0-327.28.3.el7.x86_64. Docker version is 1.10.3

    Help? Any ideas?

    • Zack says:

      Definitely appears to be a permissions issue. Have you tried to see if it works with 777 permissions as a starting point?

      chmod -R 777 /docker/containers/couchpotato
      • chad says:

        Yes, tried all of that – got the t-shirt.

        Good news tho. It turned out to be an issue with SELinux being set to enforcing. I found the solution online and chose to implement solution #2; once I executed ‘sudo setenforce 0’ the container started working normally. I created the Plex container too and got normal operation after having the same permission issue as before. I can’t reboot my server right now, so I can’t test whether it will survive a reboot, but it’s a step in the right direction.

        Might want to keep this posted for us CentOS users! 😉

  4. codgedodger says:

    I’m having a hard time getting the dockerized Sonarr and Couchpotato added to the dockerized rutorrent. Is there a way I have to link the docks so they can read off Rutorrent and add torrents?

    I have port 5050 open for Couchpotato, 8989 for Sonarr and 443,80,51412 open for rutorrent. I’m trying to link the RPC communication to both services (couchpotato and sonarr) but am wondering if since they are all docked they can’t go back and forth. Have an ideas?

    If I could get both couchpotato and sonarr working with rutorrent I’d be set. I’ve provided a link for the image when I run “docker ps -a”.

  5. crsavage1 says:

    Just a quick question, as I modeled my build from your site. Thank you by the way, pretty awesome. I also used this article to build sonarr etc., but used your paths (/docker/downloads…), and it is an independent path rather than pointing to the mergerfs pool (/storage/TV for example). Not a big deal, but I then have to go in and scrape the shows from the /docker/downloads/tv directory and copy them to /storage/TV. Did I miss a symlink somewhere or build the container incorrectly? Thanks for any help.

    • Zack says:

      Hello, thanks for the kind words. To answer your question, no, I didn’t miss a symlink. I don’t use the /storage pool for downloads, I only move finished downloads there. Notice that the Sonarr directions contain mountpoints for both /docker/downloads and /storage/tv_shows, etc. You need to turn on completed download handling in Sonarr, and you need to set the proper final path for your shows when you add them (or edit them) to point to /storage/tv_shows. Also, make sure your category in NZBget matches the category for your Download Client in Sonarr.

      • crsavage1 says:

        Oh, I meant whether I missed a symlink. Not you. I see the /storage/tv as the path within sonarr but it continues to put everything under the /docker/download/tv folder. When I add a series, I add them as /storage/TV, but when I built the container I have a feeling I defined something incorrect. When it gets the episodes, even though defined as /storage/TV, it adds them to the download folder.

        • Zack says:

          I have NZBget set up to grab downloads and put the finished downloads in /docker/downloads/TV/completed. If you have Sonarr set up with the same category, it will know the finished downloads are there. Next, you set up the path in Sonarr so that once the downloads are done, it will rename and move them to the final path (/storage/TV). I hope that helps 🙂

  6. phuriousgeorge says:

    Hello again and once again, thank you for your blog! My home setup is modeled pretty close to yours. I’m having a specific problem here lately with samba/mergerfs permissions and Docker containers I was curious if you might have some insight to?

    My grab/DL setup is a bit different, as I run a dedicated server I pool downloads on due to privacy/limited bandwidth at home, so my local apps all drop into a blackhole location for me to upload to my server (manually or automatically – I know I can automate, but I like the user input). The problem I’m having is it seems everything created in my blackhole location (and other places in my pool) are created with permissions I cannot access from my PC:

    -r—-x–t 1 phuriousgeorge nogroup 30106 Dec 29 14:08 Cri.torrent
    -r—-x–t 1 phuriousgeorge nogroup 866145 Dec 29 05:59 The.nzb

    I know I can chmod/chown as needed or setup a cron, but that’s a hassle and seems it shouldn’t be required. Any file created by my Ubuntu local user is unreadable by my Windows PC =/. I had this working previously, but I didn’t document what I did and seem to have no luck this recent fresh install.

    comment = Ubuntu File Server
    path = /storage
    browsable = yes
    guest ok = yes
    read only = no
    create mask = 0777
    hide files = /lost+found/snapraid.content/.Trash-1000/
    veto files = /lost+found/snapraid.content/.Trash-1000/
    • Zack says:

      I assume you are connecting with the guest user on your Windows box instead of logging in with the user/password of your Ubuntu user. If so, that’s why you can’t use the files. I would either connect as the Ubuntu user (use smbpasswd to create a user and password that matches your Ubuntu user) or chmod /storage as 777.

      chmod -R 777 /storage
      • phuriousgeorge says:

        Thanks, I forgot about the user auth. I don’t think I did that before because of constant issues with Windows 7, but I believe I’ve got things working, at least for the moment lol.

  7. charlieny100 says:

    I like the idea of putting everything in containers and want to give this a try. But like others, I want some traffic to go out over my router’s VPN. I currently route traffic by IP address. Can I have a Plex container that uses the IP of the host and an NZBGet container with a different IP address?

    • Zack says:

      As long as you have a second NIC in the host, and can deal with configuring the Docker networking, then anything is possible. You would just attach the nzbget container to the second network interface.

  8. Dulanic says:

    So here is a question that maybe I am just completely missing. How do you keep the containers up to date? Do you need to recreate them? Is there a command to update them?

    • Zack says:

      Good question. To update the image of a running Docker container, you need to remove and recreate it. The good thing is that any volumes will still hold all your data and configurations, so this is very easy to do.

      docker stop plex
      docker rm plex
      docker run \
      -d \
      --name plex \
      --net=host \
      -e TZ="America/Detroit" \
      -e PLEX_UID=1000 -e PLEX_GID=1000 \
      -v /docker/containers/plex/config:/config \
      -v /storage:/storage \
      -v /docker/containers/plex/transcode:/transcode \
      plexinc/pms-docker   # image name assumed

      And, you are back in business. The nice thing about many of these containers is that the application you are running will be updated if you just restart the container. Things like Plex, NZBget, and Sonarr all work like that. So, it could be as easy as this if you just want the latest version of Plex.

      docker restart plex

      There is also Docker Compose, but I have not shown how to use that here. If you have built a compose.yml file, you could also update it like this.

      docker-compose up -d --build plex
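For reference, a minimal compose.yml for the Plex container above might look like the following. This is a sketch: the image name is assumed from the earlier run command, and the paths mirror the layout used throughout this post (written out with a heredoc so you can adapt it).

```shell
# Write a minimal, hypothetical compose file matching the Plex container above.
cat > /tmp/compose-example.yml <<'EOF'
version: "2"
services:
  plex:
    image: plexinc/pms-docker
    network_mode: host
    environment:
      - TZ=America/Detroit
      - PLEX_UID=1000
      - PLEX_GID=1000
    volumes:
      - /docker/containers/plex/config:/config
      - /storage:/storage
      - /docker/containers/plex/transcode:/transcode
    restart: unless-stopped
EOF
echo "wrote /tmp/compose-example.yml"
```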

      • Dulanic says:

        So a follow up to this… what can I do to determine which of these do this and which need to be recreated? I didn’t keep track of all of my dockers, as I flagged all of them with --restart=always so they autostart. So a simple reboot would restart all the dockers I use, but I have no idea which actually need a recreate.

        • Zack says:

          You could write a script that checks for an update to the image, or use watchtower.

          docker run -d \
          --name watchtower \
          -v /var/run/docker.sock:/var/run/docker.sock \
          containrrr/watchtower   # image name assumed

          Or, you can use it like this…

          docker run -it -v /var/run/docker.sock:/var/run/docker.sock webhippie/watchtower
          • Dulanic says:

            Awesome! Watchtower works perfect. I searched and searched and never saw that. That is a perfect choice to keep my dockers updated.

          • Dulanic says:

            OK I did that and that watchtower is bugged… I’ll report it. It flipped my internal and external ports on muximux

  9. Zack says:

    Hmmm, I’m glad it almost worked for you. I haven’t seen any weirdness from Watchtower, including with Muximux. Keep me posted on that. Did your other containers get updated correctly?

  10. haljordan says:

    How are you handling reboots of the host os and having each container start upon a successful reboot of the host os?

    • Zack says:

      Hello! Great question. There are lots of ways to do this, but I like to just use a simple BASH script that I call via crontab with the @reboot option. That way it always starts up. Here’s my script (in /root/scripts/docker/):

      #! /bin/bash
      # Enable Docker Containers
      docker start crashplan
      docker start unifi
      docker start nzbget
      docker start plex
      docker start plexpy
      docker start sonarr
      docker start radarr
      docker start muximux
      docker start openvpn-as
      docker start booksonic
      docker start watchtower

      And, my crontab line for this item (crontab -e as root).

      # Start Docker Containers
      @reboot /root/scripts/docker/ > /dev/null 2>&1
    • Dulanic says:

      A way that I found works well also is, when you load the dockers originally, add --restart=always. This will automatically start dockers upon reboot etc… so an example would be docker create --restart=always and then the rest…

      • Zack says:

        Yes, thanks for sharing. As I mentioned, there are tons of ways to do this, including using docker-compose. The reason I like my method is that if I don’t want a container to start after a reboot, I just comment it out in the script. With --restart=always, I would either need to stop and rm the container and then reconfigure it (unless I have a compose.yml file), or use docker update --restart=no container_name just to avoid it auto starting at boot. Perhaps a better way is to create your Docker container with --restart=unless-stopped instead. That way, if I have stopped a container before a restart, it won’t auto restart.

  11. Hildebrau says:

    Zach, great stuff here. My old non docker media server has reached its life span. I’m going to recreate it like you laid out here. Thank you!

    I was wondering if you’d be willing to share your config files as well? I know there is some sensitive stuff in there, but perhaps replacing that with a CHANGEME string or similar could work? I’m just looking for something that has all the plumbing worked out already between NZBGet, sonarr, radarr, etc. If that doesn’t interest you, no prob. I’ll hash it out manually this afternoon, I hope.

    Do you use any of the nzbToMedia scripts?

  12. codgedodger says:

    I just want let you know that Watchtower is the best thing I’ve ever discovered. Thank you so much Zack for this amazing knowledge!

    • Zack says:

      I have been watching Organizr, but haven’t tried it yet. It would be super easy to try out though.

      docker stop muximux
      mkdir -p /docker/containers/organizr/config
      docker create \
        --name=organizr \
        -v /docker/containers/organizr/config:/config \
        -e PGID=1000 -e PUID=1000  \
        -p 80:80 \
        lsiocommunity/organizr   # image name assumed
      docker start organizr
  13. haljordan says:

    Do you have some example commands that you can run from the host OS to affect various containers? I would like to script a pause of nzbget. I’m pretty sure something like this from the host OS: docker exec -it nzbget /app/nzbget -P should work, but it returns this error:
    Request sent
    No response or invalid response (timeout, not nzbget-server or wrong nzbget-server version)
    I’m doing this so that I can script an rclone upload to gdrive, but have the script pause nzbget first… then resume upon completion of the upload.

    • Zack says:

      I just pause the container itself vs. pausing nzbget. Here is all you need to both pause and unpause it.

      docker pause nzbget
      docker unpause nzbget

      I use this exact setup in my SnapRAID sync script.

      What you tried should work, but doesn’t appear to. Although, you can get the version number, which is strange.

      docker exec -it nzbget /app/nzbget -v
      nzbget version: 18.1

      That may be worth a ticket on their GitHub or a question on the LS.IO forum.

      • haljordan says:

        Here’s how I accomplished this.
        #beginning of script (pause nzbget)
        docker exec -it nzbget /app/nzbget -P
        #various rclone commands
        #after rclone is done (unpause nzbget)
        docker exec -it nzbget /app/nzbget -U

        • Zack says:

          Thanks for the reply! This will surely help others. Pausing NZBget via docker exec is probably the safest way to accomplish this, but I have had no issue pausing/unpausing the container. Plus, it’s easier to set up an array of containers to stop/start via script, as I did in my SnapRAID sync script.

  14. Savage702 says:

    So I want to get started with Docker. I’m already running all the things I want to Dockerize, so I started easy with PlexPy (which I wasn’t running) and cAdvisor. Got both of them going, all seems good with them. Organizr threw a tantrum because port 80 is in use by Apache. I had to pick a different port.

    But items like Plex & Sonarr, that are accessing my Snapraid array… I see in your Plex, you just listed /storage, when I was under the impression I was going to have to list out each location?

    then I notice in Sonarr, you DID do that. ?

    Also, how would we get the current Plex config/library and all the cached items to pick up from the Docker version? To retain all my views, and not have it scrape for days?

    • Zack says:

      It sounds like you have started off well. In regards to storage for things like Sonarr/Radarr, I only pass through what that container actually needs. My Plex container needs all of my stuff including pictures, home movies, etc., so it’s just easier to pass all of /storage through to it. I also mapped /storage:/storage so that I could transfer my old library into the Plex container without re-scraping, as you mentioned (the paths remained the same).

      Here is how you migrate your data from the old system.

      Basically, in the Dockerized Plex
      1. Sign out of Plex
      2. Shut down the container

      docker stop plex

      3. Stop your old Plex server as well.
      4. Copy data from the old to the new docker container

      rsync -av --progress /var/lib/plexmediaserver/Library/Application\ Support/Plex\ Media\ Server/ /docker/containers/plex/config/Library/Application\ Support/Plex\ Media\ Server/

      5. Fire up the Docker container

      docker start plex

      6. Launch the Plex app, and login.
      7. Update your libraries
      8. Try to play something.

      If something doesn’t work, you can always start over with no issue.

      docker stop plex
      docker rm plex
      rm -rf /docker/containers/plex
      mkdir -p /docker/containers/plex/{config,transcode}
      chown -R your_user:your_user /docker/containers/plex
      • Savage702 says:

        Thank you good sir. This all seemed to go off without a hitch. I have Plex, Sonarr and Sabnzbd all up and running. I did all this on a new install of ubuntu, and just flip flopped between drives and used a common drive to transfer the data and configs. It took a while for the Plex library to update again; even though everything was there, it seemed to want to go through everything again.

        I am adding to all my containers the following:
        --restart unless-stopped \

        This seems like a nice option, although I think it didn’t start up plexpy on reboot. Seemed there was one of them that didn’t take.

        Also, I discovered the docker update command, which is nice since I forgot to add the restart option on Plex
        docker update --restart unless-stopped plex

        Not sure if it’s my imagination, but images are displaying a little slower while scrolling my plex library now, and I had a movie on my Roku crash the roku (causing the roku to reboot) twice last night. Could be a Roku issue… not too sure. Will have to see how things go.

  15. haljordan says:

    So now that crashplan is kaput, what have you explored going forward? There are some ways to script the backup of data volumes of containers to a .tar file. You could then script rclone sync of those .tar files to google drive.

    • Zack says:

      I’m currently stopping each container nightly and using duplicity to back up to my remote colo’d backup server. I’ve been considering doing this with Backblaze B2, but I haven’t set that up yet. I’d love to hear what others are doing.
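      That nightly routine is roughly the following sketch. The container list, duplicity target URL, and script path are all placeholders; stopping the containers first keeps their config files quiescent during the backup.

      ```shell
      # Hypothetical nightly backup: stop containers, duplicity the config
      # tree to a remote, then start everything again. Names and the
      # rsync:// target are placeholders - substitute your own.
      cat > /tmp/backup-containers.sh <<'EOF'
      #!/bin/bash
      set -e
      containers="plex sonarr radarr nzbget"
      for c in $containers; do docker stop "$c"; done
      duplicity /docker/containers rsync://backupbox//backups/docker
      for c in $containers; do docker start "$c"; done
      EOF
      chmod +x /tmp/backup-containers.sh
      ```

      Call it from root’s crontab at some quiet hour, the same way the @reboot start script above is wired up.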

  16. mzuidwijk says:

    Zack my friend! I love your website 🙂 I started playing with Docker a few weeks ago (with the intention of getting rid of vmware and running everything in Docker containers). I’m also planning to build myself an energy-friendly new server to replace my old Intel Atom with 4G ram and 4x 2TB mdadm RAID5. I now have 10x 4TB SAS disks and was planning to sell them and buy lower-energy disks (2,5″ SATA)… till I saw your website. No RAID; I’ll be using mergerFS and Snapraid in the to-be-built server. All my VMs will run in Docker (home assistant, unifi, grafana, influxdb, telegraf, and some others). Main reason: spindown of disks to preserve energy 😀

    • Zack says:

      Thanks! I’m very glad that my site benefited you 🙂

      P.S. I don’t shy away from donations to help me continue to host this site going forward.
