Dev Journal: Docker, Home Automation and Media Servers
Well, I wrapped up my side project. And, I think, largely binned it. I'm keeping the code. And who knows, maybe I'll start it up again in the future.
What's changed? Not a whole lot. I got it to the point where I wanted it. It even ran my house for a few weeks. The weakest link was me. But, the second weakest link was openHAB. When I got my Raspberry Pi, I started tinkering and ended up trying out Home Assistant. And, after a few days, it was doing everything I had been doing, as well as everything openHAB wasn't doing.
I don't want to harp on openHAB. It sounds like a few of the things I was missing are simply not fully tested. And, it seems like the HA community is a little more lax. In my case, I don't want to hook this up to an external network anyway, so security isn't my greatest concern. I really just want something to help consolidate controls for the whole smart home and power some displays with some nice looking information. The two things which sealed the deal were the better looking (and more flexible) UI in the form of the new Lovelace UI for HA, and the better device support. Literally every single piece of connected smart equipment in my home had support. But, that is a different story than the one I wanted to tell here.
All of this got me searching around for information on HA in various configurations. And on Pi projects in various configurations. And a lot of these came back to Docker. Or rather, I discovered that a lot of these projects had pre-built Docker images, and many even had easily templatized deployment scripts all ready for me.
I had gotten my feet a bit wet with Docker when I did my little home automation project. And, the rate at which it kept popping up made me increasingly curious.
The straw that broke the camel's back was that I couldn't find a lot of good information on just how much I could throw at my Pi install of HA before it started crawling. At the same time, I read a lot of bad things about the general stability of running anything on a Pi. Now, I'm not in the same boat as many. My smart home doesn't require HA to function. And most people in the house wouldn't even know if it broke. I could also, rather easily, restore a backup and be up and running in very little time. At worst I'd be out the cost of an SD card.
But, I also wanted to do a media server. And potentially a few other things I could host in Docker.
So, I thought: why not turn my desktop, which is usually turned off, into a server for various Docker related projects? I mean, the thing has 16GB of RAM, 16 logical cores (8 physical), a decent SSD, a 1TB internal storage drive and a 1TB USB attached drive. Then, if my Pi SD card gets fried, I can simply spin HA up in Docker on much more reliable (and less corruptible) hardware.
So, first up, Ubuntu 18.04. I like it when things just work. And while my desktop isn't exactly 100% Linux friendly, and Ubuntu throws a random error every time it starts on pretty much all of my machines... it nevertheless works. And, most importantly, it works well with Docker. Which leads to #2. Docker and Docker Compose. Easy installs on this version of Ubuntu. #3 is Portainer. I loaded this in a Docker container... and it helps me monitor and manage my Docker containers (and everything else Docker related).
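For reference, a Portainer deployment like mine can be sketched as a tiny Compose file. The image name, port and volume paths here are my assumptions from the usual examples, so check the Portainer docs before copying:

```yaml
# docker-compose.yml - minimal Portainer sketch (image/port/paths assumed)
version: "3"
services:
  portainer:
    image: portainer/portainer-ce   # community edition image
    restart: always
    ports:
      - "9000:9000"                 # web UI
    volumes:
      # Mounting the Docker socket is what lets Portainer manage the host's containers.
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data        # persists Portainer's own settings
volumes:
  portainer_data:
```

The Docker socket mount is the important line; without it Portainer is just a container with nothing to manage.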
So, the last step: a project. A reason to have this thing not running Windows. I bought a new SSD so I could unplug the Windows disk and switch back whenever I want. But, I still need a reason to leave this beast running most, if not all, of the time. So, first up was a media server. I tried Plex, but it was slow to start and encoded choppy crap. I was also turned off by Plex Pass, as it wanted me to buy the rights to play media on my Android phone. So, I grabbed Emby, what I presumed was a free, OSS variant. It looked lovely, but I fired it up on my TV and it told me that I have until the 2nd of next month to continue using it without paying.
So, I read around. It turns out Emby was once free. And then they kind of turned on the OSS community (and may or may not have violated licenses), closed-sourced important parts of the app and started charging. And not only charging... charging for basically everything.
Digging around a bit more, I found that Plex only charges for mobile clients. Web and, more importantly, TV apps are free on the same network as the server. And, where Emby's only option was subscription based, Plex did allow me to at least buy licenses for mobile devices for a low one-time fee.
Next up was the shitty, choppy encoding. Partly my fault. I think a big part was that I had everything except the container itself running off of the USB. Configs, Transcoding and Video files. And, I was transferring my media library over to the device at the same time. BUT... Emby had no problems.
The clue was in the setup scripts... and I hadn't actually missed it either. Emby's example Docker deployment command included adding in access to my hardware to handle transcoding. This was missing from the Plex scripts. A quick Google search confirmed it; this was exactly what I was missing. Plus an extra line for good measure.
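In Compose form, the missing piece looks something like the snippet below. `/dev/dri` is the usual device node for Intel iGPUs; the host paths are stand-ins for my own layout, not anything Plex requires:

```yaml
# Compose sketch: giving the official Plex image access to the GPU
# so transcoding is hardware accelerated. Paths are illustrative.
services:
  plex:
    image: plexinc/pms-docker
    devices:
      - /dev/dri:/dev/dri             # expose the iGPU for hardware transcoding
    volumes:
      - /ssd/plex/config:/config      # config on the fast internal SSD
      - /ssd/plex/transcode:/transcode
      - /hdd/media:/data              # media library on the internal HDD
```

The `devices:` entry is the one-liner that was missing from the Plex examples I started from.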
Now I have the transcoding and config on my internal SSD and the media on the internal HDD instead of the USB. And, with the added hardware acceleration, I'm encoding 30+GB BluRay files over the network and the Docker container is using... 8% CPU. It is brilliant and I love it.
So, I'm back to free for the media server.
Next will be setting up a backup Home Assistant instance in Docker. Once all of the settings and everything match the Pi... I might take the Pi and turn it into a display to control the home instead of being the server. Or, I may grab a DAC HAT and build out a Volumio device. Though, I may see if I can play with Volumio in a Docker image first... while I love quality-sounding music, I'm not going to bother if the OS is a pain.
Anyway, I know this wasn't so much a dev related journal as talking about how I stopped developing my app and what I replaced it with. But, there are a number of important things to take away from this I think:
Firstly, I stopped developing the app for a simple reason; it had taught me the technologies I was interested in learning as far as it was going to go (Docker and RabbitMQ) and once Christmas vacation was over my investment time was going to drop. While there were still novel aspects left to my plans, HA is actually planning to incorporate similar features. In short, by the time I got it done, something better was likely to already exist.
So, it had taught me what I wanted it to teach me. I had taken it as far as I needed to. It was a working product. I didn't give up before it reached at least the minimum stage for me.
There is nothing wrong with either taking up a project just to learn something or stopping development when you find an existing tool which does the job better. In fact, in my opinion, too many people take too many clones too far. And at the same time, many people rely on others too much when at least trying something on your own can be incredibly valuable.
In fact, as you can see, Docker has VERY much become a skill and a tool I'm taking further. I'll also be working with RabbitMQ at work. But, it also opened up my eyes to MQ tech in general. And, part of what kicked off this whole thing in respect to HA was adding MQTT to the equation. And MQTT solves much the same problem as RabbitMQ. Again, had I not done my own project I likely wouldn't have even thought about MQTT or considered it in the equation. Even worse, I may not have even considered HA.
For the informative bit of this post. Docker. What is it? If you are technical and want a technical description, you can find that anywhere. If you want a dumbed-down explanation, here it is: Docker is a tool which lets you spin up very lightweight application instances in their own pre-defined environments, which run more or less right on top of the OS and so are quite fast.
For instance, HA, running in a Linux container, will include all of the dependencies HA needs to run. It will load its own versions of them, so you can't have conflicts with your local environment. And, it will even have its own storage. You can even run multiple instances of the same image in different containers. For most of the stuff I'm looking at, I wouldn't want to. But I could.
I can also feed local resources into these containers. In the case of Plex, I feed in some local paths it can use to store configs and video files. This way, if I blow away the container, that data stays.
What this means is, I could have 2 Plex servers on the same machine pointing to different libraries of media if I wanted. A "his and hers" Plex setup. Pretty cool stuff.
From a development perspective it is even more powerful. I can use Docker Compose to define multiple containers to run together to build my app.
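As a sketch of what that looks like, here is a Compose file wiring an app up to its message queue. The service names, image tag and environment variable are illustrative, not my actual project:

```yaml
# Compose sketch: an app plus its RabbitMQ broker, started together.
version: "3"
services:
  rabbitmq:
    image: rabbitmq:3-management    # broker plus its web management UI
    ports:
      - "15672:15672"               # management UI on the host
  app:
    build: .                        # the app's own Dockerfile (assumed)
    depends_on:
      - rabbitmq                    # bring the broker up first
    environment:
      # Containers on the same Compose network resolve each other by service name.
      - AMQP_HOST=rabbitmq
```

One `docker-compose up` and the whole stack comes up together, which is the appeal from a development perspective.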
From an end user perspective though, it is pretty useful as well. Take my Plex setup. I had performance issues originally. So, instead of reinstalling Plex, I simply deleted the container I had, kept the same image, moved the files around, mounted those new local paths to the original container folders and started it back up again. Plex didn't need to know that the physical location of the files had changed. From its perspective they were in the same spot they always were.
In short, Docker is yet another way to deliver applications. Where there are boundaries crossed between the container and the host, that layer is all abstracted by Docker. So, if all of the configuration and data is on the file system, you could even migrate the whole system to an entirely different OS and hardware. And even pull the image down from scratch again, and it would function as though it had always been there (assuming the image and Docker supported the new platform of course).
MQTT and RabbitMQ, on the other hand, are Message Queue technologies. On the surface they don't sound like much. And, in reality, they don't do much. But, leveraged correctly, what they do can be incredibly valuable. In my software I used RabbitMQ as basically the entire communication layer between all of my classes. Using topics, I was able to broadcast a single message to anyone interested in receiving it. But, even if I only had a single consumer, I could still do other valuable things like make Rabbit hold onto the message until someone picked it up, and even have it persist that message if the system went down.
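To make the topic idea concrete, here is a toy, in-process sketch of the kind of topic matching a RabbitMQ topic exchange does (keys are dot-separated words; `*` matches exactly one word, `#` matches zero or more). It illustrates the routing rule only; it is not a real broker, and the topic names are made up:

```python
def topic_matches(pattern: str, routing_key: str) -> bool:
    """Match a dot-separated routing key against a RabbitMQ-style topic
    pattern: '*' matches exactly one word, '#' matches zero or more."""
    def match(pat, key):
        if not pat:
            return not key          # both exhausted -> match
        if pat[0] == "#":
            # '#' may swallow zero or more remaining words
            return any(match(pat[1:], key[i:]) for i in range(len(key) + 1))
        if key and (pat[0] == "*" or pat[0] == key[0]):
            return match(pat[1:], key[1:])
        return False
    return match(pattern.split("."), routing_key.split("."))

# A broadcast: one published message, delivered to every interested subscriber.
subscriptions = {
    "display":  "home.*.temperature",   # any room's temperature
    "logger":   "home.#",               # everything under home
    "doorbell": "home.door.pressed",
}
message_key = "home.kitchen.temperature"
receivers = [name for name, pat in subscriptions.items()
             if topic_matches(pat, message_key)]
print(receivers)  # ['display', 'logger']
```

One message, two receivers, no sender-side knowledge of who is listening; that is the property that made it work as a communication layer.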
Unlike Docker, this one really is only useful to technical types. Developers and those with a tech itch to scratch.
For me, one of the things I was interested in was building out low powered devices like Raspberry Pis or Arduinos to broadcast simple messages to the network to drive parts of the smart home. And, MQTT support is already baked into HA. So, if I had an MQTT broker running, I could point HA at it and send and receive messages from the queues and topics there. And then I could make my devices talk to that, rather than trying to figure out how to make those devices talk directly to HA.
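On the HA side that is only a few lines of configuration. A sketch, assuming a broker somewhere on the LAN; the IP address, sensor name and topic below are all made up:

```yaml
# configuration.yaml sketch: point HA at an MQTT broker on the LAN.
mqtt:
  broker: 192.168.1.50            # wherever the broker container lives (made up)

# A sensor fed by a Pi/Arduino publishing plain values to a topic.
sensor:
  - platform: mqtt
    name: "Garage Temperature"
    state_topic: "home/garage/temperature"
    unit_of_measurement: "°C"
```

The device side then only needs to publish a number to that topic; it never needs to know HA exists.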
From a developer standpoint, MQ systems have 2 valuable properties. These dedicated MQs are meant to do one thing and do it well. And, they are simple, because they just do one thing. In short, an MQ handles message delivery, including dealing with failures, very well. And because they only deliver messages, there isn't a whole lot to the interfaces and they can be quite easy to talk to.
All of this says nothing about the ways in which they can make communication itself much more interesting.
But, I don't want to wax poetic all night about MQs and Docker.
I think I managed to turn this around at the end and convert it more into a dev like discussion. And, I think my descriptions are "accurate enough". At the least, it helped me put my thoughts in order.