Dev Journal: December 20th 2018 - Linux vs. Windows and making progress

It's a tired topic that I keep coming back to: Windows vs. Linux.

I wiped my laptop again. After getting my Atom BayTrail tablet running Magic Mirror in Electron locally rather than from my laptop, I decided to switch development of my new project to hardware running natively rather than in a VM. VirtualBox is simply too buggy and slow. No matter what hack I tried, even running the VM from the same machine was painful. In fact, it was so bad that, aside from random hotkey fun, remoting in was basically identical performance-wise.

So, why move now, aside from the performance issues? Well, my primary reason for using a VM in the first place was to reduce the amount of bloat installed on a machine I use regularly. To that end, two things changed: we've only been using Magic Mirror on one device lately, and I feel I've nailed down the dev environment requirements, which are within reason.

For Magic Mirror, the one machine we were still using wasn't the server, so there was some setup to be done. But since I had recently installed Lubuntu 18.10 on that tablet with great success, it was actually fairly easy to set everything up there. Once that was out of the way, I was free to use the laptop for dev purposes.

Why wipe the laptop? Because it was running Ubuntu 18.10, and there is no official stable build of the tooling I need for that version. I had to go back to the latest LTS, 18.04. I took the opportunity to load a new DE (KDE, via Kubuntu) and to use the minimal install as the basis. And I'm quite happy.

I would have been fine installing everything even with Magic Mirror still on there, had that support issue not existed. Frankly, I'm looking to complete this project, and I don't want odd behavior caused by something unsupported, which is what prompted the move to an 18.04-based OS. My dev installs are minimal and not so interdependent that I suspect uninstalling something would break me. I'm running Docker, Docker Compose, SQLite, VS Code and the .NET Core SDK. Some extras are managed within VS Code as extensions or, in the case of RabbitMQ, as Docker images. It all seems light and snappy enough performance-wise.

That being said, it isn't my dream environment. I would prefer Windows. Visual Studio Community is more bloated than VS Code, but it is a better IDE in my opinion. And Docker on Windows doesn't seem to have the same sort of distro problems it has on Linux. The only real thing going for Linux is that I want the final product to run on Linux or Windows, and I'm more confident that if it runs on this stack in Linux it will also run on Windows. The other way around, I have no such confidence.

What rules out Windows? Licensing. What I REALLY want is the ability to run a natively supported, sandboxed VM which doesn't need another license. Even more ideal would be if the performance were near native as well. Basically, I want a sandboxed environment I can deploy and code in to my heart's content and throw away later, without any concerns or loss of data in my primary environment if things go wrong. Since most PCs sold to consumers come with Home licenses, I don't even have the fun hypervisor bits which might actually enable that for me.

Linux provides me with a supported, free environment. I can dual boot (though it messes with the clock in Windows), and I can make an unlimited army of VMs. Whatever I want.

On the app front, progress is being made. I have a static page with static text loading while the REST API remains functional. Next, I need to make that page contain dynamic content, or serve up dynamic pages instead. But at this point, the key is... I have a server, a service and a UI now.
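Here's a minimal sketch of that setup, assuming the app is ASP.NET Core (2.x era) in C#; the wwwroot layout and route names are illustrative, not my actual project:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

// Serve the static page from wwwroot while leaving the REST API controllers alone.
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();      // the existing REST API controllers keep working
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseDefaultFiles();  // "/" resolves to wwwroot/index.html
        app.UseStaticFiles();   // serves the static page and its assets
        app.UseMvc();           // attribute-routed API endpoints, e.g. /api/...
    }
}
```

Making the page dynamic would then just mean swapping the static HTML for Razor pages, or having the page call the API from JavaScript.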

The server side is slick. The RabbitMQ-based system works great. I have a standardized payload I can send, unlimited publishers and subscribers, and little atomic pieces of work. It is insanely nerdy, but also insanely cool. I've begun abstracting the plug-in configuration out as well.
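To give a feel for what "standardized payload, unlimited publishers and subscribers" looks like, here's a rough C# sketch using the RabbitMQ.Client and Newtonsoft.Json packages. The PluginPayload fields and queue handling are my guesses at the shape of things, not the actual format:

```csharp
using System;
using System.Text;
using Newtonsoft.Json;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

// Hypothetical standardized envelope every plug-in publishes and consumes.
public class PluginPayload
{
    public string Source { get; set; }      // e.g. "nest.devices"
    public DateTime Timestamp { get; set; }
    public string Body { get; set; }        // serialized plug-in-specific data
}

public static class Bus
{
    public static void Publish(IModel channel, string queue, PluginPayload payload)
    {
        channel.QueueDeclare(queue, durable: true, exclusive: false, autoDelete: false);
        var bytes = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(payload));
        channel.BasicPublish(exchange: "", routingKey: queue, basicProperties: null, body: bytes);
    }

    public static void Subscribe(IModel channel, string queue, Action<PluginPayload> handler)
    {
        channel.QueueDeclare(queue, durable: true, exclusive: false, autoDelete: false);
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (_, ea) =>
        {
            var json = Encoding.UTF8.GetString(ea.Body);
            handler(JsonConvert.DeserializeObject<PluginPayload>(json));
        };
        channel.BasicConsume(queue, autoAck: true, consumer: consumer);
    }
}
```

Any number of plug-ins can call Publish or Subscribe against the same queues, which is where the little atomic pieces of work come from.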

The proof of concept at the moment uses Nest's APIs. The configuration is currently hard-coded but, as I said, it is abstracted out, so the plug-in itself will be completely unaware of it when it gets finalized. Though, I may improve upon that design further. There are three different plug-ins at the moment. One makes the device calls to the Nest API, which returns data about things like thermostats and cameras. Then a parser takes that raw data out of the payload, parses out the temperature-related data, transforms it into an internal format and posts the serialized form to another queue. Finally, a SQLite logger consumes that and writes it to a DB for historical/graphing purposes.
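Concretely, the parser and logger stages of that chain could look something like the sketch below, reusing the hypothetical Bus and PluginPayload from the earlier example along with Newtonsoft.Json and Microsoft.Data.Sqlite. The Nest field names, queue names and table schema are placeholders, not my real ones:

```csharp
using System;
using Microsoft.Data.Sqlite;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using RabbitMQ.Client;

// Hypothetical internal format the parser produces.
public class TemperatureReading
{
    public string DeviceId { get; set; }
    public double Celsius { get; set; }
    public DateTime ReadAt { get; set; }
}

public static class Pipeline
{
    // Parser stage: pull the raw Nest payload apart, keep only the temperature
    // data, and re-publish it in the internal format on a second queue.
    public static void StartParser(IModel channel)
    {
        Bus.Subscribe(channel, "nest.raw", raw =>
        {
            var nest = JObject.Parse(raw.Body);
            var reading = new TemperatureReading
            {
                DeviceId = (string)nest["device_id"],              // placeholder field names
                Celsius  = (double)nest["ambient_temperature_c"],
                ReadAt   = DateTime.UtcNow
            };
            Bus.Publish(channel, "temperature.readings", new PluginPayload
            {
                Source = "nest.parser",
                Timestamp = DateTime.UtcNow,
                Body = JsonConvert.SerializeObject(reading)
            });
        });
    }

    // Logger stage: consume the internal readings and append them to SQLite
    // for the historical/graphing data.
    public static void StartLogger(IModel channel, string dbPath)
    {
        Bus.Subscribe(channel, "temperature.readings", payload =>
        {
            var reading = JsonConvert.DeserializeObject<TemperatureReading>(payload.Body);
            using (var conn = new SqliteConnection($"Data Source={dbPath}"))
            {
                conn.Open();
                var cmd = conn.CreateCommand();
                cmd.CommandText =
                    "INSERT INTO readings (device_id, celsius, read_at) VALUES ($d, $c, $t)";
                cmd.Parameters.AddWithValue("$d", reading.DeviceId);
                cmd.Parameters.AddWithValue("$c", reading.Celsius);
                cmd.Parameters.AddWithValue("$t", reading.ReadAt);
                cmd.ExecuteNonQuery();
            }
        });
    }
}
```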

Again, this design, while cool, is also critical for my plans. MagicMirror loads JavaScript at the client, which in many cases executes the work there. So the plug-in getting camera snapshots, for instance, runs on every client. Same with weather queries. The problem? Most weather APIs and services like Nest throttle calls; you can only make so many in a given time frame. So not only is it wasteful, it also potentially runs afoul of those limits in a multi-room setup like the one I want.

So, the other thing my plug-ins do at the moment is cache data once it's retrieved. When the data is requested, they push out the last retrieved result instead of triggering a new read outside the normal flow of things. So a client could request the temperature from my Nest Thermostat a thousand times a minute, but with my 30-second query interval, Nest's services will only ever hear from me twice. If I throw in an initialization call, which I probably will, that would be a max of three times per minute.
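A minimal sketch of that caching behavior, assuming each plug-in polls on its own timer (the class and names are illustrative, not my actual code):

```csharp
using System;
using System.Threading;

// Poll the upstream API every 30 seconds and hand every request the cached
// result instead of triggering a fresh upstream call.
public class CachingPoller
{
    private readonly Func<string> _fetch;   // e.g. the actual Nest API call
    private readonly Timer _timer;
    private string _lastResult;

    public CachingPoller(Func<string> fetch)
    {
        _fetch = fetch;
        _lastResult = _fetch();             // the optional initialization read
        var interval = TimeSpan.FromSeconds(30);
        _timer = new Timer(_ => _lastResult = _fetch(), null, interval, interval);
    }

    // Clients can call this a thousand times a minute; the upstream service
    // still only sees the timed polls (plus the one initialization read).
    public string GetCurrent() => _lastResult;
}
```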

Which means it makes the most sense for other things interested in the current temperature to do the same: make an initialization call only to get the current value, which itself is probably cached data, then just subscribe and wait for the data you want.
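In code, a consumer of the temperature might then look like this sketch, built on the same hypothetical Bus, PluginPayload and CachingPoller from above:

```csharp
using System;
using RabbitMQ.Client;

public static class TemperatureConsumer
{
    // One read of the current (cached) value up front, then subscribe and
    // simply react whenever the parser publishes a fresh reading.
    public static void Start(IModel channel, CachingPoller poller)
    {
        Console.WriteLine($"startup value: {poller.GetCurrent()}"); // initialization call, cached data
        Bus.Subscribe(channel, "temperature.readings", payload =>
            Console.WriteLine($"{payload.Timestamp}: {payload.Body}"));
    }
}
```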

Other things it means, which I'm excited about: code for handling passwords, auth tokens, etc. only ever needs to be executed on the server. The clients just get the post-processed data they're interested in. That being said, I'm not personally a huge security nut on this project. The goal is a local-only server, and with it being Dockerized in the end, it should be reasonably sandboxed from the host system as well. But some free perks won't hurt if I plan on releasing this some day. Even if I plan on releasing with warnings and caveats galore.

So, yeah... things I still want to accomplish? Well, next is getting that UI in place and coming up with a strategy for UI plug-ins. Then I kind of want to take a look at containerized openHAB and building some plug-ins there. Ultimately, that is kind of a cheap cheat, but openHAB ALSO has a REST API and I already know it can be used to drive lights and whatnot. Lighting, temperature, security cameras and text (date and time, WiFi passwords, etc.) on a room-by-room basis are really the big ones for me.

Basically, what I want long term is a simple admin screen which gives me some basic control over room configs. Room configs will define things like room name, an optional password, which client-side background and UI plug-ins are included, and where/how to render them (probably a very simple grid configuration).
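As a sketch of what such a room config might deserialize into (property names are guesses based on the list above, not a finalized schema):

```csharp
using System.Collections.Generic;

// Hypothetical shape of a room config as stored/edited by the admin screen.
public class RoomConfig
{
    public string Name { get; set; }               // e.g. "Master Bedroom"
    public string Password { get; set; }           // optional; null/empty = no password
    public string Background { get; set; }         // client-side background plug-in id
    public List<UiPluginSlot> Plugins { get; set; }
}

public class UiPluginSlot
{
    public string PluginId { get; set; }           // e.g. "clock", "nest.temperature"
    public int GridRow { get; set; }               // very simple grid placement
    public int GridColumn { get; set; }
}
```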

Then, the screens themselves will be simple as well. There will be a master, password-protected screen which allows authorized terminals to drill into any room configuration. Individual rooms will each have a clock and date, maybe the weather, perhaps a background image polled from a library somewhere, controls for that room's lights, a temperature readout and, depending on the room, maybe the ability to change the temperature. And that is pretty much it. Also, since they will be in bedrooms, they will likely need two other features: a sleep feature which allows the user to set the screen to all black, and potentially a daytime, screensaver-like feature.

Anyway, one thing at a time I guess.
