Microservices... again
OK. So I know I've been spreading the word of "hybrid microservices". But I keep running into problems which make me wish that more of my functionality was in Microservices rather than hybrid.
Recent case: my connection to my cloud Postgres DB provider got just enough worse, for a period, that all of my connections were timing out. Yes, I could pay so that the DB doesn't need a cold start. But this isn't for me, and the purpose, at least initially, is to be able to offer something free.
When this connection failed it was causing the application to crash. That is absolutely my fault and can be easily fixed. But the service it was rendering was unrelated to the rest of the application and I shouldn't NEED to redeploy the whole thing just to address this one issue.
And so I shall split this out into a microservice.
But this post is to talk about how/why.
So, firstly, I do follow a hybrid microservices approach already. And that means that I have already made architecture decisions around many of the problems. I know how to build and add Microservices to my environment. I also have rough guidelines in place for what I will and will not split out into a Microservice.
In this case, the application is rather small, both in terms of complexity and volume of traffic. It is a bespoke solution for a single company. For that reason, I don't want to add communication complexity. I'm not doing Microservices for scalability reasons here; I'm doing it for maintainability. So, I avoid splitting off parts of the application where I would need a communication layer. The service in question is a background service responsible for syncing data offsite, managing backups and restoring data in the event of a failure. So, it meets my criteria.
Next, I already know how I deploy Microservices. This system already has 4 (2 UIs, the API and an API Gateway/Reverse Proxy). I already said I don't need to scale, so this is all just managed in Docker Compose.
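As a rough sketch (the service and image names here are invented for illustration, not my real setup, and tags are deliberately left off because I'll get to those below), the Compose file ends up looking something like this once the backup/sync work is pulled out into its own container:

services:
  admin-ui:
    image: my-company/admin-ui
  client-ui:
    image: my-company/client-ui
  api:
    image: my-company/api
  gateway:
    image: my-company/gateway
    ports:
      - "443:443"
  backup-sync:
    # the piece being split out: offsite sync, backups, restore
    image: my-company/backup-sync
    restart: unless-stopped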
However, if I did need to scale, I would still use Docker Compose for dev and testing and then Kubernetes for Production. Why? API and Version Management.
It occurred to me that one of the things a lot of companies fear with Microservices is "how do I define which versions should and do work together?" And the answer I use is "docker compose".
First things first, you will likely need multiple compose files for different environments because you will also want to use environment- and version-specific tags. And this is the first mistake a lot of people make: they either don't use tags, or they use the "latest" tag. That strategy is "ok" if your product suite is a single Docker image. But it falls apart quickly if you have multiple.
For me, the docker-compose file is like a meta package in NuGet. It should describe a specific configuration of versions which are either KNOWN TO WORK, or are simply being used in the pipeline to validate WHETHER THEY WORK. Ideally, from there you would have a tool which would automate the creation of Kubernetes YML files based on a known good configuration, to put into environments where scaling is required.
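Kompose is one existing tool along these lines. As a hedged sketch, assuming the compose file is already pinned to a known-good configuration, a command like this generates Kubernetes manifests from it (the file and folder names are just placeholders):

kompose convert -f docker-compose.prod.yml -o k8s/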
I also absolutely despise teams that try and use verbose tags but then update ALL images at the same time. To me, a good tag looks something like:
company-name/product-name:environment-version
So, something like: "microsoft/my-product:prod-1.0.2"
And then you make a "docker-compose.prod.yml" which describes which exact versions of the microservices should be loaded in the current production environment.
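To sketch what that file looks like (the names and version numbers are invented for the example), each service is pinned to the exact tag currently blessed for production:

# docker-compose.prod.yml
services:
  admin-ui:
    image: microsoft/my-product-admin-ui:prod-1.0.2
  client-ui:
    image: microsoft/my-product-client-ui:prod-1.0.4
  api:
    image: microsoft/my-product-api:prod-1.0.7
  gateway:
    image: microsoft/my-product-gateway:prod-1.0.1
  backup-sync:
    image: microsoft/my-product-backup-sync:prod-1.0.0

When one service gets a fix, only its line changes; everything else stays on the versions that were already verified together.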
I will make one concession here. IF (and only if) you have a proper process in place for vetting and testing, you can re-tag all images in the suite with a series of consistent names. But only if that series of consistent names includes ones which contain a version.
In that case, once something is done testing and ready to be released, maybe you take all tagged images that were verified for that release and give them all a series of tags like: release, release-1.0, release-1.0.12223.
And you will see a lot of docker repositories do this. This allows end users (or consumers who aren't end users but are responsible for deployments) to grab either the latest release version, the latest 1.0 release or, if a specific build needs to be tested, the exact version they are interested in.
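That re-tagging step is nothing exotic. As a rough sketch (image names and versions are invented, and this would be repeated for each image in the suite), it is just adding tags to the already-verified image and pushing them:

docker tag microsoft/my-product-api:prod-1.0.12223 microsoft/my-product-api:release
docker tag microsoft/my-product-api:prod-1.0.12223 microsoft/my-product-api:release-1.0
docker tag microsoft/my-product-api:prod-1.0.12223 microsoft/my-product-api:release-1.0.12223
docker push --all-tags microsoft/my-product-api   # on older CLIs, push each tag individually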
But, internally, while developing and testing you should maintain a docker-compose or equivalent file which shows which exact internal versions are used. And each team should be responsible for their own version and release schedules. This may mean that someone needs to own stitching appropriate application versions together, but this "meta-package" actually works pretty well and it means that other teams CAN still work to a different schedule without disrupting the overall release cycle.
I would also say that, since this is working at the scope of the entire product suite, this works even better than the meta-packages in things like NuGet or NPM. In those cases, your meta-packages are scoped at the dependency level and can have conflicts. Here, that shouldn't be the case. If there is a "conflict", it means that you have mixed incompatible service versions (which should be caught during testing, if not in development) and you simply downgrade the offending service(s) in the YML file and the conflict is resolved.
The other advantage of using versioned tags (and consequently a reason not to use un-versioned tags) is that you should not end up in a scenario where Docker already has a local image for that tag and doesn't pull the latest version from the repo on startup.
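With a mutable tag like "latest" you end up having to force that refresh yourself, something along these lines (docker compose v2 syntax; the older standalone binary uses docker-compose instead):

docker compose pull    # re-fetch images whose tags may have moved
docker compose up -d   # recreate containers from the freshly pulled images

With immutable, versioned tags the image on disk either matches the compose file or doesn't exist yet, so the pull step stops being something you can forget.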