Using RabbitMQ as your Data Access Layer

My recent pet project has been converting the application I wrote for my brother into a web application.

But, at the same time, I wanted to add some data redundancy and provide a mechanism to allow me to have both local and remote data.

Perhaps the critical part of this whole equation is that my brother doesn't run a huge distributed system. But even then, there's no reason this approach can't work at larger scale. I've read articles about essentially re-implementing much of what TCP/IP does on top of Rabbit to address things like duplicate messages, out-of-order delivery, and so on.

The fact is, message queues are used in plenty of mission-critical systems. So it is possible to work with them in such a way as to end up with reliable transmission of data.

I have a fairly lightweight handler for this myself, and I could even build in re-transmission and data de-duplication to make it more robust. It just isn't a concern at this scale or in this application: the queues aren't going to back up. They will sit empty most of the time, and there will only be 2-4 queues for the whole application.

The interesting bit is that it is working. And it is cool.

Currently I'm just working on the server side of things, but the proof of concept is already far enough along. I have a request queue which the server consumes. Each message deserializes into a header plus a payload, where the payload is a JSON string of another complex or simple data type.

The header provides the information to determine the activity. And the activity knows how to deserialize and act upon that data.
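To make that concrete, here is one way the envelope could be shaped. The post's code is C#; this is a language-agnostic sketch in Python, and the field names (`id`, `action`, `body`) are my guesses at a reasonable layout, not the actual message format.

```python
import json
import uuid

def make_request(action, payload):
    """Build a request envelope: a header identifying the activity,
    plus the payload serialized as its own JSON string."""
    return json.dumps({
        "id": str(uuid.uuid4()),      # unique id, echoed back in the response
        "action": action,             # tells the server which activity to run
        "body": json.dumps(payload),  # payload is a JSON string of another type
    })

msg = make_request("FindAll", {"entity": "Customer"})
envelope = json.loads(msg)
# envelope["action"] identifies the activity; envelope["body"] is itself
# a JSON string that only the matching activity knows how to deserialize.
```

Nesting the payload as a string inside the header is what lets the outer consumer route on the header alone, without knowing every payload type.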

Once complete, the result from the activity is written to a response queue. The client that posted the request then picks up its message and does what it wants with the data.
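The server side of that loop can be sketched as a dispatch table keyed by the header's action. This is an assumption-laden illustration, not the actual code: in-memory `queue.Queue` objects stand in for the Rabbit queues so the sketch runs without a broker, and the `ACTIVITIES` registry and its stand-in result are hypothetical.

```python
import json
import queue

# Stand-ins for the RabbitMQ request and response queues.
request_q, response_q = queue.Queue(), queue.Queue()

# Registry mapping header actions to activities; each activity knows
# how to deserialize its own payload and act on it.
ACTIVITIES = {
    "FindAll": lambda body: [{"id": 1, "name": "example"}],  # stand-in result
}

def serve_one():
    """Consume one request, dispatch by header, publish the result
    to the response queue tagged with the same id."""
    envelope = json.loads(request_q.get())
    activity = ACTIVITIES[envelope["action"]]
    result = activity(json.loads(envelope["body"]))
    response_q.put(json.dumps({"id": envelope["id"],
                               "body": json.dumps(result)}))

request_q.put(json.dumps({"id": "abc", "action": "FindAll", "body": "{}"}))
serve_one()
reply = json.loads(response_q.get())
# reply carries the original id, so the posting client can claim it.
```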

This all sounds a bit clunky, and that is where my handler comes in. I've built a wrapper around a lot of this using async calls, so the pages I'm writing just say something like
await handler.FindAll();
The FindAll() function creates a header with a unique id for an action "FindAll_", which maps to the FindAll action in my server-side code for that specific entity. It serializes the request to JSON, submits it to the request queue, and waits for a response that echoes back that id. It then deserializes the body of the response into an IList, and the async method returns.
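The trick that makes the async wrapper work is correlating responses back to waiting callers by that unique id. The post's handler is C#; this asyncio sketch shows the same shape under stated assumptions (a `publish` callable standing in for the request queue, a hypothetical `Handler` class, and a faked response since there is no broker here).

```python
import asyncio
import json
import uuid

class Handler:
    """Client-side wrapper: publish a request, then await the response
    that echoes back the same correlation id."""
    def __init__(self, publish):
        self._publish = publish   # callable that sends to the request queue
        self._pending = {}        # correlation id -> Future awaiting a reply

    async def find_all(self):
        req_id = str(uuid.uuid4())
        fut = asyncio.get_running_loop().create_future()
        self._pending[req_id] = fut
        self._publish(json.dumps({"id": req_id, "action": "FindAll", "body": "{}"}))
        reply = await fut         # resolved by on_response() when the echo arrives
        return json.loads(reply["body"])

    def on_response(self, raw):
        """Response-queue consumer callback: match by id, wake the awaiter."""
        reply = json.loads(raw)
        self._pending.pop(reply["id"]).set_result(reply)

async def demo():
    handler = Handler(publish=lambda raw: None)  # no broker: reply faked below
    task = asyncio.ensure_future(handler.find_all())
    await asyncio.sleep(0)        # let find_all register its pending future
    req_id = next(iter(handler._pending))
    handler.on_response(json.dumps({"id": req_id, "body": json.dumps([{"id": 1}])}))
    return await task

result = asyncio.run(demo())
```

The page code never sees the queues at all; it just awaits a method, exactly as the `await handler.FindAll();` line above suggests.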

Sure, the code might be a bit quicker if it went directly to the database, but not by a noticeable amount. And, like I said, in my case I'm leveraging this to distribute requests between a client and a server.

As I've been working through this, I've also had a number of ideas about how this architecture opens more doors than it closes. An example is sharding: you could convert any single DB into a cluster of sharded DBs. Writes go to a queue with a bunch of competing consumers, and whichever consumer gets the message first stores the data. Then, when a request for data comes in, every shard sees it, and the one (or ones) which have the data respond back.
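That write/read asymmetry can be simulated in a few lines. This is a toy sketch of the idea only: a round-robin cycle stands in for competing consumers on a work queue, and a simple broadcast loop stands in for the fanout query; real Rabbit delivery order would not be this tidy.

```python
import itertools

class ShardedStore:
    """Toy model: writes land on exactly one shard (whichever consumer
    'gets the message first'), reads are broadcast to every shard and
    whoever holds the key answers."""
    def __init__(self, n_shards):
        self.shards = [{} for _ in range(n_shards)]
        # Stand-in for round-robin delivery to competing consumers.
        self._next = itertools.cycle(range(n_shards))

    def write(self, key, value):
        self.shards[next(self._next)][key] = value

    def read(self, key):
        # Broadcast the query; shards holding the key respond.
        replies = [s[key] for s in self.shards if key in s]
        return replies[0] if replies else None

store = ShardedStore(3)
for i in range(6):
    store.write(f"k{i}", i)
# Writes spread evenly; any key is still readable via the broadcast.
```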

That is far more extreme than my example. But the idea which popped up was how to handle reports. I'm using web pages, so I'll probably just build reports as web pages. But I don't need to limit myself like that. The great part about using a middle layer like Rabbit is that I can build a client on any platform and in any language which supports RabbitMQ and JSON (de)serialization, which is a decently broad sampling. In fact, I can split my service into as many pieces as I want.

And I might. The UI is getting big and messy, and the other pieces I mixed in are getting in the way. I started with Razor pages, but I'm not really loving it. It pushes toward a very specific way of doing things, and I'm not sure I'm a huge fan of the approach. Also, refactoring.
