Native queue support #53
## The Problem

First, I would like to make the problem we are trying to solve clearer. In my opinion, the problem has two parts: (1) essential and (2) tooling.

### Essential

The essential part is basically the nature of distributed systems: communication between services is not reliable, and we need mechanisms to work around that. To be more specific, let's take a look at a common scenario: synchronous calls to remote resources with no retry mechanism at all.

Given how reliable the current infrastructure is, developers seem to forget that this is a problem (as stated in the fallacies of distributed computing [1]) and don't bother to make their services resilient.

[1] https://architecturenotes.co/fallacies-of-distributed-systems/

### Tooling

Part of the reason developers don't care about reliability is the big investment in time and effort required to make a service resilient. To implement a retry mechanism today you need, for instance, to implement a producer/consumer pattern, configure a message queue, implement the retry logic, etc. This is a lot of work, and it is not trivial to do right.

## Solution

First, we need to remember that Herbs is positioned as a microservices library, so taking the distributed nature of the problem into account is a must. Since the essential part is a given, we need to provide tooling that makes it "cheap" to implement a retry mechanism, handling the complexity of distributed systems on the developer's behalf. To keep it simple, the solution should follow common patterns and best practices and integrate with the rest of the Herbs ecosystem.

### Queued Use Cases

The idea is to provide a way to implement a retry mechanism for use cases:

```javascript
const { usecase, queued } = require('herbsjs')
const { userQueue } = require('/src/infra/queues/userQueue.js')

queued(usecase('Create User', {
}), userQueue)
```

When the use case is executed, instead of running immediately, the request would be published to the queue. When the application starts, the consumers would be registered. The consumers would be responsible for processing the requests in the queue and calling the "real" use case.

Ex: let's say the developer wants to add a retry mechanism to an existing use case. The developer would only need to wrap the use case with `queued`:

```javascript
// no queued
const uc = usecase('Create User', {})

// queued
const uc = queued(usecase('Create User', {}), userQueue)
```

### Queued Steps

The idea is to provide a way to implement a retry mechanism for steps:

```javascript
const { usecase, step, queued } = require('herbsjs')
const { userQueue } = require('/src/infra/queues/userQueue.js')

usecase('Create User', {
    'Retrieve Info from CRM': queued(step({
    }), userQueue)
})
```

Basically, the same idea as the queued use cases, but with a few differences: executing the use case would publish the queued step's request (with its context data) to the queue instead of running the step inline, and later the consumer would run the step with that stored context.

### Backend - Producer/Consumer

Once we have the context data of a use case or step, it can be sent to a queue by a producer and processed by a consumer.
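A minimal, hedged sketch of the producer side of this flow — the in-memory `userQueue`, its `publish` method, and the shape of `queued`/`run` are all assumptions for illustration, not the real Herbs/buchu API:

```javascript
// A minimal in-memory queue standing in for Herbs2Redis/Herbs2RabbitMQ (assumption)
const userQueue = {
  messages: [],
  publish(message) { this.messages.push(message) }
}

// Stand-in for a Herbs use case: a named object with a run() method (assumption)
function usecase(description, body) {
  return { description, run: (request) => ({ ok: true, description, request }) }
}

// queued() keeps the same run() interface, but run() enqueues the request
// instead of executing the use case directly
function queued(uc, queue) {
  return {
    description: uc.description,
    run(request) {
      queue.publish({ usecase: uc.description, request })
      return { ok: true, queued: true }
    }
  }
}

const uc = queued(usecase('Create User', {}), userQueue)
const result = uc.run({ name: 'Jane' })
// the request is now waiting in the queue instead of being processed
```

Calling `uc.run(...)` here only enqueues the request; a consumer registered at application start would later pull the message and execute the "real" use case.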
The glue between Herbs and the message queue would be a library implementing the producer/consumer pattern. Ex: Herbs2Redis, Herbs2RabbitMQ, etc.

```
`queued()` function (use case or step) [Herbs]
                    |
                    v
producer [Herbs2Redis, Herbs2RabbitMQ, etc]
```

The same for the consumer:

```
consumer [Herbs2Redis, Herbs2RabbitMQ, etc]
                    |
                    v
call use case or step [Herbs]
```

## Security Considerations

Since authorization is checked when a use case is executed, the queued message must carry the context the consumer needs so that authorization can still be enforced when the "real" use case runs.

## Conclusion

With this proposal, we would provide a way to implement a retry mechanism for use cases and steps that tries to be as simple as possible, making resilience "cheap". And I would like to reinforce that this is not about decoupling, deployment, maintenance, etc.; it is about reliability.

This is a very rough idea of how we could implement a retry mechanism for use cases and steps. I'm sure there are areas I'm not considering, so I would like to hear your thoughts.

## Other Topics

### Pollyjs and local retry mechanisms

Pollyjs might be an improvement for the local retry mechanism, but I don't see it as a solution for the problem. Memory queues are not as reliable as message queues, and the retry mechanism would be limited to the application instance. An alternative would be to use […].

### Server Idempotency

Out of scope for this discussion, but it is important to mention that the server (the one receiving the request) should be idempotent: the request should be processed only once, even if the caller/"requester" retries multiple times.
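One common way to achieve such idempotency — sketched here under the assumption that each message carries a unique id; this is not part of the proposal itself — is to track processed message ids so a retried delivery runs the side effect only once:

```javascript
// Track which message ids have already been handled (assumed approach)
const processed = new Set()
let createdUsers = 0

function handleMessage(message) {
  if (processed.has(message.id)) return false // duplicate delivery: skip
  processed.add(message.id)
  createdUsers += 1 // the actual side effect runs exactly once
  return true
}

// Because the producer retries, the same message may be delivered twice
handleMessage({ id: 'msg-1', usecase: 'Create User' })
handleMessage({ id: 'msg-1', usecase: 'Create User' }) // retry of the same message
```

In a real deployment the `processed` set would live in durable storage shared by all consumer instances, not in process memory.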
**Is your feature request related to a problem? Please describe.**
A message queue is a form of asynchronous communication between services used in serverless and microservices architectures. Messages are stored in the queue until processed and deleted. Each message is processed only once, by a single consumer. Message queues can be used to decouple heavy processing, to store work in buffers or batches, and to evenly process peak workloads.
In modern cloud architectures, applications are decoupled into independent core components that are easier to develop, deploy, and maintain. Message queues provide communication and coordination capabilities for these distributed applications. Message queues can greatly simplify coding decoupled applications and increase performance, reliability, and scalability.
Generally, messages are small and can be items such as requests, responses, error messages, or just information. To send a message, a component called a producer adds a message to the queue. The message is stored in the queue until another component called the consumer retrieves the message and does something with it.
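The producer/consumer semantics described above — each message stored until retrieved, and processed by only one consumer — can be sketched with a plain array standing in for a real broker (all names here are illustrative only):

```javascript
// A plain array standing in for a message broker (illustration only)
const queue = []

// The producer adds a message to the queue
function produce(message) { queue.push(message) }

// The consumer retrieves and removes a message, so no other consumer sees it
function consume() { return queue.shift() }

produce({ type: 'request', body: 'create user Jane' })
produce({ type: 'request', body: 'create user John' })

const first = consume()
// 'first' has been removed from the queue; only one message remains
```

A real broker adds durability, acknowledgements, and redelivery on failure, but the single-consumer-per-message semantics are the same.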
**Describe the solution you'd like**
Considering the need to solve two main problems, I suggest we natively implement queue support in Herbs.
**Problem 1**

The unreliability of transmitting HTTP requests between endpoints. That is, using the queue as retry functionality for failed submissions (in a similar way to Polly: https://netflix.github.io/pollyjs/#/).
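The retry behavior described in problem 1 could look roughly like the sketch below; `withRetry` and `flaky` are hypothetical names, and Polly itself offers richer policies (backoff, circuit breaking) than this bare loop:

```javascript
// Retry a call up to `retries` extra times before giving up (sketch only)
function withRetry(fn, retries) {
  let lastError
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return fn() // success: return immediately
    } catch (err) {
      lastError = err // remember the failure and try again
    }
  }
  throw lastError // all attempts failed
}

// Simulate an unreliable endpoint that fails twice, then succeeds
let calls = 0
const flaky = () => {
  calls += 1
  if (calls < 3) throw new Error('network error')
  return 'ok'
}

const result = withRetry(flaky, 3)
```

A queue-backed retry differs in that failed work survives process restarts, which is exactly what an in-process loop like this cannot guarantee.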
**Problem 2**

The big bottleneck that massive data writes can create on the producer or consumer side. With a queuing system, each party can process requests at the pace its computational power allows.
**Additional context**
I imagine a solution implemented in some layer of buchu where I would mark whether a use case is a consumer or a producer.
The options could be a group of settings such as:

`producer: true, consumer: true, retries: X`
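A hypothetical shape for those options — none of these field names exist in buchu today; this only sketches how such settings might be interpreted when wiring a use case:

```javascript
// Assumed option names, mirroring the settings suggested above
const options = {
  producer: true,   // this use case publishes its requests to the queue
  consumer: true,   // this instance also registers a consumer for the queue
  retries: 3        // how many times a failed message is retried
}

// Sketch of how buchu might interpret such options for a use case (hypothetical)
function wireQueue(usecaseName, opts) {
  const plan = []
  if (opts.producer) plan.push(`publish ${usecaseName} requests`)
  if (opts.consumer) plan.push(`consume ${usecaseName} requests (up to ${opts.retries} retries)`)
  return plan
}

const plan = wireQueue('Create User', options)
```

Keeping these flags on the use case definition would let the same code run as producer in one deployment and consumer in another.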