Software Engineering Asked on October 29, 2021
Some design thoughts on an application that is mostly event-driven, running on Kubernetes with Docker.
The application is a single-page web application. It shows updates from the back-end, such as the current stock of an item in the inventory. It is fairly real-time: if an item is sold, its stock is refreshed dynamically in the UI.
Given this, if the back-end service is not available to send updates to the user interface, a message has to be shown in the UI indicating that updates are currently unavailable.
To address this, the current idea is:
Implement a lightweight container in the same pod (like a sidecar). This container periodically checks the availability of the service by probing a URL of the service container. While the service is available, it sends a heartbeat event. If the service is down, it sends a "DOWN" event that is streamed to the client side. The client renders a message based on this, and starts watching for an "UP" event to remove the message.
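A minimal sketch of the sidecar's check loop. The `check` probe and the `emit` transport are assumptions here; in practice `check` would hit the service container's health URL and `emit` would push the event over SSE or a WebSocket to the client:

```typescript
// Sketch of the sidecar heartbeat loop described above.
// check() and emit() are injected so the transport stays out of the logic:
// check() probes the service URL, emit() streams the event to clients.
type Status = "UP" | "DOWN";

class HeartbeatWatcher {
  private last: Status | null = null;

  constructor(
    private check: () => Promise<boolean>,   // probe the service container
    private emit: (s: Status) => void,       // stream the event to clients
  ) {}

  // Run one probe cycle; emit only on status transitions,
  // so clients are not flooded with identical heartbeats.
  async tick(): Promise<void> {
    let healthy = false;
    try {
      healthy = await this.check();
    } catch {
      healthy = false;                       // a network error counts as DOWN
    }
    const status: Status = healthy ? "UP" : "DOWN";
    if (status !== this.last) {
      this.emit(status);
      this.last = status;
    }
  }
}
```

Emitting only on transitions keeps the client logic simple: it reacts to "DOWN" by showing the banner and to "UP" by removing it.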
This approach could be implemented for each such back-end service, but the implementations would be very similar across the board, which violates DRY and invites other corner cases.
Is there a standard pattern for this situation? I was thinking of leveraging a monitoring solution like Prometheus and using a webhook to send the "UP" and "DOWN" events, but there is still some clutter on the client side.
Any other ideas for this?
In general, when you are working with a microservices architecture, each service should assume that any other service it communicates with may be down at any given moment. Therefore, the frontend should be able to handle communication problems, like a REST endpoint being unavailable or a pipe being broken. If a frontend instance cannot reach any backend, it can update the UI and simply retry. A sidecar is not required for this. You can keep your code DRY by wrapping your calls in a function that handles failures. When using a stateful connection like a socket, there should be event handlers for broken connections.
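The wrapper idea can be sketched like this. The `onUnavailable`/`onRecovered` callback names and the retry parameters are illustrative, not part of any framework; in a real app the callbacks would toggle the "updates unavailable" banner:

```typescript
// DRY failure handling: every backend call goes through this one wrapper,
// which shows the banner on the first failure and hides it on recovery.
async function callWithRetry<T>(
  fn: () => Promise<T>,            // the actual backend call
  opts: {
    retries: number;               // how many times to retry before giving up
    delayMs: number;               // pause between attempts
    onUnavailable: () => void;     // e.g. show the "updates unavailable" banner
    onRecovered: () => void;       // e.g. hide the banner again
  },
): Promise<T> {
  let bannerShown = false;
  for (let attempt = 0; ; attempt++) {
    try {
      const result = await fn();
      if (bannerShown) opts.onRecovered();
      return result;
    } catch (err) {
      if (attempt >= opts.retries) throw err;   // give up after the last retry
      if (!bannerShown) {
        opts.onUnavailable();
        bannerShown = true;
      }
      await new Promise((r) => setTimeout(r, opts.delayMs));
    }
  }
}
```

Because the banner logic lives in one place, each feature only supplies its own `fn` and stays free of availability handling.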
The tricky part is when a given service has multiple instances (a necessity if high availability is a concern). In that case, you would need to check the status of each pod and only route to those that are currently up. Kubernetes liveness/readiness probes are the way to do this. In short, Kubernetes probes a URL and decides whether the pod is healthy (2xx response code) or not. Unhealthy pods will not receive any requests, and will be restarted if they do not recover within the configured time. The frontend then just checks whether there is a reachable backend at any given time.
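The probes described above are declared in the pod spec; a sketch, where the container name, image, `/healthz` path, port, and timing values are all illustrative rather than prescriptive:

```yaml
# Illustrative liveness/readiness probes on a backend container.
containers:
  - name: backend
    image: example/backend:latest     # hypothetical image
    ports:
      - containerPort: 8080
    readinessProbe:                   # gate traffic until the pod is ready
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 5
      failureThreshold: 3
    livenessProbe:                    # restart the container if it stops responding
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
```

The readiness probe removes a pod from the Service's endpoints while it is unhealthy, which is what makes "only connect to pods that are up" automatic from the frontend's point of view.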
Answered by Ali Rasim Kocal on October 29, 2021