Logging is a problem that microservices architects always need to pay attention to. Each request runs across several services on different servers, and each service writes its logs on the server it runs on. Logs therefore end up scattered across several repositories, and tracing the sequence of events becomes difficult.
One solution is to use a log consolidation tool, such as Splunk or Loggly, to periodically replicate the logs from the different servers and rebuild the sequence of events. The problem with this approach is that it is not real-time: some log information is unavailable until replication completes. It is also not viable in a cloud environment, where services and servers are created dynamically based on load.

Kafka Based Logging Solution for Microservices

A more viable solution is to use Kafka as a message broker for the logs. This logging architecture is an implementation of data streaming in which the data is the log, so the approach is also called Log Streaming. In Log Streaming, each service sends its logs to Kafka instead of writing them to a local repository (a minimal producer sketch appears at the end of this section), and several subscribers consume the logs for various purposes. Unlike log consolidation through replication, Log Streaming is a real-time solution, which makes it better suited to today's digital platforms.

Log Subscriber

A typical subscriber feeds the log consolidation tool, which then provides a holistic, real-time analysis of system events. Beyond that, we can develop subscribers for dynamic analysis, which is particularly useful for ad hoc incidents. For instance, we can write a subscriber for security- or performance-related events, or one that monitors and acts on a specific event, such as more than three failed logins (see the subscriber sketch at the end of this section). This is a loosely coupled way to add functions to the system.

Network Congestion

One problem with the log streaming model is network congestion. As the number of services grows, huge amounts of logs are pumped into the network, and the resulting congestion can affect other Kafka traffic, e.g., delaying order placement. The solutions are to:
1. partition the network so that logs do not flood the entire network,
2. dedicate several Kafka servers to logging only, and
3. build a separate Kafka farm for log operations.
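To make the producer side concrete, here is a minimal sketch of a service-side log publisher, written in Java against the standard kafka-clients API (the article names no language, so this choice is an assumption). The broker address, the topic name "service-logs", and the log-line format are illustrative, not details from the article.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch of Log Streaming on the producer side: the service sends each
// log line to a Kafka topic instead of writing it to the local file system.
public class LogStreamer {
    private final KafkaProducer<String, String> producer;
    private final String topic;

    public LogStreamer(String bootstrapServers, String topic) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        this.producer = new KafkaProducer<>(props);
        this.topic = topic;
    }

    // Key by service name so all logs from one service land in the same
    // partition and keep their relative order.
    public void log(String serviceName, String level, String message) {
        String value = System.currentTimeMillis() + " " + level + " "
                + serviceName + " " + message;
        producer.send(new ProducerRecord<>(topic, serviceName, value));
    }

    public void close() {
        producer.close();
    }
}
```

A service would then call, for example, logs.log("order-service", "INFO", "order placed") wherever it previously wrote to a local log file.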
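On the consuming side, here is a matching sketch of a purpose-built subscriber, of the kind described above, that watches the same topic and raises an alert after more than three failed logins by the same user. The group id and the "LOGIN_FAILED <user>" message convention are assumptions chosen to line up with the producer sketch.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Sketch of a dedicated log subscriber: it consumes the "service-logs"
// topic and acts on one specific event, repeated login failures.
public class LoginFailureMonitor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("group.id", "login-failure-monitor");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        Map<String, Integer> failures = new HashMap<>();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("service-logs"));
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Assumed line format (from the producer sketch):
                    // <timestamp> <level> <service> LOGIN_FAILED <user>
                    String[] parts = record.value().split(" ");
                    if (parts.length >= 5 && "LOGIN_FAILED".equals(parts[3])) {
                        String user = parts[4];
                        int count = failures.merge(user, 1, Integer::sum);
                        if (count > 3) {
                            System.out.println("ALERT: user " + user
                                    + " failed login " + count + " times");
                        }
                    }
                }
            }
        }
    }
}
```

Because each subscriber runs in its own consumer group, monitors like this can be added or removed without touching the services that produce the logs, which is the loose coupling the article describes.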