Following up on the previous post, we have been thinking about how to monitor our microservices, and here is our proposal:
- Data Collector (fluentd). Instead of using the common ELK Stack (elasticsearch, logstash, kibana), also known as the Elastic Stack, we replace logstash with fluentd. There are some comparisons like this one. logstash has a limited in-memory queue, so you would need to install a message broker, such as Redis, RabbitMQ or Apache Kafka, to support a buffered publish-subscribe model, or enable its persistent queues, which have limitations. Each microservice logs to the console, and we'll use the Docker fluentd logging driver to process these logs and send them to our elasticsearch.
- Distributed tracing system (Zipkin). It helps gather the timing data needed to troubleshoot latency problems in microservice architectures, managing both the collection and lookup of this data. We'll store the information in our elasticsearch. Each microservice will use Spring Cloud Sleuth to trace each request, maintaining a unique identifier that is propagated across calls. Spring Cloud Sleuth will send the data to Zipkin.
- Distributed Search and Analytics (elasticsearch). We'll store our data in elasticsearch, so we will be able to search our logs for anything we need.
- Explore, Visualize and Discover data (kibana). We’ll visualize our stored data using kibana.
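The Docker fluentd logging driver piece can be sketched like this (a minimal example; the fluentd address, tag and image name are illustrative assumptions, not from this post):

```shell
# Hypothetical example: run one of our microservices so that its console
# output is shipped by the Docker fluentd logging driver to a fluentd
# instance listening on localhost:24224.
docker run -d \
  --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag="docker.{{.Name}}" \
  my-microservice:latest
```

From there, fluentd can forward each log event to elasticsearch using its elasticsearch output plugin, so kibana can query them.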
At UST Global we are reviewing some microservice architectures which will help us support our customers. We selected these three:
Netflix OSS Architecture
We will use the Netflix OSS (Netflix Open Source Software) components:
- Configuration Service (Spring Cloud Config). Each Spring Boot application connects to the Configuration Service to get its configuration. Each application's configuration is stored in a version control system (Git), in order to maintain a record of all modifications.
- Gateway (Netflix/Zuul). The Gateway receives all external requests and proxies them to the appropriate microservice based on the configuration (which service should handle the request and where it should be directed) and the registration (where the service is located).
- Service Registration & Discovery (Netflix/Eureka). Each microservice registers itself, allowing other microservices to discover and invoke it.
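As a rough sketch, the pieces above map to a few well-known Spring Cloud properties (service names, hosts and ports here are illustrative assumptions, not from this post):

```properties
# Hypothetical configuration for a Spring Boot microservice in this setup.

# Fetch configuration from the Spring Cloud Config server at startup
spring.application.name=users-service
spring.cloud.config.uri=http://config-server:8888

# Register with Eureka so the gateway and other services can discover us
eureka.client.serviceUrl.defaultZone=http://eureka:8761/eureka/

# On the Zuul gateway: route /users/** to the registered users-service
zuul.routes.users.path=/users/**
zuul.routes.users.serviceId=users-service
```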
We will use the following architecture:
- Configuration & Service Discovery (Consul.io). Each microservice connects to the Configuration Service to get its configuration. This server is also used to discover other microservices.
- Gateway (nginx + Consul Template). The Gateway receives all external requests and proxies them to the microservice based on the registration (where the service is located). As an alternative, we could use linkerd as the gateway.
- Service Registration (Registrator). Each Docker container is registered in Consul by Registrator.
If we did not use Consul as DNS for other services (like databases, …), we would not need Registrator; instead, we would let each microservice register itself with Consul.
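The Registrator piece can be sketched as follows (a minimal example; the Consul address is an assumption):

```shell
# Hypothetical sketch: Registrator watches the Docker socket and registers
# every published container port as a service in the local Consul agent.
docker run -d \
  --name=registrator \
  --net=host \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest \
  consul://localhost:8500
```

With services registered this way, Consul Template can render the nginx upstream blocks whenever the service catalog changes.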
API Gateway Architecture
We are trying Tyk and have also made two pull requests to reduce the Docker image sizes for Tyk Gateway and Tyk Dashboard. Tyk offers an API management platform with an API Gateway, API analytics, a developer portal and an API Management Dashboard.
Since this API Gateway includes service discovery, we would not need a separate proxy/gateway (nginx + Consul Template or Netflix/Zuul):
- Service Registration & Discovery (Consul.io). Each microservice connects to the Configuration Service to get its configuration. This server is also used to discover other microservices.
- Gateway (Tyk). The Gateway receives all external requests and proxies them to the microservice based on the registration (where the service is located).
I just read this post on DZone (this is the original) about “API Development: Design-First or Code-First?” and I do not agree with the second sentence in “The design-first approach advocates for designing the API’s contract first before writing any code. This is a relatively new approach“.
People talk about APIs as if they were born recently, but we have been integrating systems for a long time, so I think we have been designing APIs for those integrations all along, don’t you think? You can ‘google’ for “WSDL-first” or “Contract-First” and you will find plenty of results about “design-first” from the days when we built SOAP APIs.
I personally prefer “Design-first” for several reasons:
- It’s not just about designing an API; it’s about your “Company API Strategy“. It’s about how people will perceive the way you expose your “business”.
- Server and client sides can be built at the same time; the contract is not implementation-aware.
- If you adopt “Code-first”, you have to be careful with the naming conventions of the classes you declare. For instance, if you use the “DTO” suffix convention, that suffix may end up exposed in your API, and it sucks.
Related to the previous post, here are several observations that I will develop throughout the post:
- Don’t reinvent the wheel if all our services are REST and developed by us with Spring.
- Obtain the identifying information of each request so we can search for it.
- A single repository for traces.
Continue reading “Trazabilidad en las peticiones (II)”
In this world where every application/service has to communicate with so many others, it is crucial to maintain traceability across services, so that we can review the logs, tell requests/responses apart, and retrieve those that correspond to a specific request.
To correlate requests, we need a unique identifier that is propagated in every request to the rest of the applications/services, and that these services include in every log line they write related to that request.
Continue reading “Trazabilidad en las peticiones”