Microservices – Rules of Engagement

Overview

Rules of Engagement is a set of principles and best practices worth following to ensure an efficient microservices environment. Treselle Systems captured these rules while going through the book “Microservices From Day One” by Cloves Carneiro Jr. and Tim Schmelmer. Guiding principles like these help the development team spend less time thinking about high-level architecture and more time writing business-related code that provides value to stakeholders. Check out how one of our microservice architecture implementations is in line with these Rules of Engagement.

The purpose of this blog is to share our experience with microservices as measured against the Rules of Engagement. Please note that the architecture and other items have been tweaked a bit to maintain confidentiality.

High Level Architecture Diagram

This architecture diagram is tweaked from the original but maintains the same level of capabilities. It covers most of the standard microservice requirements: service registry, service discovery, log management, containerization, mediation and routing, caching, loosely coupled and highly cohesive services, continuous integration and deployment, etc.

[Figure: AAP physical architecture diagram]

RULE 1: Consumer-Facing Applications Cannot Directly Touch Any Data Store

What is it?

“Consumer-facing applications” are interfaces used by an organization’s customers. They are usually, but not always, websites, administrative interfaces, and native mobile applications; there could even be public-facing APIs that you expose. Consumer-facing applications should be mere mashups of data retrieved from authoritative systems-of-record, and should never have a database or any other mechanism that persists state. Such systems-of-record will always be service applications with well-defined, versionable interfaces. An important side effect of this rule is that every feature in a consumer-facing application is implemented via service calls to microservices.

What did we do?

The View Model Layer (VML), together with a complementary API Gateway, serves exactly this purpose. The frontend application talks to the VML, which is responsible for interacting with the appropriate microservices through the API Gateway and mashing up the data. The VML knows which microservices to call for a given request; it is the public-facing API and never preserves any state or touches a persistent store. It performs marshaling and unmarshaling of request and response parameters, and relies on the API Gateway for mediation and routing, caching, request-response tracking, API management (thresholds, authentication, and authorization), and other operational responsibilities.
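
To make the shape of this concrete, here is a minimal sketch of a stateless VML endpoint. The Flask/requests stack, gateway host, and catalog/reviews service paths below are illustrative assumptions, not the actual VML implementation:

```python
# Minimal sketch of a stateless VML mashup endpoint. Flask/requests and
# the service paths are assumptions for illustration only.
import requests
from flask import Flask, jsonify

app = Flask(__name__)
API_GATEWAY = "https://gateway.internal"  # hypothetical API Gateway host

@app.route("/products/<product_id>/summary")
def product_summary(product_id):
    # Mash up two systems-of-record through the gateway; no local
    # database, no persisted state anywhere in this layer.
    product = requests.get(
        f"{API_GATEWAY}/catalog/v1/products/{product_id}", timeout=2).json()
    reviews = requests.get(
        f"{API_GATEWAY}/reviews/v1/products/{product_id}", timeout=2).json()
    return jsonify({"product": product, "reviews": reviews})

if __name__ == "__main__":
    app.run(port=8080)
```

Note that the endpoint holds no state of its own; if a downstream call fails, the error surfaces to the caller rather than being papered over with locally persisted data.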

RULE 2: No Service Accesses Another Service’s Data Store

What is it?

This principle is mainly about implementation encapsulation; data stores are singled out because that is where we have most often seen this rule broken, causing all sorts of implementation leakage that ends up slowing down service evolution. All inter-service interactions should happen only through well-defined, versioned APIs. While a service has ownership of the data for which it is itself the system-of-record (including direct read and write access), it can access other information only via the authoritative services that own it.

What did we do?

Most of the microservices use their own persistent store chosen for the use case: Elasticsearch for search capabilities, MongoDB for geospatial and usage-metrics capabilities, Neo4j for graph relationships and metadata management, and MySQL for other use cases. Even though a few services share the same MySQL instance, their schemas are separated by bounded context using domain-driven design concepts. Where one service needs another’s data, it goes through that service’s API, as sketched below.
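
A minimal sketch of the rule, using a hypothetical orders service that needs customer data (the service URL and field names are assumptions):

```python
# Sketch: the orders service owns its own store, but customer data
# belongs to the customer service, so it is fetched through that
# service's versioned API rather than by querying its database directly.
import requests

CUSTOMER_SVC = "http://customer-service.internal/v1"  # hypothetical URL

def enrich_order(order: dict) -> dict:
    # Anti-pattern: SELECT * FROM customer_db.customers WHERE id = ...
    # Instead, go through the owning service's API:
    resp = requests.get(
        f"{CUSTOMER_SVC}/customers/{order['customer_id']}", timeout=2)
    resp.raise_for_status()
    order["customer"] = resp.json()
    return order
```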

RULE 3: Invest in Making Spinning up New Services Trivial

What is it?

With a growing number of deployable units (that is, services), it’s important to be serious about automating (DevOps) all aspects of your infrastructure. In a perfect scenario, most code changes would kick off a build that runs a suite of automated tests; on a successful build, the change would be deployed to a shared development environment and later promoted to production without manual intervention at each step of the way. When a developer or development team decides that a new service is required, many steps are needed to get that new service live, and any friction in performing them will motivate developers to bunch new APIs into existing services, defeating the whole purpose of the loosely coupled, highly cohesive service paradigm. Some of those steps are:

  • Defining the language stack to be used based on requirements
  • Creating a repository
  • Defining a data store
  • Setting up continuous integration
  • Defining deployment
  • Deploying to development/staging/production
  • Setting up monitoring and logging

What did we do?

Most of the routine tasks, from the frontend to the VML to the backend microservices, are automated using Jenkins plugins, since Jenkins has connectors to Git, Slack, Swagger, AWS services, Docker, and more. Jenkins jobs can be extended with Groovy code and automated with the built-in scheduling mechanism. A complete Git branching strategy was developed, with continuous integration building each change and deploying it as a test container. On production instances, blue-green deployment was implemented by using consul-template to switch the Nginx configuration on the fly, with the ability to roll back if there is any problem with the deployment, as sketched below.
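
A rough sketch of that flip, assuming consul-template watches a Consul KV key (the key name is hypothetical) and re-renders the Nginx upstream configuration whenever it changes:

```python
# Hypothetical blue-green switch: writing a Consul KV key triggers
# consul-template to re-render nginx.conf and reload Nginx, so traffic
# moves to the other color; rollback is writing the old color back.
import requests

CONSUL = "http://localhost:8500"

def switch_traffic(color: str) -> None:
    assert color in ("blue", "green")
    requests.put(f"{CONSUL}/v1/kv/deploy/active_color",
                 data=color, timeout=2).raise_for_status()

switch_traffic("green")   # promote the freshly deployed containers
# switch_traffic("blue")  # roll back if the new deployment misbehaves
```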

Spinning up a new service is templatized using Docker Compose with a base Docker image that takes care of routine tasks such as setting up the repository, connecting to log management, and self-registering the container with Consul via Registrator.
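
Registrator performs the Consul registration automatically by watching the Docker socket; for reference, the equivalent manual call against Consul’s HTTP API looks roughly like this (the service name, port, and health-check path are assumptions):

```python
# What Registrator does on a container's behalf, expressed as a direct
# call to the local Consul agent's HTTP API.
import requests

def register_with_consul(name: str, port: int,
                         agent: str = "http://localhost:8500") -> None:
    payload = {
        "Name": name,
        "ID": f"{name}-{port}",
        "Port": port,
        # Consul polls this endpoint to decide if the instance is healthy.
        "Check": {"HTTP": f"http://localhost:{port}/health",
                  "Interval": "10s"},
    }
    requests.put(f"{agent}/v1/agent/service/register",
                 json=payload, timeout=2).raise_for_status()

register_with_consul("search-service", 8081)
```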

[Figure: CI and CD pipeline workflow]

RULE 4: Build Services in a Consistent Way

What is it?

All the services we build should behave consistently in many respects. In the perfect scenario, all services will:

  • Communicate using the same protocol and format, for example, JSON over HTTP
  • Register themselves with the service registry and be discoverable
  • Define APIs in a consistent manner
  • Log request access using a standard log format
  • Be monitored with the same tools
  • Follow a proper caching strategy
  • Use a consistent testing mechanism
  • Have well-laid-out API versioning
  • Be independently developed and deployable
  • Be highly cohesive and loosely coupled
  • Run in a contained runtime environment

What did we do?

  • All services, including the backend microservices and the VML mashup services, exchange information using JSON over HTTP(S).
  • Consul agents and Registrator, together with the Consul server, provide consistent service registration and discovery, which also serves the service-monitoring capabilities (see the discovery sketch after this list).
  • All APIs follow proper REST principles, using the GET, POST, PUT, HEAD, and DELETE verbs. Most of these services are intentionally made read-intensive for high scalability and throughput; the heavy writes happen in the data pipeline layer, which is not part of the microservice architecture.
  • All services attach a correlation ID to the request parameters, which gets logged via Filebeat, Logstash, and Elasticsearch and can be visualized in Kibana (see the logging sketch after this list).
  • API management via the API Gateway and CloudWatch logs are used for monitoring, along with the Consul server for service health.
  • The caching strategy was implemented using ElastiCache with appropriate TTLs and a warm-up process.
  • A five-layer testing strategy is implemented for a consistent testing mechanism.
  • Each service is continuously integrated and independently deployed on demand using a Jenkins pipeline integrated with Git via hooks.
  • Each service is deployed into its own Docker container for a contained runtime environment, and a base Docker image is reused in many places for consistency.
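
As a sketch of the discovery half, a client can ask Consul’s health API for passing instances of a service and pick one (the service name is hypothetical; a production client would cache results and retry):

```python
# Discover healthy instances of a service through Consul's health API;
# only instances whose health checks pass are returned.
import random
import requests

def discover(service: str, agent: str = "http://localhost:8500") -> str:
    entries = requests.get(f"{agent}/v1/health/service/{service}",
                           params={"passing": "true"}, timeout=2).json()
    entry = random.choice(entries)  # naive client-side load balancing
    svc = entry["Service"]
    # Service.Address may be empty if the service registered without
    # one; fall back to the node's address in that case.
    host = svc["Address"] or entry["Node"]["Address"]
    return f"http://{host}:{svc['Port']}"

print(discover("search-service"))
```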
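
And a sketch of the correlation-ID convention: each service echoes the incoming ID (or mints one at the edge) into every log line, one JSON object per line, which the Filebeat/Logstash pipeline can ship and index without extra parsing rules. The header name below is a common convention, assumed here:

```python
# Correlation-ID logging sketch: one JSON object per log line so the
# Filebeat -> Logstash -> Elasticsearch pipeline can index it directly,
# letting Kibana stitch a whole call chain together by ID.
import json
import logging
import sys
import uuid

logging.basicConfig(stream=sys.stdout, level=logging.INFO,
                    format="%(message)s")
log = logging.getLogger("service")

def handle_request(headers: dict) -> str:
    corr_id = headers.get("X-Correlation-Id") or str(uuid.uuid4())
    log.info(json.dumps({"correlation_id": corr_id,
                         "event": "request_received"}))
    return corr_id  # pass along in headers of downstream service calls

handle_request({"X-Correlation-Id": "abc-123"})
```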

RULE 5: Define API Contracts in Code

What is it?

An API is generally defined by its inputs and outputs, which should ideally be described in an artifact that computers and other developers can parse. The API definition must be clearly specified so that clients of that API can validate and test it unambiguously. All services expose and consume data via APIs, so documenting them is a necessity: at every point in time, there should be a clear way of describing each endpoint in detail.

What did we do?

Swagger is used extensively for setting API contracts between the frontend and the VML, and between the VML and the backend microservices. Swagger documents the APIs and specifies validation rules that can easily be hooked into the Jenkins pipeline for API testing, as sketched below.
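
A minimal sketch of that pipeline hook, assuming the response schema has been lifted from the service’s Swagger definition (the schema fragment and endpoint below are hypothetical):

```python
# Contract check sketch: validate a live response against the response
# schema declared in the service's Swagger/OpenAPI document.
import requests
from jsonschema import validate

product_schema = {  # hypothetical fragment taken from the Swagger spec
    "type": "object",
    "required": ["id", "name"],
    "properties": {
        "id": {"type": "string"},
        "name": {"type": "string"},
    },
}

resp = requests.get("http://localhost:8080/catalog/v1/products/42",
                    timeout=2)
resp.raise_for_status()
validate(instance=resp.json(), schema=product_schema)  # raises on drift
```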

[Figure: Swagger API documentation]

References

Cloves Carneiro Jr. and Tim Schmelmer, Microservices From Day One. Apress, 2016.