Microservice architectures enforce clearer, more pronounced internal boundaries between a system's components than monolithic applications typically do. Testing strategies can take advantage of this: more places and methods to test become available. Because the application is composed of services with clear boundaries, it is much more evident which layers can be tested in isolation.
Testing is a fundamental part of the Software Development Life Cycle and requires specialized skills, processes, tools, and tight integration with the development team. However, designing, following, and managing a structured test process is a challenge for most businesses. Whether you lead an R&D, IT, or Product Management division, our experience has been that testing or QA is often treated as an afterthought. Not because you want it that way, but because other activities usually take higher priority or claim a bigger share of the budget. This can result in serious quality issues that undermine all the hard, focused work the developers invested in the first place. We strongly believe that testing should be part of the development process, not something tacked on at the end or as an afterthought.
We can help. Our QA teams have worked with clients on testing new application development, IT projects, technology re-architectures, mobile applications, and more.
Full cycle testing covers all aspects of functional, non-functional, unit, integration, performance, and regression testing, performed both manually and with open source automation tools and technologies such as Selenium, SoapUI, Gatling, Supertest, Nightwatch, Cucumber, and custom programming as needed. Different testing strategies must be applied to monolithic and microservice-based architectures.
- Testing for layered or monolithic architecture: the V-model verification and validation process applies, pairing a testing mechanism with each stage as follows:
- A unit test plan, prepared during the detailed low-level design phase, to perform unit testing
- An integration test plan, prepared during the high-level design phase, to perform integration testing
- A system test plan, prepared during the functional specification phase, to perform system testing
- An acceptance test plan, prepared during the requirement analysis phase, to perform acceptance testing
- Testing for microservice architecture: the pyramid model is more effective for testing microservices because their decentralized nature provides loose coupling and high cohesion. This model involves testing at different stages as follows:
- Unit testing is performed on the individual functions of a single service
- Contract testing is performed on each microservice independently, treating it as a black box
- Integration testing is performed on multiple services that must interact with each other to fulfill a piece of functionality
- End-to-end service testing is performed across multiple services and includes all external interactions such as databases and third-party APIs
- UI/functional testing is performed on the whole system, exercising it as an end user would
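The contract-testing stage above can be sketched in a few lines: the service is treated as a black box, and its response is validated against the contract the consumer expects. This is a minimal illustration; the service name, fields, and payload below are hypothetical, and in a real suite the response would come from an HTTP call against a deployed service rather than a local dict.

```python
# Minimal consumer-driven contract check against a hypothetical
# "store-service" response. In practice the payload would be fetched
# over HTTP; here a local dict stands in to keep the sketch runnable.

CONTRACT = {
    "store_id": int,
    "name": str,
    "open": bool,
    "location": dict,
}

def validate_contract(payload: dict, contract: dict) -> list:
    """Return a list of violations; an empty list means the contract holds."""
    violations = []
    for field, expected_type in contract.items():
        if field not in payload:
            violations.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            violations.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return violations

# Simulated response from the black-box service under test.
response = {"store_id": 42, "name": "Downtown", "open": True,
            "location": {"lat": 34.05, "lng": -118.24}}

assert validate_contract(response, CONTRACT) == []          # contract honoured
assert validate_contract({"name": 7}, CONTRACT) != []       # violations caught
```

Because each side only checks the fields it actually consumes, contract tests stay fast and independent of other services, which is what lets them sit below integration tests in the pyramid.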
Products and applications are being built using polyglot persistence, combining multiple SQL and NoSQL databases to solve a use case. Testing such diverse persistence storage systems requires good knowledge of SQL to write complex queries against RDBMSs, as well as the proprietary query languages of NoSQL databases. The following tools and query languages across different databases need to be mastered to perform high quality testing:
- MySQL/Oracle/Postgres: testing using complex SQL queries; validating stored procedures, triggers, functions, and views; checking constraints and referential integrity; and verifying the database schema against the reference model for data types, data lengths, character sets, etc.
- NoSQL: the MongoDB query language to verify JSON documents and collections, CQL for Cassandra, Cypher QL for Neo4j, OrientDB SQL to verify nodes and relationships in OrientDB, Lucene queries to verify documents in ElasticSearch, and others
- Tools: tools such as Compass, Metabase, Talend, Zeppelin, and Jupyter, and programming languages like Python and Java, are extremely useful when dealing with hybrid database systems
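The referential-integrity checks mentioned for the RDBMS side can be expressed as plain SQL assertions. The sketch below uses Python's built-in sqlite3 so it is self-contained; in a real engagement the same query would run against MySQL, Oracle, or Postgres, and the table and column names here are hypothetical.

```python
import sqlite3

# Self-contained referential-integrity check. sqlite3 stands in for
# MySQL/Oracle/Postgres; table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stores (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, store_id INTEGER);
    INSERT INTO stores VALUES (1, 'Downtown'), (2, 'Uptown');
    INSERT INTO orders VALUES (10, 1), (11, 2), (12, 99);  -- 99 is an orphan
""")

# Orphan check: orders whose store_id has no matching row in stores.
orphans = conn.execute("""
    SELECT o.id FROM orders o
    LEFT JOIN stores s ON s.id = o.store_id
    WHERE s.id IS NULL
""").fetchall()

assert orphans == [(12,)]  # the deliberately broken row is caught
```

The same LEFT JOIN / IS NULL pattern scales to any parent-child pair, which makes it a convenient building block for automated integrity suites.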
Big Data testing is the most complex testing process due to the huge volume of data flowing through the system, processed by multiple tools and technologies from ingestion to analytics. It is very important to understand the source of the data, the multiple transformation and filtering processes it goes through, and the data lineage graph; to identify the different calculations and aggregations performed on the data; and to have a data sense for outliers. Appropriate open source tools and technologies such as Talend, Zeppelin, and Jupyter, along with decent query abilities in Hive and SparkSQL, are necessary to perform Big Data testing that covers the entire data pipeline, as below:
- Big Data Infrastructure Testing
- Big Data Governance Testing
- Big Data Ingestion Testing
- Big Data Profile Testing
- Big Data Extract, Load & Transform (ELT) Testing
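The profile and ELT stages above boil down to reconciling counts, distinct keys, and null rates between the source and the transformed target. In production these checks would be Hive or SparkSQL queries over the real pipeline; the toy batch and field names below are hypothetical, just to make the pattern concrete.

```python
# Sketch of a profile/ELT reconciliation on a toy batch. In practice
# these checks run as Hive or SparkSQL queries; record layout is hypothetical.

source_rows = [
    {"id": 1, "amount": "10.5"},
    {"id": 2, "amount": "20.0"},
    {"id": 3, "amount": None},     # null the pipeline should filter out
    {"id": 3, "amount": "20.0"},   # duplicate id the pipeline should dedupe
]
target_rows = [
    {"id": 1, "amount": 10.5},
    {"id": 2, "amount": 20.0},
    {"id": 3, "amount": 20.0},
]

def profile(rows, key="id"):
    """Basic data profile: row count, distinct keys, and null amounts."""
    ids = [r[key] for r in rows]
    return {
        "count": len(rows),
        "distinct": len(set(ids)),
        "nulls": sum(1 for r in rows if r["amount"] is None),
    }

src, tgt = profile(source_rows), profile(target_rows)

# Reconciliation: the target should hold one clean row per distinct source id.
assert tgt["count"] == src["distinct"]
assert tgt["nulls"] == 0
```

Keeping the profile as a small dict makes it easy to log per-run and diff across pipeline stages, which is also how outliers in the lineage graph tend to surface.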
Continuous development, integration, and deployment with containers on the cloud, following Agile iterative principles, is the proven combination for faster time to market. Testing many moving parts at a fast pace within sprint cycles of less than two weeks requires additional testing strategies on top of regular full cycle testing, such as:
- Test the infrastructure & network topology and validate against server operational documents
- Automate scripts using AWS CLI & SDK to check various attributes and operations such as network in/out, S3 Object TTL, EBS snapshots, AMI frequency, and auto scaling
- Test user authentication & authorization by accessing AWS services based on IAM users
- Download and test Docker container images on multiple OS across multiple deployment environments
- Run the Docker console and verify that container links and interactions work as expected
- Run builds using Jenkins and validate CI & CD are working as expected
- Test blue-green deployments for continuous deployment with near zero downtime
- Test feature flags to make sure the same application behaves differently for different user groups
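The feature-flag check in the last bullet can be automated with a handful of assertions: each user cohort is exercised against the flag configuration and the observed behaviour compared with the intended one. The flag name, groups, and routing function below are hypothetical; a real suite would read the flag state from whatever flag service the deployment uses.

```python
# Sketch of a feature-flag test: verify the same application behaves
# differently per user group. Flag names and groups are hypothetical.

FLAGS = {
    "new_checkout": {"beta", "internal"},   # groups with the flag enabled
}

def is_enabled(flag: str, group: str) -> bool:
    return group in FLAGS.get(flag, set())

def checkout_flow(group: str) -> str:
    """Route the user to the v2 checkout only when their group has the flag."""
    return "v2" if is_enabled("new_checkout", group) else "v1"

# Verify each cohort sees the intended behaviour.
assert checkout_flow("beta") == "v2"
assert checkout_flow("internal") == "v2"
assert checkout_flow("public") == "v1"
assert checkout_flow("unknown-group") == "v1"  # default is the stable path
```

The same table-driven shape works for blue-green cutover checks: run the suite once against each environment and diff the results.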
- Perform quality checks on retail store data between source data in MongoDB and transformed Master Data in MySQL.
- Perform quality checks between Master Data in MySQL and JSON cache files for accuracy.
- Perform random checks on large retail stores plotted on Google maps and compare against the API payload response.
- Capture screenshot of Google maps with markers at random to check the accuracy of retail store marker plots.
- Generate dashboard with different reports such as total stores, open/close stores, history of stores over the past few months, and others between source data in MongoDB and master data in MySQL for internal curators to perform sanity checks.
- Provide confidence to the business users that data persisted across systems such as MongoDB, MySQL, and Cache file systems, and the data plotted on Google maps are in sync.
The Treselle QA team implemented polyglot test programming using different languages and tools, such as Python with PyMongo to connect to MongoDB, the DBUnit testing framework, Selenium with SoapUI and PhantomJS, Metabase visualization, and others.
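The core of the MongoDB-to-MySQL quality check described above is a per-record reconciliation between source documents and transformed master rows. The sketch below shows that shape; in the actual engagement PyMongo queried live collections and MySQL held the master data, whereas here in-memory records stand in, and the field names and status mapping are hypothetical.

```python
# Sketch of the MongoDB-to-MySQL reconciliation. In-memory records stand
# in for live PyMongo queries and MySQL rows; field names are hypothetical.

mongo_docs = [  # source store documents
    {"_id": "s1", "name": "Downtown", "status": "open"},
    {"_id": "s2", "name": "Uptown", "status": "closed"},
]
mysql_rows = [  # transformed master data
    {"store_id": "s1", "name": "Downtown", "is_open": 1},
    {"store_id": "s2", "name": "Uptown", "is_open": 0},
]

def reconcile(docs, rows):
    """Return ids whose master row is missing or disagrees with the source."""
    by_id = {r["store_id"]: r for r in rows}
    bad = []
    for d in docs:
        row = by_id.get(d["_id"])
        expected_open = 1 if d["status"] == "open" else 0
        if (row is None
                or row["name"] != d["name"]
                or row["is_open"] != expected_open):
            bad.append(d["_id"])
    return bad

assert reconcile(mongo_docs, mysql_rows) == []   # stores are in sync
```

Returning the list of offending ids (rather than a pass/fail flag) is what made the screenshot-driven error reproduction cheap: each mismatched id maps directly to a store to capture.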
- Quality checks on half a million retail stores to provide confidence to the business users on data accuracy.
- Around 80 test cases to test data quality across different systems.
- Around 25 dashboard reports for internal curators to perform sanity checks without engineering involvement.
- Cut 95% QA resource time in reproducing the errors by using the screen capture of problematic stores.
- Incurred $0 license cost on the entire testing framework by using all open source technologies.
- Automated 90% of quality checks, saving thousands of dollars on QA resources and testing time.
- Perform data quality checks on two million entities between master data stored in MySQL and indexed data stored in ElasticSearch.
- Perform random checks with different keywords and compare the output including both faceted search and search results on the web page against the API payload response.
- Capture web page screenshot of faceted search on Left Hand Side (LHS) and search results on Right Hand Side (RHS) for mismatch results between the web page results and API payload.
- Perform stress test to ensure that search results are served within 2 seconds.
- Generate Dashboard for internal curators for deeper insights on variety of entities stored in ElasticSearch.
- Automate majority of the tests to avoid manual errors and save QA resource testing time.
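The faceted-search comparison described above reduces to diffing two nested count structures: the facet buckets rendered on the page versus those in the API payload. In the real suite the page side came from Selenium scraping; the sketch below uses hypothetical hard-coded values to show the comparison itself.

```python
# Sketch of the faceted-search comparison: counts scraped from the web page
# (via Selenium in the real suite) versus the API payload. Values hypothetical.

page_facets = {"category": {"books": 120, "music": 45}, "brand": {"acme": 30}}
api_facets  = {"category": {"books": 120, "music": 44}, "brand": {"acme": 30}}

def facet_mismatches(page, api):
    """Return (facet, bucket) pairs where page and API counts disagree."""
    diffs = []
    for facet in set(page) | set(api):
        buckets = set(page.get(facet, {})) | set(api.get(facet, {}))
        for bucket in buckets:
            if page.get(facet, {}).get(bucket) != api.get(facet, {}).get(bucket):
                diffs.append((facet, bucket))
    return diffs

# The deliberately skewed "music" count is the only mismatch reported.
assert facet_mismatches(page_facets, api_facets) == [("category", "music")]
```

Each reported pair identifies exactly which facet to screenshot, which is how the LHS/RHS capture step stayed targeted instead of capturing every search page.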
The Treselle QA team implemented polyglot test programming using a mix of tools: ElasticSearch's built-in integration-test class ESIntegTestCase, Selenium integrated with Cucumber for behavior-based acceptance tests, PhantomJS for screenshot capture of mismatches, Gatling to execute performance tests on the API service, and Kibana visualization to design a dashboard of around 60 reports.
- Quality checks on two million entities to ensure data accuracy between MySQL & ElasticSearch.
- Smoke and full-blown test suites run daily and twice a week, respectively, to ensure high data quality.
- 98% of quality checks were automated that saved thousands of dollars on QA resources & testing time.
- Created 100 dashboard reports on two million entities for internal curators that provided deeper insights.
- Incurred $0 license cost on the entire testing framework by using all open source technologies.
- Proactive measures with 60 test cases to verify data quality and to execute integration, performance tests.
- Saved thousands of dollars by freeing up the lead engineer's time, avoiding 90% of manual code reviews.
- Enabled the client to assess vendor team’s code quality using multiple reports grouped by vendors.
- Created the ability to identify design issues such as high Cyclomatic Complexity and helped reduce code complexity, improving maintainability.
- Ability to identify potential software quality issues before the code moves to production.
- Faster time to market enabled the client to stay competitive.
- Saved thousands of dollars by avoiding error prone manual testing on multiple browsers.
- Automated testing with screenshot capture enabled the client to ensure proper discounts are displayed for different user memberships and categories.
- Enabled the web design team to tune the website to work properly on all browsers with fallback mechanism for the older versions.
- The ability to test and compare the application before and after each release improved confidence and enabled faster time to market.