Consolidation & Optimization of AWS Infrastructure
Saved $100,000 annually by consolidating and optimizing AWS services
- Consolidate and migrate the AWS infrastructure of an acquired company into the client account.
- Identify inefficiencies and redundancies, and provide a cost-savings plan as part of the consolidation.
- Perform the necessary changes to the acquired infrastructure to reflect changes in team structure and location.
- Reuse existing investments and migrate the necessary services from the acquired infrastructure.
- Apply performance optimizations to maintain the same level of scalability while reducing infrastructure cost by 70%.
- Identify the purpose of each service, its uptime needs, and historical traffic, and understand non-functional requirements such as high availability, scalability, redundancy, and fail-over.
- The acquired infrastructure was spread across multiple regions in the USA and Singapore to cater to different collocated teams.
- Understand the regulatory requirements for data storage and archival.
- Prepared a consolidation plan by identifying the AWS services in use, their purpose, their uptime needs, and data storage & archival needs.
- Identified that all AWS EC2 instances were running On-Demand, without using Reserved Instances where appropriate.
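The purchasing-option assessment can be sketched as a small helper that classifies each instance by its observed uptime. The uptime fractions and thresholds below are illustrative assumptions, not the actual figures used.

```python
# Sketch of the On-Demand vs Reserved vs Spot assessment, assuming monthly
# uptime fractions gathered from CloudWatch; thresholds are illustrative.

def recommend_purchase_option(uptime_fraction: float) -> str:
    """Map an instance's observed monthly uptime to a purchasing option."""
    if uptime_fraction >= 0.7:   # near-continuous -> commit to Reserved
        return "reserved"
    if uptime_fraction >= 0.2:   # intermittent -> stay On-Demand
        return "on-demand"
    return "spot"                # rare batch work -> Spot

def recommend_all(uptimes: dict[str, float]) -> dict[str, str]:
    """Apply the rule fleet-wide, e.g. {"web-1": 0.99, "batch-7": 0.05}."""
    return {iid: recommend_purchase_option(u) for iid, u in uptimes.items()}
```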
- Half a dozen EC2 instances were needed only for certain workloads but were running as Reserved Instances. Some were converted to Spot Instances and others to On-Demand, and programs were written to automatically take an AMI and terminate each instance once its workload finished.
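The image-then-terminate automation might look like the following boto3 sketch. The AMI naming scheme is an assumption; boto3 is imported lazily so the naming helper runs without the SDK installed.

```python
import datetime

def ami_name(instance_id: str, when: datetime.date) -> str:
    """Deterministic AMI name so re-runs are easy to audit (naming is an assumption)."""
    return f"{instance_id}-workload-{when.isoformat()}"

def archive_and_terminate(instance_id: str) -> None:
    """Image the instance, then terminate it once the AMI is available."""
    import boto3  # lazy import: the pure helper above needs no AWS SDK
    ec2 = boto3.client("ec2")
    image = ec2.create_image(InstanceId=instance_id,
                             Name=ami_name(instance_id, datetime.date.today()),
                             NoReboot=True)
    # Wait until the AMI is usable before destroying the source instance.
    ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])
    ec2.terminate_instances(InstanceIds=[instance_id])
```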
- Optimized the Python Flask code and tuned the Apache web server, reducing the 6 instances behind the load balancer to 2 while serving the same traffic.
- Data stored in dedicated MongoDB GridFS instances was moved to S3; bridge programs were written to bypass GridFS, and these instances were terminated.
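A minimal sketch of the GridFS-to-S3 bridge, assuming an object-key layout derived from each GridFS file document (the layout is an assumption, not the actual one used):

```python
def s3_key_for(gridfs_meta: dict) -> str:
    """Map a GridFS file document to an S3 object key (layout is an assumption)."""
    return f"gridfs/{gridfs_meta['_id']}/{gridfs_meta['filename']}"

def migrate_file(db, s3, bucket: str, meta: dict) -> None:
    """Stream one GridFS file into S3; the GridFS entry is removed afterwards."""
    import gridfs  # lazy import so s3_key_for works without pymongo installed
    fs = gridfs.GridFS(db)
    s3.upload_fileobj(fs.get(meta["_id"]), bucket, s3_key_for(meta))
```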
- Optimized S3 storage cost by moving historical data to Glacier, keeping only one year of data in S3.
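The one-year retention policy maps directly onto an S3 lifecycle rule; a sketch of the configuration (rule ID and bucket name are placeholders):

```python
# Lifecycle rule for the policy above: transition objects to Glacier after
# one year. Rule ID is a placeholder.
LIFECYCLE = {
    "Rules": [{
        "ID": "archive-after-one-year",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # apply to the whole bucket
        "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
    }]
}

# Applied with:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="example-bucket", LifecycleConfiguration=LIFECYCLE)
```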
- Analyzed the MongoDB data model, collection types, and the JSON documents stored in them, and performed several optimizations as follows:
- Identified duplicate and repetitive nested documents, moved them to a separate collection, and referenced them from the main document.
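The de-duplication step can be sketched as a pure transform that pulls the shared sub-document out and replaces it with a content-hash reference. Field names and the hashing scheme are illustrative assumptions.

```python
import hashlib
import json

def split_shared(doc: dict, field: str) -> tuple[dict, dict]:
    """Pull a repeated nested document out of `doc`, replacing it with a
    content-hash reference; the shared doc is upserted once into a separate
    collection (field name and hash choice are illustrative)."""
    shared = doc[field]
    ref = hashlib.sha1(json.dumps(shared, sort_keys=True).encode()).hexdigest()
    slim = {k: v for k, v in doc.items() if k != field}
    slim[f"{field}_ref"] = ref          # link back to the shared collection
    return slim, {"_id": ref, **shared}
```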
- Shortened key names to reduce the overall size of each document.
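Since MongoDB stores key names in every document, a one-off recursive rename can shave a meaningful fraction of storage; a sketch, with an example mapping that is purely illustrative:

```python
def shorten_keys(doc, mapping: dict):
    """Recursively rename keys per `mapping` (e.g. {"customer_address": "ca"});
    unmapped keys pass through unchanged."""
    if isinstance(doc, dict):
        return {mapping.get(k, k): shorten_keys(v, mapping) for k, v in doc.items()}
    if isinstance(doc, list):
        return [shorten_keys(v, mapping) for v in doc]
    return doc
```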
- Migrated to the latest version of MongoDB and enabled the WiredTiger storage engine to compress the stored data.
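The compression setting lives in the server configuration; a mongod.conf excerpt of the kind of change involved (choosing zlib over the default snappy trades CPU for a higher compression ratio — the exact compressor used here is an assumption):

```yaml
# mongod.conf excerpt — WiredTiger block compression (compressor choice assumed)
storage:
  engine: wiredTiger
  wiredTiger:
    collectionConfig:
      blockCompressor: zlib
```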
- Terminated all Singapore-region MongoDB instances used for sharding and replication, and migrated them to two regions in the USA.
- Identified historical data in some collections, confirmed it was no longer needed, and moved it to Glacier.
- Reduced the number of MongoDB instances from 10 to 3.
- Migrated data from MSSQL to an existing instance and terminated the Windows instance.
- Migrated RabbitMQ and Celery worker processes to AWS SQS; wrote the necessary producers and consumers to process the queues, and terminated these instances.
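The SQS producer/consumer pair might look like this boto3 sketch. The message-body shape (a former Celery task serialized as JSON) is an assumption; boto3 is imported lazily so the encoder runs without the SDK.

```python
import json

def encode_task(task_name: str, payload: dict) -> str:
    """Serialize a former Celery task into an SQS message body (shape assumed)."""
    return json.dumps({"task": task_name, "payload": payload})

def produce(queue_url: str, body: str) -> None:
    import boto3  # lazy so encode_task works without the SDK installed
    boto3.client("sqs").send_message(QueueUrl=queue_url, MessageBody=body)

def consume(queue_url: str, handle) -> None:
    """Long-poll the queue, dispatch each message, delete on success."""
    import boto3
    sqs = boto3.client("sqs")
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        handle(json.loads(msg["Body"]))
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```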
- Migrated a dedicated MongoDB log system to AWS Elasticsearch and terminated its instances, including the log UI server, which was replaced with Kibana.
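Moving logs into Elasticsearch typically means batching records through the `_bulk` API, whose body is newline-delimited JSON; a stdlib-only sketch of building that payload (index name is a placeholder):

```python
import json

def bulk_body(index: str, logs: list[dict]) -> str:
    """Build an Elasticsearch _bulk request body (NDJSON) from log records:
    one action line followed by one source line per record."""
    lines = []
    for rec in logs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(rec))
    return "\n".join(lines) + "\n"  # _bulk bodies must end with a newline
```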
- Created and implemented around 250 test scenarios covering data-quality checks, performance and stress testing, integration testing, and functional testing, using Selenium, PhantomJS, Gatling, SoapUI, and Swagger.
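A representative data-quality check from such a suite might verify that migrated documents keep their required fields and types; the field spec below is purely illustrative:

```python
def check_document(doc: dict, required: dict) -> list[str]:
    """Verify required fields exist with the expected types; returns a list
    of problems (empty list = pass). The spec format is an assumption."""
    problems = []
    for field, ftype in required.items():
        if field not in doc:
            problems.append(f"missing field: {field}")
        elif not isinstance(doc[field], ftype):
            problems.append(f"wrong type for {field}: {type(doc[field]).__name__}")
    return problems
```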