To err is human.
As long as we are human, we are bound to make mistakes. This is especially true when we have mammoth responsibilities in front of us. And some mistakes cost us thousands of dollars!
When it comes to AWS or security, this is usually the case. This is probably why even AWS itself suggests that its users opt for managed providers and avoid extravagant costs.
Ever wondered whether you are spending more than you planned? Managed AWS is the solution.
It was one of our usual days. One of our team members (who likes online shopping) was searching on Google and came across an e-commerce store for which Google was showing a catastrophic message: “This site may be hacked.”
It was straightforward: their site was infected with malware and needed immediate remedial action. Our sales team reached out, highlighting the security loopholes, and got an almost instant response from Mr. Pahwa. We helped them fix the malware issue in a couple of hours and submitted the site for review; within 24 hours, their site was back to normal on Google.
While cleaning the malware, we discovered the need for a complete infrastructure redesign on AWS for overall security and cost optimization.
Now here was the real challenge. They threw a “show us what you can do best” at us. And, as per the rules of the game, we had no data about their previous architecture or bill.
How Did We Build a Whole New Architecture?
The challenge was accepted.
We knew that the AWS cost optimization journey would be long and require consistent effort, so we outlined our strategies first.
AWS Cost Optimization Strategies
These are the strategies we implemented to cut costs. Remember, if your current architecture is ill-designed, you have even more room for improvement. Have a look at these strategies and compare them with yours to spot the gaps…
1. Architecture Redesign
Before anything else, we collected basic information about the website and its traffic: which application the site was running on, its resource utilization patterns, user behavior, and traffic and application trends.
Since their site was running on Magento, and we specialize in Magento, this was not a backbreaker for us. Later, we studied the resource utilization patterns. Now, what does that even mean?
For instance, you see traffic to your website from 9 a.m. to 12 p.m., but does 90% of that traffic hit your website between 9 a.m. and 9:15 a.m.? Or is it spread across the 3 hours? This is important to understand when choosing AWS instances. So we used Google Analytics and daily traffic stats to evaluate what type of instances were needed.
Not to be missed: user behavior patterns and traffic sources matter a lot. In this case, they knew when and from where their traffic came. The majority of their traffic came in from Facebook campaigns, and the timings of those campaigns were fixed. So we had a good sense of the traffic.
And of course, our Magento experience helped us make a solid estimate.
We understood the demand and optimally matched it with the supply. However, always keep in mind the need for sufficient extra supply to allow for provisioning time and individual resource failures.
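To make the sizing reasoning above concrete, here is a minimal sketch of how traffic spread affects capacity needs. All numbers (visit counts, per-instance capacity, the 1.3x headroom factor) are hypothetical illustrations, not the client's real figures.

```python
import math

# Hypothetical sizing sketch: all numbers are illustrative placeholders.

def peak_rps(visits: int, window_minutes: float) -> float:
    """Average requests per second if `visits` arrive within `window_minutes`."""
    return visits / (window_minutes * 60)

def instances_needed(rps: float, rps_per_instance: float, headroom: float = 1.3) -> int:
    """Instances required, with extra headroom for provisioning time and failures."""
    return math.ceil(rps * headroom / rps_per_instance)

# The same 10,000 visits produce very different peaks:
spread_rps = peak_rps(10_000, 180)  # spread evenly across 3 hours
burst_rps = peak_rps(10_000, 15)    # concentrated in the first 15 minutes

print(round(spread_rps, 2))  # 0.93 req/s
print(round(burst_rps, 2))   # 11.11 req/s
print(instances_needed(burst_rps, rps_per_instance=5))  # burst needs more capacity
```

The burst scenario needs an order of magnitude more capacity at peak, which is exactly why we looked at *when* traffic arrived, not just how much.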
Rule of thumb: choosing the right instance for the right purpose always works.
2. Right Sizing
We selected the cheapest instances that still met the performance requirements.
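Right sizing boils down to picking the cheapest option that still satisfies the performance floor. A sketch of that selection: the instance names below are real EC2 types, but the prices and capacities are made-up placeholders, not current AWS pricing.

```python
# Illustrative catalog: (name, hourly_price_usd, vcpus, ram_gb).
# Prices and specs here are placeholders, NOT real AWS pricing.
CATALOG = [
    ("t3.medium", 0.04, 2, 4),
    ("m5.large",  0.10, 2, 8),
    ("m5.xlarge", 0.19, 4, 16),
    ("c5.xlarge", 0.17, 4, 8),
]

def cheapest_fit(min_vcpus: int, min_ram_gb: int):
    """Return the cheapest catalog entry meeting both requirements, or None."""
    candidates = [i for i in CATALOG if i[2] >= min_vcpus and i[3] >= min_ram_gb]
    return min(candidates, key=lambda i: i[1]) if candidates else None

print(cheapest_fit(4, 8)[0])  # c5.xlarge -- cheapest entry with 4 vCPUs / 8 GB
```

The point is to treat the performance requirement as a hard constraint and cost as the objective, rather than over-provisioning "to be safe."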
3. Constant Monitoring
Once we had designed the architecture, we had to monitor it closely. For 30 days, we analyzed the traffic and how our newly designed architecture performed. I cannot stress enough how important constant monitoring of your design is in AWS cost optimization. There could be many loopholes, and you might end up spending more.
Steps we took here:
- Notifications can be a savior. Turn on “Receive Billing Alerts” to keep track of your AWS usage charges.
- Set up Amazon CloudWatch alarms and notifications.
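The logic behind such a billing alarm is simple: compare the estimated charges against a fraction of your monthly budget and alert once it is crossed. A minimal sketch of that logic, with a hypothetical $1,000 budget (not the client's real figure):

```python
def should_alert(estimated_charges: float, monthly_budget: float,
                 alert_fraction: float = 0.6) -> bool:
    """True once estimated charges reach the chosen fraction of the budget."""
    return estimated_charges >= monthly_budget * alert_fraction

# Hypothetical $1,000/month budget with a 60% alert threshold:
print(should_alert(550.0, 1000.0))  # False -- still under 60% of the budget
print(should_alert(615.0, 1000.0))  # True  -- time to investigate
```

In AWS itself, the same idea is expressed as a CloudWatch alarm on the estimated-charges billing metric, pointed at an SNS topic for email or SMS delivery.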
Remember: AWS Cost Optimization is “Going from pay what you use to pay what you need.” It is always an ongoing area of work.
AWS Cost Optimization Challenge: Bringing It to a Close
- Turn off unused instances, e.g. developer, test, and training instances on weekends.
- Set up Amazon CloudWatch alarms and notifications to get immediate insight into your expenditure. We monitored CPU utilization, data transfer, and disk usage metrics from our EC2 instances, and the alarms notified us via email or text message.
- Whenever our usage reached even 60% of the threshold, we were notified. This way our expenditure always stayed below it.
- With the AWS Simple Monthly Calculator, we estimated our data transfer costs.
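The weekend shutdown rule in the first point above can be reduced to a simple check. A sketch, assuming non-production instances carry an environment tag (the tag names here are assumptions for illustration, not the client's actual setup):

```python
from datetime import date

# Assumed environment tags marking instances safe to stop -- hypothetical names.
NON_PROD = {"dev", "test", "training"}

def should_stop(env_tag: str, day: date) -> bool:
    """Stop non-production instances on Saturdays and Sundays."""
    return env_tag in NON_PROD and day.weekday() >= 5  # 5 = Sat, 6 = Sun

print(should_stop("dev", date(2016, 3, 5)))   # True  -- Saturday, dev instance
print(should_stop("prod", date(2016, 3, 5)))  # False -- production stays up
```

In practice this kind of check would run on a schedule and stop the matching EC2 instances; the payoff is that you stop paying for compute nobody is using.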
Since requirements change over time, optimization and reassessment are always required.
What Did We Get After 30-day Assessment?
We came into the picture in March.
Using these strategies and constant monitoring, we were able to cut expenses by 26.79%. In April, we reduced costs by a further 11.46%. We have been able to maintain this level ever since.
Of course, this excludes times when upsizing is needed for traffic spikes or their big days.
Their cost on AWS was insanely high until February 2016. However, after the new, redefined infrastructure, shopnineteen was able to save 25–30% on their AWS bill.
P.S. Because of privacy concerns, we cannot disclose our client’s cost-related data, but this graph is based on the real numbers. 🙂
Struggling with AWS cost optimizations? Talk to our experts and we will manage the rest for you!