Have you received free AWS credits ($100–$20,000) via an incubator or accelerator, or simply out of the blue?
Let’s discuss the options you have (and the mistakes people usually make):
- You get excited and plan to move all your current infrastructure from your existing hosting to AWS.
- You do the same as above, but having read up on AWS and spent some time understanding its services, you are looking not only at hosting but also at the hundreds of other services AWS provides.
- You plan to explore AWS and its services, not necessarily moving your infrastructure to AWS.
- You do not use these credits as you are already running your infra on GCP/Azure or some other provider.
- AWS credits are a trap. You are happy with the way things are.
More often than not, we are brought in to salvage the situation after your credits expire. Think of it as the 13th month, if the credits covered the previous 12. In many situations, users have been left with bills worth thousands of dollars because they were very “casual” in their approach. In the end, AWS (and its peers) get a bad name for draining money and being extremely expensive, whereas the truth is exactly the opposite.
As soon as you get the credits, you need to prepare a strategy document that is in sync with your product vision and development roadmap. Instead, what we have seen is the development team (or the owners, in the case of smaller businesses) jumping onto AWS, trying out as many services as possible (and forgetting to switch them off), and then trying to superimpose all of that on the product roadmap.
Here is our guide, prepared after working with 15 clients who have been in the same situation.
1. AWS is not only hosting
If you have a server hosted on, say, GoDaddy, HostGator, or Bluehost, and your objective is to save $10–$15 a month, then it is better to avoid the AWS route. Services like AWS make sense when you want to scale efficiently and quickly; if that is not your aim, there is no point in moving everything. A dedicated machine on one of these providers is absolutely fine.
2. Find the right person/team
Your developer and/or your current IT (infrastructure) person might not be the best fit for this job. We have often seen this approach fail to get you where you want to be while wasting your precious credits. There is a steep learning curve associated with AWS and its ilk, and it is always advisable to bring in a seasoned professional or team who can bring you and your team up to speed.
3. Be the janitor, and get your team to do the same
With free credits, or pay-as-you-go pricing (which can create an illusion of low cost), individuals tend to get lazy and leave their experiments running. These experiments can turn out to be very expensive. We have faced this problem ourselves and have built processes around it to ensure there are no such leakages.
4. Keep an eye on the bill
With multiple regions and so many services, keeping track can become overwhelming, so you need to work smart. The easiest place to start is the bill itself: it breaks down your charges by region. If you would like to dig into the details, select the appropriate region and service.
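If you prefer scripting this check, the same per-region breakdown can be pulled from the Cost Explorer API. The snippet below is a minimal sketch: the boto3 call appears only as a comment (it needs credentials and Cost Explorer enabled), and the parsing runs against a hypothetical hard-coded response shaped like `GetCostAndUsage` output grouped by the `REGION` dimension.

```python
from collections import defaultdict

# Hypothetical sample shaped like a Cost Explorer GetCostAndUsage response.
# With credentials, you would fetch the real thing via:
#   import boto3
#   ce = boto3.client("ce")
#   resp = ce.get_cost_and_usage(
#       TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},
#       Granularity="MONTHLY",
#       Metrics=["UnblendedCost"],
#       GroupBy=[{"Type": "DIMENSION", "Key": "REGION"}],
#   )
resp = {
    "ResultsByTime": [{
        "Groups": [
            {"Keys": ["us-east-1"],
             "Metrics": {"UnblendedCost": {"Amount": "312.40", "Unit": "USD"}}},
            {"Keys": ["eu-west-1"],
             "Metrics": {"UnblendedCost": {"Amount": "87.15", "Unit": "USD"}}},
        ]
    }]
}

def cost_by_region(resp):
    """Roll up unblended cost per region across all time buckets."""
    totals = defaultdict(float)
    for bucket in resp["ResultsByTime"]:
        for group in bucket["Groups"]:
            region = group["Keys"][0]
            totals[region] += float(group["Metrics"]["UnblendedCost"]["Amount"])
    return dict(totals)

print(cost_by_region(resp))
```

A scheduled script like this makes a surprise regional charge (a forgotten instance in a region you never open in the console) visible within a day instead of at month end.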
5. Use alerts and notifications
This is a very basic but extremely useful facility provided by AWS. Essentially, you can set up custom alerts for literally everything (once you get your hands dirty with CloudWatch). Decide on a conservative monthly budget and set up alerts at 25%, 50%, 75%, and 100% of it.
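As a sketch of how those thresholds translate into alarms: the budget figure and SNS topic ARN below are hypothetical, and the actual CloudWatch calls are shown only as comments, since billing alarms require credentials, billing alerts enabled on the account, and the us-east-1 region.

```python
# Sketch: derive alert thresholds from a conservative monthly budget.
MONTHLY_BUDGET_USD = 200.0  # hypothetical budget, pick your own

def alert_thresholds(budget, steps=(0.25, 0.50, 0.75, 1.00)):
    """Dollar amounts at which billing alarms should fire."""
    return [round(budget * s, 2) for s in steps]

print(alert_thresholds(MONTHLY_BUDGET_USD))  # [50.0, 100.0, 150.0, 200.0]

# With boto3, each threshold becomes a CloudWatch alarm on the
# EstimatedCharges metric (published only in us-east-1), e.g.:
#   cw = boto3.client("cloudwatch", region_name="us-east-1")
#   for t in alert_thresholds(MONTHLY_BUDGET_USD):
#       cw.put_metric_alarm(
#           AlarmName=f"billing-{t}-usd",
#           Namespace="AWS/Billing", MetricName="EstimatedCharges",
#           Dimensions=[{"Name": "Currency", "Value": "USD"}],
#           Statistic="Maximum", Period=21600, EvaluationPeriods=1,
#           Threshold=t, ComparisonOperator="GreaterThanThreshold",
#           AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
#       )
```

The 25% alarm is the one that matters most: it fires early enough in the month to investigate and switch things off before the damage compounds.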
6. It will not be free forever
The first month after your credits expire, when you have to shell out your own money, is usually rough for founders. We have seen founders move back to their old infrastructure without paying their pending bills. You need to plan your 13th month (12 free months + 1 paid month) well in advance.
7. What should be my ideal spending?
This is a bit tricky and varies case by case. We present two scenarios to give you an idea of how to approach the problem on your end.
7.1) Moving a batch process to the cloud – This is a project we did for the Broad Institute, where initially 3 servers were always running, waiting for tasks (a batch process running custom C++ code). The 3 servers were small, medium, and large (think of them as x, 2x, and 4x to keep things simple). Let us assume the value of x to be $25 here (a rough assumption; it could be $10 or $100). From an infrastructure standpoint, that is 25 + 50 + 100 = $175 per month (conservatively). There were 2 applications using different sets of servers, so 175 × 2 = $350 was the total compute cost. Add 2 hosting servers at, say, $20 per server per month, and the total cost of ownership comes to $390 per month, call it $400 (not counting maintenance, developer, or license costs).
When Dignitas Digital took on this project we broke it down into smaller pieces, separating out the tightly coupled front end, back end, and the API calls. We took the C++ code, containerized it, and leveraged AWS Batch, to run these jobs. For the front end, we took a very lightweight application, putting the bulk of the processing on the Batch.
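For illustration, submitting one of these containerized jobs to AWS Batch boils down to building a `submit_job` request. In the sketch below, the queue name, job definition, command, and S3 path are hypothetical stand-ins, not the client’s actual setup; with credentials you would pass the resulting dict to `boto3.client("batch").submit_job(**req)`.

```python
# Hypothetical AWS Batch job request for a containerized C++ batch task.
def batch_job_request(task_id):
    """Build the kwargs for boto3's batch submit_job call."""
    return {
        "jobName": f"cpp-pipeline-{task_id}",
        "jobQueue": "on-demand-queue",        # assumed queue name
        "jobDefinition": "cpp-container:1",   # assumed job definition:revision
        "containerOverrides": {
            # assumed entrypoint and input location
            "command": ["./run_pipeline", "--input", f"s3://example-bucket/{task_id}"],
        },
    }

print(batch_job_request("task-42")["jobName"])
```

The point of this shape is that no server waits around for work: each task brings its own container up, runs, and releases the compute when it exits.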
Using this approach, we brought the infrastructure cost down to less than $30 a month, i.e. about 7.5% of the previous cost, freeing the client’s budget to invest in the actual development of the project rather than in underutilized resources.
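The cost math above, under the same x = $25 assumption, checks out in a few lines:

```python
# Worked cost math from the case study above (x = $25 is an assumption).
x = 25
compute_one_app = x + 2 * x + 4 * x   # small + medium + large = $175/month
compute = 2 * compute_one_app         # two applications = $350/month
hosting = 2 * 20                      # two hosting servers at $20 each
old_tco = compute + hosting           # $390/month, roughly $400

new_cost = 30                         # post-migration infrastructure bill
print(old_tco, round(new_cost / 400, 3))  # 390 0.075 -> ~7.5% of the old cost
```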
7.2) Making static infrastructure dynamic – A video processing company we worked with was spending a lot on its infrastructure: a big server ran tasks sequentially, i.e. only one task could run on a machine at a time. Running multiple big instances incurred huge costs, and when there were not enough tasks, the instances sat idle. The client was spending close to $800 on infrastructure.
Here our job was to convert this extremely static infrastructure into a more dynamic and flexible one while bringing the costs down and increasing the efficiency.
The first option was to enable auto-scaling of the servers, but that was not feasible: third-party licenses were involved, and the licensing was per machine. This approach was dropped.
The second approach was to set up multiple instances with preset configurations and start and stop them at fixed times (6 AM to 6 PM, covering work hours). The issue here was again underutilization, plus in some cases clients ran their processing on weekends. So this approach took a backseat as well.
The third approach was to start and stop servers based on the processing pipeline: every 60 seconds we would check whether any processing was required and switch servers on or off accordingly, leveraging Lambda functions. Keep in mind that we now have 1 primary server (always on) and multiple secondary servers that switch on and off based on load. The primary server stayed on because a cold machine took 15 minutes to boot, run the batch scripts, and be ready to pick up videos to process.
We enhanced this approach further by using hibernated instances, so that they could be up and running in a matter of seconds rather than minutes.
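The heart of that 60-second poller is a small piece of decision logic, sketched below. The function name and states are illustrative; in the real Lambda, the queue check and the start/stop would use boto3, e.g. `ec2.start_instances(InstanceIds=[...])` and `ec2.stop_instances(InstanceIds=[...], Hibernate=True)` for the hibernation variant (on instances launched with hibernation enabled).

```python
# Sketch of the Lambda poller's decision logic (names are illustrative).
def decide(pending_jobs, secondary_state):
    """Choose an action for a secondary server based on queue depth."""
    if pending_jobs > 0 and secondary_state == "stopped":
        return "start"   # work is waiting and the box is off
    if pending_jobs == 0 and secondary_state == "running":
        return "stop"    # nothing to do; stop paying for idle compute
    return "noop"        # state already matches the workload

print(decide(3, "stopped"), decide(0, "running"), decide(0, "stopped"))
# start stop noop
```

Keeping the logic this dumb is deliberate: a stateless check that runs every minute is easy to reason about, and a missed cycle costs at most one minute of idle billing.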
Using this approach, we brought the infrastructure costs down by 92%. This gives the client more room to spend on other services and to provide better value to their customers.
I do not know the ABC of AWS
That is exactly what we are here for. With 12+ years of experience working with AWS (yes, really: while studying at the University of Pennsylvania, we were introduced to AWS via “free credits” given to the Computer and Information Science department to play with, in the early days of AWS!), we have watched the services grow and have been actively involved in the AWS community as well.