I’ve always loved using excess or old computers and network infrastructure to lab things up or run a PoC for an application or service. I still have a Dell R710 and HP ML10v2 I use to run services like Home Assistant and various observability tools for testing.
So what does this have to do with cloud migrations? I can’t afford to run my labs on IaaS for long periods of time. But that’s the easy answer to a phenomenon that seems to be happening more and more. Let’s take a look.
When cloud services started becoming a thing in the mid-to-late 2000s, there were a few options available: storage, compute, and some XaaS offerings like unified communications. The promise of the cloud was redundancy, scalability, reduced reliance on employing specialists, and OpEx spend as opposed to CapEx spend. Why would a legal firm need a large IT team and its own equipment when it could outsource all of that to the experts?
This sounded pretty attractive to technology leaders: they no longer needed to own assets or manage the related service contracts and renewals, and they could reduce the number of staff required to manage their technology investment. If the company had a launch or other event where scale was needed, it could simply scale its services temporarily.
I’d personally leveraged hosted dedicated servers in the US and Australia for my personal projects, primarily for reliability and high bandwidth, and recommended the same to my customers at the time as a way to avoid the risk of localised power and connectivity issues for those hosting out of makeshift datacentres or, worse, basements.
As technology evolved through the 2010s, cloud native technology started to appear, promising autoscaling and automated builds of applications and microservices. Many organisations jumped on this and migrated some of their applications to become cloud native. Others did not; they simply lifted their on-premises architecture and moved it into cloud service providers like AWS, GCP, and Azure.
Without application transformation, running these legacy architectures in the cloud can become extraordinarily expensive over the long term. Investing developer resources into transforming a ‘good enough’ application on a legacy architecture doesn’t make sense, but neither does paying to run that architecture in the cloud indefinitely.
Some organisations simply took their legacy applications and placed them into containers, which adds the overhead of running the container layer itself. Containerisation has the benefit of abstracting the service from the underlying platform, but it still requires some rethinking of how the application works.
Put simply, if you are building something new, build it cloud native. If it is legacy and you have no intention of transforming the application to run on a cloud native architecture, you may not realise the expected savings long term.
When organisations go through the process of troubleshooting a complex application performance issue, they will often go for quick wins:
- Increase bandwidth
- Add CPU cores
- Add memory
- War room?
These changes are never reversed, and the costs add up quickly. Nothing is more permanent than a temporary fix.
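To put a number on “add up quickly”, here’s a back-of-envelope sketch. The hourly rates are made-up assumptions for illustration, not real cloud list prices; the point is that an unreversed ‘temporary’ resize compounds quietly every month.

```python
# Back-of-envelope cost of an unreversed "temporary" resize.
# Hourly rates below are illustrative assumptions, not real list prices.
HOURS_PER_MONTH = 730  # average hours in a month

baseline_hourly = 0.20  # assumed rate for the original instance size
upsized_hourly = 0.80   # assumed rate after the war room doubled cores and memory

monthly_delta = (upsized_hourly - baseline_hourly) * HOURS_PER_MONTH
print(f"Extra spend per month: ${monthly_delta:,.2f}")      # $438.00
print(f"Extra spend per year:  ${monthly_delta * 12:,.2f}")  # $5,256.00
```

Multiply that across a fleet of services that each got the same ‘temporary’ bump, and the bill grows fast.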
Additionally, it can be cheaper to prototype solutions internally on excess and legacy hardware; prototyping the same thing in the cloud incurs ongoing costs that would not otherwise be seen. Hands up if you’ve ever left something running in the cloud by accident?
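As a guard against exactly that, something like the sketch below can flag instances that have been running longer than expected. It uses boto3, the AWS SDK for Python; the seven-day threshold is an arbitrary assumption for illustration, and in practice billing alerts and resource tagging do this job more robustly.

```python
"""Minimal sketch: flag EC2 instances that may have been forgotten.
Assumes AWS credentials are already configured; the 7-day threshold
is an arbitrary example, not a recommendation."""
from datetime import datetime, timedelta, timezone

import boto3

MAX_AGE = timedelta(days=7)  # anything older than this gets flagged

ec2 = boto3.client("ec2")
now = datetime.now(timezone.utc)

# Page through all running instances in the current region
for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            age = now - instance["LaunchTime"]  # LaunchTime is timezone-aware
            if age > MAX_AGE:
                print(f"{instance['InstanceId']} has been running for {age.days} days")
```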
I think we will continue to see hybrid environments. If I were starting a business today, I would certainly still run a hybrid environment where prototyping could be done on-prem, with the majority of anything new built cloud native.
If you got this far, thanks for reading. Feel free to buy a book from here!