It’s all well and good to say take your time on your cloud journey: plan, analyse and re-architect all applications, then move. But let’s get practical. Here’s what really happens, and how to make reality work for you.
A data centre contract comes up for renewal. The company figures now is a good time to move its workloads to the cloud. A contract renewal is as good a driver as any for organisations already thinking about the cloud or continuing their cloud journey. Of course, motivations go deeper. Organisations are seeking cost effectiveness, a switch from capex to opex, productivity improvements, modernisation and more.
Nonetheless, this has been a common cloud migration pattern for years. As a first step, organisations arrange to switch to the cloud versions of the on-premises solutions, or to alternative cloud solutions that now replace their on-premises solutions. They arrange to house whatever isn’t ultimately moved to the cloud in a reduced footprint in the same or a different data centre. Then, for the remaining workloads, they start exploring private/public cloud hosting options.
What it means, though, is that a clock is now ticking down towards a deadline by which the cloud decisions and moves have to be made. Typically, there’s not enough time to adequately take stock of the remaining applications’ usage and accessibility requirements, to re-architect applications so they are cloud-native, or to plan the future network state. Get all of that right first and, of course, everything falls into place. But that is not how reality works.
What usually happens is that as soon as a cloud provider is selected, the time constraint means applications designated for the cloud are migrated immediately and left running exactly as they always have. Then, some months later, the organisation receives a surprisingly high bill and . . . panic! Some organisations move workloads back on premises at this stage, even though they were running just fine in the cloud, simply to avoid the unexpectedly high charges.
But it doesn’t have to come to this.
If you must lift and shift (‘re-host’ in AWS parlance) relatively quickly, due to time constraints or other imperatives, don’t just leave the applications there and assume all is well. First, the interdependencies and data flows between applications are often overlooked, so they rarely work exactly as they did on premises. (A big ‘gotcha’ in the cloud is the difference between on-premises and cloud DNS services.) Second, even if they are working the same, this may not be the best outcome in the cloud environment, especially in terms of cost.
You need to monitor and check how things are working from the start, not six months down the track, and take any necessary remedial action.
Re-platform – Change is Possible
One of our clients re-hosted its on-premises virtual desktop solution to the cloud, hoping to better serve its 30,000 or so users and to save money. The remote desktop solution was now hosted in AWS on very large instances, essentially mimicking the on-premises architecture. Over the next few months, this led to a cost blow-out and a reluctance to use the service to its full potential.
The client asked us to see what we could do, quickly, to help them get more from AWS at a lower cost. Using some clever thinking and coding, our engineer developed a solution that made use of AWS auto-scaling features and introduced an alternative load-balancing algorithm to cater for variable user sessions. Our solution utilised smaller, cost-effective AWS EC2 instances rather than the larger, expensive instances originally deployed. A group of small instances used temporarily is generally a lot cheaper (depending on the application) than one large instance running all the time. This is a key difference between the physical world and the ephemeral world.
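The small-versus-large economics can be sketched with some back-of-envelope arithmetic. The prices, instance counts and hours below are made-up placeholders for illustration, not real AWS rates or this client’s actual figures:

```python
# Hypothetical comparison: one large always-on instance versus a pool of
# small instances that scales with demand. All numbers are assumptions.

LARGE_HOURLY = 2.00     # assumed hourly price of one large instance
SMALL_HOURLY = 0.25     # assumed hourly price of one small instance
HOURS_PER_MONTH = 730   # average hours in a month

def always_on_large_cost():
    """One large instance running 24/7 for a month."""
    return LARGE_HOURLY * HOURS_PER_MONTH

def autoscaled_small_cost(instance_hours_by_day):
    """Pay only for the small-instance hours actually consumed each day."""
    return SMALL_HOURLY * sum(instance_hours_by_day)

# Example month: four small instances for ten hours on each of 22 working
# days, and a couple of skeleton hours on each of 8 weekend days.
weekday_hours = [4 * 10] * 22
weekend_hours = [2] * 8

monthly_large = always_on_large_cost()                            # 1460.0
monthly_small = autoscaled_small_cost(weekday_hours + weekend_hours)  # 224.0
```

Under these assumed numbers the auto-scaled pool costs a fraction of the always-on large instance, which is the shape of saving the re-platforming delivered.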
It’s a different mindset from the on-premises days
When an organisation deals with a vendor for an on-premises solution, the application and/or software provider usually over-specifies capacity. Move all that capacity to the ‘user-pays’ cloud environment and you will be paying for it whether you are using it or not, likely running up costs unnecessarily. In the cloud you pay only for what you use, that is true. However, if you run an application in the cloud 24/7 but your staff only access it during business hours, you are still paying for it to run 100% of the time. Our solution, which ensured that applications were running when needed and not running when no one would be using them, demonstrated an 80% saving on infrastructure costs while improving user experience and reducing administration effort. It prompted the vendor (Citrix) to adjust and re-script its remote desktop solution.
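The 24/7-versus-business-hours point is easy to quantify. Assuming a business window of 08:00–18:00 on weekdays (an illustrative assumption, not the client’s actual schedule), scheduling alone removes roughly 70% of the running hours before any instance right-sizing:

```python
# Fraction of the week an application actually needs to run if it is only
# used during an assumed 08:00-18:00, Monday-Friday business window.
hours_per_week = 7 * 24            # 168 hours in a week
business_hours_per_week = 5 * 10   # 50 hours of assumed business usage

fraction_needed = business_hours_per_week / hours_per_week  # ~0.30
scheduling_saving = 1 - fraction_needed                     # ~0.70
```

Combine that with smaller, right-sized instances and a saving in the region of the 80% described above is plausible.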
Double-Check the Automation Script
In the early days of AWS it was difficult to act on that understanding: how to shut down an application when you don’t need it, start it up again when you do, and save money by only paying for what you use. Amazon addressed this by providing a script to shut down instances and spin them back up again, based on identified parameters.
However, there was a problem, and we found it: whenever daylight saving took effect, the script didn’t adjust properly. We altered the script to allow for daylight saving and to turn our client’s application off and on at the right times.
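The usual cause of this class of bug is scheduling against a fixed UTC offset instead of the business’s local timezone. A minimal sketch of the DST-safe approach, assuming an Australian client and an 08:00 local start time (both assumptions for illustration, not the actual script):

```python
# DST-safe scheduling sketch: define the trigger in local time and convert
# to UTC at run time, so the offset tracks daylight saving automatically.
from datetime import datetime, time
from zoneinfo import ZoneInfo

LOCAL_TZ = ZoneInfo("Australia/Sydney")  # assumed client timezone

def start_time_utc(day: datetime) -> datetime:
    """08:00 local on the given day, expressed in UTC."""
    local = datetime.combine(day.date(), time(8, 0), tzinfo=LOCAL_TZ)
    return local.astimezone(ZoneInfo("UTC"))

# Sydney is UTC+11 in January (daylight saving) and UTC+10 in July, so the
# UTC trigger time shifts by an hour between seasons without any code change.
jan = start_time_utc(datetime(2024, 1, 15))  # 21:00 UTC the previous day
jul = start_time_utc(datetime(2024, 7, 15))  # 22:00 UTC the previous day
```

A script that instead hard-codes “start at 22:00 UTC” fires an hour late for half the year, which matches the symptom described above.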
To summarise the lessons in this article: always check any off-the-shelf scripting provided by the cloud provider, double-check your bills, and rigorously assess and adjust your application usage. Ask questions. Look for fluctuations and figure out why they are happening. Don’t just assume it’s all working as it should. And don’t take old thinking to the cloud.
Damien Pedersen is the Chief Technology Officer at Envisian, guiding our team of technology experts to solve client challenges. Find out more about how his team ‘cloudified’ a vendor’s on-premises remote desktop solution.