
3 reasons not to repatriate cloud-based apps and data sets

Repatriation seems to be a hot topic these days, as some applications and data sets go back to their place of origin. I have even been labeled in some circles as a repatriation advocate, mainly due to this recent post.

Once again I will reaffirm my position: the general objective is to find the most optimized architecture to support your business. Sometimes that’s in a public cloud and sometimes it’s not. Or not yet.

Keep in mind that technology evolves and the value of using one technology over another changes a lot over time. I learned a long time ago not to fall in love with any technology or platform, including cloud computing, even though I chose a career as a cloud expert.

How do you find the most optimized architecture? Work from your business requirements to your platform, not the other way around. In fact, you’ll find that most of the applications and data sets being repatriated should never have existed in a public cloud in the first place. The decision to move to the cloud was driven more by enthusiasm than reality.

So today is a good day to explore the reasons why you would not want to repatriate applications and data sets from public cloud platforms back to legacy systems. Hopefully this balances the discussion a bit. I’m sure someone will label me a “mainframe geek” for it; don’t buy that label either.

Here we go. Three reasons not to move applications and data sets out of public clouds and back on-premises:

Rearchitecture is expensive

Repatriating applications from the cloud to an on-premises data center can be a complex process. Redesigning and reconfiguring an application takes a lot of time and resources, which erodes the value of making the move. Yet redesign is usually required for applications and/or data sets to function in a near-optimized manner on the new platform. These costs are often too high to justify whatever business value you would see from repatriation.

Of course, this mostly applies to applications that underwent some refactoring (code or data changes) to move to a public cloud provider, but not much. In many cases, these applications are poorly architected: they were poorly designed on-premises and remain poorly designed now that they exist in public clouds.

However, such applications are easier to optimize and refactor on a public cloud provider than on traditional platforms; the tools for redesigning these workloads are generally better in public clouds these days. So if you have a poorly architected application, it’s usually better to keep running it in a public cloud than to repatriate it, because the cost and hassle of moving it back often outweigh any savings.


Public clouds offer more agility

Agility is a core business value of staying on a public cloud platform. Repatriating applications from the cloud often involves making tradeoffs between cost and agility. Moving back to an on-premises data center can result in reduced flexibility and slower time to market, which can be detrimental to organizations in industries that value agility.

Agility is often overlooked. People looking at repatriation options often focus on direct cost savings and don’t consider indirect benefits such as agility, scalability, and flexibility. However, these tend to provide much more value than tactical cost savings. For example, instead of simply comparing the cost of on-premises hard drive storage to storage at a cloud provider, consider the business values that are less obvious but often more impactful.

Ties to physical infrastructure and old-school skills

Obviously, on-premises data centers rely on physical infrastructure, which can be more susceptible to outages, maintenance issues, and other disruptions. This can result in lost productivity and decreased reliability compared to the high availability and scalability offered by public cloud platforms.

We tend to view the few reports of cloud outages as proof that applications and data sets need to go back on-premises. If you’re honest with yourself, you probably remember a lot more on-premises outages in the past than anything caused by public cloud downtime recently.

Also, keep in mind that finding talent for traditional platforms has been challenging in recent years, as top engineers have reshaped their careers toward cloud computing. You might find that having less-skilled personnel maintaining systems on-site causes more problems than you realize. The “good old days” suddenly become the time when your stuff was in the cloud.

Like all things, there are trade-offs, and this is no different. Just be sure to ask the questions “Should I?” and “Could I?” As you answer those fundamental questions, look at the business, technology, and cost trade-offs for each workload and data set you’re considering.

From there, make a fair decision, taking everything into consideration, with return on business value being the primary goal. I don’t fall in love with platforms or technologies for a good reason.

Copyright © 2023 IDG Communications, Inc.
