When a database must stay tucked away in an enterprise data center, running a dependent service in the public cloud spells "latency." Or does it?
Most hybrid architectures propose moving only certain application building blocks to the cloud. A prime example is a Web front-end on Amazon Web Services and a back-end transaction server in an on-premises data center. But a new wave of hybrid thought says it's OK to separate data from applications, even to allow multiple applications to run from multiple cloud provider sites against a common data source that can be located in a company's data center.
Not so long ago (like, yesterday), that suggestion might, at a minimum, earn you an eye roll and a dismissive hand wave. In more extreme cases, we're talking a recommendation for an IT straitjacket. Separating data from applications and putting half in the cloud would be nothing short of insane. The latency and security problems will kill you, right?
Not necessarily. Separating data services from application services is nothing new; we do it all the time in data centers today. However, we typically provide high-speed links between the two to avoid latency, and, since it is usually all locked in a secure data center, security is less of an issue. Some organizations actually split services between two data centers, but in the process have built the dedicated network bandwidth necessary to support the traffic and ensure adequate performance. In that situation, security is managed by keeping it all behind the firewall, limiting access, and encrypting data in motion.
I equate the first situation to having a private conversation with my wife while we're in the same room. The second is having the same conversation, but now we're yelling between rooms within our home (more likely, this is the model for a conversation with my teenage daughter). In this situation, the communication may not be ideal, and the risk that someone else may overhear is higher, but it's still a somewhat controlled environment.
When it comes to moving application services to the cloud and leaving the data in the data center, some would say the appropriate analogy is using a bullhorn to have a conversation with your teenager while you're home and she's at the mall.
New technologies and cost models, however, offer an alternative to connecting services over the public Internet. One contributing factor is the ability to get cost-effective, high-speed network connections from cloud provider sites to data centers, of sufficient quality to minimize latency and guarantee appropriate performance. In addition to services offered directly by hosting vendors, colocation companies are jumping into the fray to offer high-speed connections between their colo data centers, as well as from these locations directly to some cloud provider sites.
For enterprise IT, there are three things to consider when deciding whether this model is for you and weighing options for high-speed network connections.
1. Security: While data may have traveled only behind the firewall in the past, now it is stepping out over a network, potentially among multiple cloud locations. Keeping this information safe in transit and dealing with access-control issues will be critical, as discussed in depth in this recent InformationWeek Tech Digest. Heck, in a post-Snowden world, the business may even care. Settling compliance issues, such as encrypting data before it goes over the wire and making sure data doesn't travel outside the appropriate geographic borders, is also critical before you provision any new connections (a minimal encrypted-connection sketch follows this list).
2. Network bandwidth and performance: Depending on the application's sensitivity to latency, the cost to guarantee a given performance level over the network may eliminate any savings gained in moving the application to the cloud. My advice: Know exactly what you're getting before committing. SLAs need to be clear, as do penalties, and make sure you have monitoring tools to validate performance. It's particularly important to look not just at the network specs but at real-world, end-to-end performance. Benchmark over a period of time before moving the application to establish a baseline. Run simulated transactions and workloads against that baseline; that will give you a sense of equivalent performance in the new environment (see the benchmark sketch after this list). Recognize as well that if you are going into a public cloud, your mileage may vary on any given day. This is discussed in depth in the 2014 Next-Gen WAN report.
3. Business continuity/disaster recovery: The number of things that could go wrong just increased exponentially, and managing recovery plans becomes more complex. Map disaster scenarios before getting locked in, and budget for redundancy. Cable cuts and floods happen. Be prepared at the outset, rather than scrambling along with everyone else affected when a network connection goes down. Recognize as well that you now have two different environments that may require two different DR plans with different recovery times (a simple link-probe sketch follows this list). Of the three, this will likely be the most complex to work out.
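To make point No. 1 concrete, here is a minimal sketch of a cloud-hosted application insisting on a verified, encrypted connection back to an on-premises PostgreSQL database. It assumes the psycopg2 driver; the hostname, credentials, and CA path are placeholders for your own environment, not anything prescribed above.

```python
# Minimal sketch: refuse any unencrypted or unverified path to the
# on-prem database. All names below are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="db.example-corp.internal",    # on-prem database behind the firewall
    dbname="orders",
    user="app_user",
    password="...",                     # pull from a secrets manager in practice
    sslmode="verify-full",              # require TLS and verify the server cert
    sslrootcert="/etc/pki/corp-ca.pem", # your company's CA bundle
)
```

The "verify-full" mode matters: it rejects not just plaintext connections but also a valid certificate presented by the wrong host, which is exactly the kind of in-transit tampering that worries auditors once data leaves the building.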
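For point No. 2, the baseline-then-compare approach can start as simply as timing a representative round trip many times and keeping the percentiles, first from inside the data center and again from the cloud side. This sketch uses only the Python standard library and times a bare TCP connect; the target host and port are placeholders, and for application-level numbers you would substitute a real transaction.

```python
# Sketch: baseline end-to-end latency with repeated probes and percentiles.
import socket
import statistics
import time

def time_round_trips(host: str, port: int, samples: int = 200) -> list[float]:
    """Round-trip times in milliseconds for a TCP connect/teardown."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        # Connect-only probe; swap in a representative query to measure
        # what the application will actually feel.
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return timings

timings = time_round_trips("db.example-corp.internal", 5432)  # placeholder target
q = statistics.quantiles(timings, n=100)  # 99 cut points
print(f"p50={q[49]:.1f} ms  p95={q[94]:.1f} ms  p99={q[98]:.1f} ms")
```

Run the same probe from both locations over several days; the p95 and p99 numbers, not the average, are what tell you whether a link will hold up under an SLA.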
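And for point No. 3, being prepared at the outset can begin with something as humble as continuously probing the dedicated link and documenting what happens when it degrades. This is a sketch under assumed names and thresholds, not a DR plan; the "failover" here is a log line standing in for paging someone and flipping connection strings.

```python
# Sketch: watch the dedicated link and flag when the DR path should take over.
import socket
import time

PRIMARY = ("db.example-corp.internal", 5432)  # on-prem DB over the dedicated link
BACKUP = ("db-dr.example-corp.net", 5432)     # hypothetical DR replica / VPN path
LATENCY_BUDGET_MS = 50                        # from your SLA, not from this sketch

def probe_ms(endpoint, timeout=2.0):
    """Round-trip ms for a TCP connect, or None if the link is down."""
    start = time.perf_counter()
    try:
        with socket.create_connection(endpoint, timeout=timeout):
            return (time.perf_counter() - start) * 1000
    except OSError:
        return None

while True:
    rtt = probe_ms(PRIMARY)
    if rtt is None or rtt > LATENCY_BUDGET_MS:
        # A real plan pages a human and updates connection strings.
        print(f"primary degraded (rtt={rtt}); candidate failover: {BACKUP}")
    time.sleep(30)
```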
While I've focused on the challenges, there are clearly benefits to be had if you need the scalability of a cloud environment for your applications. But just because we can doesn't mean we should. Much like raising a teenager, each scenario is different, and you need to be up for the challenge before you head down that road.