The hybrid cloud is gaining momentum: that much is clear from, among other things, a recent IDG survey of 372 IT decision-makers from six core European regions. According to the survey, one in four companies currently runs a hybrid cloud architecture – and that share is expected to double by 2021. Hardly surprising, since hybrid cloud solutions – used correctly – combine the advantages of private and public cloud architectures. But what do concrete usage scenarios for a hybrid cloud infrastructure look like?
For companies, a hybrid cloud solution can serve as a gateway to the cloud world. The private cloud environment acts as a protected space for first steps and safe testing; once the desired solution runs stably on cloud servers, it can then be ported promptly to the public cloud if required. What matters here, however, is a hybrid cloud infrastructure with a uniform software and hardware basis, to ensure maximum compatibility between the public and private clouds.
Legacy applications, for example, can be a suitable starting point: T-Systems' transformation projects show that around two thirds of business applications can, in principle, be moved to the cloud. This requires a thorough analysis and inventory of the existing IT landscape. T-Systems, for example, offers the so-called Cloudifier, with standardized transformation services for entering and transitioning to the cloud.
Whether it’s the busy Christmas period in e-commerce, data-intensive simulations in product development or Big Data analyses in research: many companies need high-performance computing capacity only occasionally rather than permanently. This is a case for the hybrid cloud, in which private and public clouds are seamlessly combined in what is known as cloud bursting. It enables companies to flexibly absorb peak loads at any time by switching storage and computing resources on and off as required.
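The bursting logic can be sketched in a few lines. The capacity figure, the job list and the `place_jobs` helper below are illustrative assumptions, not part of any real scheduler API:

```python
# Sketch of a cloud-bursting placement decision: jobs run in the private
# cloud until its capacity is exhausted, and the overflow is dispatched
# to on-demand public-cloud resources.

PRIVATE_CAPACITY = 100  # vCPUs available in the private cloud (assumed)

def place_jobs(jobs):
    """Assign each job (name, vcpus) to 'private' or 'public'."""
    used = 0
    placement = {}
    for name, vcpus in jobs:
        if used + vcpus <= PRIVATE_CAPACITY:
            placement[name] = "private"
            used += vcpus
        else:
            # Burst: spin up public-cloud capacity only for the peak,
            # and release it again once the load subsides.
            placement[name] = "public"
    return placement

# Example: a Christmas-season spike that exceeds private capacity
jobs = [("webshop", 60), ("analytics", 30), ("simulation", 40)]
print(place_jobs(jobs))
```

In practice the same decision is made continuously by an autoscaler rather than per job list, but the principle – private first, public for the overflow – is the same.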
However, some companies need to store and process certain data in their own data center or in a private cloud – for example, to protect themselves against industrial espionage, to comply with regulations or to benefit from the lowest possible latencies. But even for these companies, the public cloud can serve as an overflow reservoir: “It depends on how such scenarios are implemented in detail. If business-critical data always remains in the private environment and only the compute service comes from the public cloud, cloud bursting can also be feasible for sensitive company data,” says Sascha Smets, Senior Product Manager at T-Systems. “This can be achieved either by anonymizing the data that is to be processed in the public cloud, or by providing applications in the public cloud with only the exact information they need for a specific computing process. This makes the interaction secure and also saves bandwidth.”
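The pattern Smets describes – identifying data stays in the private environment, the public cloud receives only what a specific computation needs – might look roughly like this. The field names, the salted-hash pseudonymization and the `prepare_for_public_cloud` helper are illustrative assumptions:

```python
import hashlib

def prepare_for_public_cloud(record, needed_fields, secret_salt):
    """Strip everything the public-cloud computation does not need and
    replace the identifier with a salted hash; re-identification is
    only possible on-premises, where the salt is kept."""
    token = hashlib.sha256(
        (secret_salt + record["customer_id"]).encode()
    ).hexdigest()
    payload = {k: record[k] for k in needed_fields}
    payload["token"] = token
    return payload

record = {"customer_id": "C-4711", "name": "Jane Doe",
          "basket_value": 129.90, "items": 3}

# The public-cloud service only ever sees the numeric features plus a token:
print(prepare_for_public_cloud(record, ["basket_value", "items"], "s3cret"))
```

A side effect of sending only the needed fields, as the quote notes, is that the payload shipped to the public cloud also gets smaller.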
Hybrid cloud scenarios can also serve as backup or disaster recovery solutions. There are several ways to implement this, depending on a company's requirements. For one, the public cloud is an inexpensive long-term storage option; for maximum security, data can be stored there in encrypted form in object-based storage.
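Client-side encryption before the upload could look like the following sketch, in which the key never leaves the company. The keystream construction (SHA-256 in counter mode) is a toy chosen to keep the example dependency-free; a real deployment would use a vetted scheme such as AES-GCM from an audited crypto library:

```python
# Toy illustration of encrypt-before-upload: only the resulting blob is
# ever written to public-cloud object storage; the key stays on-premises.
import hashlib
import secrets

def keystream(key, nonce, length):
    """Derive a pseudo-random keystream from key + nonce (toy, not vetted)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct  # this opaque blob is what gets uploaded

def decrypt(key, blob):
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)            # remains in the private environment
blob = encrypt(key, b"quarterly backup")
assert decrypt(key, blob) == b"quarterly backup"
```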
Companies that do not want to store certain data in the public cloud but still want to store it redundantly can instead set up two separate private availability zones (AZs) in which they mirror their systems. The distance between the AZs matters: they should be far enough apart that a disaster such as a flood or fire cannot take out both at once, but close enough to benefit from the lowest possible latencies. “A benchmark that has become established among companies is a distance of around 20 to 30 kilometers between the data centers,” says Sascha Smets.
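A rough back-of-the-envelope calculation shows why a few dozen kilometers still leaves latency low: light travels through optical fiber at roughly 200,000 km/s (about two thirds of c), so distance translates directly into propagation delay. The helper below is an illustrative estimate only, ignoring switching and queueing overhead:

```python
# Best-case round-trip time between two sites connected by fiber.
SPEED_IN_FIBRE_KM_PER_MS = 200.0  # ~200,000 km/s expressed per millisecond

def round_trip_ms(distance_km):
    """Propagation delay for one round trip over the given distance."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

for km in (30, 100, 500):
    print(f"{km:>4} km -> {round_trip_ms(km):.2f} ms RTT")
# 30 km adds only ~0.3 ms per round trip, which is why the 20-30 km
# benchmark keeps synchronous mirroring between the AZs practical.
```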
Modern, agile application development increasingly integrates development, testing and operation. Where the respective teams used to work independently of one another, today they are linked through cooperation in a DevOps model. Hybrid cloud platforms enable teams to develop software faster and shorten release cycles, and applications can be ported as needed between teams and their respective private or public environments.
What is important here is a hybrid cloud infrastructure that ensures a seamless transition between the public and private environments. The Open Telekom Cloud Hybrid, for example, offers DevOps teams this kind of unified environment, which is based on the same hardware and software for public and private clouds. Each developed application can run in both the public and private environments – in line with the motto: develop once, run anywhere. And this is true even if the company in question doesn’t work with container technology.
Edge Computing and the hybrid cloud are closely related, because with Edge Computing, too, companies use decentralized computing and storage capacities as required. However, these are not located in a remote data center but close to the action, at the edge of the network – hence the term "Edge Computing." Why is this necessary? One example is the real-time processing of data, where latencies must be kept as low as possible: transmitting and processing sensor data for autonomous driving, for instance, would practically require a data center at every intersection. The same goes for controlling industrial robots with AI algorithms, where low latencies also play a central role.
"There is a growing demand for IT resources that can handle processes with virtually no latency," says Sascha Smets. "We will soon meet this demand with our new Edge Cloud offering: mini data centers for real-time applications based on Open Telekom Cloud technology, which we can install and operate directly on our customers' premises if required."
So-called shadow IT is flourishing – to the chagrin of many companies. According to Forrester Research, almost half of corporate employees now use technologies without the knowledge of their IT departments. Hybrid cloud solutions give IT departments the opportunity to regain control by building a unified service catalog and establishing themselves as the company's cloud brokers.
If the private and public parts are based on the same technology, the IT department will find it easier to build this type of unified service catalog. For applications with sensitive data that have to stay in the company, IT can then offer virtual machines with appropriate specifications. For less critical workloads that can operate in a public cloud, IT offers a VM with exactly the same specification – but at significantly lower cost. This means that users in specialist departments can select the services they need from private or public instances at the click of a mouse, without neglecting requirements such as scalability, security and governance.
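Such a unified catalog can be pictured as the same VM specification offered from both sides, with placement chosen by the workload's data classification. The flavor name, the prices and the `order_vm` broker function below are invented for illustration:

```python
# Sketch of a unified service catalog: identical VM specifications are
# offered from the private and the public side at different prices, and
# the broker derives the placement from the data classification.

CATALOG = {
    ("m1.large", "private"): {"vcpus": 2, "ram_gb": 8, "eur_per_h": 0.40},
    ("m1.large", "public"):  {"vcpus": 2, "ram_gb": 8, "eur_per_h": 0.12},
}

def order_vm(flavor, data_classification):
    """Sensitive workloads stay in the private cloud; everything else
    goes to the cheaper public instance with identical specifications."""
    placement = "private" if data_classification == "sensitive" else "public"
    return placement, CATALOG[(flavor, placement)]

print(order_vm("m1.large", "sensitive"))   # stays on-premises
print(order_vm("m1.large", "internal"))    # same spec, lower cost
```

The point of the uniform technology base is that the two catalog entries really are interchangeable: the user picks a service level, not a cloud.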
Whether it be an oil rig, space station or remote research facility: Not every location can be easily supplied with a fast Internet connection. With the Open Telekom Cloud Hybrid, companies can also use IT resources in remote locations as a private cloud – regardless of the network connection. Telekom can implement the necessary servers directly where they are needed. A connection to the Internet or the public cloud is not absolutely necessary for operating them.
Servers give off a lot of heat: to ensure optimum operation, the hardware in data centers is usually cooled. This roughly doubles the energy requirements – once for server operation and again for cooling. But the thermal energy from servers can also be put to sensible use, for example to supply buildings with hot water and heating. In this way, companies not only save the electricity used to cool server cabinets but also heating costs, and they use their resources sustainably. If the number of hybrid cloud architectures in companies doubles by 2021, more and more data centers suitable for decentralized heat supply will emerge in the future.