Are you looking to migrate your business to the cloud but having a hard time cutting through the noise and hype around it?
As with every new and fast-evolving technology, there is no single right way of doing things. Most of the time it comes down to the experience of your vendor or cloud expert partner to guide you through the process. As we navigate our own way through the rapid adoption of cloud computing, we can help you clear the air and unveil some of the truths behind the myths.
Myth #1 – Better hardware utilization
Myth #1 – that migrating to the cloud leads to better hardware utilization – is only half the truth. The full truth is that hardware utilization is efficient and cost is low only if elasticity is high. Cloud elasticity refers to the ability to match the resources employed for running a system to the actual requirements of that system. The catch is how elastic the cloud service really is.
Most existing or legacy software architectures are not elastic: they have at least one bottleneck that needs to be addressed. Database elasticity is also slow, because adding or removing nodes means moving large quantities of data around, which takes time. This means that a database will not be able to react quickly to a sudden spike in workload even if it is designed to be scalable.
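To make the utilization point concrete, here is a minimal sketch of why static provisioning wastes hardware while elastic sizing does not. The node capacity and load figures are illustrative assumptions, not vendor numbers:

```python
# Hypothetical sketch: why elasticity determines utilization and cost.
# NODE_CAPACITY and the load figures below are assumed for illustration.

NODE_CAPACITY = 1000  # requests/sec one node can serve (assumption)

def nodes_needed(load_rps: int) -> int:
    """Ideal elastic sizing: just enough nodes for the current load."""
    return max(1, -(-load_rps // NODE_CAPACITY))  # ceiling division

def utilization(load_rps: int, nodes: int) -> float:
    """Fraction of provisioned capacity actually in use."""
    return load_rps / (nodes * NODE_CAPACITY)

peak, offpeak = 8000, 1000

# A statically provisioned system must be sized for the peak and keeps
# all those nodes running even off-peak:
static_nodes = nodes_needed(peak)          # 8 nodes, always on
print(utilization(offpeak, static_nodes))  # 0.125 -> ~12% utilization

# An elastic system tracks the load instead:
print(utilization(offpeak, nodes_needed(offpeak)))  # 1.0 -> full utilization
```

The inelastic bottlenecks mentioned above (e.g. a database that rebalances slowly) are exactly what prevents `nodes_needed` from being followed in practice.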
This brings up the debate around public versus private clouds.
Public clouds are much more elastic than private clouds, and one can benchmark in advance to determine the actual requirements of a system.
For small companies and startups that don't have the resources to manage their own private cloud, public clouds are cost-effective and heavily optimize hardware utilization. For such companies, Myth #1 holds true.
For telcos and other large customers, using public clouds could be more expensive than managing their own infrastructure, as they already have data centers and skilled people to manage them. Such companies typically run their own private clouds. For private clouds, most vendors bundle hardware into the cloud solution, which means an initial investment to buy all those new racks. Again, it comes down to how much return they expect on that initial investment.
For many operators, especially in smaller countries like Romania, there is a hybrid approach: core services run in private clouds and everything else can be moved into a public cloud. However, this approach also comes with challenges, such as anonymizing the subscriber data that goes into public clouds, which requires modifying existing applications or even rewriting them completely.
Myth #2 – Everything will auto scale
Auto-scaling refers to the ability to handle the dynamic needs of an application within the boundaries of the available infrastructure. It requires a good understanding of the application's hardware requirements, on the basis of which the upper and lower scaling limits can be set.
Auto-scaling is tied to the elasticity of an application. If an application is truly elastic, cloud tooling will be able to auto-scale it successfully. But auto-scaling comes with challenges of its own:
- Making the application elastic in the first place.
- Benchmarking the application to establish the resource requirements for given throughput targets.
- Having cloud tools mature enough to implement auto-scaling without bugs or issues.
- Adapting to a specific suite of cloud tools (container engines, cloud resource managers). This is needed when the application must scale on custom KPIs and universal metrics such as CPU usage are not enough.
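The last point can be sketched in a few lines: a scaling decision driven by a custom KPI (active sessions rather than CPU), clamped to upper and lower limits derived from benchmarking. The KPI, thresholds, and bounds are all assumptions for illustration, not any specific cloud tool's API:

```python
# Hypothetical auto-scaling sketch: a decision rule using a custom KPI,
# clamped to hard limits. All figures below are illustrative assumptions.

MIN_REPLICAS, MAX_REPLICAS = 2, 10   # bounds set from benchmarking (assumed)
TARGET_SESSIONS_PER_REPLICA = 500    # custom KPI target (assumed)

def desired_replicas(active_sessions: int) -> int:
    """Scale on a custom KPI (active sessions), clamped to the limits.

    Universal metrics like CPU usage are often not enough; a telecom
    service may need to scale on sessions, registrations, etc.
    """
    ideal = -(-active_sessions // TARGET_SESSIONS_PER_REPLICA)  # ceiling
    return max(MIN_REPLICAS, min(MAX_REPLICAS, ideal))

print(desired_replicas(100))   # light load -> clamped to MIN_REPLICAS (2)
print(desired_replicas(3200))  # spike -> 7 replicas
print(desired_replicas(9000))  # beyond capacity -> clamped to MAX (10)
```

The clamping step is where the "boundaries of the available infrastructure" show up: no matter what the KPI says, the system never scales past what was provisioned.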
Myth #3 – Service can auto heal itself
Auto-healing is a system's ability to fix itself when it notices it is not running as it should. But auto-healing can only be achieved when the system knows the likely issues it may face, so that when such an issue occurs it can apply the appropriate fix. Not all issues can be known in advance.
Current cloud tools and container orchestration tools can probe running virtual machines (VMs) or containers and perform simple "health checks":
- Is the VM or container running?
- Is the VM or container listening for network connections?
- Is the VM or container overloaded in terms of CPU/Memory?
Based on these simple checks, the tools can start or restart a VM or container. This again goes back to the elasticity and resilience of the application to events such as reconnecting dropped network connections, quickly redistributing load to a restarted node, etc. For elastic and resilient applications, healing will work as expected. However, it is a challenge to move a legacy application to an elastic and resilient architecture, and to create a continuous integration/continuous deployment (CI/CD) pipeline, so that these applications can benefit from the existing tools.
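The three checks above can be sketched roughly as an orchestrator might run them. This is a simplified illustration, not any orchestrator's real probe API; hosts, ports, and thresholds are assumptions:

```python
# Hypothetical sketch of the three health checks above. In real
# orchestrators these are built-in probes; thresholds here are assumed.
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Check 2: is the instance accepting TCP connections?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def is_overloaded(cpu_pct: float, mem_pct: float,
                  cpu_limit: float = 90.0, mem_limit: float = 90.0) -> bool:
    """Check 3: is the instance overloaded in terms of CPU/memory?
    (Real tools read these from cgroups or a metrics API.)"""
    return cpu_pct > cpu_limit or mem_pct > mem_limit

def should_restart(running: bool, listening: bool) -> bool:
    """Check 1 plus the restart decision: restart only if the instance
    is dead or unreachable. Overload is a scaling problem, not a
    restart problem, so it feeds auto-scaling instead."""
    return not (running and listening)

print(should_restart(True, True))    # False: healthy, leave it alone
print(should_restart(False, False))  # True: dead, restart it
```

Note what this sketch cannot do: it restarts a dead instance, but it cannot diagnose *why* the instance died, which is exactly the limit of auto-healing described above.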
Myth #4 – Everything will be 100% automated
100% automation is difficult to achieve, especially in day-to-day support activities where you are always dealing with the unexpected. There will still be a need to manually heal, restore, or scale a service. Ideally these manual procedures would be used only in exceptional situations, but in practice they will be needed more often, especially for poorly designed cloud applications or legacy applications that were forced into the cloud without a proper redesign.
Standards like ETSI NFV attempt to find common solutions to the challenges of clouds and to define standard ways of moving services into a cloud environment. However, the ecosystem is moving very fast, and standards cannot keep pace with all the new technologies that come up.
Even though cloud computing has been around for a while now, you still need to develop a deep understanding of your own business and then match it against the features the cloud provides, so that you can make the best use of it, save costs, and improve efficiency.
Author: Alexandru Cristu – Solution Architect