A Cost-effective Way to Handle Tomorrow's Performance-hungry Applications

What is cloud computing and where is it heading in 2019?

Written by Mantas Levinas
on June 03, 2019

As an IT engineer, you probably know by heart what cloud computing is. It’s often
defined as distributed computing resources provisioned on demand. At first it seems
like just another abstraction layer built for our convenience, but, oh boy, there’s much
more to it than that.


It’s a piece of cake

There’s no doubt cloud computing is a force to be reckoned with. Take
Infrastructure as a Service (IaaS), for instance, which Gartner predicts will grow from
$30.5B to $38.9B in 2019, maintaining a spectacular annual growth rate of 27.5%.
It’s no surprise entire industries are moving to the cloud: you no
longer need huge capital expenses, nor do you need to manage private data centers. Getting
cloud compute resources is now even easier than making a hotel reservation: not only
can you get servers automatically within minutes, world-wide, but you can also drop them as
soon as they are no longer needed and be billed per hour. It’s never been easier to
run scientific calculations, train machine learning models, do 3D rendering or
transcode video, all thanks to cloud computing and its surrounding ecosystem.

If it sounds too good to be true, it probably is

There are plenty of cloud computing providers nowadays, and yet around 79% of the
public cloud market is concentrated in the hands of a few big boys, namely AWS,
Azure, GCP and Alibaba Cloud. These hyperscale providers have a low barrier to entry,
making it very easy to start using them, but your expenses can grow steeply in
the long run. Sadly, it seems that engineers today have grown used to trading vendor lock-in for
ease of use, and we are quickly moving towards a world where half of the internet may
run on a single cloud provider. This poses a risk to the open standards of the Internet and
our ability to use it freely.

What about the remaining 21%?

Many cloud vendors like Digital Ocean or Vultr are trying to replicate the success of the
hyperscalers, an effort that is probably destined to fail due to their smaller budgets. The
velocity of the hyperscale providers has become too high to match.

Another group of cloud providers has adapted its portfolio to serve and
complement the hyperscalers. Companies like Rackspace and INAP now offer
consultation and management services for businesses running their infrastructure on
AWS. It’s a great compromise for those who are already committed to a hyperscale
provider but are starting to find it somewhat lacking.

Finally, there’s a small but no less potent group of cloud providers that compete
asymmetrically. Providers like Packet or Cherry Servers mostly automate raw bare
metal and are strong promoters of an open cloud with no vendor lock-in. This is a
perfect fit for businesses that can handle their own cloud-native stack and need to
run high-end workloads as efficiently as possible.

Bare metal cloud computing

Traditionally, cloud compute services were built with a hypervisor between a
server instance and its hardware. Hyperscale cloud providers typically work like this,
offering overbooked hardware resources. Not only are there many tenants on the same
hardware, but server resources are also over-sold, resulting in fluctuating workloads
and increased security risks.
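To make “over-sold” concrete: virtualized hosts are commonly oversubscribed on CPU, selling more virtual cores than physically exist on the machine. A toy calculation with made-up numbers (the figures below are illustrative, not any provider’s actual ratios):

```python
# Illustrative oversubscription arithmetic -- all numbers are hypothetical.
physical_cores = 32        # CPU cores actually present on one host
vcpus_per_vm = 4           # virtual cores sold per VM
vms_on_host = 24           # tenants packed onto the same host

vcpus_sold = vcpus_per_vm * vms_on_host          # 96 virtual cores sold
oversubscription = vcpus_sold / physical_cores   # how many times the CPU is over-sold
print(f"Oversubscription ratio: {oversubscription:.1f}x")  # prints "Oversubscription ratio: 3.0x"
```

At a 3x ratio, your workload’s performance depends on what your neighbors happen to be doing, which is exactly the fluctuation described above.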

Bare metal cloud is a different story. There is no virtualization layer, nor are there
other users on the same server. You are the single owner, with a whole physical machine
dedicated to your project alone. Scaling up & down is a breeze, since servers are
deployed automatically in minutes, either from your client portal or via API.
Bare metal cloud is a dream come true for DevOps engineers: it integrates
seamlessly with the open source cloud-native stack and gives the best price-to-compute
ratio on the market. If you know what you’re doing and you need that extra horsepower,
it’s probably the way to go.
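Deploying via API typically boils down to a single authenticated HTTP request. The sketch below shows roughly what that looks like; the endpoint URL, field names and plan identifiers are all hypothetical placeholders, not any specific provider’s API:

```python
import json
import urllib.request

API_URL = "https://api.example-cloud.com/v1/servers"  # hypothetical endpoint
API_TOKEN = "your-api-token"                          # placeholder credential

def build_deploy_request(hostname, region, plan, image):
    """Assemble the JSON payload for a hypothetical server-deployment call."""
    return {
        "hostname": hostname,
        "region": region,        # data-center location slug
        "plan": plan,            # bare metal configuration id
        "image": image,          # OS image to install
        "billing": "hourly",     # pay only while the server exists
    }

def deploy_server(payload):
    """POST the deployment request (hypothetical API; not called in this sketch)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return urllib.request.urlopen(req)

payload = build_deploy_request("worker-01", "eu-west", "bm-large", "ubuntu_18_04")
print(json.dumps(payload, indent=2))
```

Dropping the server when the job is done is usually the mirror image: a DELETE request against the same resource, which is what makes hourly billing practical for bursty workloads.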