The Beginning of The End of The Server … Towards a New Serverless Cloud

We are stuck in a world of Von Neumann architectures. John von Neumann gave us the model where programs share the CPU and memory, and this was efficient when we ran small programs and needed to multitask on a single computer. But the Cloud is now almost one infinitely large server, so why should our code share the same CPU and memory as everyone else’s? And in our current Cloud, we pay as we go, billed per hour for large servers that, most of the time, just tick away at a small fraction of their potential. The future is one where we pay for each line of code and for the time that it runs, and where we can secure every line of code in its own computational space. This is the new digital world!

Preface

As a preface to this article, I’m rebuilding my academic infrastructure, again. I’ve moved from Windows .NET to ASP.NET, and now I’m moving everything into the Cloud. It is such a refreshing place to be, as I’m not constrained in anything I want to do. Well, there are cost and performance constraints that hold me back. If I have 100 students logging in on a Monday morning at 9am, I’ve got to provision for them with 4 vCPUs and 32GB of memory, otherwise the system will crash. But when the students finish, I still have to pay for the over-provisioned server.

So, over the summer I’m building a completely new teaching infrastructure for Cybersecurity. Previously I built it in ESXi, but this time it will be created in AWS. I want an environment which matches a real-life corporate infrastructure, with SSH, Web, firewalls, IDSs, Windows servers, Linux servers, and so on. It should be a safe and isolated place for students, but allow them, at least, to view the outside world. And so I want to build it dynamically from GitHub, and provision each student with their own environment. Within our ESXi infrastructure, we must create bloated operating systems and integrate these into a VLAN infrastructure. So I’m now building a new training infrastructure within AWS, one which will automatically scale up whenever required.

Introduction

We have an old viewpoint of our digital world. We treat servers as big lumps of compute, and then lump all of our infrastructure onto them. This is a centralised approach, and it only really exists because that is the way we have always built systems. We then create a security bubble around the servers and aim to secure the infrastructure as a whole. On a single server, we run everything and its dog, all bundled together in a tangled mess. One minute we are running SSH on our server, and the next we are checking someone’s mail, all within the same memory space. This Von Neumann style of architecture needs to be deprecated, asap.

To overcome our provisioning problem, we load balance, which basically means cloning the big lumps of compute and keeping them in their bubbles. So we come to some fundamental questions in the age of the Cloud:

  • Why do we have servers? We run big bloated operating systems that basically just serve a Web site or a database. Increasingly we run a pay-as-you-go model for servers, and there’s a whole lot of memory and CPU being provisioned that is never used.
  • Why do I pay for resources that are provisioned for the maximum load? Many companies provision their infrastructure so that it can cope with a peak load, which can be expensive in wasted compute time.
  • Why can’t I just run my infrastructure as a set of services, and secure each of these on its own?
  • Why should we pay for a lumped service? Surely if we just need more email provision from 9–10am on a Monday morning, we can spin up the service we need, and not have to create costly servers to cope with the load?
  • Why do I pay for 32GB of memory, when the memory is mostly empty?
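The over-provisioning argument above can be made concrete with a little arithmetic. The following sketch uses purely hypothetical prices (they are assumptions for illustration, not real AWS rates) to compare an always-on server against paying only for the hours a class actually runs:

```python
# Illustrative cost comparison: an always-on server vs paying only for
# the compute actually used. All prices here are hypothetical
# assumptions, not real AWS rates.

HOURS_PER_MONTH = 30 * 24  # 720 hours in a 30-day month

def server_cost(hourly_rate: float) -> float:
    """Cost of a server billed for every hour of the month, used or not."""
    return hourly_rate * HOURS_PER_MONTH

def per_use_cost(hourly_rate: float, used_hours: float) -> float:
    """Cost when billed only for the hours of real usage."""
    return hourly_rate * used_hours

# A 4 vCPU / 32GB server at a hypothetical $0.50/hour, on all month:
always_on = server_cost(0.50)           # 720 h x $0.50 = $360.00

# The same capacity used only for ten 2-hour labs in that month:
on_demand = per_use_cost(0.50, 10 * 2)  # 20 h x $0.50 = $10.00

print(f"Always-on: ${always_on:.2f}/month")
print(f"Per-use:   ${on_demand:.2f}/month")
print(f"Idle waste: {100 * (1 - on_demand / always_on):.0f}%")
```

Even with generous assumptions, the always-on server spends the overwhelming majority of its billed hours doing nothing, which is exactly the waste the questions above are pointing at.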

Say Goodbye to EC2 and Hello to ECS, Docker and Fargate

And so this week a new model of the Cloud is taking shape, as Docker and AWS have announced a major step forward: a collaboration on integrating Docker with AWS’s Elastic Container Service (ECS) and with AWS Fargate.

With Amazon EC2, you basically provision your compute engine, and then you pay-as-you-go for your servers — it is an old model, built around bloated servers. With AWS Fargate we have a serverless infrastructure, where ECS and Kubernetes are used to automatically scale applications. It allows infrastructures to be created where each resource is paid for on its own, and where each service can be secured in its own isolated space. So rather than provisioning a server with 4 vCPUs and 32GB of memory, Fargate allocates just the amount of compute resource that each task requires, and you do not over-provision by adding new servers.
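As a minimal sketch of what this looks like in practice, an ECS task definition for Fargate declares just the slice of CPU and memory the task needs, and that slice is all that is billed. The family name, container name, and image below are illustrative placeholders, not part of any real deployment:

```json
{
  "family": "student-lab-ssh",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "ssh-server",
      "image": "my-registry/lab-ssh:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 22, "protocol": "tcp" }]
    }
  ]
}
```

Here the task asks for a quarter of a vCPU and 512MB of memory — a far cry from reserving a whole 4 vCPU / 32GB server for the same job.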

Each of the services, too, runs in its own compute kernel, rather than on a shared server resource, and can thus be isolated and secured on its own.

With this environment I can put my code in GitHub, build my Docker components, and then, when a student starts a lab, just provision the services required. I’ll only pay for the little bit of compute that’s needed to run a firewall, a Linux server with SSH, and a minimal client. So rather than having our big powerful ESXi cluster, I’ll just need the small amount of compute required for a student to perform an Nmap scan, change a firewall rule, and re-run the scan.
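With the new Docker and ECS integration, that per-student workflow can be sketched from the Docker CLI itself. This is a CLI sketch, not a tested deployment: the context name is a placeholder, it assumes AWS credentials are already configured, and it assumes a docker-compose.yml describing the lab’s services sits in the current directory:

```shell
# Create a Docker context that targets AWS ECS (uses your AWS credentials).
docker context create ecs myecs

# Switch the Docker CLI over to the ECS context.
docker context use myecs

# Deploy the lab's compose file: each service (a firewall, an SSH box,
# a minimal client) becomes a Fargate task, billed only while it runs.
docker compose up

# Tear the lab down when the student is finished, and the billing stops.
docker compose down
```

The same compose file that runs a lab locally can thus stand up an isolated, short-lived copy of the environment for each student.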

Conclusions

This is a major step forward for Docker, and it will take away much of the complexity involved, allowing developers to build containerised services into their architectures. This should bring improved performance and security, but it will take a different mindset to build these systems. Our teaching is still based on running things on a large compute engine, where the future is likely to be little isolated services and bits of code. In this world, we pay per tick of the clock, not as in our old world, where we run servers for days on end. The beginning of the end of the server is here …