Reducing Costs and Saving The Planet: Meet Serverless Technology

Photo by Ian Battaglia on Unsplash

We are setting up an AWS Academy at Edinburgh Napier University, and the potential demand from our students has been amazing. Why? Because the future of computing lies in the public cloud. No matter which industry sector you go into, there’s a good chance you will be involved with some form of integration with the public cloud. Every person now has the power of a large company at their fingertips.

Personally, I switched to AWS a while ago, and it was the best thing I ever did for my web space and coding. Overall, I can control everything I need, and the cost is far less than it was when I used a bare-metal server. As someone who had to back up servers with tape drives or subscribe to costly backup services, the joy of clicking a button for a snapshot is something that I will never lose.

And so, in a recent keynote talk, Martin Beeby — a principal developer advocate at AWS — outlined a focus on sustainability, and on why we need to move away from server-based systems toward serverless technology. This, he outlined, has benefits both in saving money and in reducing our energy footprint. While some might debate whether serverless technology is really fully serverless, it must still be a key focus, because every extra clock tick that does not serve a purpose is wasted energy.

Consuming energy?

Every server that your company leaves running consumes energy, as it must be provisioned onto a physical processor and will consume memory. If these servers were switched off, they could exist in storage, which often requires much less energy. Overall, though, most of the energy consumed by a running server is completely wasted, as the server is often running lots of unnecessary services.

A key skill in the public cloud is really understanding how much compute you actually need, and then finding the best pricing for it. It’s often a trade-off, and you need to understand your peak loads, your normal loads, and your off-peak times. The “distance” between the service and the consumer matters, too: the further away it is, the more likely it is to be slower, and the more energy is consumed in moving data from one place to the next. There is thus a need to move computation closer to the consumer.

Overall, a server might be provisioned with 32GB of memory and four cores running at 3 GHz, but often much of that memory and CPU capacity is completely unused. And while a public cloud infrastructure supports balancing the provision across various instances, each server must still be provisioned for its peak requirements. This is often wasteful of energy.
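As a back-of-the-envelope sketch (the utilisation figures here are purely illustrative, not measurements from any real server), we can see just how much of a provisioned machine can sit idle:

```python
def wasted_fraction(provisioned: float, used: float) -> float:
    """Fraction of a provisioned resource that sits idle."""
    if provisioned <= 0:
        raise ValueError("provisioned must be positive")
    return max(0.0, (provisioned - used) / provisioned)

# A server provisioned with 32 GB of memory, typically using only 6 GB:
memory_waste = wasted_fraction(provisioned=32, used=6)

# Four 3 GHz cores with an assumed average CPU utilisation of 15%:
cpu_waste = wasted_fraction(provisioned=4 * 3.0, used=4 * 3.0 * 0.15)

print(f"Memory idle: {memory_waste:.0%}")  # roughly 81%
print(f"CPU idle:    {cpu_waste:.0%}")     # 85%
```

Even with generous assumptions, the majority of what we pay for — and power — is doing nothing.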

And thus, IT has one of the largest energy footprints of any industry sector.

But, you must ask, do we actually need servers anymore? We might just need a simple call to a web server to render a web page, or to process data with an API call. So, why do we need large and complex servers for that?
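The serverless version of that API call is just a function. A minimal AWS Lambda-style handler (this is a sketch with a made-up event, not production code) only exists while a request is being served, so no idle server has to be provisioned for it:

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler: runs only when invoked."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally we can invoke it directly; in AWS, a trigger such as API Gateway
# would construct the event and pass it in.
response = handler({"name": "Napier"}, None)
print(response["body"])  # {"message": "Hello, Napier!"}
```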

That old client-server model

Well, serverless technology aims to remove the requirement for the old-fashioned concept of servers and move toward running code on demand. For AWS, it’s a fine line to tread, as their core business is built around creating servers and then charging for them on an hourly basis. If they just charged for the number of compute ticks that a piece of code used, it could significantly disrupt their whole business model. We have often been trained to be wasteful in our coding, with no real care about it being efficient. With Ethereum, though, a developer is charged gas for the data and processing that is consumed, which leads to more efficient code. Another downside of our client-server approach is that we centralise our architectures around servers. In a distributed approach, we can run code wherever it is best run, without being dependent on a single server.

Security

And for security? Well, large and complex servers can often be difficult to secure: the more code you have, the larger the attack surface, and the more chance there is of bugs. By breaking systems down into small chunks, it is perhaps easier to secure each of these in isolation and then rebuild the whole infrastructure in a secure manner. Certainly, the zero-trust approach to building secure infrastructures requires attention to the secure design of each element of the overall architecture, as a system might only be as secure as its weakest element.

Serverless

We are stuck in a world of Von Neumann architectures, which created a model where we shared the CPU and memory. This was efficient when we ran small programs and needed to multitask on computers. But now the cloud is almost one infinitely large server, so why should we share our code within the same CPU and memory?

We thus have an old viewpoint of our digital world. We treat servers as big lumps of compute, and then lump all of our infrastructure onto them. This is a centralised approach, and it only really exists because that is the way we built the system. We then create a security bubble around the servers and aim to secure the infrastructure from there. On a single server, we run everything and its dog, all bundled together in a tangled mess. One minute we are running SSH on our server, and the next we are checking someone’s mail, and all of it shares the same memory space.

To overcome our provisioning problem, we load-balance, which basically just clones the big lumps of compute and keeps them in their bubbles. So we come to some fundamental questions in the age of the cloud:

  • Why do we have servers? We have big, bloated operating systems that basically just run a web server or a database. Increasingly we run a pay-as-you-go model for servers, and there’s a whole lot of memory and CPU being provisioned that is never used.
  • Why do I pay for resources that are provisioned for the maximum load? Many companies will provision their infrastructure so that it can cope with the maximum load, which can be expensive in wasted compute time.
  • Why can’t I just run my infrastructure as a whole lot of services, and secure each of these individually?
  • Why should we pay for a lumped service? Surely if we just need more email provision from 9–10am on a Monday morning, we can basically spin up the service we need, and not have to create costly servers to cope with the load?
  • Why do I pay for 32GB of memory when the memory is mostly empty?

With Amazon EC2, you basically provision your compute engine and then pay as you go for your servers — it is an old model, created from bloated servers. With AWS Fargate, we have a serverless infrastructure, where ECS and Kubernetes are used to automatically scale applications. It thus allows infrastructures to be created where each resource is paid for on its own, and where each service can also be secured in its own isolated space. So rather than provisioning 4 vCPUs and 32GB of memory for our server, Fargate allocates the amount of compute resource that is actually required, and so you do not over-provision by adding new servers.
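To make that concrete, here is a sketch of what a Fargate task definition might look like (the task name, image and sizes are illustrative assumptions, not from the talk). Instead of renting a whole 4 vCPU / 32GB server, we ask for exactly the slice of compute one container needs:

```python
# Sketch of an ECS/Fargate task definition as a plain Python dict.
# All names and sizes below are hypothetical.
task_definition = {
    "family": "hello-api",                      # hypothetical task name
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "256",        # 0.25 vCPU -- not a whole 4-core server
    "memory": "512",     # 512 MB -- not 32 GB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "public.ecr.aws/docker/library/nginx:latest",
            "portMappings": [{"containerPort": 80}],
        }
    ],
}

print(task_definition["cpu"], task_definition["memory"])

# With AWS credentials configured, this could be registered via boto3:
#   import boto3
#   boto3.client("ecs").register_task_definition(**task_definition)
```

The billing then follows the `cpu` and `memory` values you declared, not the size of some underlying host.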

AWS Cloud WAN

Martin Beeby also outlined a new innovation, AWS Cloud WAN (Wide Area Network), which uses a single console to set up and manage hybrid cloud infrastructures. This supports running programs within AWS availability zones, which can reach out to deployments within an organisation’s own data centre (“Outposts”). Basically, this allows a hybrid cloud infrastructure of AWS public/private cloud, VPCs and on-premise private clouds.

This type of hybrid architecture is particularly focused on organisations which need to run on-premise applications, such as those related to the public sector and finance.

Conclusion

IT has one of the largest energy footprints on the planet — and is also one of the most wasteful. You and your company have a role to play in reducing our energy footprint — so go do it, before it’s too late.