Creating Completely Distributed Cybersecurity Learning … Goodbye Cloud, Hello Kubernetes

We are building the next generation of cybersecurity training at Edinburgh Napier. Previously we created vSoC, and it has been used to support thousands of students, across many modules that run around the world.

For this we create different networks and then run scripts to build the infrastructure and virtual machines for each student and each module. It is a heavyweight system, with a VM for each server and host. It runs on VMware ESXi, and it has served us well. We create the learning infrastructure once, for the whole year, and then let it run. But our new environment will be truly distributed, and will let us concentrate on orchestrating the learning infrastructure, and then customising it for each lab, and for each student. Each lab, for each student, will then run fresh and newly created.

And so when Microsoft jumps on the open-source trail, you know that something is fundamentally changing. For too long we have been roped into systems which locked in the vertical and horizontal stacks. That's the way the industry generally liked it, and most companies in the stack were happy that they fitted in with the others. We have built large and complex server infrastructures which are verbose, and which still rely on complex infrastructures of processing and storage to make them all work. Our first wave of moving to the Cloud basically just took our existing servers and services and placed them somewhere else … from on-premise to off-premise.

Our systems are still bottlenecks, and are still overly centralised. But there's another way of creating clusters of computers which need to work together … and that way is Kubernetes. With this we create a cluster of computers which are ready to work with us to perform a task, and which act as a single unit. It breaks away from the centralised server model of the Internet, and starts to distribute the workload, and, hopefully, make better use of our computing resources. An infrastructure is defined with a container — such as a Web service from a Docker image — and is then run within a cluster of computers. In this way we do not have the overhead of scripting an ESXi infrastructure. With this we have a Kubernetes master, and then a number of nodes which are willing to share their resources.
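
As a sketch of what such a containerised service looks like, a minimal Kubernetes Deployment manifest might be (the names and image here are purely illustrative, not our real setup):

```yaml
# Illustrative Deployment: asks the cluster to keep three copies of
# a web container running on whichever nodes have capacity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-lab            # hypothetical name for a lab's web service
spec:
  replicas: 3              # the cluster keeps three pods running
  selector:
    matchLabels:
      app: web-lab
  template:
    metadata:
      labels:
        app: web-lab
    spec:
      containers:
      - name: web
        image: nginx:1.25  # any Docker image could go here
        ports:
        - containerPort: 80
```

Applied with `kubectl apply -f web-lab.yaml`, the master then schedules the three pods across the available nodes.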

Every node within a cluster runs a container runtime, such as Docker, and waits for work from the cluster master. The master can then manage the workload of its cluster, and give work to the nodes which have the least workload (or which are the most reliable at producing work). All of the components are then configured and networked together, even though they are distributed across the network. A failure of any part can be healed, with the workload applied to other nodes in the cluster.
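
The core idea, give new work to the least-loaded node, and reassign work when a node fails, can be sketched in a few lines of Python (the node names and tasks are invented for illustration; the real Kubernetes scheduler weighs many more factors):

```python
# Toy sketch of least-loaded scheduling and self-healing.
# Not the real Kubernetes scheduler, just the core idea.

def least_loaded(nodes):
    """Pick the node currently carrying the fewest tasks."""
    return min(nodes, key=lambda name: len(nodes[name]))

def schedule(nodes, task):
    """Place a task on the least-loaded node."""
    nodes[least_loaded(nodes)].append(task)

def heal(nodes, failed):
    """On node failure, reschedule its tasks onto the survivors."""
    orphaned = nodes.pop(failed)
    for task in orphaned:
        schedule(nodes, task)

# Three nodes, each holding a list of running tasks.
cluster = {"node-a": [], "node-b": [], "node-c": []}
for t in ["web", "db", "cache", "queue"]:
    schedule(cluster, t)

heal(cluster, "node-a")   # node-a dies; its work moves elsewhere
```

After the failure, all four tasks are still running, just spread across the two surviving nodes.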

And the advance for us? We write a lab, and then deploy it to any student who wants to take it. Every lab will be customised for its student, and will always run the same way. Labs can be updated for every student with a change to a script in GitHub. And you say … but are you not dependent on Docker? Docker is just one company providing containers; the future will see more companies supporting a standardised method of creating them.
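
One way to picture that per-student customisation is a small generator that stamps out a manifest for each student from a single template. This is a sketch with invented names and a hypothetical image; in practice something like it could live as a script in GitHub and feed `kubectl apply`:

```python
# Sketch: stamp out a per-student lab manifest from one template.
# The names, labels and image below are illustrative only.

TEMPLATE = """\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lab-{student}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lab-{student}
  template:
    metadata:
      labels:
        app: lab-{student}
    spec:
      containers:
      - name: lab
        image: napier/cyber-lab:latest   # hypothetical lab image
"""

def manifest_for(student: str) -> str:
    """Return a fresh, student-specific manifest."""
    return TEMPLATE.format(student=student.lower())

print(manifest_for("alice"))   # could be piped to `kubectl apply -f -`
```

Changing the template once changes the lab for every student on their next fresh run.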

And in the end, if it all sounds much the same as ESXi, you are mainly right … but … it is open-source, and free, and … it's fun to build with whatever computing resource you have!

If you want to build the future, go learn Kubernetes, and see a more distributed model of the Internet, and one that is scripted … and with no licence fees.