Validating containerization - What was small has become smaller, yet larger.

Let me explain this confusing headline. When virtualization became popular, users were amazed by the flexibility it provided. We could segment large hardware into several small virtual machines (VMs), each running its own operating system and software to serve its own unique purpose. Virtualization was widely adopted because it let everything fit into small pieces that could work together as needed, replacing something quite large and expensive. We were starting to think this was as small as we could get, but it turned out to be just the starting point for making things smaller.

This is where containerization enters the picture: a mechanism for running applications in user space while sharing the operating system of the underlying virtual machine. In essence, with containers we managed to make what was already small (VMs) into something smaller (containers within VMs).
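
To make that OS sharing concrete, here is a minimal sketch of a Kubernetes pod whose only job is to print the kernel version (the pod name and image are illustrative choices, not anything CyPerf-specific). Because a container shares the host's kernel instead of booting an operating system of its own, the output matches the kernel of the underlying VM:

```yaml
# Minimal illustration: a pod that prints the kernel it runs on.
# Every container on the same node reports the same (host) kernel,
# which is exactly the OS sharing described above.
apiVersion: v1
kind: Pod
metadata:
  name: kernel-check        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: alpine:3.19      # any small image works here
    command: ["uname", "-r"]
```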

How can small be large?

Well, the fluidity of containers, their lightweight nature, and the ease with which they can be deployed across varied environments, operating systems, and hardware types give them the next level of flexibility. We can spawn hundreds of pods (a pod is one or more containers deployed together) or destroy them in seconds. Based on needs and demands, you can scale pods out to handle thousands or millions of user requests, scale up particular pod types (like databases), bring pods back down as demand subsides, and so on. The lightweight form factor of containers also means a misbehaving pod, such as one whose application has crashed, can be restarted in seconds, giving a whole new flexibility to the legendary IT question “Have you tried turning it off and on again?”.
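
In Kubernetes, that elasticity is usually expressed declaratively. As a rough sketch (the deployment name and thresholds below are placeholders, not values from this article), a HorizontalPodAutoscaler grows and shrinks a pod population automatically as load changes:

```yaml
# Hypothetical autoscaling policy: keep average CPU near 70%,
# adding pods under load and removing them as demand subsides.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                 # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                   # placeholder deployment to scale
  minReplicas: 2
  maxReplicas: 200              # upper bound on the pod population
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

The restart story works the same way: a pod whose container crashes is restarted automatically when its restart policy is `Always`, which is the Kubernetes default.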

With great power comes great responsibility

The nimble nature of pods means the middle boxes that service or secure a containerized solution face a set of unique, never-before-seen challenges in keeping up with the dynamic nature of containers. From a testing perspective, this means we need to emulate scenarios that replicate a containerized application. Keysight’s CyPerf is the industry’s first distributed, elastic performance and security test solution to enable this type of testing. Let’s take a look at how you can emulate containerized applications with CyPerf.

Creating containerized test infrastructure

CyPerf uses containers that can be deployed exactly like any other container by following these steps:

1. Figure: Architecture diagram of a north-south topology where CyPerf agent pods are deployed on the server side to simulate multi-tiered containerized applications.

To learn more about how to use CyPerf’s cloud-native, lightweight agents to validate SD-WAN, content delivery network (CDN), secure access service edge (SASE), multi-cloud, WAFs, and more, check out the CyPerf Tutorials for New Users video series.

2. Figure: A sample YAML file showing the configuration used to deploy CyPerf pods.
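
The actual manifest ships with CyPerf, but to give a feel for what such a file contains, here is a rough sketch of a deployment of agent pods. The image path, label, and environment variable below are placeholders, not real CyPerf values:

```yaml
# Illustrative only: a generic Deployment of test-agent pods.
# Real CyPerf manifests are provided with the product; the names,
# image path, and environment variables here are made up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cyperf-agent                  # placeholder name
spec:
  replicas: 2                         # adjust to scale the agent pool
  selector:
    matchLabels:
      app: cyperf-agent
  template:
    metadata:
      labels:
        app: cyperf-agent             # a label like this can serve as the tag seen in the UI
    spec:
      containers:
      - name: agent
        image: registry.example.com/cyperf-agent:latest   # placeholder image path
        env:
        - name: CONTROLLER_ADDRESS    # hypothetical variable for reaching the test controller
          value: "203.0.113.10"
        resources:
          requests:
            cpu: "2"
            memory: 2Gi
```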

3. Figure: The CyPerf UI showing the containers along with the individual tags used to identify them in the UI.

4. Figure: CyPerf statistics showing server-side pods scaling up and down during a test run.

Use cases

Because CyPerf itself runs as containers, it provides the flexibility to test a variety of containerized environments and container network interfaces (CNIs). Below are a few scenarios where CyPerf containers can help in your validation efforts:

  1. Test the application performance and security efficacy of containerized next-generation firewalls (NGFWs)
  2. Test web application firewalls (WAFs) and application load balancers that service containerized web applications and databases
  3. Test the performance of various CNIs, such as Calico or Flannel, and the advantages and disadvantages of each
  4. Test the performance of various Kubernetes implementations, such as EKS in AWS or GKE in Google Cloud
  5. A mix of all four of the above: test the deployment of application and security tools in a variety of CNI and Kubernetes environments to gauge performance drops, latencies, security issues, etc.