
On Stranger Tides: API and Container Security Part I



This is part 1 of a multi-part series I'm publishing on how breaches occur in service-oriented architectures (SOAs), application programming interfaces (APIs), and containers, and how to build resiliency against them. In this first part, I'll explain what SOAs and containers are; in part 2, we'll demystify APIs, API breaches, and how to secure against them; and in the final part, we'll discuss container breaches and how to secure containers.


Rise of Microservices


In the beginning there were monoliths: applications designed as self-contained, single-tiered software in which the UI and data access layers are combined into a single program.


Monoliths carry out tasks completely on their own from end to end and are made up of two components, or layers: the user interface (UI), often called the graphical user interface (GUI), which is the user's entry point into the application; and the data layer, a wrapper around the data store (or database) that sanitizes data before it's persisted.


Monoliths are designed for traditional server-side systems, where the entire system is a single application built with performance and speed in mind. Monoliths typically run faster because they don't communicate with different parts of themselves over APIs, and they are perfectly suited for small, single-function apps.


For example, a monolith might be an application that performs many different functions, from authentication to database reads and writes to posts, may contain a permission structure and workflow approvals, and runs on a single server using the same file system.


Monoliths are often described as being akin to having too many chefs in the kitchen with their hands in a single pot, each adding their own ingredients that may conflict with, or ruin, parts of the dish the other chefs are working on.


It's important to note that monoliths aren't dead. Whether a developer chooses to write an application as a monolith or break it into microservices depends on the application.


Microservices, by contrast, are designed with modularity in mind: they support the reuse of application parts and make maintenance easier by allowing individual parts to be repaired or replaced without a wholesale rebuild or replacement of the entire app.


The rise of microservices ushered in a new concept, the service-oriented architecture (SOA), where the server architecture is composed of multiple applications that each represent an individual feature of the same overall application. For example, authentication, the database, posts, permission structures, and workflow approvals are all separate applications that communicate with each other over APIs, each from its own container (such as a Docker container), and each may run on a different OS.
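
To make that concrete, here's a minimal sketch of such an SOA as a Docker Compose file; the service names and app images (auth, posts, example/...) are hypothetical, invented purely for illustration:

    # docker-compose.yml -- hypothetical SOA with separate auth, posts,
    # and database services (all example/* image names are assumptions)
    services:
      auth:
        image: example/auth-service:1.0    # hypothetical image
        ports:
          - "8001:8000"
      posts:
        image: example/posts-service:1.0   # hypothetical image
        environment:
          DATABASE_URL: postgres://app:secret@db:5432/posts
        depends_on:
          - db
      db:
        image: postgres:16
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: secret
          POSTGRES_DB: posts

Each service lives in its own container and reaches the others over the Compose network by service name; the posts service, for instance, finds the database at the hostname db.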


SOAs are well suited for applications that need to scale different parts of themselves independently as demand or load increases. With an SOA, the posts service, for example, can be scaled up on its own as demand drives up the number of posts per second. Additionally, an SOA allows individual service problems to be isolated and resolved, where the same fault might otherwise bring down an entire monolith and make the whole application unusable. Instead, just one part, such as new user registration, can be marked as offline for maintenance while existing users are still able to log in.
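
Continuing the hypothetical Compose file above, scaling just the posts service is a single command (this works because that service publishes no fixed host port):

    # Scale only the posts service to five replicas; auth and db are untouched
    docker compose up -d --scale posts=5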


Containers also enable multiple development teams to work on different parts of the code without worrying about how their changes may adversely impact other developers and their sections of the application.


Is that a container in your pocket or are you just happy to see me?


To understand microservices, you need to understand what containers are. If you're familiar with the concept of virtualization, you already have a partial understanding of containers. A container is a self-contained slice of the system that carves out its own share of the server's random access memory (RAM), CPU, and disk space. However, unlike traditional virtual machines, such as those run under VMware or Hyper-V, containers don't reach core components and cards, such as the network interface card (NIC) or graphics card, through a hypervisor's virtual device drivers. Instead, containers have direct access to the system's components. This keeps containers isolated and self-contained, avoids the problem of virtual device drivers crashing (and some attacks that leverage VM virtual device drivers), gives containers far better performance in speed, and allows them to take up far less disk space.
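
A quick way to see that distinction for yourself, assuming Docker is installed: a container reports the host's kernel version, because it never boots a kernel of its own the way a VM does.

    # On the host:
    uname -r
    # Inside a throwaway Alpine container -- same kernel version:
    docker run --rm alpine uname -r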


The World of Docker


Containers are typically deployed using Docker, and as the number of Docker containers grows with demand and API traffic, users typically turn to Kubernetes (K8s) to bring their existing Docker images under a more autonomous management infrastructure. Kubernetes is a free, open-source project and the de facto standard for large, distributed container management.


Docker is a container platform that allows developers to develop and deploy apps within neatly packaged, virtualized containerized environments. Applications built in a container run the same no matter where they are or what they are running on, effectively making the container a portable server.
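
That portability comes from packaging the app together with its environment in an image. Here's a minimal sketch of a Dockerfile for a hypothetical Node.js app (the file names and entry point are assumptions):

    # Dockerfile -- bundles a Node.js app and its dependencies into one image
    FROM node:20-alpine
    WORKDIR /app
    COPY package*.json ./
    RUN npm install
    COPY . .
    CMD ["node", "server.js"]    # assumed entry point

Built once with docker build -t example/node-app:1.0 ., the resulting image runs identically on a laptop, a build server, or a production host.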


Docker containers can be easily added, removed, stopped, and started again without affecting each other or the host machine they run on. In fact, numerous Docker containers can run on the same host machine at the same time, typically consuming far fewer system resources and delivering much better performance than multiple guest VMs running on the same host under VMware. Docker containers run one specific task, such as a MySQL database or a Node.js application, and are networked together so they can all communicate with one another.
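
As a sketch of that one-task-per-container pattern, assuming Docker is installed (the web app image is the hypothetical one built above):

    # Create a user-defined network so containers can reach each other by name
    docker network create app-net

    # One task per container: a MySQL database and a Node.js app
    docker run -d --name db --network app-net \
      -e MYSQL_ROOT_PASSWORD=secret mysql:8
    docker run -d --name web --network app-net \
      -e DB_HOST=db example/node-app:1.0    # hypothetical image

    # Containers stop and start independently of each other and of the host
    docker stop web && docker start web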


Pre-baked, community-contributed Docker images are typically downloaded from Docker Hub, an online cloud registry of Docker images with pre-configured environments for the specific programming language a developer might be working in, such as Ruby.
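
Pulling one of those pre-baked images takes a single command; for example, a Ruby environment (the version tag here is chosen arbitrarily):

    # Download a pre-built Ruby image from Docker Hub and open an interactive REPL
    docker pull ruby:3.2
    docker run -it --rm ruby:3.2 irb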


As mentioned, unlike VMs, Docker containers carve out a set amount of resources: disk space, memory, and processing power. Docker communicates natively with the system kernel, uses less disk space, and reuses files efficiently through its layered filesystem: Docker keeps a single copy of the files each layer needs and shares them across containers.
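
You can inspect that layering directly: each Dockerfile instruction becomes a read-only layer, and layers shared between images are stored on disk only once.

    # Show the layers that make up an image
    docker history ruby:3.2

    # Show how much disk space images, containers, and the build cache consume
    docker system df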


What in the Kubernetes is that?

Kubernetes is built around the concept of nodes. Every Kubernetes deployment typically runs a master node and a set of worker nodes. Worker nodes handle multiple "pods," which is Kubernetes parlance for a group of containers clustered together as a single working unit.
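
A minimal pod definition, sketched in Kubernetes YAML with hypothetical names, shows two containers grouped into one working unit:

    # pod.yaml -- two containers scheduled and managed as a single unit
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod            # hypothetical name
    spec:
      containers:
        - name: web
          image: nginx:1.25
        - name: log-sidecar        # second container sharing the pod
          image: busybox:1.36
          command: ["sh", "-c", "tail -f /dev/null"]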


Deploying a Kubernetes infrastructure is relatively simple. The administrator simply gives the Kubernetes master node pod definitions specifying how many pods she wants to deploy. Kubernetes then deploys those pods to worker nodes that have been pre-configured and defined within the system. If a worker node becomes unavailable or its performance suffers, Kubernetes starts new pods on the worker node that has the most available resources required by those pods and automatically migrates them to the new node.
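
In practice, those pod definitions are usually wrapped in a Deployment that also states the desired replica count. A minimal sketch, reusing the hypothetical posts service from earlier:

    # deployment.yaml -- asks the cluster to keep three posts pods running
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: posts
    spec:
      replicas: 3                  # desired number of pods
      selector:
        matchLabels:
          app: posts
      template:
        metadata:
          labels:
            app: posts
        spec:
          containers:
            - name: posts
              image: example/posts-service:1.0   # hypothetical image

Applied with kubectl apply -f deployment.yaml, Kubernetes continuously reconciles the cluster toward those three replicas, rescheduling pods elsewhere if a node dies.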


Kubernetes started out as a Google brainchild; Google later donated it to the Cloud Native Computing Foundation, where it is developed and maintained as a community open-source project.


In short, Kubernetes enables the automated deployment, auto-scaling, and operation of multiple containers, and it is not limited to just Docker; it supports other container formats as well.


Kubernetes Architecture


Master Node: The master node is part of a "cluster" within Kubernetes and knows about every server built and made part of the Kubernetes ecosystem that the master node can deploy containers to. The admin tells K8s what kind of image she wants to use (typically a pre-baked image downloaded from Docker Hub), then creates a "deployment," which specifies the amount of CPU, RAM, file storage, and other critical system configuration parameters, and which is continuously run, monitored, and maintained by K8s over time. K8s will even auto-heal deployments when recovery is needed; it isn't a "push and done" system. K8s makes autonomous decisions about where to migrate deployments based on their requirements and, when configured to, can even perform rolling restarts of large numbers of deployments across different worker nodes.
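
As a sketch of that ongoing management, assuming a deployment named posts like the one above, an admin can set the resource envelope and trigger a rolling restart entirely from the command line:

    # Set the CPU/RAM the deployment's containers may request and consume
    kubectl set resources deployment/posts \
      --requests=cpu=250m,memory=256Mi --limits=cpu=500m,memory=512Mi

    # Trigger a rolling restart across worker nodes and watch it complete
    kubectl rollout restart deployment/posts
    kubectl rollout status deployment/posts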


Comment


So what's your opinion? Anything you want to change or expound upon in my explanations of APIs and containers in this article? Leave your comments in the section below!


Like & Share


As usual, if you liked this article, please support me by clicking LIKE and share it with your own feed! This is the best possible way that you can support me and my continued research. If anyone has anything to add or comment on in this article, please feel free to share it with everyone below in the comments section! Learn more about me at my homepage at www.alissaknight.com or on LinkedIn, watch my VLOGs on my YouTube channel, listen to my weekly podcast episodes, or follow me on Twitter @alissaknight.

