Part 1: Securing Containers in Amazon ECS
"There are no rules of architecture for a castle in the clouds." --G.K. Chesterton
Over the past year alone, the number of breaches affecting Amazon Web Services (AWS) tenants has risen to an all-time high. However, because cloud service providers (CSPs) operate under a shared responsibility model, the onus for those breaches does not fall on Amazon but on the system and data owners of those cloud workloads.
This is part 1 of a multipart series on a strategic plan for securing workloads in the AWS cloud. Part 1 defines an actionable plan for securing containers deployed onto Elastic Compute Cloud (EC2) instances or via AWS Fargate.
First, terminology is defined to introduce readers who are unfamiliar with microservices, APIs, containers, container orchestration, and related Amazon ECS technologies. Then, the plan for securing containers deployed in ECS is detailed, distinguishing the security controls that AWS provides natively to tenants from the third-party solutions that should be considered as part of the strategy.
Today, for better or worse, we have no shortage of empirical reference data on AWS breaches: the recent Capital One breach of data stored in S3, the July breach of Accenture's data in its S3 buckets, and the compromise of Samsung's SmartThings source code after an S3 bucket key was mistakenly hard-coded into the source in Samsung's GitLab repository. The list goes on and on, and it is sure to get worse before year-end.
However, with the bruises these breaches have given their data owners comes lessons learned as well as a call to action for others not wanting to make those same headlines. The impetus for this series is exactly that, to keep you out of the data breach headlines. This article aims to help you understand what security controls are available to you for hardening and securing your cloud workloads in AWS and to provide your leadership team with the foundation for a broader plan for securing your workloads in the cloud.
In a Monolithic World
Before we can talk about how to secure microservices containers, you need to first fully understand how we got here. The journey to microservices starts with monolithic applications (otherwise known as monoliths).
A monolith is simply a massive, all-encompassing application where all functionality is built into a single app. I like to liken monoliths to a group of chefs in a kitchen all working on a single pot of stew. The chefs are analogous to the developers, all working on a single application, where one developer can break the parts of the code other developers are working on, or take down the entire application just to perform maintenance on one part of a web site -- for example, the furniture section of the amazon.com web site.
Microservices move away from this concept of many chefs working on a single pot of stew. Instead, each chef works on their respective part of the recipe -- one focused on marinating the vegetables, another on making the broth, and so on. In a microservices architecture, the single monolith is broken up into parts, with each part of the app or web site maintained by a different team of developers. Take the amazon.com web site as an example: instead of a single web site run from a single server, it would be broken up so that each part of the site is maintained by a different team and runs in a different container, all communicating via application programming interfaces (APIs).
This allows different parts of the amazon.com web site to be maintained separately by different teams, preventing developers from accidentally overwriting each other's code and eliminating the need to take the entire site down just to update or maintain one area of the site.
The Numbers in Containers
The proof is in the pudding for why the world is quickly shifting from the monolithic mindset to microservices. In a study conducted by IBM in 2017, 59% of organizations surveyed improved application quality and reduced defects by migrating monoliths to microservices, while 57% also reduced application downtime and the associated costs.
According to IBM's study, container usage for production enterprise workloads is expected to increase from 25% to 44% within the next three years. Deployment will shift heavily toward hybrid cloud and on-premises serverless containerized environments, while public-cloud-only deployments decrease.
With this increased movement toward a containerized enterprise, hackers will continue to focus on container breakouts -- a tactic used by adversaries to escape a container and pivot across container hosts. As the attack surface continues to grow, so does the need to better understand how to secure it.
Data is the new currency and, according to recent reports, is now worth more than oil. A new commodity spawns a lucrative, fast-growing industry, prompting antitrust regulators to step in to restrain those who control its flow. A century ago, the resource in question was oil. Now similar concerns are being raised about the giants that deal in data -- the oil of the digital era. These titans -- Alphabet (Google's parent company), Amazon, Apple, Facebook, and Microsoft -- look unstoppable. They are the five most valuable listed corporations in the world, and their profits are surging: they collectively racked up over $25 billion in net profit in the first quarter of 2017. Amazon captures half of all dollars spent online in the United States; Google and Facebook accounted for almost all the revenue growth in U.S. digital advertising last year.
Adversaries are constantly searching for ways to expand the revenues of their illicit business models, or to create new ones, by targeting, pilfering, and monetizing this new data commodity. With the value of data predicated on its sensitivity, protected health information (PHI) and controlled unclassified information (CUI) will surely be on their radar -- and where better to take it than where it is being stored, in public clouds.
This first article in the series covers how to secure Docker containers running in Amazon ECS. Part 2 covers securing AWS S3 buckets, Part 3 covers securing Amazon EC2 instances, and the series will end with Part 4 on securing APIs in the AWS cloud.
Threats to Containers
The number of threats to containers is ever-increasing. Today, these threats include:

- application-level DDoS and cross-site scripting attacks on public-facing containers;
- compromised containers attempting to download additional malware or scan internal systems for weaknesses or sensitive data;
- container breakout, allowing unauthorized access across containers, hosts, or data centers;
- a container being forced to consume system resources in an attempt to slow or crash other containers;
- live patching of applications to introduce malicious processes; and
- use of insecure applications to flood the network and affect other containers.
Defenses must be placed into the build process, shipment process, and runtime process of containers, which we'll cover in this first part of our series.
Cloud Service Providers (CSPs) operate off a shared responsibility model, meaning, there is a bright line between what the CSP is responsible for and what the tenant is responsible for.
Many organizations make the mistake of thinking that the CSP is responsible for regularly patching and monitoring their servers running in the cloud. However, this couldn’t be further from the truth. The figure below illustrates the division of responsibilities between the tenant of a CSP and the CSP itself (AWS).
Let me first demystify Amazon ECS before I explain how to secure containers running within it.
Amazon ECS (Elastic Container Service) is a container orchestration service, much like Kubernetes, which supports the deployment of Docker containers enabling tenants to run containerized applications on AWS.
What's in a name? Everything. Amazon calls it elastic because, like elastic, it expands and contracts automagically as traffic and utilization requirements demand. Tenants can even issue API calls to launch and stop Docker-enabled apps from custom-built tooling, and for added security, ECS supports AWS Identity and Access Management (IAM) roles, security groups, and load balancers.
ECS also supports other AWS services including CloudWatch, a monitoring service provided by AWS for monitoring applications and their infrastructure; CloudFormation, a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS; and CloudTrail, a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.
So to summarize, Amazon ECS is the infrastructure you use to deploy containerized Docker applications, and it is what we'll focus on securing and hardening next.
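To make the ECS deployment unit concrete, the sketch below builds a minimal task definition with security-minded defaults. The field names follow the shape of the ECS RegisterTaskDefinition API, but the family, image, and account values are placeholders, and a real deployment would carry many more settings.

```python
import json

def build_task_definition(family, image, cpu=256, memory=512):
    """Build a minimal ECS task definition with hardening defaults.

    The structure follows the ECS RegisterTaskDefinition request shape;
    names and values here are illustrative placeholders.
    """
    return {
        "family": family,
        "networkMode": "awsvpc",                # give the task its own ENI for isolation
        "requiresCompatibilities": ["FARGATE"],
        "cpu": str(cpu),
        "memory": str(memory),
        "containerDefinitions": [
            {
                "name": family,
                "image": image,                  # pin by digest, not a mutable tag
                "essential": True,
                "readonlyRootFilesystem": True,  # harden against runtime tampering
                "privileged": False,
                "user": "1000",                  # run as a non-root user
            }
        ],
    }

task_def = build_task_definition(
    "web-frontend",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/web@sha256:abc123",
)
print(json.dumps(task_def, indent=2))
```

The read-only root filesystem, non-root user, and unprivileged mode are the kinds of per-container hardening knobs ECS exposes that we'll return to throughout this plan.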
The Security Plan
Vulnerability and Patch Management
As with your existing cybersecurity program, vulnerability and patch management is a compulsory requirement for maintaining the security posture of your Docker containers in ECS. As new vulnerabilities are discovered in Docker, new patches and releases will follow, requiring you to either apply the patch or upgrade as they are released.
Using a vulnerability scanner purpose-built to scan for container related vulnerabilities is an essential purchase as part of your annual budget. Make sure that in addition to your cloud or on-prem scanner, you add a container vulnerability scanning tool to that war chest.
As an option, consider the NeuVector solution, which scans for vulnerabilities during the entire CI/CD pipeline, from build to ship to run. Using its Jenkins plug-in to scan during the build process, you can monitor images in registries, run automated tests for security compliance, and prevent the deployment of vulnerable images with admission control, while also monitoring production containers.
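Whatever scanner you choose, the pipeline gate itself is simple: fail the build when findings exceed your severity threshold. The sketch below assumes a hypothetical JSON report format -- the actual output shape depends on your scanner and is not NeuVector's real schema.

```python
import json

# Hypothetical scan report; the exact format depends on your scanner
# (NeuVector, Trivy, etc.). This JSON shape is illustrative only.
scan_report = json.loads("""
{
  "image": "web-frontend:1.4.2",
  "vulnerabilities": [
    {"id": "CVE-2019-0001", "severity": "HIGH"},
    {"id": "CVE-2019-0002", "severity": "LOW"}
  ]
}
""")

def gate_build(report, blocked_severities=("CRITICAL", "HIGH")):
    """Return the findings that should fail the pipeline."""
    return [v for v in report["vulnerabilities"]
            if v["severity"] in blocked_severities]

blockers = gate_build(scan_report)
if blockers:
    print("Build blocked by: " + ", ".join(v["id"] for v in blockers))
```

In a Jenkins pipeline, a non-empty `blockers` list would translate to a non-zero exit code, stopping the vulnerable image before it ever reaches a registry.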
Only Run Approved Images
Containers can be configured to only run signed images that have been approved for use. This can be achieved with the Docker Universal Control Plane (UCP), which enforces applications to only use Docker images signed by UCP users you trust. When a user tries to deploy an application to the cluster, UCP checks if the application uses a Docker image that is not trusted, and won’t continue with the deployment if that’s the case. By signing and verifying the Docker images, you ensure that the images being used in your cluster are the ones you trust and haven’t been altered either in the image registry or on their way from the image registry to your UCP cluster.
To configure UCP to only allow running services that use Docker images you trust, go to the UCP web UI, navigate to the Admin Settings page, and in the left pane, click Docker Content Trust. From there, select the Run Only Signed Images option to only allow deploying applications if they use images you trust.
Note that with this setting alone, UCP allows deploying any image as long as it has been signed, regardless of who signed it.
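The same admission-control idea can be applied to image digests. The sketch below is not UCP's actual mechanism; it is a simplified illustration of a deploy-time check against a local allowlist of approved digests, rejecting any image referenced by a mutable tag.

```python
def is_deployable(image_ref, trusted_digests):
    """Admission check: only allow images pinned to an approved digest.

    A simplified illustration of the policy signed-image enforcement
    provides; here we just compare against a local allowlist.
    """
    if "@sha256:" not in image_ref:
        return False  # mutable tags (e.g. :latest) are rejected outright
    digest = image_ref.split("@", 1)[1]
    return digest in trusted_digests

trusted = {"sha256:abc123"}
assert is_deployable("registry.example.com/web@sha256:abc123", trusted)
assert not is_deployable("registry.example.com/web:latest", trusted)
```

Pinning by digest ensures that what runs in production is byte-for-byte the image that was scanned and approved, not whatever a tag happens to point at today.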
Containers themselves are meant to be immutable: rather than modifying a running container instance, you make changes to the image and then redeploy it. This allows more streamlined development and a higher degree of confidence when deploying.
Monitor the Production Runtime
One of the last pieces of a container's lifecycle is deployment to production, and for many organizations, this stage is the most critical. Oftentimes, production is the longest period of a container's lifecycle and therefore needs to be consistently monitored for threats, misconfigurations, and other weaknesses. Once you have containers live and running, it is vital to be able to act quickly, in real time, to mitigate potential attacks. Simply put, production deployments are extremely important pieces of infrastructure, highly valued by organizations and their customers.
One option for runtime security includes AppArmor (Application Armor), which is a Linux security module that protects an operating system and its applications from security threats. To use it, a system administrator associates an AppArmor security profile with each program.
A commercial option is NeuVector, which is capable of discovering normal connections and application container behavior and automatically builds a security policy to protect container based services. Using process and file system monitoring with Layer 7 network inspection, unauthorized container activity or connections from containers can be blocked without disrupting normal container sessions.
Protect Application Secrets
A critical element of building safer apps is having a secure way of communicating with other apps and systems, something that often requires credentials, tokens, passwords, and other types of confidential information -- usually referred to as application secrets. In Docker, a secret is any blob of sensitive data, such as a password, SSH private key, or TLS certificate. When you add a secret to a swarm, Docker sends it to the swarm manager over a mutually authenticated TLS connection, making use of the built-in certificate authority that is automatically created when bootstrapping a new swarm.
With containers, applications are now dynamic and portable across multiple environments. This made existing secrets-distribution solutions inadequate, because they were largely designed for static environments. Unfortunately, this led to widespread mismanagement of application secrets, making it common to find insecure, home-grown solutions, such as secrets embedded in version control systems like GitHub. Docker's answer is Docker Secrets, a container-native solution that strengthens the trusted-delivery component of container security by integrating secret distribution directly into the container platform.
It's imperative that you take steps to ensure secrets are not hard-coded and are distributed and stored securely.
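From the application side, consuming a Docker secret is straightforward: swarm mode mounts each secret as a file under `/run/secrets/<name>`. The sketch below reads from that path, with an environment-variable fallback intended only for local development; the secret name and fallback variable are illustrative.

```python
import os
from pathlib import Path

def read_secret(name, secrets_dir="/run/secrets"):
    """Read a secret mounted by Docker swarm mode.

    Swarm mounts each secret as a file at /run/secrets/<name>. The
    environment-variable fallback is for local development only: env
    vars leak via `docker inspect` and process listings, so avoid them
    in production.
    """
    path = Path(secrets_dir) / name
    if path.is_file():
        return path.read_text().strip()
    return os.environ.get(name.upper())
```

Keeping this lookup in one helper makes it easy to verify that no secret is ever hard-coded or committed to version control.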
Implement Container Firewalls
With Layer 7 network inspection, application-level attacks on containers, such as DDoS and DNS attacks, can be detected and prevented. Real-time detection and alerting add a layer of network security to the dynamic container environment. And just as with traditional bare-metal servers and virtual machines, host firewalls should not be ignored in your container security strategy.
AWS Security Controls
AWS provides native security controls that can be used to harden containers running in ECS and make them more resilient to attack. Examples include the ability to set access permissions for individual containers using IAM, and to restrict resources so that they are accessible only to specific containers.
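Per-container IAM in ECS works by attaching a least-privilege policy to a task role. The sketch below generates such a policy document granting read-only access to a single S3 bucket; the bucket name is a placeholder, and the keys follow the standard IAM policy grammar.

```python
import json

def s3_read_policy(bucket):
    """Least-privilege IAM policy for an ECS task role: read-only
    access to a single bucket. The bucket name is a placeholder."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",       # ListBucket applies to the bucket ARN
                    f"arn:aws:s3:::{bucket}/*",     # GetObject applies to the objects
                ],
            }
        ],
    }

print(json.dumps(s3_read_policy("my-app-data"), indent=2))
```

Scoping each task role this narrowly means a compromised container can read only the one bucket its service legitimately needs, rather than everything the cluster can reach.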
For over two decades, I've preached the importance of implementing secure network architectures through micro-segmentation. AWS enables you to implement container isolation via network segmentation, which should be used to separate workloads.
What micro-segmentation achieves is the ability to limit the damage caused by malware, or by a hacker who has broken out of a Docker container, by preventing them from pivoting to or infecting other hosts in the ECS environment.
Containers can optionally be run on top of virtual instances. This model allows users to limit resource consumption, networking, and privileges, and can optionally be hardened further with SELinux.
Separately, tools such as Docker Bench -- a script that checks for dozens of common best practices around deploying Docker containers in production -- help ensure your containers do not violate Docker security best practices; think of it as a sort of Bastille Linux script for Docker.
This first part in the series focused on what hardening and security measures should be taken on Docker containers running in Amazon ECS. In the next part, we'll address security around S3 buckets.
Like and Share
The best way you can support me in my continued content development and influence efforts in cybersecurity is to like and share my article.