study & choose security approach for containerized env (?) #114

nongrata081 opened this issue Aug 1, 2019 · 25 comments

nongrata081 commented Aug 1, 2019

Introduction: What you need to know about container security



Topics to be mentioned (remove from this list once added to the containerized architecture security visualization):

  • vulnerability scanning (automated)
  • software updates (automated)
  • using a container-centric host OS
  • employing strong authentication and authorization
  • properly configure the registry (any effort to secure container images can be rendered meaningless if the registry itself can be easily compromised)
    • Access to the registry should require encrypted and authenticated connections, preferably using credentials that are federated with existing network security controls.
    • Also, the registry should undergo frequent maintenance to ensure that it doesn’t contain stale images with lingering vulnerabilities.
  • image-related
    • properly configure application images
      • configure the image to run with only the user privileges it actually needs and deny everything else (a locked-down launch is sketched after this list)
      • ensure images cannot launch extraneous daemons or services that would allow unwanted access from the network
    • handle outdated, insecure versions of software or libraries
    • handle buggy applications
    • handle hidden malware
    • handle secrets stored within images, such as authentication keys or certificates
  • secure container orchestration tools (If you don't strictly scope access, a careless or malicious user could potentially do all sorts of mischief, from taking down apps to launching rogue ones.)
    • secure the administrative interface (especially in scenarios where a single orchestrator manages multiple applications. May include: two-factor authentication and at-rest encryption of data)
    • configure orchestrators to separate network traffic into discrete virtual networks, based on the sensitivity of the traffic being transmitted (low-sensitivity workloads, such as public-facing web apps, should be isolated from high-sensitivity workloads, such as tax-reporting software)
    • distribute workloads: each host should run containers only of a given security level (These measures make it much more difficult for a malicious actor to gain access to sensitive data when a low-sensitivity application such as a blog is compromised)
    • deploy and orchestrate clusters in ways that are secure by default (e.g. include end-to-end encryption of all network traffic between cluster nodes and mutually authenticated network connections between cluster members)
    • orchestrator should be able to introduce nodes to the cluster securely, maintain a persistent identity for each node throughout its lifecycle, and isolate and remove compromised nodes without affecting the overall security of the cluster. (These measures are especially important in large-scale environments that span multiple network organizations and scale to hundreds of hosts and thousands of containers.)
  • secure the container runtime (serious concerns arise when the container runtimes that launch and manage containers, such as containerd, CRI-O, and rkt, themselves contain vulnerabilities. NIST cautions that, left unpatched, such flaws can lead to “container escape” scenarios where an attacker could potentially gain access to other containers or the host operating system itself, so admins should make installing runtime security patches a high priority).
  • secure container runtime option configs (many configurable options available with container runtimes. A misconfigured container might be able to access too many devices, for example, which could potentially affect all containers running on the host. Other runtime options could allow a container to make unsafe system calls, mount sensitive directories in read-write mode, and even compromise the host OS)
  • scan network traffic for threats and anomalies (Containers deployed on multiple hosts typically communicate over a virtual, encrypted network, and they are assigned dynamic IP addresses that change continuously as applications are scaled and load balanced by the orchestrator. Detecting network traffic anomalies in such an environment requires specialized, application-aware network filtering tools)
  • secure (lock down) operating system (At the lowest level of the containerized stack, the host OS represents the most critical target for attacks. If compromised, it can expose all of the containers running on it)
    • run a pared-down, container-specific OS that limits the number of installed components to the bare minimum of software required to create and manage containers (Fewer components means fewer potential vulnerabilities that can be exploited)
    • keep up with OS security patches and apply them promptly to all host instances in the cluster. (This includes not just the OS kernel, but also the container runtime and any other system services or components recommended by the OS vendor)
    • Properly configure OS (these measures make the OS a more trustworthy environment, with far fewer avenues for attack)
      • mount sensitive file systems as read-only
      • run the host OS as immutable infrastructure, with no data stored uniquely and persistently on the host
      • the host should not provide any application-level dependencies except those that have been packaged and deployed as containers
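
As a concrete illustration of the image-privilege and runtime-option items above, here is a minimal sketch of launching a locked-down container with the Docker SDK for Python. The image name, user ID, and tmpfs size are placeholder assumptions, not something prescribed by the article.

```python
# Minimal sketch (assumptions: Docker SDK for Python installed, local Docker daemon;
# image name, user ID, and tmpfs size are placeholders).
import docker

client = docker.from_env()

container = client.containers.run(
    "alpine:3.10",                        # placeholder image
    "sleep 3600",
    user="1000:1000",                     # run as an unprivileged user, never root
    read_only=True,                       # root filesystem mounted read-only
    cap_drop=["ALL"],                     # drop every Linux capability the app does not need
    security_opt=["no-new-privileges"],   # block privilege escalation via setuid binaries
    network_mode="none",                  # no network unless the workload requires it
    tmpfs={"/tmp": "rw,size=16m"},        # writable scratch space without touching the host
    detach=True,
)
print("started", container.short_id)
```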

@nongrata081

Solutions:

@nongrata081

Another lesson learned is that software alone cannot guarantee security. Containerization also requires that organizations examine their processes and teams and potentially adjust to the new operational model. The ephemeral nature of containers may call for different procedures than those used with traditional servers. For example, incident response teams will need awareness of the roles, owners, and sensitivity levels of deployed containers before they can know the proper steps to take in the event of an ongoing attack.

Security threats and mitigations are ever-evolving, and no one resource can provide all the answers. Still, the NIST Application Container Security Guide offers a solid foundation and framework for security policy for containerized environments. It’s well worth a read for anyone involved in building, deploying, managing, and maintaining containers and containerized applications, and it's a must-read for security professionals as the industry transitions to this next phase of IT.

@nongrata081

  • Consider Clair (recommended by Michael from TwistLock)

Clair is an open source project for the static analysis of vulnerabilities in application containers (currently including appc and docker).
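
A rough sketch of how a CI step might talk to Clair's v1 REST API to index a layer and read back known vulnerabilities; the Clair address, layer digest, and registry URL are placeholders, and error handling is omitted.

```python
# Rough sketch (assumptions: a Clair instance with the v1 API reachable at CLAIR_URL;
# LAYER_NAME and LAYER_URL are placeholders for a layer digest and a URL Clair can fetch).
import requests

CLAIR_URL = "http://clair.example.internal:6060"   # hypothetical Clair endpoint
LAYER_NAME = "sha256:<layer-digest>"               # placeholder
LAYER_URL = "https://registry.example.internal/v2/myapp/blobs/sha256:<layer-digest>"

# 1. Ask Clair to index the layer.
requests.post(f"{CLAIR_URL}/v1/layers", json={
    "Layer": {"Name": LAYER_NAME, "Path": LAYER_URL, "Format": "Docker"}
}).raise_for_status()

# 2. Fetch the analysis; the query flags ask Clair to include feature and vulnerability data.
report = requests.get(
    f"{CLAIR_URL}/v1/layers/{LAYER_NAME}",
    params={"features": "", "vulnerabilities": ""},
).json()

for feature in report.get("Layer", {}).get("Features", []):
    for vuln in feature.get("Vulnerabilities", []):
        print(feature["Name"], vuln["Name"], vuln.get("Severity"))
```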

@nongrata081

Containerized env security checklist

  • Use container-specific host OSs instead of general-purpose ones to reduce attack surfaces.

A container-specific host OS is a minimalist OS explicitly designed to only run containers, with all other services and functionality disabled, and with read-only file systems and other hardening practices employed. When using a container-specific host OS, attack surfaces are typically much smaller than they would be with a general-purpose host OS, so there are fewer opportunities to attack and compromise a container-specific host OS. Accordingly, whenever possible, organizations should use container-specific host OSs to reduce their risk. However, it is important to note that container-specific host OSs will still have vulnerabilities over time that require remediation.

Container-specific OSs:

  • CoreOS Container Linux
  • Project Atomic
  • Google Container-Optimized OS
  • etc

@nongrata081

  • Only group containers with the same purpose, sensitivity, and threat posture on a single host OS kernel to allow for additional defense in depth.

While most container platforms do an effective job of isolating containers from each other and from the host OS, it may be an unnecessary risk to run apps of different sensitivity levels together on the same host OS. Segmenting containers by purpose, sensitivity, and threat posture provides additional defense in depth. By grouping containers in this manner, organizations make it more difficult for an attacker who compromises one of the groups to expand that compromise to other groups. This increases the likelihood that compromises will be detected and contained and also ensures that any residual data, such as caches or local volumes mounted for temp files, stays within its security zone.
In larger-scale environments with hundreds of hosts and thousands of containers, this grouping must be automated to be practical to operationalize. Fortunately, container technologies typically include some notion of being able to group apps together, and container security tools can use attributes like container names and labels to enforce security policies across them.
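
For example, with Kubernetes as the orchestrator, the grouping can be expressed as labels plus scheduling constraints. The sketch below uses the official Kubernetes Python client; the "sensitivity" label key, namespace, and image are assumptions for illustration only.

```python
# Illustrative sketch: schedule a high-sensitivity workload only onto hosts that
# operators have labelled for high-sensitivity work. Label key, namespace, and image
# are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(
        name="tax-reporting",
        labels={"sensitivity": "high"},
    ),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="app",
            image="registry.example.internal/tax-app:1.0",
        )],
        # Only nodes carrying this label will be considered by the scheduler.
        node_selector={"sensitivity": "high"},
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="finance", body=pod)
```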

@nongrata081

  • Adopt container-specific vulnerability management tools and processes for images to prevent compromises.

Traditional vulnerability management tools make many assumptions about host durability and app update mechanisms and frequencies that are fundamentally misaligned with a containerized model. For example, they often assume that a given server runs a consistent set of apps over time, but different application containers may actually be run on different servers at any given time based on resource availability. Further, traditional tools are often unable to detect vulnerabilities within containers, leading to a false sense of safety. Organizations should use tools that take the declarative, step-by-step build approach and immutable nature of containers and images into account in their design to provide more actionable and reliable results.
These tools and processes should take both image software vulnerabilities and configuration settings into account. Organizations should adopt tools and processes to validate and enforce compliance with secure configuration best practices for images. This should include having centralized reporting and monitoring of the compliance state of each image, and preventing non-compliant images from being run.
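
A toy sketch of the configuration-compliance side of this, using the Docker SDK for Python to inspect image metadata; the checks and image names are illustrative placeholders, not a substitute for a real scanner.

```python
# Sketch of a simple compliance gate over image configuration, assuming images are
# available locally to the Docker SDK for Python; image names are placeholders.
import docker

client = docker.from_env()

def check_image(name):
    cfg = client.images.get(name).attrs.get("Config", {}) or {}
    findings = []
    if not cfg.get("User"):                          # empty user means the image runs as root
        findings.append("runs as root")
    if "22/tcp" in (cfg.get("ExposedPorts") or {}):  # e.g. an sshd baked into the image
        findings.append("exposes SSH")
    return findings

for image in ["registry.example.internal/myapp:1.0"]:   # placeholder list
    problems = check_image(image)
    print(image, "non-compliant:" if problems else "ok", ", ".join(problems))
```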

@nongrata081

  • Consider using hardware-based countermeasures to provide a basis for trusted computing.

Security should extend across all tiers of the container technology. The current way of accomplishing this is to base security on a hardware root of trust, such as the industry standard Trusted Platform Module (TPM). Within the hardware root of trust are stored measurements of the host’s firmware, software, and configuration data. Validating the current measurements against the stored measurements before booting the host provides assurance that the host can be trusted. The chain of trust rooted in hardware can be extended to the OS kernel and the OS components to enable cryptographic verification of boot mechanisms, system images, container runtimes, and container images. Trusted computing provides a secure way to build, run, orchestrate, and manage containers.

@nongrata081

  • Use container-aware runtime defense tools.

Deploy and use a dedicated container security solution capable of preventing, detecting, and responding to threats aimed at containers during runtime. Traditional security solutions, such as intrusion prevention systems (IPSs) and web application firewalls (WAFs), often do not provide suitable protection for containers: they may not be able to operate at the scale of containers, keep up with the rate of change in a container environment, or have visibility into container activity. Utilize a container-native security solution that can monitor the container environment and provide precise detection of anomalous and malicious activity within it.
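
As a very small taste of what "visibility into container activity" means in practice, the sketch below tails the Docker daemon's event stream with the Docker SDK for Python and flags a few actions worth a second look; real container-native tools add syscall, file, and network profiling on top. The chosen actions are illustrative assumptions.

```python
# Sketch only: watch container lifecycle events and flag a few noteworthy actions.
# Assumes the Docker SDK for Python and access to the local Docker daemon.
import docker

client = docker.from_env()

for event in client.events(decode=True, filters={"type": "container"}):
    action = event.get("Action", "")
    name = event.get("Actor", {}).get("Attributes", {}).get("name", "?")
    # exec sessions inside running containers and OOM kills are often worth investigating
    if action.startswith("exec_create") or action in ("oom", "kill"):
        print(f"noteworthy event: {action} on container {name}")
```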

@nongrata081

Container runtimes

Every host OS used for running containers has binaries that establish and maintain the environment for each container, also known as the container runtime. The container runtime coordinates multiple OS components that isolate resources and resource usage so that each container sees its own dedicated view of the OS and is isolated from other containers running concurrently. Effectively, the containers and the host OS interact through the container runtime. The container runtime also provides management tools and application programming interfaces (APIs) to allow DevOps personnel and others to specify how to run containers on a given host. The runtime eliminates the need to manually create all the necessary configurations and simplifies the process of starting, stopping, and operating containers. Examples of runtimes include Docker [2], rkt [3], and the Open Container Initiative Daemon [7].
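
For instance, the Docker Engine exposes such a management API; the sketch below uses the Docker SDK for Python to list running containers and (commented out) stop and remove one by name. The container name is a placeholder; containerd and CRI-O expose analogous, but different, interfaces.

```python
# Sketch of the kind of management API a container runtime exposes, here the Docker
# Engine API via the Docker SDK for Python.
import docker

client = docker.from_env()

# Enumerate running containers and the images they were started from.
for c in client.containers.list():
    print(c.short_id, c.name, c.image.tags)

# Stop and remove a specific container by (placeholder) name.
# c = client.containers.get("my-app")
# c.stop(timeout=10)
# c.remove()
```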

@nongrata081

2.3.1 Image Creation, Testing, and Accreditation

In the first phase of the container lifecycle, an app’s components are built and placed into an image (or perhaps into multiple images). An image is a package that contains all the files required to run a container. For example, an image to run Apache would include the httpd binary, along with associated libraries and configuration files. An image should only include the executables and libraries required by the app itself; all other OS functionality is provided by the OS kernel within the underlying host OS.

The image creation process is managed by developers responsible for packaging an app for handoff to testing. Image creation typically uses build management and automation tools, such as Jenkins [8] and TeamCity [9], to assist with what is called the “continuous integration” process. These tools take the various libraries, binaries, and other components of an app, perform testing on them, and then assemble images out of them based on the developer-created manifest that describes how to build an image for the app.

Most container technologies have a declarative way of describing the components and requirements for the app. For example, an image for a web server would include not only the executables for the web server, but also some machine-parseable data to describe how the web server should run, such as the ports it listens on or the configuration parameters it uses.

After image creation, organizations typically perform testing and accreditation. For example, test automation tools and personnel would use the images built to validate the functionality of the final form application, and security teams would perform accreditation on these same images. The consistency of building, testing, and accrediting exactly the same artifacts for an app is one of the key operational and security benefits of containers.
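
The build step that a CI tool such as Jenkins or TeamCity would drive might look roughly like the sketch below, here expressed with the Docker SDK for Python; the build context path and image tag are placeholders.

```python
# Sketch of a CI image-build step using the Docker SDK for Python; the build context
# path and tag are placeholders.
import docker

client = docker.from_env()

# Build an image from a developer-authored Dockerfile in ./app and tag it for testing.
image, build_logs = client.images.build(
    path="./app",                                     # placeholder build context
    tag="registry.example.internal/myapp:candidate",  # placeholder tag
    rm=True,                                          # remove intermediate containers
)
for entry in build_logs:
    if "stream" in entry:
        print(entry["stream"], end="")

print("built", image.id)
```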

@nongrata081

2.3.2 Image Storage and Retrieval

Examples of registries include Amazon EC2 Container Registry [10], Docker Hub [11], Docker Trusted Registry [12], and Quay Container Registry [13].

Images are typically stored in central locations to make it easy to control, share, find, and reuse them across hosts. Registries are services that allow developers to easily store images as they are created, tag and catalog images for identification and version control to aid in discovery and reuse, and find and download images that others have created. Registries may be self-hosted or consumed as a service.

Registries provide APIs that enable automating common image-related tasks. For example, organizations may have triggers in the image creation phase that automatically push images to a registry once tests pass. The registry may have further triggers that automate the deployment of new images once they have been added. This automation enables faster iteration on projects with more consistent results.

Once stored in a registry, images can be easily pulled and then run by DevOps personas across any environment in which they run containers. This is another example of the portability benefits of containers; image creation may occur in a public cloud provider, which pushes an image to a registry hosted in a private cloud, which is then used to distribute images for running the app in a third location.
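
A sketch of the push half of that workflow with the Docker SDK for Python; the registry address, repository, and credential handling are placeholders (real credentials should come from a secret store, in line with the registry-hardening notes above).

```python
# Sketch of pushing a tested image to a (placeholder) private registry with the
# Docker SDK for Python.
import docker

client = docker.from_env()

# Authenticate against the registry; credentials shown inline only for illustration.
client.login(
    username="ci-bot",
    password="<from-secret-store>",
    registry="registry.example.internal",
)

# Push the candidate tag; the generator yields progress/status lines from the daemon.
for line in client.images.push(
    "registry.example.internal/myapp", tag="candidate", stream=True, decode=True
):
    if "error" in line:
        raise RuntimeError(line["error"])
```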

@nongrata081

2.3.3 Container Deployment and Management

Examples of orchestrators are Kubernetes [14], Mesos [15], and Docker Swarm [16].

Tools known as orchestrators enable DevOps personas or automation working on their behalf to pull images from registries, deploy those images into containers, and manage the running containers. This deployment process is what actually results in a usable version of the app, running and ready to respond to requests. When an image is deployed into a container, the image itself is not changed, but instead a copy of it is placed within the container and transitioned from being a dormant set of app code to a running instance of the app.

The abstraction provided by an orchestrator allows a DevOps persona to simply specify how many containers need to be running a given image and what resources, such as memory, processing, and disk, need to be allocated to each. The orchestrator knows the state of each host within the cluster, including what resources are available for each host, and determines which containers will run on which hosts. The orchestrator then pulls the required images from the registry and runs them as containers with the designated resources.

Orchestration tools are also responsible for monitoring container resource consumption, job execution, and machine health across hosts. Depending on its configuration, an orchestrator may automatically restart containers on new hosts if the hosts they were initially running on failed. Many orchestrators enable cross-host container networking and service discovery. Most orchestrators also include a software-defined networking (SDN) component known as an overlay network that can be used to isolate communication between apps that share the same physical network.

When apps in containers need to be updated, the existing containers are not changed, but rather they are destroyed and new containers created from updated images. This is a key operational difference with containers: the baseline software from the initial deployment should not change over time, and updates are done by replacing the entire image at once. This approach has significant potential security benefits because it enables organizations to build, test, validate, and deploy exactly the same software in exactly the same configuration in each phase. As updates are made to apps, organizations can ensure that the most recent versions are used, typically by leveraging orchestrators. Orchestrators are usually configured to pull the most up-to-date version of an image from the registry so that the app is always up-to-date. This “continuous delivery” automation enables developers to simply build a new version of the image for their app, test the image, push it to the registry, and then rely on the automation tools to deploy it to the target environment.

This means that all vulnerability management, including patches and configuration settings, is typically taken care of by the developer when building a new image version. With containers, developers are largely responsible for the security of apps and images instead of the operations team. This change in responsibilities often requires much greater coordination and cooperation among personnel than was previously necessary. Organizations adopting containers should ensure that clear process flows and team responsibilities are established for each stakeholder group.
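
"Specify how many containers need to be running a given image and what resources to allocate" translates, for a Kubernetes orchestrator, into something like the Deployment sketched below with the official Kubernetes Python client; the replica count, namespace, image, and resource limits are placeholder assumptions.

```python
# Sketch of asking an orchestrator (Kubernetes, via its official Python client) for
# "N replicas of this image with these resources"; names and values are placeholders.
from kubernetes import client, config

config.load_kube_config()

labels = {"app": "myapp"}
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="myapp"),
    spec=client.V1DeploymentSpec(
        replicas=3,                                        # how many containers to run
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="myapp",
                    image="registry.example.internal/myapp:1.4.2",
                    resources=client.V1ResourceRequirements(
                        limits={"cpu": "500m", "memory": "256Mi"},
                    ),
                ),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```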

nongrata081 commented Aug 1, 2019

how it all started (read an article & realized docker is outdated & there is a need for a systematic approach to containerizing dev envs):

goodbye docker: https://technodrone.blogspot.com/2019/02/goodbye-docker-and-thanks-for-all-fish.html
https://news.ycombinator.com/item?id=19351236
