More and more organizations are experimenting with and shifting workloads to the Hyperscale Clouds like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). These clouds offer many capabilities and features that allow an organization to innovate at a faster pace and bring products and services to market sooner. At the same time, these new capabilities and features are stretching existing organizational processes and knowledge, including on the information security and auditing fronts.

Leading cloud providers, such as AWS, provide a wide array of services and tools, each with its own risks and best practices. AWS is constantly adding new services and extending existing ones, with hundreds of product announcements in 2017 alone. This makes it hard for an organization to keep up with the best practices and security implications of each tool or service.

Moving to the cloud doesn’t negate an organization’s information security and compliance requirements. Workloads must still be deployed in a secure manner and in accordance with company policy. AWS can make it easy to secure a workload, sometimes better than an organization’s existing data center; on the flip side, it also makes it easy to deploy a very insecure workload. It’s up to the workload owner to take advantage of these capabilities to secure their workload.

With the rise of DevOps, the design, build, and management of both infrastructure and applications are being pushed out to the workload owner. In many organizations, this blurs the lines of responsibility between teams. It can also short-circuit the checks and balances, such as change management and security reviews, that existed in traditional processes.

With all these changes, there is a need to evolve the organization’s security program to match the speed and capabilities of the cloud. That does not mean throwing all the existing security policies and procedures out the window. The various teams need to understand how the intent of those policies and procedures maps to cloud capabilities in order to evolve the security program.

Below are some areas that should be addressed as you evolve your security program for the cloud. This is not an exhaustive list, but rather a starting point for your journey.

Establish Cloud Governance

Someone needs to take ownership of the organization’s use of the Hyperscale Clouds. This may be a single person, or ownership may be shared within a team or working group. The intent is to create a position that enables and guides the process, not a dictatorship. Regardless, this person needs to sit at a high enough level within the organization to be able to set policy.

The governance process helps the organization determine things like:

  • How the Hyperscale Clouds will be used
  • What can and cannot be put in the Hyperscale Clouds
  • What policies and procedures are needed
  • How cloud accounts will be architected across the company
  • What tiers of use exist and their corresponding security requirements (e.g., dev, prod, sensitive data)

The organization needs a clear understanding of what is in the Hyperscale Clouds, who is responsible for those assets, and how they are properly managed.

It may be useful to create a Hyperscale Cloud account clearinghouse within the organization. The clearinghouse would handle the administrative aspects of establishing cloud accounts, manage how billing or chargebacks are handled, monitor overall usage and spend, and so on. This function may or may not be the cloud usage approval authority.

There may also be a need for a process to detect and address rogue or unapproved Hyperscale Cloud use. This could start with having the accounting or finance team flag expense requests that fall outside the organization’s cloud account management processes.

Understand Who is Responsible for What

AWS defines what it is responsible for and what the customer is responsible for from a security and compliance standpoint (its shared responsibility model). AWS’s responsibility primarily covers the underlying infrastructure, whereas the customer is responsible for what they put on top of that infrastructure.

Within your organization, use your existing policies and processes to look at who performs various functions in the overall life of a traditional workload and map that to how it will be handled in the cloud environment. This includes areas such as:

  • Development
  • Infrastructure management
  • Networking
  • Firewall and access management
  • Identity and access management
  • Security policy and governance
  • Auditing, compliance, and change management

You don’t necessarily want to create a one-to-one mapping. The idea is to make sure the critical functions for your organization’s needs are addressed. This may also serve as an opportunity to improve those processes.

If the mapping changes traditional roles and responsibilities, the organization needs to ensure the new owners understand how to perform those functions.

Minimize Shadow IT

The use of cloud services can make it easier for shadow IT operations to spring up. When this happens, the organization loses visibility into, and control over, what systems, assets, or data may be exposed. Shadow IT projects tend to bypass the normal security and operational checks and balances, increasing the risk to the organization.

The ease of deployment may be a distinct benefit of the Hyperscale Clouds, but it can lead to teams “working around the system.” Why should a team wait days, weeks, or months for new servers to be deployed internally when it can spin up resources in minutes in the cloud?

There should be clear organizational policies and processes for using the Hyperscale Clouds that are flexible enough to take advantage of cloud capabilities while maintaining awareness and control of all usage. If the process is not flexible enough, someone will pull out a credit card and work around it.

Translate Security Requirements to Cloud Capabilities

Most organizations have a security framework or some level of regulatory or industry standards they must follow. The security team and cloud team need to work together to map existing security requirements to how they will be implemented in the Hyperscale Clouds. The overall security requirements do not change as you move to these new platforms. The security controls may change to fit the environment, and where or how they get implemented could differ.

The out-of-the-box Hyperscale Cloud services may not be sufficient for all uses. These platforms provide an open and extensible base of services, and it’s up to the customer to use those services appropriately for their organization. For example, a new OS instance can be launched from the AWS Marketplace in minutes. However, this may not meet the organization’s requirements to deploy only hardened versions of specific operating systems and to authenticate against a common identity management solution. It is up to the customer to determine how to accomplish these tasks.
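
As a minimal sketch of the hardened-OS requirement above, assuming the organization maintains its own approved machine images, the Python (boto3) snippet below launches an EC2 instance from such an image and tags it with its baseline. The AMI ID, subnet ID, and tag values are hypothetical placeholders, not something taken from the article.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch from an organization-approved, hardened AMI rather than a generic
    # marketplace image. All identifiers below are hypothetical placeholders.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # approved hardened base image (placeholder)
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",  # private subnet (placeholder)
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [
                {"Key": "Owner", "Value": "workload-team"},
                {"Key": "Baseline", "Value": "org-hardened-os"},
            ],
        }],
    )

    print(response["Instances"][0]["InstanceId"])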

Conversely, the organization may have to change security requirements or controls to work within the cloud provider’s environment. Not all hardening steps may be applicable or easy to implement, and some controls may need to be implemented in a completely different way.

The organization must also ensure the underlying cloud services being used meet the required compliance standards. Not all Hyperscale Cloud services or regions have been validated or audited against the various compliance standards.

Ultimately, the organization needs to understand the risks to their systems and how to implement sufficient security controls within the cloud-based solution.

Follow Best Practices

The Hyperscale Clouds publish best practices, including security-related ones, for their own services for a reason. Many security-focused partner vendors provide guidance as well. The Center for Internet Security (http://www.cisecurity.org), which maintains the most commonly used hardening guides, has also published security benchmarks for workloads in AWS.

Use these resources as a reference – don’t re-invent the wheel!

Automate

One of the key features of using a cloud provider like AWS is the level of automation that can be built around their services.

Whether it is built in-house or acquired via one of AWS’s partners, take advantage of the various automation capabilities. Here are just a few possible automations from a security perspective:

  • Enable continuous audit at the AWS management layer. Configuration information can be gathered across the AWS services in use, compared against acceptable standards, and reported on. There are many open source and commercial tools already available (a minimal example follows this list).
  • Use automation to deploy services in the cloud, whether it is spinning up instances, applying security hardening, configuring settings, or deploying applications. This makes it easier to prevent configuration drift and simplifies auditing.
  • Use features like CloudFormation to define and build all production environments. Build or use automation to audit the environment against the defined configuration.
  • Detect and alert when controlled configuration items change. For example, automation can alert the appropriate teams when an AWS Security Group changes or opens up potentially dangerous access (a sketch of this appears at the end of this section).
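
As a minimal example of the continuous-audit item above, assuming Python with boto3 and a single region, the sketch below flags any Security Group rule that opens SSH or RDP to the entire Internet. The port list and plain-text reporting are simplifying assumptions; mature open source and commercial tools go far beyond this.

    import boto3

    RISKY_PORTS = {22, 3389}  # ports treated as "dangerous" for this example


    def find_open_risky_rules(region="us-east-1"):
        """Return (group_id, from_port, to_port) for rules open to 0.0.0.0/0."""
        ec2 = boto3.client("ec2", region_name=region)
        findings = []
        for sg in ec2.describe_security_groups()["SecurityGroups"]:
            for rule in sg.get("IpPermissions", []):
                open_to_world = any(
                    ip_range.get("CidrIp") == "0.0.0.0/0"
                    for ip_range in rule.get("IpRanges", [])
                )
                from_port, to_port = rule.get("FromPort"), rule.get("ToPort")
                if open_to_world and from_port is not None:
                    if any(from_port <= p <= to_port for p in RISKY_PORTS):
                        findings.append((sg["GroupId"], from_port, to_port))
        return findings


    if __name__ == "__main__":
        for group_id, start, end in find_open_risky_rules():
            print("Security Group %s exposes ports %s-%s to 0.0.0.0/0"
                  % (group_id, start, end))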

There are endless ways automation can be used to provide a better managed and controlled workload.  The higher the risk profile of a workload, the more automation should be used.
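
As a sketch of the Security Group alerting item, assume CloudTrail events are routed through a CloudWatch Events rule that matches AuthorizeSecurityGroupIngress calls and invokes the Lambda handler below (Python with boto3), which forwards a short summary to an SNS topic. The topic ARN, environment variable name, and selected fields are illustrative assumptions, not a definitive implementation.

    import json
    import os

    import boto3

    sns = boto3.client("sns")
    # Hypothetical SNS topic for security alerts, supplied via an environment variable.
    TOPIC_ARN = os.environ.get(
        "ALERT_TOPIC_ARN", "arn:aws:sns:us-east-1:111111111111:sg-change-alerts"
    )


    def handler(event, context):
        """Summarize a CloudTrail-sourced Security Group change and notify via SNS."""
        detail = event.get("detail", {})
        if detail.get("eventName") != "AuthorizeSecurityGroupIngress":
            return
        summary = {
            "groupId": detail.get("requestParameters", {}).get("groupId", "unknown"),
            "changedBy": detail.get("userIdentity", {}).get("arn", "unknown"),
            "sourceIPAddress": detail.get("sourceIPAddress", "unknown"),
        }
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="Security Group ingress change detected",
            Message=json.dumps(summary, indent=2),
        )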

Summary

Using the Hyperscale Clouds can help an organization improve the velocity at which it brings services to market or to its end users. The use of the cloud and the move to DevOps are blurring the lines between the roles that different teams traditionally specialized in and were good at. At the same time, new services are being delivered faster than organizations can keep up with.

This transformation is forcing organizations to evolve their information security practices to keep on top of these new capabilities and methods of delivery. Organizations must still manage the risks to their data, assets, and reputation regardless of how they deliver their services.

This is not an easy problem, nor is the Internet getting any safer from attack. It is crucial for all the teams to come together and work in partnership to develop the best cloud security model for the organization. If they do not, it may not be long before we start seeing an increase in high-value compromises in the cloud.

Bob is a member of the CTO Architect Team at Sungard Availability Services. In his over 25 years of IT experience, he has touched just about every aspect of technology, from desktops to large enterprise systems and everything that connects them together. His last 10 years have been focused on information security, including Red Team, Blue Team, technical controls, and managing security operations. Prior to Sungard Availability Services, Bob held several positions at AT&T (formerly USi) and TASC. Bob holds a B.S. in Electrical Engineering from Virginia Tech.
