The Security Practitioner's Guide to AWS re:Invent 2022
Recap of the interesting security focused announcements.
AWS's 11th re:Invent is now over. For me, this quiet period between re:Invent and the new year has been about figuring out what my developer community was excited about and what I, as their cloud security advisor, needed to advise them on. This was my 8th re:Invent, and for the past several years I have reviewed all the announcements to understand their implications for security practitioners.
Last year, re:Invent was smaller (international travel was still on hold), and the attendees were happy to be back in person. This year, re:Invent was slightly smaller than its pre-pandemic peak. It once again stretched the entire length of the strip - from Mandalay Bay on the south side to the Encore on the north side, or about 3.1 miles on foot.
In contrast to the Google, Microsoft, and Oracle events, AWS had five keynotes. Peter DeSantis kicked off on Monday afternoon, talking about the physical infrastructure used to build AWS and how it leads to improved performance for distributed applications and machine learning. Adam Selipsky's Tuesday morning keynote saw several announcements woven together via a grand vision of "Vast Unfathomable Extreme Possibilities"; it was widely panned for having too much fluff. Wednesday brought Swami Sivasubramanian's keynote on data and machine learning, as well as Ruba Borno's AWS Partner Summit keynote. Finally, on Thursday, Werner Vogels's keynote focused on the asynchronous and event-driven nature of the world and how developers can build better and faster on AWS.
All throughout the week, AWS was releasing new products and features. There were 240 announcements in November prior to re:Invent. During the week of re:Invent there were an additional 117 announcements.
New Services & Announcements
In the keynotes for GCP, Microsoft, and Oracle Cloud, the idea of sovereign cloud featured prominently. AWS's answer, its new Digital Sovereignty Pledge, doesn't introduce anything new. AWS has always let customers encrypt their data outside of AWS, and it has always been strict about never moving customer data across regional boundaries without direction from the customer. I suspect this new pledge is meant to provide a consistent answer for all the account execs, solution architects, and technical account managers when they're asked about AWS's position on sovereign cloud.
Today, AWS is announcing the preview of Amazon Verified Permissions, a scalable, fine-grained permissions management and authorization service for custom applications. With Amazon Verified Permissions, application developers can let their end users manage permissions and share access to data. For example, application developers can use Amazon Verified Permissions to define and manage fine grained permissions to determine which Amazon Cognito users have access to which application resources.
It will be very interesting to see what happens with this service. AWS is essentially trying to help developers leverage the same power and flexibility of AWS IAM in their own applications. It leverages a new policy language, Cedar. I'll be curious to see how it is accepted and ultimately adopted over a more standard policy framework like OPA.
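To make the idea concrete, here is a minimal sketch of what a Cedar-style policy and an authorization check might look like. The entity names (`User`, `Action`, `Photo`, `Album`) and the request shape are hypothetical illustrations, not confirmed API details; the service was still in preview at announcement time.

```python
# A minimal sketch of a Cedar-style policy (hypothetical entity names).
cedar_policy = """
permit(
    principal == User::"alice",
    action == Action::"viewPhoto",
    resource in Album::"vacation"
);
""".strip()

def build_is_authorized_request(policy_store_id, user, action, resource):
    """Assemble a plausible request shape for an authorization check.
    Field names are assumptions for illustration, not the confirmed API."""
    return {
        "policyStoreId": policy_store_id,
        "principal": {"entityType": "User", "entityId": user},
        "action": {"actionType": "Action", "actionId": action},
        "resource": {"entityType": "Photo", "entityId": resource},
    }
```

The appeal is the same as IAM: the allow/deny decision lives in declarative policy rather than scattered `if` statements in application code.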
Today AWS announces the preview release of AWS Verified Access, a new service that allows you to deliver secure access to corporate applications without a VPN. Built using AWS Zero Trust guiding principles, Verified Access helps you implement a work-from-anywhere model in a secure and scalable manner.
Another product announcement that might be more than it appears. Zero Trust is hard to implement, specifically for older organizations with technical and security debt from being born on-prem. Some have compared this to Worklink, but Worklink was more about exporting the pixels of an application that ran inside the corporate network. I think this is more along the lines of GCP's Identity Aware Proxy and might be the biggest unheralded release of re:Invent 2022.
Amazon Security Lake automatically centralizes security data from cloud, on-premises, and custom sources into a purpose-built data lake stored in your account. Security Lake makes it easier to analyze security data so that you can get a more complete understanding of your security across the entire organization. Security Lake automatically gathers and manages all your security data across accounts and Regions. You can use your preferred analytics tools while retaining control and ownership of your security data. Security Lake has adopted the Open Cybersecurity Schema Framework (OCSF), an open standard. It helps normalize and combine security data from AWS and a broad range of enterprise security data sources.
This offering seems late to the game, and perhaps only fits a small cadre of AWS customers. If you're a large enterprise with an established security team, on-prem infrastructure, or operate in multiple clouds, you've already built a centralized SIEM with much of this data. If you're a small and scrappy organization, you probably don't have the security team capacity to get meaningful risk-reduction for the cost of storing all this data. It seems this service was targeted at organizations of a specific size with a single-cloud mindset.
Amazon OpenSearch Service now offers a new serverless option, Amazon OpenSearch Serverless. This option simplifies the process of running petabyte-scale search and analytics workloads without having to configure, manage, or scale OpenSearch clusters. OpenSearch Serverless automatically provisions and scales the underlying resources to deliver fast data ingestion and query responses for even the most demanding and unpredictable workloads. With OpenSearch Serverless, you pay only for the resources consumed.
This service drew a lot of derision from the community because the cheapest Amazon OpenSearch Serverless deployment you can run is about $700/month. There are clearly two schools of thought on what "serverless" means. One school says serverless should allow you to "scale to zero," which this offering clearly does not. The second says serverless means "no dials to turn" when configuring a service for the right balance of price, performance, and capacity. Clearly AWS has adopted the latter as its definition of serverless.
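The ~$700/month floor is easy to reconstruct as a back-of-envelope calculation, assuming the launch pricing of roughly $0.24 per OCU-hour and a 4-OCU minimum (2 for indexing, 2 for search) — both figures are my assumptions, so check the current pricing page:

```python
# Back-of-envelope check of the ~$700/month floor.
OCU_HOURLY_RATE = 0.24   # USD per OCU-hour (assumed launch pricing)
MIN_OCUS = 4             # 2 indexing + 2 search minimum (assumption)
HOURS_PER_MONTH = 730    # average hours in a month

monthly_floor = OCU_HOURLY_RATE * MIN_OCUS * HOURS_PER_MONTH
print(f"Minimum monthly cost: ${monthly_floor:,.2f}")  # ≈ $700
```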
Today, AWS announces the preview of Amazon VPC Lattice, an application layer networking service that makes it simple to connect, secure, and monitor service-to-service communication. You can use VPC Lattice to enable cross-account, cross-VPC connectivity, and application layer load balancing for your workloads in a consistent way regardless of the underlying compute type – instances, containers, and serverless.
When I started doing AWS security, enforced network segmentation was one of the primary selling points of being in the cloud. Previously, our on-prem environment was one (mostly) wide-open flat network rife with lateral-movement opportunities. With AWS in particular, back then, every application was in its own VPC, and the only way to connect VPCs was via a Direct Connect and a firewall. That provided a good deal of security architecture, which has been eroded over time by VPC Peering, Transit Gateways, PrivateLink, and now VPC Lattice. If your organization relies on the innate network segmentation provided by VPCs, you'll want to take a long, hard look at what this new service will do to your security posture.
Today, AWS is launching a preview of Amazon CodeCatalyst, a unified software development service that makes it faster to build and deliver software on AWS. CodeCatalyst provides software development teams with an integrated project experience that brings together the tools needed to plan, code, build, test, and deploy applications on AWS. Software teams spend significant time and resources on collaborating effectively and setting up tools, development and deployment environments, and continuous integration and delivery (CI/CD) automation. These activities can detract from their ability to quickly deliver new features or software updates to customers. With CodeCatalyst, teams can automate many of these complex activities, so they can focus on quickly enhancing their applications and deploying them to AWS.
Only available in the Oregon region while in preview, this seems to offer an AWS-opinionated view of how to set up an entire software development life cycle (SDLC). Given that software developers often have strong opinions of their own, I'm not sure who this was created for and why. It probably wasn't created for you or your company, but that doesn't mean there aren't valuable ideas and patterns you or your organization could adopt. I'll be testing this out on my sandbox application at some point.
AWS Application Composer helps developers simplify and accelerate architecting, configuring, and building serverless applications. You can drag, drop, and connect AWS services into an application architecture by using AWS Application Composer’s browser-based visual canvas. AWS Application Composer helps you focus on building by maintaining deployment-ready infrastructure as code (IaC) definitions, complete with integration configuration for each service.
Announced by Werner in his keynote, this service promised to make event-driven architectures and IaC more adoptable by developers who don't have the deep cloud expertise that most serverless teams currently have. If you're in the security space, you might want to keep an eye on who uses this in your organization. If your security coding and cloud security outreach has been mostly focused on your developer community, this might require you to expand your awareness to new roles in your organization.
The remaining announcements were specific to certain verticals and aren't applicable to most of us:
- Announcing AWS Supply Chain (Preview)
- Introducing Amazon Omics
- Announcing AWS SimSpace Weaver
- Announcing the general availability of AWS Wickr
While touted as an end-to-end encrypted communications solution, Wickr's value is probably mostly in the Financial or Healthcare space where digital collaboration tools have strict regulatory requirements. It does make me wonder about the future of Amazon Chime.
New Security Features
While not new service announcements, AWS released several new features and capabilities to existing security services.
Amazon GuardDuty now offers threat detection for Amazon Aurora to identify potential threats to data stored in Aurora databases. Amazon GuardDuty RDS Protection profiles and monitors access activity to existing and new databases in your account, and uses tailored machine learning models to accurately detect suspicious logins to Aurora databases. Once a potential threat is detected, GuardDuty generates a security finding that includes database details and rich contextual information on the suspicious activity. RDS Protection is integrated with Aurora for direct access to database events without requiring you to modify your databases, and is designed not to affect database performance.
While we may never know, this announcement seems to be in response to "something bad" that happened. The GuardDuty pricing page doesn't list the price or what metrics are used for pricing, so I wouldn't run back to the office and immediately turn it on.
Amazon Inspector now supports AWS Lambda functions, adding continual, automated vulnerability assessments for Serverless compute workloads. With this expanded capability, Amazon Inspector now automatically discovers all eligible Lambda functions and identifies software vulnerabilities in application package dependencies used in the Lambda function code. All functions are initially assessed upon deployment to Lambda service and continually monitored and reassessed, informed by updates to the function and newly published vulnerabilities.
I often joke that I write all my applications in Lambda to avoid talking to my Vulnerability Management team. While Lambda functions have a smaller attack surface than containers and instances, that surface is not zero. I look forward to enabling this to see if I get meaningful findings or a pile of meaningless results with minimal risk context.
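If you do want to try it, the Inspector v2 `Enable` API lets you opt accounts into Lambda scanning. The request shape below follows my reading of the `inspector2` API; the account ID is a placeholder:

```python
# Sketch: opting accounts into Inspector's Lambda scanning.
# Request shape per my reading of the inspector2 Enable API.
def build_enable_request(account_ids):
    return {
        "accountIds": account_ids,
        "resourceTypes": ["LAMBDA"],  # add "EC2"/"ECR" if not already enabled
    }

# Uncomment to actually invoke (requires credentials and boto3):
# import boto3
# boto3.client("inspector2").enable(**build_enable_request(["111111111111"]))
```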
Today, AWS Key Management Service (AWS KMS) introduces the External Key Store (XKS), a new feature for customers who want to protect their data with encryption keys stored in an external key management system under their control. This capability brings new flexibility for customers to encrypt or decrypt data with cryptographic keys, independent authorization, and audit in an external key management system outside of AWS.
It's called XKS because EKS was already taken. I suspect this is another aspect of their Digital Sovereignty Pledge. However, if you don't absolutely require it, I recommend against it. You're trading away availability for confidentiality. You're not going to run an XKS cluster with the same high availability and durability as AWS runs KMS.
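For a sense of what adopting XKS involves, here is a sketch of the `CreateCustomKeyStore` request for an external key store. Parameter names follow my reading of the KMS API at launch; the endpoint, path, and credential values are placeholders you'd replace with your own XKS proxy details:

```python
# Sketch of registering an external key store with KMS.
# Parameter names per my reading of CreateCustomKeyStore; values are placeholders.
def build_xks_request():
    return {
        "CustomKeyStoreName": "example-xks",
        "CustomKeyStoreType": "EXTERNAL_KEY_STORE",
        "XksProxyConnectivity": "PUBLIC_ENDPOINT",  # or VPC_ENDPOINT_SERVICE
        "XksProxyUriEndpoint": "https://xks.example.com",
        "XksProxyUriPath": "/example/kms/xks/v1",
        "XksProxyAuthenticationCredential": {
            "AccessKeyId": "EXAMPLE-ACCESS-KEY-ID",
            "RawSecretAccessKey": "example-secret",
        },
    }
```

Note what this implies operationally: every KMS call now depends on your proxy and backing key manager being up, which is exactly the availability trade-off described above.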
We are excited to launch delegated administrator for AWS Organizations to help you delegate the management of your Organizations policies, enabling you to govern your AWS organization and member accounts with increased agility and decentralization. You can now allow individual lines of business, operating in member accounts, to manage policies specific to their needs. By specifying fine-grained permissions, you can balance flexibility with limiting access to your highly privileged management accounts.
This is the most baffling of the announcements to date. We tell everyone, "Don't use the organization's management (aka payer) account," because that's where SCPs are managed and accounts get closed. If policy management moves out of the management account, what purpose is left for the payer? It would have been better to delegate the billing functions and let FinOps have their own account.
AWS Config announces the ability to proactively check for compliance with AWS Config rules prior to resource provisioning. Customers use AWS Config to track the configuration changes made to their cloud resources and check if those resources match their desired configurations through a feature known as AWS Config rules. Proactive compliance allows customers to evaluate the configurations of their cloud resources before they are created or updated.
AWS Config has moved into the pre-deployment phase. Only a small number of proactive checks are available. What is unclear is: What happens when a proactive rule fails? Is the action denied? Can you pass IaC artifacts to AWS Config to ensure compliance before deploying or updating resources? I'm not a fan of AWS Config, no matter how hard I try to be. There are so many better solutions.
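For the curious, the preview exposes a `StartResourceEvaluation` API that appears to accept a CloudFormation-style resource definition, which suggests IaC artifacts can be checked before deployment. The request shape below is a sketch based on my reading of the preview documentation; treat the field names as assumptions:

```python
import json

# Sketch: pre-deployment evaluation of a resource definition against
# AWS Config rules via StartResourceEvaluation (preview; my reading).
def build_proactive_evaluation(bucket_name):
    resource_config = {"BucketName": bucket_name}  # CloudFormation-style properties
    return {
        "ResourceDetails": {
            "ResourceId": bucket_name,
            "ResourceType": "AWS::S3::Bucket",
            "ResourceConfiguration": json.dumps(resource_config),
            "ResourceConfigurationSchemaType": "CFN_RESOURCE_SCHEMA",
        },
        "EvaluationMode": "PROACTIVE",
    }
```

Even if this works as sketched, it evaluates one resource at a time, which is a long way from gating an entire CloudFormation stack.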
AWS CloudTrail Lake now integrates with AWS Config to support ingestion and query of configuration items. Now you can query and analyze both configuration items and CloudTrail activity logs in CloudTrail Lake, thereby simplifying and streamlining your security and compliance investigations. CloudTrail Lake enables security teams to perform retrospective investigations by helping answer who made what configuration changes to resources associated with security incidents such as data exfiltration or unauthorized access. CloudTrail Lake helps compliance engineers investigate noncompliant changes to their production environments by relating AWS Config rules with noncompliant status to who and what resource changes triggered them. IT teams can perform historical asset inventory analysis on configuration items using CloudTrail Lake’s default seven-year data retention period.
If you're using both of these services then this combination may provide value. Unlike CloudTrail, CloudTrail Lake is reasonably expensive and should only be deployed if you've got a good reason. You're paying up-front for that seven years of retention.
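As an illustration of the query model, here is the kind of SQL you'd run against a CloudTrail Lake event data store to answer "who changed that bucket policy?" The event data store ID is a placeholder, and the column names follow the CloudTrail event schema as I understand it:

```python
# Sketch of a CloudTrail Lake query correlating who changed what.
# The event data store ID is a placeholder.
EVENT_DATA_STORE_ID = "EXAMPLE-EVENT-DATA-STORE-ID"

query = f"""
SELECT eventTime, eventName, userIdentity.arn, requestParameters
FROM {EVENT_DATA_STORE_ID}
WHERE eventSource = 's3.amazonaws.com'
  AND eventName = 'PutBucketPolicy'
ORDER BY eventTime DESC
""".strip()

# Uncomment to actually run (requires credentials and boto3):
# import boto3
# boto3.client("cloudtrail").start_query(QueryStatement=query)
```

With the new integration, a similar query can join against ingested configuration items instead of raw API activity.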
AWS Nitro Enclaves now supports Amazon EKS and Kubernetes for orchestrating Nitro enclaves. You can now use familiar Kubernetes tools to orchestrate, scale, and deploy enclaves from a Kubernetes pod.
Nitro Enclaves and Kubernetes. Two services that most people do not need. Combined, there are even fewer people who need them. Do not burden your company's cloud architecture with your personal resume padding.
Amazon VPC Reachability Analyzer now supports network reachability analysis across accounts in an AWS Organization
Amazon VPC Reachability Analyzer now supports network reachability analysis between AWS resources across different AWS accounts in your AWS Organization, allowing you to trace and troubleshoot the network reachability across your AWS Organization.
A major limitation of the previous version was the lack of visibility once you left the account boundary. And let's face it, when you really need an AWS service to tell you where your network is broken, it's when you've got Transit Gateway Hubs, VPC peering, and cross-referenced security groups all misbehaving with each other. I suspect VPC Reachability Analyzer will start to get a lot more usage.
AWS Backup & Data Protection
With ransomware being the hot topic in 2021 and 2022, it's no surprise that AWS invested in their AWS Backup service. Having evaluated AWS Backup as a ransomware recovery strategy, I can say many of these features add significant value to the service.
- AWS Backup launches delegation of organization-wide backup administration
- AWS Backup introduces support for Amazon Redshift
- AWS Backup adds legal hold capability for extended data retention beyond lifecycle policies
- AWS Backup Audit Manager adds centralized reporting for AWS Organizations
- AWS Backup launches application-aware data protection for applications defined using AWS CloudFormation
One thing that is very important with the first announcement - ensure that access to your AWS Backup delegated administration account is fully segregated from end-user access. Consider excluding it from your single sign-on or provide only limited audit access. If a ransomware operator were to gain access to this account, they'd be able to render your backup unrecoverable.
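One hedged way to back up that segregation with a technical control is an SCP denying destructive AWS Backup actions except from a tightly held break-glass role. This is a sketch; the role name is a made-up illustration, and you'd want to expand the action list for your environment:

```python
# Illustrative SCP-style statement: deny destructive AWS Backup actions
# unless the caller is a break-glass role (hypothetical role name).
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyBackupDestruction",
        "Effect": "Deny",
        "Action": [
            "backup:DeleteBackupVault",
            "backup:DeleteRecoveryPoint",
            "backup:UpdateRecoveryPointLifecycle",
        ],
        "Resource": "*",
        "Condition": {
            "ArnNotLike": {
                "aws:PrincipalArn": "arn:aws:iam::*:role/BreakGlassBackupAdmin"
            }
        },
    }],
}
```

An SCP like this limits the blast radius even if an identity in the delegated administration account is compromised.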
We are pleased to announce automated sensitive data discovery, a new capability in Amazon Macie that provides continual, cost efficient, organization-wide visibility into where sensitive data resides across your Amazon Simple Storage Service (Amazon S3) estate. With this new capability, Macie automatically and intelligently samples and analyzes objects across your S3 buckets, inspecting them for sensitive data such as personally identifiable information (PII), financial data, and AWS credentials. Macie then builds and continuously maintains an interactive data map of where your sensitive data in S3 resides across all accounts and Regions where you’ve enabled Macie, and provides a sensitivity score for each bucket. Amazon Macie uses multiple automated techniques including resource clustering by attributes such as bucket name, file types, and prefixes to minimize the data scanning needed to uncover sensitive data in your S3 buckets. This helps you continuously identify and remediate data security risks without manual configuration and lowers the cost to monitor for and respond to data security risks.
If your organization is over a certain size or it's had one or two mergers or acquisitions, I guarantee you do not know where all the PII in your cloud estate resides. Automated sensitive data discovery is designed to balance the absurd cost of Macie targeted scanning ($1.00/GB) and the need to not leave sensitive backups lying around in a forgotten corner of S3.
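The "absurd cost" is easy to see with a little arithmetic at the quoted $1.00/GB targeted-scanning rate; the estate size below is a made-up illustration:

```python
# Rough cost of Macie targeted scanning at the quoted $1.00/GB.
GB_PER_TB = 1024
TARGETED_RATE = 1.00   # USD per GB, from the pricing quoted above

estate_tb = 50         # hypothetical S3 estate size
full_scan_cost = estate_tb * GB_PER_TB * TARGETED_RATE
print(f"Full targeted scan of {estate_tb} TB: ${full_scan_cost:,.0f}")  # $51,200
```

Automated discovery's sampling approach is the bet that you can find most of the risk without paying to read every byte.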
Announcing data protection in Amazon CloudWatch Logs, helping you detect, and protect sensitive data-in-transit
We are excited to announce data protection in Amazon CloudWatch Logs, a new set of capabilities that leverage pattern matching and machine learning capabilities to detect and protect sensitive log data-in-transit. Amazon CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services, in a single, highly scalable service. With log data protection in Amazon CloudWatch Logs, you can now detect and protect sensitive log data-in-transit logged by your systems, and applications.
This is a 24% premium on top of the CloudWatch Logs ingestion pricing. Use it if you have to, but your organization will better spend its money ensuring developers aren't writing PII to the logs in the first place. If Macie could scan CloudWatch Logs, that would be an excellent way to target the usage of this new feature.
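If you do turn it on, configuration is a data protection policy attached to a log group. The document below follows my reading of the policy schema (the `2021-06-01` version string and the managed data-identifier ARN are as I understand them; verify against the current docs). It audits and then masks email addresses:

```python
# Sketch of a CloudWatch Logs data protection policy (schema per my
# reading of PutDataProtectionPolicy): audit, then mask, email addresses.
policy_document = {
    "Name": "mask-pii",
    "Version": "2021-06-01",
    "Statement": [
        {
            "Sid": "audit",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
            "Operation": {"Audit": {"FindingsDestination": {}}},
        },
        {
            "Sid": "redact",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
            "Operation": {"Deidentify": {"MaskConfig": {}}},
        },
    ],
}
```

Masking at ingestion is a last line of defense; the cheaper fix is still keeping PII out of log statements.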
These final three may be helpful for ensuring data privacy in your organization:
- Amazon Redshift announces support for Dynamic Data Masking (Preview)
- Announcing AWS Data Exchange for Amazon S3 (Preview)
- Announcing AWS Data Exchange for AWS Lake Formation (Preview)
In addition to OpenSearch "serverless", there were a few other serverless announcements of note:
- AWS Step Functions launches large-scale parallel workflows for data processing and serverless applications
- Amazon EventBridge Pipes is now generally available
- Announcing AWS Lambda SnapStart for Java functions.
While only for Java, SnapStart has the potential to significantly improve one of Lambda's biggest failings: cold starts. There are many caveats about what customers need to do to make their code SnapStart safe, and security practitioners need to ensure their organizations are following these best practices. The new parallel workflows would have been helpful in the past when I'd used Spray & Pray to iterate over a number of AWS accounts in parallel.
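The core SnapStart-safety caveat generalizes beyond Java: state computed during initialization is captured in the snapshot and reused across restored execution environments, so anything that must be unique per invocation (request IDs, nonces, random seeds) belongs inside the handler. A runnable illustration of the anti-pattern and the fix:

```python
import uuid

# SnapStart caveat sketch: init-time state is snapshotted and reused, so
# uniqueness must be derived per invocation, not at module load time.

INIT_TIME_ID = uuid.uuid4()  # BAD: identical in every restored environment

def handler(event, context=None):
    request_id = uuid.uuid4()  # GOOD: fresh per invocation, SnapStart-safe
    return {"init_id": str(INIT_TIME_ID), "request_id": str(request_id)}
```

The same reasoning applies to cached credentials and ephemeral key material generated during init.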
New Instance Types
It's not re:Invent without new instance types. In his keynote, Adam said there are now over 600 of them, which to me doesn't seem like customer obsession.
- Introducing Amazon EC2 C6in instances
- Announcing Amazon EC2 C7gn instances (Preview)
- Introducing Amazon EC2 R7iz instances
- AWS announces Amazon EC2 Inf2 instances (Preview)
- Announcing Amazon EC2 Hpc6id instances
Amazon QuickSight now offers expanded API capabilities, allowing programmatic access to the underlying structure of QuickSight dashboards and analyses with the AWS Software Development Kit. The new and expanded APIs let customers and developers treat QuickSight assets like software code and integrate with DevOps processes, such as code reviews, audits, and promotion across development and production environments.
I only highlight this because this seems to be a way to run user-supplied code in QuickSight. That often presents an opportunity for security researchers looking for a Bug Bounty.
Looking at many of these service announcements, I get the impression that 2022 was the year AWS played catch-up with its competitors. Oracle and Microsoft were building sovereign clouds. AWS released XKS and its sovereign pledge. GCP had flat networks and identity-aware proxies. AWS released Lattice and Verified Access. Even Security Lake may be a response to Google's Chronicle services.
AWS likes to listen to its customers, and clearly, AWS's customers have been asking for things the other cloud providers already offer.