Tarsnap $1000 exploit bounty

For somewhat over four years, Tarsnap has been offering bounties for bugs found in the Tarsnap code. Two thirds of the bounties Tarsnap has paid out have been $1 each for cosmetic bugs (e.g., typos in source code comments), and a quarter have been $10 each for harmless bugs — mostly memory leaks in error paths where the tarsnap client is about to exit anyway — but there have also been some more serious bugs: several build-breakage bugs ($20 each); a variety of cases where tarsnap behaviour is wrong in a user-visible — but generally very obscure — way ($50 each); a few crashes ($100 each); and of course the critical crypto bug which first convinced me to offer bounties.

Most bugs are straightforward, but occasionally one comes up which is not so clear in its impact. Such is the case with a bug which is fixed in tarsnap 1.0.36. This bug causes the NUL string termination byte to overflow the heap-allocated buffer used for paths of objects examined as tarsnap traverses a directory tree; such one-byte heap overflows have been shown to be exploitable in the past. In the case of tarsnap, I will be very surprised if it turns out to be possible to cause anything worse than a crash, but I can't absolutely rule out the possibility.

In light of this, Tarsnap is offering a $1000 exploit bounty: The first person before the end of 2015 who can convincingly demonstrate a serious exploitation of this bug will receive $1000. While there are many organizations which pay more than this for exploits, I think this is a reasonable prize: After all, I'm already telling you what the bug is which you need to exploit! Fine print: No bounty if you're in Iran, North Korea, or some other problem countries. Bounties are awarded at my sole discretion; in particular, I get to decide whether the "convincingly demonstrate" and "serious exploitation" conditions are satisfied. Payment by US dollar check or PayPal. To avoid races, contact me before publishing anything. If you can't accept cash prizes, the bounty can be donated to a mutually-acceptable charity of your choice.

There are two reasons I'm skeptical about the exploitability of this bug. First, the lengths of buffers which can be overflowed are very limited: Just powers of 2 times 1 kB, and only once per size. Sane mallocs will place allocations of these sizes into pages alongside other 1024- or 2048-byte allocations, or allocate entire pages (for 4 kB and larger allocations), and I can't see any "interesting" data structures in tarsnap which we would overflow into. Second, tarsnap's "controllable surface area" is quite limited: Unlike a web server or a setuid program which can be attacked interactively, the avenues for attacking tarsnap seem to be limited to creating interesting directory trees for it to archive and possibly meddling with its network traffic — with that second option being very limited since all of tarsnap's traffic is both encrypted and signed.

But I'm far from an expert on exploitation techniques. In my time as FreeBSD Security Officer, I only wrote exploits for two bugs; for the rest I was satisfied with "this looks like it could plausibly be exploited, so we need to issue a security advisory for it". (There was one occasion when we didn't issue an advisory for a plausible looking "vulnerability", but that was because I discovered by accident that the code path which theoretically yielded arbitrary code execution actually crashed a dozen lines earlier due to an independent bug.) I know there are people who are far far better at this sort of thing than me, enjoy challenges, and may also care about the bounty; I'm hoping some of them will be interested enough to try their hands at this.

The main reason I'm offering a bounty is broader than this specific bug, however. As a researcher, I like to support research, including research into software exploitation techniques; and as a developer I'd like to know what techniques can be applied to exploiting bugs in my code specifically. The more I know, the better I can assess the impact of future bugs; and the more I know, the more I can look for mitigation techniques which will help to reduce the impact of future bugs.

A bug is a bug is a bug, and I would have fixed this bug even if I could prove that it was impossible to exploit. Similarly, I hope that every tarsnap user upgrades to the latest code — there are plenty of other bugs fixed in tarsnap 1.0.36, and even if nobody claims this bounty it's entirely possible that someone figured out how to exploit it but decided to hold on to that for their own personal use. (Rule #1 of bug bounties: No matter how much money you're offering, assume that someone evil found the bug first and didn't report it.)

But knowledge is power, and Tarsnap is fortunately in a position to be able to pay for this. So please, go look for ways that this bug can be exploited — and if you can't manage that, maybe at least you'll find some other bugs which you can win a bounty for.

UPDATE: Thomas Ptacek (founder of Starfighter and general security expert) has offered to match this bounty. So there's now $2000 in the pot for an exploit.

Posted at 2015-08-21 14:00

"(null)" is a bad idea

What happens if you compile and run the following simple C program?
#include <stdio.h>

int
main(int argc, char * argv[])
{

	printf("%s\n", NULL);
}
If you believe the C standard, you may get demons flying out of your nose. Most developers who understand the implications of NULL pointers would assume that the program crashes. Unfortunately, on some misguided operating systems, the program exits successfully — after printing the string "(null)".

This is an example of an anti-pattern known sometimes as "defensive programming": If something goes wrong, pretend you didn't notice and try to keep going anyway. Now, there are places for this approach; for example, if you're writing code for a rocket, a plane, or a car where having your software crash is likely to result in... well, a crash. For most code, however, a crash is unlikely to have such serious consequences; and in fact may be very useful in two ways.

The first way a software crash can be useful is by making it immediately clear that a bug exists — and, if you have core dumps enabled, making it easier to track down where the bug is occurring. Now, in the case of printing "(null)" to the standard output, this is probably clear already; but if NULL were being passed to sprintf instead, the resulting string might be used in a way which conceals its bogosity. (In the case of the bug which inspired this blog post, the constructed string was being used as a path, and the resulting "file not found" error looked like an entirely plausible outcome of looking up that path, so it did nothing to reveal the underlying bug.) During the software development and testing process, anything which results in bugs being found faster is a great help.

The second way a software crash can be useful is by mitigating security vulnerabilities. The case of BIND is illustrative here: BIND 4 and BIND 8 were famous within the security community for their horrible track records. BIND 9, in an attempt to avoid all of the problems of earlier versions, was a complete ground-up rewrite — and it is still responsible for an astonishing number of security advisories. However, there is a critical difference in the types of vulnerabilities: While earlier versions of BIND could be exploited in a variety of scary ways, vulnerabilities in BIND 9 almost always take the form "an attacker can cause an assertion to fail, resulting in a denial of service". If something weird happens — say, a NULL pointer shows up somewhere that a NULL pointer shouldn't show up — after software has gone through a lengthy testing process and been released, it's far less likely to be happening by accident; and so it's even more important to abort rather than blindly trying to recover.

Undefined behaviour is undefined behaviour, and developers shouldn't assume that passing a NULL pointer to a function which dereferences its arguments will result in a crash; that's what assert() is for. But at the same time, authors of libraries have a choice when it comes to what behaviour they provide in such circumstances, and when faced with a choice between making the presence of a bug immediately obvious and trying to hide the bug while potentially exposing serious security vulnerabilities... well, I know which I would prefer.

Posted at 2015-08-13 03:05
