Tarsnap $1000 exploit bounty

For somewhat over four years, Tarsnap has been offering bounties for bugs found in the Tarsnap code. Two thirds of the bounties Tarsnap has paid out have been $1 each for cosmetic bugs (e.g., typos in source code comments), and a quarter have been $10 each for harmless bugs — mostly memory leaks in error paths where the tarsnap client is about to exit anyway. But there have also been some more serious bugs: several build-breakage bugs ($20 each); a variety of cases where tarsnap behaviour is wrong in a user-visible — but generally very obscure — way ($50 each); a few crashes ($100 each); and of course the critical crypto bug which first convinced me to offer bounties.

Most bugs are straightforward, but occasionally one comes up which is not so clear in its impact. Such is the case with a bug which is fixed in tarsnap 1.0.36. This bug causes the NUL string termination byte to overflow the heap-allocated buffer used for paths of objects examined as tarsnap traverses a directory tree; such one-byte heap overflows have been shown to be exploitable in the past. In the case of tarsnap, I will be very surprised if it turns out to be possible to cause anything worse than a crash, but I can't absolutely rule out the possibility.
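
To make the shape of the bug concrete, here is a minimal sketch of the general bug class, using hypothetical names rather than tarsnap's actual code: an exact-fit allocation which forgets to leave room for the terminating NUL byte.

#include <stdlib.h>
#include <string.h>

/* Join dir and fname into a newly allocated path (illustrative only). */
char *
pathcat(const char * dir, const char * fname)
{
	size_t len = strlen(dir) + 1 + strlen(fname);
	char * path = malloc(len);	/* Bug: should be malloc(len + 1). */

	if (path == NULL)
		return (NULL);
	strcpy(path, dir);
	strcat(path, "/");
	strcat(path, fname);	/* The terminating NUL lands at path[len],
				 * one byte past the end of the buffer. */
	return (path);
}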

In light of this, Tarsnap is offering a $1000 exploit bounty: The first person who, before the end of 2015, can convincingly demonstrate a serious exploitation of this bug will receive $1000. While there are many organizations which pay more than this for exploits, I think this is a reasonable prize: After all, I'm already telling you which bug you need to exploit! Fine print: No bounty if you're in Iran, North Korea, or some other problem countries. Bounties are awarded at my sole discretion; in particular, I get to decide whether the "convincingly demonstrate" and "serious exploitation" conditions are satisfied. Payment by US dollar check or PayPal. To avoid races, contact me before publishing anything. If you can't accept cash prizes, the bounty can be donated to a mutually-acceptable charity of your choice.

There are two reasons I'm skeptical about the exploitability of this bug. First, the lengths of buffers which can be overflowed are very limited: Just powers of 2 times 1 kB, and only once per size. Sane mallocs will put these sizes of allocations into pages with other 1024- or 2048-byte allocations, or allocate entire pages (for 4 kB and larger allocations), and I can't see any "interesting" data structures in tarsnap which we would overflow into. Second, tarsnap's "controllable surface area" is quite limited: Unlike a web server or a setuid program which can be attacked interactively, the avenues for attacking tarsnap seem to be limited to creating interesting directory trees for it to archive and possibly meddling with its network traffic — with that second option being very limited since all of tarsnap's traffic is both encrypted and signed.

But I'm far from an expert on exploitation techniques. In my time as FreeBSD Security Officer, I only wrote exploits for two bugs; for the rest I was satisfied with "this looks like it could plausibly be exploited, so we need to issue a security advisory for it". (There was one occasion when we didn't issue an advisory for a plausible-looking "vulnerability", but that was because I discovered by accident that the code path which theoretically yielded arbitrary code execution actually crashed a dozen lines earlier due to an independent bug.) I know there are people who are far, far better at this sort of thing than me, enjoy challenges, and may also care about the bounty; I'm hoping some of them will be interested enough to try their hands at this.

The main reason I'm offering a bounty is broader than this specific bug, however. As a researcher, I like to support research, including research into software exploitation techniques; and as a developer I'd like to know what techniques can be applied to exploiting bugs in my code specifically. The more I know, the better I can assess the impact of future bugs; and the more I know, the more I can look for mitigation techniques which will help to reduce the impact of future bugs.

A bug is a bug is a bug, and I would have fixed this bug even if I could prove that it was impossible to exploit. Similarly, I hope that every tarsnap user upgrades to the latest code — there are plenty of other bugs fixed in tarsnap 1.0.36, and even if nobody claims this bounty it's entirely possible that someone figured out how to exploit it but decided to hold on to that for their own personal use. (Rule #1 of bug bounties: No matter how much money you're offering, assume that someone evil found the bug first and didn't report it.)

But knowledge is power, and Tarsnap is fortunately in a position to be able to pay for this. So please, go look for ways that this bug can be exploited — and if you can't manage that, maybe at least you'll find some other bugs which you can win a bounty for.

UPDATE: Thomas Ptacek (founder of Starfighter and general security expert) has offered to match this bounty. So there's now $2000 in the pot for an exploit.

Posted at 2015-08-21 14:00

"(null)" is a bad idea

What happens if you compile and run the following simple C program?

#include <stdio.h>

int
main(int argc, char * argv[])
{

	printf("%s\n", NULL);
}

If you believe the C standard, you may get demons flying out of your nose. Most developers who understand the implications of NULL pointers would assume that the program crashes. Unfortunately, on some misguided operating systems, the program exits successfully — after printing the string "(null)".

This is an example of an anti-pattern sometimes known as "defensive programming": If something goes wrong, pretend you didn't notice and try to keep going anyway. Now, there are places for this approach; for example, if you're writing code for a rocket, a plane, or a car, where having your software crash is likely to result in... well, a crash. For most code, however, a crash is unlikely to have such serious consequences; in fact, it may be very useful in two ways.

The first way a software crash can be useful is by making it immediately clear that a bug exists — and, if you have core dumps enabled, making it easier to track down where the bug is occurring. Now, in the case of printing "(null)" to the standard output, this is probably clear already; but if NULL were being passed to sprintf instead, the resulting string might be used in a way which conceals its bogosity. (In the case of the bug which inspired this blog post, the constructed string was being used as a path, and the resulting "file not found" error looked just like an anticipated result of looking up that path.) During the software development and testing process, anything which results in bugs being found faster is a great help.
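
As a hedged illustration of that failure mode (the function and path here are hypothetical, not the code which inspired this post): on a system where %s quietly accepts NULL, the bad pointer turns into an ordinary-looking failure several steps later.

#include <stdio.h>

/* Hypothetical example: suppose home is NULL because of an earlier bug. */
void
loadconf(const char * home)
{
	char path[1024];
	FILE * f;

	snprintf(path, sizeof(path), "%s/.appconfig", home);
	/* On a "forgiving" libc, path is now "(null)/.appconfig";
	 * fopen fails with a perfectly ordinary "file not found",
	 * and the real bug stays hidden. */
	if ((f = fopen(path, "r")) == NULL)
		return;
	fclose(f);
}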

The second way a software crash can be useful is by mitigating security vulnerabilities. The case of BIND is illustrative here: BIND 4 and BIND 8 were famous within the security community for their horrible track records. BIND 9, in an attempt to avoid all of the problems of earlier versions, was a complete ground-up rewrite — and it is still responsible for an astonishing number of security advisories. However, there is a critical difference in the types of vulnerabilities: While earlier versions of BIND could be exploited in a variety of scary ways, vulnerabilities in BIND 9 almost always take the form "an attacker can cause an assertion to fail, resulting in a denial of service". If something weird happens — say, a NULL pointer shows up somewhere that a NULL pointer shouldn't show up — after software has gone through a lengthy testing process and been released, it's far less likely to be happening by accident; and so it's even more important to abort rather than blindly trying to recover.

Undefined behaviour is undefined behaviour, and developers shouldn't assume that passing a NULL pointer to a function which dereferences its arguments will result in a crash; that's what assert() is for. But at the same time, authors of libraries get to decide what behaviour they provide in such circumstances; and when faced with a choice between making the presence of a bug immediately obvious and trying to hide it (potentially exposing serious security vulnerabilities)... well, I know which I would prefer.
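
A minimal sketch of that pattern, again with hypothetical names: make the precondition explicit at the function boundary instead of hoping that the dereference faults.

#include <assert.h>
#include <string.h>

/* Hypothetical library routine: name must never be NULL here. */
size_t
namelen(const char * name)
{

	/* Fail fast and loudly if a NULL pointer shows up where one
	 * never should, rather than relying on undefined behaviour to
	 * produce a crash (or, worse, a "(null)" string). */
	assert(name != NULL);
	return (strlen(name));
}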

Posted at 2015-08-13 03:05

make ec2ami

As my regular readers will be aware, I have been working on bringing FreeBSD to the Amazon EC2 platform for many years. While I have been providing pre-built FreeBSD/EC2 AMIs for a while, and last year I wrote about my process for building images, I have been told repeatedly that FreeBSD users would like to have a more streamlined process for building images. As of a few minutes ago, this is now available in the FreeBSD src tree: make ec2ami

This code makes use of the FreeBSD release-building framework — in particular the work done by Glen Barber to create virtual-machine images — and the bsdec2-image-upload tool I wrote a few months ago. Once you have installed bsdec2-image-upload (from pkg or the ports tree) and set some AWS parameters (AWS key file, region, and S3 bucket to be used for staging) in /etc/make.conf (a sample is sketched after the commands below), building an AMI takes just two commands:

# cd /usr/src && make buildworld buildkernel
# cd /usr/src/release && make ec2ami
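
For reference, an /etc/make.conf for this might look roughly like the following. The variable names here are illustrative assumptions on my part; consult release/Makefile.ec2 in the src tree for the exact knobs.

# AWS parameters for "make ec2ami". Names are illustrative; check
# release/Makefile.ec2 for the exact variables.
AWSKEYFILE=/root/aws.key
AWSREGION=us-east-1
AWSBUCKET=my-staging-bucket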

This will build an AMI in the EC2 region you specified; if you want to copy the AMI out to all EC2 regions and make it public, you can do this by adding an extra option:

# cd /usr/src && make buildworld buildkernel
# cd /usr/src/release && make ec2ami EC2PUBLIC=YES

I did this a few hours ago, and after about an hour of bits being copied around, I had this on my console:

Created AMI in us-east-1 region: ami-aa0532c2
Created AMI in us-west-1 region: ami-bb2cccff
Created AMI in us-west-2 region: ami-534b6263
Created AMI in sa-east-1 region: ami-1315ae0e
Created AMI in eu-west-1 region: ami-55b12b22
Created AMI in eu-central-1 region: ami-4a97aa57
Created AMI in ap-northeast-1 region: ami-1de21f1d
Created AMI in ap-southeast-1 region: ami-a66f5cf4
Created AMI in ap-southeast-2 region: ami-2df98a17

Building an AMI in one command: Could it really get any easier?

Posted at 2015-04-01 00:00

FreeBSD 10 iwn problems

Apologies to my regular readers: This post will probably not interest you; rather than an item of general interest, I'm writing here for the benefit of anyone who is running into a very specific bug I encountered on FreeBSD. Hint for Googlebot: If someone is looking for FreeBSD 10 iwn dies or iwn stops working on FreeBSD 10, this is the right place.

A few weeks ago I updated an old laptop from FreeBSD 9.3 to FreeBSD 10.1, and I soon found that the Intel Centrino Advanced-N 6205 wifi (using the iwn(4) driver) would stop working after a few hours. Or to be more precise: After being connected to my wifi network for a while, it would drop the connection and attempt to reconnect; but wpa_supplicant would not be able to rejoin the network. Restarting the device via ifconfig iwn0 down; ifconfig iwn0 up would make it work again — until the next time it died, that is.

It turns out that there are some bugs in 802.11n support in the FreeBSD 10.1 iwn(4) driver and/or 802.11 infrastructure. These are fixed in HEAD, but for 10.1 they can be worked around by disabling 802.11n, i.e., by adding -ht to the ifconfig line for the interface. In my /etc/rc.conf I now have:

wlans_iwn0="wlan0"
ifconfig_wlan0="-ht WPA"

and everything is working perfectly.

Thanks to Adrian Chadd for helping me track down the problem and suggesting the workaround.

Posted at 2015-01-13 05:00

When security goes right

I've written a lot over the years about ways that companies have gotten security wrong; as a pedagogical technique, I find that it is very effective, since people tend to remember those stories better. Today, I'd like to tell a different story: A story about how a problem was fixed.

A few weeks ago, I heard that my slow and unreliable ISP was raising their prices, and I decided to look around for a better ISP. I found one in the form of Novus, which has fibre runs to towers around the Vancouver area. I signed up; they came to the building and placed the necessary patch cable in the basement telecommunications room; and I had 50/10 Mbps internet access costing less than my previous 25/2.5 Mbps.

Then I went onto Novus' account management website and found my bandwidth utilization graph. "Cool", I thought, "they're using RRDtool. I wonder if they're generating the standard day/week/month/year graphs, or just the monthly graph." Right click, open in new tab, look at URL. "Huh, that's interesting, it's just a path to a .gif file, not a CGI invocation. Well, maybe they're rewriting usage_graphs to the directory with the graphs for whichever user is logged in. What do I get if I issue a request for the directory itself?"

Oops. Rather than a list of traffic graphs for my account, I received an Apache directory listing with thousands upon thousands of traffic graphs — all the traffic graphs ever created for their customers, in fact — and a few random clicks confirmed that I could indeed download those graphs. Being familiar with RRDtool, I knew exactly how this had happened: One of the common configurations generates traffic graphs on demand but writes them into a directory to be fetched separately. This can save CPU time compared to regenerating graphs on every request, since the web server can simply serve an existing graph file; but as in this case it isn't always a safe configuration to use.
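
To be concrete about that pattern, here is a sketch, with hypothetical paths and data-source names, of how such a setup typically works: the account page runs a command along these lines, then embeds the freshly written static file, which Apache serves like any other image.

# Regenerate this account's graph into a web-served directory
# (all paths and names here are hypothetical).
rrdtool graph "/var/www/usage_graphs/${ACCT}.png" \
    DEF:in="/var/db/rrd/${ACCT}.rrd":traffic_in:AVERAGE \
    LINE1:in#0000FF:"Inbound traffic"
# The account page then embeds /usage_graphs/${ACCT}.png directly.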

Time to do something about this; but how? I've learned from experience that front-line technical support should only be contacted as a last resort; alas, the Shibboleet backdoor is not widely implemented. Not only do front-line technical support personnel tend to waste time because they don't understand the issues involved, they often mangle problem reports while forwarding them — more than once I've cornered an engineer and said "so, you never fixed this issue I reported a few months ago..." only to get an amazed "so that's what that ticket was about" reply.

Since I didn't want to phone the technical support line, I looked for another contact. No "security" contact on their website, unfortunately... I could have contacted them on Twitter, but I didn't really want to explain the issue in public (and wouldn't necessarily have reached anyone any more clueful)... Aha! They have valid whois data for their domain. Even better, they have separate "Administrative" and "Technical" contacts — and when companies do this, it's a good indication that the "Technical" contact will go to someone who is actually technical. (For ISPs, whois data for IP address blocks is also often useful; that would have taken me to the same person.)

He wasn't answering his phone when I first called, but ten minutes later I got through:

"Hi, I got your phone number from Novus' whois data. The RRDtool graphs for all your customers are visible. If I go to $URL I can see a list of the graphs and download any of them."

"That's not good... uhh, who are you?"

After I identified myself (and explained that I was a new customer) he promised to get the problem fixed right away, and within ten minutes Apache was sending 403 (Forbidden) responses for the directory; this solved most of the problem, but the file names were predictably based on the customer account number, so knowing someone's account number could still allow an attacker to fetch their traffic graphs. A short time later they started checking Referer headers; but as I pointed out to them, sending a fake Referer header is easy, so that only helps against the most careless attackers. I suggested several options for solving this, and they chose the simplest: Use a cron job to delete all the generated graphs every minute. While this does leave a very slight exposure, it's narrow enough to be inconsequential: In order to illicitly fetch a customer traffic graph, an attacker would need to be a logged-in Novus customer (that filtering was in place from the start), know the account number of their victim, and issue the request within 60 seconds of the victim viewing their traffic graph. (A completely robust approach would be to have code which checks the graph file name against the account number of the logged-in user, but writing new code always risks introducing new bugs, so I really can't fault them for taking the 99.99% solution.)
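
For concreteness, the moving parts of a fix along those lines might look roughly like this; the paths are hypothetical, and this is a sketch of the approach rather than Novus' actual configuration.

# Apache configuration: stop generating directory listings, so a
# request for the directory itself now draws a 403.
<Directory "/var/www/usage_graphs">
    Options -Indexes
</Directory>

# root's crontab: discard generated graphs every minute, leaving
# only a ~60-second window to fetch a graph by account number.
* * * * * rm -f /var/www/usage_graphs/*.gif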

After fixing the issue, they sent me an email thanking me, and — much to my surprise — telling me that as a sign of their appreciation they were crediting a month of free internet access to my account. While I award bounties to people who report bugs (security or otherwise) in my company's product, I don't expect other companies to do likewise... especially if they don't publish any such bounties.

Obviously, Novus made a mistake in their original configuration, but in the spirit of Christmas, this post isn't about mistakes: It's about what went right. So what went right here?

First, they were using widely used open source code; if I hadn't been familiar with it, I wouldn't have noticed the problem. Bad guys will always dig deeper than unpaid good guys; so if you're going to benefit from having many eyeballs looking at what you're doing, it's much better if your bugs are shallow.

Second, I saw something and I said something. This is just good citizenship; I could have saved myself half an hour by ignoring the issue, but having seen it I felt that I had a moral duty to report it.

Third, I was able to find a way to report the problem to someone who would understand it and be able to fix it. If you're running a company and you're serious about security, put a page on your website with a GPG key and a security contact email address — this will guarantee that people who find issues will know where to send them, as well as announcing that you will take the issues seriously. If you don't want to go that far, at a minimum you should make sure that you have a valid technical whois contact for your domain: People who think to look up domain whois records are the people you want to hear from.

Fourth, I didn't ask for a "consulting fee" or any other remuneration for disclosing the vulnerability. This is something people new to the field often do, and it's a bad idea. Seriously kids, don't do this. You're not likely to get any money out of it, and you are likely to cross the line into extortion and end up in jail.

Fifth, while I had to "exploit" the vulnerability — loading other customers' traffic graphs — in order to confirm that it existed, I did so only to the minimum extent necessary. Unlike Andrew "weev" Auernheimer, upon finding that information had been accidentally made publicly available I did not proceed to download all of it; I knew that I was not intended to have that information, so I had no moral right to access it regardless of the questionable legality of doing so.

And finally, Novus immediately took the issue seriously and remained in contact with me to ensure that the issue was resolved. Many companies would file this into a bug tracker and have a sysadmin look at it months later; Novus gave it the attention it deserved, and when I wrote back to point out that their Referer-checking fix was inadequate they went back to apply a further fix.

Understanding the technical aspects of security is great, but soft skills matter too. Goodwill to all isn't just a Christmas message; it's also good security.

Posted at 2014-12-25 23:35
