Oil changes, safety recalls, and software patches

Every few months I get an email from my local mechanic reminding me that it's time to get my car's oil changed. I generally ignore these emails; it costs time and money to get this done (I'm sure I could do it myself, but the time it would cost is worth more than the money it would save) and I drive little enough — about 2000 km/year — that I'm not too worried about the consequences of going for a bit longer than nominally advised between oil changes. I do get oil changes done... but typically once every 8-12 months, rather than the recommended 4-6 months. From what I've seen, I don't think I'm alone in taking a somewhat lackadaisical approach to routine oil changes.

On the other hand, there's another type of notification which elicits more prompt attention: Safety recalls. There are two good reasons for this: First, whether for vehicles, food, or other products, the risk of ignoring a safety recall is not merely that the product will break, but rather that the product will be actively unsafe; and second, when there's a safety recall you don't have to pay for the replacement or fix — the cost is covered by the manufacturer.

I started thinking about this distinction — and more specifically the difference in user behaviour — in the aftermath of the "WannaCry" malware. While WannaCry attracted widespread attention for its "ransomware" nature, the more concerning aspect of this incident is how it propagated: By exploiting a vulnerability in SMB for which Microsoft issued patches two months earlier. As someone who works in computer security, I find this horrifying — and I was particularly concerned when I heard that the NHS was postponing surgeries because they couldn't access patient records. Think about it: If the NHS couldn't access patient records due to WannaCry, it suggests WannaCry infiltrated systems used to access patient records — meaning that someone else exploiting the same vulnerabilities could have accessed those records. The SMB subsystem in Windows was not merely broken; until patches were applied, it was actively unsafe.

I imagine that most people in my industry would agree that security patches should be treated in the same way as safety recalls — unless you're certain that you're not affected, take care of them as a matter of urgency — but it seems that far too many users instead treat security patches more like oil changes: something to be taken care of when convenient... or not at all, if not convenient. It's easy to say that such users are wrong; but as an industry it's time we think about why they are wrong rather than merely blaming them for their problems.

There are a few factors which I think are major contributors to this problem. First, the number of updates: When critical patches occur frequently enough to become routine, alarm fatigue sets in and people cease to give the attention updates deserve, even if on a conscious level they still recognize the importance of applying updates. Easy problem to identify, hard problem to address: We need to start writing code with fewer security vulnerabilities.

Second, there is a long and sad history of patches breaking things. In a few cases this is because something only worked by accident — an example famous in the FreeBSD community is the SA-05:03.amd64 vulnerability, which accidentally made it possible to launch the X server while running as an unprivileged user — but more often it is simply the result of a mistake. While I appreciate that there is often an urgency to releasing patches, and limited personnel (especially for open source software), releasing broken patches is something which it is absolutely vital to avoid — because it doesn't only break systems, but also contributes to a lack of trust in software updates. During my time as FreeBSD Security Officer, regardless of who on the security team was taking responsibility for preparing a patch and writing the advisory, I refused to sign and release advisories until I was convinced that our patch both fixed the problem and didn't accidentally break anything else; in some cases this meant that our advisories went out a few hours later, but in far more cases it ensured that we released one advisory rather than a first advisory followed by a second "whoops, we broke something" follow-up a few days later. My target was always that our track record should be enough that FreeBSD users would be comfortable blindly downloading and installing updates on their production systems, without spending time looking at the code or deploying to test systems first — because some day there will be a security update which they don't have time to look over carefully before installing.

The problems of the large volume of patches and of their reputation for breaking things are made worse by the fact that many systems use the same mechanism for distributing both security fixes and other changes — bug fixes and new features. This has become a common pattern largely in the name of user friendliness — why force users to learn two systems when we can do everything through a single update mechanism? — but I worry that it is ultimately counterproductive: presenting updates through the same channel tends to conflate them in the minds of users, with the result that critical security updates end up getting only the casual attention appropriate to a routine feature update. Even if the underlying technology used for fetching and installing updates is the same, exposing different types of updates through different interfaces might well result in better user behaviour. My bank sends me special offers in the mail but phones if my credit card usage trips fraud alarms; this is the sort of distinction in intrusiveness we should see for different types of software updates.

Finally, I think there is a problem with the mental model most people have of computer security. Movies portray attackers as geniuses who can break into any system in minutes; journalists routinely warn people that "nobody is safe"; and insurance companies offer insurance against "cyberattacks" in much the same way as they offer insurance against tornadoes. Faced with this wall of misinformation, it's not surprising that people get confused between 400-pound hackers sitting on beds and actual advanced persistent threats. Yes, if the NSA wants to break into your computer, they can probably do it — but most attackers are not the NSA, just like most burglars are not Ethan Hunt. You lock your front door, not because you think it will protect you from the most determined thieves, but because it's an easy step which dramatically reduces your risk from opportunistic attack; but users don't see applying security updates as the equivalent of locking their front door when they leave home.

Computer security is a mess; there's no denying that. Vendors publishing code with thousands of critical vulnerabilities and government agencies which stockpile these vulnerabilities rather than helping to fix them certainly do nothing to help. But WannaCry could have been completely prevented if users had taken the time to install the fixes provided by Microsoft — if they had seen the updates as being something critical rather than an annoyance to put off until the next convenient weekend.

As a community, it's time for computer security professionals to think about the complete lifecycle of software vulnerabilities. It's not enough for us to find vulnerabilities, figure out how to fix them, and make the updates available; we need to start thinking about the final step of how to ensure that end users actually install the updates we provide. Unless we manage to do that, there will be a lot more crying in the years to come.

Posted at 2017-06-14 04:40

A plan for open source software maintainers

I've been writing open source software for about 15 years now; while I'm still wet behind the ears compared to FreeBSD greybeards like Kirk McKusick and Poul-Henning Kamp, I've been around for long enough to start noticing some patterns. In particular:

It seems to me that this is a case where problems are in fact solutions to other problems. To wit:

I'd like to see this situation get fixed. As I envision it, a solution would look something like a cross between Patreon and Bugzilla: Users would be able to sign up to "support" projects of their choosing, with a number of dollars per month (possibly arbitrary amounts, possibly specified tiers; maybe including $0/month), and would be able to open issues. These could be private (e.g., for "technical support" requests) or public (e.g., for bugs and feature requests); users would be able to indicate their interest in public issues created by other users. Developers would get to see the open issues, along with a nominal "value" computed by allocating the incoming dollars of "support contracts" across the issues each user has expressed an interest in, allowing them to focus on the issues with the greatest impact.
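
To make the "nominal value" idea concrete, here is a minimal sketch of one way it could be computed; the data layout and the issue_values function are my own invention for illustration, not part of any existing service:

    # Rough sketch: split each supporter's monthly dollars evenly across the
    # issues they have expressed interest in, then sum the shares per issue.
    def issue_values(supporters):
        """supporters: iterable of (dollars_per_month, list_of_issue_ids)."""
        values = {}
        for dollars, issues in supporters:
            if not issues:
                continue  # a supporter with no marked issues adds value to none
            share = dollars / len(issues)
            for issue in issues:
                values[issue] = values.get(issue, 0.0) + share
        return values

    # Example: $10/month split across issues 7 and 12, plus $5/month on issue 7.
    print(issue_values([(10.0, [7, 12]), (5.0, [7])]))  # {7: 10.0, 12: 5.0}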

I have three questions for my readers:

  1. If this existed, would you — an open source software developer — sign up and use it?
  2. If this existed, would you — a user of open source software — be likely to pay to support, and receive support from, an open source software developer?
  3. Does anyone want to build this?

Pending the answers to the first two questions, I'm enthusiastic about this idea; if I weren't already running Tarsnap I would probably go ahead and build this myself. But I'm not Jeff Bezos, and there's no way I could run two startup companies at once; so finding someone else interested in building this would be crucial.

I think something like this could fill an important gap in the world of open source software. Maybe I'm crazy. Feedback requested.


FAQ

Q: Are you talking about bounties for open source software development? There are websites for doing that.
A: No! This would be different from bounties in several ways:

  1. Sponsors would be supporting a specific developer, not setting aside money for whoever comes along and claims it. This is important because bounties tend to result in horrible "drive-by coding" which causes problems for maintainers later.
  2. This would be about providing ongoing funding for a developer, which would allow them to spend time on long-term code maintenance, not just chasing after the latest in-demand feature.
  3. While there would be a "dollar value" attached to issues, that value would just be informational — telling the developer how much users care about that particular issue. The developer would get money from their monthly subscribers regardless of which issues they address. (Of course, if they don't respond to issues they may find that supporters cancel their ongoing funding.)

Posted at 2017-05-11 04:10

Cheating on a string theory exam

And now for something completely different: I've been enjoying FiveThirtyEight's "The Riddler" puzzles. A couple of weeks ago I submitted a puzzle of my own; but I haven't heard back, and it's too cute a puzzle not to share, so I've decided to post it here.

You have to take a 90-minute string theory exam consisting of 23 true-false questions, but unfortunately you know absolutely nothing about the subject. You have a friend who will be writing the exam at the same time as you, is able to answer all of the questions in a fraction of the allotted time, and is willing to help you cheat — but the proctors are alert and will throw you out if they suspect him of communicating any information to you.

You and your friend have watches which are synchronized to the second, and the proctors are used to him often finishing exams quickly and won't be suspicious if he leaves early. What is the largest value N such that you can guarantee that you answer at least N out of the 23 questions correctly?

Extra credit: Explain the connection between your solution and the topic of the exam.

For clarity: No, you cannot communicate information based on which hand your ambidextrous friend uses to write his exam answers, based on how quickly he walks on his way out of the exam, or any other channels not stated: The proctors will catch you and you'll get a zero on the exam. The only information you can receive is a single value: When your friend left the exam.

Posted at 2017-02-21 03:50

IPv6 on FreeBSD/EC2

A few hours ago Amazon announced that they had rolled out IPv6 support in EC2 to 15 regions — everywhere except the Beijing region, apparently. This seems as good a time as any to write about using IPv6 in EC2 on FreeBSD instances.

First, the good news: Future FreeBSD releases will support IPv6 "out of the box" on EC2. I committed changes to HEAD last week, and merged them to the stable/11 branch moments ago, to have FreeBSD automatically use whatever IPv6 addresses EC2 makes available to it.

Next, the annoying news: To get IPv6 support in EC2 from existing FreeBSD releases (10.3, 11.0) you'll need to run a few simple commands. I consider this unfortunate but inevitable: While Amazon has been unusually helpful recently, there's nothing they could have done to get support for their IPv6 networking configuration into FreeBSD a year before they launched it.

To enable IPv6 support in an existing FreeBSD EC2 instance, you'll need to do three things:

  1. Install the net/dual-dhclient port:
    # pkg install dual-dhclient
    
  2. Add accept_rtadv to the appropriate ifconfig line in your /etc/rc.conf file, e.g.,
    ifconfig_DEFAULT="SYNCDHCP accept_rtadv"
    
  3. Add two more lines to your /etc/rc.conf file:
    ipv6_activate_all_interfaces="YES"
    dhclient_program="/usr/local/sbin/dual-dhclient"
    

If you want to launch a new FreeBSD/EC2 instance with IPv6 support, the following configinit script can be provided as the user-data upon instance launch:

>>/etc/rc.conf
firstboot_pkgs_list="dual-dhclient awscli"
ifconfig_DEFAULT="SYNCDHCP accept_rtadv"
ipv6_activate_all_interfaces="YES"
dhclient_program="/usr/local/sbin/dual-dhclient"

This tells configinit to add four lines to /etc/rc.conf, and the firstboot-pkgs tool will then install the dual-dhclient package as part of the initial system boot.

Third, the bad news: Enabling IPv6 support in EC2 is an absurdly lengthy process (and this is true regardless of what operating system you're running in EC2). You'll need to:

  1. Add an IPv6 address range to your VPC. (In the AWS Management Console: VPC -> Your VPCs -> right click on a VPC -> edit CIDRs -> Add IPv6 CIDR.) Most EC2 users will need to do this once for each EC2 region they're using.
  2. Add an IPv6 address range to each subnet. (VPC -> Subnets -> right click, Edit IPv6 CIDRs -> Add IPv6 CIDR.) Most EC2 users will need to do this once for each EC2 availability zone they're using.
  3. (Not necessary, but probably a good idea:) Enable auto-assignment of IPv6 addresses. (VPC -> Subnets -> right click on a subnet -> Modify auto-assign IP settings -> Enable auto-assign IPv6 address.) Again, once per availability zone; if you don't do this, you'll need to explicitly ask for an IPv6 address to be assigned for each new EC2 instance.
  4. (Not necessary, and probably not a good idea:) Create an Egress Only Internet Gateway. This is Amazon's attempt to reproduce the "you can't get there from here" semantics of IPv4 NAT networking in IPv6; rather than relying on this sort of broken network configuration, I'd recommend restricting access to your instances using EC2 Security Groups.
  5. Specify default routes for IPv6. (VPC -> Route Tables -> select a route table -> Routes tab -> Edit -> Add another route; add a destination of "::/0", select your Internet Gateway or Egress Only Internet Gateway, and click Save.) Most EC2 users will need to do this once for each EC2 region they're using.
  6. Add IPv6 to your Security Groups. (EC2 -> Network & Security -> Security Groups -> right click on the Security Group -> Edit inbound rules -> make your changes, noting that the "Anywhere" source now includes the IPv6 wildcard "::/0".)
  7. Remember that if you launch EC2 instances via the console, the new "launch-wizard-N" groups created by default will probably not include IPv6.
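
For readers who would rather script these steps than click through the console, here is a rough sketch using boto3 (the AWS SDK for Python) rather than the awscli command line; it covers only steps 1, 2, 3, 5, and 6 above, the region and resource IDs are placeholders, error handling is omitted, and the calls should be checked against current AWS documentation before you rely on them:

    # Sketch of scripting steps 1, 2, 3, 5, and 6 above with boto3.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

    # Step 1: request an Amazon-provided IPv6 CIDR block for the VPC.
    ec2.associate_vpc_cidr_block(VpcId="vpc-XXXXXXXX",
                                 AmazonProvidedIpv6CidrBlock=True)

    # Step 2: assign a /64 from that block to a subnet (placeholder CIDR).
    ec2.associate_subnet_cidr_block(SubnetId="subnet-XXXXXXXX",
                                    Ipv6CidrBlock="2600:1f18:1234:5600::/64")

    # Step 3: auto-assign IPv6 addresses to instances launched in the subnet.
    ec2.modify_subnet_attribute(SubnetId="subnet-XXXXXXXX",
                                AssignIpv6AddressOnCreation={"Value": True})

    # Step 5: add an IPv6 default route pointing at the Internet Gateway.
    ec2.create_route(RouteTableId="rtb-XXXXXXXX",
                     DestinationIpv6CidrBlock="::/0",
                     GatewayId="igw-XXXXXXXX")

    # Step 6: allow inbound SSH over IPv6 in a Security Group.
    ec2.authorize_security_group_ingress(
        GroupId="sg-XXXXXXXX",
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
                        "Ipv6Ranges": [{"CidrIpv6": "::/0"}]}])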

Finally, one important caveat: While EC2 is clearly the most important place to have IPv6 support, and one which many of us have been waiting a long time to get, this is not the only service where IPv6 support is important. Of particular concern to me, Application Load Balancer support for IPv6 is still missing in many regions, and Elastic Load Balancers in VPC don't support IPv6 at all — which matters to those of us who run non-HTTP services. Make sure that IPv6 support has been rolled out for all the services you need before you start migrating.

Posted at 2017-01-26 07:15

A very valuable vulnerability

While I very firmly wear a white hat, it is useful to be able to consider things from the perspective of the bad guys, in order to assess the likelihood of a vulnerability being exploited and its potential impact. For the subset of bad guys who exploit security vulnerabilities for profit — as opposed to selling them to spy agencies, for example — I imagine that there are some criteria which would tend to make a vulnerability more valuable: Much to my surprise, a few weeks ago I stumbled across a vulnerability satisfying every one of these criteria.

The vulnerability — which has since been fixed, or else I would not be writing about it publicly — was in Stripe's bitcoin payment functionality. Some background for readers not familiar with this: Stripe provides payment processing services, originally for credit cards but now also supporting ACH, Apple Pay, Alipay, and Bitcoin, and was designed to be the payment platform which developers would want to use; in very much the way that Amazon fixed the computing infrastructure problem with S3 and EC2 by presenting storage and compute functionality via simple APIs, Stripe fixed the "getting money from customers online" problem. I use Stripe at my startup, Tarsnap, and was in fact the first user of Stripe's support for Bitcoin payments: Tarsnap has an unusually geeky and privacy-conscious user base, so this functionality was quite popular among Tarsnap users.

Despite being eager to accept Bitcoin payments, I don't want to actually handle bitcoins; Tarsnap's services are priced in US dollars, and that's what I ultimately want to receive. Stripe abstracts this away for me: I tell Stripe that I want $X, and it tells me how many bitcoins my customer should send and to what address; when the bitcoin turns up, I get the US dollars I asked for. Naturally, since the exchange rate between dollars and bitcoins fluctuates, Stripe can't guarantee the exchange rate forever; instead, they guarantee the rate for 10 minutes (presumably they figured out that the exchange rate volatility is low enough that they won't lose much money over the course of 10 minutes). If the "bitcoin receiver" isn't filled within 10 minutes, incoming coins are converted at the current exchange rate.

For a variety of reasons, it is sometimes necessary to refund bitcoin transactions: For example, a customer cancelling their order; accidentally sending in the wrong number of bitcoins; or even sending in the correct number of bitcoins, but not within the requisite time window, resulting in their value being lower than necessary. Consequently, Stripe allows for bitcoin transactions to be refunded — with the caveat that, for obvious reasons, Stripe refunds the same value of bitcoins, not the same number of bitcoins. (This is analogous to currency exchange issues with credit cards — if you use a Canadian dollar credit card to buy something in US dollars and then get a refund later, the equal USD amount will typically not translate to an equal number of CAD refunded to your credit card.)

The vulnerability lay in the exchange rate handling. As I mentioned above, Stripe guarantees an exchange rate for 10 minutes; if the requisite number of bitcoins arrive within that window, the exchange rate is locked in. So far so good; but what Stripe did not intend was that the exchange rate was locked in permanently — and applied to any future bitcoins sent to the same address.
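
To make the bug concrete, here is a minimal sketch — my own illustration, not Stripe's actual implementation — of the difference between the intended behaviour and what actually happened:

    import time

    RATE_LOCK_SECONDS = 600  # the advertised 10-minute exchange rate guarantee

    def intended_rate(receiver, current_rate, now=None):
        """Honour the locked-in exchange rate only within the 10-minute window."""
        now = time.time() if now is None else now
        if now - receiver["created_at"] <= RATE_LOCK_SECONDS:
            return receiver["locked_rate"]
        return current_rate

    def buggy_rate(receiver, current_rate):
        """The bug amounted to skipping the window check entirely: any later
        payment to the same receiver address used the original locked rate."""
        return receiver["locked_rate"]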

This made a very simple attack possible:

  1. Pay for something using bitcoin.
  2. Wait until the price of bitcoin drops.
  3. Send more bitcoins to the address used for the initial payment.
  4. Ask for a refund of the excess bitcoin.

Because the exchange rate used in step 3 was the one locked in at step 1, this allowed bitcoins to be multiplied by the ratio between the old exchange rate and the new one; if step 1 took place on July 2nd and steps 3/4 on August 2nd, for example, an arbitrary number of bitcoins could be increased by 30% in a matter of minutes. Moreover, the attacker does not need an account with Stripe; they merely need to find a merchant which uses Stripe for bitcoin payments and is willing to click "refund payment" (or even better, is set up to automatically refund bitcoin overpayments).
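
As a worked example with made-up exchange rates (not the actual July and August 2016 prices), the arithmetic looks something like this:

    # Hypothetical rates, chosen only to illustrate a ~30% amplification.
    locked_rate = 600.0      # USD per BTC when the bitcoin receiver was created
    current_rate = 460.0     # USD per BTC a month later, after the price drops

    sent_btc = 100.0                             # step 3: coins sent to the old address
    credited_usd = sent_btc * locked_rate        # credited at the stale locked rate
    refunded_btc = credited_usd / current_rate   # step 4: refunded at today's rate

    print(refunded_btc)   # ~130.4 BTC back for 100 BTC in: roughly a 30% gain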

Needless to say, I reported this to Stripe immediately. Fortunately, their website includes a GPG key and advertises a vulnerability disclosure reward (a.k.a. bug bounty) program; these are two things I recommend every company do, because they advertise that you take security seriously and help to ensure that when people stumble across vulnerabilities they'll let you know. (As it happens, I already had Stripe security's public GPG key and like them enough that I would have taken the time to report this even without a bounty; but it's important to maximize the odds of receiving vulnerability reports.) Since it was late on a Friday afternoon and I was concerned about how easily this could be exploited, I also hopped onto Stripe's IRC channel to ask one of the Stripe employees there to relay a message to their security team: "Check your email before you go home!"

Stripe's handling of this issue was exemplary. They responded promptly to confirm that they had received my report and reproduced the issue locally; and a few days later followed up to let me know that they had tracked down the code responsible for this misbehaviour and that it had been fixed. They also awarded me a bug bounty — one significantly in excess of the $500 they advertise, too.

As I remarked six years ago, Isaac Asimov's remark that in science "Eureka!" is less exciting than "That's funny..." applies equally to security vulnerabilities. I didn't notice this issue because I was looking for ways to exploit bitcoin exchange rates; I noticed it because a Tarsnap customer accidentally sent bitcoins to an old address and the number of coins he got back when I clicked "refund" was significantly less than what he had sent in. (Stripe has corrected this "anti-exploitation" of the vulnerability.) It's important to keep your eyes open; and it's important to encourage your customers to keep their eyes open, which is the largest advantage of bug bounty programs — and why Tarsnap's bug bounty program offers rewards for all bugs, not just those which turn out to be vulnerabilities.

And if you have code which handles fluctuating exchange rates... now might be a good time to double-check that you're always using the right exchange rates.

Posted at 2016-10-28 19:00
