Security Spotlight

There Weren't Really Chinese Backdoors in Military Chips

What happened and unsolicited advice

In March, Cambridge researcher Sergei Skorobogatov and Quo Vadis Labs researcher Christopher Woods put up a draft paper on a cool new technique they used to ‘disable all the security’ on a security-enabled chip.

It sat there until around May 28, when (it seems) someone found it and put it up on Reddit, where it exploded (source: Skorobogatov’s web page on the brouhaha).

Then the stories hit the tech press, most of which apparently failed to read the primary source. They include: ‘Chinese ‘backdoor’ in U.S.-used chips?’, ‘Proof That Military Chips From China Are Infected?’ (this latter article all the more inflammatory because the URL says ‘Smoking Gun’ in it, too), ‘Backdoor Found In China-Made US Military Chip?’, ‘Security backdoor found in China-made US military chip’, ‘Report: Chinese built US military chip has a back door’, and so on.

The best one of them I found was Simon Sharwood’s ‘Researchers find backdoor in milspec silicon’ from The Register. He obviously went to the primary source and avoided the web speculation that so many other stories just breathlessly regurgitated. I love The Register for their snark and attitude, and when they’re the ones leading in taste and restraint, you know things have gotten out of hand.

Then came some raised eyebrows, followed by updates, retractions and more. We now know, among other things, that:

  • It wasn’t a Chinese chip. It comes from a US chip house, Microsemi, which outsources manufacturing, sometimes if not always, to China.
  • It may or may not be a military chip. It has military customers, but it hasn’t been approved for government or military secrets.
  • The paper is going to be presented at CHES (Cryptographic Hardware and Embedded Systems) in September 2012.
  • The backdoor they found may or may not have been documented. The real controversy revolves around this. They found a debugging interface that was surprising. Why it’s there, what it does, and so on, is a very good question.
  • Even assuming the worst, physical access is needed (in general) to exploit the backdoor.

(Note: I am in no way criticizing Skorobogatov and Woods. Even when researchers are expansive in their claims, it’s the responsibility of journalists to do their own analysis, especially when they’re reporting on a pre-publication paper and especially especially when they get the source through Reddit or some other Internet aggregator. This is, in fact, the real story: what they found, what it means, and so on. With luck, there will be more after CHES in September.)

After the media frenzy, Skorobogatov posted his own discussion of what’s what on his web site, and he and Woods put up the full version of their CHES paper.

Even before that, there was plenty of counter-analysis on the Internet as well. I don’t mean to short-change any of the people who were sceptics, but you can google things as well as I can.

On May 31, Microsemi released their response to the media storm, and Skorobogatov and Woods responded in turn. Both are well worth reading, in part because if you compare them they don’t actually disagree on many points, and you learn that Skorobogatov has been prodding Microsemi’s chips since 2001 and they still haven’t given him their CSO’s mobile number.

I’m not just being snarky here. I’ve spent a lot of time as a senior technical person at a company that creates security products. I understand that people who go and hack your stuff are annoying. I know it’s more annoying when they publish their results without telling you. It’s even more annoying when they’ve tried to tell you and no one was listening. That’s why your security policy needs to include, on your contact web page, clear instructions for how to reach someone.

There’s no real difference between a surprising backdoor and a debug or system management interface. Such things are often necessary: if you don’t put in debug or management interfaces, then the likely result of a stupid setup error is that the device is permanently ruined. I’ve worked on devices that had precisely that characteristic. Real, unbreakable security means that a failure results in data loss, and that is not always a good thing.

Nonetheless, if someone takes a liking to your product and expresses their love by breaking it, swallow your pride and treat it as the opportunity it is. People like that often charge between £200 and £300 per hour, and this person is doing a better job for free than the person you’re paying to break your products. Oh, you aren’t paying someone to break your products? Well, you should.

I understand that these relationships are hard to start. The researcher doesn’t want to do anything that sounds like extortion, and the company doesn’t want to do anything that smacks of buying the researcher off. But that’s why I recommend buying someone who breaks your product a beer. If you enjoy the conversation, it can go on to dinner. If dinner turns out to be enjoyable, call the researcher back and have a follow-on conversation.

At the very least, being friends with people who break your products is a good idea: the next time they (or someone they know) break something, you become the first to know and not the last.

Jon Callas is a renowned information security expert and CTO of Entrust.

Jon previously co-founded and was CTO of PGP Corporation, and served a stint as Security Privateer for Apple. His work in security policy supported the end of US cryptography export restrictions and helped secure the modern Internet.