Hacker News

36 Comments:
userbinator said 3 months ago:

IMHO, contrary to the tone of the article, this is cause for celebration, like every other time DRM is broken. Now all the proprietary firmware in those otherwise useless/insecure IoT devices etc. can be more easily reverse-engineered and replaced, possibly driving more hardware reuse and reducing e-waste.

http://www.gnu.org/philosophy/right-to-read.en.html

rini17 said 3 months ago:

Who is going to do all the reverse-engineering? And by the time it's done, the chip will be obsolete; the manufacturer can iterate anytime with a newer, source-compatible-only version.

the_pwner224 said 3 months ago:

DRM can be used for good as well as bad; it is about control. Most DRM is used by corporations to control you. But you can use things like this readout protection and Restricted ("Secure") Boot to get control over your own devices. If the BIOS lets you use your own keys (many do), Restricted Boot prevents attackers from booting unauthorized software on your computer. Similarly, readout protection just hides the code on the device; this is useful for anyone who wants added security (security through obscurity is not perfect, but it is always helpful).

In that regard this news is bad since it means a security tool has been broken. But it is good in that the security tool was very often used by evildoers.

Of course both of these examples rely on you trusting the bios & rdp implementation - ideally they would be open source.

saagarjha said 3 months ago:

> Similarly readout protection just hides the code on the device; this is useful for anyone who wants added security (security through obscurity is not perfect but it is always helpful).

Trying to hide your code is a stupid thing to do. Trying to hide cryptographic keys is more useful (though still often only used for DRM applications) but preventing people from dumping your firmware is misguided.

zimmerfrei said 3 months ago:

> Trying to hide your code is a stupid thing to do

That's not what I hear from reputable reverse engineers, at least for IoT devices.

Even though security by obscurity should be frowned upon, and it's understood that hiding code might give a false sense of security, most RE workflows assume the firmware is available, and it is a giant pain to start breaking a platform where the unknown firmware must be manually extracted first, especially the boot loader.

maxbond said 3 months ago:

I agree.

I've said before on this site that obscurity is a totally valid tactic to impose additional costs on attackers. It's best to think of it as a preemptive strike rather than a defensive layer. Thinking of it as a defensive layer can lead to complacency, but thinking of it as an opening gambit is totally fine.

One shouldn't be obsequious to heuristics like "never use security through obscurity," one should understand the systems they're building and make considered choices.

Additionally, preventing people from dumping your firmware is usually not about security as much as it is preventing some fly-by-night company from reversing your product & selling it as their own. Why engineer a product when you can steal someone else's IP?

saagarjha said 3 months ago:

It also gives security engineers who are looking for bugs in your code more reason to hate you, and heaven forbid that they have less perseverance than a determined attacker. Also, what’s so special about your slow and buggy libc and statically linked in crypto?

maxbond said 3 months ago:

The salary of the engineer who built it.

If you hire those security engineers, you can give them access to the source code. If you didn't hire them, there is no way to tell benign attackers from other attackers, and they can deal. It probably won't give them much trouble anyway. :)

saagarjha said 3 months ago:

> If you didn't hire them, there is no way to tell benign attackers from other attackers, and they can deal.

The better way to deal with this is to make your firmware secure regardless of whether someone can pull it off your device. Making security engineers’ lives difficult just means that you’ll find out about bugs from the news, when they’re being sold to authoritarian countries to suppress dissidents, rather than from the paper on firmware security that a graduate student was going to write before they decided to move to a much nicer platform.

maxbond said 3 months ago:

This is baffling to me for two reasons.

The first is that my thesis is explicitly that this is not a defense and is not an excuse for poor memory handling etc. in your firmware. (And the more I invest in creating a robust firmware, the more I stand to lose if someone rips off my product & undercuts me - security risks are not the only type of risk.)

The second is that the notion that I should rely on the charity of unpaid graduate students to discover bugs in my firmware is both inequitable and unsound.

baybal2 said 3 months ago:

Firmware extraction from the STM32 family costs around $20-25k in China, including the hardened varieties, while getting crypto keys out of a Gemalto part runs around $10k.

Not a big impediment.

maxbond said 3 months ago:

In what universe is $20k not a big impediment? Did you mean $20?

baybal2 said 3 months ago:

$20K is nothing for a consumer product manufacturing run.

For 9 out of 10 consumer devices, no factory will even listen to you if you are not willing to commit to putting forward $1m.

Only really simple, small gadgets are profitably manufacturable for under $100k.

saagarjha said 3 months ago:

In the universe of a persistent attacker?

zdkl said 3 months ago:

> obscurity is a totally valid tactic to impose additional costs on attackers

But there's the rub: you'll only impose additional costs on the least sophisticated/determined adversaries. While that works to keep random script kiddies/scans out, I'd argue it has little to no effect if you require serious security guarantees.

maxbond said 3 months ago:

It imposes costs on all attackers. The value of that cost is skill dependent, but no one has unlimited time on their hands. In other contexts, like hiding an admin login page, shutting out low skill attackers means your log files have better signal to noise, and you can focus more resources on the more significant threats.

The reason I say to think of it as a preemptive strike rather than a defense is that you still do need strong defensive layers.

This is basically just setting a compiler flag. It's free for you and costs something for the attacker.
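
(On the STM32 specifically it's an option byte rather than a compiler flag, but the point stands: it costs almost nothing. A minimal sketch of what enabling it looks like from firmware, assuming ST's stm32f1xx HAL names; treat it as illustrative rather than production code:)

    #include "stm32f1xx_hal.h"

    /* Set readout protection (RDP) level 1 via the option bytes.
     * Takes effect after the option-byte reload below. */
    void enable_rdp_level1(void)
    {
        FLASH_OBProgramInitTypeDef ob = {0};
        ob.OptionType = OPTIONBYTE_RDP;
        ob.RDPLevel   = OB_RDP_LEVEL_1;

        HAL_FLASH_Unlock();
        HAL_FLASH_OB_Unlock();
        HAL_FLASHEx_OBProgram(&ob); /* program the option byte */
        HAL_FLASH_OB_Launch();      /* reload option bytes (resets the MCU) */
    }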

zwirbl said 3 months ago:

At least the bar for entry is a little bit higher; that in itself might be worth it.

saagarjha said 3 months ago:

As someone who is currently working on a firmware reverse-engineering project (with others who actually know a lot more about what they’re talking about!) pulling tricks like these is just a massive annoyance that we’ll usually get around anyways; we’ll just curse you the entire time we’re doing it.

bfrog said 3 months ago:

Where do you think keys are stored on occasion?

saagarjha said 3 months ago:

:(

ohazi said 3 months ago:

To be fair, "restricting" flash readout while allowing hardware debug access always seemed like a minefield, and I would hope that anyone with a security sensitive application would have seen this from a mile away.

You could have a completely bug-free, constant-time, constant-power cryptographic library running on one of these microcontrollers, and debug access would allow you to reliably extract encryption keys just by examining the execution path.

The amount of processor and system state that you have access to with a hardware ARM debugger is crazy, but that isn't really the problem -- you can extract a ton of state with a minimal debugger too. Just a log of instruction pointer values would get you 90% of the way there.

I think it's reasonable to assume that microcontrollers with exposed debug interfaces simply cannot be made secure, just as people generally assume that it's game over once someone has physical access to a computer.
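
To make the instruction-pointer point concrete: even a naive key check leaks through the PC trace alone, because the early exit fires at a key-dependent iteration. A made-up illustration, not from any real firmware:

    #include <stddef.h>
    #include <stdint.h>

    /* Naive comparison: bails out on the first mismatch. A debugger
     * (or even a bare log of PC values) sees exactly where the early
     * return happens, so the key can be recovered byte by byte. */
    int check_key_leaky(const uint8_t *guess, const uint8_t *key, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (guess[i] != key[i])
                return 0; /* reached at a key-dependent iteration */
        }
        return 1;
    }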

mrlambchop said 3 months ago:

Yup - this exactly. The JTAG fuses should be blown on all devices that need to secure their flash (or secrets).

Working on these specific processors around 5 years ago, we implemented a serial-port-based "unlock": the device would generate a challenge, and if the response was correctly acknowledged, it would unlock the JTAG while the chip had power (it locks again when it loses power). This worked great - we spent a lot of time on the UART driver to make sure it was super simple and robust during the period when it could listen to incoming bytes (no interrupts etc...).
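
For the curious, the general shape of such an unlock looks something like the sketch below. All the helper names (uart_*, trng_fill, hmac_sha256, debug_port_enable) are hypothetical stand-ins, not the actual implementation described above:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical helpers: a blocking UART, a TRNG, an HMAC, and a
     * register write that re-enables the debug port until power-off. */
    extern void uart_read(uint8_t *buf, uint32_t len);
    extern void uart_write(const uint8_t *buf, uint32_t len);
    extern void trng_fill(uint8_t *buf, uint32_t len);
    extern void hmac_sha256(const uint8_t *key, uint32_t keylen,
                            const uint8_t *msg, uint32_t msglen,
                            uint8_t out[32]);
    extern void debug_port_enable(void);
    extern const uint8_t unlock_key[32]; /* per-device secret */

    void handle_unlock_request(void)
    {
        uint8_t nonce[16], expected[32], response[32];

        trng_fill(nonce, sizeof nonce);       /* challenge */
        uart_write(nonce, sizeof nonce);

        hmac_sha256(unlock_key, 32, nonce, sizeof nonce, expected);
        uart_read(response, sizeof response); /* host's answer */

        /* NB: a real implementation should compare in constant time */
        if (memcmp(response, expected, sizeof expected) == 0)
            debug_port_enable();              /* unlocked until power loss */
    }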

remcob said 3 months ago:

Modern cryptography libraries make sure that the execution path (and memory access patterns) do not depend on sensitive data. Usually this is what is meant by 'constant-time'.

If a debugger can read out registers or memory you can just read out the sensitive material of course.
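
For example, the standard constant-time comparison accumulates the difference instead of branching on it, so the execution path is identical whether or not the inputs match (a generic sketch, not from any particular library):

    #include <stddef.h>
    #include <stdint.h>

    /* Constant-time equality check: the same instructions execute for
     * every input; only the accumulated 'diff' value differs. */
    int ct_equal(const uint8_t *a, const uint8_t *b, size_t n)
    {
        uint8_t diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }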

ohazi said 3 months ago:

Whoops... Yeah, you're right. You wouldn't expect to see instructions like bne, bge, etc. that depend on key material directly, so you wouldn't be able to rely on the instruction pointer alone.

Instead, you might see instructions like addlt, so you'd also need to inspect the value before and after, which, as you correctly state, the debugger will happily let you do.
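
In C terms that's the branchless-select pattern: no key-dependent jump for a PC log to reveal, but the selected value still passes through a register where a debugger can read it before and after (again a generic sketch):

    #include <stdint.h>

    /* Branchless select: typically compiles to conditional or masked
     * instructions rather than a branch, so the instruction stream is
     * data-independent. The values flowing through the registers are
     * still fully visible to an attached debugger. */
    uint32_t ct_select(uint32_t cond, uint32_t a, uint32_t b)
    {
        uint32_t mask = -(uint32_t)(cond != 0); /* all-ones or zero */
        return (a & mask) | (b & ~mask);
    }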

kosma said 3 months ago:

I've talked to Johannes Obermaier in the past... very nice guy. It's not their first bypass, and hopefully not the last either.

PS. I actually have yet another STM32F1 RDP bypass in my archive, waiting to be published. It used a technique where the MCU writes its own debug registers... pretty crazy stuff. If only I had some free time to write a proper publication about it...

userbinator said 3 months ago:

> If only I had some free time to write a proper publication about it...

You could just drop some hints on a hardware forum and let the community figure out the rest.

leggomylibro said 3 months ago:

I think you can thank this sort of hack for the widespread availability of cheap cloned "ST-Link" debuggers. They use STM32F103 or F102 chips inside, with firmware that was probably lifted from the debuggers on ST's evaluation boards.

As recently as a few years ago, it was unusual to see standalone debugging hardware in the $2-20 range. Sometimes I wonder if ST bristled at the...reuse...of their IP, but it probably did more to promote STM32s as a learning platform than anything that ST did in that time period.

userbinator said 3 months ago:

> but it probably did more to promote STM32s as a learning platform than anything that ST did in that time period.

...and thus drive further product sales in the future. If you think about it, sales of development hardware are going to be neither frequent nor recurring, while sales of the actual product dominate their profits.

I'm personally glad that companies are starting to see the advantages of freely available documentation and cheap development hardware; the days of 4/5-figure development boards with secret NDA documentation are slowly passing. ST was (and in some ways still is) one of the notoriously closed ones.

osamagirl69 said 3 months ago:

I am not sure if it was always the case, but at least with ST and NXP/Freescale you can download the firmware for their debuggers from their websites for free. I suspect that it was a strategic decision by ST to release their dev kits cheap (<$10 for an STM32 dev board with programmer!) to drive developer/hobbyist/edu interest, in hopes of people using their chips in production.

Come to think of it, I think it was actually TI and the MSP430 that started the trend, with the $4.30 kits with a socketed MSP430 micro and onboard programmer. ST was the first to try it with an ARM as far as I know...

nrp said 3 months ago:

Both were likely a response to Arduino presumably increasing the adoption of AVR without Atmel having to do anything. I recall the MSP430 kits being pitched that way in any case.

saagarjha said 3 months ago:

> They use STM32F103 or F102 chips inside, with firmware that was probably lifted from the debuggers on ST's evaluation boards.

I wonder if it was lifted using a ST-Link debugger…

fest said 3 months ago:

There's actually even more than this to the low price. I have seen knock-off ST-Link dongles with STM32F103C8 MCUs that are not supposed to have enough flash memory for the ST-Link firmware, yet they functioned.

How? It turns out they physically have more flash memory, but the accessible flash area is limited to 64KiB by the programming tools and documentation (most likely price segmentation, but maybe there's a flash page remapping mechanism that would allow ST to bin devices based on manufacturing yield).

markrages said 3 months ago:

There is an open-source firmware that is better in many ways. https://github.com/blacksphere/blackmagic

You can even debug on a $2 "blue pill" or wirelessly on a $1 esp8266 board. It has certainly not hindered STM32 popularity.

ajross said 3 months ago:

tl;dr: The processor protects data accesses to the internal flash while the hardware debugger is connected, so that people with hardware access can't read out the code and config. But this protection only applies to the data side of the Harvard-architecture buses. The instruction bus is used by the hardware to fetch the reset vector on a hardware reset, and the vector table is under software control. So by changing the reset vector to point to an arbitrary address in flash, then resetting the CPU under the debugger, you can get it to load your desired word from memory into the PC.

Pretty clever.
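
If I've read that right, the host-side readout loop would look roughly like the sketch below. The probe_* helpers are hypothetical stand-ins for whatever SWD/JTAG library is actually used; on a Cortex-M, reset loads SP from the vector base and PC from vector base + 4, so each reset leaks two words:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical debug-probe primitives (stand-ins, not a real API). */
    extern void     probe_set_vector_base(uint32_t addr); /* point reset vectors at addr */
    extern void     probe_reset_halt(void);               /* reset the core, halt it */
    extern uint32_t probe_read_core_reg(const char *name);

    /* Dump protected flash two words per reset: on reset the core itself
     * fetches [addr] into SP and [addr+4] into PC over the instruction
     * bus, which the readout protection does not block. (The PC reads
     * back with the Thumb bit stripped, so bit 0 of that word is lost.) */
    void dump_flash(uint32_t start, uint32_t end)
    {
        for (uint32_t addr = start; addr < end; addr += 8) {
            probe_set_vector_base(addr);
            probe_reset_halt();
            uint32_t w0 = probe_read_core_reg("sp"); /* word at addr     */
            uint32_t w1 = probe_read_core_reg("pc"); /* word at addr + 4 */
            printf("%08" PRIx32 ": %08" PRIx32 " %08" PRIx32 "\n", addr, w0, w1);
        }
    }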

rollulus said 3 months ago:

I'm curious how they managed to get in contact with STM. I once discovered a silicon error in their STM32F0 but failed to get in touch at all.