Whenever this conversation appears in the news and I listen to various law enforcement and political folks defend some sort of need for breakable encryption, I always jump to the logical end. Or at least one possible one.
In the future there WILL be brain computer interfaces. Memory offloading, recording of visual and auditory cortex data, and other more mundane uses. This may seem like fantasy/scifi. But barring some societal/technological collapse, this will eventually happen. If the precedent is set for breakable encryption/back doors, this will absolutely be used to "subpoena" people's "brain information" in that context. The invasiveness will have no end.
> If the precedent is set for breakable encryption/back doors, this will absolutely be used to "subpoena" people's "brain information" in that context.
I believe this would be a terrible idea because people's recollection of events isn't going to magically improve just because someone connects to their brains directly instead of asking them some questions under oath. Add dreams and other literally crazy things that go on in people's minds and there's a big filtering problem. Judicial systems that take "brain dumps" to be the truth would remove the need for human judges, and so we'd end up with a "Minority Report" plus "1984" plus "Skynet"/"Terminator" situation: people's brains being dumped (or "hacked") all the time and any information in that dump used against them to detect pre-crime and stop it. The logical conclusion is that all humans in the future are going to be in prisons run by robots, unless they're wiped out like we've seen in many books and movies already.
Of course it's a terrible idea, but they're gonna do it.
The scary thing is that a large number of people (sometimes in Australia I feel like it’s more than 50%) _would_ give access to their brain, vote for it even. People crave feedback, they can be scared of themselves, so something in their heads monitoring everything could actually be reassuring in an almost religious sense
Australians also don’t think a lot in general (many exist who do, but as a general populace we’re culturally weak and short on history), so the long term effects of such reassurance just never really make it into the discussion
Anecdotally, there’s been a slight uptick recently in people in my social circles genuinely caring about privacy, so maybe there’s hope
The difference between the data you could get via a smart phone or smart watch and "brain information" is already quite thin.
Yes! This is a point I always try to bring up. Asking for someone’s smartphone password is like asking for a key to unlock their memories. A smartphone is just another substrate for our consciousness.
This is the argument I've made to people over the last few years, and I think it is far better and more substantial than the technical ones that so typically come up (and have in the comments here already, the whole "well, a key will of course leak" thing). It's risky and unnecessary to go to technical arguments for something that's a moral matter, because technology changes. If you base your entire opposition around the key, what if at some point someone does have an entirely formally verified stack and strong measures and can reasonably argue that keys aren't going to leak? Take Apple's master key, for example: it becoming available would be an enormous thing for both criminals and ordinary owners of iOS devices. Yet while there have been plenty of flaws exploited to bypass the need for it, the private root key remains unleaked.
The real question that I would want to see some Congress person put to agencies is
>"Do you believe there should be any inherent limits at all? If we developed the technology someday to read people's minds, should it be permissible to go through their brains with a warrant? It would certainly let you find the guilty of some 'crimes', where for 'crimes' we should keep in mind that gay sex and interracial relations were felonies in the near past."
I mean that's the real thing, if security agencies could root through people's brains I see no need to beat around the bush that likely at least a few truly horrific crimes would be stopped or solved. There would be children saved, terrorists stopped, murderers caught. But I think not just the abuse of it, but even the use of it to eliminate any gray area for a human society would be so horrific that it's just plain not worth it. That yes, some children will be abused/killed, some murderers will escape, some terrorists will succeed, and that really is the price we need to pay. That we should try to reduce it as much as possible, but only in balance with strong privacy and an inviolable personal sphere. And that should include artificial augmentations to our minds, which typical mobile devices arguably already are.
The incentive structures right now for law enforcement agencies and intel agencies remain geared always towards more, more, more, and towards paying attention to singular big harms rather than small harms across enormous swaths of the population. Arguably it hasn't evolved much from decades and centuries past. I think that's the ground to fight on though: will they argue that total erasure of the private sphere is worth it? Will the public agree? I think the answer is no, and with that established it's a lot easier to argue back against the typical "think of the children/terrorists/drugs" attack.
The next question (assuming they feel it should be allowed) needs to be: do they feel there should be allowances for someone to be above this law? In our current climate we are seeing criminal-looking behavior in the US by people within the Executive branch of the government. They have been citing Presidential Privilege, but if mind reading became legal there would be multiple issues here.
This law would rid us of the last safe harbor, your own mind. At that point allowing some individuals the ability to skirt this law would create a class definition of ultimate scale. Those who are allowed to keep their own thoughts to themselves and those who aren't allowed to.
Well put. To wander a bit off topic, but I feel this is a specific example of a general problem in free societies. There is no perfect safety without perfect surveillance. In my opinion, it is necessary to be at peace with the fact that there will be a certain level of bad outcomes in exchange for the protection of general freedoms. There will be crime, murder, kidnappings, embezzlement, death by neglect, etc. It is unavoidable absent a perfect surveillance state. I, for one, am willing to accept risks. Others will disagree. But I think that in a perfect surveillance society you are also perfectly stagnant.
There will be no perfect safety. Period.
There will be no perfect surveillance. Period.
Humans are humans, and inherently imperfect.
Until all humans are eliminated, there will be no perfect ... anything.
> It's risky and unnecessary to go to technical arguments for something that's a moral matter, because technology changes.
> If you base your entire opposition around the key, what if at some point someone does have an entirely formally verified stack and strong measures and can reasonably argue that keys aren't going to leak?
Irrelevant, since formally proving the stack says absolutely nothing about compromising the system at a human level, and 'reasonably argue' does not mean 'proving that the key cannot leak'.
The key will always leak. It's just a matter of when. There will never be any technical solution to that, and it's not a technical problem. It is a hole which fundamentally cannot be plugged, and the hole is people. Technology has nothing to do with this.
>>"Do you believe there should be any inherent limits at all? If we developed the technology someday to read people's minds, should it be permissible to go through their brains with a warrant? It would certainly let you find the guilty of some 'crimes', where for 'crimes' we should keep in mind that gay sex and interracial relations were felonies in the near past."
But that's always a question of legality, not technical capability. Legislation is all about where we draw lines.
However, it is entirely true that legislation (or rather, the 'undefined behaviour' in it) tends to be abused a great deal before it can be shut down.
The argument that the key will leak is not technical, and strictly speaking, nor is the argument that 'technology may one day do this and then where would we be?'. But one is a lot less speculative than the other, and admits no mitigations.
>The key will always leak. It's just a matter of when. There will never be any technical solution to that, and it's not a technical problem. It is a hole which fundamentally cannot be plugged, and the hole is people. Technology has nothing to do with this.
Ok, so again then what would be your response if some law enforcement official said "well what about Apple then?" iPhone is now 13+ years old, why hasn't their master key leaked? Microsoft or Google signing keys, same boat. Or what about SSL in general for that matter, the entire HTTPS paradigm depends upon master private keys that are not leaked, or at least not often. It would certainly be potentially worth a lot to certain adversaries if they could simply spoof major banks, not through some CA hack but literally just getting their keys. Yet broadly the effort to keep that information protected seems to be fairly successful, for better and for worse.
I mean, that's kind of my issue, it's not hard to raise plausible counter examples, and once we get into the weeds about "how likely" and "when" vs "how many will be saved in that time huh" I think we may have already lost. It distracts from the real debate, which is about how valuable a zone of private space is and the harm that comes from infringing upon it. That's the real cost. Just because you can doesn't mean you should.
>But that's always a question of legality, not technical capability.
I don't think that's quite justified by the history of law and privacy. There are plenty of practices law enforcement/intel can and do engage in now that were simply inconceivable in the relatively near past, when much of the law that still governs us was written. Not everything automatically adapts, it's worth watching out for things that are simply taken for granted because they're considered inviolable, and in turn lack explicit legal protection. There should be legal protection, what I'm saying is that winning it requires a frank discussion about harm tradeoffs and trying to get people to more generally grasp new kinds of costs, like emergent effects.
>The argument that the key will leak is not technical
I think it really is. I mean, even in your argument where you say "the hole is people", you're making an implicit technological assumption that people remain involved. Do your assumptions make the same sense if we imagine human-equivalent or better AI? I'd rather just talk about why we shouldn't as a matter of ideals, even if we could, and even if it means some things we don't like go unstopped.
I think the whole concept that law should be enforced at all times is flawed. People should have more ways to resolve conflicts between themselves on their own, without cops getting involved - even if the law was broken.
Beyond that I would expect legal precedent to be set to sanitize inappropriate thoughts. I don't see any of this as far fetched. People will do what people can do.
That is very unreliable, because humans can easily invent memories of events that did not occur. And it may even fool any lie detecting technology, if they truly believe it themselves.
For example, I was 100% sure that some event 10 years in the past did happen, because I was there and saw it. Then a friend of mine said that it did not happen. I did not believe him and checked the recording - and yes, it did not happen. What I thought I remembered actually happened at another time with a completely different person.
Shoutout to Black Mirror. They've done a phenomenal job dramatizing this and other similar concepts. Do give it a watch.
Indeed! I enjoy many of the episodes. But the first time I had these thoughts was after reading Charles Stross' Accelerando. https://en.wikipedia.org/wiki/Accelerando
There's a character in there that has his "brain drive" stolen and he's disabled by not being able to access all that memory.
That part of the story got me thinking about how that data would be securely stored.
And then more recently, in "Fall; or Dodge in Hell" there is a brief discussion of the uploaded people's data being encrypted in such a way that only they would know their own "thoughts", while those on the outside could observe, via metadata, the goings-on in the digital realm.
Honestly, once spying at a distance is perfected, there won't be any need to have broken encryption.
I was at this conference yesterday. The keynote was Kevin Mitnick giving step by step demos on how to steal gmail session cookies and clone prox access cards, but apparently discussions of anonymous digital dropboxes are out of bounds
That's crazy -- nothing should be out of bounds when trying to understand security for the purpose of improving security. It would be like trying to reduce unplanned pregnancies while not sharing information about how women get pregnant.
Given how trivial it is to set up anonymous shares, I can not fathom why discussion would be restricted.
Step 1. Create a file sharing mechanism. (http / https / sftp / rsyncd / nntp(s) / smtp(s) / whatever)
Step 2. Start Tor.
Step 3. Share link.
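For step 2, a minimal sketch of the Tor side (`HiddenServiceDir` and `HiddenServicePort` are standard torrc directives; the directory path and port here are purely illustrative):

```
# torrc fragment: publish a local file server as an onion service
HiddenServiceDir /var/lib/tor/dropbox/
HiddenServicePort 80 127.0.0.1:8080
```

After restarting Tor, the generated `hostname` file in that directory contains the .onion address to share in step 3.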
You might have heard of Snowden? Or the CIA whistleblower on soliciting electoral interference from Ukraine? These are hot topics in the US Govt, and having a talk about whistleblower technologies at an ASD/ACSC-sponsored conference was obviously seen as too touchy by some 1/2/3-star at ASD.
They obviously don't understand the Streisand effect though, because it was completely hamfisted. They should have just arranged to schedule a bunch of other interesting talks at the same time, or put them late in the day in a small and distant meeting room during the cocktail hour. Amateurs.
I wish more non-technical people understood the fundamental point that many security measures are binary. Either they are secure and no-one has a back door that can compromise them, or they are not and anyone could have a back door that can compromise them, but there is no middle ground where only a politician's preferred government agents have a back door.
When I see comments like former Australian PM Malcolm Turnbull's "Well the laws of Australia prevail in Australia, I can assure you of that. The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia", I despair. Such profound ignorance should be kept well away from any sort of power. Regrettably, we haven't yet found a modern King Canute who can demonstrate the principle to our politicians and leave them without any doubt.
One amazing thing is that the "key under the mat" idea has been floated for decades, keeps getting discredited and keeps reappearing. It feels like we're going to be having the same discussion forever.
Another fundamental point that Schneier is very strong on is that in most countries, cybersecurity responsibilities have been co-opted by the military. Since the military thinks in offensive terms (even when considering defence), that's the position they take here as well, which explains why they would rather hoard zero-days to use as weapons than disclose them so that everybody can patch and be secure.
Many governments would rather bear a certain load of cybercrime than become less powerful themselves. In that sense, the back door argument is for them irrelevant.
Banks don't lose money due to weak crypto, but because of credit card fraud (no crypto involved). Customers lose money because they lose control over their endpoints (lose password confidentiality, or SIM gets jacked, or phone lost), or because they want to send Natasha from Minsk $5000 for a flight to visit them.
The argument to make to politicians is always: do you trust the other politicians not to spy on you when they are in power? That will always make them think twice.
Though a counter-point can be made to the effect that communication providers have been mandated to give law enforcement special surveillance access for decades already, when most of it was analog. In most countries these laws are still in effect, and in many they are very much like "API specifications".
On that end there's the inherent expectation from law-and-order politics of "If we can do it with an analog phone, why can't we do it with digital?" Another big part is also the false assumption that people have a perfect right to privacy.
In theory, people may have a right to privacy, but in practice, communication providers which actually facilitate communication are also bound by government actions.
Though a counter-point can be made to the effect that communication providers have been mandated to give law enforcement special surveillance access for decades already, when most of it was analog.
But the counter to that counter-argument is that the traditional methods of interception had a significant practical cost and therefore an incentive to use them only when there was a legitimate interest in doing so. In our digital world, governments are trying to get access to everything on everyone.
Moreover, so far we have mostly been talking about threats due to reading data. To the extent that adverse changes are being effected as a result of attacks, most of them are probably being made using genuine credentials, which were themselves compromised.
However, the kinds of weaknesses we're talking about mandating in encrypted communications would potentially also allow hostile actors to impersonate others and change the data directly. It doesn't take a genius to predict that this would create a future where everything in our ever-more-connected world is under threat, from financial transactions to medical instructions, from criminal records and intelligence profiles to control signalling for networked self-driving vehicles and essential utility supplies.
> But the counter to that counter-argument is that the traditional methods of interception had a significant practical cost and therefore an incentive to use them only when there was a legitimate interest in doing so.
Large-scale surveillance also existed in the analog era; see the GDR, or programs like ECHELON.
That's why for many politicians this is supposedly only an issue of "scaling up", which has also been happening, and then got blown wide open with Snowden revealing PRISM & co.
This hostility to cryptography is also anything but new: cryptography in many countries is still subject to strict regulations and has been for decades, while intelligence agencies like the NSA spend quite some effort undermining even the process of standardization.
Let's also not forget that for a really long time the majority of the Internet didn't even use TLS for most of its traffic. Once we got HTTPS rolled out somewhat widely, guess what happened? Heartbleed. Weird how that went down.
Feels more like they want to legalize a whole lot of what's been going on anyway. Maybe I'm just paranoid, but at this point, it's difficult not to be.
I wouldn't even say it was a cost issue. It was much more security through fences. When hardly any systems were connected together it was far more difficult for the non-authorities to access information, especially at the scale we are seeing today. You really needed to be able to infiltrate multiple systems simultaneously to get the kind of access you can now with a simple internet connection.
It makes the argument extremely ignorant when you realize we didn't have this problem in the past because it was just too difficult.
So true, and E2E doesn't matter if they force a patched binary that also sends the data to the government. This is EXACTLY what the Australian laws say they must do - oh, and it is a gag order, so they can't disclose it. I think it is going to come down to this kind of attack. If you want to sell your product in their country, you have to let them patch binaries. Look at what is happening in China right now with Hearthstone and the NBA and more. CEOs don't want to walk away from $BB of revenue and the growth they need to get their next bonus.
The PM wasn't ignorant, he was making the correct point that you can outlaw things that mathematically are unbreakable. Criminals can unbreakably use them, but will be violating the law. I even think he was making a joke. The laws of physics allow me to kill someone by dropping a heavy weight on them, but that is outlawed in Australia.
Now, I disagree with him as to what the law should be. But I think people have been ridiculously uncharitable by implying he believes ... you know, I don't even know what people are implying. They are just pointing and laughing and not making a coherent point.
The PM wasn't ignorant
I prefer to give people the benefit of the doubt unless I have reason not to, but in this case, I did see some of the speeches he was giving at the time, like this one. It was a textbook example of a politician repeatedly quoting the same obvious sound-bites and buzzwords ("leadership", "keep us safe", "rule of law", and so on) without really understanding the issue. Occasionally he seemed to get pushed off-script, like the section where someone asks him to explain what a backdoor actually is, and he is literally hand-waving rapidly as he delivers a superficial explanation that doesn't suggest any sort of real understanding.
I even think he was making a joke.
Again, if you've seen the footage of him making the infamous statement, he might be trying to make a joke, but it's more like he's trying to laugh off the nonsense he's saying as if that makes it OK.
They are just pointing and laughing and not making a coherent point.
When you're dealing with someone who is demonstrably unable to recognise the basic facts of a situation and make reasonable arguments, sometimes ridicule is all that is left. As I said, sometimes the best we can do is try to keep such profound ignorance away from any sort of power.
> Either they are secure and no-one has a back door that can compromise them, or they are not and anyone could have a back door that can compromise them, but there is no middle ground where only a politician's preferred government agents have a back door
Can you clarify? Wouldn't just having a second key which the government keeps in escrow in case they need to decrypt a message achieve exactly what you are saying is impossible?
(Note: I don't agree with the laws being created to undermine encryption ... but I think it undermines the argument against it to overstate the case.)
One problem (among many) is there's no such thing as "the" government. It's a relative term and its meaning depends on context.
If we take it to mean all governments, then that means China gets a copy of the key and can use it to spy on the communications of the Australian government.
If we take it to mean just one government (or a few select governments), then what is the criteria for deciding the government of country X should have a key but country Y should not?
Ultimately of course it comes down to a matter of coercive power and jurisdiction, which particularly complicates efforts trying to build products or services that are intended for global usage. I believe we need to push back against anti-encryption laws in the west so we have some ground to stand on when making a case that they shouldn't be accepted elsewhere.
The other problem is that even within one government there isn’t really “the” government. There’s office A and task group X and the prosecutor for this area and that special envoy and this law enforcement agency and that law enforcement agency and various courts, etc, etc, etc.
If you haven’t worked much with a large government, you don’t tend to realize just how fractured it all is.
And as someone who consulted for a large government organization: the key will be emailed to anyone who needs access. I had access to ssh keys from the project owner for a government contract, and the person did not even ask me to delete the keys after the project was over.
You probably signed a form saying you would. As long as they have a signed form and you have the keys, everyone is happy. Nothing is really secure anyway, except the stuff that matters.
> Nothing is really secure anyway, except the stuff that matters.
This is the key, I feel. Just enough CYA while still doing BAU (which includes security agencies snooping on people). Governments think they will be able to secure their important stuff. Everyone else can have the illusion of security, without actual security. When there is a master key leak, it will be someone else's problem, and can be blamed on bad actors.
It isn't overstating. A key escrow for a country being secure is about as viable as a five year old's plan to run away to disney world with a sippy cup and a bag of snacks. The scale and logistics are wrong and it shows laughable naivety in what it requires of other actors.
At this point the anti-encryption spooks and politicians have shown they don't want to listen so reaching them is impossible.
The rational debate has been over for decades and we won. Only irrational remains which means it is time to get downright mean. Instead cut them off and discredit them. Show no mercy to the evil fools.
Wouldn't just having a second key which the government keeps in escrow in case they need to decrypt a message achieve exactly what you are saying is impossible?
How would that work? A basic principle of encryption systems is that only relevant parties have access to the secret information. Typically in asymmetric systems they generate their own private key or other secret and never share it with anyone. Moreover, techniques like perfect forward secrecy rely on generating new session keys at each step.
As soon as you mandate a systemic backdoor where someone else's secret can be used to unlock anything, you have undermined the whole premise of the system and created a huge single point of failure.
Not only that, but normally if there is any concern that a key has been compromised then you can change it to at least continue to protect future communications. With this sort of back door, any compromise is permanent.
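To make that contrast concrete, here's a toy model (deliberately simplified XOR "encryption", not real crypto; all names and values are illustrative) of ephemeral per-session keys versus one fixed escrow key:

```python
import secrets

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR is its own inverse, so the same
    # function both encrypts and decrypts. Illustration only.
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"meet at noon"
sessions = [secrets.token_bytes(len(msg)) for _ in range(3)]  # fresh key per session
escrow_key = secrets.token_bytes(len(msg))                    # one fixed key for everything

per_session = [xor_encrypt(msg, k) for k in sessions]
escrowed = [xor_encrypt(msg, escrow_key) for _ in range(3)]

# Leaking one ephemeral key exposes exactly one session...
assert xor_encrypt(per_session[0], sessions[0]) == msg
# ...but leaking the escrow key exposes every message, past and future,
# and there is no rotation that un-compromises what was already captured.
assert all(xor_encrypt(c, escrow_key) == msg for c in escrowed)
```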
If you think that's not a serious risk and governments could be trusted to keep such an important back door key secure, let me first ask you this: which government(s) should have access to that key? What happens when, say, someone in the US has family in Iran, and the governments of the US and Iran both insist on being able to intercept messages between them? What happens if a more authoritarian government is elected in Australia and a more liberal one in Canada, and the Five Eyes agreement collapses?
Don't forget that a large part of the reason there is so much public debate about these issues today is because someone walked out of the most powerful intelligence agency in the world with the modern equivalent of a note of its deepest secrets in his back pocket and then shared that note with the world.
> How would that work? A basic principle of encryption systems is that only relevant parties have access to the secret information.
Well, the government is now one of the relevant parties. It doesn't change anything fundamental about the encryption. It just adds the government as a party. I get that you think that is bad (and I do too), that you don't trust the government. But it doesn't break the encryption, that keeps working perfectly, just like it did before. Just more parties have access.
The problem here is so many people are going around arguing against this by saying it "breaks" or "backdoors" encryption. And all those people are being completely ignored because the government is getting perfectly reasonable advice that says that it isn't breaking anything.
> Well, the government is now one of the relevant parties. It doesn't change anything fundamental about the encryption.
It does change something fundamental: now you have three parties instead of two. Take for instance the most popular key agreement protocol, Diffie–Hellman, and suppose Alice wants to use it to send a message to Bob. When the only parties are Alice and Bob, she can do some calculations with her public key and Bob's public key, and then she has the shared key to be used to encrypt and authenticate the message; these calculations can be done fully offline. If you try to add a third party (George), not only are the calculations more complex, but also they need to be online (all parties have to exchange messages before any party knows the shared key), which makes important use cases harder or impossible.
And note that I said above "encrypt and authenticate": possession of the shared key allows one to also forge messages. When there are only two parties, this works fine (Bob knows he hasn't forged anything, so the message can only have come from Alice); with more than two parties, that is no longer the case.
That is: the design of a three-party protocol is very different from and much more complex than a two-party protocol. And it gets even more complicated once you want one of the parties to be able to decrypt and validate but not forge messages.
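A toy two-party Diffie–Hellman run with tiny numbers (real deployments use ~2048-bit primes or elliptic curves; the values here are purely illustrative) shows how each side reaches the shared key offline:

```python
p, g = 23, 5        # public parameters: prime modulus and generator (toy-sized)

a, b = 6, 15        # Alice's and Bob's private keys (random in practice)
A = pow(g, a, p)    # Alice's public value
B = pow(g, b, p)    # Bob's public value

# Each side combines its own secret with the other's public value,
# with no further round trips -- this is exactly what stops working
# once a third party must also end up holding the same key.
alice_shared = pow(B, a, p)
bob_shared = pow(A, b, p)
assert alice_shared == bob_shared   # both sides derive the same secret
```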
Why can't the government use the powers it already has to compel one of the original parties to reveal the message? And punish harshly those that refuse, or destroy the ability to do so, another power it already has? Is seeking a warrant and due process that much of a burden?
Or is this really about seeking a means for dragnet surveillance?
Which government is now a party? All of them? In the entire world?
In any case, I don't know how to explain this without getting into a lot of technical details, but introducing an extra party who can read everything like that does fundamentally change the nature of the system. If you want a system where data is encrypted such that two independent parties can each decrypt it using only their own secrets, how do you think key exchange, encryption and decryption would work?
> But it doesn't break the encryption, that keeps working perfectly, just like it did before. Just more parties have access.
And if the government's private key has been leaked? How many parties will have access then?
All of them.
> [...] It doesn't change anything fundamental about the encryption.
Lol! No, no, no, no, no! It does entirely!
The problem is that the escrowed key (when it is inevitably leaked) can be used just as easily by people who aren't the government. So even if you trust your government (which, by the way, changes hands every couple of years) the mere existence of an escrow key means that by definition your cryptosystem is not secure against attackers. Personally, if I were an attacker, the first thing I would target is the government's key escrow -- because you can bet your life that will be the weakest point of any cryptosystem with key escrow.
Wouldn't just having a second key which the government keeps in escrow in case they need to decrypt a message achieve exactly what you are saying is impossible?
Others have already pointed out the problems of who has access to that key, so I'll simply point out that the system would potentially collapse under its own weight. If there is a single master key, then it's a huge risk. If that risk is recognized and there's a way to invalidate and replace that master key, then suddenly you have an entire additional communications infrastructure for key replacement that has to be in place. If you have multiple separate keys generated at the initiation of each encrypted communication, then there has to be a separate secure infrastructure for transmitting the additional key to some government entity. As a side note, that supposedly secure infrastructure probably has to have the same key-sharing requirements by law. You also run into problems with things like embedded systems, and if you don't think that's a problem, look back at the problems in the networking stack used in VxWorks and other embedded systems that are in the field and effectively unpatchable.
Edit: ipnet, URGENT/11, https://www.bleepingcomputer.com/news/security/urgent-11-vxw...
Edit2: I forgot to mention, this is an all or nothing decision. If you're mandating this it has to apply to ALL communications or you end up with things shifting channels. Banking transactions? Master key. Medical patient portals? Master key. VPN to the office? Master key. Any secure website connection? Master key. You likely have to make using any non keyed encryption illegal with severe penalties, and you have to devise a system in which it's possible to identify encrypted communications not encrypted with that key without that key being available.
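The escrow scheme being debated above can be sketched in a few lines. This is a toy model for illustration only (XOR with a hash-derived pad is NOT real cryptography; a real system would use authenticated encryption and something like AES key wrap, and all the names here are made up). The point it shows: each message gets a fresh session key, but that key is wrapped once for the recipient and once under a single escrow key, so whoever holds the escrow key can recover the session key for every message ever sent.

```python
# Toy model of key escrow. Illustration only: XOR with a hash pad is not
# real crypto, and all names here are hypothetical.
import hashlib
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def wrap(kek: bytes, session_key: bytes) -> bytes:
    # "Wrap" a 32-byte session key under a key-encryption key.
    # (Toy: XOR with a hash-derived pad; real systems use AES-KW.)
    pad = hashlib.sha256(kek).digest()
    return xor(pad, session_key)

unwrap = wrap  # XOR wrapping is its own inverse

def encrypt(message: bytes, recipient_key: bytes, escrow_key: bytes):
    # Short messages only (pad is a single 32-byte hash output).
    session_key = secrets.token_bytes(32)
    pad = hashlib.sha256(session_key + b"msg").digest()[: len(message)]
    ciphertext = xor(pad, message)
    # The same session key is wrapped twice: once for the recipient,
    # once for the government escrow. This is the whole scheme.
    return ciphertext, wrap(recipient_key, session_key), wrap(escrow_key, session_key)

def decrypt(ciphertext: bytes, wrapped_key: bytes, key: bytes) -> bytes:
    session_key = unwrap(key, wrapped_key)
    pad = hashlib.sha256(session_key + b"msg").digest()[: len(ciphertext)]
    return xor(pad, ciphertext)

recipient_key = secrets.token_bytes(32)
escrow_key = secrets.token_bytes(32)  # ONE key, held "in escrow" forever

ct, w_recipient, w_escrow = encrypt(b"attack at dawn", recipient_key, escrow_key)
# The recipient decrypts normally...
assert decrypt(ct, w_recipient, recipient_key) == b"attack at dawn"
# ...but so does anyone holding the (inevitably leaked) escrow key,
# for this and every other message, with no per-message warrant needed.
assert decrypt(ct, w_escrow, escrow_key) == b"attack at dawn"
```

Note that nothing in the math distinguishes "the government" from "whoever stole the escrow key": possession of that one value is the entire access control.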
Such a secret key would be very valuable and it's almost inevitable that someone with access to it will be willing to sell it to criminals for the sort of money those criminals will be able to acquire with it.
> Such a secret key would be very valuable and it's almost inevitable that someone with access to it will be willing to sell it to criminals for the sort of money those criminals will be able to acquire with it
Wouldn't a similar argument apply to the secret keys at the roots of Microsoft's and Apple's and many other organization's code signing systems? And the secret keys at the root of SSL security?
We've already apparently decided that it is OK to rest our security on the idea that careful organizations can keep secret keys secret.
In theory yes, but Microsoft and Apple only ever use the key to sign updates at one point in a regular process. That means the number of people with access to the system can be very small. This both means that you're less likely to have someone corruptible involved and that it's possible to figure out who spilled the key after the fact and punish them.
If a government escrow key were kept in two rooms with air-gapped computers, one on either coast, and anybody hoping to decrypt messages had to mail them in on CD and get a CD with the decrypted messages mailed back, I could see that system actually keeping the key secret. But proponents of key escrow argue that existing decryption techniques are too slow and inconvenient, so I don't think they would be satisfied with this solution. Instead I'd expect anyone on the IT team at any of the FBI's 50 or so field offices to be able to get access to the key, and so for it to leak in short order.
> argue that existing decryption techniques are too slow and inconvenient, so I don't think they would be satisfied with this solution
This goes to the heart of the real issue and argument that I think is more powerful ultimately, though it is harder to express.
Authorities argue that they are not getting anything new because they always used to have the ability to intercept phone calls, open mail etc. So they argue that interception of encrypted messaging is just "status quo". But it is NOT status quo, because being able to do it at large scale, using automation, sophisticated data analysis and with an overall order of magnitude less effort actually changes the fundamental nature of it altogether.
The powerful argument to me is to use a judo-style move where opponents of intercepted messaging actually embrace the "status-quo" argument but then use that to re-establish the same high threshold for that interception that used to exist.
For example, if the authorities want access to a specific communication stream, then they should have to involve the messaging provider and they should have to get a one-time key that is set to expire or scoped in some way to specific messages, and they should have to concede to limitations on how secretly that can be done. The most powerful argument of all is that anything that is more invasive will foster a huge ecosystem of end-to-end encrypted services. Encryption is ultimately mathematics and you can't stop people using it. The best you can hope for is to create something that is close enough to OK with everybody that the ecosystem doesn't grow because most people don't care.
> We've already apparently decided that it is OK to rest our security on the idea that careful organizations can keep secret keys secret.
I'm not sure that's true. We might tolerate that situation in specific circumstances where we haven't identified any better choice, but that isn't the same as thinking everything is OK.
For example, in the case of certificates used for things like downloading web content over HTTPS, the centralised nature of the CAs and the track record where not all CAs have proved to be trustworthy is a significant concern in the industry. This remains the case even though browsers can revoke the root for any CA that is compromised and despite the increasing use of mitigations like perfect forward secrecy.
The rise of Let's Encrypt in particular has been a boon for moving smaller sites to serving over HTTPS by default, but given how many sites now rely on it, it also creates a huge single point of failure in the security infrastructure of the web. Notwithstanding the precautions taken by the operators to keep the most important secrets secret, if anyone hostile did ever manage to compromise that infrastructure covertly, that would be catastrophic for security on the web.
A difference is that all of these private roots are subject to market acceptance, and the owners have strong incentives to keep the keys secure.
If Microsoft or Apple leak their keys, they at least will spend a lot of effort on cycling keys to their huge number of deployed products and on the PR fallout. If it's managed poorly, they may lose customers.
If the US leaks their key, tough noogies, it's mandated to use that one. Companies, much less people, will find it hard to move to another country with better key management procedures.
Who oversees the escrow? What are “reasonable uses” of said key? Etc...
> Wouldn't just having a second key which the government keeps in escrow
Imagine being an outspoken left-wing blogger in America that votes for this. Imagine then Trump gets elected and 'the government' is now an entity that actively breaks established laws and targets outspoken critics.
Imagine someone even worse than Trump is elected after that.
Imagine the key is leaked.
Imagine China makes a trade with your government for your key.
There is no taking back that key now, whoever becomes 'the government' at any point in the future has it.
The fact that governments change, sometimes radically, is an important point in arguing against escrow. For example, in the current supreme court case R.G. & G.R. Harris Funeral Homes Inc. v. EEOC, the case was brought under the Obama administration BY THE EEOC ITSELF, arguing for the plaintiff. Then Trump was elected, and the Department of Justice switched sides -- they now argue against the government's own original case.
I really don't appreciate how you accuse the Trump Admin of doing something that only Democrats have been caught doing. It's like you can't think or something.
Who used the IRS to target conservatives? Who ran guns to Mexico in order to create a reason for more gun control laws? Who is using the FBI to subvert the 2016 election?
Beyond the existing answers, also look at how existing records are stored. You can use FOIA to get the keys needed to unlock access to government weapon stores (Defcon had presenters who talked about this). How many times have governments been hacked and had financial data stolen; that could have just as easily been some of those keys.
What happens if that escrowed key becomes compromised?
It's the same reason why having two kinds of TSA lines is less secure--there's now a second vector of attack!
All anyone has to do is become a "Trusted Traveler" and they have less security theater to go through, just send in your info, pay the fee, and off to terrorizing you go.
Also, my private data was leaked because of the OPM "hack" which I think was more like the Awan Bros. gave China the passwords to get in.
The only reason to want to break encryption is power and control over the masses. There is no "24" scenario that ever happens where people die because some file couldn't be read, that is just Hollywood bullshit.
Whenever a non-technical person makes these claims, I wish someone would stand up and ask what sort of repercussions they believe would be acceptable for breaking the law and using the key illegally. Equifax royally screwed 147 million people, and no one did any time for it. Even if they had, we'd be talking microseconds per person they screwed over.
It becomes unfathomable how huge a problem this could cause, and then when it happens it gets swept under the rug as not a big deal. If the PM thinks it's OK to have a key under the mat, then I want to know how much time he should be doing when someone gets my personal information and makes me have to deal with identity theft. Would 10 seconds of jail time for my lifelong struggle be acceptable? For the Equifax CEO that would have been half a year.
Private citizens should have a right to private conversations with one another, regardless of the channel. Why is this so hard to grasp?
But they may say something illegal, and there are enough people who want to make that impossible that they are willing to sacrifice their rights to ensure the laws on illegal information are enforced.
Some people are fundamentally willing to sacrifice rights they don't view as necessary for some temporary gain, and trying to oppose this in any more direct way can result in having one's own moral standing challenged.
>they are willing to sacrifice their rights to ensure the laws on illegal information are enforced.
I think you meant to say "sacrifice everyone's rights." This isn't an opt in scenario, it would be your right to privacy being forcibly taken away.
The conclusion I always reach in these discussions is that if people truly want encryption, they will have to do it themselves and not rely on others. Others can always be forced or otherwise coerced, legally or otherwise.
Is China better or worse than Australia on this issue?
There is no meaningful comparison to make.
The "censored" talks immediately had their slides put online and were extensively advertised with Schneier in particular, stating it's your duty to read them now. Nobody was arrested because the courts would not even allow a prosecution as there's no law that has been broken. The rule of law, while imperfect, means something in Australia.
Let's be very critical of what has happened here by all means, that's how we preserve a rule of law and equality before it.
CyberCon is run by their Australian shillery; glad he stuck it to them
Conferences are nearly all in-person live ads, cloud ads disguised as talks, or tutorial talks so shallow a Googling is better.
Triple that for infosec industry.
Doesn't Schneier miss the mark here? Even if the US, Australia or all of the Five Eyes nations stop spying, that does not ensure encryption is unbroken or safe from spying eyes. If it is possible to break encryption, it will be broken by someone. It better be by us...
As far as designing in weaknesses and/or golden keys. Well, it does not add to security, but assuming that the other nation states are already able to break encryption and read what they want, it does not weaken it either for national security reasons. What it does provide however are easy means to stop terrorists, pedos and drug dealers from conducting business. That power in the right hands is good for society, just like having people with lethal weapons in law enforcement is good for society.
Remains the risk that some bad guys could get their hands on the golden keys. Yes, design to handle that?
> If it is possible to break encryption, it will be broken by someone. It better be by us...
The option isn't encryption broken by them or broken by us. It's secure encryption or encryption that can be broken by everybody with sufficient resources.
> Remains the risk that some bad guys could get their hands on the golden keys. Yes, design to handle that?
Yes, because secure systems are so secure that we should be trusting them to base all of the world's encryption on. It takes a single bug or intrusion to undermine everything. There's no way to design for a key that needs to be usable/accessible for when you need to access encrypted anything, while keeping it secure from bad actors.
That's even before considering how the US government and US institutions have, over the course of the past decades, pissed away any and all good will and trust that the public may have had in them. Why would anybody want to give the FBI or NSA free access to all of their data? After seeing enough stories of employees looking at naked pictures, stalking ex-girlfriends, dragnet surveillance or other gross abuses of power, there's no way these people can be trusted.
> As far as designing in weaknesses and/or golden keys. Well, it does not add to security, but assuming that the other nation states are already able to break encryption and read what they want, it does not weaken it either for national security reasons.
You are correct in exactly the same way that, if one assumes the Earth is flat, it is impossible to fly around it in an airplane.
There exist cryptographic security measures that have flaws and can be broken by nation states. But there are also plenty of encryption techniques that will easily resist the efforts of even the most effective nation states (at least for the present day). The question is not whether such codes exist but whether we will pass laws preventing any law-abiding person or company from using them.
If you were right and the NSA could break these codes, then they wouldn't be advocating for back doors; they would be encouraging everyone to encrypt their communications (to protect against everyone ELSE), then using their abilities to stop terrorists, pedos and drug dealers.
> Remains the risk that some bad guys could get their hands on the golden keys. Yes, design to handle that?
You can't. That's pretty simple and solid.
If you design a master key, then it's the master key. Once it's out, it's out.
To play devil's advocate: what about the keys with which Apple or Microsoft sign their updates? It's analogous to a master key, yet they haven't leaked so far, although the stakes are high.
Microsoft have had their golden keys leak before.
Apple/Microsoft have a master key to one product which is held by one private company. What you're talking about is a master key to all communications accessible by every government agency of every country provided (maybe) that they can demonstrate "just cause" (or some other nebulous concept). Surely you can see the difference in magnitude?
Just as importantly, they use that key very infrequently which allows for a great deal more ceremony around when it's taken out and a lot more restriction around who has access to it. I'd expect that the key is only kept on air-gapped systems, for instance.
In theory, the key could stay at the company and the communications is handed over upon lawful request.
To be clear, I am not supporting this. But this argument will be made by the other side, so a good reply should be prepared. NotPetya already demonstrated that malware can come with software updates. But up to now there is no hard evidence that keys of big players have been leaked.
And even for professionals it's hard to keep up. I am using Signal. From time to time I am reminded by Signal (on Android) that I need to update, and it even takes me to an update screen. How would I know that it is genuine? No other app I have does this.
> In theory, the key could stay at the company and the communications is handed over upon lawful request.
What is a "lawful request"? Does that include legitimate court orders originating from China, Russia, Syria, Sudan, etc?
That's why this will never fly. It's a horribly stupid idea.
It's either a lawful request from any country or none. Should corporations get to decide what a lawful request is? That's a horrible idea too.
FWIW, I am appalled by this constant call to weaken encryption. This is not worthy of any country who deems themselves under the "rule of law". It's even more appalling that they do the dirty work for the countries you listed...
> It's either a lawful request from any country or none.
My preference would be to group countries into categories that respect users and their privacy, and those who don't. And then don't pursue selling into countries that don't respect privacy. And no one gets "gold key" or "backdoor access". It is only a legal front door to the data the provider possesses in plaintext. Specifically, data residing SOLELY on the device would NEVER be in said provider's possession in plaintext form, if the user desired that.
But that will never happen. Because, growth markets, amirite? (Sad face)
> Should corporations get to decide what a lawful request is? That's a horrible idea either.
Agreed. Corporations don't get to second-guess the law (ignoring lobbying in this example). They either choose to operate within the laws of the territories they do business in, or they don't do business in said territory. This is my EXACT complaint against Uber, AirBNB, etc.
> FWIW, I am appalled by this constant call to weaken encryption. This is not worthy of any country who deems themselves under the "rule of law". It's even more appalling that they do the dirty work for the countries you listed...
In total agreement. Furthermore, there's a route I view as extremely easy: the NSA/CSS/CIA/ABC/DEF, whatever, will always target and crack open the endpoints. Doing double duty and attacking the crypto itself on top of that is just fucking annoying to me due to the collateral damage said efforts bring. They already own the endpoints. Just focus on that. Don't attack the math operations.
We aren't aware whether they have already leaked or not. We aren't even aware whether they have been used in a malicious way either; a state actor could make both of those companies stay silent about that.
That could easily be leaked by a single person, like Edward Snowden. People are not the perfect way to store secrets.
> As far as designing in weaknesses and/or golden keys. Well, it does not add to security, but assuming that the other nation states are already able to break encryption and read what they want, it does not weaken it either for national security reasons.
Except that it's only your country whose encryption gets broken. You're basically giving your enemies free rein to spy on you and your country if you force a golden key system into use. Do you think China will use your custom crypto system? Or any other country for that matter? You end up in a situation where you _lose_ power because you want to spy on journalists, human rights activists and undesirable parts of the population.
> What it does provide however are easy means to stop terrorists, pedos and drug dealers from conducting business.
I can't remember the last time a law passed to hunt down the pedos/terrorists/drug dealers hasn't been used to suppress or hurt some part of the population. Remember when drug dealers just used SMS? When pedos exchanged magazines and floppies? When terrorists just called each other? And how the government, with all its wiretapping power, couldn't stop any of them? Encryption is a nice bonus for these people, but a lack of it is not stopping anyone. Tangentially, if a country would focus on properly treating addiction instead of making boogeymen out of those darn drug dealers, we might actually get anywhere as a species.
> Remains the risk that some bad guys could get their hands on the golden keys. Yes, design to handle that?
How would that even work in a crypto system? If you have the key, you can decrypt. That's the basis of cryptography. Wishing away the glaringly obvious problem with the proposed system by telling people to "design it not to break like that" doesn't work.
> As far as designing in weaknesses and/or golden keys. Well, it does not add to security, but assuming that the other nation states are already able to break encryption and read what they want, it does not weaken it either for national security reasons.
That assumption is the weakness in your argument. If you design in a systemic back door, it becomes a known fact that a hostile actor could find a way into your system, because you have created one for them. Not only that, but in the case of a golden key, if it leaks then anyone can get in.
Of course there is no guarantee that any encryption scheme is 100% unbreakable. Such is the nature of mathematics and research. But there is a fundamental difference between needing to discover a systemic weakness in an encryption scheme, say an efficient solution to the discrete logarithm problem or a side channel attack on a given implementation, and merely needing to know some master password that will work even if the mathematics of the encryption method and its practical implementation are sound.
after all, it worked so well for the TSA security key: https://techcrunch.com/2016/07/27/security-experts-have-clon...