The New Wilderness (idlewords.com)
The dragon with its hoard of gold (private user data) is a great description of Facebook and Google's regulatory goals. As Maciej has also put it, "privacy is an essential right — your most intimate moments should be kept strictly between you and Google".
The terror that could be unleashed by their already existing hoard is almost unimaginable. For all the concern about rising authoritarian politics, few have figured out how bad it would be for the future equivalent of the Gestapo or the Red Guard (pick your country) to get a hold of a decades-long dump of medical and financial data, metadata on relationships, posting history, browsing and location history, behavioral fingerprints, stylometry, device IDs, reading history, private photos, and on and on. The tech industry's "vow" to not create a Muslim registry for the US government had a touch of absurdity to it, since there are already companies that could plausibly give you a list of almost all Muslims in the US with a few queries, along with far more personal information than any census or registry has ever asked for.
"Brave New World" is about a coercion that you don't see as malevolent; what you are depicting is more in line with "1984".
I think the "Brave New World" approach is the most likely to happen / is starting to happen / has been happening for a while now.
Google and FB don't matter for this stuff, since the government goes straight to the ISPs (which covers incognito mode, etc.)
But ISPs have access to much less detailed data. Or do you think encryption is trivially broken by them?
"Liberty" might not be completely off the mark. You can get into the habit of looking over your own shoulder all the time, interrogating your every expression and action for what an unsympathetic stranger might make of it and how it might be used against you. And that consciousness doesn't only extend to spaces where you know you're being monitored, because you can't know where and when and how you're being monitored. So you eventually experience it all the time.
This isn't intended as a shallow dismissal, but this sounds strikingly similar to the experience of paranoia. Nobody would say that the feeling that the Invisible Other is watching us is a good feeling. But unless something happens, it's also "just" a feeling. Might the feeling be doing more harm than the reality?
On the other hand, our ancestors would probably wonder at our obsession over the danger of invisible entities getting into our food or water, but germ theory is still real.
The problem with guarding against invisible dangers, even when they're real, is that they can be hard to distinguish from superstition. It's easy to point at some religious customs and say they are irrational, but is today's folk understanding of nutritional science all that much better? Badly understood science can create especially virulent memes.
When the harms are hard to demonstrate, privacy disputes sometimes feel similar.
It’s funny: to you this reads as paranoia, but if anything I’m drawing on my experience of social anxiety. Which, yes, is irrational, because no one is watching. The point of the discussion at hand is that, in an environment pervaded by automated, recorded invasion of privacy, such feelings may cease to be irrational.
The harms are not hard to demonstrate, if you study history, or have programming knowledge.
Liberty is the best way to describe it. It's not that we're necessarily having any autonomy taken from us right now, but surveillance does increase the power others have over us.
Unfortunately words like "Liberty", "Freedom", "Justice" and others have been ambiguous for so long that they can be twisted on a whim.
The idea is that two people can be talking about the same thing, both with different ideas (and plausible deniability) in their head.
Even privacy policies have their own form of ambiguous polite euphemisms, such as "advertising" or "telemetry" or "analytics", which would benefit from clarifying translations.
Great points here. We definitely don't have the language to discuss these problems currently. "ambient privacy" is a good start.
"I’ve lost something by the fact of being monitored." is very true, but deniers will say: "what's the something?" and we just don't have the language to explain it yet.
That’s where the comparison to environmental protection is so powerful. For the longest time, if you would say “We lose something by exploiting natural resources”, people would have just pointed out that there’s a seemingly infinite supply of unexploited nature left.
We’ve always been monitored in some situations, but never before around the clock, in the bedroom and the bathroom, at the doctor, etc. There’s no way to know what we lose, when we don’t even know how the information will be used. It is a passive blackmail situation. These companies have compromising information about all of us, and we really don’t have any idea who can or will be able to see it.
Perhaps ironically, environmental protection often requires sophisticated monitoring of nature. If nobody is watching, you don't know what's lost.
Part of it is of course that moral panics and societal attitudes make every human blackmailable. Maybe it would be nice if we could learn to just not care.
I use the analogy of letting strangers record you using the restroom. Same questions: Do you lose anything? Why not let them do it?
> I use the analogy of letting strangers record you using the restroom. Same questions: Do you lose anything? Why not let them do it?
There is a lot of cognitive dissonance in a society that is embarrassed to fart in public but claims they have nothing to hide. Somehow, "hide" became a bad word.
The nature analogy is useful, but it's also worth thinking about where this new problem differs.
Nature is a large interconnected system that, in some cases, has the benefit of being self-correcting. Population control and food chains are examples of this. I worry that, in the case of privacy, there are far fewer natural "self-correcting" aspects. How will this impact our response to this problem?
(My personal prediction is that it will simply make it possible to do more damage before we begin seeking a solution en masse.)
> Facebook’s early motto was “move fast and break things” (the ghost of that motto lives on as the Facebook guest wifi password).
What is the password?
Menlo Park Visitor Information: https://3qdigital.com/wp-content/uploads/2018/04/MPK-Visitor...
Sounds like it's something like `m0vefast`
That's what it is.
> The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library.
I used to really enjoy Maciej as a writer, but I've become really bothered by how bad some of his recent arguments are. This is just an amazingly bad analogy. Structurally, it's on the level of the standard libertarian "proof" that income tax is morally equivalent to slavery. Even though there seems to be some analytical validity in both cases, they are obviously bad analogies because slavery is obviously not the same as income tax, and ad tracking networks are obviously not the same as being in prison. This is a cute rhetorical flourish, but nothing more.
The truth is, most people don't care about this stuff. They really just don't. You might want them to, and you might think they would if they had the same understanding of the issues as you do, but as of today, they don't. And I don't see any benefit to the mental gymnastics people go through to avoid accepting this obvious truth.
I am glad you used to enjoy my writing!
I think in this case I may be trying to draw out a narrower point than you think. My argument is that you can't infer consent from how people adapt to circumstances in a situation where they are not given a choice.
The whole issue of consent in online privacy is fascinating, because I don't think even experts in the field could understand how their data is used (or will be used) enough to give meaningful consent. I certainly couldn't.
I also agree with you that there is an open empirical question of how much people actually care about this stuff. One way I would like to see it tested is having a legal basis for competitors to sites like Google to give binding privacy guarantees. Then at least we'd be able to put a dollar value (positive or negative) on privacy with a market test.
> I am glad you used to enjoy my writing!
And I am sorry for being a bit of a dick!
> My argument is that you can't infer consent from how people adapt to circumstances in a situation where they are not given a choice.
I see what you mean, but I don't buy the premise that people don't have a choice, at least not in the same way in which inmates can't choose to leave prison. Technically savvy people can avoid a lot (if perhaps not all) tracking, today, if they care to put in the effort - many don't. The technology for doing this has been productized and is available for purchase to less savvy users, if they choose to buy it - hardly any do.
I don't mean to suggest that there's a silver bullet that fixes this problem, that you can buy today for $19.99. My point is that when there is real demand for something (e.g. privacy protection), even imperfect solutions succeed in the market and grow in capability over time. Lots of people are attempting to satisfy this demand, duckduckgo being one good example. It looks to me like most of these products are having limited success, which I think is exactly what you'd expect if only a small number of people care about this issue.
I think the problem is that you can't tell what people's internal state is from their external acts on certain points.
I may _want_ to be nice to you, but am bad at expressing myself, so end up saying something mean to you. You can then interpret that as me not wanting to be nice to you. But that is in a sense "double counting" the final result of me saying something mean.
Similarly, people might use these services despite the privacy qualms. It _could_ be that most people don't care. It could be that people care but are accepting the tradeoff. And it could be that people are _forced_ to accept the tradeoff to participate in civil society (or at least that's their impression of things).
I feel for this interpretation because I used to not really care about privacy w/r/t Google in particular, and enjoyed the services I got from putting my data in their platforms. I still use some of them, but over time the privacy thing (or more specifically the advertising thing) has made me feel more and more frustrated. I still use GMail! But I kinda hate that I have to at this point (because I _really need_ emails to be delivered and received consistently).
Similarly, I used FB despite the privacy issues up to a point, and only fairly recently finally moved almost all my messenger usage over to different platforms. But the day before I did it, I still didn't like the privacy tradeoff! I just liked "being able to talk to friends" more.
> you might think they would if they had the same understanding of the issues as you do, but as of today, they don't.
How many ordinary people had a good understanding of the neurological effects of leaded petrol before it got legislated against?
How many ordinary people had a good understanding of which furniture padding materials gave off noxious fumes when burned, before that was legislated against?
How many ordinary people understood Thalidomide or Vioxx - or had even heard of them - before they got removed from the market?
Why is “it has to get to the level where everyone understands it before experts can recommend action” any more valid regarding tech surveillance?
I never said it did, nor do I think that. I said people don't care.
If we want legal privacy protections even though people don't care, that's a reasonable thing to want. What I'm reacting against is the claim I see so often which is basically "people actually care about this but they have no choice". I think people care much less, yet have much more choice than is normally claimed.
Facebook and Google track you even if you don't use their services.
"More choice" is misleading; we need to gain more freedom by raising the bar of what is protected, and it should not be possible to negotiate that minimum away.
I always wondered why Google or Facebook or hell Comcast didn't start just Owning senators - "Mr. Senator, we want to be the fiber contractor for the new Vet building. Here's pages of logs of you attempting to learn how to use the darknet to access child pornography."
Any anti-choice politician whose mistress or daughter has had an abortion, some tech company probably knows about it.
Any anti-LGBT politician that's actually gay, Google, Facebook, hell maybe even Grindr is aware.
Any White Knight with a heinous secret fetish, Pornhub knows.
People were able to map out military bases using smartwatch GPS data. Imagine if you had straight up access to the databases of this information.
I guess individuals at these companies are caught within rigid corporate power structures? Maybe the internal tooling prevents that sort of thing (didn't prevent someone from deleting Trump's twitter feed that one time), or just rule of law in the USA is still too strong and the risk of being annihilated by the legal system is too great.
Still, I bet the temptation is strong.
It's unprofitable. The cost to user-trust (and hence future data collection and revenues off that data) is more than they stand to gain from blackmailing any one person - even a U.S. Senator or President, or Chinese Premier. So they don't. The only thing that would potentially justify this from a corporate strategy perspective would be an existential threat. Politicians in question know this, and so they don't bother trying to get rid of Google or Facebook, only rein them in enough to please their constituents.
When I was working at Google (which was close to a decade ago now, before tech overreach became a household buzzword) I'd say "People regularly underestimate how much data Google has on them and overestimate how interesting they are." Everybody's first fear is that Google's going to look up their search history and blackmail them with their porn fetishes. They never stop to think that their porn fetishes, no matter how hardcore, are boring, and shared by millions of other people. As a Google engineer you get desensitized very quickly to the fact that 10% of search queries are porn-seeking (17% on mobile), and looking at other people's kinky pastimes is about the very last thing you would want to be doing.
Similarly, it's significantly more profitable to advertise to people than it is to kill them. Every dead person is one less potential customer in the global economy.
This is a point I've made repeatedly on discussions about the value of privacy. It's not about protecting the fact you're gay or have some weird fetish or cheated on your wife or whatever. That stuff doesn't matter in the slightest. The important thing is that the establishment shouldn't be able to quickly pinpoint and disable every potential whistleblower and every other kind of threat to the establishment itself.
I’d argue both things are a concern.
Corporo-government overreach is a concern for obvious reasons.
But it’s also concerning that any individual rogue employee could have malicious intent toward you for no other reason than you have something you might not want publicly known.
Outside of certain highly-privileged roles like high-level SREs (because their job description requires them to have root access on the boxes), individual rogue employees do not have this power. There are various access controls that prevent individual engineers from looking up data by PII, other than their own corporate GMail account or accounts they've been specifically authorized to look into by the account holder (usually for customer support reasons). In general, engineers are only running aggregate analysis on a large number of anonymized records, and the logging & user info services enforce this.
Like I said, individual people are not interesting to Google.
Sure, but one is comparatively minor vs the biggest threat to democracy and human freedom.
That's a good point.
It could be argued that, for a person being blackmailed right now, one is a very real present threat, while the other is a hypothetical future.
Fortunately there are enough of us to go round, so we can collectively advocate for protections against both possibilities.
In other words, as long as nobody ever puts ideology, religion, or just power over money, we don't have to worry about what's going to happen with the data.
The answer to that is kind of boring and simple—because everyone involved would go to jail for a very long time. No employee would do this to benefit a corporation, because it's not the corporation that is going to be doing a decade or two of Federal time as a result of being caught.
The problem with trying to extort Congress is not just that they have subpoena and investigatory power, but that the crime is inherently political and will (quite rightly) bring the hammer down on you from the entire apparatus of the state.
That does not apply to intelligence agencies. They are probably able to wield that kind of power against corporations.
Someone tried that on Jeff Bezos and it didn't work out so well. Why would someone comfortably in a position of power take that sort of risk, given the likelihood of it backfiring? For the movie plot to work, they need a good motive.
Not everyone is the richest person on the planet.
For this reason the "it backfired when tried on Bezos" argument is somewhat blunted.
Come to think of it, the articles surrounding the Bezos blackmail attempt even mentioned others had been successfully blackmailed because they didn’t have the resources to say to the best legal firm: “fix this, I’m too busy”.
That sounds interesting. Do you have more details?
How do you know they haven't?
What can Google or Facebook do that Verizon, Comcast, or other traditional telecoms couldn't have done already, as entities that own the physical mobile and broadband equipment the data flows through on its way to Google or Facebook?
End-to-end encryption isn't perfect, but it would at least make it considerably harder to track people on someone else's service - especially as compared to the owner of the service. Plus, Google or Facebook can cross-correlate tons of different users' behavior rather than narrowly spying on one single link (or several links, but still).
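To make the contrast concrete, here is a toy sketch (mine, not from the thread) of why an on-path carrier sees less than the service operator: with end-to-end encryption the wire carries only ciphertext plus metadata. A one-time XOR pad stands in for a real protocol like Signal's; actual deployments use authenticated ciphers and key exchange, not this.

```python
import secrets

def xor(key: bytes, data: bytes) -> bytes:
    """One-time-pad XOR: a toy stand-in for real end-to-end encryption."""
    assert len(key) == len(data)
    return bytes(k ^ d for k, d in zip(key, data))

# The two endpoints share a key; the carrier in the middle does not.
message = b"meet me at noon"
key = secrets.token_bytes(len(message))

ciphertext = xor(key, message)    # all the ISP sees on the wire
recovered = xor(key, ciphertext)  # XOR with the same pad is its own inverse

assert recovered == message
# What the carrier still learns is metadata: who talked to whom, when,
# and how many bytes moved - which is why E2E helps but isn't perfect.
```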
That's stepping on CIA's turf and I'd wager that CIA still has the bigger stick when it comes down to the physical world.
This is how capitalism works. At some point, companies need to buy laws to enforce their growth strategy. This is exactly what Google and Facebook are doing right now. Buying laws. And if people don't think stuff like this is part of negotiating, they are not even having the same conversation.
Real laws get created over lunch. They don't come out of steering committees.
Where in the USA is rule of law strong?
We ban accounts that post flamebait like this. Would you mind reading the site guidelines and following them? We're trying to be a bit different from the usual internet here.
Everywhere. Have you personally ever had to bribe a state or federal employee? Justice might not always be equal or fair (and these are subjective), but the rule of law is strong in the US.
Submitted this previously: https://news.ycombinator.com/item?id=20188689
How does HN de-dupe stuff? In the past when I’ve submitted something that was a dupe it took me to the already posted link instead of making a new one. Curious why that didn’t happen here.
Really good article, gave me a lot to think about. I think the nature metaphor is a good one, though it doesn’t fill me with optimism for the future of ambient privacy.
This is in the FAQ: we don't count submissions as dupes if the story hasn't had significant attention yet. In other words, reposts are ok until the article has had a significant discussion, or at least significant upvotes. This is to give good articles more than one chance at getting attention, since it can be pretty random which articles get noticed on /newest.
In this case, we actually boosted the submission into the second-chance pool (see https://news.ycombinator.com/item?id=11662380), which lobbed it on the front page. The reason we picked this post rather than yours is that it was the first submission of the article. (It doesn't look like it right now, because the timestamp was adjusted to be the resubmission time, but you can tell from the ID in the URL.) In the future we want to do some sort of karma sharing so multiple submitters can get credit, but that's not implemented yet.
If I may, I would like to request a little more transparency when this happens, as it can be confusing. For example, my comment elsewhere in the thread was written days ago, and I only happened to see that it has bubbled back up with a modified timestamp. When someone replies to a comment that appears to be posted mere hours ago they might expect to get a reply.
I'm not sure how to do it in any way that wouldn't be too complicated. The status quo leads to confusion, which is bad, but other things we've tried have led to more, or to a lot of distracting meta comments.
Does your comment show in the correct time order in the timeline ("mere hours ago") in your personal comment feed here:
Or does it show in the original ("days ago") timeline location way further down than "a few hours ago"?
Oh, very cool. That makes total sense, and I agree it’s nice to give articles another chance to get up the page. Thanks for letting me know!
Dang: Pretty sure my submission was the first.
I'm afraid not: 20172745 < 20187007. The IDs tell all.
You can also look at https://news.ycombinator.com/from?site=idlewords.com. In this case you were the bronze medalist, behind OrwellianChild and mrzool.
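The "IDs tell all" reasoning above can be stated mechanically: HN item IDs are assigned sequentially at creation time, so among duplicate submissions of the same article the smallest ID is the earliest one, even after a timestamp is adjusted for the second-chance pool. A minimal sketch using the IDs mentioned in this thread (which ID belongs to which submitter is not fully stated above, so treat the list as illustrative):

```python
# HN item IDs are assigned sequentially at creation, so the smallest ID
# among duplicate submissions is the earliest - even when a timestamp was
# later adjusted by the second-chance pool.
ids = [20188689, 20187007, 20172745]  # item IDs mentioned in this thread
earliest = min(ids)
print(earliest)  # -> 20172745, the submission that became this thread
```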
Got it. Where do I collect my medal?
You’re not the only one: