Hacker News

245 Comments:
tptacek said 2 months ago:

What a great post, about something I've been working on almost as long as Avery. I'm sad I missed it the first time it came around.

I was going to write a long response here, but I think I'll save that for a blog post (short summary: I disagree vehemently with what I believe the premise here to be, think that people shouldn't be waiting for the IETF to give them permission to build new network layers, am fairly certain there's no such thing as a "layering violation", and think overlay networks will ultimately make IPv6 irrelevant). So on this thread I'll just pick some nits.

Ethernet networking is not as gross as it's made out here. ARP isn't entirely pointless! For instance, at the ISP I ran tech for in the 1990s, I was able to pretty seamlessly move our "data center" and corporate offices across Chicago without renumbering just by exploiting ARP (I wrote a dumb proxy ARP policy router). We did similar things to route traffic to the particular terminal services customers were dialing into, or to the ISDN router whose PRI happened to service a particular customer. An IP purist would object that we weren't using OSPF the way God intended us to, but it worked and was probably more reliable than the bona fide routing protocols we replaced that with.
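
For flavor, the modern-Linux version of that trick is tiny (a sketch; the interface names and address are invented): the router answers ARP queries on one segment for any address it has a route to elsewhere.

    # answer ARP requests arriving on eth0 for any address this box has a route to
    sysctl -w net.ipv4.conf.eth0.proxy_arp=1
    # host route for the "moved" address, pointing at the segment it now lives on
    ip route add 203.0.113.25/32 dev eth1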

This narrative also, I think, gives short shrift to DHCP, which does a lot more than pick IP addresses for new endpoints: it pretty much fully configures their IP connection. If you had to do tech support for 10,000 random customers in the era before DNS servers were transparently assigned at connection, you wouldn't be pining for the simple elegance of RARP.

Also: nobody should care about "IGMP-snooping bridges", since IP multicast is and always was hopeless.

ChuckMcM said 2 months ago:

I was going to say the same thing, so I won't :-). That said, I was in the Sun Systems group when Bob Hinden ("Boss Bob" - there were three Bobs in the group) of the network group was proposing SIPP as the "next generation IP." It has been illustrative (but, alas, I don't think educational) to see how much more easily this protocol could have been implemented and deployed.

That said, as Thomas points out (indirectly) in the parent to this comment, the Internet was deployed across a pre-existing network (the telephone switching network) without any co-operation from the people who defined or wrote or deployed the protocols that implement telephone switching. As long as the connection from point A to point B worked, the packets could figure out how to get from A to B. There is absolutely nothing preventing a suitably motivated group from creating their own elegant "network" that they layer on top of the existing broadband networks of today, without having to either consult, or get permission from, any standards organization.

cure said 2 months ago:

> There is absolutely nothing preventing a suitably motivated group from creating their own elegant "network" that they layer on top of the existing broadband networks of today, without having to either consult, or get permission from, any standards organization.

And there are numerous groups doing that, e.g. https://yggdrasil-network.github.io/ and https://github.com/cjdelisle/cjdns.

mrkstu said 2 months ago:

That is essentially what most SD-WAN devices do: treat the Internet as an 'underlay' network. Most of them are using proprietary code to create their own network infrastructure that isn't standards based.

jiveturkey said 2 months ago:

It generally is standards based. Their customers demand it to be so. IPSec tunnel overlays, usually if not always full mesh. The non-standard part is tiny, insignificant tweaks to IPSec that render it unacceptable to standards-speaking endpoints, thus you can't coordinate with your open source IPSec device. Stupid myopia, because these systems depend on proprietary orchestration anyway.

basch said 2 months ago:

+1 for velocloud. SDWAN mesh between all your devices, and they provide a cloud gateway that allows you to connect to any compatible ipsec device, without having to backhaul all the data to one specific endpoint.

otterley said 2 months ago:

Here's the SIPP paper, in case anyone is interested: https://datatracker.ietf.org/doc/rfc1710/?include_text=1

azernik said 2 months ago:

ARP is also nice and abstract and well-defined; it can bridge from any multi-endpoint subnet's layer-2 address to an IP address. Not sure if anyone actually uses it for non-802, but the generality has forced a clean design.
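
That generality is visible right in the packet layout (per RFC 826): every address field is preceded by an explicit type and length, so nothing in the protocol is Ethernet- or IPv4-specific:

    Hardware type     (2 bytes)  e.g. 1 = Ethernet
    Protocol type     (2 bytes)  e.g. 0x0800 = IPv4
    HW addr length    (1 byte)   6 for Ethernet MACs
    Proto addr length (1 byte)   4 for IPv4
    Opcode            (2 bytes)  1 = request, 2 = reply
    Sender/target hardware and protocol addresses, sized per the lengths above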

To add to your praises of DHCP - it can also configure routers, and is in fact the standard solution for that in IPv6. Instead of giving you one or several addresses for NAT through DHCP, it gives the router an address for itself, and also a prefix to assign to clients on its internal network. Super neat stuff, and a boon to administrators.
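
As a concrete sketch of that prefix delegation with one common client (dhcpcd; the interface names are examples, and your client of choice may differ):

    # /etc/dhcpcd.conf
    interface wan0
      ia_na 1              # request an address for the router itself
      ia_pd 2 lan0/0/64    # request a delegated prefix, assign a /64 from it to lan0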

Also to nitpick your summary, because nitpicking is what I do - layering violations are a thing, but only in the same way that violating software abstraction barriers are a thing. Not a hard-and-fast rule, and sometimes if you're doing weird enough stuff you just have to do it.

mcguire said 2 months ago:

To go further off the rails, layering violations are not a thing, because "layers" are a remarkably poor abstraction for a network protocol "stack".

azernik said 2 months ago:

Got to disagree. The level of abstraction is very useful as a means of swapping out one layer without changing the other technologies - e.g. running IP over point-to-point fiber, or AlohaNet, or 802, or carrier pigeon. Or running ethernet over a phy with whatever ridiculous number of Mbps is the latest thing. (802.11, of course, has effectively zero phy/link distinction, but anything that has to deal with such high packet drop rates and negotiation of physical layer between endpoints is going to be a mess.)

There's an issue with the specific OSI layering, but that's higher in the stack: it has waaay too many layers at the top. Everything up to maybe the transport layer (TCP/UDP/SCTP) is very well delinked in most implementations, but the session/presentation/application layer distinctions are total BS.

tptacek said 2 months ago:

They're a useful tool for understanding the mindset of the original developers, but as you go "up" in the layers, the division of responsibilities becomes more and more arbitrary, with a very sharp uptick after "layer 3".

But more importantly, the notion that routing and forwarding "belongs" in IP, because that's the layer 3 protocol --- that's just false. There's no validity to it, and lots of systems have built overlays with layer 3 function on top of UDP (which in the "layering" model is a "layer 4" protocol, but is really best thought of as an escape hatch with which to build any new system you want on top of IP).

derefr said 2 months ago:

How about:

1. layers are a thing (and while any given piece of hardware or software can be serving as an amalgam of any contiguous sequence of layers, you can still analyze the behavior of such a component as if it were N separate abstract components, one for each layer it embodies);

2. layering and layering violations are a thing, in the particular sense of code that intermingles and entangles the concerns of different network layers being automatically a design smell (e.g. OpenVPN smells because, rather than building a clean layer-1 circuit abstraction on top of a layer-4/5/7 stream, and then running a regular substrate-oblivious layer-2 on top, OpenVPN runs a "dirty" layer-2 implementation directly on top of a layer-7 protocol (HTTP), where the layer-2 implementation knows things about HTTP and uses HTTP features to signal layer-2 data, such that it can no longer freely interoperate with other layer-2 implementations);

3. but just going down the layer stack, repeating layers, is not a layering violation. You can build all the way up to a circuit-switching abstraction like TCP, and then put PPP on that to go down to layer 2, and come back up again, and that's not even bad engineering.

mcguire said 2 months ago:

"1. layers are a thing (and while any given piece of hardware or software can be serving as an amalgam of any contiguous sequence of layers, you can still analyze the behavior of such a component as if it were N separate abstract components, one for each layer it embodies);"

* Path MTU discovery: For proper operation, TCP needs to know a link-layer property for each of the links between a source and destination.

This bypasses the IP layer, because IP fragmentation does not play well with TCP. On the other hand, TCP does not even see the concept of a "path" between the source and destination; IP may route each segment uniquely.

* TCP over wireless links: TCP makes the assumption that segment loss implies congestion; wireless links have the propensity to drop packets for a plethora of reasons that have nothing to do with congestion. Hey, it's a bad assumption, and there's work on congestion controls that don't make that assumption, but maybe we ought to ask Van Jacobson if life mightn't be easier if the link could tell the transport protocol, "My bad! That was me, I did that?"

azernik said 2 months ago:

* Path MTU discovery: that's part of the IP contract. IP provides an unreliable datagram service with an MTU that varies based on destination endpoint but will never be below 1280 bytes (in IPv6; it was 576 bytes in IPv4). IPv6 also wisely doesn't do fragmentation; sizing your packets correctly is the job of layer 4.

* TCP over wireless links: TCP's congestion control mechanism is a heuristic based on ever-evolving understanding of the characteristics of links in the wild. There are things that layer 3 can do that unambiguously get in layer 4's way (bufferbloat makes low-latency response unfeasible), but it's layer 4's job to deal with reliability and congestion control. (By the way - unlike LFNs, WiFi is actually not a pathological case for TCP congestion control and buffering. A good mental model for those periodic WiFi drops is of an Ethernet cable being disconnected and reconnected with a different one picked at random from a supply closet. In a lot of very common cases, when traffic gets passed again it will not be at the same throughput as before and so the endpoints need to rediscover the available throughput.)

To your more general suggestions about alternative designs: generally, schemes that have the link layer communicate with the endpoints using them scale BADLY to large internetworks, and the global internet is the largest.

azernik said 2 months ago:

Who makes systems that do routing on UDP?

tptacek said 2 months ago:

What does "on UDP" mean? UDP is just a means of running an arbitrary datagram protocol that rides on top of IP; it's how you'd build a system that treats IP the way IP treats Ethernet.

azernik said 2 months ago:

Sure, but you mentioned protocols that have "built overlays with layer 3 function on top of UDP". What are the examples you're referring to?

EDIT: My comment in reply to the sibling comment, which mentioned vxlan:

That's more of a recursive version of the lower layers; using layers 1-4 of one instance of the OSI model as layer 2 of another instance. If anything, this demonstrates just how useful the clear abstraction barrier between layer 2 and layer 3 is; you can have a very complicated software package (like a VPN) as a layer 2 instead of a physical network and all the code from layer 3 up doesn't even need to know.

dnautics said 2 months ago:

Vxlan, for starters.

azernik said 2 months ago:

That's more of a recursive version of the lower layers; using layers 1-4 of one instance of the OSI model as layer 2 of another instance.

If anything, this demonstrates just how useful the clear abstraction barrier between layer 2 and layer 3 is; you can have a very complicated software package (like a VPN) as a layer 2 instead of a physical network and all the code from layer 3 up doesn't even need to know.

mcguire said 2 months ago:

There are other models of modularity that make it easy to separate transport, routing, link, and physical protocols without starting from the assumption that "layer X can only interact with the lowest-common-denominator interface for layers X-1 and X+1". That assumption leads to everything from the PMTU discovery silliness to the pain of getting TCP to work correctly over links like wireless, where packet loss does not imply congestion.

bdamm said 2 months ago:

I've heard some folks talk about TLS as a "session" layer, and it is fortunate that we no longer have to translate between ASCII and EBCDIC underneath the application, so the "presentation" layer now seems like it is mis-named. Ah how times change.

tssva said 2 months ago:

In the early to mid 80s "layer 3 switching" was becoming a thing and each switch vendor had their own method for implementation. Cabletron was a large switch vendor then and their method of layer 3 switching depended upon ARP. Each host would be assigned a /32 ip address and their default gateway would be their own ip address. There was a registry setting available on Windows NT server that would cause the DHCP server to provide hosts with DHCP address and router assignments that met these requirements.

Ports that had routers connected to them were designated as router ports and needed to have proxy arp enabled.

Whenever a host wanted to talk to any IP address which was not already in its ARP cache, it would send an ARP request. The management system of the switch, which in this case was software running on a server outside the switch, would look up in its tables whether it knew the IP address from another switch port. If so, and all policies allowed the host sending the request to speak to the port the destination was associated with, the manager would respond to the ARP request with the MAC of the destination. If the requested IP address didn't exist in its tables, the request would be flooded out all router ports.

sandos said 2 months ago:

"Windows NT is a family of operating systems produced by Microsoft, the first version of which was released on July 27, 1993"

NT did not exist in the early 80s, maybe just a typo?

tssva said 2 months ago:

Yes, a typo. I meant 90s.

runjake said 2 months ago:

Good points.

One issue, though: "nobody should care about 'IGMP-snooping bridges'". I so wish this were true, but (first-hand knowledge) tons of infrastructure these days utilizes IP multicast, including building lighting, HVAC, intercom, VoIP, etc.

socraticmethod said 2 months ago:

| IP multicast is and always was hopeless.

Out of curiosity, what do you mean by this? Are you referring to all multicast solutions? Can I just be specific -- what do you think of Dante, Audio/Video-over-IP or other time-sensitive and synced services that use multicast?

throwaway2048 said 2 months ago:

He means multicast over the internet at large, not tightly controlled networks.

It is basically a completely unstoppable DDOS and abuse tool.

jandrese said 2 months ago:

Isn't plain old UDP already an unstoppable DDOS tool? Multicast doesn't make it that much harder to stop. In fact, using it as a DDOS tool seems a bit problematic since the victim would need to join the groups to receive the traffic. Yes, a piece of malware on the victim's computer could go and attempt to join every single multicast source on the internet, but it's a self-correcting problem since they wouldn't be able to maintain their subscriptions with their link totally saturated. Much easier to stop than normal DDOS attacks.
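
The join really is explicit, for what it's worth; a receiver has to ask for each group on each interface (the group address and interface below are examples):

    # receiver: the IGMP join happens here, per group, per interface
    socat UDP4-RECV:5000,ip-add-membership=239.0.0.1:eth0 STDOUT
    # sender: no join is needed just to transmit
    echo hello | socat STDIN UDP4-DATAGRAM:239.0.0.1:5000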

The problem is that we have never figured out a multicast routing solution that would work at Internet scale. Especially one that can be implemented in hardware on routers.

pdkl95 said 2 months ago:

> we have never figured out a multicast routing solution that would work at Internet scale

Sure we did, it's called bittorrent. Ok, it isn't really multicast and you probably have to sacrifice ordered delivery, but for many of the use-cases where multiple-delivery would have been a good idea, bittorrent has proven to be a very successful "minimum viable multicast".

Bittorrent succeeded while decades of "multicast" research/experiments failed because bittorrent realized the multi-delivery problem was really about managing peers, which isn't solvable at layer-3.

edit: by which I mean: previous attempts at multicasting assumed it was a packet routing problem, when peer management is actually a question for the application layer.

jandrese said 2 months ago:

Bittorrent is the opposite of multicast. Instead of aggregating the data into a single channel to save bandwidth, we instead split it up across every single recipient in a huge NxN graph.

This also illustrates the other problem with multicast on the Internet: It's mostly saving bandwidth on the backbone and at the server. The backbone has plenty of bandwidth to spare, and servers are often in data centers these days where bandwidth is not a huge concern.

The use case where someone does video production in their basement and broadcasts it out to millions of people across the internet over their home cable modem connection is just not compelling enough for ISPs and the backbone providers to make Multicast happen. Just put it on Youtube and let Google sort it out.

floatboth said 2 months ago:

hmm. Multicast is often used for, like, IPTV. That's a very different task from BitTorrent. Torrents are indeed about managing peers. IPTV is centralized, not p2p, the benefit of multicast for IPTV is that the routers in between the source (ISP) and your client only carry one copy of the stream instead of one stream per client.

At internet scale.. well, it would be nice to have this efficiency for Twitch and YouTube Live. Which are also pretty centralized (CDN) so I don't see how this is about managing peers.

opencl said 2 months ago:

Bittorrent has a P2P streaming protocol called Bittorrent Live which was used to operate a TV service for several years but I have no idea how efficient it is compared to IPTV multicasting or central servers+CDN.

throwaway2048 said 2 months ago:

Multicast has the potential to almost arbitrarily amplify DDOS with IP spoofing (which, yes, still exists).

yusyusyus said 2 months ago:

How exactly? Sources have to pass RPF check following ucast path and receivers have to follow the path either to RP or source, or the packets don't get there.

tptacek said 2 months ago:

It's also, effectively, a promise to maintain Internet-wide routing table entries for every page on the web rather than every host (which is something we also can't really do today).

Dylan16807 said 2 months ago:

Multicast for everything is difficult. But would it be all that difficult to have 100k or 1M entries?

Something that would definitely be doable today is an IP header that stores 25 or 50 extra destination addresses. But it seems like nobody really cares. Just make streaming services send out a thousand packets with identical data.

pas said 2 months ago:

Well, it could be done based on microtransactions. To set up your mcast tree you need to pay. The slots are auctioned off every X minutes on a DAG-chain-block-thing.

Dylan16807 said 2 months ago:

No need to over-complicate things. You can sell them on a monthly/yearly basis just like phone numbers. That's not the hard part.

pas said 2 months ago:

Sure, but that means if you want to live-broadcast something right now, you can't just allocate a slot for the next few hours.

wahern said 2 months ago:

> Edit 2017-08-16: It turns out that nothing in this section requires IPv6. It would work fine with IPv4 and NAT, even roaming across multiple NATs.

That's not quite fair. IPv4 and NAT require maintaining a lot of state at critical intermediate routers. I'm sure we've all experienced (perhaps regularly) a NAT'ing router losing state because of a reboot, state tables overflowing, or similar hiccups.

The one absolutely redeeming quality of IPv6 is that with 128 bits routing can be kept much more hierarchical and stateless. Fragmentation will occur and is occurring because we now apply trust metrics to IP addresses (thanks, spammers!) so there's value in owning transferable subnets, but I think IANA and major ISPs can keep the fragmentation process slow enough that hardware/software capabilities can stay apace, preserving a largely stateless, hierarchical routing infrastructure.

That doesn't solve the mobility problem, but it does provide significant benefits to mobility solutions. Don't underestimate the cost of NAT'ing in terms of systems complexity. NAT'ing isn't the type of layer, like ethernet MAC addressing, that we'll always be stuck with.

It's also worth pointing out that some of the [now] unnecessary complexity of IPv6 will fall away over time. IPv6 will get simpler as time progresses, as it already has.

pdkl95 said 2 months ago:

> IPv4 and NAT require maintaining a lot of state at critical intermediate routers.

Far worse than the maintenance costs, NAT has prevented (modulo a few niche environments) the development of any network software that isn't client-server. NAT is a party line[1]. Anyone that thinks NAT is useful for anything other than an address space hack (or a few other less common uses[2]): why aren't you replacing all of your household (or small business) phone numbers with a single party line? Surely you could implement a local PBX if you needed to support more than one kind of incoming call. Or what if the phone company offered a new type of line: 10% cheaper, no way to receive incoming calls. Well, sometimes you can get some incoming calls, if you convince whomever actually controls the inbound phone number for your line to forward calls that come from certain numbers or meet some other criteria, like they did with the old party lines: just get people calling you at the shared number to pause after connecting and dial your pre-arranged ring code[4]. (Warning: the phone calls might still be answered by whomever picked up first.)

NAT keeps us on the client/increasingly-centralized-server model. NAT removed one of the most important benefits of IP-based internetworking: no discrimination between different types of network host. Each addressable host has the power to publish - using any protocol, including new protocols - without needing an imprimatur[5]: permission to publish granted by some other party.

If we are very lucky, we might regain some of the lost capability with IPv6.

[preemptive response: no, NAT does NOT provide any security benefits. NAT just rewrites packet address/ports. Dropping packets is the job of the (probably stateful) firewall. Nobody wants to change that part, firewalls are important. And if you think NAT hides your internal addresses... really? Is it 192.168.x.y? Can I send GET requests to your router with a webpage full of hidden/obscured img tags with src="http{,s}://192.168.1.{1..254}/whatever?i=want"?]

[1] https://en.wikipedia.org/wiki/Party_line_%28telephony%29

[2] Most of the time, when people say "NAT", they are only referring to the many-to-one ip masquerading similar to what is implemented in most home routers, or perhaps the "carrier grade" extensions of the same basic idea. They are probably not referring to the other flavors of NAT[3].

[3] https://tools.ietf.org/html/rfc2663#section-4.0

[4] https://en.wikipedia.org/wiki/Party_line_%28telephony%29#Sel...

[5] https://www.fourmilab.ch/documents/digital-imprimatur/

userbinator said 2 months ago:

[preemptive response: no, NAT does NOT provide any security benefits. NAT just rewrites packet address/ports.

NAT automatically prevents attackers from scanning for and attacking listening ports on the hosts behind it. Given that those who want a service to listen to the Internet should also know enough to forward the ports, I'd say it's a pretty important security benefit and one that has greatly slowed the spread of worms.

Can I send GET requests to your router with a webpage full of hidden/obscured img tags

That requires extra action on the part of the "attackee", whereas without NAT the attacker would be able to directly connect to a port, to any machine anywhere on the Internet.

johncolanduoni said 2 months ago:

> NAT automatically prevents attackers from scanning for and attacking listening ports on the hosts behind it. Given that those who want a service to listen to the Internet should also know enough to forward the ports, I'd say it's a pretty important security benefit and one that has greatly slowed the spread of worms.

So does a few lines of iptables (which work just as well with IPv4), which even most ISP routers that support IPv6 have managed to get right. On the other hand the workarounds for when you actually do want to accept connections through a NAT (cough UPnP) have been consistently misconfigured or had implementations which are simply vulnerable.
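
Those few lines look roughly like this (interface names are examples; note that ICMPv6 shouldn't be blanket-dropped, since v6 relies on it for NDP and path MTU discovery):

    ip6tables -P FORWARD DROP                           # default-deny forwarded traffic
    ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    ip6tables -A FORWARD -i lan0 -o wan0 -j ACCEPT      # the LAN may initiate outbound
    ip6tables -A FORWARD -p ipv6-icmp -j ACCEPT         # keep ICMPv6 working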

Dagger2 said 2 months ago:

NAT does nothing of the sort.

Here's NAT, as implemented on a Linux machine: `iptables -t nat -A POSTROUTING -o wan0 -j MASQUERADE`. You see that "-o wan0" there? That means the rule only applies to outbound connections, and it does nothing whatsoever to inbound connections. How can a rule that doesn't apply to inbound connections possibly do anything about the behavior of inbound connections?

On the other hand, v6 does do something about the spread of worms: networks use 64 bits of address space, so it's difficult to even find an active network host by scanning. On v4, you can find all active vulnerable hosts by just trivially scanning the entire 32-bit address space. An exhaustive search of a /64, which is just one single network, requires 0.9 ZB of traffic.
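
Back-of-envelope for that figure, assuming a minimal ~48-byte probe per address:

    2^64 addresses x 48 B/probe (40 B IPv6 header + 8 B ICMPv6 echo)
      ≈ 1.8 x 10^19 x 48 B ≈ 8.9 x 10^20 B ≈ 0.9 ZB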

(Of course there are other ways of finding active hosts, and being obscure doesn't make the hosts in question secure -- but it _does_ render network scanning as a tool for spreading worms more or less obsolete.)

Dylan16807 said 2 months ago:

> How can a rule that doesn't apply to inbound connections possibly do anything about the behavior of inbound connections?

Outside of exotic situations, part of setting up a NAT is putting your internal hosts on a private IP range. This makes inbound packets impossible (if your ISP didn't screw up).

Even if you didn't do that, all you have to clarify is that you're replacing the existing routing rules with NAT. Mystery solved.

Dagger2 said 2 months ago:

Due to the extreme address shortage in v4, of course it's typical to be putting your internal hosts on a private IP range. But "putting your internal hosts on a private IP range" isn't NAT.

And in fact, putting your internal hosts on a private IP range doesn't prevent inbound connections either. It'll limit the set of people that can do them, but anybody on your immediate upstream connection can do it just by sending packets to your router. That probably means at least your ISP and anybody who can order them to cooperate.

NAT isn't a security boundary.

(And if you replace your routing rules with NAT then everything will break, because the only thing NAT does is rewrite addresses in packets. They still need to go through the routing table to be routed.)

Dylan16807 said 2 months ago:

> Due to the extreme address shortage in v4, of course it's typical to be putting your internal hosts on a private IP range. But "putting your internal hosts on a private IP range" isn't NAT.

Maybe not private, but I think it's fair to say that putting them on a different IP range is expected enough that you can reasonably call it part of NAT.

> They still need to go through the routing table to be routed.

You're looking at things from a very iptables-centric view.

"Accept packets on one network interface, rewrite them, spit out the other interface if the rewrite succeeded" is a valid description of NAT that replaces all other routing.

It also prevents the "ISP screwed up" and "malicious ISP" scenarios, if your NAT is port-based. It doesn't even care if you used different IP spaces or just one.

Dagger2 said 2 months ago:

The choice of IP range for your network is still a conceptually separate thing from the choice of whether or not to NAT outbound connections from said network. Certain choices of IP range might make NAT more or less useful to you, and thus might influence your choice, but it's still a separate thing.

Yes, I'm describing behavior from the perspective of Linux and iptables because that's what I'm familiar with... but this is the generally-understood meaning of "NAT" when used in this context, and as far as I'm aware this is how basically everything implements it. This is the behavior you'll be seeing.

Dylan16807 said 2 months ago:

Nevermind the discussion of IP range for now, I didn't explain well enough and I don't want it being a distraction.

> this is the generally-understood meaning of "NAT"

If you tell someone that a device does NAT, but not routing, they're going to expect the packets to go in one side and come out the other side, right? I think that's a reasonable expectation, and it does not require any routing to do. And since the attack we're talking about depends on using routing to bypass the NAT, a dumb device that doesn't route is not vulnerable, and does not make your network vulnerable.

pdkl95 said 2 months ago:

[I'm posting this as an attempt at general education. No personal attacks or hostility intended]

> If you tell someone that a device does NAT, but not routing, they're going to expect the packets to go in one side and come out the other side, right?

Only if they haven't read RFC 2663 (or similar documentation). I would describe a "device that does NAT but not routing" as highly unusual, possibly purpose-built to update an older network or work around some kind of compatibility/interoperability problem. Such a device would probably be a custom iptables (or equiv) configuration on a standard linux (or equiv) box, not a branded "router".

However, another interpretation is that the terms "NAT" and "routing" are being used in the colloquial sense, referring to something roughly similar to the home devices that sit between the LAN and a {cable,ADSL} modem. In that sense, a device with two cables attached might not appear to be "routing" anything. In previous posts, I'm using the technical definition of NAT as defined by RFC 2663, not this broader colloquial definition.

By the technical definition, the device is "routing packets"! NAT is defined as something that performs "transparent routing" between address realms. From RFC 2663: [1]

    2.2. Transparent routing

       The term "transparent routing" is used throughout the document to
       identify the routing functionality that a NAT device provides.  This
       is different from the routing functionality provided by a traditional
       router device in that a traditional router routes packets within a
       single address realm.

       Transparent routing refers to routing a datagram between disparate
       address realms, by modifying address contents in the IP header to be
       valid in the address realm into which the datagram is routed.
NAT transparently[2] "routes" between address realms[3], which are defined as:

   2.1. Address realm or realm

   An address realm is a network domain in which the network addresses
   are uniquely assigned to entities such that datagrams can be routed
   to them. Routing protocols used within the network domain are
   responsible for finding routes to entities given their network
   addresses.
NAT accomplishes this "routing" between realms "by modifying address contents in the IP header". Changing the SRC Address field in the header is routing the packet! Any further routing decisions do not involve NAT. Once the packet's address has been changed, the actual handling of the packet is performed by the "routing protocols used within the network domain". This is also what happens to packets that were received directly into that network domain. A simple implementation might do something roughly similar to this (note: oversimplified):

    |                       |        /-----------\
    |                       |       / [Firewall]  \
    \  New Packet Received  /       | unroutable, |
     \      From NIC       /        \ drop packet /
      \-------------------/          \-----------/
                 |                            ^
                 v              addr: unknown |
         +----------------+    +-----------------+
         | is DST addr in | Y  | [Packet Router] |
         | realm "WAN"?   |--->| rules: "WAN"    |
         +----------------+    +-----------------+
           N |                     addr: NAT |
             v                               v
         +----------------+   /--------------------------\
         | is DST addr in |  / "Route" from realm "WAN"   \
         | realm "LAN"    |  | to realm "LAN" by changing |
         +----------------+  \ the IP header address      /
           N |       | Y      \--------------------------/
             v       |                       | 
     /-----------\   |                       v
    / [Firewall]  \  |      +---------------------------+
    | Bad addr,   |  |      | Hand off packet to be     |
    \ drop packet /  |      | routed into its new realm |
     \-----------/   |      +---------------------------+
                     v                       |
         +-----------------+                /
         | [Packet Router] |<--------------/
         | rules: "LAN"    |
         +-----------------+
            |  To: the "LAN" firewall,
            |      other processing,
            |      maybe TX at the
            v      LAN's NIC
          (...)

NAT simply hands the packet back to be routed to the LAN. Since NAT is defined as a "transparent" routing, a NAT-translated packet should be handled the same as any other packet addressed to the LAN.

> a dumb device that doesn't route is not vulnerable

If packets going into the device are retransmitted in any way, the device is "routing" packets. Dumb retransmitting/repeating of packets is a type of routing, even if it isn't making important decisions about each packet. Fortunately, most devices include a stateful firewall that DOES make important decisions about how to handle each individual packet.

> does not make your network vulnerable.

If-and-only-if you had that unusual NAT-only, no-firewall device - which is very different from a typical home router - it could route packets to your private LAN if they were addressed to a valid LAN address (perhaps 192.168.1.x?). They would bypass NAT and be routed to the LAN, just like the router's own communication with the LAN. You wouldn't expect NAT to touch packets sent from the router's management HTTP server to a host on the internal/private LAN.

The malicious packet might be sent from something like a "smart TV" that you "isolated" in a 2nd LAN or DMZ connected to the same router. Fortunately, most routers are not vulnerable... because they include a firewall that drops "obviously invalid" packets.

[1] https://tools.ietf.org/html/rfc2663#section-2.2

[2] Transparent to the src and dst hosts that sent/received the packet. "Transparent" here means that the hosts using the NATing router do not need to do anything special when sending/receiving packets. The address changes are invisible to the endpoints.

Dylan16807 said 2 months ago:

> If packets going into the device are retransmitted in any way, the device is "routing" packets. Dumb retransmitting/repeating packets ix a type of routing, even if it isn't making important decisions about each packet.

A hub is not a router. An inline repeater is not a router. That is not a normal or useful definition.

> flowchart

When most people talk about routing, I think they mean the boxes you labeled [Packet Router]. A NAT-only device would basically remove every conditional box, remove every box labeled [Firewall], and remove every box labeled [Packet Router].

Packet received -> Rewrite headers -> Packet output.

Technically you'd have two identical flowcharts, one for each direction.

> If-and-only-if you had that unusual NAT-only, no-firewall device - which is very different from a typical home router - it could route packets to your private LAN if they were addressed to a valid LAN address (perhaps 192.168.1.x?).

Any packet coming in from the internet would hit the rewrite engine, the rewrite would fail, and it would not make it to the LAN.

> They would bypass NAT and be routed to the LAN, just like the router's own communication with the LAN. You wouldn't expect NAT to touch packets sent from the router's management HTTP server to a host on the internal/private LAN.

If you had to have a web interface, then packets coming from the NAT device itself would be sent directly to the port, not to the NAT engine.

But for this thought experiment let's just not have a web interface.

> The malicious packet might be sent from something like a smart TV" that you "isolated" in a 2nd LAN or DMZ connected to the same router.

Packet from a LAN? It gets NAT applied and goes to the internet. The internet can decide what to do with a destination of 192.168.0.107

(I would have said you can't even have two LANs without routing, but you could force it to happen, like making every WAN->LAN packet go out to both of them.)

All these attacks you're listing are using routing to bypass the NAT functionality. Not possible in a device that cannot route.

> Changing the SRC Address field in the header is routing the packet!

That is an interesting and pretty convincing argument. Two caveats, though. One is that this is still a separate thing from "packet routing" or whatever you want to call picking a direction/vlan for a packet to go to based on routing tables.

The other caveat is that it might just be imprecise language. What if "by" means "by also". They are assuming you have a normal router as a base, are making it transparent by adding header modification. So then transparent routing is a combination of routing and header modification, but header modification all by itself is not routing.

Because the RFC assumes you started with a router, maybe I should say "just the Network Address Translation portion of the NAT RFC" instead of "NAT", but that seems like an unnecessary level of pedantry.

pdkl95 said 2 months ago:

> NAT automatically prevents attackers from scanning for and attacking listening ports on the hosts behind it.

That's the firewall, not NAT. NAT on its own - without a firewall, a very unusual configuration - will route packets to your "private" addresses if that address is in the packet's DST address field (maybe; it depends on the rules the router uses when deciding how to route a packet, which is also not NAT). NAT is only about changing the address field. RFC 2663 even recommends that NAT should usually be "used in conjunction with firewalls to filter unwanted traffic". Unless you're referring specifically to changing the address/port fields in the packet header, you are probably referring to a stateful firewall. That's the feature that is usually responsible for dropping packets, preventing scanning of your LAN.

TL;DR - address translation (NAT) and choosing what to do with a packet (firewall, routing rules) are separate, independent features. NAT - by itself - isn't really involved in the routing/firewall step. That's why it's extremely unusual to see NAT in isolation. The thing you see on e.g. most home routers is basic use of NAT combined with a simple router and (hopefully) a powerful stateful firewall.
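
The separation is easy to see on a Linux box, where the translation and the policy live in entirely different rules (interface names are examples):

    # NAT: only rewrites addresses on outbound packets
    iptables -t nat -A POSTROUTING -o wan0 -j MASQUERADE
    # the actual security: a stateful firewall on the forwarding path
    iptables -P FORWARD DROP
    iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -i lan0 -o wan0 -j ACCEPT

Delete either piece and the other keeps doing its own job; they're independent.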

> should also know enough to forward the ports

That's nice iff you get to decide which ports are forwarded. Good luck getting a carrier to forward ports. It isn't going to happen if you're in some parts of China behind 7 layers of NAT. Port forwarding in that situation would be a nightmare, assuming it was even theoretically possible to convince the 7 upstream authorities that they should forward a port to you.

However, minutia about port forwarding doesn't address my main point: that NAT limits the type of software that is developed. Are you really going to write a network app that only works if people set up port forwarding? Would businesses use the telephone for as many purposes if phone numbers were all shared party lines?

> That requires extra action on the part of the "attackee"

That "extra action" is probably the most common vector of infection for modern malware. Ignore it at your own peril.

Dylan16807 said 2 months ago:

> route packets to your "private" addresses if that address is in the packet's DST address field

And how does such a packet make it to your router, exactly?

I disagree that it's "the firewall" doing the real work here. If things are configured properly, a firewall should be 99% irrelevant.

ikiris said 2 months ago:

No, it doesn't.

It. Does. Not.

At all

Godel_unicode said 2 months ago:

There are an awful lot of home installations of Windows XP that aren't going to get exploited by BlueKeep because of NAT not forwarding 3389 to them.

wbl said 2 months ago:

And a firewall would do the same thing.

Godel_unicode said 2 months ago:

If configured correctly, sure, that's why I use one. I also realize that most random people don't have the technical savvy to configure one to be anything other than effectively a NAT gateway.

The fact that something else can also provide that security benefit in no way means that NAT doesn't provide some security benefit. It does.

rohan1024 said 2 months ago:

That was a fault of the OS and they should have fixed it. Network protocols and equipment are not responsible for OS security issues.

This NAT'ing for security has practically left the Internet broken. We are permanently dependent on servers to route packets between clients.

Godel_unicode said 2 months ago:

Now you're moving the goalposts, though. Saying that the network shouldn't play a role in security is totally different than saying that it currently plays none.

I find it really hard to understand this obsession with pining for a world where security doesn't need to exist. It does, and it always will. Design around that, it's not hard.

pdkl95 said 2 months ago:

> pining for a world where security doesn't need to exist

Nobody is doing that. We're "pining" for a world where our devices can have direct phone numbers instead of having to share a party line. Unfortunately, some people keep insisting that requiring households, businesses, or larger groups of people (i.e. CGNAT) to share a single phone number keeps everyone safer because it keeps most people from being able to receive incoming calls.

See my other post[1] for the technical reasons NAT doesn't actually provide security. TL;DR - this is a problem of definitions and a common misunderstanding about how NAT/routing works.

In the telephone analogy, I'm trying to say that your phone lines should have their own individual telephone numbers, because you might need them some day. Not having the ability to receive incoming calls will eventually limit you in important ways. "But incoming calls can be dangerous! Why are you trying to make us less secure?" We're not; we're increasing your options, which doesn't affect your security. Since incoming calls are dangerous, just disable your ringer or use a firewall that simply blocks all incoming calls.

[1] https://news.ycombinator.com/item?id=20181274

Godel_unicode said 2 months ago:

> See my other post[1] for the technical reasons NAT doesn't actually provide security.

You're just as wrong now as you were then, see my up-thread post to correct your misunderstanding about security.

Edit: either direct addressing isn't possible with NAT, which provides security benefits, or it is possible, which means your complaint is misplaced. It cannot simultaneously prevent direct addressing and provide literally no security benefit.

maccam94 said 2 months ago:

Configuring a firewall correctly is much easier than configuring NAT correctly:

Block all incoming connections by default. Have your apps/OSes on firewalled machines prompt users to allow incoming connections, and use uPnP to talk to the firewall to open the port.

With NAT, you additionally have to deal with port renumbering (what if more than one host wants to run web servers, or ssh, or VNC, etc). And because the ports are a shared resource between all hosts, you may not allow uPnP so hosts can't fight over forwarding rules.
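
For reference, with a miniupnpc-style client the port-open dance is a one-liner per port (addresses and ports are examples; the router must have UPnP IGD enabled):

    upnpc -a 192.168.1.50 8080 8080 TCP   # map external :8080 to 192.168.1.50:8080
    upnpc -l                              # list current mappings
    upnpc -d 8080 TCP                     # remove the mapping again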

lmm said 2 months ago:

No, it would be straightforward for a worm to figure out what internal network addresses they were using, what routers there were behind, and send packets to those routers whose destinations were those internal network addresses (192.168.1.2 or whatever). NAT does nothing to stop that.

Most routers won't forward those packets. But that's got nothing to do with whether those routers are running NAT or not.

Dylan16807 said 2 months ago:

Straightforward? How do you send a packet with the wrong IP to a machine on the other side of the internet?

Godel_unicode said 2 months ago:

> Most routers won't forward those packets.

Good, we agree this is a pointless hypothetical which will never work. That does beg the question why even bring it up, though...

lmm said 2 months ago:

> Good, we agree this is a pointless hypothetical which will never work. That does beg the question why even bring it up, though...

Well you're the person best placed to answer that, since you brought it up.

said 2 months ago:
[deleted]
floatboth said 2 months ago:

> if you think NAT hides your internal addresses

More like cgNAT makes you share the external address between many people. It gives you plausible deniability: "dear $forum admin, it's not me creating multiple accounts for trolling from the same IP address, it's the guy from the next building, I swear!" / "it's my neighbor's porn visible on https://iknowwhatyoudownload.com/en/peer/ not mine!"

Of course that's Not Security, that's just obscurity, but damn, it kinda does feel good :D

ComodoHacker said 2 months ago:

>Each addressable host has the power to publish - using any protocol, including new protocols

OK, you remove NAT, but if the firewall remains with a default DROP ALL policy, what's the difference? Even worse, the firewall now isn't in your home, it's in the ISP network and out of your control. So you need an imprimatur as before.

rohan1024 said 2 months ago:

I doubt that. If that had been the case, the BitTorrent network would never have worked.

ComodoHacker said 2 months ago:

BitTorrent worked because UPnP is allowed by default in home routers.

In an IPv6 network without home routers the problem of malware spreading between peers remains. Chances are high that ISPs will be forced to restrict peer communications just like they did in NATed IPv4 networks.

nybble41 said 2 months ago:

Prior to CGNAT, ISPs did not generally restrict peer communications in NATed IPv4 networks. The subscriber's router had a public IPv4 address and could accept or forward incoming connections at will. In most cases this could be automated (by default) with UPnP, which is effectively the same as not having a firewall for incoming connections. (If the port is closed anyway then blocking the traffic at the router has no effect, and any application that can open a port can use UPnP to allow incoming connections through the firewall.) The only real restriction compared to IPv6 was that you couldn't run multiple services on the same well-known port, which offers no security advantage to offset the inconvenience.

CGNAT breaks all this, of course, since the public IP address is on the ISP's side and they're unlikely to implement port forwarding on demand. For that matter, there probably aren't enough ports available to support all the subscribers sharing a given public address, and there could be security/trust issues as well with incoming connections to the same IP being dynamically routed to different subscribers according to the port number. (All the problems associated with dynamic IP address reuse, but with much quicker turnover.)

With IPv6 you can either let the destination deal with incoming connections directly—which has about the same security as a NATed IPv4 network with UPnP—or manually configure the firewall to only allow specific connections through according to the destination IPv6 address and/or port. Either way you won't have any issues with multiple hosts wanting to accept traffic on the same port numbers. There is no technical reason why there couldn't be a protocol like UPnP(v6) just for opening ports in the router on demand, but in a NAT-free network it wouldn't really serve any purpose.

ComodoHacker said 2 months ago:

>which is effectively the same as not having a firewall for incoming connections

Not the same. Intranet-only services are still protected.

nybble41 said 2 months ago:

Between malicious or infected hosts inside the network (your own or guests') and the widespread prevalence of hacked routers you really shouldn't trust incoming traffic on the mere basis that it appears to originate from the local network. It's better to treat the intranet as nothing more than a more performant subset within the broader Internet, and all network services as Internet-facing services. Trust no one without authentication.

Assuming you're stuck with some insecure legacy protocol which relies on such rules, however, it's still quite simple to restrict incoming connections to a specific subnet on the host itself, either with local firewall rules or a few lines of code in the application. In a network with UPnP the host is the authority on which connections should be allowed, whether we're talking about Internet services or intranet-only ones, and the host can block incoming connections at least as well as the router. In the absence of NAT there is no need for the router to get involved.

jsn said 2 months ago:

> IPv4 and NAT require maintaining a lot of state at critical intermediate routers. I'm sure we've all experienced (perhaps regularly) a NAT'ing router losing state because of a reboot, state tables overflowing, or similar hiccups.

As far as I can see, that state is not very important for the mobile IP design described in the article. A router reboot or other hiccup is handled more or less the same way as an address change (there is a temporary connectivity loss, and when it's restored, you create a new mapping on the Y server between the existing connection, identified by the same old uuid, and the new external/internal addresses/ports of the client X).

wahern said 2 months ago:

Exactly, it's not necessary. All it does is add complexity and unnecessary failure points.

Mobile providers use IPv6 for a reason. CGNAT is still used because, among other reasons, the internet (at least in the U.S. and Europe) is still predominately IPv4. If IPv4 disappeared tomorrow CGNAT (and the centralization of mobile network egress points) might be able to go away, too. If, additionally, QUIC completely replaced TCP, I can't think of any reason for maintaining such choke points.

In as much as IPv6 provides a faster, more reliable network, it benefits QUIC. And the more ubiquitous QUIC becomes the easier IPv6 will become to manage.

QUIC won't be a panacea any more than IPv6 was. But they're both important improvements to the network.

thaeli said 2 months ago:

> The one absolutely redeeming quality of IPv6 is that with 128 bits routing can be kept much more hierarchical and stateless.

For now. Yes, 128 bits is a lot. Yes, even the 64 bit host-part is a lot. But the old-fashioned, pre-CIDR allocation strategy of IPv6 is _incredibly_ wasteful of address space. There are advantages, and for a single-planet, pre-IoT Internet, that's plenty. But IPv6 has taken so long to adopt, and future major stack changes will only be harder, that we really do need to think about what will work for the Internet a hundred years from now.

jandrese said 2 months ago:

I don't think you realize just how big 64 bits for the network address is. It's one of those things where the scale is so large it's outside of normal human experience and we just can't hold it in our minds. There is no danger of running out of addresses for hundreds of years unless we decide it is necessary to give every single grain of sand on Earth its own /64. Even then we would have room to spare in the address space.

It's just an absurdly large number. Giving out a few thousand to anybody who asks will never be a problem until humans have literally filled the galaxy.

carapace said 2 months ago:

The example I've heard and like the best: With 64 bits you can address every cubic centimeter in the Solar system.

vardump said 2 months ago:

Earth's volume is about 1.08e12 km^3. Or 1.08e24 liters (dm^3, cubic decimeters).

1.08 * 10^12 km^3 / (2^64) in m^3 = 58.7 m^3.

So you'd have one 64-bit address for every ~59 cubic meters of earth.

Also, what's the volume of the solar system?

Dagger2 said 2 months ago:

Based on the GP post, I believe the solar system is a sphere with radius 16.4 km. The volume would, therefore, be almost exactly 2^64 cubic centimeters.

Dylan16807 said 2 months ago:

Yeah that number is definitely wrong.

The best fit I can find is that 128 bits is roughly enough to address every cubic meter in a sphere slightly larger than Neptune's orbit.

64 bits is way too big for a single dimension, but way too small for three dimensions.

nybble41 said 2 months ago:

128 bits is enough to allocate 170 unique addresses to every milligram of matter in the Solar System (~99.9% of which is the Sun, about 2 * 10^33 grams). I think that's a more useful measure than the volume of the Solar System, which after all is mostly empty space. After all, what's the smallest (lightest) device which can benefit from having its own IPv6 address?

zimpenfish said 2 months ago:

https://en.wikipedia.org/wiki/Solar_System suggests 40AU is a reasonable radius (call it 6 billion km). Which gives a spherical volume of 9.05x10^38 m^3, giving you one address per ~4.9x10^10 km^3.

You could consider it a disc that just encapsulates the sun - then the volume is 1.57x10^35 m^3 and your address gets you a mere 8.5x10^6 km^3.

zAy0LfpBZLC8mAC said 2 months ago:

> But the old-fashioned, pre-CIDR allocation strategy of IPv6 is _incredibly_ wasteful of address space.

What would be wasteful would be keeping even more address space unallocated, as that would cause all kinds of costs, from administrative overhead, incentives to build unnecessarily suboptimal networks, delays due to address allocation processing, to more fragmented and thus larger routing tables and renumbering.

Saving address space for the sake of saving address space is not useful.

login01 said 2 months ago:

Funny, IPv6 is _incredibly_ wasteful of address space... Only 1/8 of the address space is being allocated using this strategy. Based on current projections, we won't need to break into the additional space till 2035 or later.

kortilla said 2 months ago:

But a 64 for a single host isn’t driven by any of that.

zAy0LfpBZLC8mAC said 2 months ago:

Who is suggesting that you use "64"(?) for a single host?

Dylan16807 said 2 months ago:

> for a single-planet, pre-IoT Internet, that's plenty

The main assignment blocks right now are some /12s. We can easily split off a /8 for every planet and moon, or whatever. Communication between stars won't work with IP, let's worry about that issue later.

Internet of things isn't an issue at all here. You can cram as many as you want into a single subnet. If you desperately need an enormous pile of subnets for IoT for some reason, it's straightforward to take a /64 and chop it into a billion subnets that hold a billion devices each.

stordoff said 2 months ago:

You could give every single IPv4 host its own complete IPv4 address space, and you would _still_ have enough addresses for 2 * 10^19 planets[1]. 2^128 is _absurdly_ large.

[1] 2^128 / (2^32 * 2^32) = 1.84 * 10^19

simias said 2 months ago:

I don't find IPv6 complicated, if anything it's a lot easier to grasp than IPv4 and it basically comes with batteries included.

The difficulty in my experience stems from one thing and one thing only: you can almost never go full IPv6. You always have to dual stack IPv4/IPv6, and of course IPv4 + IPv6 is always strictly more complicated than just IPv4.

I've had the opportunity to develop a solution using an IPv6-only environment and it was a pleasure to work with. No need to worry about the size of your subnets or clashing with 3rd party tools using the same address space etc...

>Wouldn't it be better if it had just been IPv4 with more address bits?

I often hear that but I genuinely don't get it. This change would effectively give you all the problems with IPv6 (i.e. you have to make sure all your hardware and software equipment can deal with the new, backward-incompatible format) without all the niceties that IPv6 provides. It's genuinely the worst of both worlds. NAT is probably the best compromise you can get, using the port number to sort-of extend the address information to identify several computers with the same IP, but it's obviously an ugly hack with many limitations.

zAy0LfpBZLC8mAC said 2 months ago:

> The difficulty in my experience stems from one thing and one thing only: you can almost never go full IPv6. You always have to dual stack IPv4/IPv6, and of course IPv4 + IPv6 is always strictly more complicated than just IPv4.

But it isn't? I mean, yes, every machine that requires both v4 and v6 is more complicated than just v4, sure. But potentially, you can get away with not having everything dual-stacked. You can run all your application and infrastructure servers on IPv6 only, and only have a load balancer speak IPv4, for example--that way, you get most of the benefit of IPv6 with only very little extra effort required for dual stacking.

rstuart4133 said 2 months ago:

> But it isn't? I mean, yes, every machine that requires both v4 and v6 is more complicated than just v4, sure.

You don't need to do that. I've set up IPv6-only networks at conferences. If you throw in an IPv6->IPv4 NAT (e.g. tayga), the only machine that requires a dual stack is the router. We didn't have a single complaint about connectivity issues from over 500 delegates. Modern clients (phones, tablets, Macs and so on) just slide seamlessly into an IPv6 environment without the user even being aware they are using IPv6.

zAy0LfpBZLC8mAC said 2 months ago:

I am pretty sure that if I put all of the servers that I am involved with behind NAT64, the complaints would be instant because tons of stuff would break, from services running on those servers not being reachable via IPv4 to software running on those machines not supporting IPv6.

kortilla said 2 months ago:

IPv6 pretty much requires DNS. There are tons of dead simple networks out there where administrators just trivially memorize the IP addresses.

pdkl95 said 2 months ago:

http://www.hungry.com/~jamie/hacker-test.html

From the Hacker Test:

    0361 Have you memorized the HOSTS.TXT table?
    0362 ... Are you up to date?
skrause said 2 months ago:

IPv6 doesn't prevent you from using memorizable addresses. You can configure your hosts with the ULA addresses fd00::1, fd00::2, fd00::3 etc.
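
(A quick sanity check with Python's ipaddress module; note that the shorter spelling "fd::1" would actually parse as 00fd::1, which falls outside the ULA range:)

    import ipaddress

    ula = ipaddress.IPv6Network("fc00::/7")         # the ULA range
    print(ipaddress.IPv6Address("fd00::1") in ula)  # True
    print(ipaddress.IPv6Address("fd::1") in ula)    # False: "fd" is the 16-bit group 00fd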

fulafel said 2 months ago:

It's of course better to use DNS, but if you like plain IP addresses you can still memorize them in v6 pretty easily: just memorize your prefix, which is in theory ~50 bits but is usually rather regular-shaped and no harder than a phone number, and then use easy-to-remember numbering in the rest of the /56 or /48.

PopeDotNinja said 2 months ago:

0.0.0.0

1.1.1.1

8.8.8.8

192.168.0.1

^^^ burned into my brain

2001:0db8:0000:0000:0000:ff00:0042:8329

^^^ say what now?!

Dagger2 said 2 months ago:

This is a totally unfair comparison. Why did you complicate the v6 address? That's 2001:db8::ff00:42:8329, which is a fair bit shorter than the version you wrote.

But let's go a step further. If you wanted to remember this address, why did you pick 2001:db8::ff00:42:8329 in the first place? Why not 2001:db8::53?

If you deliberately pick an address that's longer and more complicated than it needs to be, and you refuse to use the system (DNS) that's designed to handle that for you, then you don't get to complain about how long and complicated the address is.

PopeDotNinja said 2 months ago:

I just cut & pasted an IPv6 address I found on Wikipedia. But that does touch on a good point... the formatting for how to express an IPv6 address is more complicated. The fact that 2001:0db8:0000:0000:0000:ff00:0042:8329 can be shortened to 2001:db8::ff00:42:8329 is harder to figure out than the good ol' <8bits>.<8bits>.<8bits>.<8bits> IPv4 format (at least that is true for me).

Dagger2 said 2 months ago:

It's not that complicated. You can add leading zeros to each field, which is similar to how 192.168.0.1 can be written as 192.168.000.001 (but if you do that in v4, it turns the field into octal! At least in v6 it's just a superfluous 0). The only real complication is ::, which just means "insert zeros here".

(192.168.0.1 can also be written as 192.168.1, 192.11010049 or 3232235521. Or 192.0xa80001. Or 0xc0.0xa80001. Or 0300.168.1. Or a good number of other ways. That's far worse than anything you can do to a v6 address.)
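
(Most of these spellings can be checked against the old BSD parser through Python's socket module; a sketch, with behavior that is platform-dependent and assumes a glibc-style inet_aton:)

    import socket

    # All of these parse as 192.168.0.1 on glibc: a.b.c and single-number
    # forms are accepted, and fields may be octal (leading 0) or hex (0x).
    for s in ["192.168.0.1", "192.168.1", "3232235521",
              "0xc0.0xa8.0.1", "0300.0250.0.1"]:
        print(s, "->", socket.inet_aton(s).hex())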

astrobe_ said 2 months ago:

> but that's similar to how 192.168.0.1 can be written as 192.168.000.001

I would advise you not to do that. Some systems parse the numbers with inet_aton() or strtol() with base 0, which interpret a leading zero as an octal prefix, like in C.

PopeDotNinja said 2 months ago:

For example, in Ruby...

  irb(main):001:0> require 'ipaddr'
  => true
  irb(main):002:0> IPAddr.new("000.000.000.000").ipv4?
  IPAddr::InvalidAddressError (zero-filled number in IPv4 address is ambiguous: 000.000.000.000)
rswail said 2 months ago:

well that won't make a difference for 0.1 :)

Dagger2 said 2 months ago:

Yup, I was paying attention to that. Note how adding a leading 0 to 192 made it 0300.

And people think this is easier than v6, where adding a 0 to db8 just makes it 0db8...?

mcguire said 2 months ago:

"To do that pointless intermediate step [converting a router's IP address to its MAC address], you need to add ARP (address resolution protocol), a simple non-IP protocol whose job it is to convert IP addresses to ethernet addresses."

Well, technically, ARP is how every IP address is converted to a MAC address. If the IP address is not local (determined by the address, the host's own address, and the subnet mask), the host ARPs for the router interface; if it is local, it ARPs for the destination address.
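
(A minimal sketch of that decision in Python; the interface and gateway addresses below are made-up examples:)

    import ipaddress

    iface = ipaddress.IPv4Interface("192.0.2.10/24")
    gateway = ipaddress.IPv4Address("192.0.2.1")

    def arp_target(dst: str) -> ipaddress.IPv4Address:
        # ARP for the destination itself when it's on-link,
        # otherwise ARP for the next-hop router.
        dst_addr = ipaddress.IPv4Address(dst)
        return dst_addr if dst_addr in iface.network else gateway

    print(arp_target("192.0.2.55"))    # 192.0.2.55 (on-link)
    print(arp_target("198.51.100.7"))  # 192.0.2.1  (off-link: the router)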

"They're nowadays almost inseparable. It's hard to imagine a network interface (except ppp0) without a 48-bit MAC address, and it's hard to imagine that network interface working without an IP address."

Well, technically, it's not: CAN (11-bit "address"/priority/identifier thingy), ZigBee (16-bit? 64-bit? I dunno.), Bluetooth, etc.

In case you're wondering, there is a long tradition of these kinds of rants. Everything would be perfect if the IETF had done X instead of Y, if FOO had happened instead of BAR, if Steve Deering had tripped over his own wineglass model and received a traumatic brain injury (or not tripped over the wineglass and not received the brain injury; I'm not clear on that).

There's even people who argue we should have gone with the OSI protocols from the start, although they are literally evil cultists who prefer the worst possible option. (Wait, why does that sound appealing?)

pdkl95 said 2 months ago:

> there is a long tradition of these kinds of rants. Everything would be perfect if the IETF had done X instead of Y

https://tools.ietf.org/html/rfc1925

see (10), (11)

[and (11a) always applies]

mcguire said 2 months ago:

"(7a) (corollary). Good, Fast, Cheap: Pick any two (you can't have all three)."

Hell, you're exceptionally lucky to have one.

walshemj said 2 months ago:

The evil OSI overlord sits on his Throne and smiles "very good sub :-)"

I used to work in OSI and at one point had Level 6 (root) on the UK's main ADMD, oh, and Level 7 (beyond root) on the billing systems.

zamadatix said 2 months ago:

> "Bridging is still, mostly, hardware based and defined by IEEE, the people who control the ethernet standards. Routing is still, mostly, software based and defined by the IETF"

I disagree: the data plane of each is usually hardware and the control plane of each is usually software. I think the dismissal of things like broadcast learning, ARP, and even DHCP to an extent is what leads to the confusion.

> You know these "IP" headers are nonsense because the DHCP server has to open a raw socket and fill them in by hand; the kernel IP layer can't do it.

What the kernel IP layer exposes has little to do with whether something is a "real" IP packet. Under the guise of security, anything that isn't UDP or TCP bound to an OS-configured IP interface isn't part of kernel IP sockets anymore. This historical conflation of "IP = place to bind a layer 4 socket" is one reason why QUIC is UDP based.
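
(For anyone who hasn't seen it, here is roughly what "fill them in by hand" looks like: a hand-rolled IP header pushed through a raw socket. A sketch only; Linux-specific, requires root, and not taken from any actual DHCP server's code:)

    import socket
    import struct

    def send_udp_with_handmade_ip_header(src: str, payload: bytes) -> None:
        # A DHCP server answering to 255.255.255.255 builds the IP header
        # itself, since the kernel won't source packets from an address
        # that isn't configured yet. On Linux, fields left as zero here
        # (header checksum, id) are filled in by the kernel.
        ip_header = struct.pack(
            "!BBHHHBBH4s4s",
            (4 << 4) | 5,               # version 4, header length 5 words
            0,                          # TOS
            20 + 8 + len(payload),      # total length: IP + UDP + data
            0, 0,                       # id, flags/fragment offset
            64,                         # TTL
            socket.IPPROTO_UDP,         # protocol
            0,                          # header checksum (kernel fills it)
            socket.inet_aton(src),      # hand-filled source address
            socket.inet_aton("255.255.255.255"),
        )
        udp_header = struct.pack("!HHHH", 67, 68, 8 + len(payload), 0)
        s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(ip_header + udp_header + payload, ("255.255.255.255", 0))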

Dunedan said 2 months ago:

As this post talked a bit about the history of computer networks I have to jump in and highly recommend "Where Wizards Stay Up Late: The Origins Of The Internet" (https://www.amazon.com/Where-Wizards-Stay-Up-Late-ebook/dp/B...) to everybody curious about how that thing called internet came to be. It's such a great book about an era not so long ago, but often already forgotten.

RyanShook said 2 months ago:

Reading it now and highly enjoying it! I’ve recently shared some of the early specs from BBN and RFCs here on HN if anyone is interested in early internet as well.

dboreham said 2 months ago:

Much history rewriting here. The author is wrong about mobile use cases not being thought about when IPng was being worked on. I remember hearing talks about how to interoperate across wireless lans and WANs, seamlessly, at the same IETF meeting events. Wireless LANs existed back then and so did Wireless WAN services. They were just slower and more expensive than today's and of course not widely used. The first wireless WAN I used with TCP/IP on a laptop was in 1994 FWIW. 8kbits/s.

lolc said 2 months ago:

I remember sitting in the sun ten years ago, wondering about all the layers and how there's no need for a MAC address when you have an IP6 address. Then I wondered why it's not like that. So this was a depressing read, in the sense that there is no a priori reason it shouldn't be that way. Just hacks upon hacks, as usual.

jandrese said 2 months ago:

It's like that so you can send your IPv6 packets over 20 year old switches and have them still work. Switches don't care about the IP layer, all they look at is the MAC layer.

lolc said 2 months ago:

Wouldn't it be nice if modern switches could properly route packets without them being framed in legacy protocols? This could be negotiated when you connect to them. But because it's only a marginal gain and needs dual-stack switches, we just keep the hacks.

Maybe one day.

zamadatix said 2 months ago:

Then it'd be a router, not a switch, and you'd have to replace the router before you can use the new protocol (see: the current IPv6 deployment situation).

This is why layer 2/Ethernet is a valuable abstraction layer not just a "legacy protocol". It allows seamless transition to the new layer 3 abstraction without having to replace all of the hardware everywhere at once.

lolc said 2 months ago:

Here's the thing though: Managed switches are often configured like routers. And even dumb switches have "dynamic route discovery" if we want to call it that.

It would be nice if this could be negotiated away to get a pure IP6 link. If an intermediate link doesn't support that, legacy addressing could still be used. I'm not saying we don't need NDP, just that it would be nice if it could eventually be phased out.

zamadatix said 2 months ago:

If you're configuring something like a router then it's a router. It may be a crap router but if it routes it's still a router.

> It would be nice if this could be negotiated away to get a pure IP6 link.

Then you'd need WiredIPv6, WirelessIPv6, 4GIPv6, 5GIPv6... layer 2 provides the ability for an abstracted physical layer to transport an abstracted network layer. Remove it and any time the physical layer changes the network layer needs to as well as each of these have unique headers, formats, and data-link capabilities (not all support broadcast for example!).

Every abstraction layer is actively used today, remove it and there is a cost.

tzakrajs said 2 months ago:

IPv6 (L3) still uses MAC addresses because it relies on frames (L2) to encapsulate the packet. For example, the multicast address ff02::1 is really just 33:33:00:00:00:01. It allows for the coexistence of multiple network layer (L3) protocols.
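
(The mapping is mechanical, per RFC 2464: 33:33 followed by the low 32 bits of the group address. A sketch:)

    import ipaddress

    def v6_multicast_mac(group: str) -> str:
        # 33:33 + the low 32 bits of the IPv6 multicast group address
        low32 = int(ipaddress.IPv6Address(group)) & 0xFFFFFFFF
        return "33:33:" + ":".join(f"{b:02x}" for b in low32.to_bytes(4, "big"))

    print(v6_multicast_mac("ff02::1"))  # 33:33:00:00:00:01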

lolc said 2 months ago:

Such are the sad details of how IP6 over Ethernet is implemented today.

said 2 months ago:
[deleted]
kazinator said 2 months ago:

There was no such world.

IP could be easily widened just by taking IP addresses to 64 bits, allocating some new protocol numbers for that, and keeping everything exactly the same. In applications, the same old quad dot notation would work textually: 0.0.0.0 to 65535.65535.65535.65535, the latter being the broadcast address and so on.

The IPv4 space could be embedded such that 1.2.3.4 is both an IPv4 address made of octets, and an "IPv5" address made of four 16-bit fields ("hexadecades"). Gateways and protocol stacks could convert the packet formats on the fly: a wide packet whose four address values are all below 256 could be narrowed and then received by IPv4, and an IPv4 packet could be widened to "IPv5".

No stupid "neighbor" protocol; just good old ARP, widended to 64 bit IP addresses.

Machines now have native integers that can hold a 64 bit IP address; great for all the masking operations in routing logic and whatnot.
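
(A sketch of that hypothetical gateway conversion, with made-up widen/narrow helpers; nothing here is a real protocol:)

    import struct

    def widen(v4: bytes) -> bytes:
        # each 8-bit IPv4 field becomes a 16-bit "IPv5" field
        return struct.pack("!4H", *v4)

    def narrow(v5: bytes) -> bytes:
        # a wide address narrows back only if every field fits in an octet
        fields = struct.unpack("!4H", v5)
        if any(f > 255 for f in fields):
            raise ValueError("not representable as IPv4")
        return bytes(fields)

    wide = widen(bytes([1, 2, 3, 4]))
    print(wide.hex())          # 0001000200030004
    print(narrow(wide).hex())  # 01020304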

lmm said 2 months ago:

> The IPv4 space could be embedded such that 1.2.3.4 is both an IPv4 address made of octets, and an "IPv5" address made of four 16-bit fields ("hexadecades"). Gateways and protocol stacks could convert the packet formats on the fly: a wide packet whose four address values are all below 256 could be narrowed and then received by IPv4, and an IPv4 packet could be widened to "IPv5".

What problem do you imagine this would solve?

IPv4 and IPv6 need to be completely independent, because if an IPv4 router were ever to route an IPv6 packet it would send it to the wrong place, likely into a loop. Worse, such misrouting would be unpredictable as BGP tables shifted, links failed over, etc; you might have packets go around a loop several times and then eventually reach their final destination, you might have packets going fine in one direction but misrouted in the other, etc.. It would be a nightmare to debug.

Given that IPv6 packets can never be routed by IPv4 routers, what value is gained by making it easy to embed IPv4 in IPv6? Hosts where both ends have IPv4 addresses ("narrow" in your terminology) are just going to talk IPv4 anyway. And it's not like 4over6 is particularly complex or delaying deployment.

stingraycharles said 2 months ago:

I guess you can have clients that purely talk IPv6 talk with destinations (edge locations) that only talk IPv4, without requiring the whole end-to-end connection and intermediate routers to talk IPv4? You'll just have a router at the end which translates this IPv4-in-IPv6 packet into a pure IPv4 packet and everything will work.

lmm said 2 months ago:

For a v6-only host to talk to a v4-only host you need a router performing stateful NAT64 at some point (it's fundamentally impossible to statelessly route IPv4 packets to v6 hosts because there aren't enough v4 addresses). While in theory that router could live anywhere on the IPv6 network (and packets are routed to it via normal IPv6), normally you'd have it on the 6-only host's network (and coordinate with their DNS servers if they wanted to be able to connect to 4-only hosts by hostname via DNS64).

It's a little ugly, but it's pretty much the same situation those v6-only hosts would be in if they were v4 hosts without public addresses (behind NAT), and they get to use ordinary direct connections when talking to a v6-enabled host on the outside.

cesarb said 2 months ago:

"Just widen the addresses" ideas like that are easy to come up with (though they usually keep the IPv4 address as either the prefix or the suffix of the "new" address), but the elephant in the room is always the same: how would a legacy host which only understands classical IPv4 (with 32-bit addresses) talk to a host which only has "new" (wider than 32-bit) addresses?

kazinator said 2 months ago:

> how would a legacy host which only understands classical IPv4 (with 32-bit addresses) talk to a host which only has "new" (wider than 32-bit) addresses?

Hosts with 64 bit addresses would have to support 32 bit addresses also, just like is the case with IPv6 hosts that have to have IPv4 stacks.

Under 64 bit addresses, it's conceivable that in fact there would not necessarily have to be an entire IPv4 stack; just protocol conversion to widen and narrow the data format.

The stack would have to know that certain connections have IPv4 peers, and so it would parse IPv4 datagrams coming from those hosts, and likewise send them IPv4 datagrams.

It could be designed such that if the source and destination addresses (A.B.C.D) of a connection are such that A <= 255 && B <= 255 && C <= 255 && D <= 255, then the hosts may optionally use the smaller IPv4 packet format.

cesarb said 2 months ago:

Start from the other end: how would an IPv4-only host with an IP address of, for instance, 198.51.100.1, initiate a connection to a host with a "new" address of 459.256.369.257? The host at 459.256.369.257 can understand the "198.51.100.1" address, but the host at 198.51.100.1 has no way to represent 459.256.369.257 as the destination address in an IPv4 packet.

And even if the answer is "they can't initiate a connection", the same problem also happens in the opposite direction. When the host at 459.256.369.257 initiates a connection to the IPv4-only host at 198.51.100.1, what should it put in the "source address" field, which only has 32 bits since it's an IPv4 packet? If it puts a dummy address there, how would the reply to that packet reach it?

6nf said 2 months ago:

How does IPv6 solve that same problem?

zAy0LfpBZLC8mAC said 2 months ago:

It doesn't, because it can't be solved.

kazinator said 2 months ago:

> how would an IPv4-only host with an IP address of, for instance, 198.51.100.1, initiate a connection to a host with a "new" address of 459.256.369.257?

The destination address obviously makes no sense to the IPv4-only host, being outside of the space that it understands, so a direct connection is obviously impossible. (Are you even asking seriously, or is this just snark?)

The wide-address host would have to have an additional IPv4-compatible address bound to its network interface. Or else some other host would have to have such an IPv4-compatible address on its behalf and do NAT or proxying.

cesarb said 2 months ago:

> so a direct connection is obviously impossible. (Are you even asking seriously, or is this just snark?)

I am asking seriously, both because I've seen proposals where the proposer does seem to sincerely believe it to be possible to somehow fit more than 32 bits in a 32-bit field (that was a truly baffling one), and because it is possible with stateful hacks similar to NAT64/DNS64 (that is: when the host at 198.51.100.1 tries to look up the address of the host at 459.256.369.257, it receives a fake but valid address, and a router in the path knows to map the fake address to the real address while doing all the necessary NAT shenanigans).

> The wide-address host would have to have an additional IPv4-compatible address bound to its network interface.

Then you gain nothing other than extra complexity, since the total number of hosts is still limited by the 32-bit IPv4 address.

> Or else some other host would have to have such an IPv4-compatible address on its behalf and do NAT or proxying.

Then not only do you gain nothing by making your "new" addresses a superset of IPv4 addresses, but you also start looking like what IPv6 ended up being, with all its transition mechanisms.

kazinator said 2 months ago:

> I've seen proposals where the proposer does seem to sincerely believe it to be possible to somehow fit more than 32 bits in a 32-bit field.

Ah, well, that's obviously not a computer programmer with more than 2 or 3 years of experience. :)

> Then you gain nothing other than extra complexity, since the total number of hosts is still limited by the 32-bit IPv4 address.

Have we gained nothing?

Consider that the majority of the IP addresses in the network are individual subscriber IP addresses, not servers. So we can have a transitional situation in which major service providers use the old 32 bit address space (so they are reachable by every client, 32 or 64 bit), while we put new subscribers into the 64 bit address space.

The users who remain in the 32 bit space will increasingly find that they can't connect to some servers that are in the beyond-32 parts of 64 space, so they will have to upgrade their systems.

Users in 32 space cannot do peer-to-peer with 64 users outside of that space, of course.

lmm said 2 months ago:

> Consider that the majority of the IP addresses in the network are individual subscriber IP addresses, not servers. So we can have a transitional situation in which major service providers use the old 32 bit address space (so they are reachable by every client, 32 or 64 bit), while we put new subscribers into the 64 bit address space.

> The users who remain in the 32 bit space will increasingly find that they can't connect to some servers that are in the beyond-32 parts of 64 space, so they will have to upgrade their systems.

> Users in 32 space cannot do peer-to-peer with 64 users outside of that space, of course.

i.e. exactly the same situation that we currently have, only it would be less visible whether you've upgraded, and harder to debug when you hadn't?

The hard part of deploying IPv6 isn't that the protocol is different (indeed the protocol is significantly simpler e.g. removing fragmentation). The hard part is that it's necessarily a new global routing table, that you necessarily can't use it between two hosts unless every router in between them supports it and has the routes set up. A more IPv4-like packet format would not change that at all.

kazinator said 2 months ago:

You might not have identified the really hard part.

Which is this: reams of application code are hard-coded to the AF_INET address family, its binary sockaddr_in structure, and related data types and functions.

But that's a problem facing any proposed or actual IP facelift.
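
(The standard cure, for code that can still be fixed, is to stop naming the address family at all and let getaddrinfo() choose; a sketch, where "example.com" is just a placeholder host:)

    import socket

    def connect_any(host: str, port: int) -> socket.socket:
        # Address-family-agnostic connect: works for v4 and v6 alike.
        last_err = None
        for af, kind, proto, _, sa in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            try:
                s = socket.socket(af, kind, proto)
                s.connect(sa)
                return s
            except OSError as e:
                last_err = e
        raise last_err or OSError("no usable address")

    # connect_any("example.com", 80)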

PopeDotNinja said 2 months ago:

> I've seen proposals where the proposer does seem to sincerely believe it to be possible to somehow fit more than 32 bits in a 32-bit field.

If someone seriously brings that to me, I'm gonna try and come up with something witty...

"The thing people forget is that the periods in an IPv4 address take up space, too. If you know anything about data octets, you'd realize that were we're wasting adding 8 bits for EACH period just to make those addresses human readable. Periods are the junk DNA if IP addresses.

Fun fact: the reason Comcast is able to charge more for their Internet is because they got Congress to allow them to charge extra for the dots. You know how some people pay $40 for their Internet and you are paying $70? While they are paying for 32-bit addresses at $10 per 8 bits, you are paying the same $10 per 8 bits, plus another $30 for IP address 'junk DNA'! Pull out your cable bill and look through the fine print. There's a reason the fine print is so small. That's how they obfuscate how they overcharge you by 75% and get away with it."

cesarb said 2 months ago:

I wish I could remember where I saw it so I could share it with you all, but the argument was something like using more voltage levels to fit more values in each bit of the IP address. (It was probably on one of the IETF mailing lists; this sort of crankery seems to get posted there often for some reason.)

pdkl95 said 2 months ago:

> The wide-address host would have to have an additional IPv4-compatible address bound to its network interface.

Yes, we call that (having v4 and v6 addresses) a "dual stack" configuration. The host gets to choose which protocol to use as it desires. If you have a v4 address, you can even refer to that (v4) address in v6 packets[1] by setting the leading bits to 0. That is, v4 "A.B.C.D" as a v6 address is just "::A.B.C.D". The v4 address space is embedded into the v6 address space; this is generally automatic in dual-stack configurations. You can even refer to a v4-only address in a v6 packet as "::FFFF:A.B.C.D".

However, if you want your packets to convert back to v4 automagically (so you only have to speak a single v6 stack)... if you want to autoconvert between IP versions in transit...

> some other host would have to have such an IPv4-compatible address on its behalf

As jandrese already pointed out, that's called 4to6; set up the proxy (if needed) and use "64:ff9b::A.B.C.D" for your v4 addresses.

The thing you seem to be asking for is already a feature of IPv6. However, the thing that can never happen is doing any of this transparently in a v4 packet. Where, specifically, in the IPv4 header[2] are you going to put the extra SRC and DST address bits? You cannot change the location of any existing bit in the header: everything - IP-aware routing hardware, and the software/firmware of every v4-speaking device, most of which will never (or can never) receive an update - already assumes that e.g. "the source address is header octets 12-15 (bits 96-127)".

The only remotely plausible place anything could be added is an IP Option field, which would be a terrible idea. Other options might shift the location of the new address bits within the header, meaning routers would have to sequentially parse all of the option fields just to find the addresses - a mandatory delay even for legacy pure-v4 packets. And most firewalls would probably drop the new, unexpected option as "possibly malicious". To do anything else to the header, you would need to change the version number, which immediately introduces the compatibility issues everyone complains about with IPv6.

[1] http://www.tcpipguide.com/free/t_IPv6IPv4AddressEmbedding.ht...

[2] https://en.wikipedia.org/wiki/IPv4#Header

kazinator said 2 months ago:

> ... having v4 and v6 addresses ...

> and use "64:ff9b::A.B.C.D" for your v4 addresses.

You're talking about IPv6 now, which this thread isn't about.

IPv6 implementations that support IPv4 do have separate stacks, because IPv6 is too darned different.

jandrese said 2 months ago:

That's just dual stacking. You're dual stacked at the instant you have to decide to convert the packet header or not. At the end of the day you need a new packet format regardless, so you might as well clean it up in the process.

Fun fact, there is a canonical way to encode an IPv4 address in an IPv6 packet. It's used by 4to6 gateways. IPv6 address parsers understand when you pass them addresses that look like: 64:ff9b::192.168.0.1
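
(That's the well-known prefix from RFC 6052, 64:ff9b::/96, with the v4 address sitting in the low 32 bits; a quick check with Python's ipaddress module:)

    import ipaddress

    v4 = ipaddress.IPv4Address("192.168.0.1")
    prefix = ipaddress.IPv6Network("64:ff9b::/96")
    embedded = ipaddress.IPv6Address(int(prefix.network_address) | int(v4))
    print(embedded)  # 64:ff9b::c0a8:1
    print(embedded == ipaddress.IPv6Address("64:ff9b::192.168.0.1"))  # True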

kazinator said 2 months ago:

If we convert an IPv4 datagram into the IPv4-64 format on its way up the stack, and then just feed it to a single implementation of TCP or UDP or another protocol, I do not feel it is accurate to say that we have two protocol stacks.

jandrese said 2 months ago:

That's what current dual stack does. TCP and UDP don't care about the IP layer for the most part, checksums notwithstanding.
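
(The checksum caveat is real: TCP and UDP checksums cover a pseudo-header that includes the IP source and destination addresses, which is exactly where the layers entangle. A sketch for UDP over v4:)

    import socket
    import struct

    def inet_checksum(data: bytes) -> int:
        # RFC 1071 ones'-complement sum over 16-bit words
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        while total >> 16:
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    def udp_checksum_v4(src: str, dst: str, udp_segment: bytes) -> int:
        # the pseudo-header drags the IP addresses into the UDP checksum
        pseudo = (socket.inet_aton(src) + socket.inet_aton(dst)
                  + struct.pack("!BBH", 0, socket.IPPROTO_UDP, len(udp_segment)))
        return inet_checksum(pseudo + udp_segment)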

said 2 months ago:
[deleted]
said 2 months ago:
[deleted]
wmf said 2 months ago:

Here's a detailed explanation of why that isn't useful: https://hackerfall.com/story/ipv6-non-alternatives-djbs-arti... and the HN discussion: https://news.ycombinator.com/item?id=10854570

XorNot said 2 months ago:

No, it cannot be. Routing hardware is hardware: 32 bits is physically etched into the ASICs which do it.

If all it took was a software upgrade then IPv6 would be done by now.

The IPv4 space is already embedded in the IPv6 space as part of the spec.

You can't negotiate a middle ground with hardware.

kazinator said 2 months ago:

> If all it took was a software upgrade then IPv6 would be done by now.

Even if the routers were upgradable, there is the problem that IPv4 applications contain hard-coded uses of types like struct sockaddr_in.

In some cases, entire programs are replicated just for IPv6: e.g. ping6, telnet6, ...

These things don't write themselves.

dboreham said 2 months ago:

Like TUBA.

phs2501 said 2 months ago:

TUBA was TCP, UDP, and the Internet protocols on top of the OSI network stack (CLNP). It's about as far from IPv4 as you could imagine.

https://tools.ietf.org/html/rfc1347

dboreham said 2 months ago:

For anyone curious why we didn't just widen addresses, this misunderstanding is why ;)

kazinator said 2 months ago:

The real reason is probably closer to Fred Brooks' "second-system effect". A new-generation IP was regarded by some over-zealous engineers as an opportunity to put in all sorts of new requirements.

https://en.wikipedia.org/wiki/Second-system_effect

They also didn't know the story of the King's Toaster. (Since it refers to 80386's and Unix version 8, it must have already been written at the time.)

http://www.ee.ryerson.ca:8080/~elf/hack/ktoast.html

kortex said 2 months ago:

A real engineer would use no microcontrollers, and leverage the physical properties of the materials at hand to automatically lower and raise the toast. https://youtu.be/1OfxlSG6q5Y

runjake said 2 months ago:

I was an early adopter implementing and administering IPv6 in a large enterprise, and I still am administering it. It was a shit show for years but in recent years client support has stabilized.

However, I've recently come to my own conclusion that IPv6 is largely doomed and that it will be usurped by IPv4 NATs/CGNATs and >L3 improvements like QUIC, DTLS, etc. Why?

IPv6 is complex and unfamiliar and has a big learning curve. People aren't going to "put up with it". They're going to adapt what they have (skills/tools/etc) and they're doing a good job of it.

I always get asked by people about IPv6 and how they should get started and my present advice is to "hold off, for now". I'm not even sad about that. The world is evolving and when framed properly, it doesn't seem that bad.

betterunix2 said 2 months ago:

I find IPv6 to be simpler than IPv4 -- once I stopped trying to reason about things in IPv4 terms. The only real complexity I have encountered is in the various efforts to maintain compatibility with IPv4.

One way or another IPv4 will go away. The limits of scale were hit years ago and the workarounds break applications (NAT is basically a violation of the end-to-end principle and the results are predictable), and merging IPv4 private networks is a nightmare.

kortilla said 2 months ago:

I don't see it. No users care that NAT breaks the end-to-end principle. That's only something users who want to accept connections care about, and they're <1%.

TCP and UDP have been running over NAT for so long that they work completely fine. People who make the mistake of assuming asymmetric ports or putting IPs in the payload fail so early that they quickly learn.

There is so little actual pressure to implement ipv6 that it’s starting to look pretty doomed.

johncolanduoni said 2 months ago:

> I don't see it. No users care that NAT breaks the end-to-end principle. That's only something users who want to accept connections care about, and they're <1%.

I'm not sure <1% of users want to video chat or to play games without dedicated servers.

> There is so little actual pressure to implement ipv6 that it’s starting to look pretty doomed.

There is so much pressure to implement IPv6 on mobile that Apple checks whether your app can deal with an IPv6-only network before they approve it to show up on the store. If your website supports IPv6 and you have a sizable mobile userbase, you'll see plenty of it.

wmf said 2 months ago:

> IPv6 is largely doomed and that it will be usurped by IPv4 NATs/CGNATs...

This argument could have worked ten years ago but now it has been falsified. Cellular carriers have already deployed IPv6 and there's no motivation for them to switch back to IPv4, especially given the high cost of CGN.

xvilka said 2 months ago:

IPv6 has already succeeded. Many corporations are already working hard on transitioning even their internal networks, and with new hardware upgrade cycles IPv6 comes almost for free. For example, China recently made a huge leap, with many networks switching at once, along with mobile and web applications. And more is planned in the near future [1].

[1] https://blog.apnic.net/2019/06/06/100-by-2025-china-getting-...

ktpsns said 2 months ago:

tl;dr: QUIC is solving the shortcomings of IPv6 (among others, the lingering badness of Ethernet and IPv4) by replacing TCP.

This is one of the most beautiful and deep articles about our networking/IP world I've ever read. It's so helpful to hear background stories from the early years of networking (like IPX; I still remember it but was too young to really make use of it)!

fulafel said 2 months ago:

This is a fair summary of the latter part of the article but I think the argument is flawed:

1. Mobile IP isn't actually IPv6's raison d'etre

2. Even for the mobile IP case, QUIC only addresses the HTTP client part of it - but HTTP didn't need mobile ip to begin with

azernik said 2 months ago:

This... doesn't really describe the motivations for IPv6 well.

Working in networking, it was much-beloved because it eased the address allocation problem by giving you an enormous, equally-sized address space for every subnet, and included a potentially-simpler, stripped-down version of DHCP for non-router nodes. (Edge routers usually use DHCPv6, but only to learn what prefixes they should hand out to other nodes.)

For core network administrators, the additional address space lets them organize their routing tables hierarchically, which makes them smaller and easier to optimize for latency.

Plus, while they were breaking compatibility anyway, fixing up a bunch of annoying things with IPv4 that had been bugging implementers forever, like dropping (ugh) fragmentation, rearranging the header information to make it easier to do routing in hardware, and adding support for metadata in the form of extensions (useful for ISPs tagging traffic with routing instructions). Mobile IP was part of this, and was indeed IMO misguided, but is not the only or (in my experience as an implementer) the main motivation.

That whole digression into the semi-mess that is edge networks over the 802 suite (ethernet + wifi) is irrelevant to this core-network motivation. And in any case, ARP is (again, speaking as an implementer) a very tidy and useful abstraction layer. It's even used on non-ethernet networks!

tl;dr the big "killer app" of IPv6 is that it makes the lives of router makers and network-stack coders easier. Writing new IPv6 code is harder than just using your IPv4 code, but for green-field projects IPv6 is so much more pleasant to work with.

jefftk said 2 months ago:

The post isn't "Why did people want IPv6" but "Why is IPv6 such a complicated mess compared to IPv4? Wouldn't it be better if it had just been IPv4 with more address bits?"

azernik said 2 months ago:

Except it doesn't actually get the reasons right. The complication is because IPv6 is optimized for implementers, not for users or students.

Zrdr said 2 months ago:

IPv4 with more address bits would have been worse, because it would have the same incompatibility problem as IPv6, without its benefits.

Actually, IPv6 is good design. Adoption is slow because most people and companies always prefer the cheap, inferior short-term solution over the good long-term one.

Now, IPv6 is 25% of traffic. It will continue to grow, and at some point the network effect will be on IPv6's side. Not long after that, IPv4 will only be a niche for a few legacy systems.

Dagger2 said 2 months ago:

Nitpick: it's 25% (or 29% peak) of clients. As for traffic... dual-stacked eye-ball clients see about 50-70% of their traffic go over v6 on average.

(What about percentage of overall traffic on the internet? That's much harder to measure, but you'd expect it to be something like the product of 25% and 50-70%. But that particular stat isn't actually very interesting; percentage of clients and percentage of servers/traffic are a lot more useful.)

decnet said 2 months ago:

@jefftk: ‘Why is IPv6 such a complicated mess’

Because it was designed by a committee?

api said 2 months ago:

IPv6 is a complicated mess because it was designed by enterprise networking people. Everything in networking is always more complex than it needs to be. Always. Network engineers love to overthink everything and they've never heard of YAGNI.

kelnos said 2 months ago:

YAGNI only works when the cost of being wrong is low. In most software, if you do end up needing something later, you can add it without too much trouble. In hardware, it means you need to spend a ton of money on redesign, and then wait years (or decades) for people to upgrade their equipment.

In hardware, if you can reasonably foresee the need for some feature in the future (where "future" could even be > 10 years), and adding it now doesn't blow out your cost budget, you should probably add it.

azernik said 2 months ago:

Not just hardware - for the internetworking layer, you're creating a standard that everything has to follow. Extensibility is super important, but so is pre-specifying everything that a minimal implementation will have to support in 10 years.

api said 2 months ago:

The longevity of network stuff is a stronger argument for simplicity.

The cost of being wrong in the direction of excess complexity is higher than the cost of being wrong the other way. Layers are added, never removed. A deficiency can be fixed by adding something. A misfeature or ugly hack is a boat anchor we lug around for eternity.

Jweb_Guru said 2 months ago:

All I hear are buzzwords. Tell me how you can "just add something" when the protocol doesn't allow you to add it.

zamadatix said 2 months ago:

Network hardware isn't at the point where it's something you can iterate on as needs emerge. It's largely fixed hardware with a long deprecation period which is expected to connect a bunch of fast-changing clients. People and applications are also averse to changing network abstractions vs buying something that is compatible with what they have today but enables new things.

My guess is that what you consider unused complexity is just something you don't see or deal with on a day-to-day basis, so you wonder why it's there. That doesn't make it useless or inefficient; it just means you don't use it (or don't interact with it directly, even if it is used somewhere in the network path).

icedchai said 2 months ago:

Be glad network engineers think things through. These protocols are around for decades, minimum.

mindcrime said 2 months ago:

YAGNI is a mildly useful heuristic that applies in many situations, but far from all. It's not some iron-clad law of the universe.

syn0byte said 2 months ago:

"Any idiot can make a bridge that stands. It takes an engineer to build one that barely stands."

systemBuilder said 2 months ago:

Exactly correct! Overblown overdesign by people who don't even know the founding principle of the internet - the end to end argument in system design.

sneakernets said 2 months ago:

"Bridging is still, mostly, hardware based and defined by IEEE, the people who control the Ethernet standards. Routing is still, mostly, software based and defined by the IETF, the people who control the Internet standards. Both groups still try to pretend the other group doesn't exist. "

I laughed so hard it hurts. Stories like this make me realize that the eggheads and boffins can be just as dumb and stubborn as I am.

walshemj said 2 months ago:

you know the old old joke that the 8th layer of the OSI stack is "politics"

mcguire said 2 months ago:

See also https://archive.org/details/elementsofnetwor00padl The Elements of Networking Style.

said 2 months ago:
[deleted]
BlueTemplar said 2 months ago:

So, why haven't the relevant people started working on IPv7/NewProtocol, like... 10 years ago, when it started to become clear that mobile Internet was going to become big and that IPv4/6 and TCP/UDP couldn't work well with it?

icedchai said 2 months ago:

These things take a very, very long time. Consider that IPv6 has been a thing since the late 90's. It is now older than the IPv4 internet was when it started taking off around 1995. IPv6 is it. I very much doubt we will see an IPv7 in our life times, if ever.

bdamm said 2 months ago:

From my experience, it's because some of the people that would need to be doing that have their heads in the sand about e.g. the necessity of middleboxes and application-layer proxies. Network-layer folks still seem to think that the Internet must not have application-layer proxies, and some actively resist good efforts to expand the application-layer protocols to better deal with the reality of application-layer middleboxes, such as those widely used in IoT applications. And some try to eliminate good mechanisms that otherwise would work, such as CONNECT over TLS.

kelnos said 2 months ago:

Probably because they noted the terrible time they were having getting people to migrate from IPv4 to IPv6, and had no desire to repeat it.

dcbadacd said 2 months ago:

It'll only take another 50 years :')

fanf2 said 2 months ago:

They did! https://www.schneier.com/blog/archives/2010/06/darpa_researc...

The problem is that the economic situation makes it very difficult to replace the Internet. Wholesale protocol replacements only work if they are backwards compatible and incrementally deployable and they provide new capabilities even when there is partial deployment.

basch said 2 months ago:

With most of the population buying a new phone every two years, 5G, and the carriers being able to act as a compatibility layer, it seems like it wouldn't be hard at all to do something like QUIC/IP7 and have massive adoption within a couple of years. Apple, Microsoft, Samsung, Google, AT&T, and Verizon could force it on their own.

walshemj said 2 months ago:

I think it was obvious by the mid-to-late 90's, once the internet for civilians really started to take off, that IPv6 was flawed: not having a seamless migration path was not on.

Dagger2 said 2 months ago:

A seamless migration path (in the sense that you seem to be wanting it) was impossible. v4's design doesn't allow for it.

walshemj said 2 months ago:

And you're basing this on?

Dagger2 said 2 months ago:

My knowledge of v4's design. The header fields for source and destination address are 32 bits wide, and the pigeonhole principle stops you from fitting more than 32 bits of addresses into those fields.

Ericson2314 said 2 months ago:

During networking class it sort of dawned on me that the internet people and LAN people were different and ignoring each other, but they never actually said this. Doing so would have made things a lot clearer.

systemBuilder said 2 months ago:

Thing about IPv6 is that its designers didn't understand the internet. They didn't understand that in a client-server architecture only the servers need full-scale IP addresses, because of the 64K ports. They had apparently never heard of the end-to-end argument. They saw 16-bit computers being wiped out by 32-bit ones and figured bigger = always better. Xerox predicted 2^48 was all that we would ever need, and as a Xerox protocol designer myself, I think it's still true. The IPv6 designers thought bigger = better = more router sales (hey, I'm looking at you, Steve Deering!)

I was asked by Qualcomm to determine when IPv6 would completely replace IPv4. This was in 2003 and they expected 2006 or 2007 at the latest! But, Qualcomm never hired internet visionaries ... When I told them "never" I almost got fired !!

When I left Google search 6 months ago, 15 years later, IPv6 was less than 0.1% of the crawled websites.

It's horribly wasteful for sensor networks and IOT.

It's only useful if you have more than 2^24 addresses because IANA hasn't added another class A private network range like 10.x.y.z (hint: there is exactly ONE company that needs a /7 private subnet). 3GPP uses it to give a unique ID to every phone but that's because they think like Qualcomm.

It will never replace IPv4.

IPv4 is like the roman chariot axle width. It was the right answer, and all roads today follow Rome's standard.

kalleboo said 2 months ago:

> It's horribly wasteful for sensor networks and IOT.

What's horribly wasteful for IoT isn't IPv6, it's streaming my baby monitor video to some server in China because my home internet is behind CGNAT. Of course, streaming it to some server is more profitable since now you can charge money for access/storage, so there's no incentive to optimize for efficiency.

systemBuilder said 2 months ago:

And an impractical security mess to tunnel into your home network. There are a hundred arguments against IPv6 and 3 or 4 - at most - in favor of it.

stingraycharles said 2 months ago:

Ok so here's a question for the HN community: I have IPv6 from my ISP, and as an experiment, I want to primarily use IPv6 to connect to my different home servers.

I use automatic address selection, so no DHCP. This means that when a server reboots, chances are its address changes.

Other than setting up a custom DNS server and $somehow ensuring that each server registers itself at this DNS server at boot, are there any elegant solutions to this?

starfox64_ said 2 months ago:

Is there anything preventing you from assigning static IPv6 addresses on your machines?

magicalhippo said 2 months ago:

In my case, I get a new prefix every time my router reboots or otherwise loses connection.

Then there's the case of firewall rules. My router gets a new prefix which causes my server to get a new IP, how do I get my firewall (pfSense) to update my firewall pass rules to point to the new IP?

I must admit I haven't tried really hard, so there might be some obvious ways I'm missing.

fulafel said 2 months ago:

If your prefix keeps changing, you have to complain to your ISP and meanwhile automate your network renumbering. Is there a reason you are not using the pfSense device to do the routing (and putting the CPE device in bridge mode)? Then you don't need to worry about your firewall not knowing about reboots, and the firewall rules will just work (assuming pfSense supports prefix-relative rules, vs hardcoding the prefix into all firewall rules...).

magicalhippo said 2 months ago:

The cable modem is a dumb bridge, and the pfSense is doing the routing (AFAIK). I can't see any way to do relative rules though, which is what makes it difficult. Maybe I'm missing something though.

fulafel said 2 months ago:

The manual seems to talk about host and network aliases, which might be what you want. Hopefully there are magic aliases for the interface networks.

edit: https://docs.netgate.com/pfsense/en/latest/firewall/firewall... also talks about "LAN net" and "WAN net" aliases.

stingraycharles said 2 months ago:

Not necessarily. How would I go about making these IPs easily discoverable by other hosts? DNS again?

teddyh said 2 months ago:

Or mDNS: http://www.multicastdns.org/

As used by Apple’s Bonjour implementation of the Zeroconf networking stack.

If using Debian or the like, just apt install avahi-daemon and libnss-mdns, and access your hosts on the local network as servername.local.

starfox64_ said 2 months ago:

Yes, you would probably then have to allow inbound traffic to these addresses in your ISP router's firewall or whatever you have.

If your router's default configuration is sane it should (by default) only allow returning connections and drop any other.

cesarb said 2 months ago:

> I use automatic address selection, so no DHCP.

This means you're using SLAAC instead of DHCPv6, right? In that case, your servers should have a fixed IPv6 address derived from their MAC address. Even if they also have privacy addresses (and prefer to use them for outbound connections), they should still have the non-privacy address which you could use for inbound connections.

Or is it the problem that, when your router reboots, the IPv6 prefix changes? That is a harder problem (and would be the same in IPv4).
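
(For the first case, a sketch of how that stable SLAAC address is derived, per RFC 4291's modified EUI-64: insert ff:fe into the middle of the MAC, flip the universal/local bit, and append the result to the on-link prefix. The prefix and MAC below are made-up examples:)

    import ipaddress

    def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
        b = bytearray(int(x, 16) for x in mac.split(":"))
        b[0] ^= 0x02                                    # flip the U/L bit
        iid = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])
        return ipaddress.IPv6Network(prefix)[int.from_bytes(iid, "big")]

    print(slaac_address("2001:db8::/64", "52:54:00:12:34:56"))
    # 2001:db8::5054:ff:fe12:3456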

zamadatix said 2 months ago:

If you want outside-of-your-home things to reach dynamically assigned public IP space then DDNS is your only option (there are public servers you can use if you're willing to make that trade as well).

If you just want your local stuff to have public IPv6 outbound and static IPv6 inside your home then you can just multinet. That is add a static private IPv6 to the same interface you get your public IPv6 via SLAAC.

mrfusion said 2 months ago:

I'm embarrassingly out of the loop. What happened with ipv6? Weren't we supposed to run out of IP addresses years ago? Will ipv6 still happen?

toast0 said 2 months ago:

We've sort of run out of IPv4 addresses; you can't get new ones direct from ARIN in a timely fashion anymore. You can still get IPs from most of the other regional authorities though. You can also get IPv4 addresses from other people who aren't using them; there's still a lot of addresses that are assigned, but not BGP advertised, so there's some slack.

However, IPv6 is happening. Primarily in two places:

a) mobile ISPs; T-Mobile is big on IPv6, so is Jio in India. This tends to be related to new investments or wealthy countries. Existing networks work enough, so where there's not a lot of money, adding or upgrading CGNAT is small incremental cost, and enabling IPv6 is a project. OTOH, a big new network like Jio has to pick from building out IPv6 with a small amount of IPv4 or buying a bunch of IPv4 to build out a lot of IPv4 -- it makes sense to push hard on IPv6.

b) big content sites. IPv6 networks currently don't have the same kind of junky middleboxes that IPv4 has --- serving as many customers as possible without junk in the way is a big deal.

Where you don't see a lot of movement is residential ISPs in wealthy countries. They've generally got enough IPv4 for their customers, so they don't care that much about IPv6. However, Comcast has done a good job of rolling out, and AT&T and CenturyLink do have something.

mindcrime said 2 months ago:

> Will ipv6 still happen?

Yes.

https://www.google.com/intl/en/ipv6/statistics.html

https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...

I think we're clearly past the point of no return. How long it will take to achieve near 100% adoption I have no idea... but with ipv6 already at over 33% adoption in large countries like India and the US... yeah, it's gonna happen.

All we really need is for Russia and China to get moving on adoption...

xvilka said 2 months ago:

China already did. For obvious reasons it's just not shown on the Google statistics page.

mindcrime said 2 months ago:

Good point. I totally blanked on that when looking at this.

lisper said 2 months ago:

My first startup, back in the early 90s, was called FlowNet [1]. It was an incredibly cool design invented by my co-founder, Mike Ciholas. It was fast, cost-effective (10x price-performance improvement over the contemporary competition), self-configuring, scalable, and included advanced features like quality-of-service guarantees.

Needless to say, it did not succeed. But the world would have been a better place if it had.

[1] https://www.linuxjournal.com/article/3293

h2odragon said 2 months ago:

> "Layers are only ever added."

Vernor Vinge had "Software Archeology" as a thing; it's already too real.

We've already given up the pretense of privacy, so let's use a geophysical coordinate as the basis of our addressing and routing system. All the problems of mesh networking and mobility are, if not solved, at least exposed as what they are, and not hidden behind "and then we tunnel through 4 layers of protocol to emulate the lack of 7".

cesarb said 2 months ago:

> let's use a geophysical coordinate for the basis of our addressing and routing system

Networking does not follow geography. My phone is right next to my laptop; but if I disable my phone's wifi, the network path between them takes a detour through the next city.

h2odragon said 2 months ago:

then the indirection that ties your mobile device to its nearest fixed forwarding point is an exposed layer. This happens now, you just have several layers of emulating old circumstances in between, as this article details.

cesarb said 2 months ago:

No, that's not the issue; even if the "location" of my phone were defined as "the location of its base station" or even "the location of the telco's datacenter next city", the physical wires connecting these "fixed forwarding points" do not have to follow a direct line. The best path to the USA (which is to the north of here) is to first go south, then go through the submarine cable.

h2odragon said 2 months ago:

I'm failing to communicate. The location of your phone is whatever it is. Right now calling you involves mapping your phone's ID, at various levels, to whatever network device can actually route data to it. The first level is phone number to ... I dunno, but every time I look into it I'm repelled by the baroqueness.

kortilla said 2 months ago:

Hopefully nobody ever needs a cell phone that moves.

beagle3 said 2 months ago:

2010 saw the introduction of CurveCP[0], and 2011 its first implementation. It solves most of what IPv6 does (and then some). The one thing it does give up is complete symmetry - there IS a difference between client and server.

But we've effectively already given that up with NATs (and CGNATs), with essentially nothing lost[1], so I'd much rather have given it up willfully and gotten all the great things CurveCP has to offer, rather than the mess that is IPv6.

An evolved version called MinimalT (published in 2013[2] and implemented then, and again e.g. in 2014[3]) goes much farther, with DNS integration, some DoS protection, and a few other nice properties, while being faster than TCP/IP and still being IPv4.

Instead we have QUIC and IPv6.

[0] http://www.curvecp.org/addressing.html

[1] And that's unfortunate - but peer to peer is essentially irrelevant now, when every communication service people actually like uses a server these days.

[2] http://cr.yp.to/tcpip/minimalt-20130522.pdf

[3] https://github.com/nimbus-network/minimalt

simias said 2 months ago:

>but peer to peer is essentially irrelevant now, when every communication service people actually like uses a server these days.

That's true but it's sort of a self-fulfilling prophecy the way you put it. Maybe what we lost is actually the potential to develop alternative, peer-to-peer, decentralized systems.

Take something like Dropbox, it's been very successful solving a very practical problem people are having, sharing files over the internet. Widespread NATing and publicly unreachable computers have a big role in that. Of course that's not to say that with IPv6 Dropbox would be irrelevant, there's also the problem of having the files always available, not having to worry about hardware failures and security issues. Still, I'm sure that in many cases if people could easily share files directly from their device without middle-man they would. It's just a pain to get to work reliably without going through "the cloud".

Of course nowadays that's almost an absurd concept. Everything gets uploaded on somebody's server and the decentralized web is long behind us.

beagle3 said 2 months ago:

> Maybe what we lost is actually the potential to develop alternative, peer-to-peer, decentralized systems.

For this to work properly one needs (a) mostly reliable addressing and routing, and (b) mostly online systems.

IPv4 and IPv6 were indeed developed for both (a) and (b). But both started to disappear when laptops overtook desktops, which would be over a decade now, and became completely irrelevant when mobiles overtook computers (laptops + desktops).

It's the world that has changed; it cannot come back in IPv4 due to address exhaustion, and won't come back in IPv6 because (a) requires stable addresses -- meaning VPNs for everyone -- and (b) requires the physically impossible "access to data when the phone is out of battery and/or in a Faraday cage".

The server-in-the-middle is a must for reliability; people who seriously use Syncthing or btsync instead of Dropbox have to set up at least one "constantly on" server because of both (a) and (b) above.

The last remaining use case for peer-to-peer is, I think, live (chat/voice/video) one-on-one conversations - and while it's an interesting and important use case, it can be (and has been) solved with stable servers-in-the-middle, and I don't believe it is significant enough (or was, or will be) to stop progress; MinimalT makes much more sense than IPv6 or QUIC as a way forward, but we're unlikely to ever have that at scale.

cesarb said 2 months ago:

> requires stable addresses -- meaning VPNs for everyone

The true requirement is for a stable identity, not a stable address. You just mentioned Syncthing, which is an example of this: every Syncthing node has a stable identity (its public key), but not necessarily a stable address. All you need is a way to map the identity to the address, and that mapping does not have to be centralized (even though the current implementation in Syncthing, outside of broadcast-based local address discovery, is); the BitTorrent DHT manages to map a torrent's hash to a set of addresses without needing any central node.
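A toy sketch of that split (the dict stands in for what would really be a DHT or rendezvous service; all names here are made up):

    import hashlib

    directory = {}                        # node_id -> last known "ip:port"

    def node_id(pubkey):
        # The stable identity: a hash of the node's public key.
        return hashlib.sha256(pubkey).hexdigest()

    def announce(pubkey, addr):
        # A node calls this whenever its address changes.
        directory[node_id(pubkey)] = addr

    def lookup(pubkey):
        # Peers dial the identity; the directory supplies the current address.
        return directory.get(node_id(pubkey))

    announce(b"alice-public-key", "203.0.113.7:4000")
    announce(b"alice-public-key", "198.51.100.2:4000")  # Alice moved networks
    print(lookup(b"alice-public-key"))    # -> 198.51.100.2:4000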

> requires the physically impossible "access to data when the phone is out of battery and/or in a Faraday cage" [...] People who seriously use Syncthing or btsync instead of Dropbox have to set up at least one "constantly on" server

You stated the solution yourself: to access the data when the phone is offline, mirror it to a node that is online. Just because these particular protocols require you to set up your own always-on node doesn't mean it's a hard requirement; some older peer-to-peer protocols from over a decade ago already securely mirrored data in nodes belonging to other users.
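The trick those protocols relied on is straightforward: encrypt before mirroring, so the always-on peer only ever stores ciphertext. A minimal sketch - illustrative only, using the third-party "cryptography" package rather than any particular protocol's scheme:

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()           # never leaves the owner's devices
    box = Fernet(key)

    def to_mirror(plaintext):
        # What the always-on peer stores: ciphertext it cannot read.
        return box.encrypt(plaintext)

    def from_mirror(ciphertext):
        # Any device holding the key recovers the data later.
        return box.decrypt(ciphertext)

    blob = to_mirror(b"the phone's photo backup")
    assert from_mirror(blob) == b"the phone's photo backup"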

beagle3 said 2 months ago:

> All you need is a way to map the identity to the address, and that mapping does not have to be centralized (even though the current implementation in Syncthing, outside of broadcast-based local address discovery, is); the BitTorrent DHT manages to map a torrent's hash to a set of addresses without needing any central node.

That's true, but I think the BitTorrent DHT is the only decentralized one that seems to have succeeded (I remember quite a few unsuccessful attempts two decades ago), and its success is probably tied to its use case - it is in everyone's interest to have hashes well-mapped in case you need them.

> Some older peer-to-peer protocols from over a decade ago already securely mirrored data in nodes belonging to other users.

And for various reasons they are all gone, whereas e.g. rfc822 email - which is properly decentralized/federated but does require a stable online node - is still going strong nearly 40 years later, despite somewhat successful attempts by the likes of Google to re-centralize it.

I think it's inherent - many people now only have a phone, but no one wants a service that becomes unavailable when you lose your phone or step into a Faraday cage. There even used to be on-phone voicemail back in the dumb-phone days - about as peer-to-peer as phones get - and it was unpopular for the same reasons.

And if it is indeed inherent, it would be better to take it into account when designing the next stage.

kortilla said 2 months ago:

Where did you get the idea that we need stable addresses? This was solved with DNS, registration protocols, DHTs, etc. decades ago. Nobody has had stable addresses in ages.

beagle3 said 2 months ago:

Stable in the "for the next few minutes" sense, which you do need, e.g. for phone or video calls. With mobile phones your address can change every few seconds as you drive down a highway and get handed off between cells and even carriers.

DNS never solved this, and DHTs didn't either; I'm not familiar with a DHT that has actually succeeded other than bittorrent's, despite many attempts - even for the "stable for a few hours" case.

All the cases that sort of work (RTMFP flows, SIP registrations, old Skype, ...) essentially use peer-to-peer communication as a server-offload optimization (that is, peers may talk directly to each other after the server has arranged everything, if the stars align correctly), not as their main method.
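That pattern looks roughly like this (hypothetical names; the server introduces the peers, and direct traffic is merely the optimization):

    rendezvous = {}                       # peer name -> observed "ip:port"

    def register(name, observed_addr):
        # Each peer reports in; the server records its post-NAT address.
        rendezvous[name] = observed_addr

    def introduce(a, b):
        # Hand each peer the other's address so they can attempt direct UDP;
        # if either is offline (or hole-punching fails), relay via the server.
        if a in rendezvous and b in rendezvous:
            return rendezvous[a], rendezvous[b]
        return None

    register("alice", "203.0.113.7:50000")
    register("bob", "198.51.100.2:50001")
    print(introduce("alice", "bob"))      # direct path, if the stars align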

__jal said 2 months ago:

P2P will stay irrelevant as long as we don't fix networking to make it easy again. And we really do want it to be easy, for values of "we" that != carriers.

NAT and related IPv4 hacks hand a great deal of power to people who want to meter you by the byte, inspect your content for compliance with their business model, and generally define how you are "allowed" to use the internet.

It doesn't have to be that way.

beagle3 said 2 months ago:

> for values of "we" that != carriers.

It's everyone except technical end users. Google - not a carrier unless you use Fi or Fiber - wants to intermediate everything. So do Facebook, Microsoft, Slack, Discord and everyone else - you can't be monetized if they don't control the flow. The government wants that too, by the way - it makes things like PRISM realistic and so much easier.

> NAT and related IPv4 hacks hand a great deal of power to people who want to meter you by the byte, inspect your content for compliance with their business model, and generally define how you are "allowed" to use the internet.

They do no such thing; they already have all that power by virtue of your pipe going through them - even before the CGNAT days, my ISP required extra payment to let you listen on e.g. TCP ports 25 and 80. Every packet you send or receive goes through your carrier, whether or not they rewrite its IP address into a different space.

> It doesn't have to be that way.

And yet it can't be any other way, because 99.9% of users are indeed better off with their incoming data filtered. That doesn't have to be the case either - we'd just need software secure enough that being on the internet without a firewall is not a risk.

The internet is becoming a public-health kind of thing: you need enough people around you to be vaccinated so that you don't suffer virus/DDoS attacks yourself. But most of the population is worse than anti-vaxxers - they don't even know that they can be (or have been) infected, and there's no clinic to go to even if they did.

... so it effectively does have to be that way, unfortunately.

tinus_hn said 2 months ago:

> You definitely couldn't write something like traceroute for bridging, because none of the tools you need to make it work - such as the ability for an intermediate bridge to even have an address - exist in plain ethernet.

Doesn’t Windows 10 come with a network mapping tool that does just that (when not on a domain)?

RyanShook said 2 months ago:

I think the heart of all networking issues is that it's hard to imagine a system more complex than the one you have today. IPv6's main weakness is the lack of mobile IP, and even as we work to solve that problem, we are creating new problems that will only become clear in another decade or two.

mcguire said 2 months ago:

Slide 14 of the BBR deck: "PROBE_RTT drains queue to refresh min_RTT: Minimize packets in flight for max(0.2s, 1 round trip) after actively sending for 10s. Key for fairness among multiple BBR flows."
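Paraphrased as code, my reading of that rule (a sketch of the slide's logic only, not the actual Linux implementation):

    MIN_RTT_WINDOW = 10.0                 # seconds of sending before re-probing
    PROBE_RTT_FLOOR = 0.2                 # minimum time at drained inflight

    def should_enter_probe_rtt(now, min_rtt_timestamp):
        # No fresh min_RTT sample for ~10s of active sending: drain the queue.
        return now - min_rtt_timestamp > MIN_RTT_WINDOW

    def probe_rtt_duration(current_rtt):
        # Hold packets-in-flight near zero for max(0.2s, one round trip).
        return max(PROBE_RTT_FLOOR, current_rtt)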

Suddenly, I'm scared.

Abekkus said 2 months ago:

Why isn't TLS client auth a turnkey solution for maintaining a single web session as a client moves between network segments?

kelnos said 2 months ago:

Aside from the fact that TLS client auth is currently a usability nightmare... well, it could be, but you'd be building that session-resume capability into a higher layer than is ideal, meaning anything that isn't TLS would have to build its own session-resumption system.

Conceptually, you want to push things as low in the stack as is feasible.

Abekkus said 2 months ago:

The session layer sits above transport in the OSI model, and any secure session management would probably end up as complicated as TLS anyway.

stjohnswarts said 2 months ago:

I'd be willing to bet that QUIC will not replace TCP in our lifetimes.

gloaming said 2 months ago:

Seriously. I can't even look at an IPv6 address.

I used to think that maybe they'd grow on me the same way dotted quads did, but nope, never happened. Sort of like memorizing excess digits of pi - I used to tell myself someday, someday... But who fucking cares.

I think there was probably a shred of utility in the idea that trillions of hosts could all wire into the same address bus, and maybe sending packets to a space colony that your computer could interface with just as readily as another continent or the coffee pot in the kitchen is admirable in its scope of ambition. But most people are using devices so tightly controlled and locked down that most of us aren't even messing around with NICs and RJ45 jacks anymore.

I haven't looked at a tower PC in years, and so it probably just doesn't really matter anymore.

paulcarroty said 2 months ago:

Practice says good design isn't effective against greedy ISPs.

AFascistWorld said 2 months ago:

IPv6 opens up the possibility of comprehensive government surveillance: governments could attribute a fixed set of IPs to every device, which links back to your ID.

Dagger2 said 2 months ago:

That's not how v6 works. v6 addresses aren't linked to devices or IDs; typically your devices will generate themselves a random address, and generate a new one every day or so.
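Roughly what a temporary-address scheme in the spirit of RFC 4941 does - a sketch only; the real algorithm also manages address lifetimes and avoids reserved interface IDs:

    import ipaddress, os

    def temporary_address(prefix):
        # Random 64-bit interface identifier under the advertised /64 prefix.
        net = ipaddress.IPv6Network(prefix)
        iid = int.from_bytes(os.urandom(8), "big")
        return net[iid % net.num_addresses]

    print(temporary_address("2001:db8:1:2::/64"))  # a fresh address each call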

On the other hand, the extreme shortage of v4 addresses combined with pervasive use of NAT basically requires services to become more centralized, which makes surveillance easier. It's true there are other incentives for companies to centralize things, but surely removing the requirement to do so will make the situation better, rather than worse?

RandomTisk said 2 months ago:

I'd like to propose IPv7: exactly like IPv4, but with 48-bit addresses. It must accept IPv4 traffic, treating v4 addresses as having zeros in their first two octets. You're welcome.
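Sketched out, with hypothetical helper names, the proposed embedding would be:

    def v4_to_v7(dotted):
        # Zero-extend a dotted-quad IPv4 address to 48 bits (6 octets).
        return bytes(2) + bytes(int(o) for o in dotted.split("."))

    def is_embedded_v4(addr):
        return len(addr) == 6 and addr[:2] == b"\x00\x00"

    a = v4_to_v7("192.0.2.1")             # -> 00:00:c0:00:02:01
    assert is_embedded_v4(a)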

asdfasgasdgasdg said 2 months ago:

That's more or less what the article says in its first section. Then it goes on to explain why that didn't happen. In short, it isn't that simple.