What is 1e100.net? (support.google.com)
376 points by jstrieb on June 1, 2018 | 121 comments


Google should run a server on 1e100.net and redirect it to this answer. Some click tracking domains do this, and it helps non-technical people feel less worried when they punch the domain into their web browser and it goes somewhere with an explanation.


For some ISPs, if you go to https://as12345.net (where the 12345 is their globally unique AS number), you'll get a looking glass tool. Or a page with a link to their looking glass, contacts for NOC and abuse for use in NOC-to-NOC communications, etc. I don't think Google would do that, though.


Google has https://peering.google.com/ for that purpose. My understanding is that it provides similar tooling for ASNs that have a relationship (peering, etc.) with Google ASNs.


Yes, Google is pretty easy to peer with. At major public IXes they will bring up their IPv4 and IPv6 sessions on their side before yours, and leave them in an enabled state, since they use a suite of automation tools to manage traffic flows to regional peers. This makes it easy to bring up the session on your side and immediately see routes and live BGP prefix exchanges going in both directions without having to interact with their NOC.

Given the number of peers that they have around the world, it would be very labor intensive to hand craft a BGP session config and leave it in a down state until mutual NOC contact was made to bring it live.

They obviously have a great interest in getting traffic like YouTube videos to the residential/commercial downstream eyeball ISP end users as efficiently as possible.


I wish google would peer their fiber with Comcast better. That pipe is too small.


After reading their explanation, I don't understand the purpose of 1e100.net. Maybe someone who is more technical than I am can explain how it is used to identify Google's servers and why that's useful.

On that note, I should answer their question ("Was this article helpful?") with "No."


Servers need a name that's different from the product(s) they're serving.

For example, when I look up google.com and then reverse-lookup the IP address, I get sfo07s13-in-f14.1e100.net (you'd get something different depending where you are). The name sfo07s13-in-f14 could tell someone where to look if that machine is misbehaving.
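For the curious, a reverse lookup doesn't query the IP directly; it queries a name under in-addr.arpa built from the reversed octets. A quick Python sketch using only the stdlib (the helper name is mine):

```python
import ipaddress

def ptr_query_name(ip: str) -> str:
    # Build the in-addr.arpa name that a PTR (reverse) lookup queries.
    return ipaddress.ip_address(ip).reverse_pointer

print(ptr_query_name("172.217.5.14"))  # 14.5.217.172.in-addr.arpa
```

A live reverse lookup would then be socket.gethostbyaddr("172.217.5.14"), which is what returns a name like sfo07s13-in-f14.1e100.net.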

If they'd named it sfo07s13-in-f14.google.com, then browsing to that URL sends Google cookies. If it's some server from a recent acquisition that may not be up to Google's level of security, that's dangerous.

Even fairly small companies are well-advised to have a domain name for their brand and a separate domain name for their infrastructure.


Adding to your last paragraph: Medium to large sized ISPs and web entities are also highly likely to use a third domain, which never touches the public internet. You might have:

a) your public web presence, www.widgets.com for marketing, sales, customer web portal, customer billing, and so forth.

b) your domain name used for public reverse DNS for your ARIN, RIPE, APNIC etc IP space, for an ISP that has its own AS, such as as12345.net. This is the function that the 1e100 domain serves for Google.

c) your internal domain name that is used by your management network to address every piece of network equipment, hypervisor, virtual machine and so forth. This could be something like "widgets.internal". The company internal DNS servers that never touch the public Internet will be authoritative master/slave for this, and your intranet clients will be set up to query these DNS servers. Your reverse DNS for all of your RFC1918 IP space (10.x, 172.16.x, 192.168.x, etc) will have forward/reverse matches for everything that you manage and monitor in your network, so that automated tools can crawl the network and auto discover/auto-provision new equipment.
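Checking whether an address falls in that RFC1918 space is trivial to script with Python's stdlib, in case anyone wants to build the crawler mentioned above; the function name is made up:

```python
import ipaddress

# The three RFC1918 private ranges mentioned above.
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in RFC1918)

print(is_rfc1918("10.42.0.1"))     # True
print(is_rfc1918("172.31.255.1"))  # True (172.16/12 runs through 172.31)
print(is_rfc1918("8.8.8.8"))       # False
```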

Internet users outside of your intranet will never see item C, but it definitely exists in a lot of companies' infrastructure.


Is it not recommended for part C to be done with a third, yet publicly owned, domain? Instead of widgets.local or widgets.internal, it would be widgets-internal.com or something.

Struggling to remember why, but I heard people mad about .local


.local implies multicast DNS, and you can't get a signed certificate for .local from any major certificate vendor either.

See: http://www.mdmarra.com/2012/11/why-you-shouldnt-use-local-in...


To avoid stupid software breakage you should never use .local for anything. Choose any other TLD for internal use, doesn't matter what it is, as long as it isn't a real TLD that exists in the new gTLD and ccTLD system.

On a large scale for all internal stuff you are going to have your own, internal root and intermediate CA. The root CA signs the intermediate CA, the intermediate CA then signs SSL/TLS certificates for your internal infrastructure things.

As an example for a mid sized ISP with such a setup, the internal ticket portal for the NOC is accessible only while physically in the office or when on the VPN, speaks TLS1.2 only to standard web browser clients, and its URL is https://portal.tickets.burrito

where the "burrito" is obviously not the real name, I've changed it for this example, but the choice of TLD in your own internal DNS infrastructure is totally arbitrary.

Then the individual clients all have the public certs for the internal corporate root and intermediate CA installed in them, so that they trust certificates signed by the internal CA.

If you're big enough to care about having a serious multi-state-scale intranet, you probably don't want to be reliant upon external CAs to sign your stuff. Having your own internal CAs lets you do a lot of other things as well, without additional cost, such as sign per-client-device certificates as well.


Please do not suggest people use TLDs that are not part of the gTLD, ccTLD, sTLD, or any other TLD system. Just because they are not used now does not mean they will never be used. We just saw the effects of that with the 1.1.1.1 DNS service: people assumed that address would never be used, and now that it is, things break.

Currently, the only reserved TLDs are:

- [RFC 6761] example

- [RFC 6761] invalid

- [RFC 6761] localhost

- [RFC 6761] test

- [RFC 6762] local

- [RFC 7686] onion

With the exception of localhost, local, and onion, these can be used without any worry about future use. Any others should be considered real TLDs and not be used unless you actually own the domain name under that TLD.

[RFC 6761]: https://tools.ietf.org/html/rfc6761

[RFC 6762]: https://tools.ietf.org/html/rfc6762

[RFC 7686]: https://tools.ietf.org/html/rfc7686
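To make the rule mechanical, here's a tiny check against exactly that list; the function name is my own, not a standard API:

```python
# Special-use TLDs from RFC 6761, RFC 6762, and RFC 7686 (listed above).
RESERVED_TLDS = {"example", "invalid", "localhost", "test", "local", "onion"}

def tld_is_reserved(hostname: str) -> bool:
    # Strip a trailing root dot, then compare the last label.
    return hostname.rstrip(".").rsplit(".", 1)[-1].lower() in RESERVED_TLDS

print(tld_is_reserved("foo.test"))        # True
print(tld_is_reserved("portal.burrito"))  # False: not reserved, a future gTLD could claim it
```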


First, you're absolutely correct that people should not just choose random TLDs and attempt to use them. But I think you misunderstand me: I'm talking about doing purely internal DNS for things in private IP space that will never touch the internet. IP space that will never be announced to BGP peers and is thoroughly firewalled off from the Internet.

Let's say I have an ISP named Burrito Corporation.

I want to uniquely address equipment in my private management IP space VRF and have a proper hostname for every piece of equipment, both for "forward" and rDNS.

It's a pretty common setup to have internal BIND servers which are authoritative master/slave for the TLD .burritocorp, which also have an ACL allowing the internal IP space (10/8, etc) to query them. Then set up all internal DNS stub resolvers/client devices to query those servers.

Now I have a core POP in Toronto and I've decided to use airport codes for site names, so the internal management IP interfaces for things at that site might get a hostname which is hierarchically contained under yyz01.ca.burritocorp

It is unlikely in the extreme that ICANN is ever going to create a TLD named "burritocorp".

I am not suggesting that people go and try to use self-created TLDs for things that have real world IPs in the global BGP4 routing table. See section 6.1 of RFC6761.


I think the risk in that case is that I, a cloud computing startup also called BurritoCorp, get .burritocorp registered as a real TLD to host my state-of-the-art cloud services that perfectly solve the peculiar infrastructure needs of on-demand Mexican food delivery consumer brands such as yours, but sadly you have to block the entire TLD to prevent collisions with your internal infrastructure, because when I first launched, I caused a four-hour outage for you.


Internally your OS resolver will be configured to point to a different DNS than anyone external to your network. You essentially have a private branch of the DNS at that point. The only real potential for conflict is when the FQDN of an internal server is the same as for an external server. In that case your internal DNS will serve up a record pointing to your internal server, and external users will get a record pointing to an external server. If staff want to order fresh burritos over the Burrito-over-TCP protocol that will have to use separate equipment. But nobody outside your intranet will ever see a name conflict.


I understand, but 1.1.1.1 was used by people for internal purposes, for example as a default for portals on wifi routers. Because of that, anyone who wants to use the 1.1.1.1 DNS server now cannot when they are on wifi routers that use that IP, as the traffic does not go where it was intended.

For example, my username is enzanki_ars. Let's say I set up enzanki.ars at home as a host for an in-home-only server. Now let's say that a new country registers .ars as a new TLD, for example Argentina, as ARS is the ISO 4217 currency code for the Argentine peso. Now I am stepping over someone else's TLD, and someone could even register enzanki.ars before me and start using it.

My personal policy is "just because you can doesn't mean you should." If I wanted to disrespect the process we have in place to prevent people from stepping over other people's domains, I would have set up enzanki.ars already. Until then, 192.168.0.0/16 is how I reference my hosts at home.

Edit: See also: https://www.iana.org/assignments/special-use-domain-names/sp... for an up to date list of special use domains. "home.arpa." was recently assigned as one, and I may start using that one at home.


The 1/8 IP block, unlike anything from RFC1918, was never supposed to be used by anybody internally. RFC1918 was published 22 years ago so people have had ample time to stop being foolish.

With a totally internal DNS setup it doesn't matter even if ICANN does decide to create a ".ars" TLD someday. In such a theoretical setup, your clients are all querying your own internal DNS servers, and you are not publishing or pushing anything to the root nameservers. Your use of internal hostnames and rDNS for ".ars" internally does not conflict with or hurt anybody's use of their own domains in the real-world .ars out on the public Internet, and neither does their use of the domain affect your non-internet-connected management network. There is no "stepping over other domains".


> In such a theoretical setup, your clients are all querying your own internal DNS servers, and you are not publishing or pushing anything to the root nameservers.

And in the usages of 1.1.1.1, no one was publishing anything on BGP or pushing it out to global routing tables. However, they were building their own infrastructure (and, in the case of consumer routers, their customers' infrastructure) on the assumption that this portion of the namespace would never refer to an external resource they'd actually want to access.

Similarly, if you use .ars, you're taking up that namespace from the perspective of your internal users. If at some point it turns out that there is some resource under that name that you want to access, you're going to have a hell of a time rebuilding your infrastructure to not use that name.

tl;dr don't use random names in a globally-managed namespace. Even if you're the only person seeing those name -> resource mappings, you can end up with inconsistencies between your own usage and the globally-managed version.


> Similarly, if you use .ars, you're taking up that namespace from the perspective of your internal users. If at some point it turns out that there is some resource under that name that you want to access, you're going to have a hell of a time rebuilding your infrastructure to not use that name.

No, you're not, because an internal management network does not have a gateway out to the internet. The management VRF does not talk to the global routing table. There would be no way to access a public .ars website even if you did not take its namespace.


If it is never going to be exposed to the public internet, you might as well use .com, then at least when it does get connected to the public internet you’ll get feedback that it is broken pretty quickly.


I could see an IT department using these (misguidedly) for purely internal purposes: .cam .camera .coop .drive .equipment .institute .media .network .systems .wiki. These are now registered TLDs. You are begging for someone to misconfigure something if you use trivial TLD names internally.


It USED to be that .local was fine; no one else and no other standard used it.

That changed; this is what people are / were mad about.

The point is that some standard CAN change any other similar thing at any point in the future; particularly now that you can buy a TLD.

Thus the lesson is, buy a real name and reserve the namespace.


The gTLD local is already registered and can't be 'stolen'. DNS servers should never route a query for that domain.

You even get a link to the RFC when installing Kubernetes with the default domain name (which is local). I sadly forgot the number, so I can't cite it.


There are two interwoven threads of logic in my post. You've mistaken the second as limited to the first when it is building upon it.

To clarify:

It /used/ to be that .local wasn't mentioned in any standard.

Don't do anything like .local (a non-TLD that was not part of any standard), because there is now a history of standards being created that break existing private deployments, and also because other TLD-like things can now be purchased as TLDs. That is why the current best practice is to have an internal subdomain of a real domain (even if not globally published).


this should be the rfc: https://tools.ietf.org/html/rfc6762

from wikipedia:

> The Internet Engineering Task Force (IETF) standards-track RFC 6762 (February 20, 2013) reserves the use of the domain name label local as a pseudo-top-level domain for hostnames in local area networks that can be resolved via the Multicast DNS name resolution protocol.[1] Any DNS query for a name ending with the label local must be sent to the mDNS IPv4 link-local multicast address 224.0.0.251, or its IPv6 equivalent FF02::FB. Domain name ending in local, may be resolved concurrently via other mechanisms, e.g., unicast DNS.


A somewhat common pattern for (C) is using names like foosrv45.hide.example.com if you don't want to register another domain for that purpose. For the convenience of the DevOps team it even makes sense for such hostnames to be publicly resolvable (to your internal IPs).

One issue with doing this is that it tends to break in various wonderful ways when you internally use Windows and Active Directory.


For various security reasons it's a bad idea for your internal hostnames to be publicly resolvable from anywhere on the internet. They should only resolve when you're on a VPN, and it is therefore possible for your client device's stub resolver to query the internal DNS server that only talks to internal IP space.

Even if the internal hostnames only resolve to a non-globally-routable IP address somewhere in 10/8 or 172.16/12, etc.


In my opinion the convenience gained is more significant than these mostly theoretical "various security reasons".

In other words, if exposing hostnames and addresses of your internal infrastructure to the public internet has meaningful security ramifications, then you have a considerably more critical security problem.


I hope you never have to experience the non-theoretical side of that decision.

The benefit of being able to resolve internal IPs externally isn't even really a benefit: if I can't resolve internal.hostname.domain then I'm not on the VPN, which means I can't reach the machine anyway, so where's the benefit?

A perfect example of why this threat isn't merely theoretical is the $36k bug bounty that Google paid out recently[0]. With additional knowledge of their internal network exposed by the information leak from DNS, the damage a blackhat could have done is untold.

The problem isn't the exposition of hostnames and addresses, the problem is if blackhats do manage to get access to the internal network (through whatever means; breaking into the VPN, a hole in the firewall, social engineering, RCE in the public facing website as in Google's case), it's undoubtedly easier if they've been given a list of juicy targets, rather than have to discover them for themselves.

[0]https://sites.google.com/site/testsitehacking/-36k-google-ap...


I couldn't have put it better myself. I don't think the person who wants convenience has enable on any AS's routers, or at least I hope not. One thing I find interesting is that they're using 169.254/16. It's doubtful Google uses DHCP for anything; their automated provisioning tools are probably quite unique. My best guess is that their stuff in that particular part of App Engine is using the space because they've nearly exhausted the rest of RFC1918, which is pretty impressive.

The only other organization I know of that has done that is Comcast, which has been a leader in forcing vendor ipv6 support because they literally exhausted 10/8 for their management networks.


And yet Google still believes VPN as a security boundary is an anti-pattern, since the whole intranet is as weak as its least secured node. https://cloud.google.com/beyondcorp/


If you already have a proper, working VPN solution for remote access into the internal IP space, which any ISP will already have for network engineer staff and noc purposes... What great convenience do you gain by making the DNS and internal IP space scheme public?


I said nothing about making it public, because I don't view the possibility of someone capturing a DNS packet of the kind "A? fooserv34.hide.example.com" as making anything public. (And even if your authoritative DNS server would allow AXFR for the whole of hide.example.com, you would only leak information that is useful to somebody who also has the capability to find out exactly the same information on their own.)

The convenience is about not having whatever VPN solution you use inject DNS recursors into the laptop's OS, which ends up being not reasonably solvable when you have two such VPNs you have to use at once.


Who said anything about making IP addresses public? You seem to be assuming registering a domain name implies delegating that domain to an actual name server that must respond to public requests. These assumptions are both incorrect.

Registering a domain prevents those names from being used for other purposes in the future. Your domain does not have to have valid NS addresses. Even if a domain has an NS record with a valid address of the authoritative name server, that server does not have to respond to requests from the public internet. It doesn't even have to be accessible on the public internet.

Register a domain for internal use and run an internal name server on e.g. 10.x.y.z that handles your private IP space, and configure the local recursive resolver name servers to hand out your internal server's address when asked about the associated NS record. At the same time, set the real NS records for your internal domain to e.g. a traditional DNS hosting service that returns a CNAME pointing to your public domain. (the public internet only sees *.private.example.com as a CNAME to www-public-name-com.example.com)


And .dev.

Nothing is really safe. Maybe an emoticon? IIRC, the use of emoticons was stopped after someone registered the poop symbol on .la.


From a DNS server zonefile perspective, it's still plaintext: the Unicode gets encoded as an ASCII string with the xn-- prefix on it (Punycode)...

https://en.wikipedia.org/wiki/Internationalized_domain_name

https://panic.com/blog/the-worlds-first-emoji-domain/

http://xn--ls8h.la/

Unicode U+1F32E is the taco emoji, so I think I will be migrating all my internal things to http://servername.1f32e


What's worse about dev is that chrome auto-forwarded to the https version even if you were using it internally for testing using your hosts file.


That is the HSTS preload list, used across browsers; .dev is no different from any other domain on the list, except that it was the first TLD.


Maybe .local was not the best example. In my opinion it's recommended to do this with something that's not a public domain at all. Since your DNS is entirely internal, you can set up bind9 servers to be authoritative for the root. You could make the top level domain anything totally arbitrary of your choice, as long as it is not a valid public domain or TLD, like widgets.burrito


> In my opinion it's recommended to do with something that's not a public domain at all ...

The recommended practice, AFAIK, is to register a public domain for the purpose. See the recent HN discussion of .home.arpa for all the reasoning and arguments and counter-counter-counter-points.


Really large ISPs have even planned for the disaster-recovery scenario of a collapse of public DNS infrastructure, so their internal management network and hostnames are not related to the existing gTLD or ccTLD, or legacy TLD in any way. There is no need to involve ICANN-approved TLDs in your internal DNS infrastructure, particularly if you have management interfaces for a lot of things that are firewalled off or air gapped from ever touching the public internet.


What does ICANN have to do with this? If the global DNS system crashed and burned, you would be in exactly the same place as if you had invented a TLD as you have suggested. But if that doesn't happen, and the far more likely outcome of your made-up TLD becoming real happens instead, you're in a much worse position.

So far I've read you suggest .taco, .burrito, and .burritocorp as good TLDs to pick. This is terrible advice.

Purchase a public domain name from a real TLD - use it knowing it's yours, and never going to conflict.


I'm using made up words as placeholders like foo, bar, or anything else. Not suggesting you use your own tld of burrito. The point of using a nonsensical example is that your TLD choice for hostnames and rDNS in a management VRF can be entirely arbitrary, since it is not part of "the internet".

Typically the name would be something relevant to your needs, such as the name of the company, or AS number. I think what you don't get is that the public root name servers do not need to have anything to do with an entirely internal dns infrastructure.

Not suggesting icann is going to crash and burn either, but that "real" TLD relevance to what you do in rfc1918 IP space in a network that is not routed to the internet is minimal at best.

Bet you $5 that Google's actual internal DNS for the OOB, SNMP management, and automated provisioning tools which control those 1e100 hosts is not 1e100, and is not a zonefile you can find cached anywhere public. They built their own internal DNS for it. I have seen the same at two large CDNs. Their internal authoritative DNS servers for their management IP space have no connection whatsoever to the public internet or to the lettered root nameservers.


> I'm using made up words as placeholders like foo, bar, or anything else. Not suggesting you use your own tld of burrito. The point of using a nonsensical example is that your TLD choice for hostnames and rDNS in a management VRF can be entirely arbitrary, since it is not part of "the internet".

1.1.1.1 wasn't part of the internet either, and, guess what it does today?...

> and is not a zonefile you can find cached anywhere public

You seem to be conflating owning a legitimate domain name for the purpose of internal use with making the DNS records from that domain's zone public. One does not imply the other.

Buy the domain, know you will never conflict with something, and you're done (bar renewal!).


It's quite amusing that you are promoting the use of made-up TLD names for internal use when previously you were deriding others for doing the same with 1/8 IP space (pre-2010) for the exact same internal-only isolated use case.

I'm trying to picture a co-worker telling the team they went ahead and put all the management interfaces in the .burrito domain in order to save the company $10 in domain registrar fees.


Especially in that disaster-recovery scenario, when you're figuring out how to combine the different namespaces that different disconnected fragments of the internet are using, you really don't want namespace collisions.

Namespaces are abstract things that exist outside of the realm of actual network connectivity, and enable you to plan for all kinds of hypotheticals.


So, for example, the domain google.com points to the domain sfo07s13-in-f14.1e100.net (depending on where you're browsing) which then points to an IP address of the server that's serving you the content.

    google.com -> subdomain.1e100.net -> server IP address 
Then when you run a reverse DNS on the IP address, you get sfo07s13-in-f14.1e100.net.

    rDNS: server IP address -> subdomain.1e100.net
So, like you said, if something is wrong with the server, you can run a reverse DNS on the IP address to get the subdomain sfo07s13-in-f14.1e100.net.

Is this the correct understanding?

But don't you already have the IP address of the misbehaving machine?


Lots of people look at IP addresses. Most of the people who find the "1e100.net" records in their network logs would have no easy way of knowing that 1.2.3.4 (or whatever) was a Google IP. The reverse lookup allows you to easily find out what domain it belongs to (1e100.net, which a Google search shows is a Google address).

You can also embed other information in the reverse name, such as the region, datacenter, floor, switch, rack, or unit number of the machine. It can be incredibly easy to locate the machine by just looking at its reverse name. "Oh, 1.2.3.4 is in us-east-1, on floor 3, in rack 27, unit 5. Let me go check the network cable."
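Sketching that in Python: the pattern below is a guess at a plausible naming scheme based on labels like sfo07s13-in-f14; Google's actual scheme isn't public, so treat both the regex and the field names as hypothetical:

```python
import re

# Hypothetical scheme: <site><pop>s<cluster>-in-f<host>, inferred from
# labels like sfo07s13-in-f14. The real meaning of each part is a guess.
PATTERN = re.compile(r"^(?P<site>[a-z]{3})(?P<pop>\d+)s(?P<cluster>\d+)-in-f(?P<host>\d+)$")

def parse_rdns_label(label: str) -> dict:
    m = PATTERN.match(label)
    return m.groupdict() if m else {}

print(parse_rdns_label("sfo07s13-in-f14"))
# {'site': 'sfo', 'pop': '07', 'cluster': '13', 'host': '14'}
```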


Yes.

The name sfo07s13-in-f14 is mnemonic to Google engineers -- it's probably in San Francisco data center 7. Also, that server probably has both an IPv4 and IPv6 address. Also, sometimes IP addresses change but you want servers to keep their identity. So names are convenient.


> If they'd named it sfo07s13-in-f14.google.com, then browsing to that URL sends Google cookies. If it's some server from a recent acquisition that may not be up to Google's level of security, that's dangerous.

Sorry, I'm slightly confused.

I browse newproduct.google.com. My browser calls the DNS, asking for the IP. The IP comes back as 192.168.0.1 [1]. It connects to 192.168.0.1. Gets hit by an XSS, and sends your cookie value to evildoer.example.com.

How would it help you that the reverse-IP of 192.168.0.1 is sfo07s13-in-f14.1e100.net? The browser doesn't know that. It thinks it's going to newproduct.google.com.

[1]. Yup, that number is just an example.


Your example is not the same as the one in the comment you replied to. You picked a product hostname. The example was an infrastructure hostname.

The point is that Google (or any company with the same mindset) scopes down the number of machines that can receive your google.com cookie. Even their own machines often don't need it to do their job, so it's not worth the security risk to have your cookie sent more than necessary.


Sorry but I don't follow. I hope you can clarify. What's the disadvantage of providing a PTR -> sfo07-blah.google.com instead of sfo07-blah.1e100.net?

Browser will send cookies to X.google.com and it will not to X.1e100.net. Ok.

But how is that problematic in this case? Why will X.google.com misbehave when it receives Google's cookies? Why will it be online in the first place if it is not yet up to appropriate security standards?


For normal purposes, DNS translates domain names into IP addresses (e.g. youtube.com to one of Google’s IPs). However, you can also do it the other way around: a reverse DNS lookup where you ask the associated domain name of an IP address.

Google decided to let all their IPs return a reverse DNS lookup under the 1e100 domain name. It makes things simpler when you want to figure out who’s connecting to your servers, for example.


>It makes things simpler when you want to figure out who’s connecting to your servers, for example.

Why? Or rather I guess, How does it make things simpler?


Consider a scenario where you have a log file and in the log file is an IP address (172.217.5.14). You are not sure whose IP address it is. So you run the following commands:

    # dig -x 172.217.5.14 +short
    lga15s49-in-f14.1e100.net.
    ord38s19-in-f14.1e100.net.

    # dig lga15s49-in-f14.1e100.net. +short
    172.217.5.14

The first command (dig -x) checks the PTR record for the IP address 172.217.5.14. It returns two PTR records: lga15s49-in-f14.1e100.net. and ord38s19-in-f14.1e100.net.[0]. Those are subdomains of 1e100.net, which we know Google owns. However, you can set a PTR to pretty much whatever you want, so we now take an additional step as well. We run the dig command again to check the A records for the domains. This returns the same IP address we started with, which is good. Since Google controls the DNS for 1e100.net we can be reasonably sure that it is in fact a Google server. This is called Forward-confirmed reverse DNS (FCrDNS) and is one tool you can use to determine the ownership of an IP address. For example, it is frequently used as a weight in email spam filters. Although, because of the intricacies of email, in that case it is usually not used for identification and instead used as a general purpose check to determine whether a mail server is rogue or not, since spam servers very often do not have proper FCrDNS.
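The FCrDNS logic itself is small. Here's a sketch with the two resolver calls injected as plain functions, so it can be shown (and tested) without touching the network; in practice they would wrap dig or the OS resolver:

```python
def fcrdns_ok(ip, ptr_lookup, a_lookup):
    """True if some PTR name for `ip` resolves back to `ip` (FCrDNS)."""
    for name in ptr_lookup(ip):
        if ip in a_lookup(name):
            return True
    return False

# Stub resolvers reproducing the dig output discussed above:
ptr = lambda ip: ["lga15s49-in-f14.1e100.net.", "ord38s19-in-f14.1e100.net."]
a = lambda name: ["172.217.5.14"] if name.startswith("lga15s49") else []

print(fcrdns_ok("172.217.5.14", ptr, a))  # True
```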

There are other tools to determine who owns an IP address, like whois, but in some instances one will garner useful information and the other will not. So it's nice to have both at your disposal.

[0] As a side note: the trailing . in those PTR records returned by dig is not a typo. All domains actually end in a dot, it's just usually implied.


> Those are subdomains of 1e100.net, which we know Google owns

Sorry, but to the average user, the domain name 1e100.net doesn't ring a bell at all at this point. They would still have to look up the IP in ARIN/RIPE/etc to see that the IP range is effectively owned by a company called Google.

Do you really need a hostname at all? Wouldn't the ARIN/RIPE/etc entry be sufficient to know who "owns" said IP address?


Google owns IPs that aren't used by Google services (e.g. all the customer IPs on GCE), it's useful to distinguish "Google" vs "hosted by Google".


I agree - that again is done via whois normally too.


See, I probably would have reached for whois before dig. Partially because reverse DNS seems less likely to be populated with useful info, in my limited experience.


rDNS can be populated with a great deal of useful information if you are trying to diagnose an asymmetric routing issue between two internet service providers, particularly if both of them have had the forethought to give reasonable, understandable, hierarchical names to their globally distributed POPs. Other things like "ae" that show up in a traceroute can be indications of an 802.3ad aggregated link, which Juniper calls an Aggregated Ethernet. The same goes for Cisco and Juniper interface abbreviations you will find, like "hu", "te", "xe", etc.

One example: as an ISP, say you have a $200/mo dedicated server customer and you're giving them a /29 of public IP space. That /29 exists as a VLAN subinterface on one of your Juniper routers and is trunked across the datacenter through various switches to the server. Let's say it's VLAN 2659. Somewhere in the public rDNS for the default gateway IP of that /29, you would have the string "vl2659".


Neat:) Probably just showing how little I run into this stuff; usually I'm just looking at login attempts and seeing which ip range to ban.


Their infrastructure probably allows them to host multiple products in the same subnet; it's possible any given (IP belonging to a) physical host could be hosting a YouTube API one minute, Google search the next, Gmail the next. So they picked a subdomain that belongs to none of these, to eliminate any confusion.


I thought the "who" was referring to users, but you're saying it's referring to different google services, is that right?


It's good practice to have working reverse DNS for all of your public IP space. Using a short domain name is convenient. There are a number of ISPs that own their AS number as a domain such as as12345.net and use that for their rDNS, so it will show up nicely in traceroutes either direction.

Google has done something a little bit different here, it's not their AS number, but same general concept.


Google is big enough they own/need several ASNs. Having a dedicated domain totally makes sense.


It's the domain name they use for their servers. It's generally good practice to have a DNS name that maps to a particular server, apart from the website it's supposed to be serving, for administrative purposes.

Try this:

  > dig google.com

  ;; ANSWER SECTION:
  google.com.    299    IN    A    172.217.164.110

  > nslookup 172.217.164.110

  Non-authoritative answer:
  110.164.217.172.in-addr.arpa    name = sfo03s18-in-f14.1e100.net.


It's to provide a domain name when you query the reverse DNS address of an IP.


they use subdomains of 1e100.net as names in their network


I don't see any requests to 1e100.net when loading Google sites like google.com or youtube.com. I see domains like gstatic.com and apis.google.com, but not 1e100.net.


You can think of all user-facing domains in a system as an interface or API, which abstracts away implementation details about which specific servers are behind them.

This allows flexibility in infrastructure — you can swap machines in and out (e.g. by updating public DNS records, updating the machines’ IP addresses, or adding them to (or removing them from) a load balanced pool behind a reverse proxy). But you still need a way to reference individual machines regardless of whether they’re serving or not.

That’s where domains like 1e100.net come in — a system of concrete (non-abstract) references to specific machines in your infrastructure.


Try a reverse lookup

  yebyen:~$ host google.com
  google.com has address 172.217.10.46
  google.com has IPv6 address 2607:f8b0:4006:803::200e
  ...

  yebyen:~$ host 172.217.10.46
  46.10.217.172.in-addr.arpa domain name pointer lga34s13-in-f14.1e100.net.


You wouldn't. Multiple domain names can point to the same IP address, but an IP address conventionally has just one reverse (PTR) record.


    $ ping google.com
    PING google.com (172.217.7.206) 56(84) bytes of data.
    64 bytes from iad30s10-in-f14.1e100.net (172.217.7.206): icmp_seq=1 ttl=48 time=0.676 ms


FYI, Android phones are always sending pings to 1e100 even if Google Apps are not installed. This is a system feature that detects 'captive portals'. If you want to observe your phone doing this, download the app 'Net Monitor' (also on F-droid).

More info and how to disable this: https://www.reddit.com/r/LineageOS/comments/7m8tsq/mysteriou...


I also discovered while doing some whois checks that Google's domain registrar (Google Domains) is owned by a Google subsidiary called "Charleston Road Registry". http://charlestonroadregistry.com

This is the street that goes through the Google campus in Mountain View, CA.


I remember experimenting with traceroute and reverse DNS back in school and wondering what kind of mystery company even google was renting its servers from...


See also: atdn.net; tfbnw.net; cloudfront.net.

In today's world of CDNs and IP anycast, I'm not sure how accurate it is to talk about these IPs corresponding to "a server". Obviously packets go to some NIC somewhere, but probably not the same one for each of us and likely not the same hour-to-hour.

Facebook and Yahoo/AOL don't seem to go along with the convention that endpoint IPs reverse-resolve to a different domain -- elsewhere in this thread mentioned as a cookie-payload-leak countermeasure. Google and Amazon do.


I don't think people appreciate how much effort Google puts into minimizing cookie leakage. It's good security and privacy hygiene, and it also speeds up requests - transmitting sizable cookies isn't free, and takes time, especially when you're paying by the millisecond.

This is why there are various different domains for static content, domains for user content (googleusercontent.com), domains for certain cloud services, etc. Protection against XSS and cookie leakage is taken very seriously.


Interesting that they group the machines by IATA airport codes. I guess that's the nearest airport for that datacenter.


Naming major city ISP POPs by IATA airport codes is a practice that goes back to uunet/AS701 in about 1995, maybe a bit earlier, when talking about reverse DNS for public IP space. It actually predates the existence of ARIN, even.

They then append a number, so your first POP/core site in the DFW area might be DFW01, then DFW02, DFW03, and so on.

On an international scale, another non-airport/IATA method is to group by ISO standard two- or three-letter country codes and then two-letter state/province/regional subdivision abbreviations, so something in the bay area of California might have the last part of its DNS name as sfo01.ca.us.as12345.net.
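Either way, the location code sits at the front of the hostname's first label, so pulling a POP code out of a traceroute name is simple string work. A heuristic sketch (the `pop_code` helper and its parsing rule are my own, inferred from names seen in this thread such as lga15s49-in-f14.1e100.net; real naming schemes vary by operator):

```python
import re

def pop_code(hostname):
    """Extract a 3-letter IATA-style POP code from the first label of
    an rDNS hostname, e.g. 'lga15s49-in-f14.1e100.net' -> 'lga'.
    Heuristic: assumes the label starts with the code followed by digits."""
    first_label = hostname.split(".")[0]
    m = re.match(r"([a-z]{3})\d", first_label)
    return m.group(1) if m else None
```

Feeding it the hostnames from the dig output earlier in the thread gives "lga", "ord", "sfo", etc., which is the kind of at-a-glance geography that makes well-named rDNS useful in traceroutes.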


I prefer using the United Nations Code for Trade and Transport Locations (UN/LOCODE, http://www.unece.org/cefact/locode/service/location.html) value based on the address of the host's data center. It covers more specific locations than something like the IATA airport codes, and is still a well-defined standard.


Most ISPs will usually use the ICANN-standard two-letter ccTLD for the country where the POP is, which is usually similar to what you linked, but not always.


FWIW this is a very wide practice in the industry.


From my understanding it's more of a "what's the most well-known airport". Google LA is in Venice, so its closest airport would be SMO, but it's still referred to in corp speak and machine naming as LAX.


The point of a naming convention isn't to be painfully, pedantically correct, like this comment. It's to provide a useful guidepost to humans, who know LAX but don't know about SMO.

Why didn't you mention ICAO codes? They'd be more universally correct anyway, right? KSMO, KLAX?


But then NYC would not be LGA but JFK.


You will often see 1e100.net in reverse domain lookups, such as those produced by traceroute and ping.

Try a traceroute to google.com and you should see at least one if not several 1e100.net hops.


Heh, I never realized doubles can fit a googol. I knew they work up to 1e300-ish, but never made this explicit connection.


Well, only approximately: as a double it is actually 10000000000000000159028911097599180468360808563945281389781327557747838772170381060813469985856815104
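Both facts are easy to verify in a Python session: a googol rounds to the nearest representable double, and exact integer arithmetic in a double runs out at 2^53:

```python
googol = 10**100                 # exact, arbitrary-precision integer

# The double literal 1e100 is the representable value nearest a googol...
assert float(googol) == 1e100
# ...but it is not exactly a googol: converting back exposes the rounding.
assert int(1e100) != googol

# Integers are exact in a double only up to 2**53; past that, adding 1
# is lost to rounding (the gap between adjacent doubles is already 2).
assert 2.0**53 + 1 == 2.0**53
assert 2.0**53 + 2 != 2.0**53
```

Printing `int(1e100)` shows the long decimal expansion quoted above.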


It still 'fits'. The box is just very weird and sometimes the object that comes out is slightly different in size, but at least it's predictable.


I'm usually extremely reluctant to use doubles past 2^52, when the minimum increment becomes larger than 1.0


In that case you should likely have used fixed point arithmetic to start with.

Doubles larger than 2^52 are no different from smaller numbers in context of scientific computing etc. which is what floating point is for.

E.g. in a particular numerical simulation you could bump above 2^52 by using units of nanometers or below by using units of kilometers... yet this is likely just a simple multiplicative factor and the result of the calculation does not change one bit because of what units of measurement you use.

Getting closer to 10^300, though, is another story.


“I would have liked to be able to represent (1 << 53) + 1 exactly, but all I have is JavaScript and importing a bignum library is a lot harder than just restricting the range” seems like an unfortunately common problem.


Interesting nomenclature, but for Google it makes sense.

1e100 = googol, which is 1 followed by 100 zeroes. And a googolplex is 1 followed by a googol zeroes.


So is all Google Cloud infrastructure routed through 1e100.net as well?


Domain names aren't routed. I assume every public IP maps to a name under this domain, if that's what you're asking.


deelowe is right, this has nothing to do with routing, but to answer your question in spirit: yes, GCP machines use 1e100.net for their PTR records. Try pinging snapchat.com.


DNS and IP routing (in the bgp/ospf/isis/rip/etc sense of the word) are two different things - you could theoretically build a rather large ISP with many peers and upstream transits but with no DNS. Just AS numbers, prefix lists, acls, vrfs, and v4 and v6 blocks from ARIN.


Google Cloud's own infrastructure, AFAIK, has PTR records using 1e100.net. GCP users' addresses, such as a GCE instance's public IP, use googleusercontent.com for their PTR records.


Be sure to hit "YES" under the "Was this article helpful?" prompt.


Unless, of course, the article wasn't helpful. Then hit "NO" under the same prompt. Because, you know, YMMV.


1e100=1, not googol (which is 10e100). They should fix the name.


In scientific notation, “e” means “×10^”, not “^”. “1e100” thus means “1×10¹⁰⁰”, which is correct.


yes, this is the standard way to do the cookie syncing business.


Cookies would only involve this domain if it were used in request hostnames, but I think Google only uses 1e100.net for reverse lookups (PTR results).


A googol is a 1 with 100 zeroes after it.

1.0 x 10^100


Scientific notation for the number one googol. I'm sure no one else knew that, so I won't scroll down and check before answering.


Why not use a google.com subdomain? The first time I saw 1e100.net in the output of netstat, it confused the heck out of me.


I recommend reading some of the other comments under this post - they dig into concepts like cookie leakage, reverse DNS separation, etc.


The real reason they opted to do this is to make mobile firewalling more of a pain. If each application talked to the specific (types of) Google services it needed, firewalling would be easier.

Instead, users cannot have a global rule to block .1e100.net across their devices, due to this choice. In my case, this results in having literally dozens of rules - really one per application, in order to ensure that applications that are granted upstream internet access, are not granted .1e100.net access unless I explicitly feel that they need access to Google Services.


No such conspiracy theory needed: I don't think Google always uses separate IPs for each Google service. Their infrastructure is too large-scale, containerized and dynamic for that to be efficient.


How is this specific to “mobile firewalling”? Wouldn’t my computer have to undergo the same pain?


There should be a market for devices that sit between your wi-fi modem and the ethernet cable that fulfill these firewalling needs.

For security + convenience, they could be set up so that the internet can't update or configure it -- rather, you have to insert SD cards to upgrade or change rules. (Maybe the device already comes with "Full paranoia", "Dissident blogger", "All-family", etc. cards in different colors; updates are downloaded from the internet otherwise).


I don't understand anything you said.

Can you explain?


So, it's basically domain fronting.

It's okay for them, when they do it to shroud their individual services and keep you from blocking one. If I want to pass Mail and block Adsense, no dice for me.

But if I want to domain front, because I need to pass through a third party censor, tough darts for me.


No, this isn't domain fronting. This has to do with PTR records, reverse DNS.

Domain fronting is all about the forward lookups, normally you wouldn't be making PTR requests in a normal web request.

Putting all your infrastructure under a single utility domain is very common, all the ISPs do it. It's good practice for reasons listed elsewhere in here.

Bringing the practice of domain fronting into this conversation is technically irrelevant, but it does advance an agenda, relying on the less technically literate part of the HN audience being confused between reverse DNS, domain fronting, and firewall rules.


All of their services and ad networks use the "regular" names (like youtube.com and google-analytics.com) for these services and webpages, so any browser or mobile device will be looking up IPs via those names. And it's not like they're using CNAMEs that point to mixed-in names either.

You can block them selectively using proxies without issue.

As far as 1e100.net goes, I'm not sure where that would be applicable or why it's an issue.

Now, if you're looking to do that kind of selective blocking at the IP/netblock layer... good luck. That's a nearly impossible task since large network service providers like Google have so many network spaces coming and going and can move around their services as demand dictates. It's a core competency.

And that wouldn't matter whether they were all part of one domain or all kept separate.

Unless your strategy was to work out which netblocks each business unit's domains were authoritative for on reverse lookup and then block all of that.

But that's ... making a lot of assumptions.
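To be concrete: the membership test itself is trivial with Python's ipaddress module; the hard part is keeping the prefix list current. The prefixes below are illustrative placeholders, not Google's actual announcements, which is exactly the churn problem:

```python
import ipaddress

# Hypothetical snapshot of announced prefixes. These change constantly for
# a large provider, which is why IP-level blocking of one is so fragile.
blocked_prefixes = [
    ipaddress.ip_network("172.217.0.0/16"),
    ipaddress.ip_network("2607:f8b0::/32"),
]

def is_blocked(addr):
    """Return True if addr falls inside any prefix in the block list."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in blocked_prefixes)
```

The snapshot goes stale the moment the provider announces a new prefix or moves a service, so any real deployment needs continuous refresh from routing data.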


This has nothing to do with domain fronting at all, this is industry standard reverse DNS and PTR records for ARIN, RIPE, APNIC, AFRINIC, etc IP space.


Exactly this, exactly my issue.


Why do you even try to block using the rDNS name? It's literally backwards.


Because using Google's servers to bypass a government level block puts them in a bad spot legally, whereas they're not in a bad spot by making it hard for you to block specific services.



