As mentioned previously, using off-net and far away DNS caches is a bad idea.
If you've forgotten why, it's because of DNS-based load balancing, also called DNS-based GSLB (global server load balancing) or just GSLB; the technology goes by many names.
Essentially, it comes down to authoritative DNS servers providing different resource records based on the querying address of the DNS cache. Based on either geolocation or latency, the authoritative server takes its best guess and returns resource records pointing to servers that it thinks are closest to your DNS cache. Now, if you run your own DNS cache or use your ISP's DNS caches, this best guess will be just about right, or at least good enough.
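To make the mechanism concrete, here's a minimal sketch of the decision an authoritative GSLB server makes. The prefix table and all addresses are hypothetical (taken from the RFC 5737 documentation ranges); real systems use geolocation or latency measurement databases, not a hard-coded map. The key point is that the lookup key is the *resolver's* address, not yours:

```python
import ipaddress

# Hypothetical resolver-prefix -> "nearby" A records table.
# Real GSLB systems derive this from geolocation/latency data.
RESOLVER_MAP = {
    ipaddress.ip_network("198.51.100.0/24"): ["192.0.2.10"],  # e.g. east-coast servers
    ipaddress.ip_network("203.0.113.0/24"):  ["192.0.2.20"],  # e.g. west-coast servers
}
DEFAULT_RECORDS = ["192.0.2.30"]  # fallback when the resolver is unrecognized


def answer_for(resolver_ip):
    """Pick A records based on the querying resolver's address --
    the authoritative server never sees the end client's address."""
    addr = ipaddress.ip_address(resolver_ip)
    for net, records in RESOLVER_MAP.items():
        if addr in net:
            return records
    return DEFAULT_RECORDS
```

If your resolver falls in a prefix the GSLB maps to a far-away region, every client behind it gets far-away answers, which is exactly the failure mode described below.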
However, if you're using an off-net DNS cache that may be far away, this whole system goes down in flames. The authoritative DNS servers will return resource records that are appropriate for the DNS cache, but bad for you. Instead of grabbing YouTube videos from something at a Google data center a couple hundred miles away, you'll be grabbing YouTube videos from a couple thousand miles away. And, if you know anything about TCP, this is most likely going to be slower due to the increased latency.
Now, not everyone does this. In fact, most small sites just use static resource records. However, the big players and CDNs (Google, Yahoo, Apple, Akamai, etc.) all at least partially employ this type of technology.
Here's a really quick example. From my server in NYC, let's see what we get for www.google.com. via the DNS cache running on the server itself:
(dax:11:33)% host -t A www.google.com. 127.0.0.1
Using domain server:
Name: 127.0.0.1
Address: 127.0.0.1#53
Aliases:

www.google.com is an alias for www.l.google.com.
www.l.google.com has address 173.194.33.104
(dax:11:33)% host -t A www.google.com. 127.0.0.1 | grep 'has address' | awk '{ print $4; }' | fping -e
173.194.33.104 is alive (0.78 ms)
With a little grep and awk action, it looks like I'm going to be connecting to Google servers that are fairly close, possibly in the same building (111 8th). Now, let's hit one of Norton DNS' caches:
(dax:11:34)% host -t A www.google.com. 198.153.194.1
Using domain server:
Name: 198.153.194.1
Address: 198.153.194.1#53
Aliases:

www.google.com is an alias for www.l.google.com.
www.l.google.com has address 72.14.204.147
www.l.google.com has address 72.14.204.103
www.l.google.com has address 72.14.204.104
www.l.google.com has address 72.14.204.99
(dax:11:36)% host -t A www.google.com. 198.153.194.1 | grep 'has address' | awk '{ print $4; }' | fping -e
72.14.204.147 is alive (7.12 ms)
72.14.204.103 is alive (6.95 ms)
72.14.204.104 is alive (7.40 ms)
72.14.204.99 is alive (6.84 ms)
Still decent, but it tossed me to servers that are probably well outside Manhattan. One last test: let's try a DNS cache on the west coast of the United States:
(dax:11:36)% host -t A www.google.com. 216.240.130.2
Using domain server:
Name: 216.240.130.2
Address: 216.240.130.2#53
Aliases:

www.google.com is an alias for www.l.google.com.
www.l.google.com has address 66.102.7.99
www.l.google.com has address 66.102.7.104
(dax:11:38)% host -t A www.google.com. 216.240.130.2 | grep 'has address' | awk '{ print $4; }' | fping -e
66.102.7.99 is alive (87.4 ms)
66.102.7.104 is alive (87.6 ms)
87 msec. Those servers are probably on the west coast, which is close to the DNS cache but not close to my server. That would certainly impact performance if I wanted to hit Gmail or watch some videos on YouTube.
So, it's a good idea to use a DNS cache that's close to you. Some free DNS services like Google may be anycasted widely enough that you're always likely to be using one that's close, latency-wise. As a general rule of thumb, though, you're better off using a DNS cache that is local or on-net. Some folks might say it's better to use your ISP's DNS caches rather than your own, since popular records will already be cached; that's a valid argument. However, the same really can't be said for 3rd-party DNS caches: even though the queries themselves may come back quickly, long-lived connections to the servers in the returned records (YouTube videos…) will be slower.
Anyway, I recently saw draft-vandergaast-edns-client-ip (version 01 when I looked at it), an IETF draft that uses an EDNS0 option to encode client address information in the DNS request. Well, this certainly looks like it might solve all the above. Section 4 of the draft pretty much sums it all up:
The edns-client-subnet extension allows DNS servers to propagate the network address of the client that initiated the resolution through to Authoritative Nameservers during recursion.
Servers that receive queries containing an edns-client-subnet option can generate answers based on the original network address of the client. Those answers will generally be optimized for that client and other clients in the same network.
The option also allows Authoritative Nameservers to specify the network range for which the reply can be cached and re-used.
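For a feel of what this looks like on the wire, here's a minimal sketch of encoding the option described in the draft (FAMILY, SOURCE NETMASK, SCOPE NETMASK, then the address truncated to the significant octets), for IPv4 only. The option code 8 used here is the value eventually assigned by IANA; draft-era experimental deployments used a different, private code point, so treat the constant as an assumption:

```python
import struct

def client_subnet_option(ip, prefix_len):
    """Encode an edns-client-subnet EDNS0 option for an IPv4 client subnet.
    Wire format per the draft: FAMILY (2 bytes), SOURCE NETMASK (1),
    SCOPE NETMASK (1, zero in queries), then the address truncated to
    ceil(prefix_len / 8) octets."""
    family = 1  # IANA address family 1 = IPv4
    octets = bytes(int(o) for o in ip.split("."))
    addr = octets[: (prefix_len + 7) // 8]  # only significant octets are sent
    data = struct.pack("!HBB", family, prefix_len, 0) + addr
    # OPTION-CODE 8 (assumed; see lead-in), then OPTION-LENGTH, then the data
    return struct.pack("!HH", 8, len(data)) + data
```

A recursive resolver would place this option in the OPT pseudo-RR of its upstream query, letting the authoritative server answer for the client's subnet instead of the resolver's address.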
Now, when are we going to get this stuff implemented? Google, ISC? You guys listening?