
Blog

> Bottlecap Recycling
Posted by prox, from Charlotte, on March 19, 2011 at 18:17 local (server) time

Most plastic and glass bottles are recyclable nowadays.  Unfortunately, the plastic bottle caps are not as easily recyclable since they're made with a polymer called polypropylene.  While the resin code for many plastic bottles is 1 (polyethylene terephthalate), polypropylene's is 5, since it has a different melting point.  So, if you encounter a recycling bin that accepts resin code 1 but not 5, either hold on to the bottle cap or just toss it in the trash.

This is fine, but I'm a little curious about the little neck ring that stays on most plastic bottles after the cap is removed:

Bottle Cap Ring

The little plastic ring on most bottles appears to be made of the same plastic as the bottle cap itself.  Surely the environmentalists in the world don't expect everyone to pry off the little plastic ring and toss it separately, too?  If not, then it seems that most plastic bottle recycling systems are still being contaminated with a good amount of polypropylene.

I'll admit, I tried to remove the plastic ring from plastic bottles for a few days, and it was a fairly unpleasant experience when I didn't have scissors on hand.

What's the deal?

Disclaimer: I'm not "green" by any stretch of the imagination, but I don't think sensible recycling is too crazy.

Comments: 5
> ICMPv6
Posted by prox, from Charlotte, on March 09, 2011 at 21:23 local (server) time

I'm getting sick and tired of the "ooh, we'll make things more secure and block all ICMP" mentality, because it's stupid.  Folks who insist on this thought process may be able to slide by in the IPv4 world, but this non-logic causes tons of problems in the IPv6 world.  This is because ICMPv6 is used for path MTU discovery, among other things.  It's done between hosts with the aid of routers signalling the hosts when the MTU on the path is too small.  Here's a perfect example, Juniper Networks: http://ipv6.juniper.net/.  Juniper, I'm a big fan, but why'd you have to screw this up?  Didn't you learn anything from Brocade last year when they did the same thing?

Ok, here's the problem.  I've got a box (dax), which happens to be this webserver, on native IPv6 provided by Voxel dot Net with a few tunnels, one of which connects back home.  This tunnel connection has an IPv6 MTU of 1280.  So, most sites on the IPv6 Internet need to lower their MTU when serving pages and other content to my workstation.  What tells them to do this?  dax.  Its em0 interface facing Voxel's network sends out ICMPv6 packet too big messages telling sending hosts that they need to decrease their MTU to fit the packets down the tunnel.  Sometimes, it doesn't work, like with ipv6.juniper.net, apparently.  Here's what happens when I run wget from my workstation:

(destiny:20:41)% wget --tries=1 http://ipv6.juniper.net/
--2011-03-09 20:44:32--  http://ipv6.juniper.net/
Resolving ipv6.juniper.net... 2620:12::5
Connecting to ipv6.juniper.net|2620:12::5|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1295 (1.3K) [text/html]
Saving to: “index.html”

 0% [                                       ] 0           --.-K/s   in 1m 41s  

2011-03-09 20:46:14 (0.00 B/s) - Read error at byte 0/1295 (Connection reset by peer). Giving up.

wget connects, but doesn't get any content back, and gives up.  Here's some tcpdump output showing the relevant traffic in and out of dax's em0 interface during the above test:

Yep, 2001:48c8:1:2::2 (dax's em0) is trying to tell 2620:12::5 (ipv6.juniper.net) that the length of the TCP packets (1268 + 32 + 40 = 1340) is quite a bit larger than the MTU of the tunnel it's trying to forward the packets out of, which is 1280.  ipv6.juniper.net should really be reacting to this ICMPv6 message and lowering its path MTU to 1280, which would result in a TCP segment size of 1208.  Um, nope, we're not seeing this.  What it looks like is that Juniper is blocking these ICMPv6 messages or not statefully associating them with the TCP session.

So, what kind of firewall do we think Juniper is running: one of their new fancy SRX firewalls, or one of the older NetScreen, SSG, or ISG product lines?  Let's just say I know for a fact that the NetScreen product lines work great with IPv6 and don't exhibit these problems even if you only allow TCP port 80.  They associate the ICMPv6 packet too big messages with the TCP session and pass them back to the host statefully.  If this is an SRX bug, I'm really going to be annoyed, for a number of different reasons!
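Whatever the platform, the fix isn't complicated.  As a rough illustration (this is a generic Linux ip6tables sketch, not Juniper configuration, and the chain layout is made up), the idea is simply to permit the ICMPv6 error messages that path MTU discovery and general sanity depend on:

# permit the ICMPv6 error messages that PMTUD (and sane troubleshooting) need,
# both to the firewall itself and through it
ip6tables -A INPUT   -p ipv6-icmp --icmpv6-type packet-too-big          -j ACCEPT
ip6tables -A FORWARD -p ipv6-icmp --icmpv6-type packet-too-big          -j ACCEPT
ip6tables -A FORWARD -p ipv6-icmp --icmpv6-type destination-unreachable -j ACCEPT
ip6tables -A FORWARD -p ipv6-icmp --icmpv6-type time-exceeded           -j ACCEPT
ip6tables -A FORWARD -p ipv6-icmp --icmpv6-type parameter-problem       -j ACCEPT

A stateful firewall should go further and associate the packet too big messages with the TCP session they refer to, like the NetScreens do, but even a dumb permit beats dropping them on the floor.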

Hopefully this'll be fixed soon.  ICMPv6 silliness is no laughing matter!

Comments: 3
> Google and URLs
Posted by prox, from Charlotte, on February 21, 2011 at 13:44 local (server) time

I came across this story, today.  From the article:

Google is working on a “major” overhaul of its Chrome browser user interface (UI). Among the options on the table is the elimination of the URL bar, which could be the most significant UI change to the web browser since its invention.

When they first decided to omit the http:// from the URL shown in the address bar, I thought it was bad.  Now, I'm convinced Google is trying to make users stupid, just like Apple is doing with iOS, and partially with OS X.

So, in a decade or two when a good portion of the population doesn't know what a URL is anymore.. are they going to call it a website number?

Comments: 1
> IPv6 NAT+PAT.. maybe?
Posted by prox, from Charlotte, on February 20, 2011 at 22:48 local (server) time

Alright.. pick yourself off the floor.  YES, IPv6 and NAT+PAT (many folks call it just NAT or NAPT but I'm calling it NAT+PAT to be as precise as possible) are in the title.  Keep reading, don't be frightened.

NAT+PAT is a hack introduced in the mid-1990s to help combat IPv4 address exhaustion.  It became wildly popular in residences and large enterprises that needed to attach multiple computers to an Internet connection with only a single public (PA) IPv4 address.  Unfortunately it breaks end-to-end connectivity, which impacts many protocols, and provides some network administrators with a false sense of security.  Bottom line is that it sucks, and it needs to go away.

Enter IPv6, the new protocol with an almost limitless address space.  NAT+PAT won't be needed anymore with IPv6; it'll be a thing of the past!  Well, almost.  One accepted scenario where NAT+PAT will still be used is load balancing.  Although, some may not even view load balancing as actual NAT+PAT, but rather as a simple TCP proxy.  Call it what you will, but the real servers in a load-balanced environment (assuming no DSR) won't typically see the source address of the client hitting the VIP.  They'll see requests coming from address(es) on the load balancer itself.

So, other than load balancing, IPv6 NAT+PAT isn't needed, right?  Well, read on, because I'll present a somewhat common IPv6 dual-stack deployment that might be able to benefit from such a beast.

Foo and Associates

Enter a company called Foo and Associates.  This company doesn't exist, and I'm making all this stuff up.  However, the design may look strangely similar to other networks out there.  Here's a diagram to start you off:

Foo and Associates

The above diagram shows Foo and Associates' network, AS64499.  They've got chunks of IPv4 and IPv6 PI space and use 10.10/16 internally.  There are two data centers and a few branch offices with private (MPLS, leased lines, whatever) WAN connectivity back to the DCs.  The DCs have some users and servers, and the branch offices just have users.  Right now, the branch offices access the IPv4 Internet through either of the two DCs, and get translated (NAT+PAT) to the external address(es) of the Internet firewalls depicted in site 1 and site 2.

There is a "dirty" segment that the firewalls live on as well as the Internet routers, which maintain connections to a few transit providers.  The two Internet routers have a private WAN link between them to be used for redundancy in case the transit providers fail at one particular DC.  The following diagram shows both the IPv4 and IPv6 advertisements:

IPv4 and IPv6 Route Advertisements

Now let's talk about how IPv4 works.

The DC servers just talk to the Internet without NAT or PAT or any translation.  The Internet routers statically route the appropriate subnets to the DC firewalls, and everything works properly.  The IPv4 advertisements are tweaked (as you can see in the diagram) so each /23 worth of DC server space is routed symmetrically in a hot-potato fashion.

Internet access from the RFC 1918'ed campus networks goes through the Internet firewalls at each of the DCs.  An IPv4 default route is followed based on WAN link costs, and traffic is translated via NAT+PAT out the Internet firewalls.  If one DC dies, the branch offices that are sending Internet traffic to it just move over to the Internet firewall at the other DC.  Here's a diagram of the outbound IPv4 Internet access flow:

IPv4 Request

And here's one showing the return traffic:

IPv4 Response

Let's also say some of the WAN links have a bit more latency than the others, either due to fiber routes or the geographic locations of Foo and Associates' sites.  The internal routing always takes the path with the lowest latency to determine the best way to the Internet.  Sure, it could be EIGRP or it could be IS-IS with some mechanism to tune link costs based on latency.  The details aren't really important here.

Now, let's talk about what happens when Foo and Associates deploys IPv6.

After getting a /40 assignment from their local RIR, Foo and Associates deploys IPv6 right on top of their current infrastructure, essentially dual-stacking everything.  The first diagram shows all the IPv6 addressing, so you might want to look at that again.

However, instead of deaggregating and sending longer prefixes to the DFZ, they Go Green and only announce their /40 equally out of each data center.  The DC servers don't have a problem with this, as there are IPv6 static routes (and advertisements in IBGP, just like there were for the IPv4 specifics in the previous section) pointing from the Internet routers at each DC to the /48s of public space.  Egress traffic from the servers will exit locally, and ingress traffic can be received either locally or from the other DC, where it will be sent over the WAN link toward the correct site.  The DC firewall sees both traffic flows, and everything works as expected.

What doesn't work correctly is Internet access from the branch offices.  Since any part of the company could access the Internet via either of the Internet firewalls, the Internet routers have the whole /40 routed to the Internet firewalls for return traffic.  Now, keep in mind that Foo and Associates has stateful firewall policies allowing internally-initiated outbound Internet traffic, but not Internet-initiated inbound traffic. Here's the outbound IPv6 Internet access:

IPv6 Request

And illustrating the specific case where things break, here's the return traffic:

IPv6 Response (error)

Now you see the problem.  If Google decided to send traffic back to Foo and Associates via transit provider B, it would get lost.  The Internet firewall at site 2 doesn't have a record of the outgoing connection (SYN packet to [2001:4860:8008::68]:80), so it denies it, thinking it's invalid traffic.  Meanwhile, the Internet firewall at site 1 never sees any inbound traffic for the session it opened to [2001:4860:8008::68]:80.  Eventually, the user's browser at site 3 times out and falls back to IPv4.

Potential Fixes

There are a few potential fixes to the above problem.  Unfortunately, every one of them has some drawback.  Let's go over them individually.

1: Deaggregate

Foo and Associates could deaggregate and advertise a whole bunch of /48s in addition to (or in place of) its /40.  The /48s of the sites closest to site 1 would be advertised out of Internet router A and the /48s of the sites closest to site 2 would be advertised out of Internet router B.  This would allow the firewalls at each DC to see both the outbound and inbound traffic flow.  Sounds like it might work!

However, even if the ISPs accepted the longer prefixes (and their upstreams did as well), what if the latencies of the WAN links change one day, and sites that normally have their best path out of site 1 flip over to site 2?  Latencies of leased lines might not move around too much, but latencies of MPLS-provisioned L2VPNs might fluctuate if the provider is building out a new ring or making network changes.  So, again some sites might encounter the original problem and end up with asymmetric flows getting nuked by the firewalls.

2: Move the firewalls

An expensive way of solving this problem is to remove the Internet firewalls in the DCs and move them closer to the users.  This means that the campus networks at each of the DCs would have their own firewalls, and each of the branch offices would have theirs, as well.  Since there's only one way in and out from each of the campus networks, the problem goes away.

Unfortunately, even if firewalls grew on trees (or were free), there are some security implications with this design.  Essentially, Foo and Associates' private WAN becomes a public one.  Rogue Internet traffic can make its way into the WAN, even if it's blocked before it gets to the users.  A DDoS that might normally not cause the Internet transit links to blink could consume one of the WAN links easily, disrupting operations.  This option isn't too bad, but it's not going to fly if Foo and Associates happens to be a financial company.  Although, Foo and Associates could just ditch the WAN and move to Internet-based VPNs, but then QoS goes out the window for any VoIP-related things, MTU becomes an issue, and bleh.

3: Link the firewalls

If the Internet firewalls in each data center support high availability and state table sharing, it might be possible to link them together.  Some vendors say that this type of setup will work up to 100 ms but it typically needs two links, jumbo (>= 9100 bytes) frame support, and no VLAN tag manipulation.  So, Foo and Associates could buy two dedicated MPLS L2VPNs with jumbo frame support and a latency SLA that's below 100 ms.  This way, the firewalls could maintain a common state table and know about the outgoing state even if it's been built through a different firewall.

This might work, actually.  But is it a good idea?  Maybe, maybe not.  Session set-up time may be longer if the session table has to be updated synchronously.  What happens if the provider breaks their SLA and the latency exceeds 100 ms?  Session table disaster is the probable outcome.

4: Only use one site for IPv6 Internet

Another simple yet unattractive option is to tweak routing so only one DC is used for IPv6 Internet access.  If site 1 was chosen, the /40 advertisement out of site 2 would be prepended, and the BGP local preference attribute would be used to influence the exit path.  If site 1 drops off the map, site 2 could still be used as backup (since there would be no other routing announcements to choose from).  Note that this wouldn't affect the DC servers.
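To make the knobs concrete, here's a rough sketch of what the site 2 transit policy might look like in IOS/Quagga-style syntax (the route-map names and neighbor address are made up, and the address-family stanzas are omitted):

route-map TRANSIT-B-OUT permit 10
 set as-path prepend 64499 64499 64499
!
route-map TRANSIT-B-IN permit 10
 set local-preference 90
!
router bgp 64499
 neighbor 2001:db8:2::1 route-map TRANSIT-B-OUT out
 neighbor 2001:db8:2::1 route-map TRANSIT-B-IN in

The prepend makes the /40 heard via site 2 less attractive to the rest of the Internet, and the lowered local preference (the default being 100) keeps internally-originated traffic exiting at site 1 as long as it's up.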

The problem with this plan is obvious: latency.  Sites that have the shortest path to site 2 for Internet access would go through site 1, instead.  If site 4 is on the west coast and site 1 is on the east coast of the United States, this could get annoying.  What if site 4 was in San Francisco and site 1 was in New York?  Should the packets for UCLA's website over IPv6 really have to cross the country and back?  Yeesh.

5: Forget security

The title says it all.  Throw out those Internet firewalls.  Let's go back to the old days when there weren't any firewalls.  Seriously, if Foo and Associates is a college or university, this may be the way to go.  No firewalls, no session tracking, and no problems!

Although this may be a bit unorthodox in this day and age, a compromise might be to turn off SYN packet checking on the firewalls and allow traffic on ports above 1024 to generate sessions even if the 3-way handshake was never observed.  Or, replace the firewalls with routers that do stateless filtering to achieve the same goal.
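To give a rough idea of the stateless flavor (Linux ip6tables used purely for illustration, and 2001:db8:a500::/40 standing in for Foo and Associates' real /40), the compromise might boil down to something like this on the Internet-facing boxes:

# let return traffic to high ports in without requiring that the 3-way
# handshake was ever observed; other inbound TCP gets dropped
ip6tables -A FORWARD -d 2001:db8:a500::/40 -p tcp --dport 1024:65535 ! --syn -j ACCEPT
ip6tables -A FORWARD -d 2001:db8:a500::/40 -p tcp -j DROP

It's ugly, and it's roughly what stateless router ACLs have been doing for decades, but it sidesteps the asymmetry problem entirely.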

6: IPv6 NAT+PAT

Here it is, folks.  Add IPv6 NAT+PAT to the Internet access policy on the Internet firewalls in both DCs, and you're set.  IPv6 Internet access is then optimized for latency (well, as well as it was for IPv4) and the problem with return traffic disappears.  Here's a diagram:

IPv6 NAT+PAT Request

And the return path:

IPv6 NAT+PAT Response

Problems.. there are tons of them.  This breaks end-to-end connectivity, AGAIN.  It feels like the mid-1990s all over again.  IPv6 NAT+PAT isn't really implemented in many devices, yet, but it's implemented in OpenBSD's PF.  The simplicity of this solution is hard to deny, even with the problems (philosophical and other) it presents.
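For the curious, here's a minimal sketch of what that might look like in OpenBSD's pf.conf (4.7-and-later syntax; the interface name, external address, and the 2001:db8:a500::/40 prefix are all made up):

ext_if = "em0"
ext_v6 = "2001:db8:1::1"      # firewall's external IPv6 address

# translate outbound campus IPv6 traffic to the firewall's external address
match out on $ext_if inet6 from 2001:db8:a500::/40 to any nat-to $ext_v6
pass out on $ext_if inet6 keep state

A couple lines of configuration, and the return-traffic asymmetry problem is gone, along with end-to-end connectivity.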

And there you have it.  Are there solutions 7, 8, and 9?  I sure hope so.  Feedback, please.  I'd love to hear it.

Comments: 2
> Light Bulbs
Posted by prox, from Charlotte, on February 13, 2011 at 00:16 local (server) time

I was at the Home Depot picking up some miscellaneous tools, and decided I'd try out one of the new LED bulbs.  I've been using one of those horrid CFL bulbs for a while, and although I'm not completely dissatisfied with it, I've been looking for something more natural.

Before installing the LED bulb, I figured I'd do a tiny comparison between three bulbs.  Disclaimer: this is completely unscientific and flawed.  I'm pretty much comparing apples to oranges, but I just wanted to see how they match up.

I pointed my camera at one of my lamps, set the exposure to manual (f/8.0, 1 second exposure, ISO 200, 41 mm), and took a shot per bulb.  I also happen to have a Kill A Watt that I used to get the real-life power consumption of the lamp for each trial.

LED

The first test was the LED.  It's an EcoSmart A19 8.6 Watt (40-Watt equiv.) bulb that was roughly $18.  According to the Kill A Watt, it consumed 8.7 Watts.

CFL

The second test was the CFL.  It's a GE 13 Watt (60 Watt equiv.) that I don't think is sold anymore.  I got a couple of them a while back from Duke Power.  I had to let this thing sit for a couple minutes to warm up.  Kill A Watt indicated it consumed 10.2 Watts.

Incandescent

The last test was a plain ol' GE 40 Watt bulb.  41.4 Watts according to Kill A Watt.

So, which looks the best?  I picked the CFL.  Isn't that sad?

Bulbs

Anyway, on another topic.  Look at the above photo.  What is with the wasteful packaging on the single LED bulbs compared to the six pack of incandescents?  Think cardboard vs. cardboard and thick plastic.  Aren't the LED bulbs supposed to be green?  Silly and annoying.

Comments: 0
> Super Bowl XLV
Posted by prox, from Charlotte, on February 06, 2011 at 21:07 local (server) time

Rather than attending a Super Bowl party at a friend's house, I decided to stay in due to a cold (or just allergies?) that started attacking me last night.  I really haven't been paying much attention to the game, but did see the Volkswagen commercial and thoroughly enjoyed it.

End of Line

What I also saw, and did not enjoy, was the halftime show, put on by The Black Eyed Peas and Usher.  I'm so sick of The Black Eyed Peas, it's not even funny.  I watched most of it with the audio muted, and thinking the Tron theme wasn't looking too bad, I unmuted the audio.  That was a big mistake.  Maybe the whole thing would have been better if Daft Punk was performing DJ duties, but I have a feeling it still would have been sub-par.

Maybe next year will be better, but I doubt it!

Comments: 0
> DNS Load Balancing
Posted by prox, from Charlotte, on January 17, 2011 at 11:47 local (server) time

As mentioned previously, using off-net and far away DNS caches is a bad idea.

If you've forgotten why, it's because of DNS-based load-balancing, DNS-based GSLB (global server load-balancing), or just GSLB.  The technology goes by many names.

Essentially, it comes down to authoritative DNS servers providing different resource records based on the querying address of the DNS cache.  Based on either geolocation or latency, the authoritative server takes its best guess and returns resource records pointing to servers that it thinks are closest to your DNS cache.  Now, if you run your own DNS cache or use your ISP's DNS caches, this best guess will be just about right, or at least good enough.

However, if you're using an off-net DNS cache that may be far away, this whole system goes down in flames.  The authoritative DNS servers will return resource records that are appropriate for the DNS cache, but bad for you.  Instead of grabbing YouTube videos from something at a Google data center a couple hundred miles away, you'll be grabbing YouTube videos from a couple thousand miles away.  And, if you know anything about TCP, this is most likely going to be slower due to the increased latency.

Now, not everyone does this.  In fact, most small sites just use static resource records.  However, the big players and CDNs (Google, Yahoo, Apple, Akamai, etc.) all at least partially employ this type of technology.

Here's a really quick example.  From my server in NYC, let's see what we get for www.google.com. via the DNS cache running on the server itself:

(dax:11:33)% host -t A www.google.com. 127.0.0.1                                                
Using domain server:                            
Name: 127.0.0.1
Address: 127.0.0.1#53
Aliases: 

www.google.com is an alias for www.l.google.com.
www.l.google.com has address 173.194.33.104
(dax:11:33)% host -t A www.google.com. 127.0.0.1|grep 'has address'|awk '{ print $4; }'|fping -e       
173.194.33.104 is alive (0.78 ms)

With a little grep and awk action, it looks like I'm going to be connecting to Google servers that are fairly close, possibly in the same building (111 8th).  Now, let's hit one of Norton DNS' caches:

(dax:11:34)% host -t A www.google.com. 198.153.194.1                                                
Using domain server:                                
Name: 198.153.194.1
Address: 198.153.194.1#53
Aliases: 

www.google.com is an alias for www.l.google.com.
www.l.google.com has address 72.14.204.147
www.l.google.com has address 72.14.204.103
www.l.google.com has address 72.14.204.104
www.l.google.com has address 72.14.204.99
(dax:11:36)% host -t A www.google.com. 198.153.194.1|grep 'has address'|awk '{ print $4; }'|fping -e
72.14.204.147 is alive (7.12 ms)
72.14.204.103 is alive (6.95 ms)
72.14.204.104 is alive (7.40 ms)
72.14.204.99 is alive (6.84 ms)

Still decent, but it tossed me to a server that's probably far outside Manhattan.  One last test, let's try a DNS cache that is on the west coast of the United States:

(dax:11:36)% host -t A www.google.com. 216.240.130.2                                                
Using domain server:
Name: 216.240.130.2
Address: 216.240.130.2#53
Aliases: 

www.google.com is an alias for www.l.google.com.
www.l.google.com has address 66.102.7.99
www.l.google.com has address 66.102.7.104
(dax:11:38)% host -t A www.google.com. 216.240.130.2|grep 'has address'|awk '{ print $4; }'|fping -e             
66.102.7.99 is alive (87.4 ms)
66.102.7.104 is alive (87.6 ms)

87 msec.  Those servers are probably on the west coast, which is close to the DNS cache but not close to my server.  That'll certainly impact performance, if I wanted to hit GMail or watch some videos off YouTube.

So, it's a good idea to use a DNS cache that's close to you.  Some free DNS services like Google's may be anycasted enough that you're always likely to be using one that's close, latency-wise.  However, as a general rule of thumb, you're better off using a DNS cache that is local or on-net.  Some folks might say that it's better to use your ISP's DNS caches vs. your own, since more popular records will already be cached - this is a valid argument.  However, the same really can't be said about 3rd-party DNS caches, because even though popular queries may be answered quickly from cache, long-lived connections to servers based on the returned records (YouTube videos…) will be slower.

Anyway, I recently saw draft-vandergaast-edns-client-ip (version 01 when I looked at it), an IETF draft that uses an EDNS0 option to encode client address information in the DNS request.  Well, this certainly looks like it might solve all the above.  Section 4 of the draft pretty much sums it all up:

The edns-client-subnet extension allows DNS servers to propagate the network address of the client that initiated the resolution through to Authoritative Nameservers during recursion.

Servers that receive queries containing an edns-client-subnet option can generate answers based on the original network address of the client. Those answers will generally be optimized for that client and other clients in the same network.

The option also allows Authoritative Nameservers to specify the network range for which the reply can be cached and re-used.
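In other words, the DNS cache passes along roughly where the client actually is, and the authoritative server can tailor (and scope the caching of) its answer accordingly.  Assuming a resolver and a dig that grow support for the option (a hypothetical +subnet flag, purely for illustration), a test might eventually look something like this:

% dig @some-resolver www.google.com. A +subnet=198.51.100.0/24

The authoritative side would then answer as if the query had come from 198.51.100.0/24, not from wherever the cache happens to sit.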

Now, when are we going to get this stuff implemented?  Google, ISC?  You guys listening?

Comments: 0
> Ecdysis NAT64/DNS64
Posted by prox, from Charlotte, on January 01, 2011 at 12:29 local (server) time

I normally don't play around with such silliness, but this morning I figured I should try out the NAT64/DNS64 implementation by Ecdysis.  NAT64 is a simple way for IPv6-only clients to access IPv4 systems.  It's done with a combination of AAAA record synthesis and NAT.  For a review of the IPv6 transition mechanisms, see my prior blog entry.

So, I downloaded and booted their Linux live CD (basically a modified Fedora disc) in VMware Workstation, and set it up to use 2001:48c8:1:12f::/96.  It started Unbound and loaded nf_nat64 into the kernel with some address parameters.  I then pointed a static route to the VM and injected 2001:48c8:1:12f::/64 (eh, not like I was going to use the rest of the /64 for anything else) into BGP.  A couple DIGs verified that DNS64 was indeed working:

% dig @red slashdot.org. AAAA

; <<>> DiG 9.7.2-P3 <<>> @red slashdot.org. AAAA
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29892
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;slashdot.org.           IN   AAAA

;; ANSWER SECTION:
slashdot.org.       3600 IN   AAAA 2001:48c8:1:12f::d822:b52d

;; Query time: 733 msec
;; SERVER: 2001:48c8:1:105:250:56ff:fe1a:afaf#53(2001:48c8:1:105:250:56ff:fe1a:afaf)
;; WHEN: Sat Jan  1 12:15:13 2011
;; MSG SIZE  rcvd: 58

Slashdot is, at first glance, a bad example, because one would think that, being a tech news site, they'd actually publish an official AAAA record and be accessible over IPv6.  The truth is, they're even lagging behind CNN with such things, so the example is valid.

Anyway, it seems to work!

% telnet 2001:48c8:1:12f::d822:b52d 80
Trying 2001:48c8:1:12f::d822:b52d...
Connected to 2001:48c8:1:12f::d822:b52d.
Escape character is '^]'.
HEAD / HTTP/1.1
Host: slashdot.org
Connection: close

HTTP/1.1 200 OK
Server: Apache/1.3.42 (Unix) mod_perl/1.31
SLASH_LOG_DATA: shtml
X-Powered-By: Slash 2.005001305
X-Fry: These new hands are great. I'm gonna break them in tonight.
X-XRDS-Location: http://slashdot.org/slashdot.xrds
Cache-Control: no-cache
Pragma: no-cache
Content-Type: text/html; charset=iso-8859-1
Content-Length: 146625
Date: Sat, 01 Jan 2011 17:17:27 GMT
X-Varnish: 1317912096 1317911450
Age: 47
Connection: close

Connection closed by foreign host.

And after adding a default route to the Ecdysis VM and poking a few holes in my PF rules, NAT64 works off-net, too.  The traceroutes are a little ridiculous, though:

core1.nyc1.he.net> traceroute ipv6 2001:48c8:1:12f::d822:b52d

Tracing the route to IPv6 node  from 1 to 30 hops

  1     1 ms   <1 ms   <1 ms 2001:504:1::a502:9791:1
  2     5 ms   <1 ms   <1 ms 0.ae1.tsr1.lga5.us.voxel.net [2001:48c8::822]
  3    <1 ms   <1 ms   <1 ms 0.ae2.csr2.lga6.us.voxel.net [2001:48c8::82e]
  4    <1 ms   <1 ms   <1 ms em0.dax.prolixium.net [2001:48c8:1:2::2]
  5    34 ms   41 ms   33 ms si3.starfire.prolixium.net [2001:48c8:1:1ff::1a]
  6    32 ms   36 ms   33 ms red.prolixium.com [2001:48c8:1:105:250:56ff:fe1a:afaf]
  7    34 ms   40 ms   33 ms 2001:48c8:1:12f::a03:5fe
  8    38 ms   42 ms   50 ms 2001:48c8:1:12f::a03:fd02
  9    57 ms   50 ms   48 ms 2001:48c8:1:12f::ac9:4001
 10    59 ms   56 ms   45 ms 2001:48c8:1:12f::184a:fe34
 11    56 ms   50 ms   50 ms 2001:48c8:1:12f::185d:4017
 12    55 ms   49 ms   54 ms 2001:48c8:1:12f::426d:652
 13    56 ms   60 ms   52 ms 2001:48c8:1:12f::426d:6ab
 14    58 ms   48 ms   50 ms 2001:48c8:1:12f::43b:c15
 15    56 ms   49 ms   54 ms 2001:48c8:1:12f::445:9608
 16   144 ms   50 ms   49 ms 2001:48c8:1:12f::d0aa:1751
 17    56 ms   51 ms   55 ms 2001:48c8:1:12f::cc46:c802
 18   110 ms   96 ms   89 ms 2001:48c8:1:12f::cc46:c4f2
 19    87 ms   80 ms   92 ms 2001:48c8:1:12f::cc46:c37a
 20   105 ms   87 ms   89 ms 2001:48c8:1:12f::4025:cfce
 21   105 ms   88 ms  113 ms 2001:48c8:1:12f::401b:a0c6
 22   106 ms   88 ms   87 ms 2001:48c8:1:12f::d822:b52d

The last 32 bits of the IPv6 address of each hop equate to the IPv4 address.  For example, if you take hop 17 and translate it, something meaningful is displayed:

% ping -c1 0xcc46c802  
PING 0xcc46c802 (204.70.200.2) 56(84) bytes of data.
64 bytes from 204.70.200.2: icmp_req=1 ttl=246 time=31.9 ms

--- 0xcc46c802 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 31.946/31.946/31.946/0.000 ms
% host 204.70.200.2   
2.200.70.204.in-addr.arpa domain name pointer cr2-te-0-0-0-0.atlanta.savvis.net.
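Incidentally, the same conversion works with plain shell printf, if the ping trick feels too clever:

% printf '%d.%d.%d.%d\n' 0xcc 0x46 0xc8 0x02
204.70.200.2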

After playing with this for a while, it suddenly stopped working, though:

Ecdysis Panic

Oh well, they've got some bugs to fix in nf_nat64, I suppose.

Anyway, except for that panic, the Ecdysis software seems to work pretty well.  It'll be nice once it starts to be included in the package systems of major Linux distributions.

Oh, and.. Happy New Year!

Comments: 0
