Blog

> L2 VPN over MPLS over GRE over IPsec on a Juniper SRX!
Posted by prox, from North Brunswick, on November 20, 2012 at 21:28 local (server) time

Yep, it's a mouthful.  However, it works and is very useful.

Traditionally, Juniper's ScreenOS line of firewalls (NetScreen, SSG, and ISG) has not supported any sort of L2 VPN.  I'm referring specifically to connecting two LANs over an IPsec connection without the need for proxy ARP, static NAT, or any L3 hops.  This has been a severe limitation, considering that other vendors such as Cisco Systems have supported this type of feature on their ASA product line for quite a while.  It's also been supported in various forms on GNU/Linux systems, although the easiest implementation is OpenVPN with the oh-so-awesome TUN/TAP driver.

That being said, Juniper's SRX product line supports a couple of forms of L2 VPN over IPsec.

A couple of weeks ago I worked with our Juniper Networks resident engineer to configure an SRX240 with one of the L2 VPN configurations.  We got a pair of SRX240 firewalls connected back to back in the lab and configured two types of L2 VPNs over them: VPLS and a Martini-style pseudowire.

The initial configuration we started with was a bit complex.  It was based on this example from Juniper, but we quickly changed it around and removed the need for the logical tunnel interfaces.

We tested the configuration with VPLS on two SRX240s cabled together and the BreakingPoint network testing appliance configured downstream.  The "routing robot" test components were used in UDP mode, and we achieved roughly 70 Mbps of IMIX traffic with packet sizes ranging between 64 and 1400 bytes to avoid fragmentation.  The SPUs on the SRX240s were near 90-95% utilization.  There was no packet loss, though!

Unfortunately, since the BGP-based VPLS signaling requires BGP (duh), some odd combination of IPsec IKE negotiations and BGP caused the complete tunnel setup time to be longer than expected.  It was on the order of minutes at one point.  For our purposes, since we only need one port on either side, we decided to scrap VPLS and use an l2circuit instead.

The l2circuit does not require BGP: the pseudowire is signaled over a targeted LDP session, with OSPF just providing reachability between the loopbacks.  The BGP process was deleted and, as a result, the setup time appeared to decrease slightly.  We also bumped the L3 MTU on the downstream physical interfaces up to 1600 in order to adhere to our standard MTU for WAN interfaces (hence the 1614-byte device MTU below - 1600 bytes plus the 14-byte Ethernet header), knowing full well that this would result in fragmentation since the Internet-facing interface is set to 1500.  We ended up with a configuration similar to the following:

ge-0/0/0 {
    mtu 1614;
    encapsulation ethernet-ccc;
    unit 0 {
        family ccc {
            filter {
                input CCC-packet-mode;
            }
        }
    }
}
gr-0/0/0 {
    unit 0 {
        tunnel {
            source 10.0.0.130;
            destination 10.0.0.129;
        }
        family inet {
            mtu 9000;
            address 10.0.0.134/30;
        }
        family mpls {
            mtu 9000;
            filter {
                input MPLS-packet-mode;
            }
        }
    }
}
ge-0/0/15 {
    unit 0 {
        family inet {
            address 192.0.2.10/30;
        }
    }
}
lo0 {
    unit 0 {
        family inet {
            filter {
                input protect-re;
            }
            address 10.0.0.2/32;
            address 127.0.0.1/32;
        }
    }
}
st0 {
    unit 0 {
        family inet {
            address 10.0.0.130/30;
        }
    }
}
static {
    route 0.0.0.0/0 next-hop 192.0.2.9;
}
protocols {
    mpls {
        interface gr-0/0/0.0;
    }
    ospf {
        area 0.0.0.0 {
            interface lo0.0 {
                passive;
            }
            interface gr-0/0/0.0;
        }
    }
    ldp {
        interface gr-0/0/0.0;
        interface lo0.0;
    }
    l2circuit {
        neighbor 10.0.0.1 {
            interface ge-0/0/0.0 {
                virtual-circuit-id 100000;
                encapsulation-type ethernet;
            }
        }
    }
}
firewall {
    family mpls {
        filter MPLS-packet-mode {
            term all-traffic {
                then {
                    packet-mode;
                    accept;
                }
            }
        }
    }
    family ccc {
        filter CCC-packet-mode {
            term 1 {
                then {
                    packet-mode;
                    accept;
                }
            }
        }
    }
}

I didn't include the security section or the protect-re firewall filter in the above configuration.  The security section consists of an IPsec VPN bound to st0.0 plus the associated zones and policies; st0.0, gr-0/0/0.0, and lo0.0 were all put in the same zone with permissive policies.  Also, the "family ccc" under the firewall section is new to me - I wasn't aware that the ccc family existed prior to working on this!  It's apparently used here to instruct the SRX to process frames at L2 while bypassing the flow module.
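Since I left it out, here's a skeleton of what that security section looks like.  The proposal, policy, and VPN names are placeholders (as is the pre-shared key), but the algorithms line up with the SA output further down:

security {
    ike {
        proposal IKE-AES256-SHA1 {
            authentication-method pre-shared-keys;
            dh-group group2;
            authentication-algorithm sha1;
            encryption-algorithm aes-256-cbc;
        }
        policy IKE-POL {
            mode main;
            proposals IKE-AES256-SHA1;
            pre-shared-key ascii-text "placeholder";
        }
        gateway IKE-GW {
            ike-policy IKE-POL;
            address 192.0.2.127;
            external-interface ge-0/0/15.0;
        }
    }
    ipsec {
        proposal ESP-AES256-SHA1 {
            protocol esp;
            authentication-algorithm hmac-sha1-96;
            encryption-algorithm aes-256-cbc;
        }
        policy IPSEC-POL {
            proposals ESP-AES256-SHA1;
        }
        vpn L2VPN-TRANSPORT {
            bind-interface st0.0;
            ike {
                gateway IKE-GW;
                ipsec-policy IPSEC-POL;
            }
            establish-tunnels immediately;
        }
    }
    zones {
        security-zone VPN {
            host-inbound-traffic {
                system-services {
                    all;
                }
                protocols {
                    all;
                }
            }
            interfaces {
                st0.0;
                gr-0/0/0.0;
                lo0.0;
            }
        }
    }
}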

Surprisingly, after testing with large packets (up to the MTU of the interface), everything worked well, even with the fragmentation.  The speed with IMIX was roughly the same.  We then put this into production and everything has been working well for about a week now!  In fact, the final setup involves running MPLS over this whole setup, so we've got an additional layer of MPLS to add to the fun.
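For the curious, here's roughly where the per-packet overhead goes on the wire.  The byte counts are approximate, since ESP padding varies with packet size:

  1600  L3 packet from the downstream device (our WAN MTU)
 +  14  Ethernet header carried across the pseudowire (hence the 1614 CCC MTU)
 +   4  pseudowire control word (negotiated, per the l2circuit output below)
 + 4-8  MPLS label stack (VC label, plus a transport label if it isn't popped)
 +  24  GRE header plus the outer IPv4 header of the gr-0/0/0 tunnel
 + ~58  ESP in tunnel mode (new IP header, SPI/sequence, IV, padding, ICV)
 -----
 ~1704  on the wire, versus the 1500-byte Internet-facing interface - hence the fragmentation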

Here's what things ultimately look like:

prox@srx240> show security ipsec security-associations
  Total active tunnels: 1
  ID    Algorithm       SPI      Life:sec/kb  Mon vsys Port  Gateway
  <131043 ESP:aes-256/sha1 cf2f7c23 1377/ unlim -  root 500   192.0.2.127
  >131043 ESP:aes-256/sha1 4cdcd526 1377/ unlim -  root 500   192.0.2.127

prox@srx240> show ospf neighbor
Address          Interface              State     ID               Pri  Dead
10.0.0.133       gr-0/0/0.0             Full      10.0.0.1         128    35

prox@srx240> show ldp session
  Address           State        Connection     Hold time
10.0.0.1            Operational  Open             20

prox@srx240> show l2circuit connections
Layer-2 Circuit Connections:

Legend for connection status (St)
EI -- encapsulation invalid      NP -- interface h/w not present
MM -- mtu mismatch               Dn -- down
EM -- encapsulation mismatch     VC-Dn -- Virtual circuit Down
CM -- control-word mismatch      Up -- operational
VM -- vlan id mismatch		 CF -- Call admission control failure
OL -- no outgoing label          IB -- TDM incompatible bitrate
NC -- intf encaps not CCC/TCC    TM -- TDM misconfiguration
BK -- Backup Connection          ST -- Standby Connection
CB -- rcvd cell-bundle size bad  SP -- Static Pseudowire
LD -- local site signaled down   RS -- remote site standby
RD -- remote site signaled down  XX -- unknown

Legend for interface status
Up -- operational
Dn -- down
Neighbor: 10.0.0.1
    Interface                 Type  St     Time last up          # Up trans
    ge-0/0/0.0(vc 100000)     rmt   Up     Nov 12 15:49:15 2012           1
      Remote PE: 10.0.0.1, Negotiated control-word: Yes (Null)
      Incoming label: 299776, Outgoing label: 299776
      Negotiated PW status TLV: No
      Local interface: ge-0/0/0.0, Status: Up, Encapsulation: ETHERNET

prox@srx240>

We're probably going to use this solution again in the future, since it happens to work better than other solutions (proxy ARP, etc.).  However, it's unfortunate that we can't get more than roughly 70 Mbps over the connection.  I suspect an SRX650 can do much better.

Comments: 0
> DNC in Charlotte
Posted by prox, from Charlotte, on September 08, 2012 at 14:58 local (server) time

The DNC in Charlotte has finally come and gone.  It was fairly anticlimactic.

If you're outside the United States or have been hiding under a rock for the last few weeks: the RNC was held in Tampa, FL during August 27-30, 2012, and the DNC was held in Charlotte, NC during September 4-6, 2012.  Other than having to avoid uptown completely for a week, it really didn't impact me at all.  Actually, my commute was faster, for some reason.  I'm guessing lots of folks avoided commuting altogether or got the week off (ahem.. Wells Fargo).

The Obama family apparently stayed in the Ballantyne Hotel, which is just down the road from me on the southern perimeter of Charlotte.  As a result, some of the roads in the area were closed, including Ballantyne Commons Parkway:

Ballantyne Road Closures

The above image is from an article in the Charlotte Observer.

Here's a photo of Ballantyne Commons Parkway at Johnston Rd:

Ballantyne Commons Parkway and Johnston Rd

Here's a shot of the barricade consisting of dump trucks at the south entrance to the Ballantyne Hotel and Ballantyne Commons Parkway facing west:

Barricade at Ballantyne Commons Parkway

I suppose folks who took Community House Road to Ballantyne Commons Parkway during their daily commute may have suffered from some delays.  I was lucky enough to not even have to traverse Johnston Road to exit Ballantyne!

There is some leftover infrastructure still in the uptown area from the DNC.  Today, I saw a temporary cellular tower in the vicinity of Stonewall and Brevard.  I didn't bother checking what carriers it provided:

Cellular Tower in Uptown Charlotte

Comments: 0
> Sites That SHOULD Have AAAAs
Posted by prox, from Charlotte, on July 31, 2012 at 20:55 local (server) time

Although World IPv6 Launch has come and gone, there are some high-profile sites that still haven't published AAAA records.  In my opinion, some of these sites really should be ashamed of themselves, at this point:

Slashdot:

% ipin slashdot.org.
  A record #1
4 Address: 216.34.181.45
4 PTR: slashdot.org.
4 Prefix: 216.32.0.0/14
4 Origin: AS3561 [SAVVIS - Savvis]
% ipin www.slashdot.org.
  A record #1
4 Address: 216.34.181.48
4 PTR: star.slashdot.org.
4 Prefix: 216.32.0.0/14
4 Origin: AS3561 [SAVVIS - Savvis]

Slashdot used to be a geeky news aggregation site back in the late 90s and early 2000s.  In the last several years it has lost a little bit of its steam but still provides links to some interesting articles and attracts humorous comments.  All the geeks out there should really be wondering why AOL is dual-stacked but Slashdot isn't.  I'm sure Savvis offers IPv6 transit.

Massachusetts Institute of Technology (MIT):

% ipin mit.edu.
  A record #1
4 Address: 18.9.22.69
4 PTR: WEB.MIT.EDU.
4 Prefix: 18.0.0.0/8
4 Origin: AS3 [MIT-GATEWAYS - Massachusetts Institute of Technology]
% ipin www.mit.edu.
  A record #1
4 Address: 18.9.22.169
4 PTR: WWW.MIT.EDU.
4 Prefix: 18.0.0.0/8
4 Origin: AS3 [MIT-GATEWAYS - Massachusetts Institute of Technology]

For an institution that operates AS3 and has a worldwide reputation as an excellent science and technology school, it's a bit disappointing that MIT doesn't have an IPv6-accessible website.

CERN:

% ipin cern.ch.
  A record #1
4 Address: 137.138.144.169
4 PTR: webr8.cern.ch.
4 Prefix: 137.138.0.0/16
4 Origin: AS513 [CERN CERN - European Organization for Nuclear Research]
% ipin www.cern.ch.
  A record #1
4 Address: 137.138.144.168
4 PTR: webr7.cern.ch.
4 Prefix: 137.138.0.0/16
4 Origin: AS513 [CERN CERN - European Organization for Nuclear Research]

The first web server software, CERN httpd, was written at, well, CERN.  It's 2012; the nuclear research organization's website should really be dual-stacked by now.  Even more embarrassing is that the CERN website is apparently powered by Microsoft IIS!

Apple:

% ipin www.apple.com.
  A record #1
4 Address: 72.246.213.15
4 PTR: a72-246-213-15.deploy.akamaitechnologies.com.
4 Prefix: 72.246.212.0/22
4 Origin: AS16625 [AKAMAI-ASN1 Akamai Technologies European AS]
% ipin apple.com.    
  A record #1
4 Address: 17.172.224.47
4 PTR: apple.com.
4 PTR: itunesops.com.
4 PTR: st11p01ww-apple.apple.com.
4 PTR: asto.re.
4 Prefix: 17.168.0.0/13
4 Origin: AS714 [APPLE Apple Inc]
  A record #2
4 Address: 17.149.160.49
4 PTR: mammals.org.
4 PTR: myapple.net.
...
4 PTR: mach-os.net.
4 PTR: macmate.com.
4 Prefix: 17.148.0.0/14
4 Origin: AS714 [APPLE Apple Inc]

Apple really needs to get their act together with IPv6.  OS X has suffered from address selection and happy eyeballs problems for the last few releases (maybe they're finally fixed in 10.8) and only recently gained a DHCPv6 client.  iOS doesn't seem much better, either.  Apple dual-stacked their site for World IPv6 Day back in 2011, but that's about it.  It really shouldn't be that difficult to get IPv6 accessibility for their www CNAME, since it's already powered by Akamai.

Twitter:

% ipin twitter.com.
  A record #1
4 Address: 199.59.149.230
4 PTR: www4.twitter.com.
4 Prefix: 199.59.148.0/22
4 Origin: AS13414 [TWITTER-NETWORK - Twitter Inc.]
  A record #2
4 Address: 199.59.148.10
4 PTR: r-199-59-148-10.twttr.com.
4 Prefix: 199.59.148.0/22
4 Origin: AS13414 [TWITTER-NETWORK - Twitter Inc.]
  A record #3
4 Address: 199.59.148.82
4 PTR: r-199-59-148-82.twttr.com.
4 Prefix: 199.59.148.0/22
4 Origin: AS13414 [TWITTER-NETWORK - Twitter Inc.]
% ipin www.twitter.com.
  A record #1
4 Address: 199.59.148.82
4 PTR: r-199-59-148-82.twttr.com.
4 Prefix: 199.59.148.0/22
4 Origin: AS13414 [TWITTER-NETWORK - Twitter Inc.]
  A record #2
4 Address: 199.59.149.230
4 PTR: www4.twitter.com.
4 Prefix: 199.59.148.0/22
4 Origin: AS13414 [TWITTER-NETWORK - Twitter Inc.]
  A record #3
4 Address: 199.59.148.10
4 PTR: r-199-59-148-10.twttr.com.
4 Prefix: 199.59.148.0/22
4 Origin: AS13414 [TWITTER-NETWORK - Twitter Inc.]

Twitter has been the odd one out among social networking sites when it comes to IPv6.  Facebook and Google have both been big proponents of IPv6, but Twitter has shown no interest in it.  Their autonomous system isn't even advertising their IPv6 prefix (2620:fe::/48) yet!
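Incidentally, you don't need ipin to check any of this.  A dig query that comes back empty means no AAAA is published:

% dig +short twitter.com. aaaa
%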

Comments: 0
> iOS Battery Level via CLI
Posted by prox, from Charlotte, on July 29, 2012 at 23:00 local (server) time

If you've got a jailbroken iOS device and have been wondering how to view the battery level and statistics via the CLI, look no further!  powerlog is apparently an easy way of doing this:

anubis:~ root# powerlog -B
07/29/12 22:50:54 [Battery] level=40.70%; voltage=3815 mV; current=447 mA; max_capacity=10586; charging_state=Inactive; charging_current=500 mA; battery_temp=27.70 C; adapter_info=4000; connected_status=1; usage=86:24:56; standby=86:24:56;
07/29/12 22:51:02 [Battery] level=40.72%; voltage=3815 mV; current=402 mA; max_capacity=10586; charging_state=Inactive; charging_current=500 mA; battery_temp=27.70 C; adapter_info=4000; connected_status=1; usage=86:25:03; standby=86:25:03;

powerlog is stored in /usr/bin and comes on every iOS device, as far as I can tell.  There are a few other options to it, but -B does the trick as far as I'm concerned.  I find this useful mostly due to laziness...

Comments: 0
> Updates, IPv6, virtualization, etc.
Posted by prox, from Charlotte, on July 18, 2012 at 22:16 local (server) time

It's official: I am failing at my New Year's resolutions.  I was supposed to blog and tweet more while simultaneously posting on Facebook less often.  It turns out I've been neglecting them all fairly equally, unfortunately.  Anyway, a few topics and updates are below!

Network Virtualization

Over the past several months I've been using VirtualBox more extensively, specifically VBoxHeadless.  I first started using it because it seems to be the only hypervisor for FreeBSD that can take advantage of hardware acceleration with VT-x/AMD-V.  I've now got four virtual machines running on dax, including a Cisco ASA (it runs great!) so I can play with some stuff.  On other systems where I have the option of KVM or VMware, I've started using VirtualBox due to the multitude of system and networking options (the VBoxManage syntax seems to go on without end!).

I also recently converted part of my lab network from VMware Server 2.0 to VirtualBox.  I only converted my end hosts and an x86 RouterOS box; the rest of the IOS and Junos instances stayed the same (Dynamips and KVM, respectively).  Anyway, it was an upgrade that was long overdue.  VMware Server was really a pain in the rear, since the console required a web browser running on Windows.  I ended up flattening the VMDK files first and then converting them to VDI files using the clonehd function of VBoxManage.
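The conversion itself is only a couple of commands per disk (the file names below are made up):

# flatten a split/sparse VMware disk into a single growable VMDK
vmware-vdiskmanager -r lab-host.vmdk -t 0 lab-host-flat.vmdk

# convert the flattened VMDK to a native VDI
VBoxManage clonehd lab-host-flat.vmdk lab-host.vdi --format VDI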

The only snag I've found so far with VirtualBox is its lack of coexistence with KVM.  VirtualBox just doesn't work with KVM, whereas VMware Server did just fine.  Sure, I could use Qemu with KVM, but it's dog slow.  A sufficient workaround is to disable hardware virtualization for the VirtualBox instances so they won't conflict with KVM.  I did this with the following:

VBoxManage modifyvm myvn --hwvirtex off

This doesn't really slow things down too much.  In fact, I haven't really noticed it at all.  One thing it does do is prevent the use of 64-bit guests, which is a bummer.  I had to reinstall one of my FreeBSD VMs, as a result.
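To check whether a particular VM still has hardware virtualization enabled, VBoxManage showvminfo prints the setting (the exact field label varies a bit between VirtualBox versions):

VBoxManage showvminfo myvn | grep -i virt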

IPv6 Multicast Issues

I've finally started to nail down a weird IPv6 Wi-Fi issue I've been having for several months (maybe years?) with at least three different WAPs.

All IPv6 address configuration on my home network is SLAAC at the moment, and Wi-Fi authentication is WPA2-PSK.  Over time, Wi-Fi clients will randomly lose their IPv6 default route or EUI-64 addresses.  This happens across a variety of operating systems; it's not limited to one particular OS or platform.  On Linux I have tried using rdisc6 (part of the ndisc6 package in Debian and Ubuntu) to force the upstream router to send a router advertisement message.  This never worked once the problem started happening; the fix is always to bounce the Wi-Fi interface on the client, which seems to work every time.

Today I ran some tcpdumps and verified that the router advertisement messages were being seen by wired clients on the same VLAN, but not the Wi-Fi clients.  On my Cisco 1142 WAP I started clearing things until I got to "clear interface dot11radio*" and things started working again.  It appears the wireless interfaces on my WAP are eating IPv6 router advertisement messages (sent to ff02::1 - all nodes, with a multicast MAC of 33:33:00:00:00:01) but NOT router solicitation messages (sent to ff02::2 - all routers, with a multicast MAC of 33:33:00:00:00:02).  I'd initially say that this is a Cisco bug related to the IOS version that's on the WAP, currently 12.4(23c)JA, but since this problem has shown up on two other WAPs (a Cisco 1200 and a NETGEAR WAG102) over time, I'm now wondering if it's a WPA2 group re-key issue or something like that, since it only seems to affect multicast.  It's very specific about the addresses, though, which is puzzling.
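For reference, this is the sort of capture that shows the difference.  ICMPv6 types 134 and 133 are router advertisements and router solicitations, respectively; the interface name is just an example:

tcpdump -vni eth0 'icmp6 and (ip6[40] == 134 or ip6[40] == 133)'

Run the same capture on a wired client and a Wi-Fi client: once the problem starts, the RAs show up on the former but not the latter.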

More research is obviously needed, but I figured I'd throw this out there just in case anyone has the same issue.  At least I can remotely coax clients back online via IPv6 by just SSHing to the WAP and issuing clear commands.  I suppose I could set up a script to just clear the interfaces every morning at 04:00, although that reminds me of a story about some systems guys somewhere rebooting SMTP servers nightly due to a memory leak in sendmail.  Yuck.

IPv6 and NAT

I need to dedicate a whole blog entry for this, I think.  Essentially, I'm sick of folks being violently opposed to NAT and PAT implementations for IPv6.  I'm also starting to think that NAT and PAT can be classified as security tools.

Warning, don't take the below text too seriously.  Well, you can, because I am mostly serious about it, but do so at your own risk!

NAT and PAT are mostly used for IPv4 because hardly anyone has enough public IPv4 space for all of their internal networks.  In this case, NAT and PAT are used for Internet access.  They're also used for things like server load balancing, back-end business to business connectivity, and numerous cases where changing routing is just not possible.  Some of these same situations exist in IPv6 and I believe NAT and PAT can also be applied in these situations.  Should they be the first thing on the list?  No, but their use should not be discounted for some silly religious purity.  They are a tool, nothing more.  Folks can choose to use them if they want, or not use them if they can make do without them.

Let's take a simple case.  Verizon Wireless provides an IPv6 /64 along with an IPv4 NET-10 address to each client on their LTE network.  The Mi-Fi devices and Internet connection sharing software on handsets usually use NAT and PAT to share the IPv4 connectivity (ultimately double-NATted due to CGN).  The IPv6 /64 is not well suited to Internet connection sharing because it exists on the cellular interface, not on the Wi-Fi interface (or whatever medium is used for client connectivity).  There are a few options for this, including NPTv6 and PAT.  I have heard folks specifically say they would rather wait and petition Verizon Wireless to support prefix delegation (PD) than use any type of NAT over IPv6.  Yeah, like that's going to happen any day.  Really, it sounds like using NAT for IPv6 in this situation might cause widespread famine and disease.  It's a little silly.
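For what it's worth, NPTv6-style prefix translation is a two-liner on Linux with the ip6tables NETMAP target, although IPv6 NAT support is only just landing in the mainline kernel as I write this.  The prefixes and interface name below are made up, with the 2001:db8: documentation prefix standing in for the carrier-assigned /64:

# translate the ULA-numbered Wi-Fi LAN to/from the /64 on the LTE interface
ip6tables -t nat -A POSTROUTING -o rmnet0 -s fd00:1::/64 -j NETMAP --to 2001:db8:1::/64
ip6tables -t nat -A PREROUTING -i rmnet0 -d 2001:db8:1::/64 -j NETMAP --to fd00:1::/64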

Now, let's talk about NAT as a security tool.  Very simply, it's not a security tool, but it can assist in securing networks and provide some security through obscurity.  For example, take a residential router that runs a single iptables command on Linux:

iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE

If you know iptables, this is a simple one.  The above command causes iptables to translate all packets sourced from 192.168.1.0/24 and routed out eth0 to whatever IPv4 address is bound to eth0.  It activates NAT and PAT.  Now, assume the rest of the iptables chains in the filter table (INPUT, OUTPUT, and FORWARD) are all set to their default policy, which is ACCEPT.  We now have a router performing NAT and PAT (with connection tracking) but no actual firewalling.  The box is wide open.  However, this does provide some security, because folks on the Internet can't easily establish a new TCP connection with a Linux box at 192.168.1.100 on port 22 (SSH).  Why?  192.168.1.0/24 is not routed on the Internet.  Can you perform other tricks to get this type of access?  Sure, but it takes more effort.  NAT and PAT here provide a little bit of security.  Although, maybe one can say that the security is there to begin with (in the form of routing, or lack thereof); NAT and PAT just allow connectivity in one direction where no connectivity exists without them.
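For contrast, two more rules in the FORWARD chain are what would actually close the box.  A sketch, using eth0 as the Internet-facing interface from above:

# allow reply traffic for connections initiated from the inside
iptables -A FORWARD -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
# drop everything else arriving on the Internet-facing interface
iptables -A FORWARD -i eth0 -j DROP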

Now, how about security through obscurity?  It's simple.  Hiding a bunch of IPv4 or IPv6 addresses behind one single address, using NAT and PAT to translate all connections, obscures the real addresses!  Sure, RFC 4941 obscures source addresses too, and so do proxy servers.  NAT and PAT used together are just another option for this and do, in my opinion, provide this type of protection (if you want to call it protection).  Semantics?  Maybe!

Unfortunately, this is not the first time I've talked about IPv6 and NAT.  I did it first back in early 2011.  I didn't really mention its use as a security tool, though.

What do you think?  Drop a comment, but be nice.

Comments: 1
> AirPlay and mDNS
Posted by prox, from Charlotte, on May 27, 2012 at 23:57 local (server) time

If you keep up at all with Apple products and technology, you probably know what AirPlay is, even if you haven't used it.  Yes, it's one of those proprietary protocols developed by Apple that allows iOS devices and iTunes to stream audio and video to an AppleTV device.  Display mirroring is also available.

The problem with AirPlay is that it depends exclusively on multicast DNS (shortened as mDNS) with DNS service discovery (DNS-SD) to operate.  mDNS is a zero-configuration protocol that relies on link-local multicast messages on the local LAN to publish DNS records without the need for a centralized DNS server.  DNS-SD is a protocol that complements mDNS by providing service discovery on top of it.  Apple's implementation of mDNS/DNS-SD is called Bonjour.  When I say that AirPlay relies on mDNS/DNS-SD exclusively, I mean there is no way of plugging in an IPv4 or IPv6 address of an AirPlay receiver - if mDNS doesn't work, you're screwed.
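You can watch this discovery process with the dns-sd tool that ships with OS X; _airplay._tcp and _raop._tcp are, as far as I know, the service types AirPlay uses for video and audio, respectively:

% dns-sd -B _airplay._tcp local.

An AppleTV on the same L2 domain shows up within a second or so; one on a different subnet never will, for the reasons below.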

And where does mDNS not work?  Between subnets (LANs, L2 domains, whatever you want to call them).  mDNS uses an IPv4 multicast address of 224.0.0.251 and an IPv6 multicast address of ff0X::fb, where X is the scope value (see draft-cheshire-dnsext-multicastdns-15 for details).  The IPv4 implementation of mDNS has no hope of getting outside of the current L2 domain, even with multicast routing, PIM, and IGMP enabled everywhere; link-local packets cannot ever be routed.  The IPv6 implementation of mDNS, on the other hand, might be able to work between L2 domains if the scope value is set to 5 (site).  However, I haven't seen an mDNS implementation on any Apple product that sets the scope to anything but 2, which is link-local.

Back to AirPlay!  Fortunately, there are some different ways of getting around the above limitations of AirPlay and mDNS.  One is purchasing a Bonjour Gateway from a company called Aerohive Networks.  This product apparently processes local mDNS packets and unicasts them to other Bonjour gateways that regenerate the packets.  The result is that all mDNS-aware devices see devices in other L2 domains.  This seems like a nice solution since the L3 infrastructure doesn't need to be changed.  Aerohive indicates that these Bonjour gateways can also filter mDNS messages, which sounds somewhat useful.

If you've got a network with Linux boxes, another option is to just run the Avahi daemon on each Linux box and toggle reflection.  Avahi is a FOSS implementation of mDNS and DNS-SD.  It's really as simple as adding these two lines to /etc/avahi/avahi-daemon.conf:

[reflector]
enable-reflector=yes

This tells the Avahi daemon to regenerate mDNS messages on all interfaces.  Easy, right?  Well, this requires your network to actually run Linux-based routers.  My network at home does just that, and I had this up and running in about a minute!  I used Aerodrom on Windows 7 as an AirPlay receiver, since I don't own an AppleTV.

Unfortunately, most networks don't run Linux-based routers, but this same solution should work with the help of OpenVPN.  OpenVPN can be configured to establish an L2 tunnel between two Linux boxes using TAP interfaces, perfect for use with avahi-daemon.  Putting a Linux box on each L2 domain and connecting them with OpenVPN TAP interfaces should be all that's necessary.  The Avahi daemon will relay the mDNS messages between the Linux boxes, which think they share an L2 domain.
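A minimal static-key TAP tunnel is only a few lines of OpenVPN config on each box; the addresses and key path below are placeholders, and the client side adds a remote line pointing at the other end:

dev tap0
secret /etc/openvpn/mdns-bridge.key
ifconfig 10.99.0.1 255.255.255.0

With tap0 up on both ends, leaving the Avahi reflector enabled on all interfaces (the default) takes care of the rest.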

Obviously this use of avahi-daemon doesn't scale, but wouldn't it be interesting if router manufacturers added this type of functionality to the control plane of their devices?  On the other hand, wouldn't it be interesting if Apple used ff05::fb for IPv6 mDNS and provided users the option to plug in an IP address or FQDN instead of requiring the use of mDNS?

Comments: 2
> Galaxy Nexus and GPS Issues
Posted by prox, from Charlotte, on April 25, 2012 at 22:51 local (server) time

I've owned a GSM Galaxy Nexus (Samsung i9250) for a few months now.  It's generally been a good experience and a somewhat good (but not great) upgrade from the Nexus One.  I wrote a short review on it here.

The one deficiency that I didn't initially notice is the lack of decent GPS reception.

When I had the Nexus One I would typically use the My Tracks application to plot my routes when walking or jogging.  This, of course, used the GPS.  The Nexus One would take maybe a minute at most to obtain a GPS lock on its own (no Wi-Fi required) and would then keep the lock the whole time with the phone in my pocket.

The Galaxy Nexus is a completely different story.  The GPS is, quite simply, broken.  It takes, on average, 5-10 minutes to get a GPS lock when standing outside with a clear view of the sky and the phone in the palm of my hand.  Sometimes it takes longer, but usually I give up after 10 minutes because, strangely enough, I do have a life.  Even after a lock is finally achieved, it is easily lost: putting the phone in my pocket will typically cause the lock to be lost within a few minutes.

It appears that I'm not the only one with the problem, which is unfortunate because it means that if I call Samsung asking for a replacement phone, my situation most likely will not improve.

Strangely enough, if I enable Wi-Fi and am within the vicinity of some networks, I can get a GPS lock fairly quickly.  In fact, even sitting here in my condo typing this, with Wi-Fi enabled I can get a lock within a few seconds by holding the phone near the window.  The phone will only see 4-5 satellites, but that's all that is needed for a 3D lock.  This makes a little bit of sense, because WPS probably seeds the GPS subsystem with location data so it knows exactly where to start (vs. a cold or warm start).

After searching around a little bit I found a few suggestions.  One was to shut off the phone and remove the battery for a few minutes, which seemed silly since it would only temporarily fix the problem.  The second, which seemed to work for a few people, was to force a cold start and redownload the A-GPS data, both of which can be done using GPS Status & Toolbox, an application I've used in the past that is pretty darn neat.

Unfortunately, performing the cold start (reset) and redownloading the A-GPS data didn't work out for me; I was still left in the same situation as before.  However, GPS Status & Toolbox did provide me with some additional information about the GPS problems.  Apparently, when the Galaxy Nexus is stuck searching for a GPS lock, it usually does see a whole boatload of satellites but fails to receive any data from them.

Let's look at some screenshots to illustrate this.

Here's a screenshot of GPS Status & Toolbox when standing outside with a clear view of the sky:

GPS with no lock

The above has no GPS lock.  Note the bars in the middle of the screen.  Those indicate satellite signal strength and gray apparently means no data.

Now, here's a screenshot of a good GPS lock with Wi-Fi enabled.  I don't even have a clear view of the sky since I'm indoors.  However, I'm standing at a window:

GPS with Wi-Fi

The green apparently means the satellite is used in establishing the GPS lock.  The other color codes are below:

When I can get a lock I notice that the satellite colors transition from gray, to blue, to yellow, and then to green.  According to Wikipedia, almanac and ephemeris are two parts of the GPS message, the other being time information and satellite health.

Why does the GPS on the Galaxy Nexus not quickly receive the second and third parts of the GPS message from any satellites when Wi-Fi is disabled?  The first screenshot above clearly shows a number of satellites providing adequate signal strength, yet most are stuck in the no-info stage or have only processed the first part of the GPS message.  I wish I had an answer.

I suspect the problem may be due to inadequate RF shielding of the GPS receiver inside the hardware itself.  Perhaps the GPS receiver is getting a strong signal, but it's too noisy and the messages are chock full of errors and can't be processed correctly.  This is really only speculation, though.

I haven't had a chance to stop by a Verizon Wireless store to see if the LTE Galaxy Nexus has the same problem.  However, I think it may be difficult to check, since I probably won't be able to take any of the phones outside for a proper test!

Anyone have any suggestions or comments?

Comments: 2
> Google, According To Verizon Business
Posted by prox, from Charlotte, on April 16, 2012 at 15:45 local (server) time

According to Verizon Business (AS701), Google is the Internet:

(evolution:15:32)% sudo traceroute -Iq 1 8.8.8.8  
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  226.sub-66-174-38.myvzw.com (66.174.38.226)  39.272 ms
 2  49.sub-69-83-51.myvzw.com (69.83.51.49)  52.978 ms
 3  *
 4  1.sub-69-83-32.myvzw.com (69.83.32.1)  52.690 ms
 5  TenGigE0-0-0-0.GW4.ATL5.ALTER.NET (63.122.230.125)  56.609 ms
 6  0.ge-1-0-0.XT2.ATL5.ALTER.NET (152.63.80.190)  56.562 ms
 7  0.xe-2-0-1.XT2.NYC4.ALTER.NET (152.63.0.153)  75.615 ms
 8  TenGigE0-5-4-0.GW8.NYC4.ALTER.NET (152.63.18.206)  94.438 ms
 9  Internet-gw.customer.alter.net (152.179.72.66)  123.363 ms
10  72.14.232.244 (72.14.232.244)  82.318 ms
11  *
12  *
13  64.233.175.109 (64.233.175.109)  82.116 ms
14  72.14.232.21 (72.14.232.21)  93.076 ms
15  google-public-dns-a.google.com (8.8.8.8)  85.996 ms

Usually, the customer.alter.net. subdomain is used for labeling the customer-facing router interface, and the label is typically the company name.  RAS even has this documented in his Traceroute document on page 12.  Here's an example for Juniper Networks:

(evolution:15:34)% sudo traceroute -Iq 1 juniper.net.
traceroute to juniper.net. (207.17.137.239), 30 hops max, 60 byte packets
 1  226.sub-66-174-38.myvzw.com (66.174.38.226)  40.993 ms
 2  49.sub-69-83-51.myvzw.com (69.83.51.49)  50.846 ms
 3  *
 4  1.sub-69-83-32.myvzw.com (69.83.32.1)  50.619 ms
 5  TenGigE0-0-0-0.GW4.ATL5.ALTER.NET (63.122.230.125)  57.569 ms
 6  0.ge-4-0-0.XT1.ATL5.ALTER.NET (152.63.83.37)  57.482 ms
 7  0.ge-3-0-0.XL3.SJC7.ALTER.NET (152.63.49.141)  124.462 ms
 8  TenGigE0-6-4-0.GW3.SJC7.ALTER.NET (152.63.49.166)  126.358 ms
 9  juniper-gw.customer.alter.net (152.179.48.62)  126.317 ms
10  ns-app-fw-dmz.juniper.net (207.17.136.1)  124.180 ms
11  juniper.net (207.17.137.239)  127.080 ms

Why is Google's called Internet?  Do they think Google is The Internet or something?  Using my handy dnsnew utility, I also found the following:

PTR 152.179.72.60 -> NXDOMAIN
PTR 152.179.72.61 -> TenGigE0-3-4-0.GW8.NYC4.ALTER.NET, A $_ -> NXDOMAIN
PTR 152.179.72.62 -> google-gw.customer.alter.net, A $_ -> NXDOMAIN
PTR 152.179.72.63 -> NXDOMAIN

That looks better.  Why is their other PTR labeled the way it is, though?

Curious!
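For anyone without dnsnew, stock dig will do the same reverse lookup:

% dig +short -x 152.179.72.62
google-gw.customer.alter.net.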

Comments: 0
