SD-WAN Reimagined – 128 Technology

Maybe it’s just me. I’ve always felt like SD-WAN was kludgy. Every time I listen to an explanation of how it works, I picture a mechanic putting duct tape on the wing of an aircraft while passengers sit inside awaiting departure. I imagine sitting in the window seat, watching it take place and asking: “Is that really the best way to fix this problem?” “Are we trusting duct tape to hold the wing together?” and even “Shouldn’t the wing hold itself together?”

Despite having those questions, I hopped onto an aircraft and flew off to a Tech Field Day Exclusive with 128 Technology in July. After arriving, I didn’t think about duct tape once.

128 Technology is a five-year-old company focused on creating the best SD-WAN solution. As a new company building a new product to answer a specific set of challenges, 128 Technology had an empty toolbox. That also meant they had no baggage to bring with them. It was a fresh start. They could make their solution anything they wanted it to be.

According to Sue Graham Johnston, “…we decided to reorient networking to focus on the session, we can get rid of about 30 years’ worth of technology workarounds and overlays…” In case you are wondering, yes, that is duct tape she’s talking about.

That one statement piqued my interest, set the stage, and explained much of how their model works. It is simple enough to be brilliant.

128 Technology uses a 5-tuple to identify each session: source and destination IPs, source and destination ports, and the protocol. When the session is built between the ingress and egress routers, the first packet is encapsulated with 150-200 bytes of metadata to establish the session. After the session is established, no further encapsulation is needed, because the ingress and egress routers already have all of the data that is necessary. When each packet hits the ingress router, its source and destination addresses are rewritten, and they remain rewritten until it reaches the egress router. (Does this sound a bit like NAT? That’s because it is, in effect, NAT for SD-WAN.)
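To make the mechanism concrete, here is a minimal, hypothetical sketch, not 128 Technology’s actual code or packet format, of an ingress router keying per-session state on the 5-tuple so that only the first packet carries session metadata:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FiveTuple:
    """Session key: source/destination IPs and ports, plus protocol."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str

class IngressRouter:
    def __init__(self):
        # Established sessions, keyed by 5-tuple (state layout is invented).
        self.sessions = {}

    def forward(self, key, payload):
        state = self.sessions.get(key)
        if state is None:
            # First packet: attach metadata (150-200 bytes in the real
            # product) so the egress router can learn the session.
            state = {"egress": "egress-router-1"}
            self.sessions[key] = state
            return {"metadata": state, "payload": payload}
        # Established session: no further encapsulation is needed.
        return {"payload": payload}
```

Only the session’s first packet grows; every subsequent packet is forwarded without extra headers, which is where the bandwidth savings come from.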

That’s their magic: No encapsulation, lower overhead, no need to fragment larger frames to provide space for additional headers, and significant bandwidth savings.

Now that you understand the basics of how 128 Technology builds sessions, it’s also essential to see how they integrate security. After all, this is intended to be an SD-WAN solution where data will traverse the internet.

Here again, there are a few basics to understand. All metadata used to establish sessions is encrypted. Unencrypted traffic between ingress and egress routers is encrypted with AES128 or AES256. SSL and other already-encrypted traffic doesn’t need to be re-encrypted, so 128 Technology doesn’t re-encrypt it. That reduces latency, complexity, and overhead. The last important piece of the puzzle is that the 128 Technology network operates as a zero-trust security environment. All data sessions must have a service policy created to allow traffic to flow. No service policy means no traffic.
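The zero-trust behavior is easy to picture as a default-deny lookup. The policy fields below are purely illustrative, not 128 Technology’s actual configuration schema:

```python
# Default-deny session admission: traffic flows only when a service
# policy explicitly matches. (Illustrative sketch with invented fields.)
def session_allowed(policies, dst_ip, dst_port, protocol):
    for policy in policies:
        if (policy["dst_ip"] == dst_ip
                and policy["dst_port"] == dst_port
                and policy["protocol"] == protocol):
            return True
    return False  # no service policy means no traffic

# One policy permitting HTTPS to a single service:
policies = [{"dst_ip": "10.1.1.10", "dst_port": 443, "protocol": "tcp"}]
```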

The last consideration is how to manage the SD-WAN environment. One router in the network is assigned the role of Conductor. All routers and the Conductor run a single code base, ensuring consistency in bug fixes and behavior. The Conductor is not required for configuration or operation, but it provides a central point of configuration for all devices.

When I consider the takeaways from the Networking Field Day Exclusive with 128 Technology, one thing jumps out far above the rest: their focus on simplicity and on the most critical part of data networks, the data session. I feel the solution is well thought out, and based on the customers that are using it in production, it seems the execution delivers on their promises.

The only remaining questions I have do not relate to their technology at all.

  1. When will 128 Technology be acquired?
  2. Who will acquire them?
  3. Will they be folded into an existing full-stack solution, or acquired by a service provider for use in its internal networks?

I hope that this will be a product that we can all benefit from as direct customers.

Take time to watch the videos and see if you agree.

128 Technology Networking Platform Overview from Gestalt IT on Vimeo.

An open letter to Senator Richard Burr

I sent this to Senator Richard Burr through his website. I am also leaving it here, and will update with his response:

Senator Burr,

First, I want to say thank you for working on behalf of North Carolina in our nation’s capital. I recognize that there are hundreds, if not thousands, of issues that you are asked to consider on a regular basis, which cannot be easy.

I am contacting you regarding the encryption bill that you are working on with Senator Feinstein. North Carolina is a very tech-savvy state. We have major technology companies in almost every tech sector, and we are now home to some of the largest and most efficient data centers in the US. There is much to be proud of. With that in mind, I am surprised to see you as one of the advocates of the bill.

I recognize that as the Chair of the Senate Intelligence Committee you hear from our intelligence services on a regular basis. I am certain the current conversation is heavily geared towards how to deal with the pervasive nature of encryption. Today it is easy for a terrorist organization to have fully encrypted end-to-end communication. I am sure that is incredibly frightening to the intelligence services and their job is a very difficult one. I recognize that every attack on American citizens ultimately creates hundreds of questions like “How did the [insert three letter acronym] not know this was going to happen?” It’s an impossible battle.

I am a network engineer, and I have worked in IT for many years. I intimately understand encryption and the basic underpinnings of the internet. I have spent many years protecting my employers’ networks and systems from outside attack. I understand that ever-evolving battle first-hand.

With that said, I am very concerned that you feel that you can force companies to provide backdoor access to devices and communication without affecting every citizen who chooses to use an electronic device. I assume that you have chosen to believe the rhetoric which states that open access can be protected. Otherwise, the only other assumption is that you believe that normal everyday citizens should not have the ability to protect their private, personal information; that corporations should not have the ability to protect their intellectual property.

Assuming that you believe the former, I want you to consider these questions. How long do you expect that backdoor to be kept safe? How long do you think it will take before technically capable terrorists, both foreign and domestic, find and utilize that backdoor?

If the US makes this demand and it is granted, what prevents other foreign entities from doing the same? What do you think the economic impact would be when China has a backdoor to every corporate device of every manufacturing company in the US? I have spent eight years of my career working with large international manufacturing companies. I know first-hand what the impact of that is. I have watched it with my own eyes. I could argue this particular point, citing experience, but I want to respect your time. If you would like to discuss it, I will be happy to do so.

I have one more question I would like to present. How do you expect that forcing backdoor access will actually aid the intelligence services? This is an exercise in futility and escalation. Assume for a moment that the NSA/CIA/FBI has root access to every device. What happens when the user also employs an encrypted communication app which requires a passcode and does not store data locally? Let’s also suppose that they are always running a VPN or Tor client. Finally, let’s assume that the server this encrypted app communicates with, through an encrypted tunnel, lives in a non-friendly foreign state. What good does this legislation do then? The answer is: none. The US cannot compel the foreign server to provide a backdoor. But the US, which loves to discuss freedom, will have created a wide exploit that will begin to be used for a different type of terrorism, and will have removed every citizen’s right to privacy over their most personal data.

I am not hurling these questions at a wall to see what sticks. I would like a response. This is a very important discussion to be had without rhetoric and fear-mongering. I can be contacted with the information provided if you would like to further discuss these or other concerns.

With respect,

Jonathan Davis

Geek Toys – The future of Apple TV

As WWDC approaches, I once again hope for a new Apple TV. The Apple TV has so much potential, and so much disappointment associated with it. Will WWDC be the time when we finally see an update? The bigger question is, with such strong competition from other products, has Apple already missed the boat?
I’ve spent quite a bit of time thinking about what I would like to see in a new Apple TV. There has been a lot of change in the last few months around home entertainment, and if Apple really wants to own the space, it has to adapt to compete. There are some key features that I think could make Apple TV ready to own the space again.

Siri

When I heard people discuss using Siri on an Apple TV, I rolled my eyes. I hated Siri. I refused to use Siri. That changed, just a little, when I received an Amazon Echo. Amazon has knocked voice recognition out of the park! Alexa is fast, error-free, and simply amazing. It is so good, I actually caught myself preparing to say “Thank you” to a piece of hardware! Each morning I ask Alexa for the news and my commute information. I use it for timers when cooking. Alexa is the only reason I use Prime Music. Let me repeat that. I began using Amazon Prime Music only because Alexa made it so easy. Make Siri that good on an Apple TV, and I get it now.

Facetime HD camera and mic

I do not understand why this hasn’t happened before. An Apple TV that could connect via FaceTime is a no-brainer in my opinion. Besides the ability to talk with relatives and friends through a TV, a camera could provide a lot of other features. The camera or mic could be used as a detector for HomeKit automation. Add some face recognition, use it to choose the profile, and permit or deny content based on age restrictions. The list goes on and on.

HomeKit Integration

Imagine the Apple TV turning on lights when motion or sound is detected. It could also provide the remote view capabilities required by those of us who regularly travel and would like to check on our homes. This would be an easy way to integrate HomeKit and directly compete with the existing products on the market from Belkin and Wink and many other companies. I love my Wink Hub and the attached lights, sensors, and outlets. I hope that Apple gets the integration right.

4K

Apple has built the 5K iMac to encourage 4K content creation. 4K content only becomes valuable once there is an easy way to consume that content. Apple TV should be that avenue.

Glances and notifications

The notifications on the Apple Watch are the reason I love my watch. There is no reason the same thing shouldn’t work as a pop-up on the Apple TV.

A decent remote!

Apple works hard to refine every detail of their products, which leads me to ask: what happened? The Apple TV Remote is simple, small, and sleek. It is also the worst of the worst among entertainment hub remotes. It uses IR, which means it must be in direct line of sight of the Apple TV. Anyone who has used both an Apple TV and a Roku or Amazon Fire TV understands what I am talking about. The Roku and Fire TV remotes can be oriented in any direction, and they still work. The devices themselves can be hidden behind TVs or in closets, and they still work. Not so for the Apple TV. It is time to move to Bluetooth LE for the remote and show IR the door.

Games, apps, blah blah, blah.

I don’t play games. I try to care…but I don’t.

RSA can’t be trusted. Death to RSA.

RSA has finally admitted that its root certificates were compromised, which affects ALL SecurID tokens.

I personally feel that this shows absolute failure on the part of RSA. First, their root certificate was compromised. Second, rather than admit it, begin contacting customers immediately, and notify the public, they chose to hide behind NDAs while their customers were being compromised. RSA’s excuse for the lack of communication was that they didn’t want to give the attackers more information that could be used to exploit further companies. Based on the targets of the attacks (Lockheed Martin, Northrop Grumman, and L3 Communications), it is clear that the attackers already knew everything.

A company that was built on trust and security has now been found completely untrustworthy and insecure. I expect to see major lawsuits resulting from this. I hope to see heads roll.

The company I work for uses these tokens. We have asked RSA for more information multiple times, but they have been slow in providing anything.

http://www.net-security.org/secworld.php?id=11122

Google warns of World IPv6 Day

Google is warning users about tomorrow’s test of IPv6 and, more importantly, about the fact that available IPv4 addresses have been depleted. I was only able to see the yellow banner in Linux running Firefox 4; it never appeared on my Windows 7 machine.

While the banner is sure to cause some discussion among the non-networking crowd, I wish Google had included a link to more information. Instead, they only include a link to test a user’s internet connection for IPv6 readiness. I don’t think the average user understands that their ISP is responsible for providing IPv6 connectivity, or understands the problems that currently face IPv4.

I will give Google credit for starting the conversation. Hopefully, tomorrow there will be a lot of companies asking themselves what they must do to be ready for IPv6. Enterprises must lead IPv6 adoption, because as we all know, carriers are more than happy to sit on their butts as long as no one complains. The fact that so many ISPs are considering CGN is a perfect example of that.

The velociraptor died after choking on a rib bone, so creating IPv7 is out of the question

OK, I admit it. I’ve had my head stuck firmly in the sand for almost 11 years. Eleven years ago, to the month, I was sitting in my first TCP/IP class. I had fought through the first two days of class feeling mentally exhausted. I was finally beginning to wrap my head around IPv4 and variable-length subnet masks. In fact, I was understanding IPv4 well enough that I could help my fellow students decipher the statements coming from our newly minted (and very proud of it) CCIE.
I was feeling pretty good about myself, and may have started to strut, just a little, as I moved from desk to desk, helping other students.
I should mention now, that I’m fairly quick on the up-take. I’m not bragging, simply stating that I meet the minimal requirements to be a geek. For some reason, I had really struggled with IPv4, so once I felt like I had a firm grasp of the concept, I was feeling pretty good.
My CCIE instructor, from his seat of power, saw a little pride develop in his class as more people caught on to the basics of VLSM. He, in the ultimate wisdom which comes with that coveted CCIE number, decided it was time to strangle those good feelings until they were most certainly dead. He did so by launching into a 30-minute diatribe about how IPv4 would die a “quick death” and how IPv6 would take its place.
I’m sure you can imagine the look of horror on the faces of the students in the room. He certainly saw it, and fed off the fear as he blew through the broad topic that is IPv6. He delighted in mentioning that every device would have multiple IPs, and that each IP would be part of a different subnet. He threw out new words like anycast to a group of people who barely understood multicast and unicast.
Wait, what?
In 30 minutes, he convinced three students that IT was not really the field they wanted to pursue, and the rest that IPv6 was EVIL. I was so affected and confused by that 30-minute rant that it took me five years before I had a practical understanding of subnetting IPv4 networks again.
Since that time, I have done my best to ignore the existence of IPv6. I used the fact that vendors were still releasing new products without IPv6 support as a reason to keep my eyes and ears firmly closed.
<My IPv6 Rant>
I believe that when IPv6 was being created, someone said, “Yes, we COULD do that, but SHOULD we do that?” The rest of the attendees sat silently as he was taken from the room, and forced to watch his organs being fed to a genetically engineered, but very bored, velociraptor. The group then hired a soothsayer to read the velociraptor droppings, which gave us IPv6, reality TV, and the song “Friday”. The velociraptor died after choking on a rib bone, so creating IPv7 is out of the question.
</My IPv6 Rant>
With that said, IPv6 is here to stay, and it’s time for us, as network engineers, to get on board. We can’t complain about NAT64 without being willing to make the commitment to IPv6. When new protocols like TRILL are brought up for discussion, it’s easy to get excited. TRILL takes something that we already know (IS-IS, L2, etc.) and simply builds on it. It is also transparent to layers 4-7, so it doesn’t affect non-network types.
IPv6 causes us to backtrack. It changes all of the rules. It’s not just IPv6; it’s new routing protocols, DNS, application stacks, and more. We have to forget what we learned in IPv4 and relearn it for IPv6. Server admins and developers will also have to update their skills. It’s painful.
With that acknowledged, we can’t put off learning to subnet, route, and filter IPv6. It’s time to begin examining IPv6 routing protocols, and buying equipment or ordering circuits that don’t support IPv6 should be out of the question. Yes, it does feel like starting from scratch. Yes, you will have to relearn every protocol that you thought you knew. Yes, IPv6 makes everything more complicated.
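The good news is that the tooling for practicing is free. As a small example using Python’s standard ipaddress module, carving a /48 allocation into /64 subnets is the same VLSM exercise we already know, just with much bigger numbers:

```python
import ipaddress

# A /48 site allocation from the IPv6 documentation prefix (RFC 3849).
site = ipaddress.ip_network("2001:db8:abcd::/48")

# Carve it into /64s, the conventional size for a single LAN segment.
subnets = site.subnets(new_prefix=64)
first = next(subnets)

print(first)               # 2001:db8:abcd::/64
print(site.num_addresses)  # 2**80 addresses in a single /48
```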
System Admins and developers can’t support IPv6 until we do. We must move forward, so that they can move forward.
Most network engineers agree that NAT is a poor solution to the problem staring us down. There are only a few other options. We can upgrade our skills, beginning the long arduous task of becoming experts in IPv6. We can ignore the change, until we are required to upgrade; then deal with entire IT teams being unprepared, learning on the fly, while implementing poor solutions in the near-term. Finally, we can make the same choice that those three classmates of mine did. “Maybe networking isn’t for me, I’ll go do something easier, like lion taming.”

Texas Hold’em and the IETF – Did Brocade bet against TRILL?

For the last two posts, which you can find HERE and HERE, I’ve knocked Cisco around. For those who don’t know me, I should warn you that I am an equal opportunity offender. With that in mind, let’s take a look at Brocade’s implementation of TRILL.

As most of you should know, TRILL uses IS-IS on Layer 2 to identify the shortest path between switches, and load balance across those paths. Since this is happening at layer 2, not layer 3, it does away with Spanning Tree, which means more bandwidth and faster fail-over using the same number of ports, fiber paths, cables, and switches.
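TRILL’s actual computation is IS-IS link-state SPF, but the payoff is easy to demonstrate with a toy hop-count search (an illustrative sketch only, not an IS-IS implementation): in the four-switch fabric below, both equal-cost paths between A and D survive and can carry traffic, where spanning tree would have blocked one of them.

```python
from collections import deque

def all_shortest_paths(graph, src, dst):
    """Breadth-first search that keeps every minimum-hop path."""
    best, paths = None, []
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # all remaining candidates are longer
        node = path[-1]
        if node == dst:
            best = len(path)
            paths.append(path)
            continue
        for neighbor in graph[node]:
            if neighbor not in path:
                queue.append(path + [neighbor])
    return paths

# Leaf A and leaf D connected through two spine switches, B and C:
fabric = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(all_shortest_paths(fabric, "A", "D"))
# [['A', 'B', 'D'], ['A', 'C', 'D']]
```

With both two-hop paths retained, traffic can be load-balanced across them, which is exactly the bandwidth win over spanning tree described above.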

Of course, despite the fact that we all understand the above to be true, Brocade decided to go their own way and replace IS-IS with FSPF, or Fabric Shortest Path First.

If you haven’t done much work in SAN environments, you may not be familiar with FSPF. Brocade created FSPF in 1997 to address bandwidth concerns in Fibre Channel SANs. It has since become the standard path-selection protocol in Fibre Channel fabrics.

With that understanding, let me back up and rephrase. As TRILL utilizing IS-IS was being developed by the IETF, Brocade, itself a member of the IETF, decided to implement its own version of TRILL utilizing FSPF.

Brocade and Cisco are both offenders. They both claim to be working with the IETF, yet at the same time both have released competitors to TRILL. Are we to believe that Brocade worked to make TRILL the best possible solution at the same time that they were creating a competitor to it? What about Cisco and FabricPath?

Both companies claim that their solution “extends” TRILL with additional features.

Were those “extended” features brought up in meetings when the TRILL standard was being discussed? Did the IETF choose to ignore those suggestions? I doubt it.

Cisco, Brocade, and most likely every other vendor sat at the table the same way a poker player does during a game of Texas hold ’em. No one showed their cards, but everyone watched the flop, turn, and river to see what they could create with their own hands to drive the other players off the table.

Make no mistake, TRILL did not benefit from Brocade, Cisco, or any other vendor’s presence on the committee. Their involvement was for their own purposes, not the benefit of customers.

Cisco is SCARED! Why Cisco won’t release an emulator.

Greg Ferro posted on his blog another plea to Cisco to play nice and give network engineers tools for testing, verifying, and learning new technology. If you’ve missed the recent debate on the matter, it’s OK. Crawl back under that rock, you won’t miss a thing.

I generally read Greg’s posts while nodding my head like some sick bobble-headed doll, with an occasional grunt of agreement. However, today my head stopped bobbing when I realized something…

Cisco is AFRAID of the virtual switch/router.

Let that sink in for a minute.

I know what you’re thinking. “They don’t have anything to be afraid of. That’s crazy talk.” I’m sure people said the same about Dell and HP when ESX was first announced. “They don’t have anything to worry about. No data center could ever virtualize all of their servers. That’s just crazy.” Only, it did happen. Right now I am sitting just a few hundred feet from 100 servers that would be over 500 servers if it weren’t for VMware. Think of the lost revenue to Dell and HP.

But, you say, “What about the Nexus 1000V?” What about it? Cisco had already lost sales because all of those virtual servers didn’t need individual switch ports. The 1000V was Cisco’s way of getting some of that revenue back. It wasn’t about extending network engineers’ control into the virtual environment. It was about lost port revenue.

Imagine with me for a moment. What would happen if you could virtualize the edge and core layers of your network onto a single HA cluster? (Maybe a couple of Dell or HP servers.)

Firewalls, Check
Routing, Check
IDS, Check
VPN, Check

Where is the need for 10GbE, 40GbE, 100GbE, TRILL, or FabricPath? What about all of the other technologies that Cisco will sell us over the next 10 years, forcing us to replace existing hardware?

Outside of the HA cluster, you would need a couple of switches for the distribution layer, and you would need your normal access layer switches, but how many components of the network would be cut? Not only routers, firewalls, and switches, but adapters, redundant power supplies, and wireless controllers.

It’s already been done. Look at Cisco Call Manager: a router, switch, and server that do the work of racks and racks of PBX equipment.

“But, I just want them to release it so that I can test.”

Cisco has three choices:

  1. Stick fingers in their ears and hum loudly. (Current tactic.)
  2. Release a good virtual network platform, and wait for everyone to ask, “Wait…why can’t we virtualize this for real?”
  3. Release a crippled, barely working virtual platform, and then get derided for a poor product.

No matter how Cisco looks at it, they lose.

Suddenly I am asking myself. After IPv6, what is the next big thing to happen in networking? Could virtualization change networking the way it changed servers?

Is Cisco getting back on track?

Cisco’s big-man-in-charge, John Chambers, sent out an email to all employees this week, which outlined a few important things:

-Cisco has lost focus
-Cisco was caught off guard by certain movements within the networking community (OpenFlow, new products from other vendors, etc.)
-Cisco makes it difficult for new products to make it to market
-Cisco has to focus on its core business components, rather than continuing to diversify into low-margin consumer markets
-Most importantly, Cisco shareholders, employees, and customers are not happy with the current direction that Cisco has taken

The message is a great read, and it gives me hope that Cisco can get back on the ball and address some of its core issues. Kudos to the Cisco team for taking a hard look at where they are and making decisions to correct their wandering trajectory. Here’s hoping they follow through!

http://blogs.cisco.com/news/message-from-john-chambers-where-cisco-is-taking-the-network/

Microsoft meets the first snag in plan to purchase IPv4 addresses

As you should be aware by now, Microsoft is planning on purchasing a huge block of IP addresses from Nortel. Now ARIN’s chief, John Curran, has made it clear that if the plan does not meet current ARIN requirements for transfers, the IP address space can be reassigned. Here are a couple of relevant quotes:

Companies that are allocating their address to a third party can ask for compensation if they want to, he said. However, the acquiring party is required to show an immediate and appropriate need for the addresses, he said.

Existing transfer policies allow up to 12-months worth of address space to be transferred from one entity to another, he said.

So, that brings up the question: can Microsoft show a need for 666,000 addresses in the next 12 months?
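For scale, a quick back-of-the-envelope calculation puts that block at roughly ten /16s, somewhere between a /13 and a /12:

```python
addresses = 666_000
per_slash16 = 2 ** (32 - 16)    # 65,536 addresses in a /16

print(addresses / per_slash16)  # roughly 10.2 /16 equivalents
```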

Link: http://www.computerworld.com/s/article/9215091/IPv4_address_transfers_must_meet_policy_ARIN_chief_says