Game of Encapsulations

First there was a big, flat network full of broadcast packets, which was not practical as broadcasts were consuming precious bandwidth. To mitigate the problem, VLANs were invented: each VLAN is an independent broadcast domain, which confines broadcasts to a single subnet (VLAN) and allows isolation and segmentation of resources. One of the biggest problems with traditional L2 networks is that, due to the nature of L2 loop prevention, redundant links will always be blocked by Spanning Tree. The introduction of VLANs brought new protocols (trunking, DTP, VTP, per-VLAN STP, etc.), which increased complexity and made L2 networks even harder to troubleshoot and monitor. Size was a limitation as well: the VLAN ID field is 12 bits, which limits the number of usable VLANs to 4094. Service providers started using double VLAN tagging (QinQ) to tunnel multiple customer VLANs inside one core VLAN. MPLS did its part by tunneling whatever the customers wanted via MPLS tunnels, making VLANs inside service provider networks irrelevant. All of that gave some breathing space and postponed the introduction of new technologies.
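To make the numbers concrete, here is a small sketch of the 802.1Q tag and where the 4094 limit comes from. The field layout follows the 802.1Q tag format (16-bit TPID, then 3-bit PCP, 1-bit DEI, 12-bit VLAN ID); the helper function name is my own.

```python
import struct

# The 802.1Q tag carries a 12-bit VLAN ID, so usable IDs are 1-4094
# (0 and 4095 are reserved).
VLAN_ID_BITS = 12
usable_vlans = (1 << VLAN_ID_BITS) - 2  # 4094

def dot1q_tag(vlan_id: int, pcp: int = 0, dei: int = 0) -> bytes:
    """Pack an 802.1Q tag: TPID 0x8100, then 3-bit PCP, 1-bit DEI, 12-bit VID."""
    assert 0 <= vlan_id <= 4095
    tci = (pcp << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

print(usable_vlans)          # 4094
print(dot1q_tag(100).hex())  # 81000064
```

QinQ simply stacks a second, outer service tag (TPID 0x88A8) in front of the customer tag, which is how providers multiplied the ID space without changing the 12-bit field.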

As big data and cloud providers started to take off, VLANs were simply not good enough: too small, too complex, and difficult to do deterministic traffic engineering with. To overcome these limits, VXLAN was introduced. Nothing better than solving L2 problems by tunneling L2 traffic over L3. The VXLAN standard was proposed by representatives of different hardware and software companies (Cumulus Networks, Arista, Cisco, VMware, Red Hat, etc.). VXLAN has a 24-bit identifier field, which allows over 16 million Layer 2 segments (VXLAN Network Identifiers, or VNIs). A VTEP (VXLAN tunnel endpoint) checks the frame's destination, encapsulates the frame in a VXLAN header, and sends it across the Layer 3 network to the destination VTEP, which strips off the VXLAN header and delivers the frame to the destination host using traditional protocols. As with VLANs, virtual machines on the same VNI can communicate directly with each other, whereas virtual machines on different VNIs need a router. VTEPs can be physical devices (hardware VTEPs) or run in a hypervisor (software VTEPs). One of VXLAN's biggest problems was the lack of an intelligent control plane: it relied on flood-and-learn to map MAC addresses to VTEPs, or on pushing all information to a centralized controller (in the SDN case), which made it extremely difficult to scale.
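The 24-bit VNI and the encapsulation step can be sketched in a few lines. This builds just the 8-byte VXLAN header as defined in RFC 7348 (a real VTEP would also prepend outer UDP/IP/Ethernet headers); the function name is illustrative.

```python
import struct

VNI_BITS = 24
max_vnis = 1 << VNI_BITS  # 16,777,216 possible segment IDs

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header per RFC 7348:
    flags byte with the I bit set, 24 reserved bits,
    24-bit VNI, 8 reserved bits."""
    assert 0 <= vni < max_vnis
    return struct.pack("!II", 0x08 << 24, vni << 8)

print(max_vnis)                  # 16777216
print(vxlan_header(5000).hex())  # 0800000000138800
```

The fixed 8-byte layout is exactly why hardware likes VXLAN: every field sits at a known offset, so an ASIC can parse it in a single pass.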

To solve the problem, an intelligent, scalable control plane had to be introduced, and of course the solution was to use the "trashcan of the Internet": BGP. Combining EVPN as the control plane with VXLAN as the data plane made the solution far more scalable, as the EVPN address family in BGP is used to distribute both VTEP IP addresses and end-host MAC addresses. Now you can have millions of route entries across thousands of devices and at the same time use all the nice features EVPN has to offer, like active/active setups, load balancing, mass withdrawal, route reflectors, etc. The EVPN+VXLAN RFC was authored by Cisco, Juniper, Nokia, and AT&T. Notice that no "software" companies were involved: for this combination to work properly, basic and advanced routing features have to be in place, which most software solutions still lack.
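A minimal sketch of what EVPN changes: instead of flooding to learn remote MACs, each VTEP advertises its locally learned MACs as BGP EVPN type-2 (MAC/IP advertisement) routes, and remote VTEPs build their forwarding tables from those routes. The classes below are hypothetical, heavily simplified stand-ins for the real route format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Type2Route:
    """Simplified EVPN type-2 (MAC/IP advertisement) route."""
    vni: int
    mac: str
    vtep_ip: str  # next hop: the VTEP that advertised this MAC

class EvpnFdb:
    """Forwarding table populated from BGP instead of flood-and-learn."""
    def __init__(self) -> None:
        self.fdb: dict[tuple[int, str], str] = {}  # (vni, mac) -> VTEP IP

    def receive(self, route: Type2Route) -> None:
        self.fdb[(route.vni, route.mac)] = route.vtep_ip

    def lookup(self, vni: int, mac: str):
        # A hit means unicast straight to the remote VTEP; in the
        # flood-and-learn model a miss would trigger flooding instead.
        return self.fdb.get((vni, mac))

table = EvpnFdb()
table.receive(Type2Route(vni=5000, mac="00:11:22:33:44:55", vtep_ip="10.0.0.2"))
print(table.lookup(5000, "00:11:22:33:44:55"))  # 10.0.0.2
```

Mass withdrawal then falls out naturally: when a VTEP goes away, BGP withdraws its routes and every entry pointing at it is invalidated at once, rather than aging out MAC by MAC.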
Soon after it was standardized, some shortcomings of VXLAN started to be exposed, like not enough flexibility in the header, lack of OAM features, support for only a single payload protocol, etc., and a new encapsulation appeared called GENEVE (Generic Network Virtualization Encapsulation). Its specification states that it is a purely data plane protocol, leaving control plane integration unspecified. It was designed to offer maximum flexibility and covered all the shortcomings of VXLAN, introducing protocol, OAM, and other fields in the header, as well as TLVs for adding extra information and passing it between tunnel endpoints. It sounds like a great protocol, proposed by VMware, Red Hat, Intel, and Microsoft, but "hardware" companies were not so ecstatic about it. Because GENEVE does not have a fixed header length, hardware implementations are not efficient. While "software" companies don't really care about that, since frames are processed in software anyway and TLVs give them the flexibility to send any metadata across, "hardware" companies were losing one of the biggest advantages they have had, i.e. fast processing of data in hardware.
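The variable length is easy to see in code. This sketch follows the GENEVE layout from RFC 8926 (2-bit version plus 6-bit option length, flags byte, 16-bit protocol type, 24-bit VNI, then a chain of TLV options); the option class and payload used here are made-up example values.

```python
import struct

def geneve_option(opt_class: int, opt_type: int, data: bytes) -> bytes:
    """One GENEVE TLV: 16-bit class, 8-bit type,
    then flags + 5-bit length counted in 4-byte words."""
    assert len(data) % 4 == 0
    return struct.pack("!HBB", opt_class, opt_type, len(data) // 4) + data

def geneve_header(vni: int, protocol: int, options: bytes = b"") -> bytes:
    """Variable-length GENEVE header: 8 fixed bytes plus however
    many option bytes the sender decides to attach."""
    opt_len = len(options) // 4               # option length in 4-byte words
    first = (0 << 6) | opt_len                # version 0 in the top 2 bits
    return (struct.pack("!BBH", first, 0, protocol)
            + struct.pack("!I", vni << 8)
            + options)

# Hypothetical metadata TLV; 0x6558 is the EtherType for bridged Ethernet.
opt = geneve_option(opt_class=0x0102, opt_type=1, data=b"\x00\x00\x00\x2a")
hdr = geneve_header(vni=5000, protocol=0x6558, options=opt)
print(len(hdr))  # 16: 8 fixed bytes + one 8-byte option
```

Because the total header length depends on how many options are attached, a fixed-pipeline ASIC cannot know the payload offset in advance, which is exactly the hardware vendors' complaint.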

Around the same time GENEVE was proposed, "hardware" companies came out with their own proposal, VXLAN-GPE (Generic Protocol Extension for VXLAN). This is an extension of the VXLAN protocol that adds a next-protocol field, an OAM flag bit, handling of BUM traffic, etc. The next-protocol field covers IPv4, IPv6, Ethernet, and NSH. NSH (Network Service Header) is a mechanism for carrying metadata along with the packet. The biggest difference is that VXLAN-GPE with NSH must carry all information inside fixed-size fields.
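The contrast with GENEVE is visible in the header shape. A sketch of the VXLAN-GPE header, based on the IETF draft's layout (next-protocol values 0x01 IPv4, 0x02 IPv6, 0x03 Ethernet, 0x04 NSH): it keeps the same fixed 8-byte format as plain VXLAN, just repurposing reserved bits.

```python
import struct

# Next-protocol values from the VXLAN-GPE draft.
NEXT_PROTO = {"ipv4": 0x01, "ipv6": 0x02, "ethernet": 0x03, "nsh": 0x04}

def vxlan_gpe_header(vni: int, next_protocol: str) -> bytes:
    """Fixed 8-byte VXLAN-GPE header: the I flag (VNI valid) and the
    P flag (next-protocol present), a next-protocol byte, 24-bit VNI."""
    flags = 0x08 | 0x04  # I bit + P bit
    return struct.pack("!BHBI", flags, 0, NEXT_PROTO[next_protocol], vni << 8)

print(vxlan_gpe_header(5000, "nsh").hex())  # 0c00000400138800
```

Every field still sits at a fixed offset, so existing VXLAN parsing pipelines need only minor changes; flexibility is capped at whatever fits in those fields, which is the trade-off the article describes.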
