
Understanding Software Defined Networking (SDN)

Mon, 17 Jun 2013

networking

Software Defined Networking (SDN) is a form of network management consolidation which offers greater control over complex network deployments. To gain an understanding of what exactly SDN is, and of its importance as a new direction in network management, it is first necessary to understand how network configuration has traditionally been achieved.

A typical network will contain a number of devices such as routers and switches as well as the copper and fiber cabling that runs in between. At the level of the router and switch, there are numerous possible configurations that, depending on requirements, need to be programmed by the engineer.

From interface VLAN tags and QoS settings to switch-wide protocol awareness and countless other variables - policies, firewalls, link aggregation, maximum transmission units and so on - the job of a network technician in a large operational environment is very much hands-on. Crucially, it requires the ability to access and configure individual boxes (switches and routers) so that an overall network fabric is maintained.
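
To make that box-by-box workflow concrete, here is a minimal sketch in Python. The Device class, its send_config method and the management addresses are hypothetical stand-ins for whatever CLI, SNMP or NETCONF mechanism a given vendor actually exposes; the point is simply that every change must be repeated on every box.

```python
# Hypothetical sketch of traditional, per-device configuration.
# Device and send_config are illustrative only, not a real vendor API.

SWITCHES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # example management IPs
VLAN_ID = 120
MTU = 9000

class Device:
    """Stand-in for a per-device management session (SSH CLI, SNMP, NETCONF...)."""
    def __init__(self, host):
        self.host = host

    def send_config(self, lines):
        # In reality this would open a session to the box and push the config.
        for line in lines:
            print(f"[{self.host}] {line}")

def configure_switch(host):
    Device(host).send_config([
        f"vlan {VLAN_ID}",
        "interface ge-0/0/1",
        f" switchport access vlan {VLAN_ID}",
        f" mtu {MTU}",
    ])

# The same change, repeated box by box across the fabric.
for host in SWITCHES:
    configure_switch(host)
```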

So how does SDN fit into this paradigm? Services provided from a cloud stack, which may not have a static physical location, can still require certain network-based technologies such as traffic engineering.

However, an IP address has traditionally been associated with particular identity aspects which are derived from its topological location and from the interface configuration itself (QoS, VLAN tag, etc.). Through SDN, network operators can ‘…specify network services, without coupling these specifications with network interfaces.’

‘This enables entities to move between interfaces without changing identities or violating specifications. It can also simplify network operations, where global definitions per identity do not have to be matched to each and every interface location. Such a layer can also reset some of the complexity build-up in network elements by decoupling identity and flow-specific control logic from basic topology-based forwarding, bridging, and routing.’ (Wikipedia)
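
A small sketch may help make that identity/interface distinction concrete. Below, a QoS and VLAN policy is keyed to a workload's identity (its MAC address here, a purely illustrative choice) rather than to a fixed port on a fixed switch, so the same definition follows the workload wherever it attaches. All names and values are invented for the example.

```python
# Hypothetical sketch: policy attached to an identity, not to an interface.

# Traditional model: policy is tied to a physical location on a specific box.
interface_policy = {
    ("switch-3", "ge-0/0/7"): {"vlan": 120, "qos_class": "gold"},
}

# SDN model: policy is tied to the workload's identity and is applied
# wherever that workload currently attaches to the fabric.
identity_policy = {
    "52:54:00:ab:cd:01": {"vlan": 120, "qos_class": "gold"},   # a VM's MAC
}

def policy_for_attachment(mac, switch, port):
    """Resolve the policy to program when `mac` appears on (switch, port)."""
    policy = identity_policy.get(mac)
    if policy is None:
        return None
    # The controller, not the operator, maps identity -> current interface.
    return {"switch": switch, "port": port, **policy}

# The VM migrates to a different switch; its policy moves with it unchanged.
print(policy_for_attachment("52:54:00:ab:cd:01", "switch-9", "ge-0/0/2"))
```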

Initial SDN-related encapsulation and de-encapsulation, through protocols such as OpenFlow, can be done on the hypervisor via a ‘top of rack’ method; these locally distributed cloud hypervisors connect upstream to SDN-aware (commodity) switches which act as a forwarding plane.
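
The sketch below illustrates that edge role in outline: a virtual switch on the hypervisor wraps each tenant's traffic with an overlay identifier (a VXLAN-style VNI is assumed here purely for illustration) before handing it to the upstream top-of-rack switch, which only has to forward the outer packet. The packet structures, tunnel endpoints and mappings are all hypothetical.

```python
# Hypothetical sketch: encapsulation at the hypervisor edge. The upstream
# commodity switch only ever sees, and forwards, the outer header.

from dataclasses import dataclass

@dataclass
class Packet:
    src: str          # sending VM
    dst: str          # destination VM
    payload: bytes

@dataclass
class EncapsulatedPacket:
    outer_src: str    # local hypervisor tunnel endpoint
    outer_dst: str    # remote hypervisor tunnel endpoint
    vni: int          # overlay/tenant identifier (VXLAN-style, illustrative)
    inner: Packet

# State maintained by the hypervisor vSwitch under controller direction:
# which tenant a VM belongs to, and which hypervisor its peer lives on.
TENANT_VNI = {"vm-a": 5001, "vm-b": 5001, "vm-c": 7002}
VM_LOCATION = {"vm-b": "192.0.2.20", "vm-c": "192.0.2.30"}
LOCAL_VTEP = "192.0.2.10"

def encapsulate(pkt: Packet) -> EncapsulatedPacket:
    """Wrap a VM-to-VM packet for transport across the physical fabric."""
    return EncapsulatedPacket(
        outer_src=LOCAL_VTEP,
        outer_dst=VM_LOCATION[pkt.dst],
        vni=TENANT_VNI[pkt.src],
        inner=pkt,
    )

print(encapsulate(Packet(src="vm-a", dst="vm-b", payload=b"hello")))
```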

Commodity switches form the core physical fabric of the network and run back to the SDN controller. Software running on the SDN controller works alongside more traditional control plane (routing engine) software and requires a more robust server cluster than the forwarding switches. This stack may or may not be distributed/replicated (through leader election) over multiple sites.

Further network control servers may be located between the routing/controller and switching/forwarding plane levels and aid in the decoupling of control (routing and network definition) and forwarding (processing of control plane definitions).
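
In outline, that division of labour looks something like the following sketch: the controller layer holds the topology and computes paths (control), while the switches hold nothing more than the match-to-action flow tables the controller installs (forwarding). The class names, link map and path choice are illustrative assumptions rather than a description of any particular product.

```python
# Hypothetical sketch of the control/forwarding split: the Controller decides,
# the Switch merely looks up and applies what it has been told.

class Switch:
    """Forwarding plane: a simple match -> action table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                 # match (destination) -> output port

    def install_flow(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        return self.flow_table.get(dst, "send-to-controller")

class Controller:
    """Control plane: holds topology, computes paths, programs the switches."""
    def __init__(self, switches, links):
        self.switches = switches
        self.links = links                   # (switch_a, switch_b) -> (port_a, port_b)

    def program_path(self, dst, path):
        # Walk the chosen path and install one flow entry per hop.
        for hop, nxt in zip(path, path[1:]):
            out_port, _ = self.links[(hop, nxt)]
            self.switches[hop].install_flow(dst, out_port)

switches = {name: Switch(name) for name in ("s1", "s2", "s3")}
links = {("s1", "s2"): (2, 1), ("s2", "s3"): (2, 1)}
controller = Controller(switches, links)

controller.program_path(dst="10.1.0.0/16", path=["s1", "s2", "s3"])
print(switches["s1"].forward("10.1.0.0/16"))   # -> 2
```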

SDN and associated protocols such as OpenFlow allow for a decoupling of the control plane and a programmatic overlay of core routing software.

Protocols such as OpenFlow enable this kind of decoupling and overlay capacity. They contain the smarts that allow the controller, and ultimately the network operator, to perceive the whole fabric as a single logical (programmable) switch.

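For a concrete taste of what that programmability looks like, the sketch below uses the Ryu OpenFlow controller framework (one of several open source controllers; its use here is an illustrative choice, not something prescribed by the article). On connection, each switch is given a table-miss rule that punts unmatched traffic to the controller - the hook from which a controller application can go on to treat the whole fabric as one programmable switch.

```python
# A minimal Ryu controller app (illustrative). Run with: ryu-manager this_file.py
# Assumes OpenFlow 1.3 capable switches pointed at the controller.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class FabricApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp = dp.ofproto
        parser = dp.ofproto_parser

        # Table-miss entry: anything the switch cannot match itself is sent
        # to the controller, which can then install more specific flows.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
        self.logger.info("programmed table-miss on switch %s", dp.id)
```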

This provides for some interesting capabilities with regard to dynamic provisioning and over-provisioning management. Software can recognise congestion building up and simply move services elsewhere on the grid. Other significant operational benefits also become available, such as those listed in the chart below; however, there are some drawbacks worth mentioning:
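
As a rough illustration of that congestion-driven behaviour, the loop below checks (invented) link utilisation figures and shifts a service onto a pre-computed alternative path when its current path crosses a threshold; a real controller would read such figures from switch counters, for example OpenFlow port statistics, and then reprogram the affected flow tables. The metrics, paths and threshold are all assumptions made for the example.

```python
# Hypothetical sketch: a controller-side check that moves traffic off
# congested links. Utilisation values are hard-coded for illustration.

LINK_UTILISATION = {            # fraction of capacity in use, per link
    ("s1", "s2"): 0.93,
    ("s2", "s3"): 0.55,
    ("s1", "s4"): 0.40,
    ("s4", "s3"): 0.35,
}

CANDIDATE_PATHS = {             # pre-computed alternatives per service
    "video-service": [["s1", "s2", "s3"], ["s1", "s4", "s3"]],
}

CONGESTION_THRESHOLD = 0.85

def path_load(path):
    """Worst-case utilisation along a path."""
    return max(LINK_UTILISATION[(a, b)] for a, b in zip(path, path[1:]))

def choose_path(service, current_path):
    if path_load(current_path) < CONGESTION_THRESHOLD:
        return current_path                          # nothing to do
    # Pick the least-loaded alternative; a real controller would now push
    # updated flow entries to the switches along both paths.
    return min(CANDIDATE_PATHS[service], key=path_load)

print(choose_path("video-service", ["s1", "s2", "s3"]))   # -> ['s1', 's4', 's3']
```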

Whilst many benefits for large-scale networks are clear, the implementation of SDN is proving to be slow at all levels, and general adoption is expected to take some time. For a fascinating insight into the topology and benefits of SDN, watch this video of a presentation by Amin Vahdat, Distinguished Engineer at Google, at the Open Networking Summit in April this year (sign up to view – Wednesday speakers).

One element of this slow uptake is the lack of SDN protocol standardisation. Whereas Internet routing has settled on protocols such as BGP, OSPF and IS-IS, SDN is yet to determine its own standard packet descriptions.

Yet that lack of definition may be precisely what some SDN evangelists are looking for. Whilst recognising the need for standardisation, Nick McKeown of the Stanford Clean Slate Program, Stanford University, describes OpenFlow as ‘A way to run experiments in the networks we use everyday.’ He sees the protocol as a way to overlay core routing with a dynamic form of self-determining flow control - see the OpenFlow papers and presentations here.

This is distinct from what he calls ‘The vendor nightmare’: an open source move beyond the overlay and into the routing engine itself, thereby circumventing the core technologies of companies like Cisco and Juniper.

Graphic courtesy of Nick McKeown, Stanford University

Nevertheless, although open source routing platforms for commodity hardware have long been available (Vyatta is one such example), traditional vendors have survived (and thrived) by providing timely products and solid support, as well as clever sales strategies. Furthermore, McKeown’s analysis seems to leave room for hardware vendor participation at the level of the routing software.

On the other hand, McKeown is also demanding innovation and equipment he can trust in his wiring closet - which appears to be a reference to ‘clean slate’ commodity processing hardware as well as OpenFlow-enabled vendor boxes. So it is interesting to look at how certain technology companies are dealing with SDN.

Traditional networking vendors such as Juniper and Cisco seem to have attempted a dual strategy: accepting, to varying degrees, the open source overlay model whilst at the same time working on proprietary end-to-end SDN products.

Juniper, for example, while rolling out an OpenFlow application, have also bought the SDN software company Contrail for $176M and intend to base their SDN strategy on its technology, which will be a software suite based on BGP rather than OpenFlow.

Cisco have also introduced their own software, ONE (Open Network Environment), which will be based on OpenFlow, with what seems to be an admission that networking companies are, in future, going to be software companies (see the comments by David Ward, chief architect of Cisco’s Service Provider Group, in this article).

The common issue for both companies is the need to deal with, on the one hand, the SDN overlay abstraction interacting with their traditional box-centric software, and on the other, the technical realities of the end of vendor hegemony in the networking sphere.

With competition coming from pure software players such as VMware, and possibly even the likes of Google and Facebook themselves - not to mention start-ups such as Big Switch Networks and Pluribus Networks (NetVizor) - perhaps the only option for the traditional players will be to move out of hardware altogether. Only the relentless march of time and the rapid developments in technology will tell.