As someone who has worked in both the telecommunications and IT industries, I find it fascinating how the two have influenced one another and produced similar technologies. For example, texting came from the telecom world, while the IT industry gave us Instant Messaging (IM); desktop and mobile phones came from telecom, while IP phones came from IT. Even the TCP/IP protocol suite is similar to the once-famous telecom protocol SS7 – an IP address is analogous to a point code in SS7, while DNS is analogous to Global Title Translation (I know SS7 geeks will get this). I would also argue that SDN in the IT world is conceptually similar to, and influenced by, the Intelligent Network and softswitching initiatives in the telecom world.
The idea of virtualizing the telecom network and TDM voice switches started back in the mid-1980s, after the breakup of the Bell System, which began telecom deregulation in the US. With the breakup, the resulting seven Regional Bell Operating Companies (RBOCs) started the Intelligent Network (IN) initiative. The goal was to become independent of their “previous parent,” AT&T, which was also the manufacturer of the network switches and telephony equipment. But the RBOCs realized that they needed to implement this massive change in phases. Rather than virtualize the whole architecture on day one, they decided that phase one would be to separate the services/apps from the switches and deploy them on commodity hardware. In other words, the focus of phase one was to virtualize the portion of the voice switches that defined the services/apps — such as voicemail and caller-name apps — and deploy them independently, outside the physical switches. That required the definition of an “open” interface so that these and other new services could easily be developed and deployed on general-purpose computers in the “cloud.” Since the “cloud” back then was not based on the Internet or the IP protocol, they decided to use SS7 as the protocol connecting the servers in the Intelligent Network “cloud.” At this phase, they didn’t worry too much about the control functions or the data plane of the switches – they figured that would come later, in phase two. After a few years, the initiative proved successful because it allowed non-switching vendors to offer new apps that integrated with and managed the voice network.
Phase two of virtualizing voice switching started about 10–15 years later, in the mid/late 1990s — this initiative was called softswitching. The goal of softswitching was not only to separate the control layer from the data layer so that they could be deployed independently on commodity hardware (i.e., general-purpose computing platforms), but also to deploy new services over IP rather than SS7. The initial proof of concept (POC) and trials worked well — there was a de facto southbound protocol between the control and data planes, initially based on MGCP, which was conceptually similar to today’s OpenFlow for SDN.
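The control/data split described above can be sketched conceptually. This is an illustrative model only, not actual MGCP or OpenFlow messages; all class and method names here are hypothetical:

```python
# Conceptual sketch of a softswitch-style control/data split.
# A controller holds the call/routing logic and pushes simple
# forwarding instructions to "dumb" data-plane elements over a
# southbound interface (MGCP then; OpenFlow plays a similar role today).

class DataPlaneElement:
    """A media gateway or switch: forwards according to installed rules."""
    def __init__(self, name):
        self.name = name
        self.rules = {}          # match -> action table

    def install_rule(self, match, action):
        # Southbound call: the controller programs the data plane.
        self.rules[match] = action

    def handle(self, packet_match):
        # The data plane only looks up rules; no routing logic lives here.
        return self.rules.get(packet_match, "drop")


class Controller:
    """Centralized control plane, runnable on commodity hardware."""
    def __init__(self):
        self.elements = []

    def attach(self, element):
        self.elements.append(element)

    def set_up_path(self, match, action):
        # A control-plane decision pushed down to every attached element.
        for el in self.elements:
            el.install_rule(match, action)


# Usage: the controller decides; the gateways merely forward.
gw = DataPlaneElement("media-gateway-1")
ctrl = Controller()
ctrl.attach(gw)
ctrl.set_up_path(match="caller:555-1234", action="route-to:trunk-7")
print(gw.handle("caller:555-1234"))   # -> route-to:trunk-7
print(gw.handle("caller:555-9999"))   # -> drop
```

The point of the split is visible in the sketch: all decision logic sits in `Controller`, and replacing the southbound "wire" (here a plain method call) with MGCP or OpenFlow does not change the architecture.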
But industry leaders quickly realized that there were two major shortcomings of softswitching:
1. The use of commodity hardware for the data layer proved impractical: it quickly became apparent that purpose-built hardware was needed for media services such as voice codecs, tones/announcements, and transcoding.
2. The emphasis on connecting the control and data layers via a southbound protocol such as MGCP did not enable the integration of apps above the control layer in the IP network (the more complex SS7 protocol was still required). A northbound protocol between the control and application layers in the IP network was desperately needed.
So how did the telecom industry address these two challenges? Today, almost 15 years later, here is where things ended up.
First, purpose-built hardware for the data infrastructure is still alive in softswitching, because the industry has concluded that the processing and switching of media can only be done at scale using specialized silicon such as Digital Signal Processors (DSPs). These purpose-built switches are referred to as media gateways (MGs) and session border controllers (SBCs), and they are offered both by incumbent vendors like Alcatel-Lucent and Siemens and by newer companies like Sonus Networks and Acme Packet.
Second, the northbound API in the voice network, which initially took a backseat while vendors focused on the southbound API, is now the Session Initiation Protocol (SIP). It has become more significant and prevalent than its southbound cousin, which has evolved from MGCP to Megaco/H.248.
So what has softswitching taught us about how SDN will evolve? I believe several things, but for now I will focus on three:
First, OpenFlow is a good starting point for a southbound API between the controller and the data plane. However, depending on your goals, it is not quite ready for prime time unless it is ‘customized’ to address some of its existing weaknesses, such as scalability and redundancy — this is apparently what Google has done with its recently announced OpenFlow-based deployments. I therefore believe it is immature today and will likely continue to evolve to a point where it is production ready. At that point it may be known as OpenFlow 2.0 or by another name, much like the evolution of MGCP to Megaco/H.248.
Second, going forward, especially with the recent founding of the OpenDaylight Project, I believe there will be much more focus on application integration via the northbound API. This emphasis on application integration is consistent with the SDN message of my employer, Enterasys Networks, which focuses on agility, simplicity, and network orchestration/automation through a feature-rich northbound API for the deployment of new applications. This view was also supported by poll results from an SDN webinar we conducted on April 7, 2013. One of the poll questions we asked the 300+ attendees was: “How would you rate the importance of a standards-based southbound protocol versus an open architecture with an open northbound API?” Over 80% of the respondents indicated that the northbound API was more important.
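The northbound/southbound distinction can be made concrete with a minimal, hypothetical sketch: an application never speaks to switches directly; it states intent through the controller's northbound API, and the controller translates that intent into southbound flow rules. Every name and interface below is invented for illustration:

```python
# Hypothetical sketch: northbound intent vs. southbound flow rules.

class Switch:
    def __init__(self):
        self.flow_table = []     # southbound-programmed (match, action) pairs

    def add_flow(self, match, action):
        # Southbound side: a concrete flow rule lands in the switch.
        self.flow_table.append((match, action))


class Controller:
    def __init__(self, switches):
        self.switches = switches

    # --- Northbound API: applications express intent, not flow rules ---
    def prioritize_app(self, app_port, queue):
        # The controller translates the intent into southbound rules.
        for sw in self.switches:
            sw.add_flow(match={"tcp_dst": app_port},
                        action=f"set-queue:{queue}")


sw = Switch()
ctrl = Controller([sw])
# An application (e.g. a VoIP orchestration app) calls the northbound API:
ctrl.prioritize_app(app_port=5060, queue="voice")
print(sw.flow_table)   # [({'tcp_dst': 5060}, 'set-queue:voice')]
```

The application's vocabulary ("prioritize this app") stays stable even if the southbound protocol underneath changes, which is one reason a rich northbound API matters so much for application integration.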
Finally, I believe the processing of data transport functions is still much better handled by purpose-built hardware with ASIC-enabled silicon than by commodity hardware – especially in flow-based switching. Flow-based switches (which differ from traditional packet-based switches) can perform deeper inspection of packets at scale, enabling better visibility and control of network traffic and resources. This means businesses can efficiently and easily capture, analyze, and transform network data into actionable business information. That depth and scale for network-based business intelligence cannot yet be achieved with commodity hardware.
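The difference between packet-based and flow-based switching can be sketched as follows (an illustrative contrast, not vendor code): a traditional switch keys its lookup on the destination alone, while a flow-based switch keys on the full 5-tuple, which is what gives per-flow visibility and control:

```python
# Illustrative contrast: destination-based vs. flow-based lookup.

# Traditional packet-based switching: key on the destination only.
mac_table = {"00:aa:bb:cc:dd:ee": "port-1"}

def packet_switch(dst_mac):
    return mac_table.get(dst_mac, "flood")

# Flow-based switching: key on the full 5-tuple, so each conversation
# (src, dst, protocol, ports) can be observed and treated individually.
flow_table = {
    ("10.0.0.5", "10.0.0.9", "tcp", 49152, 80):  "port-2,count,mirror",
    ("10.0.0.5", "10.0.0.9", "udp", 5004, 5004): "port-2,queue=voice",
}

def flow_switch(five_tuple):
    # Unknown flows are punted to the controller for a decision.
    return flow_table.get(five_tuple, "send-to-controller")

# Two flows to the same destination get identical treatment in the
# packet-based model but distinct treatment in the flow-based model.
print(packet_switch("00:aa:bb:cc:dd:ee"))                       # port-1
print(flow_switch(("10.0.0.5", "10.0.0.9", "tcp", 49152, 80)))  # port-2,count,mirror
print(flow_switch(("10.0.0.5", "10.0.0.9", "udp", 5004, 5004))) # port-2,queue=voice
```

Performing that 5-tuple lookup, counting, and mirroring on every packet at line rate is exactly the workload that argues for ASIC-based silicon rather than commodity CPUs.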