One of the great benefits of Network Functions Virtualization (NFV) is that network functions – such as encryption, firewalls, WAN acceleration, and IP routing – which were previously fragmented and delivered in purpose-built hardware (or boxes) deployed on-premises in an enterprise networking closet, can now be captured in software and hosted in a data center.
The motivation behind this approach is that physical devices need to be installed, maintained, and upgraded. They also fail occasionally and need to be replaced. And they can be costly, particularly when enterprises must buy multiple boxes and chain them together to assemble a complete suite of functions, such as encryption, routing, and a firewall.
But these functions are, at their core, just pieces of software. A firewall, for example, is just a sophisticated piece of software that manages what can go through a network and what is rejected based on corporate policies. A content distribution network appliance is software that decides whether a piece of data should be sourced locally – in a cache – or downloaded from the network.
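To make the point concrete, here is a minimal sketch of a firewall reduced to its software essence: a policy lookup applied to each packet. The rule format and field names here are assumptions for illustration, not any vendor's implementation.

```python
# Illustrative sketch only: a firewall is, at bottom, a policy lookup
# applied to each packet. Rule fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int
    protocol: str  # e.g., "tcp" or "udp"

# Corporate policy expressed as data: (protocol, port, action)
RULES = [
    ("tcp", 443, "allow"),   # HTTPS
    ("tcp", 22,  "deny"),    # SSH from the WAN
]
DEFAULT_ACTION = "deny"

def filter_packet(pkt: Packet) -> str:
    """Return 'allow' or 'deny' based on the first matching rule."""
    for protocol, port, action in RULES:
        if pkt.protocol == protocol and pkt.dst_port == port:
            return action
    return DEFAULT_ACTION

print(filter_packet(Packet("203.0.113.7", 443, "tcp")))  # allow
print(filter_packet(Packet("203.0.113.7", 22, "tcp")))   # deny
```

Once the function is expressed this way, where the code runs – edge box or data center – becomes a deployment choice rather than a hardware constraint.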
With advances in multi-core processors and high-density compute, these functions can now be hosted on a server farm in a data center. This brings real economic benefits, as virtualized functions are usually much easier to upgrade, maintain, and provision with new applications.
NFV deployment models include a distributed (sometimes called decentralized) model, in which virtual network functions are hosted on the customer premises at the edge of the network, and a centralized model, in which those functions reside not at the edge but in a central office or data center. The most common model is some sort of hybrid of the two.
The distributed model (D-NFV)
If the point of NFV is to get functions out of the box and into a data center, why do we want to put them back on enterprise premises with D-NFV?
D-NFV is important because sometimes the function itself needs to be located at the edge of the network. This could be to measure OA&M metrics, such as packet loss or latency; IT managers need end-to-end measurements, so these functions need to be at the network edge. Or it could be a security function such as encryption, or a WAN optimization function such as compression, which, for practical reasons, need to be performed at the edge of the network. Resilience is another example: if the network employs multiple access links and one acts as the backup, then the virtual network function needs to be at the edge to monitor the status of those links and ensure traffic is sent to an active link.
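As a rough illustration of why such a resilience function has to live at the edge, here is a hedged sketch of a link monitor that steers traffic to whichever access link is up. The probe function is a stand-in assumption; a real VNF would send actual OA&M probes.

```python
# Illustrative sketch: an edge-resident resilience function that watches
# two access links and steers traffic to whichever is up. The probe is a
# placeholder; a real VNF would run loss/latency tests on the links.

import random

def link_is_up(link: str) -> bool:
    """Stand-in for a real OA&M probe of the named access link."""
    return random.random() > 0.1  # assume ~90% availability for the demo

def select_active_link(primary: str = "link-A", backup: str = "link-B") -> str:
    if link_is_up(primary):
        return primary
    if link_is_up(backup):
        return backup
    raise RuntimeError("both access links are down")

print("forwarding traffic via", select_active_link())
```

Only a function sitting on the premises can observe both links directly and fail over in time; a copy running in a distant data center would be on the wrong side of the failure.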
Performance is another reason. Latency can affect certain applications, and the closer the data sits to the end user, the better the latency performance.
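A back-of-the-envelope calculation shows the effect. The distances below are assumptions for illustration; light in fiber covers roughly 200 km per millisecond.

```python
# Back-of-the-envelope propagation delay. Distances are assumed for
# illustration; light in fiber travels roughly 200 km per millisecond.

FIBER_KM_PER_MS = 200

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBER_KM_PER_MS

print(f"edge cache (10 km away):   {round_trip_ms(10):.2f} ms")    # ~0.10 ms
print(f"central DC (1,000 km away): {round_trip_ms(1000):.2f} ms") # ~10.00 ms
```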
Economics, on the other hand, can go either way; it depends on the function and how much of the function can be shared with other users. If functions can be shared, a centralized model might be more cost-effective.
Finally, certain corporate policies and government regulations might dictate that the D-NFV model be used. Where data and its manipulation can reside, for example, is often a concern. The choice can also be influenced by security issues – if links need to be encrypted, or if it's a high-security environment that needs end-to-end encryption, then D-NFV is the logical choice.
What’s in the Box Determines the Color
The distributed model can be implemented in a couple of ways. The first uses physical network devices – the pre-NFV approach in which each function has its own box and the boxes are chained together. This is often the present mode of operation: a firewall, an encryptor, and a router, for example.
Black Box: This physical approach is often deployed in what can be called a black-box implementation. All the required functions are stacked within a single vendor's box, the vendor having incorporated all of them into its hardware – a router with an encryption card and a built-in firewall, for example. While the black-box approach reduces the number of boxes required, the multi-purpose device can be fairly complex. What's more, customers are locked into whatever solution the vendor decides to support for that box, and the functions can only evolve at the pace the vendor dictates.
Grey Box: The next distributed option is what we call a grey-box implementation, in which the interface device remains in the network and does the actual forwarding of packets in and out of the wide area network. The functions themselves, however, such as the firewall and the encryptor, are software functions that run on a server module or blade within the box and can come from any interoperable vendor. The grey box is a halfway point between the black box above and the white box below.
White Box: The white box is the ultimate NFV implementation. The functions run on a commercial off-the-shelf (COTS) server that hosts everything, including the network interface device functions. The actual hardware is often just a conventional x86 machine running an operating system capable of hosting a variety of VNFs, including the network interface.
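To illustrate the flexibility the white box promises, here is a hedged sketch of service chaining done purely in software: each function is just a stage that transforms traffic, and stages can be added, removed, or swapped independently. The specific functions and their ordering are assumptions for illustration.

```python
# Illustrative sketch: on a white box, network functions are just software
# stages chained together. Each stage takes and returns a payload (bytes).
# The functions and ordering here are assumptions for illustration.

import zlib

def wan_optimize(payload: bytes) -> bytes:
    """Compress the payload before it crosses the WAN."""
    return zlib.compress(payload)

def encrypt(payload: bytes) -> bytes:
    """Placeholder transform; a real VNF would use IPsec/TLS, not XOR."""
    return bytes(b ^ 0x5A for b in payload)

def service_chain(payload: bytes, stages) -> bytes:
    for stage in stages:
        payload = stage(payload)
    return payload

outbound = service_chain(b"corporate traffic" * 10, [wan_optimize, encrypt])
print(len(outbound), "bytes on the wire")
```

Replacing one vendor's firewall with another's, in this model, means swapping a stage in the chain rather than unbolting a box.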
Any of these implementations can be rolled out to the edge of the network to provide the distributed NFV function, each with its own tradeoffs in terms of flexibility, cost, and performance. The white box is the ultimate goal, and the closer customers get to that model, the more flexibility they have: they can independently deploy applications, experiment with different vendors and implementations, and upgrade at their own pace, to name a few of the benefits. Total cost of operation also drops the closer customers get to the white box.
Centralized or hybrid?
Another dimension to consider is the choice between the centralized and hybrid D-NFV models. In the completely distributed model, all the functions sit at the edge in either a grey-box or white-box implementation. Nevertheless, some of those functions could still reside in a central office. The advantage is that some resources, such as memory and software licenses, can be shared, and software implementations can be rolled out relatively quickly.
What’s Right for You?
At the end of the day, there are always tradeoffs IT managers have to make in terms of where the network functions reside and how many servers need to be deployed, and where. A mixed or hybrid approach is therefore the most likely deployment scenario, with the exact split depending on the latency, flexibility, and total cost of ownership required. The good news is that IT managers now have a choice.