With Source Packet Routing in Networking (SPRING), the IETF standardized an alternative approach for signalling MPLS paths, also known as Segment Routing (SR). One of the many advantages of SPRING is the ability to define traffic-engineered paths through the network.

However, a few years ago, there was no router-based implementation available to provision or define these traffic-engineered paths. Some commercial controllers (such as Juniper Networks NorthStar) were available on the market. At that time, we asked ourselves: which multi-vendor traffic engineering options are available using Open Source tools? The research led to a development project in which we designed our own proof-of-concept SPRING controller, which works in our XT3Lab against a demo network topology.

Let us first have a look at the traditional label signalling protocols we use today, dive into the new SPRING approach and illustrate how we implemented our self-developed SPRING controller, using Open Source tools.


Label Distribution with LDP and RSVP

Traditionally, two protocols are in use for signalling MPLS labels within an autonomous system:

  • The Label Distribution Protocol (LDP) provides any-to-any connectivity for creating end-to-end Label Switched Paths (LSPs) between elements within an MPLS network, with little control-plane overhead. LDP can provide fast failover via the IP Loop-Free Alternate (IP-LFA) feature, but as it has no traffic engineering capabilities, it always follows the shortest path computed by the Interior Gateway Protocol (IGP).
  • If you want to force your traffic onto specific paths, you need the Resource Reservation Protocol with Traffic Engineering extensions (RSVP-TE). As with LDP, you can achieve any-to-any connectivity along the IGP shortest path if desired. In contrast to LDP, however, RSVP-TE also provides traffic engineering capabilities that allow you to explicitly signal LSPs along a distinct path within your MPLS network. The main issue with RSVP-TE is that it creates control-plane state on every node an LSP traverses. This behaviour is especially problematic on P routers and is the reason why RSVP-TE scaling is typically limited.


The new hope – SPRING

The idea behind SPRING is to envision networks as a collection of topological sub-paths, so-called segments. At first glance, SPRING may sound similar to RSVP-TE in terms of TE capabilities, but the way SPRING signals labels within a network is very different. It follows the source routing approach, which means that the list of segments a packet is intended to cross is encoded into the data packet itself. An additional benefit is that SPRING is not yet another protocol you add to your network. Instead, it is an extension that enriches the existing routing protocols, such as IS-IS, OSPF or BGP, with the label information required by SPRING.
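To illustrate the source routing idea, the following toy sketch (our illustration, not part of any SPRING implementation; node names and label values are invented) encodes a path as a stack of segment labels which transit nodes simply pop, without keeping any per-path state:

```python
# Toy model of SPRING source routing: the head-end encodes the whole path
# as a label stack; transit nodes only pop the top label and forward.

def encode_segment_list(path, node_sids):
    """Build the MPLS label stack for a list of nodes to traverse."""
    return [node_sids[node] for node in path]

def forward(label_stack):
    """Simulate transit nodes: each hop consumes the top label in turn."""
    hops = []
    stack = list(label_stack)
    while stack:
        hops.append(stack.pop(0))  # top-of-stack label decides the next hop
    return hops

# Hypothetical node SIDs as they might be advertised via IGP extensions
node_sids = {"PE1": 800001, "Core2": 800002, "Core5": 800005, "PE3": 800003}
stack = encode_segment_list(["Core2", "Core5", "PE3"], node_sids)
print(stack)  # [800002, 800005, 800003]
```

The point of the model: no node along the path needs any per-LSP state, in contrast to RSVP-TE.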


SPRING Controller with Open Source tools

Inspired by the Juniper Networks case study “Using BGP-Labelled Unicast for Segment Routing Traffic Engineering”, Xantaro’s SPRING controller approach evolved into the project presented below. The prototype of a SPRING controller provides a web-UI which displays the network topology as an interactive graph, as depicted in the image below.



In addition to the topology display, it retrieves and displays various additional information gathered directly from the devices via NETCONF, Arista’s EOS API (eAPI) or similar APIs. In parallel to actively pulling this information from the devices, we also integrated the Juniper Telemetry Interface (JTI) as well as OpenConfig streaming telemetry into the controller to consume and visualize real-time, push-based interface statistics within the controller UI.



Architecture of the SPRING Controller

The essential question answered by this project is which open source components can be used to form a fully functional SPRING controller. The following controller architecture provides a clear answer to this question.



Let’s have a look at the individual components:

  • ExaBGP: ExaBGP is a Python-based BGP implementation. It handles all message exchanges with remote BGP speakers. The controller makes use of two MP-BGP address families, namely BGP-LU (BGP Labelled Unicast) and BGP-LS (BGP Link State).
    BGP-LS is used to gather the topology information distributed by the IGP, which in the case of SPRING is enriched with label information. It is sufficient to peer with only one BGP-LS speaking network device. BGP-LU sessions are established with every SPRING-enabled device that needs to receive traffic engineering information. These sessions are used to announce prefixes with an appropriate label stack towards the ingress nodes, describing the intended path across the network.
  • RabbitMQ: RabbitMQ is an Open Source message broker. Published messages are routed via exchanges into queues, and the messaging system implements different delivery patterns: think of unicast, multicast or broadcast style message delivery. The message queue concept allows the various parts of the SPRING controller to remain loosely coupled, which reinforces its scaling capabilities.
  • Redis: Redis is an in-memory key-value store. It serves as the central data store for the various components of the controller: most of the scripts can run independently, but they need some shared persistence.
  • Python Scripts: Self-developed Python scripts provide the glue between the multiple components of the project:
    – ExaBGP2RabbitMQ: This script is a simple forwarding agent. It takes the JSON output from ExaBGP without parsing or inspecting it and pushes it into a RabbitMQ exchange, from where the data is delivered into a single shared queue. The script is kept as short and lightweight as possible, since it might become a bottleneck when processing large quantities of messages.
    – ExaBGPParser: The ExaBGP parser script reads messages from the queue, parses them, and converts and strips the contained information into a format that can be processed by the controller. The resulting dataset is saved in the Redis K/V store, which in turn spawns a new message containing the extracted information. These messages are specific to the actual content (e.g. link, node or prefix updates) and are consumed by type-specific Python scripts. As the parsing is decoupled from the receiving and storage procedures, this module can be started multiple times in parallel, providing excellent scaling capabilities.
    – PyEZ / NETCONF / EOS API (eAPI) scripts: These scripts subscribe to the aforementioned link, node and prefix updates. As soon as specific messages (e.g. prefix updates) are received, we collect additional information from the node in question and save it in the Redis backend.
  • Open-NTI: The Juniper Open Network Telemetry Interface (Open-NTI) project is a collection of open-source components forming a framework that collects streaming telemetry data. The components used are InfluxDB as a time-series database, Telegraf as a receiver/filter pipeline and Grafana as the web-UI. Open-NTI includes further components such as a syslog server, but since we do not rely on it, we explicitly disabled that component. The native OpenConfig telemetry components interact directly with InfluxDB instead of using any functionality added by Juniper's Open-NTI.
  • web-UI: The web-UI is served by an Nginx web server. A PHP process on the server accesses the Redis datastore and provides its data via an API to the Vis.js JavaScript graph framework, which is fed via AJAX calls. Vis.js is an open-source JavaScript charting and graphing framework used to display the nodes (SR nodes) and edges (network links) using a gravitational model. The PHP process also queries the InfluxDB, which is part of the Open-NTI framework, to retrieve the available interface telemetry information.
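To make the division of labour more concrete, here is a minimal sketch of how the ExaBGP2RabbitMQ forwarding agent described above might look (our illustration, not the project’s actual code; the host and exchange names are assumptions):

```python
# ExaBGP runs helper scripts as child processes and hands them its JSON
# messages line by line on stdin; this agent forwards each line unmodified
# into a RabbitMQ fanout exchange, where it lands in a shared queue.
import sys

def iter_messages(stream):
    """Yield non-empty lines (ExaBGP emits one JSON document per line)."""
    for line in stream:
        line = line.strip()
        if line:
            yield line

def run(host="localhost", exchange="exabgp"):
    import pika  # third-party RabbitMQ client (pip install pika)
    connection = pika.BlockingConnection(pika.ConnectionParameters(host))
    channel = connection.channel()
    channel.exchange_declare(exchange=exchange, exchange_type="fanout")
    for message in iter_messages(sys.stdin):
        # No parsing or inspection -- keeps this potential bottleneck cheap
        channel.basic_publish(exchange=exchange, routing_key="", body=message)
    connection.close()
```

Keeping the agent this dumb is deliberate: all parsing happens downstream in the ExaBGPParser instances, which can be scaled out independently.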

The whole project is packaged as multiple Docker containers glued together via the docker-compose infrastructure. This approach makes the project very lightweight and simple to deploy.
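As a rough illustration, such a docker-compose setup might look like the following sketch (service names, images and build paths are our assumptions, not the project’s actual file):

```yaml
version: "3"
services:
  redis:
    image: redis:alpine          # central K/V store
  rabbitmq:
    image: rabbitmq:3            # message broker between the components
  exabgp:
    build: ./exabgp              # ExaBGP plus the forwarding agent
    depends_on: [rabbitmq]
  parser:
    build: ./parser              # ExaBGPParser, can be scaled out
    depends_on: [rabbitmq, redis]
  webui:
    build: ./webui               # Nginx + PHP frontend
    ports: ["80:80"]
    depends_on: [redis]
```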


Demo Topology

As you can see in the picture below, the demo topology consists of nine Juniper Networks vMX instances. Six of these instances provide an MPLS-P ring topology (orange), and three devices are connected as PEs to that core network (green). IS-IS is used as the IGP with the Segment-Routing extension enabled.

Each of the nodes maintains a BGP session with the SPRING Controller, which is physically connected to Core2 and Core3. These sessions only carry the BGP-LU (Labelled Unicast) address family, which is used to announce IP prefixes from the controller to the network nodes.

The BGP-LU SAFI (Subsequent Address Family Identifier) allows you to attach additional information (an MPLS label stack) to IP prefixes, on top of what the usual IP SAFIs already provide. This label stack represents the path the packet has to take through the network.
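As an illustration of how a controller could inject such a labelled route, the following sketch emits an ExaBGP API command on stdout, which ExaBGP (running the script as a helper process) would pick up. The prefix, next-hop and label values are hypothetical, and the exact `label` attribute syntax is an assumption that may differ between ExaBGP versions:

```python
# Sketch: hand a BGP-LU route with a label stack to ExaBGP via its process
# API (ExaBGP reads API commands from the helper script's stdout).

def announce_lu(prefix, next_hop, labels):
    """Return an ExaBGP API command announcing a labelled-unicast prefix."""
    stack = " ".join(str(label) for label in labels)
    return f"announce route {prefix} next-hop {next_hop} label [ {stack} ]"

if __name__ == "__main__":
    # Hypothetical values: steer traffic via node SIDs 800002 then 800005;
    # a real helper would then stay alive so ExaBGP keeps the route.
    print(announce_lu("203.0.113.0/24", "10.0.0.2", [800002, 800005]))
```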

The Core3 node is configured with an additional address family, namely BGP-LS (Link-State). This address family carries IGP link-state information, and hence topology information, within a BGP session towards the controller. This way the controller does not have to be part of the IGP or speak OSPF or IS-IS itself, yet still receives all the information available via these protocols.


SPRING and controller in action

To give you a better insight into the developed SPRING controller and the functionality of SPRING, we have prepared a video which shows the start-up procedure, the loading of live data from the network elements and how easily traffic-engineered paths can be provisioned:



  • At the beginning, the project is initialized via Docker-Compose, and the web frontend is started. You can see the graphical output of the demo topology and the additional node information (router ID, version, chassis, uptime) via the tooltip view.
  • The “Nodelist” is used to search for a specific node and to zoom in on it. This functionality is especially useful for very large networks.
  • Subsequently, the “Telemetry” tab displays the real-time data of the connected interfaces.
  • Using the “SPRING-Path” function, the interactive provisioning of a TE path for a prefix is shown as an example. The recording is started via the “Record Path” button, and the path is defined by mouse clicks on the corresponding icons of the nodes and links.
  • The CLI commands “show route” and “traceroute no-resolve” are used on PE1 to check the successful provisioning of the correct path.
  • Finally, the interface utilization at PE1 is shown via the execution of pings.


For further details on SPRING, its general benefits and integration into an operational network, please use the contact form below.




