
Bachelor Informatica

Virtual testbed for SURFnet

Tool evaluation and prototype

Cees Portegies

June 8, 2018

Supervisor(s): MSc. Marijke Kaat, dr. Paola Grosso

Informatica, Universiteit van Amsterdam


Abstract

Physical testbeds are used as analogues for service providing networks in order to evaluate protocols, configurations and hardware before deployment. Such physical testbeds come with downsides due to their physical nature. The hardware in these testbeds needs to be bought, placed and fed with power, all of which costs money and inhibits the scale and versatility of these testbeds. Additionally, their physical nature makes changing physical connections a time-consuming operation, exacerbating their limited versatility. Virtualization technologies offer the possibility to construct a virtualized testbed. Such a virtual testbed can help with these downsides since it allows for the decoupling of software and the underlying hardware. In this research a virtual testbed has been constructed to aid in the construction of SURFnet's SURFnet8 network. To this end, Juniper virtual devices have been connected in a virtualized network using the Wistar software. It has been found that the virtual network can exhibit the same behaviour as its physical counterpart. However, the scaling limitation of the physical testbed was not overcome in the virtual testbed: fewer nodes could be used than are currently in use in the physical testbed. This was due to the usage of the Juniper virtual device, which is heavy on resource usage. The Juniper virtual device was required as it features the same configuration and protocol behaviour as the devices in the production network. This research found that the main factor which determines the viability of a virtual network is the quality and availability of virtual devices that behave like their physical counterparts.


Contents

1 Introduction

2 Virtualized networks
  2.1 Virtual device
  2.2 Virtualization and emulation
  2.3 Networking

3 SURFnet

4 Requirements and constraints
  4.1 Behaviour
  4.2 System integration
  4.3 Security
  4.4 Management
  4.5 Scale
  4.6 Maintainability
  4.7 Summary and constraints

5 Evaluation of software
  5.1 Juniper virtual devices
  5.2 Wistar
    5.2.1 Management
    5.2.2 External connectivity and security
    5.2.3 Backend
  5.3 Eve-NG
    5.3.1 Management and integration
    5.3.2 Backend
    5.3.3 Licensing and security
  5.4 Other software
  5.5 Comparison of software

6 Construction of a prototype virtual testbed
  6.1 Construction of the prototype
  6.2 vMX configuration
  6.3 Results of configuration
    6.3.1 Link-Layer Discovery Protocol
    6.3.2 IS-IS routing protocol
    6.3.3 Resultant connectivity
  6.4 Performance evaluation

7 Conclusion and future work
  7.1 Future work


CHAPTER 1

Introduction

Testing networks (testbeds) are widely used in academia and industry to evaluate existing network configurations and protocols as well as to conduct experiments for new standards[1][2][3][4]. Additionally, in a service provider setting, they might also be used to conduct networking experiments on a small, isolated scale before deploying to a service providing production network. For educational purposes, testbeds are constructed for students to conduct exercises and to familiarize them with various networking scenarios[5][6][7]. Traditionally speaking, testbeds consist of physical networking hardware and servers. This hardware is connected in the configuration needed for the experiment of interest at that moment. To conduct a different experiment or test new configurations, time and resources would be required to physically change the testbed. New hardware might need to be acquired and integrated, as well as physical connections moved or reconfigured. This means that the physical testbed will take up more space and use more power. The hardware in question may need to be that of a specific vendor or vendors. Especially when the testbed is used as an analogue of a production network, the same hardware needs to be present in both to allow for representative testing. Such specific hardware is often expensive and might require licensing to work correctly, inhibiting the scale of such a networking testbed. These factors are significant drawbacks in the operation of physical testbeds, prompting the need to develop better ways to construct and manage them.

The advent of virtualization technologies has given rise to the possibility to create virtualized testbeds. Virtualization allows for the decoupling of software and the underlying hardware[7]. This decoupling allows for greater utilization of the hardware, running multiple virtual machines on a single physical one. It is possible to start new virtual machines, destroy old ones or copy existing ones, all within a software environment. This allows for dynamic usage of the underlying hardware. Each of these actions would only take the time for the software to act, without having to take the time to buy and deploy new hardware. Additionally, due to the virtual nature of these machines, any changes in configuration or management can be achieved without physical action. Moreover, if a virtual machine crashes or becomes unresponsive, it can often be easily reset to a previously saved state or replaced entirely with a new virtual machine. Both these actions require no physical interaction and are done purely in software. This has led to the extensive adoption of virtualization technologies in datacenters to create virtualized production infrastructure[8]. These virtualization technologies can also be deployed to create virtualized networks for testing purposes: virtual testbeds. Such virtual testbeds have been constructed successfully for various purposes, from the GENI network, used for research purposes[3], to small scale educational testbeds with only a few nodes[6][7].
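As a small illustration of this software-only lifecycle, the sketch below uses the libvirt Python bindings to revert a misbehaving virtual machine to a previously saved snapshot, or to replace it by forcing it off and booting it again; the connection URI, domain name and snapshot name are hypothetical.

```python
import libvirt  # Python bindings for the libvirt virtualization API

# Connect to the local QEMU/KVM hypervisor (URI depends on the deployment).
conn = libvirt.open("qemu:///system")

dom = conn.lookupByName("vmx-node-01")           # hypothetical domain name

# Option 1: revert the VM to a previously saved state.
snap = dom.snapshotLookupByName("clean-config")  # hypothetical snapshot name
dom.revertToSnapshot(snap)

# Option 2: replace the VM entirely by forcing it off and booting it again.
if dom.isActive():
    dom.destroy()   # hard power-off; the definition on disk is kept
dom.create()        # boot a fresh instance from the stored definition

conn.close()
```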

The nodes in these virtual testbeds range from general Linux based operating systems[6] to special virtual machines aimed at emulating real world networking devices[9]. These virtual testbeds are constructed with various tools, sometimes specially developed for a specific use case such as providing an interface for lab exercises for students. Additionally, software stacks have been developed to help with the construction of virtual networks, based on various types of virtual devices. These software stacks consist of elements which handle the management of the underlying hardware, the creation and destruction of the virtual machines, as well as the management of the software running inside the virtual machines.

This research aims to evaluate these virtualization tools and software stacks in order to construct a virtual testbed that is focused on emulating a production network. The production network that this research focuses on is the SURFnet8 network, which is being developed by SURFnet[10].

This organization currently uses a physical testbed, but is interested in exploring virtualization to augment this testbed. In order to achieve this, a virtual testbed needs to be able to offer the same functionality as the physical one. This means that it should properly represent the production network, allowing for testing of the protocols, configurations and software which are also to be used in the production network. This research therefore poses the question:

Can a virtual networking environment that is representative of a production network be constructed with currently available tools?

In order to answer this question, the context regarding virtual networks and their construction is explained in chapter 2. In chapter 3, SURFnet and the physical testbed used for the development of SURFnet8 are described in more detail. In chapter 4, a list of requirements is given to which the virtual testbed needs to adhere in order to correctly represent the production network. With these requirements established, various software components for the construction of the virtual testbed are evaluated in chapter 5. The constructed testbed prototype is described and evaluated in chapter 6.


CHAPTER 2

Virtualized networks

Network virtualization is formally defined as “... the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network.” For example, this can mean that a network no longer requires several physical routers and switches, but instead uses virtual versions of these devices. This process of network virtualization consists of three main parts: the virtualized network device, the software that conducts the virtualization (hypervisor) and, lastly, the networking part of the virtual network. This layered abstraction is visualized in figure 2.1, which also shows the architecture of a traditional device in a network. Since both the physical and virtual devices are networking devices, they have more than a single interface (NIC) to connect to the network. This figure also illustrates that, with physical networking as the lowest part of the stack, both physical and virtualized devices can be connected together. Physical networking in this case consists of physical networking media as well as other networking devices. The virtual device, virtual networking and virtualization software are explained in this chapter to create a more complete context on what role each of these components plays in a virtual network. Additionally, this outlines what impact each component has on the capabilities and limitations of a virtual testbed.


Figure 2.1: A traditional device and its virtualized counterpart


2.1 Virtual device

The nodes that make up the virtual topology of the testbed are the top part of the layered abstraction. These can consist of specialized software developed to mimic traditional devices, such as the Vyatta2 brand of virtual router, or specialized software focused on one network function, such as wireless communication routing protocol behaviour as in the research by Maier, Herrscher, and Rothermel[11]. Various vendors of enterprise networking equipment also offer emulation software or virtual devices that closely resemble their hardware offerings, such as Cisco, Juniper or Fortinet3.

The choice of what sort of virtual nodes are used in a virtualized network is determined by the goal of the virtual network. In academic research, when the goal is to familiarize students with general networking theory and practice, the choice can be made to use general Linux based machines or other generalized networking software. For example, in the research by Wannous and Nakano, the goal was to allow students to implement simple networking topologies in a controlled fashion, and as such a stripped down adaptation of Linux was used. This stripping down allowed the researchers to use simplified management and put less stringent requirements on the virtualization layer, as only limited features and no special software were required. These topologies only used simple routing and switching, without for example BGP peerings or dynamic routing protocols. Currently, the trend in network devices is to use general Linux as a base operating system, with specialized software on top[12]. This then allows for a similar approach for a virtual node in the virtual testbed. Assuming no drastic changes to the underlying Linux system, this means that any virtualization solution that works with Linux could work for these devices. In contrast, some virtual testbeds are aimed at testing or educating their users on specific hardware from certain vendors. Research by Li, Pickard, Li, et al. was, for instance, aimed at Cisco4 certifications and as such needed devices that exhibited Cisco's style of configuration and behaviour[5]. As such, that research focused on using Cisco's Dynamips emulation platform for its virtualization solution. This Dynamips software is aimed solely at emulating the behaviour of actual Cisco hardware, so no other types of virtual devices can be integrated via this software. The choice of virtual device or devices used in the testbed heavily influences the capabilities of and requirements on the network. Not only do the virtual devices in the testbed need to be compatible with each other, they also require the underlying virtualization and networking to work with them. For instance, virtual devices that use special emulation, such as Cisco's Dynamips, have limited integration options with other virtualized devices. This is an important point, since in the research by Galán, Fernández, Fuertes, et al. the desire to use both Cisco and general Linux based virtual machines presented problems[7]. To allow both of these kinds of devices to be present in a virtual setting, the management software needed to be compatible with both styles of configuration files, requiring additional development effort. This is relevant for a production environment in which devices from various vendors may be present, either directly or where the network connects with other networks. For a testbed it is therefore useful to have the capability to connect devices from multiple vendors, either virtual or physical.

2.2 Virtualization and emulation

The second layer in the virtual network stack, the virtualization software, is established as the field of computer virtualization. This field has been developed since 1968 as a way to decouple hardware and software[7]. There are various sorts of virtualization solutions, and each offers its own compatibility with virtual devices and networks as well as its own feature set.

Firstly, emulation is a software solution that completely emulates the hardware of a specific device in software[13]. This allows for great compatibility with specific device software, since it can be run completely unmodified in the virtualized environment. The device software will run as it would on the actual hardware, allowing for very accurate testing and evaluation at the cost of the overhead associated with emulation. Depending on the emulation software however, it can be limited to running only a single device or device family. Again, the Dynamips emulation software is a good example, as it is only capable of emulating Cisco's hardware. This is a limiting factor for this type of emulation software, as it would require the integration of several virtualization solutions in a single testbed if multiple types of devices need to be supported.

2 https://en.wikipedia.org/wiki/Vyatta
3 https://www.fortinet.com
4 https://www.cisco.com/

On the other end of the spectrum are paravirtualization and containerization, both of which require the device software to be specially made to run in these environments[13]. Paravirtualization requires the guest operating system to interact with the hypervisor to request resources or to run instructions. For containerization, the software in the container makes use of the same underlying kernel as all the other containers on the host, thus requiring the software to be specially made for such a kernel and environment. Due to these specific modifications, the performance offered is much greater, allowing for better scaling[11]. Additionally, the higher degree of integration between the virtualization layer and the node software can benefit the management, as the virtualization layer can have more interaction with the nodes. This requirement of specially made software means that only a limited number of vendors support it; neither Cisco nor Juniper6 offer virtual devices which can run in a containerized or paravirtualized fashion.

Due to this limitation, these virtualization options are only possible when these specific vendors' virtual devices do not need to be present in the testbed. Notably, the Vyatta virtual router can be run in a paravirtualized environment. For example, in the research by Wannous and Nakano, the paravirtualization solution Xen7 was a suitable choice since only stripped down Linux nodes needed to be used and these are entirely compatible with the Xen hypervisor[6].

The last type of virtualization is native virtualization. This virtualization solution requires the virtual device to be compatible with the host hardware architecture, but the virtual device's operating system does not need to be modified for special integration with the hypervisor[14]. In this type of virtualization, the hypervisor can allow the virtual machines direct access to some parts of the host hardware, like CPU instructions, PCI-e cards and more. This type of virtualization therefore allows for good performance, while setting only limited compatibility requirements. Its performance scaling potential for virtual networks has been demonstrated in the research by Chan and Martin[9]. In this research, a VMware8 native virtualization solution was used to construct a virtual network with 30 Vyatta nodes on a single physical host. Well known hypervisors that offer this kind of virtualization are KVM9, VMware and VirtualBox. Virtual devices may offer compatibility with certain hypervisors depending on the exposed CPU features.

6 https://www.juniper.net/
7 https://xenproject.org
8 https://www.vmware.com/
9 https://www.linux-kvm.org/page/Main_Page

2.3 Networking

Lastly, since what is being constructed is a virtualized network rather than isolated machines, networking needs to be possible. This networking component has two parts: the network interface (NIC) supplied to the VM and how this NIC is connected to other parts of the network. The networking interface supplied to the virtual machine can be either a software defined interface or a hardware interface which is passed through to the machine[14]. These possible architectures are displayed in figure 2.2, showing how virtual devices can be interconnected with each other or even with other physical devices. The virtual NIC is defined by the hypervisor and the virtual device needs to be compatible with the NIC. Most hypervisors offer multiple types of virtual NICs. VMware, for example, offers its special vmxnet NIC, which has good performance but requires the virtual machine to have special modifications[15]. Similarly, KVM offers the virtio11 NIC, which also has good performance, at the cost of the same special compatibility. Additionally, both these hypervisors can emulate the Intel12 E1000 Ethernet adapter as a virtual NIC. For both these hypervisors, this means that as long as the virtual device is compatible with the E1000 hardware, it can be offered a suitable virtual NIC. Similarly, if a hardware NIC is passed through to the virtual machine, meaning it is given direct access to the NIC hardware, then the virtual device requires compatibility with that specific hardware.
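As a concrete illustration of this choice, the minimal sketch below generates the libvirt interface definition that a KVM guest would be offered; apart from the model name (paravirtualized virtio versus emulated e1000), the definition is identical. The bridge name is hypothetical, and libvirt is used here only as one example of how such a NIC can be defined.

```python
# Minimal sketch: generate the libvirt <interface> definition for a KVM guest,
# selecting either the paravirtualized virtio model or the emulated e1000 model.
# The bridge name "br-test" is hypothetical.

def interface_xml(model: str, bridge: str = "br-test") -> str:
    assert model in ("virtio", "e1000")
    return (
        "<interface type='bridge'>\n"
        f"  <source bridge='{bridge}'/>\n"
        f"  <model type='{model}'/>\n"
        "</interface>"
    )

if __name__ == "__main__":
    # The only difference between the two definitions is the model type,
    # but the guest must carry a driver for whichever model is offered.
    print(interface_xml("virtio"))
    print(interface_xml("e1000"))
```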

Figure 2.2: The possible networking solutions for virtual devices, with hardware and software

The second part of this networking stack is how these NICs, virtual or physical, are connected together to form the network. If the virtual device is offered a physical NIC, this interface can be connected to other physical network interfaces. The device can be connected via cables to any other physical device, such as a router, a switch or another physical NIC passed through to another virtual machine. For a virtual NIC, there are other possibilities, again depending on the hypervisor. VMware offers a virtual switch which allows NICs, virtual and physical, to be connected as if they were connected to a physical switch[15]. It aims to offer complete Ethernet Layer 2 functionality; however, it is not possible to connect multiple virtual switches to each other. On the other hand, KVM and Xen use various Linux networking subsystems for their software networking. As such, virtual NICs can be connected via Linux bridges, which then serve as virtual switches, but with several limitations. For instance, such Linux bridges do not forward LLDP frames by default[16]. However, there also exists Open vSwitch, which also works with Linux virtual and physical interfaces. This software strives to be a more complete virtual switch and additionally has better scaling performance than Linux bridges[17].
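The sketch below illustrates this limitation on a Linux host: it builds a bridge with the iproute2 tools, attaches two hypothetical VM tap interfaces, and then sets the bridge's group_fwd_mask so that LLDP frames (destination MAC 01:80:C2:00:00:0E, bit 14 of the mask) are forwarded. This is a sketch assuming root privileges on the host, not a description of any particular tool's behaviour.

```python
import subprocess

BRIDGE = "br-test"                    # hypothetical bridge name
TAPS = ["vmx1-ge0", "vmx2-ge0"]       # hypothetical tap interfaces of two VMs

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

# Create the bridge and attach the virtual interfaces to it.
run("ip", "link", "add", BRIDGE, "type", "bridge")
run("ip", "link", "set", BRIDGE, "up")
for tap in TAPS:
    run("ip", "link", "set", tap, "master", BRIDGE)

# By default a Linux bridge does not forward link-local frames such as LLDP.
# Bit 14 (0x4000 = 16384) of group_fwd_mask corresponds to 01:80:C2:00:00:0E.
with open(f"/sys/class/net/{BRIDGE}/bridge/group_fwd_mask", "w") as f:
    f.write("16384")
```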

11 https://wiki.qemu.org/Documentation/Networking#Virtual_Network_Devices

12 https://intel.com/


CHAPTER 3

SURFnet

This research has been conducted in collaboration with SURFnet. SURFnet provides network services for all Dutch universities, as well as other higher education institutes and research centers. Additionally, they work on new technologies and collaborate on innovation programs, such as the GÉANT ICT infrastructure network[18]. They currently use their SURFnet7 network to provide networking services, but are working on the construction of a new network, SURFnet8[10]. This new network is shown in figure 3.1, visualizing the network at a high level with its various international connections. This new network will have over 300 nodes.


Figure 3.1: High level overview of the SURFnet8 network

In order to aid in the construction of this new production network, SURFnet makes use of a physical testbed. Physical testbeds have been used to develop networking standards for bandwidth, routing and the interoperability of protocols and configurations[2][19][20]. Similarly, SURFnet uses this physical testbed to test configurations and devices. Additionally, connections to other networks are simulated and the interoperability with these external networks evaluated. A high level overview of this testbed is shown in figure 3.2. The J2K 02 MX2008 device is shown in the topology, but this device is not yet actually deployed in the physical testbed.


Figure 3.2: Physical testbed version 6.05 showing the connections between the devices

In figure 3.2 the testbed is shown with the network connections represented by coloured lines. The colours of the lines relate to the capacity and medium of the connections. The various Juniper devices are denoted with an MX prefix, as they are part of the MX series of routers by Juniper2, or ACX for the ACX series of routers3. There are various models of these routers deployed in the testbed, with the main differentiating factor being the number of ports and the throughput they support. These types of devices will also be used in the production network. Using the same devices in the testbed allows for accurate testing, as the behaviour of these devices will be the same in both networks. As the production network will have connections with other networks, these external connections are also simulated in the physical testbed. These external connections are simulated with the devices denoted by a “T” postfix in the figure. These connections in the testbed then allow for the evaluation of the interoperability between the external devices and the Juniper devices used by SURFnet.

This testbed is coupled with their orchestration software, which manages the configurations on the devices in the testbed and will do this as well for the production network. Via this software, the network engineers of the organization deploy new configurations to the devices, and in the testbed it is possible to see how the devices behave with these new configurations. This allows for both the evaluation of the orchestration software, to see if it deploys configurations correctly, and the evaluation of the configurations deployed on the devices.

The Anritsu device in the lower left corner, Asd001A Anritsu 01, is used for much of the testing on the network. This device is capable of generating all kinds of traffic, both normal and malformed. This traffic is put on the network, and its behaviour is observed. This makes it possible not only to look at the routes and performance of traffic, but, in the case of malformed traffic, also at how the routers handle it. This can be used to validate that handling of such data is done in a correct manner, as well as to potentially expose bugs or hardware issues. As this physical testbed consists of hardware that will also be used in the production network, exposing such hardware problems is valuable, as it allows the organization to fix problems before deploying to the production network. This is also why, for example, the Fusion box in the top right is present, as this is a Juniper device of which the capabilities are being evaluated. Additionally, to allow for management which is separate from the testing traffic, each of the devices is connected to an isolated management network. This allows for management whenever the network configuration has failed or misbehaves.

2 https://www.juniper.net/us/en/products-services/routing/mx-series/
3 https://www.juniper.net/us/en/products-services/routing/acx-series/

Although the physical testbed is useful to conduct these experiments, it also comes with drawbacks. First of all, since all of these devices are physical, they have to be bought, placed and fed with power, all of which costs money. Due to the power, networking and space requirements, these devices are placed in an external datacenter. In the case of SURFnet, they are placed in a datacenter far from the offices where the configurations are built and tested. In addition to this, any configuration change in terms of cabling is a physical action, requiring someone to take the time to go to the testbed and change it. Both of these factors inhibit the testing network in its scaling and versatility. It is unfeasible to constantly change the physical network topology for other tests, nor are all tests possible with a limited scale testbed. This limited versatility is further illustrated by the desire of SURFnet to remove the pictured JNA ACX devices, as they do not meet some of the requirements set by SURFnet. However, physically removing them from the testbed, and potentially replacing them with different hardware, would take time and further restrict the size of the testbed. The limited scale is evident in the difference between the number of nodes in the testbed, 8, and the SURFnet8 network, which will have more than 300 nodes. The behaviour of traffic and configurations in a network of only 8 nodes is limited in how well it represents a much larger network, especially since the last MX2008 device is not yet even deployed in the physical testbed. For example, in a network of only 8 nodes, it is only possible to construct limited routes, far fewer than in a network of 300 nodes, making it harder to evaluate routing protocols. In addition to this, malformed traffic or misconfiguration of a device can cause a device to malfunction. Due to the limited scale of the network, this then impacts what other tests can be run on the network while the malfunctioning device is being repaired. Due to these limitations, SURFnet has expressed interest in the development of a virtualized testbed. This research therefore evaluates the viability of a virtual testbed to aid in the construction of the SURFnet8 network.


CHAPTER 4

Requirements and constraints

In this chapter, the requirements that the virtual testbed should meet are determined more concretely. These requirements were formed via discussions with the network engineers of SURFnet. The context for these discussions was formed via research into the academic state-of-the-art regarding virtual testbeds. Additionally, there are various constraints and conditions which have to be taken into account for the design and construction of the virtual testbed. Some of these requirements are added because they illustrate the usage of the physical testbed, even though they cannot be met in a virtualized setting.

4.1 Behaviour

The first and foremost requirement is that of the representativity of the virtual devices for the physical ones. The physical devices that need to be represented are the Juniper devices, and as such this requirement is aimed at these devices. This representativity requirement consists of several parts. The first is that the virtual devices need to be able to be configured in the same manner as the physical ones, meaning they make use of the same style of configuration files and commands. This is important to allow the same configuration files and methods to be used between the production network, the physical testbed and the virtual testbed, so as to not incur extra work when using the testbed.

Secondly, there is the point of performance. This point is mostly important in the physical testbed, where the routing and forwarding performance of the hardware are evaluated. However, it is still important to evaluate the performance potential in a virtual environment, if only to indicate its limitation.

Thirdly, the protocols should function the same as they do on the physical devices. If the routing protocols would result in different routing behaviour, then the virtual testbed cannot be used to test the employed configurations.

The last of these behavioural aspects is that of physical feature replication. The physical devices in the testbed feature multiple data forwarding devices with many network interfaces. Additionally, they feature various redundancy systems, allowing for failures to occur in the hardware. Where possible these features should be replicated in the virtual devices.

For each of these behavioural aspects it is possible that the virtual testbed may not be exactly the same as the physical devices, but these differences should then be known. With the differences known, they can be accounted for and they then serve to illustrate the limitations of the virtual testbed.


4.2 System integration

The second requirement is that of interaction with external systems, specifically those for orchestration. Since part of the testing is with regard to how the orchestration software deploys configurations, this orchestration system needs to be able to interact with the virtual testbed as well. The orchestration software likely to be used in SURFnet8 is Cisco NSO. This orchestration software requires an IPv4 based connection to each of the devices in the network and is hosted externally from the virtual topology. No mention was made of IPv6 connectivity, but the network engineers expressed that this was a possible future addition. Therefore the virtualization layer and its networking capabilities need to allow at least IPv4 connectivity, with possible IPv6 support. This is requirement 2: the virtual testbed needs to have IPv4 based connectivity with external systems. There is, however, another aspect to the integration of NSO. The SURFnet automation team expressed that having an isolated instance of NSO in the testbed would also be a useful addition. This would be realized by having a Linux based virtual machine in the virtual network that can run NSO. Additionally, it would be useful to integrate more than just Juniper virtual devices in the virtual testbed, such as virtual Cisco devices. This part of the requirement is therefore further extended to the ability to run multiple kinds of virtual devices in the virtual testbed.
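A minimal way to check this connectivity requirement from the orchestration side is to verify that each device's management address accepts TCP connections on the protocols NSO typically uses. The sketch below assumes hypothetical management addresses and uses the conventional ports (22 for SSH, 830 for NETCONF over SSH).

```python
import socket

# Hypothetical management addresses of the virtual devices.
DEVICES = {"vmx-01": "10.0.0.11", "vmx-02": "10.0.0.12"}
PORTS = {"ssh": 22, "netconf": 830}  # conventional ports used by orchestrators

def reachable(address: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to address:port succeeds."""
    try:
        with socket.create_connection((address, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, address in DEVICES.items():
    for service, port in PORTS.items():
        state = "open" if reachable(address, port) else "unreachable"
        print(f"{name} {address} {service}/{port}: {state}")
```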

4.3 Security

Tied into the previous requirement is that of security. Although the virtual testbed needs to be exposed to external systems, it should be isolated and secured in such a way that misuse is impossible. This is a factor since any access to the virtual devices would also give access to other parts of the network, specifically the orchestration software. This security has two aspects for the built virtual testbed. The first is the internal security of the deployed virtual devices, as they are exposed to the other external systems. The second is the security of the management software used. This management software has complete access to the virtual machines and the virtualization layer and should be secured against unauthorized access. This is requirement 3: the virtual testbed needs to have secure connectivity with external systems.

4.4 Management

The next point regards the ease of use of the virtual testbed and its management. Easy in this regard means that the virtual devices should be easy to create and delete with minimal manual configuration needed. In addition to this, the networking part should also require minimal effort and be transparent to the user. The testbed can get into a non-functioning state, meaning one or more of the devices no longer work, due to the experimental nature of the employed configurations and orchestration as per chapter 3. In this case, the virtual testbed should offer the possibility to reset the devices or the configurations on the devices to get back into a functional state, or to allow a complete rebuild of the device. Such management can be achieved via the virtualization under the virtual devices or via direct management of the virtual devices. This should be one of the key advantages compared to the physical testbed, as correcting the configurations on the physical devices could take more time. Additionally, it is valuable to have easy graphical insight into the topology with accompanying management for the aforementioned capabilities. This is then requirement 4: the virtual testbed needs to be easy to manage.


4.5 Scale

The next point regards the scale of the production network, one of the main limitations of the physical testbed. Since the production network will consist of more than 300 nodes, the behaviour of routing protocols and peerings will differ from that in a small scale network such as the physical testbed. The virtual testbed should offer a better scale topology, with more than 8 nodes. The engineers, especially those in the automation team of SURFnet, expressed direct interest in replicating the entire physical topology of the SURFnet8 network in a virtualized environment. Requirement 5 is therefore more of a criterion, namely: the virtual testbed is better the closer its scale matches that of the production network. To this end, the backend of the virtual testbed needs to be able to scale with the computing resources in order to build a virtual testbed at a large scale.

This scaling criterion comes with another added criterion, namely the possibility to run multiple concurrent virtual networks with different configurations, thus allowing for multiple testing scenarios to be conducted at the same time.

4.6 Maintainability

The last point is that of the virtual testbed being maintainable and “future-proof”. Rather than being a static entity, the virtual testbed should be able to adapt to new virtual devices or updates to existing ones. This is a factor since the physical devices in the production network will be updated with new software versions, and these updates should be deployed to the virtual testbed as well. Additionally, the software stack which supports the virtual testbed should not require constant maintenance or manual updating; it should thus have good external support and development. For the employed software stack, it might also be useful to have insight into its inner workings to allow some manual configuration or debugging. However, no specific preference was expressed for either a closed source or an open source software stack. This is requirement 6: the virtual testbed needs to be maintainable.


4.7 Summary and constraints

The points above are the key requirements and criteria set for the virtual testbed, summarized below with their various subrequirements.

R.1 Behaviour: The Juniper devices in the virtual testbed must behave like their physical counterparts and differences in behaviour, if any, should be known.

(a) Configuration style
(b) Performance for traffic and throughput
(c) Protocol behaviour
(d) Physical feature replication

R.2 Systems integration: The virtual testbed needs to have connectivity with external systems or integrate them.

(a) IPv4 based external connectivity
(b) IPv6 based external connectivity
(c) Direct integration of other systems

R.3 Security: The virtual testbed needs to be secured against misuse.

(a) Secured virtual devices
(b) Secured testbed management

R.4 Management: The virtual testbed needs to be easy to manage.

(a) Deployment of nodes with little manual configuration
(b) Management of virtualization layer
(c) Direct management system for virtual devices
(d) GUI for management

R.5 Scale: The virtual testbed is better the closer its scale matches that of the production network.

(a) Backend scaling
(b) Multiple concurrent topologies

R.6 Maintainability: The virtual testbed needs to be maintainable.

(a) Support for future virtual devices
(b) Update support
(c) External support for issues
(d) Manual debugging possibilities

As for the additional constraints and conditions, the first point regards where the virtual testbed will be constructed and run. For this research, SURFnet has provided resources in their cloud environment in the form of virtual servers on which the virtual testbed can be built. This means that the virtual testbed will have to be built on top of a virtual machine. Additionally, licensing is a potential source of costs which has to be taken into account and evaluated with the constructed virtual testbed. These costs would have to be paid by SURFnet for the deployment and actual usage of the virtual testbed. These licensing costs are relevant for all elements of the software stack, from the virtual devices to the virtualization layer.


CHAPTER 5

Evaluation of software

In this chapter the established requirements are used to evaluate the virtual device to be used and various supporting software stacks. The virtual device used in this research is the Juniper vMX, whose features and capabilities are described in the first section. The software stacks evaluated are the Wistar and Eve-NG virtual networking environments. Additionally, some other possible, but not extensively explored, options are listed to create a more complete picture of the possibilities. This evaluation concludes with a final summary of how well the software stacks match the requirements set in the previous chapter.

5.1 Juniper virtual devices

This section is focused on the virtual devices offered by Juniper; the information listed here is a combination of reading the documentation offered by Juniper and experimentation with the virtual devices.

As established, the virtual network needs to be based on Juniper equipment, specifically the MX series of routers. Juniper offers not only a virtual MX router under the name vMX1, but also other virtual devices such as the vSRX virtual firewall. However, these are not just meant for testbed usage, but also as actual routers or firewalls that simply run in a virtualized environment[21]. As such, the vMX software features a fully fledged dataplane engine, similar to that running on the physical routers, striving for similar performance. This high performance comes at the cost of high resource usage, which makes the devices less suited for testbed usage where performance is not evaluated. The usage of vMX is required in order to meet the behaviour requirement R.1, since there are no other virtual devices which would behave the same as the Juniper physical devices. This usage of vMX also comes with the first limitation which should be known for the employment of the virtual testbed: the performance of the MX devices cannot be mimicked. This is because the virtual devices do not have access to the same specialized hardware which is used in the physical routers and thus do not offer the same performance. This is already the first important point for the behaviour requirement, as the performance part cannot be met. The virtual testbed as such cannot be used to conduct testing regarding traffic performance.

1 https://www.juniper.net/us/en/products-services/routing/mx-series/vmx/



Figure 5.1: A vMX instance with its dual VM architecture

In the MX series of routers, the functionality is split between the control plane (CP) and the forwarding (data) plane (FP). They are implemented as entirely separate entities with separate tasks. The control plane implements the configuration system, which Juniper refers to as Junos OS. The rest of the network is connected to the forwarding plane, which then does the actual forwarding of data based on the control plane's configuration. The forwarding plane normally runs on specialized hardware which is optimized to allow for fast forwarding of data, while the control plane runs on normal x86 hardware[21].

As mentioned, Juniper has created the vMX to be as close as possible to the MX router in most regards. As such, a vMX instance also consists of two entities, the virtualized control plane (vCP) and the virtualized forwarding plane (vFP). This split virtual machine architecture is visualized in figure 5.1. The vCP and vFP have to be connected via dedicated interfaces on some kind of networking layer, be that virtual or physical, which offers Layer 2 Ethernet connectivity. The control plane runs on an x86 platform in the physical routers and as such can be used as-is in the virtualized environment, without heavy resource usage. The vCP also makes use of the same Junos OS configuration system as used in the physical devices. This means that the same configuration commands can be used for the physical testbed and the virtual testbed. Therefore the requirement regarding configuration behaviour is met. This configuration should then also take care of the security of the virtual network: the same security and firewall configuration used in the physical testbed can be applied in the virtual testbed to make the network secure. As the vFP is constructed to offer performance as close as possible to the physical hardware, the resultant resource requirements for its virtualization are high. This is caused by the special architecture that Juniper uses to enable the same forwarding engine to run on the specialized hardware as well as in the virtual environment. The vMX architecture does allow for the configuration of a lite mode, which brings the hardware requirements down. This mode is meant for lab usage, where high throughput is not required[22].
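As an illustration, lite mode is enabled through the normal Junos configuration of the chassis. The sketch below pushes the relevant statement over the management interface with the junos-eznc (PyEZ) library; the host and credentials are hypothetical, PyEZ is not part of the evaluated software stack, and the exact lite-mode statement should be verified against the vMX documentation for the release in use.

```python
from jnpr.junos import Device
from jnpr.junos.utils.config import Config

# Hypothetical management address and credentials of a vMX vCP.
dev = Device(host="10.0.0.11", user="lab", passwd="lab123")
dev.open()

cu = Config(dev)
cu.lock()
# Put the forwarding plane in lite mode (lab use, lower resource usage);
# the statement is taken from the vMX documentation and may differ per release.
cu.load("set chassis fpc 0 lite-mode", format="set")
cu.commit(comment="enable vFP lite mode")
cu.unlock()

dev.close()
```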

Another difference is that not all physical features of the MX series of routers are present in their virtualized counterpart. This is important for the requirement regarding physical feature replication, which vMX can only meet in limited fashion. Firstly, the physical MX routers may make use of several FP entities, called line cards. This is different for the vMX instances: only in the latest vMX release (18.1R1.9) was support added for multiple vFPs on a single vCP, and with little accompanying documentation. For the vMX there is only a single type of virtual line card, whereas for the physical devices there are multiple types. Additionally, there is a limitation regarding the number of network interfaces, which is a maximum of 96 for the vFP. Moreover, the various physical redundancy features, such as having failover options with multiple routing engines, are not possible in vMX.


Juniper provides these virtual devices as images that can be virtualized in the KVM and VMware virtualization solutions, with no support for Xen paravirtualization or explicit support for VirtualBox. For the networking layer, vMX is compatible with the VMware vmxnet and KVM virtio interfaces. The E1000 emulated interface offered by KVM is not mentioned in the documentation, and trying to use the E1000 emulated NIC in this research resulted in a failure to boot without a clear reason. It is also possible to pass through physical hardware NICs, and Juniper has published a list of compatible hardware. Such physical NICs are mainly used to achieve the high performance for which the vMX is developed. Such high performance is not needed in the testbed for SURFnet, as the nodes will only be required to interact with each other in a virtual manner. Such physical NICs would then only cost extra and would limit scaling to however many physical NICs can be present on a single host. The exact hardware requirements are shown in table 5.1 for the supported KVM and VMware hypervisors, showing both performance and lite mode.

                        KVM              VMware
                        RAM     CPUs     RAM     CPUs
vCP performance mode    4GB     1        4GB     1
vCP lite mode           1GB     1        2GB     1
vFP performance mode    12GB    7        12GB    7
vFP lite mode           4GB     3        8GB     3

Table 5.1: Hardware requirements for vMX version 18.1R1.9

These requirements, even in lite mode, are still far higher than those of, for example, the Vyatta virtual router. In the research by Chan and Martin each fully functional Vyatta instance was configured with 512MB of RAM[9]. Additionally, due to its limited CPU usage, it was found that up to 30 of these Vyatta instances could be run on 8 CPU cores.

To examine how well vMX allows for scaling, for the corresponding requirement R.5, vMX version 18.1R1.9 was deployed under the KVM hypervisor and its performance monitored. It was found that less RAM than listed was required to retain a functional router, and thus less was allocated. The vFP could not be allocated less than 4096MB, as doing so resulted in various errors during its boot process. As for the CPU cores, if fewer than three were allocated to the device, it would refuse to boot; the device clearly displayed a message indicating that three cores were required. The virtual router was set in lite mode, with minimal further configuration. Only a static route on the management interface and login credentials were configured, to allow for access to the router. The exact specification of the test platform is shown in table 5.3. The measured resource usage is shown in table 5.2. This shows that especially the vFP uses much of its allocated CPU resources even when forwarding no data. This is an important factor in scaling the testbed, since, for example, running the scale of the production network in vMX instances would require at least 300 nodes with 4 cores each, resulting in a theoretical 1200 cores being needed. Virtualizing the physical testbed, with 8 nodes, would result in a theoretical resource usage of 32 cores with 36GB of RAM.

        Memory allocated    Memory used    CPUs allocated    CPU load
vCP     512MB               462MB          1                 1.9%
vFP     4096MB              1803MB         3                 74.6%

Table 5.2: Measured resource usage of a single vMX instance in lite mode


CPU                 Intel E5-2680 v3 @ 2.50GHz
OS                  Ubuntu 16.04.4 LTS
QEMU-KVM version    2.5
vMX version         18.1R1.9
vFP version         20180317

Table 5.3: The specifications of the testing platform
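To make the scaling implication explicit, the short calculation below extrapolates the lite mode allocations used here (vCP: 1 core and 512MB, vFP: 3 cores and 4096MB per instance) to the size of the physical testbed and to the production scale of SURFnet8.

```python
# Theoretical resource footprint of N vMX instances in lite mode under KVM,
# based on the allocations used in this chapter (vCP: 1 core / 0.5 GB,
# vFP: 3 cores / 4 GB).
CORES_PER_VMX = 1 + 3
RAM_GB_PER_VMX = 0.5 + 4.0

for label, nodes in (("physical testbed", 8), ("SURFnet8 production scale", 300)):
    print(f"{label}: {nodes} nodes -> "
          f"{nodes * CORES_PER_VMX} cores, {nodes * RAM_GB_PER_VMX:.0f} GB RAM")

# physical testbed: 8 nodes -> 32 cores, 36 GB RAM
# SURFnet8 production scale: 300 nodes -> 1200 cores, 1350 GB RAM
```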

Juniper does not offer vMX for free and requires licensing for vMX to be feature complete as well as to increase its forwarding performance. There are three tiers of feature packages, which affect what protocols and features the vMX router is capable of. These three tiers are listed in table 5.4. Separate from the feature licenses, there are also bandwidth licenses, which determine the maximum speed at which the forwarding plane will forward data. There are licenses from 100Mbps to 40Gbps. This speed is purely theoretical and does not mean that the underlying hardware is actually capable of such speeds. These throughput licenses are offered for when the vMX instance is used with physical interfaces and used as a production router. In addition to these licenses, there is a trial license which gives access to the premium feature license with 500Gbps maximum throughput. This trial license is only valid for 60 days, however. For the routers needed by SURFnet, the features offered by the premium license are needed. For the purposes of this research the trial license is used, but for future use within SURFnet it is likely the premium license would have to be acquired. However, there is no need for a bandwidth license, since the bandwidth will not be tested in the virtual environment.

Tier        Features

BASE        IP routing with 256,000 routes in the forwarding table
            Basic Layer 2 functionality, Layer 2 bridging and switching

ADVANCE     Features in the BASE application package
            IP routing with up to 2,000,000 routes in the forwarding table
            Layer 2 features include Layer 2 VPN, VPLS, and Layer 2 Circuit
            VXLAN, EVPN
            16 instances of Layer 3 VPN

PREMIUM     Features in the BASE and ADVANCE application packages
            IP routing with up to 4,000,000 routes in the forwarding table
            Layer 3 VPN for IP and multicast
            IPsec, Group VPN

Table 5.4: The tiers of feature licenses


5.2 Wistar

Wistar is described by its creators as “a tool to help create and share network topologies of virtual machines”[23]. It is being developed by Emberyt under the umbrella of Juniper as an open source project hosted on GitHub5. The version used for evaluation is that of commit 784c7a3, which was committed to GitHub on 7 May 2018. The software is implemented as a webserver which communicates with the virtualization hypervisor for the deployment of a virtual network; management is done through the GUI offered by the webserver. The documentation lists vMX version 17 as the latest supported version, with no mention of the newer version 18. However, since the software is developed under the umbrella of Juniper, it is likely that future vMX versions will be supported. Lastly, the software has no vendor support or commercial licensing and is offered completely for free. The only option for support is the Slack6 channel that the developers offer for communication.

Figure 5.2: Wistar Juniper topology example

5.2.1 Management

Since the software is developed with the Juniper virtual devices as a focus, it offers good management systems for these devices. First of all, deployment is facilitated by the capability of Wistar to abstract away the dual virtual machine architecture of a vMX instance. As such, the vCP and vFP can be deployed and administered as a single entity, as shown in figure 5.2. The software contains templates to correctly set up the virtualization layer with the needed configuration for the vMX devices. As such the deployment requirement is met, as it requires little manual configuration. This integration with Juniper devices extends further to the ability to configure the devices from within the GUI. The Wistar software makes use of the management interface of the virtual devices to issue Junos configuration commands. This management interface is also accessible for direct SSH connections for manual configuration. The software offers no special integration for any other kind of virtual device, except for including templates for Ubuntu 16.04. This means that Wistar can be used to run NSO in the topology, but not Cisco virtual devices. Additionally, the GUI gives direct control of the virtualization layer, being able to suspend, shutdown or destroy the underlying virtual machines for each of the virtual devices in the topology.

5.2.2 External connectivity and security

External connectivity to the virtual devices can be achieved in two ways. Firstly, it is possible to forward ports from the management interface on each virtual device to an external port on Wistar. This allows external systems to directly interface with the management of the virtual device. This external access also means that the security of these virtual devices needs to be configured to take it into account. Secondly, it is possible to connect any of the nodes via the virtual network to external systems. Wistar makes use of either Linux bridges or OVS switches to couple the virtual devices. Such a bridge or switch can be directly connected to a virtual device. To allow for external access, a physical NIC on the host can then be connected to such a virtual connection to the device. This allows the external orchestration software to interact with the nodes in the topology. Additionally, the orchestration software could be deployed on an Ubuntu node in the virtual topology. Once the virtual devices are exposed to external systems, the configuration of the virtual devices themselves determines their security. Wistar itself does not have specific security considerations for external access. However, as it can make use of the Apache7 webserver, the security configuration of this webserver can be used. This does not allow for different user roles or access restriction within the GUI, but only a simple access restriction to the entire Wistar instance.

5 https://github.com/
6 https://slack.com/
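A minimal sketch of the second option is shown below: the physical NIC of the host is enslaved to the Linux bridge that Wistar created for the device's external-facing connection, so that the external orchestration software can reach the node. Both interface names are hypothetical and depend on the deployed topology.

```python
import subprocess

# Hypothetical names: the bridge Wistar created for the external-facing link
# and the physical NIC of the host that reaches the orchestration system.
BRIDGE = "t1_br_external"
HOST_NIC = "eno2"

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

# Enslave the physical NIC to the bridge so traffic from the external
# orchestration software reaches the virtual device attached to the bridge.
run("ip", "link", "set", HOST_NIC, "master", BRIDGE)
run("ip", "link", "set", HOST_NIC, "up")
```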

5.2.3 Backend

Wistar is being developed as an open source technology, making use of other standard software elements for much of its backend. For the virtualization layer, Wistar is compatible with several technologies. It can make use of the KVM hypervisor as well as VMware ESXi. Additionally, the software is compatible with OpenStack8. OpenStack is a management and orchestration tool that manages virtualization within a cluster and relies on other hypervisors for the actual virtualization. This backend means that scaling is easier, since an OpenStack cluster can span many physical nodes, with any number of virtual machines hosted on them. This OpenStack compatibility makes adhering to the scaling requirement, R.5, possible, since many powerful physical hosts can be used to host the resource intensive vMX instances. Additionally, Wistar allows the simultaneous deployment of multiple topologies. For this evaluation the KVM virtualization layer was used. Since the project is open source, its code can be inspected. For this research, the KVM management code was inspected. It was found that the software makes use of the libvirt9 virtualization API. This allows for easy debugging of the deployed virtual machines, since they can be inspected and controlled via the same API outside of Wistar. Additionally, the templates used for the vMX deployment can be inspected and altered. These templates have been used to change the virtual interfaces used for the KVM deployments from the E1000 emulation to virtio interfaces, since those were needed for the vMX 18.1R1.9 release.
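Because Wistar drives KVM through libvirt, the deployed virtual machines can indeed be inspected outside of Wistar with the same API. The short sketch below lists every domain on the host with its state and allocated resources; the qemu:///system connection URI is an assumption about the local setup.

```python
import libvirt

# Map a subset of libvirt state constants to readable names.
STATES = {
    libvirt.VIR_DOMAIN_RUNNING: "running",
    libvirt.VIR_DOMAIN_SHUTOFF: "shut off",
    libvirt.VIR_DOMAIN_PAUSED: "paused",
}

conn = libvirt.open("qemu:///system")
for dom in conn.listAllDomains():
    state, _ = dom.state()
    info = dom.info()  # (state, maxMem KiB, mem KiB, nrVirtCpu, cpuTime ns)
    print(f"{dom.name():20s} {STATES.get(state, state)}  "
          f"vCPUs={info[3]}  RAM={info[1] // 1024} MB")
conn.close()
```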

5.3 Eve-NG

Eve-NG is a commercial offering that “... allows enterprises, e-learning providers/centers, individuals and group collaborators to create virtual proof of concepts, solutions and training environments”[24]. It is a closed source solution that offers both a free (community) and a paid (pro) license option with various feature differences, which are discussed later in this section. The software is a server-based solution which offers a web GUI for remote management and usage of the built topologies. Eve-NG offers a simple installer which completely sets it up for use. For evaluation, Eve-NG community edition, version 2.0.3-86, was deployed on Ubuntu 16.04.4 LTS with the aforementioned installer. As Eve-NG has commercial support, it is likely that it will continue to be developed for new versions of virtual devices as well as offer technical support for any issues.

5.3.1 Management and integration

Eve-NG is not focused solely on integration with Juniper virtual devices; it also has compatibility with Cisco and Palo Alto10 virtual devices. This is useful since, as per the requirements, there may be a need to integrate devices other than just Juniper ones.

7https://httpd.apache.org/

8https://www.openstack.org/

9https://libvirt.org/


This, however, also means that there is no specific compatibility with the Juniper virtual devices, and only limited documentation is available regarding the deployment of vMX in the software. The dual-node architecture of vMX is not abstracted away and needs to be managed explicitly in the GUI. This is visualized in figure 5.3, where the manual connections that have to be made between the vCP and vFP are visible. This lack of special integration makes the user interface more cluttered and decreases the ease of initial deployments.

Figure 5.3: Eve-NG vMX dual node topology example

This lack of integration extends further: rather than offering direct management capabilities for the software on the virtual devices, Eve-NG only offers a serial console to each of the virtual nodes. Additionally, using these consoles requires client-side software, which makes direct management of the virtual nodes more cumbersome. The GUI does offer the possibility to add external connections, but this is done in a non-transparent way: within the Eve-NG host, it is not visible which physical NIC has to be coupled to which external connection used in the GUI.
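One way to work around this opacity is to inspect the bridges on the Eve-NG host directly from a shell; the sketch below is a generic approach, and the bridge name pnet1 is only an assumed example of how such an external network could be named on the host.

# Sketch: discover which physical NIC is coupled to which Eve-NG network by
# listing the bridges on the host; "pnet1" is an assumed bridge name and the
# actual naming depends on the Eve-NG installation.
brctl show            # all bridges and the interfaces attached to them
ip link show          # overview of the host interfaces and their state
brctl show pnet1      # members of the bridge behind one external connection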

5.3.2 Backend

To run the designed topology, Eve-NG relies on KVM for its virtualization backend, together with Linux bridges for the network part of the topology. The alternative technology Open vSwitch cannot be used. Eve-NG does not use the aforementioned libvirt API, but it does allow specific virtual machine configurations to be passed via the GUI. Since the software is closed source and does not use libvirt, it is impossible to directly influence the virtualization layer. As KVM is a single-host hypervisor, the topology is limited in size to however many virtual devices can be run on a single host. This is relevant since, as discussed in section 5.1, a single vMX instance needs 4 cores with about 5GB of RAM allocated. To illustrate this scaling issue further, the most powerful Intel CPU at the time of this research has 48 logical cores11. This single CPU would then allow a maximum of 12 vMX instances in a single topology. Therefore Eve-NG does not adhere to the scaling requirement. To help with the RAM requirements, Eve-NG uses a deduplication modification made to the Linux kernel called Ultra Kernel Samepage Merging (UKSM). UKSM merges memory pages that are identical, so that effective RAM usage can be decreased [25]. It is an improvement on the pre-existing Kernel Samepage Merging (KSM) and has the potential to allow for 68% lower memory usage, depending on virtual machine utilization. This is valuable since, as discussed in section 5.1, the vFP requires around 4GB of RAM. UKSM therefore helps Eve-NG with meeting the scaling criterion.
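To make this back-of-the-envelope estimate explicit, the small sketch below computes both the CPU-bound and the RAM-bound maxima for a hypothetical host; the host figures are placeholders, while the per-instance figures follow section 5.1.

#!/usr/bin/env bash
# Sketch: rough estimate of how many vMX instances fit on a single host.
# Host figures below are placeholders; per-instance figures follow section 5.1.
HOST_CORES=48       # logical cores of the host CPU
HOST_RAM_GB=256     # host RAM in GB
VMX_CORES=4         # cores per vMX instance (vCP + vFP)
VMX_RAM_GB=5        # RAM per vMX instance (vCP + vFP), before deduplication

echo "CPU-bound maximum: $((HOST_CORES / VMX_CORES)) vMX instances"
echo "RAM-bound maximum: $((HOST_RAM_GB / VMX_RAM_GB)) vMX instances"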


5.3.3 Licensing and security

Lastly, there is the point of licensing and security. The first security feature that Eve-NG offers is that the web GUI is password-protected. With commercial licensing, this functionality can be extended further. The licenses allow additional roles to be implemented within the software. This would allow some users to only view and not edit the topology, or restrict access to specific topologies. The licenses are also needed to be able to run multiple topologies concurrently. Using the free community edition, it is only possible to have a single topology open and running. Additionally, as the free version has only a single user, only two simultaneous connections to the same topology via the web GUI are allowed. This limits how many people can work on the same topology at the same time.

5.4 Other software

Aside from the previously discussed options Eve-NG and Wistar, there exist other options, most of which require more manual configuration or have a different main goal than supporting virtual testbeds. These are briefly discussed, since it is relevant to see why they have not been used for deployments in this research and to get a more complete picture of the established landscape concerning virtual networks.

First of all, Juniper supports vMX with a deployment script which preconfigures an environment with the KVM hypervisor or deploys a single vMX instance to OpenStack. However, this deployment script does not set up any networking and does not offer a GUI or easy management features. All configuration, as well as the topologies, would have to be set up completely manually. It is likely that Juniper offers some form of support for this script with the enterprise deployments of vMX, as well as future updates for new vMX versions. This script is not researched further due to its lack of management features. A more complete option would be the use of Vagrant12, which is a tool to automate virtual machine deployment, mostly focused on software development [26]. It is mainly meant to allow for creating reproducible virtual machines in which software can then be tested in a predictable manner. This tool also offers the possibility to automate the deployment of networks between virtual machines, but creating such a virtual topology is not its main goal. The technology can use various backends, such as the aforementioned OpenStack system, for its virtualization, thus allowing for good scaling. This software also does not offer a GUI for easy management of the topology and its virtual nodes. All configuration would be done via configuration files and commands, requiring manual tuning for vMX and any new versions of vMX. Additionally, similar to Eve-NG, it would not abstract the dual virtual machine architecture away into a single entity and would require manual configuration of the networking between the vCP and vFP. Again, this software would not allow for easy usage and thus is not explored further.

A more similar option to Wistar and Eve-NG is GNS313, which is a client-based application rather than a server-based one. This means that the software offers a local-only GUI, with the topology being run on the same device. This tool has been used in academic research to build lab exercises for students [27]. It also offers a so-called marketplace, which provides templates for the deployment of various virtual devices. Similar to Eve-NG, GNS3 offers no special compatibility with vMX to abstract the dual VM architecture. Additionally, its client-based architecture makes it less suited for a standalone virtual testbed, as it complicates access as well as connections with external systems. Both of these points are requirements for this research, and thus GNS3 is not used.

12https://www.vagrantup.com/


5.5 Comparison of software

The two main software solutions discussed for the deployment of a virtual testbed, and how well they meet the requirements, are summarized in table 5.5. The requirements regarding the behaviour and security of the virtual devices do not depend on the software stacks summarized in this table and as such are not included. There is only one choice of virtual device that can meet the set requirements, and that is Juniper's vMX. There are no other virtual devices which offer the same configuration and behaviour as Juniper's software. Adherence to these requirements is therefore solely determined by the Juniper virtual device used.

R.2 Systems integration: IPv4 + + +; IPv6 + + +; Direct - + +
R.3 Secured management: + + +
R.4 Manageability: Deployment + + +; Virtualization + + + +; Virtual devices + + +; GUI + +
R.5 Scaling: Backend scaling + + -; Multi topology + + +
R.6 Maintainability: Future devices + +; Updates + +; External support - +; Debugging + + -

Table 5.5: The various software solutions and how they meet the requirements (columns: Wistar, Eve-NG)

Based on this evaluation, it is evident that both Wistar and Eve-NG manage to meet most of the set requirements. However, Eve-NG has two main shortcomings that have led to Wistar being chosen for the construction of the virtual testbed: Eve-NG has limited scaling potential due to only having a single backend, and it lacks specific integration and tuning for vMX, hampering its ease of use. Since neither software package has explicit future support for vMX, the open-source nature of Wistar makes it easier to conduct the manual configuration that is not possible in Eve-NG. As demonstrated, this was valuable for the deployment of the latest vMX version. Lastly, the other software options highlighted illustrate that there are more possibilities, but that they mostly lack an easy-to-use management system or integration with the required vMX virtual device. This evaluation therefore also highlights the dependency on the virtual device for the construction of a virtual testbed.


CHAPTER 6

Construction of a prototype virtual testbed

The final part of this research is the construction of a prototype virtual testbed. This prototype is then evaluated to demonstrate the viability of the virtual testbed. To evaluate how representative the virtual testbed is of the production network, the prototype will be compared to the physical testbed.

6.1 Construction of the prototype

As per the outlined constraints, the virtual testbed is built in the cloud environment offered by SURFnet. The virtual testbed is built on a single host machine. Since only a single host is used to host the virtual testbed and vMX instances are heavy on CPU usage, the maximum number of CPU cores for a single host is used. The host configuration and the software versions used are visible in table 6.1. Wistar uses KVM-based virtual machines, which require specific virtualization instructions to be accessible in the testbed host [28][29]. Allowing access to these virtualization instructions was specially requested for the host machine within the SURFnet cloud. The vMX software was acquired via SURFnet. Via the Juniper website a trial license was requested, and this license was applied to each of the virtual devices so that all features could be used.

CPU: Intel E5-2680 v3 (16 cores) @ 2.50GHz
RAM: 32GB
OS: Ubuntu 16.04.4 LTS
QEMU-KVM version: 2.5
vMX version: 18.1R1.9
vFP version: 20180317
Wistar version: commit 784c7a3 @ 07-05-2018

Table 6.1: Host configuration and software versions used for the virtual testbed
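As a quick sanity check that the required hardware virtualization instructions are indeed exposed on such a host, the generic commands below can be used; this is only a sketch and assumes an Ubuntu host with the cpu-checker package available.

# Sketch: verify that the host exposes hardware virtualization to KVM.
egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero: VT-x or AMD-V flags are present
kvm-ok                               # explicit check (cpu-checker package on Ubuntu)
ls -l /dev/kvm                       # the KVM device node should exist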


Figure 6.1: The representative physical topology as deployed and visualized in Wistar

To demonstrate the functionality of the virtual testbed, a configuration similar to the physical testbed is built in Wistar. This is visible in figure 6.1, with the corresponding names of the devices in the physical testbed also given. However, as per the limitations of vMX and the Wistar deployment, there are various differences. First of all, there is no Fusion virtual device, as there exists no documentation on whether this is possible with vMX or whether virtual devices exist that can support this. Secondly, the JNA devices are part of the ACX series by Juniper, for which there exists no direct virtual counterpart. However, as the ACX devices are also routers whose configuration is based on Junos, vMX nodes were deployed as substitutes. Since all nodes are connected using the same software networking, there are no differences in capacity or medium for the connections between the nodes, unlike in the physical testbed. In addition to this, the virtual testbed has been deployed in an isolated fashion: there are no connections with the Cisco hardware nor with other external networks. As mentioned, Cisco devices cannot be virtualized in Wistar and as such are not used in the built topology. The Anritsu device, used for testing with many kinds of traffic in the physical testbed, also does not exist in the virtual testbed, as there is no virtual counterpart.

The base Wistar deployment has been done on Ubuntu 16.04.4 LTS with the help of an automated deployment script available in the Wistar GitHub repository. Additional configuration has been done to enable the use of Open vSwitch, as that should offer more features than the standard Linux bridges, as per section 2.3. To help with memory usage, the memory deduplication technique KSM has been enabled on the host. KSM deduplicates memory pages that are identical, thus allowing for lower resultant memory usage on the host [30]. This technology is useful since each of the vMX instances runs the same base image and thus likely has much of the same data in memory.
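As a minimal sketch of how KSM can be enabled and monitored on the host, the commands below use the standard Linux sysfs interface; the pages_to_scan value is only an illustrative tuning choice, not necessarily the value used in this deployment.

# Sketch: enable KSM on the Wistar host and inspect how much is being shared.
echo 1 | sudo tee /sys/kernel/mm/ksm/run              # 1 = run, 0 = stop
echo 1000 | sudo tee /sys/kernel/mm/ksm/pages_to_scan # illustrative tuning value

# pages_shared / pages_sharing indicate how many identical pages from the
# vMX instances have been merged into shared copies.
grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing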

6.2 vMX configuration

To evaluate if the virtual devices function in the same fashion as the physical devices, the configurations employed in the physical testbed were used. However, as the vMX instances are not exact replicas of the physical devices and the topology is somewhat different, some changes are made. This is especially relevant for the JNA ACX devices in the physical testbed, which are substituted by vMX instances. Since the J2K 02 device is not yet actually deployed in the physical testbed, there are no configurations available for this device to deploy in the virtual testbed. The device is added to the virtual topology since it will be installed in the physical topology, and having it represented in the virtual topology will therefore be needed in the future. The configuration files were obtained from the SURFnet network engineers. Part of this research therefore was also to gain an understanding of the Juniper configuration system in order to correctly deploy the vMX devices in the virtual testbed.

Firstly, all external authentication systems were stripped from the configuration. The physical testbed makes use of external authentication to allow users to sign in. As the virtual testbed is deployed in isolation, these external systems are not available, and as such the configuration is adapted to only allow locally defined users to sign in. In the physical testbed, the login would occur via the special fxp0 port, with firewall configuration on lo0 which then only allows certain IPs to connect. These firewall rules have also been deleted, since the fxp0 port on each of the vMX instances is only available from within the Wistar host. The firewall rules on fxp0 would therefore be redundant, since access to the Wistar host would already grant complete access to the vMX instances. These firewall rules would be needed if the fxp0 management port were made accessible to external systems, in which case they would have to be configured to still allow access via the fxp0 interface. To allow Wistar to log in to the devices, a user and password known to the software have also been added to the configuration. Additionally, the physical devices are connected to an SNMP server, which again is not available in the testbed and is thus stripped from the configuration. The removed configuration tree statements are shown in listing 6.1 in abbreviated fashion.

Listing 6.1: Login and system configuration removed statements

system {
    radius-server { }
    login {
        class readonly-plus-viewconfig
        user fallback
        user ftp
        user noc
        user remote
    }
    ntp { }
}
snmp { }
interfaces {
    lo0 {
        unit 0 {
            family inet {
                filter {
                    input re-protect-v4;
                }
            }
            family inet6 {
                filter {
                    input re-protect-v6;
                }
            }
        }
    }
}

The physical devices in the testbed make use of specially named interfaces for their configuration, corresponding to both the speed and the medium used for that specific interface. The interfaces of the vMX instances are all named in a similar and sequential manner. The configurations taken from the physical testbed therefore all had their interface names changed to the names used in the virtual devices. The physical devices also have various redundant systems to allow for hardware or software failures, but these are not present in the virtual devices. The configuration statements used to configure these redundant features have therefore also been stripped. The configuration files also contain statements regarding the additional line cards (FPCs) that the physical devices can have; these are removed as well. As the Fusion
