Those who have been reading my posts for a while will know that while I am currently a DevOps Engineer, I spent the previous decade managing and configuring service provider networks. For the majority of that time, the network was configured by hand. The closest most people in the industry had to an automation toolset was a spreadsheet with variables, their own home-grown scripts, or delegating the task to multiple junior engineers.

However, in recent years, tools like Ansible and Salt have made it so that you no longer need to learn Python, Golang, Perl or other languages to reap the benefits of automation.

With that in mind, I am putting together a series of posts on using Ansible to manage a number of different vendor’s networking appliances. This is the first time I have put together a true series of posts, but I am hoping to do more series on other topics in the future.

What vendors will be covered?

I have chosen a number of different vendors to cover in this series, all for different reasons: -

Cisco IOS

Cisco is still a heavyweight in the network industry, and I have spent the majority of my career managing Cisco-based networks, so IOS seems an obvious choice. I’ll also try to cover Cisco IOS-XE, which mostly continues the same (or similar) syntax, but with better process isolation and a more modern architecture.

Juniper JunOS

I currently work for a company that runs an entirely Juniper-based network (save for a couple of Cisco-based out-of-band management switches). Juniper are also a very popular option in the service provider sector.

Arista EOS

Arista are very popular in the data centre space, and EOS itself runs on top of Linux. Automation in the data centre allows rapid turn-up of services, bringing the time to deploy a service down from weeks or months to hours or minutes.

Cumulus

Cumulus provide a network “distribution” that runs primarily on white box switches (i.e. hardware you can run your own operating system on, rather than being tied to a vendor). The operating system itself is a Debian derivative.

Cumulus have contributed heavily to open source networking already, with significant changes to Quagga/FRR (including the ability to use unnumbered BGP neighbours, a topic I will cover later), and also provide the Cumulus VX virtual machine to train and test with.

Extreme EXOS

Extreme now own most of the Brocade (and therefore ex-Foundry) service provider portfolio, and they also have their own operating system, EXOS. A lot of Extreme, Brocade and Foundry kit has formed the basis of many of the major peering LANs. LINX (the London Internet Exchange) ran one of its main peering LANs in London on Extreme switching until very recently, and its current Juniper LAN used to be based upon Foundry gear.

Beyond this though, I have little familiarity with Extreme gear, so this will be as much about me learning EXOS as it is about managing it with Ansible.

MikroTik RouterOS

MikroTik provide a lot of features at a very attractive price, and when you find the right use case for them, they can keep up with equipment that costs significantly more, sometimes at a tenth of the price.

A significant part of my career involved managing MikroTik devices, so they are an obvious choice to include in this series from my perspective.

VyOS

A fork of Vyatta, VyOS is open source and easy to get hold of. I have used VyOS as a generic router in previous posts, and it is a good option for virtualised routers too.

pfSense and OPNsense

I am not as familiar with pfSense as I am with Cisco ASAs, Juniper SRXs or Check Point firewalls, but it is a very popular option in the industry, and this will give me an opportunity to learn the inner workings of pfSense.

Ignoring the pfSense/OPNsense controversy, for the sake of completeness I also want to include OPNsense, to see whether those migrating from one to the other will be able to use the same tooling to manage both.

Other vendors

The above list is not exhaustive, and I am more than happy to look into other vendors if I get enough feedback on them. The following are ones I have considered, but with caveats as to whether I will include them: -

HP/Aruba Procurve

I do not currently have access to any of the hardware, so I would need to source some from somewhere for this. If anyone has a switch I could get access to, either physically or remotely, I am more than happy to include it. If this series proves popular enough, I may look to source some anyway.

HPE/H3C Comware

Unfortunately, Comware no longer seems to be a popular option. I still have access to HP’s VSR1000 image if people would like me to include it, but (at least from the HPE perspective) Comware appears to be being phased out.

This would also probably cover Huawei’s data centre and service provider line, as there are a number of similarities to Comware.

Again, if there is enough call for this, I am more than happy to include them.

Nokia (ex-Alcatel)

I am considering including Nokia, but whether I do will mostly come down to time constraints. While I have briefly played with some Nokia (then Alcatel) kit in the past, their approach to services and customers within the configuration would take quite some time to refamiliarise myself with.

Cisco IOS-XR and NX-OS

For those who use IOS and IOS-XE, the transition to either IOS-XR or NX-OS is not a large one, so many of the techniques used to manage IOS would be applicable to either. If there is enough demand, I will try to include them.

Can I include these?

If time permits, I will try to include the above (and maybe others, like Check Point, Fortinet, and any other vendor that provides a virtualised/training version of their operating system to use).

Labbing environment

The labbing environment I am going to use for this will be: -

  • The KVM hypervisor running on Linux
  • A virtual machine, running CentOS 8, that will run: -
    • FRR - Acting as a route server
    • Syslog
    • Tacplus (for TACACS+ integration)
  • Two routers/virtual machines of each vendor, one running as an “edge” router, one running as an “internal” router
  • A control machine that Ansible will run from, over a management network to all machines
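
To give an idea of how Ansible will see this lab, below is a rough sketch of the kind of inventory I have in mind. The host names, group names and management addresses are placeholders (and only two vendor groups plus the services VM are shown for brevity); the real inventory will have one group per vendor and will be built up as the series progresses.

```yaml
# Sketch only - host names, groups and addresses are placeholders.
all:
  children:
    ios:
      hosts:
        ios-edge:
          ansible_host: 192.168.100.11
        ios-internal:
          ansible_host: 192.168.100.12
      vars:
        ansible_network_os: cisco.ios.ios
        ansible_connection: ansible.netcommon.network_cli
    junos:
      hosts:
        junos-edge:
          ansible_host: 192.168.100.21
        junos-internal:
          ansible_host: 192.168.100.22
      vars:
        ansible_network_os: junipernetworks.junos.junos
        ansible_connection: ansible.netcommon.netconf
    services:
      hosts:
        centos8:
          ansible_host: 192.168.100.10
```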

Prerequisites

For each vendor, there are some prerequisites for managing the devices via Ansible. This will usually mean creating an Ansible user with the correct access level and allowing SSH access.

Where possible, I will also use public SSH keys, so that passwords do not need to be stored within Ansible configuration. If it is not possible, I will use Ansible Vault to store the passwords in an encrypted manner.
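
As a rough sketch of what that could look like (the file name, user name, key path and password here are purely illustrative, and the exact variable names can vary by connection plugin), the group variables for a vendor might reference an SSH key, with any unavoidable password encrypted with ansible-vault encrypt_string rather than stored in plain text; the vault blob below is truncated.

```yaml
# group_vars/ios.yml - illustrative values only
ansible_user: ansible
ansible_ssh_private_key_file: ~/.ssh/lab_ansible   # matching public key loaded onto the device

# Where a password is unavoidable, keep it vaulted, e.g. generated with:
#   ansible-vault encrypt_string --name 'ansible_password' 'ExamplePassword'
ansible_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          31323334353637383930
```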

Configuration objectives

For each vendor, I am setting myself the challenge of configuring the following, all using Ansible.

BGP

Each vendor “network” (i.e. the two routers) will run in their own Autonomous System. The “edge” router will run a BGP peering session to FRR running on the CentOS 8 virtual machine. The “internal” router will receive networks from the “edge” router via iBGP.
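
To give a flavour of what the eventual playbooks might look like, here is a minimal sketch of pushing the edge router’s BGP neighbours to an IOS device with the cisco.ios.ios_config module. The AS numbers, neighbour addresses and host names are made up for illustration, and the real posts will drive this from variables and templates rather than hard-coded lines.

```yaml
# Sketch only - ASNs, neighbour addresses and host names are placeholders.
- name: Configure BGP on the IOS edge router
  hosts: ios-edge
  gather_facts: false
  tasks:
    - name: Add the eBGP (FRR route server) and iBGP (internal router) neighbours
      cisco.ios.ios_config:
        parents: router bgp 65001
        lines:
          - neighbor 10.0.0.1 remote-as 64512
          - neighbor 10.0.0.1 description FRR route server on the CentOS 8 VM
          - neighbor 192.0.2.2 remote-as 65001
          - neighbor 192.0.2.2 update-source Loopback0
```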

OSPF

OSPF will run between the two routers to exchange loopback addresses (for BGP to run over), and the OSPF peering will be authenticated.
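
As a similarly hedged sketch on another platform, the Juniper side of this might be a handful of set commands pushed with junipernetworks.junos.junos_config; the interface names, area and authentication key below are placeholders.

```yaml
# Sketch only - interfaces, area and key are placeholders.
- name: Configure OSPF between the two Juniper routers
  hosts: junos
  gather_facts: false
  tasks:
    - name: Enable authenticated OSPF on the inter-router link and advertise the loopback
      junipernetworks.junos.junos_config:
        lines:
          - set protocols ospf area 0.0.0.0 interface ge-0/0/1.0 authentication md5 1 key ExampleKey
          - set protocols ospf area 0.0.0.0 interface lo0.0 passive
```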

IPv6

IPv6 will be used as well as IPv4. This includes IPv6 BGP peering sessions, OSPFv3 for IPv6 routing, and IPv6 addressing throughout.

Firewall Filtering

Filtering and/or access lists will be applied on the “edge” router so that only BGP is allowed between it and the CentOS 8 VM, along with ICMP/pings and syslog from the loopback IPs of each router.

NAT to the internet

NAT will be used on the “edge” router to provide basic ICMP/ping access to the internet.

Management, logging and authentication

SNMPv3 will be configured to allow monitoring from a network monitoring system. Syslog will be configured to forward all logs to the CentOS 8 virtual machine. TACACS+ will be configured to allow authentication and authorisation of remote user login sessions on the routers.
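
Purely as a sketch of the same pattern (the server address, SNMP user and group, and TACACS+ key are placeholders, and the exact command syntax will be checked in the relevant post), the management plane pieces on an Arista switch might be pushed with arista.eos.eos_config:

```yaml
# Sketch only - addresses, users, groups and keys are placeholders.
- name: Configure logging, SNMPv3 and TACACS+ on an EOS device
  hosts: eos
  gather_facts: false
  tasks:
    - name: Point syslog, SNMPv3 and TACACS+ at the CentOS 8 VM
      arista.eos.eos_config:
        lines:
          - logging host 192.168.100.10
          - snmp-server user monitor-user monitor-group v3 auth sha ExampleAuth priv aes ExamplePriv
          - tacacs-server host 192.168.100.10 key ExampleKey
          - aaa authentication login default group tacacs+ local
```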

Anything else?

While I won’t be configuring every feature that everyone uses across the different vendors’ equipment, this series will hopefully provide enough information on using Ansible to manage your network that you can then customise the approach to the features you need.

I also believe that showing the same approach across multiple vendors (and how similar some aspects are) will make it easier for people to move from one vendor to another, without worrying about having to relearn the syntax or tooling used to manage their networking estate.

I intend to have the first post out very soon, so keep an eye out!