1.2.3 IRP Technical Requirements
To plan the IRP deployment in your network, a series of requirements must be met and specific information must be gathered to configure IRP.
1.2.3.1 Hardware Requirements
- CPU
→ Intel® Xeon® E3/E5 family processors are recommended, for example:
– 1x Intel® Xeon® E3 family processor for up to 20 Gbps of traffic;
– 1x Intel® Xeon® E5 family processor for 40 Gbps of traffic or more.
- RAM
→ If providing sFlow/NetFlow data: at least 16 GB, 32 GB recommended;
→ If providing raw traffic data by port mirroring:
– Minimum 16 GB for up to 10 Gbps traffic;
– Minimum 32 GB for 40 Gbps traffic.
→ Additional RAM is required to maintain a large number of BGP and BMP sessions (for example, Bgpd occupies about 10 GB of RAM for 16 full-view BGP sessions; this estimate may change as the global BGP table grows and new IRP features are added).
- HDD
→ At least 160 GB of storage.
→ SAS disks are recommended (SSDs are required only for networks carrying 40 Gbps+ of traffic);
→ HDD partitioning (see the sketch after this list):
– LVM is recommended;
– At least 100 GB of disk space usable for /var, or a separate partition;
– At least 10 GB of disk space usable for /tmp, or a separate partition. This is required for manipulating large MySQL tables. More disk space might be required under heavy workload.
- NIC
→ If providing sFlow/NetFlow data – at least one 1000 Mbps NIC; two NICs are recommended, one of which is dedicated to management.
→ If providing raw traffic data by port mirroring – an additional 10G interface is required for each configured SPAN port (Myricom 10G network cards with a Sniffer10G license are recommended for high-pps networks). When configuring multiple SPAN ports, the same number of additional CPU cores is needed to analyze the traffic.
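For the HDD partitioning above, a minimal LVM sketch on a fresh installation could look as follows; the device name /dev/sdb and the volume group name vg_irp are assumptions, so adapt them to your hardware:

```
# Create a physical volume and a volume group on the data disk (assumed /dev/sdb)
pvcreate /dev/sdb
vgcreate vg_irp /dev/sdb

# Dedicated logical volumes: >= 100 GB for /var and >= 10 GB for /tmp
lvcreate -L 100G -n var vg_irp
lvcreate -L 10G -n tmp vg_irp
mkfs.ext4 /dev/vg_irp/var
mkfs.ext4 /dev/vg_irp/tmp

# Mount persistently (on a fresh system, before data is written to /var)
echo '/dev/vg_irp/var /var ext4 defaults 0 2' >> /etc/fstab
echo '/dev/vg_irp/tmp /tmp ext4 defaults 0 2' >> /etc/fstab
mount -a
```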
1.2.3.2 Software Requirements
IRP depends on a MySQL/MariaDB server and expects the latest version from the official OS repositories. If the DBMS has been installed from a different repository, it is strongly advised to purge the database instance and its configuration before proceeding with the IRP installation.
IRP requires root access to the local database instance during the initial installation. If root access cannot be granted, use statements such as the ones below to grant all necessary privileges to the 'irp' user and database:
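A minimal sketch of such statements, assuming the database and user are both named irp and using a placeholder password (the exact statements may differ between IRP versions):

```
-- Run as a privileged MySQL/MariaDB user; names and password are illustrative
CREATE DATABASE IF NOT EXISTS irp;
CREATE USER 'irp'@'localhost' IDENTIFIED BY 'CHANGE_ME';
GRANT ALL PRIVILEGES ON irp.* TO 'irp'@'localhost';
FLUSH PRIVILEGES;
```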
The database connection settings used by IRP are stored in /etc/noction/db.global.conf.
1.2.3.3 Network-Related Information and Configuration
- You own the AS of the network where IRP is deployed;
- BGP is used for routing;
- The network is multi-homed.
- Prepare a network diagram that includes all the horizontal (own) routers as well as the upstream (provider) and downstream (customer) routers. Check whether your network topology is logically similar to one or more of the samples listed in the Collector Configuration section, for example Figure: Flow export configuration.
- Identify the list of prefixes announced by your AS that must be analyzed and optimized by IRP.
- Review the output of commands below (or similar) from all Edge Routers:
→ `sh ip bgp summary`
→ `sh ip bgp neighbor [neighbor-address] received-routes`
→ `sh run` (or similar)
- Provide traffic data by one of the following methods:
→ sFlow, NetFlow (v1, v5, v9) or jFlow, sent to the main server IP. Make sure the IRP server receives both inbound and outbound traffic information (a configuration sketch follows this item).
Egress flow accounting should be enabled on the provider links or, if this is not technically possible, ingress flow accounting should be enabled on all the interfaces facing the internal network.
NetFlow is most suitable for high traffic volumes, or for a sophisticated network infrastructure where port mirroring is not technically possible.
Recommended sampling rates:
– For traffic up to 1 Gbps: 1024
– For traffic up to 10 Gbps: 2048
→ Or: configure port mirroring (a partial traffic copy will suffice). In this case, additional network interfaces on the server will be required – one for each mirrored port.
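As an illustration, here is a minimal sketch of classic NetFlow v9 export on a Cisco IOS edge router; the collector address 192.0.2.10, UDP port 2055 and the interface names are assumptions, and the exact syntax varies by platform and IOS version:

```
! Export NetFlow v9 records to the IRP server (address and port are illustrative)
ip flow-export version 9
ip flow-export destination 192.0.2.10 2055
ip flow-export source Loopback0

! Egress accounting on the provider link; if egress accounting is not
! supported, enable ingress accounting on all internal-facing interfaces instead
interface GigabitEthernet0/1
 description Uplink to provider
 ip flow ingress
 ip flow egress
```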
- Set up Policy-Based Routing (PBR) for IRP active probing (a configuration sketch follows this item).
→ Apart from the main server IP, add an additional alias IP for each provider and configure PBR so that traffic originating from each of these IPs is routed over a different provider.
→ No route maps should be enforced for the main server IP; traffic originating from it should pass through the routers using the default routes.
→ Define the Provider ↔ PBR IP routing map.
In certain complex scenarios, traffic from the IRP server must pass through multiple routers before reaching the provider. If a separate probing VLAN cannot be configured across all routers, GRE tunnels from IRP to the edge routers should be configured. The tunnels are mainly used to prevent the additional overhead of route maps configured along the whole IRP ↔ edge router path.
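A minimal PBR sketch for a Cisco IOS router follows; all addresses are assumptions (alias IPs 192.0.2.11 and 192.0.2.12 probe via two providers with next hops 198.51.100.1 and 203.0.113.1, while the main server IP matches no route-map entry and follows the default routes):

```
! Match probing traffic sourced from each per-provider alias IP
ip access-list extended IRP-PROBE-A
 permit ip host 192.0.2.11 any
ip access-list extended IRP-PROBE-B
 permit ip host 192.0.2.12 any

! Route each alias IP over its dedicated provider next hop
route-map IRP-PBR permit 10
 match ip address IRP-PROBE-A
 set ip next-hop 198.51.100.1
route-map IRP-PBR permit 20
 match ip address IRP-PROBE-B
 set ip next-hop 203.0.113.1

! Apply to the router interface facing the IRP server
interface GigabitEthernet0/2
 ip policy route-map IRP-PBR
```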
- Configure SNMP access for each provider link and provide the following information (a verification sketch follows this item):
→ SNMP interface name (or ifIndex)
→ SNMP IP (usually the router IP)
→ SNMP community
This information is required for report generation, for Commit Control decision-making, and to prevent overloading a specific provider with an excessive number of improvements.
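SNMP reachability can be verified from the IRP server before configuration, for example with snmpwalk/snmpget; the router address 192.0.2.1, the community string and the ifIndex below are placeholders:

```
# List interface names to determine the ifIndex of each provider link
snmpwalk -v2c -c <community> 192.0.2.1 IF-MIB::ifDescr

# Read the 64-bit in/out octet counters for a given ifIndex (here: 4)
snmpget -v2c -c <community> 192.0.2.1 IF-MIB::ifHCInOctets.4 IF-MIB::ifHCOutOctets.4
```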
- To set up cost-related settings as well as the Commit Control mechanism, provide the maximum allowed interface throughput for each provider link, as well as the cost per Mbps for each provider.