IRP Installation and Configuration Guide
2 Configuration
2.1 Configuration files
- /etc/noction/irp.conf - the main IRP configuration file; it contains configuration parameters for all IRP components, including algorithm parameters, optimization mode definitions and provider settings.
- /etc/noction/db.global.conf - database configuration file for all IRP components, except the Frontend component.
- /etc/noction/db.frontend.conf - database configuration file for the Frontend component.
- /etc/noction/exchanges.conf - Exchanges configuration file (3.12.7.2↓).
- /etc/noction/inbound.conf - Inbound prefixes configuration file (3.12.12↓).
- /etc/noction/policies.conf - Routing Policies configuration file (1.2.9↑).
- /etc/noction/user_directories.conf - User Directories configuration file (3.12.10.2↓).
2.2 Global and Core Configuration
- global.nonintrusive_bgp↓ - must be set to “1” until the configuration and route propagation tests are completed.
- global.improve_mode↓ - must be configured according to the specific network operator policies. See also: IRP Optimization modes↑
- global.aggregate↓ - in most cases it is recommended to enable aggregation in order to reduce the number of prefixes advertised by IRP. Verify that this is compatible with the network infrastructure and configuration.
- core.commit_control↓ - should be configured according to the specific network operator policies; see also Commit Control↑
- core.outage_detection↓ - in most cases it must be enabled. For more details see Outage detection↑. A sample configuration covering the parameters in this list is shown below.
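A minimal irp.conf sketch for this section (the values are illustrative assumptions, not recommendations; <improve-mode> is a placeholder for the operator's chosen optimization mode, and nonintrusive_bgp = 1 reflects the requirement to stay non-intrusive until the route propagation tests are completed):
global.nonintrusive_bgp = 1
global.improve_mode = <improve-mode>
global.aggregate = 1
core.commit_control = 1
core.outage_detection = 1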
2.3 Collector Configuration
2.3.1 Irpflowd Configuration
2.3.1.1 Flow agents
Flow agents are specified in the IPv4/interfaceID format, for example:
peer.X.flow_agents = 8.8.8.8/1 8.8.8.8/2 8.8.8.8/3 8.8.8.8/4
2.3.1.2 Configuration
- NetFlow/sFlow/jFlow must be configured on the router(s), which must send traffic flow information to the main IRP server IP (Figure 2.1↓). See (2.3.1.3↓) for specific network device configuration instructions
- Irpflowd must be enabled by setting the collector.flow.enabled↓ parameter:
collector.flow.enabled = 1
- A list of all the networks advertised by the edge routers that IRP will optimize should be added to the configuration. This information should be specified in the collector.ournets↓ parameter (see the example after this list).
- For security reasons, the list of valid Flow-sending IP addresses must be configured in collector.flow.sources↓, to protect irpflowd from unauthorized devices sending Flow data.
Example: collector.flow.sources = 10.0.0.0/29
- In case the Flow exporters are configured to use non-standard port numbers (the defaults are 2055 for NetFlow/jFlow and 6343 for sFlow), collector.flow.listen.nf↓ and collector.flow.listen.sf↓ must be adjusted accordingly:
collector.flow.listen.nf = 2055
collector.flow.listen.sf = 6343
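As an illustration, the networks announced by the edge routers can be listed in collector.ournets; the prefixes below are documentation placeholders (a space-separated list is assumed, as with the other collector parameters):
collector.ournets = 192.0.2.0/24 198.51.100.0/23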
2.3.1.3 Vendor-specific NetFlow configuration examples
(config)# mls netflow
(config)# mls flow ip interface-full
(config)# mls flow ipv6 interface-full
(config)# mls sampling packet-based 512 8192
(config)# mls nde sender version 7
(config)# ip flow-cache entries 524288
(config)# ip flow-cache timeout inactive 60
(config)# ip flow-cache timeout active 1
(config)# ip flow-export version 9
(config)# ip flow-export destination 10.11.12.14 2055
MLS NetFlow sampling must be enabled to preserve router resources.
(config)# int GigabitEthernet 3/6
(config-if)# mls netflow sampling
(config-if)# ip flow ingress
(config)# flow monitor IRP-FLOW-MONITOR
(config-flow-monitor)# record platform-original ipv4 full
(config-flow-monitor)# exporter IRP-FLOW-EXPORTER
(config-flow-monitor)# cache timeout inactive 60
(config-flow-monitor)# cache timeout active 60
(config-flow-monitor)# cache entries 1048576
(config)# flow exporter IRP-FLOW-EXPORTER
(config-flow-exporter)# destination 10.11.12.14
(config-flow-exporter)# source Loopback0
(config-flow-exporter)# transport udp 2055
(config-flow-exporter)# template data timeout 120
(config)# sampler flow-sampler
(config-sampler)# mode random 1 out-of 1024
(config)# interface FastEthernet0/0
(config-if)# ip flow monitor IRP-FLOW-MONITOR sampler flow-sampler input
(config-if)# ip flow monitor IRP-FLOW-MONITOR sampler flow-sampler output
Router(config)# ip flow-cache entries 524288
Router(config)# ip flow-cache timeout inactive 60
Router(config)# ip flow-cache timeout active 1
Router(config)# ip flow-export version 9
Router(config)# ip flow-export destination 10.11.12.14 2055
Router(config)# interface FastEthernet 1/0
Router(config-if)# ip flow ingress
Router(config-if)# ip flow egress
Router(config)# interface FastEthernet 1/0
Router(config-if)# ip route-cache flow
vyatta@vyatta# set system flow-accounting netflow server 10.11.12.14 port 2055
vyatta@vyatta# set system flow-accounting netflow version 5
vyatta@vyatta# set system flow-accounting interface eth0
vyatta@vyatta# commit
forwarding-options {
    sampling {
        input {
            family inet {
                rate 1000;
            }
        }
        family inet {
            output {
                flow-server 10.10.3.2 {
                    port 2055;
                    version 5;
                    source-address 10.255.255.1;
                }
            }
        }
    }
}
interfaces {
    xe-0/0/0 {
        unit 0 {
            family inet {
                sampling {
                    input;
                    output;
                }
            }
        }
    }
}
2.3.2 Irpspand Configuration
- Configure port mirroring on your router or switch, as shown in figures (2.3.2↓) and (2.3.3↓).
- Enable the span collector by setting the collector.span.enabled↓ parameter in the configuration file:
collector.span.enabled = 1
- Define the list of network interfaces that receive mirrored traffic by setting the collector.span.interfaces↓ parameter (multiple interfaces, separated by spaces, can be specified):
collector.span.interfaces = eth1 eth2 eth3
- A list of all the networks advertised by the edge routers that IRP will optimize must be added to the configuration. This information should be specified in the collector.ournets↓ parameter.
- In case blackouts, congestion and excessive delays are to be analyzed by the system, collector.span.min_delay↓ must be turned on as well:
collector.span.min_delay = 1
2.4 Explorer Configuration
- An additional IP alias for each provider should be assigned and configured on the IRP server (see the example after this list). This IP will be used as the source address during the probing process. It is recommended to configure reverse DNS records for each IP using the following template:
performance-check-via-<PROVIDER-NAME>.HARMLESS-NOCTION-IRP-PROBING.<YOUR-DOMAIN-NAME>.
- Policy-based routing (PBR) has to be configured on the edge router(s), so that traffic originating from each of these probing IP addresses exits the network via a specific provider. See the Specific PBR configuration scenarios↓ section for specific PBR configuration examples.
- Policy-based routing has to be configured to drop packets rather than route them through the default route if the corresponding next-hop does not exist in the routing table.
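As an illustration only (using the example addressing plan from the next section, with eth0 as the probing interface on a Linux-based IRP server), the probing IP aliases could be added as follows; the alias labels are arbitrary:
ip addr add 10.0.0.3/24 dev eth0 label eth0:1
ip addr add 10.0.0.4/24 dev eth0 label eth0:2
ip addr add 10.0.0.5/24 dev eth0 label eth0:3
A matching reverse DNS record for the first alias of a hypothetical provider ISP1 in the example.com domain would then be performance-check-via-ISP1.HARMLESS-NOCTION-IRP-PROBING.example.com.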
2.4.1 Specific PBR configuration scenarios
10.0.0.0/24 - used on the IRP server as well as the probing VLANs
10.0.0.2/32 - main IRP server IP address
10.0.0.3-10.0.0.5 - probing IP addresses
10.0.0.250-10.0.0.254 - router-side IP addresses for the probing VLANs
10.0.1.0/24 - used for GRE tunnel interfaces, if needed
10.10.0.0/24 - real edge routers IP addresses
10.11.0.0/30 - BGP session with the 1st provider, 10.11.0.1 being the ISP BGP neighbor IP
10.12.0.0/30 - BGP session with the 2nd provider, 10.12.0.1 being the ISP BGP neighbor IP
10.13.0.0/30 - BGP session with the 3rd provider, 10.13.0.1 being the ISP BGP neighbor IP
Vlan 3 - the probing Vlan
eth0 - the probing network interface on the IRP server
access-list 1 permit ip host 10.0.0.3
access-list 2 permit ip host 10.0.0.4
!
route-map irp-peer permit 10
 match ip address 1
 set ip next-hop 10.11.0.1
 set interface Null0
!
route-map irp-peer permit 20
 match ip address 2
 set ip next-hop 10.12.0.1
 set interface Null0
!
interface ve 3
 ip policy route-map irp-peer
configure
ipv4 access-list irp-peer
 10 permit ipv4 host 10.0.0.3 any nexthop1 ipv4 10.11.0.1 nexthop2 ipv4 169.254.0.254
 11 permit ipv4 host 10.0.0.4 any nexthop1 ipv4 10.12.0.1 nexthop2 ipv4 169.254.0.254
end
router static
 address-family ipv4 unicast
  169.254.0.254 Null0
end
interface FastEthernet1/1
 ipv4 access-group irp-peer ingress
end
[edit interfaces]
xe-0/0/0 {
    unit 3 {
        family inet {
            filter {
                input IRP-policy;
            }
        }
    }
}
[edit firewall]
family inet {
    filter IRP-policy {
        term irp-peer1 {
            from {
                source-address 10.0.0.3/32;
            }
            then {
                routing-instance irp-isp1-route;
            }
        }
        term irp-peer2 {
            from {
                source-address 10.0.0.4/32;
            }
            then {
                routing-instance irp-isp2-route;
            }
        }
        term default {
            then {
                accept;
            }
        }
    }
}
[edit]
routing-instances {
    irp-isp1-route {
        instance-type forwarding;
        routing-options {
            static {
                route 0.0.0.0/0 next-hop 10.11.0.1;
            }
        }
    }
    irp-isp2-route {
        instance-type forwarding;
        routing-options {
            static {
                route 0.0.0.0/0 next-hop 10.12.0.1;
            }
        }
    }
}
routing-options {
    interface-routes {
        rib-group inet irp-policies;
    }
    rib-groups {
        irp-policies {
            import-rib [ inet.0 irp-isp1-route.inet.0 irp-isp2-route.inet.0 ];
        }
    }
}
ip route add default via 10.11.0.1 table 101
ip route add default via 10.12.0.1 table 102
ip rule add from 10.0.0.3 table 101 pref 32001
ip rule add from 10.0.0.4 table 102 pref 32002
# Setup the routing policy:
set policy route IRP-ROUTE
set policy route IRP-ROUTE rule 10 destination address 0.0.0.0/0
set policy route IRP-ROUTE rule 10 source address 10.0.0.3/32
set policy route IRP-ROUTE rule 10 set table 103
set policy route IRP-ROUTE rule 20 destination address 0.0.0.0/0
set policy route IRP-ROUTE rule 20 source address 10.0.0.4/32
set policy route IRP-ROUTE rule 20 set table 104
set policy route IRP-ROUTE rule 30 destination address 0.0.0.0/0
set policy route IRP-ROUTE rule 30 source address 0.0.0.0/0
set policy route IRP-ROUTE rule 30 set table main
commit
# Create static route tables:
set protocols static table 103 route 0.0.0.0/0 nexthop 10.11.0.1
set protocols static table 104 route 0.0.0.0/0 nexthop 10.12.0.1
commit
# Assign policies to specific interfaces, Vlan 3 on eth1 in this example:
set interfaces ethernet eth1.3 policy route IRP-ROUTE
# Verify the configuration:
show policy route IRP-ROUTE
show protocols static
show interfaces ethernet eth1.3
The following IP addresses are configured on the routers:
- 10.0.0.251 is configured on R1, VE3
- 10.0.0.252 is configured on R2, VE3
ip route add default via 10.0.0.251 table 201
ip route add default via 10.0.0.252 table 202
ip rule add from 10.0.0.3 table 201 pref 32101
ip rule add from 10.0.0.4 table 202 pref 32102
#/etc/sysconfig/network-scripts/route-eth0:
default via 10.0.0.251 table 201
default via 10.0.0.252 table 202
#/etc/sysconfig/network-scripts/rule-eth0:
from 10.0.0.3 table 201 pref 32101
from 10.0.0.4 table 202 pref 32102
Some Brocade routers/switches have PBR configuration limitations. Please refer to the "Policy-Based Routing" → "Configuration considerations" section in the Brocade documentation for your router/switch model. For example, BigIron RX Series of switches do not support more than 6 instances of a route map, more than 6 ACLs in a matching policy of each route map instance, and more than 6 next hops in a set policy of each route map instance. On the other hand, some Brocade CER/CES routers/switches have these limits raised up to 200 instances (depending on package version).
#Router R1
access-list 1 permit ip host 10.0.0.3
!
route-map irp-peer permit 10
 match ip address 1
 set ip next-hop 10.11.0.1
 set interface Null0
!
interface ve 3
 ip policy route-map irp-peer

#Router R2
access-list 1 permit ip host 10.0.0.4
!
route-map irp-peer permit 10
 match ip address 1
 set ip next-hop 10.12.0.1
 set interface Null0
!
interface ve 3
 ip policy route-map irp-peer
modprobe ip_gre
ip tunnel add tun0 mode gre remote 10.10.0.1 local 10.0.0.2 ttl 64 dev eth0
ip addr add dev tun0 10.0.1.2/32 peer 10.0.1.1/32
ip link set dev tun0 up
#/etc/sysconfig/network-scripts/ifcfg-tun0
DEVICE=tun0
TYPE=GRE
ONBOOT=yes
MY_INNER_IPADDR=10.0.1.2
MY_OUTER_IPADDR=10.0.0.2
PEER_INNER_IPADDR=10.0.1.1
PEER_OUTER_IPADDR=10.10.0.1
TTL=64
set interfaces tunnel tun0
set interfaces tunnel tun0 address 10.0.1.1/30
set interfaces tunnel tun0 description "IRP Tunnel 1"
set interfaces tunnel tun0 encapsulation gre
set interfaces tunnel tun0 local-ip 10.10.0.1
set interfaces tunnel tun0 remote-ip 10.0.0.2
interface Tunnel0
 ip address 10.0.1.1 255.255.255.252
 tunnel mode gre ip
 tunnel source Loopback1
 tunnel destination 10.0.0.2
interfaces {
    gr-0/0/0 {
        unit 0 {
            tunnel {
                source 10.0.0.2;
                destination 10.10.0.1;
            }
            family inet {
                address 10.0.1.1/32;
            }
        }
    }
}
ip route add default dev tun0 table 201
ip route add default dev tun1 table 202
ip route add default dev tun2 table 203
ip rule add from 10.0.1.2 table 201 pref 32101
ip rule add from 10.0.1.6 table 202 pref 32102
ip rule add from 10.0.1.10 table 203 pref 32103
#/etc/sysconfig/network-scripts/route-tun0:
default dev tun0 table 201
default dev tun1 table 202
default dev tun2 table 203
#/etc/sysconfig/network-scripts/rule-tun0:
from 10.0.1.2 table 201 pref 32101
from 10.0.1.6 table 202 pref 32102
from 10.0.1.10 table 203 pref 32103
#Router R1
access-list 1 permit ip host 10.0.1.2
access-list 2 permit ip host 10.0.1.6
!
route-map irp-peer permit 10
 match ip address 1
 set ip next-hop 10.11.0.1
 set interface Null0
!
route-map irp-peer permit 20
 match ip address 2
 set ip next-hop 10.12.0.1
 set interface Null0
!
interface Tunnel0
 ip policy route-map irp-peer
interface Tunnel1
 ip policy route-map irp-peer

#Router R2
access-list 1 permit ip host 10.0.1.10
!
route-map irp-peer permit 10
 match ip address 1
 set ip next-hop 10.13.0.1
 set interface Null0
!
interface Tunnel0
 ip policy route-map irp-peer
!--- repeated block for each peering partner
no route-map <ROUTEMAP> permit <ACL>
no ip access-list extended <ROUTEMAP>-<ACL>
ip access-list extended <ROUTEMAP>-<ACL>
 permit ip host <PROBING_IP> any dscp <PROBING_DSCP>
route-map <ROUTEMAP> permit <ACL>
 match ip address <ROUTEMAP>-<ACL>
 set ip next-hop <NEXT_HOP>
 set interface Null0
!--- block at the end of PBR file
interface <INTERFACE>
 ip policy route-map <ROUTEMAP>
- <ROUTEMAP> represents the name assigned by IRP and equals the value of the Route Map parameter in PBR Generator ("irp-ix" in Figure 4)
- <ACL> represents a counter that identifies individual ACL rules. This variable’s initial value is taken from ACL name start field of PBR Generator and is subsequently incremented for each ACL
- <PROBING_IP> one of the configured probing IPs that IRP uses to probe link characteristics via different peering partners. One probing IP is sufficient to cover up to 64 peering partners
- <PROBING_DSCP> an incremented DSCP value assigned by IRP for probing a specific peering partner. This is used in combination with the probing IP
- <NEXT_HOP> represents the IP address identifying the peering partner on the exchange. This parameter is retrieved during autoconfiguration and preserved in Exchange configuration
- <INTERFACE> represents the interface through which traffic conforming to the rule will exit the Exchange router. This is populated with the Interface value of PBR Generator
!--- repeated block for each peering partner
no route-map <ROUTEMAP> permit <ACL>
no ip access-list extended <ROUTEMAP>-<ACL>
ip access-list extended <ROUTEMAP>-<ACL>
 permit ip host <PROBING_IP> any dscp-matching <PROBING_DSCP>
route-map <ROUTEMAP> permit <ACL>
 match ip address <ROUTEMAP>-<ACL>
 set ip next-hop <NEXT_HOP>
 set interface Null0
!--- block at the end of PBR file
interface <INTERFACE>
 ip policy route-map <ROUTEMAP>
load replace relative terminal
[Type ^D at a new line to end input]
interfaces {
    <INTERFACE> {
        unit <INTERFACE_UNIT> {
            family inet {
                filter {
                    replace: input <ROUTEMAP>;
                }
            }
        }
    }
}
load replace relative terminal
[Type ^D at a new line to end input]
firewall {
    family inet {
        filter <ROUTEMAP> {
            replace: term <ROUTEMAP><ACL> {
                from {
                    source-address <PROBING_IP>;
                    dscp <PROBING_DSCP>;
                }
                then {
                    routing-instance <ROUTEMAP><ACL>-route;
                }
            }
            ...
            replace: term default {
                then {
                    accept;
                }
            }
        }
    }
}
load replace relative terminal
[Type ^D at a new line to end input]
routing-instances {
    replace: <ROUTEMAP><ACL>-route {
        instance-type forwarding;
        routing-options {
            static {
                route 0.0.0.0/0 next-hop <NEXT_HOP>;
            }
        }
    }
    ...
}
load merge relative terminal
[Type ^D at a new line to end input]
routing-options {
    interface-routes {
        replace: rib-group inet <ROUTEMAP>rib;
    }
    rib-groups {
        replace: <ROUTEMAP>rib {
            import-rib [ inet.0 <ROUTEMAP><ACL>-route.inet.0 ... ];
        }
    }
}
- <INTERFACE> represents the interface through which traffic conforming to the rule will exit the Exchange router. This is populated with the Interface value of PBR Generator
- <INTERFACE_UNIT> is the value of the Interface Unit parameter in PBR Generator
- <ROUTEMAP> represents the name assigned by IRP and equals the value of the Route Map parameter in PBR Generator
- <ACL> represents a combined counter like "00009" that identifies individual ACL rules. This variable’s initial value is taken from ACL name start field of PBR Generator and is subsequently incremented for each ACL
- <PROBING_IP> one of the configured probing IPs that IRP uses to probe link characteristics via different peering partners. One probing IP is sufficient to cover up to 64 peering partners
- <PROBING_DSCP> an incremented DSCP value assigned by IRP for probing a specific peering partner. This is used in combination with the probing IP
- <NEXT_HOP> represents the IP address identifying the peering partner on the exchange. This parameter is retrieved during autoconfiguration and preserved in Exchange configuration
root@server ~ $ traceroute -m 5 8.8.8.8 -nns 10.0.0.3
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1 10.0.0.1 0.696 ms 0.716 ms 0.783 ms
 2 10.11.0.1 0.689 ms 0.695 ms 0.714 ms
 3 84.116.132.146 14.384 ms 13.882 ms 13.891 ms
 4 72.14.219.9 13.926 ms 14.477 ms 14.473 ms
 5 209.85.240.64 14.397 ms 13.989 ms 14.462 ms
root@server ~ $ traceroute -m 5 8.8.8.8 -nns 10.0.0.4
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1 10.0.0.1 0.696 ms 0.516 ms 0.723 ms
 2 10.12.0.1 0.619 ms 0.625 ms 0.864 ms
 3 83.16.126.26 13.324 ms 13.812 ms 13.983 ms
 4 72.14.219.9 15.262 ms 15.347 ms 15.431 ms
 5 209.85.240.64 16.371 ms 16.991 ms 16.162 ms
root@server ~ $ /usr/sbin/explorer -s
Starting PBR check
PBR check failed for provider A[2]. Diagnostic hop information: IP=10.11.0.12 TTL=3
PBR check succeeded for provider B[3]. Diagnostic hop information: IP=10.12.0.1 TTL=3
root@server ~ $ iptables -t mangle -I OUTPUT -d 8.8.8.8 -j DSCP --set-dscp <PROBING_DSCP>
root@server ~ $ traceroute -m 5 -nns <PROBING_IP> 8.8.8.8
traceroute to 8.8.8.8, 30 hops max, 60 byte packets
 1 ...
 2 ...
 3 <NEXT_HOP> 126.475 ms !X^C
- <NEXT_HOP> is a Peering Partner’s next-hop IP address in IRP configuration
- <PROBING_DSCP> is a Peering Partner’s DSCP value in IRP configuration
- <PROBING_IP> is a Peering Partner’s probing IP address in IRP configuration
2.4.1.1 Current route detection
2.4.1.2 Providers configuration
Before configuring providers in IRP, the BGP sessions need to be defined; see BGPd Configuration↓.
ISP1 - the provider’s name
10.0.0.1 - Router IP configured on the probing Vlan
10.0.0.3 - Probing IP for ISP1, configured on the IRP server
10.11.0.1, 10.11.0.2 - IP addresses used for the EBGP session with the ISP, 10.11.0.2 being configured on the router
400Mbps - the agreed bandwidth
1Gbps - the physical interface throughput
’public’ - read-only SNMP community configured on R1
GigabitEthernet2/1 - the physical interface that connects R1 to ISP1
peer.1.95th = 400
peer.1.95th.bill_day = 1
peer.1.bgp_peer = R1
peer.1.cost = 6
peer.1.description = ISP1
peer.1.ipv4.next_hop = 10.11.0.1
peer.1.ipv4.probing = 10.0.0.3
peer.1.ipv4.diag_hop = 10.11.0.1
peer.1.ipv4.mon = 10.11.0.1 10.11.0.2
peer.1.limit_load = 1000
peer.1.shortname = ISP1
peer.1.snmp.interfaces = 1:GigabitEthernet2/1
peer.1.mon.ipv4.bgp_peer = 10.11.0.1
snmp.1.name = Host1
snmp.1.ip = 10.0.0.1
snmp.1.community = public
root@server ~ $ snmpwalk -v2c -c irp-public 10.0.0.1 ifDescr
IF-MIB::ifDescr.1 = STRING: GigabitEthernet1/1
IF-MIB::ifDescr.2 = STRING: GigabitEthernet1/2
IF-MIB::ifDescr.3 = STRING: GigabitEthernet2/1
IF-MIB::ifDescr.4 = STRING: GigabitEthernet2/2
IF-MIB::ifDescr.5 = STRING: GigabitEthernet2/3
IF-MIB::ifDescr.6 = STRING: GigabitEthernet2/4
root@server ~ $ snmpwalk -v2c -c irp-public 10.0.0.1 ifIndex
IF-MIB::ifIndex.1 = INTEGER: 1
IF-MIB::ifIndex.2 = INTEGER: 2
IF-MIB::ifIndex.3 = INTEGER: 3
IF-MIB::ifIndex.4 = INTEGER: 4
IF-MIB::ifIndex.5 = INTEGER: 5
IF-MIB::ifIndex.6 = INTEGER: 6
2.4.2 Flowspec PBR
2.5 BGPd Configuration
- An internal BGP session using the same autonomous system number (ASN) must be configured between each edge router and the IRP server. BGP sessions must not be configured with next-hop-self (route reflectors cannot be used to inject routes with a modified next-hop); the next-hop attribute advertised by IRP BGPd should be distributed unchanged to other iBGP neighbors.
- route-reflector-client must be enabled for the routes advertised by IRP BGPd to be distributed to all non-client neighbors.
- Routes advertised by IRP BGPd must have a higher preference than routes received from external BGP neighbors. This can be achieved by different means, on the IRP side or on the router side:
- Local-pref can be set to a reasonably high value in the BGPd configuration
- Communities can be appended to prefixes advertised by BGPd
Avoid collisions between the localpref or communities values assigned to IRP in its configuration and those already used on the customer’s network.
- Multi-exit-discriminator (MED) can be changed to affect the best-path selection algorithm
- Origin of the advertised route can be left unchanged or overridden to a specific value (incomplete, IGP, EGP)
LocalPref, MED and Origin attribute values are set with the first nonempty value in this order: 1) value from configuration or 2) value taken from incoming aggregate or 3) default value specified in RFC4271.
The Communities attribute value concatenates the value taken from the incoming aggregate with the configured value. If IRP must announce a Communities attribute that contains only the configured value, the router should be configured not to send the Communities attribute.
- BGP next-hop must be configured for each provider configured in IRP (please refer to Providers configuration↑ and Providers settings↓)
For example, there are two routers: R1 and R2. R1 runs a BGP session with Level3 and R2 with Cogent. The current route is x.x.x.x/24 with the next-hop set to Level3 and all the routers learn this route via R1. The system injects the x.x.x.x/24 route to R2 with the next-hop updated to Cogent. In this case the new route is installed on the routing table and it will be properly propagated to R1 (and other routers) via iBGP.
However if the system injects the new route to R1 instead of R2, the route’s next-hop will point to R2 while R2 will have the next-hop pointing to R1 as long as the injected route is propagated over iBGP to other routers. In this case a routing loop will occur.
bgpd.peer.R1.as = 65501
bgpd.peer.R1.our_ip = 10.0.0.2
bgpd.peer.R1.peer_ip = 10.0.0.1
bgpd.peer.R1.listen = 1
bgpd.peer.R1.localpref = 190
bgpd.peer.R1.shutdown = 0
bgpd.peer.R1.snmp.ip = 10.0.0.1
bgpd.peer.R1.snmp.community = public
set protocols bgp 65501 neighbor 10.0.0.2 remote-as '65501'
set protocols bgp 65501 neighbor 10.0.0.2 route-reflector-client
set protocols bgp 65501 parameters router-id '10.0.0.1'
delete system ipv6 disable-forwarding
commit
set protocols bgp 65501 neighbor 2001:db8:2::2 remote-as '65501'
set protocols bgp 65501 neighbor 2001:db8:2::2 route-reflector-client
set protocols bgp 65501 neighbor 2001:db8:2::2 address-family 'ipv6-unicast'
set protocols bgp 65501 parameters router-id '10.0.0.1'
set protocols bgp 65501 neighbor 10.0.0.2 route-map import 'RM-IRP-IN'
set policy route-map RM-IRP-IN rule 10 action 'permit'
set policy route-map RM-IRP-IN rule 10 set local-preference '190'
set protocols bgp 65501 neighbor 2001:db8:2::2 route-map import 'RM-IRP-IN'
set policy route-map RM-IRP-IN rule 10 action 'permit'
set policy route-map RM-IRP-IN rule 10 set local-preference '190'
router bgp 65501
 neighbor 10.0.0.2 remote-as 65501
 neighbor 10.0.0.2 send-community
 neighbor 10.0.0.2 route-reflector-client
router bgp 65501
 neighbor 2001:db8:2::2 remote-as 65501
 neighbor 2001:db8:2::2 send-community
 neighbor 2001:db8:2::2 route-reflector-client

or

router bgp 65501
 neighbor 2001:db8:2::2 remote-as 65501
 no neighbor 2001:db8:2::2 activate
 address-family ipv6
  neighbor 2001:db8:2::2 activate
  neighbor 2001:db8:2::2 send-community
  neighbor 2001:db8:2::2 route-reflector-client
router bgp 65501
 neighbor 10.0.0.2 route-map RM-IRP-IN input
route-map RM-IRP-IN permit 10
 set local-preference 190
router bgp 65501
 neighbor 2001:db8:2::2 route-map RM-IRP-IN input
route-map RM-IRP-IN permit 10
 set local-preference 190
router bgp 65501
 neighbor 10.0.0.2 maximum-prefix 10000
router bgp 65501
 neighbor 2001:db8:2::2 maximum-prefix 10000
[edit]
routing-options {
    autonomous-system 65501;
    router-id 10.0.0.1;
}
protocols {
    bgp {
        group 65501 {
            type internal;
            cluster 0.0.0.1;
            family inet {
                unicast;
            }
            peer-as 65501;
            neighbor 10.0.0.2;
        }
    }
}
[edit]
routing-options {
    autonomous-system 65501;
    router-id 10.0.0.1;
}
protocols {
    bgp {
        group 65501 {
            type internal;
            cluster 0.0.0.1;
            family inet6 {
                any;
            }
            peer-as 65501;
            neighbor 2001:db8:2::2;
        }
    }
}
[edit]
routing-options {
    autonomous-system 65501;
    router-id 10.0.0.1;
}
protocols {
    bgp {
        group 65501 {
            type internal;
            peer-as 65501;
            neighbor 10.0.0.2 {
                preference 190;
            }
        }
    }
}
protocols {
    bgp {
        group 65501 {
            neighbor 10.0.0.2 {
                family inet {
                    any {
                        prefix-limit {
                            maximum 10000;
                            teardown;
                        }
                    }
                }
            }
        }
    }
}
2.5.1 AS-Path behavior in IRP BGPd
- The advertised prefix will be marked with a recovered AS-Path attribute. The recovered AS-Path is composed of consecutive AS numbers collected during the exploring process. Please note that the recovered AS-Path may differ from the actual BGP path.
- The advertised prefix will be marked with the AS-Path from the aggregate received via BGP.
- If the advertised prefix, for whatever reason, has an empty AS-Path, it can be announced or ignored, depending on the BGPd configuration.
- Routers may be configured to have more preferable localpref / weight values for such routes so the best path algorithm always selects these routes instead of the routes injected by the BGPd daemon.
- Routes may be filtered or have lower localpref / weight set up, using incoming route-map applied to BGP session with IRP.
- Networks that must be completely ignored by IRP can be specified in the global.ignored.asn↓ and global.ignorednets↓ parameters or marked with a BGP Community listed in global.ignored_communities↓, so that no probing / improving / announcing will be performed by IRP (see the sketch below).
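A hypothetical sketch of these parameters (the AS numbers, prefix and community value below are documentation placeholders; check the exact value formats against the parameter reference):
global.ignored.asn = 64512 64513
global.ignorednets = 192.0.2.0/24
global.ignored_communities = 65535:666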
2.5.2 BGPd online reconfiguration
root@server ~ $ service bgpd reload
2.6 Failover configuration
2.6.1 Initial failover configuration
Prerequisites
- One IRP node is configured and fully functional. We will refer to this node as $IRPMASTER.
- A second IRP node is installed with the same version of IRP as $IRPMASTER. We will refer to this node as $IRPSLAVE.
- IRP services, MySQL and HTTP daemons are stopped on $IRPSLAVE node.
- The network operator can SSH to both $IRPMASTER and $IRPSLAVE; subsequent commands are assumed to be run from a $IRPMASTER console.
Configure communication channel from $IRPMASTER to $IRPSLAVE
root@IRPMASTER ~ # ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa -C "failover@noction"
root@IRPMASTER ~ # cat ~/.ssh/id_rsa.pub | while read key; do ssh $IRPSLAVE "echo $key >> ~/.ssh/authorized_keys"; done
root@IRPMASTER ~ # ssh $IRPSLAVE
Install certificate and keys for MySQL Multi-Master replication between $IRPMASTER and $IRPSLAVE
# cd && rm -rvf irp-certs && mkdir -p irp-certs && cd irp-certs
# openssl genrsa 2048 > `hostname -s`-ca-key.pem
# openssl req -new -x509 -nodes -days 3600 -subj "/C=US/ST=CA/L=Palo Alto/O=Noction/OU=Intelligent Routing Platform/CN=`/bin/hostname` CA/emailAddress=support@noction.com" -key `hostname -s`-ca-key.pem -out `hostname -s`-ca-cert.pem
# openssl req -newkey rsa:2048 -days 3600 -subj "/C=US/ST=CA/L=Palo Alto/O=Noction/OU=Intelligent Routing Platform/CN=`/bin/hostname` server/emailAddress=support@noction.com" -nodes -keyout `hostname -s`-server-key.pem -out `hostname -s`-server-req.pem
# openssl rsa -in `hostname -s`-server-key.pem -out `hostname -s`-server-key.pem
# openssl x509 -req -in `hostname -s`-server-req.pem -days 3600 -CA `hostname -s`-ca-cert.pem -CAkey `hostname -s`-ca-key.pem -set_serial 01 -out `hostname -s`-server-cert.pem
# openssl req -newkey rsa:2048 -days 3600 -subj "/C=US/ST=CA/L=Palo Alto/O=Noction/OU=Intelligent Routing Platform/CN=`/bin/hostname` client/emailAddress=support@noction.com" -nodes -keyout `hostname -s`-client-key.pem -out `hostname -s`-client-req.pem
# openssl rsa -in `hostname -s`-client-key.pem -out `hostname -s`-client-key.pem
# openssl x509 -req -in `hostname -s`-client-req.pem -days 3600 -CA `hostname -s`-ca-cert.pem -CAkey `hostname -s`-ca-key.pem -set_serial 01 -out `hostname -s`-client-cert.pem
# openssl verify -CAfile `hostname -s`-ca-cert.pem `hostname -s`-server-cert.pem `hostname -s`-client-cert.pem
server-cert.pem: OK
client-cert.pem: OK
# mkdir -p /etc/pki/tls/certs/mysql/server/ /etc/pki/tls/certs/mysql/client/ /etc/pki/tls/private/mysql/server/ /etc/pki/tls/private/mysql/client/
# cp `hostname -s`-ca-cert.pem `hostname -s`-server-cert.pem /etc/pki/tls/certs/mysql/server/
# cp `hostname -s`-ca-key.pem `hostname -s`-server-key.pem /etc/pki/tls/private/mysql/server/
# cp `hostname -s`-client-cert.pem /etc/pki/tls/certs/mysql/client/
# cp `hostname -s`-client-key.pem /etc/pki/tls/private/mysql/client/
# cd && rm -rvf irp-certs
root@IRPMASTER ~# scp "/etc/pki/tls/certs/mysql/server/$IRPMASTER-ca-cert.pem" "$IRPSLAVE:/etc/pki/tls/certs/mysql/client/"
root@IRPMASTER ~# scp "/etc/pki/tls/certs/mysql/client/$IRPMASTER-client-cert.pem" "$IRPSLAVE:/etc/pki/tls/certs/mysql/client/"
root@IRPMASTER ~# scp "/etc/pki/tls/private/mysql/client/$IRPMASTER-client-key.pem" "$IRPSLAVE:/etc/pki/tls/private/mysql/client/"
root@IRPMASTER ~# scp "$IRPSLAVE:/etc/pki/tls/certs/mysql/server/$IRPSLAVE-ca-cert.pem" "/etc/pki/tls/certs/mysql/client/"
root@IRPMASTER ~# scp "$IRPSLAVE:/etc/pki/tls/certs/mysql/client/$IRPSLAVE-client-cert.pem" "/etc/pki/tls/certs/mysql/client/"
root@IRPMASTER ~# scp "$IRPSLAVE:/etc/pki/tls/private/mysql/client/$IRPSLAVE-client-key.pem" "/etc/pki/tls/private/mysql/client/"
# chown -R mysql:mysql /etc/pki/tls/certs/mysql/ /etc/pki/tls/private/mysql/
# chmod 0600 /etc/pki/tls/private/mysql/server/* /etc/pki/tls/private/mysql/client/*
Configure MySQL replication on $IRPSLAVE
root@IRPSLAVE ~# sed 's|`hostname -s`|$IRPSLAVE|' < /usr/share/doc/irp/irp.my_repl_slave.cnf.template > /etc/noction/mysql/irp.my_repl_slave.cnf
root@IRPSLAVE ~# service mysqld start
root@IRPSLAVE ~# tail -f /var/log/mysqld.log
root@IRPSLAVE ~# mysql irp -e "show master status \G"
root@IRPSLAVE ~# service mysqld stop
Configure MySQL replication on $IRPMASTER
root@IRPMASTER ~# sed 's|`hostname -s`|$IRPSLAVE|' < /usr/share/doc/irp/irp.my_repl_master.cnf.template > /etc/noction/mysql/irp.my_repl_master.cnf
root@IRPMASTER ~# service mysqld restart
root@IRPMASTER ~# tail -f /var/log/mysqld.log
root@IRPMASTER ~# mysql irp -e "show master status \G"
Create replication grants on $IRPMASTER
mysql> CREATE USER 'irprepl'@'<mysql_slave1_ip_address>' IDENTIFIED BY '<replication_user_password>';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'irprepl'@'<mysql_masterslave1_ip_address>' REQUIRE CIPHER 'DHE-RSA-AES256-SHA';
mysql> CREATE USER 'irprepl'@'<mysql_master2_ip_address>' IDENTIFIED BY '<replication_user_password>';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'irprepl'@'<mysql_slave2_ip_address>' REQUIRE CIPHER 'DHE-RSA-AES256-SHA';
Copy IRP database configuration and database to $IRPSLAVE
root@IRPMASTER ~# scp /root/.my.cnf $IRPSLAVE:/root/
root@IRPMASTER ~# scp /etc/noction/db.global.conf /etc/noction/db.frontend.conf $IRPSLAVE:/etc/noction/
root@IRPMASTER ~# rsync -av --progress --delete --delete-after --exclude="master.info" --exclude="relay-log.info" --exclude="*-bin.*" --exclude="*-relay.*" /var/lib/mysql/ $IRPSLAVE:/var/lib/mysql/
root@IRPMASTER ~# systemctl stop httpd24-httpd mariadb  # CentOS
root@IRPMASTER ~# systemctl stop apache2 mysql  # Ubuntu
root@IRPMASTER ~# systemctl start irp-stop-nobgpd.target
systemctl start irp-shutdown-except-bgpd.target
systemctl start irp-shutdown.target
root@IRPMASTER ~# cd /var/lib/mysql && rm -vf ./master.info ./relay-log.info ./*-bin.* ./*-relay.*
root@IRPMASTER ~# rsync -av --progress --delete --delete-after /var/lib/mysql/ $IRPSLAVE:/var/lib/mysql/
root@IRPMASTER ~# service irp stop nobgpd
root@IRPMASTER ~# service httpd24-httpd stop  # CentOS
root@IRPMASTER ~# service mysqld stop         # CentOS
root@IRPMASTER ~# service apache2 stop        # Ubuntu
root@IRPMASTER ~# service mysql stop          # Ubuntu
root@IRPMASTER ~# cd /var/lib/mysql && rm -vf ./master.info ./relay-log.info ./*-bin.* ./*-relay.*
root@IRPMASTER ~# rsync -av --progress --delete --delete-after /var/lib/mysql/ $IRPSLAVE:/var/lib/mysql/
Start replication (Slaves) on both $IRPMASTER and $IRPSLAVE
$IRPMASTER-mysql> CHANGE MASTER TO
    MASTER_HOST='$IRPSLAVE-ip-address',
    MASTER_USER='irprepl',
    MASTER_PASSWORD='<$IRPSLAVE-password>',
    MASTER_PORT=3306,
    MASTER_LOG_FILE='$IRPSLAVE-bin.000001',
    MASTER_LOG_POS=<$IRPSLAVE-bin-log-position>,
    MASTER_CONNECT_RETRY=10,
    MASTER_SSL=1,
    MASTER_SSL_CAPATH='/etc/pki/tls/certs/mysql/client/',
    MASTER_SSL_CA='/etc/pki/tls/certs/mysql/client/$IRPSLAVE-ca-cert.pem',
    MASTER_SSL_CERT='/etc/pki/tls/certs/mysql/client/$IRPSLAVE-client-cert.pem',
    MASTER_SSL_KEY='/etc/pki/tls/private/mysql/client/$IRPSLAVE-client-key.pem',
    MASTER_SSL_CIPHER='DHE-RSA-AES256-SHA';
The values for $IRPSLAVE-bin.000001 and <$IRPSLAVE-bin-log-position> can be obtained by running the following MySQL command on $IRPSLAVE:
mysql> show master status
For the initial configuration the values for $IRPSLAVE-bin.000001 and <$IRPSLAVE-bin-log-position> must be as follows:
Binlog file: $IRPSLAVE-bin.000001
Binlog position: 106
mysql> START SLAVE \G
mysql> show slave status \G
$IRPSLAVE-mysql> CHANGE MASTER TO
    MASTER_HOST='$IRPMASTER-ip-address',
    MASTER_USER='irprepl',
    MASTER_PASSWORD='<$IRPMASTER-password>',
    MASTER_PORT=3306,
    MASTER_LOG_FILE='$IRPMASTER-bin.000001',
    MASTER_LOG_POS=<$IRPMASTER-bin-log-position>,
    MASTER_CONNECT_RETRY=10,
    MASTER_SSL=1,
    MASTER_SSL_CAPATH='/etc/pki/tls/certs/mysql/client/',
    MASTER_SSL_CA='/etc/pki/tls/certs/mysql/client/$IRPMASTER-ca-cert.pem',
    MASTER_SSL_CERT='/etc/pki/tls/certs/mysql/client/$IRPMASTER-client-cert.pem',
    MASTER_SSL_KEY='/etc/pki/tls/private/mysql/client/$IRPMASTER-client-key.pem',
    MASTER_SSL_CIPHER='DHE-RSA-AES256-SHA';
The values for $IRPMASTER-bin.000001 and <$IRPMASTER-bin-log-position> can be obtained by running the following MySQL command on $IRPMASTER:
mysql> show master status
For the initial configuration the values for $IRPMASTER-bin.000001 and <$IRPMASTER-bin-log-position> must be as follows:
Binlog file: $IRPMASTER-bin.000001
Binlog position: 106
mysql> START SLAVE \G
mysql> show slave status \G
# systemctl start irp.target
# service httpd24-httpd start
# service irp start
Configure Failover using Wizard on $IRPMASTER
Synchronize RRD statistics to $IRPSLAVE
root@IRPMASTER ~ # rsync -av /var/spool/irp/ $IRPSLAVE:/var/spool/irp
2.6.2 Re-purpose operational IRP node into an IRP failover slave
- upgrade IRP to the version matching IRP failover master node
- create a backup copy of your configuration
- delete the following configuration files: /etc/noction/irp.conf, /etc/noction/exchanges.conf, /etc/noction/policies.conf (see the sketch after this list)
- proceed with configuration as detailed in Initial failover configuration↑
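A minimal shell sketch of the backup and clean-up steps above (the backup destination path is an arbitrary example):
cp -a /etc/noction /root/noction-config-backup-$(date +%F)
rm -f /etc/noction/irp.conf /etc/noction/exchanges.conf /etc/noction/policies.conf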
2.6.3 Re-purpose operational IRP failover slave into a new master
2.6.4 Recover prolonged node downtime or corrupted replication
- new server: follow configuration steps as detailed in Initial failover configuration↑.
- same server: follow recovery steps below.
MySQL Multi-Master recovery prerequisites
- The currently active IRP node is designated as the MySQL sync ’origin’. This node currently stores the reference configuration parameters and data, which will be synced to the node being recovered, designated as the ’destination’.
- Recovery should be scheduled during non-peak hours.
- Recovery must finish before bgpd.db.timeout.withdraw↓ (default 4h) expires. If recovery cannot be completed in time, MySQL must be started on the active node.
MySQL Multi-Master recovery procedure
- destination: stop httpd, irp, mysqld
- origin: sync /etc/noction/db.* to slave:/etc/noction/
- origin: sync /root/.my.cnf to slave:/root/.my.cnf
- origin: sync /var/lib/mysql/ to slave:/var/lib/mysql/, excluding the files master.info, relay-log.info, *-bin.*, *-relay.* (a consolidated command sketch is shown after this procedure)
Wait until the sync above succeeds, then continue with:
- origin: stop httpd, irp (except bgpd), mysqld
- origin: delete the files master.info, relay-log.info, *-bin.*, *-relay.*
- origin: sync /var/lib/mysql/ to slave:/var/lib/mysql/
- destination: start mysqld and check /var/log/mysqld.log for errors
- origin: start mysqld and check /var/log/mysqld.log for errors
- origin: run CHANGE MASTER TO from the /usr/share/doc/irp/changemasterto template
- destination: run CHANGE MASTER TO from the /usr/share/doc/irp/changemasterto template
- destination: show slave status \G
- origin: show slave status \G
- origin: start IRP (bgpd should be already running), httpd
- destination: start IRP, httpd
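A consolidated sketch of the origin-side sync steps above, assuming the destination node is reachable as $IRPSLAVE; it simply reuses the rsync commands shown in the initial failover configuration:
root@origin ~# rsync -av /etc/noction/db.global.conf /etc/noction/db.frontend.conf $IRPSLAVE:/etc/noction/
root@origin ~# rsync -av /root/.my.cnf $IRPSLAVE:/root/.my.cnf
root@origin ~# rsync -av --progress --delete --delete-after --exclude="master.info" --exclude="relay-log.info" --exclude="*-bin.*" --exclude="*-relay.*" /var/lib/mysql/ $IRPSLAVE:/var/lib/mysql/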
2.7 Frontend Configuration
2.8 Administrative Components
peer.X.snmp.ip↓/peer.X.snmp.ipv6↓, peer.X.snmp.interface↓ and peer.X.snmp.community↓ need to be set for each provider.
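A hypothetical example for provider 1 polled over IPv4 (the values are placeholders taken from the earlier provider example; exact parameter naming may differ between IRP versions, so compare with Providers configuration↑):
peer.1.snmp.ip = 10.0.0.1
peer.1.snmp.interface = GigabitEthernet2/1
peer.1.snmp.community = public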
2.9 BMP configuration
- the BMP monitoring station’s IP address and port are set on the router(s), and
- filtering rules regarding what BMP data is sent to IRP are applied, if needed.
- BMP monitoring station: BMP monitoring station settings↓.
- primary source of data for current route re-construction: bgpd.as_path↓.
- re-probing of an improvement’s old and new provider on AS Path changes: bgpd.retry_probing.new.bmp_path_change↓, bgpd.retry_probing.old.bmp_path_change↓ (see the sketch below).
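A hedged sketch of the re-probing parameters named above (treating "1" as the enabled value is an assumption; consult the parameter reference for the exact semantics):
bgpd.retry_probing.new.bmp_path_change = 1
bgpd.retry_probing.old.bmp_path_change = 1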
2.10 IRP Initial Configuration
2.11 IRP software management
Software repository
Software installation and upgrade
yum install irplite
apt-get update
apt-get install irplite
yum upgrade "irplite*"
apt-get update
apt-get upgrade
yum downgrade "irplite*1.0*"
apt-cache policy irplite
Package: irplite irplite-*
Pin: version 2.0.0-RELEASE~build11806~trusty
Pin-Priority: 1001
apt-get update
apt-get upgrade irplite
2.12 Starting, stopping and status of IRP components
Managing software components in OS with systemd
systemctl start explorer
systemctl stop explorer
systemctl start irp.target
systemctl start irp-shutdown.target
systemctl start irp-shutdown-except-bgpd.target
systemctl start irp-shutdown.target
systemctl start irp.target
systemctl list-dependencies irp.target
Managing software components in OS with RC-files
service explorer start
service explorer stop
service irp start
service irp stop
service irp stop nobgpd
service irp restart
service irp status