Create a Static Route in CentOS 7 for Unmetered Bandwidth to the Storage Platform
- Updated on 26 Aug 2022
Overview
Our HPE dedicated servers by default have a dedicated port for provisioning and storage.
To utilize this port for storage traffic, you'll need to create static routes inside the operating system of the server.
This tells the operating system to send traffic to/from the storage platform via the storage port, instead of via your default gateway (primary/monitored interface).
Requirements
- An HPE Dedicated Server (Blaze or Enterprise Server).
- You must already know your storage IP address, netmask, and gateway. To find those, please follow the guide here: Find Your Storage IP Address
- Root console access to your server, via SSH or IPMI console.
Steps
Configure your storage IP address
Use the "ip a" command to list all network interfaces.
The storage/provisioning port is usually on the same physical network adapter as the primary IP address.
In the example below, you can see that the primary IP address is assigned to the network adapter "eth0".
You can tell that "eth1" is on the same physical NIC as eth0 because its MAC address differs only in the last octet, which is incremented by one.
e.g. the MAC of eth0 is 5c:6f:69:06:85:4c and the MAC of eth1 is 5c:6f:69:06:85:4d.
You can also see that eth1 is plugged in, as its state shows as "UP".
Find your secondary network adapter name
[root@SAU-ECC6A-OR ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 5c:6f:69:06:85:4c brd ff:ff:ff:ff:ff:ff
inet 221.121.144.163/31 brd 255.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5e6f:69ff:fe06:854c/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 5c:6f:69:06:85:4d brd ff:ff:ff:ff:ff:ff
inet6 fe80::5e6f:69ff:fe06:854d/64 scope link
valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 5c:6f:69:d6:12:30 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 5c:6f:69:d6:12:31 brd ff:ff:ff:ff:ff:ff
Now that we've identified "eth1" as the secondary network adapter, we can configure the storage IP address on it.
You'll want to note down the MAC address of the interface. In this case, it's "5c:6f:69:06:85:4d".
Please save this to a notepad along with the interface name (eth1) as we'll use this to configure the interface.
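As a quick sanity check that you've picked the right secondary port, you can compute the expected sibling MAC in pure bash by incrementing the last octet of eth0's MAC. This is just a sketch for illustration: it assumes the two ports are numbered consecutively, which is typical but not guaranteed on all hardware.

```shell
# Compute the expected MAC of the paired storage port by
# incrementing the last octet of the primary port's MAC.
# Assumes consecutive numbering, which is common but not guaranteed.
primary_mac="5c:6f:69:06:85:4c"           # MAC of eth0 from `ip a`
prefix=${primary_mac%:*}                  # "5c:6f:69:06:85"
last=${primary_mac##*:}                   # "4c"
next=$(printf '%02x' $(( 0x$last + 1 ))) # "4d"
echo "expected storage port MAC: ${prefix}:${next}"
```

If the printed MAC matches one of the other interfaces in your `ip a` output, that interface is your storage port.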
Configure the storage IP address as a static IP on the storage network adapter
CentOS 7 uses 'ifcfg-*' files to configure network interfaces. So to configure the eth1 network adapter, we'll need to edit the config file for it.
nano /etc/sysconfig/network-scripts/ifcfg-eth1
Below is our complete file:
NAME="eth1"
HWADDR="5C:6F:69:06:85:4D"
ONBOOT="yes"
BOOTPROTO="static"
TYPE="Ethernet"
DEFROUTE="no"
IPADDR="100.64.24.78"
NETMASK="255.255.255.252"
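With a 255.255.255.252 (/30) netmask, the storage IP sits in a four-address block: network, gateway, host, and broadcast. A quick way to double-check which block your IP belongs to is to AND each octet of the IP with the netmask. The addresses below are the example values from above; substitute your own.

```shell
# Derive the network address by ANDing each octet of the IP
# with the corresponding octet of the netmask.
ip_addr="100.64.24.78"     # example storage IP from above
netmask="255.255.255.252"  # /30
IFS=. read -r i1 i2 i3 i4 <<< "$ip_addr"
IFS=. read -r m1 m2 m3 m4 <<< "$netmask"
network="$(( i1 & m1 )).$(( i2 & m2 )).$(( i3 & m3 )).$(( i4 & m4 ))"
echo "network: ${network}/30"
```

In this example /30, 100.64.24.76 is the network address, .77 is the gateway, .78 is our host, and .79 is the broadcast address.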
Now that we have configured a static IP address on the eth1 network adapter, we need to configure a static route to the storage cluster via our storage gateway. Create a route file in the network-scripts directory. The name of the file should be "route-<network-adapter-name>".
Example below:
nano /etc/sysconfig/network-scripts/route-eth1
The routes we need to add are:
- A route to s3.si.servercontrol.com.au (27.50.66.224/28) for S3 object storage traffic, via your storage gateway (in this example, ours is 100.64.24.77).
- A route to storage*.si.servercontrol.com.au (100.64.15.0/24) for RBD/Ceph storage traffic, via your storage gateway (in this example, ours is 100.64.24.77).
To achieve this, we add the following lines to our route-eth1 file:
27.50.66.224/28 via 100.64.24.77
100.64.15.0/24 via 100.64.24.77
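The route-eth1 file is only read when the interface comes up. If you'd like to add the same routes to the live routing table immediately (they'll be lost on reboot unless they're also in route-eth1), the equivalent one-off `ip route` commands look like the following. They're echoed here so you can review them first; run them without the leading `echo`, as root, to actually apply them. 100.64.24.77 is the example storage gateway from above.

```shell
# Build the one-off `ip route add` commands matching route-eth1.
# Remove the `echo` to actually add the routes (requires root).
gateway="100.64.24.77"
for net in 27.50.66.224/28 100.64.15.0/24; do
    echo ip route add "$net" via "$gateway" dev eth1
done
```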
Now we need to bring the interface down and back up to apply the changes.
[root@x ~]# ifdown eth1
[root@x ~]# ifup eth1
[root@x ~]# ip route
default via 221.121.144.162 dev eth0
27.50.66.224/28 via 100.64.24.77 dev eth1
100.64.15.0/24 via 100.64.24.77 dev eth1
100.64.24.76/30 dev eth1 proto kernel scope link src 100.64.24.78
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev eth1 scope link metric 1003
221.121.144.162/31 dev eth0 proto kernel scope link src 221.121.144.163
You can see the routes were applied correctly. The two lines below the 'default' route line show our new routes via eth1.
Verify that the static routes applied correctly.
Check the routing table with the following command:
ip route
You should see the routes that we added to the configuration file in the output:
27.50.66.224/28 via 100.64.24.77 dev eth1
100.64.15.0/24 via 100.64.24.77 dev eth1
Next, you should do a few tests to make sure the static routes are working as expected.
1. Ping the storage endpoints.
First test the s3 endpoint:
ping s3.si.servercontrol.com.au
PING s3.si.servercontrol.com.au (27.50.66.227) 56(84) bytes of data.
64 bytes from 27-50-66-227.as45671.net (27.50.66.227): icmp_seq=1 ttl=61 time=0.353 ms
64 bytes from 27-50-66-227.as45671.net (27.50.66.227): icmp_seq=2 ttl=61 time=0.302 ms
64 bytes from 27-50-66-227.as45671.net (27.50.66.227): icmp_seq=3 ttl=61 time=0.308 ms
^C
--- s3.si.servercontrol.com.au ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2028ms
rtt min/avg/max/mdev = 0.302/0.321/0.353/0.022 ms
Next test one of the RBD/Ceph endpoints:
ping storage1.si.servercontrol.com.au
PING storage1.si.servercontrol.com.au (100.64.15.11) 56(84) bytes of data.
64 bytes from 100.64.15.11 (100.64.15.11): icmp_seq=1 ttl=61 time=0.087 ms
64 bytes from 100.64.15.11 (100.64.15.11): icmp_seq=2 ttl=61 time=0.100 ms
64 bytes from 100.64.15.11 (100.64.15.11): icmp_seq=3 ttl=61 time=0.091 ms
^C
--- storage1.si.servercontrol.com.au ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4089ms
rtt min/avg/max/mdev = 0.076/0.087/0.100/0.010 ms
Now we need to make sure that the traffic is taking the correct path. To do this, use tracepath and verify that the first hop in the path is your storage gateway, and NOT the gateway of your public IP address.
You can press CTRL + C once you see the first hop, to quit the command.
tracepath -n s3.si.servercontrol.com.au
1?: [LOCALHOST] pmtu 1500
1: 100.64.24.77 0.636ms
1: 100.64.24.77 0.439ms
2: no reply
^C
You can see that the first hop is correct: 100.64.24.77, which is our storage gateway.
Check for the Ceph/RBD subnet as well:
tracepath -n storage1.si.servercontrol.com.au
1?: [LOCALHOST] pmtu 1500
1: 100.64.24.77 0.953ms
1: 100.64.24.77 0.640ms
2: no reply
The above tracepaths show that our static routes are working correctly.
If the static routes were not configured properly, you'd see the trace using your public IP address gateway instead, something like this:
tracepath -n storage1.si.servercontrol.com.au
1?: [LOCALHOST] pmtu 1500
1: 221.121.144.162 0.658ms
1: 221.121.144.162 0.519ms
2: 100.64.105.74 1.567ms
If you see a public IP address as the first hop, then something has gone wrong and your static routes are not working correctly. If that's the case and you can't figure it out, please feel free to submit a support case and we'll be happy to assist you further.