In addition to changing to a cluster configuration, I also wanted to practice setting up features using both IPv6 and IPv4, so I have implemented both protocols. However, I did not implement IPv6 peering in my provider core; I simply send the IPv6 NLRI from each site over the IPv4 peering.
The Junos Security book has an excellent tutorial on setting up two branch SRX routers into a cluster. In this article, I've detailed the process I went through in setting up the cluster.
Physical Setup of the Cluster
The first step is to cable the routers. I used fe-0/0/4 and fe-0/0/5 on each node to create the fabric connectivity. I then cabled fe-0/0/7 on each node as the control link. These connections are used to pass control, data, and synchronization traffic between the SRX nodes.
Management of the SRXs becomes a little more interesting in cluster mode. When you enable cluster mode on the SRX210, fe-0/0/6 becomes the fxp0 management interface. So we have eaten up four revenue ports on each node.
For WAN connectivity, I cabled up ge-0/0/0 on each node. For the R1 LAN, I set aside fe-0/0/3 on both nodes to be part of Reth0. A Reth interface is a redundant ethernet interface that is serviced by whichever node is the primary. The secondary node can take over, if needed.
I could have implemented the WAN interfaces as a Reth, but I wanted to keep them local and separate for each node. Why did I not implement a Reth? No particular reason. Since I didn't use a Reth group for the WAN interface, my cluster is considered a mixed-mode cluster. The Junos Security book details four different cluster setups: active/active, active/passive, mixed-mode, and six pack.
Setting up the Cluster
There are two changes that require the nodes to reboot: enabling flow-based processing for IPv6 and enabling chassis cluster mode. So let's get those out of the way first. On each router, from configuration mode:
[edit]
set security forwarding-options family inet6 mode flow-based
commit and-quit
Now, from operational mode, enter the following on the router you want to be node 0. This will enable the Juniper Services Redundancy Protocol.
set chassis cluster cluster-id 1 node 0 reboot
Enter the following on the router you want to be node 1.
set chassis cluster cluster-id 1 node 1 reboot
Now the routers will reboot. You can check the status of the cluster with the command:
show chassis cluster status
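Once both nodes have rebooted and joined the cluster, the output will look something along these lines (the values here are illustrative; the priorities will show defaults until the redundancy-group configuration later in this article is committed):
Cluster ID: 1
Node                  Priority          Status    Preempt  Manual failover
Redundancy group: 0 , Failover count: 0
    node0                   1           primary        no       no
    node1                   1           secondary      no       no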
You should also get a new prompt that tells you which node you are on and whether that node is currently primary or secondary. You will see something similar to:
{primary:node0}
username@R1-A>
Configuration
You only need to maintain one configuration for the cluster. If needed, node-specific configuration can be created under the groups hierarchy and applied to each node separately. Note that the second node's interfaces are referenced using FPC number 2 (for example, fe-2/0/3 is port 0/3 on node 1).
## Last changed: 2011-06-04 03:21:50 UTC
version 10.4R3.4;
groups {
node1 {
system {
host-name R1-B;
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 10.0.1.213/24;
}
}
}
}
}
node0 {
system {
host-name R1-A;
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 10.0.1.212/24;
}
}
}
}
}
}
apply-groups "${node}";
system {
root-authentication {
encrypted-password "REMOVED"; ## SECRET-DATA
}
name-server {
208.67.222.222;
208.67.220.220;
}
login {
user myuser {
uid 2000;
class super-user;
authentication {
encrypted-password "REMOVED"; ## SECRET-DATA
}
}
}
static-host-mapping {
services.netscreen.com inet 207.17.137.227;
www.juniper.net inet 207.17.137.239;
}
services {
ssh;
xnm-clear-text;
web-management {
http {
interface fxp0.0;
}
https {
system-generated-certificate;
interface fxp0.0;
}
}
dhcp {
router {
192.168.1.1;
}
pool 192.168.1.0/24 {
address-range low 192.168.1.200 high 192.168.1.250;
name-server {
8.8.8.8;
}
}
}
}
syslog {
archive size 100k files 3;
user * {
any emergency;
}
file messages {
any critical;
authorization info;
}
file interactive-commands {
interactive-commands error;
}
}
max-configurations-on-flash 5;
max-configuration-rollbacks 5;
license {
autoupdate {
url https://ae1.juniper.net/junos/key_retrieval;
}
}
}
chassis {
cluster {
control-link-recovery;
reth-count 2;
heartbeat-interval 2000;
heartbeat-threshold 8;
redundancy-group 0 {
node 0 priority 254;
node 1 priority 1;
}
redundancy-group 1 {
node 0 priority 254;
node 1 priority 1;
preempt;
hold-down-interval 5;
interface-monitor {
fe-0/0/3 weight 255;
fe-2/0/3 weight 255;
}
}
}
}
interfaces {
ge-0/0/0 {
unit 0 {
family inet {
address 192.0.2.194/30;
}
family inet6 {
address 2001:0db8:aaaa:d::2/64 {
ndp 2001:0db8:aaaa:d::1 mac 56:47:de:ad:01:db;
}
address ::ffff:192.0.2.194/126 {
ndp ::ffff:192.0.2.193 mac 56:47:de:ad:01:db;
}
}
}
}
fe-0/0/3 {
fastether-options {
redundant-parent reth0;
}
}
ge-2/0/0 {
unit 0 {
family inet {
address 192.0.2.198/30;
}
family inet6 {
address 2001:0db8:aaaa:e::2/64 {
ndp 2001:0db8:aaaa:e::1 mac 56:47:de:ad:01:fe;
}
address ::ffff:192.0.2.198/126 {
ndp ::ffff:192.0.2.197 mac 56:47:de:ad:01:fe;
}
}
}
}
fe-2/0/3 {
fastether-options {
redundant-parent reth0;
}
}
fab0 {
fabric-options {
member-interfaces {
fe-0/0/4;
fe-0/0/5;
}
}
}
fab1 {
fabric-options {
member-interfaces {
fe-2/0/4;
fe-2/0/5;
}
}
}
fxp0 {
description "Management Interface - fe-0/0/6";
unit 0 {
family inet {
address 10.0.1.211/24 {
master-only;
}
}
}
}
reth0 {
description "R1 LAN";
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 192.168.1.1/24;
}
family inet6 {
address 2001:0db8:beef:1::1/64;
}
}
}
}
routing-options {
graceful-restart;
static {
route 198.51.100.0/24 discard;
}
autonomous-system 2 loops 2;
}
protocols {
bgp {
group bgp-external {
type external;
family inet {
any;
}
family inet6 {
any;
}
peer-as 1;
neighbor 192.0.2.193 {
import [ PEER_192.0.2.193_IMPORT PEER_2001:0DB8:AAAA:1::1_IMPORT ];
export [ PEER_192.0.2.193_EXPORT PEER_2001:0DB8:AAAA:1::1_EXPORT ];
}
neighbor 192.0.2.197 {
import [ PEER_192.0.2.197_IMPORT PEER_2001:0DB8:AAAA:2::1_IMPORT ];
export [ PEER_192.0.2.197_EXPORT PEER_2001:0DB8:AAAA:2::1_EXPORT ];
}
}
}
}
policy-options {
policy-statement PEER_192.0.2.193_EXPORT {
term TERM1 {
from {
family inet;
route-filter 198.51.100.0/24 exact;
}
then {
metric subtract 1000;
accept;
}
}
}
policy-statement PEER_192.0.2.193_IMPORT {
term TERM1 {
from family inet;
then {
local-preference 120;
accept;
}
}
}
policy-statement PEER_192.0.2.197_EXPORT {
term TERM1 {
from {
family inet;
route-filter 198.51.100.0/24 exact;
}
then {
metric add 1000;
accept;
}
}
}
policy-statement PEER_192.0.2.197_IMPORT {
term TERM1 {
from family inet;
then {
local-preference 80;
accept;
}
}
}
policy-statement PEER_2001:0DB8:AAAA:1::1_EXPORT {
term TERM1 {
from {
family inet6;
route-filter 2001:0DB8:BEEF:1::/64 exact;
}
then {
metric subtract 1000;
accept;
}
}
}
policy-statement PEER_2001:0DB8:AAAA:1::1_IMPORT {
term TERM1 {
from family inet6;
then {
local-preference 80;
accept;
}
}
}
policy-statement PEER_2001:0DB8:AAAA:2::1_EXPORT {
term TERM1 {
from {
family inet6;
route-filter 2001:0DB8:BEEF:1::/64 exact;
}
then {
metric add 1000;
accept;
}
}
}
policy-statement PEER_2001:0DB8:AAAA:2::1_IMPORT {
term TERM1 {
from family inet6;
then {
local-preference 120;
accept;
}
}
}
}
security {
nat {
source {
pool PUBLICPAT {
address {
198.51.100.1/32;
}
}
rule-set untrust-trust {
from zone untrust;
to zone trust;
rule pat {
match {
source-address 192.168.1.0/24;
}
then {
source-nat {
pool {
PUBLICPAT;
}
}
}
}
}
}
}
zones {
functional-zone management {
host-inbound-traffic {
system-services {
ssh;
ping;
http;
https;
}
}
}
security-zone untrust {
host-inbound-traffic {
system-services {
ping;
ssh;
}
protocols {
bgp;
}
}
interfaces {
ge-0/0/0.0;
ge-2/0/0.0;
}
}
security-zone trust {
host-inbound-traffic {
system-services {
all;
}
protocols {
all;
}
}
interfaces {
reth0.0;
}
}
}
policies {
from-zone trust to-zone trust {
policy INTRAZONE-ALLOWED {
match {
source-address any;
destination-address any;
application any;
}
then {
permit;
}
}
}
from-zone trust to-zone untrust {
policy trust-untrust-allowed-all {
match {
source-address any;
destination-address any;
application any;
}
then {
permit;
log {
session-close;
}
}
}
}
}
forwarding-options {
family {
inet6 {
mode flow-based;
}
}
}
}
Configuration Explanation
The groups configuration allows node-specific configuration for the individual cluster members. I've created a unique fxp0 IP address for each node, which is recommended for easier management of the individual cluster nodes. I also have a global fxp0 configuration with a master-only IP address. This is useful because I can always SSH to that management address and land on whichever node is primary at that moment. The fab0 and fab1 interfaces were configured to support the intra-cluster fabric.
groups {
node1 {
system {
host-name R1-B;
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 10.0.1.213/24;
}
}
}
}
}
node0 {
system {
host-name R1-A;
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 10.0.1.212/24;
}
}
}
}
}
}
apply-groups "${node}";
interfaces {
fxp0 {
description "Management Interface - fe-[02]/0/6";
unit 0 {
family inet {
address 10.0.1.211/24 {
master-only;
}
}
}
}
fab0 {
fabric-options {
member-interfaces {
fe-0/0/4;
fe-0/0/5;
}
}
}
fab1 {
fabric-options {
member-interfaces {
fe-2/0/4;
fe-2/0/5;
}
}
}
}
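To confirm that the node-specific group is actually being applied, the standard Junos inheritance display is handy (an ordinary pipe option, nothing cluster-specific):
show configuration system host-name | display inheritance
On node 0 this should show host-name R1-A annotated as inherited from group node0.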
The chassis cluster hierarchy is where you set up the heartbeat used for intra-cluster health checks and define the redundancy groups. Redundancy group 0 is for the control plane; other redundancy groups are used for data-plane reth interfaces. The SRX210 supports 8 groups. Each node is given a priority per redundancy group, and the node with the higher priority is the primary for that group.
I added fe-[02]/0/3 as the member links of reth0, configured reth0 for IPv4 and IPv6, and then enabled interface monitoring for redundancy group 1. If one of the monitored interfaces goes down, its weight of 255 is counted against that node's threshold and a failover to the other node is triggered.
The SRX data-center models (i.e., the 3000 and 5000 series) support IP monitoring. The branch office models do not support IP monitoring, at the moment.
chassis {
cluster {
control-link-recovery;
reth-count 2;
heartbeat-interval 2000;
heartbeat-threshold 8;
redundancy-group 0 {
node 0 priority 254;
node 1 priority 1;
}
redundancy-group 1 {
node 0 priority 254;
node 1 priority 1;
preempt;
hold-down-interval 5;
interface-monitor {
fe-0/0/3 weight 255;
fe-2/0/3 weight 255;
}
}
}
}
interfaces {
fe-0/0/3 {
fastether-options {
redundant-parent reth0;
}
}
fe-2/0/3 {
fastether-options {
redundant-parent reth0;
}
}
reth0 {
description "R1 LAN";
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 192.168.1.1/24;
}
family inet6 {
address 2001:0db8:beef:1::1/64;
}
}
}
}
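Once this is committed, it's worth a quick sanity check that the control link, fabric links, and monitored interfaces are healthy on both nodes. These are standard chassis cluster operational commands (output omitted here):
Show control link, fabric link, and monitored interface status
show chassis cluster interfaces
Show reth0 and its status
show interfaces terse reth0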
The rest of the configuration is pretty basic: I set up source NAT for the R1 LAN's private IPv4 network, plus security zones and policies for the outside and inside zones. Routing is eBGP to the MPLS provider core PE routers. I had started to configure IPv6 on the edge connections, but due to time constraints I simply piggyback the IPv6 NLRI over the IPv4 peering. Because of that, I had to add IPv4-mapped IPv6 addresses on the PE routers and the CE cluster connections (e.g., ::ffff:192.0.2.194). Of course, the static NDP entries aren't strictly necessary, either.
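A few quick checks I find useful to confirm the NAT and BGP pieces are behaving (standard show commands; output omitted here):
Show active flow sessions, including translated source addresses
show security flow session
Show the configured source NAT pool
show security nat source pool all
Show the state of the eBGP sessions to the PE routers
show bgp summary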
Interesting Commands
Here are some helpful commands when working with this cluster configuration:
Show the status of the cluster and the redundancy groups
show chassis cluster status
Manually fail over primary status for redundancy group 1 to node1
request chassis cluster failover redundancy-group 1 node 1
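Note that a manual failover sets a manual failover flag on the redundancy group, and as far as I recall that flag must be reset before another manual failover is possible:
request chassis cluster failover reset redundancy-group 1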
This lets you log in to node1 from node0. There is also a generic other-routing-engine option you can use instead of specifying the node number.
request routing-engine login node 1
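The generic form, which drops you onto the other node regardless of which node you are currently on, is:
request routing-engine login other-routing-engine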
What now?
I hope this information has been helpful. So what's next? In the next series of articles, I will be configuring a site-to-site IPsec VPN between the cluster and R2.