
How to Build a Cassandra Multinode Database Cluster on Oracle Solaris 11.3 with LUN Mirroring and IP Multipathing

By: Antonis Tsavdaris


Apache Cassandra is a popular distributed database management system from the Apache Software Foundation. It is highly scalable and follows a masterless design: there is no primary node to which other nodes are subservient. Every node in the cluster is equal, and any node can service any request.

 

Oracle Solaris 11 is an enterprise-class operating system known for its reliability, availability, and serviceability (RAS) features. Its wealth of integrated features helps administrators build redundancy into every part of the system they deem critical, including the network, storage, and so on.

 

This how-to article describes how to build a Cassandra single-rack database cluster on Oracle Solaris 11.3 and extend its overall availability with LUN mirroring and IP network multipathing (IPMP). LUN mirroring will provide extended availability at the storage level and IPMP will add redundancy to the network.

 

In this scenario, the one-rack cluster is composed of six Oracle Solaris server instances. Three of them—dbnode1, dbnode2, and dbnode3—will be the database nodes and the other three—stgnode1, stgnode2, and stgnode3—will provide highly available storage. The highly available storage will be constructed from nine LUNs, three in each storage node.

 

At the end of the construction, the one-rack cluster will have a fully operational database even if two of the storage nodes are unavailable. Furthermore, both the public network and the iSCSI network will tolerate the failure of a single NIC, thanks to IPMP groups consisting of an active and a standby network card.

 
Cluster Topology

 

All servers have the Oracle Solaris 11.3 operating system installed. Table 1 depicts the cluster architecture.

 

In practice, the Cassandra binaries as well as the data will reside on the storage nodes; the database nodes will host the running instances.

 

Table 1. Oracle Solaris servers and their role in the cluster.

 
Node Name   Role in the Cluster   Contains
dbnode1     Database node         Running instance
dbnode2     Database node         Running instance
dbnode3     Database node         Running instance
stgnode1    Storage node          Binaries and data
stgnode2    Storage node          Binaries and data
stgnode3    Storage node          Binaries and data

 
Network Interface Cards

 

As shown in Table 2, every server in the cluster has four network interface cards (NICs) installed, named net0 through net3. Redundancy is required at the network level, and it will be provided by IPMP groups. IP multipathing requires that the DefaultFixed network profile be activated and that static IP addresses be assigned to every network interface.

 

Table 2. NICs and IPMP group configuration.

                                                                                                               
Node Name   NIC    Primary/Standby NIC   IP/Subnet          IPMP Group Name   IPMP IP Address   Role
dbnode1     net0   primary               192.168.2.10/24    IPMP0             192.168.2.22/24   Public network
dbnode1     net1   standby               192.168.2.11/24    IPMP0             192.168.2.22/24   Public network
dbnode1     net2   primary               10.0.1.1/27        IPMP1             10.0.1.13/27      iSCSI initiator
dbnode1     net3   standby               10.0.1.2/27        IPMP1             10.0.1.13/27      iSCSI initiator
dbnode2     net0   primary               192.168.2.12/24    IPMP2             192.168.2.23/24   Public network
dbnode2     net1   standby               192.168.2.13/24    IPMP2             192.168.2.23/24   Public network
dbnode2     net2   primary               10.0.1.3/27        IPMP3             10.0.1.14/27      iSCSI initiator
dbnode2     net3   standby               10.0.1.4/27        IPMP3             10.0.1.14/27      iSCSI initiator
dbnode3     net0   primary               192.168.2.14/24    IPMP4             192.168.2.24/24   Public network
dbnode3     net1   standby               192.168.2.15/24    IPMP4             192.168.2.24/24   Public network
dbnode3     net2   primary               10.0.1.5/27        IPMP5             10.0.1.15/27      iSCSI initiator
dbnode3     net3   standby               10.0.1.6/27        IPMP5             10.0.1.15/27      iSCSI initiator
stgnode1    net0   primary               192.168.2.16/24    IPMP6             192.168.2.25/24   Public network
stgnode1    net1   standby               192.168.2.17/24    IPMP6             192.168.2.25/24   Public network
stgnode1    net2   primary               10.0.1.7/27        IPMP7             10.0.1.16/27      iSCSI target
stgnode1    net3   standby               10.0.1.8/27        IPMP7             10.0.1.16/27      iSCSI target
stgnode2    net0   primary               192.168.2.18/24    IPMP8             192.168.2.26/24   Public network
stgnode2    net1   standby               192.168.2.19/24    IPMP8             192.168.2.26/24   Public network
stgnode2    net2   primary               10.0.1.9/27        IPMP9             10.0.1.17/27      iSCSI target
stgnode2    net3   standby               10.0.1.10/27       IPMP9             10.0.1.17/27      iSCSI target
stgnode3    net0   primary               192.168.2.20/24    IPMP10            192.168.2.27/24   Public network
stgnode3    net1   standby               192.168.2.21/24    IPMP10            192.168.2.27/24   Public network
stgnode3    net2   primary               10.0.1.11/27       IPMP11            10.0.1.18/27      iSCSI target
stgnode3    net3   standby               10.0.1.12/27       IPMP11            10.0.1.18/27      iSCSI target

 

First, ensure that the network service is up and running. Then check whether the network profile is set to DefaultFixed.

 

root@dbnode1:~# svcs network/physical
STATE          STIME    FMRI
online         1:25:45  svc:/network/physical:upgrade
online         1:25:51  svc:/network/physical:default

root@dbnode1:~# netadm list
TYPE        PROFILE        STATE
ncp         Automatic      disabled
ncp         DefaultFixed   online
loc         DefaultFixed   online
loc         Automatic      offline
loc         NoNet          offline
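
If the Automatic profile were listed as the active NCP instead, the fixed profile can be enabled first. Note that switching profiles drops any automatically configured addresses, so do this from the console rather than over the network:

root@dbnode1:~# netadm enable -p ncp DefaultFixed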

 

 

Because the network profile is set to DefaultFixed, review the network interfaces and the data link layer.

 

root@dbnode1:~# dladm show-phys
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net0              Ethernet             unknown    1000   full      e1000g0
net1              Ethernet             unknown    1000   full      e1000g1
net3              Ethernet             unknown    1000   full      e1000g3
net2              Ethernet             unknown    1000   full      e1000g2

 

Create the IP interface for net0 and then configure a static IPv4 address.

 

root@dbnode1:~# ipadm create-ip net0
root@dbnode1:~# ipadm create-addr -T static -a 192.168.2.10/24 net0/v4
root@dbnode1:~# ipadm show-addr
ADDROBJ        TYPE     STATE      ADDR
lo0/v4         static   ok         127.0.0.1/8
net0/v4        static   ok         192.168.2.10/24
lo0/v6         static   ok         ::1/128

 

Following this, create the IP interfaces and assign the relevant IP addresses and subnets for NICs net0 through net3 on each of the servers, according to Table 2.
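
For example, on dbnode1 the three remaining interfaces would be configured as follows (a sketch that simply applies the addresses from Table 2; adjust the interface names and addresses for each node):

root@dbnode1:~# ipadm create-ip net1
root@dbnode1:~# ipadm create-addr -T static -a 192.168.2.11/24 net1/v4
root@dbnode1:~# ipadm create-ip net2
root@dbnode1:~# ipadm create-addr -T static -a 10.0.1.1/27 net2/v4
root@dbnode1:~# ipadm create-ip net3
root@dbnode1:~# ipadm create-addr -T static -a 10.0.1.2/27 net3/v4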

 

Note: There is an exceptional article by Andrew Walton on how to configure an Oracle Solaris network along with making it internet-facing: "How to Get Started Configuring Your Network in Oracle Solaris 11."

 
IPMP Groups

 

After the NICs have been configured and the IP addresses have been assigned, the IPMP groups can be configured as well. IPMP groups aggregate separate physical network interfaces and thereby provide physical interface failure detection, network access failover, and network load spreading. Here, each IPMP group will consist of two NICs in an active/standby configuration. So, when an interface that belongs to an IPMP group is brought down for maintenance, or when a NIC fails due to a hardware fault, a failover takes place: the remaining NIC and its related IP interface step in to ensure that the node is not isolated from the cluster.

 

According to the planned scenario, two IPMP groups are going to be created in each server, one for every two NICs configured earlier. Each IPMP group will have its own IP interface, and one of the underlying NICs will be active, while the other will remain a standby. Table 2 summarizes the IPMP group configurations that must be completed on each node.

 

First, create the IPMP group IPMP0. Then, bind interfaces net0 and net1 to this group and create an IP address for the group.

 

root@dbnode1:~# ipadm create-ipmp ipmp0
root@dbnode1:~# ipadm add-ipmp -i net0 -i net1 ipmp0
root@dbnode1:~# ipadm create-addr -T static -a 192.168.2.22/24 ipmp0
ipmp0/v4

 

Now that IPMP0 has been created successfully, declare net1 as the standby interface.

 

root@dbnode1:~# ipadm set-ifprop -p standby=on -m ip net1
root@dbnode1:~# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp0       ipmp0       ok        10.00s    net0 (net1)

 

The ipmpstat command reports that the IPMP0 group has been built successfully and that it operates over two NICs, net0 and net1. The parentheses denote a standby interface.
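
To confirm that failover works as expected, the active interface can be temporarily disabled and the group re-checked before re-enabling it. This is just a quick sanity test, one possible way to exercise the group; the -t option keeps the change non-persistent:

root@dbnode1:~# ipadm disable-if -t net0
root@dbnode1:~# ipmpstat -g
root@dbnode1:~# ipadm enable-if -t net0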

 

Follow the above-mentioned approach to build the IPMP groups for the rest of the servers in the cluster, as shown in Table 2.
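
For instance, the second group on dbnode1, IPMP1, which will carry the iSCSI initiator address, would be built the same way (a sketch assuming net2 and net3 already hold the addresses from Table 2):

root@dbnode1:~# ipadm create-ipmp ipmp1
root@dbnode1:~# ipadm add-ipmp -i net2 -i net3 ipmp1
root@dbnode1:~# ipadm create-addr -T static -a 10.0.1.13/27 ipmp1
root@dbnode1:~# ipadm set-ifprop -p standby=on -m ip net3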

 
Local Storage

 


