If you're looking to get as much power from your MariaDB database server as possible, you might be interested in a MariaDB Galera Cluster. With this setup you can create an active-active database cluster: when data is written to one node, it's replicated to that same database on every other node.
I want to walk you through the process of creating a MariaDB Galera Cluster. I'll be demonstrating this on freshly installed instances of Ubuntu Server 16.04. For this setup, you'll need at least three servers running Ubuntu 16.04. Do note, three is the minimum number of nodes you can have in a MariaDB Galera Cluster. My servers will be at the following IP addresses:
- Node1 — 192.168.1.245
- Node2 — 192.168.1.235
- Node3 — 192.168.1.253
Obviously, your servers can be at whatever IP addresses you need; just follow along closely and insert the correct addresses where needed.
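If you'd rather not retype raw IPs everywhere, one option is to map each node to a name via /etc/hosts. A minimal sketch (the lowercase hostnames are my own choice, and it writes to a temp file first so you can review before touching /etc/hosts):

```shell
# Generate /etc/hosts entries for the three cluster nodes.
# Adjust the IPs to match your own network.
hosts_entries() {
    cat <<'EOF'
192.168.1.245 node1
192.168.1.235 node2
192.168.1.253 node3
EOF
}

# Write to a temp file for review first.
hosts_entries > /tmp/galera-hosts
# After reviewing, append on each node with:
#   sudo tee -a /etc/hosts < /tmp/galera-hosts
```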
SEE: Hiring kit: MongoDB administrator (Tech Pro Research)
The following steps are taken on all nodes.
The first thing you must do is install the necessary software. Since we're working with MariaDB, you'll need to add the correct repository. This is done with the following commands (the first adds the necessary key and the second adds the repository):
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8
sudo add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://ftp.utexas.edu/mariadb/repo/10.1/ubuntu xenial main'
Once the repository has been added, update apt with the command:
sudo apt-get update
Install MariaDB and rsync (needed to sync the nodes) with the command:
sudo apt-get install mariadb-server rsync
During the installation, you will be prompted to type a root user password for MariaDB (Figure A). Do this and the installation will complete.
Next you need to secure MariaDB. Do this by issuing the command:
sudo mysql_secure_installation
Answer yes to all questions except Change the root password?.
Complete the above for all three nodes and you're ready to continue on.
SEE: Server deployment/migration checklist (TechRepublic)
Configuring the Nodes
From the terminal on Node1, issue the command sudo nano /etc/mysql/conf.d/galera.cnf, which will open an empty file for editing. Paste the following content into that file:
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://192.168.1.245,192.168.1.235,192.168.1.253"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address="192.168.1.245"
wsrep_node_name="Node1"
Remember, if your IP addresses are different, swap them out in the above file to reflect your needs.
Do the same for Node2 and Node3, only you'll use their respective IP addresses and Names in this section:
# Galera Node Configuration
wsrep_node_address="192.168.1.245"
wsrep_node_name="Node1"
So Node2 would be:
# Galera Node Configuration
wsrep_node_address="192.168.1.235"
wsrep_node_name="Node2"
And Node3 would be:
# Galera Node Configuration
wsrep_node_address="192.168.1.253"
wsrep_node_name="Node3"
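Since the three galera.cnf files differ only in the last two lines, you could generate them from a single template instead of hand-editing each one. A hedged sketch (the make_galera_cnf helper is my own; the values mirror the configuration above, so adjust the IPs to your network):

```shell
# Generate a per-node galera.cnf; the node address and name are the
# only values that vary between nodes.
make_galera_cnf() {
    node_ip="$1"
    node_name="$2"
    cat <<EOF
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://192.168.1.245,192.168.1.235,192.168.1.253"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address="${node_ip}"
wsrep_node_name="${node_name}"
EOF
}

# On Node2, for example:
make_galera_cnf 192.168.1.235 Node2 > /tmp/galera.cnf
# Then: sudo cp /tmp/galera.cnf /etc/mysql/conf.d/galera.cnf
```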
Configure the firewall
This must be done on all three Nodes. Go back to the terminal window and issue the following commands to open up the necessary ports:
sudo ufw enable
sudo ufw allow 3306/tcp
sudo ufw allow 4444/tcp
sudo ufw allow 4567/tcp
sudo ufw allow 4568/tcp
sudo ufw allow 4567/udp
Note: If you secure shell into these nodes, also issue the following to open port 22 for SSH (ideally before running ufw enable, so you don't lock yourself out):
sudo ufw allow 22/tcp
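The rules above can also be expressed as a loop, which makes it harder to miss a port when setting up three machines. A sketch (the DRY_RUN switch is my own addition; set it to preview the rules without touching the firewall):

```shell
# Open all ports a Galera node needs; with DRY_RUN set, just print
# the rules instead of applying them.
open_galera_ports() {
    for rule in 22/tcp 3306/tcp 4444/tcp 4567/tcp 4568/tcp 4567/udp; do
        if [ -n "$DRY_RUN" ]; then
            echo "ufw allow $rule"
        else
            sudo ufw allow "$rule"
        fi
    done
}

# Preview the rules first:
DRY_RUN=1
open_galera_ports
# When satisfied: unset DRY_RUN; open_galera_ports; sudo ufw enable
```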
That's it for the configuration.
Starting the Galera Cluster
The first thing you must do is stop the database on all nodes. Do this with the command (run on each node):
sudo systemctl stop mysql
Once the database servers are all stopped, start the Galera Cluster on the first node with the command:
sudo galera_new_cluster
The node will now be registered with the cluster. You can test this by issuing the command:
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
You'll be prompted for the database root user password. Once authenticated, the command will report one node has joined the cluster (Figure B).
Go back to the second and third nodes and start up the database with the command:
sudo systemctl start mysql
Once the database servers are up and running, go back to the first node and issue the command:
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
At this point, you should see all three nodes have joined (Figure C).
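If you want to feed that cluster-size check into a script or monitoring job, you can strip the output down to just the number. A small sketch (cluster_size_from is a hypothetical helper; it assumes the usual two-column Variable_name/Value text the mysql client prints):

```shell
# Extract the numeric value from "SHOW STATUS LIKE 'wsrep_cluster_size'"
# output piped in on stdin.
cluster_size_from() {
    awk '$1 == "wsrep_cluster_size" { print $2 }'
}

# Live usage (prompts for the database root password):
#   mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'" | cluster_size_from
```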
Testing the replication
Let's do a quick test by creating a simple database on the first node. This database should automatically replicate to the other nodes. On the first node, gain access to the database shell with the command mysql -u root -p. Once within the database shell, create the new database with the command:
CREATE DATABASE tr_test;
Make sure the database was successfully created with the command:
SHOW DATABASES;
You should see the newly created database listed (Figure D).
Exit out of the database shell with the command exit. If you hop over to nodes two and three, log into the database shell, and issue the command show databases; you will see the test database has automatically replicated.
Your MariaDB Galera Cluster is ready to serve.
Database power at your fingertips
Linux makes setting up such powerful systems as database clusters incredibly simple. Take a MariaDB Galera Cluster out for a spin to see just how much power you can have at your fingertips.
Jack Wallen is an award-winning writer for TechRepublic and Linux.com. He’s an avid promoter of open source and the voice of The Android Expert. For more news about Jack Wallen, visit his website jackwallen.com.