Setting up Ansible for our Cassandra Database Cluster to do DevOps/DBA tasks
Ansible is an essential DevOps/DBA tool for managing backups and rolling upgrades to the Cassandra cluster in AWS/EC2. An excellent aspect of Ansible is that it uses ssh, so you do not have to install an agent to use Ansible.
This article series centers on how to perform DevOps/DBA tasks with the Cassandra Database. However, the use of Ansible for DevOps/DBA transcends the Cassandra Database, so this article is useful for any DevOps/DBA or Developer who needs to manage groups of instances, boxes, or hosts, whether they are on-prem bare-metal, dev boxes, or in the Cloud. You don’t need to be setting up Cassandra to get value from this article.
Cassandra Tutorial Series on DevOps/DBA Cassandra Database
The first article in this series was about setting up a Cassandra cluster with Vagrant (it also appeared on DZone with some additional content as DZone Setting up a Cassandra Cluster with Vagrant). The second article in this series was about setting up SSL for a Cassandra cluster using Vagrant (which also appeared with more content as DZone Setting up a Cassandra Cluster with SSL). You don’t need those articles to follow along, but they provide a lot of context. You can find the source for the first and second articles at our Cloudurable Cassandra Image for Docker, AWS, and Vagrant. In later articles, we will use Ansible to create more complicated playbooks, like doing a rolling Cassandra upgrade, and we will cover using Ansible/ssh with AWS EC2.
Source code for Vagrant and Ansible
We continue to evolve the cassandra-image GitHub project. So that the code matches the listings in the article, we created a new branch that reflects the code as it was when this article was written (more or less): Article 3 Ansible Cassandra Vagrant.
Let’s get to it. Let’s start by creating a key for our DevOps/DBA test Cassandra cluster.
Create key for test cluster to do Cassandra Database DevOps/DBA tasks with Ansible
To use Ansible for DevOps/DBA, we will need to set up ssh keys, as Ansible uses ssh instead of running an agent on each server like Chef and Puppet.
The tool ssh-keygen manages authentication keys for ssh (secure shell). The ssh-keygen utility generates RSA or DSA keys for the SSH (secure shell) protocol versions 1 and 2. You can specify the key type with the -t option.
setup key script bin/setupkeys-cassandra-security.sh
CLUSTER_NAME=test
...
ssh-keygen -t rsa -C "your_email@example.com" -N "" -C "setup for cloud" \
-f "$PWD/resources/server/certs/${CLUSTER_NAME}_rsa"
chmod 400 "$PWD/resources/server/certs/"*
cp "$PWD/resources/server/certs/"* ~/.ssh
...
Let’s break that down.
We use ssh-keygen to create a private key that we will use to log into our boxes.
In this article those boxes are Vagrant boxes (VirtualBox), but in the next article, we will use the same key to manage EC2 instances.
Use ssh-keygen to create private key for ssh to log into Cassandra Database nodes
ssh-keygen -t rsa -C "your_email@example.com" -N "" -C "setup for cloud" \
-f "$PWD/resources/server/certs/${CLUSTER_NAME}_rsa"
Then we restrict access to the key file; otherwise, ansible, ssh, and scp (secure copy) will not let us use it.
Change the access of the key
chmod 400 "$PWD/resources/server/certs/"*
The above chmod 400 changes the key files so that only the owner can read them. This file mode change makes sense: the private key should be readable only by its owner (and that is what 400 does).
Copy keys to the area where they will be picked up by Cassandra node provisioning
cp "$PWD/resources/server/certs/"* ~/.ssh
The above just puts the files where our provisioners (Packer and Vagrant) can pick them up and deploy them with the image.
Locally we are using Vagrant to launch a cluster to do some tests on our laptop.
We also use Packer and the aws command line tools to create EC2 AMIs (and Docker images), but we don’t cover AWS in this article (it is covered in the next article, which is essentially part 2 of this one).
Create a bastion server to do Ansible DevOps/DBA tasks for the Cassandra Cluster
Eventually, we would like to use a bastion server that is in a public subnet to send commands to our Cassandra Database nodes that are in a private subnet in EC2. For local testing, we set up a bastion server, which is well explained in this guide to Vagrant and Ansible.
We used Learning Ansible with Vagrant (Part 2/4) as a guide for some of the setup performed in this article. It is a reliable source of Ansible and Vagrant knowledge for DevOps/DBA. Their mgmt node corresponds to what we call a bastion server. A notable difference is that we are using CentOS 7, not Ubuntu, and we made some slight syntax updates to some of the Ansible commands we use (we use a later version of Ansible).
We added a bastion server to our Vagrant config as follows:
Vagrantfile to set up the bastion for our Cassandra Cluster
# Define Bastion Node
config.vm.define "bastion" do |node|

  node.vm.network "private_network", ip: "192.168.50.20"

  node.vm.provider "virtualbox" do |vb|
    vb.memory = "256"
    vb.cpus = 1
  end

  node.vm.provision "shell", inline: <<-SHELL
    yum install -y epel-release
    yum update -y
    yum install -y ansible
    mkdir /home/vagrant/resources
    cp -r /vagrant/resources/* /home/vagrant/resources/
    mkdir -p ~/resources
    cp -r /vagrant/resources/* ~/resources/
    mkdir -p /home/vagrant/.ssh/
    cp /vagrant/resources/server/certs/* /home/vagrant/.ssh/
    sudo /vagrant/scripts/002-hosts.sh
    ssh-keyscan node0 node1 node2 >> /home/vagrant/.ssh/known_hosts
    mkdir ~/playbooks
    cp -r /vagrant/playbooks/* ~/playbooks/
    sudo cp /vagrant/resources/home/inventory.ini /etc/ansible/hosts
    chown -R vagrant:vagrant /home/vagrant
  SHELL
The bastion server, which could be on a public subnet in AWS in a VPC, uses ssh-keyscan to add the nodes that we set up in the hosts file to /home/vagrant/.ssh/known_hosts.
Running ssh-keyscan
ssh-keyscan node0 node1 node2 >> /home/vagrant/.ssh/known_hosts
This utility gets around the problem of having to verify each node manually, which otherwise produces this error message:
The authenticity of host ... can't be established. ... Are you sure you want to continue connecting (yes/no)? no
when we try to run the ansible command line tools.
Modify the Vagrant provision script
Since we use provision files to create different types of images (Docker, EC2 AMI, Vagrant/VirtualBox), we use a provisioning script specific to Vagrant. In this Vagrant provision script, we call another provision script to set up a hosts file.
000-vagrant-provision.sh
mkdir -p /home/vagrant/.ssh/
cp /vagrant/resources/server/certs/* /home/vagrant/.ssh/
...
scripts/002-hosts.sh
echo RUNNING TUNE OS
Setting up sshd on our Cassandra Database nodes in our DevOps Cassandra Cluster
The provision script 002-hosts.sh configures /etc/ssh/sshd_config to allow public key authentication. Then it restarts sshd, the daemon for ssh communication. (The other provisioning scripts it invokes were covered in the first two articles.)
Let’s look at the 002-hosts.sh provision script. You can see some remnants from the last article, where we set up cqlsh, and then it gets down to business setting up sshd (the secure shell daemon).
scripts/002-hosts.sh - sets up sshd and hosts file
#!/bin/bash
set -e
## Copy the cqlshrc file that controls cqlsh connections to ~/.cassandra/cqlshrc.
mkdir ~/.cassandra
cp ~/resources/home/.cassandra/cqlshrc ~/.cassandra/cqlshrc
## Allow pub key login to ssh.
sed -ie 's/#PubkeyAuthentication no/PubkeyAuthentication yes/g' /etc/ssh/sshd_config
## System control restart sshd daemon to take sshd_config into effect.
systemctl restart sshd
# Create host file so it is easier to ssh from box to box
cat >> /etc/hosts <<EOL
192.168.50.20 bastion
192.168.50.4 node0
192.168.50.5 node1
192.168.50.6 node2
192.168.50.7 node3
192.168.50.8 node4
192.168.50.9 node5
EOL
This setup is fairly specific to our Vagrant setup at this point. To simplify access to the servers that hold the different Cassandra Database nodes, 002-hosts.sh appends entries to the /etc/hosts file on the bastion server.
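Before leaning on Ansible, it is worth verifying this plumbing by hand from the bastion box. A simple check, assuming the cluster is up and the key was copied as described above, is to resolve a node name and then ssh to it directly with the test key.
Optional: verify host resolution and key-based ssh from bastion
$ getent hosts node0
$ ssh -i ~/.ssh/test_rsa vagrant@node0 hostname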
With public key authentication enabled in the sshd config, our keys installed, and our hosts configured (and our inventory.ini file shipped), we can start using ansible from our bastion server. This reminds me, we have not talked about the Ansible inventory.ini file yet.
Ansible config on bastion for Cassandra Database Cluster
Ansible has an ansible.cfg file and an inventory.ini file. When you run ansible, it checks for ansible.cfg in the current working directory, then in your home directory, and then for a master config file (/etc/ansible/ansible.cfg). We created an inventory.ini file that lives under ~/github/cassandra-image/resources/home, which gets mapped to /vagrant/resources/home on the virtual machines (node0, bastion, node1, and node2). A provision script copies the inventory.ini file to its proper location (sudo cp /vagrant/resources/home/inventory.ini /etc/ansible/hosts).
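A quick, optional way to confirm which ansible.cfg Ansible actually picked up (and that the lookup order described above behaves as you expect) is to print its version information, which includes the active config file.
Check which config file Ansible is using
$ ansible --version    # the "config file = ..." line shows the ansible.cfg in effect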
The inventory.ini file contains the servers that you want to manage with Ansible. A couple of things are going on here: we have a bastion group, which is for our bastion server, and we have a nodes group, which is made up of node0, node1, and node2.
Let’s see what the inventory.ini file actually looks like.
inventory.ini that gets copied to Ansible master list on Bastion
[bastion]
bastion
[nodes]
node0
node1
node2
Once we provision our cluster, we can log into bastion and start executing ansible commands.
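Once on the bastion (or any box with this inventory installed), a quick way to confirm that these groups were loaded is to ask Ansible which hosts a pattern matches; this makes no changes to the nodes.
List the hosts that match a group
$ ansible nodes --list-hosts
$ ansible all --list-hosts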
Installing the key for the test DevOps/DBA Cassandra Cluster on all nodes using an Ansible playbook
To make this happen, we had to tell the other servers about our public key.
We did this with an ansible playbook as follows:
Ansible playbook getting invoked from Vagrant on each new Cassandra Database node
Vagrant.configure("2") do |config|

  config.vm.box = "centos/7"

  # Define Cassandra Nodes
  (0..numCassandraNodes-1).each do |i|

    port_number = i + 4
    ip_address = "192.168.50.#{port_number}"
    seed_addresses = "192.168.50.4,192.168.50.5,192.168.50.6"

    config.vm.define "node#{i}" do |node|

      node.vm.network "private_network", ip: ip_address

      node.vm.provider "virtualbox" do |vb|
        vb.memory = "2048"
        vb.cpus = 4
      end

      ...

      node.vm.provision "ansible" do |ansible|
        ansible.playbook = "playbooks/ssh-addkey.yml"
      end
    end
  end
Notice the lines node.vm.provision "ansible" do |ansible| and ansible.playbook = "playbooks/ssh-addkey.yml".
If you are new to Vagrant and the above just is not making sense, please watch Vagrant Crash Course. It is by the same folks (guy) who created the Ansible series.
Ansible playbooks are, in effect, configuration scripts. You can perform tons of operations that are important for DevOps from them (yum installing software, Cassandra-specific tasks, and so on).
Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process. –Ansible Playbook documentation.
Here is the Ansible playbook that adds the RSA public key to the Cassandra nodes.
Ansible playbook ssh-addkey.yml to add test_rsa.pub to all Cassandra Database node servers
---
- hosts: all
  become: true
  gather_facts: no
  remote_user: vagrant

  tasks:
    - name: install ssh key
      authorized_key: user=vagrant
                      key="{{ lookup('file', '../resources/server/certs/test_rsa.pub') }}"
                      state=present
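The key=value form above works fine, but the same task can also be written with YAML dictionary arguments, which some readers find easier to scan. The following is just an equivalent sketch of the same ssh-addkey.yml task, not a change to the playbook the project ships.
Equivalent task written with YAML dictionary arguments
---
- hosts: all
  become: true
  gather_facts: no
  remote_user: vagrant

  tasks:
    - name: install ssh key
      authorized_key:
        user: vagrant
        key: "{{ lookup('file', '../resources/server/certs/test_rsa.pub') }}"
        state: present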
The trick here is that Vagrant supports running Ansible playbooks as well.
The Vagrant Ansible provisioner allows you to provision the guest using Ansible playbooks by executing ansible-playbook from the Vagrant host. – Vagrant Ansible documentation (https://www.vagrantup.com/docs/provisioning/ansible.html)
For users who did not read either of the first two articles on setting up the Cassandra Cluster
If you have not done so already, navigate to the project root dir (which is ~/github/cassandra-image on my dev box) and download the binaries. The source code is in the GitHub Cassandra Image project.
Running setup scripts
## cd ~; mkdir github; cd github; git clone https://github.com/cloudurable/cassandra-image
$ cd ~/github/cassandra-image
$ pwd
~/github/cassandra-image
## Setup keys
$ bin/setupkeys-cassandra-security.sh
## Download binaries
$ bin/prepare_binaries.sh
## Bring Vagrant cluster up
$ vagrant up
Even if you read the first article, note that bin/prepare_binaries.sh is something we added after the first two articles. It downloads the binaries needed for provisioning, checksums the files, and then installs them as part of the provisioning process.
Where do you go if you have a problem or get stuck?
We set up a Google group for this project and set of articles. If you just can’t get something to work or you are getting an error message, please report it here. Between the mailing list and the GitHub issues, we can support you with quite a few questions and issues.
Running ansible commands from bastion
Let’s log into bastion and run ansible commands against the cassandra nodes.
Working with ansible from bastion and using ssh-agent
$ vagrant ssh bastion
So we don’t have to keep logging in and passing our key, let’s start up an ssh-agent and add our key to the agent with ssh-add ~/.ssh/test_rsa.
The ssh-agent is a utility that holds private keys used for public key authentication (RSA, DSA, ECDSA, Ed25519) so you don’t have to keep passing the keys around. The ssh-agent is usually started at the beginning of a login session. Other programs (scp, ssh, ansible) are started as clients of the ssh-agent utility. Mastering ssh is essential for DevOps and needed for ansible.
First set up ssh-agent and add keys to it with ssh-add.
Start ssh-agent and add keys
$ ssh-agent bash
$ ssh-add ~/.ssh/test_rsa
Now that the agent is running and our keys are added, we can use ansible without passing it the RSA private key.
Let’s verify connectivity by pinging some of these machines. Let’s ping the node0 machine, then let’s ping all of the nodes.
Let’s use the ansible ping module to ping the node0 server.
Ansible Ping the Cassandra Database node
$ ansible node0 -m ping
Output
node0 | SUCCESS => {
"changed": false,
"ping": "pong"
}
To learn more about DevOps with ansible see this video on Ansible introduction. It covers a lot of the basics of ansible.
Now let’s ping all of the nodes.
Ansible Ping all Cassandra Database Cluster nodes
$ ansible nodes -m ping
Output
node0 | SUCCESS => {
"changed": false,
"ping": "pong"
}
node2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
node1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Looks like bastion can run ansible against all of the servers.
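The ping module is just the start; the same ad-hoc mechanism runs arbitrary modules and commands. As a quick illustration (nothing Cassandra-specific, using the same inventory groups), you could run the following from bastion.
Examples of other ad-hoc commands from bastion
$ ansible nodes -a "uptime"
$ ansible node0 -m setup    # gathers facts about node0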
Setting up my MacOSX to run Ansible against Cassandra Database Cluster nodes
The script ~/github/cassandra-image/bin/setupkeys-cassandra-security.sh copies the test cluster key for ssh (secure shell) over to ~/.ssh/ (cp "$PWD/resources/server/certs/"* ~/.ssh). It was run from the project root folder, which is ~/github/cassandra-image on my box.
Move to where you checked out the project.
cd ~/github/cassandra-image
In this folder are an ansible.cfg file and an inventory.ini file for local dev. Before you use these, first modify your /etc/hosts file to add entries for the bastion, node0, node1, and node2 servers.
Add bastion, node0, etc. to /etc/hosts
$ cat /etc/hosts
### Used for ansible/ vagrant
192.168.50.20 bastion
192.168.50.4 node0
192.168.50.5 node1
192.168.50.6 node2
192.168.50.7 node3
192.168.50.8 node4
192.168.50.9 node5
We can use ssh-keyscan, just like we did before, to add these hosts to our known_hosts file.
Add keys to known_hosts to avoid prompts
$ ssh-keyscan node0 node1 node2 >> ~/.ssh/known_hosts
Then, just like before, we can start up an ssh-agent and add our keys.
Start ssh-agent and add keys
$ ssh-agent bash
$ ssh-add ~/.ssh/test_rsa
Notice that the ansible.cfg and inventory.ini files are a bit different from the ones on our bastion server because we have to add the user name.
Notice the ansible.cfg file and inventory.ini file in the project dir
$ cd ~/github/cassandra-image
$ cat ansible.cfg
[defaults]
hostfile = inventory.ini
$ cat inventory.ini
[nodes]
node0 ansible_user=vagrant
node1 ansible_user=vagrant
node2 ansible_user=vagrant
Ansible will use these.
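If you would rather not rely on ssh-agent on your dev box, Ansible can also be pointed at the key directly from ansible.cfg. This is an optional variation rather than what the project ships; remote_user and private_key_file are standard [defaults] settings.
Optional ansible.cfg variation that skips ssh-agent
[defaults]
hostfile = inventory.ini
remote_user = vagrant
private_key_file = ~/.ssh/test_rsa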
From the project directory, you should be able to ping node0 and all of the nodes just like before.
Ping node0 with ansible.
Ansible Ping Cassandra Database node
$ ansible node0 -m ping
Output
node0 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Ping all of the Cassandra nodes with ansible.
Ansible Ping All Cassandra Database Cluster nodes
$ ansible nodes -m ping
Output
node0 | SUCCESS => {
"changed": false,
"ping": "pong"
}
node2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
node1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
In the next article, we cover how to set up ~/.ssh/config so you don’t have to remember to use ssh-agent.
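As a preview, a minimal ~/.ssh/config entry for this setup might look like the sketch below (host names and key path as used in this article); the next article walks through it properly.
Sketch of a minimal ~/.ssh/config entry (preview of the next article)
Host bastion node0 node1 node2
    User vagrant
    IdentityFile ~/.ssh/test_rsa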
Using ansible to run nodetool on Cassandra Cluster nodes
You may recall from the first article that we would log into the servers (vagrant ssh node0) and then check that they could see the other nodes with nodetool describecluster. We can run this command against all three servers (from bastion or from our dev laptop) with ansible.
Let’s use ansible to run describecluster against all of the nodes.
Ansible running nodetool describecluster against all Cassandra Cluster nodes
$ ansible nodes -a "/opt/cassandra/bin/nodetool describecluster"
This command allows us to check the status of every node quickly.
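The same ad-hoc pattern works for any nodetool subcommand. For example, nodetool status is another quick health check you could run across the group (assuming the same /opt/cassandra install path used above).
Ansible running nodetool status against all Cassandra Cluster nodes
$ ansible nodes -a "/opt/cassandra/bin/nodetool status"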
Let’s say that we wanted to update a schema or do a rolling restart of our Cassandra cluster nodes, which could be a very common task. Perhaps before the update, we want to decommission the node and back things up. To do this sort of automation, we could create an Ansible playbook.
Ansible Playbooks are more powerful than ad-hoc task execution and are especially useful for managing a cluster of Cassandra servers.
Playbooks allow for configuration management and multi-machine deployment to manage complex tasks like a rolling upgrade or schema updates or perhaps a weekly backup.
Playbooks are declarative configurations. Ansible Playbooks also orchestrate steps into simpler tasks. This automation gets rid of a lot of manually ordered processes and allows for an immutable infrastructure.
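To make the rolling-restart idea concrete, here is a minimal sketch of what such a playbook could look like. It is not part of this article’s project; it assumes Cassandra runs as a systemd service named cassandra (your service name may differ) and uses serial: 1 so Ansible touches one node at a time.
Sketch of a rolling restart playbook (service name assumed)
---
- hosts: nodes
  become: true
  gather_facts: no
  remote_user: vagrant
  serial: 1

  tasks:
    - name: Drain the Cassandra node so it flushes data and stops accepting new writes
      command: /opt/cassandra/bin/nodetool drain
    - name: Restart the Cassandra service (service name is an assumption)
      service: name=cassandra state=restarted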
Our describe-cluster playbook for Cassandra Database Cluster nodes
Creating a complicated playbook is beyond the scope of this article, but let’s create a simple playbook and execute it. This playbook will run nodetool describecluster on each node. Here is our playbook that runs Cassandra nodetool describecluster on each Cassandra node in our cluster.
playbooks/describe-cluster.yml - simple ansible playbook that runs Cassandra nodetool describecluster
---
- hosts: nodes
  gather_facts: no
  remote_user: vagrant

  tasks:
    - name: Run NodeTool Describe Cluster command against each Cassandra Cluster node
      command: /opt/cassandra/bin/nodetool describecluster
To run this, we use ansible-playbook as follows.
Running the describe-cluster playbook
$ ansible-playbook playbooks/describe-cluster.yml --verbose
Between this article and the last, we modified our Vagrantfile quite a bit. It now uses a loop to create the Cassandra nodes, and it uses Ansible provisioning.
Here is our new Vagrantfile with updates:
Complete code listing of Vagrantfile that sets up our DevOps/DBA Cassandra Database Cluster
# -*- mode: ruby -*-
# vi: set ft=ruby :

numCassandraNodes = 3

Vagrant.configure("2") do |config|

  config.vm.box = "centos/7"

  # Define Cassandra Nodes
  (0..numCassandraNodes-1).each do |i|

    port_number = i + 4
    ip_address = "192.168.50.#{port_number}"
    seed_addresses = "192.168.50.4,192.168.50.5,192.168.50.6"

    config.vm.define "node#{i}" do |node|

      node.vm.network "private_network", ip: ip_address

      node.vm.provider "virtualbox" do |vb|
        vb.memory = "2048"
        vb.cpus = 4
      end

      node.vm.provision "shell", inline: <<-SHELL
        sudo /vagrant/scripts/000-vagrant-provision.sh
        sudo /opt/cloudurable/bin/cassandra-cloud -cluster-name test \
             -client-address #{ip_address} \
             -cluster-address #{ip_address} \
             -cluster-seeds #{seed_addresses}
      SHELL

      node.vm.provision "ansible" do |ansible|
        ansible.playbook = "playbooks/ssh-addkey.yml"
      end
    end
  end

  # Define Bastion Node
  config.vm.define "bastion" do |node|

    node.vm.network "private_network", ip: "192.168.50.20"

    node.vm.provider "virtualbox" do |vb|
      vb.memory = "256"
      vb.cpus = 1
    end

    node.vm.provision "shell", inline: <<-SHELL
      yum install -y epel-release
      yum update -y
      yum install -y ansible
      mkdir /home/vagrant/resources
      cp -r /vagrant/resources/* /home/vagrant/resources/
      mkdir -p ~/resources
      cp -r /vagrant/resources/* ~/resources/
      mkdir -p /home/vagrant/.ssh/
      cp /vagrant/resources/server/certs/* /home/vagrant/.ssh/
      sudo /vagrant/scripts/002-hosts.sh
      ssh-keyscan node0 node1 node2 >> /home/vagrant/.ssh/known_hosts
      mkdir ~/playbooks
      cp -r /vagrant/playbooks/* ~/playbooks/
      sudo cp /vagrant/resources/home/inventory.ini /etc/ansible/hosts
      chown -R vagrant:vagrant /home/vagrant
    SHELL
  end

  # View the documentation for the provider you are using for more
  # information on available options.

  # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
  # such as FTP and Heroku are also available. See the documentation at
  # https://docs.vagrantup.com/v2/push/atlas.html for more information.
  config.push.define "atlas" do |push|
    push.app = "cloudurable/cassandra"
  end
end
Conclusion
We set up Ansible for our Cassandra Database Cluster to automate common DevOps/DBA tasks. We created an ssh key and then set up our instances with this key so we could use ssh, scp, and ansible. We set up a bastion server with Vagrant. We used an ansible playbook (ssh-addkey.yml) from Vagrant to install our test cluster key on each server. We ran ansible ping against a single server. We ran ansible ping against many servers (nodes). We set up our local dev machine with ansible.cfg and inventory.ini so we could run ansible commands directly against node0 and nodes. We ran nodetool describecluster against all of the nodes from our dev machine. Lastly, we created a very simple playbook that can run nodetool describecluster. Ansible is a very powerful tool that can help you manage a cluster of Cassandra instances. In later articles, we will use Ansible to create more complex playbooks, like backing up Cassandra nodes to S3.
Cassandra Tutorial: Cassandra Cluster DevOps/DBA series
The first tutorial in this Cassandra tutorial series focused on setting up a Cassandra cluster with Vagrant (it also appeared on DZone with some additional content as DZone Setting up a Cassandra Cluster with Vagrant). The second article in this series was about setting up SSL for a Cassandra cluster using Vagrant (which also appeared with more content as DZone Setting up a Cassandra Cluster with SSL). This third article covered configuring and using Ansible (building on the first two articles). The next article (the 4th) will cover applying the tools and techniques from the first three articles to produce an image (an EC2 AMI, to be precise) that we can deploy to AWS/EC2. For that, we will use Packer, Ansible, and the AWS Command Line tools. The AWS command line tools are essential for doing DevOps with AWS.
Check out more information about the Cassandra Database
- Cassandra Consulting: Architecture Analysis
- Cassandra Consulting: Quick Start
- Cassandra Course
- Amazon Cassandra Support