Setting up Ansible and SSH to configure AWS EC2 instances
We pick up our Ansible tutorial series with a focus on using Ansible with EC2: mastering Ansible inventory setup, revisiting ssh-agent, and covering ssh client config so you don't need long, convoluted commands or have to remember the identity file, username, etc. for ssh, scp, and ansible.
Overview
This Ansible/AWS tutorial covers the following:
- Setting up ssh client config
- Setting up ansible to manage our EC2 instance (ansible uses ssh)
- Setting up an ssh-agent and adding ssh identities (ssh-add)
- Setting up ssh using ~/.ssh/config so we don’t have to pass credentials around
- Using ansible dynamic inventory with EC2
- AWS command line tools to manage DNS entries with Route 53
Lastly, we use AWS Route 53 to set up a DNS name for our new instance, which we then use from ansible, so we never have to reconfigure our ansible config when we create a new AMI or EC2 instance. If you are doing DevOps with EC2 and you are not using ssh config files and ansible, this article is a must.
This is part 4 of this series of articles on creating the Cassandra Database image and DevOps/DBA. You don’t truly need the earlier articles to follow this one, but they provide a lot of context. If you are new to ansible, the last article on ansible would be good to at least skim. This one picks up and covers ansible more deeply with regards to AWS/EC2.
You can find the source for the first, second, third and this Cassandra tutorial (Ansible/Cassandra tutorial) at our Cloudurable Cassandra Image for Packer, EC2, Docker, AWS and Vagrant. In later articles, we will set up a working cluster in AWS using VPCs, Subnets, Availability Zones and Placement groups. We will continue to use ansible to manage our Cassandra cluster.
The source code for this article is in this branch on github.
Ansible and EC2
Although we have base images, since Cassandra is stateful, we will want the ability to update the images in place for our Amazon Cassandra support.
The options for configuration and orchestration management are endless (Puppet, Chef, Boto, etc.). This article uses Ansible for many of these tasks. Ansible is an agentless architecture and works over ssh (secure shell) as we covered in our last article (Setting up Ansible for our Cassandra Cluster to do DevOps/DBA tasks). There are some very helpful Ansible/AWS integrations which will try to cover in future articles.
The Ansible framework allows DevOps/DBA staff to run commands against Amazon EC2 instances as soon as they are available. Ansible is very suitable for provisioning hosts in a Cassandra cluster as well as performing routine DevOps/DBA tasks like replacing a failed node, backing up a node, profiling Cassandra, performing a rolling upgrade and more.
Since Ansible relies on ssh, we should make sure that ssh is working for us.
Making sure ssh works before we get started with ansible
If ssh is not working, you can't use ansible because ansible needs ssh.
Before you go about using ansible with AWS/EC2 to manage your Cassandra clusters, you have to make sure that you can, in fact, connect with ssh.
The first step in our journey is to get the IP of the EC2 instance that you just launched.
Another key tip for using ansible is to use -vvvv if it can’t connect so you can see why it can’t connect.
Let’s get the IP of the new instance using get-IP-cassandra.sh, which we covered earlier.
Getting the IP
$ bin/get-IP-cassandra.sh
54.218.113.95
Now we can log in with the pem file associated with the AWS key pair that we used to launch our Cassandra EC2 instance.
Let’s see if we can log into the Cassandra EC2 instance with ssh.
Can I log in with the pem file?
ssh -i ~/.ssh/cloudurable-us-west-2.pem centos@54.218.113.95
If you can do this, then your security group is set up properly. If you can’t, make sure the VPC security group associated with the instance has port 22 open. (Limit logging into instances via SSH port 22 to only your IP address.)
In addition to the pem file that AWS creates, we have our private RSA key for the test cluster (which we covered in the last article). Recall that the RSA key is used with the ansible user (also described in the last article on ansible).
Let’s see if we can log in with our RSA private key.
Can I log in with the key we generated for ansible?
ssh -i ~/.ssh/test_rsa ansible@54.218.113.95
If you can log in with the pem file but not the RSA key we created for the test cluster, then you may have a key mismatch. You could regenerate the keys with bin/setupkeys-cassandra-security.sh, then either copy them with scp or upload them with the ansible file/copy module or file/synchronize module.
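As an illustrative sketch (this play is an assumption, not from the article's repo; paths follow the article's conventions), uploading a regenerated public key with the ansible copy module might look like:

```yaml
# Hypothetical playbook: push a regenerated public key to the ansible user.
# The src/dest paths and group name are assumptions for illustration.
- hosts: aws-nodes
  become: true
  tasks:
    - name: Install the new public key for the ansible user
      copy:
        src: ~/.ssh/test_rsa.pub
        dest: /home/ansible/.ssh/authorized_keys
        owner: ansible
        group: ansible
        mode: "0600"
```

Note this playbook would only help if you can still reach the node with the pem file; run it with -u centos in that case.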
Passing the key on each ansible command is tiresome; let’s use the ssh-agent (discussed in the last article) to add (ssh-add) our cluster key identity (~/.ssh/test_rsa) to all ssh commands that we use (including ansible).
Can I install the key and log in using ssh-agent?
$ ssh-agent bash
$ ssh-add ~/.ssh/test_rsa
$ ssh ansible@54.218.113.95
If you were able to log in with ssh by adding the key to the ssh-agent, then you are ready to use ansible. To test that you can connect via ansible, add these entries to the inventory.ini file in the project root (~/github/cassandra-image).
Setting up Ansible using the inventory.ini
We assume you have set up the cluster key as follows:
Setup cluster key for ansible
$ ssh-agent bash
$ ssh-add ~/.ssh/test_rsa
Recall that bin/setupkeys-cassandra-security.sh creates the RSA key and installs it under ~/.ssh/test_rsa. Then the provisioning scripts install the key correctly on the EC2 image (AMI).
Add this to inventory.ini for ansible
[aws-nodes]
54.218.113.95 ansible_user=ansible
The above tells ansible that the server 54.218.113.95 exists, that it is in the group aws-nodes, and that when we connect to it we should use the user ansible. (Remember, we looked up the IP of the Cassandra Database EC2 instance using bin/get-IP-cassandra.sh.)
Once that is set up, we can run the ansible ping module against our Cassandra EC2 instance as follows:
Run ansible ping modules against aws-nodes
$ ansible aws-nodes -m ping
54.218.113.95 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Dynamic Ansible inventory
When doing Amazon Web Services EC2 DevOps, you could be managing several groups of servers. EC2 allows you to use placement groups, autoscale groups, security groups, and tags to organize and manage your instances. AWS EC2 is rich with meta-data about the instances.
If you are running Dev, QA, production or even green/blue deploys with CI and CD, you will be running many EC2 instances over time. Hosts can come and go in EC2. Because of the ephemeral nature of hosts in EC2, ansible allows you to use external scripts to manage ansible inventory lists. There is such an ansible inventory script for AWS EC2.
As you can imagine if you are doing DevOps, ansible AWS EC2 dynamic inventory is a must.
You can set up AWS via your ~/.aws/config and ~/.aws/credentials files; if you installed the aws command line tools, then you likely have this set up already or have the requisite environment variables.
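For reference, a minimal sketch of those two files (the key values below are placeholders, not real credentials):

```ini
; ~/.aws/credentials (placeholder values)
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey

; ~/.aws/config
[default]
region = us-west-2
```

Alternatively, the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION environment variables serve the same purpose.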
To use the dynamic inventory, we pass Ansible’s -i command line option and specify the path to the script.
Before we do that, we need to download the ec2 ansible inventory script and mark it executable.
Download the dynamic ansible inventory script as follows.
Download the ansible ec2 inventory script, make it executable
$ pwd
~/github/cassandra-image/
$ wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py -O ansible-ec2/ec2.py
$ chmod +x ansible-ec2/ec2.py
$ wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini -O ansible-ec2/ec2.ini
After you download it, you can start using it.
Using a dynamic inventory
Now let’s use the dynamic inventory ansible script.
Before we do that, let’s add the pem associated with our AWS key pair to the ssh-agent as follows (if you have not done so already).
Add centos pem file (key pair)
$ ssh-add ~/.ssh/cloudurable-us-west-2.pem
Then we can ping the EC2 instance with the ansible ping module as follows:
Pass dynamic list to ansible use user centos
$ ansible -i ansible-ec2/ec2.py tag_Name_cassandra_node -u centos -m ping
54.218.113.95 | SUCCESS => {
"changed": false,
"ping": "pong"
}
The -i option passes a script that will generate a JSON inventory list. You can even write a custom script that produces an inventory, as long as that script uses the same JSON format. Above we are passing the script that we just downloaded. Remember that an ansible inventory list is just a list of servers that we are managing with ansible.
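To make that JSON contract concrete, here is a minimal sketch of a custom inventory script (the host and group are this article's example values); any executable that prints this shape when called with --list works with -i:

```shell
#!/bin/bash
# Hypothetical minimal dynamic inventory: ansible invokes it with --list
# and expects JSON groups plus a _meta section for per-host variables.
emit_inventory() {
  cat <<'EOF'
{
  "aws-nodes": {
    "hosts": ["54.218.113.95"],
    "vars": {"ansible_user": "ansible"}
  },
  "_meta": {"hostvars": {}}
}
EOF
}

# Print the inventory when ansible asks for the full list.
if [ "${1:-}" = "--list" ] || [ -z "${1:-}" ]; then
  emit_inventory
fi
```

Saved as executable, it could be used like the downloaded script: ansible -i ./my-inventory.sh aws-nodes -m ping.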
Now we know we can use ansible with our AWS key pair and our AWS PEM file. But can we use it with our RSA key?
Using ansible with RSA Key and ansible user from last article
Please recall from the last article that we set up a user called ansible which uses an RSA private key file that we also created in the last article (~/.ssh/test_rsa).
We should be able to manage our EC2 instance with the ansible user using the RSA key. Let’s try.
Add the ansible user’s RSA key to the ssh-agent as follows.
Add ansible user’s RSA private key file - test_rsa file
$ ssh-add ~/.ssh/test_rsa
Now we can access the instance via the ansible user using ~/.ssh/test_rsa as our private key. Use ansible with the ansible user (-u ansible) and the RSA key we just installed.
Pass dynamic list to ansible use user ansible
$ ansible -i ansible-ec2/ec2.py tag_Name_cassandra_node -u ansible -m ping
54.218.113.95 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Often DevOps tasks require you to manage different machines: Ubuntu, CoreOS, CentOS, RedHat, Debian, and Amazon Linux. The various EC2 instances will have different login users. For example, CentOS has the user centos and Ubuntu has the user ubuntu (I have run into admin, root, etc.). It is a good idea to create a standard user like ansible (or devops or ops or admin) to run ansible commands against different flavors of Unix. Also, AWS PEM files / key pairs do not change once an instance is launched, and Cassandra instances tend to be less ephemeral than some other EC2 instances (due to the statefulness of the Cassandra Database and the potentially large amounts of data on a node). The ability to regenerate the RSA key periodically is important, as you do not want the keys getting into the wrong hands.
The AWS inventory list command uses security groups, VPC ids, instance id, image type, EC2 tags, AZ, scaling groups, region and more to group EC2 instances to run ansible commands against, which is very flexible for DevOps operations.
Let’s see a list of all of the aliases and ansible groups by which our one Cassandra Database EC2 instance can be accessed.
Show all ansible groups that our Cassandra Database EC2 instance can be accessed by
./ansible-ec2/ec2.py | jq "keys"
[
"_meta",
"ami_6db4410e", //by AMI
"ec2", //All ec2 instances
"i-754a8a4f693b58d1b", //by instance id
"key_cloudurable_us_west_2",//by key pair
"security_group_allow_ssh", //by security group
"tag_Name_cassandra_node", //by EC2 tag
"type_m4_large", //by EC2 instance type
"us-west-2", //by Region
"us-west-2c", //by AZ Availability Zone
"vpc_id_vpc_c78000a0" //by VPC Virtual Private Cloud
]
You can use any of these ansible groups to ping a set of servers. Let’s ping every server (we only have one) in the AWS us-west-2 region.
Ping all Cassandra Database nodes in the us-west-2 region
$ ansible -i ansible-ec2/ec2.py us-west-2 -u ansible -m ping
54.218.113.95 | SUCCESS => {
"changed": false,
"ping": "pong"
}
I don’t know about you, but I don’t like passing around the -i and -u options on every command. Let’s see what we can do to remedy this.
Installing Dynamic Inventory as the default inventory
Another option besides using -i is to copy the dynamic inventory script to /etc/ansible/ec2.py and chmod +x it. You will also need to copy the ec2.ini file to /etc/ansible/ec2.ini. Then we will be able to use the ansible EC2 dynamic inventory without passing -i (making it a lot easier to use).
Let’s install the ansible dynamic inventory script and config as follows.
Installing ansible EC2 dynamic inventory script as the default
$ cd ~/github/cassandra-image
$ wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py -O ansible-ec2/ec2.py
$ chmod +x ansible-ec2/ec2.py
$ sudo cp ansible-ec2/ec2.py /etc/ansible/ec2.py
$ wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini -O ansible-ec2/ec2.ini
$ sudo cp ansible-ec2/ec2.ini /etc/ansible/ec2.ini
You will also need to point the environment variable ANSIBLE_HOSTS at the script and EC2_INI_PATH at the ini file, which you can do in your ~/.bash_profile.
Environment variables needed to make dynamic inventory work
export ANSIBLE_HOSTS=/etc/ansible/ec2.py
export EC2_INI_PATH=/etc/ansible/ec2.ini
Now when you use ansible, you will not have to specify -i every time.
Let’s try ansible using the dynamic inventory list without the -i.
Using dynamic inventory without -i
$ ansible tag_Name_cassandra_node -u ansible -m ping
54.218.113.95 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Now that we got rid of -i to specify the ansible dynamic inventory list script, let’s get rid of the -u to specify the user. At least let’s try.
Again, before we do that, let’s see if we can use ssh without passing the user name.
Specifying default user via ~/.ssh/config
If you’re like most developers doing DevOps, you have a half dozen remote servers (or these days, local virtual machines, EC2 instances, Docker containers) you might need to deal with.
Remembering all of those usernames, passwords, domain names, identity files, and command line options to ssh can be daunting. You want a way to simplify your life with an ssh config file.
Using ssh effectively is another one of those essential DevOps skills!
You can create an ssh config file that configures host names, user names, and private keys to connect to ssh. There are many custom ssh config options to configure ssh and make life easier.
We will show how to configure ~/.ssh/config to make logging into our EC2 instance easier, and eventually get rid of the need to run the ssh-agent or use the -i option when using ansible.
We wrote a small bash script that gets the DNS name of our instance using the aws command line as follows:
bin/get-DNS-name-cassandra.sh - Get the DNS name of our Cassandra EC2 instance using the aws command line
#!/bin/bash
set -e
source bin/ec2-env.sh
aws ec2 describe-instances --filters "Name=tag:Name,Values=${EC2_INSTANCE_NAME}" \
| jq --raw-output .Reservations[].Instances[].PublicDnsName
We can use bin/get-DNS-name-cassandra.sh to get the DNS name of our instance as follows:
Getting the DNS name of the Cassandra EC2 instance
bin/get-DNS-name-cassandra.sh
ec2-54-218-113-95.us-west-2.compute.amazonaws.com
Now let’s see the IP address associated with this instance.
EC2 Cassandra host
$ host ec2-54-218-113-95.us-west-2.compute.amazonaws.com
54.218.113.95
Note that for this discussion we are using 54.218.113.95 as the public IP address of our Cassandra node (which we created with Packer and launched with the aws command line tools).
Now we can configure ~/.ssh/config to use this information.
~/.ssh/config
# Note we can use wild star so any that match this pattern will work.
Host *.us-west-2.compute.amazonaws.com
ForwardAgent yes
IdentityFile ~/.ssh/test_rsa
User ansible
# Note we can use the IP address
# so if we ssh into it, we don't have to pass username and the id file
Host 54.218.113.95
ForwardAgent yes
IdentityFile ~/.ssh/test_rsa
User ansible
# We even create an alias for ssh that has username and the id file.
Host cnode0
Hostname 54.218.113.95
ForwardAgent yes
IdentityFile ~/.ssh/test_rsa
User ansible
Read the comments in the file above.
Now we can log into cnode0 using ssh as follows:
ssh cnode0
$ ssh cnode0
Note that cnode0 is an alias that we set up in ~/.ssh/config, and that we don’t have to use the -i option to pass the identity file or specify the username.
Would you rather remember ssh cnode0 or ssh -i ~/.ssh/my-long-pem-name-with-region-info.pem someuserForEachLinuxFlavor@ec2-ip-address-that-changes-every-time-i-build-a-new-instance.us-west-2.compute.amazonaws.com?
Keep in mind: you no longer have to use ssh-agent or ssh-add to use ansible, since we configured the identity file and username in ~/.ssh/config. Forgetting to set up the ssh-agent and add the right key file with ssh-add was error prone at best, and often left me personally confused. Now that confusion is gone. And since we set up ForwardAgent yes, once we log into a remote instance, the keys we set up with ssh-agent and ssh-add get passed along to the remote host. This way those keys do not have to live on the remote host. You can, for example, log into a bastion server and then ssh into a private subnet with the keys you set up with ssh-agent, and none of those private keys have to live on the remote instances (where they could get used by someone else). Mastering ssh, ssh-agent, and ssh key management is essential to being good at DevOps.
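For example, a hypothetical bastion setup in ~/.ssh/config (the bastion hostname and the private subnet range are illustrative assumptions, not from this article's cluster) keeps every private key on your workstation:

```
# Hypothetical bastion host; the hostname and addresses are illustrative.
Host bastion
    Hostname bastion.example.com
    User ansible
    IdentityFile ~/.ssh/test_rsa
    ForwardAgent yes

# Hop through the bastion to reach nodes in a private subnet.
Host 10.0.*.*
    User ansible
    ProxyJump bastion
```

With this in place, ssh 10.0.1.15 transparently tunnels through the bastion, and agent forwarding supplies the key for the second hop.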
Given the above config, you can also log into the Cassandra dev instance with its public domain name as follows:
ssh into box using public address
$ ssh ec2-54-218-113-95.us-west-2.compute.amazonaws.com
The above uses Host *.us-west-2.compute.amazonaws.com, a domain pattern that works for all EC2 instances in the us-west-2 AWS region. Since different regions use different AWS key pairs, you can easily set up a pattern/key-pair pem file for each region.
Attempt: Get ansible to use public DNS names instead of IP addresses (does not work)
Given the above ~/.ssh/config, to get rid of the -u option one might imagine you could configure ec2.py, the script that generates the ansible inventory, to use public domain names instead of public IP addresses. And you can.
Make this change to ec2.ini (/etc/ansible/ec2.ini):
Change vpc_destination_variable = ip_address to vpc_destination_variable = public_dns_name
vpc_destination_variable = public_dns_name
Then ec2.py will use the public domain names from EC2 instead of public IP addresses, for example, ec2-54-218-113-95.us-west-2.compute.amazonaws.com instead of 54.218.113.95.
But then all of the ansible commands stop working. As best I can tell, ansible does not like domain names with dashes. We searched for a workaround and could not find one. If you know the answer, please write us.
We even tried to add this directly to inventory.ini.
Adding ec2 host direct
[aws-nodes]
# 54.186.15.163 ansible_user=ansible
ec2-54-218-113-95.us-west-2.compute.amazonaws.com ansible_user=ansible
Then we tried running the ansible commands against aws-nodes and got the same result. We tried the fix for the EC2 domain name being too long for Ansible, but we never got ec2.py to work with the longer DNS names (we were able to get past parts of it).
This problem is either ansible not handling dashes or the long DNS name problem. The fix seems to be in the comments of the fix for the EC2 domain name being too long for Ansible, but it only worked in the non-dynamic config. For the most part, we tried the fix and it did not work (still getting ERROR! Specified hosts and/or --limit does not match any hosts).
It is okay, though. The only real limitation here is that when you use ansible with ec2.py, you will need to pass the user and continue to use ssh-agent and ssh-add.
This workaround of having to give the username with -u is not too serious. We still wish there were a way to use ansible without passing a username and identity file, just like we have with ssh. And there is, but it involves AWS Route 53 and configuring ~/.ssh/config.
Using ansible without passing the id file or username
Another way to use ansible with our Cassandra cluster is to create DNS names for the Cassandra nodes that we want to manage. The problem with using the public IP address or the AWS generated DNS name is that they change each time we terminate and recreate the instance. We plan on terminating and recreating the instance a lot.
This is where DNS and AWS Route 53 come in. After we create the instance, we can use an internal hosted zone of Route 53 (for VPN) or a public hosted zone and associate the IP address with our new instance. We could do this for all of the Cassandra seed nodes, and for all of the cluster nodes for that matter.
Before we get started, let’s add two more variables to our bin/ec2-env.sh, namely HOSTED_ZONE_ID and NODE0_DNS, as follows:
bin/ec2-env.sh
#!/bin/bash
set -e
export AMI_CASSANDRA=ami-abc1234
export VPC_SECURITY_GROUP=sg-abc1234
export SUBNET_CLUSTER=subnet-abc1234
export KEY_NAME_CASSANDRA=cloudurable-us-west-2
export PEM_FILE="${HOME}/.ssh/${KEY_NAME_CASSANDRA}.pem"
export IAM_PROFILE_CASSANDRA=IAM_PROFILE_CASSANDRA
export EC2_INSTANCE_NAME=cassandra-node
export HOSTED_ZONE_ID="Z1-abc1234"
export NODE0_DNS="node0.cas.dev.cloudurable.com."
Now let’s define a new script that will use the aws command line. We will use aws route53 change-resource-record-sets to associate a DNS name with the IP address as follows:
bin/associate-DNS-with-IP.sh
#!/bin/bash
set -e
source bin/ec2-env.sh
IP_ADDRESS=`bin/get-IP-cassandra.sh`
REQUEST_BATCH="
{
\"Changes\":[
{
\"Action\": \"UPSERT\",
\"ResourceRecordSet\": {
\"Type\": \"A\",
\"Name\": \"$NODE0_DNS\",
\"TTL\": 300,
\"ResourceRecords\": [{
\"Value\": \"$IP_ADDRESS\"
}]
}
}
]
}
"
echo "$REQUEST_BATCH"
changeId=$(aws route53 change-resource-record-sets --hosted-zone-id "$HOSTED_ZONE_ID" --change-batch "$REQUEST_BATCH" \
| jq --raw-output .ChangeInfo.Id)
aws route53 wait resource-record-sets-changed --id "$changeId"
Notice that we are running this change against our Route 53 hosted zone with aws route53 change-resource-record-sets as follows:
Change batch for Route 53 hosted zone
{
"Changes":[
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Type": "A",
"Name": "node0.cas.dev.cloudurable.com.",
"TTL": 300,
"ResourceRecords": [{
"Value": "54.218.113.95"
}]
}
}
]
}
Notice we are using UPSERT, which will update or add the A record to Route 53’s DNS resources to associate the name node0.cas.dev.cloudurable.com. with the IP address 54.218.113.95.
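If you want to check the shell interpolation without touching AWS, you can build the same change batch locally using the article's example values and echo it:

```shell
# Build the change batch locally with this article's example values
# (no AWS call); this mirrors the REQUEST_BATCH in the script above.
NODE0_DNS="node0.cas.dev.cloudurable.com."
IP_ADDRESS="54.218.113.95"
REQUEST_BATCH="
{
  \"Changes\":[
    {
      \"Action\": \"UPSERT\",
      \"ResourceRecordSet\": {
        \"Type\": \"A\",
        \"Name\": \"$NODE0_DNS\",
        \"TTL\": 300,
        \"ResourceRecords\": [{ \"Value\": \"$IP_ADDRESS\" }]
      }
    }
  ]
}
"
echo "$REQUEST_BATCH"
```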
Now that we have a domain name, and it is scripted/automated (we added a call to bin/associate-DNS-with-IP.sh into bin/create-ec2-cassandra.sh), we can configure ~/.ssh/config to use this domain name, which will not change the way the public IP or the instance’s public DNS name changes.
Let’s update the ~/.ssh/config to refer to our new DNS name as follows:
~/.ssh/config - Use new DNS naming
Host *.us-west-2.compute.amazonaws.com
ForwardAgent yes
IdentityFile ~/.ssh/test_rsa
User ansible
Host *.cas.dev.cloudurable.com
ForwardAgent yes
IdentityFile ~/.ssh/test_rsa
User ansible
Host cnode0
Hostname node0.cas.dev.cloudurable.com
ForwardAgent yes
IdentityFile ~/.ssh/test_rsa
User ansible
Notice we added the pattern *.cas.dev.cloudurable.com (where cas stands for Cassandra and dev means this is our development environment). We also added an alias for our Cassandra Database instance called cnode0 that refers to node0.cas.dev.cloudurable.com.
We can ssh into cnode0 or node0.cas.dev.cloudurable.com without passing the username or identity file (private key) each time. This config is like before, but uses a DNS name that does not change when we rebuild our servers. This concept is important; you would not want to modify ~/.ssh/config every time you rebuild a server.
Now let’s change our inventory.ini file in the project directory (~/github/cassandra-image) to use this as follows:
~/github/cassandra-image/inventory.ini
[aws-nodes]
cnode0
node0.cas.dev.cloudurable.com
Notice that we use the short name and the long name.
Note you truly just need one, but we have two just for this article. Never put the same box twice in the same ansible group; all commands and playbooks will run twice.
Now we can run ansible ping against these servers and not pass the username or identity file.
Use the ansible ping module against cnode0 and node0.cas.dev.cloudurable.com. Run against all (see note above).
Running ansible ping against all of the “instances”
$ ansible aws-nodes -u ansible -m ping
cnode0 | SUCCESS => {
"changed": false,
"ping": "pong"
}
node0.cas.dev.cloudurable.com | SUCCESS => {
"changed": false,
"ping": "pong"
}
We can also run it against one of the instances by using just that instance’s name.
Run against cnode0.
ansible cnode0 -u ansible -m ping
$ ansible cnode0 -u ansible -m ping
cnode0 | SUCCESS => {
"changed": false,
"ping": "pong"
}
We can do this for any server.
Run against node0.cas.dev.cloudurable.com.
ansible node0.cas.dev.cloudurable.com -u ansible -m ping
$ ansible node0.cas.dev.cloudurable.com -u ansible -m ping
node0.cas.dev.cloudurable.com | SUCCESS => {
"changed": false,
"ping": "pong"
}
Keep in mind: you no longer have to use ssh-agent or ssh-add to use ansible, since we configured the identity file and username in ~/.ssh/config. We can rebuild our server at will. Each time we do, our creation script will update the IP address in DNS to point at the new server. Then all of our ansible scripts and playbooks will continue to work.
Using Ansible to manage our Cassandra Database cluster
We won’t do much actual cluster management in this article. And, unlike our last article that used Vagrant and ansible, we don’t really have a cluster (or rather, we have a cluster of one).
We can now use Ansible with our Cassandra Database Cluster to automate common DevOps/DBA tasks.
Ansible running nodetool against all nodes
$ ansible aws-nodes -a "/opt/cassandra/bin/nodetool describecluster"
cnode0 | SUCCESS | rc=0 >>
Cluster Information:
Name: Test Cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
86afa796-d883-3932-aa73-6b017cef0d19: [127.0.0.1]
node0.cas.dev.cloudurable.com | SUCCESS | rc=0 >>
Cluster Information:
Name: Test Cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
86afa796-d883-3932-aa73-6b017cef0d19: [127.0.0.1]
Let’s say that we wanted to update a schema or do a rolling restart of our Cassandra nodes, which could be a very common task. Perhaps before the update, we want to decommission the node and back things up. To do this sort of automation, we could create an Ansible playbook.
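A sketch of what such a playbook might look like (the service name, the drain step, and serial value are assumptions for illustration, not the article's actual playbook):

```yaml
# Hypothetical rolling-restart playbook; serial: 1 touches one node at a time.
- hosts: aws-nodes
  become: true
  serial: 1
  tasks:
    - name: Drain the node (stop accepting writes, flush memtables to disk)
      command: /opt/cassandra/bin/nodetool drain
    - name: Restart the Cassandra service
      service:
        name: cassandra
        state: restarted
```

The serial: 1 setting is what makes the restart rolling: Ansible finishes both tasks on one host before moving to the next, so the cluster stays available throughout.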
Let’s run an Ansible playbook from the last article.
Running describe-cluster playbook
$ ansible-playbook playbooks/describe-cluster.yml --verbose
Using /Users/jean/github/cassandra-image/ansible.cfg as config file
PLAY [aws-nodes] ***************************************************************
TASK [Run NodeTool Describe Cluster command] ***********************************
changed: [node0.cas.dev.cloudurable.com] => {"changed": true, "cmd": ["/opt/cassandra/bin/nodetool", "describecluster"],
"delta": "0:00:02.192589", "end": "2017-03-03 08:02:58.537371", "rc": 0, "start": "2017-03-03 08:02:56.344782",
"stderr": "", "stdout": "Cluster Information:\n\tName: Test Cluster\n\tSnitch:
org.apache.cassandra.locator.DynamicEndpointSnitch\n\tPartitioner: org.apache.cassandra.dht.Murmur3Partitioner
\n\tSchema versions:\n\t\t86afa796-d883-3932-aa73-6b017cef0d19: [127.0.0.1]", "stdout_lines": ["Cluster Information:",
"\tName: Test Cluster", "\tSnitch: org.apache.cassandra.locator.DynamicEndpointSnitch",
...
PLAY RECAP *********************************************************************
cnode0 : ok=1 changed=1 unreachable=0 failed=0
node0.cas.dev.cloudurable.com : ok=1 changed=1 unreachable=0 failed=0
Cassandra Tutorial: Cassandra Cluster DevOps/DBA series
The first tutorial in this Cassandra tutorial series focused on setting up a Cassandra cluster with Vagrant (it also appeared on DZone with some additional content as DZone Setting up a Cassandra Cluster with Vagrant). The second article in this series was about setting up SSL for a Cassandra cluster using Vagrant (which also appeared with more content as DZone Setting up a Cassandra Cluster with SSL). The third article in this series was about configuring and using Ansible (building on the first two articles). This article (the 4th) applied the tools and techniques from the first three articles to an image (an EC2 AMI, to be precise) deployed to AWS/EC2, using Packer, Ansible, and the AWS command line tools. The AWS command line tools are essential for doing DevOps with AWS.
Check out more information about the Cassandra Database
- Cassandra Consulting: Architecture Analysis
- Cassandra Consulting: Quick Start
- Cassandra Course
- Amazon Cassandra Support