Monday, October 28, 2013

Vagrant Setup For Three Node Accumulo Cluster

I've put together three GitHub projects (one for each Accumulo version) that start a three-node cluster of virtual machines using VirtualBox. They should be easy to adapt to any cloud provider.

https://github.com/medined/Accumulo_1_6_0_By_Vagrant

https://github.com/medined/Accumulo_1_5_0_By_Vagrant

https://github.com/medined/Accumulo_1_4_4_By_Vagrant

Let me know if you run into any problems or see a way to improve them.

Wednesday, October 23, 2013

How to Set Up Password-less SSH between Vagrant Nodes

UPDATE: Since my original post, I've made these changes:
  * ssh-keygen is executed as the vagrant user.
  * the public keys are copied as the vagrant user.
  * there is no longer any need to suppress the 'Warning' message by running "ls -l" on each node via ssh; ssh-keyscan pre-populates the known hosts.
  * ssh-keygen is run for both dsa and rsa.

I make no claims that the process below is the best technique, but it does seem to work. The steps below are for a three-node cluster.

1. When provisioning, run the following commands as root. They give each node its own private and public keys and copy the public keys to the shared directory.

# Create the .ssh directory for the vagrant user.
#
mkdir -p /home/vagrant/.ssh
chmod 700 /home/vagrant/.ssh
chown -R vagrant:vagrant /home/vagrant/.ssh

# Generate both rsa and dsa key pairs as the vagrant user.
#
su vagrant -c "ssh-keygen -t rsa -P '' -f /home/vagrant/.ssh/id_rsa"
su vagrant -c "ssh-keygen -t dsa -P '' -f /home/vagrant/.ssh/id_dsa"

# Copy this node's public key to the shared directory.
#
mkdir -p /vagrant/files/ssh
cp /home/vagrant/.ssh/id_rsa.pub /vagrant/files/ssh/`hostname`.pub

2. Create a file called /vagrant/files/post_spinup_sudo_setup_ssh.sh with the contents below. Use chmod to make it executable. This script will be run as root after the nodes are started and configured.

# Add nodes to known hosts to avoid the security question.
#
ssh-keyscan -t rsa affy-master affy-slave1 affy-slave2 > /etc/ssh/ssh_known_hosts
ssh-keyscan -t dsa affy-master affy-slave1 affy-slave2 >> /etc/ssh/ssh_known_hosts
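For reference, ssh-keyscan writes one line per host and key type into /etc/ssh/ssh_known_hosts, roughly of this shape (the key material is elided here):

```
affy-master ssh-rsa AAAAB3NzaC1yc2E...
affy-slave1 ssh-rsa AAAAB3NzaC1yc2E...
affy-slave2 ssh-rsa AAAAB3NzaC1yc2E...
```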

3. Create a file called /vagrant/files/post_spinup_setup_ssh.sh with the contents below. Use chmod to make it executable.

sudo /vagrant/files/post_spinup_sudo_setup_ssh.sh

# Copy the public keys to the authorized keys file.
#
cat /vagrant/files/ssh/affy-master.pub >> /home/vagrant/.ssh/authorized_keys
cat /vagrant/files/ssh/affy-slave1.pub >> /home/vagrant/.ssh/authorized_keys
cat /vagrant/files/ssh/affy-slave2.pub >> /home/vagrant/.ssh/authorized_keys

4. After 'vagrant up' completes, run the following:

vagrant ssh master -c /vagrant/files/post_spinup_setup_ssh.sh
vagrant ssh slave1 -c /vagrant/files/post_spinup_setup_ssh.sh
vagrant ssh slave2 -c /vagrant/files/post_spinup_setup_ssh.sh
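Once all three scripts have run, password-less SSH can be spot-checked with a small helper like this sketch. The ssh command is parameterized so the function can be exercised without a live cluster; in real use, just call it with no argument. BatchMode makes ssh fail fast instead of prompting for a password.

```shell
# Try to reach every node without a password; report any that fail.
check_passwordless() {
  sshcmd=${1:-ssh}   # normally plain "ssh"; parameterized for dry runs
  for host in affy-master affy-slave1 affy-slave2; do
    $sshcmd -o BatchMode=yes "$host" hostname || echo "FAILED: $host"
  done
}
```

From any node, running check_passwordless should print the three hostnames without a single password prompt.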

5. Done!

Monday, October 14, 2013

What is the default username and password for keystone in OpenStackInstaller?

After a bit of looking around, I found the answer in the keystone-services.sh file, which specifies the username and password clear as day:

keystone  --os-username=admin  --os-password=openstack  --os-auth-url=http://localhost:35357/v2.0  token-get

Sunday, October 13, 2013

Getting nova-manage to run in the uksysadmin/OpenStackInstaller project.

While working to install OpenStack in a VirtualBox Ubuntu instance, I ran into an issue running the 'nova-manage version' command. The error was of the form:

Failed to parse /etc/nova/nova.conf ... No ':' or '=' found in assignment

The resolution was fairly easy: some of the parameters needed explicit True or False values.

Apparently an older version of nova-manage used both name and value in the configuration, but a newer version wants just the name (or vice versa; it's not clear to me).

I assigned True or False to each parameter that produced an error message, about five of them, choosing the value at whim. When no more errors were reported, the program reported the following version:

2013.1.3
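For example (the parameter name here is illustrative, not necessarily one of the actual five), the failing entries were bare names, and giving each an explicit boolean value satisfied the parser:

```
# /etc/nova/nova.conf
# Rejected with "No ':' or '=' found in assignment":
verbose

# Accepted:
verbose=True
```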