CodeBits - Tested Complex Code!<br />
<br />
<span style="font-size: large;">Docker Swarm 1.12 on PicoCluster</span><br />
2016-07-17<br />
<br />
I followed the directions at https://medium.com/@bossjones/how-i-setup-a-raspberry-pi-3-cluster-using-the-new-docker-swarm-mode-in-29-minutes-aa0e4f3b1768#.ma06iyonf but tweaked them a bit.<br />
<br />
First off, I wanted the cluster to use eth0 to connect to my laptop and share the laptop's WiFi connection. With this technique, my WiFi network name and password are not stored on the cluster, so the cluster should be able to plug into any laptop or server without changes. Follow the instructions at https://t.co/2jRbNAOiCU to share your eth0 connection.<br />
<br />
<br />
<br />
Use lsblk to find any mounted partitions on the SD cards you'll be using, and umount them. See http://affy.blogspot.com/2016/06/how-did-i-prepare-my-picocluster-for.html for a bit of information about lsblk.<br />
<br />
Now flash the SD cards using the flash tool from hypriot. Notice that *no* network information is provided.<br />
<br />
I used a piX naming convention so that I can easily loop over all five RPIs in the PicoCluster.<br />
<br /><span style="font-family: "Courier New",Courier,monospace;">flash --hostname pi1 --device /dev/mmcblk0 https://github.com/hypriot/image-builder-rpi/releases/download/v0.8.1/hypriotos-rpi-v0.8.1.img.zip<br />flash --hostname pi2 --device /dev/mmcblk0 https://github.com/hypriot/image-builder-rpi/releases/download/v0.8.1/hypriotos-rpi-v0.8.1.img.zip<br />flash --hostname pi3 --device /dev/mmcblk0 https://github.com/hypriot/image-builder-rpi/releases/download/v0.8.1/hypriotos-rpi-v0.8.1.img.zip<br />flash --hostname pi4 --device /dev/mmcblk0 https://github.com/hypriot/image-builder-rpi/releases/download/v0.8.1/hypriotos-rpi-v0.8.1.img.zip<br />flash --hostname pi5 --device /dev/mmcblk0 https://github.com/hypriot/image-builder-rpi/releases/download/v0.8.1/hypriotos-rpi-v0.8.1.img.zip</span><br />
<br />
<br />Using this function, you can find the IP addresses for the RPI. <br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">function getip() { (traceroute $1 2>&1 | head -n 1 | cut -d\( -f 2 | cut -d\) -f 1) }</span><br /><br />List the IP addresses.<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">for i in `seq 1 5`; do echo "HOST: pi$i IP: $(getip pi$i.local)"; done</span><br /><br />Remove any fingerprints for the RPI. <br />
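Incidentally, the cut pipeline inside getip can be sanity-checked on its own against a canned traceroute header line (the hostname and IP below are just sample text, not live output):

```shell
# First line of a typical traceroute run (sample text for illustration)
line='traceroute to pi1.local (10.42.0.49), 30 hops max, 60 byte packets'

# Same extraction getip performs: the text between the first '(' and ')'
ip=$(echo "$line" | head -n 1 | cut -d\( -f 2 | cut -d\) -f 1)
echo "$ip"
# prints 10.42.0.49
```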
<br />
<span style="font-family: "Courier New",Courier,monospace;">for i in `seq 1 5`; do ssh-keygen -R pi${i}.local 2>/dev/null; done</span><br /><br />Copy your SSH public key to each RPI.<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">for i in `seq 1 5`; do ssh-copy-id -oStrictHostKeyChecking=no -oCheckHostIP=no pirate@pi${i}.local; done</span><br /><br />Download the deb file for Docker v1.12<br /><br /><span style="font-family: "Courier New",Courier,monospace;">curl -O https://jenkins.hypriot.com/job/armhf-docker/17/artifact/bundles/latest/build-deb/raspbian-jessie/docker-engine_1.12.0%7Erc4-0%7Ejessie_armhf.deb</span><br /><br />Copy the deb file to the RPI<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">for i in `seq 1 5`; do scp -oStrictHostKeyChecking=no -oCheckHostIP=no docker-engine_1.12.0%7Erc4-0%7Ejessie_armhf.deb pirate@pi$i.local:.; done</span><br />Remove older Docker version from the RPI<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">for i in `seq 1 5`; do ssh -oStrictHostKeyChecking=no -oCheckHostIP=no pirate@pi$i.local sudo apt-get purge -y docker-hypriot; done</span><br /><br />Install Docker<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">for i in `seq 1 5`; do ssh -oStrictHostKeyChecking=no -oCheckHostIP=no pirate@pi$i.local sudo dpkg -i docker-engine_1.12.0%7Erc4-0%7Ejessie_armhf.deb; done</span><br /><br />Initialize the Swarm<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">ssh -oStrictHostKeyChecking=no -oCheckHostIP=no pirate@pi1.local docker swarm init</span><br /><br />Join slaves to Swarm - replace the join command below with the specific one displayed by the init command.<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">for i in `seq 2 5`; do <br /> ssh -oStrictHostKeyChecking=no -oCheckHostIP=no pirate@pi$i.local docker swarm join --secret ceuok9jso0klube8m3ih9gcsv --ca-hash sha256:f0864eb57963e3f9cd1756e691d0b609903e3a0bb48785272ea53155809025ee 10.42.0.49:2377;<br />done</span><br />Exercise the Swarm<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">ssh -oStrictHostKeyChecking=no -oCheckHostIP=no pirate@pi1.local<br />docker service create --name ping hypriot/rpi-alpine-scratch ping 8.8.8.8<br />docker service tasks ping<br />docker service update --replicas 10 ping<br />docker service tasks ping<br />docker service rm ping</span><br />
<br /><span style="font-size: large;">How I Got Apache Spark to Sort Of (Not Really) Work on my PicoCluster of 5 Raspberry PI</span><br />
2016-07-05<br />
<br />
I've read several blog posts about people running Apache Spark on a Raspberry PI. It didn't seem too hard, so I thought I'd have a go at it. But the results were disappointing. Bear in mind that I am a Spark novice, so some setting is probably wrong. I ran into two issues: memory and heartbeats.<br />
<br />
So, this is what I did.<br />
<br />
I based my work on these pages:<br />
<br />
* https://darrenjw2.wordpress.com/2015/04/17/installing-apache-spark-on-a-raspberry-pi-2/<br />
* https://darrenjw2.wordpress.com/2015/04/18/setting-up-a-standalone-apache-spark-cluster-of-raspberry-pi-2/<br />
* http://www.openkb.info/2014/11/memory-settings-for-spark-standalone_27.html<br />
<br />
I created five SD cards according to my previous blog post (see http://affy.blogspot.com/2016/06/how-did-i-prepare-my-picocluster-for.html).<br />
<br />
<span style="font-size: large;">Installation of Apache Spark</span><br />
<br />
* Install Oracle Java and Python<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">for i in `seq 1 5`; do (ssh -oStrictHostKeyChecking=no -oCheckHostIP=no pirate@pi0${i}.local sudo apt-get install -y oracle-java8-jdk python2.7 &); done</span><br />
<br />
* download Spark<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">wget http://d3kbcqa49mib13.cloudfront.net/spark-1.6.2-bin-hadoop2.6.tgz</span><br />
<br />
* Copy Spark to all RPI<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">for i in `seq 1 5`; do (scp -q -oStrictHostKeyChecking=no -oCheckHostIP=no spark-1.6.2-bin-hadoop2.6.tgz pirate@pi0${i}.local:. && echo "Copy complete to pi0${i}" &); done</span><br />
<br />
* Uncompress Spark<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">for i in `seq 1 5`; do (ssh -oStrictHostKeyChecking=no -oCheckHostIP=no pirate@pi0${i}.local tar xfz spark-1.6.2-bin-hadoop2.6.tgz && echo "Uncompress complete to pi0${i}" &); done</span><br />
<br />
* Remove tgz file<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">for i in `seq 1 5`; do (ssh -oStrictHostKeyChecking=no -oCheckHostIP=no pirate@pi0${i}.local rm spark-1.6.2-bin-hadoop2.6.tgz); done</span><br />
<br />
* Add the following to your .bashrc file on each RPI. I can't figure out how to put this into a loop.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">export SPARK_LOCAL_IP="$(ip route get 1 | awk '{print $NF;exit}')"</span><br />
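For what it's worth, one way to script this instead of editing each .bashrc by hand is to write the line to a local snippet file and append it over SSH. This is an untested sketch, using the pi0X hostnames and pirate user from this post:

```shell
# Write the export line once, locally. The quoted 'EOF' keeps $( ) and
# awk's $NF from being expanded on the laptop instead of on the RPI.
cat > spark-local-ip.sh <<'EOF'
export SPARK_LOCAL_IP="$(ip route get 1 | awk '{print $NF;exit}')"
EOF

# Append the snippet to ~/.bashrc on each RPI.
for i in `seq 1 5`; do
  ssh -oStrictHostKeyChecking=no -oCheckHostIP=no pirate@pi0${i}.local 'cat >> ~/.bashrc' < spark-local-ip.sh
done
```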
<br />
* Run Standalone Spark Shell<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">ssh -oStrictHostKeyChecking=no -oCheckHostIP=no pirate@pi01.local</span><br />
<span style="font-family: Courier New, Courier, monospace;">cd spark-1.6.2-bin-hadoop2.6</span><br />
<span style="font-family: Courier New, Courier, monospace;">bin/run-example SparkPi 10</span><br />
<span style="font-family: Courier New, Courier, monospace;">bin/spark-shell --master local[4]</span><br />
<span style="font-family: Courier New, Courier, monospace;"># This takes several minutes to display a prompt.</span><br />
<span style="font-family: Courier New, Courier, monospace;"># While the shell is running, visit http://pi01.local:4040/</span><br />
<span style="font-family: Courier New, Courier, monospace;">scala> sc.textFile("README.md").count</span><br />
<span style="font-family: Courier New, Courier, monospace;"># After the job is complete, visit the monitor page.</span><br />
<span style="font-family: Courier New, Courier, monospace;">scala> exit</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
* Run PyShark Shell<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">bin/pyspark --master local[4]</span><br />
<span style="font-family: Courier New, Courier, monospace;">>>> sc.textFile("README.md").count()</span><br />
<span style="font-family: Courier New, Courier, monospace;">>>> exit()</span><br />
<br />
<span style="font-size: large;">CLUSTER</span><br />
<br />
Now for the clustering...<br />
<br />
* Enable password-less SSH between nodes<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">ssh -oStrictHostKeyChecking=no -oCheckHostIP=no pirate@pi01.local</span><br />
<span style="font-family: Courier New, Courier, monospace;">for i in `seq 1 5`; do avahi-resolve --name pi0${i}.local -4 | awk ' { t = $1; $1 = $2; $2 = t; print; } ' | sudo tee --append /etc/hosts; done</span><br />
<span style="font-family: Courier New, Courier, monospace;">echo "$(ip route get 1 | awk '{print $NF;exit}') $(hostname).local" | sudo tee --append /etc/hosts</span><br />
<span style="font-family: Courier New, Courier, monospace;">ssh-keygen</span><br />
<span style="font-family: Courier New, Courier, monospace;">for i in `seq 1 5`; do ssh-copy-id pirate@pi0${i}.local; done</span><br />
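The awk one-liner above swaps the two columns that avahi-resolve prints, so each line lands in /etc/hosts in the usual "IP hostname" order. A quick standalone check with sample input (the hostname and IP are examples from this cluster):

```shell
# avahi-resolve prints "hostname<TAB>address"; /etc/hosts wants the reverse
printf 'pi01.local\t192.168.1.2\n' | awk ' { t = $1; $1 = $2; $2 = t; print; } '
# prints: 192.168.1.2 pi01.local
```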
<br />
* Configure Spark for Cluster<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">cd spark-1.6.2-bin-hadoop2.6/conf</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
<span style="font-family: Courier New, Courier, monospace;">create a slaves file with the following contents</span><br />
<span style="font-family: Courier New, Courier, monospace;">pi01.local</span><br />
<span style="font-family: Courier New, Courier, monospace;">pi02.local</span><br />
<span style="font-family: Courier New, Courier, monospace;">pi03.local</span><br />
<span style="font-family: Courier New, Courier, monospace;">pi04.local</span><br />
<span style="font-family: Courier New, Courier, monospace;">pi05.local</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
<span style="font-family: Courier New, Courier, monospace;">cp spark-env.sh.template spark-env.sh</span><br />
<span style="font-family: Courier New, Courier, monospace;">In spark-env.sh</span><br />
<span style="font-family: Courier New, Courier, monospace;"> Set SPARK_MASTER_IP to the result of "ip route get 1 | awk '{print $NF;exit}'"</span><br />
<span style="font-family: Courier New, Courier, monospace;"> SPARK_WORKER_MEMORY=512m</span><br />
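Put together, spark-env.sh ends up with just two settings. The IP below matches the master monitor page later in this post, but on your cluster it should be whatever the ip route command reports on pi01:

```shell
# conf/spark-env.sh (example values)
SPARK_MASTER_IP=192.168.1.8    # output of: ip route get 1 | awk '{print $NF;exit}'
SPARK_WORKER_MEMORY=512m
```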
<br />
* Copy the spark environment script to the other RPI<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">for i in `seq 2 5`; do scp spark-env.sh pirate@pi0${i}.local:spark-1.6.2-bin-hadoop2.6/conf/; done</span><br />
<br />
* Start the cluster<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">cd ..</span><br />
<span style="font-family: Courier New, Courier, monospace;">sbin/start-all.sh</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
* Visit the monitor page<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">http://192.168.1.8:8080</span><br />
<br />
And everything is working so far! But ...<br />
<br />
* Start a Spark Shell<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">bin/spark-shell --executor-memory 500m --driver-memory 500m --master spark://pi01.local:7077 --conf spark.executor.heartbeatInterval=45s </span><br />
<br />
And this fails...<br />
<br />
<br />
<br /><span style="font-size: large;">How I got Docker Swarm to Run on a Raspberry PI PicoCluster with Consul</span><br />
2016-06-25<br />
<br />
At the end of this article, I have a working Docker Swarm running on a five-node PicoCluster. Please flash your SD cards according to http://affy.blogspot.com/2016/06/how-did-i-prepare-my-picocluster-for.html. Stop following that article after copying the SSH ids to the RPI.<br />
<br />
I am controlling the PicoCluster using my laptop. Therefore, my laptop is the HOST in the steps below.<br />
<br />
There is no guarantee that these commands are correct. They just seem to work for me. And please don't ever, ever depend on this information for anything non-prototype without doing your own research.<br />
<br />
* On the HOST, create the Docker Machine to hold the consul service.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">docker-machine create \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> -d generic \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-storage-driver=overlay \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ip-address=$(getip pi01.local) \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ssh-user "pirate" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> consul-machine</span><br />
<br />
* Connect to the consul-machine Docker Machine<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">eval $(docker-machine env consul-machine)</span><br />
<br />
* Start Consul.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">docker run \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> -d \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> -p 8500:8500 \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> hypriot/rpi-consul \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> agent -dev -client 0.0.0.0</span><br />
<br />
* Reset docker environment to talk with host docker.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">unset DOCKER_TLS_VERIFY DOCKER_HOST DOCKER_CERT_PATH DOCKER_MACHINE_NAME</span><br />
<br />
* Visit the consul dashboard to prove it is working and accessible.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">firefox http://$(getip pi01.local):8500</span><br />
<br />
* Create the swarm-master machine. <b>Note that eth0 is being used instead of eth1.</b><br />
<br />
<span style="font-family: Courier New, Courier, monospace;">docker-machine create \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> -d generic \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-storage-driver=overlay \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-master \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-image hypriot/rpi-swarm:latest \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-discovery="consul://$(docker-machine ip consul-machine):8500" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ip-address=$(getip pi02.local) \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ssh-user "pirate" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-opt="cluster-advertise=eth0:2376" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> swarm-master</span><br />
<br />
* Create the first slave node.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">docker-machine create \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> -d generic \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-storage-driver=overlay \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-image hypriot/rpi-swarm:latest \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-discovery="consul://$(docker-machine ip consul-machine):8500" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ip-address=$(getip pi03.local) \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ssh-user "pirate" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-opt="cluster-advertise=eth0:2376" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> swarm-slave01</span><br />
<br />
* List nodes in the swarm. I don't know why, but this command must be run from one of the RPI. Otherwise, I see a "malformed HTTP response" message.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">eval $(docker-machine env swarm-master)</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
<span style="font-family: Courier New, Courier, monospace;">docker -H $(docker-machine ip swarm-master):3376 run \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --rm \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> hypriot/rpi-swarm:latest \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> list consul://$(docker-machine ip consul-machine):8500</span><br />
<br />
* Create the second slave node.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">docker-machine create \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> -d generic \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-storage-driver=overlay \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-image hypriot/rpi-swarm:latest \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-discovery="consul://$(docker-machine ip consul-machine):8500" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ip-address=$(getip pi04.local) \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ssh-user "pirate" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-opt="cluster-advertise=eth0:2376" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> swarm-slave02</span><br />
<br />
* Create the third slave node.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">docker-machine create \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> -d generic \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-storage-driver=overlay \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-image hypriot/rpi-swarm:latest \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-discovery="consul://$(docker-machine ip consul-machine):8500" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ip-address=$(getip pi05.local) \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ssh-user "pirate" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-opt="cluster-advertise=eth0:2376" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> swarm-slave03</span><br />
<br />
* Check that docker machine sees all of the nodes<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">$ docker-machine ls</span><br />
<span style="font-family: Courier New, Courier, monospace;">NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS</span><br />
<span style="font-family: Courier New, Courier, monospace;">consul-machine - generic Running tcp://192.168.1.8:2376 v1.11.1 </span><br />
<span style="font-family: Courier New, Courier, monospace;">swarm-master - generic Running tcp://192.168.1.7:2376 swarm-master (master) v1.11.1 </span><br />
<span style="font-family: Courier New, Courier, monospace;">swarm-slave01 - generic Running tcp://192.168.1.2:2376 swarm-master v1.11.1 </span><br />
<span style="font-family: Courier New, Courier, monospace;">swarm-slave02 - generic Running tcp://192.168.1.5:2376 swarm-master v1.11.1 </span><br />
<span style="font-family: Courier New, Courier, monospace;">swarm-slave03 - generic Running tcp://192.168.1.4:2376 swarm-master v1.11.1 </span><br />
<br />
* List the swarm nodes in Firefox using Consul.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">firefox http://$(docker-machine ip consul-machine):8500/ui/#/dc1/kv/docker/swarm/nodes/</span><br />
<br />
* Is my cluster working? First, switch to the swarm-master environment. Then view its information. You should see the slaves listed. Next, run the hello-world container. And finally, list the containers.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">eval $(docker-machine env swarm-master)</span><br />
<span style="font-family: Courier New, Courier, monospace;">docker -H $(docker-machine ip swarm-master):3376 info</span><br />
<span style="font-family: Courier New, Courier, monospace;">docker -H $(docker-machine ip swarm-master):3376 run hypriot/armhf-hello-world</span><br />
<span style="font-family: Courier New, Courier, monospace;">docker -H $(docker-machine ip swarm-master):3376 ps -a</span><br />
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<div>
<span style="font-family: Courier New, Courier, monospace;">CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">456fa23b8c52 hypriot/armhf-hello-world "/hello" 8 seconds ago Exited (0) 5 seconds ago swarm-slave01/nauseous_swartz</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">e1eb8a790e3f hypriot/rpi-swarm:latest "/swarm join --advert" 3 hours ago Up 3 hours 2375/tcp swarm-slave03/swarm-agent</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">122b89a2ae5d hypriot/rpi-swarm:latest "/swarm join --advert" 3 hours ago Up 3 hours 2375/tcp swarm-slave02/swarm-agent</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">449aa7087ecc hypriot/rpi-swarm:latest "/swarm join --advert" 3 hours ago Up 3 hours 2375/tcp swarm-slave01/swarm-agent</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">6355f31de952 hypriot/rpi-swarm:latest "/swarm join --advert" 3 hours ago Up 3 hours 2375/tcp swarm-master/swarm-agent</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">05ee666e8662 hypriot/rpi-swarm:latest "/swarm manage --tlsv" 3 hours ago Up 3 hours 2375/tcp, 192.168.1.7:3376->3376/tcp swarm-master/swarm-agent-master</span></div>
</div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
Jump up and down when you see that the hello-world container was run from swarm-master but run on swarm-slave01!</div>
<div>
<br /></div>
<div>
<br /></div>
<span style="font-size: large;">How I attached a USB Thumb drive to my Raspberry PI and used it to hold Docker's Root Directory!</span><br />
2016-06-22<br />
<br />
This post tells how I attached a USB Thumb drive to my Raspberry PI and used it to hold Docker's Root Directory.<br />
<br />
The first step is to connect to the RPI.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ ssh -o 'StrictHostKeyChecking=no' -o 'CheckHostIP=no' 'pirate@pi02.local'</span><br />
<br />
Now create a mount point. This is just a directory, nothing fancy. It should be owned by root because Docker runs as root. Don't try to use "pirate" as the owner. I tried that. It failed. Leave the owner as root.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ sudo mkdir /media/usb</span><br />
<br />
Then look at the attached USB devices.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ sudo blkid</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">/dev/mmcblk0: PTTYPE="dos"</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">/dev/mmcblk0p1: SEC_TYPE="msdos" LABEL="HypriotOS" UUID="D6D9-1D76" TYPE="vfat"</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">/dev/mmcblk0p2: LABEL="root" UUID="81e5bfc7-0701-4a09-80aa-fe5bc3eecbcf" TYPE="ext4"</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">/dev/sda1: LABEL="STORE N GO" UUID="F171-FAE6" TYPE="vfat" PARTUUID="f11d6f2b-01"</span><br />
<br />
Note that the USB thumb drive is /dev/sda1. The information above is for the original formatting of the drive. After formatting the drive to use "ext3" the information looks like:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">/dev/sda1: LABEL="PI02" UUID="801b666c-ea47-4f6f-ab6b-b88acceff08f" TYPE="ext3" PARTUUID="f11d6f2b-01"</span><br />
<br />
This is the command that I used to format the drive to use ext3. Notice that I named the drive the same as the hostname. I have no particular reason to do this. It just seemed right. Only run this formatting command once.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ sudo mkfs.ext3 -L "PI02" /dev/sda1</span><br />
<br />
Now it's time to mount the thumb drive. Here we connect the device (/dev/sda1) to the mount point. After this command is run you'll be able to use /media/usb as a normal directory.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ sudo mount /dev/sda1 /media/usb</span><br />
<br />
Next we set up the thumb drive to be available whenever the RPI is rebooted. First, find the UUID. It's whatever UUID is associated with sda1.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ sudo ls -l /dev/disk/by-uuid</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">total 0</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">lrwxrwxrwx 1 root root 10 Jul 3 2014 801b666c-ea47-4f6f-ab6b-b88acceff08f -> ../../sda1</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">lrwxrwxrwx 1 root root 15 Jul 3 2014 81e5bfc7-0701-4a09-80aa-fe5bc3eecbcf -> ../../mmcblk0p2</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">lrwxrwxrwx 1 root root 15 Jul 3 2014 D6D9-1D76 -> ../../mmcblk0p1</span><br />
<br />
Now add that UUID to the /etc/fstab file so it will be recognized across reboots. If you re-flash your SD card, you'll need to execute this step again.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ echo "UUID=801b666c-ea47-4f6f-ab6b-b88acceff08f /media/usb ext3 defaults,nofail 0 0" | sudo tee -a /etc/fstab</span><br />
<br />
Some images are already on the Hypriot SD card. We'll make sure they are still available after we move the Docker Root directory.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ docker images</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">REPOSITORY TAG IMAGE ID CREATED SIZE</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">hypriot/rpi-swarm 1.2.2 f13b7205f2db 5 weeks ago 13.97 MB</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">hypriot/rpi-consul 0.6.4 879ac05d5353 6 weeks ago 19.71 MB</span><br />
<br />
Stop Docker to ensure that the Docker root directory does not change.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ sudo systemctl stop docker</span><br />
<br />
Copy files to the new location. Don't bother deleting the original files.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ sudo cp --no-preserve=mode --recursive /var/lib/docker /media/usb/docker</span><br />
<br />
If you are paranoid, you can compare the two directory trees.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ sudo diff /var/lib/docker /media/usb/docker</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">Common subdirectories: /var/lib/docker/containers and /media/usb/docker/containers</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">Common subdirectories: /var/lib/docker/image and /media/usb/docker/image</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">Common subdirectories: /var/lib/docker/network and /media/usb/docker/network</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">Common subdirectories: /var/lib/docker/overlay and /media/usb/docker/overlay</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">Common subdirectories: /var/lib/docker/tmp and /media/usb/docker/tmp</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">Common subdirectories: /var/lib/docker/trust and /media/usb/docker/trust</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">Common subdirectories: /var/lib/docker/volumes and /media/usb/docker/volumes</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
Edit the docker service file to add --graph "/media/usb/docker" to the end of the ExecStart line.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ sudo vi /etc/systemd/system/docker.service</span><br />
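For reference, the edited ExecStart line looks something like this. This is a sketch: the daemon invocation before the flag is whatever your unit file already had (the Hypriot 1.11-era default is assumed here); only the --graph flag is the addition.

```ini
ExecStart=/usr/bin/docker daemon -H fd:// --graph "/media/usb/docker"
```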
<br />
Now reload the systemctl daemon and start docker.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ sudo systemctl daemon-reload</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">$ sudo systemctl start docker</span><br />
<br />
Confirm that the ExecStart line is correct - that it has the --graph parameter.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ sudo systemctl show docker | grep ExecStart</span><br />
<br />
Confirm that the Docker Root Directory has changed.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ docker info | grep "Root Dir"</span><br />
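If the flag took effect, the grep should show the USB path instead of the default /var/lib/docker. Sample output under this setup (spacing may differ):

```
 Docker Root Dir: /media/usb/docker
```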
<br />
And finally, confirm that you can see docker images.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">$ docker images</span><br />
<div>
<br /></div>
Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-42828317155551980952016-06-21T21:48:00.004-04:002016-06-21T21:48:56.035-04:00How Did I prepare My PicoCluster For Docker Swarm?How Did I prepare my PicoCluster?<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">DOCKER VERSION: 1.11.1</span><br />
<span style="font-family: Courier New, Courier, monospace;">HYPRIOT VERSION: 0.8</span><br />
<span style="font-family: Courier New, Courier, monospace;">RASPBERRY PI: 3</span><br />
<br />
From my Linux laptop, I created five SD cards using the flash utility from Hypriot.<br />
<br />
As I plugged each SD card into my laptop, I ran 'lsblk'. Then I used 'umount' for anything mounted to the SD card. For example.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">$ lsblk</span><br />
<span style="font-family: Courier New, Courier, monospace;">NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT</span><br />
<span style="font-family: Courier New, Courier, monospace;">sda 8:0 0 111.8G 0 disk </span><br />
<span style="font-family: Courier New, Courier, monospace;">├─sda1 8:1 0 79.9G 0 part /</span><br />
<span style="font-family: Courier New, Courier, monospace;">├─sda2 8:2 0 1K 0 part </span><br />
<span style="font-family: Courier New, Courier, monospace;">└─sda5 8:5 0 31.9G 0 part [SWAP]</span><br />
<span style="font-family: Courier New, Courier, monospace;">sdb 8:16 0 894.3G 0 disk </span><br />
<span style="font-family: Courier New, Courier, monospace;">└─sdb1 8:17 0 894.3G 0 part /data</span><br />
<span style="font-family: Courier New, Courier, monospace;">sr0 11:0 1 1024M 0 rom </span><br />
<span style="font-family: Courier New, Courier, monospace;">mmcblk0 179:0 0 15G 0 disk </span><br />
<span style="font-family: Courier New, Courier, monospace;">├─mmcblk0p1 179:1 0 64M 0 part /media/medined/3ABE-55E4</span><br />
<span style="font-family: Courier New, Courier, monospace;">└─mmcblk0p2 179:2 0 14.9G 0 part /media/medined/root</span><br />
<br />
<span style="font-family: inherit;">Unmount any mount points for mmcblk0 (or your SD card). For example:</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
<span style="font-family: Courier New, Courier, monospace;">umount /media/medined/3ABE-55E4</span><br />
<span style="font-family: Courier New, Courier, monospace;">umount /media/medined/root</span><br />
<br />
If the SD cards were flashed previously, you'll need to run:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">umount /media/medined/HypriotOS</span><br />
<span style="font-family: Courier New, Courier, monospace;">umount /media/medined/root</span><br />
<br />
Here are the five flash commands that I used. Of course, I used my real SSID and PASSWORD. Note that this command leaves your password in your shell history. If this is a concern, please research alternatives.<br />
<br />
As you flash the SD cards, use a gold sharpie to mark each one with its hostname. This makes it much easier to ensure they end up in the right RPI.<br />
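Because the hostnames follow a fixed pi0X pattern, the five flash commands below can also be generated with a loop. This is just a sketch, with NETWORK and PASSWORD still standing in for your real credentials; echo the commands first, then pipe them to a shell once the output looks right.

```shell
# Build the five flash commands once, then review before running.
# NETWORK and PASSWORD are placeholders, not real values.
IMG=https://downloads.hypriot.com/hypriotos-rpi-v0.8.0.img.zip
cmds=$(for i in 1 2 3 4 5; do
  echo "flash --hostname pi0${i} --ssid NETWORK --password PASSWORD --device /dev/mmcblk0 $IMG"
done)
echo "$cmds"
# when satisfied: echo "$cmds" | sh
```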
<br />
<span style="font-family: Courier New, Courier, monospace;">flash --hostname pi01 --ssid NETWORK --password PASSWORD --device /dev/mmcblk0 https://downloads.hypriot.com/hypriotos-rpi-v0.8.0.img.zip</span><br />
<span style="font-family: Courier New, Courier, monospace;">flash --hostname pi02 --ssid NETWORK --password PASSWORD --device /dev/mmcblk0 https://downloads.hypriot.com/hypriotos-rpi-v0.8.0.img.zip</span><br />
<span style="font-family: Courier New, Courier, monospace;">flash --hostname pi03 --ssid NETWORK --password PASSWORD --device /dev/mmcblk0 https://downloads.hypriot.com/hypriotos-rpi-v0.8.0.img.zip</span><br />
<span style="font-family: Courier New, Courier, monospace;">flash --hostname pi04 --ssid NETWORK --password PASSWORD --device /dev/mmcblk0 https://downloads.hypriot.com/hypriotos-rpi-v0.8.0.img.zip</span><br />
<span style="font-family: Courier New, Courier, monospace;">flash --hostname pi05 --ssid NETWORK --password PASSWORD --device /dev/mmcblk0 https://downloads.hypriot.com/hypriotos-rpi-v0.8.0.img.zip</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
After the SD cards were placed into the PicoCluster, I plugged it into power.<br />
<br />
As a sidenote, each time you restart the RPIs, their SSH fingerprint changes. You'll need to remove the old fingerprint. One technique is the following:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">for i in `seq 1 5`; do ssh-keygen -R pi0${i}.local 2>/dev/null; done</span><br />
<br />
I dislike questions about server fingerprints when connecting. Therefore, you'll see me using the "StrictHostKeyChecking=no" option with SSH. I take no stance on the security ramifications of this choice. I'm connecting to my local PicoCluster, not some public server. Make your own security decisions.<br />
<br />
Ensure that you have an SSH key pair. Look for "~/.ssh/id_rsa". If you don't have that file, use ssh-keygen to make one.<br />
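If the file is missing, a key pair can be generated in one step. This sketch uses an empty passphrase for convenience; add one if that doesn't suit your security posture.

```shell
# Create an RSA key pair only if one doesn't already exist.
if [ ! -f ~/.ssh/id_rsa ]; then
  mkdir -p ~/.ssh
  ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
fi
```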
<br />
Now copy your public key to the five RPIs to enable password-less SSH. You'll be asked for the password, which should be "hypriot", five times.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">for i in `seq 1 5`; do ssh-copy-id -oStrictHostKeyChecking=no -oCheckHostIP=no pirate@pi0${i}.local; done</span><br />
<br />
Next you can check that password-less SSH is working. After each SSH, you'll see a prompt like "HypriotOS/armv7: pirate@pi01 in ~". Just check that the hostname is correct and then type exit to move on to the next RPI.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">for i in `seq 1 5`; do ssh -oStrictHostKeyChecking=no -oCheckHostIP=no pirate@pi0${i}.local; done</span><br />
<br />
You can use the following shell function to determine the IP address of an RPI. I also found it handy to log into my router to see the list of attached devices. By the way, if you haven't changed the default password for the admin user of your router, do it. This article will wait...<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">function getip() { (traceroute $1 2>&1 | head -n 1 | cut -d\( -f 2 | cut -d\) -f 1) }</span><br />
<br />
It's probably a good idea to place that function in your .bashrc file so that you'll always have it handy.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">for i in `seq 1 5`; do echo "PI0${i}.local: $(getip pi0${i}.local)"; done</span><br />
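Under the hood, getip just peels the parenthesized address out of the first line that traceroute prints. You can check the parsing by itself with a canned header line:

```shell
# The same cut pipeline as getip, run against a canned header line.
line='traceroute to pi01.local (192.168.1.12), 30 hops max, 60 byte packets'
ip=$(echo "$line" | head -n 1 | cut -d\( -f 2 | cut -d\) -f 1)
echo "$ip"       # prints 192.168.1.12
```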
<br />
Now comes the fun part, setting up the Docker Swarm. Fair warning. I don't know if these steps are correct.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">docker-machine create \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> -d generic \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-storage-driver=overlay \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-master \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-image hypriot/rpi-swarm:latest \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ip-address=$(getip pi01.local) \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ssh-user "pirate" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-discovery="token://01" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> swarm</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
<span style="font-family: Courier New, Courier, monospace;">docker-machine create \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> -d generic \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-storage-driver=overlay \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-image hypriot/rpi-swarm:latest \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ip-address=$(getip pi02.local) \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ssh-user "pirate" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-discovery="token://01" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> swarm-slave01</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
<span style="font-family: Courier New, Courier, monospace;">docker-machine create \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> -d generic \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-storage-driver=overlay \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-image hypriot/rpi-swarm:latest \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ip-address=$(getip pi03.local) \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ssh-user "pirate" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-discovery="token://01" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> swarm-slave02</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
<span style="font-family: Courier New, Courier, monospace;">docker-machine create \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> -d generic \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-storage-driver=overlay \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-image hypriot/rpi-swarm:latest \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ip-address=$(getip pi04.local) \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ssh-user "pirate" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-discovery="token://01" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> swarm-slave03</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
<span style="font-family: Courier New, Courier, monospace;">docker-machine create \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> -d generic \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --engine-storage-driver=overlay \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-image hypriot/rpi-swarm:latest \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ip-address=$(getip pi05.local) \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --generic-ssh-user "pirate" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> --swarm-discovery="token://01" \</span><br />
<span style="font-family: Courier New, Courier, monospace;"> swarm-slave04</span><br />
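Since the four worker commands differ only in the hostname and the machine name, they can also be generated with a loop. This is just a sketch that echoes the commands, under the same assumptions as above; review the output before running it.

```shell
# Emit the docker-machine create command for each worker node.
# getip is the shell function defined earlier in this post.
workers=$(for i in 2 3 4 5; do
  name=$(printf 'swarm-slave%02d' $((i - 1)))
  echo "docker-machine create -d generic --engine-storage-driver=overlay --swarm --swarm-image hypriot/rpi-swarm:latest --generic-ip-address=\$(getip pi0${i}.local) --generic-ssh-user pirate --swarm-discovery=token://01 ${name}"
done)
echo "$workers"
```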
<br />
Now you can list the nodes in the cluster using Docker Machine:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">$ docker-machine ls</span><br />
<span style="font-family: Courier New, Courier, monospace;">NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS</span><br />
<span style="font-family: Courier New, Courier, monospace;">swarm - generic Running tcp://192.168.1.12:2376 swarm (master) v1.11.1 </span><br />
<span style="font-family: Courier New, Courier, monospace;">swarm-slave01 - generic Running tcp://192.168.1.7:2376 swarm v1.11.1 </span><br />
<span style="font-family: Courier New, Courier, monospace;">swarm-slave02 - generic Running tcp://192.168.1.11:2376 swarm v1.11.1 </span><br />
<span style="font-family: Courier New, Courier, monospace;">swarm-slave03 - generic Running tcp://192.168.1.23:2376 swarm v1.11.1 </span><br />
<span style="font-family: Courier New, Courier, monospace;">swarm-slave04 - generic Running tcp://192.168.1.22:2376 swarm v1.11.1 </span><br />
<br />
Notice that a master node is indicated but it is not marked as active. I don't know why.<br />
<br />
Before moving on, let's look at what containers are being run. There should be six.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">for i in `seq 1 5`; do echo "RPI ${i}"; ssh -oStrictHostKeyChecking=no -oCheckHostIP=no pirate@pi0${i}.local docker ps -a; done</span><br />
<span style="font-family: Courier New, Courier, monospace;">RPI 1</span><br />
<span style="font-family: Courier New, Courier, monospace;">CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES</span><br />
<span style="font-family: Courier New, Courier, monospace;">ceb4a5255dc2 hypriot/rpi-swarm:latest "/swarm join --advert" About an hour ago Up About an hour 2375/tcp swarm-agent</span><br />
<span style="font-family: Courier New, Courier, monospace;">e9d3bf308284 hypriot/rpi-swarm:latest "/swarm manage --tlsv" About an hour ago Up About an hour 2375/tcp, 0.0.0.0:3376->3376/tcp swarm-agent-master</span><br />
<span style="font-family: Courier New, Courier, monospace;">RPI 2</span><br />
<span style="font-family: Courier New, Courier, monospace;">CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES</span><br />
<span style="font-family: Courier New, Courier, monospace;">e2dca97c23fe hypriot/rpi-swarm:latest "/swarm join --advert" About an hour ago Up About an hour 2375/tcp swarm-agent</span><br />
<span style="font-family: Courier New, Courier, monospace;">RPI 3</span><br />
<span style="font-family: Courier New, Courier, monospace;">CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES</span><br />
<span style="font-family: Courier New, Courier, monospace;">07d0b4fc4490 hypriot/rpi-swarm:latest "/swarm join --advert" 11 minutes ago Up 11 minutes 2375/tcp swarm-agent</span><br />
<span style="font-family: Courier New, Courier, monospace;">RPI 4</span><br />
<span style="font-family: Courier New, Courier, monospace;">CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES</span><br />
<span style="font-family: Courier New, Courier, monospace;">88712d8df693 hypriot/rpi-swarm:latest "/swarm join --advert" 6 minutes ago Up 6 minutes 2375/tcp swarm-agent</span><br />
<span style="font-family: Courier New, Courier, monospace;">RPI 5</span><br />
<span style="font-family: Courier New, Courier, monospace;">CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES</span><br />
<span style="font-family: Courier New, Courier, monospace;">b7738fb8c4b8 hypriot/rpi-swarm:latest "/swarm join --advert" 2 minutes ago Up 2 minutes 2375/tcp swarm-agent</span><br />
<br />
Currently, when you type "docker ps" you're looking at containers running on your local computer. You can switch so that "docker" connects to one of the "docker machines" using this command:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">eval $(docker-machine env swarm)</span><br />
<br />
Now "docker ps" returns information about containers running on pi01.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">$ docker ps</span><br />
<span style="font-family: Courier New, Courier, monospace;">CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES</span><br />
<span style="font-family: Courier New, Courier, monospace;">ceb4a5255dc2 hypriot/rpi-swarm:latest "/swarm join --advert" About an hour ago Up About an hour 2375/tcp swarm-agent</span><br />
<span style="font-family: Courier New, Courier, monospace;">e9d3bf308284 hypriot/rpi-swarm:latest "/swarm manage --tlsv" About an hour ago Up About an hour 2375/tcp, 0.0.0.0:3376->3376/tcp swarm-agent-master</span><br />
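Incidentally, that eval isn't magic: docker-machine env just prints shell export statements, and eval runs them in the current shell. A canned illustration (the address is made up to match the listing above):

```shell
# docker-machine env prints shell code like this; eval executes it
# in the current shell, redirecting later docker commands.
envout='export DOCKER_HOST="tcp://192.168.1.12:2376"
export DOCKER_TLS_VERIFY="1"'
eval "$envout"
echo "$DOCKER_HOST"
```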
<br />
One neat "trick" is to look at the information from the "swarm-agent-master" container. This is done using Docker's -H option. Notice that the results indicate there are six containers running, the same number found by the "for..loop" earlier.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">$ docker -H $(docker-machine ip swarm):3376 info</span><br />
<span style="font-family: Courier New, Courier, monospace;">Containers: 6</span><br />
<span style="font-family: Courier New, Courier, monospace;"> Running: 6</span><br />
<span style="font-family: Courier New, Courier, monospace;"> Paused: 0</span><br />
<span style="font-family: Courier New, Courier, monospace;"> Stopped: 0</span><br />
<span style="font-family: Courier New, Courier, monospace;">Images: 15</span><br />
<span style="font-family: Courier New, Courier, monospace;">Server Version: swarm/1.2.3</span><br />
<span style="font-family: Courier New, Courier, monospace;">Role: primary</span><br />
<span style="font-family: Courier New, Courier, monospace;">Strategy: spread</span><br />
<span style="font-family: Courier New, Courier, monospace;">Filters: health, port, containerslots, dependency, affinity, constraint</span><br />
<span style="font-family: Courier New, Courier, monospace;">Nodes: 5</span><br />
<span style="font-family: Courier New, Courier, monospace;"> swarm: 192.168.1.12:2376</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ ID: P4OH:AB7Q:T2T3:P6OK:BW5F:YSIB:NACW:Q2F3:FKU4:IJFD:AUJQ:74CZ</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Status: Healthy</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Containers: 2</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Reserved CPUs: 0 / 4</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Reserved Memory: 0 B / 971.7 MiB</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Labels: executiondriver=, kernelversion=4.4.10-hypriotos-v7+, operatingsystem=Raspbian GNU/Linux 8 (jessie), provider=generic, storagedriver=overlay</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ UpdatedAt: 2016-06-22T01:39:56Z</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ ServerVersion: 1.11.1</span><br />
<span style="font-family: Courier New, Courier, monospace;"> swarm-slave01: 192.168.1.7:2376</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ ID: GDQI:WYHS:OD2W:EE67:CKMU:A2PW:6K5T:YZSK:B5KL:SPCZ:6GVX:5MCO</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Status: Healthy</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Containers: 1</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Reserved CPUs: 0 / 4</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Reserved Memory: 0 B / 971.7 MiB</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Labels: executiondriver=, kernelversion=4.4.10-hypriotos-v7+, operatingsystem=Raspbian GNU/Linux 8 (jessie), provider=generic, storagedriver=overlay</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ UpdatedAt: 2016-06-22T01:39:45Z</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ ServerVersion: 1.11.1</span><br />
<span style="font-family: Courier New, Courier, monospace;"> swarm-slave02: 192.168.1.11:2376</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ ID: CA7H:C7UA:5V5N:NY4C:KECT:JK57:HDGN:2DNH:ASXQ:UJFQ:A5A4:US3Y</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Status: Healthy</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Containers: 1</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Reserved CPUs: 0 / 4</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Reserved Memory: 0 B / 971.7 MiB</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Labels: executiondriver=, kernelversion=4.4.10-hypriotos-v7+, operatingsystem=Raspbian GNU/Linux 8 (jessie), provider=generic, storagedriver=overlay</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ UpdatedAt: 2016-06-22T01:39:32Z</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ ServerVersion: 1.11.1</span><br />
<span style="font-family: Courier New, Courier, monospace;"> swarm-slave03: 192.168.1.23:2376</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ ID: 6H6D:P6EN:PTBL:Q5E3:MP32:T6CI:XU33:PCQV:KT6H:KRJ4:LYSN:76EJ</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Status: Healthy</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Containers: 1</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Reserved CPUs: 0 / 4</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Reserved Memory: 0 B / 971.7 MiB</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Labels: executiondriver=, kernelversion=4.4.10-hypriotos-v7+, operatingsystem=Raspbian GNU/Linux 8 (jessie), provider=generic, storagedriver=overlay</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ UpdatedAt: 2016-06-22T01:39:25Z</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ ServerVersion: 1.11.1</span><br />
<span style="font-family: Courier New, Courier, monospace;"> swarm-slave04: 192.168.1.22:2376</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ ID: 2ZBK:3DJE:D23C:7QAB:TLFS:L7EO:L4L4:IQ6Y:EC7D:UG7S:3WU6:QJ5D</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Status: Healthy</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Containers: 1</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Reserved CPUs: 0 / 4</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Reserved Memory: 0 B / 971.7 MiB</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ Labels: executiondriver=, kernelversion=4.4.10-hypriotos-v7+, operatingsystem=Raspbian GNU/Linux 8 (jessie), provider=generic, storagedriver=overlay</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ UpdatedAt: 2016-06-22T01:39:32Z</span><br />
<span style="font-family: Courier New, Courier, monospace;"> └ ServerVersion: 1.11.1</span><br />
<span style="font-family: Courier New, Courier, monospace;">Plugins: </span><br />
<span style="font-family: Courier New, Courier, monospace;"> Volume: </span><br />
<span style="font-family: Courier New, Courier, monospace;"> Network: </span><br />
<span style="font-family: Courier New, Courier, monospace;">Kernel Version: 4.4.10-hypriotos-v7+</span><br />
<span style="font-family: Courier New, Courier, monospace;">Operating System: linux</span><br />
<span style="font-family: Courier New, Courier, monospace;">Architecture: arm</span><br />
<span style="font-family: Courier New, Courier, monospace;">CPUs: 20</span><br />
<span style="font-family: Courier New, Courier, monospace;">Total Memory: 4.745 GiB</span><br />
<span style="font-family: Courier New, Courier, monospace;">Name: e9d3bf308284</span><br />
<span style="font-family: Courier New, Courier, monospace;">Docker Root Dir: </span><br />
<span style="font-family: Courier New, Courier, monospace;">Debug mode (client): false</span><br />
<span style="font-family: Courier New, Courier, monospace;">Debug mode (server): false</span><br />
<span style="font-family: Courier New, Courier, monospace;">WARNING: No kernel memory limit support</span><br />
<br />
And that's as far as I've gotten.Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-83390266733216358132015-08-24T21:20:00.002-04:002015-08-24T21:20:42.303-04:00Go Program to Read Docker Image List From Unix Socket (/var/run/docker.sock)It took me a bit of time to get this simple program working so I'm sharing for other people new to Go.<br />
<br />
<pre>
package main

import (
    "fmt"
    "io"
    "net"
)

// reader copies everything read from the connection to stdout.
func reader(r io.Reader) {
    buf := make([]byte, 1024)
    for {
        n, err := r.Read(buf[:])
        if err != nil {
            return
        }
        println(string(buf[0:n]))
    }
}

func main() {
    // Connect to the Docker daemon's Unix socket.
    c, err := net.Dial("unix", "/var/run/docker.sock")
    if err != nil {
        panic(err)
    }
    defer c.Close()

    // Issue a raw HTTP request for the image list.
    fmt.Fprintf(c, "GET /images/json HTTP/1.0\r\n\r\n")

    reader(c)
}
</pre>
<br />
Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-66178689921460151972015-04-24T23:01:00.001-04:002015-04-24T23:01:09.022-04:00Running the NodeJS Example Inside Docker ContainerYesterday, I showed how to run NodeJS inside a Docker container. Today, I updated my Github project (https://github.com/medined/docker-nodejs) so that the Example server works correctly.<br />
<br />
The trick is for the NodeJS code inside the container to find the container's IP address and listen on that address instead of localhost or 127.0.0.1. This is not difficult.<br />
<br />
<pre>
// Look up the container's own IP address, then listen on it
// instead of localhost so the port mapping works from the host.
require('dns').lookup(require('os').hostname(), function (err, add, fam) {
  var http = require('http');
  http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
  }).listen(1337, add);
  console.log('Server running at http://' + add + ':1337/');
})
</pre>
<br />
If you're using my Docker image, then you'd just run the following to start the server. Use ^C to stop the server.<br />
<br />
<pre>node example.js</pre>
<br />
<br />
Now you can browse from the host computer using the following URL. Note that the 'docker run' command exposes port 1337.<br />
<br />
<pre>http://localhost:1337/</pre>
<br />
<br />Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-85707851727631237812015-04-23T22:58:00.001-04:002015-04-23T22:58:19.960-04:00Running NodeJS (and related tools) from a Docker container.In my continuing quest to run my development tools from within Docker containers, I looked at Node today.<br />
<br />
The Github project is at https://github.com/medined/docker-nodejs.<br />
<br />
My Dockerfile is fairly simple:<br />
<br />
<pre>
FROM ubuntu:14.04

RUN apt-get -qq update \
 && apt-get install -y curl \
 && curl -sL https://deb.nodesource.com/setup | sudo bash - \
 && apt-get install -y nodejs \
 && npm install -g inherits bower grunt grunt-cli

RUN useradd -ms /bin/bash developer

USER developer
WORKDIR /home/developer
</pre>
<br />
<br />
It's built using:<br />
<br />
<pre>docker build -t medined/nodejs .</pre>
<br />
<br /><br />
Using the 'developer' user is important because bower can't be used by root. By itself, this container does not look impressive. Some magic is added by the following shell script called 'node':<br />
<br />
<pre>
#!/bin/bash

CMD=$(basename $0)

docker run \
  -it \
  --rm \
  -p 1337:1337 \
  -v "$PWD":/home/developer/source \
  -w /home/developer/source \
  medined/nodejs \
  $CMD $@
</pre>
<br />
I expose port 1337 because that's the port used in the NodeJS home page example. The current directory is exposed in the container at a convenient location. That location is used as the working directory.<br />
<br />
You might be puzzled at the use of $CMD. I symlink this script to bower, grunt, and npm. The $CMD invokes the proper command inside the container.<br />
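To make the symlink mechanism concrete, here is a toy stand-in for the wrapper script (the echo replaces the real docker run so you can see which command would be dispatched):

```shell
# Toy version of the wrapper: echo instead of docker run, so the
# dispatch via basename $0 is visible.
mkdir -p /tmp/bindemo && cd /tmp/bindemo
printf '#!/bin/bash\necho "would run: $(basename $0) $@"\n' > node
chmod +x node
for tool in bower grunt npm; do ln -sf node "$tool"; done
./npm install     # prints: would run: npm install
```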
<br />
<br />Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-55353027953316163332015-04-20T21:26:00.001-04:002015-04-20T21:26:14.021-04:00Running Spring Boot inside DockerThis is another in my series of very short entries about Docker. I've been working to not install maven on my development laptop. But I still want to use spring-boot:run to launch my applications. Here is the Docker command I am using. Notice the server.port is specified on the command line so that I can change it as needed.<br />
<br />
<pre>
docker run \
  -it \
  --rm \
  -p 8090:8090 \
  -e server.port=8090 \
  --link artifactory:artifactory \
  --link mysql:mysql \
  -v "$PWD/m2":/root/.m2 \
  -v "$PWD":/usr/src/mymaven \
  -w /usr/src/mymaven \
  maven:3.3-jdk-8 \
  mvn spring-boot:run
</pre>
<br />
The MySQL container was started like this:<br />
<br />
<pre>
docker run \
  --name mysql \
  -p 3306:3306 \
  -v /data/mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=password \
  -e MYSQL_DATABASE=docker \
  -e MYSQL_USER=docker \
  -e MYSQL_PASSWORD=password \
  -d \
  mysql/mysql-server:5.5
</pre>
<br />Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-52479544072662766672015-04-20T16:23:00.001-04:002015-04-20T16:23:49.924-04:00Running Maven inside Docker. I recently reinstalled Ubuntu on my zareason laptop. As I was thinking about installing my development tools, I thought about how to integrate Docker into the process. Below I show how simple using the Maven container can be:<br />
<br />
* Create an alias to the Maven container.<br />
<br />
<pre>
alias mvn="docker run \
  -it \
  --rm \
  --name my-maven-project \
  -v "$PWD":/usr/src/mymaven \
  -w /usr/src/mymaven \
  maven:3.3-jdk-8 \
  mvn"
</pre>
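One caveat with this alias: because the body is double-quoted, "$PWD" is expanded once, when the alias is defined, not each time you run mvn. If you later cd into a different project, the mount still points at the old directory. A quick demonstration of the quoting behavior:

```shell
# Double quotes expand $PWD at assignment time, freezing the value.
cd /tmp
frozen="$PWD"    # captured now, while in /tmp
cd /
echo "$frozen"   # prints /tmp even though we've moved
```

Redefining the alias in each project directory, or single-quoting the body, avoids the surprise.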
<br />
* Clone my ragnvald Java project.<br />
<br />
git clone git@github.com:medined/ragnvald.git<br />
<br />
* cd ragnvald<br />
<br />
* Package the project.<br />
<br />
mvn package<br />
That's it. You're using Maven without installing it on your laptop! The results of the compilation are placed into the target directory.<br />
<br />
If you need to specify a Maven settings.xml file that's fairly easy as well. Simply create it alongside the pom.xml file. Then slightly modify your alias:<br />
<br />
<pre>
alias mvn="docker run \
  -it \
  --rm \
  --name my-maven-project \
  -v "$PWD":/root/.m2 \
  -v "$PWD":/usr/src/mymaven \
  -w /usr/src/mymaven \
  maven:3.3-jdk-8 \
  mvn"
</pre>
<br />
The ragnvald project goes one step further and uses an Artifactory container so that I can use the Artifactory web interface if needed. That's quite convenient!<br />
<br />Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-62197630853236567292015-04-15T23:53:00.000-04:002015-04-15T23:53:07.526-04:00Running MySQL on DockerThis entry doesn't reveal any hidden secrets just the simple steps to start using MySQL on Docker.<br />
<br />
* Install docker<br />
<br />
* Install docker-compose<br />
<br />
* mkdir firstdb<br />
<br />
* cd firstdb<br />
<br />
* vi docker-compose.yml<br />
<br />
<span style="font-family: 'Courier New', Courier, monospace;">mysql:</span><br />
<span style="font-family: Courier New, Courier, monospace;">&nbsp;&nbsp;image: mysql:latest</span><br />
<span style="font-family: Courier New, Courier, monospace;">&nbsp;&nbsp;environment:</span><br />
<span style="font-family: Courier New, Courier, monospace;">&nbsp;&nbsp;&nbsp;&nbsp;MYSQL_DATABASE: sample</span><br />
<span style="font-family: Courier New, Courier, monospace;">&nbsp;&nbsp;&nbsp;&nbsp;MYSQL_USER: mysql</span><br />
<span style="font-family: Courier New, Courier, monospace;">&nbsp;&nbsp;&nbsp;&nbsp;MYSQL_PASSWORD: mysql</span><br />
<span style="font-family: Courier New, Courier, monospace;">&nbsp;&nbsp;&nbsp;&nbsp;MYSQL_ROOT_PASSWORD: supersecret</span><br />
<br />
* docker-compose up<br />
* docker-compose ps<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">Name Command State Ports</span><br />
<span style="font-family: Courier New, Courier, monospace;">-----------------------------------------------------------------</span><br />
<span style="font-family: Courier New, Courier, monospace;">firstdb_mysql_1 /entrypoint.sh mysqld Up 3306/tcp</span><br />
<br />
* Use a one-shot Docker instance to display environment variables. Notice <br />
the variables that start with MYSQL? Your programs can use these variables<br />
to make the database connection.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;"><b>docker run --link=firstdb_mysql_1:mysql ubuntu env</b></span><br />
<span style="font-family: Courier New, Courier, monospace;">PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin</span><br />
<span style="font-family: Courier New, Courier, monospace;">HOSTNAME=abfc8d50633b</span><br />
<span style="font-family: Courier New, Courier, monospace;">MYSQL_PORT=tcp://172.17.0.23:3306</span><br />
<span style="font-family: Courier New, Courier, monospace;">MYSQL_PORT_3306_TCP=tcp://172.17.0.23:3306</span><br />
<span style="font-family: Courier New, Courier, monospace;">MYSQL_PORT_3306_TCP_ADDR=172.17.0.23</span><br />
<span style="font-family: Courier New, Courier, monospace;">MYSQL_PORT_3306_TCP_PORT=3306</span><br />
<span style="font-family: Courier New, Courier, monospace;">MYSQL_PORT_3306_TCP_PROTO=tcp</span><br />
<span style="font-family: Courier New, Courier, monospace;">MYSQL_NAME=/nostalgic_rosalind/mysqldb</span><br />
<span style="font-family: Courier New, Courier, monospace;">MYSQL_ENV_MYSQL_PASSWORD=mysql</span><br />
<span style="font-family: Courier New, Courier, monospace;">MYSQL_ENV_MYSQL_ROOT_PASSWORD=supersecret</span><br />
<span style="font-family: Courier New, Courier, monospace;">MYSQL_ENV_MYSQL_USER=mysql</span><br />
<span style="font-family: Courier New, Courier, monospace;">MYSQL_ENV_MYSQL_DATABASE=sample</span><br />
<span style="font-family: Courier New, Courier, monospace;">MYSQL_ENV_MYSQL_MAJOR=5.6</span><br />
<span style="font-family: Courier New, Courier, monospace;">MYSQL_ENV_MYSQL_VERSION=5.6.24</span><br />
<span style="font-family: Courier New, Courier, monospace;">HOME=/root</span><br />
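Those MYSQL_PORT_* values are plain strings, so a shell script can take them apart with nothing more than parameter expansion. A small sketch, using the sample value from the dump above:

```shell
# Split a Docker link variable like tcp://172.17.0.23:3306 into host and port.
MYSQL_PORT_3306_TCP="tcp://172.17.0.23:3306"  # sample value from the env dump
hostport=${MYSQL_PORT_3306_TCP#tcp://}        # strip the scheme
host=${hostport%:*}                           # everything before the last colon
port=${hostport##*:}                          # everything after the last colon
echo "$host $port"                            # prints: 172.17.0.23 3306
```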
<br />
* Use a one-shot Docker instance for a MySQL command-line interface. Once this<br />
is running, you'll be able to use commands like 'show databases'.<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">docker run -it \</span><br />
<span style="font-family: Courier New, Courier, monospace;">--link=firstdb_mysql_1:mysql \</span><br />
<span style="font-family: Courier New, Courier, monospace;">--rm \</span><br />
<span style="font-family: Courier New, Courier, monospace;">mysql/mysql-server:latest \</span><br />
<span style="font-family: Courier New, Courier, monospace;">sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'</span><br />
<br />
That's all it takes to start.Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-68084077284836991042014-11-23T20:01:00.004-05:002014-11-23T20:01:41.790-05:00Using AZUL 7 instead of OpenJDK Java for smaller Docker images.Witness a tale of two Dockerfiles that perform the same task. See the size difference. Imagine how it might change infrastructure costs.<br />
<br />
<h4>
DOCKERFILE ONE</h4>
<br />
<pre>FROM debian:wheezy
RUN apt-get update && apt-get install -y openjdk-7-jre && rm -rf /var/lib/apt/lists/*
ADD target/si-standalone-sample-1.0-SNAPSHOT.jar /
ENV JAVA_HOME /usr/lib/jvm/java-7-openjdk-amd64
ENV CLASSPATH si-standalone-sample-1.0-SNAPSHOT.jar
CMD [ "java", "org.springframework.boot.loader.JarLauncher" ]
</pre>
<br />
<br />
<h4>
DOCKERFILE TWO</h4>
<br />
<pre>FROM debian:wheezy
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 0x219BD9C9 && \
echo "deb http://repos.azulsystems.com/ubuntu precise main" >> /etc/apt/sources.list.d/zulu.list && \
apt-get -qq update && \
apt-get -qqy install zulu-7 && \
rm -rf /var/lib/apt/lists/*
ADD target/si-standalone-sample-1.0-SNAPSHOT.jar /
ENV JAVA_HOME /usr/lib/jvm/zulu-7-amd64
ENV CLASSPATH si-standalone-sample-1.0-SNAPSHOT.jar
CMD [ "java", "org.springframework.boot.loader.JarLauncher" ]
</pre>
<br />
Notice the only difference is which Java is being installed. Here are the image sizes:<br />
<br />
<pre>spring-integration openjdk 549.1 MB
spring-integration azul 261.3 MB
</pre>
<br />
That's a 288MB difference.<br />
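A quick sanity check on that number (awk handles the decimal subtraction):

```shell
# 549.1 MB (openjdk) minus 261.3 MB (azul), rounded to the nearest MB.
awk 'BEGIN { printf "%.0f MB\n", 549.1 - 261.3 }'   # prints: 288 MB
```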
<br />Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-90444077098805524282014-11-23T17:28:00.000-05:002015-01-05T22:18:06.154-05:00Is using "lsb_release -cs" a good idea inside a debian:wheezy Dockerfile?Update from Jan 2015: The Zulu team added formal Debian support last October; I just did not know about it. Look at the version history for Zulu 8.4, 7.7, and 6.6 at http://www.azulsystems.com/zulurelnotes. Also look on DockerHub for their 8.4.x Docker files. They don't use lsb_release -cs in Debian Dockerfiles anymore, and instead allow the Zulu repository to honor 'stable' as the release name. 'stable' always pushes the highest level for a Java major version. (I am paraphrasing the comments from Matthew Schuetze below.)<br />
<br />
I saw the following line in a Dockerfile<br />
<br />
<pre>RUN echo "deb http://repos.azulsystems.com/ubuntu `lsb_release -cs` main" >> /etc/apt/sources.list.d/zulu.list
</pre><br />
The lsb_release program is not part of the standard wheezy image, but we can install it:<br />
<br />
<pre>$ apt-get update && apt-get install -y lsb
</pre><br />
How many files were created by that install?<br />
<br />
<pre>$ docker diff 09 | wc -l
30013
</pre><br />
Over 30,000 files!<br />
<br />
I next tried being a bit more specific with<br />
<br />
<pre>$ apt-get update && apt-get install -y lsb-release
</pre><br />
How many files were created by that install?<br />
<br />
<pre>$ docker diff 23 | wc -l
1689
</pre><br />
I conclude that hard-coding "wheezy" is better than using lsb_release in a Dockerfile, at least when using Debian as the base operating system.<br />
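Hard-coding the codename means the sources.list line is just a literal string; no package install is needed at all. A minimal sketch:

```shell
# Build the Zulu repo line with the codename hard-coded instead of
# shelling out to `lsb_release -cs`.
CODENAME=wheezy
REPO_LINE="deb http://repos.azulsystems.com/ubuntu ${CODENAME} main"
echo "$REPO_LINE"   # prints: deb http://repos.azulsystems.com/ubuntu wheezy main
```

In a Dockerfile that line would simply be echoed into /etc/apt/sources.list.d/zulu.list, as in the original RUN command.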
Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-71264370643842799292014-11-22T23:26:00.001-05:002014-11-22T23:30:31.850-05:00Using Docker to find out what apt-get update does!While I dabble in System Administration, I don't have deep knowledge of how packages are created or maintained. Today, I'll use Docker to increase my understanding of "apt-get update". I was curious about this command because I read that it's good practice to remove the files created during the update process. <br />
<br />
I started a small container using<br />
<br />
<pre>docker run -i -t debian:wheezy /bin/bash
</pre><br />
In another window, I found the ID of the running container using "docker ps". Let's pretend that ID starts with "45...". Look for any changed files using<br />
<br />
<pre>docker diff "45"
</pre><br />
You'll see nothing displayed. Now run "apt-get update" in the wheezy container. Then run the diff command again. You should see the following differences:<br />
<br />
<pre>C /var
C /var/lib
C /var/lib/apt
C /var/lib/apt/lists
A /var/lib/apt/lists/http.debian.net_debian_dists_wheezy-updates_Release
A /var/lib/apt/lists/http.debian.net_debian_dists_wheezy-updates_Release.gpg
A /var/lib/apt/lists/http.debian.net_debian_dists_wheezy-updates_main_binary-amd64_Packages.gz
A /var/lib/apt/lists/http.debian.net_debian_dists_wheezy_Release
A /var/lib/apt/lists/http.debian.net_debian_dists_wheezy_Release.gpg
A /var/lib/apt/lists/http.debian.net_debian_dists_wheezy_main_binary-amd64_Packages.gz
A /var/lib/apt/lists/lock
C /var/lib/apt/lists/partial
A /var/lib/apt/lists/security.debian.org_dists_wheezy_updates_Release
A /var/lib/apt/lists/security.debian.org_dists_wheezy_updates_Release.gpg
A /var/lib/apt/lists/security.debian.org_dists_wheezy_updates_main_binary-amd64_Packages.gz
</pre><br />
Inside the wheezy container we now know where to look to find file sizes:<br />
<br />
<pre># ls -lh /var/lib/apt/lists
total 8.0M
-rw-r--r-- 1 root root 121K Nov 23 02:49 http.debian.net_debian_dists_wheezy-updates_Release
-rw-r--r-- 1 root root 836 Nov 23 02:49 http.debian.net_debian_dists_wheezy-updates_Release.gpg
-rw-r--r-- 1 root root 0 Nov 23 02:37 http.debian.net_debian_dists_wheezy-updates_main_binary-amd64_Packages
-rw-r--r-- 1 root root 165K Oct 18 10:33 http.debian.net_debian_dists_wheezy_Release
-rw-r--r-- 1 root root 1.7K Oct 18 10:44 http.debian.net_debian_dists_wheezy_Release.gpg
-rw-r--r-- 1 root root 7.3M Oct 18 10:07 http.debian.net_debian_dists_wheezy_main_binary-amd64_Packages.gz
-rw-r----- 1 root root 0 Nov 23 04:09 lock
drwxr-xr-x 2 root root 4.0K Nov 23 04:09 partial
-rw-r--r-- 1 root root 100K Nov 20 16:31 security.debian.org_dists_wheezy_updates_Release
-rw-r--r-- 1 root root 836 Nov 20 16:31 security.debian.org_dists_wheezy_updates_Release.gpg
-rw-r--r-- 1 root root 270K Nov 20 16:31 security.debian.org_dists_wheezy_updates_main_binary-amd64_Packages.gz
</pre><br />
Obviously, those .gz files might be interesting. It's easy enough to uncompress them:<br />
<br />
<pre>gzip -d http.debian.net_debian_dists_wheezy_main_binary-amd64_Packages.gz
</pre><br />
And now it's possible to see what's inside:<br />
<br />
<pre># more http.debian.net_debian_dists_wheezy_main_binary-amd64_Packages
Package: 0ad
Version: 0~r11863-2
Installed-Size: 8260
Maintainer: Debian Games Team <pkg-games-devel lists.alioth.debian.org="">
Architecture: amd64
Depends: 0ad-data (>= 0~r11863), 0ad-data (<= 0~r11863-2), gamin | fam, libboost-signals1.49.0 (>= 1.49.0-1), libc6 (>= 2.11), libcurl3-gnutls (>= 7.16.2), libenet1a, libgamin0 | libfam0, libgcc1 (>= 1:4.1.1), libgl1-mesa-glx | libgl1, lib
jpeg8 (>= 8c), libmozjs185-1.0 (>= 1.8.5-1.0.0+dfsg), libnvtt2, libopenal1, libpng12-0 (>= 1.2.13-4), libsdl1.2debian (>= 1.2.11), libstdc++6 (>= 4.6), libvorbisfile3 (>= 1.1.2), libwxbase2.8-0 (>= 2.8.12.1), libwxgtk2.8-0 (>= 2.8.12.1), l
ibx11-6, libxcursor1 (>> 1.1.2), libxml2 (>= 2.7.4), zlib1g (>= 1:1.2.0)
Pre-Depends: dpkg (>= 1.15.6~)
Description: Real-time strategy game of ancient warfare
Homepage: http://www.wildfiregames.com/0ad/
Description-md5: d943033bedada21853d2ae54a2578a7b
Tag: game::strategy, implemented-in::c++, interface::x11, role::program,
uitoolkit::sdl, uitoolkit::wxwidgets, use::gameplaying,
x11::application
Section: games
Priority: optional
Filename: pool/main/0/0ad/0ad_0~r11863-2_amd64.deb
Size: 2260694
MD5sum: cf71a0098c502ec1933dea41610a79eb
SHA1: aa4a1fdc36498f230b9e38ae0116b23be4f6249e
SHA256: e28066103ecc6996e7a0285646cd2eff59288077d7cc0d22ca3489d28d215c0a
...
</pkg-games-devel></pre><br />
With the file uncompressed as plain text, we can count how many packages are available:<br />
<br />
<pre># grep "Package" http.debian.net_debian_dists_wheezy_main_binary-amd64_Packages | wc -l
36237
</pre><br />
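The same counting technique, demonstrated on a tiny hand-made Packages file. Anchoring the pattern to the start of the line (^Package:) is slightly safer than a bare grep, since "Package" can also appear inside description text:

```shell
# Each stanza in a Packages file starts with a "Package:" line; count them.
printf 'Package: foo\nVersion: 1.0\n\nPackage: bar\nVersion: 2.0\n' > /tmp/sample_Packages
grep -c '^Package:' /tmp/sample_Packages   # prints: 2
```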
Now you know why it's important to run the following in your Dockerfile after using apt-get to install software.<br />
<br />
<pre>rm -rf /var/lib/apt/lists/*
</pre><br />
Have fun exploring!<br />
<br />
Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-39849950262822260102014-11-12T23:13:00.001-05:002014-11-12T23:14:42.369-05:00Using Docker to Build BrooklynBrooklyn is a large project with a lot of dependencies. I wanted to compile it, but I also wanted to remove all traces of the project when I was done experimenting. I used Docker to accomplish this goal.<br />
<br />
See the files below at https://github.com/medined/docker-brooklyn.<br />
<br />
First, I created a Dockerfile to install Java and Maven and to clone the repository.<br />
<br />
<pre>$ cat Dockerfile
FROM ubuntu:14.04
MAINTAINER David Medinets <david.medinets@gmail.com>
#
# Install Java
#
RUN apt-get update && \
apt-get install -y software-properties-common && \
add-apt-repository -y ppa:webupd8team/java && \
echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections && \
echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections && \
apt-get update && \
apt-get install -y oracle-java8-installer
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
#
# Install Maven
#
RUN echo "deb http://ppa.launchpad.net/natecarlson/maven3/ubuntu precise main" >> /etc/apt/sources.list && \
echo "deb-src http://ppa.launchpad.net/natecarlson/maven3/ubuntu precise main" >> /etc/apt/sources.list && \
apt-get update && \
apt-get -y --force-yes install maven3 && \
rm -f /usr/bin/mvn && \
ln -s /usr/share/maven3/bin/mvn /usr/bin/mvn
RUN mkdir -p /root/.m2
ADD settings.xml /root/.m2/settings.xml
#
# Clone the brooklyn project
#
RUN apt-get install -y git
RUN git clone https://github.com/apache/incubator-brooklyn.git
WORKDIR /incubator-brooklyn
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
</pre><br />
There is one twist - that settings.xml file. It's used to connect to a Docker-based Artifactory image later.<br />
<br />
Then I created a script to build the image.<br />
<br />
<pre>$ cat build_image.sh
#!/bin/bash
sudo DOCKER_HOST=$DOCKER_HOST docker build --no-cache --rm=true -t medined/brooklyn.build .
</pre><br />
I also created a script to run the image.<br />
<br />
<pre>$ cat run_image.sh
#!/bin/bash
#####
# Make sure that Artifactory is running.
#
ARTIFACTORY_COUNT=$(docker ps --filter=status=running | grep artifactory | wc -l)
if [ "${ARTIFACTORY_COUNT}" != "1" ]
then
echo "Starting Artifactory"
docker run --name "artifactorydata" -v /opt/artifactory/data -v /opt/artifactory/logs tianon/true
docker run -d -p 8081:8081 --name "artifactory" --volumes-from artifactorydata codingtony/artifactory
fi
IMAGEID=$(docker ps -a |grep "brooklyn.build" | awk '{print $1}')
if [ "$IMAGEID" != "" ]
then
echo "Stopping $IMAGEID"
IMAGEID=$(sudo DOCKER_HOST=$DOCKER_HOST docker stop $IMAGEID | xargs docker rm)
fi
sudo DOCKER_HOST=$DOCKER_HOST \
docker run \
--link artifactory:artifactory \
-i \
-t medined/brooklyn.build \
/bin/bash
</pre><br />
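The Artifactory check at the top of run_image.sh is just a line count over `docker ps` output. Here is the same logic run against canned output (the sample lines are made up for illustration):

```shell
# Count lines mentioning artifactory, as run_image.sh does with `docker ps`.
sample_ps="abc123  codingtony/artifactory   Up 2 hours
def456  medined/brooklyn.build   Up 5 minutes"
ARTIFACTORY_COUNT=$(echo "$sample_ps" | grep artifactory | wc -l)
echo "$ARTIFACTORY_COUNT"
```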
In the run script, an Artifactory container is started if one isn't running. Artifactory lets you compile Brooklyn over and over without needing to download the dependencies more than once.<br />
<br />
Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-55754891327676195262014-10-31T16:03:00.001-04:002014-10-31T22:17:56.935-04:00Using R to Fetch List of Pokemon Sets.<p>This document shows how to extract a dataset from an HTML page.</p><p>We’ll start by loading two libraries. RCurl is used to read an HTML page. XML is used to parse HTML which can be viewed as a form of XML.</p><pre class="r"><code>library(RCurl)</code></pre><pre><code>## Loading required package: bitops</code></pre><pre class="r"><code>library(XML)</code></pre><p>Let R know where to find the HTML page. Then download and parse it.</p><pre class="r"><code>theurl <- "http://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_Trading_Card_Game_expansions"
webpage <- getURL(theurl)
webpage <- readLines(tc <- textConnection(webpage)); close(tc)
doc <- htmlTreeParse(webpage, error=function(...){}, useInternalNodes = TRUE)</code></pre><p>Use XPATH to extract all tr (table row) nodes from the HTML page. There is a lot of extraneous information in those tr nodes so we’ll filter the list from 70 elements to 67 elements.</p><pre class="r"><code>tr <- getNodeSet(doc, "//*/tr")
tr_with_pokemon_sets <- tr[4:length(tr)-1]</code></pre><p>Let’s look at one example of the HTML. It holds information about one Pokemon set. The pound signs at the start of the lines are not part of the data, they are just part of the printing.</p><pre class="r"><code>tr_with_pokemon_sets[1]</code></pre><pre><code>[[1]]
<tr><th> 1
</th>
<td> 1
</td>
<td>
</td>
<td> <a href="/wiki/Base_Set_(TCG)" title="Base Set (TCG)">Base Set</a>
</td>
<td> Expansion Pack
</td>
<td> 102
</td>
<td> 102
</td>
<td> January 9, 1999
</td>
<td> October 20, 1996
</td></tr> </code></pre><p>In order to make sense of that HTML, we’ll use a custom function to manipulate each element in tr_with_pokemon_sets. Generally speaking, the function removes newlines and HTML syntax. It also provides data types and column names.</p><pre class="r"><code>xmlToCsv <- function(xml) {
a <- gsub('\n\n','\t', xmlValue(xml))
b <- gsub('\t\t','\t \t', a)
d <- gsub('\t\t','\t', b)
e <- gsub('^ |\t$','', d)
f <- gsub('\t ','\t', e)
cc <- c("numeric", "numeric", "character", "character", "character", "character", "character", "character", "character")
cn <- c("EngNumber", "JpNumber", "Icon", "EngSet", "JpSet", "EngCardCount", "JpCardCount", "EngDate", "JpDate")
g <- read.table(text=f, sep="\t", header=FALSE)
colnames(g) <- cn
keeps <- c("EngNumber", "EngSet", "EngCardCount")
return(g[keeps])
}</code></pre><p>Magic happens next. We apply the custom function, convert the results to a data.frame, and remove NA values.</p><pre class="r"><code>pokemon_set_dataframe <- na.omit(do.call(rbind, lapply(tr_with_pokemon_sets, xmlToCsv)))</code></pre><p>The information is displayed so you can see the data so far.</p><pre class="r"><code>pokemon_set_dataframe</code></pre><pre><code> EngNumber EngSet EngCardCount
1 1 Base Set 102
2 2 Jungle 64
3 3 Fossil 62
4 4 Base Set 2 130
5 5 Team Rocket 83*
6 7 Gym Challenge 132
7 8 Neo Genesis 111
8 9 Neo Discovery 75
9 10 Neo Revelation 66*
10 11 Neo Destiny 113*
11 12 Legendary Collection 110
14 13 Expedition Base Set 165
15 14 Aquapolis 186*
16 14 Aquapolis 186*
17 15 Skyridge 182*
18 15 Skyridge 182*
19 16 EX Ruby & Sapphire 109
20 17 EX Sandstorm 100
21 18 EX Dragon 100*
22 19 EX Team Magma vs Team Aqua 97*
23 20 EX Hidden Legends 102*
24 21 EX FireRed & LeafGreen 116*
25 22 EX Team Rocket Returns 111*
26 23 EX Deoxys 108*
27 24 EX Emerald 107*
28 25 EX Unseen Forces 145*
29 26 EX Delta Species 114*
30 27 EX Legend Maker 93*
31 28 EX Holon Phantoms 111*
32 29 EX Crystal Guardians 100
33 30 EX Dragon Frontiers 101
34 31 EX Power Keepers 108
35 32 Diamond & Pearl 130
36 33 Mysterious Treasures 124*
37 34 Secret Wonders 132
38 35 Great Encounters 106
39 36 Majestic Dawn 100
40 37 Legends Awakened 146
41 38 Stormfront 106*
42 40 Rising Rivals 120*
43 41 Supreme Victors 153*
44 42 Arceus 111*
45 43 HeartGold & SoulSilver 124*
46 44 Unleashed 96*
47 45 Undaunted 91*
48 46 Triumphant 103*
49 47 Call of Legends 106
50 48 Black & White 115*
51 49 Emerging Powers 98
52 50 Noble Victories 102*
53 51 Next Destinies 103*
54 52 Dark Explorers 111*
55 53 Dragons Exalted 128*
56 54 Boundaries Crossed 153*
57 55 Plasma Storm 138*
58 56 Plasma Freeze 122*
59 57 Plasma Blast 105*
60 58 Legendary Treasures 138*
61 59 XY 146
62 60 Flashfire 109*
63 61 Furious Fists 113*
64 62 Phantom Forces 122*
65 63 Primal Clash 150+</code></pre><p>Notice those extra asterisks and plus signs? The next bit of code removes them.</p><pre class="r"><code>pokemon_set_dataframe$EngCardCount <- gsub("\\*|\\+", "", pokemon_set_dataframe$EngCardCount)</code></pre><p>Here is the final dataset.</p><pre class="r"><code>pokemon_set_dataframe</code></pre><pre><code> EngNumber EngSet EngCardCount
1 1 Base Set 102
2 2 Jungle 64
3 3 Fossil 62
4 4 Base Set 2 130
5 5 Team Rocket 83
6 7 Gym Challenge 132
7 8 Neo Genesis 111
8 9 Neo Discovery 75
9 10 Neo Revelation 66
10 11 Neo Destiny 113
11 12 Legendary Collection 110
14 13 Expedition Base Set 165
15 14 Aquapolis 186
16 14 Aquapolis 186
17 15 Skyridge 182
18 15 Skyridge 182
19 16 EX Ruby & Sapphire 109
20 17 EX Sandstorm 100
21 18 EX Dragon 100
22 19 EX Team Magma vs Team Aqua 97
23 20 EX Hidden Legends 102
24 21 EX FireRed & LeafGreen 116
25 22 EX Team Rocket Returns 111
26 23 EX Deoxys 108
27 24 EX Emerald 107
28 25 EX Unseen Forces 145
29 26 EX Delta Species 114
30 27 EX Legend Maker 93
31 28 EX Holon Phantoms 111
32 29 EX Crystal Guardians 100
33 30 EX Dragon Frontiers 101
34 31 EX Power Keepers 108
35 32 Diamond & Pearl 130
36 33 Mysterious Treasures 124
37 34 Secret Wonders 132
38 35 Great Encounters 106
39 36 Majestic Dawn 100
40 37 Legends Awakened 146
41 38 Stormfront 106
42 40 Rising Rivals 120
43 41 Supreme Victors 153
44 42 Arceus 111
45 43 HeartGold & SoulSilver 124
46 44 Unleashed 96
47 45 Undaunted 91
48 46 Triumphant 103
49 47 Call of Legends 106
50 48 Black & White 115
51 49 Emerging Powers 98
52 50 Noble Victories 102
53 51 Next Destinies 103
54 52 Dark Explorers 111
55 53 Dragons Exalted 128
56 54 Boundaries Crossed 153
57 55 Plasma Storm 138
58 56 Plasma Freeze 122
59 57 Plasma Blast 105
60 58 Legendary Treasures 138
61 59 XY 146
62 60 Flashfire 109
63 61 Furious Fists 113
64 62 Phantom Forces 122
65 63 Primal Clash 150</code></pre><p>With a bit more complexity the first column of numbers can be removed.</p><pre class="r"><code>x <- as.matrix(format(pokemon_set_dataframe))
rownames(x) <- rep("", nrow(x))
print(x, quote=FALSE)</code></pre><pre><code> EngNumber EngSet EngCardCount
1 Base Set 102
2 Jungle 64
3 Fossil 62
4 Base Set 2 130
5 Team Rocket 83
7 Gym Challenge 132
8 Neo Genesis 111
9 Neo Discovery 75
10 Neo Revelation 66
11 Neo Destiny 113
12 Legendary Collection 110
13 Expedition Base Set 165
14 Aquapolis 186
14 Aquapolis 186
15 Skyridge 182
15 Skyridge 182
16 EX Ruby & Sapphire 109
17 EX Sandstorm 100
18 EX Dragon 100
19 EX Team Magma vs Team Aqua 97
20 EX Hidden Legends 102
21 EX FireRed & LeafGreen 116
22 EX Team Rocket Returns 111
23 EX Deoxys 108
24 EX Emerald 107
25 EX Unseen Forces 145
26 EX Delta Species 114
27 EX Legend Maker 93
28 EX Holon Phantoms 111
29 EX Crystal Guardians 100
30 EX Dragon Frontiers 101
31 EX Power Keepers 108
32 Diamond & Pearl 130
33 Mysterious Treasures 124
34 Secret Wonders 132
35 Great Encounters 106
36 Majestic Dawn 100
37 Legends Awakened 146
38 Stormfront 106
40 Rising Rivals 120
41 Supreme Victors 153
42 Arceus 111
43 HeartGold & SoulSilver 124
44 Unleashed 96
45 Undaunted 91
46 Triumphant 103
47 Call of Legends 106
48 Black & White 115
49 Emerging Powers 98
50 Noble Victories 102
51 Next Destinies 103
52 Dark Explorers 111
53 Dragons Exalted 128
54 Boundaries Crossed 153
55 Plasma Storm 138
56 Plasma Freeze 122
57 Plasma Blast 105
58 Legendary Treasures 138
59 XY 146
60 Flashfire 109
61 Furious Fists 113
62 Phantom Forces 122
63 Primal Clash 150 </code></pre><p>And we can plot the number of cards per set against the set number.</p><pre class="r"><code>plot(pokemon_set_dataframe[c(1,3)])</code></pre><p>[Figure: scatter plot of EngCardCount against EngNumber]</p>
UByFedL1VOwUlYffzNAiUg1JgrQfsBfVSxBvVJurEW44zGaUnuAfrL6YIASkXpMFKAlIwHbuRhg4nPAjQmvvTYhAogqLqsPBigRqcdEAWp5yea4DXSqeMAa9z0hmdUzQIlIPWYKUEuHc6mRuaOGxVJ0rTM+Y7rL7IEBSkTqMVWAWgLrNXfuAx/Qbt7uXb8MLCW3AwYoEanHXAGaawxQIlIPA5SISCEGKBGRQgxQIiKFTB6gLXv3lnU+A5SI1GPyAF0CyDqfAUpE6mGAEhEpxAAlIlKIAUpEpJCJAvSxTGxmgBKRMCYK0MzWlwcDlIiEYYASESlkqgBNOp1eLAOUiIQxUYCeRXRA+td4E4mIxDFRgC4Dqqd/jQFKROKYKEA/Bjqlf40BSkTimChAWwNfpn+NAUpE4pgoQEsDf6R/jQFKROKYKEAtgz5+J/1LwQUKyOqCAUpE6jFTgKqAAUpE6mGAEhEpxAAlIlKIAUpEpBADlIhIIQYoEZFCDFAiIoUYoERECjFAiYgUYoASESnEACUiUogBSkSkEAOUiEghBigRkUIMUCIihRigREQKMUCJiBRigBIRKcQAJSJSiAFKRKQQA5SISCEGKBGRQgxQIiKFGKBERAoxQImIFGKAEhEpxAAlIlKIAUpEpBADlIhIIQYoEZFCDFAiIoUYoERECjFA/Vlo5aKiSyDyZwxQv1VqxEEAZ7+rKroQIr/FAPVXXWKAmyduAEmfBIiuhchPMUD9VF8blt0faLE0mJGCyaKLIfJTDFD/9FCKrbfrsOUtvC60FiK/xQD1Tzsx0n3cGZcLCyyFyH8xQP1SHVwt5GntQGdxpRD5MQaoXxqIHyStvpgprBIif8YA9UvjMEDSehzrhVVC5M8YoH5pPPpJWs2wQVglRP6MAeqXBuF7SetNzBZWCZE/Y4D6pXtxMcTT2oRu4koh8mMMUL8U8A8+dDeewY1iAmsh8l8MUP/U0pbS0XX4QITXBVEiUg0D1E99BNuM6qm/lv0iAfP4MDyRJhig/urtBOB8+HEbbOODRNdC5KcYoH6r4qQbAOIWNBBdCJHfYoD6sfw1Gv6PT8ETaYcBSkSkEAOUiEghBigRkUIMUCIihRigREQKMUCJiBRigBIRKcQAJSJSiAFKRKQQA5SISCEGKBGRQgxQIiKF/DlAb++QwUksF10VEfkNfw7QJchEuOiqiMhv+HOAPr8ogwisFV0VEfkNfw7QTPAaKBGphwFKRKQQA5SISCEGKBGRQgxQIiKFGKBERAoxQImIFGKAEhEpxAAlIlKIAUpEpBADlIhIIQYoEZFCDFAiIoUYoERECjFAiYgUYoASESnEACUiUogBSkSkEAOUiEghBigRkUIMUCIihRigREQKMUCJzKBo11mr18x5o4ToOlRWedCidSsntc0nug6lGKBEJtDtChwi3hZdiZoKjE9y/mMdeEh0KQoxQIkML+BrYNe7bZ56ZzMwJ0h0Naq5fRdSFr/2xLOfn0RSd9HFKMMAJTK8DxH3aoDj6PkojBVcjGqCN+G/ex1HIWG25JaCq1GGAUpkdNWSUp5KO26SYGsoshYVvYlzd6Qdf4JTISJrUYoBSmR0kzDV0xiFxeIqUVPAGbR1NwL3oru4UpRjgBIZXMAl1Pa0yllvFRBXi4ruxSlJ63UsF1ZJLjBAiQyuOCKkzWOoIaoSVXXBXEmrOv4TVkkuMECJDK4KTkibO9BYVCWq6oNvJa2SuC6sklxggBIZXBHcDJQ0z6CKsFLU1BFLJK06OCSsklxggBIZ3TE09TRq4Jp/zAStjmuSB5D6e32hNw0GKJHRjcbPnsZUTBdXiar+xRvu45CTeE5gKYoxQImMrlQU3kk7fskWX1lkLSpqh+gGrsPA2QgPzPZkg2KAEhneizbbqML2gwIfpuBN0dWo5kdEveh4wOrOn3HzHtHVSOQrW6VUQI7OZIASGV+/FNyY0rfPd5dhGya6FvWELgWOjOo9YEk8Ig30JGfd+VEAz
n1dNgfnMkCJTODhnc5li/a1El2JmgJ6X3T8U9mWVBNdilvg8BTYTp1ILSz6Rd9nM0CJTKFWv6+/ebeB7/PMJV+zj77/8o07RZchMQPxX1ZM/bXhfNh6+zybAUpElOYtxKStTdrTlnS/r9MZoERELkWv4wV3YzS2+jqfAUpE5PIKNngaBa/gbh/nM0CJiFzmoJekNd0z/zbL8xmgREQOm/CopDUIX/o4nwFKROSyGY9IWu/63D+FAUpE5DKXX+GzwwAloqx1w3pPI5Q3kdJhgBJR1opeg3v/PsswY0xj2rnzXmnzhZ07NR8yKwxQIsrG24hOm0jfw5b8gK/T9QhQ4DFpsw+g+ZBZYYASUTYCFiD+y0qpBw0XAAN9ns4AJSJyCxqZAtvx8ItATFffZwsI0P5I1nzIrDBAiSh7zuXszn9jlOXs0gXoBIHb7zFAiciXfJXuLWecBZW9A/T+a9iu+ZBZYYCSKAVDRVeQZ+QrpttQGgfoO8dTAReOpzkVk/rh+AMth8wWA5SEaDX/CnBxdjPRdeQBlcYcSkbsH28V0GU0jQP0M2TicBEth8wWA5QEKL0+9T/7W7Gp/7eiuOha/FzAJ4lAUmTqH/XZh/UYT/cAjfnrk6Jajpg9Bijpr8wpXPuovMVS+bMIHGaCamoqrHMfDLAUfukfJLbWYTyNAzRfgVTAEwXSCN66lAFKugvYiD13OA8rHOR/gJrqgbinnUdB3yOivPYDCrgLLxID1HzqvDnioxduE11FLryAs6XTju+6gqdF1uLnil7F82nHAQsxT/sR9QjQatUMc/+RAWo2zx10XPlJmnGX6EoU24hXPY2+WCmuEr/3hnQpkHIJKSU1H9GEi4kUqly3cd3KhRS9lwFqLvkmA5enfDhsbQquPer7dEMqnJwguWtayhofLK4Wf7cU3SWt1eio+YgmC9BiXZeetDl3kj619GX5d6MYoOYyFgl98tsPqq7Gzbqiq1HmfzgkbZ6FDlfm8qq/IV23aBSGaD6iTgEa9GCf4eMmuCnsJf/gKK87+pHvyf1hzgA1lUdtCY+7DoMWYJ/gO5AK1cU+afM4qoqqxP/9i9qS1nB8qvmIugRowFsXvOcyKeum6B/Od9viIuKcH0OxVuaUUgaoqWzD++7jAifRQWApypVCVD5PKzTBWlBcLf5uDdpJWvPwuuYj6hKgU9NPBlXWzbLUdx4c2ry8/YNIYPnmQw+ltpfI64IBaiblbLckYdMXC8SVkhv/QjIh8UX8Ka4Sv/cB5noahSNRRfMR9QjQZ1KD7uioV9q3c1PUTevU7+zeF4U7p36jbyWrDwaombTFakmrJk4IqyRX3kO4+1JT6GH0FlmLn6uUmFLf3fjC93ryuadHgK4Exgblvpt5sKW/EdsM+ElWHwxQM3kd0yStQogVVkmuFDyDWa4EDVmEw7wJr6EJOFnZddjZam2q/YB6BOgVHFHj8v9JbMjw2kaZn0oYoGbSAYskrXK4IKyS3Ln3FnY8mPprQLM9iPyf6Gr8Wsh23Ohln3Z+1w82DNJhQD0CNAnfqdFNHMZleG084mT1wQA1k3o4J1mUsUMmP0BNotE54MLmLZeBE7V9n025UPgXIG7n+sM2JL2lx3h6BOh5jFGjm0hMzvDaFETI6oMBaionpTdVN6GfuEpyqWjYFfvN0wvDlD3/QTK8uMf+Rx2/1NeGxOrQ5xqozHvlmduPE+mvBASe9J5j5xMD1FTewulSace9cFXgMl65Fli7WbNaOVvknHLprgdb3K/XTyo9ArQ9Ykqo0M1oYGS6l8KAUbL6YICaSvA2HHReMwwckGzrLLgaogx0mQe6BEtUuAtfPQFYJn2cr/7y1E/q1WT1wQA1l9J7kTTt+fsf7r8HtsGiiyHKQJcALbQE25vlPkIH2K9tHJk2uEu7p9p1GTztiL3ZV14XDFCTKfSD1fnsxblnRZdClJEeARoeHp76tyDmYLibwo76Jad7oim5j8weGKCmU/vzteHbF3bn3
RcyIn0WVFbnUU6LpebiREkvCQtryO2AAUpE6jFXgFosRTuGrdh95MyR3StGd1RwU5YBSkTq0SNAy2ag+ZBZYYASkXpMtqBybjFAiUg9DFAiIoXMG6ATJrwk/00MUCJSj3kDFJgl/00MUCJSjx4B2j0DNXplgBKRYCaaxvSQN+A3x6+y+mCAEpF6TBSgGbuR3xcDlIjUo0eAfuY2ZvF1WCd+9pmibhigRGQset9ECh1qPaNwUwMg6bIEEO/4VVYfDFAiUo/+d+Hfx2FlC0PEwDZRsg08byIRkWD6B2i+s+iv6I0V1gFnPTts+w7QtqvXpReDNYrGJiLKSMA80JnYpfCdvWKAOSVdDd8BuiyzS6ZKl9IjIkpPQICOxnWlb62wFrjS0XnsO0BLtcjgGJYrHZuINJev/U/bwtcNlbHRRPkhv4f/uahTiHY1ZUdAgM5GvPI394gGlpezH/EaKJG/aXXU+UUxZVqxnL0h9GvXEsGnn9e2sizoH6DFb+BELt5eYQ0Q1cPCACXyO+9YsX/gI/e1n5KAQ3fm5A0ldiN5Vof7H+rzF2xDta4uM7oHaM2dwA+56sH+IfSPKgxQIj/Tzmr9wLl3WrU9CC/g+w1BG3C0juMo4O1EdNewtKzoEaCT3abM+8cGJMreicNb+dQPobEDGaBEfqXgebybdnzbMQzy/Y5XcaFc2nE3RKixe7pMAh7ljM/9xYo3ou0dzZL/RgYokVG9gd0B7kZLXPa9ke9hSNa0/B0Cdr7WPUCjZuby86dD+d8ZoET+5Ve8KmkdwYO+3lADVyUh2xZ/alCUD3oEaCu3Jx+sFuD7/Bx5vn//J+S/iwFKZFRHUFPSmoHXfL3haayQtErghvo1+WLeBZUVYYASGdUlSLebHIcBvt7QBXMlrSBbilofz3KOAUpEhrAfDSStBeji6w0t8YekVR4X1a/JFwYoERnCTAzxNIKv4W5fbyidEl/U03rD6wu9TkweoC1795Z1PgOUyKha46Jnoba3sc/3O9bjU/dx8FF01aKq7OkVoIU6f7dm9+4133YqqGq3S7igMpGfCNiJhWl31evfRDvf73gIiY+mvXkSDufTqrKs6ROgQR9EpE1juvF+oIodM0CJ/MbdkVhb2X4Q+GoMpufkHWMQ39cRm+V+RmxDLWvLgi4BWmC9dCbomvzq9cwAJfIfjS8h4bfhH0z8D5iVo5gInACcnvLhsOWxiGihdXWZ0SVAZ6fG5rGRLzz22AtfHEs9nKFezwxQIj9yxxTn4krHOuf0HW33O5dvmltJw7KypkeAPgAk9HTN0AromQjbfYq6eSwTmxmgRP6kxKvDwgY/KGdC532Dwob3LKNZQdnTI0AnAS94Wh2AiYq64a6cRGQsegToYe9nVHfgoKJuGKBEZCx6BGg0RkqboxClqBsg6XR6sQxQIhJGjwBNwMfS5sdIUNTNWURnuDLCm0hEJI4eAXoec7yHPKeom2VA9fSvMUCJSBw9AnQVYiSrrJSNUfjI6sdAp/SvMUCJSBw9ArQnsP22tEbRLcAbirppDXyZ/jUGKBGJo0eAhpwCLvZxfAgt8/Z54ISyR5FKw2vxKgcGKBGJo8uTSPfdss83uvTvv5fsv968V2E3gz5+J/1LwQVysHWfBAOUiNSjz2Ii9x/3zNs81kiHAbPCACUi9ei0nF2BN7bEO3bk3PyavI+MKjNOgN726rgpX71R1veJZEC39xo/Zewrt4sug4TTb0Hl4KqNGlYJ1mmwrBglQIuNdfw8QdL4IqJLIdlu/8654kX8yFDRpZBgJl+RXi6DBGiVw0j57YOeH65Owb+VRBdDMt1zEknLB/f8ZL0Nu+8QXQyJxQAVoPhR/FPHcVRvPw4W9XE2GcsdZ/FnDcfRA8ewm59B8zatAzRo/LSp0gt9ZadOG6v/3qNuxgjQyfg7be+XwnvxtdBaSK4F2BziOixxDMOE1kKiaR2gbwLjvF4YD7yq6YjZMkSA3pmc6Hkm9Z6URFFLGZISd
9ti73Q3Gttu8SJ2nqZxgAadx0Xv2+4FLuG0mrsiyWOIAH0bCySt5egprBKS7yNMlbQ24UVhlZABaBygTwED0700EHhSyyGzZYgA/QHSvZj7KFxfmsRYjI6S1sf4QlglZAAaB+h3QLl0L90JgVf9DBGgS9Fe0uqARcIqIfk24DFJqwd+EFUIGYHGAbo7k9XnD2GXlkNmS6sADbh3wMgRfe/O2cnT8Lqk9SamaFISKVeyy6dhH7XJ/Ab7UjwnaX2A0fqURMakcYBewcoMr63CZS2HzJZGAfrIXudjqpvq5eTs/pglac1HPy1KIsWKfZ3k+LcZ0T8ok98dhvGS1mq8oldZZEQaB2gSfszw2o9I1HLIbGkToO+l4MrMjz+dH4n4nPxpVrXFeq5rVIi3VtKgJFKsxlGkbBw5eELqD8Xfi2X87Ya4Udzd+F9yEp/nzNM0DtCbWJLhtZ8Ro+WQ2dIkQN+E9XPH173bJsHaNgdv+Amr0z7b5FuHmepXRMqVPo2/nV8knr6EtfkynvAbFqbNZA7Z4fVxlPIejQP0FLZneO1PnNRyyGxpEaBVEm3uP8MhuH5bduc6lb2AVaWdR2twtpTqFVEuLMD2tIuf5S+if8YTKl3HIudn0ApbcYTPkeVtGgfoJiSkvxQfmoiNWg6ZLS0CdBqmexrr8WkO3tLoCqK+69ii0/cxuNRA9YIoF+6x3fLMk2+Na5ks/v1IBK6P79Ciy9RYnK6pY2lkQBoH6DDgpXQvdQE+03LIbGkQoPluoKqn9TgO5ORNFX62Oe5T2BbfpXY9lCtDvabl/oPWmZxT/TfnPUPr7NJ6lUUGpXGANgaOen8ELXgMeEDLIbOlQYBW87oikS8eOVte4u7Bc9fNfa+G2tVQLi33mqQ7yntHbrc6H81fN3tAZX1KIgPT+ln4rcAS6YX44KXAZk1HzJYGAdrU+zLvGVRQewTS0Z9oImn1xTfCKiEz0DpAH04BdtZxN+vsBFIe0nTEbGkQoLVwSNqMAW8rmNlqry/tn2K4sEr8U7Hn+w/uUVd0FerRfD3QQfaLRRs/eKJ+tfoth2ywN97VdsBsaRCgBeNTJFMB6+K82gOQnsZhhKS1Fp2EVeKPbp/kfEbhoLjlMFSm/YLKn9ogZftE4/GypcVd+JX40NOYjEmqD0A6aobznovYdycnFc/mXJKpwVmkrA0Lm3IJGC1uSTZV6bAifevDkvw8nNldTf1oEaAP2W42TDtulZxYTfUBSEcBOzzLg4Tu4GrXaqp2HWscK+HmfyseI0VXow49tvQIfH7uBUd6Xpj7vOCfO5o8ifQ9bjzuPOoUhyHq9096qhuLHwo6ju7YguM5eCyCcij1Z9MvaTeUmyfZHhZajFr02hOpaOV6lQ1wd0WTAM2/Clj3SuOmvXcC0wTuV0KqaB+PS1883rDtlFhcqC26GH/SBuc9y/d/KHIyjoq4qZwKgj656bxAce0tDXonnd2323W5ftmdvk+mHJstXVy9QIzNL/50GaCquL3H/O3bfnzZAJ+xKfcCmo1bs3vF0Dq+zyQZjkD6gX4l2gmrREUMUCLSQwSkUxomeW1sY1raBmjvLGg4ZPYYoESCeD+kNy/DKhmmpG2AIgsaDpk9BiiRIOu8djA9iYZZnmkiDFAi0kN/rPM0WuKCX0xY0TZAh7hMB5I3fzN06Lebk4HpQ8TNlWSAEgly21XPbooljmKAyFpUo8tNpJYxmFTGeVhmEmJaaD9iVhigRKK8hGTX7Y8K/2BXJktVm5AeAXrXDbznab2Ha3doPmRWGKBEwnwO7O5Zu1qLyfE46icriesRoCOxV3K5I2CfwCXCGKBE4rx8ybWW/3R/mTKtR4D+iy+kzS+wT/Mhs8IAJRKoUPeF/xzdOMJ/HpHVI0CjpY9wWSzvIlLzIbPCACUi9egRoLHe+yJ8g1uaD5kVBigpUfDR59s3yWSPeEPJ16T984/kbEcuUoseAXoIFwp5WoUu4KDmQ2aFAUryVf8pzn7hL
mJcKdGVZKP015H2ImNnVxFdSZ6iR4COARYGpzWCFwJhmg+ZFQYoydY+DtatixbtBS7eK7qWLN13Gfhn0aJtVtxqK7qWvESPAC0XBex7wfHdIvSF/UBkWc2HzAoDlOR6KgVzHBsY37sRkXeLriYLtaOxvr79oMo8JLcUXU0eostE+mfsO0klH9yy5WBy6kFiG+1HzAoDlGQqE4WhrsN883HAmBdCg49gblpln+PG7dmeTCrSZzm7Zqc9z8GffFSHAbPCACWZvsMS93H+A+gpsJSs9cBe90Uyy3KMF1hKHqPTeqAFe26Nt6dn/JaeBfUYLysMUJIn8Aru8bQ6Y4O4UrKxSbrQUX1c9It1OkxBjwCtVr1GUOq3jKqNGlUN9n22phigJM/dOCVpFbUlGDGbApOshSXNc6gqrJS8Ro8ABY4b5b86BijJ8xg2SpvXYMTri2VwWdrcAv/Y8dIM9AjQJMzVfIwcYoCSPI2xQ9q8hcJZnSlQUURLm7vRSFQlGgi5v32rytoOka/hs21qKHurHgF6FlM1HyOHGKAkTxlbTIinVROXxJWSjWvSL+2hsdaS4kpRWfXZjh1vD7+t3ep3d02OcNzdHqLk9oweAbpPFoOxAAAgAElEQVTKOFfeGaAk0y7p3j1hmCaukmzMwghP4xVsE1eJynokwha+dOUV4J8Kvs9W5LnUhN7/86/ngOO15L9bjwB9CUla/cPLxQAlmbrggvvBj4YJ1noia8nSvba4+mnH5S6jg8ha1PQ6MNP+2Tro+f9wurQmQzxtxc/2xaECn9yL6/Ifg9UjQAPXYZNB1jhggJJMgRuxt5zzsMFFfCu2mCxNxvm6zqO79mGtUe7Z5lbNBLzjOiyxG6u0GKJMJEa5Dgutw+5Aue/XZR5oiVU4+lIR7cfxjQFKct3+L6K+qBVcoPGURKwUPQ8vK/l/R8L39xcIrj0qGvuK+z7fHH7G9+7jMhHQ4hGcifjZfVzkDDrKfb8eARoe/rcVsJ76JzyN5kNmhQFKshWZ73qILiksSHQtWcr3ZbKzSNtcI84TUKR4UrLka/swLe5FB0dIn5N4EyvkdqDPPFBua0xm9tAP/yXG7R/7P9F1ZKvWuH/jEv+b0lR0HeppjS2S1r04qv4Q9fGfpFUOUXI7YIASkSG9ihmSVhHEqD9EK6yWNm+hUFZnZkGPAC2bgeZDZoUBSmZRoHaLukIXjhDuJfwkaZXCVfWHaOY1xTIwySr3IrdOi4kYBQOUzKHRklupX9XiljUWXYhATXFA0mqJv9QfojKuSq5r18FZuR0wQIkMJ2iMDda96/akAN8YcwVSPeS7jvs9rXn4TIMxDuNZT2McJst9PwOUyHCmIXHMnam/lh2RgHn+MqtTvi+w0/0cbXNrfHkNhuiDo+4t6usreE6CAUpkNK/gVtp6So/cxJtCaxGp2Gn84po/3uQ6PtViiPx7sdm1V+A9ZyXTTnOKAUpkMPnPoZu78QKuyL0z7D/qXsfZXsUtlnsnJ2K+7KeEcqTiWVwbUMZiqTX2FtbJX7FErwBt/N60JSvddBkyMwxQMr5WOCj52r4LL4grRbSqu9MeYhimTX5aLGX/cA1h/VbBc2b6BOiD+4XMA61QJb0DDFAyvDDp2kqWIfhOWCXiBXb97QYSjn9bU8Mx2i+9aks6M1XRntX67MqZImQi/bcZZ/AD2/UZm0ix2ZJv8BZLOywTVgn5okeAlogGFrXpCdR5uPd6YGTt2poP6TDkRAYJ2KzP2ESKTcdrklYHLBJWCfmiR4AOgX3FqBbOD55tb6Kv5iNmiddAyfiGYpykNRxjhVVCvugRoBtwKdgdoJZnkSBuUQYGKBlfY5wv4G4E/4cWAmuh7OkRoJcx0+IIUOdDUzvxleZDZoUBSsYXsAfD3I33cTjvPotkfHoEaCI+sdj3h3VtaDgC/2o+ZFYYoGQCj6VY33Iddk+2tRZaC2VLjwCNxwep/98Irp0D30WE5kNmhQFKZtDXht+aBVvyP
foLMER0MZQNPQL0HMak/n9Z4BlH82skaj5kVhigZAodowBrhBW42V10KZQdPQJ0PX61/3LFuTxq/lPy14xSDQOUzKH0mP/se5WPu0N0IZQtPQJ0OK7Zbx9NhbVbgKXIPHgtk6ovBiiZRnBx+Y9mk870CNDGQMvUX+5JBq4cSkr9avKA5kNmhQGqu/wVS4ougUgrujzK+Xu4Y+vlPq6nKQdrP2JWGKD6Kjx4tw24OrOB6EKINKHrcnaPr0tE8tan9RswAwaorh6/DMSdugLYvue3UfJHOq8HGlBUq0WpcoYBqqdnk7G5eT6LpeLncVhl3C3ViRTjgsqklao3MdK1rmWDS14rtBH5CQYoaWU+5ruPm2qzow1pKqTV0Emjut0uugwj0zpAg8ZPmyrdBr7s1GljBW6SxQDVT/GkpLs8rfn4UFwppETw+zccN31TZpQTXYpxaR2gb8JraS6LZTzwqqYjZosBqp/2WCdpPY1NogohRYpvBLZ+1nPQ4jhcut/36XmUxgEadB4XC3i9UuASTou7kcQA1U9ffCNpVcEJYZWQAsF/4Fxzx1GFtbheRXA1hqVxgD4FDEz30kDgSS2HzBYDVD/9MEHSqoRTqvUcWlzsXI48oT8u3Ok6DFrm9WWCJDQO0O+A9NdP7gS+1nLIbDFA9fMCVktarbBVnW6rfnMRsP3ZmxNLNZXvCtq4G7ddx4MCazEyjQN0Nw5meO0Qdmk5ZLYYoPq5PSWhtKc1E0PV6DTw8yQgMRLAqcZq9EdZaO71N3cUxgurxNg0DtAryLgH/Cpc1nLIbDFAdbQMU93HDZKTqqrQZcBPSP6+QYClUPs9iH9ChQ4pCwO8vic2x0ZhlRibxgGahB8zvPYj1wPNG2rF4z3XYY1T6nyEGYzIx5xH+b5BZAU1uqRMfeE17eweHBBWibFpHKA3sSTDaz8jRsshs+VPARrc5uOwTzuEii4jGy/b8Kt9GZHiA6OxWY1rlqVvwr2/RcDPmfxwJrUM8toL9GG1rmAbQpGXhoZ92EKdZ4s1DtBT2J7htT9xUsshs+U/AZp/4FXHLOfoYYVEl5K19tHAlT1HUoD5BdXob6BzaW6nikmJt6nRKWWmjdedio8wRVglais+Ns7xN+dcTzXmcmgcoJuQkP4jUmiiwOspfhOgJTcD4V8NDttmw76KoovJWqnRF1L/U01a1Uyd7tbgBUlrPZ5Tp1vKqEA0mrgboWfRSmAtqqp1Asl/fDH463+BFYVz353GAToMeCndS12Az7QcMlv+EqAh23HWuVv4fQdw2MgfxAKrNrxHtfoO425JazwGqNUxZTAMB4umHU/E3wKfv1ZVufPYdY/jqP1VrMj9Z1CNA7QxcNT7I2jBYwBXpM+tT3E6bYWBIn/70dcrH45Deit/DN4XVon/K7wPe6o7j6YirpHgalTzCzakPRpZ5Sp65bo/rZ+F3wosySdpBy8FNms6Yrb8JECLxOBhd6N6UrKBv8SraovXQ2w/o4uwSvKACoeQ+OMrLZ4bcxG32oouRi334mYZd+M5XMj1nSStA/ThFGBnHXezzk4g5SFNR8yWnwRoB2yRtOblma+ywyUzSy233bTdmfWplGvFJic59+DZXE90KaoZKV2gIeAwHstth5qvBzrI/m9g4wdP1K9Wv+WQDfbGu9oOmC0/CdAwfCRpdRO4zam+aqYk1XI3vuID2lqr8P7Sdb+O96dHvtbgKUlrQu7DSPsFlT+1Qcr2icbjZctPAnQ6XpO0WuSdJJmMY2mz57vbkv3mwhzpZQ+k+xu+j9G57VCHFelbH5bk5+HWvt+gIQMHaOmugwZ1K+37PDvv+88vYLEmFWmi4HP9B/eoqfTdoX/h6uv2mwAVZwJ9sjgppM07g3v7z7fOrAQ0fXtw38eNvtVU7V6D+z1bwPd5Otno9aV9ZO5X+dZjS4/A5+decKTnhbnPC16HzLABWnNRimPx70V3+z7XYuntFZnjzbPfUMmvbjr+S/jzcYUdFFsFx
O5YfwRIeCPzMwoPjXAMsa+94irNIPDVE45/zIv9jLwsVZu/HUVGfVFMdCUuU7wicxOez22Heu2JVLRyvcpFfZ+mNaMGaKc4xK4cN25lLOLSz5vNzJ3WeM9z4MWu4T7tKlNVg7NI2f7NmAVXgdEKf5QGtN9uvyYUPa1S5r9f7RBs4RNHzzkPTDdytORSkRXAiRlhUw6k/jAq4/t0MfJ9B1z6acx3u6w4Xsv36Xp4Cic9/1XUs8bleio9N5UzgrZWzCxlP7h9OqzP5uANc7DG/d1tFjZoVpi6ql3HVsdfpPzvJ2Kk4m7ueqhFo6y+FJY5g32OHydBvW5ipuIhjC7fWlzt4DhqeQp7jPow70TEv+OYw1j/L1y8y9fZugjcjy/Tjgv8ha9y3SED1ABKRnruqr+PyBzsgnjnJfxawnFUaDpi7tGsMlUF7MAvwa7jx5NsD2d7skLLsDUtThrcxItaDGEE7+FSZddhycNGXavzacQ3dR0WWIPfhdbi9mAixoY4jsptwn/Fc90fA9QAvsAaT+M3hOXgLfddxY2w1g1bfHYeN12PKVf6fN/V64cn1Mn+jQK1wcUi7sYnmjxP0RA373A33sBRf3kAMZ2CEZInCuqkJBpz18x/0N99XDICmvzAlK9TAo5/0Kxh2/E3cTZHNxyyp0eA9vbWq9uz9fL5fpcmjBmgx6X/cTXJ2eZBlX5zzWvYXtfRzjc60dm2/qjCEgmamI3BnkYhTabBj5buwxR0DveqP4QRtMNOSWtJlvMRhKqJ6yGe1ueYKK4UL03+cU2oXKzGxWM9AhQZJSwR8/PIkAFaEpGSOyoB11E263MlGo3fFL5lyqPORv41SJna/I7STcbEYl8JDYpUwRHUlrRWop36Q2xES0lrKt5UfwgjGIlPJa3XMEdYJdnoigWSVhOEC6skncBWM7eGbwir7fvMHBAUoKm+FfH1ypAB+j8ckjb3QsEcxsm46FqipdoBrDXmN9cISC85TUJv9Yc4hP9JWp9imPpDGIH3gxQtsVZYJdl412tJ5rtwTlglWtIjQIcMmQYkbfhm6NBvNiYB0z4cvdy+K9gYzQfOyJABWh5npc0TkL8Jd0NbXP2043KX0EFhJSFPvNqjvWazYs6ivKT1kxZLgezC/ZLWWKGPDWtoAvpJWs9hqbBKstHTa5WwdJ8SdHX7cz27P6nR1g263ERqEWMbV9J5ePsEW0wLi6Xgx8lIqaH9yOkZMkCDvS4HlrXGhmR9bhZmSX8c9cA2RXWUGB3puIi6Wp0vNxls8Jq2fNwr61Tyk9flwK3IyZQwE+rrtfrB6Nw/kKiFx/GPpNUdywXVcc/SZPt/1rETcn/LPRN6BOid1/GWp9UX1+1x0UfIR1BDBqhlsfSr5qdYJr+Ha5D8NCoYay2poIpG52HbM3v6hgQkZvGQTy4NkD6z3xIXNXgqrRP2e3qta40z6gzJXKpku+WZbFDoojE3bc8fCc8yJAE7oc1/VD71TkLC+umz9wJnGvg+WzY9AvQL7+vHfzuePAw8hr80HzkDYwZoE9yqm3Z8z03IX+6viPc+feFoKL+IWtHY6vjoWXIy0F3++30rHoGuacdFj7h37FRTyCnPIsshOzBOgyEMYQl+dl/n/hY7jHnN+3Psce+E9RYuivlh1hO27xw3VetuR4QG33n1CND9+FzaHIH99l8m4IrmI2dgzAC1TMdF14eIJueVPD9TFpekzc14RHYXQf9iQdrksp5IkH8ZNgdeR4Lr5kf5HfhHkxUmnramvOf8DFryd5wy6HSE3KtyA3OcgRQ8FvEGXZWq6FFscl5QD+iTZFN6XT53aibhdddh8GL8rf6XHj0CNAoDpc2BiLT/0h9Jmo+cgUEDNGQ1rHOf/d/dz8yxYo38K6CWoJQU6a6Xp1FddhddcdxzmX2mRjsGjwA2vVK7WvMvb+K462H+R74PP3FoTT/Vtk3qa8XunvWqPjL8Bi7l9Fpugy93Hj+ycYgxp6Nn7tFonBvcuErDf
keR8ILv08WofhZRIx+tVvf1P2Eb7Pt0LcyTLMFd8LQGj6bpEaCx0tnNFsvXuGX/5W3c0HzkDAwaoJZ8I5xbrSJ+pKJnDLZLt6ush8vyf9L+hh6eRkXbTW1WIHv5kmsS80LnVdpSy12T2q52VmuINqdcXa7O4dPXBadZnW+42d/3yYZR60/XP+aBpr5PFqVs2r/ec7le9UiZgrFWycyPNzX4269HgB7GWclfx9BzOGj/dQSOaD5yBkYNUIul3ICVBw+tGqjw6Zw+2Bvsbiz3/oGVM5GQTl/aq8UtcrtC3Rf+c3TjCNcl3yqnEDP6sWp1Xt0KDFVriPyd5ob/t2VMTncuLLEHiROfrFGr46/ATGNeTMzc499tO7Fz2jOCF4j0oeGozcf2zOsiakHQpl63X+7ENdVH0CNAxwI/uv81B82Fcz2Uddik+cgZGDdAcyfkFKak/RG/j5icPcokVQDx0uYvukwAKnQA213fm3smClqUIXA9Drsm3z8TDaHbJZDq2uNnSSsgyRac5akK6RGg5W8Cu591XNoLafc3EGP/blUqWYW1pGTz1wC13BeHNY57jGXnwKpgKeHAZKv02sE6tFCpsOx8hAPu1UVew1XnerEB9dt2aKHf3Z+XcMn9qb9FSkLl7M4ls3nSaxGoEFuC6l8xdJlI/4J9JmvSgS1bDti3+Ut2/P0eCTyh/cjp+W2AWh6+CutfP87YnIzYjkre/590VeagawqehpIt6Kp0f4VNjjnwwQPO26+ZJa/Sa1Gpv/GqpzFTxA910k4Nr9sBTTV4Gkqf5eyePOd5Bv6s87NNuUqVBGzn4r8Bain9teM5otjZVRW9fYL0wbsXnJepNfYgDktanexr+pXbBVxetWRnMpLf0aECi+VOW7Rk1kNjHNNlVNLLUemSNTM9iymrRqf1QAu/tT3BsQrT9jeFLrbmxwGa+umtadfuzZU+8lsjKdn9PEvp0+ipUk3Z6eY1WaoyTlpKHMPJtvZvWbd/a9PnOfZHvJYlDUpJMfombSTL25JFk1ukJKr/vUq/BZXzV2vUqKroXWr8OkBzJwzXHnMeVdiPnXos2Po2vpO0bsc1yxKEp1397GRN0WN++FNYJW3GoEhWZ5IZBf+Nna77lK1jMFz9AbgiPTkFLYZ19pPlSt0/IgbH7vB9fu518PqX0RAH7rPFeGbtjdZlkbZG2CtplUSssm4qvL/qr53L+5TK8RsCWk/eHL5pkoDbANm4rdfSHbtXf1RNdB1quvM4oofdV6pcq3k2zNNgyhcDlFwCh8alLdWtybo1GVS0RUsuOHyM6eMxytMudsuqQ4yH3rJJZlp3UbY/X9Bn8c4/uci3c/iOu7e77ghsEbAkWVa6XXXWlDRW9BdFNZX82fVHHfuRFrN8GaDkdueQjeev/D1WwUokyvwp2ePjtqtoGe61rNCvipc1leMnTHYfBx9QdO035DdYl3Rp3PT11cCMHP0lfSwK5z9/ouGTIy4iQv7SMRoZC2zu/dADneYkYatfXci4f9zfV85teF+bH8c6BWjQg32Gj5vgpseQmWKAGkkLJLh2JLGErsNmywVIH0f3XjVYKzWTbS+5DoNm4biSz16zcNF1ufbJqBytgV89AjOdz+aEzsH1SgqG1EA/xLl+YNU6jl/N9EiWULoEaMBbF7y389B+yCwwQA1lLBLedTwbUmcXrlSynIB0CtYUXaYCWPrAGuaYGVLxN8Td5+vsTDyGm+5Jq82Tk3LwnXwllqQFVOAvBllOvlyszf28eqVrEPTseo6Uf6hFo4K+T9OHLgE6Nf1+SNoPmQUGqKEEfQVcmT/m2x0pOF3X/vyTdGGh3XhclyIGpiBq6div/0jEjeZK3v+bdGXTSZIrAlmphQjPgtdlb9rkL52lgS8wz9PoIWKt3pwJeGmf44LmAkP8qekToM+k/gMfHfVK+3Zumg+ZFQaowTyx0/ETNXqMffrSQPzq+Z3atpsKFvZT4v61juWY4qbmcPkmb4USkiX33
Gp7r8yaqSFeITvLGBs3/StdQzYkRos9p9VQbFVqeO5YfxhIELTCfTp6BOhKYKyK85MLVa7buG5lZetbM0ANp2LnQW82d156LBXleW4k/xYdH6ss3WFA31YKvxXegwPSZhSK+XrHTOnTo6mf9qZmeaZ+AhJs0h9XG9FMWCnZKbgb57raLx9X+Nbmtf2VMHoE6BUcUWsCVrGuS0/anHNtTi19uajs9zNADe0txD7jPCq0HKdVW2RZU03xp7R5BhV8vWMpnpO0OmCR6jXJF4o4aXMZxH1LzM5kHEv7aNzVlqzbdJFs6BGgSV5PnORC/sFRXpdSI9+TuzoVA9TYpsG29MlCgZXePYcoI/z1yIGqOCNpBSfYfK59OQl9Ja2B+Eb9ouSLhnQjwt1oIqySbNRMSazlbnypy5MWvugRoOdV2n+z6B+uid5xEXHOj6FYK3PCGgPU2AKGuCbzY8/domvJoaCrkJTawuvJpsy9hhWS1u+enfZEWo9OnkbplHhD7mc6HBM9jcLRRrhQq8810CWq9LMs9e/VwaHNy9uvBwSWbz70UGpbZs8MUKO7c+iuqJSzC54z9kLrUj9glvs44I8crMlcOt7q3oTV0tAWp2QPatW9iT2e+xRjlWytrYMtXktgLkUXYZW46RGg7RGjxgK5rVO/s3svddk59Rt9K1l9MEBJbRXiPBNWh+NyDi7Mh+FY2v4pd5zQYoULBUKOY2La5NTnrck53Y9PX8e9JgqP8WxhLY4u80CXYIkKd+HnwfZoupeaAT/J6oMBSqp7FbZRjtgsPQ0pOVkeJDQcJ5xTTluexk6dJmtlJrh6w1ppF8EeSMBCxzyugh+mSGe2ylSyTsNKmj3GdAj/k7TGwwC7AOoSoIWWYHuzXEfoyUxWetiIE7L6YICS+galIHrOx58uicOtTr7PTlVuN7D/y8FjDwCby/g+XSNVZtuX4E5c87Cz2TYGib9+9tHMG7AOVRiBAR132qfUXvxKoy1ZfoP0K+hGXTbu8kGPAA0PD0/9Y405GO6mqJs4jMvw2njv2Rc+MUBJAw9udd7d/CWnK8GFfhTheMf1d/RYeDVzPeKBg+F7kmCb6JzMUn1xiqOoXUo3Si6+OvWveXj4SeCaome6fOonfdKiYlKi/HmMqtMjQNM/yKnwUc7ITJ6Sm4IIWX0wQEkTlV8f9llXOev9BLcYEDagueqbRObcINgm2a8olvggGktdt+xKdf70816KF9grsg/nXrMvUHjfCiS3VqdKb6Wj8XTaceByzNBiDJlMFKD7cSL9rdnAk9gnqw8GKJHdI1Zr2ipU9a6pczdmHg6k7af9BaLKZXuuQn1x03WROd80XNNl2W8f9AjQshko6mY0MDLdS2GQLsGbAzkM0PqfzFo0va/PR0oovcDHwuYumtRNv12JSaGdkoX3WiJa4VSqcr2mLpo9/H7H8X22W+4thwKWSfcoVNFsJP/YJMBSpNu/iNPmMoFMJlpQuXoCsKyu5IX6y4F4efsP5ChA717n/JycPF2fldn9R7N/nX9yN4eJu7RHOVELlyVPTP2qbOXAQt8mOv99b2uQ2pqILzy/Vcl60+cTWUoEfp4EJNpvfh1soEX/spkoQC0D7P+ujkwb3KXdU+26DJ52xN7s6/ttUjkJ0AcjEDWxY4dui5JwqLzPs8mjezLOfdGhQ6/1wO9KdwclXfTBdEmru6Ln8W/fDevyVzu8OP4abrW2WA7jXslv/oX0cw5VUvWrIym4ta6HwMvHUmYKUEu/5HTXUpPlLsiSgwCtfgOLnJ88q+3BP4Z8os2gnkzBCOekxocuYb7gYihboyS7qVgsTbwXRMmZ/Jtx1DnfvvBU3KqfbkfTuRo+JpTP54JX+hEQoOVrK37MoebiREl8JiyUfcMwBwG6FvPT7lUVOyTzEmueFnoOQ9KOa0bqsqGRUsF3VLlddA1ifYEPJa2HsFV+F/1xwj2HdSr2BHov5DcPnZVXZyIaB+j5865ryS179057bUluV
qQv2jFsxe4jZ47sXjG6o4JZYL4D9AFc8fwgbWiNN8BUM5Poje2exuv4V1wlPjReFJ364/fMeE3uE5tEL69H+Hpjruwegi7DcxenwCk8u89rS8B/jbmek+o0DlD3ah+S1MxVgOaS7wAN81o6aqPXJhOUnTXSzxz5bqBK1qeKFDQRsJ46cQmI6y66FnEq2iIlHw02KUiBR3BQ0noP00dL1+Wrjet54z4iAzSdlZ6puqmGYaiW5fiVC5CuLrYCbYVVkp2AeYgbab83eH/qf4dvia5GnNXw7I3bAZcKy+7gTUyStBpjV43kBPfmekHrEZar8kyDAZrOn15fPfoaY7VbU4iHdOLKDK99K4xjICIecB2+Y0s0xlQYEeokuK+CPhmr5N/Vx/hc0qqK45avcf4eZyt4Ni6YYz+BXDNhgGq7J9IKPCNpjcCnisbxE8U7T1gwuZ9kCbEGH8z4aUybzGf4nffazGKV1yd5oQo8NfqnmR/e61ggo0SUzbPX5xhs0mrMu3p9t+ib10pr1b0KXrVhTcsgi6Xe1GRFuzL19poq3xQ7LSHbEDOiksUS2nEv4pQ+T282JgtQ7fdE+hzjJa1tXnGax4SOinX+US9zRWiTHc7pD1f6ZLZaz0p08zTyR9kMMoc24K3Lzqp32u9x9MJKz28VjtDoQm3ZHx37fCJxooEfxWiT+udy68R1IGGwksWXmuCopPWR/Qt9gck22C6djgcO1VOrSqMzVYDqsSdSfUSWcjcetkXl3QnhxbcDG/t3eH1OLCIcezR2SsC1b7t1+nAfsDSTRSy7Y69nycJ3sEO3QrMV9COw/8PO3b65iuQeFstivCL5zXnoocWYNU8haWmvDm+vtOJYVd+ni1LsM/uDY+cnKfshEnhWsvFc4QtwfLB/YF4EYN3+tkFmuevATAGqz55Iy7Em7f5hqVNes+XyluA/cNJ5tbDsckRWs1iaJ2G8817Dc9FeNxBc8h/3bERcP1bmVgGamYCYFxwfsAqNhfVJy048IPnNTzRZDr7USWyt7Dj6314clH97RkcFKiufk94TVyq6DgMWeCaSlqqQd9LTYq4A1WdPpPKX8LvzdvK9R/GXJg/0mkJ/nE6bKBm0EBstBc/hs7Tfa3QTj2d8x0MJmOj8gfZMhGSfIKEeQaw7MT/C+UK7cL/kd71vhKhlFjan/WdTdK+Ou9vrLOg3nHPO/Cy1GDH/83G2vzJRgOq1J1K9s4if36NDv3U2hBtg2z9B8l2RzJMufAnN+knnyffHtkze8+xN3Pj+lU5D9qR+xzfIpY/1GORpbMHAJdp/ha9ijfNM0f9fSmKpbM41taJrgC0DX3ztx1u40UJ0MaKYKEB12xOpzGznBYK44Xn386eluddKq8Px9Qbpw5mhsdbMtqKo5brIcqWHZtviyFMyJV7yFfo5bOmJVZ520QhUUn/Md70+fa/0Smz/EvR+jPPf99KKvk/2U5oH6LanHbYBTz/tPlTUl+w9kWqMCEvvsteW3Fmq1GfSwq+7GmK7WVEGSCZa2+N00w1IP0r9gcw/c9QbPH3el88U1E2KZ0wAACAASURBVLQ0GR7DFkmrOKJLRKKNu/0t1mow5myvyByUyUY0/qPYi+MX/DBQ8Rr2fkDzAM2Uor5k74m0MrOh9ygaO+/xXm3iHhywWaWfKk2yVsTz3hfJE5F/ICIfcrXeQUJ9Dcb8DU9JWl0xW4MxyChMFKCy90RqMDiD89JpgJSNgV4/rh7D5ghI15lfh5Z6V6REM6+p8sUQYwmYi6Rx9hn/9y+BVZNv13PQVdIa4DWv2Owq9f7quw+apt9ZJy/TOECPZ05RX9wTSU9PYrek9Qm+2yLdRDYkxmaKtYzKWG9JrmM/bb8PFhhmhc2xmEhUu6zfmAuD8YOktQyvaTKKCHVWOz//nNRuqU/TMdGCyjruiUSW/DfwiLtR6Dya98dmz+/2kd6SN7LN6OdpbMC79l8aLLLf+zg7TqPb4zVsMZ7dzu5OSjLy85yyd
IjDrbn93vzyFDA9byy1lAMmClDd9kQiuyE4kbbocMBsbLNPZXLv3Vj7JjTZtlZ9T+Cm+6nCge6lXrVdUHkh1qXlS2g4vtVuIH09mISZjj+1oNfjvNZ8zNNMFKB67YlEDgV24mhDx1Gx2Yi+x2Jpa7WNds7ubH5VMlOnmLE/jMxAhPObevBHNuuzPk5Wx50XsMG5EED5bThq4KfhZQk54bma+1iCLa8sFuKLmQJUnz2RyKXMblh/7t6h47cRrmfhX0/C2eEdOry52f0sfMib628h+fCXlUTWmb38i4Ctb3XoMOyU41l4XdQ7j9hpnTu8PCcBx+R9QzKw3vjbcwvig0ymFOZNpgpQXfZEojShY11/2mtcKdB0r2sRl/7OGU0PnrE37PvMDjXIxPlMBLwT6ax6/8O6jXnHQueQ1mklfJ9sEmvwkqdROD4lj28qlcZcAarDnkgkcddb0xb9+FlDdzvo8TFzF33/sutb6VMJ2PdyYUtA09kpmGXcBLXc1uX7RXO/fDzI95nqqfXBzEUz3jXwUkyyXYP02bNNmS2GkBeZLUBziQGqnirRGO8Kpda30F9sMaSxQO8HKeab40EK7TFASaF5WOT+K9UOUfxKZzRBD/UbNai1Wus5REG6SccaPKlSvybHACVlSiYnStacXy13PgRpLN87lxzXYWNGqLMz9w7pLoH5o2wVsj41L2GAkjIvYp2k9TKfkTWW4uuBY+MHj/oLOKTKpdj3sd7T6IVwNfr0AyYP0Ja9e8s6nwGqmncxVtKqg3+FVUIZBW/EhWcdV1ga78VxNdYVu+063H/Xqt7A8yp06Q9MHqBy1xZlgKpmiNdTtDVxRFgllNH7OH+X67DgTvyoRpedbSlDnA9NPHIeSw0860JXDFBSphsWSVpP4w9hlVAGoRGS1bIqJVhVWbFzoBWHPm7bqtcKG9Yp21XcDzFASZmKtmjJjn6z8Im4Uii9tl5rac3AB6r02uqE8/mAuGHGfnxXTwxQUmgjwtzHdZOT/GnSuOkNxQhJ60UsVafbkBemrlo3v2/e3SksIxMF6GOZ2MwAFaZxijVt5eCKx/GN0FrI23d4W9J6yLPpMKnMRAGqxur2DFAVDYBtmn0qaGiP69iRh/ffM6CRXhuyPJ2zncBIAQYoKdYrETi8fmcc8GsR32eTfl7xisxRXjPOSE2mCtCk0+nFMkBFqrkkzv4z7J9OnNRiLKWSkzzr6BW6BP0WosprTBSgZxGd4e8pbyIJVvC+Fg+V930a6ewHbApOO56MP/kDTismCtBlQPX0rzFAiTJR5hxWO/d8Cp2E2AaCq/FjJgrQj4FO6V9jgFKTCXuunF0/yG/2blNH/UuI/LJNw8c/OYPYtr5PJ4VMFKCtgS/Tv8YAzetKr3DdTLz5Hr+nSpX/1fUHs/Ne0aX4MxMFaGlkfFyQAZrHVTqNyE8b3l7uqUU2Q6+KL0KDrzaGb5nanH8qWjJRgFoGffxO+peCC8ibf8gA9S8F9mOba6eJ1lEqPbBIlHNmClAVMED9yyAcdi8X/IQ17q7sziVSHwOUzCvgDFp5WvPxubhSKG9igJJ51cVZyRW+5tgjrhTKmxigZF7Pev3rLIZoYZVQHsUAJfPqjHmSVrAtiXecSV8MUDKv5tgmaVXFOWGVmEHJ/huPHN8xrIroOvwKA5TMq2hCchlPawAWiCvF8ALejXZOrE+aGCq6Fj/CACUTW4pv3cdFLuBZgaUYXOBcYFmHWjVbTUrCX8VEV+M/GKBkYnWSbV1chyHLsYOXQLM0ApFtnEf3HMVv/INSCwOUzKwvbOMdn6ca/oVrvLqXpepJyc3SjstfwQsia/ErDFAytXeSEb9u2px/gZO1RddiYN9iqqfRAzvEVeJnGKBkbvV+SbHfGrk+vKjvc/OuU2joaYTespbJ+lSSgwFKZlf6uZ7dWwT7Pi8PSzdF9k80FVaKn2GAEvm92xApbf6OJ0VV4m8YoER+LyDBVljSPIh6wkrxMwxQIv+3F
R08jcq26BBxpfgXBiiR/3sb/wS5GzMxS1wlfoYBSuT/CpzGN2m3kV5BYlWhxfgTBihRHtAkHsvK2w+KjLWhh+hq/AcDlCgvaBmBxHUTvvo5Gsl9RdfiRxigRHlC+RkJ9icObL839H0u5RQDlCiPKN554PuvlhddhX9hgBIRKcQAJSJSiAFKRKQQA5SISCEGKBGRQgxQIiKFGKBERAoxQImIFGKAEhEpxAAlIlKIAUpEpBADlIhIIQYoEZFCDFAiIoUYoERECjFAiYgUYoASESnEACUiUogBSkSkEANUG3f1Gjflq9dK6zMYEYnBANVC8ckp9g0QkfhlIT2GIyIxGKAaqPEfklcM6fnxWiv+uUuH8YhIDBMGaKHKdRvXrazso50uAVrqJHb/z3HU8BD2FtZ+QCISw2QBWqzr0pM2x7dj26mlLxeV/X5dAnQWdoS6DosdxGjtByQiMUwVoPkHR0Eq8r1gmT3oEaBVrPEV3Y0G1vgSmo9IRGKYKUCL/uHMTVtcRJzzYyjWFpHXhR4B+i5mSVq/o5vmIxKRGGYK0GWpiXlwaPPyganHgeWbDz2U2l4irws9AnQ2uktag/CV5iMSkRgmCtDWqd/ZO3q90jn1G30rWX3oEaCr8JSk1RWzNR+RiMQwUYDOg+3RdC81A36S1YceAToXXSWtAZig+YhEJIaJAvQkNmR4bSNOyOpDjwAdgh8krWV4Q/MRiUgMEwVoHMZleG084mT1oUeA3oOYUu5G1cSUOzQfkYjEMFGARmJyhtemIEJWH7rMA12KVflch6HbMUn7AYlIDBMF6H6cCEz3UuBJ7JPVhy4BetcV/OZcRqT8Npworv2ARCSGiQJ0NDAy3UthwChZfejzLHyT67g17cUWL81JwLnaOoxHRGKYKECrJwDL6kpeqL8ciK8mqw+dVmOq/Ktznr91Pi+AEvkxEwWoZYA9k45MG9yl3VPtugyedsTe7CuvC93WA73ng5/WzXm3qj6DEZEYZgpQS79keEvuI7MHrkhPROoxVYBaai5OlMRnwsIacjtggBKReswVoBZL0Y5hK3YfOXNk94rRHeWvZscAJSIVmS1Ac4kBSkTq8ecA7XMig4RMHgclIlLGnwN0OjKxQ3RVROQ3TBug+atUCvVxSmCVDA7wKzwRqcZcAfrwyCkfOeZWVlkQB1h3dkv/bKcvvAZKROoxU4DmX2z/Dp7U3WJpEu36Qv6rr0+h6TBAiUg9ZgrQac7QTGp820X3Jc2MCzRliwFKROoxUYDWswE7wqZGYfWHwMJG+UPuX5KaoLVk9cEAJSL1mChAv4Jjf7ZKV1JOpi2t/A3wpaw+GKBEpB4TBegu3Chs//V94EqI86WCN/CnrD4YoESkHhMF6A3XHsZ3A3PSXluIq7L6YIASkXpMFKDJri/uhSQLK49Boqw+GKBEpB4TBWi865Z7KWBs2mtfI1ZWHwxQIlKPiQL0PNY5fm0M/J722h84K6sPBigRqcdEAboaKY79O2YAyVWcL1VPwUpZfTBAiUg9JgrQfsBfVSxBvVJurEX47fZXSu0B+snqgwFKROoxUYCWjARs52KAic8BNya89tqECCBK3q7BZgrQis/0bFdFdBFElA0TBajlJZvj6c1TxQPWuB/llFm9aQI0oMMexz/fvpfkrpdCRLoxU4BaOpyzL+hZw2IputYZnzHdZfZglgAttBiI/HnKkhvACgU7lxCRLkwVoJbAes2d+8AHtJu3e9cvA0vJ7cAkARqwHFG97U9bBb92AxuCRZdDRJkzV4DmmkkCtA+u13Ud1ryET4XWQkRZYoAaUKHraO1uPIIYeTfKiEgvDFADeh47Ja31eEVYJUSUHQaoAX2FjyWt/pgirBIiyg4D1IDmoouk9Qx+FVYJEWWHAWpAP6CHpNUJ84VVQkTZYYAa0BCvrZ7GYoSwSogoOwxQA6qDG8XcjYIX0URgLUSUNQaoEW2Q3Dcaj50BAkshoqwxQI2oXhzG5nMcBQ1HwgOCq
yGiLDBADem5BBx4vUbxaq/uRXIX36cTkRAMUGN68LhruakzzUWXQkRZYYAaVEjvNWcjzq3vEyq6ECLKEgOUiEghBigRkUIMUCIihRigREQKMUCJiBRigBIRKcQAJSJSiAFKRKQQA5SISCEGKBGRQgxQIiKFGKBERAoxQImIFGKAEhEpxAAlIlKIAUpEpBADlIhIIQYoEZFCDFAiIoUYoERECjFAiYgUYoASESnEACUiUogBSkSkEAPUNBp8OnHSsMYBossgIjcGqEk8vBMOe1uJroSI0jBAzaF/Cs5P7NtnwknYhomuhYhcGKCm8AqsnxewHwQPSsIA0dUQkRMD1AzK3kSPtOP21qRqImshIjcGqBmMwc+exg+YIa4SIpJggJrBSTT2NKohIp+4UojIgwFqAkURI529dApVhZVCRBIMUBOoghPS5g7p51EiEocBmiP3DBg54u3yKheTY8URIW0eQw1RlRCRFAM0Bx7b4ZjDbv2lluoF5UjAJdT2tMpZbxUQUwcReWOA+vaxDZdmfDx0YRRiX1S/pJyYhKmeRhgWi6mCiNJhgPo0ACmfOD7ylZiO5CdULyknqiWltE47bpJgayikCCJKjwHqy93Jtg5px5/hcmF1C8qhDxH3qvNG/PNR+EpICUSUAQPUl7mY5D4O2Ib3Va0npwK+Bna92+apdzYDs4OElEBEGTBAfSgQY6vgabVBuKr15NzLZ52rMV15kwvaERkFA9SHOjgoaYWkpIj6/Ffg6a9XrJz4fEFBwxNRRiYM0EKV6zauW7mQovfKD9DHsV7avIwyigYmIj9ksgAt1nXpSZvjq6zt1NKXi8p+v/wAbYTdklZAvM1xQz6kVd/Bbz3Mi5FEeZupAjT/4ChIRb4XLLMH+QFaPDmhiKfVCKdS/z/kwxjH+BffCJTZGxH5EzMFaNE/nLlpi4uIc34Mxdoivt8mpeAu/AbpAsazMMFiuWMX8O+UsJkngV/EzGoiIkMwU4AuS03Mg0Obl7d/7Ass33zoodT2EnldKAjQVoiq6T62JlS2FNmPE83trYBO17GKX+OJ8i4TBWjr1O/sHb1e6Zz6jV7eHmtKnkRagPMPusa7iQ8slu9xsITrt6pcRj/Z/RGRvzBRgM6D7dF0LzUDfpLVh5IADV0H2+KXGj/YcyPwfYClQlKyZ02RNrgWIrtDIvITJgrQk9iQ4bWN3itl+qRoMZF8w2+5JrHbNyYagPleHbaR3yER+QcTBWgcxmV4bTziZPWhcD3Q0r0W/Ll1ZmfHHaP56Cr5neEYrqRDIvIHJgrQSEzO8NoU76WGfVJhRfoNaCZp9cSU3HZIRGZlogDdjxPpp10GnsQ+WX2oEKC/4BlJaxDG5rZDIjIrEwXoaGBkupfCgFGy+lAhQMdghKS1FL1z2yERmZWJArR6ArCsruSF+suB+Gqy+lAhQB/FuVB3464421257ZCIzMpEAWoZYL8TfmTa4C7tnmrXZfC0I/ZmX3ldqBCgATsxPu0432+Yk9v+iMi0zBSgln7J8JbcR2YPamxrXD8OYc5n8Asvw6U7ct0fEZmVqQLUUnNxoiQ+ExbK3t5XlX3hOybicN+GVZp8cAFRTXPfHRGZlbkC1GIp2jFsxe4jZ47sXjG6o/zV7NQJUMtDB10Rvu1uFXojIrMyW4DKUaRFBsdUCVBL0LPTd53Y9m0z32cSkR/z5wBdhEzs9v0+IqIc8ecAfWFdBpfwneiqiMhvmDRACzd95pnGob7PS2+cdHVkIqJcMWWAtljnmM+U8EtDue9kgBKRekwUoNUPHOhi/zXga/cFzZRBMvtggBKRekwUoO/g/+3deZwU5YHG8ZoZDrkTYQOuQVERj2xk13XXGIiIStTNKhr94Hps9kJRzIIrJm5CUD+gXCoCRhORjwuKxkUQXeScFRBNgooCE5VDjnCIoCDMwMwwVz9bZ3f10KZ7pHrrnfb3/afefrur5pnrma6u6hr1dpbjnOqs3Ly50lk28Z3oFCiA6
DSjAp2vfUX24uwGaXHfEssqubBUqurWpG1QoACi04wKdIPecxZTpceCqV+pidczpkABRKcZFWi5FjqL97Uz+W+IjtutVU3aBgUKIDrNqECrtMBZlGtaau6/9FmTtkGBAohOMyrQbVrrLKr1YGpuomqatA0KFEB0mlGBvqoG5/+xf6QZqbnntKtJ26BAAUSnGRXordL99uJxfdYhmOr0uZY3aRsUKIDoNKMC7bhHtZdY1llHNNu7nrHVcm5TL0k/Sfu2ZLK1vqHWfA2JuBPkoD5RF3eE7JpFyLpEfdwRsmsmIasy/tpH4FDzKVDrJqluTGdrqLT2xq6W1e2mMmln0y4KOjzTBZoA4Evqk6e6y4N77bhHXhu/0old574dvuripm2h6JRTM+qrPf3MV6XL446Q3XoNiTtCdks0Nu4I2U3TrLgjZPdzvRl3hOxu1rbMv/cRODE/XZcfgyvSy3/X9yLa8EnaHtGW8umQ2scdIbu39TdxR8hupn4Ud4TsfnbUv/E20MBoLlGeX720Me4IhjjpsfJUfW69t11k26VAo0KBRoUCjQoFmtJuwKhn5q9YNHvKLedGuFUKNDIUaFQo0KhQoPlGgUaGAo0KBRoVCjTfKNDIUKBRoUCjQoHmGwUaGQo0KhRoVCjQfKNAI0OBRoUCjQoFmm8UaGQo0KhQoFGhQPONAo0MBRoVCjQqFGi+UaCRoUCjQoFGhQLNNwo0MhRoVCjQqFCg+UaBRoYCjQoFGhUKNN8o0MhQoFGhQKNCgeYbBRoZCjQqFGhUKNB8o0AjQ4FGhQKNCgWabxRoZCjQqFCgUaFA840CjQwFGhUKNCoUaL4dX7cu7gg52FXVMu4I2ZUmesUdIbupuiruCNndrnvijpBdP82MO0J2JzS8FXeEQtfnjLgT5ODb58WdIAcnN/G/rMTi+KuK446QXeur28YdIQc/+EbcCXLwvZ5xJwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADQVD3Hrfu8astzl8Wd4yjfuHzkS9slXZQ2a1jcXsPnbq6s2bNs1DfDs2aF7Hr9pNe2Ha4vX//8oFahabNCBk6qsL/lfVO3zUr5sFI2pKbNCunoeMurWysPrp8/7PTklHkhC8K/V/s/D3M6xB0l3d3BD+pF4Vmz4l6yOvnrVDOqKDltVkhrc+qXftMFyVnDQgaWKq1ADUuZuUANC2m76ZMg5fvBlHkhC8KP7a/o1icfXW4vlraMO0yaXzjf7Ira9AI1LO79do7Eh0vn/yFhD54MZg0L6RTop6sWvPzGQTtQ9fn+pGkhfbcqURsqUNNS2gW6ao5vSjBpWkjLGmVHaVi3aPG6A8kCNS9kQehVJ40vtgf97d+un8SdJs2wZQ/f0Kvoj2kFalrc+xNLbv6aMzjzDfsH80pv0rSQ1shr/txdtrhhv/SeN2dcSI+9A//EnlSBGpfSLtB/aDxnXEhrsFQ7prMzKjr3Zm/KvJCF4b+lV7zRDdKBjvGGySS9QE2Le+k5wajNWmmxNzQtZMgAu+ZPdkeGhlyqHR1CBWpcykwFalxI+69Q3fcbzRkXsjB0rJF6e8OiD6Wb402TSVqBmhx3kHTQHZgc0rKffrivghoa8hbpCitVoOalzFCg5oWcKj3YaMq8kIVhUOil8PukuXFmySytQE2O+y0p4ewiGR2yRaV0ijMwM2T3cj1rhQrUvJQZCtS4kO3KdaRLoznjQhaI8dL0YHyxtDXOLJmlFajJcftLn7oDk0PeIa12B2aGXKK9ncMFal5Ku0CXry2v3rX0ns7BlHEhB0jLrDOmbqo6UDb5DH/OuJAFYr50TzDubj+FahdnmIzSCtTkuL+UXnAHpoYsPr7f9IT2eK/aGhnS3oG/3goXqHkpU6cxVQ73p4wLOUoaf5t/zlL9aO/kOuNCFoh3pJuCcXHC370zSlqBGhz3rCNSH3dkZMjx3u/T4We6ebdNDGnvwLvHO
VIFal5Ku0C3Lpmz1D3LcrI3ZVzIZ6TXpc9mjn1ql53yEXfOuJAFYoN0dfJGlXTOn3hsPNIK1Ny47dZIz3hDI0P6BbrgSv9sfxNDLtZB93SrVIGal3LwUO+MsP5lCl4NNS7kfOc7Pds5W761vVuki50540IWiO3S3yVvfC59J8YsmaUVqLFxS16RNrT3xkaG/OH06TMX7rV/nRZ7Z7AYGHKwdIs7SBWogSkDbVdJW9yDhsaFXGF/m9f658ovlkqdpXEhC4T5f5iaxTPQ4melXaf5N0wNadf8dfauZ6n7HNS8kPYO/DJvZPIz0JSzGqS/dQbGhVxkF+iN/rivVOf8YTcuZIEw/6WR5vAaaMks6ePkRRsMDenqVen/IpkXcrEq/T9BJr8GGvK2NMxZGhdytl2gXf1xif0N/65lYMgCYf7BuWZwFL7Fb+z+7JW8aWZI3y+lmc7SuJDXSiP8oclH4UOe9U9XNy7kFPtZZ/LGZukqy8CQBWJC6PSw/tK2OLNkllagRsZtOdfef09dNMzMkIF/ld5wlsaFvFONOMe4jUsZFhSocSFvzVCgxoUsENeH3qBwr/RSnFkySytQE+O2tv+47zgtNGFiyKQh0mvO0riQGQvUuJRhwS68cSHPtb96/tlq7i68c8DIuJAFolNN6vXkD6R/jDVMRmkFamDcNkukP6a9omRgyJRZ0tPO0riQ35+RVC0tnDHDOQxiXMqQMxsk99KA5oXclnrB80LpSBvLxJAF4sXkXyP7b9TBTvGGyST9akzGxW23zN4f6pE+Z1rI1qlh33rpWndkWsiQ0NWYTEuZeumwze+l7e5pTMaFdN7svs4/jWmp9LI7MC5kgTijThrr/Bz0OxB6mdkg6QVqWtz2K6UtJzWaNC3kI7OvbOMOOv/0sFTWwh2bFjIkVKCmpRy38FqvQy92TqT3r2lkWkir/W5prtORx/1Kqj/PnTMuZKEYbv8gbPn1pGUJ6bVW2R/+/6ile9nvSul1ZznAmzQs7jQ7zm/nJHnvUjEt5GSp7sPSeYs+sJ9+6pOz/FnDQoaECtS0lOPtQlpfOu9/nbckaGowa1hIy7roiLR/1oSn7SLV3f6ccSELxZ3Bv0qZZ9hVVo9LO6Zwmz9rVtwX0g98nOlPmxVybCjhwh7JabNChoQL1LCU41NfyYo7UtNmhbQN2OUHqrotOWdcyEJx+oSyA1Vbn78i7hyNZS5Qs+J+QYGaFdL69oiXPiyvr/z4tXG9w9NmhUxJK1CzUra9YuySLYcbKrbOHZrWQkaFdHQcvmJ37b63Rp8QmjMuJAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAiFQ3aXXcGQCgiTYrbFeT130heevSYyhBChRAM3SsBZr46+AWBQrgK8YuwTfnJD3Z1HWl0uAWBQrgK8Yuwb8/hnXrpQH+LQoUwFfMMRboU9K7Rd4tChTAV0yGAvXa7JpXd9bsnX9lavqbD31w+EDZAyda/yndFqz7nY3SDd79yQK9XPp1sNL90j8nt1l8c+nu6o1PnORMnDN9Q9X+0oHhD9ny35Z/cmTHrO+Gopw/tWx/7Selw9uGH1c0aP72GvWM5NMHgC/vCwq03Yv+YaWn/OeX1jUV3sS+S8MFet510uaW7q2sBdppsbeF8vMta3TCGz+S+pBdf+dNJR4KVu40Jzi4tbtv6nHHL3GnTo/6KwEATfQFBfqiDs196PENdlEN9SYvrZN2TrnrgXdVMT1coNZb0o+9R2Qp0Hf/R/ufmzhzv93BHUeqbumkqe/bm/9h8CHXvK5Pnxgx5h17bry37tft+6tfHvOzx+yPU3NB8Lh3F6h2+fTfbO2Vly8HAOQuc4E2aF5ne1T8iPRxiTPXfoc0q40zGuY8/QsVaH9pb3vnVrYCb
dCzHexRlz9IL9RvOtvZ/BTpveBDSgs6OcPbE2rw9uJfkeb9mTMoGSltb5XczArnNYCikqi/EgDQROmnMblH1J02+30L996STVIfZ3C7/dTPr6zJ6QVqLZLuc25lK1CtKHanLrGHh051h633SCcHd29p463ykLTQWdrNvCxoySelfwket7FN5F8EAPgy0k+kd4vRaan+/t3j/B3030pX+1NdatMLtHdCFc4TxawFeqE3VXxImuLfPVMaGNw92J/rVK2GrvbyRemCYDOnSS8Fj7sxws8eAI5B5gI9FDz1+ydptL1odUS1rYNVlqcXqDVLeszKXqAVxf5cmXS5P7xXGuLfrS7BOgvcVi3ap/JUzIPa5j+uoUNEnzkAHKPMr4F+EIyvkSZZ7jPA95P3T21UoD1qVHtq9gJNbtN+NnuWP/wP6S7/7o+T27ef9f7Usror3SH/cduO+VMGgGh84XmgnqulyfbiPGll8v77GhWo86roh0407wAAAq1JREFU87mcB+p7U+rhD++U7vbvLktuf4Q0zrL+slGB1vuPW3OMnzAARCWKAu1SocRfRV2g9of86KKQfo03AwAxy61Ae/6pXXjLGiUtSRXoZVLyoiQTcy7Q1C78WHcX/hRp31FpKVAABsmtQNMOIi07qkDb7ZEuThZoH3eP3jM75wJtfBCp5WHpW43TUqAADJJbgToHfoL3rXeuPapAraHSOwOC1XoGp8dbVpt9uRdo6DSmhHMa06ups50yRgOAmOVYoE5D+qchTdLRBdriI2lasFpRuRL+Gy3vU+4FGpxIb+/1L3KWlyXfwOlq2XgzABCzHAu0/U5p5nHOaGgiQ4Fag9xLg/qrzZB+9zV7WfKThobcCzT1Vs6Ed+mQ+VL5Dd61TFoPXDGw8WYAIGbpb+Wc093KXKDWZXXSjsl3jVntXkxkSLBuUKBFzkVAgtXOOCLtmzHh6e3a/njOBbrmde19fMRoZzv+5Zg6rXGel04bNXLS0nL/jVAUKACDpL8TSX9hfUGBWtcdCi5n93PpR8G6QYG673BPrjao1nvsxrNzP41pdbdV/uXsHg2uoNd2Wn0y2GfnN94MAMQs5wK1uj+8vvJg2YMnWhOCA0rhArWWhArUOvOprdWfv313hyacB7raajXk9T01O5/vG0p32uiVu2uqd6989ArvmqMUKIDm7RWpd9wZAKA5antA1S3iDgEAzdEvpDlxZwCAZuSEB7x3ChXfUZe8sicAIAc9VLto9NBhj26Sd/VPAECOeqSuKzexKPvDAQCBkn4PrNx8sO7Ttyfy7zABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQOP4PBpfWWcGO0hcAAAAASUVORK5CYII=" title alt width="672" /></p><p>The EngCount column is actually a character data type which is not correct. 
The transform method changes the datatype.</p><pre class="r"><code>pokemon_set_dataframe <- transform(pokemon_set_dataframe, EngCardCount = as.numeric(EngCardCount))</code></pre><p>Now it’s possible to sum the card counts.</p><pre class="r"><code>noquote(format(sum(pokemon_set_dataframe$EngCardCount), big.mark=","))</code></pre><pre><code>[1] 7,372</code></pre>tag:blogger.com,1999:blog-3207985.post-45266299046991480782014-10-10T16:01:00.000-04:002014-10-10T16:01:41.303-04:00How do I provide a single file to multiple Docker containers?http://www.cyberciti.biz/faq/bash-shell-change-the-color-of-my-shell-prompt-under-linux-or-unix/tag:blogger.com,1999:blog-3207985.post-52087939468906567482014-09-29T17:07:00.001-04:002014-09-29T17:08:06.615-04:00Simple Explanation of the MIT D4M Accumulo Schema<a target="_new" href="https://github.com/medined/D4M_Schema">https://github.com/medined/D4M_Schema</a> provides a step-by-step introduction to the D4M NoSQL schema used by many organizations.<br />
<br />
D4M is a breakthrough in computer programming that combines the advantages of five distinct processing technologies (sparse linear algebra, associative arrays, fuzzy algebra, distributed arrays, and triple-store/NoSQL databases such as Hadoop HBase and Apache Accumulo) to provide a database and computation system that addresses the problems associated with Big Data.tag:blogger.com,1999:blog-3207985.post-86143537368536989062014-07-15T14:17:00.001-04:002014-07-15T15:12:41.755-04:00Sharing Files Between Docker Images Using Volumes<p>Recently I wanted to provide the same configuration file to two different Docker containers. I chose to solve this using a Docker volume. The configuration file will be sourced from within each container and looks like this:</p><br />
<pre><code>$ cat bridge-env.sh
export BRIDGENAME=brbob
export IMAGENAME=bob
export IPADDR=10.0.10.1/24
</code></pre><br />
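<p>Sourcing the file makes each exported variable available to the current shell. A minimal sketch of what each container does with it (the /tmp path is my own stand-in for wherever the file actually lives):</p>

```shell
# Write a throwaway copy of the configuration file, then source it.
cat > /tmp/bridge-env.sh <<'EOF'
export BRIDGENAME=brbob
export IMAGENAME=bob
export IPADDR=10.0.10.1/24
EOF

# Sourcing (the "." builtin) runs the exports in the current shell.
. /tmp/bridge-env.sh

# The exported variables are now visible here.
echo "$BRIDGENAME $IMAGENAME $IPADDR"
```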
<p>Before any explanations, let's look at the files we'll be using:</p><br />
<pre><code>./configuration/build_image.sh - wrapper for _docker build_.
./configuration/run_image.sh - wrapper for _docker run_.
./configuration/Dockerfile - control file for Docker image.
./configuration/files/bridge-env.sh - environment setting script.
</code></pre><br />
<p>All of the files are fairly small. Since our main topic today is Docker, let's look at the Docker configuration file first.</p><br />
<pre><code>$ cat Dockerfile
FROM stackbrew/busybox:latest
MAINTAINER David Medinets <david.medinets@gmail.com>
RUN mkdir /configuration
VOLUME /configuration
ADD files /configuration
</code></pre><br />
<p>And you can build this image.</p><br />
<pre><code>$ cat build_image.sh
sudo DOCKER_HOST=$DOCKER_HOST docker build --rm=true -t medined/shared-configuration .
</code></pre><br />
<blockquote><p>I set up my Docker daemon to listen on a TCP port instead of a UNIX socket, so my DOCKER_HOST is "tcp://0.0.0.0:4243". Since <em>sudo</em> is being used, the environment variable needs to be passed into the sudo environment. If you want to use the default UNIX socket, leave DOCKER_HOST empty; the command will still work.</p></blockquote><br />
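<p>The same environment-passing rule can be seen without Docker at all: a variable prefixed to a command is visible only to that child process, which is why DOCKER_HOST has to be restated inside each sudo invocation. A small illustration:</p>

```shell
# An inline assignment is scoped to the child command only,
# mirroring how DOCKER_HOST is handed through sudo above.
child=$(DOCKER_HOST="tcp://0.0.0.0:4243" sh -c 'echo "$DOCKER_HOST"')
echo "child saw: $child"

# The parent shell never set the variable, so it is still unset here.
echo "parent sees: ${DOCKER_HOST:-<unset>}"
```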
<p>Then run it.</p><br />
<pre><code>$ cat run_image.sh
sudo DOCKER_HOST=$DOCKER_HOST docker run --name shared-configuration -t medined/shared-configuration true
</code></pre><br />
<p>This command runs a Docker container called <em>shared-configuration</em>. You'll notice that the <em>true</em> command is run, which exits immediately. Since this container will only hold files, it's fine that no processes are running in it. <b>However, be very careful not to delete this container.</b> Here is the output from <em>docker ps</em> showing the container.</p><br />
<pre><code>$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d4a2aa46b5d9 medined/shared-configuration:latest true 7 seconds ago Exited (0) 7 seconds ago shared-configuration
</code></pre><br />
<p>Now it's time to spin up two plain Ubuntu containers that can access the shared file.</p><br />
<pre><code>$ sudo DOCKER_HOST=$DOCKER_HOST docker run --name A --volumes-from=shared-configuration -d -t ubuntu /bin/bash
94638de8b615f356f1240bbe602c0b7862e0589f1711fbff242b6d6f74c7de7d
$ sudo DOCKER_HOST=$DOCKER_HOST docker run --name B --volumes-from=shared-configuration -d -t ubuntu /bin/bash
</code></pre><br />
<p>How can we see the shared file? Let's turn to a very useful tool called <em>nsenter</em> (or namespace enter). The following command installs nsenter if it isn't already installed.</p><br />
<pre><code>hash nsenter 2>/dev/null \
|| { echo >&2 "Installing nsenter"; \
sudo DOCKER_HOST=$DOCKER_HOST \
docker run -v /usr/local/bin:/target jpetazzo/nsenter; }
</code></pre><br />
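<p>The <em>hash cmd || fallback</em> idiom above is a general way to run an install step only when a command is missing: <em>hash</em> succeeds (exit 0) when the command can be found and fails otherwise. A stripped-down sketch of the pattern, with the install step replaced by a hypothetical echo:</p>

```shell
# Runs the fallback only when the named command cannot be found.
ensure_cmd() {
    hash "$1" 2>/dev/null || { echo "would install: $1"; }
}

ensure_cmd ls                  # present everywhere, so the fallback is skipped
ensure_cmd no-such-cmd-12345   # missing, so the fallback runs
```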
<p>I use a little script file to make nsenter easier to use:</p><br />
<pre><code>$ cat enter_image.sh
#!/bin/bash
IMAGENAME=$1

usage() {
    echo "Usage: $0 [image name]"
    exit 1
}

if [ -z "$IMAGENAME" ]
then
    echo "Error: missing image name parameter."
    usage
fi

PID=$(sudo DOCKER_HOST=$DOCKER_HOST docker inspect --format '{{.State.Pid}}' $IMAGENAME)
</code></pre><br />
<p>This script takes the image name as its only argument. For example,</p><br />
<pre><code>$ ./enter_image.sh A
root@94638de8b615:/# cat /configuration/bridge-env.sh
export BRIDGENAME=brbob
export IMAGENAME=bob
export IPADDR=10.0.10.1/24
root@94638de8b615:/# exit
logout
$ ./enter_image.sh B
root@925365faded2:/# cat /configuration/bridge-env.sh
export BRIDGENAME=brbob
export IMAGENAME=bob
export IPADDR=10.0.10.1/24
root@925365faded2:/# exit
logout
</code></pre><br />
<p>We see the same information in both containers. Let's prove that the bridge-env.sh file is shared instead of being two copies.</p><br />
<pre><code>$ ./enter_image.sh A
root@94638de8b615:/# echo "export NEW_VARIABLE=VALUE" >> /configuration/bridge-env.sh
root@94638de8b615:/# exit
logout
$ ./enter_image.sh B
root@925365faded2:/# cat /configuration/bridge-env.sh
export BRIDGENAME=brbob
export IMAGENAME=bob
export IPADDR=10.0.10.1/24
export NEW_VARIABLE=VALUE
</code></pre><br />
<p>We changed the file in the first container and saw the changes in the second container. As an alternative to using nsenter, you can simply run a container to list the files.</p><br />
<pre><code>$ docker run --volumes-from shared-configuration busybox ls -al /configuration
</code></pre>Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-85538176276251338072014-07-12T09:35:00.000-04:002014-07-12T09:35:07.972-04:00Running a Single-Node Accumulo Docker container<br />
<p>Based on the work by sroegner, I have a github project at https://github.com/medined/docker-accumulo which lets you run multiple single-node Accumulo instances using Docker.<br />
</p><br />
<p>First, create the image.</p><br />
<pre>git clone https://github.com/medined/docker-accumulo.git
cd docker-accumulo/single_node
./make_image.sh
</pre><br />
<p>Now start your first container.</p><br />
<pre>export HOSTNAME=bellatrix
export IMAGENAME=bellatrix
export BRIDGENAME=brbellatrix
export SUBNET=10.0.10
export NODEID=1
export HADOOPHOST=10.0.10.1
./make_container.sh $HOSTNAME $IMAGENAME $BRIDGENAME $SUBNET $NODEID $HADOOPHOST yes
</pre><br />
And then you can start a second one:<br />
<br />
<pre>export HOSTNAME=rigel
export IMAGENAME=rigel
export BRIDGENAME=brrigel
export SUBNET=10.0.11
export NODEID=1
export HADOOPHOST=10.0.11.1
./make_container.sh $HOSTNAME $IMAGENAME $BRIDGENAME $SUBNET $NODEID $HADOOPHOST no
</pre><br />
And a third!<br />
<br />
<pre>export HOSTNAME=saiph
export IMAGENAME=saiph
export BRIDGENAME=brsaiph
export SUBNET=10.0.12
export NODEID=1
export HADOOPHOST=10.0.12.1
./make_container.sh $HOSTNAME $IMAGENAME $BRIDGENAME $SUBNET $NODEID $HADOOPHOST no
</pre><br />
<blockquote>The SUBNET is different for all containers. This isolates the Accumulo containers from each other. </blockquote><br />
<p>Look at the running containers</p><br />
<div style="overflow:auto;"><pre>$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
NAMES
41da6f17261f medined/accumulo:latest /docker/run.sh saiph 4 seconds ago Up 2 seconds 0.0.0.0:49179->19888/tcp, 0.0.0.0:49180->2181/tcp, 0.0.0.0:49181->50070/tcp, 0.0.0.0:49182->50090/tcp, 0.0.0.0:49183->8141/tcp, 0.0.0.0:49184->10020/tcp, 0.0.0.0:49185->22/tcp, 0.0.0.0:49186->50095/tcp, 0.0.0.0:49187->8020/tcp, 0.0.0.0:49188->8025/tcp, 0.0.0.0:49189->8030/tcp, 0.0.0.0:49190->8050/tcp, 0.0.0.0:49191->8088/tcp saiph
23692dfe3f1e medined/accumulo:latest /docker/run.sh rigel 10 seconds ago Up 9 seconds 0.0.0.0:49166->19888/tcp, 0.0.0.0:49167->2181/tcp, 0.0.0.0:49168->50070/tcp, 0.0.0.0:49169->8025/tcp, 0.0.0.0:49170->8088/tcp, 0.0.0.0:49171->10020/tcp, 0.0.0.0:49172->22/tcp, 0.0.0.0:49173->50090/tcp, 0.0.0.0:49174->50095/tcp, 0.0.0.0:49175->8020/tcp, 0.0.0.0:49176->8030/tcp, 0.0.0.0:49177->8050/tcp, 0.0.0.0:49178->8141/tcp rigel
63f8f1a7141f medined/accumulo:latest /docker/run.sh bella 21 seconds ago Up 20 seconds 0.0.0.0:49153->19888/tcp, 0.0.0.0:49154->50070/tcp, 0.0.0.0:49155->8020/tcp, 0.0.0.0:49156->8025/tcp, 0.0.0.0:49157->8030/tcp, 0.0.0.0:49158->8050/tcp, 0.0.0.0:49159->8088/tcp, 0.0.0.0:49160->8141/tcp, 0.0.0.0:49161->10020/tcp, 0.0.0.0:49162->2181/tcp, 0.0.0.0:49163->22/tcp, 0.0.0.0:49164->50090/tcp, 0.0.0.0:49165->50095/tcp bellatrix
</pre></div><br />
<p>You can connect to running instances using the public ports. Especially useful is the public zookeeper port. Rather than searching through the ports listed above, here is an easier way.</p><br />
<pre>$ docker port saiph 2181
0.0.0.0:49180
$ docker port rigel 2181
0.0.0.0:49167
$ docker port bellatrix 2181
0.0.0.0:49162
</pre><br />
<blockquote>Having '0.0.0.0' in the response means that any IP address can connect.</blockquote><br />
<p>You can enter the namespace of a container (i.e., access a bash shell) this way.</p><br />
<pre>$ ./enter_image.sh rigel
-bash-4.1# hdfs dfs -ls /
Found 2 items
drwxr-xr-x - accumulo accumulo 0 2014-07-12 09:13 /accumulo
drwxr-xr-x - hdfs supergroup 0 2014-07-11 21:06 /user
-bash-4.1# accumulo shell -u root -p secret
Shell - Apache Accumulo Interactive Shell
-
- version: 1.5.1
- instance name: accumulo
- instance id: bb713243-3546-487f-b6d6-cfaa272efb30
-
- type 'help' for a list of available commands
-
root@accumulo> tables
!METADATA
</pre><br />
<p>Now let's start an edge node. For my purposes, an edge node can connect to Hadoop, Zookeeper and Accumulo without running any of those processes. All of the edge node's resources are dedicated to client work.</p><br />
<pre>export HOSTNAME=rigeledge
export IMAGENAME=rigeledge
export BRIDGENAME=brrigel
export SUBNET=10.0.11
export NODEID=2
export HADOOPHOST=10.0.11.1
./make_container.sh $HOSTNAME $IMAGENAME $BRIDGENAME $SUBNET $NODEID $HADOOPHOST no
</pre><br />
<p>As this container is started, the 'no' means that the supervisor configuration files will be deleted. So while supervisor will be running, it won't be managing any processes. This is not a best practice; it's just the way I chose for this prototype.</p>Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-76517559367005751712014-07-10T08:52:00.001-04:002014-07-10T08:55:11.009-04:00How to find the published port of a Docker container in JavaAfter I spin up Accumulo in a Docker container, well-known ports (like 2181 for Zookeeper) are not well-known any more. The internal private port (e.g., 2181) is exposed as a different public port (e.g., 49143). Java programs trying to connect to Accumulo must discover the public port numbers automatically.<br />
<br />
The Java code below finds the public Zookeeper port for a Docker container named "walt". The leading slash is needed because the Docker API returns container names prefixed with a slash.<br />
<br />
<pre>int wantedPublicPort = -1;
String wantedContainerName = "/walt";
int wantedPrivatePort = 2181;
String dockerURL = "http://127.0.0.1:4243";
String dockerUser = "medined";
String dockerPassword = "XXXXX";
String dockerEmail = "david.medinets@gmail.com";
DockerClient docker = new DockerClient(dockerURL);
docker.setCredentials(dockerUser, dockerPassword, dockerEmail);
List<Container> containers = docker.listContainersCmd().exec();
for (Container container : containers) {
for (String name : container.getNames()) {
if (name.equals(wantedContainerName)) {
for (Container.Port port : container.getPorts()) {
if (port.getPrivatePort() == wantedPrivatePort) {
wantedPublicPort = port.getPublicPort();
}
}
}
}
}
System.out.println("Zookeeper Port: " + wantedPublicPort);
</pre><br />
In order to use the DockerClient object, I added the following to my pom.xml:<br />
<br />
<pre><dependency>
  <groupId>com.github.docker-java</groupId>
  <artifactId>docker-java</artifactId>
  <version>0.9.0</version>
</dependency>
</pre>Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-59163548841032809732014-07-10T08:34:00.005-04:002014-07-10T08:34:50.520-04:00Finding Log Files Inside Docker ContainersAs a simple lay programmer, I sometimes have trouble figuring out where log files are stored on Unix systems. Sometimes logs are within application directories. Other times they are in /var/log. With Docker containers, this uncertainty is eliminated. How? By the 'docker diff' command. I will show how. When connecting to a Docker-based system, you can see the running containers:<br />
<br />
<pre>$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90a9f7122c02 medined/accumulo:latest /run.sh walt 9 hours ago Up 9 hours 0.0.0.0:49153->50070/tcp, 0.0.0.0:49154->50090/tcp, 0.0.0.0:49155->50095/tcp, 0.0.0.0:49156->8025/tcp, 0.0.0.0:49157->8030/tcp, 0.0.0.0:49158->8088/tcp, 0.0.0.0:49159->10020/tcp, 0.0.0.0:49160->19888/tcp, 0.0.0.0:49161->2181/tcp, 0.0.0.0:49162->22/tcp, 0.0.0.0:49163->8020/tcp, 0.0.0.0:49164->8050/tcp, 0.0.0.0:49165->8141/tcp walt
</pre><br />
Then you can list changed files within the container using the container id or name.<br />
<br />
<pre>$ docker diff walt
...
D /data1/hdfs/dn/current/BP-1274135865-172.17.0.10-1404767453280/current/finalized/blk_1073741825_1001.meta
...
A /var/log/supervisor/accumulo-gc-stderr---supervisor-5H7Rr7.log
A /var/log/supervisor/accumulo-gc-stdout---supervisor-LK8wDU.log
...
A /var/log/supervisor/namenode-stdout---supervisor-mciN4u.log
A /var/log/supervisor/secondarynamenode-stderr---supervisor-EaluLZ.log
A /var/log/supervisor/secondarynamenode-stdout---supervisor-Ap4Fri.log
C /var/log/supervisor/supervisord.log
A /var/log/supervisor/zookeeper-stderr---supervisor-CCwUGw.log
A /var/log/supervisor/zookeeper-stdout---supervisor-lDiuIF.log
C /var/run
C /var/run/sshd.pid
C /var/run/supervisord.pid
</pre><br />
Armed with this list you can confidently either look in /var/lib/docker or use the nsenter command to join the namespace of the container to read interesting files.Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-24414878334707670162014-06-10T22:10:00.002-04:002014-06-10T22:10:42.146-04:00How to Detach From a Running Docker ImageHere is another quick note. This time about Docker.<br />
<br />
<pre># Run the standard Ubuntu image
docker run --name=bash -i -t ubuntu /bin/bash
# Do something
...
# Detach by typing Ctrl-p then Ctrl-q.
# Look at the image while on the Host system.
docker ps
# Reattach to the Ubuntu image
docker attach bash
</pre><br />
While experimenting with these commands, I noticed that I needed to press <ENTER> to see the prompt after the ^P^Q combination and after reattaching.<br />
Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-40755673681718557022014-05-25T17:15:00.000-04:002014-05-25T17:15:21.100-04:00Accumulo BatchScanner With and Without WholeRowIteratorThis note shows the difference between an Accumulo query with and without a WholeRowIterator. The code snippet below picks up the narrative after you've initialized a Connector object. First, let's see what a plain scan looks like:<br />
<br />
<pre> // Read from the tEdge table of the D4M schema.
String tableName = "tEdge";
// Read from 5 tablets at a time.
int numQueryThreads = 5;
Text startRow = new Text("6000");
Text endRow = new Text("6001");
List<Range> range = Collections.singletonList(new Range(startRow, endRow));
BatchScanner scanner = connector.createBatchScanner(tableName, new Authorizations(), numQueryThreads);
scanner.setRanges(range);
for (Entry<Key, Value> entry : scanner) {
System.out.println(entry.getKey());
}
scanner.close();
</pre><br />
The results of this query, using the data loaded by the SOICSVToAccumulo <br />
class from https://github.com/medined/D4M_Schema, are shown below.<br />
<br />
<pre>600006a870bb4c8471a27c9bd0f3f064265d062d :a00100|0.0001 [] 1401023353637 false
600006a870bb4c8471a27c9bd0f3f064265d062d :a00200|0.0001 [] 1401023353637 false
...
600006a870bb4c8471a27c9bd0f3f064265d062d :state|UT [] 1401023353637 false
600006a870bb4c8471a27c9bd0f3f064265d062d :zipcode|84521 [] 1401023353637 false
6000338cbf2daede3efd4355165c98771b3e2b66 :a00100|29673.0000 [] 1401023273694 false
6000338cbf2daede3efd4355165c98771b3e2b66 :a00200|20421.0000 [] 1401023273694 false
...
6000338cbf2daede3efd4355165c98771b3e2b66 :state|OR [] 1401023273694 false
6000338cbf2daede3efd4355165c98771b3e2b66 :zipcode|97365 [] 1401023273694 false
</pre><br />
Hopefully you can see that this output represents two 'standard' RDBMS records with <br />
columns named 'a00100', 'a00200', etc. This organization becomes really obvious <br />
when the WholeRowIterator is used. The scanner part of the code for this is shown below:<br />
<br />
<pre> BatchScanner scanner = connector.createBatchScanner(tableName, new Authorizations(), numQueryThreads);
scanner.setRanges(range);
IteratorSetting iteratorSetting = new IteratorSetting(1, WholeRowIterator.class);
scanner.addScanIterator(iteratorSetting);
for (Entry<Key, Value> entry : scanner) {
System.out.println(entry.getKey());
}
scanner.close();
</pre><br />
The output for this code is:<br />
<br />
<pre>600006a870bb4c8471a27c9bd0f3f064265d062d : [] 9223372036854775807 false
6000338cbf2daede3efd4355165c98771b3e2b66 : [] 9223372036854775807 false
</pre><br />
What happened to all of the other information? We can find it again using the <br />
WholeRowIterator.decodeRow method as shown below:<br />
<br />
<pre> for (Entry<Key, Value> entry : scanner) {
try {
SortedMap<Key, Value> wholeRow = WholeRowIterator.decodeRow(entry.getKey(), entry.getValue());
System.out.println(wholeRow);
} catch (IOException e) {
throw new RuntimeException(e);
}
}
</pre><br />
This code produces:<br />
<br />
<pre>{600006a870bb4c8471a27c9bd0f3f064265d062d :a00100|0.0001 [] 1401023353637 false=1, 600006a870bb4c8471a27c9bd0f3f064265d062d :a00200|0.0001 [] 1401023353637 false=1,
...
600006a870bb4c8471a27c9bd0f3f064265d062d :state|UT [] 1401023353637 false=1,
600006a870bb4c8471a27c9bd0f3f064265d062d :zipcode|84521 [] 1401023353637 false=1}
{6000338cbf2daede3efd4355165c98771b3e2b66 :a00100|29673.0000 [] 1401023273694 false=1, 6000338cbf2daede3efd4355165c98771b3e2b66 :a00200|20421.0000 [] 1401023273694 false=1,
...
6000338cbf2daede3efd4355165c98771b3e2b66 :state|OR [] 1401023273694 false=1,
6000338cbf2daede3efd4355165c98771b3e2b66 :zipcode|97365 [] 1401023273694 false=1}
</pre>Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0tag:blogger.com,1999:blog-3207985.post-24014922345412407782014-05-25T08:21:00.000-04:002014-05-25T08:21:26.307-04:00Data Distribution Throughout the Accumulo Cluster<br />
<h1>Data Distribution Throughout the Accumulo Cluster</h1><br />
This document answers these questions:<br />
<br />
* What is a tablet?<br />
* What is a split point?<br />
* What is needed before data can be distributed?<br />
<br />
A distributed database typically is thought of as having data spread across multiple servers. But how does the data spread out? That's a question I hope to answer - at least for Accumulo.<br />
<br />
At a high level of abstraction, the concept is simple. If you have two servers, then 50% of the data should go to server one and 50% should go to server two. The examples below give concrete demonstrations of data distribution.<br />
<br />
Accumulo stores information as key-value pairs (or entries). For a visual reference, below is an empty key-value pair. <br />
<br />
<pre>----------- ---------
| key | | value |
----------- ---------
| [nothing here yet] |
----------- ---------
</pre><br />
<h2>Tables</h2><br />
A collection of key-values is called a table. This table is different from one<br />
found in a relational database because there is no schema associated with it.<br />
<br />
<blockquote>What is a Key? See below.</blockquote><br />
Note: Understanding the difference between a relational database and a <br />
key-value database is beyond the scope of this discussion. If you want, you<br />
can think of the "key" in this discussion as a primary key. But, fair warning,<br />
that is a false analogy, one which you'll need to forget as you gain more<br />
proficiency with key-value databases. <br />
<br />
A new Accumulo table has a single unit of storage called a tablet. When created, the tablet is empty. As more entries are inserted into a table, Accumulo may <br />
automatically decide to split the initial tablet into two tablets. As the size <br />
of the table continues to grow, the split operation is repeated. Or you can <br />
specify how the splitting occurs. We'll discuss this further below.<br />
<br />
<blockquote>Tables have one or more tablets.</blockquote><br />
<br />
Below is an empty table. For convenience, we'll use 'default' as the name of <br />
the initial tablet.<br />
<br />
<pre>----------- ----------- ---------
| tablet | | key | | value |
----------- ----------- ---------
| default | | <nothing here yet> |
----------- ----------- ---------
</pre><br />
Even though the table is empty, it still has a starting key of -infinity and<br />
an ending key of +infinity. All possible data occurs between the two extremes of infinity.<br />
<br />
<pre> -infinity ==> ALL DATA <== +infinity.
</pre>
This concept of start and end keys can be shown in our tablet depiction as
well.
<pre>----------- ----------- ---------
| tablet | | key | | value |
----------- ----------- ---------
| start key: -infinity |
-----------------------------------
| default | | <nothing here yet> |
-----------------------------------
| end key: +infinity |
----------- ----------- ---------
</pre>
After inserting three records into a new table, you'll have the following
situation. Notice that Accumulo always stores keys in lexically sorted order.
So far, the start and end keys have not been changed.
<pre>----------- ------- ---------
| tablet | | key | | value |
----------- ------- ---------
| default | | 01 | | X |
| default | | 03 | | X |
| default | | 05 | | X |
----------- ------- ---------
</pre>
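Because keys compare lexically (byte by byte) rather than numerically, zero-padding numeric keys matters. Here is a small JDK-only sketch (illustrative code, not Accumulo itself) showing the ordering:

```java
import java.util.Arrays;

public class LexSortDemo {
    // Sort keys the way Accumulo orders them: lexically, not numerically.
    static String sortedKeys(String... keys) {
        String[] copy = keys.clone();
        Arrays.sort(copy); // String.compareTo is lexicographic
        return String.join(" ", copy);
    }

    public static void main(String[] args) {
        // Zero-padded keys sort in the expected numeric order...
        System.out.println(sortedKeys("05", "01", "10", "03")); // 01 03 05 10
        // ...but unpadded keys do not: "10" sorts before "3".
        System.out.println(sortedKeys("5", "1", "10", "3"));    // 1 10 3 5
    }
}
```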
Accumulo stores all entries for a tablet on a single node in the cluster. Since our
table has only one tablet, the information can't spread beyond one node. In
order to distribute information, you'll need to create more than one tablet for
your table.
<blockquote>The tablet's range is still from -infinity to +infinity. That hasn't changed yet.</blockquote>
<h2>Splits</h2>
Now we can introduce the idea of splits. When a tablet is split, one tablet
becomes two. If you want your information to be spread onto three nodes, you'll
need two splits. We'll illustrate this idea.
<blockquote>Split point - the place where one tablet becomes two.</blockquote>
Let's add two split points to see what happens. As the split points are added, new tablets are created.
<h3>Adding Splits</h3>
<h4>First Split</h4>
First, adding split point 02 results in a second tablet being created. It's worth noting that the tablet names are meaningless. Accumulo assigns internal names that you rarely need to know. I picked "A" and "B" because they are easy to read.
<pre>----------- ------- ---------
| tablet | | key | | value |
----------- ------- ---------
| A | | 01 | | X | range: -infinity to 02 (inclusive)
| split point 02 |
| B | | 03 | | X | range: 02 (exclusive) to +infinity
| B | | 05 | | X |
----------- ------- ---------
</pre>
The split point does not need to exist as an entry. This feature means that you can pre-split a table by simply giving Accumulo a list of split points.
<h4>Tablet Movement</h4>
Before continuing, let's take a small step back to see how tablets are moved between servers. At first, the table resides on one server. This makes sense - one tablet is on one server.
<pre>--------------------------------
| Tablet Server |
--------------------------------
| |
| -- Tablet ---------------- |
| | -infinity to +infinity | |
| -------------------------- |
| |
--------------------------------
</pre>
Then the first split point is added. Now there are two tablets. However, they are still on a single server, and this also makes sense. Think about adding a split point to a table with millions of entries: while the two tablets reside on one server, adding a split is just an accounting change.
<pre>-----------------------------------------------------------------------
| Tablet Server |
-----------------------------------------------------------------------
| |
| -- Tablet --------------------- -- Tablet --------------------- |
| | -infinity to 02 (inclusive) | | 02 (exclusive) to +infinity | |
| ------------------------------- ------------------------------- |
| |
-----------------------------------------------------------------------
</pre>
At some future point, Accumulo might move the second tablet to another Tablet Server.
<pre>------------------------------------| |------------------------------------
| Tablet Server | | Tablet Server |
------------------------------------| |------------------------------------
| | | |
| -- Tablet --------------------- | | -- Tablet --------------------- |
| | -infinity to 02 (inclusive) | | | | 02 (exclusive) to +infinity | |
| ------------------------------- | | ------------------------------- |
| | | |
------------------------------------- -------------------------------------
</pre>
<h4>Second Split</h4>
You'll wind up with three tablets when a second split point of "04" is added.
<pre>----------- ------- ---------
| tablet | | key | | value |
----------- ------- ---------
| A | | 01 | | X | range: -infinity to 02 (inclusive)
| split point 02 |
| B | | 03 | | X | range: 02 (exclusive) to 04 (inclusive)
| split point 04 |
| C | | 05 | | X | range: 04 (exclusive) to +infinity
----------- ------- ---------
</pre>
The table now has three tablets. When enough tablets are created, some process
inside Accumulo moves one or more tablets onto different nodes. Once that
happens, the data is distributed.
Hopefully, you can now figure out which tablet any specific key inserts into.
For example, key "00" goes into tablet "A".
<pre>----------- ------- ---------
| tablet | | key | | value |
----------- ------- ---------
| A | | 00 | | X | range: -infinity to 02 (inclusive)
| A | | 01 | | X |
| split point 02 |
| B | | 03 | | X | range: 02 (exclusive) to 04 (inclusive)
| split point 04 |
| C | | 05 | | X | range: 04 (exclusive) to +infinity
----------- ------- ---------
</pre>
Internally, the first tablet ("A") has a starting key of -infinity. Any entry
with a key between -infinity and "02" (inclusive) inserts into the first
tablet. The last tablet has an ending key of +infinity. Therefore any key
above "04" inserts into the last tablet.
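The routing rule can be sketched with a sorted set of split points: a key belongs to the tablet whose inclusive end split is the smallest split point at or above the key. This is illustrative JDK code only, not Accumulo's actual metadata lookup:

```java
import java.util.TreeSet;

public class TabletRouting {
    // Return the end key of the tablet that owns the given key.
    // Keys above every split point land in the last tablet, whose
    // end key is +infinity.
    static String endSplitFor(TreeSet<String> splits, String key) {
        String end = splits.ceiling(key); // smallest split >= key
        return (end == null) ? "+infinity" : end;
    }

    public static void main(String[] args) {
        TreeSet<String> splits = new TreeSet<>();
        splits.add("02");
        splits.add("04");
        System.out.println(endSplitFor(splits, "00")); // 02        -> tablet A
        System.out.println(endSplitFor(splits, "03")); // 04        -> tablet B
        System.out.println(endSplitFor(splits, "05")); // +infinity -> tablet C
    }
}
```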
Accumulo automatically creates split points based on some conditions, such as when a tablet grows too large. However, that's a whole 'nother conversation.
<h2>What is a Key?</h2>
Plenty of people have described Accumulo's Key layout. Here is the
bare-bones explanation:
<pre>-------------------------------------------------------------------
| row | column family | column qualifier | visibility | timestamp |
-------------------------------------------------------------------
</pre>
These five components, combined, go into the _Key_.
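As a rough illustration, here is a hypothetical stand-in class (not the real org.apache.accumulo.core.data.Key) that holds the five components and renders them in a shell-like style:

```java
// Hypothetical stand-in for Accumulo's Key class, for illustration only.
public class SimpleKey {
    final String row, columnFamily, columnQualifier, visibility;
    final long timestamp;

    SimpleKey(String row, String cf, String cq, String vis, long ts) {
        this.row = row;
        this.columnFamily = cf;
        this.columnQualifier = cq;
        this.visibility = vis;
        this.timestamp = ts;
    }

    // Render the five components as "row cf:cq [vis] ts".
    @Override
    public String toString() {
        return row + " " + columnFamily + ":" + columnQualifier
                + " [" + visibility + "] " + timestamp;
    }
}
```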
<h2>Using Shards To Split a Row</h2>
Each row resides on a single tablet, which can cause a problem if any single row has a few million entries. For example, if your table held all ISBNs using this schema:
<pre>------------------------------------------------
| row | column family | column qualifier |
------------------------------------------------
| book | 140122317 | Batman: Hush |
| book | 1401216676 | Batman: A Killing Joke |
</pre>
You can see how the _book_ row would have millions of entries, potentially causing memory issues inside your TServer. Many people add a _shard_ value to the row to introduce potential split points. With shard values, the above table might look like this:
<pre>---------------------------------------------------
| row | column family | column qualifier |
---------------------------------------------------
| book_0 | 140122317 | Batman: Hush |
| book_5 | 1401216676 | Batman: A Killing Joke |
</pre>
With this style of row values, Accumulo could use book_5 as a split point so that the row is no longer unmanageable. Of course, this technique adds a bit of complexity to the query process. I'll leave the query issue to a future note.
Let's explore how shard values can be generated.
<h3>When an Accumulo table is created</h3>
It may be tempting to have the computers flip a virtual coin to decide which
server to target for each record. In the RDBMS world that procedure works but
in key-value databases, information is stored vertically instead of
horizontally, so the coin flip analogy does not work. Let's quickly review why.
<h4>Coin Flip Sharding</h4>
Relational databases spread information across columns (i.e., horizontally). Hopefully, each record has an Id value that is a synthetic key (SK), and I hope you have them in your data. If not, your very first task is to get your DBAs to add them. Seriously, synthetic keys save you a world of future trouble. Here is a simple relational record.
<pre>|--------------------------------------
| RELATIONAL REPRESENTATION |
|--------------------------------------
| SK | First Name | Last Name | Age |
|-------------------------------------|
| 1001 | John | Kloplick | 36 |
---------------------------------------
</pre>
Key-value database spread information across several rows using the synthetic key to tie them together. In simplified form, the information is stored in three key-value combinations (or three entries).
<pre>|----------------------------------
| KEY VALUE REPRESENTATION |
|----------------------------------
| ROW | CF | CQ |
|---------------------------------|
| 1001 | first_name | John |
| 1001 | last_name | Kloplick |
| 1001 | age | 36 |
-----------------------------------
</pre>
If the coin flip sharding strategy were used the information might look like the following. The potential split point shows that the entries can be spread across two tablets.
<pre>|-------------------------------------
| ROW | CF | CQ |
|------------------------------------|
| 1001_01 | first_name | John |
| 1001_01 | age | 36 |
| 1001_02 | last_name | Kloplick | <-- potential split point
--------------------------------------
</pre>
To retrieve the information you'd need to scan both servers! This coin flip sharding technique is not going to scale. Imagine information about a person spread over 40 servers. Collating that information would be prohibitively time-consuming.
<h4>HASH + MOD Sharding (using natural key)</h4>
Of course, there is a better sharding strategy to use. You can base the strategy on one of the fields. Get its hash code and then mod it by the number of partitions. Ultimately, this strategy will fail but let's go through the process to see why. Skip to the next section if you already see the problem.
"John".hashCode() is 2314539. Then we can mod that by the number of partitions (or servers) in our cluster. For variety, let's pretend we have 5 servers instead of the two we used earlier. Our key-value entries now look like this:
<blockquote>2,314,539 modulo 5 = 4</blockquote>
<pre>|-------------------------------------
| ROW | CF | CQ |
|------------------------------------|
| John_04 | first_name | John |
| John_04 | age | 36 |
| John_04 | last_name | Kloplick |
--------------------------------------
</pre>
<blockquote>Note that the shard value is _not_ related to any specific node. It's just a potential split point for Accumulo.</blockquote>
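The natural-key calculation can be checked in plain Java. The two-digit shard suffix and helper name below are my assumptions, based on the sample table:

```java
public class NaturalKeyShard {
    // HASH + MOD sharding on a natural key: hash the field, then mod by
    // the number of partitions. Math.floorMod keeps the result
    // non-negative even when hashCode() is negative.
    static String shardedRow(String field, int partitions) {
        int shard = Math.floorMod(field.hashCode(), partitions);
        return String.format("%s_%02d", field, shard);
    }

    public static void main(String[] args) {
        // "John".hashCode() is 2314539; 2,314,539 modulo 5 = 4.
        System.out.println(shardedRow("John", 5)); // John_04
    }
}
```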
It's time to look at a specific use case to see if this sharding strategy is sound. What if we need to add a set of friends for John? It's unlikely that the information about John's friends includes his first name, but his synthetic key of 1001 is very likely to be there. We can now see that choosing the first_name field as the basis of the sharding strategy was unwise.
<h4>HASH + MOD Sharding (using synthetic key)</h4>
Using the synthetic key as the basis for the hash provides more continuity between updates. And regardless of how information changes, we'll always put the information in the same shard.
"1001".hashCode() is 1507424. If we use the largest prime less than 1,000 (that is, 997) as the modulus, the shard calculation generates a shard value of 957.
So the key-value information is now:
<blockquote>1,507,424 modulo 997 = 957</blockquote>
<pre>|--------------------------------------
| ROW | CF | CQ |
|-------------------------------------|
| 1001_957 | first_name | John |
| 1001_957 | age | 36 |
| 1001_957 | last_name | Kloplick |
--------------------------------------
</pre>
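The synthetic-key calculation above can be sketched the same way; the helper name and row formatting are mine:

```java
public class SyntheticKeyShard {
    static final int MODULUS = 997; // largest prime below 1,000

    // Build a sharded row id such as "1001_957" from the synthetic key.
    static String shardedRow(String syntheticKey) {
        int shard = Math.floorMod(syntheticKey.hashCode(), MODULUS);
        return syntheticKey + "_" + shard;
    }

    public static void main(String[] args) {
        // "1001".hashCode() is 1507424; 1,507,424 modulo 997 = 957.
        System.out.println(shardedRow("1001")); // 1001_957
    }
}
```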
Using this technique makes it simple to add a height field.
<pre>|--------------------------------------------
| ROW | CF | CQ |
|-------------------------------------------|
| 1001_957 | height_in_inches | 68 |
---------------------------------------------
</pre>Anonymoushttp://www.blogger.com/profile/16707286767120221163noreply@blogger.com0