Wednesday, August 15, 2012

GitHub pull requests are great for contributors. They offer a very simple way to publicly post a patch against a piece of open source software and get feedback from the maintainers. For various reasons, though, actually merging a pull request via GitHub's UI may not be ideal for a project maintainer. Reasons include:
* a policy of putting each commit through testing prior to merging it
* adding a relevant issue ID to a commit message that lacks one (I've not done this to a contribution, but I've considered it)
* the pull request should be merged into a branch other than the one it was filed against
and I'm sure you can think of others. So the first couple of pull requests that I accepted were a bit painful, because my process was:
* git clone the contributor's repo
* use git format-patch to export the patches
* use git am to import the patches
In particular, the git clone of an entire repo just to get a 5-line patch seemed like a tremendous waste of time and bandwidth, and I knew there had to be a better way. Obviously GitHub's website displays the diff associated with the pull request, so it has to be stored _somewhere_, right?
And of course it is. For example, here is a pull request:
https://github.com/eucalyptus/eucalyptus/pull/3/
To see the properly formatted patch associated with it, simply drop the trailing slash and add ".patch":
https://github.com/eucalyptus/eucalyptus/pull/3.patch
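That means the painful workflow from above can collapse into a single pipeline. Here's a sketch of how I'd use it (curl's -L follows redirects, and git am reads the patch from stdin):

curl -sL https://github.com/eucalyptus/eucalyptus/pull/3.patch | git am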
It's worth noting that I found this via a link tag which exists in the source of the pull request page:
<link rel='alternate' type='text/x-patch' href='/eucalyptus/eucalyptus/pull/3.patch' />
So there's probably some code floating around somewhere to simply follow that link instead of knowing how to alter the URL. Anyway, I'm quite happy to have found this. I'm not sure why there's no obvious link to it on each pull request page. I think it would serve project maintainers well.
Tuesday, July 3, 2012
Jira full text search tips
One of the advantages of using Jira at Eucalyptus is that it has very good Lucene-based full-text search. It's not necessarily obvious how to use it, though. If you want to search for a multi-word string, you have to use the advanced search (JQL) and quote it like this:
text ~ "\"disk space\""
if you only want to find the individual words rather than the exact string, remove the escaped quotes:
text ~ "disk space"
The first query returns about 4 results for me, while the second returns dozens, so the subtle difference can be very important.
More search tips here.
Just be aware that most of those tips don't seem to work via "quick search". You have to use them inside quotes in a JQL search. I tested things like:
text ~ "\"disk space\"~10" (disk and space within ten words of each other)
text ~ "behavio*r" (behavior/behaviour plus other possible but unlikely strings)
and so on. You can also search specific text fields such as:
summary ~ "behavio*r"
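For what it's worth, the same queries can also be run outside the UI against Jira's REST search endpoint (a sketch; the hostname is made up, and your instance may require authentication). Note the extra layer of shell quoting on top of the JQL escaping:

curl -sG --data-urlencode 'jql=text ~ "\"disk space\""' \
    https://jira.example.com/rest/api/2/search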
Hopefully this info will make it easier for people to find existing issues in our Jira instance as well as others around the web.
Thursday, May 3, 2012
Sampling GitHub API v3 in Python
Eucalyptus is in the process of moving code to GitHub, and this week I finally decided to look at the available API tools for working with GitHub. I wanted a tool written in python, since that would be the fastest for me to extend, and I found github2. Unfortunately, that homepage had a prominent warning that the code only worked with GitHub's old APIs, which were being turned off this week. So I decided to investigate what I could do from scratch in a small amount of code. I had already started using restkit in jiranemo, so that seemed to be a reasonable starting point. Here's what I came up with:
import json
from restkit import Resource, BasicAuth, Connection, request
from socketpool import ConnectionPool

pool = ConnectionPool(factory=Connection)
serverurl = "https://api.github.com"

# Add your username and password here, or prompt for them
auth = BasicAuth(user, password)

# Use your basic auth to request a token
# This is just an example from http://developer.github.com/v3/
authreqdata = {"scopes": ["public_repo"], "note": "admin script"}
resource = Resource('https://api.github.com/authorizations',
                    pool=pool, filters=[auth])
response = resource.post(headers={"Content-Type": "application/json"},
                         payload=json.dumps(authreqdata))
token = json.loads(response.body_string())['token']

"""
Once you have a token, you can pass that in the Authorization header.
You can store this in a cache and throw away the user/password.
This is just an example query. See http://developer.github.com/v3/
for more about the url structure.
"""
resource = Resource('https://api.github.com/user/repos', pool=pool)
headers = {'Content-Type': 'application/json'}
headers['Authorization'] = 'token %s' % token
response = resource.get(headers=headers)
repos = json.loads(response.body_string())
There's not any magic in this code, but it took a couple of reads to wade past all of the OAuth talk in GitHub's docs and realize that for a simple browserless tool, you can avoid using OAuth libraries altogether and still not have to store a hard-coded password.
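For comparison, the same token dance can be done with plain curl (a sketch adapted from the examples at http://developer.github.com/v3/; "YOURTOKEN" is a placeholder for the returned value):

curl -u username https://api.github.com/authorizations \
    -d '{"scopes":["public_repo"],"note":"admin script"}'
curl -H 'Authorization: token YOURTOKEN' https://api.github.com/user/repos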
Thursday, April 26, 2012
Greenhopper, Jira, and REST
One of the somewhat frustrating problems I'm dealing with in Greenhopper is that I want the ability to treat a linked issue like a subtask, but without all the restrictions of a subtask. Subtasks have at least three limitations that get in my way:
- They must be in the same project as their parent
- They must have the same permissions (issue-level security) as their parent
- They must be of an issue type that is flagged as a "subtask" type, so for example, a "Feature" cannot be a subtask of a "Story" unless you create a separate "Feature (subtask)" issue type.
The Greenhopper UI operates mostly via a REST API, and so far this API is not well documented. Last night I got around this lack of documentation by using mitmproxy to monitor calls while moving issues up and down the planning page in Greenhopper's Rapid Board. Then I added a simple REST client class to jiranemo based on restkit. I made two helper functions: one to get the REST representation of an issue, and another to change the rank of an issue in Greenhopper. My script looks like this:
#!/usr/bin/python
import sys

import pyjira
from jiranemo import jiracfg

# Set the exception hook to enter a debugger on
# uncaught exceptions
from jiranemo.lib import util
sys.excepthook = util.genExcepthook(debug=True, debugCtrlC=True)

# Read ${HOME}/.jirarc, and set up clients and auth caches.
cfg = jiracfg.JiraConfiguration(readConfigFiles=True)
authorizer = pyjira.auth.CachingInteractiveAuthorizer(cfg.authCache)
ccAuthorizer = pyjira.auth.CookieCachingInteractiveAuthorizer(cfg.cookieCache)
client = pyjira.JiraClient(cfg.wsdl, (cfg.user, cfg.password),
                           authorizer=authorizer, webAuthorizer=ccAuthorizer)

# Do a simple JQL query via the SOAP client, return 20 results
issues = client.client.getIssuesFromJqlSearch(
    '''project = "system testing 2" order by Rank DESC''', 20)

for x in issues:
    # Get the REST representation of each issue, because links
    # aren't shown in the SOAP representation
    rest_issue = client.restclient.get_issue(x.key)
    for link in rest_issue['fields']['issuelinks']:
        if link['type'].has_key('inward') and \
                link['type']['inward'] == "is blocked by":
            # Rank the linked issue above this one in Greenhopper
            result = client.restclient.gh_rank(link['inwardIssue']['key'],
                                               before=rest_issue['key'])

The code could use some error checking, but this is a pretty simple starting point for doing something that Jira and Greenhopper can't do on their own.
Wednesday, April 25, 2012
Resurrecting Jiranemo
About six years ago, David Christian developed a JIRA CLI called jiranemo (his original blog post is, somewhat surprisingly, still around on the rPath website). After he left rPath, I spent some time updating the code for Jira 4 and adding some minor features, but it's been mostly stagnant for about two years. In the meantime, Jira 5 has been released, and the core dependency of jiranemo, SOAPpy, has been declared dead.
This month, Eucalyptus started on the migration path from using a combination of RT and Launchpad to using Jira. I'm really excited about the change, and it gave me a chance to pick up the jiranemo code again. I've now converted it from SOAPpy to suds, and on Monday I used it to import 2000 issues from RT into Jira (stay tuned for details on that becoming a publicly-accessible system). I had database access to RT, but all of the interaction with Jira was done through the SOAP API. (I realize they now also have a REST API, which looks awesome, but I already had the code for using SOAP.)
I should also note that before I took on this work, I looked at Matt Doar's python-based CLI, which worked well for single commands (and was a reference for some of my jiranemo updates), but it didn't have a library interface, and it seemed very inefficient to keep spawning new python processes for thousands of commands. Jiranemo's separation of the command-line option handling and config file parsing from the client library and helper functions make it fantastic for integrating into more complex python apps.
I expect that the next phase of development for jiranemo will be a gradual migration toward the REST APIs. If this code is useful to you and you'd like to contribute to this effort, feel free to fork my bitbucket repo and send me pull requests.
Thursday, March 8, 2012
An Online Identity Crisis
Yesterday I was working with some folks in #fedora-java, and after confusion over my IRC versus FAS nicks, someone asked me how many different nicks I had. The answer, unfortunately, is "at least four." I realized a couple of years ago that I had created a problem for myself as far as online identities go. I was never one to go signing up for new services to reserve my nick early, and I've not chosen particularly unique nicks. So, this is me, currently:
- mull - an IRC-only nick that dates back to my college days, maybe 1999-ish, when folks in the #aalug channel were trying out new nicks daily for a while. I have no idea why this stuck, and I've never used it anywhere else. UPDATE: I've now claimed "agrimm" on IRC, so you won't be seeing "mull" anymore.
- arg - I made a particularly huge mistake when I chose my initials for my FAS (Fedora Account System) ID, even though there was very little possibility I'd be able to use that elsewhere. I suspect that a FAS account is one of the less trivial ones to change, too, so I'm probably stuck with this being a one-off.
- agrimm - the obvious first initial + last name choice, more easily obtained than initials, but still not universally unique. I use it for email addresses and not much else, and I get a *lot* of misdirected email for people who share my last name.
- a13m - I started using this for twitter and some other random things when arg and agrimm were taken. In case it's not obvious, this nick derives from my full name, in the style of i18n, a11y, etc. While very short and almost never taken by anyone else, the relation to my name is subtle enough that most people don't make the connection.
Sunday, March 4, 2012
Beware of RHEL 6 / CentOS 6 kernels in Xen guests
A couple of weeks ago, I made my first attempt at creating a multi-hypervisor CentOS 6 image for Eucalyptus for demo purposes. I was pretty sure I had the image creation thing down to a science with ami-creator, but it seems there's always room for error. While I had all the correct drivers in the initrd (usually that's all that really matters), it turns out there's a kernel bug affecting device naming on Xen, and since ami-creator currently uses device names rather than UUIDs or labels, my image failed to boot. For those who don't want to read the whole bug, it's stated concisely by Kevin Stange:
There was effective breakage between kernel-2.6.32-71.29.1.el6 and kernel-2.6.32-131.el6. When going from 6.0 to 6.1, the result is that if your Xen domain configuration file specified sda1 as a device name, it was previously renamed to xvda1. After 2.6.32-131.el6, the device is named xvde1 instead (because the names xvda - xvdd are reserved for hda - hdd device remapping). In situations where the configuration file explicitly lists "xvda1" or uses "hda1", "xvda1" continues to work.
So it seems that there are multiple workarounds to the problem, and it will be fixed in the 6.3 kernel, which is all good. However, I have to say that it's finally made me understand why some of my coworkers prefer their "single kernel" project, which aims to provide one kernel / ramdisk which can properly boot several distros on several hypervisors. I'm still partial to running the distro-provided kernel whenever possible, but having a known-good fallback that will at least be able to access the root filesystem & network is nice, so thanks to the Eucalyptus Support / IT team for working on that.
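As an example of the label/UUID approach, something like this should make an image immune to the renaming (a sketch; the label name is my choice):

e2label /dev/xvde1 rootfs    # give the root filesystem a stable label
# then refer to it by label rather than device name, e.g. in /etc/fstab:
#   LABEL=rootfs  /  ext4  defaults  1 1
# and on the kernel command line: root=LABEL=rootfs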
Thursday, February 2, 2012
Anaconda to the Rescue
I've always been a fan of the flexibility of anaconda and kickstart not just for installing systems, but also for rescuing a system when something goes horribly wrong. Yesterday I updated a remote test system from Fedora 16 to Rawhide, and I found myself with no network access to the machine due to a firmware issue. The system has DRAC 6 express, so I can reset the system and force a pxe boot, but I can't see or interact with the console when it boots. Recent Fedora releases have a great way to rescue a system in this state. First, you set up a kickstart file for the rescue (probably only the first two lines are needed, but I did not test with fewer lines than this):
rescue --nomount
sshpw --username=root sekrit --plaintext
url --url http://mirror.eucalyptus/fedora/releases/16/Fedora/x86_64/os/
lang en_US.UTF-8
firewall --enabled --port=22:tcp
Then set up these boot options in your PXE configuration:
ks=http://yourWebServer/ks/fedora-16-rescue.cfg ksdevice=link keymap=us lang=en_US sshd
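With pxelinux, for instance, the entry would look roughly like this (a sketch; the kernel and initrd paths are placeholders for wherever your Fedora 16 pxeboot files live):

default f16rescue
label f16rescue
  kernel f16/vmlinuz
  append initrd=f16/initrd.img ks=http://yourWebServer/ks/fedora-16-rescue.cfg ksdevice=link keymap=us lang=en_US sshd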
This works just like rescue mode always has, except you don't need console access. Very cool.
I'm sure this feature isn't news to a lot of Fedora users, but sometimes cool new features like this sneak into a Fedora release and not everyone realizes it, so it seemed to be worth a quick blog.
Tuesday, January 31, 2012
Image creation, part deux
My last blog post was a long and quite hackish procedure for running a Fedora install on a live instance in a Eucalyptus 3 cloud... and now I'm going to show you the easier way to build an image. I spent some time kicking around ami-creator, and I only ran into a few small issues. I've forked it on github and committed the necessary changes. There is a sample kickstart file in the source tree. Installation is a snap (sorry for not having it in rpm form, but that wasn't the goal of the day):
- easy_install ez_setup
- git clone https://github.com/eucalyptus/ami-creator
- cd ami-creator
- python setup.py build
- python setup.py install
- mkdir ~/f16-image
- cp ks-fedora-16.cfg ~/f16-image/
- cd ~/f16-image
- optionally, go modify the kickstart file to point to your mirror, add the packages you want, change the disk size, etc.
- ami-creator -c ks-fedora-16.cfg -n f16test -v -e

When it finishes, you should end up with these files:
- f16test.img
- initramfs-3.2.2-1.fc16.x86_64.img
- initrd-plymouth.img
- vmlinuz-3.2.2-1.fc16.x86_64
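From here, uploading and registering should follow the usual euca2ools pattern (a sketch only, reusing the bundle/upload/register steps shown in the next post; the bucket name is my choice):

- euca-bundle-image --kernel true -i vmlinuz-3.2.2-1.fc16.x86_64
- euca-upload-bundle -b f16 -m /tmp/vmlinuz-3.2.2-1.fc16.x86_64.manifest.xml
- euca-register f16/vmlinuz-3.2.2-1.fc16.x86_64.manifest.xml

...and likewise for the initramfs with --ramdisk true, and for f16test.img with the resulting EKI and ERI IDs.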
Image creation in the cloud
This post is the result of a challenge given to me by Seth Vidal, which showed up in his weekend blog post. He was musing about whether it's possible to actually do a kickstart, or even an interactive install, in a cloud instance. I have to put some disclaimers around this post, because I am _not_ advocating this approach, and I'm going to show you a feature of Eucalyptus 3 that could void your warranty if used in anger. As my friend Michael likes to say, if you break it, you get to keep both pieces.
What we hashed out on Friday was that, in order to be able to kickstart inside an instance, you have to be able to pass boot parameters. In Eucalyptus 2, the only real way to do this was by patching the node controller with something similar to the NEuca patches. In Eucalyptus 3, we've implemented a sort of "escape hatch" called nc-hooks to allow folks to customize behaviors at instance definition and launch time. There's an example shell script in /etc/eucalyptus/nc-hooks/ which shows how you might write your own hooks.
Knowing that the nc-hooks feature existed, I had to think about exactly how to pass boot parameters and get them into libvirt.xml before instance launch. Passing them via userData was the obvious choice. I came up with a couple of xslt files and this script to make the magic happen:
#!/bin/sh

event=$1
euca_scripts=/home/eucalyptus/scripts
inst_home=$3

rewrite_libvirt_xml() {
    # Get only the value of the "bootparams=..." line from userData
    BP=$( xsltproc $euca_scripts/get-user-data.xsl $inst_home/instance.xml \
          | base64 -d \
          | sed -r "/bootparams=/!d; s/^.*bootparams=(.*)/\1/" || exit 1 )

    # Substitute the value of $BP into the stylesheet
    sed -e "s!@@BOOTPARAMS@@!$BP!" < $euca_scripts/insert-boot-params.xsl \
        > $inst_home/insert-boot-params.xsl || exit 2

    # Rewrite and replace libvirt.xml for this instance
    xsltproc $inst_home/insert-boot-params.xsl $inst_home/libvirt.xml \
        > $inst_home/libvirt.xml.new || exit 3
    cp $inst_home/libvirt.xml $inst_home/libvirt.xml.orig
    mv -f $inst_home/libvirt.xml.new $inst_home/libvirt.xml
}

case "$event" in
    euca-nc-pre-boot)
        rewrite_libvirt_xml
        exit 0
        ;;
    *)
        exit 0
        ;;
esac

I don't have a vast amount of experience when it comes to xml processing, so forgive the horror of these stylesheets. The first one, get-user-data.xsl, is quite simple:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:output encoding="UTF-8" indent="yes" method="text"/>
  <xsl:template match="/instance">
    <xsl:value-of select="/instance/userData"/>
  </xsl:template>
</xsl:transform>
The second is a little stranger, and was done with some help from StackOverflow:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:output encoding="UTF-8" omit-xml-declaration="yes" indent="yes" method="xml"/>
  <xsl:template match='node()|@*'>
    <xsl:copy>
      <xsl:apply-templates select='node()|@*'/>
    </xsl:copy>
  </xsl:template>
  <xsl:template match="cmdline">
    <cmdline>@@BOOTPARAMS@@</cmdline>
  </xsl:template>
</xsl:transform>
So with these files in place, I now need to configure an installer kernel and ramdisk. These come from the /fedora/releases/16/Fedora/x86_64/os/images/pxeboot/ directory of your favorite Fedora mirror site. The kernel and ramdisk registration process is the usual:
- euca-bundle-image --kernel true -i vmlinuz
- euca-upload-bundle -b f16 -m /tmp/vmlinuz.manifest.xml
- euca-register f16/vmlinuz.manifest.xml
- euca-bundle-image --ramdisk true -i initrd.img
- euca-upload-bundle -b f16 -m /tmp/initrd.img.manifest.xml
- euca-register f16/initrd.img.manifest.xml
I also need a machine image to register; since the real install will go onto the volume, a small throwaway image is enough (hence the "fake-emi" name below):

- dd if=/dev/zero of=fake-emi.img bs=1k count=10000
- mke2fs fake-emi.img
- euca-bundle-image -i fake-emi.img
- euca-upload-bundle -b f16 -m /tmp/fake-emi.img.manifest.xml
- euca-register --kernel eki-EA183EA8 --ramdisk eri-6ED23EF2 f16/fake-emi.img.manifest.xml
Next, I need a volume to install into:
- euca-create-volume -s 10 -z PARTI00
To watch the install over VNC, I enabled this line in my libvirt.xml template:

<graphics type='vnc' port='-1' autoport='yes' keymap='en-us' listen='0.0.0.0'/>
You definitely should not have this line uncommented for normal use, as it will allocate a port for vnc for every instance you launch, and without some extra configuration, it doesn't even require a password to connect. For quick debugging on a safe network, though, it's a good way to see what's going wrong during the boot process.
Now to launch my installer instance:
euca-run-instances -t m1.xlarge \
    -d "bootparams=ksdevice=link ip=dhcp vnc keymap=us lang=en_US console=ttyS0" \
    emi-BA8F405E
This boots into an interactive install, which listens for vnc connections. Note that due to the size of the initrd, this instance needs a significant amount of RAM; I used 2GB, but 1GB would have worked. Before proceeding, I attach the volume (which I could have done via block device mapping):
euca-attach-volume -i i-447E3E89 -d sdd vol-14AE3F68
I check euca-describe-instances for the instance's IP address, connect to it with a vnc client, and proceed with the install. Once the install completes, I detach the volume and terminate the instance:
- euca-detach-volume vol-14AE3F68
- euca-terminate-instances i-447E3E89
Finally, I convert the volume to a snapshot and register it:
- euca-create-snapshot vol-14AE3F68
- euca-register -n f16-test -s snap-2CBB42D9
I boot an instance of my new EMI, and ... it fails to have a network. There were multiple problems with the networking configuration:
- The MAC address is hard-coded.
- The device name has changed from eth0 to eth1 (maybe related to the hard-coded MAC address)
- The NIC is configured to be controlled by NetworkManager
The whole process took me about an hour or so this morning (not counting writing the xsl and shell script yesterday), and I imagine that the process would be much faster for subsequent attempts, and even faster when a kickstart is used. Still, I'm not convinced that an approach like this has significant value over something like BoxGrinder or ami-creator. Let the debate begin! :-)
Thursday, January 19, 2012
Configuring Eucalyptus 3-devel
In my last entry, I explained how to check out eucalyptus 3-devel and build it from source on Fedora 16. This entry follows that up with the configuration and initialization of a single-node cloud.
1) Configure environment variables.
export EUCALYPTUS=/opt/eucalyptus
export PATH=$PATH:$EUCALYPTUS/usr/sbin
2) Configure eucalyptus.conf -- Since this is a single node install on a network with DHCP, I am using SYSTEM mode for networking, which is the default.
EUCALYPTUS="/opt/eucalyptus"
HYPERVISOR="kvm"
USE_VIRTIO_DISK="1"
USE_VIRTIO_NET="1"
INSTANCE_PATH="/opt/eucalyptus/instances"
VNET_BRIDGE="br0"
3) Set up proper file and directory permissions in the installed tree:
su -c "euca_conf --setup"
4) Initialize the database:
euca_conf --initialize
5) Create a bridge device and associate your primary NIC (this is specific to SYSTEM mode):
/etc/sysconfig/network-scripts/ifcfg-br0:
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
DELAY=0
NM_CONTROLLED=no
/etc/sysconfig/network-scripts/ifcfg-em1:
DEVICE="em1"
ONBOOT=yes
BRIDGE=br0
NM_CONTROLLED=no

Then restart your network.
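On Fedora 16 with NM_CONTROLLED=no, that presumably means the legacy network service rather than NetworkManager (my assumption; the original post didn't name the command):

su -c "systemctl restart network.service"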
6) UPDATE: Start the CLC before getting credentials.
su -c "/opt/eucalyptus/etc/init.d/eucalyptus-cloud start"
7) Get credentials and source them:
euca_conf --get-credentials admin.zip
unzip admin.zip
source eucarc

8) Start the cloud components and register services:
euca_conf --register-walrus -H <hostname> -C walrus -P walrus
euca_conf --register-sc -H <hostname> -C SC_251 -P PARTI00
euca_conf --register-cluster -H <hostname> -C CC_251 -P PARTI00
su -c "/opt/eucalyptus/etc/init.d/eucalyptus-cc start"
euca_conf --register-nodes <hostname>
su -c "/opt/eucalyptus/etc/init.d/eucalyptus-nc start"
At this point, you should have a running cloud. To verify the components:
euca-describe-walruses ; euca-describe-storage-controllers ; euca-describe-clusters
You should see something like:
WALRUS walrus walrus 192.168.51.251 ENABLED {}
STORAGECONTROLLER PARTI00 SC_251 192.168.51.251 ENABLED {}
CLUSTER PARTI00 CC_251 192.168.51.251 ENABLED {}
And to ensure that the node controller is advertising resources:
euca-describe-availability-zones verbose
which shows:
AVAILABILITYZONE PARTI00 192.168.51.251 arn:euca:eucalyptus:PARTI00:cluster:CC_251/
AVAILABILITYZONE |- vm types free / max cpu ram disk
AVAILABILITYZONE |- m1.small 0004 / 0004 1 128 2
AVAILABILITYZONE |- c1.medium 0002 / 0002 1 256 5
AVAILABILITYZONE |- m1.large 0001 / 0001 2 512 10
AVAILABILITYZONE |- m1.xlarge 0000 / 0000 2 1024 20
AVAILABILITYZONE |- c1.xlarge 0000 / 0000 4 2048 20
That's all for my second post. Comments and corrections welcome. See you on #eucalyptus !
Building Eucalyptus 3-devel
As some readers may already be aware, Eucalyptus has started publishing code from the Eucalyptus 3 development branch on launchpad. The build process has a fairly sizable set of dependencies, so I'd like to give a quick example of how I built and installed this code on a Fedora 16 system. I started with a minimal x86_64 install.
1) Add Eucalyptus' yum repository which contains dependencies for the source build:
[euca-deps]
name=euca-deps
baseurl=http://downloads.eucalyptus.com/devel/packages/3-devel/fedora/16/x86_64/
gpgcheck=1
enabled=1
2) Download the GPG key for verifying packages, and add it to your rpm database:
rpm --import http://downloads.eucalyptus.com/devel/gpg-keys/9d7b073c-eucalyptus-nightly-release-key.pub
3) Install dependencies. For simplicity, this list includes build and runtime deps for all components:
yum install axis2c rampartc axis2c-devel rampartc-devel python-boto \
euca2ools libvirt-devel openssl-devel gcc java-1.6.0-openjdk-devel ant \
curl-devel libxslt-devel apache-commons-logging xalan-j2-xsltc wsdl4j \
backport-util-concurrent httpd postgresql-server libvirt PyGreSQL make \
openssh-clients scsi-target-utils qemu-kvm axis2-codegen axis2-adb-codegen
4) Install grub 1. UPDATE: This is no longer required in 3.1.
This package is obsoleted by grub2, so it cannot be installed by yum, but it has no conflicting files, so installing it outside of the rpm database is safe:
yum install yum-utils
yumdownloader grub
cd /; rpm2cpio /root/grub-0.97-*.rpm | cpio -id
cp /usr/share/grub/x86_64-redhat/* /boot/grub/
5) Create a user named eucalyptus on your system.
useradd -G kvm eucalyptus
passwd eucalyptus

6) Disable iptables (eucalyptus must be allowed to control iptables for dynamic routing).
systemctl disable iptables.service
systemctl stop iptables.service
7) Increase shmmax on the system:
SHMMAX=$(( 48 * 1024 * 1024 ))
echo $SHMMAX > /proc/sys/kernel/shmmax
echo "kernel.shmmax = $SHMMAX" > /etc/sysctl.d/euca_shmmax

8) Modify /usr/lib64/axis2c/bin/tools/wsdl2c/WSDL2C.sh -- erase the existing lines and add these:
java -classpath $(build-classpath axis2/codegen axis2/kernel axis2/adb \
axis2/adb-codegen wsdl4j commons-logging xalan-j2 xsltc \
backport-util-concurrent ws-commons-XmlSchema ws-commons-neethi \
ws-commons-axiom annogen ) org.apache.axis2.wsdl.WSDL2C $*

9) Log in as the eucalyptus user to check out and build the code.
10) Check out the code from bzr: bzr branch lp:eucalyptus && cd eucalyptus
UPDATE: Check out the code from GitHub: git clone https://github.com/eucalyptus/eucalyptus.git
11) Configure eucalyptus. I recommend running "./configure --help" and reading over the options, but this configuration should work:
./configure --with-axis2c=/usr/lib64/axis2c/ \
--with-apache2-module-dir=/usr/lib64/httpd/modules/
12) Modify ./clc/modules/postgresql/conf/scripts/setup_db.groovy :
Change PG_BIN on line 97 to "/usr/bin/pg_ctl"
Change PG_INITDB on line 103 to "/usr/bin/initdb"
13) Run make
Then as root, do the following:
14) Run make install
NOTE: there's an issue here which causes some files in the source tree to be root-owned after this step, so you may want to run "find . | xargs chown eucalyptus" to fix this. Otherwise, you may see "permission denied" errors the next time you run "make" or "make distclean".
15) Copy PolicyKit configuration for libvirt into place:
mkdir -p /var/lib/polkit-1/localauthority/10-vendor.d
cp -p tools/eucalyptus-nc-libvirt.pkla \
    /var/lib/polkit-1/localauthority/10-vendor.d/eucalyptus-nc-libvirt.pkla
16) Restart libvirtd: systemctl restart libvirtd.service
I'll write up another post detailing configuration and initialization steps. For those of you who have used eucalyptus before, you will find that the eucalyptus.conf file is mostly unchanged from 2.0.x. The database initialization and component registration steps differ slightly, though. Stay tuned.