Tuesday, October 26, 2010

BuildAPI: A First Look

The Beginning
Previously, we had a conference call with Armen Zambrano from Mozilla regarding Mozharness, BuildAPI and Simple Release Bugs (thanks, Armen!). He introduced us to these topics and got us rolling on projects we could tackle. I took an interest in BuildAPI, a Pylons project used by RelEng (Release Engineering) to surface information collected from two databases that Buildbot (http://buildbot.net/trac) masters update as they run jobs.

The BuildAPI project wiki page can be found here.


A Little About BuildAPI
The project involves generating analytic reports by querying those databases; the reports can serve many purposes, such as tracking performance and infrastructure usage or spotting rogue Buildbot slave machines. BuildAPI is built with Pylons, a structured but extremely flexible Python web framework that combines ideas from the Ruby, Python and Perl worlds.


Getting Started
However, there is a great deal of information to learn, and it will definitely require some strong initiative on my part. The first step is figuring out exactly where to start and how to progress.
Fortunately, Armen had posted more information about BuildAPI on his blog (http://armenzg.blogspot.com/search/label/mozharness), which provided an objective:

What I need students to do is one of the two:
1) generate graphs, charts, CSVs and CPU totals for infrastructure load blog posts like this 
    a) this is very useful and could move us forward towards having this information being published publicly for consumption
    b)  I highly encourage this one as understanding the mental model behind it is easier
2) write a tool that analyzes our statusDB and figures out slaves that have been continually burning jobs (sometimes it takes us several days to spot them)

He also provided us with snapshots from the database and extremely helpful links such as How to get started with BuildAPI and Google Chart API  (more on that later).


Challenges (I love 'em, bring 'em on!):
  • Need to wrap my head around BuildAPI's concepts
  • Absolutely no experience with Python (though coming from Bash scripting, which I am fine with, it looks approachable), and I have barely touched Ruby
  • Need to learn how to integrate Google Chart API / how it works
  • Figure out the database structure and which information needs to be pulled to generate the required reports
  • Figure out how to integrate all of this into a tool that can generate the required reports


What I have done so far:
- I have looked at the links Armen posted, especially the ReleaseEngineering/BuildAPI wiki page, as it provides instructions on how to get started with BuildAPI.
- Went over the database and its structure, using the snapshots Armen provided.
- Started reading up on the Pylons framework from the following sources:
http://pylonshq.com/docs/en/0.9.7/gettingstarted/
http://pylonshq.com/docs/en/0.9.7/tutorials/
- Also briefly went over the Google Chart API (it's awesome!)


A Note about Google Chart API:
The Google Chart API lets you dynamically generate charts using nothing more than a URL string.
It is a perfect fit here because the charts can be embedded directly in web pages, the advantage being that there are no files to save or serve. The API does, however, provide the option to download the image for local or offline use.

Here is an example:
http://chart.apis.google.com/chart?cht=p3&chd=t:60,40&chs=250x100&chl=Work|Play

Isn't that awesome?
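Here is a rough sketch of how such a chart URL could be assembled from report data and then embedded or downloaded (the values and labels simply mirror the example above; curl is just one way to fetch it):

#!/bin/bash
# Build a Google Chart API 3D pie-chart URL from two data points
labels="Work|Play"
values="60,40"
url="http://chart.apis.google.com/chart?cht=p3&chd=t:${values}&chs=250x100&chl=${labels}"

# Embed it in a page...
echo "<img src=\"${url}\" alt=\"Work vs Play\">"

# ...or download the rendered PNG for offline use
curl -s -o chart.png "${url}"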

Monday, October 25, 2010

Signing RPM Packages and Creating Your Own Repository

So much for "break" week. With so much to do, I'll be spending most of it just doing work.
Anyway, aside from that, let's look at another lab which consists of two phases:
- signing RPM packages, and then
- creating our own yum repository from where we will serve our signed packages.


First Phase - Signing Packages
The first phase involved signing RPM packages; I chose to sign two RPMs that I had created from spec files: nled and snort.
This was an easy task, accomplished by the following steps (consolidated into a command sketch after the list):
- generating a GPG key (using gpg --gen-key),
- editing the ~/.rpmmacros file and adding my signing identity (%_gpg_name "asingh114@learn.senecac.on.ca"), and finally,
- signing the desired rpm packages using rpm --addsign packagefilename
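Put together, the sequence looks roughly like this (a sketch; the package file names at the end are just examples):

# 1. Generate a GPG key pair (interactive prompts for name, email, passphrase)
gpg --gen-key

# 2. Tell rpm which key to sign with, via ~/.rpmmacros
echo '%_gpg_name "asingh114@learn.senecac.on.ca"' >> ~/.rpmmacros

# 3. Sign the packages (prompts for the GPG passphrase)
rpm --addsign nled-2.52-6.fc12.i686.rpm snort-*.i686.rpm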

Instructions can be found on our SBR600 Weekly Schedule wiki under "Week 5 (October 4) - Repositories/Distributing".



Second Phase - Creating a Yum Repository
Again, instructions can be found on our SBR600 Weekly Schedule wiki.

Since I am running my host system (Fedora 12) in a virtual machine, I decided to create a local (internal) yum repository and test it from another Fedora virtual machine I already had installed; the test virtual machine runs the 64-bit edition of Fedora 13. I will be serving the repository directories over HTTP, and Apache Web Server was, of course, already installed and running on the system.

The repository directories were created under the public HTTP directory and will be served out of /var/www/html/fedora/. To organize content by architecture, I created two subdirectories:
- i386, for 32-bit Fedora editions, and
- x86_64, for 64-bit.

The following command was used:
mkdir -p /var/www/html/fedora/{i386,x86_64}

Then I copied my signed packages from Phase 1 over to their respective directories (either i386 or x86_64). The next step is creating the repository metadata for both directories, which can be done with:

createrepo /var/www/html/fedora/i386
and then the same command for the x86_64 directory.

Or through a script such as this:


#!/bin/bash
# Run createrepo in each per-architecture repository directory
destdir="/var/www/html/fedora"

for repo in i386 x86_64
do
    pushd ${destdir}/${repo}
    createrepo .
    popd
done

Modified from http://blogs.techrepublic.com.com/opensource/?p=609

Either way, this will create a repodata directory containing the repository metadata in both of those directories.



Testing the Repository
We will be using a GPG key for our repository, so I also had to create a GPG key file. This is done with the command:
gpg --export --armor asingh114@learn.senecac.on.ca > RPM-GPG-KEY-asingh114
This writes the ASCII-armoured public key into the RPM-GPG-KEY-asingh114 file, which is then placed into the /etc/pki/rpm-gpg/ directory and will be used by the repository.
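For reference, exporting the key and putting it in place amounts to the following (a sketch; using install rather than cp is just my preference here):

# Export the ASCII-armoured public key...
gpg --export --armor asingh114@learn.senecac.on.ca > RPM-GPG-KEY-asingh114

# ...and put it where rpm/yum expect repository keys to live
install -m 644 RPM-GPG-KEY-asingh114 /etc/pki/rpm-gpg/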

As mentioned before, this repository will be tested by serving to internal clients. Therefore, I created a new repository file in the /etc/yum.repos.d directory called asingh114.repo, which contained the following:

[asingh114-repo]
name=Asingh114 Repository
failovermethod=priority
baseurl=http://localhost/fedora/$basearch
enabled=1
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-asingh114

Notes:
- The gpgkey is pointing to the file I generated earlier.
- Notice that the baseurl points to my localhost. The built-in yum variable $basearch expands to the system's base architecture (i386, x86_64, etc.), so using $basearch at the end of the URL lets each host find the repository directory that matches its architecture.
- Additionally, I could have used another yum variable, $releasever, which expands to the release version of the system (e.g. 12, 13), to further organize content, but this is just a simple repository test and I figured it was not needed.

When I tested this on my host machine, it worked fine.
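Concretely, "testing" on the host just meant pointing yum at the new definition and making sure it shows up and can serve a package, roughly like this (a sketch; nled is just an example package to pull):

# Clear cached metadata and confirm the new repository is listed
yum clean all
yum repolist

# Try installing a signed package from only the new repository
yum --disablerepo='*' --enablerepo=asingh114-repo install nled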


Moreover, these two files (the .repo file and the GPG key) were packaged into the RPM below, which was created using a SPEC file:
http://asdesigned.ca/asingh114-repo-1-1.fc12.i686.rpm

I wanted to try this on the test virtual machine (Fedora 13, 64-bit), so I temporarily moved all other repository files out of the test machine's /etc/yum.repos.d directory and installed the RPM I created. This placed my repository file and my GPG key file into the proper locations.
Then I ran yum to test it; I was prompted to import the key, after which yum was able to pull from my own repository. Cool stuff!

By the way, we can also use rpm --import RPM-GPG-KEY-asingh114 to import the GPG key file manually.
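Putting the test-VM steps together, the sequence was roughly as follows (a sketch; the backup directory and the package installed at the end are just examples):

# Temporarily park the existing repository definitions
mkdir -p /root/repo-backup
mv /etc/yum.repos.d/*.repo /root/repo-backup/

# Install my repo definition and GPG key from the packaged RPM
rpm -ivh asingh114-repo-1-1.fc12.i686.rpm

# Optionally import the key by hand instead of accepting yum's prompt
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-asingh114

# Install a signed package from the new repository
yum install nled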

To summarize, the setup was as follows:
Host machine: Fedora 12 (32 bit)
- Repository Serving Protocol: HTTP (Apache Web Server)
- Repository Directories: /var/www/html/fedora/i386 and /var/www/html/fedora/x86_64

Test Virtual machine: Fedora 13 64 bit
- Moved all other repository files out of the /etc/yum.repos.d directory temporarily
- Repository configuration file installed: /etc/yum.repos.d/asingh114.repo via my RPM package
- GPG Key file installed: /etc/pki/rpm-gpg/RPM-GPG-KEY-asingh114 via my RPM package
- Ran yum

Monday, October 11, 2010

Mock and Koji testing with NLED (simple)

Mock:
After recreating my own NLED spec file, I tested it with Mock using:
mock -r fedora-12-i386 --rebuild nled-2.52-6.fc12.src.rpm

The results were successful. I checked the log files in /var/lib/mock/fedora-12-i386/result/, mainly the root.log file, and went through it. There were no errors, and I did see the ncurses-devel build dependency that I had originally placed in my spec file being pulled in. NLED is really simple, so I didn't expect any errors anyway.
I expect to run into issues when I try my own SNORT and Irssi spec files (when I get around to finishing them).
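When those builds do break, a quick way to scan the Mock logs is something like this (a sketch, using the result directory mentioned above):

# List the logs Mock produced for this build root
ls /var/lib/mock/fedora-12-i386/result/

# Look for anything suspicious in root.log and build.log
grep -iE 'error|fail' /var/lib/mock/fedora-12-i386/result/*.log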

Koji:
Next, I installed Koji - the instructions are on http://fedoraproject.org/wiki/PackageMaintainers/UsingKoji
The basic steps are running yum install fedora-packager and then running fedora-packager-setup, at which point you will be prompted to enter your FAS username and password.

In LAB 1 (Communication Lab) we created our FAS accounts, so I used those credentials successfully. The setup script then creates the necessary certificates (see the fedoraproject.org link above for more information) and lets you access the Koji Fedora build system, where you can see your builds and tests.
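For reference, the whole setup-and-scratch-build sequence boils down to a few commands (a sketch; the SRPM name matches the build shown below):

# Install the packager tools and generate the Koji client certificates
yum install fedora-packager
fedora-packager-setup    # prompts for FAS username and password

# Submit a scratch build of the SRPM against the Fedora 12 target
koji build dist-f12 --scratch nled-2.52-6.fc12.src.rpm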

Here is a link to mine: Koji Fedora Builds
If you click on my "Tasks", you will see that I have tested NLED on Fedora 12 platforms and all were successful.

Below is the command and output for Koji on my Fedora system:
# koji build dist-f12 --scratch nled-2.52-6.fc12.src.rpm
2528731 build (dist-f12, nled-2.52-6.fc12.src.rpm) completed successfully

I have to say that Koji is excellent and was very easy to set up! It's good to have such a resource available.

Well, I am going to test the SNORT and Irssi source RPMs later and will post the results.

P.S. Happy Thanksgiving! Mmm Turkey....and if you don't get any work done, blame it on the Tryptophan!!!

NLED Spec file revisited

Revisited:
In one of my previous posts, Creating an NLED Spec file, I had posted my spec file and RPM. However, I might not have run rpmlint on it, so there were some warnings (no errors) that needed to be fixed.

Rpmlint Warnings
When I ran rpmlint on the spec file, I received warnings about macros in the %changelog section (which I did indeed have).

Warning messages:   
"W: macro-in-%changelog %files"
"W: macro-in-%changelog %Install"

That was an easy fix (the usual approach is to escape them as %%files and %%install), and I learned that no unescaped macro should appear in the %changelog section at all (I initially thought it did not matter).

Running rpmlint on the nled-2.52-6.fc12.i686.rpm file, gave me two warnings:
nled.i686: W: no-documentation
nled.i686: W: no-manual-page-for-binary nled

After checking the SOURCE/nled-2.52/ directory under ~/rpmbuild, I found an nled.txt file there, so it was as simple as adding %doc nled.txt to the %files section. The original source does not ship a manual page for the nled binary, so that is a warning we can safely ignore (am I assuming right?)


Update: 
I decided, for testing purposes, to add a man page to the NLED rpm. I used the output of NLED's "?" command, which lists the hotkeys and shortcuts, to create a man page file called nled.1.
(To create your own man page, see here).

Then I edited the nled.spec file and added the lines shown in the sections below.
Please note that the comments were added for explanation.

# I used Source1 instead of a Patch
# because it is just a plain file
# that does not need to be uncompressed.
# nled.1 was put in the SOURCES directory.
Source1:  nled.1

%prep
%setup -q

# This copies nled.1 into the unpacked source (BUILD) directory
cp -p %{SOURCE1} %{_builddir}/%{name}-%{version}/


%install
rm -rf %{buildroot}
mkdir -p %{buildroot}%{_bindir}/
cp -p  nled %{buildroot}%{_bindir}/

# This creates the man1 directory under the BUILDROOT man path
mkdir -p %{buildroot}%{_mandir}/man1/

# and this copies nled.1 from the BUILD to the BUILDROOT man1 directory
cp -p nled.1 %{buildroot}%{_mandir}/man1/


%files
%defattr(-,root,root,-)
# This adds the nled documentation
%doc nled.txt
%{_bindir}/*

# This tells rpm that the nled.1 man page is part of the package
%{_mandir}/man1/*
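With these changes in place, a quick rebuild-and-check pass might look something like this (a sketch; the package file name assumes the version/release above):

cd ~/rpmbuild/SPECS
rpmbuild -ba nled.spec

# Confirm the man page and documentation made it into the package
rpm -qpl ../RPMS/i686/nled-2.52-6.fc12.i686.rpm | grep -E 'man1|nled.txt'

# Re-run rpmlint to make sure the warnings are gone
rpmlint nled.spec ../RPMS/i686/nled-2.52-6.fc12.i686.rpm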


I will have to reupload the SPEC file and the RPM as they have changed, and the previous blog entry's versions were not tested.

Monday, October 4, 2010

Revisited: Testing RPM build times - Part 2

In one of my previous blog posts, Revisited: Testing RPM build times - Part 1, I had indicated that I would be testing the build time of SNORT using an automated script. Well that didn't turn out too well. I had left it running overnight on my laptop and when I checked in the morning, my VirtualBox was not responding - it crashed. Of course, I lost all the data.

Anyway, I redid this test using aircrack-ng, which is much faster to build, because I honestly do not have much time to try SNORT again and wait, what with 7 courses + work + web design work. Besides, I need to use my laptop tonight and don't want to skew the build time results.


Summary of my results:

The very first time I ran rpmbuild -ba on the aircrack-ng spec file, it took approximately 1 minute and 31 seconds. I ran it again after things had "warmed up" and was surprised to see that it now took about 43 seconds to complete.

From the results, a -j1 value proved to be the fastest, which fits with the fact that my virtual machine is only using 1 CPU.
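For reference, the kind of timing loop used here looks roughly like this (a sketch; I am assuming the -j value is passed to make through the _smp_mflags macro, and the spec file name is illustrative):

#!/bin/bash
# Time an rpmbuild run for each -j (parallel make jobs) value
for j in 1 2 3 4
do
    echo "=== -j${j} ==="
    time rpmbuild -ba --define "_smp_mflags -j${j}" aircrack-ng.spec
done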

 

The Results:


-j1
real      0m43.798s
user     0m9.046s
sys       0m27.381s
 
-j2
real      0m45.369s
user     0m9.536s
sys       0m28.681s

 -j3
real      0m46.145s
user     0m9.595s
sys       0m29.114s
 
-j4
real      0m46.829s
user     0m9.774s
sys       0m29.842s



Quick Notes

A quote on the time output (real, user, sys) from a site on the internet (a small illustration follows below):
Real refers to actual elapsed time; User and Sys refer to CPU time used only by the process.
  • Real is wall clock time - time from start to finish of the call. This is all elapsed time including time slices used by other processes and time the process spends blocked (for example if it is waiting for I/O to complete).
  • User is the amount of CPU time spent in user-mode code (outside the kernel) within the process. This is only actual CPU time used in executing the process. Other processes and time the process spends blocked do not count towards this figure.
  • Sys is the amount of CPU time spent in the kernel within the process. This means executing CPU time spent in system calls within the kernel, as opposed to library code, which is still running in user-space. Like 'user', this is only CPU time used by the process.
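To make the distinction concrete, here is a tiny illustration (a sketch; exact figures will vary from run to run):

# A process that mostly sleeps: elapsed (real) time is ~2s,
# but it burns almost no CPU, so user and sys stay near zero.
time sleep 2

# A CPU-bound shell loop: real and user time both end up close to
# the time actually spent computing.
time bash -c 'i=0; while [ $i -lt 200000 ]; do i=$((i+1)); done'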