This repository provides methods to easily test and deploy a Katello server. There are two types of setup: nightly and development. The development setup installs Katello from git repositories so that you can contribute to Katello. The nightly setup is a production install from the nightly RPMs, which contain the bleeding-edge Katello code.
In terms of deployment, there are also two options: VM and direct. Using Vagrant will automatically provision the VM with VirtualBox or libvirt, while a direct deployment assumes you are not using a VM or already have the VM created. Check the table below to see which operating systems support which type of deployment.
| OS       | 2.0 | Nightly | Development | Direct | Vagrant |
|----------|-----|---------|-------------|--------|---------|
| CentOS 6 | X   | X       | X           | X      | X       |
| CentOS 7 | X   | X       | X           | X      | X       |
| RHEL 6   | X   | X       | X           | X      |         |
| RHEL 7   | X   | X       | X           | X      |         |
A Vagrant deployment will provision either a development setup (using git repositories) or an install using the nightly RPMs.
The first step in using Vagrant to deploy a Katello environment is to ensure that Vagrant and this repository are installed and set up. To do so:

- Ensure you have Vagrant installed:
  - For libvirt, ensure you have the prerequisites installed (`sudo yum install ruby rubygems ruby-devel gcc`), then download and install Vagrant 1.6.5+ from Vagrant Downloads.
  - For VirtualBox, Vagrant 1.6.5+ can be downloaded and installed from Vagrant Downloads.
- Clone this repository: `git clone https://github.com/Katello/katello-deploy.git`
- Enter the repository: `cd katello-deploy`
If you're using Linux, we recommend libvirt (see the next section). The default setup in the Vagrantfile is for VirtualBox and has been tested against VirtualBox 4.2.18. To use it, install VirtualBox from the 4.2 downloads page.
The Vagrantfile provides default setup and boxes for use with the vagrant-libvirt provider. To set this up:

- Install libvirt. On CentOS/Fedora/RHEL, run: `sudo yum install @virtualization libvirt-devel`
- Install the libvirt plugin for Vagrant (see the vagrant-libvirt page for more information): `vagrant plugin install vagrant-libvirt`
- Make sure your user is in the `qemu` group, e.g.: `[[ ! "$(groups $(whoami))" =~ "qemu" ]] && sudo usermod -aG qemu $(whoami)`
- Set the libvirt environment variable in your `.bashrc`, or for your current session: `export VAGRANT_DEFAULT_PROVIDER=libvirt`
- If you are asked to provide your password for every command, follow these PolicyKit steps.
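As a sketch of one common approach on older CentOS/RHEL releases (the file path, rule name, and group below are assumptions; consult the PolicyKit steps referenced above for your distribution), a `.pkla` rule can grant libvirt management rights to a group:

```
# Hypothetical .pkla rule: let members of the 'vagrant' group manage libvirt
# without a password prompt (group name and path are assumptions)
sudo tee /etc/polkit-1/localauthority/50-local.d/vagrant-libvirt.pkla <<'EOF'
[Allow members of the vagrant group to manage libvirt]
Identity=unix-group:vagrant
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
EOF
```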
Currently, Katello is only available in the Katello nightly repositories. A Vagrant setup is provided that will install Katello on a CentOS box. Any base CentOS box and Vagrant setup should work, but we have been testing with Vagrant and libvirt.
Start the installation for CentOS 6:

```
vagrant up centos6-nightly
```

Start the installation for CentOS 7:

```
vagrant up centos7-nightly
```
This will create a libvirt-based virtual machine running the Katello server on CentOS.
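Once provisioning completes, you can confirm the box is up (using the CentOS 7 machine name from above as an example):

```
vagrant status centos7-nightly   # should report the machine as running
vagrant ssh centos7-nightly      # log in to inspect the Katello server
```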
A Katello development environment can be deployed on CentOS 6 or 7. Ensure that you have followed the steps to set up Vagrant and the libvirt plugin.

To deploy to CentOS 6:

```
vagrant up centos6-devel
```

To deploy to CentOS 7:

```
vagrant up centos7-devel
```
The box can now be accessed via ssh and the Rails server started directly (this assumes you are connecting as the default `vagrant` user):

```
vagrant ssh <deployment>
cd /home/vagrant/foreman
sudo service iptables stop
rails s
```
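If you want to check the development server from your host, a minimal sketch (assuming the guest's address is reachable from the host and Rails is listening on its default port 3000; find the address with `ip addr` inside the guest):

```
curl http://<vm-ip>:3000/   # <vm-ip> is a placeholder for the guest's address
```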
Sometimes you want to spin up the same box type (e.g. centos7-devel) from within the katello-deploy directory. While this could be added to the Vagrantfile directly, updates to the katello-deploy repository could wipe out your local changes. To help with this, you can define a custom box that re-uses the configuration within the Vagrantfile. To do so, create a `boxes.yaml` file. For example, to create a custom box on CentOS 7 with nightly and run the installer's reset command:

```yaml
my-nightly-test:
  box: centos7
  installer: '--reset'
```
Options:

- `box` -- the `:name` of one of the boxes defined in the Vagrantfile
- `installer` -- options that you would like passed to the katello-installer
- `options` -- options that setup.rb accepts, e.g. `--skip-installer`
- `shell` -- customize the shell script that is run
- `bridged` -- deploy on libvirt with a bridged networking configuration; the value of this parameter should be the interface of the host (e.g. em1)
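For example, a hypothetical entry combining several of these options (the name and values below are illustrative only) could be appended to `boxes.yaml` and then, assuming entries are brought up by their key as in the example above, started with `vagrant up`:

```
cat >> boxes.yaml <<'EOF'
my-bridged-test:
  box: centos7
  options: --skip-installer
  shell: 'echo running extra provisioning'
  bridged: em1
EOF
vagrant up my-bridged-test
```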
Entirely new boxes can be created that do not originate from a box defined within the Vagrantfile. For example, if you had access to a RHEL Vagrant box:
```yaml
rhel7:
  box_name: rhel7
  shell: 'echo TEST'
  pty: true
  libvirt: http://example.org/vagrant/rhel-7.box
```
Any file matching the path `./plugins/*/Vagrantfile` will be loaded when `./Vagrantfile` is evaluated. The `plugins` directory is ignored by git, so other git repositories can be cloned into `plugins` to add custom machines.

Example of a plugin's `Vagrantfile`:
```ruby
module APlugin
  Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
    DB           = 'db'
    WEB          = 'web'
    PARENT_NAME  = 'centos6-devel'
    PROJECT_PATH = "#{KatelloDeploy::ROOT}/../a_repo"

    KatelloDeploy.define_vm config, KatelloDeploy.new_box(PARENT_NAME, DB) do |machine|
      machine.vm.provision :shell do |shell|
        shell.inline = 'echo doing DB box provisioning'
      end
      config.vm.synced_folder PROJECT_PATH, "/home/vagrant/a_repo"
      config.vm.provider :virtualbox do |domain|
        domain.memory = 1024
      end
    end

    KatelloDeploy.define_vm config, KatelloDeploy.new_box(PARENT_NAME, WEB) do |machine|
      machine.vm.provision :shell do |shell|
        shell.inline = 'echo doing WEB box provisioning; echo doing another WEB box provisioning'
      end
      config.vm.synced_folder PROJECT_PATH, "/home/vagrant/a_repo"
      config.vm.provider :virtualbox do |domain|
        domain.memory = 512
      end
    end
  end
end
```
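Because `plugins` is ignored by git, a typical workflow is to clone a plugin repository directly into it (the URL below is a placeholder):

```
git clone https://github.com/example/my-katello-boxes.git plugins/my-katello-boxes
vagrant status   # machines defined by the plugin's Vagrantfile are now listed
```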
If you would like to inject hostname management and package caching without updating the base Vagrantfile, you can install the vagrant-hostmanager and vagrant-cachier plugins and then create `./plugins/my-custom-plugins/Vagrantfile` with the following content:
```ruby
# this enables some customizations that should not be used until after you have a
# working basic install.
module MyCustomPlugins
  Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
    # set up some shared dirs
    config.vm.synced_folder "/path/to/local/checkout/katello", "/home/vagrant/share/katello", type: "nfs"
    config.vm.synced_folder "/path/to/local/checkout/foreman", "/home/vagrant/share/foreman", type: "nfs"
    config.vm.synced_folder "/path/to/local/checkout/foreman-gutterball", "/home/vagrant/share/foreman-gutterball", type: "nfs"

    if Vagrant.has_plugin?("vagrant-hostmanager")
      config.hostmanager.enabled = true
      config.hostmanager.manage_host = true
    end

    if Vagrant.has_plugin?("vagrant-cachier")
      # Configure cached packages to be shared between instances of the same base box.
      # More info on http://fgrehm.viewdocs.io/vagrant-cachier/usage
      config.cache.scope = :box

      # disable gem caching for now, due to permissions issue
      config.cache.auto_detect = false
      config.cache.enable :yum

      config.cache.synced_folder_opts = {
        type: :nfs,
        mount_options: ['rw', 'vers=4', 'tcp', 'nolock']
      }
    end
  end
end
```
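For reference, the two optional plugins guarded by `Vagrant.has_plugin?` above can be installed with Vagrant's plugin command:

```
vagrant plugin install vagrant-hostmanager
vagrant plugin install vagrant-cachier
```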
If you have problems installing the libvirt plugin, be sure to check out the troubleshooting section of the vagrant-libvirt README.
If you get this error:

```
There was an error talking to Libvirt. The error message is shown
below:

Call to virDomainCreateWithFlags failed: Input/output error
```

The easiest thing to do is disable SELinux with `sudo setenforce 0`. Alternatively, you can configure libvirt for SELinux; see http://libvirt.org/drvqemu.html#securitysvirt
If you get this error:

```
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
```

Make sure NFS is installed and running:

```
sudo yum install nfs-utils
sudo service nfs-server start
```
Your OS may be installed with a large root partition and a smaller /home partition. Vagrant populates `~/.vagrant.d/` with boxes by default, each of which can be over 2GB in size. This may cause disk space issues on your /home partition.

To store your Vagrant files elsewhere, you can create a directory outside of /home and tell Vagrant about it by setting `VAGRANT_HOME=<path to vagrant dir>`. You may need to set this in your `.bash_profile` so it persists between logins.
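For example, assuming you have a roomier partition mounted at /storage (the path is an assumption):

```
mkdir -p /storage/vagrant.d
echo 'export VAGRANT_HOME=/storage/vagrant.d' >> ~/.bash_profile
export VAGRANT_HOME=/storage/vagrant.d   # also apply to the current session
```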
This setup assumes you are either deploying in a non-VM environment or you already have a VM set up and are logged into it.

If on RHEL, it is assumed you have already registered and subscribed your system:

```
subscription-manager register --username USER --password PASSWORD --auto-attach
```

- ssh to the target machine as root
- Install git and ruby: `yum install -y git ruby`
- Clone this repository: `git clone https://github.com/Katello/katello-deploy.git`
- Enter the repository: `cd katello-deploy`
For a release version in production:

```
./setup.rb --version 2.2
```

For nightly production:

```
./setup.rb
```

For development:

```
./setup.rb --install-type=devel --devel-user=username
```
Included with katello-deploy is a small live test suite. The current tests are:

- fb-install-katello.bats - installs Katello and runs a few simple tests

To execute the bats framework:

- Using Vagrant (after configuring Vagrant according to this document):
  - `vagrant up centos6-bats`
  - `vagrant ssh centos6-bats -c 'sudo fb-install-katello.bats'`
- On a fresh system you've manually installed:
  - `./bats/bootstrap.sh`
  - `katello-bats`
User-defined scripts can be run after a successful installation to facilitate common per-user actions. For example, if there are common setup tasks run on every devel box for a user, these can be set up to run on every run of `setup.rb`. This also means they run on every up/provision when using Vagrant. To define a script to be run, create a `scripts/` directory and place the script inside it. For example, if you wanted to have vim installed on every box, create a file `scripts/vim.sh`:

```
#!/bin/bash
yum -y install vim
```
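Depending on how setup.rb invokes these scripts, the file may also need to be executable; if a script does not appear to run, marking it executable is a reasonable first step:

```
chmod +x scripts/vim.sh
```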
The setup.rb script supports using Koji scratch builds to make RPMs available for testing purposes. For example, you may want to test a change to nightly with a scratch build of rubygem-katello. This works by fetching the scratch builds and deploying a local yum repo to the box you are deploying on. Multiple scratch builds are also supported for testing changes to multiple components at once (e.g. the installer and the rubygem); see the examples below. This option may also be specified from within boxes.yaml via the `options:` option.
Single scratch build:

```
./setup.rb --koji-task 214567
```

Multiple scratch builds:

```
./setup.rb --koji-task 214567,879567,2747127
```

Custom box:

```yaml
koji:
  box: centos6
  options: --koji-task 214567,879567
```
The setup.rb script supports specifying any number of modules and associated pull requests for testing. For example, if a module undergoes a refactoring, you may want to test that it continues to work with the installer. You'll need the name of the module and the pull request number you want to test. Note that the name in this situation is the name as laid down in the module directory, as opposed to the GitHub repository name. In other words, use 'qpid', not 'puppet-qpid'. The format is the module name followed by a '/' and then the pull request number. See the examples below.
Single module PR:

```
./setup.rb --module-prs qpid/12
```

Multiple modules:

```
./setup.rb --module-prs qpid/12,katello/11
```

Custom box:

```yaml
module_test:
  box: centos6
  options: --module-prs qpid/12
```
The docker/clients directory contains setup and configuration to register clients via subscription-manager using an activation key and start katello-agent. Before using the client containers, Docker and docker-compose need to be installed and set up. On a Fedora based system (Fedora 20 or greater):

```
sudo yum install docker
sudo service docker start
sudo chkconfig docker on
sudo usermod -aG docker your_username
sudo curl -L https://github.com/docker/compose/releases/download/VERSION_NUM/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```
For other platforms, see the official docker-compose installation instructions.
In order to use the client containers you will also need the following:
- Foreman/Katello server IP address
- Foreman/Katello server hostname
- Foreman Organization
- Activation Key
- Katello version you have deployed (e.g. nightly, 2.2)
Begin by changing into the docker/clients directory, copying the `docker-compose.yml.example` file to `docker-compose.yml`, and filling in the necessary information gathered above. At this point, you can spin up one or more clients of varying types. For example, to spin up a CentOS 6 based client:

```
docker-compose up el6
```

If you want to spin up more than one client, say 10 for this example, the docker-compose scale command can be used:

```
docker-compose scale el6=10
```
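docker-compose's standard status command will confirm how many clients are running:

```
docker-compose ps   # lists the client containers and their state
```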
When modifying or creating new Jenkins jobs, it's helpful to generate the XML file to compare against the one Jenkins has. In order to do this, you need a properly configured Jenkins Job Builder environment. The Dockerfile under docker/jjb can be used as a properly configured environment. To begin, copy `docker-compose.yml.example` to `docker-compose.yml`:

```
cd docker/jjb
cp docker-compose.yml.example docker-compose.yml
```
Now edit the docker-compose configuration file to point at your local copy of the `foreman-infra` repository so that it will mount and record changes locally when working within the container. Ensure that either your Docker has permissions to the repository being mounted or that the appropriate Docker SELinux context is set: [Docker SELinux with Volumes](http://www.projectatomic.io/blog/2015/06/using-volumes-with-docker-can-cause-problems-with-selinux/). Now you are ready to do any Jenkins Job Builder work. For example, to generate the XML files for all jobs:

```
docker-compose run jjb bash
cd foreman-infra/puppet/modules/jenkins_job_builder/files/theforeman.org
jenkins-jobs -l debug test -r . -o /tmp/jobs
```
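To compare a generated file against the job Jenkins is actually running, one option (still inside the container) is to fetch the live config through Jenkins' config.xml endpoint and diff the two; the job name and Jenkins URL below are placeholders, and it is assumed the generated file is named after the job:

```
curl -s https://ci.example.org/job/JOB_NAME/config.xml > /tmp/live-config.xml
diff -u /tmp/live-config.xml /tmp/jobs/JOB_NAME
```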