
Added initial creator for loopback based files #18

Open · wants to merge 8 commits into master
12 changes: 11 additions & 1 deletion blockstore/creator/README.md
@@ -63,4 +63,14 @@ The Gluster volume may be unmounted:
```sh
$ sudo umount /mnt/data
$ sudo rmdir /mnt/data
```
```

## Note on gluster-block-subvol-sc.yml

This is a convenience file. It is intended for an OpenShift or Kubernetes
environment where `gluster-block-subvol` should be made the default storage
class. To make `gluster-block-subvol` the default storage class, assuming the
PVs have already been created, use:
```sh
$ kubectl apply -f gluster-block-subvol-sc.yml
```
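
For reference, a minimal sketch of what such a default storage class definition
might look like; the actual `gluster-block-subvol-sc.yml` in this directory is
authoritative, and the `provisioner` value here is an assumption (the PVs are
pre-created by `creator.sh`):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-block-subvol
  annotations:
    # this annotation marks a storage class as the cluster default
    storageclass.kubernetes.io/is-default-class: "true"
# assumption: PVs are created statically, so no dynamic provisioner is used
provisioner: kubernetes.io/no-provisioner
```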
3 changes: 1 addition & 2 deletions blockstore/creator/creator.sh
@@ -134,8 +134,7 @@ while [ "$i" -le "$i_end" ]; do
echo "mkfs.xfs failed for ${blockfqpath}"
exit 2
fi
# TODO: Check mount (?)
# TODO: mkPvTemplate is as is, may need modifications

mkPvTemplate "$servers" "$volume_name" "$subdir" "$blockfile" "${volsize_gb}Gi" "$supervol_uuid" >> "${base_path}/pvs-${i_start}-${i_end}.yml"
((++i))
done
81 changes: 80 additions & 1 deletion blockstore/flex-volume/README.md
@@ -1,5 +1,84 @@
# Installation of flex volume plugin

This is a flex volume plugin that needs to be installed on each Kubernetes node.
Included in this directory is an ansible playbook (`install_plugin.yml`) that
performs the install. This playbook:
* Creates the directory for the plugin:
`/usr/libexec/kubernetes/kubelet-plugins/volume/exec/rht~glfs-block-subvol`
* Copies the plugin script `glfs-block-subvol` to that directory.

TODO
Upon first install, it may be necessary to restart kubelet for it to find the
plugin.
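
A minimal sketch of running the playbook; the inventory file name (`hosts`) is
an assumption about the local setup:
```sh
# assumption: "hosts" is an inventory listing the Kubernetes nodes
$ ansible-playbook -i hosts install_plugin.yml
# if the plugin is not picked up, restart kubelet on each node
$ sudo systemctl restart kubelet
```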

# Usage
To use the plugin, include the following as a volume description.
```yaml
flexVolume:
  driver: "rht/glfs-block-subvol"
  options:
    cluster: 192.168.173.15:192.168.173.16:192.168.173.17
    volume: "testvol"
    dir: "00/01"
    file: "0001"
```
The required options for the driver are:
* `cluster`: A colon-separated list of the Gluster nodes in the cluster. The
first is used as the primary server for mounting, and the rest are listed as
backup volume servers.
* `volume`: The name of the large Gluster volume that is being subdivided.
* `dir`: The path from the root of the volume to the subdirectory containing
the file that is loop mounted to back the volume.
* `file`: The name of the file within `dir` that is loop mounted as an XFS
file system to serve as the volume for the claim.

The above example would use 192.168.173.15:/testvol/00/01/0001 to hold the PV
contents.
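
For illustration, a sketch of a complete PersistentVolume embedding the volume
description above; the PV name and capacity are hypothetical (`creator.sh`
generates the actual PV manifests via `mkPvTemplate`):
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: testvol-00-01-0001   # hypothetical name
spec:
  capacity:
    storage: 1Gi             # assumption: size the backing file was created with
  accessModes:
    - ReadWriteOnce
  flexVolume:
    driver: "rht/glfs-block-subvol"
    options:
      cluster: "192.168.173.15:192.168.173.16:192.168.173.17"
      volume: "testvol"
      dir: "00/01"
      file: "0001"
```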

# Diagnostics/debugging
The `glfs-block-subvol` script has logging for all of its actions to help
diagnose problems with the plugin. The logging settings are at the top of the
script file:
```sh
# if DEBUG, log everything to a file as we do it
DEBUG=1
DEBUGFILE='/tmp/glfs-block-subvol.out'
```
When `DEBUG` is `1`, all calls and actions taken by the plugin are logged to
`DEBUGFILE`. The following is an example of the log file:
```
[1520361740.373690279] > init
[1520361740.373690279] < 0 {"status": "Success", "capabilities": {"attach": false, "selinuxRelabel": false}}
[1520361740.405577771] > mount /mnt/pods/00/volumes/vol1 {"cluster":"127.0.0.1:127.0.0.1","dir":"blockstore/00/00","volume":"patchy","file":"0000"}
[1520361740.405577771] volserver 127.0.0.1
[1520361740.405577771] backupservers 127.0.0.1
[1520361740.405577771] Using lockfile: /var/lock/glfs-block-subvol/127.0.0.1-patchy.lock
[1520361740.405577771] ! mount -t glusterfs -o backup-volfile-servers=127.0.0.1 127.0.0.1:/patchy /mnt/script-dir/mnt/blockstore/127.0.0.1-patchy
[1520361740.405577771] ! mount /mnt/script-dir/mnt/blockstore/127.0.0.1-patchy/blockstore/00/00/0000 /mnt/pods/00/volumes/vol1 -t xfs -o loop,discard
[1520361740.405577771] < 0 {"status": "Success", "message": "volserver=127.0.0.1 backup=127.0.0.1 volume=patchy mountpoint=/mnt/script-dir/mnt/blockstore/127.0.0.1-patchy bindto=/mnt/pods/00/volumes/vol1"}
[1520361740.849832326] > unmount /mnt/pods/00/volumes/vol1
[1520361740.849832326] ldevice=/dev/loop0
[1520361740.849832326] ldevicefile=/mnt/script-dir/mnt/blockstore/127.0.0.1-patchy/blockstore/00/00/0000
[1520361740.849832326] gdevicedir=/mnt/script-dir/mnt/blockstore/127.0.0.1-patchy
[1520361740.849832326] mntsuffix=127.0.0.1-patchy
[1520361740.849832326] ! umount /mnt/pods/00/volumes/vol1
[1520361740.849832326] Using lockfile: /var/lock/glfs-block-subvol/127.0.0.1-patchy.lock
[1520361740.849832326] /mnt/script-dir/mnt/blockstore/127.0.0.1-patchy has 0 loop mounted files
[1520361740.849832326] We were last user of /mnt/script-dir/mnt/blockstore/127.0.0.1-patchy; unmounting it.
[1520361740.849832326] ! umount /mnt/script-dir/mnt/blockstore/127.0.0.1-patchy
[1520361740.849832326] ! rmdir /mnt/script-dir/mnt/blockstore/127.0.0.1-patchy
[1520361740.849832326] < 0 {"status": "Success", "message": "Unmounting from /mnt/pods/00/volumes/vol1"}
```

In the log file, each line begins with a timestamp, and the timestamp remains
constant for the duration of a single execution of the script. This allows
multiple, overlapping invocations to be teased apart. The second (optional)
field is a single character:
* Lines with ">" are logs of the scripts invocation arguments.
* Lines with "<" are the script's output back to the driver.
* Lines with "!" are external command invocations made by the script.
* Lines without one of these characters are free-form diagnostic messages.

If the logging generates too much output, it can be disabled by setting
`DEBUG` to `0`. However, when changing this value, be careful to update the
script in an atomic fashion if the node is currently in use.
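
One way to do that, sketched here under the assumption that the edit is made
in the plugin directory on the node: write the modified script to a temporary
file on the same filesystem, then rename it over the original (rename is
atomic):
```sh
# assumption: run from the flex-volume plugin directory on the node
$ sed 's/^DEBUG=1/DEBUG=0/' glfs-block-subvol > glfs-block-subvol.tmp
$ chmod 0755 glfs-block-subvol.tmp
$ mv -f glfs-block-subvol.tmp glfs-block-subvol   # atomic replace
```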
39 changes: 22 additions & 17 deletions blockstore/flex-volume/glfs-block-subvol
@@ -92,13 +92,13 @@ function doMount() {
local json="$2"

local cluster
cluster=$(echo "$2" | $JQ -r '.cluster')
cluster=$(echo "$2" | jq -r '.cluster')
local volume
volume=$(echo "$json" | $JQ -r '.volume')
volume=$(echo "$json" | jq -r '.volume')
local subdir
subdir=$(echo "$json" | $JQ -r '.dir')
subdir=$(echo "$json" | jq -r '.dir')
local blockfile
blockfile=$(echo "$json" | $JQ -r '.file')
blockfile=$(echo "$json" | jq -r '.file')
if [ ! -n "${cluster}" ] || [ ! -n "${volume}" ] || [ ! -n "${subdir}" ] || [ ! -n "${blockfile}" ]; then
local msg="cluster=$cluster volume=$volume subdir=$subdir blockfile=$blockfile"
local result="{\"status\": \"Failure\", \"message\": \"$msg\"}"
@@ -144,6 +144,8 @@ function doMount() {
# Protect loop mount as well within the lock, to ensure gluster
# volume is not unmounted before this is complete
execute mount "${mountname}/${subdir}/${blockfile}" "${mountdir}" -t xfs -o loop,discard
# TODO: if the above mount failed and this instance was responsible
# for mounting the main vol, then unmount the main vol
fi
# force unlock due to fork of mount process
flock -u -n 9
@@ -214,6 +216,13 @@ function doUnmount() {
log "Using lockfile: $lockfile"
( flock 9 || exit 99
local mcount
# TODO: In a tight loop test of mounting and unmounting devices, it was
#       found that in certain cases losetup reports 1 device as in use,
#       and hence the unmount of the gluster volume is not done.
#       Subsequently checking whether losetup still reports the same comes
#       out negative, which looks like stale reporting by losetup.
#       This is not addressed, as it does not cause an issue, and at a
#       later mount/unmount cycle the gluster volume gets unmounted.
mcount=$(losetup --list -O BACK-FILE | grep -c "$gdevicedir")
log "$gdevicedir has $mcount loop mounted files"
if [[ "$mcount" -eq 0 ]]; then
@@ -232,19 +241,15 @@ function doUnmount() {
retResult $rc "$result"
}


#-- make sure jq is installed
JQ=$(which jq 2> /dev/null)
if [[ ! -x ${JQ} ]]; then
rMessage="Unable to find 'jq' in PATH. Please install jq."
retResult $RC_FAIL "$rMessage"
fi
# TODO: remove rpm check and check for commands instead
if ! rpm -q util-linux >& /dev/null; then
rMessage="Package util-linux is not installed. Please install util-linux package."
retResult $RC_FAIL "$rMessage"
fi
# TODO: PATH check for which and losetup(?)
# It is assumed that coreutils and util-linux are installed, providing:
# - coreutils: date, dirname, echo, mkdir, rmdir
# - util-linux: mount, losetup, umount, flock
for chkcmd in jq sed grep awk; do
if ! command -v "$chkcmd" >/dev/null; then
rMessage="{\"status\": \"Failure\", \"message\": \"Unable to find '${chkcmd}' in PATH. Please install ${chkcmd}.\"}"
retResult $RC_FAIL "$rMessage"
fi
done

log "> $*"
cmd=$1
17 changes: 11 additions & 6 deletions blockstore/flex-volume/install_plugin.yml
@@ -13,12 +13,17 @@
name: "{{ install_loc }}"
state: directory

# TODO: Check package installation for which, jq and util-linux
- name: Make sure jq is available
copy:
src: jq
dest: "{{ install_loc }}/jq"
mode: 0755
  - name: Install required dependencies
    yum:
**Member (review comment):** instead of `yum:`, consider `package:` so it
auto-chooses which package manager to use.

**Contributor (author):** Yes, I had a vague remembrance of this, but could not
find it when writing this up. Will do in the next patch.
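
A minimal sketch of the suggested change, using the distro-agnostic module in
place of `yum:`:
```yaml
  - name: Install required dependencies
    package:
      state: present
      name: "{{ item }}"
    with_items:
      - coreutils
      - util-linux
      - sed
      - grep
      - gawk
      - jq
```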

      state: present
      name: "{{ item }}"
    with_items:
      - coreutils
      - util-linux
      - sed
      - grep
      - gawk
      - jq

  - name: Copy plugin to workers
    template:
18 changes: 17 additions & 1 deletion blockstore/test/README.md
@@ -1,3 +1,19 @@
# Script sanity tests

TODO
This directory contains tests that sanity-check the various scripts.

## Tests available

- `test-flex-volume.sh`: Tests the functionality of the
`../flex-volume/glfs-block-subvol` script.

## Testing glfs-flex-volume

The test script `test-flex-volume.sh` exercises the
`../flex-volume/glfs-block-subvol` script.

The test assumes that a gluster volume is set up and the `../creator/creator.sh`
script has been executed to create at least 2 backing files.

To run the tests, execute the test script from the directory containing it, as
in the example below.
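
For example, with the loopback address repeated (single-server setup) and a
volume named `patchy`:
```sh
$ cd blockstore/test
$ ./test-flex-volume.sh "127.0.0.1:127.0.0.1" patchy
```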
69 changes: 55 additions & 14 deletions blockstore/test/test.sh → blockstore/test/test-flex-volume.sh
@@ -1,22 +1,27 @@
#! /bin/bash

# TODO: Write a how to execute!!!

# Globals to setup test environment
# See README.md under the parent directory of this script for details on how
# to run this.

# *** Globals to setup test environment ***
# SCRIPTDIR defines where the script will be copied and run from; this also
# decides where the gluster mount is going to appear on the system.
# The gluster mount will appear under:
# - ${SCRIPTDIR}/${mntprefix}
SCRIPTDIR="/mnt/script-dir"

# PODSBASE defines the root directory under which the virtual pods are going
# to request mounts. This is where the loop device gets mounted.
# A typical request would be to mount a PVC under:
# - ${PODSBASE}/${PODUID00}/${PODVOLUME}/${PODVOL1}
PODSBASE="/mnt/pods"
GLFS_CLUSTER_ADDR="127.0.0.1:127.0.0.1" # <addr>:<addr>:... as it suits the setup
GLFS_VOLUME="patchy"

# Globals for easy reference to relative paths/mounts
# *** Globals for easy reference to relative paths/mounts ***
PODUID00="00"
PODUID01="01"
PODUID02="02"
PODUID03="03"
PODVOLUME="volumes"
PODVOL1="vol1"
PODVOL2="vol2"
PODVOL3="vol3"

# Static(s) from the glfs-block-subvol script
mntprefix="mnt/blockstore"
@@ -31,10 +36,21 @@ JSON_UNMOUNT="Unmounting from "
# Hacks!
# 1. Testing on a local setup, hence do not have multiple Gluster ADDRs, hence faking the same address twice

usage()
{
echo "Usage: $0 <server1:server2:...> <volume>"
echo " - <server1:server2:...>: List of gluster server addresses."
echo " NOTE: If it is a single server setup repeat the address"
echo " twice, like so 192.168.121.10:192.168.121.10"
echo " - <volume>: Gluster volume name"
}

cleanup()
{
rm -rf "${LOCKPATH}"
# TODO: unmount first
if mountpoint -q "${SCRIPTDIR}/${mntprefix}/$(echo "$GLFS_CLUSTER_ADDR" | sed -r 's/^([^:]+):?(.*)/\1/')-${GLFS_VOLUME}"; then
umount "${SCRIPTDIR}/${mntprefix}/$(echo "$GLFS_CLUSTER_ADDR" | sed -r 's/^([^:]+):?(.*)/\1/')-${GLFS_VOLUME}"
fi
rm -rf "${SCRIPTDIR}"
# TODO: unmount loop devices first
# rm -rf "${PODSBASE}"
@@ -46,12 +62,22 @@ setup()
cp ../flex-volume/glfs-block-subvol "${SCRIPTDIR}"
}

# *** Setup environment ***
if [ $# -ne 2 ]; then usage; exit 1; fi

GLFS_CLUSTER_ADDR="$1"
GLFS_VOLUME="$2"

ret=$(echo "${GLFS_CLUSTER_ADDR}" | grep -c ":")
if [ "$ret" -eq 0 ]; then usage; exit 1; fi

# TESTS START
cleanup;
setup;

# TEST 1
# Test init failure
# - LOCKPATH is expected to be a directory, create a file instead!
touch $LOCKPATH
retjson=$("${SCRIPTDIR}"/glfs-block-subvol init)
status=$(echo "${retjson}" | jq -r .status)
@@ -90,7 +116,7 @@ cleanup;
setup;

# TEST 3
# Fail an non-existing mount
# Test unmounting a non-existent mount
retjson=$("${SCRIPTDIR}"/glfs-block-subvol init)
status=$(echo "${retjson}" | jq -r .status)
if [ "${status}" != "${JSON_SUCCESS}" ]; then
@@ -113,7 +139,7 @@ fi
echo "TEST 3 passed"

# TEST 4
# Test a bad JSON
# Test a bad JSON request
mount_json="{\"bcluster\":\"${GLFS_CLUSTER_ADDR}\",\"dir\":\"blockstore/00/00\",\"volume\":\"${GLFS_VOLUME}\",\"file\":\"0000\"}"
retjson=$("${SCRIPTDIR}"/glfs-block-subvol mount "${PODSBASE}/${PODUID00}/${PODVOLUME}/${PODVOL1}" "${mount_json}")
status=$(echo "${retjson}" | jq -r .status)
@@ -146,7 +172,7 @@ fi
echo "TEST 5 passed"

# TEST 6
# test a valid unmount
# Test a valid unmount
retjson=$("${SCRIPTDIR}"/glfs-block-subvol unmount "${PODSBASE}/${PODUID00}/${PODVOLUME}/${PODVOL1}")
status=$(echo "${retjson}" | jq -r .status)
if [ "${status}" != "${JSON_SUCCESS}" ]; then
@@ -167,7 +193,7 @@ fi
echo "TEST 6 passed"

# TEST 7
# Test a multiple mounts, and an unmount to ensure other mounts remain
# Test multiple mounts, and an unmount to ensure other mounts remain
# First mount
mkdir -p "${PODSBASE}/${PODUID00}/${PODVOLUME}/${PODVOL1}"
mount_json="{\"cluster\":\"${GLFS_CLUSTER_ADDR}\",\"dir\":\"blockstore/00/00\",\"volume\":\"${GLFS_VOLUME}\",\"file\":\"0000\"}"
@@ -211,16 +237,31 @@ if [ "${message}" != "${JSON_UNMOUNT}${PODSBASE}/${PODUID00}/${PODVOLUME}/${PODV
exit 2
fi

# Check base gluster mount remains active
if ! mountpoint -q "${SCRIPTDIR}/${mntprefix}/$(echo "$GLFS_CLUSTER_ADDR" | sed -r 's/^([^:]+):?(.*)/\1/')-${GLFS_VOLUME}"; then
echo "TEST 7: Did not find Gluster mount, here ${SCRIPTDIR}/${mntprefix}/$(echo "$GLFS_CLUSTER_ADDR" | sed -r 's/^([^:]+):?(.*)/\1/')-${GLFS_VOLUME}"
exit 2
fi

# Check that the second mount also remains active
if ! mountpoint -q "${PODSBASE}/${PODUID01}/${PODVOLUME}/${PODVOL2}"; then
echo "TEST 7: Did not find pod mount, here ${PODSBASE}/${PODUID01}/${PODVOLUME}/${PODVOL2}"
exit 2
fi

# Unmount the last
retjson=$("${SCRIPTDIR}"/glfs-block-subvol unmount "${PODSBASE}/${PODUID01}/${PODVOLUME}/${PODVOL2}")
status=$(echo "${retjson}" | jq -r .status)
if [ "${status}" != "${JSON_SUCCESS}" ]; then
echo "TEST 7: Expected success from unmount"
exit 2
fi
message=$(echo "${retjson}" | jq -r .message)
if [ "${message}" != "${JSON_UNMOUNT}${PODSBASE}/${PODUID01}/${PODVOLUME}/${PODVOL2}" ]; then
echo "TEST 7: Expected message ${JSON_UNMOUNT}${PODSBASE}/${PODUID01}/${PODVOLUME}/${PODVOL2} from unmount, got ${message}"
exit 2
fi

echo "TEST 7 passed"

exit 0