
Commit

Merge pull request #5 from microsoft/md/addnosharedkeysupport
Add managed identity / no-shared key support
maxdymond authored Apr 11, 2024
2 parents 4432f24 + 4d02d37 commit 4fae913
Showing 8 changed files with 437 additions and 116 deletions.
24 changes: 22 additions & 2 deletions README.md
@@ -6,6 +6,8 @@ an Azure Function App to keep it up to date. For use with

# Getting Started

## Basic usage

To create a new Debian package repository with an Azure Function App, run

```bash
@@ -30,20 +32,38 @@ overridden by passing the `-l` parameter:
./create_resources.sh -l uksouth <resource_group_name>
```

## No shared-key access / Managed Identities

By default, the storage account that is created has shared-key access enabled.
You can instead create a deployment that uses Managed Identities, but this
requires Docker (as the function application and its dependencies must be
compiled and packed appropriately).

To create a new Debian package repository which uses Managed Identities, run

```bash
./create_resources_nosharedkey.sh [-s <suffix>] [-l <location>] <resource_group_name>
```

This creates an additional blob container (`python`) in the storage account to
hold the compiled function application zip file; the function application is
run directly from that zip file.
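Run-from-package deployments of this kind are usually wired up through an app setting that points the Functions host at the uploaded zip. A sketch of what that setting might look like (the actual wiring lives in the Bicep templates, and the account name is a placeholder):

```text
WEBSITE_RUN_FROM_PACKAGE = https://<storage_account>.blob.core.windows.net/python/function_app.zip
```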

# Design

The function app works as follows:

- It is triggered whenever a `.deb` file is uploaded to the monitored blob
storage container
- It is triggered by an Event Grid trigger.
- It iterates over all `.deb` files and looks for a matching `.package` file.
- If that file does not exist, it is created
- The `.deb` file is downloaded and the control information is extracted
- The hash values for the file are calculated (MD5sum, SHA1, SHA256)
- All of this information is added to the `.package` file
- All `.package` files are iterated over, downloaded, and combined into a
  single `Package` file, which is then uploaded. A `Packages.xz` file is also
  created.
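The hashing and combination steps above can be sketched in a few lines of Python (a standalone illustration, not the app's actual code; the package fields are made up):

```python
import hashlib
import lzma

# Stand-in for the bytes of a downloaded .deb file.
deb_bytes = b"not a real .deb, just stand-in bytes"

# One .package-style stanza: control fields plus the three digests.
stanza = "\n".join(
    [
        "Package: example",
        "Version: 1.0.0",
        f"Size: {len(deb_bytes)}",
        f"MD5sum: {hashlib.md5(deb_bytes).hexdigest()}",
        f"SHA1: {hashlib.sha1(deb_bytes).hexdigest()}",
        f"SHA256: {hashlib.sha256(deb_bytes).hexdigest()}",
    ]
)

# All stanzas joined together form the Packages index,
# which is also compressed into Packages.xz.
packages = stanza + "\n"
packages_xz = lzma.compress(packages.encode())
```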

As the function app works on a Consumption plan it may take up to 10 minutes for
the function app to trigger and regenerate the package information. In practice,
4 changes: 4 additions & 0 deletions create_resources.sh
@@ -56,6 +56,7 @@ az deployment group create \
    --name "${DEPLOYMENT_NAME}" \
    --resource-group "${RESOURCE_GROUP_NAME}" \
    --template-file ./rg.bicep \
    --parameter use_shared_keys=true \
    ${PARAMETERS} \
    --output none
echo "Resources created"
@@ -84,6 +85,9 @@ echo "Function app code deployed"
# Clean up
rm -f build/function_app.zip

# Wait for the event trigger to exist
./waitfortrigger.sh "${FUNCTION_APP_NAME}" "${RESOURCE_GROUP_NAME}"

# Now run the second deployment script to create the eventgrid subscription.
# This must be run after the function app is deployed, because the ARM ID of the
# eventGridTrigger function doesn't exist until after deployment.
137 changes: 137 additions & 0 deletions create_resources_nosharedkey.sh
@@ -0,0 +1,137 @@
#!/bin/bash
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.

set -euo pipefail

# This script uses Bicep scripts to create a function app and a storage account,
# then uses the Azure CLI to deploy the function code to that app.
# Uses managed identities.
# Requires Docker to be installed and running.

LOCATION="eastus"

function usage()
{
    echo "Usage: $0 [-l <LOCATION>] [-s <CUSTOM SUFFIX>] <RESOURCE GROUP NAME>"
    echo
    echo "By default, location is '${LOCATION}'"
    echo "A list of location names can be obtained by running 'az account list-locations --query \"[].name\"'"
}

PARAMETERS=""

while getopts ":l:s:" opt; do
    case "${opt}" in
        l)
            LOCATION=${OPTARG}
            ;;
        s)
            PARAMETERS="${PARAMETERS} --parameter suffix=${OPTARG}"
            ;;
        *)
            usage
            exit 1
            ;;
    esac
done
shift $((OPTIND-1))

# Takes parameters of the resource group name.
RESOURCE_GROUP_NAME=${1:-}

if [[ -z ${RESOURCE_GROUP_NAME} ]]
then
    echo "Requires a resource group name"
    echo
    usage
    exit 1
fi

# Pack the application using the core-tools tooling.
# Should generate a file called function_app.zip.
docker run -it \
    --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "${PWD}":/function_app \
    -w /function_app \
    mcr.microsoft.com/azure-functions/python:4-python3.11-core-tools \
    bash -c "func pack --python --build-native-deps"

echo "Ensuring resource group ${RESOURCE_GROUP_NAME} exists"
az group create --name "${RESOURCE_GROUP_NAME}" --location "${LOCATION}" --output none

# Create the resources
DEPLOYMENT_NAME="${RESOURCE_GROUP_NAME}"
echo "Creating resources in resource group ${RESOURCE_GROUP_NAME}"
az deployment group create \
    --name "${DEPLOYMENT_NAME}" \
    --resource-group "${RESOURCE_GROUP_NAME}" \
    --template-file ./rg.bicep \
    --parameter use_shared_keys=false \
    ${PARAMETERS} \
    --output none
echo "Resources created"

# There's some output in the deployment that we need.
APT_SOURCES=$(az deployment group show -n "${DEPLOYMENT_NAME}" -g "${RESOURCE_GROUP_NAME}" --output tsv --query properties.outputs.apt_sources.value)
STORAGE_ACCOUNT=$(az deployment group show -n "${DEPLOYMENT_NAME}" -g "${RESOURCE_GROUP_NAME}" --output tsv --query properties.outputs.storage_account.value)
PACKAGE_CONTAINER=$(az deployment group show -n "${DEPLOYMENT_NAME}" -g "${RESOURCE_GROUP_NAME}" --output tsv --query properties.outputs.package_container.value)
PYTHON_CONTAINER=$(az deployment group show -n "${DEPLOYMENT_NAME}" -g "${RESOURCE_GROUP_NAME}" --output tsv --query properties.outputs.python_container.value)

# Upload the function app code to the python container
echo "Uploading function app code to ${PYTHON_CONTAINER}"
az storage blob upload \
    --auth-mode login \
    --account-name "${STORAGE_ACCOUNT}" \
    --container-name "${PYTHON_CONTAINER}" \
    --file function_app.zip \
    --name function_app.zip \
    --overwrite \
    --output none

# Create the function app
echo "Creating function app in resource group ${RESOURCE_GROUP_NAME}"
az deployment group create \
    --name "${DEPLOYMENT_NAME}_func" \
    --resource-group "${RESOURCE_GROUP_NAME}" \
    --template-file ./rg_funcapp.bicep \
    --parameter use_shared_keys=false \
    ${PARAMETERS} \
    --output none
echo "Function App created"

# Get the generated function app name
FUNCTION_APP_NAME=$(az deployment group show -n "${DEPLOYMENT_NAME}_func" -g "${RESOURCE_GROUP_NAME}" --output tsv --query properties.outputs.function_app_name.value)

# Clean up
rm -f function_app.zip

# Wait for the event trigger to exist
./waitfortrigger.sh "${FUNCTION_APP_NAME}" "${RESOURCE_GROUP_NAME}"

# Now run the second deployment script to create the eventgrid subscription.
# This must be run after the function app is deployed, because the ARM ID of the
# eventGridTrigger function doesn't exist until after deployment.
az deployment group create \
    --name "${DEPLOYMENT_NAME}_eg" \
    --resource-group "${RESOURCE_GROUP_NAME}" \
    --template-file ./rg_add_eventgrid.bicep \
    ${PARAMETERS} \
    --output none

# Report to the user how to use this repository
echo "The repository has been created!"
echo "You can upload packages to the container '${PACKAGE_CONTAINER}' in the storage account '${STORAGE_ACCOUNT}'."
echo "The function app '${FUNCTION_APP_NAME}' will be triggered by new packages"
echo "in that container and regenerate the repository."
echo
echo "To download packages, you need to have apt-transport-blob installed on your machine."
echo "Next, add this line to /etc/apt/sources.list:"
echo
echo " ${APT_SOURCES}"
echo
echo "Ensure that you have a valid Azure credential (either by logging in with 'az login' or "
echo "by setting the AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID environment variables)."
echo "That credential must have 'Storage Blob Data Reader' access to the storage account."
echo "Then you can use apt-get update and apt-get install as usual."
43 changes: 16 additions & 27 deletions function_app.py
@@ -13,6 +13,7 @@
import azure.functions as func
import pydpkg
from azure.storage.blob import ContainerClient
from azure.identity import DefaultAzureCredential

app = func.FunctionApp()
log = logging.getLogger("apt-package-function")
@@ -128,10 +129,21 @@ class RepoManager:

    def __init__(self) -> None:
        """Create a RepoManager object."""
        if "AzureWebJobsStorage" in os.environ:
            # Use a connection string to access the storage account
            self.connection_string = os.environ["AzureWebJobsStorage"]
            self.container_client = ContainerClient.from_connection_string(
                conn_str=self.connection_string, container_name=CONTAINER_NAME
            )
        else:
            # Use credentials to access the container. Used when shared-key
            # access is disabled.
            self.credential = DefaultAzureCredential()
            self.container_client = ContainerClient.from_container_url(
                container_url=os.environ["BLOB_CONTAINER_URL"],
                credential=self.credential,
            )

        self.package_file = self.container_client.get_blob_client("Packages")
        self.package_file_xz = self.container_client.get_blob_client("Packages.xz")

@@ -184,29 +196,6 @@ def create_packages(self) -> None:
        log.info("Created Packages.xz file")


@app.blob_trigger(
    arg_name="newfile",
    path=f"{CONTAINER_NAME}/{{name}}.deb",
    connection="AzureWebJobsStorage",
)
def blob_trigger(newfile: func.InputStream):
    """Process a new blob in the container."""
    # Have to use %s for the length because .length is optional
    log.info(
        "Python blob trigger function processed blob; Name: %s, Blob Size: %s bytes",
        newfile.name,
        newfile.length,
    )
    if not newfile.name or not newfile.name.endswith(".deb"):
        log.info("Not a Debian package: %s", newfile.name)
        return

    rm = RepoManager()
    rm.check_metadata()
    rm.create_packages()
    log.info("Done processing %s", newfile.name)


@app.function_name(name="eventGridTrigger")
@app.event_grid_trigger(arg_name="event")
def event_grid_trigger(event: func.EventGridEvent):
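The credential fallback in `RepoManager.__init__` boils down to a simple environment check. A minimal sketch of that selection logic (pure Python, with the Azure clients left out; the helper name is made up for illustration):

```python
import os


def choose_auth(env: dict) -> tuple:
    """Pick how to reach the container, mirroring RepoManager's fallback:
    prefer a connection string, else fall back to managed identity."""
    if "AzureWebJobsStorage" in env:
        # Shared-key path: the Functions host provides a connection string.
        return ("connection_string", env["AzureWebJobsStorage"])
    # Shared-key access disabled: DefaultAzureCredential plus a container URL.
    return ("managed_identity", env["BLOB_CONTAINER_URL"])
```

In the real app the first branch feeds `ContainerClient.from_connection_string` and the second feeds `ContainerClient.from_container_url` with a `DefaultAzureCredential`.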
1 change: 1 addition & 0 deletions requirements.txt
@@ -6,5 +6,6 @@
# Manually managing azure-functions-worker may cause unexpected issues

azure-functions
azure-identity
azure-storage-blob
pydpkg