The universal file transfer tool duck, which runs in your shell on Linux and OS X or in your Windows command line prompt, is now available for GitHub Actions, conveniently packaged in a Docker container.
The `mode` input selects the operation:

```yaml
with:
  mode: 'list|longlist|upload|download|delete|purge|raw'
```
Requires URL. Returns a flat name-list in `jobs.<job_id>.outputs.log`.

```yaml
with:
  mode: list
  url: 's3:/'
```
Requires URL. Returns a detailed name-list in `jobs.<job_id>.outputs.log`.

```yaml
with:
  mode: longlist
  url: 's3:/'
```
Requires URL and Path. Uploads Path (relative to the workspace) to URL recursively.

```yaml
with:
  mode: upload
  url: 's3:/'
  path: 'bin/Release/*'
```
Requires URL. Downloads the element specified at URL to Path (relative to the workspace) recursively.

```yaml
with:
  mode: download
  url: 's3:/'
  path: 'artifacts/'
```
Requires URL. Deletes the element specified at URL.

```yaml
with:
  mode: delete
  url: 's3:/bucket/prefix/object'
```
Requires URL. Purges the CDN configuration.

```yaml
with:
  mode: purge
  url: 's3:/bucket'
```
Passes `args` as a raw command line to duck.

```yaml
with:
  mode: raw
  args: '--help'
```
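As a sketch, raw mode can also carry a full operation; the `--list` flag and the bucket name below are illustrative placeholders, so check `duck --help` for the exact syntax your version supports:

```yaml
# Illustrative only: arguments are passed to Cyberduck CLI verbatim.
with:
  mode: raw
  args: '--list s3:/bucket/'
```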
URL to a remote file or directory. Check `duck --help` or `docker run ghcr.io/iterate-ch/cyberduck --help` for supported protocols.
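As a sketch, the URL follows the scheme of the chosen protocol; the hostname, user, and bucket names below are placeholders, not values from this action:

```yaml
# Placeholders only; verify the supported scheme list with duck --help.
url: 'sftp://user@example.com/upload/'
# or, for S3:
url: 's3:/bucket/prefix/'
```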
Path to a local file or directory, relative to `/github/workspace`.
The following environment variables can be passed as part of the job:
- Username to use for authentication with the server, passed to Cyberduck CLI with `--username`.
- Password to use for authentication with the server, passed to Cyberduck CLI with `--password`.
- Path to a private key file for public key authentication with the server, passed to Cyberduck CLI with `--identity`.
Returns the full log output (quiet, output only) as a multiline string.
- Upload the contents of a directory to an S3 bucket, passing secrets for authorization:

```yaml
- uses: iterate-ch/cyberduck-cli-action@main
  id: upload-artifacts
  env:
    USERNAME: ${{ secrets.S3_ACCESS_KEY }}
    PASSWORD: ${{ secrets.S3_SECRET_KEY }}
  with:
    mode: upload
    url: 's3:/bucket/path/'
    path: 'target/Release/*'
```
- Using the output:

```yaml
- run: echo "${{ steps.upload-artifacts.outputs.log }}"
```
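Putting the pieces together, a minimal complete workflow might look like the sketch below; the workflow name, trigger, checkout step, secret names, and bucket path are assumptions for illustration, not part of the action itself:

```yaml
name: publish
on: push
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Upload build output, then print the transfer log.
      - uses: iterate-ch/cyberduck-cli-action@main
        id: upload-artifacts
        env:
          USERNAME: ${{ secrets.S3_ACCESS_KEY }}
          PASSWORD: ${{ secrets.S3_SECRET_KEY }}
        with:
          mode: upload
          url: 's3:/bucket/path/'
          path: 'target/Release/*'
      - run: echo "${{ steps.upload-artifacts.outputs.log }}"
```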