
Bump rdflib from 7.1.2 to 7.1.3 #642

Merged — 1 commit merged into master from dependabot/pip/rdflib-7.1.3 on Jan 24, 2025

Conversation

dependabot[bot] (Contributor) commented on behalf of GitHub on Jan 20, 2025

Bumps rdflib from 7.1.2 to 7.1.3.

Release notes

Sourced from rdflib's releases.

2025-01-18 RELEASE 7.1.3

A fix-up release that re-adds support for Python 3.8 after it was accidentally removed in Release 7.1.2.

This release cherry-picks many of the additions that went into 7.1.2 on top of 7.1.1, but leaves out typing changes that are not compatible with Python 3.8.

Also not carried over from 7.1.2 is the change from Poetry 1.x to 2.0.

Included are PRs such as the Defined Namespace warnings fix, sorting of longturtle blank nodes, deterministic longturtle serialisation, and Dataset documentation improvements.

Changelog

Sourced from rdflib's changelog.

2025-01-17 RELEASE 7.1.3

A fix-up release that re-adds support for Python 3.8 after it was accidentally removed in Release 7.1.2.

This release cherry-picks many of the additions that went into 7.1.2 on top of 7.1.1, but leaves out typing changes that are not compatible with Python 3.8.

Also not carried over from 7.1.2 is the change from Poetry 1.x to 2.0.

Included are PRs such as the Defined Namespace warnings fix, sorting of longturtle blank nodes, deterministic longturtle serialisation, and Dataset documentation improvements.

For the full list of included PRs, see the preparatory PR: RDFLib/rdflib#3036.
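
The longturtle-related fixes can be exercised directly through rdflib's public serialization API. A minimal sketch (the example graph and the EX namespace are illustrative, not taken from this PR), assuming rdflib 7.1.3 is installed:

    from rdflib import BNode, Graph, Literal, Namespace

    EX = Namespace("http://example.org/")  # hypothetical namespace, for illustration only

    g = Graph()
    part = BNode()
    g.add((EX.document, EX.hasPart, part))
    g.add((part, EX.label, Literal("part 1")))

    # "longturtle" is rdflib's long-form Turtle serializer; 7.1.3 carries over the
    # blank-node sorting and deterministic-output changes mentioned above.
    print(g.serialize(format="longturtle"))

Under the deterministic serialisation change, running the snippet repeatedly should produce identical output.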

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [rdflib](https://github.com/RDFLib/rdflib) from 7.1.2 to 7.1.3.
- [Release notes](https://github.com/RDFLib/rdflib/releases)
- [Changelog](https://github.com/RDFLib/rdflib/blob/main/CHANGELOG.md)
- [Commits](RDFLib/rdflib@7.1.2...7.1.3)

---
updated-dependencies:
- dependency-name: rdflib
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
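
For reviewers who want to sanity-check the bump locally, a minimal sketch (assuming the updated requirements are installed in the project's environment):

    import rdflib

    # rdflib is a direct production dependency, so the import must succeed
    # and report the bumped patch version.
    assert rdflib.__version__ == "7.1.3", f"unexpected rdflib version: {rdflib.__version__}"
    print(f"rdflib {rdflib.__version__} OK")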
@dependabot dependabot[bot] added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update Python code) labels on Jan 20, 2025

codecov bot commented Jan 20, 2025

❌ 57 Tests Failed:

Tests completed: 2045 · Failed: 57 · Passed: 1988 · Skipped: 9
Top 3 failed tests by shortest run time:
tests/test_connector.py::test_future_connector_multiple_request_fail[deploy]
Stack Traces | 0.001s run time
request = <SubRequest 'context' for <Coroutine test_connector_run_command[local]>>
kwargs = {'chosen_deployment_types': ['local', 'docker', 'docker-compose', 'docker-wrapper', 'slurm']}
func = <function context at 0x10f000a40>
event_loop_fixture_id = 'tests/test_connector.py::<event_loop>'
setup = <function _wrap_asyncgen_fixture.<locals>._asyncgen_fixture_wrapper.<locals>.setup at 0x1105467a0>
setup_task = <Task finished name='Task-411' coro=<_wrap_asyncgen_fixture.<locals>._asyncgen_fixture_wrapper.<locals>.setup() done, ...: not found\nsh: Status:: not found\nsh: 163b0ed1a5bbb25688ace08497dbc7b331dfa86fdac322d9263c2c8bce80ab92: not found')>

    @functools.wraps(fixture)
    def _asyncgen_fixture_wrapper(request: FixtureRequest, **kwargs: Any):
        func = _perhaps_rebind_fixture_func(fixture, request.instance)
        event_loop_fixture_id = _get_event_loop_fixture_id_for_async_fixture(
            request, func
        )
        event_loop = request.getfixturevalue(event_loop_fixture_id)
        kwargs.pop(event_loop_fixture_id, None)
        gen_obj = func(**_add_kwargs(func, kwargs, event_loop, request))
    
        async def setup():
            res = await gen_obj.__anext__()  # type: ignore[union-attr]
            return res
    
        context = contextvars.copy_context()
        setup_task = _create_task_in_context(event_loop, setup(), context)
>       result = event_loop.run_until_complete(setup_task)

.tox/py3.13-unit/lib/python3.13....../site-packages/pytest_asyncio/plugin.py:329: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/base_events.py:720: in run_until_complete
    return future.result()
.tox/py3.13-unit/lib/python3.13....../site-packages/pytest_asyncio/plugin.py:324: in setup
    res = await gen_obj.__anext__()  # type: ignore[union-attr]
tests/conftest.py:152: in context
    await _context.deployment_manager.deploy(config)
streamflow/deployment/manager.py:180: in deploy
    await self._deploy(deployment_config, {deployment_name})
streamflow/deployment/manager.py:74: in _deploy
    await connector.deploy(deployment_config.external)
.../deployment/connector/container.py:1187: in deploy
    await self._populate_instance(self.containerId)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <streamflow.deployment.connector.container.DockerConnector object at 0x1105e65d0>
name = "Unable to find image 'alpine:3.16.2' locally\n3.16.2: Pulling from library/alpine\n213ec9aee27d: Pulling fs layer\n21...2e\nStatus: Downloaded newer image for alpine:3.16.2\n163b0ed1a5bbb25688ace08497dbc7b331dfa86fdac322d9263c2c8bce80ab92"

    async def _populate_instance(self, name: str):
        # Build execution location
        location = ExecutionLocation(
            name=name,
            deployment=self.deployment_name,
            stacked=True,
            wraps=self._inner_location.location,
        )
        # Inspect Docker container
        stdout, returncode = await self.connector.run(
            location=self._inner_location.location,
            command=[
                "docker",
                "inspect",
                "--format",
                "'{{json .}}'",
                name,
            ],
            capture_output=True,
        )
        if returncode == 0:
            try:
                container = json.loads(stdout) if stdout else {}
            except json.decoder.JSONDecodeError:
                raise WorkflowExecutionException(
                    f"Error inspecting Docker container {name}: {stdout}"
                )
        else:
>           raise WorkflowExecutionException(
                f"Error inspecting Docker container {name}: [{returncode}] {stdout}"
            )
E           streamflow.core.exception.WorkflowExecutionException: Error inspecting Docker container Unable to find image 'alpine:3.16.2' locally
E           3.16.2: Pulling from library/alpine
E           213ec9aee27d: Pulling fs layer
E           213ec9aee27d: Download complete
E           213ec9aee27d: Pull complete
E           Digest: sha256:65a2763f593ae85fab3b5406dc9e80f744ec5b449f269b699b5efd37a07ad32e
E           Status: Downloaded newer image for alpine:3.16.2
E           163b0ed1a5bbb25688ace08497dbc7b331dfa86fdac322d9263c2c8bce80ab92: [127] {"Id":"sha256:9c6f0724472873bb50a2ae67a9e7adcb57673a183cea8b06eb778dca859181b5","RepoTags":["alpine:3.16.2"],"RepoDigests":["alpine@sha256:65a2763f593ae85fab3b5406dc9e80f744ec5b449f269b699b5efd37a07ad32e"],"Parent":"","Comment":"","Created":"2022-08-09T17:19:53.47374331Z","DockerVersion":"20.10.12","Author":"","Config":{"Hostname":"","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":["PATH=.../usr/local/sbin:.../usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"],"Cmd":["/bin/sh"],"Image":"sha256:c0261ca8a4a79627f3e658c0c2b1e3166f56713a58e1411b1e3ab1e378962e75","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"Architecture":"amd64","Os":"linux","Size":5544164,"GraphDriver":{"Data":{"MergedDir":".../overlay2/bc9de49993e7eece539dbd89a7623235733b4b36e80b816411d47f28e4fcd7e9/merged","UpperDir":".../overlay2/bc9de49993e7eece539dbd89a7623235733b4b36e80b816411d47f28e4fcd7e9/diff","WorkDir":".../overlay2/bc9de49993e7eece539dbd89a7623235733b4b36e80b816411d47f28e4fcd7e9/work"},"Name":"overlay2"},"RootFS":{"Type":"layers","Layers":["sha256:994393dc58e7931862558d06e46aa2bb17487044f670f310dffe1d24e4d1eec7"]},"Metadata":{"LastTagTime":"0001-01-01T00:00:00Z"}}
E           Error: No such object: Unable
E           Error: No such object: to
E           Error: No such object: find
E           Error: No such object: image
E           Error: No such object: locally
E           sh: 3.16.2:: not found
E           sh: 213ec9aee27d:: not found
E           sh: 213ec9aee27d:: not found
E           sh: 213ec9aee27d:: not found
E           sh: Digest:: not found
E           sh: Status:: not found
E           sh: 163b0ed1a5bbb25688ace08497dbc7b331dfa86fdac322d9263c2c8bce80ab92: not found

.../deployment/connector/container.py:703: WorkflowExecutionException
tests/test_persistence.py::test_filter_config
Stack Traces | 0.001s run time
request = <SubRequest 'context' for <Coroutine test_workflow>>
kwargs = {'chosen_deployment_types': ['local', 'docker', 'docker-compose', 'docker-wrapper', 'slurm']}
func = <function context at 0x10f000a40>
event_loop_fixture_id = 'tests/test_persistence.py::<event_loop>'
setup = <function _wrap_asyncgen_fixture.<locals>._asyncgen_fixture_wrapper.<locals>.setup at 0x1110ee340>
setup_task = <Task finished name='Task-15421' coro=<_wrap_asyncgen_fixture.<locals>._asyncgen_fixture_wrapper.<locals>.setup() done...onnect to the Docker daemon at unix:.../var/run/docker.sock. Is the docker daemon running?.\nSee 'docker run --help'.")>

    @functools.wraps(fixture)
    def _asyncgen_fixture_wrapper(request: FixtureRequest, **kwargs: Any):
        func = _perhaps_rebind_fixture_func(fixture, request.instance)
        event_loop_fixture_id = _get_event_loop_fixture_id_for_async_fixture(
            request, func
        )
        event_loop = request.getfixturevalue(event_loop_fixture_id)
        kwargs.pop(event_loop_fixture_id, None)
        gen_obj = func(**_add_kwargs(func, kwargs, event_loop, request))
    
        async def setup():
            res = await gen_obj.__anext__()  # type: ignore[union-attr]
            return res
    
        context = contextvars.copy_context()
        setup_task = _create_task_in_context(event_loop, setup(), context)
>       result = event_loop.run_until_complete(setup_task)

.tox/py3.13-unit/lib/python3.13....../site-packages/pytest_asyncio/plugin.py:329: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/base_events.py:720: in run_until_complete
    return future.result()
.tox/py3.13-unit/lib/python3.13....../site-packages/pytest_asyncio/plugin.py:324: in setup
    res = await gen_obj.__anext__()  # type: ignore[union-attr]
tests/conftest.py:152: in context
    await _context.deployment_manager.deploy(config)
streamflow/deployment/manager.py:180: in deploy
    await self._deploy(deployment_config, {deployment_name})
streamflow/deployment/manager.py:74: in _deploy
    await connector.deploy(deployment_config.external)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <streamflow.deployment.connector.container.DockerConnector object at 0x110ba9090>
external = False

    async def deploy(self, external: bool) -> None:
        await super().deploy(external)
        # Check if Docker is installed in the wrapped connector
        await self._check_docker_installed()
        # If the deployment is not external, deploy the container
        if not external:
            await self._prepare_volumes(self.volume, self.mount)
            if logger.isEnabledFor(logging.DEBUG):
                logger.debug(f"Using Docker {await self._get_docker_version()}.")
            # Pull image if it doesn't exist
            _, returncode = await self.connector.run(
                location=self._inner_location.location,
                command=["docker", "image", "inspect", self.image],
                capture_output=True,
            )
            if returncode != 0:
                await self.connector.run(
                    location=self._inner_location.location,
                    command=["docker", "pull", "--quiet", self.image],
                )
            # Deploy the Docker container
            deploy_command = [
                "docker",
                "run",
                "--detach",
                "--interactive",
                get_option("add-host", self.addHost),
                get_option("blkio-weight", self.addHost),
                get_option("blkio-weight-device", self.blkioWeightDevice),
                get_option("cap-add", self.capAdd),
                get_option("cap-drop", self.capDrop),
                get_option("cgroup-parent", self.cgroupParent),
                get_option("cgroupns", self.cgroupns),
                get_option("cidfile", self.cidfile),
                get_option("cpu-period", self.cpuPeriod),
                get_option("cpu-quota", self.cpuQuota),
                get_option("cpu-rt-period", self.cpuRTPeriod),
                get_option("cpu-rt-runtime", self.cpuRTRuntime),
                get_option("cpu-shares", self.cpuShares),
                get_option("cpus", self.cpus),
                get_option("cpuset-cpus", self.cpusetCpus),
                get_option("cpuset-mems", self.cpusetMems),
                get_option("detach-keys", self.detachKeys),
                get_option("device", self.device),
                get_option("device-cgroup-rule", self.deviceCgroupRule),
                get_option("device-read-bps", self.deviceReadBps),
                get_option("device-read-iops", self.deviceReadIops),
                get_option("device-write-bps", self.deviceWriteBps),
                get_option("device-write-iops", self.deviceWriteIops),
                f"--disable-content-trust={'true' if self.disableContentTrust else 'false'}",
                get_option("dns", self.dns),
                get_option("dns-option", self.dnsOptions),
                get_option("dns-search", self.dnsSearch),
                get_option("domainname", self.domainname),
                get_option("entrypoint", self.entrypoint),
                get_option("env", self.env),
                get_option("env-file", self.envFile),
                get_option("expose", self.expose),
                get_option("gpus", self.gpus),
                get_option("group-add", self.groupAdd),
                get_option("health-cmd", self.healthCmd),
                get_option("health-interval", self.healthInterval),
                get_option("health-retries", self.healthRetries),
                get_option("health-start-period", self.healthStartPeriod),
                get_option("health-timeout", self.healthTimeout),
                get_option("hostname", self.hostname),
                get_option("init", self.init),
                get_option("ip", self.ip),
                get_option("ip6", self.ip6),
                get_option("ipc", self.ipc),
                get_option("isolation", self.isolation),
                get_option("kernel-memory", self.kernelMemory),
                get_option("label", self.label),
                get_option("label-file", self.labelFile),
                get_option("link", self.link),
                get_option("link-local-ip", self.linkLocalIP),
                get_option("log-driver", self.logDriver),
                get_option("log-opt", self.logOpts),
                get_option("mac-address", self.macAddress),
                get_option("memory", self.memory),
                get_option("memory-reservation", self.memoryReservation),
                get_option("memory-swap", self.memorySwap),
                get_option("memory-swappiness", self.memorySwappiness),
                get_option("mount", self.mount),
                get_option("network", self.network),
                get_option("network-alias", self.networkAlias),
                get_option("no-healthcheck", self.noHealthcheck),
                get_option("oom-kill-disable", self.oomKillDisable),
                get_option("oom-score-adj", self.oomScoreAdj),
                get_option("pid", self.pid),
                get_option("pids-limit", self.pidsLimit),
                get_option("privileged", self.privileged),
                get_option("publish", self.publish),
                get_option("publish-all", self.publishAll),
                get_option("read-only", self.readOnly),
                get_option("restart", self.restart),
                get_option("rm", self.rm),
                get_option("runtime", self.runtime),
                get_option("security-opt", self.securityOpts),
                get_option("shm-size", self.shmSize),
                f"--sig-proxy={'true' if self.sigProxy else 'false'}",
                get_option("stop-signal", self.stopSignal),
                get_option("stop-timeout", self.stopTimeout),
                get_option("storage-opt", self.storageOpts),
                get_option("sysctl", self.sysctl),
                get_option("tmpfs", self.tmpfs),
                get_option("ulimit", self.ulimit),
                get_option("user", self.user),
                get_option("userns", self.userns),
                get_option("uts", self.uts),
                get_option("volume", self.volume),
                get_option("volume-driver", self.volumeDriver),
                get_option("volumes-from", self.volumesFrom),
                get_option("workdir", self.workdir),
                self.image,
                f"{' '.join(self.command) if self.command else ''}",
            ]
            stdout, returncode = await self.connector.run(
                location=self._inner_location.location,
                command=deploy_command,
                capture_output=True,
            )
            if returncode == 0:
                self.containerId = stdout
            else:
>               raise WorkflowExecutionException(
                    f"FAILED Deployment of {self.deployment_name} environment [{returncode}]:\n\t{stdout}"
                )
E               streamflow.core.exception.WorkflowExecutionException: FAILED Deployment of alpine-docker-wrapper environment [125]:
E               	docker: Cannot connect to the Docker daemon at unix:.../var/run/docker.sock. Is the docker daemon running?.
E               See 'docker run --help'.

.../deployment/connector/container.py:1177: WorkflowExecutionException
tests/test_scheduler.py::test_scheduling[slurm]
Stack Traces | 0.001s run time
request = <SubRequest 'context' for <Coroutine test_bind_volumes>>
kwargs = {'chosen_deployment_types': ['local', 'docker', 'docker-compose', 'docker-wrapper', 'slurm']}
func = <function context at 0x10f000a40>
event_loop_fixture_id = 'tests/test_scheduler.py::<event_loop>'
setup = <function _wrap_asyncgen_fixture.<locals>._asyncgen_fixture_wrapper.<locals>.setup at 0x12379b9c0>
setup_task = <Task finished name='Task-17330' coro=<_wrap_asyncgen_fixture.<locals>._asyncgen_fixture_wrapper.<locals>.setup() done...onnect to the Docker daemon at unix:.../var/run/docker.sock. Is the docker daemon running?.\nSee 'docker run --help'.")>

    @functools.wraps(fixture)
    def _asyncgen_fixture_wrapper(request: FixtureRequest, **kwargs: Any):
        func = _perhaps_rebind_fixture_func(fixture, request.instance)
        event_loop_fixture_id = _get_event_loop_fixture_id_for_async_fixture(
            request, func
        )
        event_loop = request.getfixturevalue(event_loop_fixture_id)
        kwargs.pop(event_loop_fixture_id, None)
        gen_obj = func(**_add_kwargs(func, kwargs, event_loop, request))
    
        async def setup():
            res = await gen_obj.__anext__()  # type: ignore[union-attr]
            return res
    
        context = contextvars.copy_context()
        setup_task = _create_task_in_context(event_loop, setup(), context)
>       result = event_loop.run_until_complete(setup_task)

.tox/py3.13-unit/lib/python3.13....../site-packages/pytest_asyncio/plugin.py:329: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/base_events.py:720: in run_until_complete
    return future.result()
.tox/py3.13-unit/lib/python3.13....../site-packages/pytest_asyncio/plugin.py:324: in setup
    res = await gen_obj.__anext__()  # type: ignore[union-attr]
tests/conftest.py:152: in context
    await _context.deployment_manager.deploy(config)
streamflow/deployment/manager.py:180: in deploy
    await self._deploy(deployment_config, {deployment_name})
streamflow/deployment/manager.py:74: in _deploy
    await connector.deploy(deployment_config.external)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <streamflow.deployment.connector.container.DockerConnector object at 0x12381e5d0>
external = False

    async def deploy(self, external: bool) -> None:
        await super().deploy(external)
        # Check if Docker is installed in the wrapped connector
        await self._check_docker_installed()
        # If the deployment is not external, deploy the container
        if not external:
            await self._prepare_volumes(self.volume, self.mount)
            if logger.isEnabledFor(logging.DEBUG):
                logger.debug(f"Using Docker {await self._get_docker_version()}.")
            # Pull image if it doesn't exist
            _, returncode = await self.connector.run(
                location=self._inner_location.location,
                command=["docker", "image", "inspect", self.image],
                capture_output=True,
            )
            if returncode != 0:
                await self.connector.run(
                    location=self._inner_location.location,
                    command=["docker", "pull", "--quiet", self.image],
                )
            # Deploy the Docker container
            deploy_command = [
                "docker",
                "run",
                "--detach",
                "--interactive",
                get_option("add-host", self.addHost),
                get_option("blkio-weight", self.addHost),
                get_option("blkio-weight-device", self.blkioWeightDevice),
                get_option("cap-add", self.capAdd),
                get_option("cap-drop", self.capDrop),
                get_option("cgroup-parent", self.cgroupParent),
                get_option("cgroupns", self.cgroupns),
                get_option("cidfile", self.cidfile),
                get_option("cpu-period", self.cpuPeriod),
                get_option("cpu-quota", self.cpuQuota),
                get_option("cpu-rt-period", self.cpuRTPeriod),
                get_option("cpu-rt-runtime", self.cpuRTRuntime),
                get_option("cpu-shares", self.cpuShares),
                get_option("cpus", self.cpus),
                get_option("cpuset-cpus", self.cpusetCpus),
                get_option("cpuset-mems", self.cpusetMems),
                get_option("detach-keys", self.detachKeys),
                get_option("device", self.device),
                get_option("device-cgroup-rule", self.deviceCgroupRule),
                get_option("device-read-bps", self.deviceReadBps),
                get_option("device-read-iops", self.deviceReadIops),
                get_option("device-write-bps", self.deviceWriteBps),
                get_option("device-write-iops", self.deviceWriteIops),
                f"--disable-content-trust={'true' if self.disableContentTrust else 'false'}",
                get_option("dns", self.dns),
                get_option("dns-option", self.dnsOptions),
                get_option("dns-search", self.dnsSearch),
                get_option("domainname", self.domainname),
                get_option("entrypoint", self.entrypoint),
                get_option("env", self.env),
                get_option("env-file", self.envFile),
                get_option("expose", self.expose),
                get_option("gpus", self.gpus),
                get_option("group-add", self.groupAdd),
                get_option("health-cmd", self.healthCmd),
                get_option("health-interval", self.healthInterval),
                get_option("health-retries", self.healthRetries),
                get_option("health-start-period", self.healthStartPeriod),
                get_option("health-timeout", self.healthTimeout),
                get_option("hostname", self.hostname),
                get_option("init", self.init),
                get_option("ip", self.ip),
                get_option("ip6", self.ip6),
                get_option("ipc", self.ipc),
                get_option("isolation", self.isolation),
                get_option("kernel-memory", self.kernelMemory),
                get_option("label", self.label),
                get_option("label-file", self.labelFile),
                get_option("link", self.link),
                get_option("link-local-ip", self.linkLocalIP),
                get_option("log-driver", self.logDriver),
                get_option("log-opt", self.logOpts),
                get_option("mac-address", self.macAddress),
                get_option("memory", self.memory),
                get_option("memory-reservation", self.memoryReservation),
                get_option("memory-swap", self.memorySwap),
                get_option("memory-swappiness", self.memorySwappiness),
                get_option("mount", self.mount),
                get_option("network", self.network),
                get_option("network-alias", self.networkAlias),
                get_option("no-healthcheck", self.noHealthcheck),
                get_option("oom-kill-disable", self.oomKillDisable),
                get_option("oom-score-adj", self.oomScoreAdj),
                get_option("pid", self.pid),
                get_option("pids-limit", self.pidsLimit),
                get_option("privileged", self.privileged),
                get_option("publish", self.publish),
                get_option("publish-all", self.publishAll),
                get_option("read-only", self.readOnly),
                get_option("restart", self.restart),
                get_option("rm", self.rm),
                get_option("runtime", self.runtime),
                get_option("security-opt", self.securityOpts),
                get_option("shm-size", self.shmSize),
                f"--sig-proxy={'true' if self.sigProxy else 'false'}",
                get_option("stop-signal", self.stopSignal),
                get_option("stop-timeout", self.stopTimeout),
                get_option("storage-opt", self.storageOpts),
                get_option("sysctl", self.sysctl),
                get_option("tmpfs", self.tmpfs),
                get_option("ulimit", self.ulimit),
                get_option("user", self.user),
                get_option("userns", self.userns),
                get_option("uts", self.uts),
                get_option("volume", self.volume),
                get_option("volume-driver", self.volumeDriver),
                get_option("volumes-from", self.volumesFrom),
                get_option("workdir", self.workdir),
                self.image,
                f"{' '.join(self.command) if self.command else ''}",
            ]
            stdout, returncode = await self.connector.run(
                location=self._inner_location.location,
                command=deploy_command,
                capture_output=True,
            )
            if returncode == 0:
                self.containerId = stdout
            else:
>               raise WorkflowExecutionException(
                    f"FAILED Deployment of {self.deployment_name} environment [{returncode}]:\n\t{stdout}"
                )
E               streamflow.core.exception.WorkflowExecutionException: FAILED Deployment of alpine-docker-wrapper environment [125]:
E               	docker: Cannot connect to the Docker daemon at unix:.../var/run/docker.sock. Is the docker daemon running?.
E               See 'docker run --help'.

.../deployment/connector/container.py:1177: WorkflowExecutionException


@GlassOfWhiskey self-requested a review on January 24, 2025, 08:29
@GlassOfWhiskey merged commit 8dc6fea into master on Jan 24, 2025
32 checks passed
@GlassOfWhiskey deleted the dependabot/pip/rdflib-7.1.3 branch on January 24, 2025, 08:30