feat: test private build #1295

Closed
wants to merge 1 commit

feat: test private build (commit 9c5e414)

GitHub Actions / Release Test Matrix (https://github.com/microsoft/promptflow/actions/workflows/promptflow-release-testing-matrix.yml?query=branch:++) failed Nov 28, 2023 in 0s

61 failed, 19 skipped, 1,034 passed in 3h 12m 51s

24 files, 24 suites, 3h 12m 51s duration
1,114 tests: 1,034 passed, 19 skipped, 61 failed
13,200 runs: 12,652 passed, 243 skipped, 305 failed

Results for commit 9c5e414.

Annotations

Check warning on line 0 in tests.sdk_cli_test.e2etests.test_flow_serve

All 12 runs failed: test_serving_api (tests.sdk_cli_test.e2etests.test_flow_serve)

artifacts/Test Results (Python 3.10) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
Raw output
AssertionError: Response code indicates error 400 - {"error":{"code":"UserError","message":"Execution failure in 'echo_my_prompt': (APIRemovedInV1) \n\nYou tried to access openai.Completion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.\n\nYou can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. \n\nAlternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`\n\nA detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742\n"}}
  
assert 400 == 200
 +  where 400 = <WrapperTestResponse streamed [400 BAD REQUEST]>.status_code
flow_serving_client = <FlaskClient <PromptflowServingApp 'promptflow._sdk._serving.app'>>

    @pytest.mark.usefixtures("flow_serving_client", "recording_injection", "setup_local_connection")
    @pytest.mark.e2etest
    def test_serving_api(flow_serving_client):
        response = flow_serving_client.get("/health")
        assert b'{"status":"Healthy","version":"0.0.1"}' in response.data
        response = flow_serving_client.post("/score", data=json.dumps({"text": "hi"}))
>       assert (
            response.status_code == 200
        ), f"Response code indicates error {response.status_code} - {response.data.decode()}"
E       AssertionError: Response code indicates error 400 - {"error":{"code":"UserError","message":"Execution failure in 'echo_my_prompt': (APIRemovedInV1) \n\nYou tried to access openai.Completion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.\n\nYou can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. \n\nAlternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`\n\nA detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742\n"}}
E         
E       assert 400 == 200
E        +  where 400 = <WrapperTestResponse streamed [400 BAD REQUEST]>.status_code

tests/sdk_cli_test/e2etests/test_flow_serve.py:132: AssertionError
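
This failure (and the identical APIRemovedInV1 errors in the other serving and node-test cases below) traces back to the flow's echo_my_prompt node still calling the legacy openai.Completion API, which openai>=1.0.0 removed. As a rough illustration only, a completions call against an Azure OpenAI deployment migrates roughly as follows; the endpoint, key, and deployment name are placeholders, not values from the failing flow:

    # openai < 1.0, the interface the failing node still uses (kept for comparison):
    #   import openai
    #   openai.api_type = "azure"
    #   openai.api_base = "https://<resource>.openai.azure.com"
    #   openai.api_version = "2023-07-01-preview"
    #   openai.api_key = "<key>"
    #   completion = openai.Completion.create(engine="text-davinci-003", prompt="Hello")
    #   text = completion.choices[0].text

    # openai >= 1.0 sketch of the equivalent call (placeholder configuration):
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://<resource>.openai.azure.com",
        api_version="2023-07-01-preview",
        api_key="<key>",
    )
    completion = client.completions.create(model="text-davinci-003", prompt="Hello")
    text = completion.choices[0].text

As the error message itself notes, the alternative is to pin openai==0.28 in the test environment until the flow's tool code is migrated.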

Check warning on line 0 in tests.sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun

1 out of 12 runs failed: test_run_bulk (tests.sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun)

artifacts/Test Results (Python 3.9) (OS macos-latest)/test-results-sdk-cli.xml [took 28s]
Raw output
azure.core.exceptions.ServiceResponseError: ('Connection aborted.', TimeoutError(60, 'Operation timed out'))
self = <sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun object at 0x10e16ef40>
pf = <promptflow.azure._pf_client.PFClient object at 0x11d583670>
runtime = 'test-runtime-ci'
randstr = <function randstr.<locals>.generate_random_string at 0x11d5aba60>

    def test_run_bulk(self, pf, runtime: str, randstr: Callable[[str], str]):
        name = randstr("name")
>       run = pf.run(
            flow=f"{FLOWS_DIR}/web_classification",
            data=f"{DATAS_DIR}/webClassification1.jsonl",
            column_mapping={"url": "${data.url}"},
            variant="${summarize_text_content.variant_0}",
            runtime=runtime,
            name=name,
        )

tests/sdk_cli_azure_test/e2etests/test_run_operations.py:47: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
promptflow/azure/_pf_client.py:252: in run
    return self.runs.create_or_update(run=run, **kwargs)
promptflow/_telemetry/activity.py:143: in wrapper
    return f(self, *args, **kwargs)
promptflow/azure/operations/_run_operations.py:231: in create_or_update
    self._service_caller.submit_bulk_run(
promptflow/azure/_restclient/flow_service_caller.py:61: in wrapper
    return func(self, *args, **kwargs)
promptflow/azure/_restclient/flow_service_caller.py:438: in submit_bulk_run
    return self.caller.bulk_runs.submit_bulk_run(
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/tracing/decorator.py:78: in wrapper_use_tracer
    return func(*args, **kwargs)
promptflow/azure/_restclient/flow/operations/_bulk_runs_operations.py:398: in submit_bulk_run
    pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/_base.py:230: in run
    return first_node.send(pipeline_request)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/_base.py:86: in send
    response = self.next.send(request)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/_base.py:86: in send
    response = self.next.send(request)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/_base.py:86: in send
    response = self.next.send(request)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/_base.py:86: in send
    response = self.next.send(request)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/_base.py:86: in send
    response = self.next.send(request)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/policies/_redirect.py:197: in send
    response = self.next.send(request)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/policies/_retry.py:553: in send
    raise err
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/policies/_retry.py:531: in send
    response = self.next.send(request)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/_base.py:86: in send
    response = self.next.send(request)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/_base.py:86: in send
    response = self.next.send(request)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/_base.py:86: in send
    response = self.next.send(request)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/_base.py:86: in send
    response = self.next.send(request)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/_base.py:86: in send
    response = self.next.send(request)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/_base.py:119: in send
    self._sender.send(request.http_request, **request.context.options),
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <azure.core.pipeline.transport._requests_basic.RequestsTransport object at 0x11d5a5bb0>
request = <HttpRequest [POST], url: 'https://eastus.api.azureml.ms/flow/api/subscriptions/96aede12-2f73-41cb-b983-6d11a904839b/resourceGroups/promptflow/providers/Microsoft.MachineLearningServices/workspaces/promptflow-eastus/BulkRuns/submit'>
kwargs = {'stream': False}, response = None
error = ServiceResponseError("('Connection aborted.', TimeoutError(60, 'Operation timed out'))")
connection_timeout = 300, timeout = (300, 300), read_timeout = 300

    def send(self, request: Union[HttpRequest, "RestHttpRequest"], **kwargs) -> Union[HttpResponse, "RestHttpResponse"]:
        """Send request object according to configuration.
    
        :param request: The request object to be sent.
        :type request: ~azure.core.pipeline.transport.HttpRequest
        :return: An HTTPResponse object.
        :rtype: ~azure.core.pipeline.transport.HttpResponse
    
        :keyword requests.Session session: will override the driver session and use yours.
         Should NOT be done unless really required. Anything else is sent straight to requests.
        :keyword dict proxies: will define the proxy to use. Proxy is a dict (protocol, url)
        """
        self.open()
        response = None
        error: Optional[AzureErrorUnion] = None
    
        try:
            connection_timeout = kwargs.pop("connection_timeout", self.connection_config.timeout)
    
            if isinstance(connection_timeout, tuple):
                if "read_timeout" in kwargs:
                    raise ValueError("Cannot set tuple connection_timeout and read_timeout together")
                _LOGGER.warning("Tuple timeout setting is deprecated")
                timeout = connection_timeout
            else:
                read_timeout = kwargs.pop("read_timeout", self.connection_config.read_timeout)
                timeout = (connection_timeout, read_timeout)
            response = self.session.request(  # type: ignore
                request.method,
                request.url,
                headers=request.headers,
                data=request.data,
                files=request.files,
                verify=kwargs.pop("connection_verify", self.connection_config.verify),
                timeout=timeout,
                cert=kwargs.pop("connection_cert", self.connection_config.cert),
                allow_redirects=False,
                **kwargs
            )
            response.raw.enforce_content_length = True
    
        except (
            NewConnectionError,
            ConnectTimeoutError,
        ) as err:
            error = ServiceRequestError(err, error=err)
        except requests.exceptions.ReadTimeout as err:
            error = ServiceResponseError(err, error=err)
        except requests.exceptions.ConnectionError as err:
            if err.args and isinstance(err.args[0], ProtocolError):
                error = ServiceResponseError(err, error=err)
            else:
                error = ServiceRequestError(err, error=err)
        except requests.exceptions.ChunkedEncodingError as err:
            msg = err.__str__()
            if "IncompleteRead" in msg:
                _LOGGER.warning("Incomplete download: %s", err)
                error = IncompleteReadError(err, error=err)
            else:
                _LOGGER.warning("Unable to stream download: %s", err)
                error = HttpResponseError(err, error=err)
        except requests.RequestException as err:
            error = ServiceRequestError(err, error=err)
    
        if error:
>           raise error
E           azure.core.exceptions.ServiceResponseError: ('Connection aborted.', TimeoutError(60, 'Operation timed out'))

/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/azure/core/pipeline/transport/_requests_basic.py:381: ServiceResponseError
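
For reference, the transport code above resolves its (connect, read) timeout from the azure-core connection_timeout and read_timeout keywords before handing the request to requests; this run used the 300/300 defaults and still hit a 60-second OS-level "Operation timed out", so the failure originated below the client configuration. A small standalone sketch of that resolution logic, assuming the azure-core defaults of 300 seconds each:

    # Standalone sketch mirroring the timeout handling in RequestsTransport.send above.
    def resolve_timeout(kwargs, default_connect=300, default_read=300):
        connection_timeout = kwargs.pop("connection_timeout", default_connect)
        if isinstance(connection_timeout, tuple):
            # Deprecated form: the caller already supplied a (connect, read) tuple.
            if "read_timeout" in kwargs:
                raise ValueError("Cannot set tuple connection_timeout and read_timeout together")
            return connection_timeout
        read_timeout = kwargs.pop("read_timeout", default_read)
        return (connection_timeout, read_timeout)

    # e.g. a caller passing read_timeout=120 for one operation would yield (300, 120)
    assert resolve_timeout({"read_timeout": 120}) == (300, 120)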

Check warning on line 0 in tests.sdk_cli_test.e2etests.test_flow_serve

All 12 runs failed: test_stream_python_nonstream_tools[text/event-stream-406-application/json] (tests.sdk_cli_test.e2etests.test_flow_serve)

artifacts/Test Results (Python 3.10) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
Raw output
assert 400 == 406
 +  where 400 = <WrapperTestResponse 571 bytes [400 BAD REQUEST]>.status_code
flow_serving_client = <FlaskClient <PromptflowServingApp 'promptflow._sdk._serving.app'>>
accept = 'text/event-stream', expected_status_code = 406
expected_content_type = 'application/json'

    @pytest.mark.usefixtures("recording_injection")
    @pytest.mark.e2etest
    @pytest.mark.parametrize(
        "accept, expected_status_code, expected_content_type",
        [
            ("text/event-stream", 406, "application/json"),
            ("application/json", 200, "application/json"),
            ("*/*", 200, "application/json"),
            ("text/event-stream, application/json", 200, "application/json"),
            ("application/json, */*", 200, "application/json"),
            ("", 200, "application/json"),
        ],
    )
    def test_stream_python_nonstream_tools(
        flow_serving_client,
        accept,
        expected_status_code,
        expected_content_type,
    ):
        payload = {
            "text": "Hello World!",
        }
        headers = {
            "Content-Type": "application/json",
            "Accept": accept,
        }
        response = flow_serving_client.post("/score", json=payload, headers=headers)
        if "text/event-stream" in response.content_type:
            for line in response.data.decode().split("\n"):
                print(line)
        else:
            result = response.json
            print(result)
>       assert response.status_code == expected_status_code
E       assert 400 == 406
E        +  where 400 = <WrapperTestResponse 571 bytes [400 BAD REQUEST]>.status_code

tests/sdk_cli_test/e2etests/test_flow_serve.py:301: AssertionError
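
The parametrized cases in this test (here and in the five variants below) encode the serving endpoint's content negotiation: an Accept header of text/event-stream alone on a flow with no streaming output should yield 406 with a JSON error body, while any Accept that admits application/json (including */* or an empty header) should yield 200 JSON. Every case instead returned 400, presumably the same execution failure seen in test_serving_api above, so the negotiation itself was never reached. A minimal Flask sketch of the negotiation the test expects, as an illustration only and not promptflow's serving implementation:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.post("/score")
    def score():
        accept = request.accept_mimetypes
        # An empty Accept header, */*, or anything admitting application/json is served as JSON.
        if accept and not accept.accept_json:
            # e.g. "Accept: text/event-stream" alone on a non-streaming flow:
            # 406, but still with a JSON error body the client can read.
            return jsonify({"error": {"code": "UserError", "message": "Not Acceptable"}}), 406
        return jsonify({"output": "..."}), 200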

Check warning on line 0 in tests.sdk_cli_test.e2etests.test_flow_serve

All 12 runs failed: test_stream_python_nonstream_tools[application/json-200-application/json] (tests.sdk_cli_test.e2etests.test_flow_serve)

artifacts/Test Results (Python 3.10) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
Raw output
assert 400 == 200
 +  where 400 = <WrapperTestResponse 571 bytes [400 BAD REQUEST]>.status_code
flow_serving_client = <FlaskClient <PromptflowServingApp 'promptflow._sdk._serving.app'>>
accept = 'application/json', expected_status_code = 200
expected_content_type = 'application/json'

    @pytest.mark.usefixtures("recording_injection")
    @pytest.mark.e2etest
    @pytest.mark.parametrize(
        "accept, expected_status_code, expected_content_type",
        [
            ("text/event-stream", 406, "application/json"),
            ("application/json", 200, "application/json"),
            ("*/*", 200, "application/json"),
            ("text/event-stream, application/json", 200, "application/json"),
            ("application/json, */*", 200, "application/json"),
            ("", 200, "application/json"),
        ],
    )
    def test_stream_python_nonstream_tools(
        flow_serving_client,
        accept,
        expected_status_code,
        expected_content_type,
    ):
        payload = {
            "text": "Hello World!",
        }
        headers = {
            "Content-Type": "application/json",
            "Accept": accept,
        }
        response = flow_serving_client.post("/score", json=payload, headers=headers)
        if "text/event-stream" in response.content_type:
            for line in response.data.decode().split("\n"):
                print(line)
        else:
            result = response.json
            print(result)
>       assert response.status_code == expected_status_code
E       assert 400 == 200
E        +  where 400 = <WrapperTestResponse 571 bytes [400 BAD REQUEST]>.status_code

tests/sdk_cli_test/e2etests/test_flow_serve.py:301: AssertionError

Check warning on line 0 in tests.sdk_cli_test.e2etests.test_flow_serve

All 12 runs failed: test_stream_python_nonstream_tools[*/*-200-application/json] (tests.sdk_cli_test.e2etests.test_flow_serve)

artifacts/Test Results (Python 3.10) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
Raw output
assert 400 == 200
 +  where 400 = <WrapperTestResponse 571 bytes [400 BAD REQUEST]>.status_code
flow_serving_client = <FlaskClient <PromptflowServingApp 'promptflow._sdk._serving.app'>>
accept = '*/*', expected_status_code = 200
expected_content_type = 'application/json'

    @pytest.mark.usefixtures("recording_injection")
    @pytest.mark.e2etest
    @pytest.mark.parametrize(
        "accept, expected_status_code, expected_content_type",
        [
            ("text/event-stream", 406, "application/json"),
            ("application/json", 200, "application/json"),
            ("*/*", 200, "application/json"),
            ("text/event-stream, application/json", 200, "application/json"),
            ("application/json, */*", 200, "application/json"),
            ("", 200, "application/json"),
        ],
    )
    def test_stream_python_nonstream_tools(
        flow_serving_client,
        accept,
        expected_status_code,
        expected_content_type,
    ):
        payload = {
            "text": "Hello World!",
        }
        headers = {
            "Content-Type": "application/json",
            "Accept": accept,
        }
        response = flow_serving_client.post("/score", json=payload, headers=headers)
        if "text/event-stream" in response.content_type:
            for line in response.data.decode().split("\n"):
                print(line)
        else:
            result = response.json
            print(result)
>       assert response.status_code == expected_status_code
E       assert 400 == 200
E        +  where 400 = <WrapperTestResponse 571 bytes [400 BAD REQUEST]>.status_code

tests/sdk_cli_test/e2etests/test_flow_serve.py:301: AssertionError

Check warning on line 0 in tests.sdk_cli_test.e2etests.test_flow_serve

All 12 runs failed: test_stream_python_nonstream_tools[text/event-stream, application/json-200-application/json] (tests.sdk_cli_test.e2etests.test_flow_serve)

artifacts/Test Results (Python 3.10) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
Raw output
assert 400 == 200
 +  where 400 = <WrapperTestResponse 571 bytes [400 BAD REQUEST]>.status_code
flow_serving_client = <FlaskClient <PromptflowServingApp 'promptflow._sdk._serving.app'>>
accept = 'text/event-stream, application/json', expected_status_code = 200
expected_content_type = 'application/json'

    @pytest.mark.usefixtures("recording_injection")
    @pytest.mark.e2etest
    @pytest.mark.parametrize(
        "accept, expected_status_code, expected_content_type",
        [
            ("text/event-stream", 406, "application/json"),
            ("application/json", 200, "application/json"),
            ("*/*", 200, "application/json"),
            ("text/event-stream, application/json", 200, "application/json"),
            ("application/json, */*", 200, "application/json"),
            ("", 200, "application/json"),
        ],
    )
    def test_stream_python_nonstream_tools(
        flow_serving_client,
        accept,
        expected_status_code,
        expected_content_type,
    ):
        payload = {
            "text": "Hello World!",
        }
        headers = {
            "Content-Type": "application/json",
            "Accept": accept,
        }
        response = flow_serving_client.post("/score", json=payload, headers=headers)
        if "text/event-stream" in response.content_type:
            for line in response.data.decode().split("\n"):
                print(line)
        else:
            result = response.json
            print(result)
>       assert response.status_code == expected_status_code
E       assert 400 == 200
E        +  where 400 = <WrapperTestResponse 571 bytes [400 BAD REQUEST]>.status_code

tests/sdk_cli_test/e2etests/test_flow_serve.py:301: AssertionError

Check warning on line 0 in tests.sdk_cli_test.e2etests.test_flow_serve

All 12 runs failed: test_stream_python_nonstream_tools[application/json, */*-200-application/json] (tests.sdk_cli_test.e2etests.test_flow_serve)

artifacts/Test Results (Python 3.10) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
Raw output
assert 400 == 200
 +  where 400 = <WrapperTestResponse 571 bytes [400 BAD REQUEST]>.status_code
flow_serving_client = <FlaskClient <PromptflowServingApp 'promptflow._sdk._serving.app'>>
accept = 'application/json, */*', expected_status_code = 200
expected_content_type = 'application/json'

    @pytest.mark.usefixtures("recording_injection")
    @pytest.mark.e2etest
    @pytest.mark.parametrize(
        "accept, expected_status_code, expected_content_type",
        [
            ("text/event-stream", 406, "application/json"),
            ("application/json", 200, "application/json"),
            ("*/*", 200, "application/json"),
            ("text/event-stream, application/json", 200, "application/json"),
            ("application/json, */*", 200, "application/json"),
            ("", 200, "application/json"),
        ],
    )
    def test_stream_python_nonstream_tools(
        flow_serving_client,
        accept,
        expected_status_code,
        expected_content_type,
    ):
        payload = {
            "text": "Hello World!",
        }
        headers = {
            "Content-Type": "application/json",
            "Accept": accept,
        }
        response = flow_serving_client.post("/score", json=payload, headers=headers)
        if "text/event-stream" in response.content_type:
            for line in response.data.decode().split("\n"):
                print(line)
        else:
            result = response.json
            print(result)
>       assert response.status_code == expected_status_code
E       assert 400 == 200
E        +  where 400 = <WrapperTestResponse 571 bytes [400 BAD REQUEST]>.status_code

tests/sdk_cli_test/e2etests/test_flow_serve.py:301: AssertionError

Check warning on line 0 in tests.sdk_cli_test.e2etests.test_flow_serve

All 12 runs failed: test_stream_python_nonstream_tools[-200-application/json] (tests.sdk_cli_test.e2etests.test_flow_serve)

artifacts/Test Results (Python 3.10) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
Raw output
assert 400 == 200
 +  where 400 = <WrapperTestResponse 571 bytes [400 BAD REQUEST]>.status_code
flow_serving_client = <FlaskClient <PromptflowServingApp 'promptflow._sdk._serving.app'>>
accept = '', expected_status_code = 200
expected_content_type = 'application/json'

    @pytest.mark.usefixtures("recording_injection")
    @pytest.mark.e2etest
    @pytest.mark.parametrize(
        "accept, expected_status_code, expected_content_type",
        [
            ("text/event-stream", 406, "application/json"),
            ("application/json", 200, "application/json"),
            ("*/*", 200, "application/json"),
            ("text/event-stream, application/json", 200, "application/json"),
            ("application/json, */*", 200, "application/json"),
            ("", 200, "application/json"),
        ],
    )
    def test_stream_python_nonstream_tools(
        flow_serving_client,
        accept,
        expected_status_code,
        expected_content_type,
    ):
        payload = {
            "text": "Hello World!",
        }
        headers = {
            "Content-Type": "application/json",
            "Accept": accept,
        }
        response = flow_serving_client.post("/score", json=payload, headers=headers)
        if "text/event-stream" in response.content_type:
            for line in response.data.decode().split("\n"):
                print(line)
        else:
            result = response.json
            print(result)
>       assert response.status_code == expected_status_code
E       assert 400 == 200
E        +  where 400 = <WrapperTestResponse 571 bytes [400 BAD REQUEST]>.status_code

tests/sdk_cli_test/e2etests/test_flow_serve.py:301: AssertionError

Check warning on line 0 in tests.sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun

1 out of 12 runs failed: test_basic_evaluation (tests.sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun)

artifacts/Test Results (Python 3.8) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 5s]
Raw output
azure.core.exceptions.ResourceExistsError: The specified blob already exists.
RequestId:7f8d280d-501e-003a-76fa-2117f4000000
Time:2023-11-28T12:57:33.8902494Z
ErrorCode:BlobAlreadyExists
Content: <?xml version="1.0" encoding="utf-8"?><Error><Code>BlobAlreadyExists</Code><Message>The specified blob already exists.
RequestId:7f8d280d-501e-003a-76fa-2117f4000000
Time:2023-11-28T12:57:33.8902494Z</Message></Error>
self = <sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun object at 0x7fbf58ce91f0>
pf = <promptflow.azure._pf_client.PFClient object at 0x7fbf450f8fa0>
runtime = 'test-runtime-ci'
randstr = <function randstr.<locals>.generate_random_string at 0x7fbf4512aa60>

    def test_basic_evaluation(self, pf, runtime: str, randstr: Callable[[str], str]):
        data_path = f"{DATAS_DIR}/webClassification3.jsonl"
    
>       run = pf.run(
            flow=f"{FLOWS_DIR}/web_classification",
            data=data_path,
            column_mapping={"url": "${data.url}"},
            variant="${summarize_text_content.variant_0}",
            runtime=runtime,
            name=randstr("batch_run_name"),
        )

tests/sdk_cli_azure_test/e2etests/test_run_operations.py:70: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
promptflow/azure/_pf_client.py:252: in run
    return self.runs.create_or_update(run=run, **kwargs)
promptflow/_telemetry/activity.py:143: in wrapper
    return f(self, *args, **kwargs)
promptflow/azure/operations/_run_operations.py:229: in create_or_update
    rest_obj = self._resolve_dependencies_in_parallel(run=run, runtime=kwargs.get("runtime"), reset=reset)
promptflow/azure/operations/_run_operations.py:949: in _resolve_dependencies_in_parallel
    task_results = [task.result() for task in tasks]
promptflow/azure/operations/_run_operations.py:949: in <listcomp>
    task_results = [task.result() for task in tasks]
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/concurrent/futures/_base.py:437: in result
    return self.__get_result()
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/concurrent/futures/_base.py:389: in __get_result
    raise self._exception
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/concurrent/futures/thread.py:57: in run
    result = self.fn(*self.args, **self.kwargs)
promptflow/azure/operations/_run_operations.py:780: in _resolve_flow
    self._flow_operations._resolve_arm_id_or_upload_dependencies(
promptflow/azure/operations/_flow_operations.py:426: in _resolve_arm_id_or_upload_dependencies
    self._try_resolve_code_for_flow(flow=flow, ops=ops, ignore_tools_json=ignore_tools_json)
promptflow/azure/operations/_flow_operations.py:491: in _try_resolve_code_for_flow
    uploaded_code_asset, _ = _check_and_upload_path(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/ai/ml/_artifacts/_artifact_utilities.py:497: in _check_and_upload_path
    uploaded_artifact = _upload_to_datastore(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/ai/ml/_artifacts/_artifact_utilities.py:382: in _upload_to_datastore
    artifact = upload_artifact(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/ai/ml/_artifacts/_artifact_utilities.py:241: in upload_artifact
    artifact_info = storage_client.upload(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/ai/ml/_artifacts/_blob_storage_helper.py:120: in upload
    upload_directory(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/ai/ml/_utils/_asset_utils.py:678: in upload_directory
    future.result()  # access result to propagate any exceptions
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/concurrent/futures/_base.py:437: in result
    return self.__get_result()
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/concurrent/futures/_base.py:389: in __get_result
    raise self._exception
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/concurrent/futures/thread.py:57: in run
    result = self.fn(*self.args, **self.kwargs)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/ai/ml/_utils/_asset_utils.py:566: in upload_file
    storage_client.container_client.upload_blob(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/core/tracing/decorator.py:78: in wrapper_use_tracer
    return func(*args, **kwargs)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/storage/blob/_container_client.py:1101: in upload_blob
    blob.upload_blob(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/core/tracing/decorator.py:78: in wrapper_use_tracer
    return func(*args, **kwargs)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/storage/blob/_blob_client.py:765: in upload_blob
    return upload_block_blob(**options)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/storage/blob/_upload_helpers.py:195: in upload_block_blob
    process_storage_error(error)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/storage/blob/_shared/response_handlers.py:184: in process_storage_error
    exec("raise error from None")   # pylint: disable=exec-used # nosec
<string>:1: in <module>
    ???
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/storage/blob/_upload_helpers.py:105: in upload_block_blob
    response = client.upload(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/core/tracing/decorator.py:78: in wrapper_use_tracer
    return func(*args, **kwargs)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/storage/blob/_generated/operations/_block_blob_operations.py:864: in upload
    map_error(status_code=response.status_code, response=response, error_map=error_map)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

status_code = 409
response = <RequestsTransportResponse: 409 The specified blob already exists., Content-Type: application/xml>
error_map = {304: <class 'azure.core.exceptions.ResourceNotModifiedError'>, 401: <class 'azure.core.exceptions.ClientAuthenticatio..., 404: <class 'azure.core.exceptions.ResourceNotFoundError'>, 409: <class 'azure.core.exceptions.ResourceExistsError'>}

    def map_error(
        status_code: int, response: _HttpResponseCommonAPI, error_map: Mapping[int, Type[HttpResponseError]]
    ) -> None:
        if not error_map:
            return
        error_type = error_map.get(status_code)
        if not error_type:
            return
        error = error_type(response=response)
>       raise error
E       azure.core.exceptions.ResourceExistsError: The specified blob already exists.
E       RequestId:7f8d280d-501e-003a-76fa-2117f4000000
E       Time:2023-11-28T12:57:33.8902494Z
E       ErrorCode:BlobAlreadyExists
E       Content: <?xml version="1.0" encoding="utf-8"?><Error><Code>BlobAlreadyExists</Code><Message>The specified blob already exists.
E       RequestId:7f8d280d-501e-003a-76fa-2117f4000000
E       Time:2023-11-28T12:57:33.8902494Z</Message></Error>

/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/azure/core/exceptions.py:165: ResourceExistsError
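
The 409 here appears to be the storage service rejecting a re-upload of a code artifact that already exists in the workspace datastore; map_error translates the BlobAlreadyExists response into ResourceExistsError because the blob was written without overwrite. Illustrated with the public azure-storage-blob API (connection string, container, and blob names are placeholders, not the values used by the AzureML upload helpers):

    from azure.core.exceptions import ResourceExistsError
    from azure.storage.blob import ContainerClient

    conn_str = "<storage-account-connection-string>"  # placeholder
    container = ContainerClient.from_connection_string(conn_str, container_name="artifacts")

    try:
        # Default overwrite=False: writing an existing blob name raises 409 BlobAlreadyExists.
        container.upload_blob(name="LocalUpload/code/flow.dag.yaml", data=b"...")
    except ResourceExistsError:
        # Either treat the blob as already uploaded, or overwrite it explicitly.
        container.upload_blob(name="LocalUpload/code/flow.dag.yaml", data=b"...", overwrite=True)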

Check warning on line 0 in tests.sdk_cli_test.e2etests.test_flow_test.TestFlowTest

All 12 runs failed: test_node_test_with_connection_input (tests.sdk_cli_test.e2etests.test_flow_test.TestFlowTest)

artifacts/Test Results (Python 3.10) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.10) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.8) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS macos-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 0s]
artifacts/Test Results (Python 3.9) (OS windows-latest)/test-results-sdk-cli.xml [took 0s]
Raw output
promptflow.exceptions.UserErrorException: APIRemovedInV1: Execution failure in 'echo_my_prompt': (APIRemovedInV1) 

You tried to access openai.Completion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. 

Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
self = <sdk_cli_test.e2etests.test_flow_test.TestFlowTest object at 0x7fdceee423b0>

    def test_node_test_with_connection_input(self):
        flow_path = Path(f"{FLOWS_DIR}/basic-with-connection").absolute()
        inputs = {
            "connection": "azure_open_ai_connection",
            "hello_prompt.output": "Write a simple Hello World! "
            "program that displays the greeting message when executed.",
        }
>       result = _client.test(
            flow=flow_path,
            inputs=inputs,
            node="echo_my_prompt",
            environment_variables={"API_TYPE": "${azure_open_ai_connection.api_type}"},
        )

tests/sdk_cli_test/e2etests/test_flow_test.py:177: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
promptflow/_sdk/_pf_client.py:241: in test
    return self.flows.test(
promptflow/_telemetry/activity.py:143: in wrapper
    return f(self, *args, **kwargs)
promptflow/_sdk/operations/_flow_operations.py:93: in test
    TestSubmitter._raise_error_when_test_failed(result, show_trace=node is not None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

test_result = RunInfo(node='echo_my_prompt', flow_run_id='1414a5b1-80aa-4502-9935-0040bc2f4a15', run_id='1414a5b1-80aa-4502-9935-004...d=None, cached_flow_run_id=None, logs={'stdout': '', 'stderr': ''}, system_metrics={'duration': 0.001869}, result=None)
show_trace = True

    @staticmethod
    def _raise_error_when_test_failed(test_result, show_trace=False):
        from promptflow.executor._result import LineResult
    
        test_status = test_result.run_info.status if isinstance(test_result, LineResult) else test_result.status
    
        if test_status == Status.Failed:
            error_dict = test_result.run_info.error if isinstance(test_result, LineResult) else test_result.error
            error_response = ErrorResponse.from_error_dict(error_dict)
            user_execution_error = error_response.get_user_execution_error_info()
            error_message = error_response.message
            stack_trace = user_execution_error.get("traceback", "")
            error_type = user_execution_error.get("type", "Exception")
            if show_trace:
                print(stack_trace)
>           raise UserErrorException(f"{error_type}: {error_message}")
E           promptflow.exceptions.UserErrorException: APIRemovedInV1: Execution failure in 'echo_my_prompt': (APIRemovedInV1) 
E           
E           You tried to access openai.Completion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
E           
E           You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. 
E           
E           Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
E           
E           A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742

promptflow/_sdk/_submitter/test_submitter.py:391: UserErrorException

Check warning on line 0 in tests.sdk_cli_test.e2etests.test_cli.TestCli

All 12 runs failed: test_flow_test_inputs (tests.sdk_cli_test.e2etests.test_cli.TestCli)

artifacts/Test Results (Python 3.10) (OS macos-latest)/test-results-sdk-cli.xml [took 17s]
artifacts/Test Results (Python 3.10) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 13s]
artifacts/Test Results (Python 3.10) (OS windows-latest)/test-results-sdk-cli.xml [took 14s]
artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-sdk-cli.xml [took 21s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 16s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-sdk-cli.xml [took 18s]
artifacts/Test Results (Python 3.8) (OS macos-latest)/test-results-sdk-cli.xml [took 16s]
artifacts/Test Results (Python 3.8) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 13s]
artifacts/Test Results (Python 3.8) (OS windows-latest)/test-results-sdk-cli.xml [took 15s]
artifacts/Test Results (Python 3.9) (OS macos-latest)/test-results-sdk-cli.xml [took 18s]
artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 17s]
artifacts/Test Results (Python 3.9) (OS windows-latest)/test-results-sdk-cli.xml [took 14s]
Raw output
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
self = <sdk_cli_test.e2etests.test_cli.TestCli object at 0x7f735ce450f0>
capsys = <_pytest.capture.CaptureFixture object at 0x7f7355c15a80>
caplog = <_pytest.logging.LogCaptureFixture object at 0x7f7355c146a0>

    def test_flow_test_inputs(self, capsys, caplog):
        # Flow test missing required inputs
        with pytest.raises(SystemExit):
            run_pf_command(
                "flow",
                "test",
                "--flow",
                f"{FLOWS_DIR}/print_env_var",
                "--environment-variables",
                "API_BASE=${azure_open_ai_connection.api_base}",
            )
        stdout, _ = capsys.readouterr()
        assert "Required input(s) ['key'] are missing for \"flow\"." in stdout
    
        # Node test missing required inputs
        with pytest.raises(SystemExit):
            run_pf_command(
                "flow",
                "test",
                "--flow",
                f"{FLOWS_DIR}/print_env_var",
                "--node",
                "print_env",
                "--environment-variables",
                "API_BASE=${azure_open_ai_connection.api_base}",
            )
        stdout, _ = capsys.readouterr()
        assert "Required input(s) ['key'] are missing for \"print_env\"" in stdout
    
        # Flow test with unknown inputs
        logger = logging.getLogger(LOGGER_NAME)
        logger.propagate = True
    
        def validate_log(log_msg, prefix, expect_dict):
            log_inputs = json.loads(log_msg[len(prefix) :].replace("'", '"'))
            assert prefix in log_msg
            assert expect_dict == log_inputs
    
        with caplog.at_level(level=logging.INFO, logger=LOGGER_NAME):
            run_pf_command(
                "flow",
                "test",
                "--flow",
                f"{FLOWS_DIR}/web_classification",
                "--inputs",
                "url=https://www.youtube.com/watch?v=o5ZQyXaAv1g",
                "answer=Channel",
                "evidence=Url",
            )
            unknown_input_log = caplog.records[0]
            expect_inputs = {"answer": "Channel", "evidence": "Url"}
            validate_log(
                prefix="Unknown input(s) of flow: ", log_msg=unknown_input_log.message, expect_dict=expect_inputs
            )
    
            flow_input_log = caplog.records[1]
            expect_inputs = {
                "url": "https://www.youtube.com/watch?v=o5ZQyXaAv1g",
                "answer": "Channel",
                "evidence": "Url",
            }
            validate_log(prefix="flow input(s): ", log_msg=flow_input_log.message, expect_dict=expect_inputs)
    
            # Node test with unknown inputs
            run_pf_command(
                "flow",
                "test",
                "--flow",
                f"{FLOWS_DIR}/web_classification",
                "--inputs",
                "inputs.url="
                "https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h",
                "unknown_input=unknown_val",
                "--node",
                "fetch_text_content_from_url",
            )
            unknown_input_log = caplog.records[3]
            expect_inputs = {"unknown_input": "unknown_val"}
>           validate_log(
                prefix="Unknown input(s) of fetch_text_content_from_url: ",
                log_msg=unknown_input_log.message,
                expect_dict=expect_inputs,
            )

tests/sdk_cli_test/e2etests/test_cli.py:1091: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/sdk_cli_test/e2etests/test_cli.py:1047: in validate_log
    log_inputs = json.loads(log_msg[len(prefix) :].replace("'", '"'))
/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/json/__init__.py:346: in loads
    return _default_decoder.decode(s)
/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/json/decoder.py:337: in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <json.decoder.JSONDecoder object at 0x7f737e9c83d0>
s = 'ure.com//openai/deployments/text-davinci-003/completions?api-version=2023-07-01-preview "HTTP/1.1 200 OK"'
idx = 0

    def raw_decode(self, s, idx=0):
        """Decode a JSON document from ``s`` (a ``str`` beginning with
        a JSON document) and return a 2-tuple of the Python
        representation and the index in ``s`` where the document ended.
    
        This can be used to decode a JSON document from a string that may
        have extraneous data at the end.
    
        """
        try:
            obj, end = self.scan_once(s, idx)
        except StopIteration as err:
>           raise JSONDecodeError("Expecting value", s, err.value) from None
E           json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/json/decoder.py:355: JSONDecodeError
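
The helper validate_log strips a fixed prefix from caplog.records[3] and parses the remainder as JSON after swapping single quotes for double quotes. The captured string here ('ure.com//openai/deployments/... "HTTP/1.1 200 OK"') shows that record 3 was an unrelated HTTP-client log line rather than the expected "Unknown input(s)" message, so json.loads fails on the first character. A hedged sketch of a less position-dependent variant, selecting the record by prefix and parsing the dict repr with ast.literal_eval instead of quote replacement:

    import ast

    def validate_log_by_prefix(records, prefix, expect_dict):
        # records: the caplog.records list; match on message prefix instead of a fixed index.
        matching = [r.message for r in records if r.message.startswith(prefix)]
        assert matching, f"no log record starts with {prefix!r}"
        log_inputs = ast.literal_eval(matching[0][len(prefix):])
        assert log_inputs == expect_dict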

Check warning on line 0 in tests.sdk_cli_test.e2etests.test_executable.TestExecutable

3 out of 12 runs failed: test_flow_build_executable (tests.sdk_cli_test.e2etests.test_executable.TestExecutable)

artifacts/Test Results (Python 3.10) (OS macos-latest)/test-results-sdk-cli.xml [took 10s]
artifacts/Test Results (Python 3.8) (OS macos-latest)/test-results-sdk-cli.xml [took 20s]
artifacts/Test Results (Python 3.9) (OS macos-latest)/test-results-sdk-cli.xml [took 12s]
Raw output
Exception: Process terminated with exit code 255, [2023-11-28 13:07:08,536][promptflow][WARNING] - Connection with name azure_open_ai_connection already exists. Updating it.
2023-11-28 13:07:08.567
self = <sdk_cli_test.e2etests.test_executable.TestExecutable object at 0x12bb57460>

    @pytest.mark.skipif(sys.platform == "win32", reason="Raise Exception: Process terminated with exit code 4294967295")
    def test_flow_build_executable(self):
        source = f"{FLOWS_DIR}/web_classification/flow.dag.yaml"
        target = "promptflow._sdk.operations._flow_operations.FlowOperations._run_pyinstaller"
        with mock.patch(target) as mocked:
            mocked.return_value = None
    
            with tempfile.TemporaryDirectory() as temp_dir:
                run_pf_command(
                    "flow",
                    "build",
                    "--source",
                    source,
                    "--output",
                    temp_dir,
                    "--format",
                    "executable",
                )
                # Start the Python script as a subprocess
                app_file = Path(temp_dir, "app.py").as_posix()
                process = subprocess.Popen(["python", app_file], stderr=subprocess.PIPE)
                try:
                    # Wait for a specified time (in seconds)
                    wait_time = 5
                    process.wait(timeout=wait_time)
                    if process.returncode == 0:
                        pass
                    else:
>                       raise Exception(
                            f"Process terminated with exit code {process.returncode}, "
                            f"{process.stderr.read().decode('utf-8')}"
                        )
E                       Exception: Process terminated with exit code 255, [2023-11-28 13:07:08,536][promptflow][WARNING] - Connection with name azure_open_ai_connection already exists. Updating it.
E                       2023-11-28 13:07:08.567

tests/sdk_cli_test/e2etests/test_executable.py:49: Exception

Check warning on line 0 in tests.sdk_cli_test.e2etests.test_flow_run.TestFlowRun

1 out of 12 runs failed: test_run_local_storage_structure (tests.sdk_cli_test.e2etests.test_flow_run.TestFlowRun)

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 1s]
Raw output
AssertionError: assert 3 == 5
 +  where 3 = len([PosixPath('/home/runner/.promptflow/.runs/web_classification_variant_0_20231128_130143_870410/node_artifacts/fetch_text_content_from_url'), PosixPath('/home/runner/.promptflow/.runs/web_classification_variant_0_20231128_130143_870410/node_artifacts/prepare_examples'), PosixPath('/home/runner/.promptflow/.runs/web_classification_variant_0_20231128_130143_870410/node_artifacts/summarize_text_content')])
self = <sdk_cli_test.e2etests.test_flow_run.TestFlowRun object at 0x7fc28c6a5280>
local_client = <promptflow._sdk._pf_client.PFClient object at 0x7fc28c3ff580>
pf = <promptflow._sdk._pf_client.PFClient object at 0x7fc28c3fff70>

    def test_run_local_storage_structure(self, local_client, pf) -> None:
        run = create_run_against_multi_line_data(pf)
        local_storage = LocalStorageOperations(local_client.runs.get(run.name))
        run_output_path = local_storage.path
        assert (Path(run_output_path) / "flow_outputs").is_dir()
        assert (Path(run_output_path) / "flow_outputs" / "output.jsonl").is_file()
        assert (Path(run_output_path) / "flow_artifacts").is_dir()
        # 3 line runs for webClassification3.jsonl
        assert len([_ for _ in (Path(run_output_path) / "flow_artifacts").iterdir()]) == 3
        assert (Path(run_output_path) / "node_artifacts").is_dir()
        # 5 nodes web classification flow DAG
>       assert len([_ for _ in (Path(run_output_path) / "node_artifacts").iterdir()]) == 5
E       AssertionError: assert 3 == 5
E        +  where 3 = len([PosixPath('/home/runner/.promptflow/.runs/web_classification_variant_0_20231128_130143_870410/node_artifacts/fetch_text_content_from_url'), PosixPath('/home/runner/.promptflow/.runs/web_classification_variant_0_20231128_130143_870410/node_artifacts/prepare_examples'), PosixPath('/home/runner/.promptflow/.runs/web_classification_variant_0_20231128_130143_870410/node_artifacts/summarize_text_content')])

tests/sdk_cli_test/e2etests/test_flow_run.py:736: AssertionError

Check warning on line 0 in tests.sdk_cli_test.e2etests.test_flow_run.TestFlowRun

2 out of 12 runs failed: test_get_metrics_format (tests.sdk_cli_test.e2etests.test_flow_run.TestFlowRun)

artifacts/Test Results (Python 3.10) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 2s]
artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 2s]
Raw output
promptflow._sdk._errors.InvalidRunStatusError: Run 'classification_accuracy_evaluation_variant_0_20231128_130120_219360' is not completed, the status is 'Failed'.
self = <sdk_cli_test.e2etests.test_flow_run.TestFlowRun object at 0x7fada2a8e7a0>
local_client = <promptflow._sdk._pf_client.PFClient object at 0x7fada25d8af0>
pf = <promptflow._sdk._pf_client.PFClient object at 0x7fada25d9090>

    def test_get_metrics_format(self, local_client, pf) -> None:
        run1 = create_run_against_multi_line_data(pf)
        run2 = create_run_against_run(pf, run1)
        # ensure the result is a flatten dict
>       assert local_client.runs.get_metrics(run2.name).keys() == {"accuracy"}

tests/sdk_cli_test/e2etests/test_flow_run.py:748: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
promptflow/_telemetry/activity.py:143: in wrapper
    return f(self, *args, **kwargs)
promptflow/_sdk/operations/_run_operations.py:268: in get_metrics
    run._check_run_status_is_completed()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <promptflow._sdk.entities._run.Run object at 0x7fad889972b0>

    def _check_run_status_is_completed(self) -> None:
        if self.status != RunStatus.COMPLETED:
            error_message = f"Run {self.name!r} is not completed, the status is {self.status!r}."
            if self.status != RunStatus.FAILED:
                error_message += " Please wait for its completion, or select other completed run(s)."
>           raise InvalidRunStatusError(error_message)
E           promptflow._sdk._errors.InvalidRunStatusError: Run 'classification_accuracy_evaluation_variant_0_20231128_130120_219360' is not completed, the status is 'Failed'.

promptflow/_sdk/entities/_run.py:550: InvalidRunStatusError
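
The evaluation run finished as 'Failed', so get_metrics raises before any metrics can be read. A minimal sketch of guarding the call on the run status first, using only the attributes visible in the traceback; the import path for RunStatus is an assumption and get_metrics_if_completed is a hypothetical helper:

    # Sketch: only ask for metrics when the run actually completed.
    from promptflow._sdk._constants import RunStatus  # assumed location of RunStatus

    def get_metrics_if_completed(client, run_name: str):
        run = client.runs.get(run_name)
        if run.status != RunStatus.COMPLETED:
            print(f"Run {run_name!r} is {run.status!r}; metrics are unavailable.")
            return None
        return client.runs.get_metrics(run_name)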

Check warning on line 0 in tests.sdk_cli_azure_test.e2etests.test_flow_serve

All 12 runs failed: test_serving_api (tests.sdk_cli_azure_test.e2etests.test_flow_serve)

artifacts/Test Results (Python 3.10) (OS macos-latest)/test-results-sdk-cli.xml [took 3s]
artifacts/Test Results (Python 3.10) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 1s]
artifacts/Test Results (Python 3.10) (OS windows-latest)/test-results-sdk-cli.xml [took 2s]
artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-sdk-cli.xml [took 3s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 1s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-sdk-cli.xml [took 3s]
artifacts/Test Results (Python 3.8) (OS macos-latest)/test-results-sdk-cli.xml [took 5s]
artifacts/Test Results (Python 3.8) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 1s]
artifacts/Test Results (Python 3.8) (OS windows-latest)/test-results-sdk-cli.xml [took 2s]
artifacts/Test Results (Python 3.9) (OS macos-latest)/test-results-sdk-cli.xml [took 3s]
artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 1s]
artifacts/Test Results (Python 3.9) (OS windows-latest)/test-results-sdk-cli.xml [took 2s]
Raw output
AssertionError: Response code indicates error 400 - {"error":{"code":"UserError","message":"Execution failure in 'echo_my_prompt': (APIRemovedInV1) \n\nYou tried to access openai.Completion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.\n\nYou can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. \n\nAlternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`\n\nA detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742\n"}}
  
assert 400 == 200
 +  where 400 = <WrapperTestResponse streamed [400 BAD REQUEST]>.status_code
flow_serving_client_remote_connection = <FlaskClient <PromptflowServingApp 'promptflow._sdk._serving.app'>>

    @pytest.mark.skipif(condition=not is_live(), reason="serving tests, only run in live mode.")
    @pytest.mark.usefixtures("flow_serving_client_remote_connection")
    @pytest.mark.e2etest
    def test_serving_api(flow_serving_client_remote_connection):
        response = flow_serving_client_remote_connection.get("/health")
        assert b'{"status":"Healthy","version":"0.0.1"}' in response.data
        response = flow_serving_client_remote_connection.post("/score", data=json.dumps({"text": "hi"}))
>       assert (
            response.status_code == 200
        ), f"Response code indicates error {response.status_code} - {response.data.decode()}"
E       AssertionError: Response code indicates error 400 - {"error":{"code":"UserError","message":"Execution failure in 'echo_my_prompt': (APIRemovedInV1) \n\nYou tried to access openai.Completion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.\n\nYou can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. \n\nAlternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`\n\nA detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742\n"}}
E         
E       assert 400 == 200
E        +  where 400 = <WrapperTestResponse streamed [400 BAD REQUEST]>.status_code

tests/sdk_cli_azure_test/e2etests/test_flow_serve.py:15: AssertionError
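
Both serving failures have the same root cause: the sample flow's echo_my_prompt tool still calls openai.Completion, which openai>=1.0.0 removed. A minimal sketch of the equivalent 1.x-style call, assuming an Azure OpenAI connection like the one the flow uses (endpoint, key and deployment name are placeholders, not values from this repository):

    # Sketch: openai>=1.0.0 replacement for openai.Completion.create.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com/",
        api_key="<api-key>",
        api_version="2023-07-01-preview",
    )
    completion = client.completions.create(
        model="<deployment-name>",  # the Azure deployment backing the completion call
        prompt="hi",
        max_tokens=32,
    )
    print(completion.choices[0].text)

Alternatively, pinning openai==0.28 keeps the legacy openai.Completion interface, as the error message itself suggests.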

Check warning on line 0 in tests.sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun

1 out of 12 runs failed: test_archive_and_restore_run (tests.sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun)

artifacts/Test Results (Python 3.9) (OS macos-latest)/test-results-sdk-cli.xml [took 26s]
Raw output
requests.exceptions.ConnectionError: ('Connection aborted.', TimeoutError(60, 'Operation timed out'))
self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x11d789910>
method = 'POST'
url = '/history/v1.0/subscriptions/96aede12-2f73-41cb-b983-6d11a904839b/resourceGroups/promptflow/providers/Microsoft.MachineLearningServices/workspaces/promptflow-eastus/rundata'
body = b'{"runId": "4cf2d5e9-c78f-4ab8-a3ee-57675f92fb74", "selectRunMetadata": true, "selectRunDefinition": true, "selectJobSpecification": true}'
headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-aliv...ffk0FMhWUonLEQsifwg0KXC5NIiFXnD8wq5kpaWilABHOjhhYTIsz9rw', 'Content-Type': 'application/json', 'Content-Length': '137'}
retries = Retry(total=0, connect=None, read=False, redirect=None, status=None)
redirect = False, assert_same_host = False
timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/history/v1.0/subscriptions/96aede12-2f73-41cb-b983-6d11a90483...romptflow/providers/Microsoft.MachineLearningServices/workspaces/promptflow-eastus/rundata', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(
        self,
        method,
        url,
        body=None,
        headers=None,
        retries=None,
        redirect=True,
        assert_same_host=True,
        timeout=_Default,
        pool_timeout=None,
        release_conn=None,
        chunked=False,
        body_pos=None,
        **response_kw
    ):
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.
    
        .. note::
    
           More commonly, it's appropriate to use a convenience method provided
           by :class:`.RequestMethods`, such as :meth:`request`.
    
        .. note::
    
           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.
    
        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)
    
        :param url:
            The URL to perform the request on.
    
        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.
    
        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.
    
        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.
    
            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.
    
            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.
    
        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
    
        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.
    
        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.
    
        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.
    
        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.
    
        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of
            ``response_kw.get('preload_content', True)``.
    
        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.
    
        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
    
        :param \\**response_kw:
            Additional parameters are passed to
            :meth:`urllib3.response.HTTPResponse.from_httplib`
        """
    
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme
    
        if headers is None:
            headers = self.headers
    
        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
    
        if release_conn is None:
            release_conn = response_kw.get("preload_content", True)
    
        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)
    
        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = six.ensure_str(_encode_target(url))
        else:
            url = six.ensure_str(parsed_url.url)
    
        conn = None
    
        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1] <https://github.com/urllib3/urllib3/issues/651>
        release_this_conn = release_conn
    
        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )
    
        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()
            headers.update(self.proxy_headers)
    
        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None
    
        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False
    
        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)
    
        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)
    
            conn.timeout = timeout_obj.connect_timeout
    
            is_new_proxy_conn = self.proxy is not None and not getattr(
                conn, "sock", None
            )
            if is_new_proxy_conn and http_tunnel_required:
                self._prepare_proxy(conn)
    
            # Make the request on the httplib connection object.
>           httplib_response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
            )

/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/urllib3/connectionpool.py:715: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/urllib3/connectionpool.py:404: in _make_request
    self._validate_conn(conn)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/urllib3/connectionpool.py:1058: in _validate_conn
    conn.connect()
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/urllib3/connection.py:419: in connect
    self.sock = ssl_wrap_socket(
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/urllib3/util/ssl_.py:449: in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/urllib3/util/ssl_.py:493: in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/ssl.py:501: in wrap_socket
    return self.sslsocket_class._create(
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/ssl.py:1074: in _create
    self.do_handshake()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <ssl.SSLSocket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0>
block = False

    @_sslcopydoc
    def do_handshake(self, block=False):
        self._check_connected()
        timeout = self.gettimeout()
        try:
            if timeout == 0.0 and block:
                self.settimeout(None)
>           self._sslobj.do_handshake()
E           TimeoutError: [Errno 60] Operation timed out

/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/ssl.py:1343: TimeoutError

During handling of the above exception, another exception occurred:

self = <requests.adapters.HTTPAdapter object at 0x11d77e880>
request = <PreparedRequest [POST]>, stream = False
timeout = Timeout(connect=None, read=None, total=None), verify = True
cert = None, proxies = OrderedDict()

    def send(
        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
    ):
        """Sends PreparedRequest object. Returns Response object.
    
        :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) <timeouts>` tuple.
        :type timeout: float or tuple or urllib3 Timeout object
        :param verify: (optional) Either a boolean, in which case it controls whether
            we verify the server's TLS certificate, or a string, in which case it
            must be a path to a CA bundle to use
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """
    
        try:
            conn = self.get_connection(request.url, proxies)
        except LocationValueError as e:
            raise InvalidURL(e, request=request)
    
        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(
            request,
            stream=stream,
            timeout=timeout,
            verify=verify,
            cert=cert,
            proxies=proxies,
        )
    
        chunked = not (request.body is None or "Content-Length" in request.headers)
    
        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError:
                raise ValueError(
                    f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                    f"or a single float to set both timeouts to the same value."
                )
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)
    
        try:
>           resp = conn.urlopen(
                method=request.method,
                url=url,
                body=request.body,
                headers=request.headers,
                redirect=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
                retries=self.max_retries,
                timeout=timeout,
                chunked=chunked,
            )

/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/requests/adapters.py:486: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/urllib3/connectionpool.py:799: in urlopen
    retries = retries.increment(
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/urllib3/util/retry.py:550: in increment
    raise six.reraise(type(error), error, _stacktrace)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/urllib3/packages/six.py:769: in reraise
    raise value.with_traceback(tb)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/urllib3/connectionpool.py:715: in urlopen
    httplib_response = self._make_request(
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/urllib3/connectionpool.py:404: in _make_request
    self._validate_conn(conn)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/urllib3/connectionpool.py:1058: in _validate_conn
    conn.connect()
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/urllib3/connection.py:419: in connect
    self.sock = ssl_wrap_socket(
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/urllib3/util/ssl_.py:449: in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/urllib3/util/ssl_.py:493: in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/ssl.py:501: in wrap_socket
    return self.sslsocket_class._create(
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/ssl.py:1074: in _create
    self.do_handshake()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <ssl.SSLSocket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0>
block = False

    @_sslcopydoc
    def do_handshake(self, block=False):
        self._check_connected()
        timeout = self.gettimeout()
        try:
            if timeout == 0.0 and block:
                self.settimeout(None)
>           self._sslobj.do_handshake()
E           urllib3.exceptions.ProtocolError: ('Connection aborted.', TimeoutError(60, 'Operation timed out'))

/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/ssl.py:1343: ProtocolError

During handling of the above exception, another exception occurred:

self = <sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun object at 0x10e38ac40>
pf = <promptflow.azure._pf_client.PFClient object at 0x11daab0d0>

    @pytest.mark.skipif(
        condition=not is_live(),
        reason="cannot differ the two requests to run history in replay mode.",
    )
    def test_archive_and_restore_run(self, pf):
        from promptflow._sdk._constants import RunHistoryKeys
    
        run_meta_data = RunHistoryKeys.RunMetaData
        hidden = RunHistoryKeys.HIDDEN
    
        run_id = "4cf2d5e9-c78f-4ab8-a3ee-57675f92fb74"
    
        # test archive
        pf.runs.archive(run=run_id)
>       run_data = pf.runs._get_run_from_run_history(run_id, original_form=True)[run_meta_data]

tests/sdk_cli_azure_test/e2etests/test_run_operations.py:372: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
promptflow/azure/operations/_run_operations.py:490: in _get_run_from_run_history
    response = requests.post(url, headers=headers, json=payload)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/requests/api.py:115: in post
    return request("post", url, data=data, json=json, **kwargs)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/requests/api.py:59: in request
    return session.request(method=method, url=url, **kwargs)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/requests/sessions.py:589: in request
    resp = self.send(prep, **send_kwargs)
/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/requests/sessions.py:703: in send
    r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <requests.adapters.HTTPAdapter object at 0x11d77e880>
request = <PreparedRequest [POST]>, stream = False
timeout = Timeout(connect=None, read=None, total=None), verify = True
cert = None, proxies = OrderedDict()

    def send(
        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
    ):
        """Sends PreparedRequest object. Returns Response object.
    
        :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) <timeouts>` tuple.
        :type timeout: float or tuple or urllib3 Timeout object
        :param verify: (optional) Either a boolean, in which case it controls whether
            we verify the server's TLS certificate, or a string, in which case it
            must be a path to a CA bundle to use
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """
    
        try:
            conn = self.get_connection(request.url, proxies)
        except LocationValueError as e:
            raise InvalidURL(e, request=request)
    
        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(
            request,
            stream=stream,
            timeout=timeout,
            verify=verify,
            cert=cert,
            proxies=proxies,
        )
    
        chunked = not (request.body is None or "Content-Length" in request.headers)
    
        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError:
                raise ValueError(
                    f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                    f"or a single float to set both timeouts to the same value."
                )
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)
    
        try:
            resp = conn.urlopen(
                method=request.method,
                url=url,
                body=request.body,
                headers=request.headers,
                redirect=False,
                assert_same_host=False,
                preload_content=False,
                decode_content=False,
                retries=self.max_retries,
                timeout=timeout,
                chunked=chunked,
            )
    
        except (ProtocolError, OSError) as err:
>           raise ConnectionError(err, request=request)
E           requests.exceptions.ConnectionError: ('Connection aborted.', TimeoutError(60, 'Operation timed out'))

/Users/runner/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/site-packages/requests/adapters.py:501: ConnectionError
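
The run-history lookup is issued with a bare requests.post(url, headers=headers, json=payload) and no timeout or retry, so a single stalled TLS handshake fails the whole test. A minimal sketch of mounting urllib3 retries and a bounded timeout on a requests.Session; this is a generic requests pattern, not promptflow API, and the url, headers and payload values are placeholders:

    # Sketch: retry transient connection failures and bound the wait.
    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    retry = Retry(
        total=3,                    # up to 3 retries on connect/read failures
        backoff_factor=1.0,         # exponential backoff between attempts
        status_forcelist=[502, 503, 504],
        allowed_methods=["POST"],   # POST is not retried by default
    )
    session.mount("https://", HTTPAdapter(max_retries=retry))

    url = "https://example.com/history/v1.0/rundata"        # placeholder endpoint
    headers = {"Authorization": "Bearer <token>"}            # placeholder auth header
    payload = {"runId": "<run-id>", "selectRunMetadata": True}
    response = session.post(url, headers=headers, json=payload, timeout=(10, 60))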

Check warning on line 0 in tests.sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun

1 out of 12 runs failed: test_update_run (tests.sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun)

artifacts/Test Results (Python 3.9) (OS windows-latest)/test-results-sdk-cli.xml [took 9s]
Raw output
AssertionError: assert 'test_display_name_0f408ccd-66d4-4039-b4a7-2192e443830c' == 'test_display_name_09fff7b1-dfd8-4b96-a976-bf500458cbce'
  - test_display_name_09fff7b1-dfd8-4b96-a976-bf500458cbce
  + test_display_name_0f408ccd-66d4-4039-b4a7-2192e443830c
self = <sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun object at 0x0000014D184BF0D0>
pf = <promptflow.azure._pf_client.PFClient object at 0x0000014D21F88160>
randstr = <function randstr.<locals>.generate_random_string at 0x0000014D0FF2A670>

    def test_update_run(self, pf, randstr: Callable[[str], str]):
        run_id = "4cf2d5e9-c78f-4ab8-a3ee-57675f92fb74"
        test_mark = randstr("test_mark")
    
        new_display_name = f"test_display_name_{test_mark}"
        new_description = f"test_description_{test_mark}"
        new_tags = {"test_tag": test_mark}
    
        run = pf.runs.update(
            run=run_id,
            display_name=new_display_name,
            description=new_description,
            tags=new_tags,
        )
        assert run.display_name == new_display_name
        assert run.description == new_description
        assert run.tags["test_tag"] == test_mark
    
        # test wrong type of parameters won't raise error, just log warnings and got ignored
        run = pf.runs.update(
            run=run_id,
            tags={"test_tag": {"a": 1}},
        )
>       assert run.display_name == new_display_name
E       AssertionError: assert 'test_display_name_0f408ccd-66d4-4039-b4a7-2192e443830c' == 'test_display_name_09fff7b1-dfd8-4b96-a976-bf500458cbce'
E         - test_display_name_09fff7b1-dfd8-4b96-a976-bf500458cbce
E         + test_display_name_0f408ccd-66d4-4039-b4a7-2192e443830c

tests\sdk_cli_azure_test\e2etests\test_run_operations.py:403: AssertionError

Check warning on line 0 in tests.sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun

1 out of 12 runs failed: test_input_mapping_with_dict (tests.sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun)

artifacts/Test Results (Python 3.10) (OS macos-latest)/test-results-sdk-cli.xml [took 30s]
Raw output
promptflow.azure._restclient.flow_service_caller.FlowRequestException: Calling submit_bulk_run failed with request id: eb42fc77-6d97-4d1e-8095-1408ca7d7cd1 
Status code: 409 
Reason: Etag conflict on oT0SPqX5kUySNI2f-7Of9ZuYhuzjevBg2tbIy4TdiN1BJuqS0nkeivfWm1MKKBWq/56a29740-1351-4776 
Error message: (TransientError) Etag conflict on oT0SPqX5kUySNI2f-7Of9ZuYhuzjevBg2tbIy4TdiN1BJuqS0nkeivfWm1MKKBWq/56a29740-1351-4776-b431-5adfb1c0c2c7 with etag .
Code: TransientError
Message: Etag conflict on oT0SPqX5kUySNI2f-7Of9ZuYhuzjevBg2tbIy4TdiN1BJuqS0nkeivfWm1MKKBWq/56a29740-1351-4776-b431-5adfb1c0c2c7 with etag .
self = <promptflow.azure._restclient.flow_service_caller.FlowServiceCaller object at 0x126178160>
args = ()
kwargs = {'body': <promptflow.azure._restclient.flow.models._models_py3.SubmitBulkRunRequest object at 0x1268d5ea0>, 'resource_..._name': 'promptflow', 'subscription_id': '96aede12-2f73-41cb-b983-6d11a904839b', 'workspace_name': 'promptflow-eastus'}

    @wraps(func)
    def wrapper(self, *args, **kwargs):
        if not isinstance(self, RequestTelemetryMixin):
            raise PromptflowException(f"Wrapped function is not RequestTelemetryMixin, got {type(self)}")
        # refresh request before each request
        self._refresh_request_id_for_telemetry()
        try:
>           return func(self, *args, **kwargs)

promptflow/azure/_restclient/flow_service_caller.py:61: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
promptflow/azure/_restclient/flow_service_caller.py:438: in submit_bulk_run
    return self.caller.bulk_runs.submit_bulk_run(
/Users/runner/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/azure/core/tracing/decorator.py:78: in wrapper_use_tracer
    return func(*args, **kwargs)
promptflow/azure/_restclient/flow/operations/_bulk_runs_operations.py:402: in submit_bulk_run
    map_error(status_code=response.status_code, response=response, error_map=error_map)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

status_code = 409
response = <RequestsTransportResponse: 409 Etag conflict on oT0SPqX5kUySNI2f-7Of9ZuYhuzjevBg2tbIy4TdiN1BJuqS0nkeivfWm1MKKBWq/56a29740-1351-4776, Content-Type: application/json>
error_map = {401: <class 'azure.core.exceptions.ClientAuthenticationError'>, 404: <class 'azure.core.exceptions.ResourceNotFoundError'>, 409: <class 'azure.core.exceptions.ResourceExistsError'>}

    def map_error(
        status_code: int, response: _HttpResponseCommonAPI, error_map: Mapping[int, Type[HttpResponseError]]
    ) -> None:
        if not error_map:
            return
        error_type = error_map.get(status_code)
        if not error_type:
            return
        error = error_type(response=response)
>       raise error
E       azure.core.exceptions.ResourceExistsError: (TransientError) Etag conflict on oT0SPqX5kUySNI2f-7Of9ZuYhuzjevBg2tbIy4TdiN1BJuqS0nkeivfWm1MKKBWq/56a29740-1351-4776-b431-5adfb1c0c2c7 with etag .
E       Code: TransientError
E       Message: Etag conflict on oT0SPqX5kUySNI2f-7Of9ZuYhuzjevBg2tbIy4TdiN1BJuqS0nkeivfWm1MKKBWq/56a29740-1351-4776-b431-5adfb1c0c2c7 with etag .

/Users/runner/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/azure/core/exceptions.py:165: ResourceExistsError

During handling of the above exception, another exception occurred:

self = <sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun object at 0x117468c40>
pf = <promptflow.azure._pf_client.PFClient object at 0x126bb17e0>
runtime = 'test-runtime-ci'
randstr = <function randstr.<locals>.generate_random_string at 0x126c1a710>

    def test_input_mapping_with_dict(self, pf, runtime: str, randstr: Callable[[str], str]):
        data_path = f"{DATAS_DIR}/webClassification3.jsonl"
    
>       run = pf.run(
            flow=f"{FLOWS_DIR}/flow_with_dict_input",
            data=data_path,
            column_mapping=dict(key={"a": 1}, extra="${data.url}"),
            runtime=runtime,
            name=randstr("name"),
        )

tests/sdk_cli_azure_test/e2etests/test_run_operations.py:609: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
promptflow/azure/_pf_client.py:252: in run
    return self.runs.create_or_update(run=run, **kwargs)
promptflow/_telemetry/activity.py:143: in wrapper
    return f(self, *args, **kwargs)
promptflow/azure/operations/_run_operations.py:231: in create_or_update
    self._service_caller.submit_bulk_run(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <promptflow.azure._restclient.flow_service_caller.FlowServiceCaller object at 0x126178160>
args = ()
kwargs = {'body': <promptflow.azure._restclient.flow.models._models_py3.SubmitBulkRunRequest object at 0x1268d5ea0>, 'resource_..._name': 'promptflow', 'subscription_id': '96aede12-2f73-41cb-b983-6d11a904839b', 'workspace_name': 'promptflow-eastus'}

    @wraps(func)
    def wrapper(self, *args, **kwargs):
        if not isinstance(self, RequestTelemetryMixin):
            raise PromptflowException(f"Wrapped function is not RequestTelemetryMixin, got {type(self)}")
        # refresh request before each request
        self._refresh_request_id_for_telemetry()
        try:
            return func(self, *args, **kwargs)
        except HttpResponseError as e:
>           raise FlowRequestException(
                f"Calling {func.__name__} failed with request id: {self._request_id} \n"
                f"Status code: {e.status_code} \n"
                f"Reason: {e.reason} \n"
                f"Error message: {e.message} \n"
            )
E           promptflow.azure._restclient.flow_service_caller.FlowRequestException: Calling submit_bulk_run failed with request id: eb42fc77-6d97-4d1e-8095-1408ca7d7cd1 
E           Status code: 409 
E           Reason: Etag conflict on oT0SPqX5kUySNI2f-7Of9ZuYhuzjevBg2tbIy4TdiN1BJuqS0nkeivfWm1MKKBWq/56a29740-1351-4776 
E           Error message: (TransientError) Etag conflict on oT0SPqX5kUySNI2f-7Of9ZuYhuzjevBg2tbIy4TdiN1BJuqS0nkeivfWm1MKKBWq/56a29740-1351-4776-b431-5adfb1c0c2c7 with etag .
E           Code: TransientError
E           Message: Etag conflict on oT0SPqX5kUySNI2f-7Of9ZuYhuzjevBg2tbIy4TdiN1BJuqS0nkeivfWm1MKKBWq/56a29740-1351-4776-b431-5adfb1c0c2c7 with etag .

promptflow/azure/_restclient/flow_service_caller.py:63: FlowRequestException
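
The service marks the 409 Etag conflict as a TransientError, but the client raises FlowRequestException on the first attempt. A minimal sketch of retrying the submission when the message flags it as transient; submit_with_retry and the "submit" callable are assumptions for illustration, not promptflow API:

    # Sketch: retry a submission that failed with a transient Etag conflict.
    import time

    def submit_with_retry(submit, attempts: int = 3, delay: float = 5.0):
        for attempt in range(1, attempts + 1):
            try:
                return submit()  # e.g. lambda: pf.run(flow=..., data=..., name=...)
            except Exception as exc:  # FlowRequestException in the failure above
                if "TransientError" not in str(exc) or attempt == attempts:
                    raise
                time.sleep(delay * attempt)  # linear backoff before the next try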

Check warning on line 0 in tests.sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun

1 out of 12 runs failed: test_get_detail_against_partial_fail_run (tests.sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun)

artifacts/Test Results (Python 3.10) (OS ubuntu-latest)/test-results-sdk-cli.xml [took 24s]
Raw output
promptflow.azure._restclient.flow_service_caller.FlowRequestException: Calling submit_bulk_run failed with request id: 830d16cd-ac78-4822-87b2-dfbf10196448 
Status code: 409 
Reason: Etag conflict on oT0SPqX5kUySNI2f-7Of9ezZK6k4eWvFEEE-u0oYPqErETMoYZX5dZvjc5QJstdw/88b011d0-cd14-4837 
Error message: (TransientError) Etag conflict on oT0SPqX5kUySNI2f-7Of9ezZK6k4eWvFEEE-u0oYPqErETMoYZX5dZvjc5QJstdw/88b011d0-cd14-4837-ac21-f338fdc7e809 with etag .
Code: TransientError
Message: Etag conflict on oT0SPqX5kUySNI2f-7Of9ezZK6k4eWvFEEE-u0oYPqErETMoYZX5dZvjc5QJstdw/88b011d0-cd14-4837-ac21-f338fdc7e809 with etag .
self = <promptflow.azure._restclient.flow_service_caller.FlowServiceCaller object at 0x7f119baabf10>
args = ()
kwargs = {'body': <promptflow.azure._restclient.flow.models._models_py3.SubmitBulkRunRequest object at 0x7f11aede1ed0>, 'resour..._name': 'promptflow', 'subscription_id': '96aede12-2f73-41cb-b983-6d11a904839b', 'workspace_name': 'promptflow-eastus'}

    @wraps(func)
    def wrapper(self, *args, **kwargs):
        if not isinstance(self, RequestTelemetryMixin):
            raise PromptflowException(f"Wrapped function is not RequestTelemetryMixin, got {type(self)}")
        # refresh request before each request
        self._refresh_request_id_for_telemetry()
        try:
>           return func(self, *args, **kwargs)

promptflow/azure/_restclient/flow_service_caller.py:61: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
promptflow/azure/_restclient/flow_service_caller.py:438: in submit_bulk_run
    return self.caller.bulk_runs.submit_bulk_run(
/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/azure/core/tracing/decorator.py:78: in wrapper_use_tracer
    return func(*args, **kwargs)
promptflow/azure/_restclient/flow/operations/_bulk_runs_operations.py:402: in submit_bulk_run
    map_error(status_code=response.status_code, response=response, error_map=error_map)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

status_code = 409
response = <RequestsTransportResponse: 409 Etag conflict on oT0SPqX5kUySNI2f-7Of9ezZK6k4eWvFEEE-u0oYPqErETMoYZX5dZvjc5QJstdw/88b011d0-cd14-4837, Content-Type: application/json>
error_map = {401: <class 'azure.core.exceptions.ClientAuthenticationError'>, 404: <class 'azure.core.exceptions.ResourceNotFoundError'>, 409: <class 'azure.core.exceptions.ResourceExistsError'>}

    def map_error(
        status_code: int, response: _HttpResponseCommonAPI, error_map: Mapping[int, Type[HttpResponseError]]
    ) -> None:
        if not error_map:
            return
        error_type = error_map.get(status_code)
        if not error_type:
            return
        error = error_type(response=response)
>       raise error
E       azure.core.exceptions.ResourceExistsError: (TransientError) Etag conflict on oT0SPqX5kUySNI2f-7Of9ezZK6k4eWvFEEE-u0oYPqErETMoYZX5dZvjc5QJstdw/88b011d0-cd14-4837-ac21-f338fdc7e809 with etag .
E       Code: TransientError
E       Message: Etag conflict on oT0SPqX5kUySNI2f-7Of9ezZK6k4eWvFEEE-u0oYPqErETMoYZX5dZvjc5QJstdw/88b011d0-cd14-4837-ac21-f338fdc7e809 with etag .

/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/azure/core/exceptions.py:165: ResourceExistsError

During handling of the above exception, another exception occurred:

self = <sdk_cli_azure_test.e2etests.test_run_operations.TestFlowRun object at 0x7f11b3a2eda0>
pf = <promptflow.azure._pf_client.PFClient object at 0x7f11aed5f940>
runtime = 'test-runtime-ci'
randstr = <function randstr.<locals>.generate_random_string at 0x7f11aeee4af0>

    def test_get_detail_against_partial_fail_run(self, pf, runtime: str, randstr: Callable[[str], str]) -> None:
>       run = pf.run(
            flow=f"{FLOWS_DIR}/partial_fail",
            data=f"{FLOWS_DIR}/partial_fail/data.jsonl",
            runtime=runtime,
            name=randstr("name"),
        )

tests/sdk_cli_azure_test/e2etests/test_run_operations.py:765: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
promptflow/azure/_pf_client.py:252: in run
    return self.runs.create_or_update(run=run, **kwargs)
promptflow/_telemetry/activity.py:143: in wrapper
    return f(self, *args, **kwargs)
promptflow/azure/operations/_run_operations.py:231: in create_or_update
    self._service_caller.submit_bulk_run(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <promptflow.azure._restclient.flow_service_caller.FlowServiceCaller object at 0x7f119baabf10>
args = ()
kwargs = {'body': <promptflow.azure._restclient.flow.models._models_py3.SubmitBulkRunRequest object at 0x7f11aede1ed0>, 'resour..._name': 'promptflow', 'subscription_id': '96aede12-2f73-41cb-b983-6d11a904839b', 'workspace_name': 'promptflow-eastus'}

    @wraps(func)
    def wrapper(self, *args, **kwargs):
        if not isinstance(self, RequestTelemetryMixin):
            raise PromptflowException(f"Wrapped function is not RequestTelemetryMixin, got {type(self)}")
        # refresh request before each request
        self._refresh_request_id_for_telemetry()
        try:
            return func(self, *args, **kwargs)
        except HttpResponseError as e:
>           raise FlowRequestException(
                f"Calling {func.__name__} failed with request id: {self._request_id} \n"
                f"Status code: {e.status_code} \n"
                f"Reason: {e.reason} \n"
                f"Error message: {e.message} \n"
            )
E           promptflow.azure._restclient.flow_service_caller.FlowRequestException: Calling submit_bulk_run failed with request id: 830d16cd-ac78-4822-87b2-dfbf10196448 
E           Status code: 409 
E           Reason: Etag conflict on oT0SPqX5kUySNI2f-7Of9ezZK6k4eWvFEEE-u0oYPqErETMoYZX5dZvjc5QJstdw/88b011d0-cd14-4837 
E           Error message: (TransientError) Etag conflict on oT0SPqX5kUySNI2f-7Of9ezZK6k4eWvFEEE-u0oYPqErETMoYZX5dZvjc5QJstdw/88b011d0-cd14-4837-ac21-f338fdc7e809 with etag .
E           Code: TransientError
E           Message: Etag conflict on oT0SPqX5kUySNI2f-7Of9ezZK6k4eWvFEEE-u0oYPqErETMoYZX5dZvjc5QJstdw/88b011d0-cd14-4837-ac21-f338fdc7e809 with etag .

promptflow/azure/_restclient/flow_service_caller.py:63: FlowRequestException

Check warning on line 0 in tests.executor.unittests._utils.test_exception_utils.TestErrorResponse

3 out of 12 runs failed: test_get_user_execution_error_info[raise_tool_execution_error-ToolExecutionError] (tests.executor.unittests._utils.test_exception_utils.TestErrorResponse)

artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-executor.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-executor.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-executor.xml [took 0s]
Raw output
assert None
 +  where None = <function match at 0x7f61a65a2ca0>('Traceback \\(most recent call last\\):\n  File ".*test_exception_utils.py", line .*, in code_with_bug\n    1 / 0\nZeroDivisionError: division by zero\n', 'Traceback (most recent call last):\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py", line 35, in code_with_bug\n    1 / 0\n    ~~^~~\nZeroDivisionError: division by zero\n')
 +    where <function match at 0x7f61a65a2ca0> = re.match
self = <executor.unittests._utils.test_exception_utils.TestErrorResponse object at 0x7f619b2c8a50>
raise_exception_func = <function raise_tool_execution_error at 0x7f619b0423e0>
error_class = <class 'promptflow._core._errors.ToolExecutionError'>

    @pytest.mark.parametrize(
        "raise_exception_func, error_class",
        [
            (raise_general_exception, CustomizedException),
            (raise_tool_execution_error, ToolExecutionError),
        ],
    )
    def test_get_user_execution_error_info(self, raise_exception_func, error_class):
        with pytest.raises(error_class) as e:
            raise_exception_func()
    
        error_repsonse = ErrorResponse.from_exception(e.value)
        actual_error_info = error_repsonse.get_user_execution_error_info()
>       self.assert_user_execution_error_info(e.value, actual_error_info)

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py:480: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <executor.unittests._utils.test_exception_utils.TestErrorResponse object at 0x7f619b2c8a50>
exception = ToolExecutionError('')
error_info = {'filename': '/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py', 'lineno': 35, 'message': 'division by zero', 'name': 'code_with_bug', ...}

    def assert_user_execution_error_info(self, exception, error_info):
        if isinstance(exception, ToolExecutionError):
            assert error_info["type"] == "ZeroDivisionError"
            assert error_info["message"] == "division by zero"
            assert error_info["filename"].endswith("test_exception_utils.py")
            assert error_info["lineno"] > 0
            assert error_info["name"] == "code_with_bug"
>           assert re.match(TOOL_EXECUTION_ERROR_TRACEBACK, error_info["traceback"])
E           assert None
E            +  where None = <function match at 0x7f61a65a2ca0>('Traceback \\(most recent call last\\):\n  File ".*test_exception_utils.py", line .*, in code_with_bug\n    1 / 0\nZeroDivisionError: division by zero\n', 'Traceback (most recent call last):\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py", line 35, in code_with_bug\n    1 / 0\n    ~~^~~\nZeroDivisionError: division by zero\n')
E            +    where <function match at 0x7f61a65a2ca0> = re.match

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py:489: AssertionError

Check warning on line 0 in tests.executor.unittests._utils.test_exception_utils.TestExceptions

3 out of 12 runs failed: test_tool_execution_error (tests.executor.unittests._utils.test_exception_utils.TestExceptions)

artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-executor.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-executor.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-executor.xml [took 0s]
Raw output
assert None
 +  where None = <function match at 0x7f61a65a2ca0>('Traceback \\(most recent call last\\):\n  File ".*test_exception_utils.py", line .*, in code_with_bug\n    1 / 0\nZeroDivisionError: division by zero\n', 'Traceback (most recent call last):\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py", line 35, in code_with_bug\n    1 / 0\n    ~~^~~\nZeroDivisionError: division by zero\n')
 +    where <function match at 0x7f61a65a2ca0> = re.match
 +    and   'Traceback (most recent call last):\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py", line 35, in code_with_bug\n    1 / 0\n    ~~^~~\nZeroDivisionError: division by zero\n' = ToolExecutionError('').tool_traceback
 +      where ToolExecutionError('') = <ExceptionInfo ToolExecutionError('') tblen=2>.value
self = <executor.unittests._utils.test_exception_utils.TestExceptions object at 0x7f619ad97610>

    def test_tool_execution_error(self):
        with pytest.raises(ToolExecutionError) as e:
            raise_tool_execution_error()
    
        inner_exception = e.value.inner_exception
        assert isinstance(inner_exception, ZeroDivisionError)
        assert str(inner_exception) == "division by zero"
        assert e.value.message == "Execution failure in 'MyTool': (ZeroDivisionError) division by zero"
    
        last_frame_info = e.value.tool_last_frame_info
        assert "test_exception_utils.py" in last_frame_info.get("filename")
        assert last_frame_info.get("lineno") > 0
        assert last_frame_info.get("name") == "code_with_bug"
    
>       assert re.match(TOOL_EXECUTION_ERROR_TRACEBACK, e.value.tool_traceback)
E       assert None
E        +  where None = <function match at 0x7f61a65a2ca0>('Traceback \\(most recent call last\\):\n  File ".*test_exception_utils.py", line .*, in code_with_bug\n    1 / 0\nZeroDivisionError: division by zero\n', 'Traceback (most recent call last):\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py", line 35, in code_with_bug\n    1 / 0\n    ~~^~~\nZeroDivisionError: division by zero\n')
E        +    where <function match at 0x7f61a65a2ca0> = re.match
E        +    and   'Traceback (most recent call last):\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py", line 35, in code_with_bug\n    1 / 0\n    ~~^~~\nZeroDivisionError: division by zero\n' = ToolExecutionError('').tool_traceback
E        +      where ToolExecutionError('') = <ExceptionInfo ToolExecutionError('') tblen=2>.value

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py:654: AssertionError

Check warning on line 0 in tests.executor.unittests._utils.test_exception_utils.TestExceptions

3 out of 12 runs failed: test_additional_info (tests.executor.unittests._utils.test_exception_utils.TestExceptions)

artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-executor.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-executor.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-executor.xml [took 0s]
Raw output
assert None
 +  where None = <function match at 0x7f61a65a2ca0>('Traceback \\(most recent call last\\):\n  File ".*test_exception_utils.py", line .*, in code_with_bug\n    1 / 0\nZeroDivisionError: division by zero\n', 'Traceback (most recent call last):\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py", line 35, in code_with_bug\n    1 / 0\n    ~~^~~\nZeroDivisionError: division by zero\n')
 +    where <function match at 0x7f61a65a2ca0> = re.match
 +    and   'Traceback (most recent call last):\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py", line 35, in code_with_bug\n    1 / 0\n    ~~^~~\nZeroDivisionError: division by zero\n' = <built-in method get of dict object at 0x7f618e476e00>('traceback')
 +      where <built-in method get of dict object at 0x7f618e476e00> = {'filename': '/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py', 'lineno': 35, 'message': 'division by zero', 'name': 'code_with_bug', ...}.get
self = <executor.unittests._utils.test_exception_utils.TestExceptions object at 0x7f619ad94210>

    def test_additional_info(self):
        with pytest.raises(ToolExecutionError) as e:
            raise_tool_execution_error()
    
        additional_info = ExceptionPresenter.create(e.value).to_dict().get("additionalInfo")
        assert len(additional_info) == 1
    
        info_0 = additional_info[0]
        assert info_0["type"] == "ToolExecutionErrorDetails"
        info_0_value = info_0["info"]
        assert info_0_value.get("type") == "ZeroDivisionError"
        assert info_0_value.get("message") == "division by zero"
        assert re.match(r".*test_exception_utils.py", info_0_value["filename"])
        assert info_0_value.get("lineno") > 0
        assert info_0_value.get("name") == "code_with_bug"
>       assert re.match(TOOL_EXECUTION_ERROR_TRACEBACK, info_0_value.get("traceback"))
E       assert None
E        +  where None = <function match at 0x7f61a65a2ca0>('Traceback \\(most recent call last\\):\n  File ".*test_exception_utils.py", line .*, in code_with_bug\n    1 / 0\nZeroDivisionError: division by zero\n', 'Traceback (most recent call last):\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py", line 35, in code_with_bug\n    1 / 0\n    ~~^~~\nZeroDivisionError: division by zero\n')
E        +    where <function match at 0x7f61a65a2ca0> = re.match
E        +    and   'Traceback (most recent call last):\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py", line 35, in code_with_bug\n    1 / 0\n    ~~^~~\nZeroDivisionError: division by zero\n' = <built-in method get of dict object at 0x7f618e476e00>('traceback')
E        +      where <built-in method get of dict object at 0x7f618e476e00> = {'filename': '/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py', 'lineno': 35, 'message': 'division by zero', 'name': 'code_with_bug', ...}.get

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_exception_utils.py:699: AssertionError
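
All three Python 3.11-only failures share one cause: tracebacks now include the PEP 657 fine-grained location carets ("    ~~^~~") under the offending line, so the strict TOOL_EXECUTION_ERROR_TRACEBACK pattern from the test no longer matches. A minimal sketch of a pattern that tolerates the optional caret line (the constant name comes from the test; the widened regex is an assumption, not the repository's fix):

    # Sketch: accept the optional caret line emitted by Python 3.11+.
    import re

    TOOL_EXECUTION_ERROR_TRACEBACK = (
        r"Traceback \(most recent call last\):\n"
        r'  File ".*test_exception_utils.py", line .*, in code_with_bug\n'
        r"    1 / 0\n"
        r"(    [~^]+\n)?"  # e.g. "    ~~^~~" on Python 3.11+
        r"ZeroDivisionError: division by zero\n"
    )

    traceback_text = (
        "Traceback (most recent call last):\n"
        '  File "/tmp/test_exception_utils.py", line 35, in code_with_bug\n'
        "    1 / 0\n"
        "    ~~^~~\n"
        "ZeroDivisionError: division by zero\n"
    )
    assert re.match(TOOL_EXECUTION_ERROR_TRACEBACK, traceback_text)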

Check warning on line 0 in tests.executor.unittests._utils.test_generate_tool_meta_utils.TestToolMetaUtils

3 out of 12 runs failed: test_generate_tool_meta_dict_by_file_exception[MetaFileNotFound] (tests.executor.unittests._utils.test_generate_tool_meta_utils.TestToolMetaUtils)

artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-executor.xml [took 1s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-executor.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-executor.xml [took 2s]
Raw output
assert None
 +  where None = <function match at 0x7f786f3a2ca0>("\\(MetaFileNotFound\\) Generate tool meta failed for python tool. Meta file '.*aaa.py' can not be found.", "(MetaFileNotFound) Generate tool meta failed for ToolType.PYTHON tool. Meta file '/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/script_with_import/aaa.py' can not be found.")
 +    where <function match at 0x7f786f3a2ca0> = re.match
self = <executor.unittests._utils.test_generate_tool_meta_utils.TestToolMetaUtils object at 0x7f78638558d0>
flow_dir = 'script_with_import', tool_path = 'aaa.py', tool_type = 'python'
func = <function cd_and_run at 0x7f7863bd5080>
msg_pattern = "\\(MetaFileNotFound\\) Generate tool meta failed for python tool. Meta file '.*aaa.py' can not be found."

    @pytest.mark.parametrize(
        "flow_dir, tool_path, tool_type, func, msg_pattern",
        [
            pytest.param(
                "prompt_tools",
                "summarize_text_content_prompt.jinja2",
                "python",
                cd_and_run,
                r"\(PythonLoaderNotFound\) Failed to load python file '.*summarize_text_content_prompt.jinja2'. "
                r"Please make sure it is a valid .py file.",
                id="PythonLoaderNotFound",
            ),
            pytest.param(
                "script_with_import",
                "fail.py",
                "python",
                cd_and_run,
                r"\(PythonLoadError\) Failed to load python module from file '.*fail.py': "
                r"\(ModuleNotFoundError\) No module named 'aaa'",
                id="PythonLoadError",
            ),
            pytest.param(
                "simple_flow_with_python_tool",
                "divide_num.py",
                "python",
                cd_and_run_with_bad_function_interface,
                r"\(BadFunctionInterface\) Parse interface for tool 'divide_num' failed: "
                r"\(Exception\) Mock function to interface error.",
                id="BadFunctionInterface",
            ),
            pytest.param(
                "script_with_import",
                "aaa.py",
                "python",
                cd_and_run,
                r"\(MetaFileNotFound\) Generate tool meta failed for python tool. "
                r"Meta file '.*aaa.py' can not be found.",
                id="MetaFileNotFound",
            ),
            pytest.param(
                "simple_flow_with_python_tool",
                "divide_num.py",
                "python",
                cd_and_run_with_read_text_error,
                r"\(MetaFileReadError\) Generate tool meta failed for python tool. "
                r"Read meta file '.*divide_num.py' failed: \(Exception\) Mock read text error.",
                id="MetaFileReadError",
            ),
            pytest.param(
                "simple_flow_with_python_tool",
                "divide_num.py",
                "action",
                cd_and_run,
                r"\(NotSupported\) Generate tool meta failed. The type 'action' is currently unsupported. "
                r"Please choose from available types: python,llm,prompt and try again.",
                id="NotSupported",
            ),
        ],
    )
    def test_generate_tool_meta_dict_by_file_exception(self, flow_dir, tool_path, tool_type, func, msg_pattern):
        wd = str((FLOW_ROOT / flow_dir).resolve())
        ret = generate_tool_meta_dict_by_file_with_cd(wd, tool_path, tool_type, func)
        assert isinstance(ret, str), "Call cd_and_run should fail but succeeded:\n" + str(ret)
>       assert re.match(msg_pattern, ret)
E       assert None
E        +  where None = <function match at 0x7f786f3a2ca0>("\\(MetaFileNotFound\\) Generate tool meta failed for python tool. Meta file '.*aaa.py' can not be found.", "(MetaFileNotFound) Generate tool meta failed for ToolType.PYTHON tool. Meta file '/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/script_with_import/aaa.py' can not be found.")
E        +    where <function match at 0x7f786f3a2ca0> = re.match

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_generate_tool_meta_utils.py:154: AssertionError
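
Note: these failures hit only the Python 3.11 artifacts, and the expected text "for python tool" comes back as "for ToolType.PYTHON tool". That is consistent with the Python 3.11 change to Enum.__format__(), which makes f-strings and str.format() of a plain str-mixin enum render the qualified member name instead of its value. A minimal sketch of the behavior follows; `ToolType` here is a stand-in, since promptflow's real definition is not shown in this log.

```python
from enum import Enum


class ToolType(str, Enum):  # stand-in for promptflow's ToolType
    PYTHON = "python"


message = f"Generate tool meta failed for {ToolType.PYTHON} tool."
# Python 3.10: "Generate tool meta failed for python tool."
# Python 3.11: "Generate tool meta failed for ToolType.PYTHON tool."
print(message)

# Interpolating the value explicitly gives the same text on every version:
print(f"Generate tool meta failed for {ToolType.PYTHON.value} tool.")
```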

Check warning on line 0 in tests.executor.unittests._utils.test_generate_tool_meta_utils.TestToolMetaUtils

3 out of 12 runs failed: test_generate_tool_meta_dict_by_file_exception[MetaFileReadError] (tests.executor.unittests._utils.test_generate_tool_meta_utils.TestToolMetaUtils)

artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-executor.xml [took 3s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-executor.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-executor.xml [took 1s]
Raw output
assert None
 +  where None = <function match at 0x7f786f3a2ca0>("\\(MetaFileReadError\\) Generate tool meta failed for python tool. Read meta file '.*divide_num.py' failed: \\(Exception\\) Mock read text error.", "(MetaFileReadError) Generate tool meta failed for ToolType.PYTHON tool. Read meta file '/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/simple_flow_with_python_tool/divide_num.py' failed: (Exception) Mock read text error.")
 +    where <function match at 0x7f786f3a2ca0> = re.match
self = <executor.unittests._utils.test_generate_tool_meta_utils.TestToolMetaUtils object at 0x7f7863855d90>
flow_dir = 'simple_flow_with_python_tool', tool_path = 'divide_num.py'
tool_type = 'python'
func = <function cd_and_run_with_read_text_error at 0x7f7863bd5120>
msg_pattern = "\\(MetaFileReadError\\) Generate tool meta failed for python tool. Read meta file '.*divide_num.py' failed: \\(Exception\\) Mock read text error."

    @pytest.mark.parametrize(
        "flow_dir, tool_path, tool_type, func, msg_pattern",
        [
            pytest.param(
                "prompt_tools",
                "summarize_text_content_prompt.jinja2",
                "python",
                cd_and_run,
                r"\(PythonLoaderNotFound\) Failed to load python file '.*summarize_text_content_prompt.jinja2'. "
                r"Please make sure it is a valid .py file.",
                id="PythonLoaderNotFound",
            ),
            pytest.param(
                "script_with_import",
                "fail.py",
                "python",
                cd_and_run,
                r"\(PythonLoadError\) Failed to load python module from file '.*fail.py': "
                r"\(ModuleNotFoundError\) No module named 'aaa'",
                id="PythonLoadError",
            ),
            pytest.param(
                "simple_flow_with_python_tool",
                "divide_num.py",
                "python",
                cd_and_run_with_bad_function_interface,
                r"\(BadFunctionInterface\) Parse interface for tool 'divide_num' failed: "
                r"\(Exception\) Mock function to interface error.",
                id="BadFunctionInterface",
            ),
            pytest.param(
                "script_with_import",
                "aaa.py",
                "python",
                cd_and_run,
                r"\(MetaFileNotFound\) Generate tool meta failed for python tool. "
                r"Meta file '.*aaa.py' can not be found.",
                id="MetaFileNotFound",
            ),
            pytest.param(
                "simple_flow_with_python_tool",
                "divide_num.py",
                "python",
                cd_and_run_with_read_text_error,
                r"\(MetaFileReadError\) Generate tool meta failed for python tool. "
                r"Read meta file '.*divide_num.py' failed: \(Exception\) Mock read text error.",
                id="MetaFileReadError",
            ),
            pytest.param(
                "simple_flow_with_python_tool",
                "divide_num.py",
                "action",
                cd_and_run,
                r"\(NotSupported\) Generate tool meta failed. The type 'action' is currently unsupported. "
                r"Please choose from available types: python,llm,prompt and try again.",
                id="NotSupported",
            ),
        ],
    )
    def test_generate_tool_meta_dict_by_file_exception(self, flow_dir, tool_path, tool_type, func, msg_pattern):
        wd = str((FLOW_ROOT / flow_dir).resolve())
        ret = generate_tool_meta_dict_by_file_with_cd(wd, tool_path, tool_type, func)
        assert isinstance(ret, str), "Call cd_and_run should fail but succeeded:\n" + str(ret)
>       assert re.match(msg_pattern, ret)
E       assert None
E        +  where None = <function match at 0x7f786f3a2ca0>("\\(MetaFileReadError\\) Generate tool meta failed for python tool. Read meta file '.*divide_num.py' failed: \\(Exception\\) Mock read text error.", "(MetaFileReadError) Generate tool meta failed for ToolType.PYTHON tool. Read meta file '/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/simple_flow_with_python_tool/divide_num.py' failed: (Exception) Mock read text error.")
E        +    where <function match at 0x7f786f3a2ca0> = re.match

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_generate_tool_meta_utils.py:154: AssertionError
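
The MetaFileReadError case fails for the same reason. One possible fix at the enum definition itself, rather than at every call site, is to pin __str__ and __format__ to the str implementations, which is effectively what enum.StrEnum does on 3.11+. This is a sketch under the assumption that ToolType is a (str, Enum) mixin; it is not promptflow's actual definition.

```python
from enum import Enum


class ToolType(str, Enum):
    """Stand-in for promptflow's ToolType with version-stable text rendering."""

    # Render members as their value in str(), format(), f-strings and %-formatting
    # on both Python 3.10 and 3.11+ (the same effect enum.StrEnum provides).
    __str__ = str.__str__
    __format__ = str.__format__

    PYTHON = "python"
    LLM = "llm"
    PROMPT = "prompt"


assert f"{ToolType.PYTHON}" == "python"
assert "Read meta file '%s' failed." % ToolType.PYTHON == "Read meta file 'python' failed."
```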

Check warning on line 0 in tests.executor.unittests._utils.test_generate_tool_meta_utils.TestToolMetaUtils

3 out of 12 runs failed: test_generate_tool_meta_dict_by_file_exception[NotSupported] (tests.executor.unittests._utils.test_generate_tool_meta_utils.TestToolMetaUtils)

artifacts/Test Results (Python 3.11) (OS macos-latest)/test-results-executor.xml [took 2s]
artifacts/Test Results (Python 3.11) (OS ubuntu-latest)/test-results-executor.xml [took 0s]
artifacts/Test Results (Python 3.11) (OS windows-latest)/test-results-executor.xml [took 2s]
Raw output
assert None
 +  where None = <function match at 0x7f786f3a2ca0>("\\(NotSupported\\) Generate tool meta failed. The type 'action' is currently unsupported. Please choose from available types: python,llm,prompt and try again.", "(NotSupported) Generate tool meta failed. The type 'ToolType._ACTION' is currently unsupported. Please choose from available types: python,llm,prompt and try again.")
 +    where <function match at 0x7f786f3a2ca0> = re.match
self = <executor.unittests._utils.test_generate_tool_meta_utils.TestToolMetaUtils object at 0x7f7863854590>
flow_dir = 'simple_flow_with_python_tool', tool_path = 'divide_num.py'
tool_type = 'action', func = <function cd_and_run at 0x7f7863bd5080>
msg_pattern = "\\(NotSupported\\) Generate tool meta failed. The type 'action' is currently unsupported. Please choose from available types: python,llm,prompt and try again."

    @pytest.mark.parametrize(
        "flow_dir, tool_path, tool_type, func, msg_pattern",
        [
            pytest.param(
                "prompt_tools",
                "summarize_text_content_prompt.jinja2",
                "python",
                cd_and_run,
                r"\(PythonLoaderNotFound\) Failed to load python file '.*summarize_text_content_prompt.jinja2'. "
                r"Please make sure it is a valid .py file.",
                id="PythonLoaderNotFound",
            ),
            pytest.param(
                "script_with_import",
                "fail.py",
                "python",
                cd_and_run,
                r"\(PythonLoadError\) Failed to load python module from file '.*fail.py': "
                r"\(ModuleNotFoundError\) No module named 'aaa'",
                id="PythonLoadError",
            ),
            pytest.param(
                "simple_flow_with_python_tool",
                "divide_num.py",
                "python",
                cd_and_run_with_bad_function_interface,
                r"\(BadFunctionInterface\) Parse interface for tool 'divide_num' failed: "
                r"\(Exception\) Mock function to interface error.",
                id="BadFunctionInterface",
            ),
            pytest.param(
                "script_with_import",
                "aaa.py",
                "python",
                cd_and_run,
                r"\(MetaFileNotFound\) Generate tool meta failed for python tool. "
                r"Meta file '.*aaa.py' can not be found.",
                id="MetaFileNotFound",
            ),
            pytest.param(
                "simple_flow_with_python_tool",
                "divide_num.py",
                "python",
                cd_and_run_with_read_text_error,
                r"\(MetaFileReadError\) Generate tool meta failed for python tool. "
                r"Read meta file '.*divide_num.py' failed: \(Exception\) Mock read text error.",
                id="MetaFileReadError",
            ),
            pytest.param(
                "simple_flow_with_python_tool",
                "divide_num.py",
                "action",
                cd_and_run,
                r"\(NotSupported\) Generate tool meta failed. The type 'action' is currently unsupported. "
                r"Please choose from available types: python,llm,prompt and try again.",
                id="NotSupported",
            ),
        ],
    )
    def test_generate_tool_meta_dict_by_file_exception(self, flow_dir, tool_path, tool_type, func, msg_pattern):
        wd = str((FLOW_ROOT / flow_dir).resolve())
        ret = generate_tool_meta_dict_by_file_with_cd(wd, tool_path, tool_type, func)
        assert isinstance(ret, str), "Call cd_and_run should fail but succeeded:\n" + str(ret)
>       assert re.match(msg_pattern, ret)
E       assert None
E        +  where None = <function match at 0x7f786f3a2ca0>("\\(NotSupported\\) Generate tool meta failed. The type 'action' is currently unsupported. Please choose from available types: python,llm,prompt and try again.", "(NotSupported) Generate tool meta failed. The type 'ToolType._ACTION' is currently unsupported. Please choose from available types: python,llm,prompt and try again.")
E        +    where <function match at 0x7f786f3a2ca0> = re.match

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/unittests/_utils/test_generate_tool_meta_utils.py:154: AssertionError
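
The NotSupported case shows the same symptom on the input side: the parametrized tool_type "action" reaches the error message as "ToolType._ACTION". If the intent were only to unblock the test matrix while keeping the current messages, a test-side workaround could loosen the expected pattern to accept either rendering. This is hypothetical; making the message formatting version-stable, as sketched above, is likely the better option.

```python
import re

# Hypothetical tolerant pattern for the NotSupported parametrize case: accept the
# pre-3.11 rendering ("action") as well as the 3.11 rendering ("ToolType._ACTION").
msg_pattern = (
    r"\(NotSupported\) Generate tool meta failed. "
    r"The type '(?:action|ToolType\._ACTION)' is currently unsupported. "
    r"Please choose from available types: python,llm,prompt and try again."
)

sample = (
    "(NotSupported) Generate tool meta failed. The type 'ToolType._ACTION' is "
    "currently unsupported. Please choose from available types: python,llm,prompt and try again."
)
assert re.match(msg_pattern, sample)
```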