chore: reduce warnings when building docs (#5520)
AnneYang720 authored Dec 15, 2022
1 parent f854d5d commit 3cc4762
Showing 28 changed files with 46 additions and 49 deletions.
4 changes: 2 additions & 2 deletions docs/cloud-nativeness/k8s.md
Original file line number Diff line number Diff line change
@@ -101,7 +101,7 @@ Jina supports two types of scaling:
- **Replicas** can be used with any Executor type and are typically used for performance and availability.
- **Shards** are used for partitioning data and should only be used with indexers since they store state.

Check {ref}`here <scale-out>` for more information about these scaling mechanisms.
Check {ref}`here <flow-complex-topologies>` for more information about these scaling mechanisms.

For shards, Jina creates one separate Deployment in Kubernetes per Shard.
Setting `f.add(..., shards=num_shards)` is sufficient to create a corresponding Kubernetes configuration.
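For illustration, the same shard setup can be sketched in a Flow YAML file (the Executor name and `uses` value here are hypothetical):

```yaml
jtype: Flow
executors:
  - name: indexer     # hypothetical Executor name
    uses: MyIndexer   # hypothetical Executor class
    shards: 2         # results in one Kubernetes Deployment per shard
```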
@@ -120,7 +120,7 @@ If you want to learn more about this limitation, see [this](https://kubernetes.i
The {ref}`Gateway <flow>` is responsible for providing the API of the {ref}`Flow <flow>`.
If you have a large Flow with many Clients and many replicated Executors, the Gateway can become the bottleneck.
In this case you can also scale up the Gateway deployment to be backed by multiple Kubernetes Pods.
This is done by the regular means of Kubernetes: Either increase the number of replicas in the {ref}`generated yaml configuration files <kubernetes-deploy>` or [add replicas while running](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#scaling-a-deployment).
This is done by the regular means of Kubernetes: Either increase the number of replicas in the {ref}`generated yaml configuration files <kubernetes-export>` or [add replicas while running](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#scaling-a-deployment).
To expose your Gateway replicas outside Kubernetes, you can add a load balancer as described {ref}`here <kubernetes-expose>`.
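Assuming the generated Gateway Deployment is named `gateway` and lives in a namespace named after the Flow (both names are assumptions here; check your generated configuration), scaling at runtime could look like:

```shell
# Hypothetical names: adjust the Deployment and namespace to your setup
kubectl scale deployment gateway --replicas=3 -n my-flow
```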

````{admonition} Hint
2 changes: 1 addition & 1 deletion docs/cloud-nativeness/kubernetes.md
@@ -319,4 +319,4 @@ In short, there are just three key steps to deploy a Jina Flow on Kubernetes:
- {ref}`Kubernetes support documentation <kubernetes-docs>`
- {ref}`Monitor the Flow once it is deployed <monitoring>`
- {ref}`See how failures and retries are handled <flow-error-handling>`
- {ref}`Learn more about scaling Executors <scale-out>`
- {ref}`Learn more about scaling Executors <flow-complex-topologies>`
5 changes: 2 additions & 3 deletions docs/cloud-nativeness/monitoring.md
@@ -8,7 +8,7 @@ The Prometheus-only based feature will soon be deprecated in favor of the OpenTe
Refer to the {ref}`OpenTelemetry migration guide <opentelemetry-migration>` for updating your existing Prometheus and Grafana configurations.
```

We recommend the Prometheus/Grafana stack to leverage the {ref}`metrics <monitoring-flow>` exposed by Jina. In this setup, Jina exposes different {ref}`metrics endpoints <monitoring-flow>`, and Prometheus scrapes these endpoints, as well as
We recommend the Prometheus/Grafana stack to leverage the metrics exposed by Jina. In this setup, Jina exposes different metrics, and Prometheus scrapes these endpoints, as well as
collecting, aggregating, and storing the metrics.

External entities (like Grafana) can access these aggregated metrics via the query language [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/) and let users visualize the metrics with dashboards.
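For example, once Prometheus scrapes the Flow, a PromQL query of the following shape could chart the request rate (the metric name here is an assumption; check your `/metrics` endpoint for the exact names exposed by your Jina version):

```
# Hypothetical metric name: per-second rate of received requests over 5 minutes
rate(jina_receiving_request_seconds_count[5m])
```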
@@ -204,6 +204,5 @@ client.search(inputs=DocumentArray.empty(size=4))

## See also

- {ref}`List of available metrics <monitoring-flow>`
- [Using Grafana to visualize Prometheus metrics](https://grafana.com/docs/grafana/latest/getting-started/getting-started-prometheus/)
- {ref}`Defining custom metrics in an Executor <monitoring-executor>`
- {ref}`Defining custom metrics in an Executor <monitoring>`
1 change: 0 additions & 1 deletion docs/cloud-nativeness/opentelemetry-migration.md
@@ -36,6 +36,5 @@ To adapt Prometheus queries in Grafana:
You can download a [sample Grafana dashboard JSON file](https://github.com/jina-ai/example-grafana-prometheus/blob/main/grafana-dashboards/flow-histogram-metrics.json) and import it into Grafana to get started with some pre-built graphs.

```{hint}
A list of available metrics which will soon be deprecated is in the {ref}`Flow Monitoring <monitoring-flow>` section.
A list of available metrics is in the {ref}`Flow Instrumentation <instrumenting-flow>` section.
```
2 changes: 1 addition & 1 deletion docs/cloud-nativeness/opentelemetry.md
@@ -178,4 +178,4 @@ To update your existing Prometheus and Grafana configurations, refer to the {ref

## JCloud Support

JCloud doesn't currently support OpenTelemetry. We'll make these features available soon. Until then, you can use the deprecated Prometheus-based {ref}`monitoring setup <monitoring-flow>`.
JCloud doesn't currently support OpenTelemetry. We'll make these features available soon. Until then, you can use the deprecated Prometheus-based {ref}`monitoring setup <monitoring>`.
4 changes: 2 additions & 2 deletions docs/concepts/client/index.md
@@ -175,7 +175,7 @@ client.post(..., compression='Gzip')

Note that this setting only affects the communication between the client and the Flow's gateway.

One can also specify the compression of the internal communication {ref}`as described here<serve-compress>`.
One can also specify the compression of the internal communication {ref}`as described here<server-compress>`.



@@ -188,7 +188,7 @@ One can also specify the compression of the internal communication {ref}`as desc

## Simple profiling of the latency

Before sending any real data, you can test the connectivity and network latency by calling the {meth}`~jina.Client.profiling` method:
Before sending any real data, you can test the connectivity and network latency by calling the {meth}`~jina.clients.mixin.ProfileMixin.profiling` method:

```python
from jina import Client
2 changes: 1 addition & 1 deletion docs/concepts/client/send-graphql-mutation.md
@@ -28,7 +28,7 @@ Note that `response` here is `Dict` not a `DocumentArray`. This is because Graph
The Flow GraphQL API exposes the mutation `docs`, which sends its inputs to the Flow's Executors,
just like HTTP `post` as described {ref}`above <http-interface>`.

A GraphQL mutation takes the same set of arguments used in [HTTP](#arguments).
A GraphQL mutation takes the same set of arguments used in {ref}`HTTP <http-arguments>`.

The response from GraphQL can include all fields available on a DocumentArray.

2 changes: 1 addition & 1 deletion docs/concepts/client/send-receive-data.md
@@ -212,7 +212,7 @@ Refer to the gRPC [Performance Best Practices](https://grpc.io/docs/guides/perfo

## Returns

{meth}`~jina.clients.mixin.PostMixin.post` returns a `DocumentArray` containing all Documents flattened over all Requests. When setting `return_responses=True`, this behavior is changed to returning a list of {class}`~jina.types.request.Response` objects.
{meth}`~jina.clients.mixin.PostMixin.post` returns a `DocumentArray` containing all Documents flattened over all Requests. When setting `return_responses=True`, this behavior is changed to returning a list of {class}`~jina.types.request.data.Response` objects.

If a callback function is provided, `client.post()` returns `None`.

6 changes: 4 additions & 2 deletions docs/concepts/client/third-party-clients.md
@@ -15,6 +15,7 @@ A big thanks to our community member [Jonathan Rowley](https://jina-ai.slack.com

A big thanks to our community member [Peter Willemsen](https://jina-ai.slack.com/team/U03R0KNBK98) for developing a [Kotlin client](https://github.com/peterwilli/JinaKotlin) for Jina!

(http-interface)=
## HTTP

```{admonition} Available Protocols
@@ -29,6 +30,7 @@ You can always use `post` to interact with a Flow, using the `/post` HTTP endpoi

With the help of [OpenAPI schema](https://swagger.io/specification/), one can send data requests to a Flow via `cURL`, JavaScript, [Postman](https://www.postman.com/), or any other HTTP client or programming library.
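As a sketch of what such a request looks like from plain Python (the port, endpoint, and Document fields below are assumptions for illustration):

```python
import json
import urllib.request

# Hypothetical Flow address; adjust host and port to your deployment
url = 'http://localhost:12345/post'

payload = {
    'data': [{'text': 'hello world'}],  # Documents as plain JSON
    'execEndpoint': '/search',          # Executor endpoint to invoke
}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={'Content-Type': 'application/json'},
    method='POST',
)
# urllib.request.urlopen(request) would send it to a running Flow
```

The same payload works unchanged from `cURL` or Postman.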

(http-arguments)=
### Arguments

Your HTTP request can include the following parameters:
@@ -239,7 +241,7 @@ Below, in `Responses`, you can see the reply, together with a visual representat

We provide a [suite of templates for Jina Flow](https://github.com/jina-ai/jina/tree/master/.github/Jina.postman_collection.json). You can import it in Postman in **Collections**, with the **Import** button. It provides templates for the main operations. You need to create an Environment to define the `{{url}}` and `{{port}}` environment variables. These would be the hostname and the port where the Flow is listening.

This contribution was made by [Jonathan Rowley](https://jina-ai.slack.com/archives/C0169V26ATY/p1649689443888779?thread_ts=1649428823.420879&cid=C0169V26ATY), in our [community Slack](slack.jina.ai).
This contribution was made by [Jonathan Rowley](https://jina-ai.slack.com/archives/C0169V26ATY/p1649689443888779?thread_ts=1649428823.420879&cid=C0169V26ATY), in our [community Slack](https://slack.jina.ai).

## gRPC

@@ -291,7 +293,7 @@ response = endpoint(mut)

WebSocket uses persistent connections between the client and Flow, hence allowing streaming use cases.
While you can always use the Python client to stream requests like any other protocol, WebSocket allows streaming JSON from anywhere (CLI / Postman / any other programming language).
You can use the same set of arguments as [HTTP](#arguments) in the payload.
You can use the same set of arguments as {ref}`HTTP <http-arguments>` in the payload.

We use [subprotocols](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_servers#subprotocols) to separate streaming JSON vs bytes.
The Flow defaults to `json` if you don't specify a sub-protocol while establishing the connection (Our Python client uses `bytes` streaming by using [jina.proto](../../proto/docs.md) definition).
13 changes: 6 additions & 7 deletions docs/concepts/executor/basics.md
@@ -130,7 +130,7 @@ The `.workspace` property contains the path to this workspace.
This `workspace` is based on the workspace passed when adding the Executor: `flow.add(..., workspace='path/to/workspace/')`.
The final `workspace` is generated by appending `'/<executor_name>/<shard_id>/'`.

This can be provided to the Executor via the {ref}`Python or YAML API <executor-api>`.
This can be provided to the Executor via the Python or {ref}`YAML API <executor-yaml-spec>`.
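As a sketch of the convention (this helper is hypothetical, not part of Jina's API):

```python
import os

def final_workspace(workspace: str, executor_name: str, shard_id: int) -> str:
    # Mirrors the documented convention of appending '/<executor_name>/<shard_id>/'
    return os.path.join(workspace, executor_name, str(shard_id))

print(final_workspace('path/to/workspace', 'my_executor', 0))
# e.g. 'path/to/workspace/my_executor/0' on POSIX systems
```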

`````{dropdown} Default workspace
@@ -142,24 +142,24 @@ If you haven't provided a workspace, the Executor uses a default workspace, defi

By default, an Executor object contains {attr}`~.jina.serve.executors.BaseExecutor.requests` as an attribute when loaded from the Flow. This attribute is a `Dict` describing the mapping between Executor methods and network endpoints: It holds endpoint strings as keys, and pointers to functions as values.

These can be provided to the Executor via the {ref}`Python or YAML API <executor-api>`.
These can be provided to the Executor via the Python or {ref}`YAML API <executor-yaml-spec>`.

(executor-metas)=
### `metas`

An Executor object contains {attr}`~.jina.serve.executors.BaseExecutor.metas` as an attribute when loaded from the Flow. It is of [`SimpleNamespace`](https://docs.python.org/3/library/types.html#types.SimpleNamespace) type and contains some key-value information.
An Executor object contains `metas` as an attribute when loaded from the Flow. It is of [`SimpleNamespace`](https://docs.python.org/3/library/types.html#types.SimpleNamespace) type and contains some key-value information.

The list of `metas` is:

- `name`: Name given to the Executor;
- `description`: Description of the Executor (optional, reserved for future use in auto-docs);


These can be provided to the Executor via the {ref}`Python or YAML API <executor-api>`.
These can be provided to the Executor via the Python or {ref}`YAML API <executor-yaml-spec>`.

### `runtime_args`

By default, an Executor object contains {attr}`~.jina.serve.executors.BaseExecutor.runtime_args` as an attribute when loaded from the Flow. It is of [`SimpleNamespace`](https://docs.python.org/3/library/types.html#types.SimpleNamespace) type and contains information in key-value format.
By default, an Executor object contains `runtime_args` as an attribute when loaded from the Flow. It is of [`SimpleNamespace`](https://docs.python.org/3/library/types.html#types.SimpleNamespace) type and contains information in key-value format.
As the name suggests, `runtime_args` are dynamically determined during runtime, meaning that you don't know the value before running the Executor. These values are often related to the system/network environment around the Executor, and less about the Executor itself, like `shard_id` and `replicas`. They are usually set with the {meth}`~jina.orchestrate.flow.base.Flow.add` method.

The list of `runtime_args` is:
@@ -175,7 +175,6 @@ You **cannot** provide these through any API. They are generated by the Flow orc

## See further

- {ref}`Executor in Flow <executor-in-flow>`
- {ref}`Debugging an Executor <debug-executor>`
- {ref}`Using an Executor on a GPU <gpu-executor>`
- {ref}`How to use external Executors <external-executor>`
- {ref}`How to use external Executors <external-executors>`
4 changes: 2 additions & 2 deletions docs/concepts/executor/containerize.md
@@ -57,7 +57,7 @@ When a containerized Executor is run inside a Flow,
Jina executes `docker run` with extra arguments under the hood.

This means that Jina assumes that whatever runs inside the container also runs like it would in a regular OS process. Therefore, ensure that
the basic entrypoint of the image calls `jina executor` {ref}`CLI <../api/jina_cli>` command.
the basic entrypoint of the image calls the `jina executor` [CLI](../../api/jina_cli.rst) command.

```dockerfile
ENTRYPOINT ["jina", "executor", "--uses", "config.yml"]
@@ -100,7 +100,7 @@ The YAML configuration, as a minimal working example, is required to point to th

```{admonition} More YAML options
:class: seealso
To see what else can be configured using Jina's YAML interface, see {ref}`here <executor-api>`.
To see what else can be configured using Jina's YAML interface, see {ref}`here <executor-yaml-spec>`.
```

This is necessary for the Executor to be put inside the Docker image,
2 changes: 1 addition & 1 deletion docs/concepts/executor/dynamic-batching.md
@@ -12,7 +12,7 @@ When you enable dynamic batching, incoming requests to Executor endpoints with t
are queued together. The Executor endpoint is executed on the queued requests when either:

- the number of accumulated Documents exceeds the {ref}`preferred_batch_size<executor-dynamic-batching-parameters>` parameter
- or the {ref}`timeout<executor-dynamic-batching-parameters` parameter is exceeded.
- or the {ref}`timeout<executor-dynamic-batching-parameters>` parameter is exceeded.
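The trigger logic above can be sketched in plain Python (a simplified toy model for illustration, not Jina's actual implementation):

```python
import time

class BatchQueue:
    # Toy model: flush when the batch is full, or when the oldest
    # queued Document has waited longer than the timeout
    def __init__(self, preferred_batch_size: int, timeout: float):
        self.preferred_batch_size = preferred_batch_size
        self.timeout = timeout  # seconds
        self._docs = []
        self._first_arrival = None

    def add(self, doc):
        if self._first_arrival is None:
            self._first_arrival = time.monotonic()
        self._docs.append(doc)

    def should_flush(self) -> bool:
        if not self._docs:
            return False
        if len(self._docs) >= self.preferred_batch_size:
            return True
        return time.monotonic() - self._first_arrival >= self.timeout

    def flush(self):
        batch, self._docs, self._first_arrival = self._docs, [], None
        return batch
```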

Although this feature _can_ work on {ref}`parametrized requests<client-executor-parameters>`, it's best used for endpoints that don't often receive different parameters.
Creating a batch of requests typically results in better usage of hardware resources and potentially increased throughput.
2 changes: 1 addition & 1 deletion docs/concepts/executor/hub/hub-portal.md
@@ -1,6 +1,6 @@
# Portal

Executor Hub is a marketplace for {class}`~jina.Executor`s where you can upload your own Executors or use ones already developed by the community. If this is your first time developing an Executor you can check our {ref}`tutorials <create-hub-executor>` that guide you through the process.
Executor Hub is a marketplace for {class}`~jina.Executor`s where you can upload your own Executors or use ones already developed by the community. If this is your first time developing an Executor you can check our {ref}`tutorials <create-executor>` that guide you through the process.

Let's see the [Hub portal](https://cloud.jina.ai) in detail.

2 changes: 1 addition & 1 deletion docs/concepts/executor/hub/push-executor.md
@@ -87,7 +87,7 @@ jina hub push [--public/--private] --force-update <NAME> --secret <SECRET> --pro

The `--build-env` parameter manages environment variables, letting you use a private token in `requirements.txt` to install private dependencies. For security reasons, you don't want to expose this token to anyone else. For example, we have the following `requirements.txt`:

```txt
```
# requirements.txt
git+http://${YOUR_TOKEN}@github.com/your_private_repo
```
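When pushing, the token can then be supplied at build time instead of being committed (the exact flag shape is an assumption; check `jina hub push --help` for your version):

```shell
# Hypothetical: inject the token as a build-time variable
jina hub push . --build-env YOUR_TOKEN=<your-token>
```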
4 changes: 2 additions & 2 deletions docs/concepts/executor/instrumentation.md
@@ -13,7 +13,7 @@ Read more on setting up an OpenTelemetry collector backend in the {ref}`OpenTele
```

```{caution}
Prometheus-only based metrics collection will soon be deprecated. Refer to {ref}`Monitoring Executor <monitoring-executor>` for this deprecated setup.
Prometheus-only based metrics collection will soon be deprecated. Refer to {ref}`Monitoring Executor <monitoring>` for this deprecated setup.
```

## Tracing
@@ -81,7 +81,7 @@ If tracing is not enabled by default or enabled in your environment, check `self
## Metrics

```{hint}
Prometheus-only based metrics collection will be deprecated soon. Refer to {ref}`Monitoring Executor <monitoring-executor>` section for the deprecated setup.
Prometheus-only based metrics collection will be deprecated soon. Refer to {ref}`Monitoring Executor <monitoring>` section for the deprecated setup.
```

Any method that uses the {class}`~jina.requests` decorator is monitored and creates a
6 changes: 3 additions & 3 deletions docs/concepts/executor/serve.md
@@ -18,7 +18,7 @@ In Jina there are two ways of running standalone Executors: *Served Executors* a
It resides behind a {ref}`Gateway <architecture-overview>` and can thus be directly accessed by a {ref}`Client <client>`.
It can also be used as part of a Flow.
- A **shared Executor** is launched using the [Jina CLI](../cli/index.rst) and does *not* sit behind a Gateway.
- A **shared Executor** is launched using the [Jina CLI](../../cli/index.rst) and does *not* sit behind a Gateway.
It is intended to be used in one or more Flows.
Because a shared Executor does not reside behind a Gateway, it cannot be directly accessed by a Client, but it requires
fewer networking hops when used inside of a Flow.
@@ -63,7 +63,7 @@ print(Client(port=12345).post(inputs=DocumentArray.empty(1), on='/foo').texts)
````

Internally, the {meth}`~jina.serve.executors.BaseExecutor.serve` method creates and starts a {class}`~jina.Flow`. Therefore, it can take all associated parameters:
`uses_with`, `uses_metas`, `uses_requests` are passed to the internal {meth}`~jina.serve.executors.BaseExecutor.add` call, `stop_event` stops
`uses_with`, `uses_metas`, `uses_requests` are passed to the internal {meth}`~jina.Flow.add` call, `stop_event` stops
the Executor, and `**kwargs` is passed to the internal {meth}`~jina.Flow` initialisation call.

````{admonition} See Also
@@ -103,7 +103,7 @@ This type of standalone Executor can be either *external* or *shared*. By defaul
- An external Executor is deployed alongside a {ref}`Gateway <architecture-overview>`.
- A shared Executor has no Gateway.

Both types of Executor {ref}`can be used directly in any Flow <external-executor>`.
Both types of Executor {ref}`can be used directly in any Flow <external-executors>`.
Having a Gateway may be useful if you want to access your Executor with the {ref}`Client <client>` without an additional Flow. If the Executor is only used inside other Flows, you should define a shared Executor to save the costs of running the Gateway in Kubernetes.

## Serve via Docker Compose
2 changes: 1 addition & 1 deletion docs/concepts/flow/add-conditioning.md
@@ -225,7 +225,7 @@ That's exactly what you want for your filter!
````{admonition} See Also
:class: seealso
For a hands-on example of leveraging filter conditions, see {ref}`this how-to <flow-switch>`.
For a hands-on example of leveraging filter conditions, see {ref}`this how-to <flow-filter>`.
````

To define a filter condition, use [DocArrays rich query language](https://docarray.jina.ai/fundamentals/documentarray/find/#query-by-conditions).
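A filter condition is a MongoDB-style query dict. As a sketch (the field name and the `when=` usage below are illustrative assumptions):

```python
# Hypothetical filter: keep only Documents whose tags contain category == 'image'
filter_condition = {'tags__category': {'$eq': 'image'}}

# which could then be attached to an Executor, e.g. f.add(..., when=filter_condition)
```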
5 changes: 2 additions & 3 deletions docs/concepts/flow/add-executors.md
@@ -15,7 +15,7 @@ from jina import Flow
f = Flow().add()
```

This adds an "empty" Executor called {class}`~jina.Executor.BaseExecutor` to the Flow. This Executor (without any parameters) performs no actions.
This adds an "empty" Executor called {class}`~jina.serve.executors.BaseExecutor` to the Flow. This Executor (without any parameters) performs no actions.

```{figure} no-op-flow.svg
:scale: 70%
@@ -376,8 +376,7 @@ Flow().add(host='123.45.67.89:443', external=True, tls=True)
After that, the external Executor behaves just like an internal one. You can even add the same Executor to multiple Flows.

```{hint}
Using `tls` to connect to the External Executor is especially needed to use an external Executor deployed with JCloud. See the JCloud {ref}`documentation <jcloud-external-executors>`
for further details
Using `tls` to connect to the External Executor is especially needed to use an external Executor deployed with JCloud. See the JCloud {ref}`documentation <jcloud>` for further details
```

### Pass arguments