3.10.2-master.json
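The remainder of this file is the raw, single-line JSON schema describing the Jina CLI API for version 3.10.2: a top-level list of "methods" (CLI commands such as executor, flow, gateway, hub), each with its "options" (name, type, default, help text) and, for grouped commands, nested "methods". Below is a minimal sketch of how such a schema might be consumed, assuming the file has been downloaded locally under the same name; the helper function and the printed format are illustrative only and are not part of the Jina API.

# Minimal sketch: load the schema and list every (sub)command with its
# required options. Assumes "3.10.2-master.json" sits in the working directory.
import json

with open("3.10.2-master.json") as f:
    schema = json.load(f)

def walk(methods, prefix="jina"):
    """Recursively print each command and the options it marks as required."""
    for m in methods:
        name = f"{prefix} {m['name']}"
        required = [o["name"] for o in m.get("options", []) if o.get("required")]
        print(name, "-> required:", required or "none")
        # Grouped commands (e.g. export, auth, hub) nest sub-commands under "methods".
        walk(m.get("methods", []), prefix=name)

walk(schema["methods"])

Running this against the schema prints one line per command (for example "jina ping" with its required target and host arguments), which is a quick way to check what the CLI expects without reading the raw JSON by hand.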
{"authors": "dev-team@jina.ai", "description": "Build cross-modal and multi-modal applications on the cloud", "docs": "https://docs.jina.ai", "license": "Apache 2.0", "methods": [{"help": "Start an Executor. Executor is how Jina processes Document.", "name": "executor", "options": [{"choices": null, "default": null, "default_random": false, "help": "\n The name of this object.\n\n This will be used in the following places:\n - how you refer to this object in Python/YAML/CLI\n - visualization\n - log message header\n - ...\n\n When not given, then the default naming strategy will apply.\n ", "name": "name", "option_strings": ["--name"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The working directory for any IO operations in this object. If not set, then derive from its parent `workspace`.", "name": "workspace", "option_strings": ["--workspace"], "required": false, "type": "str"}, {"choices": null, "default": "default", "default_random": false, "help": "The YAML config of the logger used in this object.", "name": "log_config", "option_strings": ["--log-config"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, then no log will be emitted from this object.", "name": "quiet", "option_strings": ["--quiet"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, then exception stack information will not be added to the log", "name": "quiet_error", "option_strings": ["--quiet-error"], "required": false, "type": "bool"}, {"choices": null, "default": 60, "default_random": false, "help": "The timeout in milliseconds of the control request, -1 for waiting forever", "name": "timeout_ctrl", "option_strings": ["--timeout-ctrl"], "required": false, "type": "int"}, {"choices": null, "default": "ANY", "default_random": false, "help": "\n The polling strategy of the Deployment and its endpoints (when `shards>1`).\n Can be defined for all endpoints of a Deployment or by endpoint.\n Define per Deployment:\n - ANY: only one (whoever is idle) Pod polls the message\n - ALL: all Pods poll the message (like a broadcast)\n Define per Endpoint:\n JSON dict, {endpoint: PollingType}\n {'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'}\n \n ", "name": "polling", "option_strings": ["--polling"], "required": false, "type": "str"}, {"choices": null, "default": "BaseExecutor", "default_random": false, "help": "\n The config of the executor, it could be one of the followings:\n * the string literal of an Executor class name\n * an Executor YAML file (.yml, .yaml, .jaml)\n * a Jina Hub Executor (must start with `jinahub://` or `jinahub+docker://`)\n * a docker image (must start with `docker://`)\n * the string literal of a YAML config (must start with `!` or `jtype: `)\n * the string literal of a JSON config\n\n When use it under Python, one can use the following values additionally:\n - a Python dict that represents the config\n - a text file stream has `.read()` interface\n ", "name": "uses", "option_strings": ["--uses"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "\n Dictionary of keyword arguments that will override the `with` configuration in `uses`\n ", "name": "uses_with", "option_strings": ["--uses-with"], "required": false, "type": "dict"}, {"choices": null, "default": null, "default_random": false, "help": "\n Dictionary of keyword arguments that will override the `metas` 
configuration in `uses`\n ", "name": "uses_metas", "option_strings": ["--uses-metas"], "required": false, "type": "dict"}, {"choices": null, "default": null, "default_random": false, "help": "\n Dictionary of keyword arguments that will override the `requests` configuration in `uses`\n ", "name": "uses_requests", "option_strings": ["--uses-requests"], "required": false, "type": "dict"}, {"choices": null, "default": null, "default_random": false, "help": "\nThe customized python modules need to be imported before loading the executor\n\nNote that the recommended way is to only import a single module - a simple python file, if your\nexecutor can be defined in a single file, or an ``__init__.py`` file if you have multiple files,\nwhich should be structured as a python package. For more details, please see the\n`Executor cookbook <https://docs.jina.ai/fundamentals/executor/executor-files/>`__\n", "name": "py_modules", "option_strings": ["--py-modules"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": 49296, "default_factory": "random_port", "default_random": true, "help": "The port for input data to bind to, default a random port between [49152, 65535]", "name": "port", "option_strings": ["--port-in"], "required": false, "type": "int"}, {"choices": null, "default": "0.0.0.0", "default_random": false, "help": "The host address for binding to, by default it is 0.0.0.0", "name": "host_in", "option_strings": ["--host-in"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, only native Executors is allowed, and the Executor is always run inside WorkerRuntime.", "name": "native", "option_strings": ["--native"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "\nThe type of array `tensor` and `embedding` will be serialized to.\n\nSupports the same types as `docarray.to_protobuf(.., ndarray_type=...)`, which can be found \n`here <https://docarray.jina.ai/fundamentals/document/serialization/#from-to-protobuf>`.\nDefaults to retaining whatever type is returned by the Executor.\n", "name": "output_array_type", "option_strings": ["--output-array-type"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "Dictionary of kwargs arguments that will be passed to the grpc server as options when starting the server, example : {'grpc.max_send_message_length': -1}", "name": "grpc_server_options", "option_strings": ["--grpc-server-options"], "required": false, "type": "dict"}, {"choices": null, "default": [], "default_random": false, "help": "List of exceptions that will cause the Executor to shut down.", "name": "exit_on_exceptions", "option_strings": ["--exit-on-exceptions"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": null, "default_random": false, "help": "The entrypoint command overrides the ENTRYPOINT in Docker image. when not set then the Docker image ENTRYPOINT takes effective.", "name": "entrypoint", "option_strings": ["--entrypoint"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "\nDictionary of kwargs arguments that will be passed to Docker SDK when starting the docker '\ncontainer. 
\n\nMore details can be found in the Docker SDK docs: https://docker-py.readthedocs.io/en/stable/\n\n", "name": "docker_kwargs", "option_strings": ["--docker-kwargs"], "required": false, "type": "dict"}, {"choices": null, "default": null, "default_random": false, "help": "\nThe path on the host to be mounted inside the container. \n\nNote, \n- If separated by `:`, then the first part will be considered as the local host path and the second part is the path in the container system. \n- If no split provided, then the basename of that directory will be mounted into container's root path, e.g. `--volumes=\"/user/test/my-workspace\"` will be mounted into `/my-workspace` inside the container. \n- All volumes are mounted with read-write mode.\n", "name": "volumes", "option_strings": ["--volumes"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": null, "default_random": false, "help": "\n This argument allows dockerized Jina executor discover local gpu devices.\n\n Note, \n - To access all gpus, use `--gpus all`.\n - To access multiple gpus, e.g. make use of 2 gpus, use `--gpus 2`.\n - To access specified gpus based on device id, use `--gpus device=[YOUR-GPU-DEVICE-ID]`\n - To access specified gpus based on multiple device id, use `--gpus device=[YOUR-GPU-DEVICE-ID1],device=[YOUR-GPU-DEVICE-ID2]`\n - To specify more parameters, use `--gpus device=[YOUR-GPU-DEVICE-ID],runtime=nvidia,capabilities=display\n ", "name": "gpus", "option_strings": ["--gpus"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "Do not automatically mount a volume for dockerized Executors.", "name": "disable_auto_volume", "option_strings": ["--disable-auto-volume"], "required": false, "type": "bool"}, {"choices": null, "default": "0.0.0.0", "default_random": false, "help": "The host address of the runtime, by default it is 0.0.0.0. In the case of an external Executor (`--external` or `external=True`) this can be a list of hosts, separated by commas. Then, every resulting address will be considered as one replica of the Executor.", "name": "host", "option_strings": ["--host"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "Do not display the streaming of remote logs on local console", "name": "quiet_remote_logs", "option_strings": ["--quiet-remote-logs"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "\nThe files on the host to be uploaded to the remote\nworkspace. This can be useful when your Deployment has more\nfile dependencies beyond a single YAML file, e.g.\nPython files, data files.\n\nNote,\n- currently only flatten structure is supported, which means if you upload `[./foo/a.py, ./foo/b.pp, ./bar/c.yml]`, then they will be put under the _same_ workspace on the remote, losing all hierarchies.\n- by default, `--uses` YAML file is always uploaded.\n- uploaded files are by default isolated across the runs. 
To ensure files are submitted to the same workspace across different runs, use `--workspace-id` to specify the workspace.\n", "name": "upload_files", "option_strings": ["--upload-files"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": "WorkerRuntime", "default_random": false, "help": "The runtime class to run inside the Pod", "name": "runtime_cls", "option_strings": ["--runtime-cls"], "required": false, "type": "str"}, {"choices": null, "default": 600000, "default_random": false, "help": "The timeout in milliseconds of a Pod waits for the runtime to be ready, -1 for waiting forever", "name": "timeout_ready", "option_strings": ["--timeout-ready"], "required": false, "type": "int"}, {"choices": null, "default": null, "default_random": false, "help": "The map of environment variables that are available inside runtime", "name": "env", "option_strings": ["--env"], "required": false, "type": "dict"}, {"choices": null, "default": 1, "default_random": false, "help": "The number of shards in the deployment running at the same time. For more details check https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies", "name": "shards", "option_strings": ["--shards"], "required": false, "type": "int"}, {"choices": null, "default": 1, "default_random": false, "help": "The number of replicas in the deployment", "name": "replicas", "option_strings": ["--replicas"], "required": false, "type": "int"}, {"choices": null, "default": "57647", "default_factory": "random_identity", "default_random": true, "help": "The port for input data to bind to, default is a random port between [49152, 65535]. In the case of an external Executor (`--external` or `external=True`) this can be a list of ports, separated by commas. Then, every resulting address will be considered as one replica of the Executor.", "name": "port", "option_strings": ["--port"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, spawn an http server with a prometheus endpoint to expose metrics", "name": "monitoring", "option_strings": ["--monitoring"], "required": false, "type": "bool"}, {"choices": null, "default": "55512", "default_factory": "random_identity", "default_random": true, "help": "The port on which the prometheus server is exposed, default is a random port between [49152, 65535]", "name": "port_monitoring", "option_strings": ["--port-monitoring"], "required": false, "type": "str"}, {"choices": null, "default": -1, "default_random": false, "help": "Number of retries per gRPC call. If <0 it defaults to max(3, num_replicas)", "name": "retries", "option_strings": ["--retries"], "required": false, "type": "int"}, {"choices": null, "default": false, "default_random": false, "help": "If set, the current Pod/Deployment can not be further chained, and the next `.add()` will chain after the last Pod/Deployment not this current one.", "name": "floating", "option_strings": ["--floating"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, the sdk implementation of the OpenTelemetry tracer will be available and will be enabled for automatic tracing of requests and customer span creation. 
Otherwise a no-op implementation will be provided.", "name": "tracing", "option_strings": ["--tracing"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this hostname will be used to configure the trace exporter agent.", "name": "traces_exporter_host", "option_strings": ["--traces-exporter-host"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this port will be used to configure the trace exporter agent.", "name": "traces_exporter_port", "option_strings": ["--traces-exporter-port"], "required": false, "type": "int"}, {"choices": null, "default": false, "default_random": false, "help": "If set, the sdk implementation of the OpenTelemetry metrics will be available for default monitoring and custom measurements. Otherwise a no-op implementation will be provided.", "name": "metrics", "option_strings": ["--metrics"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this hostname will be used to configure the metrics exporter agent.", "name": "metrics_exporter_host", "option_strings": ["--metrics-exporter-host"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this port will be used to configure the metrics exporter agent.", "name": "metrics_exporter_port", "option_strings": ["--metrics-exporter-port"], "required": false, "type": "int"}, {"choices": null, "default": false, "default_random": false, "help": "If set, install `requirements.txt` in the Hub Executor bundle to local", "name": "install_requirements", "option_strings": ["--install-requirements"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, always pull the latest Hub Executor bundle even it exists on local", "name": "force_update", "option_strings": ["--force-update", "--force"], "required": false, "type": "bool"}, {"choices": ["NoCompression", "Deflate", "Gzip"], "default": null, "default_random": false, "help": "The compression mechanism used when sending requests from the Head to the WorkerRuntimes. 
For more details, check https://grpc.github.io/grpc/python/grpc.html#compression.", "name": "compression", "option_strings": ["--compression"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The address of the uses-before runtime", "name": "uses_before_address", "option_strings": ["--uses-before-address"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The address of the uses-before runtime", "name": "uses_after_address", "option_strings": ["--uses-after-address"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "dictionary JSON with a list of connections to configure", "name": "connection_list", "option_strings": ["--connection-list"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "Disable the built-in reduce mechanism, set this if the reduction is to be handled by the Executor connected to this Head", "name": "disable_reduce", "option_strings": ["--disable-reduce"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "The timeout in milliseconds used when sending data requests to Executors, -1 means no timeout, disabled by default", "name": "timeout_send", "option_strings": ["--timeout-send"], "required": false, "type": "int"}]}, {"help": "Start a Flow. Flow is how Jina streamlines and distributes Executors.", "name": "flow", "options": [{"choices": null, "default": null, "default_random": false, "help": "\n The name of this object.\n\n This will be used in the following places:\n - how you refer to this object in Python/YAML/CLI\n - visualization\n - log message header\n - ...\n\n When not given, then the default naming strategy will apply.\n ", "name": "name", "option_strings": ["--name"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The working directory for any IO operations in this object. If not set, then derive from its parent `workspace`.", "name": "workspace", "option_strings": ["--workspace"], "required": false, "type": "str"}, {"choices": null, "default": "default", "default_random": false, "help": "The YAML config of the logger used in this object.", "name": "log_config", "option_strings": ["--log-config"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, then no log will be emitted from this object.", "name": "quiet", "option_strings": ["--quiet"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, then exception stack information will not be added to the log", "name": "quiet_error", "option_strings": ["--quiet-error"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "The YAML path represents a flow. 
It can be either a local file path or a URL.", "name": "uses", "option_strings": ["--uses"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The map of environment variables that are available inside runtime", "name": "env", "option_strings": ["--env"], "required": false, "type": "dict"}, {"choices": ["HANG", "REMOVE", "COLLECT"], "default": "COLLECT", "default_random": false, "help": "\n The strategy on those inspect deployments in the flow.\n\n If `REMOVE` is given then all inspect deployments are removed when building the flow.\n ", "name": "inspect", "option_strings": ["--inspect"], "required": false, "type": "str"}]}, {"help": "Ping a remote Executor or a Flow.", "name": "ping", "options": [{"choices": ["flow", "executor", "gateway"], "default": "executor", "default_random": false, "help": "The target type to ping. For `executor` and `gateway`, checks the readiness of the individual service. For `flow` it checks the connectivity of the complete microservice architecture.", "name": "target", "option_strings": [], "required": true, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The host address with port of a target Executor, Gateway or a Flow, e.g. 0.0.0.0:8000. For Flow or Gateway, host can also indicate the protocol, grpc will be used if not provided, e.g http://0.0.0.0:8000", "name": "host", "option_strings": [], "required": true, "type": "str"}, {"choices": null, "default": 3000, "default_random": false, "help": "\nTimeout in millisecond of one check\n-1 for waiting forever\n", "name": "timeout", "option_strings": ["--timeout"], "required": false, "type": "int"}, {"choices": null, "default": 1, "default_random": false, "help": "The number of readiness checks to perform", "name": "attempts", "option_strings": ["--attempts"], "required": false, "type": "int"}, {"choices": null, "default": 1, "default_random": false, "help": "The minimum number of successful readiness checks, before exiting successfully with exit(0)", "name": "min_successful_attempts", "option_strings": ["--min-successful-attempts"], "required": false, "type": "int"}]}, {"help": "Export Jina API and Flow to JSONSchema, Kubernetes YAML, or SVG flowchart.", "methods": [{"help": null, "name": "flowchart", "options": [{"choices": null, "default": null, "default_random": false, "help": "The input file path of a Flow YAML", "name": "flowpath", "option_strings": [], "required": true, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The output path", "name": "outpath", "option_strings": [], "required": true, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, then the flowchart is rendered vertically from top to down.", "name": "vertical_layout", "option_strings": ["--vertical-layout"], "required": false, "type": "bool"}]}, {"help": null, "name": "kubernetes", "options": [{"choices": null, "default": null, "default_random": false, "help": "The input file path of a Flow YAML", "name": "flowpath", "option_strings": [], "required": true, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The output path", "name": "outpath", "option_strings": [], "required": true, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The name of the k8s namespace to set for the configurations. 
If None, the name of the Flow will be used.", "name": "k8s_namespace", "option_strings": ["--k8s-namespace"], "required": false, "type": "str"}]}, {"help": null, "name": "docker-compose", "options": [{"choices": null, "default": null, "default_random": false, "help": "The input file path of a Flow YAML", "name": "flowpath", "option_strings": [], "required": true, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The output path", "name": "outpath", "option_strings": [], "required": true, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The name of the network that will be used by the deployment name.", "name": "network_name", "option_strings": ["--network_name"], "required": false, "type": "str"}]}, {"help": null, "name": "schema", "options": [{"choices": null, "default": null, "default_random": false, "help": "The YAML file path for storing the exported API", "name": "yaml_path", "option_strings": ["--yaml-path"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": null, "default_random": false, "help": "The JSON file path for storing the exported API", "name": "json_path", "option_strings": ["--json-path"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": null, "default_random": false, "help": "The JSONSchema file path for storing the exported API", "name": "schema_path", "option_strings": ["--schema-path"], "required": false, "type": "typing.List[str]"}]}], "name": "export", "options": []}, {"help": "Create a new Jina toy project with the predefined template.", "name": "new", "options": [{"choices": null, "default": "hello-jina", "default_random": false, "help": "The name of the project", "name": "name", "option_strings": [], "required": true, "type": "str"}]}, {"help": "Start a Gateway that receives client Requests via gRPC/REST interface", "name": "gateway", "options": [{"choices": null, "default": "gateway", "default_random": false, "help": "\n The name of this object.\n\n This will be used in the following places:\n - how you refer to this object in Python/YAML/CLI\n - visualization\n - log message header\n - ...\n\n When not given, then the default naming strategy will apply.\n ", "name": "name", "option_strings": ["--name"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The working directory for any IO operations in this object. 
If not set, then derive from its parent `workspace`.", "name": "workspace", "option_strings": ["--workspace"], "required": false, "type": "str"}, {"choices": null, "default": "default", "default_random": false, "help": "The YAML config of the logger used in this object.", "name": "log_config", "option_strings": ["--log-config"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, then no log will be emitted from this object.", "name": "quiet", "option_strings": ["--quiet"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, then exception stack information will not be added to the log", "name": "quiet_error", "option_strings": ["--quiet-error"], "required": false, "type": "bool"}, {"choices": null, "default": 60, "default_random": false, "help": "The timeout in milliseconds of the control request, -1 for waiting forever", "name": "timeout_ctrl", "option_strings": ["--timeout-ctrl"], "required": false, "type": "int"}, {"choices": null, "default": "ANY", "default_random": false, "help": "\n The polling strategy of the Deployment and its endpoints (when `shards>1`).\n Can be defined for all endpoints of a Deployment or by endpoint.\n Define per Deployment:\n - ANY: only one (whoever is idle) Pod polls the message\n - ALL: all Pods poll the message (like a broadcast)\n Define per Endpoint:\n JSON dict, {endpoint: PollingType}\n {'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'}\n \n ", "name": "polling", "option_strings": ["--polling"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The entrypoint command overrides the ENTRYPOINT in Docker image. when not set then the Docker image ENTRYPOINT takes effective.", "name": "entrypoint", "option_strings": ["--entrypoint"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "\nDictionary of kwargs arguments that will be passed to Docker SDK when starting the docker '\ncontainer. \n\nMore details can be found in the Docker SDK docs: https://docker-py.readthedocs.io/en/stable/\n\n", "name": "docker_kwargs", "option_strings": ["--docker-kwargs"], "required": false, "type": "dict"}, {"choices": null, "default": null, "default_random": false, "help": "\nThe path on the host to be mounted inside the container. \n\nNote, \n- If separated by `:`, then the first part will be considered as the local host path and the second part is the path in the container system. \n- If no split provided, then the basename of that directory will be mounted into container's root path, e.g. `--volumes=\"/user/test/my-workspace\"` will be mounted into `/my-workspace` inside the container. \n- All volumes are mounted with read-write mode.\n", "name": "volumes", "option_strings": ["--volumes"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": null, "default_random": false, "help": "\n This argument allows dockerized Jina executor discover local gpu devices.\n\n Note, \n - To access all gpus, use `--gpus all`.\n - To access multiple gpus, e.g. 
make use of 2 gpus, use `--gpus 2`.\n - To access specified gpus based on device id, use `--gpus device=[YOUR-GPU-DEVICE-ID]`\n - To access specified gpus based on multiple device id, use `--gpus device=[YOUR-GPU-DEVICE-ID1],device=[YOUR-GPU-DEVICE-ID2]`\n - To specify more parameters, use `--gpus device=[YOUR-GPU-DEVICE-ID],runtime=nvidia,capabilities=display\n ", "name": "gpus", "option_strings": ["--gpus"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "Do not automatically mount a volume for dockerized Executors.", "name": "disable_auto_volume", "option_strings": ["--disable-auto-volume"], "required": false, "type": "bool"}, {"choices": null, "default": 1000, "default_random": false, "help": "\n Number of requests fetched from the client before feeding into the first Executor. \n \n Used to control the speed of data input into a Flow. 0 disables prefetch (1000 requests is the default)", "name": "prefetch", "option_strings": ["--prefetch"], "required": false, "type": "int"}, {"choices": null, "default": null, "default_random": false, "help": "The title of this HTTP server. It will be used in automatics docs such as Swagger UI.", "name": "title", "option_strings": ["--title"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The description of this HTTP server. It will be used in automatics docs such as Swagger UI.", "name": "description", "option_strings": ["--description"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "\n If set, a CORS middleware is added to FastAPI frontend to allow cross-origin access.\n ", "name": "cors", "option_strings": ["--cors"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, `/status` `/post` endpoints are removed from HTTP interface. 
", "name": "no_debug_endpoints", "option_strings": ["--no-debug-endpoints"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "\n If set, `/index`, `/search`, `/update`, `/delete` endpoints are removed from HTTP interface.\n\n Any executor that has `@requests(on=...)` bind with those values will receive data requests.\n ", "name": "no_crud_endpoints", "option_strings": ["--no-crud-endpoints"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "\n A JSON string that represents a map from executor endpoints (`@requests(on=...)`) to HTTP endpoints.\n ", "name": "expose_endpoints", "option_strings": ["--expose-endpoints"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "\nDictionary of kwargs arguments that will be passed to Uvicorn server when starting the server\n\nMore details can be found in Uvicorn docs: https://www.uvicorn.org/settings/\n\n", "name": "uvicorn_kwargs", "option_strings": ["--uvicorn-kwargs"], "required": false, "type": "dict"}, {"choices": null, "default": null, "default_random": false, "help": "\n the path to the certificate file\n ", "name": "ssl_certfile", "option_strings": ["--ssl-certfile"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "\n the path to the key file\n ", "name": "ssl_keyfile", "option_strings": ["--ssl-keyfile"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, /graphql endpoint is added to HTTP interface. ", "name": "expose_graphql_endpoint", "option_strings": ["--expose-graphql-endpoint"], "required": false, "type": "bool"}, {"choices": ["GRPC", "HTTP", "WEBSOCKET"], "default": "GRPC", "default_random": false, "help": "Communication protocol between server and client.", "name": "protocol", "option_strings": ["--protocol"], "required": false, "type": "str"}, {"choices": null, "default": "0.0.0.0", "default_random": false, "help": "The host address of the runtime, by default it is 0.0.0.0. In the case of an external Executor (`--external` or `external=True`) this can be a list of hosts, separated by commas. Then, every resulting address will be considered as one replica of the Executor.", "name": "host", "option_strings": ["--host"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, respect the http_proxy and https_proxy environment variables. otherwise, it will unset these proxy variables before start. 
gRPC seems to prefer no proxy", "name": "proxy", "option_strings": ["--proxy"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "\n The config of the gateway, it could be one of the followings:\n * the string literal of an Gateway class name\n * a Gateway YAML file (.yml, .yaml, .jaml)\n * a docker image (must start with `docker://`)\n * the string literal of a YAML config (must start with `!` or `jtype: `)\n * the string literal of a JSON config\n\n When use it under Python, one can use the following values additionally:\n - a Python dict that represents the config\n - a text file stream has `.read()` interface\n ", "name": "uses", "option_strings": ["--uses"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "\n Dictionary of keyword arguments that will override the `with` configuration in `uses`\n ", "name": "uses_with", "option_strings": ["--uses-with"], "required": false, "type": "dict"}, {"choices": null, "default": null, "default_random": false, "help": "\nThe customized python modules need to be imported before loading the gateway\n\nNote that the recommended way is to only import a single module - a simple python file, if your\ngateway can be defined in a single file, or an ``__init__.py`` file if you have multiple files,\nwhich should be structured as a python package.\n", "name": "py_modules", "option_strings": ["--py-modules"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": 56890, "default_factory": "random_port", "default_random": true, "help": "The port for input data to bind to, default a random port between [49152, 65535]", "name": "port", "option_strings": ["--port-in"], "required": false, "type": "int"}, {"choices": null, "default": "0.0.0.0", "default_random": false, "help": "The host address for binding to, by default it is 0.0.0.0", "name": "host_in", "option_strings": ["--host-in"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, only native Executors is allowed, and the Executor is always run inside WorkerRuntime.", "name": "native", "option_strings": ["--native"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "\nThe type of array `tensor` and `embedding` will be serialized to.\n\nSupports the same types as `docarray.to_protobuf(.., ndarray_type=...)`, which can be found \n`here <https://docarray.jina.ai/fundamentals/document/serialization/#from-to-protobuf>`.\nDefaults to retaining whatever type is returned by the Executor.\n", "name": "output_array_type", "option_strings": ["--output-array-type"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "Dictionary of kwargs arguments that will be passed to the grpc server as options when starting the server, example : {'grpc.max_send_message_length': -1}", "name": "grpc_server_options", "option_strings": ["--grpc-server-options"], "required": false, "type": "dict"}, {"choices": null, "default": [], "default_random": false, "help": "List of exceptions that will cause the Executor to shut down.", "name": "exit_on_exceptions", "option_strings": ["--exit-on-exceptions"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": 51238, "default_factory": "random_port", "default_random": true, "help": "The port that the gateway exposes for clients for GRPC connections.", "name": "port", 
"option_strings": ["--port-expose"], "required": false, "type": "int"}, {"choices": null, "default": "{}", "default_random": false, "help": "Routing graph for the gateway", "name": "graph_description", "option_strings": ["--graph-description"], "required": false, "type": "str"}, {"choices": null, "default": "{}", "default_random": false, "help": "Dictionary stating which filtering conditions each Executor in the graph requires to receive Documents.", "name": "graph_conditions", "option_strings": ["--graph-conditions"], "required": false, "type": "str"}, {"choices": null, "default": "{}", "default_random": false, "help": "JSON dictionary with the input addresses of each Deployment", "name": "deployments_addresses", "option_strings": ["--deployments-addresses"], "required": false, "type": "str"}, {"choices": null, "default": "{}", "default_random": false, "help": "JSON dictionary with the request metadata for each Deployment", "name": "deployments_metadata", "option_strings": ["--deployments-metadata"], "required": false, "type": "str"}, {"choices": null, "default": "[]", "default_random": false, "help": "list JSON disabling the built-in merging mechanism for each Deployment listed", "name": "deployments_disable_reduce", "option_strings": ["--deployments-disable-reduce"], "required": false, "type": "str"}, {"choices": ["NoCompression", "Deflate", "Gzip"], "default": null, "default_random": false, "help": "The compression mechanism used when sending requests from the Head to the WorkerRuntimes. For more details, check https://grpc.github.io/grpc/python/grpc.html#compression.", "name": "compression", "option_strings": ["--compression"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The timeout in milliseconds used when sending data requests to Executors, -1 means no timeout, disabled by default", "name": "timeout_send", "option_strings": ["--timeout-send"], "required": false, "type": "int"}, {"choices": null, "default": "GatewayRuntime", "default_random": false, "help": "The runtime class to run inside the Pod", "name": "runtime_cls", "option_strings": ["--runtime-cls"], "required": false, "type": "str"}, {"choices": null, "default": 600000, "default_random": false, "help": "The timeout in milliseconds of a Pod waits for the runtime to be ready, -1 for waiting forever", "name": "timeout_ready", "option_strings": ["--timeout-ready"], "required": false, "type": "int"}, {"choices": null, "default": null, "default_random": false, "help": "The map of environment variables that are available inside runtime", "name": "env", "option_strings": ["--env"], "required": false, "type": "dict"}, {"choices": null, "default": 1, "default_random": false, "help": "The number of shards in the deployment running at the same time. For more details check https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies", "name": "shards", "option_strings": ["--shards"], "required": false, "type": "int"}, {"choices": null, "default": 1, "default_random": false, "help": "The number of replicas in the deployment", "name": "replicas", "option_strings": ["--replicas"], "required": false, "type": "int"}, {"choices": null, "default": "51267", "default_factory": "random_identity", "default_random": true, "help": "The port for input data to bind to, default is a random port between [49152, 65535]. In the case of an external Executor (`--external` or `external=True`) this can be a list of ports, separated by commas. 
Then, every resulting address will be considered as one replica of the Executor.", "name": "port", "option_strings": ["--port"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, spawn an http server with a prometheus endpoint to expose metrics", "name": "monitoring", "option_strings": ["--monitoring"], "required": false, "type": "bool"}, {"choices": null, "default": "59153", "default_factory": "random_identity", "default_random": true, "help": "The port on which the prometheus server is exposed, default is a random port between [49152, 65535]", "name": "port_monitoring", "option_strings": ["--port-monitoring"], "required": false, "type": "str"}, {"choices": null, "default": -1, "default_random": false, "help": "Number of retries per gRPC call. If <0 it defaults to max(3, num_replicas)", "name": "retries", "option_strings": ["--retries"], "required": false, "type": "int"}, {"choices": null, "default": false, "default_random": false, "help": "If set, the current Pod/Deployment can not be further chained, and the next `.add()` will chain after the last Pod/Deployment not this current one.", "name": "floating", "option_strings": ["--floating"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, the sdk implementation of the OpenTelemetry tracer will be available and will be enabled for automatic tracing of requests and customer span creation. Otherwise a no-op implementation will be provided.", "name": "tracing", "option_strings": ["--tracing"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this hostname will be used to configure the trace exporter agent.", "name": "traces_exporter_host", "option_strings": ["--traces-exporter-host"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this port will be used to configure the trace exporter agent.", "name": "traces_exporter_port", "option_strings": ["--traces-exporter-port"], "required": false, "type": "int"}, {"choices": null, "default": false, "default_random": false, "help": "If set, the sdk implementation of the OpenTelemetry metrics will be available for default monitoring and custom measurements. 
Otherwise a no-op implementation will be provided.", "name": "metrics", "option_strings": ["--metrics"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this hostname will be used to configure the metrics exporter agent.", "name": "metrics_exporter_host", "option_strings": ["--metrics-exporter-host"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this port will be used to configure the metrics exporter agent.", "name": "metrics_exporter_port", "option_strings": ["--metrics-exporter-port"], "required": false, "type": "int"}]}, {"help": "Login to Jina AI with your GitHub/Google/Email account", "methods": [{"help": "Login to Jina AI Ecosystem", "name": "login", "options": [{"choices": null, "default": false, "default_random": false, "help": "Force to login", "name": "force", "option_strings": ["-f", "--force"], "required": false, "type": "bool"}]}, {"help": "Logout from Jina AI Ecosystem", "methods": [], "name": "logout", "options": []}, {"help": "Commands for Personal Access Token", "methods": [{"help": "Create a Personal Access Token", "name": "create", "options": [{"choices": null, "default": 7, "default_random": false, "help": "Validity period (days)", "name": "expire", "option_strings": ["-e", "--expire"], "required": false, "type": "int"}, {"choices": null, "default": null, "default_random": false, "help": "Name of Personal Access Token", "name": "name", "option_strings": [], "required": true, "type": "str"}]}, {"help": "Revoke a Personal Access Token", "name": "delete", "options": [{"choices": null, "default": null, "default_random": false, "help": "Name of Personal Access Token which you want to delete", "name": "name", "option_strings": [], "required": true, "type": "str"}]}, {"help": "List all Personal Access Tokens", "methods": [], "name": "list", "options": []}], "name": "token", "options": []}], "name": "auth", "options": []}, {"help": "Push/Pull an Executor to/from Jina Hub", "methods": [{"help": "Create a new executor using the template", "name": "new", "options": [{"choices": null, "default": null, "default_random": false, "help": "the name of the Executor", "name": "name", "option_strings": ["--name"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "the path to store the Executor", "name": "path", "option_strings": ["--path"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, always set up advance configuration like description, keywords and url", "name": "advance_configuration", "option_strings": ["--advance-configuration"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "the short description of the Executor", "name": "description", "option_strings": ["--description"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "some keywords to help people search your Executor (separated by comma)", "name": "keywords", "option_strings": ["--keywords"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "the URL of your GitHub repo", "name": "url", "option_strings": ["--url"], "required": false, "type": "str"}, {"choices": ["cpu", "tf-gpu", "torch-gpu", "jax-gpu"], "default": null, "default_random": false, "help": "The Dockerfile template to use for 
the Executor", "name": "dockerfile", "option_strings": ["--dockerfile"], "required": false, "type": "str"}]}, {"help": "Push an executor package to Jina hub", "name": "push", "options": [{"choices": null, "default": false, "default_random": false, "help": "If set, Hub executor usage will not be printed.", "name": "no_usage", "option_strings": ["--no-usage"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, more information will be printed.", "name": "verbose", "option_strings": ["--verbose"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "The Executor folder to be pushed to Jina Hub", "name": "path", "option_strings": [], "required": true, "type": "dir_path"}, {"choices": null, "default": null, "default_random": false, "help": "The file path to the Dockerfile (default is `${cwd}/Dockerfile`)", "name": "dockerfile", "option_strings": ["-f", "--dockerfile"], "required": false, "type": "None"}, {"choices": null, "default": null, "default_random": false, "help": "If set, push will overwrite the Executor on the Hub that shares the same NAME or UUID8 identifier", "name": "force_update", "option_strings": ["--force-update", "--force"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "A list of environment variables. It will be used in project build phase.", "name": "build_env", "option_strings": ["--build-env"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The secret for overwrite a Hub executor", "name": "secret", "option_strings": ["--secret"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, \"--no-cache\" option will be added to the Docker build.", "name": "no_cache", "option_strings": ["--no-cache"], "required": false, "type": "bool"}, {"choices": null, "default": "==SUPPRESS==", "default_random": false, "help": "If set, the pushed executor is visible to public", "name": "public", "option_strings": ["--public"], "required": false, "type": "bool"}, {"choices": null, "default": "==SUPPRESS==", "default_random": false, "help": "If set, the pushed executor is invisible to public", "name": "private", "option_strings": ["--private"], "required": false, "type": "bool"}]}, {"help": "Download an executor image/package from Jina hub", "name": "pull", "options": [{"choices": null, "default": false, "default_random": false, "help": "If set, Hub executor usage will not be printed.", "name": "no_usage", "option_strings": ["--no-usage"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "The URI of the executor to pull (e.g., jinahub[+docker]://NAME)", "name": "uri", "option_strings": [], "required": true, "type": "hub_uri"}, {"choices": null, "default": false, "default_random": false, "help": "If set, install `requirements.txt` in the Hub Executor bundle to local", "name": "install_requirements", "option_strings": ["--install-requirements"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, always pull the latest Hub Executor bundle even it exists on local", "name": "force_update", "option_strings": ["--force-update", "--force"], "required": false, "type": "bool"}]}, {"help": "Query an executor building status of of a pushed Executor from Jina hub", "name": "status", "options": [{"choices": 
null, "default": ".", "default_random": false, "help": "The Executor folder to be pushed to Jina Hub.", "name": "path", "option_strings": [], "required": false, "type": "dir_path"}, {"choices": null, "default": null, "default_random": false, "help": "If set, you can get the specified building state of a pushed Executor.", "name": "id", "option_strings": ["--id"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, more building status information of a pushed Executor will be printed.", "name": "verbose", "option_strings": ["--verbose"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, history building status information of a pushed Executor will be printed.", "name": "replay", "option_strings": ["--replay"], "required": false, "type": "bool"}]}, {"help": "List your local Jina Executors", "methods": [], "name": "list", "options": []}], "name": "hub", "options": []}, {"help": "Manage Flows on Jina Cloud", "name": "cloud", "options": [{"choices": ["DEBUG", "INFO", "CRITICAL", "NOTSET"], "default": "INFO", "default_random": false, "help": "Set the loglevel of the logger", "name": "loglevel", "option_strings": ["--loglevel"], "required": false, "type": "str"}]}, {"help": "Show help text of a CLI argument", "name": "help", "options": [{"choices": null, "default": null, "default_random": false, "help": "Lookup the usage & mention of the argument name in Jina API. The name can be fuzzy", "name": "query", "option_strings": [], "required": true, "type": "str"}]}, {"help": "Start a Pod. You should rarely use this directly unless you are doing low-level orchestration", "name": "pod", "options": [{"choices": null, "default": null, "default_random": false, "help": "\n The name of this object.\n\n This will be used in the following places:\n - how you refer to this object in Python/YAML/CLI\n - visualization\n - log message header\n - ...\n\n When not given, then the default naming strategy will apply.\n ", "name": "name", "option_strings": ["--name"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The working directory for any IO operations in this object. 
If not set, then derive from its parent `workspace`.", "name": "workspace", "option_strings": ["--workspace"], "required": false, "type": "str"}, {"choices": null, "default": "default", "default_random": false, "help": "The YAML config of the logger used in this object.", "name": "log_config", "option_strings": ["--log-config"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, then no log will be emitted from this object.", "name": "quiet", "option_strings": ["--quiet"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, then exception stack information will not be added to the log", "name": "quiet_error", "option_strings": ["--quiet-error"], "required": false, "type": "bool"}, {"choices": null, "default": 60, "default_random": false, "help": "The timeout in milliseconds of the control request, -1 for waiting forever", "name": "timeout_ctrl", "option_strings": ["--timeout-ctrl"], "required": false, "type": "int"}, {"choices": null, "default": "ANY", "default_random": false, "help": "\n The polling strategy of the Deployment and its endpoints (when `shards>1`).\n Can be defined for all endpoints of a Deployment or by endpoint.\n Define per Deployment:\n - ANY: only one (whoever is idle) Pod polls the message\n - ALL: all Pods poll the message (like a broadcast)\n Define per Endpoint:\n JSON dict, {endpoint: PollingType}\n {'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'}\n \n ", "name": "polling", "option_strings": ["--polling"], "required": false, "type": "str"}, {"choices": null, "default": "BaseExecutor", "default_random": false, "help": "\n The config of the executor, it could be one of the followings:\n * the string literal of an Executor class name\n * an Executor YAML file (.yml, .yaml, .jaml)\n * a Jina Hub Executor (must start with `jinahub://` or `jinahub+docker://`)\n * a docker image (must start with `docker://`)\n * the string literal of a YAML config (must start with `!` or `jtype: `)\n * the string literal of a JSON config\n\n When use it under Python, one can use the following values additionally:\n - a Python dict that represents the config\n - a text file stream has `.read()` interface\n ", "name": "uses", "option_strings": ["--uses"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "\n Dictionary of keyword arguments that will override the `with` configuration in `uses`\n ", "name": "uses_with", "option_strings": ["--uses-with"], "required": false, "type": "dict"}, {"choices": null, "default": null, "default_random": false, "help": "\n Dictionary of keyword arguments that will override the `metas` configuration in `uses`\n ", "name": "uses_metas", "option_strings": ["--uses-metas"], "required": false, "type": "dict"}, {"choices": null, "default": null, "default_random": false, "help": "\n Dictionary of keyword arguments that will override the `requests` configuration in `uses`\n ", "name": "uses_requests", "option_strings": ["--uses-requests"], "required": false, "type": "dict"}, {"choices": null, "default": null, "default_random": false, "help": "\nThe customized python modules need to be imported before loading the executor\n\nNote that the recommended way is to only import a single module - a simple python file, if your\nexecutor can be defined in a single file, or an ``__init__.py`` file if you have multiple files,\nwhich should be structured as a python package. 
For more details, please see the\n`Executor cookbook <https://docs.jina.ai/fundamentals/executor/executor-files/>`__\n", "name": "py_modules", "option_strings": ["--py-modules"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": 63311, "default_factory": "random_port", "default_random": true, "help": "The port for input data to bind to, default a random port between [49152, 65535]", "name": "port", "option_strings": ["--port-in"], "required": false, "type": "int"}, {"choices": null, "default": "0.0.0.0", "default_random": false, "help": "The host address for binding to, by default it is 0.0.0.0", "name": "host_in", "option_strings": ["--host-in"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, only native Executors is allowed, and the Executor is always run inside WorkerRuntime.", "name": "native", "option_strings": ["--native"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "\nThe type of array `tensor` and `embedding` will be serialized to.\n\nSupports the same types as `docarray.to_protobuf(.., ndarray_type=...)`, which can be found \n`here <https://docarray.jina.ai/fundamentals/document/serialization/#from-to-protobuf>`.\nDefaults to retaining whatever type is returned by the Executor.\n", "name": "output_array_type", "option_strings": ["--output-array-type"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "Dictionary of kwargs arguments that will be passed to the grpc server as options when starting the server, example : {'grpc.max_send_message_length': -1}", "name": "grpc_server_options", "option_strings": ["--grpc-server-options"], "required": false, "type": "dict"}, {"choices": null, "default": [], "default_random": false, "help": "List of exceptions that will cause the Executor to shut down.", "name": "exit_on_exceptions", "option_strings": ["--exit-on-exceptions"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": null, "default_random": false, "help": "The entrypoint command overrides the ENTRYPOINT in Docker image. when not set then the Docker image ENTRYPOINT takes effective.", "name": "entrypoint", "option_strings": ["--entrypoint"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "\nDictionary of kwargs arguments that will be passed to Docker SDK when starting the docker '\ncontainer. \n\nMore details can be found in the Docker SDK docs: https://docker-py.readthedocs.io/en/stable/\n\n", "name": "docker_kwargs", "option_strings": ["--docker-kwargs"], "required": false, "type": "dict"}, {"choices": null, "default": null, "default_random": false, "help": "\nThe path on the host to be mounted inside the container. \n\nNote, \n- If separated by `:`, then the first part will be considered as the local host path and the second part is the path in the container system. \n- If no split provided, then the basename of that directory will be mounted into container's root path, e.g. `--volumes=\"/user/test/my-workspace\"` will be mounted into `/my-workspace` inside the container. 
\n- All volumes are mounted with read-write mode.\n", "name": "volumes", "option_strings": ["--volumes"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": null, "default_random": false, "help": "\n This argument allows dockerized Jina executor discover local gpu devices.\n\n Note, \n - To access all gpus, use `--gpus all`.\n - To access multiple gpus, e.g. make use of 2 gpus, use `--gpus 2`.\n - To access specified gpus based on device id, use `--gpus device=[YOUR-GPU-DEVICE-ID]`\n - To access specified gpus based on multiple device id, use `--gpus device=[YOUR-GPU-DEVICE-ID1],device=[YOUR-GPU-DEVICE-ID2]`\n - To specify more parameters, use `--gpus device=[YOUR-GPU-DEVICE-ID],runtime=nvidia,capabilities=display\n ", "name": "gpus", "option_strings": ["--gpus"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "Do not automatically mount a volume for dockerized Executors.", "name": "disable_auto_volume", "option_strings": ["--disable-auto-volume"], "required": false, "type": "bool"}, {"choices": null, "default": "0.0.0.0", "default_random": false, "help": "The host address of the runtime, by default it is 0.0.0.0. In the case of an external Executor (`--external` or `external=True`) this can be a list of hosts, separated by commas. Then, every resulting address will be considered as one replica of the Executor.", "name": "host", "option_strings": ["--host"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "Do not display the streaming of remote logs on local console", "name": "quiet_remote_logs", "option_strings": ["--quiet-remote-logs"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "\nThe files on the host to be uploaded to the remote\nworkspace. This can be useful when your Deployment has more\nfile dependencies beyond a single YAML file, e.g.\nPython files, data files.\n\nNote,\n- currently only flatten structure is supported, which means if you upload `[./foo/a.py, ./foo/b.pp, ./bar/c.yml]`, then they will be put under the _same_ workspace on the remote, losing all hierarchies.\n- by default, `--uses` YAML file is always uploaded.\n- uploaded files are by default isolated across the runs. To ensure files are submitted to the same workspace across different runs, use `--workspace-id` to specify the workspace.\n", "name": "upload_files", "option_strings": ["--upload-files"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": "WorkerRuntime", "default_random": false, "help": "The runtime class to run inside the Pod", "name": "runtime_cls", "option_strings": ["--runtime-cls"], "required": false, "type": "str"}, {"choices": null, "default": 600000, "default_random": false, "help": "The timeout in milliseconds of a Pod waits for the runtime to be ready, -1 for waiting forever", "name": "timeout_ready", "option_strings": ["--timeout-ready"], "required": false, "type": "int"}, {"choices": null, "default": null, "default_random": false, "help": "The map of environment variables that are available inside runtime", "name": "env", "option_strings": ["--env"], "required": false, "type": "dict"}, {"choices": null, "default": 1, "default_random": false, "help": "The number of shards in the deployment running at the same time. 
For more details check https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies", "name": "shards", "option_strings": ["--shards"], "required": false, "type": "int"}, {"choices": null, "default": 1, "default_random": false, "help": "The number of replicas in the deployment", "name": "replicas", "option_strings": ["--replicas"], "required": false, "type": "int"}, {"choices": null, "default": "49602", "default_factory": "random_identity", "default_random": true, "help": "The port for input data to bind to, default is a random port between [49152, 65535]. In the case of an external Executor (`--external` or `external=True`) this can be a list of ports, separated by commas. Then, every resulting address will be considered as one replica of the Executor.", "name": "port", "option_strings": ["--port"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, spawn an http server with a prometheus endpoint to expose metrics", "name": "monitoring", "option_strings": ["--monitoring"], "required": false, "type": "bool"}, {"choices": null, "default": "61360", "default_factory": "random_identity", "default_random": true, "help": "The port on which the prometheus server is exposed, default is a random port between [49152, 65535]", "name": "port_monitoring", "option_strings": ["--port-monitoring"], "required": false, "type": "str"}, {"choices": null, "default": -1, "default_random": false, "help": "Number of retries per gRPC call. If <0 it defaults to max(3, num_replicas)", "name": "retries", "option_strings": ["--retries"], "required": false, "type": "int"}, {"choices": null, "default": false, "default_random": false, "help": "If set, the current Pod/Deployment can not be further chained, and the next `.add()` will chain after the last Pod/Deployment not this current one.", "name": "floating", "option_strings": ["--floating"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, the sdk implementation of the OpenTelemetry tracer will be available and will be enabled for automatic tracing of requests and customer span creation. Otherwise a no-op implementation will be provided.", "name": "tracing", "option_strings": ["--tracing"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this hostname will be used to configure the trace exporter agent.", "name": "traces_exporter_host", "option_strings": ["--traces-exporter-host"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this port will be used to configure the trace exporter agent.", "name": "traces_exporter_port", "option_strings": ["--traces-exporter-port"], "required": false, "type": "int"}, {"choices": null, "default": false, "default_random": false, "help": "If set, the sdk implementation of the OpenTelemetry metrics will be available for default monitoring and custom measurements. 
Otherwise a no-op implementation will be provided.", "name": "metrics", "option_strings": ["--metrics"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this hostname will be used to configure the metrics exporter agent.", "name": "metrics_exporter_host", "option_strings": ["--metrics-exporter-host"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this port will be used to configure the metrics exporter agent.", "name": "metrics_exporter_port", "option_strings": ["--metrics-exporter-port"], "required": false, "type": "int"}, {"choices": null, "default": false, "default_random": false, "help": "If set, install the `requirements.txt` in the Hub Executor bundle locally", "name": "install_requirements", "option_strings": ["--install-requirements"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, always pull the latest Hub Executor bundle even if it already exists locally", "name": "force_update", "option_strings": ["--force-update", "--force"], "required": false, "type": "bool"}, {"choices": ["NoCompression", "Deflate", "Gzip"], "default": null, "default_random": false, "help": "The compression mechanism used when sending requests from the Head to the WorkerRuntimes. For more details, check https://grpc.github.io/grpc/python/grpc.html#compression.", "name": "compression", "option_strings": ["--compression"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The address of the uses-before runtime", "name": "uses_before_address", "option_strings": ["--uses-before-address"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The address of the uses-after runtime", "name": "uses_after_address", "option_strings": ["--uses-after-address"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "JSON dictionary with a list of connections to configure", "name": "connection_list", "option_strings": ["--connection-list"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "Disable the built-in reduce mechanism; set this if the reduction is to be handled by the Executor connected to this Head", "name": "disable_reduce", "option_strings": ["--disable-reduce"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "The timeout in milliseconds used when sending data requests to Executors, -1 means no timeout, disabled by default", "name": "timeout_send", "option_strings": ["--timeout-send"], "required": false, "type": "int"}]}, {"help": "Start a Deployment. You should rarely use this directly unless you are doing low-level orchestration", "name": "deployment", "options": [{"choices": null, "default": null, "default_random": false, "help": "\n The name of this object.\n\n This will be used in the following places:\n - how you refer to this object in Python/YAML/CLI\n - visualization\n - log message header\n - ...\n\n When not given, then the default naming strategy will apply.\n ", "name": "name", "option_strings": ["--name"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The working directory for any IO operations in this object. 
If not set, then derive from its parent `workspace`.", "name": "workspace", "option_strings": ["--workspace"], "required": false, "type": "str"}, {"choices": null, "default": "default", "default_random": false, "help": "The YAML config of the logger used in this object.", "name": "log_config", "option_strings": ["--log-config"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, then no log will be emitted from this object.", "name": "quiet", "option_strings": ["--quiet"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, then exception stack information will not be added to the log", "name": "quiet_error", "option_strings": ["--quiet-error"], "required": false, "type": "bool"}, {"choices": null, "default": 60, "default_random": false, "help": "The timeout in milliseconds of the control request, -1 for waiting forever", "name": "timeout_ctrl", "option_strings": ["--timeout-ctrl"], "required": false, "type": "int"}, {"choices": null, "default": "ANY", "default_random": false, "help": "\n The polling strategy of the Deployment and its endpoints (when `shards>1`).\n Can be defined for all endpoints of a Deployment or by endpoint.\n Define per Deployment:\n - ANY: only one (whoever is idle) Pod polls the message\n - ALL: all Pods poll the message (like a broadcast)\n Define per Endpoint:\n JSON dict, {endpoint: PollingType}\n {'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'}\n \n ", "name": "polling", "option_strings": ["--polling"], "required": false, "type": "str"}, {"choices": null, "default": "BaseExecutor", "default_random": false, "help": "\n The config of the executor, it could be one of the followings:\n * the string literal of an Executor class name\n * an Executor YAML file (.yml, .yaml, .jaml)\n * a Jina Hub Executor (must start with `jinahub://` or `jinahub+docker://`)\n * a docker image (must start with `docker://`)\n * the string literal of a YAML config (must start with `!` or `jtype: `)\n * the string literal of a JSON config\n\n When use it under Python, one can use the following values additionally:\n - a Python dict that represents the config\n - a text file stream has `.read()` interface\n ", "name": "uses", "option_strings": ["--uses"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "\n Dictionary of keyword arguments that will override the `with` configuration in `uses`\n ", "name": "uses_with", "option_strings": ["--uses-with"], "required": false, "type": "dict"}, {"choices": null, "default": null, "default_random": false, "help": "\n Dictionary of keyword arguments that will override the `metas` configuration in `uses`\n ", "name": "uses_metas", "option_strings": ["--uses-metas"], "required": false, "type": "dict"}, {"choices": null, "default": null, "default_random": false, "help": "\n Dictionary of keyword arguments that will override the `requests` configuration in `uses`\n ", "name": "uses_requests", "option_strings": ["--uses-requests"], "required": false, "type": "dict"}, {"choices": null, "default": null, "default_random": false, "help": "\nThe customized python modules need to be imported before loading the executor\n\nNote that the recommended way is to only import a single module - a simple python file, if your\nexecutor can be defined in a single file, or an ``__init__.py`` file if you have multiple files,\nwhich should be structured as a python package. 
For more details, please see the\n`Executor cookbook <https://docs.jina.ai/fundamentals/executor/executor-files/>`__\n", "name": "py_modules", "option_strings": ["--py-modules"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": 63268, "default_factory": "random_port", "default_random": true, "help": "The port for input data to bind to, default a random port between [49152, 65535]", "name": "port", "option_strings": ["--port-in"], "required": false, "type": "int"}, {"choices": null, "default": "0.0.0.0", "default_random": false, "help": "The host address for binding to, by default it is 0.0.0.0", "name": "host_in", "option_strings": ["--host-in"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, only native Executors is allowed, and the Executor is always run inside WorkerRuntime.", "name": "native", "option_strings": ["--native"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "\nThe type of array `tensor` and `embedding` will be serialized to.\n\nSupports the same types as `docarray.to_protobuf(.., ndarray_type=...)`, which can be found \n`here <https://docarray.jina.ai/fundamentals/document/serialization/#from-to-protobuf>`.\nDefaults to retaining whatever type is returned by the Executor.\n", "name": "output_array_type", "option_strings": ["--output-array-type"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "Dictionary of kwargs arguments that will be passed to the grpc server as options when starting the server, example : {'grpc.max_send_message_length': -1}", "name": "grpc_server_options", "option_strings": ["--grpc-server-options"], "required": false, "type": "dict"}, {"choices": null, "default": [], "default_random": false, "help": "List of exceptions that will cause the Executor to shut down.", "name": "exit_on_exceptions", "option_strings": ["--exit-on-exceptions"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": null, "default_random": false, "help": "The entrypoint command overrides the ENTRYPOINT in Docker image. when not set then the Docker image ENTRYPOINT takes effective.", "name": "entrypoint", "option_strings": ["--entrypoint"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "\nDictionary of kwargs arguments that will be passed to Docker SDK when starting the docker '\ncontainer. \n\nMore details can be found in the Docker SDK docs: https://docker-py.readthedocs.io/en/stable/\n\n", "name": "docker_kwargs", "option_strings": ["--docker-kwargs"], "required": false, "type": "dict"}, {"choices": null, "default": null, "default_random": false, "help": "\nThe path on the host to be mounted inside the container. \n\nNote, \n- If separated by `:`, then the first part will be considered as the local host path and the second part is the path in the container system. \n- If no split provided, then the basename of that directory will be mounted into container's root path, e.g. `--volumes=\"/user/test/my-workspace\"` will be mounted into `/my-workspace` inside the container. 
\n- All volumes are mounted with read-write mode.\n", "name": "volumes", "option_strings": ["--volumes"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": null, "default_random": false, "help": "\n This argument allows dockerized Jina executor discover local gpu devices.\n\n Note, \n - To access all gpus, use `--gpus all`.\n - To access multiple gpus, e.g. make use of 2 gpus, use `--gpus 2`.\n - To access specified gpus based on device id, use `--gpus device=[YOUR-GPU-DEVICE-ID]`\n - To access specified gpus based on multiple device id, use `--gpus device=[YOUR-GPU-DEVICE-ID1],device=[YOUR-GPU-DEVICE-ID2]`\n - To specify more parameters, use `--gpus device=[YOUR-GPU-DEVICE-ID],runtime=nvidia,capabilities=display\n ", "name": "gpus", "option_strings": ["--gpus"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "Do not automatically mount a volume for dockerized Executors.", "name": "disable_auto_volume", "option_strings": ["--disable-auto-volume"], "required": false, "type": "bool"}, {"choices": null, "default": "0.0.0.0", "default_random": false, "help": "The host address of the runtime, by default it is 0.0.0.0. In the case of an external Executor (`--external` or `external=True`) this can be a list of hosts, separated by commas. Then, every resulting address will be considered as one replica of the Executor.", "name": "host", "option_strings": ["--host"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "Do not display the streaming of remote logs on local console", "name": "quiet_remote_logs", "option_strings": ["--quiet-remote-logs"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "\nThe files on the host to be uploaded to the remote\nworkspace. This can be useful when your Deployment has more\nfile dependencies beyond a single YAML file, e.g.\nPython files, data files.\n\nNote,\n- currently only flatten structure is supported, which means if you upload `[./foo/a.py, ./foo/b.pp, ./bar/c.yml]`, then they will be put under the _same_ workspace on the remote, losing all hierarchies.\n- by default, `--uses` YAML file is always uploaded.\n- uploaded files are by default isolated across the runs. To ensure files are submitted to the same workspace across different runs, use `--workspace-id` to specify the workspace.\n", "name": "upload_files", "option_strings": ["--upload-files"], "required": false, "type": "typing.List[str]"}, {"choices": null, "default": "WorkerRuntime", "default_random": false, "help": "The runtime class to run inside the Pod", "name": "runtime_cls", "option_strings": ["--runtime-cls"], "required": false, "type": "str"}, {"choices": null, "default": 600000, "default_random": false, "help": "The timeout in milliseconds of a Pod waits for the runtime to be ready, -1 for waiting forever", "name": "timeout_ready", "option_strings": ["--timeout-ready"], "required": false, "type": "int"}, {"choices": null, "default": null, "default_random": false, "help": "The map of environment variables that are available inside runtime", "name": "env", "option_strings": ["--env"], "required": false, "type": "dict"}, {"choices": null, "default": 1, "default_random": false, "help": "The number of shards in the deployment running at the same time. 
For more details check https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies", "name": "shards", "option_strings": ["--shards"], "required": false, "type": "int"}, {"choices": null, "default": 1, "default_random": false, "help": "The number of replicas in the deployment", "name": "replicas", "option_strings": ["--replicas"], "required": false, "type": "int"}, {"choices": null, "default": "63775", "default_factory": "random_identity", "default_random": true, "help": "The port for input data to bind to, default is a random port between [49152, 65535]. In the case of an external Executor (`--external` or `external=True`) this can be a list of ports, separated by commas. Then, every resulting address will be considered as one replica of the Executor.", "name": "port", "option_strings": ["--port"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, spawn an http server with a prometheus endpoint to expose metrics", "name": "monitoring", "option_strings": ["--monitoring"], "required": false, "type": "bool"}, {"choices": null, "default": "63438", "default_factory": "random_identity", "default_random": true, "help": "The port on which the prometheus server is exposed, default is a random port between [49152, 65535]", "name": "port_monitoring", "option_strings": ["--port-monitoring"], "required": false, "type": "str"}, {"choices": null, "default": -1, "default_random": false, "help": "Number of retries per gRPC call. If <0 it defaults to max(3, num_replicas)", "name": "retries", "option_strings": ["--retries"], "required": false, "type": "int"}, {"choices": null, "default": false, "default_random": false, "help": "If set, the current Pod/Deployment can not be further chained, and the next `.add()` will chain after the last Pod/Deployment not this current one.", "name": "floating", "option_strings": ["--floating"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, the sdk implementation of the OpenTelemetry tracer will be available and will be enabled for automatic tracing of requests and customer span creation. Otherwise a no-op implementation will be provided.", "name": "tracing", "option_strings": ["--tracing"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this hostname will be used to configure the trace exporter agent.", "name": "traces_exporter_host", "option_strings": ["--traces-exporter-host"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this port will be used to configure the trace exporter agent.", "name": "traces_exporter_port", "option_strings": ["--traces-exporter-port"], "required": false, "type": "int"}, {"choices": null, "default": false, "default_random": false, "help": "If set, the sdk implementation of the OpenTelemetry metrics will be available for default monitoring and custom measurements. 
Otherwise a no-op implementation will be provided.", "name": "metrics", "option_strings": ["--metrics"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this hostname will be used to configure the metrics exporter agent.", "name": "metrics_exporter_host", "option_strings": ["--metrics-exporter-host"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this port will be used to configure the metrics exporter agent.", "name": "metrics_exporter_port", "option_strings": ["--metrics-exporter-port"], "required": false, "type": "int"}, {"choices": null, "default": false, "default_random": false, "help": "If set, install the `requirements.txt` in the Hub Executor bundle locally", "name": "install_requirements", "option_strings": ["--install-requirements"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, always pull the latest Hub Executor bundle even if it already exists locally", "name": "force_update", "option_strings": ["--force-update", "--force"], "required": false, "type": "bool"}, {"choices": ["NoCompression", "Deflate", "Gzip"], "default": null, "default_random": false, "help": "The compression mechanism used when sending requests from the Head to the WorkerRuntimes. For more details, check https://grpc.github.io/grpc/python/grpc.html#compression.", "name": "compression", "option_strings": ["--compression"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The address of the uses-before runtime", "name": "uses_before_address", "option_strings": ["--uses-before-address"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The address of the uses-after runtime", "name": "uses_after_address", "option_strings": ["--uses-after-address"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "JSON dictionary with a list of connections to configure", "name": "connection_list", "option_strings": ["--connection-list"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "Disable the built-in reduce mechanism; set this if the reduction is to be handled by the Executor connected to this Head", "name": "disable_reduce", "option_strings": ["--disable-reduce"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "The timeout in milliseconds used when sending data requests to Executors, -1 means no timeout, disabled by default", "name": "timeout_send", "option_strings": ["--timeout-send"], "required": false, "type": "int"}, {"choices": null, "default": null, "default_random": false, "help": "The executor attached before the Pods described by --uses, typically before sending to all shards, accepted type follows `--uses`. This argument only applies for sharded Deployments (shards > 1).", "name": "uses_before", "option_strings": ["--uses-before"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The executor attached after the Pods described by --uses, typically used for receiving from all shards, accepted type follows `--uses`. 
This argument only applies for sharded Deployments (shards > 1).", "name": "uses_after", "option_strings": ["--uses-after"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "The condition that the documents need to fulfill before reaching the Executor. The condition can be defined in the form of a `DocArray query condition <https://docarray.jina.ai/fundamentals/documentarray/find/#query-by-conditions>`", "name": "when", "option_strings": ["--when"], "required": false, "type": "dict"}, {"choices": null, "default": false, "default_random": false, "help": "The Deployment will be considered an external Deployment that has been started independently from the Flow. This Deployment will not be context managed by the Flow.", "name": "external", "option_strings": ["--external"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "The metadata to be passed to the gRPC request.", "name": "grpc_metadata", "option_strings": ["--grpc-metadata"], "required": false, "type": "dict"}, {"choices": null, "default": false, "default_random": false, "help": "If set, connect to deployment using tls encryption", "name": "tls", "option_strings": ["--tls"], "required": false, "type": "bool"}]}, {"help": "Start a Python client that connects to a Jina Gateway", "name": "client", "options": [{"choices": null, "default": "0.0.0.0", "default_random": false, "help": "The host address of the runtime, by default it is 0.0.0.0. In the case of an external Executor (`--external` or `external=True`) this can be a list of hosts, separated by commas. Then, every resulting address will be considered as one replica of the Executor.", "name": "host", "option_strings": ["--host"], "required": false, "type": "str"}, {"choices": null, "default": false, "default_random": false, "help": "If set, respect the http_proxy and https_proxy environment variables. Otherwise, it will unset these proxy variables before starting. gRPC seems to prefer no proxy", "name": "proxy", "option_strings": ["--proxy"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "The port of the Gateway, which the client should connect to.", "name": "port", "option_strings": ["--port"], "required": false, "type": "int"}, {"choices": null, "default": false, "default_random": false, "help": "If set, connect to gateway using tls encryption", "name": "tls", "option_strings": ["--tls"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, then the input and output of this Client work in an asynchronous manner. ", "name": "asyncio", "option_strings": ["--asyncio"], "required": false, "type": "bool"}, {"choices": null, "default": false, "default_random": false, "help": "If set, the sdk implementation of the OpenTelemetry tracer will be available and will be enabled for automatic tracing of requests and customer span creation. 
Otherwise a no-op implementation will be provided.", "name": "tracing", "option_strings": ["--tracing"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this hostname will be used to configure the trace exporter agent.", "name": "traces_exporter_host", "option_strings": ["--traces-exporter-host"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this port will be used to configure the trace exporter agent.", "name": "traces_exporter_port", "option_strings": ["--traces-exporter-port"], "required": false, "type": "int"}, {"choices": null, "default": false, "default_random": false, "help": "If set, the sdk implementation of the OpenTelemetry metrics will be available for default monitoring and custom measurements. Otherwise a no-op implementation will be provided.", "name": "metrics", "option_strings": ["--metrics"], "required": false, "type": "bool"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this hostname will be used to configure the metrics exporter agent.", "name": "metrics_exporter_host", "option_strings": ["--metrics-exporter-host"], "required": false, "type": "str"}, {"choices": null, "default": null, "default_random": false, "help": "If tracing is enabled, this port will be used to configure the metrics exporter agent.", "name": "metrics_exporter_port", "option_strings": ["--metrics-exporter-port"], "required": false, "type": "int"}, {"choices": ["GRPC", "HTTP", "WEBSOCKET"], "default": "GRPC", "default_random": false, "help": "Communication protocol between server and client.", "name": "protocol", "option_strings": ["--protocol"], "required": false, "type": "str"}]}], "name": "Jina", "revision": null, "source": "https://github.com/jina-ai/jina/tree/master", "url": "https://jina.ai", "vendor": "Jina AI Limited", "version": "3.10.2"}
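
The `deployment` and `client` options catalogued in this manifest map onto keyword arguments of Jina's Python API. Below is a minimal, hedged sketch of that mapping, assuming Jina 3.x is installed; the executor class, port, and replica count are placeholder values chosen for illustration, not defaults taken from this file.

from jina import Client, Document, Executor, Flow, requests

class MyExec(Executor):
    # A trivial Executor, defined here only to keep the sketch self-contained.
    @requests
    def foo(self, docs, **kwargs):
        for d in docs:
            d.text = 'processed'

# Roughly equivalent to: jina deployment --uses MyExec --replicas 2 --port 54321
f = Flow(port=54321).add(uses=MyExec, replicas=2)

with f:
    # Roughly equivalent to: jina client --host 0.0.0.0 --port 54321 --protocol GRPC
    c = Client(host='0.0.0.0', port=54321, protocol='grpc')
    print(c.post('/', Document(text='hi'))[:, 'text'])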