The following is an index of all the possible environment variables that may be used to configure the VDI service stack.
This example is the minimal configuration necessary to spin up the service.
Important: This configuration does not include the database configuration for any application databases or plugin handler service configurations.
DATASET_DIRECTORY_SOURCE_PATH=/var/www/Common/userDatasets/vdi_datasets_feat_s/
DATASET_DIRECTORY_TARGET_PATH=/datasets
AUTH_SECRET_KEY=
ADMIN_AUTH_TOKEN=
LDAP_SERVER=
ORACLE_BASE_DN=ou=applications,dc=apidb,dc=org
USER_DB_TNS_NAME=apicommn
USER_DB_USER=
USER_DB_PASS=
USER_DB_POOL_SIZE=5
GLOBAL_RABBIT_USERNAME=someUser
GLOBAL_RABBIT_PASSWORD=somePassword
GLOBAL_RABBIT_HOST=rabbit-external
GLOBAL_RABBIT_VDI_EXCHANGE_NAME=vdi-bucket-notifications
GLOBAL_RABBIT_VDI_QUEUE_NAME=vdi-bucket-notifications
GLOBAL_RABBIT_VDI_ROUTING_KEY=vdi-bucket-notifications
KAFKA_SERVERS=kafka:9092
KAFKA_PRODUCER_CLIENT_ID=vdi-event-router
KAFKA_CONSUMER_GROUP_ID=vdi-kafka-consumers
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
S3_HOST=minio-external
S3_PORT=9000
S3_USE_HTTPS=true
S3_ACCESS_TOKEN=someToken
S3_SECRET_KEY=someSecretKey
S3_BUCKET_NAME=some-other-bucket
CACHE_DB_USERNAME=someUser
CACHE_DB_PASSWORD=somePassword
CACHE_DB_NAME=vdi
CACHE_DB_HOST=cache-db
SITE_BUILD=build-65
PLUGIN_HANDLER_NOOP_NAME=noop
PLUGIN_HANDLER_NOOP_DISPLAY_NAME="Example Plugin"
PLUGIN_HANDLER_NOOP_VERSION=1.0
PLUGIN_HANDLER_NOOP_ADDRESS=plugin-example:80
DB_CONNECTION_ENABLED_{SOME_PROJECT}=true
DB_CONNECTION_NAME_{SOME_PROJECT}=ProjectDB
DB_CONNECTION_LDAP_{SOME_PROJECT}=dbTnsName
DB_CONNECTION_PASS_{SOME_PROJECT}=someDBPass
DB_CONNECTION_DATA_SCHEMA_{SOME_PROJECT}=vdi_datasets_dev_n
DB_CONNECTION_CONTROL_SCHEMA_{SOME_PROJECT}=vdi_control_dev_n
| Req. | Name | Type | Description |
|---|---|---|---|
| | | | Port exposed and used by the VDI REST API service. |
| ❗ | | | |
| ❗ | | | |
| ❗ | | | Secret key value used to decode and validate WDK user tokens for user authentication. |
| ❗ | | | Auth token value used to authenticate requests to administration endpoints. |
| | | | Enable cross origin requests (used for development). |
| | | | Max file size allowed for a single upload in bytes. |
| | | | Quota cap for an individual user’s total uploads in bytes. |
| Req. | Name | Type | Description |
|---|---|---|---|
| | | | Number of workers to use while processing hard-delete events. |
| | | | Size the worker pool job queue is allowed to fill to before blocking. |
| | | | Kafka client ID for the hard-delete event handler. THIS VALUE MUST BE UNIQUE ACROSS ALL KAFKA CLIENT IDS. |
| Req. | Name | Type | Description |
|---|---|---|---|
| | | | Number of workers to use while processing import events. |
| | | | Size the worker pool job queue is allowed to fill to before blocking. |
| | | | Kafka client ID for the import event handler. THIS VALUE MUST BE UNIQUE ACROSS ALL KAFKA CLIENT IDS. |
| Req. | Name | Type | Description |
|---|---|---|---|
| | | | Number of workers to use while processing install-data events. |
| | | | Size the worker pool job queue is allowed to fill to before blocking. |
| | | | Kafka client ID for the install-data event handler. THIS VALUE MUST BE UNIQUE ACROSS ALL KAFKA CLIENT IDS. |
| Req. | Name | Type | Description |
|---|---|---|---|
| | | | Age at which a soft-deleted dataset becomes a candidate for pruning from the VDI system. |
| | | | Frequency at which the pruner will run automatically. |
| | | | Frequency at which the pruner module will wake up and check for a service shutdown signal. |
| Req. | Name | Type | Description |
|---|---|---|---|
| | | | Whether the full dataset reconciliation process is enabled. |
| | | | Interval at which the full reconciliation process will run. |
| | | | Interval at which the slim reconciliation process will run. |
| | | | Whether the reconciler should perform delete operations. |
| Req. | Name | Type | Description |
|---|---|---|---|
| | | | Number of workers to use while processing reconciliation events. |
| | | | Size the worker pool job queue is allowed to fill to before blocking. |
| | | | Kafka client ID for the reconciliation event handler. THIS VALUE MUST BE UNIQUE ACROSS ALL KAFKA CLIENT IDS. |
| Req. | Name | Type | Description |
|---|---|---|---|
| | | | Number of workers to use while processing share events. |
| | | | Size the worker pool job queue is allowed to fill to before blocking. |
| | | | Kafka client ID for the share event handler. THIS VALUE MUST BE UNIQUE ACROSS ALL KAFKA CLIENT IDS. |
| Req. | Name | Type | Description |
|---|---|---|---|
| | | | Number of workers to use while processing soft-delete events. |
| | | | Size the worker pool job queue is allowed to fill to before blocking. |
| | | | Kafka client ID for the soft-delete event handler. THIS VALUE MUST BE UNIQUE ACROSS ALL KAFKA CLIENT IDS. |
| Req. | Name | Type | Description |
|---|---|---|---|
| | | | Number of workers to use while processing update-meta events. |
| | | | Size the worker pool job queue is allowed to fill to before blocking. |
| | | | Kafka client ID for the update-meta event handler. THIS VALUE MUST BE UNIQUE ACROSS ALL KAFKA CLIENT IDS. |
| Req. | Name | Type | Description |
|---|---|---|---|
| ❗ | | | Hostname of the cache db instance. |
| | | | Port number for the cache db instance. |
| ❗ | | | Name of the postgres database in the cache db instance to use. |
| ❗ | | | Database credentials username. |
| ❗ | | | Database credentials password. |
| | | | Database connection pool size. |
| Req. | Name | Type | Description |
|---|---|---|---|
| ❗ | | | Kafka server(s) to connect to for publishing and consuming message topics. |
Kafka consumer client tuning and configuration.
| Req. | Name | Type | Description |
|---|---|---|---|
| | | | The frequency that the consumer offsets are auto-committed to Kafka if enable.auto.commit is set to true. |
| | | | What to do when there is no initial offset in Kafka, or if the current offset does not exist anymore on the server. |
| | | | Close idle connections after this duration. |
| | | | Specifies the timeout for client APIs. This configuration is used as the default timeout for all client operations that do not specify a timeout parameter. |
| | | | If true, the consumer's offset will be periodically committed in the background. |
| | | | The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. Note that the consumer performs multiple fetches in parallel. |
| | | | The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. |
| ❗ | | | A unique string that identifies the consumer group this consumer belongs to. |
| | | | A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability (e.g. process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior. |
| | | | The expected time between heartbeats to the consumer coordinator when using Kafka’s group management facilities. Heartbeats are used to ensure that the consumer’s session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. |
| | | | The maximum delay between invocations of poll() when using consumer group management. |
| | | | The maximum number of records returned in a single call to poll(). |
| | | | The amount of time to block waiting for input. |
| | | | The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. |
| | | | The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. |
| | | | The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. |
| | | | The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. |
| | | | The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. |
| | | | The size of the TCP send buffer (SO_SNDBUF) to use when sending data. |
| | | | The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms. |
Kafka message producer client tuning and configuration.
| Req. | Name | Type | Description |
|---|---|---|---|
| | | | The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. No attempt will be made to batch records larger than this size. Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records. Note: this setting gives the upper bound of the batch size to be sent. If we have fewer than this many bytes accumulated for this partition, we will 'linger' for the linger.ms time waiting for more records to show up. |
| | | | The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will block for max.block.ms, after which it will throw an exception. This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests. |
| ❗ | | | An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. |
| | | | The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid values are none, gzip, snappy, lz4, and zstd. |
| | | | Close idle connections after the number of milliseconds specified by this config. |
| | | | An upper bound on the time to report success or failure after a call to send() returns. |
| | | | The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However, in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay; that is, rather than immediately sending out a record, the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle’s algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting; however, if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. |
| | | | The configuration controls how long KafkaProducer.send() and KafkaProducer.partitionsFor() will block. |
| | | | The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. This is also effectively a cap on the maximum uncompressed record batch size. Note that the server has its own cap on the record batch size (after compression if compression is enabled) which may be different from this. |
| | | | The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. |
| | | | The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. |
| | | | The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. |
| | | | The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. This should be larger than replica.lag.time.max.ms (a broker configuration) to reduce the possibility of message duplication due to unnecessary producer retries. |
| | | | The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. |
| | | | The size of the TCP send buffer (SO_SNDBUF) to use when sending data. |
| | | | Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires first before successful acknowledgement. Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior. Enabling idempotence requires this config value to be greater than 0. |
Names of the topics that various trigger events will be published to.
| Req. | Name | Type | Description |
|---|---|---|---|
| | | | Name of the hard-delete trigger topic that messages will be routed to for object hard-delete events from MinIO. A hard-delete event is the removal of a VDI dataset object in MinIO. Presently these events do not trigger any behavior in the VDI service. |
| | | | Name of the import trigger topic that messages will be routed to for import events from MinIO. An import event is the creation or overwriting of a user upload object in MinIO. These events will trigger a call to the plugin handler server to process the user upload to prepare it for installation. |
| | | | Name of the install-data trigger topic that messages will be routed to for data installation triggers from MinIO. An install-data event is the creation or overwriting of a VDI dataset data object in MinIO. These events will trigger a call to the plugin handler server to install the data that has just landed in MinIO. |
| | | | Name of the share trigger topic that messages will be routed to for share events from MinIO. A share event is the creation or overwriting of a "share" object in MinIO. These events will trigger an update to the share/visibility configuration for the target dataset. |
| | | | Name of the soft-delete trigger topic that messages will be routed to for soft-delete events from MinIO. A soft-delete event is the creation or overwriting of a soft-delete flag object in MinIO. These events will trigger a call to the plugin handler server to uninstall the data from the target application databases. |
| | | | Name of the update-meta trigger topic that messages will be routed to for metadata update events from MinIO. An update-meta event is the creation or overwriting of the dataset metadata object in MinIO. These events will trigger a call to the plugin handler server to install or update the metadata for the dataset in the target application databases. |
| | | | Name of the reconciliation trigger topic that messages will be routed to for events fired by the dataset reconciler. |
Names of the message key values that events will be keyed on when published to the various Kafka topics. Event messages that are not keyed on the appropriate value will be ignored by the VDI service.
| Req. | Name | Type | Description |
|---|---|---|---|
| | | | Message key for hard-delete trigger events. |
| | | | Message key for import trigger events. |
| | | | Message key for install-data trigger events. |
| | | | Message key for share trigger events. |
| | | | Message key for soft-delete trigger events. |
| | | | Message key for update-meta trigger events. |
| | | | Message key for reconciliation trigger events. |
| Req. | Name | Type | Description |
|---|---|---|---|
| | | | Optional name of the connection to the RabbitMQ service. This value will show in the RabbitMQ logs and in the management console to identify the VDI service’s connection. |
| ❗ | | | Hostname of the global RabbitMQ instance that the VDI service will connect to. |
| | | | Port to use when connecting to the global RabbitMQ instance. |
| ❗ | | | Credentials username used to authenticate with the global RabbitMQ instance. |
| ❗ | | | Credentials password used to authenticate with the global RabbitMQ instance. |
| | | | Frequency that the global RabbitMQ instance will be polled for new messages from MinIO. |
| | | | Whether the connection to the target RabbitMQ instance should use TLS. |
| | | | TCP connection timeout. |
| Req. | Name | Type | Description |
|---|---|---|---|
| ❗ | | | Name of the target RabbitMQ exchange that will be declared by both the MinIO instance and the VDI service. |
| | | | Exchange type as declared by the MinIO connection to the global RabbitMQ instance. |
| | | | Whether the exchange should be auto deleted when the connections from MinIO and the VDI service are closed. |
| | | | Whether the exchange should be durable (persisted to disk). This value must align with the exchange configuration as set by MinIO. |
| | | | Additional arguments to pass to the exchange declaration. |
| Req. | Name | Type | Description |
|---|---|---|---|
| ❗ | | | Name of the RabbitMQ queue to declare. This value must align with the queue name as configured in MinIO. |
| | | | Whether the queue should be auto deleted when the connections from MinIO and the VDI service are closed. |
| | | | Whether the queue should be exclusive to the VDI service. See: Exclusive Queues |
| | | | Whether the queue should be durable (persisted to disk). This value must align with the queue configuration as set by MinIO. |
| | | | Additional arguments to pass to the queue declaration. |
| Req. | Name | Type | Description |
|---|---|---|---|
| ❗ | | | MinIO hostname. |
| ❗ | | | MinIO connection port. |
| ❗ | | | Whether HTTPS should be used when connecting to the MinIO instance. |
| ❗ | | | Name of the MinIO bucket that will be used by the VDI service. |
| ❗ | | | MinIO username/access token to use when authenticating with the MinIO instance. |
| ❗ | | | MinIO password/secret key to use when authenticating with the MinIO instance. |
Environment variables used by all plugins.
| Req. | Name | Type | Description |
|---|---|---|---|
| ❗ | | | Site build number string (e.g. build-65). |
| ❗ | | | Mount path in the plugin containers for the dataset install directory tree. |
Registers a VDI plugin with the service.
PLUGIN_HANDLER_<NAME>_NAME
PLUGIN_HANDLER_<NAME>_DISPLAY_NAME
PLUGIN_HANDLER_<NAME>_VERSION
PLUGIN_HANDLER_<NAME>_ADDRESS
PLUGIN_HANDLER_<NAME>_PROJECT_IDS
PLUGIN_HANDLER_<NAME>_CUSTOM_PATH
PLUGIN_HANDLER_<NAME>_SERVER_PORT
PLUGIN_HANDLER_<NAME>_SERVER_HOST
PLUGIN_HANDLER_<NAME>_IMPORT_SCRIPT_PATH
PLUGIN_HANDLER_<NAME>_IMPORT_SCRIPT_MAX_DURATION
PLUGIN_HANDLER_<NAME>_CHECK_COMPAT_SCRIPT_PATH
PLUGIN_HANDLER_<NAME>_CHECK_COMPAT_SCRIPT_MAX_DURATION
PLUGIN_HANDLER_<NAME>_INSTALL_DATA_SCRIPT_PATH
PLUGIN_HANDLER_<NAME>_INSTALL_DATA_SCRIPT_MAX_DURATION
PLUGIN_HANDLER_<NAME>_INSTALL_META_SCRIPT_PATH
PLUGIN_HANDLER_<NAME>_INSTALL_META_SCRIPT_MAX_DURATION
PLUGIN_HANDLER_<NAME>_UNINSTALL_SCRIPT_PATH
PLUGIN_HANDLER_<NAME>_UNINSTALL_SCRIPT_MAX_DURATION
Unlike most of the other environment key values defined here, these values define components of wildcard environment keys which may be specified with any arbitrary <NAME> value between the defined prefix value and suffix options.

The environment variables set using the prefix and suffixes defined below must appear in groups that contain the indicated suffixes. For example, given the <NAME> value "RNASEQ", the following environment variables must be present:
PLUGIN_HANDLER_RNASEQ_NAME
PLUGIN_HANDLER_RNASEQ_DISPLAY_NAME
PLUGIN_HANDLER_RNASEQ_VERSION
PLUGIN_HANDLER_RNASEQ_ADDRESS
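For illustration only, a complete registration for the hypothetical "RNASEQ" plugin above might look like the following (mirroring the noop plugin in the minimal configuration example; all values shown here are placeholders, not real configuration):
PLUGIN_HANDLER_RNASEQ_NAME=rnaseq
PLUGIN_HANDLER_RNASEQ_DISPLAY_NAME="RNA-Seq Example"
PLUGIN_HANDLER_RNASEQ_VERSION=1.0
PLUGIN_HANDLER_RNASEQ_ADDRESS=plugin-rnaseq:80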
| Req. | Name | Type | Description |
|---|---|---|---|
| ❗ | PLUGIN_HANDLER_<NAME>_NAME | | Name of the plugin handler. This will typically be the type name of the dataset type that the plugin handles. |
| ❗ | PLUGIN_HANDLER_<NAME>_DISPLAY_NAME | | Display name for the plugin handler. This will be shown to the end users as the type of their datasets. |
| ❗ | PLUGIN_HANDLER_<NAME>_VERSION | | Version for the plugin handler. |
| ❗ | PLUGIN_HANDLER_<NAME>_ADDRESS | | Address and port of the plugin handler service. |
| | PLUGIN_HANDLER_<NAME>_PROJECT_IDS | | List of project IDs for which the plugin is relevant. If this value is omitted or set to a blank value, the plugin will be considered relevant to all projects. |
| | PLUGIN_HANDLER_<NAME>_CUSTOM_PATH | | Custom $PATH variable additions to pass to plugin scripts. |
| | PLUGIN_HANDLER_<NAME>_SERVER_PORT | | Port the plugin handler HTTP server will bind to. |
| | PLUGIN_HANDLER_<NAME>_SERVER_HOST | | Address the plugin handler HTTP server will bind to. |
| | PLUGIN_HANDLER_<NAME>_IMPORT_SCRIPT_PATH | | Path to the import script or binary in the plugin container. |
| | PLUGIN_HANDLER_<NAME>_IMPORT_SCRIPT_MAX_DURATION | | Max duration the import script will be permitted to run before being killed. |
| | PLUGIN_HANDLER_<NAME>_CHECK_COMPAT_SCRIPT_PATH | | Path to the compatibility check script or binary in the plugin container. |
| | PLUGIN_HANDLER_<NAME>_CHECK_COMPAT_SCRIPT_MAX_DURATION | | Max duration the compatibility check script will be permitted to run before being killed. |
| | PLUGIN_HANDLER_<NAME>_INSTALL_DATA_SCRIPT_PATH | | Path to the data install script or binary in the plugin container. |
| | PLUGIN_HANDLER_<NAME>_INSTALL_DATA_SCRIPT_MAX_DURATION | | Max duration the data install script will be permitted to run before being killed. |
| | PLUGIN_HANDLER_<NAME>_INSTALL_META_SCRIPT_PATH | | Path to the metadata install script or binary in the plugin container. |
| | PLUGIN_HANDLER_<NAME>_INSTALL_META_SCRIPT_MAX_DURATION | | Max duration the metadata install script will be permitted to run before being killed. |
| | PLUGIN_HANDLER_<NAME>_UNINSTALL_SCRIPT_PATH | | Path to the uninstall script or binary in the plugin container. |
| | PLUGIN_HANDLER_<NAME>_UNINSTALL_SCRIPT_MAX_DURATION | | Max duration the uninstall script will be permitted to run before being killed. |
DB_CONNECTION_ENABLED_<NAME>
DB_CONNECTION_NAME_<NAME>
DB_CONNECTION_USER_<NAME>
DB_CONNECTION_PASS_<NAME>
DB_CONNECTION_DATA_SCHEMA_<NAME>
DB_CONNECTION_CONTROL_SCHEMA_<NAME>
DB_CONNECTION_POOL_SIZE_<NAME>
# restrict to plugins
DB_CONNECTION_DATA_TYPES_<NAME>
# for LDAP
DB_CONNECTION_LDAP_<NAME>
# else, for manual connection
DB_CONNECTION_HOST_<NAME>
DB_CONNECTION_PORT_<NAME>
DB_CONNECTION_PLATFORM_<NAME>
Unlike most of the other environment key values defined here, these values define components of wildcard environment keys which may be specified with any arbitrary <NAME> value following the defined prefix option.

The environment variables set using the prefixes defined below must appear in groups that contain all prefixes. For example, given the <NAME> value "PLASMO", the following environment variables must all be present:
DB_CONNECTION_ENABLED_PLASMO
DB_CONNECTION_NAME_PLASMO
DB_CONNECTION_LDAP_PLASMO
DB_CONNECTION_USER_PLASMO
DB_CONNECTION_PASS_PLASMO
DB_CONNECTION_DATA_SCHEMA_PLASMO
DB_CONNECTION_CONTROL_SCHEMA_PLASMO
DB_CONNECTION_POOL_SIZE_PLASMO
Database connection detail sets MUST provide either an LDAP lookup name for a database OR a host and port.
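As a sketch of the two alternatives for the "PLASMO" group above (the TNS name, host, and port values are placeholders):
# Option 1: look the database up via LDAP
DB_CONNECTION_LDAP_PLASMO=someDbTnsName
# Option 2: provide the connection details manually
DB_CONNECTION_HOST_PLASMO=some-db-host
DB_CONNECTION_PORT_PLASMO=1521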
| Req. | Name | Type | Description |
|---|---|---|---|
| | DB_CONNECTION_ENABLED_<NAME> | | Whether the database connection should be enabled for use by VDI. |
| ❗ | DB_CONNECTION_NAME_<NAME> | | Name for the connection, typically the project ID or identifier for the application database. |
| | DB_CONNECTION_LDAP_<NAME> | | LDAP distinguished name for the database connection. |
| | DB_CONNECTION_HOST_<NAME> | | Host URI to use when providing manual database connection details. WARNING: This variable will be ignored if DB_CONNECTION_LDAP_<NAME> is set. WARNING: This variable is required if DB_CONNECTION_LDAP_<NAME> is not set. |
| | DB_CONNECTION_PORT_<NAME> | | Port to use when providing manual database connection details. WARNING: This variable will be ignored if DB_CONNECTION_LDAP_<NAME> is set. WARNING: This variable is required if DB_CONNECTION_LDAP_<NAME> is not set. |
| ❗ | DB_CONNECTION_USER_<NAME> | | Database credentials username. |
| ❗ | DB_CONNECTION_PASS_<NAME> | | Database credentials password. |
| ❗ | DB_CONNECTION_DATA_SCHEMA_<NAME> | | Database schema where user dataset data is installed to. |
| ❗ | DB_CONNECTION_CONTROL_SCHEMA_<NAME> | | Database schema where the VDI control tables are installed to. |
| | DB_CONNECTION_POOL_SIZE_<NAME> | | Connection pool size for the JDBC connection. |
| | DB_CONNECTION_DATA_TYPES_<NAME> | | Dataset type names that align with plugins registered in the VDI environment configuration. If provided, VDI will only use this connection for datasets whose type name matches an item in the given list. If omitted, VDI will use this connection for all datasets whose type name does not match another DB connection with declared types. |
A single project may have multiple target databases registered provided that each connection has a unique (name, dataset type) pairing.

By default, if a database connection detail set does not contain a dataset type list via DB_CONNECTION_DATA_TYPES_<NAME>, it will be used as a fallback for all dataset types that do not match a configured connection.
If multiple database connection detail sets omit the data types restriction variable, VDI will refuse to start. If multiple database connection detail sets specify the same data type, VDI will refuse to start.
# This DB connection is a fallback because it provides no data type list.
# It is NOT used for datasets of type foo and bar.
DB_CONNECTION_ENABLED_PLASMO_1=true
DB_CONNECTION_NAME_PLASMO_1=PlasmoDB
#DB_CONNECTION_DATA_TYPES_PLASMO_1='*' # Wildcard is implied by omission
# This DB connection is only used for datasets of type foo and bar.
DB_CONNECTION_ENABLED_PLASMO_2=true
DB_CONNECTION_NAME_PLASMO_2=PlasmoDB
DB_CONNECTION_DATA_TYPES_PLASMO_2=foo,bar
Duration
-
Durations are a string representation of a time interval. Durations are represented as one or more numeric values followed by a shorthand notation of the time unit.
Time Unit Notations:
| Notation | Unit | Example |
|---|---|---|
| ns | Nanoseconds | 5ns |
| us | Microseconds | 5us |
| ms | Milliseconds | 5ms |
| s | Seconds | 5s |
| m | Minutes | 5m |
| h | Hours | 5h |
| d | Days | 5d |
Durations may also be a combination of multiple values, such as 1d 12h or 1h 0m 30.340s.

Important: Only the last segment of a duration may have a fractional part.
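Example of a Duration-typed setting (SOME_VARIABLE is a placeholder name, as in the examples below):
SOME_VARIABLE="1d 12h"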
HostAddress
-
A HostAddress is a hostname/port pair in the form {host}:{port}, for example google.com:443.
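Example of a HostAddress-typed setting (SOME_VARIABLE is again a placeholder name):
SOME_VARIABLE=google.com:443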
List<T>
-
A list is a comma-separated set of values that may be of any type that does not itself contain a comma; for example, a list may be of Durations or HostAddresses.
Example:
SOME_VARIABLE=item1,item2,item3
Map<K, V>
-
A map is a list of key/value pairs with the keys separated from values by a colon and the pairs separated by commas. Keys may only be simple types, and values may be of any type that does not contain a comma.
Example:
SOME_VARIABLE=key1:value,key2:value,key3:value