Cluster configuration

config.cluster.v3.Cluster

[config.cluster.v3.Cluster proto]

Configuration for a single upstream cluster.

{
  "transport_socket_matches": [],
  "name": "...",
  "alt_stat_name": "...",
  "type": "...",
  "cluster_type": "{...}",
  "eds_cluster_config": "{...}",
  "connect_timeout": "{...}",
  "per_connection_buffer_limit_bytes": "{...}",
  "lb_policy": "...",
  "load_assignment": "{...}",
  "health_checks": [],
  "max_requests_per_connection": "{...}",
  "circuit_breakers": "{...}",
  "upstream_http_protocol_options": "{...}",
  "common_http_protocol_options": "{...}",
  "http_protocol_options": "{...}",
  "http2_protocol_options": "{...}",
  "typed_extension_protocol_options": "{...}",
  "dns_refresh_rate": "{...}",
  "dns_failure_refresh_rate": "{...}",
  "respect_dns_ttl": "...",
  "dns_lookup_family": "...",
  "dns_resolvers": [],
  "use_tcp_for_dns_lookups": "...",
  "outlier_detection": "{...}",
  "cleanup_interval": "{...}",
  "upstream_bind_config": "{...}",
  "lb_subset_config": "{...}",
  "ring_hash_lb_config": "{...}",
  "maglev_lb_config": "{...}",
  "original_dst_lb_config": "{...}",
  "least_request_lb_config": "{...}",
  "common_lb_config": "{...}",
  "transport_socket": "{...}",
  "metadata": "{...}",
  "protocol_selection": "...",
  "upstream_connection_options": "{...}",
  "close_connections_on_host_health_failure": "...",
  "ignore_health_on_host_removal": "...",
  "filters": [],
  "track_timeout_budgets": "...",
  "upstream_config": "{...}",
  "track_cluster_stats": "{...}",
  "connection_pool_per_downstream_connection": "..."
}
transport_socket_matches

(repeated config.cluster.v3.Cluster.TransportSocketMatch) Configuration to use different transport sockets for different endpoints. The entry of envoy.transport_socket_match in the LbEndpoint.Metadata is used to match against the transport sockets as they appear in the list. The first match is used. For example, with the following match

transport_socket_matches:
- name: "enableMTLS"
  match:
    acceptMTLS: true
  transport_socket:
    name: envoy.transport_sockets.tls
    config: { ... } # tls socket configuration
- name: "defaultToPlaintext"
  match: {}
  transport_socket:
    name: envoy.transport_sockets.raw_buffer

Connections to endpoints whose metadata value under envoy.transport_socket_match contains the “acceptMTLS”: “true” key/value pair use the “enableMTLS” socket configuration.

If a socket match with empty match criteria is provided, it always matches any endpoint; for example, the “defaultToPlaintext” socket match in the case above.

If an endpoint metadata’s value under envoy.transport_socket_match does not match any TransportSocketMatch, the socket configuration falls back to the tls_context or transport_socket specified in this cluster.

This field allows gradual and flexible transport socket configuration changes.

The metadata of endpoints in EDS can indicate transport socket capabilities. For example, an endpoint’s metadata can have two key/value pairs, “acceptMTLS”: “true” and “acceptPlaintext”: “true”, while other endpoints that accept only plaintext traffic have just the “acceptPlaintext”: “true” metadata entry.

Then the xDS server can configure the CDS to a client, Envoy A, to send mutual TLS traffic for endpoints with “acceptMTLS”: “true”, by adding a corresponding TransportSocketMatch in this field. Other client Envoys receive CDS without transport_socket_match set, and still send plain text traffic to the same cluster.

This field can be used to specify custom transport socket configurations for health checks by adding matching key/value pairs in a health check’s transport socket match criteria field.

name

(string, REQUIRED) Supplies the name of the cluster which must be unique across all clusters. The cluster name is used when emitting statistics if alt_stat_name is not provided. Any : in the cluster name will be converted to _ when emitting statistics.

alt_stat_name

(string) An optional alternative to the cluster name to be used while emitting stats. Any : in the name will be converted to _ when emitting statistics. This should not be confused with the Router filter’s x-envoy-upstream-alt-stat-name header.

type

(config.cluster.v3.Cluster.DiscoveryType) The service discovery type to use for resolving the cluster.

Only one of type, cluster_type may be set.

cluster_type

(config.cluster.v3.Cluster.CustomClusterType) The custom cluster type.

Only one of type, cluster_type may be set.

eds_cluster_config

(config.cluster.v3.Cluster.EdsClusterConfig) Configuration to use for EDS updates for the Cluster.

connect_timeout

(Duration) The timeout for new network connections to hosts in the cluster.

per_connection_buffer_limit_bytes

(UInt32Value) Soft limit on the size of the cluster’s connection read and write buffers. If unspecified, an implementation-defined default is applied (1MiB).

Attention

This field should be configured in the presence of untrusted upstreams.

Example configuration for untrusted environments:

per_connection_buffer_limit_bytes: 32768
lb_policy

(config.cluster.v3.Cluster.LbPolicy) The load balancer type to use when picking a host in the cluster.

load_assignment

(config.endpoint.v3.ClusterLoadAssignment) Setting this is required for specifying members of STATIC, STRICT_DNS or LOGICAL_DNS clusters. This field supersedes the hosts field in the v2 API.

Attention

Setting this allows non-EDS cluster types to contain embedded EDS equivalent endpoint assignments.
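As a sketch, a STATIC cluster with an inline load_assignment might look like this (the cluster name and addresses are hypothetical):

```yaml
name: some_service          # hypothetical cluster name
type: STATIC
connect_timeout: 5s
lb_policy: ROUND_ROBIN
load_assignment:
  cluster_name: some_service
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 10.0.0.10   # illustrative endpoint address
            port_value: 8080
```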

health_checks

(repeated config.core.v3.HealthCheck) Optional active health checking configuration for the cluster. If no configuration is specified no health checking will be done and all cluster members will be considered healthy at all times.

max_requests_per_connection

(UInt32Value) Optional maximum requests for a single upstream connection. This parameter is respected by both the HTTP/1.1 and HTTP/2 connection pool implementations. If not specified, there is no limit. Setting this parameter to 1 will effectively disable keep alive.

circuit_breakers

(config.cluster.v3.CircuitBreakers) Optional circuit breaking for the cluster.
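For illustration, a cluster might cap connection and request concurrency as follows (the threshold values are arbitrary examples, not recommendations):

```yaml
circuit_breakers:
  thresholds:
  - priority: DEFAULT
    max_connections: 1024        # cap on concurrent upstream connections
    max_pending_requests: 1024   # cap on queued requests waiting for a connection
    max_requests: 1024           # cap on concurrent requests (HTTP/2)
    max_retries: 3               # cap on concurrent retries
```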

upstream_http_protocol_options

(config.core.v3.UpstreamHttpProtocolOptions) HTTP protocol options that are applied only to upstream HTTP connections. These options apply to all HTTP versions. This has been deprecated in favor of upstream_http_protocol_options in the http_protocol_options message. upstream_http_protocol_options can be set via the cluster’s extension_protocol_options. See upstream_http_protocol_options for example usage.

common_http_protocol_options

(config.core.v3.HttpProtocolOptions) Additional options when handling HTTP requests upstream. These options will be applicable to both HTTP1 and HTTP2 requests. This has been deprecated in favor of common_http_protocol_options in the http_protocol_options message. common_http_protocol_options can be set via the cluster’s extension_protocol_options. See upstream_http_protocol_options for example usage.

http_protocol_options

(config.core.v3.Http1ProtocolOptions) Additional options when handling HTTP1 requests. This has been deprecated in favor of http_protocol_options fields in the http_protocol_options message. http_protocol_options can be set via the cluster’s extension_protocol_options. See upstream_http_protocol_options for example usage.

http2_protocol_options

(config.core.v3.Http2ProtocolOptions) Even if default HTTP2 protocol options are desired, this field must be set so that Envoy will assume that the upstream supports HTTP/2 when making new HTTP connection pool connections. Currently, Envoy only supports prior knowledge for upstream connections. Even if TLS is used with ALPN, http2_protocol_options must be specified. As an aside, this allows HTTP/2 connections to happen over plain text. This has been deprecated in favor of http2_protocol_options fields in the http_protocol_options message. http2_protocol_options can be set via the cluster’s extension_protocol_options. See upstream_http_protocol_options for example usage.

Attention

This field should be configured in the presence of untrusted upstreams.

Example configuration for untrusted environments:

http2_protocol_options:
  initial_connection_window_size: 1048576
  initial_stream_window_size: 65536
typed_extension_protocol_options

(repeated map<string, Any>) The extension_protocol_options field is used to provide extension-specific protocol options for upstream connections. The key should match the extension filter name, such as “envoy.filters.network.thrift_proxy”. See the extension’s documentation for details on specific options.
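As a hedged sketch of the non-deprecated replacement for the HTTP fields above, HTTP protocol options are supplied under the well-known key envoy.extensions.upstreams.http.v3.http_protocol_options; here explicit HTTP/2 is selected:

```yaml
typed_extension_protocol_options:
  envoy.extensions.upstreams.http.v3.http_protocol_options:
    "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
    explicit_http_config:
      http2_protocol_options: {}   # use HTTP/2 upstream with default options
```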

dns_refresh_rate

(Duration) If the DNS refresh rate is specified and the cluster type is either STRICT_DNS, or LOGICAL_DNS, this value is used as the cluster’s DNS refresh rate. The value configured must be at least 1ms. If this setting is not specified, the value defaults to 5000ms. For cluster types other than STRICT_DNS and LOGICAL_DNS this setting is ignored.

dns_failure_refresh_rate

(config.cluster.v3.Cluster.RefreshRate) If the DNS failure refresh rate is specified and the cluster type is either STRICT_DNS, or LOGICAL_DNS, this is used as the cluster’s DNS refresh rate when requests are failing. If this setting is not specified, the failure refresh rate defaults to the DNS refresh rate. For cluster types other than STRICT_DNS and LOGICAL_DNS this setting is ignored.

respect_dns_ttl

(bool) Optional configuration for setting the cluster’s DNS refresh rate. If the value is set to true, the cluster’s DNS refresh rate will be set to the resource record’s TTL returned by DNS resolution.

dns_lookup_family

(config.cluster.v3.Cluster.DnsLookupFamily) The DNS IP address resolution policy. If this setting is not specified, the value defaults to AUTO.

dns_resolvers

(repeated config.core.v3.Address) If DNS resolvers are specified and the cluster type is either STRICT_DNS, or LOGICAL_DNS, this value is used to specify the cluster’s dns resolvers. If this setting is not specified, the value defaults to the default resolver, which uses /etc/resolv.conf for configuration. For cluster types other than STRICT_DNS and LOGICAL_DNS this setting is ignored. Setting this value causes failure if the envoy.restart_features.use_apple_api_for_dns_lookups runtime value is true during server startup. Apple’s API only allows overriding DNS resolvers via system settings.

use_tcp_for_dns_lookups

(bool) Always use TCP queries instead of UDP queries for DNS lookups. Setting this value causes failure if the envoy.restart_features.use_apple_api_for_dns_lookups runtime value is true during server startup. Apple’s API only uses UDP for DNS resolution.
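Putting the DNS-related fields together, a STRICT_DNS cluster sketch might look like this (the resolver address and refresh rate are illustrative):

```yaml
type: STRICT_DNS
dns_lookup_family: V4_ONLY
dns_refresh_rate: 5s          # re-resolve every 5 seconds...
respect_dns_ttl: true         # ...or honor the record's TTL instead
dns_resolvers:
- socket_address:
    address: 8.8.8.8          # illustrative resolver address
    port_value: 53
```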

outlier_detection

(config.cluster.v3.OutlierDetection) If specified, outlier detection will be enabled for this upstream cluster. Each of the configuration values can be overridden via runtime values.
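A minimal outlier detection sketch (the thresholds are illustrative, not recommendations):

```yaml
outlier_detection:
  consecutive_5xx: 5          # eject after 5 consecutive 5xx responses
  interval: 10s               # analysis sweep interval
  base_ejection_time: 30s     # first ejection lasts 30s
  max_ejection_percent: 50    # never eject more than half the cluster
```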

cleanup_interval

(Duration) The interval for removing stale hosts from a cluster type ORIGINAL_DST. Hosts are considered stale if they have not been used as upstream destinations during this interval. New hosts are added to original destination clusters on demand as new connections are redirected to Envoy, causing the number of hosts in the cluster to grow over time. Hosts that are not stale (they are actively used as destinations) are kept in the cluster, which allows connections to them to remain open, saving the latency that would otherwise be spent on opening new connections. If this setting is not specified, the value defaults to 5000ms. For cluster types other than ORIGINAL_DST this setting is ignored.

upstream_bind_config

(config.core.v3.BindConfig) Optional configuration used to bind newly established upstream connections. This overrides any bind_config specified in the bootstrap proto. If the address and port are empty, no bind will be performed.
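For example, to bind upstream connections to a specific source address (the address is hypothetical):

```yaml
upstream_bind_config:
  source_address:
    address: 10.0.0.5   # hypothetical local source address
    port_value: 0       # 0 lets the OS pick an ephemeral port
```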

lb_subset_config

(config.cluster.v3.Cluster.LbSubsetConfig) Configuration for load balancing subsetting.

ring_hash_lb_config

(config.cluster.v3.Cluster.RingHashLbConfig) Optional configuration for the Ring Hash load balancing policy.

Optional configuration for the load balancing algorithm selected by LbPolicy. Currently only RING_HASH, MAGLEV and LEAST_REQUEST have additional configuration options. Specifying ring_hash_lb_config or maglev_lb_config or least_request_lb_config without setting the corresponding LbPolicy will generate an error at runtime.

Only one of ring_hash_lb_config, maglev_lb_config, original_dst_lb_config, least_request_lb_config may be set.

maglev_lb_config

(config.cluster.v3.Cluster.MaglevLbConfig) Optional configuration for the Maglev load balancing policy.

Optional configuration for the load balancing algorithm selected by LbPolicy. Currently only RING_HASH, MAGLEV and LEAST_REQUEST have additional configuration options. Specifying ring_hash_lb_config or maglev_lb_config or least_request_lb_config without setting the corresponding LbPolicy will generate an error at runtime.

Only one of ring_hash_lb_config, maglev_lb_config, original_dst_lb_config, least_request_lb_config may be set.

original_dst_lb_config

(config.cluster.v3.Cluster.OriginalDstLbConfig) Optional configuration for the Original Destination load balancing policy.

Optional configuration for the load balancing algorithm selected by LbPolicy. Currently only RING_HASH, MAGLEV and LEAST_REQUEST have additional configuration options. Specifying ring_hash_lb_config or maglev_lb_config or least_request_lb_config without setting the corresponding LbPolicy will generate an error at runtime.

Only one of ring_hash_lb_config, maglev_lb_config, original_dst_lb_config, least_request_lb_config may be set.

least_request_lb_config

(config.cluster.v3.Cluster.LeastRequestLbConfig) Optional configuration for the LeastRequest load balancing policy.

Optional configuration for the load balancing algorithm selected by LbPolicy. Currently only RING_HASH, MAGLEV and LEAST_REQUEST have additional configuration options. Specifying ring_hash_lb_config or maglev_lb_config or least_request_lb_config without setting the corresponding LbPolicy will generate an error at runtime.

Only one of ring_hash_lb_config, maglev_lb_config, original_dst_lb_config, least_request_lb_config may be set.

common_lb_config

(config.cluster.v3.Cluster.CommonLbConfig) Common configuration for all load balancer implementations.

transport_socket

(config.core.v3.TransportSocket) Optional custom transport socket implementation to use for upstream connections. To set up TLS, set a transport socket with name tls and UpstreamTlsContexts in the typed_config. If no transport socket configuration is specified, new connections will be set up with plaintext.
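A minimal TLS sketch, assuming the upstream presents a certificate for the (hypothetical) SNI name shown:

```yaml
transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
    sni: backend.example.com   # hypothetical server name for SNI and validation
```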

metadata

(config.core.v3.Metadata) The Metadata field can be used to provide additional information about the cluster. It can be used for stats, logging, and varying filter behavior. Fields should use reverse DNS notation to denote which entity within Envoy will need the information. For instance, if the metadata is intended for the Router filter, the filter name should be specified as envoy.filters.http.router.

protocol_selection

(config.cluster.v3.Cluster.ClusterProtocolSelection) Determines how Envoy selects the protocol used to speak to upstream hosts. This has been deprecated in favor of setting explicit protocol selection in the http_protocol_options message. http_protocol_options can be set via the cluster’s extension_protocol_options.

upstream_connection_options

(config.cluster.v3.UpstreamConnectionOptions) Optional options for upstream connections.

close_connections_on_host_health_failure

(bool) If an upstream host becomes unhealthy (as determined by the configured health checks or outlier detection), immediately close all connections to the failed host.

Note

This is currently only supported for connections created by tcp_proxy.

Note

The current implementation of this feature closes all connections immediately when the unhealthy status is detected. If there are a large number of connections open to an upstream host that becomes unhealthy, Envoy may spend a substantial amount of time exclusively closing these connections, and not processing any other traffic.

ignore_health_on_host_removal

(bool) If set to true, Envoy will ignore the health value of a host when processing its removal from service discovery. This means that if active health checking is used, Envoy will not wait for the endpoint to go unhealthy before removing it.

filters

(repeated config.cluster.v3.Filter) An (optional) network filter chain, listed in the order the filters should be applied. The chain will be applied to all outgoing connections that Envoy makes to the upstream servers of this cluster.

track_timeout_budgets

(bool) If track_timeout_budgets is true, the timeout budget histograms will be published for each request. These show what percentage of a request’s per try and global timeout was used. A value of 0 would indicate that none of the timeout was used or that the timeout was infinite. A value of 100 would indicate that the request took the entirety of the timeout given to it.

Attention

This field has been deprecated in favor of timeout_budgets, part of track_cluster_stats.

upstream_config

(config.core.v3.TypedExtensionConfig) Optional customization and configuration of upstream connection pool, and upstream type.

Currently this field only applies for HTTP traffic but is designed for eventual use for custom TCP upstreams.

For HTTP traffic, Envoy will generally take downstream HTTP and send it upstream as upstream HTTP, using the http connection pool and the codec from http2_protocol_options.

For routes where CONNECT termination is configured, Envoy will take downstream CONNECT requests and forward the CONNECT payload upstream over raw TCP using the tcp connection pool.

The default pool used is the generic connection pool which creates the HTTP upstream for most HTTP requests, and the TCP upstream if CONNECT termination is configured.

If users desire custom connection pool or upstream behavior, for example terminating CONNECT only if a custom filter indicates it is appropriate, the custom factories can be registered and configured here.

track_cluster_stats

(config.cluster.v3.TrackClusterStats) Configuration to track optional cluster stats.

connection_pool_per_downstream_connection

(bool) If connection_pool_per_downstream_connection is true, the cluster will use a separate connection pool for every downstream connection.

config.cluster.v3.Cluster.TransportSocketMatch

[config.cluster.v3.Cluster.TransportSocketMatch proto]

TransportSocketMatch specifies what transport socket config will be used when the match conditions are satisfied.

{
  "name": "...",
  "match": "{...}",
  "transport_socket": "{...}"
}
name

(string, REQUIRED) The name of the match, used in stats generation.

match

(Struct) Optional endpoint metadata match criteria. The connection to the endpoint with metadata matching what is set in this field will use the transport socket configuration specified here. The endpoint’s metadata entry in envoy.transport_socket_match is used to match against the values specified in this field.

transport_socket

(config.core.v3.TransportSocket) The configuration of the transport socket.

config.cluster.v3.Cluster.CustomClusterType

[config.cluster.v3.Cluster.CustomClusterType proto]

Extended cluster type.

{
  "name": "...",
  "typed_config": "{...}"
}
name

(string, REQUIRED) The type of the cluster to instantiate. The name must match a supported cluster type.

typed_config

(Any) Cluster specific configuration which depends on the cluster being instantiated. See the supported cluster types for further documentation.

config.cluster.v3.Cluster.EdsClusterConfig

[config.cluster.v3.Cluster.EdsClusterConfig proto]

Only valid when discovery type is EDS.

{
  "eds_config": "{...}",
  "service_name": "..."
}
eds_config

(config.core.v3.ConfigSource) Configuration for the source of EDS updates for this Cluster.

service_name

(string) Optional alternative to cluster name to present to EDS. This does not have the same restrictions as cluster name, i.e. it may be arbitrary length. This may be a xdstp:// URL.
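An EDS cluster sketch that sources endpoints over ADS (the cluster and service names are hypothetical):

```yaml
name: backend                         # hypothetical cluster name
type: EDS
eds_cluster_config:
  service_name: backend.example.com   # hypothetical EDS resource name
  eds_config:
    resource_api_version: V3
    ads: {}                           # fetch endpoints over the ADS stream
```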

config.cluster.v3.Cluster.LbSubsetConfig

[config.cluster.v3.Cluster.LbSubsetConfig proto]

Optionally divide the endpoints in this cluster into subsets defined by endpoint metadata and selected by route and weighted cluster metadata.

{
  "fallback_policy": "...",
  "default_subset": "{...}",
  "subset_selectors": [],
  "locality_weight_aware": "...",
  "scale_locality_weight": "...",
  "panic_mode_any": "...",
  "list_as_any": "..."
}
fallback_policy

(config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetFallbackPolicy) The behavior used when no endpoint subset matches the selected route’s metadata. The value defaults to NO_FALLBACK.

default_subset

(Struct) Specifies the default subset of endpoints used during fallback if fallback_policy is DEFAULT_SUBSET. Each field in default_subset is compared to the matching LbEndpoint.Metadata under the envoy.lb namespace. It is valid for no hosts to match, in which case the behavior is the same as a fallback_policy of NO_FALLBACK.
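For illustration, a subset configuration that falls back to “version: v1” endpoints when no subset matches the route metadata (the key and value are hypothetical):

```yaml
lb_subset_config:
  fallback_policy: DEFAULT_SUBSET
  default_subset:
    version: v1            # hypothetical envoy.lb metadata entry
  subset_selectors:
  - keys: ["version"]
```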

subset_selectors

(repeated config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetSelector) For each entry, LbEndpoint.Metadata’s envoy.lb namespace is traversed and a subset is created for each unique combination of key and value. For example:

{ "subset_selectors": [
    { "keys": [ "version" ] },
    { "keys": [ "stage", "hardware_type" ] }
]}

A subset is matched when the metadata from the selected route and weighted cluster contains the same keys and values as the subset’s metadata. The same host may appear in multiple subsets.

locality_weight_aware

(bool) If true, routing to subsets will take into account the localities and locality weights of the endpoints when making the routing decision.

There are some potential pitfalls associated with enabling this feature, as the resulting traffic split after applying both a subset match and locality weights might be undesirable.

Consider for example a situation in which you have 50/50 split across two localities X/Y which have 100 hosts each without subsetting. If the subset LB results in X having only 1 host selected but Y having 100, then a lot more load is being dumped on the single host in X than originally anticipated in the load balancing assignment delivered via EDS.

scale_locality_weight

(bool) When used with locality_weight_aware, scales the weight of each locality by the ratio of hosts in the subset vs hosts in the original subset. This aims to even out the load going to an individual locality if said locality is disproportionately affected by the subset predicate.

panic_mode_any

(bool) If true, when a fallback policy is configured and its corresponding subset fails to find a host this will cause any host to be selected instead.

This is useful when using the default subset as the fallback policy, given the default subset might become empty. With this option enabled, if that happens the LB will attempt to select a host from the entire cluster.

list_as_any

(bool) If true, metadata specified for a metadata key will be matched against the corresponding endpoint metadata if the endpoint metadata matches the value exactly OR it is a list value and any of the elements in the list matches the criteria.

config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetSelector

[config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetSelector proto]

Specifications for subsets.

{
  "keys": [],
  "single_host_per_subset": "...",
  "fallback_policy": "...",
  "fallback_keys_subset": []
}
keys

(repeated string) List of keys to match with the weighted cluster metadata.

single_host_per_subset

(bool) Selects a mode of operation in which each subset has only one host. This mode uses the same rules for choosing a host, but updating hosts is faster, especially for large numbers of hosts.

If a match is found to a host, that host will be used regardless of priority levels, unless the host is unhealthy.

Currently, this mode is only supported if subset_selectors has only one entry, and keys contains only one entry.

When this mode is enabled, configurations that contain more than one host with the same metadata value for the single key in keys will use only one of the hosts with the given key; no requests will be routed to the others. The cluster gauge lb_subsets_single_host_per_subset_duplicate indicates how many duplicates are present in the current configuration.

fallback_policy

(config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetSelector.LbSubsetSelectorFallbackPolicy) The behavior used when no endpoint subset matches the selected route’s metadata.

fallback_keys_subset

(repeated string) Subset of keys used by the KEYS_SUBSET fallback policy. It has to be a non-empty list if the KEYS_SUBSET fallback policy is selected. For any other fallback policy the parameter is not used and should not be set. Only values also present in keys are allowed, but fallback_keys_subset cannot be equal to keys.

Enum config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetSelector.LbSubsetSelectorFallbackPolicy

[config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetSelector.LbSubsetSelectorFallbackPolicy proto]

Allows overriding the top-level fallback policy per selector.

NOT_DEFINED

(DEFAULT) ⁣If NOT_DEFINED, the top-level config fallback policy is used instead.

NO_FALLBACK

⁣If NO_FALLBACK is selected, a result equivalent to no healthy hosts is reported.

ANY_ENDPOINT

⁣If ANY_ENDPOINT is selected, any cluster endpoint may be returned (subject to policy, health checks, etc).

DEFAULT_SUBSET

⁣If DEFAULT_SUBSET is selected, load balancing is performed over the endpoints matching the values from the default_subset field.

KEYS_SUBSET

⁣If KEYS_SUBSET is selected, subset selector matching is performed again with metadata keys reduced to fallback_keys_subset. It allows for a fallback to a different, less specific selector if some of the keys of the selector are considered optional.

Enum config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetFallbackPolicy

[config.cluster.v3.Cluster.LbSubsetConfig.LbSubsetFallbackPolicy proto]

If NO_FALLBACK is selected, a result equivalent to no healthy hosts is reported. If ANY_ENDPOINT is selected, any cluster endpoint may be returned (subject to policy, health checks, etc). If DEFAULT_SUBSET is selected, load balancing is performed over the endpoints matching the values from the default_subset field.

NO_FALLBACK

(DEFAULT)

ANY_ENDPOINT

DEFAULT_SUBSET

config.cluster.v3.Cluster.LeastRequestLbConfig

[config.cluster.v3.Cluster.LeastRequestLbConfig proto]

Specific configuration for the LeastRequest load balancing policy.

{
  "choice_count": "{...}",
  "active_request_bias": "{...}"
}
choice_count

(UInt32Value) The number of random healthy hosts from which the host with the fewest active requests will be chosen. Defaults to 2 so that we perform two-choice selection if the field is not set.

active_request_bias

(config.core.v3.RuntimeDouble) The following formula is used to calculate the dynamic weights when hosts have different load balancing weights:

weight = load_balancing_weight / (active_requests + 1)^active_request_bias

The larger the active request bias is, the more aggressively active requests will lower the effective weight when all host weights are not equal.

active_request_bias must be greater than or equal to 0.0.

When active_request_bias == 0.0 the Least Request Load Balancer doesn’t consider the number of active requests at the time it picks a host and behaves like the Round Robin Load Balancer.

When active_request_bias > 0.0 the Least Request Load Balancer scales the load balancing weight by the number of active requests at the time it does a pick.

The value is cached for performance reasons and refreshed whenever one of the Load Balancer’s host sets changes, e.g., whenever there is a host membership update or a host load balancing weight change.

Note

This setting only takes effect if all host weights are not equal.
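A sketch tying these fields together (the runtime key is hypothetical):

```yaml
lb_policy: LEAST_REQUEST
least_request_lb_config:
  choice_count: 3                              # sample 3 hosts instead of the default 2
  active_request_bias:
    default_value: 1.0
    runtime_key: example.active_request_bias   # hypothetical runtime key
```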

config.cluster.v3.Cluster.RingHashLbConfig

[config.cluster.v3.Cluster.RingHashLbConfig proto]

Specific configuration for the RingHash load balancing policy.

{
  "minimum_ring_size": "{...}",
  "hash_function": "...",
  "maximum_ring_size": "{...}"
}
minimum_ring_size

(UInt64Value) Minimum hash ring size. The larger the ring is (that is, the more hashes there are for each provided host) the better the request distribution will reflect the desired weights. Defaults to 1024 entries, and limited to 8M entries. See also maximum_ring_size.

hash_function

(config.cluster.v3.Cluster.RingHashLbConfig.HashFunction) The hash function used to hash hosts onto the ketama ring. The value defaults to XX_HASH.

maximum_ring_size

(UInt64Value) Maximum hash ring size. Defaults to 8M entries, and limited to 8M entries, but can be lowered to further constrain resource use. See also minimum_ring_size.
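For example, a ring hash policy with a larger minimum ring for a smoother weight distribution (the size is illustrative):

```yaml
lb_policy: RING_HASH
ring_hash_lb_config:
  minimum_ring_size: 2048   # illustrative; default is 1024
  hash_function: XX_HASH
```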

Enum config.cluster.v3.Cluster.RingHashLbConfig.HashFunction

[config.cluster.v3.Cluster.RingHashLbConfig.HashFunction proto]

The hash function used to hash hosts onto the ketama ring.

XX_HASH

(DEFAULT) ⁣Use xxHash, this is the default hash function.

MURMUR_HASH_2

⁣Use MurmurHash2, which is compatible with std::hash<string> in GNU libstdc++ 3.4.20 or above. This is typically the case when compiled on Linux and not macOS.

config.cluster.v3.Cluster.MaglevLbConfig

[config.cluster.v3.Cluster.MaglevLbConfig proto]

Specific configuration for the Maglev load balancing policy.

{
  "table_size": "{...}"
}
table_size

(UInt64Value) The table size for Maglev hashing. Maglev aims for “minimal disruption” rather than an absolute guarantee: when the set of upstreams changes, a connection will likely be sent to the same upstream as it was before. Increasing the table size reduces the amount of disruption. The table size must be a prime number. If it is not specified, the default is 65537.
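A minimal sketch (65537 is the default table size and is prime):

```yaml
lb_policy: MAGLEV
maglev_lb_config:
  table_size: 65537   # must be a prime number
```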

config.cluster.v3.Cluster.OriginalDstLbConfig

[config.cluster.v3.Cluster.OriginalDstLbConfig proto]

Specific configuration for the Original Destination load balancing policy.

{
  "use_http_header": "..."
}
use_http_header

(bool) When true, x-envoy-original-dst-host can be used to override destination address.

Attention

This header isn’t sanitized by default, so enabling this feature allows HTTP clients to route traffic to arbitrary hosts and/or ports, which may have serious security consequences.

Note

If the header appears multiple times only the first value is used.
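An ORIGINAL_DST cluster sketch that honors the override header (enable only behind trusted clients, per the warning above):

```yaml
type: ORIGINAL_DST
lb_policy: CLUSTER_PROVIDED
original_dst_lb_config:
  use_http_header: true   # allow x-envoy-original-dst-host to set the destination
```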

config.cluster.v3.Cluster.CommonLbConfig

[config.cluster.v3.Cluster.CommonLbConfig proto]

Common configuration for all load balancer implementations.

{
  "healthy_panic_threshold": "{...}",
  "zone_aware_lb_config": "{...}",
  "locality_weighted_lb_config": "{...}",
  "update_merge_window": "{...}",
  "ignore_new_hosts_until_first_hc": "...",
  "close_connections_on_host_set_change": "...",
  "consistent_hashing_lb_config": "{...}"
}
healthy_panic_threshold

(type.v3.Percent) Configures the healthy panic threshold. If not specified, the default is 50%. To disable panic mode, set to 0%.

Note

The specified percent will be truncated to the nearest 1%.

zone_aware_lb_config

(config.cluster.v3.Cluster.CommonLbConfig.ZoneAwareLbConfig)

Only one of zone_aware_lb_config, locality_weighted_lb_config may be set.

locality_weighted_lb_config

(config.cluster.v3.Cluster.CommonLbConfig.LocalityWeightedLbConfig)

Only one of zone_aware_lb_config, locality_weighted_lb_config may be set.

update_merge_window

(Duration) If set, all health check/weight/metadata updates that happen within this duration will be merged and delivered in one shot when the duration expires. The start of the duration is when the first update happens. This is useful for big clusters with potentially noisy deploys that might otherwise trigger excessive CPU usage due to a constant stream of health check state changes or metadata updates. The first set of updates to be seen apply immediately (e.g.: a new cluster).

If this is not set, we default to a merge window of 1000ms. To disable it, set the merge window to 0.

Note: merging does not apply to cluster membership changes (e.g.: adds/removes); this is because merging those updates isn’t currently safe. See https://github.com/envoyproxy/envoy/pull/3941.
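A minimal sketch of widening the merge window for a large, noisy cluster; the 5s value is illustrative:

```yaml
common_lb_config:
  update_merge_window: 5s    # default is 1000ms; 0s disables merging
```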

ignore_new_hosts_until_first_hc

(bool) If set to true, Envoy will not consider new hosts when computing load balancing weights until they have been health checked for the first time. This will have no effect unless active health checking is also configured.

Ignoring a host means that for any load balancing calculations that adjust weights based on the ratio of eligible hosts and total hosts (priority spillover, locality weighting and panic mode) Envoy will exclude these hosts in the denominator.

For example, with hosts in two priorities P0 and P1, where P0 looks like {healthy, unhealthy (new), unhealthy (new)} and where P1 looks like {healthy, healthy} all traffic will still hit P0, as 1 / (3 - 2) = 1.

Enabling this will allow scaling up the number of hosts for a given cluster without entering panic mode or triggering priority spillover, assuming the hosts pass the first health check.

If panic mode is triggered, new hosts are still eligible for traffic; they simply do not contribute to the calculation when deciding whether panic mode is enabled or not.

close_connections_on_host_set_change

(bool) If set to true, the cluster manager will drain all existing connections to upstream hosts whenever hosts are added or removed from the cluster.

consistent_hashing_lb_config

(config.cluster.v3.Cluster.CommonLbConfig.ConsistentHashingLbConfig) Common Configuration for all consistent hashing load balancers (MaglevLb, RingHashLb, etc.)

config.cluster.v3.Cluster.CommonLbConfig.ZoneAwareLbConfig

[config.cluster.v3.Cluster.CommonLbConfig.ZoneAwareLbConfig proto]

Configuration for zone aware routing.

{
  "routing_enabled": "{...}",
  "min_cluster_size": "{...}",
  "fail_traffic_on_panic": "..."
}
routing_enabled

(type.v3.Percent) Configures the percentage of requests that will be considered for zone aware routing if zone aware routing is configured. If not specified, the default is 100%. See also: runtime values and zone aware routing support.

min_cluster_size

(UInt64Value) Configures the minimum upstream cluster size required for zone aware routing. If the upstream cluster size is less than specified, zone aware routing is not performed even if zone aware routing is configured. If not specified, the default is 6. See also: runtime values and zone aware routing support.

fail_traffic_on_panic

(bool) If set to true, Envoy will not consider any hosts when the cluster is in panic mode. Instead, the cluster will fail all requests as if all hosts are unhealthy. This can help avoid potentially overwhelming a failing service.
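Putting the three fields together, a zone aware routing sketch might read as follows; all values are illustrative:

```yaml
common_lb_config:
  zone_aware_lb_config:
    routing_enabled:
      value: 100.0             # percent of requests considered; default 100%
    min_cluster_size: 6        # default 6
    fail_traffic_on_panic: true
```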

config.cluster.v3.Cluster.CommonLbConfig.LocalityWeightedLbConfig

[config.cluster.v3.Cluster.CommonLbConfig.LocalityWeightedLbConfig proto]

Configuration for locality weighted load balancing

{}

config.cluster.v3.Cluster.CommonLbConfig.ConsistentHashingLbConfig

[config.cluster.v3.Cluster.CommonLbConfig.ConsistentHashingLbConfig proto]

Common Configuration for all consistent hashing load balancers (MaglevLb, RingHashLb, etc.)

{
  "use_hostname_for_hashing": "...",
  "hash_balance_factor": "{...}"
}
use_hostname_for_hashing

(bool) If set to true, the cluster will use hostname instead of the resolved address as the key to consistently hash to an upstream host. Only valid for StrictDNS clusters with hostnames which resolve to a single IP address.

hash_balance_factor

(UInt32Value) Configures percentage of average cluster load to bound per upstream host. For example, with a value of 150 no upstream host will get a load more than 1.5 times the average load of all the hosts in the cluster. If not specified, the load is not bounded for any upstream host. Typical value for this parameter is between 120 and 200. Minimum is 100.

Applies to both Ring Hash and Maglev load balancers.

This is implemented based on the method described in the paper https://arxiv.org/abs/1608.01350. For the specified hash_balance_factor, requests to any upstream host are capped at hash_balance_factor/100 times the average number of requests across the cluster. When a request arrives for an upstream host that is currently serving at its max capacity, linear probing is used to identify an eligible host. Further, the linear probe is implemented using a random jump in hosts ring/table to identify the eligible host (this technique is as described in the paper https://arxiv.org/abs/1908.08762 - the random jump avoids the cascading overflow effect when choosing the next host in the ring/table).

If weights are specified on the hosts, they are respected.

This is an O(N) algorithm, unlike other load balancers. Using a lower hash_balance_factor results in more hosts being probed, so use a higher value if you require better performance.
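A sketch combining both fields for a bounded-load ring hash or Maglev cluster; the values are illustrative:

```yaml
common_lb_config:
  consistent_hashing_lb_config:
    use_hostname_for_hashing: true   # requires hostnames resolving to a single IP
    hash_balance_factor: 150         # cap any host at 1.5x the mean load; minimum 100
```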

config.cluster.v3.Cluster.RefreshRate

[config.cluster.v3.Cluster.RefreshRate proto]

{
  "base_interval": "{...}",
  "max_interval": "{...}"
}
base_interval

(Duration, REQUIRED) Specifies the base interval between refreshes. This parameter is required and must be greater than zero and less than max_interval.

max_interval

(Duration) Specifies the maximum interval between refreshes. This parameter is optional, but must be greater than or equal to the base_interval if set. The default is 10 times the base_interval.
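As an example, this is how a RefreshRate message might be used for DNS failure backoff on a cluster; the intervals are illustrative:

```yaml
dns_refresh_rate: 5s
dns_failure_refresh_rate:
  base_interval: 2s     # required; must be > 0 and < max_interval
  max_interval: 20s     # optional; defaults to 10x base_interval
```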

Enum config.cluster.v3.Cluster.DiscoveryType

[config.cluster.v3.Cluster.DiscoveryType proto]

Refer to service discovery type for an explanation on each type.

STATIC

(DEFAULT) ⁣Refer to the static discovery type for an explanation.

STRICT_DNS

⁣Refer to the strict DNS discovery type for an explanation.

LOGICAL_DNS

⁣Refer to the logical DNS discovery type for an explanation.

EDS

⁣Refer to the endpoint discovery type for an explanation.

ORIGINAL_DST

⁣Refer to the original destination discovery type for an explanation.

Enum config.cluster.v3.Cluster.LbPolicy

[config.cluster.v3.Cluster.LbPolicy proto]

Refer to load balancer type architecture overview section for information on each type.

ROUND_ROBIN

(DEFAULT) ⁣Refer to the round robin load balancing policy for an explanation.

LEAST_REQUEST

⁣Refer to the least request load balancing policy for an explanation.

RING_HASH

⁣Refer to the ring hash load balancing policy for an explanation.

RANDOM

⁣Refer to the random load balancing policy for an explanation.

MAGLEV

⁣Refer to the Maglev load balancing policy for an explanation.

CLUSTER_PROVIDED

⁣This load balancer type must be specified if the configured cluster provides a cluster specific load balancer. Consult the configured cluster’s documentation for whether to set this option or not.

Enum config.cluster.v3.Cluster.DnsLookupFamily

[config.cluster.v3.Cluster.DnsLookupFamily proto]

When V4_ONLY is selected, the DNS resolver will only perform a lookup for addresses in the IPv4 family. If V6_ONLY is selected, the DNS resolver will only perform a lookup for addresses in the IPv6 family. If AUTO is specified, the DNS resolver will first perform a lookup for addresses in the IPv6 family and fallback to a lookup for addresses in the IPv4 family. For cluster types other than STRICT_DNS and LOGICAL_DNS, this setting is ignored.

AUTO

(DEFAULT)

V4_ONLY

V6_ONLY
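For instance, to force IPv4-only resolution on a DNS-based cluster (the setting is ignored for other cluster types):

```yaml
type: STRICT_DNS
dns_lookup_family: V4_ONLY
```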

Enum config.cluster.v3.Cluster.ClusterProtocolSelection

[config.cluster.v3.Cluster.ClusterProtocolSelection proto]

USE_CONFIGURED_PROTOCOL

(DEFAULT) ⁣Cluster can only operate on one of the possible upstream protocols (HTTP1.1, HTTP2). If http2_protocol_options are present, HTTP2 will be used, otherwise HTTP1.1 will be used.

USE_DOWNSTREAM_PROTOCOL

⁣Use HTTP1.1 or HTTP2, depending on which one is used on the downstream connection.
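A one-line sketch of mirroring the downstream protocol on upstream connections:

```yaml
protocol_selection: USE_DOWNSTREAM_PROTOCOL
```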

config.cluster.v3.UpstreamBindConfig

[config.cluster.v3.UpstreamBindConfig proto]

An extensible structure containing the address Envoy should bind to when establishing upstream connections.

{
  "source_address": "{...}"
}
source_address

(config.core.v3.Address) The address Envoy should bind to when establishing upstream connections.
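For illustration, binding upstream connections to a specific local address; the address is made up, and port_value 0 lets the OS choose an ephemeral port:

```yaml
upstream_bind_config:
  source_address:
    address: 10.0.0.5
    port_value: 0
```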

config.cluster.v3.UpstreamConnectionOptions

[config.cluster.v3.UpstreamConnectionOptions proto]

{
  "tcp_keepalive": "{...}"
}
tcp_keepalive

(config.core.v3.TcpKeepalive) If set, SO_KEEPALIVE is enabled on the socket to enable TCP keepalives.
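A sketch enabling keepalives with explicit probe settings; the values are illustrative, and unset fields fall back to OS-level defaults:

```yaml
upstream_connection_options:
  tcp_keepalive:
    keepalive_probes: 3      # probes before declaring the connection dead
    keepalive_time: 60       # seconds of idle time before probing starts
    keepalive_interval: 10   # seconds between probes
```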

config.cluster.v3.TrackClusterStats

[config.cluster.v3.TrackClusterStats proto]

{
  "timeout_budgets": "...",
  "request_response_sizes": "..."
}
timeout_budgets

(bool) If timeout_budgets is true, the timeout budget histograms will be published for each request. These show what percentage of a request’s per-try and global timeout was used. A value of 0 would indicate that none of the timeout was used or that the timeout was infinite. A value of 100 would indicate that the request took the entirety of the timeout given to it.

request_response_sizes

(bool) If request_response_sizes is true, then the histograms tracking header and body sizes of requests and responses will be published.
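Both flags can be enabled together on a cluster, for example:

```yaml
track_cluster_stats:
  timeout_budgets: true
  request_response_sizes: true
```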