TCP proxy
See the TCP proxy architecture overview for more information.
This filter should be configured with the type URL
type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy.
Dynamic cluster selection
The upstream cluster used by the TCP proxy filter can be dynamically set by
other network filters on a per-connection basis by setting a per-connection
state object under the key envoy.tcp_proxy.cluster. See the
implementation for details.
Routing to a subset of hosts
TCP proxy can be configured to route to a subset of hosts within an upstream cluster.
To define metadata that a suitable upstream host must match, use one of the following fields:
Use TcpProxy.metadata_match to define required metadata for a single upstream cluster.
Use ClusterWeight.metadata_match to define required metadata for a weighted upstream cluster.
Use a combination of TcpProxy.metadata_match and ClusterWeight.metadata_match to define required metadata for a weighted upstream cluster (metadata from the latter will be merged on top of the former).
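As a sketch, a TCP proxy configuration that only routes to upstream hosts labeled version: v1 might look like the following (the cluster name and metadata values are illustrative, and the upstream cluster must also configure a matching subset load balancer):

```yaml
name: envoy.filters.network.tcp_proxy
typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
  stat_prefix: tcp
  cluster: upstream_cluster
  # Only hosts whose endpoint metadata under the envoy.lb namespace
  # matches version: v1 are eligible for load balancing.
  metadata_match:
    filter_metadata:
      envoy.lb:
        version: v1
```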
In addition, dynamic metadata can be set by earlier network filters on the StreamInfo. Setting the dynamic metadata
must happen before onNewConnection() is called on the TcpProxy filter to affect load balancing.
Delayed upstream connection establishment
By default, the TCP proxy filter establishes the upstream connection immediately when a downstream connection is accepted. However, in some scenarios it is beneficial to delay upstream connection establishment until certain conditions are met, such as:
Inspecting initial downstream data, for example extracting the SNI from a TLS ClientHello.
Waiting for the downstream TLS handshake to complete to access client certificate information.
Using the negotiated TLS parameters for routing decisions.
There are two ways to configure delayed upstream connection establishment:
Explicit configuration
The preferred method is to use upstream_connect_mode and max_early_data_bytes configuration fields. These provide explicit control over when the upstream connection is established and how early data is buffered.
Upstream Connection Modes:
IMMEDIATE (default): Establish the upstream connection immediately when the downstream connection is accepted. This provides the lowest latency and is the default behavior for backward compatibility.
ON_DOWNSTREAM_DATA: Wait for initial data from the downstream connection before establishing the upstream connection. This allows preceding filters to inspect the initial data before the upstream connection is established. This mode requires max_early_data_bytes to be set.
ON_DOWNSTREAM_TLS_HANDSHAKE: Wait for the downstream TLS handshake to complete before establishing the upstream connection. This allows access to the full TLS connection information, including client certificates and negotiated parameters. This mode is only effective when the downstream connection uses TLS; for non-TLS connections it behaves the same as IMMEDIATE.
Early Data Buffering:
The max_early_data_bytes field controls whether the filter chain can read downstream data before the upstream
connection is established (receive_before_connect mode). When set, downstream data is buffered up to the specified
limit and forwarded once the upstream connection is ready. When the buffer exceeds this limit, the downstream connection
is read-disabled to prevent excessive memory usage.
This field is independent of upstream_connect_mode. You can enable early data buffering with any connection mode:
IMMEDIATE with early data buffering: Connect immediately but still buffer early data for filter inspection.
ON_DOWNSTREAM_TLS_HANDSHAKE with early data buffering: Wait for the TLS handshake while buffering data.
ON_DOWNSTREAM_DATA: Must have early data buffering enabled (validated at config load time).
Example configuration:

```yaml
name: envoy.filters.network.tcp_proxy
typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
  stat_prefix: tcp
  cluster: upstream_cluster
  upstream_connect_mode: ON_DOWNSTREAM_DATA
  max_early_data_bytes: 8192
```
Attention
The ON_DOWNSTREAM_DATA mode is not suitable for server-first protocols where the server sends the initial
greeting (e.g., SMTP, MySQL, POP3). For such protocols, use IMMEDIATE mode.
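For TLS-aware routing scenarios, a sketch using ON_DOWNSTREAM_TLS_HANDSHAKE might look like the following (the stat prefix and cluster name are illustrative; max_early_data_bytes is optional in this mode, and the mode only takes effect when the downstream listener terminates TLS):

```yaml
name: envoy.filters.network.tcp_proxy
typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
  stat_prefix: tcp
  cluster: upstream_cluster
  # Hold the upstream connection until the downstream TLS handshake
  # completes, so client certificate information is available.
  upstream_connect_mode: ON_DOWNSTREAM_TLS_HANDSHAKE
  # Optionally buffer early application data while waiting.
  max_early_data_bytes: 16384
```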
Filter state configuration
The legacy method using filter state is still supported for backward compatibility but is not recommended for new
deployments. This can be done by setting the StreamInfo filter state object for the key
envoy.tcp_proxy.receive_before_connect to true. Setting this filter state must happen in the
initializeReadFilterCallbacks() callback of the network filter so that it is done before the TCP proxy filter
is initialized.
When the envoy.tcp_proxy.receive_before_connect filter state is set, the TCP proxy filter can receive data before
the upstream connection has been established. In that case it buffers any data received early and flushes it once
the upstream connection is established. Filters can also delay the upstream connection setup by returning
StopIteration from their onNewConnection and onData callbacks. On receiving early data, the TCP proxy read-disables
the downstream connection until the upstream connection is established, to protect the early data buffer from
overflowing.
Note
When using the explicit configuration method (max_early_data_bytes), the filter state approach
is ignored. The two methods are mutually exclusive, with the explicit configuration taking precedence.
Tunneling TCP over HTTP
The TCP proxy filter can be used to tunnel raw TCP over HTTP CONNECT or HTTP POST requests. Refer to HTTP upgrades for more information.
TCP tunneling is enabled by setting the Tunneling Config.
Additionally, if tunneling was enabled for a TCP session by configuration, it can be dynamically disabled per connection
by setting a per-connection filter state object under the key envoy.tcp_proxy.disable_tunneling. Refer to the implementation for more details.
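As a sketch, a TCP proxy that tunnels raw TCP over HTTP CONNECT via the tunneling config might look like the following (the cluster name and hostname are illustrative; the upstream cluster must point at an HTTP proxy that accepts CONNECT requests):

```yaml
name: envoy.filters.network.tcp_proxy
typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
  stat_prefix: tcp
  cluster: proxy_cluster
  tunneling_config:
    # The authority (host:port) placed in the CONNECT request.
    hostname: destination.example.com:443
```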
Statistics
The TCP proxy filter emits its own downstream statistics and access logs for upstream and downstream connections, as well as many of the cluster upstream statistics where applicable. The downstream statistics are rooted at tcp.<stat_prefix>. with the following statistics:
| Name | Type | Description |
|---|---|---|
| downstream_cx_total | Counter | Total number of connections handled by the filter |
| downstream_cx_no_route | Counter | Number of connections for which no matching route was found or the cluster for the route was not found |
| downstream_cx_tx_bytes_total | Counter | Total bytes written to the downstream connection |
| downstream_cx_tx_bytes_buffered | Gauge | Total bytes currently buffered to the downstream connection |
| downstream_cx_rx_bytes_total | Counter | Total bytes read from the downstream connection |
| downstream_cx_rx_bytes_buffered | Gauge | Total bytes currently buffered from the downstream connection |
| downstream_flow_control_paused_reading_total | Counter | Total number of times flow control paused reading from downstream |
| downstream_flow_control_resumed_reading_total | Counter | Total number of times flow control resumed reading from downstream |
| early_data_received_count_total | Counter | Total number of connections where the TCP proxy received data before upstream connection establishment completed |
| idle_timeout | Counter | Total number of connections closed due to idle timeout |
| max_downstream_connection_duration | Counter | Total number of connections closed due to the max_downstream_connection_duration timeout |
| on_demand_cluster_attempt | Counter | Total number of connections that requested an on-demand cluster |
| on_demand_cluster_missing | Counter | Total number of connections closed because the on-demand cluster was missing |
| on_demand_cluster_success | Counter | Total number of connections that requested and received an on-demand cluster |
| on_demand_cluster_timeout | Counter | Total number of connections closed due to on-demand cluster lookup timeout |
| upstream_flush_total | Counter | Total number of connections that continued to flush upstream data after the downstream connection was closed |
| upstream_flush_active | Gauge | Total connections currently continuing to flush upstream data after the downstream connection was closed |