The Envoy configuration supports any number of listeners within a single process. Generally we recommend running a single Envoy per machine regardless of the number of configured listeners. This allows for easier operation and a single source of statistics.
Envoy supports both TCP and UDP listeners.
Each listener is independently configured with filter_chains, where an individual filter_chain is selected based on its filter_chain_match criteria.
An individual filter_chain is composed of one or more network level (L3/L4) filters.
When a new connection is received on a listener, the appropriate filter_chain is selected, and the configured connection-local filter stack is instantiated and begins processing subsequent events.
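As a sketch of the structure described above, a minimal static TCP listener with one filter chain might look like the following (the listener name, port, SNI match value, and the `upstream_cluster` cluster name are illustrative assumptions, and the cluster would need to be defined elsewhere in the configuration):

```yaml
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 10000
    filter_chains:
      # This chain is selected only for connections whose TLS SNI
      # matches the filter_chain_match criteria.
    - filter_chain_match:
        server_names: ["example.com"]
      filters:
        # The network-level (L3/L4) filter stack for the selected chain;
        # here a single TCP proxy filter.
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: ingress_tcp
          cluster: upstream_cluster
```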
The generic listener architecture is used to perform the vast majority of the proxy tasks that Envoy is used for (e.g., rate limiting, TLS client authentication, HTTP connection management, MongoDB sniffing, raw TCP proxy, etc.).
Listeners can optionally be configured with one or more listener filters. These filters are processed before the network level filters, and have the opportunity to manipulate the connection metadata, usually to influence how the connection is processed by later filters or clusters.
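For example, the built-in TLS inspector listener filter can be added to a listener to sniff the SNI and ALPN of an incoming connection before any network filter runs, making those values available for `filter_chain_match` selection. A hedged fragment (listener fields other than `listener_filters` omitted):

```yaml
    listener_filters:
      # Runs before the network filters; populates connection metadata
      # (detected transport protocol, server name) used for chain matching.
    - name: envoy.filters.listener.tls_inspector
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.listener.tls_inspector.v3.TlsInspector
```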
Listeners can also be fetched dynamically via the listener discovery service (LDS).
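A typical way to enable this is to point `lds_config` at a management server over gRPC. The fragment below is a sketch under the assumption that an `xds_cluster` pointing at the management server is defined in `static_resources`:

```yaml
dynamic_resources:
  lds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc:
          # Assumed to be a statically defined cluster that reaches
          # the xDS management server.
          cluster_name: xds_cluster
```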
See the Listener configuration, protobuf and components sections for reference documentation.
Envoy also supports UDP listeners and specifically UDP listener filters.
UDP listener filters are instantiated once per worker and are global to that worker.
Each listener filter processes each UDP datagram that is received by the worker listening on the port.
In practice, UDP listeners are configured with the SO_REUSEPORT kernel option, which causes the kernel to consistently hash each UDP 4-tuple to the same worker. This allows a UDP listener filter to be "session" oriented if it so desires. A built-in example of this functionality is the UDP proxy listener filter.
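A UDP proxy listener can be sketched roughly as follows (port, stat prefix, and the `udp_upstream` cluster name are illustrative assumptions; field names follow the Envoy v3 API, and the upstream cluster must be defined separately):

```yaml
  listeners:
  - name: udp_listener
    address:
      socket_address:
        protocol: UDP
        address: 0.0.0.0
        port_value: 5353
    # UDP listeners use listener filters (not network filters);
    # each worker's filter instance sees every datagram that the
    # kernel hashes to that worker.
    listener_filters:
    - name: envoy.filters.udp_listener.udp_proxy
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.udp.udp_proxy.v3.UdpProxyConfig
        stat_prefix: udp_service
        matcher:
          on_no_match:
            action:
              name: route
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.udp.udp_proxy.v3.Route
                cluster: udp_upstream
```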