HTTP connection management

HTTP is such a critical component of modern service-oriented architectures that Envoy implements a large amount of HTTP-specific functionality. Envoy has a built-in network-level filter called the HTTP connection manager. This filter translates raw bytes into HTTP-level messages and events (e.g., headers received, body data received, trailers received, etc.). It also handles functionality common to all HTTP connections and requests, such as access logging, request ID generation and tracing, request/response header manipulation, route table management, and statistics.
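As a concrete illustration, the sketch below shows a minimal listener that installs the HTTP connection manager as its network filter, enabling several of the features mentioned above (statistics, request ID generation, access logging, a route table). This is a hedged example, not an authoritative configuration: the listener name, port, stat prefix, and `some_backend` cluster are placeholders, and the field names follow the v3 configuration API as best recalled — verify them against the configuration reference.

```yaml
static_resources:
  listeners:
  - name: ingress_http
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http         # prefix for the statistics this filter emits
          generate_request_id: true         # request ID generation, as described above
          access_log:                       # access logging to stdout
          - name: envoy.access_loggers.stdout
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
          route_config:                     # a statically specified route table
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: some_backend }   # placeholder upstream cluster
          http_filters:
          - name: envoy.filters.http.router # terminal filter that forwards to the upstream
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```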

See the HTTP connection manager configuration reference for details.

HTTP protocols

Envoy’s HTTP connection manager has native support for HTTP/1.1, WebSockets, and HTTP/2. It does not support SPDY. Envoy’s HTTP support was designed, first and foremost, to be an HTTP/2 multiplexing proxy. Internally, HTTP/2 terminology is used to describe system components. For example, an HTTP request and response take place on a stream. A codec API is used to translate from different wire protocols into a protocol-agnostic form for streams, requests, responses, etc. In the case of HTTP/1.1, the codec translates the serial/pipelining capabilities of the protocol into something that looks like HTTP/2 to higher layers. This means that the majority of the code does not need to understand whether a stream originated on an HTTP/1.1 or HTTP/2 connection.
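The wire protocol a connection manager speaks downstream is selected by its codec setting. The fragment below is a hedged sketch of that one field (it belongs inside the HttpConnectionManager `typed_config` shown in the configuration reference); the enum value names are as best recalled from the v3 API.

```yaml
# Inside the HttpConnectionManager configuration:
codec_type: AUTO   # AUTO selects HTTP/1.1 or HTTP/2 per connection (e.g., via TLS ALPN);
                   # HTTP1 or HTTP2 instead forces a specific codec for all connections
```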

HTTP header sanitizing

The HTTP connection manager performs various header sanitizing actions for security reasons.

Route table configuration

Each HTTP connection manager filter has an associated route table. The route table can be specified in one of two ways:

  • Statically.
  • Dynamically via the RDS API.
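A static route table is specified inline (via a `route_config` block, as in the configuration reference). The dynamic alternative replaces that block with an RDS subscription that fetches the table from a management server. The sketch below is an assumption-laden example: the route configuration name and the `xds_cluster` cluster pointing at the management server are placeholders, and the field names follow the v3 API as best recalled.

```yaml
# Inside the HttpConnectionManager configuration, in place of route_config:
rds:
  route_config_name: my_routes            # placeholder: name requested from the RDS server
  config_source:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc: { cluster_name: xds_cluster }   # placeholder cluster for the management server
```

With RDS, the route table can be updated at runtime without draining connections or restarting Envoy.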


Timeouts

Various configurable timeouts apply to an HTTP connection and its constituent streams:

  • Connection-level idle timeout: applies to periods in which no streams are active on the connection.
  • Connection-level drain timeout: spans the time between an Envoy-originated GOAWAY and connection termination.
  • Stream-level idle timeout: applies to each individual stream. It may be configured at both the connection manager and per-route granularity. Header/data/trailer events on the stream reset the idle timeout.
  • Stream-level per-route upstream timeout: applies to the upstream response, i.e., a maximum bound on the time from the end of the downstream request until the end of the upstream response. This may also be specified at the per-retry granularity.
  • Stream-level per-route gRPC max timeout: bounds the upstream timeout and allows the timeout to be overridden via the grpc-timeout request header.
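The timeouts above map onto configuration fields at two levels: the connection manager itself and individual routes. The fragment below is a hedged sketch of both; the durations and the `some_backend` cluster are placeholders, and the field names (including `max_grpc_timeout`, which newer API versions may deprecate in favor of other mechanisms) follow the v3 API as best recalled — check the reference before relying on them.

```yaml
# On the HttpConnectionManager:
common_http_protocol_options:
  idle_timeout: 3600s          # connection-level idle timeout (no active streams)
drain_timeout: 5s              # time between an Envoy-originated GOAWAY and termination
stream_idle_timeout: 300s      # stream-level idle timeout, connection manager granularity

# On an individual route (inside a virtual host's routes list):
route:
  cluster: some_backend        # placeholder upstream cluster
  timeout: 15s                 # per-route bound on the upstream response
  idle_timeout: 60s            # per-route override of the stream idle timeout
  retry_policy:
    retry_on: "5xx"
    per_try_timeout: 5s        # per-retry bound on the upstream response
  max_grpc_timeout: 0s         # honor the grpc-timeout request header (0s applies it uncapped)
```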