Redis proxy

Statistics

Every configured Redis proxy filter has statistics rooted at redis.<stat_prefix>. with the following statistics:

Name                             Type     Description
downstream_cx_active             Gauge    Total active connections
downstream_cx_protocol_error     Counter  Total protocol errors
downstream_cx_rx_bytes_buffered  Gauge    Total received bytes currently buffered
downstream_cx_rx_bytes_total     Counter  Total bytes received
downstream_cx_total              Counter  Total connections
downstream_cx_tx_bytes_buffered  Gauge    Total sent bytes currently buffered
downstream_cx_tx_bytes_total     Counter  Total bytes sent
downstream_cx_drain_close        Counter  Number of connections closed due to draining
downstream_rq_active             Gauge    Total active requests
downstream_rq_total              Counter  Total requests
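
For illustration, a minimal sketch of the filter configuration (the stat_prefix value and cluster name here are hypothetical). With stat_prefix: egress_redis, the statistics above are emitted as redis.egress_redis.downstream_cx_total, redis.egress_redis.downstream_rq_active, and so on:

- name: envoy.filters.network.redis_proxy
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
    stat_prefix: egress_redis   # statistics are rooted at redis.egress_redis.
    settings:
      op_timeout: 5s
    prefix_routes:
      catch_all_route:
        cluster: redis_cluster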

Splitter statistics

The Redis filter will gather statistics for the command splitter in the redis.<stat_prefix>.splitter. namespace, with the following statistics:

Name                 Type     Description
invalid_request      Counter  Number of requests with an incorrect number of arguments
unsupported_command  Counter  Number of commands issued which are not recognized by the command splitter
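
With a stat_prefix of egress_redis, for example, these are emitted as redis.egress_redis.splitter.invalid_request and redis.egress_redis.splitter.unsupported_command.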

Per command statistics

The Redis filter will gather statistics for commands in the redis.<stat_prefix>.command.<command>. namespace. By default, latency stats are in milliseconds; they can be changed to microseconds by setting the configuration parameter latency_in_micros to true.

Name         Type       Description
total        Counter    Number of commands
success      Counter    Number of commands that were successful
error        Counter    Number of commands that returned a partial or complete error response
latency      Histogram  Command execution time in milliseconds (including delay faults)
error_fault  Counter    Number of commands that had an error fault injected
delay_fault  Counter    Number of commands that had a delay fault injected
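
As a sketch, reusing the hypothetical egress_redis prefix from above and assuming latency_in_micros sits alongside stat_prefix in the RedisProxy config, the latency histogram for the GET command would then be reported in microseconds under redis.egress_redis.command.get.latency:

typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
  stat_prefix: egress_redis
  latency_in_micros: true   # per-command latency histograms in microseconds instead of milliseconds
  settings:
    op_timeout: 5s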

Runtime

The Redis proxy filter supports the following runtime settings:

redis.drain_close_enabled
  % of connections that will be drain closed if the server is draining and would otherwise attempt a drain close. Defaults to 100.
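
As a sketch, this setting can be supplied through a static runtime layer in the bootstrap (the layer name and the value of 50 are illustrative):

layered_runtime:
  layers:
  - name: static_layer_0
    static_layer:
      redis.drain_close_enabled: 50   # drain close only 50% of eligible connections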

Fault Injection

The Redis filter can perform fault injection. Currently, Delay and Error faults are supported. Delay faults delay a request, and Error faults respond with an error. Moreover, errors can be delayed.

Note that the Redis filter does not check your configuration for correctness; it is the user's responsibility to make sure both the default and runtime percentages are correct! This is because percentages can be changed during runtime, and validating correctness at request time is expensive. If multiple faults are specified, the combined fault injection percentage should not exceed 100% for a given Redis command. For example, if two faults are specified, one applying to GET at 60% and one applying to all commands at 50%, that is a bad configuration: GET now has a 110% chance of having a fault applied, which means every GET request will have a fault.

If a delay is injected, the delay is additive: if the request took 400ms and a delay of 100ms is injected, the total request latency is 500ms. Also, because the Redis protocol requires the proxy to respect the order of the commands it receives, a delayed request will delay every request that arrives after it.

Note that faults must have a fault_enabled field and are not enabled by default (that is, if no default value or runtime key is set).

Example configuration:

faults:
- fault_type: ERROR
  fault_enabled:
    default_value:
      numerator: 10
      denominator: HUNDRED
    runtime_key: "bogus_key"
  commands:
  - GET
- fault_type: DELAY
  fault_enabled:
    default_value:
      numerator: 10
      denominator: HUNDRED
    runtime_key: "bogus_key"
  delay: 2s

This creates two faults: an error, applying only to GET commands at 10%, and a delay, applying to all commands at 10%. This means that 20% of GET commands will have a fault applied, as discussed earlier.

DNS lookups on redirections

As noted in the architecture overview, when Envoy sees a MOVED or ASK response containing a hostname, it will not perform a DNS lookup and will instead bubble the error up to the client. The following configuration example enables DNS lookups on such responses, avoiding the client error and having Envoy itself perform the redirection:

typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
  stat_prefix: redis_stats
  prefix_routes:
    catch_all_route:
      cluster: redis_cluster
  settings:
    op_timeout: 5s
    enable_redirection: true
    dns_cache_config:
      name: dns_cache_for_redis
      dns_lookup_family: V4_ONLY
      max_hosts: 100

AWS IAM Authentication

The Redis proxy filter supports authentication to ElastiCache and MemoryDB instances with AWS IAM credentials. To configure AWS IAM authentication, additional fields are provided in the cluster's Redis settings. If region is not specified, the region will be deduced using the region provider chain as described in Regions. cache_name is required and is set to the name of your cache. Both auth_username and cache_name are used when calculating the IAM authentication token. auth_password is not used in an AWS IAM configuration, as the password value is calculated automatically by Envoy. In your upstream cluster, the auth_username field must be configured with the user that has been added to your cache, as per Setup. Different upstreams may use different usernames and different cache names; credentials will be generated correctly based on the cluster the traffic is destined for. The service_name should be elasticache for an Amazon ElastiCache cache in Valkey or Redis OSS mode, or memorydb for an Amazon MemoryDB cluster. The service_name matches the service which is added to the IAM policy for the associated IAM principal being used to make the connection. For example, service_name: memorydb matches an AWS IAM policy containing the action memorydb:Connect, and that policy must be attached to the IAM principal used by Envoy.

  filter_chains:
  - filters:
    - name: envoy.filters.network.redis_proxy
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
        stat_prefix: egress_redis
        settings:
          op_timeout: 5s
        prefix_routes:
          catch_all_route:
            cluster: redis_cluster
clusters:
- name: redis_cluster
  connect_timeout: 1s
  type: STRICT_DNS
  load_assignment:
    cluster_name: redis_cluster
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: testcache-7dh4z9.serverless.apse2.cache.amazonaws.com
              port_value: 6379
  typed_extension_protocol_options:
    envoy.filters.network.redis_proxy:
      "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProtocolOptions
      auth_username:
        inline_string: test
      aws_iam:
        region: ap-southeast-2
        service_name: elasticache
        cache_name: testcache
        expiration_time: 900s