Getting Started with Envoy Proxy - Beginner's Guide
Envoy is a high-performance edge and service proxy, similar to NGINX or HAProxy, written in C++ and designed for cloud-native applications. It was initially developed at Lyft and later open-sourced to the community.
Envoy Features
- Smaller memory footprint compared to other proxy servers
- Supports HTTP/2 and gRPC for incoming and outgoing connections
- Supports advanced load balancing features including automatic retries, circuit breaking, global rate limiting, request shadowing, and zone local load balancing.
- Deep observability of L7 traffic, native support for distributed tracing, and wire-level observability of MongoDB, DynamoDB, and more.
Due to Envoy's high performance and small memory footprint, major companies have been moving away from NGINX to Envoy. One such company is Dropbox, which migrated its services from NGINX to Envoy in 2020.
Companies that use Envoy include Airbnb, Amazon, Booking.com, Cookpad, DigitalOcean, Dropbox, eBay, F5, Google, Grubhub, IBM, Medium, Microsoft, Netflix, Pinterest, Salesforce, Snapchat, Stripe, Square, Tencent, Twilio, Uber, Verizon, VMware, Yahoo Japan, Yelp, and VSCO.
Installing Envoy
Depending on your operating system, there are several ways to install Envoy. For this guide, I'll demonstrate how to install Envoy on Debian.
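The following is a sketch of the Debian install via Envoy's apt repository; the repository URL and key location are based on the official install docs and may change over time, so verify them against the current Envoy documentation before running:

```bash
# Prerequisites for adding a third-party apt repository
sudo apt update
sudo apt install -y apt-transport-https gnupg2 curl lsb-release

# Add Envoy's signing key and apt repository (verify these URLs against the official docs)
curl -sL https://apt.envoyproxy.io/signing.key | sudo gpg --dearmor -o /usr/share/keyrings/envoy-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/envoy-keyring.gpg] https://apt.envoyproxy.io $(lsb_release -cs) main" \
  | sudo tee /etc/apt/sources.list.d/envoy.list

# Install Envoy and confirm the binary works
sudo apt update
sudo apt install -y envoy
envoy --version
```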
Now that you have Envoy installed, let's get a deeper understanding of Envoy's architecture.
Envoy Architecture/Configuration
Envoy uses the `yaml` format for its proxy configuration. Before you start writing the Envoy `.yaml` file, you will begin with a `node`, `dynamic_resources`, or `static_resources` section.
- `node`: uniquely identifies the proxy node.
- `dynamic_resources`: specifies where Envoy should retrieve its dynamic configuration from, i.e. configuration that is updated at runtime (typically an xDS management server).
- `static_resources`: holds configuration that is specified entirely at startup, such as listeners and clusters.
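To make that top-level structure concrete, here is a minimal, hypothetical skeleton; the `id` and `cluster` values are placeholders, and a real configuration would fill in the empty lists:

```yaml
node:
  id: my-envoy              # hypothetical node id used to identify this proxy
  cluster: my-envoy-cluster # hypothetical grouping name for this proxy

static_resources:           # everything below is fixed at startup
  listeners: []             # where Envoy accepts incoming traffic
  clusters: []              # the upstream backends Envoy forwards traffic to

# dynamic_resources would instead point Envoy at an xDS management server
# so listeners/clusters can be updated at runtime without a restart.
```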
What are `listeners`, `filter_chains`, and `clusters` in Envoy?
- `listeners`: This is where you configure the port and IP address that Envoy should be exposed on.
- `filter_chains`: Depending on our needs, `filter_chains` can differ. Basically, `filter_chains` allow us to modify what happens to incoming and outgoing requests, just like filters in photos. At a minimum, you should have a filter that defines how incoming/outgoing traffic is routed from/to the internal backend/microservices. You can have more `filter_chains` apart from the one that routes traffic.
- `clusters`: This is where the configuration for the backend/microservices resides. We can also define the type of load balancing we want to achieve within the cluster.
- `admin`: This exposes sensitive monitoring information for the admin and should not be exposed to the public (see the snippet below).
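For example, a minimal `admin` block might look like the following; the port is an arbitrary choice, and binding to `127.0.0.1` keeps it off public interfaces:

```yaml
admin:
  address:
    socket_address:
      address: 127.0.0.1 # bind to localhost only; never expose publicly
      port_value: 9901   # arbitrary local port for /stats, /clusters, etc.
```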
Now that we have an understanding of the different components used to write the Envoy `.yaml` file, below is a simple implementation.
Let's assume we have two microservices we want to proxy with Envoy:
- Microservice A: a Next.js frontend running on port `3000`
- Microservice B: a Go API backend running on port `5000`
We can achieve this in Envoy with a configuration like the one below.
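Here is a sketch of such an `envoy.yaml`; the listener port `8080`, the `127.0.0.1` upstream addresses, and the cluster names are assumptions for illustration:

```yaml
static_resources:
  listeners:
    - name: main_listener
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 8080          # assumed public-facing port for Envoy
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: services
                      domains: ["*"]
                      routes:
                        # /api/* goes to the Go backend; the prefix is rewritten to /
                        - match:
                            prefix: "/api"
                          route:
                            cluster: go_backend
                            prefix_rewrite: "/"
                        # everything else goes to the Next.js frontend
                        - match:
                            prefix: "/"
                          route:
                            cluster: nextjs_frontend
  clusters:
    - name: nextjs_frontend
      type: STATIC               # use STRICT_DNS instead if you target a hostname
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: nextjs_frontend
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1   # assumed same-host deployment
                      port_value: 3000
    - name: go_backend
      type: STATIC
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: go_backend
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1   # assumed same-host deployment
                      port_value: 5000
```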
We use `prefix_rewrite: "/"` on the `/api` route so the `/api` prefix does not reach our backend; instead, Envoy rewrites `/api` to `/` before forwarding the request.

To run Envoy, run:
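Assuming the configuration above was saved as `envoy.yaml` (any filename works with the `-c`/`--config-path` flag):

```bash
# Start Envoy with the configuration file we just wrote
envoy -c envoy.yaml
```

With the listener port assumed above, the Next.js frontend is then reachable at `http://localhost:8080/` and the Go API under `http://localhost:8080/api`.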
Conclusion
Envoy is a highly performant edge proxy despite its steep learning curve, which is why many big corporations use it in one way or another. The best way to use Envoy at scale is with a service mesh control plane like Istio.