<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Envoy Proxy - Medium]]></title>
        <description><![CDATA[Official blog of the Envoy Proxy - Medium]]></description>
        <link>https://blog.envoyproxy.io?source=rss----bb5932e836f2---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Envoy Proxy - Medium</title>
            <link>https://blog.envoyproxy.io?source=rss----bb5932e836f2---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 06 Apr 2026 17:35:47 GMT</lastBuildDate>
        <atom:link href="https://blog.envoyproxy.io/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[OSTIF collaborates with the Envoy Team to further improve security posture.]]></title>
            <link>https://blog.envoyproxy.io/ostif-collaborates-with-the-envoy-team-to-further-improve-security-posture-ba9ed380ce13?source=rss----bb5932e836f2---4</link>
            <guid isPermaLink="false">https://medium.com/p/ba9ed380ce13</guid>
            <category><![CDATA[fuzzing]]></category>
            <category><![CDATA[security-audit]]></category>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[security]]></category>
            <dc:creator><![CDATA[Amir Montazery]]></dc:creator>
            <pubDate>Thu, 17 Aug 2023 00:13:16 GMT</pubDate>
            <atom:updated>2023-08-17T00:13:16.518Z</atom:updated>
<content:encoded><![CDATA[<p><a href="https://www.envoyproxy.io/">Envoy</a>, the open source edge and service proxy designed for cloud-native applications, worked with OSTIF and <a href="https://x41-dsec.de/">X41 D-Sec</a> to help improve the project’s security posture. The multi-phased engagement, sponsored by Google, focused first on triaging and closing bugs, then on further improving the core fuzzers that continually monitor for security issues using the OSS-Fuzz infrastructure at <a href="https://github.com/google/oss-fuzz">https://github.com/google/oss-fuzz</a>. This engagement is part of a series of efforts to improve the security and reliability of Envoy. Earlier efforts include the <a href="https://cure53.de/pentest-report_envoy.pdf">2018 Audit of Envoy</a> by <a href="https://cure53.de/">Cure53</a> and the <a href="https://adalogics.com/">Ada Logics</a> <a href="https://blog.envoyproxy.io/a-stroll-down-fuzzer-optimisation-lane-and-why-instrumentation-policies-matter-f0012ec260b3">fuzzing infrastructure</a> work in 2021.</p><p>Phase I of the engagement was designed to help the Envoy Team reduce the number of open bugs identified, reduce noise in the existing fuzzers (so the Envoy team spends less time triaging non-issues), and improve fuzzing coverage where possible. OSTIF sourced four senior security experts from X41 D-Sec to complete the engagement.</p><p>Phase I resulted in the fixing of 68 security bugs. Furthermore, two critical vulnerabilities were found and proactively fixed. At the beginning of the Phase I engagement, there were 76 Total Fuzz Issues.
As fixes and improvements were implemented, new fuzz issues were identified for analysis, allowing for further iteration and fine-tuning.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*OOcExNRXIuM0H29v" /><figcaption>Figure 1: Total Fuzz Issues vs Fixed at end of Phase I (Sept, 2022)</figcaption></figure><p>Phase II focused on increasing the number of executions per second and improving the signal-to-noise ratio produced by the fuzzers. In addition to triaging the remaining open high-priority fuzzers from Phase I, there was an effort to close another 10 health-check-related fuzz issues. To cover more code faster, while remaining highly efficient and keeping output under control, X41 analyzed bugs from the first phase to determine how the fuzzers were running tests and where they were losing speed and validity. Of Envoy’s original 67 fuzzers, 19 were considered high priority because they test code paths exposed to potential attack vectors via dataplane traffic, in the form of QoDs and bugs that can impact the reliability and security of Envoy. Because Envoy is also offered as a service, fuzz testing and hardening the configuration interface is crucial. When fuzzers work on data and configurations simultaneously, they can tend to move away from configurations that need deeper testing, resulting in less code coverage. Envoy’s code base uses debug assertions, which further slowed down the testing process and often returned noise. To remediate these issues, X41’s team developed a plan: first, create a way to generate valid configuration files by introspecting the configuration description, already augmented by expressions supplying context; then, develop a two-step fuzzing process in which the generated configuration files are used for fuzzing.</p><p>The team of Markus Vervier, Eric Sesterhenn, Dr. Andre Vehreschild and Dr. Robert Femmer had to develop a way to address the set of points they identified as limiting the fuzzers’ functionality. They aimed to improve the critical fuzzers that run on HTTP decoders and stream management, specifically the HTTP/1.1, HTTP/2, and HTTP/3 codec interfaces.</p><p>Dr. Vehreschild and Dr. Femmer not only developed four new fuzzers, but also substantially improved Envoy’s overall security testing environment. The new fuzzers targeted HTTP decoders for high throughput, focusing on the primary attack surfaces of HTTP/1 (Balsa and http_parser), HTTP/2 (nghttp2 and oghttp2), and HTTP/3 (QUIC). To address the issues around configuration files, the team experimented with running configurations repeatedly to explore the data plane more deeply before generating another configuration. The configurations found could increase fuzzing speed by a factor of 4 to 20. Further work resulted in removing debug assertions and reducing the binary size of the fuzzers by a factor of 1.5. By developing ways to create valid and long-running configurations across the configuration and data planes, the performance of the fuzzers was increased by a factor of 40.</p><p>The graph below strongly suggests that, since <strong>March 2022</strong>, when Phase I of the engagement began, there has been a downward trend in the generation of new fuzz issues due to this collective effort.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*vb4AjlAZn1VA4v6g" /><figcaption>Figure 2: New Fuzz Issues since beginning of engagement (March 2022 — June, 2023)</figcaption></figure><p>This work took months to complete and was an open, discussion-based project that allowed all members to collaborate, help each other, and discuss the possibilities and the testing done.</p><p>We sincerely thank Kirtimaan Rajshiva, Adi Peleg, Yan Avlasov, Harvey Tuch, Joshua Marantz, and the entire Envoy Platform Team for funding and guiding this effort, along with Markus Vervier, Eric Sesterhenn, Dr. Andre Vehreschild, Dr.
Robert Femmer, and Sofie Seuren of X41 D-Sec for their diligent work and insight.</p><p>OSTIF is grateful for the opportunity to collaborate and improve security posture for the betterment of FOSS. We would also like to recognize Google, without whose funding this project would not have been possible.</p><p>You can read the audit report <a href="https://www.x41-dsec.de/static/reports/X41-OSTIF-Envoy-Fuzzing-20230816-Public.pdf">here</a>.</p><p>You can read more about the work done at X41’s blog <a href="https://x41-dsec.de/news/2023/08/16/envoy-fuzzing">here</a>.</p><p>For further information on Envoy, see <a href="https://www.envoyproxy.io/">here</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ba9ed380ce13" width="1" height="1" alt=""><hr><p><a href="https://blog.envoyproxy.io/ostif-collaborates-with-the-envoy-team-to-further-improve-security-posture-ba9ed380ce13">OSTIF collaborates with the Envoy Team to further improve security posture.</a> was originally published in <a href="https://blog.envoyproxy.io">Envoy Proxy</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Introducing Envoy Gateway]]></title>
            <link>https://blog.envoyproxy.io/introducing-envoy-gateway-ad385cc59532?source=rss----bb5932e836f2---4</link>
            <guid isPermaLink="false">https://medium.com/p/ad385cc59532</guid>
            <dc:creator><![CDATA[Matt Klein]]></dc:creator>
            <pubDate>Mon, 16 May 2022 07:02:51 GMT</pubDate>
            <atom:updated>2022-05-16T07:02:46.018Z</atom:updated>
            <content:encoded><![CDATA[<p>Today we are thrilled to announce <a href="https://github.com/envoyproxy/gateway">Envoy Gateway,</a> a new member of the Envoy Proxy family aimed at significantly decreasing the barrier to entry when using Envoy for API Gateway (sometimes known as “north-south”) use cases.</p><h3>History</h3><p>Envoy was <a href="https://medium.com/lyft-engineering/announcing-envoy-c-l7-proxy-and-communication-bus-92520b6c8191">released as OSS in the fall of 2016</a>, and much to our amazement quickly gained traction throughout the industry. Users were drawn to many different aspects of the project including its inclusive community, extensibility, API-driven configuration model, powerful observability output, and increasingly extensive feature set.</p><p>Although in its early history Envoy became synonymous with “service mesh,” its first use at Lyft was actually as an API gateway / edge proxy, providing in-depth observability output that aided Lyft’s migration from a monolithic to a microservice architecture.</p><p>Over the last 5+ years, we have seen Envoy adopted by a tremendous number of end users, both as an API gateway and as a sidecar proxy in “service mesh” roles. At the same time we have seen a large vendor ecosystem spring up around Envoy, providing a multitude of solutions both in the OSS and proprietary domains. Envoy’s vendor ecosystem has been critical to the project’s success; without funding for all of the employees that work part or full-time on Envoy the project would certainly not be what it is today.</p><p>The flip side of Envoy’s success as a component of many different architecture types and vendor solutions is that it is inherently low level; Envoy is not an easy piece of software to learn. 
While the project has had massive success being adopted by large engineering organizations around the world, it is only lightly adopted for smaller and simpler use cases, where <a href="https://nginx.org/">nginx</a> and <a href="http://www.haproxy.org/">HAProxy</a> are still dominant.</p><p>The Envoy Gateway project was born out of the belief that bringing Envoy “to the masses” in the API gateway role requires two primary things:</p><ul><li>A simplified deployment model and API layer aimed at lighter use cases.</li><li>Merging the existing <a href="https://www.cncf.io/">CNCF</a> API gateway projects (<a href="https://projectcontour.io/">Contour</a> and <a href="https://github.com/emissary-ingress/emissary">Emissary</a>) into a common core that can provide the best possible onboarding experience, while still allowing vendors to build value-added solutions based on Envoy Proxy and Envoy Gateway.</li></ul><p>We strongly believe that if the community converges around a single Envoy branded API gateway core, it will:</p><ul><li>Reduce duplicative efforts around security, control plane technical details, and other shared concerns.</li><li>Allow vendors to focus on layering value added functionality on top of Envoy Proxy and Envoy Gateway in the form of extensions, management plane UI, etc.</li><li>Lead to a “rising tide lifts all boats” phenomenon in which more users around the world enjoy the benefits of Envoy, whether their organization is large or small. More users feed the virtuous cycle of more potential customers, more support for the core Envoy project, and a better overall experience for all.</li></ul><h3>Project outline</h3><p>At a high level, Envoy Gateway can be thought of as a wrapper around the Envoy Proxy core. It will not change the core proxy, <a href="https://www.envoyproxy.io/docs/envoy/latest/api-docs/xds_protocol">xDS</a>, <a href="https://github.com/envoyproxy/go-control-plane">go-control-plane</a>, etc. 
in any way (other than potentially driving features, bug fixes, and general improvements!). It will provide the following functionality:</p><ul><li>A simplified API for the gateway use case. The API will be the <a href="https://gateway-api.sigs.k8s.io/">Kubernetes Gateway API</a> with some Envoy-specific extensions. This API was chosen because deployment on Kubernetes as an ingress controller is the initial focus for the project and because the API has broad industry buy-in.</li><li>A “batteries included” experience that will enable users to get up and running as fast as possible. This includes lifecycle management functionality that provisions controller resources, control plane resources, proxy instances, etc.</li><li>An extensible API surface. While the project will aim to make common API gateway functionality available out of the box (e.g., rate limiting, authentication, <a href="https://letsencrypt.org/">Let’s Encrypt</a> integration, etc.), vendors will be able to provide SaaS versions of all APIs, as well as additional APIs and value-added functionality such as WAF, enhanced observability, chaos engineering, etc.</li><li>High-quality documentation and getting-started guides. Our primary goal with Envoy Gateway is to make the most common gateway use cases trivial to stand up for the average user.</li></ul><p>On the topic of APIs, we believe that one area that has led to significant confusion is the effective reimplementation of Envoy’s <a href="https://www.envoyproxy.io/docs/envoy/latest/api-docs/xds_protocol">xDS</a> APIs in other projects when targeting advanced use cases. This pattern forces users to learn multiple sophisticated APIs (which ultimately translate back to xDS) in order to get their job done. As such, Envoy Gateway is committed to a “hard line in the sand” in which the Kubernetes Gateway API (and any permissible extensions within that API) is the <em>only</em> additional API that will be supported.
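</p><p>For concreteness, a minimal route of the kind the Kubernetes Gateway API defines can be sketched as follows. This is an illustrative example only, expressed as a Python dict mirroring the upstream manifest fields; the resource and field names come from the Gateway API project, the gateway and service names are hypothetical, and exactly which extensions Envoy Gateway will support is still being defined:</p>

```python
# A minimal HTTPRoute manifest from the upstream Kubernetes Gateway API,
# expressed as a Python dict for illustration. Envoy Gateway intends to
# consume resources of this shape rather than defining a bespoke API.
httproute = {
    "apiVersion": "gateway.networking.k8s.io/v1beta1",
    "kind": "HTTPRoute",
    "metadata": {"name": "example-route"},
    "spec": {
        # Attach the route to a Gateway resource (hypothetical name).
        "parentRefs": [{"name": "example-gateway"}],
        "hostnames": ["www.example.com"],
        "rules": [
            {
                # Match requests under /api and send them to a backend
                # Kubernetes Service (hypothetical name and port).
                "matches": [{"path": {"type": "PathPrefix", "value": "/api"}}],
                "backendRefs": [{"name": "api-service", "port": 8080}],
            }
        ],
    },
}
```

<p>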
More advanced use cases will be served by an “xDS mode” in which existing API resources will be automatically translated for the end user, who can then switch to utilizing the xDS APIs directly. This will lead to a crisper primary API while allowing an escape hatch for organizations that might outgrow the expressiveness of the primary API and wish to utilize the full power of Envoy via xDS.</p><h3>On API standardization</h3><p>Although a goal of Envoy Gateway is to provide a reference implementation for easily running Envoy in Kubernetes as an ingress controller, possibly the most important contribution of this effort will be <em>standardizing the APIs</em> that are used for this purpose. As the industry converges on specific Envoy Kubernetes Gateway API extensions, it will allow vendors to easily provide alternate SaaS implementations that may be preferable if a user outgrows the reference implementation, wants additional support and features, etc. Clearly, there is much work to do around defining the API extensions, determining which APIs are required versus optional for conformance, etc. This is the beginning of our standardization journey and we are eager to dive in with all interested parties.</p><h3>Next steps</h3><p>Today we are thankful for the initial sponsors of Envoy Gateway (<a href="https://www.getambassador.io/">Ambassador Labs</a>, <a href="https://www.fidelity.com/">Fidelity</a>, <a href="https://www.tetrate.io/">Tetrate</a>, and <a href="https://www.vmware.com/">VMware</a>) and are excited to start on this new journey with all of you. 
The project is still very early; the focus so far has been on agreeing on <a href="https://github.com/envoyproxy/gateway/blob/main/GOALS.md">goals</a> and a <a href="https://github.com/envoyproxy/gateway/blob/main/docs/design/SYSTEM_DESIGN.md">high-level design</a>, so it’s a great time to get involved, either as an end user or as a system integrator.</p><p>We also want to make it extremely clear that existing users of Contour and Emissary will not be left behind. The project (and <a href="https://www.vmware.com/">VMware</a> and <a href="https://www.getambassador.io/">Ambassador Labs</a>) is completely committed to ensuring a smooth eventual migration path for users of those projects to Envoy Gateway, either via translation and replacement, or via those projects becoming wrappers around the Envoy Gateway core.</p><p>We are extremely excited about bringing Envoy to a larger group of users via the Envoy Gateway project, and we hope you will <a href="https://github.com/envoyproxy/gateway#contact">join us</a> on our journey!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ad385cc59532" width="1" height="1" alt=""><hr><p><a href="https://blog.envoyproxy.io/introducing-envoy-gateway-ad385cc59532">Introducing Envoy Gateway</a> was originally published in <a href="https://blog.envoyproxy.io">Envoy Proxy</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Envoy Fundamentals, a training course to enable faster adoption of Envoy Proxy]]></title>
            <link>https://blog.envoyproxy.io/envoy-fundamentals-a-training-course-to-enable-faster-adoption-of-envoy-proxy-44060c9883bd?source=rss----bb5932e836f2---4</link>
            <guid isPermaLink="false">https://medium.com/p/44060c9883bd</guid>
            <category><![CDATA[training-courses]]></category>
            <category><![CDATA[cloud]]></category>
            <category><![CDATA[envoy-proxy]]></category>
            <category><![CDATA[proxy]]></category>
            <dc:creator><![CDATA[Peter Jausovec]]></dc:creator>
            <pubDate>Thu, 10 Feb 2022 17:30:46 GMT</pubDate>
            <atom:updated>2022-02-10T17:30:46.431Z</atom:updated>
<content:encoded><![CDATA[<p>Envoy Proxy, an open-source edge and service proxy, is a vital part of today’s modern, cloud-native applications and is used in production by large companies like Booking.com, Pinterest, and Airbnb (<a href="https://www.infoq.com/news/2018/12/envoycon-service-mesh/">source</a>). <a href="https://www.tetrate.io/">Tetrate</a>, a top contributor to Envoy, has developed <a href="https://academy.tetrate.io/courses/envoy-fundamentals">Envoy Fundamentals</a>, a free training course with a completion certificate, to help enterprises adopt the technology faster. It enables DevOps users, SREs, developers, and other community members to learn Envoy easily through concept explanations, practical labs, and quizzes. Tetrate is also the creator of the popular <a href="http://academy.tetrate.io/">Istio Fundamentals</a> training course and the open-source project <a href="https://www.func-e.io/">func-e</a>, which makes it easier to adopt Envoy.</p><p>“I am excited about Tetrate’s Envoy Fundamentals course and certification. It is well composed with information on the applications of Envoy and practical labs with step-by-step instructions and quizzes,” said Matt Klein, creator of Envoy Proxy and senior engineer at Lyft. “You will be rewarded with a certificate for completing the full course. Best of all, the training is entirely free. I highly recommend this course to people who want to learn and use Envoy Proxy.”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*SmydR25y4r_8Nq8y" /><figcaption>Envoy Fundamentals course certificate of completion</figcaption></figure><h3>Envoy is the default choice for building a service mesh</h3><p>The CNCF-graduated project <a href="https://www.envoyproxy.io/">Envoy</a> Proxy is the most popular sidecar and ingress provider. It’s the default sidecar in multiple service mesh projects, including Istio, Open Service Mesh, and AWS App Mesh.
As per <a href="https://www.cncf.io/wp-content/uploads/2020/11/CNCF_Survey_Report_2020.pdf">CNCF’s 2020 survey</a>, the usage of Envoy as an ingress provider increased 116%, with a total of 37% of respondents using Envoy Proxy in production.</p><p>Envoy was initially built at Lyft as a proxy to serve as a universal data plane for large-scale microservice service mesh architectures. The idea is to have Envoy sidecars run next to each service in your application, abstracting the network from the application. It works as an edge gateway, service mesh, and hybrid networking bridge. With Envoy, companies can scale their microservices by providing a more flexible release process and highly available and resilient infrastructure.</p><p>Envoy is rich in its network-related features such as retries, timeouts, traffic routing and mirroring, TLS termination, observability, and many more. As all network traffic flows through the mesh of Envoy proxies, it becomes necessary to observe traffic and problem areas, fine-tune the performance, and pinpoint any latency sources from a single place. With its many features and <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v3/api">vast APIs</a>, it might be overwhelming for users to navigate the extensive and comprehensive <a href="https://www.envoyproxy.io/docs/envoy/latest/start/start">documentation</a>, especially for beginners unfamiliar with proxies and just starting with their Envoy journey. Therefore, we decided to create a course that introduces the basic concepts of Envoy and its internals to enable a faster learning curve for users.</p><p>“Envoy has seen a rapid increase in adoption over the past years. 
This means access to easy-to-learn resources that provide users the ability to scale fast in their learning is crucial to keep Envoy adoption easier,” said Varun Talwar, Tetrate Co-founder.</p><h3>About the Envoy Fundamentals Course</h3><p>The free <a href="https://academy.tetrate.io/courses/envoy-fundamentals">Envoy Fundamentals course</a> consists of 8 modules, with multiple video lessons and labs within each module.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Gr--Xqpp_XEQqLzQ" /><figcaption>8 modules with multiple video lessons, practical labs, and quizzes</figcaption></figure><p>The course starts with an introduction to Envoy and explains concepts such as the HTTP connection manager filter, clusters, listeners, logging, the administrative interface, and extending Envoy. Each module includes practical labs with step-by-step instructions. The labs allow learners to practice the concepts explained, such as:</p><ul><li>Dynamic configuration of Envoy</li><li>Circuit breakers</li><li>Traffic splitting</li><li>Request mirroring</li><li>Global and local rate limiting</li><li>The HTTP tap filter</li><li>Extending Envoy using Lua scripts and Wasm, and more.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*pkkatd4y-VKa9IvV" /><figcaption>Envoy Fundamentals course curriculum</figcaption></figure><p>Quizzes after each module help you evaluate your knowledge and gauge your progress. After completing the course and all quizzes, you’ll receive a certificate of completion.
Sign up for the free <a href="https://academy.tetrate.io/courses/envoy-fundamentals">Envoy Fundamentals course</a> on the <a href="https://academy.tetrate.io/">Tetrate Academy website</a> to start learning.</p><p><strong>More resources to learn Envoy</strong></p><ul><li><a href="https://www.tetrate.io/blog/get-started-with-envoy-in-5-minutes/">Get started with Envoy in 5 minutes (blog)</a></li><li><a href="https://www.tetrate.io/blog/the-basics-of-envoy-and-envoy-extensibility/">The basics of Envoy and Envoy extensibility (blog)</a></li><li><a href="https://www.tetrate.io/blog/envoy-101-file-based-dynamic-configurations/">Envoy 101: File-based dynamic configurations (blog)</a></li><li><a href="https://www.youtube.com/watch?v=f0QEHEm9ERc">Istio Weekly: Envoy fundamentals (video)</a></li><li><a href="https://www.youtube.com/watch?v=JIq8wujlG9s&amp;t=2s">Istio Weekly: Developing Envoy Wasm Extensions (video)</a></li><li><a href="https://www.youtube.com/watch?v=spzfupads2o">Envoy deep dive (video)</a></li><li><a href="https://www.youtube.com/watch?v=55yi4MMVBi4&amp;t=0s">Lyft’s Envoy: Embracing a service mesh (video)</a></li><li><a href="https://www.youtube.com/watch?v=mJAYHHKmLhU&amp;t=0s">Making Envoy contributions feasible for everyone (video)</a></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*nSk_Cw5Yvs8_hmqA" /><figcaption>Envoy Fundamentals course reviews</figcaption></figure><h3>About Tetrate</h3><p>Started by Istio founders and Envoy maintainers to reimagine application networking, Tetrate is the enterprise service mesh company managing the complexity of modern, hybrid cloud application infrastructure. Its flagship product, Tetrate Service Bridge, provides a comprehensive, enterprise-ready service mesh platform built for multi-cluster, multitenancy, and multi-cloud deployments. Customers get consistent, baked-in observability, runtime security, and traffic management in any environment. 
Tetrate remains a top contributor to the open-source projects Istio and Envoy Proxy, and its team includes senior maintainers of Envoy. Find out more at <a href="http://www.tetrate.io">www.tetrate.io</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=44060c9883bd" width="1" height="1" alt=""><hr><p><a href="https://blog.envoyproxy.io/envoy-fundamentals-a-training-course-to-enable-faster-adoption-of-envoy-proxy-44060c9883bd">Envoy Fundamentals, a training course to enable faster adoption of Envoy Proxy</a> was originally published in <a href="https://blog.envoyproxy.io">Envoy Proxy</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[General Availability of Envoy on Windows]]></title>
            <link>https://blog.envoyproxy.io/general-availability-of-envoy-on-windows-267e4544994a?source=rss----bb5932e836f2---4</link>
            <guid isPermaLink="false">https://medium.com/p/267e4544994a</guid>
            <category><![CDATA[service-mesh]]></category>
            <category><![CDATA[envoy-proxy]]></category>
            <category><![CDATA[windows]]></category>
            <category><![CDATA[istio]]></category>
            <category><![CDATA[open-service-mesh]]></category>
            <dc:creator><![CDATA[Sotiris Nanopoulos]]></dc:creator>
            <pubDate>Tue, 18 May 2021 23:05:22 GMT</pubDate>
            <atom:updated>2021-05-21T18:23:28.656Z</atom:updated>
            <content:encoded><![CDATA[<h3>Announcing General Availability of Envoy on Windows</h3><figure><img alt="Envoy and Microsoft logo" src="https://cdn-images-1.medium.com/max/1024/0*XoIm2sa1mYVmBss5.png" /></figure><p>The Envoy project has always strived to make the network “transparent” to all applications running regardless of the programming language, the platform architecture, and the operating system. Today, we’re excited to announce that Envoy is now generally available for use on the Windows platform! You can start using Envoy on Windows for production workloads starting with version <a href="https://github.com/envoyproxy/envoy/releases/tag/v1.18.3">1.18.3</a>.</p><p>Porting Envoy on Windows has been a goal of the community <a href="https://github.com/envoyproxy/envoy/issues/129">since 2016</a>. Since then, the Envoy-Windows-Development group has made a lot of progress. Primarily composed of developers from VMware and Microsoft, the group has collaborated over the last year to get Envoy from an <a href="https://blog.envoyproxy.io/announcing-alpha-support-for-envoy-on-windows-d2c53c51de7b">alpha release</a> in October 2020 to a stable production-ready state today.</p><p>You can now use Envoy on Windows to build cloud-native applications, improve the observability of legacy applications, and even deploy Envoy alongside a Windows application as an edge proxy.</p><p>Before we delve into the public-facing features that we have built to improve the Windows experience, we would like to thank the Envoy developer community and maintainers for their guidance, support, and patience.</p><h3>Recently added features</h3><p>Since the Alpha release of Envoy on Windows we have added more features, enabled continuous integration, and improved the performance and reliability of Envoy on Windows.</p><h4>Improved polling mechanism with synthetic edge events</h4><p>Envoy solves the <a href="http://www.kegel.com/c10k.html">C10K</a> problem on Linux by serving many clients 
with each thread and using nonblocking I/O and edge-triggered readiness change notifications. However, edge-triggered change notifications are not supported on Windows Server 2019, which caused Envoy on Windows to spin and drain CPU resources.</p><p>To address this issue, we designed synthetic edge events: level-triggered events, managed by Envoy, that behave like edge-triggered ones. We achieved this by manually disabling event registration when a new event arrives and enabling it again only when needed.</p><p>In the integration tests, we observe that by switching to synthetic edge events, Envoy catches three orders of magnitude fewer events. This is a significant improvement that allows Envoy on Windows to scale to many concurrent connections. We plan to improve the event mechanism further: newer versions of Windows offer a native edge-events API, which we plan to integrate into Envoy.</p><h4>Windows container image</h4><p>We want operators and developers to be able to get started with Envoy on Windows with minimal friction. Since late October 2020, we have been publishing developer images on <a href="https://hub.docker.com/r/envoyproxy/envoy-windows-dev">Docker Hub</a>. These images contain various SDKs and tools that are particularly useful for developers looking to extend or experiment with Envoy. With version 1.18 we also publish slimmer images, <a href="https://hub.docker.com/r/envoyproxy/envoy-windows">envoy-windows</a>, which are more suitable for production environments.</p><h4>Improved diagnostics</h4><p>We expect that operators will want to run Envoy on different platforms with the same configuration. The <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/access_loggers/stream/v3/stream.proto#standard-streams-access-loggers">new stream access loggers</a> allow operators to redirect the access logs produced by the listeners and the admin portal to the standard output of the process.
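</p><p>The synthetic edge-event mechanism described above can be sketched in miniature. The following is a hypothetical Python illustration (Envoy’s actual implementation is in C++ and differs in detail), using a level-triggered selector that deregisters a ready socket until its handler explicitly re-arms it, so a still-readable socket does not wake the event loop on every iteration:</p>

```python
import selectors
import socket

class SyntheticEdgePoller:
    """Emulate edge-triggered wakeups on a level-triggered selector by
    deregistering a ready fd until its handler re-arms it."""

    def __init__(self):
        self._sel = selectors.DefaultSelector()

    def arm(self, sock, handler):
        # (Re-)enable readiness notifications for this socket.
        self._sel.register(sock, selectors.EVENT_READ, handler)

    def poll_once(self, timeout=None):
        for key, _ in self._sel.select(timeout):
            # Disable further notifications: a level-triggered poller would
            # otherwise report this fd on every iteration until it is drained.
            self._sel.unregister(key.fileobj)
            key.data(self, key.fileobj)  # the handler decides when to re-arm

# Demo: a socket pair whose reader drains the socket, then re-arms itself.
r, w = socket.socketpair()
received = []

def on_readable(poller, sock):
    received.append(sock.recv(1024))
    poller.arm(sock, on_readable)  # re-arm only after draining

poller = SyntheticEdgePoller()
poller.arm(r, on_readable)
w.sendall(b"hello")
poller.poll_once(timeout=1)
```

<p>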
Envoy uses the correct native API to write to standard output/error depending on the platform it runs on.</p><h4>Added support for the Clang compiler</h4><p>Envoy users leverage Envoy’s versatile extension model to build custom filters and features for their use cases. Part of that versatility is support for different architectures (arm) and compiler toolchains (Clang and GCC) on Linux. Following the spirit of the community, we have added support for Clang on Windows. Since January, the CI has built envoy.exe on every commit with both the MSVC and Clang compilers.</p><h4>Improved process management</h4><p>The Alpha release focused on functionality more than usability. Since then, we have added features that let developers and Windows-native operators manage the lifetime of the Envoy process with ease. Envoy now terminates gracefully when Ctrl + C and Ctrl + Break events are sent to the console, in the same way it handles SIGINT and SIGTERM. Additionally, we have added experimental support for running Envoy as a <a href="https://www.envoyproxy.io/docs/envoy/latest/start/quick-start/run-envoy#run-envoy-with-the-demo-configuration">Windows service</a>.</p><h3>Contribution statistics</h3><p>Although these statistics do not say much on their own, we would like to take a step back and look at what we have accomplished in the past year:</p><ol><li>The Windows development group has contributed 189 patches to the Envoy repository.</li><li>416 of Envoy’s 435 tests run on Windows at every commit.
16 tests are not compiled on Windows because the platform lacks support for the features they exercise, and the remaining 3 tests fail in the newly added QUIC support.</li><li>We support two compilers (MSVC and Clang), three runtimes (win32 native, SCM, and containers), and multiple versions of the Windows OS (Client and Server versions 1809 and above).</li></ol><h3>What’s next for Envoy on Windows?</h3><p>We still have a lot of work to do to bring Envoy on Windows to parity with Linux. We look forward to:</p><ol><li>Adding more sample sandboxes that demonstrate different use cases.</li><li>Improving the distribution of binaries.</li><li>Benchmarking and improving performance.</li><li>Integrating with service mesh solutions, like <a href="https://openservicemesh.io/">OSM</a>, on the upcoming <a href="https://cloudblogs.microsoft.com/windowsserver/2021/03/02/announcing-windows-server-2022-now-in-preview/">Windows Server 2022</a> release.</li></ol><h3>How do I provide feedback and get involved?</h3><p>We look forward to your feedback and comments. There are multiple ways to reach us, all equally effective, so choose whichever you prefer.</p><p>You can get in touch with the contributors working on Envoy on Windows to ask questions or provide feedback in the <a href="https://envoyslack.cncf.io/">Envoy slack workspace</a> #envoy-windows-dev channel. Additionally, we follow and triage all the <a href="https://github.com/envoyproxy/envoy/issues">Github issues</a>. We also follow the <a href="https://groups.google.com/g/envoy-dev">envoy-dev</a> and <a href="https://groups.google.com/g/envoy-announce">envoy-announce</a> Google groups and reply to questions and issues there. We also maintain a <a href="https://www.envoyproxy.io/docs/envoy/latest/faq/overview#windows">FAQ</a> on the documentation website.</p><p><em>One important note: if you encounter a bug that causes Envoy to crash, please reach out to envoy-security@googlegroups.com. 
You might have stumbled upon a security vulnerability that should not be publicly disclosed before we patch it.</em></p><p>Like every CNCF project, we host bi-weekly meetings which you can find on the <a href="https://goo.gl/PkDijT">Envoy CNCF calendar</a>. These meetings are a good place to start engaging and contributing to the Envoy roadmap on Windows.</p><p><em>Envoy Windows Development Group,</em></p><p><em>Sunjay Bhatia, William A. Rowe Jr (VMware)</em></p><p><em>Praveen Balasubramanian, Nick Grifka, Randy Miller, Sotiris Nanopoulos, David Schott (Microsoft)</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=267e4544994a" width="1" height="1" alt=""><hr><p><a href="https://blog.envoyproxy.io/general-availability-of-envoy-on-windows-267e4544994a">General Availability of Envoy on Windows</a> was originally published in <a href="https://blog.envoyproxy.io">Envoy Proxy</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A stroll down fuzzer optimisation lane and why instrumentation policies matter]]></title>
            <link>https://blog.envoyproxy.io/a-stroll-down-fuzzer-optimisation-lane-and-why-instrumentation-policies-matter-f0012ec260b3?source=rss----bb5932e836f2---4</link>
            <guid isPermaLink="false">https://medium.com/p/f0012ec260b3</guid>
            <category><![CDATA[vulnerability-analysis]]></category>
            <category><![CDATA[fuzzing]]></category>
            <category><![CDATA[proxy]]></category>
            <category><![CDATA[software-security]]></category>
            <dc:creator><![CDATA[David Korczynski]]></dc:creator>
            <pubDate>Fri, 14 May 2021 12:50:12 GMT</pubDate>
            <atom:updated>2021-05-14T13:06:43.971Z</atom:updated>
            <content:encoded><![CDATA[<p><em>This blog post goes through an in-depth view of libFuzzer internals and how this affects Envoy fuzzing performance. This blog post is by David Korczynski from </em><a href="https://adalogics.com/"><em>Ada Logics</em></a><em>.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/554/1*HVNShjlq98v_veyt7_xGDA.png" /></figure><p>During March and April this year security researchers from <a href="https://adalogics.com/">Ada Logics</a> worked on the fuzzing infrastructure of Envoy Proxy sponsored by the Linux Foundation. One of the focus areas was optimisation and in this blog post we will go through some of the findings of this work. The full report is a comprehensive 28-page document that is available <a href="https://github.com/envoyproxy/envoy/blob/main/docs/security/audit_fuzzer_adalogics_2021.pdf">here</a>.</p><p>In Envoy we have a large code base comprising roughly 1.3 million lines of code including auto-generated code (e.g. Protobuf code), testing infrastructure and also various important dependencies. To cater for security we have a comprehensive fuzzing suite that continuously analyses Envoy by way of <a href="https://google.github.io/oss-fuzz">OSS-Fuzz</a>. 
We follow the principles of an <a href="https://google.github.io/oss-fuzz/advanced-topics/ideal-integration/">ideal integration</a> and to this end we have a variety of fuzzers, some of which resemble unit tests in that they target specific areas of the Envoy proxy, such as our <a href="https://github.com/envoyproxy/envoy/blob/main/test/common/json/json_fuzz_test.cc">json fuzzer</a>, while others are closer to integration tests in that they target end-to-end concepts of Envoy, such as our <a href="https://github.com/envoyproxy/envoy/blob/main/test/integration/h2_fuzz.cc">HTTP2</a> end-to-end fuzzer.</p><p>The issue we had observed was that our end-to-end fuzzers ran with a low execution speed and, in particular, the execution speed was much lower than in our own performance experiments with Envoy Proxy, even after accounting for the slowdown of sanitizers such as AddressSanitizer. Since execution speed is an essential factor for the success of fuzzers, we wanted to resolve the excessive slowdown, and our investigation into this slowdown is the focus of this blog post.</p><h3>Coverage instrumentation and performance in libFuzzer</h3><p>In order to understand the reason for the slowdown we need to understand how coverage-guided fuzzing works in libFuzzer. We describe this in detail in the full report, so in this blog post we will keep it short.</p><p>From a simplified perspective, coverage-guided fuzzing with libFuzzer works as follows. First, we compile the target with fuzzer-specific instrumentation, which means we instrument the code with coverage-feedback instrumentation. This instrumentation places counters on each basic block of the target program.</p><p>Second, when we run the compiled fuzzer the following process happens. At instantiation the fuzzer identifies the number of counters in the target and creates a “corpus” object as a combination of all the counters. 
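As a toy model of this counter-and-corpus bookkeeping, the sketch below is a heavily simplified, hypothetical Python analogue (not libFuzzer's actual implementation): the hypothetical `toy_fuzz` and `target` functions stand in for the fuzzer loop and an instrumented program with three "basic blocks".

```python
import random

def toy_fuzz(target, n_counters, iterations=2000, seed=1):
    """Heavily simplified model of libFuzzer's loop: run the target,
    then scan *every* counter looking for new coverage."""
    rng = random.Random(seed)
    corpus = [b""]          # test cases that triggered new coverage
    seen = set()            # counters that have ever fired
    for _ in range(iterations):
        base = rng.choice(corpus)
        # mutate: occasionally flip bytes, occasionally append an "A"
        data = bytes(b ^ rng.randrange(256) if rng.random() < 0.1 else b
                     for b in base)
        if rng.random() < 0.3:
            data += b"A"
        counters = [0] * n_counters
        target(data, counters)
        # post-processing: walk ALL counters, even ones never touched
        new = {i for i, c in enumerate(counters) if c} - seen
        if new:             # new coverage -> keep this input
            seen |= new
            corpus.append(data)
    return corpus, seen

# hypothetical instrumented target with three "basic blocks"
def target(data, counters):
    counters[0] += 1
    if data[:1] == b"A":
        counters[1] += 1
        if len(data) >= 2:
            counters[2] += 1
```

Note that every iteration pays for the full counter scan in the post-processing step, regardless of how much code the input actually reaches — the cost this post is about.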
The fuzzer then continues by performing the following infinite loop:</p><ol><li>Execute the fuzzer entry point. Each basic block that is executed triggers an increase of its corresponding counter.</li><li>Post-process the execution by going through each of the counters in the program and logging their state. If any counter differs from previous runs, the execution triggered new coverage; the fuzzer therefore updates the corpus with the new test case and records the state of the counters.</li></ol><p>Following the description above, we can observe that there are performance penalties in two places when using libFuzzer, namely in the execution of the target as well as in the post-processing after each fuzz iteration. The performance penalty during execution of the target code is fairly obvious even to a non-fuzzing expert; however, the performance penalty of the post-processing is less obvious.</p><p>To measure the impact of the post-processing, consider the following simple fuzzer harness:</p><pre><strong>#include &lt;stdint.h&gt;</strong><br><strong>#include &lt;string.h&gt;</strong><br><strong>#include &lt;stdlib.h&gt;</strong><br><strong>#include &quot;target.h&quot;</strong></pre><pre><strong>int</strong> <strong>LLVMFuzzerTestOneInput</strong>(<strong>const</strong> <strong>uint8_t</strong> *data, <strong>size_t</strong> size){<br>  <strong>char</strong> *new_str = (<strong>char</strong> *)malloc(size+1);<br>  <strong>if</strong> (new_str == NULL){<br>    <strong>return</strong> 0;<br>  }<br>  memcpy(new_str, data, size);<br>  new_str[size] = &#39;\0&#39;;</pre><pre>  free(new_str);<br>  <strong>return</strong> 0;<br>}</pre><p>The above is an empty fuzzer that performs a few operations but essentially explores no code. 
However, it includes the <em>target.h</em> header file. Compiling this fuzzer with an empty <em>target.h</em> file, we can observe the number of counters inserted by the fuzzer instrumentation into the executable fuzzer:</p><pre>$ clang -fsanitize=fuzzer ./test.c <br>$ ./a.out -runs=0<br>INFO: Running with entropic power schedule (0xFF, 100).<br>INFO: Seed: 3040949426<br>INFO: Loaded 1 modules   (3 inline 8-bit counters): 3 [0x6ee0c0, 0x6ee0c3), <br>...</pre><p>The output shows us the fuzzer has 3 inline 8-bit counters. Now, if we compile the exact same fuzzer but with a <em>target.h</em> file defined as follows:</p><pre><strong>int</strong> <strong>target</strong>(<strong>char</strong> *addr, <strong>size_t</strong> size) {<br>  <strong>if</strong> (addr[0] == &#39;A&#39; &amp;&amp; size == 0) <strong>return</strong> 1;<br>  <strong>return</strong> 0;<br>}</pre><p>and then load the fuzzer again without executing any iterations, we observe the following:</p><pre>$ clang -fsanitize=fuzzer ./test.c <br>$ ./a.out -runs=0<br>INFO: Running with entropic power schedule (0xFF, 100).<br>INFO: Seed: 3295510585<br>INFO: Loaded 1 modules   (7 inline 8-bit counters): 7 [0x6ee0c0, 0x6ee0c7),</pre><p>This time, the fuzzer has 7 inline 8-bit counters, even though nothing in the fuzzer-relevant code has changed. That is, the fuzzers have the exact same coverage; the only difference between the two fuzzers is that one is compiled with additional, albeit unreachable, code which in turn is instrumented with coverage-feedback instrumentation.</p><p>The question is, what is the impact of these inline 8-bit counters even if the fuzzer does not change? In order to identify this, we can use a small script to generate arbitrarily large <em>target.h </em>files and then run an experiment for each fuzzer that counts the number of inline 8-bit counters and the number of executions per second of the resulting fuzzer. 
To observe this, we used the following small Python script to automate creation of the <em>target.h </em>file with an arbitrary number of if-statements similar to the if-statement above, as well as a simple bash script:</p><pre><strong>import</strong> os<br>max_iterations = int(os.environ[&#39;N&#39;])<br>func_impl = &quot;int target(char *addr, size_t size) {\n&quot;<br><strong>for</strong> i <strong>in</strong> range(max_iterations):<br>    func_impl += &quot;\tif (addr[0] == &#39;A&#39; &amp;&amp; size == %d) return %d;\n&quot;%(i, i)<br>func_impl += &quot;\treturn 0;\n}\n&quot;<br><br><strong>with</strong> open(&quot;target.h&quot;, &quot;w&quot;) <strong>as</strong> ff:<br>    ff.write(func_impl)</pre><p>The Python script generates a <em>target.h </em>file with <strong>N</strong> if-statements, and we then use the following bash script to run several experiments with various values of <strong>N</strong>:</p><pre><strong>for</strong> N <strong>in</strong> 1 1000 10000 50000 100000 200000 500000 1000000; <strong>do</strong><br>    echo &quot;[+] Starting analysis&quot;<br>    export N=${N}<br>    python ./make_p.py<br>    clang -fsanitize=fuzzer test.c<br>    ./a.out -runs=0<br>    ./a.out -max_total_time=60<br><strong>done</strong></pre><p>Running this script and noting the counters as well as the execution speed, we get the following data:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/323/1*ld07jifBEPlHt6zgw9-Ykw.png" /></figure><p>We observe that the number of inline 8-bit counters grows linearly with <strong>N</strong>, which is to be expected. Again, we emphasize that the fuzzer itself is still just a single function and all fuzzers have the same code coverage; the only difference is the amount of extra code in the final executable that is compiled with fuzzer-specific instrumentation, although this code is not reachable by the fuzzer.</p><p>The next question is: how do these counters impact the execution speed of the fuzzer? 
To measure this, we simply ran the fuzzer for 60 seconds (done in the bash script above) and noted the number of executions per second. We observed the following data:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/266/1*sxQ4c2xIglrAF8JAC0rT2g.png" /></figure><p>The following figure visualises the data:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/566/1*o7Nl2hde6LMIQbTyzk5Lug.png" /></figure><p>The difference between 7 inline 8-bit counters and 3,000,004 is a slowdown of 2407x despite the fuzzer having the exact same coverage (namely 2) and essentially executing very little code. The slowdown is entirely due to the post-processing of these counters.</p><h3>Impact on Envoy</h3><p>So how did this impact the Envoy fuzzers? As noted above, Envoy consists of a lot of code and the vast majority of this code was built with sanitizers. The HTTP2 end-to-end fuzzer had a staggering 1.3 million inline 8-bit counters, and there is a specific reason for instrumenting our codebase in this manner. Our fuzzing is run by OSS-Fuzz, which requires the fuzzers to be built statically. As such, we build a lot of our dependencies ourselves rather than using pre-compiled binaries, and all of our instrumentation flags are also used when compiling these dependencies.</p><p>To get a more meaningful understanding of how much time was spent inside the post-processing we profiled our system using <a href="https://github.com/optimyze/prodfiler-documentation">Prodfiler</a> and observed that 26% of samples were inside the post-processing logic of our fuzzer, meaning that roughly a quarter of our system processing was spent in this function:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/604/1*M8pbuQxWr8ueOj_hAuDJww.png" /></figure><p>The impact of 1.3 million counters is significant, and we observe in the experiment above that 1.5 million inline 8-bit counters cause a slowdown of 1203x on an empty fuzzer. 
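The dominance of this post-processing step is easy to reproduce in miniature. The following toy Python model (illustrative only, not libFuzzer code; the `scan_cost` helper is hypothetical) times iterations that do nothing except the per-run counter scan, for small and large counter counts:

```python
import time

def scan_cost(n_counters, iters=200):
    """Time `iters` iterations whose only work is the post-run scan
    over `n_counters` inline 8-bit counters (the 'execution' is empty)."""
    counters = bytearray(n_counters)
    start = time.perf_counter()
    for _ in range(iters):
        # post-processing: scan every counter looking for coverage hits
        covered = n_counters - counters.count(0)
        counters[:] = bytes(n_counters)  # reset counters for the next run
    return time.perf_counter() - start

small = scan_cost(8)          # a handful of counters, as in the empty fuzzer
large = scan_cost(1_000_000)  # roughly Envoy's order of magnitude
```

Even though both loops "execute" nothing, the run with a million counters spends orders of magnitude more time per iteration purely on bookkeeping, mirroring the slowdown measured above.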
It is also noteworthy here that not all of the code in our resulting fuzz binaries is actually reachable by a given fuzzer. Thus, by default, there is room for improvements in terms of avoiding instrumentation of irrelevant code. However, it may also be that there are improvements to be found by limiting coverage instrumentation of code that is reachable by a fuzzer. This observation raised a set of questions:</p><ol><li>Should we accept the slowdown of instrumenting all of the code in Envoy with coverage-guided instrumentation and continue in our usual manner?</li><li>Assuming we should not instrument all code with coverage-guided instrumentation, which parts should we focus on? Are some parts more relevant than others, e.g. parser-specific code?</li><li>Should we push the issue of selectively instrumenting the code of a fuzz target away from the Envoy codebase and instead focus on adding heuristics about this into libFuzzer?</li></ol><p>In essence, between (1) and (2) above, the solution must be based on empirical evidence. In this context, the empirical evidence is the amount of bugs we found, and not necessarily the execution performance of our fuzzers.</p><p>Reducing coverage instrumentation in turn reduces the amount of code the fuzzer explores, which can also have the benefit of helping the fuzzer focus on important parts of the codebase. This seems positive as we can focus our fuzzer even more on our threat model. However, it could also leave out specific areas of the code and this might indirectly affect how the fuzzer explores relevant code.</p><p>We experimented with reducing coverage instrumentation on certain parts of the code, resulting in decreasing the inline 8-bit counters from 1.3 million to 270,000. We saw a speedup of about 2–3x on our end-to-end fuzzers with this reduction. This was a quick win in terms of performance improvements, however, there is still a lot of room for improvement if we continue to reduce the amount of code instrumented. 
Unfortunately, this reduction also comes with added complexity in our build system, which is already fairly complex in and of itself.</p><h3><strong>Conclusions and future work</strong></h3><p>In an investigation into our fuzzing infrastructure we found that we incurred a non-negligible performance penalty in our fuzzers due to instrumenting a large codebase with coverage-feedback instrumentation. The post-processing of each fuzz iteration, driven by the number of coverage counters, had a significant impact on the overall performance.</p><p>We observed that we need a more refined policy for which parts of the codebase to instrument instead of instrumenting our entire codebase. Previously our goal was to instrument all of the codebase, whereas now we have observed the need to instrument code on a per-fuzzer basis.</p><p>In addition to improving our own build setup, we think it is worth exploring improvements to libFuzzer itself. Specifically, it should be possible for libFuzzer to approximate which coverage feedback a given fuzzer should use. The benefit of this is that there are potentially large gains to be achieved across all projects that use libFuzzer.</p><p>We thank the Linux Foundation for sponsoring this project as well as the team at Envoy for a fruitful and enjoyable collaboration.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f0012ec260b3" width="1" height="1" alt=""><hr><p><a href="https://blog.envoyproxy.io/a-stroll-down-fuzzer-optimisation-lane-and-why-instrumentation-policies-matter-f0012ec260b3">A stroll down fuzzer optimisation lane and why instrumentation policies matter</a> was originally published in <a href="https://blog.envoyproxy.io">Envoy Proxy</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Envoy support for OpenTelemetry access logging]]></title>
            <link>https://blog.envoyproxy.io/envoy-support-for-opentelemetry-access-logging-e4b08160d32c?source=rss----bb5932e836f2---4</link>
            <guid isPermaLink="false">https://medium.com/p/e4b08160d32c</guid>
            <category><![CDATA[opentelemetry]]></category>
            <category><![CDATA[access-log]]></category>
            <category><![CDATA[extension]]></category>
            <category><![CDATA[envoy]]></category>
            <category><![CDATA[envoy-proxy]]></category>
            <dc:creator><![CDATA[Itamar Kaminski]]></dc:creator>
            <pubDate>Mon, 22 Mar 2021 23:53:29 GMT</pubDate>
            <atom:updated>2021-03-22T23:53:29.140Z</atom:updated>
            <content:encoded><![CDATA[<h3>Announcing Alpha OpenTelemetry access logging support in Envoy</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/783/1*yYP0pPUUQRNxk3TEIC5O8Q.png" /></figure><p>Today we are excited to announce Alpha support for OpenTelemetry access logging in Envoy, which implements access logging based on the <a href="https://github.com/open-telemetry/opentelemetry-proto/releases/tag/v0.7.0">OpenTelemetry 0.7.0 Protocol release</a>. The OpenTelemetry project was first announced by the Cloud Native Computing Foundation <a href="https://www.cncf.io/">(CNCF)</a> in May of 2019; and merged OpenTracing and OpenCensus into a new, unified standard.</p><p>With this announcement, users <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/access_loggers/open_telemetry/v3alpha/logs_service.proto">can now configure</a> Envoy to export <a href="https://www.nuget.org/packages/OpenTelemetry.Exporter.OpenTelemetryProtocol/">OpenTelemetry Protocol (OTLP)</a> access logs in a flexible way, utilizing Envoy’s <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#format-rules">access logging formatter</a>.</p><p>Furthermore, by exporting these logs to an <a href="https://github.com/open-telemetry/opentelemetry-collector/blob/main/README.md">OpenTelemetry Collector</a>, the logs can be processed and exported as other telemetry data formats.</p><h3>What does Alpha support mean?</h3><p>Alpha support for OTLP in Envoy signifies that the Envoy and OpenTelemetry codebase have reached a stage where both the contributor and maintainer communities are confident it is stable enough for evaluation by the general public. 
We hope that by announcing this Alpha release, we can accelerate the process of collecting <a href="https://github.com/envoyproxy/envoy/issues/new/choose">community feedback</a> and contributions to push for a stable release in the near future (see <a href="https://github.com/envoyproxy/envoy/blob/main/EXTENSION_POLICY.md#extension-stability-and-security-posture">Envoy extension security and stability posture</a>).</p><p>The Alpha release signifies that the extension is functional but has not had substantial production burn time.</p><h3>The road ahead</h3><p>In the future, additional OpenTelemetry support will be introduced in Envoy, such as tracing and metrics and semantic conventions, to provide a complete OpenTelemetry Observability experience.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e4b08160d32c" width="1" height="1" alt=""><hr><p><a href="https://blog.envoyproxy.io/envoy-support-for-opentelemetry-access-logging-e4b08160d32c">Envoy support for OpenTelemetry access logging</a> was originally published in <a href="https://blog.envoyproxy.io">Envoy Proxy</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Security Scorecards & Envoy — Automating supply chain analysis]]></title>
            <link>https://blog.envoyproxy.io/security-scorecards-envoy-automating-supply-chain-analysis-7b8fd9829169?source=rss----bb5932e836f2---4</link>
            <guid isPermaLink="false">https://medium.com/p/7b8fd9829169</guid>
            <category><![CDATA[security]]></category>
            <category><![CDATA[open-source-software]]></category>
            <category><![CDATA[envoy-proxy]]></category>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[envoy]]></category>
            <dc:creator><![CDATA[Kim Lewandowski]]></dc:creator>
            <pubDate>Thu, 17 Dec 2020 22:05:01 GMT</pubDate>
            <atom:updated>2020-12-19T01:32:52.390Z</atom:updated>
            <content:encoded><![CDATA[<h3>Security Scorecards &amp; Envoy — Automating supply chain analysis</h3><p>The <a href="https://github.com/ossf/scorecard/">Security Scorecards</a> project is one of my favorite projects I’ve worked on while at Google. We <a href="https://openssf.org/blog/2020/11/06/security-scorecards-for-open-source-projects/">announced</a> it under the OpenSSF umbrella several weeks ago. It auto-generates a “security score” through a number of checks on OSS projects. The reason why I like this project so much is because it’s simple to understand, fully automated, uses objective criteria and has the ability to make a large impact across the OSS ecosystem by driving awareness and inspiring projects to improve their security posture.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/541/1*24XScQGXxmnusziEdG0w1A.png" /></figure><p>Right after the initial announcement, we learned that the Envoy project was looking for a mechanism to understand and enforce policy on the health of projects they take dependencies on. 
This gave us the opportunity to test Scorecards on a real-world project used in critical systems across the industry!</p><p>We helped <a href="https://twitter.com/htuch314">Harvey Tuch</a>, maintainer of Envoy, try out and evaluate Scorecards for their use case as part of their new <a href="https://github.com/envoyproxy/envoy/pull/14334">policy for external dependencies</a>.</p><blockquote>“Until recently, we’ve had no stance on external dependencies or criteria for determining if a new external dependency is acceptable.”</blockquote><p>First, for fun, let’s run Scorecards on the Envoy project itself, and then we can run it against all of Envoy’s dependencies.</p><p>For Envoy, we get these results:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/936/1*KGFFqO4FXRyN0e-gVtNH8A.png" /></figure><p>Not too shabby, and this prompted an <a href="https://github.com/envoyproxy/envoy/issues/14076">issue</a> for signing releases and another <a href="https://github.com/ossf/scorecard/pull/94">fix</a> for the Scorecards project!</p><p>Taking this one level deeper, here’s a snippet of the output against Envoy’s external dependencies:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kpUf5le2t2ffTK7BswPFrg.png" /><figcaption>green = pass, red = fail</figcaption></figure><p>It was awesome to see the conversations taking place amongst the maintainers of those projects on making improvements — “hey, can we get fuzzing integrated into this project?”</p><p>It’s working!! 😏</p><p>The Envoy project plans to integrate OpenSSF Scorecards into their dependency <a href="https://github.com/envoyproxy/envoy/blob/master/bazel/repository_locations.bzl">metadata</a> and enforce in CI policies around their dependencies. Scorecards will reduce the toil and manual effort when maintaining Envoy’s supply chain. A key aspect of their new policy is that automated criteria are applied first, and then where necessary exceptions are made for non-conforming projects. 
This deliberative process gives maintainers the opportunity to consider the relevant scorecard criteria, ask questions about missing criteria, and evaluate alternatives. No automated system will be perfect, but Envoy plans to collaborate with OpenSSF Scorecards to improve accuracy and relevancy.</p><p>I’m looking forward to seeing more case studies like this. It’s really motivating to see the beginnings of a success story and cross-community collaboration. If you’re a maintainer of an OSS project and interested in trying out Scorecards as Envoy did, tell me about it! You can find me and others working on projects like this in the <a href="https://slack.openssf.org/">Securing Critical Projects Slack Channel</a>.</p><p>Now if we can just figure out how to stop <a href="https://github.com/ossf/scorecard/issues/80">bumping up against GitHub’s API limits</a>. ;)</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7b8fd9829169" width="1" height="1" alt=""><hr><p><a href="https://blog.envoyproxy.io/security-scorecards-envoy-automating-supply-chain-analysis-7b8fd9829169">Security Scorecards &amp; Envoy — Automating supply chain analysis</a> was originally published in <a href="https://blog.envoyproxy.io">Envoy Proxy</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Envoy Proxy on Windows Containers]]></title>
            <link>https://blog.envoyproxy.io/envoy-proxy-on-windows-containers-193dffa13050?source=rss----bb5932e836f2---4</link>
            <guid isPermaLink="false">https://medium.com/p/193dffa13050</guid>
            <category><![CDATA[windows-containers]]></category>
            <category><![CDATA[envoy-proxy]]></category>
            <category><![CDATA[a-b-testing]]></category>
            <category><![CDATA[containers]]></category>
            <category><![CDATA[docker]]></category>
            <dc:creator><![CDATA[Sotiris Nanopoulos]]></dc:creator>
            <pubDate>Wed, 30 Sep 2020 20:29:08 GMT</pubDate>
            <atom:updated>2020-09-30T20:29:08.233Z</atom:updated>
<content:encoded><![CDATA[<blockquote>Recently the Envoy proxy announced the Alpha version for the Windows platform! You can find the announcement <a href="https://blog.envoyproxy.io/announcing-alpha-support-for-envoy-on-windows-d2c53c51de7b">here</a> and the instructions to take part in the Windows Alpha <a href="https://docs.google.com/document/d/1-sj_LSX93MXPbZpbV8TYc_WgpdHhd4LvrceVx2vDrMg/edit?usp=sharing">here</a>.</blockquote><p>Envoy is an L7 proxy and communication bus designed for large, modern service-oriented architectures. In this blog post we will walk you through using Envoy and Windows containers to perform A/B testing on a website.</p><p>At the end of this post you should know:</p><ol><li>How to get started with Envoy proxy.</li><li>How to use Envoy proxy as an HTTP proxy inside a Windows Server Container.</li><li>How to use Envoy proxy as an edge proxy to split the traffic between different containers.</li></ol><p>If you are new to Envoy, we would recommend the following material to onboard:</p><ol><li><a href="https://youtu.be/P719qI2h2yY">Intro: Envoy — Matt Klein &amp; Constance Caramanolis, Lyft</a></li><li>The project documentation page at <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/intro">envoyproxy.io</a>.</li></ol><h3>The customer scenario</h3><p>In this demo we want to split the user traffic of our website between two different services. In practice, traffic splitting is useful for A/B testing and rolling deployments.</p><p>The architecture of the system is the following:</p><figure><img alt="Envoy on Windows Containers demo architecture" src="https://cdn-images-1.medium.com/max/948/1*GWlRuQ0GaFHC1nG551wnAA.png" /></figure><p>To build the system we rely on two types of components. These components are:</p><ol><li>The <strong>Front-end Envoy container</strong> that sits at the edge of the network. This container balances the traffic between Service 1 and Service 2.</li><li>The <strong>Service container</strong>. 
This container serves the front-end of our pets website. There are two different instances of the pets website, each running on a different service container. One instance of the website shows dog images whereas the other shows cat images.</li></ol><p>All the containers are based on the 2019 Windows Server Core image. For the user code here we use Python 3 and Flask, although any other server technology will also work.</p><h3>Building and Running the Demo</h3><p>In this section we will incrementally build the system. First we will build the two types of containers that we need. For each container we will configure Envoy and validate that it works in isolation. Finally, we will compose all the containers together in a single network and have the full architecture up and running.</p><p>The code is available at <a href="https://github.com/davinci26/windows-envoy-samples">github.com/davinci26/windows-envoy-samples</a>.</p><h3>Requirements</h3><p>To follow along with the demo you will need the following:</p><ol><li>A Windows (Server) machine running version 2019 or later. The reason is that internally Envoy uses Unix domain sockets, a feature available on Windows starting with version 2019. You might not run into issues with an older version, but you will be in uncharted waters. Also, if you are running this code on a virtual machine, make sure that you have <a href="https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/nested-virtualization">nested virtualization</a> enabled.</li><li>An Envoy proxy static executable built from source. As Envoy for Windows moves from alpha → beta we will be providing pre-built binaries, but currently you will have to build Envoy from source. 
We have created <a href="https://docs.google.com/document/d/1-sj_LSX93MXPbZpbV8TYc_WgpdHhd4LvrceVx2vDrMg/edit#">this document to make the onboarding process a bit easier</a>.</li><li><a href="https://docs.docker.com/docker-for-windows/install/">Docker for Windows</a> and make sure that your docker engine <a href="https://docs.docker.com/docker-for-windows/#switch-between-windows-and-linux-containers">is switched to Windows containers</a>.</li></ol><h3>Setup the Service container</h3><p>The service container is responsible for hosting the application code. For the service container we will use the following Dockerfile:</p><pre># Service Container Image</pre><pre>FROM mcr.microsoft.com/windows/servercore:ltsc2019</pre><pre># Container Variables<br>ARG servicePath</pre><pre># Setup Python<br>COPY ./setup_python.ps1 /</pre><pre>RUN powershell.exe .\\setup_python.ps1<br>RUN pip3 install -q Flask==0.11.1</pre><pre># Copy local files for the flask server<br>RUN powershell -Command mkdir service/<br>ADD ${servicePath} service/</pre><pre># Copy envoy and its configuration<br>RUN powershell -Command mkdir envoy-config/<br>ADD ./envoy-service-config.yaml ./envoy-config/envoy-service-config.yaml<br>ADD ./envoy-static.exe ./envoy-static.exe</pre><pre># Set up the entrypoint<br>ADD ./service_entrypoint.ps1 ./<br>ENTRYPOINT powershell ./service_entrypoint.ps1</pre><p>The setup that we do on the container is to install Python and Flask and copy Envoy and its configuration. The entry point is a PowerShell script that spawns the Python Flask server and Envoy.</p><p>For the service container we use the minimal Envoy configuration that allows us to match all the traffic coming to the container port 8000 and forward it to the Python Flask server running on port 8080. 
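</p><p>As a rough stand-in for that Flask app, the service can be sketched with just the Python standard library (a hypothetical equivalent for illustration; the actual demo code lives in the linked repo):</p>

```python
# Hypothetical stand-in for the pets-website service (the real demo uses
# Flask; see the linked repo). It answers on port 8080, the port the
# service-side Envoy forwards traffic to.
from http.server import BaseHTTPRequestHandler, HTTPServer

SERVICE_ID = 1  # service 1 shows dogs, service 2 shows cats

class PetsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        pet = "dog" if SERVICE_ID == 1 else "cat"
        body = f"Hello from service {SERVICE_ID}: enjoy a {pet}!".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the container logs quiet

# Inside the container this would run as:
#   HTTPServer(("0.0.0.0", 8080), PetsHandler).serve_forever()
```

<p>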
The Envoy configuration used is available <a href="https://github.com/davinci26/windows-envoy-samples/blob/master/envoy-service-config.yaml">here</a>.</p><p>To build &amp; run the container you need to use:</p><pre>PS D:\envoy-test&gt; docker build -f &quot;./Dockerfile-service&quot; -t &quot;test:service&quot; --build-arg servicePath=./service1 .<br>PS D:\envoy-test&gt; docker run --publish 3000:8000 --detach --name bb test:service<br>PS D:\envoy-test&gt; curl &quot;http://localhost:3000&quot; -UseBasicParsing</pre><pre>StatusCode        : 200<br>StatusDescription : OK</pre><p>Now if you visit <a href="http://localhost:3000/">http://localhost:3000/</a> in your browser you should see a cute dog appearing on the screen.</p><figure><img alt="Service 1 output on localhost" src="https://cdn-images-1.medium.com/max/1024/1*EDpAwpTfm849Bg0DKsKGKw.png" /><figcaption>If you visit <a href="http://localhost:3000/">http://localhost:3000/</a> after running service 1 you should see this output.</figcaption></figure><p>To run Service 2, build the Service 2 container by changing the build argument to --build-arg servicePath=./service2.</p><h3>Setup the Front-end Envoy container</h3><p>For the front-end Envoy container we will create a container that is similar to the Service container. 
The Dockerfile that we use is the following:</p><pre>FROM mcr.microsoft.com/windows/servercore:ltsc2019</pre><pre># Copy envoy and its configuration<br>RUN powershell -Command mkdir envoy-config/<br>ADD ./envoy-frontend.yaml ./envoy-config/envoy-frontend.yaml<br>ADD ./envoy-static.exe ./envoy-static.exe</pre><pre># Create a log folder to store the stats<br>RUN powershell -Command mkdir logs/</pre><pre>ENTRYPOINT [&quot;envoy-static.exe&quot;, &quot;-c&quot;, &quot;./envoy-config/envoy-frontend.yaml&quot;, &quot;--service-cluster&quot;, &quot;front-envoy&quot;]</pre><p>In this container we only run an Envoy proxy that handles splitting the traffic between Service 1 and Service 2.</p><p>To orchestrate the traffic split, we add the following snippet to Envoy’s configuration:</p><pre>routes:<br>- match:<br>    prefix: &quot;/&quot;<br>    runtime_fraction:<br>      default_value:<br>        numerator: 50<br>        denominator: HUNDRED<br>      runtime_key: routing.traffic_shift.placeholder<br>  route:<br>    cluster: service1<br>- match:<br>    prefix: &quot;/&quot;<br>  route:<br>    cluster: service2</pre><p>This configuration creates two matching rules for the traffic coming to the container. The first matching rule, which applies to 50% of the traffic, routes the traffic to Service 1. Envoy routes the remaining traffic (other 50%) to Service 2. 
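</p><p>The selection logic of those two route entries can be modeled in a few lines of Python (an illustration only; Envoy’s real matcher is implemented in C++, and the names below are invented for the sketch):</p>

```python
import random

# Toy model of the route table above: the first rule matches the "/"
# prefix for numerator/denominator of the requests, and everything else
# falls through to the second rule.
ROUTES = [
    {"prefix": "/", "fraction": 50 / 100, "cluster": "service1"},
    {"prefix": "/", "fraction": 1.0, "cluster": "service2"},
]

def pick_cluster(path, rng=random.random):
    """Return the cluster the request is routed to, or None if no match."""
    for route in ROUTES:
        if path.startswith(route["prefix"]) and rng() < route["fraction"]:
            return route["cluster"]
    return None
```

<p>Over many requests roughly half land on each cluster, which is the behavior the demo relies on for A/B testing.</p><p>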
With this routing rule we perform A/B testing on the two services.</p><p>To build &amp; run the container you need to use:</p><pre>PS D:\envoy-test&gt; docker build -f &quot;./Dockerfile-envoy&quot; -t &quot;test:envoy&quot; .<br>PS D:\envoy-test&gt; docker run --publish 3005:8081 --detach --name fe test:envoy<br>PS D:\envoy-test&gt; curl &quot;http://localhost:3005&quot; -UseBasicParsing</pre><pre>StatusCode        : 200<br>StatusDescription : OK</pre><p>Now if you visit <a href="http://localhost:3005/">http://localhost:3005/</a> in your browser you should see the Envoy admin page.</p><h3>Compose the network together</h3><p>At this point every piece of the system is working. Now, we only need to assemble the network and deploy all the containers together. To achieve this we have created the following docker-compose file:</p><pre>version: &quot;3.7&quot;<br>services:<br>  front-envoy:<br>    build:<br>      context: .<br>      dockerfile: Dockerfile-envoy<br>    networks:<br>      - envoymesh<br>    expose:<br>      - &quot;8000&quot;<br>      - &quot;8080&quot;<br>      - &quot;8081&quot;<br>    ports:<br>      - &quot;3000:8080&quot;<br>      - &quot;8081:8081&quot;<br>    depends_on:<br>      - dog-service<br>      - cat-service</pre><pre>  dog-service:<br>    build:<br>      context: .<br>      dockerfile: Dockerfile-service<br>      args:<br>        - servicePath=./service1/<br>    expose:<br>        - &quot;8000&quot;<br>    networks:<br>      envoymesh:<br>        aliases:<br>          - service1<br>    environment: <br>      - ServiceId=1</pre><pre>  cat-service:<br>    build:<br>      context: .<br>      dockerfile: Dockerfile-service<br>      args:<br>        - servicePath=./service2/<br>    expose:<br>        - &quot;8000&quot;<br>    networks:<br>      envoymesh:<br>        aliases:<br>          - service2<br>    environment: <br>      - ServiceId=2</pre><pre>networks:<br>  envoymesh: {}</pre><p>To build &amp; run the composed 
multi-container application we run the following commands:</p><pre>PS D:\envoy-test&gt; docker-compose build --pull<br>PS D:\envoy-test&gt; docker-compose up -d<br>PS D:\envoy-test&gt; docker-compose ps<br>Name                        Command               State                            Ports<br>----------------------------------------------------------------------------------------------------------------------------<br>envoy-test_cat-service_1   cmd /S /C powershell ./ser ...   Up      8000/tcp<br>envoy-test_dog-service_1   cmd /S /C powershell ./ser ...   Up      8000/tcp<br>envoy-test_front-envoy_1   envoy-static.exe -c ./envo ...   Up      8000/tcp, 0.0.0.0:3000-&gt;8080/tcp, 0.0.0.0:8081-&gt;8081/tcp</pre><p>Now if you visit <a href="http://localhost:3000/">http://localhost:3000/</a> you should see the output from either Service 1 (dog service) or Service 2 (cat service). Refreshing the page triggers another request that goes through the front-end Envoy proxy. Envoy forwards the request to either Service 1 or Service 2.</p><p>Finally, we can go to the Envoy admin page hosted at <a href="http://localhost:8081/">http://localhost:8081</a> and see the stats that Envoy collects automatically.</p><figure><img alt="Envoy admin page" src="https://cdn-images-1.medium.com/max/646/1*5do_K5VwbnOSjTcoM9Xe9w.png" /><figcaption>Envoy admin page hosted on <a href="http://localhost:8081/">http://localhost:8081</a></figcaption></figure><p>For example, we can search for the upstream_rq_completed entry on the stats page. This entry tells us how many requests each service completed. You can learn more about how the Envoy stats work in this <a href="https://blog.envoyproxy.io/envoy-stats-b65c7f363342">blog post</a>.</p><h3>Recap</h3><p>In this blog post we built a multi-container system to split the traffic of our website between two services. To achieve that we relied on Windows Server Core containers and Envoy Proxy. 
We incrementally built each type of container in our system and then we composed them together.</p><p>Envoy on Windows is currently at an Alpha stage and we look forward to hearing your feedback. If you encounter any issue running the code above feel free to open an issue at <a href="https://github.com/envoyproxy/envoy">github.com/envoyproxy/envoy</a>. You can also reach out to the developers on the <a href="https://envoyslack.cncf.io/">Envoy proxy slack channel</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=193dffa13050" width="1" height="1" alt=""><hr><p><a href="https://blog.envoyproxy.io/envoy-proxy-on-windows-containers-193dffa13050">Envoy Proxy on Windows Containers</a> was originally published in <a href="https://blog.envoyproxy.io">Envoy Proxy</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Announcing Alpha Support for Envoy on Windows]]></title>
            <link>https://blog.envoyproxy.io/announcing-alpha-support-for-envoy-on-windows-d2c53c51de7b?source=rss----bb5932e836f2---4</link>
            <guid isPermaLink="false">https://medium.com/p/d2c53c51de7b</guid>
            <category><![CDATA[envoy]]></category>
            <category><![CDATA[envoy-proxy]]></category>
            <category><![CDATA[windows]]></category>
            <dc:creator><![CDATA[Sunjay Bhatia]]></dc:creator>
            <pubDate>Wed, 30 Sep 2020 20:28:47 GMT</pubDate>
            <atom:updated>2020-10-20T15:57:33.051Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PhwQzf6DWMsCvXY3fYj3RQ.png" /></figure><p>Porting Envoy to the Windows platform has been a <a href="https://github.com/envoyproxy/envoy/issues/129">goal of the project since 2016</a> and today we are excited to announce the Alpha release of Windows-native support for Envoy. The contributor community has been hard at work bringing Envoy’s rich feature set to Windows and this is another step in line with the project’s mission of making the network “transparent” to any application, regardless of language, architecture, or operating system.</p><p>Envoy is already in production use by a <a href="https://www.envoyproxy.io/#used-by">wide range of companies</a> and Windows support should open up its usage to additional cloud-native services, legacy .NET applications, and a whole host of other application architectures. Particularly promising is the potential for users to deploy Envoy alongside Windows applications running in the datacenter or public cloud on Windows Server, in Windows-based containers, or even alongside desktop applications.</p><p>The road to this Alpha announcement has been a long one but we hope we have done our part to improve the Envoy code base with cross platform code, new abstractions, and additional test coverage. If you are interested in a glimpse into the process of porting Envoy to Windows, take a look at this <a href="https://www.youtube.com/watch?v=FGBBeyZ-p1k&amp;ab_channel=CNCF%5BCloudNativeComputingFoundation%5D">presentation from KubeCon 2019</a> and look out for the upcoming <a href="https://sched.co/ecca">presentation at EnvoyCon 2020</a>. 
We would like to thank the Envoy maintainer team and especially Matt Klein and Lizan Zhou for enabling and supporting the Windows contributor group to reach this milestone.</p><h3>What does Alpha support on Windows mean?</h3><p>Alpha support for Envoy on Windows signifies that the Envoy codebase has reached a stage where the contributor and maintainer community is confident it is stable enough on Windows for evaluation by the general public. A General Availability (GA) release is also upcoming. We hope that by announcing this Alpha release, we can accelerate the process of collecting community feedback and contributions to push for a GA release.</p><p>As a result of getting to Alpha, Envoy compiles on Windows and tests are now required to pass in CI for every pull request and merged commit. In addition, there is a dedicated group of developers contributing to Windows, spending their time triaging reported issues and bugs, fixing CI failures and test flakes, and working with maintainers to ensure code quality and correctness (if you would like to get involved with this effort, see below!). The Alpha release does not signify that Envoy is suitable or supported for production workloads yet.</p><h3>How do I get started with Envoy on Windows?</h3><p>The project considers the master branch of the Envoy source repo to be release candidate quality at all times, and many organizations track and deploy master in production. As such, there is no “tagged” Alpha release commit; rather, the master branch should be considered Alpha release quality on Windows until a GA release occurs. 
In general, the Envoy codebase continues <a href="https://github.com/envoyproxy/envoy/graphs/code-frequency">to move forward rapidly</a>, so we recommend refreshing your source checkouts often to take advantage of the feedback and improvements from the contributor community.</p><h4>Update 10/20/2020: Windows Docker Image</h4><p>As of <a href="https://github.com/envoyproxy/envoy/pull/13374">this PR</a>, per-master-commit Windows Docker image builds containing a statically compiled Envoy binary are now published publicly. The image can be found <a href="https://hub.docker.com/r/envoyproxy/envoy-windows-dev">here</a>. The image entrypoint and configuration mirror the Linux “dev” image published per master commit and will run a <a href="https://github.com/envoyproxy/envoy/blob/master/configs/google_com_proxy.yaml">basic bootstrap configuration</a>. You may use this image as-is to evaluate Envoy or extract the compiled <em>envoy.exe</em> binary to another container image or to run outside of a container.</p><h4>Building From Source</h4><p>Documentation on setting up a build environment and compiling a statically linked Envoy executable from source on Windows with Bazel can be found <a href="https://github.com/envoyproxy/envoy/tree/master/bazel#building-envoy-with-bazel">here</a>. We also provide a Windows Server 2019 Server Core based Docker container image with all required tools to build and statically link Envoy, see <a href="https://github.com/envoyproxy/envoy/blob/master/ci/README.md">this document</a> for more details.</p><h4>Usage Example</h4><p>Once you have an Envoy binary and want to start getting familiar with using Envoy on Windows, a good place to start is <a href="https://blog.envoyproxy.io/envoy-proxy-on-windows-containers-193dffa13050">this tutorial</a>. 
You will run through a modified version of the <a href="https://www.envoyproxy.io/docs/envoy/latest/start/start#sandboxes">Front Proxy Sandbox</a> example that demonstrates the advantage of running Envoy collocated with your services: all requests are handled by the service Envoy, and efficiently routed to your services.</p><h3>Are there any Windows-specific differences to be aware of?</h3><p>Work on Windows support is still moving rapidly, and as of this Alpha release almost all core Envoy functionality should have parity with Linux. Service mesh support requires additional platform capabilities and we hope to enable this functionality with an upcoming release of Windows. <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/configuration">Envoy configuration</a> and usage should not differ between platforms other than common platform-specific details like file paths, socket options, etc. That said, some existing features of Envoy were designed and implemented with Linux in mind first, and as a result may be disabled on Windows or work in a limited capacity. You can find a list of Envoy APIs with <a href="https://github.com/envoyproxy/envoy/issues/13322">degraded or disabled functionality on Windows here</a>.</p><h3>How do I provide feedback and get involved?</h3><p>We expect users and new contributors may run into known issues or new bugs others have reported. The <a href="https://github.com/envoyproxy/envoy/issues?q=is%3Aissue+label%3Aarea%2Fwindows+">area/windows tag</a> in the Envoy issue tracker on GitHub and pulling the latest Envoy source from the master branch are great starting points if you encounter problems. Including “Windows:” in the title of any new issues and following the existing Envoy new issue templates will greatly help with triage. 
As always, PRs and issues are welcome to improve documentation in addition to Envoy source code.</p><p>To get in touch with full-time contributors to Envoy on Windows about how to get more involved with the project, development details, and detailed user scenarios, visit the <a href="https://envoyslack.cncf.io/">Envoy slack workspace</a> <em>#envoy-windows-dev</em> channel. We also hold a community meeting specifically for Windows contributors which you can find on the Envoy CNCF calendar <a href="https://goo.gl/PkDijT">here</a>. In addition to Github issues, this weekly meeting is a good place to stay in the loop with and contribute to the Envoy roadmap on Windows. The <a href="https://groups.google.com/g/envoy-dev">envoy-dev</a> and <a href="https://groups.google.com/g/envoy-announce">envoy-announce</a> Google groups are two other avenues via which we may solicit feedback.</p><p>We hope to lean on the community to get as much mileage as we can running Envoy on Windows and grow the community as we push forward to a GA release. Whether you would simply like to evaluate if Envoy suits your needs in a Windows environment or are interested in getting involved in active development on Windows, the project greatly appreciates detailed feedback. We look forward to collaborating with you and hearing how you use Envoy on Windows!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d2c53c51de7b" width="1" height="1" alt=""><hr><p><a href="https://blog.envoyproxy.io/announcing-alpha-support-for-envoy-on-windows-d2c53c51de7b">Announcing Alpha Support for Envoy on Windows</a> was originally published in <a href="https://blog.envoyproxy.io">Envoy Proxy</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Taming a Network Filter]]></title>
            <link>https://blog.envoyproxy.io/taming-a-network-filter-44adcf91517?source=rss----bb5932e836f2---4</link>
            <guid isPermaLink="false">https://medium.com/p/44adcf91517</guid>
            <category><![CDATA[envoy-proxy]]></category>
            <category><![CDATA[wasm]]></category>
            <category><![CDATA[webassembly]]></category>
            <category><![CDATA[extension]]></category>
            <category><![CDATA[envoy]]></category>
            <dc:creator><![CDATA[Yaroslav Skopets]]></dc:creator>
            <pubDate>Sun, 13 Sep 2020 20:52:58 GMT</pubDate>
            <atom:updated>2020-09-13T20:52:58.189Z</atom:updated>
            <content:encoded><![CDATA[<p>This blog post continues the series “<a href="https://blog.envoyproxy.io/how-to-write-envoy-filters-like-a-ninja-part-1-d166e5abec09">How to Write Envoy Filters Like a Ninja!</a>”.</p><p>If you are considering developing a custom <em>Envoy</em> extension, it is crucial to have a solid understanding of how request processing works in <em>Envoy</em> and the role <em>Envoy</em> extensions play.</p><p>Today, we will take a closer look at <em>Network Filters</em>.</p><p>We begin with <em>Network Filters</em> partly because they have a simpler model, but also because support for HTTP in <em>Envoy</em> is implemented as yet another <em>Network Filter</em>. It wouldn’t be possible to explain <em>HTTP Filters</em> without referring to <em>Network Filters </em>first.</p><p>We’ll start from a general overview and then go over the practical applications.</p><p>Let’s get going!</p><h3>Lifecycle of a Network Filter</h3><p><em>Envoy</em> is fundamentally a L3/L4 proxy capable of handling any protocols at or above that level.</p><p>The workhorse of <em>Envoy </em>is a <em>Listener</em> — a concept responsible for accepting incoming (also known as “downstream”) connections and kicking off the request processing flow.</p><p>Each connection is processed by passing received data through a series of <em>Network Filters </em>collectively referred to as a <em>Filter Chain</em>.</p><p><em>Network Filters </em>intercept data (TCP payloads) flowing in both directions:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*RM-Rif51UiVlZmYC" /><figcaption>Figure 1</figcaption></figure><p>The “<em>Downstream &gt; Envoy &gt; Upstream</em>” path is referred to in <em>Envoy</em> as the “read” path, and the opposite direction is referred to as the “write” path.</p><p>Unlike the <em>Filter </em>concept you’ve seen in other APIs, <em>Filters </em>in Envoy are stateful. 
A separate instance of <em>Network Filter </em>is allocated for every connection.</p><p>The interface of a <em>Network Filter</em> consists of the following callbacks.</p><p>On the “read” path:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/718/0*xz7HnKZUgnAJ5SVi" /><figcaption>Figure 2</figcaption></figure><p>On the “write” path:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/716/0*lCKegtQgIcyjFmcl" /><figcaption>Figure 3</figcaption></figure><p>A <em>Network Filter</em> may also subscribe to connection events:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/708/0*oqLTm3LN20NkjKWo" /><figcaption>Figure 4</figcaption></figure><p>As mentioned earlier, <em>Envoy</em> processes received data by iterating through the filter chain.</p><p>On connect from the <em>Downstream</em>, <em>Envoy</em> will iterate through the filter chain to call <em>onNewConnection()</em>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*CDNRYy-Ow7s9lin8" /><figcaption>Figure 5</figcaption></figure><p>If a <em>Network Filter </em>returns <em>StopIteration</em> from its callback, <em>Envoy</em> will not proceed to the next filter in the chain.</p><p>Beware that <em>StopIteration </em>only means “don’t call filters after me for this particular iteration cycle” as opposed to “don’t do any further processing on that connection until I give a green light”.</p><p>For example, even if a <em>Network Filter</em> returns <em>StopIteration </em>from its <em>onNewConnection() </em>callback, once <em>Envoy</em> receives a chunk of request data from the <em>Downstream, </em>it will iterate through the filter chain again, this time calling the <em>onData() </em>callback. 
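</p><p>The iteration semantics can be sketched with a toy Python model (Envoy itself is written in C++; the names below mirror the callbacks described here but are otherwise invented for the sketch):</p>

```python
from enum import Enum, auto

class FilterStatus(Enum):
    CONTINUE = auto()
    STOP_ITERATION = auto()

def run_event(filters, callback, *args):
    # One iteration cycle over the chain. StopIteration ends only this
    # cycle; the next event starts a fresh pass from the first filter.
    for f in filters:
        if getattr(f, callback)(*args) is FilterStatus.STOP_ITERATION:
            return

class Recorder:
    """Records which callbacks this filter instance has seen."""
    def __init__(self, new_connection_status=FilterStatus.CONTINUE):
        self.status = new_connection_status
        self.events = []
    def on_new_connection(self):
        self.events.append("new_connection")
        return self.status
    def on_data(self, data):
        self.events.append("data")
        return FilterStatus.CONTINUE

a = Recorder(FilterStatus.STOP_ITERATION)
b = Recorder()
run_event([a, b], "on_new_connection")  # b is skipped in this cycle...
run_event([a, b], "on_data", b"chunk")  # ...but a later event reaches both
# (real Envoy would additionally deliver the missed onNewConnection() to b
# before its onData(), a guarantee this sketch does not model)
```

<p>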
Filters that haven’t seen <em>onNewConnection() </em>yet are guaranteed to see it prior to <em>onData()</em>.</p><p>From the perspective of a single <em>Network Filter</em>, the request processing flow looks the following way.</p><p>On the “read” path:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/946/0*P-DiF4YqC2FpOqCb" /><figcaption>Figure 6</figcaption></figure><p>On the “write” path:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/850/0*8PdUw10FP7Dn8A_L" /><figcaption>Figure 7</figcaption></figure><p>We will explain the difference between <em>onNewConnection() </em>and <em>onEvent(Connected) </em>in the section on <em>Gatekeeping</em> along with a practical example.</p><p>Right now, let’s focus on <em>StopIteration </em>and the effect it has on data buffering in <em>Envoy</em>.</p><p>The first thing to know is that “read” and “write” paths in <em>Envoy</em> are not symmetrical and will be described separately.</p><p>On the “read” path:</p><p>When <em>Envoy</em> receives a new chunk of request data from the <em>Downstream</em>:</p><ul><li>First, it appends the new chunk to the “read” buffer.</li><li>Next, it iterates over the filter chain calling <em>onData() </em>with the entire “read” buffer as a parameter (as opposed to the new chunk only).</li><li>The “read” buffer will normally be drained by the terminal filter in the chain (e.g., <em>TcpProxy</em>).</li><li>However, if one of the filters in the chain returns <em>StopIteration </em>without draining the buffer, the data will remain buffered.</li></ul><p>When <em>Envoy</em> receives the next chunk of request data from the <em>Downstream</em>:</p><ul><li>It will again append the new chunk to the “read” buffer.</li><li>Next, it iterates over the filter chain calling <em>onData() </em>with the entire “read” buffer as a parameter.</li></ul><p>The important thing to notice is that, due to the presence of the “read” buffer, a <em>Network Filter </em>might observe the same data twice in its 
<em>onData()</em> callback!</p><p>Finally, how safe is it to let the “read” buffer keep growing? Can it grow indefinitely or overflow? The good news is that <em>Envoy</em> takes care of this aspect automatically and will stop reading data from the socket as soon as the size of the “read” buffer exceeds the limit (1MiB by default).</p><p>On the “write” path:</p><p>There is no equivalent of the “read“ buffer on the “write” path.</p><p>When <em>Envoy</em> receives a new chunk of response data from the <em>Upstream</em>:</p><ul><li>It iterates over the filter chain calling <em>onWrite() </em>with the new chunk as parameter.</li><li>If all filters in the chain return <em>Continue</em>, the chunk will be appended to the “write” buffer (response data ready to be sent back to the <em>Downstream</em>).</li><li>If one of the filters returns <em>StopIteration</em>, the chunk will be dropped.</li></ul><p>It’s worth noting one more time that <em>StopIteration </em>has effect only on a single iteration cycle through the filter chain. It’s not a signal to “stop further processing until I give a green light“. The next time <em>Envoy</em> receives a chunk of data, it will start calling filters again no matter whether they returned <em>Continue</em> or <em>StopIteration</em> last time. Consequently, if a filter needs to wait for some external event to occur, it has to keep returning <em>StopIteration </em>from <em>onData</em>()/<em>onWrite</em>() callback until that very moment.</p><p>To prevent the “write” buffer from overflowing, <em>Envoy</em> implements a concept of flow control (also known as “backpressure”). Its purpose is to stop receiving data from the remote side (i.e., <em>Upstream</em>) if the local buffer is full (i.e., “write” buffer on the “downstream” connection). You can learn more about flow control <a href="https://github.com/envoyproxy/envoy/blob/master/source/docs/flow_control.md">here</a>.</p><p>At this point, we’ve touched most of the request processing flow. 
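</p><p>The asymmetry between the two paths can be condensed into a small Python model (a sketch of the behavior described above, not Envoy’s actual code; the 1 MiB value is Envoy’s default per-connection buffer limit):</p>

```python
HIGH_WATERMARK = 1024 * 1024  # default per-connection buffer limit (1 MiB)

class Connection:
    """Toy model of Envoy's read/write buffering rules (sketch only)."""
    def __init__(self, read_filters, write_filters):
        self.read_filters = read_filters
        self.write_filters = write_filters
        self.read_buffer = bytearray()
        self.write_buffer = bytearray()
        self.reading_enabled = True

    def on_downstream_data(self, chunk):
        # "read" path: append first, then every filter sees the WHOLE buffer.
        self.read_buffer += chunk
        for f in self.read_filters:
            if f.on_data(self.read_buffer) == "StopIteration":
                break  # data stays buffered and will be observed again
        # Envoy stops reading from the socket once the limit is exceeded.
        self.reading_enabled = len(self.read_buffer) <= HIGH_WATERMARK

    def on_upstream_data(self, chunk):
        # "write" path: filters see only the new chunk; StopIteration drops it.
        for f in self.write_filters:
            if f.on_write(chunk) == "StopIteration":
                return
        self.write_buffer += chunk

class Stopper:
    """A filter that never drains and always stops iteration."""
    def __init__(self):
        self.seen = []
    def on_data(self, buf):
        self.seen.append(bytes(buf))
        return "StopIteration"
    def on_write(self, chunk):
        return "StopIteration"
```

<p>Feeding such a connection b&quot;ab&quot; and then b&quot;cd&quot; makes the filter observe b&quot;ab&quot; followed by b&quot;abcd&quot;: the same bytes twice, exactly as described above, while chunks stopped on the “write” path never reach the “write” buffer.</p><p>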
The final group of related <em>APIs</em> — <em>continueReading</em>(), <em>injectReadDataToFilterChain</em>(), and <em>injectWriteDataToFilterChain</em>() — will be explained in the section on <em>Reshaping Traffic</em> with a practical example.</p><h3>Practical Applications</h3><p>Not every <em>Network Filter</em> has to be as complicated as the <em>HTTP Connection Manager</em>.</p><p>Much more often, a <em>Network Filter </em>implements a single very specific action, such as rate limiting or authorization, and relies on other filters in the chain to do routing, load balancing, connecting to the <em>Upstream</em>, etc.</p><p>Let’s take a look at some practical applications of <em>Network Filters</em>.</p><h4>Gatekeeping</h4><p>One simple application of <em>Network Filters</em> is rejecting unwanted connections.</p><p>A few great examples would be the <em>RBAC filter</em>, <em>External Authorization filter</em>, <em>Client SSL Auth filter, Rate Limit filter</em>, etc.</p><p>These filters remain neutral to the application-level protocol and make use of only a few <em>Envoy</em> APIs.</p><p>A typical request flow implemented by these filters looks like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1012/0*P5Dq5MCMqf49VddZ" /><figcaption>Figure 8</figcaption></figure><p>Although it might seem intuitive to always initiate an auxiliary external request from the context of the <em>onNewConnection()</em> callback, it is not possible in certain cases.</p><p><em>Envoy</em> calls <em>onNewConnection()</em> [at least, on the first filter in the filter chain] as soon as a new connection has been accepted by the <em>Listener</em>. However, in the case of TLS connections, the TLS handshake is not yet complete at this point. 
If a <em>Network Filter</em> depends on information from the TLS handshake, e.g., the <em>Client SSL Auth filter</em>, it cannot do much in <em>onNewConnection()</em>.</p><p>In such a case, a <em>Network Filter </em>should defer the auxiliary external request until <em>onEvent(Connected)</em> is called on that filter. <em>Connected, </em>despite such a general name, is in fact a very specific event that gets fired right after the TLS handshake completes successfully.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/974/0*Krta0jv4pwBwJk2O" /><figcaption>Figure 9</figcaption></figure><p>Beware of the difference between returning <em>Continue</em> vs <em>StopIteration</em> from the <em>onNewConnection() </em>callback, which becomes apparent in the case of gatekeeping filters.</p><p>If a filter chain includes the <em>TcpProxy filter</em> (always the last filter in the chain), a listener will not start reading data from that connection until <em>onNewConnection() </em>is called on <em>TcpProxy</em>. This means that returning <em>StopIteration </em>by a gatekeeping <em>Network Filter</em> might leave the connection in a stalemate (it cannot proceed because it is not allowed to read data). Unfortunately, returning <em>Continue </em>has its own side effects. When <em>onNewConnection() </em>is called on <em>TcpProxy, </em>it immediately kicks off a connection to the <em>Upstream</em> and then starts proxying the response from it, all happening before the decision whether to allow the connection or not has been made by the gatekeeper. This is a good example of where the <em>Envoy</em> API could be made cleaner and safer by default.</p><p>Lastly, you might be wondering what happens to the request/response data when a <em>Network Filter</em> returns <em>StopIteration</em>. On the “read” path, data stays in the “read” buffer and the filter will see it again in the next call to <em>onData()</em>. 
On the “write” path, data gets dropped.</p><h4>Collecting Protocol-specific Stats</h4><p>Support for a new application-level protocol in <em>Envoy</em> typically starts from a single feature — the ability to parse protocol messages and derive some metrics out of them.</p><p><em>Mongo</em>, <em>MySQL</em>, <em>Postgres</em>, <em>Zookeeper, Kafka</em> <em>filters</em> are all good examples.</p><p>These filters do not modify data they proxy and leave routing and load balancing up to the <em>TcpProxy filter</em>.</p><p>Just having insights into the actual traffic is already a big deal and brings a lot of value on its own.</p><p><em>Network Filters</em> in this category typically operate as follows:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*JT-Pw3d88WZbcqNh" /><figcaption>Figure 10</figcaption></figure><p>Notice that in the model described above, the <em>Network Filter</em> assumes that <em>onData() </em>callback is always called on a new, previously unseen chunk of data.</p><p>However, <em>onData() </em>API does not guarantee that (as described earlier).</p><p>For the filter to work correctly, subsequent filter(s) in the chain must drain the “read” buffer (e.g., <em>TcpProxy</em> <em>filter </em>always does that).</p><p>In practice, it means that users/operators of <em>Envoy</em> have to be mindful of the configuration they choose. Filters like <em>Mongo, MySQL, Postgres </em>will work correctly when they are immediately followed by <em>TcpProxy</em>. 
However, injecting an arbitrary filter in between the two might lead to unexpected results.</p><p>Here is an example of the correct combination of filters:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/942/0*uqc1tSY_TUo6GmEt" /><figcaption>Figure 11</figcaption></figure><p>And here is an example of the incorrect one:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/946/0*PZkt-s7OtHzJQGvu" /><figcaption>Figure 12</figcaption></figure><h4>Feeding Protocol-specific Metadata</h4><p>As an extension to the previous use case, a <em>Network Filter</em> can expose not only metrics but also fine-grained metadata derived from protocol messages.</p><p>Such metadata can later be used to power access decisions (e.g., by the <em>RBAC</em> filter), to enrich <em>access logs</em>, etc.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*R8GdeDKMfsPCP_Ba" /><figcaption>Figure 13</figcaption></figure><h4>Reshaping Traffic</h4><p>When speaking about traffic shaping, think of a <em>Fault Injection filter</em> that throttles traffic.</p><p>From a technical perspective, throttling implies shifting the time when the next chunk of data is forwarded to the <em>Upstream</em> or <em>Downstream</em>.</p><p>The request flow implemented by these filters looks like this.</p><p>On the “read” path:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*YlABPHLMPRWdJ3NC" /><figcaption>Figure 14</figcaption></figure><p>On the “write” path:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*nGEcYf8PHl6Xdetx" /><figcaption>Figure 15</figcaption></figure><p>In the above example, traffic gets reshaped through the combined use of <em>onTimer</em>() and <em>injectReadDataToFilterChain</em>() / <em>injectWriteDataToFilterChain</em>() / <em>continueReading()</em>.</p><p><em>onTimer</em>() is not the only reason for traffic to change its shape. 
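</p><p>The throttling pattern on the “read” path can be modeled as a toy sketch (plain Python, not the Envoy API; <em>ThrottleFilter</em> and its methods are invented, with <em>on_timer()</em> standing in for a timer callback that calls <em>injectReadDataToFilterChain()</em>): the filter parks incoming bytes and releases them later, shifting <em>when</em> data moves rather than <em>what</em> moves:</p>

```python
# Toy model of throttling on the "read" path (plain Python, NOT the Envoy
# API; names are invented for illustration). on_data() parks each chunk and
# returns StopIteration; a timer callback later re-injects the parked data.
class ThrottleFilter:
    def __init__(self):
        self.parked = []     # chunks held back by the throttle
        self.forwarded = []  # chunks released to the rest of the chain

    def on_data(self, data: bytes) -> str:
        self.parked.append(data)  # hold the chunk back
        return "StopIteration"

    def on_timer(self) -> None:
        # Stands in for injectReadDataToFilterChain(): release one chunk.
        if self.parked:
            self.forwarded.append(self.parked.pop(0))


f = ThrottleFilter()
f.on_data(b"chunk-1")
f.on_data(b"chunk-2")
assert f.forwarded == []                       # nothing has moved yet
f.on_timer()
assert f.forwarded == [b"chunk-1"]             # released on the first tick
f.on_timer()
assert f.forwarded == [b"chunk-1", b"chunk-2"]
```

<p>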
For example, filters can also invoke these API methods from callbacks after processing the results of <em>HTTP or gRPC Client API</em> calls.</p><h4>Protocol-specific Routing and Load Balancing</h4><p><em>Network Filters</em> that fall into this category are the most advanced ones, e.g., <em>HTTP Connection Manager, Redis</em>, <em>Thrift</em>, <em>Dubbo</em>, etc.</p><p>Instead of relying on <em>TcpProxy</em> for protocol-agnostic routing and load balancing, a <em>Network Filter</em> can take over and do this job much more efficiently.</p><p>For example, the <em>HTTP Connection Manager filter</em> (which implements support for HTTP/1.1 and HTTP/2 in <em>Envoy</em>) takes advantage of HTTP/2 multiplexing and cuts the number of “upstream” connections down to 1.</p><p>Since these filters are responsible for forwarding data to the <em>Upstream</em>, they are also in charge of implementing flow control: if the “write” buffer on the “downstream” connection gets full, then stop receiving data from the <em>Upstream</em>. 
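</p><p>The “downstream” half of this rule can be sketched as a toy model (plain Python, not the Envoy API; <em>Connection</em> and the constants are invented, loosely modeled on the high/low watermark scheme <em>Envoy</em> buffers use): reading stops when the “write” buffer passes a high watermark and resumes once it drains below a low one:</p>

```python
# Toy model of watermark-style flow control (plain Python, NOT the Envoy
# API; names and thresholds are invented for illustration). When the
# downstream "write" buffer passes its high watermark, stop reading from
# the upstream; resume once it drains below the low watermark.
HIGH_WATERMARK, LOW_WATERMARK = 8, 2


class Connection:
    def __init__(self):
        self.write_buffer = 0       # bytes queued but not yet written out
        self.reading_enabled = True

    def queue_write(self, n: int) -> None:
        self.write_buffer += n
        if self.write_buffer >= HIGH_WATERMARK:
            self.reading_enabled = False  # back-pressure the other side

    def flush(self, n: int) -> None:
        self.write_buffer = max(0, self.write_buffer - n)
        if self.write_buffer <= LOW_WATERMARK:
            self.reading_enabled = True   # safe to read again


downstream = Connection()
downstream.queue_write(10)                   # slow client: buffer fills up
assert downstream.reading_enabled is False   # stop reading from upstream
downstream.flush(9)                          # client catches up
assert downstream.reading_enabled is True    # resume reading
```

<p>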
Similarly, if the “write” buffer on the “upstream” connection gets full, then stop receiving data from the <em>Downstream</em>.</p><p>We will touch more on routing, load balancing, and flow control in future blog posts.</p><h3>Native Envoy extensions (C++) vs WebAssembly</h3><p>The <em>Network Filter</em> model and the practical applications we’ve described so far apply to both native (C++) and <em>WebAssembly</em>-based <em>Envoy</em> extensions.</p><p>Eventually, you will be able to achieve identical behaviour in both cases.</p><p>However, at the time of writing (early September 2020), <em>WebAssembly</em> support in <em>Envoy</em> implements only a subset of the APIs available to native (C++) extensions:</p><ul><li>✅ <em>onNewConnection()</em></li><li>✅ <em>onData()</em></li><li>✅ <em>onWrite()</em></li><li>✅ <em>onEvent(RemoteClose | LocalClose)</em></li><li>❌ <em>onEvent(Connected)</em></li><li>❌ <em>continueReading()</em></li><li>❌ <em>injectReadDataToFilterChain()</em></li><li>❌ <em>injectWriteDataToFilterChain()</em></li><li>❌ <em>connection.close()</em></li><li>❌ <em>setTimer() / onTimer()</em></li><li>❌ <em>setDynamicMetadata()</em></li></ul><h3>Conclusion</h3><p>Today you’ve learned enough to be able to develop a <em>Network Filter</em> of your own.</p><p>For example, you could add support for a new SQL / NoSQL / object database, a new key-value store, a new message broker, etc.</p><p>In its simplest form, all you need to do is integrate an existing protocol parsing library into the <em>Envoy</em> request lifecycle, expose some metrics, and profit :).</p><p>Take it as a challenge!</p><p>Make it even more interesting and do it in <em>Rust</em> 😉.</p><p>That’s all for today. 
See you in the next blog post where we will do a deep dive into <em>HTTP Filters</em>!</p><h3>Shameless plug</h3><p>If you’re curious about <em>WebAssembly</em>-based <em>Envoy</em> extensions and willing to learn some <em>Rust</em>, take a look at the <a href="https://github.com/tetratelabs/envoy-wasm-rust-sdk">Envoy SDK for Rust</a> we’ve been working on here at <a href="https://www.tetrate.io/">Tetrate</a>.</p><p>This community project is a place for experiments on how an idiomatic <em>Envoy SDK</em> could look.</p><h3>Acknowledgements</h3><p>A huge thank you to <a href="https://twitter.com/zuercher">Stephan Zuercher</a>, a senior maintainer of <em>Envoy</em>, who patiently reviewed this blog post and made it much more readable!</p><h3>References</h3><p><a href="https://www.envoyproxy.io/docs/envoy/latest/intro/life_of_a_request">https://www.envoyproxy.io/docs/envoy/latest/intro/life_of_a_request</a></p><p><a href="https://github.com/envoyproxy/envoy/blob/master/source/docs/flow_control.md">https://github.com/envoyproxy/envoy/blob/master/source/docs/flow_control.md</a></p><p><a href="https://blog.envoyproxy.io/envoy-threading-model-a8d44b922310">https://blog.envoyproxy.io/envoy-threading-model-a8d44b922310</a></p><hr><p><a href="https://blog.envoyproxy.io/taming-a-network-filter-44adcf91517">Taming a Network Filter</a> was originally published in <a href="https://blog.envoyproxy.io">Envoy Proxy</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>