<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
    <title>Valkey</title>
    <link rel="self" type="application/atom+xml" href="https://valkey.io/atom.xml"/>
    <link rel="alternate" type="text/html" href="https://valkey.io"/>

    <generator uri="https://www.getzola.org/">Zola</generator>
    <updated>2026-05-07T01:01:01+00:00</updated>
    <id>https://valkey.io/atom.xml</id><entry xml:lang="en">
        <title>Announcing Spring Data Valkey: A New Season for High-Performance Spring Applications</title>
        <published>2026-04-01T00:00:00+00:00</published>
        <updated>2026-04-01T00:00:00+00:00</updated>
        
        <author>
          <name>
            makubo-aws
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/spring-data-valkey/"/>
        <id>https://valkey.io/blog/spring-data-valkey/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/spring-data-valkey/">&lt;p&gt;With the winter months winding down, it&#x27;s a fitting time to introduce the general availability of Spring Data Valkey, a new open source Spring Data module that provides first-class integration between Valkey and the Spring ecosystem.
Valkey adoption has expanded rapidly across cloud providers, Linux distributions, and enterprise environments.
As organizations standardize on Valkey for caching, session management, and real-time data workloads, demand has grown for native integrations with the frameworks developers already use.
For the Java ecosystem, that means Spring and Spring Data.&lt;&#x2F;p&gt;
&lt;p&gt;Spring Data Valkey was built to meet that need.
It allows Spring applications to use Valkey through familiar Spring Data abstractions.
The programming model remains the same.
The templates and repositories remain the same.
What changes is the foundation underneath.
Spring Data Valkey is officially supported by the Valkey project and can optionally be used with &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-glide&quot;&gt;Valkey GLIDE&lt;&#x2F;a&gt;, one of Valkey&#x27;s official client libraries, which brings new operational capabilities to Spring applications.
When enabled, GLIDE adds cluster awareness, intelligent routing, automatic failover handling, and production-grade reliability — extending the operational capabilities of Spring Data Valkey while preserving the existing developer experience.&lt;&#x2F;p&gt;
&lt;p&gt;In this blog, we&#x27;ll walk through the key capabilities of Spring Data Valkey, explain how it integrates with Valkey and Valkey GLIDE, outline common real-world application patterns, and show how to get started.
Whether you&#x27;re migrating an existing application or building something new, you&#x27;ll see how Spring Data Valkey provides a seamless path to adopting Valkey within the Spring ecosystem.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;spring-data-redis-compatible&quot;&gt;Spring Data Redis Compatible&lt;&#x2F;h2&gt;
&lt;p&gt;Spring Data Valkey was built to align closely with the established Spring Data programming model while supporting Valkey&#x27;s Redis-compatible protocol and data structures.
Spring Data Valkey is designed to make migrating from Spring Data Redis as easy as possible: your existing code and configuration patterns carry over with only minor changes.
If your application already relies on Spring Data templates, repository-based data access, Spring&#x27;s cache abstraction (such as &lt;code&gt;@Cacheable&lt;&#x2F;code&gt; and &lt;code&gt;@CacheEvict&lt;&#x2F;code&gt;), or Spring Boot auto-configuration, you can seamlessly move to Valkey while continuing to use those same abstractions.
The core programming model does not change, data access patterns remain consistent, and your application architecture remains intact.&lt;&#x2F;p&gt;
&lt;p&gt;To move your existing Spring Data applications to use Spring Data Valkey, simply update a few dependencies, switch to Valkey-specific package namespaces, and adjust configuration property names; after that, your application continues to operate using the same Spring Data APIs and behaviors it always has.
For a step-by-step guide, you can refer to the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;spring.valkey.io&#x2F;commons&#x2F;migration&#x2F;&quot;&gt;Spring Data Valkey migration guide&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
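&lt;p&gt;As a concrete sketch of what that migration can look like (the dependency coordinates and property names below are illustrative assumptions, not official values; confirm them against the migration guide), the change is typically limited to the build file and application configuration:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# build.gradle: swap the Spring Data Redis dependency (coordinates illustrative)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;implementation &#x27;io.valkey:spring-data-valkey:1.0.0&#x27;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# application.properties: point the connection at Valkey (property prefix illustrative)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;spring.valkey.host=localhost&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;spring.valkey.port=6379&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;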
&lt;p&gt;Spring Data Valkey is available through standard Maven repositories, so you can manage it using your existing dependency management workflows and align versions with the broader Spring ecosystem.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-valkey-advantage&quot;&gt;The Valkey Advantage&lt;&#x2F;h2&gt;
&lt;p&gt;Spring Data Valkey continues to support existing client libraries such as Lettuce and Jedis, enabling teams to adopt Valkey while keeping their current integrations.
In addition, it introduces first-class support for &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-glide&quot;&gt;Valkey GLIDE&lt;&#x2F;a&gt;, one of Valkey&#x27;s official client libraries. When GLIDE is enabled, Spring applications gain:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Replica selection and availability-zone–aware routing&lt;&#x2F;strong&gt; (when supported by your deployment) – Enables smarter read distribution and improved resilience by directing traffic to appropriate replicas across availability zones, while helping reduce inter-AZ network traffic and associated data transfer costs.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Resilient Pub&#x2F;Sub support&lt;&#x2F;strong&gt; – Automatically restores Pub&#x2F;Sub subscriptions after disconnections, failovers, or topology changes so applications continue receiving messages without custom resubscription logic.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;OpenTelemetry integration&lt;&#x2F;strong&gt; – Provides built-in observability hooks that emit tracing and metrics data, enabling teams to monitor Valkey interactions using standard OpenTelemetry-compatible tooling.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Together, these capabilities shift distributed systems complexity out of your application code and into the client layer where it belongs.
With Spring Data Valkey and GLIDE, cluster changes, failovers, reconnections, observability, and cross-AZ traffic optimization are handled transparently, allowing teams to focus on business logic instead of infrastructure edge cases.
The result is a Spring-native development experience backed by a client designed for resilient, production-grade operation in modern distributed environments.&lt;&#x2F;p&gt;
&lt;p&gt;You can learn more about GLIDE here: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-glide&quot;&gt;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-glide&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;performance-characteristics-with-valkey-glide&quot;&gt;Performance Characteristics with Valkey GLIDE&lt;&#x2F;h2&gt;
&lt;p&gt;Spring Data Valkey using the GLIDE client delivers the performance needed for caching and real-time data workloads.
We benchmarked Spring Data Valkey using GLIDE compared to Jedis and Lettuce.
Overall, the three clients perform similarly across the dimensions measured, though Valkey GLIDE stood out with higher throughput at large client counts.
Let&#x27;s take a closer look.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;throughput&quot;&gt;Throughput&lt;&#x2F;h3&gt;
&lt;p&gt;The first graph shows total throughput measured in requests per second (RPS) as the number of concurrent clients increases.&lt;&#x2F;p&gt;
&lt;p&gt;At lower client counts, all three clients deliver similar performance.
As concurrency increases, GLIDE remains competitive and scales particularly well beyond roughly 64 concurrent clients, sustaining high request rates with stable latency characteristics.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;spring-data-valkey&#x2F;images&#x2F;throughput-rps-scalability.png&quot; alt=&quot;RPS Scalability – Total Workload Throughput&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h3 id=&quot;tail-latency&quot;&gt;Tail Latency&lt;&#x2F;h3&gt;
&lt;p&gt;Tail latency is often the most important indicator for user-facing workloads, since it reflects the worst-case response times experienced by applications.
Across the test range, Spring Data Valkey with GLIDE maintains tail latency comparable to both Lettuce and Jedis.
Even as client counts increase, latency remains stable, indicating that the client and server interaction continues to operate efficiently under load.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;spring-data-valkey&#x2F;images&#x2F;latency-p99-get-commands.png&quot; alt=&quot;Latency p99.9 – GET Command&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;value-add-for-real-world-use-cases&quot;&gt;Value-add for Real-World Use Cases&lt;&#x2F;h2&gt;
&lt;p&gt;Let&#x27;s explore how the new capabilities introduced through Valkey GLIDE integration with Spring Data Valkey solve real-world problems in real-world use cases.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;high-availability-caching-in-microservices&quot;&gt;High-Availability Caching in Microservices&lt;&#x2F;h3&gt;
&lt;p&gt;In modern microservices architectures, caches are often deployed in clustered, multi-AZ environments.
The operational challenge is not implementing &lt;code&gt;@Cacheable&lt;&#x2F;code&gt; — it&#x27;s ensuring the cache remains available and correctly routed during node failures, scaling events, or topology changes.&lt;&#x2F;p&gt;
&lt;p&gt;With GLIDE, Spring Data Valkey automatically discovers cluster topology, routes requests to the correct primary or replica node, and adapts to slot remapping or primary promotions without application restarts.
In multi-AZ deployments, availability-zone–aware routing can reduce unnecessary cross-AZ traffic, improving resilience while also lowering inter-AZ data transfer costs.
These capabilities allow teams to scale cache clusters or handle infrastructure events without adding custom retry logic or topology management code.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;event-driven-systems-and-pub-sub-messaging&quot;&gt;Event-Driven Systems and Pub&#x2F;Sub Messaging&lt;&#x2F;h3&gt;
&lt;p&gt;Real-time messaging systems depend on stable subscriptions.
In long-running services, dropped connections or topology changes can silently break Pub&#x2F;Sub consumers unless resubscription logic is carefully implemented.&lt;&#x2F;p&gt;
&lt;p&gt;GLIDE provides resilient Pub&#x2F;Sub support by automatically restoring subscriptions after disconnections, failovers, or cluster changes.
Combined with cluster-aware routing, this reduces the operational burden of maintaining reliable messaging pipelines and allows Spring-based services to continue receiving events even as the underlying Valkey cluster evolves.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;real-time-analytics-counters-and-rate-limiting&quot;&gt;Real-Time Analytics, Counters, and Rate Limiting&lt;&#x2F;h3&gt;
&lt;p&gt;Workloads such as rate limiting, distributed counters, and leaderboards often operate under sustained traffic and must tolerate node restarts or scaling operations without impacting user-facing latency.&lt;&#x2F;p&gt;
&lt;p&gt;GLIDE&#x27;s intelligent routing and native cluster-mode integration ensure that commands are sent to the correct shard as the cluster topology changes.
Automatic reconnection handling reduces error surfaces during network interruptions, while built-in OpenTelemetry integration provides standardized tracing and metrics for datastore interactions — allowing teams to observe latency patterns, detect failover events, and troubleshoot distributed behavior using familiar observability tooling.&lt;&#x2F;p&gt;
&lt;p&gt;Across these use cases, the core theme is consistent: Spring Data Valkey preserves the developer experience, while GLIDE strengthens the operational layer.
Cluster awareness, failover transparency, resilient messaging, cost-aware routing, and observability are handled in the client — so application code can remain simple even as infrastructure grows more complex.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;a-new-season-for-spring-and-valkey&quot;&gt;A New Season for Spring and Valkey&lt;&#x2F;h2&gt;
&lt;p&gt;Spring Data Valkey makes leveraging Valkey in Spring applications simple.
You keep the familiar Spring Data programming model — templates, repositories, cache annotations, and Spring Boot workflows — while benefiting from a modern, cluster-aware client with built-in resilience, intelligent routing, and observability via the optional Valkey GLIDE integration, all without requiring changes to how your Spring applications are written.
Because it is fully compatible with Spring Data Redis, migration is straightforward: a dependency update and a few configuration changes.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey is rapidly becoming the default high-performance caching engine across cloud providers and enterprise environments, and Spring Data Valkey provides a clear path for Spring developers to adopt it.
We are excited for the community to try it out and welcome all feedback — whether it&#x27;s feature suggestions, bug reports, or contributions to the project.
To get started, explore the project on &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;spring-data-valkey&quot;&gt;GitHub&lt;&#x2F;a&gt;.
For documentation and a step-by-step migration guide from Spring Data Redis, visit &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;spring.valkey.io&#x2F;overview&#x2F;&quot;&gt;spring.valkey.io&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>What Valkey&#x27;s new primitives tell us about the tools we need</title>
        <published>2026-03-27T00:00:00+00:00</published>
        <updated>2026-03-27T00:00:00+00:00</updated>
        
        <author>
          <name>
            kivanow
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/valkey-tooling-primitives/"/>
        <id>https://valkey.io/blog/valkey-tooling-primitives/</id>
        
<content type="html" xml:base="https://valkey.io/blog/valkey-tooling-primitives/">&lt;p&gt;Valkey is protocol-compatible with Redis and supports the full Redis 7.2.4 command API. That&#x27;s one of its greatest strengths - existing clients are largely compatible, and the learning curve is gentle. It&#x27;s also, quietly, a problem for tooling.&lt;&#x2F;p&gt;
&lt;p&gt;Because compatibility means every tool built before Valkey existed technically &quot;works&quot; with it - tools that haven&#x27;t been updated to take advantage of what Valkey now offers natively. And when something works well enough, nobody builds a better version. Why would they?&lt;&#x2F;p&gt;
&lt;p&gt;The release of &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-admin&quot;&gt;Valkey Admin&lt;&#x2F;a&gt; is a signal that the community already knows this is a problem worth solving. It&#x27;s also a useful frame for the broader question: what do Valkey&#x27;s newer primitives actually make possible, and are the tools keeping up?&lt;&#x2F;p&gt;
&lt;h2 id=&quot;primitives-that-deserve-purpose-built-tooling&quot;&gt;Primitives that deserve purpose-built tooling&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;latency-history&#x2F;&quot;&gt;LATENCY history&lt;&#x2F;a&gt; has been in Valkey (and Redis before it) for a while, but it&#x27;s consistently underused because the data is rarely captured continuously. Valkey maintains a history of latency events - fork operations, AOF flushes, AOF rewrites. When you collect this over time, questions like &quot;did our latency spike coincide with an AOF flush?&quot; become answerable. Without continuous collection, you&#x27;re left staring at the current state and reasoning backward with no evidence.&lt;&#x2F;p&gt;
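&lt;p&gt;Capturing that history is a one-liner from the CLI; the missing piece is doing it continuously. For example, after setting a latency-monitor threshold you can read back per-event samples (the threshold value here is arbitrary; event names are those reported by &lt;code&gt;LATENCY LATEST&lt;&#x2F;code&gt;):&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;CONFIG SET latency-monitor-threshold 100   # record events slower than 100 ms&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;LATENCY HISTORY fork                       # timestamp&#x2F;latency pairs for fork events&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;LATENCY HISTORY aof-fsync-always           # same, for AOF fsync stalls&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;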
&lt;p&gt;Valkey has taken this further with two primitives that have no Redis equivalent.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;commandlog-get&#x2F;&quot;&gt;COMMANDLOG&lt;&#x2F;a&gt; (8.1+) is the clearest example. Unlike SLOWLOG, which only flags commands that exceed an execution time threshold, COMMANDLOG tracks by three separate criteria: slow execution, large request payloads, and large reply payloads.&lt;&#x2F;p&gt;
&lt;p&gt;The distinction matters more than it looks. A command retrieving a 50MB hash may execute quickly - the data is in memory, the read is fast - but it&#x27;s saturating your network, causing tail latency for every other client sharing that connection, and buffering data in ways that won&#x27;t show up in slowlog at all. You end up with rising P99 latency and no obvious culprit, because the traditional tool for finding slow commands says everything is fine. COMMANDLOG with large-reply tracking shows you exactly which command patterns are the problem - something that simply wasn&#x27;t possible to diagnose this way before.&lt;&#x2F;p&gt;
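&lt;p&gt;A minimal look at that workflow from the CLI (the subcommand shape and token spellings below follow the 8.1 command reference; verify them against your version):&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;COMMANDLOG GET 10 SLOW            # ten most recent slow executions, SLOWLOG-style&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;COMMANDLOG GET 10 LARGE-REQUEST   # commands with oversized request payloads&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;COMMANDLOG GET 10 LARGE-REPLY     # commands with oversized reply payloads&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;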
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;cluster-slot-stats&#x2F;&quot;&gt;CLUSTER SLOT-STATS&lt;&#x2F;a&gt; (8.0+) gives you per-slot key counts, reads, writes, and CPU usage across a cluster. Before this, shard balancing was largely a guessing game based on aggregate node metrics - you could see that a node was hot, but not which slots were driving it. With SLOT-STATS you can identify specific hot slots and make targeted rebalancing decisions backed by actual data rather than intuition.&lt;&#x2F;p&gt;
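&lt;p&gt;For example, surfacing the hottest slots by key count is a single call (the &lt;code&gt;ORDERBY&lt;&#x2F;code&gt; options and metric names here assume the 8.0+ syntax described on the command page):&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;CLUSTER SLOT-STATS ORDERBY key-count LIMIT 10 DESC   # ten largest slots by key count&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;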
&lt;h2 id=&quot;purpose-built-tooling-exists-and-it-s-coming-to-valkey&quot;&gt;Purpose-built tooling exists, and it&#x27;s coming to Valkey&lt;&#x2F;h2&gt;
&lt;p&gt;Redis recognized this early. Redis Insight and the Redis Insight VS Code extension exist precisely because the generic tools weren&#x27;t enough - operators needed something that understood Redis&#x27;s data model, its operational commands, and how developers actually interact with it day to day. The investment made sense because the tool could be built around the specific primitives Redis exposed.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey is getting the same treatment. &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-admin&quot;&gt;Valkey Admin&lt;&#x2F;a&gt; - the project&#x27;s own operator tool, and the closest thing to a Valkey-native alternative to Redis Insight - is a clear signal that the community recognizes this. On the editor side, a VS Code extension for Valkey brings the same workflow into the development environment. But the point isn&#x27;t any specific tool. The point is that the category needs to exist, and it needs to keep pace with what Valkey is shipping.&lt;&#x2F;p&gt;
&lt;p&gt;There&#x27;s a harder problem underneath this though: most of Valkey&#x27;s operational data is ephemeral by design. The slowlog is a circular buffer. COMMANDLOG entries don&#x27;t persist across restarts. Command patterns that caused a latency spike hours ago leave no trace unless something was collecting continuously. Purpose-built tooling needs to solve for persistence, not just presentation.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-right-analogy&quot;&gt;The right analogy&lt;&#x2F;h2&gt;
&lt;p&gt;You can use JavaScript for backend services. Node.js is perfectly capable, the ecosystem is vast, and a lot of teams do it. But if you need high throughput or tight memory control, you probably want Go or Rust - not because JS is broken, but because the problem warrants a better fit. The same logic applies to desktop tooling: Electron works, and it ships fast, but you&#x27;re bundling a browser engine to display a web UI. It runs, but it carries a lot of weight for what it&#x27;s doing.&lt;&#x2F;p&gt;
&lt;p&gt;Redis-compatible tools work with Valkey for the same reason Node.js works for backends. They operate at the wire protocol level, which is compatible. But Valkey-specific primitives like COMMANDLOG have no Redis equivalent, so no Redis-era tool has a UI for it, a Prometheus exporter for it, or a way to persist and query its history. You can call it from the CLI, but that&#x27;s about it. The fit just isn&#x27;t there.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-s-coming-in-9-1-makes-this-more-urgent&quot;&gt;What&#x27;s coming in 9.1 makes this more urgent&lt;&#x2F;h2&gt;
&lt;p&gt;Two PRs currently targeting 9.1 illustrate where Valkey is headed.&lt;&#x2F;p&gt;
&lt;p&gt;The first is per-thread I&#x2F;O utilization metrics (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;2463&quot;&gt;PR #2463&lt;&#x2F;a&gt;). New &lt;code&gt;used_active_time_io_thread_N&lt;&#x2F;code&gt; fields in &lt;code&gt;INFO&lt;&#x2F;code&gt; will expose how much time each I&#x2F;O thread actually spends doing work versus waiting for it. Since Valkey&#x27;s I&#x2F;O threads do busy polling, CPU utilization alone is misleading - a thread can show near 100% CPU while barely processing any work. The new fields use a monotonic clock to measure active time, making real utilization calculable over any time window. Paired with COMMANDLOG, this opens up an interesting class of correlations: if large-request or large-reply patterns coincide with high I&#x2F;O thread active time, you&#x27;re likely looking at bandwidth saturation rather than a compute bottleneck.&lt;&#x2F;p&gt;
&lt;p&gt;The second is CLUSTERSCAN (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;2934&quot;&gt;PR #2934&lt;&#x2F;a&gt;), a new cluster-native SCAN command that iterates across all slots while handling MOVED redirections automatically. The cursor format encodes slot information and a fingerprint of the memory layout - if the cluster reshards mid-scan, it restarts from the affected slot rather than returning an error. For tooling this changes what cluster-wide key inspection can look like. Every current tool handles this with node-by-node iteration. CLUSTERSCAN makes a cleaner approach possible.&lt;&#x2F;p&gt;
&lt;p&gt;Both of these need tooling adoption to be useful at scale. The I&#x2F;O thread metrics need continuous collection - a one-time snapshot tells you nothing about what thread utilization looked like during the incident window. CLUSTERSCAN needs tooling to implement it before operators benefit from it in their debugging workflows. Features that don&#x27;t have good tooling get underused, regardless of how good the implementation is.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-observability-gap&quot;&gt;The observability gap&lt;&#x2F;h2&gt;
&lt;p&gt;The reason this keeps coming up is that Valkey&#x27;s development has accelerated. 8.0 shipped multi-threaded I&#x2F;O and SLOT-STATS. 8.1 added COMMANDLOG. 9.0 brought hash field expiration, atomic slot migration, and multi-database cluster mode. Each of these adds operational surface.&lt;&#x2F;p&gt;
&lt;p&gt;Hash field expiration introduces TTL behavior at a new granularity - reasoning about memory behavior requires visibility into expiring-key counts at the field level, not just the key level. Atomic slot migration changes how resharding works - tooling for planning and monitoring migrations needs to understand the new model. And so on.&lt;&#x2F;p&gt;
&lt;p&gt;The community has invested heavily in the core. The more the ecosystem builds tools that treat Valkey&#x27;s primitives as first-class features - not as Redis extensions to be gracefully ignored - the more value operators actually get out of them.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;tooling-is-an-ongoing-topic-not-a-one-time-decision&quot;&gt;Tooling is an ongoing topic, not a one-time decision&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey&#x27;s development isn&#x27;t slowing down. New primitives will keep landing, each adding operational surface that existing tools weren&#x27;t built to handle. The gap between what Valkey exposes and what the tooling ecosystem can see isn&#x27;t a problem to solve once - it&#x27;s something to keep pace with.&lt;&#x2F;p&gt;
&lt;p&gt;The good news is the community is already moving. &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-admin&quot;&gt;Valkey Admin&lt;&#x2F;a&gt; is open source and in preview, actively looking for contributions. A &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;BetterDB-inc&#x2F;vscode&quot;&gt;VS Code extension for Valkey&lt;&#x2F;a&gt; is also available if you prefer working from the editor. If you&#x27;re building something in this space or have operational lessons worth sharing, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.linkedin.com&#x2F;in&#x2F;kivanow&quot;&gt;reach out on LinkedIn&lt;&#x2F;a&gt; - happy to talk through it.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Beyond Vectors: Introducing Full-Text Search and Aggregations to Valkey</title>
        <published>2026-03-17T01:00:00+00:00</published>
        <updated>2026-03-17T01:00:00+00:00</updated>
        
        <author>
          <name>
            karthiksubbarao
          </name>
        </author>
        
        <author>
          <name>
            allenss
          </name>
        </author>
        
        <author>
          <name>
            bcathcart
          </name>
        </author>
        
        <author>
          <name>
            cnuthalapati
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/valkey-search-1_2/"/>
        <id>https://valkey.io/blog/valkey-search-1_2/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/valkey-search-1_2/">&lt;p&gt;Valkey Search now lets you search across text, tag, numeric, and vector attributes in a single query, and analyze results with server-side aggregations at the low latency you expect from Valkey.
Valkey Search enables searching terabytes of data with latency as low as microseconds, providing a flexible foundation for use cases ranging from in-app search experiences and recommendation systems to in-app analytics and reporting dashboards.&lt;&#x2F;p&gt;
&lt;p&gt;Until now, Valkey Search focused on vector similarity, enabling a wide range of workloads such as semantic search and AI workloads (see &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;introducing-valkey-search&#x2F;&quot;&gt;launch blog&lt;&#x2F;a&gt;).
But if you needed to filter your data by numeric attributes such as price ranges, match on exact attributes such as color or size, or search within text attributes such as product reviews, you had to build that yourself.&lt;&#x2F;p&gt;
&lt;p&gt;With Valkey Search 1.2, that changes. In this post, we&#x27;ll walk through what&#x27;s new, show how it works, and explore the use cases these capabilities unlock.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;searching-your-valkey-data&quot;&gt;Searching Your Valkey Data&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey Search uses indexes to organize your data by searchable attributes, so queries can find matching keys without scanning every document. To get started with search, you first define an index over the attributes you want to search. Valkey Search supports four query types — full-text, tag, numeric range, and vector similarity — which you can use independently or combine with boolean operators. For example, consider an e-commerce retailer, Acme.inc, that stores its product catalog in Valkey and creates an index like this:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;FT.CREATE product_index ON HASH SCHEMA&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;  description TEXT&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;  color TAG&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;  manufacturer TAG&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;  price NUMERIC SORTABLE&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;  rating NUMERIC SORTABLE&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;  review VECTOR HNSW 6 TYPE FLOAT32 DIM 768 DISTANCE_METRIC COSINE&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This creates an index called &lt;code&gt;product_index&lt;&#x2F;code&gt; over hash keys, indexing &lt;code&gt;description&lt;&#x2F;code&gt; as a full-text searchable attribute, &lt;code&gt;color&lt;&#x2F;code&gt; and &lt;code&gt;manufacturer&lt;&#x2F;code&gt; as exact-match tags, &lt;code&gt;price&lt;&#x2F;code&gt; and &lt;code&gt;rating&lt;&#x2F;code&gt; as sortable numeric attributes, and &lt;code&gt;review&lt;&#x2F;code&gt; as a vector attribute for similarity search. With this index in place, you can now query across all four types:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Full-text Search:&lt;&#x2F;strong&gt; Full-text search (FTS) helps you retrieve relevant documents by matching words and phrases within larger text when you do not have an exact identifier or a structured attribute value.
You can use full-text queries to retrieve documents by matching keywords, phrases, or patterns such as prefix, suffix, wildcard, and fuzzy queries (typo-tolerant), anywhere in your document index.
You can use FTS to express constraints like &quot;user reviews mentioning headphones with active noise cancellation &lt;strong&gt;and&lt;&#x2F;strong&gt; long battery life, but &lt;strong&gt;not&lt;&#x2F;strong&gt; weighing more than 3 ounces.&quot;
This makes full-text search a powerful fit for workloads that search unstructured data rather than exact tags.
Full-text search queries shine in use cases such as e-commerce and catalog searches to help users find products instantly across large inventories.
The prefix&#x2F;suffix pattern matching provided by full-text search makes it ideal for powering instant suggestions as users type, enabling discovery in search bars and product catalogs.
Finally, fuzzy matching capabilities allow you to accurately match documents despite spelling variations, typos, or inconsistent formats to support typo-tolerant retrieval.
For example, when a shopper on Acme.inc types &quot;noise cancelling earphones&quot; in the search bar, the app can search across descriptions to retrieve relevant products:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;FT.SEARCH product_index &amp;quot;noise canc*&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This returns matching products with all their indexed attributes:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;1) (integer) 2&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;2) &amp;quot;product:1001&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;3) 1) &amp;quot;description&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   2) &amp;quot;Wireless earphones with active noise cancelling and 30-hour battery life&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   3) &amp;quot;color&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   ...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;4) &amp;quot;product:1042&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   ...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
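&lt;p&gt;The examples in this post assume a &lt;code&gt;product_index&lt;&#x2F;code&gt; created ahead of time. As a hedged sketch (the attribute names are inferred from the queries in this post, and the vector parameters are purely illustrative), such an index could be defined like this:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;FT.CREATE product_index ON HASH PREFIX 1 product: SCHEMA&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    description TEXT&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    manufacturer TAG&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    color TAG&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    price NUMERIC&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    rating NUMERIC&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    review VECTOR HNSW 6 TYPE FLOAT32 DIM 128 DISTANCE_METRIC COSINE&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;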
&lt;p&gt;&lt;strong&gt;Tag Search:&lt;&#x2F;strong&gt; Tag queries filter documents by checking attribute values for exact matches against the provided value. Tag attributes are best for structured categorical data where queries rely on exact value matching.
You can treat tag as a specialized alternative to text for attributes that you want to match as discrete values, such as user IDs, categories, and status flags.
This makes tag a great fit to express conditions like &quot;category is earphones &lt;strong&gt;and&lt;&#x2F;strong&gt; size is small, but color is &lt;strong&gt;not&lt;&#x2F;strong&gt; red.&quot;
Tag filtering shines in use cases such as streaming and gaming platforms to retrieve metadata by exact identifiers such as user ID, region, or genre to power real-time stats and fast in-app filtering.
It is also well suited for session and user management, where you need to quickly locate active sessions and entitlements by exact identifiers such as session ID, device properties, or tenant ID across millions of documents.
For example, when a shopper on Acme.inc filters for earphones manufactured by &quot;shopnow&quot; that are &quot;white&quot; in color, the app can query the tag index:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;FT.SEARCH product_index &amp;quot;@manufacturer:{shopnow} @color:{white}&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Numeric Range Queries:&lt;&#x2F;strong&gt; Range search retrieves documents by filtering and sorting over numeric and time attributes using comparisons such as greater than, less than, and between. This makes it ideal for applications that query attributes such as scores, price bands, time windows, inventory thresholds, distances, and timestamps.
Valkey Search supports range-based queries with microsecond latencies, making it a strong fit for real-time leaderboards where you rank and retrieve content by numeric metrics such as scores, downloads, or engagement with updates reflected immediately.
Range queries are also well suited for financial transactions or time series data, enabling ultra-fast lookups by amounts, fees, date ranges, and risk scores to power customer-facing applications, personalization, or real-time monitoring.
For example, when a shopper on Acme.inc sets the price slider to $50–$100 and filters by 4-star and above, the app can query the numeric index:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;FT.SEARCH product_index &amp;quot;@price:[50 100] @rating:[4 +inf]&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Hybrid Queries:&lt;&#x2F;strong&gt; Valkey Search lets you combine full-text, tag, numeric range, and vector similarity clauses in a single query, returning only the documents that satisfy every clause at once.
Hybrid queries are ideal for applications that need to query across multiple attribute types in one request. For example, a product discovery query might combine vector similarity over reviews for semantic meaning, a tag filter for category and availability, a numeric range for price, and full-text search over product titles — all in a single request.
This eliminates the need to stitch together multiple queries or round-trips, making it a natural fit for recommendation systems, media platforms, and real-time operational workflows.
Combining our examples from above, when a shopper on Acme.inc searches for &quot;noise cancelling earphones&quot; and filters for products in &quot;white&quot; color, under $150, that are good for running, the app can combine all of this into a single query:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;FT.SEARCH product_index &amp;quot;noise cancelling earphones @color:{white} @price:[0 150] =&amp;gt;[KNN 5 @review $vector]&amp;quot; PARAMS 2 vector &amp;#39;VECTOR_REPRESENTATION&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Here, the query is searching through vector representations of product reviews to find “earphones that are good for running”.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;transform-your-valkey-data-with-aggregations&quot;&gt;Transform Your Valkey Data with Aggregations&lt;&#x2F;h2&gt;
&lt;p&gt;Aggregations help you analyze and summarize the results of a search query, instead of returning a raw list of matching documents.
You can use &lt;code&gt;GROUPBY&lt;&#x2F;code&gt; to form groups on any indexed attribute such as category, brand, region, and time, apply &lt;code&gt;REDUCE&lt;&#x2F;code&gt; functions such as &lt;code&gt;COUNT&lt;&#x2F;code&gt;, &lt;code&gt;SUM&lt;&#x2F;code&gt;, and &lt;code&gt;AVG&lt;&#x2F;code&gt; to compute per-group statistics, and use &lt;code&gt;APPLY&lt;&#x2F;code&gt; to create computed attributes on the fly.
You can then refine and shape the output with post-aggregation &lt;code&gt;FILTER&lt;&#x2F;code&gt;, &lt;code&gt;SORTBY&lt;&#x2F;code&gt;, and &lt;code&gt;LIMIT&lt;&#x2F;code&gt;, chaining stages together to build multi-step workflows in a single query.
This makes aggregations a strong fit for low-latency lightweight analytics directly on indexed Valkey data, without the need to export large result sets to the application layer. Some applications of aggregations include:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Faceted navigation and filtering: Power dynamic filtering UIs using aggregations to compute real-time counts over the current result set (for example by category, brand, price band, rating, or availability), enabling users to narrow down search results with instant feedback on available options. You can also group and count items by attribute such as genre, tags, language, or creator to power structured browsing and category-level summaries across large catalogs.&lt;&#x2F;li&gt;
&lt;li&gt;Real-time statistics: Use aggregations to compute grouped rankings for powering in-app flows such as trending items by recent engagement, category leaders by revenue, or top performers by region.&lt;&#x2F;li&gt;
&lt;li&gt;Analytics and reporting: Generate live counts and summary metrics to power operational dashboards, or generate on-demand reports for analysis and decision making without hitting backend databases.
For example, Acme.inc can populate the count of earphones available from “shopnow” in each price band on the webpage using:&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;FT.AGGREGATE product_index &amp;quot;@manufacturer:{shopnow}&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;APPLY &amp;quot;floor(@price &#x2F; 50) * 50&amp;quot; AS price_band&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;GROUPBY 1 @price_band&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;REDUCE COUNT 0 AS product_count&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;SORTBY 2 @price_band ASC&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This filters for products from the &quot;shopnow&quot; manufacturer, uses &lt;code&gt;APPLY&lt;&#x2F;code&gt; to bucket each product&#x27;s price into $50 bands (0–49, 50–99, 100–149, etc.), groups by those bands, counts products in each, and sorts by price band ascending — giving Acme.inc a price distribution histogram for that manufacturer&#x27;s products.&lt;&#x2F;p&gt;
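&lt;p&gt;Building on this example, the post-aggregation stages mentioned earlier can refine the output further. As a hedged sketch (the exact syntax of the &lt;code&gt;FILTER&lt;&#x2F;code&gt; and &lt;code&gt;LIMIT&lt;&#x2F;code&gt; stages is assumed here), the following keeps only bands containing more than ten products and returns the first five rows:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;FT.AGGREGATE product_index &amp;quot;@manufacturer:{shopnow}&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;APPLY &amp;quot;floor(@price &#x2F; 50) * 50&amp;quot; AS price_band&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;GROUPBY 1 @price_band&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;REDUCE COUNT 0 AS product_count&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;FILTER &amp;quot;@product_count &amp;gt; 10&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;SORTBY 2 @price_band ASC&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;LIMIT 0 5&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;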
&lt;h2 id=&quot;under-the-hood&quot;&gt;Under the Hood&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;real-time-search-with-multi-threading&quot;&gt;Real-time Search with Multi-threading&lt;&#x2F;h3&gt;
&lt;p&gt;Valkey Search uses indexes to organize keys by the contents of their searchable attributes, ensuring reads remain fast and efficient as data grows.
When you add, update, or delete an indexed key, the module receives the mutation event, extracts indexed attributes, queues the indexing work, and updates the index before acknowledging the write.
Valkey Search is multi-threaded, so you can maximize ingestion throughput by issuing writes over multiple parallel connections, rather than pipelining on a single connection, to keep the index update workers saturated.
Background worker threads process index updates, and the client receives a response only after the update is committed to the index.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;read-after-write-consistency&quot;&gt;Read-after-write Consistency&lt;&#x2F;h3&gt;
&lt;p&gt;For workloads that require strict read-after-write behavior, you can configure your client to route search queries to primaries.
A write only completes after its index updates are applied, so any search sent to the same primary after the write returns will see that change.
If your application can tolerate some staleness, you can offload reads to replicas.
On replicas, search is eventually consistent because replication and index maintenance are asynchronous, and each node maintains its own local indexes.
Multi-key updates wrapped in a &lt;code&gt;MULTI&lt;&#x2F;code&gt;&#x2F;&lt;code&gt;EXEC&lt;&#x2F;code&gt; transaction or Lua script are also atomically visible to search:
Valkey Search makes the batch visible to queries only after every attribute update in it has been applied.
Separately, in cluster mode, read-after-write consistency applies per shard primary.
If you write a key to a primary, searches sent to that primary see the change after the write returns, but a fan-out query spanning multiple shards does not have a single global transactional view.&lt;&#x2F;p&gt;
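&lt;p&gt;As a hedged sketch of the transactional behavior described above, both field updates below become searchable together once &lt;code&gt;EXEC&lt;&#x2F;code&gt; returns; no query observes the key with only one of them applied:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;MULTI&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;HSET product:1001 color white&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;HSET product:1001 price 89&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;EXEC&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;FT.SEARCH product_index &amp;quot;@color:{white} @price:[80 100]&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;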
&lt;h3 id=&quot;scale-to-terabytes-of-data&quot;&gt;Scale to Terabytes of Data&lt;&#x2F;h3&gt;
&lt;p&gt;Valkey Search includes built-in support for cluster mode, enabling you to scale to terabytes of data without requiring application or client code changes.
In cluster mode, Valkey Search creates indexes that span multiple shards by maintaining a separate index on each node for the keys belonging to that node&#x27;s slot range. When you create, update, or drop an index on any primary, Valkey Search propagates that change to all nodes.
You can scale read throughput by distributing Search queries evenly across primary nodes so that no single node becomes a bottleneck, or by routing some Search queries to replicas.
You can also scale vertically to instances with more vCPUs, since multi-threading scales query and ingestion throughput linearly with the available cores, or add replicas to serve additional query traffic.&lt;&#x2F;p&gt;
&lt;p&gt;For queries, the node that receives the request acts as the coordinator: it packages a query plan and sends it to every shard (to run on either primaries or replicas).
A node within each shard performs the search and fetches its matching keys, then returns results for the coordinator to merge. Because the fan-out and merge logic exists on every cluster node, any node can coordinate a query. For mutations, the primary owning the slot handles updates: when a key is added, updated, or deleted, only that primary updates its index directly, and the change replicates to its replicas.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;getting-started&quot;&gt;Getting Started&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey Search 1.2 extends search to text, tag, and numeric attribute types and adds result aggregation capabilities such as filtering, sorting, grouping, and computing metrics. Whether you&#x27;re building cutting-edge AI applications, latency-sensitive search experiences, or integrating search into existing systems, we invite you to try it out.
To get started with Valkey Search, visit the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-search&quot;&gt;Valkey Search GitHub repository&lt;&#x2F;a&gt; and the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;hub.docker.com&#x2F;r&#x2F;valkey&#x2F;valkey-bundle&quot;&gt;Valkey Bundle on Docker Hub&lt;&#x2F;a&gt;. The official Valkey Bundle image provides the fastest path to running Valkey with preloaded modules, including Valkey Search, so you can begin building search and aggregation workflows without manual module setup. You can connect using official Valkey client libraries such as Valkey GLIDE, valkey-py, valkey-go, and valkey-java, as well as other Redis-compatible clients. Valkey Search is available under the BSD-3-Clause license. You can learn more about Valkey Search through our &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;search&#x2F;&quot;&gt;documentation&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
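&lt;p&gt;As a hedged sketch (the image tag and port mapping may vary for your setup), one quick way to try the bundle locally with Docker is:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;docker run -d --name valkey-bundle -p 6379:6379 valkey&#x2F;valkey-bundle&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;valkey-cli -h 127.0.0.1 -p 6379 PING&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;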
&lt;p&gt;Get Involved: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-search&quot;&gt;Join the Valkey Search community&lt;&#x2F;a&gt;, file issues, open pull requests, or suggest improvements. We welcome contributions of all kinds - code, documentation, testing, and feedback. Your involvement helps make Valkey Search better for everyone.&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;&#x2F;strong&gt; As of March 13, 2026, if you want to use Valkey Search 1.2 features on Docker, use the current valkey&#x2F;valkey-bundle:unstable image.&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Introducing Valkey Admin: Visual Cluster Management for Valkey</title>
        <published>2026-02-25T00:00:00+00:00</published>
        <updated>2026-02-25T00:00:00+00:00</updated>
        
        <author>
          <name>
            allenhelton
          </name>
        </author>
        
        <author>
          <name>
            arsenykostenko
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/introducing-valkey-admin/"/>
        <id>https://valkey.io/blog/introducing-valkey-admin/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/introducing-valkey-admin/">&lt;p&gt;Do you ever wonder why we still manage Valkey the same way we always have? The honest answer is that it works. And once something works, it tends to stick around.&lt;&#x2F;p&gt;
&lt;p&gt;You spin up a Valkey cluster. Maybe it&#x27;s three nodes. Maybe it&#x27;s thirty. You need to inspect a key, so you open a terminal, start &lt;code&gt;valkey-cli&lt;&#x2F;code&gt;, connect, run a command, read the output, then do it again. If you need to understand what&#x27;s happening across the cluster, that usually means opening more terminals. More SSH sessions. More mental bookkeeping.&lt;&#x2F;p&gt;
&lt;p&gt;At some point, you&#x27;re carrying around a model of the cluster that only exists in your head or maybe on a napkin next to your desk. It&#x27;s easy to check a key on one node and assume it looks the same everywhere else. It&#x27;s also an easy way to make mistakes.&lt;&#x2F;p&gt;
&lt;p&gt;This is how many of us learned to work with Valkey, and it&#x27;s still how a lot of day-to-day debugging happens today. Familiarity, though, doesn&#x27;t always mean efficiency. In practice, it often means friction we&#x27;ve just learned to tolerate.&lt;&#x2F;p&gt;
&lt;p&gt;Enter Valkey Admin.&lt;&#x2F;p&gt;
&lt;p&gt;Much of this thinking was shared recently by &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.linkedin.com&#x2F;in&#x2F;rongzhangrz&#x2F;&quot;&gt;Rong Zhang&lt;&#x2F;a&gt; at the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;unlocked.gomomento.com&#x2F;&quot;&gt;Unlocked Conference&lt;&#x2F;a&gt;, where several Valkey team members walked through real workflows that led to the tool: understanding cluster topology, diagnosing performance issues, and making everyday operational tasks easier to reason about without stitching everything together by hand.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-is-valkey-admin&quot;&gt;What is Valkey Admin?&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey Admin is an application that gives you a direct, visual interface to your Valkey clusters.&lt;&#x2F;p&gt;
&lt;p&gt;Immediately upon connecting, you can see the shape of the cluster itself: which nodes are primaries, which are replicas, and how individual nodes are behaving. From there, you can drill into node dashboards, view key metrics, and understand what&#x27;s happening without reconstructing the picture manually.&lt;&#x2F;p&gt;
&lt;p&gt;Key management is a first-class workflow. You can browse keys interactively, inspect values, check TTLs, look at sizes and data structures, modify or remove keys, and search or filter by prefix. All Valkey data types are supported, and the goal is to make inspection and troubleshooting as straightforward as possible.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey Admin also includes tools for diagnosing performance problems. It can identify hot keys using low-overhead analysis when LFU is enabled, or fall back to configurable sampling when it isn&#x27;t. Command logs surface slow commands, large requests, and large replies, making it easier to understand why latency changes instead of just noticing that it did.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;introducing-valkey-admin&#x2F;images&#x2F;hot-key-monitoring.jpg&quot; alt=&quot;Screenshot of hot key monitoring in Valkey Admin&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;The application also embeds the &lt;code&gt;valkey-cli&lt;&#x2F;code&gt;. You run the same commands you already know, but with added context, including the ability to diff results between runs, so changes in state are highly visible.&lt;&#x2F;p&gt;
&lt;p&gt;This isn&#x27;t meant to replace a full monitoring stack, and it&#x27;s not trying to be a wall of dashboards. If you already have Grafana and alerting in place, Valkey Admin lives somewhere between ad-hoc CLI usage and long-term metrics. It&#x27;s the tool you reach for when you need to understand what&#x27;s happening right now.&lt;&#x2F;p&gt;
&lt;p&gt;This type of admin tooling is part of the operational surface of a datastore. For systems that run critical workloads, the tooling must be inspectable, extensible, and governed by the same open principles as the system itself. Rather than relying on a black-box admin interface with opaque trade-offs and closed roadmaps, Valkey Admin is built to be something the community can understand and evolve over time.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;when-things-go-wrong&quot;&gt;When things go wrong&lt;&#x2F;h2&gt;
&lt;p&gt;If you&#x27;ve ever debugged a cache issue late at night, you know how quickly context switching becomes the real problem. The issue itself is already stressful. The last thing you want at 2am is to double-check which node you&#x27;re connected to or mentally reconstruct slot ownership across a cluster.&lt;&#x2F;p&gt;
&lt;p&gt;The same friction shows up during development. Verifying cache behavior often means writing a throwaway script, running a handful of commands, or re-learning &lt;code&gt;SCAN&lt;&#x2F;code&gt; syntax just to confirm that something worked. Most of us already have at least one of those scripts lying around. Writing another one every time isn&#x27;t a great use of time or effort.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey Admin isn&#x27;t an attempt to replace the command line. It&#x27;s an acknowledgment that some workflows are easier, faster, and less error-prone when you can see what&#x27;s happening instead of inferring it.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;early-design-choices&quot;&gt;Early design choices&lt;&#x2F;h2&gt;
&lt;p&gt;One of the key technical decisions in Valkey Admin is the use of &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-glide&quot;&gt;Valkey GLIDE&lt;&#x2F;a&gt; for the client layer. GLIDE provides a shared, high-performance core with thin language bindings, avoiding the need to re-implement protocol handling, clustering logic, and connection management independently in every language.&lt;&#x2F;p&gt;
&lt;p&gt;GLIDE&#x27;s architecture is a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;aws.amazon.com&#x2F;about-aws&#x2F;whats-new&#x2F;2025&#x2F;06&#x2F;valkey-glide-2-0-go-opentelemetry-pipeline-batching&quot;&gt;strong fit for admin and observability workloads&lt;&#x2F;a&gt; that involve fan-out access patterns and high concurrency. It provides more predictable behavior, tighter alignment with Valkey&#x27;s internals, and a simpler foundation for building production-grade tooling at scale.&lt;&#x2F;p&gt;
&lt;p&gt;These same considerations shaped how Valkey Admin is delivered. It is built with &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;react.dev&#x2F;&quot;&gt;React&lt;&#x2F;a&gt; and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.typescriptlang.org&#x2F;&quot;&gt;TypeScript&lt;&#x2F;a&gt;, and runs as an &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.electronjs.org&#x2F;&quot;&gt;Electron&lt;&#x2F;a&gt; application on macOS and Linux, with support for Windows via WSL. That choice allows the tool to remain fast, responsive, and close to the developer&#x27;s workflow, without requiring a separate deployment or service to get started.&lt;&#x2F;p&gt;
&lt;p&gt;The project is fully open source, and contributions are encouraged both to improve the tool itself and to adapt it to the different environments Valkey runs in.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;where-it-s-headed&quot;&gt;Where it&#x27;s headed&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey Admin focuses on reducing day-to-day overhead of operating cache clusters, not trying to be an all-encompassing admin platform. During the Unlocked talk, Zhang was careful not to over-promise a long-term roadmap, instead pointing to areas the community is actively exploring.&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&quot;Valkey Admin is about solving the day-to-day problems Valkey users face: understanding cluster topology, identifying performance bottlenecks, and managing keys without having to piece everything together by hand.&quot;&lt;&#x2F;em&gt;
Rong Zhang, Product Manager, AWS (Unlocked Conference)&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;Based on early feedback, the focus areas include deeper analysis of large keys, improved visualization for complex data types, stronger auth and access control, and a web-native version for Kubernetes or Docker Compose. According to Zhang, there&#x27;s also interest in AI-assisted observability that explains what changed and why, assisting operators at an entirely new level.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;getting-started&quot;&gt;Getting started&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey Admin is currently in preview, but you can try it today by cloning the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-admin&quot;&gt;repository&lt;&#x2F;a&gt; and running &lt;code&gt;.&#x2F;quickstart.sh&lt;&#x2F;code&gt;. The script installs dependencies, builds the application, and spins up a local Valkey cluster so there&#x27;s something to connect to immediately. A few minutes later, you have a working desktop app.&lt;&#x2F;p&gt;
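&lt;p&gt;For reference, those steps look roughly like this (repository URL from above; exact script behavior may change while the project is in preview):&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;git clone https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-admin.git&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;cd valkey-admin&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;.&#x2F;quickstart.sh&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;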
&lt;p&gt;From there, you add a connection to your own Valkey instance and start working. There&#x27;s no lengthy setup process and no configuration file you need to get exactly right before the tool becomes useful.&lt;&#x2F;p&gt;
&lt;p&gt;There&#x27;s also a web version available via &lt;code&gt;.&#x2F;quickstart-web.sh&lt;&#x2F;code&gt;. It doesn&#x27;t include some Electron-specific features, such as hot key tracking and command logs, but it&#x27;s useful for quick inspection during development.&lt;&#x2F;p&gt;
&lt;p&gt;Most of us would rather spend time building systems than managing infrastructure details by hand. If Valkey Admin helps reduce that overhead, then it&#x27;s doing its job.&lt;&#x2F;p&gt;
&lt;p&gt;If you manage Valkey clusters and find yourself bouncing between terminals more than you&#x27;d like, it&#x27;s worth trying. And if something feels awkward or incomplete, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-admin&#x2F;issues&quot;&gt;opening an issue&lt;&#x2F;a&gt; or contributing back is part of how the tool improves.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Operational Lessons from Large-Scale Valkey Deployments</title>
        <published>2026-02-19T00:00:00+00:00</published>
        <updated>2026-02-19T00:00:00+00:00</updated>
        
        <author>
          <name>
            allenhelton
          </name>
        </author>
        
        <author>
          <name>
            mikecallahan
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/operational-lessons/"/>
        <id>https://valkey.io/blog/operational-lessons/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/operational-lessons/">&lt;p&gt;Engineers operating large-scale systems face a consistent challenge: what works at moderate scale often breaks in subtle ways as systems grow. Recently, contributors and platform teams gathered at the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.unlockedconf.io&quot;&gt;Unlocked Conference&lt;&#x2F;a&gt; to compare notes on what actually happens when Valkey is under real production load.&lt;&#x2F;p&gt;
&lt;p&gt;What follows are high-level observations from the day — directional insights that kept resurfacing across talks, questions, and small-group discussions. For teams running or evaluating Valkey, these represent the operational questions large-scale deployments are asking right now.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;scale-changes-the-nature-of-problems&quot;&gt;Scale Changes the Nature of Problems&lt;&#x2F;h2&gt;
&lt;p&gt;As &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.linkedin.com&#x2F;in&#x2F;kshams&#x2F;&quot;&gt;Khawaja Shams&lt;&#x2F;a&gt; put it during the opening remarks:&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;Scale exposes all truths.&quot;&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;Across multiple conversations throughout the day, the recurring theme was what changes once systems cross certain thresholds. Latency that once felt negligible becomes visible. Client behavior that looked harmless begins to shape tail latencies. Operational shortcuts that worked at small volumes introduce instability when workloads grow.&lt;&#x2F;p&gt;
&lt;p&gt;The takeaway was that scale changes &lt;em&gt;which questions you should be asking&lt;&#x2F;em&gt;. Conversations shift from &quot;does it work?&quot; to &quot;what happens when it&#x27;s stressed?&quot;&lt;&#x2F;p&gt;
&lt;p&gt;This aligns with recent work in Valkey such as &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;atomic-slot-migration&#x2F;&quot;&gt;Atomic Slot Migration&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;valkey-8-1-0-ga&#x2F;#i-o-threads-improvements&quot;&gt;I&#x2F;O threading improvements&lt;&#x2F;a&gt; introduced in Valkey 8.0, and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;2078&quot;&gt;copy-avoidance paths&lt;&#x2F;a&gt; for large replies. These changes reflect lessons learned at scale, where larger clusters, heavier payloads, and higher concurrency expose constraints that smaller deployments rarely encounter.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;predictability-over-peak-throughput&quot;&gt;Predictability Over Peak Throughput&lt;&#x2F;h2&gt;
&lt;p&gt;Another frequently mentioned topic was reducing tail latency rather than maximizing peak throughput. A large gap between your P99 and P999 latency reveals instability caused by issues such as bursty traffic, background work, large payloads, or similar workload characteristics. The practical lesson is that outages often come from edge cases rather than the happy path.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.linkedin.com&#x2F;in&#x2F;madelyn-olson-valkey&#x2F;&quot;&gt;Madelyn Olson&lt;&#x2F;a&gt; shared the five guiding performance principles at Valkey:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;operational-lessons&#x2F;valkey-performance-principles.jpg&quot; alt=&quot;Valkey Performance Principles Screenshot&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;With &lt;em&gt;&quot;provide predictable user latency&quot;&lt;&#x2F;em&gt; listed as the second principle, it&#x27;s clear that consistency is a deliberate priority for the project. It&#x27;s treated as a first-order goal alongside scalability and simplicity.&lt;&#x2F;p&gt;
&lt;p&gt;A small percentage of multi-megabyte values can distort tail percentiles even when average latency and overall throughput appear stable — behavior that Khawaja Shams explored in a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.gomomento.com&#x2F;blog&#x2F;large-objects-ruin-the-party-valkey-9-tames-them&#x2F;&quot;&gt;deep dive on large object handling&lt;&#x2F;a&gt;. During her systems engineering session, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.linkedin.com&#x2F;in&#x2F;danielamiao&#x2F;&quot;&gt;Daniela Miao&lt;&#x2F;a&gt; emphasized a similar principle: performance work often begins by raising the floor of latency behavior before chasing peak numbers. Building predictable systems requires designing for these edge cases explicitly rather than assuming &quot;typical&quot; request sizes.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey 9.0 introduced reply copy-avoidance paths for large values, significantly reducing the time the main event loop can be blocked by multi-megabyte responses. It&#x27;s a concrete example of the broader shift from chasing peak numbers to bounding worst-case behavior so latency distributions remain stable as workloads evolve.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;payload-size-and-bandwidth-shape-outcomes&quot;&gt;Payload Size and Bandwidth Shape Outcomes&lt;&#x2F;h2&gt;
&lt;p&gt;While the prior discussion focused on latency variance inside the engine, another recurring theme was variance introduced by traffic moving across the network. With high-performance systems, bottlenecks often emerge from the &lt;em&gt;shape&lt;&#x2F;em&gt; of traffic rather than raw CPU or memory limits. At Unlocked, teams described systems where CPU and memory looked healthy, yet latency variance and node failures increased once payload sizes grew or request patterns shifted.&lt;&#x2F;p&gt;
&lt;p&gt;Nodes can fail not from CPU exhaustion or memory pressure alone, but from the sheer volume of bytes moving across the network. As Ignacio &quot;Nacho&quot; Alvarez from Mercado Libre put it:&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;Nodes dying because of the volume of bytes moving in and out was the hardest problem to solve.&quot;&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;Bandwidth becomes a primary bottleneck and incident driver — a reminder that network throughput and payload variability can dominate system behavior long before standard utilization metrics indicate trouble.&lt;&#x2F;p&gt;
&lt;p&gt;Instead of asking &quot;are we out of CPU or memory?&quot;, teams found themselves asking &quot;what are we actually sending over the wire?&quot; Payload size distribution, network behavior, and client request patterns frequently ended up being primary cost drivers compared to processor utilization alone. Nacho recounted that monitoring how large those requests were and how unevenly they arrived was crucial to Mercado Libre&#x27;s stability.&lt;&#x2F;p&gt;
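&lt;p&gt;A back-of-envelope sketch of that question (all figures are hypothetical): at a fixed request rate, payload size alone can decide whether a node&#x27;s network link is comfortable or saturated.&lt;&#x2F;p&gt;

```python
# Hypothetical numbers: estimate egress bandwidth from request rate and
# payload size, ignoring protocol overhead. The point: the same RPS that
# barely touches a 10 Gb/s NIC at ~1 KiB values saturates it at ~64 KiB.

NIC_CAPACITY_GBPS = 10.0  # assumed 10 Gb/s network interface

def wire_gbps(requests_per_sec: int, avg_payload_bytes: int) -> float:
    """Approximate payload traffic in gigabits per second."""
    return requests_per_sec * avg_payload_bytes * 8 / 1e9

small = wire_gbps(200_000, 1_024)        # 200k RPS of ~1 KiB values
large = wire_gbps(200_000, 64 * 1_024)   # same RPS, ~64 KiB values

print(f"{small:.2f} Gb/s vs {large:.2f} Gb/s")  # 1.64 Gb/s vs 104.86 Gb/s
```

&lt;p&gt;Tracking an estimate like this alongside CPU and memory is what surfaces bandwidth as the bottleneck before standard utilization metrics complain.&lt;&#x2F;p&gt;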
&lt;p&gt;This awareness is visible in recent Valkey releases as well. Valkey 9.0 added &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;2092&quot;&gt;pipeline memory prefetching&lt;&#x2F;a&gt; to smooth bursty workloads and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1811&quot;&gt;Multipath TCP&lt;&#x2F;a&gt; support to reduce network-induced latency, alongside &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;1-billion-rps&#x2F;&quot;&gt;large-cluster resilience improvements&lt;&#x2F;a&gt; aimed at keeping end-to-end latency stable at higher node counts. Taken together, these changes point less toward chasing peak throughput numbers and more toward limiting the impact of uneven traffic and network variability.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;community-participation-is-the-multiplier&quot;&gt;Community Participation Is the Multiplier&lt;&#x2F;h2&gt;
&lt;p&gt;One of the strongest signals coming out of Unlocked wasn’t a feature announcement or benchmark — it was how openly engineers were exchanging operational lessons. Maintainers, platform teams, and end users were comparing migration strategies, performance regressions, and tooling approaches throughout the day. As Khawaja put it during the opening remarks, the goal is to “&lt;em&gt;share what you learn and talk about the outages you caused — we’re among friends.&lt;&#x2F;em&gt;”&lt;&#x2F;p&gt;
&lt;p&gt;This culture of openness reflects what’s happening in Valkey itself. According to the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;2025-year-end&#x2F;&quot;&gt;Valkey 2025 Year-End Review&lt;&#x2F;a&gt;, more than &lt;em&gt;300 contributors&lt;&#x2F;em&gt; authored commits, reviewed code, or opened issues over the course of the year. Participation here spanned testing releases, proposing features, building tooling, documenting deployment patterns, and sharing production lessons that shape future improvements.&lt;&#x2F;p&gt;
&lt;p&gt;For operators, that amount of involvement means bugs are discovered across a wider variety of real workloads, migration paths are validated by multiple organizations before becoming common practice, and performance fixes are grounded in production behavior rather than synthetic benchmarks alone. Stability improvements and tooling changes arrive already pressure-tested across different infrastructure types and scale points.&lt;&#x2F;p&gt;
&lt;p&gt;In this sense, contributions act as a multiplier. Shared experience compounds into better tooling and more predictable systems across the ecosystem. What was visible in hallway conversations and session Q&amp;amp;A at Unlocked mirrors what’s happening in these repositories every day: a project evolving through collective operational knowledge.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-this-means-for-operators&quot;&gt;What This Means for Operators&lt;&#x2F;h2&gt;
&lt;p&gt;Across sessions and discussions, several operational priorities surfaced repeatedly for teams running or evaluating Valkey:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Monitor P99 and P999 latency, not just medians. Tail percentiles reveal the edge cases that cause outages while median latency still looks stable.&lt;&#x2F;li&gt;
&lt;li&gt;Instrument payload size distribution alongside traditional metrics. Bandwidth saturation often appears before CPU or memory pressure signals trouble.&lt;&#x2F;li&gt;
&lt;li&gt;Treat traffic shape as a first-class metric. Bursty workloads, background work, and large responses frequently explain instability better than raw request counts.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;These recommendations reflect the operational questions teams managing Valkey at scale are actively solving.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;staying-current&quot;&gt;Staying Current&lt;&#x2F;h2&gt;
&lt;p&gt;Because these priorities continue to evolve, most of the related work and discussion happens in the open through release notes, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;issues&quot;&gt;open issues in GitHub&lt;&#x2F;a&gt;, ongoing community conversations, and conferences like &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.unlockedconf.io&quot;&gt;Unlocked&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;For those interested in following where these signals continue to develop, the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;valkey-newsletter-new&#x2F;#email-signup&quot;&gt;Valkey Community newsletter&lt;&#x2F;a&gt; provides periodic summaries of new releases, tooling updates, and contributor insights as they are published.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Don’t Miss What’s Next in Valkey — Subscribe Now</title>
        <published>2026-02-04T00:00:00+00:00</published>
        <updated>2026-02-04T00:00:00+00:00</updated>
        
        <author>
          <name>
            crystalpham
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/valkey-newsletter-new/"/>
        <id>https://valkey.io/blog/valkey-newsletter-new/</id>
        
<content type="html" xml:base="https://valkey.io/blog/valkey-newsletter-new/">&lt;p&gt;Valkey is moving fast, and the easiest way to stay ahead is to &lt;a href=&quot;&#x2F;blog&#x2F;valkey-newsletter-new&#x2F;#email-signup&quot;&gt;subscribe&lt;&#x2F;a&gt; to the official Valkey newsletter.
From new releases and roadmap milestones to community highlights and upcoming events, our newsletter is your single source of truth for everything happening across the Valkey ecosystem. No digging, no second-guessing — just the updates that matter, delivered straight to you.&lt;&#x2F;p&gt;
&lt;p&gt;When you subscribe, you’ll get:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Breaking updates on new releases, features, and project announcements.&lt;&#x2F;li&gt;
&lt;li&gt;Early visibility into events and opportunities to present, plus key takeaways if you couldn’t attend.&lt;&#x2F;li&gt;
&lt;li&gt;Real stories from the community, shared by the people building and using Valkey.&lt;&#x2F;li&gt;
&lt;li&gt;Clear ways to get involved, whether you want to contribute, collaborate, or get started faster.
&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;valkey-newsletter-new&#x2F;valkeycommunity.gif&quot; alt=&quot;Valkey-Newsletter&quot; &#x2F;&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;If you care about Valkey — or rely on it — this is how you stay in the loop.
👉 &lt;a href=&quot;&#x2F;blog&#x2F;valkey-newsletter-new&#x2F;#email-signup&quot;&gt;Subscribe&lt;&#x2F;a&gt; now and stay connected.
Subscription is open to everyone, so invite a teammate, a collaborator, or any open source enthusiast who doesn’t want to miss what’s coming next.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey is growing! Make sure you’re part of the conversation.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Valkey 2025 Year-End Review: Reflecting on Progress and Looking Ahead</title>
        <published>2026-01-22T00:00:00+00:00</published>
        <updated>2026-01-22T00:00:00+00:00</updated>
        
        <author>
          <name>
            kyledvs
          </name>
        </author>
        
        <author>
          <name>
            crystalpham
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/2025-year-end/"/>
        <id>https://valkey.io/blog/2025-year-end/</id>
        
<content type="html" xml:base="https://valkey.io/blog/2025-year-end/">&lt;p&gt;As we kick off 2026, it’s a great moment to pause, reflect, and celebrate what the Valkey project and its community have accomplished together. This past year marked an important chapter for Valkey, one defined by growth, collaboration, and a shared commitment to building an open, high-performance key-value store for everyone.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;2025-key-milestones&quot;&gt;2025 Key Milestones&lt;&#x2F;h2&gt;
&lt;p&gt;This past year brought meaningful progress across the Valkey ecosystem. Some highlights:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;valkey-bundle-one-stop-shop-for-low-latency-modern-applications&#x2F;&quot;&gt;Valkey bundle release&lt;&#x2F;a&gt;, which brought together the following modules:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;JSON&lt;&#x2F;li&gt;
&lt;li&gt;Bloom&lt;&#x2F;li&gt;
&lt;li&gt;Vector Search&lt;&#x2F;li&gt;
&lt;li&gt;LDAP&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;valkey-8-1-0-ga&#x2F;&quot;&gt;Valkey 8.1 release&lt;&#x2F;a&gt;:
Valkey 8.1 is a minor version update designed to further enhance performance, reliability, observability and usability over Valkey 8.0 for all Valkey installations.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;introducing-valkey-9&#x2F;&quot;&gt;Valkey 9.0 release&lt;&#x2F;a&gt;:
Valkey 9.0 brings innovation and long-requested features, with 1 billion+ RPS clusters, 40% higher throughput, and major feature launches including:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Hash field expiration — fine-grained TTLs that automatically free memory&lt;&#x2F;li&gt;
&lt;li&gt;Atomic slot migration — seamless, zero-error resharding with no downtime&lt;&#x2F;li&gt;
&lt;li&gt;Multiple databases in cluster mode — isolate workloads within a single cluster&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Technical milestones:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;try-valkey&#x2F;&quot;&gt;Try Valkey&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;az-affinity-strategy&#x2F;&quot;&gt;Valkey Glide&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;valkey-helm-chart&#x2F;&quot;&gt;Helm Chart&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;performance&#x2F;&quot;&gt;Performance Dashboards&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;valkey-swift&#x2F;&quot;&gt;Valkey Swift&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;community-growth&quot;&gt;Community Growth&lt;&#x2F;h2&gt;
&lt;p&gt;In 2025, the Valkey project had 346 active contributors, that is, individuals who performed activities such as commits, issues, or pull requests during the selected time period. (Source: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;insights.linuxfoundation.org&#x2F;project&#x2F;valkey?timeRange=past365days&amp;amp;start=2025-01-15&amp;amp;end=2026-01-15&quot;&gt;LFX analytics&lt;&#x2F;a&gt;)&lt;&#x2F;p&gt;
&lt;h2 id=&quot;2025-events-recap&quot;&gt;2025 events recap&lt;&#x2F;h2&gt;
&lt;p&gt;We hosted our first Valkey conference, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;events.linuxfoundation.org&#x2F;keyspace&#x2F;&quot;&gt;Keyspace Amsterdam&lt;&#x2F;a&gt;, in 2025. Here are a few photos from the event.
&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;2025-year-end&#x2F;IMG_9.jpeg&quot; alt=&quot;Keyspace-Amsterdam&quot; &#x2F;&gt;
&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;2025-year-end&#x2F;IMG_10.jpeg&quot; alt=&quot;Keyspace-Amsterdam&quot; &#x2F;&gt;
&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;2025-year-end&#x2F;IMG_11.jpeg&quot; alt=&quot;Keyspace-Amsterdam&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;At &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;events&#x2F;keyspace-beijing-2025&#x2F;&quot;&gt;Keyspace Beijing&lt;&#x2F;a&gt;, some of the Valkey community members joined a panel to share insights on Valkey and real‑world adoption. More than 65 in-person attendees and 1,000+ on-stream viewers, including developers, SREs, and DevOps professionals, came together to exchange ideas, share best practices, and explore new use cases powered by Valkey.
&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;2025-year-end&#x2F;image3.jpg&quot; alt=&quot;Keyspace-Beijing&quot; &#x2F;&gt;
&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;2025-year-end&#x2F;image4.jpg&quot; alt=&quot;Keyspace-Beijing&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;At &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;reinvent.awsevents.com&#x2F;&quot;&gt;AWS re:Invent&lt;&#x2F;a&gt; in Las Vegas, Valkey took over the Strip’s iconic Taco Bell Cantina for the three-floor House of Valkey, turning the conference into a true party hub. Attendees enjoyed deep technical conversations, live performers, giveaways, and exclusive swag.
&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;2025-year-end&#x2F;image1.png&quot; alt=&quot;House-of-Valkey&quot; &#x2F;&gt;
&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;2025-year-end&#x2F;image2.png&quot; alt=&quot;House-of-Valkey&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;At &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=8-trRjBr81I&amp;amp;list=PLbzoR-pLrL6pRN6kobVnmu0rY2RLLczAj&quot;&gt;Open Source Summit Japan&lt;&#x2F;a&gt;, Madelyn Olson of the Valkey Technical Steering Committee and Roberto Luna Rojas, Sr. Developer Advocate, presented sessions on open source governance, Linux packaging, community transitions, and GitHub best practices. &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=eW3uLjIj69g&quot;&gt;Watch the session here.&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Beyond those key highlights, the Valkey team and community stayed active throughout the year, showing up at many events around the world. &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;live.symfony.com&#x2F;2025-amsterdam-con&#x2F;&quot;&gt;SymfonyCon Amsterdam&lt;&#x2F;a&gt; added a playful splash to the calendar as the Symfony community marked its 20th anniversary with bouncy balls, ball pits, and cake. Valkey also made its mark at &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;ndcconferences.com&#x2F;&quot;&gt;NDC London&lt;&#x2F;a&gt;, where attendees met Sr. OSS Developer Advocate Nigel Brown and stocked up on Valkey beanies and stickers. Valkey’s presence extended even further across &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;za.pycon.org&#x2F;&quot;&gt;PyCon Africa&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;2025.pycon.org.au&#x2F;&quot;&gt;PyCon Australia&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;laracon.us&#x2F;&quot;&gt;Laracon&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.alibabacloud.com&#x2F;en&#x2F;apsara-conference?_p_lc=1&quot;&gt;Apsara&lt;&#x2F;a&gt;, and more, continuing to build meaningful connections and celebrate the growing global Valkey community.
&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;2025-year-end&#x2F;image5.jpg&quot; alt=&quot;valkey-events&quot; &#x2F;&gt;
&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;2025-year-end&#x2F;image6.jpg&quot; alt=&quot;valkey-events&quot; &#x2F;&gt;
&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;2025-year-end&#x2F;image7.jpg&quot; alt=&quot;valkey-events&quot; &#x2F;&gt;
&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;2025-year-end&#x2F;image8.jpg&quot; alt=&quot;valkey-events&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;where-to-next-for-the-valkey-community&quot;&gt;Where to next for the Valkey community&lt;&#x2F;h2&gt;
&lt;p&gt;In 2026, Valkey will be on the road again, with appearances planned at the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;unlocked.gomomento.com&#x2F;&quot;&gt;Unlocked conference&lt;&#x2F;a&gt; and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;laracon.in&#x2F;&quot;&gt;Laracon India 2026&lt;&#x2F;a&gt; from January 31 to February 1, with even more events to be announced as the year unfolds.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;valkey-in-the-news&quot;&gt;Valkey in the news&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;DBTA: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.dbta.com&#x2F;Editorial&#x2F;News-Flashes&#x2F;Valkey-90-Offers-Performance-and-Resiliency-for-Real-Time-Workloads-172148.aspx&quot;&gt;Valkey 9.0 Offers Performance and Resiliency for Real-Time Workloads&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;diginomica: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;diginomica.com&#x2F;valkey-90-open-source-velocity-and-pursuit-real-time-resilience&quot;&gt;Valkey 9.0 – open-source velocity and the pursuit of real-time resilience&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;It&#x27;s FOSS: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;news.itsfoss.com&#x2F;valkey-9-release&#x2F;&quot;&gt;Valkey 9.0 Adds Multi-Database Clusters, Supports 1 Billion Requests Per Second&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Runtime: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.runtime.news&#x2F;datadog-answers-a-burning-question-iac-has-a-new-player&#x2F;&quot;&gt;Datadog answers a burning question; IaC has a new player&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;The New Stack: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;thenewstack.io&#x2F;how-the-team-behind-valkey-knew-it-was-time-to-fork&#x2F;&quot;&gt;How the Team Behind Valkey Knew It Was Time to Fork&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;The New Stack: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;thenewstack.io&#x2F;open-source-inside-2025s-4-biggest-trends&#x2F;&quot;&gt;Open Source: Inside 2025&#x27;s 4 Biggest Trends - The New Stack&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;2026-valkey-community-goals&quot;&gt;2026 Valkey Community Goals&lt;&#x2F;h2&gt;
&lt;p&gt;Looking ahead into 2026, the Valkey Project and its community will focus on:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Making Valkey easier to run, scale, manage, and observe,&lt;&#x2F;li&gt;
&lt;li&gt;Better utilizing existing infrastructure by taking advantage of modern CPUs with multi-threading for higher performance,&lt;&#x2F;li&gt;
&lt;li&gt;Building deeper connections with other open source projects so they can be optimized for Valkey,&lt;&#x2F;li&gt;
&lt;li&gt;Providing a wider network of local groups for Valkey enthusiasts.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;We’re excited about what’s ahead and committed to evolving the Valkey project in ways that serve both current community members and future adopters.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;thank-you&quot;&gt;Thank You&lt;&#x2F;h2&gt;
&lt;p&gt;Thank you to everyone who contributed code, tested releases, filed issues, shared feedback, or adopted Valkey in production. Your continued involvement strengthens and advances the Valkey project.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;get-involved&quot;&gt;Get Involved&lt;&#x2F;h2&gt;
&lt;p&gt;To learn more and explore ways to get involved in the community:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;If you would like to contribute to the mission, please consider joining the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;community&#x2F;&quot;&gt;Valkey community here&lt;&#x2F;a&gt;, where community members contribute and shape the future of the Valkey project.&lt;&#x2F;li&gt;
&lt;li&gt;Follow along on our social channels for the latest Valkey community news, event recaps, and project developments (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.linkedin.com&#x2F;company&#x2F;valkey&#x2F;&quot;&gt;LinkedIn&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;x.com&#x2F;valkey_io&quot;&gt;X&lt;&#x2F;a&gt;, and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;valkeyio.bsky.social&quot;&gt;BlueSky&lt;&#x2F;a&gt;).&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Valkey Helm: The new way to deploy Valkey on Kubernetes</title>
        <published>2026-01-06T00:00:00+00:00</published>
        <updated>2026-01-06T00:00:00+00:00</updated>
        
        <author>
          <name>
            sgissi
          </name>
        </author>
        
        <author>
          <name>
            maheshcherukumilli
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/valkey-helm-chart/"/>
        <id>https://valkey.io/blog/valkey-helm-chart/</id>
        
<content type="html" xml:base="https://valkey.io/blog/valkey-helm-chart/">&lt;p&gt;Last year, Bitnami changed how it publishes and supports many container images and Helm charts (see &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;bitnami&#x2F;charts&#x2F;issues&#x2F;35164&quot;&gt;charts issue #35164&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;bitnami&#x2F;charts&#x2F;issues&#x2F;36215&quot;&gt;charts issue #36215&lt;&#x2F;a&gt;, and the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;news.broadcom.com&#x2F;app-dev&#x2F;broadcom-introduces-bitnami-secure-images-for-production-ready-containerized-applications&quot;&gt;Bitnami Secure Images announcement&lt;&#x2F;a&gt;). Some images moved behind new terms, and older tags may no longer be available as before.&lt;&#x2F;p&gt;
&lt;p&gt;If your pipelines pull Bitnami charts or images during deploys, you may experience significant operational issues: rollouts can fail with &lt;code&gt;ImagePullBackOff&lt;&#x2F;code&gt; or auth&#x2F;404 errors, clusters can drift when staging keeps old cached images while production can&#x27;t pull or resolve a different tag, and &quot;invisible&quot; upgrades can occur when a moved tag points to a new digest. During incidents, rollbacks may slow down or fail entirely because the old image isn&#x27;t fetchable.&lt;&#x2F;p&gt;
&lt;p&gt;To reduce the impact on Valkey deployments, the community created an official, project-maintained Helm chart (request: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;issues&#x2F;2371&quot;&gt;issue #2371&lt;&#x2F;a&gt;, chart: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-helm&quot;&gt;valkey-helm&lt;&#x2F;a&gt;). With the official chart, you can pin chart and image versions, keep &lt;code&gt;values.yaml&lt;&#x2F;code&gt; in code, and upgrade on your schedule without depending on vendor policy changes.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-a-valkey-maintained-chart-helps&quot;&gt;Why a Valkey maintained chart helps&lt;&#x2F;h2&gt;
&lt;p&gt;With the official chart, you control exactly which versions you deploy, without third-party vendor policies forcing unexpected changes. Pin a chart release from the Valkey repo (for example &lt;code&gt;--version 0.9.2&lt;&#x2F;code&gt; from &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-helm&quot;&gt;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-helm&lt;&#x2F;a&gt;) and lock the Valkey image tag in your &lt;code&gt;values.yaml&lt;&#x2F;code&gt;. Because the chart follows Valkey releases and docs, you can bump versions in a pull request, test in staging, then promote the same versions to production.&lt;&#x2F;p&gt;
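&lt;p&gt;As a sketch of what that pinning can look like in code (the key names below are illustrative assumptions; the chart README is the authoritative reference for the actual schema):&lt;&#x2F;p&gt;

```yaml
# Hypothetical values.yaml fragment -- key names are an assumption, so
# confirm them against the valkey-helm chart README before use.
image:
  repository: valkey/valkey
  tag: "9.0.1"   # pin the exact image version; bump it in a pull request
```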
&lt;h2 id=&quot;capabilities-in-the-valkey-helm-chart&quot;&gt;Capabilities in the Valkey Helm Chart&lt;&#x2F;h2&gt;
&lt;p&gt;The official Valkey Helm chart supports the following:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Standalone instance&lt;&#x2F;strong&gt; - Deploy a single Valkey instance with or without data persistence, perfect for simple caching layers and development environments.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Replicated read-heavy workloads&lt;&#x2F;strong&gt; - Use a primary-replica topology with separate read and read-write endpoints, distributing read traffic across all replica instances while routing writes to the primary node.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;ACL-based authentication&lt;&#x2F;strong&gt; - Enable authentication using Access Control Lists for fine-grained user permissions and password-based authentication.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;TLS encryption&lt;&#x2F;strong&gt; - Enable TLS for encrypted client-server and replica-primary communication, protecting data in transit.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Metrics&lt;&#x2F;strong&gt; - Collect Valkey metrics by enabling the Prometheus exporter.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;For details on how to configure these capabilities and customize your deployment, see the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-helm&#x2F;tree&#x2F;main&#x2F;valkey&quot;&gt;chart README&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;migrating-from-bitnami-to-the-official-valkey-chart&quot;&gt;Migrating from Bitnami to the Official Valkey Chart&lt;&#x2F;h2&gt;
&lt;p&gt;Because of differences in how the two charts structure resources, labels, and StatefulSets, you can&#x27;t upgrade in-place from Bitnami. The charts use incompatible naming conventions and resource management approaches. Instead, deploy the official Valkey chart alongside your existing Bitnami installation and migrate the data. Plan for a brief maintenance window to ensure all writes are fully replicated before switching your applications to the new endpoints.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;before-you-migrate&quot;&gt;Before You Migrate&lt;&#x2F;h3&gt;
&lt;p&gt;Review the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-helm&#x2F;tree&#x2F;main&#x2F;valkey&quot;&gt;official chart documentation&lt;&#x2F;a&gt; to understand configuration options and match your current Bitnami settings. Bitnami&#x27;s default configuration deploys one primary with three replicas, protected by a randomly-generated password and without TLS. The migration steps below will configure the official chart the same way — adjust the chart parameters to match your current deployment.&lt;&#x2F;p&gt;
&lt;p&gt;Ensure you are using Bitnami Valkey chart version 2.0.0 or higher. That release updated service names and labels from &lt;code&gt;master&lt;&#x2F;code&gt; to &lt;code&gt;primary&lt;&#x2F;code&gt; for consistency with current terminology, and the migration steps below assume that naming convention.&lt;&#x2F;p&gt;
&lt;p&gt;The following commands should be executed from a Bash shell. You&#x27;ll need &lt;code&gt;kubectl&lt;&#x2F;code&gt; configured to access your Kubernetes cluster, &lt;code&gt;helm&lt;&#x2F;code&gt; to install the new chart, and the standard utilities &lt;code&gt;grep&lt;&#x2F;code&gt; and &lt;code&gt;base64&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;step-1-find-existing-pods-services-and-namespace&quot;&gt;Step 1: Find existing pods, services and namespace&lt;&#x2F;h3&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# List all pods in all namespaces with app name &amp;#39;valkey&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; get pods&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --all-namespaces -l&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; app.kubernetes.io&#x2F;name=valkey&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -o&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; custom-columns=Pod:.metadata.name,Namespace:.metadata.namespace,Instance:.metadata.labels.app&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;\\&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;.kubernetes&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;\\&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;.io&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;\\&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&#x2F;instance&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# * Sample Output *&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Pod                                 Namespace   Instance&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# valkey-bitnami-primary-0            apps-test   valkey-bitnami&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# valkey-bitnami-replicas-0           apps-test   valkey-bitnami&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# valkey-bitnami-replicas-1           apps-test   valkey-bitnami&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# valkey-bitnami-replicas-2           apps-test   valkey-bitnami&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Replace values below with the namespace and instance above:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;export&lt;&#x2F;span&gt;&lt;span&gt; NAMESPACE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;apps-test&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;export&lt;&#x2F;span&gt;&lt;span&gt; INSTANCE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;valkey-bitnami&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Save current environment details to be used for replication:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Identify the name of the current primary service&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;export&lt;&#x2F;span&gt;&lt;span&gt; SVCPRIMARY&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span&gt;$(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; get service&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n&lt;&#x2F;span&gt;&lt;span&gt; $NAMESPACE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -l&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; app.kubernetes.io&#x2F;instance=&lt;&#x2F;span&gt;&lt;span&gt;$INSTANCE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;,app.kubernetes.io&#x2F;name=valkey,app.kubernetes.io&#x2F;component=primary&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -o&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; jsonpath=&amp;#39;{.items[0].metadata.name}&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Fetch the default user password&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;export&lt;&#x2F;span&gt;&lt;span&gt; PASS&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span&gt;$(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; get secret&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n&lt;&#x2F;span&gt;&lt;span&gt; $NAMESPACE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -l&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; app.kubernetes.io&#x2F;name=valkey,app.kubernetes.io&#x2F;instance=&lt;&#x2F;span&gt;&lt;span&gt;$INSTANCE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -o&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; jsonpath=&amp;#39;{.items[0].data.valkey-password}&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; |&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; base64&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -d&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h3 id=&quot;step-2-deploy-a-new-valkey-server&quot;&gt;Step 2: Deploy a new Valkey server&lt;&#x2F;h3&gt;
&lt;p&gt;Choose an instance name for the new deployment. It must be different from the current instance to avoid overwriting resources.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;export&lt;&#x2F;span&gt;&lt;span&gt; NEWINSTANCE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;valkey&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Add the official Helm chart repository:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;helm&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; repo add valkey https:&#x2F;&#x2F;valkey.io&#x2F;valkey-helm&#x2F;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;helm&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; repo update&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Create a &lt;code&gt;values.yaml&lt;&#x2F;code&gt; file that matches your current deployment. For details on the configuration options, check the chart &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-helm&#x2F;tree&#x2F;main&#x2F;valkey&quot;&gt;README&lt;&#x2F;a&gt; and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-helm&#x2F;blob&#x2F;main&#x2F;valkey&#x2F;values.yaml&quot;&gt;values.yaml&lt;&#x2F;a&gt;. The script below generates a file that matches the default Bitnami Valkey configuration:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;&#x2F;strong&gt;: The example below provides the password as plain-text for simplicity. In production, store the password in a Kubernetes Secret and reference it using the &lt;code&gt;auth.usersExistingSecret&lt;&#x2F;code&gt; setting.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;cat&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;lt;&amp;lt;&lt;&#x2F;span&gt;&lt;span&gt; EOF&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; values.yaml&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;auth:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  enabled: true&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  aclUsers:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    default:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;      password: &amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;$PASS&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;      permissions: &amp;quot;~* &amp;amp;* +@all&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;replica:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  enabled: true&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  replicas: 3&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  persistence:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    size: 8Gi&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;valkeyConfig: |&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  appendonly yes&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;EOF&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
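&lt;p&gt;To follow the note above and keep the password out of &lt;code&gt;values.yaml&lt;&#x2F;code&gt;, a minimal sketch of the Secret-based approach is shown below. The Secret key layout is an assumption here; check the chart README for the exact format &lt;code&gt;auth.usersExistingSecret&lt;&#x2F;code&gt; expects:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Store the default user&amp;#39;s password in a Secret (the key name is an assumption)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;kubectl create secret generic valkey-users -n $NAMESPACE --from-literal=default=&amp;quot;$PASS&amp;quot;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Then reference it from values.yaml instead of a plain-text password:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# auth:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;#   enabled: true&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;#   usersExistingSecret: valkey-users&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;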
&lt;p&gt;Install the new Valkey instance:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;helm&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; install&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n&lt;&#x2F;span&gt;&lt;span&gt; $NAMESPACE $NEWINSTANCE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey&#x2F;valkey&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -f&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; values.yaml&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Check it is running as expected:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# List new pods and ensure they are in &amp;#39;Running&amp;#39; state&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; get pods&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n&lt;&#x2F;span&gt;&lt;span&gt; $NAMESPACE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -l&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; app.kubernetes.io&#x2F;instance=&lt;&#x2F;span&gt;&lt;span&gt;$NEWINSTANCE&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# * Sample Output *&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# NAME       READY   STATUS    RESTARTS   AGE&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# valkey-0   1&#x2F;1     Running   0          2m33s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# valkey-1   1&#x2F;1     Running   0          2m16s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# valkey-2   1&#x2F;1     Running   0          2m4s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# valkey-3   1&#x2F;1     Running   0          103s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Check that server is responding to CLI commands&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; exec&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n&lt;&#x2F;span&gt;&lt;span&gt; $NAMESPACE $NEWINSTANCE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;-0&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -c&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey-cli&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -a&lt;&#x2F;span&gt;&lt;span&gt; $PASS&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --no-auth-warning&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; ping&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# * Sample Output *&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# PONG&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Check that the current instance is reachable from the new instance&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; exec&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n&lt;&#x2F;span&gt;&lt;span&gt; $NAMESPACE $NEWINSTANCE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;-0&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -c&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey-cli&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -a&lt;&#x2F;span&gt;&lt;span&gt; $PASS&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -h&lt;&#x2F;span&gt;&lt;span&gt; $SVCPRIMARY&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --no-auth-warning&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; ping&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# * Sample Output *&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# PONG&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Create shell aliases to call the Valkey CLI on the new and current instances:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;alias&lt;&#x2F;span&gt;&lt;span&gt; new-valkey-cli&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;kubectl exec -n &lt;&#x2F;span&gt;&lt;span&gt;$NAMESPACE $NEWINSTANCE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;-0 -c valkey -- valkey-cli -a &lt;&#x2F;span&gt;&lt;span&gt;$PASS&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; --no-auth-warning&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;alias&lt;&#x2F;span&gt;&lt;span&gt; current-valkey-cli&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;kubectl exec -n &lt;&#x2F;span&gt;&lt;span&gt;$NAMESPACE $NEWINSTANCE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;-0 -c valkey -- valkey-cli -a &lt;&#x2F;span&gt;&lt;span&gt;$PASS&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; -h &lt;&#x2F;span&gt;&lt;span&gt;$SVCPRIMARY&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; --no-auth-warning&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h3 id=&quot;step-3-enable-replication&quot;&gt;Step 3: Enable replication&lt;&#x2F;h3&gt;
&lt;p&gt;Replicate data from current instance and ensure it is replicating:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Configure password to connect to existing Valkey instance&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;new-valkey-cli&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; config set primaryauth&lt;&#x2F;span&gt;&lt;span&gt; $PASS&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# * Sample Output *&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Configure new instance to replicate data from the current instance&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;new-valkey-cli&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; replicaof&lt;&#x2F;span&gt;&lt;span&gt; $SVCPRIMARY&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 6379&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# * Sample Output *&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Check status of replication, it should return a &amp;#39;slave&amp;#39; role and master_link_status as &amp;#39;up&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;new-valkey-cli&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; info replication&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; |&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; grep&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;#39;^\(role\|master_host\|master_link_status\)&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# * Sample Output *&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# role:slave&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# master_host:valkey-bitnami-primary&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# master_link_status:up&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h3 id=&quot;step-4-enter-maintenance-window&quot;&gt;Step 4: Enter maintenance window&lt;&#x2F;h3&gt;
&lt;p&gt;Pause all clients connecting to the Valkey server deployed using Bitnami&#x27;s chart. The failover process will pause client writes, ensure changes are replicated, and promote the new instance to primary:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Retrieve the new instance Pod IP&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;export&lt;&#x2F;span&gt;&lt;span&gt; PODPRIMARY&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span&gt;$(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; get pod&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n&lt;&#x2F;span&gt;&lt;span&gt; $NAMESPACE $NEWINSTANCE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;-0&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -o&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; jsonpath=&amp;#39;{.status.podIP}&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Initiate failover&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;current-valkey-cli&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; failover to&lt;&#x2F;span&gt;&lt;span&gt; $PODPRIMARY&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 6379&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# * Sample Output *&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Check that instance role is &amp;#39;master&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;new-valkey-cli&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; info&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; |&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; grep&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;#39;^role:&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# * Sample Output *&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# role:master&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h3 id=&quot;step-5-switch-clients-to-new-endpoints&quot;&gt;Step 5: Switch clients to new endpoints&lt;&#x2F;h3&gt;
&lt;p&gt;Update all clients to use the new Valkey read-write and read-only endpoints which are exposed as services. To list the service endpoints:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;echo&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;Read-Write (primary): &lt;&#x2F;span&gt;&lt;span&gt;$NEWINSTANCE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span&gt;$NAMESPACE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;.svc.cluster.local&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;echo&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;Read-only (all instances): &lt;&#x2F;span&gt;&lt;span&gt;$NEWINSTANCE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;-read.&lt;&#x2F;span&gt;&lt;span&gt;$NAMESPACE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;.svc.cluster.local&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;what-s-next-for-valkey-helm&quot;&gt;What&#x27;s next for Valkey Helm?&lt;&#x2F;h2&gt;
&lt;p&gt;The chart &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-helm&#x2F;milestones&quot;&gt;milestones&lt;&#x2F;a&gt; outlines the planned improvements for the official Valkey Helm chart, which is being actively developed in the open at the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-helm&quot;&gt;valkey-io&#x2F;valkey-helm&lt;&#x2F;a&gt; repository. High-availability via Sentinel for automated failover is the next upcoming feature &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-helm&#x2F;issues&#x2F;22&quot;&gt;#22&lt;&#x2F;a&gt;, alongside more control over data persistence &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-helm&#x2F;issues&#x2F;88&quot;&gt;#88&lt;&#x2F;a&gt;, followed by Cluster support &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-helm&#x2F;issues&#x2F;18&quot;&gt;#18&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;get-started-today&quot;&gt;Get started today&lt;&#x2F;h2&gt;
&lt;p&gt;If you currently rely on Bitnami, test this chart in a dev cluster and try your normal workflows. The official Valkey Helm chart provides a stable, community-maintained path forward that puts you in control of your deployment lifecycle.&lt;&#x2F;p&gt;
&lt;p&gt;If something is missing or you encounter issues, the Valkey community is here to help. Open an issue at &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-helm&#x2F;issues&quot;&gt;valkey-io&#x2F;valkey-helm&lt;&#x2F;a&gt; or reach out on the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey-oss-developer.slack.com&#x2F;archives&#x2F;C09JZ6N2AAV&quot;&gt;#valkey-helm&lt;&#x2F;a&gt; Slack channel. Your feedback helps ensure the chart grows in the right direction for the entire community.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>How to Properly Secure Your Valkey Deployment</title>
        <published>2025-12-12T00:00:00+00:00</published>
        <updated>2025-12-12T00:00:00+00:00</updated>
        
        <author>
          <name>
            allenhelton
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/properly-secure-your-valkey-deployment/"/>
        <id>https://valkey.io/blog/properly-secure-your-valkey-deployment/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/properly-secure-your-valkey-deployment/">&lt;p&gt;Most of the production security incidents I&#x27;ve helped debug started with misconfigurations rather than zero-days or sophisticated exploits.&lt;&#x2F;p&gt;
&lt;p&gt;Security misconfiguration ranks as A05 in the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;owasp.org&#x2F;Top10&#x2F;A05_2021-Security_Misconfiguration&#x2F;&quot;&gt;OWASP Top 10:2021&lt;&#x2F;a&gt;, with 90% of applications tested showing some form of misconfiguration. That&#x27;s staggering. And when it comes to infrastructure like Valkey, the stakes are even higher - your cache often sits at the heart of your application, touching every request.&lt;&#x2F;p&gt;
&lt;p&gt;Engineers care about security, but it is easy to overlook crucial settings. This is especially true in the cloud, where everything moves fast. You spin up a Valkey instance inside your VPC, it works, and you move on to the next problem. A VPC can lock your network down to outsiders, but I often see multiple teams with access to the same VPC. That leaves systems exposed to insider threats, as well as to well-intentioned people or microservices that just happen to have a bad day. And using default configurations or enabling unnecessary features can make systems &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;socradar.io&#x2F;redis-redishell-vulnerability-cve-2025-49844&#x2F;&quot;&gt;easy targets for attackers&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;Let&#x27;s talk about how to actually secure a Valkey deployment. Not with a single magic bullet, but with layers that work together.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;defense-in-depth-for-valkey&quot;&gt;Defense in Depth for Valkey&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;csrc.nist.gov&#x2F;glossary&#x2F;term&#x2F;defense_in_depth&quot;&gt;Defense in depth&lt;&#x2F;a&gt; is a security principle where multiple layers of security controls protect against attack, eliminating or mitigating single points of compromise. For Valkey, this means building overlapping protections so no single misconfiguration leaves you completely exposed.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;network-boundaries&quot;&gt;Network Boundaries&lt;&#x2F;h3&gt;
&lt;p&gt;This is your first line of defense. Valkey is designed to be accessed by trusted clients inside trusted environments. It should never be &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;ine.com&#x2F;blog&#x2F;cve-20220543-lua-sandbox-escape-in-redis&quot;&gt;directly exposed to the internet&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;This is where putting your Valkey node inside a VPC is necessary - but not sufficient. Security groups help reinforce access limitation to make sure that only services and people who are intended to access the cluster can do so. Your CI runners probably don&#x27;t need direct cache access. Each service should have just the access it needs.&lt;&#x2F;p&gt;
&lt;p&gt;Modern infrastructure also handles TLS seamlessly. While it is unlikely that an attacker is sniffing your packets on your cloud network, it is best practice to have encryption in transit - even within your own network. Current server hardware barely notices the handshake overhead, and Valkey picked up its TLS support back in 2020, so you rarely have to trade performance for security here.&lt;&#x2F;p&gt;
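&lt;p&gt;As a rough sketch, enabling TLS in &lt;code&gt;valkey.conf&lt;&#x2F;code&gt; looks like the following; the certificate paths are placeholders for your own files:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Serve TLS only: move the listener to tls-port and disable the plain port&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;tls-port 6379&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;port 0&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;tls-cert-file &#x2F;etc&#x2F;valkey&#x2F;tls&#x2F;valkey.crt&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;tls-key-file &#x2F;etc&#x2F;valkey&#x2F;tls&#x2F;valkey.key&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;tls-ca-cert-file &#x2F;etc&#x2F;valkey&#x2F;tls&#x2F;ca.crt&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Encrypt primary-replica traffic as well&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;tls-replication yes&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;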
&lt;h3 id=&quot;authentication-and-authorization&quot;&gt;Authentication and Authorization&lt;&#x2F;h3&gt;
&lt;p&gt;Authentication adds a critical layer of resiliency: if your firewall or other protections fail, unauthenticated clients still can&#x27;t access your instance.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey supports &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;security&#x2F;#authentication&quot;&gt;two authentication methods&lt;&#x2F;a&gt;: the newer ACL system (Access Control Lists) and the legacy &lt;code&gt;requirepass&lt;&#x2F;code&gt;. ACLs give you more flexibility by allowing you to create users with fine-grained permissions tailored to what each service actually needs. If you already have centralized identity wiring, Valkey&#x27;s &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;ldap&#x2F;&quot;&gt;LDAP integration&lt;&#x2F;a&gt; plugs into that source of truth so you aren&#x27;t managing a separate credential store just for the cache.&lt;&#x2F;p&gt;
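&lt;p&gt;For illustration, here is how the two methods appear in &lt;code&gt;valkey.conf&lt;&#x2F;code&gt;; the ACL file path is a placeholder:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Legacy: a single password shared by every client&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;requirepass use-a-long-random-password&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# ACLs: fine-grained users managed in a separate file&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;aclfile &#x2F;etc&#x2F;valkey&#x2F;users.acl&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;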
&lt;h3 id=&quot;command-surface-and-runtime-protection&quot;&gt;Command Surface and Runtime Protection&lt;&#x2F;h3&gt;
&lt;p&gt;You can tighten security further by controlling which commands are available. Valkey&#x27;s ACL system lets you create users with fine-grained command permissions tailored to what each service actually needs. For example, a caching service might only need read and write operations:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ACL&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; SETUSER cache_writer on&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;strongpassword ~cached:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;*&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; +get +set +del +expire&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;A monitoring service might only need read access:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ACL&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; SETUSER monitor on&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;anotherpassword ~&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;*&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; +get +mget +info +ping&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;And your admin user can have full access while still being explicitly configured:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ACL&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; SETUSER admin on&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;verystrongpassword ~&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;*&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; +@all&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This principle of least privilege means that even if credentials are compromised, an attacker is limited to only the operations the user can perform. A read-only monitoring account can&#x27;t flush your entire cache or modify configurations.&lt;&#x2F;p&gt;
&lt;p&gt;For most application clients, a solid baseline is &lt;code&gt;ACL SETUSER application on &amp;gt;password ~* +@all -@dangerous -@scripting&lt;&#x2F;code&gt;, which keeps day-to-day commands available while stripping scripting and the other dangerous categories that often lead to trouble. Note the &lt;code&gt;~*&lt;&#x2F;code&gt; key pattern: without at least one key pattern, the user can&#x27;t read or write any keys at all.&lt;&#x2F;p&gt;
&lt;p&gt;Beyond command restrictions, pay attention to process- and container-level privilege boundaries.&lt;&#x2F;p&gt;
&lt;p&gt;The &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;hub.docker.com&#x2F;r&#x2F;valkey&#x2F;valkey&#x2F;&quot;&gt;official Valkey container image&lt;&#x2F;a&gt; runs as root by default, so run it as a dedicated non-root user where your platform allows. Container isolation provides a security boundary, but the runtime still needs hardening: restrict who can run or exec into containers, scope mounted volumes carefully, use read-only mounts for configuration files, and keep the image up to date.&lt;&#x2F;p&gt;
&lt;p&gt;These controls reduce the blast radius of container breakout and privilege escalation, which are the realistic risks in production environments.&lt;&#x2F;p&gt;
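&lt;p&gt;As a rough sketch of what that hardening can look like in practice (the user ID, config path, and flags here are illustrative, not prescriptive):&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;# Run the official image as a non-root user, with no extra
# capabilities and the configuration mounted read-only.
docker run -d --name valkey \
  --user 999:999 \
  --cap-drop ALL \
  -v $(pwd)&#x2F;valkey.conf:&#x2F;etc&#x2F;valkey&#x2F;valkey.conf:ro \
  valkey&#x2F;valkey valkey-server &#x2F;etc&#x2F;valkey&#x2F;valkey.conf
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;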
&lt;h3 id=&quot;operational-controls&quot;&gt;Operational Controls&lt;&#x2F;h3&gt;
&lt;p&gt;Once Valkey is running, your operational posture determines how quickly you can detect and contain issues. Enable logging so you can see what&#x27;s happening. Monitor for unusual patterns like sudden spikes in command execution, connections from unexpected sources, or commands that shouldn&#x27;t be running in your environment.&lt;&#x2F;p&gt;
&lt;p&gt;Set resource limits in your configuration. Poorly written operations or runaway commands can impact your cache&#x27;s availability. &lt;code&gt;maxmemory&lt;&#x2F;code&gt; stops misbehaving workloads from consuming the whole node, and &lt;code&gt;timeout&lt;&#x2F;code&gt; disconnects idle clients; set it conservatively, since forcing disconnects on healthy clients usually just amplifies reconnect storms.&lt;&#x2F;p&gt;
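&lt;p&gt;In &lt;code&gt;valkey.conf&lt;&#x2F;code&gt;, that might look like the following (the values are illustrative; size them for your workload):&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;# Cap memory so a runaway workload can&#x27;t consume the whole node
maxmemory 4gb
# Disconnect clients idle for more than an hour (0 disables idle timeouts)
timeout 3600
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;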
&lt;p&gt;Observability is part of security! Logs and metrics turn silent failures into visible signals, and visibility is what buys you time to respond before small issues become incidents. Track authentication failures like &lt;code&gt;acl_access_denied_auth&lt;&#x2F;code&gt; specifically - they&#x27;re usually the first sign someone is poking around where they shouldn&#x27;t.&lt;&#x2F;p&gt;
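&lt;p&gt;A quick way to spot-check those counters from the command line (assuming your deployment exposes the ACL denial counters in &lt;code&gt;INFO stats&lt;&#x2F;code&gt;):&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;# Authentication and permission denials since startup
valkey-cli INFO stats | grep acl_access_denied
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;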
&lt;h2 id=&quot;the-real-reason-for-layered-security&quot;&gt;The Real Reason for Layered Security&lt;&#x2F;h2&gt;
&lt;p&gt;Anecdotally, I&#x27;ve seen too many teams implement authentication and feel like they&#x27;re done. And authentication is great; it&#x27;s an essential layer. But defense in depth means multiple layers of safeguards, not relying on any single control.&lt;&#x2F;p&gt;
&lt;p&gt;Here&#x27;s a real scenario: You&#x27;ve enabled authentication. But then a developer accidentally pushes credentials into a public GitHub repo. Or a microservice inside your VPC gets compromised and its environment variables are exposed. Or credentials end up in application logs.&lt;&#x2F;p&gt;
&lt;p&gt;With layered security, network segmentation catches the GitHub leak. Monitoring catches the compromised service behaving oddly. ACLs limit what an attacker can do even with credentials.&lt;&#x2F;p&gt;
&lt;p&gt;Each layer catches what the others might miss. That&#x27;s the strength of defense in depth.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;your-security-checklist&quot;&gt;Your Security Checklist&lt;&#x2F;h2&gt;
&lt;p&gt;Here&#x27;s a good starting point:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Review your network configuration: Are firewall rules scoped appropriately?&lt;&#x2F;li&gt;
&lt;li&gt;Enable authentication: Use ACLs if your setup supports it, &lt;code&gt;requirepass&lt;&#x2F;code&gt; at minimum&lt;&#x2F;li&gt;
&lt;li&gt;Audit your command surface: Create and apply role-based ACLs to limit access&lt;&#x2F;li&gt;
&lt;li&gt;Run as non-root: Create a dedicated user for Valkey&lt;&#x2F;li&gt;
&lt;li&gt;Review your threat model: Who has access to your VPC? What are the realistic risks?&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;security&#x2F;&quot;&gt;Valkey security documentation&lt;&#x2F;a&gt; has detailed implementation guidance for all of this.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;security-is-a-practice-not-a-checkbox&quot;&gt;Security Is a Practice, Not a Checkbox&lt;&#x2F;h2&gt;
&lt;p&gt;Infrastructure evolves. New services get added. Teams grow and change. Requirements shift.&lt;&#x2F;p&gt;
&lt;p&gt;Security works best when it&#x27;s part of how we operate, not something we set up once and forget. Review your configurations regularly. Update your approach as your architecture changes. And if you&#x27;re ever unsure, the community is here.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;P.S.&lt;&#x2F;strong&gt; If you need help thinking through your Valkey security setup, reach out to me on &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;linkedin.com&#x2F;in&#x2F;allenheltondev&quot;&gt;LinkedIn&lt;&#x2F;a&gt;. I&#x27;m happy to talk through it.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Resharding, Reimagined: Introducing Atomic Slot Migration</title>
        <published>2025-10-29T00:00:00+00:00</published>
        <updated>2025-10-29T00:00:00+00:00</updated>
        
        <author>
          <name>
            murphyjacob4
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/atomic-slot-migration/"/>
        <id>https://valkey.io/blog/atomic-slot-migration/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/atomic-slot-migration/">&lt;p&gt;Managing the topology of a distributed database is one of the most critical and
challenging tasks for any operator. For a high-performance system like Valkey,
moving data slots between nodes, a process known as resharding, needs to be
fast, reliable, and easy.&lt;&#x2F;p&gt;
&lt;p&gt;Clustered Valkey has historically supported resharding through a process known
as slot migration, where one or more of the 16,384 slots is moved from one shard
to another. This slot migration process has historically led to many operational
headaches. To address this, Valkey 9.0 introduced a powerful new feature that
fundamentally improves this process: &lt;strong&gt;Atomic Slot Migration&lt;&#x2F;strong&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;Atomic Slot Migration brings many benefits that make resharding painless,
including:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;a simpler, one-shot command interface supporting multiple slot ranges in a
single migration,&lt;&#x2F;li&gt;
&lt;li&gt;built-in cancellation support and automated rollback on failure,&lt;&#x2F;li&gt;
&lt;li&gt;improved large key handling,&lt;&#x2F;li&gt;
&lt;li&gt;up to 9x faster slot migrations,&lt;&#x2F;li&gt;
&lt;li&gt;greatly reduced client impact during migrations.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Let&#x27;s dive into how it works and what it means for you.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;background-the-legacy-slot-migration-process&quot;&gt;Background: The Legacy Slot Migration Process&lt;&#x2F;h2&gt;
&lt;p&gt;Prior to Valkey 9.0, a slot migration from a source node to a target node was
performed through the following steps:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Send &lt;code&gt;&amp;lt;target&amp;gt;&lt;&#x2F;code&gt;: &lt;code&gt;CLUSTER SETSLOT &amp;lt;slot&amp;gt; IMPORTING &amp;lt;source&amp;gt;&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Send &lt;code&gt;&amp;lt;source&amp;gt;&lt;&#x2F;code&gt;: &lt;code&gt;CLUSTER SETSLOT &amp;lt;slot&amp;gt; MIGRATING &amp;lt;target&amp;gt;&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Send &lt;code&gt;&amp;lt;source&amp;gt;&lt;&#x2F;code&gt;: &lt;code&gt;CLUSTER GETKEYSINSLOT &amp;lt;slot&amp;gt; &amp;lt;count&amp;gt;&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Send &lt;code&gt;&amp;lt;source&amp;gt;&lt;&#x2F;code&gt;: &lt;code&gt;MIGRATE ...&lt;&#x2F;code&gt; for each key in the result of step 3&lt;&#x2F;li&gt;
&lt;li&gt;Repeat 3 &amp;amp; 4 until no keys are left in the slot on &lt;code&gt;&amp;lt;source&amp;gt;&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Send &lt;code&gt;&amp;lt;target&amp;gt;&lt;&#x2F;code&gt;: &lt;code&gt;CLUSTER SETSLOT &amp;lt;slot&amp;gt; NODE &amp;lt;target&amp;gt;&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Send &lt;code&gt;&amp;lt;source&amp;gt;&lt;&#x2F;code&gt;: &lt;code&gt;CLUSTER SETSLOT &amp;lt;slot&amp;gt; NODE &amp;lt;target&amp;gt;&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;This was subject to the following problems:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Higher latency for client operations&lt;&#x2F;strong&gt;: All client writes and reads to keys
in the migrating hash slot were subject to redirections through a special
&lt;code&gt;-ASK&lt;&#x2F;code&gt; error response, which required re-execution of the command on the
target node. Redirected responses meant unexpected latency spikes during
migrations.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Multi-key operation unavailability&lt;&#x2F;strong&gt;: Commands like &lt;code&gt;MGET&lt;&#x2F;code&gt; and &lt;code&gt;MSET&lt;&#x2F;code&gt; which
supply multiple keys could not always be served by a single node when slots
were migrating. When this happened, clients would receive error responses and
were expected to retry.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Problems with large keys&#x2F;collections&lt;&#x2F;strong&gt;: Since the migration was performed
one key at a time, large keys (e.g. collections with many elements) needed to
be sent as a single command. Serialization of a large key required a large
contiguous memory chunk on the source node and import of that payload required
a similar large memory chunk and a large CPU burst on the target node. In some
cases, the memory consumption was enough to trigger out-of-memory conditions
on either side, or the CPU burst could be large enough to cause a failover on
the target shard due to health probes not being served.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Slot migration latency&lt;&#x2F;strong&gt;: The overall latency of the slot migration was
bounded by how quickly the operator could send the &lt;code&gt;CLUSTER GETKEYSINSLOT&lt;&#x2F;code&gt; and
&lt;code&gt;MIGRATE&lt;&#x2F;code&gt; commands. Each batch of keys required a full round-trip-time between
the operator&#x27;s machine and the cluster, leaving a lot of waiting time that
could be used to do data migration.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Lack of resilience to failure&lt;&#x2F;strong&gt;: If a failure condition was encountered
(for example, the hash slot did not fit on the target node), undoing the slot
migration was not well supported and required replaying the listed steps in
reverse. In some cases, the hash slot may have grown while the migration was
underway, and would then fit on neither the source nor the target node.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;the-core-idea-migration-via-replication&quot;&gt;The Core Idea: Migration via Replication&lt;&#x2F;h2&gt;
&lt;p&gt;At its heart, the new atomic slot migration process closely resembles the
concepts of replication and failover which serve as the backbone for high
availability within Valkey. When atomic slot migration is requested, data is
asynchronously sent from the old owner (the source node) to the new owner (the
target node). Once all data is transferred and the target is completely caught
up, the ownership is atomically transferred to the target node.&lt;&#x2F;p&gt;
&lt;p&gt;This gets logically broken down into three phases:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Snapshot Phase&lt;&#x2F;strong&gt;: The source node first sends a point-in-time snapshot of
all the data in the migrating slots to the target node. The snapshot is done
asynchronously through a child process, allowing the parent process to
continue serving requests. The snapshot is formatted as a stream of commands
which the target node and its replica can consume verbatim.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Streaming Phase&lt;&#x2F;strong&gt;: While the snapshot is ongoing, the source node will keep
track of all new mutations made to the migrating slots. Once the snapshotting
completes, the source node streams all incremental changes for those slots to
the target.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Finalization Phase&lt;&#x2F;strong&gt;: Once the stream of changes has been sent to the
target, the source node briefly pauses mutations. Only once the target has
fully processed these changes does it acquire ownership of the migrating
slots and broadcast ownership to the cluster. When the source node discovers
this, it knows it can delete the contents of those slots and redirect any
paused clients to the new owner &lt;strong&gt;atomically&lt;&#x2F;strong&gt;.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;atomic-slot-migration&#x2F;atomic-slot-migration-phases.png&quot; alt=&quot;Diagram showing how atomic slot migration is broken into the three phases&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-is-this-better&quot;&gt;Why is this better?&lt;&#x2F;h2&gt;
&lt;p&gt;By replicating the slot contents and atomically transferring ownership, Atomic
Slot Migration provides many desirable properties over the previous mechanism:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Clients are unaware&lt;&#x2F;strong&gt;: Since the entire hash slot is replicated before any
cleanup is done on the source node, clients are completely unaware of the slot
migration, and no longer need to follow &lt;code&gt;ASK&lt;&#x2F;code&gt; redirections or retry
multi-key operations that fail mid-migration.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Keys no longer need to be atomically moved&lt;&#x2F;strong&gt;: Collections are moved as
chunks of elements that are replayed as commands, preventing the reliability
problems previously encountered when dumping and restoring a large collection.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;A migration can easily be rolled back on cancellation or failure&lt;&#x2F;strong&gt;: Since
Valkey places the migrating hash slots in a staging area, they can be wiped
independently of the rest of the database. Since this state is not broadcast
to the cluster, ending the migration is a straightforward process of cleaning
up the staging area and marking the migration as cancelled. Many failures,
like out-of-memory, failover, or network partition, can be handled completely
by the engine.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Greatly lowered slot migration latency&lt;&#x2F;strong&gt;: Valkey is highly optimized for
replication. By batching the slot migrations and using this replication-like
process, the end-to-end migration latency drops by as much as 9x compared to
legacy slot migration through &lt;code&gt;valkey-cli&lt;&#x2F;code&gt;.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;how-to-use-atomic-slot-migration&quot;&gt;How to use Atomic Slot Migration&lt;&#x2F;h2&gt;
&lt;p&gt;A new family of &lt;code&gt;CLUSTER&lt;&#x2F;code&gt; commands gives you full control over the migration
lifecycle.&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;commands&#x2F;cluster-migrateslots&#x2F;&quot;&gt;&lt;code&gt;CLUSTER MIGRATESLOTS SLOTSRANGE start-slot end-slot NODE node-id&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;
&lt;ul&gt;
&lt;li&gt;This command kicks off the migration. You can specify one or more slot
ranges and the target node ID to begin pushing data. Multiple migrations can
be queued in one command by repeating the SLOTSRANGE and NODE arguments.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;commands&#x2F;cluster-cancelslotmigrations&#x2F;&quot;&gt;&lt;code&gt;CLUSTER CANCELSLOTMIGRATIONS&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;
&lt;ul&gt;
&lt;li&gt;Use this command to safely cancel all ongoing slot migrations originating
from the node.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;commands&#x2F;cluster-getslotmigrations&#x2F;&quot;&gt;&lt;code&gt;CLUSTER GETSLOTMIGRATIONS&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;
&lt;ul&gt;
&lt;li&gt;This gives you an observable log of recent and active migrations, allowing
you to monitor the status, duration, and outcome of each job. Slot migration
jobs are stored in memory, allowing for simple programmatic access and error
handling.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
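&lt;p&gt;Putting these together, a typical migration session might look like this (the slot range and node ID below are placeholders):&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;# Kick off a migration of slots 0-999 to the target shard
valkey-cli CLUSTER MIGRATESLOTS SLOTSRANGE 0 999 NODE &amp;lt;target-node-id&amp;gt;
# Watch the status, duration, and outcome of each job
valkey-cli CLUSTER GETSLOTMIGRATIONS
# If something looks wrong, roll back all ongoing migrations
valkey-cli CLUSTER CANCELSLOTMIGRATIONS
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;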
&lt;h2 id=&quot;legacy-vs-atomic-head-to-head-results&quot;&gt;Legacy vs. Atomic: Head-to-Head Results&lt;&#x2F;h2&gt;
&lt;p&gt;Head-to-head experiments show the improvement provided by atomic slot migration.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;test-setup&quot;&gt;Test Setup&lt;&#x2F;h3&gt;
&lt;p&gt;To make things reproducible, the test setup is outlined below:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Valkey cluster nodes are &lt;code&gt;c4-standard-8&lt;&#x2F;code&gt; GCE VMs spread across GCP’s
us-central1 region running Valkey 9.0.0&lt;&#x2F;li&gt;
&lt;li&gt;Client machine is a separate &lt;code&gt;c4-standard-8&lt;&#x2F;code&gt; GCE VM in us-central1-f&lt;&#x2F;li&gt;
&lt;li&gt;Rebalancing is accomplished with the &lt;code&gt;valkey-cli --cluster rebalance&lt;&#x2F;code&gt; command,
with all parameters defaulted. The only exception is during scale in, where
&lt;code&gt;--cluster-weight&lt;&#x2F;code&gt; is used to set the weights to only allocate to 3 shards.&lt;&#x2F;li&gt;
&lt;li&gt;The cluster is filled with 40 GB of data consisting of 16 KB string valued
keys&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;slot-migration-latency-who-s-faster&quot;&gt;Slot Migration Latency: Who&#x27;s Faster?&lt;&#x2F;h3&gt;
&lt;p&gt;The experiment had two tests: one with no load and one with heavy read&#x2F;write
load. The heavy load is simulated using &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;RedisLabs&#x2F;memtier_benchmark&quot;&gt;memtier-benchmark&lt;&#x2F;a&gt; with a 1:10 set&#x2F;get
ratio on the client machine specified above.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;atomic-slot-migration&#x2F;legacy-vs-atomic-latency.png&quot; alt=&quot;Chart showing time to scale in and out when under load, from the table below&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;table&gt;
  &lt;tr&gt;
    &lt;th&gt;Test Case&lt;&#x2F;th&gt;
    &lt;th&gt;Legacy Slot Migration&lt;&#x2F;th&gt;
    &lt;th&gt;Atomic Slot Migration&lt;&#x2F;th&gt;
    &lt;th&gt;Speedup&lt;&#x2F;th&gt;
  &lt;&#x2F;tr&gt;
  &lt;tr&gt;
    &lt;td&gt;No Load: 3 to 4 shards&lt;&#x2F;td&gt;
    &lt;td&gt;1m42.089s&lt;&#x2F;td&gt;
    &lt;td&gt;0m10.723s&lt;&#x2F;td&gt;
    &lt;td style=&quot;background-color:#228b22;&quot;&gt;9.52x&lt;&#x2F;td&gt;
  &lt;&#x2F;tr&gt;
  &lt;tr&gt;
    &lt;td&gt;No Load: 4 to 3 shards&lt;&#x2F;td&gt;
    &lt;td&gt;1m20.270s&lt;&#x2F;td&gt;
    &lt;td&gt;0m9.507s&lt;&#x2F;td&gt;
    &lt;td style=&quot;background-color:#36a336;&quot;&gt;8.44x&lt;&#x2F;td&gt;
  &lt;&#x2F;tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Heavy Load: 3 to 4 shards&lt;&#x2F;td&gt;
    &lt;td&gt;2m27.276s&lt;&#x2F;td&gt;
    &lt;td&gt;0m30.995s&lt;&#x2F;td&gt;
    &lt;td style=&quot;background-color:#7fd37f;&quot;&gt;4.75x&lt;&#x2F;td&gt;
  &lt;&#x2F;tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Heavy Load: 4 to 3 shards&lt;&#x2F;td&gt;
    &lt;td&gt;2m5.328s&lt;&#x2F;td&gt;
    &lt;td&gt;0m27.105s&lt;&#x2F;td&gt;
    &lt;td style=&quot;background-color:#86d686;&quot;&gt;4.62x&lt;&#x2F;td&gt;
  &lt;&#x2F;tr&gt;
&lt;&#x2F;table&gt;
&lt;p&gt;The main culprit here is unnecessary network round trips (RTTs) in legacy slot
migration. Each slot requires:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;2 RTTs to call &lt;code&gt;SETSLOT&lt;&#x2F;code&gt; and begin the migration&lt;&#x2F;li&gt;
&lt;li&gt;Each batch of keys in a slot requires:
&lt;ul&gt;
&lt;li&gt;1 RTT for &lt;code&gt;CLUSTER GETKEYSINSLOT&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;1 RTT for &lt;code&gt;MIGRATE&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;1 RTT for the actual migration of the key batch from source to target node&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;2 RTTs to call &lt;code&gt;SETSLOT&lt;&#x2F;code&gt; and end the migration&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Those round trip times add up. For this test case where we have:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;4096 slots to move&lt;&#x2F;li&gt;
&lt;li&gt;160 keys per slot&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;valkey-cli&lt;&#x2F;code&gt;&#x27;s default batch size of 10&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;We need:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;atomic-slot-migration&#x2F;legacy-rtt-formula.png&quot; alt=&quot;Picture of the following formula rendered in LaTeX: RTTs = SlotsCount * (3 * KeysPerSlot&#x2F;KeysPerBatch + 4) = 4096 * (3 * 160&#x2F;10 + 4) = 212992&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Even with a 300 microsecond round trip time, legacy slot migration spends over a
minute just waiting for those 212,992 round trips.&lt;&#x2F;p&gt;
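&lt;p&gt;The arithmetic above is easy to check for yourself:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;python&quot;&gt;# Round trips needed by legacy slot migration for this test setup
slots = 4096            # slots to move
keys_per_slot = 160
keys_per_batch = 10     # valkey-cli&#x27;s default batch size

# Per slot: 2 RTTs to start, 3 per key batch, 2 to finish
batches = keys_per_slot &#x2F;&#x2F; keys_per_batch
rtts = slots * (3 * batches + 4)
print(rtts)             # 212992

rtt = 300e-6            # 300 microsecond round trip time
print(rtts * rtt)       # ~63.9 seconds spent just waiting
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;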
&lt;p&gt;By removing this overhead, atomic slot migration is now only bounded by the
speed that one node can push data to another, achieving much faster end-to-end
latency.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;client-impact-how-would-applications-respond&quot;&gt;Client Impact: How Would Applications Respond?&lt;&#x2F;h3&gt;
&lt;p&gt;The experiment measured the throughput of a simulated
&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-py&quot;&gt;valkey-py&lt;&#x2F;a&gt; workload with a 1:10 set&#x2F;get
ratio during each scaling event. Throughput averages over three trials are
shown below.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;atomic-slot-migration&#x2F;legacy-vs-atomic-client-impact.png&quot; alt=&quot;Chart showing queries per second over time. Atomic slot migration quickly dips and recovers, while legacy slot migration incurs a sustained dip&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Despite atomic slot migration causing a more acute throughput hit, you can see
the recovery of the client application is much faster due to far fewer topology
changes and an overall lower end-to-end latency. Each topology change needs to
be handled by the Valkey client, so the quicker the topology changes are made,
the sooner the impact ends. By collapsing the topology changes and performing
atomic handover, atomic slot migration leads to less client impact overall than
legacy slot migration.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;under-the-hood-state-machines-and-control-commands&quot;&gt;Under the Hood: State Machines and Control Commands&lt;&#x2F;h2&gt;
&lt;p&gt;To coordinate the complex dance between the two nodes, a new internal command,
&lt;a href=&quot;&#x2F;commands&#x2F;cluster-syncslots&#x2F;&quot;&gt;&lt;code&gt;CLUSTER SYNCSLOTS&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;, is introduced. This command
orchestrates the state machine with the following sub-commands:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;CLUSTER SYNCSLOTS ESTABLISH SOURCE &amp;lt;source-node-id&amp;gt; NAME &amp;lt;unique-migration-name&amp;gt; SLOTSRANGE &amp;lt;start&amp;gt; &amp;lt;end&amp;gt; ...&lt;&#x2F;code&gt;
&lt;ul&gt;
&lt;li&gt;Informs the target node of an in progress slot migration and begins tracking
the current connection as a slot migration link.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;CLUSTER SYNCSLOTS SNAPSHOT-EOF&lt;&#x2F;code&gt;
&lt;ul&gt;
&lt;li&gt;Used as a marker to inform the target the full snapshot of the hash slot
contents have been sent.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;CLUSTER SYNCSLOTS REQUEST-PAUSE&lt;&#x2F;code&gt;
&lt;ul&gt;
&lt;li&gt;Informs the source node that the target has received all of the snapshot and
is ready to proceed.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;CLUSTER SYNCSLOTS PAUSED&lt;&#x2F;code&gt;
&lt;ul&gt;
&lt;li&gt;Used as a marker to inform the target no more mutations should occur as the
source has paused mutations.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;CLUSTER SYNCSLOTS REQUEST-FAILOVER&lt;&#x2F;code&gt;
&lt;ul&gt;
&lt;li&gt;Informs the source node that the target is fully caught up and ready to take
over the hash slots.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;CLUSTER SYNCSLOTS FAILOVER-GRANTED&lt;&#x2F;code&gt;
&lt;ul&gt;
&lt;li&gt;Informs the target node that the source node is still paused and takeover
can be safely performed.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;CLUSTER SYNCSLOTS FINISH&lt;&#x2F;code&gt;
&lt;ul&gt;
&lt;li&gt;Informs the replica of the target node that a migration has completed (or
failed).&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;CLUSTER SYNCSLOTS CAPA&lt;&#x2F;code&gt;
&lt;ul&gt;
&lt;li&gt;Reserved command allowing capability negotiation.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The diagram below shows how &lt;code&gt;CLUSTER SYNCSLOTS&lt;&#x2F;code&gt; is used internally to drive a
slot migration from start to finish:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;atomic-slot-migration&#x2F;atomic-slot-migration-syncslots.png&quot; alt=&quot;Diagram showing how Valkey uses CLUSTER SYNCSLOTS to drive atomic slot migration&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;get-started-today&quot;&gt;Get Started Today!&lt;&#x2F;h2&gt;
&lt;p&gt;This new atomic slot migration feature is a massive step forward for Valkey
cluster management. It provides a faster, more reliable, and overall easier
mechanism for resharding your data.&lt;&#x2F;p&gt;
&lt;p&gt;So, go &lt;a href=&quot;&#x2F;download&#x2F;&quot;&gt;download Valkey 9.0&lt;&#x2F;a&gt; and try Atomic Slot Migration for
yourself! A huge thank you to everyone in the community who contributed to the
design and implementation.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Valkey 9.0: innovation, features, and improvements.</title>
        <published>2025-10-21T00:00:01+00:00</published>
        <updated>2025-10-21T00:00:01+00:00</updated>
        
        <author>
          <name>
            kyledvs
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/introducing-valkey-9/"/>
        <id>https://valkey.io/blog/introducing-valkey-9/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/introducing-valkey-9/">&lt;p&gt;For Valkey&#x27;s second major release, Valkey 9.0 brings innovation, long-requested features, and improvements to classic features updated for today’s workloads.
Read on to find out all the team packed into this release.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;atomic-slot-migrations&quot;&gt;Atomic Slot Migrations&lt;&#x2F;h2&gt;
&lt;p&gt;Atomic slot migration fundamentally changes how Valkey migrates data from node to node inside the cluster.
Prior to Valkey 9.0, data migrated in the cluster key-by-key.
This approach works for most situations, but corner cases can lead to degraded performance, operational headaches, and, at worst, blocked node migrations and lost data.&lt;&#x2F;p&gt;
&lt;p&gt;Key-by-key migration uses a move-then-delete sequence.
Performance issues arise when a client tries to access a key in a partially migrated state: if the migration hasn’t completed, the client may not know whether the key resides on the original node or the new node, leading to extra network hops and additional processing.
Worse, in a multi-key operation, if one key resides in the original node and another in the new node, Valkey cannot properly execute the command, so it requires the client to retry the request until the data resides on a single node, leading to a mini-outage where Valkey still has the data but it is inaccessible until the migration is complete for the affected data.
Finally, in a situation where Valkey is attempting to migrate a very large key (such as collections in data types like sorted sets, sets, or lists) from one node to another, the entire key may be too large to be accepted by the target node’s input buffer leading to a blocked migration that needs manual intervention.
To unblock the migration you need to increase the input buffer limit or end up losing data either through forcing the slot assignment or deleting the key.&lt;&#x2F;p&gt;
&lt;p&gt;In Valkey, keys are bundled into one of 16,384 ‘slots’ and each node takes one or more slots.
In Valkey 9.0, instead of moving data key-by-key, Valkey migrates entire slots at a time, atomically moving each slot from one node to another using the AOF format.
AOF can send individual items in a collection instead of the whole key.
Consequently, this prevents large collections from causing latency spikes when they are being processed during migration.
The new atomic slot migration doesn’t migrate keys directly; instead, the move-then-delete sequence applies to the entire slot: the original node retains all the keys and data until the entire slot migration is complete, avoiding the pre-Valkey 9.0 issues with redirects and retries.
For more information, check out the video from our recent &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=GoKfeJGXEH0&amp;amp;list=PLAV1X7hxH2HtZWc2YNQRMQe9FT9XTWemE&quot;&gt;Keyspace conference talk recording about Valkey 9.0&lt;&#x2F;a&gt; and look for an upcoming deep dive on atomic slot migrations.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;hash-field-expiration&quot;&gt;Hash Field Expiration&lt;&#x2F;h2&gt;
&lt;p&gt;The hash data type allows you to neatly tie together data with multiple fields under one key.
But because all this data lives attached to a single key, the expiry was, until Valkey 9.0, all-or-nothing: you couldn&#x27;t expire fields individually.
For users who needed &lt;em&gt;some&lt;&#x2F;em&gt; of the data to expire, this limitation forced awkward hacks with multiple keys, compounding the complexity and increasing the memory footprint of the data.
Valkey 9.0 now addresses this gap by adding the following commands:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;commands&#x2F;hexpire&#x2F;&quot;&gt;HEXPIRE&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;commands&#x2F;hexpireat&#x2F;&quot;&gt;HEXPIREAT&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;commands&#x2F;hexpiretime&#x2F;&quot;&gt;HEXPIRETIME&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;commands&#x2F;hgetex&#x2F;&quot;&gt;HGETEX&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;commands&#x2F;hpersist&#x2F;&quot;&gt;HPERSIST&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;commands&#x2F;hpexpire&#x2F;&quot;&gt;HPEXPIRE&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;commands&#x2F;hpexpireat&#x2F;&quot;&gt;HPEXPIREAT&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;commands&#x2F;hpexpiretime&#x2F;&quot;&gt;HPEXPIRETIME&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;commands&#x2F;hpttl&#x2F;&quot;&gt;HPTTL&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;commands&#x2F;hsetex&#x2F;&quot;&gt;HSETEX&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;commands&#x2F;httl&#x2F;&quot;&gt;HTTL&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;You can read more about how hash field expiration works in &lt;a href=&quot;&#x2F;blog&#x2F;hash-fields-expiration&#x2F;&quot;&gt;Ran Shidlansik&#x27;s deep dive&lt;&#x2F;a&gt; on the subject.&lt;&#x2F;p&gt;
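&lt;p&gt;As a quick sketch of the new commands in action (the key and field names are illustrative):&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;# A session hash where only the token should expire
valkey-cli HSET session:42 user alice token abc123
# Expire just the token field in 60 seconds
valkey-cli HEXPIRE session:42 60 FIELDS 1 token
# Check remaining TTLs: -1 means the field has no expiry
valkey-cli HTTL session:42 FIELDS 2 user token
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;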
&lt;h2 id=&quot;numbered-databases-in-cluster-mode&quot;&gt;Numbered Databases in cluster mode&lt;&#x2F;h2&gt;
&lt;p&gt;Numbered databases allow you to separate data and avoid key name clashes: each database contains keys unique to that database.
This is an old feature dating back to the very first version of the preceding project.
However, before Valkey 9.0, numbered databases were severely limited: cluster mode was restricted to a single database (db 0).
Without cluster support, numbered databases were discouraged, since using them meant you could never scale beyond a single node.&lt;&#x2F;p&gt;
&lt;p&gt;Based on user feedback and reconsideration by the team, Valkey 9.0 breaks from the preceding project and adds full support for numbered databases in cluster mode.
Numbered databases have a whole host of use cases and some very handy clustering characteristics; find out more in the &lt;a href=&quot;&#x2F;blog&#x2F;numbered-databases&#x2F;&quot;&gt;feature write up&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
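The clash-avoidance point can be sketched in a few lines of Python. This is a hedged toy model in which each numbered database is simply an independent keyspace on the node, so the same key name can hold different values in different databases:

```python
# Toy model of numbered databases: each database is an independent
# keyspace, so identical key names in different databases never clash.
# (Illustration only; a real client switches databases with SELECT.)
class Node:
    def __init__(self, databases=16):
        self.dbs = [dict() for _ in range(databases)]

    def set(self, db, key, value):
        self.dbs[db][key] = value

    def get(self, db, key):
        return self.dbs[db].get(key)

node = Node()
node.set(0, "user:1", "staging payload")
node.set(1, "user:1", "test payload")
assert node.get(0, "user:1") != node.get(1, "user:1")
```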
&lt;h2 id=&quot;much-much-more&quot;&gt;Much, much more&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey 9.0 brings numerous small changes and improvements:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;&#x2F;blog&#x2F;1-billion-rps&#x2F;&quot;&gt;1 Billion Requests&#x2F;Second with Large Clusters&lt;&#x2F;a&gt;&lt;&#x2F;strong&gt;: Improvements in the resilience of large clusters, enabling scaling to 2,000 nodes and achieving over 1 billion requests per second,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;2092&quot;&gt;Pipeline Memory Prefetch&lt;&#x2F;a&gt;&lt;&#x2F;strong&gt;: Memory prefetching when pipelining, yielding up to 40% higher throughput,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;2546&quot;&gt;Un-deprecation&lt;&#x2F;a&gt;&lt;&#x2F;strong&gt;: In a similar vein to numbered databases, the Valkey project re-evaluated 25 previously deprecated commands and, based on its stance on API backward compatibility, restored the usage recommendation for these commands,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;2078&quot;&gt;Zero-Copy Responses&lt;&#x2F;a&gt;&lt;&#x2F;strong&gt;: Large responses avoid internal memory copying, yielding up to 20% higher throughput,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1811&quot;&gt;Multipath TCP&lt;&#x2F;a&gt;&lt;&#x2F;strong&gt;: Adds Multipath TCP support, which can reduce latency by 25%,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1741&quot;&gt;SIMD for BITCOUNT and HyperLogLog&lt;&#x2F;a&gt;&lt;&#x2F;strong&gt;: optimizations that yield up to 200% higher throughput,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1809&quot;&gt;By Polygon for Geospatial Indices&lt;&#x2F;a&gt;&lt;&#x2F;strong&gt;: query locations within a specified polygon,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1975&quot;&gt;Conditional Delete&lt;&#x2F;a&gt;&lt;&#x2F;strong&gt;: Adds the &lt;a href=&quot;&#x2F;commands&#x2F;delifeq&#x2F;&quot;&gt;DELIFEQ&lt;&#x2F;a&gt; command that deletes the key if the value is equal to a specified value,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1466&quot;&gt;CLIENT LIST Filtering&lt;&#x2F;a&gt;&lt;&#x2F;strong&gt;: options to filter &lt;a href=&quot;&#x2F;commands&#x2F;client-list&#x2F;&quot;&gt;CLIENT LIST&lt;&#x2F;a&gt; using flags, name, idle, library name&#x2F;version, database, IP, and capabilities.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
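Of the items above, conditional delete deserves a closer look, since compare-and-delete is what makes patterns like safe lock release work. A hedged Python sketch of the semantics (the real DELIFEQ performs the check and the delete atomically inside the server, which a client-side check cannot guarantee):

```python
# Toy compare-and-delete, the semantics DELIFEQ provides atomically
# server-side: delete the key only when its current value matches.
def del_if_eq(store, key, expected):
    if store.get(key) == expected:
        del store[key]
        return 1
    return 0

# Releasing a lock only if we still hold it:
store = {"lock:job42": "owner-a"}
assert del_if_eq(store, "lock:job42", "owner-b") == 0  # wrong holder, kept
assert del_if_eq(store, "lock:job42", "owner-a") == 1  # matches, deleted
assert "lock:job42" not in store
```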
&lt;p&gt;And, perhaps most importantly, a new, whimsical &lt;a href=&quot;&#x2F;commands&#x2F;lolwut&#x2F;&quot;&gt;LOLWUT&lt;&#x2F;a&gt; generative art piece especially for version 9.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;get-it-today&quot;&gt;Get it today&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey 9.0 was built by the collaborative efforts of dozens of contributors.
Make sure to grab Valkey 9.0 today as a &lt;a href=&quot;&#x2F;download&#x2F;releases&#x2F;v9-0-1&quot;&gt;binary, container&lt;&#x2F;a&gt;, or &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;releases&#x2F;tag&#x2F;9.0.1&quot;&gt;build it from source&lt;&#x2F;a&gt;, and watch for it in your favourite Linux distribution.
Feel free to post a question on our GitHub discussions or Slack, and if you find a bug, let the team know by filing an issue.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Scaling a Valkey Cluster to 1 Billion Requests per Second</title>
        <published>2025-10-20T00:00:00+00:00</published>
        <updated>2025-10-20T00:00:00+00:00</updated>
        
        <author>
          <name>
            hpatro
          </name>
        </author>
        
        <author>
          <name>
            maheshcherukumilli
          </name>
        </author>
        
        <author>
          <name>
            sarthakaggarwal97
          </name>
        </author>
        
        <author>
          <name>
            sungming2
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/1-billion-rps/"/>
        <id>https://valkey.io/blog/1-billion-rps/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/1-billion-rps/">&lt;p&gt;The upcoming Valkey 9.0 release brings major improvements in the resilience of large clusters, enabling scaling to 2,000 nodes and achieving over 1 billion requests per second, all while ensuring bounded recovery time.
In this blog post, we provide an overview of how the Valkey clustering system works, along with the architectural improvements and rigorous testing that made this level of scale possible.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey’s standalone configuration is a single server setup with optional replicas for availability, but all writes flow to one primary. It is one process, one dataset, zero coordination: blazing fast and simple to operate when a single machine’s CPU, memory, and NIC can carry the load. However, Valkey at scale moves past single-node limits.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey’s cluster mode shards the keyspace into &lt;strong&gt;16,384 hash slots&lt;&#x2F;strong&gt; and spreads them across multiple primaries with replicas for redundancy. Clients are cluster aware: they route commands directly to the node owning the slot and follow redirections during resharding or failover. The result is horizontal scalability, balanced throughput, and built in fault tolerance without a central coordinator. Behind the scenes, the cluster bus keeps this distributed system coherent, coordinating membership, gossip, and failover.
&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;1-billion-rps&#x2F;1B-rps-overview.png&quot; alt=&quot;Diagram showcasing different operational mode&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
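The slot mapping itself is simple enough to sketch. The following Python reimplements the well-known CRC16 (XMODEM) checksum and the hash-tag rule that cluster clients use to map a key to one of the 16,384 slots; it is an illustration (deliberately written without bitwise shift operators), not code taken from any client's source:

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0.
    crc = 0
    for byte in data:
        crc = crc ^ (byte * 256)          # equivalent to XOR-ing byte into the high 8 bits
        for _ in range(8):
            if crc >= 0x8000:             # top bit set
                crc = ((crc * 2) ^ 0x1021) % 0x10000
            else:
                crc = (crc * 2) % 0x10000
    return crc

def key_slot(key: str) -> int:
    # Hash tags: if the key contains "{...}" with a non-empty body, only
    # the part between the first "{" and the next "}" is hashed, so
    # related keys can be forced onto the same slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384
```

For example, `key_slot("{user1000}.following")` and `key_slot("{user1000}.followers")` land on the same slot because only `user1000` is hashed.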
&lt;h2 id=&quot;cluster-bus-overview&quot;&gt;Cluster Bus Overview&lt;&#x2F;h2&gt;
&lt;p&gt;Under the hood, nodes coordinate over the cluster bus, a persistent TCP mesh with a lightweight, gossip-based protocol. It handles &lt;code&gt;MEET&lt;&#x2F;code&gt; (discovery), &lt;code&gt;PING&lt;&#x2F;code&gt;&#x2F;&lt;code&gt;PONG&lt;&#x2F;code&gt; heartbeats with piggybacked cluster topology, quorum-based &lt;code&gt;FAIL&lt;&#x2F;code&gt; decisions, replica promotion elections, and epoch-based conflict resolution so the cluster converges cleanly after partitions. Because membership, health, and slot ownership all flow through gossip, Valkey can scale to large node counts while remaining resilient to node and network failures.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;membership-discovery-and-information-dissemination-gossip&quot;&gt;Membership Discovery and Information Dissemination (Gossip)&lt;&#x2F;h3&gt;
&lt;p&gt;Nodes join a Valkey cluster by sending a &lt;code&gt;MEET&lt;&#x2F;code&gt; message to any existing member (a seed). From there, the cluster bus, a mesh of persistent TCP links, spreads membership and topology using lightweight gossip about peers piggybacked on periodic PING&#x2F;PONG heartbeats. This peer-to-peer exchange quickly gives every node a converged view of cluster membership and of which slots each primary owns, without any central coordinator.
&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;1-billion-rps&#x2F;1B-rps-node-discovery.png&quot; alt=&quot;Diagram showcasing membership discovery and mesh topology formation&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
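A toy simulation gives a feel for how quickly such peer-to-peer exchange converges. This hedged Python sketch models each node's view as a plain set and merges views pairwise each round; it illustrates eventual convergence only and ignores the real protocol's timing, partial gossip payloads, and message format:

```python
import random

def gossip_rounds(node_count, seed_node=0):
    # Toy gossip: every node starts knowing only itself and a seed node;
    # each round it merges membership views with one random known peer.
    # Returns how many rounds it took for all views to become complete.
    random.seed(7)  # deterministic example run
    views = {n: {n, seed_node} for n in range(node_count)}
    rounds = 0
    while any(len(view) != node_count for view in views.values()):
        rounds += 1
        for n in range(node_count):
            candidates = sorted(views[n] - {n})
            if not candidates:
                continue
            peer = random.choice(candidates)
            merged = views[n] | views[peer]
            views[n] = merged
            views[peer] = merged
    return rounds

print("rounds to converge:", gossip_rounds(64))
```

Even in this crude model, a 64-node mesh converges in a handful of rounds, which is the property the cluster bus relies on.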
&lt;h3 id=&quot;cluster-bus-failure-detection-failover-and-epoch-update&quot;&gt;Cluster Bus: Failure Detection, Failover, and Epoch Update&lt;&#x2F;h3&gt;
&lt;p&gt;When a node misses heartbeats beyond the configurable node-timeout, peers mark it potentially failed (&lt;code&gt;PFAIL&lt;&#x2F;code&gt; - suspect). If a quorum of primaries observes the timeout, they declare &lt;code&gt;FAIL&lt;&#x2F;code&gt; and initiate failover: the replicas of the failed primary become candidates to take over ownership of the shard and send vote requests to all the primaries. Primaries respond, and the winner is promoted to serve the affected slots. Clients that hit the old owner receive &lt;code&gt;MOVED&#x2F;ASK&lt;&#x2F;code&gt; redirections and transparently retry against the new primary.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;1-billion-rps&#x2F;1B-rps-failover.png&quot; alt=&quot;Diagram showcasing failure detection and failover&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
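The quorum step can be sketched in a few lines. In this hedged Python model, PFAIL reports are simply a set of reporter names, and the escalation rule is a strict majority of primaries; it illustrates the decision rule only, not the gossip mechanics that carry the reports:

```python
# Toy quorum failure detection: each primary independently suspects a
# peer (PFAIL) after missed heartbeats; only when a majority of the
# primaries agree does the suspicion escalate to a cluster-wide FAIL.
def escalates_to_fail(pfail_reports, primary_count):
    quorum = primary_count // 2 + 1   # strict majority of primaries
    return len(pfail_reports) >= quorum

primaries = 9
reports = {"node-1", "node-2", "node-3", "node-4"}  # 4 of 9 suspect
assert not escalates_to_fail(reports, primaries)    # no quorum yet
reports.add("node-5")                               # 5 of 9: majority
assert escalates_to_fail(reports, primaries)
```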
&lt;h2 id=&quot;major-improvements-to-scale-cluster-bus-in-valkey&quot;&gt;Major improvements to scale Cluster Bus in Valkey&lt;&#x2F;h2&gt;
&lt;p&gt;Across multiple major and minor versions, the community has substantially improved cluster stability, making numerous changes that let the system heal steadily from various failure scenarios without manual intervention. Here are a few of the interesting improvements:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Multiple primary failures&lt;&#x2F;strong&gt; Failover across the cluster is serialized, so only one shard can undergo failover at a given point in time. When there were multiple primary outages across the cluster, vote requests from the replica candidates of the impacted shards used to collide. Because of the collision, the votes would get split and the cluster would not reach consensus on promoting a replica in a given cycle. With a large number of primaries failing, the cluster would not heal automatically, and an administrator needed to intervene manually.
This problem was tackled by &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;enjoy-binbin&quot;&gt;Binbin Zhu&lt;&#x2F;a&gt; in Valkey 8.1 by introducing a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1018&quot;&gt;ranking mechanism&lt;&#x2F;a&gt; that assigns each shard a rank based on the lexicographic order of its shard ID. With the ranking algorithm, the highest-ranked shard starts its election first, while lower-ranked shards add additional delay before starting theirs. This yields consistent recovery times during multiple primary outages.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Reconnection attempt storm to unavailable nodes&lt;&#x2F;strong&gt; Profiling revealed that with multiple node failures, a large chunk of compute goes into attempting to reconnect to the already-failed nodes. Each node attempts to reconnect to all the failed nodes every 100ms, which led to significant compute usage when there were hundreds of failed nodes in the cluster. To prevent the cluster from getting overwhelmed, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;sarthakaggarwal97&quot;&gt;Sarthak Aggarwal&lt;&#x2F;a&gt; implemented a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;2154&quot;&gt;throttling mechanism&lt;&#x2F;a&gt; that allows enough reconnect attempts within the configured cluster node timeout while ensuring the server node is not overwhelmed.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Optimized failure report tracking&lt;&#x2F;strong&gt; Profiling also revealed that when hundreds of nodes fail simultaneously, the surviving nodes spend a significant amount of time processing and cleaning up redundant failure reports. For example, after 499 out of 2000 nodes were killed, the remaining 1501 nodes continued to gossip about each failed node and exchange reports, even after those nodes had already been marked as failed. &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;sungming2&quot;&gt;Seungmin Lee&lt;&#x2F;a&gt; optimized the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;2277&quot;&gt;addition&#x2F;removal of failure reports&lt;&#x2F;a&gt; by using a radix tree that stores failure report times rounded to the second, grouping multiple reports together. This also makes cleaning up expired failure reports efficient. Further optimizations were made to avoid duplicate failure report processing and save CPU cycles.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Pub&#x2F;Sub System&lt;&#x2F;strong&gt; The cluster bus is also used for &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;pubsub&#x2F;&quot;&gt;pub&#x2F;sub operations&lt;&#x2F;a&gt;, providing a simplified interface: a client can connect to any node to publish data, and subscribers connected to any node receive it, with the data transported via the cluster bus. This is quite an interesting usage of the cluster bus. However, the metadata overhead of each packet was roughly 2 KB, which is quite large for small pub&#x2F;sub messages. The packet header was large because it carried slot ownership information (16384 bits = 2048 bytes), which is irrelevant for a pub&#x2F;sub message. Hence, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;roshkhatri&quot;&gt;Roshan Khatri&lt;&#x2F;a&gt; introduced a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;654&quot;&gt;lightweight message header&lt;&#x2F;a&gt; (~30 bytes) for efficient message transfer across nodes. This allowed pub&#x2F;sub to scale better with large clusters.
Valkey also has a sharded pub&#x2F;sub system, which keeps the traffic of shard channels within a given shard, a major improvement over the global pub&#x2F;sub system in cluster mode. It was also moved onto the lightweight message header.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
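The ranking idea from the first bullet can be sketched briefly: sort the affected shards by shard ID and stagger each replica's election start time by its rank, so vote requests stop colliding. The delay step below is an illustrative constant, not the value the server actually uses:

```python
# Sketch of the Valkey 8.1 ranking mechanism: when several primaries
# fail at once, rank the affected shards in lexicographic shard-ID
# order and stagger election start times by rank, avoiding vote splits.
def election_delays(failed_shard_ids, step_ms=500):
    ranked = sorted(failed_shard_ids)  # lexicographic shard-ID order
    return {shard: rank * step_ms for rank, shard in enumerate(ranked)}

delays = election_delays({"c9a1", "0f3b", "77de"})
assert delays["0f3b"] == 0      # highest-ranked shard elects first
assert delays["77de"] == 500
assert delays["c9a1"] == 1000
```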
&lt;p&gt;Valkey 9.0 has plenty of other improvements to increase the overall stability of the clustering system. All these enhancements allowed us to scale to 2,000 nodes with bounded recovery time during network partitions and below we have documented the benchmark setup and the throughput we were able to attain.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;benchmarking&quot;&gt;Benchmarking&lt;&#x2F;h2&gt;
&lt;p&gt;To scale the Valkey cluster to 1 billion requests per second (RPS) on a write workload, we chose the SET command to accurately reflect the scale. Previous experiments showed a single instance achieving more than &lt;a href=&quot;&#x2F;blog&#x2F;unlock-one-million-rps&quot;&gt;1 million RPS&lt;&#x2F;a&gt;, so the goal was to reach 1 billion RPS with a 2,000 node cluster, where each shard has 1 primary and 1 replica for better availability.&lt;&#x2F;p&gt;
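The arithmetic behind the target is straightforward: with one replica per shard, a 2,000-node cluster has 1,000 primaries, so 1 billion RPS works out to roughly 1 million RPS per primary, the level a single instance has already demonstrated:

```python
# Back-of-envelope sizing for the benchmark target.
nodes = 2000
replicas_per_shard = 1
primaries = nodes // (1 + replicas_per_shard)   # one primary per shard
target_rps = 1_000_000_000
per_primary_rps = target_rps // primaries

assert primaries == 1000
assert per_primary_rps == 1_000_000   # matches the single-instance result
```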
&lt;p&gt;&lt;strong&gt;Hardware Configuration&lt;&#x2F;strong&gt;
For this experiment, Valkey cluster was deployed on AWS &lt;code&gt;r7g.2xlarge&lt;&#x2F;code&gt; instance type, which is a memory optimized instance, featuring 8 cores and 64 GB memory on an ARM-based (&lt;code&gt;aarch64&lt;&#x2F;code&gt;) architecture. In order to generate enough traffic across all the slots, we used 750 instances of AWS &lt;code&gt;c7g.16xlarge&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;System Configuration&lt;&#x2F;strong&gt;
Note: The core assignments used in this guide are examples. Optimal core selection may vary depending on your specific system configuration and workload.&lt;&#x2F;p&gt;
&lt;p&gt;Each of the Valkey server nodes had 8 cores, so we decided to dedicate 2 cores to network interrupt handling to ensure interrupt affinity.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# identify the network interface, for this run, it was ens5&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;IFACE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span&gt;$(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ip&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; route&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; |&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; awk&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;#39;&#x2F;default&#x2F; {print $5; exit}&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# expose two combined Rx&#x2F;Tx queues&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;sudo&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; ethtool&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -L&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;$IFACE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot; combined&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 2&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# grab exactly the two IRQ numbers that belong to those queues&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;IRQS&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span&gt;($(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;grep&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -w&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;$IFACE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot; &#x2F;proc&#x2F;interrupts&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; |&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; awk&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;#39;{gsub(&amp;quot;:&amp;quot;,&amp;quot;&amp;quot;,$1); print $1}&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;))&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;IRQ0&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span&gt;${IRQS[0]}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;IRQ1&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span&gt;${IRQS[1]}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# pin queue-0 interrupt to CPU0&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;echo 0&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; |&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; sudo&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; tee &#x2F;proc&#x2F;irq&#x2F;&lt;&#x2F;span&gt;&lt;span&gt;$IRQ0&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&#x2F;smp_affinity_list&lt;&#x2F;span&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;   # bind IRQ0 → CPU 0&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;echo 1&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; |&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; sudo&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; tee &#x2F;proc&#x2F;irq&#x2F;&lt;&#x2F;span&gt;&lt;span&gt;$IRQ1&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&#x2F;smp_affinity_list&lt;&#x2F;span&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;   # bind IRQ1 → CPU 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;sudo&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; systemctl stop irqbalance&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;We increased the default limit of file descriptors of the instance so that the node is able to handle connections from the Valkey cluster nodes and clients.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# increase max conn&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;ulimit -n 1048544&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;With 6 cores left on the node, we decided to assign all the remaining cores to the &lt;code&gt;valkey-server&lt;&#x2F;code&gt;, so that the 6 &lt;code&gt;io-threads&lt;&#x2F;code&gt; (including the &lt;code&gt;main thread&lt;&#x2F;code&gt;) can leverage the most from the instance.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# pin 6 cores to valkey-server&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;CPUSET&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;2-7&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;sudo&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; cset shield&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --cpu=&lt;&#x2F;span&gt;&lt;span&gt;$CPUSET&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --kthread=on&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;sudo&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; cset shield&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --exec&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; taskset&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -- -c&lt;&#x2F;span&gt;&lt;span&gt; $CPUSET&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; .&#x2F;valkey-server valkey-cluster.conf&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --daemonize&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; yes&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Server Configuration&lt;&#x2F;strong&gt;
We launched each Valkey process with minimal changes to the default configurations.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;cluster-enabled yes # To enable cluster mode&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;cluster-config-file nodes.conf # To persist the topology information&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;cluster-require-full-coverage no # To prioritize availability&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;cluster-allow-reads-when-down yes # To allow reads and partial operations to continue even under slot loss&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;save &amp;quot;&amp;quot; # Disable periodic snapshots&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;io-threads 6 # Allow offloading io operations to separate io-threads&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;maxmemory 50gb # Limit maximum memory utilization by the process&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Benchmark Configurations&lt;&#x2F;strong&gt;
We ran the &lt;code&gt;valkey-benchmark&lt;&#x2F;code&gt; from 750 client instances to generate the required traffic. We used the following parameters for each of those instances.&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Number of requests (-n): &lt;code&gt;100M&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Number of clients (-c): &lt;code&gt;1000&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Test (-t): &lt;code&gt;SET&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Data Size (-d): &lt;code&gt;512&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Threads: &lt;code&gt;20&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
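In aggregate, that fleet configuration works out to a substantial amount of offered load:

```python
# Aggregate load generated by the benchmark fleet: 750 client
# instances, each opening 1,000 connections and issuing 100 million
# SET requests of 512-byte values.
instances = 750
clients_per_instance = 1000
requests_per_instance = 100_000_000

total_connections = instances * clients_per_instance
total_requests = instances * requests_per_instance

assert total_connections == 750_000
assert total_requests == 75_000_000_000   # 75 billion requests overall
```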
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;1-billion-rps&#x2F;1B-rps-throughput.png&quot; alt=&quot;Diagram showcasing 1 billion request per sec throughput&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;The overall observation is that throughput grows almost linearly with the number of primaries, and the cluster bus adds minimal overhead with the default node timeout.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;recovery&quot;&gt;Recovery&lt;&#x2F;h3&gt;
&lt;p&gt;In the same environment, we tested the recovery time of the cluster when multiple primary nodes go down. We killed up to 50% of the primaries and let the automatic failover process heal the write outage for the affected shards. To simulate primary node failures, we sent &lt;code&gt;SIGKILL&lt;&#x2F;code&gt; to hard-stop the Valkey process and let the replicas take over through automated failovers. The recovery time measured here runs from the moment any node discovered a primary in a &lt;code&gt;PFAIL&lt;&#x2F;code&gt; state until the cluster reported ok and all slots were covered again.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;1-billion-rps&#x2F;1B-rps-recovery-time.png&quot; alt=&quot;Diagram showcasing recovery time across various primary failure scenarios&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;closing-thoughts&quot;&gt;Closing thoughts&lt;&#x2F;h2&gt;
&lt;p&gt;With all these improvements made in Valkey, a cluster can now scale to 1 billion RPS using 2,000 nodes, which is quite a remarkable feat. However, there is plenty of room to improve further. The steady-state CPU utilization overhead from cluster bus message transfer and processing can be reduced further by incorporating the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;SWIM_Protocol&quot;&gt;SWIM protocol&lt;&#x2F;a&gt; or by moving cluster bus message processing off the main thread into a separate, independent thread. The failover logic can also be made smarter by incorporating the AZ placement of nodes. We would also like to introduce more observability metrics and logs into the system for better manageability. All of these are tracked under the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;issues&#x2F;2281&quot;&gt;Support Large Cluster&lt;&#x2F;a&gt; issue. Feel free to check it out and add your suggestions.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>How Valkey 8.1 Handles 50 Million Sorted Set Inserts</title>
        <published>2025-10-02T00:00:01+00:00</published>
        <updated>2025-10-02T00:00:01+00:00</updated>
        
        <author>
          <name>
            khawaja
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/50-million-zsets/"/>
        <id>https://valkey.io/blog/50-million-zsets/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/50-million-zsets/">&lt;p&gt;When you run infrastructure at scale, the smallest efficiencies compound into massive savings. Sorted sets (ZSETs) are the backing data structure for far more than leaderboards. They&#x27;re used for time‑ordered feeds, priority queues, recommendation rankings and more. Each entry carries per‑item overhead; when you&#x27;re inserting tens of millions of items, those bytes accumulate into gigabytes. The latest Valkey 8.1 release introduces a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;valkey-8-1-0-ga&#x2F;&quot;&gt;redesigned hash table&lt;&#x2F;a&gt; and other optimizations that promise lower memory usage and higher throughput. In this post, we put Valkey 8.1 under pressure by benchmarking it against Valkey 8.0, inserting 50 million members into a sorted set and measuring memory consumption and throughput along the way.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;about-momento-and-raider-io&quot;&gt;About Momento and Raider.IO&lt;&#x2F;h2&gt;
&lt;p&gt;Momento builds enterprise‑grade real‑time data infrastructure. Our flagship service, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;gomomento.com&quot;&gt;Momento Cache&lt;&#x2F;a&gt;, allows developers to focus on innovation rather than maintaining complex back‑ends. Behind the curtains, Valkey is one of the primary storage engines we operate. Its memory efficiency and predictable performance make it an ideal choice for mission‑critical workloads. By building on Valkey, we can deliver caching experiences that are fast, reliable, and cost‑effective.&lt;&#x2F;p&gt;
&lt;p&gt;One such workload is &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;raider.io&quot;&gt;Raider.IO&lt;&#x2F;a&gt;, the most popular companion site for World of Warcraft players. Raider.IO maintains leaderboards that track millions of characters and guilds, updating instantly whenever a player clears a dungeon or raid. That translates into hundreds of millions of sorted set operations every day. In this environment, downtime or latency isn&#x27;t acceptable. Memory efficiency and throughput directly shape the player experience. Running Valkey at hyperscale for workloads like Raider.IO means that improvements in Valkey 8.1 immediately impact both infrastructure cost and the ability to absorb peak traffic without breaking a sweat.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;benchmark-setup&quot;&gt;Benchmark Setup&lt;&#x2F;h2&gt;
&lt;p&gt;To provide a fair comparison, both Valkey 8.0 and Valkey 8.1.1 were run on the same hardware:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;AWS c8g.2xlarge instance (Graviton4, 8 vCPU, 16 GB RAM)&lt;&#x2F;li&gt;
&lt;li&gt;Identical settings for persistence, I&#x2F;O threads and memory limits&lt;&#x2F;li&gt;
&lt;li&gt;50 million members inserted into a ZSET via pipelined ZADD commands (score=i, member=m:{i})&lt;&#x2F;li&gt;
&lt;li&gt;250,000 item batch size per pipeline&lt;&#x2F;li&gt;
&lt;li&gt;Metrics collected every 1,000,000 inserts (flush between runs)&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;momentohq&#x2F;sorted-set-benchmark&quot;&gt;benchmark code is open sourced&lt;&#x2F;a&gt; and straightforward to reproduce: it connects to both servers, flushes the test key, performs batched inserts, and records &lt;code&gt;used_memory&lt;&#x2F;code&gt;, &lt;code&gt;used_memory_rss&lt;&#x2F;code&gt;, total elapsed time, and throughput after each million inserts. This repeatability mirrors the ethos of Valkey&#x27;s community - every optimization is measurable, and anyone can verify the results.&lt;&#x2F;p&gt;
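&lt;p&gt;The insert loop can be sketched in a few lines of Python. The snippet below is an illustrative outline written against a redis-py-style client object (the names &lt;code&gt;run_benchmark&lt;&#x2F;code&gt; and &lt;code&gt;batch_ranges&lt;&#x2F;code&gt; are ours, not taken from the benchmark repository):&lt;&#x2F;p&gt;

```python
# Illustrative sketch of the benchmark loop, assuming a redis-py-style
# client (Valkey speaks the same protocol). The real open-source
# benchmark differs in detail; see the linked repository.

BATCH_SIZE = 250_000        # items per pipeline, as in the setup above
TOTAL_ITEMS = 50_000_000    # members inserted into the sorted set

def batch_ranges(total, size):
    """Yield (start, end) index ranges covering [0, total)."""
    for start in range(0, total, size):
        yield start, min(start + size, total)

def run_benchmark(client, key="zbench", total=TOTAL_ITEMS, size=BATCH_SIZE):
    client.delete(key)                        # flush the test key
    for start, end in batch_ranges(total, size):
        pipe = client.pipeline(transaction=False)
        for i in range(start, end):
            pipe.zadd(key, {f"m:{i}": i})     # score=i, member=m:{i}
        pipe.execute()
        if end % 1_000_000 == 0:
            info = client.info("memory")      # used_memory, used_memory_rss
            # ...record metrics per million inserts here...
```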
&lt;h2 id=&quot;memory-usage-8-1-vs-8-0&quot;&gt;Memory Usage – 8.1 vs 8.0&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey 8.1&#x27;s redesigned dictionary structure cuts &lt;a href=&quot;&#x2F;blog&#x2F;valkey-8-1-0-ga&quot;&gt;roughly 20–30 bytes per key&lt;&#x2F;a&gt;, and those savings compound with scale. The figure below charts &lt;code&gt;used_memory&lt;&#x2F;code&gt; during the benchmark for both versions:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;50-million-zsets&#x2F;used-memory-vs-inserts.png&quot; alt=&quot;Chart illustrating used vs inserts. It shows Valkey 8.1 with lower used memory&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;At 1 million inserts, Valkey 8.0 used ~95 MB while Valkey 8.1 used ~81 MB. As the ZSET grew, the gap widened. By 10 million inserts, 8.0 consumed 1.06 GB versus 0.77 GB for 8.1 - &lt;strong&gt;a 27% reduction&lt;&#x2F;strong&gt;. At the end of the run (50 million inserts), 8.1 used 3.77 GB compared to 4.83 GB on 8.0, saving 1.06 GB (≈22%).&lt;&#x2F;p&gt;
&lt;p&gt;These numbers align with the release notes. Valkey&#x27;s 8.1 announcement highlights lower per‑key overheads and improved data structure handling; Linuxiac notes that 8.1&#x27;s architectural changes can reduce memory footprints by &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;linuxiac.com&#x2F;valkey-8-1-in-memory-data-store-unleashes-10-faster-throughput&quot;&gt;approximately 20 bytes per KV pair&lt;&#x2F;a&gt;, with each pair normally consuming 100 bytes (a 20% reduction!). Our results also confirm a similar improvement on large ZSET workloads.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;throughput-and-total-time&quot;&gt;Throughput and Total Time&lt;&#x2F;h2&gt;
&lt;p&gt;Memory isn&#x27;t the only metric that matters. Throughput determines how quickly you can fill or update a set. Although Valkey 8.0 started strong with about 661k inserts per second at the 1M mark, its throughput gradually declined as the data set grew. Valkey 8.1 maintained higher and more consistent throughput, finishing the benchmark at 573k&#x2F;s compared with 532k&#x2F;s for 8.0. Overall time to insert 50 million members dropped from 94 seconds on 8.0 to 87 seconds on 8.1, a 7% improvement.&lt;&#x2F;p&gt;
&lt;p&gt;The smoother throughput curve is just as important as the raw peak. Spiky performance complicates capacity planning, especially when running close to memory limits. By contrast, Valkey 8.1&#x27;s combination of improved hash‑table implementation and I&#x2F;O thread optimizations delivers predictable scaling.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-these-gains-matter&quot;&gt;Why These Gains Matter&lt;&#x2F;h2&gt;
&lt;p&gt;For Raider.IO and similar workloads, these improvements translate directly into better economics and user experience:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Lower infrastructure cost - Saving over a gigabyte per 50M members means fewer nodes are needed to support the same dataset. At hyperscale, this compounds into significant cost reductions.&lt;&#x2F;li&gt;
&lt;li&gt;More headroom during peaks - Efficiency gains provide a buffer during seasonal events like game expansions or esports tournaments. When leaderboards surge, there&#x27;s less risk of hitting memory ceilings.&lt;&#x2F;li&gt;
&lt;li&gt;Predictable scaling - A smoother memory growth curve and stable throughput make it easier to forecast when capacity needs to be added and to automate scaling policies.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;These benefits don&#x27;t just apply to leaderboards. Time‑ordered feeds (such as activity streams), job scheduling queues, recommendation rankings, rate‑limiting windows and search scoring all rely on sorted sets. In every case, per‑entry overhead multiplied by millions of entries determines whether you can do more with less. As described in the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.linuxfoundation.org&#x2F;press&#x2F;linux-foundation-announces-general-availability-of-valkey-8-1&quot;&gt;Linux Foundation press release&lt;&#x2F;a&gt;, upgrading to Valkey 8.1 can reduce memory footprints for common workloads by up to 20%, helping enterprises scale while keeping costs in check.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;closing-thoughts&quot;&gt;Closing Thoughts&lt;&#x2F;h2&gt;
&lt;p&gt;As an operator who lives and breathes real‑time data systems, I&#x27;m continually amazed by the pace of innovation in the Valkey community. We didn&#x27;t tune any configuration knobs to achieve these results - Valkey 8.1&#x27;s efficiency is built in, and the improvements materialized instantly once we upgraded. On a 50 million‑entry benchmark, the new release used up to 27% less memory while delivering about 8% higher throughput, and completed the workload seven seconds faster than its predecessor. Those deltas may seem small in isolation, but at hyperscale they compound into transformative savings and more resilient services.&lt;&#x2F;p&gt;
&lt;p&gt;If you&#x27;re running sorted‑set‑heavy workloads - whether leaderboards, feeds, queues or scoring engines - I encourage you to upgrade to Valkey 8.1 and run this benchmark yourself. Interested in how Valkey 8.1 stacked up against its competitors? &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.gomomento.com&#x2F;blog&#x2F;valkey-vs-redis-memory-efficiency-at-hyperscale&#x2F;&quot;&gt;So were we&lt;&#x2F;a&gt;!&lt;&#x2F;p&gt;
&lt;p&gt;Our code is &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;momentohq&#x2F;sorted-set-benchmark&quot;&gt;open source&lt;&#x2F;a&gt; and available to run on your own hardware. I think you&#x27;ll be pleasantly surprised by how much headroom you gain and how effortlessly Valkey handles pressure. The power of open source and community‑driven engineering continues to shine through in every release.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Introducing Hash Field Expirations</title>
        <published>2025-09-30T00:00:00+00:00</published>
        <updated>2025-09-30T00:00:00+00:00</updated>
        
        <author>
          <name>
            ranshid
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/hash-fields-expiration/"/>
        <id>https://valkey.io/blog/hash-fields-expiration/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/hash-fields-expiration/">&lt;p&gt;One of the great strengths of Valkey has always been its built-in ability to expire keys.
This simple but powerful mechanism lets developers keep their datasets fresh, automatically clear caches, or enforce session lifetimes without additional logic.
But there has always been one limitation: expiration worked at the level of whole keys.
If you stored multiple fields in a hash, you could set a TTL for the hash itself, but you couldn’t say “this field should live for 10 seconds while another field should stick around for 10 minutes.”&lt;&#x2F;p&gt;
&lt;p&gt;That limitation forced developers into awkward choices.&lt;&#x2F;p&gt;
&lt;p&gt;Imagine you’re running a feature flag system. You want to store all of a customer’s feature toggles inside one hash:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Some flags are short-lived experiments, meant to turn off automatically after a few seconds or minutes.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Others are long-term rollouts, where the toggle might remain valid for days or weeks.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;If you only had key-level expirations, you were forced to compromise.
You could put all flags for a customer into a single hash, but then they would all share one expiration time.
Or, you could store each flag in its own key so that each could expire independently — but this would explode the number of keys your system has to manage,
hurting memory efficiency and increasing overhead for operations.&lt;&#x2F;p&gt;
&lt;p&gt;The inability to set different TTLs per field meant developers either built complex cleanup processes outside of Valkey, or gave up flexibility in how expirations were handled.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;finding-the-right-expiration-model&quot;&gt;Finding the Right Expiration Model&lt;&#x2F;h2&gt;
&lt;p&gt;At first glance, adding field expirations might sound like a simple matter of storing timestamps per field.
To see why it is more involved, it helps to first review how expiration is normally handled in Valkey today.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey uses two complementary mechanisms to reclaim expired keys:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Lazy expiration:&lt;&#x2F;strong&gt; A key is deleted only when accessed. If you try to read or write an expired key, Valkey notices it has passed its TTL and deletes it immediately. This is cheap, but untouched keys would linger indefinitely and waste memory.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Active expiration:&lt;&#x2F;strong&gt; A background cron job runs 10 times per second, sampling a small set of keys with expiration. Expired keys are deleted until a time budget is reached, ensuring memory is reclaimed proactively without introducing latency spikes.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
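&lt;p&gt;The two mechanisms can be illustrated with a toy model over a plain Python dict (purely illustrative; the actual implementation is in C and differs substantially):&lt;&#x2F;p&gt;

```python
import random
import time

class ToyStore:
    """Toy model of key-level expiration: lazy deletion on access,
    plus a time-budgeted active sweep over sampled volatile keys."""

    def __init__(self):
        self.data = {}      # key -> value
        self.expires = {}   # key -> absolute deadline (monotonic seconds)

    def set(self, key, value, ttl=None):
        self.data[key] = value
        if ttl is not None:
            self.expires[key] = time.monotonic() + ttl

    def get(self, key):
        # Lazy expiration: delete the key on access if it is past its TTL.
        deadline = self.expires.get(key)
        if deadline is not None and time.monotonic() >= deadline:
            self.data.pop(key, None)
            self.expires.pop(key, None)
            return None
        return self.data.get(key)

    def active_cycle(self, sample=20, budget_ms=1.0):
        # Active expiration: sample a small set of volatile keys and
        # delete the expired ones, stopping when the time budget is spent.
        start = time.monotonic()
        keys = list(self.expires)
        random.shuffle(keys)
        for key in keys[:sample]:
            if (time.monotonic() - start) * 1000 > budget_ms:
                break
            if time.monotonic() >= self.expires[key]:
                self.data.pop(key, None)
                self.expires.pop(key, None)
```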
&lt;p&gt;For hash fields, we made a key design choice: we did not implement lazy expiration.
Adding expiration checks to every &lt;a href=&quot;&#x2F;commands&#x2F;hget&quot;&gt;&lt;code&gt;HGET&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;, &lt;a href=&quot;&#x2F;commands&#x2F;hset&quot;&gt;&lt;code&gt;HSET&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;, or &lt;a href=&quot;&#x2F;commands&#x2F;hdel&quot;&gt;&lt;code&gt;HDEL&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; would have complicated the hot path of hash commands and risked performance regressions.
Instead, we extended the active expiration job so that it can now also scan field-level expiration buckets, alongside top-level keys.&lt;&#x2F;p&gt;
&lt;p&gt;This approach keeps expiration logic unified, predictable, and free from new latency cliffs.
But it also introduced a challenge:
how do we efficiently track and clean up expired fields inside hashes that may contain thousands (or millions) of entries,
when only a small subset are volatile — and an even smaller fraction are expired at any given time?
The challenge is balancing three conflicting goals:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Keep memory usage small.&lt;&#x2F;strong&gt; Hashes can grow to millions of fields, so metadata overhead must be minimal.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Maintain fast lookups and updates.&lt;&#x2F;strong&gt; Most hash commands run in &lt;code&gt;O(1)&lt;&#x2F;code&gt; time, and expiration tracking can’t change that.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Reclaim memory efficiently.&lt;&#x2F;strong&gt; The active expiration job is time-bounded, so we need to minimize wasted CPU cycles spent scanning unexpired fields.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;approaches-considered&quot;&gt;Approaches Considered&lt;&#x2F;h3&gt;
&lt;p&gt;We explored several ways to solve this problem:&lt;&#x2F;p&gt;
&lt;h4 id=&quot;1-secondary-hashtable-per-field-expiration&quot;&gt;1. Secondary hashtable per field expiration&lt;&#x2F;h4&gt;
&lt;p&gt;This seemed simple: build a secondary hashtable in each hash object that tracks only the volatile fields.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;hash-fields-expiration&#x2F;hfe-alternative-hashtable.png&quot; alt=&quot;Diagram illustrating optional solution using a secondary hashtable index&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;This is in fact how volatile top-level keys are already tracked: for each database, Valkey maintains a secondary map of the keys that carry a TTL.
During active expiration, the existing job scans this secondary map, and every key found to be past its assigned TTL is expired and its memory reclaimed.
The problem with this design option is the potential inefficiency of scanning many items that are not yet due to expire.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;2-radix-tree-based-index&quot;&gt;2. Radix Tree-based index&lt;&#x2F;h4&gt;
&lt;p&gt;Using a radix tree to hold field names plus expirations provides sorted access for free.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;hash-fields-expiration&#x2F;hfe-alternative-radixtree.png&quot; alt=&quot;Diagram illustrating optional solution using a secondary radix-tree index&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;This is also not a new concept in Valkey. In fact, this exact structure is already used to manage client connections blocked on key operations
like &lt;a href=&quot;&#x2F;commands&#x2F;blpop&#x2F;&quot;&gt;&lt;code&gt;BLPOP&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; and &lt;a href=&quot;&#x2F;commands&#x2F;xread&#x2F;&quot;&gt;&lt;code&gt;XREAD&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;But the memory overhead per node was high: during experiments we measured more than 54 bytes of overhead per hash field when using this type of index.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;3-global-sorted-structure&quot;&gt;3. Global sorted structure&lt;&#x2F;h4&gt;
&lt;p&gt;We wanted the ability to efficiently scan over fields which are already expired, and a good way to achieve this is a sorted index.
Using a radix tree was possible, but the memory overhead was high. Instead, we could use a more lightweight data structure such as a skip list, whose memory consumption is more tightly bounded
and does not depend on the distribution of the stored keys.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;hash-fields-expiration&#x2F;hfe-alternative-skiplist.png&quot; alt=&quot;Diagram illustrating optional solution using a secondary skiplist index&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;However, this would give &lt;code&gt;O(log N)&lt;&#x2F;code&gt; access to the index, which conflicted with our goal of keeping hash operations at constant time complexity.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;coarse-buckets-with-adaptive-encoding&quot;&gt;Coarse Buckets with Adaptive Encoding&lt;&#x2F;h3&gt;
&lt;p&gt;Instead of tracking every field’s expiration individually, we designed a coarse bucket system.
Each field’s timestamp is mapped into a time bucket, represented by a shared “end timestamp.”&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;hash-fields-expiration&#x2F;hfe-coarse-buckets.png&quot; alt=&quot;Diagram illustrating our selected coarse buckets solution&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;This solution introduces a semi-sorted data structure which we named &#x27;vset&#x27; (short for &quot;volatile set&quot;).
The volatile set manages buckets in different time window resolutions and adaptive encodings.
Buckets can split if too many expirations cluster in one interval.
This adaptability keeps the number of buckets small while ensuring they’re fine-grained enough for efficient cleanup.&lt;&#x2F;p&gt;
&lt;p&gt;When a new field with TTL is added, the volatile set either places it into an existing bucket or creates a new one.
If a bucket grows too large, it is narrowed into smaller time windows to keep expiration scans efficient.
For example: scanning 1,000 items in a single 10-second bucket may result in mostly misses, while spreading them across 10 smaller buckets avoids wasted work.
Why not always use a very small time window (e.g., a few milliseconds) for each bucket? The reason is memory.
Recall that the buckets themselves are managed in a radix tree, which introduces a high overhead per item.
With many buckets, we would end up paying the same memory overhead that ruled out a pure Radix-Tree index.&lt;&#x2F;p&gt;
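&lt;p&gt;The bucket-and-split idea can be sketched as follows (a toy model with hypothetical window sizes and split thresholds; the real vset implementation differs):&lt;&#x2F;p&gt;

```python
from collections import defaultdict

WINDOW_MS = 10_000   # initial bucket width (hypothetical value)
MAX_ITEMS = 1_000    # split threshold (hypothetical value)

class CoarseBuckets:
    """Toy model of the coarse-bucket idea: each field's deadline maps
    to a shared bucket keyed by (window_start, window_length)."""

    def __init__(self, window_ms=WINDOW_MS):
        self.window_ms = window_ms
        # (window_start, window_length) -> {field: deadline_ms}
        self.buckets = defaultdict(dict)

    def add(self, field, deadline_ms):
        start = deadline_ms - deadline_ms % self.window_ms
        key = (start, self.window_ms)
        self.buckets[key][field] = deadline_ms
        if len(self.buckets[key]) > MAX_ITEMS:
            self._split(key)

    def _split(self, key):
        # Narrow an overfull bucket into two finer windows so that an
        # expiration scan touches fewer not-yet-expired fields.
        start, length = key
        items = self.buckets.pop(key)
        half = length // 2
        for field, deadline in items.items():
            sub = start + half if deadline >= start + half else start
            self.buckets[(sub, half)][field] = deadline

    def pop_expired(self, now_ms):
        # Only visit buckets whose entire window has already passed,
        # mirroring how the active job skips unexpired buckets.
        out = []
        for (start, length) in list(self.buckets):
            if now_ms >= start + length:
                out.extend(self.buckets.pop((start, length)))
        return out
```

&lt;p&gt;The key property is in the last step: a cleanup pass only touches buckets whose window has fully elapsed, so no CPU time is wasted examining fields that cannot yet be expired.&lt;&#x2F;p&gt;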
&lt;p&gt;The volatile set also uses different bucket encodings based on the number of items in the bucket.
A bucket holding only a single element requires just a single pointer&#x27;s worth of bytes.
A bucket with a small number of items is encoded as a vector of item pointers, and when a bucket contains many items, we use the &lt;a href=&quot;&#x2F;blog&#x2F;new-hash-table&quot;&gt;Valkey hashtable&lt;&#x2F;a&gt; structure to map the relevant items.
This way, we are leveraging existing data structures which are highly optimized for modern CPUs.&lt;&#x2F;p&gt;
&lt;p&gt;A hash object that contains volatile fields now also carries a secondary volatile set index.
At the database level, we maintain a global map of hashes with volatile fields.
The active expiration cron job scans both regular keys and these hashes, but only iterates over volatile set buckets whose end time has passed.
This ensures that CPU time is spent only on fields that are truly ready to expire.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;benchmarking-our-solution&quot;&gt;Benchmarking our solution&lt;&#x2F;h2&gt;
&lt;p&gt;Validating the new design meant benchmarking across several dimensions: memory overhead, command performance, and expiration efficiency.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;memory-overhead&quot;&gt;Memory Overhead&lt;&#x2F;h3&gt;
&lt;p&gt;We first measured the per-field memory overhead when setting TTLs.
The raw expiration time itself requires a constant 8 bytes (though this could be reduced in the future
by storing only a delta relative to a reference timestamp, such as server start time).
On top of that, extra memory is needed for tracking within the volatile set.&lt;&#x2F;p&gt;
&lt;p&gt;The actual overhead depends on both how many fields have expirations and how spread out their expiration times are.
This is because the bucket encoding chosen by the volatile set adapts to the data distribution.
In practice, the overhead ranged between 16 and 29 bytes per field.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;hash-fields-expiration&#x2F;hfe-benchmark-memory.png&quot; alt=&quot;Chart indicating the per field memory overhead for different hash object encoding types&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;The higher end of this range primarily affects small hashes, where compact encodings
like listpack are avoided when volatile fields are present.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;command-performance&quot;&gt;Command Performance&lt;&#x2F;h3&gt;
&lt;p&gt;Next, we benchmarked common hash commands both with and without field expirations.
The results showed no measurable performance regression and throughput remained stable when expirations were added.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;hash-fields-expiration&#x2F;hfe-benchmark-hash-commands.png&quot; alt=&quot;Chart comparing the throughput of common hash commands both with and without field expirations&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;We also benchmarked the new expiration-aware commands (e.g., &lt;code&gt;HSETEX&lt;&#x2F;code&gt;), confirming that their performance is on par with traditional hash operations.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;hash-fields-expiration&#x2F;hfe-benchmark-new-commands.png&quot; alt=&quot;Chart showing the throughput of the new expiration-aware commands&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h3 id=&quot;active-expiration-efficiency&quot;&gt;Active Expiration Efficiency&lt;&#x2F;h3&gt;
&lt;p&gt;The design goal of the volatile set was to enable efficient background deletion of expired fields.
To test this, we preloaded 10 million fields with TTLs.
We distributed these fields across varying numbers of hash objects to see how object size influences expiration.
During the load phase, we disabled the expiration job using the &lt;a href=&quot;&#x2F;commands&#x2F;debug&#x2F;&quot;&gt;&lt;code&gt;DEBUG&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; command, then re-enabled it once all fields had expired.&lt;&#x2F;p&gt;
&lt;p&gt;The following chart shows the time it took the expiration cron job to complete the full deletion of all the 10M fields.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;hash-fields-expiration&#x2F;hfe-benchmark-reclaim-time.png&quot; alt=&quot;Chart showing the time it took the expiration cron job to complete the full deletion of all the 10M fields&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;The results revealed that expiration time depends not just on the number of fields, but also on how they are distributed across objects.
Smaller hashes tend to fit into CPU caches, so random field deletions remain cache-friendly.
Very large hashes, however, cannot fit entirely in cache, which means more expensive memory lookups during expiration.&lt;&#x2F;p&gt;
&lt;p&gt;Another important factor is CPU utilization.
The active expiration job is deliberately CPU-bounded and designed to use no more than ~25% of available CPU time (unless configured to work more aggressively),
preventing it from overwhelming the system.
The chart above shows that average CPU usage was consistently kept under this cap, even when expiring millions of fields, ensuring predictable tail latency.&lt;&#x2F;p&gt;
&lt;p&gt;We also simulated a more realistic scenario: &lt;em&gt;expiring data during continuous ingestion.&lt;&#x2F;em&gt;
Using &#x27;valkey-benchmark&#x27; to execute the new &lt;a href=&quot;&#x2F;commands&#x2F;hsetex&quot;&gt;&lt;code&gt;HSETEX&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; command, we continuously
inserted 10 million fields with 10-second TTLs, while the active expiration job ran in the background.&lt;&#x2F;p&gt;
&lt;p&gt;This setup maintained a constant pool of fields at different stages of their lifecycle — some fresh, some nearing expiration, some ready to be reclaimed.
We then tracked memory usage over a 5-minute period.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;hash-fields-expiration&#x2F;hfe-benchmark-active-expiry.png&quot; alt=&quot;Chart presenting the ability of active expiration job to reclaim memory during constant data ingestion&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;The results aligned with our theoretical expectation:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;Memory = (Injection Throughput) x (AVG TTL) x (AVG Item memory)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;In our experiment:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Injection rate: 300K commands&#x2F;sec&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Average TTL: 10 seconds&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Base item size: ~61 bytes&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Additional expiration overhead: ~19 bytes&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;This yields an expected memory footprint of ~230MB:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;300K × 10 × 80B ≈ 230MB&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The observed memory matched this estimate closely, demonstrating that the active expiration mechanism
is able to keep up with load and prevent memory from spiking unexpectedly.&lt;&#x2F;p&gt;
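&lt;p&gt;The arithmetic behind that estimate can be checked directly (the product is 240 MB decimal, or roughly 229 MiB, matching the ~230MB figure):&lt;&#x2F;p&gt;

```python
# Back-of-the-envelope check of the steady-state formula above:
# Memory = (Injection Throughput) x (AVG TTL) x (AVG Item memory)
throughput = 300_000          # HSETEX commands per second
avg_ttl_s = 10                # average TTL in seconds
item_bytes = 61 + 19          # base item size + expiration overhead

steady_state = throughput * avg_ttl_s * item_bytes   # bytes
print(steady_state)                                  # 240000000
print(round(steady_state / (1024 * 1024)))           # ~229 MiB
```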
&lt;h2 id=&quot;benchmark-takeaways&quot;&gt;Benchmark Takeaways&lt;&#x2F;h2&gt;
&lt;p&gt;The benchmarks demonstrate that field-level expirations can be added to Valkey without compromising memory efficiency or latency.
The memory overhead remains modest and predictable, command throughput is unaffected, and the shared active expiration job efficiently reclaims memory even under heavy ingestion workloads.
Together, these results validate that the coarse-bucket design with adaptive encoding delivers the right balance of efficiency, scalability, and correctness, while preserving Valkey’s reputation for high performance and low latency.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;using-hash-field-expiration-in-valkey&quot;&gt;Using Hash Field Expiration in Valkey&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey 9.0 introduces a new API for hash field expirations, fully compatible with the Redis 8.0 API.
Any existing client library supporting this API can be used.
In this example, we will use the latest &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-glide&#x2F;&quot;&gt;valkey-glide&lt;&#x2F;a&gt; 2.1.0 release, which also supports the new hash field expiration commands.&lt;&#x2F;p&gt;
&lt;p&gt;Let&#x27;s start with a simple example.&lt;&#x2F;p&gt;
&lt;p&gt;First, we create a client connecting to a local Valkey server:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;python&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; from&lt;&#x2F;span&gt;&lt;span&gt; glide_sync&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; import&lt;&#x2F;span&gt;&lt;span&gt; GlideClientConfiguration, NodeAddress, GlideClient&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; addresses&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; [NodeAddress(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;localhost&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 6379&lt;&#x2F;span&gt;&lt;span&gt;)]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; config&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; GlideClientConfiguration(addresses,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; request_timeout&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;500&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;  # 500ms timeout&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; client&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; GlideClient.create(config)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Next, let&#x27;s create a new hash object to store some random user data:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;python&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; client.hset(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;User1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, {&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;name&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;Ran&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;age&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;#39;old&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;password&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;1234&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;})&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 3&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Suppose we want the user’s password to be available only for a specific timeframe (e.g., 60 seconds).
With the new hash field expiration feature, we can now do that:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;python&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; client.hexpire(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;User1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 60&lt;&#x2F;span&gt;&lt;span&gt;, [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;password&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;])&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;gt;&lt;&#x2F;span&gt;&lt;span&gt; [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;1&lt;&#x2F;span&gt;&lt;span&gt;]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Note that in this example, we only set a TTL for a single field, but the command can accept multiple fields at once.&lt;&#x2F;p&gt;
&lt;p&gt;We can check the remaining TTL for the password field:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;python&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; client.httl(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;User1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;password&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;])&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;gt;&lt;&#x2F;span&gt;&lt;span&gt; [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;46&lt;&#x2F;span&gt;&lt;span&gt;]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The reply shows the remaining time in seconds for the field to live. We can also query the absolute expiration time:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;python&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; client.hexpiretime(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;User1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;password&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;])&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;gt;&lt;&#x2F;span&gt;&lt;span&gt; [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;1757499351&lt;&#x2F;span&gt;&lt;span&gt;]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This is a Unix timestamp, which we can easily convert to human-readable time:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;date&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -d&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; @1757499351&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; Wed&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; Sep&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 10&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; 10:15:51 UTC&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 2025&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;After 60 seconds, if we read the hash:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;python&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; client.hgetall(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;User1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;gt;&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;b&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;#39;name&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; b&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;#39;Ran&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; b&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;#39;age&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; b&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;#39;old&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Notice that the password field is no longer present, since it has expired.&lt;&#x2F;p&gt;
&lt;p&gt;Another practical use case is managing a collection of links where each field represents a URL along with its metadata:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;links:valkey:blogs -&amp;gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    &amp;quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;valkey-is-a-key-value-store&amp;quot; -&amp;gt;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;                                    {votes: 15, category: &amp;quot;tech&amp;quot;},&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    &amp;quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;valkey-supports-different-ai-workloads&amp;quot; -&amp;gt;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;                                    {votes: 42, category: &amp;quot;ai&amp;quot;},&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    &amp;quot;https:&#x2F;&#x2F;myblog.com&#x2F;blog&#x2F;how-to-write-good-valkey-blog&amp;quot; -&amp;gt;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;                                    {votes: 7, category: &amp;quot;blog&amp;quot;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Here’s how hash field expirations help:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Each link can have a TTL representing its “relevance window.” If a link hasn’t been accessed within a certain period, it automatically expires.&lt;&#x2F;li&gt;
&lt;li&gt;Every time a user queries or updates a link, its TTL can be refreshed, keeping active links alive while letting inactive ones naturally fall off.&lt;&#x2F;li&gt;
&lt;li&gt;Expired links are removed automatically, so you don’t need extra cleanup logic and can focus on active links.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Using Valkey’s &lt;a href=&quot;&#x2F;commands&#x2F;hsetex&quot;&gt;HSETEX&lt;&#x2F;a&gt; and &lt;a href=&quot;&#x2F;commands&#x2F;hgetex&quot;&gt;HGETEX&lt;&#x2F;a&gt;, we can update a field’s TTL whenever it’s accessed or modified:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;python&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Set or update link metadata with a 30-day TTL&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;client.hsetex(&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;     &amp;#39;links:user:42&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;     {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;      &amp;#39;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;valkey-is-a-key-value-store&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;      &amp;#39;{&amp;quot;clicks&amp;quot;:15,&amp;quot;category&amp;quot;:&amp;quot;tech&amp;quot;}&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;     },&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;     expiry&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span&gt;ExpirySet(ExpiryType.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;SEC&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 30&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;*&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;24&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;*&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;3600&lt;&#x2F;span&gt;&lt;span&gt;))&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Retrieve a link and refresh its TTL&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;link_data&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; client.hgetex(&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;      &amp;#39;links:user:42&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;      [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;#39;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;valkey-is-a-key-value-store&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;],&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;      expiry&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span&gt;ExpirySet(ExpiryType.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;SEC&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 30&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;*&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;24&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;*&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;3600&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;what-s-next&quot;&gt;What’s Next?&lt;&#x2F;h2&gt;
&lt;p&gt;What we’ve shipped so far is just the first step: the ability to set and control time-to-live at the hash field level. But we’re not stopping here.
Future work will focus on two main areas: reducing memory overhead and improving performance.
For example, we plan to support compressed encodings for hashes with volatile fields, and to leverage modern CPU features such as memory prefetching and SIMD instructions to speed up operations.&lt;&#x2F;p&gt;
&lt;p&gt;Another critical area for improvement is the active expiration job.
Today, all volatile data in Valkey is tracked in unsorted maps (hashtables). The background job must repeatedly scan these unordered structures, wasting CPU cycles on entries that aren’t close to expiration.
By introducing structured tracking—through the volatile set or even alternative approaches like &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.cs.columbia.edu&#x2F;~nahum&#x2F;w6998&#x2F;papers&#x2F;sosp87-timing-wheels.pdf&quot;&gt;Hierarchical Timing Wheels&lt;&#x2F;a&gt;—we can significantly
reduce wasted work and make expiration more efficient at scale.&lt;&#x2F;p&gt;
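&lt;p&gt;The timing-wheel idea is easy to see in a toy sketch. The following single-level wheel is purely illustrative (none of these names exist in Valkey): instead of scanning every volatile entry, the expiration job looks only at the bucket under the current tick.&lt;&#x2F;p&gt;

```python
class TimingWheel:
    """Toy single-level timing wheel with one-second buckets.

    A hierarchical version (as in the timing-wheels paper) adds coarser
    wheels for longer TTLs; this sketch only shows why per-tick expiry
    work stays small: each tick touches exactly one bucket.
    """

    def __init__(self, slots=60):
        self.slots = slots
        self.buckets = [set() for _ in range(slots)]
        self.tick = 0  # current wheel position

    def schedule(self, item, ttl_seconds):
        # Place the item in the bucket the wheel reaches in ttl ticks.
        if not 0 < ttl_seconds <= self.slots:
            raise ValueError("ttl out of range for a single-level wheel")
        self.buckets[(self.tick + ttl_seconds) % self.slots].add(item)

    def advance(self):
        # One clock tick: only the current bucket is expired.
        self.tick = (self.tick + 1) % self.slots
        expired, self.buckets[self.tick] = self.buckets[self.tick], set()
        return expired

wheel = TimingWheel()
wheel.schedule("User1.password", 2)
print(wheel.advance())  # after 1 tick: set()
print(wheel.advance())  # after 2 ticks: {'User1.password'}
```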
</content>
        
    </entry><entry xml:lang="en">
        <title>Numbered Databases in Valkey 9.0</title>
        <published>2025-09-25T00:00:00+00:00</published>
        <updated>2025-09-25T00:00:00+00:00</updated>
        
        <author>
          <name>
            kyledvs
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/numbered-databases/"/>
        <id>https://valkey.io/blog/numbered-databases/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/numbered-databases/">&lt;p&gt;If you explore Valkey’s documentation you might run across a feature called ‘numbered databases’ which allows you to separate the keyspace into (by default) 16 different databases. Digging into this feature reveals tantalizing ways to avoid key prefixing, house different workloads together on Valkey, and even perform patterns that are otherwise clunky. However, if you’ve done more research outside of the documentation on numbered databases you find advice like “don’t use them,” “they don’t scale,” and “they’re a bad idea.” Well, the forthcoming Valkey 9.0 changes many things with numbered databases and you’ll see in this post that advice definitely needs some updating.&lt;&#x2F;p&gt;
&lt;p&gt;Today, a common way to conceptualize Valkey is that your keys represent a unique name for pointers to data in memory across a cluster of nodes. So, key &lt;code&gt;foo&lt;&#x2F;code&gt;  is unique and deterministically linked to a specific node and on that node there is a memory address where the value resides. However, this misses one important detail: the database number. The reality is that key names belong to a specific numbered database and &lt;em&gt;aren’t unique&lt;&#x2F;em&gt; on a given instance of Valkey. To put this another way, Valkey can have the key &lt;code&gt;foo&lt;&#x2F;code&gt; as many times as there are numbered databases with each one pointing to different data.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;numbered-databases&#x2F;.&#x2F;images&#x2F;numbered-db.drawio.png&quot; alt=&quot;one key, many databases&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
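&lt;p&gt;A minimal mental model makes this concrete. The sketch below is illustrative only (it is not Valkey’s actual implementation): think of a node as holding one dictionary per numbered database, so the same key name can exist once per database, each pointing at different data.&lt;&#x2F;p&gt;

```python
# Illustrative model only: one dict per numbered database.
NUM_DATABASES = 16

node = [dict() for _ in range(NUM_DATABASES)]  # databases 0..15

def select(db_index):
    # Like the SELECT command: pick which keyspace to operate on.
    return node[db_index]

select(0)["foo"] = "hi"
select(5)["foo"] = "hello"

# Same key name, two databases, two different values:
print(select(0)["foo"])  # hi
print(select(5)["foo"])  # hello
```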
&lt;p&gt;Historically, before Valkey’s preceding project had the ability to cluster (before version 3.0.0), using multiple numbered databases was fully supported. However, when clustering was implemented numbered databases weren’t included: a cluster had one database (DB 0) that spanned the entire cluster. In a world where using numbered databases in your application locked you into never moving to a cluster, the early advice made sense. However, Valkey 9.0 adds the ability to have numbered databases on a cluster, changing everything about that advice.&lt;&#x2F;p&gt;
&lt;p&gt;Why bring this feature to cluster mode in Valkey 9.0? In the intervening years since numbered databases were left out of the cluster spec, users have found a number of very handy patterns that were unfortunately limited without numbered databases. However, numbered databases aren&#x27;t a panacea: read on to find out when you should stick to DB 0 and when to go for a numbered database.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;clustering-and-numbered-databases&quot;&gt;Clustering and numbered databases&lt;&#x2F;h2&gt;
&lt;p&gt;As mentioned earlier, the key name dictates where the key lives in the cluster, and this doesn’t change for numbered databases. As a refresher, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;cluster-tutorial&#x2F;&quot;&gt;Valkey clustering&lt;&#x2F;a&gt; takes the key name as a string, runs it through a CRC-16 function, then takes the result modulo 16,384 to determine the ‘slot’. Each of these slots belongs to a node in the cluster. Each slot contains all the numbered databases, so, carrying forward the idea that the same key name can exist in each database on a single instance, in Valkey 9.0 the same key can appear multiple times in a given slot, once per database. In other words, numbered databases do not affect clustering: the key name is still the determining factor in calculating slots.&lt;&#x2F;p&gt;
&lt;p&gt;You can see this directly with &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;cluster-keyslot&#x2F;&quot;&gt;&lt;code&gt;CLUSTER KEYSLOT&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;select&#x2F;&quot;&gt;&lt;code&gt;SELECT&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;. Take the following example:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; SELECT 0&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; SET somekey hi&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; CLUSTER KEYSLOT somekey&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(integer) 11058&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; SELECT 5&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; SET somekey hello&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; CLUSTER KEYSLOT somekey&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(integer) 11058&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Here the key &lt;code&gt;somekey&lt;&#x2F;code&gt; was set to two different values: ‘hi’ on database 0, and ‘hello’ on database 5, with &lt;code&gt;SELECT&lt;&#x2F;code&gt; altering the selected database. &lt;code&gt;CLUSTER KEYSLOT&lt;&#x2F;code&gt; was called twice: each one yielding the same slot number meaning this key will be assigned to the same node in the cluster, no matter which database is selected.&lt;&#x2F;p&gt;
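&lt;p&gt;The slot calculation described above is small enough to sketch directly. The following Python version is illustrative: it uses the standard CRC-16&#x2F;XMODEM checksum, and it omits the hash-tag rule (real clients hash only the part of the key between &lt;code&gt;{&lt;&#x2F;code&gt; and &lt;code&gt;}&lt;&#x2F;code&gt; when present).&lt;&#x2F;p&gt;

```python
def crc16_xmodem(data: bytes) -> int:
    # Bitwise CRC-16/XMODEM (polynomial 0x1021, initial value 0).
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    # Slot = CRC16(key) mod 16384; the selected database never enters
    # the calculation, so every database shares the same slot layout.
    return crc16_xmodem(key.encode()) % 16384

print(keyslot("somekey"))  # same value no matter which database is selected
```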
&lt;h2 id=&quot;limitations&quot;&gt;Limitations&lt;&#x2F;h2&gt;
&lt;p&gt;Numbered databases do not change the properties of how a Valkey cluster works. The keyspace of each database is still split amongst all the nodes. As a consequence, operations that need to span the entire keyspace must be run on each node:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;flushdb&#x2F;&quot;&gt;&lt;code&gt;FLUSHDB&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; will flush the keys in the current database &lt;em&gt;on the connected node.&lt;&#x2F;em&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;scan&#x2F;&quot;&gt;&lt;code&gt;SCAN&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; will iteratively return keys in the current database &lt;em&gt;on the connected node.&lt;&#x2F;em&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;dbsize&#x2F;&quot;&gt;&lt;code&gt;DBSIZE&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; will return the number of keys in the current database &lt;em&gt;on the connected node.&lt;&#x2F;em&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;You get the picture: if a command previously said it did something for the entire database, in cluster mode it really means for the connected node’s portion of the database. These are especially important to understand if you&#x27;re planning to move an application built for non-clustered, numbered databases to a cluster.&lt;&#x2F;p&gt;
&lt;p&gt;Additionally, numbered databases do not provide any form of resource isolation. It&#x27;s tempting to point a bunch of applications to a single Valkey cluster with each application taking its own database. While this certainly &lt;em&gt;can&lt;&#x2F;em&gt; work, it works without resource isolation and this setup can suffer from classic noisy neighbour problems: busy applications will affect the others using the same cluster. If you&#x27;re worried about resource sharing, most of the time you&#x27;ll be better off just having distinct clusters for each application instead of numbered databases.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;where-to-use-numbered-databases&quot;&gt;Where to use numbered databases&lt;&#x2F;h2&gt;
&lt;p&gt;If you take away one thing from this blog it should be this: &lt;strong&gt;numbered databases are a form of namespacing&lt;&#x2F;strong&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;The most straightforward use case for numbered databases is when you need to separate your data logically and can tolerate the effects of resource sharing (see above). This might be something like keeping customers’ data separated from one another, or combining applications onto a single cluster when resources are unlikely to be an issue. In a similar manner, multiple databases are a useful debugging tool. When you’re building an application, it can be difficult to see what happens inside Valkey when you make a change. With multiple databases, you can run your original code on one database and the changed version on another, then more easily compare how the data looks in Valkey just by swapping the connection between databases. This is also a useful pattern during a migration, when you want your old data to stick around while your new data is populated.&lt;&#x2F;p&gt;
&lt;p&gt;An entirely different use of numbered databases is related to the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;move&#x2F;&quot;&gt;&lt;code&gt;MOVE&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; command. &lt;code&gt;MOVE&lt;&#x2F;code&gt; allows you to change a key from one database to another &lt;em&gt;without copying the data,&lt;&#x2F;em&gt; meaning it uses very little resources and it’s an &lt;code&gt;O(1)&lt;&#x2F;code&gt; operation. This allows a couple of things: 1) you can effectively make data inaccessible from a database whilst keeping it on the same cluster node, and 2) you can replace a complex key atomically.&lt;&#x2F;p&gt;
&lt;p&gt;Looking at the first use of &lt;code&gt;MOVE&lt;&#x2F;code&gt;: imagine you have some content, maybe a user submitted post, that is stored at the key “mycontent.” At some point this content gets flagged as needing to be reviewed. You might not want to &lt;em&gt;delete&lt;&#x2F;em&gt; this content but you also don’t want it accessible. In this case you take the content and &lt;code&gt;MOVE&lt;&#x2F;code&gt; it to a different database. Once it’s reviewed (or edited!) you can &lt;code&gt;MOVE&lt;&#x2F;code&gt; it back. Never does the data actually leave the node nor get copied. Whilst your use-case might not be the same as this, the pattern illustrates an operation useful in a variety of contexts.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;numbered-databases&#x2F;.&#x2F;images&#x2F;move-db.drawio.png&quot; alt=&quot;now you see me, now you don&amp;#39;t&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;The second use of &lt;code&gt;MOVE&lt;&#x2F;code&gt; is similar to the first, except it revolves around complex keys that contain many elements. Take, for example, a large sorted set. It’s not unusual for these keys to be quite large because the key contains thousands or millions of smaller elements. If you need to replace this key with one that contains different elements, you’d have to first &lt;code&gt;DEL&lt;&#x2F;code&gt; the original key, then &lt;code&gt;ZADD&lt;&#x2F;code&gt; thousands of elements. Doing this in a transaction would be expensive and monopolize the node, and without a transaction, it would reveal a (potentially undesirable) partial state. Instead, with multiple databases, you build up the new complex key on a different database (under no urgency) and run a much simpler &lt;code&gt;MULTI&lt;&#x2F;code&gt; &#x2F; &lt;code&gt;EXEC&lt;&#x2F;code&gt; transaction of &lt;code&gt;DEL&lt;&#x2F;code&gt; followed by &lt;code&gt;MOVE&lt;&#x2F;code&gt;, atomically making the new complex key available to the database serving commands without the chance of a partial state.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;numbered-databases&#x2F;.&#x2F;images&#x2F;zadd-move.drawio.png&quot; alt=&quot;show me it when its done&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;This is really a form of temporal namespacing: you&#x27;re taking advantage of the namespace to have two active keys under the same name both of which can be read and manipulated. You can apply this pattern to other cases: anyplace you want to make a key inaccessible for a period of time.&lt;&#x2F;p&gt;
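&lt;p&gt;The swap pattern can be modeled in a few lines. This is an illustrative model only (plain Python dicts standing in for databases, not Valkey internals): the value is re-pointed, never copied.&lt;&#x2F;p&gt;

```python
# Model the DEL + MOVE swap: one dict per database, values re-pointed.
db0 = {"leaderboard": ["old", "elements"]}   # serving database
db9 = {"leaderboard": ["new", "elements"]}   # staging database, built calmly

# The MULTI/EXEC body, modeled: DEL the serving key, then MOVE the
# staged key into its place. The list object itself is never copied.
del db0["leaderboard"]
db0["leaderboard"] = db9.pop("leaderboard")

print(db0["leaderboard"])  # ['new', 'elements']
print("leaderboard" in db9)  # False: the key left the staging database
```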
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;How is key prefixing different than numbered databases?&lt;&#x2F;strong&gt; Some of the patterns in this blog can be achieved without numbered databases and instead with key prefixing (e.g. &lt;code&gt;app0:...&lt;&#x2F;code&gt;, &lt;code&gt;app1:...&lt;&#x2F;code&gt;). Key prefixing has a few downsides when compared to numbered databases:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Prefixes add up at scale. Millions of keys each with repeated prefixes over the span of the whole cluster means less RAM for data. While each database does have a memory overhead it is linear relative to the number of databases used, not keys. For a deeper look at how numbered database memory overhead works, including using 10 million databases, read the comments on &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1609#issuecomment-2616366819&quot;&gt;valkey-io&#x2F;valkey#1609&lt;&#x2F;a&gt;.&lt;&#x2F;li&gt;
&lt;li&gt;Databases are transparent to your application. To support prefixing, either your application or your client library has to be able to interpolate the prefix into each key. With numbered databases, the changes can be as simple as a number in the connection URI, not to each key name.&lt;&#x2F;li&gt;
&lt;li&gt;Databases avoid pattern-based iterations. Having all your keys in DB 0 then needing to iterate over the entire keyspace to affect a specific pattern is expensive and complicated. If your keys are separated into databases, this both subsets that iteration and enables some database wide commands (i.e. &lt;code&gt;FLUSHDB&lt;&#x2F;code&gt; instead of trying to delete keys by a pattern).&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;blockquote&gt;
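&lt;p&gt;The first bullet in the note above is easy to quantify with back-of-the-envelope arithmetic. The byte figures below are illustrative assumptions, not measured Valkey numbers:&lt;&#x2F;p&gt;

```python
# Prefix overhead scales with the number of keys; database overhead
# scales with the number of databases. Both figures are assumptions.
keys = 10_000_000
prefix_bytes = 8             # e.g. a prefix like "app0:db:"
databases = 16
per_db_overhead = 16 * 1024  # assumed fixed cost per database

prefix_cost = keys * prefix_bytes      # grows with every key added
db_cost = databases * per_db_overhead  # independent of key count

print(prefix_cost, db_cost)
```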
&lt;h2 id=&quot;gotchas-and-future-work&quot;&gt;Gotchas and future work&lt;&#x2F;h2&gt;
&lt;p&gt;Aside from the aforementioned lack of resource isolation, numbered databases in clustered Valkey have a few rough spots to look out for today. First, numbered databases don’t yet have many dedicated metrics, so it can be hard to gain insight into per-database resource usage. Second, the ACL system doesn’t currently address numbered databases, so you can’t meaningfully restrict access to databases (at the time of writing, there is an &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;2309&quot;&gt;open pull request that addresses this&lt;&#x2F;a&gt;). Unlike in the previous project, databases are an active area of work for the Valkey project, and the database abstraction holds potential as a way to implement new features.&lt;&#x2F;p&gt;
&lt;p&gt;Finally, while numbered databases are well supported in client libraries, there are rough edges:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Some client libraries artificially restrict usage to a single database in cluster mode.&lt;&#x2F;li&gt;
&lt;li&gt;Pooled clients may naïvely manage the selected database, so a client returned to the pool after running &lt;code&gt;SELECT&lt;&#x2F;code&gt; might retain the database number in subsequent usage. A similar situation is possible for multiplexed clients.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;In general, watch out for three assumptions: 1) the selected database is always 0, 2) there is only one database in cluster mode, and 3) if there are multiple databases in use it isn’t in cluster mode. None of these are true in Valkey 9.0.&lt;&#x2F;p&gt;
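&lt;p&gt;The pooling pitfall above can be sketched without any real client library (all classes below are illustrative stand-ins): a connection that ran &lt;code&gt;SELECT&lt;&#x2F;code&gt; keeps its database number, so a pool that doesn’t reset it leaks the database to the next borrower.&lt;&#x2F;p&gt;

```python
class FakeConnection:
    # Stand-in for a client connection; starts on database 0 like Valkey.
    def __init__(self):
        self.db = 0

    def select(self, db):
        self.db = db

class NaivePool:
    def __init__(self):
        self._idle = [FakeConnection()]

    def checkout(self):
        return self._idle.pop()

    def checkin(self, conn):
        self._idle.append(conn)  # pitfall: db number survives reuse

class SafePool(NaivePool):
    def checkin(self, conn):
        conn.select(0)  # reset the database before the next borrower
        self._idle.append(conn)

pool = NaivePool()
conn = pool.checkout()
conn.select(5)
pool.checkin(conn)
print(pool.checkout().db)  # 5: the next user unexpectedly starts on db 5
```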
&lt;p&gt;With this in mind, Valkey 9.0 gives you the ability to divvy up that keyspace into nice neat numbered databases and spread them out over whatever cluster you have. So, get rid of that old, outdated advice and start using them, seeing how they scale, and what a good idea numbered databases actually are for Valkey 9.0.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Valkey: An Investment in Open Source</title>
        <published>2025-09-16T00:00:00+00:00</published>
        <updated>2025-09-16T00:00:00+00:00</updated>
        
        <author>
          <name>
            lorilorusso
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/valkey-investment-in-open-source/"/>
        <id>https://valkey.io/blog/valkey-investment-in-open-source/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/valkey-investment-in-open-source/">&lt;p&gt;Valkey was founded just over a year ago to keep high-performance key&#x2F;value storage in the open source community: free from vendor lock-in and restrictive licenses. Backed by contributors like AWS, Google Cloud, Ericsson, Oracle, Alibaba, Huawei, Tencent, Percona, Aiven, Heroku, Verizon, Chainguard, and Canonical, the project shows how “free” in open source depends on investment: time, talent, and ongoing financial support.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;built-by-builders-backed-by-the-linux-foundation&quot;&gt;Built by Builders, Backed by the Linux Foundation&lt;&#x2F;h3&gt;
&lt;p&gt;Valkey was created on March 28, 2024 and has solidified itself as the open-source high-performance key&#x2F;value datastore that supports a variety of workloads, such as caching and message queues, and can even act as a primary database. Valkey is backed by the Linux Foundation, a neutral organization founded in 2000 that supports developers and technologists to scale and manage open source projects. Valkey operates under an open governance model focused on growing community contributions and adoption.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;one-year-in-growth-and-momentum&quot;&gt;One Year In: Growth and Momentum&lt;&#x2F;h3&gt;
&lt;p&gt;In just a year’s time, the project has celebrated two major releases, grown its corporate participants from 22 to 47, defined and executed on an innovative roadmap, and seen growing adoption. Users and the community have embraced the project, and they are firm in their commitment to continue improving Valkey for the benefit of all end users.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;new-valkey-features-use-cases&quot;&gt;New Valkey Features &amp;amp; Use Cases&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;solving-practical-problems-with-json-and-bloom-filters&quot;&gt;Solving Practical Problems with JSON and Bloom Filters&lt;&#x2F;h3&gt;
&lt;p&gt;Two new data types, JSON and Bloom filters, expand what developers can do with Valkey, especially in distributed systems where speed and structure are equally important. These additions reduce complexity in application logic and improve how data is handled at the edge.  “Adding JSON and Bloom filters to Valkey is about giving developers practical tools to solve real-world problems in distributed systems,” says Madelyn Olson, Valkey co-maintainer.&lt;&#x2F;p&gt;
&lt;p&gt;JSON support allows developers to work with rich, structured data natively, instead of relying on custom serialization or extra middleware. “JSON lets you work with rich, structured data directly in Valkey, which simplifies development and reduces glue code,” Olson explains.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;bloomfilters&#x2F;&quot;&gt;Valkey Bloom filters&lt;&#x2F;a&gt;, on the other hand, offer a compact way to perform fast checks. “Bloom filters are ideal when you need to make fast, memory-efficient existence checks, whether you&#x27;re trying to catch fraud or avoid unnecessary backend calls in a high-traffic service,” adds Olson.&lt;&#x2F;p&gt;
&lt;p&gt;As Olson put it, the goal is simple: “I’m really excited to see what problems our users will be able to solve with these new data types.”&lt;&#x2F;p&gt;
&lt;h3 id=&quot;valkey-search-speed-and-scale-for-ai-workloads&quot;&gt;Valkey Search: Speed and Scale for AI Workloads&lt;&#x2F;h3&gt;
&lt;p&gt;Google contributed a new module, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-search&quot;&gt;Valkey Search&lt;&#x2F;a&gt;, which brings vector similarity search to Valkey. It’s fast, delivering single-digit millisecond latency, and built to handle billions of vectors while maintaining over 99% recall.&lt;&#x2F;p&gt;
&lt;p&gt;Developers can run both approximate nearest neighbor (ANN) searches (via HNSW) and exact k-nearest neighbor (KNN) searches. It supports indexing with either Hash or &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-json&quot;&gt;Valkey-JSON&lt;&#x2F;a&gt; data types. While Valkey Search currently focuses on vector search, future goals include extending Valkey into a broader search engine with full-text support and more indexing options.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;performance-reliability-and-security&quot;&gt;Performance, Reliability, and Security&lt;&#x2F;h3&gt;
&lt;p&gt;Over the past year, contributors from companies like Ericsson, Oracle, and Percona have focused on making Valkey faster, safer, and more enterprise-ready. The following improvements showcase how Valkey is evolving to meet scaled production needs without compromising for any of its users.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;multithreading-improvements&quot;&gt;Multithreading Improvements&lt;&#x2F;h4&gt;
&lt;p&gt;Recent updates to Valkey’s internals improve how the system handles work across multiple threads, cutting down on lock contention and better using multi-core processors. This update significantly improves throughput under high concurrency, making the project more attractive for large-scale production use. Ericsson, one of Valkey’s core contributors, is already seeing real value in the project’s performance work.&lt;&#x2F;p&gt;
&lt;p&gt;Viktor Söderqvist, Ericsson Software Technology Engineer &amp;amp; Valkey co-maintainer, cites core efficiency as a key reason Valkey is becoming more production-ready. “To get the most out of the CPUs and memory in terms of storage efficiency and speed, the recent hashtable redesign lately improved with SIMD techniques as well as the recent and ongoing improvements in multithreading and memory batch-prefetching techniques are exciting areas of improvement.”&lt;&#x2F;p&gt;
&lt;h4 id=&quot;simd-accelerated-hashtable-redesign&quot;&gt;SIMD-Accelerated Hashtable Redesign&lt;&#x2F;h4&gt;
&lt;p&gt;The project’s core hashtable was reengineered to leverage SIMD instructions, allowing Valkey to process key lookups more efficiently by handling multiple operations in parallel. These low-level optimizations translate to faster response times in latency-sensitive environments.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;batch-memory-prefetching&quot;&gt;Batch Memory Prefetching&lt;&#x2F;h4&gt;
&lt;p&gt;New support for batch memory prefetching helps reduce cache misses by anticipating access patterns and loading data proactively. The result is smoother performance and more consistent behavior under heavy or sequential access workloads.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;ldap-integration&quot;&gt;LDAP Integration&lt;&#x2F;h4&gt;
&lt;p&gt;LDAP integration brings centralized authentication and access control to Valkey, making it easier to deploy in enterprise environments with existing identity infrastructure. This feature addresses a common adoption barrier in security-conscious and compliance-driven organizations.&lt;&#x2F;p&gt;
&lt;p&gt;“A lot of our customers already rely on LDAP to manage access across their infrastructure, so bringing that to Valkey just made sense,” says Vadim Tkachenko, Co-Founder of Percona. “It’s one of those features that removes friction. You get auditability, group-based permissions, and it works with what teams already have in place.”&lt;&#x2F;p&gt;
&lt;h4 id=&quot;rust-module&quot;&gt;Rust Module&lt;&#x2F;h4&gt;
&lt;p&gt;Oracle is contributing a Rust-based module SDK to Valkey, aimed at improving safety and performance for low-level extensions. By adopting Rust’s strong guarantees around memory and concurrency, the project opens the door to safer, more maintainable systems integration, especially for production environments under load.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;the-real-investment-behind-open-source-success&quot;&gt;The Real Investment Behind Open Source Success&lt;&#x2F;h3&gt;
&lt;p&gt;This investment in open source, including the choice to employ contributors to the project, demonstrates a key aspect of technological innovation: for companies of every size, from enterprises to mom-and-pop shops, success depends on working together. That means investing in the resources that move projects forward - the human capital of open source: the coders, the writers, the governance, and the community behind the ‘free’ moniker we frequently associate with open source.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;what-s-happening-with-valkey&quot;&gt;What’s Happening With Valkey&lt;&#x2F;h3&gt;
&lt;p&gt;On August 15, Valkey released the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;releases&#x2F;tag&#x2F;9.0.0-rc1&quot;&gt;first release candidate for 9.0&lt;&#x2F;a&gt;. This testing release previews new capabilities around atomic slot migrations, hash field expirations, and numbered databases in cluster mode, alongside numerous performance enhancements and bug fixes. Additional release candidates will follow, with general availability of 9.0 expected in early autumn 2025.&lt;&#x2F;p&gt;
&lt;p&gt;Parallel to the testing phase for 9.0, two events happened last month in Amsterdam:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;events.linuxfoundation.org&#x2F;open-source-summit-europe&#x2F;&quot;&gt;Open Source Summit Europe - Aug 25–27, 2025&lt;&#x2F;a&gt;:  Three-day conference for open source developers, technologists, and leaders featuring keynotes, sessions, and community networking.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;events&#x2F;keyspace-2025&#x2F;&quot;&gt;Valkey User Conference: Keyspace - August 28, 2025&lt;&#x2F;a&gt;: A one-day conference for developers, SREs, and DevOps teams with sessions, lightning talks, and workshops.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>What you&#x27;ll see at Keyspace 2025</title>
        <published>2025-08-12T01:01:01+00:00</published>
        <updated>2025-08-12T01:01:01+00:00</updated>
        
        <author>
          <name>
            kyledvs
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/keyspace-schedule/"/>
        <id>https://valkey.io/blog/keyspace-schedule/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/keyspace-schedule/">&lt;style type=&quot;text&#x2F;css&quot;&gt;
&#x2F;* this blog post pulls in CSS from sessionize and it clashes with our color scheme *&#x2F;
.main-inner #sessionize.sessionize-wrapper .sz-speaker.sz-speaker--full .sz-speaker__name {
    color: #30176e;
}
.main-inner #sessionize.sessionize-wrapper .sz-speaker.sz-speaker--full .sz-speaker__tagline {
    color: #30176e;
}
&lt;&#x2F;style&gt;
&lt;p&gt;I sat on the selection committee for Keyspace and was floored by the number and quality of submissions we received.
It&#x27;s truly amazing to see the talent and depth of experience that is present in the Valkey community manifest in a slate of presentations aimed to interest both experienced and new Valkey users.&lt;&#x2F;p&gt;
&lt;p&gt;The Keyspace 2025 conference is quickly approaching on August 28th, 2025 in Amsterdam, so make sure to &lt;a href=&quot;&#x2F;events&#x2F;keyspace-2025#register&quot;&gt;register today&lt;&#x2F;a&gt; to hear from these experts.&lt;&#x2F;p&gt;
&lt;p&gt;Click on the session titles below to find out about each one.&lt;&#x2F;p&gt;
&lt;script type=&quot;text&#x2F;javascript&quot; src=&quot;https:&#x2F;&#x2F;sessionize.com&#x2F;api&#x2F;v2&#x2F;qv5dn29l&#x2F;view&#x2F;Speakers&quot;&gt;&lt;&#x2F;script&gt;</content>
        
    </entry><entry xml:lang="en">
        <title>Introducing valkey-swift, the Swift client for Valkey</title>
        <published>2025-08-04T00:00:00+00:00</published>
        <updated>2025-08-04T00:00:00+00:00</updated>
        
        <author>
          <name>
            adamfowler
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/valkey-swift/"/>
        <id>https://valkey.io/blog/valkey-swift/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/valkey-swift/">&lt;p&gt;We are excited to introduce the preview release of &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-swift&quot;&gt;valkey-swift&lt;&#x2F;a&gt;, a new &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;swift.org&quot;&gt;Swift&lt;&#x2F;a&gt; based client library for Valkey.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-client&quot;&gt;The Client&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey-swift is a modern Swift client built with Swift concurrency in mind. Using Valkey from Swift lets you take advantage of a strongly typed API and Swift&#x27;s memory and data race safety guarantees, all while maintaining a very light memory footprint. The API uses structured concurrency, a paradigm designed to bring clarity to concurrent programming by using the structure of your code to define the lifetimes of tasks and associated resources. This allows you to reason about your code locally. The client includes the following features:&lt;&#x2F;p&gt;
&lt;h3 id=&quot;connection-pool&quot;&gt;Connection Pool&lt;&#x2F;h3&gt;
&lt;p&gt;The client includes a persistent connection pool. Instead of establishing a new connection for every request, it leases a connection from a pool of existing connections. This minimizes the time and resources required to get a connection to your Valkey server.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;commands&quot;&gt;Commands&lt;&#x2F;h3&gt;
&lt;p&gt;The project uses code generation to generate all of Valkey&#x27;s command set. This ensures that all of Valkey&#x27;s features are available, and includes the string, list, set, sorted set, stream, hash, geospatial, hyperloglog, and pub&#x2F;sub commands. Using code generation has the added bonus of allowing us to easily keep up to date with the latest Valkey command changes. All the Valkey commands are available directly from &lt;code&gt;ValkeyClient&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;swift&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;try await&lt;&#x2F;span&gt;&lt;span&gt; valkeyClient.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;set&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;Key1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, &lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;value&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;Test&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;let&lt;&#x2F;span&gt;&lt;span&gt; value &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;= try await&lt;&#x2F;span&gt;&lt;span&gt; valkeyClient.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;get&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;Key1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Each call to a command using &lt;code&gt;ValkeyClient&lt;&#x2F;code&gt; leases a connection from the connection pool to run that single command, so we provide an alternative where you can lease a single connection to run multiple commands as follows:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;swift&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;try await&lt;&#x2F;span&gt;&lt;span&gt; valkeyClient.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;withConnection&lt;&#x2F;span&gt;&lt;span&gt; { connection &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;in&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    try await&lt;&#x2F;span&gt;&lt;span&gt; connection.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;set&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;Key1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, &lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;value&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;Test&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    let&lt;&#x2F;span&gt;&lt;span&gt; value &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;= try await&lt;&#x2F;span&gt;&lt;span&gt; connection.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;get&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;Key1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;When the closure exits, the connection is automatically released, avoiding the potential user error of forgetting to return the connection to the connection pool.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;pipelining&quot;&gt;Pipelining&lt;&#x2F;h3&gt;
&lt;p&gt;Valkey pipelining is a technique for improving performance: it sends multiple commands at once without waiting for each individual response. This avoids a network round trip per command and decouples receiving one response from sending the next request.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;valkey-swift&#x2F;images&#x2F;valkey-pipelining.png&quot; alt=&quot;An image that shows non-pipelined commands waiting for each response, and pipelined commands that send a burst of commands, a technique which shows the advantage of pipelining by submitting sets of commands quickly.&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Valkey-swift provides support for pipelining in a couple of different ways.
First, you can do this using the &lt;code&gt;execute(_:)&lt;&#x2F;code&gt; function available from both &lt;code&gt;ValkeyClient&lt;&#x2F;code&gt; and &lt;code&gt;ValkeyConnection&lt;&#x2F;code&gt;.
This sends all the commands off at the same time and receives a tuple of responses.
Swift allows this client to present a strongly typed API, ensuring it both accepts the correct types for multiple commands and returns the correct types for responses.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;swift&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;let&lt;&#x2F;span&gt;&lt;span&gt; (lpushResult, rpopResult)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; = await&lt;&#x2F;span&gt;&lt;span&gt; valkeyClient.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;execute&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    LPUSH&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;Key2&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, &lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;elements&lt;&#x2F;span&gt;&lt;span&gt;: [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;entry1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, &lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;entry2&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;]),&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    RPOP&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;Key2&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;let&lt;&#x2F;span&gt;&lt;span&gt; count &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;= try&lt;&#x2F;span&gt;&lt;span&gt; lpushResult.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;get&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;  &#x2F;&#x2F; 2&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;let&lt;&#x2F;span&gt;&lt;span&gt; value &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;= try&lt;&#x2F;span&gt;&lt;span&gt; rpopResult.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;get&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;  &#x2F;&#x2F; ByteBuffer containing &amp;quot;entry1&amp;quot; string&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The second way to take advantage of pipelining is to use &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;docs.swift.org&#x2F;swift-book&#x2F;documentation&#x2F;the-swift-programming-language&#x2F;concurrency&#x2F;&quot;&gt;Swift Concurrency&lt;&#x2F;a&gt;. Because the &lt;code&gt;ValkeyConnection&lt;&#x2F;code&gt; type is a Swift &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;docs.swift.org&#x2F;swift-book&#x2F;documentation&#x2F;the-swift-programming-language&#x2F;concurrency#Actors&quot;&gt;actor&lt;&#x2F;a&gt; it can be used across concurrent tasks without concern for data race issues.&lt;&#x2F;p&gt;
&lt;p&gt;Unlike with the &lt;code&gt;execute(_:)&lt;&#x2F;code&gt; function, the commands are sent individually, but sending a command does not wait for the previous command&#x27;s response.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;swift&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;try await&lt;&#x2F;span&gt;&lt;span&gt; valkeyClient.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;withConnection&lt;&#x2F;span&gt;&lt;span&gt; { connection &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;in&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    try await&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; withThrowingTaskGroup&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;of&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; Void&lt;&#x2F;span&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;self&lt;&#x2F;span&gt;&lt;span&gt;) { group &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;in&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;        &#x2F;&#x2F; run LPUSH and RPUSH concurrently &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        group.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;addTask&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;            try await&lt;&#x2F;span&gt;&lt;span&gt; connection.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;lpush&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;key&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;foo1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, &lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;element&lt;&#x2F;span&gt;&lt;span&gt;: [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;bar&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;])&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        group.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;addTask&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;            try await&lt;&#x2F;span&gt;&lt;span&gt; connection.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;rpush&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;key&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;foo2&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, &lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;element&lt;&#x2F;span&gt;&lt;span&gt;: [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;baz&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;])&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h3 id=&quot;pub-sub&quot;&gt;Pub&#x2F;Sub&lt;&#x2F;h3&gt;
&lt;p&gt;Valkey can be used as a message broker using its publish&#x2F;subscribe messaging model. A subscription is a stream of messages from a channel. The easiest way to model this is with a Swift &lt;code&gt;AsyncSequence&lt;&#x2F;code&gt;. The valkey-swift subscription API provides a simple way to manage subscriptions with a single function call that automatically subscribes and unsubscribes from channels as needed. You provide it with a closure; it calls &lt;code&gt;SUBSCRIBE&lt;&#x2F;code&gt; on the channels you specified and provides an &lt;code&gt;AsyncSequence&lt;&#x2F;code&gt; of messages from those channels. When you exit the closure, the connection sends the relevant &lt;code&gt;UNSUBSCRIBE&lt;&#x2F;code&gt; commands. This avoids the common user error of forgetting to unsubscribe from a channel once it is no longer needed.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;swift&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;try await&lt;&#x2F;span&gt;&lt;span&gt; valkeyClient.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;withConnection&lt;&#x2F;span&gt;&lt;span&gt; { connection &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;in&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    try await&lt;&#x2F;span&gt;&lt;span&gt; connection.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;subscribe&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;channels&lt;&#x2F;span&gt;&lt;span&gt;: [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;channel1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;]) { subscription &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;in&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        for try await&lt;&#x2F;span&gt;&lt;span&gt; message &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;in&lt;&#x2F;span&gt;&lt;span&gt; subscription {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;            print&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;String&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;buffer&lt;&#x2F;span&gt;&lt;span&gt;: message.message))&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h3 id=&quot;valkey-cluster&quot;&gt;Valkey Cluster&lt;&#x2F;h3&gt;
&lt;p&gt;Valkey scales horizontally with a deployment called Valkey Cluster. Data is sharded across multiple Valkey servers based on the hash of the key being accessed. It also provides a level of availability, using replicas. You can continue operations even when a node fails or is unable to communicate.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey-swift includes a cluster client, &lt;code&gt;ValkeyClusterClient&lt;&#x2F;code&gt;, with support for:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Election-based cluster topology discovery and maintenance.&lt;&#x2F;li&gt;
&lt;li&gt;Command routing to the appropriate node based on key hashslots.&lt;&#x2F;li&gt;
&lt;li&gt;Handling of MOVED errors for proper cluster resharding.&lt;&#x2F;li&gt;
&lt;li&gt;Connection pooling and failover.&lt;&#x2F;li&gt;
&lt;li&gt;Circuit breaking during cluster disruptions.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;The following example shows how to create a cluster client that uses &lt;code&gt;ValkeyStaticNodeDiscovery&lt;&#x2F;code&gt; to find an initial node in the cluster. From there the client discovers the rest of the cluster topology.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;swift&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;let&lt;&#x2F;span&gt;&lt;span&gt; clusterClient &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; ValkeyClusterClient&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    clientConfiguration&lt;&#x2F;span&gt;&lt;span&gt;: clientConfiguration,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    nodeDiscovery&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; ValkeyStaticNodeDiscovery&lt;&#x2F;span&gt;&lt;span&gt;([&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        .&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;init&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;host&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;127.0.0.1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, &lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;port&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 9000&lt;&#x2F;span&gt;&lt;span&gt;, &lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;useTLS&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; true&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ]),&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    logger&lt;&#x2F;span&gt;&lt;span&gt;: logger&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;All the standard Valkey commands are available to the cluster client. The only requirement is that when a command references two or more keys, they must all belong to the same shard in the cluster.&lt;&#x2F;p&gt;
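The usual way to satisfy the same-shard requirement is with hash tags: when a key contains a non-empty section between braces, the cluster hashes only that substring when computing the key's slot, so keys sharing a tag always land together. A sketch of the slot computation from the cluster specification (shown in Python for brevity; valkey-swift performs this routing for you):

```python
def crc16_xmodem(data):
    """CRC16-CCITT (XMODEM), the checksum Valkey Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte * 256  # shift the byte into the high 8 bits
        for _ in range(8):
            high_bit = crc // 0x8000
            crc = (crc * 2) % 0x10000
            if high_bit:
                crc ^= 0x1021
    return crc

def hash_slot(key):
    """Slot for a key: hash only the {tag} substring when a non-empty one exists."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # tag must be non-empty
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing the {user:1} tag map to the same slot, so multi-key
# commands over them are legal in cluster mode.
print(hash_slot("{user:1}:a"), hash_slot("{user:1}:b"))
```

Keys like "{user:1}:a" and "{user:1}:b" therefore always share a slot, while unrelated keys are spread across the 16384 slots.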
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;swift&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;try await&lt;&#x2F;span&gt;&lt;span&gt; clusterClient.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;xread&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    milliseconds&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 10000&lt;&#x2F;span&gt;&lt;span&gt;, &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    streams&lt;&#x2F;span&gt;&lt;span&gt;: .&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;init&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;key&lt;&#x2F;span&gt;&lt;span&gt;: [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;events&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;], &lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;id&lt;&#x2F;span&gt;&lt;span&gt;: [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;0-0&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;])&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;try-it-out&quot;&gt;Try it out&lt;&#x2F;h2&gt;
&lt;p&gt;If you don&#x27;t already have Swift installed you can find install instructions on the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.swift.org&#x2F;install&#x2F;&quot;&gt;swift.org&lt;&#x2F;a&gt; site.&lt;&#x2F;p&gt;
&lt;p&gt;Start a new project...&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; mkdir try-valkey-swift&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; cd try-valkey-swift&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; swift package init&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --type&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; executable&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Creating&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; executable package: try-valkey-swift&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Creating&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; Package.swift&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Creating&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; .gitignore&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Creating&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; Sources&#x2F;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Creating&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; Sources&#x2F;main.swift&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Add the valkey-swift package to the project and executable target&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; swift package add-dependency https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-swift&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --branch&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; main&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Updating&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; package manifest at Package.swift... done.&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; swift package add-target-dependency Valkey try-valkey-swift&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --package&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey-swift&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Updating&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; package manifest at Package.swift... done.&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;If you are running on macOS, edit the generated &lt;code&gt;Package.swift&lt;&#x2F;code&gt; to add a minimum required platform&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;swift&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;let package =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; Package&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    name&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;try-valkey-swift&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    platforms&lt;&#x2F;span&gt;&lt;span&gt;: [.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;macOS&lt;&#x2F;span&gt;&lt;span&gt;(.v15)],&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    ...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Open up &lt;code&gt;Sources&#x2F;main.swift&lt;&#x2F;code&gt; and replace its contents with&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;swift&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;import&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; Logging&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;import&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; Valkey&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;let&lt;&#x2F;span&gt;&lt;span&gt; logger &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; Logger&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;label&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;Valkey&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;let&lt;&#x2F;span&gt;&lt;span&gt; valkeyClient &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; ValkeyClient&lt;&#x2F;span&gt;&lt;span&gt;(.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;hostname&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;127.0.0.1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, &lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;port&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 6379&lt;&#x2F;span&gt;&lt;span&gt;), &lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;logger&lt;&#x2F;span&gt;&lt;span&gt;: logger)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;try await&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; withThrowingTaskGroup&lt;&#x2F;span&gt;&lt;span&gt; { group &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;in&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    group.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;addTask&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;        &#x2F;&#x2F; run connection manager background process&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        await&lt;&#x2F;span&gt;&lt;span&gt; valkeyClient.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;run&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    try await&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; testValkey&lt;&#x2F;span&gt;&lt;span&gt;(valkeyClient, &lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;logger&lt;&#x2F;span&gt;&lt;span&gt;: logger)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    group.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;cancelAll&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;&#x2F;&#x2F;&#x2F; Let&#x27;s test valkey-swift.&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;func&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; testValkey&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;_&lt;&#x2F;span&gt;&lt;span&gt; valkeyClient: ValkeyClient, &lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;logger&lt;&#x2F;span&gt;&lt;span&gt;: Logger)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; async throws&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    try await&lt;&#x2F;span&gt;&lt;span&gt; valkeyClient.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;set&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;foo&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, &lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;value&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;bar&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    let&lt;&#x2F;span&gt;&lt;span&gt; foo &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;= try await&lt;&#x2F;span&gt;&lt;span&gt; valkeyClient.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;get&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;foo&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    if let&lt;&#x2F;span&gt;&lt;span&gt; foo {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        logger.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;info&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;foo = &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;\(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;String&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;buffer&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;: foo)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; else&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        logger.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;info&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;foo is empty&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The code above creates a client. Because the client needs a background task to manage its connection pool, it sets up a &lt;code&gt;TaskGroup&lt;&#x2F;code&gt; that runs the connection-pool background process concurrently with the &lt;code&gt;testValkey&lt;&#x2F;code&gt; function. The &lt;code&gt;testValkey&lt;&#x2F;code&gt; function sets the key &quot;foo&quot; to the value &quot;bar&quot; and then reads it back. If a value is returned, it is printed to the log.&lt;&#x2F;p&gt;
&lt;p&gt;To run your code, enter the following on the command line:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;swift run&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;get-involved&quot;&gt;Get Involved&lt;&#x2F;h2&gt;
&lt;p&gt;We&#x27;d love for you to try the client out and hear your feedback.&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;How does the public API work for you?&lt;&#x2F;li&gt;
&lt;li&gt;We know some features are not yet available (for example, reading from replicas and Sentinel support), but what other features do you think are needed?&lt;&#x2F;li&gt;
&lt;li&gt;Performance has been a major focus during development, but how is the client working out in your production environment?&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;h3 id=&quot;connect-with-us&quot;&gt;Connect with us&lt;&#x2F;h3&gt;
&lt;p&gt;If you want to discuss any of the above, report a bug, or contribute, please reach out to us on &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-swift&#x2F;&quot;&gt;GitHub&lt;&#x2F;a&gt; or talk to us in the &lt;code&gt;#valkey-swift&lt;&#x2F;code&gt; channel on the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;slack&quot;&gt;Valkey Slack&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Using k0rdent MultiClusterService Template for Valkey on Kubernetes</title>
        <published>2025-07-21T01:01:42+00:00</published>
        <updated>2025-07-21T01:01:42+00:00</updated>
        
        <author>
          <name>
            s3rj1k
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/valkey-on-k0rdent/"/>
        <id>https://valkey.io/blog/valkey-on-k0rdent/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/valkey-on-k0rdent/">&lt;p&gt;Managing distributed applications across multiple Kubernetes clusters can be a complex and time-consuming process. This guide demonstrates how to streamline Valkey deployment using &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;k0rdent.io&#x2F;&quot;&gt;k0rdent&#x27;s&lt;&#x2F;a&gt; &lt;code&gt;MultiClusterService&lt;&#x2F;code&gt; template, providing a practical example of modern multi-cluster application delivery.&lt;&#x2F;p&gt;
&lt;p&gt;In this tutorial, we&#x27;ll walk through deploying Valkey across Kubernetes clusters using k0rdent&#x27;s template-driven approach. By the end of this guide, you will understand how to leverage k0rdent for simplified Valkey deployment and multi-cluster application management.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;prerequisites&quot;&gt;Prerequisites&lt;&#x2F;h2&gt;
&lt;p&gt;It is assumed that you have basic knowledge of:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Valkey and its use cases&lt;&#x2F;li&gt;
&lt;li&gt;Kubernetes clusters and core concepts&lt;&#x2F;li&gt;
&lt;li&gt;Helm charts and package management&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;You will also need the following tools installed:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;docs.docker.com&#x2F;desktop&#x2F;&quot;&gt;Docker&lt;&#x2F;a&gt; (running as a daemon)&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;kind.sigs.k8s.io&#x2F;&quot;&gt;kind&lt;&#x2F;a&gt; CLI&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;kubernetes.io&#x2F;docs&#x2F;tasks&#x2F;tools&#x2F;&quot;&gt;kubectl&lt;&#x2F;a&gt; CLI&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;helm.sh&#x2F;&quot;&gt;helm&lt;&#x2F;a&gt; CLI&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;the-k0-family&quot;&gt;The k0* Family&lt;&#x2F;h2&gt;
&lt;p&gt;k0rdent is part of the k0* family of tools:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;k0sproject.io&#x2F;&quot;&gt;k0s&lt;&#x2F;a&gt;&lt;&#x2F;strong&gt;: Zero Friction Kubernetes Distribution&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;k0smotron.io&#x2F;&quot;&gt;k0smotron&lt;&#x2F;a&gt;&lt;&#x2F;strong&gt;: k0s specific CAPI providers&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;k0rdent.io&#x2F;&quot;&gt;k0rdent&lt;&#x2F;a&gt;&lt;&#x2F;strong&gt;: Multi-cluster management platform&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;what-is-k0rdent&quot;&gt;What is k0rdent?&lt;&#x2F;h2&gt;
&lt;p&gt;k0rdent is a Kubernetes-native, distributed container management platform that simplifies and automates the deployment, scaling, and lifecycle management of Kubernetes clusters across multi-cloud and hybrid environments by using a template-driven approach. You can think of it as a super control plane for multiple child clusters that are controlled by different CAPI providers across multi-cloud environments.&lt;&#x2F;p&gt;
&lt;p&gt;All providers (infrastructure, cluster) are packaged as Helm templates and exposed to the consumer via an entry point object called &lt;code&gt;ClusterDeployment&lt;&#x2F;code&gt;. The &lt;code&gt;ClusterDeployment&lt;&#x2F;code&gt; object is what the consumer uses to declaratively define a new child cluster, and combined with credentials-related objects, this provides the consumer with a managed Kubernetes cluster on any platform that has existing CAPI providers.&lt;&#x2F;p&gt;
&lt;p&gt;Check out this &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.cncf.io&#x2F;blog&#x2F;2025&#x2F;02&#x2F;24&#x2F;introducing-k0rdent-design-deploy-and-manage-kubernetes-based-idps&#x2F;&quot;&gt;CNCF blog post&lt;&#x2F;a&gt; for additional information.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;service-templates-and-application-delivery&quot;&gt;Service Templates and Application Delivery&lt;&#x2F;h2&gt;
&lt;p&gt;For any child cluster under k0rdent management, the consumer controls application delivery via service template objects. This makes it possible to install applications into child clusters while keeping everything controlled from the super control plane (the management cluster) where k0rdent itself runs.&lt;&#x2F;p&gt;
&lt;p&gt;The k0rdent project maintains a public repository called the &quot;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;catalog.k0rdent.io&#x2F;latest&#x2F;&quot;&gt;Catalog&lt;&#x2F;a&gt;&quot; where you can find pre-built application service templates. While templates can be created locally, and there is no hard requirement to use the catalog, we&#x27;ll use the catalog for a more streamlined experience with Valkey delivery to child clusters. You can find the Valkey template in the catalog at &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;catalog.k0rdent.io&#x2F;latest&#x2F;apps&#x2F;valkey&quot;&gt;https:&#x2F;&#x2F;catalog.k0rdent.io&#x2F;latest&#x2F;apps&#x2F;valkey&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
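&lt;p&gt;To give a sense of what consuming such a template looks like, a &lt;code&gt;MultiClusterService&lt;&#x2F;code&gt; object pairs a cluster selector with a list of service templates to deploy into every matching child cluster. The sketch below is illustrative only; the exact field names, labels, and template version depend on your k0rdent release and the catalog entry you install:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;yaml&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;apiVersion: k0rdent.mirantis.com&#x2F;v1beta1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;kind: MultiClusterService&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;metadata:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  name: valkey&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;spec:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  clusterSelector:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    matchLabels:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;      app: valkey&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  serviceSpec:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    services:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    - template: valkey-0-1-0&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;      name: valkey&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;      namespace: valkey&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Any child cluster whose labels match the selector receives the templated Valkey installation automatically.&lt;&#x2F;p&gt;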
&lt;h2 id=&quot;demo-setup-overview&quot;&gt;Demo Setup Overview&lt;&#x2F;h2&gt;
&lt;p&gt;In this practical demonstration, we&#x27;ll:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Use Kind for the management cluster&lt;&#x2F;li&gt;
&lt;li&gt;Deploy to a child cluster using Cluster API Provider Docker (CAPD)&lt;&#x2F;li&gt;
&lt;li&gt;Use &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;hyperspike&#x2F;valkey-operator&quot;&gt;Hyperspike&#x27;s Valkey Operator&lt;&#x2F;a&gt; to manage Valkey instances&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;While we use Docker and Kind for simplicity, k0rdent supports any CAPI provider and can run on any Kubernetes distribution for production deployments.&lt;&#x2F;p&gt;
&lt;p&gt;There is no better way of getting to know something than by doing it, so I encourage you to follow along with the steps if possible.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;setting-up-the-management-cluster&quot;&gt;Setting Up the Management Cluster&lt;&#x2F;h2&gt;
&lt;p&gt;Let&#x27;s start by creating a new Kind cluster with a mounted Docker socket:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;cat&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;lt;&amp;lt;&lt;&#x2F;span&gt;&lt;span&gt; &amp;#39;EOF&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; |&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; kind&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; create cluster&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --name&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kind&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --config=-&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;kind: Cluster&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;apiVersion: kind.x-k8s.io&#x2F;v1alpha4&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;nodes:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;- role: control-plane&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  extraMounts:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  - hostPath: &#x2F;var&#x2F;run&#x2F;docker.sock&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    containerPath: &#x2F;var&#x2F;run&#x2F;docker.sock&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    readOnly: false&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;EOF&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;After the Kind CLI has worked its magic, let&#x27;s install k0rdent into our new cluster:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;helm&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; install kcm oci:&#x2F;&#x2F;ghcr.io&#x2F;k0rdent&#x2F;kcm&#x2F;charts&#x2F;kcm&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --version 1.0.0 -n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kcm-system&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --create-namespace&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; wait&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --for=condition=Ready=True&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; management&#x2F;kcm&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --timeout=9000s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;installing-the-valkey-service-template&quot;&gt;Installing the Valkey Service Template&lt;&#x2F;h2&gt;
&lt;p&gt;Now we need to install the Valkey service template like this:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;helm&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; install valkey oci:&#x2F;&#x2F;ghcr.io&#x2F;k0rdent&#x2F;catalog&#x2F;charts&#x2F;valkey-service-template&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --version 0.1.0 -n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kcm-system&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; wait&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --for=jsonpath=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;#39;{.status.valid}&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;=true&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; servicetemplate&#x2F;valkey-0-1-0&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kcm-system&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --timeout=600s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;setting-up-credentials&quot;&gt;Setting Up Credentials&lt;&#x2F;h2&gt;
&lt;p&gt;Let&#x27;s now create a group of credentials-related objects that enable the CAPD provider to work:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; apply&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -f&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; -&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;lt;&amp;lt;&lt;&#x2F;span&gt;&lt;span&gt;EOF&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;---&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;apiVersion: v1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;kind: Secret&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;metadata:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  name: docker-cluster-secret&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  namespace: kcm-system&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  labels:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    k0rdent.mirantis.com&#x2F;component: &amp;quot;kcm&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;type: Opaque&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;---&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;apiVersion: k0rdent.mirantis.com&#x2F;v1beta1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;kind: Credential&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;metadata:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  name: docker-stub-credential&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  namespace: kcm-system&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;spec:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  description: Docker Credentials&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  identityRef:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    apiVersion: v1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    kind: Secret&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    name: docker-cluster-secret&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    namespace: kcm-system&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;---&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;apiVersion: v1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;kind: ConfigMap&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;metadata:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  name: docker-cluster-credential-resource-template&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  namespace: kcm-system&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  labels:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    k0rdent.mirantis.com&#x2F;component: &amp;quot;kcm&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  annotations:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    projectsveltos.io&#x2F;template: &amp;quot;true&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;EOF&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;creating-the-child-cluster&quot;&gt;Creating the Child Cluster&lt;&#x2F;h2&gt;
&lt;p&gt;Now we are finally ready to create our new child cluster!&lt;&#x2F;p&gt;
&lt;p&gt;Let&#x27;s do that like this:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; apply&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -f&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; -&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;lt;&amp;lt;&lt;&#x2F;span&gt;&lt;span&gt;EOF&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;---&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;apiVersion: k0rdent.mirantis.com&#x2F;v1beta1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;kind: ClusterDeployment&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;metadata:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  name: docker-hosted-cp&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  namespace: kcm-system&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;spec:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  template: docker-hosted-cp-1-0-0&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  credential: docker-stub-credential&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  config:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    clusterLabels: {}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    clusterAnnotations: {}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;EOF&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Note how we use &lt;code&gt;docker-hosted-cp-1-0-0&lt;&#x2F;code&gt; as the template for the new child cluster; this gives us a CAPD-based child cluster in &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;docs.k0rdent.io&#x2F;head&#x2F;admin&#x2F;hosted-control-plane&#x2F;&quot;&gt;Hosted Control-Plane&lt;&#x2F;a&gt; mode.&lt;&#x2F;p&gt;
&lt;p&gt;Now we wait for the child cluster to be &lt;code&gt;Ready&lt;&#x2F;code&gt;:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; wait&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --for=condition=Ready&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; clusterdeployment&#x2F;docker-hosted-cp&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kcm-system&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --timeout=600s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; wait&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --for=jsonpath=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;#39;{.status.phase}&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;=Provisioned&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; cluster&#x2F;docker-hosted-cp&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kcm-system&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --timeout=600s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; wait&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --for=condition=Ready&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; dockercluster&#x2F;docker-hosted-cp&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kcm-system&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --timeout=600s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; wait&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --for=jsonpath=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;#39;{.status.ready}&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;=true&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; k0smotroncontrolplane&#x2F;docker-hosted-cp-cp&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kcm-system&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --timeout=600s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;verifying-the-child-cluster&quot;&gt;Verifying the Child Cluster&lt;&#x2F;h2&gt;
&lt;p&gt;Let&#x27;s get the child cluster &lt;code&gt;kubeconfig&lt;&#x2F;code&gt; out and check if the cluster itself looks good:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kcm-system get secret docker-hosted-cp-kubeconfig&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -o&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; jsonpath=&amp;#39;{.data.value}&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; |&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; base64&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -d&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; docker-hosted-cp.kubeconfig&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;KUBECONFIG&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;docker-hosted-cp.kubeconfig&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; get pods&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -A&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;We now have almost everything set up for actual Valkey application delivery; what remains is a storage provider inside the child cluster. Let&#x27;s use &lt;code&gt;local-path-provisioner&lt;&#x2F;code&gt; for simplicity:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;KUBECONFIG&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;docker-hosted-cp.kubeconfig&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; apply&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -f&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; https:&#x2F;&#x2F;raw.githubusercontent.com&#x2F;rancher&#x2F;local-path-provisioner&#x2F;v0.0.31&#x2F;deploy&#x2F;local-path-storage.yaml&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;KUBECONFIG&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;docker-hosted-cp.kubeconfig&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; patch storageclass local-path&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -p&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;#39;{&amp;quot;metadata&amp;quot;: {&amp;quot;annotations&amp;quot;:{&amp;quot;storageclass.kubernetes.io&#x2F;is-default-class&amp;quot;:&amp;quot;true&amp;quot;}}}&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;&#x2F;strong&gt; We should wait until all Pods in the child cluster are &lt;code&gt;Ready&lt;&#x2F;code&gt;. Let&#x27;s do that interactively; feel free to exit once they are:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;watch&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; KUBECONFIG=&amp;quot;docker-hosted-cp.kubeconfig&amp;quot; kubectl get pods&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -A&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;deploying-valkey-using-multiclusterservice&quot;&gt;Deploying Valkey Using &lt;code&gt;MultiClusterService&lt;&#x2F;code&gt;&lt;&#x2F;h2&gt;
&lt;p&gt;Whew, that was a lot of YAML, but we are finally here, and we can now see how k0rdent simplifies deploying Valkey into the child cluster!&lt;&#x2F;p&gt;
&lt;p&gt;Let&#x27;s first add the label &quot;group=demo&quot; to our new child cluster in the management cluster, where k0rdent is running:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; label cluster docker-hosted-cp group=demo&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kcm-system&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This label is needed because we will be using a &lt;code&gt;MultiClusterService&lt;&#x2F;code&gt; object, which can reference multiple child clusters for service&#x2F;application delivery. In our case, we will target our Docker-based cluster; still, keep in mind that we are not restricted in which clusters we deliver new services to: it can be a single child cluster or a group of them.&lt;&#x2F;p&gt;
&lt;p&gt;Ok, let&#x27;s do this!&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; apply&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -f&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; -&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;lt;&amp;lt;&lt;&#x2F;span&gt;&lt;span&gt;EOF&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;apiVersion: k0rdent.mirantis.com&#x2F;v1alpha1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;kind: MultiClusterService&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;metadata:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  name: valkey&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;spec:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  clusterSelector:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    matchLabels:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;      group: demo&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  serviceSpec:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    services:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    - template: valkey-0-1-0&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;      name: valkey&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;      namespace: valkey-system&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;      values: |&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;        valkey:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;          spec:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;            tls: false # when enabled, needs CertManager (and some configs) inside child-cluster&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;EOF&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;In our case, the &lt;code&gt;values.valkey.spec&lt;&#x2F;code&gt; block exposed inside the template contains Valkey Operator Helm Chart values.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;verifying-the-deployment&quot;&gt;Verifying the Deployment&lt;&#x2F;h2&gt;
&lt;p&gt;Let&#x27;s check the object status; we should see something similar to the example output:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; get MultiClusterService&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -A&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Expected output:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;NAME     SERVICES   CLUSTERS   AGE&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;valkey   1&#x2F;1        1&#x2F;1        23s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Now, let&#x27;s check how things look inside the child cluster:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;KUBECONFIG&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;docker-hosted-cp.kubeconfig&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; kubectl&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; get pods&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -A&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Expected output:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;NAMESPACE              NAME                                                READY   STATUS    RESTARTS   AGE&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;kube-system            coredns-5555f45c94-bf9mb                            1&#x2F;1     Running   0          23m&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;kube-system            konnectivity-agent-tfsr8                            1&#x2F;1     Running   0          21m&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;kube-system            kube-proxy-thx5h                                    1&#x2F;1     Running   0          21m&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;kube-system            kube-router-6b7s8                                   1&#x2F;1     Running   0          21m&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;kube-system            metrics-server-7778865875-s9hsz                     1&#x2F;1     Running   0          23m&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;local-path-storage     local-path-provisioner-74f9666bc9-5xqlf             1&#x2F;1     Running   0          16m&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;projectsveltos         sveltos-agent-manager-79df48c686-8l6dk              1&#x2F;1     Running   0          23m&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;valkey-system          valkey-0                                            1&#x2F;1     Running   0          64s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;valkey-system          valkey-operator-controller-manager-6dc5d6bf57-rbt9x 1&#x2F;1     Running   0          78s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;It might look like pure magic at first, but what you saw was how k0rdent simplifies application delivery.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;Feel free to play around with the Valkey Operator by leveraging the &lt;code&gt;MultiClusterService&lt;&#x2F;code&gt; object together with additional Helm Chart values. When you are finished, cleaning up the environment only requires deleting the Kind cluster.&lt;&#x2F;p&gt;
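&lt;p&gt;As a quick sketch, assuming the management cluster was created under kind&#x27;s default cluster name, cleanup can be as simple as (substitute the name you used, if any):&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;kind delete cluster&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;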
&lt;p&gt;Want to explore more? Head over to the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;docs.k0rdent.io&#x2F;latest&#x2F;&quot;&gt;k0rdent docs&lt;&#x2F;a&gt;, and join our &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;cloud-native.slack.com&#x2F;archives&#x2F;C08A63Q4NCD&quot;&gt;Slack community&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;This is all for today, thanks for spending this time with me!&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Save the Date August 28, 2025: Keyspace</title>
        <published>2025-07-10T01:01:01+00:00</published>
        <updated>2025-07-10T01:01:01+00:00</updated>
        
        <author>
          <name>
            kyledvs
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/keyspace-save-the-date/"/>
        <id>https://valkey.io/blog/keyspace-save-the-date/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/keyspace-save-the-date/">&lt;p&gt;Join the Valkey crew on August 28, 2025 in Amsterdam for the first ever Valkey conference: Keyspace!&lt;&#x2F;p&gt;
&lt;p&gt;Keyspace is the event where developers, SREs, and DevOps pros gather to share techniques, best practices, and new uses for Valkey.
You’ll meet and network with the project maintainers, community enthusiasts, and thought leaders in a focused one-day event.
The event will give insights into the project through general sessions, breakout rooms, lightning talks, and a workshop, in two distinct orbits (aka &quot;tracks&quot; if you want to be boring).&lt;&#x2F;p&gt;
&lt;p&gt;Breakfast, lunch, happy hour, and exclusive Valkey swag are included.&lt;&#x2F;p&gt;
&lt;p&gt;Keyspace is being held in conjunction with the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;events.linuxfoundation.org&#x2F;open-source-summit-europe&#x2F;&quot;&gt;Linux Foundation Open Source Summit Europe&lt;&#x2F;a&gt; at the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.rai.nl&#x2F;en&quot;&gt;RAI Amsterdam Convention Centre&lt;&#x2F;a&gt;.
Tickets for Open Source Summit are not required to attend Keyspace.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Want to share &lt;em&gt;your&lt;&#x2F;em&gt; Valkey story?&lt;&#x2F;strong&gt; Date extended! &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;sessionize.com&#x2F;keyspace&#x2F;&quot;&gt;Submit your talk by July &lt;del&gt;24&lt;&#x2F;del&gt; 28&lt;&#x2F;a&gt;!&lt;&#x2F;p&gt;
&lt;p&gt;Tickets available soon.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Introducing Vector Search To Valkey</title>
        <published>2025-07-07T01:01:01+00:00</published>
        <updated>2025-07-07T01:01:01+00:00</updated>
        
        <author>
          <name>
            yairgott
          </name>
        </author>
        
        <author>
          <name>
            allenss
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/introducing-valkey-search/"/>
        <id>https://valkey.io/blog/introducing-valkey-search/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/introducing-valkey-search/">&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-search&quot;&gt;Valkey-search&lt;&#x2F;a&gt; is an official Valkey module that introduces native vector similarity search capabilities. It allows you to efficiently create indexes and search through billions of vectors stored in your Valkey instances. Whether you&#x27;re building semantic search, fraud detection systems, or conversational AI experiences, Valkey-Search provides a flexible and high-performance foundation for your application. It is compatible with Valkey versions 8.1.1 and above and is BSD-3-Clause licensed.&lt;&#x2F;p&gt;
&lt;p&gt;In this blog, you&#x27;ll learn how valkey-search works, explore key use cases it supports, understand its architecture and indexing model, and see how to integrate it into your own applications. You&#x27;ll also gain insight into how it scales, ensures high availability, and supports hybrid queries that combine vector similarity with structured filtering.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;semantic-search&quot;&gt;Semantic Search&lt;&#x2F;h2&gt;
&lt;p&gt;The ability of AI models to extract semantic meaning enables new classes of search algorithms, collectively known as semantic search. An AI model can process an input and convert it into a single high-dimensional numeric vector – known as an embedding. Inputs with similar meaning will have similar embeddings. Semantic search is the process of converting a query into its embedding and searching a database of embeddings to find the most similar results.&lt;&#x2F;p&gt;
&lt;p&gt;The semantic search process can be divided into three phases:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Encode&lt;&#x2F;strong&gt;: First, convert your input data into searchable units known as chunks. Chunking strategies are data-type specific; for example, one well-known chunking algorithm for text is to make each sentence a chunk. Then use an AI model to generate an embedding for each chunk. The specific AI model used for encoding depends on the data type and the use case. Many AI models are available as a service, such as Google Cloud’s &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;cloud.google.com&#x2F;vertex-ai&quot;&gt;VertexAI&lt;&#x2F;a&gt; or AWS’ &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;aws.amazon.com&#x2F;bedrock&#x2F;&quot;&gt;Bedrock&lt;&#x2F;a&gt;, simplifying the embedding generation process.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Index&lt;&#x2F;strong&gt;: Store the generated embeddings along with any associated metadata in valkey-search. Each stored item is indexed with a primary key and a set of attributes spanning multiple modalities, for example, tags (categories), numbers (pricing), etc.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Search&lt;&#x2F;strong&gt;: The query is converted into an embedding using the same AI model that was used in the &lt;strong&gt;Encode&lt;&#x2F;strong&gt; step above. This embedding is used with the vector search capability of valkey-search to locate the most similar vectors. The located vectors correspond to the chunks of the original input that have the most similar meaning to the query.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;introducing-valkey-search&#x2F;images&#x2F;highlevel-flow.png&quot; alt=&quot;Semantic search phases&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;use-cases-where-valkey-search-shines&quot;&gt;Use-Cases Where valkey-search Shines&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey-search’s ability to search billions of vectors with millisecond latencies makes it ideal for real-time applications such as:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Personalized Recommendations&lt;&#x2F;strong&gt; – Deliver instant, highly relevant recommendations based on real-time user interactions.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Fraud Detection &amp;amp; Security&lt;&#x2F;strong&gt; – Identify anomalies and suspicious activity with ultra-fast similarity matching.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Conversational AI &amp;amp; Chatbots&lt;&#x2F;strong&gt; – Enhance response accuracy and relevance by leveraging rapid vector-based retrieval.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Image &amp;amp; Video Search&lt;&#x2F;strong&gt; – Enable multimedia search through real-time similarity detection.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;GenAI &amp;amp; Semantic Search&lt;&#x2F;strong&gt; – Power advanced AI applications with efficient vector retrieval for natural language understanding.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;valkey-search-architecture-overview&quot;&gt;Valkey-Search Architecture Overview&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey-search allows the creation of any number of named indexes. A valkey-search index can be thought of as a relational database table where a row is a Valkey key and a column is an attribute (field) within that key. Each index is defined to cover a portion of the Valkey keyspace and a list of attributes within those keys. Any mutation of a key within the scope of an index synchronously updates that index before that mutation command is acknowledged. Query operations can be performed on a single index, returning the located key names and optionally their contents.&lt;&#x2F;p&gt;
&lt;p&gt;Indexes can be constructed over hash or JSON keys. For hash keys, the indexable attributes of the keys are just the hash fields. For JSON keys, the indexable attributes are identified using the JSON path notation. Regardless of key type, an index can have any number of attributes. Each attribute of an index is declared with a type and sometimes type-specific modifiers. Currently, three index attribute types are supported: Vector, Numeric and Tag.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;index-creation&quot;&gt;Index Creation&lt;&#x2F;h3&gt;
&lt;p&gt;The &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;ft.create&#x2F;&quot;&gt;FT.CREATE&lt;&#x2F;a&gt; command is used to create a new, empty index. This causes the system to automatically initiate an asynchronous background process, called backfilling, that scans the keyspace for keys that belong in the index. The backfill runs until the entire keyspace has been scanned. The &lt;code&gt;backfill_in_progress&lt;&#x2F;code&gt; and &lt;code&gt;backfill_complete_percent&lt;&#x2F;code&gt; fields of the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;ft.info&#x2F;&quot;&gt;FT.INFO&lt;&#x2F;a&gt; command result can be used to monitor its progress. Once the backfill is complete, it need not be executed against the index again.&lt;&#x2F;p&gt;
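&lt;p&gt;As a minimal sketch of index creation (the index name, key prefix, and attribute definitions below are illustrative; see the &lt;code&gt;FT.CREATE&lt;&#x2F;code&gt; documentation for the authoritative syntax), an index over hash keys with a vector, a tag, and a numeric attribute might be created like this:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;valkey-cli FT.CREATE products ON HASH PREFIX 1 product: SCHEMA embedding VECTOR HNSW 6 TYPE FLOAT32 DIM 128 DISTANCE_METRIC COSINE category TAG price NUMERIC&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;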
&lt;p&gt;During the backfill process, query and ingestion operations proceed normally. Queries will be executed against the current state of the index. Newly mutated data will be placed into the index as usual and remain unaffected by the backfill process. Thus after the creation of an index – while it is backfilling – the application can be certain that queries will contain the results of all data that has been ingested &lt;em&gt;after&lt;&#x2F;em&gt; the creation of the index and &lt;em&gt;some&lt;&#x2F;em&gt; of the data ingested &lt;em&gt;before&lt;&#x2F;em&gt;. Once the backfill has completed then queries will consider all data covered by the index.&lt;&#x2F;p&gt;
&lt;p&gt;In cluster mode, the &lt;code&gt;FT.CREATE&lt;&#x2F;code&gt; command can be sent to any primary node of the cluster and the system will automatically distribute the new index definition to all cluster members. The distribution is done using a combination of the cluster bus and direct gRPC communication between nodes. In the rare case where the distribution machinery detects an inconsistency between nodes, a last-writer-wins (LWW) collision resolution protocol is invoked to enable eventual cluster-wide consistency.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;vector-search-algorithms&quot;&gt;Vector Search Algorithms&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey-search offers K Nearest Neighbor (KNN) search. Applications supply a reference vector and request that the system locate the K vectors which are closest to the reference vector using a specified distance function (L2, IP or cosine). KNN searching is a classic problem that lacks an efficient exact solution. Valkey-search addresses this problem by providing two different algorithms that the developer can select from:&lt;&#x2F;p&gt;
&lt;p&gt;The first algorithm performs an exhaustive linear search, providing exactly correct answers but with a run-time that may be intolerable on large data sets.&lt;&#x2F;p&gt;
&lt;p&gt;The second algorithm addresses this problem by compromising on accuracy in exchange for shorter run-times. In other words, it runs very fast but may not deliver exactly the correct answer. This type of algorithm is often known as Approximate Nearest Neighbor (ANN). The term &quot;recall&quot; is used to measure the quality of an ANN algorithm result and is expressed as the ratio (or percentage) of the found answers to the correct answers, e.g., a recall of 0.95 (or 95%) means that for a search with K = 100, 95 of the correct answers were found. There is a tradeoff between the recall of an ANN algorithm and its time&#x2F;space resource consumption.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey-search supports the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Hierarchical_navigable_small_world&quot;&gt;Hierarchical Navigable Small Worlds&lt;&#x2F;a&gt; (HNSW) ANN algorithm because it provides the best performance at the highest levels of recall demanded by real-time applications. The HNSW algorithm has &lt;code&gt;O(log N)&lt;&#x2F;code&gt; time complexity and gives the developer three parameters to trade CPU and memory consumption against recall. The relationship between these parameters and the resulting operation latency and recall is complex and data dependent. It is recommended that developers test with data that closely approximates production environments.&lt;&#x2F;p&gt;
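&lt;p&gt;As a rough sketch of a pure KNN query (the index and attribute names are illustrative; &lt;code&gt;$query_vec&lt;&#x2F;code&gt; must be bound via &lt;code&gt;PARAMS&lt;&#x2F;code&gt; to the raw binary blob of the query vector, matching the declared type and dimension, shown here as &lt;code&gt;...&lt;&#x2F;code&gt;):&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;valkey-cli FT.SEARCH products &amp;quot;*=&amp;gt;[KNN 10 @embedding $query_vec]&amp;quot; PARAMS 2 query_vec &amp;quot;...&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;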
&lt;h2 id=&quot;hybrid-query-support&quot;&gt;Hybrid Query Support&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey-search query operations are not limited to just vector searching. Documents can contain additional metadata that can be used to enhance searches. Two types of metadata are currently supported: Numeric and Tag. Numeric metadata supports range queries, i.e., you can include or exclude documents with metadata in the particular range. Tag metadata is an enumerated list of words. Tag searches can be done with an exact word match or a prefix match (trailing wild card).&lt;&#x2F;p&gt;
&lt;p&gt;Hybrid queries are vector query operations which have been augmented with a filter, constructed from the numeric and tag searching operators combined with the usual logical operators &lt;code&gt;AND&lt;&#x2F;code&gt;, &lt;code&gt;OR&lt;&#x2F;code&gt;, and &lt;code&gt;NOT&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;Hybrid queries are particularly powerful for real-world applications, where a mix of vector and non-vector attributes defines the relevance of results. For example, a numeric field could be used as a timestamp, meaning that search operations could be automatically confined to a particular period of time. Another example would be to use a tag field to indicate a language.&lt;&#x2F;p&gt;
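&lt;p&gt;Putting this together, a hybrid query that confines a KNN search to one tag value and a numeric range might look roughly like this (again, the index and attribute names are illustrative, and &lt;code&gt;...&lt;&#x2F;code&gt; stands in for the binary query vector):&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;valkey-cli FT.SEARCH products &amp;quot;(@category:{electronics} @price:[100 500])=&amp;gt;[KNN 10 @embedding $query_vec]&amp;quot; PARAMS 2 query_vec &amp;quot;...&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;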
&lt;h3 id=&quot;query-execution&quot;&gt;Query Execution&lt;&#x2F;h3&gt;
&lt;p&gt;When executing hybrid queries, valkey-search automatically selects from one of two strategies as part of the query execution planning phase. This is done by breaking down the query filter into predicates and estimating the selectivity of each predicate to estimate the least expensive execution strategy.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Pre-filtering:&lt;&#x2F;strong&gt; This algorithm has two sequential steps: first, a temporary dataset is created using only the documents which pass the filter expression, then a linear KNN similarity search is performed on the temporary dataset. Pre-filtering is particularly effective for highly selective queries, i.e., where the filter significantly narrows down the dataset.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Inline-filtering:&lt;&#x2F;strong&gt; In this algorithm, filtering is done &lt;em&gt;during&lt;&#x2F;em&gt; the vector search itself. As the HNSW search graph is traversed, each candidate document is tested against the filter criteria before being added to the result set. This method is best suited for cases where the filter isn’t highly selective, i.e., where the filter matches a large portion of the dataset.&lt;&#x2F;p&gt;
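&lt;p&gt;The difference between the two strategies can be sketched in a few lines of Python. This is an illustrative toy, not the valkey-search implementation: a brute-force scan stands in for the HNSW graph traversal, so both strategies return identical results here.&lt;&#x2F;p&gt;

```python
# Toy contrast of the two hybrid-query strategies described above.
# NOT the valkey-search implementation: a brute-force scan stands in
# for the real HNSW graph traversal.
import math

def distance(a, b):
    # Euclidean (L2) distance between two equal-length vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pre_filter_knn(docs, query, predicate, k):
    # Step 1: build a temporary dataset of documents passing the filter.
    survivors = [d for d in docs if predicate(d)]
    # Step 2: linear KNN similarity search over the temporary dataset.
    return sorted(survivors, key=lambda d: distance(d["vec"], query))[:k]

def inline_filter_knn(docs, query, predicate, k):
    # Visit candidates in order of similarity, testing the filter on each
    # candidate before admitting it to the result set.
    results = []
    for d in sorted(docs, key=lambda d: distance(d["vec"], query)):
        if predicate(d):
            results.append(d)
        if len(results) == k:
            break
    return results

docs = [
    {"name": "a", "lang": "en", "vec": (0.0, 0.0)},
    {"name": "b", "lang": "fr", "vec": (0.1, 0.0)},
    {"name": "c", "lang": "en", "vec": (0.2, 0.0)},
    {"name": "d", "lang": "en", "vec": (1.0, 1.0)},
]

def is_en(doc):
    # Tag-style predicate: keep only English documents.
    return doc["lang"] == "en"

print([d["name"] for d in pre_filter_knn(docs, (0.0, 0.0), is_en, 2)])     # prints ['a', 'c']
print([d["name"] for d in inline_filter_knn(docs, (0.0, 0.0), is_en, 2)])  # prints ['a', 'c']
```

&lt;p&gt;On an exhaustive scan the two strategies agree; in the real engine the difference is cost: pre-filtering pays to materialize the filtered set up front, while inline-filtering pays to visit and discard non-matching candidates during graph traversal.&lt;&#x2F;p&gt;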
&lt;h2 id=&quot;high-availability&quot;&gt;High Availability&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey-search is built on top of Valkey, leveraging its primary&#x2F;replica-based architecture to provide high availability (HA). Diagram #2 shows a typical configuration of one primary and three replicas. In the event of a primary node failure, the system automatically promotes one of the replicas to become the new primary.&lt;&#x2F;p&gt;
&lt;p&gt;Clients must send data mutation (write) commands to the primary node, where they are executed and then automatically and asynchronously transmitted to each replica. Clients can send read operations to any node in the cluster, recognizing that reading from a replica is eventually consistent.&lt;&#x2F;p&gt;
&lt;p&gt;When valkey-search is used, each node, whether a primary or a replica, builds and maintains its own indexes. No additional traffic on the replication channel is generated for index maintenance. Search query operations sent to a replica will be executed against its indexes, reflecting the historical point in time of the data within that node.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;introducing-valkey-search&#x2F;images&#x2F;ha.png&quot; alt=&quot;High availability&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;scaling&quot;&gt;Scaling&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey-search supports all three methods of scaling: horizontal scaling, vertical scaling, and read replicas. When scaling for capacity, valkey-search behaves just like regular Valkey, i.e., increasing the memory of individual nodes (vertical scaling) or increasing the number of nodes (horizontal scaling) increases the overall capacity.&lt;&#x2F;p&gt;
&lt;p&gt;However, from a performance perspective, valkey-search behaves very differently from regular Valkey. Its multi-threaded implementation means that additional CPUs yield up to linear increases in both query and ingestion throughput. Horizontal scaling yields linear increases in ingestion throughput but may provide little to no benefit to query throughput. If additional query throughput is needed, scale with replicas or additional CPUs.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;introducing-valkey-search&#x2F;images&#x2F;cluster.png&quot; alt=&quot;Scaling&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;snapshots&quot;&gt;Snapshots&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey’s point-in-time RDB snapshotting mechanism is a key component for maintaining high availability, zero downtime, and minimal operational overhead. Beyond persistence and loading from disk snapshots, RDB plays a crucial role in full-sync operations, where a primary node synchronizes its in-memory data with a replica node in an HA setup. Full-sync is commonly triggered when a new replica joins the cluster but may also occur if a replica falls too far behind due to prolonged lag.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey-search enhances the Valkey snapshotting mechanism to include index definitions and vector indexes and is built for resilience and efficiency:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Fast Turn-up:&lt;&#x2F;strong&gt; New nodes must become fully operational in minimal time. While rebuilding an index on startup is an option, ingesting a large volume of vectors can be prohibitively slow, delaying system readiness. A snapshot captures not only the index metadata but also the vector index content which significantly reduces downtime and operational burden.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Backward&#x2F;Forward Compatibility:&lt;&#x2F;strong&gt; Cluster upgrades or downgrades often result in a temporary mix of node versions. Valkey-search seamlessly handles such scenarios with a serialization format based on &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Protocol_Buffers&quot;&gt;Protocol Buffers (Protobuf)&lt;&#x2F;a&gt;, ensuring both backward and forward compatibility. This simplifies version transitions and reduces the chance of costly re-indexing.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;redisearch-api-compatibility&quot;&gt;RediSearch API Compatibility&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey-search implements a subset of RediSearch’s functionality, with compatibility across key vector search APIs. This enables integration with most existing RediSearch client libraries, allowing you to continue using familiar tools. Developers already experienced with RediSearch can adopt valkey-search with minimal changes, as the vector search API remains largely consistent.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;performance-low-latency&quot;&gt;Performance &amp;amp; Low Latency&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey-search was designed from the ground up as an in-memory secondary index. A multi-threaded architecture optimizes query and mutation processing with minimal thread contention, enabling near-linear vertical scalability.&lt;&#x2F;p&gt;
&lt;p&gt;At its core, valkey-search’s threading architecture follows a common design pattern: a worker thread pool combined with task queues. It employs advanced synchronization mechanisms to maintain index consistency while minimizing contention among worker threads. By time-slicing CPU access between read and write operations, the system minimizes locks on the read path, delivering high performance and consistently low search latency.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey-search’s HNSW implementation is based on the OSS project &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;nmslib&#x2F;hnswlib&quot;&gt;HNSWLib&lt;&#x2F;a&gt;. While HNSWLib is well-regarded for its speed, we have enhanced its performance and efficiency for our use case. These improvements include better SIMD utilization, improved CPU cache efficiency, better memory utilization, and more.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;future-enhancements&quot;&gt;Future Enhancements&lt;&#x2F;h2&gt;
&lt;p&gt;This first release of valkey-search focuses on vector search, but it is designed as a general-purpose indexing subsystem. Future releases will extend the available data types and provide post-query data processing facilities.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;get-involved&quot;&gt;Get Involved&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey-search is open source and ready for you to explore. Whether you&#x27;re building cutting-edge AI applications or integrating vector search into existing systems, we invite you to try it out. The easiest way to get started is by visiting the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-search&quot;&gt;GitHub repository&lt;&#x2F;a&gt;, where you&#x27;ll find setup instructions, documentation, and examples.&lt;&#x2F;p&gt;
&lt;p&gt;We welcome contributions of all kinds - code, documentation, testing, and feedback. Join the community, file issues, open pull requests, or suggest improvements. Your involvement helps make valkey-search better for everyone.&lt;&#x2F;p&gt;
&lt;p&gt;Ready to dive in? Clone the repo, fire up the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;hub.docker.com&#x2F;r&#x2F;valkey&#x2F;valkey-bundle&quot;&gt;dev container&lt;&#x2F;a&gt;, and start building high-performance vector search with valkey-search.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Introducing Enhanced JSON Capabilities in Valkey</title>
        <published>2025-06-30T01:01:01+00:00</published>
        <updated>2025-06-30T01:01:01+00:00</updated>
        
        <author>
          <name>
            roshkhatri
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/introducing-enhanced-json-capabilities-in-valkey/"/>
        <id>https://valkey.io/blog/introducing-enhanced-json-capabilities-in-valkey/</id>
        
<content type="html" xml:base="https://valkey.io/blog/introducing-enhanced-json-capabilities-in-valkey/">&lt;p&gt;JSON is a ubiquitous format for semi-structured data, and developers often expect native support across their technology stack, including in-memory stores like Valkey. Previously, working with JSON in Valkey required serializing entire objects as strings or flattening them into Hashes, which limited nesting. These workarounds added complexity and made updates harder than they should be.&lt;&#x2F;p&gt;
&lt;p&gt;That changes with the general availability of native JSON support in Valkey. You can now store, query, and update JSON documents directly, without manual parsing or transformation. This brings a cleaner model to working with semi-structured data and makes your code easier to write and maintain. Valkey JSON is fully compatible with &lt;strong&gt;Valkey 8.0 and above&lt;&#x2F;strong&gt;. It is also compliant with &lt;strong&gt;RFC 7159&lt;&#x2F;strong&gt; and &lt;strong&gt;ECMA-404&lt;&#x2F;strong&gt;, adhering to widely accepted JSON standards, which ensures consistent handling of JSON data.&lt;&#x2F;p&gt;
&lt;p&gt;In this blog, we&#x27;ll guide you through setting up the Valkey JSON module and demonstrate how to use it for some common workloads.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;core-capabilities-and-performance&quot;&gt;Core Capabilities and Performance&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey JSON supports six data types: &lt;strong&gt;null, boolean, number, string, object, and array&lt;&#x2F;strong&gt;, allowing developers to represent complex, nested data structures without the constraints of traditional string serialization. Unlike other composite types, JSON objects and arrays in Valkey can contain any combination of the six value types, enabling deeply nested data models to be stored natively.&lt;&#x2F;p&gt;
&lt;p&gt;Internally, Valkey JSON utilizes an optimized &lt;strong&gt;binary tree-like format&lt;&#x2F;strong&gt;, which enables rapid traversal and manipulation of substructures without requiring the full document to be rewritten. This structure allows operations on specific paths to be efficient. Path-based commands like &lt;code&gt;JSON.GET&lt;&#x2F;code&gt;, &lt;code&gt;JSON.SET&lt;&#x2F;code&gt;, and &lt;code&gt;JSON.DEL&lt;&#x2F;code&gt; allow targeted interactions with specific elements, supporting multiple paths within a single operation. Additionally, Valkey JSON integrates with Valkey’s Access Control Lists (ACLs), introducing a &lt;code&gt;@json&lt;&#x2F;code&gt; command category to allow granular permissions alongside existing data types.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;getting-started-with-valkey-json&quot;&gt;Getting Started with Valkey JSON&lt;&#x2F;h2&gt;
&lt;p&gt;Let’s walk through a practical example using Valkey JSON with the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;hub.docker.com&#x2F;r&#x2F;valkey&#x2F;valkey-bundle&quot;&gt;&lt;code&gt;valkey-bundle&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; Docker image.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;installation-and-setup&quot;&gt;Installation and Setup&lt;&#x2F;h3&gt;
&lt;p&gt;Valkey JSON comes pre-loaded in the &lt;code&gt;valkey-bundle&lt;&#x2F;code&gt;, which also includes other modules such as &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-bloom&quot;&gt;valkey-bloom&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-search&quot;&gt;valkey-search&lt;&#x2F;a&gt; and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-ldap&quot;&gt;valkey-ldap&lt;&#x2F;a&gt; loaded on Valkey.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;start-a-valkey-bundle-instance&quot;&gt;Start a Valkey bundle instance&lt;&#x2F;h4&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;docker run --name my-valkey-bundle -d valkey&#x2F;valkey-bundle&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This starts a Docker container running Valkey with the JSON module already loaded.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;connect-with-valkey-cli&quot;&gt;Connect with valkey-cli&lt;&#x2F;h4&gt;
&lt;p&gt;To connect to your running instance using the built-in valkey-cli:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;docker run -it --rm valkey&#x2F;valkey-bundle valkey-cli&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;We will now verify that the modules are loaded and that Valkey JSON is among them:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; MODULE LIST&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;3) 1) &amp;quot;name&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   2) &amp;quot;json&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   3) &amp;quot;ver&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   4) (integer) 10010&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   5) &amp;quot;path&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   6) &amp;quot;&#x2F;usr&#x2F;lib&#x2F;valkey&#x2F;libjson.so&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   7) &amp;quot;args&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   8) (empty array)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;You’re now ready to work with JSON in Valkey!&lt;&#x2F;p&gt;
&lt;h3 id=&quot;basic-json-operations&quot;&gt;Basic JSON Operations&lt;&#x2F;h3&gt;
&lt;p&gt;Let’s take a real-world example where you want to store a &lt;strong&gt;list of users&lt;&#x2F;strong&gt; in a single key like this:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; JSON.SET users $ &amp;#39;[{&amp;quot;name&amp;quot;:&amp;quot;Alice&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;alice@example.com&amp;quot;,&amp;quot;city&amp;quot;:&amp;quot;Seattle&amp;quot;,&amp;quot;state&amp;quot;:&amp;quot;WA&amp;quot;},{&amp;quot;name&amp;quot;:&amp;quot;Bob&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;bob@example.com&amp;quot;,&amp;quot;city&amp;quot;:&amp;quot;Bellevue&amp;quot;,&amp;quot;state&amp;quot;:&amp;quot;WA&amp;quot;},{&amp;quot;name&amp;quot;:&amp;quot;Charlie&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;charlie@example.com&amp;quot;,&amp;quot;city&amp;quot;:&amp;quot;Austin&amp;quot;,&amp;quot;state&amp;quot;:&amp;quot;TX&amp;quot;}]&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This command stores a JSON array of user objects at the root path &lt;code&gt;$&lt;&#x2F;code&gt; under the key &lt;code&gt;users&lt;&#x2F;code&gt;. You can visualize it like this:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; [&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;     &amp;quot;name&amp;quot;: &amp;quot;Alice&amp;quot;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;     &amp;quot;email&amp;quot;: &amp;quot;alice@example.com&amp;quot;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;     &amp;quot;city&amp;quot;: &amp;quot;Seattle&amp;quot;, &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;     &amp;quot;state&amp;quot;: &amp;quot;WA&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   },&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;     &amp;quot;name&amp;quot;: &amp;quot;Bob&amp;quot;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;     &amp;quot;email&amp;quot;: &amp;quot;bob@example.com&amp;quot;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;     &amp;quot;city&amp;quot;: &amp;quot;Bellevue&amp;quot;, &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;     &amp;quot;state&amp;quot;: &amp;quot;WA&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   },&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;     &amp;quot;name&amp;quot;: &amp;quot;Charlie&amp;quot;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;     &amp;quot;email&amp;quot;: &amp;quot;charlie@example.com&amp;quot;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;     &amp;quot;city&amp;quot;: &amp;quot;Austin&amp;quot;, &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;     &amp;quot;state&amp;quot;: &amp;quot;TX&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; ]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Next, let’s say you want to fetch all the users in this list.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt;JSON.GET users&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;quot;[{\&amp;quot;name\&amp;quot;:\&amp;quot;Alice\&amp;quot;,\&amp;quot;email\&amp;quot;:\&amp;quot;alice@example.com\&amp;quot;,\&amp;quot;city\&amp;quot;:\&amp;quot;Seattle\&amp;quot;,\&amp;quot;state\&amp;quot;:\&amp;quot;WA\&amp;quot;},&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;{\&amp;quot;name\&amp;quot;:\&amp;quot;Bob\&amp;quot;,\&amp;quot;email\&amp;quot;:\&amp;quot;bob@example.com\&amp;quot;,\&amp;quot;city\&amp;quot;:\&amp;quot;Bellevue\&amp;quot;,\&amp;quot;state\&amp;quot;:\&amp;quot;WA\&amp;quot;},&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;{\&amp;quot;name\&amp;quot;:\&amp;quot;Charlie\&amp;quot;,\&amp;quot;email\&amp;quot;:\&amp;quot;charlie@example.com\&amp;quot;,\&amp;quot;city\&amp;quot;:\&amp;quot;Austin\&amp;quot;,\&amp;quot;state\&amp;quot;:\&amp;quot;TX\&amp;quot;}]&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Now we will retrieve only the cities associated with each user:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt;JSON.GET users $[*].city&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;quot;[\&amp;quot;Seattle\&amp;quot;,\&amp;quot;Bellevue\&amp;quot;,\&amp;quot;Austin\&amp;quot;]&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;advanced-json-queries&quot;&gt;Advanced JSON Queries&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey JSON supports &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;goessner.net&#x2F;articles&#x2F;JsonPath&#x2F;&quot;&gt;JSONPath&lt;&#x2F;a&gt; expressions.
JSONPath is a query language for JSON documents, similar to XPath for XML. It allows users to select and extract specific data from a JSON document using a path-like expression.&lt;&#x2F;p&gt;
&lt;p&gt;Let us look at some filtering using JSONPath.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;retrieve-all-users-from-washington-state&quot;&gt;Retrieve all users from Washington State&lt;&#x2F;h4&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt;JSON.GET users &amp;#39;$[?(@.state==&amp;quot;WA&amp;quot;)]&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;quot;[{\&amp;quot;name\&amp;quot;:\&amp;quot;Alice\&amp;quot;,\&amp;quot;email\&amp;quot;:\&amp;quot;alice@example.com\&amp;quot;,\&amp;quot;city\&amp;quot;:\&amp;quot;Seattle\&amp;quot;,\&amp;quot;state\&amp;quot;:\&amp;quot;WA\&amp;quot;},&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;{\&amp;quot;name\&amp;quot;:\&amp;quot;Bob\&amp;quot;,\&amp;quot;email\&amp;quot;:\&amp;quot;bob@example.com\&amp;quot;,\&amp;quot;city\&amp;quot;:\&amp;quot;Bellevue\&amp;quot;,\&amp;quot;state\&amp;quot;:\&amp;quot;WA\&amp;quot;}]&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Here, JSONPath syntax is used to filter data from a JSON document.
Let&#x27;s break down how that works:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;$&lt;&#x2F;code&gt; represents the root of the JSON document&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;[...]&lt;&#x2F;code&gt; indicates we&#x27;re working with an array&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;?()&lt;&#x2F;code&gt; is a filter expression that applies a condition&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;@&lt;&#x2F;code&gt; refers to the current object being processed in the array&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;.state&lt;&#x2F;code&gt; accesses the &quot;state&quot; property of the current object&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;==&quot;WA&quot;&lt;&#x2F;code&gt; checks if that property equals the string &quot;WA&quot;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
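&lt;p&gt;To make the filter semantics concrete, here is a plain-Python equivalent of the JSONPath expressions used in this section. This illustration decodes the document client-side and does not use the module:&lt;&#x2F;p&gt;

```python
import json

# The users document from the JSON.SET example above, decoded into Python.
users = json.loads('''
[
  {"name": "Alice",   "email": "alice@example.com",   "city": "Seattle",  "state": "WA"},
  {"name": "Bob",     "email": "bob@example.com",     "city": "Bellevue", "state": "WA"},
  {"name": "Charlie", "email": "charlie@example.com", "city": "Austin",   "state": "TX"}
]
''')

# Equivalent of the JSONPath filter $[?(@.state=="WA")]:
wa_users = [u for u in users if u["state"] == "WA"]
print([u["name"] for u in wa_users])  # prints ['Alice', 'Bob']

# Equivalent of $[?(@.city=="Seattle" || @.city=="Austin")].email:
emails = [u["email"] for u in users if u["city"] in ("Seattle", "Austin")]
print(emails)  # prints ['alice@example.com', 'charlie@example.com']
```

&lt;p&gt;The advantage of running the JSONPath filter server-side is that only the matching elements cross the network, rather than the whole document.&lt;&#x2F;p&gt;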
&lt;h3 id=&quot;multi-path-queries&quot;&gt;Multi-Path Queries&lt;&#x2F;h3&gt;
&lt;p&gt;Below we retrieve the names of users from WA and those from other states:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; JSON.GET users &amp;#39;$[?(@.state==&amp;quot;WA&amp;quot;)].name&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;quot;[\&amp;quot;Alice\&amp;quot;,\&amp;quot;Bob\&amp;quot;]&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; JSON.GET users &amp;#39;$[?(@.state!=&amp;quot;WA&amp;quot;)].name&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;quot;[\&amp;quot;Charlie\&amp;quot;]&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h4 id=&quot;we-can-also-get-the-emails-of-users-from-seattle-or-austin&quot;&gt;We can also get the emails of users from Seattle or Austin&lt;&#x2F;h4&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; JSON.GET users &amp;#39;$[?(@.city==&amp;quot;Seattle&amp;quot; || @.city==&amp;quot;Austin&amp;quot;)].email&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;quot;[\&amp;quot;alice@example.com\&amp;quot;,\&amp;quot;charlie@example.com\&amp;quot;]&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h3 id=&quot;key-use-cases-for-with-valkey-json&quot;&gt;Key Use Cases for Valkey JSON&lt;&#x2F;h3&gt;
&lt;h4 id=&quot;per-user-event-counters-for-ad-or-notification-delivery&quot;&gt;Per-User Event Counters for Ad or Notification Delivery&lt;&#x2F;h4&gt;
&lt;p&gt;Valkey JSON can be used for tracking per-user counters for ad impressions, push notifications, or message deliveries. An ad platform may store a JSON document per user with nested metadata for each campaign — including impression counts, last delivery timestamps, and click history. Counters are updated in place using &lt;code&gt;JSON.NUMINCRBY&lt;&#x2F;code&gt; or &lt;code&gt;JSON.SET&lt;&#x2F;code&gt; on specific paths (e.g., &lt;code&gt;$.ad_campaigns.ad_123.count&lt;&#x2F;code&gt;), which reduces network I&#x2F;O and latency while ensuring atomicity. Microservices can also retrieve only the required subfields using JSONPath queries, like &lt;code&gt;$.ad_campaigns.ad_123.lastSeen&lt;&#x2F;code&gt;, allowing for efficient real-time decisioning.&lt;&#x2F;p&gt;
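&lt;p&gt;As a sketch with a hypothetical key and campaign layout, and assuming RedisJSON-compatible reply semantics (with a &lt;code&gt;$&lt;&#x2F;code&gt; path, values come back serialized as a JSON array), such a counter update could look like this in valkey-cli:&lt;&#x2F;p&gt;

```plain
127.0.0.1:6379> JSON.SET user:1001 $ '{"ad_campaigns":{"ad_123":{"count":0,"lastSeen":1700000000}}}'
OK
127.0.0.1:6379> JSON.NUMINCRBY user:1001 $.ad_campaigns.ad_123.count 1
"[1]"
127.0.0.1:6379> JSON.GET user:1001 $.ad_campaigns.ad_123.lastSeen
"[1700000000]"
```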
&lt;h4 id=&quot;shared-reference-metadata-store-for-microservices&quot;&gt;Shared Reference Metadata Store for Microservices&lt;&#x2F;h4&gt;
&lt;p&gt;Across games, e-commerce platforms, or internal developer tools, multiple microservices often need fast access to consistent, structured reference data. This can include things like product attributes, game character metadata, tax codes, or ID mappings — which are naturally stored as JSON documents. Valkey JSON can be used for centralizing this reference data in-memory. We can store JSON documents using &lt;code&gt;JSON.SET&lt;&#x2F;code&gt;, and services can retrieve targeted subfields using path expressions like &lt;code&gt;$.items[?(@.rarity==&quot;epic&quot;)]&lt;&#x2F;code&gt; or &lt;code&gt;$.idToName[&quot;1234&quot;]&lt;&#x2F;code&gt;. Updates happen in bulk during patch releases or deployment cycles, but reads are constant and latency-sensitive. By keeping this metadata in Valkey, services avoid making remote API calls or parsing local files, achieving very low lookup time even under load.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;identity-graph-and-profile-storage-at-scale&quot;&gt;Identity Graph and Profile Storage at Scale&lt;&#x2F;h4&gt;
&lt;p&gt;For organizations operating large-scale identity platforms — such as those in fintech, healthtech, or fraud detection — managing complex user or entity profiles is a core requirement. These profiles often include deeply nested data like names, contact info, document verification, scores, and historical activity. Valkey JSON allows each profile to be stored as a single JSON document and updated atomically as new data arrives, without needing to rewrite the entire object. Queries like &lt;code&gt;$.email&lt;&#x2F;code&gt;, &lt;code&gt;$.history[-1]&lt;&#x2F;code&gt;, or &lt;code&gt;$.risk.score&lt;&#x2F;code&gt; can be executed efficiently. This architecture supports high rates of concurrent reads and writes and can scale to multi-terabyte datasets using a mix of memory and persistent storage.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;Before Valkey JSON, developers often had to write extra serialization code, split JSON documents across keys, make additional round trips, or rely on external document stores alongside Valkey. Now, with native JSON commands, you can store, query, and update rich, nested structures directly in-memory and in real time. This removes architectural complexity, reduces latency, and extends Valkey’s high-performance design to modern, document-centric use cases.&lt;&#x2F;p&gt;
&lt;p&gt;Explore Valkey JSON in more depth and start building with it today!&lt;&#x2F;p&gt;
&lt;p&gt;See the following resources for details and ways to contribute:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;valkey-json&#x2F;&quot;&gt;Valkey Official Documentation&lt;&#x2F;a&gt; to learn more.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-json&quot;&gt;Valkey JSON GitHub Repository&lt;&#x2F;a&gt; for issues or feature requests.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;hub.docker.com&#x2F;r&#x2F;valkey&#x2F;valkey-bundle&quot;&gt;Valkey Bundle&lt;&#x2F;a&gt; to start building with Valkey JSON.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Happy coding!&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>valkey-bundle: One stop shop for real-time applications</title>
        <published>2025-06-23T00:00:01+00:00</published>
        <updated>2025-06-23T00:00:01+00:00</updated>
        
        <author>
          <name>
            rlunar
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/valkey-bundle-one-stop-shop-for-low-latency-modern-applications/"/>
        <id>https://valkey.io/blog/valkey-bundle-one-stop-shop-for-low-latency-modern-applications/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/valkey-bundle-one-stop-shop-for-low-latency-modern-applications/">&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;valkey-bundle-one-stop-shop-for-low-latency-modern-applications&#x2F;images&#x2F;valkey-bundle.png&quot; alt=&quot;valkey-bundle&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Are you looking to build low-latency, high-performance, feature-rich applications without the hassle of managing multiple dependencies? Meet &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt;, your new best friend in the world of modern application development!&lt;&#x2F;p&gt;
&lt;p&gt;&lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; enhances developer productivity by providing easy access to four tools in addition to Valkey 8.1.1 through one container: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-json&quot;&gt;Valkey JSON&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-bloom&quot;&gt;Valkey Bloom&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-search&quot;&gt;Valkey Search&lt;&#x2F;a&gt;, and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-ldap&quot;&gt;Valkey LDAP&lt;&#x2F;a&gt;. &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; enables teams to leverage native JSON support for reliable document data handling, implement efficient data filtering using Bloom filters, perform vector similarity search, and handle authentication through LDAP. You can pick from all these modules, mixing and matching to meet your requirements.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; provides a simplified development experience that eliminates dependency conflicts and reduces setup time. With a single container deployment, teams can simplify their CI&#x2F;CD pipelines while maintaining consistency across development and production environments.&lt;&#x2F;p&gt;
&lt;p&gt;From a security perspective, &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; integrates enterprise-grade authentication through Valkey LDAP, enabling cohesive integration with existing identity providers while maintaining strict access controls. This built-in security layer on top of existing Valkey Access Control Lists (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;acl&#x2F;&quot;&gt;ACL&lt;&#x2F;a&gt;) ensures that your data remains protected without sacrificing performance or adding complexity to your architecture.&lt;&#x2F;p&gt;
&lt;p&gt;Beyond the technical capabilities that &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; provides, my favorite thing about this new solution is seeing the Valkey community work together, releasing modules that cater to different use cases and are driven by user requests raised in the project&#x27;s open discussions forum, for example: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;orgs&#x2F;valkey-io&#x2F;discussions&#x2F;212&quot;&gt;Implement JSON and search&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;orgs&#x2F;valkey-io&#x2F;discussions&#x2F;119&quot;&gt;FEATURE: JSON&#x2F;ReJSON&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;orgs&#x2F;valkey-io&#x2F;discussions&#x2F;108&quot;&gt;Modules&lt;&#x2F;a&gt;, and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;orgs&#x2F;valkey-io&#x2F;discussions&#x2F;215&quot;&gt;FEATURE: Bloom Filters&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;In this blog post, we go over how to get started with the single container that deploys &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt;, walk through the modules included within it, and dive into a real-life use case with a simple ad platform.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;getting-started-with-valkey-bundle&quot;&gt;Getting Started with &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt;&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; is generally available as version 8.1 and includes Valkey 8.1, valkey-json 1.0, valkey-bloom 1.0, valkey-search 1.0, and valkey-ldap 1.0 out of the box in a single container. You can deploy the container as a standalone Valkey instance or in cluster mode.&lt;&#x2F;p&gt;
&lt;p&gt;Start your &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; instance:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Get the latest version of the container image&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; docker pull valkey&#x2F;valkey-bundle&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Run a standalone Valkey instance on the default port&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; docker run&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --name&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; my-valkey-bundle&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    -p&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; 6379:6379&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    -d&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey&#x2F;valkey-bundle&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Connect to the container we previously created using the built-in &lt;code&gt;valkey-cli&lt;&#x2F;code&gt;:&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; docker exec&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -it&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; my-valkey-bundle&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    valkey-cli&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -h&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; localhost&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -p 6379 -3&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;List the available modules using the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;info&#x2F;&quot;&gt;&lt;code&gt;INFO&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; command.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;my-valkey-bundle:6379&amp;gt; INFO modules&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Modules&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;module:name=bf,ver=10000,api=1,filters=0,usedby=[],using=[],options=[]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;module:name=search,ver=10000,api=1,filters=0,usedby=[],using=[],options=[handle-io-errors|handle-repl-async-load|no-implicit-signal-modified]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;module:name=json,ver=10010,api=1,filters=0,usedby=[],using=[],options=[handle-io-errors]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;module:name=ldap,ver=16777471,api=1,filters=0,usedby=[],using=[],options=[]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;valkey-bundle-features-and-cli-examples&quot;&gt;&lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; Features and CLI Examples&lt;&#x2F;h2&gt;
&lt;p&gt;Each module in &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; addresses specific challenges for modern, low-latency applications that already use Valkey and want to add JSON, vector similarity search, Bloom filter, or LDAP functionality. In this section, we explore each module currently included in &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; 8.1.1 and illustrate its technical capabilities and constraints with CLI examples.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;json-document-storage-and-querying&quot;&gt;JSON Document Storage and Querying&lt;&#x2F;h3&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-json&quot;&gt;Valkey JSON&lt;&#x2F;a&gt; brings native &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;JSON&quot;&gt;JSON&lt;&#x2F;a&gt; support to your applications, eliminating the need to serialize&#x2F;deserialize data or maintain complex object-relational mappings. It implements the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;datatracker.ietf.org&#x2F;doc&#x2F;html&#x2F;rfc7159.html&quot;&gt;RFC7159&lt;&#x2F;a&gt; and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;ecma-international.org&#x2F;publications-and-standards&#x2F;standards&#x2F;ecma-404&#x2F;&quot;&gt;ECMA-404&lt;&#x2F;a&gt; JSON standards, providing a foundation for storing and querying structured data. With support for &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;JSONPath&quot;&gt;JSONPath&lt;&#x2F;a&gt; queries, you can perform complex query operations on nested data structures without writing custom parsing logic.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Key Capabilities and Constraints:&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Document Size&lt;&#x2F;strong&gt;: Configurable maximum document size (default: no limit) via &lt;code&gt;json.max-document-size&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Nesting Depth&lt;&#x2F;strong&gt;: Maximum nesting level of 128 for objects&#x2F;arrays (configurable via &lt;code&gt;json.max-path-limit&lt;&#x2F;code&gt;)&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Path Syntax&lt;&#x2F;strong&gt;: Supports both enhanced JSONPath and restricted syntax
&lt;ul&gt;
&lt;li&gt;Enhanced syntax includes recursive descent (&lt;code&gt;..&lt;&#x2F;code&gt;), wildcards (&lt;code&gt;*&lt;&#x2F;code&gt;), array slices (&lt;code&gt;[start:end:step]&lt;&#x2F;code&gt;), and complex filters&lt;&#x2F;li&gt;
&lt;li&gt;Filter expressions support comparisons (&lt;code&gt;==&lt;&#x2F;code&gt;, &lt;code&gt;!=&lt;&#x2F;code&gt;, &lt;code&gt;&amp;gt;&lt;&#x2F;code&gt;, &lt;code&gt;&amp;gt;=&lt;&#x2F;code&gt;, &lt;code&gt;&amp;lt;&lt;&#x2F;code&gt;, &lt;code&gt;&amp;lt;=&lt;&#x2F;code&gt;) and logical operators (&lt;code&gt;&amp;amp;&amp;amp;&lt;&#x2F;code&gt;, &lt;code&gt;||&lt;&#x2F;code&gt;)&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Performance&lt;&#x2F;strong&gt;: O(1) for direct path operations, scaling to O(N) for recursive or filtered queries&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Memory Monitoring&lt;&#x2F;strong&gt;: Use &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;json.debug&#x2F;&quot;&gt;&lt;code&gt;JSON.DEBUG&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;&lt;code&gt; MEMORY &amp;lt;key&amp;gt;&lt;&#x2F;code&gt; or &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;memory-usage&#x2F;&quot;&gt;&lt;code&gt;MEMORY USAGE&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;&lt;code&gt; &amp;lt;key&amp;gt;&lt;&#x2F;code&gt; to track memory usage&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Performance Tip&lt;&#x2F;strong&gt;: Use specific paths in &lt;code&gt;JSON.GET&lt;&#x2F;code&gt; for better performance.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Let&#x27;s now explore a few commands using an example for a user profile, with a few attributes and nested elements.&lt;&#x2F;p&gt;
&lt;p&gt;Sample document for key &lt;code&gt;user:6379&lt;&#x2F;code&gt;&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;json&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;{&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    &amp;quot;name&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;Val Key&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    &amp;quot;address&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;: {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;        &amp;quot;city&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;New York&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;        &amp;quot;zip&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;10001&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    },&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    &amp;quot;orders&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;: [&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        {&lt;&#x2F;span&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;&amp;quot;id&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;ord1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt; &amp;quot;total&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 99.99&lt;&#x2F;span&gt;&lt;span&gt;},&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        {&lt;&#x2F;span&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;&amp;quot;id&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;ord2&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt; &amp;quot;total&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 150.50&lt;&#x2F;span&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Store a JSON document with nested elements and arrays using &lt;code&gt;valkey-cli&lt;&#x2F;code&gt; and the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;json.set&#x2F;&quot;&gt;&lt;code&gt;JSON.SET&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; command.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; JSON.SET user:6379 $ &amp;#39;{&amp;quot;name&amp;quot;: &amp;quot;Val Key&amp;quot;,&amp;quot;address&amp;quot;: {&amp;quot;city&amp;quot;: &amp;quot;New York&amp;quot;,&amp;quot;zip&amp;quot;: &amp;quot;10001&amp;quot;},&amp;quot;orders&amp;quot;: [{&amp;quot;id&amp;quot;: &amp;quot;ord1&amp;quot;, &amp;quot;total&amp;quot;: 99.99},{&amp;quot;id&amp;quot;: &amp;quot;ord2&amp;quot;, &amp;quot;total&amp;quot;: 150.50}]}&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;To run a complex query with filters that retrieves orders with a total over 100 from the array in the given user&#x27;s document, we&#x27;ll use the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;json.get&#x2F;&quot;&gt;&lt;code&gt;JSON.GET&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; command.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; JSON.GET user:6379 &amp;#39;$.orders[?(@.total &amp;gt; 100)]&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;quot;[{\&amp;quot;id\&amp;quot;:\&amp;quot;ord2\&amp;quot;,\&amp;quot;total\&amp;quot;:150.50}]&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
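The JSONPath filter above selects array elements by predicate. As a minimal sketch of the equivalent filter logic in plain Python (the document and the 100 threshold come from the example above; a real application would send the JSONPath query to the server rather than filtering client-side):

```python
import json

# The user profile stored under user:6379 in the example above.
doc = json.loads("""
{
    "name": "Val Key",
    "address": {"city": "New York", "zip": "10001"},
    "orders": [
        {"id": "ord1", "total": 99.99},
        {"id": "ord2", "total": 150.50}
    ]
}
""")

# Equivalent of the JSONPath filter $.orders[?(@.total > 100)]:
# keep only the orders whose total exceeds 100.
matches = [order for order in doc["orders"] if order["total"] > 100]
print(matches)  # [{'id': 'ord2', 'total': 150.5}]
```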
&lt;p&gt;Perform &lt;em&gt;Array&lt;&#x2F;em&gt; operations by appending an item to the orders list using the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;json.arrappend&#x2F;&quot;&gt;&lt;code&gt;JSON.ARRAPPEND&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; command.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; JSON.ARRAPPEND user:6379 $.orders &amp;#39;{&amp;quot;id&amp;quot;: &amp;quot;ord3&amp;quot;,&amp;quot;total&amp;quot;: 75.25}&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;1) (integer) 3&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-json&quot;&gt;Valkey JSON&lt;&#x2F;a&gt; version &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-json&#x2F;releases&#x2F;tag&#x2F;1.0.0&quot;&gt;1.0.0&lt;&#x2F;a&gt; included in &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; was released on Tuesday, April 1st, 2025. If you would like to dive deeper, see more details in the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;valkey-json&#x2F;&quot;&gt;Valkey JSON Documentation&lt;&#x2F;a&gt; and explore the list of &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;#json&quot;&gt;commands&lt;&#x2F;a&gt; included.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;efficient-data-filtering-with-bloom-filters&quot;&gt;Efficient Data Filtering with Bloom Filters&lt;&#x2F;h3&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-bloom&quot;&gt;Valkey Bloom&lt;&#x2F;a&gt; implements space-efficient probabilistic data structures that allow membership testing. Use cases include de-duplication, filtering, and avoiding one-hit-wonders in caching (data that is accessed exactly once and then never requested again). &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Bloom_filter&quot;&gt;Bloom filters&lt;&#x2F;a&gt; can reduce memory usage by up to 98% compared to the &lt;code&gt;SET&lt;&#x2F;code&gt; data structure while maintaining fast lookup times.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Key Capabilities and Constraints&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Memory Efficiency&lt;&#x2F;strong&gt;
&lt;ul&gt;
&lt;li&gt;Default memory limit of 128MB per filter (configurable via &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;bloomfilters&#x2F;&quot;&gt;&lt;code&gt;BF.BLOOM-MEMORY-USAGE-LIMIT&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;)&lt;&#x2F;li&gt;
&lt;li&gt;Example: With a 0.01 (1%) false positive rate, a filter can probabilistically track 112M items within 128MB.&lt;&#x2F;li&gt;
&lt;li&gt;Achieves 93-98% memory savings compared to the &lt;code&gt;SET&lt;&#x2F;code&gt; data structure for items using UUIDs for uniqueness.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Scaling Options&lt;&#x2F;strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Non-scaling filters&lt;&#x2F;strong&gt;: Fixed capacity for better performance.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Scaling filters&lt;&#x2F;strong&gt;: Dynamic growth with configurable expansion rate.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Performance Characteristics&lt;&#x2F;strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Time complexity&lt;&#x2F;strong&gt;: Bloom filter operations take &lt;strong&gt;O(k)&lt;&#x2F;strong&gt; time because you must compute k different hash functions and access k different bit positions in the array for every insert or lookup operation. The time complexity is independent of the number of items stored or the size of the bit array - it only depends on the number of hash functions configured for the filter.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Performance Tip:&lt;&#x2F;strong&gt; Start with larger initial capacities to avoid scaling.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Accuracy Control&lt;&#x2F;strong&gt;
&lt;ul&gt;
&lt;li&gt;Configurable false positive rate (default: 0.01, or 1%), a tradeoff between time and space for accuracy.&lt;&#x2F;li&gt;
&lt;li&gt;Guaranteed zero false negatives.&lt;&#x2F;li&gt;
&lt;li&gt;Tightening ratio for maintaining accuracy during scale-out.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
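The 112M-items-in-128MB figure above follows from the standard textbook Bloom filter sizing formula, m = -n · ln(p) / (ln 2)^2 bits for n items at false positive rate p (this is general Bloom filter math, not code taken from the module). A quick check in Python:

```python
import math

fp_rate = 0.01                       # 1% false positive rate (the module default)
budget_bits = 128 * 1024 * 1024 * 8  # 128MB filter memory limit, in bits

# Standard Bloom filter sizing: m = -n * ln(p) / (ln 2)^2 bits for n items,
# so the bits needed per item depend only on the target false positive rate.
bits_per_item = -math.log(fp_rate) / (math.log(2) ** 2)

# How many items fit in the 128MB budget at this rate.
capacity = budget_bits / bits_per_item
print(f"{bits_per_item:.2f} bits/item, ~{capacity / 1e6:.0f}M items in 128MB")
```

At a 1% rate this works out to roughly 9.6 bits per item, reproducing the ~112M item capacity quoted in the list above.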
&lt;p&gt;Create a non-scaling (fixed memory) filter with specific parameters using &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;bf.reserve&#x2F;&quot;&gt;&lt;code&gt;BF.RESERVE&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; BF.RESERVE non_scaling_filter 0.001 1000000 NONSCALING&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Create a scaling filter with custom expansion using &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;bf.insert&#x2F;&quot;&gt;&lt;code&gt;BF.INSERT&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; BF.INSERT scaling_filter EXPANSION 4 ITEMS item1 item2&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;1) (integer) 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;2) (integer) 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Check filter capacity and stats using &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;bf.info&#x2F;&quot;&gt;&lt;code&gt;BF.INFO&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; for the fixed filter:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; BF.INFO non_scaling_filter CAPACITY&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(integer) 1000000&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Check filter capacity and stats for the scaling filter:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; BF.INFO scaling_filter MAXSCALEDCAPACITY&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(integer) 34952500&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;For bulk operations, use &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;bf.madd&#x2F;&quot;&gt;&lt;code&gt;BF.MADD&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; to track multiple elements at once:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; BF.MADD non_scaling_filter item1 item2 item3&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;1) (integer) 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;2) (integer) 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;3) (integer) 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Check for multiple items in a single roundtrip using &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;bf.mexists&#x2F;&quot;&gt;&lt;code&gt;BF.MEXISTS&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;. If we check for an item that does not exist (item4), we get a zero in response.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; BF.MEXISTS non_scaling_filter item1 item2 item4&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;1) (integer) 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;2) (integer) 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;3) (integer) 0&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-bloom&quot;&gt;Valkey Bloom&lt;&#x2F;a&gt; version &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-bloom&#x2F;releases&#x2F;tag&#x2F;1.0.0&quot;&gt;1.0.0&lt;&#x2F;a&gt; included in &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; was released on Tuesday, April 1st, 2025. For more details, commands, and examples, see the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;bloomfilters&#x2F;&quot;&gt;Valkey Bloom Documentation&lt;&#x2F;a&gt; and the blog post &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;introducing-bloom-filters&#x2F;&quot;&gt;Introducing Bloom Filters for Valkey&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;high-performance-vector-similarity-search&quot;&gt;High-Performance Vector Similarity Search&lt;&#x2F;h3&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-search&quot;&gt;Valkey Search&lt;&#x2F;a&gt; provides your application with high-performance vector similarity search capabilities, essential for modern AI-driven applications. It delivers single-digit millisecond latency at 99% recall, even with billions of vectors, through optimized implementations of Approximate Nearest Neighbor (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Nearest_neighbor_search#Approximation_methods&quot;&gt;ANN&lt;&#x2F;a&gt;) search with Hierarchical Navigable Small World (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Hierarchical_navigable_small_world&quot;&gt;HNSW&lt;&#x2F;a&gt;) graphs and exact matching using K-Nearest Neighbors (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Nearest_neighbor_search#Approximation_methods&quot;&gt;KNN&lt;&#x2F;a&gt;) algorithms.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Key Capabilities and Constraints&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Vector Specifications&lt;&#x2F;strong&gt;
&lt;ul&gt;
&lt;li&gt;Supports &lt;code&gt;FLOAT32&lt;&#x2F;code&gt; vectors&lt;&#x2F;li&gt;
&lt;li&gt;Multiple distance metrics: &lt;code&gt;L2&lt;&#x2F;code&gt; (Euclidean), &lt;code&gt;IP&lt;&#x2F;code&gt; (Inner Product), &lt;code&gt;Cosine&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;HNSW parameters for performance tuning
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;M&lt;&#x2F;code&gt;: Maximum outgoing edges (default: 16, max: 512)&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;EF_CONSTRUCTION&lt;&#x2F;code&gt;: Build-time accuracy (default: 200, max: 4,096)&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;EF_RUNTIME&lt;&#x2F;code&gt;: Query-time accuracy (default: 10, max: 4,096)&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Hybrid Query Support&lt;&#x2F;strong&gt;
&lt;ul&gt;
&lt;li&gt;Tag indexes for exact matching
&lt;ul&gt;
&lt;li&gt;Case-insensitive by default&lt;&#x2F;li&gt;
&lt;li&gt;Configurable separators (default: comma)&lt;&#x2F;li&gt;
&lt;li&gt;Prefix and exact matching supported&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;Numeric indexes for range queries
&lt;ul&gt;
&lt;li&gt;Supports inclusive&#x2F;exclusive ranges&lt;&#x2F;li&gt;
&lt;li&gt;Infinite bounds using &lt;code&gt;-inf&lt;&#x2F;code&gt; and &lt;code&gt;+inf&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Performance Optimization&lt;&#x2F;strong&gt;
&lt;ul&gt;
&lt;li&gt;Automatic query planning between pre-filtering and inline-filtering&lt;&#x2F;li&gt;
&lt;li&gt;Configurable block size for memory allocation (&lt;code&gt;--hnsw-block-size&lt;&#x2F;code&gt;)&lt;&#x2F;li&gt;
&lt;li&gt;Linear scaling with CPU cores for both queries and ingestion&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Resource Management&lt;&#x2F;strong&gt;
&lt;ul&gt;
&lt;li&gt;Configurable thread pools for readers and writers&lt;&#x2F;li&gt;
&lt;li&gt;Default: matches physical CPU core count&lt;&#x2F;li&gt;
&lt;li&gt;Cluster mode support for horizontal scaling&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
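Before creating an index, it helps to see what the &lt;code&gt;Cosine&lt;&#x2F;code&gt; distance metric listed above actually computes: one minus the cosine of the angle between two vectors. A minimal sketch in plain Python (the toy 3-dimensional vectors are made up for illustration; real indexes use higher-dimensional &lt;code&gt;FLOAT32&lt;&#x2F;code&gt; embeddings):

```python
import math

def cosine_distance(a, b):
    # Cosine distance = 1 - cosine similarity: 0.0 for vectors pointing the
    # same way, 1.0 for orthogonal vectors, up to 2.0 for opposite vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Toy embeddings; only direction matters, not magnitude.
v1 = [1.0, 0.0, 0.0]
v2 = [2.0, 0.0, 0.0]   # same direction as v1, different magnitude
v3 = [0.0, 1.0, 0.0]   # orthogonal to v1

print(cosine_distance(v1, v2))  # 0.0 (identical direction)
print(cosine_distance(v1, v3))  # 1.0 (orthogonal)
```

Because magnitude is ignored, cosine is a common choice for text embeddings, while L2 or inner product suit vectors whose length carries meaning.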
&lt;p&gt;Create an index with HNSW configuration with &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;ft.create&#x2F;&quot;&gt;&lt;code&gt;FT.CREATE&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; FT.CREATE productIndex \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ON JSON PREFIX 1 product: \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    SCHEMA $.vector AS vector \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    VECTOR HNSW 10 \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        TYPE FLOAT32 \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        DIM 20 \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        DISTANCE_METRIC COSINE \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        M 4 \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        EF_CONSTRUCTION 100 \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            $.category AS category TAG \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            $.price AS price NUMERIC&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Perform a hybrid query combining vector similarity with filters using &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;ft.search&#x2F;&quot;&gt;&lt;code&gt;FT.SEARCH&lt;&#x2F;code&gt;&lt;&#x2F;a&gt;&lt;code&gt; index query&lt;&#x2F;code&gt;:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; FT.SEARCH productIndex &amp;quot;*=&amp;gt;[KNN 5 @vector $query_vector] @category:{electronics} @price:[100 500]&amp;quot; \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    PARAMS 2 query_vector &amp;quot;$encoded_vector&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-search&quot;&gt;Valkey Search&lt;&#x2F;a&gt; version &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-search&#x2F;releases&#x2F;tag&#x2F;1.0.1&quot;&gt;1.0.1&lt;&#x2F;a&gt; included in &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; was released on Wednesday June 11th 2025. If you would like to learn more about the capabilities and opportunities this module provides, see the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;search&#x2F;&quot;&gt;Valkey Search Documentation&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;enhanced-security-with-existing-mechanisms-with-ldap&quot;&gt;Enhanced Security with Existing Mechanisms via LDAP&lt;&#x2F;h3&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-ldap&quot;&gt;Valkey LDAP&lt;&#x2F;a&gt; integrates with your existing identity management infrastructure, providing robust authentication without requiring additional user management systems. It supports both simple &lt;em&gt;bind&lt;&#x2F;em&gt; and &lt;em&gt;search+bind&lt;&#x2F;em&gt; modes, making it compatible with various LDAP directory structures. This flexibility ensures that you can maintain enterprise security standards while leveraging the power of &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Key Features&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Authentication Modes&lt;&#x2F;strong&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Bind Mode&lt;&#x2F;strong&gt;: Fast, direct authentication when usernames match DN patterns&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Search+Bind Mode&lt;&#x2F;strong&gt;: Flexible authentication for complex directory structures&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Security Features&lt;&#x2F;strong&gt;
&lt;ol&gt;
&lt;li&gt;TLS&#x2F;SSL support&lt;&#x2F;li&gt;
&lt;li&gt;Connection pooling&lt;&#x2F;li&gt;
&lt;li&gt;High availability with multiple LDAP servers&lt;&#x2F;li&gt;
&lt;li&gt;Integration with the Valkey ACL system&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;&lt;strong&gt;Basic configuration&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;p&gt;A simple bind mode setup against our imaginary LDAP server:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; CONFIG SET ldap.servers &amp;quot;ldap:&#x2F;&#x2F;ldap.valkey.io:389&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; CONFIG SET ldap.bind_dn_prefix &amp;quot;cn=&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; CONFIG SET ldap.bind_dn_suffix &amp;quot;,ou=users,dc=valkey,dc=io&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
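In bind mode, the module derives the full distinguished name by concatenating the configured prefix, the username supplied to `AUTH`, and the configured suffix. A minimal sketch of that concatenation, assuming the settings above (the function is illustrative, not part of the module):

```python
def build_bind_dn(username: str,
                  prefix: str = "cn=",
                  suffix: str = ",ou=users,dc=valkey,dc=io") -> str:
    """Mirror how bind mode builds the LDAP DN from the AUTH username."""
    return f"{prefix}{username}{suffix}"

print(build_bind_dn("valkey"))  # cn=valkey,ou=users,dc=valkey,dc=io
```

The resulting DN is what the module attempts to bind as against the configured servers.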
&lt;p&gt;Enable TLS:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; CONFIG SET ldap.use_starttls yes&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; CONFIG SET ldap.tls_ca_cert_path &amp;quot;&#x2F;path&#x2F;to&#x2F;ca.crt&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;User Management&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Create an LDAP-authenticated user:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; ACL SETUSER valkey on resetpass +@all&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Authenticate as that user:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; AUTH valkey &amp;quot;ldap_password&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Performance Tips:&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Use bind mode when possible (faster)&lt;&#x2F;li&gt;
&lt;li&gt;Adjust connection pool size for high traffic:&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; CONFIG SET ldap.connection_pool_size 5&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;ul&gt;
&lt;li&gt;Configure multiple LDAP servers for reliability&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; CONFIG SET ldap.servers &amp;quot;ldap:&#x2F;&#x2F;main:389,ldap:&#x2F;&#x2F;backup:389&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-ldap&quot;&gt;Valkey LDAP&lt;&#x2F;a&gt; version &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-ldap&#x2F;releases&#x2F;tag&#x2F;1.0.0&quot;&gt;1.0.0&lt;&#x2F;a&gt; included in &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; was released on Friday June 13th 2025. If you would like to learn more about its capabilities, read the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;ldap&#x2F;&quot;&gt;LDAP Authentication Documentation&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;getting-ready-for-production-deployment&quot;&gt;Getting Ready for Production Deployment&lt;&#x2F;h2&gt;
&lt;p&gt;Deploying &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; in production requires careful consideration of persistence, configuration, monitoring, and security. These foundational elements ensure your application maintains data integrity, performs optimally, and scales reliably under production workloads.&lt;&#x2F;p&gt;
&lt;p&gt;For example:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; docker run&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --name&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; my-valkey-bundle&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    -d&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey&#x2F;valkey-bundle&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    valkey-server&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --save 60 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h3 id=&quot;custom-configuration&quot;&gt;Custom Configuration&lt;&#x2F;h3&gt;
&lt;p&gt;A well-structured configuration file is essential for production deployments as it provides centralized control over all module settings, security parameters, and performance tuning options. The configuration below demonstrates a production-ready setup that balances performance, security, and resource utilization across all &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; modules.&lt;&#x2F;p&gt;
&lt;p&gt;This sample configuration file includes optimized settings for:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;JSON module with reasonable document size limits and nesting depth controls&lt;&#x2F;li&gt;
&lt;li&gt;Bloom filters with production-appropriate memory limits and false positive rates&lt;&#x2F;li&gt;
&lt;li&gt;Search module with thread pools sized for typical server hardware&lt;&#x2F;li&gt;
&lt;li&gt;LDAP integration with TLS security and connection pooling&lt;&#x2F;li&gt;
&lt;li&gt;Valkey settings for memory management and persistence&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Note: This is neither an official nor a recommended configuration; it is provided only to demonstrate the available module settings.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Valkey settings&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;port 6379&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;bind 127.0.0.1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;protected-mode yes&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;requirepass &amp;quot;strong_password&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;################################## JSON Module ###################################&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Maximum document size (in bytes, 0 = unlimited)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;json.max-document-size 1048576&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Maximum nesting depth for JSON documents&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;json.max-path-limit 32&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;################################## Bloom Module #################################&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Default initial capacity for new bloom filters&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;bf.bloom-capacity 100000&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Default false positive rate&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;bf.bloom-fp-rate 0.01&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Memory usage limit per bloom filter (in bytes)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;bf.bloom-memory-usage-limit 134217728  # 128MB&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Default expansion rate for scaling filters&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;bf.bloom-expansion 2&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;################################## Search Module ###############################&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Thread configuration for search operations&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;search.reader-threads 8&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;search.writer-threads 4&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# HNSW graph configuration&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;search.hnsw-block-size 10000&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Enable cluster mode&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;search.use-coordinator no&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Log level (debug, verbose, notice, warning)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;search.log-level notice&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;################################## LDAP Module #################################&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# LDAP server configuration&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;ldap.servers &amp;quot;ldap:&#x2F;&#x2F;primary:389,ldap:&#x2F;&#x2F;backup:389&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;ldap.auth_mode &amp;quot;search+bind&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# TLS configuration&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;ldap.use_starttls yes&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;ldap.tls_ca_cert_path &amp;quot;&#x2F;path&#x2F;to&#x2F;ca.crt&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;ldap.tls_cert_path &amp;quot;&#x2F;path&#x2F;to&#x2F;client.crt&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;ldap.tls_key_path &amp;quot;&#x2F;path&#x2F;to&#x2F;client.key&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Search+bind mode settings&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;ldap.search_base &amp;quot;ou=users,dc=valkey,dc=io&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;ldap.search_filter &amp;quot;objectClass=person&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;ldap.search_attribute &amp;quot;uid&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;ldap.search_bind_dn &amp;quot;cn=readonly,dc=valkey,dc=io&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;ldap.search_bind_passwd &amp;quot;readonly_password&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Performance tuning&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;ldap.connection_pool_size 5&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;ldap.timeout_connection 5&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;ldap.timeout_ldap_operation 3&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;ldap.failure_detector_interval 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;################################## Common Settings #############################&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Memory and performance&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;maxmemory 4gb&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;maxmemory-policy allkeys-lru&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Persistence&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;save 900 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;save 300 10&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;save 60 10000&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Logging&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;loglevel notice&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;logfile &amp;quot;&#x2F;var&#x2F;log&#x2F;valkey&#x2F;valkey.log&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Load the file as follows:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; docker run&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -v&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &#x2F;valkey&#x2F;my-valkey-bundle.conf:&#x2F;usr&#x2F;local&#x2F;etc&#x2F;valkey&#x2F;valkey.conf&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    --name&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; my-valkey-bundle&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    valkey&#x2F;valkey-bundle&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    valkey-server &#x2F;usr&#x2F;local&#x2F;etc&#x2F;valkey&#x2F;valkey.conf&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;real-world-example-personalized-ad-platform&quot;&gt;Real-World Example: Personalized Ad Platform&lt;&#x2F;h2&gt;
&lt;p&gt;Consider a typical scenario: you&#x27;re building a recommendation engine for an ad platform that needs to process JSON data, perform similarity searches, and efficiently track user interactions. Traditionally, this would require integrating and maintaining multiple services, each with its own configuration, deployment, and scaling considerations. With &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt;, these capabilities are available out-of-the-box, all through a unified interface.&lt;&#x2F;p&gt;
&lt;p&gt;Let&#x27;s explore how we would go about creating a system that could handle complex user profiles, deliver personalized recommendations, prevent ad fatigue, and maintain enterprise grade security.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;managing-rich-user-profiles-with-valkey-json&quot;&gt;Managing Rich User Profiles with Valkey JSON&lt;&#x2F;h3&gt;
&lt;p&gt;First, we needed a flexible way to store user profiles. Valkey JSON proved perfect for this with its native JSON support. Here&#x27;s what our sample user profile might look like:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;json&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;{&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;  &amp;quot;user_id&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;u123456&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;  &amp;quot;personal&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;: {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    &amp;quot;name&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;Val Key&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    &amp;quot;email&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;valkey@valkey.io&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;  },&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;  &amp;quot;preferences&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;: {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    &amp;quot;categories&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;: [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;electronics&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;sports&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;],&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    &amp;quot;brands&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;: [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;nike&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;apple&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;  },&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;  &amp;quot;embedding&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;: [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;0.23&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0.45&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0.67&lt;&#x2F;span&gt;&lt;span&gt;]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The beauty of using Valkey JSON is how easily we can store a profile and later update individual fields. First, create the profile for the user with id &lt;code&gt;u123456&lt;&#x2F;code&gt;:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; JSON.SET user:u123456 . &amp;#39;{&amp;quot;user_id&amp;quot;: &amp;quot;u123456&amp;quot;,&amp;quot;personal&amp;quot;: {&amp;quot;name&amp;quot;: &amp;quot;Val Key&amp;quot;,&amp;quot;email&amp;quot;: &amp;quot;valkey@valkey.io&amp;quot;},&amp;quot;preferences&amp;quot;: {&amp;quot;categories&amp;quot;: [&amp;quot;electronics&amp;quot;, &amp;quot;sports&amp;quot;],&amp;quot;brands&amp;quot;: [&amp;quot;nike&amp;quot;, &amp;quot;apple&amp;quot;]},&amp;quot;embedding&amp;quot;: [0.23, 0.45, 0.67]}&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
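From application code, the same document is typically built as a native dict and serialized before being sent. A sketch of the serialization step only (the client call itself is omitted):

```python
import json

# The profile document shown above, as a native Python structure.
profile = {
    "user_id": "u123456",
    "personal": {"name": "Val Key", "email": "valkey@valkey.io"},
    "preferences": {
        "categories": ["electronics", "sports"],
        "brands": ["nike", "apple"],
    },
    "embedding": [0.23, 0.45, 0.67],
}

# This string is what you would pass as the value argument of:
#   JSON.SET user:u123456 . <payload>
payload = json.dumps(profile)
print(json.loads(payload)["preferences"]["categories"])  # ['electronics', 'sports']
```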
&lt;p&gt;Update user preferences by adding the automotive category.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; JSON.ARRAPPEND user:u123456 $.preferences.categories &amp;#39;&amp;quot;automotive&amp;quot;&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;1) (integer) 3&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h3 id=&quot;smart-recommendations-with-vector-search&quot;&gt;Smart Recommendations with Vector Search&lt;&#x2F;h3&gt;
&lt;p&gt;Here&#x27;s where things get interesting. We use Valkey Search to implement vector similarity matching for product recommendations. Each product and user preference is represented as a high-dimensional vector, allowing us to find similar items quickly.&lt;&#x2F;p&gt;
&lt;p&gt;Create a vector similarity index.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; FT.CREATE product_idx ON JSON &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;  PREFIX 1 product: &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;  SCHEMA &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    $.embedding AS embedding VECTOR HNSW 6 &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;      TYPE FLOAT32 &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;      DIM 128 &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;      DISTANCE_METRIC COSINE &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    $.category AS category TAG&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This setup allows us to find similar products while applying business rules such as category filters. The HNSW algorithm ensures we get results in milliseconds, even with millions of products.&lt;&#x2F;p&gt;
&lt;p&gt;Here&#x27;s how we can fetch similar products based on user preferences:&lt;&#x2F;p&gt;
&lt;p&gt;Get the preference vector for the user with id &lt;code&gt;u123456&lt;&#x2F;code&gt;. In a real client you would decode this JSON array and re-encode it as a binary FLOAT32 blob before passing it as the query parameter:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; JSON.GET user:u123456 $.embedding&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;quot;[[0.23,0.45,0.67]]&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Find similar products in the same category&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; FT.SEARCH product_idx &amp;quot;*=&amp;gt;[KNN 5 @embedding $user_vector] @category:{electronics}&amp;quot; &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    PARAMS 2 user_vector &amp;quot;$user_vector&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This query combines vector similarity search with category filtering, ensuring recommendations are both relevant and contextual.&lt;&#x2F;p&gt;
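`DISTANCE_METRIC COSINE` ranks candidates by direction rather than magnitude. A pure-Python sketch of the distance the index computes (1 minus the cosine of the angle between the two vectors):

```python
import math

def cosine_distance(a, b):
    """1 - cos(angle): 0.0 for identical direction, 1.0 for orthogonal vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / norm

print(cosine_distance([1.0, 0.0], [2.0, 0.0]))  # 0.0 (same direction, any scale)
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 (orthogonal)
```

Smaller distances mean more similar preference vectors, which is why KNN returns the closest matches first.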
&lt;h3 id=&quot;preventing-ad-fatigue-with-bloom-filters&quot;&gt;Preventing Ad Fatigue with Bloom Filters&lt;&#x2F;h3&gt;
&lt;p&gt;Nobody likes seeing the same ad repeatedly. We use Valkey Bloom to efficiently track which users each ad has already been shown to. The beauty of Bloom filters is their space efficiency: we can track millions of impressions using minimal memory.&lt;&#x2F;p&gt;
&lt;p&gt;Reserve a filter and track ad impressions keyed by the ad id:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; BF.RESERVE ad:a789012 0.01 10000000&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; BF.ADD ad:a789012 &amp;quot;user:u123456&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(integer) 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Quick check before showing an ad&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; BF.EXISTS ad:a789012 &amp;quot;user:u123456&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(integer) 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Try a different user for the same Ad Id.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; BF.EXISTS ad:a789012 &amp;quot;user:u234567&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(integer) 0&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;With a false positive rate of just 1%, we maintain accuracy while using about 93% less memory compared to the &lt;code&gt;SET&lt;&#x2F;code&gt; data structure.&lt;&#x2F;p&gt;
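The footprint can be estimated from the standard Bloom filter sizing formula; this is a rough back-of-the-envelope sketch (pure math, not the module's exact memory layout):

```python
import math

def bloom_bits_per_element(fp_rate):
    """Optimal bits per element for a Bloom filter: -ln(p) / (ln 2)^2."""
    return -math.log(fp_rate) / (math.log(2) ** 2)

bits = bloom_bits_per_element(0.01)           # ~9.59 bits per tracked user
total_mb = bits * 10_000_000 / 8 / 1024 / 1024
print(round(bits, 2), round(total_mb, 1))     # roughly 11.4 MB for 10M impressions
```

Storing 10 million member strings in a plain SET would cost far more, which is where the memory savings come from.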
&lt;h3 id=&quot;securing-it-all-with-ldap-authentication&quot;&gt;Securing It All with LDAP Authentication&lt;&#x2F;h3&gt;
&lt;p&gt;Our Ad Platform needs to support multiple teams with different access levels - from account managers and content creators to data analysts and administrators. Valkey LDAP allows us to leverage existing corporate directories while implementing fine-grained access control.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;directory-structure-and-role-mapping&quot;&gt;Directory Structure and Role Mapping&lt;&#x2F;h4&gt;
&lt;p&gt;First, let&#x27;s set up our LDAP integration to map organizational roles:&lt;&#x2F;p&gt;
&lt;p&gt;Configure the LDAP connection to our imaginary LDAP server:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; CONFIG SET ldap.servers &amp;quot;ldaps:&#x2F;&#x2F;ldap.valkey.io:636&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; CONFIG SET ldap.search_base &amp;quot;ou=employees,dc=valkey,dc=io&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; CONFIG SET ldap.search_filter &amp;quot;(objectClass=user)&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; CONFIG SET ldap.search_attribute &amp;quot;uid&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Enable StartTLS for secure communication:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; CONFIG SET ldap.use_starttls yes&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; CONFIG SET ldap.tls_ca_cert_path &amp;quot;&#x2F;etc&#x2F;valkey&#x2F;certs&#x2F;ca.crt&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h4 id=&quot;role-based-access-control&quot;&gt;Role-Based Access Control&lt;&#x2F;h4&gt;
&lt;p&gt;We&#x27;ll create different access levels using Valkey ACLs that map to LDAP groups:&lt;&#x2F;p&gt;
&lt;p&gt;Account Managers - Can view and modify client campaigns&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; ACL SETUSER account_manager on resetpass +@read +@write -@admin &amp;gt;client_secret&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ~campaign:* &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ~client:* &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ~analytics:*&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Content Creators - Can manage ad content and view basic analytics&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; ACL SETUSER content_creator on resetpass +@read +@write -@admin &amp;gt;content_secret&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ~ad:* &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ~content:* &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    &amp;amp;analytics:basic:*&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Data Analysts - Read-only access to all analytics data&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; ACL SETUSER data_analyst on resetpass +@read -@write -@admin &amp;gt;analyst_secret&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ~analytics:* &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    &amp;amp;campaign:*&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
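&lt;p&gt;At any point you can confirm the effective rules for one of these users with &lt;code&gt;ACL GETUSER&lt;&#x2F;code&gt; (output abbreviated):&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; ACL GETUSER data_analyst&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; 1) &amp;quot;flags&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; 2) 1) &amp;quot;on&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;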
&lt;p&gt;System Administrators - Full access&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt; ACL SETUSER admin on resetpass +@all &amp;gt;admin_secret&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; enables modern application development for latency-sensitive and high-throughput workloads by combining essential tools in a single, maintainable container. Whether you&#x27;re building a startup MVP or scaling enterprise applications, &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; provides the foundation for efficient, reliable, and maintainable systems.&lt;&#x2F;p&gt;
&lt;p&gt;Ready to get started? Here&#x27;s how you can begin using &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; today:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Quick Start:&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Pull the latest valkey-bundle container: &lt;code&gt;docker pull valkey&#x2F;valkey-bundle&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Run your first instance: &lt;code&gt;docker run --name my-valkey-bundle -p 6379:6379 -d valkey&#x2F;valkey-bundle&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Connect and explore: &lt;code&gt;docker exec -it my-valkey-bundle valkey-cli&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;&lt;strong&gt;Next Steps:&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Experiment&lt;&#x2F;strong&gt;: Try the JSON, Bloom, Search, and LDAP examples from this guide&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Learn More&lt;&#x2F;strong&gt;: Visit the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;docs&quot;&gt;Valkey Documentation&lt;&#x2F;a&gt; for comprehensive guides&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Get Support&lt;&#x2F;strong&gt;: Join the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;discussions&quot;&gt;Valkey Community&lt;&#x2F;a&gt; for questions and discussions&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Contribute&lt;&#x2F;strong&gt;: Help improve &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt; by reporting issues or contributing features on &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-bundle&quot;&gt;GitHub&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Start building your next low-latency, high-performance application with &lt;em&gt;valkey-bundle&lt;&#x2F;em&gt;, your one-stop shop for modern data infrastructure needs.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Performance Optimization Methodology for Valkey - Part 1</title>
        <published>2025-05-27T00:00:00+00:00</published>
        <updated>2025-05-27T00:00:00+00:00</updated>
        
        <author>
          <name>
            lipzhu
          </name>
        </author>
        
        <author>
          <name>
            guowangy
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/performance-optimization-methodology-for-valkey/"/>
        <id>https://valkey.io/blog/performance-optimization-methodology-for-valkey/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/performance-optimization-methodology-for-valkey/">&lt;p&gt;Performance optimization is a multifaceted domain, particularly for high-performance systems like Valkey. While overall system performance depends on numerous factors including hardware specifications, OS configurations, network conditions, and deployment architectures, our work focuses specifically on optimizing Valkey&#x27;s performance at the CPU level.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;&#x2F;h2&gt;
&lt;p&gt;When examining software CPU optimization approaches, two fundamental strategies are generally recognized:&lt;&#x2F;p&gt;
&lt;h3 id=&quot;strategy-1-maximizing-parallelism&quot;&gt;Strategy 1: Maximizing Parallelism&lt;&#x2F;h3&gt;
&lt;p&gt;This strategy involves redesigning software architecture to fully leverage multiple CPU cores. By effectively distributing workloads across available computing resources, applications can achieve significant throughput improvements—a critical advantage as processor core counts continue to increase.&lt;&#x2F;p&gt;
&lt;p&gt;The I&#x2F;O threading model in Valkey exemplifies this approach. As described in the Valkey blog post &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;unlock-one-million-rps&#x2F;&quot;&gt;&quot;Unlock 1 Million RPS: Experience Triple the Speed with Valkey&quot;&lt;&#x2F;a&gt;, this architecture enables Valkey to offload operations to dedicated threads, allowing better utilization of available CPU cores. Rather than handling all operations in a single thread, this model intelligently delegates tasks to multiple threads, reducing bottlenecks and improving throughput. This enhancement has demonstrated impressive scalability, enabling near-linear performance scaling with additional cores.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;strategy-2-enhancing-cpu-efficiency&quot;&gt;Strategy 2: Enhancing CPU Efficiency&lt;&#x2F;h3&gt;
&lt;p&gt;This strategy focuses on maximizing performance within limited CPU resources through two complementary approaches:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Reducing Instruction Count&lt;&#x2F;strong&gt;: Eliminating redundant code and unnecessary operations to decrease the total work the CPU must perform.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Improving IPC&lt;&#x2F;strong&gt;: Optimizing how efficiently the processor executes instructions by addressing microarchitectural bottlenecks like cache misses, branch mispredictions, and memory access patterns.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;Through our analysis and optimization efforts, we&#x27;ve identified several key methodologies that deliver significant performance improvements: eliminating redundant code, reducing lock contention, and addressing false sharing. We&#x27;ve also explored other techniques including asynchronous processing, batch operations, and leveraging CPU-specific instructions such as SIMD vectorization.&lt;&#x2F;p&gt;
&lt;p&gt;While parallelism allows us to &quot;use more resources efficiently,&quot; efficiency optimizations enable us to &quot;do more with the same resources.&quot; Both approaches are essential in a comprehensive optimization strategy, particularly for mission-critical systems like Valkey.&lt;&#x2F;p&gt;
&lt;p&gt;This article focuses primarily on the second strategy, exploring methodical approaches to improving CPU efficiency with concrete examples from our contributions to the Valkey codebase.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;benchmarking-the-foundation-for-reliable-performance-optimization&quot;&gt;Benchmarking: The Foundation for Reliable Performance Optimization&lt;&#x2F;h2&gt;
&lt;p&gt;Reliable performance optimization requires consistent, reproducible measurements to evaluate whether code changes actually improve performance. Without methodical benchmarking in a controlled environment, it&#x27;s impossible to accurately quantify improvements or determine if optimizations introduce regressions in other areas.&lt;&#x2F;p&gt;
&lt;p&gt;To isolate CPU performance factors and eliminate variables that might affect measurements, we&#x27;ve implemented the following constraints:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Bare-metal servers&lt;&#x2F;strong&gt; rather than virtual machines to eliminate hypervisor overhead and contention,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Process pinning&lt;&#x2F;strong&gt; using taskset to pin Valkey to specific cores, preventing thread migration overhead,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Local network interfaces&lt;&#x2F;strong&gt; (loopback) for client-server communication to minimize network variability,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;High CPU utilization&lt;&#x2F;strong&gt; benchmark parameters to ensure we&#x27;re measuring true CPU performance limits.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;This controlled environment allows us to accurately attribute improvements to specific code optimizations, providing reliable measurements of throughput and latency improvements. For each optimization attempt, we establish baseline performance metrics, implement changes in isolation, and then re-measure to quantify the impact. This disciplined approach ensures that our optimizations deliver genuine benefits rather than illusory improvements that might disappear in production environments.&lt;&#x2F;p&gt;
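&lt;p&gt;As a concrete illustration of the second constraint, pinning can be done with &lt;code&gt;taskset&lt;&#x2F;code&gt;; the core numbers below are arbitrary and should be adapted to your machine&#x27;s topology:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;$ taskset -c 0 valkey-server --port 6379&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;$ taskset -c 2-5 valkey-benchmark -t set,get -n 1000000&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;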
&lt;h2 id=&quot;optimization-methodology&quot;&gt;Optimization Methodology&lt;&#x2F;h2&gt;
&lt;p&gt;In the following sections, we&#x27;ll share practical examples of optimization techniques applied to Valkey. These insights represent our modest contributions that might be helpful for others working on similar performance challenges.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;1-eliminating-redundant-code&quot;&gt;1. Eliminating Redundant Code&lt;&#x2F;h3&gt;
&lt;p&gt;Simplifying execution paths by removing redundant operations is a straightforward optimization approach, though identifying truly unnecessary code requires careful analysis.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;how-to-identify-redundant-code&quot;&gt;How to Identify Redundant Code&lt;&#x2F;h4&gt;
&lt;p&gt;The key to finding redundant code lies in having:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Representative Workloads&lt;&#x2F;strong&gt;: Test workloads that reflect real-world usage patterns,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Proper Profiling Tools&lt;&#x2F;strong&gt;: Tools like &lt;code&gt;perf&lt;&#x2F;code&gt; and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.intel.com&#x2F;content&#x2F;www&#x2F;us&#x2F;en&#x2F;developer&#x2F;tools&#x2F;oneapi&#x2F;vtune-profiler.html&quot;&gt;Intel® VTune™ Profiler&lt;&#x2F;a&gt; to identify hot code paths,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Systematic Code Review&lt;&#x2F;strong&gt;: Manual inspection of hot paths to find redundancies that automated tools might miss,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Trace-based Analysis&lt;&#x2F;strong&gt;: Execution traces that highlight repeated operations.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;h4 id=&quot;real-world-examples&quot;&gt;Real-World Examples&lt;&#x2F;h4&gt;
&lt;p&gt;Through detailed CPU cycle hotspot analysis, we identified redundant logic in how Valkey prepares client connections for write operations. By analyzing execution patterns during high-throughput benchmarks, we discovered opportunities to eliminate unnecessary function calls in critical paths.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Example 1: Optimizing Client Write Preparation&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;p&gt;In &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;670&quot;&gt;PR #670&lt;&#x2F;a&gt;, we found redundant calls to &lt;code&gt;prepareClientToWrite()&lt;&#x2F;code&gt; when multiple &lt;code&gt;addReply&lt;&#x2F;code&gt; operations were performed consecutively. By restructuring the code to call this function only when necessary, we eliminated redundant operations in a hot path.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Example 2: Improving List Command Efficiency&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Similarly, in &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;860&quot;&gt;PR #860&lt;&#x2F;a&gt;, we moved the &lt;code&gt;prepareClientToWrite()&lt;&#x2F;code&gt; call outside of a loop in the &lt;code&gt;lrange&lt;&#x2F;code&gt; command, preventing it from being called repeatedly for each list element.&lt;&#x2F;p&gt;
&lt;p&gt;These relatively simple code changes yielded measurable performance improvements because they affected code paths executed extremely frequently during normal operation.&lt;&#x2F;p&gt;
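&lt;p&gt;In schematic form, the &lt;code&gt;lrange&lt;&#x2F;code&gt; change looks roughly like the following C sketch; the iterator helper and the no-check reply variant are simplified stand-ins, not the literal patch:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;c&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&#x2F;* Before: each addReplyBulk() call re-ran the prepareClientToWrite() check. *&#x2F;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;while (rangelen--) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    addReplyBulk(c, nextElement(&amp;amp;iter));&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&#x2F;* After: perform the check once, then emit replies without re-checking. *&#x2F;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;if (prepareClientToWrite(c) != C_OK) return;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;while (rangelen--) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    addReplyBulkNoCheck(c, nextElement(&amp;amp;iter)); &#x2F;* hypothetical variant *&#x2F;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;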
&lt;h3 id=&quot;2-reduce-lock-contention&quot;&gt;2. Reduce Lock Contention&lt;&#x2F;h3&gt;
&lt;p&gt;When discussing lock overhead, we consider two aspects:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Protected code scope&lt;&#x2F;strong&gt;: The cost of operations within critical sections,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Lock implementation overhead&lt;&#x2F;strong&gt;: The cost of the synchronization mechanism itself.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;Due to Valkey&#x27;s single main thread design, there aren&#x27;t many complex mutex-protected critical sections. Therefore, we focus on the overhead of synchronization primitives themselves, which becomes significant when the protected work is minimal.&lt;&#x2F;p&gt;
&lt;p&gt;In Valkey, atomic operations are used to update global variables and shared data. While atomic operations are faster than mutex locks, they still introduce considerable overhead compared to non-atomic operations—particularly in high-throughput scenarios where these operations occur millions of times per second.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;real-world-examples-1&quot;&gt;Real-World Examples&lt;&#x2F;h4&gt;
&lt;p&gt;&lt;strong&gt;Thread-local Storage for Memory Tracking&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;p&gt;In &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;674&quot;&gt;PR #674&lt;&#x2F;a&gt;, we introduced thread-local storage variables to optimize memory tracking. Previously, Valkey used atomic operations to update the global &lt;code&gt;used_memory&lt;&#x2F;code&gt; variable tracking memory allocation across all threads.&lt;&#x2F;p&gt;
&lt;p&gt;Our profiling identified that most operations on this variable were writes occurring within the same thread, with the system only occasionally reading the total memory usage across threads.&lt;&#x2F;p&gt;
&lt;p&gt;By implementing thread-local variables for each thread to track its own memory usage, we eliminated atomic operations during frequent writes. Each thread now updates its local counter using regular (non-atomic) operations, with the global value computed only when needed by summing thread-local values.&lt;&#x2F;p&gt;
&lt;p&gt;This optimization pattern is particularly effective when:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Most operations occur within a single thread,&lt;&#x2F;li&gt;
&lt;li&gt;Values are written frequently but read infrequently,&lt;&#x2F;li&gt;
&lt;li&gt;Synchronization overhead is significant compared to the work being protected.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
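&lt;p&gt;A minimal C sketch of the thread-local pattern described above (the names are illustrative, and a production implementation would use relaxed atomics for the cross-thread reads rather than the plain loads shown here):&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;c&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;#define MAX_THREADS 16&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;static _Thread_local long long my_used_memory;  &#x2F;* per-thread counter *&#x2F;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;static long long *counters[MAX_THREADS];        &#x2F;* registered at thread start-up *&#x2F;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;void track_alloc(long long n) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    my_used_memory += n;  &#x2F;* hot path: a plain add, no atomic RMW *&#x2F;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;long long total_used_memory(void) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    long long sum = 0;    &#x2F;* cold path: occasionally sum all counters *&#x2F;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    for (int i = 0; i &amp;lt; MAX_THREADS; i++)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        if (counters[i]) sum += *counters[i];&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    return sum;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;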
&lt;h3 id=&quot;3-eliminating-false-sharing&quot;&gt;3. Eliminating False Sharing&lt;&#x2F;h3&gt;
&lt;p&gt;False sharing occurs when different threads access different variables located within the same CPU cache line (typically 64 bytes). Even though threads work with different variables, the hardware treats these accesses as conflicts because they share the same cache line.&lt;&#x2F;p&gt;
&lt;p&gt;When one thread modifies its variable, the entire cache line is invalidated for all other cores, forcing them to reload the cache line even though their own variables haven&#x27;t changed. This creates unnecessary cache coherence traffic and can significantly degrade performance.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;identifying-false-sharing&quot;&gt;Identifying False Sharing&lt;&#x2F;h4&gt;
&lt;p&gt;False sharing can be difficult to detect because it doesn&#x27;t cause functional issues. Signs include:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Unexplained Performance Scaling Issues&lt;&#x2F;strong&gt;: Poor scaling despite threads working independently,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;High Cache Coherence Traffic&lt;&#x2F;strong&gt;: Monitoring shows high rates of cache line invalidations,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Thread-dependent Performance Variations&lt;&#x2F;strong&gt;: Unusual performance patterns with different thread counts.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;Tools that help identify false sharing include:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;perf c2c&lt;&#x2F;code&gt; - A Linux performance tool specifically designed for detecting cache line contention,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.intel.com&#x2F;content&#x2F;www&#x2F;us&#x2F;en&#x2F;developer&#x2F;tools&#x2F;oneapi&#x2F;vtune-profiler.html&quot;&gt;Intel® VTune™ Profiler&lt;&#x2F;a&gt; with Memory Access Analysis,&lt;&#x2F;li&gt;
&lt;li&gt;Performance counter monitoring tools tracking cache coherence events.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;These tools have transformed false sharing from a difficult-to-diagnose problem into one that can be efficiently located and addressed.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;mitigation-strategies&quot;&gt;Mitigation Strategies&lt;&#x2F;h4&gt;
&lt;p&gt;When addressing false sharing, several approaches are available:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Data Structure Padding&lt;&#x2F;strong&gt;: Adding padding between variables accessed by different threads,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Cache Line Alignment&lt;&#x2F;strong&gt;: Aligning thread-specific data to cache line boundaries,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Thread-local Storage&lt;&#x2F;strong&gt;: Using thread-local variables instead of shared arrays,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Data Structure Redesign&lt;&#x2F;strong&gt;: Reorganizing data structures to group thread-specific data.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
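&lt;p&gt;The first two mitigation strategies can be sketched in a few lines of C11 (a hypothetical layout, not Valkey code):&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;c&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;#define CACHE_LINE 64&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&#x2F;* _Alignas rounds sizeof(padded_counter) up to 64 bytes, so each slot&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   occupies its own cache line and a write by one thread never&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   invalidates another thread&#x27;s slot. *&#x2F;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;typedef struct {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    _Alignas(CACHE_LINE) long long value;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;} padded_counter;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;static padded_counter counters[16];  &#x2F;* one slot per thread *&#x2F;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;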
&lt;h4 id=&quot;real-world-examples-2&quot;&gt;Real-World Examples&lt;&#x2F;h4&gt;
&lt;p&gt;&lt;strong&gt;Strategic False Sharing Mitigation&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;p&gt;In &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1179&quot;&gt;PR #1179&lt;&#x2F;a&gt;, we encountered false sharing in memory tracking counters used by both the main thread and I&#x2F;O threads.&lt;&#x2F;p&gt;
&lt;p&gt;Instead of completely eliminating all false sharing (which would have degraded performance in some scenarios), we implemented a nuanced solution:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;performance-optimization-methodology-for-valkey&#x2F;used-memory-thread-layout.png&quot; alt=&quot;Diagram of used memory thread layout showing false sharing mitigation&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;The reason we didn&#x27;t take the struct padding way is the performance degradation. Because when calling functions like zmalloc_used_memory() (called in main thread mostly), the main thread will cost more cycles to fetch more cache lines (3 vs 16 cache lines). Per our benchmarking, it will have a ~3% performance degradation depending on the specific benchmark.&quot;&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;Our solution focused on eliminating false sharing between the main thread and I&#x2F;O threads while allowing it to remain among I&#x2F;O threads:&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;We did some tradeoff, just fixing the false sharing between main thread and io-threads, while keeping the false sharing in the io-threads for memory aggregation, because the resource of main thread is the bottleneck but resources of io-threads can be scaled.&quot;&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;This example illustrates that performance optimization isn&#x27;t always about finding a theoretically perfect solution, but rather making intelligent tradeoffs based on real-world constraints and priorities. By focusing on the system&#x27;s actual bottleneck (the main thread), we achieved better overall performance despite leaving some false sharing in place.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;The methodologies presented here demonstrate that effective performance optimization requires both systematic analysis and thoughtful implementation. By focusing on eliminating redundancy, reducing synchronization overhead, and addressing false sharing, we&#x27;ve achieved meaningful improvements in Valkey&#x27;s performance.&lt;&#x2F;p&gt;
&lt;p&gt;These optimizations demonstrate that even in mature, well-engineered systems, there are opportunities for performance improvement when guided by careful measurement and analysis. The key lesson is that understanding hardware characteristics and system bottlenecks enables targeted optimizations with substantial impact, even when implementation changes are relatively small.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Valkey Modules Rust SDK updates</title>
        <published>2025-05-20T01:01:01+00:00</published>
        <updated>2025-05-20T01:01:01+00:00</updated>
        
        <author>
          <name>
            dmitrypol
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/valkey-modules-rust-sdk-updates/"/>
        <id>https://valkey.io/blog/valkey-modules-rust-sdk-updates/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/valkey-modules-rust-sdk-updates/">&lt;p&gt;In an earlier &lt;a href=&quot;&#x2F;blog&#x2F;modules-101&#x2F;&quot;&gt;article&lt;&#x2F;a&gt; we explored the process of building Valkey Modules to enable developers to add features such as new commands and data types to Valkey without modifying its core.
We also introduced the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkeymodule-rs&quot;&gt;Valkey Modules Rust SDK&lt;&#x2F;a&gt; demonstrating how to use it to create a basic module.
In this follow-up article, we’ll dive deeper into the SDK and highlight several new features and improvements introduced over the past year.
This article assumes that the reader is already familiar with Rust and Valkey modules.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-is-the-valkey-modules-rust-sdk&quot;&gt;What is the Valkey Modules Rust SDK?&lt;&#x2F;h2&gt;
&lt;p&gt;The SDK is based on the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;RedisLabsModules&#x2F;redismodule-rs&quot;&gt;Redis Modules Rust SDK&lt;&#x2F;a&gt; and provides abstraction APIs on top of Valkey&#x27;s own modules API.
For those familiar with Rust development, the SDK is a Rust crate that can be added to the &lt;code&gt;Cargo.toml&lt;&#x2F;code&gt; file like any other Rust dependency.
It requires the underlying Valkey server to expose the appropriate module APIs, but it allows writing Valkey modules in Rust without raw pointers or unsafe code.
The recently released &lt;a href=&quot;&#x2F;blog&#x2F;introducing-bloom-filters&#x2F;&quot;&gt;Bloom Filters module&lt;&#x2F;a&gt; is built using this SDK and several of the developers who worked on the module contributed to the SDK.
Let&#x27;s dive into the new features.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;client&quot;&gt;Client&lt;&#x2F;h2&gt;
&lt;p&gt;We begin with enhancements that give developers deeper insight into the clients connected to Valkey.
The &lt;code&gt;Context&lt;&#x2F;code&gt; struct has been extended with several new functions that allow retrieving client-specific data, such as the client name, username, or IP address.
These functions are Rust wrappers around the &lt;code&gt;Module_GetClient*&lt;&#x2F;code&gt; functions in the underlying Valkey Module API.
Most of these new functions return &lt;code&gt;ValkeyResult&lt;&#x2F;code&gt; so that the module developer can handle the error appropriately.
The functions can be called for the current client or by specifying &lt;code&gt;client_id&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;rust&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;fn&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; my_client_cmd&lt;&#x2F;span&gt;&lt;span&gt;(ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;: &amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Context&lt;&#x2F;span&gt;&lt;span&gt;, _args&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; Vec&lt;&#x2F;span&gt;&lt;span&gt;&amp;lt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ValkeyString&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt;)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; -&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ValkeyResult&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    let&lt;&#x2F;span&gt;&lt;span&gt; client_id&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;get_client_id&lt;&#x2F;span&gt;&lt;span&gt;();&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    let&lt;&#x2F;span&gt;&lt;span&gt; username&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;get_client_username_by_id&lt;&#x2F;span&gt;&lt;span&gt;(client_id);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;    Ok&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ValkeyValue&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;from&lt;&#x2F;span&gt;&lt;span&gt;(username&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;to_string&lt;&#x2F;span&gt;&lt;span&gt;()))&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;valkey_module!&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    ...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    commands&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span&gt; [&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;my_client_cmd&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, my_client_cmd,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;],&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;auth-callback&quot;&gt;Auth callback&lt;&#x2F;h2&gt;
&lt;p&gt;Another key improvement is support for the authentication callbacks introduced in Valkey 7.2.
Thanks to the new features in the SDK, Rust modules can now integrate directly with Valkey&#x27;s auth flow, making it possible to implement custom authentication logic or enforce security policies on a per-client basis.
One potential use case is to combine it with the &lt;code&gt;ctx.get_client_ip&lt;&#x2F;code&gt; function described above to allow certain users access only from specific IP addresses.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;rust&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;fn&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; auth_callback&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;: &amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Context&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    username&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ValkeyString&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    _password&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ValkeyString&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; -&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; Result&lt;&#x2F;span&gt;&lt;span&gt;&amp;lt;c_int,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ValkeyError&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    if&lt;&#x2F;span&gt;&lt;span&gt; ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;authenticate_client_with_acl_user&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;amp;&lt;&#x2F;span&gt;&lt;span&gt;username)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; ==&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; Status&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Ok&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        let&lt;&#x2F;span&gt;&lt;span&gt; _client_ip&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;get_client_ip&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;?&lt;&#x2F;span&gt;&lt;span&gt;;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        ...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        return&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; Ok&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;AUTH_HANDLED&lt;&#x2F;span&gt;&lt;span&gt;);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;    Ok&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;AUTH_NOT_HANDLED&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;valkey_module!&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    ...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    auth&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span&gt; [auth_callback],&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;preload-validation&quot;&gt;Preload validation&lt;&#x2F;h2&gt;
&lt;p&gt;While the &lt;code&gt;valkey_module!&lt;&#x2F;code&gt; macro already provided an &lt;code&gt;init&lt;&#x2F;code&gt; callback to execute custom code during module load, it ran at the very end of the load, after new commands and data types had already been created.
That can be useful, but what if we want to stop the module load before any of that happens?
For example, we might need to restrict a module to load only on specific versions of Valkey.
That&#x27;s where the optional &lt;code&gt;preload&lt;&#x2F;code&gt; callback comes in.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;rust&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;fn&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; preload_fn&lt;&#x2F;span&gt;&lt;span&gt;(ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;: &amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Context&lt;&#x2F;span&gt;&lt;span&gt;, _args&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;: &amp;amp;&lt;&#x2F;span&gt;&lt;span&gt;[&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ValkeyString&lt;&#x2F;span&gt;&lt;span&gt;])&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; -&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; Status&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    let&lt;&#x2F;span&gt;&lt;span&gt; _version&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;get_server_version&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;unwrap&lt;&#x2F;span&gt;&lt;span&gt;();&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;    &#x2F;&#x2F; respond with Status::Ok or Status::Err to prevent loading&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;    Status&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Ok&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;valkey_module!&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    ...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    preload&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span&gt; preload_fn,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    commands&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span&gt; [],&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;filters&quot;&gt;Filters&lt;&#x2F;h2&gt;
&lt;p&gt;To execute custom code before specific Valkey commands, we can use command filters, which are now supported in the SDK.
Filters can be used to replace default commands with custom ones or to modify their arguments.
Thanks to the abstractions provided by the SDK, we simply create a Rust function and register it in the &lt;code&gt;valkey_module!&lt;&#x2F;code&gt; macro.
A note of caution: since filters run before every command, this code needs to be optimized for performance.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;rust&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;fn&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; my_filter_fn&lt;&#x2F;span&gt;&lt;span&gt;(ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;: *mut&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; RedisModuleCommandFilterCtx&lt;&#x2F;span&gt;&lt;span&gt;) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    let&lt;&#x2F;span&gt;&lt;span&gt; cf_ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; CommandFilterCtx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;new&lt;&#x2F;span&gt;&lt;span&gt;(ctx);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;    &#x2F;&#x2F; check the number of arguments&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    if&lt;&#x2F;span&gt;&lt;span&gt; cf_ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;args_count&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; !=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 3&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        return&lt;&#x2F;span&gt;&lt;span&gt;;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;    &#x2F;&#x2F; get which command is being executed&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    let&lt;&#x2F;span&gt;&lt;span&gt; _cmd&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; cf_ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;cmd_get_try_as_str&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;unwrap&lt;&#x2F;span&gt;&lt;span&gt;();&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;    &#x2F;&#x2F; grab various args passed to the command&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    let&lt;&#x2F;span&gt;&lt;span&gt; _all_args&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; cf_ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;get_all_args_wo_cmd&lt;&#x2F;span&gt;&lt;span&gt;();&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;    &#x2F;&#x2F; replace command&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    cf_ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;arg_replace&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;0&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;custom_cmd&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;valkey_module!&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    ...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    filters&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span&gt; [&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        [my_filter_fn,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; VALKEYMODULE_CMDFILTER_NOSELF&lt;&#x2F;span&gt;&lt;span&gt;],&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;new-event-handlers&quot;&gt;New event handlers&lt;&#x2F;h2&gt;
&lt;p&gt;Reacting to server events can be very important to a module&#x27;s behavior.
The SDK has expanded its support for registering event handlers, allowing developers to hook into more server-side events.
We can use this to execute our own code on client connect&#x2F;disconnect, server shutdown, or specific key events.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;rust&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;#[client_changed_event_handler]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;fn&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; client_changed_event_handler&lt;&#x2F;span&gt;&lt;span&gt;(_ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;: &amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Context&lt;&#x2F;span&gt;&lt;span&gt;, client_event&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ClientChangeSubevent&lt;&#x2F;span&gt;&lt;span&gt;) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    match&lt;&#x2F;span&gt;&lt;span&gt; client_event {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;        ClientChangeSubevent&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Connected&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&amp;gt;&lt;&#x2F;span&gt;&lt;span&gt; {}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;        ClientChangeSubevent&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Disconnected&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&amp;gt;&lt;&#x2F;span&gt;&lt;span&gt; {}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;#[shutdown_event_handler]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;fn&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; shutdown_event_handler&lt;&#x2F;span&gt;&lt;span&gt;(_ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;: &amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Context&lt;&#x2F;span&gt;&lt;span&gt;, _event&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; u64&lt;&#x2F;span&gt;&lt;span&gt;) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    ...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;#[key_event_handler]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;fn&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; key_event_handler&lt;&#x2F;span&gt;&lt;span&gt;(ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;: &amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Context&lt;&#x2F;span&gt;&lt;span&gt;, key_event&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; KeyChangeSubevent&lt;&#x2F;span&gt;&lt;span&gt;) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    match&lt;&#x2F;span&gt;&lt;span&gt; key_event {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;        KeyChangeSubevent&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Deleted&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&amp;gt;&lt;&#x2F;span&gt;&lt;span&gt; {}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;        KeyChangeSubevent&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Evicted&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&amp;gt;&lt;&#x2F;span&gt;&lt;span&gt; {}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;        KeyChangeSubevent&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Overwritten&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&amp;gt;&lt;&#x2F;span&gt;&lt;span&gt; {}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;        KeyChangeSubevent&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Expired&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&amp;gt;&lt;&#x2F;span&gt;&lt;span&gt; {}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;custom-acl-categories-support&quot;&gt;Custom ACL categories support&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey 8.0 introduced support for custom ACL categories, which simplifies access control for the custom commands introduced in a module.
To use this, we need to enable &lt;code&gt;required-features = [&quot;min-valkey-compatibility-version-8-0&quot;]&lt;&#x2F;code&gt; in &lt;code&gt;Cargo.toml&lt;&#x2F;code&gt; and register the new categories in the &lt;code&gt;valkey_module!&lt;&#x2F;code&gt; macro.
Then we can assign our custom commands to these custom ACL categories.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;rust&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;valkey_module!&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    ...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    acl_categories&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span&gt; [&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;        &amp;quot;acl_one&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;        &amp;quot;acl_two&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ],&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    commands&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span&gt; [&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;cmd1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, cmd1,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;write&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;  0&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;acl_one&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;],&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;cmd2&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, cmd2,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;acl_one acl_two&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;],&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;validating-rejecting-config-set&quot;&gt;Validating &#x2F; Rejecting CONFIG SET&lt;&#x2F;h2&gt;
&lt;p&gt;Configuration flexibility is important, but so is validation.
The SDK now supports specifying optional callback functions to validate or reject custom configurations.
This is available for &lt;code&gt;String&lt;&#x2F;code&gt;, &lt;code&gt;i64&lt;&#x2F;code&gt;, &lt;code&gt;bool&lt;&#x2F;code&gt;, and &lt;code&gt;enum&lt;&#x2F;code&gt; configs.&lt;br &#x2F;&gt;
Here is an example of such validation for an &lt;code&gt;i64&lt;&#x2F;code&gt; custom config.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;rust&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;lazy_static!&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    static ref&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; CONFIG_I64&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ValkeyGILGuard&lt;&#x2F;span&gt;&lt;span&gt;&amp;lt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;i64&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ValkeyGILGuard&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;default&lt;&#x2F;span&gt;&lt;span&gt;();&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;fn&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; on_i64_config_set&lt;&#x2F;span&gt;&lt;span&gt;&amp;lt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;G&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; T&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ConfigurationValue&lt;&#x2F;span&gt;&lt;span&gt;&amp;lt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;i64&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt;&amp;gt;(&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    config_ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;: &amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ConfigurationContext&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    _name&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;: &amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;str&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    val&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;: &amp;amp;&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;static T&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; -&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; Result&lt;&#x2F;span&gt;&lt;span&gt;&amp;lt;(),&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ValkeyError&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    if&lt;&#x2F;span&gt;&lt;span&gt; val&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;get&lt;&#x2F;span&gt;&lt;span&gt;(config_ctx)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; ==&lt;&#x2F;span&gt;&lt;span&gt; custom_logic_here {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;        log_notice&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;log message here&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;        Err&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ValkeyError&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Str&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;error message here &amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;))&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; else&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;        Ok&lt;&#x2F;span&gt;&lt;span&gt;(())&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;valkey_module!&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    configurations&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span&gt; [&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;        i64&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span&gt; [&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;my_i64&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;amp;*&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;CONFIG_I64&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 10&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 1000&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ConfigurationFlags&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;DEFAULT&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; Some&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Box&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;new&lt;&#x2F;span&gt;&lt;span&gt;(on_configuration_changed)),&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; Some&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Box&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;new&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;on_i64_config_set&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span&gt;&amp;lt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ValkeyString&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; 
ValkeyGILGuard&lt;&#x2F;span&gt;&lt;span&gt;&amp;lt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;i64&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt;&amp;gt;))],&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        ],&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        ...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;defrag&quot;&gt;Defrag&lt;&#x2F;h2&gt;
&lt;p&gt;For memory-sensitive applications, defragmentation is essential.
The SDK now offers a safe and idiomatic Rust abstraction over the defrag API for custom data types.
The new &lt;code&gt;Defrag&lt;&#x2F;code&gt; struct abstracts away the raw C FFI calls.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;rust&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;static&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; MY_VALKEY_TYPE&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ValkeyType&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ValkeyType&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;new&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    &amp;quot;mytype123&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    0&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;    raw&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;RedisModuleTypeMethods&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        ...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        defrag&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; Some&lt;&#x2F;span&gt;&lt;span&gt;(defrag),&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    },&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;unsafe extern&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;C&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; fn&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; defrag&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    defrag_ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;: *mut&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; raw&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;RedisModuleDefragCtx&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    _from_key&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;: *mut&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; RedisModuleString&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    value&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;: *mut *mut&lt;&#x2F;span&gt;&lt;span&gt; c_void,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; -&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; i32&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    let&lt;&#x2F;span&gt;&lt;span&gt; defrag&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; Defrag&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;new&lt;&#x2F;span&gt;&lt;span&gt;(defrag_ctx);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    ...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    0&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;valkey_module!&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    ...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    data_types&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span&gt; [&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;        MY_VALKEY_TYPE&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ],&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    ...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;redis-support&quot;&gt;Redis support&lt;&#x2F;h2&gt;
&lt;p&gt;Need your module to work with both Valkey and recent versions of Redis?
The SDK includes a compatibility feature flag to ensure your module runs on both Valkey and Redis.
Enable the &lt;code&gt;use-redismodule-api&lt;&#x2F;code&gt; feature so that the module initializes through the RedisModule API for backwards compatibility.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;rust&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;[features]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;use-&lt;&#x2F;span&gt;&lt;span&gt;redismodule&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;-&lt;&#x2F;span&gt;&lt;span&gt;api&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;valkey-module&#x2F;use-redismodule-api&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;default&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; []&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;cargo build&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; --&lt;&#x2F;span&gt;&lt;span&gt;release&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; --&lt;&#x2F;span&gt;&lt;span&gt;features&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; use-&lt;&#x2F;span&gt;&lt;span&gt;redismodule&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;-&lt;&#x2F;span&gt;&lt;span&gt;api&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;unit-tests-and-memory-allocation&quot;&gt;Unit tests and memory allocation&lt;&#x2F;h2&gt;
&lt;p&gt;This feature enables writing unit tests that run outside of Valkey or Redis.
Instead of the Valkey allocator, test builds rely on the system allocator.&lt;br &#x2F;&gt;
Unit tests allow much more granular testing and execute much faster.&lt;br &#x2F;&gt;
The core logic lives in &lt;code&gt;alloc.rs&lt;&#x2F;code&gt;, but developers only need to enable this feature in the module&#x27;s &lt;code&gt;Cargo.toml&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;rust&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;[features]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;enable&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;-&lt;&#x2F;span&gt;&lt;span&gt;system&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;-&lt;&#x2F;span&gt;&lt;span&gt;alloc&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;valkey-module&#x2F;system-alloc&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;cargo test&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; --&lt;&#x2F;span&gt;&lt;span&gt;features enable&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;-&lt;&#x2F;span&gt;&lt;span&gt;system&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;-&lt;&#x2F;span&gt;&lt;span&gt;alloc&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;The Valkey Modules Rust SDK has seen exciting improvements over the past year, making it easier to build powerful Valkey extensions.
But we are not stopping there.
Some of the ideas for future development include &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkeymodule-rs&#x2F;issues&#x2F;202&quot;&gt;mock context support for unit testing&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkeymodule-rs&#x2F;issues&#x2F;203&quot;&gt;enhanced context access within filters&lt;&#x2F;a&gt;, and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkeymodule-rs&#x2F;issues&#x2F;204&quot;&gt;environment specific configs&lt;&#x2F;a&gt; to streamline development and testing.
Additionally, the introduction of &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkeymodule-rs&#x2F;issues&#x2F;205&quot;&gt;crontab scheduling&lt;&#x2F;a&gt; will allow executing custom logic on a defined schedule using &lt;code&gt;cron_event_handler&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;We hope this overview helped you understand how to leverage the SDK and inspired you to explore what&#x27;s possible with Valkey modules.
Stay tuned for future updates.&lt;&#x2F;p&gt;
&lt;p&gt;We also want to express appreciation to the engineers who contributed to the SDK in the past year - &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;KarthikSubbarao&quot;&gt;KarthikSubbarao&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;dmitrypol&quot;&gt;dmitrypol&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;sachinvmurthy&quot;&gt;sachinvmurthy&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;zackcam&quot;&gt;zackcam&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;YueTang-Vanessa&quot;&gt;YueTang-Vanessa&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;hahnandrew&quot;&gt;hahnandrew&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;Mkmkme&quot;&gt;Mkmkme&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;useful-links&quot;&gt;Useful links&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&quot;&gt;Valkey repo&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkeymodule-rs&quot;&gt;Valkey Modules Rust SDK&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;code.visualstudio.com&#x2F;docs&#x2F;languages&#x2F;rust&quot;&gt;Rust in VS Code&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Upgrade Stories from the Community, Volume 1:</title>
        <published>2025-05-14T01:01:01+00:00</published>
        <updated>2025-05-14T01:01:01+00:00</updated>
        
        <author>
          <name>
            kyledvs
          </name>
        </author>
        
        <author>
          <name>
            nigel
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/upgrade-stories-vol1/"/>
        <id>https://valkey.io/blog/upgrade-stories-vol1/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/upgrade-stories-vol1/">&lt;h2 id=&quot;two-new-valkey-users-describe-what-it-s-really-like-to-upgrade&quot;&gt;Two new Valkey users describe what it&#x27;s really like to upgrade&lt;&#x2F;h2&gt;
&lt;p&gt;Many potential Valkey users have told the project that they&#x27;re interested in hearing more stories about companies that have decided to migrate to Valkey. This blog is the first in a series that will share our users&#x27; stories. Two organizations, Muse and 4th Whale Marketing, share a similar story: they both have Valkey in a critical place in their applications, they switched engines quickly with zero issues, and they saved money.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey is a multi-vendor, vendor-neutral project. With the support of a large and diverse community, the project is innovating at a rapid pace. In September 2024, we released Valkey 8.0, which brought performance improvements. Six months later in March 2025, Valkey 8.1 added memory efficiency improvements, extended observability, and more performance improvements. Further, it has a permissive, open source license allowing anyone to use, modify, and provide Valkey as a Service. This allows organizations to make changes that best fit their values, budgets, and technical constraints. Valkey provides choice and prevents lock-in by being offered across many vendors and Linux distributions, with no licensing costs.&lt;&#x2F;p&gt;
&lt;p&gt;According to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.museml.com&#x2F;&quot;&gt;Muse&#x27;s website&lt;&#x2F;a&gt;, they are “the first intelligent creative assistant and inspiration studio for storytellers and world builders.” Garth Henson, Director of Engineering at Muse, says they use Valkey in several ways:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Caching – “We utilize standard HTML and payload caching to offload redundant data retrieval and computation, improving performance and efficiency.”&lt;&#x2F;li&gt;
&lt;li&gt;Pub&#x2F;Sub Messaging – “Valkey’s topic-based subscriptions drive our event-driven workflows across multiple system layers and support real-time WebSocket client communication.”&lt;&#x2F;li&gt;
&lt;li&gt;Lua Scripting – “We leverage Valkey’s single-threaded Lua execution for complex and multi-stage data retrievals, ensuring consistency while avoiding concurrency issues and race conditions.”&lt;&#x2F;li&gt;
&lt;li&gt;Taxonomies &amp;amp; Tagging Strategies – “The ability to handle massive lists at low cost has allowed us to efficiently implement taxonomies and data mappings, making retrieval seamless.”&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;One of the key reasons Muse chose Valkey was that it is open source. Garth says, “Redis’ licensing change prompted us to seek a truly open source alternative that aligns with our values.” They also mentioned the cost and performance as well as Valkey being a drop-in replacement as reasons they upgraded.&lt;&#x2F;p&gt;
&lt;p&gt;For them, the change was seamless. They learned about Valkey on a Thursday, began evaluating the next Monday and were fully migrated by Wednesday with zero downtime. That’s four working days from idea to production. They said that so far they’ve noticed benefits of cost efficiency, storage optimization, and performance stability since upgrading to Valkey.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.4thwhale.com&#x2F;&quot;&gt;4th Whale Marketing&lt;&#x2F;a&gt; is a Montreal-based company specializing in digital affiliate marketing. Founded in 2004, their seasoned team combines many years of experience in digital marketing, optimization, content creation, design, and web development.&lt;&#x2F;p&gt;
&lt;p&gt;They use Valkey, “to store data that changes regularly (every 10 seconds or less) and needs to be available very quickly and often with the latest update possible.” The impetus for the change was that their cloud provider made it easy and cost effective which resulted in 4th Whale saving $18k USD per year. It took them 20 minutes to upgrade one instance through tooling from their cloud provider. Once everything was tested and working as expected, they upgraded their other two instances.&lt;&#x2F;p&gt;
&lt;p&gt;This story of upgrading to Valkey with minimal friction and seeing great benefits is not unique to these two organizations. One provider of Valkey as a Service has shared similar experiences from companies like Tubi and Nextdoor on their testimonials page. Similarly, a year in review for Valkey shows even more stories from organizations like Verizon, Fedora, and AlmaLinux. The move and savings these organizations experienced are only possible in a competitive, vendor-neutral environment with a level playing field. Users can deploy Valkey wherever, whenever. Vendors can build services on top of Valkey quickly and at a low cost. As a Linux Foundation project, Valkey has no motivation to lock users in because it is controlled by no single company and will never change its license.&lt;&#x2F;p&gt;
&lt;p&gt;Are you using Valkey? We&#x27;d love to include you in this series and list your company logo on this site. Please contact us by reaching out on Slack.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Introducing Bloom Filters for Valkey</title>
        <published>2025-04-09T01:01:01+00:00</published>
        <updated>2025-04-09T01:01:01+00:00</updated>
        
        <author>
          <name>
            karthiksubbarao
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/introducing-bloom-filters/"/>
        <id>https://valkey.io/blog/introducing-bloom-filters/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/introducing-bloom-filters/">&lt;p&gt;The Valkey project is introducing Bloom Filters as a new data type via &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-bloom&#x2F;&quot;&gt;valkey-bloom&lt;&#x2F;a&gt; (BSD-3 licensed), an official Valkey Module which is compatible with Valkey versions &amp;gt;= 8.0. Bloom filters provide efficient, large-scale membership testing, improving performance and offering significant memory savings for high-volume applications.&lt;&#x2F;p&gt;
&lt;p&gt;As an example, to handle advertisement deduplication workloads and answer the question, &quot;Has this customer seen this ad before?&quot;, Valkey developers could use the SET data type.
This is done by adding the customer IDs (of those who viewed an ad) into a &lt;code&gt;SET&lt;&#x2F;code&gt; object representing a particular advertisement. However, the problem with this approach is high memory usage since every item in the set is allocated.
This article demonstrates how using the bloom filter data type from valkey-bloom can achieve significant memory savings, more than 93% in our example workload, while exploring its implementation, technical details, and practical recommendations.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;&#x2F;h2&gt;
&lt;p&gt;A bloom filter is a space-efficient probabilistic data structure that supports adding elements and checking whether elements were previously added. False positives are possible, where a filter incorrectly indicates that an element exists even though it was not added.
However, Bloom Filters guarantee that false negatives do not occur, meaning that an element that was added successfully will never be reported as not existing.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;bloomfilter_bitvector.png&quot; alt=&quot;Bloom Filter Bit Vector&quot; &#x2F;&gt;
&lt;em&gt;Image taken from &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Bloom_filter#&#x2F;media&#x2F;File:Bloom_filter.svg&quot;&gt;source&lt;&#x2F;a&gt;&lt;&#x2F;em&gt;&lt;&#x2F;p&gt;
&lt;p&gt;When adding an item to a bloom filter, K different hash functions compute K corresponding bits from the bit vector, which are then set to 1.
Checking existence involves the same hash functions - if any bit is 0, the item is definitely absent; if all bits are 1, the item likely exists (with a defined false positive probability).
This bit-based approach, rather than full item allocation, makes bloom filters very space efficient, with the trade-off being potential false positives.&lt;&#x2F;p&gt;
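The add/check mechanics described above can be sketched in a few lines of Python. This is an illustrative toy only (the class, hash-salting scheme, and all names are invented here), not how valkey-bloom implements its filters:

```python
import hashlib

class BloomFilter:
    # Minimal illustrative sketch, not the valkey-bloom implementation.
    # One byte per position keeps the code free of bit-twiddling; a real
    # filter packs 8 positions per byte.
    def __init__(self, num_bits, num_hashes):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)

    def _positions(self, item):
        # Derive K positions by salting a single hash function.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        # Any zero bit means definitely absent; all ones means
        # likely present (with some false-positive probability).
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter(num_bits=10_000, num_hashes=7)
bf.add("customer-42")
print(bf.might_contain("customer-42"))   # True: no false negatives
print(bf.might_contain("customer-99"))
```

Note that checking an added item always returns true, while an absent item returns true only in the rare false-positive case.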
&lt;p&gt;Valkey-Bloom introduces bloom filters as a new data type to Valkey, providing both scalable and non-scalable variants.
It is API compatible with the bloom filter command syntax of the official Valkey client libraries including valkey-py, valkey-java, valkey-go (as well as the equivalent Redis libraries).&lt;&#x2F;p&gt;
&lt;h2 id=&quot;data-type-overview&quot;&gt;Data type overview&lt;&#x2F;h2&gt;
&lt;p&gt;The &quot;Bloom Object&quot; is the main bloom data type structure. It is created by any bloom filter creation command and can act as either a &quot;scaling bloom filter&quot; or a &quot;non scaling bloom filter&quot;, depending on the user configuration.
It consists of a vector of &quot;Sub Filters&quot; with length &amp;gt;= 1 in case of scaling and only 1 in case of non scaling.&lt;&#x2F;p&gt;
&lt;p&gt;The &quot;Sub Filter&quot; is an inner structure which is created and used within the &quot;Bloom Object&quot;. It tracks the capacity, number of items added, and an instance of a Bloom Filter (of the specified properties).&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;bloomfilter_datatype.png&quot; alt=&quot;bloom filter data type&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Non Scaling&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;p&gt;When non-scaling filters reach their capacity, if a user tries to add a new&#x2F;unique item, an error is returned.
You can create a non scaling bloom filter using &lt;code&gt;BF.RESERVE&lt;&#x2F;code&gt; or &lt;code&gt;BF.INSERT&lt;&#x2F;code&gt; commands.
Example:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;BF.RESERVE &amp;lt;filter-name&amp;gt; &amp;lt;error-rate&amp;gt; &amp;lt;capacity&amp;gt; NONSCALING&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Scaling&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;p&gt;When scaling filters reach their capacity, if a user adds an item to the bloom filter, a new sub filter is created and added to the vector of sub filters.
This new bloom sub filter will have a larger capacity (previous_bloomfilter_capacity * expansion_rate of the bloom filter).
When checking whether an item exists on a scaled out bloom filter (&lt;code&gt;BF.EXISTS&lt;&#x2F;code&gt;&#x2F;&lt;code&gt;BF.MEXISTS&lt;&#x2F;code&gt;), we look through each filter (from oldest to newest) in the sub filter vector and perform a check operation on each one.
Similarly, to add a new item to the bloom filter, we first check all the sub filters to see whether the item already exists, and add it to the newest sub filter only if it does not.
Any default creation as a result of &lt;code&gt;BF.ADD&lt;&#x2F;code&gt;, &lt;code&gt;BF.MADD&lt;&#x2F;code&gt;, &lt;code&gt;BF.INSERT&lt;&#x2F;code&gt; will be a scalable bloom filter.&lt;&#x2F;p&gt;
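The scale-out behavior described above can be modeled with a short sketch. Here a plain Python set stands in for each sub filter's bit vector so the example can focus on the scaling logic; all names are invented for illustration and nothing here reflects valkey-bloom's internals:

```python
class SubFilter:
    # Stand-in for one fixed-capacity sub filter; a Python set replaces
    # the real bit vector so the sketch can focus on scale-out logic.
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = set()

class ScalingBloomFilter:
    def __init__(self, capacity, expansion_rate=2):
        self.expansion_rate = expansion_rate
        self.sub_filters = [SubFilter(capacity)]

    def exists(self, item):
        # Check every sub filter, from oldest to newest.
        return any(item in sf.items for sf in self.sub_filters)

    def add(self, item):
        if self.exists(item):
            return
        current = self.sub_filters[-1]
        if len(current.items) >= current.capacity:
            # Scale out: the new sub filter gets
            # previous capacity * expansion_rate.
            current = SubFilter(current.capacity * self.expansion_rate)
            self.sub_filters.append(current)
        current.items.add(item)

sbf = ScalingBloomFilter(capacity=2, expansion_rate=2)
for item in ["a", "b", "c", "d", "e"]:
    sbf.add(item)
print([sf.capacity for sf in sbf.sub_filters])  # [2, 4]
print(sbf.exists("c"))  # True
```

After the first two items fill the initial sub filter, the third insert triggers a scale-out, so the capacities grow geometrically just as the article describes.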
&lt;p&gt;&lt;strong&gt;Common Bloom filter properties&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Capacity - The number of unique items that can be added before a scale-out occurs (in the case of scalable bloom filters) or before any command that inserts a new item returns an error (in the case of non-scalable bloom filters).&lt;&#x2F;p&gt;
&lt;p&gt;False Positive Rate (FP) - The rate that controls the probability of item add&#x2F;exists operations being false positives. Example: 0.001 means 1 in every 1000 operations can be a false positive.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;use-cases-memory-savings&quot;&gt;Use cases &#x2F; Memory Savings&lt;&#x2F;h2&gt;
&lt;p&gt;In this example, we are simulating a very common use case of bloom filters: Advertisement Deduplication. Applications can utilize bloom filters to track whether an advertisement &#x2F; promotion has already been shown to a customer and use this to prevent showing it again to the customer.&lt;&#x2F;p&gt;
&lt;p&gt;Let us assume we have 500 unique advertisements and our service has 5M customers. Both advertisements and customers are identified by a UUID (36 characters).&lt;&#x2F;p&gt;
&lt;p&gt;Without bloom filters, applications could use the &lt;code&gt;SET&lt;&#x2F;code&gt; Valkey data type such that they have a unique &lt;code&gt;SET&lt;&#x2F;code&gt; for every advertisement.
Then, they can use the &lt;code&gt;SADD&lt;&#x2F;code&gt; command to track every customer who has already seen this particular advertisement by adding them to the set.
To check if a customer has seen the ad, the &lt;code&gt;SISMEMBER&lt;&#x2F;code&gt; or &lt;code&gt;SMISMEMBER&lt;&#x2F;code&gt; command can be used. This means we have 500 sets, each with 5M members. This will require ~152.57 GB of &lt;code&gt;used_memory&lt;&#x2F;code&gt; on a Valkey 8.0 server.&lt;&#x2F;p&gt;
&lt;p&gt;With bloom filters, applications can create a unique bloom filter for every advertisement with the &lt;code&gt;BF.RESERVE&lt;&#x2F;code&gt; or &lt;code&gt;BF.INSERT&lt;&#x2F;code&gt; command.
Here, they can specify the exact capacity they require: 5M - which means 5M items can be added to the bloom filter. For every customer that the advertisement is shown to, the application can add the UUID of the customer onto the specific filter.
To check if a customer has seen the ad, the &lt;code&gt;BF.EXISTS&lt;&#x2F;code&gt; or &lt;code&gt;BF.MEXISTS&lt;&#x2F;code&gt; command can be used. So, we have 500 bloom filters, each with a capacity of 5M.
This will require variable memory depending on the false positive rate. In all cases (even stricter false positive rates), we can see there is a significant memory optimization compared to using the &lt;code&gt;SET&lt;&#x2F;code&gt; data type.&lt;&#x2F;p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Number of Bloom Filters&lt;&#x2F;th&gt;&lt;th&gt;Capacity&lt;&#x2F;th&gt;&lt;th&gt;FP Rate&lt;&#x2F;th&gt;&lt;th&gt;FP Rate Description&lt;&#x2F;th&gt;&lt;th&gt;Total Used Memory (GB)&lt;&#x2F;th&gt;&lt;th&gt;Memory Saved % compared to SETS&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;500&lt;&#x2F;td&gt;&lt;td&gt;5000000&lt;&#x2F;td&gt;&lt;td&gt;0.01&lt;&#x2F;td&gt;&lt;td&gt;One in every 100&lt;&#x2F;td&gt;&lt;td&gt;2.9&lt;&#x2F;td&gt;&lt;td&gt;&lt;strong&gt;98.08%&lt;&#x2F;strong&gt;&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;500&lt;&#x2F;td&gt;&lt;td&gt;5000000&lt;&#x2F;td&gt;&lt;td&gt;0.001&lt;&#x2F;td&gt;&lt;td&gt;One in every 1K&lt;&#x2F;td&gt;&lt;td&gt;4.9&lt;&#x2F;td&gt;&lt;td&gt;&lt;strong&gt;96.80%&lt;&#x2F;strong&gt;&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;500&lt;&#x2F;td&gt;&lt;td&gt;5000000&lt;&#x2F;td&gt;&lt;td&gt;0.00001&lt;&#x2F;td&gt;&lt;td&gt;One in every 100K&lt;&#x2F;td&gt;&lt;td&gt;7.8&lt;&#x2F;td&gt;&lt;td&gt;&lt;strong&gt;94.88%&lt;&#x2F;strong&gt;&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;500&lt;&#x2F;td&gt;&lt;td&gt;5000000&lt;&#x2F;td&gt;&lt;td&gt;0.0000002&lt;&#x2F;td&gt;&lt;td&gt;One in every 5M&lt;&#x2F;td&gt;&lt;td&gt;9.8&lt;&#x2F;td&gt;&lt;td&gt;&lt;strong&gt;93.60%&lt;&#x2F;strong&gt;&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;p&gt;In this example, we are able to benefit from 93% - 98% savings in memory usage when using Bloom Filters compared to the &lt;code&gt;SET&lt;&#x2F;code&gt; data type. Depending on your workload, you can expect similar results.&lt;&#x2F;p&gt;
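As a rough sanity check on the table, the standard bloom filter sizing formula m = -n * ln(p) / (ln 2)^2 estimates the bits needed for n items at false-positive rate p. The formula ignores valkey-bloom's per-object overhead, so it only approximates the measured numbers above:

```python
import math

def bloom_bits(n, p):
    # Standard sizing formula: m = -n * ln(p) / (ln 2)^2 bits
    # for n items at false-positive rate p.
    return math.ceil(-n * math.log(p) / (math.log(2) ** 2))

n = 5_000_000   # customers per advertisement
filters = 500   # one bloom filter per advertisement
for p in (0.01, 0.001, 0.00001):
    total_gb = bloom_bits(n, p) * filters / 8 / 1e9
    print(f"p={p}: ~{total_gb:.1f} GB across {filters} filters")
```

For p = 0.01 this lands near the 2.9 GB measured in the table, and in every case it is a small fraction of the ~152 GB required by the &lt;code&gt;SET&lt;&#x2F;code&gt; approach.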
&lt;p&gt;&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;bloomfilter_memusage.png&quot; alt=&quot;SET vs Bloom Filter Memory Usage Comparison&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;large-bloom-filters-and-recommendations&quot;&gt;Large Bloom Filters and Recommendations&lt;&#x2F;h2&gt;
&lt;p&gt;To improve server performance during serialization and deserialization of bloom filters, we have added validation on the memory usage per object.
The memory usage limit of a bloom filter is defined by the &lt;code&gt;BF.BLOOM-MEMORY-USAGE-LIMIT&lt;&#x2F;code&gt; configuration, which defaults to 128 MB and can be tuned as needed.&lt;&#x2F;p&gt;
&lt;p&gt;The implication of the memory limit is that operations involving bloom filter creations or scaling out, that result in a bloom filter with overall memory usage over the limit, will return an error. Example:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; BF.ADD ad1_filter user1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(error) ERR operation exceeds bloom object memory limit&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This poses an issue for users whose scalable bloom filters reach the memory limit after some days of data population, at which point scale-outs begin to fail during the insertion of unique items.&lt;&#x2F;p&gt;
&lt;p&gt;As a solution, to help users understand at what capacity their bloom filter will hit the memory limit, valkey-bloom provides two options.
Both are useful checks to run beforehand to ensure that your bloom filter will not fail scale-outs or creations later on as part of your workload.&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Perform a memory check prior to bloom filter creation&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;We can use the &lt;code&gt;VALIDATESCALETO&lt;&#x2F;code&gt; option of the &lt;code&gt;BF.INSERT&lt;&#x2F;code&gt; command to perform a validation whether the filter is within the memory limit.
If it is not within the limits, the command will return an error. In the example below, we see that filter1 cannot scale out and reach the capacity of 26214301 due to the memory limit. However, it can scale out and reach a capacity of 26214300.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; BF.INSERT filter1 VALIDATESCALETO 26214301&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(error) ERR provided VALIDATESCALETO causes bloom object to exceed memory limit&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; BF.INSERT filter1 VALIDATESCALETO 26214300 ITEMS item1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;1) (integer) 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Check the maximum capacity that an existing scalable bloom filter can expand to&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;We can use the &lt;code&gt;BF.INFO&lt;&#x2F;code&gt; command to find out the maximum capacity that the scalable bloom filter can expand to hold. In this case, we can see the filter can hold 26214300 items (after scaling out until the memory limit).&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; BF.INFO filter1 MAXSCALEDCAPACITY&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(integer) 26214300&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The table below gives an idea of the memory usage at the maximum capacity of an individual non-scaling filter.
With the default 128 MB limit and the default false-positive rate, a bloom filter can be created with a capacity of 112M items. With a 512 MB limit, a bloom filter can hold 448M items.&lt;&#x2F;p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Non Scaling Filter - Capacity&lt;&#x2F;th&gt;&lt;th&gt;FP Rate&lt;&#x2F;th&gt;&lt;th&gt;Memory Usage (MB)&lt;&#x2F;th&gt;&lt;th&gt;Notes&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;112M&lt;&#x2F;td&gt;&lt;td&gt;0.01&lt;&#x2F;td&gt;&lt;td&gt;~128&lt;&#x2F;td&gt;&lt;td&gt;Default FP Rate and Default Memory Limit&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;74M&lt;&#x2F;td&gt;&lt;td&gt;0.001&lt;&#x2F;td&gt;&lt;td&gt;~128&lt;&#x2F;td&gt;&lt;td&gt;Custom FP Rate and Default Memory Limit&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;448M&lt;&#x2F;td&gt;&lt;td&gt;0.01&lt;&#x2F;td&gt;&lt;td&gt;~512&lt;&#x2F;td&gt;&lt;td&gt;Default FP Rate and Custom Memory Limit&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;298M&lt;&#x2F;td&gt;&lt;td&gt;0.001&lt;&#x2F;td&gt;&lt;td&gt;~512&lt;&#x2F;td&gt;&lt;td&gt;Custom FP Rate and Custom Memory Limit&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;h2 id=&quot;performance&quot;&gt;Performance&lt;&#x2F;h2&gt;
&lt;p&gt;The bloom commands that add items or check the existence of items have a time complexity of O(N * K), where N is the number of elements being inserted and K is the number of hash functions used by the bloom filter.
This means that &lt;code&gt;BF.ADD&lt;&#x2F;code&gt; and &lt;code&gt;BF.EXISTS&lt;&#x2F;code&gt; are both O(K), as they operate on a single item.&lt;&#x2F;p&gt;
&lt;p&gt;In scalable bloom filters, the number of hash-function-based checks during add&#x2F;exists operations increases with each scale-out: each sub filter requires at least one hash function, and this number grows as the false-positive rate becomes stricter with scale-outs due to the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;bloomfilters&#x2F;#advanced-properties&quot;&gt;tightening ratio&lt;&#x2F;a&gt;.
For this reason, we recommend choosing a capacity and expansion rate based on an evaluation of your use case and workload, to avoid repeated scale-outs and reduce the number of checks.&lt;&#x2F;p&gt;
&lt;p&gt;Example: for a bloom filter to achieve an overall capacity of 10M with a starting capacity of 100K and an expansion rate of 1, it will require 100 sub filters (after 99 scale-outs).
Instead, with the same starting capacity of 100K but an expansion rate of 2, a bloom filter can achieve an overall capacity of ~12.7M with just 7 sub filters.
Alternatively, with the same expansion rate of 1 but a starting capacity of 1M, a bloom filter can achieve an overall capacity of 10M with 10 sub filters.
Both alternatives significantly reduce the number of checks per item add&#x2F;exists operation.&lt;&#x2F;p&gt;
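&lt;p&gt;Capacity and expansion rate are chosen at creation time. As a sketch, assuming valkey-bloom follows the common &lt;code&gt;BF.RESERVE key error_rate capacity [EXPANSION n]&lt;&#x2F;code&gt; syntax, the second configuration above (100K starting capacity, expansion rate 2) could be created with:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; BF.RESERVE filter2 0.01 100000 EXPANSION 2&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;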
&lt;p&gt;The other bloom filter commands are O(1) time complexity: &lt;code&gt;BF.CARD&lt;&#x2F;code&gt;, &lt;code&gt;BF.INFO&lt;&#x2F;code&gt;, &lt;code&gt;BF.RESERVE&lt;&#x2F;code&gt;, and &lt;code&gt;BF.INSERT&lt;&#x2F;code&gt; (when no items are provided).&lt;&#x2F;p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;valkey-bloom offers an efficient solution for high-volume membership testing through bloom filters, providing significant memory savings compared to traditional data types.
This enhances Valkey&#x27;s capability to handle workloads such as large-scale advertisement and event deduplication, fraud detection, and reducing disk or backend lookups.&lt;&#x2F;p&gt;
&lt;p&gt;To learn more about &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-bloom&#x2F;&quot;&gt;valkey-bloom&lt;&#x2F;a&gt;, you can read about the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;bloomfilters&#x2F;&quot;&gt;Bloom Filters data type&lt;&#x2F;a&gt; and follow the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-bloom&#x2F;blob&#x2F;1.0.0&#x2F;QUICK_START.md&quot;&gt;quick start guide&lt;&#x2F;a&gt; to try it yourself.
Additionally, to use valkey-bloom on Docker (along with other official modules), you can check out the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;hub.docker.com&#x2F;r&#x2F;valkey&#x2F;valkey-bundle&quot;&gt;Valkey Extensions Docker Image&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;Thank you to all those who helped develop the module:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Karthik Subbarao (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;KarthikSubbarao&quot;&gt;KarthikSubbarao&lt;&#x2F;a&gt;)&lt;&#x2F;li&gt;
&lt;li&gt;Cameron Zack (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;zackcam&quot;&gt;zackcam&lt;&#x2F;a&gt;)&lt;&#x2F;li&gt;
&lt;li&gt;Vanessa Tang (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;YueTang-Vanessa&quot;&gt;YueTang-Vanessa&lt;&#x2F;a&gt;)&lt;&#x2F;li&gt;
&lt;li&gt;Nihal Mehta (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;nnmehta&quot;&gt;nnmehta&lt;&#x2F;a&gt;)&lt;&#x2F;li&gt;
&lt;li&gt;wuranxx (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;wuranxx&quot;&gt;wuranxx&lt;&#x2F;a&gt;)&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Valkey 8.1: Continuing to Deliver Enhanced Performance and Reliability</title>
        <published>2025-04-02T01:01:01+00:00</published>
        <updated>2025-04-02T01:01:01+00:00</updated>
        
        <author>
          <name>
            rdias
          </name>
        </author>
        
        <author>
          <name>
            mvisser
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/valkey-8-1-0-ga/"/>
        <id>https://valkey.io/blog/valkey-8-1-0-ga/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/valkey-8-1-0-ga/">&lt;p&gt;The Valkey community is excited to unveil the new release of Valkey 8.1,
a minor version update designed to further enhance performance, reliability, observability and usability
over Valkey 8.0 for all Valkey installations.&lt;&#x2F;p&gt;
&lt;p&gt;In this blog, we&#x27;ll dive a bit deeper into some of the new features in Valkey 8.1 and how they can benefit your applications.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;performance&quot;&gt;Performance&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey 8.1 introduces several performance improvements that reduce latency, increase throughput, and lower memory usage.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;the-new-hashtable&quot;&gt;The New Hashtable&lt;&#x2F;h3&gt;
&lt;p&gt;The main change responsible for several performance improvements is the new hashtable implementation, which is used both as the main key-value store in Valkey and as the underlying implementation for the Hash, Set, and Sorted Set data types.&lt;&#x2F;p&gt;
&lt;p&gt;The new hashtable implementation is a complete rewrite of the previous hashtable. The new design adopts several modern design techniques to reduce the number of allocations to store each object, which reduces the number of random memory accesses while also saving memory.&lt;&#x2F;p&gt;
&lt;p&gt;As a result, we observed roughly a 20-byte reduction per key-value pair for keys without a TTL, and up to a 30-byte reduction for key-value pairs with a TTL. The new implementation also improves Valkey server throughput by roughly 10% compared to version 8.0 for pipelined workloads when I&#x2F;O threading is not used.&lt;&#x2F;p&gt;
&lt;p&gt;You can learn more about the design and results in &lt;a href=&quot;&#x2F;blog&#x2F;new-hash-table&quot;&gt;the dedicated blog post about the implementation&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;iterator-prefetching&quot;&gt;Iterator Prefetching&lt;&#x2F;h3&gt;
&lt;p&gt;Iterating over the key space happens in various scenarios, for example when a Valkey node needs to send all keys and values to a newly connected replica.&lt;&#x2F;p&gt;
&lt;p&gt;In Valkey 8.1 the iteration functionality is improved by using memory prefetching techniques.&lt;&#x2F;p&gt;
&lt;p&gt;This means that when an element is going to be returned to the caller, the bucket and its elements have already been loaded into CPU cache when the previous bucket was being iterated.&lt;&#x2F;p&gt;
&lt;p&gt;This makes the iterator &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1568&quot;&gt;3.5x&lt;&#x2F;a&gt; faster than without prefetching, thus reducing the time it takes to send the data to a newly connected replica.&lt;&#x2F;p&gt;
&lt;p&gt;Commands like &lt;code&gt;KEYS&lt;&#x2F;code&gt; also benefit from this optimization.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;i-o-threads-improvements&quot;&gt;I&#x2F;O Threads Improvements&lt;&#x2F;h3&gt;
&lt;p&gt;Following up the I&#x2F;O threads improvements added in 8.0, more operations have been offloaded to the I&#x2F;O thread pool in the 8.1 release, improving the throughput and latency of some operations.&lt;&#x2F;p&gt;
&lt;p&gt;In the new release, TLS connections are now able to offload the TLS negotiation to I&#x2F;O threads. This change improves the rate of accepting new connections by around &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1338&quot;&gt;300%&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;Other sources of overhead in the TLS connection handling were identified, namely the calls to the &lt;code&gt;SSL_pending()&lt;&#x2F;code&gt; and &lt;code&gt;ERR_clear_error()&lt;&#x2F;code&gt; functions, which were being made in the main event thread. Offloading these functions to the I&#x2F;O thread pool improved throughput for some operations: we observed a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1271&quot;&gt;10%&lt;&#x2F;a&gt; improvement in &lt;code&gt;SET&lt;&#x2F;code&gt; throughput and a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1271&quot;&gt;22%&lt;&#x2F;a&gt; improvement in &lt;code&gt;GET&lt;&#x2F;code&gt; throughput.&lt;&#x2F;p&gt;
&lt;p&gt;Replication traffic efficiency was also improved in 8.1 by offloading the reading of the replication stream on replicas to the I&#x2F;O thread pool, which means they can serve more read traffic. On primaries, replication stream writes are now offloaded to the I&#x2F;O thread pool as well.&lt;&#x2F;p&gt;
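&lt;p&gt;These offloads take effect when I&#x2F;O threading is enabled through the existing &lt;code&gt;io-threads&lt;&#x2F;code&gt; configuration directive; for example, a deployment might set the following in valkey.conf (the thread count shown is illustrative and should be tuned to the available cores):&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;io-threads 4&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;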
&lt;h3 id=&quot;replication-improvements&quot;&gt;Replication Improvements&lt;&#x2F;h3&gt;
&lt;p&gt;Full syncs with TLS enabled are up to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1479&quot;&gt;18%&lt;&#x2F;a&gt; faster by removing redundant CRC checksumming when using diskless replication.&lt;&#x2F;p&gt;
&lt;p&gt;The fork copy-on-write memory overhead is reduced by up to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;905&quot;&gt;47%&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;sorted-set-and-hyperloglog-and-bitcount-optimizations&quot;&gt;Sorted Set, HyperLogLog, and BITCOUNT Optimizations&lt;&#x2F;h3&gt;
&lt;p&gt;The &lt;code&gt;ZRANK&lt;&#x2F;code&gt; command, which serves the popular leaderboard use case, was optimized to perform up to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1389&quot;&gt;45%&lt;&#x2F;a&gt; faster, depending on the sorted set size.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;code&gt;ZADD&lt;&#x2F;code&gt; and other commands that involve floating point numbers are optimized by &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1260&quot;&gt;&lt;code&gt;fast_float&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; to parse floats using SIMD instructions.
This optimization requires a C++ compiler, and is currently an opt-in feature at compile time.&lt;&#x2F;p&gt;
&lt;p&gt;The probabilistic HyperLogLog is another great data type, used for counting unique elements in very large datasets while using only 12KB of memory regardless of the number of elements. By using the Advanced Vector Extensions (AVX) of modern x86 CPUs, Valkey 8.1 achieves up to a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1293&quot;&gt;12x&lt;&#x2F;a&gt; speedup for operations like &lt;code&gt;PFMERGE&lt;&#x2F;code&gt; and &lt;code&gt;PFCOUNT&lt;&#x2F;code&gt; on HyperLogLog data.&lt;&#x2F;p&gt;
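&lt;p&gt;For reference, a typical HyperLogLog workflow that exercises these commands looks like this (counts are exact at such tiny cardinalities, and approximate at scale):&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; PFADD visitors:mon user1 user2 user3&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(integer) 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; PFADD visitors:tue user2 user4&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(integer) 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; PFMERGE visitors:week visitors:mon visitors:tue&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; PFCOUNT visitors:week&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(integer) 4&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;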
&lt;p&gt;Similarly, the &lt;code&gt;BITCOUNT&lt;&#x2F;code&gt; operation has been improved by up to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1741&quot;&gt;514%&lt;&#x2F;a&gt; using AVX2 on x86.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;active-defrag-improvements&quot;&gt;Active Defrag Improvements&lt;&#x2F;h3&gt;
&lt;p&gt;Active Defrag has been improved to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;1242&quot;&gt;eliminate latencies greater than 1ms&lt;&#x2F;a&gt;. The defrag cycle time has been reduced to 500us (with increased frequency), resulting in much more predictable latencies and a dramatic reduction in tail latencies.&lt;&#x2F;p&gt;
&lt;p&gt;Anti-starvation protection has also been introduced for the presence of long-running commands. If a slow command delays the defrag cycle, the defrag process runs proportionately longer to ensure that the configured CPU utilization is achieved. In the presence of slow commands, this proportional extra time has an insignificant impact on latency.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;observability&quot;&gt;Observability&lt;&#x2F;h2&gt;
&lt;p&gt;There are also several improvements to the observability of the system behavior in Valkey 8.1.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;log-improvements&quot;&gt;Log Improvements&lt;&#x2F;h3&gt;
&lt;p&gt;Valkey 8.1 brings new options for the format of log file entries and the way timestamps are recorded, making the log files easier to consume by log-collecting systems.&lt;&#x2F;p&gt;
&lt;p&gt;The format of the log file entries is controlled by the &lt;code&gt;log-format&lt;&#x2F;code&gt; parameter, where the default is the existing format:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;legacy&lt;&#x2F;code&gt;: the default, traditional log format&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;logfmt&lt;&#x2F;code&gt;: a structured log format; see &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.brandur.org&#x2F;logfmt&quot;&gt;brandur.org&#x2F;logfmt&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The formatting of the timestamp of the log file entries is controlled by the &lt;code&gt;log-timestamp-format&lt;&#x2F;code&gt; parameter, where the default is the existing format:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;legacy&lt;&#x2F;code&gt;: default format&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;iso8601&lt;&#x2F;code&gt;: ISO 8601 extended date and time with time zone, of the form yyyy-mm-ddThh:mm:ss.sss±hh:mm&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;milliseconds&lt;&#x2F;code&gt;: milliseconds since the epoch&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;&lt;em&gt;Note&lt;&#x2F;em&gt;: using the &lt;code&gt;logfmt&lt;&#x2F;code&gt; and &lt;code&gt;iso8601&lt;&#x2F;code&gt; formats together uses around 60% more space, so disk space should be considered when enabling them.&lt;&#x2F;p&gt;
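&lt;p&gt;For example, to enable structured logs with ISO 8601 timestamps, the following lines can be added to valkey.conf:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;log-format logfmt&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;log-timestamp-format iso8601&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;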
&lt;h3 id=&quot;extending-the-slowlog-to-commandlog&quot;&gt;Extending the Slowlog to Commandlog&lt;&#x2F;h3&gt;
&lt;p&gt;Valkey has long had the capability to record slow commands at execution time, based on the threshold set with the &lt;code&gt;slowlog-log-slower-than&lt;&#x2F;code&gt; parameter, keeping the last &lt;code&gt;slowlog-max-len&lt;&#x2F;code&gt; entries. While a useful troubleshooting tool, it didn&#x27;t take into account the overall round-trip to the application or the impact on network usage. With the addition of the new &lt;code&gt;COMMANDLOG&lt;&#x2F;code&gt; feature in Valkey 8.1, recording large requests and replies now gives users greater visibility into end-to-end latency.&lt;&#x2F;p&gt;
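&lt;p&gt;As a sketch of how this might be queried, assuming &lt;code&gt;COMMANDLOG GET&lt;&#x2F;code&gt; accepts an entry count and a log type in the spirit of the slowlog interface (the exact argument names may differ):&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; COMMANDLOG GET 10 LARGE-REPLY&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;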
&lt;h3 id=&quot;improved-latency-insights&quot;&gt;Improved Latency Insights&lt;&#x2F;h3&gt;
&lt;p&gt;Valkey has a built-in &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;latency-monitor&#x2F;&quot;&gt;latency monitoring&lt;&#x2F;a&gt; framework that, when enabled through &lt;code&gt;latency-monitor-threshold&lt;&#x2F;code&gt;, samples latency-sensitive code paths such as fork.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey 8.1 adds two additional metrics to the &lt;code&gt;LATENCY LATEST&lt;&#x2F;code&gt; command, which reports on the latest latency events that have been collected: the total of the recorded latencies and the number of recorded spikes for each event. These additional fields allow users to better understand how often latency events occur and the total impact they have on the system.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;extensibility&quot;&gt;Extensibility&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey is already well known for its extensibility. The sophisticated module system allows the core to be extended with new features developed as external modules.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;programmability&quot;&gt;Programmability&lt;&#x2F;h3&gt;
&lt;p&gt;In Valkey 8.1, the module API was extended to support developing new scripting engines as external modules.&lt;&#x2F;p&gt;
&lt;p&gt;This new API opens the door to new language and runtime alternatives to the Lua-based scripts supported by the Valkey core through the &lt;code&gt;EVAL&lt;&#x2F;code&gt; and &lt;code&gt;FCALL&lt;&#x2F;code&gt; commands.&lt;&#x2F;p&gt;
&lt;p&gt;In future releases of Valkey, we expect new scripting engines to emerge. A good candidate is a scripting engine based on WASM, allowing &lt;code&gt;EVAL&lt;&#x2F;code&gt; scripts to be written in languages other than Lua and executed in a more secure sandbox environment.&lt;&#x2F;p&gt;
&lt;p&gt;There are also benefits for existing Lua scripts, since new Lua runtimes that provide better security properties and&#x2F;or better performance can easily be plugged in.&lt;&#x2F;p&gt;
&lt;p&gt;Developers that intend to build new scripting engines for Valkey should check the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;modules-api-ref&#x2F;&quot;&gt;Module API&lt;&#x2F;a&gt; documentation.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;additional-highlights&quot;&gt;Additional Highlights&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;conditional-updates&quot;&gt;Conditional Updates&lt;&#x2F;h3&gt;
&lt;p&gt;This new functionality allows Valkey users to perform conditional updates with the &lt;code&gt;SET&lt;&#x2F;code&gt; command: the update is applied only if the given comparison-value matches the key&#x27;s current value. This is not only a quality-of-life improvement for developers, who no longer need to implement this condition in their application code; it also saves a round trip to first get a value and compare it before a &lt;code&gt;SET&lt;&#x2F;code&gt;. When using the optional &lt;code&gt;GET&lt;&#x2F;code&gt; as part of &lt;code&gt;SET IFEQ&lt;&#x2F;code&gt;, the existing value is returned regardless of whether it matches the comparison-value.&lt;&#x2F;p&gt;
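&lt;p&gt;A minimal sketch of the flow (the replies shown assume &lt;code&gt;IFEQ&lt;&#x2F;code&gt; follows the same OK&#x2F;nil convention as &lt;code&gt;SET ... NX&lt;&#x2F;code&gt; when the condition is not met):&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; SET color blue&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; SET color green IFEQ blue&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; SET color red IFEQ blue&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(nil)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;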
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey 8.1 continues the path of innovation and improvement, transparently bringing more performance and reliability to the user. We look forward to hearing what you achieve with Valkey 8.1! More detail can be found in the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;releases&#x2F;tag&#x2F;8.1.0&quot;&gt;release notes&lt;&#x2F;a&gt; for the 8.1 GA release.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;thank-you&quot;&gt;Thank You&lt;&#x2F;h2&gt;
&lt;p&gt;We appreciate the efforts of all who contributed code to this release!&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Alan Scherger (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;flyinprogrammer&quot;&gt;flyinprogrammer&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Amit Nagler (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;naglera&quot;&gt;naglera&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Basel Naamna (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;xbasel&quot;&gt;xbasel&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Ben Totten (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;bentotten&quot;&gt;bentotten&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Binbin (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;enjoy-binbin&quot;&gt;enjoy-binbin&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Caiyi Wu (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;Codebells&quot;&gt;Codebells&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Danish Mehmood (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;danish-mehmood&quot;&gt;danish-mehmood&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Eran Ifrah (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;eifrah-aws&quot;&gt;eifrah-aws&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Guillaume Koenig (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;knggk&quot;&gt;knggk&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Harkrishn Patro (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;hpatro&quot;&gt;hpatro&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Jacob Murphy (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;murphyjacob4&quot;&gt;murphyjacob4&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Jim Brunner (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;JimB123&quot;&gt;JimB123&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Josef Šimánek (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;simi&quot;&gt;simi&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Jungwoo Song (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;bluayer&quot;&gt;bluayer&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Karthick Ariyaratnam (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;karthyuom&quot;&gt;karthyuom&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Karthik Subbarao (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;KarthikSubbarao&quot;&gt;KarthikSubbarao&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Lipeng Zhu (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;lipzhu&quot;&gt;lipzhu&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Madelyn Olson (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;madolson&quot;&gt;madolson&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Masahiro Ide (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;imasahiro&quot;&gt;imasahiro&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Melroy van den Berg (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;melroy89&quot;&gt;melroy89&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Mikhail Koviazin (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;mkmkme&quot;&gt;mkmkme&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Nadav Gigi (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;NadavGigi&quot;&gt;NadavGigi&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Nadav Levanoni (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;nadav-levanoni&quot;&gt;nadav-levanoni&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Nikhil Manglore (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;Nikhil-Manglore&quot;&gt;Nikhil-Manglore&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Parth Patel (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;parthpatel&quot;&gt;parthpatel&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Pierre (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;pieturin&quot;&gt;pieturin&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Ping Xie (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;PingXie&quot;&gt;PingXie&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Qu Chen (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;QuChen88&quot;&gt;QuChen88&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Rain Valentine (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;SoftlyRaining&quot;&gt;SoftlyRaining&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Ran Shidlansik (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;ranshid&quot;&gt;ranshid&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Ray Cao (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;RayaCoo&quot;&gt;RayaCoo&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Ricardo Dias (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;rjd15372&quot;&gt;rjd15372&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Romain Geissler (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;Romain-Geissler-1A&quot;&gt;Romain-Geissler-1A&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Roman Gershman (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;romange&quot;&gt;romange&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Roshan Khatri (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;roshkhatri&quot;&gt;roshkhatri&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Rueian (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;rueian&quot;&gt;rueian&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Sarthak Aggarwal (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;sarthakaggarwal97&quot;&gt;sarthakaggarwal97&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Seungmin Lee (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;sungming2&quot;&gt;sungming2&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Shai Zarka (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;zarkash-aws&quot;&gt;zarkash-aws&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Shivshankar (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;Shivshankar-Reddy&quot;&gt;Shivshankar-Reddy&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Simon Baatz (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;gmbnomis&quot;&gt;gmbnomis&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Sinkevich Artem (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;ArtSin&quot;&gt;ArtSin&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Stav Ben-Tov (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;stav-bentov&quot;&gt;stav-bentov&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Stefan Mueller (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;muelstefamzn&quot;&gt;muelstefamzn&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Tal Shachar (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;talxsha&quot;&gt;talxsha&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Thalia Archibald (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;thaliaarchi&quot;&gt;thaliaarchi&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Uri Yagelnik (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;uriyage&quot;&gt;uriyage&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Vadym Khoptynets (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;poiuj&quot;&gt;poiuj&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Viktor Szépe (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;szepeviktor&quot;&gt;szepeviktor&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Viktor Söderqvist (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;zuiderkwast&quot;&gt;zuiderkwast&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Vu Diep (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;vudiep411&quot;&gt;vudiep411&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Wen Hui (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;hwware&quot;&gt;hwware&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Xuyang WANG (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;Nugine&quot;&gt;Nugine&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Yanqi Lv (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;lyq2333&quot;&gt;lyq2333&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Yury Fridlyand (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;Yury-Fridlyand&quot;&gt;Yury-Fridlyand&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;Zvi Schneider (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;zvi-code&quot;&gt;zvi-code&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;bodong.ybd (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;yangbodong22011&quot;&gt;yangbodong22011&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;chx9&quot;&gt;chx9&lt;&#x2F;a&gt;,&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;kronwerk&quot;&gt;kronwerk&lt;&#x2F;a&gt;,&lt;&#x2F;li&gt;
&lt;li&gt;otheng (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;otheng03&quot;&gt;otheng03&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;secwall&quot;&gt;secwall&lt;&#x2F;a&gt;,&lt;&#x2F;li&gt;
&lt;li&gt;skyfirelee (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;artikell&quot;&gt;artikell&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;xingbowang (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;xingbowang&quot;&gt;xingbowang&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;zhaozhao.zz (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;soloestoy&quot;&gt;soloestoy&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;zhenwei pi (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;pizhenwei&quot;&gt;pizhenwei&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;zixuan zhao (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;azuredream&quot;&gt;azuredream&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;烈香 (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;hengyoush&quot;&gt;hengyoush&lt;&#x2F;a&gt;),&lt;&#x2F;li&gt;
&lt;li&gt;风去幽墨 (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;fengquyoumo&quot;&gt;fengquyoumo&lt;&#x2F;a&gt;)&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>A new hash table</title>
        <published>2025-03-28T00:00:00+00:00</published>
        <updated>2025-03-28T00:00:00+00:00</updated>
        
        <author>
          <name>
            zuiderkwast
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/new-hash-table/"/>
        <id>https://valkey.io/blog/new-hash-table/</id>
        
<content type="html" xml:base="https://valkey.io/blog/new-hash-table/">&lt;p&gt;Many workloads are bound by the amount of data they need to store. Storing
more data in less memory lets you reduce the size of your clusters.&lt;&#x2F;p&gt;
&lt;p&gt;In Valkey, keys and values are stored in what&#x27;s called a hash table. A hash
table works by chopping a key into a number of seemingly random bits. These bits
are shaped into a memory address, pointing to where the value is supposed to be
stored. It&#x27;s a very fast way of jumping directly to the right place in memory
without scanning through all the keys.&lt;&#x2F;p&gt;
&lt;p&gt;For the 8.1 release, we looked into improving the performance and memory usage,
so that users can store more data using less memory. This work led us to the
design of a new hash table, but first, let&#x27;s take a look at the hash table that
was used in Valkey until now.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-dict&quot;&gt;The dict&lt;&#x2F;h2&gt;
&lt;p&gt;The hash table used in Valkey until now, called &quot;dict&quot;, has the following memory
layout:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;new-hash-table&#x2F;dict-structure.png&quot; alt=&quot;dict structure&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;The dict has two tables, called &quot;table 0&quot; and &quot;table 1&quot;. Usually only one
exists, but both are used when incremental rehashing is in progress.&lt;&#x2F;p&gt;
&lt;p&gt;It&#x27;s a chained hash table, so if multiple keys are hashed to the same slot in
the table, their key-value entries form a linked list. That&#x27;s what the &quot;next&quot;
pointer in the &lt;code&gt;dictEntry&lt;&#x2F;code&gt; is for.&lt;&#x2F;p&gt;
&lt;p&gt;To look up a key &quot;FOO&quot; and access the value &quot;BAR&quot;, Valkey has to read from
memory four times. For each hash collision, it has to follow two more pointers
and thus read twice more from memory (the key and the next pointer).&lt;&#x2F;p&gt;
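&lt;p&gt;The lookup path above can be sketched in C (field names and the hash function are illustrative, not Valkey&#x27;s exact code); the four memory reads are the table array, the entry, the key string and the value:&lt;&#x2F;p&gt;

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified sketch of a chained hash table in the style of the old
 * "dict". Names are illustrative, not Valkey's actual definitions. */
typedef struct dictEntry {
    char *key;              /* separate allocation: memory read #3 */
    void *val;              /* separate allocation: memory read #4 */
    struct dictEntry *next; /* chain of entries on hash collision */
} dictEntry;

typedef struct {
    dictEntry **table; /* array of entry pointers: memory read #1 */
    size_t size;       /* number of slots, a power of two */
} dict;

/* Toy FNV-1a hash for the demo; Valkey itself uses SipHash. */
static uint64_t hash_key(const char *s) {
    uint64_t h = 1469598103934665603ULL;
    while (*s) { h ^= (unsigned char)*s++; h *= 1099511628211ULL; }
    return h;
}

static void *dict_lookup(dict *d, const char *key) {
    dictEntry *e = d->table[hash_key(key) & (d->size - 1)]; /* read #2 */
    while (e != NULL) {
        if (strcmp(e->key, key) == 0) return e->val;
        e = e->next; /* each collision costs two more reads: key + next */
    }
    return NULL;
}
```

&lt;p&gt;Each pointer hop here is a separate allocation and a potential cache miss, which is exactly what the new design tries to avoid.&lt;&#x2F;p&gt;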
&lt;h2 id=&quot;minimize-memory-accesses&quot;&gt;Minimize memory accesses&lt;&#x2F;h2&gt;
&lt;p&gt;One of the slowest operations when looking up a key-value pair is reading from
main memory (RAM). A key goal is therefore to make as few memory accesses as
possible. Ideally, the memory we want to access should already be
loaded in the CPU cache, which is a smaller but much faster memory belonging to
the CPU.&lt;&#x2F;p&gt;
&lt;p&gt;Optimizing for memory usage, we also want to minimize the number of distinct
memory allocations and the number of pointers between them, because storing a
pointer needs 8 bytes in a 64-bit system. If we can save one pointer per
key-value pair, for 100 million keys that&#x27;s almost a gigabyte.&lt;&#x2F;p&gt;
&lt;p&gt;When the CPU loads some data from the main memory into the CPU cache, it does so
in fixed size blocks called cache lines. The cache-line size is 64 bytes on
almost all modern hardware. Recent hash table designs, such as &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;abseil.io&#x2F;about&#x2F;design&#x2F;swisstables&quot;&gt;Swiss
tables&lt;&#x2F;a&gt;, are highly optimized to
store and access data within a single cache line. If the key you&#x27;re looking
for isn&#x27;t found where you first look for it (due to a hash collision), then it
should ideally be found within the same cache line. If it is, then it&#x27;s found
very fast once this cache line has been loaded into the CPU cache.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;required-features&quot;&gt;Required features&lt;&#x2F;h2&gt;
&lt;p&gt;Why not use an open-source state-of-the-art hash table implementation such as
Swiss tables? The answer is that we require some specific features, apart from
the basic operations like add, lookup, replace and delete:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Incremental rehashing, so that when the hashtable is full, we don&#x27;t freeze the
server while the table is being resized.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Scan, a way to iterate over the hash table even if the hash table is resized
between the iterations. This is important to keep supporting the
&lt;a href=&quot;&#x2F;commands&#x2F;scan&#x2F;&quot;&gt;SCAN&lt;&#x2F;a&gt; command.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Random element sampling, for commands like &lt;a href=&quot;&#x2F;commands&#x2F;randomkey&#x2F;&quot;&gt;RANDOMKEY&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;These aren&#x27;t standard features, so we couldn&#x27;t simply pick an off-the-shelf hash
table. We had to design one ourselves.&lt;&#x2F;p&gt;
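&lt;p&gt;The scan guarantee is the subtlest of these requirements. The classic dict implements it with a cursor whose bits are incremented in reversed order, so that every slot is eventually visited even if the table is resized between calls. A minimal sketch of that cursor arithmetic (illustrative, not Valkey&#x27;s exact code):&lt;&#x2F;p&gt;

```c
#include <assert.h>
#include <stdint.h>

/* Reverse the bits of a 64-bit word (standard mask-swap technique). */
static uint64_t rev(uint64_t v) {
    unsigned s = 64;
    uint64_t mask = ~(uint64_t)0;
    while ((s >>= 1) > 0) {
        mask ^= mask << s;
        v = ((v >> s) & mask) | ((v << s) & ~mask);
    }
    return v;
}

/* Advance a SCAN-style cursor: increment the bits above the table mask
 * in reversed order, so the visiting order stays consistent when the
 * table doubles or halves between calls. */
static uint64_t scan_step(uint64_t cursor, uint64_t mask) {
    cursor |= ~mask; /* set the bits the table doesn't use */
    cursor = rev(cursor);
    cursor++;
    cursor = rev(cursor);
    return cursor;
}
```

&lt;p&gt;For an 8-slot table the cursor visits slots in the order 0, 4, 2, 6, 1, 5, 3, 7 and then returns to 0, which is what lets a scan terminate reliably even across resizes.&lt;&#x2F;p&gt;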
&lt;h2 id=&quot;design&quot;&gt;Design&lt;&#x2F;h2&gt;
&lt;p&gt;In the new hash table designed for Valkey 8.1, the table consists of buckets of
64 bytes, one cache line. Each bucket can store up to seven elements. Keys that
map to the same bucket are all stored in the same bucket. The bucket also
contains a metadata section, marked &quot;m&quot; in the figures. The bucket layout
including the metadata section is explained in more detail below.&lt;&#x2F;p&gt;
&lt;p&gt;We&#x27;ve eliminated the &lt;code&gt;dictEntry&lt;&#x2F;code&gt; and instead embed key and value in the
&lt;code&gt;serverObject&lt;&#x2F;code&gt;, along with other data for the key.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;new-hash-table&#x2F;hashtable-structure.png&quot; alt=&quot;hashtable structure&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Assuming the &lt;code&gt;hashtable&lt;&#x2F;code&gt; structure is already in the CPU cache, looking up a
key-value entry now requires only two memory accesses: the bucket and the
&lt;code&gt;serverObject&lt;&#x2F;code&gt;. If there is a hash collision, the object we&#x27;re looking for is
most likely in the same bucket, so no extra memory access is required.&lt;&#x2F;p&gt;
&lt;p&gt;If a bucket becomes full, the last element slot in the bucket is replaced by a
pointer to a child bucket. A child bucket has the same layout as a regular
bucket, but it&#x27;s a separate allocation. The lengths of these bucket chains are
not bounded, but long chains are very rare as long as keys are well distributed
by the hashing function. Most of the keys are stored in top-level buckets.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;new-hash-table&#x2F;hashtable-child-buckets.png&quot; alt=&quot;hashtable structure&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;The elements in the same bucket, or bucket chain, are stored without any
internal ordering. When inserting a new entry into the bucket, any of the free
slots can be used.&lt;&#x2F;p&gt;
&lt;p&gt;As mentioned earlier, each bucket also contains a metadata section. The bucket
metadata consists of eight bytes of which one bit indicates whether the bucket
has a child bucket or not. The next seven bits, one for each of the seven
element slots, indicate whether that slot is filled, i.e. whether it contains
an element or not. The remaining seven bytes are used for storing a one byte
secondary hash for each of the entries stored in the bucket.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;new-hash-table&#x2F;hash-bucket-structure.png&quot; alt=&quot;bucket structure&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;The secondary hash is made up of hash bits that are not used when looking up the
bucket. Out of a 64-bit hash, we need at most 56 bits for looking up the
bucket, so we can use the remaining 8 bits as the secondary hash. These hash
bits are used for quickly eliminating mismatching entries when looking up a
key, without comparing the full keys. Comparing the key of each entry in the
bucket would require an extra memory access per entry. If the secondary hash
doesn&#x27;t match that of the key we&#x27;re looking for, we can immediately skip that
entry. The chance of a false positive, meaning an entry whose secondary hash
matches although the entry doesn&#x27;t match the key we&#x27;re looking for, is one in
256, so this filter skips roughly 99.6% of the mismatching entries without any
extra memory access.&lt;&#x2F;p&gt;
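&lt;p&gt;Putting the metadata description together, a bucket can be sketched as follows. Field names and the exact bit assignments are an assumption, not Valkey&#x27;s code, but the sizes match the description above: one flag byte, seven secondary-hash bytes and seven 8-byte slots add up to exactly one 64-byte cache line on a 64-bit system:&lt;&#x2F;p&gt;

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SLOTS 7

/* One 64-byte bucket: 8 bytes of metadata + 7 element pointers.
 * Illustrative layout; the bit assignments are an assumption. */
typedef struct {
    uint8_t flags;          /* bit 0: has child bucket; bits 1..7: slot used */
    uint8_t sechash[SLOTS]; /* one secondary-hash byte per slot */
    void   *elem[SLOTS];    /* element pointers; this demo stores key strings */
} bucket;

/* Find the slot holding `key`, or return -1. The secondary hash (here,
 * the top 8 bits of the 64-bit hash `h`) rejects roughly 255 out of 256
 * mismatching slots without touching the element's memory. */
static int bucket_find(const bucket *b, uint64_t h, const char *key) {
    uint8_t sh = (uint8_t)(h >> 56);
    for (int s = 0; s < SLOTS; s++) {
        if (!(b->flags & (1u << (s + 1)))) continue; /* empty slot */
        if (b->sechash[s] != sh) continue;           /* cheap reject */
        if (strcmp((const char *)b->elem[s], key) == 0) return s;
    }
    return -1;
}
```

&lt;p&gt;Note that the whole probe runs over data in a single cache line; only the final key comparison touches the element itself.&lt;&#x2F;p&gt;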
&lt;h2 id=&quot;results&quot;&gt;Results&lt;&#x2F;h2&gt;
&lt;p&gt;By replacing the hash table with a different implementation, we&#x27;ve managed to
reduce the memory usage by roughly 20 bytes per key-value pair.&lt;&#x2F;p&gt;
&lt;p&gt;The graph below shows the memory overhead for different value sizes. The
overhead is the memory usage excluding the key and the value itself. Lower is
better. (The zigzag pattern is because of unused memory resulting from the memory
allocator&#x27;s discrete allocation sizes.)&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;new-hash-table&#x2F;memory-usage.png&quot; alt=&quot;memory usage by version&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;For keys with an &lt;a href=&quot;&#x2F;commands&#x2F;expire&#x2F;&quot;&gt;expire time&lt;&#x2F;a&gt; (time-to-live, TTL) the memory
usage is down even more, roughly 30 bytes per key-value pair.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;new-hash-table&#x2F;memory-usage-with-expire.png&quot; alt=&quot;memory usage with expire&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;In some workloads, such as when storing very small objects or when pipelining
is used extensively, latency and CPU usage also improve. In most cases, though,
the difference is negligible in practice. The key takeaway is the reduced
memory usage.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;hashes-sets-and-sorted-sets&quot;&gt;Hashes, sets and sorted sets&lt;&#x2F;h2&gt;
&lt;p&gt;The nested data types Hashes, Sets and Sorted sets also make use of the new hash
table when they contain a sufficiently large number of elements. The memory
usage is down by roughly 10-20 bytes per element for these types of keys.&lt;&#x2F;p&gt;
&lt;p&gt;Special thanks to Rain Valentine for the graphs and for the help with
integrating this hash table into Valkey.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Introducing the Valkey Glide Go Client: Now in Public Preview!</title>
        <published>2025-03-04T01:01:01+00:00</published>
        <updated>2025-03-04T01:01:01+00:00</updated>
        
        <author>
          <name>
            niharikabhavaraju
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/2025-03-4-go-client-in-public-preview/"/>
        <id>https://valkey.io/blog/2025-03-4-go-client-in-public-preview/</id>
        
<content type="html" xml:base="https://valkey.io/blog/2025-03-4-go-client-in-public-preview/">&lt;p&gt;The Valkey GLIDE team is pleased to announce the public preview release of the GLIDE (General Language Independent Driver for the Enterprise) Go client. This release brings the power and reliability of Valkey to Go developers with an API designed for performance and developer productivity.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey GLIDE is a multi-language client for Valkey, designed for operational excellence and incorporating best practices refined through years of experience. GLIDE ensures a consistent and unified client experience across applications, regardless of the programming language.&lt;&#x2F;p&gt;
&lt;p&gt;Currently, GLIDE supports Java, Node.js, and Python. This announcement introduces the Valkey GLIDE support for Go, expanding support to Go developers and providing new connectivity to Valkey servers, including both standalone and cluster deployments.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-you-should-be-excited&quot;&gt;Why You Should Be Excited&lt;&#x2F;h2&gt;
&lt;p&gt;The Go client extends Valkey GLIDE to the Go community, offering a robust client that&#x27;s built on the battle-tested Rust core. This client library is a thoughtfully designed experience for Go developers who need reliable, high-performance data access.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;key-features&quot;&gt;Key Features&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;advanced-cluster-topology-management&quot;&gt;Advanced Cluster Topology Management&lt;&#x2F;h3&gt;
&lt;p&gt;Connect to your Valkey cluster with minimal configuration. The client automatically detects the entire cluster topology and configures connection management based on industry best practices.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;go&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;config&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; api.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;NewGlideClusterClientConfiguration&lt;&#x2F;span&gt;&lt;span&gt;().&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;    WithAddress&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;api&lt;&#x2F;span&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;NodeAddress&lt;&#x2F;span&gt;&lt;span&gt;{Host:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;localhost&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, Port:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 6379&lt;&#x2F;span&gt;&lt;span&gt;})&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;client, err&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; api.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;NewGlideClusterClient&lt;&#x2F;span&gt;&lt;span&gt;(config)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The Go client provides advanced topology management features such as:&lt;&#x2F;p&gt;
&lt;h4 id=&quot;automatic-topology-discovery&quot;&gt;Automatic Topology Discovery&lt;&#x2F;h4&gt;
&lt;p&gt;GLIDE automatically discovers all cluster nodes from a single seed node, eliminating the need to manually configure every node address. The NodeAddress can be an IP address, hostname, or fully qualified domain name (FQDN).&lt;&#x2F;p&gt;
&lt;h4 id=&quot;dynamic-topology-maintenance&quot;&gt;Dynamic Topology Maintenance&lt;&#x2F;h4&gt;
&lt;p&gt;Cluster topology can change over time as nodes are added, removed, or when slot ownership changes. GLIDE implements several mechanisms to maintain an accurate view of the cluster:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Proactive Topology Monitoring&lt;&#x2F;strong&gt;: GLIDE performs periodic background checks for cluster topology changes. This approach ensures a comprehensive and up-to-date view of the cluster, improving availability and reducing tail latency.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Consensus-Based Resolution&lt;&#x2F;strong&gt;: GLIDE queries multiple nodes for their topology view and selects the one with the highest agreement, reducing the risk of stale or incorrect mappings and ensuring a more accurate and up-to-date cluster view, improving the overall availability of the cluster.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Efficient Resource Management&lt;&#x2F;strong&gt;: GLIDE employs an efficient algorithm to compare node views and dynamically throttles client-management requests to prevent overloading Valkey servers, ensuring a balance between maintaining an up-to-date topology map and optimizing resource utilization.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;enhanced-connection-management&quot;&gt;Enhanced Connection Management&lt;&#x2F;h3&gt;
&lt;p&gt;Connection management in distributed systems presents unique challenges that impact performance, reliability, and resource utilization. The Go client addresses these challenges with reliable solutions:&lt;&#x2F;p&gt;
&lt;h4 id=&quot;proactive-reconnection&quot;&gt;Proactive Reconnection&lt;&#x2F;h4&gt;
&lt;p&gt;GLIDE implements a background monitoring system for connection states. By detecting disconnections and initiating reconnections preemptively, the client eliminates the reconnection latency typically experienced when a request discovers a broken connection.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;connection-storm-prevention&quot;&gt;Connection Storm Prevention&lt;&#x2F;h4&gt;
&lt;p&gt;When network events occur, connection storms can overwhelm servers with simultaneous reconnection attempts. GLIDE mitigates this risk through a backoff algorithm with jitter that distributes reconnection attempts over time, protecting servers from sudden connection surges.&lt;&#x2F;p&gt;
&lt;p&gt;Robust connection handling with automatic reconnection strategies ensures your application remains resilient even during network instability:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;go&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;&#x2F;&#x2F; Configure a custom reconnection strategy with exponential backoff&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;config&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; api.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;NewGlideClientConfiguration&lt;&#x2F;span&gt;&lt;span&gt;().&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;    WithAddress&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;api&lt;&#x2F;span&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;NodeAddress&lt;&#x2F;span&gt;&lt;span&gt;{Host:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;localhost&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, Port:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 6379&lt;&#x2F;span&gt;&lt;span&gt;}).&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;    WithReconnectStrategy&lt;&#x2F;span&gt;&lt;span&gt;(api.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;NewBackoffStrategy&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;        5&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt; &#x2F;&#x2F; Initial delay in milliseconds&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;        10&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt; &#x2F;&#x2F; Maximum attempts&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;        50&lt;&#x2F;span&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt; &#x2F;&#x2F; Maximum delay in milliseconds&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ))&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h4 id=&quot;multiplexed-connection&quot;&gt;Multiplexed Connection&lt;&#x2F;h4&gt;
&lt;p&gt;Rather than maintaining connection pools, GLIDE establishes a single multiplexed connection per cluster node. This architectural choice:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Minimizes the total number of TCP connections to servers&lt;&#x2F;li&gt;
&lt;li&gt;Reduces system call overhead&lt;&#x2F;li&gt;
&lt;li&gt;Maintains high throughput through efficient connection pipelining&lt;&#x2F;li&gt;
&lt;li&gt;Decreases server-side connection management burden&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;built-for-performance&quot;&gt;Built for Performance&lt;&#x2F;h3&gt;
&lt;p&gt;The Go client is designed from the ground up with performance in mind while still being simple to use.
The Go client provides a synchronous API for simplicity and compatibility with existing Go key-value store clients. While each individual command is blocking (following the familiar patterns in the ecosystem), the client is fully thread-safe and designed for concurrent usage:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;go&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;&#x2F;&#x2F; Example of concurrent execution using goroutines&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;func&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; performConcurrentOperations&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;client&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; *&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;api&lt;&#x2F;span&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;GlideClient&lt;&#x2F;span&gt;&lt;span&gt;) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    var&lt;&#x2F;span&gt;&lt;span&gt; wg&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; sync&lt;&#x2F;span&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;WaitGroup&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;    &#x2F;&#x2F; Launch 10 concurrent operations&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    for&lt;&#x2F;span&gt;&lt;span&gt; i&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;; i&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;lt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 10&lt;&#x2F;span&gt;&lt;span&gt;; i&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;++&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        wg.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Add&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;1&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        go func&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;idx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; int&lt;&#x2F;span&gt;&lt;span&gt;) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;            defer&lt;&#x2F;span&gt;&lt;span&gt; wg.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Done&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            key&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; fmt.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Sprintf&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;key:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;%d&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, idx)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            value&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; fmt.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Sprintf&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;value:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;%d&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, idx)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;            &#x2F;&#x2F; Each command blocks within its goroutine, but all 10 run concurrently&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            _, err&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Set&lt;&#x2F;span&gt;&lt;span&gt;(key, value)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;            if&lt;&#x2F;span&gt;&lt;span&gt; err&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; !=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; nil&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;                fmt.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Printf&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;Error setting &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;%s&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;%v\n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, key, err)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;                return&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            result, err&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Get&lt;&#x2F;span&gt;&lt;span&gt;(key)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;            if&lt;&#x2F;span&gt;&lt;span&gt; err&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; !=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; nil&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;                fmt.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Printf&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;Error getting &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;%s&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;%v\n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, key, err)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;                return&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            fmt.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Printf&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;Result for &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;%s&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;%s\n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, key, result)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        }(i)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    wg.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Wait&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Under the hood, the client efficiently handles these concurrent requests by:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Using a single multiplexed connection per node to pipeline concurrent commands, minimizing socket overhead and system resources&lt;&#x2F;li&gt;
&lt;li&gt;Implementing thread-safe command execution&lt;&#x2F;li&gt;
&lt;li&gt;Efficiently routing concurrent commands to the appropriate server nodes&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;While the current API is synchronous, the implementation is specifically optimized for concurrent usage through Go&#x27;s native goroutines. We would love feedback about whether to add async&#x2F;channel-based APIs in future releases.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;getting-started&quot;&gt;Getting Started&lt;&#x2F;h2&gt;
&lt;p&gt;You can add Valkey GLIDE to your project with the following two commands:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;go&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; get github.com&#x2F;valkey-io&#x2F;valkey-glide&#x2F;go&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;go&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; mod tidy&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Then, you can connect to a Valkey standalone server running locally on port 6379 with the following sample application:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;go&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;package&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; main&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;import&lt;&#x2F;span&gt;&lt;span&gt; (&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;fmt&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;github.com&#x2F;valkey-io&#x2F;valkey-glide&#x2F;go&#x2F;api&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;func&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; main&lt;&#x2F;span&gt;&lt;span&gt;() {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;    &#x2F;&#x2F; Connect to a standalone Valkey server&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    config&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; api.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;NewGlideClientConfiguration&lt;&#x2F;span&gt;&lt;span&gt;().&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;        WithAddress&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;api&lt;&#x2F;span&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;NodeAddress&lt;&#x2F;span&gt;&lt;span&gt;{Host:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;localhost&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, Port:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 6379&lt;&#x2F;span&gt;&lt;span&gt;})&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    client, err&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; api.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;NewGlideClient&lt;&#x2F;span&gt;&lt;span&gt;(config)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    if&lt;&#x2F;span&gt;&lt;span&gt; err&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; !=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; nil&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        fmt.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Println&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;Error:&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, err)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        return&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    defer&lt;&#x2F;span&gt;&lt;span&gt; client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Close&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;    &#x2F;&#x2F; Test the connection&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    result, err&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Ping&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    if&lt;&#x2F;span&gt;&lt;span&gt; err&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; !=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; nil&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        fmt.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Println&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;Error:&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, err)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        return&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    fmt.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Println&lt;&#x2F;span&gt;&lt;span&gt;(result)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt; &#x2F;&#x2F; PONG&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;    &#x2F;&#x2F; Store and retrieve a value&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Set&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;hello&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;valkey&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    value, _&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Get&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;hello&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    fmt.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Println&lt;&#x2F;span&gt;&lt;span&gt;(value)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt; &#x2F;&#x2F; valkey&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h3 id=&quot;cluster-mode-connection-setup&quot;&gt;Cluster Mode Connection Setup&lt;&#x2F;h3&gt;
&lt;p&gt;Need to work with a Valkey cluster?&lt;&#x2F;p&gt;
&lt;p&gt;Just as easy! The Go client automatically discovers your entire cluster topology from a single seed node. The following sample shows how to connect to a Valkey cluster through a node running locally on port 7001:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;go&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;package&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; main&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;import&lt;&#x2F;span&gt;&lt;span&gt; (&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;fmt&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;github.com&#x2F;valkey-io&#x2F;valkey-glide&#x2F;go&#x2F;api&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;func&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; main&lt;&#x2F;span&gt;&lt;span&gt;() {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;    &#x2F;&#x2F; Specify the address of any single node in your cluster&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;    &#x2F;&#x2F; This example connects to a local cluster node on port 7001&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    host&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;localhost&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    port&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 7001&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;    &#x2F;&#x2F; Connect to a Valkey cluster through any node&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    config&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; api.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;NewGlideClusterClientConfiguration&lt;&#x2F;span&gt;&lt;span&gt;().&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;        WithAddress&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;api&lt;&#x2F;span&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;NodeAddress&lt;&#x2F;span&gt;&lt;span&gt;{Host: host, Port: port})&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    client, err&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; api.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;NewGlideClusterClient&lt;&#x2F;span&gt;&lt;span&gt;(config)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    if&lt;&#x2F;span&gt;&lt;span&gt; err&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; !=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; nil&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        fmt.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Println&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;There was an error: &amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, err)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        return&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    res, err&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Ping&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    if&lt;&#x2F;span&gt;&lt;span&gt; err&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; !=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; nil&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        fmt.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Println&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;There was an error: &amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, err)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        return&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    fmt.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Println&lt;&#x2F;span&gt;&lt;span&gt;(res)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt; &#x2F;&#x2F; PONG&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Close&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;advanced-configuration-options&quot;&gt;Advanced Configuration Options&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;read-strategies-for-optimized-performance&quot;&gt;Read Strategies for Optimized Performance&lt;&#x2F;h3&gt;
&lt;p&gt;Balance consistency and throughput with flexible read strategies:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;go&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;&#x2F;&#x2F; Configure to prefer replicas for read operations&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;config&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; api.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;NewGlideClusterClientConfiguration&lt;&#x2F;span&gt;&lt;span&gt;().&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;    WithAddress&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;api&lt;&#x2F;span&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;NodeAddress&lt;&#x2F;span&gt;&lt;span&gt;{Host:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;cluster.example.com&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, Port:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 6379&lt;&#x2F;span&gt;&lt;span&gt;}).&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;    WithReadFrom&lt;&#x2F;span&gt;&lt;span&gt;(api.PreferReplica)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;client, err&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; api.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;NewGlideClusterClient&lt;&#x2F;span&gt;&lt;span&gt;(config)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;&#x2F;&#x2F; Write to primary&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Set&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;key1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;value1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;&#x2F;&#x2F; Automatically reads from a replica (round-robin)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;result, err&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;Get&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;key1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Available strategies:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;PRIMARY&lt;&#x2F;strong&gt;: Always read from primary nodes for the freshest data&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;PREFER_REPLICA&lt;&#x2F;strong&gt;: Distribute reads across replicas in round-robin fashion, falling back to primary when needed&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Planned for future release:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;AZ_AFFINITY&lt;&#x2F;strong&gt;: (Coming soon) Prefer replicas in the same availability zone as the client&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;authentication-and-tls&quot;&gt;Authentication and TLS&lt;&#x2F;h3&gt;
&lt;p&gt;Secure your connections with built-in authentication and TLS support:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;go&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;&#x2F;&#x2F; Configure with authentication&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;config&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; api.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;NewGlideClientConfiguration&lt;&#x2F;span&gt;&lt;span&gt;().&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;    WithAddress&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;api&lt;&#x2F;span&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;NodeAddress&lt;&#x2F;span&gt;&lt;span&gt;{Host:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;localhost&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, Port:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 6379&lt;&#x2F;span&gt;&lt;span&gt;}).&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;    WithCredentials&lt;&#x2F;span&gt;&lt;span&gt;(api.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;NewServerCredentials&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;username&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;password&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)).&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;    WithUseTLS&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;true&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt; &#x2F;&#x2F; Enable TLS for encrypted connections&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h3 id=&quot;request-timeout-and-handling&quot;&gt;Request Timeout and Handling&lt;&#x2F;h3&gt;
&lt;p&gt;Fine-tune timeout settings for different workloads:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;go&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;&#x2F;&#x2F; Set a longer timeout for operations that may take more time&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;config&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; :=&lt;&#x2F;span&gt;&lt;span&gt; api.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;NewGlideClientConfiguration&lt;&#x2F;span&gt;&lt;span&gt;().&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;    WithAddress&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;api&lt;&#x2F;span&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;NodeAddress&lt;&#x2F;span&gt;&lt;span&gt;{Host:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;localhost&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, Port:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 6379&lt;&#x2F;span&gt;&lt;span&gt;}).&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;    WithRequestTimeout&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;500&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt; &#x2F;&#x2F; 500ms timeout&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;behind-the-scenes-technical-architecture&quot;&gt;Behind the Scenes: Technical Architecture&lt;&#x2F;h2&gt;
&lt;p&gt;The Valkey GLIDE Go client is built on top of the Valkey GLIDE core. The core framework is written in Rust (lib.rs), which exposes public functions. These functions are converted to a C header file using Cbindgen. The Go client then uses CGO to call these C functions, providing Go developers with an idiomatic interface while leveraging Rust&#x27;s performance advantages. This architecture ensures consistent behavior across all Valkey GLIDE language implementations (Java, Python, Node.js, and Go) while maintaining performance and reliability.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;component-details&quot;&gt;Component details&lt;&#x2F;h3&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;+------------+      +------+      +------------+      +------------+      +------------+&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;|            |      |      |      |            |      |            |      |            |&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;|    Go      |-----&amp;gt;|      |-----&amp;gt;|  C Header  |-----&amp;gt;|    Rust    |-----&amp;gt;|   Valkey   |&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;|  Client    |      |  CGO |      |  cbindgen  |      |    Core    |      |   Server   |&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;|            |&amp;lt;-----|      |&amp;lt;-----|            |&amp;lt;-----|            |&amp;lt;-----|            |&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;|            |      |      |      |            |      |            |      |            |&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;+------------+      +------+      +------------+      +------------+      +------------+&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Go Client&lt;&#x2F;strong&gt;: The language-specific interface for Go developers&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;CGO&lt;&#x2F;strong&gt;: Allows Go code to call C functions&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Cbindgen&lt;&#x2F;strong&gt;: Automates the generation of C header files from Rust public APIs&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Rust Core&lt;&#x2F;strong&gt;: High-performance framework that connects to and communicates with Valkey servers&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Rust FFI Library&lt;&#x2F;strong&gt;: Enables cross-language function calls between Rust and other languages&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;join-the-journey&quot;&gt;Join the Journey&lt;&#x2F;h2&gt;
&lt;p&gt;This public preview is just the beginning. We&#x27;re actively developing and enhancing the Go wrapper, and we&#x27;d love your feedback and contributions. Try it out in your projects, share your experiences, and help us make it even better!
You can join our development journey by:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Submitting issues or feature requests on our &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-glide&#x2F;issues&quot;&gt;GitHub Issues page&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Joining discussions in our &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-glide&#x2F;discussions&quot;&gt;GitHub Discussions forum&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;looking-forward&quot;&gt;Looking Forward&lt;&#x2F;h2&gt;
&lt;p&gt;As we move toward general availability, we&#x27;ll be expanding command support, enhancing performance, and adding even more features to make the Valkey GLIDE Go client a great choice for Go developers.&lt;&#x2F;p&gt;
&lt;p&gt;Check out the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-glide&#x2F;tree&#x2F;main&#x2F;go&quot;&gt;Valkey GLIDE Go client&lt;&#x2F;a&gt; repository for the source code.
For implementation examples, refer to the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-glide&#x2F;blob&#x2F;main&#x2F;go&#x2F;README.md&quot;&gt;Go client README&lt;&#x2F;a&gt;, which includes instructions for running the standalone and cluster examples.&lt;&#x2F;p&gt;
&lt;p&gt;For a complete reference of all available commands and their parameters, explore the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;pkg.go.dev&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-glide&#x2F;go&#x2F;api&quot;&gt;Go API documentation on pkg.go.dev&lt;&#x2F;a&gt;, which provides detailed information on method signatures, parameters, and return types.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;contributors&quot;&gt;Contributors&lt;&#x2F;h2&gt;
&lt;p&gt;A huge thank you to all the contributors who have made this possible - your dedication and expertise have created something truly special for the Go community.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;janhavigupta007&quot;&gt;Janhavi Gupta&lt;&#x2F;a&gt; (Google Cloud Platform)&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;niharikabhavaraju&quot;&gt;Niharika Bhavaraju&lt;&#x2F;a&gt; (Google Cloud Platform)&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;EdricCua&quot;&gt;Edric Cuartero&lt;&#x2F;a&gt; (Google Cloud Platform)&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;omangesg&quot;&gt;Omkar Mestry&lt;&#x2F;a&gt; (Google Cloud Platform)&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;Yury-Fridlyand&quot;&gt;Yury Fridlyand&lt;&#x2F;a&gt; (Improving)&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;prateek-kumar-improving&quot;&gt;Prateek Kumar&lt;&#x2F;a&gt; (Improving)&lt;&#x2F;p&gt;
&lt;p&gt;Kudos to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;aaron-congo&quot;&gt;Aaron Congo&lt;&#x2F;a&gt; who created the backbone of the client 🚀 and to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;umit&quot;&gt;Umit Unal&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;MikeMwita&quot;&gt;Michael&lt;&#x2F;a&gt; for their contributions!&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Reducing application latency and lowering Cloud bill by setting up your client library</title>
        <published>2025-01-08T01:01:01+00:00</published>
        <updated>2025-01-08T01:01:01+00:00</updated>
        
        <author>
          <name>
            asafporatstoler
          </name>
        </author>
        
        <author>
          <name>
            adarovadya
          </name>
        </author>
        
        <author>
          <name>
            muhammadawawdi
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/az-affinity-strategy/"/>
        <id>https://valkey.io/blog/az-affinity-strategy/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/az-affinity-strategy/">&lt;p&gt;How can adjusting your client library help you reduce Cloud costs and improve latency?&lt;&#x2F;p&gt;
&lt;p&gt;In this post, we dive into &lt;strong&gt;Availability Zone (AZ) affinity routing&lt;&#x2F;strong&gt; mechanics, showing how it optimizes your application&#x27;s performance and cost using &lt;strong&gt;Valkey GLIDE (General Language Independent Driver for the Enterprise)&lt;&#x2F;strong&gt;. We also guide you through how to configure GLIDE to benefit from other key features.&lt;&#x2F;p&gt;
&lt;p&gt;GLIDE is an official open-source Valkey client library, designed for reliability, optimized performance, and high availability in Valkey and Redis-based applications. GLIDE is a multi-language client that supports Java, Python, and Node.js, with more languages in development. GLIDE recently added support for a key feature, AZ affinity routing, which enables Valkey-based applications to direct calls specifically to server nodes in the same AZ as the client. This minimizes cross-AZ traffic, reduces latency, and lowers cloud costs.&lt;&#x2F;p&gt;
&lt;p&gt;Before we explore AZ affinity routing, let’s understand what availability zones are, how different &lt;strong&gt;read strategies&lt;&#x2F;strong&gt; work and how they impact your application.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;choosing-the-right-read-strategy-for-your-application&quot;&gt;Choosing the Right Read Strategy for Your Application&lt;&#x2F;h2&gt;
&lt;p&gt;Distributed applications rely on scalability and resilience, often achieved by techniques like caching, sharding, and high availability. Valkey enhances these systems by acting as a robust caching layer, reducing database load and accelerating read operations. Its sharding capabilities distribute data across multiple nodes, ensuring efficient storage and access patterns, while its high availability features safeguard uptime by replicating data across the primary and replica nodes. This combination enables distributed applications to handle high traffic and recover quickly from failures, ensuring consistent performance.&lt;&#x2F;p&gt;
&lt;p&gt;In Valkey-based applications, selecting the right read strategy is essential for optimizing performance and cost. Read strategies determine how read-only commands are routed, balancing factors like data freshness, latency, and throughput.
Understanding the infrastructure that supports these strategies is key to leveraging them effectively.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Availability Zones&lt;&#x2F;strong&gt; are isolated locations within Cloud regions that provide redundancy and fault tolerance. They are physically separated but connected through low-latency networks. Major cloud providers such as AWS, GCP, and Oracle implement the concept of AZs. However, using resources across different AZs can incur increased latency and cost.
GLIDE takes advantage of this infrastructure by routing reads to replica nodes within the same AZ, enabling faster responses and improved user experience.
This is particularly advantageous for applications that prioritize read throughput and can tolerate slightly stale data. For instance, websites with personalized recommendation engines rely on displaying content quickly to users rather than ensuring every update is perfectly synchronized.
Additionally, one of the most common use cases for caching is to store database query results, allowing applications to trade off absolute freshness for better performance, scalability, and cost-effectiveness. The read-from-replica strategies introduce minimal additional staleness, making them an efficient choice for such scenarios.
GLIDE provides flexible options tailored to your application’s needs:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;PRIMARY&lt;&#x2F;code&gt;: Always read from the primary to ensure the freshness of data.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;PREFER_REPLICA&lt;&#x2F;code&gt;: Distribute requests among all replicas in a round-robin manner. If no replica is available, fall back to the primary.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;AZ_AFFINITY&lt;&#x2F;code&gt;: Prioritize replicas in the same AZ as the client. If no replicas are available in the zone, fall back to other replicas or the primary if needed.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;AZ_AFFINITY_REPLICAS_AND_PRIMARY&lt;&#x2F;code&gt;: Prioritize replicas in the same AZ as the client. If no replicas are available in the zone, fall back to the primary in the same AZ. If neither are available, fall back to other replicas or the primary in other zones.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
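&lt;p&gt;As a rough illustration, the fallback order of these strategies can be sketched in Python (a simplification for clarity, not GLIDE&#x27;s actual implementation; the node dictionaries and the &lt;code&gt;pick_node&lt;&#x2F;code&gt; helper are hypothetical):&lt;&#x2F;p&gt;

```python
import random

def pick_node(strategy, nodes, client_az):
    """Illustrative read-routing sketch. Each node is a dict like
    {"name": "replica-1", "role": "replica", "az": "us-east-1a"}."""
    replicas = [n for n in nodes if n["role"] == "replica"]
    primary = next(n for n in nodes if n["role"] == "primary")
    local_replicas = [n for n in replicas if n["az"] == client_az]

    if strategy == "PRIMARY":
        # Always the primary, for maximum freshness.
        return primary
    if strategy == "PREFER_REPLICA":
        # Any replica; fall back to the primary if none exist.
        return random.choice(replicas) if replicas else primary
    if strategy == "AZ_AFFINITY":
        # Local replicas first, then any replica, then the primary.
        if local_replicas:
            return random.choice(local_replicas)
        return random.choice(replicas) if replicas else primary
    if strategy == "AZ_AFFINITY_REPLICAS_AND_PRIMARY":
        # Local replicas, then a local primary, then anything remote.
        if local_replicas:
            return random.choice(local_replicas)
        if primary["az"] == client_az:
            return primary
        return random.choice(replicas) if replicas else primary
    raise ValueError(f"unknown strategy: {strategy}")
```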
&lt;p&gt;In Valkey 8, the &lt;code&gt;availability-zone&lt;&#x2F;code&gt; configuration was introduced, allowing clients to specify the AZ of each Valkey server. GLIDE leverages this new configuration to give its users AZ affinity routing. At the time of writing, GLIDE is the only Valkey client library supporting the AZ affinity strategies, offering a unique advantage.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;az-affinity-routing-advantages&quot;&gt;AZ Affinity routing advantages&lt;&#x2F;h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Reduce Data Transfer Costs&lt;&#x2F;strong&gt; Cross-zone data transfer often incurs additional charges in Cloud environments. By ensuring operations are directed to nodes within the same AZ, you can minimize or eliminate these costs.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Example:&lt;&#x2F;strong&gt; Consider an application in AWS with a Valkey cluster of 2 shards, each with 1 primary and 2 replicas, running on m7g.xlarge instances. The cluster processes 250MB of data per second and, to simplify the example, 100% of the traffic is read operations. If 50% of this traffic crosses AZs at a cost of $0.01 per GB, the monthly cross-AZ data transfer cost is approximately $3,285. In addition, the cluster itself costs $0.252 per hour per node, a total of $1,088 per month. By implementing AZ affinity routing, you can reduce the total monthly cost from $4,373 to $1,088, as all traffic remains within the same AZ.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Minimize Latency&lt;&#x2F;strong&gt; The distance between AZs within the same region (in AWS, typically up to 60 miles, or 100 kilometers) adds extra roundtrip latency, usually in the range of 500µs to 1000µs. By keeping requests within the same AZ, you can reduce latency and improve the responsiveness of your application.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Example 1:&lt;&#x2F;strong&gt;
Consider a cluster with three nodes, one primary and two replicas. Each node is located in a different availability zone. The client is located in az-2 along with replica-1.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;With &lt;code&gt;PREFER_REPLICA&lt;&#x2F;code&gt; strategy&lt;&#x2F;strong&gt;:
In this case, the client reads from any available replica, which may be located in a different AZ as shown below, and the average latency is, for example, 800 microseconds.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;PREFER_REPLICA_strategy.png&quot; alt=&quot;PREFER_REPLICA Read strategy latency example&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;With &lt;code&gt;AZ_AFFINITY&lt;&#x2F;code&gt; strategy&lt;&#x2F;strong&gt;:
In this case, the client reads from the replica in its own AZ, and the average latency is, for example, about 300 microseconds.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;AZ_AFFINITY_strategy.png&quot; alt=&quot;AZ_AFFINITY Read strategy latency example&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Example 2:&lt;&#x2F;strong&gt;
Consider a cluster with three nodes, one primary and two replicas. Each node is located in a different availability zone. The client is located in az-2 along with the primary node; the replicas are located in az-1 and az-3.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;With &lt;code&gt;AZ_AFFINITY&lt;&#x2F;code&gt; strategy&lt;&#x2F;strong&gt;:
In this case, the client attempts to read from a replica in the same AZ. Since none are available in az-2, it falls back to a replica in another AZ, such as az-1 or az-3, and the average latency is, for example, about 800 microseconds.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;AZ_AFFINITY_strategy2.png&quot; alt=&quot;AZ_AFFINITY Read strategy latency example&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;With &lt;code&gt;AZ_AFFINITY_REPLICAS_AND_PRIMARY&lt;&#x2F;code&gt; strategy&lt;&#x2F;strong&gt;:
In this case, the client first attempts to read from a replica in the same AZ. Since no local replica exists, it reads from the primary located in az-2. The average latency is, for example, about 300 microseconds.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;AZ_AFFINITY_REPLICAS_AND_PRIMARY_strategy.png&quot; alt=&quot;AZ_AFFINITY_REPLICAS_AND_PRIMARY Read strategy latency example&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
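&lt;p&gt;The cost figures in the transfer-cost example above can be reproduced with a quick back-of-the-envelope calculation (the rates are the example&#x27;s assumptions, not current pricing; the example appears to use about 730 hours per month for transfer and 720 node-hours for instances):&lt;&#x2F;p&gt;

```python
# Assumptions from the example: 250 MB/s of read traffic, 50% cross-AZ,
# $0.01 per cross-AZ GB, and 6 nodes (2 shards x 3 nodes) of m7g.xlarge
# at $0.252 per node-hour.
read_mb_per_s = 250
cross_az_share = 0.5
cross_az_gb = read_mb_per_s * cross_az_share * 730 * 3600 / 1000  # MB -> GB
transfer_cost = cross_az_gb * 0.01          # ~ $3,285 per month
instance_cost = 6 * 0.252 * 720             # ~ $1,088 per month (30 days)
total_cost = transfer_cost + instance_cost  # ~ $4,373 per month
```

With AZ affinity routing the cross-AZ share drops toward zero, leaving only the instance cost.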
&lt;h2 id=&quot;configuring-az-affinity-connections-with-glide&quot;&gt;Configuring AZ Affinity Connections with GLIDE&lt;&#x2F;h2&gt;
&lt;p&gt;Setting up AZ affinity routing in GLIDE is simple. Let’s walk through the configuration steps to enable this feature in your application.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;steps-to-set-up-az-affinity-routing-in-glide&quot;&gt;Steps to Set Up AZ Affinity Routing in GLIDE&lt;&#x2F;h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Configure each Valkey node&#x27;s availability zone -
Assign each Valkey node to a specific AZ based on its physical or virtual location within the Cloud provider&#x27;s region.
The initial configuration must be done with a separate management client on node initialization, as clients get this information from the replicas on the first reconnect.
In some managed services, such as Amazon ElastiCache, this mapping is configured automatically and this step is not required.&lt;&#x2F;p&gt;
&lt;p&gt;For each node, run the following command and change the AZ and routing address as appropriate:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Python:&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;python&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;client.config_set({&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;availability-zone&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;: az}, &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;                    route&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span&gt;ByAddressRoute(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;host&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;address.example.com&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; port&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;6379&lt;&#x2F;span&gt;&lt;span&gt;))&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Java:&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;java&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;configSet&lt;&#x2F;span&gt;&lt;span&gt;(Map.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;of&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;availability-zone&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, az),&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; new&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; ByAddressRoute&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;address.example.com&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 6379&lt;&#x2F;span&gt;&lt;span&gt;))&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Node.js:&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;javascript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;configSet&lt;&#x2F;span&gt;&lt;span&gt;({&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;availability-zone&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;: az}, { route: {type:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;routeByAddress&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, host:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;address.example.com&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, port:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;6379&lt;&#x2F;span&gt;&lt;span&gt;}})&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Configure GLIDE with AZ-Specific Targeting -
Here are Python, Java, and Node.js examples showing how to set up an AZ affinity client that directs calls to nodes in the same AZ as the client.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Python:&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;python&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;from&lt;&#x2F;span&gt;&lt;span&gt; glide&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; import&lt;&#x2F;span&gt;&lt;span&gt; (&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    GlideClusterClient,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    GlideClusterClientConfiguration,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    NodeAddress,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ReadFrom&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Determine the client&amp;#39;s AZ (this could be fetched from the cloud provider&amp;#39;s metadata service)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;client_az&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;#39;us-east-1a&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Initialize Valkey client with preference for the client&amp;#39;s AZ&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;addresses&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; [NodeAddress(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;host&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;address.example.com&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; port&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;6379&lt;&#x2F;span&gt;&lt;span&gt;)]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;client_config&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; GlideClusterClientConfiguration(addresses,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; read_from&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span&gt;ReadFrom.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;AZ_AFFINITY&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; client_az&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span&gt;client_az)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;client&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; = await&lt;&#x2F;span&gt;&lt;span&gt; GlideClusterClient.create(client_config)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Write operation (route to the primary&amp;#39;s slot owner)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;await&lt;&#x2F;span&gt;&lt;span&gt; client.set(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;key1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;val1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# Get will read from one of the replicas in the same availability zone as the client, if one exists.&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;value&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; = await&lt;&#x2F;span&gt;&lt;span&gt; client.get(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;key1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Java:&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;java&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;&#x2F;&#x2F; Initialize Valkey client with preference for the client&amp;#39;s AZ&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;GlideClusterClientConfiguration config&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; GlideClusterClientConfiguration.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;builder&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    .&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;address&lt;&#x2F;span&gt;&lt;span&gt;(NodeAddress.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;builder&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        .&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;host&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;address.example.com&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        .&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;port&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;6379&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        .&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;build&lt;&#x2F;span&gt;&lt;span&gt;())&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    .&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;readFrom&lt;&#x2F;span&gt;&lt;span&gt;(ReadFrom.AZ_AFFINITY)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    .&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;clientAZ&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;us-east-1a&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    .&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;build&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;GlideClusterClient client&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; GlideClusterClient.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;createClient&lt;&#x2F;span&gt;&lt;span&gt;(config).&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;get&lt;&#x2F;span&gt;&lt;span&gt;();&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;&#x2F;&#x2F; Write operation (route to the primary&amp;#39;s slot owner)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;set&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;key1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;val1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;).&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;get&lt;&#x2F;span&gt;&lt;span&gt;();&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;&#x2F;&#x2F; Get will read from one of the replicas in the same availability zone as the client, if one exists.&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;get&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;key1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;).&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;get&lt;&#x2F;span&gt;&lt;span&gt;();&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Node.js:&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;javascript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;import&lt;&#x2F;span&gt;&lt;span&gt; GlideClusterClient&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; from&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;@valkey&#x2F;valkey-glide&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;const&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; addresses&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span&gt; [&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        host:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;address.example.com&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        port:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 6379&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;];&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;&#x2F;&#x2F; Initialize Valkey client with preference for the client&amp;#39;s AZ&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;const&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; client&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; = await&lt;&#x2F;span&gt;&lt;span&gt; GlideClusterClient.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;createClient&lt;&#x2F;span&gt;&lt;span&gt;({&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    addresses: addresses,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    readFrom:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;AZAffinity&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; as&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ReadFrom&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    clientAz:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;us-east-1a&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;});&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;&#x2F;&#x2F; Write operation (route to the primary&amp;#39;s slot owner)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;await&lt;&#x2F;span&gt;&lt;span&gt; client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;set&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;key1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;val1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;&#x2F;&#x2F; Get will read from one of the replicas in the same availability zone as the client, if one exists.&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;await&lt;&#x2F;span&gt;&lt;span&gt; client.&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;get&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;key1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
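&lt;p&gt;In the examples above, the client&#x27;s AZ is hard-coded. On EC2, an application would typically discover it at startup from the instance metadata service, as the code comments suggest. Here is a minimal Python sketch using IMDSv2; the &lt;code&gt;get_client_az&lt;&#x2F;code&gt; helper is illustrative, and you should verify the metadata paths and approach for your own environment:&lt;&#x2F;p&gt;

```python
import urllib.request

def get_client_az(timeout=2.0):
    """Fetch this EC2 instance's availability zone via IMDSv2.

    Returns a string such as "us-east-1a". Only works when run on an
    EC2 instance with the metadata service enabled.
    """
    base = "http://169.254.169.254/latest"
    # Step 1: obtain a session token (IMDSv2).
    token_req = urllib.request.Request(
        base + "/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=timeout).read().decode()
    # Step 2: read the placement/availability-zone document.
    az_req = urllib.request.Request(
        base + "/meta-data/placement/availability-zone",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(az_req, timeout=timeout).read().decode()
```

The returned value can then be passed as &lt;code&gt;client_az&lt;&#x2F;code&gt; (or &lt;code&gt;clientAz&lt;&#x2F;code&gt; &#x2F; &lt;code&gt;clientAZ&lt;&#x2F;code&gt;) in the GLIDE configurations shown above.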
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;By implementing AZ affinity routing in Valkey and using GLIDE, you can achieve lower latency and cost savings by routing requests to replicas in the same AZ as the client.&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;p&gt;&lt;em&gt;Updated May 2025 to cover the &lt;code&gt;AZ_AFFINITY_REPLICAS_AND_PRIMARY&lt;&#x2F;code&gt; strategy and a corresponding example.&lt;&#x2F;em&gt;&lt;&#x2F;p&gt;
&lt;h3 id=&quot;further-reading&quot;&gt;Further Reading&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-glide&quot;&gt;Valkey GLIDE GitHub Repository&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;&quot;&gt;Valkey Documentation&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-glide&#x2F;wiki&#x2F;Python-wrapper#read-strategy&quot;&gt;Valkey GLIDE read strategy documentation in Python&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-glide&#x2F;wiki&#x2F;Java-Wrapper#read-strategy&quot;&gt;Valkey GLIDE read strategy documentation in Java&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-glide&#x2F;wiki&#x2F;NodeJS-wrapper#read-strategy&quot;&gt;Valkey GLIDE read strategy documentation in NodeJS&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>2024: The Year of Valkey</title>
        <published>2024-12-20T01:01:01+00:00</published>
        <updated>2024-12-20T01:01:01+00:00</updated>
        
        <author>
          <name>
            kyledvs
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/2024-year-of-valkey/"/>
        <id>https://valkey.io/blog/2024-year-of-valkey/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/2024-year-of-valkey/">&lt;p&gt;The end of the calendar year is a great time to reflect, but for Valkey this particular year-end holds special meaning.
Think about it: this time in 2023, no one had ever heard the name “Valkey” because, well, it didn’t exist.
This seems nearly unbelievable given how much has changed in only a few short months.&lt;&#x2F;p&gt;
&lt;p&gt;Now, at the end of 2024, Valkey has had both a minor and major release as well as a few patches.
Valkey 7.2 primarily introduced the project and new name while carrying over the feature set and performance from before the fork.
Valkey 8.0 made substantial internal changes, bringing higher performance through multi-threaded I&#x2F;O, better memory efficiency from a rewritten main dictionary, more granular visibility into performance and resource usage, and enhanced reliability in replication and slot migration.
In 2025, the project is looking toward a future with new functionality and a whole boatload of optimizations in both performance and efficiency.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;beginnings&quot;&gt;Beginnings&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;youtu.be&#x2F;74Svvu37I_8?si=onLIvlu3X_ncKdh2&amp;amp;t=3055&quot;&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;2024-year-of-valkey&#x2F;images&#x2F;ossna-thumb.jpg&quot; alt=&quot;YouTube video thumbnail of open source summit north america&quot; &#x2F;&gt;&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Getting started with Valkey has changed substantially over the course of the year.
When Valkey first launched at Open Source Summit North America, I recall excitedly telling people that you could build from source, get the binary from the website or even use a container.
Now, you can get Valkey directly from the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;repology.org&#x2F;project&#x2F;valkey&#x2F;versions&quot;&gt;package manager in most Linux distributions&lt;&#x2F;a&gt; (and more on the way in 2025).
There are multiple options for containerization and operators to fit your needs.
And, for those who want to let &lt;em&gt;others&lt;&#x2F;em&gt; run Valkey, it is also available as a service on Aiven, AWS, Google Cloud Platform, NetApp Instaclustr, UpCloud, and several more.&lt;&#x2F;p&gt;
&lt;p&gt;The buzz around Valkey caused no shortage of coverage.
One theme that the initial Valkey coverage focused on was the speed at which everything happened: only 8 days after the license change, the project had a release and was a Linux Foundation project with supporters from a variety of companies around the industry:&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;An eight day timeline for a project of this size and scope is shocking.
An eight day timeline to announce a project of this scope that includes names like AWS, Google and Oracle of all vendors is an event without any obvious precedent.
Even setting the requisite legal approvals aside, naming and branding take time – as they did in this case, apparently.&lt;&#x2F;em&gt;&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;redmonk.com&#x2F;sogrady&#x2F;2024&#x2F;07&#x2F;16&#x2F;post-valkey-world&#x2F;&quot;&gt;Stephen O’Grady&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;Over the course of the year, the coverage shifted focus from establishment to velocity:&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;As per GitHub and as of this writing, it’s got roughly 10 times as many contributors and is hundreds of commits ahead of Redis.
It’s like Redis, but with more caffeine, a bigger dev team, and a community that’s suddenly not beholden to keeping requested features stuffed behind a paywall.&lt;&#x2F;em&gt;&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.lastweekinaws.com&#x2F;blog&#x2F;aws-valkey-play-when-a-fork-becomes-a-price-cut&#x2F;&quot;&gt;Corey Quinn&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;h2 id=&quot;stories&quot;&gt;Stories&lt;&#x2F;h2&gt;
&lt;p&gt;Beyond this coverage, less than six months after the establishment of the project a report indicated that &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;thenewstack.io&#x2F;redis-users-want-a-change&#x2F;&quot;&gt;63% of respondents were already familiar with Valkey&lt;&#x2F;a&gt;, so it’s no surprise that there are countless folks already using it.
These specific stories being told in public are a valuable data point for the next wave of migrations.
At Open Source Summit Hong Kong in September, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.facesofopensource.com&#x2F;dirk-hohndel&#x2F;&quot;&gt;Dirk Hohndel&lt;&#x2F;a&gt; from Verizon talked about migrating his own app over to the Valkey 8.0 release candidate and seeing a 3x performance increase.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=Qp74Nn-d5a8&quot;&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;2024-year-of-valkey&#x2F;images&#x2F;osshk-dirk.jpg&quot; alt=&quot;YouTube video thumbnail of open source summit hong kong&quot; &#x2F;&gt;&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;p&gt;On &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;aws.amazon.com&#x2F;developer&#x2F;community&#x2F;heroes&#x2F;marcin-sodkiewicz&#x2F;&quot;&gt;Marcin Sodkiewicz&lt;&#x2F;a&gt;’s blog he talks a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;sodkiewiczm.medium.com&#x2F;elasticache-serverless-valkey-review-1e3329cfbfa0&quot;&gt;little bit about his journey moving to Valkey&lt;&#x2F;a&gt;.
The post is largely math about the savings he’s achieving because the service he’s using has a lower unit cost.
What’s perhaps most interesting in this account of migrating to Valkey is what &lt;em&gt;isn’t&lt;&#x2F;em&gt; said.
Not once did Marcin talk about changing code, libraries, or commands: it was a “no-brainer” because he just had to update some infrastructure-as-code configuration to move to Valkey.&lt;&#x2F;p&gt;
&lt;p&gt;Given the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.linuxfoundation.org&#x2F;blog&#x2F;iwb-2024-state-of-open-source-financial-services&quot;&gt;strategic relevance of open source software in the financial services industry&lt;&#x2F;a&gt;, it’s notable to see organizations in that space come forward with strong supporting statements about their migration to Valkey.
Kailash Nadh, CTO of &lt;a rel=&quot;external&quot; href=&quot;http:&#x2F;&#x2F;zerodha.com&#x2F;&quot;&gt;Zerodha&lt;&#x2F;a&gt;, an &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Zerodha&quot;&gt;Indian online brokerage and financial services company&lt;&#x2F;a&gt; provided the following quote:&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;We recently adopted Valkey following the unexpected changes with Redis.
Valkey is a crucial project for the ecosystem, and we’re closely monitoring its progress with the ultimate aim of fully migrating to it.
We are excited to support the project!&lt;&#x2F;em&gt;&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;Similarly, Linux distributions have &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;repology.org&#x2F;project&#x2F;valkey&#x2F;versions&quot;&gt;widely adopted Valkey in their package managers&lt;&#x2F;a&gt;.
The latest version of Fedora, version 41, goes beyond just packaging Valkey: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;fedoraproject.org&#x2F;wiki&#x2F;Changes&#x2F;Replace_Redis_With_Valkey#Upgrade&#x2F;compatibility_impact&quot;&gt;it obsoletes Redis in favor of Valkey&lt;&#x2F;a&gt;.
Functionally, this means that if you upgraded to Fedora 41 and consumed Redis packaged by Fedora, you were automatically migrated to Valkey, and all future attempts to install Redis from the package manager give you Valkey.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;almalinux.org&#x2F;&quot;&gt;AlmaLinux, “the forever-free, Enterprise OS”&lt;&#x2F;a&gt; not only provides Valkey to its users but is also a user of Valkey in its mirroring system.
In an interview with Jonathan Wright, Lead for the infrastructure special interest group at AlmaLinux, he described how the mirror list system within AlmaLinux handles “millions and millions of requests per day” for systems requesting package updates.
The lookups are not simple: geolocation, subnet, and ASN matching meant that querying a relational database directly was not an option; it would just be too inefficient and too slow.
The system, which originally ran on Redis, brought the total request time from “2-3 seconds per request” down to just 50 milliseconds.
On Alma’s migration to Valkey he said:&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;...it was seamless ... you can yank out Redis, drop in Valkey and just keep on running, you don’t have to change anything.&lt;&#x2F;em&gt;&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;h2 id=&quot;forward-to-2025&quot;&gt;Forward to 2025&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey has grown immensely in 2024.
2025 will definitely hold new challenges, but it’s sure to carry forward the growth and excitement around the project while still delivering performance, efficiency, and drama-free installation &amp;amp; migrations to users.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Pushing the limits of Valkey on a Raspberry Pi</title>
        <published>2024-11-21T01:01:01+00:00</published>
        <updated>2024-11-21T01:01:01+00:00</updated>
        
        <author>
          <name>
            dtaivpp
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/testing-the-limits/"/>
        <id>https://valkey.io/blog/testing-the-limits/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/testing-the-limits/">&lt;p&gt;While doing extensive performance testing on a Raspberry Pi is admittedly silly, it made me appreciate just how complex performance testing really is. For example, in some of the tests below I managed to saturate all of the Raspberry Pi&#x27;s resources and still achieved terrible performance. Every application has different performance characteristics, so we&#x27;ll walk through the factors to consider when deploying Valkey.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-test-environment&quot;&gt;The test environment&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;testing-the-limits&#x2F;images&#x2F;cm4.png&quot; alt=&quot;Picture of the Compute Module 4 credit Raspberry Pi Ltd (CC BY-SA)&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;For hardware we are going to be using a Raspberry Pi &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.raspberrypi.com&#x2F;products&#x2F;compute-module-4&#x2F;?variant=raspberry-pi-cm4001000&quot;&gt;Compute Module 4 (CM4)&lt;&#x2F;a&gt;. It&#x27;s a single-board computer (SBC) that comes with a tiny 1.5GHz 4-core Broadcom CPU and 8GB of system memory. This is hardly the first device someone would pick for a production system, but using the CM4 makes it easy to showcase how to optimize Valkey for different hardware constraints.&lt;&#x2F;p&gt;
&lt;p&gt;Our operating system will be a 64-bit Debian-based operating system (OS) called &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.raspbian.org&#x2F;&quot;&gt;Raspbian&lt;&#x2F;a&gt;. This distribution is specifically tuned to perform well on the CM4. Valkey will run in a Docker container orchestrated with Docker Compose. I like deploying in containers as it simplifies operations. If you&#x27;d like to follow along, here is &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;docs.docker.com&#x2F;engine&#x2F;install&#x2F;debian&#x2F;&quot;&gt;a guide for installing Docker&lt;&#x2F;a&gt;. Make sure to continue to the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;docs.docker.com&#x2F;engine&#x2F;install&#x2F;linux-postinstall&#x2F;&quot;&gt;second page of the installation process&lt;&#x2F;a&gt; as well; it&#x27;s easy to miss, and skipping it could make it harder to follow along.&lt;&#x2F;p&gt;
&lt;p&gt;We&#x27;ll be using two CM4s for testing. The first will host Valkey and the second will host the benchmarking software, which better reflects how most people will run in production. Benchmarking is done with redis-benchmark because it can be installed with &lt;code&gt;sudo apt install redis-tools&lt;&#x2F;code&gt;. Valkey does have its own benchmark utility that comes installed with Valkey; to use valkey-benchmark instead you would need to install Valkey on the benchmarking server or spin up a container and connect into it. Functionally, the two operate nearly identically as of this writing.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;testing-the-limits&#x2F;images&#x2F;test_setup.png&quot; alt=&quot;Test architecture showing two nodes: a Benchmark server with the ip of 10.0.11.221 with an arrow pointing to a Valkey server with an ip of 10.0.1.136&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;setting-up-our-environment&quot;&gt;Setting up our environment&lt;&#x2F;h2&gt;
&lt;p&gt;Below is a straightforward Docker Compose file that will start a single Valkey container. It binds Valkey to port 6379 on the host device, which means it is exposed to anyone with access to your network! This is important for us so we can reach it from the benchmarking server.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;yaml&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# valkey.yaml&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;services&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;  valkey-1&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    image&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey&#x2F;valkey:latest&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    hostname&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    command&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey-server --port 6379 --requirepass ${VALKEY_PASSWORD} --io-threads ${IO_THREADS} --save &amp;quot;&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    volumes&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;      -&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; .&#x2F;data:&#x2F;data&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    network_mode&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; host&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;volumes&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;  data&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    driver&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; local&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Since we are exposing this to our internal network, we will create a password for our default user. I used &lt;code&gt;head -16 &#x2F;dev&#x2F;urandom | openssl sha1&lt;&#x2F;code&gt; to generate a random password. Because of how fast Valkey can process requests, a brute-force attack could try hundreds of thousands of passwords per second. After generating that password, I put it in a &lt;code&gt;.env&lt;&#x2F;code&gt; file in the same directory as our Docker Compose file.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;#.env&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;VALKEY_PASSWORD&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;e41fb9818502071d592b36b99f63003019861dad&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;NODE_IP&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&amp;lt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;VALKEY&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; SERVER&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; I&lt;&#x2F;span&gt;&lt;span&gt;P&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;IO_THREADS&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
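&lt;p&gt;If you&#x27;d rather script those two steps, here is a minimal sketch that generates the password and writes the &lt;code&gt;.env&lt;&#x2F;code&gt; in one go. It uses &lt;code&gt;sha1sum&lt;&#x2F;code&gt; from coreutils in place of the openssl pipeline above, and leaves the &lt;code&gt;NODE_IP&lt;&#x2F;code&gt; line for you to add afterwards:&lt;&#x2F;p&gt;

```shell
# generate a random 40-character hex password and write the .env file;
# sha1sum stands in for the "openssl sha1" pipeline mentioned above
pw=$(head -c 16 /dev/urandom | sha1sum | cut -d " " -f 1)
printf "VALKEY_PASSWORD=%s\nIO_THREADS=1\n" "$pw" | tee .env
```

&lt;p&gt;Remember to append your own &lt;code&gt;NODE_IP&lt;&#x2F;code&gt; entry before bringing the stack up.&lt;&#x2F;p&gt;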
&lt;p&gt;Now by running &lt;code&gt;docker compose -f valkey.yaml up -d&lt;&#x2F;code&gt;, the Valkey server will start with the password we set!&lt;&#x2F;p&gt;
&lt;h2 id=&quot;baseline-test&quot;&gt;Baseline test&lt;&#x2F;h2&gt;
&lt;p&gt;Now we are ready to do some baseline testing. We will log into the benchmarking server. If you haven&#x27;t installed redis-benchmark yet you can do so with &lt;code&gt;sudo apt install redis-tools&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;redis-benchmark&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n 1000000 -t&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; set,get&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -P 16 -q -a&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;lt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;PASSWORD FROM .en&lt;&#x2F;span&gt;&lt;span&gt;v&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --threads 5 -h 10.0.1.136&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Test breakdown:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;-n&lt;&#x2F;code&gt; - run 1,000,000 operations using the commands specified in &lt;code&gt;-t&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;-t&lt;&#x2F;code&gt; - run the set and get tests&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;-P&lt;&#x2F;code&gt; - use 16 pipelines (send 16 operations per request)&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;-q&lt;&#x2F;code&gt; - silences the output to show only the final results&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;-a&lt;&#x2F;code&gt; - use the specified password&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;-h&lt;&#x2F;code&gt; - run the test against the specified host&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;--threads&lt;&#x2F;code&gt; - how many threads to generate test data from&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Honestly, I was astonished by the first set of results I got. Sure, I expected Valkey to be fast, but this speed from a single-board computer?? ON A SINGLE THREAD?? It&#x27;s amazing.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;redis-benchmark&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n 1000000 -t&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; set,get&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -P 16 -q -a&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; e41fb9818502071d592b36b99f63003019861dad&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --threads 5 -h 10.0.1.136&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;SET:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 173040.33&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; requests per second, p50=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;4.503&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; msec&lt;&#x2F;span&gt;&lt;span&gt;                    &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;GET:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 307031.00&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; requests per second, p50=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;2.455&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; msec&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Between the two tests we averaged 240,000 requests per second.&lt;&#x2F;p&gt;
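&lt;p&gt;For clarity, that figure is simply the mean of the SET and GET numbers:&lt;&#x2F;p&gt;

```shell
# average of the SET and GET throughput results above
avg=$(( (173040 + 307031) / 2 ))
echo "$avg"   # prints: 240035
```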
&lt;h2 id=&quot;raising-the-cpu-clock-speed&quot;&gt;Raising the CPU clock speed&lt;&#x2F;h2&gt;
&lt;p&gt;Since Valkey is a single-threaded application, it makes sense that higher clock speeds would lead to more performance. I don&#x27;t expect most people to overclock their servers in production, but servers do come with a range of CPU clock speeds.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;&#x2F;strong&gt; Clock speeds are generally only comparable between CPUs with a similar architecture. For example, you could reasonably compare clock speeds between a 12th generation Intel i5 and a 12th generation Intel i7. But if the 12th gen i7 had a max clock speed of 5GHz, that doesn&#x27;t necessarily mean it will be slower than an AMD Ryzen 9 9900X clocked at 5.6GHz.&lt;&#x2F;p&gt;
&lt;p&gt;If you&#x27;re following along on a Pi of your own I&#x27;ve outlined the steps to overclock your CM4 below. Otherwise, you can skip to the results section below.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Warning:&lt;&#x2F;strong&gt; Overclocking can damage your device. Please use caution and do your own research to find settings that are safe.&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Open the following file
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;sudo nano &#x2F;boot&#x2F;firmware&#x2F;config.txt&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;At the end of the file add this section below the &lt;code&gt;[all]&lt;&#x2F;code&gt; tag&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;[all]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;over_voltage=8&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;arm_freq=2200&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Restart the Pi and log back in: &lt;code&gt;sudo reboot&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;We&#x27;ve just increased the speed of the Pi by 47% by raising the clock speed from 1.5GHz to 2.2GHz. Now let&#x27;s re-run our test and see how things look!&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;redis-benchmark&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n 1000000 -t&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; set,get&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -P 16 -q -a&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; e41fb9818502071d592b36b99f63003019861dad&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --threads 5 -h 10.0.1.136&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;SET:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 394368.41&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; requests per second, p50=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;1.223&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; msec&lt;&#x2F;span&gt;&lt;span&gt;                    &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;GET:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 438058.53&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; requests per second, p50=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;1.135&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; msec&lt;&#x2F;span&gt;&lt;span&gt; &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;We&#x27;re up to 416,000 requests per second (a reminder: this is the average of the two operations). The mathematicians out there might notice that this speed-up is a lot more than the expected 47% increase. It&#x27;s a 73% increase in requests per second. What&#x27;s happening?!&lt;&#x2F;p&gt;
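&lt;p&gt;For anyone who wants to check my math, here is the arithmetic behind those two percentages (throughput rounded to thousands of requests per second):&lt;&#x2F;p&gt;

```shell
# growth rates in integer percent: clock speed vs. observed throughput
clock_pct=$(( (2200 - 1500) * 100 / 1500 ))   # 1.5GHz to 2.2GHz
rps_pct=$(( (416 - 240) * 100 / 240 ))        # 240k to 416k requests per second
echo "clock +${clock_pct}%, throughput +${rps_pct}%"
# prints: clock +46%, throughput +73%
```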
&lt;h2 id=&quot;adding-io-threading&quot;&gt;Adding IO Threading&lt;&#x2F;h2&gt;
&lt;p&gt;With all these gains I was super excited to try the new &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;unlock-one-million-rps&#x2F;&quot;&gt;IO threading available&lt;&#x2F;a&gt; in Valkey 8. First we will take down the previously running Docker instance with &lt;code&gt;docker compose -f valkey.yaml down&lt;&#x2F;code&gt;. Then we will set the &lt;code&gt;.env&lt;&#x2F;code&gt; file&#x27;s &lt;code&gt;IO_THREADS&lt;&#x2F;code&gt; parameter to 5.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;#.env&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;VALKEY_PASSWORD&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;e41fb9818502071d592b36b99f63003019861dad&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;NODE_IP&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&amp;lt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;VALKEY&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; SERVER&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; I&lt;&#x2F;span&gt;&lt;span&gt;P&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;IO_THREADS&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;5&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Then we can &lt;code&gt;docker compose -f valkey.yaml up -d&lt;&#x2F;code&gt; to start it again. Remote into the benchmarking server to start the test and...?&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;redis-benchmark&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n 10000000 -t&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; set,get&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -P 16 -q -a&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; e41fb9818502071d592b36b99f63003019861dad&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --threads 5 -h 10.0.1.136&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;SET:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 345494.75&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; requests per second, p50=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;0.911&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; msec&lt;&#x2F;span&gt;&lt;span&gt;                    &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;GET:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 327858.09&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; requests per second, p50=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;0.879&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; msec&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Wait a second... These results are worse than the ones before? We went from 416k requests per second down to 336k... What&#x27;s happening?&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;testing-the-limits&#x2F;images&#x2F;io_threads.png&quot; alt=&quot;A picture of the valkey server with 4 cores represented as boxes. In each of the four core boxes there are other boxes. 2 cores have 1 IO Thread Box, 1 Core has 2 IO thread boxes, and the last one has 1 IO Thread box along with a Valkey Process box.&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;We have over-subscribed our CPU, meaning we&#x27;ve created more worker threads than CPU cores. When a thread is under constant load, it competes for resources with the other threads on its core, not to mention with the Valkey process itself.&lt;&#x2F;p&gt;
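&lt;p&gt;A quick way to pick a safer thread count on whatever box you&#x27;re running is to derive it from the core count. This is a sketch; the two-core margin for the main Valkey process and the OS is my own conservative rule of thumb, not an official recommendation:&lt;&#x2F;p&gt;

```shell
# leave headroom for the main Valkey process (and the OS) when
# sizing io-threads; nproc reports the number of available cores
cores=$(nproc)
if [ "$cores" -gt 2 ]; then
  io_threads=$(( cores - 2 ))
else
  io_threads=1
fi
echo "IO_THREADS=$io_threads"
```

&lt;p&gt;On the 4-core CM4 this lands on 2.&lt;&#x2F;p&gt;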
&lt;p&gt;That&#x27;s why &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;blob&#x2F;a62d1f177b7888ec88035a0a1ce600fbc2280ce7&#x2F;valkey.conf#L1337-L1341&quot;&gt;Valkey recommends&lt;&#x2F;a&gt; setting the number of threads to a value less than the number of cores you have. For our little 4-core server, let&#x27;s change the &lt;code&gt;IO_THREADS&lt;&#x2F;code&gt; parameter to 2 threads in the &lt;code&gt;.env&lt;&#x2F;code&gt; file and try again.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;redis-benchmark&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n 10000000 -t&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; set,get&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -P 16 -q -a&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; e41fb9818502071d592b36b99f63003019861dad&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --threads 5 -h 10.0.1.136&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;SET:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 609050.44&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; requests per second, p50=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;0.831&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; msec&lt;&#x2F;span&gt;&lt;span&gt;                    &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;GET:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 521186.22&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; requests per second, p50=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;0.719&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; msec&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Much better! Now we are seeing around 565,000 requests per second. That&#x27;s a 35% increase in performance across both metrics! Not to mention, in the picture below you can see that we have 100% utilization across all of our CPUs, which means there&#x27;s no more room for improvement!&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;testing-the-limits&#x2F;images&#x2F;io_threading_htop.png&quot; alt=&quot;A picture of an HTOP window showing all four of our CPU&amp;#39;s at 100% utilization.&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Right? Well, believe it or not, we can squeeze even more performance out of our little CM4!&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;testing-the-limits&#x2F;images&#x2F;io_threading_arch.png&quot; alt=&quot;A picture of our Valkey server with the 4 core boxes and to the right of them is a memory box. In the first core box is the Valkey process and in the next two there are IO Threads. The valkey process has a loop showing it communicating with both of the IO threads. It also has a bracket showing it managing all the memory.&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Above is a representative outline of what&#x27;s happening on the server. The Valkey process has to spend valuable cycles managing the IO threads, and on top of that it has to do a lot of work to manage all the memory assigned to it. That&#x27;s a lot of work for a single process.&lt;&#x2F;p&gt;
&lt;p&gt;Now there is actually one more optimization we can use to make single-threaded Valkey even faster. Valkey has recently done a substantial amount of work to support speculative execution. This work allows Valkey to predict which values will be needed from memory in future processing steps, so the server doesn&#x27;t have to wait on memory accesses that are an order of magnitude slower than the L1 cache. I won&#x27;t go through the details of how this works, as there&#x27;s already a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;unlock-one-million-rps-part2&#x2F;&quot;&gt;great blog that describes how to take advantage of these optimizations&lt;&#x2F;a&gt;. Here are the results:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;redis-benchmark&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n 10000000 -t&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; set,get&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -P 16 -q -a&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; e41fb9818502071d592b36b99f63003019861dad&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --threads 5 -h 10.0.1.136&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;SET:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 632791.25&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; requests per second, p50=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;1.191&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; msec&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;GET:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 888573.00&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; requests per second, p50=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;0.695&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; msec&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;While these results are better, they are a bit confusing. After talking with some of Valkey&#x27;s maintainers, it seems there may be something different in the way Raspbian is configured when it comes to memory writes. In their testing the &lt;code&gt;GET&#x2F;SET&lt;&#x2F;code&gt; results were nearly identical, but in my testing so far the write speed always seems to lag behind the read speed. If you think you know why, please reach out!&lt;&#x2F;p&gt;
&lt;h2 id=&quot;clustered-valkey&quot;&gt;Clustered Valkey&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;testing-the-limits&#x2F;images&#x2F;valkey_clustered.png&quot; alt=&quot;A picture of our Valkey server with the 4 core boxes and to the right of them is a memory box. In the first three core boxes are Valkey processes. Each of them has a bracket around a portion of the memory.&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;For our last step we are going to spin up a Valkey cluster. The cluster will run individual Valkey instances, each responsible for managing its own portion of the keyspace. This way each instance can execute operations in parallel much more easily.&lt;&#x2F;p&gt;
&lt;p&gt;I won&#x27;t go into detail about how the keyspace works, but &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;cluster-tutorial&#x2F;&quot;&gt;here is a good 101 guide&lt;&#x2F;a&gt; for understanding clustering in Valkey.&lt;&#x2F;p&gt;
&lt;p&gt;First we&#x27;ll stop our previous Valkey container with &lt;code&gt;docker compose -f valkey.yaml down&lt;&#x2F;code&gt;. Now we can create our Docker Compose file for the cluster. Because each instance is exposed on the host, they all need to use different ports. Additionally, all of them need to know they are started in cluster mode so they can redirect requests to the appropriate instance.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;yaml&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #8B949E;&quot;&gt;# valkey-cluster.yaml&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;services&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;  valkey-node-1&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    hostname&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    image&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey&#x2F;valkey:latest&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    command&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --requirepass ${VALKEY_PASSWORD} --save &amp;quot;&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    volumes&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;      -&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; .&#x2F;data1:&#x2F;data&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    network_mode&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; host&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;  valkey-node-2&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    hostname&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey2&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    image&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey&#x2F;valkey:latest&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    command&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey-server --port 6380 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --requirepass ${VALKEY_PASSWORD} --save &amp;quot;&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    volumes&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;      -&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; .&#x2F;data2:&#x2F;data&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    network_mode&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; host&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;  valkey-node-3&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    hostname&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey3&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    image&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey&#x2F;valkey:latest&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    command&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey-server --port 6381 --cluster-enabled yes --cluster-config-file nodes.conf --cluster-node-timeout 5000 --requirepass ${VALKEY_PASSWORD} --save &amp;quot;&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    volumes&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;      -&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; .&#x2F;data3:&#x2F;data&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    network_mode&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; host&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;volumes&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;  data1&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    driver&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; local&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;  data2&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    driver&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; local&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;  data3&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #7EE787;&quot;&gt;    driver&lt;&#x2F;span&gt;&lt;span&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; local&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Run &lt;code&gt;docker compose -f valkey-cluster.yaml up -d&lt;&#x2F;code&gt; to start the cluster. There is one more step to get the cluster running. Find the name of one of your nodes with &lt;code&gt;docker ps --format &#x27;{{.Names}}&#x27;&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;docker&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; ps&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --format&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;#39;{{.Names}}&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;kvtest-valkey-node-1-1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;kvtest-valkey-node-3-1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;kvtest-valkey-node-2-1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;I&#x27;ll use the first container to finish the cluster creation. Once the containers have started up, we have to give them the details they need to form the cluster. Below I am using the IP of the host and the port of each container to create the cluster, because these addresses need to be reachable from the benchmarking server.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;docker&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; exec&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -it&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kvtest-valkey-node-1-1 valkey-cli&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --cluster&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; create 10.0.1.136:6379 10.0.1.136:6380 10.0.1.136:6381&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -a&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; e41fb9818502071d592b36b99f63003019861dad&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Now we can run our benchmark! We need to add the &lt;code&gt;--cluster&lt;&#x2F;code&gt; flag to our benchmarking command. Also, because this is so fast, I ended up moving from 1 million requests to 10 million requests. That way we can make sure Valkey has time to fully utilize all its resources.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;redis-benchmark&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n 10000000 -t&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; set,get&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -P 16 -q --cluster -a&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; e41fb9818502071d592b36b99f63003019861dad&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --threads 5 -h 10.0.1.136&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Cluster&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; has&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 3&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; master nodes:&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Master&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; 0: 219294612b44226fa32482871cf21025ff531875 10.0.1.136:6380&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Master&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; 1: e5d85b970551c27065f1552f5358f4add6114d98 10.0.1.136:6381&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Master&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; 2: 1faf3d0dd22e518eec11fd46c0de6ce18cd15cfe 10.0.1.136:6379&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;SET:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 1122838.50&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; requests per second, p50=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;0.575&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; msec&lt;&#x2F;span&gt;&lt;span&gt;                     &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;GET:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 1188071.75&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; requests per second, p50=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;0.511&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; msec&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;1,155,000 requests per second&lt;&#x2F;strong&gt;. We&#x27;ve managed to double our requests per second. All this on a single board computer that&#x27;s the size of a credit card.&lt;&#x2F;p&gt;
&lt;p&gt;While this is far from what I would recommend for a production server, these are the same steps I&#x27;d recommend to someone evaluating Valkey. It&#x27;s important to test with a single instance first to find the optimal settings. Then you can begin to scale up your test by adding either more IO Threads or more Valkey instances.&lt;&#x2F;p&gt;
&lt;p&gt;Testing should mirror your production workload as closely as it can. This test uses synthetic data, so I&#x27;d recommend checking the documentation for other settings you may need to test with. For example, we tested with the defaults of 50 client connections and 3-byte payloads. Your production workload may look different, so explore all the settings! You may find that IO threading works better for your use case than it did in mine.&lt;&#x2F;p&gt;
&lt;p&gt;If you enjoyed this read make sure to check out my blog &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;tippybits.com&quot;&gt;TippyBits.com&lt;&#x2F;a&gt; where I post content like this on a regular basis. Stay curious my friends!&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Generally Available: Valkey 8.0.0</title>
        <published>2024-09-16T01:01:01+00:00</published>
        <updated>2024-09-16T01:01:01+00:00</updated>
        
        <author>
          <name>
            kyledvs
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/valkey-8-ga/"/>
        <id>https://valkey.io/blog/valkey-8-ga/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/valkey-8-ga/">&lt;p&gt;The first ever release of Valkey, 7.2.5, became generally available more than 5 months ago.
While the initial release was a milestone, it focused on compatibility and license continuity, bringing no new features to the table.
Today marks a different milestone for the Valkey project: the first major release.
Valkey 8.0.0 continues the traditions of the seven major versions of Redis that precede it by bringing improvements to speed and efficiency alongside new features.&lt;&#x2F;p&gt;
&lt;p&gt;Key properties of the Valkey project are transparency and collaboration.
As a consequence of Valkey 8.0.0 being developed entirely in the open, the team has already written about both the big and small features of the release.
The best overview is the RC1 blog which breaks down all the changes and features in the release into a few sections: &lt;a href=&quot;&#x2F;blog&#x2F;valkey-8-0-0-rc1&#x2F;#performance&quot;&gt;performance&lt;&#x2F;a&gt;, &lt;a href=&quot;&#x2F;blog&#x2F;valkey-8-0-0-rc1&#x2F;#reliability&quot;&gt;reliability&lt;&#x2F;a&gt;, &lt;a href=&quot;&#x2F;blog&#x2F;valkey-8-0-0-rc1&#x2F;#replication&quot;&gt;replication&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;valkey-8-0-0-rc1&#x2F;#observability&quot;&gt;observability&lt;&#x2F;a&gt;, and &lt;a href=&quot;&#x2F;blog&#x2F;valkey-8-0-0-rc1&#x2F;#efficiency&quot;&gt;efficiency&lt;&#x2F;a&gt;.
Additionally, there are deep dives on the &lt;a href=&quot;&#x2F;blog&#x2F;unlock-one-million-rps&#x2F;&quot;&gt;speed&lt;&#x2F;a&gt; (with a &lt;a href=&quot;&#x2F;blog&#x2F;unlock-one-million-rps-part2&#x2F;&quot;&gt;follow up&lt;&#x2F;a&gt;) and &lt;a href=&quot;&#x2F;blog&#x2F;valkey-memory-efficiency-8-0&#x2F;&quot;&gt;efficiency&lt;&#x2F;a&gt; improvements in Valkey 8.0.0.&lt;&#x2F;p&gt;
&lt;p&gt;While this is a major version, Valkey takes command set compatibility seriously: Valkey 8.0.0 makes no backwards incompatible changes to the existing command syntax or their responses.
Your existing tools and custom software will be able to immediately take advantage of Valkey 8.0.0.
Since Valkey 8.0.0 does make some small changes to previously undefined behaviors, it&#x27;s wise to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;blob&#x2F;8.0.0&#x2F;00-RELEASENOTES&quot;&gt;read the release notes&lt;&#x2F;a&gt;.
Additionally, because this version makes changes in how the software uses threading, you may want to re-evaluate your cluster’s infrastructure to achieve the highest performance.&lt;&#x2F;p&gt;
&lt;p&gt;Valkey 8.0.0 has gone through multiple rounds of release candidates, testing, and verification.
The Technical Steering Committee considers it ready for production usage.
You can &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;tree&#x2F;8.0.0&quot;&gt;build from source&lt;&#x2F;a&gt;, start &lt;a href=&quot;&#x2F;download&#x2F;&quot;&gt;installing the binaries, or deploy the containers&lt;&#x2F;a&gt; today.
Expect package managers to pick up the latest version in the coming days.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;&#x2F;strong&gt; &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;tree&#x2F;8.0.1&quot;&gt;Valkey 8.0.1&lt;&#x2F;a&gt; was released on October 2, read the release notes on &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;blob&#x2F;8.0.1&#x2F;00-RELEASENOTES&quot;&gt;GitHub&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Unlock 1 Million RPS: Experience Triple the Speed with Valkey - part 2</title>
        <published>2024-09-13T01:01:01+00:00</published>
        <updated>2024-09-13T01:01:01+00:00</updated>
        
        <author>
          <name>
            dantouitou
          </name>
        </author>
        
        <author>
          <name>
            uriyagelnik
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/unlock-one-million-rps-part2/"/>
        <id>https://valkey.io/blog/unlock-one-million-rps-part2/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/unlock-one-million-rps-part2/">&lt;p&gt;In the &lt;a href=&quot;&#x2F;blog&#x2F;unlock-one-million-rps&#x2F;&quot;&gt;first part&lt;&#x2F;a&gt; of this blog, we described how we offloaded almost all I&#x2F;O operations to I&#x2F;O threads, thereby freeing more CPU cycles in the main thread to execute commands. When we profiled the execution of the main thread, we found that a considerable amount of time was spent waiting for external memory. This was not entirely surprising, as when accessing random keys, the probability of finding the key in one of the processor caches is relatively low.  Considering that external memory access latency is approximately 50 times greater than L1 cache, it became clear that despite showing 100% CPU utilization, the main process was mostly “waiting”. In this blog, we describe the technique we have been using to increase the number of parallel memory accesses, thereby reducing the impact that external memory latency has on performance.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;speculative-execution-and-linked-lists&quot;&gt;Speculative execution and linked lists&lt;&#x2F;h3&gt;
&lt;p&gt;Speculative execution is a performance optimization technique used by modern processors, where the processor guesses the outcome of conditional operations and executes in parallel subsequent instructions ahead of time. Dynamic data structures, such as linked lists and search trees, have many advantages over static data structures: they are economical in memory consumption, provide fast insertion and deletion mechanisms, and can be resized efficiently. However, some dynamic data structures have a major drawback: they hinder the processor&#x27;s ability to speculate on future memory load instructions that could be executed in parallel. This lack of concurrency is especially problematic in very large dynamic data structures, where most pointer accesses result in high-latency external memory access.&lt;&#x2F;p&gt;
&lt;p&gt;This blog introduces Memory Access Amortization, a method that leverages speculative execution to improve performance, and shows how it is applied in Valkey. The basic idea behind the method is that by interleaving the execution of operations that access random memory locations, one can achieve significantly better performance than by executing them serially.&lt;&#x2F;p&gt;
&lt;p&gt;To illustrate the problem we are trying to solve, consider the following &lt;a href=&quot;&#x2F;assets&#x2F;C&#x2F;list_array.c&quot;&gt;function&lt;&#x2F;a&gt;, which takes an array of linked lists and returns the sum of all values in the lists:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;c&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;unsigned long&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; sequentialSum&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;size_t&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; arr_size&lt;&#x2F;span&gt;&lt;span&gt;, list &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;**&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;la&lt;&#x2F;span&gt;&lt;span&gt;) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    list &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;*&lt;&#x2F;span&gt;&lt;span&gt;lp;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    unsigned long&lt;&#x2F;span&gt;&lt;span&gt;  res &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;; &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    for&lt;&#x2F;span&gt;&lt;span&gt; (&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;int&lt;&#x2F;span&gt;&lt;span&gt; i &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;; i &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;lt;&lt;&#x2F;span&gt;&lt;span&gt; arr_size; i&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;++&lt;&#x2F;span&gt;&lt;span&gt;) { &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        lp &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; la&lt;&#x2F;span&gt;&lt;span&gt;[i]; &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        while&lt;&#x2F;span&gt;&lt;span&gt; (lp) { &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            res &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;+=&lt;&#x2F;span&gt;&lt;span&gt; lp-&amp;gt;val;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            lp &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span&gt; lp-&amp;gt;next;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    return&lt;&#x2F;span&gt;&lt;span&gt; res; &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Executing this function on an array of 16 lists containing 10 million elements each takes approximately 20.8 seconds on an ARM processor (Graviton 3). Now consider the following alternative implementation, which instead of scanning the lists separately interleaves the execution of the list scans:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;c&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;unsigned long&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; interleavedSum&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;size_t&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; arr_size&lt;&#x2F;span&gt;&lt;span&gt;, list &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;**&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;la&lt;&#x2F;span&gt;&lt;span&gt;) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    list &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;**&lt;&#x2F;span&gt;&lt;span&gt;lthreads &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; malloc&lt;&#x2F;span&gt;&lt;span&gt;(arr_size &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;* sizeof&lt;&#x2F;span&gt;&lt;span&gt;(list &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;*&lt;&#x2F;span&gt;&lt;span&gt;)); &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    unsigned long&lt;&#x2F;span&gt;&lt;span&gt; res &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;; &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    int&lt;&#x2F;span&gt;&lt;span&gt; n &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span&gt; arr_size; &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    for&lt;&#x2F;span&gt;&lt;span&gt; (&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;int&lt;&#x2F;span&gt;&lt;span&gt; i &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;; i &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;lt;&lt;&#x2F;span&gt;&lt;span&gt; arr_size; i&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;++&lt;&#x2F;span&gt;&lt;span&gt;) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;        lthreads&lt;&#x2F;span&gt;&lt;span&gt;[i]&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; la&lt;&#x2F;span&gt;&lt;span&gt;[i]; &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        if&lt;&#x2F;span&gt;&lt;span&gt; (&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;lthreads&lt;&#x2F;span&gt;&lt;span&gt;[i]&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; ==&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; NULL&lt;&#x2F;span&gt;&lt;span&gt;) &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;--&lt;&#x2F;span&gt;&lt;span&gt;; &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    } &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    while&lt;&#x2F;span&gt;&lt;span&gt;(n) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        for&lt;&#x2F;span&gt;&lt;span&gt; (&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;int&lt;&#x2F;span&gt;&lt;span&gt; i &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;; i &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;lt;&lt;&#x2F;span&gt;&lt;span&gt; arr_size; i&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;++&lt;&#x2F;span&gt;&lt;span&gt;) { &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;            if&lt;&#x2F;span&gt;&lt;span&gt; (&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;lthreads&lt;&#x2F;span&gt;&lt;span&gt;[i]&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; ==&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; NULL&lt;&#x2F;span&gt;&lt;span&gt;) &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;                continue&lt;&#x2F;span&gt;&lt;span&gt;; &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;            res &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;+=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; lthreads&lt;&#x2F;span&gt;&lt;span&gt;[i]-&amp;gt;val;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;            lthreads&lt;&#x2F;span&gt;&lt;span&gt;[i]&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; lthreads&lt;&#x2F;span&gt;&lt;span&gt;[i]-&amp;gt;next; &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;            if&lt;&#x2F;span&gt;&lt;span&gt; (&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;lthreads&lt;&#x2F;span&gt;&lt;span&gt;[i]&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; ==&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; NULL&lt;&#x2F;span&gt;&lt;span&gt;) &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;                n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;--&lt;&#x2F;span&gt;&lt;span&gt;;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        }  &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    }&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;    free&lt;&#x2F;span&gt;&lt;span&gt;(lthreads);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    return&lt;&#x2F;span&gt;&lt;span&gt; res; &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Running this new version with the same input as previously described takes less than 2 seconds, achieving a 10x speedup! The explanation for this significant improvement lies in the processor&#x27;s speculative execution capabilities. In a standard sequential traversal of a linked list, as seen in the first version of the function, the processor cannot &#x27;speculate&#x27; on future memory access instructions. This limitation becomes particularly costly with large lists, where each pointer access likely results in an expensive external memory access. In contrast, the alternative implementation, which interleaves the list traversals, allows the processor to issue more memory accesses in parallel. This leads to an overall reduction in memory access latency through amortization.&lt;&#x2F;p&gt;
&lt;p&gt;One way to maximize the amount of parallel memory access issued is to add prefetch instructions. Replacing&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;c&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;             if&lt;&#x2F;span&gt;&lt;span&gt; (&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;lthreads&lt;&#x2F;span&gt;&lt;span&gt;[i]&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; ==&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; NULL&lt;&#x2F;span&gt;&lt;span&gt;) &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;                n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;--&lt;&#x2F;span&gt;&lt;span&gt;;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;with&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;c&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;            if&lt;&#x2F;span&gt;&lt;span&gt; (&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;lthreads&lt;&#x2F;span&gt;&lt;span&gt;[i]) &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;                __builtin_prefetch&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;lthreads&lt;&#x2F;span&gt;&lt;span&gt;[i]);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;            else&lt;&#x2F;span&gt;&lt;span&gt; &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;                n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;--&lt;&#x2F;span&gt;&lt;span&gt;;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;reduces the execution time further to 1.8 seconds.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;back-to-valkey&quot;&gt;Back to Valkey&lt;&#x2F;h3&gt;
&lt;p&gt;In the first part, we described how we updated the existing I&#x2F;O threads implementation to increase parallelism and reduce the number of I&#x2F;O operations executed by the main thread to a minimum. Indeed, we observed an increase in the number of requests per second, reaching up to 780K SET commands per second. Profiling the execution revealed that Valkey&#x27;s main thread was spending more than 40% of its time in a single function: &lt;code&gt;lookupKey&lt;&#x2F;code&gt;, whose goal is to locate the command keys in Valkey&#x27;s main dictionary. This dictionary is implemented as a straightforward chained hash table, as shown in the picture below:
&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;lookupKey.jpg&quot; alt=&quot;dict find&quot; &#x2F;&gt;
On a large enough set of keys, almost every memory address accessed while searching the dictionary will not be found in any of the processor caches, resulting in costly external memory accesses. Also, as with the linked list example above, since the addresses in the table→dictEntry→...dictEntry→robj sequence are serially dependent, it is not possible to determine the next address to be accessed before the previous address in the chain has been resolved.&lt;&#x2F;p&gt;
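&lt;p&gt;As a hedged sketch of why this search is serially dependent (illustrative field names, not Valkey&#x27;s actual dictEntry definitions), consider a minimal chained hash lookup: each load supplies the address for the next one, so none of the cache misses can overlap.&lt;&#x2F;p&gt;

```c
#include <stddef.h>
#include <string.h>

/* Minimal sketch of a chained hash lookup (illustrative layout, not
 * Valkey's actual code). Each iteration loads e->next, and that loaded
 * value is the address of the following access: the processor cannot
 * start the next (likely cache-missing) load until the current one
 * has resolved. */
typedef struct entry {
    const char *key;
    long val;
    struct entry *next;
} entry;

typedef struct {
    entry **table;   /* bucket heads */
    size_t nbuckets; /* power of two */
} dict;

/* hypothetical toy hash for illustration; the real server uses SipHash */
static size_t hash_key(const char *key) {
    size_t h = 5381;
    while (*key) h = h * 33 + (unsigned char)*key++;
    return h;
}

entry *dict_find(dict *d, const char *key) {
    size_t bucket = hash_key(key) & (d->nbuckets - 1);
    for (entry *e = d->table[bucket]; e; e = e->next) /* dependent loads */
        if (strcmp(e->key, key) == 0)
            return e;
    return NULL;
}
```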
&lt;h3 id=&quot;batching-and-interleaving&quot;&gt;Batching and interleaving&lt;&#x2F;h3&gt;
&lt;p&gt;To overcome this inefficiency, we adopted the following approach. Every time a batch of incoming commands from the I&#x2F;O threads is ready for execution, Valkey’s main thread prefetches the memory addresses needed for future &lt;code&gt;lookupKey&lt;&#x2F;code&gt; invocations for the keys involved in those commands before executing them. This prefetch phase is performed by &lt;code&gt;dictPrefetch&lt;&#x2F;code&gt;, which, as in the linked list example above, interleaves the table→dictEntry→...dictEntry→robj search sequences for all keys. This reduces the time spent in &lt;code&gt;lookupKey&lt;&#x2F;code&gt; by more than 80%. Another issue we had to address was that the parsed commands arriving from the I&#x2F;O threads were not present in the L1&#x2F;L2 caches of the core running Valkey’s main thread; this was resolved using the same method. All the relevant code can be found in &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;blob&#x2F;unstable&#x2F;src&#x2F;memory_prefetch.c&quot;&gt;memory_prefetch.c&lt;&#x2F;a&gt;. In total, memory access amortization improves Valkey performance by almost 50%, increasing throughput to more than 1.19M requests per second.&lt;&#x2F;p&gt;
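&lt;p&gt;The interleaving idea behind the prefetch phase can be sketched as follows (a simplified illustration with hypothetical names, not the actual memory_prefetch.c code): instead of walking each key&#x27;s chain to completion, all chains are advanced one step per round, issuing a prefetch for each, so the cache misses overlap.&lt;&#x2F;p&gt;

```c
#include <stddef.h>

/* Hedged sketch of interleaved chain traversal with prefetching
 * (hypothetical structures, not Valkey's actual implementation). */
typedef struct node {
    struct node *next;
    long payload;
} node;

/* Advance every cursor one link per round, prefetching the node that
 * will be dereferenced in the next round; the misses for all chains
 * are then in flight at the same time instead of being serialized. */
void advance_batch(node **cursors, int nkeys, int rounds) {
    for (int r = 0; r < rounds; r++) {
        for (int i = 0; i < nkeys; i++) {
            if (cursors[i] == NULL)
                continue;
            cursors[i] = cursors[i]->next;
            if (cursors[i])
                __builtin_prefetch(cursors[i]); /* GCC/Clang builtin */
        }
    }
}
```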
&lt;h3 id=&quot;how-to-reproduce-valkey-8-0-performance-numbers&quot;&gt;How to reproduce Valkey 8.0 performance numbers&lt;&#x2F;h3&gt;
&lt;p&gt;This section will walk you through the process of reproducing our performance results, where we achieved 1.19 million requests per second using Valkey 8.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;hardware-setup&quot;&gt;Hardware Setup&lt;&#x2F;h3&gt;
&lt;p&gt;We conducted our tests on an AWS EC2 c7g.4xlarge instance, featuring 16 cores on an ARM-based (aarch64) architecture.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;system-configuration&quot;&gt;System Configuration&lt;&#x2F;h3&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: The core assignments used in this guide are examples. Optimal core selection may vary depending on your specific system configuration and workload.&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;Interrupt affinity - locate the network interface with &lt;code&gt;ifconfig&lt;&#x2F;code&gt; (let&#x27;s assume it is &lt;code&gt;eth0&lt;&#x2F;code&gt;) and its associated IRQs with&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;grep&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; eth0 &#x2F;proc&#x2F;interrupts&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; |&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; awk&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;#39;{print $1}&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; |&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; cut&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -d&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; :&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -f 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;In our setup, lines &lt;code&gt;48&lt;&#x2F;code&gt; to &lt;code&gt;55&lt;&#x2F;code&gt; are allocated for &lt;code&gt;eth0&lt;&#x2F;code&gt; interrupts. Allocate one core per 4 IRQ lines:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;for&lt;&#x2F;span&gt;&lt;span&gt; i&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; in&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;48..51}&lt;&#x2F;span&gt;&lt;span&gt;;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; do&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; echo 1000&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &#x2F;proc&#x2F;irq&#x2F;&lt;&#x2F;span&gt;&lt;span&gt;$i&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&#x2F;smp_affinity&lt;&#x2F;span&gt;&lt;span&gt;;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; done&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;for&lt;&#x2F;span&gt;&lt;span&gt; i&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; in&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;52..55}&lt;&#x2F;span&gt;&lt;span&gt;;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; do&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; echo 2000&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &#x2F;proc&#x2F;irq&#x2F;&lt;&#x2F;span&gt;&lt;span&gt;$i&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&#x2F;smp_affinity&lt;&#x2F;span&gt;&lt;span&gt;;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; done&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Server configuration - launch the Valkey server with these minimal configurations:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;.&#x2F;valkey-server&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --io-threads 9 --save --protected-mode&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; no&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;code&gt;--save&lt;&#x2F;code&gt; disables dumping to an RDB file and &lt;code&gt;--protected-mode no&lt;&#x2F;code&gt; allows connections from external hosts. The &lt;code&gt;--io-threads&lt;&#x2F;code&gt; number includes the main thread, meaning that in our case 8 I&#x2F;O threads are launched in addition to the main thread.&lt;&#x2F;p&gt;
&lt;p&gt;Main thread affinity - pin the main thread to a specific CPU core, avoiding the cores handling IRQs. Here we use core #3:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;sudo&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; taskset&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -cp 3&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; `&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;pidof&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey-server`&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Important: We suggest experimenting with different core pinning strategies to find the optimal performance while avoiding conflicts with IRQ-handling cores.&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;h3 id=&quot;benchmark-configuration&quot;&gt;Benchmark Configuration&lt;&#x2F;h3&gt;
&lt;p&gt;Run the benchmark from a separate instance using the following parameters:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Value size: 512 bytes&lt;&#x2F;li&gt;
&lt;li&gt;Number of keys: 3 million&lt;&#x2F;li&gt;
&lt;li&gt;Number of clients: 650&lt;&#x2F;li&gt;
&lt;li&gt;Number of threads: 50 (may vary for optimal results)&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;.&#x2F;valkey-benchmark&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -t&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; set&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -d 512 -r 3000000 -c 650 --threads 50 -h&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;host-name&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n 100000000000&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Important: When running the benchmark, it may take a few seconds for the database to get populated and for the performance to stabilize. You can adjust the &lt;code&gt;-n&lt;&#x2F;code&gt; parameter to ensure the benchmark runs long enough to reach optimal throughput.&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;h3 id=&quot;testing-and-availability&quot;&gt;Testing and Availability&lt;&#x2F;h3&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;releases&#x2F;tag&#x2F;8.0.0-rc2&quot;&gt;Valkey 8.0 RC2&lt;&#x2F;a&gt; is available now for evaluation with I&#x2F;O threads and memory access amortization.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Storing more with less: Memory Efficiency in Valkey 8</title>
        <published>2024-09-04T01:01:01+00:00</published>
        <updated>2024-09-04T01:01:01+00:00</updated>
        
        <author>
          <name>
            hpatro
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/valkey-memory-efficiency-8-0/"/>
        <id>https://valkey.io/blog/valkey-memory-efficiency-8-0/</id>
        
<content type="html" xml:base="https://valkey.io/blog/valkey-memory-efficiency-8-0/">&lt;p&gt;Valkey 8.0 GA is around the corner, and one of its themes is increasing overall memory efficiency. Reducing memory overhead has the obvious effect of better resource utilization, but it also impacts performance. By minimizing unnecessary memory consumption, you can store more data with the same hardware resources and improve overall system responsiveness. This post gives an overview of how Valkey internally manages data and its memory overhead. Additionally, it describes the two major improvements in Valkey 8.0 that increase overall memory efficiency.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;overview&quot;&gt;Overview&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey has two modes of operation: standalone and cluster mode. Standalone allows for one primary with its replica(s). To shard data horizontally and scale to store large amounts of data, cluster mode provides a mechanism to set up multiple primaries, each with their own replica(s).&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;valkey_operation_mode.png&quot; alt=&quot;Figure 1 Standalone (left) and Cluster mode (right)&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;In both standalone and cluster mode, Valkey&#x27;s main dictionary is a hash table with chained linked lists. The major components are a &lt;strong&gt;bucket&lt;&#x2F;strong&gt; and a &lt;strong&gt;dictionary entry&lt;&#x2F;strong&gt;. A key is hashed to a bucket, each bucket points to a linked list of dictionary entries, and each dictionary entry consists of a key pointer, a value pointer, and a next pointer. Each pointer takes 8 bytes of memory, so a single dictionary entry has a minimum overhead of 24 bytes.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;dictionary_bucket_and_entry_overview.png&quot; alt=&quot;Figure 2 Dictionary bucket pointing to a dictionary entry&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
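&lt;p&gt;The 24-byte figure can be checked with a sketch of the entry layout (illustrative field names, not Valkey&#x27;s exact definitions): three 8-byte pointers on a 64-bit platform.&lt;&#x2F;p&gt;

```c
#include <stddef.h>

/* Sketch of the dictionary entry described above (field names are
 * illustrative): three pointers, 8 bytes each on a 64-bit platform,
 * give a minimum overhead of 24 bytes per entry. */
typedef struct dictEntry {
    void *key;              /* -> string holding the key bytes */
    void *val;              /* -> value object */
    struct dictEntry *next; /* chain to the next entry in the bucket */
} dictEntry;
```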
&lt;p&gt;In cluster mode, Valkey uses a concept called &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;cluster-tutorial&#x2F;&quot;&gt;hash slots&lt;&#x2F;a&gt; to shard data. There are 16,384 hash slots in a cluster, and to compute the hash slot for a given key, the server takes the CRC16 of the key modulo 16,384. Keys are distributed based on the slots assigned to each primary. The server needs to maintain additional bookkeeping metadata, namely a slot-to-key mapping, to be able to move a slot from one primary to another. To maintain this mapping, two additional pointers, &lt;code&gt;slot-prev&lt;&#x2F;code&gt; and &lt;code&gt;slot-next&lt;&#x2F;code&gt; (Figure 3), are stored as metadata in each dictionary entry, forming a doubly linked list of all keys belonging to a given slot. This increases the overhead by 16 bytes per dictionary entry, for a total of 40 bytes.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;dictionary_in_cluster_mode_7.2.png&quot; alt=&quot;Figure 3 Dictionary in cluster mode (Valkey 7.2) with multiple key value pair&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
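&lt;p&gt;The slot computation above can be sketched as follows (a simplified illustration: the production implementation is table-driven and also handles hash tags in key names, which is omitted here). The CRC16 variant used is CCITT&#x2F;XMODEM with polynomial 0x1021.&lt;&#x2F;p&gt;

```c
#include <stdint.h>
#include <string.h>

/* Bitwise CRC16-CCITT/XMODEM (init 0x0000, poly 0x1021, no reflection),
 * shown for illustration; real implementations use a lookup table. */
static uint16_t crc16(const char *buf, size_t len) {
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)((unsigned char)buf[i] << 8);
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* CRC16 of the key modulo the number of hash slots */
unsigned key_hash_slot(const char *key) {
    return crc16(key, strlen(key)) % 16384;
}
```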
&lt;h2 id=&quot;improvements&quot;&gt;Improvements&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;optimization-1-dictionary-per-slot&quot;&gt;Optimization 1 - &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;redis&#x2F;redis&#x2F;pull&#x2F;11695&quot;&gt;Dictionary per slot&lt;&#x2F;a&gt;&lt;&#x2F;h3&gt;
&lt;p&gt;The first optimization is a dictionary per slot (16,384 of them in total), where each dictionary stores the data for a given slot. With this simplification, the additional metadata mapping slots to keys is no longer required in Valkey 8. To iterate over all the keys in a given slot, the engine simply finds the dictionary for that slot and traverses all the entries in it. This reduces the memory usage per dictionary entry by 16 bytes, at a small fixed cost of around 1 MB per node. As cluster mode is generally used for storing large numbers of keys, avoiding the additional per-key overhead allows users to store more keys in the same amount of memory.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;dictionary_in_cluster_mode_8.0.png&quot; alt=&quot;Figure 4 Dictionary in cluster mode (Valkey 8.0) with multiple key value pair&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;An interesting challenge that comes with this improvement is supporting existing use cases that were optimized for a single dictionary covering the entire keyspace. These use cases are:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;scan&#x2F;&quot;&gt;Iterating the entire keyspace&lt;&#x2F;a&gt; - Commands like &lt;code&gt;SCAN&lt;&#x2F;code&gt; iterate over the entire keyspace.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;lru-cache&#x2F;&quot;&gt;Random key for eviction&lt;&#x2F;a&gt; - The server randomly samples the keyspace to find an ideal candidate for eviction.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;commands&#x2F;randomkey&#x2F;&quot;&gt;Finding a random key&lt;&#x2F;a&gt; - Commands like &lt;code&gt;RANDOMKEY&lt;&#x2F;code&gt; retrieve a random key from the database.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;To implement these functions efficiently, we need to be able to find non-empty slots (so that empty slots can be skipped during scanning) and to select a random slot weighted by the number of keys it owns. These requirements call for a data structure with the following functionality:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Modify the value for a given slot - If a key is added or removed, increment or decrement the count for that slot by 1, respectively.&lt;&#x2F;li&gt;
&lt;li&gt;Cumulative frequency up to each slot - For a given number between 1 and the total number of keys, return the slot that covers that key index.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;If approached naively, the former and latter operations would take O(1) and O(N) respectively. However, we want to minimize the latter operation’s time complexity without significantly degrading the former. Hence, we use a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.topcoder.com&#x2F;thrive&#x2F;articles&#x2F;Binary%20Indexed%20Trees&quot;&gt;binary indexed tree (BIT), or Fenwick tree&lt;&#x2F;a&gt;, which provides the above functionality with a minimal memory overhead (~1 MB per node) and a time complexity bounded by O(M log N) for both operations, where M is the number of modifications and N the number of slots. This enables skipping over empty slots efficiently while iterating over the keyspace, as well as finding the slot for a given key index in logarithmic time via binary search over the cumulative sums maintained by the BIT.&lt;&#x2F;p&gt;
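&lt;p&gt;A minimal sketch of such a binary indexed tree over per-slot key counts (simplified relative to Valkey&#x27;s actual code) might look like this: point updates and prefix sums run in O(log N), and the weighted random-slot query becomes a binary descent over the tree.&lt;&#x2F;p&gt;

```c
#include <stdlib.h>

/* Hedged sketch of a Fenwick (binary indexed) tree over per-slot key
 * counts; names and layout are illustrative, not Valkey's code. */
#define NSLOTS 16384

typedef struct { long tree[NSLOTS + 1]; } fenwick; /* 1-indexed */

/* add delta to the key count of slot (0-based): O(log N) */
void fenwick_update(fenwick *f, int slot, long delta) {
    for (int i = slot + 1; i <= NSLOTS; i += i & (-i))
        f->tree[i] += delta;
}

/* total number of keys in slots 0..slot: O(log N) */
long fenwick_prefix(fenwick *f, int slot) {
    long sum = 0;
    for (int i = slot + 1; i > 0; i -= i & (-i))
        sum += f->tree[i];
    return sum;
}

/* smallest 0-based slot whose cumulative count reaches k (1-based key
 * index): binary descent over the implicit tree, O(log N) */
int fenwick_find(fenwick *f, long k) {
    int pos = 0;
    for (int bit = NSLOTS; bit > 0; bit >>= 1) {
        if (pos + bit <= NSLOTS && f->tree[pos + bit] < k) {
            pos += bit;
            k -= f->tree[pos];
        }
    }
    return pos;
}
```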
&lt;p&gt;Another interesting side effect concerns rehashing, which is CPU intensive. By default, a limited number of buckets are allocated in a dictionary, and it expands or shrinks dynamically based on usage. During rehashing, all the data needs to be moved from the old table to a new one. In Valkey 7.2, with a single global dictionary shared across all slots, all keys are stored in one dictionary; each time the fill factor (number of keys &#x2F; number of buckets) goes above 1, the dictionary must grow to a larger one (by a multiple of 2) and a large number of keys must be moved. As this operation is performed on the fly, it increases the latency of regular command operations while it is ongoing. With the per-slot dictionary optimization, the impact of rehashing is localized to the specific dictionary undergoing the process, and only a subset of keys needs to be moved.&lt;&#x2F;p&gt;
&lt;p&gt;Overall, with this new approach, the benefits are:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Removes additional memory overhead in cluster mode: the two pointers (16 bytes) per key that maintained the slot-to-key mapping are gone.&lt;&#x2F;li&gt;
&lt;li&gt;With the rehashing operation spread out across dictionaries, CPU utilization is also spread out.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;h3 id=&quot;optimization-2-key-embedding-into-dictionary-entry&quot;&gt;Optimization 2 - &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;541&quot;&gt;Key embedding into dictionary entry&lt;&#x2F;a&gt;&lt;&#x2F;h3&gt;
&lt;p&gt;After the dictionary-per-slot change, the dictionary entry in cluster mode contains three pointers (key, value, and next). The key pointer points to an SDS (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;antirez&#x2F;sds&#x2F;blob&#x2F;master&#x2F;README.md&quot;&gt;simple dynamic string&lt;&#x2F;a&gt;) which contains the actual key data. Since a key is immutable and has the same lifetime as its dictionary entry, it can be embedded directly into the entry without adding much complexity.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;key_embedding.png&quot; alt=&quot;Figure 5 Key data storage in 7.2 (left) and 8.0 (right)&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
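&lt;p&gt;The embedding idea can be sketched with a flexible array member (hypothetical field names, not Valkey&#x27;s exact entry layout): a single allocation holds both the entry header and the key bytes, removing the separate key pointer and its pointer chase.&lt;&#x2F;p&gt;

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch of key embedding: the entry ends in a flexible
 * array member, so one allocation holds both the entry and the key
 * bytes (field names are hypothetical, not Valkey's actual layout). */
typedef struct embeddedEntry {
    void *val;                  /* -> value object */
    struct embeddedEntry *next; /* bucket chain */
    unsigned char keylen;       /* simplified length header */
    char key[];                 /* key bytes stored in-line */
} embeddedEntry;

embeddedEntry *entry_create(const char *key, void *val) {
    size_t len = strlen(key);
    embeddedEntry *e = malloc(sizeof(*e) + len + 1);
    if (!e) return NULL;
    e->val = val;
    e->next = NULL;
    e->keylen = (unsigned char)len;
    memcpy(e->key, key, len + 1); /* one allocation, no extra pointer */
    return e;
}
```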
&lt;p&gt;With this new approach, the overall benefits are:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Reduces 8 bytes additional memory overhead per key.&lt;&#x2F;li&gt;
&lt;li&gt;Removes an additional memory lookup for the key: once the dictionary entry has been accessed, no extra random pointer access is needed to read the key, leading to better cache locality and overall better performance.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;h3 id=&quot;benchmarking&quot;&gt;Benchmarking&lt;&#x2F;h3&gt;
&lt;h4 id=&quot;setup&quot;&gt;Setup&lt;&#x2F;h4&gt;
&lt;p&gt;A single-shard cluster is set up with 1 primary and 2 replicas. Each node runs a different version to highlight the memory improvement from each optimization introduced between 7.2 and 8.0, and to show that no additional configuration is required to achieve the memory efficiency.&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Node A: Primary running on port 6379 with &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;commit&#x2F;ad0a24c7421d3a8ea76cf44b56001e3b3b6ed545&quot;&gt;Valkey 7.2 version&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Node B: Replica 1 running on port 6380 with &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;commit&#x2F;1ea49e5845a11250a13273c725720822c26860f1&quot;&gt;optimization 1 - dictionary per slot&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Node C: Replica 2 running on port 6381 with &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;commit&#x2F;a323dce8900341328114b86a92078c50cec0d9b8&quot;&gt;optimization 1 - dictionary per slot and optimization 2 - key embedding&lt;&#x2F;a&gt; - Includes all memory efficiency optimization in Valkey 8.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h4 id=&quot;synthetic-data-generation-using-valkey-benchmark-utility&quot;&gt;Synthetic data generation using &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;topics&#x2F;benchmark&#x2F;&quot;&gt;valkey-benchmark utility&lt;&#x2F;a&gt;&lt;&#x2F;h4&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;src&#x2F;valkey-benchmark \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; -t set \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; -n 10000000 \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; -r 10000000 \&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt; -d 16&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h4 id=&quot;memory-usage&quot;&gt;Memory Usage&lt;&#x2F;h4&gt;
&lt;ul&gt;
&lt;li&gt;Node A&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; DBSIZE # command to retrieve number of keys.&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(integer) 6318941&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; INFO MEMORY # command to retrieve statistics about memory usage&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Memory&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;used_memory:727339288&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;used_memory_human:693.64M&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;ul&gt;
&lt;li&gt;Node B&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6380&amp;gt; DBSIZE # command to retrieve number of keys.&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(integer) 6318941&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6380&amp;gt; INFO MEMORY # command to retrieve statistics about memory usage&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Memory&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;used_memory:627851888&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;used_memory_human:598.77M&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;ul&gt;
&lt;li&gt;Node C&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6381&amp;gt; DBSIZE # command to retrieve number of keys.&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(integer) 6318941&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6381&amp;gt; INFO MEMORY # command to retrieve statistics about memory usage&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;# Memory&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;used_memory:577300952&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;used_memory_human:550.56M&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h4 id=&quot;overall-improvement&quot;&gt;Overall Improvement&lt;&#x2F;h4&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;memory_usage_comparison.png&quot; alt=&quot;Figure 6 Overall memory usage with benchmark data&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h4 id=&quot;with-dictionary-per-slot-change-memory-usage-reduced-from-693-64-mb-to-598-77-mb-with-the-same-dataset&quot;&gt;With the dictionary-per-slot change, memory usage reduced from 693.64 MB to 598.77 MB with the same dataset&lt;&#x2F;h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Percentage Drop 1&lt;&#x2F;strong&gt;: ((693.64 - 598.77) &#x2F; 693.64) * 100 = (94.87 &#x2F; 693.64) * 100 ≈ 13.68%&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h4 id=&quot;further-with-key-embedding-memory-usage-reduced-from-598-77-mb-to-550-56-mb-with-the-same-dataset&quot;&gt;Further, with key embedding, memory usage reduced from 598.77 MB to 550.56 MB with the same dataset&lt;&#x2F;h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Percentage Drop 2&lt;&#x2F;strong&gt;: ((598.77 - 550.56) &#x2F; 598.77) * 100 = (48.21 &#x2F; 598.77) * 100 ≈ 8.05%&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h4 id=&quot;overall-drop-from-693-64-mb-to-550-56-mb&quot;&gt;Overall Drop: From 693.64 MB to 550.56 MB&lt;&#x2F;h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Overall Percentage Drop&lt;&#x2F;strong&gt;: ((693.64 - 550.56) &#x2F; 693.64) * 100 = (143.08 &#x2F; 693.64) * 100 ≈ 20.63%&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;So, overall memory usage on a given node drops by approximately &lt;strong&gt;20.63% on upgrade from Valkey 7.2 to Valkey 8.0&lt;&#x2F;strong&gt;.&lt;&#x2F;p&gt;
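&lt;p&gt;As a quick sanity check, the three percentages can be recomputed from the &lt;code&gt;used_memory_human&lt;&#x2F;code&gt; values shown above (an illustrative Python calculation, not part of Valkey itself):&lt;&#x2F;p&gt;

```python
# used_memory_human per configuration, in MB (from the INFO MEMORY output above)
baseline = 693.64      # Valkey 7.2
per_slot = 598.77      # plus dictionary-per-slot
embedded = 550.56      # plus key embedding

def pct_drop(before, after):
    """Relative reduction between two measurements, in percent."""
    return round((before - after) / before * 100, 2)

print(pct_drop(baseline, per_slot))   # 13.68
print(pct_drop(per_slot, embedded))   # 8.05
print(pct_drop(baseline, embedded))   # 20.63
```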
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;Through the memory efficiency achieved by introducing a dictionary per slot and embedding keys into dictionary entries, users should have additional capacity to store more keys per node in Valkey 8.0 (up to 20%, though it will vary with the workload). For users upgrading from Valkey 7.2 to Valkey 8.0, the improvement is observed automatically and no configuration changes are required.
Give it a try by spinning up a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;download&#x2F;&quot;&gt;Valkey cluster&lt;&#x2F;a&gt; and join us in the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;&quot;&gt;community&lt;&#x2F;a&gt; to provide feedback. Further, there is an ongoing discussion around overhauling the main dictionary with a more compact memory layout and introducing an open addressing scheme, which should significantly improve memory efficiency. More details can be found in &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;issues&#x2F;169&quot;&gt;Issue 169: Re-thinking the main hash table&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Unlock 1 Million RPS: Experience Triple the Speed with Valkey</title>
        <published>2024-08-05T01:01:01+00:00</published>
        <updated>2024-08-05T01:01:01+00:00</updated>
        
        <author>
          <name>
            dantouitou
          </name>
        </author>
        
        <author>
          <name>
            uriyagelnik
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/unlock-one-million-rps/"/>
        <id>https://valkey.io/blog/unlock-one-million-rps/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/unlock-one-million-rps/">&lt;p&gt;Valkey 8.0, set for release in September 2024, will bring major performance enhancements through a variety of improvements including a new multi-threaded architecture.
This update aims to significantly boost throughput and reduce latency across various hardware configurations.
Read on to learn more about the new innovative I&#x2F;O threading implementation and its impact on performance and efficiency.
This post is the first in a two-part series. The next post will dive into the new prefetch mechanism and its impact on performance.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;our-commitment-to-performance-and-efficiency&quot;&gt;Our Commitment to Performance and Efficiency&lt;&#x2F;h3&gt;
&lt;p&gt;At AWS, we have hundreds of thousands of customers using Amazon ElastiCache and Amazon MemoryDB.
The feedback we continuously hear from end users is that they need better absolute performance and want to squeeze more out of their clusters.&lt;&#x2F;p&gt;
&lt;p&gt;Our commitment to meeting these performance and efficiency needs led us down a path of improving the multi-threaded performance of our ElastiCache and MemoryDB services, through features we called &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;aws.amazon.com&#x2F;blogs&#x2F;database&#x2F;boosting-application-performance-and-reducing-costs-with-amazon-elasticache-for-redis&#x2F;&quot;&gt;Enhanced IO&lt;&#x2F;a&gt; and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;aws.amazon.com&#x2F;blogs&#x2F;database&#x2F;enhanced-io-multiplexing-for-amazon-elasticache-for-redis&#x2F;&quot;&gt;Multiplexing&lt;&#x2F;a&gt;.
Today we are excited to dive into how we are sharing our learnings from this performance journey by contributing a major performance improvement to the Valkey project.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;benefits-of-high-capacity-shards&quot;&gt;Benefits of High Capacity Shards&lt;&#x2F;h3&gt;
&lt;p&gt;Valkey&#x27;s common approach to performance and memory improvement is scaling out by adding more shards to the cluster.
However, the availability of more powerful nodes offers additional flexibility in application design.
Higher-capacity shards can increase cluster capacity, improve resilience to request surges, and reduce latencies at high percentiles.
This approach is particularly beneficial for Valkey users with workloads that don&#x27;t respond well to horizontal scaling, such as hot keys and large collections that can&#x27;t be effectively distributed across multiple shards.&lt;&#x2F;p&gt;
&lt;p&gt;Another challenge with horizontal scaling comes from multi-key operations like MGET.
These multi-key operations require all involved keys to reside in the same slot, often resulting in users utilizing only a small number of slots, which can significantly restrict the cluster&#x27;s scalability potential.
Larger shards can alleviate these constraints by accommodating more keys and larger collections within a single node.&lt;&#x2F;p&gt;
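&lt;p&gt;To see why multi-key operations constrain key placement: per the cluster specification, each key maps to one of 16384 slots via CRC16(key) mod 16384, and a hash tag (the part of the key between the first &lt;code&gt;{&lt;&#x2F;code&gt; and the following &lt;code&gt;}&lt;&#x2F;code&gt;) lets related keys share a slot. A minimal Python sketch of that slot computation (illustrative only, not the server&#x27;s C code):&lt;&#x2F;p&gt;

```python
def crc16(data):
    # CRC-16/XMODEM (poly 0x1021, init 0), the variant cluster uses;
    # shifts and masks are spelled with *, //, and % for clarity
    crc = 0
    for byte in data:
        crc = crc ^ byte * 256
        for _ in range(8):
            if crc // 0x8000:
                crc = ((crc * 2) ^ 0x1021) % 0x10000
            else:
                crc = (crc * 2) % 0x10000
    return crc

def key_slot(key):
    # Only the hash tag, if present and non-empty, is hashed, so keys
    # sharing a tag land in the same slot and allow MGET across them.
    start = key.find('{')
    if start != -1:
        end = key.find('}', start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Keys with the same {user42} tag map to one slot, so MGET works on both.
print(key_slot('{user42}:name') == key_slot('{user42}:email'))  # True
```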
&lt;p&gt;While larger shards offer these benefits, they come with trade-offs.
Full synchronization for very large instances can be risky, and losing a large shard can be more impactful than losing a smaller one.
Conversely, managing a cluster with too many small instances can be operationally complex.
The optimal configuration depends on the specific workload, requiring a careful balance between scaling out and using larger shards.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;major-upgrade-to-valkey-performance&quot;&gt;Major Upgrade to Valkey Performance&lt;&#x2F;h3&gt;
&lt;p&gt;Starting with version 8, Valkey users will benefit from an increase in multi-threaded performance, thanks to a new multi-threading architecture that can boost throughput and reduce latency on a wide range of hardware types.
&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;performance_comparison.png&quot; alt=&quot;Performance comparison between existing I&#x2F;O threading implementation and the new I&#x2F;O threading implementation available in Valkey 8.&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;The data demonstrates a substantial performance improvement with the new I&#x2F;O threads approach.
Throughput increased by approximately 230%, rising from 360K to 1.19M requests per second compared to Valkey 7.2.
Latency metrics improved across all percentiles, with average latency decreasing by 69.8% from 1.792 ms to 0.542 ms.&lt;&#x2F;p&gt;
&lt;p&gt;Tested with 8 I&#x2F;O threads, a 3M-key dataset, 512-byte values, and 650 clients running sequential SET commands on an AWS EC2 c7g.16xlarge instance.
Please note that these numbers include the prefetch change that will be described in the next &lt;a href=&quot;&#x2F;blog&#x2F;unlock-one-million-rps-part2&#x2F;&quot;&gt;blog post&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;performance-without-compromising-simplicity&quot;&gt;Performance Without Compromising Simplicity&lt;&#x2F;h3&gt;
&lt;p&gt;Valkey strives to stay simple by executing as much code in a single thread as possible.
This lets the API continuously evolve without complex synchronization mechanisms or the risk of race conditions.
Our new multi-threading approach is designed based on this long-standing architectural principle that we believe is the right architecture for Valkey.
It utilizes a minimal number of synchronization mechanisms and keeps Valkey command execution single-threaded, simple, and primed for future enhancements.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;assets&#x2F;media&#x2F;pictures&#x2F;io_threads.png&quot; alt=&quot;I&#x2F;O threads high level design&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h3 id=&quot;high-level-design&quot;&gt;High Level Design&lt;&#x2F;h3&gt;
&lt;p&gt;The above diagram depicts the high-level design of I&#x2F;O threading in Valkey 8.
I&#x2F;O threads are worker threads that receive jobs to execute from the main thread.
A job can involve reading and parsing a command from a client, writing responses back to the client, polling for I&#x2F;O events on TCP connections, or deallocating memory.
While I&#x2F;O threads are busy handling I&#x2F;O, the main thread is able to spend more time executing commands.&lt;&#x2F;p&gt;
&lt;p&gt;The main thread orchestrates all the jobs spawned to the I&#x2F;O threads, ensuring that no race conditions occur.
The number of active I&#x2F;O threads can be adjusted by the main thread based on the current load to ensure efficient utilization of the underlying hardware.
Despite the dynamic nature of I&#x2F;O threads, the main thread maintains thread affinity, ensuring that, when possible, the same I&#x2F;O thread handles I&#x2F;O for the same client to improve memory access locality.&lt;&#x2F;p&gt;
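&lt;p&gt;The dispatch model described above can be sketched in a few lines of Python (a toy illustration of the design, not Valkey&#x27;s actual C implementation; the class and method names are invented for the example): a main thread hands jobs to per-worker queues, and client affinity falls out of a stable client-to-queue mapping.&lt;&#x2F;p&gt;

```python
import queue
import threading

class IOThreadPool:
    """Toy model of the design above: the main thread submits I/O jobs
    (parse, write, poll) to worker queues; each client always maps to
    the same worker, mimicking Valkey's client-to-thread affinity."""

    def __init__(self, n_threads=2):
        self.queues = [queue.Queue() for _ in range(n_threads)]
        self.results = queue.Queue()
        self.workers = [
            threading.Thread(target=self._run, args=(q,), daemon=True)
            for q in self.queues
        ]
        for w in self.workers:
            w.start()

    def _run(self, jobs):
        while True:
            job = jobs.get()
            if job is None:          # shutdown sentinel
                return
            client_id, fn = job
            self.results.put((client_id, fn()))

    def submit(self, client_id, fn):
        # Affinity: a given client is always served by the same worker,
        # which in the real server improves memory access locality.
        self.queues[client_id % len(self.queues)].put((client_id, fn))

    def shutdown(self):
        for q in self.queues:
            q.put(None)
        for w in self.workers:
            w.join()
```

&lt;p&gt;In the real server the jobs are C structures and the result hand-off is replaced by the main thread&#x27;s orchestration loop; the sketch only shows the queueing and affinity idea.&lt;&#x2F;p&gt;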
&lt;p&gt;Socket polling system calls, such as &lt;code&gt;epoll_wait&lt;&#x2F;code&gt;, are expensive procedures.
When executed solely by the main thread, &lt;code&gt;epoll_wait&lt;&#x2F;code&gt; consumes more than 20 percent of its time.
Therefore, we decided to offload &lt;code&gt;epoll_wait&lt;&#x2F;code&gt; execution to the I&#x2F;O threads in the following way: to avoid race conditions, at any given time, at most one thread, either an io_thread or the main thread, executes &lt;code&gt;epoll_wait&lt;&#x2F;code&gt;.
I&#x2F;O threads never sleep on &lt;code&gt;epoll&lt;&#x2F;code&gt;, and whenever there are pending I&#x2F;O operations or commands to be executed, &lt;code&gt;epoll_wait&lt;&#x2F;code&gt; calls are scheduled to the I&#x2F;O threads by the main thread.
In all other cases, the main thread executes &lt;code&gt;epoll_wait&lt;&#x2F;code&gt; with the waiting time as in the original Valkey implementation.&lt;&#x2F;p&gt;
&lt;p&gt;In addition, before executing commands, the main thread performs a new procedure, prefetch-commands-keys, which aims to reduce the number of external memory accesses needed when executing the commands on the main dictionary. A detailed explanation of the technique used in that procedure will be given in our next blog post.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;testing-and-availability&quot;&gt;Testing and Availability&lt;&#x2F;h3&gt;
&lt;p&gt;The enhanced performance will be available for testing in the first release candidate of Valkey 8.0, available today.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Valkey 8.0: Delivering Enhanced Performance and Reliability</title>
        <published>2024-08-02T01:01:01+00:00</published>
        <updated>2024-08-02T01:01:01+00:00</updated>
        
        <author>
          <name>
            pingxie
          </name>
        </author>
        
        <author>
          <name>
            madolson
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/valkey-8-0-0-rc1/"/>
        <id>https://valkey.io/blog/valkey-8-0-0-rc1/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/valkey-8-0-0-rc1/">&lt;p&gt;The Valkey community is proud to unveil the first release candidate of Valkey 8.0,
a major update designed to enhance performance, reliability, and observability
for all Valkey installations. In this blog, we&#x27;ll dive a bit deeper into each of these
areas and talk about the exciting features we&#x27;ve built for this release.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;performance&quot;&gt;Performance&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey 8.0 features significant improvements to the existing I&#x2F;O threading system,
allowing the main thread and I&#x2F;O threads to operate concurrently. This release also
includes a number of improvements to offload work to the I&#x2F;O threads and introduces
efficient batching of commands. Altogether, Valkey 8.0 is designed to handle up to
1.2 million Queries Per Second (QPS) on AWS&#x27;s r7g platform, compared to the previous
limit of 380K QPS. We&#x27;ll dive deeper into these numbers in an upcoming blog.&lt;&#x2F;p&gt;
&lt;p&gt;NOTE: Not all improvements are available in the release candidate, but they will
be available in the GA release of Valkey 8.0.&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Asynchronous I&#x2F;O Threading&lt;&#x2F;strong&gt;: Enables parallel processing of commands and
I&#x2F;O operations, maximizing throughput and minimizing bottlenecks.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Intelligent Core Utilization&lt;&#x2F;strong&gt;: Distributes I&#x2F;O tasks across multiple
cores based on realtime usage, reducing idle time and improving energy efficiency.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Command Batching&lt;&#x2F;strong&gt;: Optimizes memory access patterns by prefetching frequently
accessed data to minimize CPU cache misses, reducing memory accesses required for
dictionary operations.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;For more details on these improvements, you can refer to
&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;758&quot;&gt;#758&lt;&#x2F;a&gt; and
&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;763&quot;&gt;#763&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;reliability&quot;&gt;Reliability&lt;&#x2F;h2&gt;
&lt;p&gt;Cluster scaling operations via slot migrations have historically been delicate.
Valkey 8.0 improves reliability and minimizes disruptions with the following
enhancements:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Automatic Failover for Empty Shards&lt;&#x2F;strong&gt;: New shards that start empty, owning
no slots, now benefit from automatic failover. This ensures high availability
from the start of the scaling process.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Replication of Slot Migration States&lt;&#x2F;strong&gt;: All &lt;code&gt;CLUSTER SETSLOT&lt;&#x2F;code&gt; commands are
now replicated synchronously to replicas before execution on the primary. This
reduces the chance of unavailability if the primary fails, as the replicas have
the most up-to-date information about the state of the shard. New replicas also
automatically inherit the state from the primary without additional input from
an operator.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Slot Migration State Recovery&lt;&#x2F;strong&gt;: In the event of a failover, Valkey 8.0 automatically
updates the slot migration states on source and target nodes. This ensures requests
are continuously routed to the correct primary in the target shard, maintaining
cluster integrity and availability.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;For more details on these improvements, you can refer to
&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;445&quot;&gt;#445&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;replication&quot;&gt;Replication&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey 8.0 introduces a dual-channel replication scheme, allowing the RDB and
the replica backlog to be transferred simultaneously, accelerating synchronization.&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Reduced Memory Load&lt;&#x2F;strong&gt;: By streaming replication data to the replica during
the full sync, the primary node experiences significantly less memory pressure.
The replica now manages the Client Output Buffer (COB) tracking, reducing the
likelihood of COB overruns and enabling larger COB sizes on the replica side.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Reduced Parent Process Load&lt;&#x2F;strong&gt;: A dedicated connection for RDB transfer frees
the primary&#x27;s parent process from handling this data, allowing it to focus on
client queries and improving overall responsiveness.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Performance tests show improvements in write latency during sync, and in scenarios
with heavy read commands, the sync time can be cut by up to 50%. This translates
to a more responsive system, even during synchronization.&lt;&#x2F;p&gt;
&lt;p&gt;For more details on these improvements, you can refer to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;60&quot;&gt;#60&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;observability&quot;&gt;Observability&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey 8.0 introduces a comprehensive per-slot metrics infrastructure, providing
detailed visibility into the performance and resource usage of individual slots.
This granular data helps inform decisions about resource allocation, load
balancing, and performance optimization.&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Key Count&lt;&#x2F;strong&gt;: Returns the number of keys in each slot, making it easier to
identify the slots with the largest number of keys.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;CPU Usage&lt;&#x2F;strong&gt;: Tracks CPU time consumed by operations on each slot, identifying
areas of high utilization and potential bottlenecks.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Network Input&#x2F;Output Bytes&lt;&#x2F;strong&gt;: Monitors data transmission and reception by
each slot, offering insights into network load and bandwidth utilization.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Minimal Overhead&lt;&#x2F;strong&gt;: Initial benchmarks show that enabling detailed metrics
incurs a negligible overhead of approximately 0.7% in QPS.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;For more details on these improvements, you can refer to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;712&quot;&gt;#712&lt;&#x2F;a&gt;,
&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;720&quot;&gt;#720&lt;&#x2F;a&gt;, and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;771&quot;&gt;#771&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;efficiency&quot;&gt;Efficiency&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey 8.0 introduces two new improvements that reduce the memory overhead of keys,
allowing users to store more data without any application changes.
The first change is that keys are now embedded in the main dictionary, eliminating separate
key pointers and significantly reducing memory overhead. This results in a 9-10%
reduction in overall memory usage for scenarios with 16-byte keys and 8 or 16-byte
values, along with performance improvements.&lt;&#x2F;p&gt;
&lt;p&gt;This release also introduces a new per-slot dictionary for Valkey cluster, which
replaces a linked list that allowed operators to list all the keys in
a slot during slot migration. The new architecture splits the main dictionary by slot,
reducing the memory overhead by 16 bytes per key-value pair without degrading performance.&lt;&#x2F;p&gt;
&lt;p&gt;For more details on these improvements, you can refer to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;541&quot;&gt;#541&lt;&#x2F;a&gt;
and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;redis&#x2F;redis&#x2F;pull&#x2F;11695&quot;&gt;Redis#11695&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;additional-highlights&quot;&gt;Additional Highlights&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Dual IPv4 and IPv6 Stack Support&lt;&#x2F;strong&gt;: Seamlessly operate in mixed IP environments
for enhanced compatibility and flexibility.
See &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;736&quot;&gt;#736&lt;&#x2F;a&gt; for details.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Improved Pub&#x2F;Sub Efficiency&lt;&#x2F;strong&gt;: Lightweight cluster messages streamline
communication and reduce overhead for faster, more efficient Pub&#x2F;Sub operations.
See &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;654&quot;&gt;#654&lt;&#x2F;a&gt; for details.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Valkey Over RDMA (Experimental)&lt;&#x2F;strong&gt;: Unlock significant performance improvements
with direct memory access between clients and Valkey servers, delivering up to
a 275% increase in throughput.
See &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;pull&#x2F;477&quot;&gt;#477&lt;&#x2F;a&gt; for details.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Numerous Smaller Performance&#x2F;Reliability Enhancements&lt;&#x2F;strong&gt;: Many under-the-hood
improvements ensure a smoother, more stable experience across the board.
See &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;releases&#x2F;tag&#x2F;8.0.0-rc1&quot;&gt;release notes&lt;&#x2F;a&gt; for details.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey 8.0 is a major update that offers improved performance, reliability, and
observability. Whether you are an experienced Valkey&#x2F;Redis user or exploring
it for the first time, this release provides significant advancements in in-memory
data storage. You can try out these enhancements today by downloading from
&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;releases&#x2F;tag&#x2F;8.0.0-rc1&quot;&gt;source&lt;&#x2F;a&gt; or using one
of our &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;hub.docker.com&#x2F;r&#x2F;valkey&#x2F;valkey&quot;&gt;container images&lt;&#x2F;a&gt;. We would love
to hear your thoughts on these new features and what you hope to see in the
future from the Valkey project.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Important Note&lt;&#x2F;strong&gt;: The Valkey Over RDMA feature is currently experimental and
might change or be removed in future versions.&lt;&#x2F;p&gt;
&lt;p&gt;We look forward to seeing what you achieve with Valkey 8.0!&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Using Bitnami&#x27;s Valkey chart</title>
        <published>2024-07-09T01:01:01+00:00</published>
        <updated>2024-07-09T01:01:01+00:00</updated>
        
        <author>
          <name>
            rafariossaa
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/using-bitnami-valkey-chart/"/>
        <id>https://valkey.io/blog/using-bitnami-valkey-chart/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/using-bitnami-valkey-chart/">&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Due to changes in the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;bitnami&#x2F;charts&#x2F;issues&#x2F;35164&quot;&gt;Bitnami catalog&lt;&#x2F;a&gt;&lt;&#x2F;strong&gt;, the contents of this blog post are, as of September 2025, likely out-of-date and the helm chart &lt;strong&gt;does not represent an operational best practice&lt;&#x2F;strong&gt; without a subscription. This post is preserved for historical reference only. You can track work towards a project-provided Helm chart at &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;issues&#x2F;2371&quot;&gt;valkey-io&#x2F;valkey#2371&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;Valkey is a high-performance key&#x2F;value datastore that supports workloads such as caching and message queues, and handles many data types including strings, numbers, hashes, bitmaps, and more. Valkey can run in standalone or cluster mode for replication and high availability.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;bitnami.com&#x2F;&quot;&gt;Bitnami&lt;&#x2F;a&gt; offers secure, up-to-date, and easy-to-deploy charts for many popular open source applications.&lt;&#x2F;p&gt;
&lt;p&gt;This blog will serve as a walkthrough on how you can deploy and use the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;bitnami&#x2F;charts&#x2F;tree&#x2F;main&#x2F;bitnami&#x2F;valkey&quot;&gt;Bitnami Helm chart for Valkey&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h1 id=&quot;assumptions-and-prerequisites&quot;&gt;Assumptions and prerequisites&lt;&#x2F;h1&gt;
&lt;p&gt;Before starting the deployment, make sure that you have the following prerequisites:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;An operational Kubernetes cluster.&lt;&#x2F;li&gt;
&lt;li&gt;An installed and configured kubectl CLI and Helm v3.x package manager. If you need help with these steps, check our article “&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;docs.bitnami.com&#x2F;kubernetes&#x2F;get-started-kubernetes#step-3-install-kubectl-command-line&quot;&gt;Learn how to install kubectl and Helm v3.x.&lt;&#x2F;a&gt;”&lt;&#x2F;li&gt;
&lt;li&gt;Optional: Access to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;app-catalog.vmware.com&#x2F;catalog&quot;&gt;VMware Tanzu Application Catalog&lt;&#x2F;a&gt;.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h1 id=&quot;deploying-the-bitnami-package-for-the-valkey-helm-chart&quot;&gt;Deploying the Bitnami package for the Valkey Helm chart&lt;&#x2F;h1&gt;
&lt;p&gt;The sections below describe the steps to configure the deployment, get and deploy the Bitnami-package Valkey Helm chart, and obtain its external IP address to access the service.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;getting-and-deploying-the-bitnami-package-for-valkey-helm-chart&quot;&gt;Getting and deploying the Bitnami package for Valkey Helm chart&lt;&#x2F;h2&gt;
&lt;p&gt;You can deploy the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;bitnami&#x2F;charts&#x2F;blob&#x2F;main&#x2F;LICENSE.md&quot;&gt;community&lt;&#x2F;a&gt; Bitnami-packaged Valkey Helm chart from the open source Bitnami Application Catalog. Alternatively, if you have access to an enterprise Tanzu Application Catalog instance, it can also be deployed from there.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;deploying-the-open-source-version-of-the-chart-through-bitnami-application-catalog&quot;&gt;Deploying the open source version of the chart through Bitnami Application Catalog&lt;&#x2F;h3&gt;
&lt;p&gt;To deploy the chart in its namespace, run the following commands:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kubectl create namespace valkey&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; helm install myvalkey oci:&#x2F;&#x2F;registry-1.docker.io&#x2F;bitnamicharts&#x2F;valkey&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --set&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; auth.enabled=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;true --set&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; auth.password=test_pwd&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --namespace&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h3 id=&quot;deploying-the-enterprise-version-of-the-chart-through-tanzu-application-catalog&quot;&gt;Deploying the enterprise version of the chart through Tanzu Application Catalog&lt;&#x2F;h3&gt;
&lt;p&gt;The following steps describe navigating the Tanzu Application Catalog and getting the instructions to deploy Valkey in your cluster. This example shows a Valkey chart built using Ubuntu 22 as the base OS image, but feel free to customize the chart depending on your needs.&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;app-catalog.vmware.com&quot;&gt;app-catalog.vmware.com&lt;&#x2F;a&gt; and sign in to your catalog with your VMware account.
&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;using-bitnami-valkey-chart&#x2F;images&#x2F;using-bitnami-valkey-chart_1.png&quot; alt=&quot;Tanzu Library&quot; &#x2F;&gt;&lt;&#x2F;li&gt;
&lt;li&gt;In the My Applications section, search for Valkey and request it for your catalog. It is supported by Photon, Ubuntu, RHEL UBI, and Debian Linux distributions. On the next screen, you will find the instructions for deploying the chart on your cluster. Make sure that your cluster is up and running.&lt;&#x2F;li&gt;
&lt;li&gt;Execute &lt;strong&gt;kubectl cluster-info&lt;&#x2F;strong&gt;, then run the commands you will find in the Consume your Helm chart section.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;using-bitnami-valkey-chart&#x2F;images&#x2F;using-bitnami-valkey-chart_2.png&quot; alt=&quot;Tanzu Application Catalog&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;using-bitnami-valkey-chart&#x2F;images&#x2F;using-bitnami-valkey-chart_3.png&quot; alt=&quot;Bitnami Package Content&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;From this point on, the steps for deploying the chart are the same as the ones described in the following sections for the community version.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;obtaining-the-external-ip-address-and-logging-into-valkey&quot;&gt;Obtaining the external IP address and logging into Valkey&lt;&#x2F;h2&gt;
&lt;p&gt;Wait for the deployment to complete and check that all &lt;em&gt;myvalkey&lt;&#x2F;em&gt; pods are Running.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kubectl get pods,svc&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -n&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;NAME&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;                      READY   STATUS    RESTARTS   AGE&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;pod&#x2F;myvalkey-master-0&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;     1&#x2F;1     Running&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;   0&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;          4m41s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;pod&#x2F;myvalkey-replicas-0&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;   1&#x2F;1     Running&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;   0&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;          4m41s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;pod&#x2F;myvalkey-replicas-1&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;   1&#x2F;1     Running&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;   0&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;          3m59s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;pod&#x2F;myvalkey-replicas-2&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;   1&#x2F;1     Running&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;   0&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;          3m32s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;NAME&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;S&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;    AGE&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;service&#x2F;myvalkey-headless&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;   ClusterIP   None&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;            &amp;lt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;non&lt;&#x2F;span&gt;&lt;span&gt;e&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;        6379&#x2F;TCP   4m41s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;service&#x2F;myvalkey-master&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;     ClusterIP&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;   10.110.225.33&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;   &amp;lt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;non&lt;&#x2F;span&gt;&lt;span&gt;e&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;        6379&#x2F;TCP   4m41s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;service&#x2F;myvalkey-replicas&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;   ClusterIP&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;   10.98.176.69&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    &amp;lt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;non&lt;&#x2F;span&gt;&lt;span&gt;e&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;        6379&#x2F;TCP   4m41s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;connecting-using-valkey-s-cli-client&quot;&gt;Connecting using Valkey’s CLI client&lt;&#x2F;h2&gt;
&lt;p&gt;To connect to the server, you can deploy a client pod.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; export VALKEY_PASSWORD=test_pwd&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kubectl run&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --namespace&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey valkey-client&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --restart=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;#39;Never&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;  --env&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; VALKEY_PASSWORD=&lt;&#x2F;span&gt;&lt;span&gt;$VALKEY_PASSWORD&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;  --image&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; docker.io&#x2F;bitnami&#x2F;valkey:7.2.5-debian-12-r5&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --command --&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; sleep infinity&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
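Note that kubectl run returns before the container has actually started, so attaching immediately can fail. As an optional safeguard (a sketch, not part of the original steps), you can wait for the client pod to become Ready first:

```shell
# Block (up to 60s) until the valkey-client pod reports Ready
$ kubectl wait --namespace valkey --for=condition=Ready pod/valkey-client --timeout=60s
```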
&lt;p&gt;You are now ready to attach to the pod and interact with the server using the CLI client.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kubectl exec&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --tty -i&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey-client&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --namespace&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; valkey&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; bash&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; REDISCLI_AUTH=&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;$VALKEY_PASSWORD&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot; valkey-cli&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -h&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; myvalkey-master&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;myvalkey-master:6379&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; set test_key &amp;quot;test_value&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;myvalkey-master:6379&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; get test_key&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;&amp;quot;test_value&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;using-a-python-environment&quot;&gt;Using a Python environment&lt;&#x2F;h2&gt;
&lt;p&gt;The next example shows how to connect to Valkey from a Python script. To run it, you first need a Python environment with Python’s Redis package installed.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kubectl run&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -it&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  python-redis&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --image=bitnami&#x2F;python --&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; bash&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;root@python-redis:&#x2F;app#&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; pip install redis&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Collecting&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; redis&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Downloading&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; redis-5.0.6-py3-none-any.whl.metadata&lt;&#x2F;span&gt;&lt;span&gt; (9.3&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kB&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Downloading&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; redis-5.0.6-py3-none-any.whl&lt;&#x2F;span&gt;&lt;span&gt; (252&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kB&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; 252.0&#x2F;252.0 kB&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 942.1&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kB&#x2F;s eta 0:00:00&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Installing&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; collected packages: redis&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Successfully&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; installed redis-5.0.6&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;You are now ready to run the script.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;root@python-redis:&#x2F;app#&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; python&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Python&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 3.12.4&lt;&#x2F;span&gt;&lt;span&gt; (main,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; Jun&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;  7&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; 2024, 04:30:17&lt;&#x2F;span&gt;&lt;span&gt;) [GCC 12.2.0] on linux&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Type&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt;&amp;gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; import&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; redis&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt;&amp;gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; creds_provider&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; = redis.UsernamePasswordCredentialProvider&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;&amp;quot;&amp;quot;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;test_pwd&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt;&amp;gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; user_connection&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; = redis.Redis&lt;&#x2F;span&gt;&lt;span&gt;(host&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;myvalkey-master.valkey&amp;quot;,&lt;&#x2F;span&gt;&lt;span&gt; port&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;6379,&lt;&#x2F;span&gt;&lt;span&gt; credential_provider&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;creds_provider&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt;&amp;gt;&amp;gt;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt;&amp;gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; user_connection.ping&lt;&#x2F;span&gt;&lt;span&gt;()&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;True&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt;&amp;gt;&amp;gt;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt;&amp;gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; user_connection.set(&amp;#39;foo_key&amp;#39;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;#39;bar&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;True&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt;&amp;gt;&amp;gt;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt;&amp;gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; user_connection.get(&amp;#39;foo_key&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;b&amp;#39;bar&amp;#39;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt;&amp;gt;&amp;gt;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt;&amp;gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; user_connection.hset(&amp;#39;user-session:123&amp;#39;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; mapping={&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;...&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;     &amp;#39;name&amp;#39;: &amp;#39;John&amp;#39;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;...&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;     &amp;quot;surname&amp;quot;: &amp;#39;Smith&amp;#39;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;...&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;     &amp;quot;company&amp;quot;: &amp;#39;Redis&amp;#39;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;...&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;     &amp;quot;age&amp;quot;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 29&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;...&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; }&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;4&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt;&amp;gt;&amp;gt;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;&amp;gt;&amp;gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; user_connection.hgetall(&amp;#39;user-session:123&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;{b&amp;#39;name&amp;#39;: b&amp;#39;John&amp;#39;, b&amp;#39;surname&amp;#39;: b&amp;#39;Smith&amp;#39;, b&amp;#39;company&amp;#39;: b&amp;#39;Redis&amp;#39;, b&amp;#39;age&amp;#39;: b&amp;#39;29&amp;#39;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;&lt;h2 id=&quot;using-valkey-as-a-cache-for-wordpress&quot;&gt;Using Valkey as a cache for WordPress&lt;&#x2F;h2&gt;
&lt;p&gt;Valkey can also work as an object cache for a WordPress deployment. The following steps will show you how to set up this scenario.&lt;&#x2F;p&gt;
&lt;p&gt;First, create the configuration for the WordPress deployment. In this case, it sets Valkey’s connection parameters and the credentials for the site administrator.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; cat&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; settings.yaml&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; &amp;lt;&amp;lt;&lt;&#x2F;span&gt;&lt;span&gt;EOF&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;wordpressPassword: &amp;quot;wp_pwd&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;wordpressExtraConfigContent: |&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  define( &amp;#39;WP_REDIS_HOST&amp;#39;, &amp;#39;myvalkey-master.valkey&amp;#39; );&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  define( &amp;#39;WP_REDIS_PORT&amp;#39;, 6379 );&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  define( &amp;#39;WP_REDIS_PASSWORD&amp;#39;, &amp;#39;test_pwd&amp;#39; );&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;  define( &amp;#39;WP_REDIS_PREFIX&amp;#39;, &amp;#39;wp_redis&amp;#39; );&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;EOF&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
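Before installing, you can optionally ask Helm to render the release without applying it, to confirm that the values file is picked up. This is a sketch; the --dry-run flag prints the generated manifests instead of creating any resources:

```shell
# Render the WordPress chart with our settings, without installing anything
$ helm install mywp bitnami/wordpress -f settings.yaml --namespace wordpress --dry-run
```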
&lt;p&gt;Once the settings file is in place, deploy WordPress in its own namespace.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kubectl create namespace wordpress&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; helm install mywp bitnami&#x2F;wordpress&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -f&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; settings.yaml&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --namespace&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; wordpress&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
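Helm returns as soon as the Kubernetes resources are created, not when they are ready. If you prefer to block until the rollout finishes, a sketch (this assumes the chart’s default naming, where the deployment is called mywp-wordpress):

```shell
# Wait for the WordPress deployment to finish rolling out
$ kubectl rollout status deployment/mywp-wordpress --namespace wordpress
```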
&lt;p&gt;Wait for the deployment to complete and check that the service has been assigned an external IP.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; kubectl get svc&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; --namespace&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; wordpress&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;NAME&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;                   TYPE         CLUSTER-IP     EXTERNAL-IP  PORT&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;S&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;         AGE&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;service&#x2F;mywp-mariadb&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;   ClusterIP&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;    10.106.11.191&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;  &amp;lt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;non&lt;&#x2F;span&gt;&lt;span&gt;e&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;       3306&#x2F;TCP                    74s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;service&#x2F;mywp-wordpress&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; LoadBalancer&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 10.100.136.181 192.168.49.50&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;   80:31116&#x2F;TCP,443:31119&#x2F;TCP   74s&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
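If your cluster cannot provision LoadBalancer IPs (for example, a local minikube or kind cluster without a tunnel), the EXTERNAL-IP column will stay pending. In that case you can reach the site through a port-forward instead; a sketch:

```shell
# Forward local port 8080 to the WordPress service
$ kubectl port-forward --namespace wordpress svc/mywp-wordpress 8080:80
```

The site is then available at http://127.0.0.1:8080.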
&lt;p&gt;Using the external IP, browse to the admin page and log in using the previously set credentials.&lt;&#x2F;p&gt;
&lt;p&gt;In the WordPress administrator UI, search for the &lt;code&gt;Redis Object Cache&lt;&#x2F;code&gt; plugin, then install and activate it. After enabling it, wait for some metrics to appear:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;valkey.io&#x2F;blog&#x2F;using-bitnami-valkey-chart&#x2F;images&#x2F;using-bitnami-valkey-chart_4.png&quot; alt=&quot;Redis Object Cache Metrics&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
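You can also confirm from the Valkey side that WordPress is writing to the cache by scanning for keys carrying the WP_REDIS_PREFIX value set earlier. This sketch assumes the valkey-client pod created in the CLI section above is still running:

```shell
# Attach to the client pod and list the keys written by the WordPress plugin
$ kubectl exec -it valkey-client --namespace valkey -- bash
$ REDISCLI_AUTH="$VALKEY_PASSWORD" valkey-cli -h myvalkey-master --scan --pattern "wp_redis*"
```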
&lt;p&gt;This is only a small sample of what you can do with Valkey; from here, you can easily integrate it into your own projects.&lt;&#x2F;p&gt;
&lt;h1 id=&quot;support-and-resources&quot;&gt;Support and resources&lt;&#x2F;h1&gt;
&lt;p&gt;The Bitnami package for Valkey is available in both the community version, through the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;bitnami&#x2F;charts&#x2F;tree&#x2F;main&#x2F;bitnami&#x2F;valkey&#x2F;#installing-the-chart&quot;&gt;Bitnami GitHub repository&lt;&#x2F;a&gt;, as well as the enterprise version, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;app-catalog.vmware.com&#x2F;catalog&#x2F;f1242d16-218e-4773-8856-adcb2b2006e9&#x2F;branch&#x2F;1542f88a-71c4-42ed-b79c-89bd2063ac9a&quot;&gt;Tanzu Application Catalog&lt;&#x2F;a&gt;. Learn more about the differences between these two catalogs in this &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;tanzu.vmware.com&#x2F;content&#x2F;blog&#x2F;open-source-vs-enterprise-edition-of-vmware-bitnami-application-catalog&quot;&gt;blog post&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;To solve the problems you may have with the Bitnami community packages (including deployment support, operational support, and bug fixes), please open an issue in the Bitnami &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;bitnami&#x2F;charts&quot;&gt;Helm charts&lt;&#x2F;a&gt; or &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;bitnami&#x2F;containers&quot;&gt;containers&lt;&#x2F;a&gt; GitHub repositories. Also, if you want to contribute to the catalog, feel free to send us a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;bitnami&#x2F;containers&#x2F;pulls&quot;&gt;pull request&lt;&#x2F;a&gt;, and the team will check it and guide you through the process for a successful merge.&lt;&#x2F;p&gt;
&lt;p&gt;If you are interested in learning more about the Tanzu Application Catalog in general, check out the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;tanzu.vmware.com&#x2F;application-catalog&quot;&gt;product webpage&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>What&#x27;s new in Valkey for June 2024</title>
        <published>2024-06-27T01:01:01+00:00</published>
        <updated>2024-06-27T01:01:01+00:00</updated>
        
        <author>
          <name>
            kyledvs
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/whats-new-june-2024/"/>
        <id>https://valkey.io/blog/whats-new-june-2024/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/whats-new-june-2024/">&lt;p&gt;What have people been saying since the &lt;a href=&quot;&#x2F;blog&#x2F;may-roundup&#x2F;&quot;&gt;last what&#x27;s new post&lt;&#x2F;a&gt;? Read on to find out.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;news-and-press&quot;&gt;News and Press&lt;&#x2F;h2&gt;
&lt;p&gt;Richard Speed from &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.theregister.com&#x2F;2024&#x2F;06&#x2F;19&#x2F;valkey_picks_up_more_partners&#x2F;&quot;&gt;The Register writes about how Valkey gained momentum with new backers&lt;&#x2F;a&gt;.
Michael Larabel wrote a couple of stories on Phoronix: one on how &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.phoronix.com&#x2F;news&#x2F;Fedora-Replacing-Redis-Valkey&quot;&gt;Valkey will be packaged in Fedora 41&lt;&#x2F;a&gt; and another on &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.phoronix.com&#x2F;news&#x2F;Valkey-Redis-Fork-More-Backers&quot;&gt;Valkey&#x27;s new backers&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;valkey-how-to&quot;&gt;Valkey How-To&lt;&#x2F;h2&gt;
&lt;p&gt;Abhishek Gupta gives a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;community.aws&#x2F;content&#x2F;2hx81ITCvDiWqrAz06SECOvepoa&#x2F;getting-started-with-valkey-using-javascript&quot;&gt;tutorial on how to get started with Valkey on JavaScript (and LangChain)&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;In less than 10 minutes, Shantanu shows you how to install Valkey from source on Ubuntu.&lt;&#x2F;p&gt;
&lt;div &gt;
    &lt;iframe src=&quot;https:&#x2F;&#x2F;www.youtube-nocookie.com&#x2F;embed&#x2F;T-tH1GC0omo&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;&#x2F;iframe&gt;
&lt;&#x2F;div&gt;
&lt;p&gt;Percona published a couple of detailed how-to posts: Matthew Boehm detailed &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.percona.com&#x2F;blog&#x2F;valkey-redis-setting-up-replication&#x2F;&quot;&gt;setting up Valkey replication&lt;&#x2F;a&gt; and Anil Joshi &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.percona.com&#x2F;blog&#x2F;valkey-redis-sharding-using-the-native-clustering-feature&#x2F;&quot;&gt;covers Valkey sharding&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;interviews-and-podcasts&quot;&gt;Interviews and Podcasts&lt;&#x2F;h2&gt;
&lt;p&gt;Swapnil from &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;tfir.io&#x2F;why-open-source-still-leads-the-way-despite-license-changes-ann-schlemmer&#x2F;&quot;&gt;TFIR interviews Ann Schlemmer from Percona&lt;&#x2F;a&gt; about why open source still leads the way despite license changes.&lt;&#x2F;p&gt;
&lt;div &gt;
    &lt;iframe src=&quot;https:&#x2F;&#x2F;www.youtube-nocookie.com&#x2F;embed&#x2F;D2G7kfAO37U&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;&#x2F;iframe&gt;
&lt;&#x2F;div&gt;
&lt;p&gt;TSC member &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.odbms.org&#x2F;2024&#x2F;06&#x2F;on-the-open-source-valkey-project-qa-with-madelyn-olson&#x2F;&quot;&gt;Madelyn Olson was interviewed by Roberto Zicari on ODBMS&lt;&#x2F;a&gt; and chatted with &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.lastweekinaws.com&#x2F;podcast&#x2F;screaming-in-the-cloud&#x2F;steering-through-open-source-waters-with-madelyn-olson&#x2F;&quot;&gt;Corey Quinn about Valkey on the Screaming in the Cloud podcast&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;div &gt;
    &lt;iframe src=&quot;https:&#x2F;&#x2F;www.youtube-nocookie.com&#x2F;embed&#x2F;Pl-udfEPwtk&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;&#x2F;iframe&gt;
&lt;&#x2F;div&gt;
&lt;p&gt;Robert and Courtney from the WooCommerce community chat about Valkey on the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=E1hX1GZij_U&quot;&gt;Do the Woo Podcast&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;div &gt;
    &lt;iframe src=&quot;https:&#x2F;&#x2F;www.youtube-nocookie.com&#x2F;embed&#x2F;E1hX1GZij_U&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;&#x2F;iframe&gt;
&lt;&#x2F;div&gt;&lt;h2 id=&quot;want-to-feature-your-tutorial-article-meetup-video&quot;&gt;Want to feature your tutorial&#x2F;article&#x2F;meetup&#x2F;video?&lt;&#x2F;h2&gt;
&lt;p&gt;Add your own links to the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-io.github.io&#x2F;pulls?q=is%3Apr+is%3Aopen+label%3Aroundup-post&quot;&gt;draft pull request open on the website GitHub repo&lt;&#x2F;a&gt;.
You can also submit your own content to be published directly on valkey.io by following the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-io.github.io&#x2F;blob&#x2F;main&#x2F;CONTRIBUTING-BLOG-POST.md&quot;&gt;blog post contributing guide&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>What&#x27;s new in Valkey for May 2024</title>
        <published>2024-05-24T01:01:01+00:00</published>
        <updated>2024-05-24T01:01:01+00:00</updated>
        
        <author>
          <name>
            kyledvs
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/may-roundup/"/>
        <id>https://valkey.io/blog/may-roundup/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/may-roundup/">&lt;p&gt;It&#x27;s become clear that people want to talk about Valkey and have been publishing blog posts&#x2F;articles fervently.
Here you&#x27;ll find a collection of all the posts I&#x27;m aware of from the last few weeks.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;percona&quot;&gt;Percona&lt;&#x2F;h2&gt;
&lt;p&gt;The kind folks over at Percona have been on an absolutely legendary streak of posting about Valkey.
They&#x27;ve done a series on data types (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.percona.com&#x2F;blog&#x2F;valkey-redis-the-hash-datatype&#x2F;&quot;&gt;Hashes&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.percona.com&#x2F;blog&#x2F;valkey-redis-sets-and-sorted-sets&#x2F;&quot;&gt;Sorted Sets&lt;&#x2F;a&gt;), &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.percona.com&#x2F;blog&#x2F;valkey-redis-configuration-best-practices&#x2F;&quot;&gt;best&lt;&#x2F;a&gt; and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.percona.com&#x2F;blog&#x2F;valkey-redis-not-so-good-practices&#x2F;&quot;&gt;not-so-good practices&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.percona.com&#x2F;blog&#x2F;hello-valkey-lets-get-started&#x2F;&quot;&gt;getting started&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.percona.com&#x2F;blog&#x2F;valkey-redis-replication-and-auto-failover-with-sentinel-service&#x2F;&quot;&gt;replication&#x2F;failover&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.percona.com&#x2F;blog&#x2F;valkey-redis-configurations-and-persistent-setting-of-the-key-parameters&#x2F;&quot;&gt;configurations&#x2F;persistence&lt;&#x2F;a&gt;, and finally their own &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.percona.com&#x2F;blog&#x2F;hello-valkey-lets-get-started&#x2F;&quot;&gt;Valkey packages for DEB and RPM-based distros&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;fedora-magazine&quot;&gt;Fedora Magazine&lt;&#x2F;h2&gt;
&lt;p&gt;Yours truly wrote an article for &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;fedoramagazine.org&#x2F;how-to-move-from-redis-to-valkey&#x2F;&quot;&gt;Fedora Magazine about using the &lt;code&gt;valkey-compat-redis&lt;&#x2F;code&gt; package to move to Valkey&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;community-aws&quot;&gt;Community.aws&lt;&#x2F;h2&gt;
&lt;p&gt;Ricardo Ferreira put together a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;community.aws&#x2F;content&#x2F;2fdr6Vg8BiJS8jr8xsuQRRc0MD5&#x2F;getting-started-with-valkey-using-docker-and-go&quot;&gt;walkthrough of using Valkey with Go on Docker&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-new-stack&quot;&gt;The New Stack&lt;&#x2F;h2&gt;
&lt;p&gt;While Open Source Summit North America was last month, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;thenewstack.io&#x2F;valkey-a-redis-fork-with-a-future&#x2F;&quot;&gt;The New Stack published a blog post about Valkey&lt;&#x2F;a&gt; with an accompanying interview with project leaders; it&#x27;s worth a watch and a read.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;presentation-digging-into-valkey&quot;&gt;Presentation: Digging into Valkey&lt;&#x2F;h2&gt;
&lt;p&gt;On the subject of Open Source Summit, the talk I gave alongside Madelyn Olson, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;youtu.be&#x2F;3G6QgwIl-xs&quot;&gt;&quot;Digging into Valkey&quot;, was posted as a video&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;valkey-seattle-irl&quot;&gt;Valkey Seattle IRL&lt;&#x2F;h2&gt;
&lt;p&gt;The &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.meetup.com&#x2F;seattle-valkey&#x2F;&quot;&gt;Seattle Valkey Meetup&lt;&#x2F;a&gt; is holding a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.meetup.com&#x2F;seattle-valkey&#x2F;events&#x2F;301177195&#x2F;&quot;&gt;Rust module workshop on June 5th&lt;&#x2F;a&gt;.
A lot of folks will be in town for the Contributor Summit, so this meetup is bound to be flush with Valkey experts.
Don&#x27;t miss it.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;want-to-add-your-tutorial-article-meetup-video-to-a-future-roundup&quot;&gt;Want to add your tutorial&#x2F;article&#x2F;meetup&#x2F;video to a future roundup?&lt;&#x2F;h2&gt;
&lt;p&gt;This is the first in a series of roundups on Valkey content.
The plan is to keep an &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-io.github.io&#x2F;issues?q=is%3Adraft+label%3Aroundup-post+&quot;&gt;draft pull request open on the website GitHub repo&lt;&#x2F;a&gt; where you can contribute your own content.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Valkey Modules 101</title>
        <published>2024-05-01T01:01:01+00:00</published>
        <updated>2024-05-01T01:01:01+00:00</updated>
        
        <author>
          <name>
            dmitrypol
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/modules-101/"/>
        <id>https://valkey.io/blog/modules-101/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/modules-101/">&lt;h2 id=&quot;what-are-valkey-modules&quot;&gt;What are Valkey modules?&lt;&#x2F;h2&gt;
&lt;p&gt;The idea of modules is to allow adding extra features (such as new commands and data types) to Valkey without making changes to the core code.
A module is a special type of code distribution called a shared library, which other programs can load and execute at runtime.
Modules can be written in C or other languages that have C bindings.
In this article we will go through the process of building simple modules in C and Rust (using the Valkey Module Rust SDK).
This article expects the audience to be at least somewhat familiar with git, C, Rust and Valkey.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;hello-world-module-in-c&quot;&gt;Hello World module in C&lt;&#x2F;h2&gt;
&lt;p&gt;If we clone the Valkey repo by running &lt;code&gt;git clone git@github.com:valkey-io&#x2F;valkey.git&lt;&#x2F;code&gt; we will find numerous examples in &lt;code&gt;src&#x2F;modules&lt;&#x2F;code&gt;.
Let&#x27;s create a new file &lt;code&gt;module1.c&lt;&#x2F;code&gt; in the same folder.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;c&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;#include&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;..&#x2F;valkeymodule.h&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;int&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; hello&lt;&#x2F;span&gt;&lt;span&gt;(ValkeyModuleCtx &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;*&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ctx&lt;&#x2F;span&gt;&lt;span&gt;, ValkeyModuleString &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;**&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;argv&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; int&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; argc&lt;&#x2F;span&gt;&lt;span&gt;) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;    VALKEYMODULE_NOT_USED&lt;&#x2F;span&gt;&lt;span&gt;(argv);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;    VALKEYMODULE_NOT_USED&lt;&#x2F;span&gt;&lt;span&gt;(argc);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    return&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; ValkeyModule_ReplyWithSimpleString&lt;&#x2F;span&gt;&lt;span&gt;(ctx,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;world1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;int&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; ValkeyModule_OnLoad&lt;&#x2F;span&gt;&lt;span&gt;(ValkeyModuleCtx &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;*&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ctx&lt;&#x2F;span&gt;&lt;span&gt;, ValkeyModuleString &lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;**&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;argv&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; int&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; argc&lt;&#x2F;span&gt;&lt;span&gt;) {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;    VALKEYMODULE_NOT_USED&lt;&#x2F;span&gt;&lt;span&gt;(argv);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;    VALKEYMODULE_NOT_USED&lt;&#x2F;span&gt;&lt;span&gt;(argc);&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    if&lt;&#x2F;span&gt;&lt;span&gt; (&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;ValkeyModule_Init&lt;&#x2F;span&gt;&lt;span&gt;(ctx,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;module1&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;1&lt;&#x2F;span&gt;&lt;span&gt;,VALKEYMODULE_APIVER_1) &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        ==&lt;&#x2F;span&gt;&lt;span&gt; VALKEYMODULE_ERR)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; return&lt;&#x2F;span&gt;&lt;span&gt; VALKEYMODULE_ERR;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    if&lt;&#x2F;span&gt;&lt;span&gt; (&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;ValkeyModule_CreateCommand&lt;&#x2F;span&gt;&lt;span&gt;(ctx,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;module1.hello&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, hello,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;0&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;0&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;0&lt;&#x2F;span&gt;&lt;span&gt;) &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;        ==&lt;&#x2F;span&gt;&lt;span&gt; VALKEYMODULE_ERR)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; return&lt;&#x2F;span&gt;&lt;span&gt; VALKEYMODULE_ERR;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;    return&lt;&#x2F;span&gt;&lt;span&gt; VALKEYMODULE_OK;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Here we define the &lt;code&gt;ValkeyModule_OnLoad&lt;&#x2F;code&gt; C function (which Valkey requires and calls when the module is loaded) to initialize &lt;code&gt;module1&lt;&#x2F;code&gt; using &lt;code&gt;ValkeyModule_Init&lt;&#x2F;code&gt;.
Then we use &lt;code&gt;ValkeyModule_CreateCommand&lt;&#x2F;code&gt; to register the Valkey command &lt;code&gt;module1.hello&lt;&#x2F;code&gt;, which is handled by the C function &lt;code&gt;hello&lt;&#x2F;code&gt; and replies with the string &lt;code&gt;world1&lt;&#x2F;code&gt;.
In future blog posts we will explore these areas in greater depth.&lt;&#x2F;p&gt;
&lt;p&gt;Now we need to update &lt;code&gt;src&#x2F;modules&#x2F;Makefile&lt;&#x2F;code&gt;&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;make&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;all&lt;&#x2F;span&gt;&lt;span&gt;: ... module1.so&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;module1.xo&lt;&#x2F;span&gt;&lt;span&gt;: ..&#x2F;valkeymodule.h&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;module1.so&lt;&#x2F;span&gt;&lt;span&gt;: module1.xo&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;	$(&lt;&#x2F;span&gt;&lt;span&gt;LD&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;)&lt;&#x2F;span&gt;&lt;span&gt; -o &lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt;$@ $^&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; $(&lt;&#x2F;span&gt;&lt;span&gt;SHOBJ_LDFLAGS&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;) $(&lt;&#x2F;span&gt;&lt;span&gt;LIBS&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;)&lt;&#x2F;span&gt;&lt;span&gt; -lc&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Run &lt;code&gt;make module1.so&lt;&#x2F;code&gt; inside the &lt;code&gt;src&#x2F;modules&lt;&#x2F;code&gt; folder.
This will compile our module and leave &lt;code&gt;module1.so&lt;&#x2F;code&gt; in that same folder.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;hello-world-module-in-rust&quot;&gt;Hello World module in Rust&lt;&#x2F;h2&gt;
&lt;p&gt;We will create a new Rust package by running &lt;code&gt;cargo new --lib module2&lt;&#x2F;code&gt; in bash.
Inside the &lt;code&gt;module2&lt;&#x2F;code&gt; folder we will have &lt;code&gt;Cargo.toml&lt;&#x2F;code&gt; and &lt;code&gt;src&#x2F;lib.rs&lt;&#x2F;code&gt; files.
To install the valkey-module SDK, run &lt;code&gt;cargo add valkey-module&lt;&#x2F;code&gt; inside the &lt;code&gt;module2&lt;&#x2F;code&gt; folder.
Alternatively, we can add &lt;code&gt;valkey-module = &quot;0.1.0&quot;&lt;&#x2F;code&gt; in &lt;code&gt;Cargo.toml&lt;&#x2F;code&gt; under &lt;code&gt;[dependencies]&lt;&#x2F;code&gt;.
Run &lt;code&gt;cargo build&lt;&#x2F;code&gt; and it will create or update the &lt;code&gt;Cargo.lock&lt;&#x2F;code&gt; file.&lt;&#x2F;p&gt;
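&lt;p&gt;For reference, the resulting dependency section of &lt;code&gt;Cargo.toml&lt;&#x2F;code&gt; should look roughly like this (the exact version that &lt;code&gt;cargo add&lt;&#x2F;code&gt; pins may differ):&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;[dependencies]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;valkey-module = &amp;quot;0.1.0&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;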
&lt;p&gt;Modify &lt;code&gt;Cargo.toml&lt;&#x2F;code&gt; to set the crate-type to &quot;cdylib&quot;, which tells Cargo to build the target as a shared library.
Read the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;doc.rust-lang.org&#x2F;reference&#x2F;linkage.html&quot;&gt;Rust docs&lt;&#x2F;a&gt; to learn more about &lt;code&gt;crate-type&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;[lib]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;crate-type = [&amp;quot;cdylib&amp;quot;]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Now in &lt;code&gt;src&#x2F;lib.rs&lt;&#x2F;code&gt; replace the existing code with the following:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;rust&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;#[macro_use]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;extern crate&lt;&#x2F;span&gt;&lt;span&gt; valkey_module;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;use&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; valkey_module&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span&gt;{&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Context&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ValkeyResult&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ValkeyString&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ValkeyValue&lt;&#x2F;span&gt;&lt;span&gt;};&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;fn&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt; hello&lt;&#x2F;span&gt;&lt;span&gt;(_ctx&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;: &amp;amp;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;Context&lt;&#x2F;span&gt;&lt;span&gt;, _args&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; Vec&lt;&#x2F;span&gt;&lt;span&gt;&amp;lt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ValkeyString&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt;)&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt; -&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; ValkeyResult&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;    Ok&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ValkeyValue&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;SimpleStringStatic&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;world2&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;))&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #D2A8FF;&quot;&gt;valkey_module!&lt;&#x2F;span&gt;&lt;span&gt; {&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    name&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;module2&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    version&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 1&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    allocator&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span&gt; (&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;valkey_module&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;alloc&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ValkeyAlloc&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt; valkey_module&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;alloc&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;::&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;ValkeyAlloc&lt;&#x2F;span&gt;&lt;span&gt;),&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    data_types&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span&gt; [],&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    commands&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;:&lt;&#x2F;span&gt;&lt;span&gt; [&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;        [&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;module2.hello&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;, hello,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;,&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; 0&lt;&#x2F;span&gt;&lt;span&gt;],&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;    ]&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;}&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Rust syntax is a bit different from C&#x27;s, but we are creating &lt;code&gt;module2&lt;&#x2F;code&gt; with a command &lt;code&gt;hello&lt;&#x2F;code&gt; that returns the string &lt;code&gt;world2&lt;&#x2F;code&gt;.
We are using the external crate &lt;code&gt;valkey_module&lt;&#x2F;code&gt; with &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;doc.rust-lang.org&#x2F;book&#x2F;ch19-06-macros.html&quot;&gt;Rust macros&lt;&#x2F;a&gt; and passing it fields like &lt;code&gt;name&lt;&#x2F;code&gt; and &lt;code&gt;version&lt;&#x2F;code&gt;.
Some fields, like &lt;code&gt;data_types&lt;&#x2F;code&gt; and &lt;code&gt;commands&lt;&#x2F;code&gt;, are arrays to which we can pass zero, one, or many values.
Since we are not using &lt;code&gt;ctx&lt;&#x2F;code&gt; or &lt;code&gt;args&lt;&#x2F;code&gt;, we prefix them with &lt;code&gt;_&lt;&#x2F;code&gt; (a Rust convention) instead of calling &lt;code&gt;VALKEYMODULE_NOT_USED&lt;&#x2F;code&gt; as we did in C.&lt;&#x2F;p&gt;
&lt;p&gt;Run &lt;code&gt;cargo build&lt;&#x2F;code&gt; in the &lt;code&gt;module2&lt;&#x2F;code&gt; root folder.
We will now see &lt;code&gt;target&#x2F;debug&#x2F;libmodule2.dylib&lt;&#x2F;code&gt; (on macOS).
The build will produce &lt;code&gt;.so&lt;&#x2F;code&gt; files on Linux and &lt;code&gt;.dll&lt;&#x2F;code&gt; files on Windows.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;run-valkey-server-with-both-modules&quot;&gt;Run Valkey server with both modules&lt;&#x2F;h2&gt;
&lt;p&gt;Go back into the Valkey repo folder and run &lt;code&gt;make&lt;&#x2F;code&gt; to compile the Valkey code.
Then add these lines to the bottom of the &lt;code&gt;valkey.conf&lt;&#x2F;code&gt; file.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;loadmodule UPDATE_PATH_TO_VALKEY&#x2F;src&#x2F;modules&#x2F;module1.so&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;loadmodule UPDATE_PATH_TO_MODULE2&#x2F;target&#x2F;debug&#x2F;libmodule2.dylib&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;and run &lt;code&gt;src&#x2F;valkey-server valkey.conf&lt;&#x2F;code&gt;.
You will see these messages in the log output.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;Module &amp;#39;module1&amp;#39; loaded from UPDATE_PATH_TO_VALKEY&#x2F;src&#x2F;modules&#x2F;module1.so&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;...&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;Module &amp;#39;module2&amp;#39; loaded from UPDATE_PATH_TO_MODULE2&#x2F;target&#x2F;debug&#x2F;libmodule2.dylib&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Then use &lt;code&gt;src&#x2F;valkey-cli&lt;&#x2F;code&gt; to connect.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;shellscript&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;src&#x2F;valkey-cli&lt;&#x2F;span&gt;&lt;span style=&quot;color: #79C0FF;&quot;&gt; -3&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;127.0.0.1:6379&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; module list&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;1&lt;&#x2F;span&gt;&lt;span&gt;) 1# &lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;name&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt; =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;module2&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;   2#&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;ver&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt; =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span&gt; (integer) 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;   3#&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;path&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt; =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;UPDATE_PATH_TO_MODULE2&#x2F;target&#x2F;debug&#x2F;libmodule2.dylib&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;   4#&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;args&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt; =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span&gt; (empty&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; array&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;2&lt;&#x2F;span&gt;&lt;span&gt;) 1# &lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt;&amp;quot;name&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt; =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;module1&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;   2#&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;ver&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt; =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span&gt; (integer) 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;   3#&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;path&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt; =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;UPDATE_PATH_TO_VALKEY&#x2F;src&#x2F;modules&#x2F;module1.so&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;   4#&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; &amp;quot;args&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt; =&lt;&#x2F;span&gt;&lt;span style=&quot;color: #FF7B72;&quot;&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span&gt; (empty&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; array&lt;&#x2F;span&gt;&lt;span&gt;)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;127.0.0.1:6379&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; module1.hello&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;world1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;127.0.0.1:6379&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color: #A5D6FF;&quot;&gt; module2.hello&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span style=&quot;color: #FFA657;&quot;&gt;world2&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;We can now run both modules side by side. If we modify either the C or the Rust file, recompile the code, and restart &lt;code&gt;valkey-server&lt;&#x2F;code&gt;, we will get the new functionality.&lt;&#x2F;p&gt;
&lt;p&gt;As an alternative to specifying modules in the &lt;code&gt;valkey.conf&lt;&#x2F;code&gt; file, we can use &lt;code&gt;MODULE LOAD&lt;&#x2F;code&gt; and &lt;code&gt;MODULE UNLOAD&lt;&#x2F;code&gt; from &lt;code&gt;valkey-cli&lt;&#x2F;code&gt; to update the server.
First specify &lt;code&gt;enable-module-command yes&lt;&#x2F;code&gt; in &lt;code&gt;valkey.conf&lt;&#x2F;code&gt; and restart &lt;code&gt;valkey-server&lt;&#x2F;code&gt;.
This lets us update our module code, recompile it, and reload it at runtime.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot; style=&quot;color: #E6EDF3; background-color: #0D1117;&quot;&gt;&lt;code data-lang=&quot;plain&quot;&gt;&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; module load UPDATE_PATH_TO_VALKEY&#x2F;src&#x2F;modules&#x2F;module1.so&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; module list&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;1) 1# &amp;quot;name&amp;quot; =&amp;gt; &amp;quot;module1&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   2# &amp;quot;ver&amp;quot; =&amp;gt; (integer) 1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   3# &amp;quot;path&amp;quot; =&amp;gt; &amp;quot;UPDATE_PATH_TO_VALKEY&#x2F;src&#x2F;modules&#x2F;module1.so&amp;quot;&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;   4# &amp;quot;args&amp;quot; =&amp;gt; (empty array)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; module unload module1&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;OK&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; module list&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;(empty array)&lt;&#x2F;span&gt;&lt;&#x2F;span&gt;
&lt;span class=&quot;giallo-l&quot;&gt;&lt;span&gt;127.0.0.1:6379&amp;gt; &lt;&#x2F;span&gt;&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Stay tuned for more articles as we explore the possibilities of Valkey modules and where using C or Rust makes sense.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;useful-links&quot;&gt;Useful links&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&quot;&gt;Valkey repo&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkeymodule-rs&quot;&gt;Valkey Rust SDK&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;code.visualstudio.com&#x2F;docs&#x2F;languages&#x2F;rust&quot;&gt;Rust in VS Code&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>Valkey 7.2.5 GA is out!</title>
        <published>2024-04-16T01:01:01+00:00</published>
        <updated>2024-04-16T01:01:01+00:00</updated>
        
        <author>
          <name>
            kyledvs
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/valkey-7-2-5-out/"/>
        <id>https://valkey.io/blog/valkey-7-2-5-out/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/valkey-7-2-5-out/">&lt;p&gt;Exciting times!&lt;&#x2F;p&gt;
&lt;p&gt;I&#x27;m pleased to announce that you can start using the first generally available, stable Valkey release today.
Check out the &lt;a href=&quot;&#x2F;download&#x2F;releases&#x2F;v7-2-5&quot;&gt;release page for 7.2.5&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;This release maintains the same protocol, API, return values, and data file formats as the last open source release of Redis (7.2.4).&lt;&#x2F;p&gt;
&lt;p&gt;You can &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;releases&#x2F;tag&#x2F;7.2.5&quot;&gt;build it from source&lt;&#x2F;a&gt; or &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;hub.docker.com&#x2F;r&#x2F;valkey&#x2F;valkey&#x2F;&quot;&gt;pull it from Valkey’s official Docker Hub&lt;&#x2F;a&gt;.
Valkey’s release candidates are available in Fedora and EPEL, and the new release will be available once the community updates the packages.&lt;&#x2F;p&gt;
</content>
        
    </entry><entry xml:lang="en">
        <title>SET first-blog-post &quot;Hello, world&quot;</title>
        <published>2024-04-11T01:01:01+00:00</published>
        <updated>2024-04-11T01:01:01+00:00</updated>
        
        <author>
          <name>
            kyledvs
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://valkey.io/blog/hello-world/"/>
        <id>https://valkey.io/blog/hello-world/</id>
        
        <content type="html" xml:base="https://valkey.io/blog/hello-world/">&lt;p&gt;Welcome!
For the inaugural blog post on valkey.io, I’d like to recap the story so far, what to look forward to, and then describe how this blog works.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;how-do-you-describe-an-open-source-whirlwind&quot;&gt;How do you describe an open source whirlwind?&lt;&#x2F;h2&gt;
&lt;p&gt;I would describe it like this: first &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;redis&#x2F;redis&#x2F;pull&#x2F;13157&quot;&gt;a license change&lt;&#x2F;a&gt;, the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;commit&#x2F;38632278fd06fe186f7707e4fa099f666d805547#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5&quot;&gt;establishment of PlaceholderKV&lt;&#x2F;a&gt;, a new name and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.linuxfoundation.org&#x2F;press&#x2F;linux-foundation-launches-open-source-valkey-community&quot;&gt;formation of Valkey within the Linux Foundation&lt;&#x2F;a&gt;, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;compare&#x2F;redis-7.2.4...7.2.4-rc1&quot;&gt;hundreds of code updates&lt;&#x2F;a&gt; by community members from around the world, and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey&#x2F;releases&#x2F;tag&#x2F;7.2.4-rc1&quot;&gt;a release candidate&lt;&#x2F;a&gt;.
All within the span of about three weeks.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;only-the-start&quot;&gt;Only the start&lt;&#x2F;h2&gt;
&lt;p&gt;Out of this initial flurry of activity emerges a project that is sure to have a long history.
This blog will cover the project over time by describing what’s new, what to look forward to, and how you can explore the full extent of Valkey.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;for-the-community-with-the-community&quot;&gt;For the community, with the community&lt;&#x2F;h2&gt;
&lt;p&gt;Like the Valkey project itself, this blog is not a singular effort of one company but rather a community effort, &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-io.github.io&#x2F;&quot;&gt;built in the open with full transparency&lt;&#x2F;a&gt;.
You want to write about a topic on the blog?
&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-io.github.io&#x2F;fork&quot;&gt;Fork it and make a pull request.&lt;&#x2F;a&gt;
You want to help edit or review a post?
&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-io.github.io&#x2F;issues&quot;&gt;Do a code review.&lt;&#x2F;a&gt;
Problem with a post?
&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-io.github.io&#x2F;issues&#x2F;new?assignees=&amp;amp;labels=bug%2C+untriaged&amp;amp;projects=&amp;amp;template=bug_template.md&amp;amp;title=%5BBUG%5D&quot;&gt;Create an issue.&lt;&#x2F;a&gt;
Feel like something is missing?
&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;valkey-io&#x2F;valkey-io.github.io&#x2F;issues&#x2F;new?assignees=&amp;amp;labels=enhancement&amp;amp;projects=&amp;amp;template=feature_template.md&amp;amp;title=&quot;&gt;Make a feature request.&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-s-next&quot;&gt;What’s next?&lt;&#x2F;h2&gt;
&lt;p&gt;Stay tuned for trip reports from Valkey’s first conferences, then information about the first GA release.
It’s only going to get more exciting from here.&lt;&#x2F;p&gt;
</content>
        
    </entry>
</feed>
