🚀 The feature, motivation and pitch
I'm writing to propose a feature enhancement for the prefix caching mechanism. Currently, vLLM uses an LRU (Least Recently Used) eviction policy for prefix caching: when the cache reaches its capacity limit, the least recently accessed prefixes are evicted to make room for new ones. This works well for general use cases, but some users need specific critical prefixes to remain persistently in memory (or VRAM) and never be evicted, even when they are not the most recently used.
Use Case Motivation:
In production environments, certain prefixes (e.g., system prompts, common instruction templates, or frequently reused context chunks) are accessed repeatedly across many inference requests. Evicting these prefixes can lead to unnecessary recomputation when they are needed again, which undermines the performance benefits of caching. Allowing users to "pin" such critical prefixes would ensure they remain in cache indefinitely, optimizing latency for high-priority workloads.
Proposed Enhancement:
Add a mechanism to mark specific prefixes as "pinned" (non-evictable). Pinned prefixes would:
- Be prioritized for retention in cache, even when the cache is full.
- Not be subject to LRU eviction, regardless of their access frequency.
- Occupy a reserved portion of the cache (or be counted against the total capacity but excluded from eviction logic).
This could be exposed via an API parameter (e.g., `pin_prefix=True` when submitting requests) or a configuration setting that specifies which prefixes should be pinned.
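To make the proposal concrete, here is a rough sketch of what the request-level API could look like. The `pin_prefix` argument is hypothetical and does not exist in vLLM today; the model name is just an example:

```python
from vllm import LLM, SamplingParams

# enable_prefix_caching is an existing vLLM engine argument.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", enable_prefix_caching=True)

# A hot prefix shared by many requests, e.g. a long system prompt.
system_prompt = "You are a helpful assistant for ACME support. ..."

outputs = llm.generate(
    [system_prompt + "\n\nUser: how do I reset my password?"],
    SamplingParams(max_tokens=128),
    pin_prefix=True,  # hypothetical parameter: keep this prompt's cached KV blocks non-evictable
)
```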
Potential Considerations:
- Ensuring pinned prefixes do not exceed the total cache capacity (with safeguards/errors if users attempt to pin more than possible).
- Balancing pinned and dynamic (evictable) prefixes to avoid underutilizing cache space.
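To illustrate the eviction-side behavior and the safeguards above, here is a minimal sketch of an LRU cache that skips pinned entries. This is a toy data structure, not vLLM's actual evictor; the class and method names are invented for illustration:

```python
from collections import OrderedDict


class PinnableLRUCache:
    """Toy LRU cache in which pinned keys are never evicted (illustrative only)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries: OrderedDict = OrderedDict()  # key -> value, oldest first
        self.pinned: set = set()

    def pin(self, key) -> None:
        if key not in self.entries:
            raise KeyError(key)
        # Safeguard from the considerations above: always leave at least
        # one evictable slot so dynamic entries can still be cached.
        if len(self.pinned) + 1 >= self.capacity:
            raise RuntimeError("pinning this key would leave no evictable space")
        self.pinned.add(key)

    def get(self, key):
        value = self.entries[key]
        self.entries.move_to_end(key)  # mark as most recently used
        return value

    def put(self, key, value) -> None:
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            self._evict_one()
        self.entries[key] = value

    def _evict_one(self) -> None:
        # LRU eviction that skips pinned keys: take the oldest unpinned entry.
        victim = next((k for k in self.entries if k not in self.pinned), None)
        if victim is None:
            raise RuntimeError("cache is full and every entry is pinned")
        del self.entries[victim]
```

vLLM's real prefix cache operates on KV-cache blocks rather than whole prefixes, but the same idea applies: the evictor would simply skip blocks marked as pinned when selecting LRU victims.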
Alternatives
No response
Additional context
No response
Before submitting a new issue...