eden/.llms/rules/ACR_unbounded_concurrency.md
Severity: HIGH
Patterns to flag:

- `futures::future::join_all()` on an unbounded collection
- `FuturesUnordered` without a concurrency limit
- `tokio::spawn` in a loop without a semaphore
- `StreamExt::for_each_concurrent(None, ...)` — unlimited parallelism (bounded sketch after the GOOD example below)
- `join_all(items.iter().map(|i| fetch(i)))` where `items` is user-controlled or repository-scale
- `for_each_concurrent(None, ...)` on streams of unbounded size

Acceptable (do not flag):

- `Semaphore::acquire()`
- `join_all()` on a small, fixed set of futures (e.g., 2-3 known operations)
- `buffer_unordered(N)` or `buffered(N)` with a concrete limit
- Bounded channels (`mpsc::channel(N)`) used for backpressure (sketch after the graduated drain example below)

BAD (unbounded fan-out):
```rust
let results = futures::future::join_all(
    changeset_ids.iter().map(|cs| fetch_changeset(ctx, repo, *cs))
).await;
// If changeset_ids has 100K entries, this drives 100K concurrent fetches
```
GOOD (bounded):
```rust
let results: Vec<_> = stream::iter(changeset_ids)
    .map(|cs| fetch_changeset(ctx, repo, cs))
    .buffer_unordered(100)
    .try_collect()
    .await?;
```
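The same rule applies to streams consumed only for their side effects: `for_each_concurrent` should be given an explicit limit rather than `None`. A minimal sketch, reusing `changeset_ids`, `ctx`, and `repo` from above with a hypothetical `warm_cache` helper:

```rust
use futures::stream::{self, StreamExt};

// At most 100 warm_cache calls are in flight at once; passing None here
// would reintroduce unlimited parallelism.
stream::iter(changeset_ids)
    .for_each_concurrent(100, |cs| warm_cache(ctx, repo, cs))
    .await;
```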
BAD (backlog stampede — matches S493741 pattern):
```rust
// After an upstream SEV is mitigated, all queued jobs resume at once
while let Some(job) = backlog_queue.pop() {
    tokio::spawn(process_job(job));
}
```
GOOD (graduated drain):
```rust
let drain_semaphore = Arc::new(Semaphore::new(50)); // max 50 concurrent during drain
while let Some(job) = backlog_queue.pop() {
    // acquire_owned() returns a permit that can move into the spawned task
    let permit = drain_semaphore.clone().acquire_owned().await?;
    tokio::spawn(async move {
        let _permit = permit; // held until the job completes
        process_job(job).await
    });
}
```
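Bounded channels (listed above as an acceptable pattern) achieve the same effect through backpressure: the producer blocks once the channel is full. A minimal sketch reusing the hypothetical `backlog_queue` and `process_job` from the example above; a single worker drains the channel here, and additional workers (or `buffer_unordered` inside the worker) can raise parallelism while the capacity still caps queued work:

```rust
use tokio::sync::mpsc;

// Capacity 50 provides backpressure: once 50 jobs are queued, send().await
// blocks the producer until the worker catches up.
let (tx, mut rx) = mpsc::channel(50);

let worker = tokio::spawn(async move {
    while let Some(job) = rx.recv().await {
        process_job(job).await;
    }
});

while let Some(job) = backlog_queue.pop() {
    if tx.send(job).await.is_err() {
        break; // worker exited; stop producing
    }
}
drop(tx); // closing the channel lets the worker finish after the last job
worker.await?;
```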
BAD (rate limit bypass — matches S498806 pattern):
```rust
fn should_rate_limit(override_config: &Override) -> bool {
    if override_config.is_expired() {
        return false; // Expired override = no limit. WRONG: lets unlimited traffic through.
    }
    check_rate(override_config.limit())
}
```
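A fail-closed counterpart, sketched under the assumption that a default limit is available to fall back to when the override expires (`default_limit` is a hypothetical helper; `Override` and `check_rate` are from the example above):

```rust
// GOOD (fail closed): an expired override falls back to the default limit
// instead of disabling rate limiting entirely.
fn should_rate_limit(override_config: &Override) -> bool {
    let limit = if override_config.is_expired() {
        default_limit()
    } else {
        override_config.limit()
    };
    check_rate(limit)
}
```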
Use StreamExt::buffer_unordered(N) or Semaphore to cap concurrency. Choose N based on downstream capacity (50-200 for blobstore, 10-50 for SQL). After an outage or backlog buildup, drain queues gradually — never release the entire backlog at once. Add per-request memory budgets for data fetches: if a single request would fetch more than X MB, reject it early rather than OOM.
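A minimal sketch of such a budget check, assuming the request's fetch size can be estimated up front (`estimated_fetch_bytes`, `MAX_FETCH_BYTES`, and the use of `anyhow::bail!` are illustrative, not an existing API):

```rust
use anyhow::bail;

const MAX_FETCH_BYTES: u64 = 512 * 1024 * 1024; // example budget: 512 MB

let estimated = estimated_fetch_bytes(&request);
if estimated > MAX_FETCH_BYTES {
    // Reject early with a clear error instead of fetching and OOMing under load.
    bail!("request would fetch {estimated} bytes, over the {MAX_FETCH_BYTES} byte budget");
}
```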
Incidents that matched this pattern:

- `commit_location_to_hash` calls overloaded Mononoke's MySQL backend.
- `sl diff` between rebase source and destination without scoping to relevant files, causing a 5x LFS traffic spike and OOM-based load shedding.