docs/TODOS.md
Scratch pad for tracking work across the project. See also CAPABILITIES.yaml for the full feature inventory.
- *.karate.js files)
- configure report = { showJsLineNumbers: true }
- karate-base.js (shared config from classpath JAR)
- @timeout=<millis> scenario-level timeout tag — was documented in v1 / early v2 docs (now removed from karate-docs) but never wired up. Tag.java has no TIMEOUT constant and no handler exists. Decide whether to ship it (ScenarioRuntime cancels the scenario after N ms, surfaces as a failure with a clear message) or formally drop it from the surface area.
- Runner.suites().add(...).parallel(n).run()
- @setup dynamic expressions, and examples-table cell interpolation. Currently TagSelector.evaluate creates a fresh Engine per call — low individual cost but adds up across the per-section pre-filter plus per-scenario runtime evaluation. Would need per-thread or pooled engines for parallel execution.
- BigInteger (large IDs, timestamps, financial identifiers)
- BigDecimal (money/finance)
- byte[] (raw binary data)
- Pattern from getJavaValue())
- java.util.Set (deduplication, membership)
- java.util.Map (ordered keys, non-string keys)
- java.util.Iterator
- console.warn(), console.error(), console.trace() etc. should map to the corresponding log levels (WARN, ERROR, TRACE) when cascading onto core/karate logging
- async/await -> CompletableFuture / virtual threads
- setTimeout() and timer functions
- import/export) for JS reuse across tests
- --listener / --listener-factory CLI flags
- -m, -s, -W, etc.)
- FeatureResult.fromJson() for offline report generation from JSONL
- karate.call() from JS, Background calls, multi-level chains)
- karate-base.js / karate-config.js / env-config output. Today, config-time karate.log, karate.embed, karate.call, and karate.callSingle output is replayed onto the first user step (or the beforeScenario hook step when one fires) — see the ScenarioRuntime.call() capture-then-replay around the LogContext.set(new LogContext()) reset, issue #2840. A dedicated synthetic step (parallel to StepResult.hook(...), e.g. StepResult.configBootstrap(...)) would surface config output as a distinct first-class entry rather than blending it into user steps. Hook point: between the capture and the step loop in ScenarioRuntime.call(). Needs the HTML / Cucumber JSON / JUnit XML writers to render the new synthetic step kind.
- Runner.Builder exposure via protocol.runner() for Gatling
- DriverFeatureTest — verify EventSource connects to SseHandler and receives events in a real browser. Current SSE tests only validate the server-side wire format. This would cover the HTMX sse-swap and Alpine EventSource patterns end-to-end.
- find / findAll as aliases for locate / locateAll — jQuery, Cypress, and Selenium (findElement) all use find for scoped descendant lookups, and the $() / $$() shorthands are near-universal. locate is internally consistent with Karate's "locator" noun but non-standard elsewhere. Cost is ~5 lines (bind as aliases in Driver.jsGet and BaseElement.jsGet); the benefit is one less thing for users arriving from other frameworks to learn. Skip until someone actually asks — the existing locate is established, documented, and v1-compatible.
- Expand JS mock documentation in MOCKS.md — more examples of pathMatches, session patterns, and comparison with feature-file mocks
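For the @timeout item above, the core mechanic is running the scenario body under a deadline and cancelling it with a clear failure message. A minimal sketch under that assumption — TimeoutRunner and runWithTimeout are illustrative names, not actual Karate code, and the real hook would live inside ScenarioRuntime:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sketch: enforce an @timeout=<millis> tag by running the
// scenario body on a worker thread and cancelling it at the deadline.
class TimeoutRunner {
    static String runWithTimeout(Runnable scenarioBody, long millis) {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        Future<?> f = exec.submit(scenarioBody);
        try {
            f.get(millis, TimeUnit.MILLISECONDS);
            return "passed";
        } catch (TimeoutException e) {
            f.cancel(true); // interrupt the scenario thread
            return "failed: scenario exceeded " + millis + " ms (@timeout)";
        } catch (Exception e) {
            return "failed: " + e.getMessage();
        } finally {
            exec.shutdownNow();
        }
    }
}
```

The design question this surfaces: cancellation relies on the scenario thread being interruptible, so a step stuck in non-interruptible I/O would still only fail once it returns.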
context.synchronized(name, fn) for JS-file mocks. MockHandler.apply() wraps every Karate-feature mock request in a requestLock, so feature-file mocks serialize naturally and shared mutable state (singleton session, caches) "just works". ServerRequestHandler (the JS-file mock path) has no equivalent — concurrent requests race on shared state, and the races aren't fixable in user data structures alone: JS array ops (push, splice, sort, …) are non-atomic read-modify-write sequences in JsArrayPrototype itself. Two manifestations seen in repro: silent item loss (T2 reads stale len, set(len, x) overwrites T1's append) and IndexOutOfBoundsException / ConcurrentModificationException from JsArray$ArrayLength.applySet taking the truncate path on a list that grew under it. Auditing every Array.prototype/Object operation for atomicity isn't feasible (single-threaded execution is a JS-spec invariant — no engine promises this), so the right fix is to expose locking to user code.
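The silent-item-loss interleaving above can be replayed deterministically in plain Java by making the two racing steps explicit. This is an illustration of the race shape, not Karate code — the set helper mimics the JS-style grow-or-overwrite index write:

```java
import java.util.ArrayList;
import java.util.List;

// Deterministic replay of the lost-update race: both "threads" read the
// same stale length, so the second index write overwrites the first append.
class LostUpdateDemo {
    static List<String> replay() {
        List<String> items = new ArrayList<>();
        items.add("existing");
        int lenT1 = items.size(); // T1 reads len = 1
        int lenT2 = items.size(); // T2 reads the same stale len = 1
        set(items, lenT1, "fromT1"); // T1 appends at index 1
        set(items, lenT2, "fromT2"); // T2 overwrites index 1: T1's item is lost
        return items;
    }

    // mimics a JS array index write: append when index == length, else overwrite
    static void set(List<String> list, int index, String value) {
        if (index == list.size()) {
            list.add(value);
        } else {
            list.set(index, value);
        }
    }
}
```

Two logical appends, but the final list has only two elements and "fromT1" is gone — the same symptom as the repro, with no exception to point at the cause.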
Decision: ship context.synchronized(name, fn) on the JS-file-mock context namespace (matches the existing JS-mock idiom of context.uuid() etc.; avoids introducing a new karate binding for one method). Reentrant, named, callback-only (forces try/finally; no leaks). Lock registry is a ConcurrentHashMap<String, ReentrantLock> on ServerConfig so each server is isolated.
Why not a global serverConfig.singleThreadedJs(true) knob: punishes read-only / non-shared paths in the same app and turns parallel JS mocks back into the same single-threaded performance profile that feature-file mocks already have — power users want to be selective.
Why not also bind on the test-scenario karate namespace today: footgun risk. Easy to silently kill parallel(N) suite throughput, hides scenario-isolation problems, lock-name typos are different locks (silent), unbounded lock-map growth from per-id keys. Defer until real demand surfaces.
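The registry side of the decision above is small. A minimal sketch, assuming illustrative names (LockRegistry, synchronizedCall) rather than actual Karate classes — the real map would hang off ServerConfig so each server stays isolated:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Hypothetical sketch of the per-server named-lock registry.
class LockRegistry {
    // one registry per server instance keeps servers isolated
    private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    // reentrant, named, callback-only: try/finally guarantees release
    <T> T synchronizedCall(String name, Supplier<T> fn) {
        ReentrantLock lock = locks.computeIfAbsent(name, k -> new ReentrantLock());
        lock.lock();
        try {
            return fn.get();
        } finally {
            lock.unlock();
        }
    }
}
```

A JS-file mock would then wrap its critical section as context.synchronized('session', fn); the callback shape means the lock is released even when fn throws, and ReentrantLock makes nested calls on the same name safe.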
Workaround in the meantime: wrap the entire request handler in a ReentrantLock (mirrors what MockHandler does internally). A few lines in user code:

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Function;

ReentrantLock lock = new ReentrantLock();
Function<HttpRequest, HttpResponse> inner = new ServerRequestHandler(config, resolver);
// serialize every request through one lock, mirroring MockHandler's requestLock
Function<HttpRequest, HttpResponse> serialized = req -> {
    lock.lock();
    try {
        return inner.apply(req);
    } finally {
        lock.unlock();
    }
};
```
See karate-todo's App.handler() for a worked example.
KARATE_TELEMETRY=false