docs/codex-openai-session-resolver.md
This documents the personal `L` wrapper that opens or resumes Codex sessions with repo-specific query matching for `~/repos/openai/codex`.
Relevant files:
- `~/config/fish/fn.fish`
- `~/config/fish/scripts/codex-openai-session.ts`

Behavior:

- `L` with no args runs `f ai codex new`.
- `L <query>` targets stored Codex sessions for `~/repos/openai/codex`, then runs `f ai codex resume <thread-id>` in that repo.
- `l` and `L` now also treat explicit recovery phrases as a separate lightweight path.
Examples:
- `see this convo in ...`
- `what was I doing in ...`
- `recover recent context`
- `continue the ... work`

For those prompts, the launcher first runs:
```
f ai codex recover --summary-only --path <derived target> <prompt>
```
Then it prepends the short recovery summary to the new prompt and opens the session in the derived target repo/workspace. Normal prompts do not pay this cost.
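As a sketch, the phrase gate can be a small predicate over the prompt. The patterns below mirror the example phrases above; `isRecoveryPrompt` and `withRecoverySummary` are illustrative names, not the helper's real API:

```typescript
// Illustrative sketch of the recovery-phrase fast path; names and
// patterns are assumptions, not the helper's actual implementation.
const RECOVERY_PATTERNS: RegExp[] = [
  /^see this convo in\b/i,
  /^what was i doing in\b/i,
  /^recover recent context\b/i,
  /^continue the .+ work\b/i,
];

// Only prompts matching an explicit recovery phrase pay the extra
// `f ai codex recover --summary-only` round trip.
function isRecoveryPrompt(prompt: string): boolean {
  const p = prompt.trim();
  return RECOVERY_PATTERNS.some((re) => re.test(p));
}

// Prepend the short recovery summary to the new prompt.
function withRecoverySummary(summary: string, prompt: string): string {
  return `${summary.trim()}\n\n${prompt}`;
}
```

Keeping the gate a plain regex check is what makes the normal path free: nothing is spawned unless a recovery phrase actually matches.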
This path uses `codex app-server` instead of parsing `f ai codex list` output.
That matters because `thread/list` gives:

- `cwd` filtering for `~/repos/openai/codex`
- `updatedAt` ordering
- `id`, `name`, `preview`, `gitInfo`, and `cwd` per entry
- `searchTerm`

For this wrapper, exact repo scoping is the main win. It avoids mixing sessions from unrelated repos and avoids depending on Flow's imported session index.
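A rough sketch of the per-entry shape this wrapper relies on. The interface is reconstructed from the fields named above, not the app-server's published schema:

```typescript
// Assumed shape of a thread/list entry, reconstructed from the fields
// the wrapper uses; the real app-server schema may differ.
interface GitInfo {
  branch?: string;
  sha?: string;
}

interface ThreadSummary {
  id: string;
  name?: string;
  preview?: string;
  cwd: string;
  updatedAt: string; // drives sortKey: updated_at ordering
  gitInfo?: GitInfo;
}

// Exact repo scoping: drop anything whose cwd is not the target repo,
// so sessions from unrelated repos never enter the candidate set.
function scopeToRepo(threads: ThreadSummary[], repo: string): ThreadSummary[] {
  return threads.filter((t) => t.cwd === repo);
}
```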
Lookup flow:

1. Start `codex app-server` with `cwd` set to `~/repos/openai/codex`.
2. Send `initialize`, then `initialized`.
3. Call `thread/list` with:
   - `cwd: ~/repos/openai/codex`
   - `archived: false`
   - `sortKey: updated_at`
4. For anchor queries such as `after most recent active: 2`, call `thread/read` with `includeTurns: true` and match against full turn text.
5. Run `f ai codex resume <id>` in the Codex repo.

The resolver is deterministic. It does not call a model.
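The list call can be parameterized like this sketch. Key names follow the notes above; the exact JSON-RPC framing is an assumption and not shown:

```typescript
// Sketch of the thread/list parameters used for the lookup; key names
// follow the notes in this doc, the wire framing is assumed.
function threadListParams(repo: string) {
  return {
    cwd: repo,                      // exact repo scoping
    archived: false,                // skip archived sessions
    sortKey: "updated_at" as const, // most recent active first
  };
}

// Order of operations on each lookup:
//   1. start codex app-server with cwd set to the repo
//   2. initialize, then initialized
//   3. thread/list with the params above
//   4. thread/read with includeTurns: true when full turn text is needed
//   5. f ai codex resume <id> in the Codex repo
```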
Matching order:
1. `after` or `before` anchor phrases
2. ordinals: `2`, `second`, `3rd`
3. `thread.name`
4. `thread.preview`
5. `thread.gitInfo.branch`
6. `thread.gitInfo.sha`
7. most recent active (fallback)

Examples:
- `L most recent active`
- `L session after most recent active`
- `L second`
- `L 019cca91`
- `L where does codex store`
- `L history.jsonl`

Important accuracy guardrails:
- `last` only means "latest session" when the rest of the query is otherwise empty after control words are removed.
- Every lookup spawns a fresh `codex app-server` process. That is the main latency cost.
- `thread/list` `searchTerm` only filters extracted titles and is case-sensitive, so the helper still needs local fallback ranking.

Possible improvements:

- Have `L` reuse one app-server connection instead of spawning a fresh process.
- Cache `id`, `updatedAt`, `name`, `preview`, and `gitInfo`, then refresh them opportunistically.
- Name key sessions via `thread/name/set`; exact names will beat fuzzy preview matching.
- Parse `after ...` and `before ...` queries more deeply when the anchor match is still weak.
- A model-based resolver is possible but should be the fallback, not the first pass.
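Putting the matching order together, a minimal deterministic resolver could look like the sketch below. Field priorities follow this doc; the ordinal table, the `id`-prefix check, and the omission of `after`/`before` anchor handling are illustrative choices, not the helper's real code:

```typescript
// Minimal deterministic matcher; no model call. after/before anchor
// handling is omitted for brevity.
interface Candidate {
  id: string;
  name?: string;
  preview?: string;
  gitInfo?: { branch?: string; sha?: string };
}

// `threads` must already be in updated_at order (most recent first).
function matchThread(query: string, threads: Candidate[]): Candidate | undefined {
  const q = query.trim().toLowerCase();
  // Ordinals like "2", "second", "3rd" index into the recency order.
  const ordinals: Record<string, number> = {
    first: 0, "1": 0, second: 1, "2": 1, "2nd": 1, third: 2, "3": 2, "3rd": 2,
  };
  if (q in ordinals) return threads[ordinals[q]];
  // Then id prefixes (assumed from the `L 019cca91` example).
  const byId = threads.find((t) => t.id.toLowerCase().startsWith(q));
  if (byId) return byId;
  // Then field substrings in priority order: name, preview, branch, sha.
  const fields: Array<(t: Candidate) => string | undefined> = [
    (t) => t.name,
    (t) => t.preview,
    (t) => t.gitInfo?.branch,
    (t) => t.gitInfo?.sha,
  ];
  for (const field of fields) {
    const hit = threads.find((t) => field(t)?.toLowerCase().includes(q));
    if (hit) return hit;
  }
  // Fall back to the most recent active session.
  return threads[0];
}
```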
Why:
`id`, `updatedAt`, `name`, `preview`, and the repo path already narrow the space well.

If a model is added later, the safe shape is to keep it as a fallback: run the deterministic resolver first and consult the model only when no confident match is found.
The highest-value next change is reducing lookup latency: reuse one app-server connection and cache the session metadata instead of spawning a fresh process per lookup.
That should improve the user-visible speed more than further prompt tuning.