# Cursor Integration Example
> [!CAUTION]
> This example is deprecated and unmaintained. Use at your own risk.
> [!NOTE]
> Read more about this integration on our blog: **Reverse Engineering Cursor's LLM Client**
This example shows how to use the TensorZero Gateway as a proxy between Cursor and the LLM APIs — enabling you to observe LLM calls being made, run evaluations on individual inferences, use inference-time optimizations, and even experiment with and optimize the prompts and models that Cursor uses.
## Setup

1. Create a `.env` file with the credentials for the LLM providers you want to use (as in any other TensorZero deployment). Our example uses OpenAI, Anthropic, and Google AI Studio, but you can use any LLM provider TensorZero supports. See `.env.example` for an example.
2. Set `API_TOKEN` in your `.env` file. We've had some issues with special characters in this step, so we recommend using an alphanumeric string to be safe.
3. Optionally, set `USER="yourname"` if you'd like to tag each request with your name for downstream use.
4. Add your ngrok auth token to your `.env` file as the value for `NGROK_AUTHTOKEN`.
5. Run `docker compose up` to stand up ClickHouse, the TensorZero Gateway, the TensorZero UI, Nginx, and ngrok.
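Concretely, the `.env` assembled in the steps above might look like the sketch below. All values are placeholders, and the provider variable names are assumptions based on TensorZero's usual conventions, not the example's actual file:

```shell
# .env - all values below are placeholders, not real credentials
OPENAI_API_KEY=sk-placeholder
ANTHROPIC_API_KEY=sk-ant-placeholder
GOOGLE_AI_STUDIO_API_KEY=placeholder
API_TOKEN=mysecrettoken123   # alphanumeric to avoid special-character issues
USER=yourname                # optional: tags each request with your name
NGROK_AUTHTOKEN=placeholder
```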
To avoid port conflicts, ClickHouse and TensorZero services that would normally bind to ports XXXX bind to 1XXXX instead (e.g. 3000 → 13000).
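In `docker-compose.yml` terms, the remapping is just the host side of each port mapping. A sketch, assuming the gateway normally listens on 3000 (the service name here is an assumption, not the example's actual file):

```yaml
services:
  gateway:
    ports:
      - "13000:3000" # host port 13000 -> container port 3000
```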
On Windows, convert `nginx/entrypoint.sh` to Unix (LF) line endings before starting the containers, e.g. in PowerShell:

```powershell
(Get-Content ./nginx/entrypoint.sh -Raw) -replace "`r`n", "`n" | Set-Content ./nginx/entrypoint.sh -NoNewline -Encoding UTF8
```
6. Visit http://localhost:4040 and grab your ngrok URL.
7. In your Cursor model settings, set `OPENAI_BASE_URL` to your ngrok URL with the `/openai/v1` suffix (e.g. `https://your-id.ngrok-free.app/openai/v1`).
8. Set the OpenAI API key in Cursor to the `API_TOKEN` value you set in your `.env` file (not your OpenAI API key!).
9. Add a model named `tensorzero::function_name::cursorzero`.

Now, you can use Cursor as you normally would, but with the `tensorzero::function_name::cursorzero` model you defined.
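The model name is not arbitrary: the `tensorzero::function_name::` prefix tells the gateway which TensorZero function should serve the request (`cursorzero` here). A small shell sketch of that naming convention; the parsing below is illustrative, not gateway code:

```shell
# The last '::'-separated segment is the TensorZero function name
model="tensorzero::function_name::cursorzero"
function_name="${model##*::}"  # drop everything through the last '::'
echo "$function_name"         # cursorzero
```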
This will send all traffic through your self-hosted TensorZero Gateway, which in turn will route those requests to the LLM APIs you defined in your tensorzero.toml file.
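For reference, a minimal `tensorzero.toml` defining such a function could look like the sketch below; the variant name and model are assumptions, not the example's actual configuration:

```toml
# Sketch: one chat function with a single OpenAI-backed variant
[functions.cursorzero]
type = "chat"

[functions.cursorzero.variants.baseline]
type = "chat_completion"
model = "openai::gpt-4o"
```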
Take a look at the TensorZero UI running on http://localhost:14000 to see what your requests look like!
## Reverse engineering notes

This section is mostly for reference. You do not need to do this to run the Cursor integration today.
Install mitmproxy and its root certificate system-wide, then launch Cursor with `http_proxy=http://localhost:8080` and `https_proxy=http://localhost:8080`:

Cursor calls go through
TensorZero watches them
See the requests flow
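One way to set those proxy variables for a single launch, rather than exporting them globally, is with `env`. In this sketch, `sh -c 'echo ...'` stands in for the Cursor binary so the variable propagation is visible:

```shell
# Scope the proxy settings to one command instead of the whole shell session
out=$(env http_proxy=http://localhost:8080 https_proxy=http://localhost:8080 \
  sh -c 'echo "$http_proxy"')
echo "$out"   # http://localhost:8080
```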