docs/app/templates/llamaindex-app.md
The following is an alternative UI for displaying the LlamaIndex app.
If you plan on deploying your agentic workflow to production, follow the LlamaDeploy tutorial to deploy it.
To run this app locally, install Reflex and run:
```shell
reflex init --template reflex-llamaindex-template
```
The following lines in the `state.py` file are where the app makes a request to your deployed agentic workflow. If you have not deployed your agentic workflow yet, you can edit this code to call an API endpoint of your choice instead.
```python
client = httpx.AsyncClient()

# Call the agentic workflow.
input_payload = {
    "chat_history_dicts": chat_history_dicts,
    "user_input": question,
}
deployment_name = os.environ.get("DEPLOYMENT_NAME", "MyDeployment")
apiserver_url = os.environ.get("APISERVER_URL", "http://localhost:4501")
response = await client.post(
    f"{apiserver_url}/deployments/{deployment_name}/tasks/create",
    json={"input": json.dumps(input_payload)},
    timeout=60,
)
answer = response.text

for i in range(len(answer)):
    # Pause to show the streaming effect.
    await asyncio.sleep(0.01)
    # Add one letter at a time to the output.
    self.chat_history[-1] = (
        self.chat_history[-1][0],
        answer[: i + 1],
    )
    yield
```
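If you want to point the app at a different backend, it can help to isolate the request construction into a small helper. The sketch below (the `build_task_request` name is our own, not part of the template) mirrors the payload and URL built in `state.py`, so swapping endpoints means changing only this function:

```python
import json
import os


def build_task_request(chat_history_dicts, question):
    """Build the URL and JSON body for the workflow task-create request.

    Reads the same DEPLOYMENT_NAME / APISERVER_URL environment variables,
    with the same defaults, as the code in state.py.
    """
    deployment_name = os.environ.get("DEPLOYMENT_NAME", "MyDeployment")
    apiserver_url = os.environ.get("APISERVER_URL", "http://localhost:4501")
    input_payload = {
        "chat_history_dicts": chat_history_dicts,
        "user_input": question,
    }
    url = f"{apiserver_url}/deployments/{deployment_name}/tasks/create"
    body = {"input": json.dumps(input_payload)}
    return url, body


# To target a different backend, change only `url` and `body` here;
# the httpx call in state.py stays the same:
#   response = await client.post(url, json=body, timeout=60)
```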
Once you have set up your environment, install the dependencies and run the app:
```shell
cd reflex-llamaindex-template
pip install -r requirements.txt
reflex run
```
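By default the app targets a local API server. If your deployed workflow lives elsewhere, set the two environment variables read in `state.py` before starting the app (the values below are just the defaults shown above):

```shell
# Defaults used by state.py when these variables are unset:
export DEPLOYMENT_NAME="MyDeployment"
export APISERVER_URL="http://localhost:4501"
# Then start the app with: reflex run
```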