LangGraph is a library for building stateful, multi-actor applications with LLMs. This example uses LangGraph to automate the process of conducting in-depth research on any given topic.
Looking for the AG2 version? See multi_agents_ag2/ and the AG2 docs page.
By using LangGraph, the depth and quality of the research can be significantly improved by leveraging multiple agents with specialized skills. Inspired by the recent STORM paper, this example showcases how a team of AI agents can work together to conduct research on a given topic, from planning to publication.

An average run generates a 5-6 page research report in multiple formats, such as PDF, Docx, and Markdown.
Please note: the multi-agent setup uses the same model configuration as GPT Researcher. However, only the SMART_LLM is used for the time being. Please refer to the LLM config pages for details.
The research team is made up of 8 specialized agents. Generally, the process moves through several stages, from planning through research, review, and writing to publication; the architecture diagram shows the detailed flow between the agents.
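Conceptually, this staged flow is a pipeline in which each agent reads and extends a shared state. The sketch below illustrates the idea in plain Python; the agent names, state fields, and stubbed logic are illustrative only, not the actual implementation (which wires the agents together as LangGraph nodes):

```python
# Illustrative sketch of a staged multi-agent pipeline: each "agent" is a
# function that reads and extends a shared state dict. Names and logic are
# placeholders, not the real agents from this example.

def planner(state):
    # Break the query into section subtopics (stubbed).
    state["sections"] = [f"{state['query']} - aspect {i}" for i in range(1, 3)]
    return state

def researcher(state):
    # Research each section (stubbed: echo the section title).
    state["drafts"] = {s: f"Findings on {s}" for s in state["sections"]}
    return state

def writer(state):
    # Assemble the final report from the section drafts.
    state["report"] = "\n\n".join(state["drafts"].values())
    return state

def publisher(state):
    # Mark the report as published (stubbed).
    state["published"] = True
    return state

def run_pipeline(query):
    state = {"query": query}
    for agent in (planner, researcher, writer, publisher):
        state = agent(state)
    return state

result = run_pipeline("Is AI in a hype cycle?")
print(result["published"])  # True
```

In the real example, LangGraph additionally handles conditional edges (e.g. looping between reviewer and revisor) and persistence of the shared state, which a plain function chain like this cannot express.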
Install the required packages found in this root folder, including langgraph:

```shell
pip install -r requirements.txt
```
Update the environment variables; see the GPT-Researcher docs for more details.
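For example, a typical `.env` for web research might look like the following. The values are placeholders, and the exact set of required keys depends on your configured LLM provider and retriever (check the GPT-Researcher docs for the full list):

```shell
# Example .env — placeholder values; required keys depend on your
# configured LLM provider and retriever (see the GPT-Researcher docs).
OPENAI_API_KEY=sk-...
TAVILY_API_KEY=tvly-...
# Only needed when "source" is set to "local" in task.json:
DOC_PATH=./my-docs
```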
Run the application:

```shell
python main.py
```
To change the research query and customize the report, edit the task.json file in the main directory.
- `query` - The research query or task.
- `model` - The OpenAI LLM to use for the agents.
- `max_sections` - The maximum number of sections in the report. Each section is a subtopic of the research query.
- `include_human_feedback` - If true, the user can provide feedback to the agents. If false, the agents will work autonomously.
- `publish_formats` - The formats to publish the report in. The reports will be written in the output directory.
- `source` - The location from which to conduct the research. Options: `web` or `local`. For local, please add the `DOC_PATH` env var.
- `follow_guidelines` - If true, the research report will follow the guidelines below. It will take longer to complete. If false, the report will be generated faster but may not follow the guidelines.
- `guidelines` - A list of guidelines that the report must follow.
- `verbose` - If true, the application will print detailed logs to the console.

For example:

```json
{
  "query": "Is AI in a hype cycle?",
  "model": "gpt-4o",
  "max_sections": 3,
  "publish_formats": {
    "markdown": true,
    "pdf": true,
    "docx": true
  },
  "include_human_feedback": false,
  "source": "web",
  "follow_guidelines": true,
  "guidelines": [
    "The report MUST fully answer the original question",
    "The report MUST be written in apa format",
    "The report MUST be written in english"
  ],
  "verbose": true
}
```
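Since the application reads this file at startup, a quick way to catch typos after editing it is to parse and sanity-check it first. The helper below is a small sketch (the function name and the specific checks are illustrative, not part of the example's codebase); the required keys mirror the fields documented above:

```python
import json

# Illustrative helper: load task.json and sanity-check the fields
# documented above before launching the agents.
def load_task(path="task.json"):
    with open(path) as f:
        task = json.load(f)
    for key in ("query", "model", "max_sections", "publish_formats"):
        if key not in task:
            raise ValueError(f"task.json is missing required field: {key}")
    if task.get("source", "web") not in ("web", "local"):
        raise ValueError("source must be 'web' or 'local'")
    return task
```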
You can also run the application as a LangGraph API server. Install the LangGraph CLI and start the server:

```shell
pip install langgraph-cli
langgraph up
```

From there, see the LangGraph documentation on how to use the streaming and async endpoints, as well as the playground.