
# Quickstart


## Watch: Building CrewAI Agents & Flows with Coding Agent Skills

Install our coding agent skills (Claude Code, Codex, and more) to quickly get your coding agents up and running with CrewAI.

You can install them with `npx skills add crewaiinc/skills`.

<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>

In this guide you will create a Flow that sets a research topic, runs a crew with one agent (a researcher using web search), and ends with a markdown report on disk. Flows are the recommended way to structure production apps: they own state and execution order, while agents do the work inside a crew step.

If you have not installed CrewAI yet, follow the installation guide first.

## Prerequisites

  • Python environment and the CrewAI CLI (see installation)
  • An LLM configured with the right API keys — see LLMs
  • A Serper.dev API key (SERPER_API_KEY) for web search in this tutorial

## Build your first Flow

<Steps> <Step title="Create a Flow project"> From your terminal, scaffold a Flow project (the folder name uses underscores, e.g. `latest_ai_flow`):
<CodeGroup>
  ```shell Terminal
  crewai create flow latest-ai-flow
  cd latest_ai_flow
  ```
</CodeGroup>

This creates a Flow app under `src/latest_ai_flow/`, including a starter crew under `crews/content_crew/` that you will replace with a minimal single-agent research crew in the next steps. </Step>

<Step title="Configure one agent in `agents.yaml`"> Replace the contents of `src/latest_ai_flow/crews/content_crew/config/agents.yaml` with a single researcher. Variables like `{topic}` are filled from `crew.kickoff(inputs=...)`.
```yaml agents.yaml
# src/latest_ai_flow/crews/content_crew/config/agents.yaml
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. You find the most relevant information and
    present it clearly.
```
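CrewAI fills these `{topic}` placeholders from the `inputs` you pass to `kickoff`. As a rough illustration only (not CrewAI's actual internals), the substitution behaves like Python's `str.format`:

```python
# Illustration only: CrewAI interpolates {topic} from kickoff inputs.
# This sketch mimics that substitution with plain str.format.
goal_template = "Uncover cutting-edge developments in {topic}"

inputs = {"topic": "AI Agents"}  # what you'd pass to crew.kickoff(inputs=...)
rendered = goal_template.format(**inputs)

print(rendered)  # Uncover cutting-edge developments in AI Agents
```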
</Step>

<Step title="Configure one task in `tasks.yaml`">
```yaml tasks.yaml
# src/latest_ai_flow/crews/content_crew/config/tasks.yaml
research_task:
  description: >
    Conduct thorough research about {topic}. Use web search to find
    current, credible information. The current year is 2026.
  expected_output: >
    A markdown report with clear sections: key trends, notable tools or
    companies, and implications. Aim for 800–1200 words. No fenced code
    blocks around the whole document.
  agent: researcher
  output_file: output/report.md
```
</Step>

<Step title="Wire the crew class (`content_crew.py`)">
Point the generated crew at your YAML and attach `SerperDevTool` to the researcher.
```python content_crew.py
# src/latest_ai_flow/crews/content_crew/content_crew.py
from typing import List

from crewai import Agent, Crew, Process, Task
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool


@CrewBase
class ResearchCrew:
  """Single-agent research crew used inside the Flow."""

  agents: List[BaseAgent]
  tasks: List[Task]

  agents_config = "config/agents.yaml"
  tasks_config = "config/tasks.yaml"

  @agent
  def researcher(self) -> Agent:
    return Agent(
      config=self.agents_config["researcher"],  # type: ignore[index]
      verbose=True,
      tools=[SerperDevTool()],
    )

  @task
  def research_task(self) -> Task:
    return Task(
      config=self.tasks_config["research_task"],  # type: ignore[index]
    )

  @crew
  def crew(self) -> Crew:
    return Crew(
      agents=self.agents,
      tasks=self.tasks,
      process=Process.sequential,
      verbose=True,
    )
```
</Step>

<Step title="Define the Flow in `main.py`">
Connect the crew to a Flow: a `@start()` step sets the topic in **state**, and a `@listen` step runs the crew. The task’s `output_file` still writes `output/report.md`.
```python main.py
# src/latest_ai_flow/main.py
from pydantic import BaseModel

from crewai.flow import Flow, listen, start

from latest_ai_flow.crews.content_crew.content_crew import ResearchCrew


class ResearchFlowState(BaseModel):
  topic: str = ""
  report: str = ""


class LatestAiFlow(Flow[ResearchFlowState]):
  @start()
  def prepare_topic(self, crewai_trigger_payload: dict | None = None):
    if crewai_trigger_payload:
      self.state.topic = crewai_trigger_payload.get("topic", "AI Agents")
    else:
      self.state.topic = "AI Agents"
    print(f"Topic: {self.state.topic}")

  @listen(prepare_topic)
  def run_research(self):
    result = ResearchCrew().crew().kickoff(inputs={"topic": self.state.topic})
    self.state.report = result.raw
    print("Research crew finished.")

  @listen(run_research)
  def summarize(self):
    print("Report path: output/report.md")


def kickoff():
  LatestAiFlow().kickoff()


def plot():
  LatestAiFlow().plot()


if __name__ == "__main__":
  kickoff()
```
<Tip> If your package name differs from `latest_ai_flow`, change the import of `ResearchCrew` to match your project’s module path. </Tip>
</Step>

<Step title="Set environment variables">
In `.env` at the project root, set:
- `SERPER_API_KEY` — from [Serper.dev](https://serper.dev/)
- Your model provider keys as required — see [LLM setup](/en/concepts/llms#setting-up-your-llm)
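Before running, you can sanity-check that the keys are actually visible to Python. This is a minimal sketch; `OPENAI_API_KEY` is only an example, so substitute whatever key name your LLM provider requires:

```python
# Quick sanity check that required env vars are set before a run.
# OPENAI_API_KEY is illustrative; use your own provider's key name.
import os


def missing_keys(required):
    """Return the names of required env vars that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]


missing = missing_keys(["SERPER_API_KEY", "OPENAI_API_KEY"])
if missing:
    print(f"Missing env vars: {', '.join(missing)}")
else:
    print("All required keys are set.")
```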
</Step>

<Step title="Install and run">
<CodeGroup>
  ```shell Terminal
  crewai install
  crewai run
  ```
</CodeGroup>

`crewai run` executes the Flow entrypoint defined in your project (the same command as for crews; the project type is `"flow"` in `pyproject.toml`). </Step>

<Step title="Check the output">
You should see logs from the Flow and the crew. Open **`output/report.md`** for the generated report (excerpt):

<CodeGroup>
```markdown output/report.md
# AI Agents in 2026: Landscape and Trends
## Executive summary
…

## Key trends
- **Tool use and orchestration** — …
- **Enterprise adoption** — …

## Implications
…
```
</CodeGroup>

Your actual file will be longer and reflect live search results. </Step> </Steps>

## How this run fits together

  1. **Flow** — `LatestAiFlow` runs `prepare_topic` first, then `run_research`, then `summarize`. State (`topic`, `report`) lives on the Flow.
  2. **Crew** — `ResearchCrew` runs one task with one agent: the researcher uses Serper to search the web, then writes the structured report.
  3. **Artifact** — The task’s `output_file` writes the report under `output/report.md`.
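Because the artifact is a plain markdown file, downstream code can pick it up directly. A small sketch, assuming the default `output/report.md` path (the `load_report` helper is ours, not part of CrewAI):

```python
# Read the generated report back in, e.g. to post-process or publish it.
from pathlib import Path


def load_report(path="output/report.md"):
    """Return the report text, or None if the flow hasn't produced it yet."""
    report = Path(path)
    if not report.exists():
        return None
    return report.read_text(encoding="utf-8")


text = load_report()
if text is None:
    print("No report yet - run the flow first.")
else:
    print(f"Report loaded: {len(text)} characters")
```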

To go deeper on Flow patterns (routing, persistence, human-in-the-loop), see Build your first Flow and Flows. For crews without a Flow, see Crews. For a single Agent and kickoff() without tasks, see Agents.

<Check> You now have an end-to-end Flow with an agent crew and a saved report — a solid base to add more steps, crews, or tools. </Check>

## Naming consistency

YAML keys (`researcher`, `research_task`) must match the method names on your `@CrewBase` class. See Crews for the full decorator pattern.

## Deploying

Push your Flow to CrewAI AMP once it runs locally and your project is in a GitHub repository. From the project root:

<CodeGroup>
```bash Authenticate
crewai login
```

```bash
crewai deploy create
```

```bash
crewai deploy status
crewai deploy logs
```

```bash
crewai deploy push
```

```bash
crewai deploy list
crewai deploy remove <deployment_id>
```
</CodeGroup>

<Tip> The first deploy usually takes **around 1 minute**. Full prerequisites and the web UI flow are in [Deploy to AMP](/en/enterprise/guides/deploy-to-amp). </Tip>

<CardGroup cols={2}>
  <Card title="Deploy guide" icon="book" href="/en/enterprise/guides/deploy-to-amp">
    Step-by-step AMP deployment (CLI and dashboard).
  </Card>
  <Card title="Join the Community" icon="comments" href="https://community.crewai.com">
    Discuss ideas, share projects, and connect with other CrewAI developers.
  </Card>
</CardGroup>