# OWL Interview Preparation Assistant
An intelligent multi-agent interview preparation system powered by the OWL framework that helps you prepare for job interviews with comprehensive research, tailored questions, and detailed preparation plans.
## Installation

First, clone the OWL repository, which this project depends on:

```bash
git clone https://github.com/camel-ai/owl.git
cd owl
```

Next, create and activate a Python environment:

```bash
# Create a conda environment (recommended)
conda create -n interview_assistant python=3.10
conda activate interview_assistant

# OR using venv
python -m venv interview_env
source interview_env/bin/activate  # On Windows: interview_env\Scripts\activate
```

Then install OWL and the additional dependencies:

```bash
# Install OWL (from the repository root)
pip install -e .

# Install additional dependencies
pip install streamlit numpy pandas opencv-python
```
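To confirm the extra dependencies installed correctly, a quick check like the following can help (an illustrative helper, not part of the repository; note that `opencv-python` imports as `cv2`):

```python
import importlib

def missing_dependencies(mods=("streamlit", "numpy", "pandas", "cv2")) -> list:
    """Return the modules from `mods` that fail to import."""
    missing = []
    for mod in mods:
        try:
            importlib.import_module(mod)
        except ImportError:
            missing.append(mod)
    return missing

if __name__ == "__main__":
    print(missing_dependencies() or "all dependencies importable")
```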
## Configuration

Create a `.env` file in the project directory with your API keys:

```bash
# Navigate to the Interview Preparation Assistant directory
cd community_usecase/new\ int/

# Create .env file
touch .env
```

Add your API keys to the `.env` file:

```text
# OpenAI API (recommended for best results)
OPENAI_API_KEY=your_openai_api_key_here

# OR OpenRouter API (for access to Gemini models)
OPENROUTER_API_KEY=your_openrouter_api_key_here

# Optional: Google Search API for enhanced research
GOOGLE_API_KEY=your_google_api_key_here
SEARCH_ENGINE_ID=your_google_search_engine_id_here
```
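How `main.py` actually reads these keys is not shown here; as an illustration, choosing a backend from the configured keys with the standard library might look like this (the `pick_api_backend` helper is hypothetical):

```python
import os

def pick_api_backend() -> str:
    """Return which LLM backend is usable given the configured API keys.

    Checks the same variable names the .env file above defines; assumes
    a .env loader (e.g. python-dotenv) has already populated os.environ.
    """
    if os.getenv("OPENAI_API_KEY"):
        return "openai"
    if os.getenv("OPENROUTER_API_KEY"):
        return "openrouter"
    return "none"

if __name__ == "__main__":
    print(pick_api_backend())
```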
## Usage

The fastest way to get started is to use the Streamlit web interface:

```bash
# Navigate to the project directory
cd community_usecase/new\ int/

# Start the web application
streamlit run app.py
```
This will open a browser window with the Interview Preparation Assistant interface. The web interface provides three main functions:

- **Research Company**: Click to generate a comprehensive report about your target company.
- **Generate Questions**: Click to create interview questions tailored to your role and company.
- **Create Preparation Plan**: Click to receive a detailed day-by-day preparation guide.
## Command-Line Usage

You can also run specific functions from the command line:

```bash
# Run company research
python -c "from main import research_company; result = research_company('Google', detailed=True); print(result['answer'])"

# Generate interview questions
python -c "from main import generate_interview_questions; result = generate_interview_questions('Machine Learning Engineer', 'Google'); print(result['answer'])"

# Create a preparation plan
python -c "from main import create_interview_prep_plan; result = create_interview_prep_plan('Machine Learning Engineer', 'Google'); print(result['answer'])"
```
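For anything longer than a one-liner, a small wrapper script can save each result to disk. The helper below is hypothetical; only the entry-point names and the `result['answer']` shape come from the commands above:

```python
from pathlib import Path
from typing import Callable

def run_and_save(task: Callable[..., dict], out_path: str, *args, **kwargs) -> str:
    """Run one of the main.py entry points and save its 'answer' to a file."""
    result = task(*args, **kwargs)   # e.g. research_company('Google', detailed=True)
    answer = result["answer"]        # each entry point returns a dict with an 'answer' key
    Path(out_path).write_text(answer, encoding="utf-8")
    return answer
```

For example: `run_and_save(research_company, "google_research.md", "Google", detailed=True)`.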
You can view logs in real time in the "System Logs" tab of the web interface to monitor the system while it runs.
## Customization

You can adjust the following parameters in `main.py`:

- **Round Limit**: Change the conversation round limit by modifying the `round_limit` parameter in function calls (default: 5).
- **Model Selection**: Edit the model configuration in `construct_interview_assistant()` to use different models.
- **Output Directory**: Change `INTERVIEW_PREP_DIR` to customize where results are stored.
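These tunables could be grouped into a single settings object; a sketch follows (the field names mirror the list above, but the `model` default is an assumption, and `main.py` may organize this differently):

```python
from dataclasses import dataclass

@dataclass
class AssistantSettings:
    """Illustrative bundle of the tunables listed above."""
    round_limit: int = 5                # conversation round limit (README default)
    model: str = "gpt-4o"               # model name here is an assumption
    output_dir: str = "interview_prep"  # mirrors INTERVIEW_PREP_DIR
```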
In addition to API keys, you can customize behavior with these environment variables:

- `LOG_LEVEL`: Set to `DEBUG`, `INFO`, `WARNING`, or `ERROR` to control logging verbosity.

## Troubleshooting

- **API Key Errors**: Verify that your keys are set correctly in the `.env` file.
- **Model Errors**: Check the model configuration in `construct_interview_assistant()`.
- **Round Limit Not Working**: Make sure the `round_limit` parameter is passed in the function call itself.
- **Memory Errors**: Try lowering the conversation round limit to reduce memory use.
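Honoring `LOG_LEVEL` with the standard `logging` module can be sketched as follows (how `logging_utils.py` actually does it may differ):

```python
import logging
import os

def configure_logging() -> int:
    """Configure root logging from the LOG_LEVEL environment variable.

    Unrecognized or missing values fall back to INFO. Returns the
    numeric level that was applied.
    """
    level_name = os.getenv("LOG_LEVEL", "INFO").upper()
    level = getattr(logging, level_name, logging.INFO)
    logging.basicConfig(level=level, format="%(asctime)s %(levelname)s %(message)s")
    return level
```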
If you encounter issues not covered here, please open an issue on the OWL repository (https://github.com/camel-ai/owl).
## Project Structure

```
community_usecase/new int/
├── app.py               # Streamlit web interface
├── main.py              # Core functionality and API connections
├── config/
│   └── prompts.py       # Prompt templates for different tasks
├── interview_prep/      # Generated interview preparation materials
├── logging_utils.py     # Logging utilities
└── README.md            # This documentation
```
This project is built on top of the CAMEL-AI OWL framework, which is licensed under the Apache License 2.0.
Made with ❤️ for job seekers everywhere.