# Crewai > ## Documentation Index --- # Source: https://docs.crewai.com/en/enterprise/features/agent-repositories.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Agent Repositories > Learn how to use Agent Repositories to share and reuse your agents across teams and projects Agent Repositories allow enterprise users to store, share, and reuse agent definitions across teams and projects. This feature enables organizations to maintain a centralized library of standardized agents, promoting consistency and reducing duplication of effort. Agent Repositories ## Benefits of Agent Repositories * **Standardization**: Maintain consistent agent definitions across your organization * **Reusability**: Create an agent once and use it in multiple crews and projects * **Governance**: Implement organization-wide policies for agent configurations * **Collaboration**: Enable teams to share and build upon each other's work ## Creating and Using Agent Repositories 1. You must have a CrewAI account; you can start with the [free plan](https://app.crewai.com). 2. Create agents with specific roles and goals for your workflows. 3. Configure tools and capabilities for each specialized assistant. 4. Deploy agents across projects via visual interface or API integration. Agent Repositories ### Loading Agents from Repositories You can load agents from repositories in your code and run them locally using the `from_repository` parameter: ```python theme={null} from crewai import Agent # Create an agent by loading it from a repository # The agent is loaded with all its predefined configurations researcher = Agent( from_repository="market-research-agent" ) ``` ### Overriding Repository Settings You can override specific settings from the repository by providing them in the configuration: ```python theme={null} researcher = Agent( from_repository="market-research-agent", goal="Research the latest trends in AI development", # Override the repository goal verbose=True # Add a setting not in the repository ) ``` ### Example: Creating a Crew with Repository Agents ```python theme={null} from crewai import Crew, Agent, Task # Load agents from repositories researcher = Agent( from_repository="market-research-agent" ) writer = Agent( from_repository="content-writer-agent" ) # Create tasks research_task = Task( description="Research the latest trends in AI", agent=researcher ) writing_task = Task( description="Write a comprehensive report based on the research", agent=writer ) # Create the crew crew = Crew( agents=[researcher, writer], tasks=[research_task, writing_task], verbose=True ) # Run the crew result = crew.kickoff() ``` ### Example: Using `kickoff()` with Repository Agents You can also use repository agents directly with the `kickoff()` method for simpler interactions: ```python theme={null} from crewai import Agent from pydantic import BaseModel from typing import List # Define a structured output format class MarketAnalysis(BaseModel): key_trends: List[str] opportunities: List[str] recommendation: str # Load an agent from repository analyst = Agent( from_repository="market-analyst-agent", verbose=True ) # Get a free-form response result = analyst.kickoff("Analyze the AI market in 2025") print(result.raw) # Access the raw response # Get structured output structured_result = analyst.kickoff( "Provide a structured analysis of the AI market in 2025", response_format=MarketAnalysis ) # Access structured data
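# structured_result.pydantic holds the parsed MarketAnalysis instance when a response_format is provided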
print(f"Key Trends: {structured_result.pydantic.key_trends}") print(f"Recommendation: {structured_result.pydantic.recommendation}") ``` ## Best Practices 1. **Naming Convention**: Use clear, descriptive names for your repository agents 2. **Documentation**: Include comprehensive descriptions for each agent 3. **Tool Management**: Ensure that tools referenced by repository agents are available in your environment 4. **Access Control**: Manage permissions to ensure only authorized team members can modify repository agents ## Organization Management To switch between organizations or see your current organization, use the CrewAI CLI: ```bash theme={null} # View current organization crewai org current # Switch to a different organization crewai org switch # List all available organizations crewai org list ``` When loading agents from repositories, you must be authenticated and switched to the correct organization. If you receive errors, check your authentication status and organization settings using the CLI commands above. --- # Source: https://docs.crewai.com/en/concepts/agents.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Agents > Detailed guide on creating and managing agents within the CrewAI framework. ## Overview of an Agent In the CrewAI framework, an `Agent` is an autonomous unit that can: * Perform specific tasks * Make decisions based on its role and goal * Use tools to accomplish objectives * Communicate and collaborate with other agents * Maintain memory of interactions * Delegate tasks when allowed Think of an agent as a specialized team member with specific skills, expertise, and responsibilities. For example, a `Researcher` agent might excel at gathering and analyzing information, while a `Writer` agent might be better at creating content. CrewAI AMP includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time. Visual Agent Builder Screenshot The Visual Agent Builder enables: * Intuitive agent configuration with form-based interfaces * Real-time testing and validation * Template library with pre-configured agent types * Easy customization of agent attributes and behaviors ## Agent Attributes | Attribute | Parameter | Type | Description | | :-------------------------------------- | :----------------------- | :------------------------------------ | :------------------------------------------------------------------------------------------------------- | | **Role** | `role` | `str` | Defines the agent's function and expertise within the crew. | | **Goal** | `goal` | `str` | The individual objective that guides the agent's decision-making. | | **Backstory** | `backstory` | `str` | Provides context and personality to the agent, enriching interactions. | | **LLM** *(optional)* | `llm` | `Union[str, LLM, Any]` | Language model that powers the agent. Defaults to the model specified in `OPENAI_MODEL_NAME` or "gpt-4". | | **Tools** *(optional)* | `tools` | `List[BaseTool]` | Capabilities or functions available to the agent. Defaults to an empty list. | | **Function Calling LLM** *(optional)* | `function_calling_llm` | `Optional[Any]` | Language model for tool calling, overrides crew's LLM if specified. | | **Max Iterations** *(optional)* | `max_iter` | `int` | Maximum iterations before the agent must provide its best answer. Default is 20. 
| | **Max RPM** *(optional)* | `max_rpm` | `Optional[int]` | Maximum requests per minute to avoid rate limits. | | **Max Execution Time** *(optional)* | `max_execution_time` | `Optional[int]` | Maximum time (in seconds) for task execution. | | **Verbose** *(optional)* | `verbose` | `bool` | Enable detailed execution logs for debugging. Default is False. | | **Allow Delegation** *(optional)* | `allow_delegation` | `bool` | Allow the agent to delegate tasks to other agents. Default is False. | | **Step Callback** *(optional)* | `step_callback` | `Optional[Any]` | Function called after each agent step, overrides crew callback. | | **Cache** *(optional)* | `cache` | `bool` | Enable caching for tool usage. Default is True. | | **System Template** *(optional)* | `system_template` | `Optional[str]` | Custom system prompt template for the agent. | | **Prompt Template** *(optional)* | `prompt_template` | `Optional[str]` | Custom prompt template for the agent. | | **Response Template** *(optional)* | `response_template` | `Optional[str]` | Custom response template for the agent. | | **Allow Code Execution** *(optional)* | `allow_code_execution` | `Optional[bool]` | Enable code execution for the agent. Default is False. | | **Max Retry Limit** *(optional)* | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. | | **Respect Context Window** *(optional)* | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. | | **Code Execution Mode** *(optional)* | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. | | **Multimodal** *(optional)* | `multimodal` | `bool` | Whether the agent supports multimodal capabilities. Default is False. | | **Inject Date** *(optional)* | `inject_date` | `bool` | Whether to automatically inject the current date into tasks. Default is False. | | **Date Format** *(optional)* | `date_format` | `str` | Format string for date when inject\_date is enabled. Default is "%Y-%m-%d" (ISO format). | | **Reasoning** *(optional)* | `reasoning` | `bool` | Whether the agent should reflect and create a plan before executing a task. Default is False. | | **Max Reasoning Attempts** *(optional)* | `max_reasoning_attempts` | `Optional[int]` | Maximum number of reasoning attempts before executing the task. If None, will try until ready. | | **Embedder** *(optional)* | `embedder` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. | | **Knowledge Sources** *(optional)* | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. | | **Use System Prompt** *(optional)* | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. | ## Creating Agents There are two ways to create agents in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**. ### YAML Configuration (Recommended) Using YAML configuration provides a cleaner, more maintainable way to define agents. We strongly recommend using this approach in your CrewAI projects. After creating your CrewAI project as outlined in the [Installation](/en/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements. 
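If you have not scaffolded the project yet, the CrewAI CLI can generate this layout for you. A minimal sketch (the project name `latest_ai_development` simply matches the paths used below):

```bash theme={null}
# Creates the project skeleton, including src/latest_ai_development/config/agents.yaml
crewai create crew latest_ai_development
```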
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew: ```python Code theme={null} crew.kickoff(inputs={'topic': 'AI Agents'}) ``` Here's an example of how to configure agents using YAML: ```yaml agents.yaml theme={null} # src/latest_ai_development/config/agents.yaml researcher: role: > {topic} Senior Data Researcher goal: > Uncover cutting-edge developments in {topic} backstory: > You're a seasoned researcher with a knack for uncovering the latest developments in {topic}. Known for your ability to find the most relevant information and present it in a clear and concise manner. reporting_analyst: role: > {topic} Reporting Analyst goal: > Create detailed reports based on {topic} data analysis and research findings backstory: > You're a meticulous analyst with a keen eye for detail. You're known for your ability to turn complex data into clear and concise reports, making it easy for others to understand and act on the information you provide. ``` To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`: ```python Code theme={null} # src/latest_ai_development/crew.py from crewai import Agent, Crew, Process from crewai.project import CrewBase, agent, crew from crewai_tools import SerperDevTool @CrewBase class LatestAiDevelopmentCrew(): """LatestAiDevelopment crew""" agents_config = "config/agents.yaml" @agent def researcher(self) -> Agent: return Agent( config=self.agents_config['researcher'], # type: ignore[index] verbose=True, tools=[SerperDevTool()] ) @agent def reporting_analyst(self) -> Agent: return Agent( config=self.agents_config['reporting_analyst'], # type: ignore[index] verbose=True ) ``` The names you use in your YAML files (`agents.yaml`) should match the method names in your Python code. ### Direct Code Definition You can create agents directly in code by instantiating the `Agent` class. 
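Only `role`, `goal`, and `backstory` are required, so a minimal agent can be as short as the following sketch (the role, goal, and backstory values are illustrative):

```python Code theme={null}
from crewai import Agent

# Minimal agent: only the three required attributes are set;
# every other parameter falls back to its default value.
summary_writer = Agent(
    role="Technical Writer",
    goal="Summarize research findings into clear, concise prose",
    backstory="You specialize in turning dense technical material into accessible summaries."
)
```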
Here's a comprehensive example showing all available parameters: ```python Code theme={null} from crewai import Agent from crewai_tools import SerperDevTool # Create an agent with all available parameters agent = Agent( role="Senior Data Scientist", goal="Analyze and interpret complex datasets to provide actionable insights", backstory="With over 10 years of experience in data science and machine learning, " "you excel at finding patterns in complex datasets.", llm="gpt-4", # Default: OPENAI_MODEL_NAME or "gpt-4" function_calling_llm=None, # Optional: Separate LLM for tool calling verbose=False, # Default: False allow_delegation=False, # Default: False max_iter=20, # Default: 20 iterations max_rpm=None, # Optional: Rate limit for API calls max_execution_time=None, # Optional: Maximum execution time in seconds max_retry_limit=2, # Default: 2 retries on error allow_code_execution=False, # Default: False code_execution_mode="safe", # Default: "safe" (options: "safe", "unsafe") respect_context_window=True, # Default: True use_system_prompt=True, # Default: True multimodal=False, # Default: False inject_date=False, # Default: False date_format="%Y-%m-%d", # Default: ISO format reasoning=False, # Default: False max_reasoning_attempts=None, # Default: None tools=[SerperDevTool()], # Optional: List of tools knowledge_sources=None, # Optional: List of knowledge sources embedder=None, # Optional: Custom embedder configuration system_template=None, # Optional: Custom system prompt template prompt_template=None, # Optional: Custom prompt template response_template=None, # Optional: Custom response template step_callback=None, # Optional: Callback function for monitoring ) ``` Let's break down some key parameter combinations for common use cases: #### Basic Research Agent ```python Code theme={null} research_agent = Agent( role="Research Analyst", goal="Find and summarize information about specific topics", backstory="You are an experienced researcher with attention to detail", tools=[SerperDevTool()], verbose=True # Enable logging for debugging ) ``` #### Code Development Agent ```python Code theme={null} dev_agent = Agent( role="Senior Python Developer", goal="Write and debug Python code", backstory="Expert Python developer with 10 years of experience", allow_code_execution=True, code_execution_mode="safe", # Uses Docker for safety max_execution_time=300, # 5-minute timeout max_retry_limit=3 # More retries for complex code tasks ) ``` #### Long-Running Analysis Agent ```python Code theme={null} analysis_agent = Agent( role="Data Analyst", goal="Perform deep analysis of large datasets", backstory="Specialized in big data analysis and pattern recognition", memory=True, respect_context_window=True, max_rpm=10, # Limit API calls function_calling_llm="gpt-4o-mini" # Cheaper model for tool calls ) ``` #### Custom Template Agent ```python Code theme={null} custom_agent = Agent( role="Customer Service Representative", goal="Assist customers with their inquiries", backstory="Experienced in customer support with a focus on satisfaction", system_template="""<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>""", prompt_template="""<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>""", response_template="""<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|>""", ) ``` #### Date-Aware Agent with Reasoning ```python Code theme={null} strategic_agent = Agent( role="Market Analyst", goal="Track market movements with precise date references and strategic planning", 
backstory="Expert in time-sensitive financial analysis and strategic reporting", inject_date=True, # Automatically inject current date into tasks date_format="%B %d, %Y", # Format as "May 21, 2025" reasoning=True, # Enable strategic planning max_reasoning_attempts=2, # Limit planning iterations verbose=True ) ``` #### Reasoning Agent ```python Code theme={null} reasoning_agent = Agent( role="Strategic Planner", goal="Analyze complex problems and create detailed execution plans", backstory="Expert strategic planner who methodically breaks down complex challenges", reasoning=True, # Enable reasoning and planning max_reasoning_attempts=3, # Limit reasoning attempts max_iter=30, # Allow more iterations for complex planning verbose=True ) ``` #### Multimodal Agent ```python Code theme={null} multimodal_agent = Agent( role="Visual Content Analyst", goal="Analyze and process both text and visual content", backstory="Specialized in multimodal analysis combining text and image understanding", multimodal=True, # Enable multimodal capabilities verbose=True ) ``` ### Parameter Details #### Critical Parameters * `role`, `goal`, and `backstory` are required and shape the agent's behavior * `llm` determines the language model used (default: OpenAI's GPT-4) #### Memory and Context * `memory`: Enable to maintain conversation history * `respect_context_window`: Prevents token limit issues * `knowledge_sources`: Add domain-specific knowledge bases #### Execution Control * `max_iter`: Maximum attempts before giving best answer * `max_execution_time`: Timeout in seconds * `max_rpm`: Rate limiting for API calls * `max_retry_limit`: Retries on error #### Code Execution * `allow_code_execution`: Must be True to run code * `code_execution_mode`: * `"safe"`: Uses Docker (recommended for production) * `"unsafe"`: Direct execution (use only in trusted environments) This runs a default Docker image. If you want to configure the docker image, the checkout the Code Interpreter Tool in the tools section. Add the code interpreter tool as a tool in the agent as a tool parameter. #### Advanced Features * `multimodal`: Enable multimodal capabilities for processing text and visual content * `reasoning`: Enable agent to reflect and create plans before executing tasks * `inject_date`: Automatically inject current date into task descriptions #### Templates * `system_template`: Defines agent's core behavior * `prompt_template`: Structures input format * `response_template`: Formats agent responses When using custom templates, ensure that both `system_template` and `prompt_template` are defined. The `response_template` is optional but recommended for consistent output formatting. When using custom templates, you can use variables like `{role}`, `{goal}`, and `{backstory}` in your templates. These will be automatically populated during execution. ## Agent Tools Agents can be equipped with various tools to enhance their capabilities. 
CrewAI supports tools from: * [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) * [LangChain Tools](https://python.langchain.com/docs/integrations/tools) Here's how to add tools to an agent: ```python Code theme={null} from crewai import Agent from crewai_tools import SerperDevTool, WikipediaTools # Create tools search_tool = SerperDevTool() wiki_tool = WikipediaTools() # Add tools to agent researcher = Agent( role="AI Technology Researcher", goal="Research the latest AI developments", tools=[search_tool, wiki_tool], verbose=True ) ``` ## Agent Memory and Context Agents can maintain memory of their interactions and use context from previous tasks. This is particularly useful for complex workflows where information needs to be retained across multiple tasks. ```python Code theme={null} from crewai import Agent analyst = Agent( role="Data Analyst", goal="Analyze and remember complex data patterns", memory=True, # Enable memory verbose=True ) ``` When `memory` is enabled, the agent will maintain context across multiple interactions, improving its ability to handle complex, multi-step tasks. ## Context Window Management CrewAI includes sophisticated automatic context window management to handle situations where conversations exceed the language model's token limits. This powerful feature is controlled by the `respect_context_window` parameter. ### How Context Window Management Works When an agent's conversation history grows too large for the LLM's context window, CrewAI automatically detects this situation and can either: 1. **Automatically summarize content** (when `respect_context_window=True`) 2. **Stop execution with an error** (when `respect_context_window=False`) ### Automatic Context Handling (`respect_context_window=True`) This is the **default and recommended setting** for most use cases. When enabled, CrewAI will: ```python Code theme={null} # Agent with automatic context management (default) smart_agent = Agent( role="Research Analyst", goal="Analyze large documents and datasets", backstory="Expert at processing extensive information", respect_context_window=True, # 🔑 Default: auto-handle context limits verbose=True ) ``` **What happens when context limits are exceeded:** * ⚠️ **Warning message**: `"Context length exceeded. Summarizing content to fit the model context window."` * 🔄 **Automatic summarization**: CrewAI intelligently summarizes the conversation history * ✅ **Continued execution**: Task execution continues seamlessly with the summarized context * 📝 **Preserved information**: Key information is retained while reducing token count ### Strict Context Limits (`respect_context_window=False`) When you need precise control and prefer execution to stop rather than lose any information: ```python Code theme={null} # Agent with strict context limits strict_agent = Agent( role="Legal Document Reviewer", goal="Provide precise legal analysis without information loss", backstory="Legal expert requiring complete context for accurate analysis", respect_context_window=False, # ❌ Stop execution on context limit verbose=True ) ``` **What happens when context limits are exceeded:** * ❌ **Error message**: `"Context length exceeded. 
Consider using smaller text or RAG tools from crewai_tools."` * 🛑 **Execution stops**: Task execution halts immediately * 🔧 **Manual intervention required**: You need to modify your approach ### Choosing the Right Setting #### Use `respect_context_window=True` (Default) when: * **Processing large documents** that might exceed context limits * **Long-running conversations** where some summarization is acceptable * **Research tasks** where general context is more important than exact details * **Prototyping and development** where you want robust execution ```python Code theme={null} # Perfect for document processing document_processor = Agent( role="Document Analyst", goal="Extract insights from large research papers", backstory="Expert at analyzing extensive documentation", respect_context_window=True, # Handle large documents gracefully max_iter=50, # Allow more iterations for complex analysis verbose=True ) ``` #### Use `respect_context_window=False` when: * **Precision is critical** and information loss is unacceptable * **Legal or medical tasks** requiring complete context * **Code review** where missing details could introduce bugs * **Financial analysis** where accuracy is paramount ```python Code theme={null} # Perfect for precision tasks precision_agent = Agent( role="Code Security Auditor", goal="Identify security vulnerabilities in code", backstory="Security expert requiring complete code context", respect_context_window=False, # Prefer failure over incomplete analysis max_retry_limit=1, # Fail fast on context issues verbose=True ) ``` ### Alternative Approaches for Large Data When dealing with very large datasets, consider these strategies: #### 1. Use RAG Tools ```python Code theme={null} from crewai_tools import RagTool # Create RAG tool for large document processing rag_tool = RagTool() rag_agent = Agent( role="Research Assistant", goal="Query large knowledge bases efficiently", backstory="Expert at using RAG tools for information retrieval", tools=[rag_tool], # Use RAG instead of large context windows respect_context_window=True, verbose=True ) ``` #### 2. Use Knowledge Sources ```python Code theme={null} # Use knowledge sources instead of large prompts knowledge_agent = Agent( role="Knowledge Expert", goal="Answer questions using curated knowledge", backstory="Expert at leveraging structured knowledge sources", knowledge_sources=[your_knowledge_sources], # Pre-processed knowledge respect_context_window=True, verbose=True ) ``` ### Context Window Best Practices 1. **Monitor Context Usage**: Enable `verbose=True` to see context management in action 2. **Design for Efficiency**: Structure tasks to minimize context accumulation 3. **Use Appropriate Models**: Choose LLMs with context windows suitable for your tasks 4. **Test Both Settings**: Try both `True` and `False` to see which works better for your use case 5. 
**Combine with RAG**: Use RAG tools for very large datasets instead of relying solely on context windows ### Troubleshooting Context Issues **If you're getting context limit errors:** ```python Code theme={null} # Quick fix: Enable automatic handling agent.respect_context_window = True # Better solution: Use RAG tools for large data from crewai_tools import RagTool agent.tools = [RagTool()] # Alternative: Break tasks into smaller pieces # Or use knowledge sources instead of large prompts ``` **If automatic summarization loses important information:** ```python Code theme={null} # Disable auto-summarization and use RAG instead agent = Agent( role="Detailed Analyst", goal="Maintain complete information accuracy", backstory="Expert requiring full context", respect_context_window=False, # No summarization tools=[RagTool()], # Use RAG for large data verbose=True ) ``` The context window management feature works automatically in the background. You don't need to call any special functions - just set `respect_context_window` to your preferred behavior and CrewAI handles the rest! ## Direct Agent Interaction with `kickoff()` Agents can be used directly without going through a task or crew workflow using the `kickoff()` method. This provides a simpler way to interact with an agent when you don't need the full crew orchestration capabilities. ### How `kickoff()` Works The `kickoff()` method allows you to send messages directly to an agent and get a response, similar to how you would interact with an LLM but with all the agent's capabilities (tools, reasoning, etc.). ```python Code theme={null} from crewai import Agent from crewai_tools import SerperDevTool # Create an agent researcher = Agent( role="AI Technology Researcher", goal="Research the latest AI developments", tools=[SerperDevTool()], verbose=True ) # Use kickoff() to interact directly with the agent result = researcher.kickoff("What are the latest developments in language models?") # Access the raw response print(result.raw) ``` ### Parameters and Return Values | Parameter | Type | Description | | :---------------- | :--------------------------------- | :------------------------------------------------------------------------ | | `messages` | `Union[str, List[Dict[str, str]]]` | Either a string query or a list of message dictionaries with role/content | | `response_format` | `Optional[Type[Any]]` | Optional Pydantic model for structured output | The method returns a `LiteAgentOutput` object with the following properties: * `raw`: String containing the raw output text * `pydantic`: Parsed Pydantic model (if a `response_format` was provided) * `agent_role`: Role of the agent that produced the output * `usage_metrics`: Token usage metrics for the execution ### Structured Output You can get structured output by providing a Pydantic model as the `response_format`: ```python Code theme={null} from pydantic import BaseModel from typing import List class ResearchFindings(BaseModel): main_points: List[str] key_technologies: List[str] future_predictions: str # Get structured output result = researcher.kickoff( "Summarize the latest developments in AI for 2025", response_format=ResearchFindings ) # Access structured data print(result.pydantic.main_points) print(result.pydantic.future_predictions) ``` ### Multiple Messages You can also provide a conversation history as a list of message dictionaries: ```python Code theme={null} messages = [ {"role": "user", "content": "I need information about large language models"}, {"role": "assistant", "content": 
"I'd be happy to help with that! What specifically would you like to know?"}, {"role": "user", "content": "What are the latest developments in 2025?"} ] result = researcher.kickoff(messages) ``` ### Async Support An asynchronous version is available via `kickoff_async()` with the same parameters: ```python Code theme={null} import asyncio async def main(): result = await researcher.kickoff_async("What are the latest developments in AI?") print(result.raw) asyncio.run(main()) ``` The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler execution flow while preserving all of the agent's configuration (role, goal, backstory, tools, etc.). ## Important Considerations and Best Practices ### Security and Code Execution * When using `allow_code_execution`, be cautious with user input and always validate it * Use `code_execution_mode: "safe"` (Docker) in production environments * Consider setting appropriate `max_execution_time` limits to prevent infinite loops ### Performance Optimization * Use `respect_context_window: true` to prevent token limit issues * Set appropriate `max_rpm` to avoid rate limiting * Enable `cache: true` to improve performance for repetitive tasks * Adjust `max_iter` and `max_retry_limit` based on task complexity ### Memory and Context Management * Leverage `knowledge_sources` for domain-specific information * Configure `embedder` when using custom embedding models * Use custom templates (`system_template`, `prompt_template`, `response_template`) for fine-grained control over agent behavior ### Advanced Features * Enable `reasoning: true` for agents that need to plan and reflect before executing complex tasks * Set appropriate `max_reasoning_attempts` to control planning iterations (None for unlimited attempts) * Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks * Customize the date format with `date_format` using standard Python datetime format codes * Enable `multimodal: true` for agents that need to process both text and visual content ### Agent Collaboration * Enable `allow_delegation: true` when agents need to work together * Use `step_callback` to monitor and log agent interactions * Consider using different LLMs for different purposes: * Main `llm` for complex reasoning * `function_calling_llm` for efficient tool usage ### Date Awareness and Reasoning * Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks * Customize the date format with `date_format` using standard Python datetime format codes * Valid format codes include: %Y (year), %m (month), %d (day), %B (full month name), etc. * Invalid date formats will be logged as warnings and will not modify the task description * Enable `reasoning: true` for complex tasks that benefit from upfront planning and reflection ### Model Compatibility * Set `use_system_prompt: false` for older models that don't support system messages * Ensure your chosen `llm` supports the features you need (like function calling) ## Troubleshooting Common Issues 1. **Rate Limiting**: If you're hitting API rate limits: * Implement appropriate `max_rpm` * Use caching for repetitive operations * Consider batching requests 2. **Context Window Errors**: If you're exceeding context limits: * Enable `respect_context_window` * Use more efficient prompts * Clear agent memory periodically 3. **Code Execution Issues**: If code execution fails: * Verify Docker is installed for safe mode * Check execution permissions * Review code sandbox settings 4. 
**Memory Issues**: If agent responses seem inconsistent: * Check knowledge source configuration * Review conversation history management Remember that agents are most effective when configured according to their specific use case. Take time to understand your requirements and adjust these parameters accordingly. --- # Source: https://docs.crewai.com/en/tools/ai-ml/aimindtool.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # AI Mind Tool > The `AIMindTool` is designed to query data sources in natural language. # `AIMindTool` ## Description The `AIMindTool` is a wrapper around [AI-Minds](https://mindsdb.com/minds) provided by [MindsDB](https://mindsdb.com/). It allows you to query data sources in natural language by simply configuring their connection parameters. This tool is useful when you need answers to questions from your data stored in various data sources including PostgreSQL, MySQL, MariaDB, ClickHouse, Snowflake, and Google BigQuery. Minds are AI systems that work similarly to large language models (LLMs) but go beyond by answering any question from any data. This is accomplished by: * Selecting the most relevant data for an answer using parametric search * Understanding the meaning and providing responses within the correct context through semantic search * Delivering precise answers by analyzing data and using machine learning (ML) models ## Installation To incorporate this tool into your project, you need to install the Minds SDK: ```shell theme={null} uv add minds-sdk ``` ## Steps to Get Started To effectively use the `AIMindTool`, follow these steps: 1. **Package Installation**: Confirm that the `crewai[tools]` and `minds-sdk` packages are installed in your Python environment. 2. **API Key Acquisition**: Sign up for a Minds account [here](https://mdb.ai/register), and obtain an API key. 3. **Environment Configuration**: Store your obtained API key in an environment variable named `MINDS_API_KEY` to facilitate its use by the tool. ## Example The following example demonstrates how to initialize the tool and execute a query: ```python Code theme={null} from crewai_tools import AIMindTool # Initialize the AIMindTool aimind_tool = AIMindTool( datasources=[ { "description": "house sales data", "engine": "postgres", "connection_data": { "user": "demo_user", "password": "demo_password", "host": "samples.mindsdb.com", "port": 5432, "database": "demo", "schema": "demo_data" }, "tables": ["house_sales"] } ] ) # Run a natural language query result = aimind_tool.run("How many 3 bedroom houses were sold in 2008?") print(result) ``` ## Parameters The `AIMindTool` accepts the following parameters: * **api\_key**: Optional. Your Minds API key. If not provided, it will be read from the `MINDS_API_KEY` environment variable. * **datasources**: A list of dictionaries, each containing the following keys: * **description**: A description of the data contained in the datasource. * **engine**: The engine (or type) of the datasource. * **connection\_data**: A dictionary containing the connection parameters for the datasource. * **tables**: A list of tables that the data source will use. This is optional and can be omitted if all tables in the data source are to be used. A list of supported data sources and their connection parameters can be found [here](https://docs.mdb.ai/docs/data_sources). 
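If you prefer not to rely on the `MINDS_API_KEY` environment variable, the key can also be passed explicitly when constructing the tool. A brief sketch (the credential value is a placeholder):

```python Code theme={null}
from crewai_tools import AIMindTool

aimind_tool = AIMindTool(
    api_key="your-minds-api-key",  # placeholder; omit to fall back to MINDS_API_KEY
    datasources=[
        {
            "description": "house sales data",
            "engine": "postgres",
            "connection_data": {
                "user": "demo_user",
                "password": "demo_password",
                "host": "samples.mindsdb.com",
                "port": 5432,
                "database": "demo",
                "schema": "demo_data"
            },
            "tables": ["house_sales"]
        }
    ]
)
```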
## Agent Integration Example Here's how to integrate the `AIMindTool` with a CrewAI agent: ```python Code theme={null} from crewai import Agent from crewai.project import agent from crewai_tools import AIMindTool # Initialize the tool aimind_tool = AIMindTool( datasources=[ { "description": "sales data", "engine": "postgres", "connection_data": { "user": "your_user", "password": "your_password", "host": "your_host", "port": 5432, "database": "your_db", "schema": "your_schema" }, "tables": ["sales"] } ] ) # Define an agent with the AIMindTool @agent def data_analyst(self) -> Agent: return Agent( config=self.agents_config["data_analyst"], allow_delegation=False, tools=[aimind_tool] ) ``` ## Conclusion The `AIMindTool` provides a powerful way to query your data sources using natural language, making it easier to extract insights without writing complex SQL queries. By connecting to various data sources and leveraging AI-Minds technology, this tool enables agents to access and analyze data efficiently. --- # Source: https://docs.crewai.com/en/tools/automation/apifyactorstool.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Apify Actors > `ApifyActorsTool` lets you call Apify Actors to provide your CrewAI workflows with web scraping, crawling, data extraction, and web automation capabilities. # `ApifyActorsTool` Integrate [Apify Actors](https://apify.com/actors) into your CrewAI workflows. ## Description The `ApifyActorsTool` connects [Apify Actors](https://apify.com/actors), cloud-based programs for web scraping and automation, to your CrewAI workflows. Use any of the 4,000+ Actors on [Apify Store](https://apify.com/store) for use cases such as extracting data from social media, search engines, online maps, e-commerce sites, travel portals, or general websites. For details, see the [Apify CrewAI integration](https://docs.apify.com/platform/integrations/crewai) in Apify documentation. ## Steps to get started Install `crewai[tools]` and `langchain-apify` using pip: `pip install 'crewai[tools]' langchain-apify`. Sign up for [Apify Console](https://console.apify.com/) and get your [Apify API token](https://console.apify.com/settings/integrations). Set your Apify API token as the `APIFY_API_TOKEN` environment variable to enable the tool's functionality. ## Usage example Use the `ApifyActorsTool` manually to run the [RAG Web Browser Actor](https://apify.com/apify/rag-web-browser) to perform a web search: ```python theme={null} from crewai_tools import ApifyActorsTool # Initialize the tool with an Apify Actor tool = ApifyActorsTool(actor_name="apify/rag-web-browser") # Run the tool with input parameters results = tool.run(run_input={"query": "What is CrewAI?", "maxResults": 5}) # Process the results for result in results: print(f"URL: {result['metadata']['url']}") print(f"Content: {result.get('markdown', 'N/A')[:100]}...") ``` ### Expected output Here is the output from running the code above: ```text theme={null} URL: https://www.example.com/crewai-intro Content: CrewAI is a framework for building AI-powered workflows... URL: https://docs.crewai.com/ Content: Official documentation for CrewAI... ``` The `ApifyActorsTool` automatically fetches the Actor definition and input schema from Apify using the provided `actor_name` and then constructs the tool description and argument schema.
This means you need to specify only a valid `actor_name`, and the tool handles the rest when used with agents—no need to specify the `run_input`. Here's how it works: ```python theme={null} from crewai import Agent from crewai_tools import ApifyActorsTool rag_browser = ApifyActorsTool(actor_name="apify/rag-web-browser") agent = Agent( role="Research Analyst", goal="Find and summarize information about specific topics", backstory="You are an experienced researcher with attention to detail", tools=[rag_browser], ) ``` You can run other Actors from [Apify Store](https://apify.com/store) simply by changing the `actor_name` and, when using it manually, adjusting the `run_input` based on the Actor input schema. For an example of usage with agents, see the [CrewAI Actor template](https://apify.com/templates/python-crewai). ## Configuration The `ApifyActorsTool` requires these inputs to work: * **`actor_name`** The ID of the Apify Actor to run, e.g., `"apify/rag-web-browser"`. Browse all Actors on [Apify Store](https://apify.com/store). * **`run_input`** A dictionary of input parameters for the Actor when running the tool manually. * For example, for the `apify/rag-web-browser` Actor: `{"query": "search term", "maxResults": 5}` * See the Actor's [input schema](https://apify.com/apify/rag-web-browser/input-schema) for the list of input parameters. ## Resources * **[Apify](https://apify.com/)**: Explore the Apify platform. * **[How to build an AI agent on Apify](https://blog.apify.com/how-to-build-an-ai-agent/)** - A complete step-by-step guide to creating, publishing, and monetizing AI agents on the Apify platform. * **[RAG Web Browser Actor](https://apify.com/apify/rag-web-browser)**: A popular Actor for web search for LLMs. * **[CrewAI Integration Guide](https://docs.apify.com/platform/integrations/crewai)**: Follow the official guide for integrating Apify and CrewAI. --- # Source: https://docs.crewai.com/en/observability/arize-phoenix.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Arize Phoenix > Arize Phoenix integration for CrewAI with OpenTelemetry and OpenInference # Arize Phoenix Integration This guide demonstrates how to integrate **Arize Phoenix** with **CrewAI** using OpenTelemetry via the [OpenInference](https://github.com/openinference/openinference) SDK. By the end of this guide, you will be able to trace your CrewAI agents and easily debug your agents. > **What is Arize Phoenix?** [Arize Phoenix](https://phoenix.arize.com) is an LLM observability platform that provides tracing and evaluation for AI applications. [![Watch a Video Demo of Our Integration with Phoenix](https://storage.googleapis.com/arize-assets/fixtures/setup_crewai.png)](https://www.youtube.com/watch?v=Yc5q3l6F7Ww) ## Get Started We'll walk through a simple example of using CrewAI and integrating it with Arize Phoenix via OpenTelemetry using OpenInference. You can also access this guide on [Google Colab](https://colab.research.google.com/github/Arize-ai/phoenix/blob/main/tutorials/tracing/crewai_tracing_tutorial.ipynb). ### Step 1: Install Dependencies ```bash theme={null} pip install openinference-instrumentation-crewai crewai crewai-tools arize-phoenix-otel ``` ### Step 2: Set Up Environment Variables Setup Phoenix Cloud API keys and configure OpenTelemetry to send traces to Phoenix. Phoenix Cloud is a hosted version of Arize Phoenix, but it is not required to use this integration. 
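If you are running a self-hosted Phoenix instance instead of Phoenix Cloud, one common setup (an assumption about your environment, not a required step for this guide) is to start Phoenix locally and point the collector endpoint at it:

```bash theme={null}
# Run a local Phoenix instance on its default port (6006)
docker run -p 6006:6006 arizephoenix/phoenix:latest

# Point the exporter at the local instance instead of Phoenix Cloud
export PHOENIX_COLLECTOR_ENDPOINT="http://localhost:6006"
```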
You can get your free Serper API key [here](https://serper.dev/). ```python theme={null} import os from getpass import getpass # Get your Phoenix Cloud credentials PHOENIX_API_KEY = getpass("🔑 Enter your Phoenix Cloud API Key: ") # Get API keys for services OPENAI_API_KEY = getpass("🔑 Enter your OpenAI API key: ") SERPER_API_KEY = getpass("🔑 Enter your Serper API key: ") # Set environment variables os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={PHOENIX_API_KEY}" os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com" # Phoenix Cloud, change this to your own endpoint if you are using a self-hosted instance os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY os.environ["SERPER_API_KEY"] = SERPER_API_KEY ``` ### Step 3: Initialize OpenTelemetry with Phoenix Initialize the OpenInference OpenTelemetry instrumentation SDK to start capturing traces and send them to Phoenix. ```python theme={null} from phoenix.otel import register tracer_provider = register( project_name="crewai-tracing-demo", auto_instrument=True, ) ``` ### Step 4: Create a CrewAI Application We'll create a CrewAI application where two agents collaborate to research and write a blog post about AI advancements. ```python theme={null} from crewai import Agent, Crew, Process, Task from crewai_tools import SerperDevTool from openinference.instrumentation.crewai import CrewAIInstrumentor from phoenix.otel import register # setup monitoring for your crew tracer_provider = register( endpoint="http://localhost:6006/v1/traces") CrewAIInstrumentor().instrument(skip_dep_check=True, tracer_provider=tracer_provider) search_tool = SerperDevTool() # Define your agents with roles and goals researcher = Agent( role="Senior Research Analyst", goal="Uncover cutting-edge developments in AI and data science", backstory="""You work at a leading tech think tank. Your expertise lies in identifying emerging trends. You have a knack for dissecting complex data and presenting actionable insights.""", verbose=True, allow_delegation=False, # You can pass an optional llm attribute specifying what model you wanna use. # llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7), tools=[search_tool], ) writer = Agent( role="Tech Content Strategist", goal="Craft compelling content on tech advancements", backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles. You transform complex concepts into compelling narratives.""", verbose=True, allow_delegation=True, ) # Create tasks for your agents task1 = Task( description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024. Identify key trends, breakthrough technologies, and potential industry impacts.""", expected_output="Full analysis report in bullet points", agent=researcher, ) task2 = Task( description="""Using the insights provided, develop an engaging blog post that highlights the most significant AI advancements. Your post should be informative yet accessible, catering to a tech-savvy audience. Make it sound cool, avoid complex words so it doesn't sound like AI.""", expected_output="Full blog post of at least 4 paragraphs", agent=writer, ) # Instantiate your crew with a sequential process crew = Crew( agents=[researcher, writer], tasks=[task1, task2], verbose=1, process=Process.sequential ) # Get your crew to work! result = crew.kickoff() print("######################") print(result) ``` ### Step 5: View Traces in Phoenix After running the agent, you can view the traces generated by your CrewAI application in Phoenix. 
You should see detailed steps of the agent interactions and LLM calls, which can help you debug and optimize your AI agents. Log into your Phoenix Cloud account and navigate to the project you specified in the `project_name` parameter. You'll see a timeline view of your trace with all the agent interactions, tool usages, and LLM calls. ![Example trace in Phoenix showing agent interactions](https://storage.googleapis.com/arize-assets/fixtures/crewai_traces.png) ### Version Compatibility Information * Python 3.8+ * CrewAI >= 0.86.0 * Arize Phoenix >= 7.0.1 * OpenTelemetry SDK >= 1.31.0 ### References * [Phoenix Documentation](https://docs.arize.com/phoenix/) - Overview of the Phoenix platform. * [CrewAI Documentation](https://docs.crewai.com/) - Overview of the CrewAI framework. * [OpenTelemetry Docs](https://opentelemetry.io/docs/) - OpenTelemetry guide * [OpenInference GitHub](https://github.com/openinference/openinference) - Source code for OpenInference SDK. --- # Source: https://docs.crewai.com/en/tools/search-research/arxivpapertool.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Arxiv Paper Tool > The `ArxivPaperTool` searches arXiv for papers matching a query and optionally downloads PDFs. # `ArxivPaperTool` ## Description The `ArxivPaperTool` queries the arXiv API for academic papers and returns compact, readable results. It can also optionally download PDFs to disk. ## Installation This tool has no special installation beyond `crewai-tools`. ```shell theme={null} uv add crewai-tools ``` No API key is required. This tool uses the public arXiv Atom API. ## Steps to Get Started 1. Initialize the tool. 2. Provide a `search_query` (e.g., "transformer neural network"). 3. Optionally set `max_results` (1–100) and enable PDF downloads in the constructor. ## Example ```python Code theme={null} from crewai import Agent, Task, Crew from crewai_tools import ArxivPaperTool tool = ArxivPaperTool( download_pdfs=False, save_dir="./arxiv_pdfs", use_title_as_filename=True, ) agent = Agent( role="Researcher", goal="Find relevant arXiv papers", backstory="Expert at literature discovery", tools=[tool], verbose=True, ) task = Task( description="Search arXiv for 'transformer neural network' and list top 5 results.", expected_output="A concise list of 5 relevant papers with titles, links, and summaries.", agent=agent, ) crew = Crew(agents=[agent], tasks=[task]) result = crew.kickoff() ``` ### Direct usage (without Agent) ```python Code theme={null} from crewai_tools import ArxivPaperTool tool = ArxivPaperTool( download_pdfs=True, save_dir="./arxiv_pdfs", ) print(tool.run(search_query="mixture of experts", max_results=3)) ``` ## Parameters ### Initialization Parameters * `download_pdfs` (bool, default `False`): Whether to download PDFs. * `save_dir` (str, default `./arxiv_pdfs`): Directory to save PDFs. * `use_title_as_filename` (bool, default `False`): Use paper titles for filenames. ### Run Parameters * `search_query` (str, required): The arXiv search query. * `max_results` (int, default `5`, range 1–100): Number of results. ## Output format The tool returns a human‑readable list of papers with: * Title * Link (abs page) * Snippet/summary (truncated) When `download_pdfs=True`, PDFs are saved to disk and the summary mentions saved files. ## Usage Notes * The tool returns formatted text with key metadata and links. 
* When `download_pdfs=True`, PDFs will be stored in `save_dir`. ## Troubleshooting * If you receive a network timeout, re‑try or reduce `max_results`. * Invalid XML errors indicate an arXiv response parse issue; try a simpler query. * File system errors (e.g., permission denied) may occur when saving PDFs; ensure `save_dir` is writable. ## Related links * arXiv API docs: [https://info.arxiv.org/help/api/index.html](https://info.arxiv.org/help/api/index.html) ## Error Handling * Network issues, invalid XML, and OS errors are handled with informative messages. --- # Source: https://docs.crewai.com/en/enterprise/integrations/asana.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Asana Integration > Team task and project coordination with Asana integration for CrewAI. ## Overview Enable your agents to manage tasks, projects, and team coordination through Asana. Create tasks, update project status, manage assignments, and streamline your team's workflow with AI-powered automation. ## Prerequisites Before using the Asana integration, ensure you have: * A [CrewAI AMP](https://app.crewai.com) account with an active subscription * An Asana account with appropriate permissions * Connected your Asana account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors) ## Setting Up Asana Integration ### 1. Connect Your Asana Account 1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors) 2. Find **Asana** in the Authentication Integrations section 3. Click **Connect** and complete the OAuth flow 4. Grant the necessary permissions for task and project management 5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations) ### 2. Install Required Package ```bash theme={null} uv add crewai-tools ``` ### 3. Environment Variable Setup To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token. ```bash theme={null} export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token" ``` Or add it to your `.env` file: ``` CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token ``` ## Available Actions **Description:** Create a comment in Asana. **Parameters:** * `task` (string, required): Task ID - The ID of the Task the comment will be added to. The comment will be authored by the currently authenticated user. * `text` (string, required): Text (example: "This is a comment."). **Description:** Create a project in Asana. **Parameters:** * `name` (string, required): Name (example: "Stuff to buy"). * `workspace` (string, required): Workspace - Use Connect Portal Workflow Settings to allow users to select which Workspace to create Projects in. Defaults to the user's first Workspace if left blank. * `team` (string, optional): Team - Use Connect Portal Workflow Settings to allow users to select which Team to share this Project with. Defaults to the user's first Team if left blank. * `notes` (string, optional): Notes (example: "These are things we need to purchase."). **Description:** Get a list of projects in Asana. **Parameters:** * `archived` (string, optional): Archived - Choose "true" to show archived projects, "false" to display only active projects, or "default" to show both archived and active projects. * Options: `default`, `true`, `false` **Description:** Get a project by ID in Asana. 
**Parameters:** * `projectFilterId` (string, required): Project ID. **Description:** Create a task in Asana. **Parameters:** * `name` (string, required): Name (example: "Task Name"). * `workspace` (string, optional): Workspace - Use Connect Portal Workflow Settings to allow users to select which Workspace to create Tasks in. Defaults to the user's first Workspace if left blank.. * `project` (string, optional): Project - Use Connect Portal Workflow Settings to allow users to select which Project to create this Task in. * `notes` (string, optional): Notes. * `dueOnDate` (string, optional): Due On - The date on which this task is due. Cannot be used together with Due At. (example: "YYYY-MM-DD"). * `dueAtDate` (string, optional): Due At - The date and time (ISO timestamp) at which this task is due. Cannot be used together with Due On. (example: "2019-09-15T02:06:58.147Z"). * `assignee` (string, optional): Assignee - The ID of the Asana user this task will be assigned to. Use Connect Portal Workflow Settings to allow users to select an Assignee. * `gid` (string, optional): External ID - An ID from your application to associate this task with. You can use this ID to sync updates to this task later. **Description:** Update a task in Asana. **Parameters:** * `taskId` (string, required): Task ID - The ID of the Task that will be updated. * `completeStatus` (string, optional): Completed Status. * Options: `true`, `false` * `name` (string, optional): Name (example: "Task Name"). * `notes` (string, optional): Notes. * `dueOnDate` (string, optional): Due On - The date on which this task is due. Cannot be used together with Due At. (example: "YYYY-MM-DD"). * `dueAtDate` (string, optional): Due At - The date and time (ISO timestamp) at which this task is due. Cannot be used together with Due On. (example: "2019-09-15T02:06:58.147Z"). * `assignee` (string, optional): Assignee - The ID of the Asana user this task will be assigned to. Use Connect Portal Workflow Settings to allow users to select an Assignee. * `gid` (string, optional): External ID - An ID from your application to associate this task with. You can use this ID to sync updates to this task later. **Description:** Get a list of tasks in Asana. **Parameters:** * `workspace` (string, optional): Workspace - The ID of the Workspace to filter tasks on. Use Connect Portal Workflow Settings to allow users to select a Workspace. * `project` (string, optional): Project - The ID of the Project to filter tasks on. Use Connect Portal Workflow Settings to allow users to select a Project. * `assignee` (string, optional): Assignee - The ID of the assignee to filter tasks on. Use Connect Portal Workflow Settings to allow users to select an Assignee. * `completedSince` (string, optional): Completed since - Only return tasks that are either incomplete or that have been completed since this time (ISO or Unix timestamp). (example: "2014-04-25T16:15:47-04:00"). **Description:** Get a list of tasks by ID in Asana. **Parameters:** * `taskId` (string, required): Task ID. **Description:** Get a task by external ID in Asana. **Parameters:** * `gid` (string, required): External ID - The ID that this task is associated or synced with, from your application. **Description:** Add a task to a section in Asana. **Parameters:** * `sectionId` (string, required): Section ID - The ID of the section to add this task to. * `taskId` (string, required): Task ID - The ID of the task. (example: "1204619611402340"). 
* `beforeTaskId` (string, optional): Before Task ID - The ID of a task in this section that this task will be inserted before. Cannot be used with After Task ID. (example: "1204619611402340"). * `afterTaskId` (string, optional): After Task ID - The ID of a task in this section that this task will be inserted after. Cannot be used with Before Task ID. (example: "1204619611402340"). **Description:** Get a list of teams in Asana. **Parameters:** * `workspace` (string, required): Workspace - Returns the teams in this workspace visible to the authorized user. **Description:** Get a list of workspaces in Asana. **Parameters:** None required. ## Usage Examples ### Basic Asana Agent Setup ```python theme={null} from crewai import Agent, Task, Crew # Create an agent with Asana capabilities asana_agent = Agent( role="Project Manager", goal="Manage tasks and projects in Asana efficiently", backstory="An AI assistant specialized in project management and task coordination.", apps=['asana'] # All Asana actions will be available ) # Task to create a new project create_project_task = Task( description="Create a new project called 'Q1 Marketing Campaign' in the Marketing workspace", agent=asana_agent, expected_output="Confirmation that the project was created successfully with project ID" ) # Run the task crew = Crew( agents=[asana_agent], tasks=[create_project_task] ) crew.kickoff() ``` ### Filtering Specific Asana Tools ```python theme={null} from crewai import Agent, Task, Crew # Create agent with specific Asana actions only task_manager_agent = Agent( role="Task Manager", goal="Create and manage tasks efficiently", backstory="An AI assistant that focuses on task creation and management.", apps=[ 'asana/create_task', 'asana/update_task', 'asana/get_tasks' ] # Specific Asana actions ) # Task to create and assign a task task_management = Task( description="Create a task called 'Review quarterly reports' and assign it to the appropriate team member", agent=task_manager_agent, expected_output="Task created and assigned successfully" ) crew = Crew( agents=[task_manager_agent], tasks=[task_management] ) crew.kickoff() ``` ### Advanced Project Management ```python theme={null} from crewai import Agent, Task, Crew project_coordinator = Agent( role="Project Coordinator", goal="Coordinate project activities and track progress", backstory="An experienced project coordinator who ensures projects run smoothly.", apps=['asana'] ) # Complex task involving multiple Asana operations coordination_task = Task( description=""" 1. Get all active projects in the workspace 2. For each project, get the list of incomplete tasks 3. Create a summary report task in the 'Management Reports' project 4. Add comments to overdue tasks to request status updates """, agent=project_coordinator, expected_output="Summary report created and status update requests sent for overdue tasks" ) crew = Crew( agents=[project_coordinator], tasks=[coordination_task] ) crew.kickoff() ``` --- # Source: https://docs.crewai.com/en/enterprise/guides/automation-triggers.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Triggers Overview > Understand how CrewAI AMP triggers work, how to manage them, and where to find integration-specific playbooks CrewAI AMP triggers connect your automations to real-time events across the tools your teams already use. 
Instead of polling systems or relying on manual kickoffs, triggers listen for changes—new emails, calendar updates, CRM status changes—and immediately launch the crew or flow you specify. Automation Triggers Overview ### Integration Playbooks Deep-dive guides walk through setup and sample workflows for each integration: * Enable crews when emails arrive or threads update. * React to calendar events as they are created, updated, or cancelled. * Handle Drive file uploads, edits, and deletions. * Automate responses to new Outlook messages and calendar updates. * Audit file activity and sharing changes in OneDrive. * Kick off workflows when new Teams chats start. * Launch automations from HubSpot workflows and lifecycle events. * Connect Salesforce processes to CrewAI for CRM automation. * Start crews directly from Slack slash commands. * Bridge CrewAI with thousands of Zapier-supported apps. ## Trigger Capabilities With triggers, you can: * **Respond to real-time events** - Automatically execute workflows when specific conditions are met * **Integrate with external systems** - Connect with platforms like Gmail, Outlook, OneDrive, JIRA, Slack, Stripe and more * **Scale your automation** - Handle high-volume events without manual intervention * **Maintain context** - Access trigger data within your crews and flows ## Managing Triggers ### Viewing Available Triggers To access and manage your automation triggers: 1. Navigate to your deployment in the CrewAI dashboard 2. Click on the **Triggers** tab to view all available trigger integrations List of available automation triggers This view shows all the trigger integrations available for your deployment, along with their current connection status. ### Enabling and Disabling Triggers Each trigger can be easily enabled or disabled using the toggle switch: Enable or disable triggers with toggle * **Enabled (blue toggle)**: The trigger is active and will automatically execute your deployment when the specified events occur * **Disabled (gray toggle)**: The trigger is inactive and will not respond to events Simply click the toggle to change the trigger state. Changes take effect immediately. ### Monitoring Trigger Executions Track the performance and history of your triggered executions: List of executions triggered by automation ## Building Trigger-Driven Automations Before building your automation, it's helpful to understand the structure of trigger payloads that your crews and flows will receive. ### Trigger Setup Checklist Before wiring a trigger into production, make sure you: * Connect the integration under **Tools & Integrations** and complete any OAuth or API key steps * Enable the trigger toggle on the deployment that should respond to events * Provide any required environment variables (API tokens, tenant IDs, shared secrets) * Create or update tasks that can parse the incoming payload within the first crew task or flow step * Decide whether to pass trigger context automatically using `allow_crewai_trigger_context` (see the sketch after this list) * Set up monitoring—webhook logs, CrewAI execution history, and optional external alerting
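For the `allow_crewai_trigger_context` checklist item, here is a minimal sketch of what the flag can look like on individual tasks. This assumes the flag is accepted as a `Task` field, as the payload-injection notes later in this guide suggest; the agent, task text, and values are made up for illustration.

```python theme={null}
from crewai import Agent, Task

# Hypothetical agent used only to illustrate the flag
triage_agent = Agent(
    role="Trigger Triage",
    goal="Summarize incoming trigger events",
    backstory="Routes platform events to the right follow-up work.",
)

# Always inject the trigger payload into this task's description
parse_event = Task(
    description="Parse the incoming event and extract the key fields",
    expected_output="A short summary of the event",
    agent=triage_agent,
    allow_crewai_trigger_context=True,  # assumed Task-level flag
)

# Never inject the payload here, even if this ends up as the first task
write_report = Task(
    description="Write a status report for the team",
    expected_output="A one-paragraph status report",
    agent=triage_agent,
    allow_crewai_trigger_context=False,
)
```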
### Testing Triggers Locally with CLI The CrewAI CLI provides powerful commands to help you develop and test trigger-driven automations without deploying to production. #### List Available Triggers View all available triggers for your connected integrations: ```bash theme={null} crewai triggers list ``` This command displays all triggers available based on your connected integrations, showing: * Integration name and connection status * Available trigger types * Trigger names and descriptions #### Simulate Trigger Execution Test your crew with realistic trigger payloads before deployment: ```bash theme={null} crewai triggers run <trigger_name> ``` For example: ```bash theme={null} crewai triggers run microsoft_onedrive/file_changed ``` This command: * Executes your crew locally * Passes a complete, realistic trigger payload * Simulates exactly how your crew will be called in production **Important Development Notes:** * Use `crewai triggers run <trigger_name>` to simulate trigger execution during development * Using `crewai run` will NOT simulate trigger calls and won't pass the trigger payload * After deployment, your crew will be executed with the actual trigger payload * If your crew expects parameters that aren't in the trigger payload, execution may fail ### Triggers with Crew Your existing crew definitions work seamlessly with triggers; you just need a task that parses the received payload: ```python theme={null} from crewai import Agent, Task from crewai.project import CrewBase, agent, task @CrewBase class MyAutomatedCrew: @agent def researcher(self) -> Agent: return Agent( config=self.agents_config['researcher'], ) @task def parse_trigger_payload(self) -> Task: return Task( config=self.tasks_config['parse_trigger_payload'], agent=self.researcher(), ) @task def analyze_trigger_content(self) -> Task: return Task( config=self.tasks_config['analyze_trigger_data'], agent=self.researcher(), ) ``` The crew will automatically receive and can access the trigger payload through the standard CrewAI context mechanisms. Crew and Flow inputs can include `crewai_trigger_payload`. CrewAI automatically injects this payload: * Tasks: appended to the first task's description by default ("Trigger Payload: {crewai_trigger_payload}") * Control via `allow_crewai_trigger_context`: set `True` to always inject, `False` to never inject * Flows: any `@start()` method that accepts a `crewai_trigger_payload` parameter will receive it
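If you want to exercise the same crew from a plain Python script instead of the CLI, you can hand it a payload yourself through `kickoff()` inputs, since `crewai_trigger_payload` is just another input. The sketch below is illustrative only: the module path and payload contents are made up, and it assumes `MyAutomatedCrew` also defines the usual `@crew` method; real payload structures vary by integration, so inspect them with `crewai triggers run` first.

```python theme={null}
from my_project.crew import MyAutomatedCrew  # hypothetical module path

# A made-up payload for local testing; real trigger payloads differ per integration
simulated_payload = {
    "id": "evt_local_001",
    "payload": {"subject": "New lead created", "source": "hubspot"},
}

# Passing crewai_trigger_payload as an input mimics what the platform injects
result = (
    MyAutomatedCrew()
    .crew()  # assumes the class also defines a @crew method
    .kickoff(inputs={"crewai_trigger_payload": simulated_payload})
)
print(result.raw)
```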
### Integration with Flows For flows, you have more control over how trigger data is handled: #### Accessing Trigger Payload Any `@start()` method in your flow can accept an additional parameter called `crewai_trigger_payload`: ```python theme={null} from crewai.flow import Flow, start, listen class MyAutomatedFlow(Flow): @start() def handle_trigger(self, crewai_trigger_payload: dict = None): """ This start method can receive trigger data """ if crewai_trigger_payload: # Process the trigger data trigger_id = crewai_trigger_payload.get('id') event_data = crewai_trigger_payload.get('payload', {}) # Store in flow state for use by other methods self.state.trigger_id = trigger_id self.state.trigger_type = event_data return event_data # Handle manual execution return None @listen(handle_trigger) def process_data(self, trigger_data): """ Process the data from the trigger """ # ... process the trigger ``` #### Triggering Crews from Flows When kicking off a crew within a flow that was triggered, pass the trigger payload through as-is: ```python theme={null} @start() def delegate_to_crew(self, crewai_trigger_payload: dict = None): """ Delegate processing to a specialized crew """ crew = MySpecializedCrew() # Pass the trigger payload to the crew result = crew.crew().kickoff( inputs={ 'a_custom_parameter': "custom_value", 'crewai_trigger_payload': crewai_trigger_payload }, ) return result ``` ## Troubleshooting **Trigger not firing:** * Verify the trigger is enabled in your deployment's Triggers tab * Check integration connection status under Tools & Integrations * Ensure all required environment variables are properly configured **Execution failures:** * Check the execution logs for error details * Use `crewai triggers run <trigger_name>` to test locally and see the exact payload structure * Verify your crew can handle the `crewai_trigger_payload` parameter * Ensure your crew doesn't expect parameters that aren't included in the trigger payload **Development issues:** * Always test with `crewai triggers run <trigger_name>` before deploying to see the complete payload * Remember that `crewai run` does NOT simulate trigger calls—use `crewai triggers run` instead * Use `crewai triggers list` to verify which triggers are available for your connected integrations * After deployment, your crew will receive the actual trigger payload, so test thoroughly locally first Automation triggers transform your CrewAI deployments into responsive, event-driven systems that can seamlessly integrate with your existing business processes and tools. --- # Source: https://docs.crewai.com/en/enterprise/features/automations.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Automations > Manage, deploy, and monitor your live crews (automations) in one place. ## Overview Automations is the live operations hub for your deployed crews. Use it to deploy from GitHub or a ZIP file, manage environment variables, re‑deploy when needed, and monitor the status of each automation. Automations Overview ## Deployment Methods ### Deploy from GitHub Use this for version‑controlled projects and continuous deployment. 1. Click Configure GitHub and authorize access. 2. Choose the Repository and Branch you want to deploy from. 3. Turn on Automatically deploy new commits to ship updates on every push. 4. Add secrets individually or use Bulk View for multiple variables. 5. Click Deploy to create your live automation. GitHub Deployment ### Deploy from ZIP Ship quickly without Git—upload a compressed package of your project. 1. Select the ZIP archive from your computer. 2. Provide any required variables or keys. 3. Click Deploy to create your live automation. ZIP Deployment ## Automations Dashboard The table lists all live automations with key details: * **CREW**: Automation name * **STATUS**: Online / Failed / In Progress * **URL**: Endpoint for kickoff/status * **TOKEN**: Automation token * **ACTIONS**: Re‑deploy, delete, and more Use the top‑right controls to filter and search: * Search by name * Filter by Status * Filter by Source (GitHub / Studio / ZIP) Once deployed, you can view the automation details and use the **Options** dropdown menu to `chat with this crew`, `Export React Component` and `Export as MCP`.
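Each automation's URL and token from the table above can also be used to start runs programmatically. The snippet below is a minimal sketch using the `requests` library; the `/kickoff` and `/status/<id>` paths, the `inputs` body, and the response fields are assumptions based on the typical CrewAI AMP automation API, so confirm them against your deployment's API reference (see the Related links that follow).

```python theme={null}
import requests

# Hypothetical values copied from the Automations table
AUTOMATION_URL = "https://your-automation.crewai.com"  # URL column
AUTOMATION_TOKEN = "your-automation-token"             # TOKEN column

headers = {"Authorization": f"Bearer {AUTOMATION_TOKEN}"}

# Start a run (assumed /kickoff endpoint accepting an "inputs" object)
kickoff = requests.post(
    f"{AUTOMATION_URL}/kickoff",
    headers=headers,
    json={"inputs": {"topic": "Q3 revenue report"}},
)
kickoff.raise_for_status()
kickoff_id = kickoff.json().get("kickoff_id")

# Poll for the result (assumed /status/<kickoff_id> endpoint)
status = requests.get(f"{AUTOMATION_URL}/status/{kickoff_id}", headers=headers)
print(status.json())
```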
Automations Table ## Best Practices * Prefer GitHub deployments for version control and CI/CD * Use re‑deploy to roll forward after code or config updates or set it to auto-deploy on every push ## Related Deploy a Crew from GitHub or ZIP file. Trigger automations via webhooks or API. Stream real-time events and updates to your systems. --- # Source: https://docs.crewai.com/en/enterprise/guides/azure-openai-setup.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Azure OpenAI Setup > Configure Azure OpenAI with Crew Studio for enterprise LLM connections This guide walks you through connecting Azure OpenAI with Crew Studio for seamless enterprise AI operations. ## Setup Process 1. In Azure, go to [Azure AI Foundry](https://ai.azure.com/) > select your Azure OpenAI deployment. 2. On the left menu, click `Deployments`. If you don't have one, create a deployment with your desired model. 3. Once created, select your deployment and locate the `Target URI` and `Key` on the right side of the page. Keep this page open, as you'll need this information. Azure AI Foundry 4. In another tab, open `CrewAI AMP > LLM Connections`. Name your LLM Connection, select Azure as the provider, and choose the same model you selected in Azure. 5. On the same page, add environment variables from step 3: * One named `AZURE_DEPLOYMENT_TARGET_URL` (using the Target URI). The URL should look like this: [https://your-deployment.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-08-01-preview](https://your-deployment.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-08-01-preview) * Another named `AZURE_API_KEY` (using the Key). 6. Click `Add Connection` to save your LLM Connection. 7. In `CrewAI AMP > Settings > Defaults > Crew Studio LLM Settings`, set the new LLM Connection and model as defaults. 8. Ensure network access settings: * In Azure, go to `Azure OpenAI > select your deployment`. * Navigate to `Resource Management > Networking`. * Ensure that `Allow access from all networks` is enabled. If this setting is restricted, CrewAI may be blocked from accessing your Azure OpenAI endpoint. ## Verification You're all set! Crew Studio will now use your Azure OpenAI connection. Test the connection by creating a simple crew or task to ensure everything is working properly. ## Troubleshooting If you encounter issues: * Verify the Target URI format matches the expected pattern * Check that the API key is correct and has proper permissions * Ensure network access is configured to allow CrewAI connections * Confirm the deployment model matches what you've configured in CrewAI --- # Source: https://docs.crewai.com/en/tools/integration/bedrockinvokeagenttool.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Bedrock Invoke Agent Tool > Enables CrewAI agents to invoke Amazon Bedrock Agents and leverage their capabilities within your workflows # `BedrockInvokeAgentTool` The `BedrockInvokeAgentTool` enables CrewAI agents to invoke Amazon Bedrock Agents and leverage their capabilities within your workflows. 
## Installation ```bash theme={null} uv pip install 'crewai[tools]' ``` ## Requirements * AWS credentials configured (either through environment variables or AWS CLI) * `boto3` and `python-dotenv` packages * Access to Amazon Bedrock Agents ## Usage Here's how to use the tool with a CrewAI agent: ```python {2, 4-8} theme={null} from crewai import Agent, Task, Crew from crewai_tools.aws.bedrock.agents.invoke_agent_tool import BedrockInvokeAgentTool # Initialize the tool agent_tool = BedrockInvokeAgentTool( agent_id="your-agent-id", agent_alias_id="your-agent-alias-id" ) # Create a CrewAI agent that uses the tool aws_expert = Agent( role='AWS Service Expert', goal='Help users understand AWS services and quotas', backstory='I am an expert in AWS services and can provide detailed information about them.', tools=[agent_tool], verbose=True ) # Create a task for the agent quota_task = Task( description="Find out the current service quotas for EC2 in us-west-2 and explain any recent changes.", agent=aws_expert ) # Create a crew with the agent crew = Crew( agents=[aws_expert], tasks=[quota_task], verbose=2 ) # Run the crew result = crew.kickoff() print(result) ``` ## Tool Arguments | Argument | Type | Required | Default | Description | | :------------------- | :----- | :------- | :-------- | :------------------------------------------ | | **agent\_id** | `str` | Yes | None | The unique identifier of the Bedrock agent | | **agent\_alias\_id** | `str` | Yes | None | The unique identifier of the agent alias | | **session\_id** | `str` | No | timestamp | The unique identifier of the session | | **enable\_trace** | `bool` | No | False | Whether to enable trace for debugging | | **end\_session** | `bool` | No | False | Whether to end the session after invocation | | **description** | `str` | No | None | Custom description for the tool | ## Environment Variables ```bash theme={null} BEDROCK_AGENT_ID=your-agent-id # Alternative to passing agent_id BEDROCK_AGENT_ALIAS_ID=your-agent-alias-id # Alternative to passing agent_alias_id AWS_REGION=your-aws-region # Defaults to us-west-2 AWS_ACCESS_KEY_ID=your-access-key # Required for AWS authentication AWS_SECRET_ACCESS_KEY=your-secret-key # Required for AWS authentication ``` ## Advanced Usage ### Multi-Agent Workflow with Session Management ```python {2, 4-22} theme={null} from crewai import Agent, Task, Crew, Process from crewai_tools.aws.bedrock.agents.invoke_agent_tool import BedrockInvokeAgentTool # Initialize tools with session management initial_tool = BedrockInvokeAgentTool( agent_id="your-agent-id", agent_alias_id="your-agent-alias-id", session_id="custom-session-id" ) followup_tool = BedrockInvokeAgentTool( agent_id="your-agent-id", agent_alias_id="your-agent-alias-id", session_id="custom-session-id" ) final_tool = BedrockInvokeAgentTool( agent_id="your-agent-id", agent_alias_id="your-agent-alias-id", session_id="custom-session-id", end_session=True ) # Create agents for different stages researcher = Agent( role='AWS Service Researcher', goal='Gather information about AWS services', backstory='I am specialized in finding detailed AWS service information.', tools=[initial_tool] ) analyst = Agent( role='Service Compatibility Analyst', goal='Analyze service compatibility and requirements', backstory='I analyze AWS services for compatibility and integration possibilities.', tools=[followup_tool] ) summarizer = Agent( role='Technical Documentation Writer', goal='Create clear technical summaries', backstory='I specialize in creating clear, concise technical 
documentation.', tools=[final_tool] ) # Create tasks research_task = Task( description="Find all available AWS services in us-west-2 region.", agent=researcher ) analysis_task = Task( description="Analyze which services support IPv6 and their implementation requirements.", agent=analyst ) summary_task = Task( description="Create a summary of IPv6-compatible services and their key features.", agent=summarizer ) # Create a crew with the agents and tasks crew = Crew( agents=[researcher, analyst, summarizer], tasks=[research_task, analysis_task, summary_task], process=Process.sequential, verbose=2 ) # Run the crew result = crew.kickoff() ``` ## Use Cases ### Hybrid Multi-Agent Collaborations * Create workflows where CrewAI agents collaborate with managed Bedrock agents running as services in AWS * Enable scenarios where sensitive data processing happens within your AWS environment while other agents operate externally * Bridge on-premises CrewAI agents with cloud-based Bedrock agents for distributed intelligence workflows ### Data Sovereignty and Compliance * Keep data-sensitive agentic workflows within your AWS environment while allowing external CrewAI agents to orchestrate tasks * Maintain compliance with data residency requirements by processing sensitive information only within your AWS account * Enable secure multi-agent collaborations where some agents cannot access your organization's private data ### Seamless AWS Service Integration * Access any AWS service through Amazon Bedrock Actions without writing complex integration code * Enable CrewAI agents to interact with AWS services through natural language requests * Leverage pre-built Bedrock agent capabilities to interact with AWS services like Bedrock Knowledge Bases, Lambda, and more ### Scalable Hybrid Agent Architectures * Offload computationally intensive tasks to managed Bedrock agents while lightweight tasks run in CrewAI * Scale agent processing by distributing workloads between local CrewAI agents and cloud-based Bedrock agents ### Cross-Organizational Agent Collaboration * Enable secure collaboration between your organization's CrewAI agents and partner organizations' Bedrock agents * Create workflows where external expertise from Bedrock agents can be incorporated without exposing sensitive data * Build agent ecosystems that span organizational boundaries while maintaining security and data control --- # Source: https://docs.crewai.com/en/tools/cloud-storage/bedrockkbretriever.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Bedrock Knowledge Base Retriever > Retrieve information from Amazon Bedrock Knowledge Bases using natural language queries # `BedrockKBRetrieverTool` The `BedrockKBRetrieverTool` enables CrewAI agents to retrieve information from Amazon Bedrock Knowledge Bases using natural language queries. 
## Installation ```bash theme={null} uv pip install 'crewai[tools]' ``` ## Requirements * AWS credentials configured (either through environment variables or AWS CLI) * `boto3` and `python-dotenv` packages * Access to Amazon Bedrock Knowledge Base ## Usage Here's how to use the tool with a CrewAI agent: ```python {2, 4-17} theme={null} from crewai import Agent, Task, Crew from crewai_tools.aws.bedrock.knowledge_base.retriever_tool import BedrockKBRetrieverTool # Initialize the tool kb_tool = BedrockKBRetrieverTool( knowledge_base_id="your-kb-id", number_of_results=5 ) # Create a CrewAI agent that uses the tool researcher = Agent( role='Knowledge Base Researcher', goal='Find information about company policies', backstory='I am a researcher specialized in retrieving and analyzing company documentation.', tools=[kb_tool], verbose=True ) # Create a task for the agent research_task = Task( description="Find our company's remote work policy and summarize the key points.", agent=researcher ) # Create a crew with the agent crew = Crew( agents=[researcher], tasks=[research_task], verbose=2 ) # Run the crew result = crew.kickoff() print(result) ``` ## Tool Arguments | Argument | Type | Required | Default | Description | | :--------------------------- | :----- | :------- | :------ | :------------------------------------------------------------------------- | | **knowledge\_base\_id** | `str` | Yes | None | The unique identifier of the knowledge base (0-10 alphanumeric characters) | | **number\_of\_results** | `int` | No | 5 | Maximum number of results to return | | **retrieval\_configuration** | `dict` | No | None | Custom configurations for the knowledge base query | | **guardrail\_configuration** | `dict` | No | None | Content filtering settings | | **next\_token** | `str` | No | None | Token for pagination | ## Environment Variables ```bash theme={null} BEDROCK_KB_ID=your-knowledge-base-id # Alternative to passing knowledge_base_id AWS_REGION=your-aws-region # Defaults to us-east-1 AWS_ACCESS_KEY_ID=your-access-key # Required for AWS authentication AWS_SECRET_ACCESS_KEY=your-secret-key # Required for AWS authentication ``` ## Response Format The tool returns results in JSON format: ```json theme={null} { "results": [ { "content": "Retrieved text content", "content_type": "text", "source_type": "S3", "source_uri": "s3://bucket/document.pdf", "score": 0.95, "metadata": { "additional": "metadata" } } ], "nextToken": "pagination-token", "guardrailAction": "NONE" } ``` ## Advanced Usage ### Custom Retrieval Configuration ```python theme={null} kb_tool = BedrockKBRetrieverTool( knowledge_base_id="your-kb-id", retrieval_configuration={ "vectorSearchConfiguration": { "numberOfResults": 10, "overrideSearchType": "HYBRID" } } ) policy_expert = Agent( role='Policy Expert', goal='Analyze company policies in detail', backstory='I am an expert in corporate policy analysis with deep knowledge of regulatory requirements.', tools=[kb_tool] ) ``` ## Supported Data Sources * Amazon S3 * Confluence * Salesforce * SharePoint * Web pages * Custom document locations * Amazon Kendra * SQL databases ## Use Cases ### Enterprise Knowledge Integration * Enable CrewAI agents to access your organization's proprietary knowledge without exposing sensitive data * Allow agents to make decisions based on your company's specific policies, procedures, and documentation * Create agents that can answer questions based on your internal documentation while maintaining data security ### Specialized Domain Knowledge * Connect CrewAI agents 
to domain-specific knowledge bases (legal, medical, technical) without retraining models * Leverage existing knowledge repositories that are already maintained in your AWS environment * Combine CrewAI's reasoning with domain-specific information from your knowledge bases ### Data-Driven Decision Making * Ground CrewAI agent responses in your actual company data rather than general knowledge * Ensure agents provide recommendations based on your specific business context and documentation * Reduce hallucinations by retrieving factual information from your knowledge bases ### Scalable Information Access * Access terabytes of organizational knowledge without embedding it all into your models * Dynamically query only the relevant information needed for specific tasks * Leverage AWS's scalable infrastructure to handle large knowledge bases efficiently ### Compliance and Governance * Ensure CrewAI agents provide responses that align with your company's approved documentation * Create auditable trails of information sources used by your agents * Maintain control over what information sources your agents can access --- # Source: https://docs.crewai.com/en/enterprise/integrations/box.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Box Integration > File storage and document management with Box integration for CrewAI. ## Overview Enable your agents to manage files, folders, and documents through Box. Upload files, organize folder structures, search content, and streamline your team's document management with AI-powered automation. ## Prerequisites Before using the Box integration, ensure you have: * A [CrewAI AMP](https://app.crewai.com) account with an active subscription * A Box account with appropriate permissions * Connected your Box account through the [Integrations page](https://app.crewai.com/crewai_plus/connectors) ## Setting Up Box Integration ### 1. Connect Your Box Account 1. Navigate to [CrewAI AMP Integrations](https://app.crewai.com/crewai_plus/connectors) 2. Find **Box** in the Authentication Integrations section 3. Click **Connect** and complete the OAuth flow 4. Grant the necessary permissions for file and folder management 5. Copy your Enterprise Token from [Integration Settings](https://app.crewai.com/crewai_plus/settings/integrations) ### 2. Install Required Package ```bash theme={null} uv add crewai-tools ``` ### 3. Environment Variable Setup To use integrations with `Agent(apps=[])`, you must set the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable with your Enterprise Token. ```bash theme={null} export CREWAI_PLATFORM_INTEGRATION_TOKEN="your_enterprise_token" ``` Or add it to your `.env` file: ``` CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token ``` ## Available Actions **Description:** Save a file from URL in Box. **Parameters:** * `fileAttributes` (object, required): Attributes - File metadata including name, parent folder, and timestamps. ```json theme={null} { "content_created_at": "2012-12-12T10:53:43-08:00", "content_modified_at": "2012-12-12T10:53:43-08:00", "name": "qwerty.png", "parent": { "id": "1234567" } } ``` * `file` (string, required): File URL - Files must be smaller than 50MB in size. (example: "[https://picsum.photos/200/300](https://picsum.photos/200/300)"). **Description:** Save a file in Box. **Parameters:** * `file` (string, required): File - Accepts a File Object containing file data. 
Files must be smaller than 50MB in size. * `fileName` (string, required): File Name (example: "qwerty.png"). * `folder` (string, optional): Folder - Use Connect Portal Workflow Settings to allow users to select the File's Folder destination. Defaults to the user's root folder if left blank. **Description:** Get a file by ID in Box. **Parameters:** * `fileId` (string, required): File ID - The unique identifier that represents a file. (example: "12345"). **Description:** List files in Box. **Parameters:** * `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0"). * `filterFormula` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions. ```json theme={null} { "operator": "OR", "conditions": [ { "operator": "AND", "conditions": [ { "field": "direction", "operator": "$stringExactlyMatches", "value": "ASC" } ] } ] } ``` **Description:** Create a folder in Box. **Parameters:** * `folderName` (string, required): Name - The name for the new folder. (example: "New Folder"). * `folderParent` (object, required): Parent Folder - The parent folder where the new folder will be created. ```json theme={null} { "id": "123456" } ``` **Description:** Move a folder in Box. **Parameters:** * `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0"). * `folderName` (string, required): Name - The name for the folder. (example: "New Folder"). * `folderParent` (object, required): Parent Folder - The new parent folder destination. ```json theme={null} { "id": "123456" } ``` **Description:** Get a folder by ID in Box. **Parameters:** * `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0"). **Description:** Search folders in Box. **Parameters:** * `folderId` (string, required): Folder ID - The folder to search within. * `filterFormula` (object, optional): A filter in disjunctive normal form - OR of AND groups of single conditions. ```json theme={null} { "operator": "OR", "conditions": [ { "operator": "AND", "conditions": [ { "field": "sort", "operator": "$stringExactlyMatches", "value": "name" } ] } ] } ``` **Description:** Delete a folder in Box. **Parameters:** * `folderId` (string, required): Folder ID - The unique identifier that represents a folder. (example: "0"). * `recursive` (boolean, optional): Recursive - Delete a folder that is not empty by recursively deleting the folder and all of its content. 
## Usage Examples ### Basic Box Agent Setup ```python theme={null} from crewai import Agent, Task, Crew # Create an agent with Box capabilities box_agent = Agent( role="Document Manager", goal="Manage files and folders in Box efficiently", backstory="An AI assistant specialized in document management and file organization.", apps=['box'] # All Box actions will be available ) # Task to create a folder structure create_structure_task = Task( description="Create a folder called 'Project Files' in the root directory and upload a document from URL", agent=box_agent, expected_output="Folder created and file uploaded successfully" ) # Run the task crew = Crew( agents=[box_agent], tasks=[create_structure_task] ) crew.kickoff() ``` ### Filtering Specific Box Tools ```python theme={null} from crewai import Agent, Task, Crew # Create agent with specific Box actions only file_organizer_agent = Agent( role="File Organizer", goal="Organize and manage file storage efficiently", backstory="An AI assistant that focuses on file organization and storage management.", apps=['box/create_folder', 'box/save_file', 'box/list_files'] # Specific Box actions ) # Task to organize files organization_task = Task( description="Create a folder structure for the marketing team and organize existing files", agent=file_organizer_agent, expected_output="Folder structure created and files organized" ) crew = Crew( agents=[file_organizer_agent], tasks=[organization_task] ) crew.kickoff() ``` ### Advanced File Management ```python theme={null} from crewai import Agent, Task, Crew file_manager = Agent( role="File Manager", goal="Maintain organized file structure and manage document lifecycle", backstory="An experienced file manager who ensures documents are properly organized and accessible.", apps=['box'] ) # Complex task involving multiple Box operations management_task = Task( description=""" 1. List all files in the root folder 2. Create monthly archive folders for the current year 3. Move old files to appropriate archive folders 4. Generate a summary report of the file organization """, agent=file_manager, expected_output="Files organized into archive structure with summary report" ) crew = Crew( agents=[file_manager], tasks=[management_task] ) crew.kickoff() ``` --- # Source: https://docs.crewai.com/en/observability/braintrust.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Braintrust > Braintrust integration for CrewAI with OpenTelemetry tracing and evaluation # Braintrust Integration This guide demonstrates how to integrate **Braintrust** with **CrewAI** using OpenTelemetry for comprehensive tracing and evaluation. By the end of this guide, you will be able to trace your CrewAI agents, monitor their performance, and evaluate their outputs using Braintrust's powerful observability platform. > **What is Braintrust?** [Braintrust](https://www.braintrust.dev) is an AI evaluation and observability platform that provides comprehensive tracing, evaluation, and monitoring for AI applications with built-in experiment tracking and performance analytics. ## Get Started We'll walk through a simple example of using CrewAI and integrating it with Braintrust via OpenTelemetry for comprehensive observability and evaluation.
### Step 1: Install Dependencies ```bash theme={null} uv add braintrust[otel] crewai crewai-tools opentelemetry-instrumentation-openai opentelemetry-instrumentation-crewai python-dotenv ``` ### Step 2: Set Up Environment Variables Setup Braintrust API keys and configure OpenTelemetry to send traces to Braintrust. You'll need a Braintrust API key and your OpenAI API key. ```python theme={null} import os from getpass import getpass # Get your Braintrust credentials BRAINTRUST_API_KEY = getpass("🔑 Enter your Braintrust API Key: ") # Get API keys for services OPENAI_API_KEY = getpass("🔑 Enter your OpenAI API key: ") # Set environment variables os.environ["BRAINTRUST_API_KEY"] = BRAINTRUST_API_KEY os.environ["BRAINTRUST_PARENT"] = "project_name:crewai-demo" os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY ``` ### Step 3: Initialize OpenTelemetry with Braintrust Initialize the Braintrust OpenTelemetry instrumentation to start capturing traces and send them to Braintrust. ```python theme={null} import os from typing import Any, Dict from braintrust.otel import BraintrustSpanProcessor from crewai import Agent, Crew, Task from crewai.llm import LLM from opentelemetry import trace from opentelemetry.instrumentation.crewai import CrewAIInstrumentor from opentelemetry.instrumentation.openai import OpenAIInstrumentor from opentelemetry.sdk.trace import TracerProvider def setup_tracing() -> None: """Setup OpenTelemetry tracing with Braintrust.""" current_provider = trace.get_tracer_provider() if isinstance(current_provider, TracerProvider): provider = current_provider else: provider = TracerProvider() trace.set_tracer_provider(provider) provider.add_span_processor(BraintrustSpanProcessor()) CrewAIInstrumentor().instrument(tracer_provider=provider) OpenAIInstrumentor().instrument(tracer_provider=provider) setup_tracing() ``` ### Step 4: Create a CrewAI Application We'll create a CrewAI application where two agents collaborate to research and write a blog post about AI advancements, with comprehensive tracing enabled. ```python theme={null} from crewai import Agent, Crew, Process, Task from crewai_tools import SerperDevTool def create_crew() -> Crew: """Create a crew with multiple agents for comprehensive tracing.""" llm = LLM(model="gpt-4o-mini") search_tool = SerperDevTool() # Define agents with specific roles researcher = Agent( role="Senior Research Analyst", goal="Uncover cutting-edge developments in AI and data science", backstory="""You work at a leading tech think tank. Your expertise lies in identifying emerging trends. You have a knack for dissecting complex data and presenting actionable insights.""", verbose=True, allow_delegation=False, llm=llm, tools=[search_tool], ) writer = Agent( role="Tech Content Strategist", goal="Craft compelling content on tech advancements", backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles. You transform complex concepts into compelling narratives.""", verbose=True, allow_delegation=True, llm=llm, ) # Create tasks for your agents research_task = Task( description="""Conduct a comprehensive analysis of the latest advancements in {topic}. Identify key trends, breakthrough technologies, and potential industry impacts.""", expected_output="Full analysis report in bullet points", agent=researcher, ) writing_task = Task( description="""Using the insights provided, develop an engaging blog post that highlights the most significant {topic} advancements. 
Your post should be informative yet accessible, catering to a tech-savvy audience. Make it sound cool, avoid complex words so it doesn't sound like AI.""", expected_output="Full blog post of at least 4 paragraphs", agent=writer, context=[research_task], ) # Instantiate your crew with a sequential process crew = Crew( agents=[researcher, writer], tasks=[research_task, writing_task], verbose=True, process=Process.sequential ) return crew def run_crew(): """Run the crew and return results.""" crew = create_crew() result = crew.kickoff(inputs={"topic": "AI developments"}) return result # Run your crew if __name__ == "__main__": # Instrumentation is already initialized above in this module result = run_crew() print(result) ``` ### Step 5: View Traces in Braintrust After running your crew, you can view comprehensive traces in Braintrust through different perspectives: Braintrust Trace View Braintrust Timeline View Braintrust Thread View ### Step 6: Evaluate via SDK (Experiments) You can also run evaluations using Braintrust's Eval SDK. This is useful for comparing versions or scoring outputs offline. Below is a Python example using the `Eval` class with the crew we created above: ```python theme={null} # eval_crew.py from braintrust import Eval from autoevals import Levenshtein def evaluate_crew_task(input_data): """Task function that wraps our crew for evaluation.""" crew = create_crew() result = crew.kickoff(inputs={"topic": input_data["topic"]}) return str(result) Eval( "AI Research Crew", # Project name { "data": lambda: [ {"topic": "artificial intelligence trends 2024"}, {"topic": "machine learning breakthroughs"}, {"topic": "AI ethics and governance"}, ], "task": evaluate_crew_task, "scores": [Levenshtein], }, ) ``` Setup your API key and run: ```bash theme={null} export BRAINTRUST_API_KEY="YOUR_API_KEY" braintrust eval eval_crew.py ``` See the [Braintrust Eval SDK guide](https://www.braintrust.dev/docs/start/eval-sdk) for more details. ### Key Features of Braintrust Integration * **Comprehensive Tracing**: Track all agent interactions, tool usage, and LLM calls * **Performance Monitoring**: Monitor execution times, token usage, and success rates * **Experiment Tracking**: Compare different crew configurations and models * **Automated Evaluation**: Set up custom evaluation metrics for crew outputs * **Error Tracking**: Monitor and debug failures across your crew executions * **Cost Analysis**: Track token usage and associated costs ### Version Compatibility Information * Python 3.8+ * CrewAI >= 0.86.0 * Braintrust >= 0.1.0 * OpenTelemetry SDK >= 1.31.0 ### References * [Braintrust Documentation](https://www.braintrust.dev/docs) - Overview of the Braintrust platform * [Braintrust CrewAI Integration](https://www.braintrust.dev/docs/integrations/crew-ai) - Official CrewAI integration guide * [Braintrust Eval SDK](https://www.braintrust.dev/docs/start/eval-sdk) - Run experiments via the SDK * [CrewAI Documentation](https://docs.crewai.com/) - Overview of the CrewAI framework * [OpenTelemetry Docs](https://opentelemetry.io/docs/) - OpenTelemetry guide * [Braintrust GitHub](https://github.com/braintrustdata/braintrust) - Source code for Braintrust SDK --- # Source: https://docs.crewai.com/en/tools/search-research/bravesearchtool.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. 
# Brave Search > The `BraveSearchTool` is designed to search the internet using the Brave Search API. # `BraveSearchTool` ## Description This tool is designed to perform web searches using the Brave Search API. It allows you to search the internet with a specified query and retrieve relevant results. The tool supports customizable result counts and country-specific searches. ## Installation To incorporate this tool into your project, follow the installation instructions below: ```shell theme={null} pip install 'crewai[tools]' ``` ## Steps to Get Started To effectively use the `BraveSearchTool`, follow these steps: 1. **Package Installation**: Confirm that the `crewai[tools]` package is installed in your Python environment. 2. **API Key Acquisition**: Acquire a Brave Search API key at [https://api.search.brave.com/app/keys](https://api.search.brave.com/app/keys) (sign in to generate a key). 3. **Environment Configuration**: Store your obtained API key in an environment variable named `BRAVE_API_KEY` to facilitate its use by the tool. ## Example The following example demonstrates how to initialize the tool and execute a search with a given query: ```python Code theme={null} from crewai_tools import BraveSearchTool # Initialize the tool for internet searching capabilities tool = BraveSearchTool() # Execute a search results = tool.run(search_query="CrewAI agent framework") print(results) ``` ## Parameters The `BraveSearchTool` accepts the following parameters: * **search\_query**: Mandatory. The search query you want to use to search the internet. * **country**: Optional. Specify the country for the search results. Default is empty string. * **n\_results**: Optional. Number of search results to return. Default is `10`. * **save\_file**: Optional. Whether to save the search results to a file. Default is `False`. ## Example with Parameters Here is an example demonstrating how to use the tool with additional parameters: ```python Code theme={null} from crewai_tools import BraveSearchTool # Initialize the tool with custom parameters tool = BraveSearchTool( country="US", n_results=5, save_file=True ) # Execute a search results = tool.run(search_query="Latest AI developments") print(results) ``` ## Agent Integration Example Here's how to integrate the `BraveSearchTool` with a CrewAI agent: ```python Code theme={null} from crewai import Agent from crewai.project import agent from crewai_tools import BraveSearchTool # Initialize the tool brave_search_tool = BraveSearchTool() # Define an agent with the BraveSearchTool @agent def researcher(self) -> Agent: return Agent( config=self.agents_config["researcher"], allow_delegation=False, tools=[brave_search_tool] ) ``` ## Conclusion By integrating the `BraveSearchTool` into Python projects, users gain the ability to conduct real-time, relevant searches across the internet directly from their applications. The tool provides a simple interface to the powerful Brave Search API, making it easy to retrieve and process search results programmatically. By adhering to the setup and usage guidelines provided, incorporating this tool into projects is streamlined and straightforward. --- # Source: https://docs.crewai.com/en/tools/web-scraping/brightdata-tools.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Bright Data Tools > Bright Data integrations for SERP search, Web Unlocker scraping, and Dataset API. 
# Bright Data Tools This set of tools integrates Bright Data services for web extraction. ## Installation ```shell theme={null} uv add crewai-tools requests aiohttp ``` ## Environment Variables * `BRIGHT_DATA_API_KEY` (required) * `BRIGHT_DATA_ZONE` (for SERP/Web Unlocker) Create credentials at [https://brightdata.com/](https://brightdata.com/) (sign up, then create an API token and zone). See their docs: [https://developers.brightdata.com/](https://developers.brightdata.com/) ## Included Tools * `BrightDataSearchTool`: SERP search (Google/Bing/Yandex) with geo/language/device options. * `BrightDataWebUnlockerTool`: Scrape pages with anti-bot bypass and rendering. * `BrightDataDatasetTool`: Run Dataset API jobs and fetch results. ## Examples ### SERP Search ```python Code theme={null} from crewai_tools import BrightDataSearchTool tool = BrightDataSearchTool( query="CrewAI", country="us", ) print(tool.run()) ``` ### Web Unlocker ```python Code theme={null} from crewai_tools import BrightDataWebUnlockerTool tool = BrightDataWebUnlockerTool( url="https://example.com", format="markdown", ) print(tool.run(url="https://example.com")) ``` ### Dataset API ```python Code theme={null} from crewai_tools import BrightDataDatasetTool tool = BrightDataDatasetTool( dataset_type="ecommerce", url="https://example.com/product", ) print(tool.run()) ``` ## Troubleshooting * 401/403: verify `BRIGHT_DATA_API_KEY` and `BRIGHT_DATA_ZONE`. * Empty/blocked content: enable rendering or try a different zone. ## Example ```python Code theme={null} from crewai import Agent, Task, Crew from crewai_tools import BrightDataSearchTool tool = BrightDataSearchTool( query="CrewAI", country="us", ) agent = Agent( role="Web Researcher", goal="Search with Bright Data", backstory="Finds reliable results", tools=[tool], verbose=True, ) task = Task( description="Search for CrewAI and summarize top results", expected_output="Short summary with links", agent=agent, ) crew = Crew( agents=[agent], tasks=[task], verbose=True, ) result = crew.kickoff() ``` --- # Source: https://docs.crewai.com/en/tools/web-scraping/browserbaseloadtool.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Browserbase Web Loader > Browserbase is a developer platform to reliably run, manage, and monitor headless browsers. # `BrowserbaseLoadTool` ## Description [Browserbase](https://browserbase.com) is a developer platform to reliably run, manage, and monitor headless browsers. Power your AI data retrievals with: * [Serverless Infrastructure](https://docs.browserbase.com/under-the-hood) providing reliable browsers to extract data from complex UIs * [Stealth Mode](https://docs.browserbase.com/features/stealth-mode) with included fingerprinting tactics and automatic captcha solving * [Session Debugger](https://docs.browserbase.com/features/sessions) to inspect your Browser Session with networks timeline and logs * [Live Debug](https://docs.browserbase.com/guides/session-debug-connection/browser-remote-control) to quickly debug your automation ## Installation * Get an API key and Project ID from [browserbase.com](https://browserbase.com) and set it in environment variables (`BROWSERBASE_API_KEY`, `BROWSERBASE_PROJECT_ID`). 
* Install the [Browserbase SDK](http://github.com/browserbase/python-sdk) along with `crewai[tools]` package: ```shell theme={null} pip install browserbase 'crewai[tools]' ``` ## Example Utilize the BrowserbaseLoadTool as follows to allow your agent to load websites: ```python Code theme={null} from crewai_tools import BrowserbaseLoadTool # Initialize the tool with the Browserbase API key and Project ID tool = BrowserbaseLoadTool() ``` ## Arguments The following parameters can be used to customize the `BrowserbaseLoadTool`'s behavior: | Argument | Type | Description | | :---------------- | :------- | :------------------------------------------------------------------------------------ | | **api\_key** | `string` | *Optional*. Browserbase API key. Default is `BROWSERBASE_API_KEY` env variable. | | **project\_id** | `string` | *Optional*. Browserbase Project ID. Default is `BROWSERBASE_PROJECT_ID` env variable. | | **text\_content** | `bool` | *Optional*. Retrieve only text content. Default is `False`. | | **session\_id** | `string` | *Optional*. Provide an existing Session ID. | | **proxy** | `bool` | *Optional*. Enable/Disable Proxies. Default is `False`. | --- # Source: https://docs.crewai.com/en/enterprise/guides/build-crew.md > ## Documentation Index > Fetch the complete documentation index at: https://docs.crewai.com/llms.txt > Use this file to discover all available pages before exploring further. # Build Crew > A Crew is a group of agents that work together to complete a task. ## Overview [CrewAI AMP](https://app.crewai.com) streamlines the process of **creating**, **deploying**, and **managing** your AI agents in production environments. ## Getting Started