# AgentVerse Overview

**Source:** https://github.com/OpenBMB/AgentVerse

AgentVerse is an open-source Python framework designed to facilitate the deployment of multiple LLM-based agents in various applications. It provides comprehensive support for building multi-agent systems that can collaborate, coordinate, and accomplish complex tasks.

## Key Features

AgentVerse primarily provides two frameworks:

### 1. Task-Solving Framework

A framework that assembles multiple agents into an automatic multi-agent system that collaboratively accomplishes a given task.

- Enables multi-agent systems to work together on complex problems
- Based on research from [AgentVerse-Tasksolving](https://arxiv.org/pdf/2308.10848.pdf) and [Multi-agent as system](https://arxiv.org/abs/2309.02427)
- Applications: software development systems, consulting systems, code generation, and more

**Notable Use Cases:**

- Software design and implementation with code writer, tester, and reviewer agents
- Problem solving with specialized agents using tools
- HumanEval benchmark testing
- Brainstorming tasks with multiple agents

### 2. Simulation Framework

Allows users to set up custom environments to observe behaviors among, or interact with, multiple agents.

- Applications: games, social behavior research of LLM-based agents, custom environments
- Note: The project is refactoring the simulation code. For stable simulation-only features, use the [`release-0.1`](https://github.com/OpenBMB/AgentVerse/tree/release-0.1) branch

**Notable Examples:**

- NLP Classroom: Multi-agent classroom environment with professor and students
- Prisoner's Dilemma: Strategic interaction between agents
- Software Design Environment: Collaborative code development
- Database Administrator (DBA) Monitoring: Multi-agent database diagnostics
- Pokemon Game: Interactive game environment with agent NPCs

## Project Status

- **License:** Apache 2.0
- **Python Version Required:** 3.9+
- **Build Status:** Active CI/CD pipeline
- **Code Style:** Black formatted
- **Paper:** Accepted at ICLR 2024
- **Latest Update:** Featured in NVIDIA's blog (March 2024) - "Building Your First LLM Agent Application"

## Community & Support

- **Discord:** https://discord.gg/gDAXfjMw
- **Twitter:** https://twitter.com/Agentverse71134
- **Hugging Face:** https://huggingface.co/spaces/AgentVerse/agentVerse
- **Email:** agentverse2@gmail.com
- **Research Paper:** https://arxiv.org/abs/2308.10848

## Citation

If you use AgentVerse in your research, please cite:

```bibtex
@article{chen2023agentverse,
  title={Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents},
  author={Chen, Weize and Su, Yusheng and Zuo, Jingwei and Yang, Cheng and Yuan, Chenfei and Qian, Chen and Chan, Chi-Min and Qin, Yujia and Lu, Yaxi and Xie, Ruobing and others},
  journal={arXiv preprint arXiv:2308.10848},
  year={2023}
}
```

## Contributors

**Project Leaders:**

- Weize Chen (chenweize1998@gmail.com)
- Yusheng Su (yushengsu.thu@gmail.com)

**Core Contributors:** The project welcomes contributions in multiple areas, including code development, documentation, application exploration, and community feedback.

---

# Installation and Setup

**Source:** https://github.com/OpenBMB/AgentVerse

## Installation

AgentVerse requires **Python 3.9 or higher**.

### Option 1: Manual Installation (Recommended)

Clone the repository and install in development mode:

```bash
git clone https://github.com/OpenBMB/AgentVerse.git --depth 1
cd AgentVerse
pip install -e .
```
### Option 2: Install via pip

Install the latest version from PyPI:

```bash
pip install -U agentverse
```

### Optional: Local Model Support

To use AgentVerse with local models such as LLaMA and Vicuna, install the additional dependencies:

```bash
pip install -r requirements_local.txt
```

## Environment Variables

### OpenAI API Configuration

Set up your OpenAI API key:

```bash
export OPENAI_API_KEY="your_api_key_here"
```

### Azure OpenAI Configuration

For Azure OpenAI services, export both your API key and base URL:

```bash
export AZURE_OPENAI_API_KEY="your_api_key_here"
export AZURE_OPENAI_API_BASE="your_api_base_here"
```

### vLLM Support

To use vLLM for larger inference workloads, set up the vLLM server first by following the [vLLM installation guide](https://docs.vllm.ai/en/latest/getting_started/quickstart.html). Then configure the following environment variables:

```bash
export VLLM_API_KEY="your_api_key_here"
export VLLM_API_BASE="http://your_vllm_url_here"
```

## Framework-Specific Module Requirements

### Simulation Framework

The simulation framework uses the following modules:

```
- agentverse
  - agents
    - simulation_agent
  - environments
    - simulation_env
```

### Task-Solving Framework

The task-solving framework uses the following modules:

```
- agentverse
  - agents
    - tasksolving_agent
  - environments
    - tasksolving_env
```

## Tools Integration

### BMTools Installation

If you want to run simulation cases with tools (e.g., `simulation/nlp_classroom_3players_withtool`), install BMTools:

```bash
git clone https://github.com/OpenBMB/BMTools.git
cd BMTools
pip install -r requirements.txt
python setup.py develop
```

This step is optional: simulation cases without tools run normally without BMTools installed.

### XAgent ToolServer

For tool-using task-solving cases (multi-agent systems using a web browser, Jupyter notebook, Bing search, etc.), build the ToolServer from [XAgent](https://github.com/OpenBMB/XAgent). Follow the [XAgent setup instructions](https://github.com/OpenBMB/XAgent#%EF%B8%8F-build-and-setup-toolserver) to build and run the ToolServer.

## Local Model Configuration

### FastChat Integration

For local models like LLaMA and Vicuna served via FastChat:

#### 1. Install Additional Dependencies

```bash
pip install -r requirements_local.txt
```

#### 2. Launch the Local Model Server

Modify the `MODEL_PATH` and `MODEL_NAME` variables according to your needs, then run:

```bash
bash scripts/run_local_model_server.sh
```

By default, this launches a service for the Llama-2 7B chat model.

#### Supported Models

The `MODEL_NAME` in AgentVerse currently supports:

- `llama-2-7b-chat-hf`
- `llama-2-13b-chat-hf`
- `llama-2-70b-chat-hf`
- `vicuna-7b-v1.5`
- `vicuna-13b-v1.5`

For additional [FastChat-compatible models](https://github.com/lm-sys/FastChat/blob/main/docs/model_support.md), you need to:

1. Add the `MODEL_NAME` to `LOCAL_LLMS` in `agentverse/llms/__init__.py`
2. Add the mapping from `MODEL_NAME` to its Hugging Face identifier in `LOCAL_LLMS_MAPPING` in `agentverse/llms/__init__.py`

A sketch of these two edits appears at the end of this section.

#### 3. Configure Your Config File

Set `llm_type` to `local` and `model` to the `MODEL_NAME`:

```yaml
llm:
  llm_type: local
  model: llama-2-7b-chat-hf
  ...
```

Refer to `agentverse/tasks/tasksolving/commongen/llama-2-7b-chat-hf/config.yaml` for a complete example.
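To make the two registration steps above concrete, here is a minimal sketch of how the edits might look. It assumes `LOCAL_LLMS` is a plain list of model names and `LOCAL_LLMS_MAPPING` a name-to-identifier dict, as the steps describe; the Mistral entries are illustrative placeholders, not names shipped with AgentVerse. Verify the actual structures in `agentverse/llms/__init__.py` before editing.

```python
# agentverse/llms/__init__.py (excerpt) -- registering a new
# FastChat-compatible model. The exact shape of these structures may differ
# between versions; mirror what is already in the file in your checkout.

LOCAL_LLMS = [
    "llama-2-7b-chat-hf",
    "llama-2-13b-chat-hf",
    "llama-2-70b-chat-hf",
    "vicuna-7b-v1.5",
    "vicuna-13b-v1.5",
    "mistral-7b-instruct",  # 1. add your MODEL_NAME here (placeholder)
]

LOCAL_LLMS_MAPPING = {
    # 2. map MODEL_NAME to its Hugging Face identifier (placeholder entry)
    "mistral-7b-instruct": "mistralai/Mistral-7B-Instruct-v0.2",
}
```

After these edits, the new name can be used as the `model` value with `llm_type: local` in a task config, exactly as in step 3 below.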
## Verification

After installation, verify that everything works by running a simple example:

```bash
# For simulation
agentverse-simulation --task simulation/nlp_classroom_3players

# For task-solving
agentverse-tasksolving --task tasksolving/brainstorming
```

---

# Simulation Framework

**Source:** https://github.com/OpenBMB/AgentVerse

The Simulation Framework allows you to create custom multi-agent environments where agents interact, collaborate, and exhibit emergent behaviors. This framework is ideal for research on agent behavior, games, and social dynamics.

## Overview

The simulation framework enables:

- Custom environment creation with configurable rules
- Multi-agent interaction and coordination
- Observation of emergent behaviors
- Interactive scenarios with agent participation

## Running Simulations

### CLI Example

Launch a pre-built simulation scenario using the CLI:

```bash
agentverse-simulation --task simulation/nlp_classroom_3players
```

This example runs a basic 3-player classroom with a professor, a student, and a teaching assistant.

### GUI Example

Launch a local web-based interface for visualization and interaction:

```bash
agentverse-simulation-gui --task simulation/nlp_classroom_9players
```

After starting the server, visit `http://127.0.0.1:7860/` to view and interact with the environment.

## Built-in Examples

### 1. NLP Classroom

A realistic classroom environment where:

- One agent acts as the professor
- Multiple agents are students
- Students raise their hands to ask questions
- The professor calls on students for their questions

**Run it:**

```bash
agentverse-simulation-gui --task simulation/nlp_classroom_9players
```

### 2. Prisoner's Dilemma

A game-theory scenario featuring:

- Two rational agents facing a strategic choice
- Options to cooperate for mutual benefit or betray for individual gain
- Study of rational decision-making in multi-agent systems

**Run it:**

```bash
agentverse-simulation-gui --task simulation/prisoner_dilemma
```

### 3. Software Design Environment

A collaborative development scenario with:

- Code Writer agent: generates the code implementation
- Code Tester agent: runs unit tests and provides feedback
- Code Reviewer agent: reviews code quality
- An iterative refinement process

**Run it:**

```bash
agentverse-simulation-gui --task simulation/sde_team/sde_team_2players
```

### 4. Database Administrator (DBA) Monitoring

A system monitoring scenario where:

- A Chief DBA monitors system anomalies (slow queries, locks, crashes)
- Domain expert agents analyze root causes
- The team provides recommendations and optimization solutions
- The Chief DBA generates diagnostic reports

**Run it:**

```bash
agentverse-simulation-gui --task simulation/db_diag
```

### 5. Pokemon Game

**Available in the [`release-0.1` branch](https://github.com/OpenBMB/AgentVerse/tree/release-0.1)**

An interactive game environment featuring:

- 6 Pokemon Emerald characters (May, Professor Birch, Steven Stone, Maxie, Archie, Joseph)
- Free movement and interaction between agents
- A player who can engage with agent characters as another agent
- WASD controls for movement, SPACE for conversation

**Setup:**

1. Launch the local server:

   ```bash
   uvicorn pokemon_server:app --reload --port 10002
   ```

2. In another terminal, start the UI:

   ```bash
   cd ui
   npm install # Required only on first run
   npm run watch
   ```

Controls: WASD for movement, SPACE to initiate conversation.
## Example Simulations with Tools

AgentVerse supports simulations where agents can use external tools:

### NLP Classroom with Tool Usage

```bash
agentverse-simulation-gui --task simulation/nlp_classroom_3players_withtool
```

Students can use the Bing search API while attending class.

### Math Problem Solving

```bash
agentverse-simulation-gui --task simulation/math_problem_2players_tools
```

Two agents collaborate using the WolframAlpha API to solve arithmetic problems.

## Configuration Structure

Simulations are configured using YAML files. Basic structure:

```yaml
environment:
  env_type: basic
  max_turns: 10
  rule:
    order:
      type: sequential
    visibility:
      type: all
    selector:
      type: basic
    updater:
      type: basic
    describer:
      type: basic

agents:
  - agent_type: conversation
    name: Agent Name
    role_description: Description of the agent's role
    memory:
      memory_type: chat_history
    prompt_template: |
      Your prompt here
    llm:
      llm_type: text-davinci-003
      model: text-davinci-003
      temperature: 0.7
      max_tokens: 250
```

## Custom Agent Types

### ConversationAgent

Standard agent for text-based conversation and interaction.

### ToolAgent

Agent with the capability to use external tools and APIs (requires BMTools).

## Customization

### Rule Components

The simulation framework abstracts environments into five customizable rule components:

1. **Describer**: Provides the environment description to agents each turn
2. **Order**: Defines the agent action sequence (sequential, random, concurrent)
3. **Selector**: Filters valid agent messages
4. **Updater**: Updates agent memory with relevant messages
5. **Visibility**: Maintains the list of visible agents for each agent

### Creating Custom Scenarios

To create your own simulation:

1. Create a task directory in `agentverse/tasks`
2. Write a `config.yaml` configuration file
3. Implement an output parser for agent responses (see the sketch at the end of this section)
4. Register the parser in `agentverse/tasks/__init__.py`

For detailed customization guides, see the main repository documentation.

## Hugging Face Integration

Try AgentVerse simulations online without a local installation:

- **Hugging Face Spaces:** https://huggingface.co/spaces/AgentVerse/agentVerse
- Supported scenarios: NLP Classroom, Prisoner's Dilemma
- Requires an OpenAI API key

## Community Examples

### ChatEval Integration

The [ChatEval](https://github.com/chanchimin/ChatEval) project implements a multi-agent referee team using AgentVerse to evaluate text generated by different models. Agents debate differences and provide judgments, showing better alignment with human evaluations than baseline approaches.

## Tips for Simulation Design

1. Start with existing examples to understand the framework
2. Test with small agent counts before scaling up
3. Configure memory appropriately for the context length
4. Use a reasonable `max_turns` to prevent infinite loops
5. Customize prompts and rule components for specific scenarios
6. Use tools to enable more sophisticated agent behaviors
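Regarding step 3 of "Creating Custom Scenarios" above: the parser base class and registration hook are not shown in this guide and vary by version, so the following is a hedged sketch rather than the framework's definitive interface. The `OutputParser` import path, the `output_parser_registry` decorator, and the `parse` signature are assumptions; mirror an existing parser under `agentverse/tasks/` in your checkout for the real names.

```python
# Hypothetical output parser for a custom simulation task. The import path,
# base class, decorator, and return type below are assumptions for
# illustration only -- copy the structure of a shipped task's parser instead.
import re

from agentverse.parser import OutputParser, output_parser_registry  # assumed path


@output_parser_registry.register("my_custom_task")  # matches the task directory name
class MyCustomTaskParser(OutputParser):
    def parse(self, output: str) -> str:
        """Extract the agent's spoken line from a raw LLM completion."""
        # Expect completions of the form "Speak: <utterance>"; fall back to
        # the raw text if the marker is missing.
        match = re.search(r"Speak:\s*(.*)", output, re.DOTALL)
        return match.group(1).strip() if match else output.strip()
```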
---

# Task-Solving Framework

**Source:** https://github.com/OpenBMB/AgentVerse

The Task-Solving Framework assembles multiple agents into an automatic multi-agent system that collaboratively accomplishes complex tasks. This framework is ideal for real-world applications like software development, consulting, code generation, and problem-solving.

## Overview

The task-solving framework enables:

- Automatic coordination of multiple specialized agents
- Collaborative problem-solving and code generation
- Tool usage for web browsing, file operations, and computations
- Benchmark evaluation on standard datasets
- Support for various LLM backends (OpenAI, Azure, vLLM, local models)

## Key Applications

- **Software Development**: Automated code writing, testing, and reviewing
- **Code Generation**: Solving programming challenges and benchmarks
- **Consulting Systems**: Multi-expert systems providing comprehensive solutions
- **Problem Solving**: Complex tasks requiring multiple specialized agents
- **Tool Usage**: Agents using browsers, notebooks, search APIs, and more

## Running Task-Solving Examples

### Benchmark Evaluation

Evaluate agents on standard benchmarks like HumanEval:

```bash
agentverse-benchmark --task tasksolving/humaneval/gpt-3.5 \
                     --dataset_path data/humaneval/test.jsonl \
                     --overwrite
```

**Configuration path:** `agentverse/tasks/tasksolving/humaneval/gpt-3.5/config.yaml`

### Single-Query Tasks

Run agents on a specific problem (the task is defined in the config file):

```bash
agentverse-tasksolving --task tasksolving/brainstorming
```

**Configuration path:** `agentverse/tasks/tasksolving/brainstorming/gpt-3.5/config.yaml`

### Tool-Using Tasks

For multi-agent systems using external tools (web browser, Jupyter, Bing search, etc.):

#### 1. Set up the XAgent ToolServer

Follow the [XAgent ToolServer setup guide](https://github.com/OpenBMB/XAgent#%EF%B8%8F-build-and-setup-toolserver) to build and run the ToolServer.

#### 2. Run a Tool-Using Task

```bash
agentverse-tasksolving --task tasksolving/tool_using/24point
```

Additional tool-using tasks are provided in `agentverse/tasks/tasksolving/tool_using/`.

## Available Tasks

### Code Generation Tasks

Located in `agentverse/tasks/tasksolving/`:

- **humaneval/**: Code generation on the standard HumanEval benchmark
- **brainstorming/**: Creative problem-solving with multiple agents
- **commongen/**: Commonsense generation tasks
- **sde_team/**: Software development with code writer, tester, and reviewer

### Tool-Using Tasks

Located in `agentverse/tasks/tasksolving/tool_using/`, these demonstrate how multiple agents can coordinate to use various tools:

- Web browsing
- File system operations
- Jupyter notebook execution
- Bing search integration
- WolframAlpha computation

Examples include:

- `24point`: Mathematical game solving
- Custom problem-solving tasks

## Configuration Structure

Task-solving configurations specify agent roles, LLM settings, and environment parameters:

```yaml
environment:
  env_type: tasksolving
  max_turns: 30
  rule:
    order:
      type: sequential
    visibility:
      type: all
    selector:
      type: basic
    updater:
      type: basic
    describer:
      type: task_describer

agents:
  - agent_type: conversation
    name: Code Writer
    role_description: You are an expert programmer...
    prompt_template: |
      Your specialized prompt here
    llm:
      llm_type: gpt-4
      model: gpt-4
      temperature: 0.7
      max_tokens: 2000
  - agent_type: tool_agent
    name: Code Tester
    role_description: You review and test code...
    tools: [bash_tool, python_tool]
    llm:
      llm_type: gpt-4
      model: gpt-4
```
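Before launching a run, it can help to sanity-check a config like the one above. This is a local convenience sketch using PyYAML, not part of the AgentVerse CLI; it assumes the `environment`/`agents` key layout shown above (shipped configs may nest additional keys), and the path is just an example.

```python
# Quick sanity check for a task-solving config: parse the YAML and list the
# configured agents. Purely a local convenience, not an AgentVerse API.
import yaml

CONFIG_PATH = "agentverse/tasks/tasksolving/brainstorming/gpt-3.5/config.yaml"  # example

with open(CONFIG_PATH) as f:
    cfg = yaml.safe_load(f)

print("env_type: ", cfg["environment"]["env_type"])
print("max_turns:", cfg["environment"]["max_turns"])
for agent in cfg["agents"]:
    print(f"- {agent['name']} ({agent['agent_type']}) -> {agent['llm']['model']}")
```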
## LLM Configuration

### OpenAI Models

```yaml
llm:
  llm_type: openai
  model: gpt-4
  temperature: 0.7
  max_tokens: 2000
```

### Azure OpenAI

```yaml
llm:
  llm_type: azure
  model: gpt-4-deployment-name
  temperature: 0.7
  max_tokens: 2000
```

### vLLM (Local Large Models)

```yaml
llm:
  llm_type: vllm
  model: llama-2-70b-chat-hf
  temperature: 0.7
  max_tokens: 2000
```

Configure the environment variables:

```bash
export VLLM_API_KEY="your_api_key"
export VLLM_API_BASE="http://localhost:8000"
```

### Local Models (FastChat)

```yaml
llm:
  llm_type: local
  model: llama-2-7b-chat-hf
  temperature: 0.7
  max_tokens: 2000
```

See [Installation and Setup](02-installation-and-setup.md) for local model configuration.

## Required Framework Modules

The task-solving framework uses the following modules:

```
- agentverse
  - agents
    - tasksolving_agent
  - environments
    - tasksolving_env
```

## Agent Types

### ConversationAgent

Standard agent for dialogue and problem-solving tasks.

**Capabilities:**

- Text generation and reasoning
- Memory management
- Prompt templating
- LLM integration

### ToolAgent

Agent with external tool usage capabilities (requires BMTools and the XAgent ToolServer).

**Capabilities:**

- Everything from ConversationAgent
- Tool invocation and orchestration
- Complex multi-step problem solving
- File and system operations

## Memory Management

Agents support multiple memory types:

```yaml
memory:
  memory_type: chat_history # Stores the full conversation history
```

Future enhancements will support more sophisticated memory strategies.

## Output Parsing

Agents generate structured outputs that are parsed by custom parsers. Example format:

```
Action: Write Code
Action Input:
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)
```

Parsers extract structured information from agent outputs for task evaluation.
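To illustrate what such a parser extracts, here is a small, self-contained sketch using only the standard library. It is keyed to the example format above; the real parsers under `agentverse/tasks/` are task-specific and handle more formats and error cases.

```python
# Minimal illustration of parsing the "Action / Action Input" format shown
# above; real AgentVerse parsers are task-specific and more defensive.
import re


def parse_action(output: str) -> tuple[str, str]:
    """Split a raw completion into (action, action_input)."""
    match = re.search(r"Action:\s*(.+?)\s*Action Input:\s*(.*)", output, re.DOTALL)
    if match is None:
        raise ValueError(f"Unparsable output: {output!r}")
    return match.group(1).strip(), match.group(2).strip()


action, action_input = parse_action(
    "Action: Write Code\nAction Input:\ndef fibonacci(n):\n    return n"
)
assert action == "Write Code"
```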
## Customization Guidelines

### Creating Custom Task-Solving Scenarios

1. Create a task directory: `agentverse/tasks/tasksolving/your_task/`
2. Define the configuration: `config.yaml`
3. Implement an output parser for agent responses
4. Register the parser in `agentverse/tasks/__init__.py`

### Customizing Agents

Inherit from the `BaseAgent` class for specialized agent behavior:

```python
class CustomAgent(BaseAgent):
    def __init__(self, name, role_description, *args, **kwargs):
        super().__init__(name, role_description, *args, **kwargs)

    def generate_response(self, prompt):
        # Custom response generation logic
        pass
```

### Customizing Environments

For specialized task-solving environments, inherit from `BaseEnvironment` and implement custom execution logic.

## Best Practices

1. **Agent Roles**: Clearly define specialized roles for each agent
2. **Prompts**: Craft detailed prompts that guide agent behavior
3. **Temperature**: Use lower temperatures (0.3-0.5) for deterministic tasks, higher (0.7-0.9) for creative tasks
4. **Context Length**: Monitor `max_tokens` to ensure complete outputs
5. **Tool Integration**: Test tools before multi-agent execution
6. **Evaluation**: Use standardized benchmarks to measure performance
7. **Iteration**: Start with simple tasks and progressively add complexity

## Related Work

The Task-Solving Framework is based on research published in:

- [AgentVerse: Facilitating Multi-Agent Collaboration](https://arxiv.org/abs/2308.10848) - ICLR 2024
- [Multi-agent as System](https://arxiv.org/abs/2309.02427)

See the research paper for detailed algorithms and performance comparisons.

## Troubleshooting

### Agents Not Responding

1. Check LLM API credentials and rate limits
2. Verify that prompt templates are valid
3. Check agent memory and context length settings

### Tools Not Working

1. Ensure the XAgent ToolServer is running
2. Verify the tool configuration in the agent definition
3. Check tool API credentials and permissions

### Low Performance

1. Improve prompt engineering
2. Experiment with different agent roles and specializations
3. Adjust temperature and `max_tokens`
4. Use stronger models (GPT-4 vs. GPT-3.5)

---

# Agent Types and Customization

**Source:** https://github.com/OpenBMB/AgentVerse

AgentVerse provides built-in agent types and supports extensive customization for specialized applications. This guide covers agent types, memory systems, and how to create custom agents.

## Built-in Agent Types

### ConversationAgent

The standard agent type for text-based dialogue and reasoning tasks.

**Features:**

- Natural language generation and understanding
- Memory management through chat history
- Configurable prompts and role descriptions
- Integration with multiple LLM backends
- Support for tool-less interactions

**Example Configuration:**

```yaml
agents:
  - agent_type: conversation
    name: Code Reviewer
    role_description: |
      You are an expert code reviewer with 10+ years of experience.
      Your job is to review code quality, identify bugs, and suggest improvements.
    memory:
      memory_type: chat_history
    prompt_template: |
      You are ${role}.
      Your task: Review the following code
      ${context}
      Provide detailed feedback.
    llm:
      llm_type: gpt-4
      model: gpt-4
      temperature: 0.3
      max_tokens: 1000
```

**Methods:**

- `generate_response()`: Main method for generating agent responses
- `update_memory()`: Stores messages and conversation history
- `fill_prompt_template()`: Instantiates prompt templates with context

### ToolAgent

Specialized agent with the capability to use external tools and APIs (requires BMTools and XAgent).

**Features:**

- All ConversationAgent capabilities
- External tool invocation
- Multi-step problem solving with tool chains
- Complex task execution
- System interaction (file operations, code execution, web browsing)

**Example Configuration:**

```yaml
agents:
  - agent_type: tool_agent
    name: Research Agent
    role_description: |
      You are a research agent capable of searching the web,
      reading documents, and synthesizing information.
    tools:
      - web_search
      - file_reader
      - summarizer
    memory:
      memory_type: chat_history
    prompt_template: |
      You have access to the following tools:
      ${tools}
      Task: ${task}
    llm:
      llm_type: gpt-4
      model: gpt-4
      temperature: 0.5
      max_tokens: 2000
```

**Available Tools (via XAgent):**

- Web browsing and search
- File system operations
- Code execution (Jupyter notebooks)
- Shell commands
- Mathematical computation
- API interactions
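The `${role}`, `${context}`, `${tools}`, and `${task}` placeholders in the templates above follow the same syntax as Python's `string.Template`. Whether AgentVerse uses that class internally is version-dependent, but the substitution behaves like this standard-library-only sketch; `fill_prompt_template()` additionally manages the substitution values itself.

```python
# Stand-alone illustration of the ${...} placeholder style used in the
# prompt_template fields above; values are supplied by hand here.
from string import Template

prompt_template = Template(
    "You are ${role}.\n"
    "Your task: Review the following code\n"
    "${context}\n"
    "Provide detailed feedback."
)

prompt = prompt_template.substitute(
    role="an expert code reviewer",
    context="def add(a, b): return a - b",
)
print(prompt)
```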
## Memory Types

### chat_history

Stores the complete conversation history for the agent.

```yaml
memory:
  memory_type: chat_history
```

**Behavior:**

- Maintains sequential message history
- Includes both agent outputs and external inputs
- Used as context in subsequent turns
- Respects the context length limits of the LLM

**Example Memory Structure:**

```
[
  {"role": "user", "content": "Write a function to sort an array"},
  {"role": "assistant", "content": "def sort_array(arr): ..."},
  {"role": "user", "content": "Add error handling"},
  {"role": "assistant", "content": "def sort_array(arr): ..."}
]
```

### Future Memory Types

The roadmap includes:

- Vector similarity-based memory
- Hierarchical memory (summary + details)
- Experience replay memory
- Semantic memory

## Customizing Agents

### Method 1: Configuration-Based Customization

For most use cases, customize agents through configuration files:

**1. Define Role and Behavior via the Prompt Template:**

```yaml
prompt_template: |
  You are ${role_name}.
  Your responsibilities: ${responsibilities}
  Context: ${context}
  Guidelines:
  - Be concise but thorough
  - Provide actionable feedback
  - Ask clarifying questions when needed
```

**2. Adjust LLM Parameters:**

```yaml
llm:
  llm_type: gpt-4
  model: gpt-4
  temperature: 0.3     # Lower for consistency
  max_tokens: 1500     # For detailed responses
  top_p: 0.9           # Nucleus sampling
  frequency_penalty: 0.5
```

**3. Configure the Memory Strategy:**

```yaml
memory:
  memory_type: chat_history
  # Future: enable advanced memory strategies
```

### Method 2: Code-Based Customization

For advanced customization, inherit from `BaseAgent`:

```python
from agentverse.agents import BaseAgent


class CustomSearchAgent(BaseAgent):
    """Custom agent with specialized search capabilities."""

    def __init__(self, name, role_description, llm_config, tools=None):
        super().__init__(name, role_description, llm_config)
        self.tools = tools or []
        self.search_cache = {}

    def generate_response(self, prompt):
        """Generate a response with custom logic."""
        # Pre-process the prompt
        processed_prompt = self._preprocess(prompt)

        # Call the parent LLM
        response = super().generate_response(processed_prompt)

        # Post-process if tool-related
        if self._should_use_tool(response):
            response = self._execute_tool(response)

        return response

    def _preprocess(self, prompt):
        """Custom preprocessing logic."""
        # Add search context
        if "search" in prompt.lower():
            context = self._get_cached_searches()
            return f"{prompt}\n\nRecent searches: {context}"
        return prompt

    def _get_cached_searches(self):
        """Return previously cached search queries."""
        return list(self.search_cache)

    def _should_use_tool(self, response):
        """Check whether the response triggers tool usage."""
        return "[TOOL:" in response or "[SEARCH:" in response

    def _execute_tool(self, response):
        """Execute embedded tool commands."""
        # Parse and execute tool calls, then return the augmented response.
        # Placeholder: return the response unchanged until tools are wired in.
        return response
```

Register the custom agent:

```python
# In agentverse/agents/__init__.py
from .custom_agents import CustomSearchAgent

__all__ = [
    'ConversationAgent',
    'ToolAgent',
    'CustomSearchAgent',  # Register here
]
```

Use it in a config:

```yaml
agents:
  - agent_type: custom_search
    name: Smart Researcher
    role_description: You are a research expert...
```
## Advanced Configuration

### Multi-Agent Team Structure

Define a specialized team for complex tasks:

```yaml
agents:
  # Planner agent
  - agent_type: conversation
    name: Project Manager
    role_description: You organize and plan the project...
    prompt_template: |
      Create a structured plan for: ${task}
      Output format:
      1. Objectives
      2. Milestones
      3. Resource requirements
    llm:
      llm_type: gpt-4
      model: gpt-4
      temperature: 0.3
      max_tokens: 1500

  # Domain expert agent
  - agent_type: conversation
    name: Technical Lead
    role_description: You provide technical guidance and validation...
    llm:
      llm_type: gpt-4
      model: gpt-4
      temperature: 0.4
      max_tokens: 2000

  # Tool-using agent
  - agent_type: tool_agent
    name: Implementation Agent
    role_description: You execute technical tasks using available tools...
    tools: [code_executor, file_manager, api_client]
    llm:
      llm_type: gpt-4
      model: gpt-4
      temperature: 0.5
      max_tokens: 3000

  # Quality assurance agent
  - agent_type: conversation
    name: QA Specialist
    role_description: You test solutions and validate requirements...
    llm:
      llm_type: gpt-4
      model: gpt-4
      temperature: 0.3
      max_tokens: 1500
```

### Dynamic Temperature Tuning

Adjust the temperature per agent based on the task:

```yaml
# For consistency (code review)
temperature: 0.1

# For creativity (brainstorming)
temperature: 0.9

# For balanced output (general reasoning)
temperature: 0.5
```

### Context Management

Manage context windows for long conversations:

```yaml
llm:
  llm_type: gpt-4
  model: gpt-4
  max_tokens: 2000      # Output limit
  context_window: 8000  # Input limit

memory:
  memory_type: chat_history
  max_history_turns: 10 # Keep only the last 10 turns
  # Future: summarize older context
```
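The `max_history_turns` behavior above (keep only the most recent turns) amounts to a sliding window over the message list. A minimal sketch of that policy, independent of AgentVerse's memory classes; the assumption that one "turn" is a (user, assistant) message pair is ours, and the framework may count turns differently:

```python
# Sliding-window truncation as described by max_history_turns above.

def truncate_history(messages: list[dict], max_history_turns: int) -> list[dict]:
    """Keep only the messages belonging to the last N turns (user/assistant pairs)."""
    return messages[-2 * max_history_turns:]


history = [
    {"role": "user", "content": "Write a function to sort an array"},
    {"role": "assistant", "content": "def sort_array(arr): ..."},
    {"role": "user", "content": "Add error handling"},
    {"role": "assistant", "content": "def sort_array(arr): ..."},
]
assert len(truncate_history(history, max_history_turns=1)) == 2
```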
## Creating Specialized Agent Classes

### Research Agent

Specialized for information gathering and synthesis:

```python
class ResearchAgent(BaseAgent):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.sources = []
        self.verified_facts = []

    def generate_response(self, prompt):
        # First: gather information
        sources = self._search_sources(prompt)

        # Second: generate a response grounded in the sources
        response = super().generate_response(
            f"{prompt}\n\nSources: {sources}"
        )

        # Third: cite the sources in the response
        return self._add_citations(response)
```

### Creative Agent

Specialized for brainstorming and creative tasks:

```python
class CreativeAgent(BaseAgent):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.llm_config['temperature'] = 0.8
        self.llm_config['top_p'] = 0.95

    def generate_response(self, prompt):
        # Nudge the model toward idea generation
        enhanced_prompt = f"{prompt}\n\nBe creative and generate multiple ideas."
        response = super().generate_response(enhanced_prompt)

        # Extract and format the ideas
        return self._format_ideas(response)
```

### Validator Agent

Specialized for quality assurance:

```python
class ValidatorAgent(BaseAgent):
    def generate_response(self, prompt):
        response = super().generate_response(prompt)

        # Check response quality
        issues = self._validate_response(response)
        if issues:
            return f"Issues found:\n{issues}\n\nRevised response:\n{response}"
        return response
```

## Best Practices

1. **Clear Role Descriptions**: Write detailed, specific role descriptions
2. **Prompt Engineering**: Invest time in crafting effective prompts
3. **Temperature Selection**: Match the temperature to the task type
4. **Memory Management**: Balance context length with information retention
5. **Tool Selection**: Carefully choose tools for ToolAgents
6. **Error Handling**: Implement graceful degradation for tool failures
7. **Monitoring**: Log agent responses for quality analysis
8. **Testing**: Test agents individually before multi-agent scenarios

## Troubleshooting

### Agent Ignoring Instructions

- Increase specificity in the prompt template
- Lower the temperature for consistency
- Add explicit output format requirements

### Memory Issues

- Reduce `max_history_turns` if the context overflows
- Use summarization for long conversations
- Clear memory between unrelated tasks

### Tool Failures

- Verify tool configuration and credentials
- Add error handling in the agent logic
- Implement tool execution timeouts
- Log tool calls for debugging

---

# Environment Rules and Customization

**Source:** https://github.com/OpenBMB/AgentVerse

The AgentVerse framework abstracts multi-agent environments into five customizable rule components. This modular design enables flexible environment creation for various scenarios while keeping code reusable.

## The Five Rule Components

### 1. Describer

**Purpose:** Provides the environment description to agents each turn.

**Role:**

- Defines what information agents receive about the environment
- Specifies agent visibility and interaction context
- Generates dynamic descriptions based on the environment state

**Configuration:**

```yaml
environment:
  rule:
    describer:
      type: basic # or custom
```

**Types:**

- **basic**: No custom description (agents receive minimal context)
- **task_describer**: For task-solving scenarios
- **simulation_describer**: For simulation scenarios
- **custom**: User-defined describer

**Example Custom Describer:**

```python
from agentverse.environments.rules import BaseDescriber


class LocationAwareDescriber(BaseDescriber):
    """Describes the environment based on agent location."""

    def describe(self, agent, environment):
        """Return the description for an agent."""
        location = agent.location
        visible_agents = self._get_nearby_agents(agent, environment)

        description = f"""
        Location: {location}
        Nearby agents: {visible_agents}
        Available actions: {self._get_available_actions(location)}
        """
        return description
```

### 2. Order

**Purpose:** Defines the sequence in which agents take actions.

**Role:**

- Controls turn order and action timing
- Determines the synchronization of agent actions
- Manages action scheduling

**Configuration:**

```yaml
environment:
  rule:
    order:
      type: sequential # or random, concurrent
```

**Types:**

- **sequential**: Agents act one at a time in a defined order
- **random**: Random agent selection for each turn
- **concurrent**: All agents act simultaneously each turn
- **custom**: User-defined ordering

**Example Configurations:**

Sequential order:

```yaml
order:
  type: sequential
  agent_sequence: [Professor, Student1, Student2, ...]
```

Random order:

```yaml
order:
  type: random
  seed: 42 # For reproducibility
```

Concurrent order:

```yaml
order:
  type: concurrent
```

**Custom Order Example:**

```python
from agentverse.environments.rules import BaseOrder


class PriorityBasedOrder(BaseOrder):
    """Prioritize certain agents based on status."""

    def get_agent_order(self, agents, environment):
        """Return the agent order based on priority."""
        # Act urgent agents first
        urgent = [a for a in agents if a.status == 'urgent']
        normal = [a for a in agents if a.status != 'urgent']
        return urgent + normal
```
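Connecting the `random` order type with its `seed` parameter above: a reproducible shuffle can be implemented with a dedicated `random.Random` instance. Like the other code in this guide, this is a sketch against the `BaseOrder` interface shown above, not the framework's shipped implementation; constructor handling depends on the actual base class.

```python
import random

from agentverse.environments.rules import BaseOrder  # import path as used above


class SeededRandomOrder(BaseOrder):
    """Random turn order that is reproducible across runs."""

    def __init__(self, seed=42):
        super().__init__()
        # A private RNG keeps the shuffle independent of other random users
        self._rng = random.Random(seed)

    def get_agent_order(self, agents, environment):
        shuffled = list(agents)
        self._rng.shuffle(shuffled)
        return shuffled
```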
### 3. Selector

**Purpose:** Filters and validates agent-generated messages.

**Role:**

- Accepts or rejects agent outputs
- Validates message format and content
- Implements business-logic constraints

**Configuration:**

```yaml
environment:
  rule:
    selector:
      type: basic # or classroom, custom
```

**Types:**

- **basic**: Accept all messages (no filtering)
- **classroom**: Validates hand-raising and professor calls in the classroom
- **custom**: User-defined validation logic

**Example: Classroom Selector**

In a classroom environment:

- Students can only speak after being called on
- The professor can call on any student
- Invalid messages are filtered out

**Custom Selector Example:**

```python
from agentverse.environments.rules import BaseSelector


class RoleBasedSelector(BaseSelector):
    """Filter messages based on agent roles and permissions."""

    def select(self, message, agent, environment):
        """Validate a message before accepting it."""
        # Only allow certain agents to issue commands
        if "command:" in message.lower():
            if agent.role != "admin":
                return False, "Only admins can issue commands"

        # Validate the message length
        if len(message) > 5000:
            return False, "Message too long"

        # Check for prohibited content
        if self._contains_prohibited(message):
            return False, "Message contains prohibited content"

        return True, None
```

### 4. Updater

**Purpose:** Updates agent memory with relevant messages.

**Role:**

- Decides which agents receive which messages
- Manages selective information distribution
- Maintains memory consistency

**Configuration:**

```yaml
environment:
  rule:
    updater:
      type: basic # or location_aware, custom
```

**Types:**

- **basic**: All agents receive all messages
- **location_aware**: Only nearby agents receive messages
- **room_based**: Agents in the same room receive messages
- **custom**: User-defined distribution logic

**Example: Location-Aware Updater**

```python
from agentverse.environments.rules import BaseUpdater


class LocationAwareUpdater(BaseUpdater):
    """Update only agents within communication range."""

    def update(self, message, sender, environment):
        """Distribute a message to the relevant agents."""
        recipients = []

        # Find agents within communication range
        for agent in environment.agents:
            distance = self._calculate_distance(sender, agent)
            if distance <= self.communication_range:
                agent.update_memory(message)
                recipients.append(agent)

        return recipients
```
### 5. Visibility

**Purpose:** Maintains the list of agents each agent can see and interact with.

**Role:**

- Defines agent perception and awareness
- Updates visibility as the environment changes
- Controls interaction possibilities

**Configuration:**

```yaml
environment:
  rule:
    visibility:
      type: all # or location_based, custom
```

**Types:**

- **all**: All agents see all other agents
- **location_based**: Agents only see nearby agents
- **group_based**: Agents see agents in the same group
- **custom**: User-defined visibility logic

**Example: Dynamic Visibility**

```python
from agentverse.environments.rules import BaseVisibility


class DynamicVisibility(BaseVisibility):
    """Update visibility as agents move."""

    def get_visible_agents(self, agent, environment):
        """Return the list of visible agents for a given agent."""
        visible = []
        for other in environment.agents:
            if other == agent:
                continue

            # Check if in the same location
            if agent.location == other.location:
                visible.append(other)
            # Check if in adjacent rooms
            elif self._adjacent_locations(agent.location, other.location):
                visible.append(other)

        return visible

    def update_visibility(self, environment):
        """Update visibility for all agents."""
        for agent in environment.agents:
            agent.visible_agents = self.get_visible_agents(agent, environment)
```

## Complete Environment Configuration

A complete environment specification integrating all five components:

```yaml
environment:
  env_type: basic
  max_turns: 30
  rule:
    # Agent action sequence
    order:
      type: sequential

    # Environment description
    describer:
      type: basic

    # Message filtering
    selector:
      type: classroom
      hand_raise_required: true

    # Message distribution
    updater:
      type: basic

    # Agent perception
    visibility:
      type: all

  # Additional parameters shared between components
  rule_params:
    location_threshold: 10 # For location-based logic
    communication_range: 5

agents:
  # Agent definitions...
```

## Environment Customization Strategies

### Strategy 1: Simple Modification

Start with built-in types and override minimal logic:

```python
class SlightlyCustomSelector(BaseSelector):
    """Adds length validation to the basic selector."""

    def select(self, message, agent, environment):
        # Add custom validation
        if len(message) > 3000:
            return False, "Message exceeds length limit"

        # Otherwise accept, as the basic selector does
        return True, None
```

### Strategy 2: Specialized Environment

Create domain-specific rule components:

**Example: Game Environment**

```python
class GameDescriber(BaseDescriber):
    """Describes the game board state."""

    def describe(self, agent, environment):
        board = environment.game_board
        return f"Board state:\n{board}\nYour pieces: {agent.pieces}"


class GameOrder(BaseOrder):
    """Alternates turns between players."""

    def get_agent_order(self, agents, environment):
        # Alternate between the red and blue teams
        return sorted(agents, key=lambda a: a.team)


class GameSelector(BaseSelector):
    """Validates legal moves."""

    def select(self, message, agent, environment):
        move = self._parse_move(message)
        return environment.is_legal_move(move, agent)


class GameUpdater(BaseUpdater):
    """Updates both players with each move."""

    def update(self, message, sender, environment):
        for agent in environment.agents:
            agent.update_memory(f"Opponent: {message}")


class GameVisibility(BaseVisibility):
    """Both players see the entire board."""

    def get_visible_agents(self, agent, environment):
        return [a for a in environment.agents if a != agent]
```
### Strategy 3: Component Interaction

Components communicate via `rule_params`:

```python
class SmartDescriber(BaseDescriber):
    def describe(self, agent, environment):
        # Access rule_params set by other components
        urgent_agents = environment.rule_params.get('urgent_agents', [])
        return f"Urgent agents: {urgent_agents}"


class SmartSelector(BaseSelector):
    def select(self, message, agent, environment):
        # Mark urgent agents
        if "urgent" in message:
            environment.rule_params['urgent_agents'] = [agent]
        return True, None


class SmartOrder(BaseOrder):
    def get_agent_order(self, agents, environment):
        # Prioritize agents marked as urgent
        urgent = environment.rule_params.get('urgent_agents', [])
        normal = [a for a in agents if a not in urgent]
        return urgent + normal
```

## Best Practices

1. **Start Simple**: Use built-in rule types before creating custom ones
2. **Clear Separation**: Keep logic isolated in a single rule component
3. **Testing**: Test each rule component independently
4. **Documentation**: Document custom rules clearly
5. **Performance**: Consider the efficiency of rule execution in loops
6. **Flexibility**: Use `rule_params` for inter-component communication
7. **Inheritance**: Extend the base classes rather than reimplementing them
8. **Logging**: Add logging to rule components for debugging

## Common Patterns

### Multi-Room Environment

```python
class RoomAwareRules:
    """Rules for multi-room environments."""
    describer = RoomAwareDescriber()
    order = SequentialOrder()
    selector = RoleBasedSelector()
    updater = RoomAwareUpdater()
    visibility = RoomAwareVisibility()
```

### Game Environment

```python
class GameRules:
    """Rules for turn-based games."""
    describer = GameDescriber()
    order = TurnBasedOrder()
    selector = GameSelector()
    updater = BroadcastUpdater()
    visibility = FullVisibility()
```

### Hierarchical Organization

```python
class HierarchicalRules:
    """Rules for hierarchical structures (company, military)."""
    describer = HierarchyAwareDescriber()
    order = HierarchyBasedOrder()
    selector = RankBasedSelector()
    updater = HierarchyAwareUpdater()
    visibility = RankAwareVisibility()
```