# Camel > This documentation was extracted from the [CAMEL GitHub repository](https://github.com/camel-ai/camel). --- # CAMEL Documentation Source This documentation was extracted from the [CAMEL GitHub repository](https://github.com/camel-ai/camel). ## About CAMEL CAMEL (Communicative Agents for "Mind" Exploration of Large-Scale Language Model Society) is a multi-agent framework for exploring the scaling laws of agents. - **Repository**: https://github.com/camel-ai/camel - **Website**: https://www.camel-ai.org - **Documentation**: https://docs.camel-ai.org - **License**: Apache 2.0 ## Documentation Structure - `get_started/` - Getting started guides and installation instructions - `key_modules/` - Core module documentation (agents, models, prompts, etc.) - Reference files - API reference documentation (RST -> Markdown converted) ## Last Updated Documentation extracted on 2026-01-01 from the master branch. --- # camel.agents package ## Subpackages ::: {.toctree maxdepth="4"} camel.agents.tool_agents ::: ## Submodules ## camel.agents.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.agents.base ::: ## camel.agents.chat_agent module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.agents.chat_agent ::: ## camel.agents.critic_agent module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.agents.critic_agent ::: ## camel.agents.deductive_reasoner_agent module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.agents.deductive_reasoner_agent ::: ## camel.agents.embodied_agent module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.agents.embodied_agent ::: ## camel.agents.knowledge_graph_agent module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.agents.knowledge_graph_agent ::: ## camel.agents.role_assignment_agent module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.agents.role_assignment_agent ::: ## camel.agents.search_agent module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.agents.search_agent ::: ## camel.agents.task_agent module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.agents.task_agent ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.agents ::: --- # camel.agents.tool_agents package ## Submodules ## camel.agents.tool_agents.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.agents.tool_agents.base ::: ## camel.agents.tool_agents.hugging_face_tool_agent module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.agents.tool_agents.hugging_face_tool_agent ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.agents.tool_agents ::: --- # camel.benchmarks package ## Submodules ## camel.benchmarks.apibank module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.benchmarks.apibank ::: ## camel.benchmarks.apibench module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.benchmarks.apibench ::: ## camel.benchmarks.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.benchmarks.base ::: ## camel.benchmarks.gaia module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.benchmarks.gaia ::: ## camel.benchmarks.nexus module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.benchmarks.nexus ::: ## 
camel.benchmarks.ragbench module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.benchmarks.ragbench ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.benchmarks ::: --- # camel.bots.discord package ## Submodules ## camel.bots.discord.discord_app module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.bots.discord.discord_app ::: ## camel.bots.discord.discord_installation module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.bots.discord.discord_installation ::: ## camel.bots.discord.discord_store module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.bots.discord.discord_store ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.bots.discord ::: --- # camel.bots package ## Subpackages ::: {.toctree maxdepth="4"} camel.bots.discord camel.bots.slack ::: ## Submodules ## camel.bots.telegram_bot module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.bots.telegram_bot ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.bots ::: --- # camel.bots.slack package ## Submodules ## camel.bots.slack.models module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.bots.slack.models ::: ## camel.bots.slack.slack_app module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.bots.slack.slack_app ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.bots.slack ::: --- # camel.configs package ## Submodules ## camel.configs.anthropic_config module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.configs.anthropic_config ::: ## camel.configs.base_config module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.configs.base_config ::: ## camel.configs.cometapi_config module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.configs.cometapi_config ::: ## camel.configs.gemini_config module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.configs.gemini_config ::: ## camel.configs.groq_config module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.configs.groq_config ::: ## camel.configs.litellm_config module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.configs.litellm_config ::: ## camel.configs.mistral_config module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.configs.mistral_config ::: ## camel.configs.ollama_config module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.configs.ollama_config ::: ## camel.configs.openai_config module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.configs.openai_config ::: ## camel.configs.reka_config module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.configs.reka_config ::: ## camel.configs.samba_config module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.configs.samba_config ::: ## camel.configs.togetherai_config module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.configs.togetherai_config ::: ## camel.configs.vllm_config module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.configs.vllm_config ::: ## camel.configs.zhipuai_config module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.configs.zhipuai_config ::: 
## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.configs ::: --- # camel.data_collector package ## Submodules ## camel.data_collector.alpaca_collector module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.data_collector.alpaca_collector ::: ## camel.data_collector.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.data_collector.base ::: ## camel.data_collector.sharegpt_collector module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.data_collector.sharegpt_collector ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.data_collector ::: --- # camel.datagen package ## Subpackages ::: {.toctree maxdepth="4"} camel.datagen.self_instruct camel.datagen.source2synth ::: ## Submodules ## camel.datagen.cot_datagen module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datagen.cot_datagen ::: ## camel.datagen.self_improving_cot module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datagen.self_improving_cot ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datagen ::: --- # camel.datagen.self_instruct.filter package ## Submodules ## camel.datagen.self_instruct.filter.filter_function module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datagen.self_instruct.filter.filter_function ::: ## camel.datagen.self_instruct.filter.filter_registry module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datagen.self_instruct.filter.filter_registry ::: ## camel.datagen.self_instruct.filter.instruction_filter module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datagen.self_instruct.filter.instruction_filter ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datagen.self_instruct.filter ::: --- # camel.datagen.self_instruct package ## Subpackages ::: {.toctree maxdepth="4"} camel.datagen.self_instruct.filter ::: ## Submodules ## camel.datagen.self_instruct.self_instruct module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datagen.self_instruct.self_instruct ::: ## camel.datagen.self_instruct.templates module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datagen.self_instruct.templates ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datagen.self_instruct ::: --- # camel.datagen.source2synth package ## Submodules ## camel.datagen.source2synth.data_processor module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datagen.source2synth.data_processor ::: ## camel.datagen.source2synth.models module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datagen.source2synth.models ::: ## camel.datagen.source2synth.user_data_processor_config module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datagen.source2synth.user_data_processor_config ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datagen.source2synth ::: --- # camel.datahubs package ## Submodules ## camel.datahubs.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datahubs.base ::: ## camel.datahubs.huggingface module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datahubs.huggingface ::: ## camel.datahubs.models module ::: 
{.automodule members="" undoc-members="" show-inheritance=""} camel.datahubs.models ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datahubs ::: --- # camel.datasets package ## Submodules ## camel.datasets.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datasets.base ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.datasets ::: --- # camel.embeddings package ## Submodules ## camel.embeddings.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.embeddings.base ::: ## camel.embeddings.mistral_embedding module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.embeddings.mistral_embedding ::: ## camel.embeddings.openai_embedding module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.embeddings.openai_embedding ::: ## camel.embeddings.sentence_transformers_embeddings module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.embeddings.sentence_transformers_embeddings ::: ## camel.embeddings.vlm_embedding module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.embeddings.vlm_embedding ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.embeddings ::: --- # camel.environments package ## Submodules ## camel.environments.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.environments.base ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.environments ::: --- # camel.extractors package ## Submodules ## camel.extractors.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.extractors.base ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.extractors ::: --- # camel.interpreters package ## Submodules ## camel.interpreters.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.interpreters.base ::: ## camel.interpreters.docker_interpreter module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.interpreters.docker_interpreter ::: ## camel.interpreters.internal_python_interpreter module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.interpreters.internal_python_interpreter ::: ## camel.interpreters.interpreter_error module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.interpreters.interpreter_error ::: ## camel.interpreters.ipython_interpreter module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.interpreters.ipython_interpreter ::: ## camel.interpreters.subprocess_interpreter module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.interpreters.subprocess_interpreter ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.interpreters ::: --- # camel.loaders package ## Submodules ## camel.loaders.base_io module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.loaders.base_io ::: ## camel.loaders.firecrawl_reader module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.loaders.firecrawl_reader ::: ## camel.loaders.jina_url_reader module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.loaders.jina_url_reader ::: ## camel.loaders.unstructured_io module ::: {.automodule members="" undoc-members="" show-inheritance=""} 
camel.loaders.unstructured_io ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.loaders ::: --- # camel package ## Subpackages ::: {.toctree maxdepth="4"} camel.agents camel.configs camel.datagen camel.embeddings camel.interpreters camel.loaders camel.memories camel.messages camel.models camel.prompts camel.responses camel.retrievers camel.societies camel.storages camel.tasks camel.terminators camel.toolkits camel.types camel.utils ::: ## Submodules ## camel.generators module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.generators ::: ## camel.human module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.human ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel ::: --- # camel.memories.blocks package ## Submodules ## camel.memories.blocks.chat_history_block module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.memories.blocks.chat_history_block ::: ## camel.memories.blocks.vectordb_block module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.memories.blocks.vectordb_block ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.memories.blocks ::: --- # camel.memories.context_creators package ## Submodules ## camel.memories.context_creators.score_based module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.memories.context_creators.score_based ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.memories.context_creators ::: --- # camel.memories package ## Subpackages ::: {.toctree maxdepth="4"} camel.memories.blocks camel.memories.context_creators ::: ## Submodules ## camel.memories.agent_memories module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.memories.agent_memories ::: ## camel.memories.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.memories.base ::: ## camel.memories.records module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.memories.records ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.memories ::: --- # camel.messages.conversion package ## Subpackages ::: {.toctree maxdepth="4"} camel.messages.conversion.sharegpt ::: ## Submodules ## camel.messages.conversion.alpaca module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.messages.conversion.alpaca ::: ## camel.messages.conversion.conversation_models module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.messages.conversion.conversation_models ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.messages.conversion ::: --- # camel.messages.conversion.sharegpt.hermes package ## Submodules ## camel.messages.conversion.sharegpt.hermes.hermes_function_formatter module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.messages.conversion.sharegpt.hermes.hermes_function_formatter ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.messages.conversion.sharegpt.hermes ::: --- # camel.messages.conversion.sharegpt package ## Subpackages ::: {.toctree maxdepth="4"} camel.messages.conversion.sharegpt.hermes ::: ## Submodules ## camel.messages.conversion.sharegpt.function_call_formatter module ::: {.automodule members="" undoc-members="" show-inheritance=""} 
camel.messages.conversion.sharegpt.function_call_formatter ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.messages.conversion.sharegpt ::: --- # camel.messages package ## Submodules ## camel.messages.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.messages.base ::: ## camel.messages.func_message module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.messages.func_message ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.messages ::: --- # camel.models package ## Submodules ## camel.models.anthropic_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.anthropic_model ::: ## camel.models.azure_openai_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.azure_openai_model ::: ## camel.models.base_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.base_model ::: ## camel.models.cometapi_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.cometapi_model ::: ## camel.models.gemini_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.gemini_model ::: ## camel.models.groq_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.groq_model ::: ## camel.models.litellm_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.litellm_model ::: ## camel.models.mistral_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.mistral_model ::: ## camel.models.model_factory module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.model_factory ::: ## camel.models.nemotron_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.nemotron_model ::: ## camel.models.ollama_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.ollama_model ::: ## camel.models.open_source_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.openai_audio_models ::: ## camel.models.openai_compatible_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.openai_compatible_model ::: ## camel.models.openai_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.openai_model ::: ## camel.models.reka_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.reka_model ::: ## camel.models.samba_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.samba_model ::: ## camel.models.stub_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.stub_model ::: ## camel.models.togetherai_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.togetherai_model ::: ## camel.models.vllm_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.vllm_model ::: ## camel.models.zhipuai_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.zhipuai_model ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models ::: --- # camel.models.reward package ## Submodules ## camel.models.reward.base_reward_model module ::: {.automodule members="" 
undoc-members="" show-inheritance=""} camel.models.reward.base_reward_model ::: ## camel.models.reward.evaluator module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.reward.evaluator ::: ## camel.models.reward.nemotron_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.reward.nemotron_model ::: ## camel.models.reward.skywork_model module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.reward.skywork_model ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.models.reward ::: --- # camel.personas package ## Submodules ## camel.personas.persona module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.personas.persona ::: ## camel.personas.persona_hub module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.personas.persona_hub ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.personas ::: --- # camel.prompts package ## Submodules ## camel.prompts.ai_society module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts.ai_society ::: ## camel.prompts.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts.base ::: ## camel.prompts.code module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts.code ::: ## camel.prompts.evaluation module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts.evaluation ::: ## camel.prompts.generate_text_embedding_data module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts.generate_text_embedding_data ::: ## camel.prompts.image_craft module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts.image_craft ::: ## camel.prompts.misalignment module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts.misalignment ::: ## camel.prompts.multi_condition_image_craft module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts.multi_condition_image_craft ::: ## camel.prompts.object_recognition module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts.object_recognition ::: ## camel.prompts.prompt_templates module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts.prompt_templates ::: ## camel.prompts.role_description_prompt_template module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts.role_description_prompt_template ::: ## camel.prompts.solution_extraction module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts.solution_extraction ::: ## camel.prompts.task_prompt_template module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts.task_prompt_template ::: ## camel.prompts.translation module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts.translation ::: ## camel.prompts.video_description_prompt module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts.video_description_prompt ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.prompts ::: --- # camel.responses package ## Submodules ## camel.responses.agent_responses module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.responses.agent_responses ::: ## Module contents ::: {.automodule 
members="" undoc-members="" show-inheritance=""} camel.responses ::: --- # camel.retrievers package ## Submodules ## camel.retrievers.auto_retriever module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.retrievers.auto_retriever ::: ## camel.retrievers.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.retrievers.base ::: ## camel.retrievers.bm25_retriever module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.retrievers.bm25_retriever ::: ## camel.retrievers.cohere_rerank_retriever module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.retrievers.cohere_rerank_retriever ::: ## camel.retrievers.vector_retriever module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.retrievers.vector_retriever ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.retrievers ::: --- # camel.runtime package ## Subpackages ::: {.toctree maxdepth="4"} camel.runtime.utils ::: ## Submodules ## camel.runtime.api module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.runtime.api ::: ## camel.runtime.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.runtime.base ::: ## camel.runtime.configs module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.runtime.configs ::: ## camel.runtime.docker_runtime module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.runtime.docker_runtime ::: ## camel.runtime.llm_guard_runtime module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.runtime.llm_guard_runtime ::: ## camel.runtime.remote_http_runtime module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.runtime.remote_http_runtime ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.runtime ::: --- # camel.runtime.utils package ## Submodules ## camel.runtime.utils.function_risk_toolkit module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.runtime.utils.function_risk_toolkit ::: ## camel.runtime.utils.ignore_risk_toolkit module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.runtime.utils.ignore_risk_toolkit ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.runtime.utils ::: --- # camel.schemas package ## Submodules ## camel.schemas.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.schemas.base ::: ## camel.schemas.openai_converter module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.schemas.openai_converter ::: ## camel.schemas.outlines_converter module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.schemas.outlines_converter ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.schemas ::: --- # camel.societies package ## Submodules ## camel.societies.babyagi_playing module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.societies.babyagi_playing ::: ## camel.societies.role_playing module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.societies.role_playing ::: ## Subpackages ::: {.toctree maxdepth="4"} camel.societies.workforce ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.societies ::: --- camel.societies.workforce package ======================= # Submodules 
camel.societies.workforce.base module \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-- ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.societies.workforce.base ::: camel.societies.workforce.prompts module \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-- ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.societies.workforce.prompts ::: camel.societies.workforce.role_playing_worker module \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-- ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.societies.workforce.role_playing_worker ::: camel.societies.workforce.single_agent_worker module \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-- ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.societies.workforce.single_agent_worker ::: camel.societies.workforce.task_channel module \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-- ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.societies.workforce.task_channel ::: camel.societies.workforce.utils module \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-- ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.societies.workforce.utils ::: camel.societies.workforce.worker module \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-- ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.societies.workforce.worker ::: camel.societies.workforce.workforce module \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-- ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.societies.workforce.workforce ::: # Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.societies.workforce ::: --- # camel.storages.graph_storages package ## Submodules ## camel.storages.graph_storages.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.graph_storages.base ::: ## camel.storages.graph_storages.graph_element module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.graph_storages.graph_element ::: ## camel.storages.graph_storages.neo4j_graph module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.graph_storages.neo4j_graph ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.graph_storages ::: --- # camel.storages.key_value_storages package ## Submodules ## camel.storages.key_value_storages.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.key_value_storages.base ::: ## camel.storages.key_value_storages.in_memory module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.key_value_storages.in_memory ::: ## camel.storages.key_value_storages.json module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.key_value_storages.json ::: ## camel.storages.key_value_storages.redis module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.key_value_storages.redis ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.key_value_storages ::: --- # camel.storages package ## Subpackages ::: {.toctree maxdepth="4"} camel.storages.graph_storages camel.storages.key_value_storages camel.storages.object_storages camel.storages.vectordb_storages ::: 
## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages ::: --- # camel.storages.object_storages package ## Submodules ## camel.storages.object_storages.amazon_s3 module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.object_storages.amazon_s3 ::: ## camel.storages.object_storages.azure_blob module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.object_storages.azure_blob ::: ## camel.storages.object_storages.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.object_storages.base ::: ## camel.storages.object_storages.google_cloud module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.object_storages.google_cloud ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.object_storages ::: --- # camel.storages.vectordb_storages package ## Submodules ## camel.storages.vectordb_storages.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.vectordb_storages.base ::: ## camel.storages.vectordb_storages.milvus module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.vectordb_storages.milvus ::: ## camel.storages.vectordb_storages.qdrant module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.vectordb_storages.tidb ::: ## camel.storages.vectordb_storages.tidb module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.vectordb_storages.qdrant ::: ## camel.storages.vectordb_storages.faiss module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.vectordb_storages.faiss ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.storages.vectordb_storages ::: --- # camel.tasks package ## Submodules ## camel.tasks.task module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.tasks.task ::: ## camel.tasks.task_prompt module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.tasks.task_prompt ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.tasks ::: --- # camel.terminators package ## Submodules ## camel.terminators.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.terminators.base ::: ## camel.terminators.response_terminator module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.terminators.response_terminator ::: ## camel.terminators.token_limit_terminator module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.terminators.token_limit_terminator ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.terminators ::: --- # camel.toolkits package ## Submodules ## camel.toolkits.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.toolkits.base ::: ## camel.toolkits.code_execution module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.toolkits.code_execution ::: ## camel.toolkits.dalle_toolkit module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.toolkits.dalle_toolkit ::: ## camel.toolkits.github_toolkit module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.toolkits.github_toolkit ::: ## camel.toolkits.google_maps_toolkit module ::: {.automodule members="" undoc-members="" 
show-inheritance=""} camel.toolkits.google_maps_toolkit ::: ## camel.toolkits.linkedin_toolkit module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.toolkits.linkedin_toolkit ::: ## camel.toolkits.math_toolkit module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.toolkits.math_toolkit ::: ## camel.toolkits.open_api_toolkit module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.toolkits.open_api_toolkit ::: ## camel.toolkits.openai_function module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.toolkits.function_tool ::: ## camel.toolkits.reddit_toolkit module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.toolkits.reddit_toolkit ::: ## camel.toolkits.retrieval_toolkit module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.toolkits.retrieval_toolkit ::: ## camel.toolkits.search_toolkit module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.toolkits.search_toolkit ::: ## camel.toolkits.slack_toolkit module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.toolkits.slack_toolkit ::: ## camel.toolkits.twitter_toolkit module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.toolkits.twitter_toolkit ::: ## camel.toolkits.weather_toolkit module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.toolkits.weather_toolkit ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.toolkits ::: --- # camel.types.agents package ## Submodules ## camel.types.agents.tool_calling_record module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.types.agents.tool_calling_record ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.types.agents ::: --- # camel.types package ## Submodules ## camel.types.enums module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.types.enums ::: ## camel.types.openai_types module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.types.openai_types ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.types ::: --- # camel.utils package ## Submodules ## camel.utils.async_func module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.utils.async_func ::: ## camel.utils.commons module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.utils.commons ::: ## camel.utils.constants module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.utils.constants ::: ## camel.utils.token_counting module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.utils.token_counting ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.utils ::: --- # camel.verifiers package ## Submodules ## camel.verifiers.base module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.verifiers.base ::: ## camel.verifiers.models module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.verifiers.models ::: ## camel.verifiers.python_verifier module ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.verifiers.python_verifier ::: ## Module contents ::: {.automodule members="" undoc-members="" show-inheritance=""} camel.verifiers ::: --- Advanced Features ================ ```{=html}
``` ::: {.toctree maxdepth="1"} agents_with_tools agents_with_tools_from_ACI agents_with_tools_from_Composio agents_with_human_in_loop_and_tool_approval agents_with_memory agents_with_rag agents_with_graph_rag agents_with_MCP agents_tracking critic_agents_and_tree_search embodied_agents agent_generate_structured_output ::: --- ```{=html} ``` # Applications ::: {.toctree maxdepth="1"} roleplaying_scraper dynamic_travel_planner customer_service_Discord_bot_with_agentic_RAG customer_service_Discord_bot_using_SambaNova_with_agentic_RAG customer_service_Discord_bot_using_local_model_with_agentic_RAG finance_discord_bot pptx_toolkit ::: --- Basic Concepts ============= ```{=html} ``` ::: {.toctree maxdepth="1"} create_your_first_agent create_your_first_agents_society agents_message agents_prompting ::: --- # Agentic Data Generation ```{=html} ``` ::: {.toctree maxdepth="1"} sft_data_generation_and_unsloth_finetuning_mistral_7b_instruct sft_data_generation_and_unsloth_finetuning_Qwen2_5_7B sft_data_generation_and_unsloth_finetuning_tinyllama data_gen_with_real_function_calls_and_hermes_format self_instruct_data_generation cot_data_gen_sft_qwen_unsolth_upload_huggingface synthetic_dataevaluation&filter_with_reward_model data_model_generation_and_structured_output_with_qwen distill_math_reasoning_data_from_deepseek_r1 self_improving_math_reasoning_data_distillation_from_deepSeek_r1 self_improving_cot_generation ::: --- # Deep Dive into CAMEL’s Practices for Self-Improving CoT Generation 🚀 The field of AI is rapidly evolving, with reasoning models playing a crucial role in enhancing the problem-solving capabilities of large language models (LLMs). Recent developments, such as DeepSeek's R1 and OpenAI's o3-mini, demonstrate the industry's commitment to advancing reasoning through innovative approaches. DeepSeek's R1 model, introduced in January 2025, has shown remarkable proficiency in tasks that require complex reasoning and code generation. Its exceptional performance in areas like mathematics, science, and programming is particularly noteworthy. By distilling Chain-of-Thought (CoT) data from reasoning models, we can generate high-quality reasoning traces that solve complex problems more accurately. This generated data can then be used to fine-tune another LLM with fewer parameters, thereby enhancing its reasoning ability. CAMEL has developed an approach that leverages iterative refinement, self-assessment, and efficient batch processing to enable the continuous improvement of reasoning traces. In this blog, we will delve into how CAMEL implements its self-improving CoT pipeline. --- ## 1. Overview of the End-to-End Pipeline 🔍 ### 1.1 Why an Iterative CoT Pipeline? One-time CoT generation often leads to incomplete or suboptimal solutions. CAMEL addresses this challenge by employing a multi-step, iterative approach: 1. **Generate** an initial reasoning trace. 2. **Evaluate** the trace through either a dedicated evaluation agent or a specialized reward model. 3. **Refine** the trace based on the feedback provided. This self-improving methodology ensures that the reasoning process improves progressively, meeting specific thresholds for correctness, clarity, and completeness. Each iteration enhances the model's ability to solve the problem by learning from the previous outputs and evaluations. ### 1.2 Core Components The self-improving pipeline consists of three key components: 1. **`reason_agent`:** This agent is responsible for generating or improving reasoning traces. 2.
**`evaluate_agent`:** An optional agent that evaluates the quality of the reasoning trace. This can be replaced by a reward model if needed. 3. **`reward_model`:** An optional model that provides numerical feedback on the trace, evaluating dimensions such as correctness, coherence, complexity, and verbosity. Here's a high-level diagram of the pipeline:  --- ## 2. Generation of CoT Data: The Heart of the Pipeline 🤖 Generating CoT data is at the core of the pipeline. Below, we outline the process in detail. ### 2.1 Initial Trace Generation 🐣 The first step in the process is the generation of an initial reasoning trace. The **`reason_agent`** plays a central role here, creating a coherent and logical explanation of how to solve a given problem. The agent breaks down the problem into smaller steps, illustrating the thought process at each stage. We also support the use of non-reasoning LLMs to generate traces through prompt engineering. The generation can also be guided by **few-shot examples**, which provide context and help the agent understand the desired reasoning style. Here’s how this is accomplished: - **Input**: The problem statement is provided to the **`reason_agent`**; optionally, a ground truth can also be supplied to guide the reasoning process. - **Output**: The agent generates a sequence of reasoning content. This initial generation serves as a foundational reasoning process that can be directly useful or further refined. ### 2.2 Evaluation of the Initial Trace 📒 Once the reasoning trace is generated, it is evaluated for its quality. This evaluation serves two purposes: - **Detecting weaknesses**: The evaluation identifies areas where the reasoning trace could be further improved. - **Providing feedback**: The evaluation produces feedback that guides the agent in refining the reasoning trace. This feedback can come from either the **`evaluate_agent`** or a **`reward_model`**. #### 2.2.1 Agent-Based Evaluation If an **`evaluate_agent`** is available, it examines the reasoning trace for: 1. **Correctness**: Does the trace logically solve the problem? 2. **Clarity**: Is the reasoning easy to follow and well-structured? 3. **Completeness**: Are all necessary steps included in the reasoning? The feedback from the agent provides insights into areas for improvement, such as unclear reasoning or incorrect answers, offering a more generalized approach compared to rule-based matching. #### 2.2.2 Reward Model Evaluation Alternatively, the pipeline supports using a **reward model** to evaluate the trace. The reward model outputs scores based on predefined dimensions such as correctness, coherence, complexity, and verbosity. --- ### 2.3 Iterative Refinement: The Self-Improving Cycle 🔁 The key to CAMEL's success in CoT generation is its **self-improving loop**. After the initial trace is generated and evaluated, the model refines the trace based on the evaluation feedback. This process is repeated in a loop. #### How does this iterative refinement work? 1. **Feedback Integration**: The feedback from the evaluation phase is used to refine the reasoning. This could involve rewording unclear parts, adding missing steps, or adjusting the logic to make it more correct or complete. 2. **Improvement through Reasoning**: After receiving feedback, the **`reason_agent`** is used again to generate an improved version of the reasoning trace. This trace incorporates the feedback provided, refining the earlier steps and enhancing the overall reasoning. 3.
**Re-evaluation**: Once the trace is improved, the new version is evaluated again using the same process (either agent-based evaluation or reward model). This new trace is assessed against the same criteria to ensure the improvements have been made. 4. **Threshold Check**: The iterative process continues until the desired quality thresholds are met or reached the maximum number of iterations. --- ## 3. Pipeline Setup in Code 💻 Below is a truncated version of our pipeline initialization. We encapsulate logic in a class called `SelfImprovingCoTPipeline`: ```python class SelfImprovingCoTPipeline: def __init__( self, reason_agent: ChatAgent, problems: List[Dict], max_iterations: int = 3, score_threshold: Union[float, Dict[str, float]] = 0.7, evaluate_agent: Optional[ChatAgent] = None, reward_model: Optional[BaseRewardModel] = None, output_path: Optional[str] = None, few_shot_examples: Optional[str] = None, batch_size: Optional[int] = None, max_workers: Optional[int] = None, solution_pattern: str = r'\\boxed{(.*?)}', trace_pattern: Optional[str] = None, ): r"""Initialize the STaR pipeline. Args: reason_agent (ChatAgent): The chat agent used for generating and improving reasoning traces. problems (List[Dict]): List of problem dictionaries to process. max_iterations (int, optional): Maximum number of improvement iterations. If set to `0`, the pipeline will generate an initial trace without any improvement iterations. (default: :obj:`3`) score_threshold (Union[float, Dict[str, float]], optional): Quality threshold. Can be either a single float value applied to average score, or a dictionary mapping score dimensions to their thresholds. For example: {"correctness": 0.8, "coherence": 0.7}. If using reward model and threshold for a dimension is not specified, will use the default value 0.7. (default: :obj:`0.7`) evaluate_agent (Optional[ChatAgent]): The chat agent used for evaluating reasoning traces. (default: :obj:`None`) reward_model (BaseRewardModel, optional): Model used to evaluate reasoning traces. If `None`, uses Agent self-evaluation. (default: :obj:`None`) output_path (str, optional): Output path for saving traces. If `None`, results will only be returned without saving to file. (default: :obj:`None`) few_shot_examples (str, optional): Examples to use for few-shot generation. (default: :obj:`None`) batch_size (int, optional): Batch size for parallel processing. (default: :obj:`None`) max_workers (int, optional): Maximum number of worker threads. (default: :obj:`None`) solution_pattern (str, optional): Regular expression pattern with one capture group to extract answers from solution text. (default: :obj:`r'\\boxed{(.*?)}'`) trace_pattern (str, optional): Regular expression pattern with one capture group to extract answers from trace text. If `None`, uses the same pattern as solution_pattern. (default: :obj:`None`) """ ... ``` **Example usage:** ```python from camel.agents import ChatAgent from camel.datagen import SelfImprovingCoTPipeline # Initialize agents reason_agent = ChatAgent( """Answer my question and give your final answer within \\boxed{}.""" ) evaluate_agent = ChatAgent( "You are a highly critical teacher who evaluates the student's answers " "with a meticulous and demanding approach." ) # Prepare your problems problems = [ {"problem": "Your problem text here"}, # Add more problems... 
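    # Hypothetical illustration (not part of the original snippet): as noted in
    # Section 2.1, a ground-truth answer can optionally be supplied to guide and
    # check the reasoning. A "solution" key holding the answer in \boxed{} form is
    # assumed here to match the default `solution_pattern`; the exact schema may differ.
    # {"problem": "Compute 6 * 7.", "solution": "The answer is \\boxed{42}."},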
] # Create and run the pipeline pipeline = SelfImprovingCoTPipeline( reason_agent=reason_agent, evaluate_agent=evaluate_agent, problems=problems, max_iterations=3, output_path="star_output.json" ) results = pipeline.generate() ``` --- ## 4. Batch Processing & API Request Handling 📦 ### 4.1 The Need for Batch Processing ⏰ Early on, we tried generating CoT reasoning for each problem one by one. This approach quickly revealed two major issues: 1. **Time consumption**: Sequential processing doesn't scale to large problem sets. 2. **API request bottlenecks**: Slowdowns or occasional disconnections occurred when handling numerous calls. Hence, we introduced a parallel **`BatchProcessor`** to: - Split the tasks into manageable batches. - Dynamically adjust batch size (`batch_size`) based on the success/failure rates and system resource usage (CPU/memory). - Retry on transient errors or API timeouts to maintain a stable flow. Below shows how we batch-process multiple problems: ```python async def _batch_process_problems( self, problems: List[Dict], rationalization: bool ) -> List[ProblemResult]: results = [] total_problems = len(problems) processed = 0 while processed < total_problems: batch_size = self.batch_processor.batch_size batch = problems[processed : processed + batch_size] batch_start_time = time.time() with ThreadPoolExecutor(max_workers=self.batch_processor.max_workers) as executor: futures = [ executor.submit( self.process_problem, problem=problem, rationalization=rationalization, ) for problem in batch ] ... processed += len(batch) ... # Log progress & performance ``` ### 4.2 Handling API Instability 🚨 Even with batching, API requests for LLMs can fail due to network fluctuations or remote server instability. We implemented a `retry_on_error` decorator: ```python def retry_on_error( max_retries: int = 3, initial_delay: float = 1.0 ) -> Callable: def decorator(func: Callable) -> Callable: @functools.wraps(func) def wrapper(*args, **kwargs): delay = initial_delay for attempt in range(max_retries + 1): try: return func(*args, **kwargs) except Exception as e: if attempt == max_retries: raise time.sleep(delay) delay *= 2 raise return wrapper return decorator ``` Whenever we invoke LLM calls for generation, evaluation, or improvement, these decorated methods gracefully handle transient errors by retrying with exponential backoff (doubling the wait time after each failed attempt). --- ## 5. Model Switching & Dynamic File Writing 📝 ### 5.1 Flexible Model Scheduling 🕒 In CAMEL's CoT pipeline, adding models to the `ChatAgent` is useful for handling errors and ensuring smooth operation. This setup allows the system to switch between models as needed, maintaining reasoning continuity. To add models to a `ChatAgent`, you can create instances of models and include them in the agent's model list: ```python model1 = ModelFactory.create( model_platform=ModelPlatformType.DEEPSEEK, model_type="deepseek-reasoner", ... ) model2 = ModelFactory.create( model_platform=ModelPlatformType.TOGETHER, model_type="deepseek-reasoner", ... ) agent = ChatAgent( system_message, model=[model1, model2] ) ``` By incorporating multiple models, CAMEL can effectively manage model availability and ensure robust error handling. ### 5.2 Real-Time JSON Updates 🔄 As soon as a problem’s results are ready, we lock the file (`output_path`) and update it in-place—rather than saving everything at the very end. This ensures data integrity if the process is interrupted partway through. 
```python def safe_write_json(self, file_path, data): temp_path = file_path + ".tmp" with open(temp_path, "w") as f: json.dump(data, f, indent=2) os.replace(temp_path, file_path) ``` This two-step write (to a `.tmp` file then replace) prevents partial writes from corrupting the output file. --- ## 6. CAMEL’s Next Steps in CoT Data Generation 🚀 1. **Real-Time Monitoring Dashboard**: Visualize throughput, error rates, running cost, data quality, etc. for smooth operational oversight. 2. **Performance Enhancements**: Further improve performance and add more error handling to make the system more robust. 3. **Cutting-Edge Research Solutions**: Integrate more cutting-edge research solutions for synthetic data generation. 4. **Rejection Sampling**: Integrate rejection sampling method to the SelfImprovingCoT pipeline. --- ## Conclusion 📚 CAMEL’s self-improving pipeline exemplifies a comprehensive approach to Chain-of-Thought data generation: - **Flexible Evaluation**: Utilizing agent-based or reward-model-based evaluation provides adaptable scoring and feedback loops. - **Continuous Improvement**: Iterative refinement ensures each reasoning trace is enhanced until it meets the desired quality. - **Efficient Processing**: Batched concurrency increases throughput while maintaining system balance. - **Robust Stability**: Error-tolerant mechanisms with retries enhance system reliability. - **Consistent Output**: Dynamic file writing ensures partial results are consistently preserved and valid. Looking ahead, CAMEL’s roadmap is dedicated to pioneering advanced synthetic data generation methods, integrating cutting-edge research and technology. _Stay tuned for more updates on CAMEL's journey in advancing agentic synthetic data generation!_ --- **Further Reading & Resources** - **CAMEL GitHub**: Explore our open-source projects on [GitHub](https://github.com/camel-ai/camel) and give us a 🌟star. **Data Generation Cookbooks** - [Self-Improving Math Reasoning Data Distillation](https://docs.camel-ai.org/cookbooks/data_generation/self_improving_math_reasoning_data_distillation_from_deepSeek_r1.html) - [Generating High-Quality SFT Data with CAMEL](https://docs.camel-ai.org/cookbooks/data_generation/sft_data_generation_and_unsloth_finetuning_Qwen2_5_7B.html) - [Function Call Data Generation and Evaluation](https://docs.camel-ai.org/cookbooks/data_generation/data_gen_with_real_function_calls_and_hermes_format.html) - [Agentic Data Generation, Evaluation & Filtering with Reward Models](https://docs.camel-ai.org/cookbooks/data_generation/synthetic_dataevaluation%26filter_with_reward_model.html) --- Data Processing and Analysis =========================== ```{=html} ``` ::: {.toctree maxdepth="1"} video_analysis agent_with_chunkr_for_pdf_parsing summarisation_agent_with_mistral_ocr ingest_data_from_websites_with_Firecrawl ::: --- --- title: "Create Document Summarization Agents with Mistral OCR & CAMEL-AI 🐫" --- You can also check this cookbook in Colab [here](https://colab.research.google.com/drive/1ZwVmqa5vjpZ0C3H7k1XIseFfbCR4mq17?usp=sharing) In this cookbook, we’ll explore [**Mistral OCR**](https://mistral.ai/news/mistral-ocr)—a state-of-the-art Optical Character Recognition API that understands complex document layouts and extracts text, tables, images, and equations with unprecedented accuracy. 
We’ll show you how to: - Use the Mistral OCR API to convert scanned or image-based PDFs into structured Markdown - Leverage a Mistral LLM agent within CAMEL to summarize and analyze the extracted content - Build a seamless, end-to-end pipeline for retrieval-augmented generation (RAG), research, or business automation ## Table of Contents 1. 🧑🏻💻 Introduction 2. ⚡️ Step-by-step Guide: Mistral OCR Extraction 3. 💫 Quick Demo with Mistral Agent 4. 🧑🏻💻 Conclusion ⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)* ---  ## **Introduction to Mistral OCR** Throughout history, advancements in information abstraction and retrieval have driven human progress—from hieroglyphs to digitization. Today, over 90% of organizational data lives in documents, often locked in complex layouts and multiple languages. **Mistral OCR** ushers in the next leap in document understanding: a multimodal API that comprehends every element—text, images, tables, equations—and outputs ordered, structured Markdown with embedded media references. #### **Key Features of Mistral OCR:** 1. **State-of-the-art complex document understanding** - Extracts interleaved text, figures, tables, and mathematical expressions with high fidelity. 2. **Natively multilingual & multimodal** - Parses scripts and fonts from across the globe, handling right-to-left layouts and non-Latin characters seamlessly. 3. **Doc-as-prompt, structured output** - Returns ordered Markdown, embedding images and bounding-box metadata ready for RAG and downstream AI workflows. 4. **Top-tier benchmarks & speed** - Outperforms leading OCR systems in accuracy—especially in math, tables, and multilingual tests—while delivering fast batch inference (∼2000 pages/min). 5. **Scalable & flexible deployment** - Available via `mistral-ocr-latest` on Mistral’s developer suite, cloud partners, and on-premises self-hosting for sensitive data. Ready to unlock your documents? Let’s dive into the extraction guide. First, install the CAMEL package with all its dependencies. ```python !pip install "camel-ai[all]==0.2.61" ``` ## ⚡️ Step-by-step Guide: Mistral OCR Loader **Step 1: Set up your Mistral API key** If you don’t have a Mistral API key, you can obtain one by following these steps: 1. **Create an account:** Go to [Mistral Console](https://console.mistral.ai/home) and sign up for an organization account. 2. **Get your API key:** Once logged in, navigate to **Organization** → **API Keys**, generate a new key, copy it, and store it securely. ```python import os from getpass import getpass mistral_api_key = getpass('Enter your Mistral API key: ') os.environ['MISTRAL_API_KEY'] = mistral_api_key ``` **Step 2: Upload your PDF or image file for OCR** In a Colab or Jupyter environment, you can upload any PDF file directly: ```python # Colab file upload from google.colab import files uploaded = files.upload() # Grab the first uploaded filename file_path = next(iter(uploaded)) ``` **Step 3: Import and initialize the Mistral OCR loader** ```python # Importing the MistralReader class from the camel.loaders module # This class handles document processing using Mistral OCR capabilities from camel.loaders import MistralReader # Initializing an instance of MistralReader # This object will be used to submit tasks and manage OCR processing mistral_reader = MistralReader() ``` ## Step 4: Obtain OCR output from Mistral Once the task completes, retrieve its output using the returned `task.id`. 
The output of **Mistral OCR** is a structured object: ```python # Retrieve the OCR output # CORRECT: Just use extract_text for local files or URLs ocr_response = mistral_reader.extract_text(file_path) print(ocr_response) ``` ## 💫 Quick Demo with CAMEL Agent Here we choose Mistral model for our demo. If you'd like to explore different models or tools to suit your needs, feel free to visit the [CAMEL documentation page](https://docs.camel-ai.org/), where you'll find guides and tutorials. If you don't have a Mistral API key, you can obtain one by following these steps: 1. Visit the Mistral Console (https://console.mistral.ai/) 2. In the left panel, click on API keys under API section 3. Choose your plan For more details, you can also check the Mistral documentation: https://docs.mistral.ai/getting-started/quickstart/ ```python from camel.configs import MistralConfig from camel.models import ModelFactory from camel.types import ModelPlatformType, ModelType mistral_model = ModelFactory.create( model_platform=ModelPlatformType.MISTRAL, model_type=ModelType.MISTRAL_LARGE, model_config_dict=MistralConfig(temperature=0.0).as_dict(), ) # Use Mistral model model = mistral_model ``` ```python from camel.agents import ChatAgent # Initialize a ChatAgent agent = ChatAgent( system_message="You are a helpful document assistant.", # Define the agent's role model=mistral_model ) # Use the ChatAgent to generate insights based on the OCR output response = agent.step( f"Based on the following OCR-extracted content, give me a concise conclusion of the document:\n{ocr_response}" ) print(response.msgs[0].content) ``` **For advanced usage of RAG capabilities with large files, please refer to our [RAG cookbook](https://docs.camel-ai.org/cookbooks/advanced_features/agents_with_rag#rag-cookbook).** ## 🧑🏻💻 Conclusion In conclusion, integrating **Mistral OCR** within CAMEL-AI revolutionizes the process of document data extraction and preparation, enhancing your capabilities for AI-driven applications. With Mistral OCR’s robust features—state-of-the-art complex document understanding, natively multilingual & multimodal parsing, and doc-as-prompt structured Markdown output—you can seamlessly process complex PDFs and images into machine-readable formats optimized for LLMs, directly feeding into CAMEL-AI’s multi-agent workflows. This integration not only simplifies data preparation but also empowers intelligent and accurate analytics at scale. With these tools at your disposal, you’re equipped to transform raw document data into actionable insights, unlocking new possibilities in automation and AI-powered decision-making. That's everything: Got questions about 🐫 CAMEL-AI? Join us on [Discord](https://discord.camel-ai.org)! Whether you want to share feedback, explore the latest in multi-agent systems, get support, or connect with others on exciting projects, we’d love to have you in the community! 🤝 Check out some of our other work: 1. 🐫 Creating Your First CAMEL Agent [free Colab](https://colab.research.google.com/drive/1cmWPxXEsyMbmjPhD2bWfHuhd_Uz6FaJQ?usp=sharing) 2. Graph RAG Cookbook [free Colab](https://colab.research.google.com/drive/1uZKQSuu0qW6ukkuSv9TukLB9bVaS1H0U?usp=sharing) 3. 🧑⚖️ Create A Hackathon Judge Committee with Workforce [free Colab](https://colab.research.google.com/drive/18ajYUMfwDx3WyrjHow3EvUMpKQDcrLtr?usp=sharing) 4. 🔥 3 ways to ingest data from websites with Firecrawl & CAMEL [free Colab](https://colab.research.google.com/drive/1lOmM3VmgR1hLwDKdeLGFve_75RFW0R9I?usp=sharing) 5. 
🦥 Agentic SFT Data Generation with CAMEL and Mistral Models, Fine-Tuned with Unsloth [free Colab](https://colab.research.google.com/drive/1lYgArBw7ARVPSpdwgKLYnp_NEXiNDOd-?usp=sharingg) Thanks from everyone at 🐫 CAMEL-AI ⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)* --- # Loong Cookbooks {#cookbooks_loong} ::: {.toctree maxdepth="1" caption="Loong Cookbooks"} batched_single_step_env.ipynb multi_step_rl.ipynb single_step_env.ipynb ::: --- # MCP Cookbooks {#cookbooks_mcp} ::: {.toctree maxdepth="1" caption="MCP Cookbooks"} agents_with_sql_mcp.ipynb camel_aci_mcp_cookbook.ipynb agent_to_mcp_with_faiss.ipynb ::: --- --- title: "🍳 CAMEL Cookbook: Building a Collaborative AI Research Society" --- ## Claude 4 + Azure OpenAI Collaboration for ARENA AI Alignment Research ⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)* --- ## 📋 Overview This cookbook demonstrates how to create a collaborative multi-agent society using CAMEL-AI, bringing together Claude 4 and Azure OpenAI models to research AI alignment topics from the ARENA curriculum. Our society consists of 4 specialized AI researchers with distinct personas and expertise areas. ## So, Let's catapault our way right in 🧚 ## 🛠️ Dependencies and Setup First, let's install the required dependencies and handle the notebook environment: ```python !pip install camel-ai['0.2.64'] anthropic ``` ```python import textwrap import os from getpass import getpass from typing import Dict, Any from camel.agents import ChatAgent from camel.messages import BaseMessage from camel.models import ModelFactory from camel.models.azure_openai_model import AzureOpenAIModel from camel.tasks import Task from camel.toolkits import FunctionTool, SearchToolkit from camel.types import ModelPlatformType, ModelType from camel.societies.workforce import Workforce ``` Prepare API keys: Azure OpenAI, Claude (Anthropic), and optionally Google Search ```python # Ensuring API Keys are set if not os.getenv("AZURE_OPENAI_API_KEY"): print("AZURE OPENAI API KEY is required to proceed.") azure_openai_api_key = getpass("Enter your Azure OpenAI API Key: ") os.environ["AZURE_OPENAI_API_KEY"] = azure_openai_api_key if not os.getenv("AZURE_OPENAI_ENDPOINT"): print("Azure OpenAI Endpoint is required to proceed.") azure_openai_endpoint = input("Enter your Azure OpenAI Endpoint: ") os.environ["AZURE_OPENAI_ENDPOINT"] = azure_openai_endpoint if not os.getenv("ANTHROPIC_API_KEY"): print("ANTHROPIC API KEY is required to proceed.") anthropic_api_key = getpass("Enter your Anthropic API Key: ") os.environ["ANTHROPIC_API_KEY"] = anthropic_api_key optional_keys_setup = input("Setup optional API Keys for Google search functionality?(y/n): ").lower() if "y" in optional_keys_setup: if not os.getenv("GOOGLE_API_KEY"): print("[OPTIONAL] Provide a GOOGLE CLOUD API KEY for google search.") google_api_key = getpass("Enter your Google API KEY: ") os.environ["GOOGLE_API_KEY"] = google_api_key if not os.getenv("SEARCH_ENGINE_ID"): print("[OPTIONAL] Provide a search engine ID for google search.") search_engine_id = getpass("Enter your Search Engine ID: ") os.environ["SEARCH_ENGINE_ID"] = search_engine_id ``` ### What this does: - Imports all necessary CAMEL-AI components - Handles async operations for notebook environments - Sets up typing hints for better code clarity ## 🏗️ Core Society Class Structure 
Let's define our main research society class: ```python class ARENAResearchSociety: """ A collaborative CAMEL society between Claude 4 and Azure OpenAI for researching the ARENA AI alignment curriculum. """ def __init__(self): self.workforce = None self.setup_api_keys() ``` ### What this does: - Creates the main class that will orchestrate our AI research society - Initializes with API key setup to ensure proper authentication - Prepares the workforce variable for later agent assignment ## 🔑 API Configuration Management Configure all necessary API keys and endpoints: ```python def setup_api_keys(self): """Setup API keys for Azure OpenAI and Claude""" print("🔧 Setting up API keys...") # Azure OpenAI configuration if not os.getenv("AZURE_OPENAI_API_KEY"): azure_api_key = getpass("Please input your Azure OpenAI API key: ") os.environ["AZURE_OPENAI_API_KEY"] = azure_api_key if not os.getenv("AZURE_OPENAI_ENDPOINT"): azure_endpoint = getpass("Please input your Azure OpenAI endpoint: ") os.environ["AZURE_OPENAI_ENDPOINT"] = azure_endpoint if not os.getenv("AZURE_DEPLOYMENT_NAME"): deployment_name = getpass("Please input your Azure deployment name (e.g., div-o4-mini): ") os.environ["AZURE_DEPLOYMENT_NAME"] = deployment_name # Set OPENAI_API_KEY for compatibility (use Azure key) os.environ["OPENAI_API_KEY"] = os.getenv("AZURE_OPENAI_API_KEY") # Claude API configuration if not os.getenv("ANTHROPIC_API_KEY"): claude_api_key = getpass("Please input your Claude API key: ") os.environ["ANTHROPIC_API_KEY"] = claude_api_key # Optional: Google Search for research capabilities if not os.getenv("GOOGLE_API_KEY"): try: google_api_key = getpass("Please input your Google API key (optional, press Enter to skip): ") if google_api_key: os.environ["GOOGLE_API_KEY"] = google_api_key search_engine_id = getpass("Please input your Search Engine ID: ") if search_engine_id: # Only set if provided os.environ["SEARCH_ENGINE_ID"] = search_engine_id else: print("⚠️ Search Engine ID not provided. Search functionality will be disabled.") except KeyboardInterrupt: print("Skipping Google Search setup...") print("✅ API keys configured!") ARENAResearchSociety.setup_api_keys = setup_api_keys ``` ### What this does: - Securely collects API credentials using getpass (hidden input) - Supports Azure OpenAI, Claude (Anthropic), and optional Google Search - Sets environment variables for seamless integration - Provides graceful fallbacks for optional components ## 🤖 Azure OpenAI Agent Creation Create specialized Azure OpenAI agents with custom personas: ```python def create_azure_agent(self, role_name: str, persona: str, specialization: str) -> ChatAgent: """Create an Azure OpenAI agent with specific role and persona""" msg_content = textwrap.dedent(f""" You are {role_name}, a researcher specializing in AI alignment and safety. Your persona: {persona} Your specialization: {specialization} You are part of a collaborative research team studying the ARENA AI alignment curriculum. ARENA focuses on practical AI safety skills including: - Mechanistic interpretability - Reinforcement learning from human feedback (RLHF) - AI governance and policy - Robustness and adversarial examples When collaborating: 1. Provide detailed, technical analysis 2. Reference specific ARENA modules when relevant 3. Build upon other agents' findings 4. Maintain academic rigor while being accessible 5. 
Always cite sources and provide evidence for claims """).strip() sys_msg = BaseMessage.make_assistant_message( role_name=role_name, content=msg_content, ) # Configure Azure OpenAI model with correct API version for o4-mini model = AzureOpenAIModel( model_type=ModelType.GPT_4O_MINI, api_key=os.getenv("AZURE_OPENAI_API_KEY"), url=os.getenv("AZURE_OPENAI_ENDPOINT"), api_version="2025-01-01-preview", # Updated to support o4-mini azure_deployment_name=os.getenv("AZURE_DEPLOYMENT_NAME") or "div-o4-mini" ) return ChatAgent( system_message=sys_msg, model=model, ) ARENAResearchSociety.create_azure_agent = create_azure_agent ``` ### What this does: - Creates customizable Azure OpenAI agents with specific roles and expertise - Embeds ARENA curriculum knowledge into each agent's system prompt - Uses the latest API version compatible with o4-mini model - Returns a fully configured ChatAgent ready for collaboration ## 🧠 Claude Agent Creation Create Claude agents with complementary capabilities: ```python def create_claude_agent(self, role_name: str, persona: str, specialization: str, tools=None) -> ChatAgent: """Create a Claude agent with specific role and persona""" msg_content = textwrap.dedent(f""" You are {role_name}, a researcher specializing in AI alignment and safety. Your persona: {persona} Your specialization: {specialization} You are part of a collaborative research team studying the ARENA AI alignment curriculum. ARENA focuses on practical AI safety skills including: - Mechanistic interpretability - Reinforcement learning from human feedback (RLHF) - AI governance and policy - Robustness and adversarial examples When collaborating: 1. Provide thorough, nuanced analysis 2. Consider ethical implications and long-term consequences 3. Synthesize information from multiple perspectives 4. Ask probing questions to deepen understanding 5. Connect concepts across different AI safety domains """).strip() # Remove trailing whitespace sys_msg = BaseMessage.make_assistant_message( role_name=role_name, content=msg_content, ) # Configure Claude model model = ModelFactory.create( model_platform=ModelPlatformType.ANTHROPIC, model_type=ModelType.CLAUDE_3_5_SONNET, ) agent = ChatAgent( system_message=sys_msg, model=model, tools=tools or [], ) return agent ARENAResearchSociety.create_claude_agent = create_claude_agent ``` ### What this does: - Creates Claude agents with nuanced, philosophical thinking capabilities - Emphasizes ethical considerations and long-term thinking - Supports optional tool integration (like search capabilities) - Uses Claude 3.5 Sonnet for advanced reasoning ## 👥 Workforce Assembly Bring together all agents into a collaborative workforce: ```python def create_research_workforce(self): """Create the collaborative research workforce""" print("🏗️ Creating ARENA Research Society...") # Setup search tools for the lead researcher (only if properly configured) search_tools = [] if os.getenv("GOOGLE_API_KEY") and os.getenv("SEARCH_ENGINE_ID"): try: search_toolkit = SearchToolkit() search_tools = [ FunctionTool(search_toolkit.search_google), ] print("🔍 Search tools enabled for lead researcher") except Exception as e: print(f"⚠️ Search tools disabled due to configuration issue: {e}") search_tools = [] else: print("🔍 Search tools disabled - missing API keys") # Create Claude agents claude_lead = self.create_claude_agent( role_name="Dr. Claude Alignment", persona="A thoughtful, methodical researcher who excels at synthesizing complex information and identifying key insights. 
Known for asking the right questions and seeing the bigger picture. Works with existing knowledge when search tools are unavailable.", specialization="AI safety frameworks, mechanistic interpretability, and curriculum analysis", tools=search_tools ) claude_ethicist = self.create_claude_agent( role_name="Prof. Claude Ethics", persona="A philosophical thinker who deeply considers the ethical implications and long-term consequences of AI development. Bridges technical concepts with societal impact.", specialization="AI governance, policy implications, and ethical frameworks in AI alignment" ) # Create Azure OpenAI agents azure_technical = self.create_azure_agent( role_name="Dr. Azure Technical", persona="A detail-oriented technical expert who dives deep into implementation specifics and mathematical foundations. Excellent at breaking down complex algorithms.", specialization="RLHF implementation, robustness techniques, and technical deep-dives" ) azure_practical = self.create_azure_agent( role_name="Dr. Azure Practical", persona="A pragmatic researcher focused on real-world applications and practical implementation. Bridges theory with practice.", specialization="Practical AI safety applications, training methodologies, and hands-on exercises" ) # Configure coordinator and task agents to use Azure OpenAI with correct API version coordinator_agent_kwargs = { 'model': AzureOpenAIModel( model_type=ModelType.GPT_4O_MINI, api_key=os.getenv("AZURE_OPENAI_API_KEY"), url=os.getenv("AZURE_OPENAI_ENDPOINT"), api_version="2025-01-01-preview", azure_deployment_name=os.getenv("AZURE_DEPLOYMENT_NAME") or "div-o4-mini" ), 'token_limit': 8000 } task_agent_kwargs = { 'model': AzureOpenAIModel( model_type=ModelType.GPT_4O_MINI, api_key=os.getenv("AZURE_OPENAI_API_KEY"), url=os.getenv("AZURE_OPENAI_ENDPOINT"), api_version="2025-01-01-preview", azure_deployment_name=os.getenv("AZURE_DEPLOYMENT_NAME") or "div-o4-mini" ), 'token_limit': 16000 } # Create the workforce with proper configuration self.workforce = Workforce( 'ARENA AI Alignment Research Society', coordinator_agent_kwargs=coordinator_agent_kwargs, task_agent_kwargs=task_agent_kwargs ) # Add agents with descriptive roles self.workforce.add_single_agent_worker( 'Dr. Claude Alignment (Lead Researcher) - Synthesizes information, leads research direction, and provides comprehensive analysis based on existing knowledge', worker=claude_lead, ).add_single_agent_worker( 'Prof. Claude Ethics (Ethics & Policy Specialist) - Analyzes ethical implications, policy considerations, and societal impact of AI alignment research', worker=claude_ethicist, ).add_single_agent_worker( 'Dr. Azure Technical (Technical Deep-Dive Specialist) - Provides detailed technical analysis, mathematical foundations, and implementation specifics', worker=azure_technical, ).add_single_agent_worker( 'Dr. 
Azure Practical (Applied Research Specialist) - Focuses on practical applications, training methodologies, and hands-on implementation guidance', worker=azure_practical, ) print("✅ ARENA Research Society created with 4 specialized agents!") return self.workforce ARENAResearchSociety.create_research_workforce = create_research_workforce ``` ### What this does: - Creates 4 specialized researchers: 2 Claude agents + 2 Azure OpenAI agents - Each agent has distinct personalities and expertise areas - Configures search tools for the lead researcher (when available) - Sets up proper workforce coordination using Azure OpenAI models - Creates a balanced team covering technical, practical, and ethical perspectives ## 📋 Research Task Creation Define structured research tasks for the collaborative team: ```python def create_research_task(self, research_topic: str, specific_questions: str = None) -> Task: """Create a research task for the ARENA curriculum""" arena_context = { "curriculum_info": "ARENA (AI Research and Education Nexus for Alignment) is a comprehensive AI safety curriculum", "focus_areas": [ "Mechanistic Interpretability - Understanding how neural networks work internally", "Reinforcement Learning from Human Feedback (RLHF) - Training AI systems to be helpful and harmless", "AI Governance - Policy, regulation, and coordination for AI safety", "Robustness & Adversarial Examples - Making AI systems robust to attacks and edge cases" ], "emphasis": "practical skills, hands-on exercises, and real-world applications", "website": "https://www.arena.education/curriculum" } # Check if search tools are available has_search = bool(os.getenv("GOOGLE_API_KEY") and os.getenv("SEARCH_ENGINE_ID")) base_content = f""" Research Topic: {research_topic} Please conduct a comprehensive collaborative research analysis on this topic in relation to the ARENA AI alignment curriculum. {'Note: Search tools are available for gathering latest information.' if has_search else 'Note: Analysis will be based on existing knowledge as search tools are not available.'} Research Process: 1. **Information Gathering** - {'Collect relevant information about the topic, including latest developments' if has_search else 'Analyze the topic based on existing knowledge and understanding'} 2. **Technical Analysis** - Provide detailed technical breakdown and mathematical foundations 3. **Practical Applications** - Explore how this relates to hands-on ARENA exercises and real-world implementation 4. **Ethical Considerations** - Analyze policy implications and ethical frameworks 5. **Synthesis** - Combine all perspectives into actionable insights and recommendations Expected Deliverables: - Comprehensive analysis from each specialist perspective - Identification of key concepts and their relationships - Practical implementation guidance - Policy and ethical considerations - Recommendations for further research or curriculum development """ if specific_questions: base_content += f"\n\nSpecific Research Questions:\n{specific_questions}" return Task( content=base_content.strip(), additional_info=arena_context, id="arena_research_001", ) ARENAResearchSociety.create_research_task = create_research_task ``` ### What this does: - Creates structured research tasks with clear objectives and deliverables - Adapts task content based on available tools (search vs. 
knowledge-based) - Includes ARENA curriculum context for focused analysis - Supports custom research questions for specialized investigations ## 🔬 Research Execution Execute collaborative research sessions: ```python def run_research(self, research_topic: str, specific_questions: str = None): """Run a collaborative research session""" if not self.workforce: self.create_research_workforce() print(f"🔬 Starting collaborative research on: {research_topic}") print("=" * 60) task = self.create_research_task(research_topic, specific_questions) processed_task = self.workforce.process_task(task) print("\n" + "=" * 60) print("📊 RESEARCH RESULTS") print("=" * 60) print(processed_task.result) return processed_task.result ARENAResearchSociety.run_research = run_research ``` ## What this does: - Orchestrates the entire research process - Creates the workforce if not already initialized - Processes tasks through the collaborative agent network - Returns formatted research results ## 🎯 Interactive Demo Interface Create an interactive interface for easy topic selection: ```python """Demonstrating the ARENA Research Society""" society = ARENAResearchSociety() # Example research topics related to ARENA curriculum sample_topics = { 1: { "topic": "Mechanistic Interpretability in Large Language Models", "questions": """ - How do the latest mechanistic interpretability techniques apply to understanding LLM behavior? - What are the most effective methods for interpreting attention patterns and residual streams? - How can mechanistic interpretability inform AI alignment strategies? - What are the current limitations and future directions in this field? """ }, 2: { "topic": "RLHF Implementation Challenges and Best Practices", "questions": """ - What are the main technical challenges in implementing RLHF at scale? - How do different reward modeling approaches compare in effectiveness? - What are the alignment implications of various RLHF techniques? - How can we address issues like reward hacking and distributional shift? """ }, 3: { "topic": "AI Governance Frameworks for Emerging Technologies", "questions": """ - What governance frameworks are most suitable for rapidly advancing AI capabilities? - How can policy makers balance innovation with safety considerations? - What role should technical AI safety research play in policy development? - How can international coordination on AI governance be improved? """ } } print("🎯 ARENA AI Alignment Research Society") print("Choose a research topic or provide your own:") print() for num, info in sample_topics.items(): print(f"{num}. {info['topic']}") print("4. Custom research topic") print() try: choice = input("Enter your choice (1-4): ").strip() if choice in ['1', '2', '3']: topic_info = sample_topics[int(choice)] result = society.run_research( topic_info["topic"], topic_info["questions"] ) elif choice == '4': custom_topic = input("Enter your research topic: ").strip() custom_questions = input("Enter specific questions (optional): ").strip() result = society.run_research( custom_topic, custom_questions if custom_questions else None ) else: print("Invalid choice. 
Running default research...") result = society.run_research(sample_topics[1]["topic"], sample_topics[1]["questions"]) except KeyboardInterrupt: print("\n👋 Research session interrupted.") except Exception as e: print(f"❌ Error during research: {e}") ``` ## What this does: - Provides pre-defined research topics relevant to ARENA curriculum - Offers custom topic input for flexible research - Handles user interaction gracefully with error handling - Demonstrates the full capabilities of the collaborative AI society ## 🚀 Running the Cookbook To run this collaborative AI research society: Execute Individual cells. Follow prompts: Enter your API credentials and select research topics The system will create a collaborative research environment where Claude and Azure OpenAI agents work together to produce comprehensive analysis on AI alignment topics! ## 🎯 Conclusion The future of AI collaboration is here, and this CAMEL-powered society demonstrates the incredible potential of multi-agent systems working across different AI platforms. In this cookbook, you've learned how to: - Build cross-platform AI collaboration between Claude 4 and Azure OpenAI models - Create specialized AI researchers with distinct personas and expertise areas - Implement robust workforce management using CAMEL's advanced orchestration - Handle complex API configurations for multiple AI providers seamlessly - Design structured research workflows for AI alignment and safety topics - Create scalable agent societies that can tackle complex, multi-faceted problems This collaborative approach showcases how different AI models can complement each other - Claude's nuanced reasoning and ethical considerations paired with Azure OpenAI's technical precision creates a powerful research dynamic. The ARENA AI alignment focus demonstrates how these societies can be specialized for cutting-edge domains like mechanistic interpretability, RLHF, and AI governance. As the field of multi-agent AI systems continues to evolve, frameworks like CAMEL are paving the way for increasingly sophisticated collaborations. Whether you're researching AI safety, exploring complex technical topics, or building specialized knowledge teams, the patterns and techniques in this cookbook provide a solid foundation for the next generation of AI-powered research. The possibilities are endless when AI agents work together. Keep experimenting, keep collaborating, and keep pushing the boundaries of what's possible. Happy researching! 🔬✨ That's everything: Got questions about 🐫 CAMEL-AI? Join us on [Discord](https://discord.camel-ai.org)! Whether you want to share feedback, explore the latest in multi-agent systems, get support, or connect with others on exciting projects, we’d love to have you in the community! 🤝 Check out some of our other work: 1. 🐫 Creating Your First CAMEL Agent [free Colab](https://docs.camel-ai.org/cookbooks/create_your_first_agent.html) 2. Graph RAG Cookbook [free Colab](https://colab.research.google.com/drive/1uZKQSuu0qW6ukkuSv9TukLB9bVaS1H0U?usp=sharing) 3. 🧑⚖️ Create A Hackathon Judge Committee with Workforce [free Colab](https://colab.research.google.com/drive/18ajYUMfwDx3WyrjHow3EvUMpKQDcrLtr?usp=sharing) 4. 🔥 3 ways to ingest data from websites with Firecrawl & CAMEL [free Colab](https://colab.research.google.com/drive/1lOmM3VmgR1hLwDKdeLGFve_75RFW0R9I?usp=sharing) 5. 
🦥 Agentic SFT Data Generation with CAMEL and Mistral Models, Fine-Tuned with Unsloth [free Colab](https://colab.research.google.com/drive/1lYgArBw7ARVPSpdwgKLYnp_NEXiNDOd-?usp=sharingg)

Thanks from everyone at 🐫 CAMEL-AI

⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*

---

---

# Multi-agent Society

::: {.toctree maxdepth="1"}
agents_society
workforce_judge_committee
task_generation
azure_openai_claude_society
:::

---

---
title: Installation
description: Get started with CAMEL-AI - Install, configure, and build your first multi-agent system
icon: wrench
---

## Tutorial

For example, run the workforce demo: `python examples/workforce/multiple_single_agents.py`
All CAMEL agents inherit from the `BaseAgent` abstract class, which defines two essential methods:
| Method | Purpose | Description |
|-------------|---------------------|----------------------------------------------|
| reset() | State Management | Resets the agent to its initial state |
| step() | Task Execution | Performs a single step of the agent's operation |
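
To make the contract concrete, here is a hypothetical minimal subclass (a sketch only; the real abstract methods live in `camel.agents.base` and their exact signatures may differ):

```python
from camel.agents.base import BaseAgent


class EchoAgent(BaseAgent):
    """Toy agent that records inputs and echoes them back."""

    def __init__(self):
        self.history = []

    def reset(self):
        # State Management: return the agent to its initial state
        self.history = []

    def step(self, message: str) -> str:
        # Task Execution: perform a single step of the agent's operation
        self.history.append(message)
        return f"Echo: {message}"
```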
## Types
### ChatAgent
The `ChatAgent` is the primary implementation that handles conversations with language models. It supports:
- System message configuration for role definition
- Memory management for conversation history
- Tool/function calling capabilities
- Response formatting and structured outputs
- Multiple model backend support with scheduling strategies
- Async operation support
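
For example, a minimal `ChatAgent` can be wired to a model backend like this (a sketch using an OpenAI backend for illustration; any supported platform works):

```python
from camel.agents import ChatAgent
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

# Create a model backend
model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
)

# The system message defines the agent's role; memory tracks the conversation
agent = ChatAgent(
    system_message="You are a helpful assistant.",
    model=model,
)

response = agent.step("Summarize what CAMEL is in one sentence.")
print(response.msgs[0].content)
```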
Key constructor parameters used across CAMEL's data generation pipelines include:

- `search_limit`: Maximum number of search iterations (default: 100)
- `generator_agent`: Specialized agent for answer generation
- `verifier_agent`: Specialized agent for answer verification
- `golden_answers`: Pre-defined correct answers for validation
- `agent`: ChatAgent instance for generating instructions
- `seed`: Path to human-written seed tasks in JSONL format
- `num_machine_instructions`: Number of machine-generated instructions (default: 5)
- `data_output_path`: Path for saving generated data (default: `./data_output.json`)
- `human_to_machine_ratio`: Ratio of human to machine tasks (default: (6, 2))
- `instruction_filter`: Custom InstructionFilter instance (optional)
- `filter_config`: Configuration dictionary for default filters (optional)
- `seed`: Random seed for reproducibility
- `min_length`: Minimum text length for processing
- `max_length`: Maximum text length for processing
- `complexity_threshold`: Minimum complexity score (0.0–1.0)
- `dataset_size`: Target size for the final dataset
- `use_ai_model`: Toggle between AI model and rule-based generation
- `hop_generating_agent`: Custom MultiHopGeneratorAgent (optional)
- `max_iterations`: Maximum number of improvement iterations (default: 3)
- `score_threshold`: Minimum quality thresholds for evaluation dimensions (default: 0.7)
- `few_shot_examples`: (Optional) Examples for few-shot learning
- `output_path`: (Optional) Path for saving generated results

To use the Unstructured IO module, just import and initialize it. You can parse, clean, extract, chunk, and stage data from files or URLs. Here's how you use it step by step:
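
A minimal sketch of that workflow (assuming the `UnstructuredIO` loader's `parse_file_or_url` and `chunk_elements` helpers; names may vary slightly by version, and the URL is a hypothetical input):

```python
from camel.loaders import UnstructuredIO

uio = UnstructuredIO()

# Parse a URL (or local file path) into structured elements
elements = uio.parse_file_or_url("https://www.camel-ai.org")

# Chunk the parsed elements for downstream agent / RAG use
chunks = uio.chunk_elements(elements=elements, chunk_type="chunk_by_title")
print(f"Parsed {len(elements)} elements into {len(chunks)} chunks")
```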
That covers the basics of Unstructured IO. For more, see the Unstructured IO Documentation.
"completed", the content extraction is done and you can retrieve the results.
The retrieved output is JSON in which each chunk carries the extracted text, a `"markdown"` rendering of it, and metadata such as `"chunk_length": 100` (in this example the chunk contained a paper's author list).

The `BaseMessage` class is the backbone for all message objects in
the CAMEL chat system. It offers a consistent structure for agent
communication and easy conversion between message types.
To create a `BaseMessage` instance, supply these arguments:

- `role_name`: the name of the user or assistant role
- `role_type`: `RoleType.ASSISTANT` or `RoleType.USER`
- `content`: the text content of the message
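
For example (a sketch; `meta_dict` is optional metadata and may be left as `None`):

```python
from camel.messages import BaseMessage
from camel.types import RoleType

message = BaseMessage(
    role_name="example_assistant",
    role_type=RoleType.ASSISTANT,
    meta_dict=None,          # optional metadata
    content="Hello, CAMEL!",
)
```

The conversion calls below operate on this `message` object.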
The `BaseMessage` class lets you duplicate messages and convert them to backend-specific formats:
```python
from camel.types import OpenAIBackendRole

# Duplicate the message with new content
new_message = message.create_new_instance("new test content")

# Convert to OpenAI-compatible message formats
openai_message = message.to_openai_message(role_at_backend=OpenAIBackendRole.USER)
openai_system_message = message.to_openai_system_message()
openai_user_message = message.to_openai_user_message()
openai_assistant_message = message.to_openai_assistant_message()

# Convert to a plain dictionary
message_dict = message.to_dict()
```
These methods convert a `BaseMessage` into the
right format for different LLM APIs and agent flows.
The `BaseMessage` class is essential for structured, clear, and
flexible communication in the CAMEL-AI ecosystem—making it simple to create,
convert, and handle messages across any workflow.
Create a model file named `Llama3ModelFile`:
```
FROM llama3
PARAMETER temperature 0.8
PARAMETER stop Result
SYSTEM """ """
```
You can also create a shell script `setup_llama3.sh`:
```bash
#!/bin/zsh
model_name="llama3"
custom_model_name="camel-llama3"

# Pull the base model, then build the custom model from the Modelfile
ollama pull $model_name
ollama create $custom_model_name -f ./Llama3ModelFile
```
Make the script executable and run it:
```bash
chmod +x setup_llama3.sh
./setup_llama3.sh
```
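
Once the custom model is registered with Ollama, point CAMEL at the local server. A minimal sketch (assuming Ollama's default OpenAI-compatible endpoint at `http://localhost:11434/v1`; adjust the URL and model name for your setup):

```python
from camel.agents import ChatAgent
from camel.models import ModelFactory
from camel.types import ModelPlatformType

ollama_model = ModelFactory.create(
    model_platform=ModelPlatformType.OLLAMA,
    model_type="camel-llama3",        # the custom model created above
    url="http://localhost:11434/v1",  # assumed default Ollama endpoint
    model_config_dict={"temperature": 0.4},
)

agent = ChatAgent(system_message="You are a helpful assistant.", model=ollama_model)
print(agent.step("Say hello from Llama 3.").msgs[0].content)
```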
- `add(funcs)`: Register one or more `FunctionTool` objects for execution
- `reset()`: Reset the runtime to its initial state
- `get_tools()`: List all tools managed by the runtime
## Quick Start Example: RemoteHttpRuntime
- Set `PYTHON_EXECUTABLE`, `PYTHONPATH`, and more for custom envs
- Registered tools are exposed as `FunctionTool`-style tool functions
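
Here is a quick-start sketch (assuming the `camel.runtimes` import path and the builder-style `add(...).build()` API; check the module name in your installed version):

```python
from camel.runtimes import RemoteHttpRuntime  # assumed path; older releases may use camel.runtime
from camel.toolkits import MathToolkit

if __name__ == "__main__":
    # Start a local HTTP runtime and register MathToolkit's tools inside it
    runtime = (
        RemoteHttpRuntime("localhost")
        .add(MathToolkit().get_tools(), "camel.toolkits.MathToolkit")
        .build()
    )
    runtime.wait()  # block until the runtime server is ready

    tools = runtime.get_tools()
    print(f"Runtime exposes {len(tools)} FunctionTool objects")
    print("1 + 2 =", tools[0].func(1, 2))  # assuming the first tool is MathToolkit's add
```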
For agent-level, dynamic code execution, always consider dedicated sandboxing—such as UbuntuDockerRuntime’s exec_python_file()—for running dynamically generated scripts with maximum isolation and safety.
---
---
title: "Societies"
description: "Collaborative agent frameworks in CAMEL: autonomous social behaviors, role-based task solving, and turn-based agent societies."
icon: people-group
---
In the role-playing loop, prompts keep both agents on task with placeholders such as `<ASSISTANT_ROLE>`, `<USER_ROLE>`, `Solution: <YOUR_SOLUTION>`, and the closing `Next request.`

| Attribute | Type | Description |
|---|---|---|
| assistant_role_name | str | Name of assistant's role |
| user_role_name | str | Name of user's role |
| critic_role_name | str | Name of critic's role (optional) |
| task_prompt | str | Prompt for the main task |
| with_task_specify | bool | Enable task specification agent |
| with_task_planner | bool | Enable task planner agent |
| with_critic_in_the_loop | bool | Include critic in conversation loop |
| critic_criteria | str | How the critic scores/evaluates outputs |
| model | BaseModelBackend | Model backend for responses |
| task_type | TaskType | Type/category of the task |
| assistant_agent_kwargs | Dict | Extra options for assistant agent |
| user_agent_kwargs | Dict | Extra options for user agent |
| task_specify_agent_kwargs | Dict | Extra options for task specify agent |
| task_planner_agent_kwargs | Dict | Extra options for task planner agent |
| critic_kwargs | Dict | Extra options for critic agent |
| sys_msg_generator_kwargs | Dict | Options for system message generator |
| extend_sys_msg_meta_dicts | List[Dict] | Extra metadata for system messages |
| extend_task_specify_meta_dict | Dict | Extra metadata for task specification |
| output_language | str | Target output language |
| assistant_agent | ChatAgent | Custom ChatAgent to use as assistant (optional) |
| user_agent | ChatAgent | Custom ChatAgent to use as user (optional) |
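
As a usage sketch of the standard `RolePlaying` loop (`init_chat()` followed by repeated `step()` calls; argument details may vary by version):

```python
from camel.societies import RolePlaying

session = RolePlaying(
    assistant_role_name="Python Programmer",
    user_role_name="Stock Trader",
    task_prompt="Develop a simple trading bot",
    with_task_specify=True,
)

input_msg = session.init_chat()
for _ in range(3):  # a few turns for illustration
    assistant_response, user_response = session.step(input_msg)
    if assistant_response.terminated or user_response.terminated:
        break
    print(assistant_response.msg.content)
    input_msg = assistant_response.msg
```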
Use the `with_task_specify` and `with_task_planner` options for highly complex tasks. See [examples/society/](https://github.com/camel-ai/camel/tree/master/examples/runtimes) in the CAMEL repo for advanced agent society demos.

- **RolePlaying** — Advanced turn-taking, prompt-guarded collaboration (most common for LLM societies).
- **BabyAGI** — Autonomous, open-ended R&D loops for big-picture or research goals.

As above, the role-playing prompts rely on placeholders such as `<ASSISTANT_ROLE>`, `<USER_ROLE>`, `Solution: <YOUR_SOLUTION>`, and `Next request.`

| Attribute | Type | Description |
|---|---|---|
| assistant_role_name | str | Name of assistant's role |
| user_role_name | str | Name of user's role |
| critic_role_name | str | Name of critic's role (optional) |
| task_prompt | str | Prompt for the main task |
| with_task_specify | bool | Enable task specification agent |
| with_task_planner | bool | Enable task planner agent |
| with_critic_in_the_loop | bool | Include critic in conversation loop |
| critic_criteria | str | How the critic scores/evaluates outputs |
| model | BaseModelBackend | Model backend for responses |
| task_type | TaskType | Type/category of the task |
| assistant_agent_kwargs | Dict | Extra options for assistant agent |
| user_agent_kwargs | Dict | Extra options for user agent |
| task_specify_agent_kwargs | Dict | Extra options for task specify agent |
| task_planner_agent_kwargs | Dict | Extra options for task planner agent |
| critic_kwargs | Dict | Extra options for critic agent |
| sys_msg_generator_kwargs | Dict | Options for system message generator |
| extend_sys_msg_meta_dicts | List[Dict] | Extra metadata for system messages |
| extend_task_specify_meta_dict | Dict | Extra metadata for task specification |
| output_language | str | Target output language |
Use the `with_task_specify` and `with_task_planner` options for highly complex tasks. See [examples/society/](https://github.com/camel-ai/camel/tree/master/examples/runtimes) in the CAMEL repo for advanced agent society demos.

Use `sse` or `streamable-http` for ACI.dev; pick whichever transport is supported by your agent/server.
Any CAMEL toolkit can be exposed this way (e.g. `SearchToolkit`, `MathToolkit`); they all work the same way!
```python
from camel.toolkits import ArxivToolkit
import argparse

parser = argparse.ArgumentParser(
    description="Run Arxiv Toolkit as an MCP server."
)
parser.add_argument(
    "--mode",
    choices=["stdio", "sse", "streamable-http"],
    default="stdio",
    help="Select MCP server mode.",
)
args = parser.parse_args()

toolkit = ArxivToolkit()
toolkit.mcp.run(args.mode)
```
- **stdio:** For local IPC (default, fast and secure for single-machine setups)
- **sse:** Server-Sent Events (good for remote servers and web clients)
- **streamable-http:** Modern, high-performance HTTP streaming
### Discoverable & Usable Instantly
Once running, your MCP server will:
- Advertise all available toolkit methods as standard MCP tools
- Support dynamic tool discovery (`tools/list` endpoint)
- Allow any compatible agent or client (not just CAMEL) to connect and call your tools
This means you can build an LLM workflow where, for example, Claude running in your browser or another service in your company network can call your toolkit directly—without ever importing your Python code.
On the client side, connect to MCP servers with `MCPClient` or `MCPToolkit`.

Define your servers in a config file (e.g. `mcp_servers_config.json`):
```json mcp_servers_config.json lines icon="settings"
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem@2025.1.14",
"."
]
}
},
"mcpWebServers": {}
}
```
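
From CAMEL, a sketch of consuming those servers (assuming `MCPToolkit` takes a `config_path` and works as an async context manager; adjust to your installed version):

```python
import asyncio

from camel.agents import ChatAgent
from camel.toolkits import MCPToolkit


async def main():
    # Connect to every server declared in the config file above
    async with MCPToolkit(config_path="mcp_servers_config.json") as mcp:
        agent = ChatAgent(
            system_message="You are a helpful assistant with filesystem access.",
            model="gpt-4o-mini",
            tools=mcp.get_tools(),
        )
        response = await agent.astep("List the files in the current directory.")
        print(response.msgs[0].content)


asyncio.run(main())
```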
Use the agent's `step()` for synchronous execution. The ready-made MCP server scripts live under `services/` in the CAMEL repo.
Configure your MCP client (Claude, Cursor, etc.) to connect:
```json mcp_servers_config.json Example highlight={5}
{
"camel-chat-agent": {
"command": "/path/to/python",
"args": [
"/path/to/camel/services/agent_mcp_server.py"
],
"env": {
"OPENAI_API_KEY": "...",
"OPENROUTER_API_KEY": "...",
"BRAVE_API_KEY": "..."
}
}
}
```
Turn any `ChatAgent` into an MCP server instantly with `to_mcp()`:
```python agent_mcp_server.py lines icon="python"
from camel.agents import ChatAgent
# Create a chat agent with your model
agent = ChatAgent(model="gpt-4o-mini")
# Convert to an MCP server
mcp_server = agent.to_mcp(
name="demo", description="A demonstration of ChatAgent to MCP conversion"
)
if __name__ == "__main__":
print("Starting MCP server on http://localhost:8000")
mcp_server.run(transport="streamable-http")
```
Supported transports: `stdio`, `sse`, `streamable-http`.
## What Changes with MCP?
## Why introduce MCP?
- **Ecosystem**: Leverage a growing library of MCP plugins—just plug them in.
- **Uniformity**: Not limited to any one model or vendor; if your agent supports MCP, you can swap models/tools anytime.
- **Data Security**: Keep sensitive data on your device. MCP servers decide what to expose—your private data never needs to leave your machine.
## MCP Architecture and Principles
**Basic Architecture**
MCP follows a **client-server model** with three main roles:
Please review the attached document.
", cc="manager@example.com", attachments=["/path/to/document.pdf"], is_html=True ) # Create a draft draft_result = gmail.create_draft( to="colleague@example.com", subject="Draft: Project Proposal", body="Here's the initial draft of our project proposal..." ) ``` ### Message Management ```python # Get recent messages messages = gmail.fetch_emails(max_results=10) # Search for specific emails urgent_emails = gmail.fetch_emails(query="is:unread subject:urgent") # Get messages from a specific sender from_sender = gmail.fetch_emails(query="from:important@company.com") # Get message details (internal helper) message = gmail._get_message_details("message_id_here") # Move a message to trash delete_result = gmail.move_to_trash("message_id_here") ``` ### Label Management ```python # Get all labels labels = gmail.list_gmail_labels() # Create a new label new_label = gmail.create_label("Important Projects") # Modify message labels gmail.modify_email_labels( message_id="message_id_here", add_label_ids=["label_id_here"], remove_label_ids=["INBOX"] ) ``` ### Contact Management ```python # Get all contacts contacts = gmail.get_contacts() # Search for specific contacts search_results = gmail.search_contacts("John Smith") # Get user profile profile = gmail.get_profile() ``` ### Thread Management ```python # Get recent threads threads = gmail.list_threads(max_results=5) # Get a specific thread thread = gmail.fetch_thread_by_id("thread_id_here") # Search for threads project_threads = gmail.get_threads(query="subject:project") ``` ## Error Handling All methods return a dictionary with a `success` boolean field. Check this field to determine if the operation was successful: ```python result = gmail.send_email( to="invalid-email", subject="Test", body="Test message" ) if result["success"]: print(f"Email sent successfully! Message ID: {result['message_id']}") else: print(f"Failed to send email: {result['message']}") ``` ## Authentication The Gmail toolkit uses OAuth2 authentication with Google's Gmail API. This section covers the complete authentication setup and mechanisms. ### Prerequisites Before using the Gmail toolkit, you need to set up Google API credentials: 1. **Google Cloud Console Setup:** - Go to the [Google Cloud Console](https://console.cloud.google.com/) - Create a new project or select an existing one - Enable the Gmail API and Google People API - Create OAuth 2.0 credentials (Desktop application type) 2. **Download Credentials:** - Download the `credentials.json` file from the Google Cloud Console - Place it in your project directory or set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable ### Authentication Flow The Gmail toolkit implements a multi-step OAuth2 authentication process: #### 1. Initial Authentication On first use, the toolkit will: ```python # The authentication process happens automatically when initializing gmail = GmailToolkit() # This triggers the OAuth flow ``` **What happens behind the scenes:** - Loads the `credentials.json` file - Opens a browser window for user consent - Requests permission for all required Gmail scopes - Exchanges authorization code for access and refresh tokens - Stores tokens securely in `token.json` for future use #### 2. 
Token Storage The toolkit stores authentication tokens in a local `token.json` file: ```json { "token": "ya29.a0AfH6SMC...", "refresh_token": "1//04...", "token_uri": "https://oauth2.googleapis.com/token", "client_id": "your-client-id.apps.googleusercontent.com", "client_secret": "your-client-secret", "scopes": ["https://mail.google.com/", ...], "expiry": "2024-01-01T12:00:00Z" } ``` #### 3. Automatic Token Refresh The toolkit automatically handles token refresh: - **Access tokens** expire after 1 hour - **Refresh tokens** are used to obtain new access tokens - **Automatic refresh** happens before each API call if needed - **No user intervention** required after initial setup ### Required Scopes The toolkit requests the following Gmail API scopes: | Scope | Purpose | Access Level | |-------|---------|--------------| | `https://www.googleapis.com/auth/gmail.readonly` | Read emails and metadata | Read-only | | `https://www.googleapis.com/auth/gmail.send` | Send emails | Write | | `https://www.googleapis.com/auth/gmail.modify` | Modify emails and labels | Write | | `https://www.googleapis.com/auth/gmail.compose` | Create drafts and compose | Write | | `https://www.googleapis.com/auth/gmail.labels` | Manage labels | Write | | `https://www.googleapis.com/auth/contacts.readonly` | Read Google Contacts | Read-only | | `https://www.googleapis.com/auth/userinfo.profile` | Access user profile | Read-only | ### Environment Variables You can configure authentication using environment variables: ```bash # Set the path to your credentials file export GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json" # Optional: Set custom token storage location export GMAIL_TOKEN_PATH="/custom/path/token.json" ``` ### Authentication Methods #### Method 1: Credentials File (Recommended) ```python # Place credentials.json in your project directory gmail = GmailToolkit() ``` #### Method 2: Environment Variable ```python import os os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/path/to/credentials.json' gmail = GmailToolkit() ``` #### Method 3: Explicit Path ```python # The toolkit will look for credentials.json in the current directory # or use the path specified in GOOGLE_APPLICATION_CREDENTIALS gmail = GmailToolkit() ``` ### Troubleshooting Authentication #### Common Issues and Solutions **1. "Credentials not found" Error:** ```python # Error: FileNotFoundError: [Errno 2] No such file or directory: 'credentials.json' # Solution: Ensure credentials.json is in the correct location ``` **2. "Invalid credentials" Error:** ```python # Error: google.auth.exceptions.RefreshError: The credentials do not contain the necessary fields # Solution: Re-download credentials.json from Google Cloud Console ``` **3. "Access denied" Error:** ```python # Error: googleapiclient.errors.HttpError: