# Galileo AI

> ## Documentation Index
> Fetch the complete documentation index at: https://docs.galileo.ai/llms.txt
> Use this file to discover all available pages before exploring further.

---

# Source: https://docs.galileo.ai/galileo/galileo-nlp-studio/galileo-product-features/3p-integrations.md

# Third-Party Integrations

> Galileo integrates seamlessly with your tools.

We have integrated with a number of Data Storage Providers, Labeling Solutions, and LLM APIs.

To manage your integrations, go to *Integrations* under your *Profile Avatar Menu*. From your integrations page, you can turn integrations on or off. Your credentials are stored securely; Galileo is SOC 2 compliant.

---

# Source: https://docs.galileo.ai/galileo/gen-ai-studio-products/galileo-evaluate/how-to/a-b-compare-prompts.md

# A/B Compare Prompts

> Easily compare multiple LLM runs on a single screen for better decision making.

Galileo allows you to compare multiple evaluation runs side by side. This lets you view how different configurations of your system (e.g. different parameters, prompt templates, or retriever strategies) handled the same set of queries, enabling you to quickly evaluate, analyze, and annotate your experiments. Galileo supports this for both single-step workflows and multi-step (chain) workflows.

**How do I get started?**

To enter *Compare Runs* mode, select the runs you want to compare and click "Compare Runs" on the Action Bar.

For two runs to be comparable, they must have been created from the same evaluation dataset (see the sketch at the end of this page).

Once you're in *Compare Runs*, you can:

* Compare how your different configurations responded to the same input.
* Compare metrics.
* Expand the full trace of a multi-step workflow to identify which steps went wrong.
* Review and add human feedback.
* Toggle back and forth between inputs in your eval set.
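
To make the "same evaluation dataset" requirement concrete, here is a minimal sketch of setting up two comparable runs: identical inputs, different configurations. The `galileo_client` module, the `create_run` function, and all of its parameters below are hypothetical placeholders, not Galileo's actual SDK; consult the documentation index linked above for the real API.

```python
# Hypothetical sketch: create two evaluation runs over the SAME dataset so
# they can be selected together and opened in Compare Runs mode.
# `galileo_client`, `create_run`, and every parameter name here are
# illustrative placeholders, not Galileo's actual SDK.

from galileo_client import create_run  # hypothetical import

# Shared dataset: both runs must use the same eval set to be comparable.
EVAL_DATASET = "customer_queries.jsonl"

# Run A: baseline prompt template with deterministic sampling.
run_a = create_run(
    project="support-bot-eval",
    dataset=EVAL_DATASET,
    template="Answer the customer question:\n{query}",
    model="gpt-4o",
    temperature=0.0,
)

# Run B: same dataset, different template and sampling settings.
# Only the configuration changes; because the inputs are identical,
# the two runs line up query-by-query in the Compare Runs view.
run_b = create_run(
    project="support-bot-eval",
    dataset=EVAL_DATASET,
    template="You are a support agent. Cite policy when answering:\n{query}",
    model="gpt-4o",
    temperature=0.7,
)

print(run_a.id, run_b.id)  # select these two runs, then click "Compare Runs"
```

The design point the UI enforces is visible in the sketch: vary one configuration axis at a time (template, parameters, retriever strategy) while holding the dataset fixed, so that any metric difference between runs is attributable to the configuration change.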