# Coqui TTS

> We use 👩‍✈️[Coqpit] for configuration management. It provides basic static type checking and serialization capabilities on top of native Python `dataclasses`. See [Configuration](configuration.md) for how a simple configuration looks.

## Pages

- [Configuration](configuration.md): We use 👩‍✈️[Coqpit] for configuration management. It provides basic static type checking and serialization capabilities...
- [Contributing](contributing.md)
- [Docker Images](docker-images.md)
- [Humble FAQ](faq.md): We tried to collect common issues and questions we receive about 🐸TTS. It is worth checking before going deeper.
- [Fine-tuning a 🐸 TTS model](finetuning.md): Fine-tuning takes a pre-trained model and retrains it to improve its performance on a different task or dataset.
- [Formatting Your Dataset](formatting-your-dataset.md)
- [Implementing a New Language Frontend](implementing-a-new-language-frontend.md): Language frontends are located under `TTS.tts.utils.text`.
- [Implementing a Model](implementing-a-new-model.md): 1. Implement layers.
- [Documentation Content](index.md)
- [Synthesizing Speech](inference.md)
- [Installation](installation.md): 🐸TTS supports Python >=3.7, <3.11.0 and is tested on Ubuntu 18.10, 19.10, and 20.10.
- [AudioProcessor API](main-classes-audio-processor.md): `TTS.utils.audio.AudioProcessor` is the core class for all the audio processing routines. It provides an API for...
- [Datasets](main-classes-dataset.md): API reference for `TTS.tts.datasets.TTSDataset`.
- [GAN API](main-classes-gan.md): The {class}`TTS.vocoder.models.gan.GAN` class provides an easy way to implement new GAN-based models. You just need...
- [Model API](main-classes-model-api.md): The Model API provides a set of functions that easily make your model compatible with the `Trainer`...
- [Speaker Manager API](main-classes-speaker-manager.md): The {class}`TTS.tts.utils.speakers.SpeakerManager` class organizes speaker-related data and information for 🐸TTS models. It is...
- [Trainer API](main-classes-trainer-api.md): We made the trainer a separate project on...
- [Mary-TTS API Support for Coqui-TTS](marytts.md): [Mary (Modular Architecture for Research in sYnthesis) Text-to-Speech](http://mary.dfki.de/) is an open-source (GNU L...
- [🐶 Bark](models-bark.md): Bark is a multilingual TTS model created by [Suno-AI](https://www.suno.ai/). It can generate conversational speech a...
- [Forward TTS model(s)](models-forward-tts.md): A general feed-forward TTS model implementation that can be configured to different architectures by setting different...
- [Glow TTS](models-glow-tts.md): Glow TTS is a normalizing flow model for text-to-speech. It is built on the generic Glow model previously...
- [Overflow TTS](models-overflow.md): Neural HMMs are a type of neural transducer recently proposed for...
- [🌮 Tacotron 1 and 2](models-tacotron1-2.md): Tacotron is one of the first successful DL-based text-to-mel models and opened up the whole TTS field for more DL res...
- [🐢 Tortoise](models-tortoise.md): Tortoise is a very expressive TTS system with impressive voice cloning capabilities. It is based on a GPT-like autog...
- [VITS](models-vits.md): VITS (Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech...
- [ⓍTTS](models-xtts.md): ⓍTTS is a super cool Text-to-Speech model that lets you clone voices in different languages by using just a quick 3-s...
- [Training a Model](training-a-model.md): 1. Decide the model you want to use.
- [TTS Datasets](tts-datasets.md): Some of the known public datasets to which we have successfully applied 🐸TTS.
- [Tutorial For Nervous Beginners](tutorial-for-nervous-beginners.md): User-friendly installation. Recommended only for synthesizing voice.
- [What makes a good TTS dataset](what-makes-a-good-dataset.md)
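
As noted above, Coqpit builds its configuration handling on top of native Python `dataclasses`. As a rough sketch of that underlying pattern using only the standard library (the class and field names here are hypothetical, not actual 🐸TTS options; real configs inherit from `Coqpit` and define many more fields):

```python
from dataclasses import dataclass, field, asdict

# Hypothetical config for illustration only; real 🐸TTS configs subclass
# Coqpit, which adds type checking and (de)serialization on this pattern.
@dataclass
class SimpleTrainingConfig:
    model_name: str = "my_model"
    batch_size: int = 32
    learning_rate: float = 1e-3
    mixed_precision: bool = False
    datasets: list = field(default_factory=list)

# Override a default at construction time, then serialize to a plain dict.
config = SimpleTrainingConfig(batch_size=16)
print(asdict(config)["batch_size"])  # → 16
```

See [Configuration](configuration.md) for the actual Coqpit-based configuration classes.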