nvidia/fastpitch-hifigan-tts

Expressive and engaging English voices for Q&A assistants, brand ambassadors, and service robots

Speech Synthesis: English-US Multispeaker - Model Overview

Description:

The English-US Multispeaker FastPitch-HifiGAN model synthesizes speech audio from input text using two model components: FastPitch and HifiGAN. This model is ready for commercial use.

FastPitch is a mel-spectrogram generator, designed to be used as the first part of a neural text-to-speech system in conjunction with a neural vocoder. This model uses the International Phonetic Alphabet (IPA) for inference and training, and it can output a female or a male voice for US English.

HifiGAN is a neural vocoder model for text-to-speech applications. It is the second part of a two-stage speech synthesis pipeline.
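The two stages can be exercised directly in NVIDIA NeMo. The sketch below is illustrative only: the checkpoint names ("tts_en_fastpitch", "tts_en_hifigan") and the 22.05 kHz output rate refer to the public NeMo checkpoints and are assumptions, not necessarily the exact checkpoints or sample rate behind this Riva model.

```python
# Minimal two-stage synthesis sketch with NVIDIA NeMo.
# Assumption: the public NeMo checkpoints "tts_en_fastpitch" and "tts_en_hifigan"
# stand in for the model described here; they output 22.05 kHz audio.
import soundfile as sf
from nemo.collections.tts.models import FastPitchModel, HifiGanModel

spec_generator = FastPitchModel.from_pretrained("tts_en_fastpitch")  # stage 1: text -> mel-spectrogram
vocoder = HifiGanModel.from_pretrained("tts_en_hifigan")             # stage 2: mel-spectrogram -> waveform

tokens = spec_generator.parse("Hello, this is a two-stage speech synthesis pipeline.")
spectrogram = spec_generator.generate_spectrogram(tokens=tokens)
audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)

sf.write("speech.wav", audio.detach().cpu().numpy()[0], samplerate=22050)
```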

References:

FastPitch Model on NGC

HifiGAN Model on NGC

HifiGAN paper: https://arxiv.org/abs/2010.05646

Model Architecture:

Network Architecture: FastPitch + HifiGAN

FastPitch is a fully parallel, transformer-based text-to-speech model conditioned on fundamental frequency contours. The model predicts pitch contours during inference. By altering these predictions, the generated speech can be made more expressive, better match the semantics of the utterance, and be more engaging to the listener.

HifiGAN is a neural vocoder based on a generative adversarial network framework. During training, the model uses a powerful discriminator composed of small sub-discriminators, each focusing on a specific periodic part of the raw waveform.
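When the model is served through Riva, one practical way to alter the predicted pitch and speaking rate is SSML prosody markup. The sketch below is a non-authoritative example: the server address and voice name are placeholders, and attribute values such as pitch="high" should be checked against the Riva TTS SSML documentation.

```python
# Illustrative sketch: shifting pitch and rate via SSML <prosody> on a Riva server.
# Assumptions: a local Riva deployment at localhost:50051 and a voice named
# "English-US.Female-1"; the prosody attribute values shown are examples only.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")
tts = riva.client.SpeechSynthesisService(auth)

ssml = (
    "<speak>"
    "Thanks for calling. "
    '<prosody rate="110%" pitch="high">We are glad you are here!</prosody>'
    "</speak>"
)

resp = tts.synthesize(
    ssml,
    voice_name="English-US.Female-1",
    language_code="en-US",
    sample_rate_hz=44100,
    encoding=riva.client.AudioEncoding.LINEAR_PCM,
)
```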

Input:

For FastPitch (1st Stage): Text Strings in English

Other Properties Related to Input: 400-Character Text String Limit

Output:

For HifiGAN (2nd Stage): Audio of shape (batch x time) in wav format

Other Properties Related to Output: Mono, encoded 16-bit audio; 20-second maximum length. Depending on the input, this model can output a female or a male voice for American English, with six (6) emotions for the female voice and four (4) emotions for the male voice. The female voice supports "neutral," "calm," "happy," "angry," "fearful," and "sad"; the male voice supports "neutral," "calm," "happy," and "angry."
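An end-to-end request against a Riva deployment of this model might look like the sketch below. The server address and voice name are assumptions (emotion-specific voice names, if exposed, depend on the deployment); the WAV handling simply wraps the mono, 16-bit PCM described above.

```python
# Sketch: synthesize speech with the Riva Python client and save it as a WAV file.
# Assumptions: a Riva server at localhost:50051 with this model deployed, and a
# voice named "English-US.Female-1" (actual voice/emotion names vary by deployment).
import wave

import riva.client

auth = riva.client.Auth(uri="localhost:50051")
tts = riva.client.SpeechSynthesisService(auth)

text = "Welcome aboard! How can I help you today?"  # keep requests under the 400-character limit
resp = tts.synthesize(
    text,
    voice_name="English-US.Female-1",
    language_code="en-US",
    sample_rate_hz=44100,
    encoding=riva.client.AudioEncoding.LINEAR_PCM,
)

# The response carries mono, 16-bit PCM samples; wrap them in a WAV container.
with wave.open("output.wav", "wb") as wav_file:
    wav_file.setnchannels(1)   # mono
    wav_file.setsampwidth(2)   # 16-bit samples
    wav_file.setframerate(44100)
    wav_file.writeframes(resp.audio)
```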

Training & Evaluation Dataset:

Training Dataset:

Data Collection Method by Dataset:

  • Human
    Properties (Quantity, Dataset Descriptions, Sensor(s)): This model is trained on a proprietary dataset of audio-text pairs sampled at 44100 Hz, containing one female and one male voice speaking US English. Although both voices are trained for all emotions, only the emotions that passed the evaluation standard for expressiveness and quality are released. The dataset also contains a subset of sentences with different words emphasized.

Evaluation Dataset:

Data Collection Method by Dataset:

  • Human
    Properties (Quantity, Dataset Descriptions, Sensor(s)): This model is evaluated on a proprietary dataset sampled at 44100 Hz, containing one female and one male voice speaking US English. Although both voices are trained for all emotions, only the emotions that passed the evaluation standard for expressiveness and quality are released. The dataset also contains a subset of sentences with different words emphasized.

Software Integration:

Runtime Engine(s):

  • Riva 2.15.0 or Higher

Supported Operating System(s):

  • Linux

Model Version(s):

  • 2.15.0

Inference:

Engine: Triton
Test Hardware:

  • NVIDIA H100 GPU
  • NVIDIA A100 GPU
  • NVIDIA L40 GPU

Ethical Considerations (For NVIDIA Models Only):

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards. Please report security vulnerabilities or NVIDIA AI Concerns here.

GOVERNING TERMS:

This trial is governed by the NVIDIA API Trial Terms of Service (found at https://assets.ngc.nvidia.com/products/api-catalog/legal/NVIDIA%20API%20Trial%20Terms%20of%20Service.pdf). The use of this model is governed by the AI Foundation Models Community License Agreement (found at NVIDIA Agreements | Enterprise Software | NVIDIA AI Foundation Models Community License Agreement).