LipsyncX
Dubbing model

ElevenLabs Dubbing

Dub videos with language detection and voice selection.

Translate audio or video while preserving emotion, timing, and tone, with speaker separation and background audio retention.

Best for: Global product demos
Inputs: Video
Outputs: Video

What this model is best at

Short answer: translating audio or video into other languages while keeping the original emotion, timing, and tone intact, separating individual speakers and retaining background audio.

Use this workspace to preview the model, compare example output, and start creating with the recommended workflow for this model.

Highlight 1

Automatic language detection and translation.

Highlight 2

Preserves the original emotion and tone.

Highlight 3

Speaker separation for multi‑speaker content.

Dubbing

ElevenLabs Dubbing workspace

Start from the built-in workflow below, then tune the model inside the standard LipsyncX creation surface.

1. Upload photo
2. Choose a face


Product demo localization

Auto‑detect and dub for multiple regions.

Original: [video – product demo localization, original]
Localized: [video – product demo localization, generated]

Popular use cases

Use case 1

Global demos

Scale product launches.

Use case 2

Customer training

Localize enablement.

Use case 3

Sales assets

Regionalized pitches.

Quick specs

Primary use
Automated dubbing + lip‑sync
Inputs
Video (auto‑detect language)
Output
Localized video with preserved tone
Best strength
Emotion‑preserving translation
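For teams automating localization, the specs above map naturally onto a programmatic workflow: upload a video, let the service auto-detect the source language, and request a target language. A minimal sketch is below; the endpoint URL, header, and field names follow ElevenLabs' public dubbing API at the time of writing, but treat them as assumptions and check the current API reference before relying on them.

```python
import os


API_URL = "https://api.elevenlabs.io/v1/dubbing"  # assumed endpoint


def build_dub_request(video_path: str, target_lang: str,
                      source_lang: str = "auto") -> dict:
    """Assemble the form fields for one dubbing job.

    source_lang="auto" asks the service to detect the spoken language,
    matching the "auto-detect language" input listed in the specs.
    """
    return {
        "data": {"target_lang": target_lang, "source_lang": source_lang},
        "file_field": ("file", os.path.basename(video_path)),
    }


def submit_dub(video_path: str, target_lang: str, api_key: str) -> str:
    """Upload the video and return the dubbing job id."""
    import requests  # deferred so the payload helper has no dependency

    req = build_dub_request(video_path, target_lang)
    with open(video_path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"xi-api-key": api_key},
            data=req["data"],
            files={"file": (req["file_field"][1], f, "video/mp4")},
        )
    resp.raise_for_status()
    return resp.json()["dubbing_id"]
```

Separating payload construction (`build_dub_request`) from the network call keeps the request shape easy to inspect and test without touching the API.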

Best practices

Review transcripts before finalizing output.
Use high‑quality source audio for best translation.
Spot‑check lip sync on close‑up scenes.

FAQ

How many languages are supported?

Supports translation into 32 languages.

How long can uploads be?

UI supports up to 45‑minute files; the API supports up to 2.5‑hour files.
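These limits can be enforced client-side before uploading, so a too-long file fails fast rather than after a slow upload. A minimal sketch of that check follows; measuring the real duration (for example with ffprobe) is left out.

```python
UI_LIMIT_S = 45 * 60            # 45-minute UI limit
API_LIMIT_S = int(2.5 * 3600)   # 2.5-hour API limit


def upload_routes(duration_s: float) -> list[str]:
    """Return which upload paths accept a file of this duration."""
    routes = []
    if duration_s <= UI_LIMIT_S:
        routes.append("ui")
    if duration_s <= API_LIMIT_S:
        routes.append("api")
    return routes
```

A 30-minute demo can go through either route; a 1-hour training video must use the API; a 3-hour recording needs to be split first.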

Can I edit the translation?

Yes. You can review and edit the transcript before finalizing.

Ready to try ElevenLabs Dubbing?

Use the built-in workspace to test prompts, compare outputs, and see how this model fits your content workflow.