Distil-Whisper Large v3 trades a small accuracy gap for large gains in speed and size, staying within about 1 WER point of Whisper Large v3 across transcription scenarios:
- Short-form transcription: 9.7% WER (vs 8.4% for Large v3)
- Sequential long-form: 10.8% WER (vs 10.0% for Large v3)
- Chunked long-form: 10.9% WER (vs 11.0% for Large v3)
- Speed improvement: 6.3x faster than Whisper Large v3
- Model size: 756M parameters (vs 1550M for Large v3)
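The figures above can be reduced to a quick back-of-the-envelope comparison. A minimal sketch in Python; the dictionary and variable names are illustrative only, not part of any library API, and the numbers are copied from the list above:

```python
# WER (%) and parameter counts quoted in the list above.
large_v3 = {"short_wer": 8.4, "seq_wer": 10.0, "chunk_wer": 11.0, "params_m": 1550}
distil   = {"short_wer": 9.7, "seq_wer": 10.8, "chunk_wer": 10.9, "params_m": 756}

# Parameter reduction: roughly 2x smaller.
compression = large_v3["params_m"] / distil["params_m"]
print(f"Parameter reduction: {compression:.2f}x")

# Absolute WER gap per scenario (positive = distilled model is worse).
for key, label in [("short_wer", "short-form"),
                   ("seq_wer", "sequential long-form"),
                   ("chunk_wer", "chunked long-form")]:
    gap = distil[key] - large_v3[key]
    print(f"{label}: {gap:+.1f} WER vs Large v3")
```

This makes the trade-off explicit: the distilled model gives up at most 1.3 WER (short-form), is effectively tied or slightly better on chunked long-form, and in exchange roughly halves the parameter count while running 6.3x faster.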