We have developed and are open-sourcing Whisper, a neural network that approaches human-level robustness and accuracy in English speech recognition. Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual, multitask supervised data collected from the web. Training on such a large and diverse dataset improves robustness to accents, background noise, and technical language. It also enables transcription in multiple languages, as well as translation from those languages into English. We are releasing inference code and open-source models to serve as a foundation for building useful applications and for further research on robust speech processing.

The Whisper architecture is a simple end-to-end approach, implemented as an encoder-decoder Transformer. Input audio is split into 30-second chunks, converted into a log-Mel spectrogram, and then passed into an encoder.
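To make the feature-extraction step concrete, here is a minimal NumPy-only sketch of turning a 30-second audio chunk into a log-Mel spectrogram. The frame size, hop length, and number of Mel bins (400 samples, 160 samples, and 80 bins at a 16 kHz sample rate) follow the values used in the open-source Whisper code; the hand-rolled triangular filterbank below is an illustration and is not numerically identical to the filterbank the released models use.

```python
import numpy as np

SAMPLE_RATE = 16000   # Whisper resamples all audio to 16 kHz
N_FFT = 400           # 25 ms analysis window
HOP = 160             # 10 ms hop between frames
N_MELS = 80           # number of Mel frequency bins
CHUNK_SECONDS = 30    # each chunk is padded/trimmed to 30 s

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters spaced evenly on the Mel scale (illustrative,
    # not the exact filterbank shipped with Whisper).
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        if center > left:
            fb[i, left:center] = (np.arange(left, center) - left) / (center - left)
        if right > center:
            fb[i, center:right] = (right - np.arange(center, right)) / (right - center)
    return fb

def log_mel_spectrogram(audio):
    # Pad or trim the audio to exactly one 30-second chunk.
    n = CHUNK_SECONDS * SAMPLE_RATE
    audio = np.pad(audio[:n], (0, max(0, n - len(audio))))
    window = np.hanning(N_FFT)
    # Frame the signal, window each frame, and take the power spectrum.
    frames = np.lib.stride_tricks.sliding_window_view(audio, N_FFT)[::HOP]
    spec = np.abs(np.fft.rfft(frames * window, axis=-1)) ** 2
    # Project onto Mel bins and take the log (with a floor for stability).
    mel = spec @ mel_filterbank(N_MELS, N_FFT, SAMPLE_RATE).T
    return np.log10(np.maximum(mel, 1e-10))

# One second of a 440 Hz tone, padded out to a full 30 s chunk.
tone = np.sin(2 * np.pi * 440 * np.arange(SAMPLE_RATE) / SAMPLE_RATE)
mel = log_mel_spectrogram(tone)
print(mel.shape)  # (n_frames, 80)
```

The resulting (frames x 80) spectrogram is the input the encoder sees; the released `whisper` package performs this step internally, so users of the model do not need to compute it themselves.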