+It runs automatic speech recognition using OpenAI's Whisper model.
+
+It requires the whisper.cpp library
+(@url{https://github.com/ggml-org/whisper.cpp})
+as a prerequisite. After installing the library, it can be enabled using:
+@code{./configure --enable-whisper}.
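+
+For example, a typical sequence (a sketch only; it assumes a CMake-based
+build of whisper.cpp installed to the default prefix, which may differ on
+your system) could look like:
+
+@example
+git clone https://github.com/ggml-org/whisper.cpp
+cmake -B whisper.cpp/build -S whisper.cpp
+cmake --build whisper.cpp/build
+sudo cmake --install whisper.cpp/build
+./configure --enable-whisper
+@end example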
+
+The filter has the following options:
+
+@table @option
+@item model
+The file path of the downloaded whisper.cpp model (mandatory).
+
+@item language
+The language to use for transcription ('auto' for auto-detect).
+Default value: @code{"auto"}
+
+@item queue
+The maximum duration of audio that will be queued in the filter before it is
+processed with Whisper. With a small value the audio stream is processed more
+often, but the transcription quality is lower and more processing power is
+required. With a large value (e.g. 10-20 seconds) the results are more
+accurate and less CPU is used (comparable to the whisper-cli tool), but the
+transcription latency is higher, making it unsuitable for real-time streams.
+Consider combining a large queue value with the @option{vad_model} option.
+Default value: @code{"3"}
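+
+For example, the following command transcribes an audio file with a large
+queue value suitable for offline processing (a sketch only; the model file
+name is a placeholder for whichever whisper.cpp model you have downloaded):
+
+@example
+ffmpeg -i input.wav -af "whisper=model=ggml-base.en.bin:language=en:queue=10" -f null -
+@end example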