# Creating an automated meeting minutes generator with Whisper and GPT-4

In this tutorial, we'll harness the power of OpenAI's Whisper and GPT-4 models to develop an automated meeting minutes generator. The application transcribes audio from a meeting, provides a summary of the discussion, extracts key points and action items, and performs a sentiment analysis.

## Getting started

This tutorial assumes a basic understanding of Python and an [OpenAI API key](/account/api-keys). You can use the audio file provided with this tutorial or your own.

Additionally, you will need to install the [python-docx](https://python-docx.readthedocs.io/en/latest/) and [OpenAI](/docs/libraries/libraries) libraries. You can create a new Python environment and install the required packages with the following commands:

```bash
python -m venv env
source env/bin/activate
pip install openai
pip install python-docx
```

## Transcribing audio with Whisper

*Audio waveform image created by DALL·E*

The first step in transcribing the audio from a meeting is to pass the meeting's audio file to our `/v1/audio` API. Whisper, the model that powers the audio API, is capable of converting spoken language into written text. To start, we will avoid passing a `prompt` or `temperature` (optional parameters to control the model's output) and stick with the default values.