OpenAI Conversation

The OpenAI integration adds a conversation agent powered by OpenAI in Home Assistant.

This conversation agent is unable to control your house. It can only query information that has been provided by Home Assistant. To be able to answer questions about your house, Home Assistant will need to provide OpenAI with the details of your house, including areas, devices, and their states.

This integration requires an OpenAI API key, which you can generate in the OpenAI portal. This is a paid service; we advise you to monitor your costs in the OpenAI portal closely and to configure usage limits to avoid unwanted costs associated with using the service.


Adding OpenAI Conversation to your Home Assistant instance can be done via the user interface.

Generate an API Key

The OpenAI API key is used to authenticate requests to the OpenAI API. To generate an API key, take the following steps:

  • Log in to the OpenAI portal or sign up for an account.
  • Enable billing with a valid credit card.
  • Configure usage limits.
  • Visit the API Keys page to retrieve the API key you’ll use to configure the integration.
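Once you have a key, you can verify that it works before configuring the integration. A minimal check (assuming a POSIX shell, with the key stored in the `OPENAI_API_KEY` environment variable) is to list the models available to your account:

```shell
# List the models your API key can access.
# A valid key returns a JSON list of models;
# an invalid key returns an "invalid_api_key" error object.
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```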


Options

Options for OpenAI Conversation can be set via the user interface, by taking the following steps:

  • Browse to your Home Assistant instance.
  • In the sidebar, click on Settings.
  • From the configuration menu, select: Devices & Services.
  • If multiple instances of OpenAI Conversation are configured, choose the instance you want to configure.
  • Click on “Options”.

Prompt Template

The starting text for the AI language model to generate new text from. This text can include information about your Home Assistant instance, devices, and areas and is written using Home Assistant Templating.
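For example, a prompt template might enumerate the areas and entity states in your home using Home Assistant's template functions. The template below is purely illustrative (the entity layout and wording are assumptions, not the integration's default prompt):

```jinja2
You are a voice assistant for a smart home.

An overview of the areas and devices in this home:
{% for area in areas() %}
{{ area_name(area) }}:
{%- for entity in area_entities(area) %}
- {{ state_attr(entity, 'friendly_name') }} is {{ states(entity) }}
{%- endfor %}
{% endfor %}
```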

Completion Model

The GPT-3 language model to use for text generation. You can find more details on the available models in the OpenAI GPT-3 Documentation.

Maximum Tokens to Return in Response

The maximum number of words or “tokens” that the AI model should generate in its completion of the prompt. For more information, see the OpenAI Completion Documentation.
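Note that tokens are not quite words: a token is a chunk of text, and as a rough rule of thumb one token corresponds to about four characters of English. The sketch below is a crude heuristic for estimating how many tokens a response budget covers (for exact counts, OpenAI's tiktoken library implements the real tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token of English text.

    This is a heuristic, not the real tokenizer; use OpenAI's tiktoken
    library when an exact count matters.
    """
    return max(1, len(text) // 4)

# 30 characters -> roughly 7 tokens
print(estimate_tokens("Turn on the living room lights"))
```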


Temperature

A value that determines the level of creativity and risk-taking the model should use when generating text. A higher temperature means the model is more likely to generate unexpected results, while a lower temperature yields more focused and deterministic output. See the OpenAI Completion Documentation for more information.
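Mechanically, temperature rescales the model's raw scores before they are turned into a probability distribution over the next token. This small sketch (with made-up scores, not real model output) shows how a low temperature sharpens the distribution and a high temperature flattens it:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into next-token probabilities.

    Dividing by a small temperature exaggerates score differences
    (near-deterministic output); a large temperature shrinks them
    (more varied output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
low = softmax_with_temperature(logits, 0.2)   # sharp: top token dominates
high = softmax_with_temperature(logits, 2.0)  # flat: tokens closer together
print(low, high)
```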

Top P

An alternative to temperature, top_p determines the proportion of the most likely word choices the model should consider when generating text. A higher top_p means the model will consider a wider range of words, including less likely ones, while a lower top_p restricts it to only the most likely words. For more information, see the OpenAI Completion API Reference.
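This technique is known as nucleus sampling: the model keeps the smallest set of tokens whose cumulative probability reaches top_p and samples only from that set. A minimal sketch (with made-up probabilities, not real model output):

```python
def top_p_filter(probs, top_p):
    """Return the indices of the smallest set of tokens whose cumulative
    probability reaches top_p; everything else is excluded from sampling."""
    # Sort token indices by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept

probs = [0.5, 0.3, 0.15, 0.05]
print(top_p_filter(probs, 0.9))   # high top_p, wider pool: [0, 1, 2]
print(top_p_filter(probs, 0.5))   # low top_p, only the top token: [0]
```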