Profiler


The Profiler integration provides a profile: a set of statistics that identifies how much time each part of Home Assistant is taking. It can help track down a performance issue or provide insight into a misbehaving integration.

Configuration

To add the Profiler integration to your Home Assistant instance, go to Settings > Devices & Services in the UI, select Add Integration, and choose Profiler.

Service profiler.start

Start the profiler for the specified number of seconds.

Service data attribute | Optional | Description
seconds                | yes      | The number of seconds to run the profile. Defaults to 60.0.
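
For example, the following service call starts a 120-second profile (the duration is illustrative):

service: profiler.start
data:
  seconds: 120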

When the profile is complete, Profiler will generate a Python cprof file and a callgrind.out file in your configuration directory. The exact paths to these files will appear in a persistent notification so they can be easily located and copied to your desktop.

The cprof file contains standard Python profiling data in pstats format; it can be opened with a pstats-compatible viewer such as SnakeViz, or inspected with the standard-library pstats module.
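
As a minimal sketch using only the standard library (the file name is the one reported in the notification; the name here is illustrative):

import pstats

# Load the generated profile and print the 20 most expensive calls,
# sorted by cumulative time.
stats = pstats.Stats("profile.1234567890123456.cprof")
stats.sort_stats("cumulative").print_stats(20)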

Additionally, the profiler generates a callgrind.out file that can be viewed with a callgrind-compatible viewer such as KCachegrind or QCachegrind.

Both files can also be converted to call graphs with gprof2dot. The gprof2dot tool generates DOT files, which can be converted to images using the dot tool from Graphviz or viewed directly using xdot. The -e and -n parameters set the minimum percentage a function must account for to be included in the output file. For example:

# Generating the .dot files:
gprof2dot -f pstats    -e 0.05 -n 0.25 profile.1234567890123456.cprof -o profile.dot
gprof2dot -f callgrind -e 0.05 -n 0.25 callgrind.out.1234567890123456 -o callgrind.dot

# Converting to SVG and PNG formats:
dot callgrind.dot -Tsvg -o callgrind.svg
dot callgrind.dot -Tpng -o callgrind.png

# Alternatively, both commands in a single line:
gprof2dot -f pstats profile.1234567890123456.cprof | dot -Tsvg -o profile.svg
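
# Alternatively, view a .dot file interactively with xdot, as mentioned above
# (assumes xdot is installed):
xdot profile.dot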

Service profiler.memory

Start the memory profiler for the specified number of seconds.

Service data attribute | Optional | Description
seconds                | yes      | The number of seconds to run the profile. Defaults to 60.0.
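
For example, the following call runs the memory profiler for 30 seconds (the duration is illustrative):

service: profiler.memory
data:
  seconds: 30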

When the memory profile is complete, Profiler will generate a .hpy file in your configuration directory. The exact path to this file will appear in a persistent notification so it can be easily located and copied to your desktop.

The hpy file can be viewed with any text editor. A visual representation can be viewed using the Heapy Profile Browser, which is part of the guppy3 package and can be launched with the script below:

#! /usr/bin/python3
# Launch the Heapy Profile Browser (requires the guppy3 package).
from guppy import hpy
hpy().pb()

Service profiler.start_log_objects

Start logging the growth of objects in memory.

Service data attribute | Optional | Description
scan_interval          | yes      | The frequency, in seconds, between logging objects. Defaults to 30.0.

Periodically log the growth of new objects in memory. This service's primary use case is finding memory leaks; it can be run for long periods to find slow leaks. For finding fast leaks, profiler.start_log_object_sources is preferred; however, it is much more CPU intensive.

See the objgraph documentation for growth() regarding the format in which this data is logged.
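
For example, to log object growth every 10 seconds (the interval is illustrative):

service: profiler.start_log_objects
data:
  scan_interval: 10

Logging continues until profiler.stop_log_objects is called.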

Service profiler.stop_log_objects

Stop logging the growth of objects in memory.

Service profiler.start_log_object_sources

Start logging the growth of objects in memory and attempt to find the source of the new objects.

Service data attribute | Optional | Description
scan_interval          | yes      | The frequency, in seconds, between logging objects. Defaults to 30.0.
max_objects            | yes      | The number of new objects to examine for source information. Defaults to 5.

Periodically log the growth of new objects in memory. This service’s primary use case is finding memory leaks.

This service is similar to start_log_objects, but it is much more CPU intensive because each time it logs, it attempts to locate the source of up to max_objects new objects.
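
For example, to scan every 10 seconds and trace up to 3 new objects per scan (both values are illustrative):

service: profiler.start_log_object_sources
data:
  scan_interval: 10
  max_objects: 3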

Service profiler.stop_log_object_sources

Stop logging the growth of objects with sources in memory.

Service profiler.dump_log_objects

Service data attribute | Optional | Description
type                   | no       | The type of object to dump to the log.

When start_log_objects highlights the growth of a collection of objects in memory, this service can help investigate. The repr of each object that matches type will be logged.

This service is useful for investigating the state of objects in memory. For example, if your system has templates that are rendering too frequently, the example service calls below show how to find which templates are the source of the problem:

service: profiler.dump_log_objects
data:
  type: RenderInfo

service: profiler.dump_log_objects
data:
  type: Template

Service profiler.log_thread_frames

To help discover runaway threads, diagnose an overloaded executor, or track down other threading problems, the current frames for each running thread will be logged when this service is called.

An example is below:

[homeassistant.components.profiler] Thread [SyncWorker_6]: File "/usr/local/lib/python3.8/threading.py", line 890, in _bootstrap
    self._bootstrap_inner()
  File "/usr/local/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 80, in _worker
    work_item.run()
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/src/homeassistant/homeassistant/components/samsungtv/media_player.py", line 139, in update
    self._state = STATE_ON if self._bridge.is_on() else STATE_OFF
  File "/usr/src/homeassistant/homeassistant/components/samsungtv/bridge.py", line 72, in is_on
    return self._get_remote() is not None
  File "/usr/src/homeassistant/homeassistant/components/samsungtv/bridge.py", line 274, in _get_remote
    self._remote.open()
  File "/usr/local/lib/python3.8/site-packages/samsungtvws/remote.py", line 146, in open
    self.connection = websocket.create_connection(
  File "/usr/local/lib/python3.8/site-packages/websocket/_core.py", line 511, in create_connection
    websock.connect(url, **options)
  File "/usr/local/lib/python3.8/site-packages/websocket/_core.py", line 219, in connect
    self.sock, addrs = connect(url, self.sock_opt, proxy_info(**options),
  File "/usr/local/lib/python3.8/site-packages/websocket/_http.py", line 120, in connect
    sock = _open_socket(addrinfo_list, options.sockopt, options.timeout)
  File "/usr/local/lib/python3.8/site-packages/websocket/_http.py", line 170, in _open_socket
    sock.connect(address)

Service profiler.log_event_loop_scheduled

Log what is scheduled in the event loop. This can be helpful in tracking down integrations that do not stop listeners when Home Assistant stops or do not have sufficient locking to avoid scheduling updates before the previous update is finished.

Each upcoming scheduled item is logged, similar to the example below:

[homeassistant.components.profiler] Scheduled: <TimerHandle when=1528307.1818668307 async_track_point_in_utc_time.<locals>.run_action(<Job HassJobType.Coroutinefunction <bound method DataUpdateCoordinator._handle_refresh_interval of <homeassistant.components.screenlogic.ScreenlogicDataUpdateCoordinator object at 0x7f985d896d30>>>) at /usr/src/homeassistant/homeassistant/helpers/event.py:1175>

Service profiler.lru_stats

Logs statistics from lru_cache and lru-dict to help tune Home Assistant and locate memory leaks.
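
No service data attributes are listed for this service, so a call is simply:

service: profiler.lru_stats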