Scrape Sensor


The scrape sensor platform scrapes information from websites. The sensor loads an HTML page and gives you the option to search for and split out a value. As this is not a full-blown web scraper like Scrapy, it will most likely only work with simple web pages, and it can be time-consuming to pinpoint the right section.

To enable this sensor, add the following lines to your configuration.yaml file:

# Example configuration.yaml entry
sensor:
  - platform: scrape
    resource: https://www.home-assistant.io
    select: ".current-version h1"

Configuration Variables

resource

(string)(Required)The URL to the website that contains the value.

select

(string)(Required)Defines the CSS selector used to find the HTML element. Check BeautifulSoup's CSS selectors documentation for details.

attribute

(string)(Optional)Get the value of an attribute on the selected tag.

name

(string)(Optional)Name of the sensor.

Default value: Web scrape

unit_of_measurement

(string)(Optional)Defines the units of measurement of the sensor, if any.

authentication

(string)(Optional)Type of the HTTP authentication. Either basic or digest.

username

(string)(Optional)The username for accessing the website.

password

(string)(Optional)The password for accessing the website.

headers

(map)(Optional)The headers to use for the web request, given as a mapping of header names to values.
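
The optional variables above can be combined for pages that require authentication or specific request headers. A minimal sketch; the URL, selector, and credentials below are placeholders, not a real site:

```yaml
# Example configuration.yaml entry with HTTP authentication and headers
sensor:
  - platform: scrape
    resource: https://example.com/protected-page
    select: ".value"
    authentication: basic
    username: YOUR_USERNAME
    password: YOUR_PASSWORD
    headers:
      User-Agent: Mozilla/5.0
```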

Examples

In this section you will find some real-life examples of how to use this sensor. There is also a Jupyter notebook available for these examples, to give you a bit more insight.

Home Assistant

The current release of Home Assistant is published on https://www.home-assistant.io/.

# Example configuration.yaml entry
sensor:
  - platform: scrape
    resource: https://www.home-assistant.io
    name: Release
    select: ".current-version h1"
    value_template: '{{ value.split(":")[1] }}'
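
The value_template here keeps everything after the first colon. The same string operation in plain Python, using an assumed sample value (the real scraped text depends on the page):

```python
# Assumed sample of the scraped text; the real value comes from the page.
value = "Current Version: 0.84.6"

# Mirrors the Jinja expression {{ value.split(":")[1] }}
release = value.split(":")[1]
print(release)  # " 0.84.6" (note the leading space)
```

If the leading space matters, a trim filter could be chained onto the template as well.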

Available implementations

Get the count of all available implementations from the Components overview page.

# Example configuration.yaml entry
sensor:
  - platform: scrape
    resource: https://www.home-assistant.io/components/
    name: Home Assistant impl.
    select: 'a[href="#all"]'
    value_template: '{{ value.split("(")[1].split(")")[0] }}'

Get a value out of a tag

The German Federal Office for Radiation Protection (Bundesamt für Strahlenschutz) publishes various details about optical radiation, including a UV index. This example gets the index for a region in Germany.

# Example configuration.yaml entry
sensor:
  - platform: scrape
    resource: http://www.bfs.de/DE/themen/opt/uv/uv-index/prognose/prognose_node.html
    name: Coast Ostsee
    select: 'p:nth-of-type(19)'
    unit_of_measurement: 'UV Index'

IFTTT status

If you make heavy use of the IFTTT web service for your automations and are curious about the status of IFTTT, you can display its current state in your frontend.

# Example configuration.yaml entry
sensor:
  - platform: scrape
    resource: http://status.ifttt.com/
    name: IFTTT status
    select: '.component-status'

Get the latest podcast episode file URL

This example gets the file URL of the latest episode of your favorite podcast, so you can pass it on to a compatible media player.

# Example configuration.yaml entry
sensor:
  - platform: scrape
    resource: https://hasspodcast.io/feed/podcast
    name: Home Assistant Podcast
    select: 'enclosure:nth-of-type(1)'
    attribute: url
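
The attribute option returns an attribute of the matched tag instead of its text. The platform does this via BeautifulSoup; the idea can be sketched with the standard library's HTMLParser, keeping in mind that a podcast feed is really XML and this is only an approximation. The feed snippet and URL below are made-up placeholders:

```python
from html.parser import HTMLParser

class EnclosureFinder(HTMLParser):
    """Remember the url attribute of the first <enclosure> tag seen."""

    def __init__(self):
        super().__init__()
        self.url = None

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; keep only the first match,
        # which mirrors select: 'enclosure:nth-of-type(1)' above.
        if tag == "enclosure" and self.url is None:
            self.url = dict(attrs).get("url")

# Made-up stand-in for the real feed content.
feed = '<rss><item><enclosure url="https://example.com/ep1.mp3" length="123"/></item></rss>'
finder = EnclosureFinder()
finder.feed(feed)
print(finder.url)  # https://example.com/ep1.mp3
```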

Energy price

This example retrieves the current price of electricity.

# Example configuration.yaml entry
sensor:
  - platform: scrape
    resource: https://elen.nu/timpriser-pa-el-for-elomrade-se3-stockholm/
    name: Electricity price
    select: ".elspot-content"
    value_template: '{{ value.split(" ")[0] | replace(",", ".") }}'
    unit_of_measurement: "öre/kWh"
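
The template takes the first space-separated token and converts the Swedish decimal comma to a dot. The same logic in plain Python, with an assumed sample value (the real scraped text depends on the page):

```python
# Assumed sample of the scraped text; the real value comes from the page.
value = "45,2 öre/kWh"

# Mirrors the Jinja split/replace expression above.
price = value.split(" ")[0].replace(",", ".")
print(price)  # 45.2
```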

BOM Weather

The Australian Bureau of Meteorology website returns an error if the User-Agent header is not sent.

# Example configuration.yaml entry
sensor:
  - platform: scrape
    resource: http://www.bom.gov.au/vic/forecasts/melbourne.shtml
    name: Melbourne Forecast Summary
    select: ".main .forecast p"
    value_template: '{{ value | truncate(255) }}'
    # Request every hour
    scan_interval: 3600
    headers:
      User-Agent: Mozilla/5.0