Project TTN Daten in InfluxDB für Grafana
[[File:TTN-thp84-Device-Grafana.png|600px]]
Latest revision as of 17:47, 15 April 2023
Flexible Visualization of TTN Data
This page describes installing Grafana and InfluxDB on openSuse 42.2. It also describes how to configure Apache 2.4 to receive data from a TheThingsNetwork application. It uses a TTN HTTP integration for a device like https://github.com/joba-1/ATTiny84TTN to store the data in an InfluxDB database. Finally it shows an example Grafana dashboard to display the data from the InfluxDB.
Tested with my (The Things ID joba1) devices registered on https://eu1.cloud.thethings.network/console/
InfluxDB
Installing InfluxDB
Here is the Download Portal. Currently, it tells me to
wget https://dl.influxdata.com/influxdb/releases/influxdb-1.7.9.x86_64.rpm
sudo zypper install influxdb-1.7.9.x86_64.rpm
It warns about a missing package shadow-util. On openSuse this is named shadow and is already installed -> ignore.
To use systemd for managing the database service, copy the service file
sudo cp -av /usr/lib/influxdb/scripts/influxdb.service /etc/systemd/system/
For later versions of openSuse, the Influx Installation Documentation suggests using a repository - this should have the same result:
sudo zypper ar -f obs://devel:languages:go/ go
sudo zypper in influxdb
Configure InfluxDB
I did not need to change anything here, but in case you want to check:
sudo vi /etc/influxdb/influxdb.conf
For the Flux query language (I tried it on 1.8.5) you may need

[http]
flux-enabled = true
Start InfluxDB
sudo systemctl daemon-reload
sudo systemctl enable influxdb
sudo systemctl start influxdb
sudo systemctl status influxdb
Test InfluxDB
Call the CLI and insert some test data. More details here: https://docs.influxdata.com/influxdb/v1.7/tools/shell/
influx
CREATE DATABASE my_test_db
USE my_test_db
INSERT my_table,my_key=my_test my_value=1
INSERT my_table,my_key=my_test my_value=2
INSERT my_table,my_key=my_test my_value=3
SELECT * FROM my_table
QUIT
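The INSERT statements above are InfluxDB line protocol: a measurement name, optional tags, then fields. As a sketch, the same lines can be generated in Python before posting them to the /write endpoint (the helper function below is just for illustration and ignores line protocol escaping of spaces and commas):

```python
# Build InfluxDB line protocol strings like the CLI INSERT statements above.
# Format: <measurement>,<tag_key>=<tag_value> <field_key>=<field_value>
def line_protocol(measurement, tags, fields):
    tag_str = ','.join('{}={}'.format(k, v) for k, v in sorted(tags.items()))
    field_str = ','.join('{}={}'.format(k, v) for k, v in sorted(fields.items()))
    return '{},{} {}'.format(measurement, tag_str, field_str)

# Same data as in the CLI session
lines = [line_protocol('my_table', {'my_key': 'my_test'}, {'my_value': v})
         for v in (1, 2, 3)]
print('\n'.join(lines))  # first line: my_table,my_key=my_test my_value=1
```

With a running server, the joined lines could then be sent in one request, e.g. with requests.post('http://localhost:8086/write?db=my_test_db', data='\n'.join(lines)).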
Tips and Tricks for InfluxDB
Delete measurements
Sometimes invalid data gets into the database and needs to be cleaned. InfluxDB does not allow deleting points by field value, so brute force is the only option I know. The "group by *" is required to preserve tags according to the docs,
but it did NOT work for me:
influx --database ttn -execute 'SELECT * INTO temp FROM "measurements" WHERE pres_hPa > 700'
influx --database ttn -execute 'drop measurement "measurements"'
influx --database ttn -execute 'SELECT * INTO "measurements" FROM temp group by *'
influx --database ttn -execute 'drop measurement temp'
If tags get lost this way, use the following Python code instead.
Use Python to rework InfluxDB data
Example code (to copy fields into tags with same name). See also https://github.com/joba-1/Stromableser/blob/main/fixStromableser.py
from influxdb import InfluxDBClient

# InfluxDB parameters
server = 'job4'
port = 8086
database = 'ttn'

# CLI interface
if __name__ == "__main__":
    # read table "measurements", fix anomalies and write into table "data"
    client = InfluxDBClient(host=server, port=port, database=database)
    response = client.query('select * from "measurements"', epoch='s')
    print("Status: {}\n".format(response.error))
    points = response.get_points()
    data = []
    for point in points:
        if point['app_id'] == None:
            point['app_id'] = point['app_id_1']
        point['app_id'] = point['app_id'].strip('"')
        point.pop('app_id_1', None)
        if point['dev_id'] == None:
            point['dev_id'] = point['dev_id_1']
        point['dev_id'] = point['dev_id'].strip('"')
        point.pop('dev_id_1', None)
        item = {
            "measurement": "data",
            "tags": {
                "app_id": point["app_id"],
                "dev_id": point["dev_id"]
            },
            "time": point["time"],
            "fields": {
                "temp_degC": float(point["temp_degC"]),
                "vcc_V": float(point["vcc_V"]),
                "pres_hPa": float(point["pres_hPa"]),
                "humi_Percent": float(point["humi_Percent"]),
                "gtw_id": point["gtw_id"],
                "rssi": float(point["rssi"])
            }
        }
        data.append(item)
        if len(data) == 10000:
            print("Add {} items".format(len(data)))
            client.write_points(data, time_precision='s')
            data = []
    print("Add {} items".format(len(data)))
    client.write_points(data, time_precision='s')
Post from ESP8266
// Post Data to InfluxDB
#include <ESP8266HTTPClient.h>

WiFiClient client;
HTTPClient http;
int _influx_status = 0;

void post_data() {
    static const char Uri[] = "/write?db=" INFLUX_DB "&precision=s";
    char msg[300];
    snprintf(msg, sizeof(msg),
        "temperatures zulauf=%.2f,ruecklauf=%.2f,vorlauf=%.2f,kamin=%.2f\n",
        _temp_c[0], _temp_c[1], _temp_c[2], _temp_c[3]);
    http.begin(client, INFLUX_SERVER, INFLUX_PORT, Uri);
    http.setUserAgent(NAME);
    _influx_status = http.POST(msg);
    String payload = http.getString();
    http.end();
    if( _influx_status < 200 || _influx_status > 299 ) {
        snprintf(msg, sizeof(msg),
            "Post %s:%d%s status %d response '%s'",
            INFLUX_SERVER, INFLUX_PORT, Uri, _influx_status, payload.c_str());
        syslog.log(LOG_ERR, msg);
    }
}
Grafana
Install Grafana
I now use the Leap-supplied Grafana 7.4, so the steps below are no longer needed.
Here are the installation instructions. Basically just do
wget https://dl.grafana.com/oss/release/grafana-6.5.2-1.x86_64.rpm
sudo zypper install grafana-6.5.2-1.x86_64.rpm
It complains about missing urw-fonts, but they are not really required -> ignore. Also accept that there is no signing key.
Grafana via Apache Proxy
/etc/grafana/grafana.ini
[server]
protocol = http
domain = banzhaf.chickenkiller.com
root_url = https://banzhaf.chickenkiller.com/grafana
/etc/apache2/conf.d/grafana.conf
<Location "/grafana">
    ProxyPass "http://localhost:3000"
    ProxyPassReverse "http://localhost:3000"
</Location>
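Depending on the Grafana version, serving from a sub path like this may additionally require the serve_from_sub_path option (available since Grafana 6.3). Whether it is needed depends on the exact version in use; a sketch of the extended [server] section:

```ini
[server]
protocol = http
domain = banzhaf.chickenkiller.com
root_url = https://banzhaf.chickenkiller.com/grafana
serve_from_sub_path = true
```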
Start Grafana
sudo systemctl daemon-reload
sudo systemctl enable grafana-server.service
sudo systemctl start grafana-server.service
sudo systemctl status grafana-server.service
Test Grafana
Log in as user admin with password admin. You will be asked for a new password immediately. Then follow these steps in the web UI:
- Add datasource
- Select InfluxDB
- Check use as default (optional)
- Select Query Language InfluxQL (Flux is also possible, but has no graphical query editor)
- Enter URL of your InfluxDB, e.g. http://localhost:8086
- Enter our test db in InfluxDB Details: my_test_db
- Select Save & Test; it should respond with success
- Add dashboard
- Go back to the first page via the Grafana symbol in the upper left and select New dashboard
- Select Add query
- Select measurement: my_table
- Select field: my_value
- optionally enter an alias: my_count
- Select next icon: Visualization
- Select Null values: connected (to draw lines between dots)
- Select Legend: Min and Max (to display them below the dashboard)
- Select next icon: General
- Enter a panel title
- We could add more panels, but for now: select Save dashboard icon and give it a name: My counts
- Select the region around the dot, repeat the zoom until you see the single values.
Datasource with InfluxDB Query Language Flux
Supported at least since InfluxDB 1.8.5 with Grafana 7.4.2; Flux needs to be enabled in influxdb.conf (see the Configure InfluxDB section).
- URL: unchanged, e.g. http://localhost:8086
- Access: unchanged (Server)
- Cookies: can stay empty
- Auth: can all stay disabled
- Custom HTTP headers: not needed
- Organization: like configured for Grafana (I use Banzhaf)
- Token: user:password
- Default Bucket: database name
- Min interval and max series: not needed
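Outside Grafana, the Flux setup can be sanity-checked against InfluxDB 1.8's /api/v2/query compatibility endpoint. This sketch only assembles the request pieces; server, bucket, user and password are placeholders matching the datasource settings above:

```python
# Assemble a Flux query request for InfluxDB 1.8's v2 compatibility API.
server = 'localhost'
port = 8086
bucket = 'my_test_db'    # for InfluxDB 1.x the bucket is the database name
token = 'user:password'  # InfluxDB 1.8 accepts user:password as the token

url = 'http://{}:{}/api/v2/query'.format(server, port)
headers = {
    'Authorization': 'Token ' + token,
    'Content-Type': 'application/vnd.flux',
}
query = 'from(bucket: "{}") |> range(start: -1h)'.format(bucket)

# With a running server this would be sent as:
#   import requests
#   r = requests.post(url, headers=headers, data=query)
print(url)
print(query)
```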
Python and Apache
Insert JSON Values into InfluxDB with Python
The ThingsNetwork offers an HTTP integration service. It sends a POST request with JSON data to a URL you define each time your LoRaWAN device sends data. Let's store this data with InfluxDB so we can display it with Grafana.
As a first step, let's assume we somehow get the JSON data of one measurement in one line via standard input. The script will parse the JSON, format a message with the interesting fields (here including some payload fields from my thp84 app), post a request to the InfluxDB and print the result of that request.
#!/usr/bin/python

import sys
import json
import requests
import time
import calendar

server = 'localhost'
port = 8086
database = 'my_test_db'
url = 'http://{}:{}/write?db={}&precision=s'.format(server, port, database)

for line in sys.stdin:
    try:
        data = json.loads(line)
        ts = time.strptime(data["metadata"]["time"].split('.', 1)[0], '%Y-%m-%dT%H:%M:%S')
        msg = 'node,app_id="{}",dev_id="{}" temp_degC={},vcc_V={},pres_hPa={},humi_Percent={},gtw_id="{}",rssi={} {}'.format(
            data["app_id"], data["dev_id"],
            data["payload_fields"]["temp_degC"],
            data["payload_fields"]["vcc_V"],
            data["payload_fields"]["pres_hPa"],
            data["payload_fields"]["humi_Percent"],
            data["metadata"]["gateways"][0]["gtw_id"],
            data["metadata"]["gateways"][0]["rssi"],
            int(calendar.timegm(ts)))  # TTN timestamps are UTC, so use timegm, not mktime
        result = requests.post(url, data=msg)
        print('insert date "{}" result: "{}"'.format(data["metadata"]["time"], result.text))
    except Exception as e:
        print("ignoring '{}': '{}'".format(line, e))
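The message formatting can be exercised without a live database by feeding a hand-crafted sample through the same parsing steps. All values below are invented for illustration, and the field list is trimmed to keep the sketch short:

```python
import calendar
import json
import time

# A minimal, made-up TTN uplink object with just the fields the script reads
sample = json.dumps({
    "app_id": "thp84", "dev_id": "dev1",
    "payload_fields": {"temp_degC": 21.5, "vcc_V": 3.1},
    "metadata": {"time": "2023-04-15T17:47:00.123Z",
                 "gateways": [{"gtw_id": "gw1", "rssi": -80}]}
})

data = json.loads(sample)
ts = time.strptime(data["metadata"]["time"].split('.', 1)[0], '%Y-%m-%dT%H:%M:%S')
msg = 'node,app_id="{}",dev_id="{}" temp_degC={},vcc_V={},rssi={} {}'.format(
    data["app_id"], data["dev_id"],
    data["payload_fields"]["temp_degC"],
    data["payload_fields"]["vcc_V"],
    data["metadata"]["gateways"][0]["rssi"],
    int(calendar.timegm(ts)))  # timegm interprets the TTN timestamp as UTC
print(msg)
```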
Call Python as CGI from Apache Webserver
Next step is to use similar python code to insert the data automatically each time apache receives an HTTP request from The ThingsNetwork.
- Enable the apache python module by adding "python" to the APACHE_MODULES line of /etc/sysconfig/apache2.
- Enable execution of python scripts in a directory by adding file /etc/apache2/conf.d/cgi-py.conf (basename not relevant, but needs to end in .conf):
<Directory /srv/www/htdocs/ttn>
    Options +ExecCGI
    AddHandler cgi-script .py
</Directory>
- Create the directory and restart apache to load the new config
mkdir /srv/www/htdocs/ttn
systemctl restart apache2
- Create a Python file in the configured directory that prints the whole HTML page, including headers, and make the file executable
vi /srv/www/htdocs/ttn/hello-cgi.py
chmod +x /srv/www/htdocs/ttn/hello-cgi.py
- Content of a simple example file:
#!/usr/bin/env python
print("Content-Type: text/html;charset=utf-8")
print()
print("Hello CGI World!")
Call Python as WSGI from Apache Webserver
WSGI is the more modern approach to calling Python. To make it work,
- Install apache module for wsgi and python3 (zypper in apache2-mod_wsgi-python3)
- If the above step didn't do it, enable the apache wsgi module by adding "wsgi" to the APACHE_MODULES line of /etc/sysconfig/apache2.
- Enable execution of a python script by adding file /etc/apache2/conf.d/wsgi-py.conf (basename not relevant, but needs to end in .conf):
WSGIScriptAlias /ttn/hello-wsgi.py /srv/www/htdocs/ttn/hello-wsgi.py
<Directory /srv/www/htdocs/ttn>
    Require all granted
</Directory>
- Create the directory and restart apache to load the new config
mkdir /srv/www/htdocs/ttn
systemctl restart apache2
- Create the Python file used in the alias (it prints the whole HTML page, including headers) and make the file executable
vi /srv/www/htdocs/ttn/hello-wsgi.py
chmod +x /srv/www/htdocs/ttn/hello-wsgi.py
- Content of a simple example file:
def application(environ, start_response):
    status = '200 OK'
    output = b'Hello WSGI World!'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
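A WSGI application like this can be smoke-tested without Apache by calling it directly with a minimal environ dict and a stub start_response. The app below is a copy of the simple example above:

```python
def application(environ, start_response):
    status = '200 OK'
    output = b'Hello WSGI World!'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]

captured = {}

def fake_start_response(status, headers):
    # record what the app would hand to the web server
    captured['status'] = status
    captured['headers'] = dict(headers)

body = b''.join(application({'REQUEST_METHOD': 'GET'}, fake_start_response))
print(captured['status'], body)  # 200 OK b'Hello WSGI World!'
```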
- A bit more involved example showing the environment and POST data (if any)
import json

def application(environ, start_response):
    status = '200 OK'
    output = 'Hello WSGI World!\n'
    response_body = ['%s: %s' % (key, value)
                     for key, value in sorted(environ.items())]
    output += '\n'.join(response_body)
    try:
        if environ["REQUEST_METHOD"] == "POST":
            output += "\nPOST method"
            request_body_size = int(environ['CONTENT_LENGTH'])
            request_body = environ['wsgi.input'].read(request_body_size)
            s = "".join(chr(b) for b in request_body)
            output += "\nData: " + s
            data = json.loads(s)
            output += "\nJson: " + json.dumps(data, sort_keys=True, indent=4)
    except Exception as e:
        output += "\nException: " + str(e)
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output.encode('Utf-8')]
Call this e.g. with
wget -q -O- --post-data '{"a":1}' http://localhost/ttn/hello-wsgi.py
More info about setting up WSGI is in the WSGI guidelines.
Transform TTN JSON data to InfluxDB Inserts
A modified Python script that writes the POST request data from TTN to a logfile and inserts the data into InfluxDB:
import json
import requests
import time
import calendar

# Log to contain the whole JSON objects from TTN
log_file = '/var/log/ttn/thp84.log'

# InfluxDB parameters
server = 'localhost'
port = 8086
database = 'ttn'
url = 'http://{}:{}/write?db={}&precision=s'.format(server, port, database)

def application(environ, start_response):
    try:
        if environ["REQUEST_METHOD"] == "POST":
            # get data as received from TTN
            request_body_size = int(environ['CONTENT_LENGTH'])
            request_body = environ['wsgi.input'].read(request_body_size)
            body_string = "".join(chr(b) for b in request_body)

            # write received data to log file
            with open(log_file, 'a') as log:
                log.write(body_string)
                log.write('\n')

            # parse utc timestamp from TTN into time tuple
            data = json.loads(body_string)
            ts = time.strptime(data["metadata"]["time"].split('.', 1)[0], '%Y-%m-%dT%H:%M:%S')

            # find gateway with best RSSI
            rssi = -999
            for gw in data["metadata"]["gateways"]:
                if gw["rssi"] > rssi:
                    rssi = gw["rssi"]
                    gtw_id = gw["gtw_id"]

            # build InfluxDB insert string with explicit utc time in seconds resolution
            msg = 'measurements,app_id="{}",dev_id="{}" temp_degC={},vcc_V={},pres_hPa={},humi_Percent={},gtw_id="{}",rssi={} {}'.format(
                data["app_id"], data["dev_id"],
                data["payload_fields"]["temp_degC"],
                data["payload_fields"]["vcc_V"],
                data["payload_fields"]["pres_hPa"],
                data["payload_fields"]["humi_Percent"],
                gtw_id, rssi, int(calendar.timegm(ts)))

            # send insert request to the database
            result = requests.post(url, data=msg)
        output = 'OK'
    except Exception as e:
        output = "Exception: " + str(e)

    # send OK to TTN
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output.encode('Utf-8')]
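The best-gateway loop in the script can equivalently be written with max(); a small sketch with invented gateway data:

```python
# Pick the gateway with the strongest (highest) RSSI, as the loop above does.
gateways = [
    {"gtw_id": "gw-a", "rssi": -105},
    {"gtw_id": "gw-b", "rssi": -80},
    {"gtw_id": "gw-c", "rssi": -92},
]

best = max(gateways, key=lambda gw: gw["rssi"])
print(best["gtw_id"], best["rssi"])  # gw-b -80
```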
Import Historic TTN Data
Import data from logs into InfluxDB
cat /var/log/ttn/thp84.log | python3 thp84.py
Grafana Dashboard for TTN Data from InfluxDB
Build a dashboard with a panel for each data item (like temperature, RSSI, ...).
An example is https://github.com/joba-1/ATTiny84TTN/blob/master/plots/Grafana-dashboard-TTN-thp84-Device.json
My local dashboard: http://job4:3000/d/lUpjxBfZk/ttn-thp84-device?from=now-30d&to=now