Replaced simple user:pass authentication in examples with JWT token

Fabio Manganiello 2021-02-24 23:47:57 +01:00
parent a2d77a7f30
commit c0c228262f
5 changed files with 44 additions and 21 deletions
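The diff below applies the same change to each post: drop HTTP basic auth (`-u`/`-a 'username:password'`) from the example `curl` calls and send a user-generated token in an `Authorization: Bearer` header instead. As a standalone sketch of the new pattern (the host, the action and the `example-token` value are placeholders, and the request is only printed here, since actually sending it needs a live Platypush instance):

```shell
# Token generated from the web panel (Settings -> Generate token);
# "example-token" is a placeholder for a real token.
PP_TOKEN="${PP_TOKEN:-example-token}"

# Dry run: print the request that the updated examples send.
printf '%s\n' \
  "POST http://your-rpi:8008/execute" \
  "Authorization: Bearer $PP_TOKEN" \
  "Content-Type: application/json" \
  '{"type":"request", "action":"light.hue.on"}'
```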


@@ -257,11 +257,13 @@ assistant.google.pushtotalk:
 procedures, or through the HTTP API:
 ```shell
-curl -XPOST -H 'Content-Type: application/json' -d '
+curl -XPOST \
+-H "Authorization: Bearer $PP_TOKEN" \
+-H 'Content-Type: application/json' -d '
 {
 "type":"request",
 "action":"assistant.google.pushtotalk.start_conversation"
-}' -a 'username:password' http://your-rpi:8008/execute
+}' http://your-rpi:8008/execute
 ```

 ### Features
@@ -335,11 +337,13 @@ assistant.echo:
 conversations programmatically through e.g. Platypush event hooks, procedures, or through the HTTP API:
 ```shell
-curl -XPOST -H 'Content-Type: application/json' -d '
+curl -XPOST \
+-H "Authorization: Bearer $PP_TOKEN" \
+-H 'Content-Type: application/json' -d '
 {
 "type":"request",
 "action":"assistant.echo.start_conversation"
-}' -a 'username:password' http://your-rpi:8008/execute
+}' http://your-rpi:8008/execute
 ```

 ### Features
@@ -537,14 +541,16 @@ backend.stt.deepspeech:
 `stt.deepspeech.stop_detection`. You can also use it to perform offline speech transcription from audio files:
 ```shell
-curl -XPOST -H 'Content-Type: application/json' -d '
+curl -XPOST \
+-H "Authorization: Bearer $PP_TOKEN" \
+-H 'Content-Type: application/json' -d '
 {
 "type":"request",
 "action":"stt.deepspeech.detect",
 "args": {
 "audio_file": "~/audio.wav"
 }
-}' -a 'username:password' http://your-rpi:8008/execute
+}' http://your-rpi:8008/execute

 # Example response
 {


@@ -158,8 +158,10 @@ camera.ir.mlx90640:
 Restart the service, and if you haven't already create a user from the web interface at `http://your-rpi:8008`. You
 should now be able to take pictures through the API:
-```yaml
-curl -XPOST -H 'Content-Type: application/json' -d '
+```shell
+curl -XPOST \
+-H "Authorization: Bearer $PP_TOKEN" \
+-H 'Content-Type: application/json' -d '
 {
 "type":"request",
 "action":"camera.ir.mlx90640.capture",
@@ -167,7 +169,7 @@ curl -XPOST -H 'Content-Type: application/json' -d '
 "output_file":"~/snap.png",
 "scale_factor":20
 }
-}' -u 'username:password' http://localhost:8008/execute
+}' http://localhost:8008/execute
 ```

 If everything went well, the thermal picture should be stored under `~/snap.png`. In my case it looks like this while
@@ -473,7 +475,9 @@ the [`tensorflow.predict`](https://docs.platypush.tech/en/latest/platypush/plugi
 method:
 ```shell
-curl -XPOST -u 'user:pass' -H 'Content-Type: application/json' -d '
+curl -XPOST \
+-H "Authorization: Bearer $PP_TOKEN" \
+-H 'Content-Type: application/json' -d '
 {
 "type":"request",
 "action":"tensorflow.predict",


@@ -804,12 +804,13 @@ python -m platypush.plugins.google.credentials \
 - With Platypush running, check the data sources that are available on your account:
 ```shell
-curl -XPOST -H 'Content-Type: application/json' -d '
+curl -XPOST \
+-H "Authorization: Bearer $PP_TOKEN" \
+-H 'Content-Type: application/json' -d '
 {
 "type":"request",
 "action":"google.fit.get_data_sources"
-}' -u 'username:password' \
-http://your-pi:8008/execute
+}' http://your-pi:8008/execute
 ```
 - Take note of the `dataStreamId` attributes of the metrics that you want to monitor and add them to the configuration


@@ -223,7 +223,9 @@ code or through whichever platypush backend you have configured:
 ```shell
 # HTTP request
-curl -XPOST -a 'username:password' -H 'Content-Type: application/json' -d '
+curl -XPOST \
+-H "Authorization: Bearer $PP_TOKEN" \
+-H 'Content-Type: application/json' -d '
 {
 "type":"request",
 "action":"zigbee.mqtt.device_set",
@@ -306,7 +308,9 @@ on [Z-Wave events](https://docs.platypush.tech/en/latest/platypush/events/zwave.
 ```shell
 # HTTP request
-curl -XPOST -a 'username:password' -H 'Content-Type: application/json' -d '
+curl -XPOST \
+-H "Authorization: Bearer $PP_TOKEN" \
+-H 'Content-Type: application/json' -d '
 {
 "type":"request",
 "action":"zwave.get_value",


@@ -168,12 +168,18 @@ light.hue:
 ```
 If you have the HTTP backend running, for example, you can easily dispatch such a request to it through the available
-JSON-RPC execute endpoint (after logging at least once at the control panel at `http://localhost:8008` and creating a
-user):
+JSON-RPC execute endpoint.
+
+First create a user through the web panel at `http://localhost:8008`, then generate a token for that user to
+authenticate the API calls - you can easily generate one from the web panel itself, under Settings -> Generate token.
+
+Store the token in an environment variable (e.g. `$PP_TOKEN`) and pass it with your calls through the `Authorization: Bearer`
+header:
 ```shell
 # cURL example
-curl -XPOST -H 'Content-Type: application/json' -u 'username:password' \
+curl -XPOST -H 'Content-Type: application/json' \
+-H "Authorization: Bearer $PP_TOKEN" \
 -d '{"type":"request", "action":"light.hue.on", "args": {"groups": ["Living Room", "Bedroom"]}}' \
 http://localhost:8008/execute
@@ -184,7 +190,7 @@ echo '{
 "args": {
 "groups": ["Living Room", "Bedroom"]
 }
-}' | http -a 'username:password' http://localhost:8008/execute
+}' | http http://localhost:8008/execute "Authorization: Bearer $PP_TOKEN"
 ```

 And you can also easily send requests programmatically through your own Python scripts, basically using Platypush as a
@@ -388,7 +394,8 @@ In both cases, you can call the procedure either from an event hook or directly
 ```shell
 # cURL example
-curl -XPOST -H 'Content-Type: application/json' -u 'username:password' \
+curl -XPOST -H 'Content-Type: application/json' \
+-H "Authorization: Bearer $PP_TOKEN" \
 -d '{"type":"request", "action":"procedure.at_home"}' \
 http://localhost:8008/execute
 ```
@@ -604,8 +611,9 @@ If you enabled the HTTP backend then you may want to point your browser to `http
 Then you can test the HTTP backend by sending e.g. a `get_lights` command:
 ```shell
-curl -XPOST -u 'username:password' \
+curl -XPOST \
 -H 'Content-Type: application/json' \
+-H "Authorization: Bearer $PP_TOKEN" \
 -d '{"type":"request", "action":"light.hue.get_lights"}' \
 http://localhost:8008/execute
 ```
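All of the hunks above follow one pattern, so a small wrapper can save retyping it. A hypothetical helper, not part of the commit: the `pp_exec` name, the `PP_HOST` variable and the dry-run behavior are illustrative (drop the `echo`s and invoke `curl` directly when targeting a live instance):

```shell
# pp_exec ACTION - sketch of a token-authenticated /execute call.
# PP_HOST and PP_TOKEN are assumed to come from the environment;
# the request is printed instead of sent, since sending it needs a
# running Platypush instance.
pp_exec() {
  action=$1
  payload="{\"type\":\"request\", \"action\":\"$action\"}"
  echo "POST http://${PP_HOST:-localhost:8008}/execute"
  echo "Authorization: Bearer ${PP_TOKEN:-example-token}"
  echo "$payload"
}

pp_exec light.hue.get_lights
```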