diff --git a/static/pages/Build-custom-voice-assistants.md b/static/pages/Build-custom-voice-assistants.md index 17a9ee7..e9b3754 100644 --- a/static/pages/Build-custom-voice-assistants.md +++ b/static/pages/Build-custom-voice-assistants.md @@ -257,11 +257,13 @@ assistant.google.pushtotalk: procedures, or through the HTTP API: ```shell -curl -XPOST -H 'Content-Type: application/json' -d ' +curl -XPOST \ + -H "Authorization: Bearer $PP_TOKEN" \ + -H 'Content-Type: application/json' -d ' { "type":"request", "action":"assistant.google.pushtotalk.start_conversation" -}' -a 'username:password' http://your-rpi:8008/execute +}' http://your-rpi:8008/execute ``` ### Features @@ -335,11 +337,13 @@ assistant.echo: conversations programmatically through e.g. Platypush event hooks, procedures, or through the HTTP API: ```shell -curl -XPOST -H 'Content-Type: application/json' -d ' +curl -XPOST \ + -H "Authorization: Bearer $PP_TOKEN" \ + -H 'Content-Type: application/json' -d ' { "type":"request", "action":"assistant.echo.start_conversation" -}' -a 'username:password' http://your-rpi:8008/execute +}' http://your-rpi:8008/execute ``` ### Features @@ -537,14 +541,16 @@ backend.stt.deepspeech: `stt.deepspeech.stop_detection`. You can also use it to perform offline speech transcription from audio files: ```shell -curl -XPOST -H 'Content-Type: application/json' -d ' +curl -XPOST \ + -H "Authorization: Bearer $PP_TOKEN" \ + -H 'Content-Type: application/json' -d ' { "type":"request", "action":"stt.deepspeech.detect", "args": { "audio_file": "~/audio.wav" } -}' -a 'username:password' http://your-rpi:8008/execute +}' http://your-rpi:8008/execute # Example response { diff --git a/static/pages/Detect-people-with-a-RaspberryPi-a-thermal-camera-Platypush-and-a-pinch-of-machine-learning.md b/static/pages/Detect-people-with-a-RaspberryPi-a-thermal-camera-Platypush-and-a-pinch-of-machine-learning.md index 7e9bb77..7c0cef2 100644 --- a/static/pages/Detect-people-with-a-RaspberryPi-a-thermal-camera-Platypush-and-a-pinch-of-machine-learning.md +++ b/static/pages/Detect-people-with-a-RaspberryPi-a-thermal-camera-Platypush-and-a-pinch-of-machine-learning.md @@ -158,8 +158,10 @@ camera.ir.mlx90640: Restart the service, and if you haven't already create a user from the web interface at `http://your-rpi:8008`. You should now be able to take pictures through the API: -```yaml -curl -XPOST -H 'Content-Type: application/json' -d ' +```shell +curl -XPOST \ +-H "Authorization: Bearer $PP_TOKEN" \ +-H 'Content-Type: application/json' -d ' { "type":"request", "action":"camera.ir.mlx90640.capture", @@ -167,7 +169,7 @@ curl -XPOST -H 'Content-Type: application/json' -d ' "output_file":"~/snap.png", "scale_factor":20 } -}' -u 'username:password' http://localhost:8008/execute +}' http://localhost:8008/execute ``` If everything went well, the thermal picture should be stored under `~/snap.png`. 
In my case it looks like this while @@ -473,7 +475,9 @@ the [`tensorflow.predict`](https://docs.platypush.tech/en/latest/platypush/plugi method: ```shell -curl -XPOST -u 'user:pass' -H 'Content-Type: application/json' -d ' +curl -XPOST \ +-H "Authorization: Bearer $PP_TOKEN" \ +-H 'Content-Type: application/json' -d ' { "type":"request", "action":"tensorflow.predict", diff --git a/static/pages/How-to-build-your-personal-infrastructure-for-data-collection-and-visualization.md b/static/pages/How-to-build-your-personal-infrastructure-for-data-collection-and-visualization.md index ba167af..31fead2 100644 --- a/static/pages/How-to-build-your-personal-infrastructure-for-data-collection-and-visualization.md +++ b/static/pages/How-to-build-your-personal-infrastructure-for-data-collection-and-visualization.md @@ -804,12 +804,13 @@ python -m platypush.plugins.google.credentials \ - With Platypush running, check the data sources that are available on your account: ```shell -curl -XPOST -H 'Content-Type: application/json' -d ' +curl -XPOST \ + -H "Authorization: Bearer $PP_TOKEN" \ + -H 'Content-Type: application/json' -d ' { "type":"request", "action":"google.fit.get_data_sources" - }' -u 'username:password' \ - http://your-pi:8008/execute + }' http://your-pi:8008/execute ``` - Take note of the `dataStreamId` attributes of the metrics that you want to monitor and add them to the configuration diff --git a/static/pages/Transform-a-RaspberryPi-into-a-universal-Zigbee-and-Z-Wave-bridge.md b/static/pages/Transform-a-RaspberryPi-into-a-universal-Zigbee-and-Z-Wave-bridge.md index fda7af1..e06ecb6 100644 --- a/static/pages/Transform-a-RaspberryPi-into-a-universal-Zigbee-and-Z-Wave-bridge.md +++ b/static/pages/Transform-a-RaspberryPi-into-a-universal-Zigbee-and-Z-Wave-bridge.md @@ -223,7 +223,9 @@ code or through whichever platypush backend you have configured: ```shell # HTTP request -curl -XPOST -a 'username:password' -H 'Content-Type: application/json' -d ' +curl -XPOST \ + -H "Authorization: Bearer $PP_TOKEN" \ + -H 'Content-Type: application/json' -d ' { "type":"request", "action":"zigbee.mqtt.device_set", @@ -306,7 +308,9 @@ on [Z-Wave events](https://docs.platypush.tech/en/latest/platypush/events/zwave. ```shell # HTTP request -curl -XPOST -a 'username:password' -H 'Content-Type: application/json' -d ' +curl -XPOST \ + -H "Authorization: Bearer $PP_TOKEN" \ + -H 'Content-Type: application/json' -d ' { "type":"request", "action":"zwave.get_value", diff --git a/static/pages/Ultimate-self-hosted-automation-with-Platypush.md b/static/pages/Ultimate-self-hosted-automation-with-Platypush.md index 98530ca..f75696b 100644 --- a/static/pages/Ultimate-self-hosted-automation-with-Platypush.md +++ b/static/pages/Ultimate-self-hosted-automation-with-Platypush.md @@ -168,12 +168,18 @@ light.hue: ``` If you have the HTTP backend running, for example, you can easily dispatch such a request to it through the available -JSON-RPC execute endpoint (after logging at least once at the control panel at `http://localhost:8008` and creating a -user): +JSON-RPC execute endpoint. + +First create a user through the web panel at `http://localhost:8008`, then generate a token for the user to authenticate +the API calls - you can easily generate a token from the web panel itself, Settings -> Generate token. + +Store the token under an environment variable (e.g. 
`$PP_TOKEN`) and use it in your calls over the `Authorization: Bearer` +header: ```shell # cURL example -curl -XPOST -H 'Content-Type: application/json' -u 'username:password' \ +curl -XPOST -H 'Content-Type: application/json' \ + -H "Authorization: Bearer $PP_TOKEN" \ -d '{"type":"request", "action":"light.hue.on", "args": {"groups": ["Living Room", "Bedroom"]}}' \ http://localhost:8008/execute @@ -184,7 +190,7 @@ echo '{ "args": { "groups": ["Living Room", "Bedroom"] } -}' | http -a 'username:password' http://localhost:8008/execute +}' | http http://localhost:8008/execute "Authorization: Bearer $PP_TOKEN" ``` And you can also easily send requests programmatically through your own Python scripts, basically using Platypush as a @@ -388,7 +394,8 @@ In both cases, you can call the procedure either from an event hook or directly ```shell # cURL example -curl -XPOST -H 'Content-Type: application/json' -u 'username:password' \ +curl -XPOST -H 'Content-Type: application/json' \ + -H "Authorization: Bearer $PP_TOKEN" \ -d '{"type":"request", "action":"procedure.at_home"}' \ http://localhost:8008/execute ``` @@ -604,8 +611,9 @@ If you enabled the HTTP backend then you may want to point your browser to `http Then you can test the HTTP backend by sending e.g. a `get_lights` command: ```shell -curl -XPOST -u 'username:password' \ +curl -XPOST \ -H 'Content-Type: application/json' \ + -H "Authorization: Bearer $PP_TOKEN" \ -d '{"type":"request", "action":"light.hue.get_lights"}' \ http://localhost:8008/execute ```
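
---

Note for reviewers trying the updated snippets: below is a minimal end-to-end sketch of the new token flow, assuming a token has already been generated from the web panel (Settings -> Generate token). The `~/.config/platypush/token` path is purely illustrative (any secret store works), and the curl call mirrors the `light.hue.get_lights` example touched by this changeset.

```shell
# Load the previously generated token into the environment variable used
# throughout the updated examples. The file path here is only an example
# location; keep the token wherever suits your setup.
export PP_TOKEN="$(cat ~/.config/platypush/token)"

# Any of the /execute calls in this changeset can then be run as-is, e.g.:
curl -XPOST \
  -H "Authorization: Bearer $PP_TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"type":"request", "action":"light.hue.get_lights"}' \
  http://localhost:8008/execute
```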