From d89725f8abc9103e82030d251eeab83decb52071 Mon Sep 17 00:00:00 2001 From: Fabio Manganiello Date: Mon, 25 Jan 2021 19:41:47 +0100 Subject: [PATCH] Migrated second article --- ...omizable-voice-assistant-with-Platypush.md | 236 +++++++++++++++++- 1 file changed, 234 insertions(+), 2 deletions(-) diff --git a/static/pages/Build-your-customizable-voice-assistant-with-Platypush.md b/static/pages/Build-your-customizable-voice-assistant-with-Platypush.md index 05a7842..5b1d24f 100644 --- a/static/pages/Build-your-customizable-voice-assistant-with-Platypush.md +++ b/static/pages/Build-your-customizable-voice-assistant-with-Platypush.md @@ -19,11 +19,11 @@ of the skills/integrations supported by the product, regardless of whichever ans that phrase. And, most of all, my goal was to have all the business logic of the actions to run on my own device(s), not on someone else’s cloud. I feel like by now that goal has been mostly accomplished (assistant technology with 100% flexibility when it comes to phrase patterns and custom actions), and today I’d like to show you how to set up your own -Google Assistant on steroids as well with a Raspberry Pi, microphone and platypush. I’ll also show how to run your +Google Assistant on steroids as well with a Raspberry Pi, microphone and Platypush. I’ll also show how to run your custom hotword detection models through the [Snowboy](https://snowboy.kitt.ai/) integration, for those who wish greater flexibility when it comes to how to summon your digital butler besides the boring “Ok Google” formula, or those who aren’t that happy with the idea of having Google to constantly listen to everything that is said in the room. For those who are unfamiliar with -platypush, I suggest +Platypush, I suggest reading [my previous article](https://blog.platypush.tech/article/Ultimate-self-hosted-automation-with-Platypush) on what it is, what it can do, why I built it and how to get started with it. 
@@ -65,3 +65,235 @@ I eventually decided to develop the integration with the Google Assistant and ig - There are [few Python examples for the Alexa SDK](https://developer.amazon.com/en-US/alexa/alexa-skills-kit/alexa-skill-python-tutorial#sample-python-projects), but they focus on how to develop a skill. I’m not interested in building a skill that runs on Amazon’s servers — I’m interested in detecting hotwords and raw speech on any device, and the SDK should let me do whatever I want with that. + +I eventually opted for +the [Google Assistant library](https://developers.google.com/assistant/sdk/guides/library/python/), but that +has [recently been deprecated with short notice](https://github.com/googlesamples/assistant-sdk-python/issues/356), and +there’s an ongoing discussion about what the future alternatives will be. However, the voice integration with Platypush +still works, and whichever new SDK/API Google releases in the near future, I’ll make sure that it’s still +supported. The two options currently provided are: + +- If you’re running Platypush on an x86/x86_64 machine or on a Raspberry Pi earlier than the model 4 (except for the + Raspberry Pi Zero, since it’s based on ARMv6 and the Assistant library wasn’t compiled for it), you can still use + the Assistant library — even though it’s not guaranteed to work against future builds of libc, given the + deprecated status of the library. + +- Otherwise, you can use the Snowboy integration for hotword detection together with Platypush’s wrapper around the + Google push-to-talk sample for conversation support. + +In this article we’ll see how to get started with both configurations. + +## Installation and configuration + +First things first: in order to get your assistant working you’ll need: + +- An x86/x86_64/ARM device/OS compatible with Platypush and either the Google Assistant library or Snowboy (tested on + most of the Raspberry Pi models, Banana Pis and Odroid, and on the ASUS Tinkerboard).
+ +- A microphone. Literally any Linux-compatible microphone would work. + +I’ll also assume that you have already installed Platypush on your device — the instructions are provided on +the [Github page](https://git.platypush.tech/platypush/platypush), on +the [wiki](https://git.platypush.tech/platypush/platypush/-/wikis/home#installation) and in +my [previous article](https://blog.platypush.tech/article/Ultimate-self-hosted-automation-with-Platypush). + +Follow these steps to get the assistant running: + +- Install the required dependencies: + +```shell +# To run the Google Assistant hotword service + speech detection +# (it won't work on the Raspberry Pi Zero and other ARMv6 devices) +[sudo] pip install 'platypush[google-assistant-legacy]' + +# To run just the Google Assistant speech detection and use +# Snowboy for hotword detection +[sudo] pip install 'platypush[google-assistant]' +``` + +- Follow [these steps](https://developers.google.com/assistant/sdk/guides/service/python/embed/config-dev-project-and-account) + to create and configure a new project in the Google Console and download the required credentials + files. + +- Generate your user’s credentials file to connect the assistant to your account: + +```shell +export CREDENTIALS_FILE=~/.config/google-oauthlib-tool/credentials.json + +google-oauthlib-tool --scope https://www.googleapis.com/auth/assistant-sdk-prototype \ + --scope https://www.googleapis.com/auth/gcm \ + --save --headless --client-secrets $CREDENTIALS_FILE +``` + +- Open the prompted URL in your browser, log in with your Google account if needed and then enter the prompted + authorization code in the terminal. + +The above steps are common to both the Assistant library and the Snowboy+push-to-talk configurations. Let’s now see +how to get things working with the Assistant library, provided that it still works on your device.
+ +### Google Assistant library + +- Enable the Google Assistant backend (to listen to the hotword) and plugin (to programmatically start/stop + conversations in your custom actions) in your Platypush configuration file (by default + `~/.config/platypush/config.yaml`): + +```yaml +backend.assistant.google: + enabled: True + +assistant.google: + enabled: True +``` + +- Refer to the official documentation to check the additional initialization parameters and actions provided by the + [assistant backend](https://platypush.readthedocs.io/en/latest/platypush/backend/assistant.google.html) and + [plugin](https://platypush.readthedocs.io/en/latest/platypush/plugins/assistant.google.html). + +- Restart Platypush and keep an eye on the output to check that everything is alright. Oh, and also double-check that + your microphone is not muted. + +- Just say “OK Google” or “Hey Google”. The basic assistant should work out of the box. + +### Snowboy + Google Assistant library + +Follow the steps in this section if the Assistant library doesn’t work on your device (in most cases you’ll +see a segmentation fault, caused by a mismatched libc version, if you try to import it), or if you want more options when +it comes to supported hotwords, and/or you don’t like the idea of having Google constantly listen to all of your +conversations to detect when you say the hotword. + +```shell +# Install the Snowboy dependencies +[sudo] pip install 'platypush[hotword]' +``` + +- Go to the [Snowboy home page](https://snowboy.kitt.ai/), register/log in and then select the hotword model(s) you like. + You’ll notice that before downloading a model you’ll be asked to provide three voice samples of yours saying the + hotword — a good idea to keep voice models free while getting everyone to improve them. + +- Configure the Snowboy backend and the Google push-to-talk plugin in your Platypush configuration.
Example: + +```yaml +backend.assistant.snowboy: + audio_gain: 1.0 + models: + computer: + voice_model_file: ~/path/models/computer.umdl + assistant_plugin: assistant.google.pushtotalk + assistant_language: it-IT + detect_sound: ~/path/sounds/sound1.wav + sensitivity: 0.45 + + ok_google: + voice_model_file: ~/path/models/OK Google.pmdl + assistant_plugin: assistant.google.pushtotalk + assistant_language: en-US + detect_sound: ~/path/sounds/sound2.wav + sensitivity: 0.42 + +assistant.google.pushtotalk: + language: en-US +``` + +A few words about the configuration options: + +- Tweak `audio_gain` to adjust the gain of your microphone (1.0 for a 100% gain). + +- `models` will contain a key-value list of the voice models that you want to use. + +- For each model you’ll have to specify its `voice_model_file` (downloaded from the Snowboy website), which + `assistant_plugin` will be used (`assistant.google.pushtotalk` in this case), the `assistant_language` code, i.e. the + selected language for the assistant conversation when that hotword is detected (default: `en-US`), an optional + `detect_sound`, a WAV file that will be played when a conversation starts, and the `sensitivity` of that model, between 0 + and 1 — with 0 meaning no sensitivity and 1 very high sensitivity (tweak it to your own needs, but be aware that a + value higher than 0.5 might trigger more false positives). + +- The `assistant.google.pushtotalk` plugin configuration only requires the default assistant language to be used. + +Refer to the official documentation for extra initialization parameters and methods provided by the +[Snowboy backend](https://platypush.readthedocs.io/en/latest/platypush/backend/assistant.snowboy.html) and the +[push-to-talk plugin](https://platypush.readthedocs.io/en/latest/platypush/plugins/assistant.google.pushtotalk.html). + +Restart Platypush and check the logs for any errors, then say your hotword.
If everything went well, an assistant +conversation will be started when the hotword is detected. + +## Create custom events on speech detected + +Now that you’ve got the basic features of the assistant up and running, it’s time to customize the configuration and +leverage the versatility of Platypush to get your assistant to run whatever you like when you say whatever +phrase you like. You can create event hooks for any of the events triggered by the assistant — among those, +`SpeechRecognizedEvent`, `ConversationStartEvent`, `HotwordDetectedEvent`, `TimerEndEvent` etc. — and those hooks can run +anything that has a Platypush plugin. Let’s see an example that turns on your Philips Hue lights when you say “turn on the +lights”: + +```yaml +event.hook.AssistantTurnLightsOn: + if: + type: platypush.message.event.assistant.SpeechRecognizedEvent + phrase: "turn on (the)? lights?" + then: + - action: light.hue.on +``` + +You’ll also notice that the assistant’s answer is suppressed if the detected phrase matches an existing rule, but +if you still want the assistant to speak a custom phrase you can use the `tts` or `tts.google` plugins: + +```yaml +event.hook.AssistantTurnOnLightsAnimation: + if: + type: platypush.message.event.assistant.SpeechRecognizedEvent + phrase: "turn on (the)? animation" + then: + - action: light.hue.animate + args: + animation: color_transition + transition_seconds: 0.25 + + - action: tts.say + args: + text: Enjoy the light show +``` + +You can also programmatically start a conversation without using the hotword to trigger the assistant.
For example, this +is a rule that triggers the assistant whenever you press a Flic button: + +```yaml +event.hook.FlicButtonStartConversation: + if: + type: platypush.message.event.button.flic.FlicButtonEvent + btn_addr: 00:11:22:33:44:55 + sequence: + - ShortPressEvent + then: + - action: assistant.google.start_conversation + # or: + # - action: assistant.google.pushtotalk.start_conversation +``` + +Additional win: if you have configured the HTTP backend and you have access to the web panel or the dashboard, then +you’ll notice that the status of the conversation will also appear on the web page as a modal dialog, where you’ll see +when a hotword has been detected, the recognized speech and the transcript of the assistant’s response. + +That’s all you need to know to customize your assistant — now you can, for instance, write rules that blink your +lights when an assistant timer ends, or programmatically play your favourite playlist on mpd/mopidy when you say a +particular phrase, or handle a home-made multi-room music setup with Snapcast+Platypush through voice commands. As long +as there’s a Platypush plugin to do what you want to do, you can do it already.
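+As a sketch of the first idea, here is what a hook that runs a light animation on your Hue lights when an assistant
+timer ends could look like. The animation parameters are just illustrative; check the `light.hue` plugin documentation
+for the options supported by your setup:
+
+```yaml
+event.hook.AssistantTimerEndLightsAnimation:
+  if:
+    type: platypush.message.event.assistant.TimerEndEvent
+  then:
+    - action: light.hue.animate
+      args:
+        animation: color_transition
+        transition_seconds: 0.25
+```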
+ +## Live demo + +A [TL;DR video](https://photos.app.goo.gl/mCscTDFcB4SzazeK7) with a practical example. In this video: + +- Using the Google Assistant basic features ("how's the weather?") with the "OK Google" hotword (in English) + +- Triggering a conversation in Italian when I say the "computer" hotword instead + +- Supporting custom responses through the Text-to-Speech plugin + +- Controlling the music through custom hooks that leverage mopidy as a backend (and synchronizing music with devices in other rooms through the Snapcast plugin) + +- Triggering a conversation without a hotword: in this case I defined a hook that starts a conversation when something approaches a distance sensor on my Raspberry Pi + +- Taking pictures from a camera on another Raspberry Pi and previewing them on the screen through Platypush's camera plugins, and sending them to mobile devices through the Pushbullet or AutoRemote plugins + +- Showing all the conversations and responses visually on the Platypush web dashboard