If the user provided `username` and `password` in the plugin
configuration, then we should use those credentials to refresh the OAuth
token when it expires.
- Merged `trello` plugin and backend into a single plugin.
- Removed legacy `Response` objects, replaced with data classes and
schemas.
- Fixed the WebSocket connection flow to reflect the new authentication
protocol.
Closes: #307
Dropdown components should always be rendered under the root element,
otherwise absolute positioning on parent elements may end up hiding
dropdown elements regardless of their `z-index`.
The new approach uses a single `<DropdownContainer>` element in the
main `App` file. Each `<Dropdown>` component pushes updates to the bus
whenever it triggers open/close events, and the dropdown to be rendered
is forwarded upstream and rendered inside the root element.
- Streaming and media subtitles endpoints moved from Flask to Tornado
routes - the old Flask streaming route no longer worked behind a
Tornado server.
- Storing the streaming state on Redis rather than in a local variable,
otherwise different Tornado processes may end up with different copies
of the registry (see the sketch below).
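A minimal sketch of the idea, assuming `redis-py` and illustrative key
names (the actual registry layout may differ):
```
import json
from redis import Redis

redis = Redis()  # connection parameters come from the Redis configuration

def save_streaming_state(media_id: str, state: dict):
    # Any Tornado process can read the same entry back from Redis.
    redis.set(f'platypush/media/streaming/{media_id}', json.dumps(state))

def load_streaming_state(media_id: str):
    state = redis.get(f'platypush/media/streaming/{media_id}')
    return json.loads(state) if state else None
```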
Closes: #336
Instead of relying on the official Google YouTube API (limited, subject
to breaking changes with little or no notice depending on Google's
strategy against scrapers, and with a costly initial setup), we'll just
stick to Piped from now on.
It's free, it doesn't require API keys, it's unlikely to change, it's
not subject to Google's hostile practices against developers, and
anybody can run an instance.
`youtube-dl` is mostly dead and there are several forks available, thus
we need to give the user the ability to pick which `youtube-dl`
executable fork they want to use.
Among these, `yt-dlp` is probably the most maintained today and it's
also included in many default repos, so it's been added as an extra
requirement for all the media plugins.
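For illustration, the fallback logic boils down to something like this
(the candidate list and its order are assumptions, not the exact
implementation):
```
import shutil
from typing import Optional

def find_youtube_dl(configured: Optional[str] = None) -> str:
    # Honour the user-configured executable first, then try the known forks.
    candidates = [configured] if configured else ['yt-dlp', 'youtube-dl']
    for exe in candidates:
        path = shutil.which(exe)
        if path:
            return path
    raise FileNotFoundError('No youtube-dl compatible executable found')
```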
Closes: #268
The base `media` plugin is abstract, hence the `Media` panel needs to
pass the current media plugin to the `Utils` mixins so it can pick the
right action.
- The default PopcornTime API host has changed, as popcorn-time.ga is no
longer available.
- The IMDb API now requires a paid tier even for a basic query. The
official IMDb API layer (and the API key requirement) has thus been
replaced with a dear ol' scraping of the frontend endpoint.
- Pass of Black/LINT.
This allows loading spinners, modals and other components with a real
fullscreen background to stretch over the required space, without being
covered by the navigator or other sibling components.
This also requires the collapsed navigator to have a 1px margin-right,
otherwise its separation border won't be visible.
The plugin now leverages the `sound` plugin for playback, like all other
`tts` plugins now do, instead of an external `media` plugin.
This also removes the need for the `/tts/mimic3/say` endpoint.
- Added `input_format`/`output_format` options to both input and output
audio streams.
- Replaced the previous (confusing) occurrences of `ffmpeg_format` and
`format`.
- Added custom `dtype` option for `sound.play`.
- Added `join` flag (default: false) to `sound.play` to wait for the
playback to finish.
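For example, the new options can be passed like any other action
arguments over the HTTP `/execute` endpoint (host, token and the audio
file argument below are placeholders):
```
import requests

requests.post(
    'http://localhost:8008/execute',
    headers={'Authorization': 'Bearer <TOKEN>'},
    json={
        'type': 'request',
        'action': 'sound.play',
        'args': {
            'file': '/path/to/audio.mp3',  # placeholder resource
            'dtype': 'int16',              # custom dtype for the audio stream
            'join': True,                  # wait for the playback to finish
        },
    },
)
```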
Different versions of the `sounddevice` dependency may or may not return
the `index` parameter when querying the available sound devices.
Thus, the code should be ready for both cases.
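A sketch of the defensive handling (assuming `sounddevice.query_devices()`
as the query entry point):
```
import sounddevice as sd

def get_devices():
    devices = []
    for i, dev in enumerate(sd.query_devices()):
        info = dict(dev)
        # Fall back to the enumeration position if this version of the
        # library doesn't report an explicit `index` field.
        info['index'] = info.get('index', i)
        devices.append(info)
    return devices
```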
The integration was based on my old fork of the AVS service, which is no
longer functional given the changes on Amazon's backend side.
A new `avs-device-sdk` is now available, but it seems that it requires
lengthy compilation processes which are Raspberry Pi-specific.
Further investigation is needed for a new Alexa plugin - see #334.
It hurts to see it go, as I really believed in this project.
But the website of the project went away in 2020, the GitHub project
hasn't seen any activity since 2021, and the fork that is supposed to be
used as a replacement for training .pmdl models hasn't been updated
since 2021 - and it only supports Python 2 on Ubuntu 16.04 or 18.04.
One day I may dedicate some efforts to bring Snowboy back to life, but
until then it's definitely not in a state where it's usable for a
Platypush integration.
It only existed as a backwards-compatibility layer for armv6, since
there was no build of the assistant library that worked on the Raspberry
Pi Zero.
But that API layer has been discontinued by Google and it's no longer
functional, so only the `assistant.google` integration (on x86_64 and
armv7) is currently supported.
After creating the virtual environment, we should add `<VENV_DIR>/bin`
to the `PATH` variable, so any subsequent `python`/`pip` commands will be
executed in the new environment.
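Roughly equivalent to the following (simplified sketch):
```
import os

def activate_venv(venv_dir: str):
    # Prepend <VENV_DIR>/bin to PATH so that `python` and `pip` resolve to
    # the virtual environment's executables from now on.
    os.environ['PATH'] = (
        os.path.join(venv_dir, 'bin') + os.pathsep + os.environ.get('PATH', '')
    )
```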
Before this fix, `platyvenv`, unlike `platydock`, didn't take into
account any extra before/after installation commands that individual
integrations may declare in their manifest files.
- `iputils` should be an explicit system dependency for `ping`.
Some minimal systems (like some Docker images) may not have the `ping`
command installed out of the box.
- `hid` and `marshmallow_dataclass` should be among the auto-mocked
modules.
If an extension is configured and enabled, then the UI will now include
a tick next to its name and the currently loaded configuration in the
`Configuration` tab.
Added a `wrapped` "hidden" parameter to the function returned by the
`@action` decorator.
We need this to access the underlying decorated function when e.g. we
need to inspect its specs or decorators.
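Conceptually, the decorator just keeps a reference to the original
function (simplified sketch; the real decorator also wraps the return
value into a `Response`):
```
from functools import wraps

def action(f):
    @wraps(f)
    def _wrapped(*args, **kwargs):
        # ... response normalization omitted ...
        return f(*args, **kwargs)

    # Expose the undecorated function so that e.g. the inspection logic can
    # still read its signature and decorators.
    _wrapped.wrapped = f
    return _wrapped
```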
- The `inspect` plugin and the Sphinx inspection extensions now use the
same underlying logic.
- Moved all the common inspection logic under
`platypush.common.reflection`.
- Faster scanning of the available integrations and components through a
pool of threads.
- Added `doc_url` parameters.
- Migrated events and responses metadata scanning logic.
- Now expanding some custom Sphinx tags instead of returning errors when
running outside of the Sphinx context - these include `:class:`,
`:meth:` and `.. schema::`.
- Check if it's part of the metadata through a function call rather than
checking `Base.metadata` in every single module.
- Make it possible to override them (mostly for doc generation logic
that needs to be able to import those classes).
- Make it possible to extend them.
1. Improved documentation. Every plugin now reports the exact steps to
get the integration up and running with the right API scopes.
2. All Google plugins now have a standard process to get (and reuse) the
client secret. Except for PubSub, Translate and Maps (which have
their own flows), all the Google plugins now read the client secrets
from `<WORKDIR>/credentials/google/client_secret.json` by default.
3. Black/LINT for some of those plugins, which hadn't been touched in a
while.
4. The interface to pass API scopes is now leaner. It's now possible to
   pass a scope directly as e.g. `calendar.readonly` rather than
   `https://www.googleapis.com/auth/calendar.readonly` (see the sketch
   after this list).
5. Improved the logic to retrieve the right scope tokens file. If e.g.
an integration requires the role `A`, and a credentials file exists
for the roles `A` and `B`, then this file will be used rather than
prompting the user to authenticate again.
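A sketch of the scope normalization mentioned in point 4 (illustrative
only):
```
def normalize_scope(scope: str) -> str:
    # Accept both the short form (e.g. `calendar.readonly`) and the full URL.
    if scope.startswith('https://'):
        return scope
    return f'https://www.googleapis.com/auth/{scope}'
```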
The old type configuration
(`platypush.plugins.calendar.name.CalendarNamePlugin`) is a bit clunky.
Instead, since the type will always be a plugin, we should encourage
the use of `calendar.name` directly to identify the type.
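Resolving the short form back to the plugin class is mechanical,
assuming the standard module/class naming convention (sketch):
```
import importlib

def get_plugin_class(plugin_type: str):
    # e.g. `calendar.name` -> platypush.plugins.calendar.name.CalendarNamePlugin
    module = importlib.import_module(f'platypush.plugins.{plugin_type}')
    cls_name = ''.join(t.title() for t in plugin_type.split('.')) + 'Plugin'
    return getattr(module, cls_name)
```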
If no docstring is specified for a constructor, Python usually pre-fills
a standard text - "Initialize self. See help(type(self))".
We don't need this default text in our plugins documentation.
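The check boils down to something like this (sketch):
```
def constructor_doc(cls):
    doc = cls.__init__.__doc__
    # Skip the boilerplate docstring that Python injects when the class
    # doesn't define its own constructor docstring.
    if not doc or doc == object.__init__.__doc__:
        return None
    return doc
```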
If the client that forwarded the request is no longer available (either
because an exception or a timeout was raised) then its I/O buffer and
event loop may be closed.
In this case, the response callback should handle and report the
exception, and still set the event, so that any other threads waiting
for the response can move on.
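In outline, the callback should do something like this (sketch, with
illustrative names):
```
import logging
import threading

logger = logging.getLogger(__name__)

def on_response(response, client, response_received: threading.Event):
    try:
        # This may fail if the client's I/O buffer or event loop has already
        # been closed (e.g. after an exception or a timeout).
        client.send(str(response))
    except Exception as e:
        logger.warning('Could not deliver the response: %s', e)
    finally:
        # Always set the event, so other threads waiting on this response
        # can move on instead of hanging forever.
        response_received.set()
```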
Added an `add_dependencies` plugin to the Sphinx build process that
parses the manifest files of the scanned backends and plugins and
automatically generates the documentation for the required dependencies
and triggered events.
This means that those dependencies are no longer required to be listed
in the docstring of the class itself.
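In its simplest form, the hook is a regular Sphinx extension along these
lines (sketch; the actual extension also parses the manifest files):
```
# docs/source/_ext/add_dependencies.py (illustrative path)

def on_process_docstring(app, what, name, obj, options, lines):
    # Append the dependencies/events sections generated from the
    # integration's manifest file to the class docstring.
    if what == 'class' and name.startswith('platypush.'):
        lines.extend(['', '**Dependencies**', '', '- ...'])

def setup(app):
    app.connect('autodoc-process-docstring', on_process_docstring)
    return {'parallel_read_safe': True}
```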
Also in this commit:
- Black/LINT for some integrations that hadn't been touched in a long
time.
- Deleted some leftovers from previous refactors (deprecated
`backend.mqtt`, `backend.zwave.mqtt`, `backend.http.request.rss`).
- Deleted deprecated `inotify` backend - replaced by `file.monitor` (see
#289).
By default, the `phue` library will store the file containing the token
and the bridge configuration under `~/.python_hue`.
That's outside of our application folder, and it can't easily be copied
around or added to Docker volumes.
We should instead have it under `<WORKDIR>/light.hue/config.json`, in
line with what the other plugins do, and if `~/.python_hue` is available
but `<WORKDIR>/light.hue/config.json` isn't then we should copy the
legacy file to the new one.
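A sketch of the migration logic, with the paths described above:
```
import os
import shutil

def migrate_phue_config(workdir: str):
    legacy = os.path.expanduser('~/.python_hue')
    new = os.path.join(workdir, 'light.hue', 'config.json')

    if os.path.isfile(legacy) and not os.path.isfile(new):
        # Keep the token/bridge configuration inside the application workdir,
        # so it can easily be backed up or mounted as a Docker volume.
        os.makedirs(os.path.dirname(new), exist_ok=True)
        shutil.copy(legacy, new)
```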
I've tried my best to keep it around, but the endpoints seem to be
broken, they no longer have a link to their API v3 documentation, and
the API Explorer that was supposed to be in the dashboard is gone.
The @action decorator should capture all the exceptions,
log them and return them on `Response.errors`.
This ensures that uncaught exceptions from plugin
actions won't unwind out of control, and also that they
are logged and treated consistently across all the
integrations.
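Schematically (sketch; the import path and the `Response` signature are
assumptions based on the description above):
```
from functools import wraps

from platypush.message.response import Response

def action(f):
    @wraps(f)
    def _wrapped(self, *args, **kwargs):
        try:
            return f(self, *args, **kwargs)
        except Exception as e:
            # Log the failure and surface it on the response instead of
            # letting the exception unwind out of the plugin action.
            self.logger.exception(e)
            return Response(errors=[str(e)])

    return _wrapped
```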
If we include the class name by default then we won't have to
explicitly modify the client_id in the implementation classes
in order to prevent clashes.
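i.e. something along these lines (sketch; attribute names are
illustrative):
```
class ClientMixin:
    base_client_id = 'platypush'  # illustrative default

    @property
    def client_id(self) -> str:
        # Appending the class name prevents clashes between implementation
        # classes that would otherwise share the same default client_id.
        return f'{self.base_client_id}-{self.__class__.__name__}'
```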
- Do `abspath`+`expanduser` on the configuration file path before
checking if it exists.
- If the path doesn't exist, but the user explicitly passed a
configuration file, then copy/create the default configuration
under the specified directory.
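A sketch of the flow (the `DEFAULT_CONFIG_FILE` constant and its value
are hypothetical):
```
import os
import shutil

DEFAULT_CONFIG_FILE = '/usr/share/platypush/config.yaml'  # hypothetical

def get_config_file(cfgfile: str) -> str:
    cfgfile = os.path.abspath(os.path.expanduser(cfgfile))
    if not os.path.isfile(cfgfile):
        # The user explicitly asked for this location: create the default
        # configuration there instead of bailing out.
        os.makedirs(os.path.dirname(cfgfile), exist_ok=True)
        shutil.copy(DEFAULT_CONFIG_FILE, cfgfile)
    return cfgfile
```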
The new configuration:
- Enables `backend.http` by default
- Removes the extra `config.auto.yaml` dependency
- Includes many more examples, lots of updates for existing examples,
and extensive comments.
Following some common UNIX conventions, if no configuration file is
specified and none exists under the default locations, then a new
configuration directory should be created as follows:
```
import os

if os.getuid() == 0:
    cfgdir = '/etc/platypush'
else:
    cfgdir = os.path.join(
        os.environ.get('XDG_CONFIG_HOME', os.path.expanduser('~/.config')),
        'platypush',
    )
```
The two scripts now share the same command interface, behaviour and base
class.
Also, Platydock now builds a Docker image instead of just printing a
Dockerfile, unless the `--print` option is passed.
Instead of having a custom `get_installed` callable field with
replicated code for each package manager, that logic has now been
promoted to a class method containing the common logic. The instances
now expect a `list` field (the base command to list the installed
packages with the given package manager) and a `parse_list_line`
callback field (to extract the base package name from a raw line of the
command above).
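Roughly, each package manager is now described declaratively, something
like this (field values are illustrative):
```
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class PackageManager:
    executable: str
    # Base command to list the installed packages.
    list: Sequence[str]
    # Extracts the base package name from a raw line of the command above.
    parse_list_line: Callable[[str], str]

APT = PackageManager(
    executable='apt',
    list=('apt', 'list', '--installed'),
    parse_list_line=lambda line: line.split('/')[0],
)
```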
Also, we shouldn't run the list command if we're running within a Docker
context - the host and container environments will be different.
This is useful to determine the default set of scripts the installer
should use, depending on the detected package manager.
If the /install folder on the container doesn't contain a copy of the
source files, then the git repository will be cloned under that folder.
The user can specify which tag/branch/commit they want to install via
the `-r`/`--ref` option.
Created `platypush/install` folder that contains:
- Dockerfiles for the supported distros
- Lists of required base dependencies for the supported distros
- Install and run scripts
- Added Debian to supported base images
Platydock will now only print out a Dockerfile given a configuration
file.
No more maintaining the state of containers, storing separate workdirs
and configuration directories etc. - that introduced way too much
overhead on top of Docker.
The database settings could also be overridden in the configuration file
besides the command line.
We should therefore pass the path to the runtime configuration file, so
the Alembic process can initialize its configuration from the same file
and use the same settings.
If the path of the default database engine is overridden via the
`--workdir` option, then it won't be visible to the new `python`
subprocess spawned for Alembic.
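One way to forward the settings is to pass the configuration file path
as a custom `-x` argument to the Alembic subprocess, which its `env.py`
can read back via `context.get_x_argument()` (illustrative only; the
actual mechanism in the code may differ):
```
import subprocess

def run_db_migrations(config_file: str):
    # Alembic's env.py can resolve the same database settings from the
    # runtime configuration file passed here.
    subprocess.run(
        ['alembic', '-x', f'config={config_file}', 'upgrade', 'head'],
        check=True,
    )
```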
We should load the latest timestamps from the db when the thread starts
instead of doing it in the constructor.
The constructor may be invoked before the entities engine has been
initialized, which may result in deadlocks.
The `variable` plugin may break in the constructor the first time the
application is started.
That's because it tries to initialize the cache of stored variables, but
the local database hasn't yet been initialized.
That's because plugins are registered _before_ the entities engine is
initialized, as the entities engine assumes that it already has plugins
to scan for entities.
Therefore, the initialization of the `variable` plugin's cache should be
lazy (only done upon the first call to `get`/`set` etc.), in order to
prevent deadlock situations where the plugin waits for the engine to
start, but the engine will be initialized only after the plugin is
ready.
And the lazy initialization logic should also ensure that the entities
engine has been properly started (and raise a `TimeoutError` if that's
not the case), in order to prevent race conditions.
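A sketch of the lazy initialization (names, timeout and the loading step
are illustrative):
```
import threading

class VariablePlugin:
    def __init__(self):
        # No database access here: the entities engine may not be ready yet.
        self._db_vars = None
        self._engine_ready = threading.Event()  # set when the engine starts

    def _ensure_initialized(self, timeout: float = 20.0):
        if self._db_vars is not None:
            return
        if not self._engine_ready.wait(timeout=timeout):
            raise TimeoutError('The entities engine has not started yet')
        self._db_vars = {}  # here: load the stored variables from the db

    def get(self, name):
        self._ensure_initialized()
        return self._db_vars.get(name)
```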
- If a Python optional dependency is available as a system package on
the target system, try and install it through the system package manager
rather than via pip. It's usually faster, and it decreases the risk of
breaking system packages (see the sketch after this list).
- Added support for apk dependencies in manifest files. This brings the
number of distros officially supported by all the extensions to four:
- Alpine
- Arch
- Debian
- Ubuntu
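A sketch of the preference logic from the first point (package names and
commands are illustrative; the real implementation resolves them from
the manifest files):
```
import shutil
import subprocess

def install_dependency(pip_name: str, system_name: str):
    # Prefer the distro package when a supported package manager is
    # available: it's usually faster and doesn't risk breaking
    # system-managed Python packages.
    if shutil.which('pacman'):
        cmd = ['pacman', '-S', '--noconfirm', system_name]
    elif shutil.which('apt'):
        cmd = ['apt', 'install', '-y', system_name]
    elif shutil.which('apk'):
        cmd = ['apk', 'add', system_name]
    else:
        cmd = ['pip', 'install', pip_name]

    subprocess.run(cmd, check=True)
```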