The Zeroconf registration part may randomly get stuck, resulting in the
web server not being properly started.
It's therefore better to run the Zeroconf registration process
asynchronously, since it isn't strictly required for the web server to
run.
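
A minimal sketch of the idea, assuming the python-zeroconf API; the service name and address below are made up for illustration:

```python
import socket
import threading

from zeroconf import ServiceInfo, Zeroconf

def register_zeroconf_async(port: int = 8008):
    """Run the Zeroconf registration in a background thread, so a hang in
    the registration can no longer block the web server startup."""
    def _register():
        info = ServiceInfo(
            '_http._tcp.local.',
            'platypush-web._http._tcp.local.',             # hypothetical service name
            addresses=[socket.inet_aton('192.168.1.10')],  # hypothetical address
            port=port,
        )
        Zeroconf().register_service(info)

    threading.Thread(target=_register, daemon=True).start()
```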
When a cronjob receives a TIME_SYNC event (because the system clock has
changed/drifted and the cronjobs are expected to recalculate their next
run slot) it should also clear the event.
Otherwise, the next `wait` will be skipped and the cronjob will be
executed even if it wasn't scheduled.
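
A rough sketch of the intended behaviour; the class and event names below are illustrative, not the actual cronjob implementation:

```python
import threading
import time

class CronjobEvent:
    NONE, STOP, TIME_SYNC = range(3)

class Cronjob(threading.Thread):
    def __init__(self, seconds_to_next_run: float):
        super().__init__()
        self._event = threading.Event()
        self._event_type = CronjobEvent.NONE
        self._seconds_to_next_run = seconds_to_next_run

    def notify(self, event_type: int):
        self._event_type = event_type
        self._event.set()

    def run(self):
        while True:
            self._event.wait(timeout=self._seconds_to_next_run)

            if self._event_type == CronjobEvent.TIME_SYNC:
                # Recalculate the next run slot AND clear the event, otherwise
                # the next wait() returns immediately and the job runs even
                # though it wasn't scheduled.
                self._event.clear()
                self._event_type = CronjobEvent.NONE
                self._seconds_to_next_run = self._recalculate_next_run()
                continue

            if self._event_type == CronjobEvent.STOP:
                break

            # The timeout expired with no pending event: the job is due.
            self._run_job()

    def _recalculate_next_run(self) -> float:
        return 60.0  # placeholder for the real cron expression logic

    def _run_job(self):
        print('cronjob fired at', time.ctime())
```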
This may make things a bit less optimal, but it's probably the only
possible solution that preserves my sanity.
Managing upserts of cached instances that were previously made transient
and expunged from the session is far from easy, and the management of
recursive parent/children relationships only adds one more layer of
complexity (and that management is already complex enough in its current
implementation).
The `disable_logging` attribute was only available on events and
responses, and it could only either entirely disable or enable logging
for all the events of a certain type.
The new flag allows more customization by setting the default logging
level used for any message of a certain type (or `None` to disable
logging). This makes it possible to e.g. set some verbose events to
debug level, so that users can see them if they configure the
application in debug mode.
It also delegates the logging logic to the message itself, instead of
having different parts of the application handling their own logic.
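
A hedged sketch of the mechanism, assuming the new flag is a `logging_level` class attribute (the actual attribute name in the codebase may differ):

```python
import logging

class Message:
    # Default logging level for this message type; None disables logging.
    logging_level = logging.INFO

    def log(self, logger: logging.Logger):
        # The message decides how (and whether) it is logged, instead of
        # scattering that logic across the application.
        if self.logging_level is None:
            return
        logger.log(self.logging_level, str(self))

class VerboseStatusEvent(Message):
    # Noisy event: only visible when the application is configured in debug mode.
    logging_level = logging.DEBUG

class SensitiveResponse(Message):
    # Never logged.
    logging_level = None
```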
ZWaveJS has broken back-compatibility with zwavejs2mqtt when it comes to
the events format.
Only a partial representation of the node and value objects is
forwarded, and that's often not sufficient to infer the full state of
the node with its values.
The `_dispatch_event` logic has therefore been modified to accommodate
both implementations.
This means that we have to be conservative in order to preserve
back-compatibility and not over-complicate things, even if it comes
(slightly) at the expense of performance.
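
The conservative approach boils down to something like the sketch below; the payload structure is simplified and the helper is not the actual `_dispatch_event` code:

```python
def merge_node_event(cached_nodes: dict, event_node: dict) -> dict:
    """
    zwavejs2mqtt forwards the full node object, while newer ZwaveJS payloads
    may only contain a partial representation. Merging the (possibly partial)
    event payload on top of the cached copy lets the same dispatch logic work
    with both formats, at the cost of an extra copy per event.
    """
    node_id = event_node.get('id')
    merged = {**cached_nodes.get(node_id, {}), **event_node}
    cached_nodes[node_id] = merged
    return merged
```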
Since parenthesized context managers are only supported on very recent
versions of Python (thanks black for breaking back-compatibility), we
should still use the old multiline syntax - it's not worth breaking
compatibility with Python >= 3.6 and < 3.10 just to avoid typing a
backslash.
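
For reference, this is the difference in question (the example files are assumed to exist):

```python
# Parenthesized form: valid only on Python >= 3.10, so it's avoided here.
# with (
#     open('a.txt') as a,
#     open('b.txt') as b,
# ):
#     ...

# Backslash-continued form: works on Python >= 3.6 as well.
with open('a.txt') as a, \
     open('b.txt') as b:
    print(a.name, b.name)
```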
The most recent versions of ZwaveJS-UI don't send the `hexId` of the
node on node change events. We therefore have to infer it from the
reported `dbLink`.
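
A possible way to do that, sketched under the assumption that `dbLink` points to the zwave-js device database with the manufacturer/product hex IDs embedded in the URL; the exact URL and `hexId` formats are assumptions, not taken from the ZwaveJS-UI docs:

```python
import re
from typing import Optional

def hex_id_from_db_link(db_link: str) -> Optional[str]:
    # e.g. a link like https://devices.zwave-js.io/?jumpTo=0x0086:0x0002:0x0064:1.0
    # (hypothetical example) carries manufacturerId:productType:productId.
    match = re.search(
        r'(0x[0-9a-fA-F]{4}):(0x[0-9a-fA-F]{4}):(0x[0-9a-fA-F]{4})', db_link
    )
    return '-'.join(match.groups()) if match else None
```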
The parent->child relationship is now modelled on the database itself,
so we no longer need value names specifically formatted as
`[DeviceName] ValueName` - the UI will take care of it.
- Infer entity types on the basis of their semantic type (bool, decimal,
list) and read-only attribute (read-only bool is `BinarySensor`,
read-write bool is `Switch`, read-only decimal is `NumericSensor`,
read-write decimal is `Dimmer`, etc.) instead of trying to infer it
from the command class. Only a small set of command classes (like
configurations, vendor-specific or internal values) will be excluded.
This should greatly increase the number of supported values (see the
  sketch after this list).
- Added support for `EnumSwitch` entities.
- Added inference for illuminance and humidity sensors.
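
A minimal sketch of that mapping; the entity class names are the ones listed above, while the function name and the value schema are illustrative:

```python
from typing import Optional

def infer_entity_type(value: dict) -> Optional[str]:
    """Pick an entity type from the value's semantic type and read-only
    flag, instead of guessing it from the command class."""
    value_type = value.get('type')            # e.g. 'bool', 'decimal', 'list'
    read_only = not value.get('writeable', True)

    if value_type == 'bool':
        return 'BinarySensor' if read_only else 'Switch'
    if value_type == 'decimal':
        return 'NumericSensor' if read_only else 'Dimmer'
    if value_type == 'list' and not read_only:
        return 'EnumSwitch'
    # Excluded values (e.g. configuration, vendor-specific or internal)
    # simply don't get mapped to an entity.
    return None
```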
Adding the credentials ensures that tokens associated with non-existent
users, or users with an invalid password, won't be accepted, even if
they were correctly encrypted using the host's keypair.
This adds an additional layer of security in case the host's keypair
gets compromised.
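
Sketched below, assuming the token claims carry the username and password, and a hypothetical `authenticate_user` helper:

```python
def validate_token_claims(claims: dict, user_manager) -> bool:
    """
    Even if a token decrypts correctly with the host's keypair, reject it
    unless the embedded credentials still match an existing user.
    """
    username, password = claims.get('username'), claims.get('password')
    if not (username and password):
        return False

    # Hypothetical helper: returns the user only if it exists and the
    # password matches, None otherwise.
    return user_manager.authenticate_user(username, password) is not None
```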
PyJWT is a very brittle and cumbersome dependency that expects several
cryptography libraries to be already installed on the system, and it can
lead to hard-to-debug errors when ported to different systems.
Moreover, it installs the whole `cryptography` package, which is several
MBs in size, takes time to compile, and requires a Rust compiler to be
present on the target machine.
Platypush will now use the Python-native `rsa` module to handle JWT
tokens.
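
A self-contained sketch of the approach using the `rsa` package; the actual claims layout and token format used by Platypush may differ:

```python
import base64
import json

import rsa

# In the real application this would be the host's persisted keypair.
pub_key, priv_key = rsa.newkeys(2048)

def encode_token(claims: dict) -> str:
    # PKCS#1 v1.5 limits the plaintext to key_size/8 - 11 bytes (245 bytes
    # for a 2048-bit key), which is enough for a small claims dictionary.
    encrypted = rsa.encrypt(json.dumps(claims).encode(), pub_key)
    return base64.urlsafe_b64encode(encrypted).decode()

def decode_token(token: str) -> dict:
    decrypted = rsa.decrypt(base64.urlsafe_b64decode(token), priv_key)
    return json.loads(decrypted)

token = encode_token({'username': 'admin', 'expires_at': 1700000000})
assert decode_token(token)['username'] == 'admin'
```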
`UserManager.get_users` should not return a reference to the query
object, since the query object will be invalidated as soon as the
connection is closed.
Instead, it should directly return the list of `User` objects.
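
In other words (a sketch with a throwaway in-memory database; the actual `User` model has more columns):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    user_id = Column(Integer, primary_key=True)
    username = Column(String, unique=True)

engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

def get_users():
    with Session() as session:
        # Materialize the rows while the session is still open; returning
        # session.query(User) itself would hand the caller an object that
        # is invalidated as soon as the connection is closed.
        return list(session.query(User).all())
```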
- The `declarative_base` instance should be shared
- Database `session_locks` should be stored at module, not instance
level
- Better isolation of scoped sessions
- Encapsulated the `get_session` method in `UserManager`
- Don't publish a `get` request if the device has no exposed queriable
attributes.
- Perform the recursive build of the `get` request payload before
checking for the `access` attribute.
Changed from `type` to `category`, which is basically the `name_plural`
attribute of the associated entity type metadata.
This allows us to define distinct entity metadata entries that should
still share the same grouping - for instance, `temperature_sensor`,
`humidity_sensor` and `battery` should all be grouped under `Sensors` on
the frontend.
It may happen (usually because of a race condition) that a cronjob has
already been started, but it hasn't yet changed its status from IDLE to
RUNNING when the scheduler checks it.
This fix guards the application against such events. If they occur, we
should just report them and move on, not terminate the whole scheduler.
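
The guard amounts to something like this (names are illustrative):

```python
import logging

logger = logging.getLogger('platypush:cron')

def start_job_safely(job):
    """
    The job may have been started already even though it still reports an
    IDLE state (race between the scheduler loop and the job thread).
    Starting a thread twice raises RuntimeError: log it and move on rather
    than letting it kill the whole scheduler.
    """
    try:
        job.start()
    except RuntimeError as e:
        logger.warning('Could not start cronjob %s: %s', getattr(job, 'name', job), e)
```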
- Don't return a redirect to the login page if authentication fails over
  a JSON endpoint - instead, return a JSON payload with the error.
- Added support for additional fonts.
- Re-designed the login/registration page.
- Updated caniuse database.
- Using tidalapi's `UserPlaylist.add` and `UserPlaylist.delete` methods
instead of defining my own through `_api_request`, so we won't have to
deal with the logic to set the ETag header.
- Added `remove_from_playlist` method.
- Wrapped insert/update/delete operations in transactions
- Proper (and much more efficient) bulk logic
- Better upsert logic (a minimal sketch follows this list)
- Return inserted/updated records if the engine supports it
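
A hedged sketch of the bulk upsert pattern on SQLite (PostgreSQL exposes an equivalent `on_conflict_do_update`); the table layout is made up, and `RETURNING` support depends on the engine and driver versions:

```python
from sqlalchemy import Column, MetaData, String, Table, create_engine
from sqlalchemy.dialects.sqlite import insert

metadata = MetaData()
entities = Table(
    'entity', metadata,
    Column('external_id', String, primary_key=True),
    Column('name', String),
)

engine = create_engine('sqlite:///:memory:')
metadata.create_all(engine)

rows = [
    {'external_id': 'light:1', 'name': 'Living room'},
    {'external_id': 'light:2', 'name': 'Bedroom'},
]

with engine.begin() as conn:  # one transaction for the whole batch
    stmt = insert(entities).values(rows)
    stmt = stmt.on_conflict_do_update(
        index_elements=[entities.c.external_id],
        set_={'name': stmt.excluded.name},
    )
    # On engines that support it, .returning(entities) could be appended
    # here to get the inserted/updated records back.
    conn.execute(stmt)
```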
A `WebhookEvent` hook can now return a tuple in the format `(data,
http_code, headers)` in order to customize the HTTP status code and the
headers of a response.
When a client triggers a `WebhookEvent` by calling a configured webhook
over `/hook/<hook_name>`, the server will now wait for the configured
`@hook` function to complete and return its response back to the
client.
This makes webhooks much more powerful, as they can be used to proxy
HTTP calls or other services, and in general return something to the
client instead of just executing actions.
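
A hedged example of a user hook taking advantage of this; the decorator and module paths follow the usual Platypush hook pattern, but double-check them against the current documentation:

```python
from platypush.event.hook import hook
from platypush.message.event.http.hook import WebhookEvent

@hook(WebhookEvent, hook='ping')
def on_ping_webhook(event, **context):
    # Returned to the client that called /hook/ping as
    # (data, http_code, headers).
    return {'status': 'ok'}, 200, {'Content-Type': 'application/json'}
```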
This class handles runnable plugins that have their own asyncio event
loop, without the pain usually caused by the management of multiple
threads + asyncio loops.
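
This is not the actual class, but the pattern it encapsulates boils down to owning a private loop on a dedicated thread:

```python
import asyncio
import threading

class AsyncRunnable:
    """Owns a private asyncio event loop running on its own thread."""

    def __init__(self):
        self._loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._run_loop, daemon=True)

    def _run_loop(self):
        asyncio.set_event_loop(self._loop)
        self._loop.run_forever()

    def start(self):
        self._thread.start()

    def run_async(self, coro):
        # Schedule a coroutine on the private loop from any thread.
        return asyncio.run_coroutine_threadsafe(coro, self._loop)

    def stop(self):
        self._loop.call_soon_threadsafe(self._loop.stop)
        self._thread.join()

async def greet():
    return 'hello'

runner = AsyncRunnable()
runner.start()
print(runner.run_async(greet()).result())
runner.stop()
```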
- Added initial synchronization and users cache.
- Added loop to poll for new events (TODO: use websocket after the first sync)
- Added login, sync and join actions
This is useful for two reasons:
1. Slightly faster variable initialization times.
2. The cached variable object won't fail on the next `.get()`/`.set()`
if the `db` or `redis` plugins have failed for some reason.
The relevant clipboard monitoring logic has been moved to the
`clipboard` plugin. Thus, enabling the plugin should provide all the
features, with no need for an additional backend.
The polling logic has been moved to the `light.hue` plugin itself
instead, so it's no longer required to have both a plugin and a backend
enabled in order to fully manage a Hue bridge.
- Added support for lights as native platform entities.
- Improved performance by using the JSON API objects whenever possible
to interact with the bridge instead of the native Python objects,
which perform a bunch of lazy API calls under the hood resulting in
degraded performance.
- Fixed lights animation attributes by setting only the ones actually
supported by a light.
- Several LINT fixes.
Some plugins may represent entity IDs as integers, while the database
maps external IDs to strings. This may result in entities being
incorrectly mapped during merging. Casting to string prevents these
type-related ambiguities.
The cron scheduler has been made more robust against changes in the
system clock (caused by e.g. DST changes, NTP syncs or manual setting).
A more granular management for cronjob events has been introduced, now
supporting a `TIME_SYNC` event besides the usual `STOP`. When the cron
scheduler detects a system clock drift (i.e. the timestamp offset before
and after a blocking wait is >1 sec) then all the cronjobs are notified
and forced to refresh their state.
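
One way to measure the offset, sketched with illustrative names and the 1-second tolerance mentioned above:

```python
import time

CLOCK_DRIFT_TOLERANCE = 1.0  # seconds

def wait_and_detect_clock_drift(stop_event, poll_seconds: float) -> bool:
    """
    Returns True if the system clock drifted during the blocking wait
    (DST change, NTP sync, manual setting...), in which case every cronjob
    should be notified with TIME_SYNC and forced to refresh its state.
    """
    wall_before, mono_before = time.time(), time.monotonic()
    stop_event.wait(timeout=poll_seconds)
    wall_elapsed = time.time() - wall_before
    mono_elapsed = time.monotonic() - mono_before
    return abs(wall_elapsed - mono_elapsed) > CLOCK_DRIFT_TOLERANCE
```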
If Platypush is supposed to work even without a manually created
`config.yaml`, and the HTTP backend is enabled by default in that
configuration, then Flask and its companion packages should be among the
required dependencies.
The UI relies on these events upon refresh to detect if a device is
still reachable. Therefore, we shouldn't mask them even if we don't
detect any changes in the current entity configuration/state.
If an event comes from an entity that hasn't been persisted yet on the
internal storage then we wait for the entity record to be committed
before firing an event. It's better to wait a couple of seconds for the
database to synchronize rather than dealing with entity events with
incomplete objects.