Asynchronous Deferred Response Support #73

Open
jsetton opened this issue May 16, 2018 · 12 comments

@jsetton
Collaborator

jsetton commented May 16, 2018

@digitaldan - I have been working on adding asynchronous response support so we can send deferred responses for devices such as locks, reporting the accurate state once the action is complete.

Part of this integration is the need to send asynchronous responses to the Alexa Event Gateway, which requires an authentication token different from the one provided in the directive endpoint scope. That token is retrievable through the Alexa.Authorization directive sent to the skill when the "Send Alexa Events" skill permission is enabled and a user enables the skill on their account.

When handling the Authorization directives and getting the OAuth2 tokens from the LWA API, we are expected to store these tokens for each user so they can be used later, and to refresh them when necessary. This is the part I am looking for feedback on: how do we want to store this information? My first intuition was to just use a DynamoDB table. Another option, if we want to rely on each user's openHAB instance, would be to store that information in the cloud connector service config through the REST API.
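
For discussion, here is a minimal sketch of what the AcceptGrant handling could look like on the Lambda side, assuming a hypothetical `AlexaEventGatewayTokens` DynamoDB table, environment variables for the skill client credentials, and the standard LWA token endpoint; the schema and error handling are very much up for debate:

```typescript
import { randomUUID } from "crypto";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const LWA_TOKEN_URL = "https://api.amazon.com/auth/o2/token";
const TABLE_NAME = "AlexaEventGatewayTokens"; // hypothetical table name
const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Handles an Alexa.Authorization AcceptGrant directive: exchanges the grant
// code for LWA access/refresh tokens and stores them per user.
export async function handleAcceptGrant(directive: any) {
  const { grant, grantee } = directive.payload;

  // Exchange the authorization code for tokens; the skill client credentials
  // stay on the Lambda side (env variable names are placeholders).
  const res = await fetch(LWA_TOKEN_URL, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code: grant.code,
      client_id: process.env.SKILL_CLIENT_ID!,
      client_secret: process.env.SKILL_CLIENT_SECRET!,
    }),
  });
  if (!res.ok) {
    throw new Error(`LWA token exchange failed: ${res.status}`);
  }
  const tokens = (await res.json()) as {
    access_token: string;
    refresh_token: string;
    expires_in: number;
  };

  // Persist the per-user tokens so later asynchronous events can be authenticated.
  await ddb.send(
    new PutCommand({
      TableName: TABLE_NAME,
      Item: {
        userId: grantee.token, // hypothetical key choice
        accessToken: tokens.access_token,
        refreshToken: tokens.refresh_token,
        expiresAt: Date.now() + tokens.expires_in * 1000,
      },
    })
  );

  // Synchronous AcceptGrant.Response expected by Alexa.
  return {
    event: {
      header: {
        namespace: "Alexa.Authorization",
        name: "AcceptGrant.Response",
        payloadVersion: "3",
        messageId: randomUUID(),
      },
      payload: {},
    },
  };
}
```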

@digitaldan
Collaborator

Thanks @jsetton I'll take a look today!

@jsetton jsetton mentioned this issue Feb 25, 2019
@jsetton
Collaborator Author

jsetton commented Apr 1, 2019

@digitaldan So I created a separate branch with the changes I had originally implemented for deferred response support, on top of the latest code refactoring.

I was wondering if you had any additional thoughts on how we could get this feature, along with proactive reporting, supported by the skill. I still believe that we should have a binding that would handle most of the directive requests.

Regarding your concern about not exposing the skill client credentials, I was able to test successfully using the API Gateway as another way to interface with the skill Lambda function, allowing a prospective binding to request an access token that would be provided/refreshed by the Lambda function. However, this will probably increase the overall AWS usage, so I am not sure it is the best solution.

@digitaldan
Collaborator

Device tokens are valid for 30 mins? So if the binding only requested a token when it expired, I think this would greatly minimize the number of requests. Also if these requests are short-lived (1000ms or less) then the pricing is very low. That sounds awesome. The binding would not be hard; I wonder how a user would configure it? Would we match all Alexa tags for reporting? Let the user choose which items (already tagged) will be reported on?

@jsetton
Collaborator Author

jsetton commented Apr 2, 2019

> Device tokens are valid for 30 mins? So if the binding only requested a token when it expired, I think this would greatly minimize the number of requests.

It's valid for 60 minutes. So we could limit the number of requests if the binding stores the access token locally with its expiration time, or maybe only requests a new one when the current one is failing.
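
To make that concrete, here is a rough sketch of the binding-side caching (written in TypeScript purely for illustration, even though the binding itself would be Java), assuming a hypothetical token endpoint exposed by the Lambda function through the API Gateway; authentication for that call is omitted:

```typescript
// Caches the event gateway access token locally and only hits the (hypothetical)
// API Gateway token endpoint when the cached token is missing or close to expiring.
class EventGatewayTokenCache {
  private token = "";
  private expiresAt = 0;

  constructor(private readonly tokenEndpoint: string) {}

  async getToken(forceRefresh = false): Promise<string> {
    // Renew a few minutes before the 60-minute expiry to avoid border cases;
    // forceRefresh covers the "current one is failing" scenario.
    const nearExpiry = Date.now() > this.expiresAt - 5 * 60 * 1000;
    if (forceRefresh || !this.token || nearExpiry) {
      const res = await fetch(this.tokenEndpoint);
      if (!res.ok) {
        throw new Error(`token request failed: ${res.status}`);
      }
      const { access_token, expires_in } = (await res.json()) as {
        access_token: string;
        expires_in: number;
      };
      this.token = access_token;
      this.expiresAt = Date.now() + expires_in * 1000;
    }
    return this.token;
  }
}
```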

> Also if these requests are short-lived (1000ms or less) then the pricing is very low.

I am not concerned about the compute time per request but more about the number of them. Also, the API Gateway has its own pricing, although 1 million calls are covered by the free tier. If we go by 24 calls per user per day, that's about 720 per user per month on top of the Alexa directive request handling. It's hard to say, as I don't know how many users have the skill enabled, but I feel that the free tier won't be able to cover the compute resources. On the positive side, offloading the majority of the directive request computation to the binding should reduce the overall compute time, so we may end up at about the same level.

> I wonder how a user would configure it? Would we match all Alexa tags for reporting? Let the user choose which items (already tagged) will be reported on?

The first idea I had was to move most of the directive handling into the binding. So ultimately, the skill's Lambda function would become very basic, still supporting a minimal setup based on v2 tags as we discussed last year.

So basically, when the user enables the skill, it generates an Authorization directive that the Lambda function would handle: it would store the relevant data in its DynamoDB table and, in the process, provide the current access token back to the binding to be stored. For any other directive, the function would pass the request on to the binding, which would handle it and return the synchronous response to the skill.

Regarding item tracking, when the first discovery request is received, the binding would start tracking the items included in the response. So in terms of user configuration, there shouldn't be any difference from how we are doing it today.
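
Here is a rough sketch of the forwarding piece of this first approach on the Lambda side; the /alexa/directive path and configuration names are placeholders, and the directive's bearer token is reused to authenticate against the cloud connector:

```typescript
// Hypothetical base URL of the cloud connector; the real value would come from configuration.
const CLOUD_CONNECTOR_URL = process.env.CLOUD_CONNECTOR_URL ?? "https://myopenhab.org";

// Proxies a (non-Authorization) directive to the binding and returns its
// synchronous response, e.g. an Alexa.DeferredResponse for a lock command.
export async function forwardDirective(request: any) {
  const directive = request.directive;
  // The user's bearer token lives in endpoint.scope for most directives
  // and in payload.scope for discovery requests.
  const userToken =
    directive.endpoint?.scope?.token ?? directive.payload?.scope?.token;

  const res = await fetch(`${CLOUD_CONNECTOR_URL}/alexa/directive`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${userToken}`,
    },
    body: JSON.stringify(request),
  });
  if (!res.ok) {
    throw new Error(`binding returned ${res.status}`);
  }
  return res.json(); // synchronous response built by the binding
}
```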

The second one, closer to what you seem to have in mind, would be to keep the directive handling in the Lambda function and use the binding as an event trigger. In that approach, for deferred responses, the function would pass the relevant item to wait on to the binding, which would call the function back via the API Gateway when the event occurs. For proactive reporting, the function would provide the binding with a list of items to track during discovery requests, and the binding would call the function back the same way when an item state changes.

Comparing the two ideas, the second one will certainly require more compute time on the AWS side. This is why I think the first one would be better. What do you think?

Another aspect of asynchronous support I didn't mention is that proactive discovery reports can also be generated when users add or remove items.
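
For completeness, here is a sketch of the asynchronous piece itself: once the lock reports its final state, whichever component holds the event gateway access token would send an Alexa Response event carrying the original correlation token. The gateway URL below is the North America endpoint; the function name and parameters are illustrative:

```typescript
import { randomUUID } from "crypto";

const EVENT_GATEWAY_URL = "https://api.amazonalexa.com/v3/events"; // NA endpoint

// Sends the deferred (asynchronous) response to the Alexa Event Gateway once
// the device, e.g. a lock, has reached its final state.
export async function sendDeferredResponse(
  accessToken: string,
  endpointId: string,
  correlationToken: string,
  lockState: "LOCKED" | "UNLOCKED" | "JAMMED"
) {
  const event = {
    context: {
      properties: [
        {
          namespace: "Alexa.LockController",
          name: "lockState",
          value: lockState,
          timeOfSample: new Date().toISOString(),
          uncertaintyInMilliseconds: 0,
        },
      ],
    },
    event: {
      header: {
        namespace: "Alexa",
        name: "Response",
        payloadVersion: "3",
        messageId: randomUUID(),
        correlationToken, // ties the event back to the original directive
      },
      endpoint: {
        scope: { type: "BearerToken", token: accessToken },
        endpointId,
      },
      payload: {},
    },
  };

  const res = await fetch(EVENT_GATEWAY_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify(event),
  });
  if (!res.ok) {
    throw new Error(`event gateway rejected the event: ${res.status}`);
  }
}
```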

@digitaldan
Collaborator

> This is why I think the first one would be better. What do you think?

I agree #1 sounds better, but this will require porting a ton of logic into a Java binding. I wonder how long it would take for us to get that done? I'm not opposed at all, but I think a proof of concept would be good to judge the level of effort required. Unless you already have something, I can create a binding, borrowing from a few other bindings I have authored (or co-authored), and get something up maybe this weekend that would be able to parse items, be notified of item updates, and expose a new REST interface in OH. From there, if we could demonstrate supporting something simple, like a switch and/or dimmer, that would be helpful.

@jsetton
Collaborator Author

jsetton commented Apr 16, 2019

I meant to respond earlier but got sidetracked. That sounds like a good plan. 👍 I started looking into creating a binding based on that idea, but the learning curve just to get the standard functionality (item registry + REST interface) you mentioned above is pretty steep for me at the moment, and it would take me a while just to get to the interesting part 😄. If you provide the initial skeleton, I can certainly focus on adding the response logic based on what I have already implemented.

One aspect I also noticed is that we would need to update the cloud connector routes to expose the new interface. In terms of endpoints, I was thinking about the following (see the sketch after the list):

  • <baseURL>/alexa/directive (Directive requests)
  • <baseURL>/alexa/token (Token information to be stored when skill gets enabled)
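
Here is a very rough Express-style sketch of those two routes, just to pin down the contract; how the cloud connector would actually relay the calls to the user's openHAB instance (and on to the binding's REST endpoint) is left out, and both paths are still placeholders:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Directive requests: relayed to the connected openHAB instance, which answers
// with the binding's synchronous response (relay itself omitted in this sketch).
app.post("/alexa/directive", (req, res) => {
  res.status(501).json({ error: "relay to openHAB instance not implemented in this sketch" });
});

// Token information pushed when the skill gets enabled, to be stored by the binding.
app.post("/alexa/token", (req, res) => {
  res.status(501).json({ error: "relay to openHAB instance not implemented in this sketch" });
});

app.listen(3000);
```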

@digitaldan
Collaborator

@jsetton I would like to release our current v3 skill while we work on this feature. My one concern, however, is how we would support turning this feature on after we initially deploy v3. If we enable it later, I'm afraid users may need to disable and re-enable the skill, or at least reauthorize it, so we get the AcceptGrant for those existing users. The migration docs say this:

> Important: When a skill with permission to send Alexa events is certified, there is a backfill process that sends an AcceptGrant for every customer with your v2 skill enabled. Make sure that your system is ready for these messages and can successfully acquire and store authentication and refresh tokens for each customer.

One thought is that we turn this on now and store the tokens even though they will not be used right away.

@jsetton
Collaborator Author

jsetton commented May 11, 2019

Based on what you pasted from the migration docs, we should be fine actually. The skill currently doesn't have the permission to send Alexa events enabled. So if I understand correctly, whenever we enable that feature and it is certified/deployed, the skill would receive the backfill requests at that time. Also, you can submit a request to have these backfills regenerated if necessary.

However, the biggest concern is how we would get users on the current v3 to migrate to the new v3 with the binding. When we get to that point, do you think we will need to give users a deadline to do so, or should we keep supporting both setups?

@digitaldan
Collaborator

> When we get to that point, do you think we will need to give users a deadline to do so, or should we keep supporting both setups?

My concern is that we start down this path and delay v3 too long. I feel what we have now would be awesome for our community and people will really appreciate the new features. If we need to support this version longer term while both are available, I think that is fine. I'm also OK with setting a deadline for retirement after some reasonable amount of time.

Also, I just got a skeleton binding working; it listens for item change events and adds a "/rest/alexa" path using the built-in REST service. I'll post it here shortly.

@jsetton
Collaborator Author

jsetton commented May 11, 2019

> I feel what we have now would be awesome for our community and people will really appreciate the new features.

I totally agree. I was just laying out some decisions we will need to make once we release the current v3, keeping in mind the main constraint: the amount of AWS resource usage.

> Also, I just got a skeleton binding working; it listens for item change events and adds a "/rest/alexa" path using the built-in REST service. I'll post it here shortly.

👍

@jsetton jsetton self-assigned this Jun 8, 2019
@jsetton jsetton changed the title V3 Asynchronous Deferred Response Support Asynchronous Deferred Response Support Aug 19, 2019
@BoBiene

BoBiene commented Nov 28, 2020

@digitaldan & @jsetton thank you for your great work. 👍

Are there any plans to implement this, so there would be support for Alexa routines on motion sensors / door contacts?

@ggolec

ggolec commented Oct 5, 2024

Any change in the plans? We still can't use motion and contact sensors for triggering routines in the Alexa app.
