At the current time, I don’t believe we have enough users or devices for our plan to exceed the tag limit. However, I’m considering, for the future, a solution for storing user alerting preferences for each device they have access to.
I’m thinking that the best way to approach this would be to store a tag on each device containing the ID of a data table that holds all of that device’s users’ stored preferences, similar to a foreign key.
My main question is: With the current structure and storage limitations, would this be the best approach to handle the scalability of storing user preferences?
I don’t think I would store any information on the device or the user object. Instead, since it sounds like what you want to store is per-user, per-device preferences, I would use a single data table with columns for experienceUserId and deviceId; other columns in the table could then store whatever those key-value preference pairs are.
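To make the table design concrete, here is a sketch of what a single row might look like. The experienceUserId and deviceId columns come straight from the suggestion above; the remaining column names (notifyBySms, notifyByEmail, temperatureThreshold) are purely illustrative stand-ins for whatever preferences your application actually needs:

```javascript
// Hypothetical shape of one row in the preferences data table.
// experienceUserId + deviceId together identify the row; the other
// columns (names are illustrative) hold the per-user, per-device prefs.
const exampleRow = {
  experienceUserId: "user-123",   // the Experience User these prefs belong to
  deviceId: "device-456",         // the device these prefs apply to
  notifyBySms: true,              // illustrative preference column
  notifyByEmail: false,           // illustrative preference column
  temperatureThreshold: 80        // illustrative numeric preference
};

console.log(exampleRow.experienceUserId); // "user-123"
```

One row per (user, device) pair keeps every preference lookup a simple query on those two columns.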
Then, when it comes time to save a user’s preferences, it’s a table upsert for the row matching the experienceUserId and deviceId.
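The upsert-on-a-composite-key idea can be sketched in plain JavaScript. The real work would be done against the Losant data table itself; this in-memory Map version just illustrates the "update if the (experienceUserId, deviceId) row exists, insert if it doesn't" behavior:

```javascript
// In-memory sketch of the upsert: rows are keyed on the composite
// experienceUserId + deviceId, so saving preferences twice for the
// same user/device pair updates the existing row instead of adding one.
const table = new Map();

function upsertPreferences(experienceUserId, deviceId, prefs) {
  const key = `${experienceUserId}:${deviceId}`;
  const existing = table.get(key) || { experienceUserId, deviceId };
  table.set(key, { ...existing, ...prefs }); // merge new prefs over old
  return table.get(key);
}

upsertPreferences("user-123", "device-456", { notifyBySms: true });
upsertPreferences("user-123", "device-456", { notifyBySms: false }); // overwrite
console.log(table.size); // 1 — same composite key, so still one row
```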
As for sending alerts, the Device State Trigger could look up all rows matching the deviceId; iterate over them to see, given the preferences, which users should receive a notification; and then send those notifications. This last bit could get messy as your application achieves a greater scale, but some of those processes can be passed off to separate workflow runs using the Workflow Trigger Node.
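The fan-out to separate workflow runs could look something like this: after fetching all preference rows for the reporting device, split the matching users into batches, each of which could be handed to its own run via the Workflow Trigger Node. The batch size and row shape here are assumptions for the sake of the example:

```javascript
// Split an array of preference rows into fixed-size batches so each
// batch can be passed to a separate workflow run rather than processed
// (and its notifications sent) inside the triggering run.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Stand-in for rows returned by a data table query on deviceId.
const rows = Array.from({ length: 250 }, (_, i) => ({
  experienceUserId: `user-${i}`,
  deviceId: "device-456",
}));

const batches = chunk(rows, 100); // e.g. 100 users per spawned workflow
console.log(batches.length); // 3 batches: 100 + 100 + 50
```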
The main thing to watch out for is the possibility of the flow timing out, as a single workflow run can only last 60 seconds maximum. Theoretically, at scale:
- One device state report fetches hundreds, maybe thousands, of data table rows (one for each user with a preference matching the device ID).
- A Function Node (the most efficient option in this scenario) determines which users need a notification given the returned preferences and the attribute value(s) in the state report.
- If, say, a hundred users need a notification, that’s 100 requests to Twilio, SendGrid, a third-party API, or whatever service you are using to alert your users. This is where you run a timeout risk.
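The filtering step above could be sketched as Function Node logic like the following. The preference column names (notifyBySms, temperatureThreshold) and the payload layout are assumptions for the example; in a real workflow they would match your table schema and payload paths:

```javascript
// Given the preference rows fetched for the device and the attribute
// value from the state report, decide which users should be notified:
// only users who opted in AND whose threshold was crossed.
const payload = {
  data: {
    rows: [
      { experienceUserId: "user-1", notifyBySms: true,  temperatureThreshold: 75 },
      { experienceUserId: "user-2", notifyBySms: false, temperatureThreshold: 75 },
      { experienceUserId: "user-3", notifyBySms: true,  temperatureThreshold: 90 },
    ],
  },
  state: { temperature: 82 }, // value from the device state report
};

payload.working = {
  usersToNotify: payload.data.rows
    .filter((r) => r.notifyBySms && payload.state.temperature >= r.temperatureThreshold)
    .map((r) => r.experienceUserId),
};

console.log(payload.working.usersToNotify); // ["user-1"]
```

Here user-2 opted out and user-3’s threshold wasn’t crossed, so only user-1 would be handed off to the notification step.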