Hi,
Good morning. I am trying to collect workflow errors and device log entries and send them to MQTT.
The left-hand workflow is the one collecting errors for the right-hand workflow.
Below are the parameters for Flow: Errors
Below are the parameters for Device: Get Log Entries
The timestamps for the errors are the same for each run. I need help with this: are they supposed to be the same, or am I missing something here?
Thanks in advance.
From our documentation on workflow metrics:
Note: Workflow Metrics do not update in real time. Rather, they are aggregated and reported approximately every 15 minutes.
This is true for the run statistics as well as the errors. We note this in the user interface for workflow errors as well:
This section contains a sampled subset of errors produced by this Workflow. Timestamps are approximate (within 15 minutes).
So that would be why the timestamps are the same: what we're returning is not the actual timestamp at which an error occurred, but an aggregated timestamp rolled up into 15-minute increments.
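To illustrate why multiple errors end up sharing a timestamp, here is a rough sketch of 15-minute bucketing. This is not our actual aggregation code, just the general idea:

```python
from datetime import datetime, timezone

def bucket_15_min(ts: datetime) -> datetime:
    """Floor a timestamp to the start of its 15-minute bucket."""
    return ts.replace(minute=ts.minute - ts.minute % 15, second=0, microsecond=0)

# Two errors that occurred about ten minutes apart...
a = datetime(2024, 5, 1, 8, 2, 17, tzinfo=timezone.utc)
b = datetime(2024, 5, 1, 8, 11, 45, tzinfo=timezone.utc)

# ...both report the start of the same bucket as their timestamp.
print(bucket_15_min(a))  # 2024-05-01 08:00:00+00:00
print(bucket_15_min(b))  # 2024-05-01 08:00:00+00:00
```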
Thanks. So what are the best parameters for Flow: Errors and Device: Get Log Entries?
Should the duration be a multiple of 15 minutes (for both), and should the timer also fire on a multiple of 15 minutes, for better results?
Device: Get Log Entries is not aggregated into 15-minute buckets; the timestamps on the state report objects returned by that endpoint represent the time the device state was reported - or the value of the time
property when reporting device state - down to the millisecond.
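For example, a state report a device publishes typically carries its own time value, and that is what Get Log Entries reflects. The payload below is only illustrative; treat the exact field names as an assumption:

```python
import json
import time

# Illustrative state report; the "time" value - epoch milliseconds at the moment
# of reporting - is what the Get Log Entries timestamps reflect. The field names
# here are an assumption made for the sake of the example.
state_report = {
    "time": int(time.time() * 1000),
    "data": {"temperature": 72.4, "humidity": 38.1},
}

print(json.dumps(state_report))
```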
As to your question about “better results”, I guess it depends on what exactly the use case is here. You seem to be collecting workflow errors for this very workflow, plus device state reports for a specific device, and then publishing them to an MQTT topic? And from what I've seen in your application, that is being received by an edge workflow, possibly deployed to the same device you are fetching state reports for, where you are trying to record it to a Postgres database that's on the same device?
Your right-side Timer Trigger is fetching workflow run statistics (average time, successful run counts, error run counts), also for itself, and then trying to report those to an MSSQL database using connection info stored on device tags, but there is no device on the payload for it to reference?
All of that said, you could change your timers to fire only every 15 minutes, but if you run them on 15-minute intervals (e.g. 8:00, 8:15, 8:30, 8:45), the aggregation job for calculating workflow errors probably would not have completed by then. So however you construct this, it should be resilient to the previous bucket's data already having been fetched, and also to a bucket possibly being missed, as the timing of the trigger and the errors aggregation job may not always align.
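One pattern that handles both cases is to fetch a window that overlaps the previous run and then deduplicate by a stable key before forwarding anything. Here is a minimal sketch in Python; the fetch_errors function is a stand-in and the record shape (time plus message) is an assumption:

```python
from datetime import datetime, timedelta, timezone

# Stand-in for the real fetch (for example, the output of a Flow: Errors node
# handed to a Function Node). The record shape ("time", "message") is an
# assumption made for this sketch.
def fetch_errors(start: datetime, end: datetime) -> list[dict]:
    sample = [
        {"time": datetime(2024, 5, 1, 8, 0, tzinfo=timezone.utc), "message": "Timeout"},
        {"time": datetime(2024, 5, 1, 8, 15, tzinfo=timezone.utc), "message": "Bad payload"},
    ]
    return [e for e in sample if start <= e["time"] < end]

seen: set[tuple] = set()  # persist this between runs in a real workflow

def run_once(window_end: datetime) -> list[dict]:
    """Look back 30 minutes (the 15-minute timer interval plus one extra bucket
    of overlap), then drop anything that was already forwarded."""
    fresh = []
    for err in fetch_errors(window_end - timedelta(minutes=30), window_end):
        key = (err["time"].isoformat(), err["message"])
        if key not in seen:
            seen.add(key)
            fresh.append(err)
    return fresh

# Two consecutive timer fires: the overlap re-reads the 08:00 bucket, but the
# dedup key keeps it from being forwarded twice.
print(run_once(datetime(2024, 5, 1, 8, 10, tzinfo=timezone.utc)))  # 08:00 bucket
print(run_once(datetime(2024, 5, 1, 8, 25, tzinfo=timezone.utc)))  # 08:15 bucket only
```

The overlap guarantees a late-finishing aggregation job cannot cause a bucket to be skipped, and the dedup key makes re-reading a bucket harmless.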
I am working on a dashboard that will be used for device diagnostics. The right-side workflow is a "testing" flow that generates errors to populate a database with sample data. The left-side one is sending those errors to an edge workflow and into a Postgres database.
So, how do I avoid capturing duplicate data or missing data? Any sample flow would be of great help!
So is your goal to display a dashboard that shows workflow errors for a specific edge compute device, and for that dashboard to only be available on that device’s local network?
If so, the closest thing I have to an example is our guide on How To Visualize Your Data at the Edge With Losant and InfluxDB, which walks through running InfluxDB in a separate Docker container; piping telemetry data over to that container; and visualizing it on a dashboard. Your use case sounds similar, except you’d want to fire data over to the container out of the Workflow Error Trigger instead.
I would read that guide and see if that is what you are looking for. You could at least use it as a starting point.
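Since you are already landing these records in Postgres on the device, another way to make duplicate fetches harmless regardless of timer alignment is an idempotent insert. Here is a sketch assuming psycopg2 is available on the edge side; the table name, columns, and connection values are all placeholders:

```python
import psycopg2

# Connection details and table/column names are placeholders for this sketch.
conn = psycopg2.connect("dbname=diagnostics user=losant password=secret host=localhost")

with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS workflow_errors (
            bucket_time timestamptz NOT NULL,
            flow_id     text        NOT NULL,
            message     text        NOT NULL,
            PRIMARY KEY (bucket_time, flow_id, message)
        )
    """)
    # Re-inserting the same bucket on a later run becomes a no-op instead of a duplicate row.
    cur.execute(
        """
        INSERT INTO workflow_errors (bucket_time, flow_id, message)
        VALUES (%s, %s, %s)
        ON CONFLICT DO NOTHING
        """,
        ("2024-05-01T08:00:00Z", "my-flow-id", "Timeout"),
    )
conn.close()
```

With a unique key on the bucket timestamp plus the flow and message, it no longer matters if the same bucket is fetched twice by the workflow upstream.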