Direct Integrations with Underlying Time Series Database

From my understanding, Losant stores device state in an underlying time series database. For functionality that isn't natively supported by Losant, the documentation implies duplicating device state into another storage solution. If an external solution has an integration with whatever database Losant uses under the hood, is it possible to interface with Losant's time series database directly and avoid data duplication?

We don't have any plugins that integrate with the time series database directly, but there's all kinds of magic you can do through the workflow engine to feed data from your source of truth over to Losant.

For example, your physical hardware could publish its data to, say, an MQTT broker hosted on AWS instead of Losant. Inside of AWS, you could do any sort of processing and filtering you would like on that data and then feed that result back into Losant through a number of methods - an integration, directly publishing to our MQTT broker, a webhook, or through the Losant API.
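As an illustrative sketch of the last option (using the Losant API), here's roughly what pushing a processed reading back into Losant could look like. The application ID, device ID, and token are placeholders, and the exact endpoint path and payload shape should be checked against the Losant API docs; this is a minimal example assuming a REST device-state endpoint that accepts a `data` object of attribute values:

```python
import json
import urllib.request

# Placeholder credentials and IDs -- substitute your own values.
API_TOKEN = "YOUR_LOSANT_API_TOKEN"
APP_ID = "YOUR_APPLICATION_ID"
DEVICE_ID = "YOUR_DEVICE_ID"

def build_state_payload(attributes, timestamp_ms=None):
    """Build a device-state payload: a `data` object of attribute
    name/value pairs, plus an optional millisecond timestamp."""
    payload = {"data": attributes}
    if timestamp_ms is not None:
        payload["time"] = timestamp_ms
    return payload

def report_state(payload):
    """POST the payload to the (assumed) device-state endpoint."""
    url = f"https://api.losant.com/applications/{APP_ID}/devices/{DEVICE_ID}/state"
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example: a reading that was already filtered/aggregated in AWS.
payload = build_state_payload({"temperature": 21.7, "humidity": 48.2})
print(json.dumps(payload))
# report_state(payload)  # uncomment once real credentials are in place
```

The same payload construction works whether you send it over REST or publish it to the MQTT state topic; only the transport changes.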

Depending on the route chosen, that data could either be written directly to our TSDB or it could hit the workflow engine first, in which case it's easy to then feed it directly to the TSDB (using a Device State Node) or filter and process the data even further.
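As a generic illustration of that extra processing step (not Losant-specific code; the window size and sample values are made up), raw readings could be downsampled so only one summarized value per window is ever written to the TSDB:

```python
def downsample(readings, window):
    """Average raw readings in fixed-size windows, producing one
    summarized value per window."""
    out = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        out.append(sum(chunk) / len(chunk))
    return out

# Six raw samples become two stored values (one per 3-sample window).
raw = [20.1, 20.3, 20.2, 24.9, 25.1, 25.0]
print(downsample(raw, 3))
```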

All of this raises the question: What functionality are you seeking in Losant that we do not support? I may be able to either provide a workaround or prioritize a needed feature based on your feedback.

Thanks for the quick feedback. That is a good point about utilizing services that are stateless (in the sense of device state) before data hits Losant's services. This would still involve duplication for what I'm considering, though, because we may want to extract process information from historical data.

What motivated the original post was doing trend mining, pattern matching, and correlations, and adding annotations to certain trends in process data so that other users can see issues that have already been addressed. There are solutions on the market that address this and integrate with historians and tools like InfluxDB.

Another area is the limitation on batch data processing via notebooks. My experience may be a little out of date here (it was about a year ago that I was working on this), but complex batch analysis was hindered by the time constraints on the notebook runtime. These batch processes tend to use weekly to monthly historical data, and as the number of devices scales up, notebooks stop being viable.

Thanks, that’s helpful. We do have some changes to our underlying time series database in the works that will enable some new, long-desired features towards the end of the year, and knowing what our users are trying and failing to do currently definitely helps us prioritize what we’ll tackle first.

As for notebooks, only 0.02% of requested executions have timed out in the past 90 days, so I'm hoping whatever issues you ran into in the past are resolved. To avoid duplicating your earlier work, I suggest giving that feature another try. We also occasionally publish new base images that include additional third-party packages that may help you - and if you come across any you'd like us to support, please let us know, as we are very open to adding them and publishing new images.