Sensor Correction strategy

Let’s pretend that I have a correction factor that needs to be applied to a sensor. And let’s pretend I have many sensors, all of which may have different correction factors.

And now I want to show these corrected values on a Time Series chart.

So I was thinking that I could put the correction factor into a Tag on each Device. Then I kind of got stuck. I don’t really want to store the corrected values, because at some point I might want to change the correction factor. What I really want is for the correction factor to only affect how the raw data is displayed on the chart. In a perfect world I was thinking that I could use the Attribute field on the Block configuration to create a formula that might look something like ‘attribute’ * {{Tag}}. Or that somehow Attribute could reference a virtual field. Or that there would be a new option to create a formula right there…

These approaches obviously aren’t going to work (for now). Any other creative ideas?


Adding an expression to the time series (and other) blocks is something we’ve been investigating for a while as a solution to this.

At the moment, the best solution is to use a workflow that stores the modified value, usually on a different attribute. You can then graph the modified attribute.
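For illustration, here is a minimal sketch of the logic such a workflow would implement, written as plain Python rather than as actual workflow nodes; the tag name `correctionFactor` and the attribute names `raw` and `rawCorrected` are hypothetical placeholders, not anything built into the platform:

```python
def apply_correction(device_tags: dict, state: dict) -> dict:
    """Apply a per-device correction factor to a raw reading.

    The factor lives in a device tag (hypothetical name: correctionFactor),
    so changing the tag only affects values corrected from then on.
    """
    factor = float(device_tags.get("correctionFactor", 1.0))  # default: no correction
    corrected = state["raw"] * factor

    # Report the corrected value on a separate attribute so the raw
    # reading is preserved and can be re-corrected later if needed.
    return {"rawCorrected": corrected}

# Example: a device tagged with correctionFactor = "1.05" reporting raw = 20.0
print(apply_correction({"correctionFactor": "1.05"}, {"raw": 20.0}))
# -> {'rawCorrected': 21.0}
```

The corrected attribute is then the one you point the Time Series block at.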

I was ‘afraid’ that might be the answer…for now. Thanks for the quick response.

I’m not too jazzed about storing a value like this, as you can well imagine. Too ‘persistent’ for something so ephemeral. :slight_smile:

This is a general issue of the Losant data model (see Device attribute metrics).

While it is good and helpful to have device tags to organize devices, it is absolutely necessary to have sensor meta information as well (which could likewise be stored in tags).

These may be factors or offsets, but what is more critical: sensor data are physical measurements that have:

  • a unit
  • a valid data range
  • a precision
  • a sensor id
  • a calibration date

Without this information it’s just a number, but you will recognize the difference if you think it’s Celsius when it is in fact Kelvin or Fahrenheit.
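As a rough illustration only, the kind of per-sensor metadata described above could look something like this; the field names are assumptions for the sketch, not an existing Losant construct:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SensorMeta:
    """Per-sensor metadata that gives a raw number its physical meaning."""
    sensor_id: str          # identifies the physical sensor, not the device
    unit: str               # e.g. "degC", "K", "Pa"
    valid_min: float        # readings outside [valid_min, valid_max] are suspect
    valid_max: float
    precision: float        # smallest meaningful increment
    calibration_date: date  # when the sensor was last calibrated
    correction_factor: float = 1.0
    offset: float = 0.0

    def correct(self, raw: float) -> float:
        """Apply factor and offset to a raw reading."""
        return raw * self.correction_factor + self.offset
```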

Another issue is that sensor topology and target topology are usually not the same. A sensor is part of a measuring device or gateway, so many sensors may be part of a single device (on the electrical-connection side). But sensors are connected to physical spaces too, so a sensor may belong to a duct, a plant, a room, or whatever.

So both sides are important. Topology mapping cannot be handled at the device level; it is a property of the sensors.
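To make the two topologies concrete, here is a small sketch (all names are illustrative only) in which the same sensors are keyed both to the gateway they are wired to and to the physical space they measure:

```python
# Electrical topology: which gateway/device each sensor is wired to.
sensor_to_device = {
    "temp-001": "gateway-A",
    "temp-002": "gateway-A",   # many sensors on one device
    "flow-001": "gateway-B",
}

# Physical topology: which space each sensor actually measures.
sensor_to_location = {
    "temp-001": "room-101",
    "temp-002": "supply-duct-3",
    "flow-001": "plant-cooling-loop",
}

# A single device-level tag cannot express this: the two mappings
# differ per sensor even when the sensors share a device.
```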

So, I think this is a general issue of the Losant data model that needs to be fixed sooner or later. It is absolutely necessary to have sensor tags in addition to device tags. And we should have some general option for data mapping (which is standard for most SCADA applications like BSCADA…).

Best regards
Eckehard


Brilliant!

You have taken this thread to a whole new level. I hope @Brandon_Cannaday and some of the other visionaries at Losant will weigh in on this discussion. Tags are a helpful construct, but the absence of sensor metadata is indeed a limitation that I hope Losant will address. Even the current implementation of Tags can be a bit awkward. For example, think about how in an Experience View, Users can belong to many Groups, but as far as I know a Device has nothing similar. Sure, you can add a Tag called “Group”, but then the ‘value’ you give it has no referential integrity and no comparable many-to-many relationship.


This is an area of interest to us as well.

With most of our equipment (4-20 mA) or values from an ECU we have valid ranges, for argument's sake 0-100% for engine load, or 0-5000 mm for a level sensor.

When the ECU is off we get -32768, and if a sensor fails we get the same value.

This is important, as it allows workflows to be triggered based on this behaviour.

However, it does mean that in some scenarios (for correct plotting over time) these values should count as 0 in average calculations rather than as the unconnected value of -32768. Currently, for accurate reporting calculations over time, the data must pass through workflows to mask the sentinel or set floor values.
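A sketch of the masking step such a workflow performs; the sentinel value -32768 is from the description above, and whether masked samples count as 0 or are dropped entirely is a choice left to the reader:

```python
SENTINEL = -32768  # reported when the ECU is off or a sensor has failed

def mask_sentinel(samples: list[float], as_zero: bool = True) -> list[float]:
    """Replace (or drop) sentinel readings so averages are not skewed."""
    if as_zero:
        return [0.0 if s == SENTINEL else s for s in samples]
    return [s for s in samples if s != SENTINEL]

readings = [42.0, -32768, 40.0, 41.0]
masked = mask_sentinel(readings)                    # sentinel counted as 0
dropped = mask_sentinel(readings, as_zero=False)    # sentinel removed entirely
print(sum(masked) / len(masked))     # 30.75
print(sum(dropped) / len(dropped))   # 41.0
```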

Scale and/or unit is the other scenario that pops up all the time: a lack of control over how field devices or specific instruments present values such as temperature, pressure, flow, etc.

We have to normalise these after the fact through additional workflows so that they can be presented in a consistent fashion.
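A minimal sketch of that kind of after-the-fact normalisation, assuming each source declares its unit; the unit names and conversion table here are illustrative, not part of any platform:

```python
# Convert incoming temperature values to a single canonical unit (degC).
TO_DEG_C = {
    "degC": lambda v: v,
    "K":    lambda v: v - 273.15,
    "degF": lambda v: (v - 32.0) * 5.0 / 9.0,
}

def normalise_temperature(value: float, unit: str) -> float:
    """Normalise a temperature reading to degrees Celsius."""
    try:
        return TO_DEG_C[unit](value)
    except KeyError:
        raise ValueError(f"unknown temperature unit: {unit!r}")

print(normalise_temperature(300.0, "K"))    # ≈ 26.85
print(normalise_temperature(98.6, "degF"))  # ≈ 37.0
```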

The concept of units would be very useful.

However, some systems, such as Cumulocity, demand typed values, which means everything must/should be normalised at the edge.

I am in two minds as to how/where this should occur.

Just thinking out loud here :wink:


Some model for creating database relationships of one-to-one, one-to-many and many-to-many would be a very useful addition for sorting these things out. Those relationships might be between devices (gateways) and devices (peripherals), or between devices and other… let's call them "user-constructed objects" (which could simply be the name of a thing stored in a data table, with a key creating a relationship to something else), such as the concept of a Plant, Facility or Machine.
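As a sketch of what such user-constructed relationships could look like if modelled by hand today (the table and column names below are purely hypothetical), a simple join table gives the many-to-many link between devices and objects:

```python
# "objects" table: user-constructed things like a Plant, Facility or Machine.
objects = [
    {"objectId": "obj-1", "type": "Plant",   "name": "North Plant"},
    {"objectId": "obj-2", "type": "Machine", "name": "Compressor 7"},
]

# Join table: each row links one device to one object, so a device can
# belong to many objects and an object can contain many devices.
device_object_links = [
    {"deviceId": "dev-100", "objectId": "obj-1"},
    {"deviceId": "dev-100", "objectId": "obj-2"},
    {"deviceId": "dev-200", "objectId": "obj-1"},
]

def objects_for_device(device_id: str) -> list[dict]:
    """Resolve all objects a device is related to via the join table."""
    ids = {link["objectId"] for link in device_object_links
           if link["deviceId"] == device_id}
    return [o for o in objects if o["objectId"] in ids]

print(objects_for_device("dev-100"))  # -> North Plant and Compressor 7
```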