How to assemble a split array coming over separate data payload messages

Hello,

Because the maximum packet size to send over MQTT is 256KB, I had to split my array of numbers into chunks that I am sending over MQTT in separate payload messages. My question is: how do I reassemble the split arrays into a single array once they are received on the Losant side?

Thank you!

@CHAIMAA_DRIOUECH,

Unfortunately, you still wouldn't be able to report more than 256KB of data to Losant.

But I'm curious (I did see your last post). Your data is coming from Node-RED. Could you describe your use case a little more? What is the data? What do you want to do with it once it's in Losant?

It doesn't solve your problem, but it is related: I would like to point you to the new Blob Attributes.

@anaptfox

In my use case, I am collecting raw data from my sensor, storing it in an array, and then sending it to Losant, which will pass it on to AWS for further processing. Because the size of my array exceeds 256KB in some cases (for example, one array is 49166 numbers long), I split it into chunks of 25000 numbers each, which for that example gives two chunks that I am now successfully sending over MQTT in separate messages. What I am looking for is how to get my two chunks back into one single array before sending it to AWS.
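Roughly, this is what the splitting looks like on my side in a Node-RED function node (the topic is a placeholder here, and I'm assuming the full array arrives on msg.payload):

// Node-RED function node: split the full array into chunks that fit
// under the MQTT packet limit.
const CHUNK_SIZE = 25000;
const data = msg.payload;   // the full array of sensor readings
const messages = [];

for (let i = 0; i < data.length; i += CHUNK_SIZE) {
   messages.push({
      topic: "losant/DEVICE_ID/state",   // placeholder topic
      payload: JSON.stringify({
         data: { chunk: data.slice(i, i + CHUNK_SIZE) }
      })
   });
}

// Returning [messages] emits each chunk as a separate message on output 1.
return [messages];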

@CHAIMAA_DRIOUECH,

Thanks so much for the explanation.

I have two routes for you.

  1. If the purpose of the workflow you’re building is just to accept values over MQTT, rebuild the payload, and forward it to AWS for more processing, would it be possible to send directly to AWS? It’s very common for our customers leveraging Lambda or other AWS services to integrate them into Losant in this manner.

  2. If you don’t want to report this value as device state, it should be possible to reassemble your payload in a workflow to send to AWS. More below.

Reassembling a Payload in a Workflow

There are a few things to consider here:

We don’t guarantee the order of execution for the Workflow Engine, so you’ll have to include an index with each chunk to keep track of them (and to restore order if that’s important). For example, each message’s payload could look like:

{
   "data": {
      "chunk": "[ DATA ]",
      "chunk_index": "122334"
   }
}

Then you can use Workflow Storage to store the chunks or retrieve stored chunks.

You can use the Function Node to reassemble the chunks (see the sketch after the next note).

You may have to use the Loop Node to support ordering.
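For example, assuming each chunk has been stored under its index and then pulled back onto the payload (e.g. with Storage: Get Value nodes) at a working path like working.chunks, a Function Node along these lines could stitch them back together. The paths and field names here are placeholders for this sketch, not a definitive implementation:

// Losant Function Node: reassemble stored chunks into a single array.
// Assumes earlier nodes placed the chunks on the payload at
// working.chunks as [{ chunk_index, chunk }, ...].
const chunks = payload.working.chunks;

// Message order isn't guaranteed, so sort by the reported index first.
chunks.sort((a, b) => a.chunk_index - b.chunk_index);

// Concatenate the sorted chunks into one flat array for the AWS call.
payload.working.assembled = chunks.reduce((all, c) => all.concat(c.chunk), []);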

Thanks for your response @anaptfox,

Unfortunately, the workflow storage didn’t work either. It fails with this error:
TooLargeError Size of Storage Value (144941) is greater than maximum allowed (16384).
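For reference, the numbers in that error give a rough idea of how much smaller each stored chunk would have to be (assuming the 16384-byte limit applies per stored value, and that the rejected value was one of my 25000-number chunks):

// Back-of-the-envelope sizing from the error message. The figures come
// from the error and my earlier chunking; the per-value limit
// interpretation is my assumption.
const rejectedBytes = 144941;   // size reported in the error
const maxBytes = 16384;         // maximum allowed storage value size
const numbersInChunk = 25000;   // numbers in the rejected chunk

const oversizeFactor = rejectedBytes / maxBytes;                        // ≈ 8.85
const maxNumbersPerValue = Math.floor(numbersInChunk / oversizeFactor); // ≈ 2825
const valuesNeeded = Math.ceil(49166 / maxNumbersPerValue);             // ≈ 18 stored values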

In this case I guess I will have to explore the other options.