JSON to file weird output

Hi there,
I am building a simple web-to-file stream:

All I get in the file is a string:

[object Object]

Check the output here: https://files.onlosant.com/6253be33e5d19b4a80bb49b8/decoy/decoy.json

Okay, I needed to add the JSON encode block; I had assumed it would automatically serialize the objects.
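In hindsight that makes sense: [object Object] is just JavaScript's default string coercion at work. A minimal illustration:

    // Writing an object where a string is expected coerces it with String():
    const data = { temp: 72 };
    String(data);          // "[object Object]"
    JSON.stringify(data);  // '{"temp":72}', which is what the JSON encode block produces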
But now I've discovered another issue: how can I add new lines for each execution?
There should be an option called “append”.

At this time we do not have an option to append to an existing file in that node, but I will make a feature request to get that added.

As a workaround, you could …

  • Use an HTTP Node to fetch the existing file.
  • Use an Object Node, Array Node, Mutate Node, or Function Node (depending on the format of the current file, the data to be appended, and your comfort level with writing JavaScript) to append the new JSON manually; see the sketch after this list.
  • Use the File: Create Node to overwrite the old file with the manipulated data.
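For the middle step, a Function Node body along these lines could handle the append. This is only a rough sketch; it assumes the fetched file body lands at working.existing and the record to append is at working.newEntry, both hypothetical payload paths:

    // Losant Function Node: the current workflow payload is available as "payload".
    // Assumes the HTTP Node stored the fetched file body at payload.working.existing
    // and the record to append at payload.working.newEntry (hypothetical paths).
    var records;
    try {
      records = JSON.parse(payload.working.existing);
    } catch (e) {
      records = []; // first run, or the file does not contain valid JSON yet
    }
    records.push(payload.working.newEntry);
    // Serialized result for the File: Create Node to write back over the old file.
    payload.working.fileContents = JSON.stringify(records, null, 2);

If you would rather have one line per execution (newline-delimited JSON), you could instead set fileContents to payload.working.existing + "\n" + JSON.stringify(payload.working.newEntry).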

Yes, I thought about that workaround. Alternatively, could I add a new row to a database with the JSON serialized as a string type?

@Paolo_Proxy,

If by database you mean a Data Table, then the string type has a limit of 768 characters, which may not work for your use case.

Thank you,
Heath

Ah yes, agreed, that is a bit small for a JSON payload.

Losant Data Tables are designed for relatively static data sets, such as a table that maps fault codes to their descriptions and other metadata.

Data Tables are backed by a database optimized for “write light, read heavy” workloads. This means Data Tables can be queried at high rates, but you’ll notice scalability issues if you attempt to continually insert or update data.

If you do want to continually insert data into a database, the recommended approach would be to connect to a managed cloud DB on Azure, AWS, or Google Cloud.

For Azure, we’ve got a guide for using MSSQL.

For Google BigQuery, we’ve got the built-in GCP: BigQuery Node and a guide on using it as a Warm Storage solution.

For AWS, I’d recommend using the AWS Lambda Node to invoke Lambda functions, which in turn query data from something like AWS RDS.
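As a very rough sketch (not a drop-in implementation), a Node.js Lambda invoked by that node could write each workflow payload to an RDS MySQL table. The table name, the mysql2 package, and the environment variables below are all assumptions:

    // Hypothetical Node.js Lambda invoked by the Losant AWS Lambda Node.
    // Assumes an RDS MySQL instance, the "mysql2" package bundled with the
    // function, and DB_* environment variables; all names are illustrative.
    const mysql = require('mysql2/promise');

    exports.handler = async (event) => {
      const conn = await mysql.createConnection({
        host: process.env.DB_HOST,
        user: process.env.DB_USER,
        password: process.env.DB_PASSWORD,
        database: process.env.DB_NAME,
      });
      try {
        // Store the JSON sent from the workflow as a serialized string column.
        await conn.execute(
          'INSERT INTO readings (device_id, data) VALUES (?, ?)',
          [event.deviceId, JSON.stringify(event.data)]
        );
        return { ok: true };
      } finally {
        await conn.end();
      }
    };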

If the data you’re receiving is time series data and related to a device, then it should be stored on Device Attributes. Behind the scenes, all data reported to a device attribute is stored in a time-series database.

Thanks for the details, this is very informative.
The original idea behind the file-based JSON output was just a quick way to log the data before writing a proper workflow that stores it as device state.

That makes sense. You could look at Blob Attributes.

If you want to store an object from the payload as a blob attribute, you can do so with a single nested template in the Device State Node:

attributeName = {{encodeBase64 (jsonEncode data)}}
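For example, if data on the payload is the object { temp: 72 }, jsonEncode serializes it to {"temp":72} and encodeBase64 wraps that as eyJ0ZW1wIjo3Mn0=, which is the value written to the attribute.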

Behind the scenes, the actual blob data is securely stored in a cloud storage bucket. What’s put in the time-series database is a pointer to the location in the bucket.

In most cases, when you want to access the blob data, the platform generates a timed URL where it can be downloaded and accessed through whatever mechanism you require.

all data reported to a device attribute is stored in a time-series database. >> Can I retrieve the data stored in the time-series database? For how long is data kept in that database?
Is it a good solution for long-term data storage?

Yes, data can be exported as CSV. You can export data for a single device, or perform a bulk export for multiple devices.

That depends on the data retention period in your license. For Sandbox accounts, the data retention is 30 days. For licensed organizations, the default data retention is six months, though many customers choose to increase that to one year.

The time-series database is not a good solution for long-term data storage. Losant does support automatic data archiving to your own cloud storage bucket. Another option is to use a data warehouse such as Google BigQuery, or the equivalent on another cloud vendor. We’ve got a guide on using BigQuery here: