Ok, I would approach this very differently. We have similar scenarios.
We have completely decoupled the collection of data from the publishing of it. We have local requirements like yours, plus the requirement to potentially publish data to other services. And, following the Unix philosophy, we use small tools that do one thing well.
Our approach has been to write a Python agent that collects data, logs it to a local file, and publishes it via Redis.
We then run agents that subscribe to Redis channels and publish the data or do something else with it. In addition, if we have some other data source we wish to merge, we can have additional agents that publish to the same channel.
So our on-device topology looks like this:

- Python Modbus agent (or some other protocol) collects data via polling.
- The agent publishes to Redis.
- The Losant agent subscribes to Redis, receives the published payload, and publishes via MQTT to Losant (typically via a mosquitto bridge).
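As a rough sketch of the polling side, the agent might look like this (the channel name, device ID, and register values are made up for illustration, and the Redis calls assume the `redis-py` client):

```python
import json
import time


def build_payload(device_id, registers):
    """Package one poll's register values as a JSON message for Redis."""
    return json.dumps({
        "device": device_id,
        "ts": int(time.time()),
        "registers": registers,  # e.g. {"40001": 123, "40002": 456}
    })


def main():
    # redis-py (and a Modbus client such as pymodbus) are assumed installed;
    # imported here so build_payload() stays testable without a server.
    import redis
    r = redis.Redis(host="localhost", port=6379)
    while True:
        # A real agent would poll the Modbus device here; values are faked.
        registers = {"40001": 123, "40002": 456}
        payload = build_payload("plant1-rtu", registers)
        r.publish("telemetry/plant1", payload)  # channel name is illustrative
        time.sleep(5)


if __name__ == "__main__":
    main()
```

Keeping the payload-building separate from the transport is what makes it cheap to bolt extra subscribers onto the same channel later.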
If we want to do something else with the collected data, for instance present it via Modbus to the customer, we then have a second agent that subscribes to the same Redis channel and either writes to the customer's RTU or acts as a Modbus device the customer can read from.
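A minimal sketch of that second agent, assuming the same illustrative channel and JSON payload shape as above (the Modbus-serving part is only noted in a comment, since that depends on your Modbus library):

```python
import json


def payload_to_registers(payload):
    """Decode a published payload into an integer register map, clamped to
    the 16-bit range a Modbus holding register can carry."""
    msg = json.loads(payload)
    return {int(addr): max(0, min(65535, int(val)))
            for addr, val in msg["registers"].items()}


def main():
    import redis  # redis-py assumed installed
    r = redis.Redis(host="localhost", port=6379)
    sub = r.pubsub()
    sub.subscribe("telemetry/plant1")  # same channel the poller publishes on
    table = {}
    for item in sub.listen():
        if item["type"] != "message":
            continue
        table.update(payload_to_registers(item["data"]))
        # A real agent would expose `table` through a Modbus server
        # (e.g. pymodbus) or write the values out to the customer's RTU.


if __name__ == "__main__":
    main()
```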
Why do we take this approach? Each scenario is different for us (we don't have a product as such). From one project to the next, the equipment differs, the quantity differs, and so on. We go from a single Modbus device to 80+ in a matter of weeks, along with multiple sensor/protocol/data sources representing a single device. So we need the flexibility of decoupling, and we may only have half a day to deploy ;-(
In your case, to keep things simple, I would start by having your Python client read whatever data it needs and then publish to Losant and your Node-RED instance at the same time. There is no point sending the data to Losant and back again. This also means that if your outbound connection is down, the local Node-RED app still has the data.
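That dual-publish could be sketched like so, assuming `paho-mqtt`, a local mosquitto broker feeding Node-RED, and Losant's documented `losant/<deviceId>/state` topic; the device ID, local topic, and reading are all placeholders:

```python
import json


def losant_state_message(data):
    """Build the topic and payload for Losant's MQTT state topic
    (losant/<deviceId>/state); the device ID here is a placeholder."""
    device_id = "my-device-id"  # replace with your Losant device ID
    return "losant/" + device_id + "/state", json.dumps({"data": data})


def main():
    # paho-mqtt v1 API shown; v2 additionally requires a callback_api_version
    # argument to Client(). Broker details and credentials are illustrative.
    import paho.mqtt.client as mqtt
    losant = mqtt.Client()
    # Losant auth (username = access key, password = access secret) omitted.
    losant.connect("broker.losant.com", 1883)
    local = mqtt.Client()
    local.connect("localhost", 1883)  # mosquitto instance Node-RED listens to

    data = {"temperature": 21.5}      # whatever your client just read
    topic, payload = losant_state_message(data)
    losant.publish(topic, payload)                 # cloud copy
    local.publish("sensors/temperature", payload)  # local copy for Node-RED


if __name__ == "__main__":
    main()
```

If the outbound link drops, only the `losant.publish()` is affected; the local copy still reaches Node-RED.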
Hopefully that all makes sense.