MQTT with Python

I am trying to call send_state from Python code as suggested, and I am having two issues.

Issue 1: I get a message like the one below. What does it mean?
Message throughput limit exceeded on /losant//state

Issue 2: Following the example Python code for MQTT, all I want to do is connect, send one MQTT JSON payload, and disconnect. I do not want to sit in a loop and send JSON on a timer. How can I do that?

By the way, I am running the Losant MQTT Python code using Python 3.

Can someone help and share an example?

Hello @Sreyams_Jain1,

The message you are seeing can occur if you are sending a large number of messages too quickly. Incoming and outgoing messages are each limited to 30 messages in a 15-second window (or, on average, two per second). The documentation on Resource Limits can be found here.
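If you do need to send repeatedly, the simplest way to stay under that limit is to pace your send_state calls. Here is a rough sketch based on the non-blocking pattern from the losantmqtt README; the credentials are placeholders and the half-second sleep is just an illustration of the two-messages-per-second average, not a value from the library itself:

import time
from losantmqtt import Device

# Placeholder credentials; substitute your own device ID, access key, and secret.
device = Device("my-device-id", "my-access-key", "my-access-secret")
device.connect(blocking=False)

while True:
    device.loop()  # service the MQTT connection
    if device.is_connected():
        device.send_state({"temp": 1})  # replace with your real reading
    time.sleep(0.5)  # about two messages per second, under 30 per 15-second window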

To change the example code to your requirements, all you will need to do is remove the loop and timer, and you will be able to customize the JSON to what you would like to send.

Let me know if I can help any further! :grinning:
Julia

Understood. Having gotten this message, do I need to do anything to reset the state of the device, or will it clear out after a while?

This is the code I am using that does not work. What am I doing wrong?

import time
from losantmqtt import Device
from datetime import datetime

# Construct device
device = Device("gateway device id", "key", "secret")

# Connect to Losant.
device.connect(blocking=False)

# Send once
if device.is_connected():
    now = datetime.now()
    datetime_object = now.strftime("%m/%d/%Y, %H:%M:%S")
    frame = "Sending Once"
    peopleCount = 1
    device.send_state({some json key:values})
    print("sent")

In the logs I see it connects and disconnects right away. No payload is being sent at all.
Is there any other issue going on here?

device.send_state is returning False. It connects and disconnects with a timeout.

Hello!

I was able to reproduce your problem and have found a solution. Here is my code:

from losantmqtt import Device
from datetime import datetime


device = Device("xxxxx", "xxxxx", "xxxxx")

def on_connect(device):

    now = datetime.now()
    datetime_object = now.strftime("%m/%d/%Y, %H:%M:%S")
    frame = "Sending Once"
    peopleCount = 1
    device.send_state({"temp":1})
    print("sent")



# Run on_connect once the device connects.
device.add_event_observer("connect", on_connect)

# Connect to Losant.
device.connect()

The above code will connect and send one state report to the specified device. You can add a close() if you'd like; otherwise the connection stays open until it times out.
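For reference, here is a rough sketch of that variation; calling close() inside the connect handler drops the connection once the single report has gone out (the "temp" attribute is just an example):

from losantmqtt import Device

device = Device("xxxxx", "xxxxx", "xxxxx")

def on_connect(device):
    device.send_state({"temp": 1})
    print("sent")
    device.close()  # disconnect once the single state report is sent

device.add_event_observer("connect", on_connect)
device.connect()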

Hopefully this helps!
Julia

Hi Julia,

I tried it exactly the way you suggested, as below.

send_state still fails with status = false.

FYI - the deviceId is a gateway ID and it has all the attributes.

Hi @Sreyams_Jain1,

I have just a couple of follow-ups so we can debug this further. I apologize if they seem obvious, but they have tripped me up before!

The first is to double check that you are using the correct deviceId, and that it is from the Losant platform. The deviceId can be found in the top right corner of the device page.


Secondly, could you provide a screenshot of your device log when it connects and disconnects? This can also provide some clues as to what the error is. Here is mine:

Thanks!
Julia

The code worked and sent the state. I understand it goes to the topic /losant/deviceid/state.
Once the message is sent, the device gets disconnected. I see this when I try to receive that message over MQTT (subscribing to the same topic) using tools like node-red.

Only one device (or gateway) can connect using that device ID. Any other connection will disconnect the original.

So if your node-red subscription is authenticating with the same ID as your Python code, that explains your disconnects.

So, are you saying I cannot have Python code sending MQTT messages and, at the same time, subscribe to them in node-red and build an application around them?

If so, what is the solution?

You would have a workflow that is triggered on a device state update and that then publishes the data via MQTT to a topic of your choice.

Your node-red instance would then subscribe to that topic.

If your Pi is co-located with node-red, the other way would be to run a mosquitto broker (configured as a bridge) on the local network.

The local Pi connects to the bridge and publishes through it. Your local node-red connects to the bridge and subscribes to the topic published from the Pi. Only the bridge connects to Losant.

We use this model (actually a variant) in all of our deployments. So local Python agents collect data and publish via the bridge, and anything local that needs this info also connects to the local broker.
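As a rough sketch of that local-broker option, assuming a mosquitto broker listening on the Pi at localhost:1883 and a made-up topic name (the bridge section of mosquitto.conf that forwards to Losant is a separate piece of configuration, not shown here):

import json
import paho.mqtt.publish as publish

reading = {"peopleCount": 1}  # replace with your real data

# Publish to the local broker; node-red subscribes to the same local topic,
# and the bridge forwards whatever topics it is configured to pass on to Losant.
publish.single(
    "sensors/peoplecount",          # hypothetical local topic
    payload=json.dumps(reading),
    hostname="localhost",
    port=1883,
)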

If the node-red device and the Pi are on different networks, then you need to use the first approach.

If I understand you correctly…

The first choice is to create a workflow that connects to MQTT to send the message. Does that mean my Python code should call an API to kick off that workflow with the data, and node-red subscribes?

Or, within node-red, if I call my Python code that is publishing the data and also have a subscribing node to fetch it, will that work?

Just setting device attributes can be a trigger. In the workflow, use the device state node to trigger the workflow.

You will need to define the MQTT topic that node-red will subscribe to using Integrations.

However, I am not too sure what you are really trying to achieve. It may be best to describe where node-red and the Pi are, their roles, and what your actual goal is.

On the Pi, I have Python code sitting in a loop that collects and sends some data via MQTT. On the same Pi, I need a node-red application that fetches that data and does other processing.

Both node-red and the Pi are on the same network.

Ok, I would approach this very differently. We have similar scenarios.

We have completely decoupled data collection from publishing. We have other local requirements like yours, plus the need to potentially publish data to other services. And, following the Unix philosophy, we use small tools that do one thing well.

Our approach has been to write a Python agent to collect data, log it to a file locally, and publish it via Redis.

We then run agents that subscribe to Redis channels and publish the data or do something else with it. In addition, if we have some other data source we wish to merge, we can have additional agents that publish to the same channel.

So our on-device topology looks like this:

   Python MODBUS agent (or some other protocol) collects data via polling
      \/
   Agent publishes to Redis
      \/
   Losant agent subscribes to Redis and receives the published payload
      \/
   Publishes via MQTT to Losant (typically via a mosquitto bridge)
If we want to do something else with the collected data (for instance, present it via MODBUS to the customer), we then have a second agent that subscribes to the same Redis channel and either writes to the customer's RTU or acts as a MODBUS device which the customer can read from.

Why do we take this approach? Each scenario is different for us (we don't have a product as such). From one project to the next, the equipment differs, quantities differ, etc. We go from a single MODBUS device to 80+ in a matter of weeks, along with multiple sensor/protocol/data sources representing a single device. So we need the flexibility of de-coupling and may only have half a day to deploy ;-(

In your case, to keep things simple, I would start by having your Python client read whatever data it needs and then publish to both Losant and your Node-Red at the same time. There is no point in sending the data to Losant and back again. This also means that if your outbound connection is down, the local node-red app still has the data.
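A rough sketch of that single-client approach; the topic, attribute name, and five-second interval are placeholders, and it assumes a local broker on the Pi for the node-red side:

import json
import time
from losantmqtt import Device
import paho.mqtt.publish as publish

device = Device("device-id", "access-key", "access-secret")
device.connect(blocking=False)

while True:
    reading = {"peopleCount": 1}  # replace with the real sensor read
    device.loop()
    if device.is_connected():
        device.send_state(reading)                # cloud copy for Losant
    publish.single("sensors/peoplecount",         # local copy for node-red
                   payload=json.dumps(reading),
                   hostname="localhost")
    time.sleep(5)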

Hopefully that all makes sense.

I do not understand your text in the last paragraph, especially when you say to publish to Losant and node-red at the same time and then say there is no point sending data to Losant and back again.

Given that your node-red instance and the Python code used to collect the data are on the same device:

You could in fact have node-red collect the data, send it directly to Losant, and do something else with it at the same time. That would be the simplest.

Or have your Python code publish the data to Losant (MQTT) and at the same time send it to the local node-red instance. You could do that a number of ways, including but not limited to a local MQTT connection, a local Redis connection via pub/sub, REST, etc.
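For the REST option, a minimal sketch; the http://localhost:1880/data endpoint is hypothetical and would correspond to an http-in node you set up in node-red:

import requests

reading = {"peopleCount": 1}
# POST the same payload to a node-red "http in" node on the Pi (hypothetical path).
requests.post("http://localhost:1880/data", json=reading, timeout=5)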

I don't see the point of sending the data off the Pi to Losant just to send it back to node-red running on the same device. But that is just my opinion :wink:

I understand your point.

But I have tried running node-red at one IP for subscribing and the Pi at another IP sending the data. That does not work either.