Every minute, I seem to disconnect from Structure’s broker. The device connection log reports that my client is already connected. I’m trying to determine whether this is on Node-RED’s side or Losant’s. Here are some logs:
Are you able to get the MQTT disconnect code from your client? The “already connected” message on our end is typically caused by an unclean client disconnect and then a reconnect before the TCP timeout window is exceeded.
I think I figured it out. The default “reconnection period” in Node-RED is 15000 ms; I reduced it to 1000 ms (the underlying MQTT client’s default) and we seem to be in business.
Nope. It does fine for a while, then the client goes into an infinite loop of disconnecting and reconnecting. I imagine it has something to do with what you’re saying, but I’m unsure how to debug it. The client is MQTT via Node-RED.
I’m fairly certain the success with reducing the reconnection period was a false positive, as this seems to be the opposite of what should happen. I’m going to increase the reconnection period to 60000 ms and see what happens.
I’m not sure how to get the disconnect code from Node-RED, but the underlying client has an error callback that will return the reason for the disconnect. As a test, I will run Node-RED locally and see if I can reproduce the issue.
So after some investigation on our side, we were able to reproduce the issue. It turns out that the default Node-RED keepalive time of 60 seconds (meaning the MQTT client sends a ping every 60 seconds) was longer than the TCP connection inactivity timeout of our load balancer (50 seconds). By default, the keepalive time for the Node MQTT module is 15 seconds, so we had not hit this issue using the straight Node MQTT module.
We have tweaked our MQTT load balancer settings to allow TCP inactivity of 90 seconds, so the default Node-RED settings now work without issue (I’ve had a Node-RED MQTT connection alive without disconnects for multiple hours now).
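To make the timing mismatch concrete, here is a quick sketch of the rule that was being violated (the helper name is made up for illustration; it isn’t part of Node-RED or Losant):

```javascript
// An idle MQTT client only generates traffic when it pings, so the
// keepalive interval must be shorter than the load balancer's TCP
// inactivity timeout, or the connection gets cut before a ping is sent.
function keepaliveIsSafe(keepaliveSec, lbInactivityTimeoutSec) {
  return keepaliveSec < lbInactivityTimeoutSec;
}

// Old settings: Node-RED pings every 60 s, load balancer cut idle
// connections at 50 s, so the ping always arrived too late.
console.log(keepaliveIsSafe(60, 50)); // false

// New settings: the load balancer allows 90 s of inactivity, so the
// default Node-RED keepalive is fine.
console.log(keepaliveIsSafe(60, 90)); // true
```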
Let us know if this fixes the issue for you as well!
I had tried reducing the keepalive time in Node-RED below 60 s, which should have fixed the problem, correct? (It didn’t seem to make a difference at the time.)
These are my settings, which I just modified to use the new domain name:
Anyway, so far so good, but the last time I thought it was working, it started crapping out after about 20 minutes. I’ll keep an eye on it.
Correct, a keepalive below 60 seconds should have fixed the problem previously (before we changed the load balancer settings).
You don’t need the “legacy MQTT 3.1 support” box checked, although I’m not sure that makes any difference.
According to the connection log for your device (which you can see on your device page in Losant), almost all of your recent disconnects are due to “Message throughput limit exceeded”. It looks like you are sending a large number of messages very quickly. For instance, in the most recent connection, the connection lasted about 43 seconds but the client sent 641 messages, which is well above the 2-per-second limit.
I’m running into a similar issue with my Node-RED instance sending payloads to Losant over MQTT.
I’m using some LoRa sensors. They are connected to Node-RED via HTTP, and Node-RED translates the payload for Losant and sends it.
It worked great using a single MQTT output node for one device. But since I’d like to have a single output for all my devices, just changing the topic to identify which device it is, it seems like a gateway with some peripherals would be perfect. Now, though, my MQTT node does not work properly and disconnects every 5 seconds or so.
I tried changing the keepalive to more or less than the stock 60 s, and it didn’t change anything.
Maybe I’m a bit confused by the gateway concept. The topic must be “losant/peripheralID/state” and the MQTT connection must use the gateway’s device ID, right?
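Here’s the pattern as I understand it, with placeholder IDs (someone please correct me if this is wrong): the connection authenticates as the gateway, while each state message goes out on the peripheral’s own topic.

```javascript
// Build the state topic for a peripheral, following the format
// mentioned above. The IDs below are placeholders, not real devices.
function stateTopic(peripheralId) {
  return `losant/${peripheralId}/state`;
}

const gatewayId = 'GATEWAY_DEVICE_ID';       // used as the MQTT client ID
const peripheralId = 'PERIPHERAL_DEVICE_ID'; // used only in the topic

console.log(stateTopic(peripheralId)); // losant/PERIPHERAL_DEVICE_ID/state
```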