429 Too many requests?

I have a Losant Cloud workflow powering an API endpoint. I have no users of this particular endpoint besides myself, so this error makes no sense to me.
I have a small Vue front end, and I noticed 429 errors. I replicated the request in Postman and got the same intermittent result.
I can run up to 20 sequential API requests with no concurrency happening. The failure is random; it isn't tied to a specific count, but occasionally one request comes back with a 429 and this payload:
{
  "error": "Too many concurrent requests"
}
I don’t set that payload or error in any of my workflows, and no debug messages are fired for the workflow. The platform just drops the request.
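For reference, here is a rough sketch of the kind of strictly sequential loop that reproduces it for me (the endpoint URL is a placeholder for my real experience endpoint):

```ts
// Fire requests one at a time and log any 429s.
// The URL below is a placeholder, not my actual endpoint.
const ENDPOINT = "https://example.onlosant.com/my-endpoint";

async function reproduce(count = 20): Promise<void> {
  for (let i = 1; i <= count; i++) {
    const res = await fetch(ENDPOINT); // awaiting each call means no concurrency
    if (res.status === 429) {
      console.log(`request ${i}: 429`, await res.text());
    } else {
      console.log(`request ${i}: ${res.status}`);
    }
  }
}

reproduce();
```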

Since I’m not doing anything concurrently, I assume this must refer to something bigger than the endpoint I am testing against. I have looked over the docs, and I don’t see a limit related to Experience APIs.

Can you help me understand what is causing this error?

Losant will return a 429 error for an experience endpoint if there are too many in-flight experience workflows executing - even if the flow has already replied using an Endpoint Reply Node. We first send requests to an overflow queue, but if too many of those pile up, we fall back to returning a 429.

@Dylan_Schuster - Thanks for the response, that makes sense, and I understand you may need to throttle the platform. At the same time, I’d like to understand more about how I can avoid this.

What number is too many requests?
Is it documented somewhere?
Can I see my usage somewhere?
Is this limit per experience workflow, or a count across all executing experience workflows?
Is it just experience workflows, or do Edge / Cloud workflows count?
Does it grow linearly as devices or endpoint APIs are added to the system? Or is there one size for all?

I only had 5 devices online, so this concerns me as I am in the process of scaling. They do regularly interact with the experience APIs, since I built my backend on them.
Thanks,

An organization-owned application may run up to 30 experience workflows concurrently (per application). A slot is held for the entire execution time of the workflow - not just until you reply to the endpoint request.

If you exceed that, any new requests are kicked over to an overflow queue, and that is where they wait until one of your application’s 30 concurrency slots frees up.

If requests are getting added to the overflow queue faster than we can process them, then eventually we start issuing 429 responses instead of queueing more requests.
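If your own backend or devices fire bursts of experience requests, one way to stay under the limit is to cap how many you have in flight on the client side. A rough TypeScript sketch (the cap and function name are placeholders, not Losant settings or APIs):

```ts
// Rough sketch of a client-side cap on in-flight requests.
// MAX_IN_FLIGHT is a placeholder value, chosen to stay well under the 30-slot limit.
const MAX_IN_FLIGHT = 10;

let active = 0;
const waiting: Array<() => void> = [];

async function limitedFetch(url: string): Promise<Response> {
  // Wait until a slot is free; re-check the count after every wake-up.
  while (active >= MAX_IN_FLIGHT) {
    await new Promise<void>((resolve) => waiting.push(resolve));
  }
  active++;
  try {
    return await fetch(url);
  } finally {
    active--;
    waiting.shift()?.(); // wake one queued caller, if any
  }
}
```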

In your particular case (I sent a DM about this yesterday), you have added some Delay Nodes to your experience workflows that are holding those concurrency slots open for an additional 5 seconds per request, which is enough to start sending requests to the overflow queue and eventually overwhelm it. If you can tell me what purpose those Delay Nodes are serving, I may be able to offer a more performant alternative.
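As a rough back-of-the-envelope check (assuming the Delay Node dominates each run's execution time): 30 slots each held for about 5 seconds works out to roughly 30 / 5 = 6 requests per second of sustained throughput, and anything above that rate starts piling into the overflow queue.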

Thank you, that’s good info to know, and explains what I’ve been seeing.

The Delay Node was a poor experiment/workaround that needs to be reworked. I’ve reached out via DM, as it relates to another workflow issue/limitation outside the scope of this topic. I think if we can solve that issue, the delay goes away, and this will be a non-issue for me.
Thanks!