No more notebook executions available at this time

Hi there, I am struggling to understand this problem.


Is there a way to queue them so they run when there is availability?

The problem then is to build additional logic to figure out which inputs to rerun those notebooks on…

It’s not that I am running 1000 notebooks; it ran only 5 of the 11 I required.


I also tried increasing the delay to the maximum of 59 seconds, but then I get a flow delay.

This is a crucial problem I would like to solve; otherwise, the entire benefit of using Notebooks is immaterial to me.

“No more notebook executions available at this time” can throw for one of two reasons …

  1. You have used your organization’s allotment of monthly notebook execution minutes, or
  2. More likely in your case, based on what I am seeing here, you are attempting to start an additional notebook execution while you are already at the maximum number of allowed concurrent executions.

Delay Nodes are unlikely to help you here. If you want to kick off another notebook execution immediately after one completes, you can use a combination of the Notebook: Complete Trigger (to know when a slot is available) and the Notebook: Execute Node (to start another execution).

If you want to maintain a queue of executions, you can do that either with a data table (where you query the table and, if a pending execution row exists, start that execution and then mark the row as in progress / delete it) or with workflow storage.
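To make that queue pattern concrete, here is a minimal Python sketch. Everything in it is a hypothetical stand-in: the in-memory `QUEUE` plays the role of your data table, and `get_next_pending`, the status update, and `execute_notebook` correspond to the Table: Get Rows, Table: Update Row, and Notebook: Execute Nodes. In Losant you would wire these up as workflow nodes behind the Notebook: Complete Trigger rather than write code:

```python
from typing import Optional

# Hypothetical in-memory "data table": each row is one queued execution.
QUEUE = [
    {"id": 1, "input_file": "report-a.csv", "status": "pending"},
    {"id": 2, "input_file": "report-b.csv", "status": "pending"},
]

def get_next_pending() -> Optional[dict]:
    """Stand-in for a Table: Get Rows Node filtered on status == 'pending'."""
    return next((row for row in QUEUE if row["status"] == "pending"), None)

def execute_notebook(input_file: str) -> None:
    """Stand-in for a Notebook: Execute Node."""
    print(f"Starting notebook execution for {input_file}")

def on_notebook_complete() -> None:
    """Runs whenever the Notebook: Complete Trigger fires, i.e. whenever a
    concurrency slot frees up."""
    row = get_next_pending()
    if row is None:
        return  # queue drained
    row["status"] = "in_progress"  # stand-in for a Table: Update Row Node
    execute_notebook(row["input_file"])

# Kick off the first execution manually; each completion then pulls the next row.
on_notebook_complete()
```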

Hello, yes, that is probably the reason. How many concurrent executions am I allowed?
My usage overall is pretty low in terms of minutes.

The standard for organizations is 5 concurrent notebook executions. Given this statement …

It’s not that I am running 1000 notebooks; it ran only 5 of the 11 I required.

My guess is that your organization is at 5 as well. I’ll verify and follow up if that is not accurate.


It is not clear to me how I can use the Complete Trigger. I basically have a loop where I execute many notebooks, each with different inputs.
Do you mean I should put the inputs in a queue first, then use the Complete Trigger to pull them off the stack and execute them one by one? How do I start the first one? I’d have to kick it off manually, right?
Some sort of example template would be useful for exploring this concept.

How often do you intend to execute these notebooks? Is it on a regular basis (e.g. once a week)? Do the results of one execution need to feed into the next execution?

There are a lot of ways we could approach this, so if we can back up and you can tell me more about the overall goal, I can come up with more specifics around how best to get these 11 notebooks running.

Hello Dylan,
thanks for taking the time. Here’s a brief description:
a) Users upload files into storage via a web page. I usually wake up to around 100 files at the end of every month, with notifications sent to my email.
b) Once I see the upload frenzy has settled, I log in and trigger the workflow, which is a loop that iterates through the table of file uploads and executes one notebook for each input file.

So, in summary: I don’t need real-time execution, the executions are totally unrelated, the order does not matter, and I can wait a day until they are all processed serially.

What I don’t want to keep doing is triggering the loop multiple times during the day until all the files are processed.

Also, keep in mind that the number of files will grow linearly as we onboard more customers.
At some point I will probably need a different solution, maybe in pure Python, but for now I want to keep it in Losant.

Looking forward to your suggestion!

Got it, that helps, thanks.

Have you considered doing this as a Resource Job? Based on what you’ve told me here, that sounds like the best course of action to take, assuming your notebooks can reliably finish executing within 15 minutes.

  1. Set up a resource job that iterates over the relevant rows in your data table.
  2. Have it run serially (one at a time) and give it a 15-minute timeout length.
  3. Set up an Application Workflow with a Timer Trigger that fires once a month (or on whatever interval you’d like). That Timer Trigger fires a Job: Execute Node that starts your Resource Job.
  4. In the workflow that manages the Resource Job’s behavior, connect a Job: Iteration Trigger to a Notebook: Execute Node that fires the relevant notebook for the current iteration (data table row). There’s a sketch of this serial pattern just below this list.
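Roughly, that flow could be sketched like this, in Python purely for illustration. `get_upload_rows` and `execute_notebook_and_wait` are hypothetical stand-ins for the data table rows, the Job: Iteration Trigger, and the Notebook: Execute Node, and the timeout handling is really the job’s own setting rather than anything you would code:

```python
# Hypothetical sketch of the serial Resource Job flow described above.
ITERATION_TIMEOUT_SECONDS = 15 * 60  # the job's maximum per-iteration timeout

def get_upload_rows() -> list[dict]:
    """Stand-in for the data table rows the Resource Job iterates over."""
    return [{"input_file": f"upload-{n}.csv"} for n in range(11)]

def execute_notebook_and_wait(input_file: str, timeout: int) -> None:
    """Stand-in for one iteration: the Job: Iteration Trigger fires a
    Notebook: Execute Node, and the iteration is cut off after `timeout`
    seconds if the notebook never completes."""
    print(f"Executing notebook for {input_file} (timeout {timeout}s)")

def run_resource_job() -> None:
    # Serial mode: each iteration starts only after the previous one
    # finishes, so you never use more than one concurrency slot.
    for row in get_upload_rows():
        execute_notebook_and_wait(row["input_file"], ITERATION_TIMEOUT_SECONDS)

run_resource_job()  # in Losant, the Job: Execute Node kicks this off monthly
```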

If that approach doesn’t work, you could still kick off a process automatically once a month with a Timer Trigger that fetches the iterations with a Table: Get Rows Node. You could fire off the notebook (or first few notebooks?) immediately using a Notebook: Execute Node and, using a Workflow Trigger Node, schedule the next batch for, say, 60 minutes later - and so on until you have gone through all iterations.
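Sketched the same way (hypothetical Python, with `schedule_in_minutes` standing in for the Workflow Trigger Node’s delayed re-invocation), that fallback looks like this; the batch size of 5 matches the standard concurrency limit:

```python
BATCH_SIZE = 5          # matches the standard concurrent-execution limit
RESCHEDULE_MINUTES = 60

def process_batch(rows: list[dict]) -> None:
    # Fire up to BATCH_SIZE notebooks now (Notebook: Execute Node) ...
    batch, remainder = rows[:BATCH_SIZE], rows[BATCH_SIZE:]
    for row in batch:
        print(f"Executing notebook for {row['input_file']}")
    # ... and if rows remain, schedule this same workflow to run again later.
    if remainder:
        schedule_in_minutes(RESCHEDULE_MINUTES, remainder)

def schedule_in_minutes(minutes: int, rows: list[dict]) -> None:
    """Stand-in for the Workflow Trigger Node; in Losant the re-trigger would
    actually happen `minutes` later, so here we just recurse immediately."""
    print(f"(would wait {minutes} minutes)")
    process_batch(rows)

# 11 inputs -> a batch of 5 now, 5 an hour later, and the final 1 after that.
process_batch([{"input_file": f"upload-{n}.csv"} for n in range(11)])
```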

This second approach is more work to set up, but probably more bulletproof, given the 15-minute maximum timeout length you’d be working against in the Resource Job case.


Finally, I don’t know if the “monthly” part is important here or if it’s just how often you want to think about this, but you could also run these more on-demand by using the Workflow Trigger Node to schedule a notebook execution for a few minutes later in the same workflow that issues the response to your user’s initial upload request.
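A compact sketch of that per-upload variant, again with hypothetical names (`handle_upload`, `schedule_notebook_in_minutes`); in Losant this would be an endpoint-triggered workflow that replies to the request and then hits a Workflow Trigger Node:

```python
def schedule_notebook_in_minutes(minutes: int, input_file: str) -> None:
    """Stand-in for the Workflow Trigger Node scheduling a later execution."""
    print(f"Notebook for {input_file} scheduled to run in {minutes} minutes")

def handle_upload(input_file: str) -> dict:
    # 1. Respond to the user's upload right away ...
    response = {"status": 200, "body": f"Received {input_file}"}
    # 2. ... and queue the heavy notebook work for a few minutes later.
    schedule_notebook_in_minutes(5, input_file)
    return response

print(handle_upload("report-a.csv"))
```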


Let me know if you have any questions about any of these approaches.

Wow, okay. I am studying Resource Jobs now and will report back here on how it goes.