Workflow debug persistence


#1

The debug feature is incredible. The only problem is that its data is not very persistent. For example, if you're testing a fairly complex workflow, like the create-user experience, and forget to also have the edit-workflow view open in a separate window, you have to run your entire create-user test all over again just to get the debug data.

Worse yet, say you remember to have the workflow open in a separate tab while running the flow, but then accidentally close the tab: all that valuable debug data goes away.
The same is true if you forget to link a node to the debug before running a test (which is easy to do in a complex workflow with many nodes): you have to run the test all over again with the tab open, and be sure not to close it before you've copied and pasted the debug output into a more persistent place like TextEdit.

Why not make a master debug that runs by default on every workflow and whose data persists across window closes? That way, users who want more granular debugging can still drag nodes onto it, but if they forget, at least some valuable data will stick around for a while.


#2

Workflow payloads are ephemeral on our end; we do not store them after the workflow run completes. (We do store some info if the workflow throws an error, but that would not be useful in your case.)

For your use case, the easiest option would be to drop a request out to a third-party logging platform like Loggly at the end of your workflow run and send the payload to that. We have a Loggly integration in the works as a custom node if you decide to use them.
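For illustration, if your workflow platform lets you run a custom code step, posting the run's payload to Loggly's HTTP event endpoint could look something like the sketch below. The customer token, tag, and payload shape are all placeholders; the actual send is left commented out so the sketch runs offline.

```python
import json
import urllib.request

# Placeholder values -- substitute your own Loggly customer token and payload.
LOGGLY_TOKEN = "YOUR-CUSTOMER-TOKEN"
url = f"https://logs-01.loggly.com/inputs/{LOGGLY_TOKEN}/tag/workflow-debug/"

# Hypothetical payload captured at the end of the workflow run.
payload = {"workflow": "create-user", "status": "complete", "data": {"userId": 123}}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the event
```

Loggly's generic HTTP/S event endpoint accepts a JSON body like this, so the same idea works from any language your platform's custom step supports.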

You could also write the payload to a row in a data table, but depending on how many payloads you generate, this could become problematic very quickly.
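As a rough sketch of the data-table route, here is the same idea using SQLite as a stand-in for whatever table storage your platform provides (the table and column names are made up). Each run appends one row holding the payload as a JSON blob, which is also why the table can balloon quickly under heavy traffic.

```python
import json
import sqlite3

# In-memory database for the sketch; swap in a file path to persist across runs.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS workflow_payloads ("
    "  id INTEGER PRIMARY KEY,"
    "  run_at TEXT DEFAULT CURRENT_TIMESTAMP,"
    "  payload TEXT"
    ")"
)

# Hypothetical payload from the end of a workflow run.
payload = {"workflow": "create-user", "data": {"userId": 123}}

conn.execute(
    "INSERT INTO workflow_payloads (payload) VALUES (?)",
    (json.dumps(payload),),
)
conn.commit()
```

One row per run is simple to query later, but note there is no pruning here; in practice you would want to cap the table or expire old rows.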


#3

Gotcha, that makes sense. I'll look into Loggly, thanks!