Workflows and TDD

Hey all!

I’m looking for feedback/advice on implementing automated testing in workflows (app, experience, and edge) for test-driven development. I’ve been toying with this a little, and here’s what I’ve come up with:

  • Break workflows up into microservices: keep them small and easy to test (stateless and functional whenever possible)
  • Use Virtual Button and Timer Triggers to regularly run sample payloads through the workflow (essentially a recurring assertion)
  • Use a Validate Payload Node, Conditional Node, Switch Node, and/or Function Node to evaluate whether the payload is what it should be (the test condition)
  • If the test fails, use a Function Node with a throw statement to raise an exception in the workflow if you need the workflow to actually “fail”; otherwise just pass a success/failure flag to the next step (see the sketch after this list)
  • Optionally, use an MQTT Node to push the failure to an event-handler workflow, which could create and manage events automatically
  • You can also use the Workflow Error Trigger to “catch” raised exceptions centrally and push them to an event handler
  • You could then use the API to roll back versions when a test fails, or just notify developers so they can address it
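
For the throw-based check, the Function Node body could be as simple as something like this. The working.expected and working.actual paths are just placeholders for wherever earlier nodes put the expected and actual values:

```javascript
// Function Node body: compare the actual result against the expected result.
// Assumes earlier nodes placed them at these (hypothetical) payload paths:
//   payload.working.expected - the expected output for this test case
//   payload.working.actual   - what the workflow under test produced
const expected = payload.working.expected;
const actual = payload.working.actual;

const mismatches = Object.keys(expected).filter(function (key) {
  return actual[key] !== expected[key];
});

if (mismatches.length > 0) {
  // Throwing makes the workflow run error out, which a Workflow Error
  // Trigger elsewhere can catch; alternatively, just set a flag and let
  // the next node branch on it.
  throw new Error('Test failed for keys: ' + mismatches.join(', '));
}

// Record the pass so downstream nodes (MQTT, event creation, etc.) can see it.
payload.working.testPassed = true;
```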

In this way, I think you could implement unit testing in Losant, at least for workflows. Experience pages and components would need a workflow built specifically to test them; otherwise you’ll be out of Losant and into other testing DSLs (Robot Framework, for example). I suspect this approach is also the most likely route for acceptance testing: for APIs you could simulate it easily enough, but it’s much harder for UIs.
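
For the API side, an acceptance test run outside Losant could be a small Node script hitting an experience endpoint. This is just a sketch; the URL, token, and expected response shape are placeholders:

```javascript
// Minimal acceptance-test sketch for an Experience API endpoint, run outside
// Losant with Node 18+ (global fetch). The URL, token, and expected body are
// placeholders - adjust to your own experience.
const assert = require('node:assert');

async function testDeviceStateEndpoint() {
  const res = await fetch('https://example.onlosant.com/api/devices/state', {
    headers: { Authorization: 'Bearer <experience-user-token>' }
  });

  assert.strictEqual(res.status, 200, 'expected a 200 from the endpoint');

  const body = await res.json();
  assert.ok(Array.isArray(body.devices), 'expected a devices array in the response');
}

testDeviceStateEndpoint()
  .then(() => console.log('PASS'))
  .catch((err) => { console.error('FAIL:', err.message); process.exit(1); });
```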

Would love to hear feedback and thoughts!

@Lenny_Convis:

Automated workflow testing is actually on our tentative roadmap; it’s something we’ve been kicking around for a while and are about to start defining, probably next quarter. So we certainly welcome any ideas you have.

As to your suggestions, they are pretty close to what our in-house solutions team has implemented as a workaround framework - though most of their testing happens in application workflows and not edge workflows.

  • Whenever flows are updated, a “master workflow” is fired with a Virtual Button Trigger.
  • That workflow then fires a series of other workflows using the Workflow Trigger Node, potentially with a custom payload passed in. (You could also build an array of payloads and use a Loop Node to fire each payload at the same workflow; see the sketch after this list.)
  • For each fired-off workflow, there is some sort of flag set after its Virtual Button Trigger (usually via a Mutate Node) to add a property to the payload indicating this is a test run. This is useful if you want to avoid certain side effects such as the creation of devices or the editing of users (though you need to guard against those cases individually).
  • Pass/fail is almost always done with a Validate Payload Node, though simpler cases can be tested with a Conditional Node or a Switch Node.
  • Failure cases usually log an event, typically through the false path of one of the nodes mentioned above, though you can surface failures any way you want.
  • Your idea of throwing an exception within a Function Node and catching that in a Workflow Error Trigger would also work; we haven’t really adopted that internally because the suite described above was built out before we created that trigger.
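
If you go the custom-payload route, a Function Node in the master workflow could build the cases for the Loop Node along these lines. The field names (isTestRun, input, expected) are just illustrative, and the flag plays the same role as the property the Mutate Node adds in our setup:

```javascript
// Function Node body in the "master workflow": build the test cases that a
// Loop Node will iterate over, firing the workflow under test once per case
// via a Workflow Trigger Node. Paths and field names here are hypothetical.
payload.working = payload.working || {};
payload.working.testCases = [
  {
    isTestRun: true, // flag checked by the target workflow to skip side effects
    input: { temperature: 72, unit: 'F' },
    expected: { alert: false }
  },
  {
    isTestRun: true,
    input: { temperature: 120, unit: 'F' },
    expected: { alert: true }
  }
];
```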

As for rolling back changes in the event of a failure, I recommend making use of application workflow versions and experience versions for snapshotting working functionality, which allows you to quickly roll back in the case of an error (switching application workflows’ default version, or pointing your experience domain to a specific version).
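
If you do want to automate the rollback itself, something like the following sketch using the losant-rest Node.js client is one option. The resource/method names and the defaultVersion field are from memory, so treat them as assumptions and confirm against the current API documentation before relying on this:

```javascript
// Rough sketch: roll a workflow back to a known-good version via the Losant
// API using the losant-rest Node.js client. Method names and the
// defaultVersion field are assumptions - verify against the API docs.
const api = require('losant-rest');

const client = api.createClient({ accessToken: process.env.LOSANT_API_TOKEN });

async function rollBackWorkflow(applicationId, flowId, knownGoodVersion) {
  // Point the workflow's default version back at the last known-good snapshot.
  await client.flow.patch({
    applicationId: applicationId,
    flowId: flowId,
    flow: { defaultVersion: knownGoodVersion }
  });
  console.log('Rolled ' + flowId + ' back to version ' + knownGoodVersion);
}

// Hypothetical IDs and version name - replace with your own.
rollBackWorkflow('myAppId', 'myFlowId', 'v1.4-known-good')
  .catch((err) => console.error('Rollback failed:', err.message));
```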


Dylan, this is great feedback. Thank you for taking the time to put together this thorough write-up. I’ll consider it further and share any good implementations I end up with.

Thanks!