My biggest issue with testing sagas has always been that you largely end up with tests that mirror your implementation and therefore have to be updated with every logic change. The best approaches I've seen test that actions fire at some point as a result of the saga running, rather than testing that things happen in a specific order. There are cases where you do need to enforce order, but usually not.
For instance, if I have a saga that handles making an API request, I really need to test two things: 1) that the request is actually dispatched, and 2) that it fires an action to either store the data or set an error state, depending on the result.
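To make that concrete, here is a minimal sketch of that testing style using plain generators instead of redux-saga itself; the saga name, effect helpers, action types, and the `runSaga` runner are all hypothetical stand-ins for what redux-saga provides. The point is that the test asserts on which actions were dispatched, not on the sequence of yields:

```javascript
// Effect descriptors standing in for redux-saga's call()/put().
const call = (fn, ...args) => ({ type: 'CALL', fn, args });
const put = (action) => ({ type: 'PUT', action });

// Hypothetical saga: make a request, then dispatch success or error.
function* fetchUserSaga(api, userId) {
  try {
    const user = yield call(api.fetchUser, userId);
    yield put({ type: 'FETCH_SUCCESS', user });
  } catch (err) {
    yield put({ type: 'FETCH_ERROR', message: err.message });
  }
}

// Tiny runner: executes CALL effects, records PUT actions.
// Tests can then assert on *what* was dispatched, not on yield order.
async function runSaga(saga) {
  const dispatched = [];
  let result = saga.next();
  while (!result.done) {
    const effect = result.value;
    try {
      if (effect.type === 'CALL') {
        const value = await effect.fn(...effect.args);
        result = saga.next(value);
      } else if (effect.type === 'PUT') {
        dispatched.push(effect.action);
        result = saga.next();
      }
    } catch (err) {
      result = saga.throw(err);
    }
  }
  return dispatched;
}

// Test 1: the request actually fires (the fake api was called).
// Test 2: a success action is dispatched based on the result.
const api = { fetchUser: async (id) => ({ id, name: 'Ada' }) };
runSaga(fetchUserSaga(api, 1)).then((dispatched) => {
  console.log(dispatched.some((a) => a.type === 'FETCH_SUCCESS'));
});
```

Because the runner drives the saga to completion on its own, inserting an extra yield (a log, a select) between the call and the put does not break these assertions.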
If I later decide to pull the state in through a selector instead of passing it in the action payload, my test shouldn't fail, since how the saga gets the data it needs to run is irrelevant.
If I have to manually advance the saga to its next step by calling a function in my test, then the test will always enforce at least the number and order of yield statements, which again is irrelevant.
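The brittle style described above looks roughly like this (again a plain-generator sketch with hypothetical names, mimicking the step-by-step `gen.next()` pattern common in redux-saga tests). Every `.next()` call pins the test to one specific yield:

```javascript
// Effect descriptors standing in for redux-saga's call()/put().
const call = (fn, ...args) => ({ type: 'CALL', fn, args });
const put = (action) => ({ type: 'PUT', action });

// Hypothetical saga under test.
function* fetchUserSaga(api, userId) {
  const user = yield call(api.fetchUser, userId);
  yield put({ type: 'FETCH_SUCCESS', user });
}

const api = { fetchUser: (id) => ({ id }) };
const gen = fetchUserSaga(api, 1);

// Step 1 must be exactly the API call...
const step1 = gen.next().value;
console.assert(step1.type === 'CALL' && step1.fn === api.fetchUser);

// ...and step 2 must be exactly the success action. Insert a select()
// or a logging effect between these yields and this test breaks, even
// though the saga's observable behavior hasn't changed.
const step2 = gen.next({ id: 1 }).value;
console.assert(step2.type === 'PUT' && step2.action.type === 'FETCH_SUCCESS');
```

Each assertion here is really an assertion about the implementation's yield sequence, which is exactly the coupling the paragraph complains about.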
I wish I had a great answer to fix these issues, but what tends to happen is that your tests don't really catch bugs and have to be updated frequently, so the most they really offer is forcing you to review your implementation twice.