Exploratory testing is an effective technique for testing user stories and complements test automation well. But how do you report the results and coverage of exploratory testing? Enter Session-based Test Management (SBTM).

If you’re a tester and haven’t heard of test sessions before, you might be surprised to learn that you’re probably using them. That’s because test sessions aren’t new. They’re simply the periods of time, usually between 45 minutes and two hours, devoted to fulfilling a specific test objective. Each session has a scope (a test objective and a time period), called a 'charter' in SBTM.

Test Sessions are created and planned for individual User Stories, Bugs or Issues within JIRA. Typically a Story/Issue has many Test Sessions associated with it: a simple change might have one or two sessions, while a complex change has many sessions in accordance with its risk. This risk-based, flexible approach to the testing effort is what makes Exploratory Testing with Session-based Testing fit so well into Agile projects.

Step 1 - Planning

Creating sessions ready for Exploratory testing can happen during Sprint Planning, during development, or when the issue is ready for testing. We recommend that some initial planning takes place as early as possible, followed by a review of the Test Sessions before testing starts: are additional sessions required? Is a planned session still relevant?

A Session can be created from three locations in Behave Pro for JIRA:

  • The Test Session panel located on the right-hand side of an Issue
  • The Test Session section on an agile board's Issue Detail View
  • The Test Session board, showing all the sessions for a given sprint or kanban board column

Planning a session requires:

  • A Charter to describe the scope/mission of the Session.
  • A fixed Time period for the Session. This should be between 45 minutes and 2 hours.
  • A short Title that describes the session and its charter. This is handy when mentioning a session in a discussion.

You can assign the new session to a tester but we recommend assigning testers when a Story or Issue moves into the testing phase. This allows you to adapt to team member availability.
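The planning requirements above can be sketched as a small data model. This is a hypothetical illustration in Python, not Behave Pro's actual data model; the 45–120 minute bounds come from the recommended time box above, and the late-assignment default reflects the advice to assign testers only when testing starts.

```python
from dataclasses import dataclass
from typing import Optional

MIN_MINUTES, MAX_MINUTES = 45, 120  # recommended session time box

@dataclass
class TestSession:
    """A planned exploratory test session (simplified, illustrative model)."""
    title: str                       # short name, handy when discussing the session
    charter: str                     # scope/mission of the session
    minutes: int                     # fixed time period for the session
    assignee: Optional[str] = None   # assign late, when the Story enters testing

    def __post_init__(self):
        # Enforce the recommended 45 minutes to 2 hours time box
        if not (MIN_MINUTES <= self.minutes <= MAX_MINUTES):
            raise ValueError(
                f"Session time box should be {MIN_MINUTES}-{MAX_MINUTES} minutes"
            )

session = TestSession(
    title="Checkout performance",
    charter="Explore the checkout flow under slow network conditions",
    minutes=90,
)
```

Keeping the assignee optional at planning time mirrors the recommendation above: it lets you adapt to team member availability later.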

Charter Ideas

A good Session Charter isn't so precise that it's essentially a single test case, but it also shouldn't be so broad and vague that the session could never be completed within the assigned time period.

When you are new to SBTM it can be hard to know what sessions to plan. Try this simple technique: identify the important and relevant risks to the Story or Issue and to the application under test, and create a session for each of them:

  • Is the change or feature particularly sensitive to performance? If so, create a session on performance testing.
  • Is usability key to product success? If so, create a session on usability.
  • Have changes been made to existing high-value features? Any regressions could be costly, so create one or more sessions focusing on regressions.

A basic list of risks to consider: capability, reliability, usability, security, scalability, performance, installability (the software installation process) and compatibility. Remember, this list is just a starting point for session charter ideas.

Step 2 - Executing a Test Session

Opening a test session, by clicking the session title on an Issue or from the Test Session board, will open the session detail view. The session detail view has three panels:

  • The top panel contains all the information related to the session's charter. The charter can be edited until the session is Started.
  • The wide left-hand panel below the charter panel is the Session panel. This is where all the session execution activity takes place: starting/ending the session, taking notes and reporting defects.
  • The panels to the right of the Session panel contain metrics and reports about the session

Starting a session

Planning and tracking your testing effort with Test Sessions is important, so sessions have three self-explanatory states: Open, In progress, and Ended. The session status is visible within the Session panel.

The Session charter can be edited while the session is Open, but no notes or defects can be recorded until the session is In progress. So it's just a question of clicking Start session, then thinking up test ideas and executing them until the session's time has expired.
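The lifecycle rules above (charter editable only while Open, notes only while In progress) amount to a small state machine. Here is a minimal sketch in Python, purely to illustrate the rules; it is not Behave Pro's API.

```python
class Session:
    """Illustrative session lifecycle: Open -> In progress -> Ended."""

    def __init__(self, charter):
        self.status = "Open"
        self.charter = charter
        self.notes = []

    def edit_charter(self, charter):
        # The charter can only be edited before the session is started
        if self.status != "Open":
            raise RuntimeError("Charter can only be edited while the session is Open")
        self.charter = charter

    def start(self):
        if self.status != "Open":
            raise RuntimeError("Only an Open session can be started")
        self.status = "In progress"

    def add_note(self, text):
        # Notes and defects can only be recorded during execution
        if self.status != "In progress":
            raise RuntimeError("Notes can only be recorded while In progress")
        self.notes.append(text)

    def end(self):
        if self.status != "In progress":
            raise RuntimeError("Only a session In progress can be ended")
        self.status = "Ended"
```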

If a defect/bug is found it can be reported via Report defect in the Session panel. 

There are a number of advantages to reporting defects from within the session rather than creating regular issues in JIRA:

  • Report defect creates JIRA Issues but automatically selects the 'Defect' or 'Bug' Issue type for the user.
  • Records metrics about the number of defects found within a session
  • Displays traceability information on the reported defect, showing its parent Issue and the session it was found in.
  • If issue linking is configured the Issue under test can display a list of the defects found
  • Adds a note to the session activity linking to the reported defect
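Behave Pro handles this for you, but to illustrate what is involved, here is a sketch of the JSON bodies such a feature would send to JIRA's standard REST endpoints (`POST /rest/api/2/issue` to create the defect, `POST /rest/api/2/issueLink` to link it back). The project key, issue keys and the "Relates" link type below are made-up examples; your instance's configured issue types and link types would apply.

```python
def defect_payload(project_key, summary, issue_type="Bug"):
    """Body for JIRA's  POST /rest/api/2/issue  endpoint, with the
    Bug/Defect issue type pre-selected as described above."""
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": issue_type},
            "summary": summary,
        }
    }

def link_payload(defect_key, parent_key, link_type="Relates"):
    """Body for  POST /rest/api/2/issueLink  to trace the defect
    back to the Issue under test (requires issue linking to be configured)."""
    return {
        "type": {"name": link_type},
        "inwardIssue": {"key": defect_key},
        "outwardIssue": {"key": parent_key},
    }

# Example payloads for a hypothetical project "PROJ"
issue_body = defect_payload("PROJ", "Checkout fails on slow network")
link_body = link_payload("PROJ-42", "PROJ-7")
```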

Taking notes

A lot of testing can take place in the space of a couple of hours, so it's important to keep notes during the session. In particular, at the end of a session you need to write a short summary of the session's results and brief the product owner or team. Having your notes available makes writing an accurate summary easy and means you don't forget important items. In regulated environments where evidence has to be submitted, the session notes and attachments can also act as that evidence.

There are a variety of note types you may want to keep during a session, so Behave Pro allows you to add one or more labels to a note:

  • Ideas - For recording your test ideas before you execute them; this also records the types of tests you have conducted
  • Questions - Something isn't quite clear and you have a question to ask during the session debrief; it could be a possible bug or intended functionality.
  • Surprises - Something surprised you during the test session and needs further investigation later in the session, as a defect/bug might be hiding behind the surprise.
  • Issues and concerns - Something that doesn't warrant raising a defect but needs discussion during the debrief or further investigation during the session
  • Positives - Testing doesn't have to be negative; why not record and share the good points with the team.
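Because each note can carry one or more labels, pulling the questions, surprises and concerns back out for the debrief is straightforward. A minimal sketch, again hypothetical rather than Behave Pro's implementation, with lowercase label names chosen for illustration:

```python
from dataclasses import dataclass, field

# Illustrative label names based on the list above
LABELS = {"idea", "question", "surprise", "concern", "positive"}

@dataclass
class Note:
    text: str
    labels: set = field(default_factory=set)

def add_note(notes, text, *labels):
    """Append a note, allowing one or more known labels per note."""
    unknown = set(labels) - LABELS
    if unknown:
        raise ValueError(f"Unknown labels: {unknown}")
    notes.append(Note(text, set(labels)))
    return notes

def by_label(notes, label):
    """Collect note text for one label, e.g. all 'question' notes for the debrief."""
    return [n.text for n in notes if label in n.labels]
```

Filtering by label at the end of the session is what makes the summary and debrief cheap to prepare.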

Pro tip: If you find yourself spending too much time off charter and want to get back on track, take a note of the current diversion and create a new session charter for it. Observations and discoveries during a session often make interesting charters for future sessions.

Stopping a session

When you have finished testing, all that is left to do is click End Session. The user will then be prompted to write a short summary of the session, rate the quality of the work that was tested and the session coverage, and optionally record how their time was spent.

The summary, quality and coverage ratings are displayed next to the session on its Issue panel and the Test Session board, so colleagues can see the outcome of the testing and delve into the session for further details. The summary is only a high-level view of the session's results; a full debrief should be conducted to maximise the value of exploratory testing.

Step 3 - Debriefing a Test Session

A high volume of information is captured during a Test Session, and it needs to be passed back to the team in a compact and concise manner to be useful; this is where the debrief comes in. Debriefs don't just share information with the team, they also make the session and testing effort accountable to the team, and provide an opportunity to review and improve how sessions are conducted.

The debrief consists of the person who conducted the session summarising how the session went, the key information discovered and their opinion on quality, with the Product Owner, Scrum Master (if you practice Scrum) and possibly other team members.

Key areas for discussion during the debrief are:

  • Review the defects/bugs found during the session - these need to be triaged
  • Coverage within the session - were any areas missed due to time constraints?
  • Any risks or questions identified during the session - discussing these often dismisses them or creates new defect/bug reports
  • Positives about the application, system or software under test - testing doesn't just have to report the negatives :smiley:

Notice how all of this information is captured in the session notes, and the notes make it easier to run the debrief.

Debrief outcome

At the end of a session debrief a set of actions is usually decided upon. One could be that more work is required on the Story or Issue by the developer, particularly if defects/bugs were found or previously unknown risks have been identified.

One outcome from a session debrief could be the creation of another session. This is particularly true if the product owner or team are concerned about one of the risks identified during the session, or if the session's coverage was low.

Session improvement

It's often a good idea to have a fellow tester or test coach involved in a debrief; he or she can review the executed test ideas from the session notes and suggest alternative testing ideas and techniques for future consideration. You can think of this as running a retrospective on the Test Session.

What's next?

What we have discussed here is just the tip of the iceberg for Session-based Testing and the support Behave Pro offers for it.

Once you have got going with Session-based Testing, you might want to consider your workflow and configuration around reporting bugs:

  • Consider your triage process for bugs/defects reported during Sessions and how they fit into your team's "Definition of done". Typically this requires that all defects related to the story be fixed before the story is considered "done", and with Behave Pro's help you can enforce this in JIRA.
  • Set up more powerful traceability between reported bugs and their originating issue using Issue Links.

When Session-based testing is established within a team, you might want to build a dashboard using JIRA's powerful gadgets to surface trends and defects found during exploratory testing sessions.
