Tags: sql-server, testing, integration-testing, functional-testing, test-plan

Test plan for reporting system


I have a software suite that consists of multiple integrated software packages. They all run off of a single centralised SQL database.

We are in the stage where we are writing test plans and have allocated a single test plan for each independent module of the software. The only one left to write is the test plan for the reporting module. This particular module does nothing but run reports on the data in the SQL database (which will be written by the other modules).

Any testing iteration is preceded by developer, regression, and integration testing, which should rule out any issues with the database data not being maintained correctly.

My dilemma is how to approach the black box test plan for the reporting module. The way I see it there are three options:

  • Append the reporting test cases to the test plans for the modules that influence them (downside: the modules work together to produce the reports; reports cannot be divided up by module like that)
  • Write a test plan for reporting with specified pre-requisites that are essentially a list of tasks to perform in the other modules, followed by test cases that verify the reports are produced correctly in response to those tasks (downside: very complicated and long-winded)
  • Write a test plan for reporting that runs against fixed datasets on a dedicated, controlled SQL database, as in the sketch after this list (downside: lack of flexibility)
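
For concreteness, here is a minimal sketch of what an option-3-style test case could look like. It uses Python's built-in sqlite3 purely as a self-contained stand-in for the real SQL Server database, and the table and report (sales_orders, a monthly sales total) are hypothetical examples rather than anything from the actual suite:

```python
import sqlite3
import unittest


class MonthlySalesReportTest(unittest.TestCase):
    """Option 3: run a report against a small, fully controlled dataset.

    sqlite3 stands in for the real SQL Server database only so the sketch
    is self-contained; the table and report query are hypothetical.
    """

    def setUp(self):
        # Seed a dedicated database with a known, fixed dataset.
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE sales_orders (id INTEGER, order_date TEXT, amount REAL)"
        )
        self.db.executemany(
            "INSERT INTO sales_orders VALUES (?, ?, ?)",
            [
                (1, "2015-01-10", 100.0),
                (2, "2015-01-25", 250.0),
                (3, "2015-02-03", 75.0),  # outside the reporting period
            ],
        )
        self.db.commit()

    def tearDown(self):
        self.db.close()

    def test_january_total_matches_seeded_data(self):
        # The 'report' here is just a query; in the real suite it would be
        # whatever the reporting module executes for this report.
        (total,) = self.db.execute(
            "SELECT SUM(amount) FROM sales_orders "
            "WHERE order_date BETWEEN '2015-01-01' AND '2015-01-31'"
        ).fetchone()
        self.assertEqual(total, 350.0)
```

Because the dataset is fixed, the expected values can be hard-coded, which is exactly the flexibility trade-off noted above.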

It looks to me that the second option is the best. It's the most long-winded, but that alone is hardly a reason to discount it.

Does anyone have experience with testing a module that exists purely for reporting, and can hopefully provide some insight into the best or industry-standard ways to do this?

Thanks in advance!


Solution

A useful question to ask yourself is "What is the purpose of this test?" Check out the Agile Testing Quadrants for some detail on the role of different test types.

[Agile Testing Quadrants diagram (image credit: Lisa Crispin)]

Option 1 focuses on the integration points themselves, which may be valuable to the team because it can simplify the diagnosis of issues (given that only one module is exercised at a time), but it fails to test the system as it would practically be used. In this sense, it probably falls into Quadrant 2.

Option 2 focuses more on testing the system as it would be used in a real-world scenario, invoking multiple modules. You lose the easy diagnosis of issues from option 1, but you actually start to test it in a way that would be valuable to the end user, putting it in Quadrant 3.

Option 3 is basically a less flexible version of option 2. You're also losing a lot of the interaction with the individual modules that makes option 2 so valuable (in that it exercises the system as a whole). A sufficiently 'real-world-like' database could make this a Quadrant 3 option, but you're still losing flexibility.

Comparing options 1 and 2 through this lens, we can see that they serve different purposes. Certainly, they both execute a lot of the same code paths, but option 1 mainly supports the team, letting them know when a change to a specific module has broken the reporting, while option 2 serves to critique the product, asking whether it will actually work in a real-world scenario. The question now is which of those outcomes is more valuable to you, and that is really up to you and your team.
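
To make the shape of an option-2-style test concrete, here is a minimal sketch in which the pre-requisites are expressed as tasks performed through the other modules' interfaces and only the assertion touches the report. The create_order and run_monthly_sales_report calls are hypothetical stand-ins, stubbed in-line purely so the sketch runs; in a real plan each setUp step would be a documented pre-requisite task in the relevant module's own terms:

```python
import unittest

# Hypothetical stand-ins for the real modules: in the actual suite these
# calls would drive the order-entry and reporting modules themselves.
_ORDERS = []


def create_order(date, amount):
    """Stand-in for an order-entry task performed in another module."""
    _ORDERS.append({"date": date, "amount": amount})


def run_monthly_sales_report(year, month):
    """Stand-in for the reporting module's monthly sales total."""
    prefix = f"{year:04d}-{month:02d}"
    return sum(o["amount"] for o in _ORDERS if o["date"].startswith(prefix))


class MonthlySalesReportEndToEndTest(unittest.TestCase):
    """Option 2: perform tasks in the other modules, then verify the report."""

    def setUp(self):
        _ORDERS.clear()
        # Pre-requisites expressed as tasks in the other modules,
        # not as rows inserted directly into the database.
        create_order(date="2015-01-10", amount=100.0)
        create_order(date="2015-01-25", amount=250.0)
        create_order(date="2015-02-03", amount=75.0)  # outside the period

    def test_report_reflects_tasks_performed_in_other_modules(self):
        self.assertEqual(run_monthly_sales_report(2015, 1), 350.0)
```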

Hope that helps.