General discussion


Roles in Test Planning & Execution

By vicki ·
I am experiencing some problems with the quality of testing on my project. I am asking for your help in a discussion to explore how we can improve. How does your organization work?

In our organization the tester is responsible for the planning, a test outline to demonstrate what they will test, the actual test cases, test execution, and finally a test summary. Testers are included in project plans and discussions from day one, given ample education on the products and business processes they support, and given advice on scenarios to test for; in some cases, specific data is even developed to match those scenarios.

The problem is that despite all of the above assistance, the quality of initial test cases, the adequacy of the test data developed, and the thoroughness of test execution do not meet expectations. It often feels as if the test team missed the mark on what the product is trying to accomplish. The team also fails to understand the importance and value of a good regression test and the means to ensure product integration. As a result, testing tasks are very slow to be completed, there is much rework in the test process itself, and system errors are often caught by systems analysts, technical writers, and in the worst case, customers rather than the testers.

In trying to think "How can we do better?" I've begun to question the roles in our team for testing. Specifically, are we putting too much responsibility on the testers? Should systems analysts write the test outlines, should technical writers develop the test cases, should product champions do integration and acceptance testing, and should testers literally only test the system?

This conversation is currently closed to new comments.

7 total posts (Page 1 of 1)  


Testing Roles

by BFilmFan In reply to Roles in Test Planning & ...

Most test lab personnel are not qualified to write engineering tests. The engineer or architect who designed the product should write the test cases; with their advanced knowledge, they understand the test scenarios needed to confirm product functionality.

I've seen very few shops where the test personnel were writing the actual test cases.

If your engineers can't write test cases that a lab person could run and verify, I recommend that you get a team leader onboard with that experience to teach them.


That's a hard one

by Tony Hopkinson In reply to Roles in Test Planning & ...

Testing has to be done all the way through.
Your first cases come from the requirements.
There should be a whole suite of amplifications from the BAs.
Then there will be more enhancements from the designers/developers, particularly during rounds 2 .. n, when impact assessments from code/design changes can be done. That's just your test team.

Then there's unit testing and basic integration testing.

To me it sounds like you are doing the throw-it-over-the-partition manoeuvre. You can get much better results from integrating the test team, short sharp iterations, and rigidly clear functional test cases.

When a test fails early, assess whether it's a show stopper (i.e. there's no point in continuing until there is a fix) or whether there is still some use to be gained from carrying on. Bouncing a version back because a caption is spelt wrong, without bothering to check whether the information it annotates is complete rubbish, will annoy the heck out of a developer. As will raising issues for each incorrect calculation on a page when it picked up the wrong input data.
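That triage rule can be sketched in a few lines; the field names and severity labels here are invented for illustration, not taken from any real tracker:

```python
# Hypothetical triage sketch: decide whether a failed test should bounce
# the build back to development or just be logged while testing continues.

def triage(failure):
    """Classify a test failure using the rule described above."""
    if failure["blocks_further_testing"]:
        return "show stopper: return build for a fix"
    if failure["cosmetic"]:
        return "log it: keep testing the behaviour underneath"
    return "log it: continue the test round"

# A misspelt caption shouldn't stop anyone checking the data it annotates.
print(triage({"blocks_further_testing": False, "cosmetic": True}))

# A crash on login really does block everything downstream.
print(triage({"blocks_further_testing": True, "cosmetic": False}))
```

The point of writing the rule down, even this informally, is that testers and developers can argue about the classification once, rather than on every bounced build.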

Get an issue-tracking system up and running and review it frequently. I once got a fault raised that made me decide the test team were cretins; I should never have seen it.

"If you put the surname in the first name box and the first name in the surname box the name comes out backwards on the report."

A couple of other things: don't let your testers raise one fault per test case. "It won't let me enter the data" and "when you fix it so I can, it adds up wrong" are two issues, not one with a footnote.

Make a clear distinction between what is a fault (i.e. the developer did not do as specified) and what is a change to the specification.

From the other side, have your developers give the testers a clue about where to concentrate their efforts, and where a fix might have an impact on another area of the functionality, especially if it's already been tested.

The best way to get more out of testing, though, is to write fewer bugs and find more bugs before the code gets to the test team.

Depends on your code base, but DUnit/NUnit-type packages can pick up a lot of 'stupid' bugs. They can also be used to give your testers a good deal of confidence about certain aspects of the software.
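An xUnit-style check of this kind looks roughly like the following in Python (DUnit and NUnit are the Delphi and .NET equivalents). The function under test, `line_total`, is a made-up example of the sort of code where a "stupid" bug hides:

```python
# A minimal xUnit-style sketch: the function and its test live together,
# so a typo (e.g. + instead of *) fails long before the test team sees it.

def line_total(quantity, unit_price):
    """Return the total for one order line."""
    return quantity * unit_price

def test_simple_total():
    assert line_total(3, 2.5) == 7.5

def test_zero_quantity():
    assert line_total(0, 9.99) == 0

# Run the checks directly; a test runner would discover and run these
# automatically, and report the failing case by name.
test_simple_total()
test_zero_quantity()
```

A suite of checks like this is also what gives testers the "confidence about certain aspects of the software" mentioned above: they know the arithmetic has already been exercised, so they can spend their time on scenarios instead.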

Automated test tools, though they have a large training overhead, can get you some serious improvements as well. But the biggest successes I've seen have come from integration and communication.

Testing isn't an unfortunate overhead because your developers are crap, your designers misinformed, your BAs not knowledgeable enough, or your customers unaware of what they want; it's an integral part of the process.

The first part of a successful test strategy is the key players deciding what we have to test, what we think we should test, and what we have to prove we have tested.

Without that, you are wasting your time.

The easiest way to support your team is to have them lean on each other.

If you are blessed with technical authors, don't have them testing; they've got enough to do. They can be very valuable at raising issues, though, particularly things like installation options, UI consistency, and "oops, we forgot that bit".

Another trick: customer workshops on alpha versions. Feedback, confidence, feeling part of it, being seen as value for money, and the often-missed "that's not a bug, it's a feature we find very useful".

Start simple and evolve; after all, you've learnt a lot from your first iteration, haven't you? It's not a failure, it's an opportunity to improve.


Oh involve

by Tony Hopkinson In reply to That's a hard one

Customer support: we second them onto our test teams. One uses the keyboard a lot; he's fast and he knows the shortcuts. He raised piles of usability issues, like tab orders, things not refreshing automatically, keystrokes being missed, and AVs (access violations) because he was calling up another function before the software had finished the first one.

In other words, the ones that make your finished product look like low quality drivel.


Big complicated question

by raelayne In reply to Roles in Test Planning & ...

Things are changing in this area, aren't they?

There's test-driven development: build the tests first, watch them fail, and then watch them drop off one by one as you build the system.
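In miniature, that cycle looks like this; the function name and behaviour are invented purely for illustration:

```python
# Test-driven development in miniature: red, then green.

# Step 1: write the test first. Running it at this point would raise
# NameError, because format_name doesn't exist yet -- the "red" phase.
def test_format_name():
    assert format_name("Ada", "Lovelace") == "Lovelace, Ada"

# Step 2: write just enough code to make the test pass -- the "green" phase.
def format_name(first, last):
    return f"{last}, {first}"

# Step 3: the test now passes; repeat for the next requirement.
test_format_name()
```

Each requirement arrives as a failing test, and "watching the tests drop off one by one" is literally watching this loop repeat.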

There's the notion that you build quality in from the start, and that "testing" is just a way of verifying, at each step of the development process, that what you've done so far is right.

Usability is an interesting topic in itself -- while there's lots of research already done, and we should all be applying the concepts that are known as a result of that research, in the end it's usable if people can use it easily. And you don't know that for sure unless you watch them attempt to use it.

I think I've seen every possible approach to this. Here's what I think works best in a world where we're forced to do more and more with less and less.

1) Make sure everyone knows he or she is responsible for the quality of his or her work. No "over the wall" crap.

2) Test cases begin with use cases ("requirements"). So the analysts or software engineers who wrote the use cases write the test cases as well, working with subject matter experts.

3) Quality assurance isn't just testing -- it's having a process in place to assure that all requirements have been properly identified; that the analysis is complete and thorough; that the design is appropriate. How do you "test" these early project deliverables? Do that right, and the system testing becomes less problematic.

4) The best software engineers you have should be your testers. And they should automate the tests, using whatever tools they like. They'll have the skills to build the test databases, etc. Don't ever make the mistake of thinking testing specialists without engineering skills can adequately test a product.

5) If you need your analysts or a tech writer to clean up the language in functional test scripts so other people will know what the purpose is, do that. Especially if you share them with customers. I've used a lot of tools over the years to record test scripts, but I find Excel works as well as anything else -- just record the manual steps so anyone can execute them. I was skeptical the first time someone suggested Excel to me, but it does take the focus off of the tool and puts it on the task at hand. Remember, poor analyst + tool = bad analysis that looks pretty. Don't waste time on complicated tools -- the hard part is figuring out how to test the system.

6) Everyone should agree that the tests adequately cover the requirements and all of the situations that might arise. That means all constituencies -- customers, marketing people, ... everyone.

7) And I agree with a couple of previous posts -- have a defect management system (there's no excuse for not having one -- Bugzilla is free), a software configuration management system (Subversion and CVS are free ...), etc.
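Point 2 above, test cases beginning with use cases, implies a traceability check that is easy to automate. A sketch, with requirement IDs, steps, and field names all invented for illustration:

```python
# Each test case records which use case ("requirement") it verifies, so
# the analyst who wrote the use case can review coverage at a glance.

use_cases = {
    "UC-01": "Customer places an order with one or more line items",
    "UC-02": "Customer cancels an order before it ships",
}

test_cases = [
    {"id": "TC-01", "covers": "UC-01", "steps": ["add item", "checkout"],
     "expected": "order confirmation shown"},
    {"id": "TC-02", "covers": "UC-02", "steps": ["open order", "cancel"],
     "expected": "order status is 'cancelled'"},
]

# Simple coverage check: every use case should have at least one test case.
covered = {tc["covers"] for tc in test_cases}
uncovered = [uc for uc in use_cases if uc not in covered]
assert not uncovered, f"use cases with no test case: {uncovered}"
```

Even this trivial check answers the question point 6 raises -- "do the tests adequately cover the requirements?" -- mechanically, rather than by everyone's memory in a review meeting.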

Everyone who has an opinion, is needed to help champion the roll-out, or needs to understand how the system works because they'll be supporting it should "test" the system. They're not really testing, of course; what they do isn't generally formal enough to qualify as testing. Don't tell them that, however. And take advantage -- they might find some bugs, right?

I know this is a woefully inadequate response given your issues, but it's such a big question, and software is so much more complex than it used to be. The old ways just don't work anymore. You're on the right track questioning roles. Why not take a step back and question the whole thing?


Establish a Solid Foundation

by RB_ITProfessional In reply to Roles in Test Planning & ...

As one previous poster mentioned, this is indeed a very broad question, and there are many ways to approach it. You stated in your post, "It often feels as if the test team missed the mark on what the product is trying to accomplish." You've already begun to ask why the team is missing the mark. Perhaps another way of looking at this problem is to ask: does the team truly know what the product is trying to accomplish? To approach this problem, I personally might be inclined to turn my suspicion to the original functional requirements. Ideally, under the guidance of a very competent BA, your functional requirements should be captured and written to reflect what is truly needed by the users. The functional requirements are the foundation for all other work on the project. If the requirements are not accurate, then it stands to reason that most other deliverables of the project will be an inaccurate representation of what the user needs.

While I don't have a one-size-fits-all solution, I offer the following questions to assist in the thought process as you work through this problem:

-What is the requirements gathering process? Does it involve all key stakeholders and user groups? Is there anyone missing from the process?

-How skilled are your BAs at getting to the core of what users need without focusing on an actual solution?

-What is the process to validate the requirements in all phases of the project? Is the project truly focused on meeting the needs of the users at all phases of the project?

-How skilled is the project team at assessing the feasibility of the requirements?

-Are the requirements properly prioritized? Not allocating enough time to the highest-priority items is a recipe for trouble. Who provides input for prioritizing the requirements? Are they prioritized and worked on at the discretion of the tech team, or do the users provide guidance on which items are the highest priority?

-What is the requirements management process? Are there clearly defined lines between each requirement, the responsible party, and the stakeholder that the requirement serves? Are there appropriate change control procedures in place?

Please keep us updated on how you approach this problem and the outcome.

Best Regards


Answer is in the Question

by Wayne M. In reply to Roles in Test Planning & ...

Reread the post carefully and I think the conflict becomes obvious. The question is whether one is willing to make the changes needed to resolve the problem. Hint: the problem is not with the testing team.

Per the post, "the tester is responsible for the planning, a test outline to demonstrate what they will test, actual test cases, test execution, and finally a test summary." The complaint, however, is that "the quality of initial test cases, adequacy of test data developed, and thoroughness of test execution do not meet expectations." I will assume that the test team is meeting all of its defined responsibilities and note that the missed "expectations" are undefined. The question is whether the organization and its management are willing to give the testing team the responsibility needed to meet the expectations.

Previous posts have already mentioned Test First Design. This approach uses the tests to define the product and will allow the tests to begin to meet the expectations. It also, however, requires a major cultural shift to accept the importance given to testing. The testing team needs to be given responsibility for driving the development, not just reacting to the results given to them by others. Developers also need to accept that testing is a primary component of their job and not something to be foisted off on the "dead wood".

Carefully define one's expectations for testing and be willing to give the responsibility necessary to meet them. I have tried to consistently refer to "testing" rather than "the testing team", because changing the responsibilities of testing will undoubtedly lead to organizational realignment.

The question one must ask is "Am I willing to change the organization to make testing valuable, or do I just want to gripe about the test team?"


Experience is key....

by ahayward43 In reply to Roles in Test Planning & ...

If your testers are doing poorly in all those areas, it seems to me you need to hire more experienced testers and an experienced IT manager who has experiential knowledge of testing processes and methodologies and the ability to execute them.

If your testers are involved in project-level discussions, maybe they should not be. Instead, the IT manager should attend the meetings, discuss the testing strategy and approach with the team, and provide one test plan that includes roles and responsibilities. This may foster some accountability.

If there are no defined processes detailing how testing is to be carried out, with samples, then creating those processes may also provide some structure and a framework for the testers to operate in and follow -- process improvement.

Maybe there should be a team lead who can lead them into quality testing as well.

I could take all day... but I won't.
Good luck!

Back to IT Employment Forum