Many beginning hands-on computer courses require written tests, and their textbooks even come with an extensive test-bank of multiple-choice, true/false, and short-answer questions. Surely, with such a wealth of questions to choose from, it should be easy to come up with a fair test for student (and teacher) evaluation.
Why, then, no matter how much I tailored the questions in the test-bank to the class material, would the students invariably question the test’s validity? Why was I spending an inordinate amount of class time explaining why answer A was better than B? And why did the test scores rarely coincide with the students’ ability to do the hands-on labs?
The trouble with test-banks
Most students in my beginning Windows or MS Office classes seemed enthusiastic about their progress until they took their first paper-and-pencil test. Those who did poorly on the test often viewed their grade as a setback. Students who did well on the tests seldom took pride in their good grades; for them, taking a written test to evaluate their hands-on computer skills was just a necessary nuisance that took time away from real learning.
From my students’ perspective, a written test cannot possibly evaluate what they really know—it simply determines what they do not know. When I compared their test scores with their performance on the hands-on exercises, I had to agree.
What are we really testing for?
Multiple-choice and other types of short-answer questions usually zero in on one particular method of completing a task, but there are many ways of doing things on a computer. For example, one student may use the Delete key to delete text while another may use the Backspace key. Others just switch to overwrite mode and simply type over the text they no longer want.
Since students generally pick one method and master that, how can the instructor choose questions from the test-bank that will fairly evaluate their ability to perform a task such as deleting text? Should questions be chosen that test for all possible methods of deleting text? Or, should questions be chosen that test only for the one method the instructor “thinks” they should know, such as the one demonstrated in class? Suppose the student prefers the overwrite method, yet the test measures the student’s use of the Delete key. Is that a fair measure of the student’s ability to delete text?
Testing for what they know
Rather than work within the constraints of the test-bank, I decided to design my own “how-to” questions that would let the students tell me what they know. An example of a how-to question for the delete function would be:
“List the steps you would take to delete a word from the following sentence: ‘Word processing is so much fun!’”
With this type of question, the students can describe any method they prefer, knowing that if their method worked, it would be marked correct.
Designing “how-to” tests
To design a “how-to” test you must first look at the lesson’s objectives, and then develop a question around each objective. For example, if the objective is to be able to set a hard page break, the question might be:

“List the steps you would take to insert a hard page break into your document.”
If the objective is to be able to double-space a document, the question might be:

“List the steps you would take to double-space your document.”
In any case, the student is to list the steps he or she would perform to accomplish the given task. If anyone following those steps can complete the task, the answer is correct.
Administering the “how-to” test
Even though the students write down the steps they use to perform the given task, the “how-to” test is still administered as a hands-on test. I encourage the students to try out the operation on the computer first with some sample text, either from one of their own files, or one I give to them, before trying to answer the question. Once they perform the operation to their satisfaction, they can then write down the steps they used.
Testing for terminology
In the beginning, students will use their own words to explain how to perform an operation, such as “you click on the box with the B on it.” This is OK at first, but eventually they will have to demonstrate their grasp of computer terminology.
After one or two tests, let them know that you will be evaluating them on their use of the correct terms for what they are describing.
Keep the tests short
To prevent students who didn’t study from copying sections verbatim from the book, or getting lost performing operations over and over again, keep the tests short. They should take the average student no longer than 30 minutes to complete. If the students are given short tests that they know will focus on the course objectives, they will be better prepared to take them.
Switching from “how-to” to “what-if”
Once students have proven that they are proficient in working with Windows and word processing, it’s usually time to move on to applications such as Excel and Access. The course objectives change as well: now the student must be able to use the software to extract information from raw data. To test for this objective, I use “what-if” tests.
Designing “what-if” tests
A what-if test presents the student with a set of data and then asks a series of questions based on that data. In this example, I asked the question, “What if Candy’s Craft Outlets had the following sales for the first quarter?”:
[Sample test problem: a worksheet showing first-quarter sales figures for the Candy’s Craft Outlets stores]
Then I provided the following instructions:
“Enter the data as shown onto a new worksheet. Then enter the appropriate formulas to answer the following questions. Give both the answer to the question and the formula(s) used to come up with your answer.
- What were the total sales for the Pennsylvania store this quarter?
- What were the average sales for all stores this quarter?”
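The original worksheet isn’t reproduced here, but to illustrate the kind of answer the test looks for, assume a hypothetical layout with store names in column A, the three monthly sales figures in columns B through D, and the Pennsylvania store in row 2 of a three-store list (rows 2 through 4). Acceptable formulas might then look something like this:

```
E2: =SUM(B2:D2)        quarterly total for the Pennsylvania store
E3: =SUM(B3:D3)        quarterly totals for the other two stores
E4: =SUM(B4:D4)
E5: =AVERAGE(E2:E4)    average quarterly sales across all three stores
```

Because more than one layout or formula can produce the same figure, asking for the formula along with the answer shows which approach the student actually took.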
Unlike multiple-choice or short-answer questions, this type of question lets me know whether the students can truly navigate and enter data in a spreadsheet and, more importantly, enter the appropriate formulas to get the information they need from their data. All this without having to ask them such things as, “What is a function?”, or “What distinguishes a formula from a label in Excel?”
Open book required
When preparing for the multiple-choice/short-answer tests, some students told me they actually tried to memorize entire chapters. Despite all their hard work, they still found both the written tests and the hands-on work difficult. Now I require all students to bring their books to a test. I’d rather test for their ability to read and comprehend technical material than their ability to memorize the material. How-to and what-if questions let me do just that.
Testing for all the right things
Testing against the objectives not only gives the students an opportunity to show me what they really learned, it also gives me a better idea of what I have actually taught them. No longer is time spent on debating answer A over B. Rather, that time is better spent thinking of ways to best use the software.
Do you start with test-bank questions and revise from there? Do you scrap the canned questions and create your own? What about student comments on tests? Write us an e-mail about your best practices for assessing student skills.