General discussion

Locked

User Sadism

By elwoos
I'm thinking about sending out a User Satisfaction (aka sadism) survey to all my users. Does anyone have any constructive ideas on how to approach this (other than not to do it)? It would also be useful to see examples of surveys that others have used.

Am I just being masochistic? It's probably going to have to happen sooner or later for benchmarking anyway.

thanks

steve


All Comments


Try these resources

by Jay Garmon Contributor In reply to User Sadism

This article
http://techrepublic.com.com/5100-6263-1043912-1-1.html

This download
http://techrepublic.com.com/5129-6321-10217009.html


Close-Ended Questions for Minimal Masochism

by Salamander In reply to User Sadism

From a statistical standpoint, try to keep as many of your questions as possible in close-ended categories that you can assign point values to.

For example, I often use these standard response categories for agreeing or disagreeing with a given statement: strongly agree (4), agree (3), neither agree nor disagree (2), disagree (1), strongly disagree (0). You get the idea. Just keep all your "high scores" in the same direction, so that when you average the survey results, higher point values indicate higher customer satisfaction and vice versa.

Sticking to close-ended questions makes collection, compilation, and reporting of the data *much* easier.

Sure, I throw in one or two open-ended questions at the end to let people vent and to collect new ideas, but if you're going to use this for benchmarking, you need enough close-ended questions with point values to give you hard numbers. It is often nearly impossible to put open-ended responses on a point scale, because the evaluator's take on them is always subjective. I usually just pick through these and use them as illustrative points in the final report.

If you stick to discrete response categories, you can easily show measurable improvement (or decline, as the case may be) over time. It's soooo much less painful to do it this way, especially if you're dealing with a large group of victims/respondents.
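
If it helps, here's a rough Python sketch of the kind of scoring and year-over-year comparison I'm describing. The question names and responses are made up purely for illustration:

# Rough sketch of the scoring approach described above.
# The question names and sample responses are hypothetical.

SCORES = {
    "strongly agree": 4,
    "agree": 3,
    "neither agree nor disagree": 2,
    "disagree": 1,
    "strongly disagree": 0,
}

# One dict per returned survey: question -> chosen response category
responses_2004 = [
    {"helpdesk is responsive": "agree", "systems are reliable": "strongly agree"},
    {"helpdesk is responsive": "disagree", "systems are reliable": "agree"},
]
responses_2005 = [
    {"helpdesk is responsive": "strongly agree", "systems are reliable": "agree"},
    {"helpdesk is responsive": "agree", "systems are reliable": "agree"},
]

def average_scores(surveys):
    # Average point value per question; higher means more satisfied.
    totals, counts = {}, {}
    for survey in surveys:
        for question, answer in survey.items():
            totals[question] = totals.get(question, 0) + SCORES[answer]
            counts[question] = counts.get(question, 0) + 1
    return {q: totals[q] / counts[q] for q in totals}

# Benchmark year over year: a positive change = measurable improvement.
before = average_scores(responses_2004)
after = average_scores(responses_2005)
for question in before:
    print(question, before[question], "->", after[question])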


Maybe Masochistic

by elwoos In reply to Close-Ended Questions for ...

Thanks for that reply, it's very helpful. I suppose the next logical question is what sort of response rate you would expect, and from how many users.

thanks

steve


If you want to be statistically accurate...

by JamesRL In reply to Maybe Masochistic

You have to determine the "demographics" of your user community, and then make sure that you "oversample" to ensure you have enough data.

Let me try to clarify. If you have one hundred users who all work in finance out of one location, you need fewer responses than if you have two locations of fifty each. The more diverse your user community, the more samples you need.

The way polling firms go about their business is to create a demographic profile (age, gender, race (if relevant to the survey), geographic location, etc.) and keep making calls until they have more than enough samples to fill their demographic needs. Then they use a random method of weeding out responses until they fit the profile.

So the question isn't "do I have enough responses?" but "do I have enough responses that roughly correspond to the user community I am trying to understand?"
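
If you want a feel for how that kind of weighting works, here is a minimal Python sketch of one common approach (post-stratification weighting) - not necessarily what a polling firm does exactly, and the locations and scores are invented purely for illustration:

# Minimal sketch of post-stratification weighting.
# Locations, counts, and scores below are hypothetical.

# Known make-up of the whole user community
population = {"head office": 50, "warehouse": 50}

# Who actually responded, with their satisfaction score (0-4 scale)
responses = [
    ("head office", 4), ("head office", 3), ("head office", 4),
    ("warehouse", 1),
]

total_users = sum(population.values())
resp_counts = {}
for location, _ in responses:
    resp_counts[location] = resp_counts.get(location, 0) + 1

# Weight each response so each location counts in proportion to its
# real share of the user community, not its share of the responses.
weighted_sum = weight_total = 0.0
for location, score in responses:
    weight = (population[location] / total_users) / (resp_counts[location] / len(responses))
    weighted_sum += weight * score
    weight_total += weight

print("raw average:", sum(s for _, s in responses) / len(responses))
print("weighted average:", weighted_sum / weight_total)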

You should also know that when you send out questionnaires and ask for a response, you get motivated responders. It usually works out that you hear from the people at either end of the spectrum - those who love you or those who hate you. The indifferent tend not to respond.

In terms of setting up a survey, you should go into it with a set objective - what do you want to know? You want to gather standard measures that you can trendline year over year, but also ask some questions about specific issues you are facing today.

James Linn


It also helps if

by HAL 9000 Moderator In reply to If you want to be statist ...

You have most of the questions as multiple choice rather than written responses. Written responses are not much help in working out what is really happening, because you will put your own slant onto them, and that slant may be totally different from what the responder intended. Unfortunately, we all read what we want to see rather than what was intended in things like this.

Otherwise, if you follow all of the above, I don't think you'll go far wrong.

Col


Agree

by Salamander In reply to It also helps if

Multiple-choice questions are really the only ones you can use for any kind of valid quantitative analysis. I agree that, otherwise, the evaluator can slant the responses. I have seen that happen and it is not good...


Response rate

by Salamander In reply to Maybe Masochistic

Generally, a 50-60 percent response rate is considered acceptable for statistical purposes. That may seem low, but that's really what you need to make generalizations.
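
For a rough feel for the numbers, here's a back-of-the-envelope Python sketch using the textbook sample-size formula for a proportion (about 95 percent confidence, with a finite-population correction). It only accounts for sampling error, not the representativeness issues James raised, and the user counts are hypothetical:

import math

def required_responses(num_users, margin_of_error=0.05, z=1.96, p=0.5):
    # Responses needed for +/- margin_of_error at ~95% confidence,
    # adjusted for a finite population of num_users.
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / num_users))

for users in (100, 500, 2000):
    needed = required_responses(users)
    print(users, "users ->", needed, "responses",
          "({:.0%} response rate)".format(needed / users))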


Guide to Survey Research

by Salamander In reply to Maybe Masochistic

This is a good guide to get you started. It deals with the design steps, pretesting, validity, analysis, etc.

http://writing.colostate.edu/references/research/survey/index.cfm

