Date Added: Oct 2011
In this paper, the authors identify trends in, benefits from, and barriers to performing user evaluations in software engineering research. From a corpus of over 3,000 papers spanning ten years, they report on various subtypes of user evaluations (e.g., coding tasks vs. questionnaires) and relate user evaluations to paper topics (e.g., debugging vs. technology transfer). They identify external measures of impact, such as best paper awards and citation counts, that correlate with the presence of user evaluations. They complement this analysis with a survey of over 100 researchers from more than 40 universities and labs, in which they identify a set of perceived barriers to performing user evaluations.