Why do respondents select no-opinion response options in consumer research?

Anna Matel, Tomasz Poskrobko


In surveys, which are a commonly accepted research method in the social sciences, a certain percentage of respondents always give no-opinion responses such as "no opinion" or "hard to say". In this study, we treat no-opinion responses as a motivated decision to refuse to respond. The aim of the study was to determine which factors in the organisation of a survey increase the percentage of respondents who opt for no-opinion responses. We focused in particular on the difficulty of the questions, the order of the questions in the questionnaire, motivating respondents through rewards, and the research technique. In the first part of the study, 575 students were divided into 5 groups. Each group was surveyed about environmental consumer attitudes under different survey conditions. In addition, the respondents were asked to rank the difficulty of the individual questions in the survey.

Findings: The study showed that the percentage of no-opinion responses increases as the questions become more difficult. The respondents were more likely to avoid stating their opinion on those environmentally unfriendly behaviours that they exhibited more frequently. Changing the research technique from a questionnaire to a direct interview decreased the percentage of no-opinion responses. The respondents opted for a "no opinion" response less frequently when the interview was conducted by a lecturer than when it was conducted by a student. Changing the order of questions also affected the percentage of no-opinion responses; however, this was true only for questions that the respondents regarded as easy.

Conclusions: The study showed that the choice of a research technique intended to reduce the percentage of no-opinion responses depends on the nature of the questions. If the questions are difficult and require the respondents to engage cognitive resources, the better solution is to employ the direct interview method. However, if the questions are sensitive and the respondent may feel pressure to give a response that conforms to social norms, the better solution is to ensure their anonymity, e.g. by employing the questionnaire technique.

Keywords

rating scale, no-opinion response options, response bias, research technique


DOI: http://dx.doi.org/10.7206/DEC.1733-0092.105
