Large-scale surveys are an essential part of academic research, yet many universities prohibit incentivizing research participants, which can make it difficult to recruit statistically significant bodies of study participants.
Quantitative surveys with large, statistically significant populations have historically been time-consuming, expensive, and laborious to perform, often requiring thousands of phone calls and/or the distribution of survey forms by costly postal mail (Callegaro, Manfreda and Vehovar, 2015; Dziuban et al., 2015).
Large-scale studies based on a manual process of recruiting participants have therefore only been accessible to established researchers with significant funding from academia or the corporate world. But even with access to funds, recruiting thousands of participants by phone or post who are willing to offer perhaps an hour of their precious time to any kind of study is difficult. This also explains the high number of studies in the social sciences with obvious deficiencies in the statistical validity of their research populations, including Hofstede's theory of cultural dimensions (Hofstede, 2017) and Schwartz's seven-dimension model (Schwartz, 2006), two of the most influential models of culture in the fields of cross-cultural psychology, international management, and cross-cultural communication.
Hofstede’s theory (2017), as pointed out by McSweeney (2002), originates from a survey with over 117,000 respondents from 66 countries (McSweeney 2002, p.94), but in only six countries does the number of respondents exceed 1,000; in fifteen countries the number of respondents is less than 200; Pakistan contributed only 37, and Hong Kong and Singapore only 71 and 59 respectively (Hofstede 1980, pp.73, 411). Consequently, the dataset behind Hofstede’s model is not statistically significant, a point also made by Schmitz and Weber (2014), who conclude that Hofstede’s research lacks validity and that his theory should “neither to be used as a standard of cross-national comparisons, nor as the basis for general descriptions about countries as whole” (Schmitz and Weber 2014, p. 21). Still, this theory is one of the most widely accepted in the social sciences, cited over 72,000 times according to Google Scholar (2017).
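To see why such small national samples are problematic, consider the standard margin-of-error formula for an estimated proportion, z·√(p(1−p)/n). The sketch below is an illustration, not part of Hofstede's or McSweeney's analysis; it assumes simple random sampling and the worst-case proportion p = 0.5 at 95% confidence (z ≈ 1.96).

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# Sample sizes mentioned above: Pakistan (37), Singapore (59),
# Hong Kong (71), and a 1,000-respondent country for comparison.
for n in (37, 59, 71, 1000):
    print(f"n = {n:4d}: +/- {margin_of_error(n) * 100:.1f} percentage points")
```

Under these assumptions, a sample of 37 yields a margin of error of roughly ±16 percentage points, versus about ±3 points for a sample of 1,000, which is why country-level conclusions drawn from the smallest samples are so fragile.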
Schwartz’s theory illustrates another common deficiency, one often acknowledged in the limitations section of academic research reports: sample selection bias, with participants recruited only from academia.
It is easy to see why so many academic studies are performed exclusively with participants from academia: it is relatively simple for a research team to recruit participants from their own institution, often students with ample free time who happily become lab rats for a few dollars or a movie ticket. This does not, however, change the fact that a research population consisting only of students and professors is by no means representative of a general population.
The web and its influence on research
The introduction of the web in the 1990s was a paradigm shift in research methodology, making it possible for researchers to send out thousands or even millions of study invitations and survey forms with a single click.
As Callegaro, Manfreda and Vehovar (2015) note, however, the ease of performing web-based studies has caused the number of surveys people are asked to take part in to grow exponentially every year since the turn of the millennium. Most people today are flooded with survey requests by email, in pop-ups on their favourite websites, and on social network platforms like Facebook, which has resulted in a steady decline in both response rates and the quality of responses (Nonresponse in Social Science Surveys, 2013). Consequently, a growing industry of companies that help researchers recruit statistically significant and unbiased survey populations has become an important part of academic research, enabling small research teams and even individual researchers to perform large-scale studies that just a few years ago would have been out of reach for all but a few well-funded scholars.
While the methodologies these companies use to recruit study participants vary, it is now well-established that not offering study participants incentives makes it difficult to recruit statistically significant study populations (Singer and Ye, 2012; Wetzels et al., 2008; Callegaro, Manfreda and Vehovar, 2015).
Not all universities permit incentivizing research participants
Faced with identifying, screening and inviting thousands of participants in the short window of time available to many research projects, the only viable alternative for many scholars is to outsource recruitment to a third party like Prolific (2017), one of the leading companies recruiting participants for academic studies. Founded as an Oxford University incubator company, and with a recruitment process built on strict ethical guidelines, Prolific pays research participants fairly for their time, and the company today counts many of the world’s leading academic institutions among its clients, including Harvard, Yale and Cambridge.
Not all universities, however, agree to paying research participants, and many even prohibit it. Scholars should therefore carefully consider the research standards of the academic institutions they apply to, to avoid situations where a research project becomes impossible to perform due to policy restrictions.
References
- Callegaro, M., Manfreda, K. and Vehovar, V. (2015). Web survey methodology. Beverly Hills, CA: Sage.
- Dziuban, C., Picciano, A., Graham, C. and Moskal, P. (2015). Conducting research in online and blended learning environments. Routledge.
- Hofstede, G. (1980). Culture’s Consequences: International Differences in Work-Related Values. 2nd ed. SAGE Publications.
- Hofstede, G. (1997). Cultures and organizations: Software of the mind. 1st ed. London: McGraw-Hill USA.
- Hofstede, G. (2011). Dimensionalizing Cultures: The Hofstede Model in Context. Online Readings in Psychology and Culture, 2(1), pp.3-5.
- Hofstede, G. and Hofstede, J. (2005). Cultures and organizations. 1st ed. New York: McGraw-Hill.
- Hofstede, G. (2017). Cultural Dimensions – Geert Hofstede. [online] Geert-hofstede.com. Available at: https://geert-hofstede.com/cultural-dimensions.html [Accessed 29 Aug. 2017].
- McSweeney, B. (2002). Hofstede’s Model of National Cultural Differences and their Consequences: A Triumph of Faith – a Failure of Analysis. Human Relations, [online] 55(1), pp.89-118. Available at: http://journals.sagepub.com/doi/abs/10.1177/0018726702551004 [Accessed 2 May 2017].
- Nonresponse in Social Science Surveys. (2013). Washington: National Academies Press.
- Prolific. (2017). Prolific. [online] Available at: https://www.prolific.ac/ [Accessed 29 Aug. 2017].
- Prolific.ac. (2017). Ethical rewards. [online] Available at: https://www.prolific.ac/researchers [Accessed 29 Aug. 2017].
- Schmitz, L. and Weber, W. (2014). Are Hofstede’s dimensions valid? A test for measurement invariance of uncertainty avoidance. interculture journal: Online-Zeitschrift für interkulturelle Studien, [online] 12(22), pp.11-26. Available at: http://www.ssoar.info/ssoar/handle/document/45472 [Accessed 2 May 2017].
- Schwartz, S. (2006). A Theory of Cultural Value Orientations: Explication and Applications. Comparative Sociology, 5(2), pp.137-182.
- Singer, E. and Ye, C. (2012). The Use and Effects of Incentives in Surveys. The ANNALS of the American Academy of Political and Social Science, 645(1), pp.112-141.
- Wetzels, W., Schmeets, H., Brakel, J. and Feskens, R. (2008). Impact of Prepaid Incentives in Face-to-Face Surveys: A Large-Scale Experiment with Postage Stamps. International Journal of Public Opinion Research, 20(4), pp.507-516.