Survey Research Trends in Operations and Supply Chain Management

CARISCA hosts biannual capacity-building workshops for faculty and students at KNUST.

May 10-11, 2022

On May 10 and 11, 2022, CARISCA hosted its second faculty workshop of the academic year to build the research capacity of faculty at Kwame Nkrumah University of Science and Technology (KNUST). Over the two-day workshop, faculty and graduate students from the KNUST School of Business learned about best practices in survey research and applied what they learned to their own research projects.

Presenters were Matthew Robson, professor of marketing and international management at Cardiff University, and Adegoke Oke, professor of supply chain management at Arizona State University.

Survey Research Tips and Trends 

A common lament of researchers in recent years, Robson said, is that it seems impossible to publish survey research in the top journals because peer reviewers object to the approach. Although the number of survey-based papers in top journals has declined, it is possible to get them published, Robson explained.

As a survey researcher who co-authored the best paper of the year in the Journal of International Marketing in 2019, Robson offered workshop participants advice on addressing some of the potential criticisms and misconceptions reviewers have about survey research.

“Survey research has strengths. It also has weaknesses,” Robson said. “It’s how you speak to those strengths and how you downplay those weaknesses that I think are quite important.” 

Among the primary strengths of survey research are its generalizability and pragmatism. Robson explained that business managers and other “non-academic” professionals understand and like surveys.

A second benefit of surveys is that they allow researchers to measure a construct directly rather than only infer it from secondary data. Another advantage is that surveys use well-developed procedures, which enable researchers to study a concept across different cultures and compare and aggregate the results. A fourth benefit is that survey research can control for other variables, or explanations for the results, that reviewers may expect. 

Often, Robson said, survey research is combined with other research methods, and this mixed-method approach can help address reviewer questions. Robson also offered workshop participants tips on developing good surveys, questionnaires and measures. Following are his suggestions:

Survey Tips:

  • Be as thorough as possible with your population and sampling frame, and keep detailed records. Reviewers tend to raise questions when they think researchers are not being transparent about how the survey population and sample were defined and selected.
    “You need to kill them with details,” Robson advised.
  • Use multiple procedures to ensure your informants, or respondents, are competent to answer the survey questions. It’s not sufficient to ask respondents to rate their knowledge level at the end of the survey. Screen them in advance through pre-study interviews or by using informants you know.
  • Provide respondents with digestible insights in a managerial report following the study to keep them “on the hook.” They will be more likely to respond to follow-up questions and participate in future research if they gain some benefit from the study.
  • Plan in advance how you will respond to common-method bias problems, which occur when the measurement method itself, rather than the constructs being measured, drives variance in responses.
    Some ways to overcome common-method bias are using multiple data sources, getting your study endorsed by top management (so respondents take it more seriously), measuring the dependent and independent variables in separate surveys, and using multiple respondents from the same company instead of only one.  

Questionnaire Tips:

  • Deliver your survey online through a platform such as Qualtrics.
  • Guard against making your survey too long. The ideal length will vary depending on the subject area and your respondents.
  • Order your questions so respondents cannot guess the correct answer to a question based on previous questions.
    “Don’t give away your construct,” said Robson.
  • Mix up your response formats to prevent informants from rotely selecting the same answer to multiple questions in a row.
  • Insert attention-check questions to make sure respondents are reading the survey carefully. For example, add a question such as “Please select the answer in the middle.” If respondents check the wrong answer, you may want to eliminate them from the survey results.
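The attention-check tip above can be sketched in code. Assuming survey responses exported as a list of records with hypothetical field names (the platform’s actual export format will differ), a simple screening filter might look like this:

```python
# Hypothetical exported responses: each dict maps question IDs to answers.
# "att_check" is the attention-check item, where respondents were told
# "Please select the answer in the middle" on a 1-5 scale.
responses = [
    {"id": "r1", "q1": 4, "att_check": 3, "q2": 5},
    {"id": "r2", "q1": 2, "att_check": 5, "q2": 2},  # failed the check
    {"id": "r3", "q1": 5, "att_check": 3, "q2": 4},
]

EXPECTED = 3  # the instructed "middle" answer on a 1-5 scale

def passed_attention_check(resp):
    """Keep only respondents who selected the instructed answer."""
    return resp["att_check"] == EXPECTED

clean = [r for r in responses if passed_attention_check(r)]
print([r["id"] for r in clean])  # → ['r1', 'r3']
```

Respondents who fail the check (here, “r2”) are flagged for possible exclusion before analysis, as Robson suggests.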

Measurement Tips:

  • Make sure every question on your survey is critical. Include enough questions to measure each construct, but guard against devoting space to noncritical items.
  • Use language that is unambiguous. Avoid jargon or academic terms that your respondents may not understand. 
  • Use in-depth interviews to modify your questions to match your construct. Keep detailed records to help you respond to reviewers’ questions. 
  • Match wording of the scale items to wording of the definition.
    As Robson put it, “make sure reviewers can’t insert something even as narrow as a sheet of paper between your items and your definition.” 

Operationalization of Variables in Survey Research 

ASU Professor Adegoke Oke presented the second half of the workshop on the operationalization of variables in survey research, which he noted is a major area where reviewers find fault. 

Operationalization is the process of deciding how to translate abstract concepts into something more concrete and more directly observable. This step should happen early in the research design.

Problems often arise, he said, because researchers fail to consider operationalization until too late in the process. 

“What people tend to do is finish developing concepts, and they move the operationalization process to a later stage,” said Oke. “But if you want to do it well, you need to move the process a little bit earlier. From the time you come up with your research questions, you need to start thinking about operationalization.”

Oke then laid out these steps in operationalizing survey research concepts:

Step 1: Clarify the concepts. After deciding on the concepts you wish to study, identify different definitions of those concepts and decide on a working definition. Your study should focus on that definition, not a broad concept with different meanings.

Step 2: Develop indicators that accurately capture the concepts. Oke calls this “descending the ladder of abstraction.” Move from a broad concept down to measurable indicators. 

Step 3: Evaluate the indicators. Before you launch a full study, evaluate your survey instrument for reliability and validity. Reliability means that your survey instrument will consistently get the same results. Validity relates to the accuracy of your survey. Are you measuring what you intend to measure? You should test reliability and validity through a pilot study before launching your full survey. 
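One common index of the reliability Oke describes in Step 3 is Cronbach’s alpha, which measures how consistently a set of scale items captures the same construct (alpha is a standard choice, though the workshop did not name a specific statistic). A minimal sketch on made-up pilot data:

```python
import statistics

# Made-up pilot responses: rows = respondents, columns = the k scale
# items intended to measure one construct (1-5 Likert answers).
pilot = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [4, 4, 5],
]

def cronbach_alpha(rows):
    """Cronbach's alpha: internal-consistency reliability of a scale."""
    k = len(rows[0])                       # number of items
    items = list(zip(*rows))               # transpose to per-item columns
    item_vars = sum(statistics.pvariance(col) for col in items)
    total_var = statistics.pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_vars / total_var)

alpha = cronbach_alpha(pilot)
print(round(alpha, 2))  # → 0.91
```

Values above roughly 0.7 are conventionally taken as acceptable; a low alpha in the pilot signals that the indicators should be revised before the full survey launches, which is exactly the risk Oke warns against below.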

“My advice,” said Oke, “is do not take a risk. You can avoid issues if you do a good pilot study. 

“The bar for cross-sectional survey research design in top journals is now very, very high,” Oke added. “Let’s make our surveys better. That’s why we’re doing this workshop.”

More than 65 faculty members and graduate students from KNUST attended the presentations. Over 90% of the attendees who responded to a post-workshop survey rated the program as very useful.

Nathaniel Boso, dean of the KNUST School of Business, closed out the first day of the workshop by reinforcing the importance of producing high-quality research.

“We want to make sure that the quality of our research improves, not just for academic publication but also for the common good,” Boso noted. “If we do good research, it helps our society become a better place. If we are doing substandard research, it is going to be consumed by people, and our way of life will be worse off.”


Photo by Chris Liverani on Unsplash