Low Accuracy of Marketing Research Data: Some Issues in Validity Analysis
Marketing research is usually conducted by internal and external researchers. Internal research involves a company's own people, while external research is handled by marketing research consultants and academicians. There are widespread anxieties about the disconnection between external marketing research and industry requirements (November, 2004; Quora, 2017).
Academicians raised this issue long ago (see Mayer, 1970). A myriad of marketing-oriented research has been published in thousands of scientific journals, and in almost every article we can find practical and academic contributions. Nevertheless, marketing academic research and business practice remain poorly connected (Hughes et al., 2018; Maiken, 1979; November, 2004).
Many factors drive the disconnection between business academic research and business practice (November, 2004). Based on Mayer (1970), Quora (2017), and Rogers (n.d.), these factors include:
- Customers: Who considers your research necessary?
- Congruency: Academicians start from scientific theory, while practitioners rely on their experience and knowledge. Practitioners will adopt research recommendations only if the recommendations align with that experience and knowledge.
- Methodology: Academicians are concerned with methodology, whereas practitioners focus on whether the findings are helpful. For example, the Best Brand Award is a popular branded marketing research among practitioners. They struggle to win that award because it can be used as a gimmick in their promotion; they care less about how the results are obtained.
- Reductionism: Marketers usually use samples instead of populations. Therefore, depending on the sample size and the sampling technique, there is no guarantee that the sample represents the population. Marketing researchers cannot answer this question: “Does your sample represent my consumers?”
- Accuracy: Low precision of data caused by poor measurement tools and data collection methods.
- Replication: Academics are often not fully convinced by their own findings. Generally, they recommend that other researchers perform simple or constructive replication to confirm them.
In the next section, the discussion focuses on low-precision issues. The author stresses this issue because hundreds of widely published books and academic articles promote low-quality validity test techniques and results. This article aims to promote appropriate approaches to that issue.
Low Precision Issue
According to Quora (2017), the accuracy of marketing research data is only between 10% and 20%. The problem of marketing data precision exists not only among academics but also among marketing practitioners. In a survey of 964 marketers and marketing data analysts, Johnson (2021) reported that 41% of marketing analysts and 30% of marketing practitioners stated low trust in marketing reports.
Unreliable instruments and inaccurate data cause low precision. Therefore, validity and reliability tests are one package that must be fulfilled together.
A measurement tool's reliability is characterized by its consistency in producing data. Imagine that we measure the temperature of warm water with two instruments: a thermometer and the index finger. Five people take the measurement with each instrument. With the thermometer, the numbers produced by the five people are 37, 36.8, 37.1, 37.2, and 36.9 degrees Celsius.
Meanwhile, with the index finger, the estimates of the five people are 30, 40, 36, 45, and 25 degrees Celsius. The thermometer is more reliable than the index finger because it produces more consistent data. Reliable measurement is a prerequisite for valid data. There are many reliability tests, but the most widely used in marketing are Cronbach's alpha and construct reliability.
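The logic of Cronbach's alpha can be sketched in a few lines. The scores below are hypothetical (four respondents on three items); the formula follows the standard definition, alpha = k/(k-1) × (1 − Σ item variances / variance of total score).

```python
import numpy as np

# Hypothetical scores: 4 respondents x 3 items (illustration only)
scores = np.array([
    [3, 4, 3],
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 5],
])

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 2))  # -> 0.98: the items produce highly consistent data
```

The closer alpha is to 1, the more consistently the items behave, mirroring the thermometer in the example above.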
The validity test relates to whether we measure what we want to measure. This question contains two specific questions. First, does the instrument measure the variable we intend to measure (construct validity)? Second, does the data we obtain empirically describe the actual situation (empirical validity)?
A construct is an abstract concept that cannot be measured directly; it must be translated into observable variables for measurement. Construct validity covers content, face, convergent, discriminant, and nomological validity. The first three are needed in both academic and practice-oriented marketing research, while the last two are usually confined to academia.
Content validity assesses whether an instrument includes all aspects of a construct. Two questions are posed in this validity test. First, do the observation variables cover all aspects of the construct? Second, are the proposed observation variables necessary? Table 1 contains an example of questionable content validity.
The operationalization above raises several content validity problems:
- Satisfaction toward product or service quality. There are two objects in this statement, namely product and service, and they should be separated.
- In the phrase “product or service quality,” the object is not clearly defined. A firm usually offers several types of products, and satisfaction with each type differs.
- Is the term “service” in statement 1 addressed to service as a supplement or as the primary marketing offering?
- Statement no. 3 contains a biased premise. It assumes that to be satisfied, a product must be the best in its category. This premise is vulnerable to objections.
- Where do the variables come from?
To fulfill content validity, marketing researchers first need to refer to theory rather than rely solely on their own opinions. Second, they need to ask for expert judgment on whether the proposed observation variables are essential, clear, and free of bias (Lawshe, 1975; Rodrigues et al., 2017). The recommended number of experts is between 5 and 40 (Lawshe, 1975). Several techniques can verify whether an instrument meets content validity, such as the content validity ratio (CVR), the content validity index (CVI), the item-level CVI (I-CVI), and Cohen's kappa.
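Lawshe's (1975) content validity ratio illustrates how expert judgment is quantified: each expert rates an item as "essential" or not, and CVR = (n_e − N/2) / (N/2), where n_e is the number of experts rating the item essential and N is the panel size. The figures in this sketch are hypothetical.

```python
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """Lawshe's (1975) CVR: ranges from -1 (no expert rates the item
    essential) to +1 (every expert rates it essential)."""
    half = n_experts / 2
    return (n_essential - half) / half

# Hypothetical panel: 8 experts, 6 of whom rate the item "essential"
cvr = content_validity_ratio(6, 8)
print(cvr)  # -> 0.5
```

Lawshe's tables then indicate, for a given panel size, the minimum CVR an item must reach to be retained.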
The problem of content validity in marketing academic research is rarely recognized. Most marketing researchers ignore this test. As a result, the observational variables used may be too many or too few, unclear, or irrelevant.
Face validity is based on a subjective view of whether the questions can be clearly understood (Moores et al., 2012) and do not cause difficulty in responding. In other words, face validity relates to the language quality of the measurement tool (Taherdoost, 2016). It is assessed by asking potential respondents about their understanding of the questions, which can be carried out through pilot research with focus groups or surveys (Moores et al., 2012).
Marketing academic research generally ignores face validity. As a result, respondents may give arbitrary answers because the questionnaire is too long or the questions are unclear (Quora, 2017; Rogers, n.d.).
Convergent validity is the cohesiveness of the observed variables in describing the construct (Hair et al., 2014; Strauss and Smith, 2009). Confirmatory factor analysis (CFA), usually performed within structural equation modeling (SEM), is the appropriate technique for assessing it (Hair et al., 2016; Schreiber et al., 2006) because it uses only common variance to analyze the relationships between variables (Hair et al., 2014). Nevertheless, some researchers use inappropriate tools, such as product-moment correlation, the Bartlett test of sphericity, and exploratory factor analysis (EFA).
Product-Moment Correlation
Product-moment correlation is a validity test technique that is very popular among marketing students. Most nationally published marketing research books propose only this technique for convergent validity tests.
The following is a screenshot of an article published in a nationally accredited Sinta 3 journal.
The problems with the correlation technique for validity analysis are threefold. First, the coefficient is obtained by correlating an observation variable with the construct's total score. Can the total score represent the construct? The answer is yes if the observed variables are valid (Hair et al., 2014); however, if the score is obtained from invalid observation variables, the total score will contain a high error. Moreover, the product-moment correlation uses total variance as its input. As is known, total variance consists of true, error, and unique variance. Thus, the product-moment correlation contains information derived from the error and unique variance, so its quality is questionable.
Second, using the p-value as the threshold for deciding whether an observation variable is valid is a fatal error. A significant or insignificant decision in the r-coefficient test differs from a valid or invalid decision. The p-value only determines whether a correlation between the observed variable and its construct exists. Even when the correlation is high (r > 0.70), researchers cannot explain the variance extracted. As a result, whether the observation variable has a high (VE > 0.50) or low (VE < 0.50) ability to explain the construct remains unknown.
Third, the product-moment correlation cannot explain convergent validity, namely the integration or cohesiveness of the indicators in explaining the construct. As a consequence, discriminant validity cannot be explained either.
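The gap between statistical significance and explanatory power is easy to demonstrate numerically. In the sketch below (hypothetical figures), an item-total correlation of r = 0.30 with n = 100 is highly significant, yet the item shares only 9% of its variance with the total score, far below the 50% benchmark.

```python
import math
from scipy import stats

r, n = 0.30, 100  # hypothetical item-total correlation and sample size

# t-statistic for H0: rho = 0, and its two-tailed p-value
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
p = 2 * stats.t.sf(abs(t), df=n - 2)

print(f"p = {p:.4f}")        # significant at the 1% level
print(f"r^2 = {r ** 2:.2f}") # only 9% shared variance, far below 0.50
```

A reviewer relying on the p-value alone would declare this item "valid" even though it explains almost none of the construct.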
Bartlett Test of Sphericity
Some researchers use the Bartlett test of sphericity to test convergent validity. An article by Neștian et al. (2021), published in the Scopus-indexed journal Sustainability, is not free from errors due to using this technique. The question is, why is this technique inappropriate?
Figure 2. Screenshot of the Neștian et al. (2021) article pages.
This technique tests whether the variables involved in the factor analysis are correlated. The Bartlett test value approximates a chi-square distribution. The null hypothesis (H0) states that there is no correlation between variables, while the alternative hypothesis (Ha) states that there is. The conclusion is to accept or reject H0. If H0 is rejected, we conclude that there is a correlation between variables. So, this technique only detects whether a correlation exists. It does not explain whether the observed variables are associated with only one construct and have a high ability (VE > 0.50) to explain that construct.
There are times when the Bartlett test of sphericity successfully detects multicollinearity, yet the variables involved fall into several constructs. Such a result contradicts the principle of convergent validity, which expects the solidity of all variables in one construct.
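For readers who want to see what the test actually computes, here is a minimal sketch with simulated (hypothetical) data. The statistic is chi2 = −(n − 1 − (2p + 5)/6) · ln|R|, where R is the p × p correlation matrix of n observations; a significant result says only that R is not an identity matrix.

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(x: np.ndarray) -> tuple[float, float]:
    """Bartlett's test of sphericity: H0 is that the correlation
    matrix of the observed variables is an identity matrix."""
    n, p = x.shape
    r = np.corrcoef(x, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(r))
    df = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, df)

# Hypothetical data: 200 respondents, 5 items sharing one latent factor
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
x = latent + 0.8 * rng.normal(size=(200, 5))

chi2, p_value = bartlett_sphericity(x)
print(p_value < 0.05)  # True: the items are correlated -- nothing more is learned
```

Note that the same significant result would appear even if the five items loaded on two or three different constructs, which is exactly why the test cannot establish convergent validity.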
Exploratory Factor Analysis
Many studies use exploratory factor analysis (EFA) to test convergent validity. One of them is an article published in LSE-Life Science Education, a Scopus Q1 journal. Unfortunately, EFA is an imprecise tool for validity testing.
Factor analysis can be used to identify the structure of the relationships between variables or between respondents. Suppose we have a set of observed variables. With factor analysis, we can discover the dimensions of a construct. Indeed, we can choose extraction techniques that use common variance (for example, principal axis factoring and alpha factoring). However, the variance extracted (called communality) will be dispersed across several latent variables (called components in SPSS), making the result inaccurate for validity testing.
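The dispersion of extracted variance can be seen in a small simulation (hypothetical data). Six items are generated from two latent factors; after extraction, each item's communality is the sum of its squared loadings across the retained components, so the explained variance is spread over several latent variables rather than concentrated in a single construct.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical data: six items driven by two latent factors
factors = rng.normal(size=(300, 2))
pattern = np.array([[.8, .0], [.7, .0], [.6, .3],
                    [.0, .8], [.0, .7], [.3, .6]])
x = factors @ pattern.T + 0.5 * rng.normal(size=(300, 6))

# Principal-components extraction from the correlation matrix
r = np.corrcoef(x, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(r)
order = np.argsort(eigvals)[::-1]          # sort components by eigenvalue
loadings = eigvecs[:, order[:2]] * np.sqrt(eigvals[order[:2]])

# Communality = sum of squared loadings across the retained components
communalities = (loadings ** 2).sum(axis=1)
print(np.round(communalities, 2))  # each item's variance is split across components
```

Because every item loads to some degree on every component, EFA describes the factor structure of the data but cannot confirm that a prespecified set of items converges on one construct.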
Confirmatory Factor Analysis as a True Validity Test
Structural Equation Modeling (SEM) is the primary technique for CFA (Hair et al., 2016; Schreiber et al., 2006). SEM has two models, namely the measurement model and the structural model. The measurement model describes the relationship between the observed variables and their constructs. The structural model analyzes the structural relationships among constructs according to the research framework. The combination of the two is called the complete model of SEM.
The relationship between operational variables and their constructs can be reflective or formative. In marketing research, most relationships between observation items and their constructs are reflective. More detailed descriptions can be found at www.bilsonsimamora.com.
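Within the CFA measurement model, convergent validity is commonly judged by the average variance extracted (AVE ≥ 0.50) and construct reliability (CR ≥ 0.70), both computed from the standardized loadings. A minimal sketch with hypothetical loadings:

```python
def ave(loadings: list[float]) -> float:
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def construct_reliability(loadings: list[float]) -> float:
    """Composite (construct) reliability from standardized loadings."""
    sum_l = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return sum_l ** 2 / (sum_l ** 2 + error)

# Hypothetical standardized loadings of four observed variables on one construct
loadings = [0.80, 0.75, 0.70, 0.60]
print(round(ave(loadings), 2))                   # -> 0.51: above the 0.50 cut-off
print(round(construct_reliability(loadings), 2)) # -> 0.81: above the 0.70 cut-off
```

Unlike the p-value of a product-moment correlation, these quantities state directly how much of the indicators' variance the construct captures, which is what convergent validity actually asks.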
Empirical Validity
Marketing researchers usually ignore this approach. Empirical validity addresses the question: does the data accurately describe the variable? It consists of concurrent validity and predictive validity.
Concurrent validity concerns whether our data match the actual situation. For example, the officially recognized height of Mount Everest (the gold standard) is 8,849 meters. A climber takes a measurement and obtains 8,510 meters. The climber's data differ from the official version, so they are considered invalid.
The problem is that such an official (gold standard) version is not always available. Suppose we conduct research on customer satisfaction at McDonald's restaurants and interview 1,000 respondents. The result is 67% satisfied and 33% dissatisfied. Do these numbers meet concurrent validity? It cannot be determined because no gold standard exists for the 1,000 respondents we examined.
Predictive validity can partly overcome the absence of a gold standard. It examines whether, with the data we have, the measured variable can predict other related variables whose relationship is scientifically taken as given. For example, theory states that customer satisfaction influences loyalty positively. If a researcher finds a negative result, the data should be concluded to be invalid.
Unfortunately, many marketing researchers cover such failures in the ‘discussion’ section by explaining why the results are not as expected. They even present the failure as a proposed new insight. The problem is then passed on by suggesting that future researchers investigate the issue.
In conclusion, a researcher should ensure content, face, and convergent validity. In addition, marketing researchers should assess discriminant and nomological validity for theory-oriented research. Pearson correlation, the Bartlett test of sphericity, and exploratory factor analysis are not tools for testing convergent validity.
References
Hair, J.F., Black, W.C., Babin, B.J., & Anderson, R.E. (2014). Multivariate Data Analysis (7th ed.). Upper Saddle River: Pearson Education.
Hughes, T., Stone, M., Aravopoulou, E., Tiu Wright, L., & Machtynger, L. (2018). Academic research into marketing: Many publications, but little impact? Cogent Business and Management, 5(1), 1–18. https://doi.org/10.1080/23311975.2018.1516108
Johnson, L. (2021). Analysts Don’t Trust the Data that Drives Marketing Decisions. Adverity. Retrieved May 9, 2023, from https://www.adverity.com/blog/analysts-dont-trust-data-that-drives-marketing-decisions
Lawshe, C.H. (1975). A quantitative approach to content validity. Personnel Psychology, 28, 563-575. Retrieved October 28, 2022, from https://parsmodir.com/wp-content/uploads/2015/03/lawshe.pdf
Maiken, J. M. (1979). What is the appropriate orientation for the marketing academician? In Ferrell et al. (Eds.), Conceptual and Theoretical Developments in Marketing (p. 58). Chicago: American Marketing Association.
Mayer, C.S. (1970). Assessing the Accuracy of Marketing Research. Journal of Marketing Research, 7(3), 285-291. https://doi.org/10.2307/3150284
Neștian, Ștefan A., Vodă, A. I., Tiță, S. M., Guță, A. L., & Turnea, E.-S. (2021). Does Individual Knowledge Management in Online Education Prepare Business Students for Employability in Online Businesses? Sustainability, 13(4), 2091. MDPI AG. Retrieved from http://dx.doi.org/10.3390/su13042091
November, P. (2004). Seven reasons why marketing practitioners should ignore marketing academic research. Australasian Marketing Journal, 12(2), 39–50. https://doi.org/10.1016/S1441-3582(04)70096-8
Quora. (2017). How Accurate Is Marketing Data? Forbes. Retrieved May 8, 2023, from https://www.forbes.com/sites/quora/2017/07/05/how-accurate-is-marketing-data/?sh=6a81b39368e2
Rodrigues, I.B., Adachi, J.D., Beattie, K.A. et al. (2017). Development and validation of a new tool to measure the facilitators, barriers and preferences to exercise in people with osteoporosis. BMC Musculoskelet Disord, 18, 540. https://doi.org/10.1186/s12891-017-1914-5
Rogers, M. (n.d.). 6 Mistakes that Prevent Accurate Marketing Research. Digsite [Business Website]. Retrieved May 8, 2022, from https://www.digsite.com/blog/market-research/6-mistakes-prevent-accurate-market-research#:~:text=What%20is%20accurate%20market%20research,among%20large%20groups%20of%20people.
Schreiber, J.B., Nora, A., Stage, F.K., Barlow, E.A., & King, J. (2006). Reporting structural equation modelling and confirmatory factor analysis results: A review. The Journal of Educational Research, 99(6), 323-338, DOI: 10.3200/JOER.99.6.323-338
Strauss, M. E., & Smith, G. T. (2009). Construct validity: advances in theory and methodology. Annual Review of Clinical Psychology, 5, 1–25. https://doi.org/10.1146/annurev.clinpsy.032408.153639