Management and Marketing - Research Publications

  • Item
    Beyond Experiments
    Diener, E ; Northcott, R ; Zyphur, MJ ; West, SG (SAGE PUBLICATIONS LTD, 2022-07)
    It is often claimed that only experiments can support strong causal inferences and therefore they should be privileged in the behavioral sciences. We disagree. Overvaluing experiments results in their overuse both by researchers and decision makers and in an underappreciation of their shortcomings. Neglect of other methods often follows. Experiments can suggest whether X causes Y in a specific experimental setting; however, they often fail to elucidate either the mechanisms responsible for an effect or the strength of an effect in everyday natural settings. In this article, we consider two overarching issues. First, experiments have important limitations. We highlight problems with external, construct, statistical-conclusion, and internal validity; replicability; and conceptual issues associated with simple X causes Y thinking. Second, quasi-experimental and nonexperimental methods are absolutely essential. As well as themselves estimating causal effects, these other methods can provide information and understanding that goes beyond that provided by experiments. A research program progresses best when experiments are not treated as privileged but instead are combined with these other methods.
  • Item
    Long-Run Effects in Dynamic Systems: New Tools for Cross-Lagged Panel Models
    Shamsollahi, A ; Zyphur, MJ ; Ozkok, O (SAGE PUBLICATIONS INC, 2022-07)
    Cross-lagged panel models (CLPMs) are common, but their applications often focus on “short-run” effects among temporally proximal observations. This addresses questions about how dynamic systems may immediately respond to interventions, but fails to show how systems evolve over longer timeframes. We explore three types of “long-run” effects in dynamic systems that extend recent work on “impulse responses,” which reflect potential long-run effects of one-time interventions. Going beyond these, we first treat evaluations of system (in)stability by testing for “permanent effects,” which are important because in unstable systems even a one-time intervention may have enduring effects. Second, we explore classic econometric long-run effects that show how dynamic systems may respond to interventions that are sustained over time. Third, we treat “accumulated responses” to model how systems may respond to repeated interventions over time. We illustrate tests of each long-run effect in a simulated dataset, and we provide all materials online, including user-friendly R code that automates estimating, testing, reporting, and plotting all effects (see https://doi.org/10.26188/13506861). We conclude by emphasizing the value of aligning specific longitudinal hypotheses with quantitative methods.
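    A minimal sketch of how these long-run quantities can be computed for a simple bivariate cross-lagged (VAR(1)) system with a hypothetical coefficient matrix; this is illustrative R code, not the authors' published materials at the DOI above.

      # Illustrative only: long-run quantities for a bivariate cross-lagged system
      # y_t = B %*% y_{t-1} + e_t, with a made-up coefficient matrix B.
      B <- matrix(c(0.5, 0.1,
                    0.2, 0.4), nrow = 2, byrow = TRUE)
      shock <- c(1, 0)                 # one-time unit intervention on variable 1

      # Impulse response at horizon h: B^h %*% shock (a one-time shock dies out
      # over time if the system is stable).
      irf <- function(B, shock, h) {
        out <- shock
        for (k in seq_len(h)) out <- B %*% out
        out
      }

      # (In)stability check: stable iff all eigenvalues of B lie inside the unit
      # circle; in an unstable system even a one-time shock has permanent effects.
      all(Mod(eigen(B)$values) < 1)

      # Long-run effect of an intervention sustained at a constant level over time:
      # (I - B)^{-1} %*% shock, defined for stable systems.
      solve(diag(2) - B) %*% shock

      # Accumulated response to an intervention repeated every period, up to
      # horizon 20: the sum over k of B^k %*% shock.
      Reduce(`+`, lapply(0:20, function(k) irf(B, shock, k)))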
  • Item
    The I and We of Team Identification: A Multilevel Study of Exhaustion and (In)congruence Among Individuals and Teams in Team Identification
    Junker, NM ; van Dick, R ; Hausser, JA ; Ellwart, T ; Zyphur, MJ (SAGE PUBLICATIONS INC, 2022-02)
    The social identity approach to stress proposes that the beneficial effects of social identification develop through individual and group processes, but few studies have addressed both levels simultaneously. Using a multilevel person–environment fit framework, we investigate the group-level relationship between team identification (TI) and exhaustion, the individual-level relationship for people within a group, and the cross-level moderation effect to test whether individual-level exhaustion depends on the level of (in)congruence in TI between individuals and their group as a whole. We test our hypotheses in a sample of 525 employees from 82 teams. Multilevel polynomial regression analysis revealed a negative linear relationship between individual-level identification and exhaustion. Surprisingly, the relation between group-level identification and exhaustion was curvilinear, indicating that group-level identification was more beneficial at low and high levels compared with medium levels. As predicted, the cross-level moderation of the individual-level relationship by group-level identification was also significant, showing that as individuals became more incongruent in a positive direction (i.e., they identified more strongly than the average team member), they reported less exhaustion, but only if the group-level identification was average or high. These results emphasize the benefits of analyzing TI in a multilevel framework, with both theoretical and practical implications.
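    A minimal sketch, in R with the lme4 package, of the kind of multilevel polynomial regression described above; the variable names, data, and exact specification are hypothetical rather than taken from the study.

      library(lme4)

      # Exhaustion regressed on individual-level identification (ti_ind), team-mean
      # identification (ti_team), their squares, and their interaction, with a
      # random intercept per team.
      fit <- lmer(exhaustion ~ ti_ind + ti_team + I(ti_ind^2) + ti_ind:ti_team +
                    I(ti_team^2) + (1 | team), data = team_data)
      summary(fit)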
  • Item
    From Data to Causes III: Bayesian Priors for General Cross-Lagged Panel Models (GCLM)
    Zyphur, MJ ; Hamaker, EL ; Tay, L ; Voelkle, M ; Preacher, KJ ; Zhang, Z ; Allison, PD ; Pierides, DC ; Koval, P ; Diener, EF (FRONTIERS MEDIA SA, 2021-02-15)
    This article describes some potential uses of Bayesian estimation for time-series and panel data models by incorporating information from prior probabilities (i.e., priors) in addition to observed data. Drawing on econometrics and other literatures, we illustrate the use of informative "shrinkage" or "small variance" priors (including so-called "Minnesota priors") while extending prior work on the general cross-lagged panel model (GCLM). Using a panel dataset of national income and subjective well-being (SWB), we describe three key benefits of these priors. First, they shrink parameter estimates toward zero or toward each other for time-varying parameters, which lends additional support for an income → SWB effect that is not supported with maximum likelihood (ML). This is useful because, second, these priors increase model parsimony and the stability of estimates (keeping them within more reasonable bounds) and thus improve out-of-sample predictions and interpretability, which means estimated effects should also be more trustworthy than under ML. Third, these priors make it possible to estimate models that are under-identified under ML, including higher-order lagged effects and time-varying parameters that cannot be estimated from the observed data alone. In conclusion, we note some of the responsibilities that come with the use of priors which, departing from typical commentaries on their scientific applications, we describe as involving reflection on how best to apply modeling tools to address matters of worldly concern.
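    A minimal sketch of the shrinkage idea behind small-variance priors, using the conjugate posterior mean for a single regression coefficient with known residual variance; this is illustrative only and is not the paper's GCLM estimation.

      # Illustrative only: a small-variance normal prior b ~ N(0, tau2) pulls the
      # estimated slope from its OLS value toward zero, more strongly as tau2 shrinks.
      set.seed(1)
      x <- rnorm(50)
      y <- 0.3 * x + rnorm(50)
      sigma2 <- 1                      # residual variance treated as known here

      b_ols <- sum(x * y) / sum(x^2)   # OLS estimate of the slope
      post_mean <- function(tau2) {
        (sum(x * y) / sigma2) / (sum(x^2) / sigma2 + 1 / tau2)
      }
      c(ols = b_ols, weak_prior = post_mean(10), small_variance_prior = post_mean(0.01))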
  • Item
    From Data to Causes II: Comparing Approaches to Panel Data Analysis
    Zyphur, MJ ; Voelkle, MC ; Tay, L ; Allison, PD ; Preacher, KJ ; Zhang, Z ; Hamaker, EL ; Shamsollahi, A ; Pierides, DC ; Koval, P ; Diener, E (SAGE Publications, 2020-10-01)
    This article compares a general cross-lagged model (GCLM) to other panel data methods based on their coherence with a causal logic and pragmatic concerns regarding modeled dynamics and hypothesis testing. We examine three “static” models that do not incorporate temporal dynamics: random- and fixed-effects models that estimate contemporaneous relationships; and latent curve models. We then describe “dynamic” models that incorporate temporal dynamics in the form of lagged effects: cross-lagged models estimated in a structural equation model (SEM) or multilevel model (MLM) framework; Arellano-Bond dynamic panel data methods; and autoregressive latent trajectory models. We describe the implications of overlooking temporal dynamics in static models and show how even popular cross-lagged models fail to control for stable factors over time. We also show that Arellano-Bond and autoregressive latent trajectory models have various shortcomings. By contrasting these approaches, we clarify the benefits and drawbacks of common methods for modeling panel data, including the GCLM approach we propose. We conclude with a discussion of issues regarding causal inference, including difficulties in separating different types of time-invariant and time-varying effects over time.
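    A minimal sketch of two of the "static" models discussed above, using hypothetical long-format panel data (columns id, x, y); these are illustrative lines, not the article's analyses.

      library(lme4)

      # Random-effects (random-intercept) model of the contemporaneous x-y relation.
      re_fit <- lmer(y ~ x + (1 | id), data = panel_long)

      # Fixed-effects model via the within transformation: demeaning x and y by unit
      # removes all stable unit-specific factors before estimating the slope.
      demeaned <- transform(panel_long,
                            x_w = x - ave(x, id),
                            y_w = y - ave(y, id))
      fe_fit <- lm(y_w ~ x_w, data = demeaned)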
  • Item
    From Data to Causes I: Building A General Cross-Lagged Panel Model (GCLM)
    Zyphur, MJ ; Allison, PD ; Tay, L ; Voelkle, MC ; Preacher, KJ ; Zhang, Z ; Hamaker, EL ; Shamsollahi, A ; Pierides, DC ; Koval, P ; Diener, E (SAGE Publications, 2020-10-01)
    This is the first paper in a series of two that synthesizes, compares, and extends methods for causal inference with longitudinal panel data in a structural equation modeling (SEM) framework. Starting with a cross-lagged approach, this paper builds a general cross-lagged panel model (GCLM) with parameters to account for stable factors while increasing the range of dynamic processes that can be modeled. We illustrate the GCLM by examining the relationship between national income and subjective well-being (SWB), showing how to examine hypotheses about short-run (via Granger-Sims tests) versus long-run effects (via impulse responses). When controlling for stable factors, we find no short-run or long-run effects among these variables, showing national SWB to be relatively stable, whereas income is less so. Our second paper addresses the differences between the GCLM and other methods. Online Supplementary Materials offer an Excel file automating GCLM input for Mplus (with an example also for Lavaan in R) and analyses using additional data sets and all program input/output. We also offer an introductory GCLM presentation at https://youtu.be/tHnnaRNPbXs. We conclude with a discussion of issues surrounding causal inference.
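    A minimal sketch of a basic three-wave cross-lagged panel model in lavaan with hypothetical wide-format variable names (inc1-inc3, swb1-swb3); the full GCLM described above adds parameters for stable factors and a wider range of dynamic processes, and model input is automated for Mplus and lavaan in the paper's Online Supplementary Materials.

      library(lavaan)

      clpm <- '
        # autoregressive and cross-lagged paths across three waves
        inc2 ~ inc1 + swb1
        swb2 ~ swb1 + inc1
        inc3 ~ inc2 + swb2
        swb3 ~ swb2 + inc2
        # wave-specific residual covariances
        inc2 ~~ swb2
        inc3 ~~ swb3
      '
      fit <- sem(clpm, data = panel_wide)
      summary(fit, fit.measures = TRUE)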
  • Item
    Modeling interaction as a complex system
    van Berkel, N ; Dennis, S ; Zyphur, M ; Li, J ; Heathcote, A ; Kostakos, V (Taylor & Francis, 2021)
    Researchers in Human-Computer Interaction typically rely on experiments to assess the causal effects of experimental conditions on variables of interest. Although this classic approach can be very useful, it offers little help in tackling questions of causality in the kind of data that are increasingly common in HCI – capturing user behavior ‘in the wild.’ To analyze such data, model-based regressions such as cross-lagged panel models or vector autoregressions can be used, but these require parametric assumptions about the structural form of effects among the variables. To overcome some of the limitations associated with experiments and model-based regressions, we adopt and extend ‘empirical dynamic modelling’ methods from ecology that lend themselves to conceptualizing multiple users’ behavior as complex nonlinear dynamical systems. Extending a method known as ‘convergent cross mapping’ or CCM, we show how to make causal inferences that do not rely on experimental manipulations or model-based regressions and, by virtue of being non-parametric, can accommodate data emanating from complex nonlinear dynamical systems. By using this approach for multiple users, which we call ‘multiple convergent cross mapping’ or MCCM, researchers can achieve a better understanding of the interactions between users and technology – by distinguishing causality from correlation – in real-world settings.
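    A minimal sketch, in base R, of the core convergent cross mapping idea for a single pair of series; this is an illustrative re-implementation, not the authors' MCCM code.

      # To ask whether x drives y, delay-embed y into a "shadow manifold" and check
      # how well neighbourhoods on that manifold cross-estimate x; if x drives y,
      # cross-map skill should rise ("converge") as lib_size grows.
      cross_map <- function(x, y, E = 3, tau = 1, lib_size = length(y)) {
        idx <- seq((E - 1) * tau + 1, lib_size)
        M <- sapply(0:(E - 1), function(k) y[idx - k * tau])   # delay embedding of y
        preds <- rep(NA_real_, length(idx))
        for (i in seq_along(idx)) {
          d <- sqrt(rowSums((M - matrix(M[i, ], nrow(M), E, byrow = TRUE))^2))
          d[i] <- Inf                                # exclude the target point itself
          nn <- order(d)[1:(E + 1)]                  # E + 1 nearest neighbours
          w <- exp(-d[nn] / max(d[nn][1], 1e-8))     # simplex-style distance weights
          preds[i] <- sum(w * x[idx[nn]]) / sum(w)   # cross-estimate x from y's manifold
        }
        cor(preds, x[idx], use = "complete.obs")     # cross-map skill
      }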
  • Item
    Making Quantitative Research Work: From Positivist Dogma to Actual Social Scientific Inquiry
    Zyphur, MJ ; Pierides, DC (Springer Verlag, 2020-11-01)
    Researchers misunderstand their role in creating ethical problems when they allow dogmas to purportedly divorce scientists and scientific practices from the values that they embody. Cortina (J Bus Ethics. https://doi.org/10.1007/s10551-019-04195-8, 2019), Edwards (J Bus Ethics. https://doi.org/10.1007/s10551-019-04197-6, 2019), and Powell (J Bus Ethics. https://doi.org/10.1007/s10551-019-04196-7, 2019) help us clarify and further develop our position by responding to our critique of, and alternatives to, this misleading separation. In this rebuttal, we explore how the desire to achieve the separation of facts and values is unscientific on the very terms endorsed by its advocates—this separation is refuted by empirical observation. We show that positivists like Cortina and Edwards offer no rigorous theoretical or empirical justifications to substantiate their claims, let alone critique ours. Following Powell, we point to how classical pragmatism understands ‘purpose’ in scientific pursuits while also providing an alternative to the dogmas of positivism and related philosophical positions. In place of dogmatic, unscientific cries about an abstract and therefore always-unobservable ‘reality,’ we invite all organizational scholars to join us in shifting the discussion about quantitative research towards empirically grounded scientific inquiry. This makes the ethics of actual people and their practices central to quantitative research, including the thoughts, discourses, and behaviors of researchers who are always in particular places doing particular things. We propose that quantitative researchers can thus start to think about their research practices as a kind of work, rather than having the status of a kind of dogma. We conclude with some implications that this has for future research and education, including the relevance of research and research methods.
  • Item
    Statistics and Probability Have Always Been Value-Laden: An Historical Ontology of Quantitative Research Methods
    Zyphur, MJ ; Pierides, DC (Springer Verlag, 2020-11-01)
    Quantitative researchers often discuss research ethics as if specific ethical problems can be reduced to abstract normative logics (e.g., virtue ethics, utilitarianism, deontology). Such approaches overlook how values are embedded in every aspect of quantitative methods, including ‘observations,’ ‘facts,’ and notions of ‘objectivity.’ We describe how quantitative research practices, concepts, discourses, and their objects/subjects of study have always been value-laden, from the invention of statistics and probability in the 1600s to their subsequent adoption as a logic made to appear as if it exists prior to, and separate from, ethics and values. This logic, which was embraced in the Academy of Management from the 1960s, casts management researchers as ethical agents who ought to know about a reality conceptualized as naturally existing in the image of statistics and probability (replete with ‘constructs’), while overlooking that S&P logic and practices, which researchers made for themselves, have an appreciable role in making the world appear this way. We introduce a different way to conceptualize reality and ethics, wherein the process of scientific inquiry itself requires an examination of its own practices and commitments. Instead of resorting to decontextualized notions of ‘rigor’ and its ‘best practices,’ quantitative researchers can adopt more purposeful ways to reason about the ethics and relevance of their methods and their science. We end by considering implications for addressing ‘post truth’ and ‘alternative facts’ problems as collective concerns, wherein it is actually the pluralistic nature of description that makes defending a collectively valuable version of reality so important and urgent.