Management and Marketing - Research Publications

  • Item
    From Data to Causes III: Bayesian Priors for General Cross-Lagged Panel Models (GCLM)
    Zyphur, MJ ; Hamaker, EL ; Tay, L ; Voelkle, M ; Preacher, KJ ; Zhang, Z ; Allison, PD ; Pierides, DC ; Koval, P ; Diener, EF (Frontiers Media SA, 2021-02-15)
    This article describes some potential uses of Bayesian estimation for time-series and panel data models by incorporating information from prior probabilities (i.e., priors) in addition to observed data. Drawing on econometrics and other literatures, we illustrate the use of informative "shrinkage" or "small variance" priors (including so-called "Minnesota priors") while extending prior work on the general cross-lagged panel model (GCLM). Using a panel dataset of national income and subjective well-being (SWB), we describe three key benefits of these priors. First, they shrink parameter estimates toward zero or toward each other for time-varying parameters, which lends additional support for an income → SWB effect that is not supported with maximum likelihood (ML). This is useful because, second, these priors increase model parsimony and the stability of estimates (keeping them within more reasonable bounds) and thus improve out-of-sample predictions and interpretability, which means estimated effects should also be more trustworthy than under ML. Third, these priors allow estimating models that are under-identified under ML, enabling higher-order lagged effects and time-varying parameters that are impossible to estimate from observed data alone. In conclusion, we note some of the responsibilities that come with the use of priors which, departing from typical commentaries on their scientific applications, we describe as involving reflection on how best to apply modeling tools to address matters of worldly concern.
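    The core shrinkage idea can be sketched outside the GCLM. This is a minimal illustration (not the authors' code, and a deliberately simplified single-predictor case): with a Gaussian likelihood and a Normal(0, τ²) "small variance" prior on a coefficient, the MAP estimate reduces to ridge regression, which pulls a noisy estimate toward the prior mean of zero. All variable names and the simulated data are hypothetical.

    ```python
    import numpy as np

    # MAP under a Normal(0, tau^2) prior with Gaussian noise is ridge:
    #   beta_map = (X'X + (sigma^2 / tau^2) I)^{-1} X'y
    rng = np.random.default_rng(0)
    T = 30                                         # short series: noisy OLS
    x = rng.normal(size=T)                         # hypothetical predictor
    y = 0.1 * x + rng.normal(size=T)               # weak true effect
    X = x.reshape(-1, 1)

    def ols(X, y):
        return np.linalg.solve(X.T @ X, X.T @ y)

    def map_normal_prior(X, y, sigma2=1.0, tau2=0.1):
        # Smaller tau2 -> stronger shrinkage toward the prior mean (zero).
        k = X.shape[1]
        return np.linalg.solve(X.T @ X + (sigma2 / tau2) * np.eye(k), X.T @ y)

    b_ols = ols(X, y)[0]
    b_map = map_normal_prior(X, y)[0]
    print(f"OLS estimate:           {b_ols:+.3f}")
    print(f"MAP (shrinkage prior):  {b_map:+.3f}")
    ```

    For a single predictor the shrinkage estimate always has smaller magnitude than OLS, which is the "keeping estimates within more reasonable bounds" behavior the abstract describes.
    
    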
  • Item
    From Data to Causes II: Comparing Approaches to Panel Data Analysis
    Zyphur, MJ ; Voelkle, MC ; Tay, L ; Allison, PD ; Preacher, KJ ; Zhang, Z ; Hamaker, EL ; Shamsollahi, A ; Pierides, DC ; Koval, P ; Diener, E (SAGE Publications, 2020-10-01)
    This article compares a general cross-lagged model (GCLM) to other panel data methods based on their coherence with a causal logic and pragmatic concerns regarding modeled dynamics and hypothesis testing. We examine three “static” models that do not incorporate temporal dynamics: random- and fixed-effects models that estimate contemporaneous relationships; and latent curve models. We then describe “dynamic” models that incorporate temporal dynamics in the form of lagged effects: cross-lagged models estimated in a structural equation model (SEM) or multilevel model (MLM) framework; Arellano-Bond dynamic panel data methods; and autoregressive latent trajectory models. We describe the implications of overlooking temporal dynamics in static models and show how even popular cross-lagged models fail to control for stable factors over time. We also show that Arellano-Bond and autoregressive latent trajectory models have various shortcomings. By contrasting these approaches, we clarify the benefits and drawbacks of common methods for modeling panel data, including the GCLM approach we propose. We conclude with a discussion of issues regarding causal inference, including difficulties in separating different types of time-invariant and time-varying effects over time.
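    The abstract's claim that cross-lagged models which ignore stable factors can mislead is easy to demonstrate by simulation. The following hypothetical sketch (not from the paper) builds panel data where x has no true effect on y; both merely share a stable unit-level factor. A pooled cross-lagged regression finds a spurious effect, while a within-unit (demeaned, fixed-effects-style) estimate correctly finds roughly nothing.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_units, n_waves = 200, 10
    u = rng.normal(size=(n_units, 1))              # stable unit factor
    x = 0.8 * u + rng.normal(size=(n_units, n_waves))
    y = 0.8 * u + rng.normal(size=(n_units, n_waves))  # x does NOT cause y

    # Pooled cross-lagged regression of y_t on x_{t-1}.
    x_lag = x[:, :-1].ravel()
    y_now = y[:, 1:].ravel()
    b_pooled = np.cov(x_lag, y_now)[0, 1] / np.var(x_lag)

    # Within-unit version: demean each unit first, removing the stable factor.
    xw = (x[:, :-1] - x[:, :-1].mean(axis=1, keepdims=True)).ravel()
    yw = (y[:, 1:] - y[:, 1:].mean(axis=1, keepdims=True)).ravel()
    b_within = np.cov(xw, yw)[0, 1] / np.var(xw)

    print(f"pooled cross-lagged estimate: {b_pooled:+.3f}")  # spuriously nonzero
    print(f"within-unit estimate:         {b_within:+.3f}")  # near zero
    ```

    Demeaning by unit removes anything constant within a unit, which is why controlling for stable factors changes the substantive conclusion here.
    
    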
  • Item
    From Data to Causes I: Building A General Cross-Lagged Panel Model (GCLM)
    Zyphur, MJ ; Allison, PD ; Tay, L ; Voelkle, MC ; Preacher, KJ ; Zhang, Z ; Hamaker, EL ; Shamsollahi, A ; Pierides, DC ; Koval, P ; Diener, E (SAGE Publications, 2020-10-01)
    This is the first paper in a series of two that synthesizes, compares, and extends methods for causal inference with longitudinal panel data in a structural equation modeling (SEM) framework. Starting with a cross-lagged approach, this paper builds a general cross-lagged panel model (GCLM) with parameters to account for stable factors while increasing the range of dynamic processes that can be modeled. We illustrate the GCLM by examining the relationship between national income and subjective well-being (SWB), showing how to examine hypotheses about short-run (via Granger-Sims tests) versus long-run effects (via impulse responses). When controlling for stable factors, we find no short-run or long-run effects among these variables, showing national SWB to be relatively stable, whereas income is less so. Our second paper addresses the differences between the GCLM and other methods. Online Supplementary Materials offer an Excel file automating GCLM input for Mplus (with an example also for Lavaan in R) and analyses using additional data sets and all program input/output. We also offer an introductory GCLM presentation at https://youtu.be/tHnnaRNPbXs. We conclude with a discussion of issues surrounding causal inference.
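    The short-run versus long-run distinction the abstract draws (Granger-style lagged coefficients versus impulse responses) can be sketched with a plain bivariate VAR(1), which is a stripped-down relative of the GCLM without its stable-factor terms. Everything below is a hypothetical simulation, not the paper's income/SWB analysis: the short-run effect is a single cross-lagged coefficient, while long-run effects are read off powers of the estimated coefficient matrix.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Rows: income_t and swb_t regressed on (income_{t-1}, swb_{t-1}).
    A_true = np.array([[0.5, 0.2],
                       [0.0, 0.6]])
    T = 500
    z = np.zeros((T, 2))
    for t in range(1, T):
        z[t] = A_true @ z[t - 1] + rng.normal(scale=0.5, size=2)

    # OLS fit of z_t on z_{t-1}: coefficient matrix estimates A'.
    X, Y = z[:-1], z[1:]
    A_hat = np.linalg.solve(X.T @ X, X.T @ Y).T

    # Short-run: the cross-lagged coefficient swb_{t-1} -> income_t.
    print(f"short-run swb -> income: {A_hat[0, 1]:+.3f}")

    # Long-run: the impulse response at horizon h is A_hat^h, so a unit
    # shock to swb at t=0 moves income by (A_hat^h)[0, 1] at time h.
    for h in (1, 2, 5):
        irf = np.linalg.matrix_power(A_hat, h)
        print(f"income response to swb shock at h={h}: {irf[0, 1]:+.3f}")
    ```

    In a stable system the impulse responses decay toward zero as the horizon grows, which is what "no long-run effect" looks like in this framework.
    
    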
  • Item
    Modeling interaction as a complex system
    van Berkel, N ; Dennis, S ; Zyphur, M ; Li, J ; Heathcote, A ; Kostakos, V (Taylor & Francis, 2021)
    Researchers in Human-Computer Interaction typically rely on experiments to assess the causal effects of experimental conditions on variables of interest. Although this classic approach can be very useful, it offers little help in tackling questions of causality in the kind of data that are increasingly common in HCI – capturing user behavior ‘in the wild.’ To analyze such data, model-based regressions such as cross-lagged panel models or vector autoregressions can be used, but these require parametric assumptions about the structural form of effects among the variables. To overcome some of the limitations associated with experiments and model-based regressions, we adopt and extend ‘empirical dynamic modelling’ methods from ecology that lend themselves to conceptualizing multiple users’ behavior as complex nonlinear dynamical systems. Extending a method known as ‘convergent cross mapping’ or CCM, we show how to make causal inferences that do not rely on experimental manipulations or model-based regressions and, by virtue of being non-parametric, can accommodate data emanating from complex nonlinear dynamical systems. By using this approach for multiple users, which we call ‘multiple convergent cross mapping’ or MCCM, researchers can achieve a better understanding of the interactions between users and technology – by distinguishing causality from correlation – in real-world settings.
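    The mechanics of convergent cross mapping can be sketched in a few dozen lines. This is a simplified, hypothetical implementation (not the authors' MCCM code) on a standard CCM test system, coupled logistic maps where x drives y: because x's influence is encoded in y's dynamics, a time-delay embedding of y can reconstruct x, and the reconstruction skill grows with library size. The embedding dimension, coupling strength, and library sizes below are illustrative choices.

    ```python
    import numpy as np

    # Coupled logistic maps: x is autonomous and drives y.
    N = 1000
    x = np.empty(N); y = np.empty(N)
    x[0], y[0] = 0.4, 0.2
    for t in range(N - 1):
        x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
        y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.2 * x[t])

    def embed(s, E=2, tau=1):
        # Time-delay embedding: row i is (s_{i+(E-1)tau}, ..., s_i).
        n = len(s) - (E - 1) * tau
        return np.column_stack(
            [s[(E - 1 - j) * tau : (E - 1 - j) * tau + n] for j in range(E)])

    def ccm(driver, driven, L, E=2, tau=1):
        # Cross-map: use the shadow manifold of the DRIVEN series to
        # reconstruct the DRIVER via simplex-style weighted neighbors.
        M = embed(driven, E, tau)
        target = driver[(E - 1) * tau:]
        lib, lib_t = M[:L], target[:L]
        preds = []
        for i in range(L, len(M)):
            d = np.linalg.norm(lib - M[i], axis=1)
            idx = np.argsort(d)[:E + 1]                  # E+1 nearest neighbors
            w = np.exp(-d[idx] / max(d[idx][0], 1e-12))  # exponential weights
            w /= w.sum()
            preds.append(w @ lib_t[idx])
        return np.corrcoef(preds, target[L:])[0, 1]

    rho_small = ccm(x, y, L=50)
    rho_large = ccm(x, y, L=400)
    print(f"cross-map skill, library 50:  {rho_small:.2f}")
    print(f"cross-map skill, library 400: {rho_large:.2f}")
    ```

    Convergence of the cross-map skill with library size, rather than a single correlation, is what CCM treats as evidence of causal influence; MCCM extends this logic across multiple users.
    
    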
  • Item
    Making Quantitative Research Work: From Positivist Dogma to Actual Social Scientific Inquiry
    Zyphur, MJ ; Pierides, DC (Springer Verlag, 2020-11-01)
    Researchers misunderstand their role in creating ethical problems when they allow dogmas to purportedly divorce scientists and scientific practices from the values that they embody. Cortina (J Bus Ethics. https://doi.org/10.1007/s10551-019-04195-8, 2019), Edwards (J Bus Ethics. https://doi.org/10.1007/s10551-019-04197-6, 2019), and Powell (J Bus Ethics. https://doi.org/10.1007/s10551-019-04196-7, 2019) help us clarify and further develop our position by responding to our critique of, and alternatives to, this misleading separation. In this rebuttal, we explore how the desire to achieve the separation of facts and values is unscientific on the very terms endorsed by its advocates—this separation is refuted by empirical observation. We show that positivists like Cortina and Edwards offer no rigorous theoretical or empirical justifications to substantiate their claims, let alone critique ours. Following Powell, we point to how classical pragmatism understands ‘purpose’ in scientific pursuits while also providing an alternative to the dogmas of positivism and related philosophical positions. In place of dogmatic, unscientific cries about an abstract and therefore always-unobservable ‘reality,’ we invite all organizational scholars to join us in shifting the discussion about quantitative research towards empirically grounded scientific inquiry. This makes the ethics of actual people and their practices central to quantitative research, including the thoughts, discourses, and behaviors of researchers who are always in particular places doing particular things. We propose that quantitative researchers can thus start to think about their research practices as a kind of work, rather than having the status of a kind of dogma. We conclude with some implications that this has for future research and education, including the relevance of research and research methods.
  • Item
    Statistics and Probability Have Always Been Value-Laden: An Historical Ontology of Quantitative Research Methods
    Zyphur, MJ ; Pierides, DC (Springer Verlag, 2020-11-01)
    Quantitative researchers often discuss research ethics as if specific ethical problems can be reduced to abstract normative logics (e.g., virtue ethics, utilitarianism, deontology). Such approaches overlook how values are embedded in every aspect of quantitative methods, including ‘observations,’ ‘facts,’ and notions of ‘objectivity.’ We describe how quantitative research practices, concepts, discourses, and their objects/subjects of study have always been value-laden, from the invention of statistics and probability in the 1600s to their subsequent adoption as a logic made to appear as if it exists prior to, and separate from, ethics and values. This logic, which was embraced in the Academy of Management from the 1960s, casts management researchers as ethical agents who ought to know about a reality conceptualized as naturally existing in the image of statistics and probability (replete with ‘constructs’), while overlooking that S&P logic and practices, which researchers made for themselves, have an appreciable role in making the world appear this way. We introduce a different way to conceptualize reality and ethics, wherein the process of scientific inquiry itself requires an examination of its own practices and commitments. Instead of resorting to decontextualized notions of ‘rigor’ and its ‘best practices,’ quantitative researchers can adopt more purposeful ways to reason about the ethics and relevance of their methods and their science. We end by considering implications for addressing ‘post truth’ and ‘alternative facts’ problems as collective concerns, wherein it is actually the pluralistic nature of description that makes defending a collectively valuable version of reality so important and urgent.