Finance - Research Publications

Now showing 1 - 3 of 3
  • Item
    Emotional Engagement and Trading Performance
    Bossaerts, P; Fattinger, F; Rotaru, K; Xu, K (INFORMS, 2020-04-27)
    Extensive research in neuroscience proves that rational decision-making depends on accurate anticipative emotions. We test this proposition in the context of financial markets. We replicate a multiperiod trading game that reliably generates bubbles, while tracking participants’ heart rate and skin conductance. We find that participants whose heart rate changes in anticipation of trading at inflated prices achieve higher earnings. In contrast, when such trades precede heart rate changes, earnings decrease. Higher (lower) earnings accrue to participants whose skin conductance responds to the market value of stock (cash) holdings. Our findings demonstrate that emotions are integral to sound financial decision-making.
  • Item
    Exploiting Distributional Temporal Difference Learning to Deal with Tail Risk
    Bossaerts, P; Huang, S; Yadav, N (MDPI AG, 2020-12-01)
    In traditional Reinforcement Learning (RL), agents learn to optimize actions in a dynamic context based on recursive estimation of expected values. We show that this form of machine learning fails when rewards (returns) are affected by tail risk, i.e., leptokurtosis. Here, we adapt a recent extension of RL, called distributional RL (disRL), and introduce estimation efficiency, while properly adjusting for the differential impact of outliers on the two terms of the RL prediction error in the updating equations. We show that the resulting “efficient distributional RL” (e-disRL) learns much faster and is robust once it settles on a policy. Our paper also provides a brief, nontechnical overview of machine learning, focusing on RL.
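The paper's e-disRL updates are not reproduced in this listing. As a rough illustration of the distributional idea it builds on — tracking the full reward distribution rather than only its expectation — the sketch below estimates several quantiles of a reward stream with a standard quantile-regression update. All names, parameter values, and the stand-in reward distribution are illustrative assumptions, not the paper's implementation:

```python
import random

def quantile_td_update(theta, taus, r, alpha=0.05):
    # Each theta[i] tracks the taus[i]-quantile of the reward distribution.
    # Quantile-regression step: the estimate moves up by alpha*tau when the
    # sample exceeds it, and down by alpha*(1 - tau) when it falls below.
    return [t + alpha * (tau - (1.0 if r < t else 0.0))
            for t, tau in zip(theta, taus)]

random.seed(0)
taus = [0.1, 0.5, 0.9]          # quantile levels to track
theta = [0.0, 0.0, 0.0]         # quantile estimates, initialized at zero
for _ in range(20000):
    r = random.gauss(1.0, 2.0)  # stand-in reward stream, N(mean=1, sd=2)
    theta = quantile_td_update(theta, taus, r)
```

Because the learner carries a set of quantiles rather than a single mean, tail events show up explicitly in the outer estimates; the paper's contribution layers estimation-efficiency corrections on top of this kind of distributional update.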
  • Item
    Separating Probability and Reversal Learning in a Novel Probabilistic Reversal Learning Task for Mice
    Metha, JA; Brian, ML; Oberrauch, S; Barnes, SA; Featherby, TJ; Bossaerts, P; Murawski, C; Hoyer, D; Jacobson, LH (Frontiers Media SA, 2020-01-09)
    The exploration/exploitation tradeoff – pursuing a known reward vs. sampling from lesser known options in the hope of finding a better payoff – is a fundamental aspect of learning and decision making. In humans, this has been studied using multi-armed bandit tasks. The same processes have also been studied using simplified probabilistic reversal learning (PRL) tasks with binary choices. Our investigations suggest that protocols previously used to explore PRL in mice may be beyond their cognitive capacities, with animals performing at a no-better-than-chance level. We sought a novel probabilistic learning task to improve behavioral responding in mice, whilst allowing the investigation of the exploration/exploitation tradeoff in decision making. To achieve this, we developed a two-lever operant chamber task with levers corresponding to different probabilities (high/low) of receiving a saccharin reward, reversing the reward contingencies associated with the levers once animals reached a threshold of 80% responding at the high rewarding lever. We found that, unlike in existing PRL tasks, mice are able to learn and behave near optimally with 80% high/20% low reward probabilities. Altering the reward contingencies towards equality showed that some mice displayed preference for the high rewarding lever with probabilities as close as 60% high/40% low. Additionally, we show that animal choice behavior can be effectively modelled using reinforcement learning (RL) models incorporating learning rates for positive and negative prediction error, a perseveration parameter, and a noise parameter. This new decision task, coupled with RL analyses, facilitates investigation of the neuroscience of the exploration/exploitation tradeoff in decision making.
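The RL model class named in the abstract — separate learning rates for positive and negative prediction errors, a perseveration bonus, and a softmax "noise" parameter — can be sketched as follows. Parameter values, function names, and the simulation schedule are illustrative assumptions, not the fitted values from the paper:

```python
import math
import random

def choose_and_learn(q, last_choice, p_reward, alpha_pos=0.4, alpha_neg=0.2,
                     persev=0.3, beta=3.0):
    # Softmax choice over two levers: beta is the inverse-temperature
    # ("noise") parameter; persev adds a bonus for repeating the last choice.
    vals = [q[a] + (persev if a == last_choice else 0.0) for a in (0, 1)]
    p_high = 1.0 / (1.0 + math.exp(-beta * (vals[1] - vals[0])))
    a = 1 if random.random() < p_high else 0
    r = 1.0 if random.random() < p_reward[a] else 0.0
    delta = r - q[a]  # reward prediction error
    # Asymmetric update: different learning rates for positive vs. negative errors.
    q[a] += (alpha_pos if delta > 0 else alpha_neg) * delta
    return a, r

# Simulate 2000 trials on an 80%/20% schedule (lever 1 is high-reward).
random.seed(1)
q, last, n, high = [0.5, 0.5], None, 2000, 0
for _ in range(n):
    a, _ = choose_and_learn(q, last, p_reward=(0.2, 0.8))
    last = a
    high += (a == 1)
frac_high = high / n
```

Under these illustrative parameters the simulated agent comes to prefer the high-probability lever most of the time, the same qualitative pattern the task was designed to elicit in mice; fitting the four free parameters to trial-by-trial animal choices is what the RL analysis in the paper does.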