Abstract
In the past, the bottom-up study of financial stock markets relied on first-generation multi-agent systems (MAS), which employed zero-intelligence agents and often required the additional implementation of so-called noise traders to emulate price-formation processes. Nowadays, thanks to the tools developed in cognitive science and machine learning, MAS can quantitatively gauge agent learning, a pivotal element for information and stock price estimation in finance. In our previous work, we therefore devised a new-generation MAS stock market simulator, which implements two key features: first, each agent autonomously learns to perform price forecasting and stock trading via model-free reinforcement learning; second, all agents' trading decisions feed a centralised double-auction limit order book, emulating price and volume microstructures. Here, we study which trading strategies (represented as reinforcement learning policies) the agents learn, and the time dependency of their heterogeneity. Our central result is that there are more ways to succeed in trading than to fail. More specifically, we find that: (i) better-performing agents learn more diverse trading strategies over time than worse-performing ones; (ii) they tend to employ a fundamentalist, rather than chartist, approach to asset price valuation; and (iii) their transaction orders are less stringent (i.e. larger bids or lower asks).
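The centralised double-auction limit order book mentioned in the abstract can be illustrated with a minimal sketch. The simulator itself is not reproduced here, so the class name, matching rules (price priority with partial fills), and data layout below are assumptions for illustration only, not the authors' implementation:

```python
import heapq


class LimitOrderBook:
    """Minimal continuous double-auction limit order book (illustrative sketch)."""

    def __init__(self):
        # heapq is a min-heap, so bid prices are stored negated
        # to give the highest bid priority at the heap root.
        self.bids = []  # entries: (-price, order_id, size)
        self.asks = []  # entries: (price, order_id, size)
        self._next_id = 0

    def submit(self, side, price, size):
        """Insert a limit order, matching it against the opposite book first.

        Returns the list of (price, size) trades executed.
        """
        trades = []
        # Opposite side of the book, and the sign undoing bid negation.
        book, sign = (self.asks, 1) if side == "buy" else (self.bids, -1)
        # Cross the spread while a compatible resting order exists.
        while size > 0 and book:
            stored_price, oid, best_size = book[0]
            best_price = sign * stored_price
            if (side == "buy" and price < best_price) or (
                side == "sell" and price > best_price
            ):
                break  # no more resting orders at an acceptable price
            traded = min(size, best_size)
            trades.append((best_price, traded))
            size -= traded
            if traded == best_size:
                heapq.heappop(book)  # resting order fully consumed
            else:
                # Partial fill: heap key (price) is unchanged.
                book[0] = (stored_price, oid, best_size - traded)
        # Rest any unfilled remainder on this order's own side.
        if size > 0:
            rest = self.bids if side == "buy" else self.asks
            key = -price if side == "buy" else price
            heapq.heappush(rest, (key, self._next_id, size))
            self._next_id += 1
        return trades


lob = LimitOrderBook()
lob.submit("sell", 101.0, 5)          # rests on the ask side
lob.submit("sell", 100.0, 3)          # becomes the best ask
trades = lob.submit("buy", 101.0, 6)  # crosses both asks, partially fills one
print(trades)
```

In an agent-based setting such as the one described above, each agent's order would be routed through a single shared book like this one, so that transaction prices and volumes emerge endogenously from order flow rather than from an exogenous price process.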
Original language | English |
---|---|
Pages (from-to) | 1523-1544 |
Number of pages | 22 |
Journal | Computational Economics |
Volume | 61 |
Issue number | 4 |
DOIs | |
State | Published - Apr 2023 |
Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
Keywords
- Agent-based
- Multi-agent system
- Reinforcement learning
- Stock markets
ASJC Scopus subject areas
- Economics, Econometrics and Finance (miscellaneous)
- Computer Science Applications