Research on Decision Biases of Generative-AI-Driven Robo-Advisory and Investor Protection Mechanisms
DOI: 10.25236/gemmsd.2025.087
Author(s)
Xiuling Zhou, Shuang Diao
Corresponding Author
Shuang Diao
Abstract
This paper proposes a governance-ready framework for identifying and mitigating the decision biases that generative AI introduces into robo-advisory. The framework integrates explainability constraints, causal robustness, and compliance executability within a layered architecture spanning data, features, forecasting, optimization, and risk governance, and it coordinates return, risk, and compliance objectives through counterfactual evaluation and constrained decision optimization. A semi-synthetic, empirically calibrated study indicates that, at equal risk budgets, generative-AI-enhanced advisory improves allocation diversity, communication personalization, and scenario responsiveness, while remaining susceptible to prompt injection, hallucination, overconfidence, and interaction-induced drift in risk preferences. An integrated protection bundle of decision evidence cards, policy corridors, robust optimization, and human-in-the-loop governance reduces erroneous extrapolation and strategy whipsawing, improves suitability alignment, and strengthens tail-risk control under stress, yielding a pragmatic deployment roadmap.
Keywords
Generative AI; Robo-advisory; Decision bias; Causal robustness; Investor protection; Explainability; Reinforcement learning; Robust optimization; Suitability; Compliance