The financial advice that conversational robots, or chatbots, give to savers "wrongly increases user confidence", concludes a study by the Prudential Supervision and Resolution Authority (ACPR), France's banking and insurance watchdog. "The explanations provided in the form of a conversation wrongly increase users' confidence in the robo-advisor's incorrect proposals" and can lead them to accept more "ill-suited" advice, summarizes the study, released to the press on Thursday and carried out in partnership with the Telecom Paris engineering school.

A robo-advisor called Robex was developed for the purposes of the study, which involved 256 participants, most of them novices in financial matters, the supervisor specifies. Conversational robots, based on algorithms and artificial intelligence, are in vogue among life insurers and banks as a way to offer financial investments to their clients alongside human advisers.

The "justification of the advice provided through these tools sometimes lacks clarity or remains too generic, without precise explanations based on the client's specific characteristics", the study underlines. Moreover, the explanations given by the robot "do not significantly improve users' understanding of the proposal, nor their ability to follow or disregard the advice, depending on whether it is correct or not".

More troubling, "people with a lower level of education understand an incorrect proposal less well but are more likely to accept it anyway". French financial regulations, the ACPR recalls, "require that advice on the choice of a contract be justified by showing the client the link between the characteristics of the financial proposal and their personal situation". The objective is to "protect the client from the negative consequences of choosing an ill-suited investment", whether it is too risky or fails to meet their objectives, the authority further underlines.