False Advertising: Undisclosed Chatbot Use?

Courses at institutions like Stanford are examining what it means to live in a "post-truth world." Who cares about accuracy? ChatGPT may not. It might tell you, for example: "Only your dog's seller will be legally liable if it bites the newspaper delivery boy, not you."

However, businesses should care.

Legal liability for using bots, among other technologies, can be affected by various regulatory regimes. In September, the FTC opened an investigation into the ways companies use chatbots to interact with their customers, particularly minors. The concern is that these interactions can be deceptive, much like a public figure wearing a fashion item without disclosing a paid sponsorship. That concern is especially apt in light of a recent suicide allegedly linked to a chatbot.

More recently, proposed California legislation could affect your business, too. The proposed rule would require disclosing to certain users that they are interacting with a chatbot. Whether and how this rule, which may be adopted by other states, will be implemented remains to be seen.

That said, analogies to existing bodies of law exist. Fraud, negligent misrepresentation, and various unfair competition laws already cover much of this behavior, albeit as applied to non-bot conduct. After all, using a bot is akin to working through an agent, whether a dog or a human independent contractor. While ChatGPT may say truth is dead, it still matters under the law.