The risks posed by artificially intelligent chatbots are being officially investigated by US regulators for the first time, after the Federal Trade Commission launched a wide-ranging probe into ChatGPT maker OpenAI.

In a letter sent to the Microsoft-backed company, the FTC said it would examine whether people have been harmed by the AI chatbot's creation of false information about them, as well as whether OpenAI has engaged in "unfair or deceptive" privacy and data security practices.

Generative AI products are in the crosshairs of regulators around the world, as AI experts and ethicists sound the alarm over the enormous amount of personal data consumed by the technology, as well as its potentially harmful outputs, ranging from misinformation to sexist and racist comments.

In May, the FTC fired a warning shot at the industry, saying it was "focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers".