- Some of the biggest names in IT have already committed to the EU’s Code of Practice.
- Businesses will need to report on updated AI safety measures by July of this year.
European Union officials are exploring additional measures to increase the transparency of AI technologies such as OpenAI’s ChatGPT.
Companies using generative AI technologies that have the “potential to generate disinformation” should label such material, Vera Jourova, vice president for values and transparency at the European Commission, told the media on June 5. The measure is part of an effort to combat “fake news.”
Reporting Updated AI Safety Measures
Jourova also pointed to the need for “safeguards” to prevent bad actors from using generative AI services, such as Microsoft’s Bing Chat and Google’s Bard, to spread misinformation. In 2018, the European Union (EU) developed its “Code of Practice on Disinformation,” which serves as both an agreement and a tool for actors in the digital sector to self-regulate against misinformation.
Some of the biggest names in IT have already committed to the EU’s strengthened 2022 Code of Practice on Disinformation. According to Jourova, these businesses and others will need to report on updated AI safety measures by July of this year.
She also noted that Twitter had withdrawn from the code of practice, implying that the firm would face greater regulatory scrutiny.
The vice president’s remarks come as the EU prepares its Artificial Intelligence Act, a comprehensive set of rules governing the public use of AI and the corporations deploying it.
Official European legislation won’t take effect for another two to three years, but in the interim, European authorities have pushed businesses to draft a voluntary code of conduct for generative AI developers.