Tech trade group the Information Technology Industry Council (ITI) has published a new guide for global policymakers focused on critical areas of AI technology, part of the organization’s AI Futures Initiative.
The new guidance from ITI covers the need to authenticate AI-generated content, including output from chatbots and from image and audio generators.
“AI continues to dominate policy conversations around the world. As AI-generated content grows in its sophistication and adoption, there is a sense of urgency to leverage the transformative technology for social benefit and to minimize the harms that could come from its use, including the spread of mis- and dis-information,” said ITI Senior Vice President of Policy and General Counsel John Miller.
“ITI’s new policy guide outlines the risks associated with AI-generated content, the authentication techniques and tools available to help address them, and considerations relevant for policy development,” added Miller.
Key recommendations for policymakers include avoiding prescriptive approaches that mandate a single technique, recognizing the risk of over-indexing on one tool and missing the benefits another might provide, and investing further in AI authentication technology.
Other key areas the guidance addresses include promoting public-private partnerships to understand the benefits and limitations of authentication techniques and ensuring AI itself plays a role in detecting AI-generated content.
“We look forward to continued collaboration with global governments as they develop their AI policies and seek to increase their understanding of the ever-evolving AI landscape,” said Miller.