OpenAI, Google, others pledge to watermark AI content for safety - White House

Top AI companies including OpenAI, Alphabet and Meta Platforms have made voluntary commitments to the White House to implement measures such as watermarking AI-generated content to help make the technology safer, the Biden administration said on Friday.

The companies - which also include Anthropic, Inflection, Amazon.com and OpenAI partner Microsoft - pledged to thoroughly test systems before releasing them, to share information about how to reduce risks, and to invest in cybersecurity.

The move is seen as a win for the Biden administration's effort to regulate the technology, which has experienced a boom in investment and consumer popularity.

Since generative AI, which uses data to create new content like ChatGPT's human-sounding prose, became wildly popular this year, lawmakers around the world have been considering how to mitigate the dangers the emerging technology poses to national security and the economy.

U.S. Senate Majority Leader Chuck Schumer, who has called for "comprehensive legislation" to advance and ensure safeguards on artificial intelligence, praised the commitments on Friday and said he would continue working to build and expand on them.

The Biden administration said it would also work to establish an international framework to govern the development and use of AI.

Congress is considering a bill that would require political ads to disclose whether AI was used to create imagery or other content.

President Joe Biden, who is hosting executives from the seven companies at the White House on Friday, is also working on developing an executive order and bipartisan legislation on AI technology.

As part of the effort, the seven companies committed to developing a system to "watermark" all forms of AI-generated content, from text and images to audio and video, so that users will know when the technology has been used.

This watermark, embedded in the content in a technical manner, presumably will make it easier for users to spot deep-fake images or audio that may, for example, show violence that has not occurred, enable a more convincing scam, or distort a photo of a politician to put the person in an unflattering light.

It is unclear how the watermark will remain evident when the content is shared.
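None of the companies has described its watermarking scheme in technical detail. Purely as an illustrative sketch of the general embed-and-detect idea - not any company's actual method - a toy least-significant-bit watermark on 8-bit grayscale pixel values could look like the following; every name and value here is hypothetical.

# Illustrative only: a toy least-significant-bit (LSB) watermark.
# Real provenance schemes (e.g. signed metadata or statistical
# watermarks in model outputs) are far more robust than this.

WATERMARK_BITS = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature

def embed(pixels):
    """Overwrite the least significant bit of the leading pixels
    with the watermark signature."""
    marked = list(pixels)
    for i, bit in enumerate(WATERMARK_BITS):
        marked[i] = (marked[i] & ~1) | bit
    return marked

def detect(pixels):
    """Return True if the signature is present in the leading pixels."""
    return [p & 1 for p in pixels[:len(WATERMARK_BITS)]] == WATERMARK_BITS

if __name__ == "__main__":
    image = [200, 17, 64, 128, 33, 250, 90, 7, 55, 42]  # toy "image"
    marked = embed(image)
    print(detect(marked))  # True  -> content carries the watermark
    print(detect(image))   # False -> unmarked content

A scheme this simple would not survive resizing, recompression or screenshots, which is one reason it is still unclear how the promised watermarks will hold up once content is shared and re-encoded across platforms.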

The companies also pledged to focus on protecting users' privacy as AI develops and on ensuring that the technology is free of bias and not used to discriminate against vulnerable groups. Other commitments include developing AI solutions to scientific problems like medical research and mitigating climate change.
