Meta Platforms will require advertisers to disclose whether they used AI or other digital methods to create or alter political or social issue ads. The Facebook owner announced the policy on Thursday, aiming to curb misinformation ahead of the Canadian federal election.
The disclosure rule applies when an ad contains a photorealistic image or video, or realistic-sounding audio, that was digitally created or altered “to depict a real person as saying or doing something they did not actually say or do.”
It also applies to ads that depict non-existent people or realistic-looking events that did not happen, including ads that alter footage of a real event or portray an event that allegedly occurred “but is not a true image, video or audio recording of the event.”
Last November, Meta said it would extend its ban on new political ads past the U.S. election, a response to the widespread misinformation seen during the previous presidential election.
In 2023, Meta also barred political campaigns and advertisers in regulated industries from using its generative AI advertising products.
However, earlier this year, Meta ended its U.S. fact-checking programs and lifted curbs on discussion of contentious topics such as immigration and gender identity, bowing to pressure from conservatives in what was described as the “biggest overhaul of its approach to managing political content.”
Last December, the Instagram owner also said generative AI had limited impact across its apps in 2024, noting that covert influence campaigns relying on the technology failed “to build a significant audience on Facebook and Instagram or use AI effectively.”
Meta has also introduced a feature that lets users disclose when they share AI-generated images, video, or audio, allowing the platform to label the content.