General

Brookings Institution: The Role of Technology in Online Misinformation

This report outlines the logic of digital personalization, which uses big data to analyze individual interests and determine the types of messages most likely to resonate with particular demographics. Those same technologies can also operate in the service of misinformation through text prediction: tools that receive user inputs and produce new text that is as credible as the original text itself. The report addresses potential policy solutions that can counter digital personalization, closing with a discussion of regulatory or normative tools that are less likely to be effective in countering the adverse effects of digital technology.

 

Tools That Fight Disinformation Online

A list of online tools available to help build understanding of the techniques involved in the dissemination of disinformation; detection and tracking of trollbots and untrustworthy Twitter accounts; tracking and detection of potential manipulation of information spreading on Twitter; tools designed for collaborative verification of internet content; fact-checking tools; verification tools; tools that rate news outlets based on the “probability of disinformation on a specific media outlet”; and many more.

 

Break free from misinformation in an escape room (Video clip)

Center for an Informed Public. June 14, 2022.

The Center’s mission is to resist strategic misinformation, promote an informed society, and strengthen democratic discourse.

A research project of the University of Washington’s Center for an Informed Public, in partnership with the UW Technology & Social Change Group, the UW GAMER Research Group and Puzzle Break, immerses people in an interactive escape room of manipulated media, social media bots, deepfakes, and other forms of deception to teach them about misinformation. These games are designed to improve people’s awareness of misinformation tactics and generate reflection on the emotional triggers and psychological biases that make misinformation so powerful.

 

Image Provenance Analysis for Disinformation Detection

Composite images are the outcome of combining pieces extracted from two or more other images, sometimes with the intent to deceive the observer and convey false narratives. Consider an image suspected of being a composite, and a large corpus of images that might have donated pieces to the composite (such as photos from social media). In this conversation, we will discuss our most recent advances in provenance analysis, concluding with our latest endeavours toward extending it to unveil disinformation campaigns.

Video of event included in this site.

Speakers:

  • Walter Scheirer, Dennis O. Doughty Collegiate Associate Professor, University of Notre Dame
  • Daniel Moreira, Incoming Assistant Professor, Loyola University
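The provenance pipeline the speakers describe is far more sophisticated than any short snippet, but one common building block ‒ flagging corpus images that plausibly donated pieces to a suspected composite by comparing perceptual hashes ‒ can be sketched in a few lines. Everything below is a hypothetical illustration, not the speakers’ method; images are modeled as small grayscale grids (lists of lists of 0–255 ints) to stay dependency-free.

```python
def average_hash(pixels):
    """Bit-list hash: 1 where a pixel is brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits; small distances suggest shared content."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

def likely_donors(suspect, corpus, threshold=10):
    """Rank corpus images whose hash is close to the suspected composite.

    corpus: dict mapping image name -> pixel grid.
    Returns (name, distance) pairs, closest match first.
    """
    sh = average_hash(suspect)
    scored = [(name, hamming_distance(sh, average_hash(img)))
              for name, img in corpus.items()]
    matches = [(name, dist) for name, dist in scored if dist <= threshold]
    return sorted(matches, key=lambda m: m[1])
```

Real provenance analysis goes much further, e.g. matching local regions rather than whole images and assembling the matches into a provenance graph; this sketch only conveys the corpus-search step.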

 

Justice Sees Fake News Disaster, and TSE Seeks Police Power to Act in The Final Stretch of Brazil's Election

UOL. Patricia Campos Mello. October 20, 2022.

The court will vote on a resolution that extends its power to act against misinformation and also bans paid advertising on the internet during the election period.

Chief Justice of the TSE (Supreme Electoral Court), Alexandre de Moraes, had a meeting this Wednesday (19) with representatives of the main social media platforms. At the meeting, he said that the platforms' performance was reasonably good in the first round, but that in this second round the fake news situation is disastrous.

 

Disinformation Day 2022 Considers Pressing Need for Cross-sector Collaboration and New Tools for Fact Checkers

University of Texas. Stacey Ingram-Kaleh. November 9, 2022

October 26, 2022 marked the first annual Disinformation Day hosted by Good Systems’ “Designing Responsible AI Technologies to Curb Disinformation” research team. Approximately 150 attendees from across the globe came together virtually to discuss challenges and opportunities in curbing the spread of digital disinformation. Thought leaders representing a range of disciplines and sectors examined the needs of fact checkers, explored issues of bias, fairness, and justice in mis- and disinformation, and outlined next steps for addressing these pressing issues together.

 

European Commission to revise Code of Practice against Disinformation

Lexology. Herbert Smith Freehills. March 31, 2022

The Code of Practice against Disinformation was published in September 2018 and was subsequently signed by Facebook, Google, Mozilla and Twitter, among others. The Code is a self-regulatory document and, following European Commission assessments and reports on adherence, guidance was issued in May 2021 to address shortfalls in the Code of Practice and provide a more robust monitoring framework. Most recently, the Commission announced that there will be 26 new signatories joining the drafting process for a revised version of the Code, expected to be released by the end of March 2022.

 

Brief: Disinformation Risk in the United States Online Media Market, October 2022

Global Disinformation Index. October 21, 2022

GDI’s research looked at 69 U.S. news sites, selected on the basis of online traffic and social media followers, as well as geographical coverage and racial, ethnic and religious community representation. The index scores sites across 16 indicators – indicators which themselves contain many, many more individual data points – and generates a score for the degree to which a site is at risk of disinforming its readers.

The data from the study corroborates today’s general impression that hyperbolic, emotional, and alarmist language is a feature of the U.S. news media landscape.
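GDI does not publish its exact methodology in this brief, so the following is only a hypothetical sketch of the general shape of such an index: many per-indicator scores rolled up into one site-level risk score, here via a simple weighted mean. The indicator names and weights are invented for illustration.

```python
def disinformation_risk(indicator_scores, weights=None):
    """Combine 0-100 indicator scores into a single 0-100 site risk score.

    indicator_scores: dict mapping indicator name -> score (higher = riskier).
    weights: optional dict of relative weights; defaults to equal weighting.
    """
    if weights is None:
        weights = {name: 1.0 for name in indicator_scores}
    total_weight = sum(weights[name] for name in indicator_scores)
    weighted = sum(score * weights[name]
                   for name, score in indicator_scores.items())
    return weighted / total_weight

site = {"sensational_headlines": 80, "byline_transparency": 20,
        "corrections_policy": 50}
print(disinformation_risk(site))  # equal weights -> (80+20+50)/3 = 50.0
```

A real index would also need to normalize indicators measured on different scales and justify each weight, which is where most of the methodological work lies.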

 

The Truth in Fake News: How Disinformation Laws Are Reframing the Concepts of Truth and Accuracy on Digital Platforms

BRILL: In European Convention on Human Rights Law Review. Paolo Cavaliere. October 11, 2022  

The European Union’s (EU) strategy to address the spread of disinformation, and most notably the Code of Practice on Disinformation and the forthcoming Digital Services Act, tasks digital platforms with a range of actions to minimize the distribution of issue-based and political adverts that are verifiably false or misleading. This article discusses the implications of the EU’s approach with a focus on its categorical approach, specifically what it means to conceptualize disinformation as a form of advertisement and by what standards digital platforms are expected to assess the truthful or misleading nature of the content that they distribute because of this categorization. The analysis will show how the emerging EU anti-disinformation framework marks a departure from the European Court of Human Rights’ consolidated standards of review for public interest and commercial speech and the tests utilized to assess their accuracy.

 

Disinformation and freedom of expression during armed conflict

UN Web TV. October 19, 2022

At the 77th Session of the UN General Assembly, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression presented her new report on disinformation and freedom of opinion and expression during armed conflicts.

 

Hate and disinformation spiked after Musk's Twitter takeover | View

Euronews. Heather Dannyelle Thompson. November 24, 2022

In just over two weeks, Musk’s takeover of Twitter has rocked the internet. Hate speech and disinformation have already spiked in what appears to be mostly trolls and right-wing extremists seeking to test the boundaries of Musk’s approach to unchecked free speech on his newly acquired platform.

The chaos at Twitter comes at a distinct time of transformation of the internet. Not only is the online space facing regulation globally, but the advances in artificial intelligence that power tomorrow’s tools of disinformation are not slowing down either.

 

Congressman Schiff, Senator Whitehouse Urge Meta to Maintain Policies on Election Misinformation, Uphold Trump Suspension

News Release. December 14, 2022.

Congressman Adam Schiff (D-Calif.) and Senator Sheldon Whitehouse (D-R.I.) sent a letter to Meta's President of Global Affairs, Nicholas Clegg, urging Meta to maintain its commitment to keeping dangerous election denial content off its platform.

“After each election cycle, social media platforms like Meta often alter or roll back certain misinformation policies, because they are temporary and specific to the election season,” Schiff and Whitehouse write. “Doing so in this current environment, in which election disinformation continuously erodes trust in the integrity of the voting process, would be a tragic mistake. Meta must commit to strong election misinformation policies year-round, as we are still witnessing falsehoods about voting and the prior elections spreading on your platform.”

 

Handbook to combat CBRN disinformation

United Nations Interregional Crime and Justice Institute. January 13, 2023.

To produce the Handbook to combat disinformation, UNICRI has monitored several social media platforms, paying specific attention to the role of violent non-state actors, namely: violent extremists; terrorist organizations (particularly those associated with ISIL, also known as Da’esh and Al-Qaida); and organized criminal groups.  The Handbook aims at enhancing understanding of CBRN disinformation on social media while developing competencies to prevent and respond to disinformation with a specific focus on techniques for debunking false information. It also equips practitioners with the competencies to effectively analyse, understand and respond to CBRN disinformation in the media and on social media platforms.

 

DISINFORMATION: Top Risks of 2023

EURASIA GROUP. Ian Bremmer & Cliff Kupchan

In its annual Top Risks report for 2023, the U.S.-based geopolitical risk firm Eurasia Group warns that rapid-fire advancements in artificial intelligence could help misinformation thrive in the year ahead. The “weapons of mass disruption” emerging from these speedy technological innovations “will erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets,” the report says.

Artificial Intelligence and Deepfakes

Designing Responsible AI Technologies to Curb Disinformation

University of Texas. October 2022

The rise of social media and the growing scale of online information have led to a surge of intentional disinformation and incidental misinformation. It is increasingly difficult to tell fact from fiction, and the challenge is more complex than simply differentiating “fake news” from simple facts. This project uses qualitative methods and machine learning models to understand how digital disinformation arises and spreads, how it affects different groups in society, and how to design effective human-centred interventions.

 

What a Pixel Can Tell: Text-to-Image Generation and its Disinformation Potential

Disinfo Rada: Democracy Reporting International. September 2022

In recent years, many new tools and tactics have been used to generate and spread disinformation online. While the wider public and experts grapple with the emergence of deepfakes ‒ images, video or audio altered using artificial intelligence (AI) that are difficult to detect as false ‒ a whole new threat is emerging on the horizon: fully synthetic content, such as hyperrealistic images created from text prompts, powered by AI. In contrast to current methods, this technology does not distort existing photos or videos ‒ it creates entirely new ones. When used for disinformation purposes, text-to-image generation models enable disinformation actors to produce imagery to support false narratives. To gain a better understanding of how much of a threat text-to-image generation poses to democracy, we interviewed leading global experts who work directly in the fields of AI, disinformation and text-to-image generation.

 

High-school students should be taught to spot fake videos and disinformation, public safety minister says

Globe and Mail. Marie Woolf. November 18, 2022

High-school students should be educated about how to spot fake videos and photos and disinformation, because they are so prevalent online, federal Public Safety Minister Marco Mendicino says.

Speaking from the G7 summit in Germany, the minister said disinformation is “one of the most pervasive threats to all our democracies right now” and more needs to be done to raise awareness and equip Canadians to navigate its dangers.

 

Is Europe ready for an information war?

Debating Europe. June 23, 2022

What does it mean to “win the information war”? During the Russian invasion of Ukraine, headlines have proclaimed Ukraine to be “winning” its information war against Russia. But what is an information war? Is it a fancy name for propaganda? Does it also include, for example, controlling the flow of information to open source platforms (which can then be geolocated using Open Source Intelligence (OSINT) techniques)? What might future information war mean in a world of the metaverse and Extended Reality (XR)?

 

There’s a Fix to Disinformation: Make Social Media Algorithms Transparent

The author cites a number of examples and makes the case for considering algorithmic transparency as part of our national defence.

 

Algorithmic Transparency

Algorithmic transparency is openness about the purpose, structure and underlying actions of the algorithms used to search for, process and deliver information. An algorithm is a set of steps that a computer program follows in order to make a decision about a particular course of action.
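As a toy illustration of the idea (all rules, names and thresholds below are invented), an algorithm can be written to report the steps behind its decision alongside the decision itself, so that its purpose, structure and underlying actions are open to inspection:

```python
def rank_article(shares, source_reliability, reading_time_sec):
    """Score an article for a hypothetical feed and explain each step.

    Returns (score, steps), where steps is a human-readable trace of
    every weighted contribution to the final score.
    """
    steps = []
    score = 0.0

    # Each rule's weight is published alongside its contribution.
    score += shares * 0.1
    steps.append(f"+{shares * 0.1:.1f} from {shares} shares (weight 0.1)")

    score += source_reliability * 2.0
    steps.append(f"+{source_reliability * 2.0:.1f} from source reliability (weight 2.0)")

    score += reading_time_sec * 0.01
    steps.append(f"+{reading_time_sec * 0.01:.1f} from reading time (weight 0.01)")

    return score, steps
```

Publishing the weights and the per-step explanation, rather than just the final number, is the kind of openness about purpose and underlying actions that the definition above describes.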

 

Algorithmic Transparency in the Public Sector

A YouTube video presented by Natalia Domagala of AI Ethics: Global Perspectives. Drawing on her professional experience working on data ethics, open data and open government, Domagala explains the concept of algorithmic transparency and why it is a critical need in our society today. She shares different examples of algorithmic transparency measures from Europe and North America with a special focus on the UK’s new Algorithmic Transparency Standard. She concludes her module with an outlook on the field of algorithmic transparency over the next few years and suggestions on what actors in the field ought to focus on going forward.

 

Is AI the only antidote to disinformation?

World Economic Forum. July 20, 2022.

The stability of our society is more threatened by disinformation than anything else we can imagine. It is a pandemic that has engulfed small and large economies alike. People around the world face threats to life and personal safety because of the volumes of emotionally charged and socially divisive pieces of misinformation, much of it fuelled by emerging technology. This content either manipulates the perceptions of people or propagates absolute falsehoods in society.

 

ChatGPT: Faking it, a genuine artificial concern

The Economic Times. January 23, 2023.

Spread of misinformation can have serious consequences, from influencing public opinion to undermining trust in institutions. With the ability to generate large amounts of text quickly and convincingly, generative AI tools like ChatGPT could be used to create and disseminate fake news on a large scale.

 

As Deepfakes Flourish, Countries Struggle With Response

NYT. Tiffany Hsu. Jan 22, 2023.

Deepfake technology — software that allows people to swap faces, voices and other characteristics to create digital forgeries — has been used in recent years to make a synthetic substitute of Elon Musk that shilled a cryptocurrency scam, to digitally “undress” more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives’ voices on the phone. In most of the world, the authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.

 

From deepfakes to ChatGPT, misinformation thrives with AI advancements: report

Global News. Rachel Gilmore. Jan 4, 2023

Rapid-fire advancements in artificial intelligence could help misinformation thrive in the year ahead, a new report is warning. That’s according to the Top Risk Report for 2023, an annual document from the U.S.-based geopolitical risk analysts at the Eurasia Group. The “weapons of mass disruption” that are emerging from speedy technological innovations “will erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets,” the report said.

 

Risk of extinction by AI should be global priority, say experts

Guardian. Geneva Abdul. May 30, 2023 

A group of leading technology experts from across the world have warned that artificial intelligence technology should be considered a societal risk and prioritized in the same class as pandemics and nuclear wars. The statement, signed by hundreds of executives and academics, was released by the Center for AI Safety on Tuesday amid growing concerns over regulation and risks the technology posed to humanity. Signatories included the chief executives of Google’s DeepMind, the ChatGPT developer OpenAI, and the AI startup Anthropic.

 

How to avoid falling for misinformation, fake AI images on social media

Washington Post. Heather Kelly. May 22, 2023.

The rapid spread of easily accessible AI tools is muddying the waters even further. Look no further than the mystery of the Pope in an expensive-looking puffy coat, or a recent, quickly debunked fake tweet claiming there was an explosion near the Pentagon. How do you know what to trust, what not to share and what to flag to tech companies? Here are some basic tools everyone should use when consuming breaking news online.

 

Apple co-founder says AI may make scams harder to spot

BBC News. Philippa Wain. May 9, 2023.

Apple co-founder Steve Wozniak has warned that artificial intelligence (AI) could make scams and misinformation harder to spot. Mr. Wozniak says he fears the technology will be harnessed by “bad actors”. Speaking to the BBC, he said AI content should be clearly labelled and that regulation was needed for the sector.

 

AI and Content Policy

Dig Watch. June 2023.

Overall, by automating and simplifying online content moderation procedures, AI has the potential to improve the enforcement of content policies. However, difficulties and ethical concerns must be addressed, such as algorithmic biases that unintentionally result in the unjust targeting or exclusion of particular groups and issues like algorithmic transparency and accountability. In addition, the proliferation of AI-generated content adds considerably to the debate.

 

Take risk of AI, disinformation seriously – UN Chief

Vatican News. Zeus Legaspi. June 13, 2023.

Member states must address the “rapid spread of lies and hate” in the digital ecosystem, says United Nations (UN) Secretary-General António Guterres, as the UN launches a new report to promote information integrity on digital platforms.

UN chief António Guterres called on countries to seriously heed the warnings over the risks posed by artificial intelligence (AI), particularly generative AI, which he said are “loudest from the developers who designed it”.

 

Guardrails Urgently Needed to Contain “Clear and Present Global Threat” of Online Mis- and Disinformation and Hate Speech, says UN Secretary-General

U.N. Africa Renewal. António Guterres. June 11, 2023.

The world must address the “grave global harm” caused by the proliferation of hate and lies in the digital space, United Nations Secretary-General António Guterres said at the launch of his report into information integrity on digital platforms.

Alarm over the potential threat posed by the rapid development of generative artificial intelligence must not obscure the damage already being done by digital technologies that enable the spread of online hate speech, mis- and disinformation, stressed the UN chief.

 

Fakery and confusion: Campaigns brace for explosion of AI in 2024

Politico. Madison Fernandez, June 18, 2023.

Dozens of Democratic strategists gathered on Zoom on Wednesday for a novel meeting. The topic: How to combat an expected explosion of AI-generated fake content flooding TV airwaves and mailboxes in 2024.

The meeting, hosted by the progressive group Arena, drew more than 70 officials. They talked about how generative AI could produce misinformation and disinformation at a pace and scale campaigns have not experienced before.

 

A tsunami of AI misinformation will shape next year’s knife-edge elections

The Guardian. John Naughton. August 12, 2023.

If you thought social media had a hand in getting Trump elected, watch what happens when you throw AI into the mix. And it is precisely in that respect that 2024 will be different from 2016: there was no AI way back then, but there is now. That is significant because generative AI – tools such as ChatGPT, Midjourney, Stable Diffusion et al – are absolutely terrific at generating plausible misinformation at scale. And social media is great at making it go viral. Put the two together and you have a different world.

 

A.I.’s unlearning problem: Researchers say it’s virtually impossible to make an A.I. model ‘forget’ the things it learns from private user data

Yahoo Finance. Stephen Pastis. August 30, 2023.

It’s nearly impossible to remove a user’s data from a trained A.I. model without resetting the model and forfeiting the extensive money and effort put into training it. To use a human analogy, once an A.I. has “seen” something, there is no easy way to tell the model to “forget” what it saw. And deleting the model entirely is also surprisingly difficult.

This represents one of the thorniest, unresolved, challenges of our incipient artificial intelligence era, alongside issues like A.I. "hallucinations" and the difficulties of explaining certain A.I. outputs. According to many experts, the A.I. unlearning problem is on a collision course with inadequate regulations around privacy and misinformation.

 

Google to make disclosure of AI-generated content mandatory for election advertisers

Reuters. September 6, 2023.

Google will make it mandatory for all election advertisers to add a clear and conspicuous disclosure starting mid-November when their ads contain AI generated content, the company said on Wednesday. The policy would apply to image, video, and audio content, across its platforms, the company said in a blog post.

Deepfakes created by AI algorithms threaten to blur the lines between fact and fiction, making it difficult for voters to distinguish the real from the fake.

 

The evolution of disinformation: A deepfake future

CSIS. October 2023

This report is based on the views expressed during, and short papers contributed by speakers at, a workshop organized by the Canadian Security Intelligence Service as part of its Academic Outreach and Stakeholder Engagement (AOSE) and Analysis and Exploitation of Information Sources (AXIS) programs. Offered as a means of supporting ongoing discussion, the report does not constitute an analytical document, nor does it represent any formal position of the organizations involved.

 

Generative AI: A Double-Edged Sword in Shaping Canada’s Future Elections

The Niagara Independent. November 17, 2023

Generative Artificial Intelligence (AI) stands poised to revolutionize the landscape of Canadian elections, promising both transformative advancements and worrisome challenges. As this technology rapidly evolves, its potential impact on the democratic process demands close scrutiny, navigating a complex terrain of opportunities and risks.

 

AI ‘Tom Cruise’ joins fake news barrage targeting Olympics

Politico. Seb Starcevic. November 10, 2023.

In a dramatic voiceover interspersed with footage of International Olympic Committee chief Thomas Bach, Hollywood A-lister Tom Cruise warns that “corrupt officials” are “slowly and painfully destroying the Olympic sports that have existed for thousands of years.”

The bizarre revelation comes as part of a purported four-part Netflix documentary series, “Olympics Has Fallen,” alleging corruption at the heart of the International Olympic Committee (IOC).

Just one big problem: The entire thing is fake.

 

The US isn’t ready for AI fake news.

Renew Democracy Initiative. James Lewis. November 30, 2023.

The United States is just beginning to come to grips with the intersection of disinformation and AI technology, which has been more frightening for what it seems to portend for our future than its actual influence on recent elections. With the rapid advancement of AI technology, however, experts are suggesting that 2024 could be the tipping point.

 

Confronting the Threat of Deepfakes in Politics

Tech Policy Press. Maggie Engler and Numa Dhamani. November 10, 2023.

Next year, more than 2 billion voters will head to the polls in a record-breaking number of elections around the world, including in the United States, India, and the European Union. Deepfakes are already emerging in the lead-up to the 2024 U.S. presidential election, from Trump himself circulating a fake image of himself kneeling in prayer on Truth Social to a DeSantis campaign video showing Trump embracing Dr. Anthony Fauci, former Chief Medical Advisor to the President of the United States.

The most obvious concern with deepfakes is the ability to create audio or video recordings of candidates saying things that they never said. But perhaps a bigger concern revolves around the ability to distinguish the authenticity of a candidate’s statement. During the 2016 U.S. presidential election, a 2005 Access Hollywood tape captured Trump saying that he could grope women without their consent. Trump initially apologized, but then later suggested that it was fake news. In a world where the public does not know if an audio or video of a candidate is real or fake, will individuals caught in a lie or scandal now blame it on a deepfake?

 

Generative AI is Already Catalyzing Disinformation. How Long Until Chatbots Manipulate Us Directly?

Tech Policy Press. Zak Rogoff. October 23, 2023.

Over the last few years, it's become clear that unscrupulous companies and politicians are willing to pursue any new technology that promises the ability to manipulate opinion at scale. Generative AI represents the latest wave of such technologies. Despite the fact that the potential harms are already apparent, law- and policymakers have to date failed to put the necessary guardrails in place.

 

AI-driven misinformation ‘biggest short-term threat to global economy’

The Guardian. Larry Elliott. January 10, 2024.

A wave of artificial intelligence-driven misinformation and disinformation that could influence key looming elections poses the biggest short-term threat to the global economy, the World Economic Forum (WEF) has said.

In a deeply gloomy assessment, the body that convenes its annual meeting in Davos next week expressed concern that politics could be disrupted by the spread of false information, potentially leading to riots, strikes and crackdowns on dissent from governments.

 

Explicit fake images of Taylor Swift prove laws haven't kept pace with tech, experts say

CBC. Rhianna Schmunk. January 26, 2024.

Explicit AI-generated photos of one of the world's most famous artists spread rapidly across social media this week, highlighting once again what experts describe as an urgent need to crack down on technology and platforms that make it possible for harmful images to be shared.

 

Testimony of Sam Gregory, Executive Director, WITNESS, Before the United States House Committee on Oversight and Accountability, Subcommittee on Cybersecurity, Information Technology, and Government Innovation: ‘Advances in Deepfake Technology’

WITNESS. November 8, 2023.

WITNESS’s efforts have included tracking technical developments, contributing to technical standards development, engagement on detection and authenticity approaches that support consumer literacy, analysis and real-time response to contemporary usages, research, and consultative work with rights defenders, journalists, content creators, technologists, and other members of civil society to understand harmful misuses. The testimony is further informed by three decades of experience helping communities, citizens, journalists, and human rights defenders create trustworthy photos and videos related to critical societal issues and protect themselves against the misuse of their content and harmful attacks on themselves and their work.

 

AI-Generated Fake News Is Coming to an Election Near You

WIRED. January 22, 2024.

My prediction for 2024 is that AI-generated misinformation will be coming to an election near you, and you likely won’t even realize it. In fact, you may have already been exposed to some examples. In May of 2023, a viral fake story about a bombing at the Pentagon was accompanied by an AI-generated image which showed a big cloud of smoke. This caused public uproar and even a dip in the stock market. Republican presidential candidate Ron DeSantis used fake images of Donald Trump hugging Anthony Fauci as part of his political campaign. By mixing real and AI-generated images, politicians can blur the lines between fact and fiction, and use AI to boost their political attacks.

 

AI-powered disinformation is spreading — is Canada ready for the political impact?

CBC. Catherine Tunney. January 18, 2024.

The rise of deepfakes comes as billions of people around the world prepare to vote this year. Just days before Slovakia's national election last fall, a mysterious voice recording began spreading a lie online.

The manipulated file made it sound like Michal Simecka, leader of the Progressive Slovakia party, was discussing buying votes with a local journalist. But the conversation never happened; the file was later debunked as a "deepfake" hoax.

 

Tech giants sign voluntary pledge to fight election-related deepfakes

TechCrunch. Kyle Wiggers. February 16, 2024.

Tech companies are pledging to fight election-related deepfakes as policymakers amp up pressure.

On February 16th, at the Munich Security Conference, vendors including Microsoft, Meta, Google, Amazon, Adobe and IBM signed an accord signaling their intention to adopt a common framework for responding to AI-generated deepfakes intended to mislead voters. Thirteen other companies, including AI startups OpenAI, Anthropic, Inflection AI, ElevenLabs and Stability AI and social media platforms X (formerly Twitter), TikTok and Snap, joined in signing the accord, along with chipmaker Arm and security firms McAfee and TrendMicro.

 

4 ways to future-proof against deepfakes in 2024 and beyond

World Economic Forum. Anna Maria Collard. February 12, 2024.

With the increased accessibility of genAI tools, today’s deepfake creators do not need technical know-how or deep pockets to generate hyper-realistic synthetic video, audio or image versions of real people. For example, the researcher behind CounterCloud used widely available AI tools to run a fully automated disinformation research project at a cost of less than $400 per month, illustrating how cheap and easy it has become to create disinformation campaigns at scale.

 

A Tech Accord to Combat Deceptive Use of AI in 2024 Elections

AI Elections accord. February 2024.

2024 will bring elections to more people than any year in history, with 40+ countries and more than four billion people choosing their leaders and representatives through the right to vote. At the same time, the rapid development of artificial intelligence, or AI, is creating new opportunities as well as challenges for the democratic process. All of society will have to lean into the opportunities afforded by AI and to take new steps together to protect elections and the electoral process during this exceptional year.

 

2024 will be the year of democracy - or disinformation

King’s College London. Resham Kotecha & Elena Simpiri. February 27, 2024.

In this year of elections around the world, how will AI shape or harm democracies? The authors of this article explore the impact AI is already having, whether states are ready for the sheer volume of rule-breaking we might see and why everyone should take a more critical approach to what we see.

With nearly 2 billion people heading to the polls this year, 2024 is being touted as the year of democracy. Key elections are being held in the UK, the US, the EU, and India, with many other countries also set to hold elections over the course of the year. Along with many organisations working with data and AI, at the Open Data Institute, we’re cognisant of the vast opportunities - and significant challenges - that these technologies can play in shaping and harming our democracies.

 

Tracking AI-enabled Misinformation: 725 ‘Unreliable AI-Generated News’ Websites (and Counting), Plus the Top False Narratives Generated by Artificial Intelligence Tools

NewsGuard. McKenzie Sadeghi et al. February 2024.

NewsGuard has so far identified 725 AI-generated news and information sites operating with little to no human oversight and is tracking false narratives produced by artificial intelligence tools. From unreliable AI-generated news outlets operating with little to no human oversight, to fabricated images produced by AI image generators, the rollout of generative artificial intelligence tools has been a boon to content farms and misinformation purveyors alike.

 

Blinken warns risks of disinformation, falsehoods over U.S. Elections

Video. Global News. March 18, 2024

U.S. Secretary of State Antony Blinken said at a democracy summit in Seoul, South Korea, that 2024 is an “extraordinary election year,” highlighting the risks of disinformation and falsehoods in cyberspace. Blinken repeated Washington’s accusation that Russia and China are behind global campaigns aimed at manipulating information, while some European officials have also accused Russia of conducting disinformation campaigns using AI.

 

How AI-generated disinformation might impact this year’s elections and how journalists should report on it

Reuters Institute. Marina Adami. March 15, 2024

From satire to robocalls, generative AI is entering politics in a crucial year. Four experts reflect on its possible consequences and on how to cover it.

A phone call from the US president, covert recordings of politicians, false video clips of newsreaders, and surprising photographs of celebrities. A wide array of media can now be generated or altered with artificial intelligence, sometimes mimicking real people, often very convincingly.

 

Nations target AI disinformation ahead of elections, and other digital technology stories you need to know

World Economic Forum. Cathy Li. March 12, 2024

Government agencies and businesses in North America and Europe have announced plans to curb AI misinformation ahead of elections scheduled this year.

 

Deepfakes are still new, but 2024 could be the year they have an impact on elections

The Conversation. Eileen Culloty. March 19, 2024

Disinformation caught many people off guard during the 2016 Brexit referendum and US presidential election. Since then, a mini-industry has developed to analyse and counter it.

Yet despite that, we have entered 2024 – a year of more than 40 elections worldwide – more fearful than ever about disinformation. In many ways, the problem is more challenging than it was in 2016.

Advances in technology since then are one reason for that, in particular the development that has taken place with synthetic media, otherwise known as deepfakes. It is increasingly difficult to know whether media has been fabricated by a computer or is based on something that really happened.

 

Platforms, Algorithms and Blockchain

Handshaking with God: the 2022 Strengthened Code of Practice on Disinformation and its Impact on Digital Sovereignty

Media Laws. Ylenia Maria Citino. November 8, 2022

 

Twitter takeover: fears raised over disinformation and hate speech

The Guardian. Dan Milmo & Alex Hern. October 28, 2022

Elon Musk’s Twitter acquisition has been polarizing, sparking reactions from politicians, regulators and non-profits across different continents.

Some have expressed concerns about potential changes to Twitter’s content moderation policies now that it’s in the hands of the Tesla billionaire, while others celebrated how they expect the platform’s newly minted leader will handle content and speech on Twitter.

Senior politicians in the UK and Europe on Friday warned Musk over content moderation on Twitter, with the EU stressing the platform will “fly by our rules” and a UK minister expressing concerns over hate speech under the billionaire’s ownership.

 

Social Media and the 2022 Midterm Elections: Anticipating Online Threats to Democratic Legitimacy

Center for American Progress. E. Simpson, A. Conner, A. Maciolek. November 3, 2022.

Social media companies continue to allow attacks on U.S. democracy to proliferate on their platforms, undermining election legitimacy, fuelling hate and violence, and sowing chaos.

This issue brief outlines what is needed from social media companies and identifies three of the top threats they pose to the 2022 midterm elections—the season opener for the 2024 presidential election.

 

Truth Cops: Leaked Documents Outline DHS’s Plans to Police Disinformation

The Intercept. Ken Klippenstein & Lee Fang. October 31, 2022

The Department of Homeland Security is quietly broadening its efforts to curb speech it considers dangerous, an investigation by The Intercept has found. Years of internal DHS memos, emails, and documents — obtained via leaks and an ongoing lawsuit, as well as public documents — illustrate an expansive effort by the agency to influence tech platforms.

 

As 2022 midterms approach, disinformation on social media platforms continues

PBS. David Klepper (AP). October 21, 2022  

With less than three weeks before the polls close, misinformation about voting and elections abounds on social media despite promises by tech companies to address a problem blamed for increasing polarization and distrust.

While platforms like Twitter, TikTok, Facebook and YouTube say they’ve expanded their work to detect and stop harmful claims that could suppress the vote or even lead to violent confrontations, a review of some of the sites shows they’re still playing catch-up with 2020, when then-President Donald Trump’s lies about the election he lost to Joe Biden helped fuel an insurrection at the U.S. Capitol.

 

History Is a Good Antidote to Disinformation About the Invasion of Ukraine

CIGI. Heidi Tworek. March 8, 2022

Much of the current misinformation online exists to scam and to manipulate through speed. TikTok has become a key platform for misleading content. TikTok’s algorithm appears to offer up many misleading videos alongside scam calls for donations. These videos often depict older conflicts or conflicts in other places; the posters claim they are occurring in Ukraine and can garner millions of views. Abbie Richards suggests that “TikTok’s platform architecture is amplifying fear and permitting misinformation to thrive at a time of high anxiety,” calling the platform’s design “incompatible with the needs of the current moment.” It is hard to resist the siren call of doom scrolling. But a slower accumulation of knowledge at moments of crisis can avoid hurtful faux pas and prevent inadvertent spreading of disinformation.

 

Experts grade Facebook, TikTok, Twitter, YouTube on readiness to handle midterm election misinformation

The Conversation. October 26, 2022

The 2016 U.S. election was a wake-up call about the dangers of political misinformation on social media. With two more election cycles rife with misinformation under their belts, social media companies have gained experience in identifying and countering misinformation. However, the nature of the threat misinformation poses to society continues to shift in form and targets. The big lie about the 2020 presidential election has become a major theme, and immigrant communities are increasingly in the crosshairs of disinformation campaigns – deliberate efforts to spread misinformation.

Social media companies have announced plans to deal with misinformation in the 2022 midterm elections, but the companies vary in their approaches and effectiveness. We asked experts on social media to grade how ready Facebook, TikTok, Twitter and YouTube are to handle the task.

 

Misinformation and hate are trending in this election year

CNN Politics. Zachary B. Wolf. October 31, 2022

Misinformation is trending now that Elon Musk, the self-described “Chief Twit,” has bought Twitter, his favourite social media platform.

Meanwhile, displays of hate are breaking out in public now that Kanye West, who now goes by Ye, has despicably fashioned himself as a folk hero for those spewing antisemitic messages, pushing his own anti-Jewish conspiracy theories.

The stories dovetail not just because they are built on the wild spread of false claims, but also because West’s Twitter account – locked in early October for an antisemitic tweet in which he said he was going “death con 3 on Jewish people” – was recently reactivated. More on that below.

 

Musk’s Twitter takeover highlights disinformation risk

Emerald Insight. November 7, 2022

Musk has repeatedly said he wants the platform to prioritize ‘free speech,’ but has also reassured European regulators that he will be complying with local laws, even where they involve content screening. Although Twitter’s policy has yet to be finalized, the turmoil highlights the risks of online disinformation. The business models of social media companies and tech platforms contain strong incentives that promote misinformation and disinformation. Advertising comprises 80% of the income of Google's parent company Alphabet, and well over 90% for Twitter and for Meta, the owner of Facebook and Instagram.

Social media offer advertisers hundreds of millions of users who are difficult to reach through other media. High levels of engagement ensure that the audience becomes 'captive'. Moreover, using data collected on users enables platforms to match advertisers and potential customers efficiently.

 

Coalition Sends Letter Urging Social Media Platforms to Prevent Online Election Disinformation

Legal Defense Fund. October 13, 2022

Today, LDF and a coalition of civil rights, public interest, voting rights, and other organizations, sent a letter urging social media companies to take immediate steps to curb the spread of voting disinformation in the midterms and future elections and to help prevent the undermining of our democracy. This letter is a follow-up to another sent last May. Many companies have announced updates to their voter interference and disinformation policies in recent weeks, but the policies have little effect unless enforced continually and consistently.

 

CIA analyst decries free speech 'nonsense' on Musk's Twitter, claims it will benefit Russian disinformation

Fox News. Gabriel Hays. November 26, 2022

CIA analyst Bob Baer claimed that "Putin is going to be all over Twitter" thanks to billionaire owner Elon Musk’s policies for running the company.

He also stated that the "voice of the people" Musk claimed wants free speech is "Russian intelligence" looking to undermine American support for Ukraine.

During a recent segment on CNN, the analyst argued that Musk’s pro-free-speech attitude towards operating the company, particularly in the way he has decided to reinstate banned accounts and not suspend users for any speech, means Russian hackers will benefit.

 

What Russia’s Cyber Sovereignty Woes Tell Us About a Future “Splinternet”

International Forum for Democratic Studies. Elizabeth Kerley. October 13, 2022  

Since launching its full-scale invasion of Ukraine, Russia has been putting its longstanding aspirations for “cyber sovereignty” to the test. In keeping with its longstanding objectives of “technological independence and information control,” the Kremlin has promoted homegrown tech in the face of sanctions while also halting the flow of independent information. Meanwhile this April, 61 mostly democratic countries signed a declaration articulating a vision for the internet that is “open, free, global, interoperable, reliable, and secure.” What does this clash of visions portend for the digital domain?

 

Twitter, others slip on removing hate speech, EU review says

Tech Explore. Kelvin Chan. November 24, 2022  

Twitter took longer to review hateful content and removed less of it in 2022 compared with the previous year, according to European Union data released Thursday.

The EU figures were published as part of an annual evaluation of online platforms' compliance with the 27-nation bloc's code of conduct on disinformation.

Twitter wasn't alone—most other tech companies signed up to the voluntary code also scored worse. But the figures could foreshadow trouble for Twitter in complying with the EU's tough new online rules after owner Elon Musk fired many of the platform's 7,500 full-time workers and an untold number of contractors responsible for content moderation and other crucial tasks.

 

90% of People Claim They Fact-Check News Stories As Trust in Media Plummets

Security Org. Aliza Vigderman. November 4, 2022

As the popularity of social media surpasses traditional news sources, information has grown more unreliable, and “fake news” becomes harder to detect. The same digital platforms that empower global communication seed doubt and spread misinformation.

The misinformation and disinformation that have influenced elections and hampered public health policies also damaged faith in all forms of media. Meanwhile, political attacks on some news sources have divided Americans further into partisan camps.

The nation is united, however, in recognizing the problem. Our second annual study of more than 1,000 people revealed that nine out of 10 American adults fact check their news, and 96 percent want to limit the spread of false information.

 

How Blockchain Can Help Combat Disinformation

As digital disinformation grows more and more prevalent, there’s one emerging technology with the potential to address many of the root causes of and risks associated with misleading and manipulated media: blockchain. While it’s no panacea, blockchain can help in three key areas: First, a blockchain-based system could offer a decentralized, trusted mechanism for verifying the provenance and other important metadata for online content. Second, it could enable content creators and sharers to maintain a reputation independent of any publication or institution. And finally, it makes it possible to financially incentivize the creation and distribution of content that meets community-driven standards for accuracy and integrity. Of course, any technological solution will have to be complemented by substantial policy and education initiatives — but in an ever-more complex digital media landscape, blockchain offers a promising starting point to ensure we can trust the information we see, hear, and watch.
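The first mechanism described above, a decentralized ledger for verifying content provenance, can be sketched as an append-only hash chain. The following Python example is a minimal illustration under stated assumptions (all class and field names are invented for this sketch, not taken from any real system): each record stores a hash of the content, its creator, a timestamp, and the hash of the previous block, so altering any registered record invalidates every later link.

```python
import hashlib
import json
import time


def sha256(data: bytes) -> str:
    """Hex digest of the SHA-256 hash of raw bytes."""
    return hashlib.sha256(data).hexdigest()


class ProvenanceChain:
    """Append-only ledger of content-provenance records (illustrative sketch).

    Each block stores the content hash, the creator, a timestamp, and the
    hash of the previous block, so any later edit to a record breaks the
    chain and is caught by verify().
    """

    def __init__(self):
        self.blocks = []

    def register(self, content: bytes, creator: str, timestamp=None) -> dict:
        """Record who published a piece of content, chained to the prior block."""
        prev_hash = self.blocks[-1]["block_hash"] if self.blocks else "0" * 64
        record = {
            "content_hash": sha256(content),
            "creator": creator,
            "timestamp": time.time() if timestamp is None else timestamp,
            "prev_hash": prev_hash,
        }
        # Canonical JSON (sorted keys) makes the hash deterministic.
        record["block_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
        self.blocks.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; returns False if any record was altered."""
        prev = "0" * 64
        for block in self.blocks:
            body = {k: v for k, v in block.items() if k != "block_hash"}
            if block["prev_hash"] != prev:
                return False
            if sha256(json.dumps(body, sort_keys=True).encode()) != block["block_hash"]:
                return False
            prev = block["block_hash"]
        return True
```

In practice such a ledger would be replicated across many independent nodes so no single party can rewrite history; the sketch shows only the data structure that makes tampering detectable.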

 

Twitter Looks to Prevent a Disinformation Free-for-All Ahead of 2022 Midterms

PCMag. Nathaniel Mott. August 11, 2022.

By applying its Civic Integrity Policy to the upcoming US elections, Twitter is looking to 'enable healthy civic conversation' on its platform. (Don't laugh.) The company expanded the Civic Integrity Policy ahead of the 2020 presidential election to "further protect against content that could suppress the vote and help stop the spread of harmful misinformation that could compromise the integrity of an election or other civic process." Now it's looking to apply those same measures to the 2022 midterms being held in November.

 

TWITTER - The mission of our civic integrity work is to protect the conversation on Twitter during elections or other civic processes.

We're working to prepare for elections, elevate credible information, and help keep you safe on Twitter. Our civic integrity policy aims to prevent the use of Twitter to share or spread false or misleading information about a civic process (e.g., elections or census) that may disrupt or undermine public confidence in that process.

This policy is enforced when the risk for manipulation or interference is highest — generally a few months before and a couple of weeks after election day, depending on local and external factors. This policy is an additional, temporary protection on top of all the Twitter Rules, which are enforced year-round.

 

Biden admin pushed to bar Twitter users for COVID ‘disinformation,’ files show

New York Post. Jesse O’Neill. December 26, 2022.

The Biden White House pressured Twitter to both “elevate” and “suppress” users based on their stances on COVID-19 — ultimately “censoring info that was true but inconvenient” to policy makers, according to the latest edition of the “Twitter files”. The coercion campaign during the pandemic began with the Trump administration — which asked Twitter to crack down on stories about panic buying and “runs on grocery stores” in the early days of the outbreak — but was stepped up under Biden, whose administration was focused on the removal of “anti-vaxxer accounts,” according to The Free Press reporter David Zweig.

 

For Teens (and Adults) Fighting Misinformation, TikTok Is Still ‘Uncharted Territory’

EdSurge. Nadia Tamez-Robledo. December 7, 2022.

TikTok may have started as the preferred social media platform for modern dance crazes, but the platform’s growth has made it a home for something else—misinformation. Add to that its popularity among teens and its powerful algorithm, and you have a mix that worries some educators about TikTok’s potential negative impacts for young users. A recent study from NewsGuard found that roughly one in five TikTok videos contain misinformation, whether the topic is COVID-19 vaccines or the Russia-Ukraine war.

 

3 Lessons on Misinformation in the Midterms Spread on Social Media

Brennan Center for Justice. Maya Kornberg et al. January 5, 2023.

The Brennan Center has developed recommendations on how to fight misinformation based on analysis of how it takes root and circulates. Election-related falsehoods corrode American democracy. Since 2020, lies about a stolen presidential election cropped up in dozens of campaigns for election administrator positions and spurred unprecedented threats to election officials. The result has been a deluge of resignations that drained expertise from election offices across the country. Further, public trust in elections has plummeted amid disinformation promoted by Donald Trump and other prominent election deniers.

 

Climate change misinformation 'rocket boosters' on Elon Musk's Twitter

CTV News. David Klepper (AP Staff). January 19, 2023.

Search for the word "climate" on Twitter and the first automatic recommendation isn't "climate crisis" or "climate jobs" or even "climate change" but instead "climate scam." Clicking on the recommendation yields dozens of posts denying the reality of climate change and making misleading claims about efforts to mitigate it. Such misinformation has flourished on Twitter since it was bought by Elon Musk last year, but the site isn't the only one promoting content that scientists and environmental advocates say undercuts public support for policies intended to respond to a changing climate.

 

Over 1330 Facebook groups and pages spreading disinformation identified in the Balkan region

Debunk. Radovan Ognjenovic & Daniela Vukcevic. December 30, 2022.

Groups and pages on social media have been continuously spreading and amplifying misleading content on four different topics – Russia, NATO, LGBTQIA+, and COVID-19. Although not exclusively, administrators of these groups and pages shared misleading content spanning all the aforementioned sentiments – pro-Russian, anti-NATO, anti-LGBT, and skepticism toward the efficiency of COVID-19 measures – regardless of their country of origin or language.

 

Our latest commitments to countering disinformation in Central and Eastern Europe

Google. Annette Kroeber-Riel. May 4, 2023.

Today, we are announcing new long-term partnerships we've established across Central and Eastern Europe, a region considered highly vulnerable to disinformation and propaganda due to its geographic proximity to the war in Ukraine. The issue was highlighted in a recent IPSOS survey conducted in cooperation with the Central European Digital Media Observatory (CEDMO). In the Baltics, we've entered into long-term partnerships with the Civic Resilience Initiative and the Baltic Center for Media Excellence. These two established and well-respected organizations will receive €1.3 million in funding from Google to build on their impactful work towards increasing media literacy, building further resilience and actively tackling disinformation in Lithuania, Latvia and Estonia.

 

A new Twitter policy cripples journalists’ efforts to halt disinformation

The Hill. Shannon Jankowski. May 23, 2023.

Bot detection tools can be a game changer for exposing targeted falsehoods and conspiracy theories, especially for small, local newsrooms serving marginalized communities. With the upcoming U.S. elections, the public will rely on journalists to detect and expose this disinformation in their reporting. But, under Elon Musk’s leadership — which, ironically, began with a focus on eliminating bots on the platform — Twitter’s newly amended application programming interface (API) policy may rob journalists of access to bot detection tools, which are critical to identifying and understanding the spread of disinformation on social media.

 

Beware Fake News

CIGI. May 2023.

Influence operations targeting liberal democratic regimes are deeply troubling. They disrupt the twin bedrocks of effective democratic governance: the free flow of information and trust. These campaigns can be undertaken by malicious foreign governments who aim to sow chaos, or by non-state actors, such as ISIS, who seek to radicalize disaffected individuals in the West. Countering these operations is both necessary and possible. Such efforts require the engagement of not only governments but also the platforms. Working together, these actors can preserve liberal democratic governance by minimizing exposure to fake news and other influence operations, promoting user immunity and promulgating counter narratives to misinformation.

 

Rana Ayyub: “Misinformation threatens to be the new ‘true information'”

The Nobel Prize. May 2023.

Twitter trends, TikTok videos, Instagram reels, Facebook posts, and WhatsApp forwards might have democratized the spaces of communication, but they have also become the most potent platforms to disseminate fake news. As technology continues to advance, the war against misinformation and fake news ironically gets tougher for the world. Misinformation threatens to be the new ‘true information’ as it aids and enables the most anti-democratic values.

 

EU Official Says Twitter Abandons Bloc's Voluntary Pact Against Disinformation

Associated Press. May 26, 2023.

Twitter has dropped out of a voluntary European Union agreement to combat online disinformation, a top EU official said Friday. European Commissioner Thierry Breton tweeted that Twitter had pulled out of the EU's disinformation “code of practice” that other major social media platforms have pledged to support. But he added that Twitter's “obligation” remained, referring to the EU's tough new digital rules taking effect in August.

 

‘Bye, bye birdie’: EU bids farewell to Twitter as company pulls out of code to fight disinformation

Euronews. Giulia Carbonaro. May 29, 2023.

The European Commission’s Vice-President for Values and Transparency bashed Twitter’s latest decision to leave the EU’s anti-disinformation code as “irresponsible” at a time when Russia’s disinformation is extremely dangerous. Twitter’s decision to pull out of the EU’s voluntary code to fight the spread of disinformation and fake news in the bloc was announced by Thierry Breton, the EU’s internal market commissioner.

 

‘Bye, bye birdie’: EU bids farewell to Twitter as company pulls out of code to fight disinformation

Euro News. Giulia Carbonaro & Sophia Khatsenkova. May 31, 2023.

Dozens of tech firms have voluntarily signed up to the EU’s anti-disinformation code revamped last year, including Meta (with Instagram and Facebook), TikTok, Google, Microsoft and Twitch. Despite the fact that Twitter’s withdrawal could appear to be a major setback in the fight against disinformation and fake news in the EU, Jourova said that “the Code remains strong, sets high standards and is at the heart of our efforts to address disinformation”.

 

Algorithms can be useful in detecting fake news, stopping its spread and countering misinformation

The Conversation. Laks V.S. Lakshmanan. June 7, 2023.

Fake news is a complex problem and can span text, images and video.

For written articles in particular, there are several ways of generating fake news. A fake news article could be produced by selectively editing facts, including people’s names, dates or statistics. An article could also be completely fabricated with made-up events or people.

Fake news articles can also be machine-generated as advances in artificial intelligence make it particularly easy to generate misinformation.
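The detection side of the article can be illustrated with a toy example. Below is a minimal bag-of-words naive Bayes classifier in pure Python, a sketch with invented toy data and class names (not any production system or the method described in the article), showing the basic mechanics of scoring an article as likely fake or real from word frequencies.

```python
import math
import re
from collections import Counter, defaultdict


def tokenize(text):
    """Lowercase a text and split it into simple word tokens."""
    return re.findall(r"[a-z']+", text.lower())


class NaiveBayesNewsClassifier:
    """Multinomial naive Bayes over bag-of-words features (toy sketch)."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word -> count
        self.label_counts = Counter()            # label -> number of docs
        self.vocab = set()

    def train(self, texts, labels):
        """Count word occurrences per label from labeled example texts."""
        for text, label in zip(texts, labels):
            tokens = tokenize(text)
            self.word_counts[label].update(tokens)
            self.label_counts[label] += 1
            self.vocab.update(tokens)

    def predict(self, text):
        """Return the label with the highest log-probability for the text."""
        total_docs = sum(self.label_counts.values())
        scores = {}
        for label in self.label_counts:
            # Log prior plus Laplace-smoothed log likelihood of each token.
            score = math.log(self.label_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for token in tokenize(text):
                count = self.word_counts[label][token] + 1
                score += math.log(count / (total_words + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)
```

Real systems use far richer signals (source reputation, propagation patterns, stylometry, cross-checking against fact databases), but the scoring structure is often the same: combine per-feature evidence into a label probability.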

 

The tech platforms have surrendered in the fight over election-related misinformation

Columbia Journalism Review. Mathew Ingram. June 15, 2023.

YouTube, Twitter, and Meta (formerly Facebook) have eased restrictions on election denial content. YouTube announced it will no longer remove videos claiming the 2020 presidential election was fraudulent, while Twitter and Meta dismantled most of their restrictions related to election denial. These decisions have sparked debates about striking a balance between protecting users and fostering open discussion, as well as concerns about the potential spread of misinformation and its impact on democracy.

 

Scoop: YouTube reverses misinformation policy to allow U.S. election denialism

Axios. Sara Fischer. June 2, 2023.

In a reversal of its election integrity policy, YouTube will leave up content that says fraud, errors or glitches occurred in the 2020 presidential election and other U.S. elections, the company confirmed to Axios Friday.

Why it matters: YouTube established the policy in December 2020, after enough states had certified the 2020 election results. Now, the company said in a statement, leaving the policy in place may have the effect of "curtailing political speech without meaningfully reducing the risk of violence or other real-world harm."

 

Who knowingly shares false political information online?

Misinformation Review. Shane Littrell, Casey Klofstad, et al. August 25, 2023.

Some people share misinformation accidentally, but others do so knowingly. Using a 2022 U.S. survey, researchers found that 14 percent of respondents reported knowingly sharing misinformation, and that these respondents were more likely to also report support for political violence, a desire to run for office, and warm feelings toward extremists. Furthermore, they were also more likely to have elevated levels of a psychological need for chaos, dark tetrad traits, and paranoia. The findings illuminate one vector through which misinformation is spread.

 

EU warns Elon Musk after Twitter found to have highest rate of disinformation

The Guardian. Lisa O’Carroll. September 26, 2023.

The EU has issued a warning to Elon Musk to comply with sweeping new laws on fake news and Russian propaganda, after X – formerly known as Twitter – was found to have the highest ratio of disinformation posts of all large social media platforms. A new report, laying bare for the first time the scale of fake news on social media across the EU, analyzed the ratio of disinformation posts on each platform; millions of fake accounts have been removed by TikTok and LinkedIn.

Facebook was the second-worst offender, according to the first ever report recording posts that will be deemed illegal across the EU under the Digital Services Act (DSA), which came into force in August.

 

Meta Takes Down 'Largest Ever' Chinese Influence Operation

Time. Vera Bergengruen. August 31, 2023.

A sprawling network of fake accounts linked to Chinese law enforcement was taken down by Meta this week in what the social-media company called “the largest known cross-platform covert influence operation in the world.”

The operation was the largest the company has removed in its history: on Facebook alone, Meta says it removed 7,704 accounts, 954 pages, and 15 groups linked to the effort to push pro-China talking points and attack the government’s critics. But its fingerprints extended beyond Facebook and Instagram, the platforms owned by Meta. The Chinese influence operation targeted at least 50 other platforms and apps, including YouTube, Reddit, Pinterest, TikTok, Medium, and X, the company formerly known as Twitter, according to Meta's analysts.

 

Social media users’ perceptions about health mis- and disinformation on social media

Oxford Academic. Jim P. Stimpson and Alexander N. Ortega. September 26, 2023.

The study used recently released nationally representative data with new measures on health information seeking to estimate the prevalence and predictors of adult social media users’ perceptions of health mis- and disinformation on social media.

Their study identified specific population groups that could be the target of future intervention efforts, including individuals who rely on social media for decision-making. The perception among social media users that there is a high prevalence of false and misleading health information on these platforms may increase the need for urgent action to mitigate the dissemination of such harmful health misinformation that negatively affects public health.

 

X’s Unchecked Propaganda: Engagement Soared by 70% for Russian, Chinese, and Iranian Disinformation Sources Following a Change by Elon Musk

NewsGuard. McKenzie Sadeghi, Jack Brewster, and Macrina Wang. September 2023.

Until April 20, 2023, users on X (formerly known as Twitter) were notified that China Daily and other state-run outlets that lack editorial independence are “state-affiliated.” But on April 21, X owner Elon Musk stripped the platform of labels indicating which accounts are state-run. This cleared the path for Chinese propaganda sources, as well as Russian and Iranian state outlets, to disseminate disinformation unchecked with X users no longer having transparent information about the nature of the source. The impact was immediate and dramatic.

 

Multistakeholder Pledge: Digital Protection - Prevention of the Harmful Impact of Hate Speech, Misinformation, and Disinformation

Global Compact on Refugees. September 2023.

The rise of misinformation, disinformation and hate speech on digital platforms is causing real-world harm to the most vulnerable, especially refugees, displaced and stateless people.

These offline harms include xenophobia, racism, persecution, violence, and killings. Misinformation, disinformation, and hate speech can be a contributing factor to forced displacement. For people who are already displaced, harms can include trafficking, exploitation, and barriers to accessing rights and services.

The pledge will increase the number of stakeholders who are taking action to prevent the harmful impact on displaced and stateless populations, and on humanitarian action, of mis/disinformation and hate speech on their platforms.

 

Elon Musk Will Test the EU’s Digital Services Act

Tech Policy Press. Gabby Miller. September 11, 2023.

Elon Musk, the self-proclaimed free speech absolutist, has once again ramped up attacks meant to silence his critics, this time while bolstering an online movement with ties to white nationalists and antisemitic propagandists. His latest target is the Anti-Defamation League (ADL), an anti-hate organization focused on combating antisemitism, which he threatened with legal action in a tweet early last week. Musk blames the ADL for the exodus of advertisers from his rapidly deteriorating social media platform.

The platform formerly known as Twitter, referred to as "X," is now required by law to conduct its first annual risk assessment to demonstrate compliance with the European Union's Digital Services Act (DSA). The DSA applies to Very Large Online Platforms (VLOPs) and Search Engines (VLOSEs) like X, and it aims to combat disinformation, similar to the content Musk promotes on the platform.

 

MPs say tech giants must be held responsible for online misinformation by foreign actors

National Observer. Mickey Djuric. October 25, 2023.

A parliamentary committee is calling on Canada to hold tech giants accountable for publishing false or misleading information online, especially when it is spread by foreign actors.

That was among 22 recommendations the House ethics committee made in a report released Tuesday after its study into threats posed by foreign interference in Canada's affairs, with a focus on China and Russia.

 

EU warns Elon Musk over ‘disinformation’ on X about Hamas attack

The Guardian. Lisa O’Carroll. October 10, 2023.

The EU has issued a warning to Elon Musk over the alleged disinformation about the Hamas attack on Israel, including fake news and “repurposed old images”, on X, which was formerly known as Twitter.

The letter arrives less than two months after sweeping new laws regulating content on social media seen in the EU came into force under the Digital Services Act.

 

Beware Fake News.

CIGI. Eric Jardine. October 2023.

Influence operations, whether launched by governments or non-state actors, existed long before social media, but what is new about contemporary influence operations is their scale, severity and impact, all of which are likely to grow more pronounced as digital platforms extend their reach via the internet and become ever more central to our social, economic and political lives. Such efforts represent a clear cyber security challenge.

 

Meta, TikTok given a week by EU to detail measures against disinformation

Reuters. Charlotte Van Campenhout and Bart H. Meijer. October 19, 2023.

Meta (META.O) and TikTok have been given a week by the European Commission to provide details on measures taken to counter the spread of terrorist and violent content and hate speech on their platforms, a week after Elon Musk's X was told to do the same.

 

Our ongoing work to fight misinformation online

Google. Amanda Storey. October 26, 2023.

Google aims to balance access to information with safeguarding users and society. The company emphasizes the importance of ensuring that information is not only accessible but also safe to benefit users.

Google takes its responsibility seriously by prioritizing the provision of trustworthy information and content, safeguarding users from potential harm, ensuring the delivery of reliable information, and collaborating with experts and organizations to contribute to a safer internet.

 

The disinformation sleuths: a key role for scientists in impending elections

Nature. October 4, 2023.

Access to social-media data is essential to those who research political campaigns and their outcomes. However, unlike in previous years, scientists will not have free access to data from X, previously known as Twitter. Many still consider X to be among the world’s most influential social-media platforms for political discussion, but the company has discontinued its policy of giving researchers special access to its data. Disinformation campaigns — some armed with AI-generated deepfakes — are likely to be rampant in the coming months, says Ulrike Klinger, who studies political communication at the European University Viadrina in Frankfurt (Oder), Germany. “And we cannot monitor them because we don’t have access to data.”

 

Brand Danger: X and Misinformation Super-spreaders Share Ad Money from False or Egregiously Misleading Claims About the Israel-Hamas War

NewsGuard. Jack Brewster, Coalter Palmer et al. November 22, 2023.

On X, programmatic ads appear below viral posts spreading false claims about the Israel-Hamas war. Shockingly, a new ad revenue sharing program rewards these misinformation spreaders with a portion of income from major brands, governments, and non-profits.

 

China is using the world’s largest known online disinformation operation to harass Americans, a CNN review finds

CNN. Donie O’Sullivan, Curt Devine & Allison Gordon. November 13, 2023.

The Chinese government has built up the world’s largest known online disinformation operation and is using it to harass US residents, politicians, and businesses—at times threatening its targets with violence, a CNN review of court documents and public disclosures by social media companies has found.

 

Online disinformation: UNESCO unveils action plan to regulate social media platforms

UNESCO. Audrey Azoulay. November 6, 2023.

Digital technology has enabled immense progress on freedom of speech. However, social media platforms, in parallel, have expedited and intensified the dissemination of false information and hate speech, presenting considerable threats to societal cohesion, peace, and stability. In order to preserve access to information, there is a pressing need for regulation of these platforms without delay. Simultaneously, it is crucial to safeguard freedom of expression and human rights.

 

Exclusive: Elon Musk's X restructuring curtails disinformation research, spurs legal fears

Reuters. Sheila Dang. November 6, 2023.

Social media researchers have canceled, suspended or changed more than 100 studies about X, formerly Twitter, as a result of actions taken by Elon Musk that limit access to the social media platform, nearly a dozen interviews and a survey of planned projects show.

Musk's restrictions on critical methods of gathering data on the global platform have suppressed the ability to untangle the origin and spread of false information during real-time events such as Hamas' attack on Israel and the Israeli airstrikes in Gaza, researchers told Reuters.

 

UK watchdog Ofcom puts Big Tech on notice

Politico. Mark Scott. November 9, 2023.

Social media platforms that don’t clamp down on illegal and hate content will face the full force of the United Kingdom’s new online safety rules, according to Melanie Dawes, head of the country’s regulator in charge of the new regime.

 

U.S. stops helping Big Tech spot foreign meddling amid GOP legal threats

Washington Post. Naomi Nix, Cat Zakrzewski. November 30, 2023.

The US federal government has stopped warning some social networks about foreign disinformation campaigns on their platforms, reversing a years-long approach to preventing Russia and other actors from interfering in American politics less than a year before the US presidential elections. Meta no longer receives notifications of global influence campaigns from the Biden administration, halting a prolonged partnership between the federal government and the world’s largest social media company. Federal agencies have also stopped communicating about political disinformation with Pinterest, according to the company. In July 2023, a federal judge limited the Biden administration’s communications with tech platforms in response to a lawsuit alleging such coordination ran afoul of the First Amendment by encouraging companies to remove falsehoods about COVID-19 and the 2020 election.

 

How online misinformation exploits ‘information voids’ — and what to do about it

Nature. January 9, 2024.

In 2024’s super election year, providers of online search engines and their users need to be especially aware of how online misinformation can seem all too credible.

This year, countries with a combined population of 4 billion — around half the world’s people — are holding elections, in what is being described as the biggest election year in recorded history. Some researchers are concerned that 2024 could also be one of the biggest years for the spreading of misinformation and disinformation. Both refer to misleading content, but disinformation is deliberately generated.

 

Former Harvard disinformation scholar says she was pushed out of her job after college faced pressure from Facebook

CNN. Donie O’Sullivan & Clare Duffy. December 4, 2023.

A nationally recognized online disinformation researcher has accused Harvard University of shutting down the project she led to protect its relationship with mega-donor and Facebook founder Mark Zuckerberg.

The allegations, made by Dr. Joan Donovan, raise questions about the influence the tech giant might have over seemingly independent research. Facebook’s parent company Meta has long sought to defend itself against research that implicates it in harming society: from the proliferation of election disinformation to creating addictive habits in children. Details of the disclosure were first reported by The Washington Post.

 

Meta unveils team to combat disinformation and AI harms in EU elections

Aljazeera. February 26, 2024.

Tech giant’s head of EU affairs says team will bring together experts from across the company. Facebook owner Meta has unveiled plans to launch a dedicated team to combat disinformation and harms generated by artificial intelligence (AI) ahead of the upcoming European Parliament elections.

 

EU Policy. TikTok sets up in-app ‘election centres’ to fight fake news

EuroNews. Cynthia Kroet. February 14, 2024.

Major online platforms must tackle disinformation, under new EU digital service rules that take effect Saturday. TikTok announced today (14 February) that it will set up what it calls in-app election centres for each of the 27 EU countries.

The move by the social media network is a bid to reduce the spread of online misinformation as the bloc goes to the polls in June. The tool will be available as of next month to ensure people can “separate fact from fiction”, Kevin Morgan, TikTok’s head of trust and safety for Europe, the Middle East and Africa, said in a statement.

 

EU Policy. Meta second to set up EU online election centre to fight disinformation

EuroNews. Cynthia Kroet. February 26, 2024.

The online platform will add fact-checking organisations in Bulgaria, France, and Slovakia to its network ahead of the EU elections. US tech giant Meta, which owns Facebook and Instagram, is to set up an EU-specific 'operations centre' to combat misinformation around the European Parliament elections in June, the company has announced weeks after its Chinese rival TikTok made a similar move.

 

AI and misinformation: what’s ahead for social media as the US election looms?

Guardian. Rachel Leingang. February 10, 2024.

Innovation is outpacing our ability to handle misinformation, experts say. That makes falsehoods easy to weaponize. As the United States’ fractured political system prepares for a tense election, social media companies may not be prepared for an onslaught of viral rumors and lies that could disrupt the voting process – an ongoing feature of elections in the misinformation age.

 

Social media is the No. 1 source of disinformation, according to US internet users

Insider Intelligence. Sara Lebow. February 27, 2024.

Key stat: 64% of US adults think disinformation and “fake news” are most widespread on social media, according to a September 2023 survey from UNESCO and Ipsos.

It’s a presidential election year, which means the risk for misinformation and disinformation on social media is rampant. That presents a major brand safety challenge for marketers, whose content could end up next to unsavory posts.

 

US Supreme Court seems wary of curbing US government contacts with social media platforms

Reuters. Andrew Chung & John Kruzel. March 18, 2024.

The U.S. Supreme Court justices on Monday appeared skeptical of a challenge on free speech grounds to how President Joe Biden's administration encouraged social media platforms to remove posts that federal officials deemed misinformation, including about elections and COVID-19.

 

New Study Unveils Strategies to Combat Disinformation Wars on Social Media

Carnegie Mellon University. Maryan Saeedi. March 23, 2024.

In an era where social media platforms have become battlegrounds for information integrity, a new study sheds light on the mechanics of disinformation spread and offers innovative solutions to counteract it.

Conducted by a team of researchers from Brandeis University, George Mason University, the Massachusetts Institute of Technology, and Carnegie Mellon University, the study examined the dynamics of “disinformation wars,” which refers to the intentional spread of fake news while pretending to be an ordinary account or user on platforms like X (formerly known as Twitter). This method has proved to be alarmingly effective in misleading the public.

 

Meta to shutter key disinformation tracking tool before 2024 election

The Record. Suzanne Smalley. March 22, 2024.

Meta’s decision to close its CrowdTangle division — a tool that tracks content across social media — has raised the ire of more than 100 research and advocacy groups who say it will make it harder to fight disinformation.

Groups including the Mozilla Foundation, the Center for Democracy and Technology and Access Now sent the social media behemoth an open letter Thursday decrying the decision to shutter the unit in August, asking Meta to, at a minimum, invest in CrowdTangle through January. Meta announced it would close CrowdTangle last week.

 

US Sanctions Russian Firms Over 'Fake Websites'

Agence France-Presse. March 20, 2024.

The U.S. Treasury Department imposed sanctions Wednesday against two people and their Russia-based companies it accused of supporting a Kremlin-directed disinformation campaign involving the impersonation of legitimate news websites.

The sanctions targeted the Moscow-based company Social Design Agency and its founder, Ilya Andreevich Gambashidze, as well as the Russia-based Company Group Structura and its owner, Nikolai Aleksandrovich Tupikin, according to a statement from the Treasury Department.