
Major content moderation failures by both X and Meta are underlining the risk that both deliberate disinformation and inadvertent misinformation could influence next year’s presidential elections …
The policy prohibits misleading video created with artificial intelligence, but doesn’t apply to deceptive edits made with more conventional techniques.
Platformer reports on another moderation failure on Threads.
On Threads, Alex Stamos of the Stanford Internet Observatory found several verified accounts sharing a video that purported to show violence in Israel but was actually footage of a football championship celebration in Algeria in 2020.
“Sadly, the destruction of the teams Twitter put in place to fight organized manipulation makes it harder for individuals to speak to a global audience as their message gets buried by troll farms, state propaganda organs and grifters,” he wrote.
Highlights risks to election integrity
The ease with which fake posts can spread on social media platforms highlights the significant risk that next year’s presidential election could be influenced by both individuals and nation states running bot farms.
The CIA, FBI, and NSA all agree that Russia interfered in the 2016 presidential election and is almost certain to do so again next year. While progress has been made on countermeasures, most of the measures introduced by what was then Twitter were undone by Musk following his purchase of the company.