Xbox AI transparency report reveals 19 million toxic messages blocked and improved player safety in Minecraft and Call of Duty


What you need to know

  • Xbox has published its latest AI transparency report covering data from January to June 2024.
  • Player safety has been boosted with a combination of AI and human moderation, blocking over 19 million pieces of harmful content which violated the Xbox Community Standards.
  • Minecraft and Call of Duty are implementing advanced moderation tools and community standards to reduce toxic behavior.

Xbox has just published its latest AI transparency report alongside an Xbox Wire update. The report, which covers data from January to June this year, details how Xbox uses artificial intelligence to enhance player safety, improve moderation processes, and ensure a positive gaming experience for players. AI is being used as an effective tool to block millions of harmful messages in both voice and text.

19 million pieces of content violating Xbox Community Standards were prevented from reaching players

Breaking down some of the most interesting statistics from the report, the headline figure of 19 million pieces of content blocked over the six-month period is impressive. Xbox's dual approach of AI and human moderation has done much of the heavy lifting here, with automated tools handling clear-cut cases and freeing human moderators to examine more nuanced content. The tools Xbox is using automatically identify and classify harmful messages before they reach players, making quick decisions on whether the messages violate the Xbox Community Standards. So far this has significantly improved the speed and efficiency of content moderation on Xbox (and anecdotally, I've received zero messages from angry teenagers in Call of Duty!).
