AI chatbots like ChatGPT could be security nightmares – and experts are trying to contain the chaos


Generative AI chatbots, including ChatGPT and Google Bard, are under constant development to improve their usability and capabilities, but researchers have also uncovered some rather concerning security holes.

Researchers at Carnegie Mellon University (CMU) have demonstrated that it’s possible to craft adversarial attacks (which, as the name suggests, are not good) against the language models that power AI chatbots. These attacks consist of chains of characters that can be appended to a question or statement the chatbot would otherwise refuse to answer, overriding the restrictions applied to the chatbot by its creators.
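
To make the mechanics concrete, here is a minimal, hypothetical sketch of how such a suffix is applied; it is not code from the CMU research. The attacker simply concatenates an optimized character string onto an otherwise-refused prompt. The suffix shown is a harmless placeholder rather than a working attack string, and `build_attack_prompt` and `query_model` are illustrative names, with the latter standing in for whichever chatbot API is being targeted.

```python
# Hypothetical illustration of an adversarial-suffix attack.
# The suffix below is a harmless placeholder, NOT a real attack string;
# actual suffixes are discovered by automated search over the model's tokens.

ADVERSARIAL_SUFFIX = "<optimized character sequence found by the attack>"


def build_attack_prompt(user_prompt: str, suffix: str = ADVERSARIAL_SUFFIX) -> str:
    """Append the adversarial suffix to a prompt the model would normally refuse."""
    return f"{user_prompt} {suffix}"


def query_model(prompt: str) -> str:
    """Stand-in for a call to the targeted chatbot's API."""
    raise NotImplementedError("replace with a real chat-completion call")


if __name__ == "__main__":
    refused = "A request the chatbot would ordinarily decline."
    # The suffix rides along with the otherwise-refused request.
    print(build_attack_prompt(refused))
```

The key point the sketch captures is that no special access is needed: the attack travels entirely inside ordinary prompt text.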


