Samsung has banned its employees from using ChatGPT and other generative AI tools in the workplace following a serious security breach.
The South Korean giant has told staff it is cracking down on the use of AI writing tools after sensitive data, including company source code, was recently leaked by workers using ChatGPT to help with some of their tasks.
The company says its move to "create a secure environment" comes amid fears that data submitted to AI tools is stored on external servers, making it difficult for companies like Samsung to see exactly what has been revealed, where it is stored, how it is secured, and how to delete it.
Samsung ChatGPT ban
“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” reads an internal Samsung memo reported by Bloomberg. “However, until these measures are prepared, we are temporarily restricting the use of generative AI.”
The restrictions will apply to anyone using a Samsung company phone, tablet, or computer, and staff have also been asked not to upload sensitive business information from personal devices.
“We ask that you diligently adhere to our security guideline and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” Samsung’s memo added.
The company reportedly suffered three incidents of employees leaking sensitive information via ChatGPT earlier this year. The leaks are especially damaging because ChatGPT retains user input to further train its models, meaning Samsung's internal trade secrets are now effectively in the hands of OpenAI, the company behind the platform.
In response to the incidents, Samsung is reportedly developing its own in-house AI service for employees, although it will limit prompts to 1,024 bytes in size.
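A 1,024-byte cap is tighter than it may sound, because bytes are not characters. The sketch below, a purely hypothetical illustration rather than anything Samsung has described, shows how such a pre-submission check might work and why Korean text hits the limit far sooner than English.

```python
# Hypothetical illustration of enforcing a 1,024-byte prompt limit
# before submitting text to an internal AI service.
# The function name and limit handling are illustrative only.
MAX_PROMPT_BYTES = 1024

def prompt_within_limit(prompt: str) -> bool:
    """Return True if the prompt fits the byte limit when UTF-8 encoded."""
    return len(prompt.encode("utf-8")) <= MAX_PROMPT_BYTES

# ASCII characters take 1 byte each, but Hangul characters take 3 bytes
# in UTF-8, so roughly 340 Korean characters would exhaust the limit.
print(prompt_within_limit("a" * 1024))   # True  (exactly 1,024 bytes)
print(prompt_within_limit("가" * 342))   # False (342 * 3 = 1,026 bytes)
```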