Apple reportedly banned its employees from using artificial intelligence tools like ChatGPT for work purposes. The company’s decision stems from concerns about confidential data leakage.
Apple informed its employees about this move through an internal memo.
Sharing confidential data with ChatGPT poses a security risk
Generative AI tools like OpenAI’s ChatGPT and Google’s Bard are all the rage these days. They can help automate trivial tasks and write simple code, helping boost productivity. However, these tools also gather data and send it back to developers for research and improvement. This creates a security risk, especially when dealing with confidential data.
ChatGPT lets users turn off chat history, which prevents their conversations from being used to train OpenAI’s models. However, this option is not on by default; users must enable it manually.
The Wall Street Journal reported Apple’s ChatGPT ban after seeing an internal document and talking with anonymous insiders. The publication also said Apple informed its employees not to use Microsoft’s GitHub Copilot, which can help automate coding.
Apple is not the first company to ban such tools, and its concerns are legitimate: Samsung employees inadvertently leaked trade secrets to ChatGPT earlier this year. That incident led Samsung to limit ChatGPT uploads to 1,024 bytes per person on its network.
Many companies ban ChatGPT and similar tools
Verizon, JPMorgan Chase and others have also banned such generative AI tools. Meanwhile, Amazon has encouraged employees to use its own AI-based tools for work purposes. Apple is reportedly working on a similar internal tool as well.
Interestingly, news of Apple’s ChatGPT ban for its workers comes within hours of the official ChatGPT app’s release for the iPhone. Less than 24 hours after launch, the app skyrocketed to the No. 1 position on the App Store.