Tech giant Samsung recently banned its employees from using OpenAI’s ChatGPT; now, Apple has also restricted its employees from using the popular artificial intelligence (AI) chatbot over data breach concerns.
The Wall Street Journal reported that an internal Apple document prohibits the use of ChatGPT, which is developed by Microsoft-backed OpenAI, and similar AI tools while the company works on its own AI technology.
The document indicates that the iPhone maker is concerned that employees using the programs could disclose confidential company information.
It also mentioned restrictions on GitHub’s AI tool Copilot, which automates the writing of software code. GitHub is owned by Microsoft, a Big Tech competitor.
The internal prohibition follows the May 18 release of the ChatGPT app for iOS in the Apple App Store.
The new app is currently available for iPhone and iPad users in the United States, with plans to expand to additional countries “in the coming weeks” and the release of an Android version “soon.”
Alongside Apple, other major corporations have restricted internal use of ChatGPT. On May 2, Samsung issued an internal memo prohibiting the use of generative AI tools such as ChatGPT.
In Samsung’s case, the policy followed an incident in which employees uploaded “sensitive code” to the platform.
Samsung warned employees who use such applications on their devices not to submit any company data, or they could face “disciplinary action up to and including termination of employment.”
In addition to Samsung and Apple, JPMorgan, Bank of America, Goldman Sachs, and Citigroup have prohibited the internal use of generative AI tools such as ChatGPT.
Many of the companies prohibiting employee use of AI chatbots are also developing their own AI applications. Apple CEO Tim Cook stated in early May that the company intends to “weave” AI into its products.