In a recent development, Microsoft hit a snag in its relationship with OpenAI when it temporarily restricted employee access to ChatGPT, the flagship product of the AI research organization in which Microsoft is a major investor. The move, prompted by security concerns, was first reported by CNBC.
An internal update from Microsoft acknowledged the temporary unavailability of several AI tools, including ChatGPT, citing “security and data concerns.” The restrictions applied to corporate devices and extended to other AI services such as Midjourney and Replika. Notably, the block coincided with OpenAI’s public disclosure of a Distributed Denial of Service (DDoS) attack on its systems, which caused global outages in the aftermath of DevDay and delayed the GPT rollout.
Microsoft clarified in an internal communication that, despite its substantial investment in OpenAI and ChatGPT’s built-in safeguards against misuse, the platform remains a third-party external service. Employees were cautioned to exercise vigilance when using it, given potential privacy and security risks.
A Microsoft spokesperson later told CNBC that the temporary block was an unintended consequence of testing control systems for large language models. “We restored service shortly after we identified our error,” the spokesperson stated. Emphasizing Microsoft’s commitment to privacy and security, the spokesperson encouraged employees and customers to opt for services such as Bing Chat Enterprise and ChatGPT Enterprise, which offer heightened levels of protection.
This incident highlights the intricate challenge of balancing innovation and security as the collaboration between Microsoft and OpenAI continues to evolve.
Source: Gizmodo