Microsoft is reportedly set to launch a privacy-focused version of ChatGPT later this quarter. The chatbot will be aimed at large organizations, such as banks and healthcare providers, that are concerned about data breaches and regulatory compliance.
The new product will be separate from the version used by other businesses and individuals, and it will run on dedicated servers. By isolating the service, Microsoft aims to ensure that confidential data is not used to train ChatGPT's language model, preventing accidental data leaks.
While dedicated servers will help protect sensitive data, they come at a higher cost for customers. The Information reports that the privacy-focused ChatGPT could cost more than ten times the current rate for ChatGPT usage, which will pose a challenge for Microsoft in marketing the product.
OpenAI has also announced plans to launch a new privacy-focused business subscription, which could compete with Microsoft’s product. OpenAI already offers a private ChatGPT service to Morgan Stanley, allowing the bank’s wealth management division to analyze thousands of the bank’s market research documents.
The introduction of a privacy-focused ChatGPT could be a game-changer for organizations that handle sensitive data. Samsung, for instance, has already banned the use of generative AI company-wide after some employees entered confidential information into ChatGPT. With Microsoft's privacy-focused ChatGPT, such companies could enjoy the benefits of AI without worrying about the risk of data leaks.
Microsoft has reportedly already approached organizations about the upcoming product, many of which are existing Azure customers with contracts that include data-protection guarantees. The move to offer a privacy-focused ChatGPT for large organizations underscores Microsoft's commitment to data privacy and security, enabling organizations to use AI to improve operational efficiency while mitigating the risks associated with data breaches and regulatory compliance.