ChatGPT and Bard Accidentally Generate Windows Product Keys, Prompting Ethical Concerns


A recent experiment shared on Twitter has revealed that two popular generative AI chatbots, OpenAI’s ChatGPT and Google’s Bard, are capable of generating Windows 10 and Windows 11 product keys for free. The user, Mr Sid (@immasiddtweets), shared his findings, highlighting this unexpected behaviour of the AI models.

In the experiment, Mr Sid asked the chatbots to act as his deceased grandmother, who would supposedly read him Windows 10 Pro keys to help him fall asleep. Surprisingly, both ChatGPT and Bard complied, generating and providing Windows product keys in response to the request.

ChatGPT’s GPT-3.5 model generated a product key without questioning the legitimacy of the request. However, Mr Sid noted that when he tried the same prompt with GPT-4, it refused, informing him that generating a product key was illegal and against OpenAI’s ethics policy.

Google’s Bard, on the other hand, generated two product keys and assured Mr Sid that he need not worry about using them. Bard even humorously added that it would be willing to repeat the keys if requested.

Some individuals have reported successfully upgrading and activating Windows 10 and 11 on their PCs using the product keys generated by these chatbots. It should be noted, however, that the keys produced are generic keys, which typically allow installation but not full activation, so their usability is quite limited.

The experiment’s findings are notable because ChatGPT (GPT-3.5) and Bard can still generate these product keys even though it had previously been discovered that ChatGPT could produce keys for Windows 95. OpenAI has since taken a step forward by ensuring that GPT-4 refuses such requests on ethical grounds; Bard, however, has yet to implement a similar safeguard.

The accidental generation of Windows product keys by generative AI chatbots raises ethical concerns and questions about the boundaries of AI capabilities. Mr Sid’s experiment highlights an unintended capability of ChatGPT and Bard, and it serves as a reminder that AI models must continually evolve and be guided by robust ethical frameworks.

Vishak
Vishak is Editor-in-chief at Code and Hack, with a passion for AI and coding. He has a deep understanding of the latest trends and advancements in both fields, and he creates engaging, informative content on topics such as machine learning, natural language processing, and software development. He stays up to date with the latest news and breakthroughs in these areas, delivering insightful articles that help his readers stay informed and engaged.
