The resignation of Geoffrey Hinton, widely regarded as the “godfather of AI,” from Google in April this year has raised concerns about the dangers of unregulated AI.
Hinton has repeatedly voiced concerns that AI development could lead to ethical problems, misinformation, and job displacement. He has also warned that uncontrolled AI could enable fully autonomous weapons, and that AI systems could learn harmful behaviours from the data they are trained on.
Although Google is not the only company facing such ethical issues, Hinton’s departure suggests that it may struggle to balance its AI advancements with ethical considerations. Indeed, the earlier claim by a Google engineer that the LaMDA model had become sentient raised concerns about how much power we give AI. It is crucial to carefully monitor the evolution of AI to ensure that it solves problems in society without creating new ones.
Google has published guidelines for the use of AI intended, among other things, to avoid violating privacy laws, but more needs to be done to address the broader ethical implications of the technology. It is imperative that companies like Google commit to fair and transparent AI development while anticipating the ethical issues that could arise.
AI has a bright future and can help address many societal issues, but only if it is developed ethically. Hinton’s departure from Google highlights the need for ethical considerations in the advancement of AI, and all companies working in the field must take these concerns seriously to ensure that AI has a positive impact on society.