
Why Fear Artificial Intelligence? — Google’s Guidelines For Creating Responsible AI


Artificial intelligence (AI) is becoming accessible to more and more people around the world. As the technology creates new opportunities to improve our day-to-day lives, its progress also raises questions about how it works and what problems it can cause if it is not developed responsibly.

As its name suggests, it is intelligence created by humans but carried out by machines: in certain fields it learns, improves, and acts on its own.

While many see this technology as something that defines the present and will be crucial in the future, others view it with suspicion and fear the impact it may have on their lives.

Famous personalities like Elon Musk believe that this technology will threaten humans in the future, especially in the workplace. Science fiction has also fed that fear, presenting dystopian futures in which AI comes to control humans.

In that sense, Google has been concerned about the danger AI can pose when it is not developed carefully and about how it interacts with people. AI needs to learn the way a human does while remaining efficient and safe as a machine. So why fear artificial intelligence?

Google notes that artificial intelligence currently carries human prejudice, and it wants to do something about it. To that end, Google has launched dedicated programs on the subject of “Responsible AI”.

Google’s Responsible AI Principles

Two of Google’s basic AI principles are “be responsible to people” and “avoid creating or reinforcing unfair biases.” This includes developing interpretable artificial intelligence systems that put people at the forefront of every step of the development process while ensuring that any unfair biases a human may have are not reflected in the output of a model.

Socially Beneficial

Google aspires to create technology that solves important problems and helps people in their daily lives. In that sense, artificial intelligence has great potential, but like other powerful technologies it also poses significant challenges.

The technology giant works to develop AI responsibly and has defined specific application areas it will not pursue, such as technologies that can cause harm or injury to people. “Artificial intelligence must be socially beneficial, making a big impact in a wide range of fields, such as healthcare, security, energy, transportation, and manufacturing,” says Anna Ukhanova, technical program manager at Google AI.

Google says it will work to ensure that the information made available through AI models is accurate and of high quality. Along with that, the technology “must be responsible to the people, being subject to the direction and control of the human being.”

Artificial intelligence algorithms and data sets can reflect, reinforce, or reduce unfair biases. In this sense, Google will try to avoid unfair impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, income, nationality, and political or religious beliefs, among others.

Anna Ukhanova researches machine learning and artificial intelligence and has explained that the success of this technology is directly tied to its responsible use. Machine learning, and artificial intelligence in general, relies on large data sets that systems are “trained on,” and verifying that data is not easy. “Oftentimes, the biases are already in the data. We are not made to process such large amounts of data,” explains Fernanda Viegas, co-lead of Google’s People + AI efforts.
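Since the biases often sit in the training data itself, an early check can be as simple as comparing outcomes across groups before any model is trained. The following is a minimal sketch of such a check; the dataset, file name, and column names (gender, approved) are hypothetical and not taken from Google’s tooling:

```python
import pandas as pd

# Hypothetical loan-application data; the file and column names are illustrative only.
df = pd.read_csv("applications.csv")

# Share of positive outcomes ("approved") within each group of a
# sensitive attribute ("gender"). Large gaps here are a warning sign
# that a model trained on this data may reproduce the same imbalance.
rates = df.groupby("gender")["approved"].mean()
print(rates)

# Simple disparity measure: the gap between the best- and worst-treated
# groups. It does not prove unfairness on its own, but it flags the
# data for a closer look before training begins.
print("Largest gap in approval rate:", rates.max() - rates.min())
```

A review like this, done before training, is exactly the kind of early-stage look at the data that the tools discussed below aim to support.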

Mark The Limits

For this reason, tools are needed to examine the data and catch possible inequalities at an early stage, and the company says it is developing more tools of this kind.

“An important step in countering inequality is transparency, and simple design changes can often make a big difference,” says Anna Ukhanova. The researcher cites Google Translate as an example: when the automatic translator encounters gender-neutral terms, it now shows both the feminine and masculine forms in the target language.

However, it is also important to show users that the technology has its limits and to explain as clearly as possible how the system arrives at its results.
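One common way to make a system’s results more explainable is to report which input features its predictions actually depend on. Below is a minimal sketch using scikit-learn’s permutation importance on a toy model; the dataset and model are placeholders for illustration and are not anything specific to Google’s systems:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy example: train a model, then produce a post-hoc explanation of it.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance asks: how much does accuracy drop when a
# feature's values are shuffled? Features with large drops are the
# ones the model really relies on -- useful to surface to users.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Reports like this do not remove a model’s limits, but they give users a concrete picture of what the system is (and is not) paying attention to.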

“The debate is really complicated. Artificial intelligence is always viewed from extreme points of view: either it is seen as a technology that knows everything and solves all problems, or as something that leads us to the abyss,” says Fernanda Viegas.

In order to move forward, you have to know how to see the middle ground. For example, artificial intelligence is capable of doing great things, like detecting cancer or predicting earthquakes. “At the same time, we must also talk about where the technology is failing, what its problems are, and where its limits are. This is the only way we can get people to rely on artificial intelligence,” concludes Viegas.


Written by SabariNath


