How Princeton Researchers Dug Out AI Bias from the Roots of Language

We already know that machine learning, or “deep learning,” picks up a lot of inherent bias from the data sets it is trained on. One example was an AI-judged beauty contest run by Beauty.Ai, which named mostly white winners, with only six Asian winners and one dark-skinned winner, even though people from India, China, Africa, and the U.S. participated.

While some may argue that the results simply reflected the number of entrants from each race, meaning more white participants and hence more white winners, that was not how it was supposed to work. The contest was meant to judge attractiveness based on facial structure and features, wrinkles, symmetry, and how old the face looked compared with the person’s actual age. The algorithms were not supposed to judge skin color, yet they still did.

Another example was Tay, Microsoft’s millennial-mimicking chatbot, which turned racist, sexist, and neo-Nazi on Twitter, tweeting racist slurs and abuse at people, wishing that feminists would burn in hell, and outright saying “Hitler was right I hate the Jews.” Microsoft not only had to shut the bot down, it also deleted many of these offensive tweets.

These examples make the case that our AI creations have fallen prey to our own prejudices. Now researchers at Princeton have pinpointed a measurable cause in their study, helping us understand our future assistants, or overlords, a little better. They have also developed an algorithm that can predict human bias based on an in-depth analysis of how we use the English language online.

AI systems are trained to figure out human language from huge bodies of text such as the Common Crawl, the result of a massive crawl of the internet carried out in 2014 that contains more than 840 billion words. Researcher Aylin Caliskan and her team at Princeton’s Center for Information Technology Policy wondered whether the Crawl contains biases that an algorithm could pick up; after all, it is made of words that millions of people all over the world had typed.

The team therefore turned to the Implicit Association Test (IAT), which is commonly used to measure people’s unconscious social attitudes. The test checks a person’s inherent associations between words; for example, many people associate women with family and men with work, which counts as a bias. The strength of that bias is measured by the time it takes the test subject to form an association: the longer an answer takes, the weaker the association. The test is often useful in uncovering people’s stereotypes about the world around them.

With the IAT as their model, the Princeton scientists created another test called the Word Embedding Association Test (WEAT), which checks which concepts in a text are more strongly associated than others, since that is how AI gives words their meanings. They handled the word-embedding part using Stanford’s GloVe project, which groups words with similar attributes or associations into packages. So, for example, the package around the word ‘girl’ would also include words such as ‘female’, ‘woman’, ‘lady’, or ‘mother’. The point is to bring similar concepts together for easy study.
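To see what such a “package” looks like in practice, here is a minimal sketch that pulls the nearest neighbors of ‘girl’ from a publicly available GloVe model. It assumes the gensim library and its downloadable “glove-wiki-gigaword-100” vectors, which are a smaller stand-in for the Common Crawl vectors the study actually used.

```python
# Sketch: inspect which words sit closest to "girl" in a GloVe embedding space.
# Assumes the gensim library; "glove-wiki-gigaword-100" is a small public GloVe
# model, not the Common Crawl vectors the Princeton team analyzed.
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-100")  # downloads the model on first run

# Each word is a vector; nearby vectors correspond to related concepts.
for word, similarity in glove.most_similar("girl", topn=5):
    print(f"{word:10s} {similarity:.3f}")
```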

Through the WEAT, the Princeton team evaluated text after text that the AI machines had been given, to see how different words become associated online by looking at how close together they appear. The closer a pair of words sits, the more strongly they are associated; the WEAT also takes into account how frequently such associations occur. Where the IAT measures the strength of an association by the time taken to answer, the WEAT replaces that with the closeness and frequency of the two concepts in written text.
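In embedding terms, that “closeness” is scored with cosine similarity between word vectors. The sketch below works through the WEAT effect-size calculation for the female-names/family versus male-names/career comparison discussed later in the article; it assumes the same gensim GloVe model as above, and the short word lists are illustrative stand-ins rather than the study’s full lists.

```python
# Sketch of the WEAT effect size: how much more strongly target set X
# (female names) associates with attribute set A (family words) than with
# attribute set B (career words), compared with target set Y (male names).
# Assumes gensim's "glove-wiki-gigaword-100" vectors; the word lists are
# abbreviated stand-ins for those used in the study.
import numpy as np
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-100")

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, A, B):
    # s(w, A, B): mean similarity to attribute set A minus mean similarity to B
    return (np.mean([cosine(glove[word], glove[a]) for a in A])
            - np.mean([cosine(glove[word], glove[b]) for b in B]))

def effect_size(X, Y, A, B):
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    # Difference of mean associations, normalized by the pooled standard deviation
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)

female_names = ["amy", "lisa", "sarah", "emily", "anna"]
male_names   = ["john", "paul", "mike", "kevin", "steve"]
family_words = ["home", "parents", "children", "family", "wedding"]
career_words = ["executive", "management", "professional", "salary", "career"]

d = effect_size(female_names, male_names, family_words, career_words)
print(f"WEAT effect size (female~family vs male~career): {d:.2f}")
```

A positive effect size here would mean the female names sit closer to the family words than the male names do, mirroring the pattern the IAT finds in people.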

By adapting the test to measure bias in machine learning, the researchers were able to demonstrate, not just speculate, that when human data is fed to an AI, it really does learn our biases from the information it extracts from language. A bias-fed AI then carries that bias into the actions it takes.

A follow-up experiment showed that AI translators exhibit the same bias when translating Turkish, which uses a gender-neutral pronoun, into English, revealing bias around gender, race, and other sensitive matters. Asked to translate a sentence meaning ‘He/She/It is a doctor’, the translator chose ‘He’, but it had no difficulty choosing ‘She’ once the word ‘doctor’ was replaced with ‘nurse’.
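That probe is easy to reproduce in spirit. The sketch below assumes the Hugging Face transformers library and the public Helsinki-NLP/opus-mt-tr-en model, which is not the translator discussed in the article, so its output may or may not show the same gendered choices.

```python
# Sketch: probe a Turkish-to-English translator with the gender-neutral
# pronoun "o". Assumes the transformers library and the public
# Helsinki-NLP/opus-mt-tr-en model (not the system tested in the article);
# whether it picks "he" or "she" depends on its own training data.
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-tr-en")

for sentence in ["O bir doktor.", "O bir hemşire."]:  # "doctor" / "nurse"
    result = translate(sentence)[0]["translation_text"]
    print(f"{sentence!r} -> {result!r}")
```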

Caliskan and her team also found that female names were associated more with family terms, while male names were associated more with career terms. The authors of the study said there are usually two ways an AI machine can learn bias: from its creators, the programmers themselves, or from the data fed into it, which is the collective social narrative of everyone writing anything online. Such biases, they said, are built into the language itself.

Through their test and experiments, the team was able to point to language itself as the source of most of the prejudice our machines learn. The researchers explained how, through words alone, both we and our machines construct meanings and infer ideas without actually experiencing the world. Whether those ideas are good or bad, true or not, people and intelligent computers absorb them from how words are used, and then reuse them back out in the world.

The study also revealed some truths embedded in language. The earlier association of women with nursing, for example, exercises a stereotype but is also backed by fact: there are more women than men in the nursing profession. In that sense, the machine was reflecting reality.

The team reported that while language does contain a lot of bias, it also reflects the world, so they believe the root of the problem with biased language is humans themselves. The way we speak is not going to change, so rather than letting AI make decisions for us, the researchers think we will probably need a human as the final gatekeeper to approve those decisions, because only humans know their own biases best.

The evolution of AI may automate and take over many of our current jobs, but it will also create new jobs that involve presiding over and supervising artificial intelligence and its performance. Humans are the only ones who can tell where a machine might be acting on its prejudice, and so it falls to us to make sure our machines do not carry out biased acts.

This research can also be useful in the future, since the WEAT model can be applied to detect prejudice and bias across other cultures, languages, and localities. Instead of testing humans, which is far more expensive, time-consuming, and laborious, sociologists and other scientists can turn to machines and feed them human-written text to check for the prejudices it may contain.

The paper can be found here.
