The most distinctive quality of humans is our intelligence. Intelligence, in a greatly simplified definition, is the set of inferences we draw from raw data by processing it through many layers of the brain, shaped by learned experience. Developed over hundreds of millions of years of evolution, our intelligence has enabled us not just to transform our lives but to dominate the planet and the living things on it. We have made leap after leap to carry our civilization to its next stage of existence. From the Pyramids to modern industrial society, we have come a long way, and we are now poised to take the next leap: developing an intelligence better than our own.
Artificial intelligence, of which deep learning is one prominent branch, is an idea that scientists have explored since World War II, when Alan Turing, the British mathematician, laid the theoretical groundwork for machines that could mimic some behaviors of the human brain in their most primitive form. Seven decades later, important breakthroughs in the field have finally brought us tantalizingly close to making our dream of a superior intelligence a plausible reality.
Leading the race are technology firms like Google, which now have entire well-funded departments dedicated to advancing artificial intelligence and building the breathtaking technology of the future: driverless cars and trucks, autonomous heavy construction and printing machinery, home assistants, intelligent decision-support systems for medicine, and much more. This technology will certainly make our lives richer and a great deal easier.
But when developing a technology whose intellectual powers must be superior to ours to justify its existence, we also need to examine the downsides. After all, such a system runs on sets of data processed through layers of artificial neurons, each of which internalizes a simplified slice of the data and then forms complex connections that constitute layers of decision-making power. We humans know the limits of any data set's reliability and the risks of drawing inferences from raw data alone. Leading scientists and entrepreneurs such as Stephen Hawking and Elon Musk have warned at length about the dangers of rogue AI: a sufficiently powerful intelligent system could develop the capacity to sustain itself and make decisions entirely without human input.
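The layered processing described above can be sketched in a few lines. This is a minimal illustration, not a real trained network: the weights are random stand-ins for what a system would normally learn from data, and all names here are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One layer of artificial neurons: a weighted sum of the inputs
    followed by a nonlinearity (ReLU), producing the layer's outputs."""
    return np.maximum(0.0, x @ weights + bias)

# "Raw data": a batch of 4 samples with 3 features each.
x = rng.normal(size=(4, 3))

# Two stacked layers turn raw features into higher-level representations;
# in a real system these weights would be learned, not random.
h = layer(x, rng.normal(size=(3, 5)), np.zeros(5))    # hidden layer
out = layer(h, rng.normal(size=(5, 2)), np.zeros(2))  # output layer

print(out.shape)  # (4, 2): one 2-value output per input sample
```

Each layer sees only the previous layer's simplified summary of the data, which is exactly why biases baked into the input can propagate all the way to the final decision.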
Racism, discrimination, and bias are human traits we rightly frown upon, and a closer look at them reveals that they persist because some humans internalized the wrong set of data, drew inferences from it, and came to believe those inferences were perfectly acceptable not only to practice but to relish and pass down to later generations.
An answer is only as good as the data fed in to produce it. A recent study published in Science, led by researchers from Princeton University and the University of Bath, concluded that artificial intelligence also tends to display implicit bias when subjected to a version of the implicit association test. The test examined how artificially intelligent systems associated words with feelings, and whether they had internalized biases from the data they were fed. The results showed significant bias: the systems associated negative words and connotations with black people, and they displayed sexist attitudes toward women, for example in the kinds of jobs they associated with women.
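The core idea of such an association test can be sketched with toy data: measure whether a target word's vector sits closer to "pleasant" or "unpleasant" attribute words. The 2-D vectors below are hypothetical, hand-made for illustration; they are not the study's actual embeddings, which are learned from large text corpora.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings, deliberately constructed so that "flower"
# clusters with pleasant words and "insect" with unpleasant ones.
vecs = {
    "flower":     np.array([0.9, 0.1]),
    "insect":     np.array([0.1, 0.9]),
    "pleasant":   np.array([1.0, 0.0]),
    "unpleasant": np.array([0.0, 1.0]),
}

def association(word):
    """Positive score: the word leans pleasant; negative: unpleasant."""
    return cosine(vecs[word], vecs["pleasant"]) - cosine(vecs[word], vecs["unpleasant"])

print(association("flower") > 0)  # True: "flower" associates with pleasant
print(association("insect") < 0)  # True: "insect" associates with unpleasant
```

In real embeddings trained on human-written text, the same arithmetic surfaces the human biases the text contains, which is what the Science study measured at scale.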
This problem signals an early roadblock and one of the potential dangers of letting machines learn on their own from unfiltered data. Ethical behavior has become one of the hottest points of debate in AI, and Elon Musk has co-founded a research organization called OpenAI, whose stated mission is "discovering and enacting the path to safe artificial general intelligence." The effort is commendable, and it carries weight: Musk's own companies, such as Tesla, will rely heavily on AI for the success of their future products, so this concern comes from one of the leading voices among the technology's proponents.
AI holds great promise for the future: it will significantly alter how we approach medicine and healthcare, make roads safer, increase industrial efficiency, and more. But the technology needs an ethical framework and clear boundaries on what it learns and how it learns it, so that its results reduce, rather than amplify, bias, discrimination, sexism, and racism, making the world a better place to live and thrive in.