Our lives are increasingly shaped by artificial intelligence. For the first time in human history, biotech and infotech have combined to create a technology whose implications we do not fully understand. A technology that may redefine how human beings exist and function is full not just of possibilities, but also of threats.
Like any other uncharted territory, AI attracts its own set of myths and comes with some major risks and challenges. In this piece, let’s look at some of them:
One of the major challenges around AI is that it is not a national problem but a global one, and it needs a global solution. Like climate change and nuclear power, the challenges related to AI are not local in nature. A platform like Google or Facebook, for instance, affects people globally, and such platforms barely scratch the surface of what AI can do.
Imagine our love affairs or our sense of identity being shaped by information fed into neural networks across the globe. A person in Uzbekistan might end up thinking what a power in America wants them to think, or vice versa.
This brings us to the challenge of cultural prejudices about AI, which distract the discourse. The only way to avoid such biases is to encourage a rational, factual dialogue, so that the question of safety can be pressed upon the stakeholders.
There is only one way to deal with the various confusions and myths surrounding AI: open, unbiased discourse, which is possible only through collaborative efforts to research AI.
Given that context, it is important to set up international research collaborations rather than ban all risky AI research, because a ban would simply push research into geographies with low safety standards and make it more dangerous.
At this moment, there is no regulatory or legal framework around AI. As a result, there is no way to ensure that ethical or operational hazards are being addressed. Such a framework would also encourage AI manufacturers to invest more in the safety and reliability of their systems.
Similarly, principles like transparency, non-manipulability, and predictability of AI need to be enforced through research funding and other institutional measures that promote safety. One factor we almost never hear about, for instance, is the possibility that AI systems could suffer.
Some AI systems, especially those designed to resemble the human brain, may actually experience suffering as research and experimentation are conducted on them. It may be worth placing such AI research under the supervision of ethics commissions, just as we do with research on animals.
When considering the implications of AI, one needs to remember that it is a self-developing, learning, and perpetually evolving technology. This means that much of the human workforce as currently envisioned will give way to an AI workforce, potentially rendering an entire class of humans economically redundant.
It is important to consider whether our socio-political systems are equipped to handle the transition from a human workforce to machines in a sensible way. We need to consider solutions like a universal basic income or a negative income tax to pre-plan for and mitigate the negative impact of AI.
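To make the negative income tax idea concrete, here is a minimal illustrative sketch of its basic mechanics: people earning below a threshold receive a payment proportional to the shortfall, while people above it pay tax on the excess. The threshold and rate values here are hypothetical examples, not a policy proposal from the article.

```python
def negative_income_tax(income, threshold=30000, rate=0.5):
    """Illustrative negative income tax.

    Below the threshold, the person receives a subsidy equal to a
    fraction (rate) of the gap between their income and the threshold;
    above it, they pay tax at the same rate on the excess.
    A negative return value means a payment *to* the person.
    (threshold and rate are hypothetical example values.)
    """
    if income < threshold:
        # Subsidy: a fraction of the shortfall, returned as a negative tax
        return -rate * (threshold - income)
    # Ordinary tax on income above the threshold
    return rate * (income - threshold)

# Someone with no earnings receives 15,000; someone earning 50,000 pays 10,000
print(negative_income_tax(0))       # -15000.0
print(negative_income_tax(50000))   # 10000.0
```

The appeal of this scheme in an automation context is that the subsidy phases out gradually as earnings rise, so displaced workers retain an incentive to take whatever paid work remains.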
All said and done, AI also has many advantages and farther-reaching implications than most of us can begin to grasp. So, rather than operating from over-enthusiasm or from fear, it is important that all stakeholders work towards responsible conversations about AI.
To ensure that AI does not become a tool in the hands of governments, AI manufacturers, or other malicious powers that be, a holistic discourse needs to be driven. Corporations, researchers, and policymakers all need to come together to ensure that this innovative technology does not end up disrupting life itself.