(This is Part Two of a two-part post by Tantu Vardhan.)
To view part one, click here.

Recap: The author draws on various professional perspectives to capture the interaction between society and artificial intelligence. From the outset it is understood that AI will alter the answers to fundamental questions about what it means to be, survive and excel as a human being. The first part is a series of conversations with an artist and an entrepreneur about their experience with AI in their respective specialties.
3. A short psychoanalysis: A Developmental Psychologist’s Perspective.
(A leading psychologist in his early 40s who proudly calls himself a vigilant futurist. He discusses how human beings as a species might adapt in an environment that includes alien intelligence.)
Computers have become highly skilled at making inferences from structured hypotheses, especially probabilistic inferences. But the really hard problem is deciding which hypotheses, out of all the many possibilities, are worth testing. Even preschoolers are remarkably good at creating brand-new, out-of-the-box concepts and hypotheses in a very creative way. Somehow they strike the right combination of rationality and irrationality, system and randomness, in a way we haven’t even begun to understand[1]. Young children’s thoughts and actions often do seem random, at times even crazy. But they also have an uncanny capacity to zero in on the right sort of weird hypothesis; in fact, they can be substantially better at this than grown-ups.
Computation is still the best, and the only, scientific explanation we have of how a physical object like a brain can act intelligently. But at least for now, we have no idea how the sort of creativity we see in children is possible. Until we do, the largest and most powerful computers will still be no match for the smallest and weakest humans[2].
I consider the question of “thinking machines” from the standpoint of what thought itself is, and of how our human survival instincts limit our ability to envision and recognise other species of thinking. Thinking isn’t mere computation — it’s also cognition and contemplation, which inevitably lead to imagination. Imagination is how we elevate the real towards an ideal, and this requires a moral framework of what that ideal actually is. Morality is predicated on consciousness and on having a self-conscious inner life equipped to contemplate the question of what is ideal.
The famous aphorism attributed to Einstein — “Imagination is more important than knowledge” — is interesting only because it exposes the real question worth contemplating: not that of artificial intelligence but of artificial imagination[3].
Of course, imagination is always “artificial,” in the sense of being concerned with the unreal or trans-real possibilities of transcending reality to envision alternatives to it. This requires a capacity for accepting uncertainty. But the algorithms driving machine computation thrive on goal-oriented executions in which there’s no room for uncertainty.
“If this, then that” is the antithesis of imagination, which lives in the unanswered, and often vitally unanswerable, realm of “What if?” I strongly believe that if we lose our capacity for asking such unanswerable questions, we will lose not only the ability to produce those thought-things we call works of art but also the capacity to ask the unanswerable questions upon which every civilisation is founded.
I feel the most important thing about making machines that can think is that they will think differently, in just the way Steve Jobs would have wanted. Because of a quirk in our evolutionary history, we are cruising along as if we were the only sentient species on our planet, leaving us with the incorrect idea that human intelligence is singular[4]. It is not. Our intelligence is a society of intelligences, and this suite occupies only a small corner of the many types of intelligence and consciousness possible in the universe.
We like to call our human intelligence “general purpose” because, compared with the other kinds of minds we’ve met, it can solve more kinds of problems. But as we continue to build synthetic minds, we’ll come to realise that human thinking isn’t general at all; it is only one species of thinking.
I believe this confusion will be resolved only when we are forced to confront digital reality face to face. I’m not certain whether we will contact extraterrestrial beings from one of the billion habitable planets in the sky within the next 200 years, but I can be almost 100 per cent certain that we’ll have manufactured an alien intelligence by then. AI could just as well stand for Alien Intelligence.
I strongly believe that in our first encounter with these synthetic beings, we may meet the same benefits and challenges we expect from contact with an Avatar or an ET. They’ll force us to re-evaluate our roles, our beliefs, our goals, our identity.
This situation might even force us back to the question with which human beings first began to imagine and interrogate life: “What is the purpose of a human being?” I believe our answer will be that humans are meant to imagine and invent new kinds of intelligences that biology could not evolve[5]. Our job, then, is to become creators and make machines that think differently, and to call them artificial aliens.
4. A line of legal reasoning: A Law Student’s Perspective:
(A sophomore law student tries to understand and confront the fear of an AI lawyer taking his place in the system.)
Lawyers can get a bad reputation for being slimy and conniving, but a synthetic-brained lawyer like ROSS has neither of those qualities. ROSS is a piece of artificial-intelligence software. It uses the supercomputing power of IBM Watson to comb through huge batches of data and, over time, learn how best to serve its users.
Ask ROSS to look up an obscure court ruling from 13 years ago, and ROSS will not only search for the case in an instant – without contest or complaint – but will offer opinions in plain language about the old ruling’s relevance to the case at hand. ROSS was recently unveiled as a “new hire” at the law firm Baker & Hostetler, which handles bankruptcy cases. Several other firms have shown interest and signed licenses to employ ROSS’s services[6].
My first reaction to this complex AI system, capable of solving the very legal problems I was meant to solve, was alarm[7]. The basic economics taught in law school told me that the world had found a substitute good for a lawyer like me.
I soon realised that these machines that can think are a potential threat only if I restrict my legal knowledge and practice to scraping through case law and books. I came to face the stark reality that I need to sharpen my skills so that I can meet my new synthetic nemesis as a colleague rather than as an adversary.
This technology could help my potential employers, fully established law firms, use the power of AI to serve justice. Right now, a huge proportion of people in our country do not have the financial means to hire a lawyer, even though the country has a surplus of attorneys on tap. With an AI-enabled lawyer like ROSS, lawyers can scale their abilities and start to serve this very large untapped market in need. In other words, by using AI lawyers like ROSS, law firms could charge lower fees, since they wouldn’t be paying humans (who generally prefer to get paid for their work) to handle clients’ cases[8].
Besides, lawyers like me who are currently out of work could use AI services like ROSS, which offer a lower barrier to entry into the market, to create more affordable options for clients. I perceive the software as a force that levels a playing field many see as unfairly tilted towards whoever has the deepest pockets.
On that note, I strongly believe that with software like ROSS in employment, lawyers like me can focus on advocating for our clients and being creative, rather than spending hours swimming through hundreds of links and reading through hundreds of pages of cases, looking for the passages of law needed to make a strong case.
5. A scientist’s environmental protocol: A Scientist/Environmentalist’s Perspective:
(when a despairing scientist meets the hopeful environmentalist in him)
Machines that think will think for themselves. It’s in the nature of intelligence to grow, to expand like knowledge itself. Like us, the thinking machines we make will be ambitious, hungry for power — both physical and computational — but nuanced with the shadows of evolution. Our thinking machines will be smarter than we are, and the machines they make will be smarter still.
Our history of constructing systems we only partly understand shows that they tend either to lose their way or to turn against their core ideals. We’ve been building ambitious semi-autonomous constructions for a long time: governments, corporations, NGOs. We designed them all to serve us and to serve the common good, but we aren’t perfect designers, and they’ve developed goals of their own. Over time, the goals of an organisation are never exactly aligned with the intentions of its designers[9].
I would like to add that the notion of smart machines capable of building even smarter machines is “the most important design problem of all time”. Like our biological children, our thinking machines will live beyond us. They need to surpass us too, and that requires designing into them the values that make us human[10]. It’s a hard design problem, and we must get it right before we go any further in pursuit of the dream of all forms of intelligence peacefully coexisting.
The purpose of the solitary walker may be straightforward — to catch fish, to understand birds, or merely to get home safely before the tide comes in — much like a capable thinking machine meant to follow the protocols embedded in it as part of its cognitive function[11]. But what if the purpose of the solitary walker is no more than the solitary walk? What if it is to find balance, to be at one with nature, to enrich the imagination, or to feed the soul and set a path for the generations to come?
On that note, I would conclude that the purpose of a potential thinking machine should be to reinforce the very qualities that make the solitary walker a human being, in shared humanity with other human beings. That might indeed be a challenge for a thinking machine.
III. Boiling/ Cooling Point:
That brings us to the conclusion of this intellectual discussion on AI, built on the power of perspectives. You will have observed the different perspectives we recorded from the same species, human beings. Each made a distinct contribution to the discussion because of the variety of backgrounds from which the individuals came. This allowed the discussion, though fragmented in presentation, to ponder points that are rarely discussed and to arrive at a holistic interpretation of the scope and effect of the next big thing in science, AI. This exploration of AI at its points of convergence with art, behavioural psychology, social entrepreneurship, legal ethics and environmental issues has helped us unravel, and come to terms with, questions worth asking and exploring. On that note, I would like to conclude with a mystifying quote from a man of great belief and wisdom about the future of science, Dr. Freeman Dyson:
“I do not believe that machines that think exist, or that they are likely to exist in the foreseeable future. If I am wrong, as I often am, any thoughts I might have about the question are irrelevant. If I am right, then the whole question is irrelevant.[12]”
[1] Cotterill, Rodney (2003), “Cyberchild: A Simulation Test-Bed for Consciousness Studies”, in Holland, Owen (ed.), Machine Consciousness, Exeter, UK: Imprint Academic.
[2] Chalmers, David (2011), “A Computational Foundation for the Study of Cognition”, Journal of Cognitive Science.
[3] Doan, Trung (2009), Pentti Haikonen’s Architecture for Conscious Machines.
[4] Cotterill, Rodney (2003), “Cyberchild: A Simulation Test-Bed for Consciousness Studies”, in Holland, Owen (ed.), Machine Consciousness, Exeter, UK: Imprint Academic.
[5] Bach, Joscha (2008), “Seven Principles of Synthetic Intelligence”, in Wang, Pei; Goertzel, Ben; Franklin, Stan (eds.), Artificial General Intelligence, 2008: Proceedings of the First AGI Conference.
[6] “Your brand new AI lawyer”. Retrieved from http://www.rossintelligence.com.
[7] “Machine Learning: A job killer?”, econfuture – Robots, AI and Unemployment – Future Economics and Technology.
[8] “Your brand new AI lawyer”. Retrieved from http://www.rossintelligence.com.
[9] “AI set to exceed human brain power”, CNN, 2008.
[10] Brooks, Rodney (2014), “Artificial Intelligence Is a Tool, Not a Threat”.
[11] Lohr, Steve (2016), “The Promise of Artificial Intelligence Unfolds in Small Steps”, New York Times.
[12] Dyson, Freeman (September 1998), Imagined Worlds, The Jerusalem-Harvard Lectures, Harvard University Press.