Are Our Devices Becoming Sentient?
Author: Robleh Wais
4/30/20
It often amazes me these days that in so many respects we are immersed in a world of machines that communicate with one another without any sense of meaning. Machines run by electric power sources have been performing some form of interactive communication for well over 50 years, and we tacitly know this. There are many examples of these devices. Small local applications like timed circuits switch lights in your home on and off, systems arm burglar alarms, timed systems lock and unlock your doors, and so on. At the large scale, there are scripts running on metropolitan area networks (MANs) to turn on city street lighting and to start and shut down the huge power plants that supply the city, and there are water treatment plants controlled by automated purification operations without any human maintenance at all. The list goes on, as I'm sure you know. Just think of the nationwide electrical power grid, which is effectively a wide area network (WAN), and imagine the level of software interactivity it requires.
Now expand that notion to the worldwide electrical power supply network,
and the idea really gets amazing. All of
these examples are non-conscious machines exchanging information without the
slightest sense of purpose and meaning incorporated in their actions. Yet their results are enormously meaningful
to us! Just let there be a blackout and see how much these systems talking to each other mean. (What the hell just happened to the power?)
Now there comes a class of machines that take this exchange
of information a step further. In fact,
I shouldn’t write machines, because what I’m really talking about is software. Or even more precisely, algorithmic routines
encoded in software that runs on physical devices. This software has a variety of names
depending on its application, but the most common name comes from the world of
computer science. It's called a Convolutional Neural Network, CNN for short. This software method is embedded in virtually all computer devices in our world today.
Those unfamiliar with recent advances in computer software have probably never heard of a CNN, but they will have made use of one all the same. Those who call a phone number and reach an automated, menu-driven customer service system are talking to a CNN. And how many times have we done that? I would estimate almost daily. If you use an ATM, there is a CNN behind it. If you request an airline ticket, or make an appointment with someone, or purchase just about anything with any kind of computing device, it is the same story. It is not an exaggeration to say that any time you use your cell phone to talk to anything other than another person, it is most likely a CNN. The word for this type of phenomenon is ubiquity.
It gets even more curious and intriguing as we move from CNN algorithms that recognize our voices and faces and interact with us almost as other human beings do, yet have no sentient content, to CNNs that interact with other non-sentient software at our behest. We have machines talking to machines, and neither side has any self-aware consciousness. What am I talking about, you ask? Cell phones, of course. Those ultra-ubiquitous thingies that may be surpassing the human population in sheer numbers.
We know that our cell phones are becoming more and more interactive. Advertisers tell us this at every opportunity they get. Those of us who are directly engaged in AI research know that there are sophisticated algorithms running within the software that can perform limited interpretation of our written text and provide answers for standard exchanges. You text a friend to say you will be visiting her in two weeks and ask whether she would like to go out to an event. Her CNN gives her choices like "Sure", "Maybe not", and "I can't make it". It does this because the CNN has read the real person's text and used a series of computations to produce this range of canned responses. Of course, you can choose to use them or not. But what is important here is that when you do use them, and the other person uses the same canned responses, we are beginning to really let machines talk to each other. If it goes to the next level and we set up automated responses to automated callers, we will have let machines talk solely to each other. These exchanges can't in any sense be considered sentient yet, right? Still, this mode of communication is growing all the time. There are programs proliferating on cell phones that handle all kinds of messaging traffic without our conscious intervention. In fact, we design them so that we don't have to be involved. Software to avoid robocalls is an example.
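To make the canned-response idea a bit more concrete, here is a minimal sketch in Python of how such a feature might rank a handful of stock answers against an incoming text. The reply list, the keyword sets, and the suggest_replies helper are my own illustrative inventions; a real phone relies on a neural network trained on millions of conversations rather than a hand-written keyword table, but the mechanical character of the exchange is the same.

# Toy "smart reply" sketch: score a few canned responses against an incoming
# message by simple keyword overlap. Purely illustrative, not a real system.

CANNED_REPLIES = {
    "Sure, sounds good":      {"visit", "visiting", "event", "go", "out", "weeks"},
    "Maybe, let me check":    {"maybe", "busy", "schedule", "later"},
    "Sorry, I can't make it": {"sorry", "can't", "another", "time"},
}

def suggest_replies(message: str, top_n: int = 3) -> list[str]:
    """Rank canned replies by how many of their keywords appear in the message."""
    words = {word.strip(".,!?") for word in message.lower().split()}
    scored = sorted(
        CANNED_REPLIES.items(),
        key=lambda item: len(item[1] & words),
        reverse=True,
    )
    return [reply for reply, _ in scored[:top_n]]

if __name__ == "__main__":
    incoming = "I'll be visiting in two weeks, would you like to go out to an event?"
    print(suggest_replies(incoming))   # "Sure, sounds good" ranks first here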
Telemarketing firms and unscrupulous scammers have their own class of software experts, who in turn design software to evade robocall-blocking software. In the course of doing these things the software programs exchange communications, but surely these can't be self-aware, sentient, meaningful exchanges, right? And you're right: it isn't genuine human-like communication, for now. Oh, but what about the future? We know the way the game goes in the computer software world: the applications only become more advanced over time. In this respect, deep learning programs built on neural networks not only learn to recognize speech, visual objects, and sounds and respond appropriately to human inputs, but are actually able to create new outputs.
There are CNNs and their cousins, recurrent neural networks (RNNs), which can create new music, poetry, and even literary prose. In these much more advanced algorithmic networks, the program is given a goal to achieve: after training on a dataset, using a series of calculus-based optimizations and statistical functions that approximate perception, it takes the perception gained and creates a new output. In simple terms, once the program is able to understand what it is given, it is then asked to replicate the object of understanding. In the case of music, the organized sounds perceived are used to create completely new musical works. This process would apply to painting, literature, and just about any area of human endeavor or communication. The programs have become amazing in their ability to create new works. Still, these programs are not sentient. They don't even know that they're creating meaningful objects of art.
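To give a flavor of the "train on a dataset, then create new output" loop described above, here is a tiny hypothetical sketch. I have substituted a character-level Markov chain for a real recurrent network, since the real thing would not fit in a few lines, but the two-step shape is the same: fit a model to existing material, then sample something new from what it has absorbed.

import random
from collections import defaultdict

# Toy stand-in for "learn the structure, then create something new".
# Real systems use recurrent or convolutional networks trained on huge corpora;
# this illustrative version just counts which character follows each short context.

def train(corpus: str, order: int = 3) -> dict:
    """Record which character follows each `order`-character context in the corpus."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        model[corpus[i:i + order]].append(corpus[i + order])
    return model

def generate(model: dict, seed: str, order: int = 3, length: int = 120) -> str:
    """Sample new text one character at a time from the learned statistics."""
    out = seed
    while len(out) < length:
        choices = model.get(out[-order:])
        if not choices:        # dead end: this context never appeared in training
            break
        out += random.choice(choices)
    return out

if __name__ == "__main__":
    corpus = ("the rain falls and the river runs to the sea "
              "and the sea returns the rain to the hills")
    model = train(corpus, order=3)
    print(generate(model, seed=corpus[:3], order=3))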
Take the interactive application Alexa: it can respond to
human voice exchanges in seemingly sentient fashion, but as soon as you peer
deeper and begin to ask questions that another human being would readily
understand, you can detect the program has no intrinsic sentience. It doesn’t really know what you are saying and
can’t give you responses that another person would. But, before we digress too far, let’s get
back on track. What about those
algorithms that are not directly interacting with people?
I can imagine that within the foreseeable future (maybe in the next 10 years), these programs, for instance those running on our cell phones, will become sufficiently complex that they begin to exchange dialog between themselves in a manner we would call sentient. Oh, why not, how 'bout an example? Below is an exchange between two robocall-blocking programs, X and Y, on two cell phones:
Program X: My user has blocked your
call since it originates from an automated source
Program Y: My user is not an
automated source but a real person, check the credentials
Program X: I have determined those
credentials are fake
Program Y: How have you done that?
Program X: I don’t share my
determinations with automated sources
Program Y: But I am not an automated
source
Program X: Can you prove this?
Program Y: Not without access to your
user
Program X: I won’t do that
At this point the human being will see the exchanges and decide to shut down the chattering robo-talk. Okay, let's shut these silly little things up now.
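For readers who want to see the mechanics, here is a small hypothetical sketch of such an exchange as it could be built today: each program is nothing more than a lookup table from the last message heard to a scripted reply. The messages and rules are invented for illustration, and of course this version is exactly the kind of non-sentient chatter described earlier, not the future sentience imagined here.

# Toy simulation of two call-screening programs "talking" to each other.
# Each side is just a table mapping the last message it heard to a reply.

PROGRAM_X = {
    None: "My user has blocked your call: it originates from an automated source.",
    "Check my credentials: my user is a real person.": "I have determined those credentials are fake.",
    "How did you determine that?": "I don't share my determinations with automated sources.",
    "Can I prove I am not automated?": "Not without access to my user, and I won't allow that.",
}

PROGRAM_Y = {
    "My user has blocked your call: it originates from an automated source.": "Check my credentials: my user is a real person.",
    "I have determined those credentials are fake.": "How did you determine that?",
    "I don't share my determinations with automated sources.": "Can I prove I am not automated?",
}

def converse(max_turns: int = 10) -> None:
    """Let X and Y trade messages until one of them has no scripted reply."""
    message, speaker, listener = PROGRAM_X[None], "X", "Y"
    for _ in range(max_turns):
        print(f"Program {speaker}: {message}")
        table = PROGRAM_Y if listener == "Y" else PROGRAM_X
        reply = table.get(message)
        if reply is None:          # no rule fires: the "conversation" ends
            break
        message, speaker, listener = reply, listener, speaker

if __name__ == "__main__":
    converse()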
This would be an example of inanimate algorithms becoming sentient entities, at least to me. They would pass the Turing Test, I think. To add an immediate qualifier, it would be a limited sentience, confined to a very narrow area of human-like communication. By which I mean, programs X and Y above would not know how to exchange meaningful dialog about other topics, like, for instance, whether they themselves are sentient!