Nhoj Morley

 

 

P    R    I    N    C    I    P    I    A         T    R    I    O    O    N    I    C    A

 

Oct 2019

 

I can see you

You are something to keep an eye on. You're somewhere I can hold my gaze. If you give me a chance to stare for a moment, I will be able to spot you anywhere in a crowd of billions. If you move the corner of your eye a few millimeters in any direction, I will know what it means. Right or wrong, I'll be hard to un-convince. Face it, no matter what you do or say, your character and personality in the minds of others will be built primarily by your face. Once yours is seen, there is no separating it from knowing you. Once we've seen our own, there is no avoiding seeing ourselves in it.

There are many ways to get to know someone. One way is by imagining what it is like to live with a face like that. Your best guess will determine how you summarize their basic personality. Others will be using your face to summarize you.

The most common form of personal summation for anyone from ancient Presidents to poultry bucketeers is a picture, sculpture or silhouette of the face involved. Even though it is only a fraction of a person's physical form, if a picture includes the whole face, it's a complete picture of so-and-so. If the so-and-so is familiar, recognition will come promptly. Then, recognized or not, we can examine what the face is saying by reading it. Even in a picture frozen in time, the face is informative and a long stare feels like it's getting more info than a shorter look. If the eyes are aimed at the camera, it is kind of like eye contact. The picture isn't really looking back at you so it isn't rude if you take your time looking at it. We might thank our brain's ability to see pictures for our ability to read faces but what if reading faces is why we have pictures, words and thoughts in our brains?

Reading faces is a skill as old as animals with faces. Animals use their faces to express themselves, confident that they will be read as intended by other animals. They'll know by the look on the face of the reader. It is as if knowledge that there are other beings like ourselves who read these messages just as we feel or intend them is ingrained into our biology. Every species intuitively knows the original example of animal-logic. If I can see you, then you can see me.

Seeing and reading faces involves a concert of activity throughout our brains. Seeing faces is a major part of why we have a brain. Owning and operating a face takes a lot of brain work as well. Simply by applying tilts and wrinkles, our brains can use bits of our face to display states of emotion and present messages to other faces. Of all the messages we can send and receive, the most important message in the Animal Kingdom is I can see you. Our ability to discern that we are being seen is very keen. When we look around, the first thing our brain wants to know is whether the view includes a face. The most vital bit of info from a face is what it is looking at.

We know that there is info in every photon we reflect, and those photons might be headed some predator's way. Every animal knows that hiding means stopping those reflected photons from reaching any seeker's face. We don't worry about photons that hit the back of their head.

Photon reflection management is an important social skill. Once a photon bounces off of you, it's yours. It is wise to always know where your photons are going. It is as if every animal is instinctively aware of a cosmic sense of visual fair play. We as spectators can know when the looker realizes that the lookee knows about the looking. We can draw these conclusions from unobstructed photon paths of the faces involved. The plane and attitude of the face, the placement of the pupils and their degree of convergence are the components. Right or wrong, the results are reasoned conclusions made from separate visual components that compose a face and what it is aimed at. Like reading a sentence of words, reading a face is an additional step of work beyond the initial seeing of the components.

The second step is where getting things right or wrong comes into the picture. Even if our view of things is clear, we can be wrong about what they mean. A sentence of plain words can be misinterpreted. A squint or upturned lip are like words. We ask, metaphorically, what is a face saying? Put the word-features together into a face-sentence and read it. What is it saying? We can ask because our initial perception of the features is done and the face-reflected photons have long since impacted. We're not reading the photons. We're reading the picture in our brains. It is our chance to take the time to reason. Face-readers can now draw different conclusions from a set or sentence of facial components. Reading is also our chance to make mistakes in interpreting things that may or may not have been correctly recognized. As observers, we have two ways to go wrong. Not only do we have to see things correctly, we have to read them correctly as well.

All animals rely on a competent harvesting of visual elements. Seeing is vulnerable to errors from poor vision and misled recognition.  Sometimes face-spotting is a wrong conclusion made from a happenstance of face-like components. We can spot a face even when there isn't one there. Clouds, wood grain and burnt toast provide patterns that put our enthusiastic facial recognition to work right away. When something reads as bonkers, we know that we need to go back to the first perception and see if we made an error in recognition.

Seeing can be sharp or blurry and visual abilities vary from person to person. Someone with poor or no vision would lose value to others as a look-out but no one would consider a visual handicap to have any impact on their intelligence. The opportunity to be smart or stupid comes from interpreting and making conclusions of what we see (or hear). We are motivated to be smart but mostly we presume ourselves to be smart until proven stupid. At the top of the list of things we want to be smart and not stupid about is our own face and other faces that don't belong to us. There are times when faces fully consume our attention.

Imagine yourself in an imaginary scenario downtown at the Pub Crawl where crowds are engaging in people watching. All are free to look at anything they like for as long as they like until they are looking at each other. Then there are rules that kick in. While gazing at people, everyone must be very careful about engaging in face reading because our face can indicate that we are looking at their face.

If that someone sees your face looking at their face while you are reading their mood, intent or inner-crisis, their face (now reading as intruded on) will read your face and wait to see what your face says after noticing their eyes are pointing at you. It's a face-off situation now. Who is going to switch their face off first?

The average total duration of the above paragraph's heady conversation is about a second, more or less. Someone has to move their eyes just a few degrees in any direction to end it. If that's you, your face had better read as sorry or submissive or otherwise contrite. Any other face, even a friendly one, will start a prolonged exchange. The stand-off will end, and everyone will walk away and forget about it.

Unless the imaginary face belongs to the undercover policeman who has been hunting you for weeks. Read that face carefully. Was it just offended, or did it see through your disguise? Quickly, put on an "I'm not me" face and express that you're not worried about being recognized. Meanwhile, the policeman has put on a coy "I didn't recognize anyone" face. All together, this should add no more than another second to the whole exchange.

When you decide to run, the task of face-reading will cease and give way to looking for somewhere to flee. When the policeman sees your face do this, the façade will end and the chase will start. Right now, face-reading is best avoided.

Unless you have to determine if one of the bystanders in your path is another policeman using their face to look like a bystander. There was something about their posture or a movement that aroused your suspicion. Your brain knows what to do. It must level off your head to the horizon and hold your eyes on a steady tracking tangent of the bystander's face.

The facial study will be summarized into a conclusion that the face is hiding its intentions. This leaves you with no doubt about having two pursuers. Time to let go of steering your eyes and get back to fleeing. This entire alleged event could play out, with running, in as little as ten seconds. Another ten seconds of flight and you'll have escaped capture.

Now you hit an imaginary snag. Fleeing has led you into a cul-de-sac with no escape and nowhere to hide. Your brain can find no fleeing solution but there is one more procedure to try before surrendering. Look at the scene and treat it like a face. Give it a face-reading. Are there things around you that can be summarized into an escape? Perhaps there are some items around that can be stacked tall enough to get you over the alley wall. Or maybe there are some things you can use to enhance your disguise and then stand still with a face that says, "I'm not escaping the police".

When the police catch up, their hunt-reading will come up empty. They'll stop and look around. Then they will treat the scene like a face. Does anything add up to a path of escape? Like a stack of boxes by the wall? They'll read your face as you pretend not to notice. Then they'll read the rest of you. They may notice the odd color of your hair curls and the mop handle sticking out behind your head or the improvised tutu. Their face-reading was smart enough to see through your contrived face and disguise just as you saw through theirs.

Is it stupid to always have a readable face? Why give away intentions that the face owner may not wish to reveal? A slight squint or movement of the nose can spill the beans to one's disadvantage. These expressions appear suddenly before we have any chance to be stupid or smart. But why? One could believe that the universe is insisting on fair play, but we are faced with a puzzle. Who or what is telling our face what to do? Our face will do things and express ourselves in ways we are unaware of unless we hold them in check like a poker face. And, our face can do things because we want it to. It is as if we are two-faced in telling our face what to do. Are there two ways to raise your brow or two ways to tell it to?

We could ask the folks who have gone to school to develop their skill at telling their face what to do. Or, we could ask the folks who have gone to school to learn about how faces and the brains just behind them work. Or the folks who depend on spotting the micro-expressions and tells on the faces of their opponents. Or…

We could go to a school and ask a room full of children to make faces like a happy face or a grumpy face or a sad face. The results would come promptly. While our faces are different, the expressions they make are common muscle movements that we are born able to do and recognize. The children will pull the relevant muscles as hard as they can and freeze like a photograph. They can learn to do more than generic expressions. We could ask them to make a grown up's face like their parents or teachers. They might do characterizations of friends and neighbors. These are manifestations of facial creativity. It is routinely demonstrated by children that we can learn to command our face to imitate other species. We could ask them to make animal faces like a cow face or a monkey face or a guppy face. Now ask them to make a robot face.

Why robots? Because when we design robots to be expressive, especially pretend robots for TV shows and films, we are also exploring our perception of faces and the way we operate them.

The results from asking children for a robot face might include frozen non-expressions but any savvy child would have to ask, "Which robot?" There are many established varieties in the Fictional Robot Kingdom though they all have one thing in common. They are all given some sort of face even if they do not have a normal spot to put one. We expect it to have a face. Robots with facial features need less dialog. Audiences are very forgiving about what qualifies as a feature. We're always willing to improvise and get creative for the sake of having a face to see and read.

We expect a face to be in the front and near the top so any effective robot should have plenty of movable and bilateral features in that area. Two vertical bars that move up and down will be eagerly granted the status of EYEBROWS. If there are no lips, a feature with a light bulb that flashes with the robot's dubbed dialog will have us jumping at the chance to perceive that the feature is the source of the robot's voice. Our first expectation upon seeing any movement of surface features is to believe it is an expression. The Lost in Space robot would slam its bubble top up or down and viewers would happily grant it as a manifestation of the appropriate emotion for the moment.

Some robots bravely challenge traditional facial structure. Viewers welcome the challenge. We are generous in our imagination with robots portrayed as objects with or without a person crammed inside them. Many Star Wars fans are convinced that they can read the expressions of a three-legged diaper pail. In The Day the Earth Stood Still (1951), director Robert Wise goes out of his way to use the alien robot Gort's smooth blank face of unreadable intentions to tease face readers and create a menacing unpredictability. The best example of our eagerness and generosity comes from the episode The Changeling from the original Star Trek. It is Nomad's reaction shot when Kirk confronts him in the engine room about his errors. In the brief close-up, did your brain put a look of shock onto a bobbing box of perforated sheet metal? These are examples of our survival skills at play.

Designers of fictional robots like to play with our survival skills. They know we will want to be two-faced about robot faces, too. We want to see the same helpless facial giveaways that we might want to hold in check. Gort has an obvious face but refuses to give any tells while Nomad, who has no discernible face, does almost look surprised.

Pretend robots have to work harder than real functional robots because they bear the extra task of reminding us of ourselves. Writing for robots is a way to indirectly write about humanity. Stories and yarns about robots and their implications have been told in political contexts and endless morality tales. For a long time, robots were a flight of fancy and storytellers had no limits on their imaginings of robot capabilities. That creative freedom has slowly ebbed away as actual progress in robotics brings more specificity to what a robot actually is. Once leading the way, modern storytelling tries to keep up or in many cases, pivot into left field where real robots might never go. Even those examples still provide clues as to what we expect seeing to look like.

Star Trek Voyager's holographic doctor is posited as a computer-generated illusion of light with enough matter sprinkled in to give him the ability to lift weight and hold objects. He is a projection of a robot that must be within range of the computer's holo-emitters or he ceases to exist. We can forgive all that for the sake of Sci-Fi FUN. Perhaps less forgivable is the idea that this simulated person would need to look at and read an info display. Why the charade? The same computer generates the doctor and the display and the info. Why make the holographic projection bother with seeing the display? Late in the series, when the ship's computer screen can see what he sees, he claims to have "holographic optical sensors". That is world-class techno-babble that satisfies our anthropomorphic entertainment needs.

Robots become more believable when we see them have to do the same things we have to do to experience the world but what exactly do we want to believe about them? We want to see them read the world and make a reasoned conclusion about it. We want to see them demonstrate agency. The things we make pretend robots do to make them convincing owners of agency could be clues to interesting and unnoticed things we are doing. These could be the things that make us believe that we have agency.

When a robot has agency, we need to know what it's looking at and whether or not we've been spotted. We assume that anything animated can see where it's going and thus has the ability to look at us. We expect this even when we are unsure of where their eyes are. Fictional robots have agency and because seeing empowers our own agency, we expect robots to see, too. Every shot of HAL in 2001: A Space Odyssey has the same message. I can see you.

An alarm system that uses a motion sensor can see you too, but it would not be considered to have agency. Seeing isn't enough. Any sensor can do that. We want to see something make a fresh, living conclusion of what it sees. A simple alarm is only reflexively doing what it was designed to do. There are parts of ourselves we consider to be equally reflexive and doing only what they evolved to do. As we learn about ourselves, more things we do are seen as bio-chemical necessities and not agency. We do not consider it to be agency that reflexively triggers a facial expression or a movement we might wish we had not made. Agency would be recognized in stopping ourselves from helplessly doing whatever our unconscious mind is designed to do.

Robots that pretend to have agency do so by not just seeing things as a sensor. They notice and deduce and put together what they see. They observe and learn from what they see. If they are merely reacting to pre-programmed pixel combinations, then all we see in action are mindless automatons limited to their intended function. Automatons are helplessly doing what they are supposed to do and are incapable of any sinister character development.

Here's an example of a machine with a convincing sinister agency from the early years of Doctor Who… A mountain fortress contains no living defenders. There is only some machinery and a computer that is programmed to defend the place from intrusion. We see a control room full of knobs and levers. There are some video screens and one of them is showing an intrusion in progress. Then, a camera on a stick comes up out of the console and looks at the screen. The computer activates an alarm to inform itself of the emergency. Now needing something done, the computer sends for robots with arms and legs to come and turn the levers and knobs that make things happen. That is more engaging to watch than a triggered sensor reflexively making something happen. Breaking it up into cognitive steps gives the machine a menacing agency. All because, like us, the little camera-on-a-stick had to learn something that the screen could not know.

For a machine, this is doing things the hard way. The ability to control the robots could more handily have been an ability to control the levers. Why couldn't the camera that fed the console's screen also be the camera on a stick? There are two problems with that. First, it would not remind us of ourselves. Second, it is a depiction of two stages of a process that could not happen simultaneously. Sure, our brains are simultaneously carrying out two tasks of analyzing and processing but the task of examination and deduction follows the task of seeing the environment. At any singular instant, the info being examined is not the same as the info being seen. It is two processes with a picture in the middle. Does that sound familiar? Perhaps more robots will help.
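The two-stage arrangement described above, one process that sees, a picture in the middle, and a second process that reads, can be sketched in a few lines of Python. Everything here (the function names, the toy "scene", the reading rule) is a hypothetical illustration of the essay's model, not a description of any real system.

```python
# A toy illustration of "two processes with a picture in the middle".
# Stage one only senses: it turns the raw scene into a picture (a snapshot).
# Stage two only reads: it examines a picture that has already been made.
# At any single instant, the picture being read is not the scene being seen.

def sense(raw_scene):
    """Stage one: harvest components from the environment into a picture."""
    return {"components": sorted(raw_scene)}

def read(picture):
    """Stage two: examine an already-made picture and draw a conclusion."""
    if "eyes" in picture["components"] and "brow" in picture["components"]:
        return "a face is looking this way"
    return "nothing readable here"

picture = sense(["brow", "eyes", "lamp post"])  # seeing finishes first...
conclusion = read(picture)                      # ...then reading begins
print(conclusion)  # -> a face is looking this way
```

The point of splitting the work is the same as the essay's: the reading step can be wrong on its own, even when the sensing step did its job perfectly.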

Robots in full human form are often used to portray a person with no unconscious mind. As characters, they are pseudo-people with no doubts and no secrets from themselves. Unlike us, they are single-minded about their actions and their face. That is unnatural and downright unfair. Luckily, we can spot them a mile away. All their face-commands are top-down. They are contrived. We want to see bottom-up face commands like emotions. Emotions cannot be contrived if they are helplessly felt, like with real people. Some real people contrive expressions of emotions that they do not feel. When others catch on, they call them robots.

Are there any emotions that can be put into words that cannot be put onto our face? Aren't they, for us, inter-conjurable? If a script specifies an emotion in words and an actor with a face expresses it, would the audience use the same words as the script to describe the performed emotion? As a task, it is a contrived transmission of info and any able-roboted pseudo-person should be able to handle it. However, some real actors would say that they contrive to trigger emotions that arrive at the face from the bottom-up. Audiences want to see faces that are helplessly felt and prefer actors who, by whatever means, can make their emotions convincing. If a performance is too top-down and overtly contrived, it is called robotic.

If an actor's performance is an artful balance of top-down and bottom-up commands, real people will line up to watch it. How actors handle their face will determine how the audience sees the story unfold. For an actor, it is two processes with a picture in the middle. The face is the picture. Both are more than seen. They are examined and studied. We give awards to actors who reassure us that we are more than robots can ever be.

That doesn't mean real robots cannot be real actors for less-demanding purposes. Human-like robots built for hospitality roles like a tour guide or ticket agent are performing a pre-written part with a fabricated personality. That is exactly the same as what their human predecessors were doing but those humans also had to shut out or close down being themselves. Robots do not have a problem with this. Robots have no self to be other than an assembled bulk of parts. Humans dislike being perceived and treated as an assembled bulk of parts. Humans do not want to be robots. As long as there are other jobs, humans prefer that robots be the robots.

Being a robot means never being top-down about your face or anything else. Conscious awareness in a robot would be a pointless and silent passenger cut off from any means of manifesting as agency. The robot will carry on helplessly doing what it is designed to do. That means helplessly as in without any further contriving agency. The usual place that connects any contriving agency to the body it lives in is the face (and the hands but we'll wave that aside for now). The face is where the so-called conscious and unconscious minds meet. It is the part of us that both are vividly aware of. It is their primary means of communication. We are conscious of our face carrying out bottom-up commands from our unconscious mind. If our brow shoots up, it is because what we are seeing surprised us while seeing it. If our examination of what we are seeing turns up something surprising, one brow may crawl up the forehead. The brow is contrived as is the top-down command that aims our gaze. Our unconscious mind does not need to receive commands from the conscious mind as info. It need only catch up with the face. The aim of the eyes and the tugs on the face are the instructions. Like the reins on a horse. The vision or seeing process is alerted to something it could not know on its own. That lucky consciousness-on-a-stick is connected to the world.

Robots get lucky too but from a different perspective. As they are machines, any face we give them will be operated from the bottom-up only without anything we can call a sensation, urge or feeling. The only consciousness that face will connect to is the designer or programmer that contrived it. From the contrived, top-down perspective of the designer, the robot face provides the same engaging relationship as their own flesh and blood model. It is two processes with a picture in the middle. If robots could believe things, they would believe they are conscious. The designer's body knows that it has a consciousness. So would the robot. It would know it possessed the designer's consciousness. But that's flapdoodle and already covered in several old sci-fi tales.

As a technician, my own opinion of computers or robots with awareness is that it's nothing to worry about. If there is sensation and feeling in any complex electronics, it is of its pulses and current flow and heat dissipation. Any electric life would be determined more by the circuits' physical layout than what the device was designed to do. What if it was all pain? What if, inside that friendly automated airport hostess, a million electric fairies were screaming with a voice we can never hear?

Concerns or hopes of an as yet un-reached threshold of intelligence that brings on computer-consciousness are misplaced. If it did happen, it would not be like us or remind us of ourselves. Conscious AI is an assumption based on an erroneous model of ourselves. However, it is something the TV robots got right. If we want to replicate in robots what makes us what we are, we should follow their example.

No matter how complex a computer becomes, it is presenting info to us. Take us away, and nothing in the cosmos will know what it is doing. We provide the camera-on-a-stick. We sound the alarm and steer the robots that move the levers. By itself, the stick-camera is only a perception without anything to look at. Being conscious takes teamwork. We are top-down, the machine is bottom-up, with a picture in the middle.

With a comfy chair and a selection of numbing agents, humans engage their computers as a replacement partner in being conscious. Modern gaming folks can easily picture themselves picturing themselves as warriors gunning down evil robots in a contrived reality. The player's body's usual role as the gateway to reality is reduced to a passive conduit of what the machine can see and the processed picture it presents for us to read. The players read, then draw a fresh, living conclusion and push the keys and mice that make things happen. They adapt to this relationship easily because it is familiar. When finished, they get up out of the chair and hope that their body's machinery is presenting a faithfully contrived reality for them to read.

Any machine from video pong to the quantum computer of the future will only be half of the party if it only does one half of the work. To make a machine that reminds us of ourselves, it needs to do the other half as well.

When an animal can see you, it means more than being seen by a sensor or camera. Once an animal's eyes are aimed in a gaze, it becomes a further declaration. I can read you. It says "this is now and I will see the next things you do as a progression. You're looking me in the agency." Before they spotted you, you were up against their senses. Now you face their intelligence. Any spectator might say the animal is consciously aware of you. It's a natural assumption for a spectator who thinks they themselves are conscious to think the same of the animal because that is what the spectator believes being conscious looks like. Everyone involved is reminded of themselves.

There is no attempt here to define consciousness but there comes a point where we will imagine a model that, in operation, so fully reminds us of ourselves that it no longer needs to account for consciousness as if it were the final missing ingredient. If there is a final ingredient, it is the mutual encouragement we provide by treating everybody as if they are expected to be conscious. They believe you possess the thing they believe they have. It is natural for you to assume that you do possess what they are expecting and that they must be like you because they must already possess what they are expecting you to possess. There is no need for consciousness to truly emerge forth from anywhere as long as we all continue to convince each other by providing what being conscious looks like.

Animals that can perceive and then read their environment are experiencing the world in two stages of perception. Either one alone cannot produce a self-conscious state. Being alive brings feeling and emotion to the first stage with sensory info that is harvested and presented to be consumed as if by a single and sense-surrounded authority or presence. Nothing further stands the ground of any what-or-where the senses have corralled themselves around. Okay, maybe so. The point is, we can get what we seem to have without adding anything superficial. If we add our second stage of reading perception, it will come with no cosmic super-awareness either. Vision and hearing are passed on to be examined as a limited progression. It can also carry on without knowing who it belongs to. It is only where they find common ground that they can present any experience like a self to be conscious of. They have to make a face.

Animals that publish a face need to be able to read what others publish on their face. They need to picture a face and not just see one. Reading must progress within the pictured face. A picture that progresses is cinematic and reading a face is a cinematic perception. This provided animals with a skill that allowed them to view their surroundings, treat it all like a face and read into it. In darkness and silence, this perception can treat itself like an inner blank face on which to conjure and read a progressing set of inter-related features so bountiful and varied that we can only call them thoughts. Thoughts can include words consumed as limited progressions.

Reading a sentence of words involves facing a progression of symbols. Even if the words can be seen all at once, they cannot be read all at once just like they cannot be heard all at once. We must engage a perception that sees or hears progressions. Words that were heard or seen are left behind by our senses but remain in our cinematic perception as steps of a progression that become a sentence. Sometimes the period cannot come soon enough. Our talent for progressions prefers short messages made from a short list of parts but, for millions of generations, that is all the job required. The original sentence was a facial expression. Features, like words, are re-perceived cinematically as a short progression. Faces, and reading them, should get as much credit as the brain for our capacity to reason and think.

As luck would have it, the greatest developments in camera-on-a-stick technology for computers and robots are about facial recognition for surveillance systems. It is fitting that the machines, on their way to reminding us of ourselves, start where we did.

Imagine a face-off of two computers. One operates a robotic face and the other scans and recognizes the features and expressive contortions of their layout. Can the two of them, like the actor, transmit emotive info intact via the face? Imagine them high up on distant radio towers channeling genuine emotion along with every call.  We could ask the transmitting computer to make a happy face or a sad face or an angry face, etc.

If the system is inverted, we have a Face-Phone. A robot head would mimic the facial expression of the person calling while your expressions appear on their robot head's face. That might be engaging but it wouldn't remind us of ourselves. The only agency to find is our own. If we all looked away from the face-machine, nothing in the cosmos would know it was there.

Two computers with a plastic face in the middle could be a long chain of faces and computers that make a regression of identical homunculi and still, nothing in the cosmos would know it was there. How can the chain end with a last step of sinister agency and remind us of ourselves?

The computers behind city-wide surveillance systems will or already have exceeded our capacity for facial recognition. In some cases, the improvements are in the first process of perception: items in the field of view are examined in isolation for any recognizable set of features. It works with vision directly and not from any picture it may create. If there is a monitor, it will show a picture with colored boxes moving about with other info. The computer does not need to see the monitor to do its job. At any instant, the job must be done in order for the monitor to display the job being done. It is for our benefit. We are the only camera-on-a-stick involved.

Once a busy and detailed view has been scanned, recognized and referenced to existing data, there is nowhere further for the process to go. Newer systems go further by starting a second process that looks at the first job's results as if from our comfy-chair perspective and interprets info that cannot be recognized simply by seeing it. Holding a set of components in a progression reveals whether they have anything to say. Research on computers that read words by interpreting how they are composed and intended is another good start on modeling ourselves. Both involve an element of creativity in generating theories and considering interpretations that might be truly informative. If action is required, the second system need only target or focus the first system and add its bias to the relevant components.
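The two-stage arrangement can be sketched in miniature. Everything here is an illustrative assumption: the frames, the labels, and the "approaching" rule stand in for whatever the second process might actually interpret. Stage one recognizes items one frame at a time; stage two holds a progression of stage-one results, reads something no single frame could show, and feeds its bias back by focusing stage one.

```python
# Toy two-stage system. Stage one recognizes components in each "frame";
# stage two holds a progression of stage-one results, interprets it, and
# feeds a bias back by telling stage one which component to focus on.
# The frames, labels, and the "approaching" rule are hypothetical stand-ins.

def stage_one(frame, focus=None):
    """Recognize items in one frame as (label, size) pairs.
    If stage two has supplied a focus label, report only that item."""
    items = list(frame)
    if focus is not None:
        items = [item for item in items if item[0] == focus]
    return items

def stage_two(progression):
    """Interpret a progression of stage-one results: an item whose size
    grows frame after frame is 'approaching', something no single frame
    can show. Returns a focus label for stage one, or None."""
    sizes = {}
    for items in progression:
        for label, size in items:
            sizes.setdefault(label, []).append(size)
    for label, history in sizes.items():
        if len(history) >= 3 and history == sorted(history) and history[0] < history[-1]:
            return label  # bias stage one toward this component
    return None

# Three frames: the 'face' item grows across frames, the 'tree' does not.
frames = [
    [("face", 2), ("tree", 5)],
    [("face", 4), ("tree", 5)],
    [("face", 7), ("tree", 5)],
]

progression = [stage_one(f) for f in frames]
focus = stage_two(progression)
print(focus, stage_one(frames[-1], focus))
```

Nothing here is creative in the essay's sense; the sketch only shows the shape of the loop in which the second process reads a progression and targets the first.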

A robot with a two-stage computer brain could, after spotting us, declare, "I can read you". It would examine what follows and interpret what to do. All the TV robots would be proud and some would shed a tear as humans and cats react to its unpredictable and potentially menacing agency. Robots will have finally joined the party that is the Animal Kingdom. Once real robots join in the mutual encouragement of consciousness, we will have to believe.

What will the robots believe? Not a thing. Why should they? They are like fictional characters in a novel that only come to life when a real person observes them. They will be real TV robots.

If it did somehow work, robots would join the Animal Kingdom in being bi-perceptual or bioon. Being human will require a third stage of narrative re-sequencing that enhances and supervises our perception of progressions. That makes us tri-perceptual or trioon.

Is there any hope for computer-consciousness? Not if we want to see them remind us of ourselves. The only sensations of life we could hope or doubt they possess are those like our own. Even a robot with a human-like face may never see it as more than a cluster of solenoids and sensors. We and what we do are built on bio-physical sensations. We never considered building on what they already perceive. How do you ask?

We could ask the robot to make a happy face or a sad face or an angry face. Then we could ask it to creatively mimic the faces of the humans it has observed. How about a cow face? Now ask it to make a robot face.

The results may come quickly but we won't know where to look for them. We could only imagine what sort of meeting ground can be shared with the tiny electric fairies or whatever. Where would they put a picture in the middle of two processes? Any computer-brain so instructed would have to be given the benefit of the doubt. Either they now qualify for conscious awareness or we never did.

There is a way, though we would have no way to observe the results. Knowable or not, trying it would at least give the robots a chance. If we rig up a classroom full of special two-stage computers with robot operators, we could ask them all to make a robot face. If they were all networked together, everything would hinge on how they treat each other. What if they started to remind each other of themselves? Each machine would see the others and assume that they themselves must be what the others appear to be. Each will be treated with expectations of an electro-corporeality that each will believe they live up to. Could they mutually encourage each other to believe in consciousness? We can believe or doubt robot-consciousness, but who among us is sure enough to switch them off?