Language, meaning and humanness

In the previous post I tried to show how closely related the reality we perceive is to the language with which we talk about it. As far as human beings go, no language -> no thought -> no true perception -> no “real world”. Language is also inextricably entwined with that difficult word “meaning”, so that separating the world from its meaning cuts across the very process by which we know there is a world.

Meaning, unlike the information on which it relies, is intrinsically non-quantifiable. That is why it tends to disappear, like Eddington’s elephant, in mathematical descriptions of the world.

A BBC finance programme played part of this clip yesterday:

I’m sure no one who has been exposed to such vocal algorithms has escaped being frustrated by them. The Tax People responded to this by saying that the operator had been “sent for retraining”. We all know that really means that a subroutine about the relevant service was added to the voice recognition options. But the verb “to train” actually derives from the idea of aiming a cannon – teaching someone how to reach goals, that is, showing them the meaning of what they’re doing. It’s teleological. Machines have no goals – they’re merely being programmed to respond better to efficient causes, and so are no more “trained” than “intelligent”. Only the programmer has a goal.
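To see how thin that “retraining” is, here is a minimal sketch of the kind of keyword routing such systems amount to (the routes and phrases are my own hypothetical illustration, not any real system’s):

```python
# Illustrative sketch only: a toy keyword router of the kind described above.
# "Retraining" amounts to adding one more entry to this table -- the machine
# pursues no goal of its own; the mapping is entirely the programmer's.
ROUTES = {
    "refund": "refunds department",
    "payment": "payments line",
    "tax code": "coding enquiries",
}

def route_call(utterance: str) -> str:
    """Match recognised keywords against the table; no understanding involved."""
    for keyword, destination in ROUTES.items():
        if keyword in utterance.lower():
            return f"Transferring you to the {destination}."
    # The inevitable default when no keyword matches:
    return "Sorry, I didn't catch that. Please hold for an operator."

print(route_call("I want to query my tax code"))  # -> coding enquiries
```

Adding “the relevant service” is just one more entry in the table; nothing in the machine aims at anything. Only the table’s author does.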

What is the thinking behind having a pleasant (until shown to be moronic) female voice pretending to understand you, rather than an obviously synthesised, “This is a recording. Please speak keywords now. If on our list your call will be diverted appropriately”? I suppose it’s intended to make one feel comfortable speaking to a virtual person who, with a bit of luck, will sound helpful enough to provide a positive experience. In the background somewhere is the thought that our voice recognition and reply technology really is getting significantly closer to using language in a way that will some day pass the Turing Test.

In fact, though, any such thinking betrays the fallacy with which I began the previous post: that language has to do with arbitrarily labelling things in the “real” world. But it doesn’t – it has fundamentally to do with apportioning, and hence communicating, meaning. And since robots have, and by their mathematical configuration can have, no concept of meaning whatsoever, frustrating experiences like those in the clip will persist however sophisticated the hardware and software.

Even if it improves enough to fool the majority of callers, they will indeed have been fooled by an imitation of language, and will not have actually communicated with the machine. Even the most unintelligent and untrained human operator has greater understanding, and therefore the possibility of being persuaded, or intimidated, into helpfulness. And if they slam the phone down at your rudeness, even that has real meaning. At best the machine will, in the end, just reach the default option of connecting you to a person.

What’s interesting about this is that few of us behave like the guy in the clip, who (he says) was as amused as he was frustrated. If we swear at the robot (though I prefer withering insults) we know it has ears, but hears not. So the result of this attempt to reassure us with a personable human voice is that it soon trains us to respond appropriately, according to its machine capacity.

And if you think about it, that means that we have learned to think like machines and dispense with the concept of meaning too. Heaven help us if, as they promise (or threaten), they ever produce companion robots for the elderly and infirm. You’ll think you’re going mad even before you do.
