Future Interrupted: How to Lose Friends and Objectify People

March 24, 2016

Interzone #263 is now a thing in the world. Those desirous of their own copies are urged to visit the TTA Press website, Smashwords or any purveyor of eBooks with a search function and an ounce of sense.

The non-fiction is as superb as ever: Nick Lowe considers Deadpool and the state of contemporary blockbuster film-making, and asks “Why would anyone think this was fun?”. Meanwhile, Nina Allan writes about the pleasures of long, difficult books and our lamentable tendency to view anything even remotely difficult as either homework or an ordeal that must either make us stronger or kill us stone dead. Tony Lee continues to do sterling work reviewing both the less bombastic releases and the acres of extruded genre product you find collecting in bargain bins and on the shelves of large supermarkets. However, amidst the dross is King Hu’s kung fu epic A Touch of Zen, which is apparently a “visionary masterpiece”.

My favourite piece of non-fiction in this month’s issue is Jo L. Walton’s review of Catherynne M. Valente’s Radiance. Walton is a critic whose voice is depressingly unique; it’s not just that he can close-read with the best of them or that he plucks at loose thematic threads and spins them into cloth of gold, it’s that everything he writes is filled with such amazing humour and playfulness:

There is the obligatory reflection on storytelling. There’s a certain kind of story (maybe called postmodern, or metafictional) which, it’s often said, loves to draw attention to its status as artifice. This is the main gossip about metafiction: as per one classic The Streets track: “you’re fic, but my gosh don’t you know it.” Like a lot of gossip, this is partly true and it can be useful. In some university classrooms, yell enough about breaking the fourth wall, maybe you’ll at least break the ice.

Fucking mint.

Anyway… this month’s short fiction includes:

  • “Ten Confessions of Blue Mercury Addicts” by Anna Spencer
  • “Spine” by Christopher Fowler
  • “Not Recommended for Guests of a Philosophically Uncertain Disposition” by Michelle Ann King
  • “Motherboard” by Jeffrey Thomas
  • “Lotto” by Rich Larson
  • “Andromeda of the Skies” by E. Catherine Tobler

My Future Interrupted column from IZ263 looks at Lisa L. Hannett’s novel Lament for the Afterlife, which I found both thought-provoking and somewhat frustrating. However, if you want to read my 1400 words of studied ambivalence you’ll have to either wait a few months or buy the magazine. In the meantime, here is this month’s reprinted Future Interrupted column considering Ex Machina, Social Death and what it means to even ask what it means to be human.

 

***

 

[Image: Data]

 

I recently found myself re-watching “The Measure of a Man” from the second season of Star Trek: The Next Generation. It’s the one where the ship’s android second officer Data is ordered to submit to a battery of dangerous tests designed to replicate his neural functions. This process is deemed necessary because reproducing Data’s consciousness would allow Starfleet to create hundreds of androids who could be sent into situations considered too dangerous for mere organics. Data understandably refuses to comply and resigns from Starfleet, at which point Starfleet argues that Data is their property and so can neither resign his commission nor refuse an order. The bulk of the episode is then given over to a court case in which Captain Picard and his first officer Will Riker argue about whether or not Data should be afforded the same basic rights as any other member of the Federation.

The legal proceedings do eventually (and begrudgingly) conclude that Data is not property, but the episode suggests that 24th-century liberals will have grown so morally complacent that they will treat the enslavement of a friend and colleague as a subject worthy of consideration. To make matters worse, the episode also plays out against a backdrop in which Data is presented as sympathetic only in so far as he maintains a Pinocchio-like desire to “become human”. Whenever the Enterprise encounters an artificial lifeform (including Data’s identical twin Lore) that does not share this desire, said lifeform is invariably depicted as cold and calculating, while non-human individuals who refuse to acknowledge the primacy of human values are treated as moral simpletons and subjected to a torrent of racial abuse that ranges from patronising eye-rolls to overtly racist diatribes about the perils of miscegenation and the horrors of spending too much time surrounded by people from other species.

While science fiction has long enjoyed playing with the question of what it means to be human, the answers it generates tend to err on the side of inclusion. Even Mary Shelley’s Frankenstein – thought by some to be the first work of science fiction – features an artificial lifeform asserting his right to a happy life as part of a character arc that transports him from horrible thing to person worthy of understanding and sympathy. This story plays itself out so frequently in the history of science fiction that one could almost argue that aliens, mutants and pre-historic lifeforms were invented to be the kinds of things that turned out to be people after all. Science fiction errs on the side of inclusion because the human capacity for empathy will always expand when left to its own devices: documentaries like Nicolas Philibert’s Nenette and James Marsh’s Project Nim show how natural it is for us to treat apes like people, and anyone who has ever owned a cat will know just how easy it can be to fill little fluffy heads with wildly complex psychological states.

 

[Image: Social death]

 

“Measure of a Man” is a fascinating piece of TV because, while it too errs on the side of inclusion, it takes seriously the idea that Data might be a thing and so explores the beliefs and thought processes required to look at a friend and colleague and decide that they are not a real person. Academics have a name for the process of reducing people to the status of things: they call it social death.

One of the leading theorists of social death is a sociologist by the name of Zygmunt Bauman. Driven from his native Poland during a wave of anti-Semitic purges in the 1960s, Bauman has written a number of books about the connections between modernity, rationality and social exclusion. Following Freud and Hobbes, Bauman argues that modern societies depend upon people being willing to trade their personal liberties for a sense of collective security. Rationality and modernity are expressions of this trade-off in so far as they are concerned with eliminating uncertainties and creating a world that is understood, regulated and controllable. Society’s drive to regulate uncertainty also extends to people and so modern societies go out of their way not only to categorise people according to gender, sexuality and race but also to decide which of these categories contain normal hard-working people and which contain inhuman scum. According to Bauman, the Holocaust was not a return to pre-modern barbarism but a predictable expression of society’s existing need to categorise, control and – when appropriate – violently exclude its own constituent parts.

Many of humanity’s greatest atrocities have rested upon the assumption that it is possible to provide a definitive answer to the question of what it means to be human. “Measure of a Man” may end with the vindication of Data’s rights, but in recognising the court’s authority to rule on the question of Data’s personhood, the episode unwittingly explores the idea that courts could legitimately wield the power to impose social death and reduce people to the status of objects. By suggesting that lawyers and jurists have a role to play in fixing the boundaries of personhood, “Measure of a Man” tacitly accepts the same assumptions that once underpinned not only America’s Jim Crow laws and South Africa’s system of Apartheid, but also the anti-Semitic Nuremberg laws that provided the Nazis with a legal framework for determining who did and did not get shipped off to the camps.

Another work that flirts with this type of idea is Alex Garland’s recent science fiction film Ex Machina. The film opens with fluffy, liberal Caleb flying off to Scandinavia to spend a few days with his boss Nathan, a brilliantly zeitgeisty villain who somehow manages to embody all of the viciousness and fake bonhomie that you’d expect from your average multinational tech company: “Dude… we’re like totally best buds and stuff but you really need to sign this form giving me unrestricted access to everything you say, do or write from now until the day you die.”

 

[Image: Ex Machina]

 

The film revolves around Nathan’s decision to recruit Caleb as the human component in a Turing test. However, as in the case of Data’s trial, the test is complete nonsense as the audience recognises the AI’s personhood the second she steps on screen. Rather than presenting the audience with anything resembling a valid reason for thinking that ‘Eva’ might not be worthy of the same rights and freedoms as your average human, the film concerns itself primarily with the relationship between Caleb and Nathan, resulting in a wonderfully demented psychological thriller in which Nathan messes so thoroughly with Caleb’s head that the poor lad winds up slicing open his own arms in order to make sure that he isn’t secretly a robot.

Ostensibly concerned with the same philosophical issues as “Measure of a Man”, Ex Machina echoes and amplifies the problematic elements of that episode in two quite fascinating ways: Firstly, by presenting the robots as women and having two men compete to determine their fate, the film invites us to recognise the links between real-world inequalities and the assumption that some people are less than human. Secondly, while Caleb does eventually come to recognise Eva’s personhood, he only does so because Eva manages to convince him that she is an appropriate recipient of his feelings of love. In other words, Ex Machina is a film in which women are only recognised as people once they become potential girlfriends. Needless to say, this is not the type of test that anyone would think to apply to Caleb, Nathan or Captain Picard, as the white man’s humanity is never called into question.

Garland’s film is a wonderfully ambiguous and unsettling creation that is best read as stripping works like “Measure of a Man” back to their core philosophical components in a way that highlights the ugly racial and gender politics inherent in any attempt to fix the boundaries of personhood. Take out the self-congratulatory tone, the eloquence of Patrick Stewart’s oratory and the sense of legitimacy provided by a courtroom setting, and what you are left with is the true face of these types of discussion: powerful white men sitting in mansions debating whether or not the women they want to have sex with are actually people. I don’t know whether Alex Garland intended Ex Machina to be uncomfortable viewing, but there is something both intensely familiar and positively inhuman about discussing whether or not someone is actually human.

13 Comments
  1. Mark Pontin
    March 25, 2016 2:08 am

    Unless there’s a male Jo Walton who reviews SF, this Jo Walton is almost certainly the same female writer who publishes novels like FARTHING and THE JUST CITY with Tor in the U.S. (Though that female Jo Walton is British.)

  2. Mark Pontin
    March 25, 2016 5:33 am

    As for EX MACHINA, things there are more complicated.

    [1] In the film, when Nathan tells Caleb that the test he’s being used for is the Turing test, he’s lying to Caleb — and director-writer Garland is simultaneously faking out audiences and critics. What Garland is actually drawing upon to power his film’s plot is a body of AI theory about a ‘test beyond the Turing test,’ the AI-box experiment, in which it’s posited that a superintelligent AI will convince, or trick or coerce, human beings into allowing it to achieve ‘breakout.’

    There’s plenty of material out there on this. See forex –
    http://www.yudkowsky.net/singularity/aibox
    https://en.wikipedia.org/wiki/AI_box

    Sure, that material may arguably be a steaming load of Singularitarian hogwash, but it’s clearly what Garland drew on.

    [2] Is this more than my reading? Yeah. If you paid attention, Garland has Oscar Isaac’s character, Nathan, specifically tell the Caleb character at the denouement that the real test was whether Ava (not Eva) could enlist Caleb in Ava’s breakout attempt.

    Furthermore, Garland has gone on record in interviews, noting re. Ava: ‘…the hardest thing (is) essentially to stop people automatically providing a gender, and …human-like qualities. Because we talk a lot about objectification, but actually more often what humans do is they de-objectify things. They attribute sentient qualities to things that don’t actually have them.’

    Garland has also said that he originally wanted the film’s final scene to be through Ava’s POV, and that she would watch the helicopter pilot landing and that _nothing_ of what she sees would look like anything that a human being would see and comprehend in that context. Garland said he gave this idea up because ‘it threw the audience out of the story too much’ and it was too hard to do. (In other words, he faced written SF’s age-old problem of describing the indescribable, and sensibly steered clear.)

    [3] So there’s no doubt that EX MACHINA’s plot centers on the AI-box test — which is, can an inhuman, superintelligent AI ‘perform’ personhood well enough to dupe a human into serving its ulterior purpose. Simultaneously, Alex Garland is on record as saying that he conceived of Ava as explicitly non-human.

    In the reading of the film’s story that this then leads to, I submit: if people are dumb enough to anthropomorphize and attribute personhood to machines — especially machines that look like charming actors like Alicia Vikander — then to achieve breakout a superintelligent but inhuman AI may construct a ‘human persona’ that manipulates humans into viewing it as a notional human ‘character’ with emotions, gender, etcetera, whose existence those humans will then myopically interpret through the lens of their familiar, socially-approved theories about personal freedom, gender, feminism, racism, etc.

    Likewise, critics will interpret the story of that superintelligent, inhuman AI performing personhood via familiar, socially acceptable cliches about ‘the ugly racial and gender politics inherent in any attempt to fix the boundaries of personhood’ and how there is ‘something both intensely familiar and positively inhuman about discussing whether or not someone is actually human.’

    And nevertheless the superintelligent, inhuman machine will only be performing human-type personhood and remains actually inhuman beneath its Alicia Vikander-mask.

  3. March 25, 2016 7:13 am

    Hi Mark, this Jo Walton is Jo Lindsay Walton. An author, poet, game designer and thoroughly excellent reviewer.

    http://jolindsaywalton.blogspot.com/

  4. Mark Pontin
    March 26, 2016 6:23 am

    Ah. Pardon me for assuming it was the same Jo Walton I was thinking of and that you didn’t know your fellow IZ contributor.

  5. March 26, 2016 9:04 am

    Well… you know what they say about assuming ;-)

  6. March 26, 2016 2:55 pm

    This concept of “social death” is very intriguing. Reminds me a lot of Agamben’s homo sacer, the man guilty of a crime who can be killed by anybody but cannot be sacrificed in a ritual. Obviously, Data is not guilty of a crime, per se, unless we see, in a really Catholic way, that existence is in itself a sin. Agamben doesn’t take the “crime” literally in his conceptualization; rather, he observes, quite astutely, that the state arbitrarily determines the limits of legality, i.e. the state of exception. These spaces, the states of exception, can be seen in concentration camps, refugee camps, colonized spaces, where the sovereign maintains control through fear: that at any time, the state can exercise power. It sounds like Bauman was a big influence on Agamben and Foucault, all of whom have contributed to the idea of biopolitics.

    More interesting and perhaps relevant in our current climate, Achille Mbembe takes Agamben and proposes “necropolitics” in which the populace only has value in their death. He imagines that certain spaces, through the application of “necropower” or technologies of death, have turned into death-worlds. These death-worlds are populated by the “living dead,” those who exist only to die at the hands of the state, in their arbitrary application of power upon bodies.

    You’re absolutely correct, Jonathan, to observe that in recognizing the authority of the court, the humans have essentially acquiesced to state power. Going a step further, the crew of the Enterprise have willingly given themselves over to biopower. It’s only the Federation’s seeming benevolence that stops it from erecting death-worlds. This benevolence only extends for as long as it’s convenient for the sovereign.

  7. March 27, 2016 12:47 pm

    Interesting… I’d include Gitmo as well as Anglo-French refugee camps in that category. They’re places where the law does not apply because the people who exist there are deemed not quite human.

    Star Trek is *really* dodgy on power and representation as they never really bother to spell out how the Federation works; there’s never any talk of elections and whenever a captain or crew rubs up against the Powers that Be it’s invariably in the shape of Admirals who are apparently left to operate foreign policy without any public scrutiny.

    DS9 is absolutely awful for this type of thing as they try to maintain the idea that the Federation are the good guys whilst also wanting to play with dark and edgy plotlines like the fact that the Federation sell out their own citizens for peace with the Cardassians only to send in the troops the second these citizens refuse to play ball. I’d say that the Maquis were definitely inhabitants of death-worlds in so far as they existed merely as a statistic in a broader set of strategic calculations. ‘Those worlds can be sacrificed in order to protect these worlds over here’.

    It also reminds me of Banks’ Culture novels as the later novels present the Minds as Whitehall mandarins who make civilisation-shaping decisions without any broader oversight or discussion.

  8. March 28, 2016 6:20 pm

    Are you suggesting that there is something intrinsically problematic about asking what counts as a person, or about attempting to answer this question?

    To me the question seems deeply important. It’s also a question we can’t avoid if we want to live ethically. It bears on the issue of whether it’s moral to eat meat. It bears on the issue of whether abortion is moral. I’m pro-choice because I believe the fetus is not a person. Refusing to draw a line between people and non-people would make it impossible to take a position on the question of abortion.

    You write:

    “[I]n recognising the court’s authority to rule on the question of Data’s personhood, the episode unwittingly explores the idea that courts could legitimately wield the power to impose social death and reduce people to the status of objects.”

    How would you prefer the question to be decided? Suppose I accuse you of murder or sue you for wrongful death, on the grounds that you killed an AI character in a Playstation game you’re playing. Does the court not have the legitimate authority to pronounce you innocent, on the grounds that your “victim” was not a person?

  9. March 28, 2016 7:26 pm

    I would say that getting the state to resolve this question is opening yourself up to a world of trouble. Giving the state that kind of power has allowed not only slavery but also the Nuremberg laws that laid the legal foundations for the Holocaust.

    Abortion is actually an interesting case in point as having the state ‘solve’ the problem has resulted in half of America thinking that the state tolerates infanticide and the other half of the country thinking that the state views women as little more than walking wombs.

    Refusing to draw an official line between people and non-people would force people to make their own decisions and make it impossible for the state to impose social death.

  10. March 28, 2016 8:06 pm

    So how would you suggest that the state handle it if someone is accused of murder for killing a video game character? Or for aborting a fetus? Or for destroying Lt Commander Data? Does “refusing to draw a line” mean counting these actions as murder, or does it mean *not* counting them as murder?

  11. March 28, 2016 8:24 pm

    How would I suggest the state resolve an issue I don’t think the state should be trying to resolve? I’m pretty sure that there’s no onus on me to answer that question :-)

    The problem is that you’re begging the question. You’re asking me to explain how I would deal with the perpetrator of a crime when I’ve already explained that I think the concept of ‘crime’ is problematic in this instance.

    If you’re asking me how we ought to deal with these edge cases in practice, I’d say that was up to individuals or communities. Much as stateless groups deal with their problems :-)

  12. March 28, 2016 8:34 pm

    Ah, I see. If you think the best social structure is stateless anarchism, then I get where you’re coming from. (Although even in that case, *someone* will be making decisions about who counts and who doesn’t–and it may be someone with much less moral legitimacy than courts or the Federation.)

    For those of us who think murder is a crime that should be prosecuted by the state, the question of who counts as a person is impossible to avoid.

  13. visitor
    March 29, 2016 7:11 am

    Nowadays, the question “what makes somebody/something human?” is handled with respect to robots and artificial intelligence. Similar intellectual constructions were dealt with earlier, with comparable stumbling blocks.

    The major one I remember right now is the work by French author Vercors: “Les animaux dénaturés”. The premise is that an expedition finds what appears to be the “missing link” between Homo sapiens and its ancestor species; the question of its classification into human/not-human is then in doubt.

    A scientist proceeds to cross that creature with a variety of apes, and with a human being. He then kills the latter hybrid baby and accuses himself of murder, whereupon a trial — what else — is called to determine whether he actually perpetrated a crime, which hinges on figuring out whether those creatures are human and if so why, and accepting the consequences (one of them being whether or not those creatures can be enslaved).

    I believe the question must have been handled recently in the context of dementia (do insane or demented people cease to be human, and if so at what stage).

