Cognitive Perception and Augmentation

The previous issue of the New Yorker (April 2, “The Mind Issue”) is really a tremendous collection of writing. Rachel Aviv on the mysterious wanderings of Hannah Upp in a dissociative fugue, Joshua Rothman on out-of-body experiences with virtual reality, and a profile of the maddening, dreadful Scott Pruitt at the EPA are all insightful and well-written. And a profile of philosopher (and “theoretical cognitive scientist”) Andy Clark is particularly fascinating.

Galaxy brain.

As many know, I’ve been a big fan of Charlie Stross’s writing for years now. And so many of these essays touch on technologies he’s written about, either conceptually or almost precisely. Rothman’s piece on VR describes the experience of inhabiting a new body in a virtual world in order to build empathy with its subject. By completing a series of hand-eye-touch coordination exercises, the participant begins to physically identify with the perspective of their virtual role:

I put on a V.R. headset and looked into such a mirror to see the body of a young woman wearing jeans, a T-shirt, and ballet flats. When I moved, she moved.

“You’re going to see a number of floating spheres, and you have to touch them,” Guillermo Iruretagoyena, a software developer, said.

A few colorful balls appeared near my hands and feet, and I moved my limbs to touch them. The spheres disappeared, and new ones took their place. After I touched the new spheres, Iruretagoyena explained that the “embodiment phase” was complete—I had tricked my brain into thinking that the virtual limbs were mine. My virtual self didn’t feel particularly real. The quality of the virtual world was on a par with a nineteen-nineties video game, and when I leaned into the mirror to make eye contact with myself my face was planar and cartoonish. Like a vampire’s, my body cast no shadow.

To my right, I heard the sound of keys in a door. I turned and saw a hallway. At the end of it, a man entered, with dark hair and a beige sweater.

“You fat cow,” he said, in a low voice. “Would it hurt to put on something nice?”

He began walking toward me. I looked at myself in the mirror. “Look at me!” he shouted. He walked up to a dresser, saw my cell phone, and threw it against the wall.

I watched, merely interested. It was obvious that he was a virtual person; I was no more intimidated by him than I would be by an image on a screen. Then he got closer, and closer still, invading my personal space. In real life, I’m tall, but I found myself craning my neck to look up at him. As he loomed over me, gazing into my eyes, I leaned away and held my breath. I could sense my heart racing, my chest tightening, and sweat breaking out on my temples. I felt physically threatened, as though my actual body were in danger. “This isn’t real,” I told myself. Still, I felt afraid.

The technology is now being used to treat domestic abusers, with marked success (albeit at a small scale). But this sort of gender-bending sounds like nothing less than Stross’s Glasshouse, where in the 27th century the male-identifying protagonist is reconstituted as a late-20th-century woman in the service of a social experiment, a weird interactive version of the “ancestor simulation” so often hypothesized by Nick Bostrom et al.

More revelatory – and more relevant to my everyday life – is Larissa MacFarquhar’s profile of Clark. Most of the article concerns a philosophical conception of intelligence and selfhood: rather than a wholly artificial intelligence with no external reference points, there is a synthesis of mind and body in which the two have a symbiotic relationship and, perhaps, cannot exist in isolation. We are not, contra popular perception, beings whose selves exist only within our individual minds. Much as humanity adopted tools to accomplish physical tasks, we augment our brain-based computing power in various ways:

Consider a woman named Inga, who wants to go to the Museum of Modern Art in New York City. She consults her memory, recalls that the museum is on Fifty-third Street, and off she goes. Now consider Otto, an Alzheimer’s patient. Otto carries a notebook with him everywhere, in which he writes down information that he thinks he’ll need. His memory is quite bad now, so he uses the notebook constantly, looking up facts or jotting down new ones. One day, he, too, decides to go to MoMA, and, knowing that his notebook contains the address, he looks it up.

Before Inga consulted her memory or Otto his notebook, neither one of them had the address “Fifty-third Street” consciously in mind; but both would have said, if asked, that they knew where the museum was—in the way that if you ask someone if she knows the time she will say yes, and then look at her watch. So what’s the difference? You might say that, whereas Inga always has access to her memory, Otto doesn’t always have access to his notebook. He doesn’t bring it into the shower, and can’t read it in the dark. But Inga doesn’t always have access to her memory, either—she doesn’t when she’s asleep, or drunk.

Andy Clark, a philosopher and cognitive scientist at the University of Edinburgh, believes that there is no important difference between Inga and Otto, memory and notebook. He believes that the mind extends into the world and is regularly entangled with a whole range of devices. But this isn’t really a factual claim; clearly, you can make a case either way. No, it’s more a way of thinking about what sort of creature a human is. Clark rejects the idea that a person is complete in himself, shut in against the outside, in no need of help.

Compare that with the protagonist of the first third of Accelerando, the fast-talking, computationally augmented Manfred Macx, with a body-computer “metacortex” full of assignable “agents” that handle running a train of thought down to its ultimate conclusion:

His channels are jabbering away in a corner of his head-up display, throwing compressed infobursts of filtered press releases at him. They compete for his attention, bickering and rudely waving in front of the scenery…

He speed reads a new pop-philosophy tome while he brushes his teeth, then blogs his web throughput to a public annotation server; he’s still too enervated to finish his pre-breakfast routine by posting a morning rant on his storyboard site. His brain is still fuzzy, like a scalpel blade clogged with too much blood: He needs stimulus, excitement, the burn of the new…

Manfred pauses for a moment, triggering agents to go hunt down arrest statistics, police relations, information on corpus juris, Dutch animal-cruelty laws. He isn’t sure whether to dial two-one-one on the archaic voice phone or let it ride…

The metacortex – a distributed cloud of software agents that surrounds him in netspace, borrowing CPU cycles from convenient processors (such as his robot pet) – is as much a part of Manfred as the society of mind that occupies his skull; his thoughts migrate into it, spawning new agents to research new experiences, and at night, they return to roost and share their knowledge.

While the sensory-overload risks are as real as those of a Twitter feed or RSS reader today, the difference with Macx’s metacortex is how directed its activities are: not the output of some opaque algorithm, but generated solely by his own interests and ideas. Imagine a Wikipedia session of several dozen tabs, but with the ability to consume them all in near-simultaneity.

I identify closely with this, not least because of my own tendency toward distraction and idle thought, but also because of my reliance on notepads and Evernote alike to keep track of the world around me. I can walk into a grocery store with five things to buy and emerge with ten; only three of them will have been from my list, and I’ll have forgotten there were five to begin with. The ability to offload a thought or a task to be remembered is vital to freeing up “processing power” (metaphorically speaking, not in the sense of nootropics), and I can only hope that someday multitasking across distributed mental processors becomes a reality. It’s only then, I tell myself, that I’ll be able to finish writing that book I never started. To pursue an idea all the way to its end. In short, to fulfill my – and our – full potential as thinking beings.

In the meantime, the closest I’ll come is to keep reading Stross’s excellent speculative fiction.

The Means of Consumption

PC sales are down. Way, way down.

What’s to blame? Zero Hedge says that in addition to lackluster sales and a poor reception for Windows 8, we are, after all, still in a pretty severely depressed economy, and there’s just no end-user demand for new OSes or new computers in general. None of which is wrong. Windows 8, in particular, severely hamstrings Windows as an operating system, forcing it to suffer from the same limitations as a phone (which is just silly, especially when Windows 7 was a solid OS).

But the comments point out that we’ve really reached a point where most people just don’t need modern computing power. The rise of mobile and tablet devices has only compounded that. If the average person uses a machine just to tweet, surf the internet, check email, or watch a movie, what’s the point of multiple cores and more RAM than a hard drive held less than a decade ago? The smaller devices serve those needs and obviate the need for real “computing” devices.

But two comments in particular caught my eye. The first:

[M]ost people don’t do physics simulations, train neural nets, backtest stock trading strategies and so on.

In tight times – why upgrade something that’s already better than most need?  Even I still use some  2 core relative clunkers (that were the hottest thing going when bought).  Because they do their job and are dead-reliable.

And the second:

[E]very manuf [sic] caught the disease it seems.  They don’t give a shit about their installed base, only new sales, and are just slavishly following the migration of most people to crap mobiles – crap if you need any real computing power and flexibility and multi-tasking.

I recently got a Nexus 10 – it’s cute, sometimes handy and so on.  But solve any real problem on it?  You must be joking, it’s just not there.  It’s great for consuming content, sucks for creating anything real – it’s a toy that probably does match the mainstream mentality – the “average guy” who half of people are even dumber than.  That ain’t me.  I’m a maker…I need real tools.

This is just the digital embodiment of a long-running trend. We don’t shape our environments the way we used to – we don’t create; we only consume. We refine what exists without thinking bigger. And the sad part about something like the news about PC sales, which could conceivably serve as a wake-up call, is that it won’t matter. The lesson that will be drawn is that Windows 7 was fine, so why bother iterating new versions. But the real lesson is that there is at least some segment of humanity that’s trying to create and only needs the proper tools to do it. Possessing the means of consumption allows one only to consume (the Apple model); if we can repopularize “dual-use” technologies that aren’t restricted to content distribution but also enable its creation, well, then we might see innovation for all the right reasons.