Cognitive Perception and Augmentation

The previous issue of the New Yorker (April 2, “The Mind Issue”) is really a tremendous collection of writing. Rachel Aviv on the mysterious wanderings of Hannah Upp in a dissociative fugue, Joshua Rothman on out-of-body experiences with virtual reality, and a profile of the maddening, dreadful Scott Pruitt at the EPA are all very insightful and well-written. And a profile of philosopher (and “theoretical cognitive scientist”) Andy Clark was particularly fascinating.

Galaxy brain.

As many know, I’ve been a big fan of Charlie Stross’s writing for years now. And many of these essays touch on technologies he’s written about, either conceptually or almost precisely. Rothman’s piece on VR describes the experience of inhabiting a new body in a virtual world to build empathy with its subject. By completing a series of hand-eye-touch coordination exercises, the participant begins to physically identify with the perspective of their virtual role:

I put on a V.R. headset and looked into such a mirror to see the body of a young woman wearing jeans, a T-shirt, and ballet flats. When I moved, she moved.

“You’re going to see a number of floating spheres, and you have to touch them,” Guillermo Iruretagoyena, a software developer, said.

A few colorful balls appeared near my hands and feet, and I moved my limbs to touch them. The spheres disappeared, and new ones took their place. After I touched the new spheres, Iruretagoyena explained that the “embodiment phase” was complete—I had tricked my brain into thinking that the virtual limbs were mine. My virtual self didn’t feel particularly real. The quality of the virtual world was on a par with a nineteen-nineties video game, and when I leaned into the mirror to make eye contact with myself my face was planar and cartoonish. Like a vampire’s, my body cast no shadow.

To my right, I heard the sound of keys in a door. I turned and saw a hallway. At the end of it, a man entered, with dark hair and a beige sweater.

“You fat cow,” he said, in a low voice. “Would it hurt to put on something nice?”

He began walking toward me. I looked at myself in the mirror. “Look at me!” he shouted. He walked up to a dresser, saw my cell phone, and threw it against the wall.

I watched, merely interested. It was obvious that he was a virtual person; I was no more intimidated by him than I would be by an image on a screen. Then he got closer, and closer still, invading my personal space. In real life, I’m tall, but I found myself craning my neck to look up at him. As he loomed over me, gazing into my eyes, I leaned away and held my breath. I could sense my heart racing, my chest tightening, and sweat breaking out on my temples. I felt physically threatened, as though my actual body were in danger. “This isn’t real,” I told myself. Still, I felt afraid.

The technology is now being used to treat domestic abusers, with marked success (albeit at a small scale). But this sort of gender-bending sounds like nothing less than Stross’s Glasshouse, in which the male-identifying protagonist, in the 27th century, is reconstituted as a late-20th-century woman in the service of a social experiment – a weird interactive version of the “ancestor simulation” so often hypothesized by Nick Bostrom et al.

More revelatory – and more relevant to my own everyday life – is Larissa MacFarquhar’s profile of Clark. Most of the article concerns a philosophical conception of intelligence and selfhood: rather than a wholly artificial intelligence with no external reference points, there is a synthesis of mind and body in which the two have a symbiotic relationship and, perhaps, cannot exist in isolation. We are not, contra popular perception, beings whose selves exist only within our individual minds. Much as humanity adopted tools to accomplish physical tasks, we augment our brain-based computing power in various ways:

Consider a woman named Inga, who wants to go to the Museum of Modern Art in New York City. She consults her memory, recalls that the museum is on Fifty-third Street, and off she goes. Now consider Otto, an Alzheimer’s patient. Otto carries a notebook with him everywhere, in which he writes down information that he thinks he’ll need. His memory is quite bad now, so he uses the notebook constantly, looking up facts or jotting down new ones. One day, he, too, decides to go to MoMA, and, knowing that his notebook contains the address, he looks it up.

Before Inga consulted her memory or Otto his notebook, neither one of them had the address “Fifty-third Street” consciously in mind; but both would have said, if asked, that they knew where the museum was—in the way that if you ask someone if she knows the time she will say yes, and then look at her watch. So what’s the difference? You might say that, whereas Inga always has access to her memory, Otto doesn’t always have access to his notebook. He doesn’t bring it into the shower, and can’t read it in the dark. But Inga doesn’t always have access to her memory, either—she doesn’t when she’s asleep, or drunk.

Andy Clark, a philosopher and cognitive scientist at the University of Edinburgh, believes that there is no important difference between Inga and Otto, memory and notebook. He believes that the mind extends into the world and is regularly entangled with a whole range of devices. But this isn’t really a factual claim; clearly, you can make a case either way. No, it’s more a way of thinking about what sort of creature a human is. Clark rejects the idea that a person is complete in himself, shut in against the outside, in no need of help.
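Clark’s functional-parity claim has a natural software analogy (mine, to be clear, not the article’s): if biological memory and a notebook expose the same lookup interface, the code that consults them cannot tell the difference. A toy sketch in Python, with every name hypothetical:

from abc import ABC, abstractmethod

class Memory(ABC):
    @abstractmethod
    def recall(self, query: str) -> str | None: ...

class BiologicalMemory(Memory):
    """Inga: the answer is stored 'in the head.'"""
    def __init__(self):
        self._engrams = {"MoMA": "Fifty-third Street"}
    def recall(self, query: str) -> str | None:
        return self._engrams.get(query)

class Notebook(Memory):
    """Otto: the answer is stored out in the world."""
    def __init__(self):
        self._pages = {"MoMA": "Fifty-third Street"}
    def recall(self, query: str) -> str | None:
        return self._pages.get(query)

def go_to_museum(memory: Memory) -> str:
    # The caller neither knows nor cares where the address physically lives.
    return f"Heading to {memory.recall('MoMA')}"

print(go_to_museum(BiologicalMemory()))  # Inga
print(go_to_museum(Notebook()))          # Otto

Whether that interface-level equivalence settles the metaphysics is exactly the question Clark leaves open.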

Compare that with the protagonist of the first third of Accelerando, the fast-talking, computationally augmented Manfred Macx, with a body-computer “metacortex” full of assignable “agents” that can run a train of thought down to its ultimate conclusion:

His channels are jabbering away in a corner of his head-up display, throwing compressed infobursts of filtered press releases at him. They compete for his attention, bickering and rudely waving in front of the scenery…

He speed reads a new pop-philosophy tome while he brushes his teeth, then blogs his web throughput to a public annotation server; he’s still too enervated to finish his pre-breakfast routine by posting a morning rant on his storyboard site. His brain is still fuzzy, like a scalpel blade clogged with too much blood: He needs stimulus, excitement, the burn of the new…

Manfred pauses for a moment, triggering agents to go hunt down arrest statistics, police relations, information on corpus juris, Dutch animal-cruelty laws. He isn’t sure whether to dial two-one-one on the archaic voice phone or let it ride…

The metacortex – a distributed cloud of software agents that surrounds him in netspace, borrowing CPU cycles from convenient processors (such as his robot pet) – is as much a part of Manfred as the society of mind that occupies his skull; his thoughts migrate into it, spawning new agents to research new experiences, and at night, they return to roost and share their knowledge.

While the sensory-overload risks are as real as those of a Twitter feed or RSS reader today, the difference with Macx’s metacortex is how directed its activities are: not the result of some opaque algorithm, but generated solely by his own interests and ideas. Imagine a Wikipedia session of several dozen tabs, but with the ability to consume them all in near-simultaneity.
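For flavor, here is a minimal, purely illustrative sketch of the fan-out/fan-in pattern the metacortex evokes – spawn an “agent” per topic, let them run concurrently, then gather what they found. The research_topic stub stands in for whatever lookup a real agent would perform:

from concurrent.futures import ThreadPoolExecutor, as_completed

def research_topic(topic: str) -> str:
    # Stand-in for an agent's work; a real version might hit a search
    # API or Wikipedia. Here it just fabricates a note.
    return f"notes on {topic}"

def metacortex(topics: list[str]) -> dict[str, str]:
    # Fan out one "agent" per topic, then gather the findings:
    # the dozens-of-tabs session consumed in near-simultaneity.
    findings: dict[str, str] = {}
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = {pool.submit(research_topic, t): t for t in topics}
        for done in as_completed(futures):
            findings[futures[done]] = done.result()
    return findings

print(metacortex(["arrest statistics", "corpus juris", "Dutch animal-cruelty law"]))

The point is the topology, not the plumbing: the work fans out, and at night the knowledge “returns to roost.”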

I identify closely with this, not least because of my own tendency toward distraction and idle thought, but also because of my reliance on notepads and Evernote alike to keep track of the world around me. I can walk into a grocery store with five things to buy and emerge with ten; only three of them will have been from my list, and I’ll have forgotten there were five to begin with. The ability to offload a thought or a task is vital to freeing up “processing power” (metaphorically speaking, not in the sense of nootropics), and I can only hope that someday multitasking across distributed mental processors becomes a reality. It’s only then, I tell myself, that I’ll be able to finish writing that book I never started. To pursue an idea all the way to its end. In short, to fulfill my – and our – full potential as thinking beings.

In the meantime, the closest I’ll come is to keep reading Stross’s excellent speculative fiction.

Ghost Fleet: A Review


I was a bit late in getting to it, but I was pleasantly surprised by P.W. Singer and August Cole’s Ghost Fleet. It took some effort to get into, but the temporal leap the novel takes – to years after a second Pearl Harbor attack – allows for some very interesting worldbuilding. The United States has been taken down a peg and enjoys little to none of its previous dominance. What does the post-hegemonic era look like for America? How, in the fabled era of “degraded ISR,” can American armed forces operate? While we’re living through that transition now, Singer and Cole explore what that future might actually resemble.

Riddled throughout with trenchant criticisms of the current political-military-industrial complex (the “Big Two” defense contractors, numerous references to the failings of the F-35, the Air Force’s institutional resistance to unmanned air-to-air platforms), the vision fleshed out in Ghost Fleet is not a flattering one for our current state of affairs. At times the references are a bit on the nose, but the degree of underlying wit makes up for it.

If nothing else, the opening sequence helps explain, even to the layman, the importance of sensor platforms and space-based assets, the US military’s dependence on them, and their exquisite vulnerability. Finite quantities of ship-launched missiles and other matériel become apparent in a way that can be hard to discern in real-life operations. Our reliance on Chinese-produced microchips and other advanced technology becomes an easily exploitable Achilles’ heel, in a manner all too reminiscent of the Battlestar Galactica pilot miniseries.

A new techno-thriller is, of course, cause for comparison to Tom Clancy, and where this book far outshines him is in its willingness to critique technology and current trends in military procurement rather than laud them unreservedly, while crafting somewhat multi-dimensional characters (some of whom are not even white!). And as I’ve written before, even if wrong in the details, fiction like this helps broaden the aperture a bit and convey the potentialities of future conflict. If not China, then Russia; if not the F-35, then perhaps the long-range strike bomber: things will go wrong, technologies will fail, and the United States may well be caught unawares. Hopefully, with novels such as Ghost Fleet illustrating the cost of unpreparedness, it will be possible to forestall the future it envisions.

Diamond in the Techno-Thriller: The Value of Speculative Fiction

A few years ago, some sort of switch got flipped in my brain, and all of a sudden I became far more willing to plow through half a dozen novels in a single stretch than to finish a single non-fiction book. Recently, equilibrium has been at least somewhat restored, but I continue to find myself immersed in fiction in a way that I rarely was before.

Some recent reading has included a pair of Larry Bond novels from the late 1980s and early 1990s, Vortex and Cauldron. Larry Bond is most famously, of course, the man who helped Tom Clancy game out many of his books’ wartime scenarios (and Bond co-wrote Red Storm Rising with Clancy). I hadn’t known Bond as an author in his own right, but recently read those two works of his in succession.

What’s wonderful about books like these is generally not their literary quality, nor even the conduct or proposed sequence of events in particular conflicts. Can fiction, in fact, predict the future of warfare? Perhaps, but more interestingly, such books serve as a time capsule of the era in which they were written. Much of the value added comes from detailed (at times overly so) descriptions and explanations of the weaponry, arms systems, and military organization of the era. And while not predictive in any meaningful way, these novels can help widen the Overton Window of the imagination, making it possible to at least consider futures drastically different from our own.

With books set in the future, but a now-dated future, it’s almost like reading alternate history. As of this writing, I’m reading The Third World War: August 1985, an account of World War III written in the past tense as a historical survey from the vantage point of two years later (i.e., 1987). Of course, the book was actually published in 1979, along with a follow-up, The Third World War: The Untold Story, which appeared two years later and dives deeper into the specifics of nuclear postures, the war in the North Atlantic and the North Sea, Solidarity’s effect in Poland, and other issues. It is a look at a world that never was, but that seemed frightfully close to being so. And from that perspective, it’s a chilling look at the prospective war facing the world of the past.

Obviously, these never came to pass, but when one considers what might have been, that can seem a blessing.

Population, Climate, and the Ethics of the Future

The Economist had a piece today about rapidly falling mortality rates worldwide, particularly in poor and low-income countries. Health professionals writing for The Lancet “advocate the establishment of a global ‘sustainable development goal,’ in which countries aim to reduce the number of premature deaths by 40% by 2030” – a laudable goal for extending lives.

A few days ago, there was a somewhat widely publicized article in Science, which showed that, contrary to popular belief, world population will not in fact peak at 9 billion in 2050, but rather continue to grow, reaching 11 billion by the end of the century. This rather sharply throws into relief the need for better family planning, sanitation, and water management.

And today, in the wake of yesterday’s climate change protests on Wall Street, comes an article decrying the strain of thought that exhorts us to act “for the children” – and only for the children – who will inherit the earth. When we focus only on future generations, we encourage waiting until the last minute.

Humanity, as a whole, doesn’t know what it wants. We fear inexorable population growth, stripping the planet of its every natural resource; yet we try as best we can to halve the global death rate. We talk about saving the future through climate change action and legislation for children whose very existence might exacerbate the conditions leading to that very global warming. We are walking, talking, breathing contradictions.

What’s the right course of action? We banned CFCs to preserve the atmosphere, which in turn gave pharmaceutical companies the opportunity to design new, patented, trademarked asthma inhalers – which could, of course, then be sold for obscene amounts of money. The world gained; a subset of people suffered.

We’re saving people today, but we’re not sure about the future. We don’t know who will exist, who our children will be, or how many children they’ll have; our progeny remain a mystery. We’re establishing a global existential rent control: saving today’s lives, very possibly at the expense of tomorrow’s. Making it easier for the people who exist now to keep on existing as they are, while making future existence more tenuous and a little meaner.

What we should do and what we must do and what we ought to do seem radically opposed to one another. Are we preserving lives now that would be better off lost? Are we permitting future humans who might be born into cruder conditions than their parents? Is that immoral? Is the uncertainty and doubt of an ominous future reason enough to worry about current and future populations?

We obviously don’t have any answers individually or as a society. Inertia, momentum, myopia prevent us from taking a longer view. But for now that might be okay – I’m not sure we’d like what we see up ahead.

The Challenge

Originally meant to be a Facebook post, this soon spiraled out of control. The subject is a piece by Jason Pontin in MIT Technology Review: “Why We Can’t Solve Big Problems.”

“We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard; because that goal will serve to organize and measure the best of our energies and skills.”

Since Apollo 17‘s flight in 1972, no humans have been back to the moon, or gone anywhere beyond low Earth orbit. No one has traveled faster than the crew of Apollo 10. (Since the last flight of the supersonic Concorde in 2003, civilian travel has become slower.) Blithe optimism about technology’s powers has evaporated, too, as big problems that people had imagined technology would solve, such as hunger, poverty, malaria, climate change, cancer, and the diseases of old age, have come to seem intractably hard.

That’s not to say the article is entirely pessimistic about the future. In many cases it’s not so much a question of know-how as of mere willpower.

I’ve written about this before (the common thread through all writing on the subject seems to be the Concorde: humans could once buy a ticket to travel faster than the speed of sound; those days now lie behind us).

And we’re running out of steam, too. Consider the troubled F-35 acquisition program (I hate holding up acquisitions as an example of anything, but…here I am). It’s not even as advanced as the F-22, yet we still don’t have a combat-ready B variant (the Marine Corps has stood up an all-F-35B squadron consisting of exactly three aircraft). And of course, our most advanced aircraft, the F-22 and B-2, were meant to be procured in far greater numbers but went into the “death spiral” of rising costs and declining orders.
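The “death spiral” is worth a moment of arithmetic, because the mechanism is just amortization. A toy sketch with hypothetical figures (neither number is a real program cost):

FIXED_COSTS = 40e9      # sunk development dollars (hypothetical)
MARGINAL_COST = 150e6   # cost to build one more airframe (hypothetical)

def effective_unit_cost(order_quantity: int) -> float:
    # Spread the fixed costs over the buy: the fewer you order,
    # the more each aircraft "costs" on paper.
    return FIXED_COSTS / order_quantity + MARGINAL_COST

for n in (750, 375, 187):  # each round of cuts roughly halves the buy
    print(f"{n} aircraft -> ${effective_unit_cost(n) / 1e6:.0f}M apiece")

Each cut raises the sticker price of every remaining aircraft, which invites the next cut; the feedback loop, not any single decision, is the spiral.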

This is not a problem unique to “legacy” industries; even the hyped new-media and tech sectors are seeing their own trivialization. As Jeff Hammerbacher put it in a Businessweek article, “The best minds of my generation are thinking about how to make people click ads.” And as he went on to say, this does indeed suck.

I don’t know what the solution is, but this is hardly a matter of perception. There’s an explanation as to why we no longer live in an age of optimism with the stars as the limit and a sense of awe and wonder at what tomorrow might bring. We’re stuck in a quagmire with little consequential technological progress, no political progress at all, and a generational rift that could just as easily be a referendum on moving into the 21st century. Other than Los Angeles, who’s building an urban heavy rail line? Who’s developing a faster way to travel? A better way to compute? A food replicator? A way to make money while also enhancing the common good?

The closest we’re getting right now is 3-D printing, and I have very high hopes for the field. Should it really reach its true potential, global supply chains will be completely disrupted (and for the better). But it’ll have to go beyond mere plastics. And other than that, what’s on the horizon? What about today, other than the tiny details, has changed in the last 30 years? What in that time has changed for the better?

I recently read Charles Stross’s Halting State, which deserves a more comprehensive treatment at some point, but which also has the following passage:

“Imagine you were a time-traveller from the 1980s, say 1984, and you stepped out of your TARDIS right here, outside, uh, West Port Books.” (Which tells you where you are.) “Looking around, what would you see that tells you you’re not in Thatcherland anymore?”

“You’re playing a game, right?”

“If you want it to be a game, it’s a game.” Actually it’s not a game, it’s a stratagem, but let’s hope she doesn’t spot it.

“Okay.” She points at the office building opposite. “But that…okay, the lights are modern, and there are the flat screens inside the window. Does that help?”

“A little.” Traffic lights change: Cars drive past. “Look at the cars. They’re a little bit different, more melted-looking, and some of them don’t have drivers. But most of the buildings—they’re the same as they’ve ever been. The people, they’re the same. Okay, so fashions change a little. But how’d you tell you weren’t in 1988? As opposed to ’98? Or ’08? Or today?”

“I don’t—” She blinks rapidly, then something clicks: “The mobile phones! Everyone’s got them, and they’re a lot smaller, right?”

“I picked 1984 for a reason. They didn’t have mobies then—they were just coming in. No Internet, except a few university research departments. No cable TV, no laptops, no websites, no games—”

“Didn’t they have Space Invaders?”

You feel like kicking yourself. “I guess. But apart from that…everything out here on the street looks the same, near enough, but it doesn’t work the same.”

Humanity possesses boundless reserves of optimism just waiting for the right conditions to be unleashed. But I fear we’re a long way away from that. We currently live in an age of in-between, a mere interlude of history, with our small times and small men and small problems. What’s next?

The Windup Girl: A Review


Everything I hoped it would be and more. This morning John Robb referenced aspects of The Windup Girl and that brought it all rushing back.

Bacigalupi paints the picture of a world ruled by the “calorie men” – representatives of the midwestern agricultural combines that released the blister rust plagues into the wild, whose “U-Tex” and other genetically engineered crops are the only defense against the diseases those same men created, and whose crops’ sterility forces India, Burma, and the other starving nations of the world into semi-feudal servility. A world in which rising seas have swallowed New York, Mumbai, New Orleans, and Rangoon, and where only the coal-powered monstrous pumps of King Rama XII prevent a similar fate from befalling Bangkok. Where the combustion engine has been replaced by kink-spring power wound by men and elephant-derived megadonts, where the labor that powers the world requires the fuel of food, and calories are the currency of the realm.

In the midst of this, a former Japanese pleasure construct – the titular “windup girl” – discovers instincts and desires beyond the total obedience and urge to please that have not just been bred into her but programmed into the very fiber of her being. An accidental Übermensch trapped among a people who regard her as trash, she represents a future that she can’t even understand yet. Which, coincidentally, is precisely what Bacigalupi has written here.

It is a rich portrait, indeed, and Bacigalupi excels at the alternate history/speculative fiction techniques of hint-dropping and hastily-sketched background details that he doles out like candy along a forest trail. But you’ll want to go where he’s luring you.

Megalopolis

Last week I had the pleasure of attending another Chicago Council on Global Affairs event specifically for Young Professionals. In this case it was a conversation between all-around urban intellectual Greg Lindsay and architect Jeanne Gang on nothing less critical than “The Future of Cities.”

Lindsay just co-wrote the book Aerotropolis: How We’ll Live Next with John Kasarda, which at its most basic is about the coming airport-centric design and planning that will determine the future of cities and the course of twenty-first-century urbanism. But even that mouthful of a description doesn’t really do the book justice. Reading Geoff Manaugh’s interview of Lindsay (and Lindsay’s of Manaugh) puts the book in a new light and raises a whole variety of additional interpretations of Aerotropolis’s main theories.

The talk, however, did not focus solely on Lindsay’s book. After a rather stilted introduction from a local Boeing representative, Lindsay launched into a brief overview of the cities of the future. In the next twenty years, more “urban fabric” will be created than in the entire rest of human history – and none of these new cities will look like Chicago. They will be born into nowhere, separated from their surrounding regions.

Two Steps Back

Do you get the feeling that we’re slowing down? I mean that in the entropic sense, that humanity may have gone as far as it can and is now contracting. Look at how far we’ve come since the year 1910 – two world wars and all the carnage and technological progress they produced, rocketry and space exploration (we put a man on the moon), the rise of computing, Moore’s Law, all the conveniences of modern life. And yet, where are the big breakthroughs?

John Horgan recently wrote in Scientific American about “scientific regress” – fields of science that are not just slowing down as a result of diminishing returns, but actually retreating from their own discoveries. Infectious diseases are back, including some that were on the brink of eradication. The Concorde, the fastest commercial jet in history, was entirely scrapped, and there are no plans to replace it. Even science itself has come under fire: evolution has shifted from common knowledge to a disputable “theory.”

Research and technologies without “practical” application never get off the ground – hence the hole in the ground that could have been America’s own Large Hadron Collider. Who knows what CERN’s machine will discover? Alexander Fleming was just studying some bacteria, and he “invented” penicillin. Or consider the Vela satellites used to detect nuclear explosions on Earth, which ended up discovering the existence of gamma-ray bursts. Even the most mundane of new technologies can have serendipitous results, and that’s why continued innovation and discovery are so important. But we’ve stopped.

Even in terms of military procurement – and let’s not forget that ARPA and DARPA brought us the internet and the global positioning system – we’re taking steps backwards in the name of fiscal sanity. Not that balanced budgets are an ignoble pursuit, but we’re voluntarily ending production of the most advanced fighter in the world (the F-22) in favor of its slightly less capable cousin (the F-35). Production of the F-35 itself will be notably slashed. With Britain retiring the Harrier and the F-35B variant in jeopardy, the novel technology of VTOL aircraft may itself not be long for this world.

Meanwhile, the Russian-designed contemporary of the F-35, the Sukhoi Su-35, is making waves, with China about to become a major purchaser of the technology. It takes ages for a new system to come online – the Airbus A400M military transport just now making maiden flights has been in the works since 1982! And even the new weapons systems intended to create capabilities where there are none – the Marine Corps’s Expeditionary Fighting Vehicle comes to mind – are being canceled.

We don’t produce anything anymore. The picture of our economy, especially vis-à-vis China, is that of a junkyard. We have a resource economy now, in which we ship raw materials out for “more skilled” hands to mold into finished products – products that just fuel our consumerism, a consumerism wherein we look forward to things breaking just so we can feel the rush of buying something new.

We put so little energy into real long-term thought. Everything we do as a society is all about the quick buck, the near-term gain, what we can see and hold and spend now. Politics continue to be an internal, mind-numbing struggle with no winners and no vision beyond the next election. And of course today’s politicians won’t be living with the consequences of their decisions (there’s still time to atone, though). As the Great Society gets rolled back, the New Deal is next. And what then, the gains of the Progressive Era?

It’s not like I don’t understand why – when you don’t even have a paycheck to look forward to in the next week, every day becomes its own micro-scale struggle just to get to the next one. But it’s not impossible to take care of today’s problems and plan for the future. I’ve previously called for stronger leadership, or a real public works plan, or maybe some British-style openness and transparency (and when the Brits are leading the way in those fields, you just know something’s gone horribly wrong somewhere). These things are not impossible. And they’re not too expensive. I don’t care how bad the deficit looks; no one cares (no, really, outside of a vocal few, it’s not the most pressing concern). It’s certainly a problem, but we have the chance to solve other problems while still looking to the future.

Things are expensive. But in the long, long term, doing nothing and stagnating will be even more costly. We need to keep building, inventing, dreaming, knocking over test tubes accidentally, leaving petri dishes next to each other – and stop arguing over today. Tomorrow is more important.

Think big. Think bold. But most importantly, think ahead.

A Brief History of Future War

Another article at Fortnight today, this one the most relevant to regular readers of this blog. Simply titled “Future War,” it’s a fairly comprehensive overview of Things I’m Interested In militarily. Opening excerpt:

Much as we in the United States may have forgotten our two land wars in Asia, we’re still in them.

But if all goes according to plan, we’ll be completely out of both Iraq and Afghanistan by 2015. Except for the “advisory and assistance brigades.” And special forces. And drones. And all the other minutiae and caveats that will have essentially set the stage for a near-permanent American presence in Central Asia for the foreseeable future.

But some day, an end will come both in name and in deed—even if that end turns out to be anticlimactic. It’s said all too often that “today’s generals are preparing to fight yesterday’s wars.” By the same token, the ascendancy of counterinsurgency doctrine in the United States military could be here to stay.

Charting the future course of war requires wisdom—and prescience. Who will do the fighting? How will our fighting be done? Why will we fight? And why will they fight? The pithy answers, in order: very few people; remotely; preservation; and economics.

Go read it!

Rational Pessimism

Matt Ridley’s new book about how good we’ve got it today, The Rational Optimist: How Prosperity Evolves, has met with pretty decent reviews. I only just got around to reading Brendan O’Neill’s review for The American Conservative today (yes, yes, I know it’s dated August 1, but I’ve been busy), and it quashed any desire I might have had to read the book.

I mean, I know I’m a pretty ornery cuss, but let’s face it: despite rapid advances in material prosperity, we as a society don’t seem particularly happy with our lot. O’Neill is right in saying that all the threats guaranteed to kill us all – Y2K, Bird Flu, that Man-Bird-Pig disease of a year or two ago – have never materialized, and that despite our constant worrying over the end, if it indeed comes it is almost certain to catch us by surprise.

And yet, there is so much of Ridley’s overall hypothesis that seems to make no sense. At the risk of becoming one of the “angry, graph-obsessed nitpicking” types O’Neill warns against, I think it would make sense to examine Ridley’s actual claims and see why they ring hollow.

In just the past 50 years, the average human “earned nearly three times as much money (corrected for inflation), ate one-third more calories of food, buried one-third as many of her children, and could expect to live one-third longer.”

Right off the bat, I can see one problem here: the average human. While wages and prosperity have risen steadily around the world, income disparity in the United States is at historic highs. Productivity has soared in the past fifty years, but relative worker pay has dropped precipitously. We’re doing more and getting paid less to do it. So while much of the world may have seen a tangible increase in quality of life, we’re in many ways worse off than we were 20 or 30 years ago.