Cognitive Perception and Augmentation

The previous issue of the New Yorker (April 2, “The Mind Issue”) is really a tremendous collection of writing. Rachel Aviv on the mysterious wanderings of Hannah Upp in a dissociative fugue, Joshua Rothman on out-of-body experiences with virtual reality, and a profile of the maddening, dreadful Scott Pruitt at the EPA are all very insightful and well-written. And a profile of philosopher (and “theoretical cognitive scientist”) Andy Clark was particularly fascinating.

Galaxy brain.

As many know, I’ve been a big fan of Charlie Stross’s writing for years now. And so many of these essays seemed to touch on technologies that he’s written about, either conceptually or almost precisely. Rothman’s piece on VR describes the experience of inhabiting a new body in a virtual world to build empathy with its subject. By completing a series of hand-eye-touch coordination exercises, the participant begins to physically identify with the perspective of their virtual role:

I put on a V.R. headset and looked into such a mirror to see the body of a young woman wearing jeans, a T-shirt, and ballet flats. When I moved, she moved.

“You’re going to see a number of floating spheres, and you have to touch them,” Guillermo Iruretagoyena, a software developer, said.

A few colorful balls appeared near my hands and feet, and I moved my limbs to touch them. The spheres disappeared, and new ones took their place. After I touched the new spheres, Iruretagoyena explained that the “embodiment phase” was complete—I had tricked my brain into thinking that the virtual limbs were mine. My virtual self didn’t feel particularly real. The quality of the virtual world was on a par with a nineteen-nineties video game, and when I leaned into the mirror to make eye contact with myself my face was planar and cartoonish. Like a vampire’s, my body cast no shadow.

To my right, I heard the sound of keys in a door. I turned and saw a hallway. At the end of it, a man entered, with dark hair and a beige sweater.

“You fat cow,” he said, in a low voice. “Would it hurt to put on something nice?”

He began walking toward me. I looked at myself in the mirror. “Look at me!” he shouted. He walked up to a dresser, saw my cell phone, and threw it against the wall.

I watched, merely interested. It was obvious that he was a virtual person; I was no more intimidated by him than I would be by an image on a screen. Then he got closer, and closer still, invading my personal space. In real life, I’m tall, but I found myself craning my neck to look up at him. As he loomed over me, gazing into my eyes, I leaned away and held my breath. I could sense my heart racing, my chest tightening, and sweat breaking out on my temples. I felt physically threatened, as though my actual body were in danger. “This isn’t real,” I told myself. Still, I felt afraid.

The technology is now being used to treat domestic abusers, with marked success (albeit at a small scale). But this sort of gender-bending sounds like nothing less than Stross’s Glasshouse, where in the 27th century the male-identifying protagonist is reconstituted as a late-20th-century woman in the service of a social experiment, a weird interactive version of the “ancestor simulation” so often hypothesized by Nick Bostrom et al.

More revelatory – and relevant to my own everyday life – is Larissa MacFarquhar’s profile of Clark. Most of the article concerns a philosophical conception of intelligence and selfhood: rather than the mind being a wholly self-contained intelligence with no external reference points, mind and body have a symbiotic relationship and, perhaps, cannot exist in isolation. We are not, contra popular perception, beings whose self exists only in our single minds. Much like humanity’s adoption of tools to accomplish tasks, we augment our own brain-based computing power in various ways:

Consider a woman named Inga, who wants to go to the Museum of Modern Art in New York City. She consults her memory, recalls that the museum is on Fifty-third Street, and off she goes. Now consider Otto, an Alzheimer’s patient. Otto carries a notebook with him everywhere, in which he writes down information that he thinks he’ll need. His memory is quite bad now, so he uses the notebook constantly, looking up facts or jotting down new ones. One day, he, too, decides to go to MoMA, and, knowing that his notebook contains the address, he looks it up.

Before Inga consulted her memory or Otto his notebook, neither one of them had the address “Fifty-third Street” consciously in mind; but both would have said, if asked, that they knew where the museum was—in the way that if you ask someone if she knows the time she will say yes, and then look at her watch. So what’s the difference? You might say that, whereas Inga always has access to her memory, Otto doesn’t always have access to his notebook. He doesn’t bring it into the shower, and can’t read it in the dark. But Inga doesn’t always have access to her memory, either—she doesn’t when she’s asleep, or drunk.

Andy Clark, a philosopher and cognitive scientist at the University of Edinburgh, believes that there is no important difference between Inga and Otto, memory and notebook. He believes that the mind extends into the world and is regularly entangled with a whole range of devices. But this isn’t really a factual claim; clearly, you can make a case either way. No, it’s more a way of thinking about what sort of creature a human is. Clark rejects the idea that a person is complete in himself, shut in against the outside, in no need of help.

Compare that with the protagonist of the first third of Accelerando, the fast-talking augmented-computing Manfred Macx, with a body-computer “metacortex” full of assignable “agents” to handle running down a train of thought to its ultimate conclusion:

His channels are jabbering away in a corner of his head-up display, throwing compressed infobursts of filtered press releases at him. They compete for his attention, bickering and rudely waving in front of the scenery…

He speed reads a new pop-philosophy tome while he brushes his teeth, then blogs his web throughput to a public annotation server; he’s still too enervated to finish his pre-breakfast routine by posting a morning rant on his storyboard site. His brain is still fuzzy, like a scalpel blade clogged with too much blood: He needs stimulus, excitement, the burn of the new…

Manfred pauses for a moment, triggering agents to go hunt down arrest statistics, police relations, information on corpus juris, Dutch animal-cruelty laws. He isn’t sure whether to dial two-one-one on the archaic voice phone or let it ride…

The metacortex – a distributed cloud of software agents that surrounds him in netspace, borrowing CPU cycles from convenient processors (such as his robot pet) – is as much a part of Manfred as the society of mind that occupies his skull; his thoughts migrate into it, spawning new agents to research new experiences, and at night, they return to roost and share their knowledge.

While the sensory overload risks are as real as those of a Twitter feed or RSS reader today, the difference with Macx’s metacortex is how directed its activities are. Not the result of some obscure algorithm, but generated solely by his own interests and ideas. Imagine a Wikipedia session of several dozen tabs, but being able to consume them all in near-simultaneity.
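If you wanted to toy with that directed-agents idea in present-day terms, it maps loosely onto ordinary concurrent programming: spawn a task per interest, let them run, collect the results near-simultaneously. A toy sketch (the topics and the fake lookup are stand-ins, not any real API):

```python
import asyncio

async def research_agent(topic: str) -> str:
    # A real agent would query a search index or annotation server here;
    # we just simulate network latency and return a canned summary.
    await asyncio.sleep(0.01)
    return f"summary of {topic}"

async def metacortex(topics: list[str]) -> dict[str, str]:
    # Spawn one agent per interest and gather their findings concurrently,
    # rather than chewing through dozens of tabs one at a time.
    results = await asyncio.gather(*(research_agent(t) for t in topics))
    return dict(zip(topics, results))

if __name__ == "__main__":
    findings = asyncio.run(metacortex([
        "arrest statistics", "Dutch animal-cruelty laws", "corpus juris",
    ]))
    for topic, summary in findings.items():
        print(topic, "->", summary)
```

The difference, of course, is that Macx’s agents come back and merge their knowledge into him; here the “merge” is just a dictionary.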

I identify closely with this, not least because of my own tendency towards distraction and idle thought, but also my reliance on notepads and Evernote alike to keep track of the world around me. I can walk into a grocery store with five things to buy and emerge with ten; only three of them will have been from my list and I’ll have forgotten there were five to begin with. The ability to offload a thought or a task to remember is vital to freeing up “processing power” (metaphorically speaking and not in the sense of nootropics), and I can only hope that someday the prospect of multitasking across distributed mental processors becomes a reality. It’s only then, I tell myself, that I’ll be able to finish writing that book I never started. To pursue an idea all the way to its end. In short, to fulfill my – and our – full potential as thinking beings.

In the meantime, the closest I’ll come is to keep reading Stross’s excellent speculative fiction.

What is “Systems Analysis”?

“Systems analysis,” as a concept, can be difficult to define and pin down. For much of my life, I assumed it was some sort of generic back-office IT function (see, for instance, the hundreds of man-on-the-street “American Voices” interviews in The Onion, which describe respondents in equal measure as ‘unemployed’ or ‘systems analyst’). But given the complexities of almost, well, everything in the modern era, an understanding of the logical underpinnings of systems analysis is critical.

Essentially, single variables cannot be considered in isolation. A new weapons platform or technological development or re-basing movement must be thought of in the context of existing technology, logistics capacity, weather, enemy reaction, enabling capabilities, fixed facilities, power projection, and so on, down an otherwise infinite fractal list of factors.

https://upload.wikimedia.org/wikipedia/en/thumb/e/e4/An_image_of_Strategist_Bernard_Brodie.jpg/220px-An_image_of_Strategist_Bernard_Brodie.jpg

Dr. Bernard Brodie, RAND Corporation (Wikimedia)

But all this is a long-winded introduction to Bernard Brodie’s hypothetical systems analysis example in Strategy in the Missile Age, one of the best, most succinct illustrations of just how complex this interplay is. Brodie, of course, had a front-row seat to this effort, as the RAND Corporation was the earliest home to a methodological approach to the field. Beginning on page 381 in the 1967 edition:

Let us consider, for example, the problem of choosing between two kinds of strategic bombers. Each represents in its design an advanced “state of the art,” but each also represents a different concept. In one, which we shall call Bomber A, the designers have sought to maximize range. They have therefore settled for a subsonic top speed in a plane of fairly large size. The designers of Bomber B, on the contrary, have been more impressed with the need for a high dash speed during that part of the sortie which involves penetration of enemy territory, and have built a smaller, shorter-ranged plane capable of a Mach 2 dash for a portion of its flight. Let us assume also that the price of the smaller plane is about two-thirds that of the larger.

Perhaps we can take both types into our inventory, but even then we should have to compare them to determine which we should get in the larger numbers. Let us then pick a certain number of specific targets in enemy territory, perhaps three hundred, and specify the destruction of these targets as the job to be accomplished. Since we know that both types can accomplish this job with complete success if properly supported and handled, our question then becomes: which type can do it for the least money?

We do not ask at this stage which type can do it more reliably, because within limits we can buy reliability with dollars, usually by providing extra units. Some performance characteristics, to be sure, will not permit themselves to be thus translated into dollars (for example, one type of plane can arrive over target somewhat sooner than the other type, and it is not easy to price the value of this advantage), but we shall postpone consideration of that and similar factors until later.

Let us assume that Bomber A has a cruising range of 6,000 miles, while Bomber B is capable of only 4,000 miles. This means that Bomber A has to be refueled only on its post-strike return journey, while Bomber B probably has to be refueled once in each direction. This at once tells us something about the number of “compatible” tankers that one has to buy for each type (“compatible” referring to the performance characteristics which enable it to operate smoothly with a particular type of bomber). Up to this point Bomber B has appeared the cheaper plane, at least in terms of initial purchase price, but its greater requirement in tankers actually makes it the more expensive having regard for the whole system. In comparing dollar costs, however, it is pointless to compare merely procurement prices for the two kinds of planes; one has to compare the complete systems, that is to say, the weapons, the vehicles, and the basing, protection, maintenance, and operating costs, and one must consider these costs for each system over a suitably long period of peacetime maintenance, say five years. These considerations involve us also in questions of manpower. We are in fact pricing, over some duration of time, the whole military structure required for each type of bomber.

https://upload.wikimedia.org/wikipedia/commons/7/70/B36-b-52-b-58-carswell.jpg

A B-36 Peacemaker, B-52 Stratofortress, and B-58 Hustler from Carswell AFB, TX en route to the former’s retirement in 1958. The B-52 would long outlive the more advanced Hustler. (Wikimedia)

Now we have the problem of comparing, through a process of “operations analysis,” how the two types fare in combat, especially the survival expectancy of each type of plane during penetration. In other words, we have to find out how much the greater speed (and perhaps higher altitude) of Bomber B is worth as protection. If the enemy depends mostly on interceptors, the bomber’s high speed and altitude may help a great deal; if he is depending mostly on guided missiles, they may help relatively little. Thus a great deal depends on how much we know about his present and projected defenses, including the performance characteristics of his major weapons.

If our Bomber A is relying mostly on a low altitude approach to target, which its longer range may just make possible (we are probably thinking in terms of special high efficiency fuels for wartime sorties), it may actually have a better survival expectation than its faster competitor. Also, we know that penetration capability is enhanced by increasing the numbers of bombers penetrating (again, a matter of money) or by sending decoys in lieu of extra bombers to help confuse the enemy’s radar and saturate his defenses. Perhaps we find that the faster plane would outrun the decoys, which again might tend to give it a lower penetration score than one would otherwise expect. But decoys are expensive too, in acquisition costs, basing, and maintenance, and involve additional operating problems. The faster plane may be less accurate in its bombing than the other, which again would involve a requirement for more aircraft and thus more money.

We have given just barely enough to indicate the nature of a typical though relatively simple problem in what has come to be known as “systems analysis.” The central idea is that no weapon can be considered independently of the other weapons and commodities that are used with it, that all endure through some period of time and require men to service them and to be trained in their use, that all these items involve costs, and that therefore relative costs of different systems, as considered against some common standard of function, are basic to the problem of choice between systems. Systems analysis, which brings what is modern to present-day strategic analysis, is mostly a post-World War II development.
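Brodie’s whole-system accounting can be reduced to a simple cost function: planes, plus compatible tankers, plus peacetime operating costs over a period of years. The figures below are invented for illustration; only the relationships he specifies (Bomber B costs two-thirds of A, A refuels once per sortie, B twice) come from the text:

```python
# Hypothetical whole-system cost comparison in the spirit of Brodie's example.
# All figures are invented; only the ratios come from the text.

def system_cost(fleet_size, unit_price, refuelings_per_sortie,
                tanker_price, tankers_per_refueling,
                annual_ops_per_plane, years=5):
    """Price the whole system: planes, compatible tankers, and ops costs."""
    tankers = fleet_size * refuelings_per_sortie * tankers_per_refueling
    procurement = fleet_size * unit_price + tankers * tanker_price
    operating = (fleet_size + tankers) * annual_ops_per_plane * years
    return procurement + operating

# Bomber A: long range (6,000 mi), refueled only on the post-strike leg.
cost_a = system_cost(fleet_size=100, unit_price=9.0, refuelings_per_sortie=1,
                     tanker_price=4.0, tankers_per_refueling=1,
                     annual_ops_per_plane=0.5)

# Bomber B: shorter range (4,000 mi), two-thirds the purchase price,
# but refueled once in each direction.
cost_b = system_cost(fleet_size=100, unit_price=6.0, refuelings_per_sortie=2,
                     tanker_price=4.0, tankers_per_refueling=1,
                     annual_ops_per_plane=0.5)

print(f"Bomber A system: {cost_a:.0f}  Bomber B system: {cost_b:.0f}")
```

With these made-up numbers, the “cheaper” Bomber B comes out more expensive once its extra tankers and their upkeep are priced in, which is exactly Brodie’s point about comparing complete systems rather than sticker prices.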

The challenges herein are immense, which in part explains the explosion not only of defense research and development but also of the defense bureaucracy as a whole. It’s a sprawling, tangled mass that can in many ways only be understood in relation to itself. But systems analysis is at least an attempt to build that complexity into one’s assumptions and considerations.

Using this technique is not only a way to compare technologies with like missions; it’s an excellent tool for use in wargame design. This too is in fact an iterative process, as the insights from a wargame itself might reveal further interrelationships, which might then be used to craft a more complex operating environment (or refine the mechanics used to select force lists), and so on ad infinitum.

Practicality aside, Brodie’s writing serves as an excellent primer to what systems analysis entails, and more broadly, to the change in strategic thought and analysis since the end of World War II.

Ghost Fleet: A Review


I was a bit late in getting to it, but I was pleasantly surprised by P.W. Singer and August Cole’s Ghost Fleet. It took a bit of effort to get into, but the temporal leap the novel takes into the years after a second Pearl Harbor attack allows for some very interesting worldbuilding. The United States has been taken down a peg and enjoys little to none of its previous dominance. What does the post-hegemonic era look like for America? How, in the fabled era of “degraded ISR,” can American armed forces conduct operations? While we’re living through that transition now, Singer and Cole explore what that future might actually resemble.

Riddled throughout with trenchant criticisms of the current political-military-industrial complex (such as a “Big Two” of defense contractors, numerous references to the failings of the F-35, and the Air Force’s institutional resistance to unmanned air-to-air platforms), the vision fleshed out in Ghost Fleet is not a flattering one of our current state of affairs. At times the references are a bit on the nose, but the degree of underlying wit makes up for it.

If nothing else, the opening sequence helps explain even to the layman the importance of sensor platforms and space-based assets, the US military’s dependence on them, and their exquisite vulnerability. Finite quantities of ship-launched missiles and other materiel become apparent in a way that can be challenging to discern in real-life operations. Our reliance on Chinese-produced microchips and other advanced technology becomes an easily exploitable Achilles’ heel, in a manner all too reminiscent of the Battlestar Galactica pilot miniseries.

A new techno-thriller is, of course, cause for comparison to Tom Clancy, and where this far outshines him is in its willingness to critique technology and current trends in military procurement rather than lauding them unreservedly, while crafting somewhat multi-dimensional characters (some of whom are even not white!). And as I’ve written before, even if wrong in the details, fiction like this helps broaden the aperture a bit and convey the potentialities of future conflict. If not China, then Russia; if not the F-35, then perhaps the long-range strike bomber: things will go wrong, technologies will fail, and the United States may well be caught unawares. Hopefully, with novels such as Ghost Fleet illustrating the cost of unpreparedness, it will be possible to forestall the future it envisions.

The “Utility” of Low-Yield Nuclear Weapons

The B61 family of modifications


An article in the New York Times made the rounds last week, asserting that the new modification (“mod”) of the B61 nuclear gravity bomb was of a lower yield than its predecessors, and arguing that lower-yield, precision weapons are destabilizing to nuclear strategy and that their relatively limited destructive capabilities in turn render them more likely to be used than the multi-megaton, Cold War-era city-busters. It was also proclaimed that this was the death knell for the Obama Administration’s disarmament and arms control efforts, and represented a “new arms race” between nuclear powers.

The argument is well-intentioned, but misguided.

The article quotes the usual suspects – James Cartwright, Andrew Weber, William Perry – and offers some assertions that are patently false.

David Sanger and William Broad, the authors of this piece, focus solely on US weapons development and policy in a bubble, ignoring the larger context of the world in which nuclear weapons exist. As they and their interview subjects characterize it, the United States is the one upsetting the nuclear balance:

Already there are hints of a new arms race. Russia called the B61 tests “irresponsible” and “openly provocative.” China is said to be especially worried about plans for a nuclear-tipped cruise missile. And North Korea last week defended its pursuit of a hydrogen bomb by describing the “ever-growing nuclear threat” from the United States.

This, of course, ignores the fact that Russia has violated the Intermediate-Range Nuclear Forces Treaty, China has refused to enter into any arms control arrangements and is busy expanding its own arsenal (including the production of new nuclear warheads and delivery vehicles; the former something that the United States still will not do), and North Korea has rejected carrot-and-stick approaches alike dating back several decades. If the Presidential Nuclear Initiatives in the aftermath of the Cold War – or the past 30 years of sanctions – were insufficient to dissuade Pyongyang from nuclear proliferation, it’s hard to envision what would.

Diamond in the Techno-Thriller: The Value of Speculative Fiction

A few years ago, some sort of switch got flipped in my brain and all of a sudden I became far more capable of and willing to plow through half a dozen novels in a single stretch than to finish a single non-fiction book. Recently, equilibrium has been at least somewhat restored, but I continue to find myself immersed in fiction in a way that I rarely was before.

Some recent reading has included a pair of Larry Bond novels from the late 1980s and early 1990s, Vortex and Cauldron. Larry Bond is most famously, of course, the man who helped Tom Clancy game out many of his books’ wartime scenarios (and Bond co-wrote Red Storm Rising with Clancy). I hadn’t known Bond as an author in his own right, but recently read those two works of his in succession.

What’s wonderful about books like these is generally not their literary qualities, nor even the conduct or proposed sequence of events in particular conflicts. Can fiction, in fact, predict the future of warfare? Perhaps, but more interestingly, such books serve as a time capsule of the era in which they were written. Much of the “value added” comes from detailed (at times overly so) descriptions and explanations of the weaponry, arms systems, and military organization of the era. Furthermore, while not predictive in any meaningful way, these novels can help widen the Overton window of the imagination, to at least consider a divergent future drastically different from our own.

With books set in the future, but now a dated future, it’s almost like reading alternate history. As of this writing, I’m reading The Third World War: August 1985, which is an account of World War III written in the past tense as a historical survey from the point of view of two years later (i.e., 1987). Of course, the book was actually published in 1979, along with a followup, The Third World War: The Untold Story, which was published two years later and dives deeper into specifics of nuclear postures, the war in the North Atlantic and the North Sea, Solidarity’s effect in Poland, and other issues. It is a look at a world that never was, but seemed frightfully close to being so. And from that perspective, it’s a chilling look at the prospective war facing the world of the past.

Obviously, these never came to pass, but when one considers what might have been, that can seem a blessing.

Nike’s Revenge: The Return of Urban Missile Defense

News last week that the US is contemplating area cruise missile defense around US cities – against Russian missiles, no less – was enough to fill me with a certain sort of glee.

Given the problems we’ve had with defending against ballistic missiles and their far more predictable (and detectable) trajectories, the technological barriers to implementing an effective urban cruise missile defense are likely to be high. (On the other hand, if the Russian ICBM threat is as overhyped as State is playing it, then we’ve got some breathing room to develop such a system.) Of course, this wouldn’t be our first iteration of urban aerial defense.

Nike Hercules missiles on alert, 1970s.

The Nike program from the early Cold War was the US attempt to thwart Soviet nuclear attacks by shooting down bombers (at first, with Nike Ajax), eventually expanding to target ballistic missiles (Nike Hercules/Zeus). Surface-to-air missiles versus aircraft, essentially. By the 1970s, facing the threat of a massive, overwhelming ICBM salvo, some missiles had begun to be armed with nuclear warheads, most notably those of the separate SENTINEL/SAFEGUARD program with its 5-megaton W71. But the last of the Nike sites was decommissioned in 1974, and Safeguard lasted only a few months before being shut down in 1975. An excellent overview of all this has been published, of course, as an Osprey book.

Raytheon/Kongsberg HAWK-AMRAAM mobile launcher.

The proposed system is a little different – new active electronically scanned array (AESA) radars would enable F-16s to shoot down the cruise missiles, rather than relying on a ground-based interceptor. The fighters would be networked with some sort of barrage balloon-type airships carrying sensors, as well as radars and sensors at sea (too bad Washington must bid adieu to USS Barry – would that it might gain a second life from this effort). However, Raytheon is considering land-based versions of both the SM-6 and the AMRAAM, which would require some degree of construction for basing.

What’s most interesting to me is where exactly such systems might be deployed. In addition to the major SAC bases, various Nike anti-air batteries defended major industrial and population centers.

Nike sites in the continental United States

Reading a list of the 40 “defense areas” from the 1950s is like a snapshot of US heavy industry at its peak: Hartford, Bridgeport, Chicago-Gary, Detroit, Kansas City, St. Louis, Niagara Falls-Buffalo, Cincinnati-Dayton, Providence, Pittsburgh, and Milwaukee (among others) were all deserving of their own ring of air defenses. But what would the map look like now?

It’s both a rhetorical and a practical question. The importance of place has changed, and in some ways the country as a whole is more sprawling than it was during the Cold War. More places to defend, but also more targets for an enemy. As JCS Vice Chairman Sandy Winnefeld put it, “we probably couldn’t protect the entire place from cruise missile attack unless we want to break the bank. But there are important areas in this country we need to make sure are defended from that kind of attack.” One can already imagine the hearings and horsetrading that would accompany any discussion of which cities are worthy of protection – picture the East Coast interceptor site debate multiplied a hundredfold.
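Winnefeld’s “break the bank” framing is, at bottom, a budget-constrained selection problem. As a toy illustration (the sites, values, and costs below are all invented), choosing which defense areas to fund under a fixed budget looks like a 0/1 knapsack:

```python
from itertools import combinations

# Toy model of "which cities are worthy of protection": maximize defended
# value under a fixed budget. All values and costs are invented.
sites = {  # name: (strategic value, cost in $bn)
    "Washington": (10, 6),
    "New York": (9, 7),
    "Chicago-Gary": (6, 5),
    "Norfolk": (7, 3),
    "Seattle": (5, 4),
}

def best_coverage(sites, budget):
    """Brute-force the optimal subset (fine for a handful of sites)."""
    best, best_value = (), 0
    for r in range(len(sites) + 1):
        for combo in combinations(sites, r):
            cost = sum(sites[c][1] for c in combo)
            value = sum(sites[c][0] for c in combo)
            if cost <= budget and value > best_value:
                best, best_value = combo, value
    return best, best_value

chosen, value = best_coverage(sites, budget=15)
print(sorted(chosen), value)
```

The horsetrading, of course, is over the value column: every delegation will insist its city is a ten.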

Hopefully SM or AMRAAM is a part of it, but if the new system of sensors and radars is indeed confined to mobile platforms – fighters, ships, balloons – we’ll be deprived of Nike’s wonderful legacy of ruins. Only one Nike site, that of Fort Barry in San Francisco’s Golden Gate National Recreation Area, has been preserved and is open to the public. Others make for a nice, if haunting, walk. Still others have found new purpose – it was only in recent years that I learned Drumlin Farm, a wildlife sanctuary run by the Massachusetts Audubon Society and a frequent field trip destination when I was a kid, was once home to the control site of Launcher B-73 and its Ajax and later Hercules missiles.

Radar dome of Nike site Mike near Eielson AFB, AK.

Without the permanent physical infrastructure of a Nike-type program, our cruise missile defense initiative will be sorely lacking any vestiges for future generations to explore. Although how ironic is it that one of the better reasons to support a missile defense initiative is in anticipation of its eventual decommissioning? At any rate, it’s likely that any modern system would leave a smaller footprint than Nike, with presumably fewer control sites having command of multiple, if not all launch sites in a given area.

The land-based component might never be constructed. But at least with balloons in the skies overhead, we all might wake up one morning and wonder if we’ve been transported to some kind of alternate reality.

“Manhatan” (as it appeared on Fringe).

Pilots Are Special and Should Be Treated That Way

In the Secretary Gates v. General Moseley spat, the general’s comments on a place for manned and unmanned platforms jumped out (emphasis mine):

Moseley said there is no intentional bias against unmanned aircraft in the Air Force. There is a place for both manned and unmanned, he said. “Secretary Wynne got tired of hearing me say this when we were beaten up about not going all unmanned.” The reality is that there are few instances when the use of unmanned aviation is imperative. “One is when you believe the threat is so terrible that you’ll lose the human,” he said. “I believe the Air Force has never found that threat. We will penetrate any threat. We haven’t found a place we won’t go. So I don’t buy that one.”

The other is when human pilots are the limiting factor to the persistence of the machine. “I got that one,” said Moseley. “You leave the plane out there for 30 hours on a reconnaissance mission. That’s a valid one.”

Isn’t that backwards? In this era of declining budgets, a need for persistence (in general), and an almost total lack of contested airspace, isn’t the onus on the Air Force to prove a need for the use of human pilots? Why are we still considering manned aircraft the default, and only operating them remotely in particular circumstances?

I guess it seems as if there’s no overwhelming rationale for maintaining pilot primacy in the vast majority of missions. And I’m the first to admit I would try and have it both ways: if the airspace is too dangerous, then of course we’d want to minimize pilot risk and maximize unmanned platforms. If the airspace is uncontested, then it would seem that the skill and reaction time of a human pilot is unwarranted.

Obviously, manned flight isn’t going anywhere, but to declare the Air Force as essentially a force of human-piloted aircraft, except for a few instances, seems to be ignoring the larger trends and gains to be had from unmanned aviation.

(Original link h/t Rich Ganske).

Flat Tops and Short Decks


Izumo’s commissioning on August 6, 2013

Japan unveiled its biggest warship since World War II on Tuesday, a $1.2 billion helicopter carrier aimed at defending territorial claims.

The move drew criticism from regional rival China, which accused its neighbor of “constant” military expansion.

The ceremony to showcase the 248-meter (810-feet) vessel came as Shinzo Abe’s conservative government, which took office last December, considers ditching the nation’s pacifist constitution and beefing up the military.

Japan plans to use the helicopter carrier, named Izumo and expected to go into service in 2015, to defend territorial claims following maritime skirmishes with China, which has demonstrated its own military ambitions in recent years.

This is via Nick Prime, who points out the theoretical possibility of fielding the F-35 on these. (Here’s another story).

Which, honestly, is what I thought was basically the only thing keeping the B variant alive. It’s not just the USMC that needs a VTOL-capable aircraft (or in the case of the JSF, “aircraft”), but a lot of our allies and partners in the region who have been investing in flattops like these (see: HMAS Canberra, ROKS Dokdo, etc.) with the possibility of flying such planes off of them. Even the Europeans are getting in on it – the French have a pretty good platform in the Mistral class, hence the brouhaha over the Russian acquisition of four of them.

And in that case, there had better be an aircraft that can use the short decks. I mean, helicopters are great and all, but if we’re going to at least play blue-water navy and accept that power projection via the aircraft carrier is still a) relevant and b) desirable, then doing it on the cheap is probably the best compromise. At this point you might as well assume that you’re going to lose them, so why not go for the more basic version? The Marines probably aren’t thrilled about a CAS aircraft that’s only 80% better than its predecessor (though certainly more than 80% more costly), but the key is that it’s not just for them. Our friends are getting in the game, and that’s not a bad thing.

Anyways, it is nice to see the JMSDF get a new flagship (that’s how they determine them, right? The biggest?). And one whose name has an interesting history, too.
UPDATE: Kyle Mizokami, as usual, has written excellent words on the Izumo. Short version: Japan’s going to have to go big or go home.

The Means of Consumption

PC sales are down. Way, way down.

What’s to blame? Zero Hedge says that in addition to lackluster sales and poor reception for Windows 8, we are, after all, still in a pretty severely depressed economy, and there’s just no end-user demand for new OSes or new computers in general. None of which is wrong. Windows 8, in particular, severely hamstrings Windows as an operating system, forcing it to suffer from the same limitations as a phone (which is just silly, especially when Windows 7 was a solid OS).

But the comments point out that we’ve really reached a point in modern computing power where most people just don’t need it. The rise of mobile and tablet devices has only compounded that. If the average person uses a machine just to tweet or surf the internet or check email or even just watch a movie, what’s the point of having multiple cores and more RAM than hard drives held less than a decade ago? The smaller devices speak to that and obviate the need for real “computing” devices.

But two comments in particular caught my eye. The first:

[M]ost people don’t do physics simulations, train neural nets, backtest stock trading strategies and so on.

In tight times – why upgrade something that’s already better than most need?  Even I still use some  2 core relative clunkers (that were the hottest thing going when bought).  Because they do their job and are dead-reliable.

And the second:

[E]very manuf [sic] caught the disease it seems.  They don’t give a shit about their installed base, only new sales, and are just slavishly following the migration of most people to crap mobiles – crap if you need any real computing power and flexibility and multi-tasking.

I recently got a Nexus 10 – it’s cute, sometimes handy and so on.  But solve any real problem on it?  You must be joking, it’s just not there.  It’s great for consuming content, sucks for creating anything real – it’s a toy that probably does match the mainstream mentality – the “average guy” who half of people are even dumber than.  That ain’t me.  I’m a maker…I need real tools.

This is just the digital embodiment of a long-standing trend. We don’t shape our environments the way we used to – we don’t create; we only consume. We refine what exists without thinking bigger. And the sad part about news like the PC sales figures, which could conceivably serve as a wake-up call, is that it won’t matter. The lesson that will be drawn is that Windows 7 was fine, so why bother iterating new versions. But the real lesson is that there is at least some segment of humanity that’s trying to create and only needs the proper tools to do it. Possessing the means of consumption allows one only to consume (the Apple model); if we can repopularize “dual-use” technologies that don’t restrict content distribution but also enable its creation, well, now we might see innovation for all the right reasons.

Third Time’s a Charm, and By “a Charm” I Mean Exactly the Same

DPRK test, actual scale (Not actually) [image: Petey Santeeny]

Following the other night’s North Korean nuclear test, there was definitely enough anxiety to keep observers and analysts up for hours. But there are a couple factors at play allowing me to sleep pretty soundly. Hopefully they’ll help you do the same!

The first is the relatively small yield – yes, it’s larger than the first two tests, but that really doesn’t mean much. A 10 kiloton (or 6-7 kt, or 15 kt) nuclear weapon is nothing to sneeze at, but as the world saw with Hiroshima and Nagasaki, a weapon of that size isn’t dramatically more effective than massed conventional explosives. The firebombings of Tokyo did more damage and took more lives than either nuclear blast in World War II.
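The point about marginal yield gains can be made quantitative with the standard cube-root scaling relation: blast damage radius grows roughly with the cube root of yield, so even a 50% jump in kilotons buys surprisingly little extra reach. A minimal sketch (the function name and reference yield are illustrative, not from the post):

```python
def relative_blast_radius(yield_kt: float, reference_kt: float = 10.0) -> float:
    """Blast radius relative to a reference yield, via cube-root scaling.

    A standard rule of thumb: for a given overpressure, damage radius
    scales approximately as yield ** (1/3).
    """
    return (yield_kt / reference_kt) ** (1.0 / 3.0)

# Going from 10 kt to 15 kt -- a 50% increase in yield -- grows the
# damage radius by only about 14%.
print(round(relative_blast_radius(15.0), 2))  # ~1.14
```

Which is why a test coming in at 10 kt versus 6-7 kt tells us very little about any real leap in capability.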

They’ve also talked about switching their nuclear fuel from plutonium to highly enriched uranium, which is odd and something of a step backward. The United States used HEU early on, but once we perfected plutonium processing techniques we stuck with plutonium: it’s a much more effective fuel for a multi-stage thermonuclear explosion, and it’s strange for anyone to switch away from it. If true, the change could indicate a processing and/or supply problem – which would actually be a good sign, since it would mean they’re having trouble sourcing fissile material and may not even have the raw materials necessary to build many bombs.

The other part is a little up in the air, and I’ve heard competing claims, but nothing I’ve read so far confirms (despite Pyongyang’s claims) that North Korea has successfully miniaturized a nuclear weapon – a prerequisite for mounting one on an ICBM. Miniaturization is one of the most difficult steps in nuclear weapons engineering, because it requires a substantial increase in reaction efficiency. The small gain in yield this test produced makes me think they definitely haven’t reached that step yet. I’m also not positive on the physics – and it may be correlation rather than causation – but I believe the only miniaturized (i.e., ICBM-ready) warheads in existence are thermonuclear, and a failure to demonstrate that technology definitely means something.

So, in short, I’m not worried yet. They can’t build very many bombs; the bombs they can build aren’t especially powerful; they have no missile with the range to reach the United States, and even if they did, they haven’t miniaturized a warhead sufficiently to mount on it; and their only means of delivering one of the few extant bombs is by bombers, which they have in small numbers and which also lack the range to hit the US, much less reach here undetected. So we’re all safe over here for the foreseeable future.

I don’t know that this really changes anything strategically even in the region. We’ve known, the South Koreans have known, and the Japanese have known; it’s common knowledge that North Korea has some nuclear weapons. And that hasn’t led to regional proliferation or a move to oust the Kim regime or anything like that. I don’t see “just another test” making a dramatic difference on that front. Dr. Farley probably says it best: “Last night, North Korea expended a significant fraction of its fissile material to achieve nearly nothing, beyond possibly the irritation of Beijing and the strengthening of right-wingers in Japan and the United States.”

Yeah, great job there, Pyongyang.