Category Archives: News Clips

Some news items are still worth remembering even after they grow old.

A Capitalist’s Dilemma

We need more empowering innovations. Many great technology firms have been ruined by executives who care only about ROI. To revive empowering innovations, we need executives with vision.

by Clayton Christensen, November 3, 2012, New York Times

In many ways, the answer won’t depend on who wins on Tuesday. Anyone who says otherwise is overstating the power of the American president. But if the president doesn’t have the power to fix things, who does?

It’s not the Federal Reserve. The Fed has been injecting more and more capital into the economy because — at least in theory — capital fuels capitalism. And yet cash hoards in the billions are sitting unused on the pristine balance sheets of Fortune 500 corporations. Billions in capital is also sitting inert and uninvested at private equity funds.

Capitalists seem almost uninterested in capitalism, even as entrepreneurs eager to start companies find that they can’t get financing. Businesses and investors sound like the Ancient Mariner, who complained of “Water, water everywhere — nor any drop to drink.”

It’s a paradox, and at its nexus is what I’ll call the Doctrine of New Finance, which is taught with increasingly religious zeal by economists, and at times even by business professors like me who have failed to challenge it. This doctrine embraces measures of profitability that guide capitalists away from investments that can create real economic growth.

Executives and investors might finance three types of innovations with their capital. I’ll call the first type “empowering” innovations. These transform complicated and costly products available to a few into simpler, cheaper products available to the many.

The Ford Model T was an empowering innovation, as was the Sony transistor radio. So were the personal computers of I.B.M. and Compaq and online trading at Schwab. A more recent example is cloud computing. It transformed information technology that was previously accessible only to big companies into something that even small companies could afford.

Empowering innovations create jobs, because they require more and more people who can build, distribute, sell and service these products. Empowering investments also use capital — to expand capacity and to finance receivables and inventory.

The second type are “sustaining” innovations. These replace old products with new models. For example, the Toyota Prius hybrid is a marvelous product. But it’s not as if every time Toyota sells a Prius, the same customer also buys a Camry. There is a zero-sum aspect to sustaining innovations: They replace yesterday’s products with today’s products and create few jobs. They keep our economy vibrant — and, in dollars, they account for the most innovation. But they have a neutral effect on economic activity and on capital.

The third type are “efficiency” innovations. These reduce the cost of making and distributing existing products and services. Examples are minimills in steel and Geico in online insurance underwriting. Taken together in an industry, such innovations almost always reduce the net number of jobs, because they streamline processes. But they also preserve many of the remaining jobs — because without them entire companies and industries would disappear in competition against companies abroad that have innovated more efficiently.

Efficiency innovations also emancipate capital. Without them, much of an economy’s capital is held captive on balance sheets, with no way to redeploy it as fuel for new, empowering innovations. For example, Toyota’s just-in-time production system is an efficiency innovation, letting manufacturers operate with much less capital invested in inventory.

INDUSTRIES typically transition through these three types of innovations. By illustration, the early mainframe computers were so expensive and complicated that only big companies could own and use them. But personal computers were simple and affordable, empowering many more people.

Companies like I.B.M. and Hewlett-Packard had to hire hundreds of thousands of people to make and sell PC’s. These companies then designed and made better computers — sustaining innovations — that inspired us to keep buying newer and better products. Finally, companies like Dell made the industry much more efficient. This reduced net employment within the industry, but freed capital that had been used in the supply chain.

Ideally, the three innovations operate in a recurring circle. Empowering innovations are essential for growth because they create new consumption. As long as empowering innovations create more jobs than efficiency innovations eliminate, and as long as the capital that efficiency innovations liberate is invested back into empowering innovations, we keep recessions at bay. The dials on these three innovations are sensitive. But when they are set correctly, the economy is a magnificent machine.

For significant periods in the last 150 years, America’s economy has operated this way. In the seven recoveries from recession between 1948 and 1981, according to the McKinsey Global Institute, the economy returned to its prerecession employment peak in about six months, like clockwork — as if a spray of economic WD-40 had reset the balance on the three types of innovation, prompting a recovery.

In the last three recoveries, however, America’s economic engine has emitted sounds we’d never heard before. The 1990 recovery took 15 months, not the typical six, to reach the prerecession peaks of economic performance. After the 2001 recession, it took 39 months to get out of the valley. And now our machine has been grinding for 60 months, trying to hit its prerecession levels — and it’s not clear whether, when or how we’re going to get there. The economic machine is out of balance and losing its horsepower. But why?

The answer is that efficiency innovations are liberating capital, and in the United States this capital is being reinvested into still more efficiency innovations. In contrast, America is generating many fewer empowering innovations than in the past. We need to reset the balance between empowering and efficiency innovations.

The Doctrine of New Finance helped create this situation. The Republican intellectual George F. Gilder taught us that we should husband resources that are scarce and costly, but can waste resources that are abundant and cheap. When the doctrine emerged in stages between the 1930s and the ‘50s, capital was relatively scarce in our economy. So we taught our students how to magnify every dollar put into a company, to get the most revenue and profit per dollar of capital deployed. To measure the efficiency of doing this, we redefined profit not as dollars, yen or renminbi, but as ratios like RONA (return on net assets), ROCE (return on capital employed) and I.R.R. (internal rate of return).

Before these new measures, executives and investors used crude concepts like “tons of cash” to describe profitability. The new measures are fractions and give executives more options: They can innovate to add to the numerator of the RONA ratio, but they can also drive down the denominator by driving assets off the balance sheet — through outsourcing. Both routes drive up RONA and ROCE.
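As a concrete illustration of the denominator trick (the figures below are invented for the example, not from the article), a few lines of Python show how driving assets off the balance sheet raises RONA even when dollar profits fall:

```python
def rona(net_profit, net_assets):
    """Return on net assets: profit earned per dollar of assets held."""
    return net_profit / net_assets

# A factory-owning firm: $50M profit on $500M of net assets.
before = rona(50, 500)   # 0.10 -> 10%

# After outsourcing manufacturing: profit dips to $45M (outsourcing fees),
# but $300M of plant and inventory leaves the balance sheet.
after = rona(45, 200)    # 0.225 -> 22.5%

print(f"before: {before:.1%}, after: {after:.1%}")
```

The ratio more than doubles even though the firm now earns fewer actual dollars, which is exactly the incentive the article describes.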

Similarly, I.R.R. gives investors more options. It goes up when the time horizon is short. So instead of investing in empowering innovations that pay off in five to eight years, investors can find higher internal rates of return by investing exclusively in quick wins in sustaining and efficiency innovations.
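Again with invented numbers, a short sketch shows why I.R.R. steers capital toward quick wins: a one-year efficiency payoff beats an eight-year empowering investment on I.R.R. even though the long bet earns six times the profit in dollars.

```python
def irr(invested, payoff, years):
    """Internal rate of return for a single lump-sum payoff:
    the annual rate r such that invested * (1 + r) ** years == payoff."""
    return (payoff / invested) ** (1 / years) - 1

quick_win = irr(100, 125, years=1)    # 25% IRR, $25 of profit
empowering = irr(100, 250, years=8)   # ~12.1% IRR, $150 of profit

print(f"quick win: {quick_win:.1%}, empowering: {empowering:.1%}")
```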

In a way, this mirrors the microeconomic paradox explored in my book “The Innovator’s Dilemma,” which shows how successful companies can fail by making the “right” decisions in the wrong situations. America today is in a macroeconomic paradox that we might call the capitalist’s dilemma. Executives, investors and analysts are doing what is right, from their perspective and according to what they’ve been taught. Those doctrines were appropriate to the circumstances when first articulated — when capital was scarce.

But we’ve never taught our apprentices that when capital is abundant and certain new skills are scarce, the same rules are the wrong rules. Continuing to measure the efficiency of capital prevents investment in empowering innovations that would create the new growth we need because it would drive down their RONA, ROCE and I.R.R.

It’s as if our leaders in Washington, all highly credentialed, are standing on a beach holding their fire hoses full open, pouring more capital into an ocean of capital. We are trying to solve the wrong problem.

Our approach to higher education is exacerbating our problems. Efficiency innovations often add workers with yesterday’s skills to the ranks of the unemployed. Empowering innovations, in turn, often change the nature of jobs — creating jobs that can’t be filled.

Today, the educational skills necessary to start companies that focus on empowering innovations are scarce. Yet our leaders are wasting education by shoveling out billions in Pell Grants and subsidized loans to students who graduate with skills and majors that employers cannot use.

Is there a solution? It’s complicated, but I offer three ideas to seed a productive discussion:

We can use capital with abandon now, because it’s abundant and cheap. But we can no longer waste education, subsidizing it in fields that offer few jobs. Optimizing return on capital will generate less growth than optimizing return on education.

Today, tax rates on personal income are progressive — they climb as we make more money. In contrast, there are only two tax rates on investment income. Income from investments that we hold for less than a year is taxed like personal income. But if we hold an investment for one day longer than 365, it is generally taxed at no more than 15 percent.

We should instead make capital gains taxes regressive over time, based upon how long the capital is invested in a company. Short-term investments should continue to be taxed at personal income rates. But the rate should be reduced the longer the investment is held — so that, for example, tax rates on investments held for five years might be zero — and rates on investments held for eight years might be negative.
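The proposed schedule can be sketched as a simple function. The anchor points (personal rates under a year, zero at five years, negative at eight) come from the article; the straight-line interpolation between them is my own assumption for illustration.

```python
def proposed_cap_gains_rate(years_held, personal_rate=0.35):
    """Illustrative sliding capital-gains rate per Christensen's proposal.
    The interpolation between the article's anchor points is assumed."""
    if years_held < 1:
        # short-term: taxed as ordinary income
        return personal_rate
    if years_held <= 5:
        # slide from the personal rate at year 1 down to 0% at year 5
        return personal_rate * (5 - years_held) / 4
    # beyond 5 years the rate keeps falling below zero (a subsidy),
    # bottoming out at -5% by year 8
    return max(-0.05, -0.05 * (years_held - 5) / 3)

for y in (0.5, 1, 3, 5, 8):
    print(y, round(proposed_cap_gains_rate(y), 3))
```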

Federal tax receipts from capital gains comprise only a tiny percentage of all United States tax revenue. So the near-term impact on the budget will be minimal. But over the longer term, this policy change should have a positive impact on the federal deficit, from taxes paid by companies and their employees that make empowering innovations.

The major political parties are both wrong when it comes to taxing and distributing to the middle class the capital of the wealthiest 1 percent. It’s true that some of the richest Americans have been making money with money — investing in efficiency innovations rather than investing to create jobs. They are doing what their professors taught them to do, but times have changed.

If the I.R.S. taxes their wealth away and distributes it to everyone else, it still won’t help the economy. Without empowering products and services in our economy, most of this redistribution will be spent buying sustaining innovations — replacing consumption with consumption. We must give the wealthiest an incentive to invest for the long term. This can create growth.

Granted, mine is a simple model, and we face complicated problems. But I hope it helps us and our leaders understand that policies that were once right are now wrong, and that counterintuitive measures might actually work to turn our economy around.

Clayton M. Christensen is a business professor at Harvard and a co-author of “How Will You Measure Your Life?”

A Nerd’s Perspective on Software Patents

Software patents do more harm than good to society by allowing companies to patent obvious, common things. We should have a patent system that penalizes these “fake” patents by imposing a heavy fine when a patent is later shown to be invalid. Part of the fine should be awarded to the person who presents the evidence that overturns the “fake” patent. This would lessen the burden on the patent office by outsourcing patent validation to the crowd, and it would keep those who file patents honest.

by John Larson

As a programmer doing reasonably smart stuff on the web, I’m made a bit uneasy by software patents, namely because of the possibility that I could be sued for infringing upon them.

It’s not that I do anything nasty like meticulously reverse engineering the complex works of another for my own benefit, it’s that I build websites and web applications at all. I haven’t done the boatloads of research to know precisely how much I’m infringing, but, for example, most projects I do contain some sort of menu, and this technically violates Microsoft’s patent on ‘system and method for providing and displaying a Web page having an embedded menu’, which they have already demonstrated willingness to sue another company for. I do stuff that’s WAY more complicated than a bunch of navigation links at the top of the screen, so I can only imagine how many other toes I’m stepping on when building systems that feature both ubiquitous and niche features.

So this characterizes the implication of patent law upon me personally. Now then, if you scale that up to a community of hundreds of thousands of programmers similarly impacted, and throw in the rising prominence of large companies suing one another over intellectual property[1], you will then have a sense of what the fuss over software patents covers. These are broad strokes, but they convey the gist.

Accordingly, protest has risen and lengthy debates have raged on about how to fix the “software patent problem”. Some of it goes on and on about legal precedent, objective tests for what’s patentable, the so-called “transformation test”, and other things that are apt to make the eyes of a more casual reader glaze over (mine included).

What I want to explore is if it makes sense and is defensible to take a completely different tack: the pragmatic view of the average, motivated nerd.

Let us eschew, for a while, all the legal mumbo jumbo of definitions, specificity, precedent and so forth as they are usually applied to the debate of “is X patentable?”, and see if we can’t go closer to the root concepts in an effort to sidestep the whole tangled mess. I want to look through the lens of the reasons and motivations for having a patent system at all.
The Essence of the Patent System

I’m not saying anything new or profound here, just summarizing to set the stage. The patent system is a societal construct: we all, as a society, agree to abide by certain constraints (namely not to infringe upon another’s patented ideas for a fixed period of time), we willingly do this in order to reap benefits as a society, and there are consequences for an individual who breaks this agreement. (The whole thing is not unlike how we all, as a society, agree not to kill one another: we all more or less enjoy the overall benefit and are willing to give up that particular freedom, and there are consequences for an individual who breaks that agreement.)

There are two really great benefits we reap as a society for having and honoring the patent system.

The first is that it encourages people to come up with great new things. The protection offered by patents effectively says “Hey, nice job coming up with that great new thing! Listen, we know you put a lot of hard work and investment into doing so, and for being the one who did all that we’ll give you a window of time in which you can be the only one who gets to reap the reward of that effort, without having some copycat come along and bootstrap off of your blood and sweat.” An innovator, knowing that benefit lies on the other side, is encouraged to invest time/effort/money up front. The rest of society gets to enjoy the fruits of that work, and for it pays the price of allowing a temporary monopoly.

The second benefit is that it encourages disclosure of really smart work. In this sense, the patent system effectively says “Wow, that’s something really smart that you did! Listen, we’re thinking long term for the expansion of the fabric of human knowledge, and so we’d love to know how you did that rather than see you take those secrets to the grave, or have your heirs forever keep it under lock and key. If you teach us how that works, we’ll get to expand as a civilization and in return we’ll make sure you have a window of time in which to benefit from your novel creation. Thanks for bettering the rest of us for the long haul.” Again, a nice benefit to society: it speeds up the proliferation of ingenuity and all the fruits that come with it, gained at the cost of allowing a temporary monopoly.
Evaluating Patents as a Cost/Benefit Proposition

From a clear understanding of the trade-off made when a patent is given, one can view it as a transaction willingly entered into by two parties. A patent application can be viewed as a business proposal that a society might freely choose to enter into (or politely decline) according to its interests and values, much like any business deal between two free-willed entities.

I propose that the debate no longer center around IF a given idea is patentable, but instead whether or not we WANT to grant a patent for a given idea: in other words, transform the debate to a value judgment as to whether we as a society care to pay the price of issuing a patent for the expected benefit, or would rather pass on the opportunity altogether. When it comes to software, I believe the best choice, as a culture, is to say “thanks but no thanks” to the opportunity of issuing patents, and it takes a look into the nature of software and the nerd culture that surrounds it to see clearly why.
Nerd Culture and Innovation

I love creating cool and interesting stuff with technology, and there are 100,000 others like me. There is no shortage of things out there in software that could be (or are) patented that a smart nerd with a little bit of gumption could look at and recreate without trouble. And by “look at” I mean simply get the view as an end user, not trolling through source code or employing sophisticated tools of reverse engineering.

Consider, for example, Amazon’s patent on one-click ordering. When a customer is logged in, and has items in their cart, with one click they can place a completed order for those items. Kinda nifty, but anyone who’s done e-commerce programming can immediately work out how to implement such a feature using a customer’s information on file. I say society got a raw deal for issuing this patent: Amazon shared nothing of value with the rest of the world, and effectively earned the right to squat on a generally useful idea because they ponied up some cash for lawyers and got some paperwork in first.
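To make the point concrete, here is a minimal sketch (the names and structure are my own invention, not Amazon’s actual design) of how a working one-click order falls out of information any e-commerce system already keeps on file:

```python
from dataclasses import dataclass, field

@dataclass
class Customer:
    name: str
    shipping_address: str   # already on file from a previous order
    payment_token: str      # saved card reference, also on file
    cart: list = field(default_factory=list)

@dataclass
class Order:
    customer: str
    items: list
    ship_to: str
    charged_with: str

def one_click_order(customer: Customer) -> Order:
    """The whole 'invention': reuse stored details instead of re-asking."""
    order = Order(
        customer=customer.name,
        items=list(customer.cart),
        ship_to=customer.shipping_address,
        charged_with=customer.payment_token,
    )
    customer.cart.clear()   # the cart becomes a completed order in one step
    return order

alice = Customer("Alice", "1 Main St", "tok_visa_1234", cart=["book", "lamp"])
order = one_click_order(alice)
print(order.items, order.ship_to)
```

That is essentially the entire trick: no novel algorithm, just a lookup of data the store already holds.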

I would categorize ideas like that as “inevitable disclosure”: an idea that, by its very existence in user-facing software, reveals everything needed to reproduce it. The benefit of having the implementation of such an idea disclosed is moot: one look (or sometimes even just hearing of the idea) is all a smart nerd needs to work out the rest. Apple’s patented “slide to unlock” widget is another example of an inevitable disclosure idea. So are rollover images.

We smart nerds are always thinking of and building new stuff to razzle-dazzle, be it for the pure fun of it, personal pride and reputation, or a great portfolio piece by which to impress the next prospective client for a contracting gig. And nerd culture, with its drive to innovate and share, runs much deeper than small projects. Volunteer, collaborative open source projects have created top-notch, large-scale innovation in all realms of software: full-fledged operating systems, open web standards, content management platforms, e-commerce packages, audio and video compression schemes, office productivity suites, and more.

I point this all out to demonstrate a simple observation: innovation [in software] isn’t going to dry up if the incentive of patent protections were to disappear tomorrow. More than most (all?) industries, software grants much more space for hobbyists and enthusiasts to get involved. The overhead to major achievement is much smaller. We are numerous, we are smart, and we are hungry to create brilliant things for both personal and altruistic reasons.
Secrets in Software

The world of software won’t turn into the wild west of pillaging and stealing ideas in the absence of software patents, because things that are genuinely hard to do and which represent painstaking work and novel innovation can be kept a secret[2].

Come to think of it, the desire to file a patent to protect a software innovation may be an admission on the part of the applicant that the idea itself will be easy for a community of smart people (or maybe even your average nerd) to replicate, which is itself a good reason to decline to issue a patent at all. “Thanks but no thanks,” I would rather we collectively say as a society: “keep it to yourself, because the larger world will figure out how to execute and enjoy this idea sooner or later, and will get there sooner without paying the price.”


[1] Which breaks my heart, because as a casual observer it appears as though legal strong-arming is becoming a passable substitute for actual marketplace competitiveness.

[2] Google’s proprietary index and ranking algorithms that power their web search are presumably breathtakingly brilliant. They constitute a large portion of the secret sauce which gives Google its competitive edge, and they reap the rewards of that not because they came first and get to squat on medium-obvious ideas, but because they do it better than your average smart person can figure out on their own. Contrast this against Apple’s slider thingee.
John Larson

Is Math Still Relevant?

Is math still relevant? That depends on your metaphysical view of the world. If reality is indeed a manifestation of mathematics, as some metaphysical theories suggest, and we are living in an endless space of possible equations, then math is the only way to understand the Truth.

By Robert W. Lucky, IEEE Spectrum, March 2012
The queen of the sciences may someday lose its royal status

Long ago, when I was a freshman in engineering school, there was a required course in mechanical drawing. “You had better learn this skill,” the instructor said, “because all engineers start their careers at the drafting table.”

This was an ominous beginning to my education, but as it turned out, he was wrong. Neither I nor, I suspect, any of my classmates began our careers at the drafting table.

These days, engineers aren’t routinely taught drawing, but they spend a lot of time learning another skill that may be similarly unnecessary: mathematics. I confess this thought hadn’t occurred to me until recently, when a friend who teaches at a leading university made an off-hand comment. “Is it possible,” he suggested, “that the era of mathematics in electrical engineering is coming to an end?”

When I asked him about this disturbing idea, he said that he had only been trying to be provocative and that his graduate students were now writing theses that were more mathematical than ever. I felt reassured that the mathematical basis of engineering is strong. But still, I wonder to what extent—and for how long—today’s undergraduate engineering students will be using classical mathematics as their careers unfold.

There are several trends that might suggest a diminishing role for mathematics in engineering work. First, there is the rise of software engineering as a separate discipline. It just doesn’t take as much math to write an operating system as it does to design a printed circuit board. Programming is rigidly structured and, at the same time, an evolving art form—neither of which is especially amenable to mathematical analysis.

Another trend veering us away from classical math is the increasing dependence on programs such as Matlab and Maple. The pencil-and-paper calculations with which we evaluated the relative performance of variations in design are now more easily made by simulation software packages—which, with their vast libraries of prepackaged functions and data, are often more powerful. A purist might ask: Is using Matlab doing math? And of course, the answer is that sometimes it is, and sometimes it isn’t.

A third trend is the growing importance of a class of problems termed “wicked,” which involve social, political, economic, and undefined or unknown issues that make the application of mathematics very difficult. The world is seemingly full of such frustrating but important problems.

These trends notwithstanding, we should recognize the role of mathematics in the discovery of fundamental properties and truth. Maxwell’s equations—which are inscribed in marble in the foyer of the National Academy of Engineering—foretold the possibility of radio. It took about half a century for those radios to reach Shannon’s limit—described by his equation for channel capacity—but at least we knew where we were headed.
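For reference, the Shannon limit mentioned here is the channel capacity formula: a channel of bandwidth B hertz and signal-to-noise ratio S/N can carry at most

C = B log2(1 + S/N)

bits per second, no matter how clever the modulation. Decades of radio engineering amounted to closing the gap between practical systems and that C.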

Theoretical physicists have explained through math the workings of the universe and even predicted the existence of previously unknown fundamental particles. The iconic image I carry in my mind is of Einstein at a blackboard that’s covered with tensor-filled equations. It is remarkable that one person scribbling math can uncover such secrets. It is as if the universe itself understands and obeys the mathematics that we humans invented.

There have been many philosophical discussions through the years about this wonderful power of math. In a famous 1960 paper entitled “The Unreasonable Effectiveness of Mathematics in the Natural Sciences,” the physicist Eugene Wigner wrote, “The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift [that] we neither understand nor deserve.” In a 1980 paper with a similar title, the computer science pioneer Richard Hamming tried to answer the question, “How can it be that simple mathematics suffices to predict so much?”

This “unreasonable effectiveness” of mathematics will continue to be at the heart of engineering, but perhaps the way we use math will change. Still, it’s hard to imagine Einstein running simulations on his laptop.

Whales are people, too

I am strongly against animal rights in general, because animals are not human. However, my ethical theory is based on self-reflective intelligent beings and a Kantian rational moral contract. According to that theory, since cetaceans have near-human intelligence, humans should grant cetaceans individual rights close to human rights. Other, non-intelligent animals should have only species rights: as long as a species is not driven to extinction, humans may use those animals as resources to serve human ends.

A declaration of the rights of cetaceans
Feb 25th 2012, the Economist

THE “Declaration of the Rights of Man” was a crucial step in the French revolution. The document, drafted by the Marquis de Lafayette, marked a break with the political past by proposing that everyone, however humble his birth, had certain inalienable civil rights. These were liberty, property, security and resistance to oppression. Merely being a man conferred them.

These days, such rights extend to women as well. But what if you are not human? A session on cetaceans at the AAAS meeting discussed a proposal that whales and dolphins, too, should have rights. The suggestion of the speakers was that the protections these species are afforded by human laws should be extended and recognised not as an indulgence of the human aristocracy towards the bestial peasantry, but as a right as natural as those which humans now afford, in the more civilised parts of the world, to themselves.

The proposition that whales have rights is founded on the idea that they have a high degree of intelligence, and also have self-awareness of the sort that humans do. That is a controversial suggestion, but there is evidence to support it. Lori Marino of Emory University, in Atlanta, Georgia, reviewed this evidence.

One pertinent observation is that dolphins, whales and their kind have brains as anatomically complex as those of humans, and that these brains contain a particular type of nerve cell, known as a spindle cell, that in humans is associated with higher cognitive functions such as abstract reasoning. Cetacean brains are also, scaled appropriately for body size, almost as big as those of humans and significantly bigger than those of great apes, which are usually thought of as humanity’s closest intellectual cousins.

Whales and dolphins have complex cultures, too, which vary from group to group within a species. The way they hunt, the repertoire of vocal signals and even their use of tools differs from pod to pod. They also seem to have an awareness of themselves as individuals. At least some can, for example, recognise themselves in a mirror—a trick that humans, great apes and elephants can manage, but most other species cannot.

Thomas White, of Loyola Marymount University, in Los Angeles, then discussed the ethical implications of what Dr Marino had said. Dr White is a philosopher, and he sought to establish the idea that a person need not be human. In philosophy, he told the meeting, a person is a being with special characteristics who deserves special treatment as a result of those characteristics. In principle, other species can qualify. For the reasons outlined by Dr Marino, he claimed, cetaceans do indeed count as persons and therefore have moral rights—though ones appropriate to their species, which may therefore differ from those that would be accorded a human (for example, the right not to be removed from their natural environment).

Chris Butler-Stroud, of the Whale and Dolphin Conservation Society, in Britain, and Kari Koski of the Whale Museum in San Juan Island, Washington state, then charted some of the hesitant steps already being taken in the direction of establishing cetacean rights. Mr Butler-Stroud showed how the language used by international bodies concerned with these animals is changing. The term “stocks”, for example, with its implication that whales and dolphins are a resource suitable for exploitation, is being overtaken by “populations”, a word that is also applied to people.

Ms Koski gave an even more intriguing example. She told of how a group of killer whales that lives near Vancouver, passing between waters controlled (from a human point of view) by Canada and the United States, have acquired legal protection even though the species as a whole is not endangered. After a battle in the American courts these particular whales have been defined by their culture, and that culture is deemed endangered.

The idea of rights for whales is certainly a provocative one, and is reminiscent of the Australian philosopher Peter Singer’s proposal that human rights be extended to the great apes—chimpanzees, bonobos, gorillas and orang-utans. Like Dr Singer’s suggestion, though, it does ignore one nagging technicality. The full title of the French revolutionary document was “Declaration of the Rights of Man and Citizen”. No one has yet argued for votes for whales and dolphins. But considering some of the politicians who manage to get themselves chosen by human electorates, maybe it would not be such a bad idea.

Hong Kong Was Better Under the British

Maybe it is politically incorrect, but this article simply states the facts. If there were a referendum in Hong Kong today asking people whether they want to rejoin the UK or stay with China, I am pretty sure they would pick the UK over China. If people are allowed to migrate from one country to another, why can’t a whole city migrate too?

by Hugo Restall, WSJ, Feb 23 2012

The slow-motion implosion of Henry Tang, Beijing’s pick to be Hong Kong’s next chief executive, brings to mind a speech given shortly before the 1997 handover by former Far Eastern Economic Review Editor Derek Davies. Entitled “Two Cheers for Colonialism,” it attempted to explain why the city flourished under the British. Fifteen years later, the Chinese officials who are having trouble running Hong Kong might want to give it a read.

The Brits created a relatively incorrupt and competent civil service to run the city day-to-day. Mr. Davies’ countrymen might not appreciate his description of them: “They take enormous satisfaction in minutes, protocol, proper channels, precedents, even in the red tape that binds up their files inside the neat cubby holes within their registries.” But at least slavish adherence to bureaucratic procedure helped to create respect for the rule of law and prevented abuses of power.

Above the civil servants sat the career-grade officials appointed from London. These nabobs were often arrogant, affecting a contempt for journalists and other “unhelpful” critics. But they did respond to public opinion as transmitted through the newspapers and other channels.

Part of the reason was that Hong Kong officials were accountable to a democratically elected government in Britain sensitive to accusations of mismanaging a colony. But local officials often disobeyed London when it was in the local interest—for this reason frustrated Colonial Office mandarins sometimes dubbed the city “The Republic of Hong Kong.” For many decades it boasted a higher standard of governance than the mother country.

Mr. Davies nailed the real reason Hong Kong officials were so driven to excel: “Precisely because they were aware of their own anachronism, the questionable legitimacy of an alien, non-elected government they strove not to alienate the population. Their nervousness made them sensitive.”

The communists claim that the European powers stripped their colonies of natural resources and used them as captive markets for their manufacturers. But Hong Kong, devoid of resources other than refugees from communism, attracted investment and built up light industry to export back to Britain. And as for taking back the profits, Mr. Davies noted, “No British company here would have been mad enough to have repatriated its profits back to heavily-taxed, regularly devaluing Britain.”

Most expatriate officials retired to Blighty, so they were less tempted to do favors for the local business elite. The government rewarded them with pensions and OBEs. A Lands Department bureaucrat didn’t have to worry whether his child would be able to find employment in Hong Kong if a decision went against the largest property developer.

Contrast all this with Hong Kong post-handover. The government is still not democratic, but now it is accountable only to a highly corrupt and abusive single-party state. The first chief executive, Tung Chee Hwa, and Beijing’s favorite to take the post next month, Henry Tang, are both members of the Shanghainese business elite that moved to the city after 1949. The civil service is localized.

Many consequences flow from these changes, several of which involve land, which is all leased from the government. Real estate development and appreciation is the biggest source of wealth in Hong Kong, a major source of public revenue and also the source of most discontent.

In recent years, the Lands Department has made “mistakes” in negotiating leases that have allowed developers to make billions of Hong Kong dollars in extra profit. Several high-level officials have also left to work for the developers. This has bred public cynicism that Hong Kong is sinking into crony capitalism.

This helps explain why the public is so upset with Mr. Tang for illegally adding 2,400 square feet of extra floor space to his house. Likewise Michael Suen, now the secretary for education, failed to heed a 2006 order from the Lands Department to dismantle an illegal addition to his home. His offense was arguably worse, since he was secretary for housing, planning and lands at the time.

In both cases the issue is not just a matter of zoning and safety; illegal additions cheat the government out of revenue. But it’s unlikely Mr. Tang will face prosecution because nobody above or below him is independent enough to demand accountability. So now there is one set of rules for the public and another for the business and political elites.

Under the British, Hong Kong had the best of both worlds, the protections of democracy and the efficiency of all-powerful but nervous administrators imported from London. Now it has the worst of both worlds, an increasingly corrupt and feckless local ruling class backstopped by an authoritarian regime. The only good news is that the media remains free to expose scandals, but one has to wonder for how much longer.

Hong Kong’s Chinese rulers have been slow to realize that, to paraphrase Lampedusa, the only way to keep Hong Kong the same is to accept change. It is no longer a city of refugees happy to accept rule by outsiders. And democracy is the only system that can match the hybrid form of political accountability enjoyed under the British.

Mr. Davies ended his appraisal of colonialism’s faults and virtues thus: “I only hope and trust that a local Chinese will never draw a future British visitor aside and whisper to him that Hong Kong was better ruled by the foreign devils.” Fifteen years later, that sentiment is becoming common.

Why French Parents Are Superior

In general, I don’t like the French way of thinking, but this is one of the few things I actually like about the French. Kids should learn how to cope with boredom on their own. Give your kid less immediate attention and he will learn patience.

By PAMELA DRUCKERMAN, Feb 4 2012, The Wall Street Journal
While Americans fret over modern parenthood, the French are raising happy, well-behaved children without all the anxiety. Pamela Druckerman on the Gallic secrets for avoiding tantrums, teaching patience and saying ‘non’ with authority.

Pamela Druckerman’s new book “Bringing Up Bebe,” catalogs her observations about why French children seem so much better behaved than their American counterparts. She talks with WSJ’s Gary Rosen about the lessons of French parenting techniques.

When my daughter was 18 months old, my husband and I decided to take her on a little summer holiday. We picked a coastal town that’s a few hours by train from Paris, where we were living (I’m American, he’s British), and booked a hotel room with a crib. Bean, as we call her, was our only child at this point, so forgive us for thinking: How hard could it be?

We ate breakfast at the hotel, but we had to eat lunch and dinner at the little seafood restaurants around the old port. We quickly discovered that having two restaurant meals a day with a toddler deserved to be its own circle of hell.

Bean would take a brief interest in the food, but within a few minutes she was spilling salt shakers and tearing apart sugar packets. Then she demanded to be sprung from her high chair so she could dash around the restaurant and bolt dangerously toward the docks.

Our strategy was to finish the meal quickly. We ordered as soon as we were seated, then begged the server to rush out some bread and bring us our appetizers and main courses at the same time. While my husband took a few bites of fish, I made sure that Bean didn’t get kicked by a waiter or lost at sea. Then we switched. We left enormous, apologetic tips to compensate for the arc of torn napkins and calamari around our table.

After a few more harrowing restaurant visits, I started noticing that the French families around us didn’t look like they were sharing our mealtime agony. Weirdly, they looked like they were on vacation. French toddlers were sitting contentedly in their high chairs, waiting for their food, or eating fish and even vegetables. There was no shrieking or whining. And there was no debris around their tables.

Though by that time I’d lived in France for a few years, I couldn’t explain this. And once I started thinking about French parenting, I realized it wasn’t just mealtime that was different. I suddenly had lots of questions. Why was it, for example, that in the hundreds of hours I’d clocked at French playgrounds, I’d never seen a child (except my own) throw a temper tantrum? Why didn’t my French friends ever need to rush off the phone because their kids were demanding something? Why hadn’t their living rooms been taken over by teepees and toy kitchens, the way ours had?

Soon it became clear to me that quietly and en masse, French parents were achieving outcomes that created a whole different atmosphere for family life. When American families visited our home, the parents usually spent much of the visit refereeing their kids’ spats, helping their toddlers do laps around the kitchen island, or getting down on the floor to build Lego villages. When French friends visited, by contrast, the grownups had coffee and the children played happily by themselves.

By the end of our ruined beach holiday, I decided to figure out what French parents were doing differently. Why didn’t French children throw food? And why weren’t their parents shouting? Could I change my wiring and get the same results with my own offspring?

Driven partly by maternal desperation, I have spent the last several years investigating French parenting. And now, with Bean 6 years old and twins who are 3, I can tell you this: The French aren’t perfect, but they have some parenting secrets that really do work.

I first realized I was on to something when I discovered a 2009 study, led by economists at Princeton, comparing the child-care experiences of similarly situated mothers in Columbus, Ohio, and Rennes, France. The researchers found that American moms considered it more than twice as unpleasant to deal with their kids. In a different study by the same economists, working mothers in Texas said that even housework was more pleasant than child care.

Rest assured, I certainly don’t suffer from a pro-France bias. Au contraire, I’m not even sure that I like living here. I certainly don’t want my kids growing up to become sniffy Parisians.

But for all its problems, France is the perfect foil for the current problems in American parenting. Middle-class French parents (I didn’t follow the very rich or poor) have values that look familiar to me. They are zealous about talking to their kids, showing them nature and reading them lots of books. They take them to tennis lessons, painting classes and interactive science museums.

Yet the French have managed to be involved with their families without becoming obsessive. They assume that even good parents aren’t at the constant service of their children, and that there is no need to feel guilty about this. “For me, the evenings are for the parents,” one Parisian mother told me. “My daughter can be with us if she wants, but it’s adult time.” French parents want their kids to be stimulated, but not all the time. While some American toddlers are getting Mandarin tutors and preliteracy training, French kids are—by design—toddling around by themselves.

I’m hardly the first to point out that middle-class America has a parenting problem. This problem has been painstakingly diagnosed, critiqued and named: overparenting, hyperparenting, helicopter parenting, and my personal favorite, the kindergarchy. Nobody seems to like the relentless, unhappy pace of American parenting, least of all parents themselves.
Photo: Nicolas Héron for The Wall Street Journal

Delphine Porcher with daughter Pauline. The family’s daily rituals are an apprenticeship in learning to wait.

Of course, the French have all kinds of public services that help to make having kids more appealing and less stressful. Parents don’t have to pay for preschool, worry about health insurance or save for college. Many get monthly cash allotments—wired directly into their bank accounts—just for having kids.

But these public services don’t explain all of the differences. The French, I found, seem to have a whole different framework for raising kids. When I asked French parents how they disciplined their children, it took them a few beats just to understand what I meant. “Ah, you mean how do we educate them?” they asked. “Discipline,” I soon realized, is a narrow, seldom-used notion that deals with punishment. Whereas “educating” (which has nothing to do with school) is something they imagined themselves to be doing all the time.

One of the keys to this education is the simple act of learning how to wait. It is why the French babies I meet mostly sleep through the night from two or three months old. Their parents don’t pick them up the second they start crying, allowing the babies to learn how to fall back asleep. It is also why French toddlers will sit happily at a restaurant. Rather than snacking all day like American children, they mostly have to wait until mealtime to eat. (French kids consistently have three meals a day and one snack around 4 p.m.)

One Saturday I visited Delphine Porcher, a pretty labor lawyer in her mid-30s who lives with her family in the suburbs east of Paris. When I arrived, her husband was working on his laptop in the living room, while 1-year-old Aubane napped nearby. Pauline, their 3-year-old, was sitting at the kitchen table, completely absorbed in the task of plopping cupcake batter into little wrappers. She somehow resisted the temptation to eat the batter.

Delphine said that she never set out specifically to teach her kids patience. But her family’s daily rituals are an ongoing apprenticeship in how to delay gratification. Delphine said that she sometimes bought Pauline candy. (Bonbons are on display in most bakeries.) But Pauline wasn’t allowed to eat the candy until that day’s snack, even if it meant waiting many hours.

When Pauline tried to interrupt our conversation, Delphine said, “Just wait two minutes, my little one. I’m in the middle of talking.” It was both very polite and very firm. I was struck both by how sweetly Delphine said it and by how certain she seemed that Pauline would obey her. Delphine was also teaching her kids a related skill: learning to play by themselves. “The most important thing is that he learns to be happy by himself,” she said of her son, Aubane.

It’s a skill that French mothers explicitly try to cultivate in their kids more than American mothers do. In a 2004 study on the parenting beliefs of college-educated mothers in the U.S. and France, the American moms said that encouraging one’s child to play alone was of average importance. But the French moms said it was very important.

Later, I emailed Walter Mischel, the world’s leading expert on how children learn to delay gratification. As it happened, Mr. Mischel, 80 years old and a professor of psychology at Columbia University, was in Paris, staying at his longtime girlfriend’s apartment. He agreed to meet me for coffee.

Mr. Mischel is most famous for devising the “marshmallow test” in the late 1960s when he was at Stanford. In it, an experimenter leads a 4- or 5-year-old into a room where there is a marshmallow on a table. The experimenter tells the child he’s going to leave the room for a little while, and that if the child doesn’t eat the marshmallow until he comes back, he’ll be rewarded with two marshmallows. If he eats the marshmallow, he’ll get only that one.

Most kids could only wait about 30 seconds. Only one in three resisted for the full 15 minutes that the experimenter was away. The trick, the researchers found, was that the good delayers were able to distract themselves.

Following up in the mid-1980s, Mr. Mischel and his colleagues found that the good delayers were better at concentrating and reasoning, and didn’t “tend to go to pieces under stress,” as their report said.

Could it be that teaching children how to delay gratification—as middle-class French parents do—actually makes them calmer and more resilient? Might this partly explain why middle-class American kids, who are in general more used to getting what they want right away, so often fall apart under stress?

Mr. Mischel, who is originally from Vienna, hasn’t performed the marshmallow test on French children. But as a longtime observer of France, he said that he was struck by the difference between French and American kids. In the U.S., he said, “certainly the impression one has is that self-control has gotten increasingly difficult for kids.”

American parents want their kids to be patient, of course. We encourage our kids to share, to wait their turn, to set the table and to practice the piano. But patience isn’t a skill that we hone quite as assiduously as French parents do. We tend to view whether kids are good at waiting as a matter of temperament. In our view, parents either luck out and get a child who waits well or they don’t.

French parents and caregivers find it hard to believe that we are so laissez-faire about this crucial ability. When I mentioned the topic at a dinner party in Paris, my French host launched into a story about the year he lived in Southern California.

He and his wife had befriended an American couple and decided to spend a weekend away with them in Santa Barbara. It was the first time they’d met each other’s kids, who ranged in age from about 7 to 15. Years later, they still remember how the American kids frequently interrupted the adults in midsentence. And there were no fixed mealtimes; the American kids just went to the refrigerator and took food whenever they wanted. To the French couple, it seemed like the American kids were in charge.

“What struck us, and bothered us, was that the parents never said ‘no,’ ” the husband said. The children did “n’importe quoi,” his wife added.

After a while, it struck me that most French descriptions of American kids include this phrase “n’importe quoi,” meaning “whatever” or “anything they like.” It suggests that the American kids don’t have firm boundaries, that their parents lack authority, and that anything goes. It’s the antithesis of the French ideal of the cadre, or frame, that French parents often talk about. Cadre means that kids have very firm limits about certain things—that’s the frame—and that the parents strictly enforce these. But inside the cadre, French parents entrust their kids with quite a lot of freedom and autonomy.

Authority is one of the most impressive parts of French parenting—and perhaps the toughest one to master. Many French parents I meet have an easy, calm authority with their children that I can only envy. Their kids actually listen to them. French children aren’t constantly dashing off, talking back, or engaging in prolonged negotiations.

One Sunday morning at the park, my neighbor Frédérique witnessed me trying to cope with my son Leo, who was then 2 years old. Leo did everything quickly, and when I went to the park with him, I was in constant motion, too. He seemed to regard the gates around play areas as merely an invitation to exit.

Frédérique had recently adopted a beautiful redheaded 3-year-old from a Russian orphanage. At the time of our outing, she had been a mother for all of three months. Yet just by virtue of being French, she already had a whole different vision of authority than I did—what was possible and pas possible.

Frédérique and I were sitting at the perimeter of the sandbox, trying to talk. But Leo kept dashing outside the gate surrounding the sandbox. Each time, I got up to chase him, scold him, and drag him back while he screamed. At first, Frédérique watched this little ritual in silence. Then, without any condescension, she said that if I was running after Leo all the time, we wouldn’t be able to indulge in the small pleasure of sitting and chatting for a few minutes.

“That’s true,” I said. “But what can I do?” Frédérique said I should be sterner with Leo. In my mind, spending the afternoon chasing Leo was inevitable. In her mind, it was pas possible.

I pointed out that I’d been scolding Leo for the last 20 minutes. Frédérique smiled. She said that I needed to make my “no” stronger and to really believe in it. The next time Leo tried to run outside the gate, I said “no” more sharply than usual. He left anyway. I followed and dragged him back. “You see?” I said. “It’s not possible.”

Frédérique smiled again and told me not to shout but rather to speak with more conviction. I was scared that I would terrify him. “Don’t worry,” Frédérique said, urging me on.

Leo didn’t listen the next time either. But I gradually felt my “nos” coming from a more convincing place. They weren’t louder, but they were more self-assured. By the fourth try, when I was finally brimming with conviction, Leo approached the gate but—miraculously—didn’t open it. He looked back and eyed me warily. I widened my eyes and tried to look disapproving.

After about 10 minutes, Leo stopped trying to leave altogether. He seemed to forget about the gate and just played in the sandbox with the other kids. Soon Frédérique and I were chatting, with our legs stretched out in front of us. I was shocked that Leo suddenly viewed me as an authority figure.

“See that,” Frédérique said, not gloating. “It was your tone of voice.” She pointed out that Leo didn’t appear to be traumatized. For the moment—and possibly for the first time ever—he actually seemed like a French child.

The Strange Birth and Long Life of Unix

Who said history is boring? This is a very interesting history of the world’s most important operating system.

The classic operating system turns 40, and its progeny abound
By Warren Toomey, IEEE Spectrum, December 2011

They say that when one door closes on you, another opens. People generally offer this bit of wisdom just to lend some solace after a misfortune. But sometimes it’s actually true. It certainly was for Ken Thompson and the late Dennis Ritchie, two of the greats of 20th-century information technology, when they created the Unix operating system, now considered one of the most inspiring and influential pieces of software ever written.

A door had slammed shut for Thompson and Ritchie in March of 1969, when their employer, the American Telephone & Telegraph Co., withdrew from a collaborative project with the Massachusetts Institute of Technology and General Electric to create an interactive time-sharing system called Multics, which stood for “Multiplexed Information and Computing Service.” Time-sharing, a technique that lets multiple people use a single computer simultaneously, had been invented only a decade earlier. Multics was to combine time-sharing with other technological advances of the era, allowing users to phone a computer from remote terminals and then read e-mail, edit documents, run calculations, and so forth. It was to be a great leap forward from the way computers were mostly being used, with people tediously preparing and submitting batch jobs on punch cards to be run one by one.

Over five years, AT&T invested millions in the Multics project, purchasing a GE-645 mainframe computer and dedicating to the effort many of the top researchers at the company’s renowned Bell Telephone Laboratories—including Thompson and Ritchie, Joseph F. Ossanna, Stuart Feldman, M. Douglas McIlroy, and the late Robert Morris. But the new system was too ambitious, and it fell troublingly behind schedule. In the end, AT&T’s corporate leaders decided to pull the plug.

After AT&T’s departure from the Multics project, managers at Bell Labs, in Murray Hill, N.J., became reluctant to allow any further work on computer operating systems, leaving some researchers there very frustrated. Although Multics hadn’t met many of its objectives, it had, as Ritchie later recalled, provided them with a “convenient interactive computing service, a good environment in which to do programming, [and] a system around which a fellowship could form.” Suddenly, it was gone.

With heavy hearts, the researchers returned to using their old batch system. At such an inauspicious moment, with management dead set against the idea, it surely would have seemed foolhardy to continue designing computer operating systems. But that’s exactly what Thompson, Ritchie, and many of their Bell Labs colleagues did. Now, some 40 years later, we should be thankful that these programmers ignored their bosses and continued their labor of love, which gave the world Unix, one of the greatest computer operating systems of all time.
Man men: Thompson (ken) and Ritchie (dmr) authored the first Unix manual or “man” pages, one of which is shown here. The first edition of the manual was released in November 1971.

The rogue project began in earnest when Thompson, Ritchie, and a third Bell Labs colleague, Rudd Canaday, began to sketch out on paper the design for a file system. Thompson then wrote the basics of a new operating system for the lab’s GE-645 mainframe. But with the Multics project ended, so too was the need for the GE-645. Thompson realized that any further programming he did on it was likely to go nowhere, so he dropped the effort.

Thompson had passed some of his time after the demise of Multics writing a computer game called Space Travel, which simulated all the major bodies in the solar system along with a spaceship that could fly around them. Written for the GE-645, Space Travel was clunky to play—and expensive: roughly US $75 a game for the CPU time. Hunting around, Thompson came across a dusty PDP-7, a minicomputer built by Digital Equipment Corp. that some of his Bell Labs colleagues had purchased earlier for a circuit-analysis project. Thompson rewrote Space Travel to run on it.

And with that little programming exercise, a second door cracked ajar. It was to swing wide open during the summer of 1969 when Thompson’s wife, Bonnie, spent a month visiting his parents to show off their newborn son. Thompson took advantage of his temporary bachelor existence to write a good chunk of what would become the Unix operating system for the discarded PDP‑7. The name Unix stems from a joke one of Thompson’s colleagues made: Because the new operating system supported only one user (Thompson), he saw it as an emasculated version of Multics and dubbed it “Un-multiplexed Information and Computing Service,” or Unics. The name later morphed into Unix.

Initially, Thompson used the GE-645 to compose and compile the software, which he then downloaded to the PDP‑7. But he soon weaned himself from the mainframe, and by the end of 1969 he was able to write operating-system code on the PDP-7 itself. That was a step in the right direction. But Thompson and the others helping him knew that the PDP‑7, which was already obsolete, would not be able to sustain their skunkworks for long. They also knew that the lab’s management wasn’t about to allow any more research on operating systems.

So Thompson and Ritchie got creative. They formulated a proposal to their bosses to buy one of DEC’s newer minicomputers, a PDP-11, but couched the request in especially palatable terms. They said they were aiming to create tools for editing and formatting text, what you might call a word-processing system today. The fact that they would also have to write an operating system for the new machine to support the editor and text formatter was almost a footnote.

Management took the bait, and an order for a PDP-11 was placed in May 1970. The machine itself arrived soon after, although the disk drives for it took more than six months to appear. During the interim, Thompson, Ritchie, and others continued to develop Unix on the PDP-7. After the PDP-11’s disks were installed, the researchers moved their increasingly complex operating system over to the new machine. Next they brought over the roff text formatter written by Ossanna and derived from the runoff program, which had been used in an earlier time-sharing system.

Unix was put to its first real-world test within Bell Labs when three typists from AT&T’s patents department began using it to write, edit, and format patent applications. It was a hit. The patent department adopted the system wholeheartedly, which gave the researchers enough credibility to convince management to purchase another machine—a newer and more powerful PDP-11 model—allowing their stealth work on Unix to continue.

During its earliest days, Unix evolved constantly, so the idea of issuing named versions or releases seemed inappropriate. But the researchers did issue new editions of the programmer’s manual periodically, and the early Unix systems were named after each such edition. The first edition of the manual was completed in November 1971.

So what did the first edition of Unix offer that made it so great? For one thing, the system provided a hierarchical file system, which allowed something we all now take for granted: Files could be placed in directories—or equivalently, folders—that in turn could be put within other directories. Each file could contain no more than 64 kilobytes, and its name could be no more than six characters long. These restrictions seem awkwardly limiting now, but at the time they appeared perfectly adequate.

Although Unix was ostensibly created for word processing, the only editor available in 1971 was the line-oriented ed. Today, ed is still the only editor guaranteed to be present on all Unix systems. Apart from the text-processing and general system applications, the first edition of Unix included games such as blackjack, chess, and tic-tac-toe. For the system administrator, there were tools to dump and restore disk images to magnetic tape, to read and write paper tapes, and to create, check, mount, and unmount removable disk packs.

Most important, the system offered an interactive environment that by this time allowed time-sharing, so several people could use a single machine at once. Various programming languages were available to them, including BASIC, Fortran, the scripting of Unix commands, assembly language, and B. The last of these, a descendant of BCPL (Basic Combined Programming Language), ultimately evolved into the immensely popular C language, which Ritchie created while also working on Unix.

The first edition of Unix let programmers call 34 different low-level routines built into the operating system. It’s a testament to the system’s enduring nature that nearly all of these system calls are still available—and still heavily used—on modern Unix and Linux systems four decades on. For its time, first-edition Unix provided a remarkably powerful environment for software development. Yet it contained just 4200 lines of code at its heart and occupied a measly 16 KB of main memory when it ran.
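That continuity is easy to demonstrate. The basic file calls from first-edition Unix—open, read, write, close, unlink—map directly onto today’s kernels, and Python’s os module exposes them as thin wrappers over the modern system calls. A minimal sketch (file name and contents are arbitrary examples):

```python
import os
import tempfile

# Create a file, write to it, and close it -- the same creat/open,
# write, and close primitives documented in the 1971 Unix manual.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
os.write(fd, b"hello, unix\n")
os.close(fd)

# Reopen and read the contents back with the read() system call.
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)
os.close(fd)

# Remove the file via unlink(), another first-edition survivor.
os.unlink(path)

print(data)
```

Forty years of operating-system evolution later, the calling conventions a 1971 programmer learned from the first manual would still look familiar here.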

Unix’s great influence can be traced in part to its elegant design, simplicity, portability, and serendipitous timing. But perhaps even more important was the devoted user community that soon grew up around it. And that came about only by an accident of its unique history.

The story goes like this: For years Unix remained nothing more than a Bell Labs research project, but by 1973 its authors felt the system was mature enough for them to present a paper on its design and implementation at a symposium of the Association for Computing Machinery. That paper was published in 1974 in the Communications of the ACM. Its appearance brought a flurry of requests for copies of the software.

This put AT&T in a bind. In 1956, AT&T had agreed to a U.S. government consent decree that prevented the company from selling products not directly related to telephones and telecommunications, in return for its legal monopoly status in running the country’s long-distance phone service. So Unix could not be sold as a product. Instead, AT&T released the Unix source code under license to anyone who asked, charging only a nominal fee. The critical wrinkle here was that the consent decree prevented AT&T from supporting Unix. Indeed, for many years Bell Labs researchers proudly displayed their Unix policy at conferences with a slide that read, “No advertising, no support, no bug fixes, payment in advance.”

With no other channels of support available to them, early Unix adopters banded together for mutual assistance, forming a loose network of user groups all over the world. They had the source code, which helped. And they didn’t view Unix as a standard software product, because nobody seemed to be looking after it. So these early Unix users themselves set about fixing bugs, writing new tools, and generally improving the system as they saw fit.

The Usenix user group acted as a clearinghouse for the exchange of Unix software in the United States. People could send in magnetic tapes with new software or fixes to the system and get back tapes with the software and fixes that Usenix had received from others. In Australia, the University of Sydney produced a more robust version of Unix, the Australian Unix Share Accounting Method, which could cope with larger numbers of concurrent users and offered better performance.

By the mid-1970s, the environment of sharing that had sprung up around Unix resembled the open-source movement so prevalent today. Users far and wide were enthusiastically enhancing the system, and many of their improvements were being fed back to Bell Labs for incorporation in future releases. But as Unix became more popular, AT&T’s lawyers began looking harder at what various licensees were doing with their systems.

One person who caught their eye was John Lions, a computer scientist then teaching at the University of New South Wales, in Australia. In 1977, he published what was probably the most famous computing book of the time, A Commentary on the Unix Operating System, which contained an annotated listing of the central source code for Unix.

Unix’s licensing conditions allowed for the exchange of source code, and initially, Lions’s book was sold to licensees. But by 1979, AT&T’s lawyers had clamped down on the book’s distribution and use in academic classes. The antiauthoritarian Unix community reacted as you might expect, and samizdat copies of the book spread like wildfire. Many of us have nearly unreadable nth-generation photocopies of the original book.

End runs around AT&T’s lawyers indeed became the norm—even at Bell Labs. For example, between the release of the sixth edition of Unix in 1975 and the seventh edition in 1979, Thompson collected dozens of important bug fixes to the system, coming both from within and outside of Bell Labs. He wanted these to filter out to the existing Unix user base, but the company’s lawyers felt that this would constitute a form of support and balked at their release. Nevertheless, those bug fixes soon became widely distributed through unofficial channels. For instance, Lou Katz, the founding president of Usenix, received a phone call one day telling him that if he went down to a certain spot on Mountain Avenue (where Bell Labs was located) at 2 p.m., he would find something of interest. Sure enough, Katz found a magnetic tape with the bug fixes, which were rapidly in the hands of countless users.

By the end of the 1970s, Unix, which had started a decade earlier as a reaction against the loss of a comfortable programming environment, was growing like a weed throughout academia and the IT industry. Unix would flower in the early 1980s before reaching the height of its popularity in the early 1990s.

For many reasons, Unix has since given way to other commercial and noncommercial systems. But its legacy, that of an elegant, well-designed, comfortable environment for software development, lives on. In recognition of their accomplishment, Thompson and Ritchie were given the Japan Prize earlier this year, adding to a collection of honors that includes the United States’ National Medal of Technology and Innovation and the Association for Computing Machinery’s Turing Award. Many other, often very personal, tributes to Ritchie and his enormous influence on computing were widely shared after his death this past October.

Unix is indeed one of the most influential operating systems ever invented. Its direct descendants now number in the hundreds. On one side of the family tree are various versions of Unix proper, which began to be commercialized in the 1980s after the Bell System monopoly was broken up, freeing AT&T from the stipulations of the 1956 consent decree. On the other side are various Unix-like operating systems derived from the version of Unix developed at the University of California, Berkeley, including the one Apple uses today on its computers, OS X. I say “Unix-like” because the developers of the Berkeley Software Distribution (BSD) Unix on which these systems were based worked hard to remove all the original AT&T code so that their software and its descendants would be freely distributable.

The effectiveness of those efforts was, however, called into question when the AT&T subsidiary Unix System Laboratories filed suit against Berkeley Software Design and the Regents of the University of California in 1992 over intellectual property rights to this software. The university in turn filed a counterclaim against AT&T for breaches to the license it provided AT&T for the use of code developed at Berkeley. The ensuing legal quagmire slowed the development of free Unix-like clones, including 386BSD, which was designed for the Intel 386 chip, the CPU then found in many IBM PCs.

Had this operating system been available at the time, Linus Torvalds says he probably wouldn’t have created Linux, an open-source Unix-like operating system he developed from scratch for PCs in the early 1990s. Linux has carried the Unix baton forward into the 21st century, powering a wide range of digital gadgets including wireless routers, televisions, desktop PCs, and Android smartphones. It even runs some supercomputers.

Although AT&T quickly settled its legal disputes with Berkeley Software Design and the University of California, legal wrangling over intellectual property claims to various parts of Unix and Linux has continued over the years, often involving byzantine corporate relations. By 2004, no fewer than five major lawsuits had been filed. Just this past August, a software company called the TSG Group (formerly known as the SCO Group) lost a bid in court to claim ownership of Unix copyrights that Novell had acquired when it purchased the Unix System Laboratories from AT&T in 1993.

As a programmer and Unix historian, I can’t help but find all this legal sparring a bit sad. From the very start, the authors and users of Unix worked as best they could to build and share, even if that meant defying authority. That outpouring of selflessness stands in sharp contrast to the greed that has driven subsequent legal battles over the ownership of Unix.

The world of computer hardware and software moves forward startlingly fast. For IT professionals, the rapid pace of change is typically a wonderful thing. But it makes us susceptible to the loss of our own history, including important lessons from the past. To address this issue in a small way, in 1995 I started a mailing list of old-time Unix aficionados. That effort morphed into the Unix Heritage Society. Our goal is not only to save the history of Unix but also to collect and curate these old systems and, where possible, bring them back to life. With help from many talented members of this society, I was able to restore much of the old Unix software to working order, including Ritchie’s first C compiler from 1972 and the first Unix system to be written in C, dating from 1973.

One holy grail that eluded us for a long time was the first edition of Unix in any form, electronic or otherwise. Then, in 2006, Al Kossow from the Computer History Museum, in Mountain View, Calif., unearthed a printed study of Unix dated 1972, which not only covered the internal workings of Unix but also included a complete assembly listing of the kernel, the main component of this operating system. This was an amazing find—like discovering an old Ford Model T collecting dust in a corner of a barn. But we didn’t just want to admire the chrome work from afar. We wanted to see the thing run again.

In 2008, Tim Newsham, an independent programmer in Hawaii, and I assembled a team of like-minded Unix enthusiasts and set out to bring this ancient system back from the dead. The work was technically arduous and often frustrating, but in the end, we had a copy of the first edition of Unix running on an emulated PDP-11/20. We sent out messages announcing our success to all those we thought would be interested. Thompson, always succinct, simply replied, “Amazing.” Indeed, his brainchild was amazing, and I’ve been happy to do what I can to make it, and the story behind it, better known.

Why Do Intellectuals Oppose Capitalism? by Robert Nozick

Why do we have so many intellectuals in the first place? They do not seem to be very productive in modern society. Maybe we need a few as keepers of our knowledge, but educating most of them is just a waste of society’s resources.

Robert Nozick is Arthur Kingsley Porter Professor of Philosophy at Harvard University and the author of Anarchy, State, and Utopia and other books. This article is excerpted from his essay “Why Do Intellectuals Oppose Capitalism?” which originally appeared in The Future of Private Enterprise, ed. Craig Aronoff et al. (Georgia State University Business Press, 1986) and is reprinted in Robert Nozick, Socratic Puzzles (Harvard University Press, 1997).

It is surprising that intellectuals oppose capitalism so. Other groups of comparable socio-economic status do not show the same degree of opposition in the same proportions. Statistically, then, intellectuals are an anomaly.

Not all intellectuals are on the “left.” Like other groups, their opinions are spread along a curve. But in their case, the curve is shifted and skewed to the political left.

By intellectuals, I do not mean all people of intelligence or of a certain level of education, but those who, in their vocation, deal with ideas as expressed in words, shaping the word flow others receive. These wordsmiths include poets, novelists, literary critics, newspaper and magazine journalists, and many professors. It does not include those who primarily produce and transmit quantitatively or mathematically formulated information (the numbersmiths) or those working in visual media, painters, sculptors, cameramen. Unlike the wordsmiths, people in these occupations do not disproportionately oppose capitalism. The wordsmiths are concentrated in certain occupational sites: academia, the media, government bureaucracy.

Wordsmith intellectuals fare well in capitalist society; there they have great freedom to formulate, encounter, and propagate new ideas, to read and discuss them. Their occupational skills are in demand, their income much above average. Why then do they disproportionately oppose capitalism? Indeed, some data suggest that the more prosperous and successful the intellectual, the more likely he is to oppose capitalism. This opposition to capitalism is mainly “from the left” but not solely so. Yeats, Eliot, and Pound opposed market society from the right.

The opposition of wordsmith intellectuals to capitalism is a fact of social significance. They shape our ideas and images of society; they state the policy alternatives bureaucracies consider. From treatises to slogans, they give us the sentences to express ourselves. Their opposition matters, especially in a society that depends increasingly upon the explicit formulation and dissemination of information.

We can distinguish two types of explanation for the relatively high proportion of intellectuals in opposition to capitalism. One type finds a factor unique to the anti-capitalist intellectuals. The second type of explanation identifies a factor applying to all intellectuals, a force propelling them toward anti-capitalist views. Whether it pushes any particular intellectual over into anti-capitalism will depend upon the other forces acting upon him. In the aggregate, though, since it makes anti-capitalism more likely for each intellectual, such a factor will produce a larger proportion of anti-capitalist intellectuals. Our explanation will be of this second type. We will identify a factor which tilts intellectuals toward anti-capitalist attitudes but does not guarantee it in any particular case.

The Value of Intellectuals

Intellectuals now expect to be the most highly valued people in a society, those with the most prestige and power, those with the greatest rewards. Intellectuals feel entitled to this. But, by and large, a capitalist society does not honor its intellectuals. Ludwig von Mises explains the special resentment of intellectuals, in contrast to workers, by saying they mix socially with successful capitalists and so have them as a salient comparison group and are humiliated by their lesser status. However, even those intellectuals who do not mix socially are similarly resentful, while merely mixing is not enough–the sports and dancing instructors who cater to the rich and have affairs with them are not noticeably anti-capitalist.

Why then do contemporary intellectuals feel entitled to the highest rewards their society has to offer and resentful when they do not receive this? Intellectuals feel they are the most valuable people, the ones with the highest merit, and that society should reward people in accordance with their value and merit. But a capitalist society does not satisfy the principle of distribution “to each according to his merit or value.” Apart from the gifts, inheritances, and gambling winnings that occur in a free society, the market distributes to those who satisfy the perceived market-expressed demands of others, and how much it so distributes depends on how much is demanded and how great the alternative supply is. Unsuccessful businessmen and workers do not have the same animus against the capitalist system as do the wordsmith intellectuals. Only the sense of unrecognized superiority, of entitlement betrayed, produces that animus.

Why do wordsmith intellectuals think they are most valuable, and why do they think distribution should be in accordance with value? Note that this latter principle is not a necessary one. Other distributional patterns have been proposed, including equal distribution, distribution according to moral merit, distribution according to need. Indeed, there need not be any pattern of distribution a society is aiming to achieve, even a society concerned with justice. The justice of a distribution may reside in its arising from a just process of voluntary exchange of justly acquired property and services. Whatever outcome is produced by that process will be just, but there is no particular pattern the outcome must fit. Why, then, do wordsmiths view themselves as most valuable and accept the principle of distribution in accordance with value?

From the beginnings of recorded thought, intellectuals have told us their activity is most valuable. Plato valued the rational faculty above courage and the appetites and deemed that philosophers should rule; Aristotle held that intellectual contemplation was the highest activity. It is not surprising that surviving texts record this high evaluation of intellectual activity. The people who formulated evaluations, who wrote them down with reasons to back them up, were intellectuals, after all. They were praising themselves. Those who valued other things more than thinking things through with words, whether hunting or power or uninterrupted sensual pleasure, did not bother to leave enduring written records. Only the intellectual worked out a theory of who was best.

The Schooling of Intellectuals

What factor produced feelings of superior value on the part of intellectuals? I want to focus on one institution in particular: schools. As book knowledge became increasingly important, schooling–the education together in classes of young people in reading and book knowledge–spread. Schools became the major institution outside of the family to shape the attitudes of young people, and almost all those who later became intellectuals went through schools. There they were successful. They were judged against others and deemed superior. They were praised and rewarded, the teacher’s favorites. How could they fail to see themselves as superior? Daily, they experienced differences in facility with ideas, in quick-wittedness. The schools told them, and showed them, they were better.

The schools, too, exhibited and thereby taught the principle of reward in accordance with (intellectual) merit. To the intellectually meritorious went the praise, the teacher’s smiles, and the highest grades. In the currency the schools had to offer, the smartest constituted the upper class. Though not part of the official curricula, in the schools the intellectuals learned the lessons of their own greater value in comparison with the others, and of how this greater value entitled them to greater rewards.

The wider market society, however, taught a different lesson. There the greatest rewards did not go to the verbally brightest. There the intellectual skills were not most highly valued. Schooled in the lesson that they were most valuable, the most deserving of reward, the most entitled to reward, how could the intellectuals, by and large, fail to resent the capitalist society which deprived them of the just deserts to which their superiority “entitled” them? Is it surprising that what the schooled intellectuals felt for capitalist society was a deep and sullen animus that, although clothed with various publicly appropriate reasons, continued even when those particular reasons were shown to be inadequate?

In saying that intellectuals feel entitled to the highest rewards the general society can offer (wealth, status, etc.), I do not mean that intellectuals hold these rewards to be the highest goods. Perhaps they value more the intrinsic rewards of intellectual activity or the esteem of the ages. Nevertheless, they also feel entitled to the highest appreciation from the general society, to the most and best it has to offer, paltry though that may be. I don’t mean to emphasize especially the rewards that find their way into the intellectuals’ pockets or even reach them personally. Identifying themselves as intellectuals, they can resent the fact that intellectual activity is not most highly valued and rewarded.

The intellectual wants the whole society to be a school writ large, to be like the environment where he did so well and was so well appreciated. By incorporating standards of reward that are different from the wider society, the schools guarantee that some will experience downward mobility later. Those at the top of the school’s hierarchy will feel entitled to a top position, not only in that micro-society but in the wider one, a society whose system they will resent when it fails to treat them according to their self-prescribed wants and entitlements. The school system thereby produces anti-capitalist feeling among intellectuals. Rather, it produces anti-capitalist feeling among verbal intellectuals. Why do the numbersmiths not develop the same attitudes as these wordsmiths? I conjecture that these quantitatively bright children, although they get good grades on the relevant examinations, do not receive the same face-to-face attention and approval from the teachers as do the verbally bright children. It is the verbal skills that bring these personal rewards from the teacher, and apparently it is these rewards that especially shape the sense of entitlement.

Central Planning in the Classroom

There is a further point to be added. The (future) wordsmith intellectuals are successful within the formal, official social system of the schools, wherein the relevant rewards are distributed by the central authority of the teacher. The schools contain another informal social system within classrooms, hallways, and schoolyards, wherein rewards are distributed not by central direction but spontaneously at the pleasure and whim of schoolmates. Here the intellectuals do less well.

It is not surprising, therefore, that distribution of goods and rewards via a centrally organized distributional mechanism later strikes intellectuals as more appropriate than the “anarchy and chaos” of the marketplace. For distribution in a centrally planned socialist society stands to distribution in a capitalist society as distribution by the teacher stands to distribution by the schoolyard and hallway.

Our explanation does not postulate that (future) intellectuals constitute a majority even of the academic upper class of the school. This group may consist mostly of those with substantial (but not overwhelming) bookish skills along with social grace, strong motivation to please, friendliness, winning ways, and an ability to play by (and to seem to be following) the rules. Such pupils, too, will be highly regarded and rewarded by the teacher, and they will do extremely well in the wider society, as well. (And do well within the informal social system of the school. So they will not especially accept the norms of the school’s formal system.) Our explanation hypothesizes that (future) intellectuals are disproportionately represented in that portion of the schools’ (official) upper class that will experience relative downward mobility. Or, rather, in the group that predicts for itself a declining future. The animus will arise before the move into the wider world and the experience of an actual decline in status, at the point when the clever pupil realizes he (probably) will fare less well in the wider society than in his current school situation. This unintended consequence of the school system, the anti-capitalist animus of intellectuals, is, of course, reinforced when pupils read or are taught by intellectuals who present those very anti-capitalist attitudes.

No doubt, some wordsmith intellectuals were cantankerous and questioning pupils and so were disapproved of by their teachers. Did they too learn the lesson that the best should get the highest rewards and think, despite their teachers, that they themselves were best and so start with an early resentment against the school system’s distribution? Clearly, on this and the other issues discussed here, we need data on the school experiences of future wordsmith intellectuals to refine and test our hypotheses.

Stated as a general point, it is hardly contestable that the norms within schools will affect the normative beliefs of people after they leave the schools. The schools, after all, are the major non-familial society that children learn to operate in, and hence schooling constitutes their preparation for the larger non-familial society. It is not surprising that those successful by the norms of a school system should resent a society, adhering to different norms, which does not grant them the same success. Nor, when those are the very ones who go on to shape a society’s self-image, its evaluation of itself, is it surprising when the society’s verbally responsive portion turns against it. If you were designing a society, you would not seek to design it so that the wordsmiths, with all their influence, were schooled into animus against the norms of the society.

Our explanation of the disproportionate anti-capitalism of intellectuals is based upon a very plausible sociological generalization.

In a society where one extra-familial system or institution, the first young people enter, distributes rewards, those who do the very best therein will tend to internalize the norms of this institution and expect the wider society to operate in accordance with these norms; they will feel entitled to distributive shares in accordance with these norms or (at least) to a relative position equal to the one these norms would yield. Moreover, those constituting the upper class within the hierarchy of this first extra-familial institution who then experience (or foresee experiencing) movement to a lower relative position in the wider society will, because of their feeling of frustrated entitlement, tend to oppose the wider social system and feel animus toward its norms.

Notice that this is not a deterministic law. Not all those who experience downward social mobility will turn against the system. Such downward mobility, though, is a factor which tends to produce effects in that direction, and so will show itself in differing proportions at the aggregate level. We might distinguish ways an upper class can move down: it can get less than another group or (while no group moves above it) it can tie, failing to get more than those previously deemed lower. It is the first type of downward mobility which especially rankles and outrages; the second type is far more tolerable. Many intellectuals (say they) favor equality while only a small number call for an aristocracy of intellectuals. Our hypothesis speaks of the first type of downward mobility as especially productive of resentment and animus.

The school system imparts and rewards only some skills relevant to later success (it is, after all, a specialized institution) so its reward system will differ from that of the wider society. This guarantees that some, in moving to the wider society, will experience downward social mobility and its attendant consequences. Earlier I said that intellectuals want the society to be the schools writ large. Now we see that the resentment due to a frustrated sense of entitlement stems from the fact that the schools (as a specialized first extra-familial social system) are not the society writ small.

Our explanation now seems to predict the (disproportionate) resentment of schooled intellectuals against their society whatever its nature, whether capitalist or communist. (Intellectuals are disproportionately opposed to capitalism as compared with other groups of similar socioeconomic status within capitalist society. It is another question whether they are disproportionately opposed as compared with the degree of opposition of intellectuals in other societies to those societies.) Clearly, then, data about the attitudes of intellectuals within communist countries toward apparatchiks would be relevant; will those intellectuals feel animus toward that system?

Our hypothesis needs to be refined so that it does not apply (or apply as strongly) to every society. Must the school systems in every society inevitably produce anti-societal animus in the intellectuals who do not receive that society’s highest rewards? Probably not. A capitalist society is peculiar in that it seems to announce that it is open and responsive only to talent, individual initiative, personal merit. Growing up in an inherited caste or feudal society creates no expectation that reward will or should be in accordance with personal value. Despite the created expectation, a capitalist society rewards people only insofar as they serve the market-expressed desires of others; it rewards in accordance with economic contribution, not in accordance with personal value. However, it comes close enough to rewarding in accordance with value–value and contribution will very often be intermingled–so as to nurture the expectation produced by the schools. The ethos of the wider society is close enough to that of the schools so that the nearness creates resentment. Capitalist societies reward individual accomplishment or announce they do, and so they leave the intellectual, who considers himself most accomplished, particularly bitter.

Another factor, I think, plays a role. Schools will tend to produce such anti-capitalist attitudes the more they are attended together by a diversity of people. When almost all of those who will be economically successful are attending separate schools, the intellectuals will not have acquired that attitude of being superior to them. But even if many children of the upper class attend separate schools, an open society will have other schools that also include many who will become economically successful as entrepreneurs, and the intellectuals later will resentfully remember how superior they were academically to their peers who advanced more richly and powerfully. The openness of the society has another consequence, as well. The pupils, future wordsmiths and others, will not know how they will fare in the future. They can hope for anything. A society closed to advancement destroys those hopes early. In an open capitalist society, the pupils are not resigned early to limits on their advancement and social mobility, the society seems to announce that the most capable and valuable will rise to the very top, their schools have already given the academically most gifted the message that they are most valuable and deserving of the greatest rewards, and later these very pupils with the highest encouragement and hopes see others of their peers, whom they know and saw to be less meritorious, rising higher than they themselves, taking the foremost rewards to which they themselves felt themselves entitled. Is it any wonder they bear that society an animus?

Some Further Hypotheses

We have refined the hypothesis somewhat. It is not simply formal schools but formal schooling in a specified social context that produces anti-capitalist animus in (wordsmith) intellectuals. No doubt, the hypothesis requires further refining. But enough. It is time to turn the hypothesis over to the social scientists, to take it from armchair speculations in the study and give it to those who will immerse themselves in more particular facts and data. We can point, however, to some areas where our hypothesis might yield testable consequences and predictions. First, one might predict that the more meritocratic a country’s school system, the more likely its intellectuals are to be on the left. (Consider France.) Second, those intellectuals who were “late bloomers” in school would not have developed the same sense of entitlement to the very highest rewards; therefore, a lower percentage of the late-bloomer intellectuals will be anti-capitalist than of the early bloomers. Third, we limited our hypothesis to those societies (unlike Indian caste society) where the successful student plausibly could expect further comparable success in the wider society. In Western society, women have not heretofore plausibly held such expectations, so we would not expect the female students who constituted part of the academic upper class yet later underwent downward mobility to show the same anti-capitalist animus as male intellectuals. We might predict, then, that the more a society is known to move toward equality in occupational opportunity between women and men, the more its female intellectuals will exhibit the same disproportionate anti-capitalism its male intellectuals show.

Some readers may doubt this explanation of the anti-capitalism of intellectuals. Be this as it may, I think that an important phenomenon has been identified. The sociological generalization we have stated is intuitively compelling; something like it must be true. Some important effect therefore must be produced in that portion of the school’s upper class that experiences downward social mobility, some antagonism to the wider society must get generated. If that effect is not the disproportionate opposition of the intellectuals, then what is it? We started with a puzzling phenomenon in need of an explanation. We have found, I think, an explanatory factor that (once stated) is so obvious that we must believe it explains some real phenomenon.

This article originally appeared in the January/February 1998 edition of Cato Policy Report.

Written by Robert Nozick; Chinese translation by Qiu Feng.

Transistor Wars

As long as transistors continue to shrink for the next 30 years, I won’t be out of work before I retire. Somehow I have a feeling that I won’t see the end of Moore’s law in my lifetime, since there is always some new innovation around the corner.

Rival architectures face off in a bid to keep Moore’s Law alive
By Khaled Ahmed, Klaus Schuegraf, IEEE Spectrum, November 2011

In May, Intel announced the most dramatic change to the architecture of the transistor since the device was invented. The company will henceforth build its transistors in three dimensions, a shift that—if all goes well—should add at least a half dozen years to the life of Moore’s Law, the biennial doubling in transistor density that has driven the chip industry for decades.

But Intel’s big announcement was notable for another reason: It signaled the start of a growing schism among chipmakers. Despite all the great advantages of going 3-D, a simpler alternative design is also nearing production. Although it’s not yet clear which device architecture will win out, what is certain is that the complementary metal-oxide semiconductor (CMOS) field-effect transistor (FET)—the centerpiece of computer processors since the 1980s—will get an entirely new look. And the change is more than cosmetic; these designs will help open up a new world of low-power mobile electronics with fantastic capabilities.

There’s a simple reason everyone’s contemplating a redesign: The smaller you make a CMOS transistor, the more current it leaks when it’s switched off. This leakage arises from the device’s geometry. A standard CMOS transistor has four parts: a source, a drain, a channel that connects the two, and a gate on top to control the channel. When the gate is turned on, it creates a conductive path that allows electrons or holes to move from the source to the drain. When the gate is switched off, this conductive path is supposed to disappear. But as engineers have shrunk the distance between the source and drain, the gate’s control over the transistor channel has gotten weaker. Current sneaks through the part of the channel that’s farthest from the gate and also through the underlying silicon substrate. The only way to cut down on leaks is to find a way to remove all that excess silicon.

Over the past few decades, two very different solutions to this problem have emerged. One approach is to make the silicon channel of the traditional planar transistor as thin as possible, by eliminating the silicon substrate and instead building the channel on top of insulating material. The other scheme is to turn this channel on its side, popping it out of the transistor plane to create a 3-D device. Each approach comes with its own set of merits and manufacturing challenges, and chipmakers are now working out the best way to catch up with Intel’s leap forward. The next few years will see dramatic upheaval in an already fast-moving industry.

Change is nothing new to CMOS transistors, but the pace has been accelerating. When the first CMOS devices entered mass production in the 1980s, the path to further miniaturization seemed straightforward. Back in 1974, engineers at the IBM T. J. Watson Research Center in Yorktown Heights, N.Y., led by Robert Dennard, had already sketched out the ideal progression. The team described how steadily reducing gate length, gate insulator thickness, and other feature dimensions could simultaneously improve switching speed, power consumption, and transistor density.

But this set of rules, known as Dennard’s scaling law, hasn’t been followed for some time. During the 1990s boom in personal computing, the demand for faster microprocessors drove down transistor gate length faster than Dennard’s law called for. Shrinking transistors boosted speeds, but engineers found that as they did so, they couldn’t reduce the voltage across the devices to improve power consumption. So much current was being lost when the transistor was off that a strong voltage—applied on the drain to pull charge carriers through the channel—was needed to make sure the device switched as quickly as possible to avoid losing power in the switching process.

By 2001, the leakage power was fast approaching the amount of power needed to switch a transistor out of its “off” state. This was a warning sign for the industry. The trend promised chips that would consume the same amount of energy regardless of whether they were in use or not. Chipmakers needed to find new ways to boost transistor density. In 2003, as the length of transistor channels dropped to 45 nanometers, Intel debuted chips bearing devices made with strain engineering. These transistors boasted silicon channels that had been physically squeezed or pulled to boost speed and reduce the power lost due to resistance. By the next “node”—industry lingo for a transistor density milestone—companies had stopped shrinking transistor dimensions and instead began just squeezing transistors closer together. And in 2007, Intel bought Moore’s Law a few more years by introducing the first big materials change, replacing the ever-thinning silicon oxide insulator that sits between a transistor’s gate and channel with hafnium oxide.

This better-insulating material helped stanch a main source of leakage current—the tunneling of electrons between the gate and the channel. But leakage from the source to the drain was still a huge problem. As companies faced the prospect of creating even denser chips with features approaching 20 nm, it became increasingly clear that squeezing together traditional planar transistors or shrinking them even further would be impossible with existing technology. Swapping in a new insulator or adding more strain wouldn’t cut it. Driving down power consumption and saving Moore’s Law would require a fundamental change to transistor structure—a new design that could maximize the gate’s control over the channel.

Fortunately, over the course of more than 20 years of research, transistor designers have found two very powerful ways to boost the effectiveness of the transistor gate. As the gate itself can’t get much stronger, these schemes focus on making the channel easier to control. One approach replaces the bulk silicon of a normal transistor with a thin layer of silicon built on an insulating layer, creating a device that is often called an ultrathin body silicon-on-insulator, or UTB SOI, also known as a fully depleted SOI.

A second strategy turns the thin silicon channel by 90 degrees, creating a “fin” that juts out of the plane of the device. The transistor gate is then draped over the top of the channel like an upside-down U, bracketing it on three sides and giving the gate almost complete control of the channel. While conventional CMOS devices are largely flat, save for a thin insulating layer and the gate, these FinFETs—or Tri-Gate transistors, as Intel has named its three-sided devices—are decidedly 3-D. All the main components of the transistor—source, drain, channel, and gate—sit on top of the device’s substrate.

Both schemes offer the same basic advantage: By thinning the channel, they bring the gate closer to the drain. When a transistor is off, the drain’s electric field can take one of two paths inside the channel to zero-voltage destinations. It can propagate all the way across the channel to the source, or it can terminate at the transistor’s gate. If the field gets to the source, it can lower the energy barrier that keeps charge carriers in the source from entering the channel. But if the gate is close enough to the drain, it can act as a lightning rod, diverting field lines away from the source. This cuts down on leakage, and it also means that field lines don’t penetrate very far into the channel, dissipating even more energy by tugging on any stray carriers.

The first 3-D transistor was sketched out by Digh Hisamoto and others at Hitachi, who presented the design for a device dubbed a Delta at a conference in 1989. The UTB SOI’s roots extend even further back; they are a natural extension of early SOI channel research, which began in the 1980s when researchers started experimenting with transistors built with 200-nm thick, undoped silicon channels on insulating material.

But the promise of both of these thin-channel approaches wasn’t fully appreciated until 1996, when Chenming Hu and his colleagues at the University of California, Berkeley, began an ambitious study, funded by the U.S. Defense Advanced Research Projects Agency, to see how far these designs could go. At the time, the industry was producing 250-nm transistors, and no one knew whether the devices could be scaled below 100 nm. Hu’s team showed that the two alternate architectures could solve the power consumption problems of planar CMOS transistors and that they could operate with gate lengths of 20 nm—and later, even less.

The FinFET and the UTB SOI both offer big gains in power consumption. Logic chip designs typically require that a transistor in its on state draw at least 10 000 times as much current as the device leaks in its off state. For 30-nm transistors—about the size that most chipmakers are currently aiming for—this design spec means devices should leak no more than a few nanoamperes of current when they’re off. While 30-nm planar CMOS devices leak about 50 times that amount, both thin-channel designs hit the target quite easily.

But the two architectures aren’t entirely equal. To get the best performance, the channel of a UTB SOI should be no more than about one-fourth as thick as the length of the gate. Because a FinFET’s gate brackets the channel on three sides, the 3-D transistors can achieve the same level of control with a channel—or fin—that’s as much as half as thick as the length of the transistor gate.

This bigger channel volume gives FinFETs a distinct advantage when it comes to current-carrying capacity. The best R&D results suggest that a 25-nm FinFET can carry about 25 percent more current than a UTB SOI. This current boost doesn’t matter much if you have only a single transistor, but in an IC, it means you can charge capacitors 25 percent faster, making for much speedier chips. Faster chips obviously mean a lot to a microprocessor manufacturer like Intel. The question is whether other chipmakers will find the faster speeds meaningful enough to switch to FinFETs, a prospect that requires a big up-front investment and an entirely new set of manufacturing challenges.

The single biggest hurdle in making FinFETs is manufacturing the fins so that they’re both narrow and uniform. For a 20-nm transistor—roughly the same size as the one that Intel is putting into production—the fin must be about 10 nm wide and 25 nm high; it must also deviate by no more than half a nanometer—just a few atomic layers—in any given direction. Over the course of production, manufacturers must control all sources of variation, limiting it to no more than 1 nm in a 300-millimeter-wide wafer.

This precision is needed not only to manufacture the fin; it must also be maintained for the rest of the manufacturing process, including thermal treatment, doping, and the multiple film deposition and removal steps needed to build the transistor’s gate insulator and gate. As an added complication, the gate oxide and the gate must be deposited so that they follow the contours of the fin. Any process that damages the fin could affect how the device performs. The resultant variation in device quality would force engineers to operate circuits at a higher power than they’re designed for, eliminating any gains in power efficiency.

The unusual geometry of the FinFET also poses challenges for doping, which isn’t required but can help cut down on leakage current. FinFET channels need two kinds of dopants: One is deposited underneath the gate and the other into the parts of the channel that extend on either side of the gate, helping mate the channel to the source and drain. Manufacturers currently dope channels by shooting ions straight down into the material. But that approach won’t work for FinFETs. The devices need dopants to be distributed evenly through the top of the fin and the side walls; any unevenness in concentration will cause a pileup of charges, boosting the device’s resistance and wasting power.

Doping will get only more difficult in the future. As FinFETs shrink, they’ll get so close together that they will cast “shadows” on one another, preventing dopants from permeating every part of every fin. At Applied Materials’ Silicon Systems Group, we’ve been working on one possible fix: immersing fins in plasma so that dopants can migrate directly into the material, no matter what its shape is.

Because UTB SOI devices are quite similar to conventional planar CMOS transistors, they are easier to manufacture than FinFETs. Most existing designs and manufacturing techniques will work just as well with the new thin-silicon transistors as they do with the traditional variety. And in some ways, UTB SOIs are easier to produce than present-day transistors. The devices don’t need doped channels, a simplification that can save planar CMOS manufacturers some 20 to 30 steps out of roughly 400 in the wafer production process.

But the UTB SOI comes with its own challenges, chiefly the thin channel. The requirement that UTB SOI channels be half as thick as comparable FinFET fins makes any variations in thickness even more critical for these devices. A firm called Soitec, headquartered in Bernin, France, which has been leading the charge in manufacturing ultrathin silicon-on-insulator wafers, is currently demonstrating 10-nm-thick silicon layers that vary by just 0.5 nm in thickness. That’s an impressive achievement for wafers that measure 300 mm across, but it will need to be improved as transistors shrink. And it’s not clear how precise Soitec’s technique—which involves splitting a wafer to create an ultrathin silicon layer—can ultimately be made.

Another key stumbling block for UTB SOI adoption is the supply chain. At the moment, there are few potential providers of ultrathin SOI wafers, which could ultimately make manufacturers of UTB SOI chips dependent on a handful of sources. Intel’s Mark Bohr says the hard-to-find wafers could add 10 percent to the cost of a finished wafer, compared to 2 to 3 percent for wafers bearing 3-D transistors (an estimate from the SOI Industry Consortium suggests that finished UTB SOI wafers will actually be less expensive).

Going forward, we expect that chipmakers will split into two camps. Those interested in the speediest transistors will move toward FinFETs. Others who don’t want to invest as much in a switch will find UTB SOIs more attractive.

UTB SOI transistors have an additional feature that makes them particularly appealing for low-power applications: A small voltage can easily be applied to the very bottom of a chip full of UTB SOI devices. This small bias voltage alters the channel properties, reducing the electrical barrier that stops current flowing from the source to the drain. As a result, less voltage needs to be applied to the transistor gates to turn the devices on. When the transistors aren’t needed, this bias voltage can be removed, which restores the electrical barrier, reducing the amount of current that leaks through the device when it’s off. As Thomas Skotnicki of STMicroelectronics has long argued, this sort of dynamic switching saves power, making the devices particularly attractive for chips in smartphones and other mobile gadgets. Skotnicki says the company expects to release its first UTB SOI chip, which will use 28-nm transistors to power a mobile multimedia processor, by the end of 2012.

That said, few companies have committed to one technology or the other. STMicroelectronics—as well as firms such as GlobalFoundries and Samsung—is part of the International Semiconductor Development Alliance, which supports and benefits from device research at IBM and is investing in both FinFETs and UTB SOIs. Exactly how the industry will split up and which design will come to dominate will depend on decisions made by the biggest foundries and how quickly standards are developed. Reports suggest that Taiwan Semiconductor Manufacturing Co., which dominates bespoke manufacturing in the chip industry, will begin making 14-nm FinFETs in 2015, but it’s not clear whether the company will also support UTB SOI production. Switching to FinFET production requires a substantial investment, and whichever way TSMC swings, it will put pressure on other manufacturers, such as GlobalFoundries, United Microelectronics Corp., and newcomers to the foundry business such as Samsung, to choose a direction.

Also still unclear is how far each technology can be extended. Right now it looks like both FinFETs and UTB SOIs should be able to cover the next three generations of transistors. But UTB SOI transistors may not evolve much below 7 nm, because at that point, their gate oxide would need an effective thickness of 0.7 nm, which would require significant materials innovation. FinFETs may have a similar limit. In 2006, a team at the Korea Advanced Institute of Science and Technology used electron-beam lithography to build 3-nm FinFETs. But crafting a single device isn’t quite the same as packing millions together to make a microprocessor; when transistors are that close to each other, parasitic capacitances and resistances will draw current away from each switch. Some projections suggest that when FinFETs are scaled down to 7 nm or so, they will perform no better than planar devices.

Meanwhile, researchers are already trying to figure out what devices might succeed FinFETs and UTB SOIs, to continue Moore’s Law scaling. One possibility is to extrapolate the FinFET concept by using a nanowire device that is completely surrounded by a cylindrical gate. Another idea is to exploit quantum tunneling to create switches that can’t leak current when they’re not switched on. We don’t know what will come next. The emergence of FinFETs and UTB SOIs clearly shows that the days of simple transistor scaling are long behind us. But the switch to these new designs also offers a clear demonstration of how creative thinking and a good amount of competition can help us push Moore’s Law to its ultimate limit—whatever that might be.

RIP Dennis Ritchie (1941 – 2011)

While the world is mourning the death of Steve Jobs, it lost another tech pioneer: Dennis Ritchie, the inventor of C and co-creator of UNIX. To many geeks, Dennis' role in the computer revolution is way more important than Steve's.

Dennis Ritchie, the Bell Labs computer scientist who created the immensely popular C programming language and who was instrumental in the construction of the well-known Unix operating system, died last weekend after a protracted illness. Ritchie was 70 years old.

Ritchie, who was born in a suburb of New York City, graduated from Harvard and later went on to earn a doctorate from the same institution while working at Bell Labs, which then belonged to AT&T (and is now part of Alcatel-Lucent). There he joined forces with Ken Thompson and other Bell Labs colleagues to create the Unix operating system. Although early Unix evolved without the naming of progressively advanced versions, the birth of this operating system can be marked by the first edition of the Unix programmers' manual, which was issued in November of 1971, almost 40 years ago.

Although AT&T had been engaged in the development of an advanced computer operating system called Multics in the late 1960s, corporate managers abandoned those efforts, making Thompson and Ritchie's work on Unix that much more impressive. These researchers threw themselves into the development of Unix despite, rather than in response to, their employer's leanings at the time. We should be thankful that Ritchie and his colleagues took such initiative and that they had the foresight and talent to build a system so simple, elegant, and portable that it survives today. Indeed, Unix has spawned dozens if not hundreds of direct derivatives and Unix-like operating systems, including Linux, which can now be found running everything from smartphones to supercomputers. Unix also underlies the current Macintosh operating system, OS X.

Ritchie’s work creating the C programming language took place at the same time and is closely tied to the early development of Unix. By 1973, Ritchie was able to rewrite the core of Unix, which had been programmed in assembly language, using C. In 1978, Brian Kernighan (another Bell Labs colleague) and Ritchie published The C Programming Language, which essentially defined the language (“K&R C”) and remains a classic on the C language and on good programming practice in general. For example, The C Programming Language established the widespread tradition of beginning instruction with an illustrative program that displays the words, “Hello, world.”

For their seminal work on Unix, Ritchie and Thompson received in 1983 the Association for Computing Machinery's Turing Award. In 1990, the IEEE awarded Ritchie and Thompson the Richard W. Hamming Medal. Ritchie and Thompson's work on Unix and C was also recognized at the highest level when President Bill Clinton awarded them the 1998 National Medal of Technology. And in May of this year, Ritchie and Thompson received the 2011 Japan Prize (which was also awarded to Tadamitsu Kishimoto and Toshio Hirano, who were honored for the discovery of interleukin-6).

Spectrum attended the Japan Prize awards ceremony and had an opportunity to ask Ritchie to reflect on some of the high points of his impressive career. During that interview, Ritchie admitted that Unix is far from being without flaws, although he didn’t attempt to enumerate them. “There are lots of little things—I don’t even want to think about going down the list,” he quipped. In December, Spectrum will be publishing a feature-length history of the development of the Unix operating system.

Rob Pike, a former member of the Unix team at Bell Labs, informed the world of Ritchie's death last night on Google+. There he wrote, "He was a quiet and mostly private man, but he was also my friend, colleague, and collaborator, and the world has lost a truly great mind." A charming illustration of some of those qualities comes from David Madeo, who responded to Pike's message by sharing this story:

I met Dennis Ritchie at a Usenix without knowing it. He had traded nametags with someone so I spent 30 minutes thinking “this guy really knows what he’s talking about.” Eventually, the other guy walked up and said, “I’m tired of dealing with your groupies” and switched the nametags back. I looked back down to realize who he was, the guy who not only wrote the book I used to learn C in freshman year, but invented the language in the first place. He apologized and said something along the lines that it was easier for him to have good conversations that way.