
Is Math Still Relevant?

Is math still relevant? That depends on your metaphysical view of the world. If reality is indeed a manifestation of mathematics, as some metaphysical theories suggest, and we live among endless possibilities of equations, then math is the only way to understand the Truth.

By Robert W. Lucky, IEEE Spectrum, March 2012
The queen of the sciences may someday lose its royal status

Long ago, when I was a freshman in engineering school, there was a required course in mechanical drawing. “You had better learn this skill,” the instructor said, “because all engineers start their careers at the drafting table.”

This was an ominous beginning to my education, but as it turned out, he was wrong. Neither I nor, I suspect, any of my classmates began our careers at the drafting table.

These days, engineers aren’t routinely taught drawing, but they spend a lot of time learning another skill that may be similarly unnecessary: mathematics. I confess this thought hadn’t occurred to me until recently, when a friend who teaches at a leading university made an off-hand comment. “Is it possible,” he suggested, “that the era of mathematics in electrical engineering is coming to an end?”

When I asked him about this disturbing idea, he said that he had only been trying to be provocative and that his graduate students were now writing theses that were more mathematical than ever. I felt reassured that the mathematical basis of engineering is strong. But still, I wonder to what extent—and for how long—today’s undergraduate engineering students will be using classical mathematics as their careers unfold.

There are several trends that might suggest a diminishing role for mathematics in engineering work. First, there is the rise of software engineering as a separate discipline. It just doesn’t take as much math to write an operating system as it does to design a printed circuit board. Programming is rigidly structured and, at the same time, an evolving art form—neither of which is especially amenable to mathematical analysis.

Another trend veering us away from classical math is the increasing dependence on programs such as Matlab and Maple. The pencil-and-paper calculations with which we evaluated the relative performance of variations in design are now more easily made by simulation software packages—which, with their vast libraries of prepackaged functions and data, are often more powerful. A purist might ask: Is using Matlab doing math? And of course, the answer is that sometimes it is, and sometimes it isn’t.

A third trend is the growing importance of a class of problems termed “wicked,” which involve social, political, economic, and undefined or unknown issues that make the application of mathematics very difficult. The world is seemingly full of such frustrating but important problems.

These trends notwithstanding, we should recognize the role of mathematics in the discovery of fundamental properties and truth. Maxwell’s equations—which are inscribed in marble in the foyer of the National Academy of Engineering—foretold the possibility of radio. It took about half a century for those radios to reach Shannon’s limit—described by his equation for channel capacity—but at least we knew where we were headed.
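For reference, the channel-capacity equation alluded to here is Shannon's 1948 formula:

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

where C is the maximum error-free data rate in bits per second, B is the channel bandwidth in hertz, and S/N is the signal-to-noise ratio. Decades of radio engineering amounted to closing the gap between practical systems and this single bound.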

Theoretical physicists have explained through math the workings of the universe and even predicted the existence of previously unknown fundamental particles. The iconic image I carry in my mind is of Einstein at a blackboard that’s covered with tensor-filled equations. It is remarkable that one person scribbling math can uncover such secrets. It is as if the universe itself understands and obeys the mathematics that we humans invented.

There have been many philosophical discussions through the years about this wonderful power of math. In a famous 1960 paper entitled “The Unreasonable Effectiveness of Mathematics in the Natural Sciences,” the physicist Eugene Wigner wrote, “The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift [that] we neither understand nor deserve.” In a 1980 paper with a similar title, the computer science pioneer Richard Hamming tried to answer the question, “How can it be that simple mathematics suffices to predict so much?”

This “unreasonable effectiveness” of mathematics will continue to be at the heart of engineering, but perhaps the way we use math will change. Still, it’s hard to imagine Einstein running simulations on his laptop.

The Strange Birth and Long Life of Unix

Who said history is boring? This is a very interesting history of the world’s most important operating system.

The classic operating system turns 40, and its progeny abound
By Warren Toomey, IEEE Spectrum, December 2011

They say that when one door closes on you, another opens. People generally offer this bit of wisdom just to lend some solace after a misfortune. But sometimes it’s actually true. It certainly was for Ken Thompson and the late Dennis Ritchie, two of the greats of 20th-century information technology, when they created the Unix operating system, now considered one of the most inspiring and influential pieces of software ever written.

A door had slammed shut for Thompson and Ritchie in March of 1969, when their employer, the American Telephone & Telegraph Co., withdrew from a collaborative project with the Massachusetts Institute of Technology and General Electric to create an interactive time-sharing system called Multics, which stood for “Multiplexed Information and Computing Service.” Time-sharing, a technique that lets multiple people use a single computer simultaneously, had been invented only a decade earlier. Multics was to combine time-sharing with other technological advances of the era, allowing users to phone a computer from remote terminals and then read e-mail, edit documents, run calculations, and so forth. It was to be a great leap forward from the way computers were mostly being used, with people tediously preparing and submitting batch jobs on punch cards to be run one by one.

Over five years, AT&T invested millions in the Multics project, purchasing a GE-645 mainframe computer and dedicating to the effort many of the top researchers at the company’s renowned Bell Telephone Laboratories—including Thompson and Ritchie, Joseph F. Ossanna, Stuart Feldman, M. Douglas McIlroy, and the late Robert Morris. But the new system was too ambitious, and it fell troublingly behind schedule. In the end, AT&T’s corporate leaders decided to pull the plug.

After AT&T’s departure from the Multics project, managers at Bell Labs, in Murray Hill, N.J., became reluctant to allow any further work on computer operating systems, leaving some researchers there very frustrated. Although Multics hadn’t met many of its objectives, it had, as Ritchie later recalled, provided them with a “convenient interactive computing service, a good environment in which to do programming, [and] a system around which a fellowship could form.” Suddenly, it was gone.

With heavy hearts, the researchers returned to using their old batch system. At such an inauspicious moment, with management dead set against the idea, it surely would have seemed foolhardy to continue designing computer operating systems. But that’s exactly what Thompson, Ritchie, and many of their Bell Labs colleagues did. Now, some 40 years later, we should be thankful that these programmers ignored their bosses and continued their labor of love, which gave the world Unix, one of the greatest computer operating systems of all time.
Man men: Thompson (ken) and Ritchie (dmr) authored the first Unix manual or “man” pages, one of which is shown here. The first edition of the manual was released in November 1971.

The rogue project began in earnest when Thompson, Ritchie, and a third Bell Labs colleague, Rudd Canaday, began to sketch out on paper the design for a file system. Thompson then wrote the basics of a new operating system for the lab’s GE-645 mainframe. But with the Multics project ended, so too was the need for the GE-645. Thompson realized that any further programming he did on it was likely to go nowhere, so he dropped the effort.

Thompson had passed some of his time after the demise of Multics writing a computer game called Space Travel, which simulated all the major bodies in the solar system along with a spaceship that could fly around them. Written for the GE-645, Space Travel was clunky to play—and expensive: roughly US $75 a game for the CPU time. Hunting around, Thompson came across a dusty PDP-7, a minicomputer built by Digital Equipment Corp. that some of his Bell Labs colleagues had purchased earlier for a circuit-analysis project. Thompson rewrote Space Travel to run on it.

And with that little programming exercise, a second door cracked ajar. It was to swing wide open during the summer of 1969 when Thompson’s wife, Bonnie, spent a month visiting his parents to show off their newborn son. Thompson took advantage of his temporary bachelor existence to write a good chunk of what would become the Unix operating system for the discarded PDP‑7. The name Unix stems from a joke one of Thompson’s colleagues made: Because the new operating system supported only one user (Thompson), he saw it as an emasculated version of Multics and dubbed it “Un-multiplexed Information and Computing Service,” or Unics. The name later morphed into Unix.

Initially, Thompson used the GE-645 to compose and compile the software, which he then downloaded to the PDP‑7. But he soon weaned himself from the mainframe, and by the end of 1969 he was able to write operating-system code on the PDP-7 itself. That was a step in the right direction. But Thompson and the others helping him knew that the PDP‑7, which was already obsolete, would not be able to sustain their skunkworks for long. They also knew that the lab’s management wasn’t about to allow any more research on operating systems.

So Thompson and Ritchie got creative. They formulated a proposal to their bosses to buy one of DEC’s newer minicomputers, a PDP-11, but couched the request in especially palatable terms. They said they were aiming to create tools for editing and formatting text, what you might call a word-processing system today. The fact that they would also have to write an operating system for the new machine to support the editor and text formatter was almost a footnote.

Management took the bait, and an order for a PDP-11 was placed in May 1970. The machine itself arrived soon after, although the disk drives for it took more than six months to appear. During the interim, Thompson, Ritchie, and others continued to develop Unix on the PDP-7. After the PDP-11’s disks were installed, the researchers moved their increasingly complex operating system over to the new machine. Next they brought over the roff text formatter written by Ossanna and derived from the runoff program, which had been used in an earlier time-sharing system.

Unix was put to its first real-world test within Bell Labs when three typists from AT&T’s patents department began using it to write, edit, and format patent applications. It was a hit. The patent department adopted the system wholeheartedly, which gave the researchers enough credibility to convince management to purchase another machine—a newer and more powerful PDP-11 model—allowing their stealth work on Unix to continue.

During its earliest days, Unix evolved constantly, so the idea of issuing named versions or releases seemed inappropriate. But the researchers did issue new editions of the programmer’s manual periodically, and the early Unix systems were named after each such edition. The first edition of the manual was completed in November 1971.

So what did the first edition of Unix offer that made it so great? For one thing, the system provided a hierarchical file system, which allowed something we all now take for granted: Files could be placed in directories—or equivalently, folders—that in turn could be put within other directories. Each file could contain no more than 64 kilobytes, and its name could be no more than six characters long. These restrictions seem awkwardly limiting now, but at the time they appeared perfectly adequate.

Although Unix was ostensibly created for word processing, the only editor available in 1971 was the line-oriented ed. Today, ed is still the only editor guaranteed to be present on all Unix systems. Apart from the text-processing and general system applications, the first edition of Unix included games such as blackjack, chess, and tic-tac-toe. For the system administrator, there were tools to dump and restore disk images to magnetic tape, to read and write paper tapes, and to create, check, mount, and unmount removable disk packs.

Most important, the system offered an interactive environment that by this time allowed time-sharing, so several people could use a single machine at once. Various programming languages were available to them, including BASIC, Fortran, the scripting of Unix commands, assembly language, and B. The last of these, a descendant of BCPL (Basic Combined Programming Language), ultimately evolved into the immensely popular C language, which Ritchie created while also working on Unix.

The first edition of Unix let programmers call 34 different low-level routines built into the operating system. It’s a testament to the system’s enduring nature that nearly all of these system calls are still available—and still heavily used—on modern Unix and Linux systems four decades on. For its time, first-edition Unix provided a remarkably powerful environment for software development. Yet it contained just 4200 lines of code at its heart and occupied a measly 16 KB of main memory when it ran.
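Several of those original routines (open, read, write, close, lseek) survive today with the same names and semantics; one easy way to see this is through Python's os module, which is a thin wrapper over them. This is an illustrative sketch of the calls in action, not anything from the first edition itself:

```python
import os
import tempfile

# open/read/write/close/lseek all date back to first-edition Unix (1971);
# Python's os module exposes them almost unchanged.
path = os.path.join(tempfile.mkdtemp(), "hello.txt")

# Create and write a file at the file-descriptor level.
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
os.write(fd, b"hello, unix\n")
os.close(fd)

# Reopen, seek past "hello, " (7 bytes), and read the rest.
fd = os.open(path, os.O_RDONLY)
os.lseek(fd, 7, os.SEEK_SET)
data = os.read(fd, 100)   # the bytes from offset 7 on: b"unix\n"
os.close(fd)

print(data.decode())
```

The same five calls, with the same argument order, appear in any C program written against the POSIX API today.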

Unix’s great influence can be traced in part to its elegant design, simplicity, portability, and serendipitous timing. But perhaps even more important was the devoted user community that soon grew up around it. And that came about only by an accident of its unique history.

The story goes like this: For years Unix remained nothing more than a Bell Labs research project, but by 1973 its authors felt the system was mature enough for them to present a paper on its design and implementation at a symposium of the Association for Computing Machinery. That paper was published in 1974 in the Communications of the ACM. Its appearance brought a flurry of requests for copies of the software.

This put AT&T in a bind. In 1956, AT&T had agreed to a U.S. government consent decree that prevented the company from selling products not directly related to telephones and telecommunications, in return for its legal monopoly status in running the country’s long-distance phone service. So Unix could not be sold as a product. Instead, AT&T released the Unix source code under license to anyone who asked, charging only a nominal fee. The critical wrinkle here was that the consent decree prevented AT&T from supporting Unix. Indeed, for many years Bell Labs researchers proudly displayed their Unix policy at conferences with a slide that read, “No advertising, no support, no bug fixes, payment in advance.”

With no other channels of support available to them, early Unix adopters banded together for mutual assistance, forming a loose network of user groups all over the world. They had the source code, which helped. And they didn’t view Unix as a standard software product, because nobody seemed to be looking after it. So these early Unix users themselves set about fixing bugs, writing new tools, and generally improving the system as they saw fit.

The Usenix user group acted as a clearinghouse for the exchange of Unix software in the United States. People could send in magnetic tapes with new software or fixes to the system and get back tapes with the software and fixes that Usenix had received from others. In Australia, the University of Sydney produced a more robust version of Unix, the Australian Unix Share Accounting Method, which could cope with larger numbers of concurrent users and offered better performance.

By the mid-1970s, the environment of sharing that had sprung up around Unix resembled the open-source movement so prevalent today. Users far and wide were enthusiastically enhancing the system, and many of their improvements were being fed back to Bell Labs for incorporation in future releases. But as Unix became more popular, AT&T’s lawyers began looking harder at what various licensees were doing with their systems.

One person who caught their eye was John Lions, a computer scientist then teaching at the University of New South Wales, in Australia. In 1977, he published what was probably the most famous computing book of the time, A Commentary on the Unix Operating System, which contained an annotated listing of the central source code for Unix.

Unix’s licensing conditions allowed for the exchange of source code, and initially, Lions’s book was sold to licensees. But by 1979, AT&T’s lawyers had clamped down on the book’s distribution and use in academic classes. The antiauthoritarian Unix community reacted as you might expect, and samizdat copies of the book spread like wildfire. Many of us have nearly unreadable nth-generation photocopies of the original book.

End runs around AT&T’s lawyers indeed became the norm—even at Bell Labs. For example, between the release of the sixth edition of Unix in 1975 and the seventh edition in 1979, Thompson collected dozens of important bug fixes to the system, coming both from within and outside of Bell Labs. He wanted these to filter out to the existing Unix user base, but the company’s lawyers felt that this would constitute a form of support and balked at their release. Nevertheless, those bug fixes soon became widely distributed through unofficial channels. For instance, Lou Katz, the founding president of Usenix, received a phone call one day telling him that if he went down to a certain spot on Mountain Avenue (where Bell Labs was located) at 2 p.m., he would find something of interest. Sure enough, Katz found a magnetic tape with the bug fixes, which rapidly found their way into the hands of countless users.

By the end of the 1970s, Unix, which had started a decade earlier as a reaction against the loss of a comfortable programming environment, was growing like a weed throughout academia and the IT industry. Unix would flower in the early 1980s before reaching the height of its popularity in the early 1990s.

For many reasons, Unix has since given way to other commercial and noncommercial systems. But its legacy, that of an elegant, well-designed, comfortable environment for software development, lives on. In recognition of their accomplishment, Thompson and Ritchie were given the Japan Prize earlier this year, adding to a collection of honors that includes the United States’ National Medal of Technology and Innovation and the Association for Computing Machinery’s Turing Award. Many other, often very personal, tributes to Ritchie and his enormous influence on computing were widely shared after his death this past October.

Unix is indeed one of the most influential operating systems ever invented. Its direct descendants now number in the hundreds. On one side of the family tree are various versions of Unix proper, which began to be commercialized in the 1980s after the Bell System monopoly was broken up, freeing AT&T from the stipulations of the 1956 consent decree. On the other side are various Unix-like operating systems derived from the version of Unix developed at the University of California, Berkeley, including the one Apple uses today on its computers, OS X. I say “Unix-like” because the developers of the Berkeley Software Distribution (BSD) Unix on which these systems were based worked hard to remove all the original AT&T code so that their software and its descendants would be freely distributable.

The effectiveness of those efforts was, however, called into question when the AT&T subsidiary Unix System Laboratories filed suit against Berkeley Software Design and the Regents of the University of California in 1992 over intellectual property rights to this software. The university in turn filed a counterclaim against AT&T for breaches to the license it provided AT&T for the use of code developed at Berkeley. The ensuing legal quagmire slowed the development of free Unix-like clones, including 386BSD, which was designed for the Intel 386 chip, the CPU then found in many IBM PCs.

Had this operating system been available at the time, Linus Torvalds says he probably wouldn’t have created Linux, an open-source Unix-like operating system he developed from scratch for PCs in the early 1990s. Linux has carried the Unix baton forward into the 21st century, powering a wide range of digital gadgets including wireless routers, televisions, desktop PCs, and Android smartphones. It even runs some supercomputers.

Although AT&T quickly settled its legal disputes with Berkeley Software Design and the University of California, legal wrangling over intellectual property claims to various parts of Unix and Linux has continued over the years, often involving byzantine corporate relations. By 2004, no fewer than five major lawsuits had been filed. Just this past August, a software company called the TSG Group (formerly known as the SCO Group) lost a bid in court to claim ownership of Unix copyrights that Novell had acquired when it purchased the Unix System Laboratories from AT&T in 1993.

As a programmer and Unix historian, I can’t help but find all this legal sparring a bit sad. From the very start, the authors and users of Unix worked as best they could to build and share, even if that meant defying authority. That outpouring of selflessness stands in sharp contrast to the greed that has driven subsequent legal battles over the ownership of Unix.

The world of computer hardware and software moves forward startlingly fast. For IT professionals, the rapid pace of change is typically a wonderful thing. But it makes us susceptible to the loss of our own history, including important lessons from the past. To address this issue in a small way, in 1995 I started a mailing list of old-time Unix aficionados. That effort morphed into the Unix Heritage Society. Our goal is not only to save the history of Unix but also to collect and curate these old systems and, where possible, bring them back to life. With help from many talented members of this society, I was able to restore much of the old Unix software to working order, including Ritchie’s first C compiler from 1972 and the first Unix system to be written in C, dating from 1973.

One holy grail that eluded us for a long time was the first edition of Unix in any form, electronic or otherwise. Then, in 2006, Al Kossow from the Computer History Museum, in Mountain View, Calif., unearthed a printed study of Unix dated 1972, which not only covered the internal workings of Unix but also included a complete assembly listing of the kernel, the main component of this operating system. This was an amazing find—like discovering an old Ford Model T collecting dust in a corner of a barn. But we didn’t just want to admire the chrome work from afar. We wanted to see the thing run again.

In 2008, Tim Newsham, an independent programmer in Hawaii, and I assembled a team of like-minded Unix enthusiasts and set out to bring this ancient system back from the dead. The work was technically arduous and often frustrating, but in the end, we had a copy of the first edition of Unix running on an emulated PDP-11/20. We sent out messages announcing our success to all those we thought would be interested. Thompson, always succinct, simply replied, “Amazing.” Indeed, his brainchild was amazing, and I’ve been happy to do what I can to make it, and the story behind it, better known.

Transistor Wars

As long as transistors continue to shrink for the next 30 years, I won’t be out of work before I retire. Somehow I have a feeling that I won’t see the end of Moore’s Law in my lifetime, since there is always some new innovation around the corner.

Rival architectures face off in a bid to keep Moore’s Law alive
By Khaled Ahmed, Klaus Schuegraf, IEEE Spectrum, November 2011

In May, Intel announced the most dramatic change to the architecture of the transistor since the device was invented. The company will henceforth build its transistors in three dimensions, a shift that—if all goes well—should add at least a half dozen years to the life of Moore’s Law, the biennial doubling in transistor density that has driven the chip industry for decades.

But Intel’s big announcement was notable for another reason: It signaled the start of a growing schism among chipmakers. Despite all the great advantages of going 3-D, a simpler alternative design is also nearing production. Although it’s not yet clear which device architecture will win out, what is certain is that the complementary metal-oxide semiconductor (CMOS) field-effect transistor (FET)—the centerpiece of computer processors since the 1980s—will get an entirely new look. And the change is more than cosmetic; these designs will help open up a new world of low-power mobile electronics with fantastic capabilities.

There’s a simple reason everyone’s contemplating a redesign: The smaller you make a CMOS transistor, the more current it leaks when it’s switched off. This leakage arises from the device’s geometry. A standard CMOS transistor has four parts: a source, a drain, a channel that connects the two, and a gate on top to control the channel. When the gate is turned on, it creates a conductive path that allows electrons or holes to move from the source to the drain. When the gate is switched off, this conductive path is supposed to disappear. But as engineers have shrunk the distance between the source and drain, the gate’s control over the transistor channel has gotten weaker. Current sneaks through the part of the channel that’s farthest from the gate and also through the underlying silicon substrate. The only way to cut down on leaks is to find a way to remove all that excess silicon.
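The article doesn't give the equation, but the off-state leakage it describes follows the standard textbook subthreshold-current relation, which makes the exponential stakes clear:

```latex
I_{\text{off}} \;\propto\; \exp\!\left(\frac{V_{GS} - V_{\text{th}}}{\,n \, kT/q\,}\right)
```

Here kT/q is about 26 mV at room temperature, and n (the subthreshold ideality factor, ideally 1) grows as the gate loses electrostatic control of the channel to the drain. Because the dependence is exponential, even a modest loss of gate control as the source and drain move closer together multiplies off-state current dramatically—which is why the thin-channel geometries described next, which restore gate control, cut leakage so sharply.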

Over the past few decades, two very different solutions to this problem have emerged. One approach is to make the silicon channel of the traditional planar transistor as thin as possible, by eliminating the silicon substrate and instead building the channel on top of insulating material. The other scheme is to turn this channel on its side, popping it out of the transistor plane to create a 3-D device. Each approach comes with its own set of merits and manufacturing challenges, and chipmakers are now working out the best way to catch up with Intel’s leap forward. The next few years will see dramatic upheaval in an already fast-moving industry.

Change is nothing new to CMOS transistors, but the pace has been accelerating. When the first CMOS devices entered mass production in the 1980s, the path to further miniaturization seemed straightforward. Back in 1974, engineers at the IBM T. J. Watson Research Center in Yorktown Heights, N.Y., led by Robert Dennard, had already sketched out the ideal progression. The team described how steadily reducing gate length, gate insulator thickness, and other feature dimensions could simultaneously improve switching speed, power consumption, and transistor density.

But this set of rules, known as Dennard’s scaling law, hasn’t been followed for some time. During the 1990s boom in personal computing, the demand for faster microprocessors drove down transistor gate length faster than Dennard’s law called for. Shrinking transistors boosted speeds, but engineers found that as they did so, they couldn’t reduce the voltage across the devices to improve power consumption. So much current was being lost when the transistor was off that a strong voltage—applied on the drain to pull charge carriers through the channel—was needed to make sure the device switched as quickly as possible to avoid losing power in the switching process.

By 2001, the leakage power was fast approaching the amount of power needed to switch a transistor out of its “off” state. This was a warning sign for the industry. The trend promised chips that would consume the same amount of energy regardless of whether they were in use or not. Chipmakers needed to find new ways to boost transistor density. In 2003, as the length of transistor channels dropped to 45 nanometers, Intel debuted chips bearing devices made with strain engineering. These transistors boasted silicon channels that had been physically squeezed or pulled to boost speed and reduce the power lost due to resistance. By the next “node”—industry lingo for a transistor density milestone—companies had stopped shrinking transistor dimensions and instead began just squeezing transistors closer together. And in 2007, Intel bought Moore’s Law a few more years by introducing the first big materials change, replacing the ever-thinning silicon oxide insulator that sits between a transistor’s gate and channel with hafnium oxide.

This better-insulating material helped stanch a main source of leakage current—the tunneling of electrons between the gate and the channel. But leakage from the source to the drain was still a huge problem. As companies faced the prospect of creating even denser chips with features approaching 20 nm, it became increasingly clear that squeezing together traditional planar transistors or shrinking them even further would be impossible with existing technology. Swapping in a new insulator or adding more strain wouldn’t cut it. Driving down power consumption and saving Moore’s Law would require a fundamental change to transistor structure—a new design that could maximize the gate’s control over the channel.

Fortunately, over the course of more than 20 years of research, transistor designers have found two very powerful ways to boost the effectiveness of the transistor gate. As the gate itself can’t get much stronger, these schemes focus on making the channel easier to control. One approach replaces the bulk silicon of a normal transistor with a thin layer of silicon built on an insulating layer, creating a device that is often called an ultrathin body silicon-on-insulator, or UTB SOI, also known as a fully depleted SOI.

A second strategy turns the thin silicon channel by 90 degrees, creating a “fin” that juts out of the plane of the device. The transistor gate is then draped over the top of the channel like an upside-down U, bracketing it on three sides and giving the gate almost complete control of the channel. While conventional CMOS devices are largely flat, save for a thin insulating layer and the gate, these FinFETs—or Tri-Gate transistors, as Intel has named its three-sided devices—are decidedly 3-D. All the main components of the transistor—source, drain, channel, and gate—sit on top of the device’s substrate.

Both schemes offer the same basic advantage: By thinning the channel, they bring the gate closer to the drain. When a transistor is off, the drain’s electric field can take one of two paths inside the channel to zero-voltage destinations. It can propagate all the way across the channel to the source, or it can terminate at the transistor’s gate. If the field gets to the source, it can lower the energy barrier that keeps charge carriers in the source from entering the channel. But if the gate is close enough to the drain, it can act as a lightning rod, diverting field lines away from the source. This cuts down on leakage, and it also means that field lines don’t penetrate very far into the channel, dissipating even more energy by tugging on any stray carriers.

The first 3-D transistor was sketched out by Digh Hisamoto and others at Hitachi, who presented the design for a device dubbed a Delta at a conference in 1989. The UTB SOI’s roots extend even further back; they are a natural extension of early SOI channel research, which began in the 1980s when researchers started experimenting with transistors built with 200-nm thick, undoped silicon channels on insulating material.

But the promise of both of these thin-channel approaches wasn’t fully appreciated until 1996, when Chenming Hu and his colleagues at the University of California, Berkeley, began an ambitious study, funded by the U.S. Defense Advanced Research Projects Agency, to see how far these designs could go. At the time, the industry was producing 250-nm transistors, and no one knew whether the devices could be scaled below 100 nm. Hu’s team showed that the two alternate architectures could solve the power consumption problems of planar CMOS transistors and that they could operate with gate lengths of 20 nm—and later, even less.

The FinFET and the UTB SOI both offer big gains in power consumption. Logic chip designs typically require that a transistor in its on state draw at least 10 000 times as much current as the device leaks in its off state. For 30-nm transistors—about the size that most chipmakers are currently aiming for—this design spec means devices should leak no more than a few nanoamperes of current when they’re off. While 30-nm planar CMOS devices leak about 50 times that amount, both thin-channel designs hit the target quite easily.

But the two architectures aren’t entirely equal. To get the best performance, the channel of a UTB SOI should be no more than about one-fourth as thick as the length of the gate. Because a FinFET’s gate brackets the channel on three sides, the 3-D transistors can achieve the same level of control with a channel—or fin—that’s as much as half as thick as the length of the transistor gate.

This bigger channel volume gives FinFETs a distinct advantage when it comes to current-carrying capacity. The best R&D results suggest that a 25-nm FinFET can carry about 25 percent more current than a UTB SOI. This current boost doesn’t matter much if you have only a single transistor, but in an IC, it means you can charge capacitors 25 percent faster, making for much speedier chips. Faster chips obviously mean a lot to a microprocessor manufacturer like Intel. The question is whether other chipmakers will find the faster speeds meaningful enough to switch to FinFETs, a prospect that requires a big up-front investment and an entirely new set of manufacturing challenges.

The single biggest hurdle in making FinFETs is manufacturing the fins so that they’re both narrow and uniform. For a 20-nm transistor—roughly the same size as the one that Intel is putting into production—the fin must be about 10 nm wide and 25 nm high; it must also deviate by no more than half a nanometer—just a few atomic layers—in any given direction. Over the course of production, manufacturers must control all sources of variation, limiting it to no more than 1 nm in a 300-millimeter-wide wafer.

This precision is needed not only to manufacture the fin; it must also be maintained for the rest of the manufacturing process, including thermal treatment, doping, and the multiple film deposition and removal steps needed to build the transistor’s gate insulator and gate. As an added complication, the gate oxide and the gate must be deposited so that they follow the contours of the fin. Any process that damages the fin could affect how the device performs. The resultant variation in device quality would force engineers to operate circuits at a higher power than they’re designed for, eliminating any gains in power efficiency.

The unusual geometry of the FinFET also poses challenges for doping, which isn’t required but can help cut down on leakage current. FinFET channels need two kinds of dopants: One is deposited underneath the gate and the other into the parts of the channel that extend on either side of the gate, helping mate the channel to the source and drain. Manufacturers currently dope channels by shooting ions straight down into the material. But that approach won’t work for FinFETs. The devices need dopants to be distributed evenly through the top of the fin and the side walls; any unevenness in concentration will cause a pileup of charges, boosting the device’s resistance and wasting power.

Doping will get only more difficult in the future. As FinFETs shrink, they’ll get so close together that they will cast “shadows” on one another, preventing dopants from permeating every part of every fin. At Applied Materials’ Silicon Systems Group, we’ve been working on one possible fix: immersing fins in plasma so that dopants can migrate directly into the material, no matter what its shape is.

Because UTB SOI devices are quite similar to conventional planar CMOS transistors, they are easier to manufacture than FinFETs. Most existing designs and manufacturing techniques will work just as well with the new thin-silicon transistors as they do with the traditional variety. And in some ways, UTB SOIs are easier to produce than present-day transistors. The devices don’t need doped channels, a simplification that can save planar CMOS manufacturers some 20 to 30 steps out of roughly 400 in the wafer production process.

But the UTB SOI comes with its own challenges, chiefly the thin channel. The requirement that UTB SOI channels be half as thick as comparable FinFET fins makes any variations in thickness even more critical for these devices. A firm called Soitec, headquartered in Bernin, France, which has been leading the charge in manufacturing ultrathin silicon-on-insulator wafers, is currently demonstrating 10-nm-thick silicon layers that vary by just 0.5 nm in thickness. That’s an impressive achievement for wafers that measure 300 mm across, but it will need to be improved as transistors shrink. And it’s not clear how precise Soitec’s technique—which involves splitting a wafer to create an ultrathin silicon layer—can ultimately be made.

Another key stumbling block for UTB SOI adoption is the supply chain. At the moment, there are few potential providers of ultrathin SOI wafers, which could ultimately make manufacturers of UTB SOI chips dependent on a handful of sources. Intel’s Mark Bohr says the hard-to-find wafers could add 10 percent to the cost of a finished wafer, compared to 2 to 3 percent for wafers bearing 3-D transistors (an estimate from the SOI Industry Consortium suggests that finished UTB SOI wafers will actually be less expensive).

Going forward, we expect that chipmakers will split into two camps. Those interested in the speediest transistors will move toward FinFETs. Others who don’t want to invest as much in a switch will find UTB SOIs more attractive.

UTB SOI transistors have an additional feature that makes them particularly appealing for low-power applications: A small voltage can easily be applied to the very bottom of a chip full of UTB SOI devices. This small bias voltage alters the channel properties, reducing the electrical barrier that stops current flowing from the source to the drain. As a result, less voltage needs to be applied to the transistor gates to turn the devices on. When the transistors aren’t needed, this bias voltage can be removed, which restores the electrical barrier, reducing the amount of current that leaks through the device when it’s off. As Thomas Skotnicki of STMicroelectronics has long argued, this sort of dynamic switching saves power, making the devices particularly attractive for chips in smartphones and other mobile gadgets. Skotnicki says the company expects to release its first UTB SOI chip, which will use 28-nm transistors to power a mobile multimedia processor, by the end of 2012.

That said, few companies have committed to one technology or the other. STMicroelectronics—as well as firms such as GlobalFoundries and Samsung—is part of the International Semiconductor Development Alliance, which supports and benefits from device research at IBM and is investing in both FinFETs and UTB SOIs. Exactly how the industry will split up and which design will come to dominate will depend on decisions made by the biggest foundries and how quickly standards are developed. Reports suggest that Taiwan Semiconductor Manufacturing Co., which dominates bespoke manufacturing in the chip industry, will begin making 14-nm FinFETs in 2015, but it’s not clear whether the company will also support UTB SOI production. Switching to FinFET production requires a substantial investment, and whichever way TSMC swings, it will put pressure on other manufacturers, such as GlobalFoundries, United Microelectronics Corp., and newcomers to the foundry business such as Samsung, to choose a direction.

Also still unclear is how far each technology can be extended. Right now it looks like both FinFETs and UTB SOIs should be able to cover the next three generations of transistors. But UTB SOI transistors may not evolve much below 7 nm, because at that point, their gate oxide would need an effective thickness of 0.7 nm, which would require significant materials innovation. FinFETs may have a similar limit. In 2006, a team at the Korea Advanced Institute of Science and Technology used electron-beam lithography to build 3-nm FinFETs. But crafting a single device isn’t quite the same as packing millions together to make a microprocessor; when transistors are that close to each other, parasitic capacitances and resistances will draw current away from each switch. Some projections suggest that when FinFETs are scaled down to 7 nm or so, they will perform no better than planar devices.

Meanwhile, researchers are already trying to figure out what devices might succeed FinFETs and UTB SOIs, to continue Moore’s Law scaling. One possibility is to extrapolate the FinFET concept by using a nanowire device that is completely surrounded by a cylindrical gate. Another idea is to exploit quantum tunneling to create switches that can’t leak current when they’re not switched on. We don’t know what will come next. The emergence of FinFETs and UTB SOIs clearly shows that the days of simple transistor scaling are long behind us. But the switch to these new designs also offers a clear demonstration of how creative thinking and a good amount of competition can help us push Moore’s Law to its ultimate limit—whatever that might be.

RIP Dennis Ritchie (1941 – 2011)

While the world was mourning the death of Steve Jobs, it lost another tech pioneer: Dennis Ritchie, the inventor of C and co-creator of UNIX. To many geeks, Dennis's role in the computer revolution was far more important than Steve's.

Dennis Ritchie, the Bell Labs computer scientist who created the immensely popular C programming language and who was instrumental in the construction of the well-known Unix operating system, died last weekend after a protracted illness. Ritchie was 70 years old.

Ritchie, who was born in a suburb of New York City, graduated from Harvard and later went on to earn a doctorate from the same institution while working at Bell Labs, which then belonged to AT&T (and is now part of Alcatel-Lucent). There he joined forces with Ken Thompson and other Bell Labs colleagues to create the Unix operating system. Although early Unix evolved without the naming of progressively advanced versions, the birth of this operating system can be marked by the first edition of the Unix programmers’ manual, which was issued in November of 1971, almost 40 years ago.

Although AT&T had been engaged in the development of an advanced computer operating system called Multics in the late 1960s, corporate managers abandoned those efforts, making Thompson and Ritchie’s work on Unix that much more impressive. These researchers threw themselves into the development of Unix despite, rather than in response to, their employer’s leanings at the time. We should be thankful that Ritchie and his colleagues took such initiative and that they had the foresight and talent to build a system that was so simple, elegant, and portable that it survives today. Indeed, Unix has spawned dozens if not hundreds of direct derivatives and Unix-like operating systems, including Linux, which can now be found running everything from smartphones to supercomputers. Unix also underlies the current Macintosh operating system, OS X.

Ritchie’s work creating the C programming language took place at the same time and is closely tied to the early development of Unix. By 1973, Ritchie was able to rewrite the core of Unix, which had been programmed in assembly language, using C. In 1978, Brian Kernighan (another Bell Labs colleague) and Ritchie published The C Programming Language, which essentially defined the language (“K&R C”) and remains a classic on the C language and on good programming practice in general. For example, The C Programming Language established the widespread tradition of beginning instruction with an illustrative program that displays the words, “Hello, world.”

For their seminal work on Unix, Ritchie and Thompson received in 1983 the Association for Computing Machinery’s Turing Award. In 1990, the IEEE awarded Ritchie and Thompson the Richard W. Hamming Medal. Ritchie and Thompson’s work on Unix and C was also recognized at the highest level when President Bill Clinton awarded them the 1998 National Medal of Technology. And in May of this year, Ritchie and Thompson received the 2011 Japan Prize (which was also awarded to Tadamitsu Kishimoto and Toshio Hirano, who were honored for the discovery of interleukin-6).

Spectrum attended the Japan Prize awards ceremony and had an opportunity to ask Ritchie to reflect on some of the high points of his impressive career. During that interview, Ritchie admitted that Unix is far from being without flaws, although he didn’t attempt to enumerate them. “There are lots of little things—I don’t even want to think about going down the list,” he quipped. In December, Spectrum will be publishing a feature-length history of the development of the Unix operating system.

Rob Pike, a former member of the Unix team at Bell Labs, informed the world of Ritchie’s death last night on Google+. There he wrote, “He was a quiet and mostly private man, but he was also my friend, colleague, and collaborator, and the world has lost a truly great mind.” A charming illustration of some of those qualities comes from David Madeo, who responded to Pike’s message by sharing this story:

I met Dennis Ritchie at a Usenix without knowing it. He had traded nametags with someone so I spent 30 minutes thinking “this guy really knows what he’s talking about.” Eventually, the other guy walked up and said, “I’m tired of dealing with your groupies” and switched the nametags back. I looked back down to realize who he was, the guy who not only wrote the book I used to learn C in freshman year, but invented the language in the first place. He apologized and said something along the lines that it was easier for him to have good conversations that way.

Faster Than a Speeding Photon

Neutrinos faster than light! If there is no experimental error, this will be the biggest discovery since Einstein's theory of relativity. In fact, the result would prove Einstein wrong. And if something can travel faster than light, then time travel may be possible.

By Rachel Courtland, IEEE Spectrum, Fri, September 23, 2011

The photon should never lose a race. But on Thursday, stories started trickling in of a baffling result: neutrinos that move faster than light. News of this potential violation of special relativity is everywhere now. But despite a flurry of media coverage, it’s still hard to know what to make of the result.

As far as particle physics results go, the finding itself is fairly easy to convey. OPERA, a 1300-metric-ton detector that sits in Italy’s underground Gran Sasso National Laboratory, detected neutrinos that seem to move faster than the speed of light. The nearly massless particles made the 2.43-millisecond, 730-kilometer trip from CERN, where they were created, to OPERA’s detectors about 60 nanoseconds faster than a photon would.

The OPERA team hasn’t released the results lightly. But after three years’ work, OPERA spokesperson Antonio Ereditato told Science, it was time to spread the news and put the question to the community. “We are forced to say something,” Ereditato said. “We could not sweep it under the carpet because that would be dishonest.” And the experiment seems carefully done. The OPERA team estimates they have measured the 60-nanosecond delay with a precision of about 10 nanoseconds. Yesterday, Nature News reported the team’s result has a certainty of about six sigma, “the physicists’ way of saying it is certainly correct”.

But as straightforward as you can imagine a particle footrace to be, interpreting the result and dealing with the implications is another matter. Words like “flabbergasted” and “extraordinary” are circulating, but often with a strong note of caution. Physicist Jim Al-Khalili of the University of Surrey was so convinced the finding is the result of measurement error, he told the BBC’s Jason Palmer that “if the CERN experiment proves to be correct and neutrinos have broken the speed of light, I will eat my boxer shorts on live TV.” Others say it’s just too early to call. When approached by Reuters, renowned physicist Stephen Hawking declined to comment on the result. “It is premature to comment on this,” he said. “Further experiments and clarifications are needed.”

For now, no one’s speculating too wildly about what the result might mean if it holds up, although there has been some talk of time travel and extra dimensions. And on the whole, the coverage of the OPERA findings, especially given the fast-breaking nature of the news cycle (the team’s preprint posted last night), has been pretty careful. But there is one key question few have tackled head-on: the conflict with long-standing astrophysical results.

One of the key neutrino speed measurements comes from observations of supernova 1987A. Photons and neutrinos from this explosion reached Earth just hours apart in February 1987. But as Nature News and other outlets noted, if OPERA’s measurement of neutrino speed is correct, neutrinos created in the explosion should have arrived at Earth years before the light from the supernova was finally picked up by astronomers.

New Scientist’s Lisa Grossman found a few potential explanations for the conflicting results. She quotes theorist Mark Sher of the College of William and Mary in Williamsburg, Virginia, who speculates that maybe – just maybe – the speed difference between the OPERA and supernova results could be chalked up to differences in the energy or type of neutrinos.

That said, no one is arguing that the OPERA results are in immediate need of a theoretical explanation, because there could be errors the team hasn’t accounted for. The experiment relies on very precise timing and careful measurement of the distance between the neutrino source at CERN and the detector. John Timmer of Ars Technica does a good job of explaining how the OPERA team used fastidious accounting, GPS signals, and atomic clocks to reduce the uncertainty. But he notes that there are other potential sources of error that could add up, not to mention those pesky “unknown unknowns”.

Many physicists seem to be looking forward to independent tests using two other neutrino experiments – the MINOS experiment in Minnesota, which captures neutrinos created at Fermilab, and another neutrino beam experiment in Japan called T2K.

But for now, we can only wait. And, perhaps, come up with explanations of our own.

Do Romantic Thoughts Reduce Women’s Interest in Engineering?

If romance reduces girls' pursuit of engineering, the reverse is probably also true: girls who choose engineering have less interest in romance. Someone should do a follow-up study surveying a large sample of female engineering students to see how many of them had a boyfriend in high school.

Now someone should come up with research showing that male engineers are not romantic, so Pat cannot complain that I am not romantic.

By Steven Cherry, IEEE Spectrum, Fri, August 26, 2011
A new study suggests thoughts of romance can reduce college women’s interest in science and engineering

In the 1960s, when women first began enrolling at universities in record numbers, many people wondered: “Why weren’t more of them studying engineering?” Fifty years later, we’re still wondering. Only one in seven U.S. engineers is a woman. The so-called “engineering gender gap” is still a chasm.

And that’s not likely to change very quickly. The average college graduate nowadays is a woman—57 percent to 43—but when it comes to the so-called STEM fields (science, technology, engineering, and math), women account for only 35 percent. And most of those degrees are in the life and physical sciences, not engineering or computer science.

It’s a problem perhaps best examined by psychologists, and examining it they are. And a new series of studies argues that—as clichéd as it sounds—maybe love really does have something to do with it.

An article based on the studies will be published next month in the peer-reviewed journal Personality and Social Psychology Bulletin.

My guest today is the paper’s lead author. Lora Park is an assistant professor of psychology at the University at Buffalo, in New York, and principal investigator at the Self and Motivation Lab there. She joins us by phone.

Effects of Everyday Romantic Goal Pursuit on Women’s Attitudes Toward Math and Science

The present research examined the impact of everyday romantic goal strivings on women’s attitudes toward science, technology, engineering, and math (STEM). It was hypothesized that women may distance themselves from STEM when the goal to be romantically desirable is activated because pursuing intelligence goals in masculine domains (i.e., STEM) conflicts with pursuing romantic goals associated with traditional romantic scripts and gender norms. Consistent with hypotheses, women, but not men, who viewed images (Study 1) or overheard conversations (Studies 2a-2b) related to romantic goals reported less positive attitudes toward STEM and less preference for majoring in math/science compared to other disciplines. On days when women pursued romantic goals, they engaged in more romantic activities and felt more desirable, but engaged in fewer math activities. Furthermore, women’s previous-day romantic goal strivings predicted feeling more desirable but being less invested in math on the following day (Study 3).

Link to the paper: http://www.buffalo.edu/news/pdf/August11/ParkRomanticAttitudes.pdf

When the Problem Is the Problem

This is the only thing I learned from my master's degree. Asking the right question is halfway to getting the right answer. In fact, asking the right question is probably more important than getting the right answer. Once you state the question correctly, things magically fall into place and you can even outsource the work to someone else.

Finding the right problem is half the solution
By Robert W. Lucky, July 2011, IEEE Spectrum

A problem well stated is a problem half solved.
– Inventor Charles Franklin Kettering (1876–1958)

We’re all fairly good at problem solving. That’s the skill we were taught and endlessly drilled on at school. Once we have a problem, we know how to turn the crank and get a solution. Ah, but finding a problem—there’s the rub.

Everyone knows that finding a good problem is the key to research, yet no one teaches us how to do that. Engineering education is based on the presumption that there exists a predefined problem worthy of a solution. If only it were so!

After many years of managing research, I’m still not sure how to find good problems. Often I discovered that good problems were obvious only in retrospect, and even then I was sometimes proved wrong years later. Nonetheless, I did observe that there were some people who regularly found good problems, while others never seemed to be working along fruitful paths. So there must be something to be said about ways to go about this.

Internet pioneer Craig Partridge recently sent around a list of open research problems in communications and networking, as well as a set of criteria for what constitutes a good problem. He offers some sensible guidelines for choosing research problems, such as having a reasonable expectation of results, believing that someone will care about your results and that others will be able to build upon them, and ensuring that the problem is indeed open and underexplored.

All of this is easier said than done, however. Given any prospective problem, a search may reveal a plethora of previous work, but much of it will be hard to retrieve. On the other hand, if there is little or no previous work, maybe there’s a reason no one is interested in this problem. You need something in between. Moreover, even in defining the problem you need to see a way in, the germ of some solution, and a possible escape path to a lesser result, like the runaway truck ramps on steep downhill highways.

Timing is critical. If a good problem area is opened up, everyone rushes in, and soon there are diminishing returns. On unimportant problems, this same herd behavior leads to a self-approving circle of papers on a subject of little practical significance. Real progress usually comes from a succession of incremental and progressive results, as opposed to those that feature only variations on a problem’s theme.

At Bell Labs, the mathematician Richard Hamming used to divide his fellow researchers into two groups: those who worked behind closed doors and those whose doors were always open. The closed-door people were more focused and worked harder to produce good immediate results, but they failed in the long term.

Today I think we can take the open or closed door as a metaphor for researchers who are actively connected and those who are not. And just as there may be a right amount of networking, there may also be a right amount of reading, as opposed to writing. Hamming observed that some people spent all their time in the library but never produced any original results, while others wrote furiously but were relatively ignorant of the relevant literature.

Hamming, who shared an office with Claude Shannon and knew many famous scientists and engineers, also remarked on what he saw as a “Nobel Prize effect,” where once having achieved a famous result, a researcher felt that he or she could work only on great problems, consequently never doing great work again. From small-problem acorns, great trees of research grow.

Like a lot of things in life, it helps to be in the right place at the right time. Sometimes all the good and well-intentioned advice in the world won’t help you avoid working on a dead-end problem. I know—I’ve been there, done that.

Are Compact Fluorescent Lightbulbs Really Cheaper Over Time?

I hate the lighting produced by CFL bulbs. I am going to switch from incandescent bulbs directly to LED lights when the price of LEDs comes down. CFL is an in-between stopgap technology that should eventually be phased out.

By Joseph Calamia, March 2011, IEEE Spectrum
CFLs must last long enough for their energy efficiency to make up for their higher cost

You buy a compact fluorescent lamp. The packaging says it will last for 6000 hours—about five years, if used for three hours a day. A year later, it burns out.

Last year, IEEE Spectrum reported that some Europeans opposed legislation to phase out incandescent lighting. Rather than replace their lights with compact fluorescents, consumers started hoarding traditional bulbs.

From the comments on that article, it seems that some IEEE Spectrum readers aren’t completely sold on CFLs either. We received questions about why the lights don’t always meet their long-lifetime claims, what can cause them to fail, and ultimately, how dead bulbs affect the advertised savings of switching from incandescent.

Tests of compact fluorescent lamps’ lifetime vary among countries. The majority of CFLs sold in the United States adhere to the U.S. Department of Energy and Environmental Protection Agency’s Energy Star approval program, according to the U.S. National Electrical Manufacturers Association. For these bulbs, IEEE Spectrum found some answers.

How is a compact fluorescent lamp’s lifetime calculated in the first place?

“With any given lamp that rolls off a production line, whatever the technology, they’re not all going to have the same exact lifetime,” says Alex Baker, lighting program manager for the Energy Star program. In an initial test to determine an average lifetime, he says, manufacturers leave a large sample of lamps lit. The defined average “rated life” is the time it takes for half of the lamps to go out. Baker says that this average life definition is an old lighting industry standard that applies to incandescent and compact fluorescent lamps alike.

In reality, the odds may actually be somewhat greater than 50 percent that your 6000-hour-rated bulb will still be burning bright at 6000 hours. “Currently, qualified CFLs in the market may have longer lifetimes than manufacturers are claiming,” says Jen Stutsman, of the Department of Energy’s public affairs office. “More often than not, more than 50 percent of the lamps of a sample set are burning during the final hour of the manufacturer’s chosen rated lifetime,” she says, noting that manufacturers often opt to end lifetime evaluations prematurely, to save on testing costs.

Although manufacturers usually conduct this initial rated life test in-house, the Energy Star program requires other lifetime evaluations conducted by accredited third-party laboratories. Jeremy Snyder directed one of those testing facilities, the Program for the Evaluation and Analysis of Residential Lighting (PEARL) in Troy, N.Y., which evaluated Energy Star–qualified bulbs until late 2010, when the Energy Star program started conducting these tests itself. Snyder works at the Rensselaer Polytechnic Institute’s Lighting Research Center, which conducts a variety of tests on lighting products, including CFLs and LEDs. Some Energy Star lifetime tests, he says, require 10 sample lamps for each product—five pointing toward the ceiling and five toward the floor. One “interim life test” entails leaving the lamps lit for 40 percent of their rated life. Three strikes, or burnt-out lamps, and the product risks losing its qualification.

Besides waiting for bulbs to burn out, testers also measure the light output of lamps over time, to ensure that the CFLs do not appreciably dim with use. Using a hollow “integrating sphere,” which has a white interior to reflect light in all directions, Lighting Research Center staff can take precise measurements of a lamp’s total light output in lumens. The Energy Star program requires that 10 tested lights maintain an average of 90 percent of their initial lumen output for 1000 hours of life, and 80 percent of their initial lumen output at 40 percent of their rated life.

Is there any way to accelerate these lifetime tests?

“There are techniques for accelerated testing of incandescent lamps, but there’s no accepted accelerated testing for other types,” says Michael L. Grather, the primary lighting performance engineer at Luminaire Testing Laboratory and Underwriters’ Laboratories in Allentown, Penn. For incandescent bulbs, one common method is to run more electric current through the filament than the lamp might experience in normal use. But Grather says a similar test for CFLs wouldn’t give consumers an accurate prediction of the bulb’s life: “You’re not fairly indicating what’s going to happen as a function of time. You’re just stressing different components—the electronics but not the entire lamp.”

Perhaps the closest such evaluation for CFLs is the Energy Star “rapid cycle test.” For this evaluation, testers divide the lamp’s total rated life in hours by two, then subject the bulb to that many cycles of five minutes on, five minutes off. For example, a CFL with a 6000-hour rated life must undergo 3000 such rapid cycles. At least five out of a sample of six lamps must survive for the product to keep its Energy Star approval.
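The rapid-cycle arithmetic is simple enough to state directly. A minimal sketch, with the divide-by-two rule taken from the article (the function name is ours):

```python
# Energy Star rapid cycle test, as described in the article:
# number of 5-min-on/5-min-off cycles = rated life in hours / 2.
def rapid_cycles(rated_life_hours):
    """Cycles a lamp must survive in the rapid cycle test."""
    return rated_life_hours // 2

rapid_cycles(6000)  # 3000 cycles for a 6000-hour lamp
```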

In real scenarios, what causes CFLs to fall short of their rated life?

As anyone who frequently replaces CFLs in closets or hallways has likely discovered, rapid cycling can prematurely kill a CFL. Repeatedly starting the lamp shortens its life, Snyder explains, because high voltage at start-up sends the lamp’s mercury ions hurtling toward the starting electrode, which can destroy the electrode’s coating over time. Snyder suggests consumers keep this in mind when deciding where to use a compact fluorescent. The Lighting Research Center has published a worksheet [PDF] to help consumers understand how frequent switching reduces a lamp’s lifetime. The sheet provides a series of multipliers so that consumers can better predict a bulb’s longevity. The multipliers range from 1.5 (for bulbs left on for at least 12 hours) to 0.4 (for bulbs turned off after 15 minutes). Despite any lifetime reduction, Snyder says consumers should still switch off any light that won’t be needed for more than a few minutes.
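Applying those multipliers is a one-line calculation. A hedged sketch, using only the two endpoint multipliers quoted in the article (the worksheet’s intermediate values are not reproduced here, and the names are ours):

```python
# Lighting Research Center switching multipliers, endpoints from the article.
# The published worksheet contains intermediate values not shown here.
SWITCHING_MULTIPLIER = {
    "on_12h_or_more": 1.5,   # lamp left on for at least 12 hours per start
    "off_after_15min": 0.4,  # lamp switched off after about 15 minutes
}

def adjusted_life(rated_life_hours, multiplier):
    """Expected life after accounting for switching frequency."""
    return rated_life_hours * multiplier

# A 10,000-hour CFL in a spot where it's switched off after ~15 minutes:
adjusted_life(10000, SWITCHING_MULTIPLIER["off_after_15min"])  # 4000.0 hours
```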

Another CFL slayer is temperature. “Incandescents thrive on heat,” Baker says. “The hotter they get, the more light you get out of them. But a CFL is very temperature sensitive.” He notes that “recessed cans”—insulated lighting fixtures—prove a particularly nasty compact fluorescent death trap, especially when attached to dimmers, which can also shorten the electronic ballast’s life. He says consumers often install CFLs meant for table or floor lamps inside these fixtures, instead of lamps specially designed for higher temperatures, as indicated on their packages. Among other things, these high temperatures can destroy the lamps’ electrolytic capacitors—the main reason, he says, that CFLs fail when overheated.

How do shorter-than-expected lifetimes affect the payback equation?

Predicting the actual savings of switching from an incandescent requires accounting for both the cost of the lamp and its energy savings over time. Although the initial price of a compact fluorescent (which can range [PDF] from US $0.50 in a multipack to over $9) is usually more than that of an incandescent (usually less than a U.S. dollar), a CFL can use a fraction of the energy an incandescent requires. Over its lifetime, the compact fluorescent should make up for its higher initial cost in savings—if it lives long enough. It should also offset the estimated 4 milligrams of mercury it contains. You might think of mercury vapor as the CFL’s equivalent of an incandescent’s filament. The electrodes in the CFL excite this vapor, which in turn radiates and excites the lamp’s phosphor coating, giving off light. Given that coal-burning power plants also release mercury into the air, an amount that the Energy Star program estimates at around 0.012 milligrams per kilowatt-hour, if the CFL can save enough energy it should offset this environmental cost, too.
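The mercury trade-off above lends itself to a back-of-envelope check. Using only the two figures quoted in the article (4 mg per bulb, roughly 0.012 mg per kilowatt-hour from coal generation), this sketch estimates how much energy a CFL must save before the avoided power-plant emissions match the mercury it contains:

```python
# Back-of-envelope mercury offset, using figures quoted in the article:
# ~4 mg of mercury per CFL; ~0.012 mg emitted per kWh of coal-fired power.
def kwh_to_offset_mercury(mercury_mg=4.0, emissions_mg_per_kwh=0.012):
    """Energy savings (kWh) needed before avoided power-plant mercury
    emissions equal the mercury inside the bulb."""
    return mercury_mg / emissions_mg_per_kwh

kwh_to_offset_mercury()  # roughly 333 kWh of saved energy
```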

Exactly how long a CFL must live to make up for its higher costs depends on the price of the lamp, the price of electric power, and how much energy the compact fluorescent requires to produce the same amount of light as its incandescent counterpart. Many manufacturers claim that consumers can take an incandescent wattage and divide it by four, and sometimes five, to find an equivalent CFL in terms of light output, says Russ Leslie, associate director at the Lighting Research Center. But he believes that’s “a little bit too greedy.” Instead, he recommends dividing by three. “You’ll still save a lot of energy, but you’re more likely to be happy with the light output,” he says.
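Leslie’s rule of thumb translates directly into a formula. A minimal sketch (the function name is ours; the divisors come from the article):

```python
# Wattage-equivalence rule of thumb from the article: divide the
# incandescent wattage by three (Leslie's recommendation); manufacturers
# sometimes claim a divisor of four or five.
def equivalent_cfl_wattage(incandescent_watts, divisor=3):
    """CFL wattage expected to give comparable light output."""
    return incandescent_watts / divisor

equivalent_cfl_wattage(60)  # 20.0 W CFL to replace a 60 W incandescent
```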

To estimate your particular savings, the Energy Star program has published a spreadsheet where you can enter the price you’re paying for electricity, the average number of hours your household uses the lamp each day, the price you paid for the bulb, and its wattage. The sheet also includes the assumptions used to calculate the comparison between compact fluorescent and incandescent bulbs. Playing with the default assumptions given in the sheet, we reduced the CFL’s lifetime by 60 percent to account for frequent switching, doubled the initial price to make up for dead bulbs, deleted the assumed labor costs for changing bulbs, and increased the CFL’s wattage to give us a bit more light. The compact fluorescent won. We invite you to try the same, with your own lighting and energy costs, and let us know your results.
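For readers who prefer a formula to a spreadsheet, the core of such a comparison can be sketched in a few lines. This is our own simplified version, not the Energy Star sheet: it considers only purchase price and energy use, ignoring labor costs and replacement bulbs, and the sample prices and wattages are assumptions for illustration.

```python
# Simplified payback sketch (ours, not the Energy Star spreadsheet):
# hours of use needed for energy savings to cover the CFL's extra price.
# Ignores labor costs, replacement bulbs, and lifetime derating.
def payback_hours(cfl_price, incandescent_price, cfl_watts,
                  incandescent_watts, price_per_kwh):
    """Hours of operation before the CFL's energy savings repay
    its higher purchase price."""
    extra_cost = cfl_price - incandescent_price
    savings_per_hour = (incandescent_watts - cfl_watts) / 1000 * price_per_kwh
    return extra_cost / savings_per_hour

# Assumed example: a $3.00, 20 W CFL replacing a $0.50, 60 W incandescent
# at $0.12 per kWh:
payback_hours(3.00, 0.50, 20, 60, 0.12)  # about 521 hours of use
```

At three hours of use per day, that works out to under half a year, which is why the CFL tends to win even under pessimistic assumptions like the ones we tried in the spreadsheet.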