Category Archives: Reference

Filing cabinet for a digitized world.

RIP Dennis Ritchie (1941 – 2011)

While the world is mourning the death of Steve Jobs, it has lost another tech pioneer: Dennis Ritchie, the inventor of C and co-creator of Unix. To many geeks, Dennis’s role in the computer revolution is far more important than Steve’s.

Dennis Ritchie, the Bell Labs computer scientist who created the immensely popular C programming language and who was instrumental in the construction of the well-known Unix operating system, died last weekend after a protracted illness. Ritchie was 70 years old.

Ritchie, who was born in a suburb of New York City, graduated from Harvard and later went on to earn a doctorate from the same institution while working at Bell Labs, which then belonged to AT&T (and is now part of Alcatel-Lucent). There he joined forces with Ken Thompson and other Bell Labs colleagues to create the Unix operating system. Although early Unix evolved without the naming of progressively advanced versions, the birth of this operating system can be marked by the first edition of the Unix programmers’ manual, which was issued in November of 1971, almost 40 years ago.

Although AT&T had been engaged in the development of an advanced computer operating system called Multics in the late 1960s, corporate managers abandoned those efforts, making Thompson and Ritchie’s work on Unix that much more impressive. These researchers threw themselves into the development of Unix despite, rather than in response to, their employer’s leanings at the time. We should be thankful that Ritchie and his colleagues took such initiative and that they had the foresight and talent to build a system that was so simple, elegant, and portable that it survives today. Indeed, Unix has spawned dozens if not hundreds of direct derivatives and Unix-like operating systems, including Linux, which can now be found running everything from smartphones to supercomputers. Unix also underlies the current Macintosh operating system, OS X.

Ritchie’s work creating the C programming language took place at the same time and is closely tied to the early development of Unix. By 1973, Ritchie was able to rewrite the core of Unix, which had been programmed in assembly language, using C. In 1978, Brian Kernighan (another Bell Labs colleague) and Ritchie published The C Programming Language, which essentially defined the language (“K&R C”) and remains a classic on the C language and on good programming practice in general. For example, The C Programming Language established the widespread tradition of beginning instruction with an illustrative program that displays the words, “Hello, world.”
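For anyone who has never seen it, the program in question is only a few lines long. Here is a minimal sketch in present-day C (the 1978 book used a slightly older dialect of the language, so this is not the exact K&R listing):

    #include <stdio.h>

    /* The traditional first program: print a greeting and exit. */
    int main(void)
    {
        printf("hello, world\n");
        return 0;
    }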

For their seminal work on Unix, Ritchie and Thompson received in 1983 the Association for Computing Machinery’s Turing Award. In 1990, the IEEE awarded Ritchie and Thompson the Richard W. Hamming Medal. Ritchie and Thompson’s work on Unix and C was also recognized at the highest level when President Bill Clinton awarded them the 1998 National Medal of Technology. And in May of this year, Ritchie and Thompson received the 2011 Japan Prize (which was also awarded to Tadamitsu Kishimoto and Toshio Hirano, who were honored for the discovery of interleukin-6).

Spectrum attended the Japan Prize awards ceremony and had an opportunity to ask Ritchie to reflect on some of the high points of his impressive career. During that interview, Ritchie admitted that Unix is far from being without flaws, although he didn’t attempt to enumerate them. “There are lots of little things—I don’t even want to think about going down the list,” he quipped. In December, Spectrum will be publishing a feature-length history of the development of the Unix operating system.

Rob Pike, a former member of the Unix team at Bell Labs, informed the world of Ritchie’s death last night on Google+. There he wrote, “He was a quiet and mostly private man, but he was also my friend, colleague, and collaborator, and the world has lost a truly great mind.” A charming illustration of some of those qualities comes from David Madeo, who responded to Pike’s message by sharing this story:

I met Dennis Ritchie at a Usenix without knowing it. He had traded nametags with someone so I spent 30 minutes thinking “this guy really knows what he’s talking about.” Eventually, the other guy walked up and said, “I’m tired of dealing with your groupies” and switched the nametags back. I looked back down to realize who he was, the guy who not only wrote the book I used to learn C in freshman year, but invented the language in the first place. He apologized and said something along the lines that it was easier for him to have good conversations that way.

Faster Than a Speeding Photon

Neutrinos faster than light! If there is no experimental error, this will be the biggest discovery since Einstein’s theory of relativity. In fact, this result would prove Einstein wrong. And if something can travel faster than light, then time travel may be possible.

By Rachel Courtland, IEEE Spectrum, Fri, September 23, 2011

The photon should never lose a race. But on Thursday, stories started trickling in of a baffling result: neutrinos that move faster than light. News of this potential violation of special relativity is everywhere now. But despite a flurry of media coverage, it’s still hard to know what to make of the result.

As far as particle physics results go, the finding itself is fairly easy to convey. OPERA, a 1300-metric-ton detector that sits in Italy’s underground Gran Sasso National Laboratory, detected neutrinos that seem to move faster than the speed of light. The nearly massless particles made the 2.43-millisecond, 730-kilometer trip from CERN, where they were created, to OPERA’s detectors about 60 nanoseconds faster than a photon would.
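As a rough sanity check on those figures (a back-of-the-envelope sketch using the rounded numbers quoted above, not OPERA’s exact baseline or timing corrections), the light travel time over 730 km and the implied fractional speed excess can be computed directly:

    #include <stdio.h>

    int main(void)
    {
        const double c = 299792458.0;      /* speed of light in vacuum, m/s */
        const double distance = 730e3;     /* approximate CERN-to-Gran Sasso baseline, m */
        const double early = 60e-9;        /* reported early arrival of the neutrinos, s */

        double t_light = distance / c;     /* light travel time over the baseline */
        double excess = early / t_light;   /* implied fractional speed excess, (v - c) / c */

        printf("light travel time: %.3f ms\n", t_light * 1e3);   /* about 2.435 ms */
        printf("(v - c) / c: %.1e\n", excess);                    /* about 2.5e-5 */
        return 0;
    }

Arriving 60 nanoseconds early over a roughly 2.4-millisecond flight works out to a speed excess of only a few parts in 100,000, which is why such extraordinary care over clocks and distances was needed.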

The OPERA team hasn’t released the results lightly. But after three years’ work, OPERA spokesperson Antonio Ereditato told Science, it was time to spread the news and put the question to the community. “We are forced to say something,” Ereditato said. “We could not sweep it under the carpet because that would be dishonest.” And the experiment seems carefully done. The OPERA team estimates they have measured the 60-nanosecond difference with a precision of about 10 nanoseconds. Yesterday, Nature News reported the team’s result has a certainty of about six sigma, “the physicists’ way of saying it is certainly correct”.

But as straightforward as you can imagine a particle footrace to be, interpreting the result and dealing with the implications is another matter. Words like “flabbergasted” and “extraordinary” are circulating, but often with a strong note of caution. Physicist Jim Al-Khalili of the University of Surrey was so convinced the finding was the result of measurement error that he told the BBC’s Jason Palmer that “if the CERN experiment proves to be correct and neutrinos have broken the speed of light, I will eat my boxer shorts on live TV.” Others say it’s just too early to call. When approached by Reuters, renowned physicist Stephen Hawking declined to comment on the result. “It is premature to comment on this,” he said. “Further experiments and clarifications are needed.”

For now, no one’s speculating too wildly about what the result might mean if it holds up, although there has been some talk of time travel and extra dimensions. And on the whole, the coverage of the OPERA findings, especially given the fast-breaking nature of the news cycle (the team’s preprint posted last night), has been pretty careful. But there is one key question few have tackled head on: the conflict with long-standing astrophysical results.

One of the key neutrino speed measurements comes from observations of supernova 1987A. Photons and neutrinos from this explosion reached Earth just hours apart in February 1987. But as Nature News and other outlets noted, if OPERA’s measurement of neutrino speed is correct, neutrinos created in the explosion should have arrived at Earth years before the light from the supernova was finally picked up by astronomers.

New Scientist’s Lisa Grossman found a few potential explanations for the conflicting results. She quotes theorist Mark Sher of the College of William and Mary in Williamsburg, Virginia, who speculates that maybe – just maybe – the speed difference between the OPERA and supernova results could be chalked up to differences in the energy or type of neutrinos.

That said, no one is arguing that the OPERA results are in immediate need of a theoretical explanation, because there could be errors the team hasn’t accounted for. The experiment relies on very precise timing and careful measurement of the distance between the neutrino source at CERN and the detector. John Timmer of Ars Technica does a good job of explaining how the OPERA team used fastidious accounting, GPS signals, and atomic clocks to reduce the uncertainty. But he notes that there are other potential sources of error that could add up, not to mention those pesky “unknown unknowns”.

Many physicists seem to be looking forward to independent tests using two other neutrino experiments – the MINOS experiment in Minnesota, which captures neutrinos created at Fermilab, and another neutrino beam experiment in Japan called T2K.

But for now, we can only wait. And, perhaps, come up with explanations of our own.

Do Romantic Thoughts Reduce Women’s Interest in Engineering?

If romance reduces girls’ pursuit of engineering, probably the reverse is also true: girls who choose engineering have less interest in romance as well. They should do a follow-up study and survey a large sample of engineering girls to see how many of them had a boyfriend in high school.

Now, someone should come up with research showing that male engineers are not romantic, so Pat cannot complain that I am not romantic.

By Steven Cherry, IEEE Spectrum, Fri, August 26, 2011
A new study suggests thoughts of romance can reduce college women’s interest in science and engineering

In the 1960s, when women first began enrolling at universities in record numbers, many people wondered: “Why weren’t more of them studying engineering?” Fifty years later, we’re still wondering. Only one in seven U.S. engineers is a woman. The so-called “engineering gender gap” is still a chasm.

And that’s not likely to change very quickly. The average college graduate nowadays is a woman—57 percent to 43—but when it comes to the so-called STEM fields, that’s science, technology, engineering, and math, women account for only 35 percent. And most of those degrees are in the life and physical sciences, not engineering or computer science.

It’s a problem perhaps best examined by psychologists, and examining it they are. And a new series of studies argues that—as clichéd as it sounds—maybe love really does have something to do with it.

An article based on the studies will be published next month in the peer-reviewed journal Personality and Social Psychology Bulletin.

My guest today is the paper’s lead author. Lora Park is an assistant professor of psychology at the University at Buffalo, in New York, and principal investigator at the Self and Motivation Lab there. She joins us by phone.

Effects of Everyday Romantic Goal Pursuit on Women’s Attitudes Toward Math and Science

Abstract:
The present research examined the impact of everyday romantic goal strivings on women’s attitudes toward science, technology, engineering, and math (STEM). It was hypothesized that women may distance themselves from STEM when the goal to be romantically desirable is activated because pursuing intelligence goals in masculine domains (i.e., STEM) conflicts with pursuing romantic goals associated with traditional romantic scripts and gender norms. Consistent with hypotheses, women, but not men, who viewed images (Study 1) or overheard conversations (Studies 2a-2b) related to romantic goals reported less positive attitudes toward STEM and less preference for majoring in math/science compared to other disciplines. On days when women pursued romantic goals, the more romantic activities they engaged in and the more desirable they felt, but the fewer math activities they engaged in. Furthermore, women’s previous day romantic goal strivings predicted feeling more desirable but being less invested in math on the following day (Study 3).

Link to the paper: http://www.buffalo.edu/news/pdf/August11/ParkRomanticAttitudes.pdf

History of Work Ethic

I like Plato’s idea that wisdom is directly proportional to the amount of leisure time a person has. To Plato, leisure does not mean indulging yourself in brainless entertainment; it means time for thinking and exercising the mind. When my week is too busy for me to think and reflect, I can feel the rotting of my mind.

In the hi-tech age, Plato’s work ethic comes back with a slight twist. The manual labor of slaves is replaced by robots and computers. Repetitive manual labor has no intrinsic value; thinking as work brings meaning to the educated. Just as in the Protestant work ethic, idleness is still a deadly sin, but work without using your mind is equally bad.

Historical Context of the Work Ethic
by Roger B. Hill, Ph.D., the Work Ethic Site

From a historical perspective, the cultural norm placing a positive moral value on doing a good job because work has intrinsic value for its own sake was a relatively recent development (Lipset, 1990). Work, for much of the ancient history of the human race, has been hard and degrading. Working hard–in the absence of compulsion–was not the norm for Hebrew, classical, or medieval cultures (Rose, 1985). It was not until the Protestant Reformation that physical labor became culturally acceptable for all persons, even the wealthy.

1. Attitudes Toward Work During the Classical Period

One of the significant influences on the culture of the western world has been the Judeo-Christian belief system. Growing awareness of the multicultural dimensions of contemporary society has moved educators to consider alternative viewpoints and perspectives, but an understanding of western thought is an important element in the understanding of the history of the United States.

Traditional Judeo-Christian beliefs state that sometime after the dawn of creation, man was placed in the Garden of Eden “to work it and take care of it” (NIV, 1973, Genesis 2:15). What was likely an ideal work situation was disrupted when sin entered the world and humans were ejected from the Garden. Genesis 3:19 described the human plight from that time on. “By the sweat of your brow you will eat your food until you return to the ground, since from it you were taken; for dust you are and to dust you will return” (NIV, 1973). Rose stated that the Hebrew belief system viewed work as a “curse devised by God explicitly to punish the disobedience and ingratitude of Adam and Eve” (1985, p. 28). Numerous scriptures from the Old Testament in fact supported work, not from the stance that there was any joy in it, but from the premise that it was necessary to prevent poverty and destitution (NIV; 1973; Proverbs 10:14, Proverbs 13:4, Proverbs 14:23, Proverbs 20:13, Ecclesiastes 9:10).

The Greeks, like the Hebrews, also regarded work as a curse (Maywood, 1982). According to Tilgher (1930), the Greek word for work was ponos, taken from the Latin poena, which meant sorrow. Manual labor was for slaves. The cultural norms allowed free men to pursue warfare, large-scale commerce, and the arts, especially architecture or sculpture (Rose, 1985).

Mental labor was also considered to be work and was denounced by the Greeks. The mechanical arts were deplored because they required a person to use practical thinking, “brutalizing the mind till it was unfit for thinking of truth” (Tilgher, 1930, p. 4). Skilled crafts were accepted and recognized as having some social value, but were not regarded as much better than work appropriate for slaves. Hard work, whether due to economic need or under the orders of a master, was disdained.

It was recognized that work was necessary for the satisfaction of material needs, but philosophers such as Plato and Aristotle made it clear that the purpose for which the majority of men labored was “in order that the minority, the élite, might engage in pure exercises of the mind–art, philosophy, and politics” (Tilgher, 1930, p. 5). Plato recognized the notion of a division of labor, separating them first into categories of rich and poor, and then into categories by different kinds of work, and he argued that such an arrangement could only be avoided by abolition of private property (Anthony, 1977). Aristotle supported the ownership of private property and wealth. He viewed work as a corrupt waste of time that would make a citizen’s pursuit of virtue more difficult (Anthony, 1977).

Braude (1975) described the Greek belief that a person’s prudence, morality, and wisdom was directly proportional to the amount of leisure time that person had. A person who worked, when there was no need to do so, would run the risk of obliterating the distinction between slave and master. Leadership, in the Greek state and culture, was based on the work a person didn’t have to do, and any person who broke this cultural norm was acting to subvert the state itself.

The Romans adopted much of their belief system from the culture of the Greeks and they also held manual labor in low regard (Lipset, 1990). The Romans were industrious, however, and demonstrated competence in organization, administration, building, and warfare. Through the empire that they established, the Roman culture was spread through much of the civilized world during the period from c500 BC until c117 AD (Webster Encyclopedia, 1985). The Roman empire spanned most of Europe, the Middle East, Egypt, and North Africa and greatly influenced the Western culture in which the theoretical constructs underlying this study were developed.

Slavery had been an integral part of the ancient world prior to the Roman empire, but the employment of slaves was much more widely utilized by the Romans than by the Greeks before them (Anthony, 1977). Early on in the Roman system, moderate numbers of slaves were held and they were treated relatively well. As the size of landholdings grew, however, thousands of slaves were required for large-scale grain production on some estates, and their treatment grew worse. Slaves came to be viewed as cattle, with no rights as human beings and with little hope of ever being freed. In fact, in some instances cattle received greater care than slaves, since cattle were not as capable of caring for themselves as were slaves (Anthony, 1977).

For the Romans, work was to be done by slaves, and only two occupations were suitable for a free man–agriculture and big business (Maywood, 1982). A goal of these endeavors, as defined by the Roman culture, was to achieve an “honorable retirement into rural peace as a country gentleman” (Tilgher, 1930, p. 8). Any pursuit of handicrafts or the hiring out of a person’s arms was considered to be vulgar, dishonoring, and beneath the dignity of a Roman citizen.

Philosophically, both the Greeks and the Romans viewed the work that slaves performed and the wealth that free men possessed as a means to achieve the supreme ideal of life–man’s independence of external things, self-sufficiency, and satisfaction with one’s self (Tilgher, 1930). Although work was something that would degrade virtue, wealth was not directly related to virtue except in the matter of how it was used. The view of Antisthenes that wealth and virtue were incompatible and the view of the Stoics that wealth should be pursued for the purpose of generosity and social good represented extremes of philosophical thought. The most accepted view was that pursuit of gain to meet normal needs was appropriate.

From the perspective of a contemporary culture, respect for workers upon whom the economic structure of a nation and a society rested would have been logical for the Greeks and the Romans, but no such respect was evident. Even free men, who were not privileged to be wealthy and were obliged to work alongside slaves, were not treated with any sense of gratitude, but were held in contempt. The cultural norms of the classical era regarding work were in stark contrast to the work ethic of the latter day.

3. Attitudes Toward Work During the Medieval Period

The fall of the Roman empire marked the beginning of a period generally known as the Middle Ages. During this time, from c400 AD until c1400 AD, Christian thought dominated the culture of Europe (Braude, 1975). Woven into the Christian conceptions about work, however, were Hebrew, Greek, and Roman themes. Work was still perceived as punishment by God for man’s original sin, but to this purely negative view was added the positive aspect of earnings which prevented one from being reliant on the charity of others for the physical needs of life (Tilgher, 1930). Wealth was recognized as an opportunity to share with those who might be less fortunate and work which produced wealth therefore became acceptable.

Early Christian thought placed an emphasis on the shortness of time until the second coming of Christ and the end of the world. Any attachment to physical things of the world or striving to accumulate excessive wealth was frowned upon. As time passed and the world did not end, the Christian church began to turn its attention to social structure and the organization of the believers on earth. Monasteries were formed where monks performed the religious and intellectual work of the church (reading, copying manuscripts, etc.), but lay people tended to the manual labor needed to supply the needs of the community. People who were wealthy were expected to meet their own needs, but to give the excess of their riches to charity. Handicraft, farming, and small scale commerce were acceptable for people of moderate means, but receiving interest for money loaned, charging more than a “just” price, and big business were not acceptable (Tilgher, 1930).

As was the case for the Greeks and the Romans, social status within the medieval culture was related to the work a person did. Aristotelianism was also evident in the system of divine law taught by the Catholic church during this time (Anthony, 1977). A hierarchy of professions and trades was developed by St. Thomas Aquinas as part of his encyclopedic consideration of all things human and divine (Tilgher, 1930). Agriculture was ranked first, followed by the handicrafts and then commerce. These were considered to be the work of the world, however, and the work of the church was in a higher category (Rose, 1985). The ideal occupation was the monastic life of prayer and contemplation of God (Braude, 1975; Tilgher, 1930). Whether as a cleric or in some worldly occupation, each person embarked on a particular work course as a result of the calling of God, and it was the duty of a worker to remain in his class, passing on his family work from father to son.

In the culture of the medieval period, work still held no intrinsic value. The function of work was to meet the physical needs of one’s family and community, and to avoid idleness which would lead to sin (Tilgher, 1930). Work was a part of the economic structure of human society which, like all other things, was ordered by God.

4. Protestantism and the Protestant Ethic

With the Reformation, a period of religious and political upheaval in western Europe during the sixteenth century, came a new perspective on work. Two key religious leaders who influenced the development of western culture during this period were Martin Luther and John Calvin. Luther was an Augustinian friar who became discontent with the Catholic church and was a leader within the Protestant movement. He believed that people could serve God through their work, that the professions were useful, that work was the universal base of society and the cause of differing social classes, and that a person should work diligently in their own occupation and should not try to change from the profession to which he was born. To do so would be to go against God’s laws since God assigned each person to his own place in the social hierarchy (Lipset, 1990; Tilgher, 1930).

The major point at which Luther differed from the medieval concept of work was regarding the superiority of one form of work over another. Luther regarded the monastic and contemplative life, held up as the ideal during the middle ages, as an egotistic and unaffectionate exercise on the part of the monks, and he accused them of evading their duty to their neighbors (Tilgher, 1930). For Luther, a person’s vocation was equated with his calling, but all callings were of equal spiritual dignity. This tenet was significant because it affirmed manual labor.

Luther still did not pave the way for a profit-oriented economic system because he disapproved of commerce as an occupation (Lipset, 1990; Tilgher, 1930). From his perspective, commerce did not involve any real work. Luther also believed that each person should earn an income which would meet his basic needs, but to accumulate or hoard wealth was sinful.

According to Weber (1904, 1905), it was John Calvin who introduced the theological doctrines which combined with those of Martin Luther to form a significant new attitude toward work. Calvin was a French theologian whose concept of predestination was revolutionary. Central to Calvinist belief was the Elect, those persons chosen by God to inherit eternal life. All other people were damned and nothing could change that since God was unchanging. While it was impossible to know for certain whether a person was one of the Elect, one could have a sense of it based on his own personal encounters with God. Outwardly the only evidence was in the person’s daily life and deeds, and success in one’s worldly endeavors was a sign of possible inclusion as one of the Elect. A person who was indifferent and displayed idleness was most certainly one of the damned, but a person who was active, austere, and hard-working gave evidence to himself and to others that he was one of God’s chosen ones (Tilgher, 1930).

Calvin taught that all men must work, even the rich, because to work was the will of God. It was the duty of men to serve as God’s instruments here on earth, to reshape the world in the fashion of the Kingdom of God, and to become a part of the continuing process of His creation (Braude, 1975). Men were not to lust after wealth, possessions, or easy living, but were to reinvest the profits of their labor into financing further ventures. Earnings were thus to be reinvested over and over again, ad infinitum, or to the end of time (Lipset, 1990). Using profits to help others rise from a lesser level of subsistence violated God’s will since persons could only demonstrate that they were among the Elect through their own labor (Lipset, 1990).

Selection of an occupation and pursuing it to achieve the greatest profit possible was considered by Calvinists to be a religious duty. Not only condoning, but encouraging the pursuit of unlimited profit was a radical departure from the Christian beliefs of the middle ages. In addition, unlike Luther, Calvin considered it appropriate to seek an occupation which would provide the greatest earnings possible. If that meant abandoning the family trade or profession, the change was not only allowed, but it was considered to be one’s religious duty (Tilgher, 1930).

The norms regarding work which developed out of the Protestant Reformation, based on the combined theological teachings of Luther and Calvin, encouraged work in a chosen occupation with an attitude of service to God, viewed work as a calling and avoided placing greater spiritual dignity on one job than another, approved of working diligently to achieve maximum profits, required reinvestment of profits back into one’s business, allowed a person to change from the craft or profession of his father, and associated success in one’s work with the likelihood of being one of God’s Elect.

5. Two Perspectives of the Protestant Ethic

The attitudes toward work which became a part of the culture during the sixteenth century, and the economic value system which they nurtured, represented a significant change from medieval and classical ways of thinking about work (Anthony, 1977). Max Weber, the German economic sociologist, coined a term for the new beliefs about work, calling it the “Protestant ethic.” The key elements of the Protestant ethic were diligence, punctuality, deferment of gratification, and primacy of the work domain (Rose, 1985). Two distinct perspectives were evident in the literature with regard to the development of the Protestant ethic.

One perspective was the materialist viewpoint which stated that the belief system, called the Protestant ethic, grew out of changes in the economic structure and the need for values to support new ways of behavior. Anthony (1977) attributes this view to Karl Marx. The other perspective, delineated by Max Weber (1904, 1905), viewed changes in the economic structure as an outgrowth of shifts in theological beliefs. Regardless of the viewpoint, it is evident that a rapid expansion in commerce and the rise of industrialism coincided with the Protestant Reformation (Rose, 1985).

Bernstein (1988), in an argument supporting the materialist viewpoint, enumerated three sixteenth century trends which probably contributed to the support by Luther and Calvin of diligence: (1) a rapid population increase of Germany and Western Europe, (2) inflation, and (3) a high unemployment rate. Probably the most serious of these was the rapid expansion in population. Between 1500 and 1600, the population of Germany increased by 25% and the British population increased by 40% (Bernstein, 1988). In the cities, the increases were even greater as people from rural areas were displaced by enclosure of large tracts of land for sheep farming. In addition, the import of large quantities of silver and gold from Mexico and Peru contributed to inflation in general price levels of between 300% and 400%, and even higher inflation in food prices (Bernstein, 1988). Along with the growth in population and the inflation problems, unemployment was estimated at 20% in some cities (Bernstein, 1988). People without jobs became commonplace on the streets of cities, begging and struggling to survive.

European cities acted to alleviate the problems of unemployment and begging on the streets by passing laws which prohibited begging. The general perception of the time was that work was available for those who wanted to work, and that beggars and vagrants were just lazy. The reality was that the movement of people into the cities far exceeded the capacity of the urban areas to provide jobs. The theological premise that work was a necessary penance for original sin caused increased prejudice toward those without work. Bernstein (1988) suggested that a fundamental misunderstanding of the economic realities facing the poor contributed to the theological development of the Protestant ethic.

From a Marxist view, what actually occurred was the development of a religious base of support for a new industrial system which required workers who would accept long hours and poor working conditions (Anthony, 1977; Bernstein, 1988). Bernstein did not accuse the theological leaders of the Protestant Reformation of deliberately constructing a belief system which would support the new economic order, but proposed that they did misconstrue the realities of the poor and the unemployed of their day.

From the perspective of Max Weber (1904, 1905), the theological beliefs came first and change in the economic system resulted. Motivation of persons to work hard and to reinvest profits in new business ventures was perceived as an outcome primarily of Calvinism. Weber further concluded that countries with belief systems which were predominantly Protestant prospered more under capitalism than did those which were predominantly Catholic (Rose, 1985).

6. The Work Ethic and the Rise of Capitalism

During the medieval period, the feudal system became the dominant economic structure in Europe. This was a social, economic, and political system under which landowners provided governance and protection to those who lived and worked on their property. Centralization of government, the growth of trade, and the establishment of economically powerful towns, during the fifteenth century, provided alternative choices for subsistence, and the feudal system died out (Webster Encyclopedia, 1985). One of the factors that made the feudal system work was the predominant religious belief that it was sinful for people to seek work other than within the God ordained occupations fathers passed on to their sons. With the Protestant Reformation, and the spread of a theology which ordained the divine dignity of all occupations as well as the right of choosing one’s work, the underpinnings of an emerging capitalist economic system were established.

Anthony (1977) described the significance of an ideology advocating regular systematic work as essential to the transformation from the feudal system to the modern society. In the emerging capitalist system, work was good. It satisfied the economic interests of an increasing number of small businessmen and it became a social duty–a norm. Hard work brought respect and contributed to the social order and well being of the community. The dignity with which society viewed work brought dignity for workers as well, and contempt for those who were idle or lazy.

The Protestant ethic, which gave “moral sanction to profit making through hard work, organization, and rational calculation” (Yankelovich, 1981, p. 247), spread throughout Europe and to America through the Protestant sects. In particular, the English Puritans, the French Huguenots, and the Swiss and Dutch Reformed subscribed to Calvinist theology that was especially conducive to productivity and capital growth (Lipset, 1990). As time passed, attitudes and beliefs which supported hard work became secularized, and were woven into the norms of Western culture (Lipset, 1990; Rodgers, 1978; Rose, 1985; Super, 1982). Weber (1904, 1905) especially emphasized the popular writings of Benjamin Franklin as an example of how, by the eighteenth century, diligence in work, scrupulous use of time, and deferment of pleasure had become a part of the popular philosophy of work in the Western world.

7. The Work Ethic in America

Although the Protestant ethic became a significant factor in shaping the culture and society of Europe after the sixteenth century, its impact did not eliminate the social hierarchy which gave status to those whose wealth allowed exemption from toil and made gentility synonymous with leisure (Rodgers, 1978). The early adventurers who first found America were searching, not for a place to work and build a new land, but for a new Eden where abundance and riches would allow them to follow Aristotle’s instruction that leisure was the only life fitting for a free man. The New England Puritans, the Pennsylvania Quakers, and others of the Protestant sects, who eventually settled in America, however, came with no hopes or illusions of a life of ease.

The early settlers referred to America as a wilderness, in part because they sought the spiritual growth associated with coming through the wilderness in the Bible (Rodgers, 1978). From their viewpoint, the moral life was one of hard work and determination, and they approached the task of building a new world in the wilderness as an opportunity to prove their own moral worth. What resulted was a land preoccupied with toil.

When significant numbers of Europeans began to visit the new world in the early 1800’s, they were amazed with the extent of the transformation (Rodgers, 1978). Visitors to the northern states were particularly impressed by the industrious pace. They often complained about the lack of opportunities for amusement, and they were perplexed by the lack of a social strata dedicated to a life of leisure.

Work in preindustrial America was not incessant, however. The work of agriculture was seasonal, hectic during planting and harvesting but more relaxed during the winter months. Even in workshops and stores, the pace was not constant. Changing demands due to the seasons, varied availability of materials, and poor transportation and communication contributed to interruptions in the steadiness of work. The work ethic of this era did not demand the ceaseless regularity which came with the age of machines, but supported sincere dedication to accomplish those tasks a person might have before them. The work ethic “was not a certain rate of business but a way of thinking” (Rodgers, 1978, p. 19).

8. The Work Ethic and the Industrial Revolution

As work in America was being dramatically affected by the industrial revolution in the mid-nineteenth century, the work ethic had become secularized in a number of ways. The idea of work as a calling had been replaced by the concept of public usefulness. Economists warned of the poverty and decay that would befall the country if people failed to work hard, and moralists stressed the social duty of each person to be productive (Rodgers, 1978). Schools taught, along with the alphabet and the spelling book, that idleness was a disgrace. The work ethic also provided a sociological as well as an ideological explanation for the origins of social hierarchy through the corollary that effort expended in work would be rewarded (Gilbert, 1977).

Some elements of the work ethic, however, did not fit well with the industrial age. One of the central themes of the work ethic was that an individual could be the master of his own fate through hard work. Within the context of the craft and agricultural society this was true. A person could advance his position in life through manual labor and the economic benefits it would produce. Manual labor, however, began to be replaced by machine manufacture, and intensive division of labor came with the industrial age. As a result, individual control over the quantity and methods of personal production began to be removed (Gilbert, 1977).

The impact of industrialization and the speed with which it spread during the second half of the nineteenth century was notable. Rodgers (1978) reported that as late as 1850 most American manufacturing was still being done in homes and workshops. This pattern was not confined to rural areas, but was found in cities also where all varieties of craftsmen plied their trades. Some division of labor was utilized, but most work was performed using time-honored hand methods. A certain measure of independence and creativity could be taken for granted in the workplace. No one directly supervised home workers or farmers, and in the small shops and mills, supervision was mostly unstructured. The cotton textile industry of New England was the major exception.

Rodgers (1978) described the founding, in the early 1820’s, of Lowell, Massachusetts as the real beginning of the industrial age in America. By the end of the decade, nineteen textile mills were in operation in the city, and 5,000 workers were employed in the mills. During the years that followed, factories were built in other towns as competition in the industry grew. These cotton mills were distinguished from other factories of the day by their size, the discipline demanded of their workers, and the paternalistic regulations imposed on employees (Rodgers, 1978). Gradually the patterns of employment and management initiated in the cotton mills spread to other industries, and during the latter half of the nineteenth century, the home and workshop trades were essentially replaced by the mass production of factories.

In the factories, skill and craftsmanship were replaced by discipline and anonymity. A host of carefully preserved hand trades–tailoring, barrel making, glass blowing, felt-hat making, pottery making, and shoe making–disappeared as they were replaced by new inventions and specialization of labor (Rodgers, 1978). Although new skills were needed in some factories, the trend was toward a semiskilled labor force, typically operating one machine to perform one small piece of a manufacturing process. The sense of control over one’s destiny was missing in the new workplace, and the emptiness and lack of intellectual stimulation in work threatened the work ethic (Gilbert, 1977). In the secularized attitudes which comprised the work ethic up until that time, a central component was the promise of psychological reward for efforts in one’s work, but the factory system did little to support a sense of purpose or self-fulfillment for those who were on the assembly lines.

The factory system also threatened the promise of economic reward–another key premise of the work ethic. The output of products manufactured by factories was so great that by the 1880’s industrial capacity exceeded that which the economy could absorb (Rodgers, 1978). Under the system of home and workshop industries, production had been a virtue, and excess goods were not a problem. Now that factories could produce more than the nation could use, hard work and production no longer always provided assurance of prosperity.

In the first half of the twentieth century, the industrial system continued to dominate work in America and much of the rest of the world. Technology continued to advance, but innovation tended to be focused on those areas of manufacture which had not yet been mastered by machines. Little was done to change the routine tasks of feeding materials into automated equipment or other forms of semiskilled labor which were more economically done by low wage workers (Rodgers, 1978).

9. The Work Ethic and Industrial Management

Management of industries became more systematic and structured as increased competition forced factory owners to hold costs down. The model of management which developed, the traditional model, was characterized by a very authoritarian style which did not acknowledge the work ethic. To the contrary, Daft and Steers (1986) described this model as holding “that the average worker was basically lazy and was motivated almost entirely by money” (p. 93). Workers were assumed to neither desire nor be capable of autonomous or self-directed work. As a result, the scientific management concept was developed, predicated on specialization and division of jobs into simple tasks. Scientific management was claimed to increase worker production and result in increased pay. It was therefore seen as beneficial to workers, as well as to the company, since monetary gain was viewed as the primary motivating factor for both.

As use of scientific management became more widespread in the early 1900’s, it became apparent that factors other than pay were significant to worker motivation. Some workers were self-starters and didn’t respond well to close supervision and others became distrustful of management when pay increases failed to keep pace with improved productivity (Daft and Steers, 1986). Although unacknowledged in management practice, these were indicators of continued viability of the work ethic in employees.

By the end of World War II scientific management was considered inadequate and outdated to deal with the needs of industry (Jaggi, 1988). At this point the behaviorist school of thought emerged to provide alternative theories for guiding the management of workers. Contrary to the principles of scientific management, the behaviorists argued that workers were not intrinsically lazy. They were adaptive. If the environment failed to provide a challenge, workers became lazy, but if appropriate opportunities were provided, workers would become creative and motivated.

In response to the new theories, managers turned their attention to finding various ways to make jobs more fulfilling for workers. Human relations became an important issue and efforts were made to make people feel useful and important at work. Company newspapers, employee awards, and company social events were among the tools used by management to enhance the job environment (Daft and Steers, 1986), but the basic nature of the workplace remained unchanged. The adversarial relationship between employee and employer persisted.

In the late 1950’s job enrichment theories began to provide the basis for fundamental changes in employer-employee relationships. Herzberg, Mausner, and Snyderman (1959) identified factors such as achievement, recognition, responsibility, advancement, and personal growth which, when provided as an intrinsic component of a job, tended to motivate workers to perform better. Factors such as salary, company policies, supervisory style, working conditions, and relations with fellow workers tended to impair worker performance if inadequately provided for, but did not particularly improve worker motivation when present.

In 1960, when the concepts of theory “X” and theory “Y” were introduced by McGregor, the basis for a management style conducive to achieving job enrichment for workers was provided (Jaggi, 1988). Theory “X” referred to the authoritarian management style characteristic of scientific management but theory “Y” supported a participatory style of management.

Jaggi (1988) defined participatory management as “a cooperative process in which management and workers work together to accomplish a common goal (p. 446).” Unlike authoritarian styles of management, which provided top-down, directive control over workers assumed to be unmotivated and in need of guidance, participatory management asserted that worker involvement in decisionmaking provided valuable input and enhanced employee satisfaction and morale. Yankelovich and Immerwahr (1984) described participatory management as a system which would open the way for the work ethic to be a powerful resource in the workplace. They stated, however, that the persistence of the traditional model in American management discouraged workers, even though many wanted to work hard and do good work for its own sake.

10. The Work Ethic in the Information Age

Just as the people of the mid-nineteenth century encountered tremendous cultural and social change with the dawn of the industrial age, the people of the late twentieth century experienced tremendous cultural and social shifts with the advent of the information age. Toffler (1980) likened these times of change to waves washing over the culture, bringing with them changes in norms and expectations, as well as uncertainty about the future.

Since 1956 (Naisbitt, 1984) white-collar workers in technical, managerial, and clerical positions have outnumbered workers in blue-collar jobs. Porat (1977), in a study for the U.S. Department of Commerce, examined over 400 occupations in 201 industries. He determined that in 1967, the economic contribution of jobs primarily dealing with production of information, as compared with goods-producing jobs, accounted for 46% of the GNP and more than 53% of the income earned. Some jobs in manufacturing and industry also became more technical and necessitated a higher level of thinking on the job as machines were interfaced with computers and control systems became more complex.

Yankelovich and Immerwahr (1984) contrasted the work required of most people during the industrial age with the work of the information age. Industrial age jobs were typically low-discretion, required little decisionmaking, and were analyzed and broken into simple tasks which required very little thinking or judgement on the part of workers. Information age jobs, in contrast, were high-discretion and required considerable thinking and decisionmaking on the part of workers (Miller, 1986). In the workplace characterized by high-discretion, the work ethic became a much more important construct than it was during the manipulative era of machines. Maccoby (1988) emphasized the importance, in this setting, of giving employees authority to make decisions which would meet the needs of customers as well as support the goals of their own companies.

As high-discretion, information age jobs provided opportunities for greater self-expression by workers, people began to find more self-fulfillment in their work. Yankelovich and Harmon (1988) reported that a significant transformation in the meaning of the work ethic resulted. Throughout history, work had been associated with pain, sacrifice, and drudgery. The previously mentioned Greek word for work, ponos, also meant “pain.” For the Hebrews as well as for the medieval Christians, the unpleasantness of work was associated with Divine punishment for man’s sin. The Protestant ethic maintained that work was a sacrifice that demonstrated moral worthiness, and it stressed the importance of postponed gratification. With the information age, however, came work which was perceived as good and rewarding in itself. Most workers were satisfied with their work and wanted to be successful in it (Wattenberg, 1984).

According to Yankelovich and Harmon (1988), the work ethic of the 1980’s stressed skill, challenge, autonomy, recognition, and the quality of work produced. Autonomy was identified as a particularly important factor in workers’ satisfaction with their jobs. Motivation to work involved trust, caring, meaning, self-knowledge, challenge, opportunity for personal growth, and dignity (Maccoby, 1988; Walton, 1974). Workers were seeking control over their work and a sense of empowerment, and many information age jobs were conducive to meeting these needs. As a result, the work ethic was not abandoned during the information age, but was transformed to a state of relevance not found in most industrial age occupations.

Even though the information age was well established by the 1980’s and 1990’s, not all jobs were high-discretion. Some occupations continued to consist primarily of manual labor and allowed minimal opportunity for worker involvement in decisionmaking. In addition, authoritarian forms of management continued to be utilized and the potential of the work ethic was wasted. Statistics reported by Yankelovich and Immerwahr (1984) indicated that by the early 1980’s, 43% of the workforce perceived their jobs as high-discretion and 21% of the workforce perceived their jobs as low-discretion. The high-discretion workers were likely to be better educated, to be in white-collar or service jobs, and to have experienced technological changes in their work. The low-discretion workers were more likely to be union members, to be in blue-collar jobs, and to be working in positions characterized by dirt, noise, and pollution.

11. The Work Ethic and Empowerment

As a result of the rapid changes associated with the Information Age workplace, codified and systematized knowledge not limited to a specific organizational context was important during the 1980’s and 1990’s (Maccoby, 1983). Higher levels of education became necessary along with skills at solving problems, managing people, and applying the latest information to the tasks at hand. With increased education, higher expectations and aspirations for careers emerged.

Young people, in particular, entering the workforce with high school and college educations, expected opportunities for advancement (Maccoby, 1983; Sheehy, 1990). They anticipated that talent and hard work would be the basis for success rather than chance or luck. In essence, information age workers expected application of a positive work ethic to result in rewards, and they sometimes became impatient if progress was not experienced in a relatively short period of time (Sheehy, 1990).

For workers who acquired positions of supervision or ownership, motivation to accomplish personal goals through success in the organization enhanced the expression of work ethic attributes. Barnard (1938) identified the process of persons in an organization coordinating their activities to attain common goals as important to the well-being of the organization. One of the essential elements for this process was the creation and allocation of satisfaction among individuals (Barnard, 1938).

Further explanation for organizational behavior was provided by a model developed by Getzels and Guba (Getzels, 1968). The major elements of the model were institution, role, and expectation which formed the normative dimension of activity in a social system; and individual, personality, and need-disposition which constituted the personal dimension of activity in a social system (Getzels, 1968). To the extent that a person’s work ethic beliefs influenced personality and need-disposition, the observed behavior of that individual within the context of the workplace would be affected. Particularly in the high-discretion workplace of the information age, role and expectations found within the workplace would tend to be reinforced by a strong work ethic.

12. Other Changes in the Workplace

Besides changes in the jobs people performed, changes in the levels of education required for those jobs, and changes in the extent to which people were given control or empowerment in their work, the workforce of the 1980’s and 1990’s reflected a larger number of women and a reduced number of workers older than 65. Changes in gender and age of workers had a significant impact on the culture of the later twentieth century and influenced the pattern of work related norms such as the work ethic.

Rodgers (1978) told of the growing restlessness of women in the late 1800’s and the early 1900’s. As the economic center of society was moved out of the home or workshop and into the factory, women were left behind. Some women became operatives in textile mills, office workers, or salesclerks, and increased numbers were employed as teachers (Sawhill, 1974). Women comprised a relatively small percentage of the workforce, however, and their wages were about half that of men. Those who labored at housework and child-rearing received no pay at all and often were afforded little respect or appreciation for what they did.

It was not until World War II and the years following that women began to enter the workplace in great numbers. In 1900 women made up 18% of the nation’s workforce, but by 1947 they comprised 28% of the workforce (Levitan & Johnson, 1983). By 1980 42.5% of the nation’s workers were women (Stencel, 1981). In 1990 the number of women workers was approaching 50% of the workforce, and Naisbitt and Aburdene (1990) reported that women held 39.3% of all executive, administrative, and management jobs. Due to the increase in the number of women working outside the home, their attitudes about work have become a significant influence on the work ethic in the contemporary workplace.

Comparisons of attitudes of men and women in the workplace have shown that men tended to be more concerned with earning a good income, having freedom from close supervision, having leadership opportunities, and having a job that enhanced their social status. Women were inclined to seek job characteristics which allowed them to help others, to be original and creative, to progress steadily in their work, and to work with people rather than things (Lyson, 1984). Women, more than men, also tended to seek personal benefits such as enjoyment, pride, fulfillment, and personal challenge (Bridges, 1989).

Another trend which shaped the workforce of the later twentieth century was an increase in the number of older workers who retired from their jobs. Statistics reported by Quinn (1983) showed that in 1950, persons 65 years old and older comprised 45.8% of the workforce as compared to 18.4% in 1981. Part of this trend can be explained by the continued shift away from agriculture and self-employment–occupations which traditionally had high older worker participation rates. In addition, increased provision for retirement income, as a result of pensions or other retirement plans, has removed the financial burden which necessitated work for many older adults in the past.

Deans (1972) noted a trend on the part of younger workers to view work differently than older workers. He found less acceptance, among young people entering the workforce, of the concept that hard work was a virtue and a duty and less upward striving by young workers compared to that of their parents and grandparents. Yankelovich (1981) reported findings which contradicted the view that younger workers were less committed to the work ethic, but he did find a decline in belief that hard work would pay off. This was a significant shift because pay and “getting ahead” were the primary incentives management used to encourage productivity during the industrial age. If economic reward had lost its ability to motivate workers, then productivity could be expected to decline,
in the absence of some other reason for working hard (Yankelovich, 1981). Within this context, the work ethic, and a management style which unfettered it, were significant factors in maintaining and increasing performance.

13. Influences Shaping the Contemporary Work Ethic

The work ethic is a cultural norm that places a positive moral value on doing a good job and is based on a belief that work has intrinsic value for its own sake (Cherrington, 1980; Quinn, 1983; Yankelovich & Immerwahr, 1984). Like other cultural norms, a person’s adherence to or belief in the work ethic is principally influenced by socialization experiences during childhood and adolescence. Through interaction with family, peers, and significant adults, a person “learns to place a value on work behavior as others approach him in situations demanding increasing responsibility for productivity” (Braude, 1975, p. 134). Based on praise or blame and affection or anger, a child appraises his or her performance in household chores, or later in part-time jobs, but this appraisal is based on the perspective of others. As a child matures, these attitudes toward work become internalized, and work performance is less dependent on the reactions of others.

Children are also influenced by the attitudes of others toward work (Braude, 1975). If a parent demonstrates a dislike for a job or a fear of unemployment, children will tend to assimilate these attitudes. Parents who demonstrate a strong work ethic tend to impart a strong work ethic to their children.

Another significant factor shaping the work attitudes of people is the socialization which occurs in the workplace. As a person enters the workplace, the perceptions and reactions of others tend to confirm or contradict the work attitudes shaped in childhood (Braude, 1975). The occupational culture, especially the influence of an “inner fraternity” of colleagues, has a significant impact on the attitudes toward work and the work ethic which form part of each person’s belief system.

Among the mechanisms provided by society to transfer the culture to young people is the public school. One of the functions of schools is to foster student understanding of cultural norms, and in some cases to recognize the merits of accepting them. Vocational education,
for example, has as a stated goal that it will promote the work ethic (Gregson, 1991; Miller, 1985). Reubens (1974) listed “inculcation of good work attitudes” as one of the highest priorities for high school education. In the absence of early socialization which supports good work attitudes, schools should not be expected to completely transform a young person’s work ethic orientation, but enlightening students about what the work ethic is, and why it is important to success in the contemporary workplace, should be a component of secondary education.

The trouble with outsourcing

I totally agree with this article about the problems of outsourcing. Only simple, repetitive tasks that are easy to QA are suitable for outsourcing. For complex tasks, it takes more time to write the contracts and specifications for outsourcing than to do the tasks yourself.

Jul 30th 2011, The Economist
Outsourcing is sometimes more hassle than it is worth

WHEN Ford’s River Rouge Plant was completed in 1928 it boasted everything it needed to turn raw materials into finished cars: 100,000 workers, 16m square feet of factory floor, 100 miles of railway track and its own docks and furnaces. Today it is still Ford’s largest plant, but only a shadow of its former glory. Most of the parts are made by sub-contractors and merely fitted together by the plant’s 6,000 workers. The local steel mill is run by a Russian company, Severstal.

Outsourcing has transformed global business. Over the past few decades companies have contracted out everything from mopping the floors to spotting the flaws in their internet security. TPI, a company that specialises in the sector, estimates that $100 billion-worth of new contracts are signed every year. Oxford Economics reckons that in Britain, one of the world’s most mature economies, 10% of workers toil away in “outsourced” jobs and companies spend $200 billion a year on outsourcing. Even war is being outsourced: America employs more contract workers in Afghanistan than regular troops.

Can the outsourcing boom go on indefinitely? And is the practice as useful as its advocates claim, or is the popular suspicion that it leads to cut corners and dismal service correct? There are signs that outsourcing often goes wrong, and that companies are rethinking their approach to it.

The latest TPI quarterly index of outsourcing (which measures commercial contracts of $25m or more) suggests that the total value of such contracts for the second quarter of 2011 fell by 18% compared with the second quarter of 2010. Dismal figures in the Americas (ie, mostly the United States) dragged down the average: the value of contracts there was 50% lower in the second quarter of 2011 than in the first half of 2010. This is partly explained by America’s gloomy economy, but even more by the maturity of the market: TPI suspects that much of what can sensibly be outsourced already has been.

Miles Robinson of Mayer Brown, a law firm, notes that there has also been an uptick in legal disputes over outsourcing. In one case EDS, an IT company, had to pay BSkyB, a media company, £318m ($469m) in damages. The two firms spent an estimated £70m on legal fees and were tied up in court for five months. Such nightmares are worse in India, where the courts move with Dickensian speed, or in China, where the legal system is patchy. And since many disputes stay out of court, the well of discontent with outsourcing is surely deeper than the legal record shows.

Some of the worst business disasters of recent years have been caused or aggravated by outsourcing. Eight years ago Boeing, America’s biggest aeroplane-maker, decided to follow the example of car firms and hire contractors to do most of the grunt work on its new 787 Dreamliner. The result was a nightmare. Some of the parts did not fit together. Some of the dozens of sub-contractors failed to deliver their components on time, despite having sub-contracted their work to sub-sub-contractors. Boeing had to take over some of the sub-contractors to prevent them from collapsing. If the Dreamliner starts rolling off the production line towards the end of this year, as Boeing promises, it will be billions over budget and three years behind schedule.

Outsourcing can go wrong in a colourful variety of ways. Sometimes companies squeeze their contractors so hard that they are forced to cut corners. (This is a big problem in the car industry, where a handful of global firms can bully the 80,000 parts-makers.) Sometimes vendors overpromise in order to win a contract and then fail to deliver. Sometimes both parties write sloppy contracts. And some companies undermine their overall strategies with injudicious outsourcing. Service companies, for example, contract out customer complaints to foreign call centres and then wonder why their customers hate them.

When outsourcing goes wrong, it is the devil to put right. When companies outsource a job, they typically eliminate the department that used to do it. They become entwined with their contractors, handing over sensitive material and inviting contractors to work alongside their own staff. Extricating themselves from this tangle can be tough. It is much easier to close a department than to rebuild it. Sacking a contractor can mean that factories grind to a halt, bills languish unpaid and chaos mounts.

None of this means that companies are going to re-embrace the River Rouge model any time soon. Some companies, such as Boeing, are bringing more work back in-house, in the jargon. But the business logic behind outsourcing remains compelling, so long as it is done right. Many tasks are peripheral to a firm’s core business and can be done better and more cheaply by specialists. Cleaning is an obvious example; many back-office jobs also fit the bill. Outsourcing firms offer labour arbitrage, using cheap Indians to enter data rather than expensive Swedes. They can offer economies of scale, too. TPI points out that, for all the problems in America, outsourcing is continuing to grow in emerging markets and, more surprisingly, in Europe, where Germany and France are late converts to the idea.

Companies are rethinking outsourcing, rather than jettisoning it. They are dumping huge long-term deals in favour of smaller, less rigid ones. The annualised value of “mega-relationships” worth $100m or more a year fell by 62% this year compared with last. Companies are forming relationships with several outsourcers, rather than putting all their eggs in few baskets. They are signing shorter contracts, too. But still, they need to think harder about what is their core business, and what is peripheral. And above all, newspaper editors need to say no to the temptation to outsource business columns to cheaper, hungrier writers.

CREATION MYTH

It would be nice to work in an environment like Xerox PARC, with total freedom to research and build with no budget limitations. Unfortunately, in the ASIC world we are always squeezed by schedule and resource constraints and don't get much room to innovate. I share the same feeling as the inventor of the laser printer: management is often short-sighted, so I have to develop much of my work behind the curtain. I can only unveil it when there is a working prototype with a clear benefit over the previous work.

By Malcolm Gladwell, The New Yorker 87.13 (May 16, 2011)

In late 1979, a twenty-four-year-old entrepreneur paid a visit to a research center in Silicon Valley called Xerox PARC. He was the co-founder of a small computer startup down the road, in Cupertino. His name was Steve Jobs.

Xerox PARC was the innovation arm of the Xerox Corporation. It was, and remains, on Coyote Hill Road, in Palo Alto, nestled in the foothills on the edge of town, in a long, low concrete building, with enormous terraces looking out over the jewels of  Silicon Valley. To the northwest was Stanford University’s Hoover Tower. To the north was Hewlett-Packard’s sprawling campus. All around were scores of the other chip designers, software firms, venture capitalists, and hardware-makers. A visitor to PARC, taking in that view, could easily imagine that it was the computer world’s castle, lording over the valley below–and, at the time, this wasn’t far from the truth. In 1970, Xerox had assembled the world’s greatest computer engineers and programmers, and for the next ten years they had an unparalleled run of innovation and invention. If you were obsessed with the future in the seventies, you were obsessed with Xerox PARC–which was why the young Steve Jobs had driven to Coyote Hill Road.

Apple was already one of the hottest tech firms in the country. Everyone in the Valley wanted a piece of it. So Jobs proposed a deal: he would allow Xerox to buy a hundred thousand shares of his company for a million dollars–its highly anticipated I.P.O. was just a year away–if PARC would “open its kimono.” A lot of haggling ensued. Jobs was the fox, after all, and PARC was the henhouse. What would he be allowed to see? What wouldn’t he be allowed to see? Some at PARC thought that the whole idea was lunacy, but, in the end, Xerox went ahead with it. One PARC scientist recalls Jobs as “rambunctious”–a fresh-cheeked, caffeinated version of today’s austere digital emperor. He was given a couple of tours, and he ended up standing in front of a Xerox Alto, PARC’s prized personal computer.

An engineer named Larry Tesler conducted the demonstration. He moved the cursor across the screen with the aid of a “mouse.” Directing a conventional computer, in those days, meant typing in a command on the keyboard. Tesler just clicked on one of the icons on the screen. He opened and closed “windows,” deftly moving from one task to another. He wrote on an elegant word-processing program, and exchanged e-mails with other people at PARC, on the world’s first Ethernet network. Jobs had come with one of his software engineers, Bill Atkinson, and Atkinson moved in as close as he could, his nose almost touching the screen. “Jobs was pacing around the room, acting up the whole time,” Tesler recalled. “He was very excited. Then, when he began seeing the things I could do onscreen, he watched for about a minute and started jumping around the room, shouting, ‘Why aren’t you doing anything with this? This is the greatest thing. This is revolutionary!’ ”

Xerox began selling a successor to the Alto in 1981. It was slow and underpowered–and Xerox ultimately withdrew from personal computers altogether. Jobs, meanwhile, raced back to Apple, and demanded that the team working on the company’s next generation of personal computers change course. He wanted menus on the screen. He wanted windows. He wanted a mouse. The result was the Macintosh, perhaps the most famous product in the history of Silicon Valley.

“If Xerox had known what it had and had taken advantage of its real opportunities,” Jobs said, years later, “it could have been as big as I.B.M. plus Microsoft plus Xerox combined–and the largest high-technology company in the world.”

This is the legend of Xerox PARC. Jobs is the Biblical Jacob and Xerox is Esau, squandering his birthright for a pittance. In the past thirty years, the legend has been vindicated by history. Xerox, once the darling of the American high-technology community, slipped from its former dominance. Apple is now ascendant, and the demonstration in that room in Palo Alto has come to symbolize the vision and ruthlessness that separate true innovators from also-rans. As with all legends, however, the truth is a bit more complicated. After Jobs returned from PARC, he met with a man named Dean Hovey, who was one of the founders of the industrial-design firm that would become known as IDEO. “Jobs went to Xerox PARC on a Wednesday or a Thursday, and I saw him on the Friday afternoon,” Hovey recalled. “I had a series of ideas that I wanted to bounce off him, and I barely got two words out of my mouth when he said, ‘No, no, no, you’ve got to do a mouse.’ I was, like, ‘What’s a mouse?’ I didn’t have a clue. So he explains it, and he says, ‘You know, [the Xerox mouse] is a mouse that cost three hundred dollars to build and it breaks within two weeks. Here’s your design spec: Our mouse needs to be manufacturable for less than fifteen bucks. It needs to not fail for a couple of years, and I want to be able to use it on Formica and my bluejeans.’ From that meeting, I went to Walgreens, which is still there, at the corner of Grant and El Camino in Mountain View, and I wandered around and bought all the underarm deodorants that I could find, because they had that ball in them. I bought a butter dish. That was the beginnings of the mouse.”

I spoke with Hovey in a ramshackle building in downtown Palo Alto, where his firm had started out. He had asked the current tenant if he could borrow his old office for the morning, just for the fun of telling the story of the Apple mouse in the place where it was invented. The room was the size of someone’s bedroom. It looked as if it had last been painted in the Coolidge Administration. Hovey, who is lean and healthy in a Northern California yoga-and-yogurt sort of way, sat uncomfortably at a rickety desk in a corner of the room. “Our first machine shop was literally out on the roof,” he said, pointing out the window to a little narrow strip of rooftop, covered in green outdoor carpeting. “We didn’t tell the planning commission. We went and got that clear corrugated stuff and put it across the top for a roof. We got out through the window.” He had brought a big plastic bag full of the artifacts of that moment: diagrams scribbled on lined paper, dozens of differently sized plastic mouse shells, a spool of guitar wire, a tiny set of wheels from a toy train set, and the metal lid from a jar of Ralph’s preserves. He turned the lid over. It was filled with a waxlike substance, the middle of which had a round indentation, in the shape of a small ball. “It’s epoxy casting resin,” he said. “You pour it, and then I put Vaseline on a smooth steel ball, and set it in the resin, and it hardens around it.” He tucked the steel ball underneath the lid and rolled it around the tabletop. “It’s a kind of mouse.” The hard part was that the roller ball needed to be connected to the housing of the mouse, so that it didn’t fall out, and so that it could transmit information about its movements to the cursor on the screen. But if the friction created by those connections was greater than the friction between the tabletop and the roller ball, the mouse would skip. And the more the mouse was used the more dust it would pick up off the tabletop, and the more it would skip. The Xerox PARC mouse was an elaborate affair, with an array of ball bearings supporting the roller ball. But there was too much friction on the top of the ball, and it couldn’t deal with dust and grime. At first, Hovey set to work with various arrangements of ball bearings, but nothing quite worked. “This was the ‘aha’ moment,” Hovey said, placing his fingers loosely around the sides of the ball, so that they barely touched its surface. “So the ball’s sitting here. And it rolls. I attribute that not to the table but to the oldness of the building. The floor’s not level. So I started playing with it, and that’s when I realized: I want it to roll. I don’t want it to be supported by all kinds of ball bearings. I want to just barely touch it.”

The trick was to connect the ball to the rest of the mouse at the two points where there was the least friction– right where his fingertips had been, dead center on either side of the ball. “If it’s right at midpoint, there’s no force causing it to rotate. So it rolls.”

Hovey estimated their consulting fee at thirty-five dollars an hour; the whole project cost perhaps a hundred thousand dollars. “I originally pitched Apple on doing this mostly for royalties, as opposed to a consulting job,” he recalled. “I said, ‘I’m thinking fifty cents apiece,’ because I was thinking that they’d sell fifty thousand, maybe a hundred thousand of them.” He burst out laughing, because of how far off his estimates ended up being. “Steve’s pretty savvy. He said no. Maybe if I’d asked for a nickel, I would have been fine.” Here is the first complicating fact about the Jobs visit. In the legend of Xerox PARC, Jobs stole the personal computer from Xerox. But the striking thing about Jobs’s instructions to Hovey is that he didn’t want to reproduce what he saw at PARC. “You know, there were disputes around the number of buttons–three buttons, two buttons, one-button mouse,” Hovey went on. “The mouse at Xerox had three buttons. But we came around to the fact that learning to mouse is a feat in and of itself, and to make it as simple as possible, with just one button, was pretty important.”

So was what Jobs took from Xerox the idea of the mouse? Not quite, because Xerox never owned the idea of the mouse. The PARC researchers got it from the computer scientist Douglas Engelbart, at Stanford Research Institute, fifteen minutes away on the other side of the university campus. Engelbart dreamed up the idea of moving the cursor around the screen with a stand-alone mechanical “animal” back in the mid- nineteen-sixties. His mouse was a bulky, rectangular affair, with what looked like steel roller-skate wheels. If you lined up Engelbart’s mouse, Xerox’s mouse, and Apple’s mouse, you would not see the serial reproduction of an object. You would see the evolution of a concept.

The same is true of the graphical user interface that so captured Jobs’s imagination. Xerox PARC’s innovation had been to replace the traditional computer command line with onscreen icons. But when you clicked on an icon you got a pop-up menu: this was the intermediary between the user’s intention and the computer’s response. Jobs’s software team took the graphical interface a giant step further. It emphasized “direct manipulation.” If you wanted to make a window bigger, you just pulled on its corner and made it bigger; if you wanted to move a window across the screen, you just grabbed it and moved it. The Apple designers also invented the menu bar, the pull-down menu, and the trash can–all features that radically simplified the original Xerox PARC idea.

The difference between direct and indirect manipulation–between three buttons and one button, three hundred dollars and fifteen dollars, and a roller ball supported by ball bearings and a free-rolling ball–is not trivial. It is the difference between something intended for experts, which is what Xerox PARC had in mind, and something that’s appropriate for a mass audience, which is what Apple had in mind. PARC was building a personal computer. Apple wanted to build a popular computer.

In a recent study, “The Culture of Military Innovation,” the military scholar Dima Adamsky makes a similar argument about the so-called Revolution in Military Affairs. R.M.A. refers to the way armies have transformed themselves with the tools of the digital age–such as precision-guided missiles, surveillance drones, and real-time command, control, and communications technologies–and Adamsky begins with the simple observation that it is impossible to determine who invented R.M.A. The first people to imagine how digital technology would transform warfare were a cadre of senior military intellectuals in the Soviet Union, during the nineteen-seventies. The first country to come up with these high-tech systems was the United States. And the first country to use them was Israel, in its 1982 clash with the Syrian Air Force in Lebanon’s Bekaa Valley, a battle commonly referred to as “the Bekaa Valley turkey shoot.” Israel coordinated all the major innovations of R.M.A. in a manner so devastating that it destroyed nineteen surface-to-air batteries and eighty-seven Syrian aircraft while losing only a handful of its own planes.

That’s three revolutions, not one, and Adamsky’s point is that each of these strands is necessarily distinct, drawing on separate skills and circumstances. The Soviets had a strong, centralized military bureaucracy, with a long tradition of theoretical analysis. It made sense that they were the first to understand the military implications of new information systems. But they didn’t do anything with it, because centralized military bureaucracies with strong intellectual traditions aren’t very good at connecting word and deed. The United States, by contrast, has a decentralized, bottom-up entrepreneurial culture, which has historically had a strong orientation toward technological solutions. The military’s close ties to the country’s high-tech community made it unsurprising that the U.S. would be the first to invent precision-guidance and next-generation command-and-control communications. But those assets also meant that Soviet-style systemic analysis wasn’t going to be a priority. As for the Israelis, their military culture grew out of a background of resource constraint and constant threat. In response, they became brilliantly improvisational and creative. But, as Adamsky points out, a military built around urgent, short-term “fire extinguishing” is not going to be distinguished by reflective theory. No one stole the revolution. Each party viewed the problem from a different perspective, and carved off a different piece of the puzzle.

In the history of the mouse, Engelbart was the Soviet Union. He was the visionary, who saw the mouse before anyone else did. But visionaries are limited by their visions. “Engelbart’s self-defined mission was not to produce a product, or even a prototype; it was an open-ended search for knowledge,” Michael Hiltzik writes, in “Dealers of Lightning” (1999), his wonderful history of Xerox PARC. “Consequently, no project in his lab ever seemed to come to an end.” Xerox PARC was the United States: it was a place where things got made. “Xerox created this perfect environment,” recalled Bob Metcalfe, who worked there through much of the nineteen-seventies, before leaving to found the networking company 3Com. “There wasn’t any hierarchy. We built out our own tools. When we needed to publish papers, we built a printer. When we needed to edit the papers, we built a computer. When we needed to connect computers, we figured out how to connect them. We had big budgets. Unlike many of our brethren, we didn’t have to teach. We could just research. It was heaven.” But heaven is not a good place to commercialize a product. “We built a computer and it was a beautiful thing,” Metcalfe went on. “We developed our computer language, our own display, our own language. It was a gold-plated product. But it cost sixteen thousand dollars, and it needed to cost three thousand dollars.” For an actual product, you need threat and constraint–and the improvisation and creativity necessary to turn a gold-plated three-hundred-dollar mouse into something that works on Formica and costs fifteen dollars. Apple was Israel. Xerox couldn’t have been I.B.M. and Microsoft combined, in other words. “You can be one of the most successful makers of enterprise technology products the world has ever known, but that doesn’t mean your instincts will carry over to the consumer market,” the tech writer Harry McCracken recently wrote. “They’re really different, and few companies have ever been successful in both.” He was talking about the decision by the networking giant Cisco Systems, this spring, to shut down its Flip camera business, at a cost of many hundreds of millions of dollars. But he could just as easily have been talking about the Xerox of forty years ago, which was one of the most successful makers of enterprise technology the world has ever known. The fair question is whether Xerox, through its research arm in Palo Alto, found a better way to be Xerox–and the answer is that it did, although that story doesn’t get told nearly as often.

One of the people at Xerox PARC when Steve Jobs visited was an optical engineer named Gary Starkweather. He is a solid and irrepressibly cheerful man, with large, practical hands and the engineer’s gift of pretending that what is impossibly difficult is actually pretty easy, once you shave off a bit here, and remember some of your high-school calculus, and realize that the thing that you thought should go in left to right should actually go in right to left. Once, before the palatial Coyote Hill Road building was constructed, a group that Starkweather had to be connected to was moved to another building, across the Foothill Expressway, half a mile away. There was no way to run a cable under the highway. So Starkweather fired a laser through the air between the two buildings, an improvised communications system that meant that, if you were driving down the Foothill Expressway on a foggy night and happened to look up, you might see a mysterious red beam streaking across the sky. When a motorist drove into the median ditch, “we had to turn it down,” Starkweather recalled, with a mischievous smile.

Lasers were Starkweather’s specialty. He started at Xerox’s East Coast research facility in Webster, New York, outside Rochester. Xerox built machines that scanned a printed page of type using a photographic lens, and then printed a duplicate. Starkweather’s idea was to skip the first step–to run a document from a computer directly into a photocopier, by means of a laser, and turn the Xerox machine into a printer. It was a radical idea. The printer, since Gutenberg, had been limited to the function of re-creation: if you wanted to print a specific image or letter, you had to have a physical character or mark corresponding to that image or letter. What Starkweather wanted to do was take the array of bits and bytes, ones and zeros that constitute digital images, and transfer them straight into the guts of a copier. That meant, at least in theory, that he could print anything.

“One morning, I woke up and I thought, Why don’t we just print something out directly?” Starkweather said. “But when I flew that past my boss he thought it was the most brain-dead idea he had ever heard. He basically told me to find something else to do. The feeling was that lasers were too expensive. They didn’t work that well. Nobody wants to do this, computers aren’t powerful enough. And I guess, in my naivete, I kept thinking, He’s just not right–there’s something about this I really like. It got to be a frustrating situation. He and I came to loggerheads over the thing, about late 1969, early 1970. I was running my experiments in the back room behind a black curtain. I played with them when I could. He threatened to lay off my people if I didn’t stop. I was having to make a decision: do I abandon this, or do I try and go up the ladder with it?” Then Starkweather heard that Xerox was opening a research center in Palo Alto, three thousand miles away from its New York headquarters. He went to a senior vice-president of Xerox, threatening to leave for I.B.M. if he didn’t get a transfer. In January of 1971, his wish was granted, and, within ten months, he had a prototype up and running.

Starkweather is retired now, and lives in a gated community just north of Orlando, Florida. When we spoke, he was sitting at a picnic table, inside a screened-in porch in his back yard. Behind him, golfers whirred by in carts. He was wearing white chinos and a shiny black short-sleeved shirt, decorated with fluorescent images of vintage hot rods. He had brought out two large plastic bins filled with the artifacts of his research, and he spread the contents on the table: a metal octagonal disk, sketches on lab paper, a black plastic laser housing that served as the innards for one of his printers.

“There was still a tremendous amount of opposition from the Webster group, who saw no future in computer printing,” he went on. “They said, ‘I.B.M. is doing that. Why do we need to do that?’ and so forth. Also, there were two or three competing projects, which I guess I have the luxury of calling ridiculous. One group had fifty people and another had twenty. I had two.” Starkweather picked up a picture of one of his in-house competitors, something called an “optical carriage printer.” It was the size of one of those modular Italian kitchen units that you see advertised in fancy design magazines. “It was an unbelievable device,” he said, with a rueful chuckle. “It had a ten-inch drum, which turned at five thousand r.p.m., like a super washing machine. It had characters printed on its surface. I think they only ever sold ten of them. The problem was that it was spinning so fast that the drum would blow out and the characters would fly off. And there was only this one lady in Troy, New York, who knew how to put the characters on so that they would stay.

“So we finally decided to have what I called a fly-off. There was a full page of text–where some of them were non-serif characters, Helvetica, stuff like that–and then a page of graph paper with grid lines, and pages with pictures and some other complex stuff–and everybody had to print all six pages. Well, once we decided on those six pages, I knew I’d won, because I knew there wasn’t anything I couldn’t print. Are you kidding? If you can translate it into bits, I can print it. Some of these other machines had to go through hoops just to print a curve. A week after the fly-off, they folded those other projects. I was the only game in town.” The project turned into the Xerox 9700, the first high-speed, cut-paper laser printer in the world.

In one sense, the Starkweather story is of a piece with the Steve Jobs visit. It is an example of the imaginative poverty of Xerox management. Starkweather had to hide his laser behind a curtain. He had to fight for his transfer to PARC. He had to endure the indignity of the fly-off, and even then Xerox management remained skeptical. The founder of PARC, Jack Goldman, had to bring in a team from Rochester for a personal demonstration. After that, Starkweather and Goldman had an idea for getting the laser printer to market quickly: graft a laser onto a Xerox copier called the 7000. The 7000 was an older model, and Xerox had lots of 7000s sitting around that had just come off lease. Goldman even had a customer ready: the Lawrence Livermore laboratory was prepared to buy a whole slate of the machines. Xerox said no. Then Starkweather wanted to make what he called a photo-typesetter, which produced camera-ready copy right on your desk. Xerox said no. “I wanted to work on higher-performance scanners,” Starkweather continued. “In other words, what if we print something other than documents? For example, I made a high-resolution scanner and you could print on glass plates.” He rummaged in one of the boxes on the picnic table and came out with a sheet of glass, roughly six inches square, on which a photograph of a child’s face appeared. The same idea, he said, could have been used to make “masks” for the semiconductor industry–the densely patterned screens used to etch the designs on computer chips. “No one would ever follow through, because Xerox said, ‘Now you’re in Intel’s market, what are you doing that for?’ They just could not seem to see that they were in the information business. This”–he lifted up the plate with the little girl’s face on it–“is a copy. It’s just not a copy of an office document.” But he got nowhere. “Xerox had been infested by a bunch of spreadsheet experts who thought you could decide every product based on metrics. Unfortunately, creativity wasn’t on a metric.”

A few days after that afternoon in his back yard, however, Starkweather e-mailed an addendum to his discussion of his experiences at PARC. “Despite all the hassles and risks that happened in getting the laser printer going, in retrospect the journey was that much more exciting,” he wrote. “Often difficulties are just opportunities in disguise.” Perhaps he felt that he had painted too negative a picture of his time at Xerox, or suffered a pang of guilt about what it must have been like to be one of those Xerox executives on the other side of the table. The truth is that Starkweather was a difficult employee. It went hand in hand with what made him such an extraordinary innovator. When his boss told him to quit working on lasers, he continued in secret. He was disruptive and stubborn and independent-minded–and he had a thousand ideas, and sorting out the good ideas from the bad wasn’t always easy. Should Xerox have put out a special order of laser printers for Lawrence Livermore, based on the old 7000 copier? In “Fumbling the Future: How Xerox Invented, Then Ignored, the First Personal Computer” (1988)–a book dedicated to the idea that Xerox was run by the blind–Douglas Smith and Robert Alexander admit that the proposal was hopelessly impractical: “The scanty Livermore proposal could not justify the investment required to start a laser printing business. . . . How and where would Xerox manufacture the laser printers? Who would sell and service them? Who would buy them and why?” Starkweather, and his compatriots at Xerox PARC, weren’t the source of disciplined strategic insights. They were wild geysers of creative energy.

The psychologist Dean Simonton argues that this fecundity is often at the heart of what distinguishes the truly gifted. The difference between Bach and his forgotten peers isn’t necessarily that he had a better ratio of hits to misses. The difference is that the mediocre might have a dozen ideas, while Bach, in his lifetime, created more than a thousand full-fledged musical compositions. A genius is a genius, Simonton maintains, because he can put together such a staggering number of insights, ideas, theories, random observations, and unexpected connections that he almost inevitably ends up with something great. “Quality,” Simonton writes, is “a probabilistic function of quantity.”

Simonton’s point is that there is nothing neat and efficient about creativity. “The more successes there are,” he says, “the more failures there are as well”–meaning that the person who had far more ideas than the rest of us will have far more bad ideas than the rest of us, too. This is why managing the creative process is so difficult. The making of the classic Rolling Stones album “Exile on Main Street” was an ordeal, Keith Richards writes in his new memoir, because the band had too many ideas. It had to fight from under an avalanche of mediocrity: “Head in the Toilet Blues,” “Leather Jackets,” “Windmill,” “I Was Just a Country Boy,” “Bent Green Needles,” “Labour Pains,” and “Pommes de Terre”–the last of which Richards explains with the apologetic, “Well, we were in France at the time.”

At one point, Richards quotes a friend, Jim Dickinson, remembering the origins of the song “Brown Sugar”: “I watched Mick write the lyrics. . . . He wrote it down as fast as he could move his hand. I’d never seen anything like it. He had one of those yellow legal pads, and he’d write a verse a page, just write a verse and then turn the page, and when he had three pages filled, they started to cut it. It was amazing.” Richards goes on to marvel, “It’s unbelievable how prolific he was.” Then he writes, “Sometimes you’d wonder how to turn the fucking tap off. The odd times he would come out with so many lyrics, you’re crowding the airwaves, boy.” Richards clearly saw himself as the creative steward of the Rolling Stones (only in a rock-and-roll band, by the way, can someone like Keith Richards perceive himself as the responsible one), and he came to understand that one of the hardest and most crucial parts of his job was to “turn the fucking tap off,” to rein in Mick Jagger’s incredible creative energy.

The more Starkweather talked, the more apparent it became that his entire career had been a version of this problem. Someone was always trying to turn his tap off. But someone had to turn his tap off: the interests of the innovator aren’t perfectly aligned with the interests of the corporation. Starkweather saw ideas on their own merits. Xerox was a multinational corporation, with shareholders, a huge sales force, and a vast corporate customer base, and it needed to consider every new idea within the context of what it already had. Xerox’s managers didn’t always make the right decisions when they said no to Starkweather. But he got to PARC, didn’t he? And Xerox, to its great credit, had a PARC–a place where, a continent away from the top managers, an engineer could sit and dream, and get every purchase order approved, and fire a laser across the Foothill Expressway if he was so inclined. Yes, he had to pit his laser printer against lesser ideas in the contest. But he won the contest. And, the instant he did, Xerox cancelled the competing projects and gave him the green light.

“I flew out there and gave a presentation to them on what I was looking at,” Starkweather said of his first visit to PARC. “They really liked it, because at the time they were building a personal computer, and they were beside themselves figuring out how they were going to get whatever was on the screen onto a sheet of paper. And when I showed them how I was going to put prints on a sheet of paper it was a marriage made in heaven.” The reason Xerox invented the laser printer, in other words, is that it invented the personal computer. Without the big idea, it would never have seen the value of the small idea. If you consider innovation to be efficient and ideas precious, that is a tragedy: you give the crown jewels away to Steve Jobs, and all you’re left with is a printer. But in the real, messy world of creativity, giving away the thing you don’t really understand for the thing that you do is an inevitable tradeoff.

“When you have a bunch of smart people with a broad enough charter, you will always get something good out of it,” Nathan Myhrvold, formerly a senior executive at Microsoft, argues. “It’s one of the best investments you could possibly make–but only if you chose to value it in terms of successes. If you chose to evaluate it in terms of how many times you failed, or times you could have succeeded and didn’t, then you are bound to be unhappy. Innovation is an unruly thing. There will be some ideas that don’t get caught in your cup. But that’s not what the game is about. The game is what you catch, not what you spill.”

In the nineteen-nineties, Myhrvold created a research laboratory at Microsoft modelled in part on what Xerox had done in Palo Alto in the nineteen-seventies, because he considered PARC a triumph, not a failure. “Xerox did research outside their business model, and when you do that you should not be surprised that you have a hard time dealing with it–any more than if some bright guy at Pfizer wrote a word processor. Good luck to Pfizer getting into the word-processing business. Meanwhile, the thing that they invented that was similar to their own business–a really big machine that spit paper out–they made a lot of money on it.” And so they did. Gary Starkweather’s laser printer made billions for Xerox. It paid for every other single project at Xerox PARC, many times over.

In 1988, Starkweather got a call from the head of one of Xerox’s competitors, trying to lure him away. It was someone whom he had met years ago. “The decision was painful,” he said. “I was a year from being a twenty-five-year veteran of the company. I mean, I’d done enough for Xerox that unless I burned the building down they would never fire me. But that wasn’t the issue. It’s about having ideas that are constantly squashed. So I said, ‘Enough of this,’ and I left.”

He had a good many years at his new company, he said. It was an extraordinarily creative place. He was part of decision-making at the highest level. “Every employee from technician to manager was hot for the new, exciting stuff,” he went on. “So, as far as buzz and daily environment, it was far and away the most fun I’ve ever had.” But it wasn’t perfect. “I remember I called in the head marketing guy and I said, ‘I want you to give me all the information you can come up with on when people buy one of our products–what software do they buy, what business are they in–so I can see the model of how people are using the machines.’ He looked at me and said, ‘I have no idea about that.’ ” Where was the rigor? Then Starkweather had a scheme for hooking up a high-resolution display to one of his new company’s computers. “I got it running and brought it into management and said, ‘Why don’t we show this at the tech expo in San Francisco? You’ll be able to rule the world.’ They said, ‘I don’t know. We don’t have room for it.’ It was that sort of thing. It was like me saying I’ve discovered a gold mine and you saying we can’t afford a shovel.”

He shrugged a little wearily. It was ever thus. The innovator says go. The company says stop–and maybe the only lesson of the legend of Xerox PARC is that what happened there happens, in one way or another, everywhere. By the way, the man who hired Gary Starkweather away to the company that couldn’t afford a shovel? His name was Steve Jobs.

Xerox PARC, Apple, and the truth about innovation.

Too much information

Applying the concept of a sprint from Agile development helps me cope with information overload. I block off a period of time, 2-3 hours, to concentrate on my work. I hide myself and disconnect from email and instant messages to avoid any interruption. I have also learned that almost nothing is so urgent that it cannot wait a few hours or a day or two. You just have to set the expectation that people cannot demand an instant response from you all the time.

Jun 30th 2011, The Economist
How to cope with data overload

GOOGLE “information overload” and you are immediately overloaded with information: more than 7m hits in 0.05 seconds. Some of this information is interesting: for example, that the phrase “information overload” was popularised by Alvin Toffler in 1970. Some of it is mere noise: obscure companies promoting their services and even more obscure bloggers sounding off. The overall impression is at once overwhelming and confusing.

“Information overload” is one of the biggest irritations in modern life. There are e-mails to answer, virtual friends to pester, YouTube videos to watch and, back in the physical world, meetings to attend, papers to shuffle and spouses to appease. A survey by Reuters once found that two-thirds of managers believe that the data deluge has made their jobs less satisfying or hurt their personal relationships. One-third think that it has damaged their health. Another survey suggests that most managers think most of the information they receive is useless.

Commentators have coined a profusion of phrases to describe the anxiety and anomie caused by too much information: “data asphyxiation” (William van Winkle), “data smog” (David Shenk), “information fatigue syndrome” (David Lewis), “cognitive overload” (Eric Schmidt) and “time famine” (Leslie Perlow). Johann Hari, a British journalist, notes that there is a good reason why “wired” means both “connected to the internet” and “high, frantic, unable to concentrate”.

These worries are exaggerated. Stick-in-the-muds have always complained about new technologies: the Victorians fussed that the telegraph meant that “the businessman of the present day must be continually on the jump.” And businesspeople have always had to deal with constant pressure and interruptions—hence the word “business”. In his classic study of managerial work in 1973 Henry Mintzberg compared managers to jugglers: they keep 50 balls in the air and periodically check on each one before sending it aloft once more.

Yet clearly there is a problem. It is not merely the dizzying increase in the volume of information (the amount of data being stored doubles every 18 months). It is also the combination of omnipresence and fragmentation. Many professionals are welded to their smartphones. They are also constantly bombarded with unrelated bits and pieces—a poke from a friend one moment, the latest Greek financial tragedy the next.

The data fog is thickening at a time when companies are trying to squeeze ever more out of their workers. A survey in America by Spherion Staffing discovered that 53% of workers had been compelled to take on extra tasks since the recession started. This dismal trend may well continue—many companies remain reluctant to hire new people even as business picks up. So there will be little respite from the dense data smog, which some researchers fear may be poisonous.

They raise three big worries. First, information overload can make people feel anxious and powerless: scientists have discovered that multitaskers produce more stress hormones. Second, overload can reduce creativity. Teresa Amabile of Harvard Business School has spent more than a decade studying the work habits of more than 9,000 people. She finds that focus and creativity are connected. People are more likely to be creative if they are allowed to focus on something for some time without interruptions. If constantly interrupted or forced to attend meetings, they are less likely to be creative. Third, overload can also make workers less productive. David Meyer, of the University of Michigan, has shown that people who complete certain tasks in parallel take much longer and make many more errors than people who complete the same tasks in sequence.

What can be done about information overload? One answer is technological: rely on the people who created the fog to invent filters that will clean it up. Xerox promises to restore “information sanity” by developing better filtering and managing devices. Google is trying to improve its online searches by taking into account more personal information. (Some people fret that this will breach their privacy, but it will probably deliver quicker, more accurate searches.) A popular computer program called “Freedom” disconnects you from the web at preset times.

A second answer involves willpower. Ration your intake. Turn off your mobile phone and internet from time to time.

But such ruses are not enough. Smarter filters cannot stop people from obsessively checking their BlackBerrys. Some do so because it makes them feel important; others because they may be addicted to the “dopamine squirt” they get from receiving messages, as Edward Hallowell and John Ratey, two academics, have argued. And self-discipline can be counter-productive if your company doesn’t embrace it. Some bosses get shirty if their underlings are unreachable even for a few minutes.

Most companies are better at giving employees access to the information superhighway than at teaching them how to drive. This is starting to change. Management consultants have spotted an opportunity. Derek Dean and Caroline Webb of McKinsey urge businesses to embrace three principles to deal with data overload: find time to focus, filter out noise and forget about work when you can. Business leaders are chipping in. David Novak of Yum! Brands urges people to ask themselves whether what they are doing is constructive or a mere “activity”. John Doerr, a venture capitalist, urges people to focus on a narrow range of objectives and filter out everything else. Cristobal Conde of SunGard, an IT firm, preserves “thinking time” in his schedule when he cannot be disturbed. This might sound like common sense. But common sense is rare amid the cacophony of corporate life.

Slaying the Cable Monster: Why HDMI Brands Don’t Matter

I have kept saying that those who buy expensive HDMI cables are idiots, and now here is the proof.

By Will Greenwald, May 13 2011, PC Magazine
For the vast majority of HDTV owners, a $5 HDMI cable will provide the same performance as a $100 one.

You’ve probably experienced this when shopping for a new HDTV: A store clerk sidles up and offers to help. He then points you toward the necessary HDMI cables to go with your new television. And they’re expensive. Maybe $60 or $70, sometimes even more than $100 (You could buy a cheap Blu-ray player or a handful of Blu-ray discs for that price!). The clerk then claims that these are special cables. Superior cables. Cables you absolutely need if you want the best possible home theater experience. And the claims are, for the vast majority of home theater users, utter rubbish.

The truth is, for most HDTV setups, there is absolutely no effective difference between a no-name $3 HDMI cable you can order from Amazon.com and a $120 Monster cable you buy at a brick-and-mortar electronics store. We ran five different HDMI cables, ranging in price from less than $5 up to more than $100, through rigorous tests to determine whether there’s any difference between a dirt-cheap cable and one that costs a fortune.

HDMI Basics

The first thing to remember about HDMI is that it is a digital standard. Unlike component video, composite video, S-video, or coaxial cable, HDMI signals don’t gradually degrade, or get fuzzy and lose clarity as the signal fades or interference grows. For digital signals like HDMI, as long as there is enough data for the receiver to put together a picture, it will form. If there isn’t, it will just drop off. While processing artifacts can occur and gaps in the signal can cause blocky effects or screen blanking, generally an HDMI signal will display whenever the signal successfully reaches the receiver. Claims that more expensive cables put forth greater video or audio fidelity are nonsense; it’s like saying you can get better-looking YouTube videos on your laptop by buying more expensive Ethernet cables. From a technical standpoint, it simply doesn’t make sense.

This doesn’t mean that all HDMI cables are created equal in all cases. HDMI includes multiple specifications detailing standards of bandwidth and the capabilities of the cable.

The current HDMI specification, version 1.4a, requires all compliant cables to support 3D video, 4K resolution (approximately 4000-by-2000-pixel resolution, or about four times the detail of the current HD standard of 1080p), Ethernet data transmissions, and audio return channels. Each of these features requires more bandwidth, and considerably older HDMI cables (and all older HDMI-equipped devices) rated at HDMI 1.3b or lower can’t handle that much bandwidth. For most users, 3D is the only feature they’ll use. Ethernet over HDMI is used mostly for networking devices instead of connecting via pure Ethernet or Wi-Fi (the methods most consumer electronics products use). Audio return channels are only useful in certain situations with dedicated sound systems (and the same task can be accomplished by running an audio cable to the system). And there aren’t currently any consumer-grade displays or playback devices capable of handling 4K resolutions (the least-expensive 4K projector you’ll find is more than $75,000). In all of these cases, it’s a yes or no question: does it support these features? There is no question of clarity or superior signal.

That said, there are cases where higher-quality cables and going to lengths to maintain signal quality are important. They just aren’t cases that apply to most HDTV owners. If you’re going to run an HDMI cable for lengths longer than 10 feet, you should be concerned about insulation to protect against signal degradation. It’s not an issue for 6-foot lengths of cable, but as the distance between media device and display increases, signal quality decreases and the signal becomes more susceptible to magnetic interference. In fact, for distances of over 30 feet, the HDMI licensing board recommends either using a signal amplifier or considering an alternate solution, like an HDMI-over-Ethernet converter. When you’re running up against the maximum length, the greater insulation and build quality of more expensive cables can potentially improve the stability of your signal. However, if there’s a 30-foot gap between your Blu-ray player and your HDTV, you might want to rearrange some furniture. Or just use a technology designed for long distances.

The second thing to know about HDMI cables is that they are almost always expensive when you buy them at brick-and-mortar stores. If you walk into a Best Buy or Radio Shack, you can expect to pay at least $40 for a 6-foot HDMI cable. Even at discount stores like Wal-Mart and Target, the cheapest, most generic HDMI cables retail for $15 and more. Online, you’ll do a lot better on prices. Amazon.com and Monoprice.com (the “ancient custom installer’s secret”) slash even Wal-Mart’s HDMI cable prices into tiny bits. Both sites sell several models of HDMI cables for as little as $1.50. These are generally generic HDMI cables, or seldom-heard-of brands, but they work just fine for most HDTV users. We can be certain of this, because we tested them in the PCMag Labs.

Testing the Cables

We tested five cables including Monster Cable’s 1200 Higher Definition Experience Pack, a combination HDMI/Ethernet bundle that lists for $119.95 but we found for $79.95 at Amazon.com, the Monster Cable HDMI 500HD High Speed Cable ($59.95 list, we got it at Amazon for $52.62), the Spider International E-HDMI-0006 E-Series Super High Speed HDMI with Ethernet cable ($64.99 list price and a $45.29 Amazon price), the Cables Unlimited 6-Foot HDMI Male to Male Cable (PCM-2295-06) that Amazon carries for $3.19, and an unbranded, OEM cable from Monoprice that was shipped in a Belkin bag but doesn’t match any of the company’s own HDMI cables (and retails for $3.28, or $2.78 if you buy 50 cables or more).

We’ve left out some of the more lavishly expensive HDMI cables, like the AudioQuest series of HDMI cables, because they retail for nearly $700. Unless those cables can let me eat the food I see on the Food Network, they’re not worth the price of an actual HDTV.

Based purely on the cables’ specs, Monster Cable’s HDMI cables are superior. Of course, that’s because Monster Cable is the only company of the four to offer any notable specifications. Spider International and Cables Unlimited offered very little information in the way of the cables, and the generic cable had no specifications besides it being 28 AWG (American Wire Gauge), a number that simply references the width of the wire used in the cable (28 AWG is a standard measurement, though some cables can be slightly thicker at 26 or 24 AWG). HDMI standards require that all HDMI 1.4 cables be able to handle a bandwidth of 10.2 gigabits per second (Gbps). The Monster Blu-Ray 1200 Higher Definition Experience Pack has a rated speed of 17.8 Gbps. Again, what really matters is whether the cable is HDMI-1.4-compliant, and it can support the necessary features mentioned above. The higher bandwidth doesn’t matter for HDTV signals. It might make a difference with 4K-video, but since HDTVs currently top out at 1080p, that point is moot.

As long as the cable is HDMI 1.4 compliant and can hit 10.2 Gbps (which it will if it’s 1.4 compliant), it will do the trick. We also couldn’t find a cable that wasn’t 1.4 compliant, so that shouldn’t be a problem.

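To put those bandwidth numbers in perspective, here is a rough back-of-the-envelope calculation. It is our illustration, not part of PCMag’s test methodology; the pixel clock and TMDS encoding figures are standard values for HDMI, but the script itself is only a sketch.

```python
# Back-of-the-envelope HDMI bandwidth check (illustrative sketch, not an official spec calculation).
# Assumes the standard pixel clock for 1080p60 (148.5 MHz, including blanking) and HDMI's
# three TMDS data channels carrying 10 line bits per channel per pixel clock.

PIXEL_CLOCK_1080P60_MHZ = 148.5   # 1920x1080 @ 60 Hz, blanking intervals included
TMDS_CHANNELS = 3                 # HDMI carries video on three TMDS data lanes
BITS_PER_CHANNEL_PER_CLOCK = 10   # TMDS encodes 8 data bits into 10 line bits
HDMI_14_REQUIRED_GBPS = 10.2      # bandwidth an HDMI 1.4 (High Speed) cable must carry

def tmds_rate_gbps(pixel_clock_mhz: float, color_depth_bits: int = 24) -> float:
    """Total TMDS line rate in Gbps; deep color scales the pixel clock proportionally."""
    clock_hz = pixel_clock_mhz * 1e6 * (color_depth_bits / 24)
    return clock_hz * TMDS_CHANNELS * BITS_PER_CHANNEL_PER_CLOCK / 1e9

for depth in (24, 36):  # 8-bit and 12-bit per channel ("deep color")
    rate = tmds_rate_gbps(PIXEL_CLOCK_1080P60_MHZ, depth)
    print(f"1080p60 at {depth}-bit color: ~{rate:.2f} Gbps "
          f"({rate / HDMI_14_REQUIRED_GBPS:.0%} of the 10.2 Gbps ceiling)")
```

That works out to roughly 4.5 Gbps at standard color depth and about 6.7 Gbps at 12-bit deep color, which is why a cable rated far above 10.2 Gbps buys you nothing on a 1080p set.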
For consistency, we used only 6-foot or 2-meter (6.6-foot) cables to ensure that cable length didn’t affect the results of the tests. We paired a Sony Bravia KDL-46EX720 3D HDTV with an LG BD670 Blu-ray player for all tests. The television was set to standard, default image settings, and the Blu-ray player was set to output only a 1080p video signal. We put the cables through three different tests: a technical quality evaluation, a blind video test, and a 3D-support test.

For the technical quality evaluation, we used the HQV video benchmark Blu-ray Disc. For each cable, we ran through the full gamut of HQV video tests, which check the signal for a range of image-processing, frame-rate-synchronization, and color-correction issues. The tests include a variety of patterns and animations designed to expose possible display problems. All five cables passed HQV’s tests with flying colors, with a single exception that was consistent across all of them (and thus more likely a flaw of either the HDTV or the Blu-ray player): 2:2 film pull-down looked a bit jerky, a minor issue that doesn’t reflect on any individual cable’s performance.

The blind video test involved the assistance of five volunteers in the PCMag Lab. They were shown the same scene from Predators on Blu-ray over different cables, and they were not told which cable was which until the end of the test. No one saw any appreciable difference between the $3 cables and the $120 cable, or any of the cables in between. However, we did notice a curious phenomenon: the screen appeared slightly darker and a bit more saturated when connected to the Blu-ray player with the Monster Cable 1200 Higher Definition Experience Pack cable. The HDTV showed that it was receiving the same 12-bit color-depth information through each cable, so the more expensive Monster cable wasn’t pushing through more color detail. Again, the difference was minimal, and could be corrected by calibrating your HDTV.

Finally, we loaded the 3D Avatar Blu-ray to check that the cables could handle an HDMI 1.4 standard feature: 3D content. Again, every cable, including the cheap $3 cable, carried a 3D video feed to the HDTV easily.

If you’re like the vast majority of HDTV users and have a fairly simple setup that isn’t spread across a large area, there is absolutely no reason to spend more than $10 on an HDMI cable, never mind more than $100 on one. Any possible benefit that could come from an over-engineered, overpriced HDMI cable simply won’t show up in your home theater. If you’re running a 4K projector, or have a 25-foot hallway between your Blu-ray player and HDTV, or want to show off how big your home theater budget is, that’s one thing. If you just want to hook up your Blu-ray player, cable box, or video game system to your HDTV, bypass the big stores and big brands and reach into the Web bargain bin. Then use the money you save to buy more electronics that need to be connected to one another.

No Hell. Pastor Rob Bell: What if Hell Doesn’t Exist?

I am not as liberal as Rob Bell; I believe Hell does exist, but it is reserved only for truly evil people like Mao Zedong or Muammar Gaddafi (maybe George W. Bush too). I definitely don’t agree that only Christians can go to heaven and everybody else goes to hell.

I am joining a reading group starting in May on Rob Bell’s book “Love Wins: A Book About Heaven, Hell, and the Fate of Every Person Who Ever Lived.” For those who are interested, please register here.

By Jon Meacham, Thursday, Apr. 14, 2011, TIME magazine

As part of a series on peacemaking, in late 2007, Pastor Rob Bell’s Mars Hill Bible Church put on an art exhibit about the search for peace in a broken world. It was just the kind of avant-garde project that had helped power Mars Hill’s growth (the Michigan church attracts 7,000 people each Sunday) as a nontraditional congregation that emphasizes discussion rather than dogmatic teaching. An artist in the show had included a quotation from Mohandas Gandhi. Hardly a controversial touch, one would have thought. But one would have been wrong.

A visitor to the exhibit had stuck a note next to the Gandhi quotation: “Reality check: He’s in hell.” Bell was struck.

Really? he recalls thinking.

Gandhi’s in hell?

He is?

We have confirmation of this?

Somebody knows this?

Without a doubt?

And that somebody decided to take on the responsibility of letting the rest of us know?

So begins Bell’s controversial new best seller, Love Wins: A Book About Heaven, Hell, and the Fate of Every Person Who Ever Lived. Works by Evangelical Christian pastors tend to be pious or at least on theological message. The standard Christian view of salvation through the death and resurrection of Jesus of Nazareth is summed up in the Gospel of John, which promises “eternal life” to “whosoever believeth in Him.” Traditionally, the key is the acknowledgment that Jesus is the Son of God, who, in the words of the ancient creed, “for us and for our salvation came down from heaven … and was made man.” In the Evangelical ethos, one either accepts this and goes to heaven or refuses and goes to hell.

Bell, a tall, 40-year-old son of a Michigan federal judge, begs to differ. He suggests that the redemptive work of Jesus may be universal — meaning that, as his book’s subtitle puts it, “every person who ever lived” could have a place in heaven, whatever that turns out to be. Such a simple premise, but with Easter at hand, this slim, lively book has ignited a new holy war in Christian circles and beyond. When word of Love Wins reached the Internet, one conservative Evangelical pastor, John Piper, tweeted, “Farewell Rob Bell,” unilaterally attempting to evict Bell from the Evangelical community. R. Albert Mohler Jr., president of the Southern Baptist Theological Seminary, says Bell’s book is “theologically disastrous. Any of us should be concerned when a matter of theological importance is played with in a subversive way.” In North Carolina, a young pastor was fired by his church for endorsing the book.

The traditionalist reaction is understandable, for Bell’s arguments about heaven and hell raise doubts about the core of the Evangelical worldview, changing the common understanding of salvation so much that Christianity becomes more of an ethical habit of mind than a faith based on divine revelation. “When you adopt universalism and erase the distinction between the church and the world,” says Mohler, “then you don’t need the church, and you don’t need Christ, and you don’t need the cross. This is the tragedy of nonjudgmental mainline liberalism, and it’s Rob Bell’s tragedy in this book too.”

Particularly galling to conservative Christian critics is that Love Wins is not an attack from outside the walls of the Evangelical city but a mutiny from within — a rebellion led by a charismatic, popular and savvy pastor with a following. Is Bell’s Christianity — less judgmental, more fluid, open to questioning the most ancient of assumptions — on an inexorable rise? “I have long wondered if there is a massive shift coming in what it means to be a Christian,” Bell says. “Something new is in the air.”

Which is what has many traditional Evangelicals worried. Bell’s book sheds light not only on enduring questions of theology and fate but also on a shift within American Christianity. More indie rock than “Rock of Ages,” with its videos and comfort with irony (Bell sometimes seems an odd combination of Billy Graham and Conan O’Brien), his style of doctrine and worship is clearly playing a larger role in religious life, and the ferocity of the reaction suggests that he is a force to be reckoned with.

Otherwise, why reckon with him at all? A similar work by a pastor from one of the declining mainline Protestant denominations might have merited a hostile blog post or two — bloggers, like preachers, always need material — but it is difficult to imagine that an Episcopal priest’s eschatological musings would have provoked the volume of criticism directed at Bell, whose reach threatens prevailing Evangelical theology.

Bell insists he is only raising the possibility that theological rigidity — and thus a faith of exclusion — is a dangerous thing. He believes in Jesus’ atonement; he says he is just unclear on whether the redemption promised in Christian tradition is limited to those who meet the tests of the church. It is a case for living with mystery rather than demanding certitude.

From a traditionalist perspective, though, to take away hell is to leave the church without its most powerful sanction. If heaven, however defined, is everyone’s ultimate destination in any event, then what’s the incentive to confess Jesus as Lord in this life? If, in other words, Gandhi is in heaven, then why bother with accepting Christ? If you say the Bible doesn’t really say what a lot of people have said it says, then where does that stop? If the verses about hell and judgment aren’t literal, what about the ones on adultery, say, or homosexuality? Taken to their logical conclusions, such questions could undermine much of conservative Christianity.

What the Hell?

From the Apostle Paul to John Paul II, from Augustine to Calvin, Christians have debated atonement and judgment for nearly 2,000 years. Early in the 20th century, Harry Emerson Fosdick came to represent theological liberalism, arguing against the literal truth of the Bible and the existence of hell. It was time, progressives argued, for the faith to surrender its supernatural claims.

Bell is more at home with this expansive liberal tradition than he is with the old-time believers of Inherit the Wind. He believes that Jesus, the Son of God, was sacrificed for the sins of humanity and that the prospect of a place of eternal torment seems irreconcilable with the God of love. Belief in Jesus, he says, should lead human beings to work for the good of this world. What comes next has to wait. “When we get to what happens when we die, we don’t have any video footage,” says Bell. “So let’s at least be honest that we are speculating, because we are.” He is quick to note, though, that his own speculation, while unconventional, is not unprecedented. “At the center of the Christian tradition since the first church,” Bell writes, “have been a number who insist that history is not tragic, hell is not forever, and love, in the end, wins and all will be reconciled to God.”

It is also true that the Christian tradition since the first church has insisted that history is tragic for those who do not believe in Jesus; that hell is, for them, forever; and that love, in the end, will envelop those who profess Jesus as Lord, and they — and they alone — will be reconciled to God. Such views cannot be dismissed because they are inconvenient or uncomfortable: they are based on the same Bible that liberals use to make the opposite case. This is one reason religious debate can seem a wilderness of mirrors, an old CIA phrase describing the bewildering world of counterintelligence.

Still, the dominant view of the righteous in heaven and the damned in hell owes more to the artistic legacy of the West, from Michelangelo to Dante to Blake, than it does to history or to unambiguous biblical teaching. Neither pagan nor Jewish tradition offered a truly equivalent vision of a place of eternal torment; the Greek and Roman underworlds tended to be morally neutral, as did much of the Hebraic tradition concerning Sheol, the realm of the dead.

Things many Christian believers take for granted are more complicated than they seem. It was only when Jesus failed to return soon after the Passion and Resurrection appearances that the early church was compelled to make sense of its recollections of his teachings. Like the Bible — a document that often contradicts itself and from which one can construct sharply different arguments — theology is the product of human hands and hearts. What many believers in the 21st century accept as immutable doctrine was first formulated in the fog and confusion of the 1st century, a time when the followers of Jesus were baffled and overwhelmed by their experience of losing their Lord; many had expected their Messiah to be a Davidic military leader, not an atoning human sacrifice.

When Jesus spoke of the “kingdom of heaven,” he was most likely referring not to a place apart from earth, one of clouds and harps and an eternity with your grandmother, but to what he elsewhere called the “kingdom of God,” a world redeemed and renewed in ways beyond human imagination. To 1st century ears in ancient Judea, Jesus’ talk of the kingdom was centered on the imminent arrival of a new order marked by the defeat of evil, the restoration of Israel and a general resurrection of the dead — all, in the words of the prayer he taught his disciples, “on earth.”

There is, however, no escaping the fact that Jesus speaks in the Bible of a hell for the “condemned.” He sometimes uses the word Gehenna, which was a valley near Jerusalem associated with the sacrifice of children by fire to the Phoenician god Moloch; elsewhere in the New Testament, writers (especially Paul and John the Divine) tell of a fiery pit (Tartarus or Hades) in which the damned will spend eternity. “Depart from me, you cursed [ones], into the eternal fire prepared for the devil and his angels,” Jesus says in Matthew. In Mark he speaks of “the unquenchable fire.” The Book of Revelation paints a vivid picture — in a fantastical, problematic work that John the Divine says he composed when he was “in the spirit on the Lord’s day,” a signal that this is not an Associated Press report — of the lake of fire and the dismissal of the damned from the presence of God to a place where “they will be tormented day and night for ever and ever.”

And yet there is a contrary scriptural trend that suggests, as Jesus puts it, that the gates of hell shall not finally prevail, that God will wipe away every tear — not just the tears of Evangelical Christians but the tears of all. Bell puts much stock in references to the universal redemption of creation: in Matthew, Jesus speaks of the “renewal of all things”; in Acts, Peter says Jesus will “restore everything”; in Colossians, Paul writes that “God was pleased to … reconcile to himself all things, whether things on earth or things in heaven.”

So is it heaven for Christians who say they are Christians and hell for everybody else? What about babies, or people who die without ever hearing the Gospel through no fault of their own? (As Bell puts it, “What if the missionary got a flat tire?”) Who knows? Such tangles have consumed Christianity for millennia and likely will for millennia to come.

What gives the debate over Bell new significance is that his message is part of an intriguing scholarly trend unfolding simultaneously with the cultural, generational and demographic shifts made manifest at Mars Hill. Best expressed, perhaps, in the work of N.T. Wright, the Anglican bishop of Durham, England (Bell is a Wright devotee), this school focuses on the meaning of the texts themselves, reading them anew and seeking, where appropriate, to ask whether an idea is truly rooted in the New Testament or is attributable to subsequent church tradition and theological dogma.

For these new thinkers, heaven can mean different things. In some biblical contexts it is a synonym for God. In others it signifies life in the New Jerusalem, which, properly understood, is the reality that will result when God brings together the heavens and the earth. In yet others it seems to suggest moments of intense human communion and compassion that are, in theological terms, glimpses of the divine love that one might expect in the world to come. One thing heaven is not is an exclusive place removed from earth. This line of thinking has implications for the life of religious communities in our own time. If the earth is, in a way, to be our eternal home, then its care, and the care of all its creatures, takes on fresh urgency.

Bell’s Journey

The easy narrative about Bell would be one of rebellion — that he is reacting to the strictures of a suffocating childhood by questioning long-standing dogma. The opposite is true. Bell’s creed of conviction and doubt — and his comfort with ambiguity and paradox — comes from an upbringing in which he was immersed in faith but encouraged to ask questions. His father, a central figure in his life, is a federal judge appointed by President Reagan in 1987. (Rob still remembers the drive to Washington in the family Oldsmobile for the confirmation hearings.) “I remember him giving me C.S. Lewis in high school,” Bell says. “My parents were both very intellectually honest, straightforward, and for them, faith meant that you were fully engaged.” As they were raising their family, the Bells, in addition to regular churchgoing, created a rigorous ethos of devotion and debate at home. Dinner-table conversations were pointed; Lewis’ novels and nonfiction were required reading.

The roots of Love Wins can be partly traced to the deathbed of a man Rob Bell never met: his grandfather, a civil engineer in Michigan who died when Rob’s father was 8. The Bells’ was a very conservative Evangelical household. When the senior Bell died, there was to be no grief. “We weren’t allowed to mourn, because the funeral of a Christian is supposed to be a celebration of the believer in heaven with Jesus right now,” says Robert Bell Sr. “But if you’re 8 years old and your dad — the breadwinner — just died, it feels different. Sad.”

The story of how his dad, still a child, was to deal with death has stayed with Rob. “To weep, to shed any tears — that would be doubting the sovereignty of God,” Rob says now, looking back. “That was the thing — ‘They’re all in heaven, so we’re happy about that.’ It doesn’t matter how you are actually humanly responding to this moment …” Bell pauses and chuckles ironically, a bit incredulous. “We’re all just supposed to be thrilled.”

Robby — his mother still calls him that — was emotionally precocious. “When he was around 10 years old, I detected that he had a great interest and concern for people,” his father says. “There he’d be, riding along with me, with his little blond hair, going to see sick folks or friends who were having problems, and he would get back in the truck after a visit and begin to analyze them and their situations very acutely. He had a feel for people and how they felt from very early on.”

Rob was a twice-a-week churchgoer at the Baptist and nondenominational churches the family attended at different times — services on Sunday, youth group on Wednesday. He recalls a kind of quiet frustration even then. “I remember thinking, ‘You know, if Jesus is who this guy standing up there says he is, this should be way more compelling.’ This should have a bit more electricity. The knob should be way more to the right, you know?”

Music, not the church, was his first consuming passion. (His wife Kristen claims he said he wanted to be a pastor when they first met early on at Wheaton College in Illinois. Bell is skeptical: “I swear to this day that that was a line.”) He and some friends started a band when he was a sophomore. “I had always had creative energy but no outlet,” he says. “I really discovered music, writing and playing, working with words and images and metaphors. You might say the music unleashed a monster.”

The band became central to him. Then two things happened: the guitar player decided to go to seminary, and Bell came down with viral meningitis. “It took the wind out of our sails,” he says. “I had no Plan B. I was a wreck. I was devastated, because our band was going to make it. We were going to live in a terrible little house and do terrible jobs at first, because that’s what great bands do — they start out living in terrible little houses and doing terrible little jobs.” His illness — “a freak brain infection” — changed his life, Bell says.

At 21, Rob was teaching barefoot waterskiing at HoneyRock Camp, near Three Lakes, Wis., when he preached his first sermon. “I didn’t know anything,” he says. “I took off my Birkenstocks beforehand. I had this awareness that my life would never be the same again.” The removal of the shoes is an interesting detail for Bell to remember. (“Do not come any closer,” God says to Moses in the Book of Exodus. “Take off your sandals, for the place where you are standing is holy ground.”) Bell says it was just intuitive, but the intuition suggests he had a sense of himself as a player in the unfolding drama of God in history. “Create things and share them,” Bell says. “It all made sense. That moment is etched. I remember thinking distinctly, ‘I could be terrible at this.’ But I knew this would get me up in the morning. I went to Fuller that fall.”

Fuller Theological Seminary, in Pasadena, Calif., is an eclectic place, attracting 4,000 students from 70 countries and more than 100 denominations. “It’s pretty hard to sit with Pentecostals and Holiness people and mainline Presbyterians and Anglicans and come away with a closed mind-set that draws firm boundaries about theology,” says Fuller president Richard Mouw.

After seminary, Bell’s work moved in two directions. He was recovering the context of the New Testament while creating a series of popular videos on Christianity called Nooma, Greek for wind or spirit. He began to attract a following, and Mars Hill — named for the site in Athens where Paul preached the Christian gospel of resurrection to the pagan world — was founded in Grand Rapids, Mich., in 1999. “Whenever people wonder why a church is growing, they say, ‘He’s preaching the Bible.’ Well, lots of people are preaching the Bible, and they don’t have parking problems,” says Bell.

Mars Hill did have parking problems, and Bell’s sudden popularity posed some risks for the young pastor. Pride and self-involvement are perennial issues for ministers, who, like politicians, grow accustomed to the sound of their own voices saying Important Things and to the deference of the flock. By the time Bell was 30, he was an Evangelical celebrity. (He had founded Mars Hill when he was 28.) He was referred to as a “rock star” in this magazine. “There was this giant spotlight on me,” he says. “All of a sudden your words are parsed. I found myself — and I think this happens to a lot of people — wanting to shrink away from it. But I decided, Just own it. I’m very comfortable in a room with thousands of people. I do have this voice. What will I say?”

And how will he say it? The history of Evangelism is in part the history of media and methods: Billy Sunday mastered the radio, Billy Graham television; now churches like Bell’s are at work in the digital vineyards of downloads and social media. Demography is also working in Bell’s favor. “He’s trying to reach a generation that’s more comfortable with mystery, with unsolved questions,” says Mouw, noting that his own young grandchildren are growing up with Hindu and Muslim friends and classmates. “For me, Hindus and Muslims were the people we sent missionaries off to in places we called ‘Arabia,'” Mouw says. “Now that diversity is part of the fabric of daily life. It makes a difference. My generation wanted truth — these are folks who want authenticity. The whole judgmentalism and harshness is something they want to avoid.”

If Bell is right about hell, then why do people need ecclesiastical traditions at all? Why aren’t the Salvation Army and the United Way sufficient institutions to enact a gospel of love, sparing us the talk of heaven and hellfire and damnation and all the rest of it? Why not close up the churches?

Bell knows the arguments and appreciates the frustrations. “I don’t know anyone who hasn’t said, ‘Let’s turn out the lights and say we gave it a shot,'” he says. “But you can’t — I can’t — get away from what this Jesus was, and is, saying to us. What the book tries to do is park itself right in the midst of the tension with a Jesus who offers an urgent and immediate call — ‘Repent! Be transformed! Turn!’ At the same time, I’ve got other sheep. There’s a renewal of all things. There’s water from the rock. People will come from the East and from the West. The scandal of the gospel is Jesus’ radical, healing love for a world that’s broken.”

Fair enough, but let’s be honest: religion heals, but it also kills. Why support a supernatural belief system that, for instance, contributed to that minister in Florida’s burning of a Koran, which led to the deaths of innocent U.N. workers in Afghanistan?

“I think Jesus shares your critique,” Bell replies. “We don’t burn other people’s books. I think Jesus is fairly pissed off about it as well.”

On Sunday, April 17, at Mars Hill, Bell will be joined by singer-songwriter Brie Stoner (who provided some of the music for his Nooma series) and will teach the first 13 verses of the third chapter of Revelation, which speaks of “the city of my God, the new Jerusalem, which is coming down out of heaven from my God … Whoever has ears, let them hear what the Spirit says to the churches.” The precise meaning of the words is open to different interpretations. But this much is clear: Rob Bell has much to say, and many are listening.

The Terror of Code in the Wrong Hands

Here is a new term: “software terrorist,” the programmer who brings negative productivity to the team. I can attest that chasing bugs in poorly written code wastes a lot more time than rewriting the code myself from scratch.

By Allen Holub, May 2005, SD Times

The 20-to-1 productivity rule says that 5 percent of programmers are 20 times more productive than the remaining 95 percent, but what about the 5 percent at the other end of the bell curve? Consider the software terrorist: the guy who stays up all night, unwittingly but systematically destroying the entire team’s last month’s work while “improving” the code. He doesn’t tell anybody what he’s done, and he never tests. He’s created a ticking time bomb that won’t be discovered for six months.

When the bomb goes off, you can’t roll back six months of work by the whole team, and it takes three weeks of your best programmer’s effort to undo the damage. Meanwhile, our terrorist gets a raise because he stays late so often, working so hard. The brilliant guy who cleans up the debris gets a bad performance review because his schedule has slipped, so he quits.

Valuable tools in the hands of experts become dangerous weapons in the hands of terrorists. The terrorist doesn’t understand how to use generics, templates and casts, and so with a single click on the “refactor” button he destroys the program’s carefully crafted typing system. That single-click refactor is a real time saver for the expert. Scripting languages, which in the right hands save time, become a means for creating write-only code that has to be scrapped after you’ve spent two months trying to figure out why it doesn’t work.

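The article has statically typed languages like Java and C++ in mind, but the same failure mode is easy to sketch in Python (an illustration of ours with hypothetical names, not an example from the original piece): a single unchecked cast tells the type checker to look away, and the bug detonates later in code that did nothing wrong.

```python
# Illustrative sketch of how an unchecked cast defeats a type system.
# typing.cast() performs no runtime conversion or check; it only tells a static
# checker such as mypy to trust the annotation, so a wrong cast silently plants
# a bug that surfaces far from where it was introduced.
from typing import cast

def total_cents(amounts: list[int]) -> int:
    """Expects invoice amounts already converted to integer cents."""
    return sum(amounts)

raw_rows = ["1999", "2499", "350"]        # strings straight out of a CSV import

# The "terrorist" shortcut: cast instead of convert. The type checker stays silent.
amounts = cast(list[int], raw_rows)
# total_cents(amounts) now blows up at runtime with a TypeError, months later,
# inside code that was written correctly.

# The honest fix is to actually convert the data:
safe_amounts = [int(row) for row in raw_rows]
print(total_cents(safe_amounts))          # 4848
```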
Terrorist scripts can be so central to the app, and so hard to understand, that they sometimes remain in the program, doubling the time required for all maintenance efforts. Terrorist documentation is a font of misinformation. Terrorist tests systematically destroy the database every time they’re run.

Terrorist work isn’t just nonproductive, it’s anti-productive. A terrorist reduces your team’s productivity by at least an order of magnitude. It takes a lot longer to find a bug than to create one. None of the terrorist code ends up in the final program because it all has to be rewritten. You pay the terrorists, and you also pay 10 times more to the people who have to track down and fix their bugs.

Given the difficulty that most organizations have in firing (or even identifying) incompetent people, the only way to solve this problem is not to hire terrorists at all; but the terrorists are masters of disguise, particularly in job interviews. They talk a good game, they have lots of experience, and they have great references because they work so hard.

Since the bottom 5 percent is indistinguishable from the rest of the bottom 95 percent, the only way to avoid hiring terrorists is to avoid hiring from the remaining 95 percent altogether.

The compelling reason for this strategy is that the 20-to-1 rule applies only when elite programmers work exclusively with other elite programmers. Single elite programmers who interact with 10 average programmers waste most of their time explaining and helping rather than working. Two elite programmers raise the productivity of a 20-programmer group by 10 percent. It’s like getting two programmers for free. Two elite programmers working only with each other do the work of at least 20 average programmers. It’s like getting 18 programmers for free. If you pay them twice the going salary (and you should if you want to keep them), you’re still saving vast amounts of money.

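The arithmetic behind those claims is worth spelling out. The sketch below simply restates the article’s premises in code; the assumption that a mentoring elite effectively delivers about twice an average programmer’s output is implied by the 10 percent figure rather than stated, and none of these numbers are measurements.

```python
# Back-of-the-envelope check of the article's productivity claims (its premises, not data).
AVERAGE_OUTPUT = 1.0    # output of one average programmer, in arbitrary units
ELITE_RATIO = 20        # the "20-to-1" rule: raw output of an elite programmer

# Premise: an elite embedded with ~10 average programmers spends most of the time
# explaining and helping, so each effectively delivers only about 2x (implied by
# the article's "10 percent" claim for a 20-person group with two elites in it).
mixed_team = 18 * AVERAGE_OUTPUT + 2 * (2 * AVERAGE_OUTPUT)
plain_team = 20 * AVERAGE_OUTPUT
print(f"Two elites mixed into a team of 20: {mixed_team:.0f} vs {plain_team:.0f} units "
      f"({mixed_team / plain_team - 1:.0%} gain; 'like getting two programmers for free')")

# Premise: two elites working only with each other keep their full output. The 20-to-1
# rule would allow up to 2 * 20 = 40 average-equivalents; the article conservatively
# claims "at least 20", i.e. 18 programmers for free after paying for two.
elite_pair_upper_bound = 2 * ELITE_RATIO * AVERAGE_OUTPUT
print(f"Two elites working alone: at least 20 average-equivalents "
      f"(upper bound {elite_pair_upper_bound:.0f}) for the price of 2.")
```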
Unfortunately, it’s possible for a software terrorist to masquerade as an elite programmer, but this disguise is easier to detect. Programmers who insist on working in isolation (especially the ones who come to work at 4:00 p.m. and stay all night), the prima donnas who have fits when they don’t get their way, the programmers who never explain what they’re doing in a way that anyone else can understand and don’t document their code, the ones that reject new technologies or methodologies out of hand rather than showing genuine curiosity—these are the terrorists.

Avoid them no matter how many years of experience they have.

Software terrorism is on the upswing. I used to quote the standard rule that the top 10 percent were 10 times more productive. The hiring practices prevalent since the dot-com explosion—which seem to reject the elite programmers by design—have lowered the general skill level of the profession, however.

As the number of elite programmers gets smaller, their relative productivity gets higher. The only long-term solution to this problem is to change our hiring practices and our attitudes toward training. The cynic in me has a hard time believing that either will happen, but we can always hope for the best.