Category Archives: Reference

Filing cabinet for a digitized world.

Therapist-free therapy

Looks like psychologists will be out of work soon and will be replaced by computer programs. I never trusted talk therapy anyway; the couch only works in the movies.

Mar 3rd 2011, The Economist
Cognitive-bias modification may put the psychiatrist’s couch out of business

THE treatment, in the early 1880s, of an Austrian hysteric called Anna O is generally regarded as the beginning of talking-it-through as a form of therapy. But psychoanalysis, as this version of talk therapy became known, is an expensive procedure. Anna’s doctor, Josef Breuer, is estimated to have spent over 1,000 hours with her.

Since then, things have improved. A typical course of a modern talk therapy, such as cognitive behavioural therapy, consists of 12-16 hour-long sessions and is a reasonably efficient way of treating conditions like depression and anxiety (hysteria is no longer a recognised diagnosis). Medication, too, can bring rapid change. Nevertheless, treating disorders of the psyche is still a hit-and-miss affair, and not everyone wishes to bare his soul or take mind-altering drugs to deal with his problems. A new kind of treatment may, though, mean he does not have to. Cognitive-bias modification (CBM) appears to be effective after only a few 15-minute sessions, and involves neither drugs nor the discussion of feelings. It does not even need a therapist. All it requires is sitting in front of a computer and using a program that subtly alters harmful thought patterns.

This simple approach has already been shown to work for anxiety and addictions, and is now being tested for alcohol abuse, post-traumatic-stress disorder and several other disturbances of the mind. It is causing great excitement among researchers. As Yair Bar-Haim, a psychologist at Tel Aviv University who has been experimenting with it on patients as diverse as children and soldiers, puts it, “It’s not often that a new evidence-based treatment for a major psychopathology comes around.”

CBM is based on the idea that many psychological problems are caused by automatic, unconscious biases in thinking. People suffering from anxiety, for instance, may have what is known as an attentional bias towards threats: they are drawn irresistibly to things they perceive to be dangerous. Similar biases may affect memory and the interpretation of events. For example, if an acquaintance walks past without saying hello, it might mean either that he has ignored you or that he has not seen you. The anxious, according to the theory behind CBM, have a bias towards assuming the former and reacting accordingly.

The goal of CBM is to alter such biases, and doing so has proved surprisingly easy. A common way of debiasing attention is to show someone two words or pictures—one neutral and the other threatening—on a computer screen. In the case of social anxiety these might be a neutral face and a disgusted face. Presented with this choice, an anxious person instinctively focuses on the disgusted visage. The program, however, prods him to complete tasks involving the neutral picture, such as identifying letters that appear in its place on the screen. Repeating the procedure around a thousand times, over a total of two hours, changes the user’s tendency to focus on the threatening face. That change is then carried into the wider world.
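To make the mechanics concrete, here is a minimal sketch in Python of the kind of trial loop the paragraph describes. It is illustrative only: the stimulus file names, the text-only "display", and the trial count are assumptions, not the software used in the published studies.

```python
import random

# Minimal sketch of one attention-training session as described above.
# Stimulus names and the text-only "display" are illustrative assumptions;
# this is not the program used in the published CBM studies.

PAIR = ("neutral_face.png", "disgusted_face.png")

def run_trial(pair):
    neutral, threat = pair
    sides = ["left", "right"]
    random.shuffle(sides)
    neutral_side, threat_side = sides
    # The key manipulation: the probe (e.g. a letter to identify) always
    # replaces the neutral picture, so attention is repeatedly pulled
    # away from the threatening one.
    probe_side = neutral_side
    print(f"{neutral} on the {neutral_side}, {threat} on the {threat_side}; "
          f"probe appears on the {probe_side}")

def run_session(n_trials=1000):
    # Roughly a thousand repetitions over about two hours is the dose
    # mentioned in the article.
    for _ in range(n_trials):
        run_trial(PAIR)

run_session(n_trials=3)  # tiny demo run
```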

Emily Holmes of Oxford University, who studies the use of CBM for depression, describes the process as like administering a cognitive vaccine. When challenged by reality in the form of, say, the unobservant friend, the recipient of the vaccine finds he is inoculated against inappropriate anxiety.

In a recent study of social anxiety by Norman Schmidt of Florida State University and his colleagues, which involved 36 volunteers who had been diagnosed with anxiety, half underwent eight short sessions of CBM and the rest were put in a control group and had no treatment. At the end of the study, a majority of the CBM volunteers no longer seemed anxious, whereas in the control group only 11% had shed their anxiety. Although it was only a small trial, these results compare favourably with those of existing treatments. An examination of standard talk therapy carried out in 2004, for instance, found that half of patients had a clinically significant reduction in symptoms. Trials of medications have similar success rates.

The latest research, which is on a larger scale and is due to be published this month in Psychological Science, tackles alcohol addiction. Past work has shown that many addicts have an approach bias for alcohol—in other words, they experience a physical pull towards it. (Arachnophobia, a form of this bias that is familiar to many people, works in the opposite way: if they encounter a spider, they recoil.)

This study, conducted by Reinout Wiers of the University of Amsterdam and his colleagues, attempted to correct the approach bias to alcohol with CBM. The 214 participants received either a standard addiction treatment—a form of talk therapy—or the standard treatment plus four 15-minute sessions of CBM. In the first group, 41% of participants were abstinent a year later; in the second, 54%. That is not a cure for alcoholism, but it is a significant improvement on talk therapy alone.

Many other researchers are now exploring CBM. A team at Harvard, led by Richard McNally, is seeking volunteers for a month-long programme that will use smart-phones to assess the technique’s effect on anxiety. And Dr Bar-Haim and his team are examining possible connections between cognitive biases and post-traumatic-stress disorder in the American and Israeli armies.

Not all disorders are amenable to CBM. One study, by Hannah Reese (also at Harvard) and her colleagues, showed that it is ineffective in countering arachnophobia (perhaps not surprising, since this may be an evolved response, rather than an acquired one). Moreover, Dr Wiers found that the approach bias towards alcohol is present in only about half of the drinkers he studies. He hypothesises that for the others, drinking is less about automatic impulses and more about making a conscious decision. In such cases CBM is unlikely to work.

Colin MacLeod of the University of Western Australia, one of the pioneers of the technique, thinks CBM is not quite ready for general use. He would like to see it go through some large, long-term, randomised clinical trials of the sort that would be needed if it were a drug, rather than a behavioural therapy. Nevertheless, CBM does look extremely promising, if only because it offers a way out for those whose answer to the question, “Do you want to talk about it?” is a resounding “No”.

Why I am not worried about Japan’s nuclear reactors

Now I know a nuclear meltdown is not that frightening. Worst case, it will only take out the core, without any explosive release of radioactivity.

By Dr. Josef Oehmen, MIT, March 13, 2011

I am writing this text (Mar 12) to give you some peace of mind regarding some of the troubles in Japan, namely the safety of Japan’s nuclear reactors. Up front, the situation is serious, but under control. And this text is long! But you will know more about nuclear power plants after reading it than all journalists on this planet put together.

There has not been and will *not* be any significant release of radioactivity.

By “significant” I mean a level of radiation of more than what you would receive on – say – a long distance flight, or drinking a glass of beer that comes from certain areas with high levels of natural background radiation.

I have been reading every news release on the incident since the earthquake. There has not been one single (!) report that was accurate and free of errors (and part of that problem is also a weakness in the Japanese crisis communication). By “not free of errors” I do not refer to tendentious anti-nuclear journalism – that is quite normal these days. By “not free of errors” I mean blatant errors regarding physics and natural law, as well as gross misinterpretation of facts, due to an obvious lack of fundamental and basic understanding of the way nuclear reactors are built and operated. I have read a 3-page report on CNN where every single paragraph contained an error.

We will have to cover some fundamentals before we get into what is going on.

Construction of the Fukushima nuclear power plants

The plants at Fukushima are so-called Boiling Water Reactors, or BWR for short. Boiling Water Reactors are similar to a pressure cooker. The nuclear fuel heats water, the water boils and creates steam, the steam then drives turbines that create the electricity, and the steam is then cooled and condensed back into water, and the water is sent back to be heated by the nuclear fuel. The pressure cooker operates at about 250 °C.

The nuclear fuel is uranium oxide. Uranium oxide is a ceramic with a very high melting point of about 3000 °C. The fuel is manufactured in pellets (think little cylinders the size of Lego bricks). Those pieces are then put into a long tube made of Zircaloy with a melting point of 2200 °C, and sealed tight. The assembly is called a fuel rod. These fuel rods are then put together to form larger packages, and a number of these packages are then put into the reactor. All these packages together are referred to as “the core”.

The Zircaloy casing is the first containment. It separates the radioactive fuel from the rest of the world.

The core is then placed in the “pressure vessel”. That is the pressure cooker we talked about before. The pressure vessel is the second containment. This is one sturdy piece of a pot, designed to safely contain the core at temperatures of several hundred °C. That covers the scenarios where cooling can be restored at some point.

The entire “hardware” of the nuclear reactor – the pressure vessel and all pipes, pumps and coolant (water) reserves – is then encased in the third containment. The third containment is a hermetically (air-tight) sealed, very thick bubble of the strongest steel and concrete. The third containment is designed, built and tested for one single purpose: to contain, indefinitely, a complete core meltdown. For that purpose, a large and thick concrete basin is cast under the pressure vessel (the second containment), all inside the third containment. This is the so-called “core catcher”. If the core melts and the pressure vessel bursts (and eventually melts), it will catch the molten fuel and everything else. It is typically built in such a way that the nuclear fuel will be spread out, so it can cool down.

This third containment is then surrounded by the reactor building. The reactor building is an outer shell that is supposed to keep the weather out, but nothing in. (This is the part that was damaged in the explosion, but more on that later.)

Fundamentals of nuclear reactions

The uranium fuel generates heat by nuclear fission. Big uranium atoms are split into smaller atoms. That generates heat plus neutrons (one of the particles that form an atom). When a neutron hits another uranium atom, that atom splits, generating more neutrons, and so on. That is called the nuclear chain reaction.

Now, just packing a lot of fuel rods next to each other would quickly lead to overheating and, after about 45 minutes, to a melting of the fuel rods. It is worth mentioning at this point that the nuclear fuel in a reactor can *never* cause a nuclear explosion of the type of a nuclear bomb. Building a nuclear bomb is actually quite difficult (ask Iran). In Chernobyl, the explosion was caused by excessive pressure buildup, a hydrogen explosion and the rupture of all containments, propelling molten core material into the environment (a “dirty bomb”). Why that did not and will not happen in Japan is explained further below.

In order to control the nuclear chain reaction, the reactor operators use so-called “control rods”. The control rods absorb the neutrons and kill the chain reaction instantaneously. A nuclear reactor is built in such a way that, when operating normally, all the control rods are taken out. The coolant water then takes away the heat (and converts it into steam and electricity) at the same rate as the core produces it. And you have a lot of leeway around the standard operating point of 250°C.

The challenge is that after inserting the rods and stopping the chain reaction, the core still keeps producing heat. The uranium “stopped” the chain reaction. But a number of intermediate radioactive elements are created by the uranium during its fission process, most notably Cesium and Iodine isotopes, i.e. radioactive versions of these elements that will eventually split up into smaller atoms and not be radioactive anymore. Those elements keep decaying and producing heat. Because they are no longer regenerated from the uranium (the uranium stopped fissioning after the control rods were put in), there are fewer and fewer of them, and so the core cools down over a matter of days, until those intermediate radioactive elements are used up.

This residual heat is causing the headaches right now.

So the first “type” of radioactive material is the uranium in the fuel rods, plus the intermediate radioactive elements that the uranium splits into, also inside the fuel rod (Cesium and Iodine).

There is a second type of radioactive material created, outside the fuel rods. The big main difference up front: those radioactive materials have a very short half-life, which means that they decay very fast and split into non-radioactive materials. By fast I mean seconds. So if these radioactive materials are released into the environment, yes, radioactivity was released, but no, it is not dangerous at all. Why? By the time you have spelled “R-A-D-I-O-N-U-C-L-I-D-E”, they will be harmless, because they will have split up into non-radioactive elements. One of those radioactive elements is N-16, the radioactive isotope (or version) of nitrogen (air). The others are noble gases such as Argon. But where do they come from? When the uranium splits, it generates a neutron (see above). Most of these neutrons will hit other uranium atoms and keep the nuclear chain reaction going. But some will leave the fuel rod and hit the water molecules, or the air that is in the water. Then, a non-radioactive element can “capture” the neutron. It becomes radioactive. As described above, it will quickly (within seconds) get rid of the neutron again and return to its former beautiful self.
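To see just how quickly this second type of radioactivity disappears, here is a rough decay calculation. The roughly 7-second half-life of N-16 is a standard published value rather than a figure given in the text, so treat it as an outside assumption.

```python
# Rough decay of the short-lived activation products described above.
# The ~7-second half-life of N-16 is a standard published value, not a
# figure from the article itself.

HALF_LIFE_N16_S = 7.1  # seconds (approximate)

def fraction_remaining(seconds, half_life=HALF_LIFE_N16_S):
    return 0.5 ** (seconds / half_life)

for t in (10, 30, 60, 120):
    print(f"after {t:3d} s: {fraction_remaining(t):.6f} of the N-16 is left")
# After two minutes less than a hundred-thousandth remains, which is why
# venting this activated steam is not considered dangerous.
```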

This second “type” of radiation is very important when we talk about the radioactivity being released into the environment later on.

What happened at Fukushima

I will try to summarize the main facts. The earthquake that hit Japan was about 5 times more powerful than the worst earthquake the nuclear power plant was built for (the Richter scale works logarithmically; the 0.7 difference between the 8.2 that the plants were built for and the 8.9 that happened corresponds to a factor of about 5, not just 0.7). So the first hooray for Japanese engineering: everything held up.
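The factor of five follows directly from the logarithmic scale: each whole unit of magnitude corresponds to a tenfold increase in measured ground motion, so a 0.7 difference is 10^0.7, or roughly 5. A quick check (the released energy grows even faster, by about 10^1.5 per unit, but the amplitude comparison is the one the text makes):

```python
# Magnitude scales are logarithmic: a difference of dM corresponds to a
# factor of 10**dM in ground-motion amplitude and roughly 10**(1.5 * dM)
# in released energy.
dM = 8.9 - 8.2
print(f"amplitude ratio: {10 ** dM:.1f}x")          # ~5x, the figure above
print(f"energy ratio:    {10 ** (1.5 * dM):.0f}x")  # ~11x
```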

When the earthquake hit with magnitude 8.9, the nuclear reactors all went into automatic shutdown. Within seconds after the earthquake started, the control rods had been inserted into the core and the nuclear chain reaction of the uranium stopped. Now, the cooling system has to carry away the residual heat. The residual heat load is about 3% of the heat load under normal operating conditions.
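To put that 3% figure in perspective, here is a rough scale estimate. The 3% comes from the text above; the reactor's thermal output is an assumed round number for illustration, not the rating of any particular Fukushima unit.

```python
# Rough scale of the residual heat. The 3% fraction is from the text above;
# the 1,500 MW thermal output is an assumed round number, not the rating of
# any specific Fukushima unit.

thermal_power_mw = 1500      # assumed thermal output at full power
residual_fraction = 0.03     # ~3% shortly after shutdown, per the text

residual_heat_mw = thermal_power_mw * residual_fraction
print(f"residual heat shortly after shutdown: about {residual_heat_mw:.0f} MW")
# Tens of megawatts of heat still have to be carried away, which is why
# losing every cooling pump is such a serious problem.
```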

The earthquake destroyed the external power supply of the nuclear reactor. That is one of the most serious accidents for a nuclear power plant, and accordingly, a “plant black out” receives a lot of attention when designing backup systems. The power is needed to keep the coolant pumps working. Since the power plant had been shut down, it cannot produce any electricity by itself any more.

Things were going well for an hour. One of the multiple sets of emergency Diesel power generators kicked in and provided the electricity that was needed. Then the tsunami came, much bigger than people had expected when building the power plant (see above). The tsunami took out all of the multiple sets of backup Diesel generators.

When designing a nuclear power plant, engineers follow a philosophy called “Defense in Depth”. That means that you first build everything to withstand the worst catastrophe you can imagine, and then design the plant in such a way that it can still handle one system failure (that you thought could never happen) after the other. A tsunami taking out all backup power in one swift strike is such a scenario. The last line of defense is putting everything into the third containment (see above), which will keep everything, whatever the mess, control rods in or out, core molten or not, inside the reactor.

When the diesel generators were gone, the reactor operators switched to emergency battery power. The batteries were designed as one of the backups to the backups, to provide power for cooling the core for 8 hours. And they did.

Within the 8 hours, another power source had to be found and connected to the power plant. The power grid was down due to the earthquake. The diesel generators were destroyed by the tsunami. So mobile diesel generators were trucked in.

This is where things started to go seriously wrong. The external power generators could not be connected to the power plant (the plugs did not fit). So after the batteries ran out, the residual heat could not be carried away any more.

At this point the plant operators began to follow emergency procedures that are in place for a “loss of cooling event”. It is again a step along the “Defense in Depth” lines. The power to the cooling systems should never have failed completely, but it did, so they “retreated” to the next line of defense. All of this, however shocking it seems to us, is part of the day-to-day training you go through as an operator, right through to managing a core meltdown.

It was at this stage that people started to talk about core meltdown. Because at the end of the day, if cooling cannot be restored, the core will eventually melt (after hours or days), and the last line of defense, the core catcher and third containment, would come into play.

But the goal at this stage was to manage the core while it was heating up, and to ensure that the first containment (the Zircaloy tubes that contain the nuclear fuel), as well as the second containment (our pressure cooker), remained intact and operational for as long as possible, to give the engineers time to fix the cooling systems.

Because cooling the core is such a big deal, the reactor has a number of cooling systems, each in multiple versions (the reactor water cleanup system, the decay heat removal, the reactor core isolating cooling, the standby liquid cooling system, and the emergency core cooling system). Which one failed when or did not fail is not clear at this point in time.

So imagine our pressure cooker on the stove, heat on low, but on. The operators use whatever cooling system capacity they have to get rid of as much heat as possible, but the pressure starts building up. The priority now is to maintain integrity of the first containment (keep temperature of the fuel rods below 2200°C), as well as the second containment, the pressure cooker. In order to maintain integrity of the pressure cooker (the second containment), the pressure has to be released from time to time. Because the ability to do that in an emergency is so important, the reactor has 11 pressure release valves. The operators now started venting steam from time to time to control the pressure. The temperature at this stage was about 550°C.

This is when the reports about “radiation leakage” started coming in. I believe I explained above why venting the steam is theoretically the same as releasing radiation into the environment, but also why it was and is not dangerous. The radioactive nitrogen as well as the noble gases do not pose a threat to human health.

At some stage during this venting, the explosion occurred. The explosion took place outside of the third containment (our “last line of defense”) and the reactor building. Remember that the reactor building has no function in keeping the radioactivity contained. It is not entirely clear yet what happened, but this is the likely scenario: the operators decided to vent the steam from the pressure vessel not directly into the environment, but into the space between the third containment and the reactor building (to give the radioactivity in the steam more time to subside). The problem is that at the high temperatures that the core had reached at this stage, water molecules can “dissociate” into oxygen and hydrogen – an explosive mixture. And it did explode, outside the third containment, damaging the surrounding reactor building. It was that sort of explosion, but inside the pressure vessel (because it was badly designed and not managed properly by the operators), that led to the explosion at Chernobyl. This was never a risk at Fukushima. The problem of hydrogen-oxygen formation is one of the biggies when you design a power plant (if you are not Soviet, that is), so the reactor is built and operated in a way that it cannot happen inside the containment. It happened outside, which was not intended but was a possible scenario and OK, because it did not pose a risk to the containment.

So the pressure was under control, as steam was vented. Now, if you keep boiling your pot, the problem is that the water level will keep falling and falling. The core is covered by several meters of water in order to allow for some time to pass (hours, days) before it gets exposed. Once the rods start to be exposed at the top, the exposed parts will reach the critical temperature of 2200 °C after about 45 minutes. This is when the first containment, the Zircaloy tube, would fail.

And this started to happen. The cooling could not be restored before there was some (very limited, but still) damage to the casing of some of the fuel. The nuclear material itself was still intact, but the surrounding Zircaloy shell had started melting. What happened now is that some of the byproducts of the uranium fission – radioactive Cesium and Iodine – started to mix with the steam. The big problem, uranium, was still under control, because the uranium oxide rods were good up to 3000 °C. It is confirmed that a very small amount of Cesium and Iodine was measured in the steam that was released into the atmosphere.

It seems this was the “go signal” for a major plan B. The small amounts of Cesium that were measured told the operators that the first containment on one of the rods somewhere was about to give. Plan A had been to restore one of the regular cooling systems to the core. Why that failed is unclear. One plausible explanation is that the tsunami also washed away or polluted all the clean water needed for the regular cooling systems.

The water used in the cooling system is very clean, demineralized (like distilled) water. The reason to use pure water is the above-mentioned activation by the neutrons from the uranium: pure water does not get activated much, so it stays practically radioactive-free. Dirt or salt in the water will absorb the neutrons more quickly, becoming more radioactive. This has no effect whatsoever on the core – it does not care what it is cooled by. But it makes life more difficult for the operators and mechanics when they have to deal with activated (i.e. slightly radioactive) water.

But Plan A had failed – cooling systems down or additional clean water unavailable – so Plan B came into effect. This is what appears to have happened:

In order to prevent a core meltdown, the operators started to use sea water to cool the core. I am not quite sure if they flooded our pressure cooker with it (the second containment), or if they flooded the third containment, immersing the pressure cooker. But that is not relevant for us.

The point is that the nuclear fuel has now been cooled down. Because the chain reaction was stopped a long time ago, there is only very little residual heat being produced now. The large amount of cooling water that has been used is sufficient to take up that heat. Because it is a lot of water, the core no longer produces enough heat to build up any significant pressure. Also, boric acid has been added to the seawater. Boric acid is a “liquid control rod”. Whatever fission is still going on, the boron will capture the neutrons and further speed up the cooling down of the core.

The plant came close to a core meltdown. Here is the worst-case scenario that was avoided: if the seawater could not have been used for cooling, the operators would have continued to vent the water steam to avoid pressure buildup. The third containment would then have been completely sealed to allow the core meltdown to happen without releasing radioactive material. After the meltdown, there would have been a waiting period for the intermediate radioactive materials to decay inside the reactor, and for all radioactive particles to settle on a surface inside the containment. The cooling system would have been restored eventually, and the molten core cooled to a manageable temperature. The containment would have been cleaned up on the inside. Then the messy job of removing the molten core from the containment would have begun, packing the (now solid again) fuel bit by bit into transportation containers to be shipped to processing plants. Depending on the damage, the block of the plant would then either be repaired or dismantled.

Now, where does that leave us?

* The plant is safe now and will stay safe.
* Japan is looking at an INES Level 4 Accident: Nuclear accident with local consequences. That is bad for the company that owns the plant, but not for anyone else.
* Some radiation was released when the pressure vessel was vented. All radioactive isotopes from the activated steam have gone (decayed). A very small amount of Cesium was released, as well as Iodine. If you were sitting on top of the plants’ chimney when they were venting, you should probably give up smoking to return to your former life expectancy. The Cesium and Iodine isotopes were carried out to the sea and will never be seen again.
* There was some limited damage to the first containment. That means that some amounts of radioactive Cesium and Iodine will also be released into the cooling water, but no Uranium or other nasty stuff (the Uranium oxide does not “dissolve” in the water). There are facilities for treating the cooling water inside the third containment. The radioactive Cesium and Iodine will be removed there and eventually stored as radioactive waste in terminal storage.
* The seawater used as cooling water will be activated to some degree. Because the control rods are fully inserted, the Uranium chain reaction is not happening. That means the “main” nuclear reaction is not happening, thus not contributing to the activation. The intermediate radioactive materials (Cesium and Iodine) are also almost gone at this stage, because the Uranium fission was stopped a long time ago. This further reduces the activation. The bottom line is that there will be some low level of activation of the seawater, which will also be removed by the treatment facilities.
* The seawater will then be replaced over time with the “normal” cooling water.
* The reactor core will then be dismantled and transported to a processing facility, just like during a regular fuel change.
* Fuel rods and the entire plant will be checked for potential damage. This will take about 4-5 years.
* The safety systems on all Japanese plants will be upgraded to withstand a 9.0 earthquake and tsunami (or worse).
* I believe the most significant problem will be a prolonged power shortage. About half of Japan’s nuclear reactors will probably have to be inspected, reducing the nation’s power generating capacity by 15%. This will probably be covered by running gas power plants, which are usually only used for peak loads, to cover some of the base load as well. That will increase electricity bills in Japan and may lead to power shortages during peak demand.

Are Compact Fluorescent Lightbulbs Really Cheaper Over Time?

I hate the lighting produced by CFL bulbs. I am going to switch straight from incandescent bulbs to LED lights when the price of LEDs comes down. CFL is a stopgap technology that should eventually be phased out.

By Joseph Calamia, March 2011, IEEE Spectrum
CFLs must last long enough for their energy efficiency to make up for their higher cost

You buy a compact fluorescent lamp. The packaging says it will last for 6000 hours—about five years, if used for three hours a day. A year later, it burns out.

Last year, IEEE Spectrum reported that some Europeans opposed legislation to phase out incandescent lighting. Rather than replace their lights with compact fluorescents, consumers started hoarding traditional bulbs.

From the comments on that article, it seems that some IEEE Spectrum readers aren’t completely sold on CFLs either. We received questions about why the lights don’t always meet their long-lifetime claims, what can cause them to fail, and ultimately, how dead bulbs affect the advertised savings of switching from incandescent.

Tests of compact fluorescent lamps’ lifetime vary among countries. The majority of CFLs sold in the United States adhere to the U.S. Department of Energy and Environmental Protection Agency’s Energy Star approval program, according to the U.S. National Electrical Manufacturers Association. For these bulbs, IEEE Spectrum found some answers.

How is a compact fluorescent lamp’s lifetime calculated in the first place?

“With any given lamp that rolls off a production line, whatever the technology, they’re not all going to have the same exact lifetime,” says Alex Baker, lighting program manager for the Energy Star program. In an initial test to determine an average lifetime, he says, manufacturers leave a large sample of lamps lit. The defined average “rated life” is the time it takes for half of the lamps to go out. Baker says that this average life definition is an old lighting industry standard that applies to incandescent and compact fluorescent lamps alike.

In reality, the odds may actually be somewhat greater than 50 percent that your 6000-hour-rated bulb will still be burning bright at 6000 hours. “Currently, qualified CFLs in the market may have longer lifetimes than manufacturers are claiming,” says Jen Stutsman, of the Department of Energy’s public affairs office. “More often than not, more than 50 percent of the lamps of a sample set are burning during the final hour of the manufacturer’s chosen rated lifetime,” she says, noting that manufacturers often opt to end lifetime evaluations prematurely, to save on testing costs.

Although manufacturers usually conduct this initial rated life test in-house, the Energy Star program requires other lifetime evaluations conducted by accredited third-party laboratories. Jeremy Snyder directed one of those testing facilities, the Program for the Evaluation and Analysis of Residential Lighting (PEARL) in Troy, N.Y., which evaluated Energy Star–qualified bulbs until late 2010, when the Energy Star program started conducting these tests itself. Snyder works at the Rensselaer Polytechnic Institute’s Lighting Research Center, which conducts a variety of tests on lighting products, including CFLs and LEDs. Some Energy Star lifetime tests, he says, require 10 sample lamps for each product—five pointing toward the ceiling and five toward the floor. One “interim life test” entails leaving the lamps lit for 40 percent of their rated life. Three strikes, or burnt-out lamps, and the product risks losing its qualification.

Besides waiting for bulbs to burn out, testers also measure the light output of lamps over time, to ensure that the CFLs do not appreciably dim with use. Using a hollow “integrating sphere,” which has a white interior to reflect light in all directions, Lighting Research Center staff can take precise measurements of a lamp’s total light output in lumens. The Energy Star program requires that 10 tested lights maintain an average of 90 percent of their initial lumen output for 1000 hours of life, and 80 percent of their initial lumen output at 40 percent of their rated life.

Is there any way to accelerate these lifetime tests?

“There are techniques for accelerated testing of incandescent lamps, but there’s no accepted accelerated testing for other types,” says Michael L. Grather, the primary lighting performance engineer at Luminaire Testing Laboratory and Underwriters’ Laboratories in Allentown, Penn. For incandescent bulbs, one common method is to run more electric current through the filament than the lamp might experience in normal use. But Grather says a similar test for CFLs wouldn’t give consumers an accurate prediction of the bulb’s life: “You’re not fairly indicating what’s going to happen as a function of time. You’re just stressing different components—the electronics but not the entire lamp.”

Perhaps the closest such evaluation for CFLs is the Energy Star “rapid cycle test.” For this evaluation, testers divide the total rated life of the lamp, measured in hours, by two and switch the compact fluorescent on for five minutes and off for five minutes that number of times. For example, a CFL with a 6000-hour rated life must undergo 3000 such rapid cycles. At least five out of a sample of six lamps must survive for the product to keep its Energy Star approval.

In real scenarios, what causes CFLs to fall short of their rated life?

As anyone who frequently replaces CFLs in closets or hallways has likely discovered, rapid cycling can prematurely kill a CFL. Repeatedly starting the lamp shortens its life, Snyder explains, because high voltage at start-up sends the lamp’s mercury ions hurtling toward the starting electrode, which can destroy the electrode’s coating over time. Snyder suggests consumers keep this in mind when deciding where to use a compact fluorescent. The Lighting Research Center has published a worksheet [PDF] for consumers to better understand how frequent switching reduces a lamp’s lifetime. The sheet provides a series of multipliers so that consumers can better predict a bulb’s longevity. The multipliers range from 1.5 (for bulbs left on for at least 12 hours) to 0.4 (for bulbs turned off after 15 minutes). Despite any lifetime reduction, Snyder says consumers should still turn off lights not needed for more than a few minutes.
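As a quick illustration of how those multipliers translate into hours, the adjusted life is just the rated life times the multiplier for your typical on-time. The 6,000-hour rated life and the two multipliers are figures quoted above; intermediate on-times are omitted because the article gives only the end points.

```python
# Adjusting a CFL's rated life for switching frequency, using the two
# end-point multipliers quoted above from the Lighting Research Center
# worksheet (intermediate on-times fall somewhere in between).

RATED_LIFE_HOURS = 6000

multipliers = {
    "left on for 12 hours or more": 1.5,
    "turned off after 15 minutes": 0.4,
}

for usage, m in multipliers.items():
    print(f"{usage}: roughly {RATED_LIFE_HOURS * m:.0f} hours of expected life")
```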

Another CFL slayer is temperature. “Incandescents thrive on heat,” Baker says. “The hotter they get, the more light you get out of them. But a CFL is very temperature sensitive.” He notes that “recessed cans”—insulated lighting fixtures—prove a particularly nasty compact fluorescent death trap, especially when attached to dimmers, which can also shorten the electronic ballast’s life. He says consumers often install CFLs meant for table or floor lamps inside these fixtures, instead of lamps specially designed for higher temperatures, as indicated on their packages. Among other things, these high temperatures can destroy the lamps’ electrolytic capacitors—the main reason, he says, that CFLs fail when overheated.

How do shorter-than-expected lifetimes affect the payback equation?

An accurate prediction of the savings from switching away from an incandescent must account for both the cost of the lamp and its energy savings over time. Although the initial price of a compact fluorescent (which can range [PDF] from US $0.50 in a multipack to over $9) is usually more than that of an incandescent (usually less than a U.S. dollar), a CFL can use a fraction of the energy an incandescent requires. Over its lifetime, the compact fluorescent should make up for its higher initial cost in savings—if it lives long enough. It should also offset the estimated 4 milligrams of mercury it contains. You might think of mercury vapor as the CFL’s equivalent of an incandescent’s filament. The electrodes in the CFL excite this vapor, which in turn radiates and excites the lamp’s phosphor coating, giving off light. Given that coal-burning power plants also release mercury into the air, an amount that the Energy Star program estimates at around 0.012 milligrams per kilowatt-hour, if the CFL can save enough energy it should offset this environmental cost, too.

Exactly how long a CFL must live to make up for its higher costs depends on the price of the lamp, the price of electric power, and how much energy the compact fluorescent requires to produce the same amount of light as its incandescent counterpart. Many manufacturers claim that consumers can take an incandescent wattage and divide it by four, and sometimes five, to find an equivalent CFL in terms of light output, says Russ Leslie, associate director at the Lighting Research Center. But he believes that’s “a little bit too greedy.” Instead, he recommends dividing by three. “You’ll still save a lot of energy, but you’re more likely to be happy with the light output,” he says.

To estimate your particular savings, the Energy Star program has published a spreadsheet where you can enter the price you’re paying for electricity, the average number of hours your household uses the lamp each day, the price you paid for the bulb, and its wattage. The sheet also includes the assumptions used to calculate the comparison between compact fluorescent and incandescent bulbs. Playing with the default assumptions given in the sheet, we reduced the CFL’s lifetime by 60 percent to account for frequent switching, doubled the initial price to make up for dead bulbs, deleted the assumed labor costs for changing bulbs, and increased the CFL’s wattage to give us a bit more light. The compact fluorescent won. We invite you to try the same, with your own lighting and energy costs, and let us know your results.
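The arithmetic behind that spreadsheet is simple enough to sketch. In the snippet below, the divide-by-three wattage rule comes from the article; the lamp prices and electricity rate are illustrative assumptions rather than the Energy Star defaults, so substitute your own figures.

```python
# Back-of-the-envelope CFL payback estimate. The divide-by-three wattage
# rule is quoted above; the prices and electricity rate are assumptions,
# not the Energy Star spreadsheet defaults.

incandescent_watts = 60
cfl_watts = incandescent_watts / 3     # the "divide by three" rule of thumb
price_incandescent = 0.75              # dollars (assumed)
price_cfl = 3.00                       # dollars (assumed)
electricity_per_kwh = 0.12             # dollars (assumed)

saving_per_hour = (incandescent_watts - cfl_watts) / 1000 * electricity_per_kwh
breakeven_hours = (price_cfl - price_incandescent) / saving_per_hour

print(f"break-even after about {breakeven_hours:.0f} hours of use "
      f"({breakeven_hours / 3:.0f} days at 3 hours per day)")
```

A bulb that dies early simply never reaches its break-even hours, which is why adjustments like a halved lifetime or a doubled purchase price change the answer so much.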

B.C. priest lands snowboarding PhD

I have been saying for many years that skiing nurtures my spirituality. Finally there is some theological backing from an Anglican priest, whose Ph.D. thesis is on spirituality in snowboarding.

CBC News, Mar 4, 2011
Thesis examines connection between spirituality and snowboarding

An Anglican priest from Trail, B.C., has become the first person in the world to get a PhD in snowboarding.

Neil Elliot, the minister at St. Andrew’s Anglican Church, recently received his doctorate from Kingston University in London, England.

“The genesis was discovering this term ‘soul-riding’ in a discussion on the internet, and that discussion going into how people have had transcending experiences while riding, and discovering I’ve had that experience as well; I just hadn’t recognized it,” he said.

Elliot interviewed dozens of snowboarders from the United Kingdom and Canada, delving into the spirituality of snowboarding.

“Soul-riding starts with riding powder, it starts with finding some kind of almost transcendent experience in riding powder and in the whole of your life, so soul-riding is about being completely focused, being completely in the moment, you might say.”

Elliot said it’s clear spirituality and snowboarding do intersect.

“[It’s] about snowboarders who discovered that … snowboarding was their spirituality. I had a lot of people who said, ‘Snowboarding is my religion.'”

‘New model for spirituality’

While Elliot’s thesis doesn’t draw any definite conclusions, he says it offers a new point of view.

“What my thesis does is give a new model for spirituality, saying that spirituality is a way of looking at the world and a way of looking at the world that includes there being something more than just the material,” he said.

“My thesis goes on to say that there’s three dimensions to that. There’s the experiences that we have, there’s the context that we’re in and then there’s what’s going on really inside us, who we are.”

Elliot, who already has a master’s degree in theology and Islamic studies, is the first to admit his love of snowboarding drove him to get the PhD and a job in the B.C. mountains. But he insists his thesis is serious.

“My PhD is about spirituality and snowboarding. It’s rooted in the sociology of religion and in … this debate that’s going on about whether somebody is religious or spiritual. A lot of people say, ‘I’m not religious — I’m spiritual’ and I’m trying to find out what that actually means,” he said.

“The spirituality of snowboarding is looking at what does it mean to be spiritual in today’s world.”

Elliot said his colleagues and congregation support his unorthodox PhD, and love of both the board and cloth.

“They understand that this is a light on what we’re all struggling with: how do we encourage people to come into the church? How do we encourage people to see religion and spirituality as working together, rather than being different things?”

3D Printing, The printed world

This is the end of Bandai. Who would buy overpriced plastic models if you can print your own Gundam? It will probably be the end of Toys R Us too.

Feb 10th 2011, The Economist
Three-dimensional printing from digital designs will transform manufacturing and allow more people to start making things

FILTON, just outside Bristol, is where Britain’s fleet of Concorde supersonic airliners was built. In a building near a wind tunnel on the same sprawling site, something even more remarkable is being created. Little by little a machine is “printing” a complex titanium landing-gear bracket, about the size of a shoe, which normally would have to be laboriously hewn from a solid block of metal. Brackets are only the beginning. The researchers at Filton have a much bigger ambition: to print the entire wing of an airliner.

Far-fetched as this may seem, many other people are using three-dimensional printing technology to create similarly remarkable things. These include medical implants, jewellery, football boots designed for individual feet, lampshades, racing-car parts, solid-state batteries and customised mobile phones. Some are even making mechanical devices. At the Massachusetts Institute of Technology (MIT), Peter Schmitt, a PhD student, has been printing something that resembles the workings of a grandfather clock. It took him a few attempts to get right, but eventually he removed the plastic clock from a 3D printer, hung it on the wall and pulled down the counterweight. It started ticking.

Engineers and designers have been using 3D printers for more than a decade, but mostly to make prototypes quickly and cheaply before they embark on the expensive business of tooling up a factory to produce the real thing. As 3D printers have become more capable and able to work with a broader range of materials, including production-grade plastics and metals, the machines are increasingly being used to make final products too. More than 20% of the output of 3D printers is now final products rather than prototypes, according to Terry Wohlers, who runs a research firm specialising in the field. He predicts that this will rise to 50% by 2020.

Using 3D printers as production tools has become known in industry as “additive” manufacturing (as opposed to the old, “subtractive” business of cutting, drilling and bashing metal). The additive process requires less raw material and, because software drives 3D printers, each item can be made differently without costly retooling. The printers can also produce ready-made objects that require less assembly and things that traditional methods would struggle with—such as the glove pictured above, made by Within Technologies, a London company. It can be printed in nylon, stainless steel or titanium.

The printing of parts and products has the potential to transform manufacturing because it lowers the costs and risks. No longer does a producer have to make thousands, or hundreds of thousands, of items to recover his fixed costs. In a world where economies of scale do not matter any more, mass-manufacturing identical items may not be necessary or appropriate, especially as 3D printing allows for a great deal of customisation. Indeed, in the future some see consumers downloading products as they do digital music and printing them out at home, or at a local 3D production centre, having tweaked the designs to their own tastes. That is probably a faraway dream. Nevertheless, a new industrial revolution may be on the way.

Printing in 3D may seem bizarre. In fact it is similar to clicking on the print button on a computer screen and sending a digital file, say a letter, to an inkjet printer. The difference is that the “ink” in a 3D printer is a material which is deposited in successive, thin layers until a solid object emerges.

The layers are defined by software that takes a series of digital slices through a computer-aided design. Descriptions of the slices are then sent to the 3D printer to construct the respective layers. They are then put together in a number of ways. Powder can be spread onto a tray and then solidified in the required pattern with a squirt of a liquid binder or by sintering it with a laser or an electron beam. Some machines deposit filaments of molten plastic. However it is achieved, after each layer is complete the build tray is lowered by a fraction of a millimetre and the next layer is added.
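A toy calculation gives a feel for what "successive, thin layers" means in practice. The layer thickness below is in the 20-30 micron range quoted later in the article for metal printers; the part height is an assumed example, and the loop only marks where a real slicer would compute each cross-section.

```python
# Toy picture of the slicing step: cut the digital model into thin
# horizontal slices and build them one at a time. The 100 mm height is an
# assumed example; 0.025 mm is in the 20-30 micron range quoted later in
# the article for metal printers.

part_height_mm = 100.0
layer_thickness_mm = 0.025

n_layers = int(part_height_mm / layer_thickness_mm)
print(f"{n_layers} layers to deposit")  # 4,000 layers for this example

for layer in range(n_layers):
    z = layer * layer_thickness_mm
    # A real slicer would compute the model's cross-section at height z here
    # and send that pattern to the printer to fuse or deposit.
    pass
```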
And when you’re happy, click print

The researchers at Filton began using 3D printers to produce prototype parts for wind-tunnel testing. The group is part of EADS Innovation Works, the research arm of EADS, a European defence and aerospace group best known for building Airbuses. Prototype parts tend to be very expensive to make as one-offs by conventional means. Because their 3D printers could do the job more efficiently, the researchers’ thoughts turned to manufacturing components directly.

Aircraft-makers have already replaced a lot of the metal in the structure of planes with lightweight carbon-fibre composites. But even a small airliner still contains several tonnes of costly aerospace-grade titanium. These parts have usually been machined from solid billets, which can result in 90% of the material being cut away. This swarf is no longer of any use for making aircraft.

To make the same part with additive manufacturing, EADS starts with a titanium powder. The firm’s 3D printers spread a layer about 20-30 microns (0.02-0.03mm) thick onto a tray where it is fused by lasers or an electron beam. Any surplus powder can be reused. Some objects may need a little machining to finish, but they still require only 10% of the raw material that would otherwise be needed. Moreover, the process uses less energy than a conventional factory. It is sometimes faster, too.
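Combining the figures in this paragraph and the previous one shows how large the material saving is. The 1 kg finished part is an assumed example weight; the 90% and 10% figures are the article's.

```python
# Combining the article's figures: machining can cut away 90% of a titanium
# billet, while printing needs only about 10% of that raw material. The
# 1 kg finished-part weight is an assumed example.

finished_part_kg = 1.0
billet_kg = finished_part_kg / (1 - 0.90)   # 90% machined away -> 10 kg billet
powder_kg = 0.10 * billet_kg                # printing needs ~10% of that

print(f"machined from a billet: {billet_kg:.0f} kg of titanium required")
print(f"printed from powder:    {powder_kg:.0f} kg of titanium required")
```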

There are other important benefits. Most metal and plastic parts are designed to be manufactured, which means they can be clunky and contain material surplus to the part’s function but necessary for making it. This is not true of 3D printing. “You only put material where you need to have material,” says Andy Hawkins, lead engineer on the EADS project. The parts his team is making are more svelte, even elegant. This is because without manufacturing constraints they can be better optimised for their purpose. Compared with a machined part, the printed one is some 60% lighter but still as sturdy.

Lightness is critical in making aircraft. A reduction of 1kg in the weight of an airliner will save around $3,000-worth of fuel a year and by the same token cut carbon-dioxide emissions. Additive manufacturing could thus help build greener aircraft—especially if all the 1,000 or so titanium parts in an airliner can be printed. Although the size of printable parts is limited for now by the size of 3D printers, the EADS group believes that bigger systems are possible, including one that could fit on the 35-metre-long gantry used to build composite airliner wings. This would allow titanium components to be printed directly onto the structure of the wing.

Many believe that the enhanced performance of additively manufactured items will be the most important factor in driving the technology forward. It certainly is for MIT’s Mr Schmitt, whose interest lies in “original machines”. These are devices not constructed from a collection of prefabricated parts, but created in a form that flows from the intention of the design. If that sounds a bit arty, it is: Mr Schmitt is a former art student from Germany who used to cadge time on factory lathes and milling machines to make mechanised sculptures. He is now working on novel servo mechanisms, the basic building blocks for robots. Custom-made servos cost many times the price of off-the-shelf ones. Mr Schmitt says it should be possible for a robot builder to specify what a servo needs to do, rather than how it needs to be made, and send that information to a 3D printer, and for the machine’s software to know how to produce it at a low cost. “This makes manufacturing more accessible,” says Mr Schmitt.

The idea of the 3D printer determining the form of the items it produces intrigues Neri Oxman, an architect and designer who heads a research group examining new ways to make things at MIT’s Media Lab. She is building a printer to explore how new designs could be produced. Dr Oxman believes the design and construction of objects could be transformed using principles inspired by nature, resulting in shapes that are impossible to build without additive manufacturing. She has made items from sculpture to body armour and is even looking at buildings, erected with computer-guided nozzles that deposit successive layers of concrete.

Some 3D systems allow the properties and internal structure of the material being printed to be varied. This year, for instance, Within Technologies expects to begin offering titanium medical implants with features that resemble bone. The company’s femur implant is dense where stiffness and strength is required, but it also has strong lattice structures which would encourage the growth of bone onto the implant. Such implants are more likely to stay put than conventional ones.

Working at such a fine level of internal detail allows the stiffness and flexibility of an object to be determined at any point, says Siavash Mahdavi, the chief executive of Within Technologies. Dr Mahdavi is working on other lattice structures, including aerodynamic body parts for racing cars and special insoles for a firm that hopes to make the world’s most comfortable stiletto-heeled shoes.

Digital Forming, a related company (where Dr Mahdavi is chief technology officer), uses 3D design software to help consumers customise mass-produced products. For example, it is offering a service to mobile-phone companies in which subscribers can go online to change the shape, colour and other features of the case of their new phone. The software keeps the user within the bounds of the achievable. Once the design is submitted the casing is printed. Lisa Harouni, the company’s managing director, says the process could be applied to almost any consumer product, from jewellery to furniture. “I don’t have any doubt that this technology will change the way we manufacture things,” she says.

Other services allow individuals to upload their own designs and have them printed. Shapeways, a New York-based firm spun out of Philips, a Dutch electronics company, last year, offers personalised 3D production, or “mass customisation”, as Peter Weijmarshausen, its chief executive, describes it. Shapeways prints more than 10,000 unique products every month from materials that range from stainless steel to glass, plastics and sandstone. Customers include individuals and shopkeepers, many ordering jewellery, gifts and gadgets to sell in their stores.

EOS, a German supplier of laser-sintering 3D printers, says they are already being used to make plastic and metal production parts by carmakers, aerospace firms and consumer-products companies. And by dentists: up to 450 dental crowns, each tailored for an individual patient, can be manufactured in one go in a day by a single machine, says EOS. Some craft producers of crowns would do well to manage a dozen a day. As an engineering exercise, EOS also printed the parts for a violin using a high-performance industrial polymer, had it assembled by a professional violin-maker and played by a concert violinist.

Both EOS and Stratasys, a company based in Minneapolis which makes 3D printers that employ plastic-deposition technology, use their own machines to print parts that are, in turn, used to build more printers. Stratasys is even trying to print a car, or at least the body of one, for Kor Ecologic, a company in Winnipeg, whose boss, Jim Kor, is developing an electric-hybrid vehicle called Urbee.
Jim Kor’s printed the model. Next, the car

Making low-volume, high-value and customised components is all very well, but could additive manufacturing really compete with mass-production techniques that have been honed for over a century? Established techniques are unlikely to be swept away, but it is already clear that the factories of the future will have 3D printers working alongside milling machines, presses, foundries and plastic injection-moulding equipment, and taking on an increasing amount of the work done by those machines.

Morris Technologies, based in Cincinnati, was one of the first companies to invest heavily in additive manufacturing for the engineering and production services it offers to companies. Its first intention was to make prototypes quickly, but by 2007 the company says it realised “a new industry was being born” and so it set up another firm, Rapid Quality Manufacturing, to concentrate on the additive manufacturing of higher volumes of production parts. It says many small and medium-sized components can be turned from computer designs into production-quality metal parts in hours or days, against days or weeks using traditional processes. And the printers can build unattended, 24 hours a day.

Neil Hopkinson has no doubts that 3D printing will compete with mass manufacturing in many areas. His team at Loughborough University has invented a high-speed sintering system. It uses inkjet print-heads to deposit infra-red-absorbing ink on layers of polymer powder which are fused into solid shapes with infra-red heating. Among other projects, the group is examining the potential for making plastic buckles for Burton Snowboards, a leading American producer of winter-sports equipment. Such items are typically produced by plastic injection-moulding. Dr Hopkinson says his process can make them for ten pence (16 cents) each, which is highly competitive with injection-moulding. Moreover, the designs could easily be changed without Burton incurring high retooling costs.

Predicting how quickly additive manufacturing will be taken up by industry is difficult, adds Dr Hopkinson. That is not necessarily because of the conservative nature of manufacturers, but rather because some processes have already moved surprisingly fast. Only a few years ago making decorative lampshades with 3D printers seemed to be a highly unlikely business, but it has become an industry with many competing firms and sales volumes in the thousands.

Dr Hopkinson thinks Loughborough’s process is already competitive with injection-moulding at production runs of around 1,000 items. With further development he expects that within five years it would be competitive in runs of tens if not hundreds of thousands. Once 3D printing machines are able to crank out products in such numbers, then more manufacturers will look to adopt the technology.

Will Sillar of Legerwood, a British firm of consultants, expects to see the emergence of what he calls the “digital production plant”: firms will no longer need so much capital tied up in tooling costs, work-in-progress and raw materials, he says. Moreover, the time to take a digital design from concept to production will drop, he believes, by as much as 50-80%. The ability to overcome production constraints and make new things will combine with improvements to the technology and greater mechanisation to make 3D printing more mainstream. “The market will come to the technology,” Mr Sillar says.

Some in the industry believe that the effect of 3D printing on manufacturing will be analogous to that of the inkjet printer on document printing. The written word became the printed word with the invention of movable-type printing by Johannes Gutenberg in the 15th century. Printing presses became like mass-production machines, highly efficient at printing lots of copies of the same thing but not individual documents. The inkjet printer made that a lot easier, cheaper and more personal. Inkjet devices now perform a multitude of printing roles, from books on demand to labels and photographs, even though traditional presses still roll for large runs of books, newspapers and so on.

How would this translate to manufacturing? Most obviously, it changes the economics of making customised components. If a company needs a specialised part, it may find it cheaper and quicker to have the part printed locally or even to print its own than to order one from a supplier a long way away. This is more likely when rapid design changes are needed.

Printing in 3D is not the preserve of the West: Chinese companies are adopting the technology too. Even so, some manufacturing may return to the West from cheap centres of production in China and elsewhere. This possibility was on the agenda of a conference organised by DHL last year. The threat to the logistics firm’s business is clear: why would a company airfreight an urgently needed spare part from abroad when it could print one where it is required?

Perhaps the most exciting aspect of additive manufacturing is that it lowers the cost of entry into the business of making things. Instead of finding the money to set up a factory or asking a mass-producer at home (or in another country) to make something for you, 3D printers will offer a cheaper, less risky route to the market. An entrepreneur could run off one or two samples with a 3D printer to see if his idea works. He could make a few more to see if they sell, and take in design changes that buyers ask for. If things go really well, he could scale up—with conventional mass production or an enormous 3D print run.

This suggests that success in manufacturing will depend less on scale and more on the quality of ideas. Brilliance alone, though, will not be enough. Good ideas can be copied even more rapidly with 3D printing, so battles over intellectual property may become even more intense. It will be easier for imitators as well as innovators to get goods to market fast. Competitive advantages may thus be shorter-lived than ever before. As with past industrial revolutions, the greatest beneficiaries may not be companies but their customers. But whoever gains most, revolution may not be too strong a word.

Turning garbage into gas

Why burn or bury garbage when you can vaporize it and turn it into electricity? This looks like a solution to the landfill problem.

Feb 3rd 2011, Economist
Atomising trash eliminates the need to dump it, and generates useful power too

DISPOSING of household rubbish is not, at first glance, a task that looks amenable to high-tech solutions. But Hilburn Hillestad of Geoplasma, a firm based in Atlanta, Georgia, begs to differ. Burying trash—the usual way of disposing of the stuff—is old-fashioned and polluting. Instead, Geoplasma, part of a conglomerate called the Jacoby Group, proposes to tear it into its constituent atoms with electricity. It is clean. It is modern. And, what is more, it might even be profitable.

For years, some particularly toxic types of waste, such as the sludge from oil refineries, have been destroyed with artificial lightning from electric plasma torches—devices that heat matter to a temperature higher than that of the sun’s surface. Until recently this has been an expensive process, costing as much as $2,000 per tonne of waste, according to SRL Plasma, an Australian firm that has manufactured torches for 13 of the roughly two dozen plants around the world that work this way.

Now, though, costs are coming down. Moreover, it has occurred to people such as Dr Hillestad that the process could be used to generate power as well as consuming it. Appropriately tweaked, the destruction of organic materials (including paper and plastics) by plasma torches produces a mixture of carbon monoxide and hydrogen called syngas. That, in turn, can be burned to generate electricity. Add in the value of the tipping fees that do not have to be paid if rubbish is simply vaporised, plus the fact that energy prices in general are rising, and plasma torches start to look like a plausible alternative to burial.
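
As a back-of-envelope illustration of that economic argument, the sketch below (Python) nets the tipping fee avoided and the value of electricity generated from the syngas against the power the torch consumes and other running costs. Every figure is an invented placeholder, there purely to show how the pieces of the calculation fit together; the article gives no such per-tonne breakdown.

    # Rough per-tonne economics of plasma gasification. All numbers are
    # invented placeholders, not figures from the article.
    TIPPING_FEE_AVOIDED     = 60.0  # $/tonne: landfill fee that no longer has to be paid
    ELECTRICITY_FROM_SYNGAS = 40.0  # $/tonne: assumed value of power from burning the syngas
    TORCH_POWER_COST        = 50.0  # $/tonne: assumed cost of electricity driving the torch
    OTHER_OPERATING_COST    = 20.0  # $/tonne: assumed labour, maintenance, debt service

    net = (TIPPING_FEE_AVOIDED + ELECTRICITY_FROM_SYNGAS) - (TORCH_POWER_COST + OTHER_OPERATING_COST)
    print(f"net value: ${net:.2f} per tonne of waste")  # positive means vaporising beats burying

Rising energy prices raise both the syngas revenue and the torch's running cost; the article's point is that falling torch costs and avoided tipping fees are tipping the balance towards a positive net figure.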

The technology has got better, too. The core of a plasma torch is a pair of electrodes, usually made from a nickel-based alloy. A current arcs between them and turns the surrounding air into a plasma by stripping electrons from their parent atoms. Waste (chopped up into small pieces if it is solid) is fed into this plasma. The heat and electric charges of the plasma break the chemical bonds in the waste, vaporising it. Then, if the mix of waste is correct, the carbon and oxygen atoms involved recombine to form carbon monoxide and the hydrogen atoms link up into diatomic hydrogen molecules. Both of these are fuels (they burn in air to form carbon dioxide and water, respectively). Metals and other inorganic materials that do not turn into gas fall to the bottom of the chamber as molten slag. Once it has cooled, this slag can be used to make bricks or to pave roads.
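
The combustion step mentioned in passing is just the ordinary oxidation of the two syngas components; written out (standard chemistry, not something the article spells out):

    \[
    2\,\mathrm{CO} + \mathrm{O_2} \rightarrow 2\,\mathrm{CO_2}
    \qquad\text{and}\qquad
    2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O}
    \]

which is why burning the syngas yields only carbon dioxide and water, as the article says.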

Electric arcs are a harsh environment to operate in, and early plasma torches were not noted for reliability. These days, though, the quality of the nickel alloys has improved so that the torches work continuously. On top of that, developments in a field called computational fluid dynamics allow the rubbish going into the process to be mixed in a way that produces the most syngas for the least input of electricity.

The first rubbish-to-syngas plants were built almost a decade ago, in Japan—where land scarcity means tipping fees are particularly high. Now the idea is moving elsewhere. This year Geoplasma plans to start constructing a plant costing $120m in St Lucie County, Florida. It will be fed with waste from local households and should create enough syngas to make electricity for more than 20,000 homes. The company reckons it can make enough money from the project to service the debt incurred in constructing the plant and still provide a profit from the beginning.

Nor is Geoplasma alone. More than three dozen other American firms are proposing plasma-torch syngas plants, according to Gershman, Brickner & Bratton, a waste consultancy based in Fairfax, Virginia. Demand is so great that the Westinghouse Plasma Corporation, an American manufacturer of plasma torches, is able to hire out its test facility in Madison, Pennsylvania, for $150,000 a day.

Syngas can also be converted into other things. The “syn” is short for “synthesis” and syngas was once an important industrial raw material. The rise of the petrochemical industry has rather eclipsed it, but it may become important again. One novel proposal, by Coskata, a firm based in Warrenville, Illinois, is to ferment it into ethanol, for use as vehicle fuel. At the moment Coskata uses a plasma torch to make syngas from waste wood and wood-pulp, but modifying the apparatus to take household waste should not be too hard.

Even if efforts to convert such waste into syngas fail, existing plants that use plasma torches to destroy more hazardous material could be modified to take advantage of the idea. The Beijing Victex Environmental Science and Technology Development Company, for example, uses the torches to destroy sludge from Chinese oil refineries. According to Fiona Qian, the firm’s deputy manager, the high cost of doing this means some refineries are still dumping toxic waste in landfills. Stopping that sort of thing by bringing the price down would be a good thing by itself.

IP Integration : What is the difference between stitching and weaving?

I should write an article on: What is the difference between reusing and salvaging…

by David Murray, 12/15/2010, Design and Reuse

As a hardware design engineer, I was never comfortable when someone talked about IP integration as ‘stitching a chip together’. First of all, it sounded like a painful process involving sharp needles, usually preceded by a painful accident. I happened to be the recipient of said stitches when, at 8 years of age, I contested a stair post with my forehead, and sorely lost. I have to say, luckily, I have been quite adept at avoiding the needle and thread ever since. That was of course until, an hour before an important customer presentation, my top shirt button, thanks to an over-enthusiastic yawn, pinged across my hotel room floor like a nano-UFO. A panicked retrieval of the renegade button was followed quickly by a successful hunt for an elusive emergency sewing kit. The crisis quickly dissipated as I stitched the button back on in a random-but-directed type of methodology. Needle-less to say, stitching, whilst sometimes necessary, makes me uncomfortable.

Stitching, according to Wikipedia, is “.. the fastening of cloth, leather, furs, bark, or other flexible materials, using needle and thread. Its use is nearly universal among human populations and dates back to Paleolithic times (30,000 BCE).” It also states that stitching predates the weaving of cloth. So, 32,000 years later, in these hi-tech times, we are still stitching things together. It is not fur this time, but ‘ports’. Stitching a chip together involves connecting ports together with wires. (Note the terminology here too: ports you don’t use are ‘tied’ off.)

Weaving is a different game altogether. One definition simplifies weaving as ‘creating fabric’. Thus a key differentiator between stitching and weaving is that stitching often refers to fixing or mending things, whilst weaving is used to create. Stitching is an emergency, ad-hoc approach (please refer to my stitched button above), whilst weaving is more structured, more planned. Stitching invokes the image of being bent over, eyes squinted, immersed in the tiniest of detail. Weaving is more graceful and productive. In IC design-flow terms, I equate stitching with scripting: it is a task that is useful for joining pieces of the flow together. Weaving creates something. It transforms thread into cloth, and therefore equates more to synthesis. Weaving is a process. A toy sketch of what stitching-as-scripting looks like follows below.
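
To make the stitching-as-scripting analogy concrete, here is a toy sketch in Python of what ad-hoc port stitching tends to look like: a throwaway script that emits explicit port-to-port connections and ties off whatever is unused. The module and port names are invented, and this is emphatically not the WEAVER tool or any real design flow.

    # Toy illustration of "stitching": a throwaway script that joins IP blocks by
    # emitting explicit port-to-port assignments. Names are invented for the example.
    connections = [
        ("cpu.axi_m",          "interconnect.axi_s0"),
        ("dma.axi_m",          "interconnect.axi_s1"),
        ("interconnect.axi_m", "ddr_ctrl.axi_s"),
    ]
    unused_ports = ["cpu.debug_req"]  # ports that are not used get "tied" off

    def emit_stitches(conns, tie_offs):
        # One assignment per wire, plus a constant tie-off for each unused port.
        lines = [f"assign {dst.replace('.', '_')} = {src.replace('.', '_')};"
                 for src, dst in conns]
        lines += [f"assign {port.replace('.', '_')} = 1'b0;  // tied off"
                  for port in tie_offs]
        return "\n".join(lines)

    print(emit_stitches(connections, unused_ports))

A ‘weaving’ tool, by contrast, would start from a structured description of the intended hierarchy and generate both the hierarchy and its connections, rather than patching individual wires after the fact.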

So when it came to developing and naming a tool used to effectively integrate IP and create a chip hierarchy in a structured manner, we didn’t consider ‘STITCHER’ – it had to be ‘WEAVER’.

Stitching is important for fixing things, and is necessary in emergency situations, but it has its limitations, and if used as a core creation process it may come undone. So as I ranted on during that vital presentation, and got to the cusp of the value-add, I curbed my enthusiasm, keeping it slightly in check just in case those button stitches came undone and caused a serious eye injury to an altogether innocent customer. What, then, of those poor stitched chips? What if those threads start to unravel and your chip integration is running very late? You may have to resort to a different type of weaving when dealing with your management, customers or partners.

Which MBA? Think twice

According to The Economist, studying for an MBA is not a good investment. So I should be glad that the MBA school rejected my application.

2 Feb 2011, Economist
Set your heart on an MBA? Philip Delves Broughton suggests a radical alternative: don’t bother

Business schools have long sold the promise that, like an F1 driver zipping into the pits for fresh tyres, it just takes a short hiatus on an MBA programme and you will come roaring back into the career race primed to win. After all, it signals to companies that you were good enough to be accepted by a decent business school (so must be good enough for them); it plugs you into a network of fellow MBAs; and, to a much lesser extent, there’s the actual classroom education. Why not just pay the bill, sign here and reap the rewards?

The problem is that these days it doesn’t work like that. Rather, more and more students are finding the promise of business schools to be hollow. The return on investment on an MBA has gone the way of Greek public debt. If you have a decent job in your mid- to late- 20s, unless you have the backing of a corporate sponsor, leaving it to get an MBA is a higher risk than ever. If you are getting good business experience already, the best strategy is to keep on getting it, thereby making yourself ever more useful rather than groping for the evanescent brass rings of business school.

Business schools argue that a recession is the best time to invest in oneself. What they won’t say is that they also need your money. There are business academics right now panting for your cheque. They need it to pad their sinecures and fund their threadbare research. There is surely no more oxymoronic profession than the tenured business-school professor, and yet these job-squatting apostles of the free market are rife and desperate. Potential students should take note: if taking a professional risk were as marvellous as they say, why do these role models so assiduously avoid it?

Harvard Business School recently chose a new dean, Nitin Nohria, an expert in ethics and leadership. He was asked by Bloomberg Businessweek if he had watched the Congressional hearings on Goldman Sachs. He replied: “The events in the financial sector are something that we have watched closely at Harvard Business School. We teach by the case method, and one of the things we’ll do through this experience is study these cases deeply as information is revealed over time so we can understand what happened at all these financial firms. I’m sure that at some point we’ll write cases about Goldman Sachs because that’s how we learn.” He could have stood up for Goldman or criticised it. Instead he punted on one of the singular business issues of our time. It is indicative of the cringing attitude of business schools before the business world they purport to study.

When you look at today’s most evolved business organisms, it is obvious that an MBA is not required for business success. Apple, which recently usurped Microsoft as the world’s largest technology firm (by market capitalisation), has hardly any MBAs among its top ranks. Most of the world’s top hedge funds prefer seasoned traders, engineers and mathematicians, people with insight and programming skills, to MBAs brandishing spreadsheets, the latest two-by-twos and the guilt induced by some watery ethics course.

In the BRIC economies, one sees fortunes being made in the robust manner of the 19th-century American robber barons, with scarcely a nod to the niceties of MBA programmes. The cute stratagems and frameworks taught at business schools become quickly redundant in the hurly-burly of economic change. I’ve often wondered what Li Ka-Shing of Hong Kong or Stanley Ho of Macau, or Rupert Murdoch, for that matter, would make of an MBA programme. They would probably see it for what it is: a business opportunity. And as such, they would focus on the value of investing in it.

They would look at the high cost, and note the tables which show that financial rewards are not evenly distributed among MBAs but tilt heavily to those from the very top programmes who tend to go into finance and consulting. Successful entrepreneurs are as rare among MBAs as they are in the general population.

They would think to themselves that business is fundamentally about two things, innovating and selling, and that most MBA programmes teach neither. They might wonder about the realities of the MBA network. There is no point acquiring a global network of randomly assembled business students if you just want to work in your home town. Also, they will recall that the most effective way to build a network is not to go to school, but to be successful. That way you will have all the MBA friends you could ever want.

They might even meet a few business academics and wonder. Then they would take their application and do with it what most potential applicants should: toss it away.

The power of posture

This research looks at how your posture affects your projection of power and how other people perceive it. It is interesting to note that the powerful sitting posture is regarded as bad sitting manners in traditional Chinese culture. More evidence for my theory that manners are simply rules set by the authorities to make people easier to rule.

How you hold yourself affects how you view yourself
Jan 13th 2011, Economist

“STAND up straight!” “Chest out!” “Shoulders back!” These are the perennial cries of sergeant majors and fussy parents throughout the ages. Posture certainly matters. Big is dominant, and in species after species, humans included, postures that enhance the posturer’s apparent size cause others to treat him as if he were more powerful.

The stand-up-straight brigade, however, often make a further claim: that posture affects the way the posturer treats himself, as well as how others treat him. To test the truth of this, Li Huang and Adam Galinsky, at Northwestern University in Illinois, have compared posture’s effects on self-esteem with those of a more conventional ego-booster, management responsibility. In a paper just published in Psychological Science they conclude, surprisingly, that posture may matter more.

The two researchers’ experimental animals—77 undergraduate students—first filled out questionnaires, ostensibly to assess their leadership capacity. Half were then given feedback forms which indicated that, on the basis of the questionnaires, they were to be assigned to be managers in a forthcoming experiment. The other half were told they would be subordinates. While the participants waited for this feedback, they were asked to help with a marketing test on ergonomic chairs. This required them to sit in a computer chair in a specific posture for between three and five minutes. Half the participants sat in constricted postures, with their hands under their thighs, legs together or shoulders hunched. The other half sat in expansive postures with their legs spread wide or their arms reaching outward.

In fact, neither of these tests was what it seemed. The questionnaires were irrelevant. Volunteers were assigned to be managers or subordinates at random. The test of posture had nothing to do with ergonomics. And, crucially, each version of the posture test included equal numbers of those who would become “managers” and “subordinates”.

Once the posture test was over the participants received their new statuses and the researchers measured their implicit sense of power by asking them to engage in a word-completion task. Participants were instructed to complete a number of fragments (for example, “l_ad”) with the first word that came to mind. Seven of the fragments could be interpreted as words related to power (“power”, “direct”, “lead”, “authority”, “control”, “command” and “rich”). For each of these that was filled out as a power word (“lead”, say, instead of “load”) the participant was secretly given a score of one point.
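
As a minimal sketch of how that scoring works (in Python), using the seven power-related words from the study but invented example completions:

    # One point per fragment completed as a power-related word (seven such words in the study).
    POWER_WORDS = {"power", "direct", "lead", "authority", "control", "command", "rich"}

    def power_score(completions):
        return sum(1 for word in completions if word.lower() in POWER_WORDS)

    # Invented answers: "lead" for "l_ad" rather than "load", and so on.
    example = ["lead", "control", "command", "load", "reach", "itch", "riches"]
    print(power_score(example))  # -> 3

The group averages reported below are simply the mean of this score across participants in each posture condition.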

Although previous studies suggested a mere title is enough to produce a detectable increase in an individual’s sense of power, Dr Huang and Dr Galinsky found no difference in the word-completion scores of those told they would be managers and those told they would be subordinates. The posture experiment, however, did make a difference. Those who had sat in an expansive pose, regardless of whether they thought of themselves as managers or subordinates, scored an average of 3.44. Those who had sat in constricted postures scored an average of 2.78.

Having established the principle, Dr Huang and Dr Galinsky went on to test the effect of posture on other power-related decisions: whether to speak first in a debate, whether to leave the site of a plane crash to find help and whether to join a movement to free a prisoner who was wrongfully locked up. In all three cases those who had sat in expansive postures chose the active option (to speak first, to search for help, to fight for justice) more often than those who had sat crouched.

The upshot, then, is that father (or the sergeant major) was right. Those who walk around with their heads held high not only get the respect of others, they seem also to respect themselves.

To Really Learn, Quit Studying and Take a Test

That’s why the most effective way to learn is to take a course with homework assignments and exams that force you to learn.

By Pam Belluck, New York Times, January 20, 2011

Taking a test is not just a passive mechanism for assessing how much people know, according to new research. It actually helps people learn, and it works better than a number of other studying techniques.

The research, published online Thursday in the journal Science, found that students who read a passage, then took a test asking them to recall what they had read, retained about 50 percent more of the information a week later than students who used two other methods.

One of those methods — repeatedly studying the material — is familiar to legions of students who cram before exams. The other — having students draw detailed diagrams documenting what they are learning — is prized by many teachers because it forces students to make connections among facts.

These other methods not only are popular, the researchers reported; they also seem to give students the illusion that they know material better than they do.

In the experiments, the students were asked to predict how much they would remember a week after using one of the methods to learn the material. Those who took the test after reading the passage predicted they would remember less than the other students predicted — but the results were just the opposite.

“I think that learning is all about retrieving, all about reconstructing our knowledge,” said the lead author, Jeffrey Karpicke, an assistant professor of psychology at Purdue University. “I think that we’re tapping into something fundamental about how the mind works when we talk about retrieval.”

Several cognitive scientists and education experts said the results were striking.

The students who took the recall tests may “recognize some gaps in their knowledge,” said Marcia Linn, an education professor at the University of California, Berkeley, “and they might revisit the ideas in the back of their mind or the front of their mind.”

When they are later asked what they have learned, she went on, they can more easily “retrieve it and organize the knowledge that they have in a way that makes sense to them.”

The researchers engaged 200 college students in two experiments, assigning them to read several paragraphs about a scientific subject — how the digestive system works, for example, or the different types of vertebrate muscle tissue.

In the first experiment, the students were divided into four groups. One did nothing more than read the text for five minutes. Another studied the passage in four consecutive five-minute sessions.

A third group engaged in “concept mapping,” in which, with the passage in front of them, they arranged information from the passage into a kind of diagram, writing details and ideas in hand-drawn bubbles and linking the bubbles in an organized way.

The final group took a “retrieval practice” test. Without the passage in front of them, they wrote what they remembered in a free-form essay for 10 minutes. Then they reread the passage and took another retrieval practice test.

A week later all four groups were given a short-answer test that assessed their ability to recall facts and draw logical conclusions based on the facts.

The second experiment focused only on concept mapping and retrieval practice testing, with each student doing an exercise using each method. In this initial phase, researchers reported, students who made diagrams while consulting the passage included more detail than students asked to recall what they had just read in an essay.

But when they were evaluated a week later, the students in the testing group did much better than the concept mappers. They even did better when they were evaluated not with a short-answer test but with a test requiring them to draw a concept map from memory.

Why retrieval testing helps is still unknown. Perhaps it is because by remembering information we are organizing it and creating cues and connections that our brains later recognize.

“When you’re retrieving something out of a computer’s memory, you don’t change anything — it’s simple playback,” said Robert Bjork, a psychologist at the University of California, Los Angeles, who was not involved with the study.

But “when we use our memories by retrieving things, we change our access” to that information, Dr. Bjork said. “What we recall becomes more recallable in the future. In a sense you are practicing what you are going to need to do later.”

It may also be that the struggle involved in recalling something helps reinforce it in our brains.

Maybe that is also why students who took retrieval practice tests were less confident about how they would perform a week later.

“The struggle helps you learn, but it makes you feel like you’re not learning,” said Nate Kornell, a psychologist at Williams College. “You feel like: ‘I don’t know it that well. This is hard and I’m having trouble coming up with this information.’ ”

By contrast, he said, when rereading texts and possibly even drawing diagrams, “you say: ‘Oh, this is easier. I read this already.’ ”

The Purdue study supports findings of a recent spate of research showing learning benefits from testing, including benefits when students get questions wrong. But by comparing testing with other methods, the study goes further.

“It really bumps it up a level of importance by contrasting it with concept mapping, which many educators think of as sort of the gold standard,” said Daniel Willingham, a psychology professor at the University of Virginia. Although “it’s not totally obvious that this is shovel-ready — put it in the classroom and it’s good to go — for educators this ought to be a big deal.”

Howard Gardner, an education professor at Harvard who advocates constructivism — the idea that children should discover their own approach to learning, emphasizing reasoning over memorization — said in an e-mail that the results “throw down the gauntlet to those progressive educators, myself included.”

“Educators who embrace seemingly more active approaches, like concept mapping,” he continued, “are challenged to devise outcome measures that can demonstrate the superiority of such constructivist approaches.”

Testing, of course, is a highly charged issue in education, drawing criticism that too much promotes rote learning, swallows valuable time for learning new things and causes excessive student anxiety.

“More testing isn’t necessarily better,” said Dr. Linn, who said her work with California school districts had found that asking students to explain what they did in a science experiment rather than having them simply conduct the hands-on experiment — a version of retrieval practice testing — was beneficial. “Some tests are just not learning opportunities. We need a different kind of testing than we currently have.”

Dr. Kornell said that “even though in the short term it may seem like a waste of time,” retrieval practice appears to “make things stick in a way that may not be used in the classroom.

“It’s going to last for the rest of their schooling, and potentially for the rest of their lives.”