Category Archives: News Clips

Some news is still worth remembering even after it gets old.

Why I am not worried about Japan’s nuclear reactors

Now I know a nuclear meltdown is not that frightening. At worst, it will only take out the core, with no radioactive explosion.

By Dr. Josef Oehmen, MIT, March 13, 2011

I am writing this text (Mar 12) to give you some peace of mind regarding some of the troubles in Japan, that is the safety of Japan’s nuclear reactors. Up front, the situation is serious, but under control. And this text is long! But you will know more about nuclear power plants after reading it than all journalists on this planet put together.

There was not, and will *not* be, any significant release of radioactivity.

By “significant” I mean a level of radiation of more than what you would receive on – say – a long distance flight, or drinking a glass of beer that comes from certain areas with high levels of natural background radiation.

I have been reading every news release on the incident since the earthquake. There has not been one single (!) report that was accurate and free of errors (and part of that problem is also a weakness in the Japanese crisis communication). By “not free of errors” I do not refer to tendentious anti-nuclear journalism – that is quite normal these days. By “not free of errors” I mean blatant errors regarding physics and natural law, as well as gross misinterpretation of facts, due to an obvious lack of fundamental and basic understanding of the way nuclear reactors are built and operated. I have read a three-page report on CNN where every single paragraph contained an error.

We will have to cover some fundamentals, before we get into what is going on.

Construction of the Fukushima nuclear power plants

The plants at Fukushima are so-called Boiling Water Reactors, or BWRs for short. A Boiling Water Reactor is similar to a pressure cooker. The nuclear fuel heats water, the water boils and creates steam, the steam then drives turbines that create the electricity, and the steam is then cooled and condensed back into water, and the water is sent back to be heated by the nuclear fuel. The pressure cooker operates at about 250 °C.

The nuclear fuel is uranium oxide. Uranium oxide is a ceramic with a very high melting point of about 3000 °C. The fuel is manufactured in pellets (think little cylinders the size of Lego bricks). Those pieces are then put into a long tube made of Zircaloy with a melting point of 2200 °C, and sealed tight. The assembly is called a fuel rod. These fuel rods are then put together to form larger packages, and a number of these packages are then put into the reactor. All these packages together are referred to as “the core”.

The Zircaloy casing is the first containment. It separates the radioactive fuel from the rest of the world.

The core is then placed in the “pressure vessel”. That is the pressure cooker we talked about before. The pressure vessel is the second containment. This is one sturdy piece of a pot, designed to safely contain the core at temperatures of several hundred °C. That covers the scenarios where cooling can be restored at some point.

The entire “hardware” of the nuclear reactor – the pressure vessel and all pipes, pumps, coolant (water) reserves, are then encased in the third containment. The third containment is a hermetically (air tight) sealed, very thick bubble of the strongest steel and concrete. The third containment is designed, built and tested for one single purpose: To contain, indefinitely, a complete core meltdown. For that purpose, a large and thick concrete basin is cast under the pressure vessel (the second containment), all inside the third containment. This is the so-called “core catcher”. If the core melts and the pressure vessel bursts (and eventually melts), it will catch the molten fuel and everything else. It is typically built in such a way that the nuclear fuel will be spread out, so it can cool down.

This third containment is then surrounded by the reactor building. The reactor building is an outer shell that is supposed to keep the weather out, but nothing in. (This is the part that was damaged in the explosion, but more on that later.)

Fundamentals of nuclear reactions

The uranium fuel generates heat by nuclear fission. Big uranium atoms are split into smaller atoms. That generates heat plus neutrons (one of the particles that forms an atom). When the neutron hits another uranium atom, that splits, generating more neutrons and so on. That is called the nuclear chain reaction.

Now, just packing a lot of fuel rods next to each other would quickly lead to overheating and, after about 45 minutes, to a melting of the fuel rods. It is worth mentioning at this point that the nuclear fuel in a reactor can *never* cause a nuclear explosion of the type of a nuclear bomb. Building a nuclear bomb is actually quite difficult (ask Iran). In Chernobyl, the explosion was caused by excessive pressure buildup, hydrogen explosion and rupture of all containments, propelling molten core material into the environment (a “dirty bomb”). Why that did not and will not happen in Japan is covered further below.

In order to control the nuclear chain reaction, the reactor operators use so-called “control rods”. The control rods absorb the neutrons and kill the chain reaction instantaneously. A nuclear reactor is built in such a way, that when operating normally, you take out all the control rods. The coolant water then takes away the heat (and converts it into steam and electricity) at the same rate as the core produces it. And you have a lot of leeway around the standard operating point of 250°C.

The challenge is that after inserting the rods and stopping the chain reaction, the core still keeps producing heat. The uranium “stopped” the chain reaction. But a number of intermediate radioactive elements are created by the uranium during its fission process, most notably Cesium and Iodine isotopes, i.e. radioactive versions of these elements that will eventually split up into smaller atoms and not be radioactive anymore. Those elements keep decaying and producing heat. Because they are not regenerated any longer from the uranium (the uranium stopped decaying after the control rods were put in), they get less and less, and so the core cools down over a matter of days, until those intermediate radioactive elements are used up.

This residual heat is causing the headaches right now.

So the first “type” of radioactive material is the uranium in the fuel rods, plus the intermediate radioactive elements that the uranium splits into, also inside the fuel rod (Cesium and Iodine).

There is a second type of radioactive material created, outside the fuel rods. The big main difference up front: Those radioactive materials have a very short half-life, that means that they decay very fast and split into non-radioactive materials. By fast I mean seconds. So if these radioactive materials are released into the environment, yes, radioactivity was released, but no, it is not dangerous, at all. Why? By the time you spelled “R-A-D-I-O-N-U-C-L-I-D-E”, they will be harmless, because they will have split up into non radioactive elements. Those radioactive elements are N-16, the radioactive isotope (or version) of nitrogen (air). The others are noble gases such as Argon. But where do they come from? When the uranium splits, it generates a neutron (see above). Most of these neutrons will hit other uranium atoms and keep the nuclear chain reaction going. But some will leave the fuel rod and hit the water molecules, or the air that is in the water. Then, a non-radioactive element can “capture” the neutron. It becomes radioactive. As described above, it will quickly (seconds) get rid again of the neutron to return to its former beautiful self.
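As a back-of-the-envelope sketch of why these activation products are harmless so quickly (the roughly 7-second half-life of N-16 is a standard textbook figure, not something stated in the article), simple exponential decay shows how little remains after even a minute:

```python
def fraction_remaining(elapsed_s, half_life_s):
    """Fraction of a radioactive isotope left after elapsed_s seconds."""
    return 0.5 ** (elapsed_s / half_life_s)

# N-16 has a half-life of roughly 7 seconds (textbook value).
half_life = 7.0
for t in (7, 60, 300):
    print(f"after {t:3d} s: {fraction_remaining(t, half_life):.6f} remaining")
```

After five minutes, effectively nothing of the activated nitrogen is left, which is the author's point about why this kind of release is not dangerous.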

This second “type” of radiation is very important when we talk about the radioactivity being released into the environment later on.

What happened at Fukushima

I will try to summarize the main facts. The earthquake that hit Japan was 5 times more powerful than the worst earthquake the nuclear power plant was built for (the Richter scale works logarithmically; the difference between the 8.2 that the plants were built for and the 8.9 that happened is 5 times, not 0.7). So the first hooray for Japanese engineering, everything held up.
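The magnitude arithmetic in that parenthesis can be checked directly: on a base-10 logarithmic scale, a 0.7-magnitude difference corresponds to a factor of 10^0.7, roughly 5, in measured ground-motion amplitude. A quick sketch:

```python
# Richter-type magnitude scales are logarithmic (base 10 in amplitude):
# each whole-magnitude step is a 10x increase in measured ground motion.
design_magnitude = 8.2
actual_magnitude = 8.9

amplitude_ratio = 10 ** (actual_magnitude - design_magnitude)
print(f"amplitude ratio: {amplitude_ratio:.1f}x")  # about 5x, as the text says
```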

When the earthquake hit with 8.9, the nuclear reactors all went into automatic shutdown. Within seconds after the earthquake started, the control rods had been inserted into the core and nuclear chain reaction of the uranium stopped. Now, the cooling system has to carry away the residual heat. The residual heat load is about 3% of the heat load under normal operating conditions.
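That roughly 3% figure can be illustrated with the Way-Wigner decay-heat approximation, a standard textbook formula (not something from the article), which estimates decay heat as a fraction of full power; the assumed year of prior operation is illustrative:

```python
def decay_heat_fraction(t_s, operating_time_s):
    """Way-Wigner approximation: decay heat as a fraction of full power,
    t_s seconds after shutdown, following operating_time_s of operation."""
    return 0.066 * (t_s ** -0.2 - (t_s + operating_time_s) ** -0.2)

one_year = 3.15e7  # seconds of prior operation (illustrative assumption)
for t in (10, 60, 3600, 86400):
    print(f"{t:6d} s after shutdown: {decay_heat_fraction(t, one_year):.3%}")
```

A minute after shutdown this gives a few percent of full power, consistent with the text, and it also shows why the heat load keeps falling over hours and days.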

The earthquake destroyed the external power supply of the nuclear reactor. That is one of the most serious accidents for a nuclear power plant, and accordingly, a “plant black out” receives a lot of attention when designing backup systems. The power is needed to keep the coolant pumps working. Since the power plant had been shut down, it cannot produce any electricity by itself any more.

Things were going well for an hour. One of multiple sets of emergency Diesel power generators kicked in and provided the electricity that was needed. Then the tsunami came, much bigger than people had expected when building the power plant (see above). The tsunami took out all the multiple sets of backup Diesel generators.

When designing a nuclear power plant, engineers follow a philosophy called “Defense in Depth”. That means that you first build everything to withstand the worst catastrophe you can imagine, and then design the plant in such a way that it can still handle one system failure (that you thought could never happen) after the other. A tsunami taking out all backup power in one swift strike is such a scenario. The last line of defense is putting everything into the third containment (see above), which will keep everything, whatever the mess, control rods in or out, core molten or not, inside the reactor.

When the diesel generators were gone, the reactor operators switched to emergency battery power. The batteries were designed as one of the backups to the backups, to provide power for cooling the core for 8 hours. And they did.

Within the 8 hours, another power source had to be found and connected to the power plant. The power grid was down due to the earthquake. The diesel generators were destroyed by the tsunami. So mobile diesel generators were trucked in.

This is where things started to go seriously wrong. The external power generators could not be connected to the power plant (the plugs did not fit). So after the batteries ran out, the residual heat could not be carried away any more.

At this point the plant operators began to follow emergency procedures that are in place for a “loss of cooling event”. It is again a step along the “Defense in Depth” lines. The power to the cooling systems should never have failed completely, but it did, so they “retreated” to the next line of defense. All of this, however shocking it seems to us, is part of the day-to-day training you go through as an operator, right through to managing a core meltdown.

It was at this stage that people started to talk about core meltdown. Because at the end of the day, if cooling cannot be restored, the core will eventually melt (after hours or days), and the last line of defense, the core catcher and third containment, would come into play.

But the goal at this stage was to manage the core while it was heating up, and ensure that the first containment (the Zircaloy tubes that contain the nuclear fuel), as well as the second containment (our pressure cooker) remained intact and operational for as long as possible, to give the engineers time to fix the cooling systems.

Because cooling the core is such a big deal, the reactor has a number of cooling systems, each in multiple versions (the reactor water cleanup system, the decay heat removal, the reactor core isolating cooling, the standby liquid cooling system, and the emergency core cooling system). Which one failed when or did not fail is not clear at this point in time.

So imagine our pressure cooker on the stove, heat on low, but on. The operators use whatever cooling system capacity they have to get rid of as much heat as possible, but the pressure starts building up. The priority now is to maintain integrity of the first containment (keep temperature of the fuel rods below 2200°C), as well as the second containment, the pressure cooker. In order to maintain integrity of the pressure cooker (the second containment), the pressure has to be released from time to time. Because the ability to do that in an emergency is so important, the reactor has 11 pressure release valves. The operators now started venting steam from time to time to control the pressure. The temperature at this stage was about 550°C.

This is when the reports about “radiation leakage” started coming in. I believe I explained above why venting the steam is theoretically the same as releasing radiation into the environment, but why it was and is not dangerous. The radioactive nitrogen as well as the noble gases do not pose a threat to human health.

At some stage during this venting, the explosion occurred. The explosion took place outside of the third containment (our “last line of defense”), and the reactor building. Remember that the reactor building has no function in keeping the radioactivity contained. It is not entirely clear yet what has happened, but this is the likely scenario: The operators decided to vent the steam from the pressure vessel not directly into the environment, but into the space between the third containment and the reactor building (to give the radioactivity in the steam more time to subside). The problem is that at the high temperatures that the core had reached at this stage, water molecules can “disassociate” into oxygen and hydrogen – an explosive mixture. And it did explode, outside the third containment, damaging the reactor building around it. It was that sort of explosion, but inside the pressure vessel (because it was badly designed and not managed properly by the operators) that led to the explosion of Chernobyl. This was never a risk at Fukushima. The problem of hydrogen-oxygen formation is one of the biggies when you design a power plant (if you are not Soviet, that is), so the reactor is built and operated in such a way that it cannot happen inside the containment. It happened outside, which was not intended but a possible scenario and OK, because it did not pose a risk for the containment.

So the pressure was under control, as steam was vented. Now, if you keep boiling your pot, the problem is that the water level will keep falling and falling. The core is covered by several meters of water in order to allow for some time to pass (hours, days) before it gets exposed. Once the rods start to be exposed at the top, the exposed parts will reach the critical temperature of 2200 °C after about 45 minutes. This is when the first containment, the Zircaloy tube, would fail.

And this started to happen. The cooling could not be restored before there was some (very limited, but still) damage to the casing of some of the fuel. The nuclear material itself was still intact, but the surrounding Zircaloy shell had started melting. What happened now is that some of the byproducts of the uranium decay – radioactive Cesium and Iodine – started to mix with the steam. The big problem, uranium, was still under control, because the uranium oxide rods were good until 3000 °C. It is confirmed that a very small amount of Cesium and Iodine was measured in the steam that was released into the atmosphere.

It seems this was the “go signal” for a major plan B. The small amounts of Cesium that were measured told the operators that the first containment on one of the rods somewhere was about to give. The Plan A had been to restore one of the regular cooling systems to the core. Why that failed is unclear. One plausible explanation is that the tsunami also took away / polluted all the clean water needed for the regular cooling systems.

The water used in the cooling system is very clean, demineralized (like distilled) water. The reason to use pure water is the above mentioned activation by the neutrons from the Uranium: Pure water does not get activated much, so stays practically radioactive-free. Dirt or salt in the water will absorb the neutrons quicker, becoming more radioactive. This has no effect whatsoever on the core – it does not care what it is cooled by. But it makes life more difficult for the operators and mechanics when they have to deal with activated (i.e. slightly radioactive) water.

But Plan A had failed – cooling systems down or additional clean water unavailable – so Plan B came into effect. This is what it looks like happened:

In order to prevent a core meltdown, the operators started to use sea water to cool the core. I am not quite sure if they flooded our pressure cooker with it (the second containment), or if they flooded the third containment, immersing the pressure cooker. But that is not relevant for us.

The point is that the nuclear fuel has now been cooled down. Because the chain reaction was stopped a long time ago, there is only very little residual heat being produced now. The large amount of cooling water that has been used is sufficient to take up that heat. Because it is a lot of water, the core no longer produces sufficient heat to build up any significant pressure. Also, boric acid has been added to the seawater. Boric acid is a “liquid control rod”. Whatever decay is still going on, the boron will capture the neutrons and further speed up the cooling down of the core.

The plant came close to a core meltdown. Here is the worst-case scenario that was avoided: If the seawater could not have been used for treatment, the operators would have continued to vent the water steam to avoid pressure buildup. The third containment would then have been completely sealed to allow the core meltdown to happen without releasing radioactive material. After the meltdown, there would have been a waiting period for the intermediate radioactive materials to decay inside the reactor, and all radioactive particles to settle on a surface inside the containment. The cooling system would have been restored eventually, and the molten core cooled to a manageable temperature. The containment would have been cleaned up on the inside. Then a messy job of removing the molten core from the containment would have begun, packing the (now solid again) fuel bit by bit into transportation containers to be shipped to processing plants. Depending on the damage, the block of the plant would then either be repaired or dismantled.

Now, where does that leave us?

* The plant is safe now and will stay safe.
* Japan is looking at an INES Level 4 Accident: Nuclear accident with local consequences. That is bad for the company that owns the plant, but not for anyone else.
* Some radiation was released when the pressure vessel was vented. All radioactive isotopes from the activated steam have gone (decayed). A very small amount of Cesium was released, as well as Iodine. If you were sitting on top of the plants’ chimney when they were venting, you should probably give up smoking to return to your former life expectancy. The Cesium and Iodine isotopes were carried out to the sea and will never be seen again.
* There was some limited damage to the first containment. That means that some amounts of radioactive Cesium and Iodine will also be released into the cooling water, but no Uranium or other nasty stuff (the Uranium oxide does not “dissolve” in the water). There are facilities for treating the cooling water inside the third containment. The radioactive Cesium and Iodine will be removed there and eventually stored as radioactive waste in terminal storage.
* The seawater used as cooling water will be activated to some degree. Because the control rods are fully inserted, the Uranium chain reaction is not happening. That means the “main” nuclear reaction is not happening, thus not contributing to the activation. The intermediate radioactive materials (Cesium and Iodine) are also almost gone at this stage, because the Uranium decay was stopped a long time ago. This further reduces the activation. The bottom line is that there will be some low level of activation of the seawater, which will also be removed by the treatment facilities.
* The seawater will then be replaced over time with the “normal” cooling water.
* The reactor core will then be dismantled and transported to a processing facility, just like during a regular fuel change.
* Fuel rods and the entire plant will be checked for potential damage. This will take about 4-5 years.
* The safety systems on all Japanese plants will be upgraded to withstand a 9.0 earthquake and tsunami (or worse).
* I believe the most significant problem will be a prolonged power shortage. About half of Japan’s nuclear reactors will probably have to be inspected, reducing the nation’s power generating capacity by 15%. This will probably be covered by running gas power plants that are usually only used for peak loads to cover some of the base load as well. That will increase your electricity bill, as well as lead to potential power shortages during peak demand, in Japan.

Are Compact Fluorescent Lightbulbs Really Cheaper Over Time?

I hate the lighting produced by CFL bulbs. I am going to switch from incandescent bulbs directly to LED lights when the price of LEDs comes down. CFL is an in-between stopgap technology that should eventually be phased out.

By Joseph Calamia, March 2011, IEEE Spectrum
CFLs must last long enough for their energy efficiency to make up for their higher cost

You buy a compact fluorescent lamp. The packaging says it will last for 6000 hours—about five years, if used for three hours a day. A year later, it burns out.

Last year, IEEE Spectrum reported that some Europeans opposed legislation to phase out incandescent lighting. Rather than replace their lights with compact fluorescents, consumers started hoarding traditional bulbs.

From the comments on that article, it seems that some IEEE Spectrum readers aren’t completely sold on CFLs either. We received questions about why the lights don’t always meet their long-lifetime claims, what can cause them to fail, and ultimately, how dead bulbs affect the advertised savings of switching from incandescent.

Tests of compact fluorescent lamps’ lifetime vary among countries. The majority of CFLs sold in the United States adhere to the U.S. Department of Energy and Environmental Protection Agency’s Energy Star approval program, according to the U.S. National Electrical Manufacturers Association. For these bulbs, IEEE Spectrum found some answers.

How is a compact fluorescent lamp’s lifetime calculated in the first place?

“With any given lamp that rolls off a production line, whatever the technology, they’re not all going to have the same exact lifetime,” says Alex Baker, lighting program manager for the Energy Star program. In an initial test to determine an average lifetime, he says, manufacturers leave a large sample of lamps lit. The defined average “rated life” is the time it takes for half of the lamps to go out. Baker says that this average life definition is an old lighting industry standard that applies to incandescent and compact fluorescent lamps alike.

In reality, the odds may actually be somewhat greater than 50 percent that your 6000-hour-rated bulb will still be burning bright at 6000 hours. “Currently, qualified CFLs in the market may have longer lifetimes than manufacturers are claiming,” says Jen Stutsman, of the Department of Energy’s public affairs office. “More often than not, more than 50 percent of the lamps of a sample set are burning during the final hour of the manufacturer’s chosen rated lifetime,” she says, noting that manufacturers often opt to end lifetime evaluations prematurely, to save on testing costs.

Although manufacturers usually conduct this initial rated life test in-house, the Energy Star program requires other lifetime evaluations conducted by accredited third-party laboratories. Jeremy Snyder directed one of those testing facilities, the Program for the Evaluation and Analysis of Residential Lighting (PEARL) in Troy, N.Y., which evaluated Energy Star–qualified bulbs until late 2010, when the Energy Star program started conducting these tests itself. Snyder works at the Rensselaer Polytechnic Institute’s Lighting Research Center, which conducts a variety of tests on lighting products, including CFLs and LEDs. Some Energy Star lifetime tests, he says, require 10 sample lamps for each product—five pointing toward the ceiling and five toward the floor. One “interim life test” entails leaving the lamps lit for 40 percent of their rated life. Three strikes, or burnt-out lamps, and the product risks losing its qualification.

Besides waiting for bulbs to burn out, testers also measure the light output of lamps over time, to ensure that the CFLs do not appreciably dim with use. Using a hollow “integrating sphere,” which has a white interior to reflect light in all directions, Lighting Research Center staff can take precise measurements of a lamp’s total light output in lumens. The Energy Star program requires that 10 tested lights maintain an average of 90 percent of their initial lumen output for 1000 hours of life, and 80 percent of their initial lumen output at 40 percent of their rated life.
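Those two thresholds are easy to express as a check. A minimal sketch, assuming the inputs are already the averaged lumen readings for the 10-lamp sample (the function name and the example figures are illustrative, not from the Energy Star documents):

```python
def passes_lumen_maintenance(initial_lumens, at_1000h, at_40pct_life):
    """Check the two Energy Star lumen-maintenance thresholds the article
    describes: average output >= 90% of initial at 1000 hours, and >= 80%
    of initial at 40% of rated life."""
    return (at_1000h >= 0.90 * initial_lumens and
            at_40pct_life >= 0.80 * initial_lumens)

# An 800-lumen lamp must hold 720 lm at 1000 h and 640 lm at 40% of life.
print(passes_lumen_maintenance(800, 730, 650))  # True
print(passes_lumen_maintenance(800, 710, 650))  # False: 710 < 720
```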

Is there any way to accelerate these lifetime tests?

“There are techniques for accelerated testing of incandescent lamps, but there’s no accepted accelerated testing for other types,” says Michael L. Grather, the primary lighting performance engineer at Luminaire Testing Laboratory and Underwriters’ Laboratories in Allentown, Penn. For incandescent bulbs, one common method is to run more electric current through the filament than the lamp might experience in normal use. But Grather says a similar test for CFLs wouldn’t give consumers an accurate prediction of the bulb’s life: “You’re not fairly indicating what’s going to happen as a function of time. You’re just stressing different components—the electronics but not the entire lamp.”

Perhaps the closest such evaluation for CFLs is the Energy Star “rapid cycle test.” For this evaluation, testers divide the total rated life of the lamp, measured in hours, by two and switch the compact fluorescent on for five minutes and off for five minutes that number of times. For example, a CFL with a 6000-hour rated life must undergo 3000 such rapid cycles. At least five out of a sample of six lamps must survive for the product to keep its Energy Star approval.
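The cycle-count arithmetic the article describes is simply the rated life in hours divided by two. A sketch:

```python
def rapid_cycle_count(rated_life_hours):
    """Number of 5-minutes-on/5-minutes-off cycles in the Energy Star
    rapid cycle test: rated life in hours divided by two."""
    return rated_life_hours // 2

print(rapid_cycle_count(6000))  # 3000 cycles, matching the article's example
```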

In real scenarios, what causes CFLs to fall short of their rated life?

As anyone who frequently replaces CFLs in closets or hallways has likely discovered, rapid cycling can prematurely kill a CFL. Repeatedly starting the lamp shortens its life, Snyder explains, because high voltage at start-up sends the lamp’s mercury ions hurtling toward the starting electrode, which can destroy the electrode’s coating over time. Snyder suggests consumers keep this in mind when deciding where to use a compact fluorescent. The Lighting Research Center has published a worksheet [PDF] for consumers to better understand how frequent switching reduces a lamp’s lifetime. The sheet provides a series of multipliers so that consumers can better predict a bulb’s longevity. The multipliers range from 1.5 (for bulbs left on for at least 12 hours) to 0.4 (for bulbs turned off after 15 minutes). Despite any lifetime reduction, Snyder says consumers should still turn off lights not needed for more than a few minutes.
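Applying those published multipliers is straightforward. A minimal sketch (the function name is mine; the 0.4 and 1.5 multipliers are the endpoints quoted in the article):

```python
def adjusted_life_hours(rated_life_hours, multiplier):
    """Estimated lifetime after accounting for switching frequency,
    using the Lighting Research Center worksheet multipliers."""
    return rated_life_hours * multiplier

print(adjusted_life_hours(6000, 0.4))  # frequent switching: 2400.0 hours
print(adjusted_life_hours(6000, 1.5))  # long run times: 9000.0 hours
```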

Another CFL slayer is temperature. “Incandescents thrive on heat,” Baker says. “The hotter they get, the more light you get out of them. But a CFL is very temperature sensitive.” He notes that “recessed cans”—insulated lighting fixtures—prove a particularly nasty compact fluorescent death trap, especially when attached to dimmers, which can also shorten the electronic ballast’s life. He says consumers often install CFLs meant for table or floor lamps inside these fixtures, instead of lamps specially designed for higher temperatures, as indicated on their packages. Among other things, these high temperatures can destroy the lamps’ electrolytic capacitors—the main reason, he says, that CFLs fail when overheated.

How do shorter-than-expected lifetimes affect the payback equation?

Predicting the actual savings of switching from an incandescent must account for both the cost of the lamp and its energy savings over time. Although the initial price of a compact fluorescent (which can range [PDF] from US $0.50 in a multipack to over $9) is usually more than that of an incandescent (usually less than a U.S. dollar), a CFL can use a fraction of the energy an incandescent requires. Over its lifetime, the compact fluorescent should make up for its higher initial cost in savings—if it lives long enough. It should also offset the estimated 4 milligrams of mercury it contains. You might think of mercury vapor as the CFL’s equivalent of an incandescent’s filament. The electrodes in the CFL excite this vapor, which in turn radiates and excites the lamp’s phosphor coating, giving off light. Given that coal-burning power plants also release mercury into the air, an amount that the Energy Star program estimates at around 0.012 milligrams per kilowatt-hour, if the CFL can save enough energy it should offset this environmental cost, too.

Exactly how long a CFL must live to make up for its higher costs depends on the price of the lamp, the price of electric power, and how much energy the compact fluorescent requires to produce the same amount of light as its incandescent counterpart. Many manufacturers claim that consumers can take an incandescent wattage and divide it by four, and sometimes five, to find an equivalent CFL in terms of light output, says Russ Leslie, associate director at the Lighting Research Center. But he believes that’s “a little bit too greedy.” Instead, he recommends dividing by three. “You’ll still save a lot of energy, but you’re more likely to be happy with the light output,” he says.

To estimate your particular savings, the Energy Star program has published a spreadsheet where you can enter the price you’re paying for electricity, the average number of hours your household uses the lamp each day, the price you paid for the bulb, and its wattage. The sheet also includes the assumptions used to calculate the comparison between compact fluorescent and incandescent bulbs. Playing with the default assumptions given in the sheet, we reduced the CFL’s lifetime by 60 percent to account for frequent switching, doubled the initial price to make up for dead bulbs, deleted the assumed labor costs for changing bulbs, and increased the CFL’s wattage to give us a bit more light. The compact fluorescent won. We invite you to try the same, with your own lighting and energy costs, and let us know your results.
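The underlying break-even arithmetic can be sketched without the spreadsheet. All figures below are illustrative assumptions, not the article's defaults: a 20 W CFL standing in for a 60 W incandescent (the divide-by-three rule quoted above), at $0.12 per kilowatt-hour:

```python
def breakeven_hours(cfl_price, inc_price, cfl_watts, inc_watts, price_per_kwh):
    """Hours of use at which a CFL's energy savings repay its extra
    purchase price, ignoring replacement bulbs and labor costs."""
    extra_cost = cfl_price - inc_price
    savings_per_hour = (inc_watts - cfl_watts) / 1000 * price_per_kwh
    return extra_cost / savings_per_hour

# Illustrative assumptions: $3.00 CFL vs. $0.50 incandescent.
hours = breakeven_hours(cfl_price=3.00, inc_price=0.50,
                        cfl_watts=20, inc_watts=60, price_per_kwh=0.12)
print(f"break-even after about {hours:.0f} hours of use")
```

Under these assumptions the break-even point lands around 500 hours, well inside a 6000-hour rated life; the article's point is that premature failures and frequent switching push that break-even point out.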

B.C. priest lands snowboarding PhD

I have been saying for many years that skiing nurtures my spirituality. Finally there is some theological backing, from an Anglican priest whose Ph.D. thesis is on spirituality in snowboarding.

CBC News, Mar 4, 2011
Thesis examines connection between spirituality and snowboarding

An Anglican priest from Trail, B.C., has become the first person in the world to get a PhD in snowboarding.

Neil Elliot, the minister at St. Andrew’s Anglican Church, recently received his doctorate from Kingston University in London, England.

“The genesis was discovering this term ‘soul-riding’ in a discussion on the internet, and that discussion going into how people have had transcending experiences while riding and discovering I’ve had that experience as well I just hadn’t recognized it,” he said.

Elliot interviewed dozens of snowboarders from the United Kingdom and Canada, delving into the spirituality of snowboarding.

“Soul-riding starts with riding powder, it starts with finding some kind of almost transcendent experience in riding powder and in the whole of your life, so soul-riding is about being completely focused, being completely in the moment, you might say.”

Elliot said it’s clear spirituality and snowboarding do intersect.

“[It’s] about snowboarders who discovered that … snowboarding was their spirituality. I had a lot of people who said, ‘Snowboarding is my religion.'”

‘New model for spirituality’

While Elliot’s thesis doesn’t draw any definite conclusions, he says it offers a new point of view.
Neil Elliot is the first person in the world to get a PhD in snowboarding. (St. Andrews Anglican Church)

“What my thesis does is give a new model for spirituality, saying that spirituality is a way of looking at the world and a way of looking at the world that includes there being something more than just the material,” he said.

“My thesis goes on to say that there’s three dimensions to that. There’s the experiences that we have, there’s the context that we’re in and then there’s what’s going on really inside us, who we are.”

Elliot, who already has a master’s degree in theology and Islamic studies, is the first to admit his love of snowboarding drove him to get the PhD and a job in the B.C. mountains. But he insists his thesis is serious.

“My PhD is about spirituality and snowboarding. It’s rooted in the sociology of religion and in … this debate that’s going on about whether somebody is religious or spiritual. A lot of people say, ‘I’m not religious — I’m spiritual’ and I’m trying to find out what that actually means,” he said.

“The spirituality of snowboarding is looking at what does it mean to be spiritual in today’s world.”

Elliot said his colleagues and congregation support his unorthodox PhD, and love of both the board and cloth.

“They understand that this is a light on what we’re all struggling with: how do we encourage people to come into the church? How do we encourage people to see religion and spirituality as working together, rather than being different things?”

3D Printing, The printed world

This is the end of Bandai. Who would buy an overpriced plastic model kit when you can print your own Gundam? It will probably be the end of Toys"R"Us too.

The Economist, Feb 10th 2011
Three-dimensional printing from digital designs will transform manufacturing and allow more people to start making things

FILTON, just outside Bristol, is where Britain’s fleet of Concorde supersonic airliners was built. In a building near a wind tunnel on the same sprawling site, something even more remarkable is being created. Little by little a machine is “printing” a complex titanium landing-gear bracket, about the size of a shoe, which normally would have to be laboriously hewn from a solid block of metal. Brackets are only the beginning. The researchers at Filton have a much bigger ambition: to print the entire wing of an airliner.

Far-fetched as this may seem, many other people are using three-dimensional printing technology to create similarly remarkable things. These include medical implants, jewellery, football boots designed for individual feet, lampshades, racing-car parts, solid-state batteries and customised mobile phones. Some are even making mechanical devices. At the Massachusetts Institute of Technology (MIT), Peter Schmitt, a PhD student, has been printing something that resembles the workings of a grandfather clock. It took him a few attempts to get right, but eventually he removed the plastic clock from a 3D printer, hung it on the wall and pulled down the counterweight. It started ticking.

Engineers and designers have been using 3D printers for more than a decade, but mostly to make prototypes quickly and cheaply before they embark on the expensive business of tooling up a factory to produce the real thing. As 3D printers have become more capable and able to work with a broader range of materials, including production-grade plastics and metals, the machines are increasingly being used to make final products too. More than 20% of the output of 3D printers is now final products rather than prototypes, according to Terry Wohlers, who runs a research firm specialising in the field. He predicts that this will rise to 50% by 2020.

Using 3D printers as production tools has become known in industry as “additive” manufacturing (as opposed to the old, “subtractive” business of cutting, drilling and bashing metal). The additive process requires less raw material and, because software drives 3D printers, each item can be made differently without costly retooling. The printers can also produce ready-made objects that require less assembly and things that traditional methods would struggle with—such as the glove pictured above, made by Within Technologies, a London company. It can be printed in nylon, stainless steel or titanium.

The printing of parts and products has the potential to transform manufacturing because it lowers the costs and risks. No longer does a producer have to make thousands, or hundreds of thousands, of items to recover his fixed costs. In a world where economies of scale do not matter any more, mass-manufacturing identical items may not be necessary or appropriate, especially as 3D printing allows for a great deal of customisation. Indeed, in the future some see consumers downloading products as they do digital music and printing them out at home, or at a local 3D production centre, having tweaked the designs to their own tastes. That is probably a faraway dream. Nevertheless, a new industrial revolution may be on the way.

Printing in 3D may seem bizarre. In fact it is similar to clicking on the print button on a computer screen and sending a digital file, say a letter, to an inkjet printer. The difference is that the “ink” in a 3D printer is a material which is deposited in successive, thin layers until a solid object emerges.

The layers are defined by software that takes a series of digital slices through a computer-aided design. Descriptions of the slices are then sent to the 3D printer to construct the respective layers. They are then put together in a number of ways. Powder can be spread onto a tray and then solidified in the required pattern with a squirt of a liquid binder or by sintering it with a laser or an electron beam. Some machines deposit filaments of molten plastic. However it is achieved, after each layer is complete the build tray is lowered by a fraction of a millimetre and the next layer is added.
And when you’re happy, click print

The researchers at Filton began using 3D printers to produce prototype parts for wind-tunnel testing. The group is part of EADS Innovation Works, the research arm of EADS, a European defence and aerospace group best known for building Airbuses. Prototype parts tend to be very expensive to make as one-offs by conventional means. Because their 3D printers could do the job more efficiently, the researchers’ thoughts turned to manufacturing components directly.

Aircraft-makers have already replaced a lot of the metal in the structure of planes with lightweight carbon-fibre composites. But even a small airliner still contains several tonnes of costly aerospace-grade titanium. These parts have usually been machined from solid billets, which can result in 90% of the material being cut away. This swarf is no longer of any use for making aircraft.

To make the same part with additive manufacturing, EADS starts with a titanium powder. The firm’s 3D printers spread a layer about 20-30 microns (0.02-0.03mm) thick onto a tray where it is fused by lasers or an electron beam. Any surplus powder can be reused. Some objects may need a little machining to finish, but they still require only 10% of the raw material that would otherwise be needed. Moreover, the process uses less energy than a conventional factory. It is sometimes faster, too.

There are other important benefits. Most metal and plastic parts are designed to be manufactured, which means they can be clunky and contain material surplus to the part’s function but necessary for making it. This is not true of 3D printing. “You only put material where you need to have material,” says Andy Hawkins, lead engineer on the EADS project. The parts his team is making are more svelte, even elegant. This is because without manufacturing constraints they can be better optimised for their purpose. Compared with a machined part, the printed one is some 60% lighter but still as sturdy.

Lightness is critical in making aircraft. A reduction of 1kg in the weight of an airliner will save around $3,000-worth of fuel a year and by the same token cut carbon-dioxide emissions. Additive manufacturing could thus help build greener aircraft—especially if all the 1,000 or so titanium parts in an airliner can be printed. Although the size of printable parts is limited for now by the size of 3D printers, the EADS group believes that bigger systems are possible, including one that could fit on the 35-metre-long gantry used to build composite airliner wings. This would allow titanium components to be printed directly onto the structure of the wing.
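Scaling up the article's two figures — roughly $3,000 of fuel per year for each kilogram removed, and printed parts about 60% lighter than machined ones — gives a sense of what is at stake. The per-part weight below is an assumed figure for illustration, not from the article.

```python
# Back-of-envelope scale-up of the article's numbers: each kilogram
# shaved off an airliner saves about $3,000 of fuel per year, and a
# printed titanium part is about 60% lighter than its machined twin.

FUEL_SAVING_PER_KG = 3_000            # dollars per year (from the article)

def yearly_fuel_saving(machined_kg_total, weight_reduction=0.60):
    """Fuel cost saved per year if machined parts become printed ones."""
    kg_saved = machined_kg_total * weight_reduction
    return kg_saved * FUEL_SAVING_PER_KG

# Hypothetical illustration: if the ~1,000 titanium parts in an airliner
# averaged 2 kg each (an assumed figure, not from the article):
print(f"${yearly_fuel_saving(1_000 * 2):,.0f} per year")
```

Per aircraft, per year — which is why the EADS group is thinking about printing whole wings.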

Many believe that the enhanced performance of additively manufactured items will be the most important factor in driving the technology forward. It certainly is for MIT’s Mr Schmitt, whose interest lies in “original machines”. These are devices not constructed from a collection of prefabricated parts, but created in a form that flows from the intention of the design. If that sounds a bit arty, it is: Mr Schmitt is a former art student from Germany who used to cadge time on factory lathes and milling machines to make mechanised sculptures. He is now working on novel servo mechanisms, the basic building blocks for robots. Custom-made servos cost many times the price of off-the-shelf ones. Mr Schmitt says it should be possible for a robot builder to specify what a servo needs to do, rather than how it needs to be made, and send that information to a 3D printer, and for the machine’s software to know how to produce it at a low cost. “This makes manufacturing more accessible,” says Mr Schmitt.

The idea of the 3D printer determining the form of the items it produces intrigues Neri Oxman, an architect and designer who heads a research group examining new ways to make things at MIT’s Media Lab. She is building a printer to explore how new designs could be produced. Dr Oxman believes the design and construction of objects could be transformed using principles inspired by nature, resulting in shapes that are impossible to build without additive manufacturing. She has made items from sculpture to body armour and is even looking at buildings, erected with computer-guided nozzles that deposit successive layers of concrete.

Some 3D systems allow the properties and internal structure of the material being printed to be varied. This year, for instance, Within Technologies expects to begin offering titanium medical implants with features that resemble bone. The company’s femur implant is dense where stiffness and strength are required, but it also has strong lattice structures which would encourage the growth of bone onto the implant. Such implants are more likely to stay put than conventional ones.

Working at such a fine level of internal detail allows the stiffness and flexibility of an object to be determined at any point, says Siavash Mahdavi, the chief executive of Within Technologies. Dr Mahdavi is working on other lattice structures, including aerodynamic body parts for racing cars and special insoles for a firm that hopes to make the world’s most comfortable stiletto-heeled shoes.

Digital Forming, a related company (where Dr Mahdavi is chief technology officer), uses 3D design software to help consumers customise mass-produced products. For example, it is offering a service to mobile-phone companies in which subscribers can go online to change the shape, colour and other features of the case of their new phone. The software keeps the user within the bounds of the achievable. Once the design is submitted the casing is printed. Lisa Harouni, the company’s managing director, says the process could be applied to almost any consumer product, from jewellery to furniture. “I don’t have any doubt that this technology will change the way we manufacture things,” she says.

Other services allow individuals to upload their own designs and have them printed. Shapeways, a New York-based firm spun out of Philips, a Dutch electronics company, last year, offers personalised 3D production, or “mass customisation”, as Peter Weijmarshausen, its chief executive, describes it. Shapeways prints more than 10,000 unique products every month from materials that range from stainless steel to glass, plastics and sandstone. Customers include individuals and shopkeepers, many ordering jewellery, gifts and gadgets to sell in their stores.

EOS, a German supplier of laser-sintering 3D printers, says they are already being used to make plastic and metal production parts by carmakers, aerospace firms and consumer-products companies. And by dentists: up to 450 dental crowns, each tailored for an individual patient, can be manufactured in one go in a day by a single machine, says EOS. Some craft producers of crowns would do well to manage a dozen a day. As an engineering exercise, EOS also printed the parts for a violin using a high-performance industrial polymer, had it assembled by a professional violin-maker and played by a concert violinist.

Both EOS and Stratasys, a company based in Minneapolis which makes 3D printers that employ plastic-deposition technology, use their own machines to print parts that are, in turn, used to build more printers. Stratasys is even trying to print a car, or at least the body of one, for Kor Ecologic, a company in Winnipeg, whose boss, Jim Kor, is developing an electric-hybrid vehicle called Urbee.
Jim Kor’s printed the model. Next, the car

Making low-volume, high-value and customised components is all very well, but could additive manufacturing really compete with mass-production techniques that have been honed for over a century? Established techniques are unlikely to be swept away, but it is already clear that the factories of the future will have 3D printers working alongside milling machines, presses, foundries and plastic injection-moulding equipment, and taking on an increasing amount of the work done by those machines.

Morris Technologies, based in Cincinnati, was one of the first companies to invest heavily in additive manufacturing for the engineering and production services it offers to companies. Its first intention was to make prototypes quickly, but by 2007 the company says it realised “a new industry was being born” and so it set up another firm, Rapid Quality Manufacturing, to concentrate on the additive manufacturing of higher volumes of production parts. It says many small and medium-sized components can be turned from computer designs into production-quality metal parts in hours or days, against days or weeks using traditional processes. And the printers can build unattended, 24 hours a day.

Neil Hopkinson has no doubts that 3D printing will compete with mass manufacturing in many areas. His team at Loughborough University has invented a high-speed sintering system. It uses inkjet print-heads to deposit infra-red-absorbing ink on layers of polymer powder which are fused into solid shapes with infra-red heating. Among other projects, the group is examining the potential for making plastic buckles for Burton Snowboards, a leading American producer of winter-sports equipment. Such items are typically produced by plastic injection-moulding. Dr Hopkinson says his process can make them for ten pence (16 cents) each, which is highly competitive with injection-moulding. Moreover, the designs could easily be changed without Burton incurring high retooling costs.

Predicting how quickly additive manufacturing will be taken up by industry is difficult, adds Dr Hopkinson. That is not necessarily because of the conservative nature of manufacturers, but rather because some processes have already moved surprisingly fast. Only a few years ago making decorative lampshades with 3D printers seemed to be a highly unlikely business, but it has become an industry with many competing firms and sales volumes in the thousands.

Dr Hopkinson thinks Loughborough’s process is already competitive with injection-moulding at production runs of around 1,000 items. With further development he expects that within five years it would be competitive in runs of tens if not hundreds of thousands. Once 3D printing machines are able to crank out products in such numbers, then more manufacturers will look to adopt the technology.

Will Sillar of Legerwood, a British firm of consultants, expects to see the emergence of what he calls the “digital production plant”: firms will no longer need so much capital tied up in tooling costs, work-in-progress and raw materials, he says. Moreover, the time to take a digital design from concept to production will drop, he believes, by as much as 50-80%. The ability to overcome production constraints and make new things will combine with improvements to the technology and greater mechanisation to make 3D printing more mainstream. “The market will come to the technology,” Mr Sillar says.

Some in the industry believe that the effect of 3D printing on manufacturing will be analogous to that of the inkjet printer on document printing. The written word became the printed word with the invention of movable-type printing by Johannes Gutenberg in the 15th century. Printing presses became like mass-production machines, highly efficient at printing lots of copies of the same thing but not individual documents. The inkjet printer made that a lot easier, cheaper and more personal. Inkjet devices now perform a multitude of printing roles, from books on demand to labels and photographs, even though traditional presses still roll for large runs of books, newspapers and so on.

How would this translate to manufacturing? Most obviously, it changes the economics of making customised components. If a company needs a specialised part, it may find it cheaper and quicker to have the part printed locally or even to print its own than to order one from a supplier a long way away. This is more likely when rapid design changes are needed.

Printing in 3D is not the preserve of the West: Chinese companies are adopting the technology too. Yet you might infer that some manufacturing will return to the West from cheap centres of production in China and elsewhere. This possibility was on the agenda of a conference organised by DHL last year. The threat to the logistics firm’s business is clear: why would a company airfreight an urgently needed spare part from abroad when it could print one where it is required?

Perhaps the most exciting aspect of additive manufacturing is that it lowers the cost of entry into the business of making things. Instead of finding the money to set up a factory or asking a mass-producer at home (or in another country) to make something for you, 3D printers will offer a cheaper, less risky route to the market. An entrepreneur could run off one or two samples with a 3D printer to see if his idea works. He could make a few more to see if they sell, and take in design changes that buyers ask for. If things go really well, he could scale up—with conventional mass production or an enormous 3D print run.

This suggests that success in manufacturing will depend less on scale and more on the quality of ideas. Brilliance alone, though, will not be enough. Good ideas can be copied even more rapidly with 3D printing, so battles over intellectual property may become even more intense. It will be easier for imitators as well as innovators to get goods to market fast. Competitive advantages may thus be shorter-lived than ever before. As with past industrial revolutions, the greatest beneficiaries may not be companies but their customers. But whoever gains most, revolution may not be too strong a word.

Turning garbage into gas

Why burn or bury garbage when you can vaporize it and turn it into electricity? This could be the solution to landfill.

The Economist, Feb 3rd 2011
Atomising trash eliminates the need to dump it, and generates useful power too

DISPOSING of household rubbish is not, at first glance, a task that looks amenable to high-tech solutions. But Hilburn Hillestad of Geoplasma, a firm based in Atlanta, Georgia, begs to differ. Burying trash—the usual way of disposing of the stuff—is old-fashioned and polluting. Instead, Geoplasma, part of a conglomerate called the Jacoby Group, proposes to tear it into its constituent atoms with electricity. It is clean. It is modern. And, what is more, it might even be profitable.

For years, some particularly toxic types of waste, such as the sludge from oil refineries, have been destroyed with artificial lightning from electric plasma torches—devices that heat matter to a temperature higher than that of the sun’s surface. Until recently this has been an expensive process, costing as much as $2,000 per tonne of waste, according to SRL Plasma, an Australian firm that has manufactured torches for 13 of the roughly two dozen plants around the world that work this way.

Now, though, costs are coming down. Moreover, it has occurred to people such as Dr Hillestad that the process could be used to generate power as well as consuming it. Appropriately tweaked, the destruction of organic materials (including paper and plastics) by plasma torches produces a mixture of carbon monoxide and hydrogen called syngas. That, in turn, can be burned to generate electricity. Add in the value of the tipping fees that do not have to be paid if rubbish is simply vaporised, plus the fact that energy prices in general are rising, and plasma torches start to look like a plausible alternative to burial.

The technology has got better, too. The core of a plasma torch is a pair of electrodes, usually made from a nickel-based alloy. A current arcs between them and turns the surrounding air into a plasma by stripping electrons from their parent atoms. Waste (chopped up into small pieces if it is solid) is fed into this plasma. The heat and electric charges of the plasma break the chemical bonds in the waste, vaporising it. Then, if the mix of waste is correct, the carbon and oxygen atoms involved recombine to form carbon monoxide and the hydrogen atoms link up into diatomic hydrogen molecules. Both of these are fuels (they burn in air to form carbon dioxide and water, respectively). Metals and other inorganic materials that do not turn into gas fall to the bottom of the chamber as molten slag. Once it has cooled, this slag can be used to make bricks or to pave roads.
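The recombination step described above reduces to simple atom-counting: each oxygen atom captures a carbon to form CO, and the hydrogen pairs off into H2. The sketch below is an idealised stoichiometry only — it ignores CO2, tars, char and the water-gas shift that complicate real gasifiers — and uses cellulose (C6H10O5) as a stand-in for paper waste.

```python
# Idealised syngas stoichiometry for the recombination step: every O
# captures one C (giving CO), and all H pairs into H2. Real gasifiers
# also produce CO2, tars and char, which this sketch ignores.

def syngas_yield(c_atoms, h_atoms, o_atoms):
    """Syngas from one 'molecule' of feed: (CO, H2) molecule counts."""
    co = min(c_atoms, o_atoms)   # C + O -> CO
    h2 = h_atoms // 2            # 2 H  -> H2
    return co, h2

co, h2 = syngas_yield(6, 10, 5)  # cellulose, C6H10O5
print(f"Per cellulose unit: {co} CO + {h2} H2 (plus {6 - co} C left over)")
```

Both product gases then burn in air — CO to carbon dioxide, H2 to water — which is the energy the plant sells back as electricity.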

Electric arcs are a harsh environment to operate in, and early plasma torches were not noted for reliability. These days, though, the quality of the nickel alloys has improved so that the torches work continuously. On top of that, developments in a field called computational fluid dynamics allow the rubbish going into the process to be mixed in a way that produces the most syngas for the least input of electricity.

The first rubbish-to-syngas plants were built almost a decade ago, in Japan—where land scarcity means tipping fees are particularly high. Now the idea is moving elsewhere. This year Geoplasma plans to start constructing a plant costing $120m in St Lucie County, Florida. It will be fed with waste from local households and should create enough syngas to make electricity for more than 20,000 homes. The company reckons it can make enough money from the project to service the debt incurred in constructing the plant and still provide a profit from the beginning.

Nor is Geoplasma alone. More than three dozen other American firms are proposing plasma-torch syngas plants, according to Gershman, Brickner & Bratton, a waste consultancy based in Fairfax, Virginia. Demand is so great that the Westinghouse Plasma Corporation, an American manufacturer of plasma torches, is able to hire out its test facility in Madison, Pennsylvania, for $150,000 a day.

Syngas can also be converted into other things. The “syn” is short for “synthesis” and syngas was once an important industrial raw material. The rise of the petrochemical industry has rather eclipsed it, but it may become important again. One novel proposal, by Coskata, a firm based in Warrenville, Illinois, is to ferment it into ethanol, for use as vehicle fuel. At the moment Coskata uses a plasma torch to make syngas from waste wood and wood-pulp, but modifying the apparatus to take household waste should not be too hard.

Even if efforts to convert such waste into syngas fail, existing plants that use plasma torches to destroy more hazardous material could be modified to take advantage of the idea. The Beijing Victex Environmental Science and Technology Development Company, for example, uses the torches to destroy sludge from Chinese oil refineries. According to Fiona Qian, the firm’s deputy manager, the high cost of doing this means some refineries are still dumping toxic waste in landfills. Stopping that sort of thing by bringing the price down would be a good thing by itself.