Eat Cockroaches to Fight Radiation

Cockroaches (甲甴)

Cockroaches are the most resilient creatures on Earth. A famous scientist once said that if a nuclear war broke out, cockroaches would survive even after every human had died. A recent study by Tsinghua University's nuclear laboratory found that a dose of nuclear radiation lethal to humans leaves cockroaches completely unharmed.

Some experts believe that the cockroach's high resistance to radiation comes from its special genetic sequence, which keeps its cells safe from radiation damage. If humans eat plenty of cockroaches and absorb these special genes, human cells can also produce radiation antibodies and withstand higher doses of radiation. A nutritionist points out that cockroaches are rich in protein, and that as long as they are washed clean before eating and cooked above 100 degrees Celsius to thoroughly kill the bacteria inside them, eating cockroaches does no harm to the human body. He also warns against eating cockroaches contaminated with insecticide, and recommends choosing fresh, organic cockroaches whenever possible.

As more and more people learn of the anti-radiation benefits of eating cockroaches, some mainland restaurants have seized the moment to launch cockroach banquets. A well-known restaurant in Wuhan now offers a ninety-nine yuan value set of three cockroach dishes and a soup: a starter of double-boiled cockroach-antenna soup, a main course of salt-and-pepper cockroaches with mapo minced-cockroach fried rice, and a dessert of cockroach-shell and ginkgo sweet soup. At one point a queue formed outside the shop, and diners who could not get a table caused a commotion that brought the police in to keep order. Netizens on mainland websites are reposting cockroach recipes and swapping cockroach cooking tips. A spokesman for the Ministry of Health has even encouraged the public to eat more cockroaches, which both fights radiation and exterminates pests: two birds with one stone.

With the Fukushima nuclear crisis in Japan still ongoing, please spread the news that eating cockroaches fights radiation far and wide, so that everyone can take early precautions against radiation.

Philosophy Homework: The Coherence Theory of Empirical Knowledge

In traditional epistemology, knowledge is true belief plus justification. In looking for justification, we infer reasons to believe new knowledge from knowledge that is already established. But there is a problem: if every piece of knowledge is inferred from prior knowledge, and we trace the chain of inference back step by step, how is the very first piece of knowledge established? Traditionally, foundationalism holds that at the very bottom of knowledge there are basic beliefs that are self-justifying and need no further argument, serving as the foundation for all inferred knowledge. Coherentism denies that basic beliefs exist: the inference of all knowledge forms one giant circle, and we can only examine the coherence of the whole knowledge system, checking for internal contradictions or mismatches with observations of the world. This homework discusses the problems of coherentism itself and asks whether coherentism can stand.

The Coherence Theory of Empirical Knowledge

In this essay, I evaluate Bonjour’s coherence theory of empirical knowledge (CTEK) against the foundational theory of empirical knowledge (FTEK). First, I will outline the regress problem and compare the responses from FTEK and CTEK. Then I will examine the objection against CTEK regarding its relationship with the external world. I will extend the objection by arguing that CTEK rests on a fundamental assumption: the external world itself has to be coherent for CTEK to be justified. Finally, I will conclude that CTEK fails to overcome the objection in a strictly epistemological sense but succeeds in a practical sense.

Since Plato, the traditional view of knowledge has been justified true belief. A belief qualifies as knowledge only if it is justified. A belief is justified by the validity and soundness of its supporting argument, which implicitly depends on the premises of that argument also being justified. Each premise is itself a belief that requires justification from its own premises. As a result, we have a regress of justifications that keeps tracing backwards, known as “the regress problem”. FTEK deals with the regress problem by positing basic beliefs at the very bottom of the chains of premises; the regress terminates when these basic beliefs are reached. There are two versions of FTEK. The strong version states that basic beliefs are self-justified without the need of further premises. The weak version states that basic beliefs are initially credible and likely to be true. CTEK rejects the notion of basic beliefs: instead of the regress of premises going on infinitely in a line, the inference is circular. An epistemic system is justified by its internal coherence.

However, the circular nature of CTEK runs into the problem of begging the question: a belief cannot be justified unless it is already justified. The solution is to reject the linear conception of inferential justification and adopt a holistic, systematic conception instead. CTEK separates justification into two categories: the justification of a particular belief and the global justification of the entire cognitive system. The justification of a particular belief appears linear, since the regress of premises soon reaches beliefs that are acceptable in the context. If no acceptable belief is reached, the regress continues in a circle, and the justification of the overall knowledge system comes into question. In CTEK, the justification of the entire system is based on its degree of coherence. A coherent system must be internally consistent, meaning there is no internal conflict, but coherence is more than mere consistency. Coherence is the systemic connection between the components of the system: how observable facts can be explained and predicted. The justified knowledge system is the one with the highest degree of coherence among all the alternative consistent systems.

In the paper, Bonjour lists three objections to CTEK that question the fundamental connection between coherence and justification. Of the three, Bonjour spends most of the paper defending against the second: the relationship between CTEK and the external world. I think this is the strongest objection against CTEK, and I also think Bonjour successfully defends CTEK against it. However, Bonjour omits an underlying assumption in his defence: the external world has to be coherent in order for his argument to be justified. In the following paragraphs, I will first set out the objection, go over Bonjour’s response to it, and illustrate his hidden assumption with a counterexample.

The strongest objection to CTEK is that since CTEK is justified only in terms of the internal coherence of the beliefs in the system, it bears no relationship to the external world. A self-enclosed system of beliefs cannot constitute empirical knowledge. Bonjour’s defense is straightforward: it simply links the coherent belief system in CTEK to observable facts from the external world. He argues that in CTEK, the coherent system of beliefs must also cohere with reliable observations of the external world in the long run. When a particular observation does not cohere with the belief system, CTEK can either set that observation aside as an incoherent exception or refine the belief system to accommodate it. If too many incoherent exceptions accumulate, the belief system becomes less coherent with the world and is eventually replaced by a more coherent one. The belief system continuously updates itself upon new observations to maintain its degree of coherence. Input from the external world stands in a causal relationship with the CTEK belief system, and the belief system is justified by its coherence with observable facts about the external world. One key piece of Bonjour’s argument is to establish what can count as a reliable observation without being a basic belief. He argues that spontaneous introspective beliefs about spontaneous sense experience are very likely to be true. The reliability of cognitively spontaneous beliefs is itself part of the coherent system, along with the observations of the external world. It is therefore not an a priori truth in the sense of being required as a foundation for the justification of knowledge.

Bonjour bases CTEK’s justification on the coherence of the belief system with reliable observations of the external world in the long run. Let us grant that the belief system and the observations are reliable. Bonjour nevertheless fails to address an underlying assumption: that the external world is coherent in the long run. If the external world is not coherent, then no belief system can stay coherent, because CTEK stands in a causal relationship with the external world. Bonjour uses the spontaneous visual belief in a red book, and the absence of a spontaneous visual belief in a blue book, to illustrate how the belief system is linked to the external world. What if there is a chance that the book randomly changes colour every time I observe it? How can I conclude that there is a red book on my desk rather than a blue one? Even if I can trust the spontaneous beliefs delivered by my senses, I cannot trust that the object under observation stays the same between two observations. It is possible that the cover of the book is made of the latest colour-changing e-paper technology, in which case we can give a coherent account of the observed fact. But it is also possible that no scientific theory could explain why the book changes colour. It could be an act of God, a simple miracle that the book turned from red to blue for no apparent reason. The CTEK justification adopts an objective, clockwork world view that rules out the existence of any supernatural power, such as an omnipotent God who defies all laws of physics.

In theory, we cannot epistemologically justify CTEK because we cannot epistemologically justify that the world is coherent. Hume argues that the “Uniformity of Nature”, which is essentially the same as the coherence of the world, cannot be justified, yet it is rational and non-optional for us to accept the habit of inductive inference. Practically, we can assume the world is coherent almost all of the time and take it as a weak foundation that is probably initially true until shown otherwise. CTEK is really a very weak FTEK in disguise: the basic belief of CTEK is that the world is coherent, which provides the foundation on which coherent belief systems are built.

However, it would be absurd to argue that the world is not coherent. If the world were not coherent, then even FTEK could not produce any knowledge system. Just as FTEK cannot convince the ultimate skeptic, CTEK also fails to convince the ultimate skeptic that any knowledge is justified. Given that the assumption of a coherent world must be dialectically acceptable in the context of any theory of knowledge for that theory to have meaning, we can grant the assumption a priori status outside of any epistemic dialogue. With this particular exception, I conclude that CTEK is successful in overcoming the objection regarding the relationship between a coherent belief system and the external world.

Reference:
[1] Laurence Bonjour, “The Coherence Theory of Empirical Knowledge,” Philosophical Studies 30 (1976), pp. 281–312.

Therapist-free therapy

Looks like psychologists will soon be out of work, replaced by computer programs. I never trusted talk therapy anyway; the couch only works in the movies.

Mar 3rd 2011, The Economist
Cognitive-bias modification may put the psychiatrist’s couch out of business

THE treatment, in the early 1880s, of an Austrian hysteric called Anna O is generally regarded as the beginning of talking-it-through as a form of therapy. But psychoanalysis, as this version of talk therapy became known, is an expensive procedure. Anna’s doctor, Josef Breuer, is estimated to have spent over 1,000 hours with her.

Since then, things have improved. A typical course of a modern talk therapy, such as cognitive behavioural therapy, consists of 12-16 hour-long sessions and is a reasonably efficient way of treating conditions like depression and anxiety (hysteria is no longer a recognised diagnosis). Medication, too, can bring rapid change. Nevertheless, treating disorders of the psyche is still a hit-and-miss affair, and not everyone wishes to bare his soul or take mind-altering drugs to deal with his problems. A new kind of treatment may, though, mean he does not have to. Cognitive-bias modification (CBM) appears to be effective after only a few 15-minute sessions, and involves neither drugs nor the discussion of feelings. It does not even need a therapist. All it requires is sitting in front of a computer and using a program that subtly alters harmful thought patterns.

This simple approach has already been shown to work for anxiety and addictions, and is now being tested for alcohol abuse, post-traumatic-stress disorder and several other disturbances of the mind. It is causing great excitement among researchers. As Yair Bar-Haim, a psychologist at Tel Aviv University who has been experimenting with it on patients as diverse as children and soldiers, puts it, “It’s not often that a new evidence-based treatment for a major psychopathology comes around.”

CBM is based on the idea that many psychological problems are caused by automatic, unconscious biases in thinking. People suffering from anxiety, for instance, may have what is known as an attentional bias towards threats: they are drawn irresistibly to things they perceive to be dangerous. Similar biases may affect memory and the interpretation of events. For example, if an acquaintance walks past without saying hello, it might mean either that he has ignored you or that he has not seen you. The anxious, according to the theory behind CBM, have a bias towards assuming the former and reacting accordingly.

The goal of CBM is to alter such biases, and doing so has proved surprisingly easy. A common way of debiasing attention is to show someone two words or pictures—one neutral and the other threatening—on a computer screen. In the case of social anxiety these might be a neutral face and a disgusted face. Presented with this choice, an anxious person instinctively focuses on the disgusted visage. The program, however, prods him to complete tasks involving the neutral picture, such as identifying letters that appear in its place on the screen. Repeating the procedure around a thousand times, over a total of two hours, changes the user’s tendency to focus on the anxious face. That change is then carried into the wider world.

Emily Holmes of Oxford University, who studies the use of CBM for depression, describes the process as like administering a cognitive vaccine. When challenged by reality in the form of, say, the unobservant friend, the recipient of the vaccine finds he is inoculated against inappropriate anxiety.

In a recent study of social anxiety by Norman Schmidt of Florida State University and his colleagues, which involved 36 volunteers who had been diagnosed with anxiety, half underwent eight short sessions of CBM and the rest were put in a control group and had no treatment. At the end of the study, a majority of the CBM volunteers no longer seemed anxious, whereas in the control group only 11% had shed their anxiety. Although it was only a small trial, these results compare favourably with those of existing treatments. An examination of standard talk therapy carried out in 2004, for instance, found that half of patients had a clinically significant reduction in symptoms. Trials of medications have similar success rates.

The latest research, which is on a larger scale and is due to be published this month in Psychological Science, tackles alcohol addiction. Past work has shown that many addicts have an approach bias for alcohol—in other words, they experience a physical pull towards it. (Arachnophobia, a form of this bias that is familiar to many people, works in the opposite way: if they encounter a spider, they recoil.)

This study, conducted by Reinout Wiers of the University of Amsterdam and his colleagues, attempted to correct the approach bias to alcohol with CBM. The 214 participants received either a standard addiction treatment—a form of talk therapy—or the standard treatment plus four 15-minute sessions of CBM. In the first group, 41% of participants were abstinent a year later; in the second, 54%. That is not a cure for alcoholism, but it is a significant improvement on talk therapy alone.

Many other researchers are now exploring CBM. A team at Harvard, led by Richard McNally, is seeking volunteers for a month-long programme that will use smart-phones to assess the technique’s effect on anxiety. And Dr Bar-Haim and his team are examining possible connections between cognitive biases and post-traumatic-stress disorder in the American and Israeli armies.

Not all disorders are amenable to CBM. One study, by Hannah Reese (also at Harvard) and her colleagues, showed that it is ineffective in countering arachnophobia (perhaps not surprising, since this may be an evolved response, rather than an acquired one). Moreover, Dr Wiers found that the approach bias towards alcohol is present in only about half of the drinkers he studies. He hypothesises that for the others, drinking is less about automatic impulses and more about making a conscious decision. In such cases CBM is unlikely to work.

Colin MacLeod of the University of Western Australia, one of the pioneers of the technique, thinks CBM is not quite ready for general use. He would like to see it go through some large, long-term, randomised clinical trials of the sort that would be needed if it were a drug, rather than a behavioural therapy. Nevertheless, CBM does look extremely promising, if only because it offers a way out for those whose answer to the question, “Do you want to talk about it?” is a resounding “No”.

Exam (血聘)

I have a soft spot for low-budget independent films with inventive ideas. Exam was made by a British independent studio: the whole hundred minutes takes place inside a single room, and the ten actors are all unfamiliar faces. Films like this are either hopelessly bad or works with exceptionally interesting stories, and Exam is clearly not the former.

The premise is simple. Eight candidates interview with a mysterious corporation, and the final elimination round takes place in a sealed room. In front of the eight is nothing but a blank exam paper, and the invigilator gives them eighty minutes to find the answer. The rules are simple: damage your own paper and you are disqualified; attempt to talk to the invigilator or the guard and you are disqualified; leave the room and you are disqualified. The eight characters are a white man, a black man, an Indian man, a Chinese woman, a blonde, a redhead, a brunette, and a deaf man. Who plays them hardly matters; one of the delights of this kind of film is that you can never guess which character will be eliminated next, because there is no designated protagonist guaranteed to win.

I will not give away the plot; this kind of film is no fun once you know the ending. The setup is decent, though not watertight, borrowing a soft science-fiction element for its twist. Think of it as a cross between Saw and The Apprentice, minus Saw's graphic violence. The eighty-minute exam runs in real time with the film, and from the opening to the final reveal it never drags. The one shortcoming is that the characters lack distinct personalities; they do not even have names, just skin or hair colour as labels, and their identities could be shuffled at random, so you can hardly expect much characterisation.

If you are tired of Hollywood's brainless special-effects movies, if you like films that provoke thought, or if you are simply bored of the cookie-cutter job interviews of real life, Exam is a nice little thriller.

Philosophy of History – Mark Day

In recent years, whenever Hong Kong redevelops old districts and old buildings, we hear demands for heritage conservation. When the old Star Ferry pier was demolished, I spent a lot of time in online discussions debating the pier's historical value, and I felt the limits of my own knowledge: on the core question of what historical significance actually is, I had only a half-baked understanding gleaned from online and newspaper commentary. Although the Star Ferry demolition is many years past, my ignorance in this area still bothered me. I finally made up my mind, spent three months studying, and finished an introductory university textbook on the philosophy of history. The book differs a little from the answer I expected: rather than the philosophy of history, it is really about the philosophy of historiography. It proceeds from the shallow to the deep, introducing all the major theories of history. History is not merely what happened in the past; it is about how people regard the past, and about understanding the relationship between history and people.

The first chapter introduces the theory of Ranke, the father of historiography, who held that history is the reconstruction of the spirit of the past from the archives. Because human memory is unreliable, historians treat historical texts with scepticism: no first-hand or second-hand record can be taken at face value, and all surviving sources must be compared to find the answer. Historians cannot know every detail of what happened, so when reading history one must distinguish what belongs to the original record from the interpretation that historians added later. Ranke considered explaining historical phenomena more important than analysing historical systems, and he put historical narrative and historical evidence first. History links the present to the past; through preservation of and dialogue with history, history can be applied to understanding the present. Historical records are not only texts: antiques, ruins and images are also important historical material, and they too must pass historiographical criticism, which studies the influence they bring to the present.

Chapter two introduces Collingwood's methodology of history. He criticised the view of history as a scissors-and-paste compilation of sources, because the surface record cannot be trusted; it is distorted by the recorder's own interests. The historian's duty is to apply the rules of historical thinking, peeling back the layers like a detective: inferring a source's authenticity from its literary style, inferring its credibility from the recorder's identity, and studying the relationship between surviving and lost texts, so as to see through the surface of first-hand sources and reconstruct what actually happened. The point of strictly observing the rules of historical thinking is to stop anyone from freely distorting history, ignoring the continuity of historical evidence, and breaking the causal link between present and past.

Chapter three introduces methods for assessing the credibility of historical evidence. The most basic is Bayesian statistical logic (Bayesianism); next comes proposing explanations for the evidence, then inferring and testing the causal relationships in historical hypotheses. Historical evidence may admit different explanations. A good explanation must be coherent across different pieces of evidence, a historical hypothesis must not contain too many taken-for-granted gaps, and the explanation that fits all the evidence is the most succinct and powerful one.
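
As a rough illustration of the Bayesian logic the chapter refers to, here is a minimal sketch of updating confidence in a historical claim when a new piece of evidence turns up; the function and all the numbers are my own hypothetical example, not the book's:

```python
# Bayes' rule: P(claim | evidence) =
#   P(evidence | claim) * P(claim) / P(evidence)

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical: we are 30% confident in a claim; a newly found record
# would exist with 80% probability if the claim is true, but only 10%
# probability if it is false (say, as a forgery).
posterior = bayes_update(prior=0.3, p_evidence_if_true=0.8,
                         p_evidence_if_false=0.1)
print(f"confidence after the find: {posterior:.2f}")  # 0.77
```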

Chapter four points out the difference between history and science. Both demand evidence, but science is positivist (Positivism) in nature: evidence can be quantified and generalised, scientific laws induced, and conclusions deduced from the laws. History is not a science; historical experiments cannot be repeated, and there is never enough data to induce general laws. The study of history can only analyse the causal relations of each event, infer regularities between events, and then judge, case by case, the priority and weight of each regularity.

Chapter five establishes causation in historiography. It begins by pointing out the fallacy of historians who deny causation: if there were no causal relations between historical events, events would just be independent accidents, and no historical explanation would be possible. Comparing different historical events lets us understand causation, and lets us infer causes and effects under different conditions. Historical theories help us make sense of history, and they have three necessary conditions. First, a historical theory must explain functionally: given historical events as input, it outputs the events' outcomes and relations. Second, a historical theory explains the social level, because the individual level involves too many unknowable variables to be captured by theory. Third, a historical theory provides a model that spells out the various causal connections. By comparing the similarities and differences of historical events, we test whether a historical theory's explanations hold up. A historical theory explains why an event happened, and why, in the absence of similar conditions, it did not. The greatest challenge for historical theory, however, is how to tell a sound explanation from a glibly invented historical story.

Chapter six raises the problems of naturalistic historical theory. Historical theories are built on the repeatability of general laws, but every historical event is unique; history that has happened never recurs, and if two events have nothing in common, how can the earlier one explain the later? Historical events confirm the correctness of historical theories, while historical theories are in turn used to explain historical events; but when an event does not fit the known theory, there arises a choice between revising the theory and treating the event as an exception. Historians can combine different historical theories to explain how an event happened, yet they have no clue when it comes to predicting events that have not yet occurred. Of course, after an event happens, historians can still easily explain how it came about. As for which historical theory to invoke, each historical event requires separate consideration; one cannot reason from theory in a vacuum, or the result may be miles away from reality. Besides starting from historical theory, one can also study history from the angle of historical narrative, interpreting history in figurative form and reconstructing the thoughts and actions of historical figures.

Chapter seven explores how to find the meaning of history through interpretation. Historians face a paradox: to the people living through it, history already has meaning without the historian's interpretation. The historian's interpretation is a second layer of meaning: history's meaning for modern people, or for the historian. Through interpreting history, people can feel the emotions of the time, imagine and inhabit other people's experience, and become conscious of their own. Such experience must come through historical evidence, which divides into two kinds: descriptions of outward behaviour, and written records of inner thoughts. Whichever kind is used, one runs into a problem from the philosophy of mind: since no two persons are identical, how can one feel another's thoughts? Collingwood held that when writing history, the historian must re-enact the past in his own mind, analysing the agent's thinking from the record of outward events. He went further and claimed that all history is the history of thought, but that claim has a big hole: historical figures do have intentions, yet events do not necessarily unfold as they intend.

Chapter eight argues that historiography must find rational explanations for the thoughts and actions of people in the past. A rational explanation links thought to thought, and thought to action. To understand a past action, one can treat the action itself as an answer to some question, and ask what problem the action solved for the agent. Of course, human behaviour does not follow scientific laws, and irrational conduct does occur. To understand the reasons for an action, one must first step into the past agent's role and think from his point of view; things that are unreasonable in theory may turn out to be reasonable in practice, given his beliefs and motives. A person's thinking and behaviour are shaped by the society of his time, so the historian must also consider the social context.

Chapter nine raises the question of objectivity and subjectivity in history: is historical knowledge universal and timeless, or must it be read within the context of its time? Historicists (historicism) hold that human thinking changes constantly; historians should not view the past through modern eyes, but should trace events back to their original records and origins, and not be misled by interpretations piled up over the years. Since historians are themselves bound by their own era, any reading of the past carries the bias of its time, so objective history cannot exist at all. Max Weber held that everyone has values, but as long as the history a historian records is not influenced by his values, it meets the conditions of objective history. History may faithfully record the values and opinions of others, as long as the author's own opinions are kept out. Yet choosing what to record and what not to record is itself a value judgment that affects objectivity. Gadamer saw the reading of history as a dialogue with the past: historians cannot interpret the past arbitrarily, they must answer the readings of past historians, keep an open mind in the dialogue, and allow their own views to change as the conversation goes on.

Chapter ten takes a deeper look at the historical narrative mentioned in chapter six: recording history by telling stories. In the twentieth century, historical narrative was classified as literature rather than history, but the author holds that narrative occupies an important place in historiography: it lets readers step out of the present and jump into history to feel the experience of the time. A historical story has characters, a main plot and an ending, and the degree of the storyteller's intervention determines the depth of the narrative. Like any story, a historical narrative needs structure; the plot may be unexpected but must be plausible, with causation implicit throughout and a theme running through the whole story. Hayden White synthesised the narratives of different historians and developed metahistory, reconstructing historical knowledge and explanation from the similarities and differences in historians' storytelling rhetoric. What separates historical narrative from historical fiction is the truthfulness of the narrative; yet to make a narrative look more real, historians insert trivial details that later historians could not possibly know. Historical narrative divides into the micro and the macro. The former combines the self-narratives of different historical figures, observing the same event from different angles. Micro narrative is collective memory, but collective memory is not shared memory, because each person remembers differently. Macro narrative integrates the divergences within collective memory, unifying all the viewpoints in the story into a meta-narrative that transcends every individual or organisation in the event.

Chapter eleven answers the question of history and historical truth: what exactly is the relationship between history and what happened in the past? No one denies that the past happened; historical realists ask to what degree history can reflect the real past. Anti-realists hold that there is no metaphysical reality, everything depends on human thought and language, so history has no division into true and false. Anti-representationalists do not deny that reality exists, but they hold that language cannot represent it. Beyond whether individual historical statements are true, the historical system that aggregates all the statements must also be tested for truth. Even if every historical statement is true, a selective excerpt of certain statements can leave an impression opposite to the facts. Historical truth blurs with time, and first-hand sources cannot be fully trusted because of the recorders' personal interests; historians can only compare the different pieces of historical evidence, corroborating existing and newly discovered evidence against each other, and infer the more credible version. One hard problem of historical truth is how to connect the truth of the past to the truth of the present: historians cannot directly observe the past, the evidence that could verify the truth may vanish with time, and only historical narrative can pass historical knowledge down.

Chapter twelve explores the relationship between historical evidence and historical theory: does the historian's background belief predetermine the historical conclusion he reaches? When historical evidence does not fit historical theory, the historian may choose to treat the evidence as an exception, or to revise the theory to accommodate the new evidence. Both choices preserve the internal consistency of the theory, yet they yield mutually incompatible theories of equal standing. Of two different historical accounts, both compatible with the surviving evidence, one must be right and the other wrong; we simply have no way to tell which. Very often, different accounts agree on the basic facts, and the divergence lies in narrative, explanation, and the interpretation of historical meaning. Social constructionists hold that history is also a product of power relations, and a historical account can be analysed through the historian's social background. Probing the nature of historical knowledge naturally leads back to epistemological questions about the nature of knowledge: does a priori knowledge exist, does knowledge change with time, what limits does language place on knowledge, and what counts as a sound explanation of knowledge? Beyond the question of what we know, historical knowledge must also ask how to use what we know; the author holds that only by keeping historical narrative open can the past be connected to the future.

Although there is no homework to hand in and no exam to sit, reading this book and writing these notes took no less time than formally taking the course. These notes took three weekends to write, and I read the textbook from cover to cover at least three times. I still have no single answer to the question of what history is, but along the way I learned many different answers. I come from a science background and find scientific-style theories of history easier to accept, but historiography is, after all, a humanities discipline. The mainstream view of the meaning of history is not that it describes objective historical truth, but that through narrative and interpretation it connects the minds of people past and present.

Why I am not worried about Japan’s nuclear reactors

Now I know a nuclear meltdown is not that frightening. Worst case, it will only take out the core, with no explosive release of radioactivity.

By Dr. Josef Oehmen, MIT, March 13, 2011

I am writing this text (Mar 12) to give you some peace of mind regarding some of the troubles in Japan, that is the safety of Japan’s nuclear reactors. Up front, the situation is serious, but under control. And this text is long! But you will know more about nuclear power plants after reading it than all journalists on this planet put together.

There was not, and will not be, any significant release of radioactivity.

By “significant” I mean a level of radiation of more than what you would receive on – say – a long distance flight, or drinking a glass of beer that comes from certain areas with high levels of natural background radiation.

I have been reading every news release on the incident since the earthquake. There has not been one single (!) report that was accurate and free of errors (and part of that problem is also a weakness in the Japanese crisis communication). By “not free of errors” I do not refer to tendentious anti-nuclear journalism – that is quite normal these days. By “not free of errors” I mean blatant errors regarding physics and natural law, as well as gross misinterpretation of facts, due to an obvious lack of fundamental and basic understanding of the way nuclear reactors are built and operated. I have read a three-page report on CNN where every single paragraph contained an error.

We will have to cover some fundamentals, before we get into what is going on.

Construction of the Fukushima nuclear power plants

The plants at Fukushima are so-called Boiling Water Reactors, or BWR for short. Boiling Water Reactors are similar to a pressure cooker. The nuclear fuel heats water, the water boils and creates steam, the steam then drives turbines that create the electricity, and the steam is then cooled and condensed back to water, and the water is sent back to be heated by the nuclear fuel. The pressure cooker operates at about 250 °C.

The nuclear fuel is uranium oxide. Uranium oxide is a ceramic with a very high melting point of about 3000 °C. The fuel is manufactured in pellets (think little cylinders the size of Lego bricks). Those pieces are then put into a long tube made of Zircaloy with a melting point of 2200 °C, and sealed tight. The assembly is called a fuel rod. These fuel rods are then put together to form larger packages, and a number of these packages are then put into the reactor. All these packages together are referred to as “the core”.

The Zircaloy casing is the first containment. It separates the radioactive fuel from the rest of the world.

The core is then placed in the “pressure vessel”. That is the pressure cooker we talked about before. The pressure vessel is the second containment. This is one sturdy piece of a pot, designed to safely contain the core at temperatures of several hundred °C. That covers the scenarios where cooling can be restored at some point.

The entire “hardware” of the nuclear reactor – the pressure vessel and all pipes, pumps, coolant (water) reserves, are then encased in the third containment. The third containment is a hermetically (air tight) sealed, very thick bubble of the strongest steel and concrete. The third containment is designed, built and tested for one single purpose: To contain, indefinitely, a complete core meltdown. For that purpose, a large and thick concrete basin is cast under the pressure vessel (the second containment), all inside the third containment. This is the so-called “core catcher”. If the core melts and the pressure vessel bursts (and eventually melts), it will catch the molten fuel and everything else. It is typically built in such a way that the nuclear fuel will be spread out, so it can cool down.

This third containment is then surrounded by the reactor building. The reactor building is an outer shell that is supposed to keep the weather out, but nothing in. (This is the part that was damaged in the explosion, but more on that later.)

Fundamentals of nuclear reactions

The uranium fuel generates heat by nuclear fission. Big uranium atoms are split into smaller atoms. That generates heat plus neutrons (one of the particles that forms an atom). When the neutron hits another uranium atom, that splits, generating more neutrons and so on. That is called the nuclear chain reaction.

Now, just packing a lot of fuel rods next to each other would quickly lead to overheating and, after about 45 minutes, to a melting of the fuel rods. It is worth mentioning at this point that the nuclear fuel in a reactor can *never* cause a nuclear explosion of the kind a nuclear bomb produces. Building a nuclear bomb is actually quite difficult (ask Iran). In Chernobyl, the explosion was caused by excessive pressure buildup, hydrogen explosion and rupture of all containments, propelling molten core material into the environment (a “dirty bomb”). Why that did not and will not happen in Japan is explained further below.

In order to control the nuclear chain reaction, the reactor operators use so-called “control rods”. The control rods absorb the neutrons and kill the chain reaction instantaneously. A nuclear reactor is built in such a way that, when operating normally, you take out all the control rods. The coolant water then takes away the heat (and converts it into steam and electricity) at the same rate as the core produces it. And you have a lot of leeway around the standard operating point of 250°C.

The challenge is that after inserting the rods and stopping the chain reaction, the core still keeps producing heat. The uranium “stopped” the chain reaction. But a number of intermediate radioactive elements are created by the uranium during its fission process, most notably Cesium and Iodine isotopes, i.e. radioactive versions of these elements that will eventually split up into smaller atoms and not be radioactive anymore. Those elements keep decaying and producing heat. Because they are not regenerated any longer from the uranium (the uranium stopped decaying after the control rods were put in), they get less and less, and so the core cools down over a matter of days, until those intermediate radioactive elements are used up.

This residual heat is causing the headaches right now.

So the first “type” of radioactive material is the uranium in the fuel rods, plus the intermediate radioactive elements that the uranium splits into, also inside the fuel rod (Cesium and Iodine).

There is a second type of radioactive material created outside the fuel rods. The big difference up front: those radioactive materials have a very short half-life, which means they decay very fast and split into non-radioactive materials. By fast I mean seconds. So if these radioactive materials are released into the environment, yes, radioactivity was released, but no, it is not dangerous, at all. Why? By the time you have spelled “R-A-D-I-O-N-U-C-L-I-D-E”, they will be harmless, because they will have split up into non-radioactive elements. Those radioactive elements are N-16, the radioactive isotope (or version) of nitrogen (air), and noble gases such as Argon. But where do they come from? When the uranium splits, it generates a neutron (see above). Most of these neutrons will hit other uranium atoms and keep the nuclear chain reaction going. But some will leave the fuel rod and hit the water molecules, or the air that is in the water. Then, a non-radioactive element can “capture” the neutron. It becomes radioactive. As described above, it will quickly (seconds) shed the neutron again and return to its former beautiful self.
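
To see why second-scale half-lives make these activation products harmless, here is a back-of-the-envelope decay calculation. The roughly 7-second half-life for N-16 is a figure I am assuming for illustration; it is not from the text above:

```python
# Fraction of a radioactive substance surviving after time t:
# (1/2) ** (t / half_life).

def surviving_fraction(t_seconds, half_life_seconds):
    return 0.5 ** (t_seconds / half_life_seconds)

N16_HALF_LIFE = 7.0  # seconds (approximate, assumed for this example)
for t in (7, 70, 210):
    print(f"after {t:>3d} s: {surviving_fraction(t, N16_HALF_LIFE):.2e}")
# after   7 s: 5.00e-01  (half remains)
# after  70 s: 9.77e-04  (about a thousandth remains)
# after 210 s: 9.31e-10  (effectively gone)
```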

This second “type” of radiation is very important when we talk about the radioactivity being released into the environment later on.

What happened at Fukushima

I will try to summarize the main facts. The earthquake that hit Japan was 5 times more powerful than the worst earthquake the nuclear power plant was built for (the Richter scale works logarithmically; the difference between the 8.2 that the plants were built for and the 8.9 that happened is 5 times, not 0.7). So the first hooray for Japanese engineering, everything held up.
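
A quick sanity check of that arithmetic, assuming the plain Richter convention that magnitude is a base-10 logarithm of shaking amplitude:

```python
# Each whole magnitude step is a 10x change in measured amplitude, so a
# 0.7-magnitude difference is a factor of 10**0.7.
built_for, actual = 8.2, 8.9
amplitude_ratio = 10 ** (actual - built_for)
print(f"{amplitude_ratio:.1f}x the shaking the plant was designed for")  # ~5.0x
```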

When the earthquake hit with 8.9, the nuclear reactors all went into automatic shutdown. Within seconds after the earthquake started, the control rods had been inserted into the core and nuclear chain reaction of the uranium stopped. Now, the cooling system has to carry away the residual heat. The residual heat load is about 3% of the heat load under normal operating conditions.

The earthquake destroyed the external power supply of the nuclear reactor. That is one of the most serious accidents for a nuclear power plant, and accordingly, a “plant black out” receives a lot of attention when designing backup systems. The power is needed to keep the coolant pumps working. Since the power plant had been shut down, it cannot produce any electricity by itself any more.

Things were going well for an hour. One of the multiple sets of emergency Diesel power generators kicked in and provided the electricity that was needed. Then the tsunami came, much bigger than people had expected when building the power plant (see above, factor 5). The tsunami took out all the multiple sets of backup Diesel generators.

When designing a nuclear power plant, engineers follow a philosophy called “Defense in Depth”. That means that you first build everything to withstand the worst catastrophe you can imagine, and then design the plant in such a way that it can still handle one system failure (that you thought could never happen) after the other. A tsunami taking out all backup power in one swift strike is such a scenario. The last line of defense is putting everything into the third containment (see above), which will keep everything, whatever the mess, control rods in or out, core molten or not, inside the reactor.

When the diesel generators were gone, the reactor operators switched to emergency battery power. The batteries were designed as one of the backups to the backups, to provide power for cooling the core for 8 hours. And they did.

Within the 8 hours, another power source had to be found and connected to the power plant. The power grid was down due to the earthquake. The diesel generators were destroyed by the tsunami. So mobile diesel generators were trucked in.

This is where things started to go seriously wrong. The external power generators could not be connected to the power plant (the plugs did not fit). So after the batteries ran out, the residual heat could not be carried away any more.

At this point the plant operators began to follow emergency procedures that are in place for a “loss of cooling event”. It is again a step along the “Defense in Depth” lines. The power to the cooling systems should never have failed completely, but it did, so they “retreated” to the next line of defense. All of this, however shocking it seems to us, is part of the day-to-day training you go through as an operator, right through to managing a core meltdown.

It was at this stage that people started to talk about core meltdown. Because at the end of the day, if cooling cannot be restored, the core will eventually melt (after hours or days), and the last line of defense, the core catcher and third containment, would come into play.

But the goal at this stage was to manage the core while it was heating up, and ensure that the first containment (the Zircaloy tubes that contain the nuclear fuel), as well as the second containment (our pressure cooker), remain intact and operational for as long as possible, to give the engineers time to fix the cooling systems.

Because cooling the core is such a big deal, the reactor has a number of cooling systems, each in multiple versions (the reactor water cleanup system, the decay heat removal, the reactor core isolating cooling, the standby liquid cooling system, and the emergency core cooling system). Which one failed, and when, is not clear at this point in time.

So imagine our pressure cooker on the stove, heat on low, but on. The operators use whatever cooling system capacity they have to get rid of as much heat as possible, but the pressure starts building up. The priority now is to maintain integrity of the first containment (keep temperature of the fuel rods below 2200°C), as well as the second containment, the pressure cooker. In order to maintain integrity of the pressure cooker (the second containment), the pressure has to be released from time to time. Because the ability to do that in an emergency is so important, the reactor has 11 pressure release valves. The operators now started venting steam from time to time to control the pressure. The temperature at this stage was about 550°C.

This is when the reports about “radiation leakage” started coming in. I believe I explained above why venting the steam is theoretically the same as releasing radiation into the environment, and also why it was and is not dangerous. The radioactive nitrogen as well as the noble gases do not pose a threat to human health.

At some stage during this venting, the explosion occurred. The explosion took place outside of the third containment (our “last line of defense”) and the reactor building. Remember that the reactor building has no function in keeping the radioactivity contained. It is not entirely clear yet what has happened, but this is the likely scenario: The operators decided to vent the steam from the pressure vessel not directly into the environment, but into the space between the third containment and the reactor building (to give the radioactivity in the steam more time to subside). The problem is that at the high temperatures that the core had reached at this stage, water molecules can “disassociate” into oxygen and hydrogen – an explosive mixture. And it did explode, outside the third containment, damaging the reactor building around it. It was that sort of explosion, but inside the pressure vessel (because it was badly designed and not managed properly by the operators), that led to the explosion of Chernobyl. This was never a risk at Fukushima. The problem of hydrogen-oxygen formation is one of the biggies when you design a power plant (if you are not Soviet, that is), so the reactor is built and operated in such a way that it cannot happen inside the containment. It happened outside, which was not intended but was a possible scenario, and that is OK, because it did not pose a risk to the containment.

So the pressure was under control, as steam was vented. Now, if you keep boiling your pot, the problem is that the water level will keep falling and falling. The core is covered by several meters of water in order to allow for some time to pass (hours, days) before it gets exposed. Once the rods start to be exposed at the top, the exposed parts will reach the critical temperature of 2200 °C after about 45 minutes. This is when the first containment, the Zircaloy tube, would fail.

And this started to happen. The cooling could not be restored before there was some (very limited, but still) damage to the casing of some of the fuel. The nuclear material itself was still intact, but the surrounding Zircaloy shell had started melting. What happened then is that some of the byproducts of the uranium decay – radioactive Cesium and Iodine – started to mix with the steam. The big problem, uranium, was still under control, because the uranium oxide rods are good up to 3000 °C. It is confirmed that a very small amount of Cesium and Iodine was measured in the steam that was released into the atmosphere.

It seems this was the “go signal” for a major plan B. The small amounts of Cesium that were measured told the operators that the first containment on one of the rods somewhere was about to give. The Plan A had been to restore one of the regular cooling systems to the core. Why that failed is unclear. One plausible explanation is that the tsunami also took away / polluted all the clean water needed for the regular cooling systems.

The water used in the cooling system is very clean, demineralized (like distilled) water. The reason to use pure water is the above mentioned activation by the neutrons from the Uranium: Pure water does not get activated much, so stays practically radioactive-free. Dirt or salt in the water will absorb the neutrons quicker, becoming more radioactive. This has no effect whatsoever on the core – it does not care what it is cooled by. But it makes life more difficult for the operators and mechanics when they have to deal with activated (i.e. slightly radioactive) water.

But Plan A had failed – cooling systems down or additional clean water unavailable – so Plan B came into effect. This is what it looks like happened:

In order to prevent a core meltdown, the operators started to use sea water to cool the core. I am not quite sure if they flooded our pressure cooker with it (the second containment), or if they flooded the third containment, immersing the pressure cooker. But that is not relevant for us.

The point is that the nuclear fuel has now been cooled down. Because the chain reaction was stopped a long time ago, there is only very little residual heat being produced now. The large amount of cooling water that has been used is sufficient to take up that heat. Because it is a lot of water, the core no longer produces enough heat to build up any significant pressure. Also, boric acid has been added to the seawater. Boric acid is a “liquid control rod”. Whatever decay is still going on, the Boron will capture the neutrons and further speed up the cooling down of the core.

The plant came close to a core meltdown. Here is the worst-case scenario that was avoided: If the seawater could not have been used for treatment, the operators would have continued to vent the water steam to avoid pressure buildup. The third containment would then have been completely sealed to allow the core meltdown to happen without releasing radioactive material. After the meltdown, there would have been a waiting period for the intermediate radioactive materials to decay inside the reactor, and all radioactive particles to settle on a surface inside the containment. The cooling system would have been restored eventually, and the molten core cooled to a manageable temperature. The containment would have been cleaned up on the inside. Then a messy job of removing the molten core from the containment would have begun, packing the (now solid again) fuel bit by bit into transportation containers to be shipped to processing plants. Depending on the damage, the block of the plant would then either be repaired or dismantled.

Now, where does that leave us?

* The plant is safe now and will stay safe.
* Japan is looking at an INES Level 4 Accident: Nuclear accident with local consequences. That is bad for the company that owns the plant, but not for anyone else.
* Some radiation was released when the pressure vessel was vented. All radioactive isotopes from the activated steam have gone (decayed). A very small amount of Cesium was released, as well as Iodine. If you were sitting on top of the plants’ chimney when they were venting, you should probably give up smoking to return to your former life expectancy. The Cesium and Iodine isotopes were carried out to the sea and will never be seen again.
* There was some limited damage to the first containment. That means that some amounts of radioactive Cesium and Iodine will also be released into the cooling water, but no Uranium or other nasty stuff (the Uranium oxide does not “dissolve” in the water). There are facilities for treating the cooling water inside the third containment. The radioactive Cesium and Iodine will be removed there and eventually stored as radioactive waste in terminal storage.
* The seawater used as cooling water will be activated to some degree. Because the control rods are fully inserted, the Uranium chain reaction is not happening. That means the “main” nuclear reaction is not happening, thus not contributing to the activation. The intermediate radioactive materials (Cesium and Iodine) are also almost gone at this stage, because the Uranium decay was stopped a long time ago. This further reduces the activation. The bottom line is that there will be some low level of activation of the seawater, which will also be removed by the treatment facilities.
* The seawater will then be replaced over time with the “normal” cooling water.
* The reactor core will then be dismantled and transported to a processing facility, just like during a regular fuel change.
* Fuel rods and the entire plant will be checked for potential damage. This will take about 4-5 years.
* The safety systems on all Japanese plants will be upgraded to withstand a 9.0 earthquake and tsunami (or worse).
* I believe the most significant problem will be a prolonged power shortage. About half of Japan’s nuclear reactors will probably have to be inspected, reducing the nation’s power generating capacity by 15%. This will probably be covered by running gas power plants that are usually only used for peak loads to cover some of the base load as well. That will increase your electricity bill, as well as lead to potential power shortages during peak demand, in Japan.

Are Compact Fluorescent Lightbulbs Really Cheaper Over Time?

I hate the lighting produced by CFL bulbs. I am going to switch straight from incandescent bulbs to LED lights when the price of LEDs comes down. CFL is an in-between stopgap technology that should eventually be phased out.

By Joseph Calamia, March 2011, IEEE Spectrum
CFLs must last long enough for their energy efficiency to make up for their higher cost

You buy a compact fluorescent lamp. The packaging says it will last for 6000 hours—about five years, if used for three hours a day. A year later, it burns out.

Last year, IEEE Spectrum reported that some Europeans opposed legislation to phase out incandescent lighting. Rather than replace their lights with compact fluorescents, consumers started hoarding traditional bulbs.

From the comments on that article, it seems that some IEEE Spectrum readers aren’t completely sold on CFLs either. We received questions about why the lights don’t always meet their long-lifetime claims, what can cause them to fail, and ultimately, how dead bulbs affect the advertised savings of switching from incandescent.

Tests of compact fluorescent lamps’ lifetime vary among countries. The majority of CFLs sold in the United States adhere to the U.S. Department of Energy and Environmental Protection Agency’s Energy Star approval program, according to the U.S. National Electrical Manufacturers Association. For these bulbs, IEEE Spectrum found some answers.

How is a compact fluorescent lamp’s lifetime calculated in the first place?

“With any given lamp that rolls off a production line, whatever the technology, they’re not all going to have the same exact lifetime,” says Alex Baker, lighting program manager for the Energy Star program. In an initial test to determine an average lifetime, he says, manufacturers leave a large sample of lamps lit. The defined average “rated life” is the time it takes for half of the lamps to go out. Baker says that this average life definition is an old lighting industry standard that applies to incandescent and compact fluorescent lamps alike.
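
In other words, the rated life is the median lifetime of the test sample. A toy simulation with made-up lamp lifetimes makes the definition concrete:

```python
# Toy illustration of the "rated life" definition above: light a sample
# of lamps and record when each fails; the rated life is the elapsed
# time at which half of them have gone out, i.e. the median lifetime.
# The lifetimes here are randomly generated for illustration only.
import random
import statistics

random.seed(42)
sample_lifetimes = [random.gauss(6000, 1200) for _ in range(100)]  # hours

rated_life = statistics.median(sample_lifetimes)
print(f"rated life of this sample: {rated_life:.0f} hours")
```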

In reality, the odds may actually be somewhat greater than 50 percent that your 6000-hour-rated bulb will still be burning bright at 6000 hours. “Currently, qualified CFLs in the market may have longer lifetimes than manufacturers are claiming,” says Jen Stutsman, of the Department of Energy’s public affairs office. “More often than not, more than 50 percent of the lamps of a sample set are burning during the final hour of the manufacturer’s chosen rated lifetime,” she says, noting that manufacturers often opt to end lifetime evaluations prematurely, to save on testing costs.

Although manufacturers usually conduct this initial rated life test in-house, the Energy Star program requires other lifetime evaluations conducted by accredited third-party laboratories. Jeremy Snyder directed one of those testing facilities, the Program for the Evaluation and Analysis of Residential Lighting (PEARL) in Troy, N.Y., which evaluated Energy Star–qualified bulbs until late 2010, when the Energy Star program started conducting these tests itself. Snyder works at the Rensselaer Polytechnic Institute’s Lighting Research Center, which conducts a variety of tests on lighting products, including CFLs and LEDs. Some Energy Star lifetime tests, he says, require 10 sample lamps for each product—five pointing toward the ceiling and five toward the floor. One “interim life test” entails leaving the lamps lit for 40 percent of their rated life. Three strikes, or burnt-out lamps, and the product risks losing its qualification.

Besides waiting for bulbs to burn out, testers also measure the light output of lamps over time, to ensure that the CFLs do not appreciably dim with use. Using a hollow “integrating sphere,” which has a white interior to reflect light in all directions, Lighting Research Center staff can take precise measurements of a lamp’s total light output in lumens. The Energy Star program requires that 10 tested lights maintain an average of 90 percent of their initial lumen output for 1000 hours of life, and 80 percent of their initial lumen output at 40 percent of their rated life.
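
A small sketch of that acceptance rule, assuming per-lamp lumen readings; the function and the data layout are my own invention, not Energy Star's actual tooling:

```python
# Lumen-maintenance check as described above: the 10-lamp sample must
# keep an average of >= 90% of initial lumen output at 1000 hours, and
# >= 80% at 40% of rated life.

def passes_lumen_maintenance(initial, at_1000h, at_40pct_life):
    """Each argument is a list of per-lamp lumen readings."""
    def avg_ratio(readings):
        return sum(r / i for r, i in zip(readings, initial)) / len(readings)
    return avg_ratio(at_1000h) >= 0.90 and avg_ratio(at_40pct_life) >= 0.80

# Hypothetical readings for a 10-lamp sample rated at 900 lumens:
initial = [900] * 10
print(passes_lumen_maintenance(initial, [830] * 10, [740] * 10))
# 830/900 ≈ 0.92 and 740/900 ≈ 0.82, so this sample passes -> True
```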

Is there any way to accelerate these lifetime tests?

“There are techniques for accelerated testing of incandescent lamps, but there’s no accepted accelerated testing for other types,” says Michael L. Grather, the primary lighting performance engineer at Luminaire Testing Laboratory and Underwriters’ Laboratories in Allentown, Penn. For incandescent bulbs, one common method is to run more electric current through the filament than the lamp might experience in normal use. But Grather says a similar test for CFLs wouldn’t give consumers an accurate prediction of the bulb’s life: “You’re not fairly indicating what’s going to happen as a function of time. You’re just stressing different components—the electronics but not the entire lamp.”

Perhaps the closest such evaluation for CFLs is the Energy Star “rapid cycle test.” For this evaluation, testers divide the total rated life of the lamp, measured in hours, by two and switch the compact fluorescent on for five minutes and off for five minutes that number of times. For example, a CFL with a 6000-hour rated life must undergo 3000 such rapid cycles. At least five out of a sample of six lamps must survive for the product to keep its Energy Star approval.

In real scenarios, what causes CFLs to fall short of their rated life?

As anyone who frequently replaces CFLs in closets or hallways has likely discovered, rapid cycling can prematurely kill a CFL. Repeatedly starting the lamp shortens its life, Snyder explains, because high voltage at start-up sends the lamp’s mercury ions hurtling toward the starting electrode, which can destroy the electrode’s coating over time. Snyder suggests consumers keep this in mind when deciding where to use a compact fluorescent. The Lighting Research Center has published a worksheet [PDF] for consumers to better understand how frequent switching reduces a lamp’s lifetime. The sheet provides a series of multipliers so that consumers can better predict a bulb’s longevity. The multipliers range from 1.5 (for bulbs left on for at least 12 hours) to 0.4 (for bulbs turned off after 15 minutes). Despite any lifetime reduction, Snyder says consumers should still turn off lights not needed for more than a few minutes.
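
Here is a minimal sketch of how those worksheet multipliers might be applied. Only the two endpoint multipliers (1.5 and 0.4) come from the article; treating them as a nearest-bracket lookup is my own simplification of the worksheet:

```python
# Adjust a bulb's rated life for switching frequency using the two
# endpoint multipliers quoted above (the real worksheet has more rows).
SWITCHING_MULTIPLIERS = {
    12 * 60: 1.5,  # left on for at least 12 hours per start
    15: 0.4,       # switched off after about 15 minutes
}

def adjusted_lifetime(rated_hours, on_minutes_per_start):
    # Pick the closest documented on-time bracket (a simplification).
    bracket = min(SWITCHING_MULTIPLIERS,
                  key=lambda m: abs(m - on_minutes_per_start))
    return rated_hours * SWITCHING_MULTIPLIERS[bracket]

print(adjusted_lifetime(6000, 15))       # 2400.0 hours in a hallway light
print(adjusted_lifetime(6000, 12 * 60))  # 9000.0 hours if left on all day
```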

Another CFL slayer is temperature. “Incandescents thrive on heat,” Baker says. “The hotter they get, the more light you get out of them. But a CFL is very temperature sensitive.” He notes that “recessed cans”—insulated lighting fixtures—prove a particularly nasty compact fluorescent death trap, especially when attached to dimmers, which can also shorten the electronic ballast’s life. He says consumers often install CFLs meant for table or floor lamps inside these fixtures, instead of lamps specially designed for higher temperatures, as indicated on their packages. Among other things, these high temperatures can destroy the lamps’ electrolytic capacitors—the main reason, he says, that CFLs fail when overheated.

How do shorter-than-expected lifetimes affect the payback equation?

Actually predicting the savings of switching from an incandescent must account for both the cost of the lamp and its energy savings over time. Although the initial price of a compact fluorescent (which can range [PDF] from US $0.50 in a multipack to over $9) is usually more than that of an incandescent (usually less than a U.S. dollar), a CFL can use a fraction of the energy an incandescent requires. Over its lifetime, the compact fluorescent should make up for its higher initial cost in savings—if it lives long enough. It should also offset the estimated 4 milligrams of mercury it contains. You might think of mercury vapor as the CFL’s equivalent of an incandescent’s filament. The electrodes in the CFL excite this vapor, which in turn radiates and excites the lamp’s phosphor coating, giving off light. Given that coal-burning power plants also release mercury into the air, an amount that the Energy Star program estimates at around 0.012 milligrams per kilowatt-hour, if the CFL can save enough energy it should offset this environmental cost, too.

Exactly how long a CFL must live to make up for its higher costs depends on the price of the lamp, the price of electric power, and how much energy the compact fluorescent requires to produce the same amount of light as its incandescent counterpart. Many manufacturers claim that consumers can take an incandescent wattage and divide it by four, and sometimes five, to find an equivalent CFL in terms of light output, says Russ Leslie, associate director at the Lighting Research Center. But he believes that’s “a little bit too greedy.” Instead, he recommends dividing by three. “You’ll still save a lot of energy, but you’re more likely to be happy with the light output,” he says.

To estimate your particular savings, the Energy Star program has published a spreadsheet where you can enter the price you’re paying for electricity, the average number of hours your household uses the lamp each day, the price you paid for the bulb, and its wattage. The sheet also includes the assumptions used to calculate the comparison between compact fluorescent and incandescent bulbs. Playing with the default assumptions given in the sheet, we reduced the CFL’s lifetime by 60 percent to account for frequent switching, doubled the initial price to make up for dead bulbs, deleted the assumed labor costs for changing bulbs, and increased the CFL’s wattage to give us a bit more light. The compact fluorescent won. We invite you to try the same, with your own lighting and energy costs, and let us know your results.
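
For readers who prefer code to spreadsheets, here is a minimal payback sketch in the same spirit; all of the prices and wattages below are illustrative assumptions, not the sheet's defaults:

```python
# Hours of use needed for a CFL's energy savings to repay its extra
# purchase price, given assumed prices and wattages.

def breakeven_hours(cfl_price, incandescent_price, cfl_watts,
                    incandescent_watts, dollars_per_kwh):
    extra_cost = cfl_price - incandescent_price
    savings_per_hour = (incandescent_watts - cfl_watts) / 1000 * dollars_per_kwh
    return extra_cost / savings_per_hour

# Assumed: $3.00 CFL vs $0.50 incandescent, 15 W vs 60 W, $0.12/kWh.
hours = breakeven_hours(3.00, 0.50, 15, 60, 0.12)
print(f"breakeven after {hours:.0f} hours "
      f"(~{hours / 3 / 365:.2f} years at 3 hours/day)")
# ~463 hours, i.e. well under six months -- so even a CFL that dies far
# short of its rated life can still come out ahead.
```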

B.C. priest lands snowboarding PhD

I have been saying for many years that skiing nurtures my spirituality. Finally there is some theological backing, from an Anglican priest whose PhD thesis is on spirituality in snowboarding.

CBC News, Mar 4, 2011
Thesis examines connection between spirituality and snowboarding

An Anglican priest from Trail, B.C., has become the first person in the world to get a PhD in snowboarding.

Neil Elliot, the minister at St. Andrew’s Anglican Church, recently received his doctorate from Kingston University in London, England.

“The genesis was discovering this term ‘soul-riding’ in a discussion on the internet, and that discussion going into how people have had transcending experiences while riding and discovering I’ve had that experience as well I just hadn’t recognized it,” he said.

Elliot interviewed dozens of snowboarders from the United Kingdom and Canada, delving into the spirituality of snowboarding.

“Soul-riding starts with riding powder, it starts with finding some kind of almost transcendent experience in riding powder and in the whole of your life, so soul-riding is about being completely focused, being completely in the moment, you might say.”

Elliot said it’s clear spirituality and snowboarding do intersect.

“[It’s] about snowboarders who discovered that … snowboarding was their spirituality. I had a lot of people who said, ‘Snowboarding is my religion.'”

‘New model for spirituality’

While Elliot’s thesis doesn’t draw any definite conclusions, he says it offers a new point of view.
Neil Elliot is the first person in the world to get a PhD in snowboarding. (St. Andrews Anglican Church)

“What my thesis does is give a new model for spirituality, saying that spirituality is a way of looking at the world and a way of looking at the world that includes there being something more than just the material,” he said.

“My thesis goes on to say that there’s three dimensions to that. There’s the experiences that we have, there’s the context that we’re in and then there’s what’s going on really inside us, who we are.”

Elliot, who already has a master’s degree in theology and Islamic studies, is the first to admit his love of snowboarding drove him to get the PhD and a job in the B.C. mountains. But he insists his thesis is serious.

“My PhD is about spirituality and snowboarding. It’s rooted in the sociology of religion and in … this debate that’s going on about whether somebody is religious or spiritual. A lot of people say, ‘I’m not religious — I’m spiritual’ and I’m trying to find out what that actually means,” he said.

“The spirituality of snowboarding is looking at what does it mean to be spiritual in today’s world.”

Elliot said his colleagues and congregation support his unorthodox PhD, and love of both the board and cloth.

“They understand that this is a light on what we’re all struggling with: how do we encourage people to come into the church? How do we encourage people to see religion and spirituality as working together, rather than being different things?”

Mountek MK5000 CD Slot Mount

After I bought my Android smartphone, it was natural that I would play MP3s and navigate with the built-in GPS while driving, so I needed a mount to hold the phone inside the car. There are only two types of phone mounts on the market: suction cups that stick to the windshield, and flimsy clips that clip onto the air vents. I am not happy with either solution; the former blocks my view and the latter blocks the airflow.

I did some searching on eBay and Google and came across this one-of-a-kind phone mount, the Mountek MK5000, which mounts in the CD slot. Since I no longer use the CD player, the space in front of it is pretty useless, which makes it the perfect place to mount my smartphone; it blocks nothing except the unused CD player. The MK5000 is very sturdy: it has an adjustable blade that I can slide into the CD slot and lock tight. The mount supports vertical and horizontal rotation for easy screen rotation, and its spring-loaded adjustable arms fit devices of different sizes.

I have been using the mount for a couple of months and it works very well. Every day when I hop into my car, I place my smartphone onto the mount. The only disadvantage of the mount is its price. A cheap made-in-China phone mount costs less than $10; sometimes you can even get one for as low as $5. The Mountek MK5000 currently sells for $20 on eBay. Although it is more expensive, the design and build quality are worth the premium price.