Free the Radio Spectrum

Freeing up the underused radio spectrum would make our smartphone data plans much, much cheaper. Imagine how much more data a cell phone company could carry in just one of those ancient, inefficient TV radio channels.

By James Losey and Sascha Meinrath, IEEE Spectrum, June 2010
Antiquated regulations have made radio spectrum artificially scarce. Rethinking the way we manage the airwaves could open up vast amounts of bandwidth

The Federal Communications Commission (FCC), the government entity that manages the commercial and public radio spectrum in the United States, has proposed making 500 megahertz of spectrum available for broadband within the next 10 years, of which 300 MHz between 225 MHz and 3.7 GHz will likely be made available for mobile use within five years. The extra bandwidth, recaptured from broadcasters after the digital television transition, is certainly needed, given that AT&T reports that its mobile broadband traffic has increased 5000 percent over the last three years and that other carriers have also seen significant growth. However, under the current approach to allocating spectrum, this 500 MHz will do little to ease the looming spectrum crunch.

It’s time to rethink the way we allocate spectrum. Under current regulations, spectrum real estate is valuable but exclusive. In the past, that exclusivity was the only way to prevent multiple users from interfering with each other. But advances in radio technology mean that today such exclusivity is no longer necessary; instead, it creates false scarcity. So we must change our decades-old approach to managing the public airwaves.

When the FCC began allocating spectrum in the 1930s, radios required wide swaths of spectrum to communicate. Without single players occupying designated bands, a cacophony of interference would have destroyed audio fidelity and later, with television, picture quality.

Today radios that can share spectrum make such protections from interference unnecessary. Just as car drivers can change lanes to avoid congestion, these “smart radios,” also called “cognitive radios,” are transceivers that listen to available frequencies and communicate over any channels that are currently unused. These radios not only shift frequencies but can also be programmed with the necessary protocols for use in different bands, such as for speaking the “language” of various blocks of spectrum used for Wi-Fi, television, or cellphones. This means that vast swaths of spectrum no longer need be locked into single use, or left unused, as a hedge against interference.

Unfortunately, the FCC’s policies still assume the use of antiquated technology, and therefore that license holders must maintain absolute control of spectrum space at all times. These regulations must be updated to reflect the technological realities of smart radios.

Wi-Fi serves as a striking example of what is possible. A relatively narrow piece of the airwaves that’s open for unlicensed access, Wi-Fi has enabled home networking, roaming connectivity in hotels, cafes, and airplanes, and community broadband networks around the world. The explosion of communications in Wi-Fi’s 2.4 and 5 gigahertz frequencies has led to a host of new services and applications. However, these frequencies have trouble with walls, hills, and long distances. To support next-generation networking, a logical next step would be to allow technology developers access to a bigger and better swath of unlicensed wireless spectrum through the use of smart radios.

Policy allowing these new radios, tagged Opportunistic Spectrum Access (OSA), would give birth to a new generation of connectivity. With smart radios, unlicensed devices could share the same bandwidth as licensed users, finding unused frequencies in real time and filling in during the milliseconds when licensed users are not using their bands. In essence, they would work the same way as today’s iTrip or many home wireless phones, which scan a number of different channels and choose the one with the least interference.

Developers working on smart radio devices are excited about the possibilities of OSA. The technology allows for more affordable broadband for rural populations where low population density has deterred private infrastructure investment. It would mean more affordable and robust networking over longer ranges than today’s Wi-Fi, helping municipalities working to update aging communications systems and public safety officials working in both urban and remote areas. Similarly, OSA could increase opportunities for wireless Internet service providers and networking in businesses, universities, and cities. Successful community wireless networks including Urbana-Champaign, Illinois; Athens, Greece; and Dharamsala, India, could be expanded over greater distances.

The great benefit of OSA is the ability to open more access to spectrum while avoiding the challenges of moving current users to other bands. For example, smart radio device developers could access unused frequencies in the so-called white spaces of broadcast television. These white spaces, created when the FCC allocated spectrum to television broadcasters, are empty channels that were left unoccupied to prevent interference. In many rural areas, as much as 80 percent of this television spectrum is currently unused.

Today companies like Spectrum Bridge and Shared Spectrum Co. are already building next-generation networks using OSA. Spectrum Bridge has built a prototype network using TV frequencies in Claudville, Va. And Shared Spectrum has developed OSA technologies for use in battlefield communications, using these devices’ frequency-hopping capabilities to help avoid jamming efforts by hostile forces.

The FCC recognizes that this spectrum could be made available. In 2008 it issued an order authorizing the use of “White Space Devices (WSDs) that can detect TV signals at levels that are 1/1000th the signal power a television needs to display a picture, scan for interference, and move their bandwidth accordingly, avoiding interference with television broadcasts.” To date, however, rollout of such products has stalled because the FCC has not followed through with necessary supplemental rulings, such as creating the required geolocational database of spectrum assignments to help identify which frequencies are in use in each area. Meanwhile, as a part of the national broadband plan, the FCC has committed to repurposing TV bands for exclusive use.

Potential spectrum also exists outside the television bands. Most spectrum allocations, such as the 270,000 held by government agencies alone, are woefully underutilized. Based on the best available data, collected in 2004 as part of a National Science Foundation research project, less than 10 percent of our current spectrum is used at any given point in time (including in major cities).

If the FCC continues its current policies of restricting spectrum use to exclusive entities and the highest bidders, it will continue choking what FCC Chairman Julius Genachowski has called “the oxygen of mobile broadband service.” By adopting OSA policies, the FCC will allow expansive access to spectrum without disrupting existing users. Current license holders could preserve priority use in their assigned bands, but secondary users could communally use the 90 percent of spectrum that is typically not in active use.

At this point, implementing OSA is a policy consideration, not a technological challenge. In the National Broadband Plan released by the FCC in March, the commission recommends expeditiously completing the regulations related to TV white spaces. In our view, these rulings must be expanded to include a greater spread of underused spectrum. Spectrum will always be a finite resource, but policy needs to evolve alongside the technology to increase the efficiency and number of devices that can take advantage of this public resource.
About the Authors

James Losey is a program associate with the New America Foundation’s Open Technology Initiative. Most recently he has published articles in Slate as well as resources on federal broadband stimulus opportunities and analyses of the National Broadband Plan.

Sascha Meinrath is the director of the New America Foundation’s Open Technology Initiative and has been described as a community Internet pioneer and an entrepreneurial visionary. He is a well-known expert on community wireless networks, municipal broadband, and telecommunications policy and was the 2009 recipient of the Public Knowledge IP3 Award for excellence in public interest advocacy.

Disneyland

A few years back Kelly Chen had a song called "He Asked Me to Disneyland". This time, since I was in Anaheim for a conference anyway, I took the chance to bring my wife along to Disneyland. The California park was the very first Disneyland. Seven or eight years ago, finding the aging park short on appeal, Disney converted its parking lot into Disney California Adventure. With two theme parks on one site, California Disneyland leapt to being the second-largest Disney resort after Disney World in Florida. Setting aside the two-day pass costing a hundred and fifty US dollars, a ticket price that is not exactly happiness-inducing, Disneyland does live up to its slogan: it is the happiest place on earth.

The conference venue was only a street away from Disneyland. On the first day, driving from the airport to the conference hotel, I could already see Disney's fireworks in the distance from the freeway. Although our hotel room did not face Disneyland, every night at nine we could hear the fireworks booming away. On one evening of the conference our software vendor even booked out a restaurant in Downtown Disney and invited all its customers to a party. From Monday to Friday I had to sit in sessions and take care of business, so I could only gaze longingly at Disneyland from outside. Still, after the daytime sessions ended, we went to Downtown Disney for dinner on three evenings. The street is very lively at night: besides guests of the Disney hotels, there are people coming over from the parks for a meal and people from the nearby hotel district coming to shop. Restaurant prices in Downtown Disney are comparable to the other restaurants in the hotel district; the only inconvenience is that they do not take reservations, so diners can only queue and wait. The flagship shop in Downtown Disney is of course the Disney store, where many special Disney items are sold exclusively, though they are not cheap. Budget-minded friends who do not mind last season's goods can shop at the Disney Outlet about fifteen minutes' drive away; for souvenirs and gifts, anything with a Mickey Mouse logo will do the job. Another shop worth lingering in is the Lego store, which also carries unique items you cannot buy anywhere else. The most special piece is a Taj Mahal built from Lego bricks, sadly too expensive to buy. The Lego minifigure keychains, though, are affordable and cute, and make the best little souvenir gifts.

Snow White Precious Moments

Getting to Disneyland is very convenient. There are shuttle buses from Los Angeles airport straight to the Anaheim hotel district, and within the district the Anaheim transit authority runs loop lines between Disneyland and the hotels. If you drive, you park in the newly built multi-storey garage and a shuttle takes you to the Disneyland entrance. All of these vehicles have one thing in common: their bodies are covered in Disney cartoon characters. The drivers are also quite unlike ordinary bus drivers; they crack jokes to entertain the passengers as they drive, carrying the cheerful Disney atmosphere through the whole hotel district. Inside the resort there are two rail lines, a monorail linking Downtown Disney with the park and a steam train circling the park. Both are must-ride attractions, and with a park this big, taking the train also saves some walking.

Toy Story bus

Steam train

I have a few complaints about the food inside Disneyland. Prices are fairly reasonable, but there is little choice. Most of it is American fast food, burgers, hot dogs, soda, and fries, which is greasy and unhealthy. I would rather have the smoked turkey legs and roasted corn sold at the snack stands. A turkey leg costs only eight dollars, is three times the size of an ordinary chicken leg, and can pass for lunch on its own, tasty and economical. In the hot California weather ice cream sells especially well; besides the Dreyer's we cannot get in Canada, which I had to have, I also ate a Mickey Mouse ice cream sandwich. I bit off its two ears first, and an earless Mickey looks rather odd. Sunday was the last night of the holiday and we did not want to settle for fast food for dinner, so we went to an open-air restaurant in the park and sat down to enjoy a proper meal. Perhaps because the other visitors were squeezing every second out of the rides and did not want to waste time sitting in a restaurant, we did not even have to queue for a table. Fast food inside Disneyland costs about twelve dollars; a restaurant meal with table service is under twenty, and it makes a nice half-time break.

Thrill rides have never been Disney's strong suit. Adults find the rides on the Disneyland side tame and not exciting enough, though they win by being themed around Disney characters, which appeals to kids. The park's facilities are in fact showing their age; even though the rides have been refreshed with new elements, they are still designs from more than a decade ago. Only the newer Indiana Jones and Splash Mountain are much fun; famous classics like Space Mountain and Buzz Lightyear are merely so-so, and the rest are just sit-and-watch affairs. Winnie the Pooh and Finding Nemo are fairly interesting, while Roger Rabbit and Pirates of the Caribbean feel like a bit of a con. Pirates of the Caribbean is old wine in a new bottle: they dropped a few Johnny Depp animatronics into the old pirate ride, projected an image of Davy Jones, and called it a Pirates of the Caribbean themed ride; the new and old pieces clash completely and feel jarring. I rode almost everything except Autopia and It's a Small World. For someone who drives to work every day, Autopia is rather pointless. As for It's a Small World, my Disney trip ten years ago left a scar: that song is pure brainwashing and rang in my head for a full three days, a horror version of a tune that lingers. As a Star Wars fan I was lucky to get on Star Tours, because the ride is being torn down next month. It is nearly twenty years old, with no computer-generated effects, just footage shot with miniature models, very much the flavour of genuine eighties Star Wars. Whatever you do, don't waste time on Innoventions; it is not a ride at all. It bills itself as a taste of living with new technology, but it is really just an advertising showcase for Microsoft and Samsung.

Splash Mountain

Finding Nemo submarine

Pirate ship

The rides at Disneyland suit small children; the ones at California Adventure are far more thrilling. Even an otherwise unremarkable Ferris wheel has been given a twist: on an old-style wheel the gondolas are fixed, but on this new one they hang on rails and slide and swing back and forth as the wheel turns. Watching my gondola climb while lurching outwards, I almost thought I would be thrown off. Roller coasters have new technology too. An old coaster is first hauled up and only starts its plunge under gravity after the highest point; the new coaster uses an electromagnetic launch to accelerate the train in an instant, firing it out like a bullet, so by the time it dives past the top it is already at full speed. The Hollywood Tower of Terror is a drop tower wrapped in a Twilight Zone theme; the lift doors suddenly open just before the drop to show riders how high up they are, which makes it far more frightening than an old-style drop ride. It was so much fun that one daytime go was not enough and we went back at night for an even better second round. The other newer rides are good too: every theme park has a river-rapids raft ride, and Soarin' Over California and A Bug's Life feel very lifelike. Muppets 3D and Monsters, Inc. are sit-and-watch, but in both motion and special effects the newer attractions beat the old rides on the Disneyland side. My favourite was Toy Story Mania, an evolved version of Buzz Lightyear: the old ride uses infrared guns and you can barely see where you are shooting, while the new one uses 3D glasses and the balls and darts really look as if they are flying out of your hands.

Hollywood Tower of Terror

Electromagnetic launch roller coaster

Besides the rides, the other big events at Disneyland are the parades and the fireworks. The Disney parades are lively affairs full of song and dance; there is a grand parade every afternoon and other shows at set times. I had seen the Disneyland parade before, and it is the same as it was ten years ago, a bit dated; the Pixar character parade at California Adventure is much more fun. This time we made good use of our evenings and caught all three big night-time shows. Fireworks are a Disney trademark: watching them inside the park with the castle as a backdrop, cartoon characters flying in and out among them, set to catchy Disney music, the fireworks weave a story. By comparison, the usual New Year or national day fireworks are just money going up in smoke at random. Fantasmic and World of Color are somewhat alike, both using water screens and jets of fire; the difference is that the former adds live performers while the latter adds fancy fountains and coloured lights. Taking photos with the characters is another essential activity, and we went up for a photo with nearly every character we saw. When I posed with Tigger, I deliberately bounced around like Tigger and made him bounce with me. The performer inside was probably cursing me under his breath: bouncing non-stop under the California sun in a costume weighing over ten pounds would exhaust anyone. But he had to act like the real Tigger in front of the guests, and in the cartoons Tigger bounces all day long, so he grudgingly bounced along with me. Serves them right for sneaking backstage for a break while I was queuing, making me wait ten minutes for nothing. On the first day I saw Snow White but figured photos with Disney princesses were for girls, so I skipped it. On the second day I came up with a good idea and brought along an apple, planning a photo of me tempting Snow White to "take a bite", but sadly we never ran into her again.

Toy Story green plastic army men

Cars

To be honest, I have never been much of a fan of Disney's cartoon characters. I was raised on Japanese animation, after all, and always found American cartoons too dumbed-down and childish, the prince-and-princess stories corny and dated. The most unbearable part is how the characters suddenly burst into song; you would think you were watching a Bollywood movie. It was not until Pixar's computer animation came along, where children enjoy the fun on the surface and adults see the deeper meaning underneath, that my view of Disney animation changed. Many people flock to Disneyland and are besotted with Mickey Mouse, but standing in the middle of the park I always feel a sense of detachment from reality and remain sceptical of manufactured happiness. What really makes me curious is the market machinery behind the Disney magic, the thing that enthralls millions of adults and children alike.

DAC Technical Review (Day 4,5)

The exhibition floor was closed on days 4 and 5. On day 4, I attended the user track presentations on verification and a technical session on what input language to use for high-level synthesis (HLS). On day 5, I attended a workshop on software engineering using Agile software development techniques.

User track presentation on verification

The presentation Migrating HW/SW co-verification platforms from simulators to emulators outlines a few challenges in the flow: 1. compiling ASIC/IP vendor simulation models for the emulator; 2. generating the primary clock in emulation; 3. choosing between transaction-based VIP and emulator-specific transactor IP.

The presentation A methodology for automatic generation of register bank RTL, related verification environment and firmware headers outlines a flow similar to our RDA flow. The difference is that their flow supports IP-XACT as the register definition format and uses Tcl/Java to generate the files. The register XML files are translated into Java classes, and the registers are then read from the Java classes to generate the register RTL, the vr_ad register files, and the firmware header files. Their flow does not support auto-generation of backdoor paths in vr_ad; neither does our RDA flow.
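
To make the shape of such a single-source register flow concrete, here is a rough sketch in Python (this is not the presenters' Tcl/Java flow and not our RDA scripts; the XML schema, register style, and output templates are invented purely for illustration):

```python
# Minimal sketch of a single-source register generator: one XML description
# drives both RTL and firmware headers. The schema and templates here are
# invented for illustration; a real flow (IP-XACT, RDA, etc.) is far richer.
import xml.etree.ElementTree as ET

REG_XML = """
<block name="demo_blk" base="0x1000">
  <register name="CTRL"   offset="0x0" width="32"/>
  <register name="STATUS" offset="0x4" width="32"/>
</block>
"""

def parse_block(xml_text):
    root = ET.fromstring(xml_text)
    regs = [(r.get("name"), int(r.get("offset"), 16), int(r.get("width")))
            for r in root.findall("register")]
    return root.get("name"), int(root.get("base"), 16), regs

def emit_rtl(block, regs):
    # Emit one flopped register per entry, selected by a simple address decode.
    lines = [f"// auto-generated register bank for {block}"]
    for name, offset, width in regs:
        lines.append(f"reg [{width-1}:0] {name.lower()}_q;")
        lines.append("always @(posedge clk)")
        lines.append(f"  if (wr_en && (addr == 'h{offset:x})) "
                     f"{name.lower()}_q <= wdata[{width-1}:0];")
    return "\n".join(lines)

def emit_header(block, base, regs):
    lines = [f"/* auto-generated firmware header for {block} */"]
    for name, offset, _ in regs:
        lines.append(f"#define {block.upper()}_{name}_ADDR 0x{base + offset:08X}")
    return "\n".join(lines)

if __name__ == "__main__":
    block, base, regs = parse_block(REG_XML)
    print(emit_rtl(block, regs))
    print(emit_header(block, base, regs))
```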

In the presentation Utilizing assertion synthesis to achieve an automated assertion-based verification methodology for complex graphics chip designs, nVidia demonstrated the use of the Nextop tool to generate properties representing design functionality. The flow is pretty much the same as what is shown at Nextop's booth. The presentation introduced a few new concepts. First is the notion of assertion density, which measures the number of assertion properties required to define the functionality of the design. Then there is the difference between synthesized and synthesizable properties: the former are the properties auto-generated by Nextop's flow, while the latter are assertions that can run inside an emulation box. However, the specification mining is only as good as the simulation traces fed into the tool.

In the presentation A smart debugging strategy for billion-gate SoCs, Samsung presented a solution to a common problem we have in verification. When a simulation fails, we need the waveform to debug it. On one hand, it takes time to rerun the simulation and dump all the waveforms; on the other hand, dumping waveforms in every simulation run eats disk space and slows the simulation down. One approach is to save checkpoints during the simulation, then rerun from the checkpoint closest to the failure and dump waveforms only from there. We attempted to implement a home-grown solution using ncsim's native save/restart function, but it has been very buggy and very inefficient in terms of snapshot size. The presentation introduced a tool from System Centroid called SSi-1 to automate the flow. It is worth evaluating SSi-1 to see how well it solves the problem of dumping waveforms in re-simulation. The only concern is that System Centroid is a Korean company and most of the information on its website is written in Korean.
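
A minimal sketch of the checkpoint-and-replay idea, assuming snapshots are taken at regular intervals of simulated time; the rerun command is a hypothetical wrapper, not ncsim or SSi-1 syntax:

```python
# Sketch of checkpoint-and-replay: keep periodic snapshots during the long
# run, and on failure rerun only the tail with waveform dumping enabled.
# "resim_from_checkpoint" is a hypothetical wrapper; real flags depend on
# the simulator and snapshot naming is assumed.
import bisect
import subprocess

def pick_checkpoint(checkpoint_times, fail_time):
    """Return the latest checkpoint taken at or before the failure time."""
    idx = bisect.bisect_right(checkpoint_times, fail_time) - 1
    if idx < 0:
        raise ValueError("failure occurred before the first checkpoint")
    return checkpoint_times[idx]

def rerun_with_waves(testcase, checkpoint_times, fail_time):
    start = pick_checkpoint(checkpoint_times, fail_time)
    cmd = ["resim_from_checkpoint",           # hypothetical wrapper script
           "--test", testcase,
           "--restore", f"ckpt_{start}.snap",
           "--dump-waves",
           "--stop-at", str(fail_time)]
    print("would run:", " ".join(cmd))
    # subprocess.run(cmd, check=True)         # enable once the wrapper exists

if __name__ == "__main__":
    # e.g. snapshots every 100 us of simulated time, failure at t = 537 us
    rerun_with_waves("pcie_smoke", [0, 100, 200, 300, 400, 500], 537)
```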

In the presentation Bug hunting methodology using semi-formal verification techniques, STMicroelectronics introduced a way to combine formal verification with simulation. The idea is to invoke the formal engine at predefined search points during the simulation, which limits the scope of the formal search space. The formal engine can be triggered when a certain RTL condition is met, at intervals, on FSM states, or on coverage events.

What Input Language is the best choice for High-Level Synthesis (HLS)?
This was the most heated debate session at DAC. The panel invited speakers from Cadence, Mentor, Synfora, Bluespec, Forte, and AutoESL for a showdown on their HLS methodologies. In this three-way debate, the languages of choice are C/C++, SystemC, and SystemVerilog. All the speakers are biased one way or another because they represent companies that have invested millions of dollars in a particular language, so naturally each advocates that his choice of language is better than the others.

The benefit of using SystemVerilog over C++ or SystemC is that SV lets the designer specify the hardware architecture. The weakness of SystemC and C++ is that they follow a sequential model and lack ways to express concurrency. Architectural decisions cannot be left to the synthesis tool, since the architecture is the first-order optimization, so C++ and SystemC HLS tools have to rely on compiler directives to specify the hardware architecture.

The benefit of C/C++ is that it is the only language used by both hardware and software developers. Algorithms are modeled in C/C++, which makes C/C++ the most natural input to an HLS tool. The modeler or software developer does not need to learn a new language, and there is no need to translate the C/C++ code into another language. Using C/C++ also lets the team postpone defining the architecture by separating the layers of abstraction, or even defer the decision on the HW/SW boundary.

SystemC sits roughly halfway between C/C++ and SystemVerilog. Its advocates think it has the best of both worlds; others think it has the worst of both. It provides a limited set of language constructs to define timing information and concurrency. It can describe the hardware architecture more accurately than C/C++, but it also carries the burden of a 40-year-old programming language that was never designed to describe hardware implementations in the first place. However, SystemC is supported by Cadence, Mentor, Forte, and NEC CyberWorkBench, the four biggest HLS tool vendors.

Agile Programming Techniques
I signed up for a full-day workshop on How to Write Better Software on day 5. The workshop was conducted by IBM's internal software training consultants; IBM is huge on agile software development. Agile projects focus on four elements: stable, time-boxed short iterations; stakeholder feedback; self-directed teams; and a sustainable pace. The workshop introduced two agile methodologies, eXtreme Programming (XP) and Scrum.

XP defines 12 programming practices, though the instructor did not go over all of them in the workshop. The major practices mentioned were: 1. lean software development, 2. test-driven development (TDD), 3. automation, 4. continuous integration/build, and 5. pair programming. Lean software development applies value stream mapping to eliminate waste. TDD centres on the idea of unit tests and refactoring; a tiny sketch of the write-the-test-first loop follows.
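
As a small, generic illustration of the TDD loop (not taken from the workshop material; the parity checker is an invented example), the unit tests are written first and the function is grown just enough to make them pass before refactoring:

```python
# Minimal TDD sketch: the unit tests are written first and drive the code.
# The parity-checker example is invented for illustration.
import unittest

def has_even_parity(word: int) -> bool:
    """Return True if the 32-bit word has an even number of set bits."""
    return bin(word & 0xFFFFFFFF).count("1") % 2 == 0

class TestParity(unittest.TestCase):
    def test_zero_is_even(self):
        self.assertTrue(has_even_parity(0x0))

    def test_single_bit_is_odd(self):
        self.assertFalse(has_even_parity(0x8))

    def test_two_bits_is_even(self):
        self.assertTrue(has_even_parity(0x81))

if __name__ == "__main__":
    unittest.main()   # red -> green -> refactor, one small step at a time
```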

In Scrum, the project is divided into 2-4 week sprints. At the beginning of a sprint there is a sprint planning meeting: the product owner sets the priority of all the user stories in the product backlog, and the scrum team then picks the sprint backlog and commits to the sprint goal. The scrum master removes roadblocks for the team. A user story describes functionality that will be useful to a stakeholder of the system. Within a sprint, the team should get the picked user stories completely done (coded, tested, documented). The team holds a short 15-minute daily scrum meeting to report progress since yesterday, the plan for the coming day, and any roadblocks that need resolving. At the end of the sprint there is a sprint review meeting and a demo of the sprint goal. Unfinished user stories go back into the product backlog and have their priority re-evaluated.

Verification is a huge software project in its own right; we already write more lines of code than the RTL design itself. I think applying agile programming techniques would help us improve the quality of our work. The workshop is just an introduction to Agile: it outlines what Agile is and its potential benefits, but leaves out the practical know-how. It would be nice to learn more about applying Agile in a verification setting, since our work is not exactly the same as the software development projects at IBM. Moreover, knowing the principles of Agile is one thing; avoiding the pitfalls during execution is another. Many real-life problems have to be sorted out to make an Agile project successful. The workshop did not cover how to estimate a schedule with Agile when planning is only done within each sprint, how to manage people within an Agile team, how to deal with free riders, how to handle differences in skill level, or what to do with tasks that nobody wants to work on.

Given that the workshop is a 3-day IBM internal training course squeezed into one day, it is understandable that a lot of information was left out. Still, I left the workshop unsatisfied; I had expected to learn more about Agile from it.

DAC Technical Review (Day 3)

On the third day of DAC, I went to the user track presentations on formal verification and checked out the booths of Onespin, Jasper, SpringSoft, Tuscany, AMIQ, Starnet, Forte Design Systems, and CyberWorkBench.

User track presentation on formal verification

The user track presentations are where users of EDA tools present how they make the best and most creative use of them. There were about 40 presentations today, roughly 10 of them related to verification. I read the presentation posters and then talked to the presenters to exchange ideas. Here are a few pointers I picked up:

  • Use a formal verification tool to help close code coverage. For a line of code that is not yet covered, we can write a property for that line, feed it to the formal engine, and ask it to come up with an input sequence that triggers it. The formal engine will either generate an input sequence or prove that the line is unreachable; a sketch of how such properties could be generated follows this list.
  • Sun/Oracle had the idea of running the properties in simulation to keep formal and simulation in sync. The trick is to add some "useless" signals in the DUT to qualify the assertion checks, to avoid a flood of false failures when the DUT is in an invalid state.
  • One presentation showed that using formal verification early in the development cycle catches more bugs in FV. Duh!
  • This is a good one. In formal verification there are two types of properties: abstract properties, which are safe but incomplete, and constraint properties, which are unsafe but complete. Choosing between them is a trade-off between finding counterexamples and getting a full proof of the design.
  • Exhaustive proofs are difficult; it is more practical to limit the proof to some reasonable depth.
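
Here is a rough sketch of how the coverage-closure idea in the first bullet could be scripted. The input format and the emitted SVA are simplified and invented for illustration; a real flow would extract the uncovered conditions from the coverage database rather than listing them by hand:

```python
# Sketch: turn uncovered branch conditions into cover properties that a
# formal tool can either hit (proving reachability) or prove unreachable
# (justifying a coverage waiver). Inputs and output format are invented.
UNCOVERED = [
    # (module, clock, condition guarding the uncovered line)
    ("pkt_parser", "clk", "state == ERR && crc_bad"),
    ("pkt_parser", "clk", "fifo_full && push"),
]

def emit_cover_props(uncovered):
    lines = []
    for i, (module, clock, cond) in enumerate(uncovered):
        lines.append(f"// target {i}: uncovered branch in {module}")
        lines.append(f"cov_{module}_{i}: cover property "
                     f"(@(posedge {clock}) ({cond}));")
    return "\n".join(lines)

if __name__ == "__main__":
    # Paste the output into a bind file and hand it to the formal engine:
    # a trace means the branch is reachable, a proof of unreachability
    # means the coverage hole can be waived.
    print(emit_cover_props(UNCOVERED))
```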

Onespin

This company builds formal verification tools. Their basic product is similar to IFV, except that it has a better, easier-to-use GUI that allows much more interaction and visualization. Their flagship product is operational formal ABV: instead of defining basic cause-and-effect properties, the tool provides an assertion library that lets the user define operational properties. The user then iterates, with the aid of the tool, towards full coverage of the formal space. The idea is to generate a set of properties that completely defines the RTL. I think the tool will work as advertised, because in the end it is a human who has to enter the missing properties. However, I wonder what the use of a complete ABV definition of the RTL is; the idea seems upside down. I guess the point is that instead of auditing the implementation of the RTL code, the auditor should audit the complete set of properties.

One thing I don’t like about Onespin is that they have far too many products, and it is really confusing. The flagship product has all the features, and the rest are just crippled versions with fancy marketing names that only confuse users. For example, the difference between two of the products is merely whether properties can be loaded from an external file or have to live in the same file as the RTL code. I don't care for this kind of marketing ploy that exists simply to milk more money from the customer.

Jasper

Jasper is THE formal verification tool vendor. I spent almost two hours (and had my lunch) at their booth, walking through all the demos and trying out almost every feature in their product. This is the tool of choice in many of the formal verification presentations at DAC. It is much more user friendly and powerful than IFV; IFV seems primitive by comparison. Jasper also has property libraries for different memory and FIFO models instead of just black-boxing them out. It supports tunnelling, so the user can steer the formal engine. It comes with many more formal engines than IFV and offers clever ways to get a counterexample or a proof. Active Design uses the same formal engine but for a different application: if we have poorly documented legacy RTL, a new designer can use Active Design to generate properties of the RTL and understand exactly what it can and cannot do. Another benefit is that when we ask a designer whether the RTL can do such and such, we no longer have to take their word for it; the designer can generate a recipe that proves the answer. Jasper has an account manager for PMC and she knows Bob and Alan. I think we really should try Jasper in Digi and get Bob to set up the tool for us.

SpringSoft

SpringSoft acquired Debussy. Debussy and Verdi have not changed much, other than added support for power-aware design and SystemVerilog. Siloti is a very neat add-on to Debussy for waveform compression. The idea is simple: in simulation we really only need the inputs, outputs, and flip-flop values in the waveform database, and the waveform viewer can calculate the values of the combinational signals on the fly. The waveform database shrinks to about 25% of its original size. Certitude is a tool that tests the testbench: it corrupts the DUT and checks whether the testbench fails. If the testbench still passes when there is corruption in the DUT, something must be wrong with the testbench.
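
A toy illustration of that essential-signal idea (no relation to Siloti's actual implementation; the signals and logic functions below are made up): store only the inputs and flop outputs, and recompute combinational values on demand:

```python
# Toy illustration of essential-signal waveform storage: the stored database
# keeps only inputs and flip-flop outputs, and combinational values are
# recomputed when the viewer needs them. Signals and logic are invented.
STORED = {
    # signal -> sampled value per cycle (inputs and flop outputs only)
    "a_q":   [0, 1, 1, 0],
    "b_q":   [1, 1, 0, 0],
    "sel_i": [0, 0, 1, 1],
}

# Combinational signals are described as functions of the stored ones,
# mirroring the netlist, so they never need to be written to disk.
COMBINATIONAL = {
    "mux_out": lambda s, t: s["b_q"][t] if s["sel_i"][t] else s["a_q"][t],
    "and_out": lambda s, t: s["a_q"][t] & s["b_q"][t],
}

def value_at(signal, cycle):
    """Look up a stored signal, or recompute a combinational one on the fly."""
    if signal in STORED:
        return STORED[signal][cycle]
    return COMBINATIONAL[signal](STORED, cycle)

if __name__ == "__main__":
    for t in range(4):
        print(t, value_at("mux_out", t), value_at("and_out", t))
```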

Tuscany

This company has only one product: a web GUI for displaying timing reports. I like the GUI, even though I don't know much about timing reports. I can see how it solves the problem of keeping track of so many of them.

AMIQ

Another small company with only one product. They have a DVT IDE for Specman and SystemVerilog based on Eclipse, the open-source Java IDE. The IDE works like Visual Studio: it has an editor, data browser, structure tree, keyword auto-complete, quick linting, and hooks to launch Specrun, all in the same GUI. It is a lot more user friendly than editing e code with vi or Emacs. For the next release they are working on a debug interface that hooks into the simulation and will work like gdb. I highly recommend purchasing a few licenses (1k per seat, but I am sure Bob can negotiate a volume discount if we buy more) and handing them out to the team to evaluate the product. I think we would see a productivity increase with the DVT IDE instantly.

Starnet

They are selling a VNC replacement that they claim is much faster than VNC. I know CAD is evaluating some fast VNC-like software right now; maybe we should get CAD to try this product as well. We all know how painful it is to view waveforms in Bangalore via VNC.

Forte Design Systems and CyberWorkBench

Both companies sell high-level synthesis (HLS) tools that compile SystemC into RTL code. It looks like HLS is finally here. I don't have enough domain knowledge to evaluate the tools; all I can tell is that they have nice GUIs and the generated RTL is not very readable. I asked about limitations on the SystemC code and the efficiency of the generated RTL, and only got the typical marketing answers. Too bad both tools only work with SystemC; it would be nice if there were HLS for behavioral SystemVerilog.

DAC Technical Review (Day 2)

On the second day of DAC, I attended a technology session, Bridging pre-silicon verification and post-silicon validation, a user track presentation, An Open Database for the Open Verification Methodology, and the Synopsys VCS demo and verification luncheon, and visited the booths of the following companies: Real Intent, Aldec, IBM, Nextop, EVE, and ExpertIO.

Bridging pre-silicon verification and post-silicon validation

This technology session had a panel discussion on closing the gap between verification and validation. Verification and validation have two very different cultures, which arise from the limitations of each side's work, and the industry faces the same problems we do at PMC: trade-offs between control and speed, and between cost and effort in testing. Since the test environments on the two sides are incompatible, it is a challenge to reproduce a problem from one side on the other. The latest technology for closing the loop between validation and verification is Design for Debug (DFD). The idea of DFD is very simple: build waveform-probe and signal-poke logic into the silicon, so that we get more control and observability during validation. DFD is very new, but the aim is to standardize and automate the flow just like DFT. Simulators will have hooks into the DFD interface to recreate bugs found in validation, or to generate vectors and dump them to the device. It is interesting to see the industry statistic that the Rev A success rate has dropped, from 29% in 2002 to 28% in 2007, with more revisions needed on average. DFD can speed up the validation process and improve the turnaround between validation and verification. ClearBlue is a DFD tool, and its vendor claims an overhead of only a 1-2% area increase in the silicon; however, the Intel panelist cited a number as high as 10% for their in-house DFD solution. Users can trade off between increasing the pin count and adding more probe memory to cope with the bandwidth requirement of the DFD interface.

An Open Database for the Open Verification Methodology

This presentation came out of a university research project. It is like what Peter proposed a few years ago: hook up a C++ SQL API with Specman and save all the coverage, or even packet data, to a MySQL database. It is a neat proof-of-concept exercise, but vManager has already addressed this question.

Synopsys VCS demo and verification luncheon

The VCS demo is not really a demo, just marketing slides. However, I chatted with the VCS product manager for half an hour afterwards and managed to hear a few interesting things about VCS.

  1. Cadence has EP and VM for test planning; Synopsys just uses an Excel spreadsheet template. The spreadsheet pulls in the coverage data in XML format to report the test results.
  2. VCS multi-core is more advanced than I had expected. The user can partition the design along logical blocks (subsystems) and run each block on a different core to speed up the simulation. The Cadence multi-core solution does not partition the design; it merely moves some functions, like waveform dumping, assertion checking, and running the testbench, onto a different core. The catch is that each core checks out a separate VCS license.
  3. VCS has a new constraint solver, but it takes a different approach from igen: they don't break the generation down into ICFSs. It looks like there is more than one constraint-solver algorithm out there. They claim the new solver is better than Cadence's, but they are only comparing against pgen; the VCS guy was not aware of igen.
  4. VCS finally supports uni-cov, which Cadence has supported since IES 8.2. They have a tool to merge uni-cov files in parallel, kind of like an NHL playoff bracket. I think we can modify our coverage-merge script to merge coverage in parallel the same way to avoid crashing; a sketch follows this list.
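
Here is a sketch of what that playoff-style parallel merge could look like, assuming a generic command-line merge step (the cov_merge_tool name, file names, and options are placeholders, not the actual uni-cov or IES merge syntax):

```python
# Sketch of a playoff-style parallel coverage merge: merge files pairwise in
# rounds so no single merge ever has to hold the full list, which is what
# crashes our current flat merge. merge commands are only printed here; the
# real invocation would replace the placeholder cov_merge_tool call.
import os
import subprocess
import tempfile
from multiprocessing import Pool

def merge_pair(pair):
    a, b = pair
    if b is None:                      # odd one out advances to the next round
        return a
    merged = tempfile.mktemp(suffix=".cov", dir="merge_tmp")
    cmd = ["cov_merge_tool", "-out", merged, a, b]   # placeholder command
    print("merging:", " ".join(cmd))
    # subprocess.run(cmd, check=True)  # enable with the real merge tool
    return merged

def tournament_merge(files, workers=8):
    os.makedirs("merge_tmp", exist_ok=True)
    while len(files) > 1:
        pairs = [(files[i], files[i + 1] if i + 1 < len(files) else None)
                 for i in range(0, len(files), 2)]
        with Pool(workers) as pool:
            files = pool.map(merge_pair, pairs)      # one round of the playoff
    return files[0]

if __name__ == "__main__":
    runs = [f"regress_{n}.cov" for n in range(100)]  # hypothetical run files
    print("final merged database:", tournament_merge(runs))
```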

Real Intent

This company offers static verification tools that run very early in the development cycle to catch bugs before there is a testbench. I had a demo with them and was able to play around with their tools. LINT is the HDL linting tool; other than a flimsy GUI and integration with Verdi, I don't see any advantage over HAL. IIV is a tool for implied-intent verification, which analyzes the HDL code and checks it against about 20 predefined checks. LINT catches syntax and combinational errors; IIV catches sequential errors such as dead states in a state machine. I don't think IIV is very useful, since the user cannot define custom checks and the built-in checks only catch careless mistakes or the kind of silly logic errors a co-op student would make. XV is their tool for 'X'-propagation verification and is still in beta. The tool reads the RTL code and generates a small Verilog testbench that pokes internal signals to 'X' and checks the propagation; it then runs that small testbench on your normal simulator to see whether any 'X' propagates anywhere. I doubt the usefulness of this tool. Lastly, they have ABV for formal assertion checks, but they had no demo set up; I suspect the tool is not even a working beta yet. I am not very impressed by Real Intent. If their tools work as advertised, we would probably save a few days of debug time at the beginning of a project, and that is it. I am always skeptical about such claims.

Aldec
They used to provide OEM simulators to Xilinx and other FPGA vendors; now they are selling it as a standalone simulator. The simulator runs on Windows and Linux. It comes with a lint tool and supports assertion checking (but not formal analysis). The tool targets FPGA designs, since it probably won't be able to handle a 50-million-gate ASIC design. The IDE GUI is simple and pretty, but it lacks the features and strength of IES.

IBM
I went to the IBM booth to find out what's new in the DOORS and vManager integration. The IBM guy pulled in Michael Mussey from Cadence, who oversees the vManager project, when he walked by us. In short, the integration is not working yet. On the planning front, DOORS will generate the vPlan file from the design spec via a RIF (requirement input format) XML file, and verifiers only have to map the coverage and checkers in the vPlan. On the reporting front, Cadence is working on a script that takes the vPlan and UCM, generates a UCIF (universal coverage input file), and feeds it back to DOORS. Another potential application is using DOORS for the verification schedule; DOORS has a plugin that talks to Microsoft Project. It looks like historical data is not saved in DOORS; DOORS only reports the current view. Michael mentioned that they are adding a MySQL backend to vManager to report historical data. I think we can look into using this new feature to replace our Excel spreadsheet. DOORS has bug-tracking integration as well: a confirmed bug report should automatically trigger a change request in the requirement spec. We may need to upgrade our PREP system to work with DOORS.

Nextop
Nextop is very interesting. Their tool generates assertions (PSL or SVA) automatically by monitoring your simulation. It is a unique answer to the question of who writes the assertions. The tool analyzes how the signals are used in the simulation and generates a list of PSL or SVA statements as the properties of the block. The designer then has to go through the list (a few hundred of them) and categorize each one as something that should always hold true (an assertion) or something that is only true because we haven't run enough stimulus (a coverage item). The testbench then incorporates the assertions and uses them for the rest of the simulation. My only concern is that their solution seems too good to be true, and I can't tell the quality of the auto-generated assertions from the demo. I would like to evaluate this tool and see whether the generated assertions are useful or just junk. The simulation overhead is about 10-15% when the tool is turned on to collect information. Currently it only works at block level, and the biggest block they have ever tried has only 12K lines of code. The designer is the weakest link in their flow, since the designer has to check and classify each generated assertion one by one.

EVE
They make emulation boxes. They talked about testbench speed-up, so I was interested in their presentation, but it turns out they mean the box only supports synthesizable testbenches. They don't have any interface for the FPGA array to communicate with the simulator; they kept telling me I could easily hook up the emulation box to the simulation by building custom PLI functions. Yeah, right. It looks like there are not many emulation boxes out there that support simulation acceleration; probably only the boxes from the big three simulator vendors do.

ExpertIO
A verification IP vendor. They have Ethernet, SAS, SATA, and FC VIP. The offering looks good, but the problem is that the VIP is implemented in encrypted behavioral Verilog with a SystemVerilog wrapper to control the transactor. They refused to show me what the API of the VIP looks like unless PMC goes through the official NDA process. The only information I could get was a fairly useless feature list, so I can't tell how easy or annoying their VIP is to use.

DAC 2010 Technical Report (Day 1)

Today's report covers my first day at DAC. I signed up for a full-day technical workshop, Choosing Advanced Verification Methodology. After the workshop ended at 3:30 pm, I managed to check out a few companies on the exhibition floor: Vennsa Technologies, Agnisys Inc., and Veritools.

Advanced Verification Methodology

The workshop is smaller than I expected, with only about 20 attendees. It started off with a keynote from Brian Bailey, a verification consultant, on the latest trends in verification. Assertions and ESL seem to be the themes of the day.

We are finally seeing formal verification come out of academic research, get put to use by industry, and develop a good use model. There are 7-8 formal tool vendors in the market right now, but looking at the EDA industry's history, no matter what the technology, the market is usually only big enough for two of them to survive.

ESL is the latest buzzword. The term has many different meanings, but basically it is where software and hardware come together. For verification, ESL means building reusable testbenches with different abstraction layers: starting at the top with a TLM model to verify the algorithm, then pushing down to verify the architecture, and finally the RTL implementation at the very bottom. TLM 2.0 is the new industry standard and has pretty much swept aside the proprietary prototypes from different vendors, but it still lacks synthesis support and has no hardware/software interface.

Currently, many people model ESL in SystemC, but both SystemC and SystemVerilog need a few more revisions to fully support ESL. The new concurrent C/C++ standard coming out this year may turn SystemC into an obsolete branch of C/C++. High-level synthesis, the C-to-RTL compiler, is almost an ESL enabler: it separates the architecture from the behavioral description. The shift from RTL coding to high-level synthesis would be as disruptive as the shift from schematic capture to RTL coding.

Constrained random generation is a challenge in ESL verification: current tools do not understand sequential depth and cannot constrain across sequences. Functional coverage is broken; it is merely a set of isolated observations that does not necessarily reflect verification progress. We need a different metric that provides a direct measure of verification closure.

In ESL development, management will be a new challenge. Now that hardware and software are developed in the same cycle, there will be schedule conflicts between the hardware team and the software team. Communication among the teams and clear interface management at the partition between the software and hardware implementations are the key.

For the next few years, the speaker predicts more research and probably technology breakthroughs in these areas: specification capture in verification, sequential equivalence checking, intelligent testbenches, assertion synthesis, and behavioral indexing.

After the keynote came a customer panel, with guests from Intel, ARM, and nVidia. The ARM and nVidia panelists are assertion experts; the Intel guy is more on the ESL side. It was a Q&A session, but nothing special; the guests only talked about very generic things. They tell us what they do, but not how they do it.

Jasper gave the next presentation, with nVidia and ARM providing customer testimony. They talked about their formal verification tool and introduced basic concepts like full proof, counterexample, and partial proof. There were quite a few neat examples of formal verification: generating an input sequence for a given output trace, answering an urgent customer question about whether something is possible in the design, verifying deadlock/livelock, and checking 'X' propagation. Both ARM and nVidia have dedicated assertion teams, and they said that is important to the success of using assertions in verification.

Synopsys presented new updates to VCS simulation. They close the coverage-to-constraint loop with the Echo testbench technology; it is similar to what Cadence has and is limited to coverage that has a direct relationship to the constraints. VCS finally has multi-core support; I think Cadence already had it in IES 8.2. We should look into using both technologies for Digi, and work with the CAD group to set up special LSF machines reserved for multi-core jobs.

TSMC talked about its Open Innovation Platform (OIP) for ESL verification. The virtual platform enables hardware/software co-simulation. The testbench is built top-down: start with an ESL TB to verify the algorithm, then an ESL SoC TB to verify the ESL model, then add a cycle-accurate adapter to the ESL model, and finally the RTL testbench.

Mentor talked about ESL testbenches and presented a success story of TLM-to-RTL synthesis verification. They claim the high-level synthesis flow saved them a lot of time and let them use the same testbench at different abstraction levels from top to bottom.

There was nothing new in Cadence's presentation; they just showed how vPlan fits into the ESL flow.

Vennsa Technologies
It is a small company in Toronto based on the research of a professor from the University of Toronto. Their OnPoint debug tool is pretty neat. It is an add-on to the simulation that helps the designer pinpoint a bug. Once you have an assertion failure, you can fire up the OnPoint GUI; the tool back-traces the logic cone, narrows things down, and suggests where the bug is likely to be. You can also start the back-trace from a given value on an output pin at a given time. I played with their demo for almost half an hour, and it would be a very handy tool if it works as advertised. The idea behind the tool sounds solid, and I think we should evaluate it.

Agnisys Inc
This company has two products: IVerifySpec, a web GUI replacement for vManager, and iDesignSpec, a half-baked solution similar to RDA.

IVerifySpec uses a SQL database to store the vPlan, but it does not support UCD directly; the UCD has to be translated to XML offline and imported into the database. There are a few nice features in the GUI, like a heat map, a traceability matrix, and some charts and graphs that look like Google Analytics. However, the tool is very immature overall: it does not support multi-level hierarchy in the requirement specification, there is no revision control, and data entry via the web interface is tedious and user unfriendly. I should simply ask Cadence to copy those nice reporting features into vManager.

iDesignSpec sounds good on paper, but the implementation is awkward. You enter the register specification in Word using a funny plug-in, and the plug-in then generates PDF, HTML, XML, VHDL, OVE, and C++ files. It is somehow the exact opposite of our RDA flow, where we enter the register description in XML and generate one output at a time with scripts. The formatting of their Word plug-in is very ugly, and the code and PDF files it generates are very primitive; I would say even our old ECBI generator is better than this tool. The only useful thing I learned from this presentation is that there are industry standards for register description, SystemRDL and IP-XACT. Maybe our RDA tool should support the industry standards as well.

Veritools

Their flagship product is Veritools Designer, basically a Debussy/Verdi clone. It can view schematics and waveforms and do source-code debugging. They claim their tool is very fast and costs only a quarter of the price. I am always skeptical of such claims, and I don't like that they use their own waveform database format, which means the simulator has to dump another waveform through their PLI. The GUI is fast, but the design in the demo is not very big. The GUI is quite primitive compared to SimVision, and they can't beat the price of SimVision, which comes free with IES. I do agree SimVision is a bit slow, but I think investing in faster computers with more RAM can solve that problem. They have an add-on tool called Veritools Verifyer, which is kind of dumb: if there is an assertion failure, it reads in the waveform and lets you test changes to the assertion code without rerunning the simulation. I don't think it is very useful. When an assertion fails, how often is it due to an RTL bug and how often is it just a faulty assertion?

DAC 2010 – First Impression

This is the first time I have gone to the Design Automation Conference (DAC), or any industry conference, and it is a real eye-opener for me. When I first started working at PMC during the dot-com bubble days, the company promised to send us to a conference every two years. Unfortunately, before my turn came, the bubble burst and the company was in survival mode for almost a decade. Finally, we are back on a growth track and the company has money to invest in developing its employees, and a budget to send us to industry conferences.

There are a few reasons I picked DAC. First, it is the biggest conference in the EDA industry, with 3 days of exhibition and 2 more days of tutorials and workshops. You can see everything under one roof: all the tool vendors, third-party IP providers, the big names, and new start-ups you have never heard of. Second, there are many workshops and tutorial sessions specific to verification, so I can learn what's new out there and what other companies are doing; in fact, there are so many interesting sessions that I could not see them all. Last, the conference is in Anaheim, right next to Disneyland, so I am flying Pat down here and we will spend the weekend after the conference as a mini-vacation.

The latest technology presented by the exhibitors is amazing, but I am equally amazed by the registration system. After registering online, you pick up your badge at the registration desk. The process is very smooth: just scan the bar code and your card is printed right in front of you. The card has a built-in RF chip, so you no longer have to hand out business cards to exhibitors; they scan your card with a cell-phone-like device and print out your information automatically.

There are lots of freebies at DAC. It is only the first day of the exhibition and I have only covered a third of the floor, yet I have already collected the following: a backpack, 3 T-shirts, 4 balls, 2 highlighter pens, a measuring tape, a battery-powered cell phone charger, a pair of waist bands, and a book, "The Ten Commandments for Effective Standards". Besides the freebies, there is free beer. Last night we had the kick-off reception sponsored by Intel; tonight I went to the Denali party in Downtown Disney. Although the beer is bottomless, no one abuses the kind offer and gets drunk. It is an industry conference after all; you don't want to embarrass yourself in front of potential clients and employers. The industry is a small world.

I am looking forward to the rest of the week. I am going to write about the new technology I learn at DAC every day. Stay tuned.