Tag Archives: hardware

Verification

SW/FW Automated Test Framework and Debug Toolkit for System Testing In the Design/IP Track of the 53rd Design Automation Conference (DAC 2016, Austin, TX, Jun 4 – Jun 9, 2016)

Abstract:
This presentation outlines a novel SW/FW automated test framework and debug toolkit for system testing that supports automated regression and effective interactive debug. The framework is based on Google Test, expanded with C++14 features and various open source libraries. Major features include: test flow utilities, an argument parser, JSON files for device configuration, control of test equipment via Tcl_Eval(), an interactive debug prompt, control of FW running on the embedded CPU via remote gdb, a C++ reflective API, etc. Engineers are more productive developing testcases in this framework than writing testcases in plain old C in an ad-hoc fashion.
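As a rough illustration of the style of testcase such a framework enables (a minimal sketch, not code from the presentation), the C++14 snippet below combines Google Test with a JSON library for device configuration; DeviceConfig, load_config() and device_config.json are hypothetical stand-ins for the framework's own utilities, not its actual API.

    // Minimal Google Test based system testcase sketch (C++14).
    // DeviceConfig, load_config() and device_config.json are hypothetical
    // placeholders for the framework's JSON-configuration utilities.
    #include <gtest/gtest.h>
    #include <nlohmann/json.hpp>   // any JSON library would do
    #include <fstream>
    #include <string>

    struct DeviceConfig {
        std::string device_name;
        int         port_count = 0;
    };

    // Hypothetical helper: read device settings from a JSON file.
    static DeviceConfig load_config(const std::string& path) {
        std::ifstream in(path);
        nlohmann::json j = nlohmann::json::parse(in);
        return DeviceConfig{ j.at("device_name").get<std::string>(),
                             j.at("port_count").get<int>() };
    }

    TEST(SystemTest, DeviceComesUpWithConfiguredPorts) {
        DeviceConfig cfg = load_config("device_config.json");
        ASSERT_FALSE(cfg.device_name.empty());
        EXPECT_GT(cfg.port_count, 0);
        // A real testcase would now program the device through its register
        // interface and drive traffic using the test flow utilities.
    }

    int main(int argc, char** argv) {
        // The framework's own argument parser would also consume argc/argv here.
        ::testing::InitGoogleTest(&argc, argv);
        return RUN_ALL_TESTS();
    }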

Full Text: pdf (324kb)


A Methodology to Port a Complex Multi-Language Design and Testbench for Simulation Acceleration In Proceedings of DVCON 2015 (San Jose, US, Mar 1 – Mar 4, 2015).

Abstract:
PMC’s verification teams started exploring simulation acceleration (SA) with hardware-assisted verification in 2011, as one of the early adopters of UVM Acceleration. They undertook this effort because of the complexity and size of their mixed-language designs, which were coded in SystemVerilog, Verilog, and VHDL, and stimulated using state-of-the-art testbenches coded in UVM-e.
A few years later, the task of porting a design and testbench from simulation to acceleration evolved into a methodology that is now re-used across multiple verification teams. Finally, PMC has achieved the holy grail of SA, conquering the most complex challenges of SA verification, including: 1) Speed – achieving a 67x speed-up, 2) Time to First Test – taking only a month to port a verification environment to run in acceleration mode, 3) Consistency – running the same tests with RTL and an accelerated DUT and producing the same results.

This methodology exploits essential capabilities of the tools in use, and production proven procedures. This paper outlines a step-by-step guide to port an existing UVM-e testbench to SA. The verification user community can use this paper as a template to plan their migration from simulation to hardware acceleration.

Full Text: pdf (334kb)


Maximize Vertical Reuse, Building Module to System Verification Environments with UVM e In Proceedings of DVCON 2013 (San Jose, US, Feb 24 – Feb 28, 2013).

Abstract:
Given the size and complexity of modern ASICs/SoCs, coupled with their tight project schedules, it is impractical to build a complete system or chip level verification environment from scratch. Instead, maximizing seamless reuse of existing verification components within the project has become one of the biggest opportunities to increase verification efficiency. In this paper, we present a testbench framework to maximize vertical reuse within a project. The framework presented here has been proven on the ground-up development of a 200M-gate ASIC. In our framework, the system testbench is built in a hierarchical manner by recursively importing lower level block or module testbenches. From the lowest level to the highest level, all the testbenches are designed to support plug-and-play integration. Verification engineers can hook up several lower level testbenches and turn them into a higher level testbench. The system testbench inherits the device configuration sequences, traffic generation sequences, checkers and monitors from the imported module testbenches without duplication of effort. As a result, vertical reuse shortens the development time of the system testbench, improves the quality of testbench code and allows fast bring-up during system integration.

Full Text: pdf (251kb)


Can You Even Debug a 200M+ Gate Design? In Proceedings of DVCON 2013 (San Jose, US, Feb 24 – Feb 28, 2013).

Abstract:
Verification debug consumes a large portion of the overall project schedule, and performing efficient debug is a key concern in ensuring projects tape out on time and with high quality. In this paper, we outline a number of key verification debug challenges we faced and how we addressed them using a combination of tools, technology and methodology. The root cause of failures can most often be traced to an issue in the RTL, an issue in the testbench or, in our case, the software interacting with the hardware. Speeding up the debug turnaround time (the time to re-run a failing sim to replicate the issue for debug) is critical for efficient debug. Periodic saving of the simulation state was used extensively to narrow the debug turnaround time to a very small window. Once a re-run was launched, users could set waveform verbosity levels to dump the appropriate amount of information for debugging the re-run scenario. For additional performance on the testbench side, a coding methodology was introduced that allowed for maximum performance of stable sections of code. To speed up SW debug, a software driver was integrated into the testbench to allow debugging of SW-related issues very early in the project.

Full Text: pdf (331kb)


Hardware/Software co-verification using Specman and SystemC with TLM ports In Proceedings of DVCON 2012 (San Jose, US, Feb 28 – Mar 1, 2012).

Abstract:
In modern ASIC/SoC design, the hardware and software have to work seamlessly together to deliver the functions, requirements and performance of the embedded system. To accelerate time-to-market and to reduce overall development cost, it is crucial to co-verify the software code with the hardware design prior to tape-out. The software team can start developing and debugging their code with the actual hardware RTL code to shorten their overall development cycle. The hardware team can use the software code to identify performance bottlenecks and incorrect functional behaviors early in the development cycle which helps to reduce the risk of increasingly expensive device revisions.

The current approach to co-verification is primarily running the software on the embedded processor inside the hardware design, either within the simulator or with ICE (in-circuit emulation). The disadvantages of this approach are slow debug turnaround time and the higher cost of procuring and supporting a dedicated emulation box or FPGA platform. In addition, the software runs in isolation from the testbench, so it is often challenging and inconvenient to integrate the software with other verification IP in the testbench.

In this paper, we present an alternative approach to integrating the software driver into the simulator using Specman and SystemC with TLM ports. The software runs in the same memory space as the testbench, and both run through the simulator on the Linux host. The advantages of this approach are the fast execution speed of the software and the interoperability of the software with other verification components in the testbench. The software code runs in zero simulation time, and the testbench has full control of the software using TLM ports and direct memory access via pointers. In addition, the software code can invoke gdb or any other C debugger to make debugging easier.
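To make the idea concrete, here is a minimal SystemC TLM-2.0 sketch (an assumption-laden illustration, not code from the paper) of how a software driver's register writes can be redirected through a TLM initiator socket inside the simulator; SwDriverProxy, FakeDut, reg_write() and the address used are hypothetical, and the Specman side of the connection is omitted.

    // Sketch: the SW driver's register accesses are routed through a TLM
    // initiator socket instead of real hardware. All names are hypothetical.
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>
    #include <cstdint>

    struct SwDriverProxy : sc_core::sc_module {
        tlm_utils::simple_initiator_socket<SwDriverProxy> socket;

        SC_CTOR(SwDriverProxy) : socket("socket") { SC_THREAD(run_driver); }

        // The embedded software's register-write routine, redirected to TLM.
        void reg_write(uint64_t addr, uint32_t data) {
            tlm::tlm_generic_payload trans;
            sc_core::sc_time delay = sc_core::SC_ZERO_TIME;  // zero simulation time
            trans.set_command(tlm::TLM_WRITE_COMMAND);
            trans.set_address(addr);
            trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
            trans.set_data_length(sizeof(data));
            socket->b_transport(trans, delay);
            sc_assert(trans.is_response_ok());
        }

        void run_driver() {
            // A real flow would call into the compiled C driver here; a single
            // hypothetical register write stands in for it.
            reg_write(0x1000, 0x1);
        }
    };

    // Stand-in target; a real setup would forward the access to the DUT/testbench.
    struct FakeDut : sc_core::sc_module {
        tlm_utils::simple_target_socket<FakeDut> socket;

        SC_CTOR(FakeDut) : socket("socket") {
            socket.register_b_transport(this, &FakeDut::b_transport);
        }

        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time&) {
            trans.set_response_status(tlm::TLM_OK_RESPONSE);  // just acknowledge
        }
    };

    int sc_main(int, char*[]) {
        SwDriverProxy driver("driver");
        FakeDut dut("dut");
        driver.socket.bind(dut.socket);
        sc_core::sc_start();
        return 0;
    }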

Full Text: pdf (313kb)


Functional Verification of Next-Generation ICs with Next-Generation Tools: Applying Palladium XP Simulation Acceleration to an Existing Specman Testbench Framework In CDNLive! 2012 (San Jose, US, Mar 13 – Mar 14, 2012).

Abstract:
Next-generation ICs from PMC are ever-larger, to the scale of more than 100M gates. Using conventional simulators, typical datapath simulations for telecom applications can take hours to send a complete frame, while full regression suites can take a week. With more features causing longer simulation times, it’s challenging to complete a comprehensive verification plan while meeting time-to-market demands. To solve this, PMC implemented a transaction-level testbench infrastructure using the Specman Elite® tool, based on the Cadence hardware-acceleration–friendly Universal Verification Methodology (UVM). High-level protocol UVM verification components (UVCs) generate transactions driven to low-level interface UVCs, which generate the signals that enter the device under test. To support the Palladium XP hardware acceleration platform, the aspect-oriented programming nature of Specman was exploited. Interface UVCs were extended by splitting BFMs and collectors across Specman and SystemVerilog RTL, leaving protocol UVCs unchanged. Thus, PMC’s verification capabilities expanded with virtually no disruption to its ongoing verification plan. The addition of Palladium XP provides accelerated simulations that complete at 40x the speed of normal simulations. Thus, regressions can be completed in days instead of weeks and interactive debugging of top-level simulations is now possible, allowing PMC to complete a full verification plan on complex ICs while reducing time to market.

Full Text: pdf (576kb)

Moving to RAID

My computer is dead. Windows refuses to restart. The computer keeps rebooting itself at the XP logo screen. I was horrified, worrying about losing all my data. Hardware failure is not a big deal, you can always replace the broken parts. However the data inside the computer is irreplaceable. Luckily all my hard drives are intact, only Windows itself is corrupted. It will only take me a few days to re-install Windows and all my usual programs. However, I feel a bit uneasy about formatting my C: drive. I want to keep all the data in case there is something important. So I have to buy an extra hard disk to copy over the data.

Since I am already buying more hard disks, why don’t I fix it once and for all, so that I can have peace of mind? I upgraded the motherboard to one that supports RAID and bought more hard disk space. RAID stands for redundant array of inexpensive disks. The idea is to have two hard drives running in parallel, mirroring each other. In case of a disk failure, you still have a complete set of data. It is a hardware solution that works much better than backup software. Now I know my photos, my personal records and my mp3 collection of every Chinese CD released in the past 20 years are safe inside the hard disks, which have over 2TB of capacity in total.

Dell Inspiron 6400 Laptop


My laptop (Inspiron 6400) ordered from Dell arrived today. It is the first laptop I have ever owned, excluding the piece of junk PMC gave me when I started working. I am quite happy with the laptop, except it doesn’t have enough memory. I knew that when I placed the order, to avoid buying overpriced memory from Dell. I am going to order more memory online, so I will have to suffer the sluggish performance for a while. There is lots of work to do after getting a new computer. I have to install all my usual software. However, before that, I have to uninstall the useless software that comes with the laptop. Why can’t Dell let me choose a clean Windows install without all that software clogging my registry? I am still not used to the smaller keyboard and the touch pad; those will take me a while to get used to. I will post a more complete review of my laptop after I have had the chance to use it for a longer period.

Reinstall

Today continues the painful process of installing software. I got most of the software installed, then realized I had forgotten to copy two very important pieces of information from my old hard drive: my Firefox bookmarks and my Palm Desktop contact list. I wonder why all the software can’t save its data in one centralized location, so I only have to drag one directory every time I update my system.

Computer Crash

Fortunately, I was able to recover the data from my harddisk after running fixboot, fixmbr, chkdsk /p and fdisk /mbr from the recovery console on the Windows XP CD. Unfortunately, the OS is beyond repair; I can’t even start the file explorer or go online. To reinstall my computer, I have to find some disk space to host my data while I reformat the harddisk. So I ended up buying a 200GB harddisk for backup, and since I was going to open up the case anyway, I got myself an extra 1GB of RAM. Now my system has 1.5GB of RAM and 440GB of harddisk space in total. I’m planning to use the new harddrive and the 512MB of RAM in the MS cheap computer though. I still haven’t decided whether I should take advantage of that deal, but we will see how it goes.
To save time, I’m only going to install the essential software. I found I had installed way too much software that I had not even used once. It’s a painful and unproductive process that I want to get through as soon as possible.

On a side note, when I was messing around inside the case, I accidentally pressed the stand-by button on the cable modem without being aware of it. I called the support line and made myself look like a fool by asking why the internet doesn’t work when the cable modem is in stand-by mode. While I was waiting for tech support, the system kept playing meaningless messages. I found a very stupid one telling customers who have problems with their internet access to visit the support website. Duh! If I could visit the support website, I wouldn’t be calling this number. Go figure.

Dual monitors

Finally I received my dual LCDs at work, after waiting almost two months since I talked to Uncle Bob. Colleagues gathered around my cube today to check out the dual LCD setup. I am not sure how to measure the productivity increase from using dual displays, but it is definitely much more convenient. The moral of this story is that you will get what you want if you keep asking for it and ask the right person.

My Windows machine is dead and it keeps rebooting itself. I suspect some OS files are corrupted since it won’t even boot into safe mode. The only option I have is to re-install Windows. I hope the Windows recovery works so I won’t lose my data. Sigh.. I don’t have much time to re-install all the software.

cheap computer

I’m still struggling over whether I should take advantage of the computer bundle from Microsoft and AMD. The deal is part of the Tech Road Show package, which sells a 64-bit AMD CPU, an Asus motherboard and 64-bit XP for US$250. The hardware alone already costs $500. The catch is that you have to attend the show in person to pick up the package. My Linux box is running on an old P3 733MHz and it’s about time to upgrade. So on top of the MS/AMD bundle deal, I only need to shell out another $200-$300 for a case, RAM and a harddrive. The whole new computer would cost less than $600. My only concern is that I don’t have much time to set up the new server.

Follow this link to register.

Apple and intel

The only big news for all the geeks around the world today is that Apple announced it will ditch IBM/Freescale and switch to Intel processors in the future. That could mean the Macintosh strikes back after it lost the OS war to Microsoft. Every analyst said there will be a huge risk for Apple in migrating its software, which is the major differentiating factor between a Mac and a Windows machine. Those guys definitely lack technical insight into how OSX works. OSX is just a UI layer sitting on top of a FreeBSD kernel. It is not that complicated to port OSX to x86 machines; it probably only involves recompiling all the libraries. I can already sense that Steve Jobs is trying to get even with Bill Gates. Here is my prediction: somehow OSX for x86 will be hacked to work on non-Mac Intel machines. There will be a version someone leaks onto all the major peer-to-peer download sites. Once OSX for x86 has gathered enough momentum in the underground world, Apple will launch it as an alternative to replace Windows. The battle of the OSes is getting interesting once again. Windows, OSX or Linux, who will dominate the desktop market?