Automation is just transforming information from one format to another; it won’t make the information any easier to work with. Abstraction drops irrelevant information and keeps only the useful information at the right layer. Abstraction can scale vertically; automation can only scale horizontally.
Design and Reuse, by David Murray
I’ve noticed recently that the word ‘automation’ can be used very loosely in the EDA industry as a presumption of productivity and quality. I’ve recently been working with some legacy customer flows on an IP integration process that was 100% ‘automated’ from an Excel sheet. This Excel sheet was written to a CSV text file, which was then parsed with Perl to create an RTL output. As the solution evolved, however, and the requirements grew more complex, another set of Perl scripts was deployed which directly manipulated the RTL file. In fact, this Perl included snippets of RTL code to insert into the output. So while technically the process was 100% automated, this type of text manipulation brought the level of abstraction even lower than the RTL level.

I came across similar types of ‘automation’ in my previous life as a design engineer, where automation was considered the ability to record keystroke macros within a text editor. Again, this automation was at a very granular, low level of abstraction and consisted of no more than creating repeatable, but not very reusable, small steps. No matter the claimed level of automation of a process, a simple fact remains: automation without abstraction is like a bicycle without pedals.
A bicycle without pedals!
The Laufmaschine or ‘running machine’, a brilliant concept, was realized by the German Baron von Drais in 1817. Described as “a mechanical machine with two, in-line wheels and the ability to steer”, the Laufmaschine could get you from A to B in a more efficient fashion while keeping both feet on the ground (and probably meant a new set of shoes every week). In initial trials, the Laufmaschine was able to reach running speeds with walking effort. The Laufmaschine was also called a velocipede, meaning ‘fast foot’, as well as the ‘swiftwalker’ (a marketing term if there ever was one). Its goal was to make walking or running more efficient, and it successfully achieved this.
In 1863, Frenchman Pierre Lallement modified a two-wheeler in Paris and attached pedals, forever changing the concept. The introduction of the pedal took fast walking to a new level. With the advent of gears, the efficiency of man travelling was catapulted way ahead of our counterparts in the animal kingdom. Man on a pedal-enabled bicycle was 100 times more efficient than man walking. Man was now CYCLING!
So how does this relate to SoC realization and IP integration? As the number of connections required to assemble a system grew, manual connectivity was replaced with in-house (as well as outhouse!) scripting solutions without really changing the assembly concept. The scripts – be they Excel VBA, Perl or Python – were performing low-level data manipulation rather than high-level abstraction. They were still trying to deliver better walking efficiency.
These scripted solutions are essentially ‘swift-stitch’, or ‘veloci-wire-up’, style environments performing low-level connectivity manipulation. As complexity inevitably increases, the efficiency of scripting simply isn’t scaling. A new concept is needed for IP integration; a higher level of abstraction is needed to boost the SoC realization process.
So, what did we learn from the Laufmaschine? With pedals, Mr Lallement found a way of synthesizing one motion into another, increasing efficiency by an order of magnitude in the process. Pedals were in fact a very small but highly significant change to the methodology – CYCLING was the result! We need the same paradigm shift in SoC realization, replacing ad-hoc IP stitching solutions with a new methodology called ‘weaving’.
Like pedals, ‘weaving’ doesn’t seem like a massive leap in innovation, but it delivers a quantum leap in efficiency. Weaving takes what these integration scripts were doing time and time again and abstracts it up to a specification language that defines how to integrate IPs and systems.
The key to this solution is a simple but powerful IP integration specification language that allows engineers to capture a high-level integration specification as a set of rules defining how IP is integrated. These rules contain powerful assembly instructions and link with formal port/interface definitions, such as IP-XACT.
An export rule, when run on a sub-system containing multiple IP instances, will export any ports that have been formalized through attributes or IP-XACT. The selected ports will be created on the sub-system boundary and connected to the originating instances. The rule will also maintain any packaging metadata that was stored with the ports, e.g. properties, IP-XACT interface mappings, etc.
The export instruction has a range of options that control the intended port creation. Other instructions include connect, tieoff, group/split (for hierarchical manipulation), etc., and work at both the port and interface levels. An integration specification can therefore be considered a collection of these rules.
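To make the rule concept concrete, here is a minimal Python sketch of how an export and a tieoff instruction might behave. Everything here is illustrative: the class names, the `export`/`tieoff` functions and the attribute-matching scheme are assumptions made for this post, not the actual specification language or any vendor API.

```python
# Illustrative sketch only: these classes and rule functions are hypothetical,
# not the real integration specification language.
from dataclasses import dataclass, field

@dataclass
class Port:
    name: str
    direction: str                              # "in" or "out"
    attrs: dict = field(default_factory=dict)   # packaging metadata (properties, IP-XACT mappings)

@dataclass
class Instance:
    name: str
    ports: list

@dataclass
class SubSystem:
    name: str
    instances: list
    boundary_ports: list = field(default_factory=list)
    connections: list = field(default_factory=list)   # (from_endpoint, to_endpoint) pairs

def export(subsys: SubSystem, match_attr: str) -> None:
    """Export rule: promote every instance port carrying `match_attr`
    to the sub-system boundary, connect it back to its originating
    instance, and carry the packaging metadata along with it."""
    for inst in subsys.instances:
        for port in inst.ports:
            if match_attr in port.attrs:
                top = Port(f"{inst.name}_{port.name}", port.direction, dict(port.attrs))
                subsys.boundary_ports.append(top)
                subsys.connections.append((f"{inst.name}.{port.name}", top.name))

def tieoff(subsys: SubSystem, inst_name: str, port_name: str, value: int) -> None:
    """Tieoff rule: drive an unconnected input with a constant value."""
    subsys.connections.append((f"{inst_name}.{port_name}", f"const_{value}"))

# A two-instruction 'specification' applied to a toy peripheral sub-system:
uart = Instance("uart0", [Port("irq", "out", {"export": True}),
                          Port("scan_en", "in")])
periph = SubSystem("periph", [uart])
export(periph, "export")              # promotes uart0.irq to the boundary as uart0_irq
tieoff(periph, "uart0", "scan_en", 0) # ties the scan enable low
```

The point of the sketch is the shape of the flow: one declarative instruction fans out over however many instances and ports match it, which is exactly where the leverage over port-by-port scripting comes from.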
How can such a small change in abstraction lead to massive efficiency gains?
A rule can be reused multiple times on many sub-systems and also on multiple projects, so the reuse potential is huge. The design intent is also clear, concise, and easy to understand and review. In a sample peripheral sub-system, 3 rules (with 9 instructions) auto-generate 1434 lines of structural HDL code. Similarly, at the top level of a large chip, 21 rules (120 instructions) give 11188 lines of HDL – an average of 96 lines of HDL code created per instruction. (Much like one revolution of the pedals giving you 100m in distance!)
Specifications and Scripts
The specification language is easy to understand and familiar to people working in the domain. There are only a handful of instructions to learn. So what is the difference between specifications and scripts?
* Rules are executable, synthesizable specifications whereas scripts tend to be ad-hoc implementations.
* Specifications by their nature capture design intent and raise the level of design abstraction, so the design intent can be very clear. Scripts are low level, and design intent cannot be as clear.
* Specifications are more formalised and are more stable than the resulting implementation. Scripts tend to be very implementation specific and implementation sensitive.
* Rules are essentially specifications, whilst scripts are code.
Scripting will always play a role in design automation and should be considered the essential glue of a process. Scripts should handle corner cases, tweaks and nuances, but because of their ad-hoc nature and the resulting instability and unpredictability, scripts should be kept to a minimum and should not form the central part of an integration flow. Scripts offer flexibility and a way to get out of certain holes, but long-term, strategic solutions require a more automated AND abstracted approach. Scripting gives a context for what you are trying to do with the data, and because of this it provides a pointer as to where the automation is going – probably much like what went through the Frenchman’s mind when he envisaged pedals. He would probably not have come up with a bicycle without first seeing the original Laufmaschine.
It is time to raise the level of abstraction and aim to finally become 100 times more efficient at the IP integration process.