Hardware Synthesis

. llms
. compilers
. supercompilers
. silicon
. tapeout

References

https://tinytapeout.com
https://www.youtube.com/watch?v=6vC3t_soJok
https://www.forbes.com/sites/patrickmoorhead/2023/05/04/synopsysairevolutionizing-chip-design-through-ai-driven-eda-suite/?sh=1ac8d66c4637
https://www.mckinsey.com/capabilities/operations/our-insights/how-generative-design-could-reshape-the-future-of-product-development
https://www.ptc.com/en/technologies/cad/generative-design
https://arxiv.org/abs/2203.11005
https://www.youtube.com/watch?v=D--sSNKiVXg
https://www.youtube.com/watch?v=VsUF_CBJq50
https://sci-hub.se/https://link.springer.com/book/10.1007/978-3-030-39284-0

Introduction

Let's talk about tooling for the push toward the next industrial revolution.

Data is the currency of the present, but data is collected from activities that humans perform in an artisanal way, and almost all of it has already been captured by Big Tech corporations like Google, Microsoft, Facebook, and Apple, so there is little room left for growth in data collection.

That leads us to the next step: machine-generated data, and the oncoming revolution of language models capable of creating data at an industrial scale never seen before, a new market being valued at tens of trillions of USD.

For that to happen, in this new gold rush (as Nvidia, now the most valuable company in the world, calls it), a new set of tools is required: tools that allow for faster hardware development. Every one of the big companies is developing its own chips now.

The cost of manufacturing a microchip has dropped so much that most of the cost is now engineering, not manufacturing. It has dropped low enough that low-volume production of dedicated hardware, built to accelerate dedicated tasks, is entirely feasible, for volumes as low as one thousand units.
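
A rough back-of-the-envelope calculation shows why (the dollar figures here are illustrative assumptions, not real quotes):

```python
# Back-of-the-envelope: amortizing one-time engineering cost (NRE) over
# production volume. All dollar figures are illustrative assumptions.

def cost_per_unit(nre_usd: float, unit_cost_usd: float, volume: int) -> float:
    """Effective per-unit cost once the one-time NRE is amortized."""
    return nre_usd / volume + unit_cost_usd

# Old regime: a multi-million-dollar run makes 1,000 units absurd.
print(cost_per_unit(nre_usd=20_000_000, unit_cost_usd=5, volume=1_000))  # 20005.0
# New regime: a tens-of-thousands run makes the same volume reasonable.
print(cost_per_unit(nre_usd=50_000, unit_cost_usd=5, volume=1_000))      # 55.0
```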

Hardware Process

Microchips are usually developed through a linear (waterfall) cycle of architecture, design, verification, implementation, validation, testing, and only then manufacturing, and every step of the process carries a high cost of manual human labour. What we are proposing is changing everything about the way things are designed, now that manufacturing has become cheaper. It's the same strategy SpaceX takes in not being afraid of blowing up some rockets, instead of making a perfect design a priori because manufacturing is expensive. When manufacturing a chip costs tens of millions of USD, you can't take that risk; but when manufacturing costs only tens of thousands, one or two orders of magnitude less, it's possible to use a process that's faster and cheaper, and to fail fast.

Hardware Testing

The process is actually more involved and requires a lot of testing and manual checking, as depicted in the picture above of the V-model of hardware development. If we could automate every single step so that the engineer could focus on what matters, the algorithmic model, we could bring hardware closer to the way software is done.
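
A sketch of what that automation could look like, with each stage reduced to a machine-checkable gate (the stage names and checks here are illustrative, not a real EDA flow):

```python
# Hypothetical sketch: each V-model stage as an automated check, so the
# engineer only touches the algorithmic model. Illustrative, not a real flow.
from typing import Callable

Stage = tuple[str, Callable[[dict], bool]]

PIPELINE: list[Stage] = [
    ("design",         lambda m: "rtl" in m),           # RTL was generated
    ("verification",   lambda m: m.get("tests_pass")),  # testbench is green
    ("implementation", lambda m: m.get("timing_met")),  # place & route closed timing
    ("validation",     lambda m: m.get("power_ok")),    # power budget respected
]

def run_flow(model: dict) -> bool:
    for name, check in PIPELINE:
        if not check(model):
            print(f"flow stopped at stage: {name}")
            return False
    print("ready for tapeout")
    return True

run_flow({"rtl": "...", "tests_pass": True, "timing_met": True, "power_ok": True})
```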

That paves the way for software-defined hardware: creating hardware in a way that's similar to how software is created, then sending it out to be manufactured in low volumes and getting it back in mere weeks. That's already the reality for PCBs (printed circuit boards) and even enclosures (3D printing), and soon it will be the reality for microchips as well. We do have hardware description languages like Verilog that partially automate the process of creating hardware, but they aren't high-level enough yet, mainly because they were made for simulation and testing rather than for specifying high-level designs.
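
As a toy illustration of that direction, the sketch below uses an invented mini-generator (not any real tool) to describe an adder at a higher level in Python and emit the corresponding low-level Verilog:

```python
# Toy sketch (invented, not a real tool): describe hardware at a higher
# level in Python and emit the corresponding low-level Verilog.

def adder_module(name: str, width: int) -> str:
    """Emit Verilog for a parameterized unsigned adder with carry-out."""
    return (
        f"module {name} (\n"
        f"    input  [{width - 1}:0] a,\n"
        f"    input  [{width - 1}:0] b,\n"
        f"    output [{width}:0]     sum\n"
        f");\n"
        f"    assign sum = a + b;\n"
        f"endmodule\n"
    )

print(adder_module("adder8", width=8))  # one parameter, any bus width
```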

Software Process

The way software is usually constructed is that we write a specification and then codify the software through an iterative loop that gathers information and tests assumptions as we go. The specification and the source code are the only artifacts created in an artisanal way; the binary that's the end result is generated entirely by machines. This allows software to have a tight loop of development, so the tools can do more automated work, because the cost of getting it wrong is just a small amount of time.
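
A minimal sketch of that tight loop, assuming (purely for illustration) a project driven by a `make test` target:

```python
# Sketch of the software "tight loop": rebuild and retest after each change.
# The build command is an illustrative assumption, not a specific project.
import subprocess

def iterate(build_cmd=("make", "test"), max_attempts=10) -> bool:
    for attempt in range(1, max_attempts + 1):
        result = subprocess.run(build_cmd, capture_output=True)
        if result.returncode == 0:
            print(f"green after {attempt} attempt(s)")
            return True
        print(f"attempt {attempt}: red, fix and retry")  # failing is cheap here
        input("press enter after editing the code...")   # human in the loop
    return False
```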

Proposed Process

What we're proposing is synthesizing the entire hardware design from a higher-level software specification, using an old idea called supercompilation. Instead of compiling code to run on a CPU that's manually engineered at great cost (tens of millions of USD), we could in theory spend the engineering time on a compiler capable of emitting a hardware design specific to the software that's going to run on it; the software then gains a great performance advantage.
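 
To give a flavor of the idea, here's a minimal Python sketch of the specialization principle behind supercompilation. This is hand-rolled partial evaluation, a simplified stand-in for a real supercompiler, and the mini instruction set is invented:

```python
# Minimal sketch of the idea behind supercompilation: specialize a general
# machine to one fixed program, so the generality is compiled away.

def interpret(program, x):
    """A tiny general-purpose 'CPU': executes a list of instructions."""
    for op, arg in program:
        if op == "add": x += arg
        elif op == "mul": x *= arg
    return x

def specialize(program):
    """'Supercompile' the program: emit a dedicated function (the 'ASIC')."""
    body = "def f(x):\n"
    for op, arg in program:
        body += f"    x {'+' if op == 'add' else '*'}= {arg}\n"
    body += "    return x\n"
    ns = {}
    exec(body, ns)  # a real flow would emit an HDL design here instead
    return ns["f"]

prog = [("mul", 3), ("add", 1)]
assert interpret(prog, 5) == specialize(prog)(5) == 16
```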

Prior Art

There is prior art on this topic, as well as newer takes on the idea, including the possibility of using generative text models to speed up the testing of designs.

What we propose would require, at most, investment in software development and possibly a (commercial) integration with a fab, the factory that actually does the manufacturing. Since our proposal covers only the logical part of the design, manufacturing is an entirely different matter that's better left to the factories themselves. Most microchip development is in fact done by fabless companies that outsource the actual manufacturing to companies like TSMC; even Apple does that. So it's entirely possible for a startup to come up with newer and cheaper ways of creating microchips, optimized for the extreme amount of data that's about to be generated by machines.

There isn't much written on supercompilation because it's still a green field with little work done, as the industry focused too heavily on CPUs, which are general-purpose machines, and only used custom special-purpose machines for tasks performed at high volume, like decompressing MP3 files.

Computing

Meanwhile, the entire explosion in AI (or machine learning, the proper technical term) is due to a paradigm shift in hardware: CPUs process data in small chunks in series, while GPUs process large amounts of vector data (in the mathematical sense) in parallel. Where a typical CPU has around 4 cores, a GPU has hundreds of units, and top-of-the-line GPUs have thousands of processing units capable of computing simpler mathematical operations in parallel. ASICs, in turn, are entirely custom microchips made through the process described above.
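
In miniature, the paradigm shift looks like this, with NumPy standing in for the parallel hardware:

```python
# Serial "CPU style" versus parallel "GPU style" on the same computation.
import numpy as np

data = np.arange(100_000, dtype=np.float64)

# CPU style: one scalar at a time, in series.
total = 0.0
for v in data:
    total += v * 2.0

# GPU style: one vectorized operation applied to every element at once;
# on real GPU hardware each element could map to a separate processing unit.
total_vec = float((data * 2.0).sum())

assert total == total_vec
```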

But GPUs are already limited by the fact that their design is still a general-purpose machine, if a more specific one. The need for heterogeneous computing will keep pushing hardware to become more like software, so the idea of a hybrid compiler capable of doing both tasks seems to be the natural direction to take.