Many (core) Moore (Part III – Computing Epochs)

Back to the past – This is part III of a four-part story of the computing epochs, as punctuated by Moore’s law, in which Intel had its imprint for obvious reasons.

This is the 2003-2020 era, in which multi-core, open source, virtualization, cloud infrastructure, and social networks all blossomed. Its onset was the end of MHz computing (Pentium IV) and the shift to multi-core and throughput computing.

It was also the beginning of my exit from semiconductors for a ‘brief’ period (20 years), until I decided it was time to get back in the 2020s. That shift was punctuated by the first mainstream multi-core CPUs that Sun enabled – famously known as the Niagara family – and of course the lesser known UltraSPARC IIe, which offers an interesting contrast to Intel’s Banias (back to Pentium).

Some would call it the Web2 or Internet 2 era. The dot-com bubble, which blew up a number of companies in the prior era (the OEM era), paved the way for new companies to emerge, thrive, and establish the new stack. Notably, at the infrastructure level, Moore was well ahead: the first multi-core CPUs enabled virtualization and accelerated the decline of other processor companies (SPARC, MIPS) and system OEMs as the market shifted from buying capital gear to cloud and opex.

Semiconductor investments started to go out of fashion as Intel dominated and other fabs (TI, National, Cypress, Philips, ST and many more) withered, leaving Intel and TSMC with an also-ran GlobalFoundries. In the same period, architectural consolidation around x86 happened along with Linux, while ARM emerged as the alternative for a new platform (mobile) via Apple. Looking back, value shifted from vertical integration (fab + processors) to the SoC, and thus IP (ARM) became dominant despite many attempts by processor companies to get into mobile.

Convergent with the emergence of iPhone/Apple/ARM were AWS EC2 and S3, and thus the beginning of cloud, with opex as the new buying pattern instead of capex. This had significant implications: a decade later, that very shift to commodity servers and opex came full circle via Graviton and TPU, with the cloud providers going vertical and investing in silicon. Intel’s lead in technology enabled x86 to dominate, and when that lead slowed – thanks to Moore’s law and TSMC – the shift towards vertical integration by the new system designers (Amazon, Google, Azure) began.

Simultaneously, the emergence of ML as a significant workload that demanded new silicon types (GPU/TPU/MPU/DPU/xPU) and programming middleware (TensorFlow and PyTorch) broke the shackles of Unix/C/Linux, giving way to new frameworks and a new hardware and software stack at the system level.

Nvidia happened to be at the right place at the right time (one can debate whether the GPU is the right architectural design), but certainly the seeds of the new category – a system built from CPU + xPU – were sown by the mid 2010s…

All of the shift towards hyperscale distributed systems was fueled by open source. Some say Amazon made all its money by reselling open source compute cycles. Quite true. Open source emerged and blossomed with the cloud, and eventually the cloud would go vertical, raising the question: is open source a viable investment strategy, especially for infrastructure? The death of Sun Microsystems, hastened by open source, and the purchase of RedHat by IBM formed the bookends of open source as the dominant investment thesis of the venture community. While open source is still viable and continues to thrive, it was no longer front and center as a disruptor or primary investment thesis by the end of this era, as many more SaaS applications took the oxygen.

We started with 130nm and 10 layers of metal, with Intel taking the lead over TI and IBM, and ended with TSMC’s 10nm taking the lead over Intel. How did that happen? Volumes have been written on Intel’s mis-steps, but clearly the investment into 3D XPoint – trying to innovate with new materials and new devices to bridge the memory gap – did not materialize. It was a good idea addressing an important technology gap, but picking the wrong material stack proved a costly distraction.

The companies that emerged and changed the computing landscape were VMware, open source (many), Facebook, Apple (mobile), and China (as a geography). The symbiotic relationship between VMware and Intel is best depicted in the chart below.

Single core to dual socket multi-core evolution…

On the networking front, the transition from 10Gbps to 100Gbps (10x) over the past decade is one of the biggest transformations in networking’s adoption of custom silicon design principles.

The chart above shows the flattening of the OEM business while the cloud made the pie larger. OEMs consolidated around the big 6 (Dell, HPE, Cisco, Lenovo, NetApp, Arista) and the rest withered.

GPU/xPU emerged as a category, along with a resurgence in semiconductor investments (50+ startups with $2.5B+ of venture dollars). Generalization of the xPU with a dual heterogeneous socket (CPU + xPU) is becoming the new building block for a system, thanks to CXL as well. The associated evolution and implications for the software layer were discussed here.

We conclude this era with the shift from the 3-tier enterprise (‘modern mainframe’) stack serviced by OEMs to distributed systems as implemented by the cloud providers, where use case (e-commerce, search, social) drove the system design – whereas technology (Unix/C/RISC) drove the infrastructure design in the prior era (a note on that is coming…).

In summary – Moore’s law enabled multi-core, virtualization, and distributed systems, but its slowdown opened the gates for new systems innovation, and thus new companies and a new stack, including significant headwinds for Intel.

Let’s revisit some of the famous laws by famous people…

  1. Original Moore’s law (cost, density) – Bill Joy changed it to performance scaling. It is certainly slowing down, and the focus of performance moved from latency to throughput. It needs an update for the ML/AI era, which demands both latency and throughput.

  2. Metcalfe’s Law – Still around. See the networking section.

  3. Wright’s Law (demand and volume) – https://ark-invest.com/articles/analyst-research/wrights-law-2/ – this predates Moore’s law and now applies to many more domains: batteries, biotech, solar, etc.

  4. Elon’s law (a new one…) – The optimal alignment of atoms, and how far your error is from it. We are approaching that limit.

  5. Dennard scaling – Power limits are being hit. Liquid cooling is coming down the cost curve rapidly.
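As a side note, the contrast between Moore’s law (calendar-driven) and Wright’s law (cumulative-volume-driven) is easy to see in a toy model. The sketch below is illustrative only – the doubling period and learning rate are assumed parameters, not industry data:

```python
import math

# Toy comparison of two cost laws. All parameters are illustrative
# assumptions, not measured industry data.

def moore_cost(cost0, years, doubling_period=2.0):
    """Moore's law as a time-based curve: unit cost halves every
    `doubling_period` years, regardless of how many units ship."""
    return cost0 * 0.5 ** (years / doubling_period)

def wright_cost(cost0, cumulative_units, learning_rate=0.20):
    """Wright's law: every doubling of *cumulative volume* cuts unit
    cost by `learning_rate` (a 20% learning curve here)."""
    b = math.log2(1.0 - learning_rate)  # progress exponent (negative)
    return cost0 * cumulative_units ** b

# Moore's law is driven purely by the calendar; Wright's law by demand,
# which is why it generalizes to batteries, solar, biotech, etc.
print(moore_cost(100.0, 4))    # two doubling periods elapsed
print(wright_cost(100.0, 4))   # two doublings of cumulative volume
```

The practical difference: under Wright’s law a surge in demand pulls the cost curve forward, while under a pure time-based reading of Moore’s law it cannot.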

Intelligrated ……

Ben Thompson of Stratechery, in his recent blog on the Intel split, prompted me to coin the word “Intelligrated”, which is a counterpoint to his thesis. No, it’s not in the dictionary. Before we get to that, let’s start with one topic he brings up, as it is near and dear to me and many of my old fellow chip nerds from that time (1987-2003), which I would call the EDA 1.0 era.

EDA changed the microprocessor roadmap starting circa 1987 and continuing through the late 1990s: Ben references Pat’s paper on Intel’s EDA methodology, which scaled design methodology to track Moore’s law. Intel invested heavily in EDA internally, as the industry was immature. Around the same time, Sun Microsystems, which built its business selling EDA/MCAD workstations, was changing the industry EDA landscape (methodology and eco-system). [An aside: I would not be surprised if x86 CPUs up to Pentium IV were designed using Sun workstations.] Both companies had parallel but different approaches.

EDA 1.0: Intel vs Sun approach: Sun’s approach was to use industry tools, and where they did not exist, to enable the industry eco-system to build them. It perhaps started in 1989 when a motley crew of 25 engineers (including yours truly) built the first CMOS SPARC SoC (Tsunami – referenced here) with no prior experience in custom VLSI. We all came out of a cancelled ECL SPARC microprocessor where none of us had done any custom VLSI design. The CAD approach was…

Necessity is the mother of invention. Sunil Joshi captured the EDA methodology of that time in the MicroSPARC Hot Chips presentation. The 486 (Pat’s chip) had 1.2M transistors (100+ engineers, perhaps), versus the more integrated MicroSPARC at 800K transistors, which came 2 years later (around the same time as the Pentium, which had 200+ engineers) but was a full SoC. As noted, we had only 2 mask designers, and every engineer had to write RTL, synthesize, verify, do their own P&R, and run timing analysis; two of us built the CAD flow so that it could be push-button. Auto-generated standard cells were automatically placed and routed using compiler tools. That was not the norm for ‘custom VLSI’ circa 1991 at that scale.

That eventually got the name ‘construct by correction vs correct by construction’, and throughout the 1990s this approach evolved, scaled, and made our processor design competitive, as well as raising a lot of boats in the EDA industry – which eventually got Intel to adopt industry tools alongside a healthy mix of in-house tools. With no in-house EDA teams, we creatively partnered (Synopsys, Mentor), invested (Magma, mid 1990s), helped M&A (Gateway Design – Verilog; Cooper & Chyan – Cadence), and spun out (the Pearl timing analyzer to replace Motive). Meanwhile, IBM’s EDA tools were superior to both Sun’s approach and Intel’s, but they were locked up inside IBM until later in the decade, when it was too late.

In parallel, there was a great degree of systems innovation (SoCs, glue-less SMP, VIS, Ethernet, graphics accelerators, multi-core + threading) that was enabled by EDA 1.0 and CMOS custom VLSI across the industry at large, with Sun leading the parade. Allow me to term it ISM 1.0 (Integrated Systems).

Now, IDM 1.0 is what enabled Intel to beat all the RISC vendors. We (Sun and compatriot RISC vendors) could beat Intel in architecture, design methodology, and even engineering talent, and in some cases – like Sun, which had the OS and platform – we built a strong moat. But we could not beat Intel on manufacturing technology. Here’s a historical view of the tech roadmap.

Tech History (litho nm only)

Caution: given the dated history, some data from the 1990s could be incorrect – please notify me of corrections.

In a prior blog I called out how Intel caught up with TI and IBM on process technology by 1998 (Intel was the manufacturing leader but not the technology leader w.r.t. transistor FOM or metal litho until 1998). TI process technologists used to complain, ‘At Intel, design follows fab, and you folks are asking fab to follow design’, as we demanded transistor FOM and metal litho more than Moore in the 1990s. By 1998, with Coppertone, Intel raced ahead on both litho and transistor FOM (a 60% improvement at 180nm with Coppertone to boost Pentium MHz). So Intel was not the transistor FOM leader in the early 1990s, but they pulled ahead by 1-2 generations by the late 1990s. When they pulled ahead is when IBM and TI became non-competitive for high-end microprocessors (starting 1998) – the beginning of our end, and of my departure from microprocessors (unless I went to Intel). (Side note: both TI and Intel bet on BiCMOS until 650nm.) History is unlikely to repeat itself, as the dynamics are different today with consolidation, but it has been done before by Intel.

Intel’s leadership with IDM 1.0, and its co-opting of ISM 1.0 architectural elements into its processors (by 2002: multi-core, glue-less SMP, MMX, integrated DRAM controllers), made it difficult for fabless CPU companies to thrive despite having systems businesses to fund them – funding that was no longer sufficient by 2002. Even 500K CPUs/year (Sun/SPARC) was not economically justifiable. IBM, SGI, HP and many more dropped out as the cost of silicon design and technology went up. [Side note: I am not sure Graviton is economically viable for Amazon on a standalone basis – if 500K CPUs was not viable in 2000, 50K is certainly not viable in 2020. Sure, they can wrap other elements in the TCO stack to justify it for a few generations, but that is not sustainable over 3-5 generations.] Regardless, 20 years later…

IDM 2.0 is a necessary first step, and it is good that Pat & Intel are putting that focus back. But IDM 2.0 needs ISM 2.0 and the same chutzpah of the late-1980s design/EDA innovation – but this time, perhaps, owning the ‘silicon systems platform SW stack’.

ISM 2.0 is ‘Integrated Systems 2.0’. If the SoC was the outcome of Moore’s law in the 1990s, the SoP (System on Package) is the new substrate to re-imagine systems, as the platform becomes heterogeneous in compute (CPU, DPU, IPU, xPU), memory (DRAM, 3D XPoint, CXL memories), and networks (Ethernet and CXL). There will always be a CPU (x86 or ARM), but increasingly we will find a DPU/IPU/xPU in the system that sweeps up all the new workloads. The IPU/DPU/GPU/xPU will increasingly be an SoP with a diversity of silicon types to meet the diversity of workload needs. But it will need a common and coherent software model to be effective in enabling platforms and workloads, including low-level VMs or run-times with standard APIs (e.g. P4/SONiC, PyTorch + TF, and others).
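To make the ‘common and coherent software model’ idea concrete, here is a minimal sketch – entirely hypothetical names, not any real framework’s API – of one stable operation fronting per-device backends, the way a run-time with standard APIs might hide CPU vs xPU diversity from the workload:

```python
# Hypothetical sketch: one stable API ("matmul") over a heterogeneous
# CPU + xPU socket. The registry and backend names are illustrative.

BACKENDS = {}

def register(device):
    """Decorator: register an implementation for a device type."""
    def wrap(fn):
        BACKENDS[device] = fn
        return fn
    return wrap

@register("cpu")
def matmul_cpu(a, b):
    # Naive triple loop: the portable fallback every platform has.
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][x] * b[x][j] for x in range(k)) for j in range(m)]
            for i in range(n)]

@register("xpu")
def matmul_xpu(a, b):
    # Stand-in for an accelerator kernel; here it just reuses the CPU
    # path, but a real run-time would dispatch to device code.
    return matmul_cpu(a, b)

def matmul(a, b, device="cpu"):
    """The one API applications call; the run-time picks the backend."""
    return BACKENDS[device](a, b)
```

The point is not the (trivial) kernels but the shape: applications bind to `matmul`, while the SoP vendor can swap whatever sits behind each device key without breaking the workload.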

On the economies of scale which killed the RISC revolution (amongst other reasons): I have written about SoC vs SoP in a different blog here, but it is important to consider the diversity of customers – from OEM platforms to cloud platforms, emerging telco/edge service providers, and emerging ML/AI or domain-specific service providers – that together represent a large TAM. Each one needs customization; i.e. no more one-size-fits-all platforms, but multiple chips into multiple SoPs, packaged and delivered in different ways to the new channels of delivery – OEM (single system), cloud (distributed systems), and the emerging decentralized cloud. But to retain the economies of scale of chip manufacturing while delivering customized solutions to the old and new categories of customers, we are moving towards disaggregation at the component level and aggregation at the platform software level.

Just to get a better sense of the varied sales motions: Nvidia is a chip company, a box company (Mellanox switches and DGX), as well as a cloud company (delivering ML/AI as a service w/ Equinix).

This is more than the multi-core and virtualization shift that happened circa 2003 (VMware). An entire new layer of software will be a key enabler for imagining the new ‘integrated systems’ and their delivery. For lack of a better TLA, let me call it EDA 2.0. Assembling this variety of SoP solutions requires new design tooling to enable customization of the ‘socket’. The old mantra was: sell 1 chip SKU in the millions. That is still true. But now we will have multiple SoP SKUs using the same multi-million-unit chip SKU. The tooling to assemble these SoPs needs not only manufacturing-time programmability but also field programmability, as there will be FPGA elements in the SoP as well as low-level resource management functionality.

Hijacking the OSI model as a metaphor to represent the infrastructure stack…

7 layers of Infrastructure Stack
Bottom 3 layers form the emerging silicon systems plane

The homogeneous monolithic CPU has now become a heterogeneous CPU + DPU/IPU/xPU/FPGA. Memory has gone from homogeneous DDR to DDR + 3D XPoint, on both the DDR bus and the CXL bus.

So ‘Integrated Systems’ is an assembly of these integrated chips based on target segments and customers, while managing the three axes of flexibility vs performance vs cost. While the silicon retains one mask set for most target segments (economics), customization at the packaging level enables the new ‘integrated system’ (the bottom three layers in the visual above). This new building block will become complex over time (hardware and software), and simplification of that complexity results in value extraction – but it requires capital and talent. Ironically, both exist either with the chip vendor or with the cloud service provider, the two ends of the current value chain.

The pendulum is starting to swing back: from the era of vertical integration, when OEMs owned silicon (Sun, HP, IBM) or chip companies owned fabs (Intel) (1980-2000); to horizontalization with the success of fabless chip companies (Nvidia, Broadcom…) + TSMC (2000-2020); and now again to vertical integration at the sub-system level for certain segments or markets.

Back to Intel split vs Intel integrated. If there is any lesson learnt from the EDA 1.0 era, it is that it would be smarter to build, buy, and partner – i.e. build the new tooling (EDA 2.0 and BIOS 2.0) through a smart combination of build, buy, and partner; expand into invest and open source; and build a moat around that eco-system that will be hard for competing chip companies to match. EDA 2.0 is not the same as EDA 1.0 – it spans both pre-manufacturing design tools and low-level programming and resource-management frameworks. Directionally, some of it is captured here by Chris Lattner (MLIR & CIRCT). We have a chicken-and-egg situation: to create that layer we will need new silicon constructs, but to get to the right silicon, you will need the new layer (akin to how Unix+C enabled RISC, and RISC in turn accelerated Unix and C – referenced here…).

Coming back to Intel vs TSMC and splitting: TSMC is good at manufacturing but has not (yet) built eco-systems and platforms at higher levels – its fabless customers have. Intel knows that and has done it many times over. I make the case that Intel integrated – with IDM 2.0 and ISM 2.0, and with the flexibility to deliver SoC, SoP, and even rack-level products to emerging decentralized-edge cloud providers – will be the emerging opportunity.

Splitting the company would devalue the sum of the parts relative to the whole in this case. While the argument for a split – the fab gaining accountability, customer focus, and serviceability – has merit, there are perhaps other ways to achieve the same without a formal split, while retaining the value of the integration of the parts.

Smart integration of the various assets and their delivery – creatively combining design engineering, EDA 2.0, and BIOS 2.0, and linking them to SoP and SoC manufacturing, including field-level customization – would be a huge value-extraction play. Think of the Apple vs Android model of margin share vs market share: IDM 2.0 with ISM 2.0 can get both market share and margin share for dominant compute platforms.

A POV – Intel has to do 3 things: IDM 2.0 (under way), ISM 2.0 (which I will elaborate in a future note), and something else (aligned with Intel’s manufacturing DNA) that is truly out of the box – before 2030, when it is going to be economically and physically hard to get more from semiconductors as we know them. That has to be put in place between now and 2030…

————————————————————-

References to EDA at Sun…

Historical CPU/Process Node Data..

Open Systems to Open Source to Open Cloud.

“I’m all for sharing, but I recognize the truly great things may not come from that environment.” – Bill Joy (Sun Founder, BSD Unix hacker) commenting on opensource back in 2010.

In March, at Google Next ’17, Google officially called its cloud offering the ‘Open Cloud’. That prompted me to pen this note and reflect upon the history of Open (Systems, Source, Cloud) and the properties that make them successful.

A little-known tidbit of history is that perhaps the earliest open source effort of the modern computing era (1980s to present) was Sun Microsystems’. Back in 1982, each shipment of the workstation was bundled with a tape of BSD – i.e. a modern-day version of open source distribution (BSD license?). In many ways, much earlier than Linux, open source was initiated by Sun, and in fact driven by Bill Joy. But Sun is associated with ‘Open Systems’ instead of open source. Had the AT&T lawyers not held Sun hostage, the trajectory of open source would be completely different – i.e. Linux might look different. While Sun and Scott McNealy later tried to go back to those open source roots, the second attempt (20 years later) did not get rewarded with success.

My view is that the success of any open source model requires the following 3 conditions to make it viable – or perhaps sustainable – as a standalone business and distribution model.

  • Ubiquity: Everybody needs it, i.e. it is ubiquitous with a large user base.
  • Governance: Requires a ‘benevolent dictator’ to guide, shape, and set the direction. Democracy is anarchy in this model.
  • Support: A well-defined and credible support model. Throwing code over the wall will not work.

Back to Open Systems: Sun, early in its life, shifted to a marketing message of open systems rather effectively: publish the APIs, interfaces, and specs, and compete on implementation. It was powerful story-telling that resonated with the customer base, and to a large extent Sun was all about open systems. Sun used that to take on Apollo, and effectively out-marketed and out-sold Apollo workstations. The open systems mantra was Sun’s biggest selling story through the 1980s and 1990s.

In parallel, around 1985, Richard Stallman pushed free software, and the evolution of that model led to the origins of open source as a distribution model before it became a business model, starting with Linus and thus Linux. It is ironic that 15+ years after the initial sale of open systems, open source – via Linux – came back to impact Sun’s Unix (Solaris).

With Linux, the open source era was born (perhaps around 1994, with the first full release of Linux). A number of companies were formed, notably RedHat, which exploited the model and is by far the largest and longest-viable standalone open source company.

 


The open systems of the modern era perhaps began with Sun in 1982 and continued for 20-odd years, with open source becoming a distribution and business model between 1995 and 2015 (and continuing for another decade). 20 years later, we see the emergence of ‘open cloud’ – or at least the marketing term from Google.

In its 20 years of existence, open source has followed the classical bell curve of interest, adoption, hype, exuberance, disillusionment, and the beginning of decline. There is no hard data to assert that open source is in decline, but it is obvious from simple analysis that, with the emergence of the cloud (AWS, Azure, and Google), the consumption model of open source infrastructure software has changed. The big three in cloud have effectively killed the model, as the consumption and distribution of infrastructure technologies is rapidly changing. A few open source products in vogue today have reasonable traction but struggle to find a viable standalone business model: Elastic, Spark (Databricks), OpenStack (Mirantis, SUSE, RHAT), Cassandra (DataStax), amongst others. Success requires all three conditions: ubiquity, governance, and support.

The open source model for infrastructure is effectively in decline when you talk to the venture community. While it was THE model until perhaps 2016 – open source was the ‘in thing’ – the decline is accelerating with the emergence of the public cloud consumption model.

Quoting Bill (circa 2010), which says a lot about the viability of the open source model: “The Open Source theorem says that if you give away source code, innovation will occur. Certainly, Unix was done this way. With Netscape and Linux we’ve seen this phenomenon become even bigger. However, the corollary states that the innovation will occur elsewhere. No matter how many people you hire. So the only way to get close to the state of the art is to give the people who are going to be doing the innovative things the means to do it. That’s why we had built-in source code with Unix. Open source is tapping the energy that’s out there”. The smart people now work at one of the big three (AWS, Azure, and Google), and that is adding to the problems for both the innovation and the consumption of open source.

That brings us to Open Cloud – what is it? Google announced they are the open cloud, but what does that mean? Does it mean Google is going to open source all the technologies it uses in its cloud? Does it mean it is going to expose the APIs and enable one to move any application from GCP to AWS or Azure seamlessly, i.e. compete on the implementation? It has certainly done a few things: it open sourced Kubernetes, and it opened up TensorFlow (its ML framework). But the term ‘open cloud’ is not clear, even as the marketing message is certainly out there. Like Yin and Yang, for every successful ‘closed’ platform, there has been a successful ‘open’ platform. If there is an open cloud, what is a closed cloud? The what and the who need to be defined and clarified in the coming years. From Google we have seen a number of ideas and technologies that eventually ended up as open source projects. From AWS we see a number of services becoming de-facto standards (much like the Open Systems thesis – S3, to name one).

Kubernetes is the most recent ubiquitous open source software that seems to be well supported. It is still missing the ‘benevolent dictator’ – a personality like Bill Joy, Richard Stallman, or Linus Torvalds to drive its direction. Perhaps it is ‘Google’, not a single person? By the same criteria above, OpenStack has the challenge of missing that ‘benevolent dictator’. Extending beyond Kubernetes, it will be interesting to watch the evolution and adoption of containers + Kubernetes versus new computing frameworks like Lambda (functions, etc.). Is it time to consider an ‘open’ version of Lambda?

Regardless of all these framework and API debates, and open vs closed – one observation:

Is Open Cloud really ‘Open Data’? Data is the new oil that drives the whole new category of computing in the cloud. Algorithms and APIs will eventually open up, but data can remain ‘closed’, and that remains a key source of value, especially in the emerging ML/DL space.

Time will tell…..

On Oct 28th, IBM announced the acquisition of RedHat. This marks the end of open source as we know it today. Open source will thrive, but not in the form of a large standalone business.

Time will tell…

wrong tool

You are finite. Zathras is finite. This is wrong tool.

