Many (core) Moore (Part III – Computing Epochs)

Back to the past – This is Part III of a four-part story of the computing epochs as punctuated by Moore’s law, in which Intel left its imprint for obvious reasons.

This is the 2003–2020 era, in which multi-core, open source, virtualization, cloud infrastructure and social networks all blossomed… Its onset was the end of MHz computing (Pentium 4) and the shift to multi-core and throughput computing.

It was also the beginning of the end of my time in semiconductors for a ‘brief’ period (20 years), until I decided it was time to get back in the 2020s… That onset was punctuated by the first mainstream multi-core CPUs that Sun enabled – famously known as the Niagara family – and of course the lesser-known UltraSPARC IIe, which makes an interesting contrast with Intel’s Banias (back to Pentium).

Some would call it the Web 2.0 era or the Internet 2 era… The dot-com bubble, which blew up a number of companies of the prior (OEM) era, paved the way for new companies to emerge, thrive and establish the new stack. Notably, at the infrastructure level, Moore’s law was well ahead, with the first multi-core CPUs enabling virtualization, and it accelerated the decline of the other processor companies (SPARC, MIPS) and of system OEMs as the market shifted from buying capital gear to cloud and opex.

Semiconductor investments started to go out of fashion as Intel dominated and other fabs (TI, National, Cypress, Philips, ST and many more) withered, leaving Intel and TSMC, with GlobalFoundries an also-ran. In the same period, architectural consolidation around x86 happened along with Linux, while ARM emerged, via Apple, as the alternative for a new platform (mobile). Looking back, value shifted from vertical integration (fab + processors) to the SoC, and thus IP (ARM) became dominant despite many attempts by processor companies to get into mobile.

Concurrent with the emergence of iPhone/Apple/ARM came AWS EC2 and S3, and thus the beginning of the cloud, with opex as the new buying pattern instead of capex. This had significant implications: a decade later that very shift to commodity servers and opex came full circle via Graviton and TPU, with the cloud providers going vertical and investing in silicon. Intel’s lead in process technology enabled x86 to dominate, and when that lead slowed – thanks both to Moore’s law and to TSMC – the shift towards vertical integration by the new system designers (Amazon, Google, Azure) followed.

Simultaneously, the emergence of ML as a significant workload demanded new silicon types (GPU/TPU/MPU/DPU/xPU) and programming middleware (TensorFlow and PyTorch), breaking the shackles of Unix/C/Linux and opening the way to new frameworks and a new hardware and software stack at the system level.

Nvidia happened to be at the right place at the right time (one can debate whether the GPU is the right architectural design), but certainly the seeds of the new category – the tea leaves for the new system, a CPU + xPU – were sown by the mid-2010s…

All of the shift towards hyper-scale distributed systems was fueled by open source. Some say that Amazon made all its money by reselling open source compute cycles. Quite true. Open source emerged and blossomed with the cloud, and eventually the cloud would go vertical, which raises the question: is open source a viable investment strategy, especially for infrastructure? The death of Sun Microsystems, hastened by open source, and the purchase of Red Hat by IBM form the bookends of open source as the dominant investment thesis of the venture community. While open source is still viable and continues to thrive, by the end of this era it is no longer front and center as a disruptor or primary investment thesis, as many more SaaS applications took the oxygen.

We started with 130nm and 10 layers of metal, with Intel taking the lead over TI and IBM, and ended with 10nm from TSMC taking the lead over Intel. How did that happen? Volumes have been written on Intel’s missteps, but clearly the investment in 3D XPoint – the bet on new materials and new devices to bridge the memory gap – did not materialize and became a distraction. It was a good idea aimed at an important technology gap, but picking the wrong material stack proved a distraction.

The companies that emerged and changed the computing landscape were VMware, open source (many), Facebook, Apple (mobile) and China (as a geography). The symbiotic relationship between VMware and Intel is best depicted in the chart below.

Single-core to dual-socket multi-core evolution…

On the networking front, the transition from 10Gbps to 100Gbps (10x) over the past decade is one of the biggest transformations, driven by networking’s adoption of custom silicon design principles.

The chart above shows the flattening of the OEM business while the cloud made the pie larger. OEMs consolidated around the big six (Dell, HPE, Cisco, Lenovo, NetApp, Arista) and the rest withered.

GPU/xPU emerged as a category, along with a resurgence in semiconductor investments (50+ startups with $2.5B+ of venture dollars). The generalization of the xPU into a heterogeneous dual socket (CPU + xPU) is becoming the new building block for a system, thanks to CXL as well. The associated evolution and implications for the software layer were discussed here.

We conclude this era with the shift from the 3-tier enterprise (‘modern mainframe’) stack serviced by OEMs to distributed systems as implemented by the cloud providers, where use cases (e-commerce, search, social) drove the system design, whereas technology (Unix/C/RISC) had driven the infrastructure design in the prior era (a note on that is coming…).

In summary – Moore’s law enabled multi-core, virtualization and distributed systems, but its slowing growth opened the gates for new systems innovation, and thus new companies and a new stack, along with significant headwinds for Intel.

Let’s revisit some of the famous laws by famous people…

  1. Original Moore’s law – (cost, density)

Bill Joy changed it to performance scaling. It is certainly slowing down, and the emphasis has shifted to throughput over latency. It needs an update for the ML/AI era, which demands both latency and throughput.

2. Metcalfe’s Law – Still around. See the networking section.

3. Wright’s Law (demand and volume) – https://ark-invest.com/articles/analyst-research/wrights-law-2/ – this predates Moore’s law and now applies to many more domains: batteries, biotech, solar, etc. (A quick sketch of the law appears after this list.)

4. Elon’s law (a new one…) – The optimal alignment of atoms, and how close to that your error is. We are approaching it.

5. Dennard scaling – Power limits are being hit. Liquid cooling is coming down the cost curve rapidly. (The scaling relation is also sketched below.)
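
As a quick refresher on the last two, here is a back-of-the-envelope sketch in Python (my own illustrative example with made-up numbers; nothing here comes from the charts in this post). Wright’s Law says every doubling of cumulative volume cuts unit cost by a fixed percentage, and Dennard-era dynamic power goes roughly as C·V²·f, which is why the end of voltage scaling shows up as a power limit.

# Back-of-the-envelope sketch of Wright's Law and Dennard-style dynamic power.
# Illustrative numbers only.
import math

def wrights_law_cost(first_unit_cost, cumulative_units, learning_rate=0.20):
    # Wright's Law: cost(n) = cost(1) * n**(-b), where each doubling of
    # cumulative volume cuts unit cost by the learning rate.
    b = -math.log2(1.0 - learning_rate)
    return first_unit_cost * cumulative_units ** (-b)

def dynamic_power(capacitance, voltage, frequency):
    # Dennard-era dynamic power: P ~ C * V^2 * f. Once V stops scaling with
    # feature size, power density climbs and you hit the limits noted above.
    return capacitance * voltage ** 2 * frequency

print(round(wrights_law_cost(100.0, 1000), 1))   # ~10.8: the 1000th unit at a 20% learning rate
print(dynamic_power(1e-9, 1.0, 3e9))             # 3.0 W for illustrative C, V and f

With a 20% learning rate, the thousandth unit costs roughly a tenth of the first – the same curve that now shows up in batteries and solar.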

Open Systems to Open Source to Open Cloud.

“I’m all for sharing, but I recognize the truly great things may not come from that environment.” – Bill Joy (Sun founder, BSD Unix hacker), commenting on open source back in 2010.

In March at Google Next ’17, Google officially called its cloud offering the ‘Open Cloud’. That prompted me to pen this note and reflect upon the history of Open (Systems, Source, Cloud) and the properties that make them successful.

A little-known tidbit of history is that perhaps the earliest open source effort of the modern computing era (1980s to present) came from Sun Microsystems. Back in 1982, each shipment of the workstation was bundled with a tape of BSD – in effect a modern-day open source distribution (under the BSD license). In many ways, well before Linux, open source was initiated by Sun and in fact driven by Bill Joy. But Sun is associated with ‘Open Systems’ rather than open source. Had the AT&T lawyers not held Sun hostage, the trajectory of open source would have been completely different, i.e. Linux might look different. While Sun and Scott McNealy tried, 20 years later, to go back to those roots with an open source model, that second attempt was not rewarded with success.

In my view, the success of any open source model requires the following three conditions to make it viable, perhaps even sustainable, as a standalone business and distribution model.

  • Ubiquity: Everybody needs it, i.e. it is ubiquitous with a large user base.
  • Governance: It requires a ‘benevolent dictator’ to guide, shape and set the direction. Democracy is anarchy in this model.
  • Support: A well-defined and credible support model. Throwing code over the wall will not work.

Back to Open Systems: early in its life, Sun shifted, rather effectively, to a marketing message of open systems – publish the APIs, interfaces and specs, and compete on implementation. It was powerful storytelling that resonated with the customer base, and to a large extent Sun was all about open systems. Sun used that to take on Apollo and effectively out-market and outsell Apollo workstations. The open systems mantra was Sun’s biggest selling story through the 1980s and 1990s.

In parallel, around 1985, Richard Stallman pushed free software, and the evolution of that model led to the origins of open source as a distribution model before it became a business model, starting with Linus and thus Linux. It is ironic that 15+ years after the initial selling of open systems, open source via Linux came to impact Sun’s Unix (Solaris).

With Linux, the open source era was born (perhaps around 1994, with the first full release of Linux). A number of companies were formed, notably Red Hat, which exploited the model and is by far the largest and longest-lived viable standalone open source company.

 

Slide1

Open systems in the modern era perhaps began with Sun in 1982 and continued for 20-odd years, with open source becoming a distribution and business model between roughly 1995 and 2015 (and likely continuing for another decade). Twenty years on, we see the emergence of the ‘open cloud’ – or at least the marketing term – from Google.

In its 20 years of existence, open source has followed the classical bell curve of interest, adoption, hype, exuberance, disillusionment and the beginning of decline. There is no hard data to assert that open source is in decline, but it is obvious from simple analysis that with the emergence of the cloud (AWS, Azure and Google), the consumption model for open source infrastructure software has changed. The big three in cloud have effectively killed the model, as the consumption and distribution model for infrastructure technologies is rapidly changing. A few open source products that are in vogue today, with reasonable traction but struggling to find a viable standalone business model, are Elastic, Spark (Databricks), OpenStack (Mirantis, SUSE, Red Hat) and Cassandra (DataStax), amongst others. Success requires all three conditions: ubiquity, governance and support.

The open source model for infrastructure is effectively in decline when you talk to the venture community. While it was THE model until perhaps 2016 – open source was the ‘in thing’ – the decline is accelerating with the emergence of the public cloud consumption model.

Quoting Bill (circa 2010), which says a lot about the viability of the open source model: “The Open Source theorem says that if you give away source code, innovation will occur. Certainly, Unix was done this way. With Netscape and Linux we’ve seen this phenomenon become even bigger. However, the corollary states that the innovation will occur elsewhere. No matter how many people you hire. So the only way to get close to the state of the art is to give the people who are going to be doing the innovative things the means to do it. That’s why we had built-in source code with Unix. Open source is tapping the energy that’s out there.” The smart people now work at one of the big three (AWS, Azure and Google), and that is adding to the problems for both innovation in and consumption of open source.

That brings us to Open Cloud – what is it? Google announced it is the open cloud, but what does that mean? Does it mean Google is going to open source all the technologies it uses in its cloud? Does it mean it is going to expose the APIs and enable one to move any application from GCP to AWS or Azure seamlessly, i.e. compete on the implementation? It has certainly done a few things: it open sourced Kubernetes, and it has opened up TensorFlow (its ML framework). But the term ‘open cloud’ is not clear, even if the marketing message is out there. Like yin and yang, for every successful ‘closed’ platform there has been a successful ‘open’ platform. If there is an open cloud, what is a closed cloud? The what and the who need to be defined and clarified in the coming years. From Google we have seen a number of ideas and technologies eventually end up as open source projects. From AWS we see a number of services becoming de facto standards (much like the open systems thesis – S3, to name one).

Kubernetes is the most recent ubiquitous open source software that seems to be well supported. It is still missing the ‘benevolent dictator’ – a personality like Bill Joy, Richard Stallman or Linus Torvalds to drive its direction. Perhaps it is ‘Google’ rather than a single person? By the same criteria above, OpenStack has the challenge of missing that ‘benevolent dictator’. Extending beyond Kubernetes, it will be interesting to watch the evolution and adoption of containers + Kubernetes versus new computing frameworks like Lambda (functions, etc.). Is it time to consider an ‘open’ version of Lambda?

Regardless of all these framework and API debates and open vs. closed, one observation:

Is open cloud really ‘open data’? Data is the new oil that drives the whole new category of computing in the cloud. Algorithms and APIs will eventually open up, but data can remain ‘closed’, and that remains a key source of value, especially in the emerging ML/DL space.

Time will tell…..

On Oct 28th, IBM announced the acquisition of Red Hat. This marks the end of open source as we know it today. Open source will thrive, but not in the form of a large standalone business.

Time will tell…

1987 – 2017: SPARC Systems & Computing Epochs

 

This month marks 30 years of SPARC systems: the Sun 4/260 was launched in July 1987. A month before that, I started my professional career at Sun – June 15, 1987, to be exact. 16+ years of my professional life were shaped by Sun, SPARC, systems and, more importantly, the whole gamut of innovations Sun delivered, from chips to systems to operating systems to programming languages, covering the entire spectrum of computing architecture. I used to pinch myself for getting paid to work at Sun. It was one great computer company that changed the computing landscape.

 

Sun4_Launch

 

The SPARC story starts with Bill Joy, without whom Sun would not exist (Bill was the fourth founder, though), as he basically drove the re-invention of computing systems at Sun and thus in the world at large. Bill drove the technical direction of computing at Sun and initiated many efforts – Unix/Solaris, programming languages and RISC, to name a few. David Patterson (UC Berkeley, now at Google) influenced the RISC direction. [David advised students who changed the computing industry, and it seems he is involved with the next shift with TPUs at Google – more later.] I call out Bill among the four founders (Andy Bechtolsheim, Scott McNealy and Vinod Khosla being the others) in this context because without Bill, Sun would not have pulled in the talent – he was basically a big black hole that sucked in talent from across the country and the globe. Without that talent, the innovations and this 30-year history of computing would not have been possible. A good architecture is one that lives for 30 years. This one did. Not sure about the next 30. More on this later. Back then, I dropped my PhD in AI (I was getting disillusioned with that era’s version of AI) for Unix on my desktop and Bill Joy. The decision was that simple.

From historical accounts, the SPARC system initiative started in 1984. I joined Sun when the Sun 4/260 (Robert Garner was the lead) was about to be announced in July. It was a VME-backplane machine, built both as a pedestal computer (12 VME boards) and as a rack-mount system, replacing the then-current Sun 3/260 and Sun 3/280. It housed the first SPARC chip (Sunrise), built from gate arrays with Fujitsu.

 

This was an important product in the modern era of computing (198X–201X). 1985–87 was the beginning of the exploitation of instruction-level parallelism (ILP), with RISC implementations from Sun and MIPS. IBM/Power followed later, although it had been incubated within IBM earlier than both. The guiding principle was that compilers can do better than humans, generating code that is optimal and simpler against an orthogonal instruction set. The raging debate then was “can compilers beat humans in generating code for all cases?” It was settled with the dawn of the RISC era. This was the era when C (Fortran, Pascal and Ada were the other candidates) became the dominant programming language and compilers were growing by leaps and bounds in capability. Steve Muchnick led the SPARC compilers, while Fred Chow did the same for MIPS. I recall the big debates about register windows (Bill, even to this date, argues about the decision on register windows) and code generation. In brief, it was the introduction of pipelining, orthogonal instruction sets and compilers (in some sense, compilers were that era’s version of today’s rage, “machine learning”, where machines started to outperform the human ability to program and generate optimized code).

There were many categories enabled by Sun and the first SPARC system.

  1. The first chip was implemented in a gate array, which was more cost effective and offered faster TTD (time to design). The fabless semiconductor model was born out of this gate-array approach and eventually exploited by many companies. A new category emerged in the semiconductor business.
  2. The EDA industry, starting with Synopsys and its Design Compiler, was enabled and driven by Sun. Verilog was formalized as a language for hardware, with an event-driven evaluation model. Today’s reactive programming is yesterday’s Verilog (not really, but the point is that HDLs have always been event-driven programming).
  3. Sun created an open ecosystem around an (effectively free) licensable architecture. It was later followed by open source for hardware (OpenSPARC), which was a miserable failure.

The first system was followed by the pizza box (SPARCstation 1) using the Sunrise chip. A series of systems innovations followed, with associated innovations in SPARC:

  1. 1987 Sun4/260 Sunrise – early RISC (gate array)
  2. 1989 SPARCstation 1 (Sunray) – Custom RISC
  3. 1991 Sun LX  (Tsunami) – First SoC
  4. 1992 SPARCstation 10 (Viking) – Desktop MP
  5. 1992 SPARCserver (Viking) – SMP servers
  6. 1995 UltraSPARC 1, Sunfire (Spitfire) – 64 bit, VIS, desktop to servers
  7. 1998 Starfire (Blackbird), Sparcstation 5 (Sabre) – Big SMP
  8. 2001 Serengeti (UltraSPARC III) – Bigger SMP
  9. 2002 Ultra 5 (Jalapeno) – Low Cost MP
  10. 2005 UltraSPARC T-1 (Niagara) – Chip Multi-threading
  11. 2007 UltraSPARC T-2 – Encryption co-processor
  12. 2011 SPARC T4
  13. 2013 SPARC T5, M5
  14. 2015 SPARC M7 (Tahoe)
  15. 2017 M8…

The systems innovations were driven by both SPARC and Solaris (SunOS back then). There have been two key punctuations in that innovation, and we have now entered the third era in computing. The first two were led by Sun, and I was lucky to be part of that innovation and to help shape it as well.

1984–1987 was the dawn of the ILP era, which continued for the next 15 years until thread-level parallelism became the architectural thesis, thanks to Java, the internet and throughput computing. A few things Sun did were very smart, including:

  1. Sun took a quick and fast approach to implementing the chip by adopting gate arrays. This surprised IBM and perhaps MIPS with respect to speed of execution. Just two engineers (Anant Agrawal and Masood Namjoo) did the integer unit; there was no floating point. MIPS, meanwhile, was designing a custom chip.
  2. It was immediately followed by Sunrise, a full-custom chip done with Cypress (for the integer unit) and TI (for the floating-point unit). Of all the RISC chips designed around that era, SPARC along with MIPS stood out (and eventually Power).
  3. That was the one-two punch, enabled by Sun owning the architectural paradigm shift (C/Unix/RISC) as the compute stack of the time.

Pipelining and then superscalar execution (more than one instruction per clock) became the industry’s drivers of performance. Sun innovated both at the processor level (with compilers) and at the system level with symmetric multi-processing and the operating system, driving the ‘attack of the killer micros‘. A number of successes and failures followed the initial RISC (Sunrise-based) platform.

  1. Suntan was an ECL SPARC chip that was built but never taken to market, for two reasons. [I have an ECL board up in my attic.] The CMOS vs. ECL debate was ending, with CMOS rapidly approaching the speed-power ratio of ECL, and more importantly continuing with ECL would have drained the company relative to the value of the high end of the market. MIPS carried on and perhaps drained significant capital and focus by doing so.
  2. SuperSPARC was the first superscalar SPARC processor, which came out in 1991; working with Xerox, Sun delivered the first glueless SMP (MBus and XBus).
  3. 1995 brought a 64-bit CPU (MIPS beat it to market – but too soon) with integrated VIS (media SIMD instructions).

After that, the next big architectural shift was multi-core and threading. It was executed with mainstream SPARC but accelerated with the acquisition of Afara and its Niagara family of CPUs. If there is a ‘hall of fame’ for computer architects, a shout-out goes to Les Kohn, who led two key innovations – UltraSPARC (64-bit, VIS) and UltraSPARC T-1 (multi-threading). The seeds of that shift were sown in 1998, and a family of products exploiting multi-core and threading was brought to market starting in 2002/2003.

1998, in my view, was the dawn of the second wave of computing in the modern era (1987–2017), and again Sun drove this transition. The move to web- and cloud-centric workloads and the emergence of Java as a programming language for enterprise and web applications enabled the shift to TLP – thread-level parallelism. In short, this was a classical space-time tradeoff: clock rate had diminishing returns, and the shift to threading and multi-core began with the workload shift. Here again SPARC was the innovator in multi-core and multi-threading. The results of this shift started showing up in systems around 2003 – roughly 15 years after the first introduction of SPARC with the Sun 4/260. In those 15 years, computing power grew by 300+ times and memory capacity grew by 128 times, roughly following Moore’s law.
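
A rough sanity check on those numbers (my arithmetic, not from any launch-era data): doubling every two years over 15 years gives about 2^7.5 ≈ 180x, and doubling every 18 months gives 2^10 ≈ 1000x; the observed 300x in compute and 128x (2^7) in memory sit squarely inside that Moore’s-law band.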

The first 15 years were the ILP era, and the second 15 were about multi-core and threading (TLP). What is the third? We are at the dawn of that era. I phrase it as ‘MLP’ – memory-level parallelism. Maybe not. But we know a few things now: the new computing era is more ‘data path’ oriented – be it GPU, FPGA or TPU – some form of high compute throughput matched to emerging ML/DL applications. A number of key issues have to be addressed.

 

Computing_trends

Every 30 years, technology, businesses and people have to re-invent themselves, otherwise they stand to wither away. There is a pattern there.

There is a pattern here with SPARC as well. SPARC and SPARC-based systems have reached their 30-year life, and this looks to be the beginning of the end, while a new generation of processing is emerging.

Where do we go from here? Definitely applications and technologies are the drivers, with ML/DL the obvious one. The technologies range from memory, coherent ‘programmable datapath accelerators’ and programming models (event-driven?) to user-space resource managers/schedulers and lots more. A few key meta-trends:

  • For the past 30 years, hardware (silicon and systems) aggregated resources (e.g. SMP) while software disaggregated them (VMware). I believe the reverse will be true for the next 30, i.e. disaggregated hardware (e.g. accelerators or memory) with software doing the aggregation (e.g. vertical NUMA, heterogeneous schedulers).
  • Separation of control-flow-dominated code from data-path-oriented code will happen (early signs with TPUs).

 

data_flow

  • Event-driven (e.g. reactive) programming models will become more prevalent. The ‘serverless’ trend will only accelerate this model, as traditional (procedural) programmers have to be retrained to do event-driven programming (hardware folks have been doing it for decades) – see the small sketch after this list.
  • We will build machines that are hard for humans to program – not in the sense of complexity, but in the sense of scale.
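
To make the procedural-versus-event-driven contrast concrete, here is a minimal sketch in Python (my own illustrative example – the event names and handlers are hypothetical, not from any particular framework). Instead of a program that calls functions in order, you register handlers and a dispatcher invokes them as events arrive, which is exactly how Verilog simulators and serverless functions behave:

# Minimal event-driven sketch: handlers are registered up front and the
# dispatcher decides when they run - the programmer gives up the control flow.
from collections import defaultdict

handlers = defaultdict(list)

def on(event_name):
    # Decorator that registers a function as a handler for an event.
    def register(fn):
        handlers[event_name].append(fn)
        return fn
    return register

def emit(event_name, payload):
    # Dispatch an event to every registered handler.
    for fn in handlers[event_name]:
        fn(payload)

@on("request")        # think: an HTTP trigger on a serverless platform
def handle_request(payload):
    print("handling request:", payload)

@on("clock_edge")     # think: always @(posedge clk) in Verilog
def update_state(cycle):
    print("state updated at cycle", cycle)

# The 'main program' is just a stream of events, not a sequence of calls.
emit("request", {"path": "/index"})
emit("clock_edge", 42)

The hardware analogy is the point: an HDL process never ‘runs the program’; it waits to be triggered, and that is the mental shift procedural programmers will have to make.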

CMOS, RISC, Unix and C were the mantra of the 1980s. The next one is going to be about memory (some form of resistive memory structure), and a re-thinking of the stack needs to happen. Unix, too, is 30+ years old.

Then_v_now

 

Just when you thought Moore’s law was slowing, the amalgamation of these emerging concepts and ideas into a simple but clever system will give rise to the next 30 years of innovation in computing.

Strap yourself in for the ride…

 

 

wrong tool

You are finite. Zathras is finite. This is wrong tool.

