Cloud and Fabs – 3 Years Later (It's AWS and VMware)

In 2017, I posted Cloud and Fabs here. I grew up in the semiconductor world, so the comparison was obvious, but little did I know that my assertion would come true – i.e. that a two-player game, as in semis (TSMC and Intel), would play out in the cloud. I assumed the second player was going to be Azure or GCP. Now it looks more like AWS (Intel) and VMware (TSMC) by way of analogy (fabless vs. vertically integrated models).

Since January 2017, a lot has happened. Google Cloud has seen management shuffles, AWS continues its growth, and Azure is picking up steam. The industry pundits have written off on-premise, and I, like many, drank the Kool-Aid that cloud-first is the way to go. Cloud-first as a developer makes total sense, but cloud-first as a deployment model is evolving, starting with 2019. I went to a few Google Nexts and was awed by the investment ($30B+) and technologies (ML). I went to AWS re:Invent and was amazed to see how Andy Jassy spouts technology and features like a CTO; if Jeff Bezos blinked, he could become Amazon CEO as well. The litany of features they throw at customers has been shocking, to the extent that most Silicon Valley VCs ran for the hills whenever Amazon announced products. Startups could not compete with Amazon. That had not happened before in Silicon Valley. Twice I started on version 2 of this post and never completed it – thankfully, as 2019 is the inflection point for a tectonic shift. Industry analyst after industry analyst predicted that the march to the cloud would consummate by 2025 and that there would be no on-premise, or that only 20% of the cloud would be on-prem. I did too.

I predict that by 2025, the private cloud (on-prem) / public cloud ratio will be a healthy 60/40 or 40/60, depending on whom you believe. That is a radical statement, i.e. going against the tide. Why the shift, or change of viewpoint?

Around the same time in 2017, I was introduced to Nutanix. Nutanix addressed two things that cloud infrastructure did from the start: make storage invisible, and make consumption – ease of use – just a few clicks away. Since 2017, Nutanix has seen another company creep up. Enter VMware. With the recent announcement of Project Pacific and the re-tooling of Day 0, 1 and 2 automation in the VMware stack, both VMware and Nutanix have addressed the key value propositions that the public cloud infrastructure companies provided: cost per VM and ease of consuming a VM. If all you need is a commoditized VM, the distraction of OpenStack obscured how much Nutanix and VMware had closed the gap.

In 2019, for any enterprise or private cloud that needs, say, 500 to 50K physical machines (10K to 1M VMs) of sustained use, a TCO analysis will show that your private instance is significantly cheaper, while matching the value proposition of easy VM creation and life-cycle management. Here's a simple visual of the shift in the value proposition. It's not capex depreciation: the cost of sourcing servers, storage and network is within 10-20% of the hyperscalers if you are at moderate scale. It's the opex – the total number of people deployed to run the infrastructure. That is the change event visually described below.

This is a qualitative chart, but it's an attempt to make the point that the value to the end user was low cost per VM over the entire life cycle of a VM (fast startup time, no operational overhead). The solid lines are for VMware and Nutanix and the dotted lines are for AWS, GCP and Azure. Sure, the big three have some form of a private cloud option – but the use of a hyperscaler stack on-prem will be challenged by the complexity of deployment and management!! It's different. It is the same reason why EMC won the storage business in the 1990s (most enterprises wanted to buy storage from a vendor independent of the server OEM). The same rationale applies.
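
To make the capex-vs-opex point concrete, here is a rough TCO sketch in Python. Every number in it (server price, VM density, admin-to-VM ratio, loaded salary, public-cloud rate) is an illustrative assumption of mine, not data from the chart; the only point is to show how the per-VM math tips toward on-prem once the people cost is spread over enough VMs.

    # Illustrative private-vs-public cost-per-VM sketch. All inputs are assumptions;
    # substitute your own quotes and salaries.

    def private_cost_per_vm_month(num_vms, server_cost=8_000, vms_per_server=40,
                                  server_life_years=4, dc_overhead=0.35,
                                  admins_per_1000_vms=1.5, loaded_salary=200_000):
        """Rough monthly cost per VM for an on-prem deployment."""
        servers = num_vms / vms_per_server
        capex = servers * server_cost / (server_life_years * 12)  # amortized servers
        capex *= 1 + dc_overhead                                  # power, cooling, network, space
        opex = (num_vms / 1000) * admins_per_1000_vms * loaded_salary / 12
        return (capex + opex) / num_vms

    def public_cost_per_vm_month(on_demand_hourly=0.10, reserved_discount=0.40):
        """Rough monthly cost of a comparable reserved public-cloud VM."""
        return on_demand_hourly * 730 * (1 - reserved_discount)

    for vms in (10_000, 100_000, 1_000_000):
        print(f"{vms:>9,} VMs: private ~${private_cost_per_vm_month(vms):.0f}, "
              f"public ~${public_cost_per_vm_month():.0f} per VM-month")

With these made-up inputs, the private number lands well below the reserved public-cloud number across the 10K-1M VM range, and the gap is driven almost entirely by how many admins you need per thousand VMs – which is exactly the opex argument above.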

VMware is pulling further ahead with its recent announcement at VMworld 2019: tight integration of Kubernetes (a GKE-like experience) and a small, lightweight Linux kernel embedded in the hypervisor to provide close to bare-metal performance. The gap between the hyperscalers and these private cloud stacks (VMware and Nutanix) has closed, with the cost advantage going to VMware or Nutanix on a per-VM basis. This implies that for a range of enterprise users (mid-size, 500 to 50K physical machines), the reasons to go to the public cloud for cost or ease of consumption are disappearing fast. Attraction to the public cloud depends on location, opex vs. capex, simplified procurement and the availability of other services (ML, Lambda, etc.) – but for a vast majority of applications (by some counts there are 150M VMs running on-prem while only 30-40M VMs run in the public cloud), I think we will see the stemming of the tide to move to the public cloud (hyperscalers).

VMware, due to its market footprint, is going to be a dominant player serving the private or on-premise space. I would not be surprised if this becomes an AWS vs. VMware game by 2022. What are the implications of this shift?

  1. It's going to be harder for Google to compete, with VMware emerging as an option to stem the tide of moves to the public cloud. To recap, there are the big three (AWS, Azure, Google) public clouds and the medium two (VMware and Nutanix) private cloud stacks.
  2. VMware is big enough to enhance its portfolio to include PaaS layers and other cloud-native services and orchestrators, including functions, Lambda-like serverless, NoSQL DBs, etc.
  3. This cements the death of OpenStack for the enterprise (as has already happened at SAP). It might live on in the telco segments.
  4. Over the last 5 years, we saw the death of open source, as the public cloud companies (notably AWS) simply resold open source via AWS EC2 compute cycles and made more money than all the open source developers combined.
  5. Over the last 5-8 years, we saw infrastructure investors run for the hills. Most VCs feared competing with AWS, and except for a few tuck-ins, the big three basically wiped out a significant portion of VC investment in infrastructure technology.

All that is poised for a change. VMware is now emerging as a strong alternative to the hyperscalers. With Kubernetes integrated into its stack, its customer base gets it immediately, which is huge. 150M VMs in the wild have no reason to move. Some IT jobs are preserved, though not all. The 'edge', loosely defined, is emerging as another big category of its own. The interesting aspects of this shift are:

  1. The rate of innovation in infrastructure has slowed down in the cloud due to sheer size. One cannot introduce a new technology (memory, FPGA, GPU) fast enough in the public cloud, as the hurdle rate is high. Bite-sized enablement of new tech is easier and more economically viable on-prem, and VMware (or Nutanix) can enable it faster than a public cloud can.
  2. That means a rich ecosystem of infrastructure startups now has a friend in VMware (or Nutanix). They can find a way to fit into that ecosystem, as customer needs are varied. Hopefully VMware will recognize this. In some sense VMware can already enable hardware from different vendors today, so a new technology platform could be fast-tracked through the VMware ecosystem (if VMware so desires) faster than through a public cloud channel.
  3. The perfect storm of new memory (3D XPoint), new compute for ML (GPU, FPGA, TPU) and new networks (200G/400G, addressing microsecond-scale transaction systems) will find a faster path to market via this ecosystem than via the public cloud. This implies VCs can bet once more on infrastructure startups.
  4. Distributed computing as we know it is ripe for change. It's become too hard for the average programmer to build a distributed application. At the core is the assumption that infrastructure is fragile and networks are broken. If you change those assumptions (some startups are rethinking the fundamentals), distributed systems become simpler. The challenge of the 'killer microseconds', very well explained by Google (Barroso, Partha et al.), is going to turn infrastructure on its head. Will that innovation happen first in the hyperscalers, or in bite-sized chunks on-prem? The latter, while not as well capitalized, has a better chance, as the diversity of ideas (startups) will win over centralized innovation inside the hyperscalers. This is a contrarian view, but it's a bet I am willing to propose.

Three final points:

  1. If there was a 'cloud-first stack' for the past 15 years, an 'edge-first stack' is now emerging. Edge-first means solving for a federated system that is constrained in space and power and that needs new compute and data management paradigms.
  2. Infrastructure is back – i.e. innovation in infrastructure that can be channeled to on-prem enterprise customers via VMware (or Nutanix) will be a significant option for entrepreneurs, VCs, customers, traditional OEM players and semiconductor companies like Intel, Micron and Broadcom.
  3. TSMC is a pure-play foundry – the engine of the fabless model – contrasted with a vertically integrated company (Intel). Today both have a market cap of about $225B. Ten years back, it would have been unthinkable for TSMC to achieve this status.
[Chart: TSMC vs. Intel market cap growth comparison]

I think the same might happen in the cloud. AWS (and likewise Azure and Google) is becoming more vertically integrated. AWS is doing a lot better than the other two at serving a wide range of customers, but with Nitro and other silicon efforts, it is getting deeper into owning the infrastructure stack. Here comes VMware, like TSMC, as an infrastructure-less cloud company?

The shift: just when you thought the world had ended, it opens up again. Three years back, I thought VMware was dead in its tracks with the industry's move to containers and Kubernetes. I was wrong. Keeping my fingers crossed, as the innovation of startups needs a friend to channel its output.

Car v Cloud (Autonomy and business models)

This is not a post on cloud-enabled cars or on how autonomous driving will use the cloud. It's a collection of thoughts on how and when the computing business model got disrupted by the cloud, and on how similar analogies might hold for the automobile industry. A lot has been discussed about the business models of Tesla, Waymo, Uber and Lyft, and perhaps what I am stating here has already been observed and stated by others. I thought I would share some observations based on my experience with an EV that has autonomy as a core tech differentiator (Tesla) and on the economics of the EV model. This was prompted by a financial analyst session I attended, hosted by Merrill Lynch, on "The future of driving", and by debates with my good friends Atul Kapadia and Mike Klein.

Lots of good observations from Daniel Daniel of Merrill Lynch; I will share a few here to set the stage for my analysis and observations on this topic. The annual automobile ownership cost per mile works out as follows (based on 15K miles/year):

Sedan: 57c/mile, SUV: 68c/mile, Minivan: 61c/mile

Contrasting that with a Model X (75D), assuming 15K miles/year and free Supercharging (unique to Tesla), works out to 69c/mile (included in that are cost of capital, electricity, nominal yearly maintenance and insurance) – at the high end of the ownership cost range. The Model 3 (LR, rear-wheel drive), by contrast, comes to 39c/mile. My benchmark is that if I can get to, say, ~25c/mile, I might be open to a new model of using a car. Maybe it's lower, but that is the starting point for this discussion. So what does it take to get to 25c/mile? For reference, Uber and Lyft charge me >$1/mile. I would rather ride at 25c/mile than at >$1/mile if it's available at the click of a button from my phone – i.e. a 4x reduction in the cost of 'shared' transportation. That is very disruptive to the current Uber/Lyft models.

A simple EV model is available for any or all of you to review. I have compared a Model 3 owned (LR) vs. a Model 3 shared (SR) vs. a Model X and a Camry (owned and shared). Some assumptions about fuel, maintenance and insurance costs are included as well. Feel free to copy it, use it, or modify it to your heart's content. Simple EV model: http://tinyurl.com/yxcoos8h
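
For readers who prefer code to a spreadsheet, here is a minimal cost-per-mile function in the same spirit. The structure follows the cost categories discussed above (depreciation, cost of capital, energy, insurance, maintenance); the example numbers plugged in are my own assumptions, not the values from the linked spreadsheet.

    # Minimal cost-per-mile sketch; all example inputs below are assumptions.

    def cost_per_mile(purchase_price, resale_value, years, miles_per_year,
                      energy_cost_per_mile, insurance_per_year,
                      maintenance_per_year, cost_of_capital=0.05):
        """Total ownership cost per mile over the holding period."""
        total_miles = miles_per_year * years
        depreciation = purchase_price - resale_value
        capital_cost = purchase_price * cost_of_capital * years  # simple-interest approximation
        running_cost = ((insurance_per_year + maintenance_per_year) * years
                        + energy_cost_per_mile * total_miles)
        return (depreciation + capital_cost + running_cost) / total_miles

    # Assumed example: a ~$50K long-range EV vs. a ~$30K ICE sedan, 15K miles/year, 6 years.
    ev = cost_per_mile(50_000, 20_000, 6, 15_000, 0.04, 1_800, 500)
    ice = cost_per_mile(30_000, 12_000, 6, 15_000, 0.12, 1_500, 900)
    print(f"EV ~${ev:.2f}/mile, ICE ~${ice:.2f}/mile")

The point is not the exact outputs but the shape of the model: depreciation and cost of capital dominate at 15K miles/year, which is why higher utilization (the shared scenarios in the spreadsheet) is the main lever on cost per mile.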

Current ICE cars are a lot cheaper to buy (20-30%) for similar size and features, while EVs are certainly more limited in range. An ICE with range (the Toyota Camry is perhaps a good benchmark) is 30+% cheaper and achieves great mileage – no wonder Toyota has postponed the EV transition.

So if you are driving 15K miles a year and keep the car for 10+ years, it's better to own than to use Uber. Much as with cloud compute: if you need reserved instances for 3 years, especially larger VM instances, it's better to have your own infrastructure than a cloud instance (yes, availability and other functionality are better in the cloud, but for this purpose I am just comparing cost at the VM layer). Amazon's infrastructure economics work when there is re-purposing and better utilization of the infrastructure.

So when will the car ownership model be subsumed into a subscription model like cloud computing? Definitely if you drive less than 15K miles a year, especially short distances where Uber or Lyft is viable, and you replace the car in under 6 years. That is not the majority of US car buyers today. But other dynamics are at play, as in cloud computing, that could well play out here. EVs and autonomy do add cost, and EVs have the additional problems of battery cycle life and calendar life. So let's assume a 100 kWh battery charged to 80% per cycle (roughly 60% usable, as you do not want to go below 20% either). With gas at $4/gallon and 27 mpg for an SUV vs. 40 mpg for a sedan (sparing all the cost assumption details), it seems that if a car or car service can:

  1. Share the capital spend, and thus share the car, with anybody or a few (different models will appear)
  2. Offer full autonomy, so it can go from point A to point B and thus enable sharing
  3. Offer simple access (like the Model 3's iPhone or card access) – make it impersonal (don't leave your personal stuff in the car)
  4. Be 'usable' for 2-3x the mileage of normal driving – e.g. 480K miles in 8 years, i.e. better use of the capital spend
  5. Be available at the point of use or within walking distance (say 250 meters), or come to you on call
  6. Have sufficient energy for the daily driving distance, with a nightly or daily charge-up

Then the extra cost of an EV and autonomy can be justified. Simple math shows that a $50K (pre-tax) model with a 75 kWh battery and 450K miles driven over an 8-year period (roughly 150 miles per day) will cost 26c/mile. The SUV economics are different, as the transport and passenger profiles are different (like needing 256 GB VMs vs. 4 TB VMs – the usage model has to be different to justify the cost).
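
As a back-of-the-envelope check of that ~26c/mile figure: only the $50K price, the 450K miles and the 8 years come from the text above; the split of operating costs below is my own guess, so treat it as a sketch rather than as the spreadsheet's numbers.

    # Back-of-the-envelope check of the ~26c/mile shared-EV claim.
    price, miles, years = 50_000, 450_000, 8

    capital     = price / miles                     # ~11.1 c/mile amortized vehicle
    financing   = price * 0.05 * years / 2 / miles  # ~2.2 c/mile, simple interest on a declining balance
    electricity = 0.25 * 0.13                       # ~3.3 c/mile at 4 mi/kWh and $0.13/kWh
    insurance   = 2_500 * years / miles             # ~4.4 c/mile
    maintenance = 1_500 * years / miles             # ~2.7 c/mile tires, brakes, service
    operations  = 1_000 * years / miles             # ~1.8 c/mile cleaning, parking, charging depot

    total = capital + financing + electricity + insurance + maintenance + operations
    print(f"~{total * 100:.0f} c/mile")             # lands in the mid-20s with these assumptions

With these guesses the answer lands in the mid-20s, consistent with the 26c/mile figure above; the dominant term is the amortized vehicle itself, which is why spreading the capital over 450K miles instead of 150K changes the economics so much.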

So the challenge, or in other words the opportunity, is to have a $50K car with Autopilot and ideally a 100 kWh battery. Combine that with simple walk-in-and-drive access (which the Model 3 already achieves, including driver personalization), and use Autopilot primarily to let the car drive itself from person A to person B. Individual use of Autopilot is a whole different matter, fraught with different acceptance profiles, legal issues, etc.

Of all the car companies, Tesla seems furthest along on the key parameters above. If they can be achieved, I would be open to not having my own Tesla and going for a shared car managed by the cloud at 25c/mile vs. my current >50c/mile. That is about $300/month of 'auto subscription'.

So you may ask: wouldn't the same model apply to gasoline cars with autonomy? Sure. But as in the cloud, I am not just buying a VM; I want the fancy new functionality. I want my driving experience to be simpler (autonomy in stop-and-go traffic) and I like the performance characteristics (acceleration when I need it). Everything is also more efficient (electric cars are expected to have a longer mechanical life), except for the battery.

Why is this not the future of the Tesla Network, or of Uber or Lyft? I think the nuance is a slightly different business model. The Tesla Network could be, but who will pay for the upfront capital, and where will the fleet park (that is a lot of cars)? Uber and Lyft solved the problem as a taxi-service alternative, but it's not clear they have addressed the future business model. As Mike Volpi says here, they treat the car OEMs as 'metal benders', like the cloud companies did initially with Dell or HPE, out of which the ODM model formed. They can morph to a new model, but even then, who is going to pay for the upfront capital and the parking (nightly, or for other gaps in the day)? Somebody has to pay for a lot of cars to make this work. That is how cloud infrastructure was built out. Who has the largest under-utilized fleet? Maybe it's Hertz, Avis, etc. Yet even for Hertz and Avis, a massive fleet upgrade and the associated capital cost is expensive. Maybe there is a hybrid model that evolves from the current Turo model and a lease model.

In the end, it's financial engineering that fits the most common usage model. That will be the winning business model, much as AWS was able to leverage the unused capacity and technology it built for its own use to deliver automated, easy-to-consume cloud computing.

Something to think about: what if I could subscribe to an 8c/mile auto service? That is more than a 10x reduction versus Uber or Lyft, and works out to about $100/month for 15K miles a year. Somebody is going to solve this – either a NewCo or an evolved business model from Tesla.

Love to hear your thoughts.

RIP – SPARC and Anant

A year back, when Robert Garner sent an invite for the 30th anniversary of SPARC, little did I expect or believe that the events which later transpired would happen. I did this post on the history of SPARC, its impact on the computing world, and some speculation on the future of computing.

Since then, on September 1, 2017, Oracle decided to terminate all SPARC development efforts and laid off the entire development team. After 30 years, perhaps 3000+ world-class engineers, >$5B in R&D spend, and >15 million CPUs built and delivered in systems from Sun (generating >$100B in revenue), SPARC flamed out. RIP SPARC.

More significant: one of the key members of the initial SPARC development at Sun, Anant Agrawal, who led the development of SPARC from its initial design through two key inflection points (1984-2002), left behind 2000+ of his world-class engineers and his family on May 28th. A small sample of the tsunami wake he left:

  • 2000+ careers were launched during his tenure.
  • 10+ companies were founded by the engineers who worked under his leadership.
  • Processors from 1.3 µm to 0.65 µm CMOS (a decade of lithography generations), with a healthy mix of CMOS, ECL and even GaAs
  • 1000+ patents created by the team
  • Many industry firsts:
    • 32-bit RISC (1987)
    • Microprocessor in a gate array (1987)
    • Microprocessor SoC (1991)
    • 64-bit processor (1995)
    • Glueless SMP
    • VIS (multimedia extensions to the FPU) (1995)
    • Multi-core & multi-threading (2001)

His departure marks the bookend of both his life and SPARC – how tragically coincidental, and a reflection of the wake he left. A befitting bio of Anant Agrawal is posted here at the Computer History Museum.

Sharing a poignant image of his family holding his hands around the time of his departure.

[Photo]

                                                                            R I P

                                                                           Anant

VMware moment in storage: V2.0

Back in October 2016, I posted this blog on storage, speculating that we were on the verge of the next big shift in storage – towards distributed storage platforms – and that an emerging company would stand out from the plethora of startups in this space with a unique technology and business model.

18 months later, I would like to revise that thesis. What remains true is the following:

  1. Yes, the world of storage today follows distributed system design
  2. Yes, it will have a new business model

What I expected then was for a startup company to emerge and exploit this category. I believe that is less likely now, as the market dynamics – specifically the consumption model – have changed.

What is in vogue today is the cloud-like consumption model. Clearly AWS, Google and Azure are in the cloud storage space, and a good part of storage needs is addressed by them, which also means they are eating part of the storage pie (speculatively up to 30% of the market in dollar value, perhaps higher in capacity). Storage-only software companies are less likely today than they were in 2016. As the consumption model increasingly looks cloud-like, private cloud is emerging as the new category. That means distributed storage that is resilient, feature-rich and bundled with other parts of the cloud stack. The three stacks that have a chance to gather market share in this space are VMC (vSAN), Nutanix (NDFS) and Azure Stack (blob storage).

As enterprises (big and small) shift their infrastructure spend to the public cloud, and now to the evolving private cloud, there is little room left for traditional appliance-like storage. While standalone software-defined storage solutions (Ceph, MapR, Excelero, Datera, DriveScale, to name a few) do have a play, the emergence of the VMware cloud stack (VMC), Nutanix and Azure Stack says the market is accepting integrated solutions where good is 'good enough' – i.e. while the storage solutions from the top three private cloud players are architected well, they also meet the 'good enough' bar, leaving little room for any compelling alternatives.

So where do these ideas/companies go next? A cloud-agnostic, multi-cloud data access solution that gives customers independence and avoids lock-in is one option; the challenge is that competing against the investment of the big three is hard. Another is to focus on emerging mode-2 applications and their needs – containers, KV stores, new data access methods. Potentially. Or perhaps a variant of that, focused on the new and emerging memory stacks (memory-centric). If one combines a unique technology that satisfies a need with a unique business model, disruption will happen.

Again, let's revisit this in 2020 (18 months from now) and see how much of this speculation becomes reality.

Open Systems to Open Source to Open Cloud.

“I’m all for sharing, but I recognize the truly great things may not come from that environment.” – Bill Joy (Sun Founder, BSD Unix hacker) commenting on opensource back in 2010.

In March at Google Next '17, Google officially called its cloud offering the 'Open Cloud'. That prompted me to pen this note, reflect upon the history of Open (Systems, Source, Cloud), and ask what properties make them successful.

A little-known tidbit of history is that the earliest open source effort of the modern computing era (1980s to present) was perhaps Sun Microsystems. Back in 1982, each workstation shipped with a tape of BSD – i.e. a modern-day version of open source distribution (BSD license). In many ways, well before Linux, open source was initiated by Sun and in fact driven by Bill Joy. But Sun is associated with 'Open Systems' instead of open source. Had the AT&T lawyers not held Sun hostage, the trajectory of open source would be completely different, i.e. Linux might look different. While Sun and Scott McNealy later tried to go back to those open source roots, the second attempt (20 years later) was not rewarded with success.

In my view, the success of any open source model requires the following three conditions to make it viable as a sustainable, standalone business and distribution model:

  • Ubiquity: Everybody needs it, i.e. it's ubiquitous with a large user base
  • Governance: It requires a 'benevolent dictator' to guide, shape and set the direction. Democracy is anarchy in this model.
  • Support: A well-defined and credible support model. Throwing code over the wall will not work.

Back to open systems: early in its life, Sun shifted to a marketing message of open systems rather effectively: publish the APIs, interfaces and specs, and compete on implementation. It was powerful storytelling that resonated with the customer base, and to a large extent Sun was all about open systems. Sun used that to take on Apollo and effectively out-market and outsell Apollo workstations. The open systems mantra was Sun's biggest selling story through the 1980s and 1990s.

In parallel, around 1985, Richard Stallman pushed free software, and the evolution of that model led to the origins of open source as a distribution model before it became a business model, starting with Linus and thus Linux. It's ironic that 15+ years after the initial selling of open systems, open source via Linux came to impact Sun's Unix (Solaris).

With Linux, the open source era was born (perhaps around 1994, with the first full release of Linux). A number of companies were formed, notably Red Hat, which exploited it and remains by far the largest and longest-viable standalone open source company.

 


Open systems in the modern era began with Sun in 1982 and continued for some 20 years; open source became a distribution and business model between 1995 and 2015 and will continue for another decade. Twenty years later, we see the emergence of 'open cloud', or at least the marketing term from Google.

In its 20 years of existence, open source has followed the classical curve of interest, adoption, hype, exuberance, disillusionment and the beginnings of decline. There is no hard data to assert that open source is in decline, but it is obvious from simple analysis that with the emergence of the cloud (AWS, Azure and Google), the consumption model of open source infrastructure software has changed. The big three in cloud have effectively killed the model, as the consumption and distribution model of infrastructure technologies is rapidly changing. Several open source products in vogue today have reasonable traction but are struggling to find a viable standalone business model: Elastic, Spark (Databricks), OpenStack (Mirantis, SUSE, Red Hat) and Cassandra (DataStax), among others. Success requires all three conditions: ubiquity, governance and support.

Talk to the venture community and the open source model for infrastructure is effectively in decline. While it was THE model until perhaps 2016 – open source was the 'in thing' – the decline is accelerating with the emergence of the public cloud consumption model.

Quoting Bill (circa 2010), which says a lot about the viability of the open source model: “The Open Source theorem says that if you give away source code, innovation will occur. Certainly, Unix was done this way. With Netscape and Linux we’ve seen this phenomenon become even bigger. However, the corollary states that the innovation will occur elsewhere. No matter how many people you hire. So the only way to get close to the state of the art is to give the people who are going to be doing the innovative things the means to do it. That’s why we had built-in source code with Unix. Open source is tapping the energy that’s out there”. The smart people now work at one of the big three (AWS, Azure and Google), and that is adding to the problems for both the innovation and the consumption of open source.

That brings us to Open Cloud – what is it? Google announced that it is the open cloud, but what does that mean? Does it mean Google is going to open source all the technologies it uses in its cloud? Does it mean it is going to expose the APIs and enable one to move any application from GCP to AWS or Azure seamlessly, i.e. compete on implementation? It has certainly done a few things: it open-sourced Kubernetes, and it has opened up TensorFlow (its ML framework). But the term 'open cloud' is not clear, even though the marketing message is certainly out there. Like yin and yang, for every successful 'closed' platform there has been a successful 'open' platform. If there is an open cloud, what is a closed cloud? The what and the who need to be defined and clarified in the coming years. From Google we have seen a number of ideas and technologies that eventually ended up as open source projects. From AWS we see a number of services becoming de facto standards (much like the open systems thesis – S3, to name one).

Kubernetes is the most recent ubiquitous open source software that seems well supported. It is still missing the 'benevolent dictator' – a personality like Bill Joy, Richard Stallman or Linus Torvalds to drive its direction. Perhaps it is 'Google' rather than a single person? By the same criteria, OpenStack has the challenge of missing that 'benevolent dictator'. Extending beyond Kubernetes, it will be interesting to watch the evolution and adoption of containers + Kubernetes versus new computing frameworks like Lambda (functions, etc.). Is it time to consider an 'open' version of Lambda?

Regardless of all these framework and API debates, and of open vs. closed, one observation:

Is open cloud really 'open data'? Data is the new oil that drives the whole new category of computing in the cloud. Algorithms and APIs will eventually open up, but data can remain 'closed', and that remains a key source of value, especially in the emerging ML/DL space.

Time will tell…..

On October 28th, IBM announced the acquisition of Red Hat. This marks the end of open source as we know it today. Open source will thrive, but not in the form of a large standalone business.

Time will tell…

1987 – 2017: SPARC Systems & Computing Epochs

 

Thirty years of SPARC systems: it was this month, in July 1987, that the Sun 4/260 was launched. A month before, I started my professional career at Sun – June 15, 1987, to be exact. 16+ years of my professional life were shaped by Sun, SPARC, systems, and more importantly the whole gamut of innovations Sun delivered, from chips to systems to operating systems to programming languages, covering the entire spectrum of computing architecture. I used to pinch myself for getting paid to work at Sun. It was one great computer company that changed the computing landscape.

 

[Photo: Sun 4 launch]

 

The SPARC story starts with Bill Joy, without whom Sun would not exist (Bill was the fourth founder, though), as he basically drove the re-invention of computing systems at Sun and thus in the world at large. Bill drove the technical direction of computing at Sun and initiated many efforts – Unix/Solaris, programming languages, and RISC, to name a few. David Patterson (UC Berkeley, now at Google) influenced the RISC direction. [David advised students who changed the computing industry, and it seems he is involved with the next shift with TPUs at Google – more later.] I call out Bill among the four founders (with Andy Bechtolsheim, Scott McNealy and Vinod Khosla) in this context because without Bill, Sun would not have pulled in the talent – he was basically a big black hole that sucked in talent from across the country and the globe. Without that talent, the innovations and the 30-year history of computing would not have been possible. A good architecture is one that lives for 30 years. This one did. Not sure about the next 30 – more on this later. Back then, I dropped my PhD in AI (I was getting disillusioned with that era's version of AI) for a Unix machine on my desktop and Bill Joy. The decision was that simple.

From historical accounts, the SPARC system initiative started in 1984. I joined Sun when the Sun 4/260 (Robert Garner was the lead) was about to be announced in July. It was a VME-backplane machine, built both as a pedestal computer (12 VME boards) and as a rack-mount system, replacing the then-current Sun 3/260 and Sun 3/280. It housed the first SPARC chip (Sunrise), built out of gate arrays with Fujitsu.

 

This was an important product in the modern era of computing (198X-201X). 1985-87 was the beginning of the exploitation of instruction-level parallelism (ILP), with RISC implementations from Sun and MIPS; IBM/POWER followed later, although it was incubated within IBM earlier than both. The guiding principle was that compilers can do better than humans and can generate optimal, simpler code against an orthogonal instruction set. The raging debate then was "can compilers beat humans in generating code for all cases?". It was settled with the dawn of the RISC era. This was the era when C (Fortran, Pascal and Ada were other candidates) became the dominant programming language and compilers were growing by leaps and bounds in capability. Steve Muchnick led the SPARC compilers, while Fred Chow did the same for MIPS. Recall the big debates about register windows (Bill to this day argues about the register-window decision) and code generation. In brief, it was the introduction of pipelining, orthogonal instruction sets and compilers (in some sense, compilers were that era's version of today's rage in "machine learning", where machines started to outperform the human ability to program and generate optimized code).

There were many categories enabled by Sun and the first SPARC system.

  1. The first chip was implemented in a gate array, which was more cost-effective as well as faster TTD (time to design). The fabless semiconductor model was born out of this gate-array approach and was eventually exploited by many companies; a new category emerged in the semiconductor business.
  2. The EDA industry, starting with Synopsys and its Design Compiler, was enabled and driven by Sun. Verilog was formalized as a hardware description language, with an event-driven evaluation model. Today's reactive programming is yesterday's Verilog (not really, but the point is that HDLs have always been event-driven programming).
  3. Sun created an open ecosystem of (effectively free) licensable architecture. It was later followed by open source for hardware (OpenSPARC), which was a miserable failure.

The first system was followed by the pizza box (SPARCstation 1), using the Sunrise chip. A series of system innovations followed, with associated innovations in SPARC:

  1. 1987 Sun4/260 Sunrise – early RISC (gate array)
  2. 1989 SPARCstation 1 (Sunray) – Custom RISC
  3. 1991 Sun LX  (Tsunami) – First SoC
  4. 1992 SPARCstation 10 (Viking) – Desktop MP
  5. 1992 SPARCserver (Viking) – SMP servers
  6. 1995 UltraSPARC 1, Sunfire (Spitfire) – 64 bit, VIS, desktop to servers
  7. 1998 Starfire (Blackbird), Sparcstation 5 (Sabre) – Big SMP
  8. 2001 Serengeti (UltraSPARC III) – Bigger SMP
  9. 2002 Ultra 5 (Jalapeno) – Low Cost MP
  10. 2005 UltraSPARC T-1 (Niagara) – Chip Multi-threading
  11. 2007 UltraSPARC T-2 – Encryption co-processor
  12. 2011 SPARC T4
  13. 2013 SPARC T5, M5
  14. 2015 SPARC M7 (Tahoe)
  15. 2017 M8…

The system innovations were driven by both SPARC and Solaris (SunOS back then). There are two key punctuations in this innovation, and we have entered the third era in computing. The first two were led by Sun, and I was lucky to be part of that innovation and able to help shape it as well.

1984-1987 was the dawn of the ILP era, which continued for the next 15 years until thread-level parallelism became the architectural thesis, thanks to Java, the internet and throughput computing. A few things Sun did were very smart. Those include:

  1. Sun took a quick and fast approach to implementing the chip by adopting gate arrays. This surprised IBM and perhaps MIPS with respect to speed of execution. Just two engineers (Anant Agrawal and Masood Namjoo) did the integer unit – no floating point – while MIPS was meanwhile designing a custom chip.
  2. It was immediately followed by Sunrise, a full-custom chip done with Cypress (for the integer unit) and TI (for the floating-point unit). Of all the RISC chips designed in the same era, SPARC along with MIPS stood out (and eventually POWER).
  3. That was the one-two punch enabled by Sun owning the architectural paradigm shift (C/Unix/RISC) as the compute stack of the time.

Industry-first pipelined, superscalar (more than one instruction per clock) designs became the drivers of performance. Sun innovated both at the processor level (with compilers) and at the system level, with symmetric multi-processing and the operating system, to drive the 'attack of the killer micros'. A number of successes and failures followed the initial RISC (Sunrise-based) platform.

  1. Suntan was an ECL SPARC chip that was built but not taken to market, for two reasons. [I have an ECL board up in my attic.] The CMOS vs. ECL debate was ending, with CMOS rapidly approaching the speed-power ratio of ECL; more importantly, continuing with ECL would have drained the company relative to the value of the high end of the market. MIPS carried through and perhaps drained significant capital and focus by doing so.
  2. SuperSPARC was the first superscalar processor, which came out in 1991; working with Xerox, Sun delivered the first glueless SMP (MBus and XBus).
  3. 1995 brought a 64-bit CPU (MIPS beat it to market, but was too soon) with integrated VIS (media SIMD instructions).

After that, the next big architectural shift was multi-core and multi-threading. It was executed with mainstream SPARC but accelerated with the acquisition of Afara and its Niagara family of CPUs. If there is a 'hall of fame' for computer architects, a shout-out goes to Les Kohn, who led two key innovations – UltraSPARC (64-bit, VIS) and UltraSPARC T1 (multi-threading). The seeds of that shift were sown in 1998, and a family of products exploiting multi-core and threading was brought to market starting in 2002/2003.

1998, in my view, was the dawn of the second wave of computing in the modern era (1987-2017), and again Sun drove this transition. The move to web- or cloud-centric workloads and the emergence of Java as a programming language for enterprise and web applications enabled the shift to TLP – thread-level parallelism. In short, this is the classical space-time tradeoff: clock rate had diminishing returns, and the shift to threading and multi-core began with the workload shift. Here again SPARC was the innovator with multi-core and multi-threading. The results of this shift started showing in systems around 2003, roughly 15 years after the introduction of SPARC with the Sun 4/260. In those 15 years, computing power grew by 300+ times and memory capacity by 128 times, roughly following Moore's law.
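
A quick sanity check of that "roughly following Moore's law" statement – this is just arithmetic on the two growth factors quoted above, nothing more:

    # What doubling period do 300x compute and 128x memory over 15 years imply?
    import math

    def implied_doubling_months(growth_factor, years=15):
        return years * 12 / math.log2(growth_factor)

    print(f"compute 300x -> doubling every ~{implied_doubling_months(300):.0f} months")
    print(f"memory  128x -> doubling every ~{implied_doubling_months(128):.0f} months")
    # Roughly 22 and 26 months, i.e. near the classic 18-24 month Moore's-law cadence.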

The first 15 years were the ILP era, and the second 15 years were about multi-core and threading (TLP). What is the third? We are at the dawn of that era. I phrase it as 'MLP' – memory-level parallelism. Maybe not. But we know a few things now: the new computing era is more 'data path' oriented – be it GPU, FPGA or TPU – some form of high compute throughput matched to emerging ML/DL applications. A number of key issues have to be addressed.

 

[Chart: computing trends]

Every 30 years, technology, business and people have to re-invent themselves, otherwise they stand to wither away. There is a pattern there.

There is a pattern here with SPARC as well. SPARC and SPARC-based systems have reached their 30-year life, and it looks to be the beginning of the end, while a new generation of processing is emerging.

Where do we go from here? Definitely, applications and technologies are the drivers, and ML/DL is the obvious one. The technologies range from memory, coherent 'programmable datapath accelerators', programming models (event-driven?), and user-space resource managers/schedulers to lots more. A few key meta-trends:

  • For the past 30 years, hardware (silicon and systems) aggregated resources (e.g. SMP) while software disaggregated them (VMware). I believe the reverse will be true for the next 30, i.e. disaggregated hardware (e.g. accelerators or memory) with software doing the aggregation (e.g. vertical NUMA, heterogeneous schedulers).
  • Separation of control-flow-dominated code from data-path-oriented code will happen (early signs with TPUs).


  • Event-driven (e.g. reactive) programming models will become more prevalent. The 'serverless' trend will only accelerate this, as traditional (procedural) programmers have to be retrained to do event-driven programming (hardware folks have been doing it for decades).
  • We will build machines that will be hard (not in the sense of complexity – more in the sense of scale) for humans to program.

CMOS, RISC, Unix and C were the mantra in the 1980s. Next it is going to be memory (some form of resistive memory structure), and a re-thinking of the stack needs to happen. Unix is also 30+ years old.

[Image: then vs. now]

 

Just when you thought Moore's law was slowing, the amalgamation of these emerging concepts and ideas into a simple but clever system will give rise to the next 30 years of innovation in computing.

Strap yourself for the ride…

 

 

30 (15 and 7) Year Technology Cycles


We have all heard of the 7-year itch. Have you ever asked why it's 7? Why not 10 or 4? Looking back, I have had those 7-year itches.

Have you had the urge to leave your job or change your role significantly around the 15-year mark? Think about this very carefully, or ask around!! I left my first job after 16 years.

What about 30 years? Perhaps it is time to retire, or to seek a completely new profession?

Now let's look at businesses. Let's review two examples.

  1. Apple: From 1977, it was the Apple II. Seven years after founding came the Mac (1984). Seven years later, the PowerBook (1991). Seven years later, Steve was back with the iMac (1998). Thirty years after founding, the iPhone (2007). New product, new business model. Apple re-invented. (http://applemuseum.bott.org/sections/history.html)
  2. Sun: 1982-1989 – Motorola + BSD Unix. 1989-1996 – SPARC and Solaris. 1996 – UltraSPARC I and Java. 2009 (27 years) – acquired by Oracle. http://devtome.com/doku.php?id=timeline_of_sun_microsystems_history

Why 7? Why 15? Why 30? The 7-year itch feeds into the 15-year peak, which feeds into the 30-year retire/re-invent cycle. Since businesses are made of people, especially in the technology sector, business cycles show the same 7, 15 and 30-year rhythms. At 7, one becomes proficient in a domain (10,000+ hours) and change is in the air. At 15, you feel like you have reached a peak. At 30, you become obsolete (retire) or re-invent.

At the core of these 30-year transitions, the DNA has to stay relevant or one 'retires'. Just as biological DNA defines our value proposition, so it does for businesses. Apple has the core DNA of designing eye-candy products (materials + UI); this has carried from the desktop to mobile. IBM has the DNA of the general computing business, unchanged over decades. Sun had the DNA of open source and open systems. Intel has the core DNA of device engineering and manufacturing (Moore's law).

The similarity has to do with human productivity cycles. While this assertion is not statistically rigorous, there is enough data to suggest a linkage between business cycles and human productivity cycles.

First let's visit some 30-year cycles… In most cases the business model changes, not just the technology. Usually the incumbent sees the technology shift and can handle it, but fails to see or react to the business model change.

Survivors:

  1. IBM: Strangely, IBM has revived itself every 30 years. There was the mainframe from the 1950s to the 1980s, then the PC and Global Services from the 1980s to the 2010s. Now it is on a re-invention cycle with Watson and enterprise cloud. IBM has done this repeatedly over its 100-year history, perhaps the only company to do so. But it is facing that issue once more… Time will tell…
  2. Apple: From 1977, it was the Apple II. Seven years after founding came the Mac (1984). Seven years later, the PowerBook (1991). Seven years later, Steve was back with the iMac (1998). Thirty years after founding, the iPhone (2007). New product, new business model – but the same DNA. 1992 was a critical year, as we all know. 2014 was 7 years after the first iPhone – another change cycle? The smart move by Apple was the shift from shrink-wrapped software to the App Store: a new business model. Fantastic! The iPhone, which is 10 years old, will reach its peak in the next few years. Apple will have to have its next big act before 2021?
  3. HP: Another survivor, it has gone through two 30-year cycles and is in the middle of the third. It remains to be seen how it will evolve.

Recycled

  1. Sun: My alma mater. Started in 1982 and died, or got gobbled up, in 2010. Perhaps it lives inside Oracle – not really. Again, 28 years and it died. Founding DNA = open source + interactivity and big memory. It lost its way when Linux took over (open source, around 1998 – the 15-year mark!!!).
  2. DEC: 1957-1987; remember the VAX 9000? That was the beginning of the end of its 30-year run. It got gobbled up and could not handle the next business transition.
  3. Many other companies fall into this category (SGI…..)
Challenged
  1. Cisco: 1984-2014 (30 years). The networking category is being challenged much as VMware (with Xeon) came and altered the server market: commodity switches + software, plus other problems. The good news for Cisco is that it has lots of cash. It needs to find significant new growth categories, and it is trying, with a new CEO and a move upstream.
  2. Intel: Intel got out of the DRAM business in 1984. Now it is facing challenges in its core, having missed the whole handheld and tablet market. Again, another 30-year transition cycle for Intel.
  3. Microsoft: 1997-2007. Windows Vista, Windows 7, Windows 8 – not a growth engine. It missed the phone and failed to grasp the new business model. With a change in CEO and now the shift to the cloud, albeit late, it seems to have made, or to be making, the transition. It's a work in progress.
  4. Oracle: The 30-year-old database market has run its course. There is the shift to in-memory and, more importantly, the shift to the cloud (AWS/Aurora and Google/Spanner), combined with the sedimentation of the plumbing layer (look at how many database companies are out there). The challenge for Oracle is not technology (which it can address by acquiring the next viable business); the shift is in the enterprise's move from licensed software to the cloud model.
There is a pattern in this 30-year cycle. Typically a technology shift and a business model shift occur together, and most companies do not survive both; as you can see, very few have made the 30-year transition. If you look deeper, there seem to be 15-year and 7-year sub-cycles.
Guess what: Google is past the 15-year cycle. Amazon, founded in 1994, had its second act starting around 2008 (AWS).
The 1980s were the golden era for many tech companies:
  • Computing (Intel, Sun, Apple, Dell…)
  • Networking (Cisco, 3Com)
  • Storage (EMC, NetApp later)
Common theme: a business model of selling integrated systems (the OEM model).
30 years later, that model has turned upside down… We are into the next 30-year cycle with the cloud.
I touched upon this topic at one of the UC Berkeley ASPIRE retreats (2014); an image of a handwritten slide is shown below. This time, as in 1984 when DRAM gave way to logic (Intel), new memory technology will drive the new technology stack and thus new business opportunities. Maybe – or at least I am hopeful of it. This will be the topic of the next post…
[Handwritten slide: 30-year technology re-invention]
Meanwhile – what comes after 7, 15, 30?
Coming back to 30-year technology cycles and individuals – is there a link?
I'd like to hear more about it from others…