Moore No More – (Part IV – Computing Epochs)

The next Tsunami….

We overestimate what we can accomplish in 2 years and under-estimate what we can in 10 years

Bill Gates

“We are at a time of enormous transition and opportunity, as nearly all large-scale computing is moving to cloud infrastructure, classical technology trends are hitting limits, new programming paradigms and usage patterns are taking hold, and most levels of systems design are being restructured. We are seeing wholesale change with the introduction of new applications around ML training and real-time inference to massive-scale data analytics and processing workloads fed by globally connected edge and cellular devices. This is all happening while the performance and efficiency gains we’ve relied on for decades are slowing dramatically from generation to generation. And while reliability is more important than ever as we deploy societally-critical infrastructure, we are challenged by increasing hardware entropy as underlying components approach angstrom scale manufacturing processes and trillions of transistors.”

Amin Vahdat of Google, on the creation of SRG

We are at the cusp of another systems evolution and a rebuild of the entire stack. We have entered the 4th epoch of computing, with epochs arriving at roughly 20-year intervals:

  • Epoch 1: 1965-1985 (mainframe, CRT terminal (VT100))
  • Epoch 2: 1985-2005 ([Workstation, Unix Servers], [PC, Mac])
  • Epoch 3: 2005-2025 (Cloud, Smartphone)
  • Epoch 4: 2025- (TBD, TBD)

It roughly follows Moore’s law in terms of impact on systems, as shown in the visual below.

The onset of the next wave is both obvious and non-obvious depending on your vantage point. It was not obvious in 1980 that Ethernet, NFS, RISC and Unix would let Sun (and perhaps PCs) and eventually big SMP boxes be the end game, nor that Sun’s demise would begin with the dot-com bust. The technology stack drove the use cases, and together they drove out the mini and the mainframe. So: monolithic to disaggregated and back to monolithic (or perhaps modular) within 20 years! The business model then was OEM (Capex). Unix/C/RISC was the tech stack at the infrastructure level, and the use cases or killer apps were HPC, EDA and eventually shared-everything databases (Oracle) and 3-tier enterprise apps (SAP). In a prior SoC to SoP blog I mentioned the emergence of cheap servers from both Sun and ODMs, but Sun failed to capitalize on that early lead and trend as it was beholden to margin over market share, a point Martin Casado makes about Dr. Reddy’s Labs. A classic reference.

By 2000 we saw search (Google, 1998), e-commerce (Amazon, 1995), the hypervisor (VMware, 1998), distributed systems (1U rack-mount servers), 10G networks, distributed storage (S3, 2006), and then cloud happened with all that and more. This time the use cases drove the infrastructure stack, and the incumbents, i.e. the OEMs (Sun, HP, DEC etc.), missed that transition; some eventually disappeared, the last man standing effectively being Dell through consolidation, a low-cost supply chain (market share over margin) and financial engineering (well executed!). Again disaggregated, and now back to monolithic (the hyperscaler clouds of today). The business model with cloud is Opex. (Linux + VM / Java, JavaScript, C, … / x86) is the tech stack, serving all the new consumer and enterprise SaaS apps.

The tide of complexity and simplification is on its march again, but this time the use cases and the technology are coming from both ends, and there will be new winners and many losers. Martin’s blog on the trillion dollar paradox is a leading indicator of this shift, and the pendulum is swinging back to the middle, between the old OEM/on-prem Capex model and the hyperscale cloud Opex model, toward something new. I am guessing there is a healthy mix of agreement and disagreement about this shift inside the hyperscalers. Just when you think they are too big to disappear, think again: every cycle eventually leaves a consolidated entity, and new players emerge. To quote Clayton Christensen’s innovator’s dilemma:

The third dimension [is] new value networks. These constitute either new customers who previously lacked the money or skills to buy and use the product, or different situations in which a product can be used — enabled by improvements in simplicity, portability, and product cost…We say that new-market disruptions compete with “nonconsumption” because new-market disruptive products are so much more affordable to own and simpler to use that they enable a whole new population of people to begin owning and using the product, and to do so in a more convenient setting.

Clayton Christensen

The questions to answer are –

  • What is the new infrastructure stack?
  • What is the new business model?
  • What are the new use case drivers?
  • Who are these new customers?

It’s fascinating to watch Cloudflare, which reminds me of Akamai and Inktomi back in 1998, the customers that led to the 1U / small server at Sun, only to be ignored by the $1M-sale GMs. They took the company down with their margin-over-market-share mentality, drunk on the fish falling into the boat during the pre-dot-com-bust days. Strangely, 20 years later, replace Inktomi or Akamai with Cloudflare and the emergence of a new category of consumption and deployment is the rising tide that is going to lift some new boats and leave others behind.

Fast forward: it’s near impossible to predict which technologies and companies will be the torch bearers by 2030, but we can start with what we’ve got. A few things to keep in mind.

  • Storage: The cloud was enabled by foundational underpinnings in storage (GFS, S3), i.e. distributed storage. The 1980s systems era, as anchored by Sun Microsystems, was successful because of NFS. So a common pattern with the emergence of a new stack is that storage is a critical tier, and those who get it right tend to rule the roost (Sun, AWS).
  • Network: Over the past 2 epochs, networks (TCP/IP) dominated, with Ethernet winning by Epoch 1 and riding the silicon curve in Epoch 2, and now we are at the cusp of breaking it. It’s no longer a unitary world. The network was the computer in Epoch 2; it was even more important in the cloud era, and it’s going to be much more critical in the coming epoch, but not with the networking model you would think.
  • Compute: ILP (instruction-level parallelism) was the enabler in Epoch 2 via RISC. TLP (thread-level parallelism) was the enabler for Epoch 3 (multi-core / multi-threading), which poses the question: what will this wave present itself with? IPU, DPU, TPU are all names for coherent ‘datapath’ accelerators, and that boat is sailing.
  • OS: Unix was the hardware abstraction layer; it gave way to hypervisors, which are giving way to K8s and, I would say, more to Serverless/Lambda, i.e. you need both a HW abstraction layer and a resource management layer (see the sketch after this list).
  • Developer Abstraction:
  • Asset utilization: Turning cost line items in your P&L into businesses is Amazon’s claim to fame. Asset-light vs asset-heavy arguments (fabless vs fab). Taking one use case paid for by one class of customers and leveraging that asset investment into another business is common practice, and Cloudflare is playing that game. This is an important element of disruptions and disruptors. Famously, Scott McNealy (Sun’s CEO) used to implore some of us: the investment in SPARC is already paid for, why not open source it or give it away? Looking back 20 years later he was right, but it was not taken seriously or executed.
  • Marginal value of technology and value extraction: The cost of a transistor going to near zero led to the entire computing platform of the 1980s and 1990s. The cost of transporting a byte going to zero led to the internet and the upper-level services and value extraction built on it. The cost of storage (HDD) going to zero led to the cloud (Yahoo offering mail storage, Google Drive and S3). In the next wave, the cost of a transaction or an inference going to zero will lead to the next set of disruptors and value extraction.
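To make the OS point above concrete: the developer contract is increasingly a function plus a resource declaration, with the hardware abstraction and resource management handled below. A minimal sketch in Python follows; the handler signature matches the AWS Lambda Python convention, but the resources dict and the register() helper are hypothetical, purely for illustration.

    # Minimal sketch of the "function + resource declaration" contract.
    # The handler signature follows the AWS Lambda Python convention;
    # the resources dict and register() helper are hypothetical.

    def handler(event, context):
        """The developer's entire view of the system: an event in, a result out.
        No sockets, no NUMA, no device management."""
        payload = event.get("payload", [])
        return {"sum": sum(payload), "count": len(payload)}

    # What the platform (hypervisor / K8s / serverless runtime) consumes instead
    # of a machine spec: a resource envelope it may place anywhere in the rack.
    resources = {
        "memory_mb": 512,     # the resource management layer picks the backing tier
        "timeout_s": 30,
        "accelerator": None,  # or an (x)PU when the data path dominates
    }

    def register(fn, resources):
        # Hypothetical stand-in for a platform's deployment API.
        print(f"registering {fn.__name__} with {resources}")

    register(handler, resources)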

What is the new infrastructure stack?

The network is the computer, starting with the rack: this idea has been around for some time, but finally a confluence of technologies and use cases, a perfect storm, is brewing. Like the beginning of every epoch, we are starting with ‘disaggregated systems’ (or perhaps modular ones): the computer and the computing platform start with refactoring the computer as we know it into simpler forms, assembled at upper layers of the stack, to enable the evolution of the lower-level components. The physical rack as a computer is not new, but the software layers that make it viable are more needed, and more viable, now than before. It starts with basic building blocks which are now SoP (systems on a package), CXL (Compute Express Link), (x)PUs and emerging memory components. At the height of the Moore exponential (Epoch 2), the CMOS roadmaps drove higher performance, lower cost and lower power, all three. Like the CAP theorem, we can now get only 2 out of 3, and to stay on the same Moore curve, SoP is one way to achieve that performance trajectory. Given more data-parallel workloads, performance is achieved over more silicon area, power is distributed over a larger surface (density), and cost is managed by re-use of many sub-components. It’s expected that by the end of the decade the share of a single system’s total cost and power dedicated to ‘control path’ execution will shrink, while the ‘data path’ component grows, driven by ML/AI and other emerging verticals.

Systems on a package is an enabler of disaggregation of the system. It starts out addressing performance, cost and power, but when combined with emerging memory tiering and CXL at the rack level, it is a key enabler of disaggregating the single node while aggregating at the rack level.
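A minimal sketch of what “disaggregate the node, aggregate at the rack” could look like in software: a rack-level allocator composing compute chiplets, (x)PUs and CXL-attached memory into logical nodes. All names and quantities here are hypothetical, chosen only to illustrate the composition idea.

    from dataclasses import dataclass, field

    # Hypothetical inventory of a disaggregated rack: pools of parts,
    # composed into logical nodes on demand by a software layer.
    @dataclass
    class RackPool:
        cpu_chiplets: int = 64
        xpus: int = 32                  # IPU/DPU/TPU-style datapath engines
        cxl_memory_gb: int = 2_000_000  # ~2 PB of CXL-attached, tiered memory
        allocations: list = field(default_factory=list)

        def compose_node(self, cpus, xpus, mem_gb):
            """Aggregate disaggregated parts into one logical node."""
            if cpus > self.cpu_chiplets or xpus > self.xpus or mem_gb > self.cxl_memory_gb:
                raise RuntimeError("rack pool exhausted")
            self.cpu_chiplets -= cpus
            self.xpus -= xpus
            self.cxl_memory_gb -= mem_gb
            node = {"cpus": cpus, "xpus": xpus, "mem_gb": mem_gb}
            self.allocations.append(node)
            return node

    rack = RackPool()
    inference_node = rack.compose_node(cpus=4, xpus=8, mem_gb=65_536)    # datapath-heavy
    analytics_node = rack.compose_node(cpus=16, xpus=0, mem_gb=262_144)  # memory-heavy
    print(inference_node, analytics_node, sep="\n")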

Three trends: memory is getting disaggregated; compute is going more datapath-centric and disaggregated to meet scale requirements (Nvidia with NVLink and the A100, Google with TPUs and IB); the network is going two-tier to handle the latency vs bandwidth dichotomy; and new frameworks are being established as the contract with application developers. The physical disaggregation has to be aggregated at the software level. Let’s explore each one in a little more detail.

Memory vs storage: While memory has been tracking Moore’s law, the scale of data processing is much larger, hence the emergence of Hadoop and other models that embed distributed computation within storage. But we are reaching a point where interconnects like CXL enable coherent, tiered memory with manageable latencies (1-3 NUMA hops) and a significant increase in capacity. Combined with new workloads that demand not infinite storage space (consumer SaaS) but moderate capacity (some enterprise SaaS, edge, ML, ADN to name a few) and much higher performance (latency), we can re-imagine systems at the rack level with memory-only access modes, with the associated failure domains handled entirely in software.

A visual of that is shown below.

By mid-decade one can imagine 2+ PB of ‘memory’ semantics with tiers of memory enabled by the CXL interconnect. Any page in such a locally distributed system (hyperconverged is the wrong word, so I am not using it) can be reached in sub-microsecond time. There is tremendous investment in PCIe, but the economics of these new apps are the drivers for a memory-centric computing model. At moderate scale, handling (tail) latency is also of value. (Ref: Attack of the Killer Microseconds)
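A back-of-envelope sketch of that rack-level memory pool; the node counts, capacities and latency figures below are illustrative assumptions, not measurements or vendor data.

    # Back-of-envelope for a CXL-tiered rack, mid-decade. All numbers are
    # assumptions for illustration only.
    nodes_per_rack = 32
    local_dram_tb_per_node = 4       # directly attached DRAM
    cxl_expansion_tb_per_node = 64   # CXL-attached expansion / pooled memory

    total_pb = nodes_per_rack * (local_dram_tb_per_node + cxl_expansion_tb_per_node) / 1024
    print(f"rack 'memory' pool: ~{total_pb:.1f} PB")   # ~2 PB

    # Rough latency tiers (order-of-magnitude, illustrative):
    latency_ns = {
        "local DRAM":            100,
        "CXL direct-attached":   250,     # ~1 extra NUMA hop
        "CXL pooled (switched)": 600,     # 2-3 NUMA hops
        "remote node over IP":   10_000,  # RPC within the rack/row
        "NVMe flash":            80_000,
    }
    for tier, ns in latency_ns.items():
        print(f"{tier:24s} ~{ns / 1000:.2f} us")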

Coherent co-processors: Historically, accelerators have lived in their own non-coherent space, starting with GPUs ever since graphics became a category, and of course NICs, FPGAs and some specialized HPC accelerators. Coherency and shareable memory simplify the programming model significantly, and the great unlock is CXL: until now, in the mainstream market, Intel did not open up its coherent interface to other providers, for some very good technical and perhaps business reasons. The new workloads (ML in particular) demanding more compute, and the mapping of ML algorithms to sparse linear algebra (over-generalized, but true for the past 3-5 years), reflect the shift in where both time and power are spent in compute cycles.
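To make the “ML maps to (sparse) linear algebra” point concrete, here is a tiny sparse matrix-vector product with SciPy, the kind of kernel these datapath accelerators are built to chew through; the dimensions and density are arbitrary, chosen only for illustration.

    import numpy as np
    from scipy.sparse import random as sparse_random

    # A tiny stand-in for the kind of kernel ML workloads reduce to:
    # y = A @ x with A mostly zeros (sparse weights/activations/embeddings).
    rng = np.random.default_rng(0)
    A = sparse_random(10_000, 10_000, density=0.001, format="csr", random_state=0)
    x = rng.standard_normal(10_000)

    y = A @ x   # sparse matrix-vector product: the datapath-dominant work
    print(y.shape, A.nnz, "non-zeros")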

The 1-2 µs and 10 µs marks in the memory-storage visual above are interesting numbers: they demarcate the split between the memory model and the storage/IO model. That lends itself to two types of networks. In this case they are synergistic, and one can subsume the other: one solves the needs below the sync or RDMA barrier, and the other provides reach and throughput and carries 40 years of investment. The beauty of CXL, with CXL.io as one of the operating modes, is that it can transport IP packets within the same basic payload and thus provide compatibility while providing unique capability for rack-scale needs. A rack will be the new unit of compute, the new unit of aggregation at the software level, and a good demarcation point for the foreseeable future. 2030 is 10 years away and a lot can change in 10 years (whole new categories of companies will emerge).
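A sketch of the two-network split those marks imply: operations under the sync/RDMA barrier go over the memory-semantic fabric, everything else over the IP/Ethernet path. The thresholds and the routing function are illustrative assumptions, not a standard.

    # Illustrative routing of an operation to one of two fabrics based on its
    # latency budget. Thresholds follow the 1-2 us / 10 us demarcation above
    # and are assumptions for the sketch.
    MEMORY_FABRIC_US = 2    # load/store, cache-line sized: CXL.mem / CXL.cache
    IO_FABRIC_US = 10       # RDMA / RPC over CXL.io within the rack

    def pick_fabric(latency_budget_us, payload_bytes):
        if latency_budget_us <= MEMORY_FABRIC_US and payload_bytes <= 4096:
            return "memory-semantic fabric (CXL.mem)"
        if latency_budget_us <= IO_FABRIC_US:
            return "rack fabric (CXL.io / RDMA)"
        return "IP network (Ethernet, 40 years of investment)"

    for op, (budget_us, size_bytes) in {
        "pointer chase in pooled memory": (1, 64),
        "parameter shard fetch":          (8, 2_000_000),
        "object GET from storage tier":   (200, 10_000_000),
    }.items():
        print(f"{op:32s} -> {pick_fabric(budget_us, size_bytes)}")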

By 2030 these three shifts will enable us to create the ‘fungible computer’, at least at the rack level.

Disaggregation of silicon into component parts, while aggregating them at upper layers of the stack, is the key shift, enabled by new workloads that are more data parallel, re-imagined processing pipelines and memory-centric computational models at specific cost and power plateaus. SoP, CXL, (x)PU (x being your favourite IPU, NPU, DPU, MPU… the alphabet soup of ideas).


The natural value shift and sedimentation is on its march once again. Unix was the high-value layer at the dawn of RISC, and the OS (Solaris, Linux) sedimented down in value over a 20-year period. Multi-core and cloud reset that with hypervisors and scale-management frameworks, and the natural sedimentation has happened again. The emerging MLIR-style layers that map, aggregate, schedule and resource-manage these disaggregated components will hold the most value, and will take their due 20-year cycle to sediment until the next wave of innovation happens.

All of this is technical direction; reality might not follow technology visions. Going back to the proposition stated up front: in this epoch we are going to see new efforts that drive both technology (bottom up) and use cases (top down). To enable that, a new delivery and consumption model is needed.

Enter the bottom feeder once more. The upstream push by a select few global colo providers, married with the right open and/or proprietary management stacks and many elements of the system design above, and used by some of the new players (e.g. Cloudflare, crypto companies for whom a decentralized model is a first-class property), will ride the next tsunami to challenge the current incumbents with a new technology stack (edge native) and new business models. The business model is interesting and will require a blog of its own at the right time, but it’s starting to emerge. To remind ourselves, we shifted from Capex to Opex to now… (left as an exercise, or perhaps share your ideas with me (DM), I’d love to hear them).

In summary, I am reminded of Jerry Chen’s framework: systems of engagement, systems of intelligence, systems of record. The more things change, the more they continue to fit within the overall framework.

Tier          | Epoch 2  | Epoch 3      | Epoch 4
Tier 1 System | Web      | Engagement   | NLP, CV, ML?
Tier 2 System | App Tier | Intelligence | Serverless?
Tier 3 System | Database | Record       | Crypto?

The pendulum is starting to swing away from the current cloud, either to a new middle (cloud exchanges) or perhaps all the way to the other end of its arc (edge). We will know for sure by 2025, long before 2030….

Back in 2001 I led the charge of building basic blocks for the emerging distributed-systems world. Alas, I failed to see the use-case-driven approach that led to innovation elsewhere. Lessons learnt. Expect more of this from me this decade, and if you are interested, let’s talk, engage and collaborate.

In closing, what prompted me to pen this was Amin’s note. While the ‘cloud’ has innovated a lot in hardware systems, most of it was derived from HW innovations kicked off in the prior epoch. This coming decade will be unlike both prior epochs: we don’t have more of Moore, it is highly distributed and perhaps decentralized, and more hardware innovation is at play than in the prior (cloud) epoch.


One thought on “Moore No More – (Part IV – Computing Epochs)”

  1. Nice writeup Renu. The one thing we don’t know is what (or whether a) technology will emerge to play the role that silicon + Moore played over the past 50 years. There are some hints on the horizon. If something turns up, it will explode onto the scene, given how today’s world works.
