
Today, Intel owns the data center market. The only challenger in the x86 space, AMD, once claimed a significant share of that market, but it has been all but eliminated after years of uncompetitive CPU architectures. AMD has been driven to single-digit market share, though the company hopes to take back some of it with its upcoming Zen processor, due next year. Other vendors, like IBM or ARM, have an even smaller market share than AMD. That could change in the next few years, however: Google has thrown its support behind a new interconnect standard, OpenCAPI, and IBM’s POWER9 CPU architecture.


Google puts Intel on notice

In a blog post on Friday, Google announced that it had joined the OpenCAPI consortium, a group dedicated to developing a next-generation set of interconnects for servers and data centers. If this is giving you a sense of déjà vu, never fear: the Gen-Z announcement we covered last week also concerned a large group of companies developing a next-generation interconnect, and most of the same companies are members of both. Gen-Z aims to develop an interconnect standard for storage devices, heterogeneous accelerators, and pooled memory using a memory-semantic fabric, while OpenCAPI uses DMA semantics. Google and Nvidia are the only two members of OpenCAPI that aren’t also members of Gen-Z.
In its blog post, Google documents a new server it has developed, the Zaius P9 (which implements the OpenCAPI standard).
Zaius is designed to use two IBM POWER9 LaGrange CPUs with support for DDR4 (16 DIMM slots per CPU, 32 total), along with two 30-bit buses handling inter-CPU communication. POWER9 will include support for PCI Express Gen 4, with 84 lanes spread between the two processors. PCIe 4.0 isn’t expected to be finalized until 2017, and there’s no word on when consumer hardware will actually be available. POWER9 is expected in 2017, but we don’t know when Google’s Zaius specifically will debut. The chips themselves will target a 225W TDP, well above most of Intel’s hardware.
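For a rough sense of scale, here is a back-of-the-envelope calculation of what 84 lanes of PCIe 4.0 could deliver. The per-lane rate and encoding are the commonly cited PCIe 4.0 figures (16 GT/s with 128b/130b line coding), not numbers from Google’s post:

```python
# Rough aggregate PCIe bandwidth for the Zaius design described above.
# Assumed figures: PCIe 4.0 at 16 GT/s per lane, 128b/130b encoding.
GT_PER_LANE = 16e9          # transfers/s per lane
ENCODING = 128 / 130        # 128b/130b line-coding efficiency
BITS_PER_TRANSFER = 1       # one bit per transfer per lane
LANES = 84

per_lane_gbs = GT_PER_LANE * ENCODING * BITS_PER_TRANSFER / 8 / 1e9
total_gbs = per_lane_gbs * LANES
print(f"{per_lane_gbs:.2f} GB/s per lane, ~{total_gbs:.0f} GB/s across 84 lanes")
```

That works out to roughly 165 GB/s of aggregate bandwidth per direction, which is why 84 lanes is a headline feature.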
The goal of these new interconnect initiatives is to challenge Intel’s dominance in this space. OpenCAPI is a project Nvidia has prominently planned to support with the enterprise version of its Pascal architecture, and AMD has its own reasons for cooperating with such efforts. If it wants to win back space for Zen, it may have decided that throwing in its lot with competitors working on new interconnects is the right way to do that. There’s precedent for this: back in 2003, it was AMD’s HyperTransport bus and its support for “glueless” multi-socket systems that gave the company a prominent advantage over Intel in the multi-socket server market. Even after dual- and quad-core chips were available, Opteron continued to outperform some of Intel’s Core 2 equivalents in multi-socket configurations, at least for a little while.
The threat to Intel is in the last line of Google’s blog post, where the company writes: “We look forward to a future of heterogeneous architectures within our cloud. And, as we continue our commitment to open innovation, we’ll continue to collaborate with the industry to improve these designs and the product offerings available to our users.”
That might seem like a mild sentence, but it’s a shot across the bow. Google is prominently backing Intel’s chief competitors, and given the consistent downturn in the PC industry, you can bet that Intel is taking any and all threats to its data center market extremely seriously.

Gen-Z Consortium Puts New High Performance Interconnect in Motion

Industry powerhouses have joined forces to address an issue that has confounded system architects since the advent of multicore computing, one that has gained urgency with the rising tide of big data: the need to balance processing power against data access. The Gen-Z Consortium has set out to create an open, high-performance memory-semantic fabric interconnect and protocol that scales from the node to the rack.
Gen-Z brings together 19 companies (ARM, Cray, Dell EMC, HPE, IBM, Mellanox, Micron, Seagate, Xilinx and others) and bills itself as a transparent, non-proprietary standards body that will develop a “flexible, high-performance memory semantic fabric (providing) a peer-to-peer interconnect that easily accesses large volumes of data while lowering costs and avoiding today’s bottlenecks.” The not-for-profit organization said it will operate like other open source entities and will make the Gen-Z standard free of charge.


Vowing to enable Gen-Z systems in 2018, the consortium aims to address what it says are obsolete “programmatic and architectural assumptions”: that storage is slow, persistent, and reliable, while data in memory is fast but volatile. These assumptions, the consortium contends, are no longer optimal in the face of new storage-class memory technologies, which converge storage and memory attributes. Its objectives are a new approach to data access that takes on explosive data growth, real-time application requirements, the emergence of low-latency storage-class memory, and the demand for rack-scale resource pools.

Kurtis Bowman, director, server solutions, office of the CTO, at consortium member Dell EMC, said that 12 of the member companies have worked for the past year to develop what he called a “.7- or .8-level spec” on the fabric, “so there’s still opportunity for new members to contribute to the spec, make it stronger,” but enough work has been done “with the spec in proving out that the technology itself is right.”


“We get asked a lot, ‘Why the new bus?’” he said. “It’s because there’s really nothing that today solves all the problems that we think exist. One is that memory is flat or shrinking in the servers that we have today. So the bandwidth per core is shrinking to a point where today we have less bandwidth per core than we did in 2003. The memory capacity per core is shrinking, the I/O per core is shrinking. It really comes down to there’s just not enough pins on the processor to be able to get the requisite amount of memory and I/O that you need.”
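Bowman’s bandwidth-per-core point is easy to sanity-check. The specific parts and channel counts below are illustrative assumptions, not figures from the interview:

```python
# Illustrating the "bandwidth per core" squeeze Bowman describes,
# using assumed (not article-supplied) configurations.
# 2003: a single-core Opteron with 2 channels of DDR-333 (~2.7 GB/s each).
bw_2003 = 2 * 2.7     # GB/s of memory bandwidth per socket
cores_2003 = 1
# 2016: a 22-core Xeon with 4 channels of DDR4-2400 (~19.2 GB/s each).
bw_2016 = 4 * 19.2    # GB/s of memory bandwidth per socket
cores_2016 = 22

print(f"2003: {bw_2003 / cores_2003:.1f} GB/s per core")
print(f"2016: {bw_2016 / cores_2016:.1f} GB/s per core")
```

Even though per-socket bandwidth has grown enormously, core counts have grown faster, so each core’s share has fallen, exactly the trend Bowman points to.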

He emphasized the need to solve this challenge as real-time workloads are increasingly adopted: “You have to be able to quickly analyze the data coming in, get some insights from that data, because as it takes longer to analyze that data, your time to insights pushes out and makes it less valuable. So we want to make it so it’s easier to get compute and data closer together and allow those to be done” in a standardized way, across CPUs, GPUs, FPGAs and other architectures. “All of them need access to the memory that’s available.”

Gen-Z touts the following benefits:

  • High bandwidth and low latency via a simplified interface based on memory semantics, scalable to 112 GT/s and beyond with DRAM-class latencies.
  • Support for advanced workloads by enabling data-centric computing with scalable memory pools and resources for real-time analytics and in-memory applications.
  • Software compatibility, with no required changes to the operating system, while scaling from simple, low-cost connectivity to a highly capable, rack-scale interconnect.


Gartner Group’s Chirag Dekate, research director, HPC, servers, emerging technologies, said the consortium’s focus on data movement has important implications on high-growth segments of the advanced scale computing market, such as data analytics and machine learning, that utilize coprocessors and accelerators.

“These technologies are crucial in delivering the much needed computational boost for the underlying applications,” Dekate said. “These architectures are biased towards extreme compute capability. However, this results in I/O bottlenecks across the stack.”

He said coprocessors and accelerators utilize the PCIe bus to synchronize host and device memories, despite there being roughly three orders of magnitude difference between the FLOPS-rate and the bandwidth of the underlying PCIe bus. “This essentially translates to dramatic inefficiencies in performance, especially in instances where there isn’t sufficient parallelism to hide the data access latencies,” said Dekate. “This problem is only going to get worse as the computational capabilities of core architectures evolve more rapidly than the supporting memory subsystems, resulting in a fundamental mismatch between data movement within a compute node and the floating point rate of modern processors.”
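Dekate’s “three orders of magnitude” figure checks out with representative numbers. The accelerator and bus figures below are assumptions for illustration, not his:

```python
# The compute-vs-PCIe gap Dekate describes, with assumed figures:
# a Pascal-class accelerator (~5.3 TFLOPS FP64) behind PCIe 3.0 x16
# (~16 GB/s per direction).
flops = 5.3e12                  # double-precision FLOP/s
pcie_bytes = 16e9               # bytes/s over PCIe 3.0 x16
doubles_per_s = pcie_bytes / 8  # 8-byte operands delivered per second

ratio = flops / doubles_per_s
print(f"~{ratio:.0f} FLOPs per operand delivered over PCIe")
```

The result lands in the low thousands, i.e. roughly three orders of magnitude between compute rate and data delivery rate, so any kernel without enough on-device reuse or parallelism stalls on the bus.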

Initiatives like Gen-Z are crucial for addressing the data movement challenges that emerging compute platforms are facing, he said. “The success of Gen-Z will depend on the consortium’s ability to expand and integrate broader scale of processor vendors to be able to have the broadest impact in customer datacenters.”

Gen-Z said it expects to have the core specification, covering the architecture and protocol, finalized in late 2016. Proof systems developed on FPGAs will follow with fully Gen-Z enabled systems on track for mid-2018. Other consortium members include AMD, Cavium Inc., Huawei, IDT, Lenovo, Microsemi, Red Hat, SK Hynix and Western Digital.