New Intel Technologies Highlighted in SC16 Announcements
Tuesday, 13 December 2016
Posted by ARM Servers
"Updates for Intel® Xeon® processors, Intel® HPC Orchestrator, Intel® Deep Learning Inference Accelerator and other forthcoming supercomputing technologies available soon"
SC16 revealed several important pieces of news for supercomputing experts. In case you missed it, here's a recap of the updates Intel announced, which will bring even more powerful capabilities to HPC challenges like energy efficiency, system complexity, and simplified workload customization. In supercomputing, one size certainly does not fit all. Intel's new and updated technologies take a step forward in addressing these issues, allowing users to focus more on their HPC applications and less on the technology behind them.
In 2017, developers will welcome a next generation of Intel® Xeon® and Intel® Xeon Phi™ processors. As you would expect, these updates offer increased processor speed along with improved technologies under the hood. The next-generation Intel Xeon Phi processor (code name "Knights Mill") will exceed its predecessor's capability with up to four times better performance in deep learning scenarios1.
Of course, as developers know, the currently-shipping Intel Xeon Phi processor (formerly known as "Knights Landing") is no slouch! Nine systems utilizing this processor now reside on the TOP500 list. Of special note are the Cori (NERSC) and Oakforest-PACS (Japan Joint Center for Advanced High Performance Computing) supercomputing systems, both of which claim a spot in the Top 10.
The next-generation Intel Xeon processor (code name "Skylake") is also expected to join the portfolio in 2017. Demanding applications involving floating-point calculations and encryption will benefit from both Intel® Advanced Vector Extensions 512 (Intel® AVX-512) and Intel® Omni-Path Architecture (Intel® OPA). These improvements will further extend the processor's capabilities, giving commercial, academic, and research institutions another step forward against taxing workloads.
A third processing technology anticipated in 2017 enables an additional level of HPC customization. The combined hardware and software solution, known as Intel® Deep Learning Inference Accelerator, sports a field-programmable gate array (FPGA) at its heart. By building on industry-standard frameworks and libraries like Intel® Distribution for Caffe* and Intel® Math Kernel Library for Deep Neural Networks, the solution gives end users even greater flexibility in their supercomputing applications.
At SC16, Intel also highlighted growing momentum for Intel® Scalable System Framework (Intel SSF). HPC is an essential tool for advances in health-related applications, and as Intel demonstrated in its SC16 booth, Intel SSF is taking center stage as a mission-critical tool in those scenarios. Dell* offers Intel SSF for supercomputing scenarios involving drug design and cancer research. Other applications, like genomic sequencing, create a challenge for any supercomputer. For this reason, Hewlett Packard Enterprise* (HPE) taps Intel SSF as a core component of the HPE Next Generation Sequencing Solution.
Additional performance isn't the only thing supercomputing experts need, though. Feedback from HPC developers, administrators, and end users expresses the need for better tools during critical phases of system setup and usage. Help is on the way. Intel® HPC Orchestrator, now available and based on the OpenHPC software stack, addresses that feedback. With over 60 integrated features, it assists with full-scale testing, deployment scenarios, and simplified systems management. Currently available through Dell* and Fujitsu*, Intel HPC Orchestrator should provide added momentum for the democratization of HPC.
Demonstrating further momentum, Intel Omni-Path Architecture has seen quite an uptick in adoption since its release nine months ago. It is now used in about 66 percent of the TOP500 HPC systems that rely on 100Gb interconnects.
With so many technical advancements on the horizon, 2017 is shaping up as a year of major changes in the HPC industry. We are excited to see how researchers, developers, and others will use these technologies to take their supercomputing systems to the next level of performance and tackle problems that were impossible just a few years ago.
1 For more complete information about performance and benchmark results, visit www.intel.com/benchmarks
When the movie The Terminator was released in 1984, the notion of computers becoming self-aware seemed so futuristic that it was almost difficult to fathom. But just over 30 years later, computers are rapidly gaining the ability to autonomously learn, predict, and adapt through the analysis of massive datasets. And luckily for us, the result is not a nuclear holocaust as the movie predicted, but new levels of data-driven innovation and opportunities for competitive advantage for a variety of enterprises and industries.
Artificial intelligence (AI) continues to play an expanding role in the future of high-performance computing (HPC). As machines increasingly become able to learn and even reason in ways similar to humans, we’re getting closer to solving the tremendously complex social problems that have always been beyond the realm of compute. Deep learning, a branch of machine learning, uses multi-layer artificial neural networks and data-intensive training techniques to refine algorithms as they are exposed to more data. This process emulates the decision-making abilities of the human brain, which until recently was the only network that could learn and adapt based on prior experiences.
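To make that idea a little more concrete, here is a minimal sketch, written in plain NumPy and not tied to any product or framework mentioned in this article, of a tiny two-layer network trained on synthetic data. The data, layer sizes, learning rate, and epoch count are arbitrary illustrative choices; the point is simply that every pass over the data nudges the weights toward a better model.

import numpy as np

# Synthetic binary-classification data with an XOR-like decision rule,
# something a single linear model cannot capture but a small
# multi-layer network can.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

# Two-layer network: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros((1, 1))
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # Forward pass: the hidden layer learns intermediate features.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass for a mean binary cross-entropy loss.
    grad_out = (p - y) / len(X)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0, keepdims=True)
    grad_h = grad_out @ W2.T * (1.0 - h ** 2)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0, keepdims=True)

    # Each pass over the data refines the weights a little further.
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

print("training accuracy:", ((p > 0.5) == y).mean())

Production frameworks automate this same forward-pass, backward-pass, and update loop, but over far larger models and datasets, which is where the HPC technologies discussed below come in.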
Deep learning networks have grown so sophisticated that they have begun to deliver even better performance than traditional machine learning approaches. One advantage of deep learning is that there is little need to hand-engineer the features used for modeling and prediction. With only basic labeling, machines can now learn these features independently as more data is introduced to the model. Deep learning has even begun to surpass the capabilities and speed of the human brain in many areas, including image, speech, and text classification, natural language processing, and pattern recognition.
The core technologies required for deep learning are very similar to those necessary for data-intensive computing and HPC applications. Here are a few technologies that are well-positioned to support deep learning networks.
Multi-core processors:
Deep learning applications require substantial amounts of processing power, and a critical element of the success and usability of deep learning is the ability to reduce execution times. Multi-core processor architectures currently dominate the TOP500 list of the most powerful supercomputers available today, with 91% based on Intel processors. Multiple cores can run many instructions at the same time, increasing the overall processing speed of compute-intensive programs like deep learning while reducing power requirements and allowing for fault tolerance.
The Intel® Xeon Phi™ processor, which features a whopping 72 cores, is geared specifically for high-level HPC and deep learning. These many-core processors can help data scientists significantly reduce training times and run a wider variety of workloads, something that is critical to the computing requirements of deep neural networks.
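As a rough illustration of why more cores matter, the sketch below (a hypothetical example that uses only the Python standard library, with an arbitrary workload and chunking scheme) splits a compute-bound task across every core the operating system reports and then combines the partial results.

import math
import os
import time
from multiprocessing import Pool

def partial_sum(bounds):
    # CPU-bound kernel: sum of square roots over a half-open range.
    start, stop = bounds
    return sum(math.sqrt(i) for i in range(start, stop))

if __name__ == "__main__":
    n = 20_000_000
    workers = os.cpu_count() or 1

    # Split the range into one chunk per core; the last chunk
    # absorbs any remainder.
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)

    start_time = time.perf_counter()
    with Pool(processes=workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    elapsed = time.perf_counter() - start_time

    print(f"{workers} workers -> total={total:.3e} in {elapsed:.2f}s")

Because each worker process can be scheduled onto its own core, the wall-clock time for this kind of embarrassingly parallel job tends to fall roughly in proportion to the number of cores, within the limits of memory bandwidth and scheduling overhead.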
Software frameworks and toolkits:
There are various frameworks, libraries, and tools available today to help software developers train and deploy deep learning networks, such as Caffe, Theano, Torch, and the HPE Cognitive Computing Toolkit. Many of these tools are built as resources for those new to deep learning systems, and they aim to make deep neural networks accessible to people who might be outside of the machine learning community. These tools can help data scientists significantly reduce model training times and accelerate time to value for their new deep learning applications.
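As one small, hedged example of the style these frameworks encourage, the sketch below uses Theano's symbolic interface to define a softmax classifier, the usual starting point before stacking deeper layers: the computation is declared once, the framework derives the gradients, and a compiled function performs each training step. The dimensions, learning rate, and random minibatch here are illustrative only.

import numpy as np
import theano
import theano.tensor as T

n_in, n_out = 64, 10
x = T.matrix("x")    # a minibatch of input vectors
y = T.ivector("y")   # integer class labels

W = theano.shared(np.zeros((n_in, n_out), dtype=theano.config.floatX), name="W")
b = theano.shared(np.zeros(n_out, dtype=theano.config.floatX), name="b")

# Class probabilities and the negative log-likelihood loss.
p_y = T.nnet.softmax(T.dot(x, W) + b)
loss = -T.mean(T.log(p_y)[T.arange(y.shape[0]), y])

# The framework derives the gradients symbolically; no hand-written
# back-propagation is required.
gW, gb = T.grad(loss, [W, b])
lr = 0.1
train_step = theano.function(
    inputs=[x, y],
    outputs=loss,
    updates=[(W, W - lr * gW), (b, b - lr * gb)],
)

# One call per minibatch; a single random batch here just shows usage.
xb = np.random.randn(32, n_in).astype(theano.config.floatX)
yb = np.random.randint(0, n_out, size=32).astype("int32")
print("minibatch loss:", train_step(xb, yb))

Sparing developers from writing code like the backward pass by hand is a large part of how these tools shorten training and development cycles.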
Deep learning hardware platforms:
Not every server can efficiently handle the compute-intensive nature of deep learning environments. Hardware platforms that are purpose-built to handle these requirements will offer the highest levels of performance and efficiency. New HPE Apollo systems contain a high ratio of GPUs to CPUs in a dense 4U form factor, which enables scientists to run deep learning algorithms faster and more efficiently while controlling costs.
These enabling technologies for deep learning are ushering in a new era of cognitive computing that promises to help us solve the world's greatest challenges with more efficiency and speed than ever before. As these technologies become faster, more available, and easier to implement, deep learning will secure its place in real-world applications, not in science fiction.
Volkswagen Moves HPC Workloads to Verne Global in Iceland
Friday, 23 September 2016
Posted by ARM Servers
Today Verne Global announced that Volkswagen is moving more than 1 MW of high performance computing applications to the company's datacenter in Iceland. Volkswagen will take advantage of Verne Global's hybrid data center approach, with variable resiliency and flexible density, to support HPC applications in its continuous quest to develop cutting-edge cars and automotive technology.
"The hybrid data center solution of Verne Global gives us quick and easy capacity for our High-Performance Computing applications," says Harald Berg, Head of IT Tools, Network and Data Center in the Volkswagen Group. "We were particularly impressed by the modular design of the data center that allows us to respond to increasing demands in a flexible manner."
Volkswagen is committed to developing new processes and applications for the modern "digital factory" of today's automotive industry. As more and more real-life factory operations become virtualized, Volkswagen is utilizing HPC applications for everything from shortening design cycles and optimizing traffic to developing and improving the connected car.
To drive innovation in its manufacturing process, Volkswagen is taking advantage of Verne Global's unique, hybrid data center approach. Verne Global is the only developer in the data center industry offering the ability to scale the resiliency and density of both of its solutions, powerDIRECT and powerADVANCE, so companies like Volkswagen now have greater flexibility to support their individual computing needs. While both solutions deliver highly optimized data center infrastructure, powerDIRECT enables IT organizations to meet the increasing demand for high- and ultra-high-density applications, and powerADVANCE is a traditional Tier III solution offering the highest-specification, enterprise-ready data center environment.
"Our expertise delivering data center solutions for discrete manufacturing allows companies such as those in the automotive sector to do more compute for less," said Jeff Monroe, CEO of Verne Global. "We see our unique offering as the future of data center solutions and a means to support companies, like Volkswagen, as they drive towards innovation, forward-thinking design and operational efficiency."
In this video from the HPC User Forum in Tucson, Jorge L. Balcells from Verne Global presents: Verne Global Datacenters for Forward Thinkers.