New Intel Technologies Highlighted in SC16 Announcements
Tuesday, 13 December 2016
Posted by ARM Servers
"Updates for Intel® Xeon® processors, Intel® HPC Orchestrator, Intel® Deep Learning Inference Accelerator and other forthcoming supercomputing technologies available soon"
SC16 revealed several important pieces of news for supercomputing experts. In case you missed it, here’s a recap of announced updates from Intel that will provide even more powerful capabilities to address HPC challenges like energy efficiency, system complexity, and the need for simplified workload customization. In supercomputing, one size certainly does not fit all. Intel’s new and updated technologies take a step forward in addressing these issues, allowing users to focus more on their HPC applications and less on the technology behind them.
In 2017, developers will welcome a next generation of Intel® Xeon® and Intel® Xeon Phi™ processors. As you would expect, these updates offer increased processor speed and more, thanks to improved technologies under the hood. The next-generation Intel Xeon Phi processor (code name “Knights Mill”) will exceed its predecessor’s capability with up to four times better performance in deep learning scenarios1.
Of course, as developers know, the currently shipping Intel Xeon Phi processor (formerly known as “Knights Landing”) is no slouch! Nine systems using this processor now reside on the TOP500 list. Of special note are the Cori (NERSC) and Oakforest-PACS (Joint Center for Advanced High Performance Computing, Japan) supercomputing systems, both of which claim a spot in the Top 10.
The next-generation Intel Xeon processor (code name “Skylake”) is also expected to join the portfolio in 2017. Demanding applications involving floating-point calculations and encryption will benefit from both Intel® Advanced Vector Extensions 512 (Intel® AVX-512) and Intel® Omni-Path Architecture (Intel® OPA). These improvements will further extend the processor’s capabilities, giving commercial, academic, and research institutions another step forward against taxing workloads.
A third processing technology anticipated in 2017 enables an additional level of HPC customization. The combined hardware and software solution, known as Intel® Deep Learning Inference Accelerator, sports a field-programmable gate array (FPGA) at its heart. By building on industry-standard frameworks like Intel® Distribution for Caffe* and the Intel® Math Kernel Library for Deep Neural Networks, the solution gives end users even greater flexibility in their supercomputing applications.
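To make the inference side of that stack a bit more concrete, here is a minimal sketch of CPU inference using the stock Caffe* Python bindings, which the Intel Distribution for Caffe is designed to stay compatible with. The model file names, the random input, and the "data" input blob name below are illustrative assumptions, not details from Intel's announcement.

import numpy as np
import caffe

caffe.set_mode_cpu()                        # run inference on the CPU
net = caffe.Net('deploy.prototxt',          # network definition (placeholder file)
                'model.caffemodel',         # trained weights (placeholder file)
                caffe.TEST)                 # inference-only mode

# Fill the input blob with one batch shaped to match the network.
net.blobs['data'].data[...] = np.random.rand(
    *net.blobs['data'].data.shape).astype(np.float32)

outputs = net.forward()                     # forward pass only, no training
print({name: blob.shape for name, blob in outputs.items()})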
At SC16, Intel also highlighted additional momentum for Intel® Scalable System Framework (Intel SSF). HPC is an essential tool for advances in health-related applications, and Intel SSF is taking center stage as a mission-critical tool in those scenarios, as Intel demonstrated in its SC16 booth. Dell* offers Intel SSF for supercomputing scenarios involving drug design and cancer research. Other applications, like genomic sequencing, create a challenge for any supercomputer. For this reason, Hewlett Packard Enterprise* (HPE) taps Intel SSF as a core component of the HPE Next Generation Sequencing Solution.
Additional performance isn’t the only thing supercomputing experts need, though. Feedback from HPC developers, administrators, and end users expresses the need for improved tools during critical phases of system setup and usage. Help is on the way. Now available, Intel® HPC Orchestrator, based upon the OpenHPC software stack, addresses that feedback. With over 60 integrated features, it assists with full-scale testing, deployment scenarios, and simplified systems management. Currently available through Dell* and Fujitsu*, Intel HPC Orchestrator should provide added momentum for the democratization of HPC.
Demonstrating further momentum, Intel Omni-Path Architecture has seen quite an uptick in adoption since its release nine months ago. It is now used in about 66 percent of the TOP500 HPC systems that rely on 100Gbit interconnects.
With so many technical advancements on the horizon, 2017 is shaping up as a year of major changes in the HPC industry. We are excited to see how researchers, developers, and others will use these technologies to take their supercomputing systems to the next level of performance and tackle problems that were impossible just a few years ago.
1 For more complete information about performance and benchmark results, visit www.intel.com/benchmarks
OpenStack Deployment Growing Beyond Test-Development Phase
Friday, 28 October 2016
Posted by ARM Servers
A lot of hype and hope has surrounded OpenStack since its 2010 debut, with some industry watchers predicting that the open-source cloud platform might just surpass Amazon Web Services (AWS) and VMware within a few years. While the verdict is still out on that rosy forecast, a new study from analyst firm 451 Research shows growing enterprise interest in OpenStack deployment despite the platform’s shortcomings.
As reported by Talkin’ Cloud, OpenStack deployments are moving beyond the test and development phase and into a variety of enterprise workloads. According to 451 Research’s recently released OpenStack Pulse 2016 report, revenue from OpenStack business models should top $5 billion by 2020.
Private Cloud Driving Growth
Today, most OpenStack revenue comes from service providers offering multitenant infrastructure-as-a-service (IaaS). That will change by 2019, however, when OpenStack private cloud revenue exceeds public cloud revenue, 451 Research forecasts.
Even with the projected growth, however, OpenStack revenues will be significantly smaller than those of VMware in private clouds and AWS in public clouds.
Work in Progress
While OpenStack has established itself as the leading open-source choice for building private and public cloud environments, the platform is still challenging for mainstream IT organizations to implement, the report finds.
“This year OpenStack has become a top priority and credible cloud option, but it still has its shortcomings,” says Al Sadowski, 451 Research’s vice president of research, in a statement. “We continue to believe the market is still in the early stages of enterprise use and revenue generation.”
That said, the OpenStack outlook is positive, with an increase in revenues from all sectors and geographic regions, particularly from companies in the OpenStack products and distributions category that target enterprises.
Growing Enterprise Importance
Although OpenStack deployment is occurring in mission-critical operations across most verticals, it’s still essentially a platform for pilot projects, web hosting, and testing and development environments. Top use cases focus on big data, DevOps, platform-as-a-service (PaaS), and ways to better serve developers and lines of business, the report finds. That said, a growing number of use cases among service providers and enterprises focus on new areas, including software-defined networking, network function virtualization, mobile, and the Internet of Things.
Another key takeaway from the 451 Research report is that OpenStack isn’t limited to giant enterprises. In fact, some 65 percent of report respondents work in organizations with 1,000 to 10,000 employees, Talkin’ Cloud notes.
Additionally, while container software such as Docker is “mostly beneficial and complementary” to OpenStack, container management and orchestration can be a competitive threat. “The attention to containers and their management also threatens to eclipse OpenStack, similar to how it surpassed the rival CloudStack in mind share and then market share,” the report notes.
OpenStack users are adopting containers at a faster rate than other enterprises: specifically, 55 percent of OpenStack users also use containers, compared with 17 percent of users across all enterprises, according to the study.
Bottom line: OpenStack is gaining popularity among organizations that want to deploy applications in private clouds and eliminate use of proprietary software. But it still seems less appealing for legacy applications and for enterprises that are comfortable with top hyperscale cloud providers.
Cavium has announced a range of support for the deployment of OpenStack cloud systems, offering platforms optimized for OpenStack deployments across its ThunderX, LiquidIO, and FastLinQ product families. OpenStack is open-source software for building clouds: data centers can use it to quickly deploy new cloud products at reduced cost, and it can deliver cheaper cloud services for multiple applications.
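As a quick illustration of that kind of deployment, here is a minimal sketch that launches an instance through the openstacksdk Python client. The cloud, image, flavor, and network names are placeholders of my own choosing, not anything Cavium specified.

import openstack

conn = openstack.connect(cloud="my-cloud")       # credentials from a clouds.yaml entry

image = conn.compute.find_image("ubuntu-16.04")  # placeholder image name
flavor = conn.compute.find_flavor("m1.small")    # placeholder flavor name
network = conn.network.find_network("private")   # placeholder network name

server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)    # block until the build finishes
print(server.status)                             # "ACTIVE" once the instance is up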
Cavium has optimized its ThunderX ARMv8-based processor architecture for OpenStack cloud infrastructure deployment. The idea is to allow users of OpenStack cloud infrastructure to fully utilize ThunderX ARMv8 for workloads such as cloud storage with Ceph, Apache Hadoop for big data analytics, distributed databases such as MySQL and Cassandra, and secure web serving with NGINX. Cavium also says ThunderX is optimized for networking-specific workloads such as Network Functions Virtualization (NFV) and load balancing for telco applications.
Cavium said its LiquidIO II Intelligent Server adapters also support and seamlessly integrate OpenStack value-add functionality for application acceleration, network configuration and provisioning, security, and isolation in multi-tenant compute clusters. LiquidIO Open Virtual Switch (OVS) offload enables various Network Function Virtualization (NFV) and security features as a workload-specific Virtual Network Function (VNF) or as a Service Function Chain (SFC), with seamless configuration and provisioning through the OpenStack platform.
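To give a flavor of what that provisioning looks like from the OpenStack side, here is a hedged sketch that asks Neutron for a "direct" (passthrough/offload-capable) port via openstacksdk, the kind of port an offloaded data path would typically use. The network and port names are illustrative assumptions, and the LiquidIO-specific offload plumbing is not described in the announcement.

import openstack

conn = openstack.connect(cloud="my-cloud")

net = conn.network.find_network("provider-net")   # placeholder network name
port = conn.network.create_port(
    network_id=net.id,
    name="vnf-port-0",
    binding_vnic_type="direct",   # ask Neutron for a passthrough/offload-capable vNIC
)

# The port can then be handed to Nova at boot time, for example:
# conn.compute.create_server(..., networks=[{"port": port.id}])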
Turning to next-generation Ethernet adapters, Cavium’s QLogic FastLinQ 45000 Series supports 10/25/40/50/100GbE and delivers a broad set of protocols, including Universal RDMA and stateless offloads for server and network virtualization. QLogic FastLinQ Ethernet Adapters also make it easier to orchestrate and manage an OpenStack deployment, with technologies like the plug-in for Mirantis FUEL for automatic SR-IOV configuration and QConvergeConsole (QCC) for OpenStack, which simplifies physical and logical topology mapping and the application of QoS to network functions.
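For context on the QoS piece, here is a hedged sketch that attaches a Neutron QoS bandwidth-limit policy to a port using openstacksdk. The policy name, bandwidth figures, and port name are placeholders, and QCC's own workflow is not shown here.

import openstack

conn = openstack.connect(cloud="my-cloud")

policy = conn.network.create_qos_policy(name="nfv-gold")   # placeholder policy name
conn.network.create_qos_bandwidth_limit_rule(
    policy,
    max_kbps=1000000,        # 1 Gbit/s ceiling (illustrative value)
    max_burst_kbps=100000,
)

port = conn.network.find_port("vnf-port-0")                # placeholder port name
conn.network.update_port(port, qos_policy_id=policy.id)    # apply the policy to the port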
Cavium has optimized its ThunderX 64-bit ARMv8-based SoC family, available in a range of SKUs and form factors for hyperscale data centers, to target cloud computing and NFV, including volume compute, storage, secure compute, and networking-specific workloads.
Software developers who are partnering with Cavium on ThunderX include Canonical, Red Hat and SUSE.