Univa Adds Intel Phi Support to Resource Management Platform


Data center automation company Univa announced the release of Univa Grid Engine version 8.1.4, which includes 46 new updates and enhanced support for Intel Xeon Phi coprocessors.

“Our latest Univa Grid Engine version 8.1.4 has been completely customer driven and is the largest update of the last 10 months,” said Fritz Ferstl, CTO of Univa Corporation and father of Grid Engine. “We are leading the industry right now in converged infrastructures supporting Big Data and Big Compute, and our customers rely on Univa Grid Engine to manage mission-critical applications – so we make sure to always stay close to them in order to support their needs.”

Univa Grid Engine is a distributed resource management software platform that has received over 525 updates since Univa took over management of Grid Engine. It supports enterprise-grade, big data applications for commercial enterprises in industries such as industrial manufacturing, oil and gas, energy, life sciences, biology and semiconductors. Earlier this year the company announced product support for ARM-based servers.

Key new features include an improved load collection tool for Intel Xeon Phi coprocessors that ensures jobs run on the least loaded cores, extended memory usage metrics for multi-threaded applications, and scheduler performance enhancements.
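The least-loaded placement idea behind such a load collector can be sketched generically. The function and sample load values below are hypothetical illustrations, not part of Grid Engine's actual API or implementation:

```python
# A minimal sketch of least-loaded core selection, assuming a load
# sensor reports a per-core load figure (hypothetical values below).
# Univa Grid Engine's real load collector and scheduler are not shown.

def pick_least_loaded_core(core_loads):
    """Return the index of the core with the lowest reported load."""
    return min(range(len(core_loads)), key=lambda i: core_loads[i])

# Example: loads sampled from four coprocessor cores (made-up numbers).
loads = [0.82, 0.15, 0.47, 0.60]
print(pick_least_loaded_core(loads))  # → 1
```

In practice a scheduler would refresh these load figures periodically and combine them with other resource requests before dispatching a job.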

Coprocessors like Intel’s Xeon Phi supplement the performance of the primary processor, and have become a common feature in the fastest supercomputers. Phi is the new brand for products using Intel’s Many Integrated Core (MIC) architecture for highly parallel workloads.

Last week Univa CTO Fritz Ferstl was interviewed by InsideHPC.com for a Grid Engine State of the Union.

About the Author

John Rath is a veteran IT professional and regular contributor at Data Center Knowledge. He has served many roles in the data center, including support, system administration, web development and facility management.
