Posted By Jason Verge On January 22, 2013 @ 4:26 pm In Amazon, Cloud Computing | No Comments
Amazon Web Services continues to expand its applicability in the high-end computing world. The newest enhancement is a workhorse of an instance designed for real-time applications with high memory needs: the High Memory Cluster Eight Extra Large instance type.
AWS continues to release instances that fit a broad range of needs, expanding its potential use cases. These high memory instances join the high storage instance family added to EC2 back in December and the High I/O instances introduced back in July. In addition to moving in on big applications and big data, the company also continues to enhance functionality across its products; two recent features are EBS Snapshot Copy and static website hosting.
The High Memory Cluster Eight Extra Large instance type is designed for memory-intensive applications that need a lot of memory on a single instance or that take advantage of distributed memory architectures. It’s designed to host applications with a voracious need for compute power, memory and network bandwidth, such as in-memory databases, graph databases, and memory-intensive high performance computing (HPC) workloads.
These instances are available in the US-East (Northern Virginia) Region only, with plans to make them available in other AWS Regions in the future. Pricing starts at $3.50 per hour for Linux instances and $3.831 per hour for Windows instances. Given their size and nature, the company said these are the most cost-effective instances it offers.
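To put the quoted on-demand rates in perspective, here is a minimal back-of-the-envelope cost sketch. The 244 GiB memory figure and the cr1.8xlarge instance name come from AWS's announcement of this instance type, not from the pricing quoted above; the per-GiB metric is our own gloss.

```python
# Rough cost arithmetic for the new instance type, using the on-demand
# rates quoted above ($3.50/hr Linux, $3.831/hr Windows). The 244 GiB
# RAM figure is from AWS's cr1.8xlarge announcement.

HOURS_PER_MONTH = 24 * 30  # simple 30-day month

def monthly_cost(hourly_rate, hours=HOURS_PER_MONTH):
    """On-demand cost for one instance left running all month."""
    return hourly_rate * hours

linux_month = monthly_cost(3.50)     # $2,520.00
windows_month = monthly_cost(3.831)  # $2,758.32

# Cost per GiB of RAM per hour, a useful metric for memory-bound workloads.
cost_per_gib_hour = 3.50 / 244

print(f"Linux: ${linux_month:.2f}/month, ${cost_per_gib_hour:.4f} per GiB-hour")
```

At roughly a cent and a half per GiB-hour, the per-memory price is low even though the absolute hourly rate is among the highest in the EC2 lineup.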
Here are the full specs: two Intel Xeon E5-2670 processors (88 ECU in total), 244 GiB of RAM, 240 GB of SSD instance storage, and 10 Gigabit Ethernet networking.
“We expect this instance type to be a great fit for in-memory analytics systems like SAP HANA and memory-hungry scientific problems such as genome assembly,” said the company in a blog post.
It has a total of 88 ECU (EC2 Compute Units). Applications that need serious memory can take advantage of 32 hyperthreaded cores (16 per processor). There’s also an interesting Turbo Boost feature: when the operating system requests the maximum possible processing power, the CPU increases its clock frequency while monitoring the number of active cores, total power consumption and processor temperature. The processor runs as fast as possible while staying within its documented temperature envelope.
High Storage Instances
Back in December, the company released a high storage instance family for data-intensive applications that require high storage density and high sequential I/O performance. Examples of these types of applications include data warehousing and log processing, and the company gave a very specific use case in seismic analysis. In short, it made EC2 applicable to applications that generate a tremendous amount of data.
Each instance includes 117 GiB of RAM, 16 virtual cores (providing 35 ECU of compute performance), and 48 TB of instance storage across 24 hard disk drives capable of delivering up to 2.4 GB per second of I/O performance.
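The aggregate figures quoted above imply tidy per-drive numbers, which is a quick way to sanity-check them. The per-drive breakdown below is our own arithmetic, not AWS's.

```python
# Per-drive breakdown of the high storage instance figures quoted above:
# 48 TB across 24 hard disk drives, up to 2.4 GB/s of sequential I/O.

DRIVES = 24
TOTAL_STORAGE_TB = 48
AGGREGATE_THROUGHPUT_GBPS = 2.4  # gigabytes per second

per_drive_tb = TOTAL_STORAGE_TB / DRIVES                     # 2.0 TB per drive
per_drive_mbps = AGGREGATE_THROUGHPUT_GBPS * 1000 / DRIVES   # ~100 MB/s per drive

print(f"{per_drive_tb} TB and ~{per_drive_mbps:.0f} MB/s per drive")
```

Those numbers, 2 TB and roughly 100 MB/s per spindle, are consistent with the commodity SATA hard drives of the era, which suggests the throughput claim is sequential streaming across all 24 drives at once.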
High I/O Instances
Going further back to July, the company revealed High I/O instances for Amazon EC2, an instance type that provides very high, low latency disk I/O performance using SSD-based local instance storage. High I/O instances are suitable for high performance clustered databases, and are especially well suited for NoSQL databases like Cassandra and MongoDB. Use cases for this type of instance include media streaming, gaming, mobile, and social networking applications. Customers whose applications require low latency access to tens of thousands of random IOPS can take advantage of the capabilities of this instance type.
This is the third set of instances designed for high performance applications the company has released in the last half year or so, following the high storage and High I/O instances, as it looks to capitalize on real-time big data needs.
The company isn’t solely focused on expanding its use cases at the upper end of the market. It has also been adding functionality to enhance its existing products. Two added features of note are EBS Snapshot Copy and static website hosting.
EBS Snapshot Copy
Back in December, the company introduced EBS Snapshot Copy, making it easier for customers to build AWS applications that span regions. It simplifies copying EBS snapshots between EC2 Regions. Use cases for this include geographic expansion (launching an application in a new region), migration (moving from one region to another) and disaster recovery, such as backing up data and log files across different geographic locations.
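A cross-region copy like the one described above can be sketched with the AWS CLI. This is a hedged example, not AWS's own documentation: the snapshot ID, regions, and description below are placeholders, and the modern `aws ec2 copy-snapshot` command shown here exposes the same CopySnapshot action the 2013-era tooling used.

```shell
# Sketch: copy an EBS snapshot from US-East to US-West for disaster
# recovery. Snapshot ID and regions are placeholder values.
aws ec2 copy-snapshot \
    --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --description "DR copy of web-tier data volume" \
    --region us-west-2
```

The command is issued against the destination region (`--region us-west-2`), which pulls the snapshot from the source region; the copy gets a new snapshot ID in the destination.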
The company hasn’t focused solely on the high end of things – AWS also released Root Domain Website Hosting for Amazon S3. While customers have been able to host static websites on Amazon S3 for a while, the company added two options to give even more control: the ability to host a website at the root of your domain (e.g. http://mysite.com), and the ability to use redirection rules to redirect website traffic to another domain.
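The redirection rules described above are expressed in a bucket's website configuration. The fragment below is a sketch under assumed values: the bucket layout, `docs/` prefix, and `docs.mysite.com` hostname are placeholders, while the element names follow S3's website configuration schema.

```xml
<!-- Sketch of an S3 static website configuration with a routing rule;
     prefix and hostname are placeholder values. -->
<WebsiteConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <IndexDocument>
    <Suffix>index.html</Suffix>
  </IndexDocument>
  <RoutingRules>
    <RoutingRule>
      <Condition>
        <KeyPrefixEquals>docs/</KeyPrefixEquals>
      </Condition>
      <Redirect>
        <HostName>docs.mysite.com</HostName>
        <ReplaceKeyPrefixWith></ReplaceKeyPrefixWith>
      </Redirect>
    </RoutingRule>
  </RoutingRules>
</WebsiteConfiguration>
```

With a rule like this, requests for any object under `docs/` on the root-domain site are redirected to the separate `docs.mysite.com` host instead of being served from the bucket.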
Article printed from Data Center Knowledge: http://www.datacenterknowledge.com
URL to article: http://www.datacenterknowledge.com/archives/2013/01/22/aws-unveils-high-memory-instances-continues-to-expand-applicability/
Copyright © 2012 Data Center Knowledge. All rights reserved.