Amazon Launches Data Warehouse Service Redshift



AWS announced a new data warehouse service called Redshift.

In a continued bid to gain enterprise market share for storage, Amazon Web Services (AWS) officially launched Redshift, a fully managed, petabyte-scale data warehouse service in the cloud. The company announced the service late last year with a limited preview by invitation only. The service is now available in U.S. East (Northern Virginia), with plans to expand to other AWS Regions in the coming months.

AWS built Redshift on technology licensed from ParAccel, in which Amazon is an investor.

Impact on the Marketplace

With Redshift, Amazon is taking on established offerings from Oracle, IBM and Teradata, and it’s challenging them on cost. At its re:Invent conference in November, AWS presented the pay-as-you-go incentive, calculating that at list prices it would cost between $19,000 and $25,000 per terabyte per year to build and run a good-sized data warehouse on premises.

Redshift is a good example of AWS working harder to provide an enterprise-friendly service. The Redshift service follows Amazon Glacier, which provides low-cost cold/archive storage with the tradeoff that archives aren’t available instantaneously. The company also recently unveiled high-memory instances. Amazon appears to be going hard after enterprise data on a variety of fronts, and with Redshift it’s expanding into the big data marketplace.

Ease of Use

Users can manage Redshift from the AWS Management Console, which includes a variety of graphs and visualizations for monitoring the status and performance of clusters, as well as the resources consumed by each query. Customers can resize clusters, add or remove nodes, change instance types, create snapshots, and restore a snapshot to a new cluster within the console in a couple of clicks.

Redshift offers fast query performance when analyzing virtually any size data set, using the same SQL-based tools and business intelligence applications in use today. The company says it designed Redshift to be cost-effective, easy to use, and flexible. AWS anticipates Redshift will deliver ten times the performance at one-tenth the cost of on-premises data warehouses, achieved through columnar data storage, advanced compression, and high-performance disk and network I/O.
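The columnar-storage claim is easy to see in miniature. The sketch below is purely illustrative (it is not Redshift’s implementation): when each column is stored contiguously, similar values sit next to each other, so even a trivial scheme like run-length encoding compresses well.

```python
# Illustrative sketch of columnar storage with run-length encoding (RLE).
# Hypothetical sample data, not from Redshift; it only shows why storing a
# column contiguously makes simple compression effective on repetitive values.

rows = [
    ("2013-02-14", "us-east-1", 10),
    ("2013-02-14", "us-east-1", 12),
    ("2013-02-14", "us-west-2", 7),
    ("2013-02-15", "us-west-2", 9),
]

# Row storage keeps whole records together; columnar storage keeps each
# field together. Transposing the rows gives one tuple per column.
columns = list(zip(*rows))

def rle(values):
    """Run-length encode a sequence as [(value, run_length), ...]."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)
        else:
            out.append((v, 1))
    return out

print(rle(columns[0]))  # [('2013-02-14', 3), ('2013-02-15', 1)]
print(rle(columns[1]))  # [('us-east-1', 2), ('us-west-2', 2)]
```

Four date values collapse to two runs; a row-oriented layout would interleave dates, regions, and counts, leaving no runs to exploit.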

Redshift integrates with a number of other AWS services, including S3 and Amazon DynamoDB. Customers can also use the AWS Data Pipeline to load data from Amazon RDS, Amazon Elastic MapReduce, and Amazon EC2 data sources.

Users can start out small (in terms of data warehousing, a couple of hundred gigabytes) and scale up as needed.

Pricing

• High Storage Extra Large (15 GiB of RAM, 4.4 ECU, and 2 TB of locally attached compressed user data): $0.85 per hour.

• High Storage Eight Extra Large (120 GiB of RAM, 35 ECU, and 16 TB of locally attached user data): $6.80 per hour.

With either instance type, customers pay an effective price of $3,723 per terabyte per year for storage and processing. One-Year and Three-Year Reserved Instances are also available, pushing the annual cost per terabyte down to $2,190 and $999, respectively.
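The effective on-demand figure follows directly from the hourly rates; a quick sanity check of the arithmetic, assuming 8,760 hours in a year:

```python
# Sanity check of the effective on-demand price per terabyte per year.
HOURS_PER_YEAR = 24 * 365  # 8,760

def effective_price_per_tb_year(hourly_rate, terabytes):
    """Annualize an hourly node price and divide by node capacity in TB."""
    return hourly_rate * HOURS_PER_YEAR / terabytes

xl = effective_price_per_tb_year(0.85, 2)      # High Storage Extra Large, 2 TB
eight_xl = effective_price_per_tb_year(6.80, 16)  # Eight Extra Large, 16 TB

print(round(xl))        # 3723
print(round(eight_xl))  # 3723
```

Both instance types land on the same $3,723 per terabyte per year, which is why AWS can quote a single effective price for on-demand use.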

To keep up with Data Center Knowledge’s cloud computing coverage, check the Cloud Computing channel.

About the Author

Jason Verge is an Editor/Industry Analyst on the Data Center Knowledge team with a strong background in the data center and Web hosting industries. In the past he’s covered all things Internet Infrastructure, including cloud (IaaS, PaaS and SaaS), mass market hosting, managed hosting, enterprise IT spending trends and M&A. He writes about a range of topics at DCK, with an emphasis on cloud hosting.
