Top 10 Ways In-Memory Computing Can Revitalize Tech at Federal Agencies

CHRIS STEEL
Software AG Government Solutions

Chris Steel is Chief Solutions Architect for Software AG Government Solutions, a leading software provider for the federal government that helps agencies integrate their IT systems and dramatically enhance their speed and scalability.

Until recently, it seemed that in-memory computing platforms were only leveraged by the most technologically savvy organizations. However, the value has become so obvious that many organizations, especially budget-strapped federal agencies, are racing toward adoption.

With IT experts agreeing that RAM is the new disk, in-memory computing is increasingly seen as the secret to cost-effective modernization. As a result, more and more organizations are moving data out of disk-based stores and remote relational databases and into machine memory.

While in-memory computing is still more prevalent in the commercial sector, the public sector is rapidly learning that several benefits arise when data resides right where it's used: in the memory of the machine where the application runs.

Below are the top 10 reasons why federal agencies are embracing in-memory computing:

1. Blazingly fast speed. In-memory data is accessed in microseconds. That's real-time access to critical data, at least 100 times faster than retrieving it from a disk-based store across the network.

2. Higher throughput. Significantly lower latency leads to dramatically higher throughput. Agencies that run high-volume transaction workloads can use in-memory data to boost processing capacity without adding computing power.
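The latency-to-throughput relationship behind this point can be sketched with simple arithmetic. The figures below are illustrative assumptions, not measurements from the article:

```python
# Illustrative throughput comparison: the same workload served from a
# networked disk store vs. from RAM. All figures are assumptions.

DISK_LATENCY_S = 0.010    # ~10 ms per networked disk read (assumed)
RAM_LATENCY_S = 0.000050  # ~50 microseconds per in-memory read (assumed)
WORKERS = 32              # concurrent request handlers (assumed)

def max_throughput(latency_s: float, workers: int) -> float:
    """Little's law rearranged: throughput = concurrency / latency per request."""
    return workers / latency_s

disk_tps = max_throughput(DISK_LATENCY_S, WORKERS)
ram_tps = max_throughput(RAM_LATENCY_S, WORKERS)

print(f"disk-backed: {disk_tps:,.0f} requests/s")  # 3,200 requests/s
print(f"in-memory:   {ram_tps:,.0f} requests/s")   # 640,000 requests/s
print(f"speedup:     {ram_tps / disk_tps:.0f}x")   # 200x
```

The point of the sketch is that, at fixed concurrency, capacity scales inversely with per-request latency, which is why cutting latency raises throughput without adding hardware.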

3. Real-time processing. For some applications, such as fraud detection or network monitoring, delays of seconds or even milliseconds don't cut it. Acceptable performance requires real-time data access for ultra-fast processing.

4. Accelerated analytics. Why wait hours for a report on days-old data? With in-memory data, you can run analytics in real time for faster decision-making based on up-to-the-minute information.

5. Plunging memory prices. The past decade has seen a precipitous drop in the cost of RAM. When you can buy a server with 96 GB of memory for less than $5,000, storing data in memory makes good fiscal and technical sense.

6. RAM-packed servers. Hardware makers are adding more memory to their boxes. Today's terabyte-scale servers are sized to hold, in memory, the torrent of data coming from mobile devices, websites, sensors and other sources.

7. In-memory data store. An in-memory store can act as a central point of coordination, aggregating, distributing and providing instant access to your Big Data at memory speeds.

8. Easy for developers. There is no simpler way to store data than in its native format, in memory. Most in-memory solutions are no longer database-specific: no complex APIs, libraries or interfaces are typically required, and there is no overhead from converting data into a relational or columnar format. There is even an enterprise version of Ehcache, Java's de facto standard caching library.
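To illustrate how little ceremony "native format, in memory" implies, here is a minimal generic sketch of an in-memory key-value cache with least-recently-used eviction. This is not the Ehcache API; real products add features such as off-heap tiers, expiry policies and distributed replication:

```python
# Minimal sketch of an in-memory key-value cache with LRU eviction.
# Generic illustration only, not any product's API.
from collections import OrderedDict

class InMemoryCache:
    def __init__(self, capacity: int = 1024):
        self.capacity = capacity
        self._data = OrderedDict()  # insertion order doubles as recency order

    def put(self, key, value):
        # Objects are stored as-is: no relational or columnar conversion.
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the least recently used entry

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as recently used
        return self._data[key]

cache = InMemoryCache(capacity=1024)
cache.put("case/123", {"status": "open", "owner": "agency-a"})
record = cache.get("case/123")  # the dict comes back in its native format
```

A put and a get are the whole interface; that simplicity, rather than this particular implementation, is the point being made above.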

9. Expected by users. In-memory data satisfies the "need-it-now" demands of consumers and business users, whether that's for speedier searches, faster Web services or immediate access to more relevant information.

10. Game-changing for mission-critical applications. In-memory data creates unprecedented opportunities for innovation. Government organizations can transform how they access, analyze and act on data, building new capabilities that deliver top- and bottom-line benefits directly supporting the mission. Get there faster!

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.


One Comment

  1. In-memory is certainly a key ingredient in the Big Data and IoT landscape. There are many offerings out there and many examples of how in-memory technology is helping meet the low-latency demands of these domains. I've personally used Altibase, MemSQL, SQL Server 2014, and of course my favorite, VoltDB. In my experience, the offerings that fall into the NewSQL space are paramount to solving the problems of tomorrow. Of course, a database does not need to be main-memory to participate in the NewSQL space, but these in-memory OLTP databases are a key offering within it. The key benefit of a NewSQL offering is the horizontally distributed nature of the data across many servers. To be clear: I am referring to horizontally partitioning the data in a shared-nothing architecture, not horizontally scaling queries across replicated data. Once one gets some initial hands-on experience with a distributed IMDBS, it is easy to understand how in-memory systems, compounded by horizontally distributing your data across many servers, can provide even greater capabilities when supporting Big Data. There are certainly use cases for non-distributed offerings such as SQL Server 2014 as well, but in the big picture, horizontally distributing your data is key to meeting tomorrow's demands, and in-memory databases that use such an architecture will be key to the success of many projects.
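The commenter's distinction between shared-nothing partitioning and query scaling over replicas can be sketched as follows. The node names and the hash-modulo scheme are illustrative assumptions, not any product's behavior:

```python
# Sketch of shared-nothing horizontal partitioning: each row lives on
# exactly one node, chosen by hashing its partition key. Node names and
# the hash-modulo scheme are illustrative assumptions, not a real API.
import hashlib

NODES = ["node-0", "node-1", "node-2", "node-3"]

def owner_node(partition_key: str) -> str:
    """Deterministically map a key to the single node that stores it."""
    digest = hashlib.sha256(partition_key.encode()).digest()
    return NODES[int.from_bytes(digest[:8], "big") % len(NODES)]

def route_insert(cluster: dict, partition_key: str, row: dict) -> str:
    """Send the row to its owning node only; nothing is replicated here."""
    node = owner_node(partition_key)
    cluster.setdefault(node, {})[partition_key] = row
    return node

cluster = {}
for account in ("acct-100", "acct-101", "acct-102", "acct-103"):
    route_insert(cluster, account, {"balance": 0})

# Every key is stored on exactly one node, so a single-key lookup
# touches one server; replication-based scaling would instead copy
# the whole dataset to every node.
total_rows = sum(len(rows) for rows in cluster.values())
```

In this scheme adding nodes adds both storage and write capacity, which is the "horizontally distributed data" property the comment calls out; replicating data across nodes, by contrast, scales reads but not the dataset size.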