Rick Veague is Americas CTO for IFS.
In-memory computing is one of the most talked-about technologies right now. But how the technology works, and how it can benefit enterprises and their processes, is a completely different story – one that needs to be told.
At the basic level, in-memory computing replaces slower disc-based data tables and instead uses the random-access memory (RAM) of a computer or a cluster of computing resources in the cloud, offering significant speed and cost benefits.
Combining ERP software with in-memory will preserve the traditional database traits of atomicity, consistency, isolation and durability (ACID) that are there to guarantee transaction integrity. Unlike pure in-memory applications, ERP with in-memory may include a hybrid approach, with both an in-memory and disc-based database. This helps maintain RAM reserves by allowing an application to decide which parts of transactional data are disc-based and which should be in-memory.
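The hybrid idea – letting the application decide which tables live in RAM and which stay on disc – can be sketched in a few lines. This is a toy illustration under assumed names (`HybridStore`, `hot_tables`), not any vendor's actual API:

```python
# Illustrative sketch only: a toy hybrid store that keeps tables flagged as
# "hot" in RAM (a dict) and writes everything else to disc as JSON files.
import json
import tempfile
from pathlib import Path

class HybridStore:
    def __init__(self, hot_tables, data_dir):
        self.hot_tables = set(hot_tables)  # tables the application pins in memory
        self.ram = {}                      # in-memory tables
        self.data_dir = Path(data_dir)     # disc-based tables live here

    def write(self, table, rows):
        if table in self.hot_tables:
            self.ram[table] = rows
        else:
            (self.data_dir / f"{table}.json").write_text(json.dumps(rows))

    def read(self, table):
        if table in self.hot_tables:
            return self.ram[table]
        return json.loads((self.data_dir / f"{table}.json").read_text())

store = HybridStore(hot_tables={"orders"}, data_dir=tempfile.mkdtemp())
store.write("orders", [{"id": 1, "qty": 5}])            # served from RAM
store.write("attachments", [{"id": 9, "blob": "..."}])  # spilled to disc
```

A real hybrid database makes this placement decision transparently and keeps both stores transactionally consistent; the sketch only shows the routing principle.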
When choosing to adopt in-memory as part of your ERP strategy, there are three main questions you need to ask first.
1. What are the drivers for in-memory adoption? The incentives that drive a company to adopt in-memory computing are straightforward. Some large enterprises may be harnessing big data from social media and other online sources and harvesting insights from an in-memory data set. But for many industrial companies, the most compelling case for in-memory technology may stem from the need of senior managers to view aggregated enterprise data in real-time.
In-memory computing can also be a way for an ERP vendor to address underlying issues in an application’s architecture. If the original enterprise software architecture was too complex, the application may have to look in more than a dozen locations in a relational database to satisfy a single query. The vendor may be able to simplify this convoluted model and speed up queries by moving entirely from disc-based data storage to in-memory.
But an IT department may find that running the entirety of an application in-memory is not economically attractive. While the cost of RAM and flash memory has been falling, a 1TB RAM cluster still costs on the order of $20,000 to $40,000. For scalability and cost reasons, it may be wise for businesses to be selective about which portions of the database they run in-memory. Moreover, ERP applications that run entirely in-memory tend to force end user companies to staff up with technical experts in this very specific technology.
2. How will it optimize the speed of queries and reports? The main benefit is enhanced processing speed. Data stored in-memory can be accessed hundreds of times faster than would be the case on a hard disc – which is important for businesses dealing with larger data sets and non-indexed tables that need to be accessed immediately.
Within ERP, this speed is particularly useful when companies are running ad-hoc queries, say, to identify customer orders that conform to specific criteria or determine which customer projects consume a common part. Enterprise software run with traditional disc-based storage is likely to bog down if the database running business transactions in real-time is also responding to regular queries from the business intelligence systems.
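The kinds of ad-hoc questions described above are cheap to answer once the records sit in memory. A hedged sketch – the field names (`customer`, `part`, `total`) are hypothetical, not from any particular ERP schema:

```python
# Sketch: ad-hoc queries over order records already held in memory.
orders = [
    {"id": 1, "customer": "Acme", "part": "P-100", "total": 1200.0},
    {"id": 2, "customer": "Beta", "part": "P-200", "total": 300.0},
    {"id": 3, "customer": "Acme", "part": "P-100", "total": 800.0},
]

# "Which orders conform to specific criteria?" – e.g. totals above 500.
big_orders = [o for o in orders if o["total"] > 500]

# "Which customers consume a common part?" – group customers by part number.
consumers = {}
for o in orders:
    consumers.setdefault(o["part"], set()).add(o["customer"])
```

Run against an in-memory copy, such scans avoid loading the transactional database that is busy recording new orders.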
In practice, then, an in-memory application should be a hybrid of RAM and disc-based storage. In theory, a pure in-memory computing system requires no disc space, but this is impractical: modern enterprise applications store both structured and unstructured data – photos, technical drawings, video and other materials – that is never used for analytical purposes but would consume a great deal of memory. The benefit of moving imagery in-memory – for example, photos an electric utility engineer may take of meters – would be minimal and the cost high. This data is not queried, does not drive visualizations or business intelligence, and would consume substantial memory resources.
A hybrid model containing both a traditional and in-memory database working in sync enables the end user to keep all or part of the database in-memory, so that columns and tables that are frequently queried by business analytics tools or referenced in ad hoc queries can be accessed almost instantly. Meanwhile, data that doesn’t need to be accessed as frequently is stored in a physical disc, enabling businesses to get real-time access to important information while making the most of their current IT systems.
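One common way to keep frequently queried data resident in memory while leaving the rest on disc is a read-through cache: the first query goes to disc, and repeat queries are answered from RAM. A minimal sketch, assuming JSON files stand in for the disc-based database (not a feature of any specific product):

```python
# Sketch of a read-through pattern: hot tables are answered from an
# in-memory cache; cold reads fall back to disc.
import functools
import json
import tempfile
from pathlib import Path

DATA_DIR = Path(tempfile.mkdtemp())  # stands in for disc-based storage
(DATA_DIR / "customers.json").write_text(json.dumps([{"id": 1, "name": "Acme"}]))

@functools.lru_cache(maxsize=32)     # frequently used tables stay resident in RAM
def read_table(name):
    # Only the first access for a given table touches the disc.
    return json.loads((DATA_DIR / f"{name}.json").read_text())

customers = read_table("customers")  # first call: disc read (cache miss)
customers = read_table("customers")  # second call: served from memory (cache hit)
```

A hybrid database does the equivalent below the SQL layer, so analytics tools see one database while hot columns and tables are served at memory speed.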
3. Where should I be deploying in-memory computing? The cost of RAM is one reason that it may be more desirable to simply use in-memory to speed up processing in specific parts of the database that are frequently queried. This delivers the greatest benefit with minimal cost for additional RAM. Rather than keeping an entire application database in-memory, most companies may prefer to rely on a database kept in traditional servers or server clusters on-premise or in the cloud, keeping only highly-trafficked data in-memory.
Determining which sections or how much of an ERP database should be run in-memory will depend on the use case, but there are three main areas in-memory computing can help optimize:
Analysis of Large Data Sets
Real-time streaming of data, whether it is actual big data that resides outside a transaction system or data from within your ERP, requires tremendous computing resources. If this information is staged in a traditional data warehouse it will be old and less useful by the time it is analyzed, but continuous queries against the transactional database could lead to performance issues. Even traditional business intelligence processes in industries that can benefit from real-time or predictive analytics require real-time streaming data rather than periodic updates, making in-memory an attractive option.
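Streaming analytics typically means maintaining a small, live aggregate in memory rather than re-querying a warehouse. A hedged sketch – the rolling average and the window size are arbitrary illustrative choices:

```python
# Sketch: a rolling in-memory aggregate over streaming readings, so a
# dashboard can show a live figure without hitting the transactional database.
from collections import deque

class RollingAverage:
    def __init__(self, window):
        self.values = deque(maxlen=window)  # only the window stays in memory

    def add(self, reading):
        self.values.append(reading)         # oldest reading falls off automatically
        return sum(self.values) / len(self.values)

meter = RollingAverage(window=3)
for reading in [10, 20, 30, 40]:
    latest = meter.add(reading)
# after the stream, the window holds [20, 30, 40], so the average is 30.0
```

The point is that the memory footprint is bounded by the window, not by the full history, which is what makes continuous real-time figures affordable.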
Zeroing in on Key Information
If there is data in an application subject to frequent queries for decision-making or ad-hoc reporting, it makes sense to move those tables in-memory. Otherwise, these queries could take a while to complete, and the load on the transactional database could affect the experience of end users. If you want to summarize a thousand rows out of a million, or to retrieve a handful of columns representing a small fraction of the total data volume, this is one area where a targeted approach to in-memory computing shines.
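Retrieving a handful of columns out of a wide table is exactly what a columnar in-memory layout is good at: the summary touches only the columns it needs. An illustrative sketch with invented column names:

```python
# Sketch: a columnar in-memory layout. Summarizing by region touches only
# the "region" and "total" columns, not every row's full record.
columns = {
    "order_id": list(range(1_000)),
    "region":   ["EMEA" if i % 2 else "AMER" for i in range(1_000)],
    "total":    [float(i) for i in range(1_000)],
}

# Summarize the subset of rows we care about by scanning two columns only.
emea_total = sum(t for r, t in zip(columns["region"], columns["total"])
                 if r == "EMEA")
```

In-memory analytics engines generally store data column-wise for this reason; a row-oriented disc read would drag every column of every row through the I/O path to answer the same question.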
Selecting the Right Transactions
Running an entire transactional database in-memory will probably never be optimal, but it is possible. For a very large database with tens or hundreds of thousands of transactions per second, in-memory across the board may be the best way to ensure performance without event loss. High-volume transactional environments on this scale are rare, however, and in most cases it will still make sense to move only carefully chosen subsets of a transactional database in-memory.
Another String to the ERP Bow
The decision on where and how to deploy in-memory computing rests with each enterprise. Being limited to in-memory-only applications can increase cost, as a complete in-memory deployment requires significant investment. Technology is available and in place today that allows businesses to reap the benefits of in-memory while keeping other, less-suited processes on traditional disc-based infrastructure.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.