In-memory processing

In computer science, in-memory processing is a developing technology for processing of data stored in an in-memory database. Older systems have been based on disk storage and relational databases using SQL query language, but these are increasingly regarded as inadequate to meet business intelligence (BI) needs. Because stored data is accessed much more quickly when it is placed in Random Access Memory (RAM) or flash memory, in-memory processing allows data to be analysed in real time, enabling faster reporting and decision-making in business.[1][2]

Disk-based BI

Data structures

With the hitherto prevalent disk-based technology, data is loaded onto the computer's hard disk in the form of multiple tables and multi-dimensional structures against which queries are run. Disk-based technologies are relational database management systems (RDBMS), often based on the structured query language (SQL), such as SQL Server, MySQL, Oracle and many others. RDBMSs are designed for the requirements of transactional processing. In a database that supports insertions and updates as well as queries, performing the aggregations and joins typical of BI solutions tends to be very slow. Another drawback is that SQL is designed to fetch whole rows of data efficiently, whereas BI queries usually read only a few columns of many rows and involve heavy calculations.
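As a rough illustration, the sketch below (using Python's built-in sqlite3 module; the table and column names are invented for the example) shows the kind of join-plus-aggregation query typical of BI workloads, which a row-oriented engine answers by reading whole rows even though only two columns are needed.

```python
# Illustrative sketch only: a BI-style join + aggregation against a small
# row-oriented SQLite database. Table and column names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")  # a file path would give a disk-based database
conn.executescript("""
    CREATE TABLE region (region_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE sales  (sale_id INTEGER PRIMARY KEY, region_id INTEGER,
                         amount REAL, sold_at TEXT, notes TEXT);
    INSERT INTO region VALUES (1, 'North'), (2, 'South');
    INSERT INTO sales  VALUES (1, 1, 120.0, '2016-01-05', ''),
                              (2, 1,  80.0, '2016-01-06', ''),
                              (3, 2, 200.0, '2016-01-07', '');
""")

# The query needs only region.name and sales.amount, but a row-oriented
# engine still reads complete rows to compute the totals.
query = """
    SELECT r.name, SUM(s.amount) AS total
    FROM sales AS s JOIN region AS r ON r.region_id = s.region_id
    GROUP BY r.name
"""
for name, total in conn.execute(query):
    print(name, total)   # totals per region: North 200.0, South 200.0
```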

To improve query performance, multidimensional databases or OLAP cubes - also called multidimensional online analytical processing (MOLAP) - are constructed. Designing a cube is an elaborate and lengthy process, and changing the cube's structure to adapt to dynamically changing business needs may be cumbersome. Cubes are pre-populated with data to answer specific queries and although they increase performance they are still not suitable for answering ad hoc queries.[3]
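The pre-aggregation idea behind a cube can be sketched as follows. This is a conceptual illustration in Python, not the structure of any particular MOLAP product, and the fact table and dimension names are invented: totals are computed once for every combination of the chosen dimensions, so anticipated queries become simple lookups, while questions about a dimension outside the cube still require the detailed data.

```python
# Conceptual sketch of a pre-aggregated "cube": totals for every combination
# of the chosen dimensions are computed up front. Data is invented.
from collections import defaultdict
from itertools import combinations

facts = [
    {"product": "A", "region": "North", "month": "Jan", "amount": 120.0},
    {"product": "A", "region": "South", "month": "Jan", "amount": 200.0},
    {"product": "B", "region": "North", "month": "Feb", "amount": 80.0},
]
dimensions = ("product", "region", "month")

cube = defaultdict(float)
for row in facts:
    # Aggregate over every subset of dimensions (the "group-by sets" of the cube).
    for r in range(len(dimensions) + 1):
        for dims in combinations(dimensions, r):
            key = tuple((d, row[d]) for d in dims)
            cube[key] += row["amount"]

# Anticipated query: total for product A in January -- answered by lookup.
print(cube[(("product", "A"), ("month", "Jan"))])   # 320.0
# An ad hoc question on a dimension not in the cube (e.g. salesperson)
# cannot be answered without going back to the detailed data.
```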

Information technology (IT) staff spend substantial development time on optimizing databases, constructing indexes and aggregates, designing cubes and star schemas, data modeling, and query analysis.[4]

Processing speed

Reading data from the hard disk is much slower (possibly hundreds of times) than reading the same data from RAM. Performance is severely degraded, especially when analyzing large volumes of data. Though SQL is a very powerful tool, complex queries take a relatively long time to execute and can slow down concurrent transactional processing. In order to obtain results within an acceptable response time, many data warehouses have been designed to pre-calculate summaries and answer specific queries only. Optimized aggregation algorithms are needed to increase performance.
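A minimal timing sketch of the principle, assuming Python's sqlite3 module, is shown below. The absolute numbers depend heavily on hardware and on operating-system file caching, so the comparison is only indicative.

```python
# Rough sketch: time the same aggregation against an on-disk and an in-memory
# SQLite database. Results vary with hardware and OS caching; illustrative only.
import os
import sqlite3
import time

def build(conn):
    conn.execute("CREATE TABLE t (k INTEGER, v REAL)")
    conn.executemany("INSERT INTO t VALUES (?, ?)",
                     ((i % 100, float(i)) for i in range(500_000)))
    conn.commit()

def timed_query(conn):
    start = time.perf_counter()
    conn.execute("SELECT k, SUM(v) FROM t GROUP BY k").fetchall()
    return time.perf_counter() - start

if os.path.exists("demo.db"):
    os.remove("demo.db")
disk = sqlite3.connect("demo.db")    # database stored in a file on disk
mem = sqlite3.connect(":memory:")    # same data held entirely in RAM
for conn in (disk, mem):
    build(conn)

print("disk   :", timed_query(disk))
print("memory :", timed_query(mem))
disk.close()
os.remove("demo.db")
```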

In-memory processing tools

In-memory processing can be accomplished via traditional databases such as Oracle, DB2 or Microsoft SQL Server, or via NoSQL offerings such as in-memory data grids like Hazelcast, Infinispan, or Oracle Coherence. With both in-memory databases and data grids, all information is initially loaded into RAM or flash memory instead of onto hard disks. With a data grid, processing can be orders of magnitude faster than with relational databases that offer advanced functionality such as ACID guarantees, which degrade performance in exchange for the additional functionality. The arrival of column-centric databases, which store similar information together, allows data to be stored more efficiently and with greater compression. This allows huge amounts of data to be stored in the same physical space, reducing the amount of memory needed to perform a query and increasing processing speed. Many users and software vendors have integrated flash memory into their systems to allow systems to scale to larger data sets more economically. Oracle has been integrating flash memory into the Oracle Exadata products for increased performance. Microsoft SQL Server 2012 BI/Data Warehousing software has been coupled with Violin Memory flash memory arrays to enable in-memory processing of data sets greater than 20 TB.[5]
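The column-centric idea can be sketched as follows; plain Python arrays stand in for a real columnar engine here, the data is invented, and the printed sizes are only indicative.

```python
# Sketch of column-centric storage: values of one column are kept together in a
# compact, type-homogeneous array, so an aggregation scans only the columns it
# needs instead of touching every row object. Illustration only.
import sys
from array import array

N = 10_000

# Row-oriented: one dict per row; every query touches whole rows.
rows = [{"id": i, "region": i % 4, "amount": float(i)} for i in range(N)]

# Column-oriented: each column stored contiguously.
col_region = array("b", (i % 4 for i in range(N)))       # 1 byte per value
col_amount = array("d", (float(i) for i in range(N)))     # 8 bytes per value

# Aggregate amount for region 2 by scanning just two columns.
total = sum(a for r, a in zip(col_region, col_amount) if r == 2)
print(total)

print("row objects ~", sum(sys.getsizeof(r) for r in rows), "bytes")
print("two columns ~",
      col_region.itemsize * len(col_region)
      + col_amount.itemsize * len(col_amount), "bytes")
```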

Users query the data loaded into the system’s memory, thereby avoiding slower database access and performance bottlenecks. This differs from caching, a very widely used method of speeding up query performance, in that caches are subsets of very specific, pre-defined, organized data. With in-memory tools, the data available for analysis can be as large as a data mart or small data warehouse held entirely in memory. It can be accessed quickly by multiple concurrent users or applications at a detailed level, and it offers the potential for enhanced analytics and for scaling and increasing the speed of an application. In theory, data access is of the order of 10,000 to 1,000,000 times faster than access from disk. In-memory processing also minimizes the need for performance tuning by IT staff and provides faster service for end users.
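The contrast between a cache and a fully in-memory data set can be sketched as follows (Python with sqlite3; the table and queries are invented for the example): a cache holds the answer to a specific, pre-defined question, whereas an in-memory table can answer ad hoc questions at the detailed level.

```python
# Sketch: a cached, pre-defined result versus an in-memory table that can
# answer arbitrary ad hoc queries. Names and data are illustrative only.
import sqlite3

mart = sqlite3.connect(":memory:")   # whole (small) data mart held in RAM
mart.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
mart.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("North", "A", 120.0), ("South", "A", 200.0),
                  ("North", "B", 80.0)])

# A cache holds the answer to one specific, pre-defined question ...
cache = {"total_by_region": dict(
    mart.execute("SELECT region, SUM(amount) FROM sales GROUP BY region"))}
print(cache["total_by_region"]["North"])   # fast, but fixed in shape

# ... whereas the in-memory table can answer questions nobody anticipated.
print(mart.execute(
    "SELECT SUM(amount) FROM sales WHERE product = 'B'").fetchone()[0])
```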

Growing advantages of in-memory technology

Certain developments in computer technology and business needs have tended to increase the relative advantages of in-memory technology.[6]

Application in business

A range of in-memory products provide the ability to connect to existing data sources and access to visually rich interactive dashboards. This allows business analysts and end users to create custom reports and queries without much training or expertise. Easy navigation and the ability to modify queries on the fly are of benefit to many users. Since these dashboards can be populated with fresh data, users have access to real-time data and can create reports within minutes. In-memory processing may be of particular benefit in call centers and warehouse management.[8]

With in-memory processing the source database is queried only once, instead of being accessed every time a query is run, thereby eliminating repetitive processing and reducing the burden on database servers. By scheduling the in-memory database to be populated overnight, the database servers can be used for operational purposes during peak hours.
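A minimal sketch of this "load once, query many times" pattern, assuming Python's sqlite3 module and an invented file name, might look like this.

```python
# Sketch: snapshot the operational database into RAM during an off-peak window,
# then run all reporting queries against the in-memory copy. Uses sqlite3's
# backup API (Python 3.7+); the file name is illustrative.
import sqlite3

def refresh_in_memory_copy(path="operational.db"):
    """Run on a schedule (e.g. nightly): copy the source database into RAM."""
    source = sqlite3.connect(path)
    memory = sqlite3.connect(":memory:")
    source.backup(memory)     # one bulk copy instead of per-query access
    source.close()
    return memory

# Reporting queries then hit only the in-memory snapshot, not the source server:
# reports_conn = refresh_in_memory_copy()
# reports_conn.execute("SELECT ...").fetchall()
```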

Adoption of in-memory technology

With a large number of users, a large amount of RAM is needed for an in-memory configuration, which in turn affects the hardware costs. The investment is more likely to be suitable in situations where speed of query response is a high priority, and where there is significant growth in data volume and increase in demand for reporting facilities; it may still not be cost-effective where information is not subject to rapid change. Security is another consideration, as in-memory tools expose huge amounts of data to end users. Makers advise ensuring that only authorized users are given access to the data.[9]

References

  1. Plattner, Hasso; Zeier, Alexander (2012). In-Memory Data Management: Technology and Applications. Springer Science & Business Media. ISBN 9783642295744.
  2. Hao Zhang; Gang Chen; Beng Chin Ooi; Kian-Lee Tan; Meihui Zhang (July 2015). "In-Memory Big Data Management and Processing: A Survey". IEEE Transactions on Knowledge and Data Engineering. 27 (7): 1920–1948.
  3. Gill, John (2007). "Shifting the BI Paradigm with In-Memory Database Technologies". Business Intelligence Journal. 12 (2): 58–62.
  4. Earls, A (2011). Tips on evaluating, deploying and managing in-memory analytics tools (PDF). Tableau. Archived from the original (PDF) on 2012-04-25.
  5. "SQL Server 2012 with Violin Memory" (PDF). Microsoft.
  6. "In_memory Analytics". yellowfin. p. 6.
  7. Kote, Sparjan. "In-memory computing in Business Intelligence". Archived from the original on April 24, 2011.
  8. "In_memory Analytics". yellowfin. p. 9.
  9. "In_memory Analytics". yellowfin. p. 12.