Modern business analytics products must be easy to use, function at massive scale, and provide granular detail about disparate datasets.
"Processing millions of database tables is easy. Crunching billions of records? That's challenging," said Jeremy Sokolic, VP of Product at Sisense. Big data for enterprises and SMBs is big business, and some of the analytics industry's most innovative companies revealed powerful new backend technologies this week at New York's annual Data Summit.
From SQL and Hadoop database innovations to advances in cloud services like Azure and AWS, Data Summit's primary theme was business scale. Sokolic emphasized that merely working with large numbers is no longer good enough. Many of the best analytics solutions still require trained IT professionals to chain multiple products together. This increases overhead, decreases efficiency, and can often result in miscommunication between business and technical departments, Sokolic said. Using what the company calls "In-Chip technology," Sokolic claimed Sisense can solve those problems while processing and visualizing massive database records. The goal is to build a product that is powerful and easy to use for IT and business departments alike.
SEE: Three ways encryption can safeguard your cloud files (Tech Pro Research)
Though database tech, particularly SQL and Hadoop, was initially developed to store information, Sokolic said, companies increasingly require relational databases to be flexible. Unlike most rack-based solutions, "In-Chip" tech is optimized for these types of databases and does not require dedicated local hardware. The software product is aided by Intel's Core vPro processor in the cloud, runs on the user's machine, and processes data using the local CPU cache.
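Sisense has not published the internals of In-Chip, but the general idea behind CPU-cache-friendly analytics can be sketched generically: storing a column as one contiguous block of memory lets the processor stream values through its cache, rather than chasing pointers across scattered row objects. The dataset and field names below are hypothetical, for illustration only.

```python
# Illustrative sketch only -- not Sisense's actual implementation.
# Contrast a row-oriented layout (one heap object per record) with a
# column-oriented layout (values packed contiguously), the access
# pattern that CPU caches reward.
from array import array

# Row-oriented layout: each record is a separate Python dict, scattered in memory.
rows = [{"order_id": i, "revenue": float(i % 100)} for i in range(100_000)]

# Column-oriented layout: the same "revenue" values packed into one buffer.
revenue_col = array("d", (r["revenue"] for r in rows))

def total_revenue_rows(records):
    # Touches a separate heap object per record: poor cache locality.
    return sum(r["revenue"] for r in records)

def total_revenue_column(col):
    # Streams one contiguous buffer: cache-friendly sequential access.
    return sum(col)

# Both layouts yield the same answer; only the memory traversal differs.
assert total_revenue_rows(rows) == total_revenue_column(revenue_col)
```

In real columnar engines the contiguous layout also enables SIMD vectorization and compression, which is where the large speedups over row-at-a-time processing typically come from.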
This approach is 100 times faster than other solutions, the company claimed in a recent press release. This means users can upload a variety of dissimilar data types and still get meaningful visualizations and insights. "[In-Chip] lets us chain together disparate datasets at massive scale," Sokolic said, "then get really granular even as we get bigger."