General discussion


Fastest and Smallest Database Engine

By o1j2m3 ·
This is my first post here; I hope to get some valuable feedback.

Read and write performance is critical for a database engine.

Currently, most database engines are designed around tree/map/hash data structures, and they rely on clustering, indexing, and in-memory processing to speed up database I/O.

Hard disk I/O becomes the bottleneck. I usually get a headache working with large databases containing 100GB of data or more, and analyzing old backup files is another nightmare.

I am just wondering why there is no design where:
1. A data table at the core of the database stores each unique value exactly once
2. All other index tables point into that data table
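To make the idea concrete, here is a minimal sketch in Python of what I mean (this is my own illustration, not an existing engine; all class and method names are made up): a central store holds each unique value once, and columns hold only integer references into it.

```python
class DedupStore:
    """Central data table: each unique value is stored exactly once."""

    def __init__(self):
        self.values = []   # id -> value
        self.ids = {}      # value -> id

    def intern(self, value):
        """Return the id for value, inserting it if it is new."""
        if value not in self.ids:
            self.ids[value] = len(self.values)
            self.values.append(value)
        return self.ids[value]

    def lookup(self, vid):
        return self.values[vid]


class Column:
    """An index table: a list of references into the shared store."""

    def __init__(self, store):
        self.store = store
        self.refs = []     # row -> value id

    def append(self, value):
        self.refs.append(self.store.intern(value))

    def get(self, row):
        return self.store.lookup(self.refs[row])


store = DedupStore()
city = Column(store)
for c in ["Paris", "London", "Paris", "Paris", "London"]:
    city.append(c)

print(len(store.values))   # 2 unique values backing 5 rows
print(city.get(2))         # Paris
```

Every duplicate value costs only one reference, and the reference array itself is effectively a pre-built index over the distinct values.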

Conceptually, this design would greatly improve read speed, because the data is effectively indexed from the start.

Where we would normally need 100GB of hard disk space to store a database, with this design we might only need around 20GB.

The redundancy of data in a large database can be enormous: a single date or numeric value may occur ten times or more. The design would slow down inserts and updates, but it would greatly improve read and maintenance performance.
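A back-of-envelope calculation shows where the savings could come from (the sizes and duplication factor below are my own illustrative assumptions, not measurements):

```python
value_size = 20    # assumed average bytes per stored value
occurrences = 10   # assumed average duplication factor
ref_size = 4       # bytes for one reference into the data table

plain = value_size * occurrences              # store every copy: 200 bytes
dedup = value_size + ref_size * occurrences   # store once + references: 60 bytes
print(dedup / plain)                          # 0.3
```

With these numbers, the deduplicated layout needs about 30% of the original space, which is the rough ballpark of the 100GB-to-20GB figure above; the real ratio depends entirely on how redundant the data actually is.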

The main thing is, it could be used in micro devices.
