Last edited by Anand Chitipothu
February 4, 2009 | History

Infostore: Generalizing Infobase to support multiple database schemas

Every problem in software can be solved by adding one more level of indirection.

I am trying to solve the performance problems in Infogami/Infobase by adding another level of indirection.

Infostore

I am introducing a new layer, Infostore, which is a low-level storage layer below Infobase.

Infobase decides what operations need to be performed and Infostore decides how an operation is performed.

Since all the decision making is done by Infobase, the job of Infostore is very straightforward.
This simplicity makes it easy to provide different implementations of Infostore. In fact, it will be possible to migrate from one implementation of Infostore to another without changing the rest of the system.

Infobase interacts with Infostore using the following API.

class Infostore:
    """Storage for Infobase"""
    def get(self, key, revision):
        """Returns the object with the specified key and revision."""
        pass
    def create(self, key, properties):
        """Creates a new object with the specified key.
        properties is a list of (name, datatype, value).
        """
        pass
    def update(self, key, actions):
        """Updates the object with the specified key by applying the actions.
        actions is a list of (action, name, datatype, value).
        action can be one of "insert", "delete", "update", "update_list".
        """
        pass
    def things(self, conditions):
        """Returns keys of the objects matching the specified conditions.
        conditions is a list of (name, op, datatype, value). op can be one of
        ("<", "<=", "=", "!=", ">=", ">", "~")
        """
        pass
    def versions(self, conditions):
        """Returns a list of versions matching the specified conditions.
        conditions is a list of (name, op, datatype, value). op can be one of
        ("<", "<=", "=", "!=", ">=", ">", "~")
        """
        pass
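To illustrate the contract, here is a minimal in-memory sketch of this API. This is hypothetical and not the real implementation; a real Infostore backs these operations with database tables, and the sketch only handles the "=" operator and treats list actions naively.

```python
class MemoryInfostore:
    """Toy in-memory Infostore, for illustrating the API contract only."""

    def __init__(self):
        # key -> list of revisions; each revision is {name: (datatype, value)}
        self.objects = {}

    def get(self, key, revision):
        # revisions are 1-based, like Infobase revisions
        return self.objects[key][revision - 1]

    def create(self, key, properties):
        # properties is a list of (name, datatype, value)
        self.objects[key] = [{name: (datatype, value)
                              for name, datatype, value in properties}]

    def update(self, key, actions):
        # apply actions to a copy of the latest revision; append as a new revision
        data = dict(self.objects[key][-1])
        for action, name, datatype, value in actions:
            if action == "delete":
                data.pop(name, None)
            else:
                # "insert", "update" and "update_list" all just set the value here
                data[name] = (datatype, value)
        self.objects[key].append(data)

    def things(self, conditions):
        # conditions is a list of (name, op, datatype, value); only "=" is sketched
        result = []
        for key, revisions in self.objects.items():
            latest = revisions[-1]
            if all(op == "=" and latest.get(name) == (datatype, value)
                   for name, op, datatype, value in conditions):
                result.append(key)
        return result
```

Old revisions stay addressable by number while things() only consults the latest revision, which mirrors the split proposed below.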

Possible Infostore Implementations

Existing Implementation

We are seeing some performance issues with the current implementation.

Since all properties are stored in the same table, scanning for one property may involve hitting more disk blocks.
This can be improved by splitting the data into multiple tables whenever possible.

Since the data for all revisions is stored in the same table, additional indexes are required for begin_revision and end_revision.
This can be improved by storing the full data of every revision as JSON in a separate table and keeping only the latest data in other tables for querying.
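A rough sketch of that idea, using plain dicts in place of tables (the names `revisions` and `latest_index` are made up for illustration): every save appends the full object as JSON to an append-only revisions store, while only the latest values are indexed for querying.

```python
import json

revisions = {}     # (key, revision) -> JSON of the full object; append-only
latest_index = {}  # (name, datatype, value) -> set of keys; holds latest data only

def save(key, revision, properties):
    """Save a new revision. properties is a list of (name, datatype, value)."""
    # the full revision data goes into the JSON store and is never indexed
    revisions[(key, revision)] = json.dumps(properties)
    # drop the old index entries for this key, then index the new latest values
    for keys in latest_index.values():
        keys.discard(key)
    for name, datatype, value in properties:
        latest_index.setdefault((name, datatype, value), set()).add(key)
```

Since queries only touch the latest data, the query tables no longer need begin_revision/end_revision indexes.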

The following implementations of Infostore implement the improvements mentioned above.

One table for every datatype

Generalized Implementation

It is still possible to extend the above idea to split the data further by keeping separate tables for different types.

sys_*       /type/type, /type/property
user_*      /type/user, /type/usergroup, /type/permission
author_*    /type/author
edition_*   /type/edition
datum_*     everything else

For example, the title property of /type/edition is stored in edition_str.

We can extend this idea even further by splitting based on (type, property_name) for some important properties. The authors property of /type/edition is an obvious candidate.

Once all these mappings are specified, generating the SQL schema from them is straightforward.

# type          property,datatype   table
/type/type          *               sys_*
/type/property      *               sys_*
/type/user          *               user_*
/type/usergroup     *               user_*
/type/permission    *               user_*
/type/author        *               author_*
/type/edition       authors, ref    edition_authors_ref
/type/edition       *               edition_*
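The mapping above can be expressed as an ordered list of rules where the first match wins. A sketch of that lookup, with rule data taken from the table (the function name find_table is made up):

```python
# Ordered rules: (type, property or "*", datatype or None, table name or prefix).
# The specific /type/edition authors rule comes before its catch-all rule.
RULES = [
    ("/type/type",       "*",       None,  "sys"),
    ("/type/property",   "*",       None,  "sys"),
    ("/type/user",       "*",       None,  "user"),
    ("/type/usergroup",  "*",       None,  "user"),
    ("/type/permission", "*",       None,  "user"),
    ("/type/author",     "*",       None,  "author"),
    ("/type/edition",    "authors", "ref", "edition_authors_ref"),
    ("/type/edition",    "*",       None,  "edition"),
]

def find_table(thing_type, name, datatype):
    """Return the table that stores the given (type, property, datatype)."""
    for rule_type, rule_name, rule_datatype, table in RULES:
        if rule_type == thing_type and rule_name in (name, "*") \
                and rule_datatype in (datatype, None):
            # catch-all rules map to one table per datatype, e.g. edition_str
            return table if rule_name != "*" else table + "_" + datatype
    return "datum_" + datatype  # everything else
```

Adding a new kind of data is then just a matter of appending rules for its type.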

Too many tables? I don't think that is an issue.

This allows adding a new kind of data without affecting the performance of the existing system. Suppose we want to add support for book reviews; all review data can be stored in a separate set of tables.

The thing table has to store the keys of all the objects, but I am assuming that is not a performance issue. (Need to test how big the thing table can become.)


Performance study (Updated on July 17, 2008)

I have created a new database with the following schema and loaded all objects from the production database.

create table thing (
    id serial primary key,
    key text,
    type int references thing,
    latest_revision int,
    last_modified timestamp,
    created timestamp
);
create index thing_key_idx ON thing(key);
create index thing_type_idx ON thing(type);
create table edition_keys (
    id serial primary key,
    key text
);
create table edition_str (
    thing_id int references thing,
    key_id int references edition_keys,
    value varchar(2048),
    ordering int default NULL
);
create index edition_str_idx ON edition_str(key_id, value);
create index edition_str_thing_id_idx ON edition_str(thing_id);
create table edition_int (
    thing_id int references thing,
    key_id int references edition_keys,
    value int,
    ordering int default NULL
);
create index edition_int_idx ON edition_int(key_id, value);
create index edition_int_thing_id_idx ON edition_int(thing_id);
create table edition_ref (
    thing_id int references thing,
    key_id int references edition_keys,
    value int references thing,
    ordering int default NULL
);
create index edition_ref_idx ON edition_ref(key_id, value);
create index edition_ref_thing_id_idx ON edition_ref(thing_id);
...
(similar schema for author_xxx tables)

And ran the following queries on it.

Test1: Find books of an author

SELECT * FROM thing
JOIN edition_ref author ON author.thing_id=thing.id
WHERE author.key_id=$key_id AND author.value=$author_id
LIMIT 20

Average time taken: 0.05 sec

Disk stats:

   relname   | heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit
-------------+----------------+---------------+---------------+--------------
 edition_ref |              3 |             0 |             2 |            2
 thing       |              3 |             0 |             3 |            6
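The query above is roughly what things() would generate for a single ref condition. A sketch of that translation, simplified to equality conditions on one property table (the function name things_query is made up):

```python
def things_query(table, conditions):
    """Build the SQL for things() from a list of (key_id, value) equality
    conditions on one property table. Returns (sql, params) for a
    parametrized query -- values are never interpolated into the SQL."""
    joins, wheres, params = [], [], []
    for i, (key_id, value) in enumerate(conditions):
        alias = "p%d" % i
        joins.append("JOIN %s %s ON %s.thing_id=thing.id" % (table, alias, alias))
        wheres.append("%s.key_id=%%s AND %s.value=%%s" % (alias, alias))
        params += [key_id, value]
    sql = "SELECT * FROM thing %s WHERE %s LIMIT 20" % (
        " ".join(joins), " AND ".join(wheres))
    return sql, params
```

For example, things_query("edition_ref", [(7, 9890457)]) produces the single-author query above; each extra condition adds one more self-join.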

Test2: Find books of an author, sorted by book title

SELECT * FROM thing
JOIN edition_ref author ON author.thing_id=thing.id
JOIN edition_str title ON title.thing_id=thing.id
WHERE author.key_id=$key_authors AND author.value=$author_id AND title.key_id=$key_title
ORDER BY title.value LIMIT 20

Average time taken: 0.20 sec

Disk stats:

   relname   | heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit
-------------+----------------+---------------+---------------+--------------
 thing       |              0 |             6 |             1 |           18
 edition_str |              4 |             4 |             7 |           21
 edition_ref |              0 |             6 |             0 |            7

Test3: Find books where author=xx and publisher=yy order by title.

SELECT * FROM thing
JOIN edition_ref author ON author.thing_id=thing.id
JOIN edition_str publisher ON publisher.thing_id=thing.id
JOIN edition_str title ON title.thing_id=thing.id
WHERE author.key_id=7 AND author.value=9890457
    AND publisher.key_id=19 AND publisher.value='Dover Publications'
    AND title.key_id=1
ORDER BY title.value LIMIT 20

Average time taken: 5 sec

Disk stats:

   relname   | heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit
-------------+----------------+---------------+---------------+--------------
 thing       |              2 |             0 |             4 |            2
 edition_str |           7556 |             2 |            55 |            3
 edition_ref |           1243 |             0 |             6 |            0

Test4: Find books where author=xx, publisher=yy and number_of_pages = zz

SELECT * FROM thing
JOIN edition_ref author ON author.thing_id=thing.id
JOIN edition_str publisher ON publisher.thing_id=thing.id
JOIN edition_str title ON title.thing_id=thing.id
JOIN edition_int pages ON pages.thing_id=thing.id
WHERE author.key_id=7 AND author.value=1650702
    AND publisher.key_id=19 AND publisher.value='Science Research Associates'
    AND pages.key_id=2 AND pages.value = 100
    AND title.key_id=1
ORDER BY title.value LIMIT 20

Average time taken: 1.1 sec

Disk stats:

   relname   | heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit
-------------+----------------+---------------+---------------+--------------
 thing       |              0 |             0 |             0 |            0
 edition_str |              0 |           769 |             0 |           11
 edition_int |              0 |             0 |             0 |            0
 edition_ref |              0 |             1 |             0 |            3

Disk stats for different values of author_id, publisher and number of pages:

    relname   | heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit
 -------------+----------------+---------------+---------------+--------------
  thing       |              8 |             0 |            22 |            2
  edition_str |           6155 |             1 |           114 |            0
  edition_int |         243848 |             0 |          1232 |            0
  edition_ref |           1243 |             0 |             6 |            0

Time taken: 32 sec.

The same query with the condition number_of_pages < 100 takes too long (more than 5 minutes).

History

February 4, 2009 Created by Anand Chitipothu moving old docs from staging to production