Tags: c++, postgresql, libpqxx

C++, Postgres, libpqxx huge query


I have to execute an SQL query against Postgres using the following code. The query returns a huge number of rows (40M or more), each with 4 integer fields. On a workstation with 32 GB of RAM everything works, but on a 16 GB workstation the query is very slow, due to swapping I guess. Is there any way to tell the C++ code to load the rows in batches, without waiting for the entire dataset? With Java I never had these issues, probably thanks to a better JDBC driver.

try {
    work W(*Conn);
    result r = W.exec(sql[sqlLoad]);   // the whole result set is materialized here
    W.commit();

    for (result::size_type rownum = 0; rownum < r.size(); ++rownum) {
        const result::tuple row = r[rownum];
        vid1 = row[0].as<int>();
        vid2 = row[1].as<int>();
        vid3 = row[2].as<int>();
        .....
    }
} catch (const std::exception &e) {
    std::cerr << e.what() << std::endl;
}

I am using PostgreSQL 9.3, and I see it supports single-row mode (http://www.postgresql.org/docs/9.3/static/libpq-single-row-mode.html), but I do not know how to use it from my C++ code. Your help will be appreciated.
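For reference, at the raw libpq level the single-row-mode pattern would look roughly like the sketch below. This is untested, bypasses libpqxx entirely, and the query string and column layout are placeholders standing in for the real ones:

    #include <libpq-fe.h>
    #include <cstdio>
    #include <cstdlib>

    void load_rows(PGconn *conn, const char *query) {
        if (!PQsendQuery(conn, query)) {          /* submit the query asynchronously */
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return;
        }
        PQsetSingleRowMode(conn);                 /* must follow PQsendQuery immediately */
        PGresult *res;
        while ((res = PQgetResult(conn)) != NULL) {
            if (PQresultStatus(res) == PGRES_SINGLE_TUPLE) {
                /* one row per PGresult; columns 0..2 hold vid1..vid3 */
                int vid1 = atoi(PQgetvalue(res, 0, 0));
                int vid2 = atoi(PQgetvalue(res, 0, 1));
                int vid3 = atoi(PQgetvalue(res, 0, 2));
                /* ... consume the row ... */
            }
            /* the final PGRES_TUPLES_OK result is empty and ends the stream */
            PQclear(res);
        }
    }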

EDIT: This query runs only once, to build the necessary main-memory data structures; as such, it cannot be optimized away. Also, pgAdmin III easily fetches those rows in under a minute on the same (or smaller-RAM) PCs, and Java easily handles twice the number of rows via Statement.setFetchSize() (http://docs.oracle.com/javase/7/docs/api/java/sql/Statement.html#setFetchSize%28int%29). So it is really an issue with the libpqxx library, not an application issue. Is there a way to get this behaviour in C++, without manually setting LIMIT / OFFSET?


Solution

  • To answer my own question, I adapted the approach from "How to use pqxx::stateless_cursor class from libpqxx?":

    try {
        work W(*Conn);
        pqxx::stateless_cursor<pqxx::cursor_base::read_only, pqxx::cursor_base::owned>
                cursor(W, sql[sqlLoad], "mycursor", false);
        /* Assume you know the total number of records returned */
        for (size_t idx = 0; idx < countRecords; idx += 100000) {
            /* Fetch 100,000 records at a time */
            result r = cursor.retrieve(idx, idx + 100000);
            for (result::size_type rownum = 0; rownum < r.size(); ++rownum) {
                const result::tuple row = r[rownum];
                vid1 = row[0].as<int>();
                vid2 = row[1].as<int>();
                vid3 = row[2].as<int>();
                .............
            }
        }
    } catch (const std::exception &e) {
        std::cerr << e.what() << std::endl;
    }
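    If the total record count is not known up front, the same library also offers pqxx::icursorstream, which serves fixed-size batches until the cursor is exhausted, so no countRecords (and no manual LIMIT / OFFSET) is needed. A minimal sketch along the lines of the code above, assuming a libpqxx version that ships icursorstream:

    try {
        work W(*Conn);
        /* Server-side cursor that yields batches of 100,000 rows */
        pqxx::icursorstream cursor(W, sql[sqlLoad], "mycursor", 100000);
        result r;
        while (cursor >> r) {                 /* an empty batch ends the loop */
            for (result::size_type rownum = 0; rownum < r.size(); ++rownum) {
                const result::tuple row = r[rownum];
                vid1 = row[0].as<int>();
                vid2 = row[1].as<int>();
                vid3 = row[2].as<int>();
            }
        }
    } catch (const std::exception &e) {
        std::cerr << e.what() << std::endl;
    }

    (In the libpqxx versions I have seen, stateless_cursor also exposes a size() member, which would let the first snippet compute countRecords instead of assuming it.)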