Opened 9 years ago

Closed 9 years ago

Last modified 9 years ago

#12312 closed defect (fixed)

Segmentation fault migrating very large table

Reported by: Ryan J Ollos Owned by: Jun Omae
Priority: normal Component: TracMigratePlugin
Severity: normal Keywords:
Cc: Trac Release:

Description (last modified by Ryan J Ollos)

I was attempting to migrate an SQLite database to PostgreSQL last evening (in-place migration). The SQLite database is 4 GB on disk. The migration halts on the bitten_report_item table a little under a minute after it finishes migrating the previous table (bitten_report), but before reporting that any rows have been migrated. bitten_report_item is the 6th or 7th table to be migrated. The only reported error is Segmentation fault (I don't have the exact output at the moment, but I'll try to post it later). I guess it halts somewhere around: tracmigrateplugin/0.12/tracmigrate/

I don't have direct access to the database. I've been advising someone on the work remotely, writing the migration scripts and observing execution over video conference. The data in the database is proprietary, so I can't have the entire database transferred to me, and I don't have direct remote access due to the firewall. That's just a lame explanation of why I don't have a better bug report, and it might limit my ability to reproduce things.

I'm going to try to reproduce the situation by creating a database of similar size with contrived data. I imagine more information is needed, but I wanted to open the ticket ahead of time in case you have an idea of what the failure mode might be with a very large table, assuming that is the cause. Should the migration be significantly affected by the amount of memory in the system? The server has 8 GB, but I could try the migration on a workstation with more memory.

Attachments (1)

t12312-huge-sqlite.diff (6.2 KB) - added by Jun Omae 9 years ago.


Change History (14)

comment:1 Changed 9 years ago by Ryan J Ollos

Description: modified (diff)

comment:2 Changed 9 years ago by Jun Omae

Hmm. For large databases, the plugin already disables prefetching when the source is an SQLite database, at tracmigrateplugin/0.12/tracmigrate/ So I wouldn't expect the migration to halt on such a large bitten_report_item table in under a minute.
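For context, the memory difference between a prefetching ("eager") cursor and a streaming one can be illustrated with plain sqlite3. This is only a sketch with hypothetical helper names, not the plugin's or Trac's actual code:

```python
import sqlite3

def eager_rows(cnx, query):
    # Eager style: fetchall() materializes the entire result set as a
    # Python list up front, so memory grows with the table size.
    return cnx.execute(query).fetchall()

def lazy_rows(cnx, query):
    # Lazy style: iterating the cursor yields rows on demand, so memory
    # stays flat no matter how many rows the table holds.
    for row in cnx.execute(query):
        yield row
```

With tens of millions of rows, the eager variant holds every row in memory at once, which matches the behaviour reported later in this ticket.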

Could you please confirm you are using the latest version of the plugin?

comment:3 Changed 9 years ago by Ryan J Ollos

I'm sure we were using the latest version of the plugin, because the Trac instance was installed on a new server yesterday (Ubuntu 14.04) using a script that fetches the latest sources:

wget -O src/

I could be wrong about some of the details, such as how long it paused. I'll run through it again tomorrow evening and record the events more carefully.

comment:4 Changed 9 years ago by Ryan J Ollos

The migration was run again and additional information was gathered. I have a video recording of the session that I can share with you, but I don't want to post it publicly, so I'll email you directly.

System information:

  • Python 2.7.6
  • SQLite 3.8.2
  • PySQLite 2.6.0 (via Python 2.7.6) - later upgraded to PySQLite 2.6.3 with the same results, described next.

The bitten_report_item table has 58 million rows. Its migration starts at 11:04 in the video, and there's a pause of about 2 minutes after the following output:

bitten_report_item table...

and before the result:

bitten_report_item table... ./conf/ line 17: 5116 Segmentation fault      pve/bin/trac-admin "$ENV" migrate -i "postgres://$DB_USER:$DB_PASS@localhost/$DB_NAME"

Memory usage of the trac-admin process jumps from 5% to 62% at 11:32, climbs to 92% by 11:51, and stays there until the segmentation fault at 13:11. I assume it then starts to page to disk. Memory usage was 2% when migrating smaller tables, and 10% when migrating the bitten_log table, which has 12.5 million records.

Changed 9 years ago by Jun Omae

Attachment: t12312-huge-sqlite.diff added

comment:5 Changed 9 years ago by Jun Omae

Thanks for the details and the video.

I cannot reproduce it yet. I created 1.2 million rows in the node_change table, but memory usage was only 1.5% of 2 GB. When the eager cursor in SQLiteConnection is used, the usage was 12.4%.

After applying the t12312-huge-sqlite.diff patch, the plugin fetches and inserts rows in batches of 1,000, using the rowid column in SQLite. Could you please try the patch?

comment:6 Changed 9 years ago by Ryan J Ollos

Thank you for the patch. I should be able to test within 24 hours and will report back.

comment:7 Changed 9 years ago by Ryan J Ollos

I tested the patch and we encountered the same issue. Next, I tried appending ?cursor= to the [trac] database string, and it appears to be working now! Memory usage is low, in the 1-2% range. The table migration is still in progress and is estimated to complete in 4 hours; I'll report back when it's done. It looks like the eager cursor is not disabled for in-place migrations?: tracmigrateplugin/0.12/tracmigrate/,38,51-52,57#L32.

Last edited 9 years ago by Ryan J Ollos
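For anyone else hitting this, the workaround described in comment:7 is a one-line change to the [trac] section of trac.ini; per this ticket, the empty cursor parameter makes Trac use a plain (non-eager) SQLite cursor:

```ini
[trac]
database = sqlite:db/trac.db?cursor=
```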

comment:8 Changed 9 years ago by Jun Omae

Status: new → accepted

Thanks for investigating! You're right. I hadn't properly tested the in-place migration....

After the following patch, the plugin disables the eager cursor when the source database is SQLite, and asserts that it is disabled.

  • tracmigrate/

       48   48          dst_env = self._create_env(env_path, dburi)
       49   49          src_dburi = self.config.get('trac', 'database')
       50   50          src_db = get_connection(self.env)
       51        -      if src_dburi.startswith('sqlite:'):
       52        -          src_db.cnx._eager = False  # avoid uses of eagar cursor
       53   51          dst_db = get_connection(dst_env)
       54   52          self._copy_tables(src_db, dst_db, src_dburi, dburi)
       55   53          self._copy_directories(src_db, dst_env)

       65   63                             dir=os.path.dirname(self.env.path))
       66   64          try:
       67   65              dst_env = self._create_env(env_path, dburi)
       68        -          src_db = self.env.get_read_db()
       69        -          dst_db = dst_env.get_db_cnx()
            66   +          src_db = get_connection(self.env)
            67   +          dst_db = get_connection(dst_env)
       70   68              self._copy_tables(src_db, dst_db, src_dburi, dburi, inplace=True)
       71   69              del src_db
       72   70              del dst_db

      123  121      def _copy_tables(self, src_db, dst_db, src_dburi, dburi, inplace=False):
      124  122          self._printout('Copying tables:')
           124   +      if src_dburi.startswith('sqlite:'):
           125   +          src_db.cnx._eager = False  # avoid uses of eagar cursor
      126  126          src_cursor = src_db.cursor()
           127   +      if src_dburi.startswith('sqlite:'):
           128   +          from trac.db.sqlite_backend import PyFormatCursor
           129   +          if type(src_cursor.cursor) is not PyFormatCursor:
           130   +              raise AssertionError('src_cursor.cursor is %r' %
           131   +                                   src_cursor.cursor)
      127  132          src_tables = set(self._get_tables(src_dburi, src_cursor))
      128  133          cursor = dst_db.cursor()
      129  134          tables = set(self._get_tables(dburi, cursor)) & src_tables

comment:9 Changed 9 years ago by Ryan J Ollos

The table migration completed successfully. The patch attachment:t12312-huge-sqlite.diff was used along with ?cursor= on the database string. I can test the patch in comment:8 when another migration is done, sometime in the next week or so. For what it's worth, the changes look good and sensible to me. Thanks for the patch!

comment:10 Changed 9 years ago by Ryan J Ollos

Another run of the migration was done with the patch from comment:8 and it succeeded (using database = sqlite:db/trac.db). Memory use was low (~1%) throughout the migration.

comment:11 Changed 9 years ago by Jun Omae

Resolution: fixed
Status: accepted → closed

In 14598:

TracMigratePlugin: fixed out of memory when SQLite database is huge and using in-place migration with Trac 1.0 or later (closes #12312)

comment:12 Changed 9 years ago by Jun Omae

Thanks for the feedback! Committed in [14598].

comment:13 Changed 9 years ago by Ryan J Ollos

After [14614] I've seen a 6 hour migration reduced to less than 30 minutes!
