Opened 18 years ago
Closed 12 years ago
#844 closed defect (wontfix)
Conversion failed ...again
| Reported by: | anonymous | Owned by: | John Hampton |
|---|---|---|---|
| Priority: | normal | Component: | SqliteToPgScript |
| Severity: | major | Keywords: | |
| Cc: | nick.stone@… | Trac Release: | 0.11 |
Description
Basically I'm experiencing a number of problems with the conversion. Firstly the burndown plug-in causes an error but that seems to be a problem with the plugin. Secondly the following lines:
    if not opts.noclean:
        self.cleanTable(table)
in migrateTable throw an error. Thirdly, once these lines are removed, the copy only transfers some of the rows: I get back an error to the effect that the data already exists in the postgres table, even though it doesn't.
I'm happy to help out here, so if you need more info, please let me know.
Attachments (0)
Change History (24)
comment:1 Changed 18 years ago by
| Status: | new → assigned |
|---|---|
comment:2 Changed 18 years ago by
Here's the information you wanted:
System: python 2.4.3-8 postgres 8.1.4-6 python-psycopg 2.0.4-1 trac 0.10
Traceback
Traceback (most recent call last):
  File "/root/trac_scripts/sqlitetopgscript/0.10/sqlite2pg", line 332, in ?
    Main(opts)
  File "/root/trac_scripts/sqlitetopgscript/0.10/sqlite2pg", line 245, in Main
    pgenv = getPostgreSQLEnvironment(opts)
  File "/root/trac_scripts/sqlitetopgscript/0.10/sqlite2pg", line 224, in getPostgreSQLEnvironment
    env.upgrade()
  File "/usr/lib/python2.4/site-packages/trac/env.py", line 334, in upgrade
    if participant.environment_needs_upgrade(db):
  File "build/bdist.linux-i686/egg/burndown/burndown.py", line 42, in environment_needs_upgrade
  File "/usr/lib/python2.4/site-packages/trac/db/util.py", line 48, in execute
    return self.cursor.execute(sql)
  File "/usr/lib/python2.4/site-packages/trac/db/util.py", line 48, in execute
    return self.cursor.execute(sql)
psycopg2.ProgrammingError: relation "sqlite_master" does not exist
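For context on that error: sqlite_master is SQLite's internal catalog table, so any query against it fails as soon as the connection points at PostgreSQL, which is why the burndown plugin blows up here. A minimal sketch of the SQLite side, using only the Python stdlib (the table name is purely illustrative):

```python
import sqlite3

# sqlite_master exists in every SQLite database and lists its objects;
# PostgreSQL has no such relation, hence the ProgrammingError above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE burndown (id INTEGER PRIMARY KEY, remaining INTEGER)")
names = [row[0] for row in
         conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]
print(names)  # -> ['burndown']
```

A backend-agnostic plugin would have to avoid sqlite_master and use the target database's own catalog instead (on PostgreSQL, information_schema.tables).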
Trac plugins
graphviz.graphviz.graphvizmacro = enabled
iniadmin.* = enabled
navadd.* = enabled
pagetopdf.* = enabled
trac.wiki.web_ui.wikimodule = disabled
tracgantt.* = enabled
tractags.* = enabled
webadmin.* = enabled
tracnav.* = enabled
Notes:
I did try editing the script to see what is going on. Firstly, it appears that the schema is correctly created, which rules out permissions. Secondly, the script appears to drop out in the try/except within the for loop in Main. I have tried changing the except clause and have managed to get some data across, but I suspect the real problem lies in why that statement is failing.
comment:3 Changed 18 years ago by
Just for information I'm using the command as follows:
sqlite2pg -e /<sqlite-project> -s sqlite:db/trac.db -p 'postgres://<user>:<password>@localhost/trac?schema=<schema>'
comment:4 Changed 18 years ago by
My apologies - I just realised I pasted the wrong traceback. The correct traceback is below:
Traceback (most recent call last):
  File "/root/trac_scripts/sqlitetopgscript/0.10/sqlite2pg", line 331, in ?
    Main(opts)
  File "/root/trac_scripts/sqlitetopgscript/0.10/sqlite2pg", line 250, in Main
    rc = tmigration.migrateTable(tname) or rc
  File "/root/trac_scripts/sqlitetopgscript/0.10/sqlite2pg", line 70, in migrateTable
    self.cleanTable(table)
  File "/root/trac_scripts/sqlitetopgscript/0.10/sqlite2pg", line 65, in cleanTable
    cur.execute(delete_from)
  File "/usr/lib/python2.4/site-packages/trac/db/util.py", line 48, in execute
    return self.cursor.execute(sql)
  File "/usr/lib/python2.4/site-packages/trac/db/util.py", line 48, in execute
    return self.cursor.execute(sql)
psycopg2.ProgrammingError: relation "system" does not exist
comment:5 Changed 18 years ago by
Sorry to hassle you, but is there anything else you need for this? If, as I suspect, there's something really simple I've done wrong, is there any chance you could post something up so that I know what to change? Thanks.
comment:6 Changed 18 years ago by
Sorry to be so unresponsive.
The good news: I can now duplicate the bug - not on my own server, but on a different one.
The bad news: This bug makes absolutely no sense and shouldn't be happening in the first place.
For some unknown reason, the script manages to create the tables fine in the new schema; however, when it goes to clean the table, it loses the schema. This isn't really a bug in the script so much as a bug in either the schema-handling code in trac proper (for which I'm to blame again :) ) or some weird combination of postgres + psycopg2 + ???
Just to rule out everything, what are your locale settings? Also, would it be possible to try using pyPgSQL?
So, unfortunately, there isn't a whole lot more you can do, unless you really want to start profiling the trac code. I am working on the issue, but I don't know how long it's going to take me.
comment:7 Changed 18 years ago by
No worries - I've put the locale stuff in below - hope that's what you wanted - if not, just let me know. I've tried installing pyPgSQL through Debian (can't get it to work at all - agggrrrr!). Will keep trying and post some more information up if successful.
LANG=en_GB
LANGUAGE=en_GB:en_US:en_GB:en
LC_CTYPE="en_GB"
LC_NUMERIC="en_GB"
LC_TIME="en_GB"
LC_COLLATE="en_GB"
LC_MONETARY="en_GB"
LC_MESSAGES="en_GB"
LC_PAPER="en_GB"
LC_NAME="en_GB"
LC_ADDRESS="en_GB"
LC_TELEPHONE="en_GB"
LC_MEASUREMENT="en_GB"
LC_IDENTIFICATION="en_GB"
LC_ALL=
comment:8 Changed 18 years ago by
OK, pyPgSQL works fine - well, at least as far as I can tell. I have a postgres-based site I'm using as a test, and that still all functions without psycopg, although of course this may be unrelated! The traceback I now get suggests the script requires psycopg to work...
Traceback (most recent call last):
  File "sqlitetopgscript/0.10/sqlite2pg", line 30, in ?
    from psycopg2 import ProgrammingError, IntegrityError
ImportError: No module named psycopg2
comment:9 follow-up: 10 Changed 18 years ago by
I'm interested in getting up to speed on Python so if you can point me in the right direction I can happily do some profiling if it will help.
comment:10 Changed 18 years ago by
Replying to nickstone:
I'm interested in getting up to speed on Python so if you can point me in the right direction I can happily do some profiling if it will help.
http://www.diveintopython.org is a great reference, though in general the mailing list or IRC are probably better places to ask.
comment:11 Changed 18 years ago by
OK - I think I have a handle on the problem: I'm using schemas within the Postgres DB. If I fudge the code a bit to put the schema name into the SQL in a few places, it works fine! E.g. currently the code produces "DELETE FROM <table>" for the clean-table routine, but it needs to be "DELETE FROM <schema>.<table>", etc.
Hope this helps
comment:12 Changed 18 years ago by
The trick here is the trac code. When you set ?schema=foo in the database option in trac.ini, you are telling trac to use the schema foo for all its operations. Trac then issues a SET search_path TO foo query. This makes it such that subsequent DELETE FROM <table> queries are equivalent to DELETE FROM foo.<table>.
The weird part is that for some reason, when you run the sqlite2pg script, it isn't getting set. This should be next to impossible due to how trac sets the search path. For some reason it is failing to set the search path, or something is happening to nullify the setting of the search path. This is where the problem lies.
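To make the mechanism concrete, here is a sketch of the two equivalent ways to reach a table inside a schema. It uses a stub cursor that merely records the SQL (a real run needs psycopg2 and a live server, so the stub class is purely hypothetical):

```python
class RecordingCursor:
    """Hypothetical stand-in for a DB-API cursor: records SQL instead of running it."""
    def __init__(self):
        self.executed = []

    def execute(self, sql):
        self.executed.append(sql)

cur = RecordingCursor()
# What trac does once per connection when trac.ini has ?schema=foo:
cur.execute("SET search_path TO foo")
# After that, an unqualified DELETE resolves inside schema foo...
cur.execute("DELETE FROM system")
# ...and is equivalent to the explicitly qualified form:
cur.execute("DELETE FROM foo.system")

# If the SET is somehow lost (the bug in this ticket), the unqualified DELETE
# falls back to the default search path and PostgreSQL reports:
#   relation "system" does not exist
print(cur.executed)
```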
comment:13 Changed 18 years ago by
I've tried a small fudge in the code which seems to do the job - I'm sure you'll want to find a better way of doing this, but in the meantime...
I've added the following code to Main just after the "if not opts.tlist: etc..." clause (so just before the for loop).
cnx = pgenv.get_db_cnx()
schema = cnx.schema or u'public'
cur = cnx.cursor()
cur.execute('SET search_path TO %s', (schema,))
The migration now works, which is good, but I suspect that there are better ways to do this.
comment:14 Changed 18 years ago by
OK, I was premature - the conversion only nearly works: for some reason every table apart from milestone gets copied across. I will keep looking into it and try to put some more information up if I have any luck.
comment:15 Changed 18 years ago by
Looks like the problem is that the table definition for the postgres version of milestone is different to the SQLite version - if somebody could point me to the appropriate place to log this as a trac bug, I will. FYI:
SQLite definition:
CREATE TABLE milestone (
    name text PRIMARY KEY,
    due integer,        -- Due date/time
    completed integer,  -- Completed date/time
    started integer,    -- Started date/time
    description text
)
postgresql def:
CREATE TABLE milestone (
    name text NOT NULL,
    due int4,
    completed int4,
    description text,
    CONSTRAINT milestone_pkey PRIMARY KEY (name)
) WITHOUT OIDS;
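The mismatch above (the started column exists only on the SQLite side) can be spotted mechanically. A sketch that lists the SQLite columns with the stdlib, using the milestone definition quoted above (the hard-coded postgres column list is transcribed from the second definition):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE milestone (
    name text PRIMARY KEY,
    due integer,
    completed integer,
    started integer,
    description text
)""")
# PRAGMA table_info yields one row per column: (cid, name, type, notnull, dflt, pk)
sqlite_cols = [row[1] for row in conn.execute("PRAGMA table_info(milestone)")]
pg_cols = ["name", "due", "completed", "description"]  # from the postgres definition
missing = [c for c in sqlite_cols if c not in pg_cols]
print(missing)  # -> ['started']
```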
comment:16 Changed 18 years ago by
IamLordV1ldemort has given me great insight into this problem, as he seems to be able to break everything I do ;)
Anyway, when we were mucking around with postgres database leaks in tracd, we found that when we turned off connection pooling, we ran into this bug.
For us that is what triggered it. I verified that this bug exists even when connection pooling is turned on, which is probably your case.
I hope to have a work around soon.
comment:17 Changed 18 years ago by
(In [1608])
- Refs #844: hopeful fix for the missing "system" table. Turns out that the ISOLATION_LEVEL_AUTOCOMMIT line was the culprit. I think I originally had that in there because when I first started the script I was taking a different approach to creating the environments. It's not needed now, and removing it fixes the issue for me. Needs testing before I can close the bug.
comment:19 Changed 18 years ago by
| Resolution: | → fixed |
|---|---|
| Status: | assigned → closed |
comment:20 follow-up: 21 Changed 18 years ago by
| Resolution: | fixed |
|---|---|
| Status: | closed → reopened |
Getting the following error while converting from sqlite to postgresql:
System: python 2.4.3-8 postgres 8.1.4-6 python-psycopg 2.0.4-1 trac 0.10
Traceback (most recent call last):
  File "./sqlite2pg", line 331, in ?
    Main(opts)
  File "./sqlite2pg", line 250, in Main
    rc = tmigration.migrateTable(tname) or rc
  File "./sqlite2pg", line 72, in migrateTable
    rc = self.default_copy(table)
  File "./sqlite2pg", line 82, in default_copy
    scur.execute(select_all)
  File "/var/lib/python-support/python2.4/trac/db/util.py", line 51, in execute
    return self.cursor.execute(sql)
  File "/var/lib/python-support/python2.4/trac/db/sqlite_backend.py", line 56, in execute
    args or [])
  File "/var/lib/python-support/python2.4/trac/db/sqlite_backend.py", line 48, in _rollback_on_error
    return function(self, *args, **kwargs)
pysqlite2.dbapi2.DatabaseError: file is encrypted or is not a database
comment:21 Changed 18 years ago by
| Resolution: | → fixed |
|---|---|
| Status: | reopened → closed |
Replying to tauk2mi@gmail.com:
Getting the following error while converting from sqlite to postgresql:
<snip>
pysqlite2.dbapi2.DatabaseError: file is encrypted or is not a database
This is not a bug in the conversion script. It is due to the fact that either your sqlite database is a sqlite2 db and your bindings are sqlite3, or the database is sqlite3 and the bindings are sqlite2. Because you're using pysqlite2, I'd say the former. Convert your sqlite database and then re-run the script.
Take a look at t.e.o for info on how to upgrade your sqlite database.
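For reference, the two file formats can be told apart from the first bytes of the file: SQLite 3 databases begin with the magic string "SQLite format 3\0", while SQLite 2 files begin with "** This file contains an SQLite 2.1 database **". A small sketch (the helper name is made up; the demo builds a throwaway SQLite 3 file with the stdlib):

```python
import os
import sqlite3
import tempfile

def sqlite_file_version(path):
    # Hypothetical helper: inspects the magic header to tell SQLite 2 from 3.
    with open(path, "rb") as f:
        header = f.read(48)
    if header.startswith(b"SQLite format 3\x00"):
        return 3
    if header.startswith(b"** This file contains an SQLite 2"):
        return 2
    return None  # not a SQLite database (or an encrypted one)

# Build a tiny SQLite 3 file to demonstrate:
fd, path = tempfile.mkstemp()
os.close(fd)
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x)")
conn.commit()
conn.close()
version = sqlite_file_version(path)
print(version)  # -> 3
os.remove(path)
```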
comment:23 Changed 14 years ago by
| Resolution: | fixed |
|---|---|
| Status: | closed → reopened |
| Summary: | Conversion failed → Conversion failed ...again |
| Trac Release: | 0.10 → 0.11 |
This is not fixed, as I have been able to reproduce it with the latest version (0.10) of the script from svn, running trac 0.11.7 and PostgreSQL 9.0. I can confirm that my setup is otherwise working, because I was able to use a dirty workaround: migrate the SQLite database using the public schema, then rename the schema afterwards to the desired name. Trac works fine with this; however, the migrate script is clearly not schema-compatible.
Attaching traceback:
Traceback (most recent call last):
  File "./sqlite2pg", line 336, in <module>
    sys.exit(main(sys.argv[1:]))
  File "./sqlite2pg", line 332, in main
    Main(opts)
  File "./sqlite2pg", line 244, in Main
    pgenv = getPostgreSQLEnvironment(opts)
  File "./sqlite2pg", line 220, in getPostgreSQLEnvironment
    x.environment_created()
  File "/mnt/cluster/.virtualenv/trac_0.11_py2.6/lib/python2.6/site-packages/trac/env.py", line 502, in environment_created
    vals)
  File "/mnt/cluster/.virtualenv/trac_0.11_py2.6/lib/python2.6/site-packages/trac/db/util.py", line 80, in executemany
    return self.cursor.executemany(sql_escape_percent(sql), args)
  File "/mnt/cluster/.virtualenv/trac_0.11_py2.6/lib/python2.6/site-packages/trac/db/util.py", line 80, in executemany
    return self.cursor.executemany(sql_escape_percent(sql), args)
psycopg2.ProgrammingError: relation "component" does not exist
LINE 1: INSERT INTO component (name,owner) VALUES (E'component1',E's...
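The rename-afterwards workaround described in this comment boils down to a single DDL statement. Since running it needs a live PostgreSQL server, the sketch below only builds the SQL; the helper function and its naive quoting are illustrative (real code should use proper identifier quoting, such as psycopg2 provides):

```python
def rename_schema_sql(old, new):
    # Illustrative helper: builds the DDL for the rename workaround.
    # Naive double-quoting; assumes the names contain no embedded quotes.
    return 'ALTER SCHEMA "%s" RENAME TO "%s"' % (old, new)

# Migrate into public first, then rename it to the schema trac.ini expects
# ("trac" here is a stand-in for the desired schema name):
print(rename_schema_sql("public", "trac"))
# -> ALTER SCHEMA "public" RENAME TO "trac"
```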
comment:24 Changed 12 years ago by
| Resolution: | → wontfix |
|---|---|
| Status: | reopened → closed |
This plugin is deprecated. See the TracMigratePlugin.
Replying to anonymous:
Since I'm lazy, I'll blame those on the burndown plugin also.
The way the plugin works is that it populates the PostgreSQL tables using the same method that Trac uses when one runs initenv. Because of this, all the default pages are copied into the PostgreSQL database. So, to get an exact copy of the SQLite data, it then wipes all info from the tables before copying the data over from SQLite. The only reason I can think of that this would fail is permissions on the PostgreSQL database. However, I find that unlikely.
More info would be helpful. If you have any tracebacks, that would be the best. Also, can you tell me what versions you are using of the following: