Implementation doesn't scale to huge production environments
|Reported by:||anonymous||Owned by:||Markus Pelkonen|
|Cc:||Erik Andersson||Trac Release:||0.11|
I've installed it in my Trac test environment, with a production database, but it seems unreasonably slow, and I just wanted to make sure I'm not using it wrong. It always seems to loop through the entire ticket_change table even though I specify a time interval. Does it look up all entries in ticket_change and then filter out what's not of interest?
I knew this would come back to bite me at some point :) Unfortunately, none of my own projects using this plugin ever grew big enough (the biggest had only ~600 tickets).
IIRC, the implementation crawls the whole history and builds the "version history" on the fly. With a lot of tickets and ticket changes, it will definitely slow down. And if the graph needs only a small portion of the changes, this crawling should be avoided.
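To illustrate the difference, here is a minimal sketch (not the plugin's actual code) of fetching only the changes inside the requested interval with a SQL `WHERE` clause, rather than crawling the whole table and filtering in Python. The column names match Trac's real `ticket_change` schema, but the in-memory data and the `changes_in_interval` helper are made up for illustration:

```python
import sqlite3

# Stand-in for Trac's ticket_change table (column names follow Trac's
# schema; the rows here are invented example data).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ticket_change (
    ticket INTEGER, time INTEGER, author TEXT,
    field TEXT, oldvalue TEXT, newvalue TEXT)""")
rows = [
    (1, 100, "alice", "status", "new", "assigned"),
    (1, 200, "bob",   "status", "assigned", "closed"),
    (2, 900, "alice", "status", "new", "closed"),
]
conn.executemany("INSERT INTO ticket_change VALUES (?,?,?,?,?,?)", rows)

def changes_in_interval(db, start, end):
    """Let the database return only changes with start <= time < end,
    instead of scanning every row of ticket_change in Python."""
    cur = db.execute(
        "SELECT ticket, time, field, oldvalue, newvalue "
        "FROM ticket_change WHERE time >= ? AND time < ? ORDER BY time",
        (start, end))
    return cur.fetchall()

print(changes_in_interval(conn, 150, 1000))
```

With an index on the `time` column, this kind of query stays fast even when the table holds years of history.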
At some point I had the idea of storing snapshots of ticket data. Graph rendering would then take the closest snapshot and build the ticket history only from that point. E.g. if there were a snapshot every week, then rendering a 2-week graph from the middle of a 4-year project would only hit ticket changes from a 3-week period at worst.
This would require its own table (for the snapshot data) and more thought about exceptional cases (e.g. which fields to store in a snapshot, how to handle new fields introduced in the middle of a project, etc.).
However, deciding how often to store snapshots would be interesting. Should it be a static period, or could it be something like "every N ticket changes", with N a configurable value (defaulting to 1000 or so)?
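A rough sketch of the "every N changes" variant, assuming changes come sorted by time; the function names, the tuple shape `(time, ticket, field, value)`, and the interval of 3 are all hypothetical, not the plugin's design:

```python
import bisect
from copy import deepcopy

def build_snapshots(changes, every):
    """Replay all changes once, keeping a full copy of the ticket state
    after every `every` changes.  Returns [(time, state_copy), ...]."""
    state, snapshots = {}, []
    for i, (t, ticket, field, value) in enumerate(changes, 1):
        state.setdefault(ticket, {})[field] = value
        if i % every == 0:
            snapshots.append((t, deepcopy(state)))
    return snapshots

def state_at(changes, snapshots, when):
    """Start from the latest snapshot taken at or before `when`, then
    replay only the changes between that snapshot and `when`."""
    times = [t for t, _ in snapshots]
    idx = bisect.bisect_right(times, when) - 1
    if idx >= 0:
        snap_time, state = snapshots[idx][0], deepcopy(snapshots[idx][1])
    else:
        snap_time, state = -1, {}  # no usable snapshot: replay from scratch
    for t, ticket, field, value in changes:
        if snap_time < t <= when:
            state.setdefault(ticket, {})[field] = value
    return state

# Invented example: five changes, snapshot every 3rd change.
changes = [(1, 1, "status", "new"), (2, 1, "status", "assigned"),
           (3, 2, "status", "new"), (4, 1, "status", "closed"),
           (5, 2, "status", "closed")]
snaps = build_snapshots(changes, every=3)
print(state_at(changes, snaps, 4))
```

The replay loop in `state_at` would of course be a time-bounded database query in practice, so the cost of rendering depends only on the snapshot interval, not on the project's age.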
Also, I should check whether the implementation unnecessarily processes data outside the requested interval. E.g. if only the last two weeks of a huge dataset are of interest, it should still render quite quickly.
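The check I mean could look something like this sketch: when the changes arrive sorted by time, everything after the graph's end time can be skipped with an early break instead of being replayed. The `replay_until` helper and the tuple shape are made up for illustration:

```python
def replay_until(changes, end):
    """Replay time-sorted (time, ticket, field, value) changes, stopping
    as soon as we pass `end`; nothing after the graph's end time is touched."""
    state = {}
    for t, ticket, field, value in changes:
        if t > end:
            break  # the rest of the (sorted) history is irrelevant here
        state.setdefault(ticket, {})[field] = value
    return state

# Invented example data, sorted by time.
changes = [(1, 1, "status", "new"),
           (5, 1, "status", "closed"),
           (9, 2, "status", "new")]
print(replay_until(changes, 5))
```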