Opened 18 years ago
Last modified 16 years ago
#639 closed enhancement
Large p4 depots kill virtual memory (patch included) — at Initial Version
Reported by: | | Owned by: | Lewis Baker
---|---|---|---
Priority: | normal | Component: | PerforcePlugin
Severity: | normal | Keywords: | needinfo
Cc: | | Trac Release: | 0.10
Description
If you do an initial sync against a large depot (~50,000 changelists), the Perforce plugin will use up virtual memory like crazy (I stopped once it reached about 1.5 GB).

A small modification to the sync procedure solves the problem: fetch changelists in chunks of 1000. Here's my local mod:
    # Override sync to precache data to make it run faster
    def sync(self):
        youngest_stored = self.repos.get_youngest_rev_in_cache(self.db)
        if youngest_stored is None:
            youngest_stored = '0'

        while youngest_stored != str(self.repos.youngest_rev):
            # Need to cache all information for changes since the last
            # sync operation, at most 1000 changelists at a time.
            youngest_to_get = self.repos.youngest_rev
            if youngest_to_get > int(youngest_stored) + 1000:
                youngest_to_get = int(youngest_stored) + 1000

            # Obtain a list of changes since the last cache sync
            from p4trac.repos import _P4ChangesOutputConsumer
            output = _P4ChangesOutputConsumer(self.repos._repos)
            self.repos._connection.run(
                'changes', '-l', '-s', 'submitted',
                '@>%s,%d' % (youngest_stored, youngest_to_get),
                output=output)
            if output.errors:
                from p4trac.repos import PerforceError
                raise PerforceError(output.errors)

            changes = output.changes
            changes.reverse()

            # Perform the precaching of the file history for files in
            # these changes.
            self.repos._repos.precacheFileHistoryForChanges(changes)

            youngest_stored = str(youngest_to_get)

        # Call on to the default implementation now that we've cached
        # enough information to make it run a bit faster.
        CachedRepository.sync(self)
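The core of the patch is the bounded-range loop: instead of asking the server for every change since the last sync in one request, it walks the revision range in fixed-size windows. As a standalone sketch of that logic (chunk_ranges is a hypothetical helper for illustration, not part of the plugin):

```python
def chunk_ranges(youngest_stored, youngest_rev, chunk_size=1000):
    """Yield (start, end] revision ranges covering youngest_stored..youngest_rev
    in windows of at most chunk_size, mirroring the loop in the patch."""
    current = youngest_stored
    while current < youngest_rev:
        upper = min(current + chunk_size, youngest_rev)
        yield (current, upper)
        current = upper

# For a depot with 2500 changelists, starting from an empty cache:
print(list(chunk_ranges(0, 2500)))
# [(0, 1000), (1000, 2000), (2000, 2500)]
```

Each window's results can be cached and released before the next request, so peak memory stays proportional to the chunk size rather than to the depot's total history.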