--- OLD/man1/git-fast-import.1 Thu Jan 1 00:00:00 1970
+++ NEW/man1/git-fast-import.1 Thu Jan 1 00:00:00 1970
@@ -41,7 +41,7 @@
 .PP
 \-\-max\-pack\-size=
 .RS 4
-Maximum size of each output packfile, expressed in MiB\. The default is 4096 (4 GiB) as that is the maximum allowed packfile size (due to file format limitations)\. Some importers may wish to lower this, such as to ensure the resulting packfiles fit on CDs\.
+Maximum size of each output packfile, expressed in MB\. The default is 4096 (4 GB) as that is the maximum allowed packfile size (due to file format limitations)\. Some importers may wish to lower this, such as to ensure the resulting packfiles fit on CDs\.
 .RE
 .PP
 \-\-depth=
@@ -67,7 +67,7 @@
 .PP
 \-\-export\-pack\-edges=
 .RS 4
-After creating a packfile, print a line of data to listing the filename of the packfile and the last commit on each branch that was written to that packfile\. This information may be useful after importing projects whose total object set exceeds the 4 GiB packfile limit, as these commits can be used as edge points during calls to
+After creating a packfile, print a line of data to listing the filename of the packfile and the last commit on each branch that was written to that packfile\. This information may be useful after importing projects whose total object set exceeds the 4 GB packfile limit, as these commits can be used as edge points during calls to
 \fIgit\-pack\-objects\fR\.
 .RE
 .PP
@@ -523,7 +523,7 @@
 .RE
 This command is extremely useful if the frontend does not know (or does not care to know) what files are currently on the branch, and therefore cannot generate the proper filedelete commands to update the content\.
 .sp
-Issuing a filedeleteall followed by the needed filemodify commands to set the correct content will produce the same results as sending only the needed filemodify and filedelete commands\. The filedeleteall approach may however require fast\-import to use slightly more memory per active branch (less than 1 MiB for even most large projects); so frontends that can easily obtain only the affected paths for a commit are encouraged to do so\.
+Issuing a filedeleteall followed by the needed filemodify commands to set the correct content will produce the same results as sending only the needed filemodify and filedelete commands\. The filedeleteall approach may however require fast\-import to use slightly more memory per active branch (less than 1 MB for even most large projects); so frontends that can easily obtain only the affected paths for a commit are encouraged to do so\.
 .sp
 .RE
 .SS "mark"
@@ -682,11 +682,11 @@
 LF?
 .fi
 .RE
-Note that fast\-import automatically switches packfiles when the current packfile reaches \-\-max\-pack\-size, or 4 GiB, whichever limit is smaller\. During an automatic packfile switch fast\-import does not update the branch refs, tags or marks\.
+Note that fast\-import automatically switches packfiles when the current packfile reaches \-\-max\-pack\-size, or 4 GB, whichever limit is smaller\. During an automatic packfile switch fast\-import does not update the branch refs, tags or marks\.
 .sp
 As a checkpoint can require a significant amount of CPU time and disk IO (to compute the overall pack SHA\-1 checksum, generate the corresponding index file, and update the refs) it can easily take several minutes for a single checkpoint command to complete\.
 .sp
-Frontends may choose to issue checkpoints during extremely large and long running imports, or when they need to allow another Git process access to a branch\. However given that a 30 GiB Subversion repository can be loaded into Git through fast\-import in about 3 hours, explicit checkpointing may not be necessary\.
+Frontends may choose to issue checkpoints during extremely large and long running imports, or when they need to allow another Git process access to a branch\. However given that a 30 GB Subversion repository can be loaded into Git through fast\-import in about 3 hours, explicit checkpointing may not be necessary\.
 .sp
 The LF after the command is optional (it used to be required)\.
 .sp
@@ -867,7 +867,7 @@
 There are a number of factors which affect how much memory fast\-import requires to perform an import\. Like critical sections of core Git, fast\-import uses its own memory allocators to amortize any overheads associated with malloc\. In practice fast\-import tends to amortize any malloc overheads to 0, due to its use of large block allocations\.
 .sp
 .SS "per object"
-fast\-import maintains an in\-memory structure for every object written in this execution\. On a 32 bit system the structure is 32 bytes, on a 64 bit system the structure is 40 bytes (due to the larger pointer sizes)\. Objects in the table are not deallocated until fast\-import terminates\. Importing 2 million objects on a 32 bit system will require approximately 64 MiB of memory\.
+fast\-import maintains an in\-memory structure for every object written in this execution\. On a 32 bit system the structure is 32 bytes, on a 64 bit system the structure is 40 bytes (due to the larger pointer sizes)\. Objects in the table are not deallocated until fast\-import terminates\. Importing 2 million objects on a 32 bit system will require approximately 64 MB of memory\.
 .sp
 The object table is actually a hashtable keyed on the object name (the unique SHA\-1)\. This storage configuration allows fast\-import to reuse an existing or already written object and avoid writing duplicates to the output packfile\. Duplicate blobs are surprisingly common in an import, typically due to branch merges in the source\.
 .sp
@@ -877,7 +877,7 @@
 .SS "per branch"
 Branches are classified as active and inactive\. The memory usage of the two classes is significantly different\.
 .sp
-Inactive branches are stored in a structure which uses 96 or 120 bytes (32 bit or 64 bit systems, respectively), plus the length of the branch name (typically under 200 bytes), per branch\. fast\-import will easily handle as many as 10,000 inactive branches in under 2 MiB of memory\.
+Inactive branches are stored in a structure which uses 96 or 120 bytes (32 bit or 64 bit systems, respectively), plus the length of the branch name (typically under 200 bytes), per branch\. fast\-import will easily handle as many as 10,000 inactive branches in under 2 MB of memory\.
 .sp
 Active branches have the same overhead as inactive branches, but also contain copies of every tree that has been recently modified on that branch\. If subtree include has not been modified since the branch became active, its contents will not be loaded into memory, but if subtree src has been modified by a commit since the branch became active, then its contents will be loaded in memory\.
 .sp
@@ -891,7 +891,7 @@
 .SS "per active file entry"
 Files (and pointers to subtrees) within active trees require 52 or 64 bytes (32/64 bit platforms) per entry\. To conserve space, file and tree names are pooled in a common string table, allowing the filename \(lqMakefile\(rq to use just 16 bytes (after including the string header overhead) no matter how many times it occurs within the project\.
 .sp
-The active branch LRU, when coupled with the filename string pool and lazy loading of subtrees, allows fast\-import to efficiently import projects with 2,000+ branches and 45,114+ files in a very limited memory footprint (less than 2\.7 MiB per active branch)\.
+The active branch LRU, when coupled with the filename string pool and lazy loading of subtrees, allows fast\-import to efficiently import projects with 2,000+ branches and 45,114+ files in a very limited memory footprint (less than 2\.7 MB per active branch)\.
 .sp
 .SH "AUTHOR"
 Written by Shawn O\. Pearce \.
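
For illustration only, a minimal fast-import session exercising the options and stream commands touched by the hunks above might look like the following sketch. The file names (dump.fi, edges.log), the pack-size value, the committer identity, and the file contents are hypothetical placeholders, not values taken from the patch; the stream grammar (commit, committer, data, deleteall, M ... inline, checkpoint) is the one described in git-fast-import(1).

    # Hypothetical invocation: cap packfiles and log pack edge commits.
    git fast-import --max-pack-size=512 --export-pack-edges=edges.log <dump.fi

    # Example contents of dump.fi: one commit that replaces the whole
    # branch contents (deleteall + filemodify), then a checkpoint.
    commit refs/heads/master
    committer Example Importer <importer@example.com> 1234567890 +0000
    data 16
    import snapshot
    deleteall
    M 644 inline README
    data 6
    hello
    checkpoint

Each data count gives the exact byte length of the content that follows (16 bytes for "import snapshot" plus its newline, 6 bytes for "hello" plus its newline), which is why the counts must be recomputed if the placeholder text is changed.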