Running Compaction on Disk Store Log Files
When a cache operation is added to a disk store, any preexisting operation record for the same entry becomes obsolete, and Pivotal GemFire marks it as garbage. For example, when you create an entry, the create operation is added to the store. If you update the entry later, the update operation is added and the create operation becomes garbage. GemFire does not remove garbage records as it goes, but it tracks the percentage of garbage in each operation log, and provides mechanisms for removing garbage to compact your log files.
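The garbage-tracking bookkeeping can be illustrated with a short, self-contained Java sketch. This is a toy model with hypothetical names, not GemFire internals: each new record for a key supersedes the key's earlier record, which then counts toward the log's garbage percentage.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of per-oplog garbage tracking; hypothetical names, not GemFire code.
public class OplogModel {
    private final Map<String, Integer> latestRecord = new HashMap<>();
    private int totalRecords = 0;
    private int garbageRecords = 0;

    // Writing an entry adds a record; any earlier record for the same key
    // becomes garbage, just as an update supersedes the original create.
    public void write(String key) {
        Integer superseded = latestRecord.put(key, totalRecords);
        totalRecords++;
        if (superseded != null) {
            garbageRecords++;
        }
    }

    public int garbagePercent() {
        return totalRecords == 0 ? 0 : (100 * garbageRecords) / totalRecords;
    }

    // Analogue of the compaction-threshold check that drives auto-compaction.
    public boolean needsCompaction(int thresholdPercent) {
        return garbagePercent() > thresholdPercent;
    }
}
```

In this model, as in the description above, garbage is only counted, never removed; removal is what compaction does.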
GemFire compacts an old operation log by copying all non-garbage records into the current log and discarding the old files. As with logging, oplogs are rolled as needed during compaction to stay within the max oplog setting.
You can configure the system to automatically compact any closed operation log when its garbage content reaches a certain percentage. You can also manually request compaction for online and offline disk stores. For the online disk store, the current operation log is not available for compaction, no matter how much garbage it contains.
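As an illustration, a disk store's compaction behavior can be declared in cache.xml with the auto-compact, compaction-threshold, and allow-force-compaction attributes. The values below are examples only; the compaction threshold is a percentage of garbage content, and its default is 50.

```xml
<!-- Example only: the store name, directory, and sizes are illustrative. -->
<disk-store name="store1"
            auto-compact="true"
            compaction-threshold="50"
            allow-force-compaction="false"
            max-oplog-size="1024">
  <disk-dirs>
    <disk-dir>diskStoreDir</disk-dir>
  </disk-dirs>
</disk-store>
```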
Log File Compaction for the Online Disk Store
Offline compaction runs in essentially the same way as online compaction, but without incoming cache operations. Also, because there is no current open log, the compaction creates a new one to get started.
Run Online Compaction
- Automatic compaction. When auto-compact is true, GemFire automatically compacts each oplog when its garbage content surpasses the compaction-threshold. This takes cycles from your other operations, so you may want to disable this and only do manual compaction, to control the timing.
- Manual compaction. To run manual compaction:
    - Set the disk store attribute allow-force-compaction to true. This causes GemFire to maintain extra data about the files so it can compact on demand. This is disabled by default to save space. You can run manual online compaction at any time while the system is running. Oplogs eligible for compaction based on the compaction-threshold are compacted into the current oplog.
    - Run manual compaction as needed. GemFire has two types of manual compaction:
        - Compact the logs for a single online disk store through the API, with the forceCompaction method. This method first rolls the oplogs and then compacts them.
        - Using gfsh, compact a disk store in a distributed system with the compact disk-store command. Examples:

          gfsh>compact disk-store --name=Disk1
          gfsh>compact disk-store --name=Disk1 --group=MemberGroup1,MemberGroup2

          Note: You need to be connected to a JMX Manager in gfsh to run this command.
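The roll-then-compact behavior of forceCompaction can be modeled in a few lines of self-contained Java. This is a sketch only; the class and method names are our own, not the GemFire API: closed oplogs have their live records copied forward into the current oplog, and the old files are discarded.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of roll-then-compact; hypothetical names, not the GemFire API.
public class CompactionModel {
    // Each oplog maps entry key -> the latest value recorded in that oplog.
    private final List<Map<String, String>> closedOplogs = new ArrayList<>();
    private Map<String, String> currentOplog = new LinkedHashMap<>();

    public void write(String key, String value) {
        currentOplog.put(key, value);
    }

    // Rolling closes the current oplog and opens a fresh one.
    public void roll() {
        closedOplogs.add(currentOplog);
        currentOplog = new LinkedHashMap<>();
    }

    // Roll first, then copy every non-garbage (non-superseded) record into
    // the current oplog and discard the old files.
    public void forceCompaction() {
        roll();
        for (Map<String, String> oplog : closedOplogs) {
            // Later oplogs overwrite earlier records, so only the live
            // version of each entry survives the copy.
            currentOplog.putAll(oplog);
        }
        closedOplogs.clear();
    }

    public int closedOplogCount() {
        return closedOplogs.size();
    }

    public String read(String key) {
        if (currentOplog.containsKey(key)) {
            return currentOplog.get(key);
        }
        for (int i = closedOplogs.size() - 1; i >= 0; i--) {
            if (closedOplogs.get(i).containsKey(key)) {
                return closedOplogs.get(i).get(key);
            }
        }
        return null;
    }
}
```

The model simplifies by compacting every closed oplog; the real online compaction only touches oplogs whose garbage content qualifies them under the compaction-threshold.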
Run Offline Compaction
Offline compaction is a manual process. All log files are compacted as much as possible, regardless of how much garbage they hold. Offline compaction creates new log files for the compacted log records.
Using gfsh, compact individual offline disk stores with the compact offline-disk-store command:
gfsh>compact offline-disk-store --name=Disk2 --disk-dirs=/Disks/Disk2

gfsh>compact offline-disk-store --name=Disk2 --disk-dirs=/Disks/Disk2 --max-oplog-size=512 -J=-Xmx1024m
You must provide all of the directories in the disk store. If no oplog max size is specified, GemFire uses the system default.
Offline compaction can take a lot of memory. If you get a java.lang.OutOfMemoryError while running it, you may need to increase your heap size with the -J=-Xmx parameter.
Performance Benefits of Manual Compaction
You can improve performance during busy times if you disable automatic compaction and run your own manual compaction during lighter system load or during downtimes. For example, you could run the API call after your application performs a large set of data operations, or run the compact disk-store command every night when system use is very low.
To follow a strategy like this, you need to set aside enough disk space to accommodate all non-compacted disk data. You might need to increase system monitoring to make sure you do not overrun your disk space. You may be able to run only offline compaction. If so, you can set allow-force-compaction to false and avoid storing the information required for manual online compaction.
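If you script the nightly run yourself, one small reusable piece of logic is computing how long to wait until the next low-load window. A minimal sketch, assuming a helper class of our own invention (not part of GemFire):

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;

// Hypothetical helper (not a GemFire API) for scheduling a nightly
// manual-compaction task: compute the wait until the next occurrence
// of a chosen low-load time of day.
public class CompactionScheduler {
    public static Duration delayUntil(LocalDateTime now, LocalTime runAt) {
        LocalDateTime next = now.toLocalDate().atTime(runAt);
        if (!next.isAfter(now)) {
            next = next.plusDays(1); // today's slot has passed; run tomorrow
        }
        return Duration.between(now, next);
    }
}
```

The resulting delay could feed a ScheduledExecutorService task that invokes your compaction step (for example, shelling out to gfsh); a plain cron job is an equally valid choice.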
Directory Size Limits
Reaching a directory size limit has different results, depending on the type of compaction:
- For automatic compaction, the system logs a warning, but does not stop.
- For manual compaction, the operation stops and returns a DiskAccessException to the calling process, reporting that the system has run out of disk space.
Example Compaction Run
bash-2.05$ ls -ltra backupDirectory
total 28
-rw-rw-r--   1 user users     3 Apr  7 14:56 BACKUPds1_3.drf
-rw-rw-r--   1 user users    25 Apr  7 14:56 BACKUPds1_3.crf
drwxrwxr-x   3 user users  1024 Apr  7 15:02 ..
-rw-rw-r--   1 user users  7085 Apr  7 15:06 BACKUPds1.if
-rw-rw-r--   1 user users    18 Apr  7 15:07 BACKUPds1_4.drf
-rw-rw-r--   1 user users  1070 Apr  7 15:07 BACKUPds1_4.crf
drwxrwxr-x   2 user users   512 Apr  7 15:07 .

bash-2.05$ gfsh
gfsh>validate offline-disk-store --name=ds1 --disk-dirs=backupDirectory
/root: entryCount=6
/partitioned_region entryCount=1 bucketCount=10
Disk store contains 12 compactable records.
Total number of region entries in this disk store is: 7

gfsh>compact offline-disk-store --name=ds1 --disk-dirs=backupDirectory
Offline compaction removed 12 records.
Total number of region entries in this disk store is: 7

gfsh>exit
bash-2.05$ ls -ltra backupDirectory
total 16
-rw-rw-r--   1 user users     3 Apr  7 14:56 BACKUPds1_3.drf
-rw-rw-r--   1 user users    25 Apr  7 14:56 BACKUPds1_3.crf
drwxrwxr-x   3 user users  1024 Apr  7 15:02 ..
-rw-rw-r--   1 user users     0 Apr  7 15:08 BACKUPds1_5.drf
-rw-rw-r--   1 user users   638 Apr  7 15:08 BACKUPds1_5.crf
-rw-rw-r--   1 user users  2788 Apr  7 15:08 BACKUPds1.if
drwxrwxr-x   2 user users   512 Apr  7 15:09 .
bash-2.05$