Fixed (after many years) NAS-2870, whereby all generated revisit records had badly formatted WARC-Payload-Digest fields and were therefore invalid according to the WARC standard.
Added 3 new link extractors (from the British Library) to Heritrix:
org.archive.modules.extractor.ExtractorRobotsTxt
org.archive.modules.extractor.ExtractorSitemap
org.archive.modules.extractor.ExtractorJson
Note that ExtractorSitemap deviates slightly in functionality from the British Library version: it is considerably more lenient both in what it identifies as a sitemap and in which URLs it accepts within sitemaps.
Added caching of crawl logs when Hadoop is used for processing
Added caching of metadata-file indexes when Hadoop is used for processing
Added retry functionality to improve the robustness of the WarcRecordClient
Fixed a bug whereby files uploaded from a harvester were not being deleted when the Bitmagasin backend is in use
Added retry-handling to Bitmagasin uploads via two new settings keys under settings.common.arcrepositoryClient.bitrepository
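The release notes do not name the two new keys. Purely as an illustration of where they live in the settings hierarchy, a fragment of this general shape could be used (both key names and values below are hypothetical; consult the settings reference for the actual names):

```xml
<settings>
  <common>
    <arcrepositoryClient>
      <bitrepository>
        <!-- Hypothetical key names, shown only to illustrate the
             location of the two new retry settings for uploads. -->
        <retryCount>3</retryCount>
        <retryWaitSeconds>60</retryWaitSeconds>
      </bitrepository>
    </arcrepositoryClient>
  </common>
</settings>
```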
Added support for uberized jobs, optimised for small tasks, in Hadoop via
settings.common.hadoop.mapred.enableUbertask
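Assuming the usual mapping of dotted settings keys to nested XML elements, this key would be set in the settings file roughly as follows (the boolean value shown is an assumed example):

```xml
<settings>
  <common>
    <hadoop>
      <mapred>
        <!-- Run small Hadoop jobs "uberized", i.e. entirely inside the
             ApplicationMaster, avoiding per-task container overhead. -->
        <enableUbertask>true</enableUbertask>
      </mapred>
    </hadoop>
  </common>
</settings>
```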
Added HDFS caching functionality to Hadoop jobs. When this feature is enabled, any local files passed as input to a Hadoop job are first copied into HDFS and cached for future use. This should create savings when the same file is processed multiple times, as is often the case for metadata files. This functionality is controlled by the following parameters
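The copy-once-then-reuse idea behind this caching can be sketched as follows. This is an illustrative sketch only, not the actual implementation: a Map stands in for the HDFS cache directory, and the path prefix is invented.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of cache-on-first-use: a local input file is "copied into HDFS"
// the first time it is seen; later jobs reuse the cached copy.
class HdfsCacheSketch {
    private final Map<String, String> cache = new HashMap<>();
    private int copies = 0;  // how many actual "copies into HDFS" happened

    /** Return the HDFS path for a local input file, copying it on a miss. */
    public String getOrCache(String localPath) {
        return cache.computeIfAbsent(localPath, p -> {
            copies++;                       // cache miss: simulate the copy
            return "hdfs://cache/" + p;     // path handed to the Hadoop job
        });
    }

    public int copyCount() { return copies; }
}
```

Repeated lookups of the same file return the same cached path without copying again, which is where the savings for frequently reprocessed metadata files come from.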
Improved the performance of the GUI functionality associated with the button "Browse only relevant crawl-log lines for this domain".
The new caching functionality for crawl logs and metadata indexes stores data in a directory specified by the setting
settings.common.webinterface.metadata_cache_dir
whose default value is "metadata_cache" (relative to the current working directory where the GUIApplication is started). At present there is no automatic cleaning of this directory.
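Assuming the usual mapping of dotted settings keys to nested XML elements, the cache directory would be configured roughly as follows (the value shown is the stated default):

```xml
<settings>
  <common>
    <webinterface>
      <!-- Directory for cached crawl logs and metadata-file indexes.
           Relative paths resolve against the GUIApplication working
           directory. Note: the directory is never cleaned automatically. -->
      <metadata_cache_dir>metadata_cache</metadata_cache_dir>
    </webinterface>
  </common>
</settings>
```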
Highlights in 7.0
NetarchiveSuite 7.0 introduces an entirely new backend storage and mass-processing implementation based on software from bitrepository.org and Hadoop. The new functionality is enabled by defining the following key in the settings file for all applications:
The older arcrepositoryClient implementation dk.netarkivet.archive.arcrepository.distribute.JMSArcRepositoryClient will be deprecated in future releases. (The developers are unaware of any other organisations currently using the older client, but please contact us if you still rely on it.)