Prerequisites

This test requires a restart of infrastructure components (the database and the JMS broker). These steps must be coordinated with the other testers.

Resubmit jobs after a restart, restart failed jobs, upload old files at harvester restart, and verify that the scheduler skips old jobs.

Under migration from the old twiki test, TEST6: Err-1 (Resubmit jobs after shutdown, restart of failed jobs, upload of old files at harvester restart, scheduler skips old jobs)

Install and Start System

On test@kb-prod-udv-001.kb.dk:


export TESTX=TEST6
export PORT=807?
export MAILRECEIVERS=foo@bar.dk
stop_test.sh
cleanup_all_test.sh
prepare_test.sh deploy_config_database.xml
install_test.sh
start_test.sh

Check that the GUI is available and that the System Status does not show any startup problems.
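The availability check can be scripted if preferred. This is a sketch of our own, not part of the test scripts; it assumes curl is available and that the GUI answers over plain HTTP on the host and port used elsewhere in this test:

```shell
# Hypothetical helper: report whether the GUI answers with HTTP 200.
check_gui() {
    local url="$1" code
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    if [ "$code" = "200" ]; then
        echo "GUI up"
    else
        echo "GUI not responding (HTTP $code)"
    fi
}
# Example: check_gui "http://kb-test-adm-001.kb.dk:${PORT}/HarvestDefinition"
```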

Start a selective harvest

Start an hourly selective harvest for the 'netarkivet.dk' domain.

Create a new template:

Modify domain templates 

Make a new snapshot harvest definition with a name you can remember.

Check that at least one file has been uploaded.

  1. Stop the system after the first ARC file has been uploaded
    1. Go to the harvest status page at http://kb-test-adm-001.kb.dk:8076/HarvestDefinition and find the job for kum.dk.
    2. In the System Overview, find the harvester running the job. The information will appear in the log column when the job has been started.
    3. Run the attached script to stop the test system after the first ARC file has been uploaded. Note that the script needs to be updated with the relevant job number and harvester.
  2. Check that the correct file has been generated.
    1. Log on to the harvester, e.g. ssh kb-test-har-001.
    2. Verify that a metadata file exists at ~/TEST?/harvester_low/{crawldir}/metadata/
    3. Copy the file to /tmp
  3. Create a fake crawl dir (failing, see )
    1. ssh sb-test-har-001.statsbiblioteket.dk
    2. cd TEST6/harvester_high
    3. cp -r ~netarkiv/testdata/TEST6/23-fakejobdir .
  4. Restart the test system more than 3 hours after the shutdown.
  5. Verify the restarted system. On kb-test-adm-001:
    1. Check the log for warnings and errors.

      cd /home/test/$TESTX/log/
      grep SEVERE *.log.0
      grep WARNING *.log.0

      The following entries are normal: 

      arcrepositoryapplication0.log.0:WARNING: AdminDataFile (./admin.data) was not found.
      guiapplication0.log.0:WARNING: Refusing to schedule harvest definition 'netarkivet' in the past. Skipped 18 events. Old nextDate was Mon Dec 18 14:29:30 CET 2006 new nextDate is Tue Dec 19 09:29:30 CET 2006
      GUIApplication0.log.0:WARNING: Job 2 failed: HarvestErrors = dk.netarkivet.common.exceptions.IOFailure: Crawl probably interrupted by shutdown of HarvestController

      The following warning may occur after a while: 

      WARNING: Error processing message '
      Class:                  com.sun.messaging.jmq.jmsclient.ObjectMessageImpl
      getJMSMessageID():      ID:40-130.225.27.140(d2:1:3:b1:10:de)-46478-1197902260630
      getJMSTimestamp():      1197902260630
      getJMSCorrelationID():  null
      JMSReplyTo:             null
      JMSDestination:         TEST6_COMMON_THE_SCHED
      getJMSDeliveryMode():   PERSISTENT
      getJMSRedelivered():    false
      getJMSType():           null
      getJMSExpiration():     0
      getJMSPriority():       4
      Properties:             null'
      dk.netarkivet.common.exceptions.UnknownID: Job id 23 is not known in persistent storage
              at dk.netarkivet.harvester.datamodel.JobDBDAO.read(JobDBDAO.java:294)
              at dk.netarkivet.harvester.scheduler.HarvestSchedulerMonitorServer.processCrawlStatusMessage(HarvestSchedulerMonitorServer.java:103)
              at dk.netarkivet.harvester.scheduler.HarvestSchedulerMonitorServer.visit(HarvestSchedulerMonitorServer.java:285)
              at dk.netarkivet.harvester.harvesting.distribute.CrawlStatusMessage.accept(CrawlStatusMessage.java:133)
              at dk.netarkivet.harvester.distribute.HarvesterMessageHandler.onMessage(HarvesterMessageHandler.java:67)
              at com.sun.messaging.jmq.jmsclient.MessageConsumerImpl.deliverAndAcknowledge(MessageConsumerImpl.java:330)
              at com.sun.messaging.jmq.jmsclient.MessageConsumerImpl.onMessage(MessageConsumerImpl.java:265)
              at com.sun.messaging.jmq.jmsclient.SessionReader.deliver(SessionReader.java:102)
              at com.sun.messaging.jmq.jmsclient.ConsumerReader.run(ConsumerReader.java:174)
              at java.lang.Thread.run(Thread.java:595)
  6. Go to the System Overview page and check that all the expected applications are listed and are without warnings or errors.
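The "attached script" mentioned above is not reproduced here. A minimal sketch of what it does might look like the following; the log file location and the "Uploaded file" log message are assumptions and must be adapted to the actual harvester and job number:

```shell
# Hypothetical sketch: poll the harvester log until the first uploaded
# ARC file for the given job is reported. Both the log path passed in
# and the grepped message text are assumptions, not the real script.
wait_for_first_upload() {
    local logfile="$1" job="$2"
    until grep -q "Uploaded file .*-${job}-" "$logfile" 2>/dev/null; do
        sleep 10
    done
}
# Example: wait_for_first_upload ~/TEST6/harvester_low/log/harvester.log.0 23 && stop_test.sh
```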

Check that a job can be resubmitted

  1. Check that you can reject a job for resubmission using the "Reject?" button so that it is no longer visible when you list failed jobs.
  2. Check that you can see the rejected job when you now list all jobs.
  3. Click on one or more "Genstart"/"Resubmit" buttons. Note that you can only resubmit jobs that failed due to harvesting errors, not due to upload errors.
  4. Check that the job status changes to "Resubmitted" and that a new job is made from the same harvest definition with the same configurations.
  5. Check that resubmitted jobs contain information about which job they were resubmitted from (FR770).

The following is performed on sb-test-bar-001.statsbiblioteket.dk as the user netarkiv

Run the following, which creates two sorted lists, one of the contents of the ARC files and one of the contents of the CDX files, and compares them:
(NB: Writing now goes to /netarkiv/0001 again until it is filled up)
export TESTX=TEST?

cd /netarkiv/0001/$TESTX/filedir
grep -aE '^http://[^ ]+ [0-9.]+ [0-9]+' *.dk.arc | sort >/tmp/$TESTX-arc-headers
grep -aE '^http://[^ ]+ [0-9.]+ [0-9]+' *-metadata-1.arc | cut -d' ' -f1-5 | sort >/tmp/$TESTX-cdx-headers
diff -a /tmp/$TESTX-arc-headers /tmp/$TESTX-cdx-headers

This should not produce any output.
Run the following, which computes the number of harvested pages, the number of duplicates, and the number of DNS lookups for 'netarkivet.dk':

grep '^http://[^ /]*netarkivet.dk/.*\ ' *-metadata-1.arc |wc -l
grep '[0-9]\ http://[^ /]*netarkivet.dk.*\ .*duplicate' *-metadata-1.arc| wc -l
grep '[0-9]\ dns:[^ /]*netarkivet.dk.*' *-metadata-1.arc| wc -l

Together, these three numbers should make up the number of documents harvested from netarkivet.dk. This should be at least the same as the sum of harvested documents shown under the Harvest History for netarkivet.dk (http://kb-test-adm-001.kb.dk:807?/History/Harveststatus-perdomain.jsp?domainName=netarkivet.dk).
If {sum of numbers from the greps} < {sum of numbers from the Harvest History}, do the following:
If the number is higher, it is because some netarkivet data has been harvested in jobs that have not yet reported back to the GUI, or as parts of harvests that do not list netarkivet.dk as a domain. In that case, find the job numbers that have harvested in http://kb-test-adm-001.kb.dk:807?/History/Harveststatus-perdomain.jsp?domainName=netarkivet.dk. Then run:

grep '^http://[^ /]*netarkivet.dk/.*\ ' {<<id>>,<<id>>,<<id>>}-metadata-1.arc |wc -l
grep '[0-9]\ http://[^ /]*netarkivet.dk.*\ .*duplicate' {<<id>>,<<id>>,<<id>>}-metadata-1.arc| wc -l
grep '[0-9]\ dns:[^ /]*netarkivet.dk.*' {<<id>>,<<id>>,<<id>>}-metadata-1.arc| wc -l

where <<id>> are the job IDs.
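The comparison described above can be made explicit with a small helper. This is our own sketch, not part of the test scripts; it takes the three counts printed by the greps and the total from the Harvest History page:

```shell
# Hypothetical helper: sum the three counts (pages, duplicates, DNS
# lookups) and compare against the Harvest History total.
check_counts() {
    local pages="$1" duplicates="$2" dns="$3" history_total="$4"
    local total=$((pages + duplicates + dns))
    if [ "$total" -ge "$history_total" ]; then
        echo "OK: $total >= $history_total"
    else
        echo "MISMATCH: $total < $history_total"
    fi
}
# Example: check_counts 120 14 6 140
```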
Read the metadata ARC file; if needed, see the Heritrix documentation at http://crawler.archive.org/articles/user_manual.html#creating
Check:
That it starts with a "filedesc:" entry
That a number of "metadata:" entries follow, whose URLs have the following parameters:
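The first two of these checks can be sketched with grep (a sketch of our own, run against the metadata ARC file copied to /tmp earlier; the parameter check is not included, since the expected parameters are not listed here):

```shell
# Hypothetical sketch: verify that the ARC file starts with a "filedesc:"
# record and contains at least one "metadata:" record.
check_metadata_arc() {
    local arcfile="$1"
    head -c 200 "$arcfile" | grep -q '^filedesc:' \
        || { echo "missing filedesc entry"; return 1; }
    grep -qa '^metadata:' "$arcfile" \
        || { echo "no metadata entries"; return 1; }
    echo "structure OK"
}
# Example: check_metadata_arc /tmp/<copied metadata file>
```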

Check that the scheduler skips outdated events

Shutdown the system