KB-Denmark

Broad crawl

We started our second broad crawl of 2020 on 20 August; the first step, with a byte limit of 50 MB, finished on 2 September. On 21 August we started the separate crawl of ultra-large sites, and this crawl is still running.

Event crawl

We have to decide whether or not to stop the event crawl on Corona in Denmark; there are differing opinions on that issue.

Miscellaneous

Everything is prepared for the French trainee: we have signed a contract, and he will start on 28 September. He wants to work on visualization of data and on Netarchive.

We have started a collaboration with the IT University in Copenhagen: students taking a course on project work and communication for software developers will work with us on several specific challenges.

We are trying to solve various technical issues, most of which we became aware of through emails from people dealing with particular web sites. Examples of these issues are:

  • URLs that do not change when you click links on the front page (gaffa.dk)
  • Embedded tables (dfi.dk)
  • Sites where we need a JavaScript interpreter to render the pages (rehpa.dk); see the sketch after this list
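For the JavaScript-dependent case, one possible approach is to render pages in a headless browser and compare the result with the raw HTTP response. Below is a minimal sketch in Python using Playwright; it is not part of our current setup, and the URL is just the example site named above.

```python
# Minimal sketch: render a JavaScript-dependent page in a headless browser
# and compare it with the raw HTTP response a plain crawler would store.
# Assumes: pip install requests playwright && playwright install chromium
import requests
from playwright.sync_api import sync_playwright

URL = "https://www.rehpa.dk/"  # example site from the list above

raw_html = requests.get(URL, timeout=30).text  # what Heritrix would fetch

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(URL, wait_until="networkidle")
    rendered_html = page.content()  # the DOM after JavaScript has run
    browser.close()

# A large size gap suggests the page really needs a JS interpreter.
print(f"raw: {len(raw_html)} bytes, rendered: {len(rendered_html)} bytes")
```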

We are going to look at the new features in BCWeb on an installation in a test environment.

BnF


After the upgrade of NAS and Heritrix in June, we have tracked the evolution of our QA indicators by comparing similar jobs run before and after the upgrade. The findings are positive: for the same job type, the new version crawls more URLs with fewer 404 errors, and the improvement is particularly significant for image files, with the number of crawled images growing by between 19 % and 98 % depending on the job type. We are very happy with this quality improvement; however, we have to manage larger WARC files and reassess our budget estimate. Our annual broad crawl will be launched in October, and we will have to adjust the parameters carefully in order to comply with the budget forecast.
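As a rough illustration (our own sketch, not BnF's actual tooling) of how such indicators can be computed, the snippet below tallies 404s and image fetches from Heritrix crawl.log files, assuming the standard layout where the fetch status is the second whitespace-separated field and the MIME type the seventh.

```python
# Sketch: count total URLs, 404s and crawled images in Heritrix crawl.log
# files, so two jobs (before/after an upgrade) can be compared side by side.
from collections import Counter
import sys

def qa_counts(log_path: str) -> Counter:
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            fields = line.split()
            if len(fields) < 7:
                continue  # skip malformed lines
            status, mime = fields[1], fields[6]
            counts["urls"] += 1
            if status == "404":
                counts["404s"] += 1
            if mime.startswith("image/"):
                counts["images"] += 1
    return counts

# Usage: python qa_counts.py crawl-before.log crawl-after.log
for path in sys.argv[1:]:
    print(path, dict(qa_counts(path)))
```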

The new version of BCWeb (7.3.0), with new functionality such as duplication of records and improvements to the advanced search and to deduplication, was successfully put into production at the end of July.

ONB



BNE


Broad crawl:

This year the broad crawl collected 1,930,000 web sites (around 50 terabytes of data). The number of domains has increased, but less information was published on the internet than the year before. Of all the web sites saved, 87 per cent were fully and completely collected.

Covid19 Collection:

We are also continuing the collection on the Coronavirus, which grows each week. It currently contains more than 4,000 web sites.

KB-Sweden


Questions:

Do you treat certain types of web sites/domains as uninteresting to harvest, and limit their budget or reduce the harvest in other ways? If yes:

  • Which categories of web sites?
  • How do you identify the category and find which web sites to treat specially?
  • How do you reduce the harvest there – data limit, object count limit, reject rules?

We would like to avoid the very large number of web sites containing huge product catalogues, often with lots of images for each product. But are there ways to find and avoid/limit them in some (semi-)automatic way? One conceivable heuristic is sketched below.
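As a starting point for discussion, the following sketch flags hosts where a previous crawl fetched an unusually high number and share of image URLs, a rough signature of product catalogues. The thresholds are arbitrary placeholders, and field positions again assume the standard Heritrix crawl.log layout.

```python
# Sketch of a heuristic: flag hosts that look like image-heavy product
# catalogues, based on a crawl.log from an earlier harvest.
from collections import Counter
from urllib.parse import urlsplit

def catalogue_suspects(log_path, min_images=10_000, min_share=0.6):
    totals, images = Counter(), Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            fields = line.split()
            if len(fields) < 7:
                continue
            host = urlsplit(fields[3]).hostname or "?"  # field 4 is the URI
            totals[host] += 1
            if fields[6].startswith("image/"):  # field 7 is the MIME type
                images[host] += 1
    # Both thresholds are made-up starting values, to be tuned on real data.
    return sorted(h for h in totals
                  if images[h] >= min_images
                  and images[h] / totals[h] >= min_share)
```

Hosts on the resulting list could then be given a lower byte budget or reject rules in the next broad crawl.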

(Also on the wish list, once such a site has been identified, would be a way to harvest a specified proportion of it, e.g. 1 %, randomly selected among a representative selection of different types of pages … :-) A sampling sketch follows below.)
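One way to get a stable pseudo-random sample is deterministic hash-based sampling: each URL is hashed, and the hash decides once and for all whether the URL is in the sample, so repeated harvests keep comparable subsets. A minimal sketch (the 1 % rate is taken from the wish above; stratifying by page type would additionally need a page classifier, which is not sketched here):

```python
# Sketch: keep a deterministic ~1 % sample of URLs by hashing each URL.
# The same URL is always kept or always dropped, so re-crawls stay
# comparable; adjust `percent` per site.
import hashlib

def in_sample(url: str, percent: float = 1.0) -> bool:
    digest = hashlib.sha1(url.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10_000
    return bucket < percent * 100  # 1.0 % -> buckets 0..99

print(in_sample("https://example.com/product/42"))  # hypothetical URL
```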

A side-track to this is more complicated crawler traps, which often show up on these (and other) sites, e.g. infinite loops of types that Heritrix cannot detect (a/b/c/a/b/c, pages referring to themselves with extra parameters, etc.). Hints? A detection sketch for the first type follows below.
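For the a/b/c/a/b/c case specifically, the repetition is a multi-segment cycle rather than a single repeated segment, so one (untested) idea is to check candidate URLs for consecutively repeated cycles of path segments before queueing them. A sketch of the detection logic, with arbitrary default thresholds:

```python
# Sketch: detect URL paths containing a consecutively repeated cycle of
# segments, e.g. /a/b/c/a/b/c or /x/x/x. Thresholds are arbitrary defaults
# and would need tuning against sites with legitimately repeated segments.
def has_repeated_cycle(path: str, max_cycle: int = 4,
                       min_repeats: int = 2) -> bool:
    segments = [s for s in path.split("/") if s]
    for cycle in range(1, max_cycle + 1):
        for start in range(len(segments) - cycle * min_repeats + 1):
            window = segments[start:start + cycle]
            repeats = 1
            while segments[start + repeats * cycle:
                           start + (repeats + 1) * cycle] == window:
                repeats += 1
            if repeats >= min_repeats:
                return True
    return False

assert has_repeated_cycle("/a/b/c/a/b/c")   # the cycle type Heritrix misses
assert not has_repeated_cycle("/a/b/c/d")   # a normal path
```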

Next meetings

  • October 6, 2020
  • November 3, 2020
  • December 8, 2020
  • January 5, 2021

Any other business?
