...
As for our selective crawls: "business as usual", that is to say analysis of "candidates" (new sites proposed for selective crawls), QA of selective crawls, monitoring of harvest jobs, and revision of harvest profiles.
- BNF:
On the 8th of August, the last harvest of the electoral project ended. Over a period of seven months, monthly, weekly, daily and one-off captures were made of websites selected by librarians for their relevance to the French presidential and parliamentary elections. The result is more than 350 million URLs and 20.38 TB of data (10.67 TB compressed). We focused our efforts on harvesting the social Web, especially Twitter and Facebook, but also Pinterest and Flickr. The well-known problem of the # in URLs was an insurmountable obstacle to harvesting some sites (Google+, Pearltree), but solutions were found for others. Twitter, for instance, was collected four times a day with a special harvest template in which the crawler identified itself not as a browser but as a robot. This gave us access to the URLs without the problematic #! sequence and therefore allowed us to collect tweets (see the sketch below). Twitter's URLs now seem to work without this sequence even in a normal browser, making them easier to collect. This project was also an opportunity to see our new nomination tool (BCWeb) working with NAS on a large scale. It proved very useful, even though we sometimes had to adjust the frequency of certain captures (for example, to densify harvests over the election weekends).
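
To illustrate the kind of workaround described above, here is a minimal sketch of the two ideas involved: rewriting "#!" AJAX URLs into the crawlable "_escaped_fragment_" form (Google's AJAX-crawling convention of the time) and fetching them with a robot-style User-Agent rather than a browser one. This is a hypothetical example, not the actual BnF/NAS harvest template; the function names and the `ROBOT_UA` string are illustrative assumptions.

```python
# Hypothetical sketch: rewrite "#!" URLs to their "_escaped_fragment_"
# form and fetch them while identifying as a robot, not a browser.
from urllib.parse import quote
from urllib.request import Request, urlopen

# Illustrative robot User-Agent string (assumption, not the real crawler's).
ROBOT_UA = "ExampleArchiveBot/1.0 (+http://example.org/bot)"

def escape_fragment(url: str) -> str:
    """Turn http://host/path#!state into http://host/path?_escaped_fragment_=state."""
    if "#!" not in url:
        return url
    base, fragment = url.split("#!", 1)
    separator = "&" if "?" in base else "?"
    return f"{base}{separator}_escaped_fragment_={quote(fragment)}"

def fetch_as_robot(url: str) -> bytes:
    """Fetch the escaped-fragment form of a URL, declaring a robot User-Agent."""
    request = Request(escape_fragment(url), headers={"User-Agent": ROBOT_UA})
    with urlopen(request) as response:
        return response.read()

# Example: a "#!"-style Twitter URL of the kind mentioned above.
print(escape_fragment("http://twitter.com/#!/bnf"))
# -> http://twitter.com/?_escaped_fragment_=/bnf
```
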
- ONB:
...