
Proof-of-concept subproject to investigate the possibility of using Cooliris for browsing archived websites.

User guide

Guidelines for viewing the provided web content:

  1. Extract the web content to a folder on the web server that is going to serve it.

Generate content

  1. Download and extract the source tarball.
  2. Create a URLGenerator class that provides the list of URLs to capture. Examples of URL generators can be found in the src/main/wall/harvesters folder; all of the harvesters there load a file containing the relevant URLs and build the URL list from it. A minimal sketch of such a generator is shown after this list.
  3. Create a runner class that generates the Cooliris wall. See the harvestrunner package for examples; a hypothetical capture runner is also sketched after this list.
  4. Start a Selenium server with the appropriate configuration, e.g.
    java -Dhttp.proxyHost=${proxy-url} -Dhttp.proxyPort=${proxy-port} -Dhttp.proxyUser=${proxy-loginname} -Dhttp.proxyPassword=${proxy-pw} -jar selenium-server.jar
    
    I never got this to work with Firefox on Ubuntu, so as an alternative I created a custom Firefox profile with the relevant proxy settings and started the Selenium server with the option
    -firefoxProfileTemplate "path-to-profile-folder"
    
    You may need to manually supply the proxy login name/password on each capture run in this case.
    Another possibility is to use the Maven2 Selenium target provided in the project's pom.xml to start the Selenium server.
  5. Start the relevant Runner. Ensure that the browser window showing the webpages to capture is visible at all times.
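
The following is a minimal sketch of a file-based URL generator as described in step 2. The class name FileUrlGenerator, its constructor, and generateUrls() are assumptions made for illustration only; the actual URLGenerator classes in src/main/wall/harvesters may look different.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    
    // Hypothetical file-based URL generator (illustration only); the real
    // URLGenerator classes live in src/main/wall/harvesters and may differ.
    public class FileUrlGenerator {
    
        private final String urlListFile;
    
        public FileUrlGenerator(String urlListFile) {
            this.urlListFile = urlListFile;
        }
    
        // Reads one URL per line, skipping blank lines and # comments.
        public List<String> generateUrls() throws IOException {
            List<String> urls = new ArrayList<String>();
            BufferedReader reader = new BufferedReader(new FileReader(urlListFile));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    line = line.trim();
                    if (line.length() > 0 && !line.startsWith("#")) {
                        urls.add(line);
                    }
                }
            } finally {
                reader.close();
            }
            return urls;
        }
    }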
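
Likewise, here is a hedged sketch of a capture runner in the spirit of steps 3 and 5. It assumes the Selenium RC Java client is on the classpath, that the Selenium server from step 4 is running on localhost:4444, and it uses the hypothetical FileUrlGenerator above together with a placeholder base URL and file names. The actual runner classes in the harvestrunner package also generate the Cooliris wall itself and will differ from this.

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;
    import java.util.List;
    
    // Hypothetical capture runner (illustration only).
    public class ExampleCaptureRunner {
    
        public static void main(String[] args) throws Exception {
            List<String> urls = new FileUrlGenerator("urls.txt").generateUrls();
    
            // Connect to the Selenium server started in step 4.
            Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                    "http://archive.example.org/");
            selenium.start();
            try {
                int count = 0;
                for (String url : urls) {
                    selenium.open(url);
                    // captureScreenshot grabs the screen, which is why the
                    // browser window must stay visible during the run (step 5).
                    selenium.captureScreenshot("capture-" + (count++) + ".png");
                }
            } finally {
                selenium.stop();
            }
        }
    }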