Proof-of-concept subproject to investigate the possibility of using Cooliris for browsing archived websites
User guide
Guidelines for viewing the provided web content:
- Extract the web content to a folder on the web server that is going to serve it.
Generate content
- Download and extract the source tarball.
- Create a URLGenerator class to provide the list of URLs to capture. Examples of URLGenerators can be found in the src/main/wall/harvesters folder; all the harvesters there load a file containing the relevant URLs and generate a URL list from it.
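As a rough illustration of the file-based pattern the harvesters use, the sketch below reads URLs from a plain-text file, one per line, skipping blanks. The class name, constructor, and generateUrls() method are assumptions for this example; the actual URLGenerator interface in the project may look different.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical file-based URL generator, modeled on the description of the
// harvesters in src/main/wall/harvesters: load a file of URLs and turn it
// into the list of URLs to capture.
public class FileUrlGenerator {
    private final Path urlFile;

    public FileUrlGenerator(Path urlFile) {
        this.urlFile = urlFile;
    }

    // Returns the non-blank lines of the file as the list of URLs to capture.
    public List<String> generateUrls() throws IOException {
        return Files.readAllLines(urlFile).stream()
                .map(String::trim)
                .filter(line -> !line.isEmpty())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) throws IOException {
        // Small self-contained demo: write a temporary URL file and read it back.
        Path tmp = Files.createTempFile("urls", ".txt");
        Files.write(tmp, Arrays.asList("http://example.org/page1", "", "http://example.org/page2"));
        for (String url : new FileUrlGenerator(tmp).generateUrls()) {
            System.out.println(url);
        }
        Files.delete(tmp);
    }
}
```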
- Create a runner class that generates the Cooliris wall. See the harvestrunner package for examples.
- Start a Selenium server with the appropriate configuration, e.g.:
java -Dhttp.proxyHost=${proxy-url} -Dhttp.proxyPort=${proxy-port} -Dhttp.proxyUser=${proxy-loginname} -Dhttp.proxyPassword=${proxy-pw} -jar selenium-server.jar
I never got this to work with Firefox on Ubuntu, so alternatively I created a custom profile with the relevant proxy settings and started the Selenium server with the -firefoxProfileTemplate "path-to-profile-folder" option. You may need to manually supply a proxy login name/password on each Capture run in this case.
Another possibility is to use the Maven2 selenium target provided in the project's pom.xml to start the Selenium server.
- Start the relevant Runner. Ensure that the browser window showing the web pages to capture is visible at all times.
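For reference, the Maven2 route mentioned above is typically wired up with the Codehaus selenium-maven-plugin; the fragment below is a sketch of such a configuration, not the project's actual pom.xml, which may differ.

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>selenium-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>start-server</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With a configuration along these lines, the server can be started from the command line with `mvn selenium:start-server`.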