Multimedia Computing and Computer Vision Lab














User Study

From Multimedia Computing Lab - University of Augsburg

Questions / Bug Reports to Stefan Romberg

  • Phone:
    • +49 (821) 598 2462



  • Start UserStudy client (Java WebStart)
  • .zip file
  • Large Screenshot

Evaluation of Image Retrieval Systems

One of our research areas is image retrieval on large image databases. Unfortunately, no ground truth is available for these databases. To measure whether our image retrieval systems perform well and return reasonable results, human judgment is needed: only humans can rate the performance of our image search engines with respect to a given query image and its results. We therefore conduct user studies in which participants are asked to rate the similarity of the search results to the query image. The ratings are collected, and an overall score is computed.
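How the collected ratings are turned into an overall score is not specified here; a minimal sketch, assuming the three rating levels are weighted 1.0 / 0.5 / 0.0 and averaged over the results of a query (the class, the weights, and the averaging scheme are illustrative assumptions, not the lab's actual metric):

```java
// Sketch of computing an overall score from collected ratings.
// The weights (1.0 / 0.5 / 0.0) and the simple mean are assumptions
// for illustration, not the evaluation metric actually used.
public class RatingScore {
    enum Rating { SIMILAR, SOMEWHAT_SIMILAR, NOT_SIMILAR }

    /** Relevance weight assigned to a single rating. */
    static double weight(Rating r) {
        switch (r) {
            case SIMILAR:          return 1.0;
            case SOMEWHAT_SIMILAR: return 0.5;
            default:               return 0.0;
        }
    }

    /** Mean rating weight over all rated results of one query page. */
    static double pageScore(Rating[] ratings) {
        double sum = 0.0;
        for (Rating r : ratings) sum += weight(r);
        return sum / ratings.length;
    }

    public static void main(String[] args) {
        Rating[] page = { Rating.SIMILAR, Rating.SOMEWHAT_SIMILAR, Rating.NOT_SIMILAR };
        System.out.println(pageScore(page)); // 0.5
    }
}
```

Averaging per-page scores over all pages of an experiment would then give one number per retrieval system, allowing systems to be compared.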

Crash Course

On startup the user is asked to select the experiment he/she wants to evaluate; you may simply select the default one. Each experiment consists of a set of pages and images. The upper left image (white frame) is the query image. Imagine you had given this image to the search engine, just as you would type keywords into a web search engine.

The other 19 images are the top 19 results: the search engine considers them the images most similar to the query image. You are asked to rate whether they really are similar.

Each image can be rated as "similar", "somewhat similar", or "not similar". You rate images by simply clicking on them: a left click on an image marks it as "similar" (green frame), a right click marks it as "somewhat similar" (yellow frame), and clicking an already rated image once more marks it as "not similar" (red frame).

After you have rated every image on a page that you consider similar or somewhat similar, you can go to the next page by clicking the arrow button or pressing the right arrow key. All remaining (unclicked) images are then considered "not similar" to the query image and are marked with a red frame.
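The click handling described above amounts to a small state machine. A sketch of those transitions, assuming illustrative class and method names (this is not the actual UserStudy client code):

```java
// Sketch of the rating transitions described in the crash course.
// Class and method names are illustrative assumptions, not the
// actual UserStudy client implementation.
public class RatingStateMachine {
    enum Rating { UNRATED, SIMILAR, SOMEWHAT_SIMILAR, NOT_SIMILAR }

    /** New rating after a mouse click on an image. */
    static Rating onClick(Rating current, boolean leftButton) {
        if (current != Rating.UNRATED) {
            // Any click on an already rated image marks it "not similar" (red frame).
            return Rating.NOT_SIMILAR;
        }
        // Left click -> "similar" (green frame), right click -> "somewhat similar" (yellow frame).
        return leftButton ? Rating.SIMILAR : Rating.SOMEWHAT_SIMILAR;
    }

    /** On page advance, still-unclicked images default to "not similar". */
    static Rating onPageAdvance(Rating current) {
        return current == Rating.UNRATED ? Rating.NOT_SIMILAR : current;
    }

    public static void main(String[] args) {
        System.out.println(onClick(Rating.UNRATED, true));    // SIMILAR
        System.out.println(onClick(Rating.UNRATED, false));   // SOMEWHAT_SIMILAR
        System.out.println(onClick(Rating.SIMILAR, true));    // NOT_SIMILAR
        System.out.println(onPageAdvance(Rating.UNRATED));    // NOT_SIMILAR
    }
}
```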

Once all images of an experiment have been rated, another experiment may be selected.

System Requirements

  • Java 1.5 or newer
  • Internet Connection

Bug Reports

Known Issues

  • Does not work yet on Mac OS X 10.6 "Snow Leopard".
  • Switching platforms/systems messes up the path for temporary data.
  • Sending the ratings after the last page is finished sometimes takes a very long time.
  • Zooming into images is very buggy and occasionally freezes the GUI.
  • Images are sometimes painted slowly (depends on hard disk speed; no pre-caching is performed yet).