TrebleCLEF
Evaluation, Best Practices and Collaboration for Multilingual Information Access

About

Although in the past decade there has been extensive research in the field of multilingual system development, much of it promoted by the Cross-Language Evaluation Forum (CLEF), and considerable progress has been made, there are still very few operational cross-language retrieval systems. The time is now ripe to begin transferring the knowledge acquired to application settings.

The TrebleCLEF Coordination Action intends to move in this direction by building on and extending the results already achieved by CLEF. The objective is to support the development and consolidation of expertise in the multidisciplinary research area of multilingual information access and to promote dissemination of this expertise to the relevant application communities.

TrebleCLEF thus intends to promote research, development, implementation and industrial take-up of multilingual, multimodal information access functionality in the following ways:

  • by supporting the annual system evaluation campaigns of the Cross-Language Evaluation Forum with tracks and tasks designed to stimulate R&D to meet the requirements of the user and application communities, with particular focus on the following key areas:
    • user modeling, e.g. what are the requirements of different classes of users when querying multilingual information sources;
    • language-specific experimentation, e.g. examining differences across languages in order to derive best practices for each language, for the development of system components, and for MLIA systems as a whole;
    • results presentation, e.g. how can results be presented in the most useful and comprehensible way to the user.
  • by constituting a scientific forum for the MLIA community of researchers enabling them to meet and discuss results, emerging trends, new directions:
    • providing a scientific digital library to manage and make accessible the scientific data and experiments produced during the course of an evaluation campaign. This library would also provide tools for analyzing, comparing, and citing the scientific data of an evaluation campaign, as well as for curating, preserving, annotating, and enriching that data and promoting its re-use (a minimal example of computing one standard evaluation measure follows this list);
  • by acting as a virtual centre of competence providing a central reference point for anyone interested in studying or implementing MLIA functionality and encouraging the dissemination of information:
    • making publicly available sets of guidelines on best practices in MLIA (e.g. what stemmer to use, what stop list, what translation resources, how best to evaluate, etc., depending on the application requirements; a language-configuration sketch also follows this list);
    • making tools and resources used in the evaluation campaigns freely available to a wider public whenever possible; otherwise providing links to where they can be acquired;
    • organising workshops, tutorials, and training sessions.
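
CLEF campaigns have conventionally reported ranked-retrieval measures such as mean average precision (MAP) when comparing the experiments held in such a library. The following Python sketch shows how MAP might be computed from per-topic rankings and relevance judgements; the data structures and function names are illustrative only, not part of any TrebleCLEF or CLEF tool.

    def average_precision(ranked_docs, relevant):
        """Average precision for one topic: mean of precision@k at each relevant hit."""
        hits, precision_sum = 0, 0.0
        for k, doc_id in enumerate(ranked_docs, start=1):
            if doc_id in relevant:
                hits += 1
                precision_sum += hits / k
        return precision_sum / len(relevant) if relevant else 0.0

    def mean_average_precision(run, qrels):
        """MAP over all topics; run maps topic -> ranked doc ids, qrels maps topic -> relevant ids."""
        return sum(average_precision(run[t], qrels[t]) for t in qrels) / len(qrels)

    # Toy data: two topics judged against small relevance sets.
    run = {"T1": ["d3", "d1", "d7"], "T2": ["d2", "d9", "d4"]}
    qrels = {"T1": {"d1", "d7"}, "T2": {"d9"}}
    print(mean_average_precision(run, qrels))  # approx. 0.542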
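
As an illustration of what such language-specific guidance could look like when applied, here is a minimal Python sketch of a per-language indexing configuration bundling a stop list with a stemmer. The suffix rules and stop words below are deliberately toy values invented for the example, not recommended settings; an operational system would plug in a full stemmer (e.g. a Snowball implementation) and a curated stop list for each language.

    from dataclasses import dataclass

    @dataclass
    class LanguageConfig:
        """Per-language indexing choices of the kind a best-practices guideline would fix."""
        stop_words: set   # words dropped before indexing
        suffixes: tuple   # toy stand-in for a real stemming algorithm

        def stem(self, token):
            # Strip the first matching suffix; a real system would call a proper stemmer.
            for suffix in self.suffixes:
                if token.endswith(suffix) and len(token) > len(suffix) + 2:
                    return token[: -len(suffix)]
            return token

        def normalise(self, text):
            tokens = text.lower().split()
            return [self.stem(t) for t in tokens if t not in self.stop_words]

    # Illustrative (not recommended) settings for two languages.
    CONFIGS = {
        "en": LanguageConfig(stop_words={"the", "of", "and"}, suffixes=("ing", "s")),
        "fr": LanguageConfig(stop_words={"le", "la", "de"}, suffixes=("ment", "s")),
    }

    print(CONFIGS["en"].normalise("the stemming of english words"))
    # -> ['stemm', 'english', 'word']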

The aim is to:

  • enable applications that need multilingual search solutions to identify the most appropriate technology;
  • assist technology providers in developing competitive multilingual search solutions.