
Multilingual and Multimodal Information Access Evaluation [electronic resource] : International Conference of the Cross-Language Evaluation Forum, CLEF 2010, Padua, Italy, September 20-23, 2010, Proceedings /

Contributor(s):
Material type: Text
Series: Information Systems and Applications, incl. Internet/Web, and HCI ; 6360
Publisher: Berlin, Heidelberg : Springer Berlin Heidelberg : Imprint: Springer, 2010
Edition: 1st ed. 2010
Description: XIII, 145 p. 21 illus. online resource
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9783642159985
Subject(s):
Additional physical formats: Printed edition: No title; Printed edition: No title
DDC classification:
  • 006.35 23
LOC classification:
  • QA76.9.N38
Online resources:
Contents:
Keynote Addresses -- IR between Science and Engineering, and the Role of Experimentation -- Retrieval Evaluation in Practice -- Resources, Tools, and Methods -- A Dictionary- and Corpus-Independent Statistical Lemmatizer for Information Retrieval in Low Resource Languages -- A New Approach for Cross-Language Plagiarism Analysis -- Creating a Persian-English Comparable Corpus -- Experimental Collections and Datasets (1) -- Validating Query Simulators: An Experiment Using Commercial Searches and Purchases -- Using Parallel Corpora for Multilingual (Multi-document) Summarisation Evaluation -- Experimental Collections and Datasets (2) -- MapReduce for Information Retrieval Evaluation: “Let’s Quickly Test This on 12 TB of Data” -- Which Log for Which Information? Gathering Multilingual Data from Different Log File Types -- Evaluation Methodologies and Metrics (1) -- Examining the Robustness of Evaluation Metrics for Patent Retrieval with Incomplete Relevance Judgements -- On the Evaluation of Entity Profiles -- Evaluation Methodologies and Metrics (2) -- Evaluating Information Extraction -- Tie-Breaking Bias: Effect of an Uncontrolled Parameter on Information Retrieval Evaluation -- Automated Component–Level Evaluation: Present and Future -- Panels -- The Four Ladies of Experimental Evaluation -- A PROMISE for Experimental Evaluation.
In: Springer Nature eBook
Summary: In its first ten years of activities (2000-2009), the Cross-Language Evaluation Forum (CLEF) played a leading role in stimulating investigation and research in a wide range of key areas in the information retrieval domain, such as cross-language question answering, image and geographic information retrieval, interactive retrieval, and many more. It also promoted the study and implementation of appropriate evaluation methodologies for these diverse types of tasks and media. As a result, CLEF has been extremely successful in building a wide, strong, and multidisciplinary research community, which covers and spans the different areas of expertise needed to deal with the spread of CLEF tracks and tasks. This constantly growing and almost completely voluntary community has dedicated an incredible amount of effort to making CLEF happen and is at the core of the CLEF achievements. CLEF 2010 represented a radical innovation of the “classic CLEF” format and an experiment aimed at understanding how “next generation” evaluation campaigns might be structured. We had to face the problem of how to innovate CLEF while still preserving its traditional core business, namely the benchmarking activities carried out in the various tracks and tasks. The consensus, after lively and community-wide discussions, was to make CLEF an independent four-day event, no longer organized in conjunction with the European Conference on Research and Advanced Technology for Digital Libraries (ECDL), where CLEF had been running as a two-and-a-half-day workshop. CLEF 2010 thus consisted of two main parts: a peer-reviewed conference – the first two days – and a series of laboratories and workshops – the second two days.


