000 05499nam a22005535i 4500
001 978-3-030-22948-1
003 DE-He213
005 20240423125031.0
007 cr nn 008mamaa
008 190813s2019 sz | s |||| 0|eng d
020 _a9783030229481
_9978-3-030-22948-1
024 7 _a10.1007/978-3-030-22948-1
_2doi
050 4 _aQA75.5-76.95
072 7 _aUNH
_2bicssc
072 7 _aUND
_2bicssc
072 7 _aCOM030000
_2bisacsh
072 7 _aUNH
_2thema
072 7 _aUND
_2thema
082 0 4 _a025.04
_223
245 1 0 _aInformation Retrieval Evaluation in a Changing World
_h[electronic resource] :
_bLessons Learned from 20 Years of CLEF /
_cedited by Nicola Ferro, Carol Peters.
250 _a1st ed. 2019.
264 1 _aCham :
_bSpringer International Publishing :
_bImprint: Springer,
_c2019.
300 _aXXII, 595 p. 89 illus., 75 illus. in color.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aThe Information Retrieval Series,
_x2730-6836 ;
_v41
505 0 _aFrom Multilingual to Multimodal: The Evolution of CLEF over Two Decades -- The Evolution of Cranfield -- How to Run an Evaluation Task -- An Innovative Approach to Data Management and Curation of Experimental Data Generated through IR Test Collections -- TIRA Integrated Research Architecture -- EaaS: Evaluation-as-a-Service and Experiences from the VISCERAL Project -- Lessons Learnt from Experiments on the Ad-Hoc Multilingual Test Collections at CLEF -- The Challenges of Language Variation in Information Access -- Multi-lingual Retrieval of Pictures in ImageCLEF -- Experiences From the ImageCLEF Medical Retrieval and Annotation Tasks -- Automatic Image Annotation at ImageCLEF -- Image Retrieval Evaluation in Specific Domains -- ’Bout Sound and Vision: CLEF beyond Text Retrieval Tasks -- The Scholarly Impact and Strategic Intent of CLEF eHealth Labs from 2012-2017 -- Multilingual Patent Text Retrieval Evaluation: CLEF-IP -- Biodiversity Information Retrieval through Large Scale Content-Based Identification: A Long-Term Evaluation -- From XML Retrieval to Semantic Search and Beyond -- Results and Lessons of the Question Answering Track at CLEF -- Evolution of the PAN Lab on Digital Text Forensics -- RepLab: an Evaluation Campaign for Online Monitoring Systems -- Continuous Evaluation of Large-scale Information Access Systems: A Case for Living Labs -- The Scholarly Impact of CLEF 2010-2017 -- Reproducibility and Validity in CLEF -- Visual Analytics and IR Experimental Evaluation -- Adopting Systematic Evaluation Benchmarks in Operational Settings.
520 _aThis volume celebrates the twentieth anniversary of CLEF – the Cross-Language Evaluation Forum for its first ten years, and the Conference and Labs of the Evaluation Forum since then – and traces its evolution over these first two decades. CLEF’s main mission is to promote research, innovation and development of information retrieval (IR) systems by anticipating trends in information management in order to stimulate advances in the field of IR system experimentation and evaluation. The book is divided into six parts. Parts I and II provide background and context, with the first part explaining what is meant by experimental evaluation and the underlying theory, and describing how this has been interpreted in CLEF and in other internationally recognized evaluation initiatives. Part II presents research architectures and infrastructures that have been developed to manage experimental data and to provide evaluation services in CLEF and elsewhere. Parts III, IV and V represent the core of the book, presenting some of the most significant evaluation activities in CLEF, ranging from the early multilingual text processing exercises to the later, more sophisticated experiments on multimodal collections in diverse genres and media. In all cases, the focus is not only on describing “what has been achieved”, but above all on “what has been learnt”. The final part examines the impact CLEF has had on the research world and discusses current and future challenges, both academic and industrial, including the relevance of IR benchmarking in industrial settings. Mainly intended for researchers in academia and industry, it also offers useful insights and tips for practitioners working on the evaluation and performance issues of IR tools, and for graduate students specializing in information retrieval.
650 0 _aInformation storage and retrieval systems.
650 0 _aNatural language processing (Computer science).
650 1 4 _aInformation Storage and Retrieval.
650 2 4 _aNatural Language Processing (NLP).
700 1 _aFerro, Nicola.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
700 1 _aPeters, Carol.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
710 2 _aSpringerLink (Online service)
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783030229474
776 0 8 _iPrinted edition:
_z9783030229498
776 0 8 _iPrinted edition:
_z9783030229504
830 0 _aThe Information Retrieval Series,
_x2730-6836 ;
_v41
856 4 0 _uhttps://doi.org/10.1007/978-3-030-22948-1
912 _aZDB-2-SCS
912 _aZDB-2-SXCS
942 _cSPRINGER
999 _c173467
_d173467