News

GERBIL - A General Entity Annotator Benchmark

Named Entity Recognition and Entity Linking systems are very useful for extracting structured, machine-understandable data from text. Several such systems have been developed in recent years, but how good are they? GERBIL provides a tool that evaluates how well these systems perform.

Using GERBIL, a user can test their system on well-established datasets and evaluate it against other systems. This is especially good news for dataset developers.

GERBIL provides several integrated systems users can choose from, as well as a NIF-based web service interface and a file upload option for plugging in other systems.
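
To give an idea of how the NIF-based integration works: the annotator exposes an HTTP endpoint that accepts a NIF document (serialized as Turtle), runs its annotation pipeline, and returns the document enriched with entity mentions. Below is a minimal sketch in Python, assuming Flask and rdflib are available; the endpoint path, port, and the toy string-matching "annotator" are hypothetical illustrations, not part of GERBIL itself.

```python
# Minimal sketch of a NIF-based annotator web service that a benchmarking
# tool like GERBIL could call. Assumes Flask and rdflib; the /annotate path
# and the hard-coded "Berlin" matcher are hypothetical.
from flask import Flask, Response, request
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

NIF = Namespace("http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#")
ITSRDF = Namespace("http://www.w3.org/2005/11/its/rdf#")

app = Flask(__name__)

@app.route("/annotate", methods=["POST"])
def annotate():
    # The request body is a NIF graph in Turtle; parse it.
    g = Graph()
    g.parse(data=request.get_data(as_text=True), format="turtle")

    # Find the context resource and the plain text it carries.
    context = next(g.subjects(RDF.type, NIF.Context))
    text = str(next(g.objects(context, NIF.isString)))

    # Toy annotator: mark the first occurrence of "Berlin" and link it to
    # its DBpedia resource. A real system would run NER + linking here.
    start = text.find("Berlin")
    if start >= 0:
        end = start + len("Berlin")
        base = str(context).split("#")[0]
        mention = URIRef(f"{base}#char={start},{end}")
        g.add((mention, RDF.type, NIF.Phrase))
        g.add((mention, NIF.referenceContext, context))
        g.add((mention, NIF.anchorOf, Literal(text[start:end])))
        g.add((mention, NIF.beginIndex, Literal(start, datatype=XSD.nonNegativeInteger)))
        g.add((mention, NIF.endIndex, Literal(end, datatype=XSD.nonNegativeInteger)))
        g.add((mention, ITSRDF.taIdentRef, URIRef("http://dbpedia.org/resource/Berlin")))

    # Return the enriched graph, again serialized as Turtle.
    return Response(g.serialize(format="turtle"), mimetype="application/x-turtle")

if __name__ == "__main__":
    app.run(port=8080)
```

Once such an endpoint is running and reachable, its URL can be registered in the GERBIL web interface so that experiments send documents to it and score the annotations that come back.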

With 10 different experiment types covering a variety of Entity Recognition, Linking and Typing benchmarks, as well as Relation Extraction, GERBIL provides a tool to check every corner of a system's performance.

With a vast variety of datasets, including the OKE challenge and Microposts datasets, users can benchmark their systems extensively while comparing them with other systems in a fair, reproducible way. Each experiment is given a permanent w3id URL. Since the very first OKE challenge in 2015, GERBIL has been the evaluation platform for the challenge.

Since GERBIL was ported to the HOBBIT project, it has been able to meet new challenges more easily. Two spin-offs, GERBIL QA and GERBIL KBC, were developed for Question Answering and Knowledge Base Curation benchmarks.

GERBIL is open source and available at github.com/dice-group/GERBIL. Do you want to test your system? Go to gerbil.aksw.org/gerbil and try it yourself!

Have you used GERBIL in the past? Any suggestions, critiques or future wishes? Let us know!