Fully-unsupervised embeddings-based hypernym discovery

Maurizio Atzori*, Simone Balloccu* (Corresponding Author)

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)
6 Downloads (Pure)


The hypernymy relation is the one occurring between an instance term and its general term (e.g., "lion" and "animal", "Italy" and "country"). This paper addresses Hypernym Discovery, the NLP task that aims at finding valid hypernyms for words in a given text, proposing HyperRank, an unsupervised approach that therefore does not require the manually-labeled training sets used by most approaches in the literature. The proposed algorithm exploits the cosine distance of points in the vector space of word embeddings, as in previous state-of-the-art approaches, but the ranking is then corrected by also weighting word frequencies and the absolute level of similarity, which is expected to be similar when measuring co-hyponyms and their common hypernym. This brings two major advantages over other approaches: (1) we correct the inadequacy of semantic similarity, which is known to cause a significant performance drop, and (2) we take into account multiple words when provided, allowing us to find common hypernyms for a set of co-hyponyms, a task ignored by other systems but very useful when coupled with set expansion (which finds co-hyponyms automatically). We then evaluate HyperRank on the SemEval 2018 Hypernym Discovery task and show that, regardless of language or domain, our algorithm significantly outperforms all existing unsupervised algorithms and some supervised ones as well. We also evaluate the algorithm on a new dataset to measure the improvement when finding hypernyms for sets of words instead of singletons.
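The general idea described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' actual implementation): candidates are ranked by their mean cosine similarity to one or more co-hyponym query words, and the score is then adjusted with a log-frequency bonus, reflecting the intuition that hypernyms tend to be more frequent than their hyponyms. The toy embeddings, frequency counts, and the `alpha` weighting parameter are all invented for the example.

```python
import numpy as np

# Toy embeddings and corpus frequencies (illustrative values only).
embeddings = {
    "lion":    np.array([0.9, 0.1, 0.2]),
    "tiger":   np.array([0.85, 0.15, 0.25]),
    "animal":  np.array([0.7, 0.3, 0.3]),
    "italy":   np.array([0.1, 0.9, 0.2]),
    "country": np.array([0.2, 0.8, 0.3]),
}
frequency = {"lion": 5e3, "tiger": 4e3, "animal": 9e4,
             "italy": 2e4, "country": 1.2e5}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_hypernyms(queries, alpha=0.1):
    """Rank candidate hypernyms for one or more co-hyponym query words.

    The score combines (a) the mean cosine similarity to all queries,
    which naturally supports hypernym discovery for sets of co-hyponyms,
    and (b) a log-frequency correction. `alpha` (hypothetical) balances
    the two terms.
    """
    candidates = [w for w in embeddings if w not in queries]
    scored = []
    for cand in candidates:
        sim = np.mean([cosine(embeddings[q], embeddings[cand])
                       for q in queries])
        score = sim + alpha * np.log(frequency[cand])
        scored.append((cand, score))
    return sorted(scored, key=lambda pair: -pair[1])

# Query with a set of co-hyponyms: "animal" should rank first,
# since it is both close to the queries and highly frequent.
print(rank_hypernyms(["lion", "tiger"])[0][0])  # → animal
```

Note how passing several queries averages the similarities, which is what makes finding a common hypernym for a whole set of co-hyponyms possible in this sketch.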

Original language: English
Article number: 268
Number of pages: 20
Journal: Information (Switzerland)
Issue number: 5
Publication status: Published - 18 May 2020

Bibliographical note

Funding: Supported in part by Sardegna Ricerche project OKgraph (CRP 120) and MIUR PRIN 2017 (2019-2022) project HOPE (High quality Open data Publishing and Enrichment).


  • Hypernym discovery
  • Natural language processing
  • Natural language understanding
  • Unsupervised learning
  • Word embeddings
  • Word2vec

