I am very new to NLP and I might be doing something wrong.
I would like to work with a Hungarian text where I can get the synset/hyponym/hypernym of some selected words. I am working in Python.
As Open Multilingual Wordnet does not have a Hungarian wordnet dictionary, I have downloaded one from this GitHub site: https://github.com/mmihaltz/huwn
As it is an XML file, I have converted it to .tab with a converter available in the other language folders.
So at this stage I created the '\nltk_data\corpora\omw\hun' directory and placed my new wn-data-hun.tab inside it.
But unfortunately it is not working.
After importing nltk and wordnet, the wn.langs() command shows 'hun' as an available language.
However, the wn.lemmas('cane', lang='hun') command returns an empty list. Trying other languages (languages built into Open Multilingual Wordnet), it works.
Could you please help me or point me in the right direction in order to make it work?
Thank you in advance!
Attached Hungarian .tab file: here
Hungarian text:
A szöveg megfelelője gyakorlatilag az összes európai nyelvben "Text" (különböző írásképekkel a nemzeti helyesírás miatt), ami a latin "textum" szóból ered, amely szó eredeti jelentése: szövet, szöveg. A magyarban a nyelvújítás idején a jelentést magyar szóval jelöltük. A szöveg egy összefüggő és a környezetétől jól elhatárolt vagy elhatárolható megnyilvánulás, kijelentés írott vagy tágabb értelemben nem írott de (le)írható nyelven. A nem feltétlenül írott, de leírható szövegre példa a dalszöveg, egy film szövege vagy improvizált színházi szöveg.
(English translation: The equivalent of "szöveg" [text] in practically all European languages is "Text" (with different spellings due to national orthographies), which derives from the Latin "textum", whose original meaning is: fabric, text. In Hungarian, the concept was given a Hungarian word during the language-reform era. A text is a coherent utterance or statement, well delimited or delimitable from its surroundings, in written language or, in a broader sense, in unwritten but writable language. Examples of not necessarily written but writable text include song lyrics, the text of a film, or improvised theatrical text.)
The problem is that in the case of the Hungarian language it does not find anything, but in the case of French it does. See below:

UPDATED 12-04-2021
I would highly recommend reaching out to the repository's owner to understand the mapping IDs that were used in huwn.xml.
Here is why:
I cross-referenced the word mappings between .tab files for a specific word.
French mapping for the word 'chien', which is dog in English
Italian mapping for the word 'cane', which is dog in English
Arabic mapping for the word 'كلْب', which is dog in English
When I look for the mapping ID 02084071-n in your Hungarian .tab file, it does not exist.
These are the mapping IDs in your file for the word 'kutya', which is Hungarian for 'dog'. These mapping IDs don't exist in the other .tab files.
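This cross-check can be scripted with plain Python. The sketch below assumes the standard OMW .tab layout of three tab-separated fields per data line; `lemmas_for_offset` is a hypothetical helper name, and the sample lines mirror the French and Italian entries discussed above:

```python
# Sketch: look up which lemmas an OMW .tab file maps to a given
# Princeton WordNet offset (e.g. '02084071-n', the noun synset for 'dog').
def lemmas_for_offset(tab_lines, offset):
    hits = []
    for line in tab_lines:
        if line.startswith('#') or not line.strip():
            continue  # skip the license/header line and blank lines
        fields = line.rstrip('\n').split('\t')
        # Expected layout: <offset-pos> TAB <lang>:lemma TAB <lemma>
        if len(fields) == 3 and fields[0] == offset:
            hits.append(fields[2])
    return hits

# Synthetic sample lines in OMW .tab format
sample = [
    "02084071-n\tfra:lemma\tchien",
    "02084071-n\tita:lemma\tcane",
    "02085374-n\tfra:lemma\tchiot",
]
print(lemmas_for_offset(sample, "02084071-n"))  # ['chien', 'cane']
```

Running the same function over your file, e.g. `lemmas_for_offset(open('wn-data-hun.tab', encoding='utf-8'), '02084071-n')`, should return `['kutya']` if the IDs were mapped to Princeton WordNet 3.0 offsets; an empty list confirms the mapping mismatch.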
Additionally, the format of your .tab file still does not match the format of the ones used by WordNet.
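For reference, each data line in an OMW .tab file has three tab-separated fields: the Princeton WordNet offset plus part-of-speech tag, the relation marker `lang:lemma`, and the lemma itself (the first line of the file is a license header). Assuming the standard layout, a working Hungarian file would need lines like:

```
02084071-n	hun:lemma	kutya
```

(The fields are separated by actual tab characters, not spaces, and 02084071-n is the WordNet 3.0 noun synset for 'dog'.)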
ORIGINAL POST 12-03-2021
I did some research into this question. I noted that the GitHub Hungarian WordNet repository that you're trying to use has a secondary repository containing code for working with it. Have you tried the scripts in that repository?
Also, I looked at the format of the huwn.xml file in the first repository. It isn't in the same format as the NLTK OMW .tab files.
NLTK file: wn-data-pol.tab
file: huwn.xml
Based on these major formatting differences, I don't see how huwn.xml can be easily dropped into NLTK WordNet.
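That said, if the IDs in huwn.xml do turn out to correspond to Princeton WordNet offsets, a conversion is mostly a matter of flattening each synset into .tab lines. The sketch below is hypothetical: the tag names (`SYNSET`, `ID`, `LITERAL`) are assumptions about a VisDic-style schema and must be checked against the actual huwn.xml before use.

```python
# Hypothetical sketch: flatten a VisDic-style WordNet XML into OMW .tab lines.
# The element names used here are assumptions, not the confirmed huwn.xml schema.
import xml.etree.ElementTree as ET

def xml_to_tab(xml_text, lang='hun'):
    root = ET.fromstring(xml_text)
    lines = []
    for synset in root.iter('SYNSET'):
        offset = synset.findtext('ID')       # assumed: synset ID element
        for literal in synset.iter('LITERAL'):  # assumed: one element per lemma
            lemma = (literal.text or '').strip()
            if offset and lemma:
                lines.append(f"{offset}\t{lang}:lemma\t{lemma}")
    return lines

# Minimal synthetic input matching the assumed schema
sample = ("<WN><SYNSET><ID>02084071-n</ID>"
          "<SYNONYM><LITERAL>kutya</LITERAL></SYNONYM>"
          "</SYNSET></WN>")
print(xml_to_tab(sample))  # ['02084071-n\thun:lemma\tkutya']
```

Even with a correct conversion, the IDs would still need to be remapped to Princeton WordNet 3.0 offsets if the repository uses its own numbering, which is why contacting the owner is the first step.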
DISCLAIMER:
I'm the author of the Python package WordHoard, which can be used to obtain antonyms, synonyms, hypernyms, hyponyms, homophones, and definitions. WordHoard is designed to perform language translation via 3 different services. Hungarian is a supported language for translation.
Below is an example of WordHoard translating the Hungarian word 'kutya' and searching multiple sources for synonyms related to 'kutya', which is Hungarian for 'dog'.
P.S. I see that I might need to add some additional code to WordHoard to remove English terms like 'Bow Wow' and 'Hot dog' from the output.