I have looked at a lot of resources, but I still have issues trying to convert a PyTorch model to the Hugging Face model format. I ultimately want to be able to use the Inference API with my custom model.
I have a "model.pt" file which I got from fine-tuning the Facebook Musicgen medium model (The Git repo I used to train / Fine tune the model is here). I want to upload this to the hugging face hub so i can use this with inference API. How can I convert the .pt model to files/model that can be used on hugging face hub? I tried looking at other posts but there is no clear answer, or it is poorly explained.
Any help or guidance would be greatly appreciated.
This is the code I have right now that is not working:
import os

import torch
from transformers import MusicgenConfig, MusicgenModel
from audiocraft.models import musicgen

os.makedirs('models', exist_ok=True)

# Load the fine-tuned weights into the audiocraft language model,
# then grab its state dict (load_state_dict itself does not return one)
lm = musicgen.MusicGen.get_pretrained('facebook/musicgen-medium', device='cuda').lm
lm.load_state_dict(torch.load('NEW_MODEL.pt'))
state_dict = lm.state_dict()

# Try to load those weights into the transformers Musicgen model and save it
config = MusicgenConfig.from_pretrained('facebook/musicgen-medium')
model = MusicgenModel(config)
model.load_state_dict(state_dict)
model.save_pretrained('models')
loaded_model = MusicgenModel.from_pretrained('models')
In case your model is a (custom) PyTorch model, you can leverage the PyTorchModelHubMixin class available in the huggingface_hub Python library. It is a minimal class which adds from_pretrained and push_to_hub capabilities to any nn.Module, along with download metrics. Here is a link to the Hugging Face docs explaining how to push a PyTorch model to the Hugging Face Hub:
https://huggingface.co/docs/hub/en/models-uploading#upload-a-pytorch-model-using-huggingfacehub
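For example, here is a minimal sketch of how that mixin is typically used (the model class, its layers, the local directory, and the repo ID below are placeholders made up for illustration, not anything from your MusicGen setup):

import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical custom model: inheriting from the mixin adds
# save_pretrained / push_to_hub / from_pretrained to the nn.Module.
class MyMusicModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 512, vocab_size: int = 2048):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x):
        return self.head(self.embedding(x))

model = MyMusicModel()
# ... load your fine-tuned weights into `model` here ...

model.save_pretrained('my-local-model-dir')         # writes weights + config locally
model.push_to_hub('your-username/your-model-name')  # uploads to the Hub (requires being logged in)

# Anyone with the class definition can later reload it straight from the Hub:
reloaded = MyMusicModel.from_pretrained('your-username/your-model-name')

Note that from_pretrained is called on your own class, so whatever environment loads the model needs that same class definition available; the Hub stores the weights and config, not the code.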