I'm trying to run PyTorch Lightning Fabric distributed FSDP training with Hugging Face PEFT LoRA fine-tuning on LLaMA 2, but my code fails with:
```
File ".......", line 100, in <module>
    model, optimizer = fabric.setup(model, optimizer)
ValueError: `FlatParameter` requires uniform dtype but got torch.float32 and torch.bfloat16
```
How do I find out which of the model's tensors are still float32 before Fabric wraps the model with FSDP?
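For reference, this is the kind of check I was planning to run right before `fabric.setup(model, optimizer)`; the `dtype_report` helper below is just my own sketch, not a Fabric or PEFT API:

```python
import torch
from collections import defaultdict

def dtype_report(model: torch.nn.Module) -> None:
    """Print the model's parameters grouped by dtype so mixed dtypes stand out."""
    by_dtype = defaultdict(list)
    for name, param in model.named_parameters():
        by_dtype[param.dtype].append(name)
    for dtype, names in by_dtype.items():
        print(f"{dtype}: {len(names)} parameters")
        for name in names:
            print(f"  {name}")

# Intended usage, just before Fabric/FSDP wrapping:
# dtype_report(model)
```

Is that the right way to track down the mismatched parameters, or is there a better hook in Fabric for this?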
This appears to be a known PyTorch issue: https://github.com/h2oai/h2o-llmstudio/issues/98 and https://github.com/pytorch/pytorch/issues/100945