
Sequence Parallel is incompatible with Rotary Positional Embedding #385

Open
anogkongda opened this issue May 9, 2024 · 4 comments

@anogkongda

I would like to fine-tune LLaMA 2 on long-sequence data (32K tokens or longer).

I followed the example below for sequence parallelism:

https://github.com/microsoft/Megatron-DeepSpeed/blob/main/examples_deepspeed/deepspeed4science/megatron_long_seq_support/pretrain_gpt_30B_seq_parallel.sh

Unfortunately, the LM loss becomes NaN as soon as I enable rotary positional embedding.
When I disable rotary positional embedding, the loss is fine even though all other parameters/arguments are unchanged.
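For context, here is a minimal, generic sketch of the rotary-embedding arithmetic in plain PyTorch (not the Megatron-DeepSpeed code path; the helper names are made up), just to show where long sequences and BF16 interact. A common mitigation is to build the position/frequency table in FP32 and only cast the resulting sin/cos to BF16:

```python
import torch

def rotary_sin_cos(seq_len, dim, base=10000.0, out_dtype=torch.bfloat16):
    # Inverse frequencies for each channel pair (standard RoPE formulation).
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    # Build the position x frequency table in FP32. BF16 has only ~8 mantissa
    # bits, so large position indices (e.g. 32K) lose precision if the table
    # itself is built in low precision.
    positions = torch.arange(seq_len, dtype=torch.float32)
    freqs = torch.outer(positions, inv_freq)        # [seq_len, dim/2]
    emb = torch.cat((freqs, freqs), dim=-1)         # [seq_len, dim]
    return emb.sin().to(out_dtype), emb.cos().to(out_dtype)

def apply_rope(x, sin, cos):
    # Rotate-half formulation: x * cos + rotate_half(x) * sin.
    half = x.shape[-1] // 2
    x1, x2 = x[..., :half], x[..., half:]
    return x * cos + torch.cat((-x2, x1), dim=-1) * sin

# Example: 32K positions, head dim 128, activations in BF16.
sin, cos = rotary_sin_cos(32768, 128)
q = torch.randn(32768, 128, dtype=torch.bfloat16)
q_rot = apply_rope(q, sin, cos)
```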

@anogkongda
Author

After testing, I found the following:

  1. Reducing the model size (e.g., cutting the original 32-layer LLaMA 7B down to 16 layers) prevents the loss from becoming NaN.

  2. Switching from BF16 to FP16 also prevents the loss from becoming NaN.

  3. When the loss becomes NaN, there is no protection mechanism, so all model parameters end up as NaN (see the sketch after this list).

  4. When Sequence Parallel is enabled, the BF16 optimizer can overflow under certain circumstances, possibly due to numerical errors.

  5. I am still observing how the loss evolves during FP16 training.
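On point 3: FP16 training goes through a dynamic loss scaler that skips the update when gradients overflow, whereas the BF16 path has no equivalent safety net here, so one bad iteration contaminates every parameter. A minimal sketch of the kind of guard I mean (a hypothetical helper in plain PyTorch, not Megatron-DeepSpeed's optimizer code):

```python
import torch

def guarded_step(model, optimizer, loss):
    """Skip the update when the loss or any gradient is non-finite,
    so a single bad iteration cannot poison the whole model.
    (Hypothetical helper, for illustration only.)"""
    if not torch.isfinite(loss):
        optimizer.zero_grad(set_to_none=True)
        return False                      # step skipped
    loss.backward()
    for p in model.parameters():
        if p.grad is not None and not torch.isfinite(p.grad).all():
            optimizer.zero_grad(set_to_none=True)
            return False                  # step skipped
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return True                           # step applied
```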

@inkcherry

Hi @anogkongda, I also encountered the NaN issue and resolved it with #399. Could you try it and see whether it solves your problem?

@anogkongda
Author

> Hi @anogkongda, I also encountered the NaN issue and resolved it with #399. Could you try it and see whether it solves your problem?

Thank you, I will try this and report my results ASAP.

@anogkongda
Author

It doesn't work in my case. I'm still investigating to get this working correctly.
