
Unable to reproduce performance #12

Open

guozhiyao opened this issue Feb 22, 2024 · 10 comments

Comments

@guozhiyao commented Feb 22, 2024

  • base model: alignment-handbook/zephyr-7b-sft-full
  • train data: UCLA-AGI/SPIN_iter0

I used the default hyperparameters to train the model and evaluated it locally with HuggingFaceH4/open_llm_leaderboard. The results on allenai/ai2_arc are as follows:

  • base model: 0.5819112627986348
  • epoch 1 (step 778): 0.5989761092150171
  • epoch 2 (step 1556): 0.5964163822525598
  • epoch 3 (step 2334): 0.590443686006826

These do not match the performance reported in the paper (63.40).
@angelahzyuan (Collaborator) commented Feb 22, 2024

Try setting --num_train_epochs=6. We've uncommented this flag in the revised finetune.sh script (it was previously commented out). You can still stop after the first few epochs.
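For reference, a minimal sketch of what this flag controls, assuming SPIN's trainer is built on Hugging Face `TrainingArguments` (so `--num_train_epochs` in finetune.sh maps to the field below); the output path is a hypothetical placeholder:

```python
# Minimal sketch, assuming an HF TrainingArguments-based trainer;
# output_dir is a hypothetical placeholder.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs/spin-iter0",  # hypothetical path
    num_train_epochs=6,               # what --num_train_epochs=6 sets
    save_strategy="epoch",            # keep per-epoch checkpoints so you can stop early
)
```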

@guozhiyao (Author)

> Try setting --num_train_epochs=6. We've uncommented this flag in the revised finetune.sh script (it was previously commented out). You can still stop after the first few epochs.

@angelahzyuan Thanks for your reply, but the performance of the baseline model is not consistent with the results in the paper.
[screenshot: baseline evaluation results]

@guozhiyao (Author)

> Try setting --num_train_epochs=6. We've uncommented this flag in the revised finetune.sh script (it was previously commented out). You can still stop after the first few epochs.

@angelahzyuan I have changed the number of epochs to 6. I tested the epoch-1 model on allenai/ai2_arc, and it still does not match the reported result.
[screenshot: epoch-1 evaluation results]

@yihedeng9 (Collaborator)

Please check our README for the following:

  1. We noticed that the Alignment Handbook has updated their configuration and SFT checkpoint since our experiments. The configuration and SFT model from the Alignment Handbook that we used in our experiments for data generation and fine-tuning are the older versions (Config, Model). If you wish to use the newest SFT model, you need to generate your own data instead of using the datasets we provided on Hugging Face.

  2. For our evaluation on the Open LLM Leaderboard, please use the lm-evaluation-harness repository at v0.4.0. Also, note that we set the number of few-shot examples to be the same as instructed on the Leaderboard's 'About' page (see the sketch after this list). The leaderboard uses a different version of lm-eval-harness; different evaluation versions result in different scores, but the trend of improvement remains the same. Lastly, we have also uploaded our models to the leaderboard, so you can look up the results directly (https://huggingface.co/datasets/open-llm-leaderboard/details_UCLA-AGI__zephyr-7b-sft-full-SPIN-iter0).
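For illustration, a minimal sketch of a local v0.4.0 run via the harness's Python API; the model id is taken from the leaderboard link above, the 25-shot setting follows the Leaderboard's 'About' page for ARC, and the batch size and dtype are assumptions:

```python
# Minimal sketch, assuming lm-evaluation-harness pinned to v0.4.0
# (pip install lm-eval==0.4.0); batch size and dtype are assumptions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=UCLA-AGI/zephyr-7b-sft-full-SPIN-iter0,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,  # the Leaderboard's 'About' page specifies 25-shot for ARC
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```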

@angelahzyuan (Collaborator) commented Feb 22, 2024

> > Try setting --num_train_epochs=6. We've uncommented this flag in the revised finetune.sh script (it was previously commented out). You can still stop after the first few epochs.
>
> @angelahzyuan I have changed the number of epochs to 6. I tested the epoch-1 model on allenai/ai2_arc, and it still does not match the reported result.

@guozhiyao Furthermore, we've retrained the Zephyr-7b-sft-full model with the latest updates from the Alignment Handbook. Our evaluation on ARC indicates an improvement from 57.51 (sft-full) to 60.75 (SPIN-iter0). While different base models and evaluation methods may produce varying results, the overall performance trend remains consistent.
[screenshot: ARC results for the retrained sft-full and SPIN-iter0 models]

@lewtun (Contributor) commented Feb 23, 2024

Hello, @guozhiyao I'm one of the authors of the alignment-handbook 👋

Indeed, we updated some of the configs used to train the SFT model in this PR (huggingface/alignment-handbook#88) because some bugs were fixed in TRL that changed the learning rate scheduler.

If you want to use the original checkpoint we released with the handbook, you can load the model with revision=ac6e600eefcce74f5e8bae1035d4f66019e93190.
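For anyone following along, a minimal sketch of pinning that revision, assuming the standard transformers `from_pretrained` API:

```python
# Minimal sketch: pin the pre-update SFT checkpoint by commit hash.
from transformers import AutoModelForCausalLM, AutoTokenizer

REVISION = "ac6e600eefcce74f5e8bae1035d4f66019e93190"

model = AutoModelForCausalLM.from_pretrained(
    "alignment-handbook/zephyr-7b-sft-full", revision=REVISION
)
tokenizer = AutoTokenizer.from_pretrained(
    "alignment-handbook/zephyr-7b-sft-full", revision=REVISION
)
```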

@lewtun (Contributor) commented Feb 23, 2024

One related question for @angelahzyuan: do you happen to know why the GSM8K values on the leaderboard are so different from those shown in your paper?

It seems like iter0 is actually the best on the leaderboard, while your paper shows iter2 and iter3 are considerably better.

[screenshots: GSM8K scores on the Open LLM Leaderboard vs. the paper]

@yihedeng9 (Collaborator)
Hi, the difference in scores between the leaderboard and our paper is mainly due to the difference in lm-evaluation-harness versions. We used v0.4.0, which produces different evaluation results compared to the older version used by the leaderboard, especially on GSM8K as we observed.

@lewtun (Contributor) commented Feb 23, 2024

> Hi, the difference in scores between the leaderboard and our paper is mainly due to the difference in lm-evaluation-harness versions. We used v0.4.0, which produces different evaluation results compared to the older version used by the leaderboard, especially on GSM8K as we observed.

Great, thanks for the clarification!

@acherstyx

Hello @yihedeng9, I noticed that the task names and evaluation metrics in v0.4.0 of lm-evaluation-harness differ from those described on the Open LLM Leaderboard (About -> REPRODUCIBILITY). Could you please share the evaluation scripts you used to run the evaluation locally? Thanks!
