
the logps decrease #13

zhanghaoie opened this issue Feb 22, 2024 · 2 comments

@zhanghaoie

Hi, I have tried SPIN with an agent dataset. As training progresses, the logps become smaller and smaller.

I understand that as the logps decrease, the model becomes increasingly uncertain about its predictions. Am I right?

Have you encountered this situation during fine-tuning with SPIN?

{'loss': 0.683, 'learning_rate': 1.4492753623188405e-07, 'rewards/chosen': -0.04396624490618706, 'rewards/rejected': -0.06808599829673767, 'rewards/accuracies': 0.668749988079071, 'rewards/margins': 0.024119755253195763, 'logps/rejected': -1.543066382408142, 'logps/chosen': -1.1352736949920654, 'logps/rejected_ref': -1.5335168838500977, 'logps/chosen_ref': -1.1292275190353394, 'logits/rejected': -2.7396950721740723, 'logits/chosen': -2.740943431854248, 'epoch': 0.06}
{'loss': 0.6719, 'learning_rate': 1.8115942028985507e-07, 'rewards/chosen': -0.0987342819571495, 'rewards/rejected': -0.14318521320819855, 'rewards/accuracies': 0.675000011920929, 'rewards/margins': 0.04445093870162964, 'logps/rejected': -1.5421949625015259, 'logps/chosen': -1.1360487937927246, 'logps/rejected_ref': -1.5208072662353516, 'logps/chosen_ref': -1.1223704814910889, 'logits/rejected': -2.727785587310791, 'logits/chosen': -2.723764657974243, 'epoch': 0.07}
{'loss': 0.6527, 'learning_rate': 2.1739130434782607e-07, 'rewards/chosen': -0.1672864407300949, 'rewards/rejected': -0.2939082682132721, 'rewards/accuracies': 0.7250000238418579, 'rewards/margins': 0.126621812582016, 'logps/rejected': -1.597663164138794, 'logps/chosen': -1.1704113483428955, 'logps/rejected_ref': -1.555121898651123, 'logps/chosen_ref': -1.1455411911010742, 'logits/rejected': -2.6994194984436035, 'logits/chosen': -2.698334217071533, 'epoch': 0.09}
{'loss': 0.6357, 'learning_rate': 2.536231884057971e-07, 'rewards/chosen': -0.24780841171741486, 'rewards/rejected': -0.41642823815345764, 'rewards/accuracies': 0.6812499761581421, 'rewards/margins': 0.1686197966337204, 'logps/rejected': -1.5889415740966797, 'logps/chosen': -1.1762551069259644, 'logps/rejected_ref': -1.5331004858016968, 'logps/chosen_ref': -1.1385315656661987, 'logits/rejected': -2.608551263809204, 'logits/chosen': -2.595763683319092, 'epoch': 0.1}
{'loss': 0.6215, 'learning_rate': 2.898550724637681e-07, 'rewards/chosen': -0.3489204943180084, 'rewards/rejected': -0.5186128616333008, 'rewards/accuracies': 0.6625000238418579, 'rewards/margins': 0.16969238221645355, 'logps/rejected': -1.4930452108383179, 'logps/chosen': -1.2134768962860107, 'logps/rejected_ref': -1.4123075008392334, 'logps/chosen_ref': -1.1594829559326172, 'logits/rejected': -2.72218656539917, 'logits/chosen': -2.7101638317108154, 'epoch': 0.12}
{'loss': 0.5946, 'learning_rate': 3.260869565217391e-07, 'rewards/chosen': -0.5540724396705627, 'rewards/rejected': -0.7824233174324036, 'rewards/accuracies': 0.6625000238418579, 'rewards/margins': 0.22835083305835724, 'logps/rejected': -1.6317428350448608, 'logps/chosen': -1.18325674533844, 'logps/rejected_ref': -1.5138404369354248, 'logps/chosen_ref': -1.1028623580932617, 'logits/rejected': -2.5669362545013428, 'logits/chosen': -2.5608177185058594, 'epoch': 0.13}
{'loss': 0.5704, 'learning_rate': 3.6231884057971015e-07, 'rewards/chosen': -0.7594733238220215, 'rewards/rejected': -1.1346296072006226, 'rewards/accuracies': 0.6812499761581421, 'rewards/margins': 0.37515631318092346, 'logps/rejected': -1.6324068307876587, 'logps/chosen': -1.2670494318008423, 'logps/rejected_ref': -1.4673458337783813, 'logps/chosen_ref': -1.1539455652236938, 'logits/rejected': -2.371572971343994, 'logits/chosen': -2.3651740550994873, 'epoch': 0.15}
{'loss': 0.5638, 'learning_rate': 3.9855072463768114e-07, 'rewards/chosen': -0.9244664311408997, 'rewards/rejected': -1.3455318212509155, 'rewards/accuracies': 0.6625000238418579, 'rewards/margins': 0.4210655093193054, 'logps/rejected': -1.7487471103668213, 'logps/chosen': -1.3893951177597046, 'logps/rejected_ref': -1.5395921468734741, 'logps/chosen_ref': -1.2470614910125732, 'logits/rejected': -2.358799457550049, 'logits/chosen': -2.351849317550659, 'epoch': 0.16}
{'loss': 0.5619, 'learning_rate': 4.3478260869565214e-07, 'rewards/chosen': -0.9872746467590332, 'rewards/rejected': -1.5160770416259766, 'rewards/accuracies': 0.6312500238418579, 'rewards/margins': 0.5288023352622986, 'logps/rejected': -1.579150676727295, 'logps/chosen': -1.2077702283859253, 'logps/rejected_ref': -1.358649492263794, 'logps/chosen_ref': -1.0587767362594604, 'logits/rejected': -2.2918102741241455, 'logits/chosen': -2.295361042022705, 'epoch': 0.17}
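For readers with the same question: in a DPO-style trainer (which SPIN's trainer resembles, per the comment below), the loss only pushes up the *margin* between the implicit rewards, so both chosen and rejected logps can drift downward as long as the rejected (generated) side falls faster. A minimal sketch of the relationship, assuming a trl-style formulation with an illustrative beta=0.1 (the real value comes from the training config):

```python
import torch
import torch.nn.functional as F

def dpo_style_metrics(policy_chosen_logps: torch.Tensor,
                      policy_rejected_logps: torch.Tensor,
                      ref_chosen_logps: torch.Tensor,
                      ref_rejected_logps: torch.Tensor,
                      beta: float = 0.1):
    # Implicit reward: how far the policy's logp has moved from the reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    margins = chosen_rewards - rejected_rewards
    # Only the margin enters the loss, so both absolute logps can decrease
    # as long as the rejected (generated) side decreases faster.
    loss = -F.logsigmoid(margins).mean()
    return loss, chosen_rewards, rejected_rewards, margins
```

That is the pattern in the logs above: logps/chosen and logps/rejected both fall relative to their _ref values, yet rewards/margins grows steadily, so the falling logps do not by themselves mean the model is losing confidence in the real data relative to the generated data. (The logged values are running averages, so they will not reconcile exactly with the formula at any single step.)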

@zhanghaoie
Author

This is because my original code was modified from trl's DPOTrainer, which is why the logs use DPO-style metric names.

@zhanghaoie
Author

May I ask why the logs here are 'rewards/chosen' and 'rewards/rejected'? The code should log 'rewards/real' and 'rewards/generated'.

To clarify: 'rewards/chosen' corresponds to rewards_real, and 'rewards/rejected' corresponds to rewards_generated.
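A hypothetical sketch of that mapping, for anyone cross-referencing the SPIN-style metric names against the DPOTrainer-style names in the logs above (the alias table is illustrative, not taken verbatim from either codebase):

```python
# Illustrative aliases: SPIN-style names on the left, the DPOTrainer-style
# names that appear in the logs above on the right.
METRIC_ALIASES = {
    "rewards/real": "rewards/chosen",
    "rewards/generated": "rewards/rejected",
    "logps/real": "logps/chosen",
    "logps/generated": "logps/rejected",
}

def rename_metrics(metrics: dict) -> dict:
    # Rename any SPIN-style keys; leave everything else untouched.
    return {METRIC_ALIASES.get(key, key): value for key, value in metrics.items()}
```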
