
Some questions about finetune #23

Open
Shimmer8001 opened this issue Apr 17, 2023 · 3 comments

Comments

@Shimmer8001

Recently I came across some papers on molecular contrastive learning, and I was very glad to find your team's paper, Molecular Contrastive Learning of Representations via Graph Neural Networks, which I learned a lot from. However, when I use the pre-trained model you provided for downstream tasks with the default configuration file config_finetune.yaml, the model never reaches the performance reported in the paper. Could you provide the hyperparameter configuration files used for the downstream tasks on each dataset? For concreteness, a sketch of how such per-dataset configs could be generated follows below.
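A minimal sketch of generating per-dataset config variants from the base file for a small hyperparameter/seed sweep. The YAML keys `task_name`, `init_lr`, and `seed` are assumptions about the config layout, not confirmed from the repo, and the dataset and value lists are illustrative:

```python
import copy
import itertools
import yaml  # PyYAML

# Load the repo's base finetune config; the keys overridden below are
# assumptions about its layout, not confirmed from the actual file.
with open("config_finetune.yaml") as f:
    base = yaml.safe_load(f)

datasets = ["BBBP", "Tox21", "ClinTox"]  # example MoleculeNet tasks
lrs = [1e-4, 5e-4, 1e-3]                 # candidate learning rates
seeds = [0, 1, 2]                        # repeat runs to report mean/std

for name, lr, seed in itertools.product(datasets, lrs, seeds):
    cfg = copy.deepcopy(base)
    cfg["task_name"] = name  # hypothetical key
    cfg["init_lr"] = lr      # hypothetical key
    cfg["seed"] = seed       # hypothetical key
    out = f"config_{name}_lr{lr}_seed{seed}.yaml"
    with open(out, "w") as f:
        yaml.safe_dump(cfg, f)
    # Each generated file can then be passed to the repo's finetune script.
```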

@danielkaifeng

Hi @Shimmer8001, I fine-tuned the model and got results similar to the paper, with only a minor decrease. I think some random seeds need to be set to replicate exactly the same results. A minimal seeding sketch is below.
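A minimal seeding sketch, assuming the repo's PyTorch setup; the `set_seed` helper and the value 42 are illustrative, not part of the MolCLR codebase:

```python
import os
import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Seed all common RNG sources so repeated finetune runs match."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    # Trade speed for determinism in cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
```

Note that even with full seeding, some CUDA ops are nondeterministic, so small run-to-run variation can remain.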

Besides, did you try pre-training the model on a larger or different dataset to improve the fine-tuning?

@zhangtia16

Same here; I cannot reproduce the results shown in the paper on many datasets.

@happyCoderZC

Hi @danielkaifeng, I can't reproduce the results shown in the paper either. I suspect the hyperparameters and random seed I set are not suitable. Could you share the hyperparameters and random seed you used? Thank you very much.
