
[Discuss:] how to improve the performance? #71

Open
SeekPoint opened this issue Dec 26, 2016 · 1 comment

@SeekPoint

Am I right about the following?

--dataset: the bigger, the better the performance?

--hiddenSize: the bigger, the better the performance?

--maxEpoch: the bigger, the better the performance?

--batchSize: the bigger, the better the performance?

@Jordy-VL

Your question is rather underspecified... I suggest you look up the bias-variance tradeoff, a standard concept in machine learning. It will give you a better feel for how to improve models, what over/underfitting is, distribution mismatches, etc.
In general, a bigger dataset "might" improve performance, but your parametrization should then be adjusted to match. A hiddenSize that is too small will not be able to generalize; one that is too big will overfit on the training data. As for training length, the "early stopping criterion" was invented for a reason: you keep training only as long as quality (usually measured by MSE or a task-specific loss function) keeps improving across the different sets (if applicable: train, train-dev, dev-test, test). For batchSize, I am not certain that changing this parameter will improve performance (again, what do you mean by performance?), though it will probably affect training time.
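To make the early-stopping point concrete, here is a minimal sketch of such a loop in plain Python. It is not tied to this repo's code; `train_one_epoch` and `evaluate` are hypothetical callables you would supply, and `patience` is the number of epochs without dev-set improvement you tolerate before stopping.

```python
def train_with_early_stopping(train_one_epoch, evaluate, max_epochs=50, patience=3):
    """Train until the dev-set loss stops improving (early stopping sketch).

    train_one_epoch: hypothetical callable that trains the model for one epoch.
    evaluate: hypothetical callable returning the current dev-set loss (lower is better).
    """
    best_dev_loss = float("inf")
    bad_epochs = 0  # consecutive epochs without improvement

    for epoch in range(max_epochs):
        train_one_epoch()          # update model parameters on the training set
        dev_loss = evaluate()      # e.g. MSE or a task-specific loss on the dev set

        if dev_loss < best_dev_loss:
            best_dev_loss = dev_loss  # still improving: reset the counter
            bad_epochs = 0
        else:
            bad_epochs += 1           # no improvement this epoch
            if bad_epochs >= patience:
                break                 # stop before the model overfits further

    return best_dev_loss
```

The same idea applies regardless of maxEpoch: treat it as an upper bound, not a target, and let the dev-set loss decide when to stop.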
