
Some review on the tests and their results #5

Open
ProbShin opened this issue Mar 11, 2020 · 4 comments


@ProbShin

This is part of openjournals/joss-reviews#1843.

First of all, as a reviewer there are two parts to evaluate: the implementation of the code and the paper itself. I am not sure whether I should focus on one part or both. This post is only about the paper.

  • First, regarding the benchmark results: the table contains columns for the graph name, number of vertices, optimum, best result, and worst result. This information is very useful, showing the basic properties of each graph and the capacity of the algorithm. However, could the author add one more column with the number of edges? The numbers of vertices and edges together give a general idea of how large and dense a graph is (for an undirected graph, density = 2|E| / (|V|·(|V|−1))), and size and density play an important role in the chromatic number: for a very dense graph the chromatic number is close to the number of vertices, while for a very sparse graph it is very small. Plus, the number of edges is very easy to obtain :)

  • Second, I was confused by the terms Found-Best and Found-Worst. Could the author add a few words to make them clearer? For example: from how many repeated runs are the best/worst results picked, and what is the main reason the algorithm produces different results across runs? Is it a different random seed, a different number of iterations, or even the operating system's scheduling?

  • Then, what is the reasoning behind Table 3 in the ColPack comparison? The table contains only a subset of 9 graphs. Why did the author pick these 9 graphs? Is there a particular reason, or were they chosen at random? Could the author help me out?

  • Finally, since the test results are very well structured, I recommend the author also provide the geometric mean in the last row of each results table (a sketch follows this list). The geometric mean (1) gives a better overall performance evaluation, and (2) is easy to calculate and requires no extra experiments.
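
For concreteness, a minimal sketch of the suggested geometric-mean row, assuming the per-graph results are already collected in a list (the numbers below are made up for illustration, not taken from the paper):

```python
import math

def geometric_mean(values):
    # Compute the mean in log space: numerically stable and avoids
    # overflow for long lists of large color counts.
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical "Found-Best" column for a handful of graphs; the real
# numbers would come from the paper's results tables.
found_best = [49, 15, 28, 31, 9]
print(round(geometric_mean(found_best), 2))  # value for the table's last row
```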

@shah314 (Owner) commented Mar 11, 2020

@ProbShin @jedbrown

  • I have added the number of edges to the README.md in the repository. There is no space in the paper, so I left this information out of the paper.
  • The found best and found worst come from running the algorithm 10 times, each time with a different random seed (a simplified sketch follows this list).
  • The ColPack comparison was done on a few randomly chosen graphs. I think comparing the algorithms on these 9 graphs is sufficient; if you think more comparison is required, I can add more graphs.
  • I have added a geometric mean as the last row of the table in the paper.
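
For illustration, a simplified sketch of this protocol, with a placeholder `color_graph(graph, seed)` standing in for the actual solver (the dummy body only mimics seed-dependent output):

```python
import random

def color_graph(graph, seed):
    # Placeholder for the randomized coloring heuristic; the dummy
    # body only mimics seed-dependent color counts.
    rng = random.Random(seed)
    return 30 + rng.randrange(5)

def best_and_worst(graph, runs=10):
    # Run the heuristic once per seed and keep the smallest (best)
    # and largest (worst) color counts over all runs.
    results = [color_graph(graph, seed) for seed in range(runs)]
    return min(results), max(results)

best, worst = best_and_worst(graph=None, runs=10)
print(f"Found-Best = {best}, Found-Worst = {worst}")
```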

Thanks for the detailed review!

@ProbShin (Author)

Great, thanks for the update.

BTW, is there any way for me to get the updated paper, or where can I find it?

@jedbrown (Contributor)

We ask Whedon (the JOSS bot) to rebuild the PDF in the review thread: openjournals/joss-reviews#1843 (comment)

@ProbShin (Author)

Great, thanks very much!

ProbShin reopened this Mar 12, 2020