Doubts in the training code #2
Comments
Hi @epratheeban, Thanks for your question.
Best,
@qinliuliuqin Thanks for the message. I'm really interested in exploring your approach from the paper further. I don't particularly need your dataset, as I have my own dataset of 200 head CTs with segmentation masks of the bones and 59 annotated cephalometric landmarks. I managed to rewrite your code base to evaluate the validation set during the training pipeline and save the best model based on validation loss. Could you share the config and model network used for training, along with the scripts used for preprocessing the CT DICOM or NIfTI files? I'd really appreciate any further information. Thanks in advance.
Hi @epratheeban, Thanks for your questions again.
Let me know if you have more questions.
Hi @qinliuliuqin , Thank you so much for sharing the code. Looking through the training config, I have one question to clarify before I adapt the code. Am I right?
Hi @epratheeban , Yes, we want to deploy our model on CPU machines for practical applications, so we split all landmarks into different groups and use a separate model to detect each group. We should have mentioned this in our paper. If you don't have memory constraints, you can simply merge all landmarks together and train a single model to detect them. However, you need to be careful because some landmarks may be very close.
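The grouping strategy described above can be sketched as follows. This is a hypothetical illustration (the group size and landmark names are made up, not taken from the repository):

```python
# Partition a list of landmark names into fixed-size groups so that a
# separate, smaller detection model can be trained per group (useful
# when memory is limited, as described above).

def split_landmarks(landmarks, group_size):
    """Split landmarks into consecutive groups of at most group_size."""
    return [landmarks[i:i + group_size]
            for i in range(0, len(landmarks), group_size)]

# Hypothetical example: 59 landmarks split into groups of 8 gives
# 8 groups, the last one holding the remaining 3 landmarks.
landmarks = [f"L{i}" for i in range(59)]
groups = split_landmarks(landmarks, 8)
print(len(groups), [len(g) for g in groups])
```

One model would then be trained per group, and predictions merged afterwards.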
Hi @qinliuliuqin , Thanks for the info. After inspecting the landmark mask labels, some landmarks are indeed very close together, as you said. For now I will start with 8 classes and gradually increase the number to see how performance changes.
Hi @epratheeban , Thanks for your follow-up questions. It seems you are making progress. As for your questions:
Feel free to let me know if you have any other follow-up questions. Best,
Hi @qinliuliuqin , For some reason the landmarks fall outside the voxel region when I generate the landmark masks. The problem is that we used a commercial tool to annotate the landmarks, and unfortunately nobody knows which coordinate system the landmarks are in. I tried converting back and forth between LPS and world coordinates a few different ways, but nothing has worked so far. Could you please advise me on a few things:
By the way, I loaded one of the NIfTI files in Slicer and tried to place the landmark, as you can see in the image. The landmark point in the RAS coordinate system is as shown there, but the annotated landmark from this commercial tool is (178.78, 98.14, 101.67). When I convert these coordinates to voxel coordinates using the function, I get a different result, so now I'm confused about which landmark coordinates I should choose. Sorry for the basic questions, I'm a noob in medical image processing. Thanks
Hi @epratheeban , No worries, the issue seems to be an easy one.
Let me know if you have other questions. Best,
Thanks Qin. The picture explains how I have to annotate the landmarks. I will probably start by converting the RAS coordinates to RAI. One last question about the preprocessing step: some of the DICOMs have an image origin that is not (0,0,0). In this case, is it OK to recenter them to (0,0,0), or should I just keep the existing origin? Is this step essential? Thanks again,
Hi @epratheeban , I would suggest that you keep the images as they were because discarding/changing image information might cause undesirable consequences. For example, without the correct origins, you can't properly align two medical images for visualization using ITK-SNAP. As long as you can record the correct world coordinates, manually changing image information would be unnecessary. Another solution you may consider is to directly record the voxel coordinates, and then convert them to world coordinates using SimpleITK. However, voxel coordinates might not be as accurate as world coordinates especially when voxel spacing is large. Best, |
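The world-to-voxel conversion Qin recommends doing with SimpleITK can be sketched as below. This is a minimal illustration of the underlying math, assuming an axis-aligned (identity) direction matrix; SimpleITK's `TransformPhysicalPointToIndex` performs the same computation while also handling arbitrary direction matrices. The origin and spacing values here are made up for illustration:

```python
import numpy as np

def world_to_voxel(point, origin, spacing):
    """Convert a world (physical) coordinate to a voxel index, assuming
    an identity direction matrix:

        index = (point - origin) / spacing, rounded to the nearest voxel.

    Keeping the original image origin (rather than resetting it to
    (0,0,0)) keeps this mapping consistent with the stored image.
    """
    point = np.asarray(point, dtype=float)
    origin = np.asarray(origin, dtype=float)
    spacing = np.asarray(spacing, dtype=float)
    return np.round((point - origin) / spacing).astype(int)

# Hypothetical example with a non-zero origin and anisotropic spacing:
idx = world_to_voxel(point=(10.0, 22.5, -4.0),
                     origin=(-5.0, 2.5, -10.0),
                     spacing=(1.0, 2.0, 3.0))
print(idx)  # -> [15 10  2]
```

This also shows why resetting the origin is unnecessary: the origin is simply subtracted out during the conversion, so recording correct world coordinates is enough.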
Thanks for your answer, it is indeed helpful. I'm sharing what I found here; it may be useful for someone reading this thread.

For labelling landmarks in 3D, our annotators find it easier to annotate in Slicer rather than in ITK-SNAP, so they annotated a point in 3D on the NIfTI image. The NIfTI file:

The RAS coordinate for one landmark:

Converting this RAS coordinate to voxel coordinates directly with your code, I got the voxel coordinates below.

After your previous answer, I converted the RAS coordinates to RAI coordinates. To verify that these voxel coordinates land in the right place, I loaded the landmark mask as a segmentation in Slicer. The green dot is the segmentation mask label, and it sits above the red one. I therefore decided to reset the original image origin to (0,0,0) and loaded the reoriented NIfTI file in Slicer in parallel; I still got a bit of offset from the actual points. So I decided to label the NIfTI image with the reset origin. Now the RAS point = [-298.701, -103.237, 178.236] converts to RAI = [298.701, 103.237, 178.236], and the voxel coordinates are the same as the RAI coordinates, [298.701, 103.237, 178.236], because the image origin was recentered to (0,0,0). I then loaded the generated landmark mask again and overlaid it on the reset NIfTI image; as you can see below, the landmark coordinate is at the center of the segmentation mask.

After much exploration, I found that the only way to solve this was resetting the image origin of the NIfTI file. Maybe I am wrong; please correct me if I have made any mistake. Thanks, Pratheeban
Hi @epratheeban , Thanks for the detailed explanation. It seems you made a mistake while transforming the world coordinates to voxel coordinates. As shown below, the voxel coordinates should be (299, 105, 179) in your case. I also verified this in a notebook. Could you double check it? Thank you. Best,
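The RAS-vs-LPS mix-up running through this exchange can be sketched as follows. Slicer reports points in RAS, while ITK/SimpleITK and DICOM work in LPS; converting between them just negates the first two axes. This is an illustrative sketch with made-up origin and spacing values, not the exact numbers from the thread:

```python
import numpy as np

def ras_to_lps(point_ras):
    """Slicer uses RAS; ITK/DICOM use LPS. R->L and A->P flip sign."""
    x, y, z = point_ras
    return (-x, -y, z)

def lps_to_voxel(point_lps, origin, spacing):
    """World (LPS) coordinate to voxel index, assuming an identity
    direction matrix."""
    p = np.asarray(point_lps, dtype=float)
    return np.round((p - np.asarray(origin, dtype=float))
                    / np.asarray(spacing, dtype=float)).astype(int)

# Hypothetical example: a point picked in Slicer (RAS) mapped into the
# voxel grid of an image whose metadata is stored in LPS. Origin and
# spacing here are assumptions, so the resulting index is illustrative.
ras = (-298.701, -103.237, 178.236)
lps = ras_to_lps(ras)  # (298.701, 103.237, 178.236)
idx = lps_to_voxel(lps, origin=(0.0, 0.0, 0.0), spacing=(1.0, 1.0, 1.0))
print(lps, idx)  # idx -> [299 103 178]
```

With the image's real (non-zero) origin and spacing plugged in, the same two steps reproduce the correct voxel index without ever editing the NIfTI header.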
@qinliuliuqin
@qinliuliuqin, thank you very much for publishing your code and sharing your research with the community. I found your work extremely valuable and insightful. I noticed that while your paper discusses both landmark coordinate prediction and segmentation mask prediction, the provided code includes only an example for landmark coordinate prediction. Would it be possible for you to add an example case for segmentation as well? I believe this addition would greatly benefit others interested in exploring the segmentation aspect of your work. Thank you once again for your contribution and for considering this request.
Hi @sankardrbreaths , Thanks for your interest. Actually, I have a 3D medical image segmentation code base (https://github.com/qinliuliuqin/Medical-Segmentation3d-Toolkit). It shares the same segmentation backbone with this detection method. I separated segmentation and detection into two repositories to make them more general. It's a great suggestion that I merge them to help others reproduce the results in the paper. I will do it once I get time!
Hi Qin
Thanks for publishing your code. However, there are a few things I do not understand. Please reply when you have time.
Is there any specific reason for that?