
Definition of kernel sizes does not agree with most papers #96

Open
mylyu opened this issue Jul 20, 2022 · 2 comments

Comments

@mylyu

mylyu commented Jul 20, 2022

First, thanks for such a great project. I really love using this toolbox.

I noticed this problem when using a kernel size of (5, 5) at a high acceleration factor (R), because the results were far too poor. After increasing the kernel size by roughly a factor of R, the results became normal. I then checked the code and found that the kernel size is used as a region in k-space, so the actual number of data points used for interpolation may be far fewer than the "kernel size" suggests (a small sketch below makes this concrete). I believe that in most published papers, the kernel size means exactly how many points are used to interpolate each missing point.

Am I right?
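
To make the distinction concrete, here is a small sketch (purely illustrative, not pygrappa code; it assumes regular undersampling of every R-th line):

```python
# Hypothetical illustration, assuming regular undersampling of every R-th
# phase-encode line: count how many acquired samples actually fall inside a
# (5, 5) kernel "region" versus the 25 sources the kernel size seems to promise.
import numpy as np

R = 4                       # acceleration factor
kx, ky = 5, 5               # kernel size, interpreted as a k-space region
mask = np.zeros((kx, ky), dtype=bool)
mask[::R, :] = True         # only every R-th line is acquired

print(mask.sum())           # 10 acquired sources inside the region, not 25
```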

@mckib2
Owner

mckib2 commented Jul 26, 2022

@mylyu I believe you are correct. The original thought process for specifying a region was to not have to assume specific, regular line geometries and allow for potentially arbitrary, non-repeating sampling patterns. If I understand correctly, this is what the GE ARC implementation does and what Mikki Lustig replicates in the GRAPPA implementation bundled with his SPIRiT code here.

But I agree that the "total number of neighboring sources used for interpolation" is almost certainly what's expected here. I'll have to think of an implementation strategy. The first couple of things that come to mind are:

  1. use a kd-tree and query for the $\prod_{k \in \text{kernel}} k$ nearest neighbor samples, OR
  2. use a greedy strategy that grows a hyper-rectangle (with side ratios determined by the kernel size?) outward in all dimensions until the required number of points is acquired

I'm leaning towards the kd-tree unless you have any ideas?
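
A minimal sketch of the kd-tree idea (illustrative only, not pygrappa's implementation; the function name and shapes here are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_sources(mask, kernel_size=(5, 5)):
    """For each unsampled k-space location, find the prod(kernel_size)
    nearest acquired samples to use as interpolation sources."""
    n_sources = int(np.prod(kernel_size))
    acquired = np.argwhere(mask)    # coordinates of sampled points
    holes = np.argwhere(~mask)      # coordinates of missing points
    tree = cKDTree(acquired)
    # idx[i] holds the n_sources acquired neighbors closest to holes[i]
    _, idx = tree.query(holes, k=n_sources)
    return holes, acquired[idx]

# Toy example: every 4th phase-encode line sampled on a 32x32 grid
mask = np.zeros((32, 32), dtype=bool)
mask[::4, :] = True
holes, sources = nearest_sources(mask)
print(sources.shape)  # (768, 25, 2): 25 source coordinates per missing point
```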

For reference, mdgrappa calculates all possible geometries here

@mylyu
Author

mylyu commented Jul 26, 2022

Thanks for the reply. I agree that Lustig also implemented it this way. It would be great if a reminder could be added to the docs so that users know how to choose a proper kernel size.
