
Wishlist: "shape-based" transform #3

Open
schlegelp opened this issue May 14, 2021 · 20 comments
Open

Wishlist: "shape-based" transform #3

schlegelp opened this issue May 14, 2021 · 20 comments

Comments

@schlegelp (Owner)

Basically, what I want is a Transform that accepts two sets of points (e.g. from two neurons) and then finds an affine (i.e. non-warping) transform that minimises the distance between the two sets. This affine transform could be restricted to, e.g., only translation and rotation but not scaling.

One use case would be to align two neurons of the same cell type but from different segments.

I'm sure I've read about/seen something like this but I can't think of the proper term. @clbarnes: any idea of what I'm talking about?

@clbarnes (Collaborator)

Definitely sounds like something which exists! In the worst case it's a 6-parameter optimisation problem, right (rotation around each axis, plus translation along each axis)? Downsample/prune both neurons ahead of time, or pick the root and major branch points or something, then use something like squared distance to the nearest neighbour as a cost function; you'd only need to construct one KD-tree (because the target is static). Orientation might be a problem for some data. Normalise the translation and rotation parameters to between -1 and 1 (multiply by pi for the rotations, and by the length of the bounding box for the translations) and most optimisers would work, although the cost function would be pretty slow.

I'm practically certain there's some linear algebra which can do it properly, although I don't know what it is, I'm afraid.
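
For illustration, a minimal sketch of the brute-force optimisation described above, assuming scipy is available and using squared nearest-neighbour distance as the cost; the function name and optimiser choice are illustrative, not from any library mentioned in this thread:

import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def fit_rigid_unpaired(source, target):
    # One KD-tree over the static target cloud; squared distance to the
    # nearest target point is the cost.
    tree = cKDTree(target)
    extent = target.max(axis=0) - target.min(axis=0)

    def cost(params):
        # Parameters normalised to roughly [-1, 1]: the first three are
        # Euler angles / pi, the last three are translations / bounding box.
        rot = Rotation.from_euler('xyz', params[:3] * np.pi)
        moved = rot.apply(source) + params[3:] * extent
        dists, _ = tree.query(moved)
        return np.mean(dists ** 2)

    res = minimize(cost, x0=np.zeros(6), method='Nelder-Mead')
    return Rotation.from_euler('xyz', res.x[:3] * np.pi), res.x[3:] * extent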

@schlegelp (Owner, Author)

The thinplate-spline transform in morphops appears to have separate affine + warp parts:

https://github.com/vaipatel/morphops/blob/10b77498e444d2e5a64268b418e5c686041868d0/morphops/tps.py#L165

I'll see if that would be suited, but as you say: something like that MUST already exist. This is probably one of those things where you just need to know the correct term and Google will help. Maybe @aschampion knows?

@aschampion

I'm confused -- are the points of the two sets paired with each other? If so, this is just a least-squares problem and can be solved directly via SVD. If not, it requires some way of doing point matching and is more open-ended, but the most common point-cloud matching approaches do an expectation-maximisation-like process where you generate candidate matches (kNN or similar) based on the previous xform, fit a new xform, and repeat.
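
For the paired case, a minimal sketch of the direct SVD solution (the Kabsch/orthogonal Procrustes approach); not taken from any library mentioned in this thread:

import numpy as np

def fit_rigid_paired(src, dst):
    # Least-squares rotation + translation mapping paired points src -> dst.
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t                                   # apply as R @ p + t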

@aschampion

For the former problem, see the rigid xform fitting in CATMAID.

@schlegelp (Owner, Author)

I'm confused -- are the points of the two sets paired with each other?

Ah, sorry: here the points would not be paired.

@schlegelp (Owner, Author)

This looks interesting: https://github.com/siavashk/pycpd

@aschampion

I use/forked pycpd for mesh registration -- would recommend.

@schlegelp (Owner, Author)

Here's another one: https://github.com/neka-nat/probreg

@schlegelp (Owner, Author) commented Jun 20, 2021

I just had a closer look at pycpd and found it a bit clunky and rather slow. @aschampion: how complex were the meshes you used it for, and was there any trick to getting it to work?

@aschampion

The clouds I was using had ~10-50K vertices. Generating a registration took tens of minutes to an hour, but transforming points was fine, and I typically rendered a warp field once and then used it for most subsequent projections anyway.

@schlegelp (Owner, Author)

Thanks! I'm about to open an issue on the pycpd repo asking for advice on the parameters - I'll tag you in case you have a suggestion.

@schlegelp (Owner, Author)

Actually, never mind: I now get some decent results - my tolerance was too high previously, which made it converge too quickly.

I did run into another issue though: pycpd.DeformableRegistration only works with the original source (i.e. Y), right? So I can't just use it to transform a different set of points? I guess I could make a landmark-based transform out of it but you said you produced a warp field? Could you perhaps briefly summarise how you approached that?

@aschampion

You can transform arbitrary point sets from the source space with the pycpd registrations using the transform_point_cloud method.

Generating a warp field makes sense if you need to amortize computation for xform of many points or image volumes and your input space is bounded and discretized (e.g., locations are reasonably approximated as voxel coordinates in some relatively low-resolution grid). However it's not something you'd want to generate for something like EM-scale skeleton transformation.

Here's some pseudo-ish code for generating a field; details such as which direction (+/- original coordinates) and whether the warp vectors are in physical units, source coordinates, or target coordinates are up to your particular use:

import numpy as np

def registration_to_warp_field(regst, shape, downsample=(1, 1, 1)):
    # Regular (optionally downsampled) grid of source coordinates.
    ranges = tuple(slice(None, r, d) for (r, d) in zip(shape, downsample))
    coords = np.mgrid[ranges].reshape(len(shape), -1).T
    # Push the grid through the fitted registration in chunks.
    x_coords = coords.astype(float)
    CHUNK_SIZE = 1000
    for i in range(0, len(coords), CHUNK_SIZE):
        end = min(i + CHUNK_SIZE, len(coords))
        x_coords[i:end] = regst.transform_point_cloud(x_coords[i:end])
    # Store displacements rather than absolute positions.
    x_coords = x_coords - coords
    dshape = np.ceil(np.asarray(shape) / np.asarray(downsample)).astype(np.int64)
    return x_coords.reshape(*dshape, len(shape))
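
Applying such a field to points would then be a nearest-voxel lookup; this sketch assumes the displacements are stored in the same coordinate frame as the input points, and the helper name is illustrative:

import numpy as np

def apply_warp_field(field, points, downsample=(1, 1, 1)):
    # field has shape (*grid_shape, ndim); points are (N, ndim) source coordinates.
    points = np.asarray(points, dtype=float)
    idx = np.round(points / np.asarray(downsample)).astype(int)
    idx = np.clip(idx, 0, np.asarray(field.shape[:-1]) - 1)   # stay inside the grid
    return points + field[tuple(idx.T)]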

@aschampion

You probably know this already, but note that for DeformableRegistration you almost certainly want to do an affine first, transform your source through the affine, then fit a deformable from the transformed source to the target. For transforming other points, you then just apply the two registrations in sequence.
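
A hedged sketch of that chaining with pycpd (the point clouds are placeholders, and the final line assumes a pycpd version in which DeformableRegistration.transform_point_cloud accepts new points - see the discussion further down):

import numpy as np
from pycpd import AffineRegistration, DeformableRegistration

# Placeholder point clouds standing in for the (downsampled) neurons.
rng = np.random.default_rng(0)
source = rng.normal(size=(200, 3))
target = source @ np.diag([1.1, 0.9, 1.0]) + [5.0, 0.0, -2.0]

# 1. Affine registration for the global alignment.
affine = AffineRegistration(X=target, Y=source)
source_aff, _ = affine.register()

# 2. Deformable registration from the affinely transformed source to the target.
deform = DeformableRegistration(X=target, Y=source_aff)
deform.register()

# 3. Other source-space points go through both registrations in sequence.
other = rng.normal(size=(50, 3))
other_warped = deform.transform_point_cloud(Y=affine.transform_point_cloud(Y=other))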

@schlegelp (Owner, Author) commented Jun 21, 2021

Thanks for the snippet!

You probably know this already, but note that for DeformableRegistration you almost certainly want to do an affine first, transform your source through the affine, then fit a deformable from the transformed source to the target. For transforming other points, you then just apply the two registrations in sequence.

Yes, I used a RigidRegistration, but that also seemed to do a decent enough job for the initial alignment. FYI, I'm trying to align the CNS of Seymour (red) and Igor (green = after rigid; red = rigid + warp) for the guys in Bonn:

[Screenshot: overlay of the Seymour and Igor CNS after rigid and rigid + warp alignment]

I'm still a bit confused re the transform_point_cloud method. It does work for Affine/Rigid transforms but the relevant line in DeformableRegistration is this (link):

return Y + np.dot(self.G, self.W)

Here Y are the (N, 3) points you want to transform, and G/W are of shape (M, M) and (M, 3), respectively, where M is the number of original source points. Consequently, I get a ValueError if I transform a different set of points:

>>> reg.transform_point_cloud(np.array([[0,0,0], [1,1,1]]))
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-217-c28b12d2586a> in <module>
----> 1 deform.transform_point_cloud(np.array([[0,0,0], [1,1,1,]]))

~/Google Drive/Cloudbox/Github/pycpd/pycpd/deformable_registration.py in transform_point_cloud(self, Y)
     66             return
     67         else:
---> 68             return Y + np.dot(self.G, self.W)
     69 
     70     def update_variance(self):

ValueError: operands could not be broadcast together with shapes (2,3) (11784,3) 

What am I missing?

@aschampion

This rang a bell. Looking at my notes, I had a workaround that padded or batched the input point sets to be the same size as the source set, but I removed it when rebasing sometime last year because of the merging of this PR. I think that's not on PyPI.
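
For reference, the deformation a fitted DeformableRegistration encodes can also be evaluated at arbitrary source-space points by rebuilding the Gaussian kernel between the new points and the original source. A sketch using the attribute names pycpd exposes (Y, W, beta) - double-check these against the version you have installed:

import numpy as np

def transform_new_points(deform, Z):
    # Displacement of new points Z is K(Z, Y) @ W, using the same Gaussian
    # width beta that was used to build G = K(Y, Y) during fitting.
    diff = Z[:, None, :] - deform.Y[None, :, :]              # (N, M, D)
    K = np.exp(-np.sum(diff ** 2, axis=2) / (2 * deform.beta ** 2))
    return Z + K @ deform.W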

@schlegelp (Owner, Author)

Ah yes: there are changes to this method in the development branch. To me it looks like the issue is still the same with default settings - are you referring to the low_rank=True case?

@schlegelp (Owner, Author)

Never mind - I looked at the wrong line. Will check it out, thanks!

@schlegelp (Owner, Author)

Quick question for @aschampion: do you know if it is possible to invert a pycpd transform?

@aschampion

Have to do it manually, e.g.:

import numpy as np
import pycpd

def invert_affine_reg(affine: pycpd.AffineRegistration):
    # Forward transform is Y @ B + t, so the inverse is X @ B^-1 - t @ B^-1.
    (B, t) = affine.get_registration_parameters()
    B_inv = np.linalg.inv(B)
    t_inv = -np.dot(t, B_inv)
    t_inv = np.expand_dims(t_inv, 0)
    # Swap source and target and plug in the inverted parameters.
    return pycpd.AffineRegistration(B=B_inv, t=t_inv, X=affine.Y, Y=affine.X)
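
A quick round-trip check of such an inverse might look like this (assuming affine is a fitted AffineRegistration and invert_affine_reg is the helper above):

pts = affine.Y[:10]                                   # a few source-space points
inverse = invert_affine_reg(affine)
back = inverse.transform_point_cloud(Y=affine.transform_point_cloud(Y=pts))
assert np.allclose(back, pts)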
