
Testing Datas #5

Open
liaojinli opened this issue Jun 9, 2022 · 1 comment

Comments

@liaojinli

I would like to ask: how can we test the model on our own data?

@apple2373

I also want to test the model on my own dataset. I tried but failed: the output does not look like a face at all. It would be nice if there were some guidelines for applying the model to our own datasets. From what I can tell, the following is what we need to prepare.

Intrinsic matrix K:

    [[f, 0, cx],
     [0, f, cy],
     [0, 0, 1]]

where f is the focal length, and cx and cy are the coordinates of the principal point of the image (usually W/2 and H/2, unless the images are cropped).

Rotation matrix R -- a 3x3 world-to-camera rotation matrix representing the camera orientation.
Camera position T -- a 3x1 array giving the camera position.

Then,
KRT is K.dot(R).dot(T)
rigid_mat is np.hstack([R,T])
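The construction above can be sketched in NumPy as follows. All the numeric values here (focal length, rotation, position) are hypothetical placeholders for illustration; only the shapes and the two formulas come from the steps above.

```python
import numpy as np

# Hypothetical 512x512 image with a made-up focal length of 1000 px.
W, H, f = 512, 512, 1000.0
cx, cy = W / 2, H / 2

# Intrinsic matrix K.
K = np.array([[f, 0.0, cx],
              [0.0, f, cy],
              [0.0, 0.0, 1.0]])

# Placeholder world-to-camera rotation and camera position
# (identity / origin; in practice these come from COLMAP or similar).
R = np.eye(3)
T = np.zeros((3, 1))

KRT = K.dot(R).dot(T)          # 3x1
rigid_mat = np.hstack([R, T])  # 3x4 [R | T]

print(KRT.shape, rigid_mat.shape)  # (3, 1) (3, 4)
```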

I used COLMAP with this notebook to extract the K, R, and T. https://github.com/google/nerfies/blob/1a38512214cfa14286ef0561992cca9398265e13/notebooks/Nerfies_Capture_Processing.ipynb
Note that, per COLMAP's convention, "The local camera coordinate system of an image is defined in a way that the X axis points to the right, the Y axis to the bottom, and the Z axis to the front as seen from the image."
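One possible pitfall, sketched below under the assumption that the poses come straight from COLMAP: COLMAP stores each image pose as a quaternion qvec plus a world-to-camera translation tvec (x_cam = R · x_world + tvec), so tvec is *not* the camera position; the camera center in world coordinates is -Rᵀ · tvec. The qvec/tvec values below are made up for illustration; whether the model here wants the center or tvec depends on its definition of T.

```python
import numpy as np

def qvec_to_rotmat(qvec):
    """Convert a COLMAP quaternion (qw, qx, qy, qz) to a 3x3 rotation matrix."""
    w, x, y, z = qvec
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*w*z,     2*x*z + 2*w*y],
        [2*x*y + 2*w*z,     1 - 2*x*x - 2*z*z, 2*y*z - 2*w*x],
        [2*x*z - 2*w*y,     2*y*z + 2*w*x,     1 - 2*x*x - 2*y*y],
    ])

# Hypothetical values, in the order COLMAP's images.txt lists them
# (QW QX QY QZ TX TY TZ); identity rotation here for clarity.
qvec = np.array([1.0, 0.0, 0.0, 0.0])
tvec = np.array([[0.1], [0.2], [0.3]])

R = qvec_to_rotmat(qvec)   # world-to-camera rotation
T = -R.T.dot(tvec)         # camera position (center) in world coordinates
```

With the identity rotation, the camera center is simply -tvec; mixing up tvec and the center flips the camera to the opposite side of the scene, which can easily produce meshes that look like nothing.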

Am I doing this right? I used 512x512 images, but the mesh I got is terrible; it doesn't even have the shape of a face.
