Linear Algebra #1516

Open
DirkToewe opened this issue Apr 14, 2019 · 17 comments
@DirkToewe
Contributor

DirkToewe commented Apr 14, 2019

Backpropagatable Linear Algebra methods are a vital part of Machine Learning including but not limited to Neural Nets. Here and there, requests for different LA methods pop up. More and more duplicate PRs regarding LA are submitted. This issue is an attempt to concentrate and coordinate the efforts and - if possible - establish a roadmap. Another motivation for this issue is this unanswered question regarding the future of tf.linalg.

The following methods seem to be particularly useful:

  • QR Decomposition
    • Useful for Linear Least Squares and numerically stable Linear Equation solutions.
    • tf.linalg.qr is implemented but very slow. Backpropagation is not supported.
    • tf.linalg.qrSolve is missing.
    • PR can be found here.
    • With some assistance on the WebGL backend, I can offer a GPU implementation.
  • Cholesky Decomposition
    • Useful for Gaussian Processes and Multivariate Normal Distributions.
    • cholesky and choleskySolve are missing.
    • Issue can be found here.
    • PRs can be found here and here and here.
    • With some assistance on the WebGL backend, I can offer a GPU implementation.
  • LU Decomposition
    • Useful for fast Linear Equation solutions.
    • PRs can be found here and here.
  • Solve
  • LSTSQ
    • Useful for 3D surface triangulation, see @oveddan's comment
    • Could be built on top of SVD and triangularSolve
  • SVD
    • Useful for PCA and Linear Least Squares.
    • Issue can be found here.
    • PR can be found here.
    • I can offer up an implementation from here with backpropagation added on top of it.
    • With some assistance on the WebGL backend, I can offer a GPU implementation.
  • Eigen
    • Useful for PCA and for determining the main "direction" modes of geometric bodies via the Graph Laplacian.
    • I can offer up an implementation from here but no backpropagation.
  • Determinant
    • Easily computed with one of the decompositions (SVD, QR, LU or even Cholesky in the symmetric, positive definite case).
  • (Moore–Penrose) Inverse
    • Easily computed with one of the decompositions; see the sketch after this list.
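To illustrate that last bullet, here is a hedged sketch of how a Moore–Penrose inverse could be built once an SVD op exists. Note that svd, its return convention, and pinv itself are all hypothetical here; nothing like them is in tfjs-core yet (that is exactly this feature request):

// Sketch only: svd(a) is a HYPOTHETICAL op assumed to return [u, s, v]
// with a = u · diag(s) · vᵀ, u: [m,k], s: [k], v: [n,k].
function pinv(a, rcond = 1e-6) {
  const [u, s, v] = svd(a);           // hypothetical op
  const cutoff = s.max().mul(rcond);  // threshold for (numerically) zero singular values
  const sInv = tf.where(s.greater(cutoff), s.reciprocal(), tf.zerosLike(s));
  // pinv(a) = v · diag(1/s) · uᵀ
  return v.mul(sInv).matMul(u, false, true);
}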

It is my impression that the TFJS team is currently too busy to focus on Linear Algebra, which is perfectly understandable considering how quickly TFJS is developing and progressing. On top of that, the PRs (especially mine) may not yet satisfy the TFJS standards. However, without feedback that is hard to fix.

Would it be possible to add tf.linalg as a future milestone to TFJS? If there are no intentions to add more tf.linalg methods to tfjs-core, would it be possible to initiate a new tfjs-linalg sub-project for this purpose?

@caisq
Collaborator

caisq commented Apr 15, 2019

@DirkToewe Just to gather requirements: do you plan to use these linear algebra features in Node.js or browser?

@DirkToewe
Contributor Author

For me that would mostly be the browser. Outside the browser there are many alternatives (TF4Py, PyTorch, ...), while inside the browser I see none. My focus is on having either QR or Cholesky running on the GPU, whichever is faster; that is likely Cholesky, but it could be QR, since QR has more potential for parallelism.

My application would be the compression of 3D model data. It has nothing to do with Neural Nets (yet), but it is highly interesting nonetheless (at least to me).

@tafsiri
Contributor

tafsiri commented Apr 16, 2019

@DirkToewe Thanks for putting this together and collecting all the issues and use cases in one place (and for the PRs!). We are going to discuss in our next team sync and try and respond to this more fully by the end of next week. I think at the moment we are leaning towards figuring out how to make this happen (given our current bandwidth).

@tafsiri
Contributor

tafsiri commented Apr 30, 2019

@DirkToewe We had a chance to talk about this and the summary is that we would like to support more linalg functions but don't have the bandwidth to review all the functionality currently in tf.linalg.

However, we think we can do some work to list the ops we would want to review PRs for, based on broad applicability/use case. @dsmilkov will follow up with some concrete thoughts on op prioritization and the review process to get a subset of tf.linalg into tfjs. Thanks for the consolidated links to the existing PRs.

@DirkToewe
Contributor Author

It's good to hear that you're committed to extending tf.linalg, and I realize how much review work is involved. Over the past few weeks, I've been trying to think of ways to reduce that work. So far, I've come up with two ideas:

One idea would be to port Lapack to the Web using WebAssembly. Lapack is thoroughly tested, so the review work could be limited to the backpropagation implementation and the Wasm bindings. Sadly, I will be of no help there, due to my lack of knowledge of Fortran, Wasm, and JS build tools. Maybe there is some other community member who could do it?

The other idea is that I can offer to give a presentation or prepare a video about the algorithm(s) used in any particular implementation you're interested in.

Let me know if there is anything else I can do to help.

@backspaces

Another way to get the Python stack is the iodide/pyodide project, which uses Wasm to port Python, NumPy, and friends to the browser. JavaScript access is also available:
https://github.com/iodide-project/pyodide/blob/master/docs/using_pyodide_from_javascript.md

@DirkToewe
Contributor Author

DirkToewe commented Jun 8, 2019

One minor downside of Python→JS ports is that they usually come with ginormous package sizes. Pyodide is ~12 MB if I see it correctly (~15 times the size of tf.min.js).

For comparison: EmLapack is ~0.2 MB.

@janosh

janosh commented Oct 22, 2019

Just to add to @DirkToewe's list, tf.linalg.det would be really useful in TFJS as well.

@DirkToewe
Contributor Author

DirkToewe commented Oct 22, 2019

Added it to the list :)

If the matrices are small, you can already compute the determinant using the current implementation of the QR decomposition:

function det( /*tf.Tensor*/a ) {
  const a_shape = a.shape,
          [m,n] = a_shape.slice(-2);
  if( m !== n )
    throw new Error('det(): Matrix/matrices must be square.');

  return tf.stack(
    // flatten any batch dimensions, then handle one matrix at a time
    a.reshape([-1,n,n]).unstack().map( mat => {
      const [q,r] = tf.linalg.qr(mat),
           // diagonal of r via a strided slice over the flattened matrix
           diag_r = r.flatten().stridedSlice([0],[n*n],[n+1]),
            det_r = diag_r.prod(),
            det_q = n%2===0 ? +1 : -1; // <- q is product of n Householder
                                       //    reflections, i.e. det(q) = (-1)^n
      return tf.mul(det_q, det_r);     // det(a) = det(q)·det(r)
    })
  ).reshape( a_shape.slice(0,-2) );    // restore the batch shape
}
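For example:

det( tf.tensor2d([[1,2],[3,4]]) ).print(); // ≈ -2, up to float32 round-off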

@janosh

janosh commented Oct 23, 2019

@DirkToewe Thanks for that implementation. My matrices are indeed small. I refactored your code a little and ran some tests, which showed round-off errors. Trying out different dtypes didn't help. Is this unavoidable with tf.linalg.qr?

const tf = require(`@tensorflow/tfjs`)

const det = tnsr => {
  // copy the shape first: Array.prototype.reverse() mutates in place
  const [n, m, ...dims] = [...tnsr.shape].reverse()
  if (m !== n) throw new Error(`det(): Received non-square matrix.`)
  const mats = tnsr.reshape([-1, n, n]).unstack()
  const dets = mats.map(mat => {
    const r = tf.linalg.qr(mat)[1]
    const diag_r = r.flatten().stridedSlice([0], [n * n], [n + 1])
    const det_r = diag_r.prod()
    // q is product of n Householder reflections, i.e. det(q) = (-1)^n
    const det_q = n % 2 === 0 ? 1 : -1
    return tf.mul(det_q, det_r)
  })
  // dims was read from the reversed shape, so restore the batch order
  return tf.stack(dets).reshape(dims.reverse())
}

const t1 = tf.tensor([[1, 2], [3, 4]], undefined, `int32`)
const t2 = tf.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], undefined, `int32`)

console.log(det(t1).arraySync()) // -1.9999994039535522
console.log(det(t2).arraySync()) // -0.000006132030193839455

Also, tf.linalg.inv should probably be on the list as well.

@DirkToewe
Contributor Author

DirkToewe commented Oct 23, 2019

@janosh Added it to the list as well.

TFJS - sadly - only supports float32. And float32 is scarily inaccurate: the machine epsilon is only ~1.1920928955078125e-7, i.e. the next larger representable number after 4 is ~4.000000476837158. Considering that, the results seem to be within the margin of error.
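You can see this in plain JS via Math.fround, which rounds a number to the nearest float32:

// the spacing between consecutive float32 values in [4,8) is 2**-21
console.log( Math.fround(4 + 2**-22) ); // 4 (rounds back down)
console.log( Math.fround(4 + 2**-21) ); // 4.000000476837158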

@oveddan
Contributor

oveddan commented Oct 30, 2019

tf.linalg.lstsq would be great, as it would support triangulating rays or 2D points with known camera parameters into 3D, as is done in the tensorflow graphics ray.triangulate function.
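As a rough sketch (not a real TFJS API), lstsq could be built on the existing tf.linalg.qr plus a plain-JS back-substitution, since triangularSolve is among the missing ops; the helper name lstsq is purely illustrative:

// Least squares via QR: minimize ‖a·x − b‖ for a of shape [m,n] (m >= n)
// and b of shape [m,k]. Sketch only; assumes a has full column rank.
function lstsq(a, b) {
  const [q, r] = tf.linalg.qr(a);     // reduced QR: q is [m,n], r is [n,n]
  const y = q.transpose().matMul(b);  // reduces the problem to r·x = qᵀ·b
  const n = r.shape[1],
        R = r.arraySync(),
        x = y.arraySync();            // [n,k], overwritten in place below
  // back-substitution on the upper-triangular system r·x = y
  for (let i = n - 1; i >= 0; i--)
    for (let k = 0; k < x[i].length; k++) {
      for (let j = i + 1; j < n; j++) x[i][k] -= R[i][j] * x[j][k];
      x[i][k] /= R[i][i];
    }
  return tf.tensor2d(x);
}

For example, fitting a line c0 + c1·t through the points (0,6), (1,0), (2,0): lstsq(tf.tensor2d([[1,0],[1,1],[1,2]]), tf.tensor2d([[6],[0],[0]])).print() should print roughly [[5],[-3]].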

@adamijak

adamijak commented Aug 3, 2021

Any ETA on this issue?

@danbri

danbri commented Oct 7, 2021

Any ETA on this issue?

I came here to ask this too...

@dsmilkov dsmilkov removed their assignment May 19, 2023
@gaikwadrahul8
Contributor

Hi, @DirkToewe

Thank you for opening this issue. Since this issue has been open for a long time, the code/debug information in it may no longer be relevant to the current state of the code base.

The TFJS team is constantly improving the framework by fixing bugs and adding new features. We suggest you try the latest TFJS version with the latest compatible hardware configuration, which could potentially resolve the issue. If you are still facing the issue, please create a new GitHub issue with your latest findings and all the debugging information that could help us investigate.

Please follow the release notes to stay up to date with the latest developments in the TensorFlow.js space.

Thank you for your support and cooperation.

@DirkToewe
Contributor Author

DirkToewe commented Aug 31, 2023

Hi @gaikwadrahul8,

It has indeed been a while since I last checked up on this issue, so thank you for reminding me. Looking at the API, tf.linalg still seems to offer the same three operations as last time:

  • bandPart
  • gramSchmidt
  • qr

The latter two still seem to use the same high-level implementation, with the same significant overhead as before. So this is still very much an open feature request.

TFJS is a neural net library, not a linear algebra library, so there are likely many other issues and features that understandably take precedence over this feature request.

Other than that, I am not sure there is anything new I can report or contribute.

@joezappie

Would love to see this as well. There's really no good alternative to numpy in JS from what I can tell, and tensorflow is by far the closest, despite not even being primarily a math library.
