Floating point overflow #34

Open
milan-hl opened this issue Apr 19, 2017 · 1 comment

@milan-hl (Collaborator)

At the moment, the result of a / mp.norm(a) may unexpectedly be zero or NaN for large MPArrays (or small MPArrays with large tensor entries):

>>> import numpy as np
>>> import mpnum as mp
>>>
>>> a = mp.MPArray.from_kron([np.array([10.0])] * 308)
>>> mp.norm(a / mp.norm(a))
0.0
>>> b = mp.MPArray.from_kron([np.array([10.0])] * 309)
>>> mp.norm(b / mp.norm(b))
nan

The reason is that double-precision floats cannot represent values larger than a certain maximum:

>>> np.finfo(float).max
1.7976931348623157e+308
>>> a.lt[-1], mp.norm(a)
(array([[[  1.00000000e+308]]]), inf)
>>> b.lt[-1], mp.norm(b)
(array([[[ inf]]]), inf)
>>> 

It would be nice to add a method which computes a / mp.norm(a) without running into floating-point overflow.

Underflow can also happen:

>>> for n in 161, 162, 323, 324:
...     a = mp.MPArray.from_kron([np.array([0.1])] * n)
...     print('{}   {!r:25} {!r:25} {!r:20}'.format(
...         n, mp.norm(a), a.lt[-1].flat[0], mp.norm(a / mp.norm(a))))
... 
161   9.9404793228621183e-162   1.0000000000000097e-161   1.0059877069510206  
162   0.0                       1.0000000000000097e-162   inf                 
323   0.0                       9.8813129168249309e-324   inf                 
324   0.0                       0.0                       nan                 
>>> 
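
For product constructions like the ones above, one stop-gap is to normalize every Kronecker factor before building the MPArray and to keep the overall norm in log space. A minimal sketch; normalized_from_kron is a hypothetical helper, not part of mpnum, and it relies on the norm of a Kronecker product being the product of the factor norms, which does not hold for general MPArrays:

import numpy as np
import mpnum as mp

def normalized_from_kron(factors):
    """Return (MPArray with norm ~1, log of the original norm)."""
    log_norm = 0.0
    scaled = []
    for f in factors:
        n = np.linalg.norm(f)
        log_norm += np.log(n)      # accumulate in log space instead of multiplying
        scaled.append(f / n)
    return mp.MPArray.from_kron(scaled), log_norm

a_hat, log_norm = normalized_from_kron([np.array([10.0])] * 309)
print(mp.norm(a_hat))   # ~1.0, no overflow
print(log_norm)         # 309 * log(10) ~ 711.5; exp(log_norm) itself would overflow
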
@dsuess (Owner) commented Jul 7, 2017

Sorry, just noticed this issue. The basic solution to this is simple: in the canonicalization, just normalize the R matrices after each QR decomposition and keep track of the original norms. The question is, how do we incorporate this...
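
A rough sketch of that idea in plain NumPy (not the actual mpnum canonicalization code; local tensors are assumed to have shape (left_bond, phys_dim, right_bond), and the norm that gets divided out is tracked in log space):

import numpy as np

def canonicalize_normalized(ltens):
    """Left-canonicalize local tensors, rescaling R after every QR so no
    intermediate value overflows; returns norm-1 tensors and the log-norm."""
    ltens = [t.copy() for t in ltens]
    log_norm = 0.0
    for i in range(len(ltens) - 1):
        left, phys, right = ltens[i].shape
        q, r = np.linalg.qr(ltens[i].reshape(left * phys, right))
        scale = np.linalg.norm(r)
        log_norm += np.log(scale)
        r /= scale                                    # keep R at norm 1
        ltens[i] = q.reshape(left, phys, q.shape[1])
        ltens[i + 1] = np.tensordot(r, ltens[i + 1], axes=(1, 0))
    scale = np.linalg.norm(ltens[-1])                 # remaining overall norm
    log_norm += np.log(scale)
    ltens[-1] = ltens[-1] / scale
    return ltens, log_norm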

The easiest approach would be to add a switch to MPArray.normalize (something like normalize_norm) which does the R normalization mentioned above. This would require us to drop the "divide by norm" pattern and might be a little slower. The naming would of course be easier if #31 were fixed.
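
Purely hypothetical usage, assuming such a switch were added (neither the keyword name nor the return value is settled):

a = mp.MPArray.from_kron([np.array([10.0])] * 309)
log_norm = a.normalize(normalize_norm=True)   # hypothetical keyword: rescale to norm 1, report the dropped norm
mp.norm(a)                                    # finite now, ~1.0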

That being said, I certainly don't have any use for more than, say, 30 sites anyway...
