Hansen-Law improvements #1

Open

oliverhaas opened this issue Oct 5, 2018 · 1 comment

Labels
question Further information is requested

Comments

oliverhaas (Owner) commented Oct 5, 2018

Mainly to sort my own thoughts, I will give some remarks on the Hansen-Law method here.

There are two things limiting the accuracy of the Hansen-Law method:

  1. Accuracy of the state space model, i.e. the fit of a sum of exponentials to the Abel transform kernel. Hansen and Law fit a sum of 9 exponentials with an accuracy of around 1.e-3 when ignoring the singularity. In the code this fit is represented by hk and lamk (a quick numerical check is sketched right after this list).
model_hansenLawOrg.hk = [1., 0.596903, 1.09956, 2.57611, 5.65487, 12.2522, 26.0752, 61.5752, 151.739]
model_hansenLawOrg.lamk = [0.0, -2.1, -6.2, -22.4, -92.5, -414.5, -1889.4, -8990.9, -47391.1]
  2. Order of the assumed interpolation of the input function. Hansen and Law assume linear interpolation between points. In the code this can be seen, for example, in the lines
sn[1] = (dataInOld - dataIn[ii])/plan.stepSize  # slope of the linear segment on the current grid interval
sn[0] = dataIn[ii] - plan.grid[ii]*sn[1]        # intercept: locally input(r) ≈ sn[0] + sn[1]*r
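
As a quick sanity check of point 1 (my own sketch in Python, not repo code; it just evaluates the exponential sum above against the kernel it is supposed to approximate):

import numpy as np

# hk and lamk as listed above (model_hansenLawOrg)
hk = np.array([1., 0.596903, 1.09956, 2.57611, 5.65487,
               12.2522, 26.0752, 61.5752, 151.739])
lamk = np.array([0., -2.1, -6.2, -22.4, -92.5,
                 -414.5, -1889.4, -8990.9, -47391.1])

x = np.logspace(-3, 1, 2000)                 # stay away from the singularity at x = 0
exact = 1.0/np.sqrt(1.0 - np.exp(-2.0*x))    # kernel the exponentials approximate
approx = np.exp(np.outer(x, lamk)) @ hk      # sum_k hk[k]*exp(lamk[k]*x)

print(np.max(np.abs(approx/exact - 1.0)))    # relative error, roughly of order 1.e-3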

At the moment the method is typically limited more by the accuracy of the fit. As a note: the interpolation order could fairly easily be increased, but every additional order requires evaluating more complicated elementary functions as well as performing the higher-order interpolation itself. Both still leave the overall computational complexity linear, O(N), though.
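
To illustrate the O(N) claim with a deliberately simplified, hypothetical kernel (a plain difference argument instead of the one the actual method uses): a tail sum against an exponential kernel can be accumulated with one multiply-add per grid point, so a sum of K exponential terms costs O(N*K) in total.

import numpy as np

# Schematic only, not the repo's recursion: S_n = sum_{m >= n} exp(lam*(x_m - x_n))*f_m
# is accumulated backwards with one multiply-add per grid point.
def tail_sums(f, dx, lam):
    S = np.empty_like(f, dtype=float)
    S[-1] = f[-1]
    decay = np.exp(lam*dx)
    for n in range(len(f) - 2, -1, -1):
        S[n] = f[n] + decay*S[n + 1]
    return S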

So a better fit would be nice. The problem is that the function to be fitted by exponentials has a singularity at 0 (from here on out Mathematica-like pseudo code):

fun = 1/Sqrt[1 - Exp[-2*x]]
sing = 1/Sqrt[2*x]

So far I didn't manage to get a good fit. A very robust way to obtain one is to use some SVD tricks to get the exponents first and then do a linear least squares fit for the prefactors. This ignores the singularity, but the whole Hansen-Law method does that anyway; the hope is to get at least a slightly better fit. In Mathematica this looks like this:

nn = 1001;
xmax = 10;
fun = 1/Sqrt[1 - Exp[-2*xmax*x]] - 1;
data = Table[(fun /. x -> z + y), {z, 1./(nn - 1), 1. + 1./(nn - 1), 
     1./(nn - 1)}, {y, 1./(nn - 1), 1. + 1./(nn - 1), 1./(nn - 1)}] //
    Quiet;
svd = SingularValueDecomposition[data];
(* Enable to manually find sigcut:
   ListLogPlot[svd[[2]]/Max[svd[[2]]] // Diagonal, PlotRange -> All] *)
NN = (nn - 1)/2;
sigcut = 16;
lams = 2*NN*
    Log[PseudoInverse[svd[[1, 2 ;;, 1 ;; sigcut]]].svd[[1, ;; -2, 
        1 ;; sigcut]] // Eigenvalues] // Re;

data2 = Table[{x, fun}, {x, 1./(nn - 1), 1. + 1./(nn - 1), 
     1./(nn - 1)}] // Quiet;
fit = NonlinearModelFit[data2, 
   Table[a[ii]*Exp[-lams[[ii]]*x], {ii, 1, sigcut}] // Total, 
   Table[a[ii], {ii, 1, sigcut}], x, 
   Weights -> 
    1/Table[fun + 1, {x, 1./(nn - 1), 1. + 1./(nn - 1), 
        1./(nn - 1)}]^2];
lams/xmax
Table[a[ii], {ii, 1, sigcut}] /. fit["BestFitParameters"]

{326.962, 215.002, 150.651, 108.24, 78.732, 57.6702, 42.4463, 31.378, 23.3208, 17.4742, 13.2704, 10.2773, 8.02312, 6.00046, 4., 2.}

{4.68558, 1.70955, 1.93147, 1.17423, 1.21348, 0.863059, 0.831268, 0.635035, 0.578285, 0.453191, 0.386552, 0.304036, 0.284079, 0.312645, 0.375015, 0.5}
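
For reference, roughly the same Hankel-SVD ("matrix pencil") construction in Python/NumPy; this is my own sketch, not repo code, and the names (exp_sum_fit, n_terms, ...) are just chosen here:

import numpy as np

def exp_sum_fit(x, f, n_terms):
    # Fit f(x_j) ~ sum_k a_k*exp(-lam_k*x_j) on a uniform grid: Hankel-SVD step for
    # the exponents, then weighted linear least squares for the prefactors.
    dx = x[1] - x[0]
    m = len(f)//2
    H = np.array([f[i:i + m] for i in range(m)])        # Hankel matrix H[i, j] = f[i + j]
    U = np.linalg.svd(H)[0][:, :n_terms]                # dominant signal subspace
    # Shift invariance of the subspace: eigenvalues are exp(+lam_k*dx)
    z = np.linalg.eigvals(np.linalg.pinv(U[1:, :]) @ U[:-1, :])
    lam = np.real(np.log(z))/dx
    A = np.exp(-np.outer(x, lam))
    w = 1.0/(f + 1.0)                                   # sqrt of the 1/(fun+1)^2 weights used above
    a = np.linalg.lstsq(A*w[:, None], f*w, rcond=None)[0]
    return lam, a

nn, xmax = 1001, 10.0
x = np.arange(1, nn + 1)/(nn - 1.0)                     # grid avoiding x = 0, as above
f = 1.0/np.sqrt(1.0 - np.exp(-2.0*xmax*x)) - 1.0
lam, a = exp_sum_fit(x, f, 16)
print(np.sort(lam/xmax))                                # compare with the exponents listed above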

The problem is that we can't choose the number of points nn large enough to get very close to the singularity, since the SVD gets slow really quickly. As far as I understand, if we're strict one needs the fit to be accurate on the interval [1/N, Infinity], where N is the number of points in the Abel transform... Too bad overall, because this method gives very high quality fits/approximations. In theory one could manually combine this method on smaller and smaller intervals close to the singularity, but I couldn't manage to make that work in a consistent way.

In principle one could use standard fitting routines; here I tried, among other things, using the previous results as starting values:

laminit = lams
ainit = Table[a[ii], {ii, 1, sigcut}] /. fit["BestFitParameters"];
data3 = Table[{Exp[1. s], fun /. x -> Exp[1. s]}, {s, -5, 0, 
    3./(300 - 1)}] // Quiet
fit2 = NonlinearModelFit[data3, 
  Table[a[ii]*Exp[-lam[ii]*x], {ii, 1, sigcut}] // Total, 
  Join[{Table[a[ii], {ii, 1, sigcut}], ainit} // 
    Transpose, {Table[lam[ii], {ii, 1, sigcut}], laminit} // 
    Transpose], x]

This converges very poorly. Tinkering around with the parameters yields somewhat better fits than the original Hansen-Law coefficients, but there is no consistent way of obtaining them, no real improvement, and no easy way to decide whether a small error improvement warrants using twice the number of exponentials. And of course the singularity is still a problem, so accuracy is limited in general.
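
For anyone wanting to reproduce this kind of refinement outside Mathematica, a rough SciPy sketch (again not repo code; it assumes lam and a from the NumPy snippet above as starting values and samples log-spaced points similar to data3):

import numpy as np
from scipy.optimize import least_squares

# Hypothetical refinement: let a general-purpose solver adjust decay rates and
# amplitudes simultaneously, starting from the Hankel-SVD result (lam0, a0).
def refine(x, f, lam0, a0):
    def resid(p):
        lam, a = np.split(p, 2)
        return np.exp(-np.outer(x, lam)) @ a - f
    sol = least_squares(resid, np.concatenate([lam0, a0]))
    return np.split(sol.x, 2)

s = np.linspace(-5.0, 0.0, 300)                    # log-spaced samples near the singularity
x = np.exp(s)
f = 1.0/np.sqrt(1.0 - np.exp(-2.0*10.0*x)) - 1.0   # xmax = 10 as above
# lam2, a2 = refine(x, f, lam, a)                  # lam, a from the snippet further up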

I also tried removing the singularity from the method in several different ways. Overall this doesn't seem to improve the accuracy, or reduce the ill-posedness of the fitting problem, enough to be worthwhile.

As of now I'm convinced that the Hansen-Law method is inherently flawed and can't be significantly improved. I'm open to suggestions if anybody reads this :).

oliverhaas (Owner, Author) commented

I'm thinking about removing Hansen-Law; it's not really useful for anything and is basically just a trap.

oliverhaas added the question label Jun 25, 2020