
Non-negative least squares with very small numbers

Asked on Computational Science, October 23, 2021

(I previously asked this question on Stack Overflow, but it was pointed out to me that CSSE or MSE might be more appropriate.)

I have to solve a constrained optimization problem of the following form, where the only unknown is $x$:

$$ x^{*} = \arg\min_{x} \left\| Ax - b \right\|^{2} \qquad x \in \mathbb{R}_{\geq 0}^{n}, ~ A \in \mathbb{R}_{\geq 0}^{n \times n}, ~ b \in \mathbb{R}_{\geq 0}^{n} $$

in other words, a non-negative least squares (NNLS) problem. Alternatively, I could solve a linear program (I am aware that these are not the same problem, but I would be happy with a solution to either, whichever is more convenient):

$$ \text{minimize} ~ \sum_i \xi_i $$
$$ \text{subject to:} ~~ Ax = b + \xi $$
$$ x \in \mathbb{R}_{\geq 0}^{n}, ~ \xi \in \mathbb{R}_{\geq 0}^{n} $$

Now so far so good. My problem is that the matrix $A$ and the vector $b$ I am using contain extremely small entries (on the order of 1e-60 to 1e-100); note that all entries are roughly this small. This is because they come from evaluations of a high-dimensional pdf. As far as I can tell, not even the most precise solvers can handle such numbers properly. The rest of my algorithm deals with these numbers just fine, since all operations are carried out in log-space, as is commonplace with probabilities.
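To illustrate what I mean by log-space bookkeeping, here is a minimal sketch using scipy.special.logsumexp (the values are made up for the example):

```python
import numpy as np
from scipy.special import logsumexp

# Probabilities stored as logs; the raw values underflow double precision.
log_p = np.array([-800.0, -805.0])    # exp(-800) underflows to 0.0
print(np.exp(log_p))                  # [0. 0.]

# Their sum, computed stably without ever leaving log-space:
print(logsumexp(log_p))               # about -799.9933
```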

Trying to solve the original problem with either method above, e.g. using scipy.optimize.nnls or scipy.optimize.linprog, results in the solver simply returning a vector of zeros.
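A minimal reproduction of this behavior might look like the following (the matrix size and the 1e-100 scale are illustrative):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n = 5
A = rng.random((n, n)) * 1e-100   # all entries astronomically small
b = rng.random(n) * 1e-100

x, rnorm = nnls(A, b)
print(x)   # as described above, the solver comes back with the zero vector:
           # intermediate squares are ~1e-200 and below, and the solver's
           # tolerances treat the problem as already solved at x = 0
```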

One could think about solving the following modified problem (for example):

$$ x^{**} = \arg\min_{x} \left\| \log(A)\, x - \log(b) \right\|^{2} \qquad x \in \mathbb{R}_{\geq 0}^{n}, ~ A \in \mathbb{R}_{\geq 0}^{n \times n}, ~ b \in \mathbb{R}_{\geq 0}^{n} $$

An analogous modification could be made to the LP shown earlier. While this does not run into the same numerical issues, the optimal solution of this modified problem is not the same as the one of the original problem. That is, $x^{*} \neq x^{**}$ and also $x^{*} \neq \exp(x^{**})$. Solving this modified problem and exponentiating its solution does not give completely nonsensical results, but it is not good enough for my purposes. For concreteness, a sketch of what I tried is below.
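```python
import numpy as np
from scipy.optimize import nnls

# Sketch of the log-transformed variant discussed above (made-up data).
rng = np.random.default_rng(0)
n = 5
A = rng.random((n, n)) * 1e-100
b = rng.random(n) * 1e-100

x_mod, _ = nnls(np.log(A), np.log(b))  # entrywise logs are O(-230): well scaled
print(x_mod)                           # typically nonzero, but not a solution
                                       # of the original problem, as noted above
```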

How would I go about solving the original problem despite the numerical issues caused by the small entries of $A$ and $b$?

One Answer

To tackle your issue of very, very small numbers, you need to use an arbitrary-precision arithmetic library, like MPFR: https://www.mpfr.org/

MPFR is awesome: you can keep increasing the working precision until it is sufficient to avoid roundoff error, or until you run out of memory. In my experience, I have never needed more than a 128-bit mantissa (a double has a 53-bit one). Goodbye numerical limitations! Your program will run slower, but it will succeed.
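MPFR itself is a C library; from Python you can get the same kind of arbitrary-precision floats through, e.g., the mpmath package. A quick sketch with illustrative values:

```python
from mpmath import mp, mpf

mp.prec = 128                    # 128-bit mantissa (a double has 53 bits)

a = mpf('1e-100') * mpf('1e-100')
print(a)                         # 1e-200, kept exactly in range
print(a * mpf('1e-300'))         # 1e-500: far below double's ~1e-308 limit,
                                 # but no underflow here
```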

If your preferred solver does not support this data type, you can write your own with a very simple gradient descent implementation and a change of variables.

Let $x_i = y_i^2$. The vector function is now expressed as $F = A y^2 - b$ (with $y$ squared elementwise). The gradient descent iteration goes something like this:

$$ y_{\text{new}} = y_{\text{old}} - \gamma \nabla \|F\|^2 $$

where $\nabla \|F\|^2 = 2 (\nabla F)^T F$, $(\nabla F)_{ij} = 2 A_{ij} y_j$, and $\gamma$ is an arbitrary (usually small) positive step size. Recover $x$ by squaring the entries of $y$, which is non-negative by construction. There are sophisticated ways of varying $\gamma$ at each step to improve convergence. No matter what you do, you will most likely have to try a few (or many) random starting points to find the best solution.
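Here is a minimal sketch of this scheme in Python with mpmath (the step size, iteration count, and test problem are all illustrative; a production version would add a convergence check and multiple random restarts):

```python
from mpmath import mp, mpf, matrix

mp.prec = 128  # 128-bit mantissa, as suggested above

def nnls_gd(A, b, gamma=mpf('0.01'), iters=5000):
    """Gradient descent on ||A y^2 - b||^2 with x = y^2 (elementwise)."""
    n = A.cols
    y = matrix([mpf(1)] * n)                  # one starting point; try several
    for _ in range(iters):
        x = matrix([y[i]**2 for i in range(n)])
        F = A * x - b                         # residual vector
        # grad_j ||F||^2 = 4 y_j sum_i A_ij F_i   (from grad = 2 (grad F)^T F)
        g = matrix([4 * y[j] * sum(A[i, j] * F[i] for i in range(A.rows))
                    for j in range(n)])
        y = y - gamma * g
    return matrix([y[i]**2 for i in range(n)])  # recover x >= 0

# Tiny test problem whose unconstrained solution is already non-negative.
A = matrix([[2, 1], [1, 3]])
b = matrix([1, 2])
print(nnls_gd(A, b))  # approaches [0.2, 0.6]
```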

Answered by Charlie S on October 23, 2021
