# Linear objective function with non-linear constraints

Operations Research Asked by FightMilk on August 19, 2021

I would like to choose a set of $$\beta_j$$s that maximizes a simple linear objective function of the type

$$\underset{\beta_j}{\operatorname{max}}\sum_{j=1}^{J}X_j\beta_j$$

subject to the following constraints
$$\sum_{j=1}^{J}C_j(\beta_j)\beta_j \le M \\ \beta_j \in \Omega$$

Here $$C_j(\beta_j)$$ can be thought of as a marginal cost function that changes with the chosen $$\beta_j$$. Each $$\beta_j$$ can only take values from a pre-selected set of integers $$\Omega$$. $$M$$ is some budget constraint.

I don’t know the functional form of $$C_j(\beta_j)$$ but can simulate $$C_j$$ for each $$j$$ and each possible $$\beta_j$$.

I am having trouble understanding how to optimize this problem efficiently. Can someone give any direction on how this can be solved in R or Python?

Given that my comment to the question has been referred to in the other existing answer, I will add it as an answer on its own. The premise for the answer is that the $$C_j(y)$$ function values can be precomputed for all values of $$y\in\Omega$$. The basic idea is to exploit the fact that $$\Omega$$ contains only a relatively small number of values in order to convert the problem into a binary linear program solvable by many commercial as well as free solvers.

To that end, let $$\omega_i$$, $$i\in I$$, be the different values in $$\Omega$$. Then, for each $$i \in I$$ and $$j\in J$$ compute the values $$C_j(\omega_i):=\gamma_{ij}$$. Next, introduce binary variables $$z_{ij}$$ equaling 1 iff $$\beta_j$$ takes the value $$\omega_i$$. We can then replace the variables $$\beta_j$$ with the sum $$\sum_{i\in I}\omega_i z_{ij}$$. The original problem can then be stated as
\begin{align}
\max\ & \sum_{j\in J}X_j \sum_{i \in I}\omega_i z_{ij}\\
\text{s.t.}:\ &\sum_{i\in I} z_{ij} = 1, && \forall j\in J\\
& \sum_{j\in J} \sum_{i\in I} \gamma_{ij}\omega_i z_{ij}\leq M,\\
& z_{ij}\in \{0,1\}, && \forall i\in I, j \in J
\end{align}
I could imagine, but haven’t tested it, that many solvers can handle this somewhat simple MILP efficiently.
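A minimal sketch of this binary reformulation in Python, using scipy's HiGHS-based MILP interface (`scipy.optimize.milp`, scipy >= 1.9). The instance data here ($$X$$, $$\Omega$$, the budget, and the cost values $$\gamma_{ij}$$) are made up purely for illustration; in practice the $$\gamma_{ij}$$ would come from your precomputed simulations.

```python
# Made-up toy instance: J = 2 items, Omega = {0, 1, 2}, cost C_j(beta) = 1 + beta.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

X = np.array([3.0, 2.0])          # objective coefficients X_j
Omega = np.array([0, 1, 2])       # allowed values omega_i
M = 8.0                           # budget

# Precomputed gamma_ij = C_j(omega_i); replace with your simulated table.
gamma = np.array([[1.0 + w for w in Omega] for _ in X])   # shape (J, I)

J, I = len(X), len(Omega)
n = J * I                         # one binary z_ij per (j, i) pair

# Maximize sum_j X_j sum_i omega_i z_ij  ->  minimize the negation.
c = np.array([-X[j] * Omega[i] for j in range(J) for i in range(I)])

# sum_i z_ij = 1 for every j (pick exactly one value per beta_j).
A_pick = np.zeros((J, n))
for j in range(J):
    A_pick[j, j * I:(j + 1) * I] = 1.0

# Budget: sum_j sum_i gamma_ij * omega_i * z_ij <= M.
A_budget = np.array([gamma[j, i] * Omega[i] for j in range(J) for i in range(I)])

res = milp(
    c,
    constraints=[
        LinearConstraint(A_pick, 1.0, 1.0),
        LinearConstraint(A_budget[None, :], -np.inf, M),
    ],
    integrality=np.ones(n),       # all variables integer
    bounds=Bounds(0, 1),          # together with integrality: binary
)

z = res.x.reshape(J, I)
beta = z @ Omega                  # recover beta_j = sum_i omega_i z_ij
print(beta, -res.fun)             # beta = [2. 1.], objective value 8.0
```

For this toy instance the optimum spends the whole budget: choosing $$\beta = (2, 1)$$ costs $$3\cdot 2 + 2\cdot 1 = 8 = M$$ and yields objective value $$8$$.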

Correct answer by Sune on August 19, 2021

Since you don't know the functional form, you can use Pypopt, the Python wrapper around Ipopt. Ipopt supports callbacks, which means you can provide functions for the solver to evaluate in real-time to get values & derivatives.

Another way would be to use any of the genetic/evolutionary algorithms in Scipy.
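As a sketch of the evolutionary route, here is a penalty-based formulation with scipy's `differential_evolution` (its `integrality` argument needs scipy >= 1.9). The data ($$X$$, the cost function, and $$M$$) are made-up placeholders, and the penalty weight is an assumption you would tune:

```python
# Made-up toy instance; integer beta_j in {0, 1, 2}, budget M = 8.
import numpy as np
from scipy.optimize import differential_evolution

X = np.array([3.0, 2.0])
M = 8.0

def cost(beta):
    # Stand-in for the simulated marginal cost times beta; replace with
    # a lookup into your precomputed C_j table.
    return (1.0 + beta) * beta    # C_j(beta) * beta with C_j(beta) = 1 + beta

def objective(beta):
    # Maximize X . beta  ->  minimize its negation, with a large penalty
    # for violating the budget constraint.
    violation = max(0.0, cost(beta).sum() - M)
    return -float(X @ beta) + 1e3 * violation

res = differential_evolution(
    objective,
    bounds=[(0, 2)] * len(X),     # beta_j ranges over Omega = {0, 1, 2}
    integrality=[True] * len(X),  # force integer beta_j
    seed=1,
)
beta = np.round(res.x)
print(beta, X @ beta)
```

Because the penalty dominates any feasible objective value, the best point found is budget-feasible; the stochastic search carries no optimality guarantee, though, which is one reason the MILP route can be preferable when the table is small.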

If you have values in a tabular format, i.e., you don't have a black-box function that can produce $$C(\beta)$$ for any $$\beta$$, the workaround for non-linear optimisation would be to simply interpolate between the closest values that you do have. Ipopt defaults to finite differences if you don't provide derivatives, so as a first-order approach you would only need to do this for the evaluation of the function (not the derivatives).
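The interpolation workaround can be as simple as `numpy.interp` over the tabulated values, so the solver can evaluate $$C$$ at points between table entries. The table below is made up for illustration:

```python
# Piecewise-linear interpolation of a tabulated C(beta); the table
# values here are placeholders for your simulated data.
import numpy as np

beta_table = np.array([0.0, 1.0, 2.0, 3.0])   # tabulated beta values
C_table = np.array([1.0, 2.0, 3.0, 4.0])      # simulated C(beta) at those points

def C(beta):
    # Linear interpolation between the closest tabulated values.
    return np.interp(beta, beta_table, C_table)

print(C(1.5))   # -> 2.5, halfway between C(1) = 2 and C(2) = 3
```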

It's important to know that it's incorrect to solve this directly as an MILP, as your $$C(\beta)$$ would be fixed, rather than updated dynamically as it's supposed to be.

If you do want to use an MILP formulation to select values from a table you can, but with a few caveats:

• You would lose the derivative-based information
• The number of new binaries won't scale well for dense tables
• The formulation can be quite challenging
• There's a decent chance you will need a commercial linear solver

Thus, the best all-round (and free) option in my opinion would be callbacks through Ipopt.

Answered by Nikos Kazazakis on August 19, 2021