Mathematica: Asked by Alexey Popkov on March 5, 2021
This question is intimately connected with a previous one: "How to create internally optimized expression for computing with high WorkingPrecision?"
Oleksandr R. correctly states in a comment:

"A good answer will hopefully discuss the Experimental`NumericalFunction, which is a structure produced by FindMinimum from the function and its Jacobian that is optimized for fast numerical evaluation. I know almost nothing about these objects or how to create/use them, but I would like to find out."
From other questions on this site it is apparent that other built-in numerical functions also use Experimental`NumericalFunction, which seems to be the standard way to optimize numerical evaluation. It is not documented, but it is placed in the Experimental context, which "contains functions that are being considered for official inclusion in future versions of Mathematica". This undocumented function is of crucial importance because currently it is the only way to optimize arbitrary-precision calculations: for MachinePrecision we have Compile, but for arbitrary precision there is nothing documented. How does one work with Experimental`NumericalFunction?
To create an Experimental`NumericalFunction, one needs to evaluate Experimental`CreateNumericalFunction[vars, expr, dims], where vars is a list of arguments, expr is the expression from which the numerical function will be created, and dims gives the dimensions of the output matrix produced by this expression. If the output is a scalar, dims should be set to {}.
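For example, here is a minimal sketch (the function Sin[x] + y^2 is my own illustration, not from the original post):

(* dims is {} because the output is a scalar *)
nf = Experimental`CreateNumericalFunction[{x, y}, Sin[x] + y^2, {}];
nf[{1., 2.}]
(* 4.84147 *)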
It also accepts an optional fourth argument, a list of input types corresponding to vars, such as {_Real, _Real}, as can be seen in this code that creates a new NIntegrate rule. If the fourth argument is not specified, the resulting function expects a single argument: a list of the vars values. If the fourth argument is specified, the resulting function expects a sequence of arguments instead of a single list.
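A hedged sketch of this behavior, based directly on the description above:

nfSeq = Experimental`CreateNumericalFunction[{x, y}, Sin[x] + y^2, {}, {_Real, _Real}];
nfSeq[1., 2.]  (* a sequence of arguments, not nfSeq[{1., 2.}] *)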
Experimental`CreateNumericalFunction has several options, including WorkingPrecision, EvaluationMonitor, StepMonitor, Compiled, Hessian, Gradient, and Jacobian, which work in the same way as in FindMinimum.
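For instance, the WorkingPrecision option enables arbitrary-precision evaluation (a sketch with an illustrative function of my own choosing):

nf30 = Experimental`CreateNumericalFunction[{x}, Exp[x] - 1, {}, WorkingPrecision -> 30];
nf30[{1`30}]  (* the result carries roughly 30 digits of precision *)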
Hessian, Gradient, and Jacobian can be Automatic (the default), Symbolic, or FiniteDifference. According to the Documentation,

"One of the things that NumericalFunction does is to handle derivatives automatically. By default, symbolic derivatives are used if they can be found, and otherwise finite differences are used."
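So one should be able to force finite differences explicitly (a sketch, inferred from the option values listed above):

nfFD = Experimental`CreateNumericalFunction[{x, y}, {Sin[x + y], x^2 y}, {2},
   Jacobian -> FiniteDifference];
nfFD["Jacobian"[{1., 2.}]]  (* Jacobian computed by finite differences *)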
I tried to use the "Sparse" suboption of Hessian and found that it is ignored. There are also the intriguing options "ErrorReturn", "SampleArgument", and Message, the purpose of which is unclear to me.
The created Experimental`NumericalFunction has several properties:
f = Experimental`CreateNumericalFunction[{x, y}, {Sin[x + y], x^2 y}, {2}];
f["Properties"]
{"ArgumentDimensions", "ArgumentNames", "ArgumentUnits", "CompiledFunction", "FunctionExpression", "InputIndexes", "InputTypes", "Properties", "ResultUnits", "SolutionDataComponents", "WorkingPrecision"}
Most usefully, you can directly compute the Gradient/Jacobian or the Hessian of expr for any numerical values of the parameters:
f = Experimental`CreateNumericalFunction[{x, y}, {Sin[x + y], x^2 y}, {2}];
f["Hessian"[{1, 2}]]
f["Gradient"[{1, 2}]]
f["Jacobian"[{1, 2}]]
(* {{{-0.14112, -0.14112}, {-0.14112, -0.14112}}, {{4., 2.}, {2., 0.}}} *)
(* {{-0.989992, -0.989992}, {4., 1.}} *)
(* {{-0.989992, -0.989992}, {4., 1.}} *)
When called for the first time, a symbolic Hessian or Jacobian is created (if the corresponding option is set to Automatic or Symbolic); subsequent evaluations run MUCH faster. For Experimental`NumericalFunction, the Gradient and Jacobian options seem to do the same thing.
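This is easy to see by timing the first and second calls on a freshly created function (a sketch; absolute timings will of course vary):

fT = Experimental`CreateNumericalFunction[{x, y}, {Sin[x + y], x^2 y}, {2}];
AbsoluteTiming[fT["Jacobian"[{1., 2.}]]]  (* first call builds the symbolic Jacobian *)
AbsoluteTiming[fT["Jacobian"[{1., 2.}]]]  (* subsequent calls are much faster *)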
There is also NDSolve`ValidNumericalFunctionQ, which seemingly tests whether a created Experimental`NumericalFunction is valid.
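Presumably it can serve as a quick sanity check on the f created above (I am inferring the calling convention from the name, so treat this as a sketch):

NDSolve`ValidNumericalFunctionQ[f]
(* True *)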
There are also some useful Messages for Experimental`CreateNumericalFunction defined in the file "Messages.m".
Another notable feature of NumericalFunction is that its first argument need not be a list of Symbols:
f = Experimental`CreateNumericalFunction[{Subscript[x, 1], y}, {Sin[Subscript[x, 1] + y],
Subscript[x, 1]^2 y}, {2}];
f[{2, 3}]
(* {-0.958924, 12.} *)
Answered by Alexey Popkov on March 5, 2021
Here's a little addition related to the performance of NumericalFunction.
Suppose we have a function like
func = Cos[x + y^2] Sin[x^2 - y];
A NumericalFunction may be created as follows. The first variant, nf0[{x0, y0}], returns a scalar. The second variant, nf1[{x0, y0}], returns a 1-vector {z0} for a given input {x0, y0}; the reason for including it will be seen in the timing of the "Hessian" further down.
nf0 = Experimental`CreateNumericalFunction[{x, y}, func, {}];
nf1 = Experimental`CreateNumericalFunction[{x, y}, {func}, {1}];
For a NumericalFunction nf, nf["Jacobian"@{x, y}] and nf["Hessian"@{x, y}] compute the Jacobian and Hessian, respectively. I thought to explore how efficient these were compared to the symbolic D[func, {{x, y}}] and D[func, {{x, y}, 2}].
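As a quick cross-check (my sketch, not part of the original answer), the numerical and symbolic derivatives agree at a sample point, up to the shape of the result:

nf0["Gradient"@{1., 2.}]
D[func, {{x, y}}] /. {x -> 1., y -> 2.}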
The Hessian of func above turns out to be fairly complicated. I thought to use Experimental`OptimizeExpression to improve its speed for the sake of comparison. This was done as follows:
hf1 = Experimental`OptimizeExpression[
Simplify@D[func, {{x, y}, 2}]] /.
Experimental`OptimizedExpression[f_] :>
Function @@ Hold[{x, y}, f];
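The replacement rule converts the optimized expression into a pure function, so it can be evaluated directly at a point, for example:

hf1[1., 2.]  (* the 2 x 2 Hessian of func at (x, y) = (1, 2) *)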
It's possible to optimize func and its Jacobian expression too, which I did mainly because I was making a table. I also include compiled and vectorized versions. Functions were compiled with a simple Compile[{x, y}, expr]. I wasn't trying to find the absolute fastest method of evaluation, just giving some basis for judging the performance of NumericalFunction. Vectorization was done on the y coordinate as follows, using both the plain and optimized expressions for the function, Jacobian, and Hessian:
Do[testf[x, Range[0., 4., dx]], {x, 0., 4., dx}] // RepeatedTiming
The other functions were timed as follows:
Do[testf[x, y], {x, 0., 4., dx}, {y, 0., 4., dx}] // RepeatedTiming
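For reference, here is a hedged sketch of how a compiled variant might be set up (Compile holds its body, so the stored expression has to be injected, for example with With):

cf = With[{body = func}, Compile[{x, y}, body]];
cf[1., 2.]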
Unsurprisingly, the results show that the vectorized optimized form is fast and usually fastest. The NumericalFunction is faster than the symbolic expression when evaluated on single inputs. The most surprising outcome is the difference between the performance of the scalar nf0 and the vector nf1 on the Hessian: the vector form evaluates at least twice as fast as the scalar form, although they seem computationally equivalent to me. If either were more expensive, I would have thought it would be the First applied to the vector form nf1, which in fact is the case when evaluating the function and Jacobian. The same difference in speed is observed if func is changed to the simpler x^2 y^3.
(Timing tables for dx = 1./8 and dx = 1./128 appeared here in the original answer.)
Answered by Michael E2 on March 5, 2021