FixedPointFinder achieves its design goal by implementing its functionality over several packages:

- __Framework__: Accommodates bracketing/open convergence initialization, execution customization/configuration, iterative variate evolution, and search termination detection.

- __Bracket Initialization Techniques__: Implementation of the different techniques for the initial bracket extraction.

- __Open Method Convergence Zone Initialization Techniques__: Implementation of the different techniques for the convergence zone starting variate extraction.

- __Iterative Open Methods__: Implementation of the iterative open methods: the Newton-Raphson and secant methods.

- __Iterative Bracketing Primitive Methods__: Implementation of the iterative bracketing primitives: bisection, false position, quadratic interpolation, inverse quadratic interpolation, and Ridder's method.

- __Iterative Bracketing Compound Methods__: Implementation of the iterative bracketing compound methodologies: Brent's and Zheng's methods.

- __Search Initialization Heuristics__: Implementation of a number of search heuristics that make the search targeted and customized.

- __Samples__: Samples for the various bracketing and open methods, their customization, and configuration.

- __Documentation__: Literature review, framework description, mathematical and formulation details of the different components, root finder synthetic knowledge unit (SKU) composition, and module and API usage guide.

- __Regression Tests__: Statistical regression analysis and dispersion metric evaluation for the initialization and iteration components of the different bracketing and open root finder methodologies.

1. __Hyperspace Search__: A hyperspace search determines whether an entity lies inside a zone or range, e.g., a bracketing search.

2. __Hyperpoint Search__: Hyperpoint searches are hard searches that seek an exact specific point (to within an appropriately established tolerance).

3. __Iterate Nodes__: The set of traveled nodes (variate/Objective Function ordered pairs) that records the trajectory traveled.

4. __Iteration Search Primitives__: The set of variate iteration routines that generate the subsequent iterate nodes.

5. __Compound Iterator Search Scheme__: Search schemes where the primitive iteration routine to be invoked at each iteration is evaluated on the fly.

6. __RunMap__: A map that holds the program state at the end of each iteration; in the generic case, this is composed of the Wengert iterate node list, along with the corresponding root finder state.

7. __Cost of Primitive (COP)__: The cost of a single invocation of a variate iterator primitive.

1. Base Framework

2. Search Initialization

a. Bracketing

b. Objective Function Failure

c. Bracketing Start Initialization

d. Open Search Initialization

e. Search/Bracketing Initializer Customization Heuristics

3. Numerical Challenges in Search

4. Variate Iteration

5. Open Search Methods

a. Newton’s Method

6. Closed Search Methods

a. Secant

b. Bracketing Iterative Search

c. Univariate Iterator Primitive

i. Bisection

ii. False Position

iii. Inverse Quadratic

iv. Ridder’s

d. Univariate Compound Iterator

i. Brent’s Method

ii. Zheng’s Method

7. Polynomial Root Search

8. References

9. Figures

10. Fixed Point Search Software Components

a. Execution Initialization

b. Bracketing

c. Execution Customization

d. Fixed Point Search

e. Variate Iteration

f. Initialization Heuristics

1. Given an objective function and its goal, the root search is achieved by iteratively evolving the variate, and involves the following steps:

- __Search initialization and root reachability determination__: The search is kicked off by spawning a root variate iterator for the search initialization process (described in detail in the next section).

- __Absolute Tolerance Determination__.

- __Root Search Iteration__: The root is searched iteratively according to the following steps:

1. The iterator progressively reduces the bracket width.

2. Successive iteration occurs using either a single primitive (e.g., using the bisection primitive), or using a selector scheme that picks the primitives for each step (e.g., Brent’s method).

3. For open methods, instead of steps 1 and 2, the routine drives towards convergence iteratively.

- __Search Termination Detection__: Search termination typically occurs based on one of the following:
  - Proximity to the Objective Function goal
  - Convergence on the variate
  - Exhaustion of the number of iterations

2. The flow behind these steps is illustrated in Figure 1.
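The iteration loop and the three termination criteria above can be sketched as follows; `root_search`, its parameter names, and the use of bisection as the stand-in iteration primitive are illustrative choices, not the framework's API:

```python
import math

def root_search(f, left, right, goal_tolerance=1e-10,
                variate_tolerance=1e-12, max_iterations=100):
    """Bracket-narrowing root search with the three stock termination
    criteria: goal proximity, variate convergence, iteration exhaustion."""
    fl, fr = f(left), f(right)
    if fl * fr > 0.0:
        raise ValueError("initial interval does not bracket a root")
    mid = 0.5 * (left + right)
    for _ in range(max_iterations):
        mid = 0.5 * (left + right)            # bisection primitive
        fm = f(mid)
        if abs(fm) < goal_tolerance:          # proximity to the OF goal
            break
        if right - left < variate_tolerance:  # convergence on the variate
            break
        if fl * fm <= 0.0:                    # keep the sign-flip half
            right, fr = mid, fm
        else:
            left, fl = mid, fm
    return mid                                # or: iterations exhausted

root = root_search(math.cos, 0.0, 2.0)        # cos has a root at pi/2
```

Swapping the midpoint computation for any other primitive (or a per-iteration selector) yields the compound schemes discussed later.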

1. Broadly speaking, root finding approaches can be divided into a) those that bracket roots before they solve for them, and b) those that do not need to bracket, opting instead to pick a suitable starting point.

2. Depending upon whether the search is a bracketing or an open method, the search initialization does one of the following:

- Determine the root brackets for bracketing methods
- Locate the root convergence zone for open methods

3. Initialization begins with a search for the starting zone. A suitable starting point/zone is one from which, with an appropriate choice of iterator, the fixed-point target is expected to be reached with a sufficient degree of reliability. Fairly general-purpose heuristics often help determine the search start zone.

4. Both bracketing and open initializers are hyperspace searches, since they search for something “IN”, not “AT”.


1. Bracketing is the process of localizing the fixed point to within a target zone with the least required number of Objective Function calculations. Steps are:

- Determine a valid bracketing search start
- Choose a suitable bracket expansion
- Limit attention to where the Objective Function is defined (more on this below).

2. Figure 2 shows the flow for the Bracketing routine.

3. Bracketing methods require that the initial search interval bracket the root (i.e. the function values at interval end points have opposite signs).

4. Bracketing traps the fixed point between two variate values, and uses the intermediate value theorem and the continuity of the Objective Function to guarantee the presence/existence of the fixed point between them.

5. Unless the objective function is discontinuous, bracketing methods guarantee convergence (although possibly not within the specified iteration limit).

6. Typically, they do not require the objective function to be differentiable.

7. Bracketing iteration primitives' convergence is usually linear to super-linear.

8. Bracketing methods preserve bracketing throughout computation and allow the user to specify which side of the convergence interval to select as the root.

9. It is also possible to force a side selection
after a root has been found, for example, in sequential search, to find the
next root.

10. Generic root bracketing methods that treat the objective function as a black box will be slower than targeted ones – so much so that they can constitute the bulk of the time for root search. This is because, to accommodate generic robustness coupled with root-pathology avoidance (oscillating bracket pairs, etc.), these methods have to perform a full variate space sweep without any assumptions regarding the location of the roots (despite this, most bracketing algorithms cannot guarantee isolation of root intervals). For instance, naïve examination of the Objective Function's "sign flips" alone can be misleading, especially if fixed points of even multiplicity lie within the brackets. Thus, some means of analyzing black box functions (or even closed form Objective Functions) are needed to better target/customize the bracketing search (of course, parsimony in the number of objective function calls is the main limitation).

11. The first step is to determine a valid bracketing search start. One advantage of univariate root finding is that objective function range validity may be established using an exhaustive variate scanner search without worrying about combinatorial explosion.
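A minimal sketch of the bracket-expansion idea, assuming a symmetric start and geometric width growth; the parameter names mirror the bracketing control parameters described later, but are illustrative:

```python
def expand_bracket(f, start, initial_width=1.0,
                   expansion_factor=2.0, max_expansions=50):
    """Expand a candidate interval around `start` geometrically until
    the objective function exhibits a sign flip across it."""
    width = initial_width
    for _ in range(max_expansions):
        left, right = start - width, start + width
        try:
            fl, fr = f(left), f(right)
        except (ValueError, OverflowError, ZeroDivisionError):
            width *= expansion_factor          # OF failed: widen and retry
            continue
        if fl == fl and fr == fr and fl * fr <= 0.0:  # NaN-safe sign flip
            return left, right                 # root bracketed
        width *= expansion_factor
    return None                                # no bracket found
```

A real scanner would also handle the one-sided failures and bracket trimming described below; this sketch only shows the expansion loop.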


1. The Objective Function may fail to evaluate at the specified variate for the following reasons:

- The Objective Function is not defined at the specified variate.
- The Objective Function evaluates to a complex number.
- The Objective Function evaluation produces NaN/infinity/underflow/overflow errors.

In such situations, the following steps are used to steer the variate to a valid zone.

2. __Objective Function undefined at the Bracketing Candidate Variate__: If the Objective Function is undefined at the starting variate, the starting variate is expanded using the variate space scanner algorithm described above. If the Objective Function resembles what is seen in Figure 3, a valid starting variate will eventually be encountered.

3. __Objective
Function not defined at any of the Candidate Variates__: The risk is that the
situation in Figure 4 may be encountered, where the variate space scanner
iterator “jumps over” the range over which the objective function is defined.
This could be because the objective function may have become complex. In this
case, remember that an even power of the objective function also has the same
roots as the objective function itself. Thus, solving for an even power of the
objective function (like the square) – or even bracketing for it – may help.


1. Figure 5 shows the flow behind a general-purpose bracket start locator.

2. Once the starting variate search is successful, and the objective function validity is range-bound, then use an algorithm like bisection to bracket the root (as shown in Figure 6 below).

3. However, if the objective function runs out of its validity range under the variate scanner scheme, the following steps need to be undertaken:

- If the left bracketing candidate fails, bracketing is
done towards the right using the last known working left-most bracketing
candidate as the “left edge”.
- Likewise, if the right bracketing candidate fails,
bracketing is done towards the left using the last known working
right-most bracketing candidate as the “right edge”.

4. The final step is to trim the variate zone. Using the variate space scanner algorithm, and the mapped variate/Objective Function evaluations, the tightest bracketing zones are extracted (Figure 7).


1. Non-bracketing methods use a suitable starting point to kick off the root search. As is obvious, the chosen starting point can be critical in determining the fate of the search. In particular, it should be within the zone of convergence of the fixed-point root to guarantee convergence. This means that specialized methods are necessary to determine the zone of convergence.

2. When the objective function is differentiable, the non-bracketing root finder may often make use of that to achieve quadratic or higher speed of convergence. If the non-bracketing root finder cannot or does not use the objective function's differentiability, convergence ends up being linear to super-linear.

3. The typical steps for determining the open method starting variate are:

- Find a variate that is proximal to the fixed point
- Verify that it satisfies the convergence heuristic

4. Bracketing followed by a choice of an appropriate primitive variate (such as bisection/secant) satisfies both, and thus could be a good starting point for open method searches like Newton's method.

5. Depending upon the structure of the Objective Function, in certain cases the chain rule can be invoked to ease the construction of the derivative, especially in situations where the sensitivities to the inputs are too high/low.
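The convergence-heuristic verification in the steps above can be sketched as follows. The test |f(x) f''(x)| / f'(x)² < 1 is one classical sufficient indicator of local Newton convergence; it is shown here as an illustrative choice (with finite-difference derivatives), not necessarily the heuristic this framework uses:

```python
def in_newton_convergence_zone(f, x, h=1e-5):
    """Heuristic convergence-zone test for Newton's method:
    |f(x) * f''(x)| / f'(x)^2 < 1 suggests Newton iterates started
    at x contract.  Derivatives use central differences of step h."""
    f0 = f(x)
    d1 = (f(x + h) - f(x - h)) / (2.0 * h)            # ~ f'(x)
    d2 = (f(x + h) - 2.0 * f0 + f(x - h)) / (h * h)   # ~ f''(x)
    if d1 == 0.0:
        return False                                   # flat: no Newton step
    return abs(f0 * d2) / (d1 * d1) < 1.0
```

For f(x) = x² - 2, the test passes near the root at √2 but fails close to the flat region around x = 0, where Newton steps overshoot wildly.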


- __Wengert Variate Analysis__: Collection of the Wengert variates helps build and consolidate the Objective Function behavior from the variate iterate Wengert nodes, building a behavioral picture of the Objective Function.

- __Objective Function Neighborhood Behavior__: With every Wengert variate, calculation of the set of forward sensitivities and reverse Jacobians builds a local picture of the Objective Function without having to evaluate it.

**Open Search Method: Newton's Method**

16. The algorithm estimates *m* after carrying out one or two iterations, and then uses that value to increase the rate of convergence. Alternatively, the modified Newton's method may also be used:

*x_{n+1} = g(x_n)*

*g(x) = x - c f(x)*

where

(a) *α* is a fixed point of *g* [*g(α) = α*],

(b) *c* is a positive constant,

(c) *f(α) = 0*, and

(d) *|g'(x)| < 1* for all *x* in a neighborhood of *α*.

19. One sufficient condition for *x_0* to initialize a convergent sequence *{x_n}*, which converges to the root *α* of *f(x) = 0*, is that *x_0* lie in an interval *I* containing *α*, and that *g* be chosen so that *|g'(x)| < 1* for all *x* in *I*.
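A sketch of the Newton step with an explicit multiplicity factor *m*, the form commonly used once *m* has been estimated as described above; the function and parameter names are illustrative, not the library's implementation:

```python
def modified_newton(f, df, x0, multiplicity=1,
                    tolerance=1e-12, max_iterations=50):
    """Newton iteration with an explicit multiplicity factor m:
    x_{n+1} = x_n - m * f(x_n) / f'(x_n).  m = 1 is plain Newton;
    m > 1 restores quadratic convergence at a repeated root."""
    x = x0
    for _ in range(max_iterations):
        fx = f(x)
        if abs(fx) < tolerance:            # goal proximity reached
            return x
        dfx = df(x)
        if dfx == 0.0:                     # derivative vanished
            raise ZeroDivisionError("f'(x) = 0 at x = %r" % x)
        x -= multiplicity * fx / dfx
    return x
```

With `multiplicity=2`, the double root of *(x - 1)²* is recovered in essentially a single step, whereas plain Newton would only converge linearly there.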


1. Bracketing iterative root searches attempt to progressively narrow the brackets and to discover the root within.

2. The first set discusses the goal search univariate iterator primitives that are commonly used to iterate through the variate.

3. These goal search iterator primitives continue generating a new pair of iteration nodes (just like their bracketing search initialization counterparts).

4. Certain iterator primitives carry a bigger "local" cost, i.e., cost inside a single iteration, but may reduce the global cost, e.g., by reducing the number of iterations due to faster convergence.

5. Further, certain primitives tend to be inherently more robust, i.e., given enough iterations, they will find the root within, although they may not be fast.

6. Finally, the case of compound iterator search schemes, i.e., search schemes where the primitive iteration routine to be invoked at each iteration is evaluated on the fly, is discussed.

7. Iterative searches that maintain extended state across searches pay a price in terms of scalability, the price depending on the nature and the amount of state held (e.g., Brent's method carries iteration selection state, whereas Zheng's does not).

2. Convergence is faster than the secant method, but poor when the iterates are not close to the root; e.g., when two of the function values *f_n*, *f_{n-1}*, *f_{n-2}* coincide, the interpolation breaks down.

a) Given a specific numerical tolerance *δ*: if the previous step used the bisection method and *|b_k - b_{k-1}| < |δ|*, the bisection method is performed and its result used for the next iteration. If the previous step used interpolation, the check becomes *|b_{k-1} - b_{k-2}| < |δ|*.

b) If the previous step used bisection and *|s - b_k| < |b_k - b_{k-1}| / 2*, the interpolated step is used; otherwise bisection is used for the next iteration. If the previous step performed interpolation, *|s - b_k| < |b_{k-1} - b_{k-2}| / 2* is checked instead.

7. Finally, since Brent's method uses inverse quadratic interpolation, *s* has to lie between *(3a_k + b_k) / 4* and *b_k*.

8. Brent’s algorithm uses three points for the next inverse quadratic interpolation, or secant rule, based upon the criterion specified above.

9. One simplification to Brent's method adds one more evaluation of the function at the midpoint before the interpolation.

10. This simplification reduces the time spent on the conditional evaluations and shrinks the interval of convergence.

11. Convergence is better than Brent's method, and it is as fast and simple as Ridder's method.
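A minimal compound-iterator sketch in the spirit of the Brent/Zheng selection logic, assuming a valid initial bracket with a < b: each iteration evaluates the midpoint, attempts an inverse quadratic step through the three current points, and falls back to pure midpoint narrowing whenever the interpolant leaves the bracket. This is an illustrative simplification, not the exact acceptance criteria listed above:

```python
import math

def compound_bracketing_search(f, a, b, tolerance=1e-12, max_iterations=100):
    """Zheng-style compound iterator: midpoint evaluation each step,
    inverse quadratic interpolation when usable, bisection fall-back."""
    fa, fb = f(a), f(b)
    if fa * fb > 0.0:
        raise ValueError("initial interval does not bracket a root")
    for _ in range(max_iterations):
        m = 0.5 * (a + b)                     # extra midpoint evaluation
        fm = f(m)
        if abs(fm) < tolerance:
            return m
        # inverse quadratic interpolation through (a, m, b)
        if fa != fb and fa != fm and fb != fm:
            s = (a * fm * fb / ((fa - fm) * (fa - fb))
                 + m * fa * fb / ((fm - fa) * (fm - fb))
                 + b * fa * fm / ((fb - fa) * (fb - fm)))
        else:
            s = m                             # degenerate: fall back
        # narrow to the half-bracket containing the sign change
        if fa * fm <= 0.0:
            b, fb = m, fm
        else:
            a, fa = m, fm
        # refine with the interpolant only if it stays inside the bracket
        if a < s < b:
            fs = f(s)
            if abs(fs) < tolerance:
                return s
            if fa * fs <= 0.0:
                b, fb = s, fs
            else:
                a, fa = s, fs
        if b - a < tolerance:
            return 0.5 * (a + b)
    return 0.5 * (a + b)

r = compound_bracketing_search(math.cos, 0.0, 2.0)
```

Because the bracket at least halves every iteration, the sketch inherits bisection's robustness while the interpolated step supplies the speed-up.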

1. This section carries out a brief treatment of computing roots for polynomials.

2. While closed form solutions are available for polynomials up to degree 4, they may not be stable numerically.

3. Popular techniques such as Sturm’s theorem and Descartes’ rule of signs are used for locating and separating real roots.

4. Modern methods such as VCA (Vincent-Collins-Akritas) and the more powerful VAS (Vincent-Akritas-Strzebonski) combine these with bisection/Newton methods; such methods are used in Maple/Mathematica.

5. Since the eigenvalues of the companion matrix to a polynomial correspond to the polynomial’s roots, common fast/robust methods used to find them may also be used.

6. A number of caveats apply specifically to polynomial root searches, e.g., Wilkinson’s polynomial shows why high precision is needed when computing the roots – proximal/other ill-conditioned behavior may occur.

7. Finally, special ways exist to identify/extract multiplicity in polynomial roots; they use the fact that *f(x)* and *f'(x)* share the root, and work by computing their GCD.
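The GCD-based multiplicity detection can be illustrated with a small coefficient-list sketch (a floating-point Euclidean polynomial GCD with a crude zero threshold; illustrative, not production-grade):

```python
def poly_deriv(c):
    """Derivative of a polynomial given as [c_n, ..., c_1, c_0]."""
    n = len(c) - 1
    return [coef * (n - i) for i, coef in enumerate(c[:-1])]

def poly_divmod(num, den):
    """Polynomial long division; returns (quotient, remainder)."""
    num = num[:]
    quotient = []
    while len(num) >= len(den):
        factor = num[0] / den[0]
        quotient.append(factor)
        for i in range(len(den)):
            num[i] -= factor * den[i]
        num.pop(0)                      # leading coefficient is now zero
    while num and abs(num[0]) < 1e-9:   # strip numerical noise
        num.pop(0)
    return quotient, num

def poly_gcd(a, b):
    """Euclidean GCD of two polynomials, returned monic."""
    while b:
        _, r = poly_divmod(a, b)
        a, b = b, r
    return [coef / a[0] for coef in a]

# x^3 - 3x + 2 = (x - 1)^2 (x + 2): gcd(f, f') should be x - 1,
# exposing the double root at x = 1
c = [1.0, 0.0, -3.0, 2.0]
g = poly_gcd(c, poly_deriv(c))
```

A nontrivial GCD degree signals repeated roots; dividing *f* by the GCD then yields a square-free polynomial whose roots are simple.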

1. **ExecutionInitializer** implements the initialization execution and customization functionality. It performs two types of variate initialization:

- __Bracketing initialization__: This brackets the fixed point using the bracketing algorithm described above. If successful, a pair of variate/Objective Function coordinate nodes that bracket the root is generated. These brackets are eventually used by routines that iteratively determine the root. Bracketing initialization is controlled by the parameters in **BracketingControlParams**.

- __Convergence Zone initialization__: This generates a variate that lies within the convergence zone for the iterative determination of the fixed point using Newton's method. Convergence zone determination is controlled by the parameters in **ConvergenceControlParams**.

2. **ExecutionInitializationOutput** holds the output of the root initializer calculation. It contains the following fields:

- Whether the initialization completed successfully
- The number of iterations, the number of objective function calculations, and the time taken for the initialization
- The starting variate from the initialization

1. **BracketingControlParams** implements the control parameters for bracketing solutions. It provides the following parameters:

- The starting variate from which the search for bracketing begins
- The initial width for the brackets
- The factor by which the width expands with each iterative search
- The number of such iterations

1. **ConvergenceControlParams** holds the fields needed for controlling the execution of Newton's method. It does that using the following parameters:

- The determinant limit below which the convergence zone is deemed to have been reached
- The starting variate from where the convergence search is kicked off
- The factor by which the variate expands across each iterative search
- The number of search iterations

2. **ConvergenceOutput** extends **ExecutionInitializationOutput**.

- **ConvergenceOutput** does not add any new fields to **ExecutionInitializationOutput**.

1. **ExecutionControl** implements the core root search execution control and customization functionality.

a. It is used for a) calculating the absolute tolerance, and b) determining whether the **ObjectiveFunction** has reached the goal.

b. **ExecutionControl** determines the execution termination using its **ExecutionControlParams** instance.

2. **ExecutionControlParams** holds the parameters needed for controlling the execution of the root finder. Its fields control the root search in one of the following ways:

a. The number of iterations after which the search is deemed to have failed

b. The relative **ObjectiveFunction** tolerance factor which, when reached by the objective function, indicates that the fixed point has been reached

c. The absolute tolerance fall-back, which is used to determine that the fixed point has been reached when the relative tolerance factor becomes zero


1. **FixedPointFinder** is the base abstract class that is implemented by customized invocations, e.g., Newton's method, or any of the bracketing methodologies.

- It invokes the core routine for determining the fixed point from the goal.
- *ExecutionControl* determines the execution termination.

2. The **FixedPointFinder** main flow comprises the following steps:

- Initialize the root search zone by determining either a) the brackets, or b) the starting variate.
- Compute the absolute *ObjectiveFunction* tolerance that establishes the attainment of the fixed point.
- Launch the variate iterator that iterates the variate.
- Iterate until the desired tolerance has been attained.
- Return the root finder output.
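The main flow above can be mirrored by a short hypothetical skeleton; the class names, method names, and the bisection subclass are illustrative stand-ins, not the library's actual API:

```python
class FixedPointFinderSketch:
    """Hypothetical skeleton mirroring the described flow; names are
    illustrative, not the actual implementation."""

    def __init__(self, f, goal, max_iterations=100):
        self.f = f
        self.goal = goal
        self.max_iterations = max_iterations

    def initialize(self):
        """Determine the brackets or the starting variate."""
        raise NotImplementedError

    def iterate_variate(self, variate):
        """One primitive (or compound-selected) iteration step."""
        raise NotImplementedError

    def find_root(self, relative_tolerance=1e-10, absolute_tolerance=1e-14):
        variate = self.initialize()
        # absolute tolerance establishing goal attainment, with a
        # fall-back for the case where the goal itself is near zero
        tolerance = max(abs(self.goal) * relative_tolerance,
                        absolute_tolerance)
        for _ in range(self.max_iterations):
            variate = self.iterate_variate(variate)
            if abs(self.f(variate) - self.goal) < tolerance:
                return variate
        return None                        # iteration budget exhausted


class BisectionFinder(FixedPointFinderSketch):
    """Concrete sketch using bisection as the variate iterator."""

    def __init__(self, f, goal, left, right, **kwargs):
        super().__init__(f, goal, **kwargs)
        self.left, self.right = left, right

    def initialize(self):
        return 0.5 * (self.left + self.right)

    def iterate_variate(self, variate):
        # keep the half on which f - goal changes sign
        if (self.f(self.left) - self.goal) * (self.f(variate) - self.goal) <= 0.0:
            self.right = variate
        else:
            self.left = variate
        return 0.5 * (self.left + self.right)
```

Derived finders supply `initialize` and `iterate_variate`, exactly the two extension points the following paragraph describes.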

3. Root finders that derive from this provide implementations for the following:

- __Variate initialization__: They may choose either the bracketing initializer or the convergence initializer; functionality is provided for both in this module.
- __Variate Iteration__: Variates are iterated using a) any of the standard primitive built-in variate iterators (or custom ones), or b) a variate selector scheme for each iteration.

4. **FixedPointFinderOutput** holds the result of the root search. It contains the following fields:

- Whether the search completed successfully
- The number of iterations, the number of objective function base/derivative calculations, and the time taken for the search
- The output from initialization

5. **FixedPointFinderNewton** customizes **FixedPointFinder** by applying the following customization:

- It initializes the fixed point finder by computing a starting variate in the convergence zone.
- It iterates the next search variate using Newton's method.

6. **FixedPointFinderBracketing** customizes **FixedPointFinder** by applying the following customization:

- It initializes the root finder by computing the starting brackets.
- It iterates the next search variate using one of the specified variate iterator primitives.
- **FixedPointFinderBracketing** does not do compound iterations of the variate using any schemes; that is done by classes that extend it.

7. **FixedPointFinderBrent** customizes **FixedPointFinderBracketing** by applying Brent's scheme of compound variate selection.

- Brent's scheme, as implemented here, is described above.
- This implementation retains the absolute shifts that have happened to the variate over the past two iterations as the discriminant that determines the next variate to be generated.
- **FixedPointFinderBrent** uses the following parameters specified in **VariateIteratorSelectorParameters**:
  - The variate primitive that is regarded as the "fast" method
  - The variate primitive that is regarded as the "robust" method
  - The relative variate shift that determines when the "robust" method is to be invoked over the "fast" one
  - The lower bound on the variate shift between iterations that serves as the fall-back to the "robust" method

8. **FixedPointFinderZheng** implements the root locator using Zheng's improvement to Brent's method.

- It overrides the *iterateCompoundVariate* method in **FixedPointFinderBrent** to achieve the desired simplification in the iterative variate selection.

1. **IteratedBracket** holds the left/right bracket variates and the corresponding Objective Function values.

2. **IteratedVariate** holds the variate and the corresponding Objective Function value.

3. **VariateIteratorPrimitive** implements the following variate iterator primitives:

- Bisection
- False Position
- Quadratic
- Inverse Quadratic
- Ridder

It may be readily enhanced to accommodate additional primitives.

4. **VariateIteratorSelectorParameters** implements the control parameters for the compound variate selector scheme used in Brent's method. Brent's method uses the following fields in *VariateIteratorSelectorParameters* to generate the next variate:

- The variate primitive that is regarded as the "fast" method
- The variate primitive that is regarded as the "robust" method
- The relative variate shift that determines when the "robust" method is to be invoked over the "fast" one
- The lower bound on the variate shift between iterations that serves as the fallback to the "robust" method


**InitializationHeuristics** implements several heuristics used to kick off the fixed point bracketing/search process. The following custom heuristics are implemented as part of the heuristics-based kick-off:

- __Custom Bracketing Control Parameters__: Any of the standard bracketing control parameters can be customized to kick off the bracketing search.
- __Soft Left/Right Bracketing Hints__: The left/right starting bracket edges are used as soft bracketing initialization hints.
- __Soft Mid Bracketing Hint__: A mid bracketing level is specified to indicate the soft bracketing kick-off.
- __Hard Bracketing Floor/Ceiling__: A pair of hard floor and ceiling limits are specified as a constraint to the bracketing.
- __Hard Search Boundaries__: A pair of hard left and right boundaries are specified to kick off the final fixed point search.

These heuristics are further interpreted and developed inside the *ExecutionInitializer*.