v0.25, 24 Mar 2013
1. Hyperspace Search: A hyperspace search determines whether the entity lies inside a zone or range, e.g., a bracketing search.
2. Hyperpoint Search: Hyperpoint searches are hard searches that search for an exact specific point (to within an appropriately established tolerance).
3. Iterate Nodes: The set of traversed nodes (variate/Objective Function ordered pairs) that constitute the traveled trajectory.
4. Iteration Search Primitives: The set of variate iteration routines that generate the subsequent iterate nodes.
5. Compound Iterator Search Scheme: Search schemes where the primitive iteration routine to be invoked at each iteration is determined on the fly.
6. RunMap: Map that holds the program state at the end of each iteration. In the generic case, this comprises the Wengert iterate node list, along with the corresponding root finder state.
7. Cost of Primitive (cop): This is the cost of invocation of a single variate iterator primitive.
1. Basic Search/Optimization SKU Framework
2. Unconstrained 1-D/multi-dimensional search
3. Constrained multi-dimensional Search
4. Pattern/String Search
5. Image Search
1. The root search, given an objective function and its goal, is achieved by iteratively evolving the variate, and involves the following steps:
· Search initialization and root reachability determination: Search is kicked off by spawning a root variate iterator for the search initialization process (described in detail in the next section).
· Absolute Tolerance Determination.
· Root Search Iteration: The root is searched iteratively according to the following steps:
1. The iterator progressively reduces the bracket width.
2. Successive iteration occurs using either a single primitive (e.g., using the bisection primitive), or using a selector scheme that picks the primitives for each step (e.g., Brent’s method).
3. For open methods, instead of steps 1 and 2, the routine drives towards convergence iteratively.
· Search Termination Detection: The search termination occurs typically based on the following:
· Proximity to the Objective Function Goal
· Convergence on the variate
· Exhaustion of the number of iterations
2. The flow behind these steps is illustrated in Figure 1.
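The flow above can be sketched minimally in Python. This is a hypothetical illustration using the bisection primitive; the function and parameter names are assumptions for exposition, not part of the library:

```python
# Illustrative sketch of the root-search flow: bracketing initialization,
# iterative bracket reduction, and the three termination criteria listed above.

def root_search(f, a, b, goal=0.0, abs_tol=1e-10, var_tol=1e-12, max_iter=100):
    """Locate x with f(x) ~ goal inside the bracket [a, b] via bisection."""
    fa, fb = f(a) - goal, f(b) - goal
    if fa * fb > 0:
        raise ValueError("initial interval does not bracket the root")
    for _ in range(max_iter):                 # termination: iteration exhaustion
        m = 0.5 * (a + b)                     # the bisection primitive
        fm = f(m) - goal
        if abs(fm) < abs_tol:                 # termination: proximity to goal
            return m
        if fa * fm <= 0:
            b, fb = m, fm                     # root lies in the left half
        else:
            a, fa = m, fm                     # root lies in the right half
        if abs(b - a) < var_tol:              # termination: variate convergence
            return 0.5 * (a + b)
    return 0.5 * (a + b)
```

For example, `root_search(lambda x: x * x - 2.0, 0.0, 2.0)` recovers the positive root of 2.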
1. Broadly speaking, root finding approaches can be divided into a) those that bracket roots before they solve for them, and b) those that don’t need to bracket, opting instead to pick a suitable starting point.
2. Depending upon whether the search is a bracketing or an open method, the search initialization does one of the following:
o Determine the root brackets for bracketing methods
o Locate root convergence zone for open methods
3. Initialization begins with a search for the starting zone. A suitable starting point/zone is determined where, by an appropriate choice of iterator, you are expected to reach the fixed-point target with a sufficient degree of reliability. Very general-purpose heuristics often help determine the search start zone.
4. Both bracketing and open initializers are hyperspace searches, since they search for something “IN”, not “AT”.
1. Bracketing is the process of localizing the fixed point to within a target zone using the least required number of Objective Function calculations.
2. Figure 2 shows the flow for the Bracketing routine.
3. Bracketing methods require that the initial search interval bracket the root (i.e. the function values at interval end points have opposite signs).
4. Bracketing traps the fixed point between two variate values, and uses the intermediate value theorem and the continuity of the Objective Function to guarantee the presence/existence of the fixed point between them.
5. Unless the objective function is discontinuous, bracketing methods guarantee convergence (although possibly not within the specified iteration limit).
6. Typically, they do not require the objective function to be differentiable.
7. Bracketing iteration primitives’ convergence is usually linear to super-linear.
8. Bracketing methods preserve bracketing throughout computation and allow user to specify which side of the convergence interval to select as the root.
9. It is also possible to force a side selection after a root has been found, for example, in sequential search, to find the next root.
10. Generic root bracketing methods that treat the objective function as a black box will be slower than targeted ones – so much so that they can constitute the bulk of the time for root search. This is because, to accommodate generic robustness coupled with root-pathology avoidance (oscillating bracket pairs, etc.), these methods have to perform a full variate space sweep without any assumptions regarding the location of the roots (despite this, most bracketing algorithms cannot guarantee isolation of root intervals). For instance, naïve examination of the Objective Function’s “sign-flips” alone can be misleading, especially if the brackets trap fixed points of even multiplicity. Thus, some ways of analyzing the Black Box functions (or even the closed form Objective Functions) are needed to better target/customize the bracketing search (of course, parsimony in the number of objective function invocations is the main limitation).
11. The first step is to determine a valid bracketing search start. One advantage with univariate root finding is that objective function range validity may be established using an exhaustive variate scanner search without worrying about combinatorial explosion.
1. The Objective Function may fail evaluation at the specified variate for the following reasons:
o Objective Function is not defined at the specified variate.
o Objective Function evaluates to a complex number.
o Objective Function evaluation produces NaN/Infinity/Under-flow/Over-flow errors.
In such situations, the following steps are used to steer the variate to a valid zone.
2. Objective Function undefined at the Bracketing Candidate Variate: If the Objective Function is undefined at the starting variate, the starting variate is expanded using the variate space scanner algorithm described above. If the Objective Function resembles the one shown in Figure 3, a valid starting variate will eventually be encountered.
3. Objective Function not defined at any of the Candidate Variates: The risk is that the situation in Figure 4 may be encountered, where the variate space scanner iterator “jumps over” the range over which the objective function is defined. This could be because the objective function may have become complex. In this case, remember that an even power of the objective function also has the same roots as the objective function itself. Thus, solving for an even power of the objective function (like the square) – or even bracketing for it – may help.
1. Figure 5 shows the flow behind a general-purpose bracket start locator.
2. Once the starting variate search is successful, and the objective function validity is range-bound, then use an algorithm like bisection to bracket the root (as shown in Figure 6 below).
3. However, if the objective function runs out of its validity range under the variate scanner scheme, the following steps need to be undertaken:
4. The final step is to trim the variate zone. Using the variate space scanner algorithm, and the mapped variate/Objective Function evaluations, the tightest bracketing zones are extracted (Figure 7).
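The scanner-based locator above can be sketched in Python. This hedged illustration sweeps the variate space, skips variates where the Objective Function fails to evaluate, and trims to the tightest sign-flip bracket from the mapped variate/Objective Function nodes; all names are illustrative assumptions:

```python
# Illustrative sketch: sweep the variate space, skip evaluation failures, and
# extract the tightest bracketing zone from the recorded nodes.

import math

def scan_for_bracket(f, lo, hi, samples=100):
    nodes = []                               # mapped variate/OF evaluations
    step = (hi - lo) / samples
    for i in range(samples + 1):
        x = lo + i * step
        try:
            y = f(x)
        except (ValueError, ZeroDivisionError):
            continue                         # OF undefined at this variate
        if math.isnan(y) or math.isinf(y):
            continue                         # NaN/Infinity evaluation failure
        nodes.append((x, y))
    for (x0, y0), (x1, y1) in zip(nodes, nodes[1:]):
        if y0 == 0.0:
            return (x0, x0)                  # landed exactly on a root
        if y0 * y1 < 0:
            return (x0, x1)                  # tightest sign-flip bracket
    return None                              # no valid bracket found
```

For instance, scanning `math.log(x) - 1.0` over [-1, 5] skips the undefined region x <= 0 and returns a tight bracket around x = e.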
1. Non-bracketing methods use a suitable starting point to kick off the root search. As is obvious, the chosen starting point can be critical in determining the fate of the search. In particular, it should be within the zone of convergence of the fixed-point root to guarantee convergence. This means that specialized methods are necessary to determine zone of convergence.
2. When the objective function is differentiable, the non-bracketing root finder often may make use of that to achieve quadratic or higher speed of convergence. If the non-bracketing root finder cannot/does not use the objective function’s differentiability, convergence ends up being linear to super-linear.
3. The typical steps for determining the open method starting variate are:
4. Bracketing followed by a choice of an appropriate primitive variate iterator (such as bisection/secant) satisfies both, and thus could be a good starting point for open method searches like Newton’s method.
5. Depending upon the structure of the Objective Function, in certain cases the chain rule can be invoked to ease the construction of the derivative – esp. in situations where the sensitivity to inputs is too high/low.
o Wengert Variate Analysis => Collection of the Wengert variates helps build and consolidate the Objective Function behavior from the variate iterate Wengert nodes – to build a behavioral picture of the Objective Function.
o Objective Function Neighborhood Behavior => With every Wengert variable, calculation of the set of forward sensitivities and the reverse Jacobians builds a local picture of the Objective Function without having to evaluate it.
16. The algorithm estimates m after carrying out one or two iterations, and then uses that value to increase the rate of convergence. Alternatively, the modified Newton’s method may also be used, i.e., the iteration x_(n+1) = g(x_n) with g(x) = x - m * f(x), where:
(a) p is a fixed point of g,
(b) m is a positive constant,
(c) f(p) = 0, and
(e) |g'(x)| = |1 - m * f'(x)| < 1 for all x in the search interval.
19. One sufficient condition for x_0 to initialize a convergent sequence {x_n}, which converges to the root p of f(x) = 0, is that x_0 lie close enough to p, and that m be chosen so that |1 - m * f'(x)| < 1 for all x in the interval of interest.
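One well-known modified variant scales the Newton step by the root multiplicity m: x_(n+1) = x_n - m * f(x_n) / f'(x_n). The sketch below (illustrative code, not the library’s API) shows its effect on a double root, where plain Newton converges only linearly while the m = 2 variant lands on the root in one step:

```python
# Illustrative comparison of plain vs. multiplicity-modified Newton iteration.

def modified_newton(f, df, x, m=1, iterations=8):
    for _ in range(iterations):
        fx, dfx = f(x), df(x)
        if dfx == 0.0:
            break                          # stationary point / exact root reached
        x = x - m * fx / dfx               # modified Newton step
    return x

f  = lambda x: (x - 1.0) ** 2              # double root at x = 1
df = lambda x: 2.0 * (x - 1.0)

plain    = modified_newton(f, df, 2.0, m=1)   # error only halves per step
modified = modified_newton(f, df, 2.0, m=2)   # exact root after one step
```

After 8 plain iterations the error is still 2^-8, while the m = 2 variant is exact.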
1. Bracketing iterative root searches attempt to progressively narrow the brackets and to discover the root within.
2. The first set discusses the goal search univariate iterator primitives that are commonly used to iterate through the variate.
3. These goal search iterator primitives continue generating a new pair of iteration nodes (just like their bracketing search initialization counter-parts).
4. Certain iterator primitives carry a bigger “local” cost, i.e., cost inside a single iteration, but may reduce the global cost, e.g., by reducing the number of iterations due to faster convergence.
5. Further, certain primitives tend to be inherently more robust, i.e., given enough iteration, they will find the root within – although they may not be fast.
6. Finally, compound iterator search schemes – search schemes where the primitive iteration routine to be invoked at each iteration is evaluated on-the-fly – are discussed.
7. Iterative searches that maintain extended state across searches pay a price in terms of scalability – the price depending on the nature and the amount of state held (e.g., Brent’s method carries iteration selection state, whereas Zheng’s does not).
2. Convergence is faster than the secant method, but poor when the iterates are not close to the root; e.g., if two of the function values f_(n-2), f_(n-1), and f_n coincide, the algorithm fails.
a) Given a specific numerical tolerance δ, if the previous step used the bisection method and |b_k - b_(k-1)| < δ, the bisection method is performed and its result used for the next iteration. If the previous step used interpolation, then the check becomes |b_(k-1) - b_(k-2)| < δ.
b) If the previous step used bisection and |s - b_k| < (1/2) |b_k - b_(k-1)|, the interpolated value s is used; otherwise bisection is used for the next iteration. If the previous step performed interpolation, |s - b_k| < (1/2) |b_(k-1) - b_(k-2)| is checked instead.
7. Finally, since Brent's method uses inverse quadratic interpolation, s has to lie between (3a_k + b_k)/4 and b_k.
8. Brent’s algorithm uses three points for the next inverse quadratic interpolation, or secant rule, based upon the criterion specified above.
9. One simplification to the Brent’s method adds one more evaluation for the function at the middle point before the interpolation.
10. This simplification reduces the time spent in conditional evaluation and reduces the interval of convergence.
11. Convergence is better than Brent’s, and as fast and simple as Ridder’s.
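As a concrete, much simpler example of a compound scheme that adjusts the primitive based on retained iteration state, the sketch below implements the Illinois variant of false position (plainly not Brent’s or Zheng’s full bookkeeping; all names are illustrative): when the same bracket end point is retained twice in a row, its Objective Function value is halved to avoid stagnation.

```python
# Illustrative Illinois scheme: false-position primitive plus a stateful
# adjustment driven by which end point the previous iteration retained.

import math

def illinois(f, a, b, f_tol=1e-12, max_iter=100):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "interval must bracket the root"
    side = 0                                  # retained-end-point state
    c = a
    for _ in range(max_iter):
        c = (fb * a - fa * b) / (fb - fa)     # false-position primitive
        fc = f(c)
        if abs(fc) < f_tol:
            return c                          # proximity-to-goal termination
        if fc * fb > 0:                       # c replaces the b end point
            b, fb = c, fc
            if side == -1:
                fa *= 0.5                     # Illinois trick: relax stuck end
            side = -1
        else:                                 # c replaces the a end point
            a, fa = c, fc
            if side == 1:
                fb *= 0.5
            side = 1
    return c
```

For example, `illinois(lambda x: math.cos(x) - x, 0.0, 1.0)` converges rapidly to the Dottie number ≈ 0.739.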
1. This section carries out a brief treatment of computing roots for polynomials.
2. While closed form solutions are available for polynomials up to degree 4, they may not be stable numerically.
3. Popular techniques such as Sturm’s theorem and Descartes’ rule of signs are used for locating and separating real roots.
4. Modern methods such as VCA and the more powerful VAS use these with Bisection/Newton methods – these methods are used in Maple/Mathematica.
5. Since the eigenvalues of the companion matrix to a polynomial correspond to the polynomial’s roots, common fast/robust methods used to find them may also be used.
6. A number of caveats apply specifically to polynomial root searches, e.g., Wilkinson’s polynomial shows why high precision is needed when computing the roots – proximal/other ill-conditioned behavior may occur.
7. Finally, special ways exist to identify/extract multiplicity in polynomial roots – they use the fact that f(x) and f’(x) share the multiple root, which can be isolated by computing their GCD.
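The GCD-based multiplicity extraction can be sketched over exact rational coefficients; this is an illustrative stdlib-only Python sketch, not the library’s API. Polynomials are coefficient lists, highest degree first:

```python
# Illustrative sketch: a multiple root of f is also a root of f', so
# gcd(f, f') isolates the repeated factors exactly over the rationals.

from fractions import Fraction

def poly_deriv(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def poly_mod(p, q):
    """Remainder of p divided by q (Euclidean division)."""
    r = [Fraction(c) for c in p]
    while len(r) >= len(q) and any(r):
        factor = r[0] / q[0]
        for i, qc in enumerate(q):
            r[i] -= factor * qc              # eliminate the leading term
        del r[0]                             # leading coefficient is now zero
    while r and r[0] == 0:
        del r[0]                             # strip residual leading zeros
    return r

def poly_gcd(p, q):
    p = [Fraction(c) for c in p]
    q = [Fraction(c) for c in q]
    while q:
        p, q = q, poly_mod(p, q)             # Euclid's algorithm
    return [c / p[0] for c in p]             # normalize to a monic polynomial
```

For f = (x - 1)^2 (x - 2) = x^3 - 4x^2 + 5x - 2, the GCD of f and f' comes out as x - 1, exposing the double root at x = 1.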
1. ExecutionInitializer implements the initialization execution and customization functionality. It performs two types of variate initialization:
o Bracketing initialization: This brackets the fixed point using the bracketing algorithm described above. If successful, a pair of variate/Objective Function coordinate nodes that bracket the root is generated. These brackets are eventually used by routines that iteratively determine the root. Bracketing initialization is controlled by the parameters in BracketingControlParams.
o Convergence Zone initialization: This generates a variate that lies within the convergence zone for the iterative determination of the fixed point using the Newton's method. Convergence Zone Determination is controlled by the parameters in ConvergenceControlParams.
2. ExecutionInitializationOutput holds the output of the root initializer calculation. It contains the following fields:
· Whether the initialization completed successfully, the number of iterations, the number of objective function calculations, and the time taken for the initialization
· The starting variate from the initialization
1. BracketingControlParams implements the control parameters for bracketing solutions. It provides the following parameters:
· The starting variate from which the search for bracketing begins
· The initial width for the brackets
· The factor by which the width expands with each iterative search
· The number of such iterations.
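A hedged sketch of how these four parameters might drive the bracket search; the names and the symmetric-expansion policy are assumptions for illustration, not the library’s implementation:

```python
# Illustrative bracket expansion: start from the initial variate, widen the
# interval by the growth factor each iteration until the end points straddle
# a root (sign flip), up to the iteration limit.

def expand_bracket(f, start, width=0.5, factor=2.0, max_iter=50):
    lo, hi = start - width, start + width
    for _ in range(max_iter):
        if f(lo) * f(hi) < 0:
            return lo, hi                 # end points now bracket a root
        width *= factor                   # expand the width and retry
        lo, hi = start - width, start + width
    return None                           # iteration budget exhausted
```

For example, expanding from 0 for `f(x) = x - 10` widens until the interval covers x = 10.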
1. ConvergenceControlParams holds the fields needed for controlling the execution of Newton's method. It does that using the following parameters:
· The determinant limit below which the convergence zone is deemed to have been reached.
· Starting variate from where the convergence search is kicked off.
· The factor by which the variate expands across each iterative search.
· The number of search iterations.
2. ConvergenceOutput extends the ExecutionInitializationOutput by retaining the starting variate that results from the convergence zone search.
o ConvergenceOutput does not add any new field to ExecutionInitializationOutput.
1. ObjectiveFunction provides the evaluation of the objective function and its derivatives for a specified variate.
o Default implementation of the derivatives is meant for non-analytical black box objective functions.
2. DerivativeControl provides bumps needed for numerically approximating derivatives.
3. Differential holds the incremental differentials for the variate and the objective function.
1. ExecutionControl implements the core root search execution control and customization functionality.
a. It is used for a) calculating the absolute tolerance, and b) determining whether the ObjectiveFunction has reached the goal.
b. ExecutionControl determines the execution termination using its ExecutionControlParams instance.
2. ExecutionControlParams holds the parameters needed for controlling the execution of the root finder.
a. Number of iterations after which the search is deemed to have failed
b. Relative ObjectiveFunction Tolerance Factor which, when reached by the objective function, will indicate that the fixed point has been reached
c. Absolute Tolerance fall-back, which is used to determine that the fixed point has been reached when the relative tolerance factor becomes zero
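A minimal sketch of such a termination test, assuming the relative factor is applied to the Objective Function’s starting value with the absolute tolerance as the degenerate-case fall-back; the exact semantics in ExecutionControlParams may differ:

```python
# Illustrative goal-reached test: relative tolerance scaled off the starting
# Objective Function value, with an absolute fall-back when that scale is zero.

def has_converged(of_value, of_start, rel_tol=1e-8, abs_tol=1e-12):
    tolerance = rel_tol * abs(of_start)
    if tolerance == 0.0:
        tolerance = abs_tol               # absolute tolerance fall-back
    return abs(of_value) <= tolerance
```

A start value of zero would otherwise make the relative test vacuous, which is exactly the case the absolute fall-back covers.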
1. FixedPointFinder is the base abstract class that is implemented by customized invocations, e.g., Newton's method, or any of the bracketing methodologies.
2. FixedPointFinder main flow comprises the following steps:
3. Root finders that derive from this provide implementations for the following:
4. FixedPointFinderOutput holds the result of the root search.
5. FixedPointFinderNewton customizes the FixedPointFinder for Open (Newton's) root finder functionality.
· FixedPointFinderNewton applies the following customization:
· Initializes the fixed point finder by computing a starting variate in the convergence zone
· Iterates the next search variate using Newton's method.
6. FixedPointFinderBracketing customizes the FixedPointFinder for bracketing based root finder functionality.
· FixedPointFinderBracketing applies the following customization:
o Initializes the root finder by computing the starting brackets
o Iterates the next search variate using one of the specified variate iterator primitives.
· FixedPointFinderBracketing does not do compound iterations of the variate using any schemes - that is done by classes that extend it.
7. FixedPointFinderBrent customizes FixedPointFinderBracketing by applying the Brent's scheme of compound variate selector.
· Brent's scheme, as implemented here, is described above.
· This implementation retains absolute shifts that have happened to the variate for the past 2 iterations as the discriminant that determines the next variate to be generated.
· FixedPointFinderBrent uses the following parameters specified in VariateIterationSelectorParams:
· The Variate Primitive that is regarded as the "fast" method
· The Variate Primitive that is regarded as the "robust" method
· The relative variate shift that determines when the "robust" method is to be invoked over the "fast"
· The lower bound on the variate shift between iterations that serves as the fall-back to the "robust"
8. FixedPointFinderZheng implements the root locator using Zheng's improvement to Brent's method.
· It overrides the iterateCompoundVariate method in FixedPointFinderBrent to achieve the desired simplification in the iterative variate selection.
1. IteratedBracket holds the left/right bracket variates and the corresponding values for the ObjectiveFunction during each iteration.
2. IteratedVariate holds the variate and the corresponding value for the ObjectiveFunction during each iteration.
3. VariateIteratorPrimitive implements the various variate iterator primitives. It implements the following primitives:
· False Position
· Inverse Quadratic
It may be readily enhanced to accommodate additional primitives.
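The two listed primitives can be expressed as single-step functions; these are illustrative sketches, and the library’s actual signatures may differ:

```python
# Illustrative single-step variate iterator primitives: each consumes the
# current iterate nodes and proposes the next variate.

def false_position(a, fa, b, fb):
    """Secant through the bracket end points (regula falsi)."""
    return (a * fb - b * fa) / (fb - fa)

def inverse_quadratic(a, fa, b, fb, c, fc):
    """Lagrange interpolation of x as a function of y, evaluated at y = 0."""
    return (a * fb * fc / ((fa - fb) * (fa - fc))
          + b * fa * fc / ((fb - fa) * (fb - fc))
          + c * fa * fb / ((fc - fa) * (fc - fb)))
```

For instance, `false_position(1.0, -1.0, 3.0, 1.0)` proposes the midpoint 2.0, and inverse quadratic interpolation through three nodes of f(x) = x^2 - 4 that include the root returns the root itself.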
4. VariateIteratorSelectorParameters implements the control parameters for the compound variate selector scheme used in Brent's method.
InitializationHeuristics implements several heuristics used to kick off the fixed point bracketing/search process. The following custom heuristics are implemented as part of the heuristics-based kick-off:
These heuristics are further interpreted and developed inside the ExecutionInitializer.