
 

 

Stochastic Analyzer in DRIP

 

Lakshmi Krishnamurthy

v0.41, 20 January 2014


Introduction

 

 

Glossary

 

1.      Wengert List: List of all the non over-writable program variables (Wengert (1964)) – can also be seen as a linearization of the computational graph. By construction, each of its entries is an intermediate variable.

2.      Intermediate Wengert Canonical Variable: These are intermediate financial variables that are fixed from the point-of-view of the output Jacobians and the input parameters, and that serve as parsimonious optimizers of the computation graph (Figures 1 and 2).

3.      Wengert fan-in and fan-out: Reduction of a set of initial/intermediate Wengert variates onto the subsequent set is called fan-in; the opposite is fan-out.

4.      Wengert funneling: Same as Wengert fan-in.

5.      Micro-Jacobian: Change in the calibrated instrument measure coefficients per unit change in the quoted instrument measures.

6.      Self-Jacobian: Self-Jacobian refers to the Jacobian of the Objective Function at any point in the variate to the Objective Function at the segment nodes, i.e., ∂f(x)/∂f(x_k). The Self-Jacobian is a type of micro-Jacobian.

7.      Derivative Entity: The entity whose dynamics are determined by the evolution of a stochastic variate, and whose specific facets/measures are observable.

8.      Path-wise Derivative Estimator: ∂V/∂X(0), where V is the value of the derivative, and X(0) is the starting value for a specific stochastic variate.

9.      Non-Parsimonized Parameters: Parameters that map one-to-one with the input instrument set, e.g., typical curve bootstrapping.

10.  Parsimonization: Reduction of the parameter space from the input measure space.

 

 

Overview

 

1.      AD History: Iri (1991)

2.      Mathematical Foundations: Griewank (2000)

3.      Survey: Berz (1996)

4.      Implementation Tools, Methodologies, Processes, and Techniques (Bischof, Hovland, and Norris (2005))

5.      AD Resource: http://www.autodiff.org/

 

 

Algorithmic Differentiation in Finance

 

1.      Focus has been primarily on Monte-Carlo methodologies.

2.      Although path-wise optimized sensitivity generation had been employed earlier (Glasserman (2004)), Giles and Glasserman (2006) first discussed adjoint methods in path-wise sensitivity generation.

3.      Full extension to LMM based stochastic variate evolution and a corresponding exotic (in this case Bermudan) swap option evaluation (Leclerc, Liang, and Schneider (2009)), as well as to correlated defaults and their sensitivities (Capriotti and Giles (2011)).

4.      Capriotti (2011) covers automated Greek generation, but with a focus on automatic differentiation, and in the context of Monte-Carlo methods.

5.      Finally, algorithmic differentiation has also been applied to addressing the issue of calibration along with sensitivity generation (Schlenkirch (2011)).

 

 

Document Layout

 

1.      Introduction

a.       Glossary

b.      Overview

c.       Algorithmic Differentiation in Finance

d.      Document Layout

2.      Algorithmic Differentiation - Basics

a.       Motivation and Advantages

b.      Program Sequence Construction Modes

c.       Canonicalization – Program Sequence Simplification by Decomposition

d.      Challenges of Automating the Differentiation

e.       Wengert Representation and Optimal Program Structure Synthesis

f.        Optimization using Pre-accumulation and Check-Pointing

g.       Financial Application Space Customization

3.      Sensitivity Generation During Curve Construction

a.       Introduction

b.      Curve Jacobian

4.      Stochastic Entity Evolution

a.       Sensitivity Formulation

b.      Sensitivities to Stochastic Latent State Variables and Dynamical Parameters

c.       Stochastic Variate Evolution Constrained by Splines

d.      Formulation of the Evolution of the Stochastic Variate Self-Jacobian

e.       Correlated Stochastic Variables Evolution

f.        LMM Forward Rate Evolution

5.      Formulation of Sensitivities for Pay-off Functions

a.       Formulation of Pay-off Function Stochastic Evolution

b.      Path Greek

c.       Payoff Sensitivity to the Correlation Matrix

d.      Algorithmic Differentiation in the Payoff Sensitivities Calculation

6.      Bermudan Swap Option Sensitivities

a.       Base Formulation

b.      Greek Estimation

c.       LSM Formulation

7.      NTD Basket Sensitivities

a.       NTD Product Formulation

8.      Basket Options

9.      References

10.  Figures

11.  Stochastic Analyzer Software Components

a.       Univariate Function Package

b.      Univariate Calculus Package


Algorithmic Differentiation - Basics

 

 

Motivation and Advantages

 

1.      Definition: Automatic differentiation is a set of techniques for transforming a program that calculates the numerical values of a function into a program that calculates numerical values for derivatives of that function with about the same accuracy and efficiency as the function values themselves (Bartholomew-Biggs, Brown, Christianson, and Dixon (2000)).

2.      Symbolic Derivatives: Calculate the local symbolic derivatives rather than a) divided differences, or b) numerical differentials (Automatic Differentiation - Wikipedia Entry).

3.      Calculation Speed: Same number of Objective Function calculations as the original; however, there are potential “chain rule” multiplication factor effects.

4.      Accuracy vs. Performance: Due to the usage of symbolics, the accuracy of Automatic Differentiation is always better than that of numerical differentials; however, due to the chain-rule issue, it may not always be faster.

5.      Scalability at infinitesimal variates: Since Automatic Differentiation is always symbolic and therefore infinitesimal, it will automatically scale to arbitrarily small variate infinitesimals – reduced errors due to bit cancellation, etc.

6.      Higher-order derivatives: Automatic Differentiation does not need additional objective function evaluations for higher order derivative calculations (beyond the chain-rule issues); therefore, those are infinitesimally correct too.

 

 

Program Sequence Construction Modes

 

1.      Forward Automatic Differentiation: Express the final and the intermediate variables as a consequence of a computed forward graph, and derive the symbolic forward derivative graph.

·        Effectively computes the gradient of the intermediate variables with respect to the variates or the “independent variables”, and transmits them up the graph (a sketch of both modes follows at the end of this section).

2.      Reverse Automatic Differentiation: Express the intermediate variables and the input variates as nodes in the computed reverse graph, and derive the symbolic reverse derivative graph.

·        The reverse mode often still needs a forward pass to store the calculated intermediates needed on the way back.

·        Effectively computes the gradient of the “dependent variables” with respect to the intermediate variables (the adjoints), and transmits them down the graph.

3.      Speed:

·        Forward Mode => Speed proportional to n, the number of “independent” variables

·        Reverse Mode => Speed proportional to m, the number of “dependent” variables

4.      Memory Usage (Ghaffari, Li, Li, and Nie (2007)):

·        Forward Mode => a) Each Wengert variable, b) Forward Jacobian for each Wengert, c) Forward Dependency Graph

·        Reverse Mode => a) Each Wengert Adjoint, b) Reverse Jacobian for each Wengert, c) Forward/Backward Dependency graph

5.      When the difference is minimal: When the dependence of the final Jacobian sensitivity step is the dominating factor, and the adjointing step is not the rate-determining part, then the performance will always be O(n), where n is the number of sensitivities – for example, if the final sensitivity step is trivial to calculate, the performance will still be O(n).

·        For instance, given a univariate objective function (as in constrained/unconstrained optimization (e.g., maximization/minimization) problems), either forward or reverse Automatic Differentiation is an equally good choice for sensitivity generation, since their performance is comparable.
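
As a concrete illustration of the two recursion directions above, the following sketch differentiates f(x1, x2) = x1 * x2 + sin(x1) once in forward mode (tangents carried alongside the values) and once in reverse mode (adjoints swept back from the output). It is a minimal Java sketch; the class and method names are illustrative and not part of DRIP.

    public class ForwardReverseModeDemo {
      // f (x1, x2) = x1 * x2 + sin (x1), decomposed into Wengert steps:
      //   w3 = x1 * x2, w4 = sin (x1), w5 = w3 + w4

      // Forward mode: carry (value, tangent) pairs; one sweep per independent direction.
      static double[] forward (double x1, double x2, double dx1, double dx2) {
        double w3 = x1 * x2, dw3 = dx1 * x2 + x1 * dx2;
        double w4 = Math.sin (x1), dw4 = Math.cos (x1) * dx1;
        return new double[] {w3 + w4, dw3 + dw4};   // {f, directional derivative}
      }

      // Reverse mode: one forward value sweep, then adjoints swept back from the output.
      static double[] reverse (double x1, double x2) {
        double w3 = x1 * x2, w4 = Math.sin (x1), w5 = w3 + w4;
        double w5Bar = 1.;                           // seed: df/df
        double w3Bar = w5Bar, w4Bar = w5Bar;
        double x1Bar = w3Bar * x2 + w4Bar * Math.cos (x1);
        double x2Bar = w3Bar * x1;
        return new double[] {w5, x1Bar, x2Bar};      // {f, df/dx1, df/dx2}
      }

      public static void main (String[] args) {
        System.out.println (java.util.Arrays.toString (forward (1.5, 2.5, 1., 0.))); // df/dx1 only
        System.out.println (java.util.Arrays.toString (reverse (1.5, 2.5)));         // full gradient
      }
    }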

 

 

Canonicalization - Program Statements Simplification by Decomposition

 

1.      Program Line-level decomposition: Canonicalization decomposes the program/statement units into elementary, individually analyzable operations.

2.      Canonicalization Implementation: In general, canonicalization and other related Automatic Differentiation source code generation/transformation techniques should go hand in hand with optimizing compiled code emission techniques and program active-variable activity analysis.

3.      Equivalence with Wengert Structuring: Given that canonicalization consists of hoisting all the l-value updates separately without side effects, it is effectively the same as Wengert un-rolling and DAG linearization.

4.      Limitations with the implementation: For many of the reasons above, automated implementations of canonicalization (like other automated code generation/re-structuring) might result in “invisible inefficiencies”, and hand-drafted techniques that are based upon essentially the same principles may be more optimal.

5.      Post-canonicalized Enhancement Cost: Given that the worst-case operation is division, differentiating it takes the cost from 1 function unit execution to 4 automatic differentiation execution units. Typically, due to “weird” functions, the worst-case addition to a single post-canonicalized statement is a factor of 5, not 4.

6.      Divided Differences based Differentiation Fall-back: f'(x) ≈ [f(x + h) - f(x)] / h.
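
As a minimal sketch of what canonicalization produces, the compound statement y = (a + b) * exp(c) / d below is hoisted into single-assignment Wengert steps, each carrying an elementary local derivative. The decomposition and variable names are illustrative, not DRIP's.

    public class CanonicalizationDemo {
      public static void main (String[] args) {
        double a = 1., b = 2., c = 0.5, d = 4.;

        // Original statement: y = (a + b) * Math.exp (c) / d;
        // Canonicalized into non over-writable Wengert variables, one operation each:
        double w1 = a + b;           // dw1/da = 1, dw1/db = 1
        double w2 = Math.exp (c);    // dw2/dc = w2
        double w3 = w1 * w2;         // dw3/dw1 = w2, dw3/dw2 = w1
        double y  = w3 / d;          // dy/dw3 = 1 / d, dy/dd = -w3 / (d * d)

        // Chain the elementary partials to recover, e.g., dy/dc:
        double dydc = (1. / d) * w1 * w2;
        System.out.println ("y = " + y + ", dy/dc = " + dydc);
      }
    }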

 

 

Challenges of Automating the Differentiation

 

1.      Deep-dig perspective: Re-purposed Automatic Differentiation perspective forces the visualization of the computation at the granularity of the symbolic functional forms of the objective function.

a.       Objective Function evaluator over-loading => This requires propagation of the innermost symbolic graph nodes through the graph chain => causes additional cognitive burden.

b.      Objective Function Neighborhood Behavior => With every Wengert variable, calculation of the set of forward sensitivities and the reverse Jacobians builds a local picture of the Objective Function without having to evaluate it.

2.      Block-level View Fixation: Source code transformation techniques are very invasive, require a highly locally frozen view fixation, and are therefore less cognitively natural. Operator overloading techniques enable retention of the domain focus, and are therefore more cognitively natural.

a.       Naïve operator overloading would simply generate a block-level (or function call level) adjoint. This can explode the required storage, in addition to generating sub-optimal reverse-mode code. Needless to say, source code transformation techniques can be built to overcome this – in practice, however, many may not quite do it.

3.      Compiled language Automatic Differentiation implementation: Without the usage of obfuscating “versatile” templates, auto-generation of very generic forward/reverse accumulation code is impossible. Therefore, source level function overloading and automated program instrumentation techniques are very hard.

a.       Further, compiled language source code transformation appears to be a vestige of the “smart compiler” efforts of the ‘90s – a classic instance where a simple idea is “intellectually transmitted” rather than “built out-of-the-box”.

4.      Symbolic Differentiation Challenges with certain Unit Functional Forms: When you consider certain unit functional forms and seek their derivatives symbolically, the higher-order symbolic differentiations become much more challenging, and handling these series symbolically gets out of control fast.

 

 

Wengert Representation and Optimal Program Structure Synthesis

 

1.      Combination of Forward/Reverse Modes: Forward (n inputs) and reverse (m outputs) mode represent just two possible (extreme) ways of recursing through the chain rule. For n > 1 and m > 1 there is a golden mean, but finding the optimal way is probably an NP-hard problem (Berland (2006)) – optimal Jacobian accumulation is NP-complete (Naumann (2008)).

2.      Wengert Intermediate Fan-in and possibly fan-out: See Figures 1 to 3 for an illustration of this.

·        Wengert Intermediate Performance Enhancement => If there exists an intermediate quantity that is fixed from the point-of-view of the output Jacobians and the input parameters, the performance may be improved (see Figure 1).

·        Reusable Intermediate Performance Improvement => If the input/output computation leads to sufficient commonality among the Wengert intermediate calculation, that may also reduce computation by promoting reuse, thereby improving efficiency.

·        Wengert Funneling Criterion => For non-optimizing, non-parsimonized Wengert funnels, the size of each downstream set must not exceed that of its upstream set for the Wengert fan to be a funneling fan-in – otherwise rippling out causes huge non-diagonal Markov matrices. This is true for a) I -> P, b) P -> W, and c) W -> O.

3.      Standardized Computational Finance Structures: In computational finance (esp. computational fixed income finance), the payout/product/pricer object serves the function of the intermediate Wengert variate indicated above. From below this variate you have the inputs/parameters rippling up, and from above you have the Jacobians/output measure adjoints feeding down (Figure 2).
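
The sketch below illustrates the funneling idea of Figures 1 and 2: several input parameters feed a single intermediate Wengert variate (standing in for the payout/product/pricer object), and the output Jacobians are assembled by chaining through that intermediate rather than by differentiating each output against each input directly. The functions and numbers are illustrative assumptions.

    public class WengertFunnelDemo {
      public static void main (String[] args) {
        double[] p = {0.01, 0.012, 0.015};                 // n = 3 input parameters
        double[] dWdP = new double[p.length];

        // Intermediate Wengert variate: W = sum of p[i] weighted by year fractions.
        double[] yearFraction = {0.5, 1.0, 1.5};
        double w = 0.;
        for (int i = 0; i < p.length; ++i) {
          w += yearFraction[i] * p[i];
          dWdP[i] = yearFraction[i];                       // dW/dp_i, computed once
        }

        // m = 2 outputs, each a function of W alone: O1 = exp (-W), O2 = W * W.
        double dO1dW = -Math.exp (-w), dO2dW = 2. * w;

        // The output Jacobian is m x n, but only m + n local derivatives were evaluated.
        for (int i = 0; i < p.length; ++i)
          System.out.println ("dO1/dp" + i + " = " + dO1dW * dWdP[i] + ", dO2/dp" + i + " = " + dO2dW * dWdP[i]);
      }
    }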

 

 

Optimization using Pre-accumulation and Check Pointing

 

1.      Pre-accumulation: Aggregation (and possibly caching) of the sensitivity Jacobian over all the intermediate Wengerts inside a routine/block/module – thereby exposing only the aggregated Jacobian for the group unit (not each Wengert inside).

a.       Pre-accumulation also provides a suitable boundary for parallelization.

b.      It may also be looked at as the appropriate edge at which the source code transformation technique and operator overloading technique may “merge”.

2.      Cross-country Accumulation: Similar to pre-accumulation, but whereas pre-accumulation occurs in a specified direction (forward/reverse), cross-country accumulation need not – in fact, it may be guided by program analysis using optimal Wengert intermediate composition techniques.

a.       This is also referred to as check pointing.

b.      This typically also requires snapshotting the program global and other execution context parameters at the checkpoint boundaries.

c.       Works best when the program state is easily and minimally savable, and quickly recoverable.

d.      Will also work well in conjunction with traditional kernel-level check-pointing schemes for fail-over, etc.
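
A minimal sketch of pre-accumulation, assuming a block whose internal Wengert steps are of no interest downstream: the block exposes only its value and its pre-accumulated boundary Jacobian, which is what a check-pointed or parallelized scheme would cache. The names and functional forms are illustrative.

    public class PreAccumulationDemo {
      // A block with several internal Wengert steps; only its boundary Jacobian is exposed.
      static double[] blockValueAndJacobian (double in) {
        double w1 = Math.log (1. + in);      // dw1/din = 1 / (1 + in)
        double w2 = w1 * w1;                 // dw2/dw1 = 2 * w1
        double out = Math.exp (w2);          // dout/dw2 = out
        double dOutdIn = out * 2. * w1 / (1. + in);   // pre-accumulated at the block boundary
        return new double[] {out, dOutdIn};
      }

      public static void main (String[] args) {
        double[] block = blockValueAndJacobian (0.25);
        // Downstream code never sees w1/w2 - it chains only through the cached boundary Jacobian.
        double finalValue = 3. * block[0];
        double dFinaldIn = 3. * block[1];
        System.out.println (finalValue + " " + dFinaldIn);
      }
    }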

 

 

Algorithmic Differentiation Financial Application Space Customization

 

1.      Math Modules:

·        Forward differentials and auto-adjointing of math modules => May be needed for most of them.

·        For every block, compute the base “value”, the forward differential, and the reverse adjoint.

·        In fact, for every active double-precision variable v, source code transformation automatic differentiation techniques recursively and automatically generate the doublet (v, dv). Further, this calculation may also be parallelized (a sketch of doublet propagation follows at the end of this section).

o       This particular calculation may also be propagated at the function call level, so that the assignment outputs are automatically generated for the doublet/multiple.

o       Computational structures => Design restrictions may also be imposed by the computability of the AD of a math module, i.e., would the financial MarketParamsContainer be broken down into further parameter units?

2.      Stochastic Variate Automatic Differentiation: Evolution of stochastic variates and their derivative entities may be further optimized by exploiting the sparse-ness of the multi-factor co-variance matrix, thereby evolving the sparse variate/derivative matrix optimally (as opposed to the blind delta bumps that may happen when computing differentials).

3.      Quasi-analytic Computation Models: No Monte-Carlo evolution needed at all, but still Wengert intermediate level reformulation necessary to enhance the quasi-analytics analysis (e.g., Copula methods).

4.      Latent State Calibration from Observed Manifest Measures:

·        Formulation of the de-convolution of the latent state from the observed manifest measure is necessary for the extraction of the latent state parameter set (this is accomplished by the calibration process).

·        Of course, latent state calibration occurs among the elastic and the inelastic dimensions, and the inelastics are the parameter set.

·        Latent state calibration/parameterization, etc., inherently involve parsimonization – this is where the models come in.
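
The doublet propagation mentioned in item 1 can be sketched as follows. Since Java has no operator overloading, methods stand in for the overloaded operators; the Doublet class and the functional forms used are illustrative assumptions, not DRIP classes.

    public class Doublet {
      final double value;      // v
      final double delta;      // dv with respect to the chosen active input

      Doublet (double value, double delta) { this.value = value; this.delta = delta; }

      Doublet add (Doublet o)      { return new Doublet (value + o.value, delta + o.delta); }
      Doublet multiply (Doublet o) { return new Doublet (value * o.value, delta * o.value + value * o.delta); }
      Doublet exp ()               { return new Doublet (Math.exp (value), Math.exp (value) * delta); }

      public static void main (String[] args) {
        Doublet x = new Doublet (0.3, 1.);                 // active variable: dx/dx = 1
        Doublet c = new Doublet (2.0, 0.);                 // passive constant
        Doublet y = x.multiply (c).exp ().add (x);         // y = exp (2 x) + x
        System.out.println (y.value + " " + y.delta);      // value and dy/dx = 2 exp (2 x) + 1
      }
    }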


Sensitivity Generation During Curve Construction

 

 

Introduction

 

1.      Advantages: In addition to the usual advantage that Automatic Differentiation provides of producing accurate Greeks on the same run as pricing, there is no longer any need for multiple bumped curves – but the proper Jacobians need to be calculated.

2.      Curve Calibration Deltas: Typical deltas are with respect to the quoted instrument measures.

3.      Span/Segment Elastic Variates: Consider the situation where the latent state in itself (not its transformation) is explicitly measured. There are 5 different kinds of latent state proxies to consider:

4.      Span/Segment variate relations: For a given calculated/formulated output manifest measure , the following are true by definition:

5.      Sensitivities to the elastic variates:

6.      Piece-wise constant segment variate: In this case, .

7.      Splined segment variate: Recall that segment spline coefficient calibration is simply a problem of matching to a terminal node (which is the quoted instrument measure at the terminal node). Thus, for a formulated output , at node k, it is obvious that .

8.      Linear Dependence of Integrand Quadrature: For many functional formulations in finance, the calculated product measure has a linear dependence on the stochastic evolution variate. This implies that the sensitivity depends on the stochastic evolution variate only, and not on the quadrature details.

 

 

Curve Jacobian

 

1.      Representation Jacobian: Every Curve implementation needs to generate the Jacobian of the following latent state metrics with respect to its corresponding latent state quantification metric:

·        Forward Rate Jacobian to Quote Manifest Measure

·        Discount Factor Jacobian to Quote Manifest Measure

·        Zero Rate Jacobian to Quote Manifest Measure

2.      Calibration Jacobian vs. Monte-Carlo Automatic Differentiation Delta: Both of these are actually path-wise, the difference being that:

·        Jacobian generated during calibration is part of inference, therefore iterative.

·        Jacobian of Monte-Carlo Automatic Differentiation is typically path-wise and non-iterative, therefore it is technically part of prediction.

3.      Importance of the representation Self-Jacobian: Representation Self-Jacobian computation efficiency is critical, since the Jacobian of any function of the latent state is going to depend on the self-Jacobian because of the chain rule (a sketch follows at the end of this section).

4.      Forward Rate->DF Jacobian:

·        .

·        .

·         => Forward rate between times  and .

·         => Discount Factor at time

5.      Zero Rate to Forward Rate Equivalence: This equivalence may be used to construct the Zero Rate Jacobian from the Forward Rate Jacobian. Thus, the above equation may be used to extract the Zero Rate micro-Jacobian.

6.      Zero Rate->DF Jacobian:

·       

·         => Zero rate at time t

7.      Quote->Zero Rate Jacobian:

·       

·         => Zero rate at time t

8.      PV->Quote Jacobian:

·       

9.      Cash Rate DF micro-Jacobian:

·       

·         => Cash Rate Quote for the jth Cash instrument.

·         => Discount Factor at time

10.  Cash Instrument PV-DF micro-Jacobian:

·       

·        There is practically no performance impact on construction of the PV-DF micro-Jacobian in the adjoint mode as opposed to the forward mode, due to the triviality of the adjoint.

11.  Euro-dollar Future DF micro-Jacobian:

·       

·         => Quote for the jth EDF with start date of  and maturity of .

12.  Euro-dollar Future PV-DF micro-Jacobian:

·       

·        There is practically no performance impact on construction of the PV-DF micro-Jacobian in the adjoint mode as opposed to the forward mode, due to the triviality of the adjoint.

13.  Interest Rate Swap DF micro-Jacobian:

·       

·         => Quote for the jth IRS maturing at .

·         => DV01 of the swap

·         => Floating PV of the swap

·       

·       

·       

·       

·       

14.  Interest Rate Swap PV-DF micro-Jacobian:

·       

·        There is no performance impact on construction of the PV-DF micro-Jacobian in the adjoint mode as opposed to the forward mode, due to the triviality of the adjoint. Either way the performance is O(n k), where n is the number of cash flows, and k is the number of curve factors.

15.  Credit Default Swap DF micro-Jacobian:

·       

·        j => jth CDS Contract with a maturity

·         => Coupon of the jth CDS

·         => PV of the full CDS contract

·         => PV of the Coupon leg of the CDS Contract

·         => PV of the Accrual paid on default

·       

·       

·       

·       

·       

·       

16.  Credit Default Swap PV-DF micro-Jacobian:

·       

·        There is no performance impact on construction of the PV-DF micro-Jacobian in the adjoint mode as opposed to the forward mode, due to the triviality of the adjoint. Either way the performance is O(n k), where n is the number of cash flows, and k is the number of curve factors.
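
As a sketch of the self-Jacobian chain rule referred to in item 3, the snippet below assumes the discount factor micro-Jacobian dDF(t)/dQuote_j is already available from calibration, and derives the zero rate Jacobian from the continuously compounded convention Z(t) = -ln DF(t) / t. The convention and the numbers are assumptions for illustration only.

    public class ZeroRateJacobianDemo {
      public static void main (String[] args) {
        double t = 2.;                                 // node maturity in years
        double df = 0.96;                              // calibrated discount factor at t
        double[] dDFdQuote = {0.00, -1.85, -0.10};     // micro-Jacobian dDF (t)/dQuote_j from calibration

        // Continuously compounded zero rate: Z (t) = -ln (DF (t)) / t
        double z = -Math.log (df) / t;

        // Chain rule: dZ/dQuote_j = -1 / (t * DF (t)) * dDF (t)/dQuote_j
        double[] dZdQuote = new double[dDFdQuote.length];
        for (int j = 0; j < dDFdQuote.length; ++j)
          dZdQuote[j] = -dDFdQuote[j] / (t * df);

        System.out.println (z + " " + java.util.Arrays.toString (dZdQuote));
      }
    }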


Stochastic Entity Evolution

 

 

Stochastic Entity Evolution – Sensitivity Formulation

 

1.      Evolution Dynamics: The simplest evolution of stochastic variables will be the one with constant forward volatilities. Once the dynamics is formulated in terms of the component drift and the component co-variance to each factor, the subsequent evolution can be determined.

2.      Evolution of the derivative entity: Once the stochastic variate dynamics is established, the dynamics of the observed derivative entity can be progressively determined.

3.      Derivative Entity Measure path-wise evolution: The evolution sequence can be determined for the individual pay-off measures as well. These measures may further depend on the differentials of the derivative entity, so those may also need to be evolved using automatic differentiation.

4.      Derivative Entity Computational efficiency enhancement:

5.      Derivative Entity Measure Calculation: [Instrument, Manifest Measure] input to [Instrument, Manifest Measure] output is equivalently maintained in the Jacobian. Alternately, the computation may also hold [Latent State calibrated parameter] to [Instrument, Manifest Measure] Output map.

 

 

Sensitivities to Stochastic State Variates and Dynamical Parameters

 

1.      State Variates: These are base stochastic entities that characterize the actual system statics/dynamics.

·        Sensitivities to the state variates are typically sensitivities to the “current” (or starting) realization of these variates – e.g., delta, gamma.

2.      Dynamic Parameters: Model parameters that govern the evolution/equilibrium behavior of the state variates, and thereby the system dynamics.

·        Examples would be sensitivities to volatility, correlation, etc.

3.      Segment/Span Coefficients: These are the additional coefficients that act as the interpolated “proxy” for the segment latent state at the unobserved points in the segment.

·        Sensitivities may also be sought to the coefficients.

 

 

Stochastic Variate Evolution Constrained by Splines

 

1.      The forward rates (or indeed any term instrument measures) need to evolve in a manner consistent with the spline constraints across the segments.

2.      For example, the evolution dynamics of the forward rates (or indeed any term instrument measures) can still be via LMM, but splines may still be applicable to the intermediate nodes, as the segment spline coefficients adjust to the forward rate nodes.

3.      Splines may also be used for any term instrument measure determinant (e.g., the volatility surface may also be interpolatively constructed using splines), so as to preserve continuity/smoothness, as opposed to piece-wise discreteness.

 

 

Formulation of the Evolution of Stochastic Variate Self-Jacobian

 

1.      Evolution Formulation:

2.      Definition of Self-Jacobian Delta:

3.      Evolution Sensitivity Formulation:

a.       i => Index over the number of underliers (1 to n)

b.      l => Index over the number of independent stochastic factors (1 to m)

c.       Then

d.      Eulerized version of the above is:

e.       Re-write #1:  where

f.        Re-write #2:  where  and  are column matrices, and  is an n x n square matrix.

g.       Re-write #3: .

·        This is still forward automatic differentiation mode and is , but you can optimize this using specific techniques shown in Glasserman and Zhao (1999).

·        Another significant optimization can be achieved by adjointing techniques [Griewank (2000), Giles and Pierce (2000)].

·        To achieve further significant optimization, transpose this, to get the following adjoint form: , which actually reduces to vector/matrix as opposed to matrix/matrix in the non-transposed version – this would be , as opposed to .

h.       The matrix nature of  simply arises from the chain rule summation over i. Similar chain rules may be set for the different cash flow Jacobians, etc.

i.         Re-casting  from above as , we can separate out the different contributions to . a) The term  is the contribution due to the previous D, i.e., . b) The term  is the contribution from the derivative of the drift term. c) The term  is the contribution from the volatility derivative.

4.      Definition of Self-Jacobian Gamma:

o        

o      

o      

o      
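
A minimal Euler sketch of the self-Jacobian evolution above, assuming a single lognormal (GBM-style) underlier: the state X and its self-Jacobian D = dX(t)/dX(0) are evolved together, and because both the drift and the diffusion terms are linear in X here, the self-Jacobian picks up exactly the same local growth factor at each step. The dynamics and parameters are illustrative assumptions.

    import java.util.Random;

    public class SelfJacobianEvolutionDemo {
      public static void main (String[] args) {
        double x = 100., d = 1.;              // state and its self-Jacobian D = dX (t)/dX (0)
        double mu = 0.03, sigma = 0.2, dt = 1. / 252.;
        Random rng = new Random (42L);

        for (int k = 0; k < 252; ++k) {
          double dW = Math.sqrt (dt) * rng.nextGaussian ();
          double growth = 1. + mu * dt + sigma * dW;   // local derivative dX_{k+1}/dX_k for these dynamics
          x *= growth;                                 // Eulerized state evolution
          d *= growth;                                 // the self-Jacobian evolves by the same local factor
        }

        System.out.println ("X (T) = " + x + ", dX (T)/dX (0) = " + d);
      }
    }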

 

 

Correlated Stochastic Variables Evolution

 

1.      Continuous Evolution of LMM-type Quantities: Given  => the vector of financial variables that need to be mapped to the corresponding Wiener variates . In LMM, for example, start with , then the LMM evolutionary techniques generate  and update .

·        Any continuous entity can be chosen to model correlations, not just LMM-type asset movements. If the default process can be correspondingly transformed to an asset indicator variable, that may be correlated with the other asset variables too.

·        For a set of correlated variates, the stochastic evolution equation is: .

·        Here  is the variance, and  is the correlation matrix – the variance is factored out of the covariance matrix to produce the correlation grid.  is in the usual i.i.d. .

·        The corresponding delta is:

·        The entry in matrix D is given as:

·        The corresponding parameter sensitivity is:

o       This may be simplified in cases where  is an explicit function ONLY of the state evolution variables as:

2.      Correlated Default Times: Unlike the continuous variables above, if we are to consider the correlations between default times ONLY, it is much more efficient to draw correlated default times – again, this correlation is different from the correlation of the continuous asset values whose evolution results in default.

3.      Generation of Correlated Default Times:

·        Generate the vector .

·        Factorize the correlation matrix to create the Cholesky diagonal matrices  and .

·        Use the Cholesky transformation to create  from  using .

·        For each entity  in :

                                                   i.      Evaluate the cumulative normal , where  is a Normal distribution with zero mean and unit variance.

                                                 ii.       where is the survival probability for the entity i.

                                                iii.      In general, .
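
A hedged sketch of the correlated default time draw above for two names: correlate the standard normal draws with the Cholesky factor, map each through the standard normal cumulative, and invert the survival curve. The flat-hazard survival curve Q_i(t) = exp(-lambda_i t) and the rational approximation to the normal cumulative are assumptions made for the sketch.

    import java.util.Random;

    public class CorrelatedDefaultTimesDemo {
      // Abramowitz-Stegun style approximation to the standard normal CDF (adequate for a sketch).
      static double normCDF (double x) {
        double t = 1. / (1. + 0.2316419 * Math.abs (x));
        double d = Math.exp (-0.5 * x * x) / Math.sqrt (2. * Math.PI);
        double p = d * t * (0.319381530 + t * (-0.356563782 + t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
        return x >= 0. ? 1. - p : p;
      }

      public static void main (String[] args) {
        double rho = 0.4;                                              // 2-name correlation
        double[][] a = {{1., 0.}, {rho, Math.sqrt (1. - rho * rho)}};  // Cholesky factor of the correlation matrix
        double[] lambda = {0.02, 0.05};                                // assumed flat hazard rates

        Random rng = new Random (7L);
        double[] z = {rng.nextGaussian (), rng.nextGaussian ()};       // independent N (0, 1) draws

        for (int i = 0; i < 2; ++i) {
          double y = a[i][0] * z[0] + a[i][1] * z[1];                  // correlated normal for entity i
          double u = normCDF (y);                                      // uniform via the standard normal CDF
          double tau = -Math.log (1. - u) / lambda[i];                 // invert Q_i (t) = exp (-lambda_i t)
          System.out.println ("Default time for entity " + i + ": " + tau);
        }
      }
    }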

 

 

LMM Forward Rate Evolution

 

1.      Importance of the LMM Formulation: There are two reasons why it is important:

·        LMM is one of the most popular formulations, and it is essential to evaluate the impact the no-arbitrage-constrained drift has on the evolution and on the Greeks.

·        The lognormal nature of the forward rate  is important in its own right.

2.      No-arbitrage constraint specification: , where , , and  is the maturity of the first instrument that matures after t [Brace, Gatarek, and Musiela (1997), Jamshidian (1997)].

3.      Forward Rate Volatility vs. at-the-money Swap Option Volatility: LMM uses forward rate volatilities, so there needs to be a conversion step that involves converting the market-observed at-the-money swap option volatility onto the LMM forward rate volatility [Brigo and Mercurio (2001)].

4.      Self-Jacobian of the extended LMM Formulation: As shown in Denson and Joshi (2009a) and Denson and Joshi (2009b), , where

·       

·       

·       

5.      Forward-Rate Evolution Matrix: As expected, , where , and .

6.      Variate Jacobian Parameter Sensitivity: , where  is available from above for the two scenarios.

Rewrite #1: , where
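
Since the no-arbitrage drift above is not reproduced here, the sketch below uses the standard lognormal LMM drift under the spot measure, mu_i(t) = sigma_i * sum over j from eta(t) to i of [tau_j rho_ij sigma_j F_j / (1 + tau_j F_j)], as an assumption of the intended form; the inputs are illustrative.

    public class LMMDriftDemo {
      // Standard lognormal LMM drift for forward F_i under the spot measure (sketch only).
      static double drift (int i, int eta, double[] f, double[] sigma, double[] tau, double[][] rho) {
        double sum = 0.;
        for (int j = eta; j <= i; ++j)
          sum += tau[j] * rho[i][j] * sigma[j] * f[j] / (1. + tau[j] * f[j]);
        return sigma[i] * sum;
      }

      public static void main (String[] args) {
        double[] f = {0.02, 0.022, 0.025};      // forward rates
        double[] sigma = {0.15, 0.16, 0.17};    // forward rate volatilities
        double[] tau = {0.25, 0.25, 0.25};      // accrual fractions
        double[][] rho = {{1., 0.9, 0.8}, {0.9, 1., 0.9}, {0.8, 0.9, 1.}};
        System.out.println ("mu_2 = " + drift (2, 0, f, sigma, tau, rho));
      }
    }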


Formulation of Sensitivities for Pay-off Functions

 

 

Formulation of Pay-off Function Stochastic Evolution

 

1.      Monte-Carlo Path-wise Derivatives: Path-wise derivatives are typically forward derivatives, not adjoint [Giles and Glasserman (2006)]. Therefore, computation time is proportional to the number of inputs. Further, they are not easy to accommodate in complex payouts [Capriotti (2011)].

2.      Payoff Expectation Formulation:  [Harrison and Kreps (1979)], where  is the vector of financial variables.

o       Path Payoff Expectation [Kallenberg (1997)] =>  and

 

 

Path Greek

 

1.      Unbiased Estimate of Path Sensitivity: Estimate is unbiased [Kunita (1990), Protter (1990), Broadie and Glasserman (1996), Glasserman (2004)] if  where  is the starting point for the variate.

2.      Monte-Carlo Greek Definition: The Greek is defined as the change in Y with respect to the starting value of x, i.e., ∂Y/∂x(0). If x is a multi-component vector, then the Greek is the vector of partials with respect to each component’s starting value.

3.      Pay-off Function Delta: . Now use the earlier formulation for  to establish the path delta. In particular, using the above, , so all the speed-up advantages associated with the adjoint formulation above follow.

4.      Variance in the Greeks in addition to the base Greeks:

·        Cluster all the Path-wise Greeks calculated for a given input (either  or a parameter ).

·        Within that cluster estimate the corresponding Greek.

·        The usual population sampling variance techniques are applied to compute the variance in the Greek.

5.      Path Parameter () Sensitivity: . Now use the earlier formulation for  to establish the path parameter sensitivity.

6.      Explicit Pay-off Greek Formulation:

·        Notice that it has additional terms since the explicit dependence of  on  is, in general, non-zero: otherwise, , and the pay-off parameter sensitivity formulation proceeds precisely along the same lines as delta formulation.

a.       Rewrite #1:  where  is exactly the same as earlier, and .

b.      Rewrite #2:  where , , and  are n x 1 column matrices, and  is an n x n square matrix.

c.       Rewrite #3: Generalizing over all the j’s, we get .

d.      Rewrite #4: Transposing the above we get .

e.       Implications of rewrite #4: Given that  and  are now row matrices, and that they are the preceding terms in the series, all the adjoint advantages indicated earlier continue to be valid. Further the previous formulations for  can be re-used at the same Eulerian time step.

f.        Adjoint Storage Demands: Remember that  and  still need to be retained in memory during the forward evolutionary sweep for , so this represents a corresponding increase in the storage requirements.
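
A concrete hedged instance of the path Greek above: for a discounted call payoff Y = exp(-rT) (X(T) - K)+ under lognormal terminal dynamics, dX(T)/dX(0) = X(T)/X(0), so the per-path delta estimator is exp(-rT) 1{X(T) > K} X(T)/X(0). The dynamics and the payoff are assumptions for the sketch, not the document's specific product.

    import java.util.Random;

    public class PathwiseDeltaDemo {
      public static void main (String[] args) {
        double x0 = 100., k = 105., r = 0.02, sigma = 0.2, t = 1.;
        int paths = 200000;
        Random rng = new Random (1L);
        double sumDelta = 0.;

        for (int p = 0; p < paths; ++p) {
          double z = rng.nextGaussian ();
          double xT = x0 * Math.exp ((r - 0.5 * sigma * sigma) * t + sigma * Math.sqrt (t) * z);

          // Pathwise estimator: dY/dX (0) = exp (-rT) * 1{X (T) > K} * X (T) / X (0)
          if (xT > k) sumDelta += Math.exp (-r * t) * xT / x0;
        }

        System.out.println ("Pathwise delta estimate: " + sumDelta / paths);
      }
    }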

 

 

Payoff Sensitivity to the Correlation Matrix

 

1.      Payoff Sensitivity Formulation: Irrespective of whether the stochastic process is diffusive or not, , where  is the correlation matrix.

2.      Financial Variable to Correlated Random Partial:

·        Remember the general theorem that if, then .

·        From this, and using , you can derive .

3.      Differential of the Cholesky Factorization Matrix:

·         where is given in Smith (1995).

·        Therefore  where  is given from above.

 

 

Algorithmic Differentiation in Payoff Sensitivities Calculation

 

1.      Monte-Carlo Path-wise Derivatives: Path-wise derivatives are typically forward derivatives, not adjoint (Giles and Glasserman (2006)). Therefore computation time is proportional to the number of inputs.

2.      Forward Monte-Carlo evolution variates: The full set of forward evolution variates is still needed for extracting the fields/parameters required for the delta estimation of the adjoint path.

3.      Corresponding storage requirements: All the variate sets (the transition matrices, etc.) still need to be maintained, so this represents an increase in the storage needed.

4.      Adjointing vs. Reverse Mode: Typically adjoint refers ONLY to the intermediate/dynamical matrices [Giles (2007), Giles (2009)], whereas REVERSE refers to calculation of only the relevant outputs and their sensitivities [Griewank (2000)].

·        Adjointing deals with the evolved latent state space parameters left to right, therefore technically it is still forward in the time sense – and achieves optimization by minimizing the matrix<->matrix computations.

·        In the non-matrix sense (as in adjoint automatic differentiation), the term reverse and adjoint are analogous, i.e., adjoint/reverse refer to a scan backwards from right to left inside the SAME step, for e.g., a time step.

·        Finally, formalized pure “forward” and pure “reverse” modes are often theoretical constructs. Just as hand-rolled code can beat generic optimizers, hand-rolled algorithmic differentiation code will be better – even for Monte-Carlo sensitivity runs. However, the development productivity gains to be attained by using automated AD tools are well documented.

5.      Systematic Design Paradigm for using Automatic Differentiation for Path-wise Monte-Carlo Derivatives: Capriotti and Giles (2011) detail several techniques for this.

6.      Cost:

·        Forward Automatic Differentiation Cost =>

·        Reverse Automatic Differentiation Cost =>

·        B => Base; F => Forward; R => Reverse.

7.      Calibration along with Automatic Sensitivities Generation: Automatic Differentiation is a natural performance fit in these situations (Kaebe, Maruhn, and Sachs (2009), Schlenkirch (2011)). Many approaches in this regard end up utilizing the implicit function theorem to facilitate the formulation (Christianson (1998), Giles and Pierce (2000)).


Bermudan Swap Option Sensitivities

 

 

Base Formulation

 

1.      Option Valuation under Monte-Carlo: Unlike typical closed forms (such as Black-Scholes, Black, etc.), volatility does not explicitly show up in the PV generation part for options. Instead, it features intrinsically, through the evolution dynamics, and through the valuation of the underlying under a specific exercise scenario.

2.      H x M Bermudan Swap Option Details:

·        Define the M swap exercise/pay date tenor grids .

·        Option exercise dates  start from date  onwards, i.e., .

·        The cash flow stream after the exercise is the payment stream .

3.      H x M Exercised Bermudan Swap Valuation: , where R is the fixed rate. The Bermudan Swap PV is , where  is the expectation operator.

4.      H x M Bermudan Swap Valuation SKU:

·        Simulate a single path sequence of .

·        For this path, evaluate  for each .

·        For this path, generate a vector of  corresponding to each possible exercise date .

·        Find  that maximizes .

·        Record .
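
A per-path sketch of the valuation SKU above: given the exercised swap values already computed along one simulated path, pick the exercise date that maximizes the path value and record it; averaging the recorded values across paths gives the Bermudan PV. The numbers are placeholders for a real path evaluation.

    public class BermudanPathValuationDemo {
      public static void main (String[] args) {
        // One simulated path: exercised swap values at each admissible exercise date,
        // already discounted to today (placeholders for a real path evaluation).
        double[] exerciseDate = {1., 2., 3., 4.};
        double[] exercisedValue = {1.35, 1.62, 1.48, 0.97};

        int best = 0;
        for (int h = 1; h < exercisedValue.length; ++h)
          if (exercisedValue[h] > exercisedValue[best]) best = h;   // exercise date that maximizes the path value

        // Record the path PV at the maximizing exercise date.
        System.out.println ("Exercise at t = " + exerciseDate[best] + ", path PV = " + exercisedValue[best]);
      }
    }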

 

 

Greek Estimation

 

1.      H x M Exercised Bermudan Swap Option Delta/Parameter Sensitivity [Piterbarg (2004), Capriotti and Giles (2011)]:  

2.      Individual Cash-flow PV and Greeks [Leclerc, Liang, and Schneider (2009)]:

3.      Cash-flow PV Delta:

 

 

LSM Methodology

 

1.      Curve-Fitting to Extract Optimal Exercise: Since the simple model of maximizing  across  gets too cumbersome if the exercise dates are numerous, the LSM-based optimal exercise determination laid out in Longstaff and Schwartz (2001) can be used – regress  against  (a sketch of the regression step follows at the end of this section).

2.      Continuous or Fine-grained Call Schedules: LSM is highly effective in these situations. Sampling is reduced to a few evenly spaced-out grid points – such that the full sample scoping is eliminated.

3.      Interpolation between Sampled Nodes: Any appropriate inter-nodal interpolating/splining technique to determine  as a function of  is valid – e.g., constant  over , linear/quadratic/polynomial  over , or even exponential/hyperbolic tension spline-based  over .
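
A minimal sketch of the LSM regression step referenced above: at one exercise date, the realized discounted continuation values are regressed against a small polynomial basis of the state via the ordinary least squares normal equations, and the fitted polynomial is then used as the continuation value estimate. The basis, the data, and the solver are illustrative assumptions.

    public class LSMRegressionDemo {
      // Solve a small linear system A x = b by Gaussian elimination (sufficient for a 3 x 3 normal-equation system).
      static double[] solve (double[][] a, double[] b) {
        int n = b.length;
        for (int p = 0; p < n; ++p) {
          for (int r = p + 1; r < n; ++r) {
            double f = a[r][p] / a[p][p];
            for (int c = p; c < n; ++c) a[r][c] -= f * a[p][c];
            b[r] -= f * b[p];
          }
        }
        double[] x = new double[n];
        for (int r = n - 1; r >= 0; --r) {
          double s = b[r];
          for (int c = r + 1; c < n; ++c) s -= a[r][c] * x[c];
          x[r] = s / a[r][r];
        }
        return x;
      }

      public static void main (String[] args) {
        // Per-path state at the exercise date and realized discounted continuation values (placeholder data).
        double[] state = {0.018, 0.022, 0.025, 0.027, 0.031, 0.035};
        double[] continuation = {1.10, 1.25, 1.31, 1.44, 1.52, 1.70};

        // Basis: {1, x, x^2}; build the normal equations B'B beta = B'y.
        double[][] btb = new double[3][3];
        double[] bty = new double[3];
        for (int p = 0; p < state.length; ++p) {
          double[] basis = {1., state[p], state[p] * state[p]};
          for (int i = 0; i < 3; ++i) {
            bty[i] += basis[i] * continuation[p];
            for (int j = 0; j < 3; ++j) btb[i][j] += basis[i] * basis[j];
          }
        }

        double[] beta = solve (btb, bty);
        double x = 0.028;                       // query state: fitted continuation value
        System.out.println (beta[0] + beta[1] * x + beta[2] * x * x);
      }
    }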


NTD Basket Sensitivities

 

 

NTD Product Formulation

 

1.      Running Index Details:

·         => Number of Components

·         => Row, column index of the correlation matrix for each of the n components

·         => Factorized Cholesky diagonal matrix for the n components

·        r => rth component in the current draw of ordered default times; it corresponds to the current nth-to-default.

·        N => The “N” in NTD ().

2.      Base NTD Pricing:

·       

·       

·       

·       

·         => Default Indicator that is 1 if , and 0 otherwise.

                                                   i.      To make the computation convenient [Capriotti and Giles (2010), Capriotti and Giles (2011), Giles (2009), Chen and Glasserman (2008)]  is regularized and smeared out using an appropriate proxy, i.e., .

                                                 ii.       can be the Heaviside function.

                                                iii.      The proxy  has a bias, but it can be designed to be much tighter than the Monte-Carlo accuracy.

3.      NTD Sensitivity:

·       

·       

·       

·       

·       
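
A sketch of the smeared default indicator of item 2, assuming a logistic proxy: the step 1{tau < t} is replaced by a smooth function of width eps, so that its derivative with respect to tau (and hence with respect to the hazard/correlation parameters, via the chain rule) is well defined. The logistic form and the width are assumptions.

    public class SmoothedDefaultIndicatorDemo {
      // Smooth proxy for the default indicator 1{tau < t}: a logistic smear of width eps around the barrier.
      static double smoothIndicator (double tau, double t, double eps) {
        return 1. / (1. + Math.exp ((tau - t) / eps));
      }

      public static void main (String[] args) {
        double t = 5.;                // protection horizon
        double eps = 0.05;            // smearing width: small enough to keep the bias below the Monte-Carlo noise
        for (double tau : new double[] {4.5, 4.99, 5.01, 6.}) {
          // The proxy is differentiable in tau, so its sensitivity chains through d (tau)/d (parameter).
          System.out.println ("tau = " + tau + ", indicator proxy = " + smoothIndicator (tau, t, eps));
        }
      }
    }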


Basket Options

 

 

1.      Base Pricing Formulation:

2.      Basket Options Delta:

·        Remember from earlier that . Here:

                                                   i.     

                                                 ii.     

·         where


References

 

 

·        Bartholomew-Biggs, M., S. Brown, B. Christianson, and L. Dixon (2000): Automatic Differentiation of Algorithms. Journal of Computational and Applied Mathematics 124 (2000): 171-190.

·        Berland, H (2006): Automatic Differentiation.

·        Berz, M., et al. (1996): Computational Differentiation: Techniques, Applications and Tools, Society for Industrial and Applied Mathematics, Philadelphia, PA.

·        Bischof, C, P Hovland, and B Norris (2005): On the Implementation of Automatic Differentiation Tools.

·        Brace, A., D. Gatarek, and M. Musiela (1997): The Market Model of Interest-Rate Dynamics. Mathematical Finance 7: 127-155.

·        Brigo, D., and F. Mercurio (2001): Interest-Rate Models: Theory and Practice, Springer-Verlag.

·        Broadie, M., and M. Glasserman (1996): Estimating Security Derivative Prices Using Simulation. Management Science 42: 269-285.

·        Capriotti, L. (2011): Fast Greeks by Algorithmic Differentiation. Journal of Computational Finance 14 (3): 3-35.

·        Capriotti, L., and M. Giles (2010): Fast Correlation Greeks by Adjoint Algorithmic Differentiation. Risk (2010): 79-83.

·        Capriotti, L., and M. Giles (2011): Algorithmic Differentiation: Adjoint Greeks Made Easy.

·        Chen, Z., and P. Glasserman (2008): Sensitivity Estimates for Portfolio Credit Derivatives using Monte-Carlo. Finance and Stochastics 12 (4): 507-540.

·        Christianson, B. (1998): Reverse Accumulation and Implicit Functions. Optimization Methods and Software 9 (4): 307-322.

·        Denson, N., and M. Joshi (2009a): Fast and Accurate Greeks for the LIBOR Market Model, Journal of Computational Finance 14 (4): 115-140.

·        Denson, N., and M. Joshi (2009b): Flaming Logs, Wilmott Journal 1: 5-6.

·        Ghaffari, H, J Li, Y Li, and Z Nie (2007): Automatic Differentiation.

·        Giles, M. (2007): Monte Carlo Evaluation of Sensitivities in Computational Finance. Proceedings of the Eighth HERCMA Conference.

·        Giles, M. (2009): Vibrato Monte-Carlo Sensitivities, Monte-Carlo and Quasi Monte-Carlo Methods 2008, P. L’Ecuyer, and Owen, A., editors, Springer, New York.

·        Giles, M., and P. Glasserman (2006): Smoking Adjoints: fast Monte-Carlo Greeks. Risk (2006): 92-96.

·        Giles, M., and N. Pierce (2000): An introduction to the adjoint approach to design. Flow, Turbulence, and Control 65: 393-415.

·        Glasserman, P. (2004): Monte-Carlo Methods in Financial Engineering, Springer-Verlag, New York.

·        Glasserman, P., and X. Zhao (1999): Fast Greeks by Simulation in Forward Libor Models. Journal of Computational Finance 3: 5-39.

·        Griewank, A. (2000): Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, Society for Industrial and Applied Mathematics, Philadelphia.

·        Harrison, J., and D. Kreps (1979): Martingales and Arbitrage in multi-period Securities Markets. Journal of Economic Theory 20 (3): 381-408.

·        Iri, M (1991): History of Automatic Differentiation and Rounding Error Estimation, in: A. Griewank, G. Corliss (Eds.), Automatic Differentiation of Algorithms, Society for Industrial and Applied Mathematics, Philadelphia, PA, 3-16.

·        Jamshidian, F. (1997): LIBOR and Swap Market Models and Measures. Finance and Stochastics 1: 293-330.

·        Kaebe, C., J. Maruhn, and E. Sachs (2009): Adjoint based Monte-Carlo Calibration of Financial Market Models. Finance and Stochastics 13 (3): 351-379.

·        Kallenberg, O. (1997): Foundations of Modern Probability Theory, Springer, New York.

·        Kunita, H. (1990): Stochastic Flows and Stochastic Differential Equations, Cambridge University Press.

·        Leclerc, M., Q. Liang, and I. Schneider (2009): Fast Monte-Carlo Bermudan Greeks. Risk (2009): 84-88.

·        Longstaff, F., and E. Schwartz (2001): Valuing American Options by Simulation: A Simple Least-Squares Approach. Review of Financial Studies 14: 113-147.

·        Naumann, U (2008): Optimal Jacobian accumulation is NP-complete. Mathematical Programming 112 (2): 427–441.

·        Piterbarg, V. (2004): Computing deltas of callable LIBOR exotics in Forward LIBOR Models. Journal of Computational Finance 7(3): 107-144.

·        Protter, P. (1990): Stochastic Integration and Differential Equations, Springer-Verlag, Berlin.

·        Schlenkirch, S. (2011): Efficient Calibration of the Hull-White Model. Optimal Control Applications and Methods 33 (3): 352-362.

·        Smith, S. (1995): Differentiation of the Cholesky Algorithm. Journal of Computational and Graphical Statistics 4 (2): 134-147.

·        Wengert, R (1964): A Simple Automatic Derivative Evaluation Program. Communications of the ACM 7: 463–464.


Figure 1: Optimal Intermediate Wengert Variable – Input Parameters feeding the Output Jacobian.

Figure 2: Computation Financial Object Scheme – Input Parameters feeding the Intermediate Wengert Set, which feeds the Output Jacobian.

Figure 3: Wengert Fan-in and Fan-out – n Input Instruments -> p <= n Calibrated Parameters -> w <= p Wengert Intermediate Variates -> m Output Instruments, with k Output Measures per Instrument producing m * k Outputs.


Stochastic Analyzer Software Components

 

 

While the bulk of the stochastic analyzer functionality is formulation-based, the algorithmic differentiation functionality is available across two core functional packages:

·        Univariate function package

·        Univariate Calculus package

 

 

Univariate Function Package (org.drip.quant.function1D)

 

The univariate function package implements the individual univariate functions, their convolutions, and reflections. It contains the following classes/interfaces:

1.      AbstractUnivariate: This abstract class provides the evaluation of the given basis/objective function and its derivatives for a specified variate. Default implementations of the derivatives are for black box, non-analytical functions.

2.      UnivariateConvolution: This class provides the evaluation of the point value and the derivatives of the convolution of 2 univariate functions for the specified variate.

3.      UnivariateReflection: For a given variate , this class provides the evaluation and derivatives of the reflection at .

4.      Polynomial: This class provides the evaluation of the nth order polynomial and its derivatives for a specified variate. The degree n specifies the order of the polynomial.

5.      BernsteinPolynomial: This class provides the evaluation of Bernstein polynomial and its derivatives for a specified variate. The degree exponent specifies the order of the Bernstein polynomial.

6.      NaturalLogSeriesElement: This class provides the evaluation of a single term in the expansion series for the natural log. The exponent parameter specifies which term in the series is being considered.

7.      ExponentialTension: This class provides the evaluation of exponential tension basis function and its derivatives for a specified variate. It can be customized by the choice of exponent, the base, and the tension parameter.

8.      HyperbolicTension: This class provides the evaluation of hyperbolic tension basis function and its derivatives for a specified variate. It can be customized by the choice of the hyperbolic function and the tension parameter.

9.      LinearRationalShapeControl: This class implements the deterministic rational shape control functionality on top of the estimate of the basis splines inside the segment, expressed as a function of the normalized ordinate.

10.  QuadraticRationalShapeControl: This class implements the deterministic rational shape control functionality on top of the estimate of the basis splines inside the segment, expressed as a function of the normalized ordinate.

11.  LinearRationalTensionExponential: This class provides the evaluation of the Convolution of the Linear Rational and the Tension Exponential Function and its derivatives for a specified variate.
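
The sketch below illustrates the shape of an AbstractUnivariate-style contract described in item 1 – a value evaluation plus a difference-based default derivative that concrete functions may override analytically. It is an assumption about the design for illustration; it does not reproduce DRIP's actual signatures in org.drip.quant.function1D.

    public abstract class UnivariateSketch {
      // Value of the function at the variate.
      public abstract double evaluate (double x);

      // Default black-box derivative: central difference; analytic functions should override this.
      public double derivative (double x, int order) {
        if (0 == order) return evaluate (x);
        double h = 1.e-6;
        return (derivative (x + h, order - 1) - derivative (x - h, order - 1)) / (2. * h);
      }

      public static void main (String[] args) {
        UnivariateSketch quadratic = new UnivariateSketch () {
          @Override public double evaluate (double x) { return x * x + 3. * x; }
          @Override public double derivative (double x, int order) {      // analytic override
            return 1 == order ? 2. * x + 3. : 2 == order ? 2. : 0 == order ? evaluate (x) : 0.;
          }
        };
        System.out.println (quadratic.evaluate (2.) + " " + quadratic.derivative (2., 1));
      }
    }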

 

 

Univariate Calculus Package (org.drip.quant.calculus)

 

The univariate calculus package implements the univariate difference-based arbitrary-order derivative, the differential control settings, several integration routines, and the multivariate Wengert Jacobian.

1.      DerivativeControl: DerivativeControl provides bumps needed for numerically approximating derivatives. Bumps can be absolute or relative, and they default to a floor.

2.      Differential: Differential holds the incremental differentials for the variate and the objective functions.

3.      WengertJacobian: WengertJacobian contains the Jacobian of the given set of Wengert variables to the set of parameters. It exposes the following functionality:

o       Set/Retrieve the Wengert variables

o       Accumulate the Partials

o       Scale the partial entries

o       Merge the Jacobian with another

o       Retrieve the WengertJacobian elements

o       Display the contents of the WengertJacobian

4.      Integrator: Integrator implements the following routines for integrating the objective functions:

o       Linear Quadrature

o       Mid-Point Scheme

o       Trapezoidal Scheme

o       Simpson/Simpson38 Schemes

o       Boole Scheme
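
As a closing sketch, the container below mirrors the WengertJacobian functionality listed above – holding d(Wengert variable)/d(parameter) entries and supporting accumulation and scaling. It is a hedged illustration of the idea, not DRIP's actual org.drip.quant.calculus API.

    public class WengertJacobianSketch {
      private final double[][] jacobian;     // rows: Wengert variables, columns: parameters

      public WengertJacobianSketch (int numWengert, int numParameter) {
        jacobian = new double[numWengert][numParameter];
      }

      // Accumulate a partial d (Wengert w)/d (parameter p).
      public void accumulatePartial (int w, int p, double increment) { jacobian[w][p] += increment; }

      // Scale every entry, e.g., when the enclosing measure is rescaled.
      public void scale (double factor) {
        for (double[] row : jacobian)
          for (int p = 0; p < row.length; ++p) row[p] *= factor;
      }

      public double entry (int w, int p) { return jacobian[w][p]; }

      public static void main (String[] args) {
        WengertJacobianSketch wj = new WengertJacobianSketch (2, 3);
        wj.accumulatePartial (0, 1, 0.25);
        wj.accumulatePartial (0, 1, 0.10);
        wj.scale (2.);
        System.out.println (wj.entry (0, 1));   // 0.70
      }
    }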