Publication


Featured research published by George Labahn.


Numerische Mathematik | 2004

A penalty method for American options with jump diffusion processes

Y. d’Halluin; Peter A. Forsyth; George Labahn

Summary. The fair price for an American option where the underlying asset follows a jump diffusion process can be formulated as a partial integro-differential linear complementarity problem. We develop an implicit discretization method for pricing such American options. The jump diffusion correlation integral term is computed using an iterative method coupled with an FFT, while the American constraint is imposed by using a penalty method. We derive sufficient conditions for global convergence of the discrete penalized equations at each timestep. Finally, we present numerical tests which illustrate such convergence.
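
As a rough illustration of the penalty idea (a minimal sketch, not the authors' discretization), the following Python code prices an American put with implicit Euler timestepping and a plain Black-Scholes operator, omitting the jump integral and the FFT machinery treated in the paper. The function name american_put_penalty, the grid sizes, and the penalty parameter `large` are illustrative choices, not values from the paper.

```python
# Minimal sketch of a penalty method for an American put.
# Assumptions (not from the paper): no jump term, implicit Euler, uniform grid.
import numpy as np
from scipy.linalg import solve_banded

def american_put_penalty(K=100.0, r=0.05, sigma=0.3, T=1.0,
                         S_max=400.0, M=200, N=100, large=1e6, tol=1e-8):
    S = np.linspace(0.0, S_max, M + 1)
    dS, dt = S[1] - S[0], T / N
    payoff = np.maximum(K - S, 0.0)
    V = payoff.copy()
    i = np.arange(1, M)                              # interior nodes 1..M-1
    # Central-difference coefficients of the Black-Scholes operator L.
    a = 0.5 * sigma**2 * S[i]**2 / dS**2 - 0.5 * r * S[i] / dS   # sub-diagonal
    c = 0.5 * sigma**2 * S[i]**2 / dS**2 + 0.5 * r * S[i] / dS   # super-diagonal
    b = -(a + c) - r                                             # diagonal
    for _ in range(N):                               # implicit Euler steps in tau = T - t
        rhs_base = V[i].copy()
        V_new = V.copy()
        for _ in range(100):                         # penalty iteration within one step
            pen = np.where(V_new[i] < payoff[i], large, 0.0)
            ab = np.zeros((3, M - 1))                # banded form of I - dt*L + dt*pen
            ab[0, 1:] = -dt * c[:-1]
            ab[1, :] = 1.0 - dt * b + dt * pen
            ab[2, :-1] = -dt * a[1:]
            rhs = rhs_base + dt * pen * payoff[i]
            rhs[0] += dt * a[0] * K                  # boundary value V(0, tau) = K
            V_old = V_new.copy()
            V_new[i] = solve_banded((1, 1), ab, rhs)
            V_new[0], V_new[-1] = K, 0.0             # put boundaries: K at S=0, 0 at S_max
            if np.max(np.abs(V_new - V_old)) < tol:
                break
        V = V_new
    return S, V

S, V = american_put_penalty()
print("American put value near S = K:", np.interp(100.0, S, V))
```

At each implicit step, the penalty term is switched on only at nodes where the candidate value falls below the payoff, which pushes the solution back onto the early-exercise constraint; the inner iteration stops once the active set, and hence the solution, stops changing.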


SIAM Journal on Matrix Analysis and Applications | 1994

A Uniform Approach for the Fast Computation of Matrix-Type Pade Approximants

Bernhard Beckermann; George Labahn

Recently, a uniform approach was given by B. Beckermann and G. Labahn [Numer. Algorithms, 3 (1992), pp. 45-54] for different concepts of matrix-type Pade approximants, such as descriptions of vector and matrix Pade approximants along with generalizations of simultaneous and Hermite Pade approximants. The considerations in this paper are based on this generalized form of the classical scalar Hermite Pade approximation problem, power Hermite Pade approximation. In particular, this paper studies the problem of computing these new approximants. A recurrence relation is presented for the computation of a basis for the corresponding linear solution space of these approximants. This recurrence also provides bases for particular subproblems. This generalizes previous work by Van Barel and Bultheel and, in a more general form, by Beckermann. The computation of the bases has complexity {\cal O}(\sigma^{2}), where \sigma is the order of the desired approximant, and requires no conditions on the input data. A second algorithm using the same recurrence relation along with divide-and-conquer methods is also presented. When the coefficient field allows for fast polynomial multiplication, this second algorithm computes a basis in the superfast complexity {\cal O}(\sigma \log^{2} \sigma). In both cases the algorithms are reliable in exact arithmetic. That is, they never break down, and the complexity depends neither on any normality assumptions nor on the singular structure of the corresponding solution table. As a further application, these methods result in fast (and superfast) reliable algorithms for the inversion of striped Hankel, layered Hankel, and (rectangular) block-Hankel matrices.
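
For orientation only (this is not the paper's recurrence), the sketch below writes down the order conditions defining a Hermite Pade approximant and solves them with dense linear algebra, roughly {\cal O}(\sigma^{3}) work, whereas the paper's recurrence computes a whole basis of solutions in {\cal O}(\sigma^{2}) and its divide-and-conquer variant in the superfast complexity. The helper hermite_pade and the exp(x) example, which reduces to an ordinary (2,2) Pade approximant, are assumptions made for illustration.

```python
# Hermite-Pade order conditions solved naively (illustration only, not the
# paper's O(sigma^2) recurrence): find polynomials p_k, deg p_k <= n_k, not all
# zero, with  sum_k p_k(x) f_k(x) = O(x^sigma),  sigma = sum(n_k + 1) - 1.
import numpy as np
from math import factorial

def hermite_pade(series, degrees):
    """series: lists of Taylor coefficients f_k; degrees: degree bounds n_k."""
    sigma = sum(n + 1 for n in degrees) - 1
    cols = []
    for f, n in zip(series, degrees):
        f = np.asarray(f, dtype=float)
        for j in range(n + 1):                  # column for the x^j coefficient of p_k
            col = np.zeros(sigma)
            col[j:] = f[:sigma - j]             # coefficients of x^j * f_k(x) mod x^sigma
            cols.append(col)
    A = np.column_stack(cols)                   # sigma x (sigma + 1), so a kernel exists
    _, _, vh = np.linalg.svd(A)
    coeffs = vh[-1]                             # nontrivial vector with A @ coeffs ~ 0
    out, pos = [], 0
    for n in degrees:
        out.append(coeffs[pos:pos + n + 1])
        pos += n + 1
    return out

# Example: p + q*exp = O(x^5) with deg p, deg q <= 2 gives the (2,2) Pade
# approximant of exp(x) as -p(x)/q(x).
one = [1, 0, 0, 0, 0]
exp_series = [1.0 / factorial(k) for k in range(5)]
p, q = hermite_pade([one, exp_series], [2, 2])
print("p:", p / q[0])                           # normalize so q(0) = 1
print("q:", q / q[0])
```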


Archive | 1996

Maple V: programming guide

Michael B. Monagan; Keith O. Geddes; K. M. Heal; George Labahn; S. M. Vorkoetter

Contents:
1. Introduction.- 1.1 Getting Started.- Locals and Globals.- Inputs, Parameters, Arguments.- 1.2 Basic Programming Constructs.- The Assignment Statement.- The for Loop.- The Conditional Statement.- The while Loop.- Modularization.- Recursive Procedures.- Exercise.- 1.3 Basic Data Structures.- Exercise.- Exercise.- A MEMBER Procedure.- Exercise.- Binary Search.- Exercises.- Plotting the Roots of a Polynomial.- 1.4 Computing with Formulae.- The Height of a Polynomial.- Exercise.- The Chebyshev Polynomials, Tn(x).- Exercise.- Integration by Parts.- Exercise.- Computing with Symbolic Parameters.- Exercise.
2. Fundamentals.- 2.1 Evaluation Rules.- Parameters.- Local Variables.- Global Variables.- Exceptions.- 2.2 Nested Procedures.- Local or Global?.- The Quick-Sort Algorithm.- Creating a Uniform Random Number Generator.- 2.3 Types.- Types that Modify Evaluation Rules.- Structured Types.- Type Matching.- 2.4 Choosing a Data Structure: Connected Graphs.- Exercises.- 2.5 Remember Tables.- The remember Option.- Adding Entries Explicitly.- Removing Entries from a Remember Table.- 2.6 Conclusion.
3. Advanced Programming.- 3.1 Procedures Which Return Procedures.- Creating a Newton Iteration.- A Shift Operator.- 3.2 When Local Variables Leave Home.- Creating the Cartesian Product of a Sequence of Sets.- Exercises.- 3.3 Interactive Input.- Reading Strings from the Terminal.- Reading Expressions from the Terminal.- Converting Strings to Expressions.- 3.4 Extending Maple.- Defining New Types.- Exercises.- Neutral Operators.- Exercise.- Extending Certain Commands.- 3.5 Writing Your Own Packages.- Package Initialization.- Making Your Own Library.- 3.6 Conclusion.
4. The Maple Language.- 4.1 Language Elements.- The Character Set.- Tokens.- Token Separators.- 4.2 Escape Characters.- 4.3 Statements.- The Assignment Statement.- Unassignment: Clearing a Name.- The Selection Statement.- The Repetition Statement.- The read and save Statements.- 4.4 Expressions.- Expression Trees: Internal Representation.- The Types and Operands of Integers, Strings, Indexed Names, and Concatenations.- Fractions and Rational Numbers.- Floating-Point (Decimal) Numbers.- Complex Numerical Constants.- Labels.- Sequences.- Sets and Lists.- Functions.- The Arithmetic Operators.- Non-Commutative Multiplication.- The Composition Operators.- The Ditto Operators.- The Factorial Operator.- The mod Operator.- The Neutral Operators.- Relations and Logical Operators.- Arrays and Tables.- Series.- Ranges.- Unevaluated Expressions.- Constants.- Structured Types.- 4.5 Useful Looping Constructs.- The map, select, and remove Commands.- The zip Command.- The seq, add, and mul Commands.- 4.6 Substitution.- 4.7 Conclusion.
5. Procedures.- 5.1 Procedure Definitions.- Mapping Notation.- Unnamed Procedures and Their Combinations.- Procedure Simplification.- 5.2 Parameter Passing.- Declared Parameters.- The Sequence of Arguments.- 5.3 Local and Global Variables.- Evaluation of Local Variables.- 5.4 Procedure Options and the Description Field.- Options.- The Description Field.- 5.5 The Value Returned by a Procedure.- Assigning Values to Parameters.- Explicit Returns.- Error Returns.- Trapping Errors.- Returning Unevaluated.- Exercise.- 5.6 The Procedure Object.- Last Name Evaluation.- The Type and Operands of a Procedure.- Saving and Retrieving Procedures.- 5.7 Explorations.- Exercises.- 5.8 Conclusion.
6. Debugging Maple Programs.- 6.1 A Tutorial Example.- 6.2 Invoking the Debugger.- Displaying the Statements of a Procedure.- Breakpoints.- Watchpoints.- Error Watchpoints.- 6.3 Examining and Changing the State of the System.- 6.4 Controlling Execution.- 6.5 Restrictions.
7. Numerical Programming in Maple.- 7.1 The Basics of evalf.- 7.2 Hardware Floating-Point Numbers.- Newton Iterations.- Computing with Arrays of Numbers.- 7.3 Floating-Point Models in Maple.- Software Floats.- Hardware Floats.- Roundoff Error.- 7.4 Extending the evalf Command.- Defining Your Own Constants.- Defining Your Own Functions.- 7.5 Using the Matlab Package.- 7.6 Conclusion.
8. Programming with Maple Graphics.- 8.1 Basic Plot Functions.- 8.2 Programming with Plotting Library Functions.- Plotting a Loop.- A Ribbon Plot Procedure.- 8.3 Maple's Plotting Data Structures.- The PLOT Data Structure.- A Sum Plot.- The PLOT3D Data Structure.- 8.4 Programming with Plot Data Structures.- Writing Graphic Primitives.- Plotting Gears.- Polygon Meshes.- 8.5 Programming with the plottools Package.- A Pie Chart.- A Dropshadow Procedure.- Creating a Tiling.- A Smith Chart.- Modifying Polygon Meshes.- 8.6 Example: Vector Field Plots.- 8.7 Generating Grids of Points.- 8.8 Animation.- 8.9 Programming with Color.- Generating Color Tables.- Adding Color Information to Plots.- Creating a Chess Board Plot.- 8.10 Conclusion.
9. Input and Output.- 9.1 A Tutorial Example.- 9.2 File Types and Modes.- Buffered Files versus Unbuffered Files.- Text Files versus Binary Files.- Read Mode versus Write Mode.- The default and terminal Files.- 9.3 File Descriptors versus File Names.- 9.4 File Manipulation Commands.- Opening and Closing Files.- Position Determination and Adjustment.- Detecting the End of a File.- Determining File Status.- Removing Files.- 9.5 Input Commands.- Reading Text Lines from a File.- Reading Arbitrary Bytes from a File.- Formatted Input.- Reading Maple Statements.- Reading Tabular Data.- 9.6 Output Commands.- Configuring Output Parameters using the interface Command.- One-Dimensional Expression Output.- Two-Dimensional Expression Output.- Writing Maple Strings to a File.- Writing Arbitrary Bytes to a File.- Formatted Output.- Writing Tabular Data.- Flushing a Buffered File.- Redirecting the default Output Stream.- 9.7 Conversion Commands.- C or FORTRAN Generation.- LaTeX or eqn Generation.- Conversion between Strings and Lists of Integers.- Parsing Maple Expressions and Statements.- Formatted Conversion to and from Strings.- 9.8 A Detailed Example.- 9.9 Notes to C Programmers.- 9.10 Conclusion.


Journal of Computational Finance | 2007

Numerical methods for controlled Hamilton-Jacobi-Bellman PDEs in finance

Peter A. Forsyth; George Labahn

Many nonlinear option pricing problems can be formulated as optimal control problems, leading to Hamilton-Jacobi-Bellman (HJB) or Hamilton-Jacobi-Bellman-Isaacs (HJBI) equations. We show that such formulations are very convenient for developing monotone discretization methods which ensure convergence to the financially relevant solution, which in this case is the viscosity solution. In addition, for the HJB type equations, we can guarantee convergence of a Newton-type (Policy) iteration scheme for the nonlinear discretized algebraic equations. However, in some cases, the Newton-type iteration cannot be guaranteed to converge (for example, the HJBI case), or can be very costly (for example for jump processes). In this case, we can use a piecewise constant control approximation. While we use a very general approach, we also include numerical examples for the specific interesting case of option pricing with unequal borrowing/lending costs and stock borrowing fees.
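
A minimal sketch of the policy (Howard) iteration mentioned above, applied to a toy discounted control problem rather than to a discretized HJB equation from option pricing; the problem data and the helper policy_iteration are made up for illustration. Each pass alternates a linear solve for the value of the current control with a pointwise minimization over controls, the same evaluate/improve structure used on the discretized nonlinear equations.

```python
# Policy (Howard) iteration on a toy discounted control problem (illustration
# only; not the paper's discretized HJB system). Solve, for all states i,
#     V(i) = min_a [ c_a(i) + beta * sum_j P_a(i, j) V(j) ].
import numpy as np

def policy_iteration(P, c, beta=0.95, max_iter=100):
    """P: (A, n, n) transition matrices, c: (A, n) costs, one per control a."""
    A, n = c.shape
    policy = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        # Policy evaluation: solve (I - beta * P_policy) V = c_policy.
        P_pol = P[policy, np.arange(n), :]
        c_pol = c[policy, np.arange(n)]
        V = np.linalg.solve(np.eye(n) - beta * P_pol, c_pol)
        # Policy improvement: pointwise minimization over the controls.
        Q = c + beta * np.einsum('aij,j->ai', P, V)
        new_policy = np.argmin(Q, axis=0)
        if np.array_equal(new_policy, policy):
            return V, policy                 # fixed point: discrete equation satisfied
        policy = new_policy
    return V, policy

# Tiny made-up example: 4 states, 2 controls with random costs and transitions.
rng = np.random.default_rng(0)
P = rng.random((2, 4, 4)); P /= P.sum(axis=2, keepdims=True)   # row-stochastic
c = rng.random((2, 4))
V, policy = policy_iteration(P, c)
print("value:", V, "policy:", policy)
```

For discounted problems like this one the iteration terminates after finitely many passes; the abstract above notes that the analogous convergence guarantee holds for the Newton-type iteration on the discretized HJB equations.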


SIAM Journal on Scientific Computing | 2005

A Semi-Lagrangian Approach for American Asian Options under Jump Diffusion

Yann d'Halluin; Peter A. Forsyth; George Labahn


International Symposium on Symbolic and Algebraic Computation | 1996

Asymptotically fast computation of Hermite normal forms of integer matrices

Arne Storjohann; George Labahn


Journal of Symbolic Computation | 2009

Symbolic-numeric sparse interpolation of multivariate polynomials

Mark Giesbrecht; George Labahn; Wen-shin Lee


Journal of Symbolic Computation | 2006

Fraction-free row reduction of matrices of Ore polynomials

Bernhard Beckermann; Howard Cheng; George Labahn


SIAM Journal on Scientific Computing | 2011

Methods for Pricing American Options under Regime Switching

Y. Huang; Peter A. Forsyth; George Labahn


International Symposium on Symbolic and Algebraic Computation | 1999

Shifted normal forms of polynomial matrices

Bernhard Beckermann; George Labahn; Gilles Villard


Collaboration


Dive into George Labahn's collaborations.

Top Co-Authors

Howard Cheng (University of Lethbridge)
Wei Zhou (University of Waterloo)
Edward Lank (University of Waterloo)