Dynamic Programming and Optimal Control, Vol. II, 4th Edition: Approximate Dynamic Programming

Dimitri P. Bertsekas

Published June 2012


The fourth edition of Vol. II of the two-volume DP textbook was published in June 2012. This is a major revision of Vol. II that contains a substantial amount of new material, as well as a reorganization of old material. The length has increased by more than 60% from the third edition, and most of the old material has been restructured and/or revised. Volume II now numbers more than 700 pages and is larger than Vol. I. It can arguably be viewed as a new book!

Approximate DP has become the central focus of this volume, and occupies more than half of the book (the last two chapters, and large parts of Chapters 1-3). Thus one may also view this new edition as a follow-up to the author's 1996 book "Neuro-Dynamic Programming" (coauthored with John Tsitsiklis). Much of the new material is the outgrowth of research conducted in the six years since the previous edition.

Dynamic Programming and Optimal Control, Vol. II: Approximate Dynamic Programming, ISBN-13: 978-1-886529-44-1, 712 pp., hardcover, 2012


Click here for preface and detailed information.

Click here to order at Amazon.com

Click here to download the lecture slides for Approximate Dynamic Programming, a 6-lecture short course at Tsinghua Univ., Beijing, China, 2014. Click here to download the complete lecture videos.

Click here to download the lecture slides for a 7-lecture short course on Approximate Dynamic Programming, Cadarache, France, 2012.

Click here to download the lecture slides for the MIT course "Dynamic Programming and Stochastic Control" (6.231), Dec. 2011. The last six lectures cover much of the approximate dynamic programming material.

Click here to download research papers and other material on Dynamic Programming and Approximate Dynamic Programming.


Contents:

  1. Discounted Problems - Theory

    1. Minimization of Total Cost - Introduction
      1. The Finite-Horizon DP Algorithm
      2. Shorthand Notation and Monotonicity
      3. A Preview of Infinite Horizon Results
      4. Randomized and History-Dependent Policies
    2. Discounted Problems - Bounded Cost per Stage
    3. Scheduling and Multiarmed Bandit Problems
    4. Discounted Continuous-Time Problems
    5. The Role of Contraction Mappings
      1. Sup-Norm Contractions
      2. Discounted Problems - Unbounded Cost per Stage
    6. General Forms of Discounted Dynamic Programming
      1. Basic Results Under Contraction and Monotonicity
      2. Discounted Dynamic Games
    7. Notes, Sources, and Exercises

  2. Discounted Problems - Computational Methods

    1. Markovian Decision Problems
    2. Value Iteration
      1. Monotonic Error Bounds for Value Iteration
      2. Variants of Value Iteration
      3. Q-Learning
    3. Policy Iteration
      1. Policy Iteration for Costs
      2. Policy Iteration for Q-Factors
      3. Optimistic Policy Iteration
      4. Limited Lookahead Policies and Rollout
    4. Linear Programming Methods
    5. Methods for General Discounted Problems
      1. Limited Lookahead Policies and Approximations
      2. Generalized Value Iteration
      3. Approximate Value Iteration
      4. Generalized Policy Iteration
      5. Generalized Optimistic Policy Iteration
      6. Approximate Policy Iteration
      7. Mathematical Programming
    6. Asynchronous Algorithms
      1. Asynchronous Value Iteration
      2. Asynchronous Policy Iteration
      3. Policy Iteration with a Uniform Fixed Point
    7. Notes, Sources, and Exercises

  3. Stochastic Shortest Path Problems

    1. Problem Formulation
    2. Main Results
    3. Underlying Contraction Properties
    4. Value Iteration
      1. Conditions for Finite Termination
      2. Asynchronous Value Iteration
    5. Policy Iteration
      1. Optimistic Policy Iteration
      2. Approximate Policy Iteration
      3. Policy Iteration with Improper Policies
      4. Policy Iteration with a Uniform Fixed Point
    6. Countable State Spaces
    7. Notes, Sources, and Exercises

  4. Undiscounted Problems

    1. Unbounded Costs per Stage
      1. Main Results
      2. Value Iteration
      3. Other Computational Methods
    2. Linear Systems and Quadratic Cost
    3. Inventory Control
    4. Optimal Stopping
    5. Optimal Gambling Strategies
    6. Nonstationary and Periodic Problems
    7. Notes, Sources, and Exercises

  5. Average Cost per Stage Problems

    1. Finite-Spaces Average Cost Models
      1. Relation with the Discounted Cost Problem
      2. Blackwell Optimal Policies
      3. Optimality Equations
    2. Conditions for Equal Average Cost for all Initial States
    3. Value Iteration
      1. Single-Chain Value Iteration
      2. Multi-Chain Value Iteration
    4. Policy Iteration
      1. Single-Chain Policy Iteration
      2. Multi-Chain Policy Iteration
    5. Linear Programming
    6. Infinite-Spaces Problems
      1. A Sufficient Condition for Optimality
      2. Finite State Space and Infinite Control Space
      3. Countable States - Vanishing Discount Approach
      4. Countable States - Contraction Approach
      5. Linear Systems with Quadratic Cost
    7. Notes, Sources, and Exercises

  6. Approximate Dynamic Programming - Discounted Models

    1. General Issues of Simulation-Based Cost Approximation
      1. Approximation Architectures
      2. Simulation-Based Approximate Policy Iteration
      3. Direct and Indirect Approximation
      4. Monte Carlo Simulation
      5. Simplifications
    2. Direct Policy Evaluation - Gradient Methods
    3. Projected Equation Methods for Policy Evaluation
      1. The Projected Bellman Equation
      2. The Matrix Form of the Projected Equation
      3. Simulation-Based Methods
      4. LSTD, LSPE, and TD(0) Methods
      5. Optimistic Versions
      6. Multistep Simulation-Based Methods
      7. A Synopsis
    4. Policy Iteration Issues
      1. Exploration Enhancement by Geometric Sampling
      2. Exploration Enhancement by Off-Policy Methods
      3. Policy Oscillations - Chattering
    5. Aggregation Methods
      1. Cost Approximation via the Aggregate Problem
      2. Cost Approximation via the Enlarged Problem
      3. Multistep Aggregation
      4. Asynchronous Distributed Aggregation
    6. Q-Learning
      1. Q-Learning: A Stochastic VI Algorithm
      2. Q-Learning and Policy Iteration
      3. Q-Factor Approximation and Projected Equations
      4. Q-Learning for Optimal Stopping Problems
      5. Q-Learning and Aggregation
      6. Finite Horizon Q-Learning
    7. Notes, Sources, and Exercises

  7. Approximate Dynamic Programming - Nondiscounted Models and Generalizations

    1. Stochastic Shortest Path Problems
    2. Average Cost Problems
      1. Approximate Policy Evaluation
      2. Approximate Policy Iteration
      3. Q-Learning for Average Cost Problems
    3. General Problems and Monte Carlo Linear Algebra
      1. Projected Equations
      2. Matrix Inversion and Iterative Methods
      3. Multistep Methods
      4. Extension of Q-Learning for Optimal Stopping
      5. Equation Error Methods
      6. Oblique Projections
      7. Generalized Aggregation
      8. Deterministic Methods for Singular Linear Systems
      9. Stochastic Methods for Singular Linear Systems
    4. Approximation in Policy Space
      1. The Gradient Formula
      2. Computing the Gradient by Simulation
      3. Essential Features for Gradient Evaluation
      4. Approximations in Policy and Value Space
    5. Notes, Sources, and Exercises

  8. Appendix A: Measure-Theoretic Issues in Dynamic Programming

    1. A Two-Stage Example
    2. Resolution of the Measurability Issues