
Introduction to methods for nonlinear optimization

Material type: Text
Publication details: Cham : Springer, ©2023
Description: xv, 721 p. : ill. ; 24 cm
ISBN:
  • 9783031267895
DDC classification:
  • 519.3 GRI-I
Contents:
1 Introduction
2 Fundamental definitions and basic existence results
3 Optimality conditions for unconstrained problems in Rn
4 Optimality conditions for problems with convex feasible set
5 Optimality conditions for Nonlinear Programming
6 Duality theory
7 Optimality conditions based on theorems of the alternative
8 Basic concepts on optimization algorithms
9 Unconstrained optimization algorithms
10 Line search methods
11 Gradient method
12 Conjugate direction methods
13 Newton's method
14 Trust region methods
15 Quasi-Newton methods
16 Methods for nonlinear equations
17 Methods for least squares problems
18 Methods for large-scale optimization
19 Derivative-free methods for unconstrained optimization
20 Methods for problems with convex feasible set
21 Penalty and augmented Lagrangian methods
22 SQP methods
23 Introduction to interior point methods
24 Nonmonotone methods
25 Spectral gradient methods
26 Decomposition methods
Summary: This book has two main objectives:
  • to provide a concise introduction to nonlinear optimization methods, which can be used as a textbook at a graduate or upper undergraduate level;
  • to collect and organize selected important topics on optimization algorithms, not easily found in textbooks, which can provide material for advanced courses or serve as a reference text for self-study and research.
The basic material on unconstrained and constrained optimization is organized into two blocks of chapters:
  • basic theory and optimality conditions;
  • unconstrained and constrained algorithms.
These topics are treated in short chapters that contain the most important results in theory and algorithms, in a way that, in the authors' experience, is suitable for introductory courses. A third block of chapters addresses methods of increasing interest for solving difficult optimization problems. Difficulty is typically due to high nonlinearity of the objective function, ill-conditioning of the Hessian matrix, lack of information on first-order derivatives, or the need to solve large-scale problems. The book addresses several key subjects, including exact penalty functions and exact augmented Lagrangian functions, nonmonotone methods, decomposition algorithms, and derivative-free methods for nonlinear equations and optimization problems. The appendices at the end of the book offer a review of the essential mathematical background, including an introduction to convex analysis that can form part of an introductory course.
Holdings
Item type: Books
Current library: IIITD General Stacks
Collection: Mathematics
Call number: 519.3 GRI-I
Status: Available
Barcode: 012843
Total holds: 0

Includes bibliographical references and index.

