NTRS - NASA Technical Reports Server

Solution of the stochastic control problem in unbounded domains

Bellman's dynamic programming equation for the optimal index and control law of a stochastic control problem is a parabolic or elliptic partial differential equation, frequently defined on an unbounded domain. Existing methods of solution require bounded-domain approximations, the application of singular perturbation techniques, or Monte Carlo simulation procedures. In this paper, using the fact that Poisson impulse noise tends to a Gaussian process under certain limiting conditions, a method is given which achieves an arbitrarily good approximate solution to the stochastic control problem. The method uses the two iterative techniques of successive approximation and quasi-linearization and is inherently more efficient than existing methods of solution.
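The limiting behavior the abstract relies on can be checked numerically. The sketch below (an illustration under stated assumptions, not code from the paper) simulates a compensated Poisson impulse process scaled as (N(t) − λt)/√λ; by the central limit theorem this converges to a standard Brownian motion as the impulse rate λ grows, so its skewness (≈ 1/√(λt) for a Poisson count) should shrink toward the Gaussian value of zero while mean and variance stay near 0 and t:

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_compensated_poisson(lam, t, n_paths, rng):
    """Sample X(t) = (N(t) - lam*t) / sqrt(lam), where N(t) ~ Poisson(lam*t).

    As lam -> infinity, X(t) converges in distribution to a standard
    Brownian motion evaluated at time t (mean 0, variance t).
    """
    n = rng.poisson(lam * t, size=n_paths)
    return (n - lam * t) / np.sqrt(lam)

t = 1.0
for lam in (1.0, 10.0, 1000.0):
    x = scaled_compensated_poisson(lam, t, 200_000, rng)
    # A Gaussian limit has zero skewness; the Poisson count has skew 1/sqrt(lam*t).
    skew = np.mean(x**3) / np.var(x) ** 1.5
    print(f"lam={lam:7.1f}  mean={x.mean():+.3f}  var={x.var():.3f}  skew={skew:+.3f}")
```

The variance is exactly t for every λ (the scaling is chosen for that), so the convergence to the Gaussian process shows up purely in the vanishing higher moments as λ increases.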
Document ID
19730039903
Acquisition Source
Legacy CDMS
Document Type
Reprint (Version printed in journal)
Authors
Robinson, P.
(University of Maryland, College Park, Md., United States)
Moore, J.
(University of Newcastle, Newcastle, Australia)
Date Acquired
August 7, 2013
Publication Date
March 1, 1973
Publication Information
Publication: Journal of the Franklin Institute
Subject Category
Electronics
Accession Number
73A24705
Funding Number(s)
CONTRACT_GRANT: NSG-398
Distribution Limits
Public
Copyright
Other

Available Downloads

There are no available downloads for this record.