We demonstrate how applying optimal control principles can significantly improve planning in deep model-based reinforcement learning with epistemic uncertainty.
Preprints are read, shared, and cited, yet still dismissed as incomplete until blessed by a publisher. I argue that the true measure of scholarship lies in open exchange, not in the industry’s gatekeeping of what counts as published.