LensRL is a scientific toolkit designed to apply reinforcement learning (RL) techniques to lens design. By leveraging the optical analysis capabilities of Optiland, this repository provides a modular and extensible framework to explore and optimize lens systems using RL methodologies.
> **Note:** This repository is under active development, and its API may evolve as new features are introduced.
LensRL is a modular platform that recasts lens design as an RL problem. It provides a suite of components that enable the design and optimization of optical systems through a systematic RL approach. Key functionalities include:
- Action Module: A dynamic set of actions to adjust lens parameters.
- Reward Functions: Customizable rewards based on RMS spot size, system complexity, aperture size, field of view, etc., guiding the RL agent towards optimal designs.
- Observation & Action Spaces: RL spaces that encapsulate the state of the optical system and drive decision-making.
- Configurable Optical System: A flexible class that integrates with Optiland for detailed optical simulations and analysis.
- Normalization Module: Standardizes variables to improve learning stability.
- Lens Design Environment: A dedicated environment that frames lens design challenges as RL tasks, facilitating automated exploration and iterative improvement.
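As a self-contained sketch of how a reward along the lines described above might be shaped, the function below trades off RMS spot size against system complexity. The weighting scheme and function name are illustrative assumptions, not LensRL's actual reward implementation:

```python
# Illustrative reward shaping for a lens-design RL agent.
# The weights and thresholds below are assumptions for demonstration,
# not taken from LensRL's actual reward functions.

def design_reward(rms_spot_size: float,
                  num_surfaces: int,
                  target_rms: float = 0.01,
                  complexity_penalty: float = 0.05) -> float:
    """Reward grows as the RMS spot size shrinks toward the target,
    and is penalized for each surface (system complexity)."""
    # Smaller spot size -> larger reward; guard against division by zero.
    quality = target_rms / max(rms_spot_size, 1e-9)
    return quality - complexity_penalty * num_surfaces

# A simpler 4-surface design with a smaller spot outscores a more
# complex 8-surface design with a larger spot.
r_good = design_reward(rms_spot_size=0.02, num_surfaces=4)
r_bad = design_reward(rms_spot_size=0.05, num_surfaces=8)
```

Composing the reward from separable quality and complexity terms like this makes it easy to re-weight objectives without touching the environment itself.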
Lens design inherently involves balancing multiple objectives and constraints. By framing lens design as an RL problem, LensRL aims to:
- Automate and accelerate the optimization process.
- Navigate complex design spaces more efficiently.
- Enable researchers, engineers, and enthusiasts to experiment with different RL strategies and discover innovative optical designs.
To get started, follow these steps:
- Clone the LensRL repository:

  ```shell
  git clone https://github.com/HarrisonKramer/LensRL.git
  ```

- Install the LensRL dependencies:

  ```shell
  cd LensRL
  pip install -r requirements.txt
  ```
- Customize and Experiment: Modify reward functions, tweak action spaces, and tailor the environment to meet your research objectives.
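To make the environment-centric workflow concrete, here is a minimal, self-contained sketch of the standard RL interaction loop (reset, step, reward, done) applied to a toy "lens design" task. The `ToyLensEnv` class and its one-parameter design state are illustrative assumptions, not LensRL's actual environment API:

```python
# Minimal sketch of an RL interaction loop in the Gymnasium style.
# ToyLensEnv is a hypothetical stand-in: the "design" is a single
# curvature value, and the goal is to reach a target curvature.
import random

class ToyLensEnv:
    def __init__(self, target=0.5, max_steps=50):
        self.target = target
        self.max_steps = max_steps

    def reset(self):
        self.curvature = 0.0
        self.steps = 0
        return self.curvature  # observation

    def step(self, action):
        # action: a small signed adjustment to the curvature.
        self.curvature += action
        self.steps += 1
        # Reward: negative distance to the target (closer is better).
        reward = -abs(self.curvature - self.target)
        done = self.steps >= self.max_steps or -reward < 1e-3
        return self.curvature, reward, done

# Random-policy rollout; a trained agent would choose actions instead.
env = ToyLensEnv()
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = random.uniform(-0.1, 0.1)
    obs, reward, done = env.step(action)
    total_reward += reward
```

Swapping the random action for a policy's output, and the toy curvature state for a real optical-system observation, gives the general shape of how an RL agent would iterate on a design.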
A minimal working example can be found here. More notebooks will be added as development continues.
If you have feedback, would like to contribute, or have any ideas for how to make LensRL better, feel free to open an issue or submit a pull request.
Distributed under the MIT License. See LICENSE for more information.