
Addressing lighting and bounding box accuracy for the Embedded Automated Generator of Labeled Images (EAGL-I) system
  • Tung Ki Wong,
  • Michael A. Beck,
  • Christopher P. Bidinosti,
  • Christopher J. Henry
Corresponding author: Tung Ki Wong ([email protected])

Abstract

The Embedded Automated Generator of Labeled Images (EAGL-I) system is a tool for generating labeled images, particularly for data-driven methods such as deep learning models. The system has already generated hundreds of thousands of images of weeds and crops. We present modifications to the original system that are based on the experience gathered from generating such large-scale datasets. The improvements relate to lighting conditions, ease of use, refined image segmentation, and pathfinding for camera movements. To address lighting conditions, we made three major changes to the hardware. First, the blue keying fabric was replaced by solid black panels, mitigating reflections and achieving reliable color accuracy; second, sunlight entering the room through a window is now diffused and partially blocked by a screen, achieving consistent and uniform lighting of the imaging environment; third, dimmable LED lights were installed, allowing us to image at lower ISO and thereby reduce noise in the resulting images. A YOLO machine learning model was trained to replace the previous methods of estimating bounding boxes around the plants. This new way of creating bounding boxes adapts to different plant architectures, such as grasses or different kinds of dicots. Finally, we implemented a version of the A* pathfinding algorithm to plan camera movements around defined zones through which the camera must not be moved. Overall, these modifications significantly improved system performance and image quality while making EAGL-I easier to use. They also extend the potential applications of EAGL-I, particularly in plant phenotyping research and in fine-tuning machine learning models for image analysis.
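The abstract only names the two software techniques; the sketches below are illustrative, not the EAGL-I implementation. For the bounding-box step, the paper does not state which YOLO variant or library was used, so the ultralytics package, the weights file plant_bbox.pt, and the image name here are assumptions made purely for illustration.

```python
from ultralytics import YOLO   # assumed library; the paper does not name a specific YOLO implementation

model = YOLO("plant_bbox.pt")        # hypothetical weights fine-tuned on EAGL-I plant images
result = model("tray_0001.jpg")[0]   # run inference on a single image
boxes = result.boxes.xyxy.tolist()   # one [x1, y1, x2, y2] box per detected plant
```

For the camera pathfinding step, a minimal A* sketch is shown under the assumption that the camera workspace is discretized into a 2D occupancy grid, where cells marked 1 are zones the camera must not enter; the grid representation, 4-connectivity, and Manhattan heuristic are illustrative choices rather than details taken from the paper.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid; cells marked 1 are keep-out zones the path must avoid."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic (admissible for 4-connected moves)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, None)]   # entries: (f, g, node, parent)
    came_from = {}                            # finalized nodes -> parent
    g_cost = {start: 0}                       # best known cost from start

    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:                 # already finalized with a better or equal cost
            continue
        came_from[node] = parent
        if node == goal:                      # reconstruct path by walking parents back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None  # no collision-free path exists

# Example: route around a keep-out zone (e.g., space occupied by a plant or fixture).
grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(astar(grid, (0, 0), (2, 3)))  # e.g. [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 3)]
```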
17 Oct 2023: Submitted to NAPPN 2024
18 Oct 2023: Published in NAPPN 2024