Self-navigating systems such as autonomous cars or drones require object tracking for traffic monitoring or surveillance. The operating principle of our occlusion-handling vision sensor presented so far for stationary objects does not always carry over directly to a moving body. To illustrate this point, we first consider a simple scenario in which the intersecting foreground is small compared with the tracked object. Specifically, we again investigate a 2 × 2 pixel white square traveling from left to right at uniform speed, occluded by a thin 2 × 1 pixel red vertical strip moving in the opposite direction, using our 20-OHL-pixel proof-of-concept sensor (Movie 2). Figures 4a and 4b depict the sensor's response throughout the white square's motion without and with the thin strip foreground, respectively. Most pixel values in Figure 4b show only minor deviations from the occlusion-free reference scenario in Figure 4a, implying that the sensor successfully tracked the object despite the occlusion. Had this not been the case, the pixels viewing the portion of the white square blocked by the red strip would have exhibited a significant drop in VOC, down to the pixels' red-light response level (Figure 3b). The absence of such a dramatic VOC decay upon occlusion is due to the pixels' memory effect. This image reconstruction via self-learning by the sensor was possible only because, at any instant of time, the OHL pixels were able to perceive the traveling object before the crossing red strip foreground reached them, as highlighted by the snapshots in Figures 4d and 4e.
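The role of the memory effect can be sketched with a minimal one-dimensional toy model in Python. All quantities below (the VOC levels V_WHITE and V_RED, the decay rate, the grid size, and the timing) are illustrative assumptions rather than measured device parameters; the sketch only demonstrates the qualitative principle that a pixel which has perceived the object before the occluder arrives retains its high stored VOC, whereas a pixel reached by the occluder first has nothing to hold.

```python
# Toy 1-D model of the memory-effect occlusion handling described above.
# All numerical values are illustrative assumptions, not device data.

V_WHITE = 1.0   # assumed pixel VOC under white illumination (a.u.)
V_RED   = 0.2   # assumed, much lower VOC under red illumination (a.u.)
DECAY   = 0.02  # assumed slow relaxation of the stored VOC per time step

N_PIX, N_STEPS = 20, 16   # 20 pixels, as in the proof-of-concept sensor

def scene(t):
    """Per-pixel illumination at step t: a 2-px white square moving
    right, a 1-px red strip moving left; the strip occludes the square."""
    row = ['dark'] * N_PIX
    for dx in range(2):                # 2-px-wide white square
        x = t + dx
        if 0 <= x < N_PIX:
            row[x] = 'white'
    xs = N_PIX - 1 - t                 # red strip, opposite direction
    if 0 <= xs < N_PIX:
        row[xs] = 'red'                # foreground blocks the square
    return row

memory = [0.0] * N_PIX
for t in range(N_STEPS):
    for i, s in enumerate(scene(t)):
        if s == 'white':
            memory[i] = V_WHITE                       # charge to white VOC
        elif s == 'red':
            # Memory effect: the stored VOC persists (with slow decay)
            # instead of collapsing to the red-light response V_RED.
            memory[i] = max(memory[i] * (1 - DECAY), V_RED)
        else:
            memory[i] *= (1 - DECAY)                  # slow dark relaxation
    print(f"t={t:2d}: " + " ".join(f"{v:.2f}" for v in memory))
```

Running the sketch shows both regimes: a pixel the square sweeps over before the strip arrives keeps a near-white VOC while occluded, mirroring Figure 4b, while a pixel the strip reaches first reports only its low stored value, consistent with the stated requirement that the pixels perceive the traveling object prior to the crossing foreground.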