Now that we have established how speed estimation works, we return to our initial occlusion scenario, simulating a simple one-dimensional motion of a white square travelling from left to right that becomes occluded by a stationary red square (Figure 5a-e and movie 3). Figure 5g shows the VOC values of all 20 pixels during the whole event (pixel numbering and positions follow Figure 1c). Clearly, the pixels facing the stationary red square (pixels 11-14, marked red) deliver low VOC throughout the whole occlusion scenario (Figures 5b-d) and hence cannot register the parts of the travelling white square that disappear behind the red one. As mentioned before, one way to solve this problem is for the sensor to predict the object’s path during occlusion from its past trajectory. This can be achieved by simply probing the self-learning OHL pixels 1-8 that tracked the object before the onset of occlusion (Figure 5b). As Figure 5g reveals, pixels 1-8 maintain largely similar VOC values after changes in light intensity (indicated by black arrows) throughout the whole event (Table 2), thereby providing a speed and path prediction that allows the sensor to “assume” that the object continues along its original trajectory behind the obstructing red square. More precisely, during the period in which half of the white square vanishes (marked (b) in Figure 5g and corresponding to Figure 5b), probing pixels 1-8 provides information about the object’s previous speed and route. Similarly, for complete occlusion (Figure 5c), pixels 9 and 10 must also be read out within the corresponding period, designated (c) in Figure 5g. The camera can then keep tracking the temporarily hidden object along this predicted path. The length of the past trajectory to be considered is left to the user: in our example, we chose the whole track represented by pixels 1-10, but for some scenarios a shorter trail involving fewer pixels may suffice.
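To make the prediction step concrete, the sketch below shows one way such a read-out could be implemented in software. It is a minimal illustration under stated assumptions, not the sensor’s actual processing pipeline: the function voc_to_speed, its calibration constants, and the pixel pitch are hypothetical placeholders standing in for the VOC-to-speed relationship established in the speed-estimation section (cf. Table 2).

```python
import numpy as np

# Hypothetical calibration: maps a pixel's post-transition VOC value to the
# speed it has "learned". In practice this mapping would be extracted from
# measured data (e.g. the values summarized in Table 2); the linear form and
# slope used here are assumptions for illustration only.
def voc_to_speed(voc_mV: float) -> float:
    SLOPE = 0.05  # pixels per second per mV (assumed placeholder value)
    return SLOPE * voc_mV

PIXEL_PITCH = 1.0  # centre-to-centre pixel spacing, arbitrary units (assumed)

def predict_occluded_position(history_voc_mV, last_seen_pixel,
                              t_occlusion_start, t_now):
    """Extrapolate the 1-D position of a temporarily occluded object.

    history_voc_mV    : VOC readings (mV) of the self-learning pixels that
                        tracked the object before occlusion (e.g. pixels 1-8,
                        or 1-10 once occlusion is complete).
    last_seen_pixel   : index of the last pixel that saw the object.
    t_occlusion_start : time at which the object vanished behind the obstacle.
    t_now             : current time.
    """
    # Decode the learned speed from each probed pixel and average; pixels
    # that tracked the same constant motion should report similar VOC values.
    v_est = float(np.mean([voc_to_speed(v) for v in history_voc_mV]))

    # Assume the object keeps its pre-occlusion speed and direction
    # (left to right) and extrapolate along the original trajectory.
    elapsed = t_now - t_occlusion_start
    return last_seen_pixel * PIXEL_PITCH + v_est * elapsed

# Example: pixels 1-8 report near-identical VOC values, the object was last
# seen at pixel 10, and it has been hidden for 0.5 s (illustrative numbers).
voc_readings = [120.0, 118.5, 121.0, 119.0, 120.5, 119.5, 120.0, 118.0]
x_pred = predict_occluded_position(voc_readings, last_seen_pixel=10,
                                   t_occlusion_start=0.0, t_now=0.5)
print(f"Predicted position: {x_pred:.2f} pixel pitches")
```

Averaging over all probed pixels mirrors our choice of using the whole track (pixels 1-10); restricting history_voc_mV to fewer entries corresponds to the shorter trail mentioned above.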