Conclusion
This paper presents a human-like decision-making algorithm for driving intelligence tests. The interaction model of road users is first established using Bayesian game theory. Besides an extremely conservative or an extremely aggressive choice, a probing behavior can be generated by the proposed method based on the cost and the relative aggressiveness probability. To evaluate the aggressiveness of the opponent, an observation model is established, and a way to customize it is demonstrated through an experiment. Additionally, a probing-strategy generation method is developed to test the true aggressiveness of the background vehicle, and the strategy is mapped onto the vehicle's behavior through a proposed Markov method. Next, the proposed methodology is compared with commonly used approaches and the state-of-the-art literature. The comparison indicates that our method agrees with previous research while being capable of generating more complex and human-like behavior. Finally, the human-likeness of our algorithm is evaluated with a Turing test. The test results indicate that participants cannot distinguish human behavior from the behavior generated by our algorithm.
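The core idea of generating a probing behavior from cost and aggressiveness probability can be illustrated with a minimal sketch. The action names, payoff numbers, and functions below are hypothetical and only demonstrate the general mechanism: besides the two extreme choices, a probe action can minimize expected cost when the opponent's aggressiveness is uncertain.

```python
# Illustrative sketch only: the actions ("yield", "probe", "go") and the
# payoff table are hypothetical, not taken from the paper.

def expected_cost(action, p_aggressive, cost):
    """Expected cost of an action under the belief that the opponent
    is aggressive with probability p_aggressive."""
    return (p_aggressive * cost[action]["aggressive"]
            + (1.0 - p_aggressive) * cost[action]["conservative"])

def choose_action(p_aggressive, cost):
    """Pick the action with minimal expected cost."""
    return min(cost, key=lambda a: expected_cost(a, p_aggressive, cost))

# Hypothetical payoff table: probing is moderately cheap either way,
# so it wins at intermediate beliefs about the opponent.
cost = {
    "yield": {"aggressive": 1.0, "conservative": 1.0},
    "go":    {"aggressive": 5.0, "conservative": 0.0},
    "probe": {"aggressive": 1.5, "conservative": 0.4},
}

print(choose_action(0.1, cost))  # likely conservative opponent -> go
print(choose_action(0.5, cost))  # uncertain -> probe
print(choose_action(0.9, cost))  # likely aggressive -> yield
```

With these illustrative numbers, the extreme choices dominate when the belief is close to 0 or 1, and the probe is selected only in the uncertain middle region, mirroring the three-way behavior described above.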
Although the proposed method is designed for scene generation, it may shed some light on autonomous driving algorithms as well. One of the major challenges in autonomous driving is the uncertainty of traffic. Instead of passively accepting this uncertainty, a vehicle may actively take small steps to reduce the entropy of its belief without compromising safety, as shown in this paper. Current research focuses on prediction accuracy and learning convergence, which amounts to a trade-off between perception/computation burden and accuracy: the more data available and the more powerful the computer, the better the decision. Followed to its end, this path might eventually predict the future and yield the globally best decision, but the demand is endless. The decision algorithm, as well as the prediction and aggressiveness-estimation methods in this paper, are simple and direct, and thus computationally efficient, because we do not insist on the globally best decision. The same holds for ordinary human drivers: when confused, they simply try small steps, which are simple but powerful.
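The "small steps to reduce entropy" idea can be sketched as a standard Bayesian belief update. The observation model below (probabilities of yielding given each driver type) is hypothetical and stands in for the customized observation model described in the paper; it only shows that one probe observation sharpens the aggressiveness belief and lowers its entropy.

```python
import math

# Illustrative sketch: the yield probabilities are assumed values,
# not the observation model calibrated in the paper.

def entropy(p):
    """Binary entropy (bits) of the belief 'opponent is aggressive'."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def update(p, yielded, p_yield_aggr=0.2, p_yield_cons=0.9):
    """Bayes update of the aggressiveness belief after observing
    whether the opponent yielded to a small probing step."""
    like_aggr = p_yield_aggr if yielded else 1.0 - p_yield_aggr
    like_cons = p_yield_cons if yielded else 1.0 - p_yield_cons
    return p * like_aggr / (p * like_aggr + (1 - p) * like_cons)

p = 0.5
print(entropy(p))            # maximal uncertainty: 1 bit
p = update(p, yielded=True)
print(p, entropy(p))         # belief sharpens, entropy drops
```

Each probe thus buys information at a small, bounded cost, which is why the method stays computationally light compared with approaches that chase the globally best decision.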
Additionally, the Turing test framework given in this paper might be applied to the evaluation of autonomous driving algorithms. Since many researchers and manufacturers are developing human-like self-driving algorithms, this unified and objective method can be used to assess human-likeness.
Our future work will focus on the human control level. Other human characteristics, such as distraction and control latency, will be considered to generate more human-like behavior for autonomous driving tests. The Markov method will also be replaced with a better approximator that can be more tightly connected to the strategy. Moreover, a more general Turing test procedure with more participants might be a focus as well.