With rapid technological advancement, transforming the growing volume of sensor data into actionable information has become an increasingly complex task for the scientific community. In particular, a wide range of wearable and vision sensors are employed to capture multimodal data from diverse sources and fields, and these have been incorporated into numerous domains and applications, including academic and remote assessment systems, emergency-response support, and monitoring systems. This paper presents a robust method for detecting and classifying human anomalies in crowded scenes. First, crowdsourced video data is acquired as input and passed through normalization and filtering steps for denoising. Next, human silhouettes are extracted, which significantly facilitates human detection, and crowd-based analysis and clustering are applied for precise and efficient prediction. During the feature engineering stage, three robust features are extracted: deep flow, gradient patches, and dense optical-flow-based descriptors. Stochastic gradient descent (SGD) is then used for feature selection and optimization. Finally, the optimized features are fed to a Restricted Boltzmann Machine (RBM) classifier, enabling adaptive training for classifying and predicting human behavior in crowded scenes. Experimental results show 88.1% accuracy with a 12.36% error rate on the Avenue dataset, an average recognition rate of 91.17% with an 8.82% error rate on the ADOC dataset, and an improved recognition rate of 90.19% with a 9.81% error rate on the UCSD Ped2 dataset.
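To make the gradient-patch feature concrete, the following is a minimal, illustrative sketch of a histogram-of-oriented-gradients style patch descriptor computed with NumPy. The function name, patch size, and bin count are assumptions for illustration; the paper's actual gradient-patch feature may differ in binning, normalization, and patch layout.

```python
import numpy as np

def gradient_patch_descriptor(frame, patch=8, n_bins=9):
    """Illustrative gradient-patch descriptor (HOG-style).

    Splits a grayscale frame into non-overlapping patches and builds a
    magnitude-weighted histogram of unsigned gradient orientations per
    patch, L2-normalized, then concatenates all patch histograms.
    """
    gy, gx = np.gradient(frame.astype(np.float64))
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientation in [0, pi)
    h, w = frame.shape
    feats = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            m = mag[r:r + patch, c:c + patch].ravel()
            a = ang[r:r + patch, c:c + patch].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0.0, np.pi), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-12))  # L2-normalize
    return np.concatenate(feats)

# Toy usage on a synthetic 32x32 frame: 4x4 = 16 patches x 9 bins = 144 dims.
rng = np.random.default_rng(0)
frame = rng.random((32, 32))
desc = gradient_patch_descriptor(frame)
print(desc.shape)  # (144,)
```

In a full pipeline such as the one summarized above, descriptors like this would be concatenated with flow-based features before selection and classification.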