Overview:
The rapid growth in population has greatly increased the use of vehicles in all regions, and this rise in vehicular traffic has increased the rate of road accidents over the past decade. Congestion, driving under the influence of alcohol or drugs, distracted driving, street racing, faulty vehicle or traffic-light design, tailgating, improper turns, and wrong-way driving are some of the main causes of accidents across the globe. Many advanced systems have been deployed for road safety, yet accident prevention remains a pressing problem. With the rising trends in the field of software engineering, emerging technologies can be useful for accident prevention and detection.
(Image courtesy: Pete Piringer, Twitter)
Computer vision is a technology intended to imitate how the human visual system works. Digitized footage from surveillance systems is analyzed, and events such as speeding, careless driving, and collisions are recognized and reported by the system in real time. With computer vision and intelligent transportation systems (ITS), we can provide drivers with a safety net. Object detection, object localization, object tracking, and semantic segmentation are some of the computer vision-based methods, paired with state-of-the-art deep learning approaches, that can be used for accident detection and prevention.
Established Techniques:
In the past twenty years, researchers have conducted many studies on vision-based traffic crash detection and prevention, which can be classified into three categories:
(1) Modeling of traffic flow patterns
(2) Modeling of vehicle interactions
(3) Analysis of vehicle activities
The first method compares vehicle trajectories to typical vehicle motion patterns learned from large data samples. In this framework, if a trajectory is not consistent with the typical trajectory patterns, it can be flagged as a traffic incident. However, it is not easy to identify whether such an incident is a crash, because only limited crash-trajectory data can be collected in the real world.
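To make the idea concrete, here is a minimal sketch of trajectory-pattern matching: a new trajectory is resampled to a fixed length and compared against a library of learned typical trajectories, and it is flagged when it is far from all of them. The function names and the pixel distance threshold are illustrative assumptions, not an exact method from the literature.

```python
import numpy as np

def resample(traj, n=32):
    """Resample an (m, 2) trajectory of (x, y) points to n evenly spaced points."""
    traj = np.asarray(traj, dtype=float)
    t = np.linspace(0.0, 1.0, len(traj))
    ti = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(ti, t, traj[:, 0]),
                            np.interp(ti, t, traj[:, 1])])

def is_incident(traj, typical_trajs, dist_thresh=40.0):
    """Flag a trajectory as a potential incident if its mean point-wise
    distance (in pixels) to every learned typical pattern exceeds an
    assumed threshold."""
    q = resample(traj)
    dists = [np.linalg.norm(q - resample(p), axis=1).mean()
             for p in typical_trajs]
    return min(dists) > dist_thresh
```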
The second method determines crash occurrence from speed-change information, applying the social force model and the intelligent driver model to model interactions among vehicles. This method requires a large number of training samples.
The third method depends largely on trackers, because it must continuously compute vehicle motion features (e.g., distance, acceleration, direction), from which aberrant behaviors related to traffic incidents can be detected. However, it is often difficult to apply in practice, limited by high computational cost and unsatisfactory tracking performance in congested traffic environments. In general, fruitful results have been achieved for vision-based crash detection.
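As a rough illustration of the third method, the sketch below computes per-frame motion features from a tracker's (x, y) output and applies a crude hard-deceleration test. All names, units (pixels), and thresholds here are assumptions for demonstration.

```python
import numpy as np

def motion_features(track, fps=30.0):
    """Per-frame motion features from a tracked vehicle's (x, y) positions:
    speed (pixels/s), acceleration (pixels/s^2), and heading (radians)."""
    p = np.asarray(track, dtype=float)
    v = np.diff(p, axis=0) * fps                 # velocity vectors between frames
    speed = np.linalg.norm(v, axis=1)
    accel = np.diff(speed) * fps                 # scalar acceleration
    heading = np.arctan2(v[:, 1], v[:, 0])       # direction of travel
    return speed, accel, heading

def looks_aberrant(accel, decel_thresh=-200.0):
    """Crude aberrance test: a sudden hard deceleration (assumed threshold,
    in pixels/s^2) can indicate a possible crash event."""
    return len(accel) > 0 and float(accel.min()) < decel_thresh
```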
In this post, we highlight one technique for accident prevention: gaze tracking using computer vision.
Proposed Methodology:
The proposed framework aims to analyze factors related to the driver's behavior for the development of accident-prevention systems. It is implemented using two image-processing tools: facial geometry-based eye-region detection for identifying eye closure, and combined tracking and detection of vehicles. The frequencies of eye blinking and eye closure are used as indicators of drowsiness, a warning is generated as a recommendation, and road traffic is also analyzed.
Flowchart of the proposed system
System Description:
There are four main modules: image acquisition, facial feature detection, head pose estimation, and gaze tracking.
Image Acquisition:
The image acquisition module is based on a low-cost CCD camera placed on top of the steering wheel column. It facilitates the estimation of gaze angles, such as pitch, which is relevant for detecting when the driver is texting on a phone (a major threat to safety).
Facial Feature Detection:
Facial feature extraction is important in automated visual interpretation and human face recognition. Detecting facial features plays a crucial role in a wide variety of applications such as human-computer interfaces, facial animation, and face recognition. The proposed method detects and tracks facial features in real time from face images with different expressions under various face orientations. It uses pyramidal Gabor wavelets for efficient facial-feature representation, dynamic and accurate model updating for each facial feature to eliminate error accumulation, and global geometry constraints to eliminate geometrical violations. Together, these measures substantially improve the accuracy of facial-feature tracking.
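For intuition, here is a minimal sketch of a Gabor filter bank using OpenCV's `cv2.getGaborKernel`; responses at several orientations are a classic representation for facial features. The specific kernel parameters are illustrative assumptions, not the paper's pyramidal configuration.

```python
import cv2
import numpy as np

def gabor_bank(ksize=21, sigma=4.0, lambd=10.0, gamma=0.5, n_orient=8):
    """Build a small bank of Gabor kernels at several orientations.
    All parameter values here are illustrative assumptions."""
    return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
            for theta in np.linspace(0, np.pi, n_orient, endpoint=False)]

def gabor_responses(gray_face, bank):
    """Filter a grayscale face image with every kernel in the bank;
    the stack of responses serves as a facial-feature representation."""
    return [cv2.filter2D(gray_face, cv2.CV_32F, k) for k in bank]
```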
Head Pose Estimation:
Head pose estimation research in computer vision focuses on predicting the pose of a human head in an image; more specifically, on predicting the Euler angles of the head, which consist of three values: yaw, pitch, and roll.
In the proposed method, a novel technique recovers 3D face pose and facial expression simultaneously from a monocular video sequence in real time. First, facial features are detected and tracked under various face orientations and significant facial expressions. Second, after modeling the coupling between face pose and facial expression in the 2D image as a nonlinear function, a normalized SVD (N-SVD) decomposition is used to recover the pose and expression parameters analytically. This lets the method recover face pose and facial expression robustly and accurately from face images.
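The N-SVD details are beyond a short example, but the sketch below shows the standard PnP-based approach to recovering the Euler angles with OpenCV, given six tracked 2D landmarks. The 3D reference points and the pinhole-camera approximation are generic assumptions, not the paper's face model.

```python
import cv2
import numpy as np

# Generic 3D reference points for a face (nose tip, chin, eye corners,
# mouth corners), in millimeters; a common illustrative choice.
MODEL_3D = np.array([
    (0.0,     0.0,    0.0),     # nose tip
    (0.0,  -330.0,  -65.0),     # chin
    (-225.0, 170.0, -135.0),    # left eye outer corner
    (225.0,  170.0, -135.0),    # right eye outer corner
    (-150.0, -150.0, -125.0),   # left mouth corner
    (150.0, -150.0, -125.0),    # right mouth corner
], dtype=np.float64)

def head_pose(image_pts, frame_size):
    """Estimate head rotation from a (6, 2) float array of tracked 2D
    landmarks (same order as MODEL_3D), using a pinhole camera model
    whose focal length is approximated by the image width."""
    h, w = frame_size
    cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_3D, image_pts, cam, None)
    rot, _ = cv2.Rodrigues(rvec)                  # rotation vector -> matrix
    # Euler angles (degrees) from the projection-matrix decomposition
    angles = cv2.decomposeProjectionMatrix(np.hstack([rot, tvec]))[-1]
    pitch, yaw, roll = angles.flatten()
    return pitch, yaw, roll
```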
Gaze Tracking:
Gaze estimation is the process of estimating and tracking the 3D line of sight of a person, or simply, where a person is looking. The device used to track gaze by analyzing eye movements is called a gaze tracker. A gaze tracker performs two main tasks simultaneously: localizing the eye position in the video or images, and tracking its motion to determine the gaze direction. The eye-gaze tracking system consists of one CCD camera and two mirrors. Based on geometric and linear-algebra calculations, the mirrors rotate to follow head movements and keep the eyes within the camera's view. The system also uses a hierarchical generalized regression neural network (H-GRNN) scheme to map eye and mirror parameters to gaze direction.
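The mirror-and-H-GRNN pipeline is hardware-specific, so as a rough software-only illustration, the sketch below approximates horizontal gaze from the iris center's position between the eye corners. This is a simplified heuristic standing in for, not reproducing, the paper's mapping.

```python
import numpy as np

def gaze_ratio(iris_center, eye_left_corner, eye_right_corner):
    """Very rough gaze cue: the iris center's position along the axis
    between the eye corners. Returns ~0.0 (looking left) to ~1.0
    (looking right); ~0.5 means roughly straight ahead."""
    left = np.asarray(eye_left_corner, dtype=float)
    right = np.asarray(eye_right_corner, dtype=float)
    iris = np.asarray(iris_center, dtype=float)
    eye_axis = right - left
    # Project the iris onto the inter-corner axis and normalize to [0, 1]
    t = np.dot(iris - left, eye_axis) / np.dot(eye_axis, eye_axis)
    return float(np.clip(t, 0.0, 1.0))
```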
Next, we look at the three algorithms used for feature detection: the Longest Line Scanning (LLS), Occluded Circular Edge Matching (OCEM), and blink detection.
The Longest Line Scanning:
Since the eye position alone is not sufficient for tracking the eye accurately, the longest line scanning algorithm is used for iris center detection. The longest line found provides the candidate for the center of the iris. This candidate center from the LLS algorithm is used as input to the next algorithm, Occluded Circular Edge Matching.
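A minimal sketch of the LLS idea: threshold the eye region, find the longest horizontal run of dark pixels, and take its midpoint as the candidate iris center. The intensity threshold is an assumed parameter.

```python
import numpy as np

def longest_line_center(eye_gray, thresh=60):
    """In a thresholded grayscale eye image, find the longest horizontal
    run of dark (iris) pixels; its midpoint is the candidate iris center."""
    dark = eye_gray < thresh                      # True where the pixel is dark
    best_len, best_center = 0, None
    for y, row in enumerate(dark):
        x = 0
        while x < len(row):
            if row[x]:
                start = x
                while x < len(row) and row[x]:
                    x += 1
                run = x - start
                if run > best_len:
                    best_len = run
                    best_center = ((start + x - 1) // 2, y)  # (x, y)
            else:
                x += 1
    return best_center                            # None if no dark run found
```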
Occluded Circular Edge Matching:
Although the LLS method detects the center of the iris, two problems arise: intra-iris noise and a rough iris edge. If the edge of the iris is noisy, the horizontal line drawn in LLS is not easily defined. OCEM takes both the candidate center of the iris and the edge image as inputs and approximates the shape of the iris with a circle. The center of that circle is the chosen center of the iris.
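A sketch of the OCEM idea follows: collect edge pixels near the LLS candidate and fit a circle to them with an algebraic least-squares (Kasa) fit. The Canny thresholds, search-window size, and the choice of the Kasa fit are illustrative assumptions.

```python
import cv2
import numpy as np

def fit_iris_circle(eye_gray, candidate, roi=25):
    """Approximate the iris with a circle by fitting edge pixels near the
    LLS candidate center. `roi` is the assumed half-size (pixels) of the
    search window around the candidate."""
    cx, cy = candidate
    edges = cv2.Canny(eye_gray, 50, 150)          # edge image fed to OCEM
    ys, xs = np.nonzero(edges)
    keep = (np.abs(xs - cx) < roi) & (np.abs(ys - cy) < roi)
    xs, ys = xs[keep].astype(float), ys[keep].astype(float)
    if len(xs) < 3:
        return candidate, 0.0                     # not enough edge support
    # Circle x^2 + y^2 + D*x + E*y + F = 0, solved in least squares
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs**2 + ys**2)
    (D, E, F), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = (-D / 2.0, -E / 2.0)
    radius = float(np.sqrt(center[0]**2 + center[1]**2 - F))
    return center, radius
```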
Blink Detection:
The blink detection algorithm separates voluntary from involuntary blinks and detects single voluntary blinks or sequences of blinks, which is useful in applications such as fatigue monitoring, human-computer interfacing, and lie detection.
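As a hedged illustration, the sketch below classifies blinks by eye-closure duration computed from a per-frame eye aspect ratio (EAR) series; EAR and all thresholds here are common choices from the blink-detection literature, not necessarily the paper's method.

```python
def classify_blinks(ear_series, fps=30, ear_thresh=0.21, voluntary_ms=400):
    """Duration-based blink classification (assumed thresholds).
    A spontaneous blink typically lasts roughly 100-300 ms; closures
    longer than `voluntary_ms` are treated as voluntary. `ear_series`
    is the per-frame eye aspect ratio."""
    blinks, closed_frames = [], 0
    for ear in ear_series:
        if ear < ear_thresh:
            closed_frames += 1                    # eye currently closed
        elif closed_frames:
            dur_ms = closed_frames * 1000.0 / fps
            kind = "voluntary" if dur_ms >= voluntary_ms else "involuntary"
            blinks.append((kind, dur_ms))
            closed_frames = 0
    return blinks
```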
The proposed model describes a real-time gaze-tracking system using video from a monocular camera installed on the steering wheel column. The system works both day and night and across a wide range of driver characteristics.
Conclusion:
Using computer vision and sensor technologies, we can make modern traffic-management processes more efficient. With advanced image-recognition technology, the tools are in place to transform the automotive industry and make it safer for the world of tomorrow. Synthetic datasets could likewise be applied to manage traffic flow more efficiently. Computer vision technologies have the potential to revolutionize our daily lives, making everyday activities like driving and transportation easier and safer for all involved.
References:
- P. S. Rani, P. Subhashree and N. S. Devi, "Computer vision based gaze tracking for accident prevention," 2016 World Conference on Futuristic Trends in Research and Innovation for Social Welfare (Startup Conclave), Coimbatore, 2016, pp. 1-6, doi: 10.1109/STARTUP.2016.7583976.
- M. Reale, S. Canavan, L. Yin, K. Hu and T. Hung, "A Multi-Gesture Interaction System Using a 3-D Iris Disk Model for Gaze Estimation and an Active Appearance Model for 3-D Hand Pointing," IEEE Transactions on Multimedia, vol. 13, pp. 474-486, 2011, doi: 10.1109/TMM.2011.2120600.
- Neuromation, "How Computer Vision Can Change The Automotive Industry," Medium, 2018.
- Chen Wang, Yulu Dai, Wei Zhou, Yifei Geng, "A Vision-Based Video Crash Detection Framework for Mixed Traffic Flow Environment Considering Low-Visibility Condition", Journal of Advanced Transportation, vol. 2020, Article ID 9194028, 11 pages, 2020. https://doi.org/10.1155/2020/9194028