Driver Assistance System

Project Duration: 2nd January 2019 – 20th January 2019.

About

As the name suggests, it is a hardware-integrated software stack that assists car drivers and helps prevent accidents caused by driver drowsiness and fatigue.

The software estimates the alertness level of drivers using computer vision-based methods. The level of alertness is estimated from the value of PERCLOS (the ratio of closed-eye frames to the total frames processed). In this work, we have developed a real-time system that processes the video onboard, alarms the driver if the driver is not alert, and can even assist braking in difficult situations or if the warning is ignored. We used a pre-trained model with the dlib library, which detects the face and gives landmarks for different features such as the lips and eyes. By studying the distance between the lips and the eye aspect ratio, we estimate the alertness of the driver.

The whole system runs on a PYNQ-Z2 FPGA board; the setup guide is available at [GitHub Repo Wiki Link].

You can find the working code and more about this project at [GitHub Repo Link].

Detailed Project Report :

Drowsiness and fatigue reduce automobile drivers' abilities in vehicle control, natural reflexes, recognition, and perception. Sleep deprivation is a major cause of motor vehicle accidents [Wikipedia], and it can impair the human brain as much as alcohol can. Sleep-deprived driving (commonly known as tired driving, drowsy driving, or fatigued driving) is the operation of a motor vehicle while being cognitively impaired by a lack of sleep.

Braking Assistance: Another large share of road accidents occurs when an obstacle appears suddenly or goes unnoticed because of poor visibility or lack of driver concentration. Automatic braking technologies combine sensors and brake controls to help prevent high-speed collisions. Some automatic braking systems can prevent collisions altogether, but most are designed simply to reduce the speed of the vehicle before it hits something. Since high-speed crashes are more likely to be fatal than low-speed collisions, automatic braking systems can save lives and reduce the amount of property damage in an accident. Some of these systems provide braking assistance to the driver, while others are capable of activating the brakes with no driver input. Sudden braking, however, can also cause the vehicle to slip, skid, or topple.

Statistics and Facts: Deaths due to Drowsy Driving. According to a ranking survey by WHO India in 2013, death due to road accidents held 9th position among all causes of death. More than 1,37,000 people were killed in road accidents in 2013 alone, which amounts to one death every 5 minutes. NCRB data also shows that from 2014 to 2015 road accident deaths in India rose from 3.1% to 5.1%. In the United States, 250,000 drivers fall asleep at the wheel every day, according to the Division of Sleep Medicine at Harvard Medical School, and in a national poll by the National Sleep Foundation, 54% of adult drivers said they had driven while drowsy during the past year, with 28% saying they had actually fallen asleep while driving. According to the National Highway Traffic Safety Administration, drowsy driving is a factor in more than 100,000 crashes, resulting in 6,550 deaths and 80,000 injuries annually in the USA. Millions of drivers fall asleep at the wheel each month, and roughly 15 percent of all fatal crashes involve a drowsy driver.

The alertness level of drivers can be estimated with computer vision-based methods. The level of fatigue can be found from the value of PERCLOS (the ratio of closed-eye frames to the total frames processed). In this work we have developed a real-time system that processes the video onboard, alarms the driver if the driver is not alert, and can even assist braking in difficult situations. Using a pre-trained model that detects the face and gives landmarks for different features, we study the alertness of the driver from the distance between the lips and the eye aspect ratio. At night, active near-infrared (NIR) illumination will be used.
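
As a minimal illustration of the PERCLOS value described above (this helper is only a sketch; in the actual system the frame counts come from the eye-aspect-ratio logic shown later in the code):

def perclos(closed_eye_frames, total_frames):
    # PERCLOS = ratio of closed-eye frames to the total frames processed
    if total_frames == 0:
        return 0.0
    return closed_eye_frames / total_frames

# Example: 45 closed-eye frames out of 300 processed frames gives PERCLOS = 0.15
print(perclos(45, 300))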

The Autonomous Braking Algorithm used here is an effective way of bringing the vehicle to a stop immediately when an obstacle appears suddenly in front of the car, while minimising the damage.

Road accidents occur within a very short time frame, so a delay of milliseconds can be the difference between life and death. This application therefore requires fast processing and accelerated functions, which is where PYNQ comes in. The idea of using PYNQ is to accelerate the heavy computation of the image-processing pipeline by offloading it to the FPGA.

Getting Started and Setup Guide

 

To Set This System up for Use :

  1. Adjust the Camera : The placement of the camera should be selected in accordance with four constraints:

    • It should not obstruct the view of the driver.
    • The image acquired should contain the driver's face at its centre.
    • Direct light from other vehicles or street lights should not fall on the camera.
    • The effect of vehicle vibration should be minimal.
  2. Set up the PYNQ board and connect the camera's output to the USB port and the monitor's input to the HDMI-OUT port of PYNQ. (If the camera being used has a different output port type, use a converter adaptor.)

  3. Connect the audio-out line from the PYNQ board to the CAN bus interface to connect it with the car's stereo audio system. If CAN is not present, a physical alarm can be set up and controlled by the GPIO pins of the PYNQ board, as sketched below.
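
For the physical alarm mentioned in step 3, one possible sketch using the PYNQ Pmod GPIO API is shown below; the Pmod port, pin index, and buzzer wiring are assumptions for illustration, not part of the original setup:

from pynq.overlays.base import BaseOverlay
from pynq.lib.pmod import Pmod_IO
import time

base = BaseOverlay('base.bit')
# Assumed wiring: a buzzer driven from pin 0 of the PMODA connector
alarm_pin = Pmod_IO(base.PMODA, 0, 'out')

def sound_alarm(duration_s=2.0):
    # drive the buzzer high for the given duration, then switch it off
    alarm_pin.write(1)
    time.sleep(duration_s)
    alarm_pin.write(0)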

PYNQ SETUP GUIDE : Follow the given link for setting up your PYNQ-Z2 board.

An initial score of 80,000 is assigned so that results can be seen within a short video. If the eyes stay closed or a yawn lasts longer than a certain number of frames, the alertness score falls. Natural blinks and normal mouth movements while talking are ignored, because the score is only reduced after a minimum number of consecutive frames. If the driver becomes alert again, the score slowly rises back towards its maximum value. If the alertness falls below a certain level, WARNING signs are activated to wake up the driver.

EXPLANATION OF THE ENTIRE CODE IN DETAIL

  • drowsy_driver.py – Calculates the alertness score on the basis of the eye-blinking pattern and yawning rate for Model 1 of this project (without considering velocity as a factor in the required alertness).

  • autonomous_braking.py – Uses our Autonomous Braking System algorithm to bring the car to a stop, minimizing the damage and preventing the car from slipping/skidding/toppling using the ABS method.

  • ros_module.py – Uses the publish-subscribe messaging protocol of ROS (Robot Operating System) and the CAN-bus interface to get the velocity and other information from the car and use it in the Autonomous Braking system, as sketched below.
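
A minimal sketch of the publish-subscribe pattern that ros_module.py relies on is given below; the topic name /vehicle/velocity and the Float32 message type are assumptions for illustration, not the actual interface used on the car:

import rospy
from std_msgs.msg import Float32

current_velocity = 0.0

def velocity_callback(msg):
    # store the latest vehicle speed so the braking logic can read it
    global current_velocity
    current_velocity = msg.data

rospy.init_node('driver_assistance_listener')
rospy.Subscriber('/vehicle/velocity', Float32, velocity_callback)
rospy.spin()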

 

DROWSY DRIVER CODE :

 Program to determine the alertness of a driver while driving.  

import cv2
import dlib
import time
import imutils
import argparse
import numpy as np

#importing PYNQ Overlays: 
from pynq.overlays.base import BaseOverlay
from pynq import Overlay
from pynq.lib.video import *

from threading import Thread
from collections import OrderedDict
from imutils.video import VideoStream
from imutils.video import FileVideoStream
from scipy.spatial import distance as dist

SHAPE_PREDICTOR_PATH = "/home/rey/shape_predictor_68_face_landmarks.dat"

Libraries are imported into the program and the path for the trained weights is provided.

# load the base overlay bitstream (BaseOverlay exposes the video subsystem)
base = BaseOverlay('base.bit')

Mode = VideoMode(640,480,24)
hdmi_out = base.video.hdmi_out
hdmi_out.configure(Mode,PIXEL_BGR)
hdmi_out.start()

frame_out_w = 1366
frame_out_h = 768

Initialize the overlay and configure the HDMI output (640×480 resolution, 24 bits per pixel).

# Compute the ratio of eye landmark distances to determine 
# if a person is blinking

def eye_aspect_ratio(eye):

    vertical_A = dist.euclidean(eye[1], eye[5])
    vertical_B = dist.euclidean(eye[2], eye[4])
    horizontal_C = dist.euclidean(eye[0], eye[3])
    ear = (vertical_A + vertical_B) / (2.0 * horizontal_C)
    return ear

The function computes the Euclidean distances between the two sets of vertical eye landmarks and between the horizontal eye landmarks, then computes and returns the eye aspect ratio (EAR).
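
With the six eye landmarks numbered p1 to p6 (p1 and p4 being the horizontal corners, and p2, p3, p5, p6 the upper and lower lid points), the value computed above is the standard eye aspect ratio, in LaTeX notation:

EAR = \frac{\lVert p_2 - p_6 \rVert + \lVert p_3 - p_5 \rVert}{2\,\lVert p_1 - p_4 \rVert}

The EAR stays roughly constant while the eye is open and drops towards zero when the eye closes, which is why comparing it against a threshold detects blinks and prolonged closures.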

def get_landmarks(im):
    rects = detector(im, 1)

    if len(rects) > 1:
        return "error"
    if len(rects) == 0:
        return "error"
    return np.matrix([[p.x, p.y] for p in predictor(im, rects[0]).parts()])

This function generates landmarks on the face using the pre-trained predictor model; it returns "error" if no face or more than one face is detected.

def annotate_landmarks(im, landmarks):
    im = im.copy()
    for idx, point in enumerate(landmarks):
        pos = (point[0, 0], point[0, 1])
        cv2.putText(im, str(idx), pos,
                    fontFace=cv2.FONT_HERSHEY_SCRIPT_SIMPLEX,
                    fontScale=0.4,
                    color=(0, 0, 255))
        cv2.circle(im, pos, 3, color=(0, 255, 255))
    return im

Draw the landmark indices and points on a copy of the image frame and return it.

def top_lip(landmarks):
    top_lip_pts = []
    for i in range(50,53):
        top_lip_pts.append(landmarks[i])
    for i in range(61,64):
        top_lip_pts.append(landmarks[i])
    top_lip_all_pts = np.squeeze(np.asarray(top_lip_pts))
    top_lip_mean = np.mean(top_lip_pts, axis=0)
    return int(top_lip_mean[:,1])   

Collect the top-lip landmarks and return the mean y-coordinate of the top lip.

def bottom_lip(landmarks):
    bottom_lip_pts = []
    for i in range(65,68):
        bottom_lip_pts.append(landmarks[i])
    for i in range(56,59):
        bottom_lip_pts.append(landmarks[i])
    bottom_lip_all_pts = np.squeeze(np.asarray(bottom_lip_pts))
    bottom_lip_mean = np.mean(bottom_lip_pts, axis=0)
    return int(bottom_lip_mean[:,1])

Collect the bottom-lip landmarks and return the mean y-coordinate of the bottom lip.

def mouth_open(image):
    landmarks = get_landmarks(image)
    
    if isinstance(landmarks, str):  # get_landmarks returned "error"
        return image, 0

    image_with_landmarks = annotate_landmarks(image, landmarks)
    top_lip_center = top_lip(landmarks)
    bottom_lip_center = bottom_lip(landmarks)
    lip_distance = abs(top_lip_center - bottom_lip_center)
    return image_with_landmarks, lip_distance

Find the absolute vertical distance between the mean positions of the top-lip and bottom-lip landmarks; this lip distance is later used for yawn detection.

# Constant for the eye aspect ratio to indicate drowsiness 
EYE_AR_THRESH = 0.25
# Constant for the number of consecutive frames the 
#eye (closed) must be below the threshold
EYE_AR_CONSEC_FRAMES = 10
# Initialize the frame counter
FRAME_COUNTER = 0
# Boolean to indicate if the alarm is going off
IS_ALARM_ON = False
#yawning distance threshold 
YAWN_DIST = 32
#Maximum Positive Attention Score : 

AttentionScoreMax = 80000
AttentionScore = AttentionScoreMax
#Warning Level
WarningLevel = 50000
autoBrakeLevel = 3000
#error frame 
error_frame_thres = 2
YAWN_MIN_FRAME_COUNT = 10
CAL_AVG_EAR=0
ear_flag=0
avg_ear_sum=0

All the threshold values and variables are initialized.

def warningAlert():
    print (" WARNING ALERT !!! ")
    output_text = "WARNING! ALERT !" + str(AttentionScore)
    cv2.putText(frame,output_text,(50,100),
        cv2.FONT_HERSHEY_COMPLEX, 2,(0,0,255),2)
    return

This function prints a warning alert on the screen together with the driver's current attention score.

def autoBrakeAlert():
    print (" AUTO BRAKE !!! ")
    output_text = "AUTO BRAKE " + str(AttentionScore)
    cv2.putText(frame,output_text,(50,250),
        cv2.FONT_HERSHEY_COMPLEX, 2,(0,0,255),4)
    return

When called, this function triggers the CAN protocol for hardware-level control of the car, applying the brakes gradually and alerting the driver about it.
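
A hedged sketch of how such a braking request could be sent over CAN from Python using the python-can library is shown below; the channel name, arbitration ID, and payload encoding are illustrative assumptions, not the actual CAN IDs used on the car:

import can

def send_brake_request(brake_level):
    # brake_level: assumed 0-255 scale for the requested braking effort
    bus = can.interface.Bus(channel='can0', bustype='socketcan')
    msg = can.Message(arbitration_id=0x0A5,  # hypothetical brake-request ID
                      data=[brake_level & 0xFF],
                      is_extended_id=False)
    bus.send(msg)
    bus.shutdown()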

def checkWarning():
    if(AttentionScore < WarningLevel):
        warningAlert()
    if(AttentionScore < autoBrakeLevel):
        autoBrakeAlert()
    return

Compare the attention score and threshold attention level.

# Take a bounding box predicted by dlib and convert it
# to the format (x, y, w, h) as normally handled in OpenCV

def rect_to_bb(rect):
    x = rect.left()
    y = rect.top()
    w = rect.right() - x
    h = rect.bottom() - y
    
    # return a tuple of (x, y, w, h)
    return (x, y, w, h)

Take a bounding box predicted by dlib and convert it to the (x, y, w, h) format normally handled in OpenCV.

# The dlib face landmark detector will return a shape object 
# containing the 68 (x, y)-coordinates of the facial landmark regions.
# This function converts the above object to a NumPy array.

def shape_to_np(shape, dtype = 'int'):
    # initialize the list of (x, y)-coordinates
    coords = np.zeros((68, 2), dtype = dtype)
    
    # loop over the 68 facial landmarks and convert them
    # to a 2-tuple of (x, y)-coordinates
    for i in range(0, 68):
        coords[i] = (shape.part(i).x, shape.part(i).y)
    
    # return the list of (x, y)-coordinates    
    return coords

The dlib face landmark detector returns a shape object containing the 68 (x, y)-coordinates of the facial landmark regions. This function converts that object to a NumPy array.

# define a dictionary that maps the indexes of the facial
# landmarks to specific face regions

FACIAL_LANDMARKS_IDXS = OrderedDict([
    ("mouth", (48, 68)),
    ("right_eyebrow", (17, 22)),
    ("left_eyebrow", (22, 27)),
    ("right_eye", (36, 42)),
    ("left_eye", (42, 48)),
    ("nose", (27, 35)),
    ("jaw", (0, 17))
])

This defines a dictionary that maps the indexes of the facial landmarks to specific face regions.

# initialize dlib's face detector (HOG-based)
detector = dlib.get_frontal_face_detector()

# create the facial landmark predictor
predictor = dlib.shape_predictor(SHAPE_PREDICTOR_PATH)

# grab the indexes of the facial landmarks for the left and
# right eye, respectively
(leStart, leEnd) = FACIAL_LANDMARKS_IDXS['left_eye']
(reStart, reEnd) = FACIAL_LANDMARKS_IDXS['right_eye']

# Streaming from a web-cam

vs = VideoStream(src = 0).start()
fileStream = False
time.sleep(1.0)

# Video writers used later in the loop: 'out' saves the raw frames and
# 'result' saves the annotated frames (the file names below are placeholders)
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('raw_output.avi', fourcc, 24.0, (640, 480))
result = cv2.VideoWriter('result_output.avi', fourcc, 24.0, (640, 480))

# Variables for yawn detection and frame count are initialized 

yawns = 0
yawn_status = False 
FRAME_COUNTER_YAWN =0
FRAME_COUNTER_EYES =0
error_frame =0

Initialize dlib's face detector (HOG-based), create the facial landmark predictor, start the stream from the camera, and define all the variables.

while True:
    global frame 
    frame = vs.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rects = detector(gray, 0)
    image_landmarks, lip_distance = mouth_open(frame)
    out.write(frame)
    
    #Printing attention score and distance between lips
    prev_yawn_status = yawn_status  
    print ("lip distance: ", lip_distance)
    print ("AttentionScore: ", AttentionScore)
    
    # Master code for yawn detection and attention scoring
    # along with time of yawning
    if (lip_distance > YAWN_DIST and ear_flag == 2):
        #start_time = time.time()
        FRAME_COUNTER_YAWN +=1
        yawn_status = True
        print ("Frame Count: ", FRAME_COUNTER)

        # Attention scorer part
        if(FRAME_COUNTER_YAWN > YAWN_MIN_FRAME_COUNT):
                AttentionScore -= 2*FRAME_COUNTER_YAWN
                cv2.putText(frame, "Subject is Yawning",(50,450),
                    cv2.FONT_HERSHEY_COMPLEX, 1,(0,0,255),2)
                checkWarning()
            
    else:
        yawn_status = False
        if (AttentionScore < AttentionScoreMax):
            AttentionScore +=100
            checkWarning()

                 
    if prev_yawn_status == True and yawn_status == False:
        yawns += 1
        FRAME_COUNTER_YAWN = 0

    cv2.imshow('Live Landmarks', image_landmarks )
    
    ##Checking Drowsiness of Driver : 

    for rect in rects:
        # determine the facial landmarks for the face region, then
        # convert the facial landmark (x, y)-coordinates to a NumPy array
        shape = predictor(gray, rect)
        shape_np = shape_to_np(shape)
        
        # extract the left and right eye coordinates, then use the
        # coordinates to compute the eye aspect ratio for both eyes
        leftEye = shape_np[leStart:leEnd]
        rightEye = shape_np[reStart:reEnd]
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)
        
        # average the eye aspect ratio together for both eyes
        avgEAR = (leftEAR + rightEAR) / 2.0
        
        # compute the convex hull for the left and right eye, then
        # visualize each of the eyes
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
        

        #Configures Eyes Aspect Ratio according to the driver's 
        #eyes by taking avg. of 100 frames
        if(CAL_AVG_EAR < 100):
            cv2.putText(frame, "CONFIGURING E.A.R.", (20, 60),
             cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 0, 0), 4)
            ear_flag=1
            avg_ear_sum+=avgEAR
            CAL_AVG_EAR +=1
        else:
            if(CAL_AVG_EAR == 100):
                CAL_AVG_EAR = 200
                # use 95% of the driver's average EAR over the first 100
                # frames as the personalised drowsiness threshold
                EYE_AR_THRESH = 0.95*(avg_ear_sum/100)
                ear_flag=2

        if (avgEAR < EYE_AR_THRESH and ear_flag == 2):
            FRAME_COUNTER_EYES += 1

            # if the eyes were closed for a sufficient number of frames
            # then sound the alarm
            if FRAME_COUNTER_EYES >= EYE_AR_CONSEC_FRAMES:
                cv2.putText(frame, 'DROWSINESS ALERT!!!', (10, 30),
                 cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                AttentionScore-=40*FRAME_COUNTER_EYES
                if(AttentionScore <=0):
                    AttentionScore=0
                checkWarning()
            
        # check to see if the eye aspect ratio is below the blink
        # threshold, and if so, increment the blink frame counter
        else:
            FRAME_COUNTER_EYES = 0
            IS_ALARM_ON = False

This fragment of the code computes the eye aspect ratio and decides the drowsiness of the driver. It also amends the attention score, with a larger per-frame penalty than the one applied for yawning, and it checks that the eyes have been closed for a sufficient number of frames before raising the alert.

            
        # draw the computed eye aspect ratio for the frame
        cv2.putText(frame, "EAR: {:.2f}".format(avgEAR), (300, 30),
         cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
        output_text = " Attention Score " + str(AttentionScore)
        cv2.putText(frame,output_text,(30,300),
            cv2.FONT_HERSHEY_COMPLEX, 1,(0,255,127),2)
        
    # show the frame
    #cv2.imshow('Frame', frame)

    #OUTPUT through HDMI on MONITOR : 
    outframe = hdmi_out.newframe()
    outframe[0:480,0:640,:] = frame[0:480,0:640,:]
    hdmi_out.writeframe(outframe)

    #saving Result
    result.write(frame)

    key = cv2.waitKey(1) & 0xFF
    # press 'q' to stop the loop and run the cleanup below
    if key == ord('q'):
        break
    

# Cleanup
cv2.destroyAllWindows()
vs.stop()
hdmi_out.stop()
del hdmi_out
out.release()
result.release()

This is the last part of the code, which combines all the above modules into one program and displays the live result on the screen.

The Automatic Braking Algorithm includes the following braking systems:

  • Anti-Lock Braking System (ABS) Anti-lock brakes stop the wheels from locking up in a panic braking situation. They sense the motion of each wheel and detect skidding or severe braking. If skidding or massive sudden brake pressure from the driver is detected, the ABS will pump, or pulse, the brakes as needed to prevent the car from skidding out of control. The anti-lock braking system can pump the brakes many times per second – far faster than any human can – which helps the driver maintain control of the car.

  • Emergency Brake Assist (EBA) Designed to add braking power if the driver doesn’t apply enough pressure to the brakes during a panic stop, this system can override the driver entirely and apply full brake force. This alone is not an automatic braking system.

  • Forward Collision Warning (FCW) Forward collision warning (FCW) is a warning system that gives the driver time to take action first and prevent an accident – but it is only a warning system. The computer does not take over and do the braking if the driver does not react in time.

  • Forward Collision Mitigation (FCM) A forward collision mitigation (FCM) system warns the driver and applies the brakes simultaneously. This system should not be confused with forward collision warning (FCW), because FCW does not take any action. FCM applies the vehicle's brakes as the computer calculates the vehicle's situation, such as velocity and distance. An FCM system is not necessarily designed to fully stop the car in an emergency situation, but to reduce, or mitigate, the damaging effects of a collision.

  • Forward Collision Avoidance (FCA) Avoiding a collision completely is what a Forward Collision Avoidance (FCA) system is designed to do. It is the most complex system because the factors that must be considered and calculated to actually avoid a collision are numerous. An FCA uses automatic braking assistance, anti-lock brakes, and even assisted steering in order to achieve its goal, but the reality is you’re probably still going to crash. An FCA, like an FCM or FCW, will help reduce the severity of a crash.


 

Algorithm :

Using a 2D LiDAR, RADAR (a cheaper option), or stereo camera, the distance of the nearest object ahead is calculated (let this be x).

The Safe Braking Distance z is then calculated; it includes the reaction-time factor, the minimum braking distance, and a safety factor, in terms of the variables below:

  • v0 : the speed of the vehicle at the time Emergency Braking is deployed
  • tr : the average reaction time (taken as 2 sec)
  • a : the retardation due to Maximum Braking Force
  • lambda : the safety multiplier, since Maximum Braking Force will not be used to stop (0.7 < lambda < 0.9)

If x < z, the Emergency Braking System is activated.
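
A plausible form of z, assuming it is the reaction-time distance plus the braking distance at the reduced, safety-scaled deceleration (lambda times a), is, in LaTeX notation:

z = v_0 t_r + \frac{v_0^2}{2 \lambda a}, \qquad 0.7 < \lambda < 0.9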

Let's derive the retardation rate required to bring the car to a stop. The speed vs. distance relationship is shown in the graph.

Let's assume a very general function for the speed as a function of distance, v(x), so that we can change its parameters and tune the system to give the best result.

(alpha and beta are the tuning parameters of this function.)

By differentiating this equation with respect to x and multiplying by v, we get the retardation rate a = v·dv/dx.

This is the value of retardation required to stop the vehicle.

If full brakes are applied, the tyres will lock and there is a high possibility of slipping. So we use the ABS algorithm, which releases the brakes in pulses when the tyres lock and keeps the slip ratio near its peak value to obtain maximum braking efficiency, as sketched below.
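
A simplified sketch of this pulse-and-slip-ratio idea is given below; the target slip ratio, wheel radius, and the way wheel and vehicle speeds are obtained are illustrative assumptions rather than the project's actual braking implementation:

def slip_ratio(vehicle_speed, wheel_speed, wheel_radius=0.3):
    # slip ratio: how much slower the tyre contact patch moves than the vehicle
    if vehicle_speed <= 0:
        return 0.0
    return (vehicle_speed - wheel_speed * wheel_radius) / vehicle_speed

def abs_step(vehicle_speed, wheel_speed, target_slip=0.2):
    # keep the slip ratio near its peak-friction value by pulsing the brake:
    # release when the wheel is locking up, re-apply otherwise
    if slip_ratio(vehicle_speed, wheel_speed) > target_slip:
        return 'RELEASE'
    return 'APPLY'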

MODEL 1 :

Pseudo Code : 
Attention Score determining Algorithm (assuming 30fps):

Assign Initial Positive Attention Max. Score : 80000
Define Warning Level : 8000
Define AutoBrake Level : 4000

checkWarning : 
	if Attention Score < 8000
		Display WARNING
	if Attention Score < 4000
		Take Control and Slow Down Vehicle

Eye Aspect Ratio is configured in the first 100 frames according to the driver's eyes:
avgEAR is set as the average of the EAR over the initial 100 frames.

If Yawn and Blink are not detected in a frame
	Attention Score += 60
If Blink is detected
	Attention Score -= 20 x Frame Counter
If Yawn is detected in a frame
	Attention Score -= 2 x Frame Counter for Yawn
checkWarning()

_______________________________________________________

Code Flow ->

read frame from camera
convert into grayscale 
identify face landmark using dlib

avgEAR is configured according to the driver's eyes

yawn detection:
	distance between lips > yawning threshold
		Frame Counter (with Yawn Detected) ++
		if Yawning in Continuous Number of Frames > Threshold
			Attention Score -=2x Frame Counter
			checkWarning // check warning status after each score update
If Yawn and Blink are not detected in a Frame
	Attention Score += 60 (if Attention Score < Fixed Maximum Score)
	checkWarning // check warning status after each score update

Eye Blinking detection:
	Using PERCLOS
	Eyes aspect ratio < Average aspect ratio (parameter)
	Blink is detected
		if Blink is detected continuously for Threshold no. of frames
			Drowsiness Detected
			Attention Score -=20 x Frame Counter
			checkWarning // check warning status after each score update

MODEL 2 :

The Required Alertness Level is a function of the speed of the vehicle. So,
AttentionScore -= 2 * (speed of vehicle) * (count of continuous frames with eyes closed)

WARNING LEVEL and AUTO_BRAKE LEVEL are also functions of the speed of the vehicle. AUTO_BRAKE LEVEL is calculated from the safe braking distance z, defined in terms of:

  • v0 : the speed of the vehicle at the time Emergency Braking is deployed
  • tr : the average reaction time (taken as 2 sec)
  • a : the retardation due to Maximum Braking Force
  • lambda : the safety multiplier, since Maximum Braking Force will not be used to stop (0.7 < lambda < 0.9)

If the distance of the nearest object/traffic in the line of the path (given by RADAR or LiDAR) is less than z, the Emergency Braking System is activated; a small sketch of this check follows below. The braking system algorithm is explained in the Autonomous_Braking_Algorithm wiki.
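
A small sketch of this speed-dependent check is shown below; the maximum deceleration of 7 m/s^2 and the default safety multiplier are assumed values:

def safe_braking_distance(v0, t_r=2.0, a_max=7.0, lam=0.8):
    # z = reaction-time distance + braking distance at the reduced,
    # safety-scaled deceleration lam * a_max
    return v0 * t_r + (v0 ** 2) / (2.0 * lam * a_max)

def should_auto_brake(obstacle_distance, v0):
    # trigger emergency braking when the nearest obstacle (from RADAR/LiDAR)
    # is closer than the safe braking distance for the current speed
    return obstacle_distance < safe_braking_distance(v0)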


 

Model 3:

Includes GPS and other sensor data for complete localisation, and uses a PID controller with the required value of retardation and the actual value of retardation, as sketched below.
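
A minimal PID sketch for the retardation loop described above (the gains are placeholder assumptions, not tuned values from the project):

class RetardationPID:
    def __init__(self, kp=1.0, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, required_retardation, actual_retardation, dt):
        # error between the retardation we need and what the car is achieving
        error = required_retardation - actual_retardation
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # output: correction to the brake command
        return self.kp * error + self.ki * self.integral + self.kd * derivative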

 

Model 4:

A quadratic lane curve fitting technique gives us the quadratic equation that fits the lane most closely. We can therefore calculate the radius of curvature of the turn using a monocular camera, determine the safe turning speed for the car, and either warn the driver if the speed is above this limit or restrict the speed of the car to the safe limit for the turn, as sketched below.
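
A short sketch of the curvature and safe-speed calculation implied above; the friction coefficient, and the assumption that the lane points are already in metres, are illustrative choices:

import numpy as np

def safe_turning_speed(lane_x, lane_y, mu=0.7, g=9.81):
    # fit a quadratic y = a*x^2 + b*x + c to the detected lane points
    a, b, c = np.polyfit(lane_x, lane_y, 2)
    # radius of curvature of the fitted curve at the point nearest the car
    x0 = lane_x[-1]
    radius = (1 + (2 * a * x0 + b) ** 2) ** 1.5 / abs(2 * a)
    # maximum speed at which lateral friction can hold the turn: v = sqrt(mu*g*R)
    return np.sqrt(mu * g * radius)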

Xilinx Innovation Challenge

Secured Second Position in the Competition.

Abstract submissions came from all over India, and 16 were accepted for the Final Round.
We submitted an abstract on the problem statement of Safe Driving.
Upon qualification, we received a PYNQ-Z2 FPGA board from Xilinx and had to implement our idea within a week of receiving it. We implemented the system with the CAN protocol on our Mahindra autonomous car, and used a web camera on the dashboard to capture the driver's expressions in real time.
