System and method for identifying a driver
Patent abstract:
SYSTEM AND METHOD FOR IDENTIFYING A DRIVER. A system for identifying a driver comprising an image capture device, a processor, and a memory. The image is captured during vehicle operation and from an expected location of the driver. The processor is configured to detect a driver's face in the image, determine a facial dataset from the image, and identify the driver based on at least part of the facial dataset. The memory is coupled to the processor and configured to provide the processor with instructions.

Publication number: BR112014003488B1
Application number: R112014003488-5
Filing date: 2012-09-13
Publication date: 2021-05-11
Inventors: Syrus C. Nemat-Nasser; Andrew Tombras Smith
Applicant: Lytx Inc
Primary IPC:
Patent description:
Background of the Invention

[0001] There is a need in fleet management to accurately determine the performance of a driver. Performance information can be used to reward safe driving behavior, provide useful feedback to a driver for further improvement, and fairly assess insurance. Currently, driver performance is assessed by monitoring driving data such as driver behavior and vehicle operation through a variety of sensors placed on board the vehicle. Driving events captured via on-board sensors are scored to provide a measure of driver performance. In a large fleet, a driver is typically not permanently assigned to a particular vehicle, but is instead assigned to a vehicle based on availability. This creates a challenge in attributing the captured driving data (e.g., driving event data captured from vehicle operation) to the appropriate driver in order to assess the performance of that specific driver. Although driver badges (for example, a radio frequency identification (RFID) badge) are sometimes used to match a driver with driving data, badges have proven unreliable because drivers do not consistently carry them. As a result, driving data is often manually assigned to drivers by human operators. In some cases, manual assignment leads to an unacceptable number of mismatches between driving data and driver.

Brief Description of the Drawings

[0002] Various embodiments of the invention are disclosed in the detailed description below and the accompanying drawings.

[0003] Figure 1 is a block diagram illustrating an embodiment of a system for identifying a driver.

[0004] Figure 2 is a block diagram illustrating an embodiment of an on-board computer.

[0005] Figure 3 is a block diagram illustrating various on-board sensors for detecting driving data according to various embodiments.

[0006] Figure 4 is a flow diagram illustrating an embodiment of a process for identifying a driver.

[0007] Figure 5 is a flow diagram illustrating an embodiment of a process for associating facial data and a driver with driving data.

[0008] Figure 6 is a flow diagram illustrating an embodiment of a process for selecting the best facial data for driver identification.

[0009] Figure 7 is a flow diagram illustrating an embodiment of a process for making associations between driving data and facial data captured at different times.

[0010] Figure 8 is a diagram illustrating an embodiment of capturing and/or generating driving data, driver images, and facial data.

[0011] Figure 9 is a diagram illustrating an embodiment of facial features.

Detailed Description

[0012] The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied in a computer-readable storage medium; and/or a processor, such as a processor configured to execute instructions stored in and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form the invention may take, may be referred to as techniques. In general, the order of the steps of the disclosed processes can be changed within the scope of the invention. Unless stated otherwise, a component such as a processor or memory described as being configured to perform a task can be implemented as a general component that is temporarily configured to perform the task at a given time or as a specific component that is manufactured to perform the task.
As used herein, the term "processor" refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.

[0013] A detailed description of one or more embodiments of the invention is provided below along with accompanying figures illustrating the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any one embodiment. The scope of the invention is limited only by the claims, and the invention encompasses numerous alternatives, modifications, and equivalents. Numerous specific details are set forth in the description below in order to provide a thorough understanding of the invention. These details are provided for the purpose of example, and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.

[0014] A system for identifying a driver is disclosed. The system comprises an image capture device, a processor, and a memory. The image is captured during vehicle operation and from an expected location of the driver. The processor is configured to detect a driver's face in the image, determine a facial dataset from the image, and identify the driver based on at least part of the facial dataset. The memory is coupled to the processor and configured to provide the processor with instructions.

[0015] In some embodiments, a driver identification system captures an image of an expected driver location in a vehicle. The image is analyzed to locate a face in the image and extract facial features for processing into a set of facial data (e.g., geometric relationships between facial features such as eyes, eyebrows, nose, mouth, chin, hair, ears, etc.). The facial data is analyzed to identify the driver and to associate the driver with captured event data stored or transferred while driving. The facial data can be transmitted to a remote server for driver identification. The facial data can be used to identify the driver by comparing it to facial data stored in a database or by employing a face model. The facial data extracted from a single image of a face is smaller than an image of that face, so transmitting the facial data rather than the image data is more efficient for communication between a mobile unit and a remote server.

[0016] Figure 1 is a block diagram illustrating an embodiment of a system for identifying a driver. In the example shown, vehicle 102 is equipped with on-board computer 104, which interfaces with various on-board vehicle sensors 106. On-board computer 104 includes one or more processors that are capable of executing computer instructions to perform various functions involved in evaluating driver performance. On-board computer 104 further includes one or more data storage units for storing computer instructions, rules, algorithms, driving data, various databases, and maps such as a digital safety map. On-board computer 104 further includes one or more communication interfaces for communicating with on-board sensors 106 (e.g., global positioning system (GPS) receiver 108) and with remote server 112 hosted on a network 114.
In various embodiments, the communication interfaces comprise interfaces for wired and/or wireless (short-range or long-range), direct and/or indirect communication links, for example, a universal serial bus (USB) interface, a vehicle bus interface (e.g., an on-board diagnostics (OBD) interface), a global positioning system (GPS) interface, a Bluetooth™ interface, a ZigBee™ link, an Institute of Electrical and Electronics Engineers (IEEE) 802.11 point-to-point link, a wired/wireless data network link, or any other appropriate interfaces. In various embodiments, network 114 comprises a wired or wireless telephone network, a local area network (LAN), a wide area network (WAN), or any other suitable network.

[0017] In various embodiments, the one or more on-board sensors 106 comprise an image capture device (e.g., a video camera, a still camera, etc.), a geographic location data receiver (e.g., GPS 108), a sensor for detecting the operating state of the vehicle, or any other suitable sensors. In the example shown, GPS 108 receives signals from satellites 110 that enable GPS 108 to provide geographic location data. In some embodiments, one or more on-board sensors 106 (e.g., a GPS receiver, an accelerometer, etc.) are incorporated into on-board computer 104. In some embodiments, one or more on-board sensors 106 are separate from on-board computer 104. The one or more on-board sensors 106 detect various driving data during vehicle operation, including driver behavior, vehicle operating state, and/or various driving conditions or environmental parameters. In various embodiments, the driving conditions include road conditions, weather conditions, traffic conditions, or any other appropriate driving conditions. In various embodiments, communication circuits, processors, and/or interfaces are included in one or more sensors to perform various functions such as capturing, storing, processing, and/or transmitting sensor data. For example, a sensor on/off circuit is included to turn the sensor on/off, a data capture circuit is included to capture sensor data, and a communications interface circuit is included to transmit sensor data to a remote server. In some embodiments, these sensor functions are performed automatically by the sensor or in response to commands (e.g., issued by on-board computer 104). In various embodiments, one or more data storage units are included in or associated with one or more sensors 106 for storing computer instructions and sensor data. The data storage units include internal or external, fixed or removable, persistent and/or volatile memory. On-board computer 104 is configured to receive sensor data from the one or more sensors and to receive other information from other external source(s) (e.g., weather information, road maps, etc.) via the communication interfaces. For example, still or moving images from various viewing perspectives, vehicle speed, acceleration and direction, the vehicle's geographic location, and ambient temperature and humidity level are received from various on-board sensors. In some embodiments, received sensor data is analyzed to assess driver performance. In some embodiments, data from different sensors is correlated to the time and geographic location of the moving vehicle.
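As an illustration of the correlation step just described, the following sketch aligns time-stamped samples from different sensors on a common time scale. It is a minimal sketch, not part of the patent disclosure; the sample structure and field names are assumptions.

```python
# Sketch: correlating samples from different on-board sensors on a common
# time scale. The Sample structure and field names are illustrative only.
from bisect import bisect_left
from dataclasses import dataclass

@dataclass
class Sample:
    timestamp: float  # seconds on the vehicle's common time scale
    value: object     # sensor reading (speed, acceleration, GPS fix, ...)

def nearest_sample(samples: list[Sample], t: float) -> Sample:
    """Return the sample whose timestamp is closest to t.

    `samples` must be sorted by timestamp."""
    i = bisect_left([s.timestamp for s in samples], t)
    candidates = samples[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s.timestamp - t))

def correlate(gps: list[Sample], accel: list[Sample], t: float):
    """Pair the GPS fix and accelerometer reading nearest to time t."""
    return nearest_sample(gps, t), nearest_sample(accel, t)
```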
[0018] In some embodiments, on-board computer 104 is configured to perform analyses of the detected driving data. In some embodiments, because the computing capability of on-board computer 104 is limited, on-board computer 104 performs preliminary analyses that are less robust or complex than those performed on remote server 112, which has more computational power. In some embodiments, on-board computer 104 is configured to upload the driving data (e.g., sensor data and/or analysis data) to remote server 112 for further analysis, processing, and/or storage. In some embodiments, the driving data is uploaded automatically by on-board computer 104 based on predefined criteria or upon request (e.g., by remote server 112). In some embodiments, remote server 112 performs detailed and/or additional analysis of the driving data. For example, the server uses the driving data to identify the driver, analyze the driving data, determine driver performance such as driver attitude (e.g., recklessness) and skill, calculate a driver risk score, generate a driver profile, identify erratic and dangerous driving behavior, identify a driver's deviation from his or her normal driving behavior (by comparison to the driver's profile), identify high-risk drivers, perform risk analysis for a group of drivers or for an entire fleet, calculate insurance, and/or generate various reports.

[0019] Figure 2 is a block diagram illustrating an embodiment of an on-board computer. In the example shown, on-board computer 200 comprises one or more processors 202, communication interface(s) 206, and data storage unit(s) 204.

[0020] Figure 3 is a block diagram illustrating an embodiment of on-board sensors. In the example shown, one or more video cameras 302 and/or still cameras 304 can be mounted at various positions on the vehicle to capture a cabin view or an exterior view such as a front view, a rear view, a left side view, and/or a right side view. The video cameras and still cameras can be equipped with infrared emitters for improved night vision and/or for imaging the driver's facial features through sunglasses. The video cameras and/or still cameras can be stereo video cameras and/or still cameras capable of capturing 3D images. The captured images can be used to identify the driver and to record driver behavior with respect to the circumstances leading up to, during, and immediately after a driving event. The captured images can also be used to recognize road signs such as speed limit signs. One or more microphones 306 can be placed inside and/or outside the cabin to record audio sounds. One or more laser- and/or camera-based lane tracking sensor(s) 308 may be positioned at the front and/or rear of the vehicle to monitor vehicle drift within the lane. In one example, a video camera is mounted in the overhead console above the mirror to monitor the lane markings on the road. The captured video images can be processed using one or more processors to determine whether the vehicle has moved out of its proper lane and by how much. One or more accelerometers 310 may be placed on board the vehicle to monitor acceleration along one or more vehicle axes. The vehicle acceleration axes may include the longitudinal axis of the vehicle (i.e., the axis substantially in the direction of the vehicle's main movement), the transverse (lateral) axis of the vehicle (i.e., the substantially horizontal axis substantially orthogonal to the vehicle's main movement), and the vertical axis of the vehicle (i.e., the axis orthogonal to both the longitudinal axis and the transverse axis of the vehicle).
Accelerometer 310 can be a built-in accelerometer installed by the vehicle manufacturer or an add-on accelerometer installed after manufacture. Gyroscope 312 can be placed on board the vehicle to detect the angular orientation and rate of rotation of the vehicle (e.g., the vehicle's rotational dynamics). Rotation is typically measured with reference to one of three axes: vertical, longitudinal, and transverse. Humidity sensor 314 can be mounted on the outside of the vehicle to detect the ambient humidity level, which provides an indication of whether it is raining on the road. Temperature sensor 316 can be mounted on the outside of the vehicle to detect the ambient temperature, which provides information on how cold the outside environment is and whether it is below freezing and by how much. Additionally, on-board computer 104 may also have the ability to access information detected by one or more vehicle sensors installed in the vehicle by the manufacturer via a vehicle bus interface such as OBD 318. For example, via OBD 318, on-board computer 104 can access cabin equipment operation sensor 319, the manufacturer's integrated speedometer 320 for detecting vehicle speed, anti-lock braking system speed sensor 322 for detecting the rate at which the vehicle wheels are turning and whether the anti-lock braking system has been engaged, accelerator pedal position sensor 324 and brake pedal position sensor 326 for detecting the degrees and profiles of accelerator and brake pedal depression, engine temperature sensor 327 for detecting engine temperature, gear position sensor 328 for detecting gear position/selection, engine rotation speed sensor 330 for detecting engine speed, and engine exhaust sensor 332 for detecting exhaust composition and temperature. The on-board vehicle sensors are not limited to the examples provided here; other vehicle sensors can also be included, such as a crash sensor and various cabin equipment operation sensors accessible through the vehicle bus that provide information regarding the operation of the windshield wipers, light status (headlights on/off, fog lights on/off, turn signals left/right/off, etc.), operation of in-cabin equipment such as radios, cell phones, and DVD players, the volume at which audio equipment is operated, the driver's identity based on entry of an identification number, seat settings, weight, seat belt status, and number of passengers.
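As an illustration of the vehicle bus access described above, the sketch below reads a few manufacturer sensors over OBD. It assumes the third-party python-obd package and a connected OBD-II adapter; neither is part of this disclosure.

```python
# Sketch: reading manufacturer-installed sensors over the OBD bus.
# Assumes the third-party python-obd package and an OBD-II adapter.
import obd

connection = obd.OBD()  # connects to the first available OBD-II adapter

speed = connection.query(obd.commands.SPEED)            # vehicle speed
rpm = connection.query(obd.commands.RPM)                # engine speed
throttle = connection.query(obd.commands.THROTTLE_POS)  # accelerator pedal

if not speed.is_null():
    print(f"vehicle speed: {speed.value}")  # a unit-tagged quantity
```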
[0021] Figure 4 is a flow diagram illustrating an embodiment of a process for identifying a driver. In some embodiments, the process is performed by an on-board computer of a moving vehicle and/or by a remote server. In the example shown, at 402, an image of a driver of a moving vehicle is captured. In some embodiments, capturing the driver image involves capturing an image of an expected location of the driver in the vehicle. In various embodiments, an image capture device such as a video camera or a still camera is positioned on board the vehicle in such a way that the image capture device, having its view focused on an expected driver position, can be used to capture an image of the driver. In one example, the image capture device is placed on the vehicle's dashboard. In various embodiments, the captured image is a 2D image. In various embodiments, a 3D image is captured. In various embodiments, the image capture device can be positioned so that it captures more or less of a front view of the driver's face. In various embodiments, although a perfect front view is preferred, it may not always be achievable due to various restrictions placed on camera position; a face view turned slightly to one side, or tilted up or down to a certain degree, can be tolerated. In various embodiments, the image capture device is configured to capture an image of the driver periodically (for example, every 30 seconds) during operation of the moving vehicle. For example, from the moment the engine is detected to be on to the moment the engine is detected to be off, the image capture device captures an image of the driver every minute. In various embodiments, the image is time-stamped to allow it to be correlated with other sensor data from the moving vehicle, or from other vehicles (e.g., in a fleet), on a common time scale.

[0022] At 404, the driver's face is detected in the image. Various face location and/or detection algorithms can be used to detect the presence and/or location of the driver's face within the image. For example, the detection algorithm can be based on a face model, a deformable feature model, a skin color model, and/or a neural network. In various embodiments, the image can be divided into subspaces and each subspace run through an algorithm to detect whether a face is located within the subspace. In various embodiments, the captured image can be converted to luminance data, hue data, and chroma data for processing by the face detection algorithm. In a face model based approach, a face model is applied to the captured image based on the luminance data, and the face is detected if the correlation value between the face model and the captured image is sufficiently high. In a skin color based approach, a skin-colored area is recognized based on the hue data, and the face is detected based on a change in luminance within the skin-colored area. In an approach based on facial features and skin color, the skin-colored area is detected based on the hue data, facial features such as eyes and mouth are detected by applying deformable feature models to the captured image, and the face is detected based on the presence of these facial features in the skin-colored area. In a neural network based approach, a neural network based classifier can be trained using positive face examples and non-face examples, and the classifier is then applied to the captured image to detect the presence of a face. In some embodiments, the location of the driver's face is a direct output of the detection algorithm. For example, in the well-known Viola-Jones method for robust face detection (Viola and Jones, 2004), a rectangular "sub-window" is moved over the image in an overlapping fashion, and a set of simple features is computed at each position. These features provide input to an optimized probabilistic model that determines whether or not the sub-window contains a face of an expected size. Alternatively, in some embodiments, the location of the driver's face is found separately in a subsequent operation or operations.
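The Viola-Jones detection step in [0022] can be sketched with OpenCV's stock Haar cascade, which implements that method. This is a minimal sketch rather than the patent's implementation; the largest-face heuristic for picking the driver is an assumption.

```python
# Sketch: cascade-based face detection (Viola-Jones) using OpenCV's
# bundled frontal-face Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_driver_face(image_bgr):
    """Return the most plausible driver face box (x, y, w, h), or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # normalize contrast before detection
    faces = cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=(60, 60))
    if len(faces) == 0:
        return None
    # Assumption: the driver is the largest face in the expected region.
    return max(faces, key=lambda box: box[2] * box[3])
```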
[0023] At 406, facial features are extracted from the face region of the image. In some embodiments, the image is transformed before the facial features are extracted. For example, the image is normalized with respect to size, color, contrast, hue, luminance, and position (e.g., tilt, rotation, etc.) before facial feature extraction. In some embodiments, other transformations, such as geometric, lighting, and/or color adjustments, are applied prior to facial feature extraction. Examples of facial features include measurements derived from the subsets of the image containing the mouth, eyes, nose, eyebrows, and chin. Several approaches can be used to extract facial features, for example: (1) a luminance, chrominance, facial geometry, and symmetry based approach; (2) a template matching based approach; and (3) a neural network based approach. In general, two types of information are commonly used by these techniques. One is the image appearance of the facial features, referred to as texture information; the other is the spatial relationship between the different facial features. For example, in an approach based on luminance, chrominance, facial geometry, and symmetry, the mouth feature is identified based on the redness property of the image pixels. The eye features are identified based on the position relative to the mouth by looking for regions that satisfy certain statistical, geometric, and structural properties of eyes in a face image. A symmetry transformation is applied to the skin area to find darker rounded regions that could be eyes and horizontal features that could be a mouth. A simple vertical edge detection can produce the geometric location of the nose. In a feature template based approach, feature templates can be applied to the detected face region of the image; facial features are detected where the correlation values between the feature templates and the captured image are high. In a neural network based method, a neural network is individually trained as a feature detector for each facial feature, and the facial features are then located by searching the face image using the trained facial feature detectors.

[0024] At 408, facial data is generated based on the extracted facial features. The facial data is metadata derived from the driver's facial image that identifies distinctive features. In various embodiments, the facial data may include coordinate data indicating the respective locations of a plurality of facial features. In various embodiments, the facial data can include geometric relationships (e.g., distances, angles) between the plurality of facial features.

[0025] At 410, the quality of the facial data and/or the image is evaluated. In various embodiments, a confidence model for the facial data is built by training on a sample of facial data and corresponding human-assessed facial data quality confidence scores or indices. The confidence model is used to calculate a confidence score that reflects the adequacy of the computed facial data for identifying the driver. The facial data quality score is considered acceptable if it is above a preset threshold value and unacceptable if it is below the preset threshold value.

[0026] At 412, the facial data is used to identify the driver. In various embodiments, the facial data is compared to facial data previously collected from identified drivers. If a similarity score between the target driver's facial data and that of an identified driver is sufficiently high, the driver is identified. In various embodiments, the facial data is applied to a face model. In various embodiments, the face model comprises a neural network based model built by training with the facial data of drivers and/or non-drivers. The driver can be identified using the neural network model.
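The facial data of [0024] can be illustrated with a minimal sketch that turns located feature coordinates into scale-invariant geometric relationships. The landmark names and the normalization by inter-ocular distance are illustrative assumptions, not prescribed by the patent.

```python
# Sketch: deriving a facial dataset (geometric relationships) from
# previously located facial feature coordinates.
import itertools
import math

def facial_dataset(landmarks: dict[str, tuple[float, float]]) -> list[float]:
    """Pairwise inter-feature distances, scaled by the inter-ocular
    distance so the dataset is invariant to image size."""
    scale = math.dist(landmarks["left_eye"], landmarks["right_eye"])
    return [
        math.dist(landmarks[a], landmarks[b]) / scale
        for a, b in itertools.combinations(sorted(landmarks), 2)
    ]

features = {"left_eye": (112.0, 98.0), "right_eye": (168.0, 97.0),
            "nose": (140.0, 130.0), "mouth": (139.0, 165.0)}
print(facial_dataset(features))  # six normalized distances
```

With four landmarks this yields six pairwise distances; angles between feature pairs could be appended to the dataset in the same way.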
[0027] In some embodiments, facial data is continuously collected from one or more vehicles and added to a database. Driver identification is accomplished using a process in which multiple sets of facial data are determined to belong to the same driver, and one or more of these sets are then associated with one or more external driver identification signals (e.g., an identification of a driver by means of a badge, a supervisor, a log, etc.). In some embodiments, multiple sets of facial data are determined to belong to the same driver through one or more of the following mechanisms: (1) multiple sets of facial data are collected during a time interval in which the vehicle can reliably be assumed to be operated continuously by one person; (2) a human reviews the images associated with multiple sets of facial data and determines that all faces represent the same driver; (3) the methods used for automatic face identification are used to determine that multiple sets of facial data are from the same driver. In various embodiments, these operations can be performed either by the on-board computer or on a remote server. By grouping multiple sets of facial data from the same driver, a single driver identification signal can associate the entire group with a specific driver; for example, facial data collected over a one-month period across several vehicles shared within a fleet can be associated with a driver based on a single driver identification signal. Examples of external driver identification signals include: (1) a unique RFID identifier of a card issued to a specific driver; (2) a supervisor confirming the identity of a driver by viewing an image or video uploaded from the vehicle; (3) a dispatch log containing the driving schedule for a driver, from which the system automatically associates the identity with one or more sets of collected facial data.

[0028] In various embodiments, the quality (e.g., confidence index) of the facial data is determined, and the facial data is used to identify the driver only if the quality of the facial data meets one or more predefined criteria (e.g., has a confidence value above a threshold). For example, a confidence index, or a set of confidence indices, for the facial data can be calculated based on a model that correlates with a human operator's review of the quality of the facial data (e.g., classifying the facial data as having low, medium, or high quality). In various embodiments, facial data quality is an output of the face detection and/or facial feature extraction algorithm. For example, if a Viola-Jones face detection procedure is used, the confidence index can be a function of the number of overlapping boxes reporting a positive detection. If the detection model is probabilistic (i.e., it generates an estimate of the probability that an image contains a face), then that probability can be used as a measure of confidence.

[0029] In various embodiments, using the facial data to identify the driver includes transmitting the facial data to a remote server via, for example, a wireless communications link for driver identification, authentication, and/or registration. In various embodiments, the facial data is transmitted to the remote server only if the quality of the facial data is determined to be sufficient, such as meeting one or more predefined criteria (for example, having a quality or confidence value above a threshold). In various embodiments, the facial data is associated (e.g., bundled in the same communication) with the driving data captured around the time the image from which the facial data is derived was captured.
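A minimal sketch of the quality gating in [0028]-[0029] follows, under the assumption of a trained confidence model and a predefined threshold; `confidence_model`, `uplink`, and the threshold value are placeholders, not APIs or values from the disclosure.

```python
# Sketch: transmit facial data to the remote server only when its
# confidence index meets the predefined criterion.
CONFIDENCE_THRESHOLD = 0.8  # hypothetical predefined criterion

def maybe_transmit(facial_data, confidence_model, uplink) -> bool:
    """Gate transmission on facial-data quality; return True if sent."""
    confidence = confidence_model(facial_data)  # e.g., detection probability
    if confidence >= CONFIDENCE_THRESHOLD:
        uplink.send({"facial_data": facial_data, "confidence": confidence})
        return True
    return False
```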
[0030] Figure 5 is a flow diagram illustrating an embodiment of a process for associating facial data and a driver with driving data. In various embodiments, the process can be performed by an on-board computing device placed on board the moving vehicle and/or by a remote server. In the example shown, at 502, driving data of a moving vehicle is captured over a period of time. In various embodiments, the driving data, reflecting driver behavior, may include image/audio/video data, sensor data indicating vehicle motion, vehicle operating state, and/or driving environment data indicating the driving environment, such as road conditions, weather, and traffic conditions. In various embodiments, the driving data includes driving event data of the moving vehicle. At 504, facial data is generated from image data of the driver captured during the period. For example, a face is extracted from the image using template matching, color matching, or feature matching, and a facial dataset is derived from the extracted face. At 506, the facial data is associated with the driving data. For example, a header, footer, or other metadata is attached to the facial data and the driving data indicating that the two are associated with each other. In various embodiments, the image, facial data, and driving data are correlated using a common time scale. In some embodiments, the image, facial data, and driving data are time-stamped. In various embodiments, correlating the facial data with the driving data includes assigning the image and facial data to the driving data. At 508, the identified driver is associated with the driving data. For example, the facial data is used to identify the driver, and the identified driver is associated with the driving data (e.g., driving event data). The driving data is then used to assess the performance of the identified driver. In some embodiments, the captured driver image and the corresponding facial data are used to positively identify the driver and the driver's operation of the vehicle at the time the driving data was captured.

[0031] Figure 6 is a flow diagram illustrating an embodiment of a process for selecting the best facial data for driver identification. In various embodiments, the process can be performed by an on-board computing device and/or a remote server. In the example shown, at 602, driving data of a moving vehicle is detected. For example, driving data is detected using one or more sensors on a vehicle (e.g., a truck, semi, van, car, etc.). In various embodiments, the driving data includes sensor readings, image data indicating the driver's behavior during vehicle operation, vehicle operating state information, driving environment information such as road conditions, weather conditions, and traffic conditions, or any other appropriate driving data. At 604, a plurality of images of the driver of the moving vehicle is captured around or at the time of driving data collection. In various embodiments, the driver's image is captured periodically (for example, every minute) and/or upon a trigger event. At 606, the images are evaluated for quality. For example, a quality score is determined for each image. Several factors affect the quality of an image captured for driver identification: 1) lighting and its effect on being able to distinguish facial features; 2) the position of the driver's head and its effect on being able to discern a front view of the driver; and 3) driver movement (i.e., motion that blurs the image).
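The first and third of these factors can be scored with simple proxies, for instance mean luminance for lighting and variance of the Laplacian for motion blur; head pose is omitted for brevity. The weights and scaling below are illustrative assumptions, not values from the patent.

```python
# Sketch: scoring image quality for driver identification. Mean luminance
# proxies lighting; Laplacian variance is a standard sharpness measure
# (low variance suggests motion blur).
import cv2

def image_quality_score(image_bgr) -> float:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    lighting = gray.mean() / 255.0                     # 0 (dark) .. 1 (bright)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # blur proxy
    sharpness = min(sharpness / 100.0, 1.0)            # clamp to 0 .. 1
    return 0.5 * lighting + 0.5 * sharpness
```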
Images with acceptable quality scores (e.g., where the quality score is above a threshold) are used for facial data generation, driver identification, registration, and/or transmission to a remote server. At 608, a set of facial data is determined for each image of acceptable quality. In some embodiments, acceptable image quality is determined at 606. For example, the facial data is determined by 1) locating the face in each of the captured images, 2) extracting facial features from the images, and 3) calculating geometric relationships between the various facial features (e.g., distances and angles between coordinates). At 610, the quality of each set of facial data is determined. For example, the quality of the facial data is determined by calculating a confidence score using a confidence model, and high-quality facial data is retained (e.g., stored). The confidence model indicates the degree of confidence that the facial data is likely to support correct driver identification. The model is trained using facial data and corresponding human confirmation of correct identification. In some embodiments, the facial data is transferred to a remote server based at least in part on a quality score for the facial data (e.g., a confidence level above a threshold). At 612, the facial data with the highest quality score is selected and associated with the driving data. For example, the quality score is used to rank the facial data, and the image associated with the highest score is associated with the driving data. In some embodiments, the image is associated with the entire driving session (for example, the period of time during which the vehicle is in an operational state). In some embodiments, a plurality of sets of facial data and corresponding images is selected, for example, by choosing the top N sets together with their corresponding images. These N sets of facial data and the corresponding N images are then associated with the driving session (e.g., with a time period).

[0032] In some embodiments, the quality of a set of facial data is determined by comparing a number of facial datasets to one another; a facial dataset that is not similar to the others is given a low quality rating. In some embodiments, determining the similarity of a face image or facial dataset is done as a pre-processing step prior to any other processing or qualification of a face image or facial dataset based on other quality metrics.

[0033] Figure 7 is a flow diagram illustrating an embodiment of a process for making associations between driving data and facial data captured at different times. In various embodiments, this process can be performed by an on-board computing device and/or a remote server. In the example shown, at 702, facial data is derived from one or more images of the driver's face captured over a period of time. In some embodiments, the facial data is used to identify a driver. At 704, driving data is received, for example, data associated with a driving event, data used to assess driving performance, data used to assess fuel efficiency performance, vehicle performance data, etc. At 706, it is determined whether the driver has changed. For example, driving data or facial data is used to determine whether or not the driver has changed: images show whether the driver is different or not; speed data indicates whether or not the vehicle has decelerated enough to allow a change of driver; weight sensor data indicates that the driver has the same weight at different times; etc.
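The driver-change determination at 706 can be sketched by combining a facial-data similarity score with a speed check, as below. Cosine similarity and the threshold values are illustrative choices; the patent leaves the similarity score and thresholds open.

```python
# Sketch: deciding whether the driver has changed (706 of Figure 7).
import math

SIMILARITY_THRESHOLD = 0.9  # illustrative
SWAP_SPEED_KMH = 5.0        # illustrative: slow enough to swap drivers

def similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two facial datasets."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.hypot(*a) * math.hypot(*b)
    return dot / norm if norm else 0.0

def driver_changed(prev_face, cur_face, speeds_kmh) -> bool:
    could_swap = min(speeds_kmh) < SWAP_SPEED_KMH  # vehicle nearly stopped?
    looks_different = similarity(prev_face, cur_face) < SIMILARITY_THRESHOLD
    return could_swap and looks_different
```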
In the event that it is determined that the driver has not changed, at 708 the driving data is associated with the identified driver. In some embodiments, a previously identified driver is associated with driving data obtained at a later time (e.g., a driver identified using a driver's badge, an RFID, a face ID, etc.). For example, the driving data is associated with a driver identifier or identification number by storing the identifier or identification number with the driving data. In some embodiments, the driving data comprises vehicle sensor data. In the event that it is determined that the driver has changed, at 710 it is indicated that the identified driver has changed.

[0034] In some embodiments, facial data is derived from one or more images of the driver's face captured over a period of time. Additionally, the driving data may have been captured over another time period, potentially not overlapping the time period during which the facial data was captured. The decision system computes a period of time during which there is a low probability that the vehicle's driver has changed. During this period, all facial data and driving data represent one driver. As a consequence, data transmission bandwidth can be reduced by transmitting only a subset of the data for the period, without reducing the accuracy of the process. In various embodiments, data for the period, as described above, are consolidated. Within the period, facial data from one or more images of the driver can be used to establish this correspondence. In some embodiments, a similarity score is calculated from the facial data collected during the period. In some embodiments, if the similarity score is above a threshold, the data is assumed to have been collected while the same person was driving the vehicle. For example, the similarity score is above an empirically defined or predefined threshold. Alternatively, the score being below a (possibly different) threshold could indicate that the driver has likely changed. In some embodiments, this association of data over a period of time is recorded by adding metadata to a header, a footer, or any other appropriate metadata store.

[0035] Figure 8 is a diagram illustrating an embodiment of capturing and/or generating driving data, driver images, and facial data. In the example shown, between engine start-up 800 and engine shutdown 801 (as shown on time axis 816), sensor 802, sensor 803, sensor 804, and sensor 805 capture driving data. The driving data may include vehicle operating state data such as vehicle speed, linear acceleration, and angular acceleration; driving condition data such as weather conditions, road conditions, and traffic conditions; and driver behavior data such as throttle position, brake position, steering wheel position, and/or use of cabin equipment. Driving event 812 and driving event 814 are determined based at least in part on the driving data. Driving event 812 is shown corresponding to times T1 through T7. Driving event 814 is shown corresponding to times T10 through T16. Between engine start-up 800 and engine shutdown 801, camera 806 captures images (as designated by the rectangles along the line). Facial data 808 is generated from the captured images. In some embodiments, facial data is extracted from a complete image, and a cropped image is used for a human to confirm the driver's identity. The best facial data 810 is selected based on quality scores.
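The selection of best facial data 810 (and the top-N variant of steps 610-612 in Figure 6) can be sketched as a simple ranking by confidence score; the tuple layout and the value of N are assumptions.

```python
# Sketch: keep the N highest-confidence facial datasets for a driving
# session, together with their source images.
def top_n_facial_data(scored_sets, n=3):
    """scored_sets: iterable of (confidence, facial_data, image) tuples.
    Return the n highest-confidence entries, best first."""
    return sorted(scored_sets, key=lambda t: t[0], reverse=True)[:n]

best = top_n_facial_data([(0.91, "fd1", "img1"), (0.45, "fd2", "img2"),
                          (0.88, "fd3", "img3")], n=2)
print([fd for _, fd, _ in best])  # ['fd1', 'fd3']
```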
In various embodiments, the best facial data is selected from a newly acquired image, from a previously stored image, from one of the top N stored images, or from any other appropriate image. The best facial data 810 is used for driver identification and/or registration, or is transmitted to a remote server for driver identification and/or registration. In some embodiments, the driver associated with driving event 812 is also associated with driving event 814 if the respective best facial data 810 are similar to each other.

[0036] Figure 9 is a diagram illustrating an embodiment of facial features. In the example shown, image 900 is analyzed for facial features, for example, eyebrow 902, eyes, nose, mouth, cheeks, chin, etc. The facial features are used to determine a set of facial data (e.g., data corresponding to point 904). In some embodiments, the coordinates of the facial features and the relationships between them (e.g., geometric relationships) comprise the facial data.

[0037] Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways to implement the invention. The disclosed embodiments are illustrative and not restrictive.
Claims:
Claims (19)

1. A system for identifying a driver, characterized in that it comprises: an image capture device for capturing a first image during vehicle operation and of an expected location of the driver; a processor configured to: detect a driver's face in the first image; determine a quality score for the first image; if the quality score of the first image is determined to exceed a predefined threshold, determine a facial dataset from the first image; determine a confidence index for the determined facial dataset, wherein the confidence index indicates a degree of confidence that the facial dataset is capable of supporting correct driver identification; and, if the confidence index reaches a confidence threshold, identify the driver based at least in part on the facial dataset; and a memory coupled to the processor and configured to provide the processor with instructions.

2. System according to claim 1, characterized in that identifying the driver comprises comparing the facial dataset with a known facial dataset or datasets of the driver.

3. System according to claim 1, characterized in that identifying the driver comprises inputting the facial dataset into a face model.

4. System according to claim 1, characterized in that it comprises: one or more on-board vehicle sensors for detecting driving data, wherein the driving data is associated with a period of operation of the vehicle; and wherein the processor is configured to associate the driver with the driving data.

5. System according to claim 1, characterized in that the image capture device captures another image during another period of operation of the vehicle; wherein the processor is configured to: find another face of another driver within the other image; determine another facial dataset from the other image; and calculate a similarity score between the facial dataset and the other facial dataset.

6. System according to claim 5, characterized in that the processor is configured to: in the event that the similarity score meets a similarity criterion, indicate that the driver in the time period and the other driver in the other time period are the same driver.

7. System according to claim 4, characterized in that the driving data comprises a driving event.

8. System according to claim 4, characterized in that the driving data comprises operating state data.

9. System according to claim 8, characterized in that the operating state data comprises one or more of the following: vehicle speed data, vehicle acceleration data, vehicle deceleration data, and vehicle rotational dynamics data.

10. System according to claim 4, characterized in that the driving data comprises geographic location data.

11. System according to claim 1, characterized in that it comprises a communication interface for transmitting the facial dataset to a remote server.

12. System according to claim 11, characterized in that the transmission uses a wireless communications link.

13. System according to claim 1, characterized in that, if the confidence index reaches a confidence threshold, the facial dataset is transmitted to a remote server.

14. System according to claim 1, characterized in that the facial dataset is based at least in part on one or more of the following facial features identified in the image: eye, eyebrow, nose, and mouth.
15. System according to claim 14, characterized in that determining the facial dataset comprises determining one or more geometric relationships between one or more facial features.

16. Method for identifying a driver, characterized in that it comprises: capturing a first image during vehicle operation and of an expected location of the driver; detecting a driver's face in the first image; determining a quality score for the first image; if the quality score of the first image is determined to exceed a predefined threshold, determining a facial dataset from the first image; determining a confidence index for the determined facial dataset, wherein the confidence index indicates a degree of confidence that the facial dataset is capable of supporting correct driver identification; and, if the confidence index reaches a confidence threshold, identifying the driver based at least in part on the facial dataset.

17. System according to claim 1, characterized in that the image capture device is configured to capture a plurality of images of an expected location of the driver during vehicle operation over a time interval, wherein the processor is further configured to: determine a quality score for each of the plurality of images; extract a facial dataset from each image whose quality score exceeds a predefined threshold; determine a confidence index for each extracted facial dataset; and rank the extracted facial datasets based at least in part on the corresponding confidence indices.

18. System according to claim 17, characterized in that the driver is identified based at least in part on the facial dataset determined to have the highest confidence index.

19. System according to claim 17, characterized in that the facial dataset determined to have the highest confidence index is transmitted to a remote server.
Similar technologies:
Publication number | Publication date | Patent title
BR112014003488B1 | 2021-05-11 | System and method for identifying a driver
US9589393B2 | 2017-03-07 | Driver log generation
JP2019531560A | 2019-10-31 | Automatic driver identification system and method
García et al. | 2014 | Driver monitoring based on low-cost 3-D sensors
US8649583B2 | 2014-02-11 | Pupil detection device and pupil detection method
EP2949534A2 | 2015-12-02 | Driver assistance apparatus capable of diagnosing vehicle parts and vehicle including the same
WO2019232972A1 | 2019-12-12 | Driving management method and system, vehicle-mounted intelligent system, electronic device and medium
US20190005310A1 | 2019-01-03 | Public service system and method using autonomous smart car
JP2015007953A | 2015-01-15 | Apparatus, method, and computer readable medium for monitoring the number of passengers in automobile
CN107886761A | 2018-04-06 | Parking lot monitoring method based on unmanned aerial vehicle
CN107945321A | 2018-04-20 | Safety inspection method, application server and computer-readable recording medium based on face recognition
WO2013157466A1 | 2013-10-24 | Smoking detection device, method and program
US20180012090A1 | 2018-01-11 | Visual learning system and method for determining a driver's state
US20180204078A1 | 2018-07-19 | System for monitoring the state of vigilance of an operator
WO2018046015A1 | 2018-03-15 | Alarm method, device and terminal for vehicle
CN108319952B | 2021-02-02 | Vehicle feature extraction method and device
CN109934076A | 2019-06-25 | Generation method, device, system and terminal device for scene images of blind zones
CN108229406A | 2018-06-29 | Lane line detection method, device and terminal
CN106937532B | 2018-08-14 | System and method for detecting actual user
CN109543577A | 2019-03-29 | Fatigue driving detection and early warning method based on facial expression features
CN106447828A | 2017-02-22 | Automobile data recorder with iris recognition function
CN109383519A | 2019-02-26 | Information processing method, information processing system and program
WO2021217575A1 | 2021-11-04 | Identification method and identification device for object of interest of user
WO2019080251A1 | 2019-05-02 | Boarding implementation method, terminal device and computer readable storage medium
CN111164377B | 2021-08-27 | Computer system, position estimation method, and program
Patent family:
Publication number | Publication date
BR112014003488A2 | 2017-08-08
US20140324281A1 | 2014-10-30
US9180887B2 | 2015-11-10
US8744642B2 | 2014-06-03
US20130073114A1 | 2013-03-21
WO2013040152A1 | 2013-03-21
methods for generating an audit trail for auditable devices| US10204159B2|2015-08-21|2019-02-12|Trimble Navigation Limited|On-demand system and method for retrieving video from a commercial vehicle| US9849887B2|2015-08-27|2017-12-26|Intelligent Imaging Systems, Inc.|Contextual driver behavior monitoring| US9704398B2|2015-09-14|2017-07-11|At&T Intellectual Property I, L.P.|Method and apparatus for enhancing driver situational awareness| US9914460B2|2015-09-25|2018-03-13|Mcafee, Llc|Contextual scoring of automobile drivers| US10445603B1|2015-12-11|2019-10-15|Lytx, Inc.|System for capturing a driver image| WO2017123226A1|2016-01-14|2017-07-20|Ford Global Technologies, Llc|Driver-identification system and method| US10904474B2|2016-02-05|2021-01-26|Digital Ally, Inc.|Comprehensive video collection and storage| US9918038B2|2016-02-29|2018-03-13|Toyota Motor Engineering & Manufacturing North America, Inc.|System and method for multimedia capture| FR3048175A1|2016-02-29|2017-09-01|Peugeot Citroen Automobiles Sa|DEVICE AND METHOD FOR DETECTING A COGNITIVE DISTRACTION STATUS OF A DRIVER| US20190101916A1|2016-03-31|2019-04-04|Honda Motor Co., Ltd.|Vehicle control apparatus, vehicle control method, and vehicle control program| US10407078B2|2016-04-26|2019-09-10|Sivalogeswaran Ratnasingam|Dynamic learning driving system and method| WO2018009567A1|2016-07-05|2018-01-11|Nauto Global Limited|System and method for automatic driver identification| WO2018009552A1|2016-07-05|2018-01-11|Nauto Global Limited|System and method for image analysis| EP3497405A4|2016-08-09|2020-07-29|Nauto, Inc.|System and method for precision localization and mapping| CN106314438B|2016-08-15|2018-09-25|西北工业大学|The detection method and system of abnormal track in a kind of driver driving track| WO2018053175A1|2016-09-14|2018-03-22|Nauto Global Limited|Systems and methods for near-crash determination| US10733460B2|2016-09-14|2020-08-04|Nauto, Inc.|Systems and methods for safe route determination| US10521675B2|2016-09-19|2019-12-31|Digital Ally, Inc.|Systems and methods of legibly capturing vehicle markings| US10019053B2|2016-09-23|2018-07-10|Toyota Motor Sales, U.S.A, Inc.|Vehicle technology and telematics passenger control enabler| CN106503673A|2016-11-03|2017-03-15|北京文安智能技术股份有限公司|A kind of recognition methodss of traffic driving behavior, device and a kind of video acquisition device| WO2018085804A1|2016-11-07|2018-05-11|Nauto Global Limited|System and method for driver distraction determination| CN106274483A|2016-11-10|2017-01-04|合肥工业大学|The Vehicular automatic driving switching device differentiated based on driving behavior of diverting one's attention and method| US9971933B1|2017-01-09|2018-05-15|Ulsee Inc.|Facial image screening method and face recognition system thereof| CN108399357B|2017-02-08|2020-12-29|浙江宇视科技有限公司|Face positioning method and device| US10173643B2|2017-02-20|2019-01-08|Ford Global Technologies, Llc|Object detection for vehicles| US10911725B2|2017-03-09|2021-02-02|Digital Ally, Inc.|System for automatically triggering a recording| US10424079B2|2017-04-05|2019-09-24|Here Global B.V.|Unsupervised approach to environment mapping at night using monocular vision| US10289938B1|2017-05-16|2019-05-14|State Farm Mutual Automobile Insurance Company|Systems and methods regarding image distification and prediction models| US10417816B2|2017-06-16|2019-09-17|Nauto, Inc.|System and method for digital environment reconstruction| WO2018229550A1|2017-06-16|2018-12-20|Nauto Global Limited|System and method for adverse vehicle 
event determination| EP3638542B1|2017-06-16|2022-01-26|Nauto, Inc.|System and method for contextualized vehicle operation determination| US20190012552A1|2017-07-06|2019-01-10|Yves Lambert|Hidden driver monitoring| CN109803583A|2017-08-10|2019-05-24|北京市商汤科技开发有限公司|Driver monitoring method, apparatus and electronic equipment| WO2019042689A1|2017-08-29|2019-03-07|Siemens Aktiengesellschaft|Person recognition in areas with limited data transmission and data processing| US10867161B2|2017-09-06|2020-12-15|Pixart Imaging Inc.|Auxiliary filtering device for face recognition and starting method for electronic device| JP2019048524A|2017-09-08|2019-03-28|スタンレー電気株式会社|Device for controlling vehicle headlight| US10507798B1|2017-11-07|2019-12-17|Nick Sebastian|Automobile security system| JP6687006B2|2017-11-15|2020-04-22|オムロン株式会社|Driver determination device, driver state determination device including the same, and methods and programs thereof| US10594991B1|2018-01-09|2020-03-17|Wm Intellectual Property Holdings, Llc|System and method for managing service and non-service related activities associated with a waste collection, disposal and/or recycling vehicle| CN110334491A|2018-03-31|2019-10-15|广州卓腾科技有限公司|A kind of auth method based on vehicle pipe business self-service device| CN110356363A|2018-04-09|2019-10-22|杭州海康汽车技术有限公司|A kind of driver identity authentication method, device, system and server| CN108846332A|2018-05-30|2018-11-20|西南交通大学|A kind of railway drivers Activity recognition method based on CLSTA| CN108960065B|2018-06-01|2020-11-17|浙江零跑科技有限公司|Driving behavior detection method based on vision| US20210166041A1|2018-07-18|2021-06-03|Robert Laganière|System and method for tracking customer movements in a customer service environment| US11024137B2|2018-08-08|2021-06-01|Digital Ally, Inc.|Remote video triggering and tagging| KR20200056186A|2018-11-14|2020-05-22|현대자동차주식회사|Apparatus and method for recognizing face| KR20210100610A|2018-12-11|2021-08-17|소니그룹주식회사|Image processing apparatus, image processing method and image processing system| US20210380046A1|2018-12-11|2021-12-09|Sony Group Corporation|Image processing apparatus, image processing method, and image processing system| US10836352B2|2019-02-27|2020-11-17|Stc Corporation Co. Ltd|Vehicle starting control system by using face perception data and method thereof| US10885725B2|2019-03-18|2021-01-05|International Business Machines Corporation|Identifying a driving mode of an autonomous vehicle| CN110210592A|2019-06-19|2019-09-06|周春才|A kind of automobile data recorder, server, driver's license| CN110254393A|2019-06-21|2019-09-20|一汽轿车股份有限公司|A kind of automotive self-adaptive control method based on face recognition technology| US20220019646A1|2020-07-14|2022-01-20|Micron Technology, Inc.|Intelligent multi-factor authentication for vehicle use|
Legal status:
2017-08-15 | B15I | Other matters concerning applications: loss of priority
2017-10-10 | B12F | Other appeals [chapter 12.6 patent gazette]
2019-11-19 | B06U | Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]
2021-04-06 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
2021-05-11 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette] | Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 09/13/2012, SUBJECT TO THE LEGAL CONDITIONS.
Priority:
Application number | Filing date | Patent title
US13/235,263 | 2011-09-16 |
US13/235,263 (US8744642B2) | 2011-09-16 | Driver identification based on face data
PCT/US2012/055063 (WO2013040152A1) | 2012-09-13 | Driver identification based on face data