Display control device, display control method, and computer-readable storage medium
1. A display control device comprising:
a display control unit that displays an image so as to overlap a visual field region of a driver of a vehicle; and
a detection unit that analyzes a line of sight of the driver and detects a viewpoint of the driver on the visual field region obtained from a result of the analysis,
wherein the display control unit sets a predetermined region on the visual field region as a target of display control, and
the display control unit changes a manner of displaying the image when an overlap between the predetermined region on the visual field region and the viewpoint of the driver detected by the detection unit satisfies a condition, based on a determination result of the overlap.
2. The display control device according to claim 1, wherein the image is an image overlapping the predetermined region.
3. The display control device according to claim 2, wherein the display control unit performs recognition display of the image so that the image becomes recognizable.
4. The display control device according to claim 3, wherein the recognition display is performed when the predetermined region is specified on the visual field region.
5. The display control device according to claim 3, wherein the predetermined region is specified on the visual field region, and the recognition display is performed when the viewpoint of the driver is detected at a position different from the predetermined region.
6. The display control device according to claim 4 or 5, wherein, as the condition, the display control unit cancels the recognition display of a portion of the image corresponding to the overlap when the overlap is detected.
7. The display control device according to claim 4 or 5, wherein, as the condition, the display control unit cancels the recognition display of the image when the overlap exceeds a predetermined amount.
8. The display control device according to claim 3, wherein the display control unit changes a manner of the recognition display when the viewpoint of the driver is detected at a position different from the predetermined region after the recognition display is performed.
9. The display control device according to claim 8, wherein the display control unit changes the manner of the recognition display by changing a density of the image.
10. The display control device according to any one of claims 1 to 5, further comprising a determination unit that determines a risk outside the vehicle,
wherein the display control unit displays, as the image, an image warning of the risk so as to overlap the visual field region, in accordance with a determination result of the determination unit.
11. A display control method executed in a display control device, the display control method comprising:
displaying an image so that the image overlaps a visual field region of a driver of a vehicle; and
analyzing a line of sight of the driver and detecting a viewpoint of the driver on the visual field region obtained from a result of the analysis,
wherein, in the display control method, a predetermined region on the visual field region is set as a target of display control, and
a manner of displaying the image is changed when an overlap between the predetermined region on the visual field region and the detected viewpoint of the driver satisfies a condition, based on a determination result of the overlap.
12. A computer-readable storage medium storing a program for causing a computer to execute:
displaying an image so that the image overlaps a visual field region of a driver of a vehicle; and
analyzing a line of sight of the driver and detecting a viewpoint of the driver on the visual field region obtained from a result of the analysis,
wherein a predetermined region on the visual field region is set as a target of display control, and
a manner of displaying the image is changed when an overlap between the predetermined region on the visual field region and the detected viewpoint of the driver satisfies a condition, based on a determination result of the overlap.
Background
Patent document 1 describes changing the manner of displaying a warning (for example, reducing its brightness, changing its display position, or stopping the display) when it is estimated that the driver has recognized the content of the displayed warning. Patent document 2 describes displaying an actual line-of-sight distribution and an ideal line-of-sight distribution of the driver.
Documents of the prior art
Patent document
Patent document 1: Japanese Patent Laid-Open No. 2005-135037
Patent document 2: International Publication No. 2016/166791
Disclosure of Invention
Problems to be solved by the invention
Neither patent document mentions a display method that controls the display presented to the driver so as to direct the driver's attention to a predetermined region.
The purpose of the present invention is to provide a display control device, a display control method, and a computer-readable storage medium that effectively direct the driver's attention to a predetermined region on a visual field region.
Means for solving the problems
The display control device according to the present invention includes: a display control unit that displays an image so as to overlap a visual field region of a driver of a vehicle; and a detection unit that analyzes a line of sight of the driver and detects a viewpoint of the driver on the visual field region obtained from a result of the analysis, wherein the display control unit sets a predetermined region on the visual field region as a target of display control, and changes the manner of displaying the image when an overlap between the predetermined region on the visual field region and the viewpoint of the driver detected by the detection unit satisfies a condition, based on a determination result of the overlap.
A display control method of the present invention is a display control method executed in a display control device, the method comprising: displaying an image so that the image overlaps a visual field region of a driver of a vehicle; and analyzing the line of sight of the driver and detecting the viewpoint of the driver on the visual field region obtained from the analysis result, wherein a predetermined region on the visual field region is set as a target of display control, and the manner of displaying the image is changed when an overlap between the predetermined region on the visual field region and the detected viewpoint of the driver satisfies a condition, based on a determination result of the overlap.
A computer-readable storage medium of the present invention stores a program for causing a computer to execute: displaying an image so that the image overlaps a visual field region of a driver of a vehicle; analyzing the line of sight of the driver and detecting the viewpoint of the driver on the visual field region obtained from the analysis result; setting a predetermined region on the visual field region as a target of display control; and changing the manner of display of the image when an overlap between the predetermined region on the visual field region and the detected viewpoint of the driver satisfies a condition, based on a determination result of the overlap.
Effects of the invention
According to the present invention, attention to a predetermined region on a visual field region can be effectively promoted.
Drawings
Fig. 1 is a block diagram of a vehicle control device (travel control device).
Fig. 2 is a diagram showing functional blocks of the control unit.
Fig. 3 is a view showing a visual field region seen by the driver.
Fig. 4 is a view showing a visual field region seen by the driver.
Fig. 5 is a flowchart showing the display control process.
Fig. 6 is a flowchart showing the display control process.
Fig. 7 is a flowchart showing the display control process.
Fig. 8 is a flowchart showing the display control process.
Fig. 9 is a flowchart showing the display control process.
Fig. 10 is a flowchart showing the display control process.
Fig. 11 is a flowchart showing the display control process.
Description of the reference numerals
1: a vehicle; 2: a control unit; 20, 21, 22, 23, 24, 25, 26, 27, 28, 29: an ECU; 200: a control unit; 218: a HUD control unit; 219: a HUD.
Detailed Description
Hereinafter, embodiments will be described in detail with reference to the drawings. The following embodiments do not limit the invention according to the claims, and all combinations of features described in the embodiments are not necessarily essential to the invention. Two or more of the plurality of features described in the embodiments may be arbitrarily combined. The same or similar components are denoted by the same reference numerals, and redundant description thereof is omitted.
[ first embodiment ]
Fig. 1 is a block diagram of a vehicle control device (travel control device) according to an embodiment of the present invention, which controls a vehicle 1. Fig. 1 shows an outline of the vehicle 1 in a plan view and a side view. As an example, the vehicle 1 is a sedan-type four-wheeled passenger vehicle. In the present embodiment, a vehicle configured to be capable of automatic driving and driving assistance is described as an example of the configuration of the vehicle 1, but the configuration on which the head-up display (HUD) described later is mounted is not limited to the configuration described below.
The control device of fig. 1 includes a control unit 2. The control unit 2 includes a plurality of ECUs 20 to 29 connected so as to be able to communicate over an in-vehicle network. Each ECU includes a processor typified by a CPU, a storage device such as a semiconductor memory, an interface with external devices, and the like. The storage device stores programs executed by the processor, data used by the processor for processing, and the like. Each ECU may include a plurality of processors, storage devices, interfaces, and the like. In other words, the control device of fig. 1 can be realized as a computer that implements the present invention according to the programs.
Hereinafter, the functions and the like of the ECUs 20 to 29 will be described. The number of ECUs and the functions assigned to them can be designed as appropriate, and they can be subdivided further or integrated more than in the present embodiment.
The ECU20 executes control related to automatic driving of the vehicle 1. In the autonomous driving, at least one of steering, acceleration, and deceleration of the vehicle 1 is automatically controlled. In the control example described later, both of the steering and the acceleration/deceleration are automatically controlled.
The ECU21 controls the electric power steering device 3. The electric power steering apparatus 3 includes a mechanism for steering the front wheels in accordance with a driving operation (steering operation) of the steering wheel 31 by the driver. The electric power steering apparatus 3 includes a motor that generates a driving force for assisting a steering operation or automatically steering front wheels, a sensor that detects a steering angle, and the like. When the driving state of the vehicle 1 is the automatic driving, the ECU21 automatically controls the electric power steering device 3 in accordance with an instruction from the ECU20 to control the traveling direction of the vehicle 1.
The ECUs 22 and 23 control the detection units 41 to 43 for detecting the surrounding conditions of the vehicle and process the detection results. The detection unit 41 is a camera (hereinafter, may be referred to as a camera 41) that captures an image of the front of the vehicle 1, and in the case of the present embodiment, is attached to the vehicle interior side of the front window at the front roof portion of the vehicle 1. By analyzing the image captured by the camera 41, the outline of the target object and the lane lines (white lines, etc.) on the road can be extracted.
The detection unit 42 is a Light Detection and Ranging (LIDAR) sensor that detects a target object around the vehicle 1 and measures the distance to the target object. In the present embodiment, five detection units 42 are provided: one at each corner of the front portion of the vehicle 1, one at the center of the rear portion, and one on each side of the rear portion. The detection unit 43 is a millimeter wave radar (hereinafter sometimes referred to as the radar 43) that detects a target object around the vehicle 1 and measures the distance to the target object. In the present embodiment, five radars 43 are provided: one at the center of the front portion of the vehicle 1, one at each corner of the front portion, and one at each corner of the rear portion.
The ECU22 controls one camera 41 and each detection unit 42 and processes information of the detection results. The ECU23 controls the other camera 41 and each radar 43 and performs information processing of the detection results. By providing two sets of devices for detecting the surrounding conditions of the vehicle, the reliability of the detection result can be improved, and by providing different types of detection means such as a camera and a radar, the surrounding environment of the vehicle can be analyzed in various ways.
The ECU24 controls the gyro sensor 5, the GPS sensor 24b, and the communication device 24c, and processes their detection results or communication results. The gyro sensor 5 detects the rotational motion of the vehicle 1. The course of the vehicle 1 can be determined from the detection result of the gyro sensor 5, the wheel speed, and the like. The GPS sensor 24b detects the current position of the vehicle 1. The communication device 24c wirelessly communicates with a server that provides map information, traffic information, and weather information, and acquires these pieces of information. The ECU24 can access a database 24a of map information constructed in the storage device, and performs a route search from the current position to the destination, and the like. Databases of the traffic information, weather information, and the like may also be constructed in the database 24a.
The ECU25 includes a communication device 25a for vehicle-to-vehicle communication. The communication device 25a performs wireless communication with other vehicles in the vicinity to exchange information between the vehicles.
The ECU26 controls the power plant 6. The power plant 6 is a mechanism that outputs a driving force for rotating the driving wheels of the vehicle 1 and includes, for example, an engine and a transmission. The ECU26 controls the output of the engine in accordance with, for example, the driver's driving operation (accelerator operation) detected by an operation detection sensor 7a provided on the accelerator pedal 7A, or switches the gear position of the transmission based on information such as the vehicle speed detected by a vehicle speed sensor 7c. When the driving state of the vehicle 1 is automatic driving, the ECU26 automatically controls the power plant 6 in response to instructions from the ECU20 to control the acceleration and deceleration of the vehicle 1.
The ECU27 controls lighting devices (headlamps, tail lamps, etc.) including a direction indicator 8 (turn signal lamp). In the case of the example of fig. 1, the direction indicator 8 is provided at the front, door mirror, and rear of the vehicle 1.
The ECU28 controls the input/output device 9. The input/output device 9 outputs information to the driver and receives input of information from the driver. The sound output device 91 notifies the driver of information by sound. The display device 92 notifies the driver of information by displaying images. The display device 92 is disposed, for example, in front of the driver's seat and constitutes an instrument panel or the like. Although sound and display are given as examples here, information may also be notified by vibration or light. Further, information may be notified by a combination of two or more of sound, display, vibration, and light. Furthermore, the combination or the manner of notification may be varied according to the level (for example, the degree of urgency) of the information to be notified. The display device 92 also includes a navigation device.
The input device 93 is a switch group that is disposed at a position where the driver can operate and gives an instruction to the vehicle 1, but may include a voice input device.
The ECU29 controls the brake device 10 and a parking brake (not shown). The brake device 10 is, for example, a disc brake device, is provided on each wheel of the vehicle 1, and decelerates or stops the vehicle 1 by applying resistance to the rotation of the wheels. The ECU29 controls the operation of the brake device 10 in accordance with, for example, the driver's driving operation (brake operation) detected by an operation detection sensor 7b provided on the brake pedal 7B. When the driving state of the vehicle 1 is automatic driving, the ECU29 automatically controls the brake device 10 in response to instructions from the ECU20 to decelerate and stop the vehicle 1. The brake device 10 and the parking brake can also be operated to maintain the stopped state of the vehicle 1. In addition, when the transmission of the power plant 6 includes a parking lock mechanism, it can be operated to maintain the stopped state of the vehicle 1.
The control related to the automatic driving of the vehicle 1 executed by the ECU20 will be described. When the driver sets a destination and instructs automatic driving, the ECU20 automatically controls the travel of the vehicle 1 toward the destination along the guidance route searched by the ECU24. During automatic control, the ECU20 acquires information on the surrounding conditions of the vehicle 1 (external information) from the ECU22 and the ECU23, and instructs the ECU21, the ECU26, and the ECU29 to control the steering, acceleration, and deceleration of the vehicle 1 based on the acquired information.
Fig. 2 is a diagram showing functional blocks of the control unit 2. The control unit 200 corresponds to the control unit 2 of fig. 1, and includes an external recognition unit 201, a self-position recognition unit 202, an in-vehicle recognition unit 203, an action planning unit 204, a drive control unit 205, and an equipment control unit 206. The functional blocks are realized by one ECU or a plurality of ECUs shown in fig. 1.
The external recognition unit 201 recognizes external information of the vehicle 1 based on signals from the external recognition camera 207 and the external recognition sensor 208. Here, the camera 207 for external recognition is, for example, the camera 41 of fig. 1, and the sensor 208 for external recognition is, for example, the detection means 42 and 43 of fig. 1. The external recognition unit 201 recognizes scenes such as intersections, railroad crossings, and tunnels, free spaces such as shoulders, and other vehicle behaviors (speed and traveling direction) based on signals from the external recognition camera 207 and the external recognition sensor 208. The self-position identifying unit 202 identifies the current position of the vehicle 1 based on the signal from the GPS sensor 211. Here, the GPS sensor 211 corresponds to, for example, the GPS sensor 24b of fig. 1.
The vehicle interior recognition unit 203 identifies the occupants of the vehicle 1 and recognizes their states based on signals from the vehicle interior recognition camera 209 and the vehicle interior recognition sensor 210. The vehicle interior recognition camera 209 is, for example, a near-infrared camera provided on the display device 92 in the vehicle interior of the vehicle 1, and detects, for example, the direction of the occupant's line of sight. The vehicle interior recognition sensor 210 is, for example, a sensor that detects a biosignal of an occupant. Based on these signals, the vehicle interior recognition unit 203 recognizes that an occupant is dozing, is engaged in activity other than driving, and the like.
The action planning unit 204 plans the action of the vehicle 1 such as an optimal route and a risk avoidance route based on the recognition results of the external recognition unit 201 and the self-position recognition unit 202. The action planning unit 204 performs an action plan based on, for example, an entry determination at a start point or an end point at an intersection, a railroad crossing, or the like, or based on behavior prediction of another vehicle. The drive control unit 205 controls the driving force output device 212, the steering device 213, and the brake device 214 based on the action plan of the action planning unit 204. Here, the driving force output device 212 corresponds to, for example, the power plant 6 of fig. 1, the steering device 213 corresponds to the electric power steering device 3 of fig. 1, and the brake device 214 corresponds to the brake device 10.
The device control unit 206 controls devices connected to the control unit 200. For example, the device control unit 206 controls the speaker 215 to output a predetermined audio message such as a warning message or a navigation message. For example, the device control unit 206 controls the display device 216 to display a predetermined interface screen. The display device 216 corresponds to, for example, the display device 92. For example, the device control unit 206 controls the navigation device 217 to acquire setting information in the navigation device 217.
The control unit 200 may include functional blocks other than those shown in fig. 2 as appropriate, and may include an optimum route calculation unit that calculates an optimum route to a destination based on map information acquired via the communication device 24c, for example. The control unit 200 may acquire information from a camera or a sensor other than those shown in fig. 2, and may acquire information of another vehicle via the communication device 25a, for example. The control unit 200 receives not only the detection signal from the GPS sensor 211 but also detection signals from various sensors provided in the vehicle 1. For example, the control unit 200 receives detection signals of an opening/closing sensor of a door and a mechanism sensor of a door lock provided in a door portion of the vehicle 1 via an ECU configured in the door portion. Thus, the control unit 200 can detect unlocking of the door and opening and closing operations of the door.
The head-up display (HUD) control unit 218 controls a head-up display (HUD) 219 mounted in the vehicle interior near the front window of the vehicle 1. The HUD control unit 218 and the control unit 200 can communicate with each other, and the HUD control unit 218 acquires, for example, image data of the external recognition camera 207 via the control unit 200. The HUD219 projects an image toward the front window under the control of the HUD control unit 218. For example, the HUD control unit 218 receives image data of the external recognition camera 207 from the control unit 200 and, based on it, generates image data to be projected by the HUD219. The image data is, for example, image data superimposed (overlapped) on the scene that the driver sees through the front window. By the projection of the HUD219 onto the front window, the driver can see, for example, an icon image for navigation (destination information or the like) as if it were superimposed on the landscape of the road ahead. The HUD control unit 218 can also communicate with an external device via a communication interface (I/F) 220. The external device is, for example, a mobile terminal 221 such as a smartphone held by the driver. The communication I/F 220 may be configured to be connectable to a plurality of networks, for example, to the internet.
The operation of the present embodiment will be described below. When driving the vehicle, the driver is obliged to watch the road ahead. Furthermore, depending on the scene, such as an intersection or a curve, there are regions requiring the driver's attention within the visual field region that can be visually confirmed through the front window.
Fig. 3 and 4 are diagrams for explaining the operation of the present embodiment, and show the visual field region that the driver can visually confirm through the front window. In the visual field area 300 of fig. 3, areas 301 and 302 represent areas requiring attention. That is, in the intersection scene shown in fig. 3, a person may rush out from the left side and another vehicle may enter the intersection, so the area 301 is an area requiring attention. In addition, the area 302 corresponding to the traffic light is an area requiring attention for smooth passage of the vehicle. In the present embodiment, as shown in fig. 3, the areas 301 and 302 are displayed by the HUD219 so as to be recognizable, overlapping the scenery seen through the front window. For example, the areas 301 and 302 are displayed as light-toned, semi-transparent areas.
Fig. 4 shows a state in which, from the display state of fig. 3, the driver rests the viewpoint within the area 301. The viewpoint 303 is an area corresponding to the viewpoint of the driver. That is, fig. 4 shows a situation in which the driver rests the viewpoint near the curb within the area 301. As the driver moves the viewpoint in the arrow direction within the area 301, the viewpoint 303 also moves in the arrow direction on the visual field area 300. The viewpoint area 304 indicates the area swept by this movement of the viewpoint. In the present embodiment, within the area 301, the recognition display is canceled only for the portion corresponding to the viewpoint area 304, which corresponds to the amount of movement of the viewpoint 303. For example, the semi-transparent display of the area 301 is canceled only in the portion corresponding to the viewpoint area 304. Then, when the area of the viewpoint area 304 reaches a predetermined ratio of the area of the area 301, the recognition display of the entire area 301 is canceled.
As described above, according to the present embodiment, an area requiring attention is displayed so as to be recognizable as shown in fig. 3, which can encourage the driver to rest the viewpoint within the area. In addition, when the driver rests the viewpoint within the area, the recognition display of the corresponding part of the area 301 is canceled as shown in fig. 4, so the driver can be made aware that the viewpoint has rested within an area requiring attention. Then, when the area of the viewpoint area 304 reaches a predetermined ratio of the area of the area 301, the recognition display of the area 301 is canceled entirely, which can motivate the driver to confirm the whole area requiring attention without omission.
Fig. 5 is a flowchart showing the process of display control in the present embodiment. The processing of fig. 5 is realized by the HUD control unit 218 reading out a program from a storage area such as a ROM and executing it. The process of fig. 5 is started when the vehicle 1 starts traveling.
In S101, the HUD control unit 218 acquires the current position of the vehicle 1. For example, the HUD control unit 218 may acquire the current position of the vehicle 1 from the control unit 200. Then, in S102, the HUD control unit 218 determines whether or not to display the region of interest based on the current position of the vehicle 1 acquired in S101. Here, the region of interest corresponds to the regions 301 and 302 in fig. 3 and 4. In S102, if there is a point requiring attention within a predetermined distance from the current position of the vehicle 1, the HUD control unit 218 determines that the region of interest is to be displayed. For example, the HUD control unit 218 determines that the region of interest is to be displayed when, based on the map information acquired from the control unit 200, a specific scene such as an intersection or a curve with a curvature equal to or greater than a predetermined value exists within a predetermined distance from the current position of the vehicle 1. If it is determined in S102 that the region of interest is not to be displayed, the processing from S101 is repeated. If it is determined in S102 that the region of interest is to be displayed, the process proceeds to S103.
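For illustration only, the distance check underlying S102 can be sketched in a few lines of Python. The AttentionPoint structure, the helper names, and the 100 m radius are assumptions made for the sketch, not details taken from the embodiment.

```python
import math
from dataclasses import dataclass

@dataclass
class AttentionPoint:
    """A learned point requiring attention (hypothetical structure)."""
    lat: float
    lon: float
    scene: str  # e.g. "intersection", "sharp_curve"

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters (equirectangular; fine below ~1 km)."""
    k = 111_320.0  # meters per degree of latitude
    dx = (lon2 - lon1) * k * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * k
    return math.hypot(dx, dy)

def should_display_attention_area(pos, points, radius_m=100.0):
    """S102 sketch: display the region of interest if any attention point
    lies within radius_m of the current position."""
    return any(distance_m(pos[0], pos[1], p.lat, p.lon) <= radius_m for p in points)

points = [AttentionPoint(35.6581, 139.7414, "intersection")]
print(should_display_attention_area((35.6584, 139.7410), points))  # True
```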
Further, points requiring attention can be learned in advance for each scene. As a configuration for this, for example, the HUD control unit 218 includes a learning unit including a GPU, a data analysis unit, and a data storage unit. The data storage unit stores position data of the driver's viewpoint on the visual field area for each scene corresponding to the traveling road or the like, and the data analysis unit analyzes the distribution of the driver's viewpoints on the visual field area. For example, driving by a skilled driver may be performed in advance, the distribution of that driver's viewpoints analyzed, and the result stored for each scene. In this case, the distribution tendency of the skilled driver's viewpoints is learned as points requiring attention.
On the other hand, points requiring attention can also be learned by another method that uses the distribution tendency of the driver's viewpoints on the visual field area, rather than only by a skilled driver's driving. For example, based on the distribution tendency of the driver's viewpoints on the visual field area, a position that should be confirmed but tends to be overlooked by the driver may be set as a point requiring attention, and such positions may be classified and learned for each scene. For example, when the analysis shows a tendency for the viewpoints to be concentrated excessively near the front of the vehicle when turning (for example, a habit of gazing at the area just ahead of the vehicle), a region on the visual field area corresponding to a point far ahead of the vehicle can be learned as a point requiring attention. In this case, the distribution tendency of skilled drivers' viewpoints can be used as training data. When it is determined that the vehicle is traveling in a similar scene, each learning result described above is used as the region of interest.
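The learning described above is not specified in detail; as a minimal sketch, a coarse two-dimensional histogram of logged viewpoint positions can expose the cells of the visual field that a skilled driver fixates on. The grid size, the 5% share threshold, and the logging format are all assumptions.

```python
import numpy as np

def learn_attention_cells(viewpoints, width, height, grid=(12, 8), min_share=0.05):
    """Build a coarse 2D histogram of a skilled driver's viewpoints for one
    scene and keep the cells that attract at least min_share of all fixations."""
    xs = np.array([p[0] for p in viewpoints])
    ys = np.array([p[1] for p in viewpoints])
    hist, xedges, yedges = np.histogram2d(
        xs, ys, bins=grid, range=[[0, width], [0, height]])
    share = hist / hist.sum()
    cells = np.argwhere(share >= min_share)  # (x-bin, y-bin) indices of hot cells
    return [(xedges[c], yedges[r], xedges[c + 1], yedges[r + 1]) for c, r in cells]

# Example: fixations clustered near the left curb of a 1280x720 view.
rng = np.random.default_rng(0)
pts = rng.normal(loc=(250, 450), scale=30, size=(500, 2))
print(learn_attention_cells(pts, 1280, 720))
```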
In S103, the HUD control unit 218 displays the region of interest. Fig. 6 is a flowchart showing the display processing of the region of interest in S103. In S201, the HUD control unit 218 acquires the object information. Here, the object information is information serving as a reference for specifying the coordinates of a region requiring attention, and is information on an object such as a traffic sign or a traffic signal. The object information is not limited to information on a single object, and may be information on a range including a plurality of objects. For example, as in the area 301 of fig. 3, it may be information on a range spanning a crosswalk and a curb. The HUD control unit 218 can acquire the object information, for example, via the control unit 200 based on the image data of the external recognition camera 207 corresponding to the visual field area 300.
In S202, the HUD control unit 218 specifies the coordinates of the region of interest in the visual field area 300 that is the target of HUD display, based on the object information acquired in S201. For example, the HUD control unit 218 acquires image data of the visual field area 300 based on the image data of the external recognition camera 207 corresponding to the visual field area 300, and determines coordinates of a region of interest in the visual field area 300 to be displayed on the HUD based on the image data.
In S203, the HUD control unit 218 generates display data for displaying the region of interest on the HUD based on the coordinates of the region of interest determined in S202, and controls the HUD219 so as to display it on the front window based on the display data. Here, the display data corresponds to the areas 301 and 302 in fig. 3. The HUD control unit 218 generates the display data so that the region of interest is formed, for example, as a light-toned semi-transparent region and can be distinguished from other regions. Further, when the region of interest is displayed on the HUD, the HUD control unit 218 starts measuring the elapsed time with a timer function. This measurement result is used for the display control in S107 described later. After S203, the process of fig. 6 ends, and the process proceeds to S104 of fig. 5.
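As a rough illustration of the display data generated in S203, the following sketch builds an RGBA overlay in which each region of interest is a light-toned, semi-transparent rectangle and everything else is fully transparent. The rectangle format and color values are assumptions; the actual HUD projection pipeline is not described at this level.

```python
import numpy as np

def make_overlay(width, height, regions, rgba=(255, 230, 120, 64)):
    """S203 sketch: an RGBA frame for the HUD in which each attention region
    is a light-toned, semi-transparent rectangle; elsewhere alpha is 0 so
    the real scene shows through."""
    frame = np.zeros((height, width, 4), dtype=np.uint8)
    for (x0, y0, x1, y1) in regions:
        frame[y0:y1, x0:x1] = rgba
    return frame

overlay = make_overlay(1280, 720, [(180, 380, 420, 560), (600, 120, 700, 240)])
print(overlay[400, 200], overlay[0, 0])  # [255 230 120  64] [0 0 0 0]
```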
In S104 of fig. 5, the HUD control unit 218 acquires the viewpoint of the driver. Fig. 7 is a flowchart showing the processing of acquisition of the viewpoint in S104. In S301, the HUD control unit 218 analyzes the line of sight of the driver. For example, the line of sight of the driver may be analyzed by the in-vehicle recognition unit 203 of the control unit 200 via the in-vehicle recognition camera 209 and the in-vehicle recognition sensor 210, and the HUD control unit 218 may acquire the analysis result.
In S302, the HUD control unit 218 determines the coordinates of the viewpoint on the visual field area 300 based on the analysis result of S301. For example, the HUD control unit 218 specifies the coordinates of the viewpoint on the visual field area 300 based on the image data of the external recognition camera 207 corresponding to the visual field area 300. After S302, the process of fig. 7 ends, and the flow proceeds to S105 of fig. 5.
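One plausible realization of S302, assuming the gaze analysis yields yaw and pitch angles relative to straight ahead and that the driver's view shares a frustum with the external recognition camera, is a pinhole-style projection onto the image plane. The field-of-view values are illustrative assumptions.

```python
import math

def gaze_to_view_coords(yaw_deg, pitch_deg, width=1280, height=720,
                        hfov_deg=60.0, vfov_deg=36.0):
    """S302 sketch: map a gaze direction (yaw, pitch) onto pixel coordinates
    in the visual field area via a pinhole model."""
    fx = (width / 2) / math.tan(math.radians(hfov_deg / 2))
    fy = (height / 2) / math.tan(math.radians(vfov_deg / 2))
    x = width / 2 + fx * math.tan(math.radians(yaw_deg))
    y = height / 2 - fy * math.tan(math.radians(pitch_deg))
    return int(round(x)), int(round(y))

print(gaze_to_view_coords(0.0, 0.0))    # center of the view: (640, 360)
print(gaze_to_view_coords(-10.0, 2.0))  # left of and above center
```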
In S105 of fig. 5, the HUD control unit 218 determines whether or not the region of interest displayed in S103 overlaps with the viewpoint acquired in S104. The determination of S105 may be made based on the coordinates of the region of interest determined in S202 and the viewpoint coordinates determined in S302, for example. If it is determined in S105 that there is overlap, the display control in S106 is performed, and if it is determined in S105 that there is no overlap, the display control in S107 is performed.
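Under the simplifying assumptions that the region of interest is an axis-aligned rectangle and that the viewpoint 303 is a circle of fixed radius, the overlap determination of S105 reduces to a circle-rectangle intersection test; both assumptions are illustrative.

```python
def viewpoint_overlaps_region(vx, vy, region, radius=25):
    """S105 sketch: the viewpoint circle (viewpoint 303) overlaps an
    axis-aligned attention region if the circle and rectangle intersect."""
    x0, y0, x1, y1 = region
    # Closest point of the rectangle to the circle center.
    cx = min(max(vx, x0), x1)
    cy = min(max(vy, y0), y1)
    return (vx - cx) ** 2 + (vy - cy) ** 2 <= radius ** 2

region_301 = (180, 380, 420, 560)
print(viewpoint_overlaps_region(200, 450, region_301))  # True: inside
print(viewpoint_overlaps_region(900, 100, region_301))  # False: far away
```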
Fig. 8 is a flowchart showing the processing of the display control in S106. In S401, the HUD control unit 218 identifies the overlap region determined to have the overlap, and cancels, for that region, the display control performed in S103. Here, the region determined to have the overlap corresponds to the viewpoint region 304 in fig. 4. For example, the HUD control unit 218 identifies a predetermined region including the viewpoint coordinates determined in S302, for example, a circular region with a predetermined diameter, and cancels the recognition display (for example, the semi-transparent display) within that region. As a result, the driver can notice that the semi-transparent display disappears at the position where the viewpoint has rested. In S401 after the viewpoint has been tracked, the swept region traced by the movement of the predetermined region including the viewpoint coordinates (for example, the circular region corresponding to the viewpoint 303) is determined as the overlap region (corresponding to the viewpoint region 304).
In S402, the HUD control unit 218 acquires the tracking amount of the viewpoint. The tracking amount of the viewpoint corresponds to the amount of movement of the viewpoint 303 in fig. 4, that is, to the viewpoint region 304 that is the overlap region. In addition, when the driver rests the viewpoint in the attention region for the first time, the tracking amount acquired in S402 is an initial value. The initial value may be, for example, zero.
In S403, the HUD control unit 218 determines whether or not the tracking amount acquired in S402 has reached a predetermined amount. Here, the predetermined amount may be, for example, an area equal to a predetermined ratio of the area of the region of interest. When determining that the tracking amount has reached the predetermined amount, the HUD control unit 218 cancels, in S404, the display control performed in S103 for the entire region of interest. As a result, the driver can be made aware that, once the viewpoint has rested on the region of interest by a certain amount, its entire semi-transparent display disappears. After S404, the process of fig. 8 ends, and the processing from S101 of fig. 5 is repeated.
If it is determined in S403 that the tracking amount has not reached the predetermined amount, the processing from S104 is repeated. For example, when the driver rests the viewpoint in the attention region and moves it within the region but the overlap region has not yet reached the predetermined amount, the processing from S104 is repeated to continue tracking the viewpoint. In this case, as described above, the swept region traced by the movement of the predetermined region including the viewpoint coordinates (for example, the circular region corresponding to the viewpoint 303) is determined as the overlap region.
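Putting S401 to S404 together, a minimal sketch can maintain a boolean coverage mask over the region of interest: cells swept by the viewpoint circle are marked (and their recognition display canceled), and the whole display is canceled once coverage reaches a predetermined ratio. The cell size, viewpoint radius, and 80% ratio are assumptions.

```python
import numpy as np

class AttentionRegionTracker:
    """S401-S404 sketch: a boolean mask over the attention region records
    which cells the viewpoint circle has swept (viewpoint region 304);
    the whole display is canceled once coverage reaches cancel_ratio."""

    def __init__(self, region, cell=10, cancel_ratio=0.8):
        x0, y0, x1, y1 = region
        self.origin = (x0, y0)
        self.cell = cell
        self.mask = np.zeros(((y1 - y0) // cell, (x1 - x0) // cell), dtype=bool)
        self.cancel_ratio = cancel_ratio

    def track(self, vx, vy, radius=25):
        """S401/S402: mark the cells under the viewpoint circle as covered."""
        h, w = self.mask.shape
        ys, xs = np.mgrid[0:h, 0:w]
        cx = self.origin[0] + (xs + 0.5) * self.cell  # cell-center coordinates
        cy = self.origin[1] + (ys + 0.5) * self.cell
        self.mask |= (cx - vx) ** 2 + (cy - vy) ** 2 <= radius ** 2

    def fully_confirmed(self):
        """S403: has the tracking amount reached the predetermined ratio?"""
        return self.mask.mean() >= self.cancel_ratio

tracker = AttentionRegionTracker((180, 380, 420, 560))
for x in range(190, 420, 5):  # the viewpoint sweeps across the region
    tracker.track(x, 470, radius=90)
print(tracker.fully_confirmed())  # True -> S404: cancel the whole display
```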
Fig. 9 is a flowchart showing the processing of the display control in S107. The flow proceeds to S107 when, for example, the areas 301 and 302 of fig. 3 are displayed but the driver does not rest the viewpoint in them.
In S501, the HUD control unit 218 determines whether or not a predetermined time has elapsed based on the measurement result of the timer function. When it is determined that the predetermined time has elapsed, the HUD control unit 218 increases the display density of the region of interest displayed in S103 in S502. With such a configuration, the driver can be prompted to pay attention to the region of interest. Note that the display control in S502 is not limited to density control, and other display control may be performed. For example, the region of interest displayed in S103 may be made to blink. After S502, the process of fig. 9 ends, and the processing from S105 of fig. 5 is repeated.
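A compact sketch of the timer logic of S501 and S502 follows; the 3-second timeout, the density step, and the idea of restarting the timer after each escalation are assumptions made for the sketch.

```python
import time

def escalate_display(density, shown_at, now=None, timeout_s=3.0,
                     step=0.2, max_density=1.0):
    """S501/S502 sketch: if the region of interest has been shown for longer
    than timeout_s without the viewpoint entering it, raise its display
    density one step and restart the timer."""
    now = time.monotonic() if now is None else now
    if now - shown_at >= timeout_s:
        return min(density + step, max_density), now
    return density, shown_at

density, shown_at = 0.25, time.monotonic()
density, shown_at = escalate_display(density, shown_at, now=shown_at + 3.5)
print(density)  # 0.45: one step denser after the timeout elapsed
```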
On the other hand, if it is determined in S501 that the predetermined time has not elapsed, in S503, the HUD control unit 218 determines whether or not the region of interest displayed in S103 overlaps with the viewpoint acquired in S104. For example, the determination in S503 may be performed based on the coordinates of the region of interest determined in S202 and the viewpoint coordinates determined in S302. If it is determined in S503 that there is overlap, the display control in S106 is performed, and if it is determined in S503 that there is no overlap, the processing from S501 is repeated.
As described above, according to the present embodiment, when the vehicle travels in a scene containing a point requiring attention, such as an intersection, the corresponding area is displayed on the front window so as to be recognizable. In addition, when the driver does not rest the viewpoint on the area for a predetermined time, the display mode of the area is further changed. With such a configuration, the driver's attention can be effectively drawn. When the driver rests the viewpoint in the area, the recognition display is canceled according to the amount of movement of the viewpoint. With such a configuration, the driver can be sufficiently motivated to pay attention.
[ second embodiment ]
Hereinafter, the second embodiment will be described with respect to points different from the first embodiment. In the first embodiment, as described with reference to fig. 5, when it is determined in S102 that the region of interest is not to be displayed after the current position of the vehicle 1 is acquired in S101, the processing from S101 is repeated. In the present embodiment, a risk determination for the environment outside the vehicle 1 is performed after S101. Here, the risk determination is, for example, a determination of the possibility that an oncoming vehicle approaches the vehicle 1. When the vehicle 1 needs to avoid another vehicle or an approaching moving object, a warning display is performed without performing the processing from S103 onward.
Fig. 10 is a flowchart showing a process of display control according to the present embodiment. For example, the processing of fig. 10 is realized by the HUD control unit 218 reading out and executing a program from a storage area such as a ROM. The process of fig. 10 is started when the vehicle 1 starts traveling.
S101 is the same as that described in the first embodiment, and therefore its description is omitted. In the present embodiment, after acquiring the current position of the vehicle 1 in S101, the HUD control unit 218 performs the risk determination in S601. For example, the risk determination may be performed by determining, based on the recognition result of the external recognition unit 201, the possibility that the travel route of another vehicle or a moving object overlaps the travel route of the vehicle 1. The risk determination may also be performed by determining, for example, the possibility that another vehicle creates a blind spot region for the vehicle 1. Further, the risk determination may be performed based on road surface conditions such as freezing, or weather conditions such as rainfall and heavy fog. Various indexes may be used for the result of the risk determination; for example, a margin to collision (MTC) may be used.
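Since the embodiment leaves the exact MTC definition open, the following sketch uses one common form, the current gap divided by the closing speed, and suppresses the attention-area display when the margin falls to or below a threshold (as in S102/S602 below); the 2-second threshold is an assumption.

```python
def margin_to_collision(gap_m, closing_speed_mps):
    """S601 sketch: a time margin until collision, the gap divided by the
    closing speed; infinite when the gap is opening. Real risk determination
    would also consider routes, blind spots, and road/weather conditions."""
    return gap_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

def suppress_attention_area(gap_m, closing_speed_mps, threshold_s=2.0):
    """Do not display the attention area (warn instead) when the margin
    is at or below the threshold."""
    return margin_to_collision(gap_m, closing_speed_mps) <= threshold_s

print(suppress_attention_area(20.0, 12.0))  # True: ~1.7 s to collision
print(suppress_attention_area(200.0, 1.0))  # False: 200 s margin
```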
In the present embodiment, in S102, the HUD control unit 218 first determines whether or not to display the region of interest based on the result of the risk determination in S601. For example, the HUD control unit 218 determines not to display the region of interest when the approach of another vehicle is recognized and the margin to collision is equal to or less than a threshold value. On the other hand, when the risk determination result does not preclude displaying the region of interest, whether or not to display it is then determined based on the current position of the vehicle 1 acquired in S101.
If it is determined in S102 that the region of interest is not to be displayed, the HUD control unit 218 determines in S602 whether or not to display a warning. In the determination of S602, for example, when it was determined in S102 that the region of interest is not to be displayed based on the risk determination result, it is determined that the warning display is to be performed. When it was determined in S102 that the region of interest is not to be displayed based on the current position of the vehicle 1, it is determined that the warning display is not to be performed. If it is determined in S602 that the warning display is not to be performed, the processing from S101 is repeated. If it is determined in S602 that the warning display is to be performed, the process proceeds to S603.
In S603, the HUD control unit 218 generates display data for displaying a warning, and controls the HUD219 to display it on the front window. Here, the display data may be, for example, data indicating the direction of the approaching vehicle or moving object. Alternatively, a display enclosing the approaching vehicle or moving object on the visual field area 300 may be employed. After the warning display is performed, the warning display may be canceled if it is detected that the driver rests the viewpoint near it. After S603, the processing from S101 is repeated.
As described above, according to the present embodiment, when a risk such as a collision with another vehicle or a moving object is determined while the vehicle 1 is traveling, a display notifying the driver of the risk is performed instead of the display of the attention area. With such a configuration, the driver can be made aware of the occurrence of the risk more effectively.
[ third embodiment ]
The third embodiment is described below with respect to points different from the first and second embodiments. In the first and second embodiments, the attention area is displayed at the timing when it is determined in S102 that the attention area should be displayed. In the present embodiment, the attention area is first set internally, and is displayed only at the timing when it is determined that the driver's viewpoint is not resting on it. With such a configuration, for a driver who is highly likely to rest the viewpoint in the attention area, such as a skilled driver, the frequency of HUD display on the front window can be reduced, allowing the driver to concentrate on driving.
Fig. 11 is a flowchart showing a process of display control according to the present embodiment. For example, the processing of fig. 11 is realized by the HUD control unit 218 reading out and executing a program from a storage area such as a ROM. The process of fig. 11 is started when the vehicle 1 starts traveling.
S101 is the same as that described in the first embodiment, and therefore its description is omitted. In the present embodiment, after the current position of the vehicle 1 is acquired in S101, the HUD control unit 218 determines in S701 whether or not to set the region of interest based on that position. The criterion for determining whether or not to set the region of interest is the same as the criterion of S102 in the first embodiment. If it is determined in S701 that the region of interest is not to be set, the processing from S101 is repeated. If it is determined in S701 that the region of interest is to be set, the process proceeds to S702.
In S702, the HUD control unit 218 sets a region of interest. The set attention region corresponds to the regions 301 and 302 shown in fig. 3. However, unlike the first embodiment, the region of interest is not displayed at this timing. That is, in S702, the processing of S201 and S202 in fig. 6 is performed, and the processing of S203 is not performed.
After S702, the processing of S104 and S105 is performed. S104 and S105 are the same as those in the first embodiment, and therefore their description is omitted. In S105, it is determined whether or not the region of interest set in S702 overlaps the viewpoint acquired in S104. If it is determined in S105 that there is an overlap, the processing from S101 is repeated. That is, in the present embodiment, when the driver's viewpoint is resting in an area requiring attention, no HUD display is performed on the front window. On the other hand, if it is determined in S105 that there is no overlap, the process proceeds to S703.
In S703, the HUD controller 218 displays the region of interest set in S702. In S703, the same processing as in S203 of the first embodiment is performed. With such a configuration, the driver can be prompted to pay attention to the attention area, as in the first embodiment. After S703, the process from S105 is repeated.
When it is determined in S105 that there is an overlap after the processing of S703 has been performed, the HUD control unit 218 performs, in S704, display control of the region of interest displayed in S703. For example, in S704, the same processing as S106 of the first embodiment may be performed, and the processing from S101 repeated. Alternatively, instead of canceling the entire display of the region of interest only when the overlap region reaches the predetermined amount, the entire display of the region of interest may be canceled as soon as it is determined in S105 that there is an overlap, and the processing from S101 repeated. With this configuration, the frequency with which the attention area is displayed to a skilled driver can be reduced.
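The branching of fig. 11 can be summarized in a few lines; this sketch uses the simpler all-at-once cancellation variant described above, and the function name is hypothetical.

```python
def decide_display(region_is_set, viewpoint_overlaps):
    """Sketch of the flow of fig. 11: the attention region is only set
    internally (S702); it is displayed (S703) when the viewpoint does NOT
    overlap it, and the display is withheld or canceled as soon as an
    overlap is detected (S105/S704, all-at-once variant)."""
    if not region_is_set:
        return False  # S701: no attention region in this scene
    if viewpoint_overlaps:
        return False  # S105: the driver already looked, so show nothing
    return True       # S703: prompt the driver by displaying the region

# A skilled driver who looks at the region immediately never sees the display.
print(decide_display(True, True))   # False
print(decide_display(True, False))  # True
```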
As described above, according to the present embodiment, the attention area is displayed at the timing when it is determined that the driver is not observing the internally set attention area. With this configuration, for a driver who is highly likely to observe the attention area, such as a skilled driver, the frequency of HUD display on the front window can be reduced, allowing the driver to concentrate on driving.
The operations of the first, second, and third embodiments may also be switched between. Such switching is performed, for example, on a user interface screen displayed on the display device 216, and the control unit 200 transmits the selected operation mode to the HUD control unit 218.
< summary of the embodiments >
The display control device of the above embodiment includes: a display control unit (218) that displays an image so as to overlap the image with a visual field region of a driver of a vehicle; and a detection unit (209, 210, 203, S104) that analyzes the line of sight of the driver and detects the viewpoint of the driver on the visual field region obtained from the result of the analysis, wherein the display control unit sets a predetermined region on the visual field region as a target of display control, and changes the manner of display of the image (S106) when an overlap between the predetermined region on the visual field region and the viewpoint of the driver detected by the detection unit satisfies a condition, based on a determination result of the overlap.
With such a configuration, the driver can be effectively encouraged to watch a predetermined region (attention region) on the visual field region.
The image is an image (301, 302) overlapping the predetermined region. In addition, the display control unit performs recognition display of the image so that it becomes recognizable.
With such a configuration, for example, the predetermined region is displayed in light color, so that the driver can easily recognize the predetermined region.
The recognition display is performed when the predetermined region is specified on the visual field region (S103). Alternatively, the predetermined region is specified on the visual field region, and the recognition display is performed when the viewpoint of the driver is detected at a position different from the predetermined region (S703).
According to the former configuration, the recognition display is performed as soon as the predetermined region is specified, so the predetermined region can be displayed promptly. According to the latter, for example, the frequency of display of the predetermined region can be reduced for a skilled driver.
Further, as the condition, the display control unit cancels the recognition display of the portion of the image corresponding to the overlap when the overlap is detected (S401). Further, as the condition, the display control unit cancels the recognition display of the image when the overlap exceeds a predetermined amount (S404, S704).
With this configuration, the driver can be effectively made aware that the viewpoint has rested in the predetermined region.
Further, the display control unit changes the manner of the recognition display when the viewpoint of the driver is detected at a position different from the predetermined region after the recognition display. The display control unit changes the manner of the recognition display by changing the density of the image (S502).
According to such a configuration, when the driver does not rest the viewpoint in the predetermined region, the driver can be effectively encouraged to watch the predetermined region on the visual field region.
The display control device further includes a determination unit (S601) that determines a risk in the environment outside the vehicle, and the display control unit displays, as the image, an image warning of the risk so as to overlap the visual field region, based on a determination result of the determination unit (S603). With such a configuration, for example, when a moving object or another vehicle approaches the host vehicle, a warning display indicating the approach can be performed. In addition, the warning display can be canceled when the driver looks at the position of the approach.
The present invention is not limited to the above-described embodiments, and various modifications and changes can be made within the scope of the present invention.