Method, electronic device, and computer storage medium for vehicle collision avoidance
1. A method for preventing a vehicle collision, comprising:
at a server, acquiring an environment image about an environment outside a vehicle;
determining whether there is an object that is likely to collide with the vehicle based on the environment image;
in response to determining that the object that is likely to collide with the vehicle exists, determining whether a vehicle moving space exists in an environment outside the vehicle based on the environment image; and
in response to determining that the vehicle moving space exists in the environment outside the vehicle, generating a vehicle moving instruction for sending to the vehicle, the vehicle moving instruction indicating at least an angle and a distance for moving the vehicle.
2. The method of claim 1, wherein said acquiring an environment image about an environment external to the vehicle comprises at least one of:
receiving the environment image from the vehicle at a predetermined time interval; and
receiving the environment image from the vehicle based on an instruction of the server.
3. The method of claim 1, wherein the determining whether there is an object that is likely to collide with the vehicle comprises:
determining whether the object approaching the vehicle is present in the environment image;
in response to determining that the object approaching the vehicle is present in the environment image, generating a safe distance value indicating minimum distance information between the object and the vehicle that should be maintained for collision avoidance;
determining that there is the object that is likely to collide with the vehicle in response to determining that a distance between the object and the vehicle is less than the safe distance value; and
determining that there is no object that is likely to collide with the vehicle in response to determining that the distance between the object and the vehicle is not less than the safe distance value.
4. The method of claim 1, further comprising:
in response to determining that there is no object that is likely to collide with the vehicle, generating an instruction indicating that the vehicle maintains a first state, for transmission to the vehicle.
5. The method of claim 1, wherein the determining whether the vehicle moving space exists in the environment outside the vehicle comprises:
acquiring the environment image forming a preset angle with the vehicle body direction of the vehicle; and
in response to a space larger than a minimum vehicle moving width of the vehicle existing in the environment image, determining that the vehicle moving space exists in the environment outside the vehicle; and
in response to an absence of a space larger than the minimum vehicle moving width of the vehicle in the environment image, determining that the vehicle moving space does not exist in the environment outside the vehicle.
6. The method of claim 5, wherein said acquiring the environment image forming the preset angle with the vehicle body direction of the vehicle comprises at least one of:
intercepting an environment image segment which forms the preset angle with the vehicle body direction of the vehicle based on the environment image;
receiving the environment image acquired by an on-board camera device within the preset angle with the vehicle body direction from the vehicle; and
receiving, from the vehicle, the environment image acquired by an on-board camera device after the on-board camera device is adjusted to the preset angle with the body direction of the vehicle.
7. The method of claim 1, further comprising:
in response to determining that the vehicle moving space does not exist in the environment outside the vehicle, generating an instruction indicating that the vehicle starts a second state, for transmission to the vehicle.
8. The method of claim 3, wherein the determining whether the object approaching the vehicle is present in the environment image comprises:
identifying the object in the environment image;
in response to identifying the object in the environment image, obtaining displacement state information of the object, the displacement state information including: distance information between the object and the vehicle and moving speed information of the object; and
determining whether the object approaching the vehicle exists in the environment image in response to acquiring the displacement state information of the object.
9. The method of claim 8, wherein the identifying the object in the environmental image further comprises:
identifying type information of the object, the type information of the object being generated by a recognition model based on an input, the recognition model being trained via a plurality of sets of training data samples, each of the plurality of sets of training data comprising: an image of the object and identification information to identify a type of the object.
10. The method of claim 3, wherein the safe distance value is generated by a safe distance model based on an input, the safe distance model being trained via a plurality of sets of training data samples, each set of training data of the plurality of sets of training data comprising at least: the type information, mass information, and the displacement state information of the object.
11. The method of claim 10, wherein the mass information of the object is generated by a mass model based on an input, the mass model being trained via a plurality of sets of training data samples, each of the plurality of sets of training data comprising: an image of the object and identification information to identify a mass of the object.
12. The method of claim 10, wherein the training data of the safe distance model further comprises at least one of: weather information and road surface information.
13. The method of claim 12, wherein the weather information is obtained based on at least one of:
acquiring current weather information based on the environment image;
obtaining current weather information associated with a current location of the vehicle based on an application; and
crawling current weather information associated with a current location of the vehicle from a predetermined data source.
14. A method for preventing a vehicle collision, comprising:
determining, at the vehicle-mounted terminal device, that the vehicle is detected to be in a first state;
in response to determining that the vehicle is detected to be in the first state, capturing, via an onboard camera device, an environment image about an environment external to the vehicle;
transmitting the environment image for a server to determine, in response to determining that an object which may collide with the vehicle exists, a vehicle moving space existing in an environment outside the vehicle, and to generate an instruction about vehicle moving based on the vehicle moving space; and
in response to receiving the instruction about vehicle moving, controlling, via a vehicle control module, the vehicle to move to the vehicle moving space.
15. The method of claim 14, wherein the transmitting of the environment image comprises at least one of:
transmitting the environment image to the server at a predetermined time interval; and
transmitting the environment image to the server based on an instruction of the server.
16. The method of claim 14, further comprising:
maintaining the first state of the vehicle in response to receiving an instruction indicating that the vehicle maintains the first state, the instruction being generated by the server based on determining from the environment image that no object is likely to collide with the vehicle.
17. The method of claim 14, further comprising:
in response to receiving an instruction indicating that the vehicle starts the second state, the instruction being generated by the server based on determining from the environment image that the environment outside the vehicle does not have the vehicle moving space, starting the second state of the vehicle.
18. An electronic device, comprising:
a memory configured to store one or more computer programs; and
a processor coupled to the memory and configured to execute the one or more computer programs to cause the electronic device to perform the method of any of claims 1-17.
19. A non-transitory computer readable storage medium having stored thereon machine executable instructions which, when executed, cause a machine to perform the steps of the method of any of claims 1-17.
Background
At present, automatic driving technology increasingly focuses on preventing a vehicle collision while the vehicle is being driven. In real life, however, accidents in which a vehicle in a parking state is struck by other vehicles occur frequently, which exposes the shortcomings of current collision avoidance technology.
First, when the vehicle is stationary, particularly during a temporary stop, the driver often cannot pay as close attention to the traveling state of vehicles in the vicinity as in the driving state, and therefore cannot notice in time that another vehicle is approaching the own vehicle.
Second, even if another vehicle approaching the own vehicle is found in time through the alarm prompt of existing collision avoidance technology, the driver does not necessarily have enough reaction time to take correct collision avoidance measures or safety protection measures.
Third, conventional anti-collision technology cannot take into account the influence of factors such as weather, road surface, vehicle type, and vehicle weight on the braking distance, so the accuracy of the early warning is greatly reduced and large-scale collision accidents cannot be effectively avoided.
Disclosure of Invention
The present disclosure provides a method, an electronic device, and a computer storage medium for vehicle collision avoidance, which can prevent a vehicle collision from occurring or take safety measures in time when a vehicle is in a stationary state, reduce the occurrence rate of vehicle collision accidents, and reduce economic loss or personal injury caused by the vehicle collision.
In a first aspect of the present disclosure, a method of preventing a vehicle collision is provided. The method comprises: at a server, acquiring an environment image about an environment outside a vehicle; determining, based on the environment image, whether an object that may collide with the vehicle exists; if it is determined that the object that may collide with the vehicle exists, further determining, based on the environment image, whether a vehicle moving space exists in the environment outside the vehicle; and if it is determined that the vehicle moving space exists in the environment outside the vehicle, generating an instruction about vehicle moving for transmission to the vehicle.
In a second aspect of the present disclosure, a method of preventing a vehicle collision is also provided. The method comprises: at a vehicle-mounted terminal device, determining that the vehicle is detected to be in a first state; if it is determined that the vehicle is detected to be in the first state, acquiring an environment image about an environment outside the vehicle via a vehicle-mounted camera device; transmitting the environment image for the server to determine, in response to determining that an object that may collide with the vehicle exists, a vehicle moving space existing in the environment outside the vehicle, and to generate an instruction about vehicle moving based on the vehicle moving space; and in response to receiving the instruction about vehicle moving, controlling the vehicle to move to the vehicle moving space via a vehicle control module.
In a third aspect of the present disclosure, there is also provided an electronic device, the device comprising: a memory configured to store one or more computer programs; and a processor coupled to the memory and configured to execute the one or more programs to cause the electronic device to perform the method of the first aspect of the present disclosure.
In a fourth aspect of the disclosure, a computer-readable storage medium is provided. The computer readable storage medium has stored thereon a computer program which, when executed by a machine, causes the machine to carry out any of the steps of the method described according to the first aspect of the disclosure.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the disclosure, nor is it intended to be used to limit the scope of the disclosure.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following more particular descriptions of exemplary embodiments of the disclosure as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the disclosure.
FIG. 1 shows a schematic diagram of an example of a system 100 for a method of preventing a vehicle collision according to an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of an example of a vehicle apparatus 200 of a method for preventing a vehicle collision according to an embodiment of the present disclosure;
FIG. 3 shows a schematic flow diagram of a method 300 for preventing a vehicle collision according to an embodiment of the present disclosure;
FIG. 4 shows a schematic flow diagram of a method 400 for determining a likelihood of a vehicle collision according to an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of a method 500 for determining a vehicle moving space according to an embodiment of the present disclosure;
FIG. 6 shows a schematic flow diagram of a method 600 for preventing a vehicle collision according to an embodiment of the present disclosure;
FIG. 7 shows a schematic flow diagram of a method 700 for preventing a vehicle collision according to an embodiment of the present disclosure; and
FIG. 8 illustrates a schematic block diagram of an example device 800 that may be used to implement embodiments of the present disclosure.
Like or corresponding reference characters designate like or corresponding parts throughout the several views.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The term "include" and variations thereof as used herein is meant to be inclusive in an open-ended manner, i.e., "including but not limited to". Unless specifically stated otherwise, the term "or" means "and/or". The term "based on" means "based at least in part on". The terms "one example embodiment" and "one embodiment" mean "at least one example embodiment". The term "another embodiment" means "at least one additional embodiment". The terms "first," "second," and the like may refer to different or the same object. Other explicit and implicit definitions are also possible below.
As described above, a vehicle in a parking state is often struck by other vehicles, for example, when another vehicle skids on a rainy or snowy day and fails to brake in time; this not only brings economic loss to the vehicle owner but also threatens the personal safety of passengers.
To address, at least in part, one or more of the above problems and other potential problems, example embodiments of the present disclosure propose a solution for preventing a vehicle collision. In this scheme, at a server, an environment image regarding an environment outside a vehicle is acquired, it is determined whether there is an object that is likely to collide with the vehicle based on the environment image, if it is determined that there is an object that is likely to collide with the vehicle, it is further determined whether there is a vehicle moving space in the environment outside the vehicle based on the environment image, and if it is determined that there is a vehicle moving space in the environment outside the vehicle, an instruction regarding vehicle moving is generated for transmission to the vehicle.
Therefore, vehicle moving measures or safety protection measures can be taken pertinently based on whether a vehicle moving space exists around the vehicle, so that the risk of vehicle collision is reduced, and vehicle collision avoidance is more refined and intelligent.
Hereinafter, specific examples of the present scheme will be described in more detail with reference to the accompanying drawings.
FIG. 1 shows a schematic diagram of an example of a system 100 for a method of preventing a vehicle collision according to an embodiment of the present disclosure. As shown in FIG. 1, the system 100 includes a plurality of vehicles 110 (e.g., vehicle 110-1, vehicle 110-2, and vehicle 110-3), a server 150, a plurality of base stations 130, and a plurality of obstacles (e.g., a moving obstacle 120-1 and a stationary obstacle 120-2). In some embodiments, vehicles 110-2 and 110-3 are traveling in different states, for example, at different orientations relative to vehicle 110-1. Vehicle 110-2, for example, travels ahead of vehicle 110-1 and approaches vehicle 110-1 in reverse. Vehicle 110-3, for example, travels behind vehicle 110-1 and approaches vehicle 110-1 head-on. In some embodiments, the moving obstacle 120-1 may be, for example, a moving vehicle, a pedestrian, or the like, and there may be one or more of them. The stationary obstacle 120-2 may be, for example, a sidewalk, a green belt, a guardrail, or the like, and there may be one or more of them.
The server 150 is used to identify an object that may collide with the vehicle, determine displacement state information of the object, determine weather information and road surface information, generate a safe distance value, determine a vehicle moving space, generate an instruction about vehicle moving, and so on. Server 150 may interact with vehicle 110 via base station 130 and network 140, for example. The server 150 includes, but is not limited to, a personal computer, a server computer, a multiprocessor system, a mainframe computer, a distributed computing environment including any of the above systems or devices, and the like. In some embodiments, the server may have one or more processing units, including special-purpose processing units such as GPUs, FPGAs, and ASICs, and general-purpose processing units such as CPUs. In addition, one or more virtual machines may also be running on each computing device.
As for the vehicle 110-1, as shown in FIG. 2, it includes at least: a vehicle-mounted data sensing device, a vehicle-mounted terminal device 230, a vehicle control module 220, and the like. The vehicle-mounted data sensing device is used for acquiring an environment image about the outside of the vehicle. It comprises at least a plurality of vehicle-mounted camera devices 210 such as cameras, whose types may include a monocular camera, a binocular camera, a wide-angle camera, a fisheye camera, and the like. The installation positions include, but are not limited to, a front-view camera (e.g., the vehicle-mounted camera device 210-1), a rear-view camera (e.g., the vehicle-mounted camera device 210-4), side-view cameras (e.g., the vehicle-mounted camera devices 210-2 and 210-3, which may be installed below the left and right side mirrors), and a surround-view camera (e.g., the vehicle-mounted camera device 210-5, which is installed above the vehicle roof and can capture 360-degree surround environment images of the outside of the vehicle). The vehicle-mounted camera device 210 may capture video images or photographs of the environment outside the vehicle. Alternatively or additionally, in some embodiments, the vehicle-mounted data sensing device may also include a radar sensing device (e.g., a lidar sensor, a millimeter-wave radar sensor, an ultrasonic radar sensor, etc.).
The vehicle-mounted terminal device 230 includes, for example, at least a vehicle-mounted computing device 240 (e.g., an in-vehicle head unit) and a vehicle-mounted T-BOX 250. The vehicle-mounted computing device 240 may send the environment image generated based on the video image to server 150, for server 150 to determine objects that may collide with vehicle 110-1. The vehicle-mounted computing device may further obtain the instruction about vehicle moving issued by the server 150 via the vehicle-mounted T-BOX 250, and decompose the instruction about vehicle moving into steering and braking control signals, so as to implement control over the vehicle control module 220.
The vehicle-mounted T-BOX 250 is used for vehicle state detection and for data interaction with the vehicle-mounted computing device 240 and the server 150. In some embodiments, the vehicle-mounted T-BOX 250 includes, for example, a SIM card, a GPS antenna, a 4G or 5G antenna, and the like. Data interaction, such as transmission of vehicle state information and control instructions, can be achieved through CAN bus communication between the vehicle-mounted T-BOX 250 and the vehicle-mounted computing device. The vehicle-mounted T-BOX 250 can collect bus data associated with the D-CAN, K-CAN, and PT-CAN buses of vehicle 110-1. When the vehicle-mounted T-BOX 250 detects that the vehicle is in a first state (e.g., stationary and not turned off), it sends an instruction to capture an environment image about the outside of vehicle 110-1 to the vehicle-mounted camera device 210 through the CAN bus. When the vehicle-mounted T-BOX 250 receives the instruction about vehicle moving issued by the server 150, it sends control messages through the CAN bus to realize control of the vehicle.
The vehicle control module 220 is used to effect movement of the vehicle position. The vehicle control module 220 includes a steering control module and a braking control module. The vehicle control module 220 acquires the steering and braking control signal decomposed and generated by the in-vehicle terminal device 230 via the CAN bus, and controls the movement of the vehicle through the steering control module and the braking control module.
A method for preventing a vehicle collision according to an embodiment of the present disclosure will be described below with reference to FIGS. 3 to 5. FIG. 3 shows a flowchart of a method 300 for preventing a vehicle collision according to an embodiment of the present disclosure. It should be understood that the method 300 may be performed, for example, at the electronic device 800 depicted in FIG. 8, and may also be performed at the server 150 depicted in FIG. 1. The server 150 is described below as an example. It should be understood that method 300 may also include additional acts not shown and/or may omit acts shown, as the scope of the disclosure is not limited in this respect.
At block 302, at server 150, an environment image about the environment external to vehicle 110-1 is acquired. In some embodiments, the environment image may be generated based on a video image about the environment outside the vehicle captured by the vehicle-mounted camera device 210. In some embodiments, acquiring the environment image about the environment external to vehicle 110-1 may include, for example, receiving environment images from the vehicle at predetermined time intervals. The predetermined time interval may be, for example, 1 s, 5 s, or 10 s. In other embodiments, acquiring the environment image about the environment external to vehicle 110-1 may include receiving the environment image from vehicle 110-1 based on an instruction from server 150. For example, server 150, upon determining that there is an object (e.g., vehicle 110-2) that may collide with vehicle 110-1, sends the vehicle an instruction to obtain an environment image about the environment external to vehicle 110-1, and then receives that image from vehicle 110-1. Thus, by acquiring the latest environment image, the server 150 makes its analysis of whether there is an object that may collide with vehicle 110-1 and whether a vehicle moving space exists in the environment outside the vehicle more accurate.
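The two acquisition modes can be summarized in a short sketch. The following Python fragment is illustrative only: the fetch_image_from_vehicle and send_instruction transport helpers and the opcode string are assumptions, since the disclosure does not fix a wire protocol.

```python
import time

PREDETERMINED_INTERVAL_S = 5  # e.g., 1 s, 5 s, or 10 s, as noted above


def receive_periodic_images(vehicle_id, fetch_image_from_vehicle, handle_image):
    """Mode 1: the server receives an environment image at a fixed interval."""
    while True:
        image = fetch_image_from_vehicle(vehicle_id)  # hypothetical transport call
        handle_image(image)
        time.sleep(PREDETERMINED_INTERVAL_S)


def request_image_on_demand(vehicle_id, send_instruction, fetch_image_from_vehicle):
    """Mode 2: the server instructs the vehicle to capture and send a fresh image."""
    send_instruction(vehicle_id, "CAPTURE_ENVIRONMENT_IMAGE")  # hypothetical opcode
    return fetch_image_from_vehicle(vehicle_id)
```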
At block 304, server 150 determines whether there is an object that may collide with vehicle 110-1 based on the environment image. The process of the server 150 determining whether there is an object that may collide with vehicle 110-1 using the environment image will be further described below in conjunction with FIG. 4. It should be understood that method 400 may also include additional acts not shown and/or may omit acts shown, as the scope of the disclosure is not limited in this respect.
At block 402, the server 150 identifies an object (e.g., vehicle 110-2) in the environment image. In some embodiments, server 150 may perform processing such as enhancement, restoration, segmentation, and fusion on one or more of the acquired environment images to identify objects (e.g., vehicle 110-2) present in them.
With regard to identifying an object (e.g., vehicle 110-2) in the environment image, it may also include, for example, identifying type information of the object (e.g., vehicle 110-2). The type information of the object is generated by a recognition model based on an input. The recognition model is trained via multiple sets of training data samples, each of which includes, for example: an image of an object and identification information for identifying a type of the object. For example, the input to the recognition model is an acquired image of the object, including, for example, a front image, a side image, and a back image of the object. The output of the recognition model is type information about the object, including, for example, the type of vehicle (e.g., car, SUV, van, truck, etc.), the brand of the vehicle (e.g., Volkswagen, Ford, BYD, etc.), the model of the vehicle (e.g., Magotan), and the like. The training samples of the recognition model are, for example, multiple sets of images of objects manually labeled with type information. For example, an image of a Volkswagen New Magotan sedan approaching the vehicle from its left rear is labeled, according to the type, brand, and model attributes contained therein, as "car", "Volkswagen", and "New Magotan". By such means, the recognition model is helped to identify and describe different types of objects, so as to more accurately lock onto the detection range of objects that may collide with the vehicle and to provide more detailed input for the safe distance model.
At block 404, in response to identifying an object (e.g., vehicle 110-2) in the environment image, server 150 obtains displacement state information of the object, the displacement state information including at least: distance information between the object and vehicle 110-1 and moving speed information of the object. Regarding the distance information between the object and vehicle 110-1, in some embodiments, the server 150 may calculate the distance between the object and vehicle 110-1 based on the focal length of the vehicle-mounted camera device, the height of the vehicle-mounted camera device from the ground, and the longitudinal projection size of the ground point of the object detection frame in the environment image. The manner of calculating the distance between the object and vehicle 110-1 is described below with reference to equation (1):
Z=f·H/y (1)
In the above equation (1), f represents the focal length of the vehicle-mounted camera device (in meters, for example). H represents the height of the vehicle-mounted camera device from the ground (in meters, for example). y represents the longitudinal projection size of the ground point of the captured object's detection frame in the environment image (in meters, for example). Z represents the distance between the object and vehicle 110-1 (in meters, for example).
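A minimal Python sketch of equation (1) follows; the function name and the assumption that y has already been converted to the same metric units as the focal length are illustrative, not part of the disclosure.

```python
def estimate_distance_m(focal_length_m: float, camera_height_m: float,
                        projection_y_m: float) -> float:
    """Equation (1): Z = f * H / y.

    focal_length_m: focal length f of the vehicle-mounted camera (m)
    camera_height_m: height H of the camera above the ground (m)
    projection_y_m: longitudinal projection size y of the ground point of
        the object detection frame in the environment image (m)
    """
    if projection_y_m <= 0:
        raise ValueError("projection size must be positive")
    return focal_length_m * camera_height_m / projection_y_m
```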
In other embodiments, determining the distance between the object and vehicle 110-1 may include determining, as the distance between the object and vehicle 110-1, the ratio between the product of the focal length of the vehicle-mounted camera and the actual width of the object (e.g., the body width of a known vehicle type), and the lateral projection size of the captured object's detection frame in the environment image.
The moving speed information of the object may include, for example, the magnitude of the moving speed of the object. In some embodiments, server 150 may acquire two environment images in sequence at a predetermined time interval, calculate the distance between the object and vehicle 110-1 in each image, and determine the moving speed of the object based on the ratio of the difference between the two distances to the predetermined time interval. The predetermined time interval is, for example, 500 ms, 1 s, or 2 s. Specifically, the moving speed V of the object can be calculated according to equation (2):
V=(D1−D2)/(T2−T1) (2)
where D1 is the distance (in meters, for example) between the object and vehicle 110-1 calculated by server 150 based on the environment image captured at time T1, and D2 is the distance (in meters, for example) between the object and vehicle 110-1 calculated based on the environment image captured at time T2. T2−T1 represents the predetermined time interval (in milliseconds, for example). V is the magnitude of the moving speed of the object (in km/h, for example).
Regarding the moving speed information of the object, it may also include, for example, the moving speed direction B of the object (as shown in FIG. 5). In some embodiments, the server 150 may acquire two environment images in sequence at a predetermined time interval, identify a first position of the object in the earlier environment image, identify a second position of the object in the later environment image, and take the line from the first position to the second position as indicating the moving speed direction B of the object. The predetermined time interval is, for example, 500 ms, 1 s, or 2 s.
At block 406, in response to obtaining the displacement state information of the object (e.g., vehicle 110-2), server 150 determines whether an object approaching vehicle 110-1 is present in the environment image. For example, if the distance between the object and the vehicle is decreasing, or the moving speed direction B of the object points to the current position of the vehicle, the server 150 determines that there is an object approaching vehicle 110-1 in the environment image.
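Blocks 404 and 406 can be sketched as follows. The 15-degree tolerance used to decide whether direction B "points to" the vehicle is an assumed threshold, and the 2-D positions are assumed to be expressed in a common ground-plane frame.

```python
import math


def moving_speed_kmh(d1_m: float, d2_m: float, interval_ms: float) -> float:
    """Equation (2): V = (D1 - D2) / (T2 - T1), converted from m/ms to km/h."""
    return (d1_m - d2_m) / (interval_ms / 1000.0) * 3.6


def is_approaching(d1_m, d2_m, first_pos, second_pos, vehicle_pos,
                   tolerance_deg: float = 15.0) -> bool:
    """Block 406: the object approaches if its distance to the vehicle is
    decreasing, or if its moving speed direction B (the line from the first
    position to the second position) points to the vehicle's position."""
    if d2_m < d1_m:
        return True
    b = math.atan2(second_pos[1] - first_pos[1], second_pos[0] - first_pos[0])
    to_vehicle = math.atan2(vehicle_pos[1] - second_pos[1],
                            vehicle_pos[0] - second_pos[0])
    diff = math.degrees(abs(b - to_vehicle)) % 360.0
    return min(diff, 360.0 - diff) < tolerance_deg
```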
At block 408, in response to determining that an object approaching vehicle 110-1 is present in the environment image, server 150 generates a safe distance value indicating minimum distance information between the object (e.g., vehicle 110-2) and vehicle 110-1 that should be maintained for collision avoidance. The safe distance value is generated by a safe distance model based on an input, the safe distance model being trained via multiple sets of training data samples, each set of which includes at least: type information, mass information, and displacement state information of the object. Alternatively or additionally, each set of training data may further include, for example, weather information and road surface information of the location where the vehicle is located. In some embodiments, the input of the safe distance model is the acquired type information, mass information, and displacement state information of the object, together with current weather information and road surface information. The output of the safe distance model is a safe distance value. The training samples of the safe distance model are, for example, multiple sets of type information, mass information, and displacement state information of objects, current weather information, and road surface information, together with manually labeled corresponding safe distance values. Regarding the type information of the object, which includes, for example, the brand of the vehicle and the model of the vehicle, different brands and models employ different braking systems and tire designs, so different types of vehicles have different braking capabilities.
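How the safe distance model consumes these inputs might look like the sketch below; the flat feature dictionary and the model object (any trained regressor exposing a predict method) are assumptions, since the disclosure does not fix a model architecture.

```python
def build_safe_distance_features(obj_type: str, obj_mass_t: float,
                                 distance_m: float, speed_kmh: float,
                                 weather: str, road_surface: str) -> dict:
    """Assemble the safe distance model's input from the fields named above:
    type, mass, displacement state, weather, and road surface."""
    return {
        "type": obj_type,              # e.g., "truck"
        "mass_t": obj_mass_t,          # e.g., 2.86 for an overloaded truck
        "distance_m": distance_m,      # displacement state: distance to vehicle
        "speed_kmh": speed_kmh,        # displacement state: moving speed
        "weather": weather,            # e.g., "rain" (friction coefficient ~0.4)
        "road_surface": road_surface,  # e.g., "asphalt", "gravel", "ice"
    }


def predict_safe_distance_m(model, features: dict) -> float:
    """`model` is an assumed trained regressor; returns the safe distance value."""
    return model.predict(features)
```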
The mass information of the object includes, for example, the mass when the object is normally loaded with passengers or goods, the mass when the object is overloaded, and the like. A vehicle carrying more passengers or goods is prone to tire lock-up, so that the static friction between the tires and the road surface turns into sliding friction; since sliding friction is smaller than the maximum critical static friction, this leads to a longer braking distance. The mass information of the object is generated by a mass model based on an input, the mass model being trained via multiple sets of training data samples, each set of which includes: an image of an object and identification information for identifying a mass of the object. For example, the input to the mass model is an acquired image of the object, including, for example, a front image, a side image, and a back image of the object. The output of the mass model is mass information about the object. The training samples of the mass model are, for example, multiple sets of images of objects manually labeled with mass information. For example, an image of a Foton Xiangling V van in an unloaded state approaching the vehicle from its left rear is labeled, according to the mass attributes contained therein, as "normal cargo" and "1.34 t". As another example, an image of a Foton Xiangling V truck with an excessive cargo load approaching the vehicle from its left rear is labeled as "overload" and "2.86 t". Thus, the present disclosure can automatically, quickly, and easily acquire the type information and mass information of an object approaching the vehicle by acquiring environment images about the environment outside the vehicle.
As for the weather information, it includes, for example, rain, snow, sunny weather, and the like. According to statistics, the friction coefficient of a normally dry asphalt pavement is 0.6; on a rainy day the road surface friction coefficient drops to 0.4, and on a snowy day it is 0.28. As the road surface friction coefficient decreases, the braking distance increases. In some embodiments, obtaining weather information includes identifying the weather at the current location of vehicle 110-1 based on the environment image. Alternatively or additionally, in some embodiments, obtaining weather information further includes obtaining current weather information associated with the current location of vehicle 110-1 based on an application. Obtaining the current location of vehicle 110-1 may include sending vehicle 110-1 a request for its current location and receiving from vehicle 110-1 a message indicating its current location. For example, if the current location of vehicle 110-1 is in the Xuhui District of Shanghai, the weather information for the Xuhui District of Shanghai may be obtained from a weather forecast application. Alternatively or additionally, in some embodiments, obtaining weather information may include crawling current weather information associated with the vehicle location from a predetermined data source. The predetermined data source is, for example and without limitation, a predetermined website, such as the official website of China Weather Net, and the current weather information associated with the vehicle location on the predetermined website can be crawled by a web crawler. In this way, the acquired weather information is tied to the current position of the vehicle, making it more accurate and specific.
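The third weather source (crawling a predetermined data source) might be sketched as below. The endpoint URL and response schema are hypothetical; only the general pattern, an HTTP request keyed by the vehicle's current location, comes from the text.

```python
import requests


def fetch_weather(lat: float, lon: float) -> str:
    """Pull current weather for the vehicle's location from a predetermined
    data source. The URL and JSON field below are illustrative assumptions."""
    resp = requests.get(
        "https://example.com/current-weather",  # hypothetical endpoint
        params={"lat": lat, "lon": lon},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["condition"]  # e.g., "rain", "snow", "sunny"
```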
As for the road surface information, it includes, for example, a flat road surface, a gravel road surface, a stone-block road surface, a sand-and-mud road surface, a road surface including speed bumps, a rubberized road surface, an icy road surface, and the like. As described above, as the road surface friction coefficient decreases, the braking distance increases. In some embodiments, acquiring the road surface information includes identifying the road surface at the current location of vehicle 110-1 based on the environment image.
Table 1 below lists, as examples, several sets of training data samples for the safe distance value. It should be understood that the training data samples listed in Table 1 are merely examples and do not represent actual or accurate data.
Table 1: sample examples of training data for identified safe distance values
In some embodiments, the safe distance model may adjust the safe distance value (e.g., increase it) based on confirmation of whether a collision event actually occurred, so as to achieve more accurate predictions. Thus, the server 150 may adjust the safe distance value based on a large amount of positive-sample and negative-sample information about safe distance values from a plurality of vehicles, which helps improve the accuracy of the predicted safe distance value.
At block 410, in response to determining that the distance between the object (e.g., vehicle 110-2) and vehicle 110-1 is less than the safe distance value, server 150 determines that there is an object that may collide with the vehicle.
At block 412, in response to determining that the distance between the object (e.g., vehicle 110-2) and vehicle 110-1 is not less than the safe distance value, server 150 determines that there are no objects that may collide with the vehicle.
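Blocks 410 and 412 reduce to a single comparison, sketched here for completeness:

```python
def object_may_collide(distance_m: float, safe_distance_m: float) -> bool:
    """Blocks 410-412: a collision risk exists exactly when the measured
    distance falls below the model-generated safe distance value."""
    return distance_m < safe_distance_m
```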
Continuing back to the description of FIG. 3, at block 306, in response to determining that an object (e.g., vehicle 110-2) is present that may collide with vehicle 110-1, server 150 determines whether a vehicle moving space is present in the environment external to the vehicle based on the environment image. The process of the server 150 determining, based on the environment image, whether a vehicle moving space exists outside the vehicle will be further described below with reference to FIG. 5. As shown in FIG. 5, A represents the body direction of vehicle 110-1 in the first state, B represents the moving speed direction of the identified object (e.g., vehicle 110-2) approaching vehicle 110-1, and C1 and C2 represent vehicle moving directions of vehicle 110-1; body direction A intersects moving speed direction B at an angle α. It should be understood that the moving speed direction B and the vehicle moving directions C1 and C2 of vehicle 110-1 in FIG. 5 are for illustration only; there may be other situations for the moving speed direction and the vehicle moving direction.
In some embodiments, the server 150 intercepts an environment image segment at a preset angle γ with respect to the body direction A of vehicle 110-1, for example, based on the acquired environment image. Specifically, the range of the preset angle γ can be calculated according to equations (3) to (5):
γ=[γ1,γ2] (3)
γ1=α-β (4)
γ2=α+β (5)
where α represents the angle that the moving speed direction B of the object (e.g., vehicle 110-2) makes with the body direction A of vehicle 110-1, and β represents the angle formed between the moving speed direction B of the object (e.g., vehicle 110-2) and the vehicle moving direction C1 or C2 of vehicle 110-1. The value of β may range, for example, from 0° to 90°, with 90° being a preferred angle. When γ is a positive value, it represents a clockwise direction with the body direction A of vehicle 110-1 as a starting point; when γ is a negative value, it represents a counterclockwise direction with the body direction A of vehicle 110-1 as a starting point.
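Equations (3) to (5) can be written directly as a small helper; the default of 90 degrees for β follows the preferred angle stated above.

```python
def preset_angle_range(alpha_deg: float, beta_deg: float = 90.0) -> tuple:
    """Equations (3)-(5): gamma = [alpha - beta, alpha + beta].

    alpha_deg: angle between moving speed direction B and body direction A.
    Positive values are clockwise from body direction A, negative values
    counterclockwise, as defined above.
    """
    return (alpha_deg - beta_deg, alpha_deg + beta_deg)
```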
In other embodiments, server 150 sends, for example, an instruction to vehicle 110-1 to obtain an environment image at the preset angle γ with respect to the body direction A of vehicle 110-1. The vehicle-mounted camera device 210 receives, via the vehicle-mounted terminal device 230, the instruction sent by the server 150 to acquire an environment image at the preset angle γ with respect to the body direction A of vehicle 110-1, and acquires one or more such environment images. In some embodiments, the environment image may be captured by a vehicle-mounted camera device 210 located within the preset angle γ with the body direction A of vehicle 110-1 (for example, the vehicle-mounted camera devices 210-1, 210-2, and 210-3 shown in FIG. 5), or captured by the surround-view camera (for example, the vehicle-mounted camera device 210-5) after its shooting angle is adjusted to be within the preset angle γ with the body direction A of vehicle 110-1. The vehicle-mounted camera device 210 transmits the captured environment image to the server 150 via the vehicle-mounted terminal device 230.
For example, if server 150 determines that a space larger than the minimum vehicle moving width D of vehicle 110-1 exists in the environment image, it determines that a vehicle moving space exists in the environment outside vehicle 110-1. Specifically, the minimum vehicle moving width D may be calculated according to equation (6) and equation (7):
D=W+2R(1−cosδ) (6)
where W is the vehicle body width, L is the vehicle body length, δ is the angle for moving the vehicle, R is the turning radius required for moving the vehicle, and D is the minimum vehicle moving width required for moving vehicle 110-1.
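Equation (6) transcribes directly; since equation (7) is not reproduced here, the turning radius R is taken as a given input in this sketch.

```python
import math


def minimum_moving_width_m(body_width_m: float, turning_radius_m: float,
                           delta_deg: float) -> float:
    """Equation (6): D = W + 2R(1 - cos(delta)).

    body_width_m: vehicle body width W (m)
    turning_radius_m: turning radius R required for moving the vehicle (m)
    delta_deg: vehicle moving angle delta relative to body direction A
    """
    return body_width_m + 2.0 * turning_radius_m * (
        1.0 - math.cos(math.radians(delta_deg)))
```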
For another example, suppose server 150 determines that a moving obstacle 120-1 (e.g., a moving vehicle, a pedestrian, etc.) exists in the environment image and that no space larger than the minimum vehicle moving width D of vehicle 110-1 currently exists. In some embodiments, the server 150 may still determine that a vehicle moving space exists in the environment outside vehicle 110-1 by calculating the moving speed of the moving obstacle 120-1 using the same image-based speed measurement technique as at block 404, and calculating that, within a preset time, the moving obstacle 120-1 will have moved such that a space larger than the minimum vehicle moving width D of vehicle 110-1 appears in the environment image. The preset time should be less than the time for the object (e.g., vehicle 110-2) to move to the current location of vehicle 110-1.
For example, if server 150 determines that a stationary obstacle 120-2 (e.g., a sidewalk, a green belt, a wall surface, a river, a stationary vehicle, etc.) exists in the environment image and that there is no space greater than the minimum vehicle moving width D of vehicle 110-1, it determines that there is no vehicle moving space in the environment outside vehicle 110-1.
At block 308, in response to determining that a vehicle moving space exists in the environment external to the vehicle, server 150 generates an instruction about vehicle moving for sending to the vehicle, the instruction indicating at least an angle and a distance for moving the vehicle. As for the angle δ for moving the vehicle, it is, for example, the angle of the available space relative to the body direction A of vehicle 110-1. As for the vehicle moving distance, it is calculated, for example, based on the difference between the obtained safe distance value and the distance between the object (e.g., vehicle 110-2) and vehicle 110-1.
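Block 308 might then assemble the instruction as below; the dictionary layout is an assumed message format, not the one used by the disclosure.

```python
def build_moving_instruction(delta_deg: float, safe_distance_m: float,
                             current_distance_m: float) -> dict:
    """The instruction carries at least an angle and a distance: the angle of
    the free space relative to body direction A, and the shortfall between
    the safe distance value and the current object distance."""
    return {
        "angle_deg": delta_deg,
        "distance_m": safe_distance_m - current_distance_m,
    }
```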
FIG. 6 shows a flowchart of a method 600 for preventing a vehicle collision according to an embodiment of the present disclosure. It should be understood that the method 600 may be performed, for example, at the electronic device 800 depicted in FIG. 8, and may also be performed at the server 150 depicted in FIG. 1. It should be understood that method 600 may also include additional acts not shown and/or may omit acts shown, as the scope of the disclosure is not limited in this respect.
At block 602, at the in-vehicle terminal device 230 (e.g., the in-vehicle T-BOX 250), it is determined that vehicle 110-1 is detected to be in a first state. In some embodiments, the first state may include the vehicle speed being equal to 0 and the vehicle not being turned off (e.g., when the vehicle is a fuel vehicle), and may also include the vehicle speed being equal to 0 and the vehicle being turned off (e.g., when the vehicle is an electric vehicle). In other embodiments, the first state may include the vehicle speed being equal to 0 and the driver being in the driver's seat, and may further include the vehicle speed being equal to 0 and the driver not being in the driver's seat. In this way, the in-vehicle terminal device 230 can start the collision detection function only when the vehicle is in a stationary state or the driver is not in the driver's seat, which reduces unnecessary resource occupation and makes the start-up of the collision avoidance system intelligent.
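One possible first-state test, covering the variants listed above, is sketched below; which variant applies (fuel vehicle vs. electric vehicle, driver present or not) is a deployment choice the disclosure leaves open.

```python
def in_first_state(speed_kmh: float, ignition_on: bool,
                   driver_in_seat: bool, is_electric: bool) -> bool:
    """First-state variants from the text: speed 0 and not turned off;
    speed 0 regardless of ignition for an electric vehicle; or speed 0
    with the driver absent from the driver's seat."""
    if speed_kmh != 0:
        return False
    return ignition_on or is_electric or not driver_in_seat
```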
If the in-vehicle terminal device 230 (e.g., the in-vehicle T-BOX 250) determines that the vehicle is detected to be in the first state, then at block 604 the in-vehicle terminal device 230 (e.g., the in-vehicle computing device 240) captures an environment image about the environment external to the vehicle via the in-vehicle camera device 210. For example, the in-vehicle computing device 240 selects a predetermined number of frames (e.g., 16 or 32 frames) of environment images from the captured video images.
At block 606, the in-vehicle terminal device 230 transmits the environment image for server 150 to determine, in response to determining that an object that may collide with vehicle 110-1 exists, a vehicle moving space existing in the environment outside the vehicle, and to generate an instruction about vehicle moving based on the vehicle moving space. By transmitting the environment image data to the server 150 and having the server 150 determine both the object that may collide with vehicle 110-1 and the vehicle moving space existing around vehicle 110-1, the present disclosure leverages the strong computing power of the server 150 instead of depending heavily on the computing power of the in-vehicle computing device 240. This helps rapidly identify an object that may collide with vehicle 110-1 and determine a vehicle moving space on the basis of the predicted safe distance value, so as to prevent a situation in which vehicle 110-1 cannot move or take safety measures in time before a collision occurs.
At block 608, in response to receiving the instruction about vehicle moving, the in-vehicle terminal device 230 controls the vehicle to move to the vehicle moving space via the vehicle control module 220. In some embodiments, the onboard computing device 240 may obtain the instruction about vehicle moving issued by the server 150 via the onboard T-BOX 250 and decompose it into steering and braking control signals to implement control of the vehicle control module 220. In some embodiments, for example when vehicle 110-1 is stationary and has been turned off, the vehicle-mounted computing device 240 may, after starting the motor via the vehicle-mounted T-BOX 250, decompose the instruction about vehicle moving into steering and braking control signals to effect control of the vehicle control module 220.
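The decomposition into steering and braking control signals might look like the sketch below; the signal dictionaries and the can_bus_send publisher are assumptions standing in for the CAN messages described above.

```python
def decompose_moving_instruction(instruction: dict) -> tuple:
    """Split the server's moving instruction into a steering signal and a
    braking (longitudinal) signal for vehicle control module 220."""
    steering = {"type": "steering", "angle_deg": instruction["angle_deg"]}
    braking = {"type": "braking", "travel_m": instruction["distance_m"]}
    return steering, braking


def execute_moving_instruction(instruction: dict, can_bus_send) -> None:
    for signal in decompose_moving_instruction(instruction):
        can_bus_send(signal)  # hypothetical publish over the CAN bus
```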
In some embodiments, method 600 may further include: in response to receiving an instruction indicating that the vehicle maintains the first state, generated by server 150 based on determining from the environment image that there is no object likely to collide with the vehicle, the in-vehicle terminal device 230 (e.g., the in-vehicle T-BOX 250) maintains the first state of the vehicle.
In some embodiments, method 600 may further include: in response to receiving an instruction indicating that the vehicle starts the second state, generated by server 150 based on determining from the environment image that no vehicle moving space exists in the environment outside the vehicle, the in-vehicle terminal device 230 (e.g., the in-vehicle T-BOX 250) starts the second state of the vehicle. For example, the second state may include acquiring, via the in-vehicle camera device 210, a collision accident picture about the environment outside the vehicle, which is generated based on a video image about the environment outside the vehicle captured by the in-vehicle camera device 210 within a preset time, and transmitting the collision accident picture to the server 150. The preset time may be, for example, 30 s, 60 s, or 90 s. The second state may also include a voice broadcast collision warning. For example, the content of the voice announcement may indicate the position where the vehicle is about to be struck (e.g., directly in front, on the right side, directly behind, etc.), and may also indicate other safety measures (e.g., "about to deploy the airbag", "please fasten your seat belts", etc.). The second state may also include pre-deploying the airbag and/or tightening the seat belt. For example, the vehicle 110-1 may detect whether a passenger is seated by using a pressure sensor or the like on the seat, deploy the airbag of the corresponding seat in advance, and/or tighten the seat belt. The second state may also include turning on warning lights, automatic honking, and the like. It should be understood that the safety measures listed above are only illustrative of the second state, and other safety measures are possible.
FIG. 7 shows a flowchart of a method 700 for preventing a vehicle collision according to an embodiment of the present disclosure. It should be understood that method 700 may also include additional steps not shown and/or may omit steps shown, as the scope of the present disclosure is not limited in this respect.
At 702, in-vehicle terminal device 230 (e.g., in-vehicle T-BOX 250) detects that vehicle 110-1 is in a first state.
At 704, the in-vehicle terminal device 230 (e.g., the in-vehicle T-BOX 250) sends an instruction to acquire an environment image about the environment outside the vehicle to the in-vehicle image pickup device 210.
At 706, the in-vehicle camera device 210 acquires an environment image about the environment outside the vehicle.
At 708, the in-vehicle camera device 210 transmits an environment image about the environment outside the vehicle to the server 150.
At 710, the server 150 identifies objects present in the environmental image.
At 712, the server 150 obtains displacement status information of the object.
At 714, the server 150 determines that an object approaching the vehicle 110-1 is present.
At 716, the server 150 generates a safe distance value via the safe distance model.
At 718, server 150 determines that the distance between the object (e.g., vehicle 110-2) and vehicle 110-1 is less than the safe distance value.
At 720, server 150 determines that there is an object (e.g., vehicle 110-2) that may collide with vehicle 110-1.
At 722, server 150 determines that there is a vehicle moving space in the environment outside the vehicle based on the environment image.
At 724, the server 150 determines an angle and distance for moving the vehicle.
At 726, server 150 sends the instruction about vehicle moving to in-vehicle terminal device 230 (e.g., in-vehicle computing device 240).
At 728, the vehicle terminal device 230 (e.g., vehicle computing device 240) decomposes the instruction about vehicle moving into steering and braking control signals.
At 730, the in-vehicle terminal device 230 (e.g., the in-vehicle computing device 240) sends steering and braking control signals to the vehicle control module 220.
At 732, vehicle control module 220 controls vehicle 110-1 to move.
FIG. 8 illustrates a schematic block diagram of an example device 800 that may be used to implement embodiments of the present disclosure. The device 800 may be used to implement the methods 300, 400, 600, and 700 shown in FIGS. 3-4 and 6-7. As shown in FIG. 8, the device 800 includes a central processing unit (CPU) 801 that may perform various appropriate actions and processes in accordance with computer program instructions stored in a read-only memory (ROM) 802 or loaded from a storage unit 808 into a random access memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The CPU 801, the ROM 802, and the RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, an output unit 807, a storage unit 808, and a communication unit 809. The processing unit 801 performs the various methods and processes described above, e.g., the methods 300, 400, 600, and 700. For example, in some embodiments, the methods 300, 400, 600, and 700 may be implemented as a computer software program stored on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the CPU 801, one or more of the operations of the methods 300, 400, 600, and 700 described above may be performed. Alternatively, in other embodiments, the CPU 801 may be configured by any other suitable means (e.g., by way of firmware) to perform one or more of the acts of the methods 300, 400, 600, and 700.
It should be further appreciated that the present disclosure may be embodied as methods, apparatus, systems, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for carrying out various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing the state information of the computer-readable program instructions, and the electronic circuitry can execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor in a voice interaction device, a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
The above are merely optional embodiments of the present disclosure and are not intended to limit the present disclosure; for those skilled in the art, the present disclosure may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present disclosure shall be included in the protection scope of the present disclosure.