Electronic signage system


1. An electronic signage system, comprising:

one or more vehicles;

a display that is provided on the vehicle and displays an image in one or more display areas that can be visually confirmed from outside the vehicle; and

an electronic signage controller,

the electronic signage controller being configured to function as:

an information collection unit that collects, as surrounding information, information on at least one of a viewing vehicle and an occupant of the viewing vehicle by communicating with the viewing vehicle, the viewing vehicle being another vehicle located at a position facing the display area;

an image selecting unit that selects an image to be displayed in the display area based on the surrounding information; and

a display control unit that causes the selected image to be displayed in the display area.

2. The electronic signage system of claim 1, wherein

the one or more display areas include at least a display area provided at a position that can be visually recognized from the rear of the vehicle.

3. The electronic signage system of claim 1 or 2, wherein

the electronic signage controller selects the image based on attribute information of at least one of the viewing vehicle and the occupant of the viewing vehicle obtained from the surrounding information, and prohibits use of the behavior history of the viewing vehicle and the occupant of the viewing vehicle for the selection of the image.

4. The electronic signage system of any one of claims 1 to 3, wherein

the electronic signage controller corrects the image to be displayed in the display area based on at least one of a relative positional relationship and a relative speed relationship between the vehicle and the viewing vehicle.

5. The electronic signage system of any one of claims 1 to 4, wherein

the viewing vehicle includes a user interface for inputting an operation instruction for an image displayed in the display area of the vehicle, and

the electronic signage controller performs processing corresponding to the operation instruction.

6. The electronic signage system of any one of claims 1 to 5, wherein

the electronic signage controller includes:

a center-side controller provided at a management center; and

a vehicle-side controller mounted on the vehicle and communicating with the center-side controller, wherein

the vehicle-side controller acquires the surrounding information by communicating with the viewing vehicle, and transmits the acquired surrounding information to the center-side controller.

Background

Conventionally, there is known a technology for utilizing a vehicle as a portable digital electronic sign by providing a display on an outer surface of the vehicle and displaying an image on the display. For example, patent document 1 discloses a technique of providing a display on an exterior surface of a vehicle and displaying an image for advertisement on the display. Further, patent document 2 discloses a technique of photographing a license plate of a following vehicle, discriminating a registered place or the like of the following vehicle based on the license plate, and displaying an advertisement image corresponding to the discrimination result on a display portion provided at the rear of the vehicle.

Prior art documents

Patent document

Patent document 1: Japanese Patent Laid-Open Publication No. 2019-117215

Patent document 2: Japanese Patent Application Laid-Open No. 2010-237411

However, patent document 1 does not make clear how the image to be displayed is selected. Further, although patent document 2 selects the advertisement image based on the license plate of the vehicle, the information that can be grasped from a license plate is limited. As a result, the conventional technology cannot always display an advertisement image suitable for the occupants of surrounding vehicles.

In addition, in recent years, the application of IT to vehicles has been advancing, and a connected car has been proposed in which the vehicle can communicate not only with a specific management center but also with various other devices. In such connected vehicles, for example, vehicle-to-vehicle communication (V2V) between the vehicle and another vehicle, vehicle-to-infrastructure communication (V2I) between the vehicle and infrastructure equipment provided on the road, vehicle-to-pedestrian communication (V2P) between the vehicle and terminals carried by pedestrians, and the like are implemented. Methods for effectively utilizing the information obtained by such networking techniques have not been sufficiently studied.

Therefore, the present specification discloses an electronic signage system that can further improve the effects of advertisement promotion and attention reminding by utilizing the information obtained by such networking technology.

Disclosure of Invention

An electronic signage system disclosed in this specification is characterized by comprising: one or more vehicles; a display that is provided on the vehicle and displays an image in one or more display areas that can be visually confirmed from outside the vehicle; and an electronic signage controller configured to function as: an information collection unit that collects, as surrounding information, information on at least one of a viewing vehicle and an occupant of the viewing vehicle by communicating with the viewing vehicle, the viewing vehicle being another vehicle located at a position facing the display area; an image selecting unit that selects an image to be displayed in the display area based on the surrounding information; and a display control unit that causes the selected image to be displayed in the display area.

With this configuration, an image more suitable for the occupant of the viewing vehicle can be selected, so the effects of advertisement promotion and attention reminding can be further enhanced.

In this case, the one or more display areas may include at least a display area provided at a position that can be visually recognized from the rear of the vehicle.

In general, a following vehicle can continue to observe a vehicle for a longer period of time than a parallel-running vehicle. By providing the display area at a position where it can be visually confirmed by the following vehicle and displaying the image there, the effects of advertisement promotion and attention reminding can be further improved.

Further, the electronic signage controller may select the image based on attribute information of at least one of the viewing vehicle and the occupant of the viewing vehicle obtained from the surrounding information, and may prohibit use of the behavior history of the viewing vehicle and the occupant of the viewing vehicle for the selection of the image.

Unlike web advertisements, the display area of an electronic signage system is easily observable by people other than the occupant of the viewing vehicle. In this case, if an image corresponding to the behavior history of the viewing vehicle or its occupant is displayed in the display area, the privacy of the occupant of the viewing vehicle may not be appropriately protected. By prohibiting the use of the behavior history for the selection of the image as described above, the privacy of the occupant can be appropriately protected.

In addition, the electronic signage controller may correct the image to be displayed in the display area based on at least one of a relative positional relationship and a relative speed relationship between the vehicle and the viewing vehicle.

By changing the size of video or the font size of characters contained in the image, or by changing the amount of information contained in the image, based on at least one of the relative positional relationship and the relative speed relationship, the image becomes easier to view from the viewing vehicle, and the effects of advertisement promotion and attention reminding can be further enhanced.

In addition, the viewing vehicle may include a user interface for inputting an operation instruction with respect to an image displayed in the display area of the vehicle, and the electronic signage controller may execute processing according to the operation instruction.

By receiving actions from the occupant of the viewing vehicle, the effects of advertisement promotion and attention reminding can be further improved.

In addition, the electronic signage controller may include: a center-side controller provided at a management center; and a vehicle-side controller mounted on the vehicle and communicating with the center-side controller, the vehicle-side controller acquiring the surrounding information by communicating with the viewing vehicle and transmitting the acquired surrounding information to the center-side controller.

By configuring the vehicle-side controller to communicate with the viewing vehicle and acquire the surrounding information, the processing for acquiring the surrounding information can be simplified compared with a configuration in which the center-side controller identifies the viewing vehicle and communicates with it to acquire the surrounding information.

According to the electronic signage system disclosed in the present specification, the information obtained by networking technology can be utilized to further enhance the effects of advertisement promotion and attention reminding.

Drawings

Fig. 1 is a block diagram showing the functional structure of the electronic signage system.

Fig. 2 is a block diagram showing a physical structure of the electronic signage system.

Fig. 3 is a perspective view of the vehicle.

Fig. 4 is a diagram showing the flow of processing of the electronic signage system.

Fig. 5 is a diagram showing the relationship between the relative position of the display vehicle and the viewing vehicle and the image displayed in the display area.

Fig. 6 is a diagram showing an example of an image displayed in the display area and the U/I.

Detailed Description

Hereinafter, the structure of the electronic signage system 10 will be described with reference to the drawings. Fig. 1 is a functional block diagram of an electronic signage system 10, and fig. 2 is a physical block diagram of the electronic signage system 10. Further, fig. 3 is a perspective view of the vehicle 12 used in the electronic signage system 10.

The electronic signage system 10 is a system that uses the vehicle 12 as mobile digital signage by displaying an image via a display 16 provided on the vehicle 12 in a display area 17 that can be visually recognized from outside (for example, a display area 17 provided on the rear surface of the vehicle 12, see fig. 3). The image displayed in the display area 17 of one vehicle 12 is viewed by occupants of other vehicles located around that vehicle 12. Hereinafter, the vehicle 12 that displays the image is referred to as the "display vehicle", and the vehicle carrying the occupant who views the image is referred to as the "viewing vehicle". One vehicle 12 can serve as both a display vehicle and a viewing vehicle. For example, when the one vehicle 12 is displaying an image for a following vehicle on its rear surface while an image is displayed on the rear surface of the vehicle preceding it, the one vehicle 12 serves as a viewing vehicle as well as a display vehicle.

Next, the structure of the vehicle 12 will be described with reference to fig. 1. When the vehicle 12 becomes a display vehicle, the information collection unit 24, the sensor group 26, the display control unit 28, and the display 16 are used. The information collection unit 24 communicates with another vehicle located in the periphery of the vehicle 12, that is, a vehicle serving as a viewing vehicle, and collects information on the viewing vehicle or its occupant as the surrounding information 80 (see fig. 4). The surrounding information 80 includes at least identification information of the viewing vehicle or of its occupant. The information collection unit 24 transmits the collected surrounding information 80 to the management center 14.

The information collection unit 24 also collects operation data 88 (see fig. 4) by communicating with the viewing vehicle. The operation data 88, described later, is data indicating the content of an instruction input by the occupant of the viewing vehicle operating the U/I 32 mounted on the viewing vehicle. The information collection unit 24 also transmits the collected operation data 88 to the management center 14.

The sensor group 26 includes one or more sensors mounted on the vehicle 12. The sensor group 26 may have, for example, a position sensor (e.g., GPS) that detects the position of the vehicle 12. The detection result of the position sensor may be transmitted to the management center 14 together with the surrounding information 80. The sensor group 26 may also include a surrounding environment sensor that detects the surrounding environment of the vehicle 12. Examples of the surrounding environment sensor include a camera, a laser radar (Lidar), a millimeter-wave radar, a sonar, and a magnetic sensor. The vehicle 12 acquires at least one of the relative position and the relative speed with respect to the viewing vehicle based on the detection result of the surrounding environment sensor. The acquired relative position and relative speed are sent to the display control unit 28.

The display control unit 28 controls driving of the display 16. Specifically, the display control unit 28 causes the display 16 to display the image data 84 transmitted from the management center 14. The display control unit 28 corrects the image data 84 based on at least one of the relative positional relationship and the relative speed relationship with the viewing vehicle, which will be described later.

The display 16 displays an image in the display area 17 (see fig. 3) that can be visually recognized from outside the vehicle 12. As the display 16, for example, a display panel disposed on an outer surface of the vehicle 12 can be used. In this case, the display panel itself functions as the display area 17. When the display 16 is a display panel, it may be mounted on the exterior surface of the vehicle 12 as shown in fig. 3, or on the interior surface of the window glass of the vehicle 12. The display 16 may also be a transparent display disposed in a window opening of the vehicle 12 in place of the window glass. In addition, when the occupant cannot observe the outside scenery through the window because a display is disposed over it, the vehicle 12 may be provided with a camera for capturing the outside scenery, and a display for showing the captured image may be provided on the inner surface of the vehicle cabin.

The display 16 may be a projector that projects an image on a portion of the vehicle 12 (e.g., a hood, trunk lid, etc.) or a road surface. In this case, the area where the image is projected by the projector is the display area 17.

The vehicle 12 has at least one display area 17 provided at a position that can be visually confirmed from behind the vehicle 12. In general, a following vehicle can continuously observe one vehicle for a longer time than a parallel-running vehicle. By providing the display area 17 at a position where the following vehicle can visually confirm it and displaying the image there, the effects of advertisement promotion and attention reminding can be further improved. The display area 17 may be provided on a plurality of surfaces of the vehicle 12. For example, the display areas 17 may be provided on the rear surface and the side surface of the vehicle 12 as shown in fig. 3. In this case, the display area 17 on the rear surface and the display area 17 on the side surface may display the same image or different images.

The number of viewing vehicles for one display vehicle is not limited to one, and may be plural. For example, when the display vehicle is the vehicle 12 shown in fig. 3, a following vehicle that can observe the image in the rear display area 17 and a parallel-running vehicle that can observe the image in the side display area 17 both become viewing vehicles. In this case, the vehicle 12 collects the surrounding information 80 of each of the following vehicle and the parallel-running vehicle and transmits it to the management center 14. The image data 84 selected based on the surrounding information 80 of the following vehicle and the image data 84 selected based on the surrounding information 80 of the parallel-running vehicle are transmitted from the management center 14 to the vehicle 12. The vehicle 12 displays the received image data 84 in the respective display areas 17.
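As a minimal, non-normative sketch of this per-display-area handling, the following Python snippet associates each display area with one viewing vehicle and shows the image selected for it; the class and function names (DisplayArea, collect_surrounding_info, select_image, show) are hypothetical and not taken from the specification.

```python
from dataclasses import dataclass

@dataclass
class DisplayArea:
    position: str            # e.g. "rear" or "side"
    viewing_vehicle_id: str  # identifier obtained via inter-vehicle communication

def update_display_areas(areas, collect_surrounding_info, select_image, show):
    """Collect surrounding information 80 per viewing vehicle and display
    the image data 84 selected for it in the matching display area 17."""
    for area in areas:
        info = collect_surrounding_info(area.viewing_vehicle_id)
        image = select_image(info)  # selection is performed at the management center 14
        show(area.position, image)
```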

The information providing unit 30 and the U/I 32 are used when the vehicle 12 becomes a viewing vehicle. The information providing unit 30 communicates with the display vehicle and provides it with information on at least one of the host vehicle and the occupant of the host vehicle, that is, with the surrounding information 80 for the display vehicle. The information providing unit 30 also communicates with the display vehicle to provide it with data indicating the instruction content input via the U/I 32, that is, with the operation data 88 for the display vehicle.

The U/I 32 is a component for receiving an operation instruction from an occupant of the host vehicle 12. The U/I 32 may be of a contact type or a voice-input type. In the case of a contact type, the U/I 32 includes at least one of a switch and a touch panel. In the case of a voice-input type, the U/I 32 includes a microphone for collecting voice instructions. The configuration of the U/I 32, for example, the functions assigned to the switches, the number and functions of virtual switches displayed on the touch panel, the type of voice command that can be input, and the like, may be fixed or changeable.

As shown in fig. 2, the vehicle 12 described above physically has a vehicle controller 50 that includes a processor 60, a storage device 62, and a communication I/F 64. The vehicle controller 50 and a central controller 52 described below constitute the electronic signage controller 18, which collects the surrounding information 80 and causes the display 16 to display a predetermined image based on the surrounding information 80.

The vehicle controller 50 is a computer having a processor 60, a storage device 62, a communication I/F 64, and a data bus 65. The term "computer" also includes a microcontroller in which a computer system is built into a single integrated circuit. The processor 60 refers to a processor in a broad sense, and includes a general-purpose processor (e.g., a CPU: Central Processing Unit) or a dedicated processor (e.g., a GPU: Graphics Processing Unit, an ASIC: Application Specific Integrated Circuit, an FPGA: Field Programmable Gate Array, a programmable logic device, etc.).

Further, the storage device 62 may also include at least one of a semiconductor memory (e.g., RAM, ROM, solid state drive, etc.) and a magnetic disk (e.g., hard disk drive, etc.).

The communication I/F 64 enables communication with various devices outside the vehicle 12. The communication I/F 64 may support a plurality of communication standards. For example, the communication I/F 64 may have a communication device that performs internet communication via a wireless LAN such as WiFi (registered trademark) or via mobile data communication provided by a mobile phone carrier. Further, the communication I/F 64 may have a communication device (an antenna or the like) for DSRC (Dedicated Short Range Communications) that communicates with other vehicles and with roadside infrastructure equipment without going through the internet. The vehicle controller 50 transmits and receives various data to and from the management center 14 and other vehicles via the communication I/F 64. The display 16, the sensor group 26, and the U/I 32 mounted on the vehicle 12 are connected to the processor 60 via the data bus 65, and send and receive various signals via the data bus.

Next, the management center 14 will be described with reference to fig. 1. The management center 14 includes an image selecting unit 40, a display instructing unit 42, an operation processing unit 44, an image DB 46, and a vehicle DB 48. The image selecting unit 40 selects an image to be displayed in the display area 17 of the vehicle 12 based on the surrounding information 80 transmitted from the vehicle 12. To make this selection, the image selecting unit 40 refers to the vehicle DB 48 and the image DB 46.

The vehicle DB 48 records various information related to the vehicle 12 and the occupant of the vehicle 12. Specifically, the vehicle DB 48 stores the attributes of the vehicle 12 and the attributes of the occupant in association with each other. The attributes of the vehicle 12 include, for example, identification information of the vehicle 12, the vehicle type, class, registration place, registration date, whether the vehicle is a rental car, whether the vehicle is owned by a legal person, and the like. The attribute information of the occupant includes, for example, identification information of the occupant, age, sex, family structure, occupation, place of residence, place of birth, and the like. The attribute information of the vehicle 12 and the occupant may be acquired by having the occupant register it in advance using an information terminal. The attribute information may also be acquired or updated automatically based on information obtained by a dealer or a repair shop at the time of sale or maintenance inspection of the vehicle 12. In addition, when the vehicle 12 is a rental car, the user of the rental car may be registered as the occupant based on information acquired by the rental car operator when the vehicle 12 is rented. When the vehicle 12 is not a rental car but a vehicle owned by a legal person, the legal person may be registered as the occupant.
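As an illustration only, a record of the kind described above might be represented as follows; all class and field names are hypothetical and are not taken from the specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VehicleAttributes:
    vehicle_id: str
    vehicle_type: str
    vehicle_class: str
    registration_place: str
    registration_date: str
    is_rental: bool = False
    corporate_owner: Optional[str] = None  # set when the vehicle is owned by a legal person

@dataclass
class OccupantAttributes:
    occupant_id: str
    age: int
    sex: str
    family_structure: str
    occupation: str
    place_of_residence: str
    place_of_birth: str

@dataclass
class VehicleRecord:
    """One entry of the vehicle DB 48: vehicle and occupant attributes stored in association."""
    vehicle: VehicleAttributes
    occupants: list[OccupantAttributes]
```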

In addition to the attributes of the vehicle 12 and the occupant, the vehicle DB 48 may also record behavior histories, settlement information, and mail addresses of the vehicle and the occupant. The behavior history of the vehicle 12 includes, for example, the travel route history of the vehicle 12, destinations registered in the car navigation system, and facilities or stores where the vehicle 12 has been parked for a certain time or more. The behavior history of the occupant includes the search history and the purchase history of the occupant. Such an occupant behavior history is acquired based on, for example, the operation history of an information terminal associated with the vehicle 12 and settlement information from electronic settlement or electronic money associated with the vehicle 12. When information on a settlement method, a mail address, and the like associated with the vehicle 12 or the occupant exists, these pieces of information are also recorded in the vehicle DB 48.

The information recorded in the vehicle DB 48 is managed by being divided into several protection levels, and the protection level that may be referred to is determined according to the purpose of use. In the present example, the behavior history of the vehicle 12 and the occupant is set to a higher protection level than the attribute information. When selecting an image to be displayed on the vehicle 12, the image selecting unit 40 refers only to the attribute information of the vehicle 12 and the occupant, and cannot refer to the behavior history, which has a higher protection level. As another aspect, the occupant of the viewing vehicle may be able to set in advance the level of information that may be used for the selection of the image data 84.

The image DB 46 stores a plurality of pieces of image data 84 and selection conditions for the image data 84. The image data 84 is data of an image to be displayed in the display area 17 of the vehicle 12. The image displayed in the display area 17 may be a still image or a moving image. The image may be, for example, an image for advertisement, or an image showing evacuation information, rescue information, and the like when a disaster occurs.

Each piece of image data 84 is associated with a selection condition. The selection condition includes at least a target person condition for the corresponding image. The target person condition is information that defines the condition of the persons intended to observe the corresponding image, and is a condition that can be determined from the attribute information of the vehicle and the occupant recorded in the vehicle DB 48. The target person condition therefore specifies, for example, the age, sex, family structure, occupation, place of residence, place of birth, vehicle type, class, and the like of the persons intended to observe the image. For example, for image data 84 used in an advertisement by a men's clothing maker, "sex: male" is set as the target person condition. For image data 84 that notifies an evacuation place when a disaster occurs, the target person condition may be set such that the place of residence is within the disaster area.

In addition, the selection condition may further include a region condition, a time condition, and a priority for the corresponding image. For example, for image data 84 to be displayed only in the Kanto region, "Kanto region" is set as the region condition. In order to determine whether the display vehicle meets such a region condition, the display vehicle may transmit the position information of the host vehicle together with the surrounding information to the management center 14. For image data 84 to be displayed only at night, "night" is set as the time condition. The priority indicates the priority of the corresponding image data 84, and image data 84 with a higher priority is selected preferentially. The priority is determined based on, for example, the compensation obtained when the advertisement image data is displayed, and the degree of urgency or public utility of the image data 84. For example, image data 84 with higher compensation may be given a higher priority, and image data 84 with higher urgency or public utility, such as evacuation information at the time of a disaster, may be given a higher priority.
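Purely as an illustration, an image DB 46 entry with such a selection condition might look as follows; the field names and the encoding of the conditions are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SelectionCondition:
    target_person: dict = field(default_factory=dict)  # e.g. {"sex": "male"}
    region: Optional[str] = None                        # e.g. "Kanto region"
    time_of_day: Optional[str] = None                   # e.g. "night"
    priority: int = 0                                    # higher values are selected preferentially

@dataclass
class ImageEntry:
    """One entry of the image DB 46: image data 84 plus its selection condition."""
    image_id: str
    image_data: bytes
    condition: SelectionCondition
```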

The image selecting unit 40 refers to the vehicle DB 48 and the image DB 46 and selects the image data 84 to be displayed in the display area 17 of the display vehicle. Specifically, the image selecting unit 40 collates the surrounding information 80 transmitted from the display vehicle with the vehicle DB 48 and acquires the attribute information of the viewing vehicle and its occupant. Next, the image selecting unit 40 collates the acquired attribute information with the image DB 46 and selects image data 84 suitable for display; that is, it selects image data 84 that has a high priority and for which a target person condition matching the attributes of the viewing vehicle or its occupant is set. The image selecting unit 40 passes the selected image data 84 to the display instructing unit 42, and the display instructing unit 42 transmits the image data 84 to the display vehicle.
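The following sketch outlines one way the matching just described could be implemented, assuming attributes and conditions are represented as flat dictionaries; note that only attribute-level information is consulted, never the behavior history, in line with the protection levels described above.

```python
def select_image(surrounding_info, vehicle_db, image_db):
    """Pick the image data 84 for one viewing vehicle from its attributes only."""
    # 1. Resolve the viewing vehicle / occupant attributes from the vehicle DB 48.
    record = vehicle_db[surrounding_info["viewing_vehicle_id"]]
    attributes = {**record["vehicle_attributes"], **record["occupant_attributes"]}

    # 2. Keep entries whose target person condition matches the attributes.
    def matches(entry):
        condition = entry["condition"]["target_person"]
        return all(attributes.get(key) == value for key, value in condition.items())

    candidates = [entry for entry in image_db if matches(entry)]
    if not candidates:
        return None

    # 3. Among matching entries, prefer the one with the highest priority.
    return max(candidates, key=lambda entry: entry["condition"]["priority"])
```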

The operation processing unit 44 executes predetermined processing based on the operation data 88. As described above, the operation data 88 is data indicating the content of the operation instruction input by the occupant of the viewing vehicle operating the U/I 32. The processing performed by the operation processing unit 44 according to the operation instruction will be described later.

As shown in fig. 2, the management center 14 described above physically has a central controller 52 that includes a processor 66, a storage device 68, and a communication I/F 70. As described above, the central controller 52 and the vehicle controller 50 constitute the electronic signage controller 18.

Like the vehicle controller 50, the central controller 52 is a computer having a processor 66, a storage device 68, a communication I/F 70, and a data bus 71. The term "computer" also includes a microcontroller in which a computer system is built into a single integrated circuit. The processor 66 likewise refers to a processor in a broad sense, including general-purpose processors and dedicated processors.

The storage device 68 may include at least one of a semiconductor memory (e.g., RAM, ROM, a solid state drive, etc.) and a magnetic disk (e.g., a hard disk drive, etc.). The storage device 68 need not be physically located in the same place as the processor 66 and may include storage on the cloud. The communication I/F 70 enables communication with various devices outside the management center 14 and may include, for example, a communication device that performs internet communication.

Next, the flow of processing in the electronic signage system 10 will be described with reference to fig. 4. Fig. 4 is a diagram showing the flow of processing of the electronic signage system 10. In the electronic signage system 10, a vehicle 12 located at a position facing the display area 17 of the display vehicle 12a becomes the viewing vehicle 12b. When the display area 17 is disposed on the rear surface of the display vehicle 12a, the vehicle following the display vehicle 12a becomes the viewing vehicle 12b.

The display vehicle 12a outputs a request for the surrounding information 80 to the viewing vehicle 12b. The viewing vehicle 12b receives the request and transmits the surrounding information 80 to the display vehicle 12a.

The display vehicle 12a transmits the acquired surrounding information 80 to the management center 14. The management center 14 collates the surrounding information 80 with the vehicle DB 48 and extracts the attribute information of the viewing vehicle 12b and its occupant. The extracted attribute information is then collated with the image DB 46, and the image data 84 to be displayed on the display vehicle 12a is selected accordingly. As described above, the image data 84 is selected without referring to the behavior history of the viewing vehicle 12b and its occupant. This is because, in the electronic signage system 10, which displays an image on the vehicle 12, the image is easily observed by people other than the occupant of the viewing vehicle 12b, unlike a web advertisement. If an image corresponding to the behavior history of the viewing vehicle 12b and its occupant were displayed, personal information might be leaked or privacy might not be appropriately protected. Therefore, the image to be displayed on the vehicle 12 is selected based on broader information such as the attribute information.

When the selection of the image data 84 is completed, the management center 14 transmits the image data 84 to the display vehicle 12a, and the display vehicle 12a displays the received image data 84 in the display area 17. Here, the display form of the image displayed in the display area 17 may be corrected based on at least one of the relative position and the relative speed of the viewing vehicle 12b with respect to the display vehicle 12a.
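A rough sketch of this flow from the display vehicle's side, assuming hypothetical interfaces (v2v, center, sensors, display) and with the correction step detailed in the paragraphs that follow, might look like this.

```python
def display_vehicle_cycle(v2v, center, sensors, display,
                          correct_image=lambda image, relative: image):
    """One cycle: gather surrounding information 80, obtain image data 84, show it."""
    surrounding_info = v2v.request_surrounding_info()  # from the viewing vehicle 12b
    image = center.select_image(surrounding_info)      # selection at the management center 14
    relative = sensors.relative_state()                # relative position / relative speed
    display.show(correct_image(image, relative))       # corrected display form, if applicable
```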

For example, the size of at least one of the video and the characters included in the image may be reduced as the inter-vehicle distance becomes shorter. That is, as shown in fig. 5, when the inter-vehicle distance between the display vehicle 12a and the viewing vehicle 12b is short (the case on the left side of fig. 5), the size of the video and the font size of the characters included in the image displayed in the display area 17 may be reduced so that more information can be displayed. With this configuration, more information can be provided to the occupant of the viewing vehicle 12b.

Conversely, when the inter-vehicle distance between the display vehicle 12a and the viewing vehicle 12b is long (the case on the right side of fig. 5), the size of the video and the font size of the characters included in the image displayed in the display area 17 may be increased so that the image remains easily visible even at a distance. With this configuration, necessary information can be provided more reliably to the occupant of the viewing vehicle 12b.

Further, the amount of text information included in the image may be increased as the relative speed between the display vehicle 12a and the viewing vehicle 12b becomes smaller. That is, when the relative speed is small, the occupant of the viewing vehicle 12b can observe the image in the display area 17 relatively stably, so the text information included in the image can be increased. Conversely, when the relative speed between the display vehicle 12a and the viewing vehicle 12b is large, the distance between the occupant of the viewing vehicle 12b and the display area 17 changes constantly, and relatively fine elements of the image, such as characters, become difficult to make out. In this case, the text information included in the image can therefore be reduced.
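As a minimal sketch of this kind of correction, assuming hypothetical threshold values and layout fields, the font scale could grow with the inter-vehicle distance while the amount of text shrinks as the relative speed grows.

```python
def correct_image(layout, distance_m, relative_speed_mps,
                  near_threshold_m=20.0, slow_threshold_mps=2.0):
    """Adjust the display form of the image based on relative position and speed."""
    corrected = dict(layout)
    if distance_m <= near_threshold_m:
        # Short inter-vehicle distance: smaller video/font so more information fits.
        corrected["font_scale"] = 1.0
        corrected["detail_level"] = "full"
    else:
        # Long inter-vehicle distance: larger video/font so the image stays legible.
        corrected["font_scale"] = 1.5
        corrected["detail_level"] = "summary"
    if abs(relative_speed_mps) > slow_threshold_mps:
        # Large relative speed: reduce text, since fine characters are hard to read.
        corrected["detail_level"] = "summary"
    return corrected
```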

The occupant of the viewing vehicle 12b views the image displayed on the display vehicle 12a and, if necessary, operates the U/I 32 to input a predetermined operation instruction. This will be explained with reference to fig. 6. Fig. 6 is a diagram showing an example of the image displayed in the display area 17 of the display vehicle 12a and the U/I 32 of the viewing vehicle 12b. As shown in fig. 6, the display area 17 of the display vehicle 12a displays an image advertising a stuffed bear toy together with options 90 reading "1: Purchase, 2: Send information, 3: Change advertisement".

In this case, the U/I 32 of the viewing vehicle 12b is configured so that the occupant can respond to these options. For example, when the U/I 32 mounted on the viewing vehicle 12b includes a touch panel, three virtual switches 92a to 92c corresponding to the options "1", "2", and "3" are displayed on the touch panel.

The occupant of the viewing vehicle 12b operates the U/I 32 to input a predetermined operation instruction. The content of the operation instruction is transmitted from the viewing vehicle 12b to the display vehicle 12a as the operation data 88, and the display vehicle 12a transmits the received operation data 88 to the management center 14. The management center 14 parses the received operation data 88 and executes processing in accordance with its content. In the example of fig. 6, when "1: Purchase" is selected, the management center 14 executes a purchase procedure for the product advertised by the corresponding image data 84, using the credit card information of the occupant and the like recorded in the vehicle DB 48 as necessary. When "2: Send information" is selected, the management center 14 transmits the URL of a website storing detailed information on the advertised product to the mail address held by the occupant of the viewing vehicle 12b, using the mail address information of the occupant recorded in the vehicle DB 48 as necessary. When "3: Change advertisement" is selected, or when no operation is performed for a certain time or more, the management center 14 instructs the display vehicle 12a to display other image data 84.
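The dispatch performed by the operation processing unit 44 for these options could be sketched as follows; the handler names, the occupant-record fields, and the placeholder URL are hypothetical.

```python
def process_operation(operation_data, occupant_record,
                      purchase, send_mail, change_advertisement):
    """Execute processing according to the operation instruction carried by operation data 88."""
    choice = operation_data.get("selected_option")  # "1", "2", "3", or None when no input
    if choice == "1":
        # 1: Purchase - run the purchase procedure, using stored credit card info if needed.
        purchase(occupant_record.get("credit_card"))
    elif choice == "2":
        # 2: Send information - mail the URL of the product's detail page (placeholder URL).
        send_mail(occupant_record.get("mail_address"), "https://example.com/product-details")
    else:
        # 3: Change advertisement, or no operation for a certain time: show other image data 84.
        change_advertisement()
```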

Of course, the viewing vehicle 12b may not have an adequate U/I 32 mounted on it. In that case, no options are displayed in the image shown in the display area 17, and no operation data 88 is transmitted from the viewing vehicle 12b to the display vehicle 12a.

In this way, by acquiring the information on the viewing vehicle 12b as the surrounding information 80 through inter-vehicle communication and selecting the image to be displayed on the display vehicle 12a based on that surrounding information 80, an image more suitable for the occupant of the viewing vehicle 12b can be provided. An electronic signage system capable of further enhancing the effects of advertisement promotion and attention reminding is thus obtained.

The configuration described above is an example, and other aspects of the configuration may be changed as appropriate, as long as information on at least one of the viewing vehicle 12b and its occupant is collected as the surrounding information 80 through communication with the viewing vehicle 12b, and the image to be displayed in the display area 17 is selected based on the surrounding information 80. For example, although the selection of the image data 84 is performed at the management center 14 in the present example, the selection may also be performed on the vehicle 12 side. That is, the vehicle controller 50 may hold the vehicle DB 48 and the image DB 46 and select the image data 84 to be displayed in the display area 17 by referring to these DBs.

The attribute information and the like of the vehicles 12 may also be managed by each vehicle 12 rather than stored in the management center 14. For example, when the management center 14 receives the identification information of the viewing vehicle 12b from the display vehicle 12a, it identifies the viewing vehicle 12b based on that identification information. The management center 14 may then communicate with the identified viewing vehicle 12b to acquire the attribute information of the viewing vehicle 12b and its occupant.

Further, the surrounding information 80 may be collected by the management center 14 instead of the display vehicle 12a. For example, the central controller 52 may identify vehicles 12 that can become the viewing vehicle 12b based on the current location of the display vehicle 12a and communicate with them to collect the surrounding information 80. In this case, however, the management center 14 needs to keep track of the current locations of many vehicles, and the processing tends to become complicated. On the other hand, with the configuration described above in which the display vehicle 12a collects the surrounding information 80, the management center 14 does not need to search for vehicles 12 around the display vehicle 12a, and the processing can be simplified.

The viewing vehicles 12b encountered while the display vehicle 12a travels on various routes are not limited to vehicles 12 capable of inter-vehicle communication. When the viewing vehicle 12b is a vehicle that cannot perform inter-vehicle communication, a camera for photographing the license plate of the viewing vehicle 12b or the like may be mounted on the display vehicle 12a, and the image data 84 may be selected based on information obtained from the captured image.

Description of the symbols

10 … electronic signage system; 12 … vehicle; 12a … display vehicle; 12b … viewing vehicle; 14 … management center; 16 … display; 17 … display area; 18 … electronic signage controller; 24 … information collection unit; 26 … sensor group; 28 … display control unit; 30 … information providing unit; 40 … image selecting unit; 42 … display instructing unit; 44 … operation processing unit; 46 … image DB; 48 … vehicle DB; 50 … vehicle controller; 52 … central controller; 60, 66 … processor; 62, 68 … storage device; 64, 70 … communication I/F; 65, 71 … data bus; 80 … surrounding information; 84 … image data; 88 … operation data; 90 … options; 92a to 92c … virtual switches.
