Control method and apparatus for a smart device, electronic device, and storage medium
1. A control method for an intelligent device, comprising:
in response to a control interface receiving a touch operation instruction, determining a touch position corresponding to the touch operation instruction;
determining a display position of a position identifier in the control interface;
under the condition that the touch position is the display position of the position identifier, acquiring position information of the intelligent device associated with the position identifier;
and displaying the position information of the intelligent equipment in the control interface.
2. The method of claim 1, wherein the location information of the smart device comprises at least one of:
a panoramic image of a space where the intelligent device is located;
a planar image of a space where the intelligent device is located;
a three-dimensional model of a space in which the intelligent device is located; and
the location of the smart device in the space.
3. The method of claim 1, wherein the location information of the smart device is a virtual reality image of a space in which the smart device is located, and after the presenting the location information of the smart device, the method further comprises:
in response to receiving a virtual reality image display angle adjustment instruction, determining a target display angle corresponding to the adjustment instruction;
and adjusting the virtual reality image of the space where the intelligent equipment is located based on the target display angle.
4. The method of claim 3, wherein after the adjusting the virtual reality image of the space in which the smart device is located, further comprising:
displaying a preset control in the adjusted virtual reality image;
and responding to the received touch operation aiming at the preset control, and redisplaying the position information of the intelligent equipment in the control interface.
5. The method of claim 1, further comprising:
determining the display position of the intelligent equipment identifier in the control interface;
and under the condition that the touch position is the display position of the intelligent equipment identifier, displaying a control operation page of the intelligent equipment in the control interface.
6. The method of any of claims 1-5, further comprising:
receiving an image identification instruction, wherein the image identification instruction comprises an image to be identified and the type of a target intelligent device;
identifying the image based on the type of the target smart device to determine a location of the target smart device in the image;
generating position information of the target intelligent device according to the position of the target intelligent device in the image and the image;
and associating the position information of the target intelligent equipment with the position identification of the target intelligent equipment in the control interface.
7. The method of claim 6, wherein the receiving an image recognition instruction comprises:
determining that an image recognition instruction is received in the case where an image capturing end instruction is received;
or,
and in the case that the image uploading is determined to be successful, determining that an image identification instruction is received.
8. The method of claim 6, after the identifying the image based on the type of the target smart device, further comprising:
under the condition that the target intelligent equipment is not identified, displaying an intelligent equipment identification failure prompt message and an intelligent equipment addition control on a control interface;
displaying candidate intelligent equipment under the condition of receiving touch operation aiming at the intelligent equipment adding control;
in the case that a touch operation for a candidate smart device is received, adding the candidate smart device to the image according to the touch operation;
generating position information of the candidate intelligent equipment according to the position of the candidate intelligent equipment in the image and the image;
and associating the position information of the candidate intelligent equipment with the position identification of the candidate intelligent equipment in the control interface.
9. A control device of an intelligent device, comprising:
the first determining module is used for responding to a touch operation instruction received by a control interface and determining a touch position corresponding to the touch operation instruction;
the second determination module is used for determining the display position of the position identifier in the control interface;
the acquisition module is used for acquiring the position information of the intelligent equipment associated with the position identifier under the condition that the touch position is the display position of the position identifier;
and the display module is used for displaying the position information of the intelligent equipment in the control interface.
10. The apparatus of claim 9, wherein the location information of the smart device comprises at least one of:
a panoramic image of a space where the intelligent device is located;
a planar image of a space where the intelligent device is located;
a three-dimensional model of a space in which the intelligent device is located; and
the location of the smart device in the space.
11. The apparatus of claim 9, wherein the location information of the smart device is a virtual reality image of a space in which the smart device is located, the apparatus further comprising:
the third determination module is used for responding to the received virtual reality image display angle adjustment instruction and determining a target display angle corresponding to the adjustment instruction;
and the adjusting module is used for adjusting the virtual reality image of the space where the intelligent equipment is located based on the target display angle.
12. The apparatus of claim 11,
the display module is also used for displaying a preset control in the adjusted virtual reality image;
the display module is further configured to respond to the received touch operation for the preset control, and redisplay the position information of the intelligent device in the control interface.
13. The apparatus of claim 9,
the first determination module is further configured to determine a display position of an intelligent device identifier in the control interface;
the display module is further configured to display a control operation page of the intelligent device in the control interface when the touch position is the display position of the intelligent device identifier.
14. The apparatus of any of claims 9-13, further comprising:
a receiving module, configured to receive an image identification instruction, wherein the image identification instruction comprises an image to be identified and a type of a target smart device;
a fourth determination module, configured to identify the image based on the type of the target smart device to determine a location of the target smart device in the image;
a generating module, configured to generate position information of the target smart device according to the position of the target smart device in the image and the image; and
an association module, configured to associate the position information of the target smart device with the position identifier of the target smart device in the control interface.
15. The apparatus of claim 14, wherein the receiving module is specifically configured to:
determining that an image recognition instruction is received in the case where an image capturing end instruction is received;
or,
and in the case that the image uploading is determined to be successful, determining that an image identification instruction is received.
16. The apparatus of claim 14,
the display module is further used for displaying an intelligent equipment identification failure prompt message and an intelligent equipment adding control on a control interface under the condition that the target intelligent equipment is not identified;
the display module is further configured to display each candidate smart device when a touch operation for the smart device adding control is received;
the first determining module is further configured to, when a touch operation for any candidate smart device is received, add that candidate smart device to the image according to the touch operation;
the generating module is further configured to generate position information of the candidate smart device according to the position of the candidate smart device in the image and the image;
the association module is further configured to associate the location information of the candidate smart device with the location identifier of the candidate smart device in the control interface.
17. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to invoke and execute the memory-stored executable instructions to implement the control method of the smart device of any of claims 1-8.
18. A non-transitory computer-readable storage medium, instructions in which, when executed by a processor of an electronic device, enable the electronic device to perform the control method of the smart device of any one of claims 1-8.
Background
With the development of computer technology, smart homes have gradually become part of ordinary users' lives, and more and more smart devices appear in users' homes. When a user wants to control a particular smart device, there may be several smart devices of the same type, and the user may be unable to tell which one is the device to be controlled; the user may need many attempts to identify the correct smart device before being able to control it.
Disclosure of Invention
The present disclosure is directed to solving, at least to some extent, one of the technical problems in the related art.
An embodiment of a first aspect of the present disclosure provides a control method for an intelligent device, including:
in response to a control interface receiving a touch operation instruction, determining a touch position corresponding to the touch operation instruction;
determining a display position of a position identifier in the control interface;
under the condition that the touch position is the display position of the position identifier, acquiring position information of the intelligent device associated with the position identifier;
and displaying the position information of the intelligent equipment in the control interface.
An embodiment of a second aspect of the present disclosure provides a control apparatus for an intelligent device, including:
the first determining module is used for responding to a touch operation instruction received by a control interface and determining a touch position corresponding to the touch operation instruction;
the second determination module is used for determining the display position of the position identifier in the control interface;
the acquisition module is used for acquiring the position information of the intelligent equipment associated with the position identifier under the condition that the touch position is matched with the display position of the position identifier;
and the display module is used for displaying the position information of the intelligent equipment in the control interface.
An embodiment of a third aspect of the present disclosure provides an electronic device, including: a processor; a memory for storing executable instructions of the processor; the processor is configured to call and execute the executable instructions stored in the memory to implement the control method of the intelligent device proposed in the embodiment of the first aspect of the present disclosure.
A fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium, where instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the control method of an intelligent device set forth in the first aspect of the present disclosure.
A fifth aspect of the present disclosure provides a computer program product, where the computer program, when executed by a processor of an electronic device, enables the electronic device to execute the method for controlling an intelligent device provided in the first aspect of the present disclosure.
According to the control method and apparatus for a smart device, the electronic device, and the storage medium of the present disclosure, when the control interface receives a touch operation instruction, the touch position corresponding to the touch operation instruction is first determined, and the display position of the position identifier in the control interface is then determined. In the case that the touch position is the display position of the position identifier, the position information of the smart device associated with the position identifier is acquired and displayed in the control interface. In this way, the position identifier targeted by the touch operation can be accurately determined by comparing the touch position with the display position of the position identifier, and the position information of the associated smart device can be displayed. The user can thus intuitively see the position of the smart device and the space it is in, which helps the user quickly and accurately determine the smart device to be controlled, improves the accuracy and efficiency of controlling smart devices, and provides a good user experience.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
Fig. 1 is a flowchart of a control method of an intelligent device according to an embodiment of the present disclosure;
FIG. 1A is a schematic view of a control interface according to an embodiment of the present disclosure;
FIG. 1B is a schematic view of a control interface according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a control method of a smart device according to an embodiment of the present disclosure;
FIG. 2A is a schematic view of a control interface according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a control method of a smart device according to an embodiment of the present disclosure;
FIG. 3A is a schematic diagram of an image to be recognized according to an embodiment of the present disclosure;
FIG. 3B is a schematic diagram of an image to be recognized according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a control apparatus of an intelligent device according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and intended to be illustrative of the present disclosure, and should not be construed as limiting the present disclosure.
A control method and apparatus of an intelligent device, an electronic device, and a storage medium according to embodiments of the present disclosure are described below with reference to the accompanying drawings.
The control method of the intelligent device according to the embodiment of the present disclosure may be executed by the control apparatus of the intelligent device according to the embodiment of the present disclosure, and the apparatus may be configured in an electronic device.
For convenience of description, the control device of the smart device in the embodiment of the present disclosure may be simply referred to as "control device".
Fig. 1 is a schematic flowchart of a control method for an intelligent device according to an embodiment of the present disclosure.
As shown in fig. 1, the control method of the smart device may include the steps of:
step 101, responding to a control interface receiving a touch operation instruction, and determining a touch position corresponding to the touch operation instruction.
The control interface may be any interface capable of receiving a user touch operation, which is not limited in this disclosure.
In addition, the touch operation may be various, for example, the touch operation may be a click, a selection, a long press, a drag, and the like, which is not limited in this disclosure.
It is understood that when the user performs a touch operation in the control interface, the user touches a certain position in that interface. Therefore, upon acquiring the user's touch operation on the control interface, the control apparatus can determine the corresponding touch position according to that operation.
For example, a two-dimensional coordinate system may be established with any point in the control interface as the origin of coordinates, such as the center point of the control interface. Then, when the user clicks a position in the control interface, the coordinates corresponding to the touch position can be determined according to the touch operation.
Or, each position area may be set in advance in the control interface, and when the touch operation is located in any area, the touch position corresponding to the touch operation may be determined to be the any area.
It should be noted that the above examples are only examples, and cannot be used as limitations on the touch position, the manner of determining the touch position, and the like in the embodiments of the present disclosure.
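The region-based variant of determining the touch position can be sketched as follows. This is a minimal illustrative sketch, assuming regions are axis-aligned rectangles; the region names, rectangle representation, and function name are all assumptions for illustration and not part of the disclosure.

```python
# Illustrative sketch: resolving a touch point to a pre-divided region of
# the control interface. Regions are modeled as axis-aligned rectangles.

from typing import Dict, Optional, Tuple

Rect = Tuple[float, float, float, float]  # (left, top, right, bottom)


def resolve_touch_region(touch: Tuple[float, float],
                         regions: Dict[str, Rect]) -> Optional[str]:
    """Return the name of the first region containing the touch point."""
    x, y = touch
    for name, (left, top, right, bottom) in regions.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None  # touch fell outside every pre-divided region


regions = {"a1": (0, 0, 100, 50), "a2": (0, 50, 100, 100)}
print(resolve_touch_region((30, 20), regions))  # -> a1
```

A touch anywhere inside rectangle a1 is reported as region a1, matching the idea that the touch position is taken to be the area in which the operation lands.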
It is to be understood that the executing subject of the present disclosure may be a control apparatus of the smart device, or may also be any device or system configured with a control method of the smart device, and the like, which may be applied in any scenario of controlling the smart device, and the present disclosure is not limited thereto.
And step 102, determining the display position of the position identifier in the control interface.
The identifier in the control interface may be various, for example, the identifier may be a location identifier, a non-location identifier, and the like, which is not limited in this disclosure.
It can be understood that the location identifier may be an identifier associated with the location information in the control interface, and the location identifier may be triggered to display the location information of the smart device associated with the location identifier by triggering the location identifier, and the location identifier may be in any form or style, which is not limited in this disclosure.
There may be one or more position identifiers, which is not limited in this disclosure.
In addition, the non-location identifier may be any other identifier that is not associated with the location information in the control interface, for example, the non-location identifier may be a "cancel" identifier, a "return" identifier, and the like, which is not limited in this disclosure.
In addition, there are many ways to determine the display position of the position indicator in the control interface.
For example, a coordinate system may be established with any point of the control interface as the origin of coordinates. Therefore, the display position of each position mark can be determined according to the coordinates of each position mark in the control interface.
Or, areas may be divided in the control interface in advance, and when the position identifier is located in any area, the display position of the position identifier may be determined to be the any area.
It should be noted that the above examples are only illustrative, and cannot be taken as a limitation on the manner of determining the display position of the position indicator in the control interface in the embodiment of the present disclosure.
And 103, acquiring the position information of the intelligent device associated with the position identifier under the condition that the touch position is the display position of the position identifier.
For example, suppose there is a single position identifier, and a coordinate system is established with any point of the control interface as the origin of coordinates. If the coordinates corresponding to the touch position determined by the control apparatus are (x0, y0), and the coordinates of the display position of the position identifier are also (x0, y0), it may be determined that the touch position is the display position of the position identifier. Alternatively, if the control apparatus determines that the four vertices corresponding to the display position of the position identifier have coordinates (x1, y1), (x2, y2), (x3, y3), and (x4, y4), and (x0, y0) lies in the area enclosed by those four vertices, it may likewise be determined that the touch position is the display position of the position identifier, that is, that the touch operation acts on the position identifier.
Alternatively, suppose there are multiple position identifiers: position identifier a, position identifier b, and position identifier c. If the coordinates corresponding to the touch position determined by the control apparatus are (x6, y6), and the coordinates corresponding to the display positions of position identifier a, position identifier b, and position identifier c are (x5, y5), (x6, y6), and (x7, y7) respectively, it may be determined that the touch position is the display position of position identifier b, that is, that the touch operation acts on position identifier b. The position information of the smart device associated with position identifier b can then be acquired.
Alternatively, areas may be divided in the control interface in advance. For example, if the area corresponding to the touch position is a1 and the areas corresponding to position identifier z are a1 and a2, the touch position overlaps the display areas of position identifier z (the overlapping portion being a1). It can therefore be determined that the touch position is the display position of position identifier z, that is, that the touch operation acts on position identifier z, and the position information of the smart device associated with position identifier z can then be acquired.
It should be noted that the above examples are only examples, and cannot be taken as limitations on the display position manner for determining the touch position and the position identifier in the embodiments of the present disclosure.
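The four-vertex hit test described above can be sketched as a point-in-polygon check. This is a hedged sketch under the assumption that the four vertices are listed in order around a convex display region; the function name and sample coordinates are illustrative, not part of the disclosure.

```python
# Illustrative sketch: deciding whether the touch coordinate falls inside
# the quadrilateral spanned by the four vertices of a position identifier's
# display region. Uses a cross-product sign test, which assumes the
# vertices are given in order around a convex region.

from typing import List, Tuple

Point = Tuple[float, float]


def touch_hits_identifier(touch: Point, vertices: List[Point]) -> bool:
    """True if `touch` lies inside the convex polygon `vertices`."""
    signs = []
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        # Sign of the cross product of the edge vector and the vector
        # from the edge start to the touch point.
        cross = (x2 - x1) * (touch[1] - y1) - (y2 - y1) * (touch[0] - x1)
        signs.append(cross >= 0)
    # Inside if the touch point is on the same side of every edge.
    return all(signs) or not any(signs)


corners = [(10, 10), (110, 10), (110, 60), (10, 60)]
print(touch_hits_identifier((50, 30), corners))   # inside  -> True
print(touch_hits_identifier((200, 30), corners))  # outside -> False
```

An exact coordinate match, as in the first example above, is just the degenerate case where the "region" collapses to a single point.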
In addition, the location information of the smart device may be generated and stored in advance, which is not limited in this disclosure.
It can be understood that the location information of the smart device may be a panoramic image of a space where the smart device is located, or may also be a planar image of the space where the smart device is located, or may also be a three-dimensional model of the space where the smart device is located, or may also be a location of the smart device in the space where the smart device is located.
Optionally, the location information of the smart device may be one or more items described above, for example, the location information may be a panoramic image and a planar image of a space where the smart device is located, or may also be a three-dimensional model of the space where the smart device is located and a location of the smart device in the space, and the like, which is not limited in this disclosure.
In addition, the corresponding relation between the position identifier and the associated intelligent device can be set in advance and stored. For example, a position identifier a and the intelligent device a are set in advance and are correspondingly associated; the location identifier B and the smart device B are correspondingly associated, and the like, which is not limited in this disclosure.
For example, if it is determined that the touch position is the display position of the position identifier a, the smart device associated with the position identifier may be determined according to the association relationship between the position identifier a and the smart device, and then the position information of the smart device may be obtained according to the position information stored in advance, which is not limited in this disclosure.
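The pre-stored association described above amounts to two lookups: identifier to device, then device to stored position information. The sketch below is illustrative only; the identifier names, device names, and stored fields are hypothetical examples, not part of the disclosure.

```python
# Illustrative sketch of the pre-stored correspondence: a position
# identifier is bound to a smart device, and the device's position
# information is stored separately. All names and data are hypothetical.

identifier_to_device = {
    "identifier_A": "smart_device_A",
    "identifier_B": "smart_device_B",
}

device_location_info = {
    "smart_device_A": {"panorama": "living_room_pano.jpg",
                       "position": (2.4, 1.1)},
    "smart_device_B": {"panorama": "bedroom_pano.jpg",
                       "position": (0.8, 3.5)},
}


def location_info_for(identifier: str):
    """Resolve the device bound to the identifier, then fetch its info."""
    device = identifier_to_device.get(identifier)
    if device is None:
        return None
    return device_location_info.get(device)


print(location_info_for("identifier_A")["panorama"])  # -> living_room_pano.jpg
```

Once the touch is matched to identifier_A, the living-room panorama and the stored position of smart_device_A are what the control interface would display.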
In the embodiment of the disclosure, under the condition that the touch position is the display position of the position identifier, the position information of the intelligent device associated with the position identifier can be acquired according to the corresponding relationship between the position identifier and the intelligent device, so that a user can know the position information of the intelligent device, and the accuracy and efficiency of controlling the intelligent device are improved.
And 104, displaying the position information of the intelligent equipment in the control interface.
It can be appreciated that there are many situations in the control interface when the location information of the smart device is presented. For example, the position information of the smart device may be displayed in a scrolling manner in the control interface, or the position information of the smart device may be enhanced in brightness, and the like, which is not limited in this disclosure.
For example, if the position information of the smart device is a panoramic image of the space where the device is located together with the device's position in that space, the smart device in the panoramic image can be labeled in a different color or with enhanced brightness, and the panoramic image and the device's position in the space can be displayed in a scrolling manner.
Alternatively, if the position information of the smart device is a panoramic image and a planar image of the space where the device is located, the smart device in both images may be specially marked, for example filled with a preset color. The control apparatus can display the panoramic image and the planar image on the control interface, and the user can view them through a sliding operation or by enlarging them, so that the user clearly and intuitively understands the position of the smart device and can accurately determine the smart device to be controlled.
In the embodiment of the disclosure, the user can quickly and accurately determine the intelligent device to be controlled according to the position information of the intelligent device displayed on the control interface, and visually know the space where the intelligent device is located, and then can control, operate and the like the intelligent device according to the requirements, so that the accuracy and the efficiency of controlling the intelligent device are further improved.
It should be noted that the above examples are only examples, and should not be taken as limitations on the location information of the smart device and the manner of displaying the location information of the smart device in the embodiments of the present disclosure.
For example, in the schematic diagram of the control interface shown in fig. 1A, the smart device is a floor fan. Clicking the position identifier corresponding to the floor fan displays the fan's position information, which may be as shown in fig. 1B. The specific position of the floor fan in the living room can be seen from fig. 1B, so the user can clearly and intuitively understand the fan's position and accurately determine whether it is the smart device to be controlled. This saves the user's time and improves the accuracy and efficiency of controlling the smart device.
It should be noted that, in the actual use process, content, such as other identifiers, may be added to the control interface as needed, and this disclosure does not limit this.
According to the embodiments of the present disclosure, when the control interface receives a touch operation instruction, the touch position corresponding to the touch operation instruction can first be determined, and the display position of the position identifier in the control interface can then be determined. In the case that the touch position is the display position of the position identifier, the position information of the smart device associated with the position identifier can be acquired and displayed in the control interface. In this way, the position identifier targeted by the touch operation can be accurately determined by comparing the touch position with the display position of the position identifier, and the position information of the associated smart device can be displayed. The user can thus intuitively see the position of the smart device and the space it is in, which helps the user quickly and accurately determine the smart device to be controlled, improves the accuracy and efficiency of controlling smart devices, and provides a good user experience.
According to the embodiment, the display position of the position identifier corresponding to the touch operation can be determined by comparing the touch position with the display position of the position identifier, and then the position information of the intelligent device associated with the position identifier can be displayed, so that the intelligent device can be accurately controlled. In an actual implementation process, the position information of the smart device may also be a virtual reality image of a space where the smart device is located, and the virtual reality image may also be adjusted according to the received adjustment instruction, which is further described with reference to fig. 2.
Fig. 2 is a schematic flowchart of a control method of an intelligent device according to an embodiment of the present disclosure. As shown in fig. 2, the control method of the smart device may include the steps of:
step 201, in response to the control interface receiving the touch operation instruction, determining a touch position corresponding to the touch operation instruction.
Step 202, determining a display position of a position identifier in the control interface.
In a possible implementation manner, the display position of the smart device identifier in the control interface may also be determined, and then the control operation page of the smart device is displayed in the control interface when the touch position is the display position of the smart device identifier.
The smart device identifier may be any identifier, other than the position identifier, that represents the smart device in the control interface; there may be one identifier or multiple identifiers, which is not limited in this disclosure.
In addition, specific contents and implementation manners of determining the display position of the intelligent device identifier in the control interface may refer to specific contents of determining the display position of the position identifier in the control interface in each embodiment of the present disclosure, and details are not repeated here.
For example, if the coordinates of the determined touch position are (x1, y1), the coordinates of the display position of the position identifier are (x2, y2), and the coordinates of the display position of the smart device identifier are (x1, y1), it may be determined that the touch position is the display position of the smart device identifier. A control operation page of the smart device may then be displayed in the control interface, so that the user can control and operate the smart device on that page.
It should be noted that the above examples are only examples, and cannot be used as limitations on the touch position, the display position of the smart device identifier, the display position of the position identifier, and the like in the embodiments of the present disclosure.
In the embodiment of the disclosure, when the touch position is the display position of the smart device identifier, the control operation page of the smart device can be displayed in the control interface, so that the user can control and operate the smart device on that page. This facilitates user operation, improves efficiency, and meets the user's need to control the smart device.
It should be noted that, in the embodiment of the present disclosure, the touch position corresponding to the touch operation instruction and the display position of each identifier in the control interface may be determined according to any desirable manner in the related art, which is not limited in the present disclosure.
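The hit-test described above, comparing the touch position against the display position of each identifier in the control interface, can be sketched as follows. The class, field names, and coordinates are illustrative assumptions for this sketch, not details prescribed by the disclosure.

```python
# Hypothetical sketch: each identifier in the control interface occupies a
# rectangular display area; a touch hits an identifier if the touch point
# falls inside that area.

from dataclasses import dataclass

@dataclass
class Identifier:
    kind: str          # "position" or "device" identifier (assumed taxonomy)
    device_id: str
    x: float           # top-left corner of the identifier's display area
    y: float
    width: float
    height: float

def hit_test(touch_x, touch_y, identifiers):
    """Return the first identifier whose display area contains the touch point."""
    for ident in identifiers:
        if (ident.x <= touch_x <= ident.x + ident.width and
                ident.y <= touch_y <= ident.y + ident.height):
            return ident
    return None

identifiers = [
    Identifier("position", "camera-1", x=10, y=10, width=40, height=40),
    Identifier("device", "camera-1", x=60, y=10, width=40, height=40),
]

hit = hit_test(75, 30, identifiers)
# A "position" hit would lead to showing location information (steps 203-204),
# while a "device" hit would open the device's control operation page instead.
```

In this sketch a touch at (75, 30) lands on the device identifier, so the control operation page would be shown rather than the location information.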
Step 203, acquiring the position information of the intelligent device associated with the position identifier under the condition that the touch position is the display position of the position identifier.
Step 204, displaying the position information of the intelligent equipment in the control interface.

Step 205, in response to receiving the virtual reality image display angle adjustment instruction, determining a target display angle corresponding to the adjustment instruction.
The target display angle corresponding to the adjustment instruction may be determined in various ways.
For example, if the virtual reality image display angle adjustment instruction includes a target display angle, the target display angle can be determined by parsing the received adjustment instruction. For instance, if parsing the adjustment instruction determines that it specifies a direction of downward and an angle of 20 degrees, the target display angle corresponding to the adjustment instruction may be determined as: adjusted 20 degrees downward from the current position.
Alternatively, the control device may directly acquire the angle to be adjusted input by the user. For example, after receiving the virtual reality image display angle adjustment instruction, the control device determines the received angle input by the user as the target display angle corresponding to the adjustment instruction.
It should be noted that the above examples are merely illustrative, and are not intended to limit the adjustment instruction, the target display angle, and the manner of determining the target display angle corresponding to the adjustment instruction in the embodiments of the present disclosure.
Step 206, adjusting the virtual reality image of the space where the intelligent equipment is located based on the target display angle.
For example, if the target display angle is 10 degrees to the left of the current position, the virtual reality image of the space where the smart device is located may be rotated 10 degrees to the left, and the adjusted image may be displayed to the user, so that the view of the space is more complete and comprehensive.
It should be noted that the above examples are only illustrative, and should not be taken as limitations on target display angles and the like in the embodiments of the present disclosure.
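Steps 205 and 206 can be sketched together as parsing the adjustment instruction into an angular offset and applying it to the current view angle. The instruction format (a direction plus a number of degrees) and the yaw/pitch representation are assumptions for this illustration, not a prescribed implementation.

```python
# Illustrative sketch: map an adjustment instruction such as
# {"direction": "left", "degrees": 10} onto the current view angle of the
# virtual reality image, expressed as (yaw, pitch) in degrees.

def parse_adjustment(instruction):
    """Convert an adjustment instruction into a (yaw, pitch) offset."""
    degrees = instruction["degrees"]
    offsets = {
        "left":  (-degrees, 0),
        "right": (degrees, 0),
        "up":    (0, degrees),
        "down":  (0, -degrees),
    }
    return offsets[instruction["direction"]]

def apply_adjustment(view, instruction):
    """Return the new (yaw, pitch) view angle; yaw wraps, pitch is clamped."""
    d_yaw, d_pitch = parse_adjustment(instruction)
    yaw = (view[0] + d_yaw) % 360
    pitch = max(-90, min(90, view[1] + d_pitch))
    return (yaw, pitch)

view = (0, 0)                                            # current display angle
view = apply_adjustment(view, {"direction": "left", "degrees": 10})
view = apply_adjustment(view, {"direction": "down", "degrees": 20})
```

Clamping the pitch keeps the view from flipping past straight up or straight down, a common choice for panoramic viewers.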
In the embodiment of the disclosure, the virtual reality image of the space where the smart device is located can be adjusted according to the user's adjustment instruction, so that the image can be displayed at different angles and the position information of the smart device becomes richer, more comprehensive, and more complete. By displaying the virtual reality image of the space from multiple angles and in an all-around manner, the user can gain a clear and complete understanding of the space. Therefore, when controlling the smart device, the user can fully understand its position from the displayed position information and accurately determine the smart device to be controlled, which improves the accuracy and efficiency of controlling the smart device.
Step 207, displaying a preset control in the adjusted virtual reality image.
The preset control can be a control set in advance, and can be in any form or style. In addition, it may be located at any position in the control interface, for example, at the top left of the control interface, or may be suspended above the control interface, etc., which is not limited by this disclosure.
Step 208, in response to receiving the touch operation aiming at the preset control, re-displaying the position information of the intelligent equipment in the control interface.
In the embodiment of the disclosure, in order to simplify user operation, the preset control may be set in advance, and the position information of the smart device may be redisplayed in the control interface when a touch operation for the preset control is received. That is, by performing a touch operation on the preset control, the user can obtain the adjusted position information of the smart device in the control interface and gain a more comprehensive and complete understanding of the device's position. The user can then accurately determine the smart device to be controlled based on this position information, which further improves the accuracy and reliability of controlling the smart device.
For example, in the schematic diagram shown in fig. 2A, a user may trigger a preset control in the interface when the user wants to check the location information of the smart device, and when the control device receives a touch operation for the preset control, the control device may redisplay the location information of the smart device in the control interface. Therefore, the user operation is simplified, the efficiency is improved, and the requirements of the user are better met.
It can be understood that the control method of the smart device provided by the present disclosure may be applied to any scenario in which the smart device is controlled, and the present disclosure does not limit this scenario.
For example, when a family has multiple smart devices of the same type, such as multiple cameras or multiple air purifiers, a family member can first perform a touch operation on the control interface. On receiving the touch operation instruction, the control device can compare the touch position with the display position of the position identifier and, when the touch position is the display position of the position identifier, acquire the position information of the smart device associated with the identifier according to the correspondence between the position identifier and the smart device, and display that position information. The family member can then directly view the specific indoor position of the smart device and gain a clear, intuitive understanding of the space where it is located, which improves the accuracy and efficiency of controlling the smart device and gives the user a good experience.
Alternatively, when a guest checks into a smart hotel and faces an unfamiliar room environment, the guest can perform a touch operation on the control interface. On receiving the touch operation instruction, the control device can compare the touch position with the display position of the position identifier and, when they match, acquire and display the position information of the smart device associated with the identifier according to the correspondence between the position identifier and the smart device. The guest can then view the specific indoor position of the smart device, use it more accurately, and enjoy a better check-in experience.
According to the embodiment of the disclosure, when the control interface receives a touch operation instruction, the touch position corresponding to the instruction and the display position of the position identifier in the control interface can be determined respectively. When the touch position is the display position of the position identifier, the position information of the smart device associated with the identifier is acquired and displayed in the control interface. Then, when a virtual reality image display angle adjustment instruction is received, the target display angle corresponding to the adjustment instruction is determined, and the virtual reality image of the space where the smart device is located is adjusted based on that angle. A preset control can be displayed in the adjusted virtual reality image, and the position information of the smart device is displayed again in the control interface after a touch operation for the preset control is received. In this way, the virtual reality image of the space can be adjusted and displayed according to the user's needs, the position information of the smart device becomes more comprehensive and complete, the accuracy and efficiency of controlling the smart device are improved, and the user is given a good experience.
It can be understood that, in an actual implementation process, when the location information of the smart device is generated, the received image may be identified first, and then the location of the smart device in the image is determined, so as to generate the location information of the smart device, which is further described with reference to fig. 3.
Fig. 3 is a schematic flowchart of a control method of an intelligent device according to an embodiment of the present disclosure. As shown in fig. 3, the control method of the smart device may include the steps of:
step 301, receiving an image recognition instruction, where the image recognition instruction includes an image to be recognized and a type of a target smart device.
The image to be recognized may be one image, or may also be multiple images, and the like, which is not limited in this disclosure.
In addition, the type of the target intelligent device may be various, for example, the target intelligent device may be a speaker, an air conditioner, a sweeping robot, or the like, and may also be set according to a user requirement, which is not limited in this disclosure.
Alternatively, it may be determined that the image recognition instruction is received in a case where the image capturing end instruction is received.
For example, during actual use, the user may capture an image first, and after the image capture is finished, send an image capture end instruction. Thus, the control device, upon receiving the image capturing end instruction, may determine that the user has ended capturing of the image, that is, may automatically trigger the image recognition instruction, perform image recognition on the image, and the like. The present disclosure is not limited thereto.
Optionally, it may be determined that the image recognition instruction is received in a case where it is determined that the image upload is successful.
For example, if the control device determines that all the images to be uploaded are successfully uploaded, an image recognition instruction can be automatically triggered, and the images can be recognized. The present disclosure is not limited thereto.
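The two trigger conditions above can be sketched as a small event handler: either an image-capture-end event or the successful upload of every pending image is treated as receipt of an image recognition instruction. The event names and callback style here are assumptions for illustration only.

```python
# Minimal sketch of the recognition triggers described above. on_recognize is
# whatever routine performs image recognition (step 302); here it is just a
# callback supplied by the caller.

class RecognitionTrigger:
    def __init__(self, on_recognize):
        self.on_recognize = on_recognize   # called when recognition should start
        self.pending_uploads = 0

    def capture_ended(self, images):
        # The user finished capturing images: trigger recognition automatically.
        self.on_recognize(images)

    def upload_started(self):
        self.pending_uploads += 1

    def upload_succeeded(self, images):
        # Trigger only once every image to be uploaded has arrived successfully.
        self.pending_uploads -= 1
        if self.pending_uploads == 0:
            self.on_recognize(images)

recognized = []
trigger = RecognitionTrigger(recognized.append)
trigger.upload_started()
trigger.upload_started()
trigger.upload_succeeded(["img1.jpg"])                 # one upload still pending
trigger.upload_succeeded(["img1.jpg", "img2.jpg"])     # all uploads complete
```

Counting pending uploads ensures recognition starts only after all images have been uploaded, matching the "all images successfully uploaded" condition.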
Step 302, identifying the image to be identified based on the type of the target intelligent device to determine the position of the target intelligent device in the image to be identified.
It is understood that the image to be identified may include one or more target smart devices of the same type, which is not limited by this disclosure.
For example, in the schematic diagram shown in fig. 3A, the type of the target smart device is an air conditioner. After the image to be identified is recognized, the position of the air conditioner in the image can be determined, and it can also be specially marked. For example, lines of a distinct color may be used to frame the air conditioner in the image, the air conditioner may be color-filled, or its contour may be enhanced.
Or, the type of the target intelligent device is a camera, and after the image to be recognized is recognized, the number of the cameras in the image is determined to be 3, so that the positions of the three cameras in the image can be determined respectively, and the like.
It should be noted that the above examples are only illustrative, and should not be taken as limiting the number, location, etc. of the target smart devices in the embodiments of the present disclosure.
It should be noted that, in the embodiment of the present disclosure, the image to be recognized may be recognized according to any desirable manner in the related art, for example, an image recognition model generated by pre-training may be used, or a graph neural network may also be used, which is not limited by the present disclosure.
Step 303, generating the position information of the target intelligent device according to the position of the target intelligent device in the image and the image to be identified.
It is to be understood that the location information of the target smart device may be one or more of: panoramic images of the space where the target intelligent device is located; a planar image of a space where the target intelligent device is located; a three-dimensional model of a space in which the target intelligent device is located; and the location of the target smart device in the space, etc., which is not limited by this disclosure.
For example, if recognizing the image determines that the target smart device is located at the center of the image with coordinates (a, b), the coordinates of the target smart device in the image are (a, b). Combined with the image to be recognized, the position information of the target smart device can then be generated, for example a panoramic image of the space where the target smart device is located.
Or, if there are a plurality of images to be recognized, a planar image and a three-dimensional model of a space where the target intelligent device is located may be generated according to the position of the target intelligent device in the image and the plurality of images to be recognized, and the position of the intelligent device in the space may be determined.
Or, if there are multiple target smart devices in the image, the location information of each target smart device may be generated according to the location of each target smart device in the image and the image.
It should be noted that the above examples are merely illustrative, and are not intended to limit the manner of generating the location information of the target smart device in the embodiments of the present disclosure.
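Step 303 can be sketched as combining each detected device position with the image to be recognized to produce a position-information record. The record layout and the detection tuples below are illustrative assumptions; the disclosure does not prescribe a concrete data format.

```python
# Hedged sketch: build one position-information record per detected device,
# keeping both the raw pixel position and a normalized position that remains
# valid if the displayed image is rescaled.

def build_location_info(detections, image_id, image_size):
    """detections: list of (device_id, x, y) pixel centers found in the image."""
    width, height = image_size
    records = []
    for device_id, x, y in detections:
        records.append({
            "device_id": device_id,
            "image": image_id,                    # source image of the space
            "pixel_position": (x, y),
            # Normalized coordinates survive rescaling of the displayed image.
            "relative_position": (x / width, y / height),
        })
    return records

# Two cameras detected in one 1920x1080 panoramic image (hypothetical values).
records = build_location_info(
    [("camera-1", 960, 540), ("camera-2", 480, 270)],
    image_id="living-room-pano",
    image_size=(1920, 1080),
)
```

With multiple images to recognize, such records from each image could be combined to derive a planar image or three-dimensional model of the space, as the text describes.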
Step 304, associating the location information of the target intelligent device with the location identifier of the target intelligent device in the control interface.
For example, the correspondence between the position information of the target smart device and the position identifier of the target smart device in the control interface may be stored. In this way, by triggering the position identifier of the target smart device, its position information can be acquired according to the pre-stored correspondence, which facilitates user operation and improves efficiency.

Step 305, under the condition that the target intelligent equipment is not identified, displaying an intelligent equipment identification failure prompt message and an intelligent equipment addition control on the control interface.
It is understood that the target smart device may not be recognized after the image to be recognized is processed; in that case, a recognition failure prompt message may be displayed on the control interface to inform the user. For example, the text "smart device recognition failed" may be displayed in the middle of the control interface, or a failure prompt message may scroll across the top of the control interface, which is not limited in this disclosure.
In addition, a smart device can be added by triggering the smart device addition control; the form or style of the smart device addition control is not limited in this disclosure.
Step 306, displaying each candidate intelligent device under the condition that a touch operation for the intelligent device addition control is received.
The candidate smart device may be set according to default information, or may be generated correspondingly according to the type of the target smart device included in the received image recognition instruction, which is not limited in this disclosure.
Step 307, in the case of receiving a touch operation for any candidate smart device, adding that candidate smart device to the image according to the touch operation.
For example, in the schematic diagram shown in fig. 3B, the candidate smart devices are an air conditioner, a floor fan, and a camera. If the user's selection operation for the camera is received, this indicates that the camera is an unrecognized target smart device, so the camera can be added to the image and its position marked in the image.
It should be noted that the above examples are only illustrative, and should not be taken as a limitation on the manner in which any candidate smart device is added to an image in the embodiments of the present disclosure.
It can be understood that receiving a touch operation for a candidate smart device indicates that the candidate smart device is an unrecognized target smart device; the candidate smart device can then be added to the image according to the touch operation, making the image information more complete and comprehensive.
Step 308, generating position information of the candidate intelligent device according to the position of the candidate intelligent device in the image and the image.
For example, after the position coordinates of the candidate smart device in the image are determined to be (c, d), the position information of the candidate smart device can be generated in combination with the image, such as a panoramic image or a three-dimensional model of the space where the candidate smart device is located. The present disclosure is not limited thereto.
Alternatively, a three-dimensional model, a planar image, and the position of the candidate smart device in the space may be generated according to the position of the candidate smart device in the image and the image. The present disclosure is not limited thereto.
Step 309, associating the location information of the candidate smart device with the location identifier of the candidate smart device in the control interface.
For example, the correspondence between the position information of the candidate smart device and the position identifier of the candidate smart device in the control interface may be stored in a table or in a database. In this way, the stored association between the position information of candidate smart devices and their position identifiers in the control interface is more complete and comprehensive.
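The association in steps 304 and 309 can be sketched as a simple key-value store mapping a position identifier in the control interface to the device's position information, so that a later touch on the identifier (step 203) can look the information up directly. The identifier names and record contents below are assumptions for illustration.

```python
# Illustrative sketch: persist the identifier-to-position-information
# correspondence. A real implementation might back this with a table or a
# database, as the text notes; a dict captures the same lookup behavior.

class AssociationStore:
    def __init__(self):
        self._by_identifier = {}      # position identifier id -> location info

    def associate(self, identifier_id, location_info):
        self._by_identifier[identifier_id] = location_info

    def lookup(self, identifier_id):
        # Used when a touch lands on a position identifier: fetch the
        # associated location information, or None if nothing is stored.
        return self._by_identifier.get(identifier_id)

store = AssociationStore()
store.associate(
    "pos-camera-1",
    {"panorama": "room-a.jpg", "position": (0.4, 0.7)},   # hypothetical record
)
```

Storing the association once at registration time keeps the touch-handling path to a single lookup, which matches the efficiency argument made above.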
In the embodiment of the disclosure, the position information of the candidate smart device can be generated according to the position of the candidate smart device in the image and the image itself, and this position information can be associated with the position identifier of the candidate smart device in the control interface, so that the stored position information of smart devices is more comprehensive, complete, and accurate. The position information displayed in response to a received touch operation instruction is therefore also more accurate and reliable, which further improves the accuracy and efficiency of controlling the smart device.
According to the embodiment of the disclosure, an image recognition instruction may be received first, and then, based on the type of the target intelligent device, the image to be recognized is recognized to determine the position of the target intelligent device in the image to be recognized, so as to generate the position information of the target intelligent device, and the position information of the target intelligent device is associated with the position identifier of the target intelligent device in the control interface. Therefore, the generated position information of the intelligent equipment is richer, more comprehensive and more complete, so that the position information of the intelligent equipment displayed in the control interface is more complete and more accurate, and the accuracy and the efficiency of controlling the intelligent equipment are further improved.
The embodiment of the disclosure also provides a control device of an intelligent device, and fig. 4 is a schematic structural diagram of the control device of the intelligent device according to the embodiment of the disclosure.
As shown in fig. 4, the control apparatus 100 of the smart device includes: a first determination module 110, a second determination module 120, an acquisition module 130, and a display module 140.
The first determining module 110 is configured to determine, in response to a touch operation instruction received by a control interface, a touch position corresponding to the touch operation instruction.
A second determining module 120, configured to determine a display position of the location identifier in the control interface.
An obtaining module 130, configured to obtain, when the touch position is a display position of the position identifier, position information of the smart device associated with the position identifier.
And the display module 140 is configured to display the location information of the smart device in the control interface.
Optionally, the location information of the smart device includes at least one of the following:
a panoramic image of a space where the intelligent device is located;
a planar image of a space where the intelligent device is located;
a three-dimensional model of a space in which the intelligent device is located; and
the location of the smart device in the space.
Optionally, the position information of the intelligent device is a virtual reality image of a space where the intelligent device is located, and the apparatus further includes:
and the third determining module is used for responding to the received virtual reality image display angle adjusting instruction and determining a target display angle corresponding to the adjusting instruction.
And the adjusting module is used for adjusting the virtual reality image of the space where the intelligent equipment is located based on the target display angle.
Optionally, the display module 140 is further configured to display a preset control in the adjusted virtual reality image.
The display module 140 is further configured to, in response to receiving the touch operation for the preset control, redisplay the location information of the smart device in the control interface.
Optionally, the first determining module 110 is further configured to determine a display position of the smart device identifier in the control interface.
The display module 140 is further configured to display a control operation page of the smart device in the control interface when the touch position is the display position of the smart device identifier.
Optionally, the apparatus 100 may further include:
the receiving module is used for receiving an image identification instruction, wherein the image identification instruction comprises an image to be identified and the type of the target intelligent equipment.
A fourth determining module, configured to identify the image based on the type of the target smart device to determine the position of the target smart device in the image.
And the generating module is used for generating the position information of the target intelligent equipment according to the position of the target intelligent equipment in the image and the image.
And the association module is used for associating the position information of the target intelligent equipment with the position identification of the target intelligent equipment in the control interface.
Optionally, the receiving module is specifically configured to:
determining that an image recognition instruction is received in the case where an image capturing end instruction is received;
or,
in the case that the image uploading is determined to be successful, determining that an image identification instruction is received.
Optionally, the display module 140 is further configured to display, on the control interface, an intelligent device identification failure prompt message and an intelligent device addition control in the case that the target intelligent device is not identified.
The display module 140 is further configured to display each candidate smart device when a touch operation for the smart device addition control is received.
The first determining module 110 is further configured to, if a touch operation for any candidate smart device is received, add that candidate smart device to the image according to the touch operation.
The generating module is further used for generating position information of the candidate intelligent equipment according to the position of the candidate intelligent equipment in the image and the image;
and the association module is further used for associating the position information of the candidate intelligent equipment with the position identification of the candidate intelligent equipment in the control interface.
The functions and specific implementation principles of the modules in the embodiments of the present disclosure may refer to the embodiments of the methods, and are not described herein again.
In the control apparatus of the smart device according to the embodiment of the present disclosure, when the control interface receives a touch operation instruction, the touch position corresponding to the instruction may be determined first, then the display position of the position identifier in the control interface is determined, and, when the touch position is the display position of the position identifier, the position information of the smart device associated with that identifier is acquired and displayed in the control interface. By comparing the touch position with the display position of the position identifier, the identifier corresponding to the touch operation can be determined accurately, and the position information of the associated smart device can be displayed, so that the user can intuitively understand the position of the smart device and the space in which it is located. This helps the user quickly and accurately determine the smart device to be controlled, improves the accuracy and efficiency of controlling the smart device, and provides the user with a good experience.
Fig. 5 is a block diagram of an electronic device according to an embodiment of the present disclosure.
As shown in fig. 5, the electronic apparatus 200 includes: a memory 210 and a processor 220, and a bus 230 connecting the various components, including the memory 210 and the processor 220.
The memory 210 is used for storing executable instructions of the processor 220; the processor 220 is configured to call and execute the executable instructions stored in the memory 210 to implement the control method of the smart device proposed by the above embodiments of the present disclosure.
Bus 230 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Electronic device 200 typically includes a variety of electronic device readable media. Such media may be any available media that is accessible by electronic device 200 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 210 may also include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 240 and/or cache memory 250. The electronic device 200 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 260 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in fig. 5, commonly referred to as a "hard drive"). Although not shown in fig. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 230 by one or more data media interfaces. Memory 210 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
A program/utility 280 having a set (at least one) of program modules 270, including but not limited to an operating system, one or more application programs, other program modules, and program data, each of which or some combination thereof may comprise an implementation of a network environment, may be stored in, for example, the memory 210. The program modules 270 generally perform the functions and/or methodologies of the embodiments described in this disclosure.
Electronic device 200 may also communicate with one or more external devices 290 (e.g., a keyboard, a pointing device, a display 291, etc.), with one or more devices that enable a user to interact with electronic device 200, and/or with any devices (e.g., a network card, a modem, etc.) that enable electronic device 200 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 292. Also, the electronic device 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 293. As shown, the network adapter 293 communicates with the other modules of the electronic device 200 via the bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processor 220 executes various functional applications and data processing by executing programs stored in the memory 210.
It should be noted that, for the implementation process of the electronic device according to the embodiment of the present disclosure, reference is made to the foregoing explanation of the method for controlling an intelligent device according to the embodiment of the present disclosure, and details are not described here again.
In the electronic device according to the embodiment of the disclosure, when the control interface receives a touch operation instruction, the touch position corresponding to the touch operation instruction may first be determined, and the display position of the position identifier in the control interface may then be determined. In the case that the touch position is the display position of the position identifier, the position information of the intelligent device associated with the position identifier is acquired and displayed in the control interface. In this way, the position identifier corresponding to the touch operation can be accurately determined by comparing the touch position with the display position of the position identifier, and the position information of the intelligent device associated with that position identifier can then be displayed, so that the user can intuitively learn the position of the intelligent device and the space in which it is located, and can quickly and accurately determine the intelligent device to be controlled. The accuracy and efficiency of controlling the intelligent device are thereby improved, and a good user experience is provided.
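The comparison between the touch position and the display position of the position identifier described above may be illustrated by the following minimal sketch. It assumes a simple circular touch tolerance around each identifier; all names (`PositionIdentifier`, `handle_touch`, the tolerance `radius`, and the textual `device_location_info`) are hypothetical and are not taken from the present disclosure:

```python
# Illustrative sketch of the claimed control flow: given a touch
# position, find the position identifier whose display position was
# touched, and return the location information of the associated
# intelligent device for display in the control interface.
from dataclasses import dataclass
from typing import Optional


@dataclass
class PositionIdentifier:
    x: float                    # display position of the identifier (x)
    y: float                    # display position of the identifier (y)
    radius: float               # touch tolerance around the identifier
    device_location_info: str   # e.g. a panoramic image, planar image,
                                # or 3D-model reference for the device


def handle_touch(touch_x: float, touch_y: float,
                 identifiers: list) -> Optional[str]:
    """Return the location information of the intelligent device whose
    position identifier is at the touch position, or None on a miss."""
    for ident in identifiers:
        # The touch position "is" the display position when it falls
        # within the identifier's touch tolerance.
        dx, dy = touch_x - ident.x, touch_y - ident.y
        if dx * dx + dy * dy <= ident.radius * ident.radius:
            return ident.device_location_info
    return None
```

A caller would typically invoke `handle_touch` from the control interface's touch-event handler and, on a non-`None` result, render the returned location information in the interface.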
In order to implement the foregoing embodiments, the present disclosure also provides a non-transitory computer-readable storage medium, where instructions of the storage medium, when executed by a processor of an electronic device, enable the electronic device to execute the control method of the smart device as described above.
In order to implement the foregoing embodiments, the present disclosure also provides a computer program product comprising a computer program that, when executed by a processor of an electronic device, enables the electronic device to execute the control method of the intelligent device as described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.