Control method and device of self-moving equipment, storage medium and self-moving equipment
1. A control method of a self-moving device is characterized in that an image acquisition component and a power supply component are installed on the self-moving device, and the method comprises the following steps:
acquiring, during movement of the self-moving device, an environment image captured by the image acquisition component, wherein the environment image is captured by the image acquisition component in the direction of travel of the self-moving device;
acquiring an image recognition model, wherein the computing resources occupied by the image recognition model at runtime are lower than the maximum computing resources provided by the self-moving device;
inputting the environment image into the image recognition model to obtain an object recognition result, wherein the object recognition result is used for indicating the category of a target object, and the target object comprises a chair, pet excrement, a door, a window, a charging component and/or liquid;
and when it is determined that the remaining capacity of the power supply component is less than or equal to a capacity threshold and the object recognition result includes a charging component, controlling the self-moving device to move to the charging component.
2. The method of claim 1, wherein before controlling the self-moving device to move to the charging component, the method further comprises: determining the direction of the charging component relative to the self-moving device according to the position of the image of the charging component in the environment image.
3. The control method of the self-moving device according to claim 1 or 2, wherein a positioning sensor is further mounted on the self-moving device, and the positioning sensor is used for positioning the charging component;
after the controlling the self-moving device to move to the charging component, the method further comprises:
in the process of moving to the charging component, controlling the positioning sensor to locate the charging component to obtain a positioning result;
and controlling the self-moving device to move according to the positioning result so as to dock the self-moving device with the charging component.
4. The control method of the self-moving device according to claim 3, wherein the controlling the positioning sensor to locate the charging component to obtain the positioning result comprises: controlling the positioning sensor to locate a charging interface on the charging component;
and the controlling the self-moving device to move according to the positioning result comprises: controlling the self-moving device to move to the charging interface.
5. The method as claimed in claim 4, wherein the positioning sensor is a laser sensor, the charging interface on the charging component emits laser signals at different angles, and the positioning sensor determines the position of the charging interface based on the angle difference of the received laser signals.
6. The control method of the self-moving device according to claim 1 or 2, wherein before the acquiring the image recognition model, the method further comprises:
acquiring a small network detection model from which certain feature layers have been removed;
acquiring training data, wherein the training data comprises training images of the objects in a working area of the self-moving device and a recognition result of each training image;
inputting the training image into the small network detection model to obtain a model result;
and training the small network detection model based on the difference between the model result and the recognition result corresponding to the training image to obtain the image recognition model.
7. The method of claim 6, further comprising, after obtaining the image recognition model:
performing model compression on the image recognition model to obtain an image recognition model for recognizing an object, wherein the model compression comprises model pruning, model quantization and/or low-rank decomposition;
and after the compression, retraining the compressed image recognition model with the training data to obtain the image recognition model that is finally used.
8. The method of claim 6, wherein the small network detection model is a tiny YOLO model or a MobileNet model.
9. A computer-readable storage medium, characterized in that the storage medium stores a program which, when executed by a processor, implements the control method of the self-moving device according to any one of claims 1 to 8.
10. A self-moving device, comprising:
a moving component for driving the self-moving device to move;
a movement driving component for driving the moving component;
an image acquisition component mounted on the self-moving device and used for capturing an environment image in the direction of travel;
a power supply component mounted on the self-moving device;
and a control component communicatively connected with the movement driving component, the image acquisition component, and a memory, wherein the memory stores a program that is loaded and executed by the control component to implement the control method of the self-moving device according to any one of claims 1 to 8.
Background
With the development of artificial intelligence and the robotics industry, intelligent household appliances such as sweeping robots are becoming increasingly common.
A typical sweeping robot captures pictures of its environment through a camera component fixed on top of the machine body and identifies objects in the captured pictures using an image recognition algorithm. To ensure recognition accuracy, the image recognition algorithm is usually built on a trained neural network model or the like.
However, existing image recognition algorithms typically require a Graphics Processing Unit (GPU) combined with a Neural Network Processing Unit (NPU), which places high demands on the sweeping robot's hardware.
Disclosure of Invention
The present application provides a control method and apparatus of a self-moving device, and a storage medium, which can solve the problem that the high hardware demands of existing image recognition algorithms limit the range of sweeping robots on which an object recognition function can be applied. The present application provides the following technical solutions:
in a first aspect, a method for controlling a self-moving device is provided, where an image capturing component is installed on the self-moving device, and the method includes:
acquiring an environment image captured by the image acquisition component;
acquiring an image recognition model, wherein the computing resources occupied by the image recognition model at runtime are lower than the maximum computing resources provided by the self-moving device;
and inputting the environment image into the image recognition model to obtain an object recognition result, wherein the object recognition result is used for indicating the category of the target object.
Optionally, the image recognition model is obtained by training a small network detection model.
Optionally, before the acquiring the image recognition model, the method further includes:
acquiring a small network detection model;
acquiring training data, wherein the training data comprises training images of the objects in a working area of the self-moving device and a recognition result of each training image;
inputting the training image into the small network detection model to obtain a model result;
and training the small network detection model based on the difference between the model result and the recognition result corresponding to the training image to obtain the image recognition model.
Optionally, after the training the small network detection model based on the difference between the model result and the recognition result corresponding to the training image to obtain the image recognition model, the method further includes:
and carrying out model compression processing on the image recognition model to obtain the image recognition model for recognizing the object.
Optionally, the small network detection model is a tiny YOLO model or a MobileNet model.
Optionally, after the controlling the environment image to be input into the image recognition model to obtain the object recognition result, the method further includes:
controlling the self-moving device to move to complete a corresponding task based on the object recognition result.
Optionally, a liquid cleaning component is installed on the self-moving device, and the controlling the self-moving device to complete the corresponding task based on the object recognition result includes:
when the object recognition result indicates that the environment image contains a liquid image, controlling the self-moving device to move to a region to be cleaned corresponding to the liquid image;
and cleaning the liquid in the region to be cleaned using the liquid cleaning component.
Optionally, a power supply component is installed in the self-moving device, the power supply component is charged by a charging component, and the controlling the self-moving device to move to complete a corresponding task based on the object recognition result includes:
when the remaining capacity of the power supply component is less than or equal to a capacity threshold and the environment image includes an image of the charging component, determining the actual position of the charging component according to the position of its image, and controlling the self-moving device to move to the charging component.
Optionally, a positioning sensor is further installed on the self-moving device, and the positioning sensor is used for locating the position of a charging interface on the charging component; after the controlling the self-moving device to move to the charging component, the method further comprises:
in the process of moving to the charging component, controlling the positioning sensor to locate the charging component to obtain a positioning result;
and controlling the self-moving device to move according to the positioning result so as to dock the self-moving device with the charging interface.
In a second aspect, a control apparatus for a self-moving device is provided, the self-moving device having an image capturing component mounted thereon, the apparatus comprising:
an image acquisition module, configured to acquire the environment image captured by the image acquisition component while the self-moving device moves;
a model acquisition module, configured to acquire an image recognition model, where the computing resources occupied by the image recognition model at runtime are lower than the maximum computing resources provided by the self-moving device;
and a device control module, configured to input the environment image into the image recognition model to obtain an object recognition result, where the object recognition result is used for indicating the category of the target object.
In a third aspect, a control apparatus of a self-moving device is provided, the apparatus comprising a processor and a memory; the memory stores a program that is loaded and executed by the processor to implement the control method of the self-moving device of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, in which a program is stored; the program is loaded and executed by a processor to implement the control method of the self-moving device according to the first aspect.
In a fifth aspect, a self-moving device is provided, comprising:
a moving component for driving the self-moving device to move;
a movement driving component for driving the moving component;
an image acquisition component mounted on the self-moving device and used for capturing an environment image in the direction of travel;
and a control component communicatively connected with the movement driving component, the image acquisition component, and a memory, wherein the memory stores a program that is loaded and executed by the control component to implement the control method of the self-moving device of the first aspect.
The beneficial effects of the present application are as follows. An environment image captured by the image acquisition component is acquired while the self-moving device moves; an image recognition model is acquired whose runtime computing resources are lower than the maximum computing resources the self-moving device can provide; and the environment image is input into the image recognition model to obtain an object recognition result indicating the category of the target object in the environment image. This solves the problem that the high hardware demands of existing image recognition algorithms limit where a sweeping robot's object recognition function can be applied: by recognizing the target object with an image recognition model that consumes few computing resources, the hardware requirements of the object recognition method on the self-moving device are reduced and its range of application is expanded.
The foregoing description is only an overview of the technical solutions of the present application. In order to make these solutions clearer and to implement them according to the contents of the description, a detailed description is given below with reference to the preferred embodiments of the present application and the accompanying drawings.
Drawings
Fig. 1 is a schematic structural diagram of a self-moving device provided in an embodiment of the present application;
fig. 2 is a flowchart of a control method of a self-moving device according to an embodiment of the present application;
FIG. 3 is a flow diagram of enforcing a work policy provided by one embodiment of the present application;
FIG. 4 is a schematic diagram of an enforcement work policy provided by one embodiment of the present application;
FIG. 5 is a flow diagram of enforcing a work policy provided by another embodiment of the present application;
FIG. 6 is a schematic diagram of an enforcement work policy provided by another embodiment of the present application;
fig. 7 is a block diagram of a control apparatus of a self-moving device according to an embodiment of the present application;
fig. 8 is a block diagram of a control apparatus of a self-moving device according to another embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below in conjunction with the accompanying drawings and examples. The following examples are intended to illustrate the present application but not to limit its scope.
First, several terms related to the present application will be described below.
Model compression: a way of reducing parameter redundancy in a trained network model so as to reduce the model's storage footprint, communication bandwidth, and computational complexity.
Model compression includes, but is not limited to: model pruning, model quantization, and/or low-rank decomposition.
Model pruning: a search for an optimal network structure. The pruning process comprises the following steps: 1. train a network model; 2. prune unimportant weights or channels; 3. fine-tune or retrain the pruned network. Step 2 is usually carried out by iterative layer-by-layer pruning, with fast fine-tuning or weight reconstruction to maintain accuracy.
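The three pruning steps above can be illustrated with a short sketch. The following PyTorch example is an illustrative assumption, not the patent's implementation; the model, the 30% sparsity level, and the fine-tuning step are all placeholders.

```python
# Minimal pruning sketch: step 1 is assumed done (a trained model),
# step 2 clips low-magnitude weights, step 3 fine-tunes to recover accuracy.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))

# Step 2: prune the 30% of weights with the smallest L1 magnitude in each conv
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the zeroed weights in permanently

# Step 3: fine-tune / retrain the pruned network on the original training data
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
# ... ordinary training loop goes here ...
```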
Model quantization: a general term for model acceleration methods in which floating-point data of a given width (for example, 32 bits) is represented with a data type of fewer bits, so as to shrink the model, reduce its memory consumption, and speed up inference.
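As an illustration of this bit-width reduction, the following sketch (an assumed example, not the patent's method) uses PyTorch's post-training dynamic quantization to replace 32-bit float weights with 8-bit integers:

```python
# Minimal quantization sketch: 32-bit float weights -> 8-bit integer weights,
# shrinking the model and speeding up CPU inference. Layer sizes are arbitrary.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8  # quantize the Linear layers to int8
)
```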
Low-rank decomposition: the weight matrix of the network model is decomposed into several smaller matrices whose combined computation costs less than the original matrix, thereby reducing the model's computation and the memory it occupies.
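A small numerical sketch of this idea, under assumed shapes and rank: a single m×n weight matrix is replaced by two factors obtained from a truncated SVD, so a matrix-vector product costs roughly r(m+n) multiplies instead of mn.

```python
# Minimal low-rank decomposition sketch (illustrative shapes and rank).
import numpy as np

m, n, r = 512, 512, 32
W = np.random.randn(m, n)        # stand-in for a trained weight matrix

U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]             # m x r factor (singular values folded in)
B = Vt[:r, :]                    # r x n factor

x = np.random.randn(n)
y_full = W @ x                   # original computation: ~m*n multiplies
y_lowrank = A @ (B @ x)          # approximation: ~r*(m+n) multiplies
```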
The YOLO model: one of the basic network models, a neural network model that locates and identifies objects through a Convolutional Neural Network (CNN). The YOLO family includes YOLO, YOLO v2, and YOLO v3. YOLO v3 is the target detection algorithm of the YOLO series that follows YOLO and YOLO v2, and improves on YOLO v2. YOLO v3-tiny is a simplified version of YOLO v3 that removes certain feature layers, reducing the model's computation and making it faster.
The MobileNet model: a network model whose basic unit is the depthwise separable convolution, which decomposes into a depthwise convolution (DW) and a pointwise convolution (PW). DW differs from a standard convolution: where a standard convolution applies each kernel across all input channels, DW uses a different kernel for each input channel, that is, one kernel per channel. PW is an ordinary convolution, except that its kernel is 1×1. A depthwise separable convolution first applies DW to each input channel separately, then uses PW to combine the outputs; the overall result is approximately the same as a standard convolution, but the computation and number of parameters are greatly reduced.
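The MobileNet building block just described can be sketched directly; the following PyTorch module is an illustrative assumption, not taken from the patent:

```python
# Minimal depthwise separable convolution sketch: DW (one 3x3 kernel per input
# channel, via groups=in_ch) followed by PW (a 1x1 conv that mixes channels).
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```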
Fig. 1 is a schematic structural diagram of a self-moving device according to an embodiment of the present application. As shown in fig. 1, the device at least includes: a control component 110, and an image acquisition component 120 communicatively coupled to the control component 110.
The image acquisition component 120 is used for capturing an environment image 130 while the self-moving device moves, and sends the environment image 130 to the control component 110. Optionally, the image acquisition component 120 may be implemented as a camera, a video camera, or the like; this embodiment does not limit the implementation of the image acquisition component 120.
Optionally, the field of view of the image acquisition component 120 is 120° in the horizontal direction and 60° in the vertical direction; of course, it may take other values, and this embodiment does not limit the field of view of the image acquisition component 120. The field of view of the image acquisition component 120 should ensure that the environment image 130 in the direction of travel of the self-moving device can be captured.
In addition, there may be one or more image acquisition components 120; this embodiment does not limit their number.
The control component 110 is used to control the self-moving device, for example: controlling the start and stop of the self-moving device, and controlling the start, stop, etc. of its components, such as the image acquisition component 120.
In this embodiment, the control component 110 is communicatively coupled to a memory. The memory stores a program, which is loaded and executed by the control component 110 to implement at least the following steps: acquiring an environment image 130 captured by the image acquisition component 120 while the self-moving device moves; acquiring an image recognition model; and inputting the environment image 130 into the image recognition model to obtain an object recognition result 140, where the object recognition result 140 indicates the category of the target object in the environment image 130. In other words, the program is loaded and executed by the control component 110 to implement the control method of the self-moving device provided by the present application.
In one example, when a target object is included in the environment image, the object recognition result 140 is the type of the target object; when no target object is included, the object recognition result 140 is empty. Alternatively, when a target object is included, the object recognition result 140 is an indication that a target object is included (for example, "1") together with its type; when no target object is included, the object recognition result 140 is an indication that no target object is included (for example, "0").
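The second result encoding above can be captured in a small data structure; the field names below are illustrative assumptions, not taken from the patent text.

```python
# Minimal sketch of the object recognition result 140 described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectRecognitionResult:
    contains_target: int            # 1 = a target object is present, 0 = absent
    category: Optional[str] = None  # e.g. "charging_component"; None when absent
```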
The image recognition model occupies, at runtime, fewer computing resources than the maximum computing resources provided by the self-moving device.
Optionally, the object recognition result 140 may also include, but is not limited to: position, size, etc. of the image of the target object in the environment image 130.
Optionally, the target object is an object located in the work area of the self-moving device. For example: when the work area is a room, the target object may be a bed, a table, a chair, a person, or another object in the room; when the work area is a logistics warehouse, the target object may be a box, a person, or the like in the warehouse. This embodiment does not limit the type of the target object.
Optionally, the image recognition model is a network model whose number of layers is smaller than a first value and/or whose number of nodes per layer is smaller than a second value. Both values are small integers, which ensures that the image recognition model consumes few computing resources at runtime.
It should be added that, in this embodiment, the self-moving device may further include other components, such as a moving component (for example, a wheel body) for driving the self-moving device to move, and a movement driving component (for example, a motor) for driving the moving component. The movement driving component is communicatively connected with the control component 110 and, under the control of the control component 110, operates and drives the moving component, thereby moving the self-moving device as a whole.
In addition, the self-moving device may be a sweeping robot, an automatic mower, or another device with an automatic traveling function; the present application does not limit the type of the self-moving device.
In this embodiment, by using an image recognition model that consumes few computing resources to recognize the target object in the environment image 130, the hardware requirements of the object recognition method on the self-moving device can be reduced, and the range of application of the object recognition method can be expanded.
The following describes the control method of the self-moving device provided in the present application in detail.
Fig. 2 is a flowchart of a control method of the self-moving device. Fig. 2 takes as an example the method being used in the self-moving device shown in fig. 1, with the control component 110 as the execution subject of each step. With reference to fig. 2, the method at least includes the following steps:
step 201, in the process of moving the mobile device, obtaining an environment image collected by an image collection assembly.
Optionally, the image capturing component is configured to capture video data, and at this time, the environment image may be a frame of image data in the video data; or the image acquisition assembly is used for acquiring single image data, and at the moment, the environment image is the single image data sent by the image acquisition assembly.
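The first acquisition mode can be sketched with OpenCV (a library choice that is our assumption; the patent names no library): one frame is read out of the video stream and serves as the environment image.

```python
# Minimal frame-grab sketch: the environment image is one frame of video data.
import cv2

cap = cv2.VideoCapture(0)   # camera index 0 is an assumption
ok, frame = cap.read()      # one frame of the video = the environment image
if ok:
    cv2.imwrite("environment.jpg", frame)
cap.release()
```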
Step 202: acquire an image recognition model, where the computing resources occupied by the image recognition model at runtime are lower than the maximum computing resources provided by the self-moving device.
In this embodiment, using an image recognition model whose resource consumption is below what the self-moving device can provide reduces the model's hardware requirements on the device and expands the range of application of the object recognition method.
In one example, the self-moving device reads a pre-trained image recognition model. In this case, the image recognition model is obtained by training a small network detection model. Training the small network detection model comprises: acquiring the small network detection model; acquiring training data; inputting the training images into the small network detection model to obtain a model result; and training the small network detection model based on the difference between the model result and the recognition result corresponding to each training image, obtaining the image recognition model.
The training data comprises training images of the objects in the work area of the self-moving device and the recognition result of each training image, as in the sketch below.
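The four training steps follow the ordinary supervised pattern. The following sketch is a minimal illustration under assumed choices (a toy stand-in network, cross-entropy loss, and six categories mirroring the target objects named in the claims), not the patent's actual model or data pipeline.

```python
# Minimal training-loop sketch for the small network detection model.
import torch
import torch.nn as nn

model = nn.Sequential(          # toy stand-in for a tiny-YOLO/MobileNet-class net
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 6),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in for the training data: (training image batch, recognition results)
train_loader = [(torch.randn(4, 3, 64, 64), torch.randint(0, 6, (4,)))]

for images, labels in train_loader:
    logits = model(images)              # model result
    loss = loss_fn(logits, labels)      # difference from the labelled result
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                    # train on that difference
```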
In this embodiment, a small network model means a network model whose number of layers is smaller than a first value and/or whose number of nodes per layer is smaller than a second value, where both values are small integers. For example, the small network detection model may be a tiny YOLO model or a MobileNet model. Of course, it may also be another model; this embodiment does not list every possibility.
Optionally, to further reduce the computing resources that the image recognition model occupies at runtime, after the small network detection model has been trained into the image recognition model, the self-moving device may also perform model compression on the image recognition model to obtain the image recognition model used for recognizing objects.
Optionally, the model compression includes, but is not limited to: model pruning, model quantization, and/or low-rank decomposition.
Optionally, after the model is compressed, the self-moving device may train the compressed image recognition model again with the training data to improve its recognition accuracy.
Step 203: input the environment image into the image recognition model to obtain an object recognition result, where the object recognition result indicates the category of the target object.
Optionally, the object recognition result further includes but is not limited to: position, and/or size of the image of the target object in the environment image.
In summary, in the control method of the self-moving device provided in this embodiment, an environment image captured by the image acquisition component is acquired while the self-moving device moves; an image recognition model is acquired whose runtime computing resources are lower than the maximum computing resources provided by the self-moving device; and the environment image is input into the image recognition model to obtain an object recognition result indicating the category of the target object in the environment image. This solves the problem that the high hardware demands of existing image recognition algorithms limit where a sweeping robot's object recognition function can be applied: recognizing the target object with an image recognition model that consumes few computing resources reduces the hardware requirements of the object recognition method on the self-moving device and expands its range of application.
In addition, because the image recognition model is obtained by training a small network model, the object recognition process can be implemented without combining a Graphics Processing Unit (GPU) with an embedded Neural Network Processing Unit (NPU), which reduces the method's hardware requirements.
Furthermore, performing model compression on the image recognition model to obtain the model used for recognizing objects further reduces the computing resources the model occupies at runtime, increases recognition speed, and broadens the range of application of the object recognition method.
Optionally, building on the above embodiment, after the self-moving device obtains the object recognition result, it is further controlled to move based on that result to complete a corresponding task. Such tasks include, but are not limited to: avoiding obstacles such as chairs and pet excrement; locating certain items, such as doors, windows, and charging components; monitoring and following a person; cleaning a specific object, such as liquid; and/or automatically returning to recharge. The tasks corresponding to different object recognition results are described next.
Optionally, a liquid cleaning component is mounted on the self-moving device. In this case, after step 203, controlling the self-moving device to move to complete the corresponding task based on the object recognition result includes: when the object recognition result indicates that the environment image contains a liquid image, controlling the self-moving device to move to the region to be cleaned corresponding to the liquid image, and cleaning the liquid in that region using the liquid cleaning component.
In one example, the liquid cleaning component includes a water-absorbing mop mounted around the periphery of a wheel body of the self-moving device. When a liquid image is present in the environment image, the self-moving device is controlled to move to the corresponding region so that its wheel body passes over the region and the water-absorbing mop absorbs the liquid on the floor. A cleaning pool and a reservoir are also arranged inside the self-moving device, the cleaning pool being located below the wheel body; a water pump draws water from the reservoir and sprays it through a pipeline and nozzle onto the wheel body, flushing dirt from the water-absorbing mop into the cleaning pool. A press roller is also arranged against the wheel body to wring out the mop.
Of course, the liquid cleaning assembly is only exemplary, and in practical implementation, the liquid cleaning assembly may be implemented in other ways, and this embodiment is not listed here.
To understand more clearly how a working strategy is executed based on the object recognition result, refer to the schematic diagrams of the liquid-cleaning working strategy shown in figs. 3 and 4. As shown there, after the self-moving device captures the environment image, it obtains the object recognition result using the image recognition model; when the result indicates that the current environment includes liquid, the liquid is cleaned using the liquid cleaning component 31.
Optionally, in this embodiment, the self-moving device may be a sweeping robot, in which case it can remove both dry and wet garbage in one pass.
In this embodiment, activating the liquid cleaning component when a liquid image is present in the environment image avoids the cleaning task being left unfinished because the self-moving device bypasses the liquid, and improves the cleaning effect. At the same time, it prevents liquid from entering the device and damaging its circuits, reducing the risk of damage to the self-moving device.
Optionally, building on the above embodiment, a power supply component is installed in the self-moving device. Controlling the self-moving device to move to complete a corresponding task based on the object recognition result includes: when the remaining capacity of the power supply component is less than or equal to the capacity threshold and the environment image includes an image of the charging component, the self-moving device determines the actual position of the charging component from the position of that image, and is controlled to move to the charging component.
After the self-moving device photographs the charging component, it can determine the direction of the charging component relative to itself from the position of the image within the environment image, and can therefore move toward the charging component along this approximate direction.
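As a worked illustration of this direction estimate, the sketch below maps the horizontal pixel position of the charging component's image to a bearing using the 120° horizontal field of view mentioned for fig. 1; the linear pixel-to-angle mapping is a simplifying assumption that ignores lens distortion.

```python
# Minimal sketch: direction of the charging component from its image position.
def bearing_from_image(x_center_px: float, image_width_px: int,
                       horizontal_fov_deg: float = 120.0) -> float:
    """Angle of the object relative to the direction of travel, in degrees:
    negative = left of heading, positive = right of heading."""
    offset = x_center_px / image_width_px - 0.5   # -0.5 .. +0.5 across the frame
    return offset * horizontal_fov_deg

# Example: a dock whose image is centred at pixel column 480 of a 640-px frame
angle = bearing_from_image(480, 640)              # -> +30 degrees to the right
```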
Optionally, to improve the accuracy with which the self-moving device moves to the charging component, a positioning sensor is further installed on the device; the positioning sensor is used for locating the position of the charging interface on the charging component. While moving to the charging component, the self-moving device controls the positioning sensor to locate the charging component and obtains a positioning result, then moves according to the positioning result so as to dock with the charging interface.
In one example, the positioning sensor is a laser sensor. The charging interface of the charging component emits laser signals at different angles, and the positioning sensor determines the position of the charging interface based on the angle difference of the received signals.
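One plausible reading of this scheme, sketched below purely as an assumption (the patent states only that signals are emitted at different angles and compared): the interface emits distinguishable left and right beams, and the robot steers until it receives both, which centres it on the interface.

```python
# Minimal docking-steering sketch under the assumed two-beam interpretation.
def steering_command(left_beam: bool, right_beam: bool) -> str:
    if left_beam and right_beam:
        return "forward"     # centred on the interface: drive straight in
    if left_beam:
        return "turn_left"   # only the left beam seen: interface lies left
    if right_beam:
        return "turn_right"  # only the right beam seen: interface lies right
    return "search"          # no beam received: rotate until one is found
```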
Of course, the positioning sensor may be other types of sensors, and the present embodiment does not limit the type of the positioning sensor.
To understand more clearly how a working strategy is executed based on the object recognition result, refer to the schematic diagrams of the automatic-recharging working strategy shown in figs. 5 and 6. As shown there, after the self-moving device captures the environment image, it obtains the object recognition result using the image recognition model; when the result indicates that the current environment includes the charging component 51, the position of the charging interface 53 on the charging component 51 is located using the positioning sensor 52, and the self-moving device moves toward the charging interface 53 so that it becomes electrically connected to the charging component 51 through the interface and charges.
In this embodiment, the charging component is recognized through the image recognition model and the device moves into its vicinity; the self-moving device can thus automatically return to the charging component to charge, improving its intelligence.
In addition, determining the position of the charging interface on the charging component through the positioning sensor improves the accuracy with which the self-moving device automatically returns to the charging component, improving automatic charging efficiency.
Fig. 7 is a block diagram of a control apparatus of a self-moving device according to an embodiment of the present application, and this embodiment takes the application of the apparatus to the self-moving device shown in fig. 1 as an example for explanation. The device at least comprises the following modules: an image acquisition module 710, a model acquisition module 720, and a device control module 730.
An image obtaining module 710, configured to obtain an environment image collected by the image collecting assembly during a moving process of the self-moving device;
a model obtaining module 720, configured to obtain an image recognition model, where a computing resource occupied by the image recognition model during running is lower than a maximum computing resource provided by the self-moving device;
and a device control module 730, configured to input the environment image into the image recognition model to obtain an object recognition result, where the object recognition result indicates the category of the target object.
For relevant details reference is made to the above-described method embodiments.
It should be noted that the control apparatus of the self-moving device provided in the above embodiments is illustrated only by the division of functional modules described above. In practical applications, these functions may be assigned to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different modules to complete all or part of the functions described above. In addition, the control apparatus provided in the above embodiments belongs to the same concept as the embodiments of the control method of the self-moving device; its specific implementation is described in the method embodiments and is not repeated here.
Fig. 8 is a block diagram of a control apparatus of a self-moving device according to an embodiment of the present application. The apparatus may be the self-moving device shown in fig. 1, or another device that is installed on, but independent of, the self-moving device. The apparatus comprises at least a processor 801 and a memory 802.
Processor 801 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 801 may be implemented in at least one of the following hardware forms: DSP (Digital Signal Processor), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor. The main processor, also called a Central Processing Unit (CPU), processes data in the awake state; the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 801 may further include an AI (Artificial Intelligence) processor for handling machine-learning computations.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 802 is used to store at least one instruction for execution by processor 801 to implement the control method of the self-moving device provided by the method embodiments herein.
In some embodiments, the control device of the mobile device may further include: a peripheral interface and at least one peripheral. The processor 801, memory 802 and peripheral interface may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface via a bus, signal line, or circuit board. Illustratively, peripheral devices include, but are not limited to: radio frequency circuit, touch display screen, audio circuit, power supply, etc.
Of course, the control device of the self-moving device may also include fewer or more components, which is not limited in this embodiment.
Optionally, the present application further provides a computer-readable storage medium, in which a program is stored, and the program is loaded and executed by a processor to implement the control method of the self-moving device of the above-mentioned method embodiment.
Optionally, the present application further provides a computer product, which includes a computer-readable storage medium, in which a program is stored, and the program is loaded and executed by a processor to implement the control method of the self-moving device of the above-mentioned method embodiment.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations are described, but any combination of these technical features that contains no contradiction should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. For a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.