Image processing method, related terminal, device and storage medium
1. An image processing method, comprising:
acquiring, by a terminal, an image to be registered;
performing first image registration on the image to be registered by using a local target image to obtain a local processing result;
sending the image to be registered to a cloud, so that the cloud performs second image registration on the image to be registered by using a cloud target image to obtain a cloud processing result; and
obtaining a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result.
2. The method according to claim 1, wherein the performing first image registration on the image to be registered by using a local target image to obtain a local processing result comprises:
finding at least one first target image from a first target image set as the local target image;
registering the image to be registered and the local target image by using a local registration mode to obtain a local transformation parameter between the image to be registered and the local target image; and
obtaining the local processing result based on the local transformation parameter;
and/or, the cloud processing result is obtained by the cloud registering the image to be registered and the cloud target image by using a cloud registration mode, wherein the cloud target image is from a second target image set.
3. The method of claim 2, wherein at least some of the images in the first and second target image sets are identical;
and/or the number of images in the first target image set is less than the number of images in the second target image set;
and/or the computing power or computing time required by the local registration mode is less than the computing power or computing time required by the cloud registration mode.
4. The method of claim 2 or 3, wherein said finding at least one first target image from a first set of target images as the local target image comprises:
determining feature similarity between the image to be registered and a first target image based on feature representations of feature points in the image to be registered and the first target image;
selecting at least one first target image whose feature similarity meets a preset similarity requirement as the local target image;
and/or, the registering the image to be registered and the local target image by using a local registration mode to obtain a local transformation parameter between the image to be registered and the local target image comprises:
determining at least one group of local matching point pairs between the image to be registered and the local target image based on the feature representation of the feature points in the image to be registered and the local target image;
and obtaining the local transformation parameters based on the at least one group of local matching point pairs.
5. The method according to any one of claims 1 to 4, wherein obtaining a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result comprises:
in response to a preset condition being met, taking the cloud processing result as the first processing result of the image to be registered; and
in response to the preset condition not being met, taking the local processing result as the first processing result of the image to be registered.
6. The method of claim 5, wherein the preset condition is that the cloud processing result is received within a preset time.
7. The method according to any one of claims 1 to 6, wherein the acquiring an image to be registered comprises:
acquiring image frames captured by a shooting device, wherein the image frames comprise a first image frame and second image frames;
taking the first image frame as the image to be registered;
the method further comprises the following steps:
sequentially taking the second image frames as images to be tracked, and obtaining a second processing result of each image to be tracked based on a reference processing result of a reference image frame, the image to be tracked, and image information in the reference image frame, wherein the reference image frame is an image frame preceding the image to be tracked, and the reference processing result is determined based on the first processing result.
8. The method of claim 7, wherein the step of performing a first image registration on the image to be registered by using a local target image to obtain a local processing result is performed by a first thread;
wherein at least one of the following steps is executed by a second thread: the step of obtaining a second processing result of the image to be tracked based on a reference processing result of the reference image frame, the image to be tracked and image information in the reference image frame; the step of sending the image to be registered to a cloud; and the step of obtaining a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result;
wherein the first thread and the second thread are processed asynchronously.
9. The method of claim 8, wherein the first processing result is a first transformation parameter between the image to be registered and a final target image, the final target image is the local target image or a cloud target image, and the second processing result is a second transformation parameter between the image to be tracked and the reference image frame;
or, the first processing result is the pose of the image to be registered, and the second processing result is the pose of the image to be tracked;
or, the first processing result is the first transformation parameter, the second processing result is the pose of the image to be tracked, and the method further includes executing the following steps by using the second thread: and obtaining the pose of the image to be registered by using the first transformation parameter.
10. The method according to claim 8 or 9, wherein before the first image registration is performed on the image to be registered by using the local target image to obtain the local processing result, the method further comprises performing the following step with the second thread:
initializing the first thread;
and/or, the method further comprises performing, with the second thread, at least one of:
after the second thread obtains a second processing result of the image to be tracked, rendering and displaying the image to be tracked based on the second processing result; and
in a case where the first thread obtains the first processing result of the image to be registered and the image to be registered has not been displayed, rendering and displaying the image to be registered based on the first processing result.
11. An image processing terminal characterized by comprising:
an image acquisition module configured to acquire an image to be registered;
a local registration module configured to perform first image registration on the image to be registered by using a local target image to obtain a local processing result;
a cloud registration module configured to send the image to be registered to a cloud, so that the cloud performs second image registration on the image to be registered by using a cloud target image to obtain a cloud processing result; and
a determining module configured to obtain a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result.
12. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the image processing method of any one of claims 1 to 10.
13. A computer-readable storage medium on which program instructions are stored, which program instructions, when executed by a processor, implement the image processing method of any one of claims 1 to 10.
Background
Image registration and tracking are important research topics in computer vision fields such as AR and VR. Through image registration and image tracking, the transformation parameters between a current image captured by a camera and a target image can be obtained, and the position of the target image in the current image can subsequently be derived from these transformation parameters.
At present, when a terminal runs an image processing algorithm, either the acquired image is uploaded to a cloud over a network, the cloud completes the actual image processing, and the processing result is fed back to the local end; or the local end alone runs the image processing algorithm using its own computing power to obtain the result. The former approach is easily affected by poor network transmission speed and slow cloud processing, so the device cannot obtain results in time; the latter suffers from insufficient local computing capability, so the accuracy of the image processing is not high. These problems greatly hinder further development of the technology.
Therefore, improving the speed at which a device runs an image processing algorithm, while also improving the accuracy of the image processing, is of great significance.
Disclosure of Invention
The present application provides an image processing method, a related terminal, an electronic device, and a storage medium.
A first aspect of the present application provides an image processing method, including: acquiring, by a terminal, an image to be registered; performing first image registration on the image to be registered by using a local target image to obtain a local processing result; sending the image to be registered to a cloud, so that the cloud performs second image registration on the image to be registered by using a cloud target image to obtain a cloud processing result; and obtaining a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result.
In this way, the local processing result is obtained through the first image registration, and the cloud processing result is obtained through the second image registration, so the terminal can draw on the computing power of the cloud as well as its own local computing power, making its image registration more flexible. Because the final processing result is obtained based on at least one of the local processing result and the cloud processing result, even when one of the two results cannot be obtained, the final processing result can still be derived from the other, which improves the reliability of the image registration.
In some application scenarios, the processing result obtained first can be selected as the final processing result, which further increases the speed of image registration; in other scenarios, the processing result from the end with better processing resources (for example, stronger and more accurate registration capability) can be preferred, which improves the accuracy of image registration.
Performing the first image registration on the image to be registered by using the local target image to obtain the local processing result includes: finding at least one first target image from a first target image set as the local target image; registering the image to be registered and the local target image by using a local registration mode to obtain local transformation parameters between the image to be registered and the local target image; and obtaining the local processing result based on the local transformation parameters. Additionally or alternatively, the cloud processing result is obtained by the cloud registering the image to be registered and a cloud target image by using a cloud registration mode, wherein the cloud target image is from a second target image set.
Thus, by finding at least one first target image from the first target image set as the local target image, the local transformation parameters can be calculated based on the local target image, and the local processing result is finally obtained.
Here, at least some images in the first target image set and the second target image set are identical; and/or the number of images in the first target image set is less than the number of images in the second target image set; and/or the computing power or computing time required by the local registration mode is less than that required by the cloud registration mode.
Because at least some images in the two sets are identical, both the cloud and the local end can register those images against the image to be registered, which improves the robustness of the image processing method. Because the first target image set contains fewer images than the second target image set, the local end has fewer images to register against during the first image registration, which speeds up that registration; meanwhile, given the cloud's stronger processing capability, the cloud is configured with more target images so that it can register more accurately. In addition, requiring less computing power for the local registration mode than for the cloud registration mode reduces the demands on the terminal's local computing power and increases the speed of local registration; requiring less computing time for the local registration mode likewise accelerates local registration.
Finding at least one first target image from the first target image set as the local target image includes: determining the feature similarity between the image to be registered and each first target image based on the feature representations of feature points in the image to be registered and the first target image; and selecting at least one first target image whose feature similarity meets a preset similarity requirement as the local target image. Additionally or alternatively, registering the image to be registered and the local target image by using the local registration mode to obtain the local transformation parameters includes: determining at least one group of local matching point pairs between the image to be registered and the local target image based on the feature representations of the feature points in the two images; and obtaining the local transformation parameters based on the at least one group of local matching point pairs.
In this way, the first target image most similar to the image to be registered can be determined quickly through the similarity calculation, which accelerates the local registration mode at the local end.
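As an illustrative sketch (not part of the claimed method), selecting the local target image by feature similarity could look like the following, assuming each image is summarized by a global feature descriptor and cosine similarity is used as the feature similarity; the function name, threshold, and descriptor representation are placeholders:

```python
import numpy as np

def select_local_targets(query_desc, target_descs, sim_threshold=0.8, top_k=1):
    """Pick the first target image(s) most similar to the image to be registered.

    query_desc: 1-D global descriptor of the image to be registered.
    target_descs: dict mapping image id -> 1-D descriptor (the first target image set).
    Returns up to top_k image ids whose cosine similarity meets the preset requirement.
    """
    q = query_desc / np.linalg.norm(query_desc)
    scored = []
    for img_id, desc in target_descs.items():
        d = desc / np.linalg.norm(desc)
        scored.append((float(q @ d), img_id))  # cosine similarity of unit vectors
    scored.sort(reverse=True)  # highest similarity first
    return [img_id for sim, img_id in scored[:top_k] if sim >= sim_threshold]
```

Because only the most similar first target image(s) go on to the more expensive matching step, the local end avoids registering against the whole set.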
Obtaining the first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result includes: in response to a preset condition being met, taking the cloud processing result as the first processing result of the image to be registered; and in response to the preset condition not being met, taking the local processing result as the first processing result of the image to be registered.
By checking whether the preset condition is met, the computing power of the cloud can be fully utilized when it is; when it is not, the local processing result serves as the first processing result of the image to be registered, so that image registration can still proceed.
The preset condition is that the cloud processing result is received within a preset time.
Setting the preset condition to receiving the cloud processing result within the preset time means that, when the condition is not met, the cloud processing result is simply not used, which prevents the response time of the terminal from becoming too long.
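This timeout-plus-fallback behavior can be sketched with Python's standard concurrent.futures; the function names and the concrete timeout are illustrative assumptions, not part of the claims:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def register_with_fallback(local_register, cloud_register, timeout_s=0.05):
    """Use the cloud result if it arrives within the preset time, else the local result."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        local_future = pool.submit(local_register)
        cloud_future = pool.submit(cloud_register)
        try:
            # Preset condition: the cloud processing result is received in time.
            return cloud_future.result(timeout=timeout_s)
        except TimeoutError:
            cloud_future.cancel()  # best effort; the call may already be running
            return local_future.result()
```

The local registration runs in parallel, so the fallback is already available the moment the cloud deadline expires.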
Acquiring the image to be registered includes: acquiring image frames captured by a shooting device, wherein the image frames include a first image frame and second image frames; and taking the first image frame as the image to be registered. The method further includes: sequentially taking the second image frames as images to be tracked, and obtaining a second processing result of each image to be tracked based on a reference processing result of a reference image frame, the image to be tracked, and image information in the reference image frame, wherein the reference image frame is an image frame preceding the image to be tracked, and the reference processing result is determined based on the first processing result.
By taking the first image frame as the image to be registered, image registration can be performed continuously on the frames captured by the shooting device; by sequentially taking the second image frames as images to be tracked, image tracking of the second image frames is subsequently realized.
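One plausible reading of tracking against a reference image frame is to chain each frame-to-frame transform onto the reference processing result; the sketch below assumes the processing results are 3x3 homogeneous transforms and is purely illustrative:

```python
import numpy as np

def track_frames(first_result, step_transforms):
    """Chain per-frame transforms onto the reference processing result.

    first_result: 3x3 transform of the first (registered) frame w.r.t. the target.
    step_transforms: for each image to be tracked, its 3x3 transform w.r.t. the
        reference (previous) image frame.
    Returns the second processing result for every image to be tracked.
    """
    results = []
    reference = first_result  # the reference processing result starts from registration
    for step in step_transforms:
        current = step @ reference  # compose: target -> reference -> current frame
        results.append(current)
        reference = current  # this frame becomes the next reference image frame
    return results
```

Each tracked frame thus only needs a cheap transform against its immediate predecessor, while staying anchored to the first processing result.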
The step of performing the first image registration on the image to be registered by using the local target image to obtain the local processing result is executed by a first thread, while at least one of the following steps is executed by a second thread: obtaining the second processing result of the image to be tracked based on the reference processing result of the reference image frame, the image to be tracked, and the image information in the reference image frame; sending the image to be registered to the cloud; and obtaining the first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result. The first thread and the second thread are processed asynchronously.
Because the first thread and the second thread run asynchronously, image registration and image tracking can proceed at the same time: the terminal can obtain a tracking result (the second processing result) in time without waiting for the registration result (the first processing result), which improves the response speed of the terminal and reduces latency.
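The asynchronous division of labor between the two threads can be sketched as follows; the sleeps stand in for real computation and all names are illustrative assumptions:

```python
import threading
import time

registration_result = None
result_lock = threading.Lock()

def first_thread_register():
    # First thread: the (slow) first image registration against the local target image.
    global registration_result
    time.sleep(0.1)  # stands in for the registration computation
    with result_lock:
        registration_result = "H_first"  # first processing result (e.g. a homography)

def second_thread_track(num_frames=10):
    # Second thread role: per-frame tracking that never blocks on registration.
    tracked = []
    for i in range(num_frames):
        with result_lock:
            have_registration = registration_result is not None
        tracked.append((i, have_registration))  # track frame, note if result landed
        time.sleep(0.03)  # stands in for tracking one frame
    return tracked

t = threading.Thread(target=first_thread_register)
t.start()
frames = second_thread_track()
t.join()
```

Early frames are tracked before the registration result exists, and later frames pick it up once the first thread finishes, which is exactly the latency benefit described above.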
The first processing result is a first transformation parameter between the image to be registered and a final target image, where the final target image is the local target image or the cloud target image, and the second processing result is a second transformation parameter between the image to be tracked and the reference image frame; or, the first processing result is the pose of the image to be registered and the second processing result is the pose of the image to be tracked; or, the first processing result is the first transformation parameter, the second processing result is the pose of the image to be tracked, and the method further includes executing the following step with the second thread: obtaining the pose of the image to be registered from the first transformation parameter.
Because the second processing result can take different forms (the second transformation parameter or the pose of the image to be tracked), the appropriate form can subsequently be selected as needed.
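When the first transformation parameter is a homography and a pose is needed, a standard plane-induced decomposition can be used; the sketch below is not taken from the application itself and assumes known camera intrinsics K and a target plane at z = 0:

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover a camera pose (R, t) from a plane-induced homography.

    Assumes H maps points on the target plane z = 0 into the image, so that
    H ~ K [r1 r2 t] up to scale; K is the 3x3 camera intrinsic matrix.
    """
    M = np.linalg.inv(K) @ H
    scale = 1.0 / np.linalg.norm(M[:, 0])  # columns of R have unit norm
    r1 = scale * M[:, 0]
    r2 = scale * M[:, 1]
    r3 = np.cross(r1, r2)  # complete the right-handed rotation basis
    t = scale * M[:, 2]
    R = np.column_stack([r1, r2, r3])
    # Re-orthogonalize R to guard against noise in the estimated H.
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```

This is one textbook route; production code would also resolve the sign/solution ambiguities of noisy homographies.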
Before the first image registration is performed on the image to be registered by using the local target image to obtain the local processing result, the method further includes executing the following step with the second thread: initializing the first thread. Additionally or alternatively, the method further includes executing at least one of the following with the second thread: after the second thread obtains the second processing result of the image to be tracked, rendering and displaying the image to be tracked based on the second processing result; and, in a case where the first thread obtains the first processing result of the image to be registered and the image to be registered has not been displayed, rendering and displaying the image to be registered based on the first processing result.
By having the second thread render and display the image to be tracked, or render and display the image to be registered, the image frames are not only processed but can also support interaction with the real environment.
A second aspect of the present application provides an image processing terminal, including an image acquisition module, a local registration module, a cloud registration module, and a determining module. The image acquisition module is configured to acquire an image to be registered; the local registration module is configured to perform first image registration on the image to be registered by using a local target image to obtain a local processing result; the cloud registration module is configured to send the image to be registered to a cloud, so that the cloud performs second image registration on the image to be registered by using a cloud target image to obtain a cloud processing result; and the determining module is configured to obtain a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result.
A third aspect of the present application provides an electronic device, which includes a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the image processing method in the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the image processing method of the first aspect described above.
According to the above scheme, the local processing result is obtained through the first image registration and the cloud processing result through the second image registration, so the terminal can use the computing power of the cloud as well as its own local computing power, making its image registration more flexible. Because the final processing result is obtained based on at least one of the local processing result and the cloud processing result, even when one of the two cannot be obtained, the final processing result can still be derived from the other, which improves the reliability of image registration.
In some application scenarios, the processing result obtained first can be selected as the final processing result, which increases the speed of image registration; in other scenarios, the processing result from the end with better processing resources (for example, stronger and more accurate registration capability) can be preferred, which improves the accuracy of image registration. Either way, when the image registration algorithm runs, a processing result can be obtained faster, or a more accurate registration result can be obtained.
Drawings
FIG. 1 is a first flowchart of a first embodiment of the image processing method of the present application;
FIG. 2 is a second flowchart of a first embodiment of the image processing method of the present application;
FIG. 3 is a third flowchart of a first embodiment of the image processing method of the present application;
FIG. 4 is a fourth flowchart illustrating a first embodiment of an image processing method according to the present application;
FIG. 5 is a schematic flow chart diagram of a second embodiment of the image processing method of the present application;
FIG. 6 is a block diagram of an embodiment of an image processing terminal according to the present application;
FIG. 7 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 8 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between related objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects. Further, the term "plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a first flowchart illustrating an image processing method according to a first embodiment of the present application. Specifically, the method may include the steps of:
step S11: and acquiring an image to be registered.
The image processing method can be executed by a mobile terminal, such as a mobile phone, a tablet computer, or smart glasses. The image to be registered may be an image captured by a shooting device, such as the camera of a mobile phone or tablet, or a surveillance camera; the specific way in which the image to be registered is obtained is not limited.
In a specific implementation scenario, the image processing method of the present application may be executed in a web browser, that is, on the web side.
Step S12: and carrying out first image registration on the image to be registered by using the local target image so as to obtain a local processing result.
In one embodiment, the local target images belong to a first target image set that contains a first preset number of local target images. When the first image registration is performed, each local target image in the first target image set is registered with the image to be registered.
Since the local target image is used to perform the first image registration on the image to be registered, a general image registration method can be used, for example, a grayscale- and template-based algorithm or a feature-based matching method. In a feature-based matching method, a certain number of matching point pairs between the image to be registered and the local target image are obtained, and the transformation parameters between the two images are then calculated using the random sample consensus (RANSAC) algorithm, so as to obtain the local processing result. In one implementation scenario, the local processing result may directly be the transformation parameters between the image to be registered and the target image; in another implementation scenario, the local processing result may be the pose of the terminal, that is, the pose of the terminal in the world coordinate system established based on the local target image (hereinafter referred to as the terminal pose), obtained from the transformation parameters (for example, the homography matrix H) between the image to be registered and the local target image.
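For illustration only, such a homography can be estimated from matching point pairs with the Direct Linear Transform; this minimal sketch omits the RANSAC outlier-rejection step mentioned above and is not taken from the application itself:

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4 point pairs.

    Direct Linear Transform: each pair contributes two rows of A in A h = 0;
    the solution is the right singular vector of A with the smallest singular
    value. (A full pipeline would wrap this in RANSAC to reject bad matches.)
    """
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1
```

With four well-spread exact correspondences the estimate is exact; with noisy feature matches this solver would be called repeatedly inside a RANSAC loop.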
In one implementation scenario, the image to be registered and the local target image may be registered in a local registration mode to obtain the local processing result; the local registration mode is, for example, the first image registration described above.
Step S13: and sending the image to be registered to the cloud end, so that the cloud end performs second image registration on the image to be registered by utilizing the cloud end target image to obtain a cloud end processing result.
In one implementation scenario, the cloud target image belongs to a second target image set that contains a second preset number of cloud target images. When the second image registration is performed, each cloud target image in the second target image set is registered with the image to be registered.
It should be understood that, since the cloud and the local end can each perform image registration independently, the execution order of step S12 and step S13 is not limited: step S12 may be executed first, step S13 may be executed first, or the two may be executed simultaneously.
In one implementation scenario, because the computing power of the cloud is greater than that of the local end, the second image registration performed by the cloud on the image to be registered using the cloud target image may employ an image registration algorithm that requires more computing power than the algorithm run at the local end.
In one implementation scenario, the cloud processing result is obtained by the cloud registering the image to be registered and the cloud target image in a cloud registration mode. By using the cloud registration mode, a cloud processing result that is more accurate than the local processing result can be obtained thanks to the greater computing power of the cloud.
Step S14: and obtaining a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result.
After the local processing result and the cloud processing result are obtained, at least one of them can be selected as needed to yield the first processing result of the image to be registered. For example, to meet the terminal's response speed requirement, the local processing result may be preferred; to meet the terminal's registration accuracy requirement, the cloud processing result may be preferred.
In one implementation scenario, the first processing result is a first transformation parameter between the image to be registered and a final target image, where the final target image is the local target image or the cloud target image.
In summary, the local processing result is obtained through the first image registration and the cloud processing result through the second image registration, so the terminal can use the computing power of the cloud as well as its own local computing power, making its image registration more flexible. Because the final processing result is obtained based on at least one of the local processing result and the cloud processing result, even when one of the two cannot be obtained, the final processing result can still be derived from the other, which improves the reliability of the image registration.
In some application scenarios, the processing result obtained first can be selected as the final processing result, so that the speed of image registration can be further increased, and in some application scenarios, the processing result at one end with better processing resources (for example, stronger and more accurate registration capability) can be preferentially selected as the final processing result, so that the accuracy of image registration can be increased.
In one embodiment, at least some of the images in the first target image set and the second target image set are identical. For example, the images in the second target image set may include all of the images in the first target image set. By setting that at least partial images in the first target image set and the second target image set are the same, image registration can be performed on the partial images and the images to be registered by using the cloud end and the local end, and the robustness of the image processing method is improved.
In one implementation scenario, the number of images in the first target image set is less than the number of images in the second target image set. By setting the number of the images in the first target image set to be smaller than that in the second target image set, the number of the images needing to be subjected to image registration with the images to be registered is smaller when the local end carries out first image registration, so that the registration speed of the first image registration can be increased, more target images are configured for the cloud end in consideration of stronger cloud end processing capacity, and more accurate registration of the cloud end can be realized.
In one implementation scenario, the computing power or computing time required for the local registration approach is less than the computing power or computing time required for the cloud registration approach. Therefore, by setting the computing power required by the local registration mode to be smaller than that required by the cloud registration mode, the requirement on the local computing power of the terminal can be reduced, and the speed of local registration can be increased; the local registration speed can be accelerated by setting the calculation time required by the local registration mode to be less than that required by the cloud registration mode.
Referring to fig. 2, fig. 2 is a second flowchart of the image processing method according to the first embodiment of the present application. The present embodiment is a further extension of the step of "acquiring an image to be registered" mentioned in step S11, and may specifically include the following steps:
step S111: the method comprises the steps of obtaining image frames obtained by shooting through a shooting device, wherein the image frames comprise a first image frame and a second image frame.
The photographing device is, for example, a camera module of the terminal or another image capturing device (such as a monitoring camera). The image frames captured by the photographing device may be divided into first image frames and second image frames. The first image frame may be used for image registration, and the second image frame may be used for image tracking after image registration. The first image frame and the second image frame may be the same frame or different frames; that is, a given frame may serve as both a first image frame and a second image frame.
Step S112: and taking the first image frame as an image to be registered.
The terminal can acquire the first image frame as an image to be registered for image registration.
In one embodiment, the first image frames may be sequentially used as images to be registered, and the first image frames may be sequentially obtained as images to be registered according to an obtaining order of the first image frames.
Therefore, by using the first image frame as the image to be registered, the image registration of the image frames obtained by the shooting device can be continuously performed.
Referring to fig. 3, fig. 3 is a third flowchart illustrating an image processing method according to a first embodiment of the present application. The present embodiment specifically expands the "performing first image registration on an image to be registered by using a local target image to obtain a local processing result" mentioned in step S12 of the above embodiment, and includes the following steps:
step S121: at least one first target image is found from the first set of target images as a local target image.
When the first image registration is performed, at least one first target image may first be found from the first target image set as a local target image for subsequently obtaining a local processing result. The search may be based on, for example, the degree of matching or the degree of similarity between the feature information of the first target image and the feature information of the image to be registered.
In an implementation scenario, the step may specifically include step S1211 and step S1212.
Step S1211: and determining the feature similarity between the image to be registered and the first target image based on the feature representation of the feature points in the image to be registered and the first target image.
In an implementation scenario, feature extraction may be performed on the image to be registered and the first target image by using certain feature extraction algorithms to obtain feature points in the images, where the number of feature points is not particularly limited. In the present application, the feature points extracted from an image frame may include feature points obtained by feature extraction on the series of image frames in an image pyramid established based on that image frame. In the present embodiment, the feature points extracted from the image frame can be considered to lie on the same plane as the final target image.
The feature extraction algorithm is, for example, the FAST (Features from Accelerated Segment Test) algorithm, the SIFT (Scale-Invariant Feature Transform) algorithm, the ORB (Oriented FAST and Rotated BRIEF) algorithm, or the like. In one implementation scenario, the feature extraction algorithm is the ORB algorithm. After the feature points are obtained, a feature representation corresponding to each feature point is also obtained; the feature representation is, for example, a feature vector. Thus, each feature point has a corresponding feature representation.
In a specific implementation scenario, the feature representations obtained by feature extraction on all the first target images may be input into a bag-of-words model as a local feature set of the first target images, so as to construct a database for quickly retrieving the first target images.
Thereafter, the degree of similarity of the feature representations of the feature points in the image to be registered and each of the first target images may be calculated, for example, the distance between the feature representations of the feature points in the image to be registered and each of the first target images. In a specific implementation scenario, a feature representation obtained by feature extraction of an image to be registered may be input into the above "bag of words" model, so as to quickly determine a feature similarity between the image to be registered and the first target image.
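As a concrete illustration of the similarity computation above, the sketch below scores two images by the fraction of binary (ORB-style) descriptors of the image to be registered that find a close match among a first target image's descriptors. The Hamming-distance metric, the `max_dist` threshold, and the scoring scheme are illustrative assumptions, not the bag-of-words retrieval itself.

```python
import numpy as np

def hamming_distance(d1, d2):
    """Hamming distance between two binary descriptors (uint8 arrays)."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def image_similarity(query_descs, target_descs, max_dist=64):
    """Score similarity of two images as the fraction of query descriptors
    whose nearest target descriptor lies within max_dist (assumed heuristic)."""
    matched = 0
    for q in query_descs:
        best = min(hamming_distance(q, t) for t in target_descs)
        if best <= max_dist:
            matched += 1
    return matched / len(query_descs)
```

Such a score could be computed for every first target image in the set, with the bag-of-words database serving only to narrow the candidates quickly.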
Step S1212: and selecting at least one first target image with the characteristic similarity meeting the preset similarity requirement as a local target image.
The preset similarity requirement may be, for example, that the first target image is the one most similar to the image to be registered among all the first target images.
In a specific implementation scenario, the preset similarity requirement further includes that the distance between the feature representations of the feature points of the image to be registered and those of the first target image satisfies a preset threshold requirement. For example, the first target image most similar to the image to be registered may be selected first, and it is then checked whether the distance between the feature representations of its feature points and those of the image to be registered meets the preset threshold requirement; if so, that first target image is taken as the local target image. If not, the first target image with the second-highest similarity to the image to be registered is selected, the same distance check is performed, and so on.
By carrying out similarity calculation, the first target image with the highest similarity to the image to be registered can be quickly determined, and the local registration mode of the local end is accelerated.
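The selection logic of steps S1211 and S1212 might be sketched as follows, where each candidate first target image carries a similarity score and a feature-representation distance; the tuple layout and threshold semantics are assumptions for illustration.

```python
def select_local_target(candidates, dist_threshold):
    """candidates: list of (similarity, descriptor_distance), one entry per
    first target image. Rank by similarity (descending) and accept the first
    candidate whose descriptor distance meets the preset threshold, falling
    back to the next-most-similar image as described in the text."""
    order = sorted(range(len(candidates)),
                   key=lambda i: candidates[i][0], reverse=True)
    for i in order:
        if candidates[i][1] <= dist_threshold:
            return i
    return None  # no first target image satisfies the preset requirement
```

In practice more than one index could be returned, since at least one first target image may serve as the local target image.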
Step S122: and registering the image to be registered and the local target image by using a local registration mode to obtain local transformation parameters between the image to be registered and the local target image.
After the local target image is determined, the image to be registered and the local target image can be registered by using a local registration mode, so that the computing capability of the terminal is fully utilized, and local transformation parameters between the image to be registered and the local target image are obtained.
In one implementation scenario, this step includes the following steps S1221 and S1222.
Step S1221: and determining at least one group of local matching point pairs between the image to be registered and the local target image based on the characteristic representation of the characteristic points in the image to be registered and the local target image.
First, feature extraction can be performed on the image to be registered and the local target image to obtain feature representations of the feature points in both. The feature points of the local target image are defined as first feature points, and the feature points of the image to be registered are defined as second feature points. In an implementation scenario, since the features of the image to be registered and the local target image have already been extracted in step S1211 above, the feature representations can be reused directly at this point, reducing the processing steps.
Then, the matching degree of the feature points between the image to be registered and the local target image can be calculated to obtain at least one group of local matching point pairs. The matching degree of the feature points may specifically be the matching degree of the feature representations between two feature points. In one implementation scenario, the matching degree of each feature point in the image to be registered with each feature point in the local target image may be calculated. In one implementation scenario, the matching degree between two feature points is derived from the distance between the feature representation of the first feature point and that of the second feature point. For example, the distance between the feature representations of two feature points serves as the matching degree: the closer the distance, the better the match, and the closest distance is considered the best match. In one implementation scenario, the feature representations are feature vectors, and the distance between feature representations is the distance between feature vectors. When the local matching point pairs are determined, at least one group of local matching point pairs can be selected in order of matching degree from high to low. In a group of local matching point pairs, the first feature point is the first matching point, and the second feature point is the second matching point.
Step S1222: and obtaining local transformation parameters based on at least one group of local matching point pairs.
In one implementation scenario, the transformation parameters between the image to be registered and the local target image may be calculated using the Random Sample Consensus (RANSAC) algorithm based on at least one set of local matching point pairs.
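To make the RANSAC idea concrete, the sketch below estimates only a 2-D translation from the matched point pairs; a real implementation would fit the full transformation (e.g. a homography), so the model, iteration count, and tolerance are illustrative assumptions.

```python
import random
import numpy as np

def ransac_translation(src, dst, iters=100, tol=1.0, seed=0):
    """Minimal RANSAC sketch: repeatedly fit a translation model from a
    single sampled pair, count inliers, and keep the model with the most
    support, thereby rejecting mismatched point pairs (outliers)."""
    rng = random.Random(seed)
    best_t, best_inliers = None, -1
    for _ in range(iters):
        i = rng.randrange(len(src))
        t = dst[i] - src[i]                       # model from a minimal sample
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = int((err < tol).sum())
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers
```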
In another implementation scenario, after obtaining at least one set of locally matching point pairs, directional information for each set of locally matching point pairs may be calculated. The direction information of the local matching point pair can be obtained according to the directions of the first matching point and the second matching point in the local matching point pair.
In one implementation scenario, the direction information of the local matching point pair may be a difference between the directions of the first matching point and the second matching point. For example, when the feature points are extracted by the ORB algorithm, the direction of the first matching point is a corner point direction angle, and the direction of the second matching point is also a corner point direction angle, and the direction information of the local matching point pair may be a difference between the corner point direction angle of the first matching point and the corner point direction angle of the second matching point. Therefore, the rotation angle of the image to be registered relative to the local target image can be obtained by calculating the direction information of a group of local matching point pairs. After the direction information of a group of local matching point pairs is obtained, image registration can be performed subsequently by using the rotation angle of the image to be registered, represented by the direction information of the group of local matching point pairs, relative to the local target image, so as to finally obtain local transformation parameters between the local target image and the image to be registered.
In one implementation scenario, a first image region centered on a first matching point may be extracted from the local target image, and a second image region centered on a second matching point may be extracted from the image to be registered. Then, a first deflection angle of the first image area and a second deflection angle of the second image area are determined. Finally, a transformation parameter is obtained based on the first deflection angle and the second deflection angle, specifically, the transformation parameter may be obtained based on the direction information of the local matching point pair and the pixel coordinate information of the first matching point and the second matching point in the local matching point pair.
In one implementation scenario, the first deflection angle is a directional angle between a line connecting the centroid of the first image region and the center of the first image region and a predetermined direction (e.g., an X-axis of a world coordinate system). The second deflection angle is a directed included angle between a connecting line of the centroid of the second image area and the center of the second image area and the preset direction.
In another implementation scenario, the first deflection angle θ can be directly obtained by the following equation:
θ = arctan2( Σ y·I(x, y), Σ x·I(x, y) ) (1)
In the above formula (1), (x, y) represents the offset of a pixel point in the first image region with respect to the center of the first image region, I(x, y) represents the pixel value of that pixel point, and Σ represents summation over the pixel points in the first image region. The second deflection angle can be calculated in the same way.
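Equation (1) can be computed directly; the sketch below treats the first image region as a small intensity patch and evaluates the two moment sums relative to the patch centre. The patch layout and centring convention are assumptions.

```python
import math
import numpy as np

def deflection_angle(patch):
    """Deflection angle of an image region per equation (1):
    theta = arctan2(sum(y*I(x,y)), sum(x*I(x,y))), where (x, y) is the
    offset of each pixel from the centre of the region."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2.0              # offsets relative to the region centre
    ys -= (h - 1) / 2.0
    m10 = float((xs * patch).sum())  # sum of x * I(x, y)
    m01 = float((ys * patch).sum())  # sum of y * I(x, y)
    return math.atan2(m01, m10)
```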
In one implementation scenario, the direction information of the local matching point pair and the coordinate information, such as pixel coordinate information, of the first matching point and the second matching point in the local matching point pair may be utilized to arrive at the final transformation parameter between the local target image and the image to be registered. Thereby enabling computation of local transformation parameters using a set of local matching point pairs.
In a specific embodiment, the transformation parameters between the image to be registered and the local target image can be obtained through the following steps a and b.
Step a: an angular difference between the first deflection angle and the second deflection angle is obtained.
The angular difference is, for example, the difference between the first deflection angle and the second deflection angle.
In one implementation scenario, equation (2) for calculating the angular difference is as follows:
θ = θ_T − θ_F (2)
where θ is the angle difference, θ_T is the first deflection angle, with T denoting the local target image, and θ_F is the second deflection angle, with F denoting the image to be registered.
Step b: and obtaining a first candidate transformation parameter based on the angle difference and the scale corresponding to the first matching point pair.
The first candidate transformation parameter is for example a homography matrix of the correspondence between the image to be registered and the local target image. The homography matrix is calculated as follows:
H = H_l · H_s · H_R · H_r (3)
where H is the homography matrix between the local target image and the image to be registered, i.e., the first candidate transformation parameter; H_r represents the translation of the image to be registered relative to the local target image; H_s represents the scale corresponding to the first matching point pair, i.e., the scale information when the local target image is zoomed; H_R represents the rotation of the image to be registered relative to the local target image; and H_l represents the translation back after the translation.
The above equation (3) may be expanded to obtain equation (4), which applies the angular difference to the matching points:
(x_F, y_F) = s · R(θ) · (x_T, y_T) + t (4)
where (x_T, y_T) are the pixel coordinates of the first matching point on the local target image; (x_F, y_F) are the pixel coordinates of the second matching point on the image to be registered; s is the scale corresponding to the first matching point pair, i.e., the scale corresponding to the point (x_T, y_T); θ is the angular difference, R(θ) is the corresponding rotation, and t is the translation.
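The composition in equation (3) can be made concrete as below, which builds the candidate homography from the matching points' coordinates, the scale s, and the angular difference θ; the exact factor order and sign conventions are assumptions consistent with the H_r/H_R/H_s/H_l description.

```python
import numpy as np

def similarity_homography(cx_t, cy_t, cx_f, cy_f, s, theta):
    """Compose H = H_l * H_s * H_R * H_r per equation (3): translate the
    first matching point (cx_t, cy_t) to the origin (H_r), rotate by the
    angular difference theta (H_R), scale by s (H_s), then translate onto
    the second matching point (cx_f, cy_f) (H_l)."""
    H_r = np.array([[1, 0, -cx_t], [0, 1, -cy_t], [0, 0, 1]], dtype=float)
    c, si = np.cos(theta), np.sin(theta)
    H_R = np.array([[c, -si, 0], [si, c, 0], [0, 0, 1]], dtype=float)
    H_s = np.array([[s, 0, 0], [0, s, 0], [0, 0, 1]], dtype=float)
    H_l = np.array([[1, 0, cx_f], [0, 1, cy_f], [0, 0, 1]], dtype=float)
    return H_l @ H_s @ H_R @ H_r
```

By construction, the resulting H maps the first matching point onto the second matching point.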
Step S123: and obtaining a local processing result based on the local transformation parameters.
If the local processing result is determined to be the local transformation parameter, the local transformation parameter obtained in step S122 may be used as the local processing result.
If the local processing result is determined as the pose of the image to be registered, namely the pose of the terminal when shooting the image to be registered, a conversion can be performed based on the local transformation parameters to obtain the pose (local processing result) of the image to be registered. For example, the local transformation parameters may be processed using a PnP (Perspective-n-Point) algorithm to obtain the pose of the image to be registered.
Thus, by finding out at least one first target image from the first target image set as a local target image, local transformation parameters may be calculated based on the local target image, and finally a local processing result is obtained.
Referring to fig. 4, fig. 4 is a fourth flowchart illustrating an image processing method according to a first embodiment of the present application. The present embodiment is a specific extension of step S14 of the above embodiment, and includes the following steps:
step S141: and judging whether the cloud processing result meets a preset condition or not.
The preset conditions are, for example, accuracy requirements of the processing results in the cloud, processing time requirements of the processing results in the cloud, and the like, and are not limited herein.
In a specific implementation scenario, the preset condition is that the cloud processing result is received within a preset time. For example, after the image to be registered is sent to the cloud, if the cloud processing result is not received within the preset time, the cloud processing result may be considered to not satisfy the preset condition. By setting the preset condition to receive the cloud processing result within the preset time, the cloud processing result is not utilized when the preset condition is not met, and the response time of the terminal is prevented from being too long.
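Steps S141 to S143 with the "received within a preset time" condition might be sketched as a simple timeout-with-fallback, where `cloud_task` stands in for the round trip to the cloud (a hypothetical callable):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def first_processing_result(cloud_task, local_result, timeout_s):
    """Wait up to timeout_s for the cloud processing result; if it does not
    arrive in time (the preset condition is not met), fall back to the
    local processing result, mirroring steps S141-S143."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(cloud_task)
        try:
            return future.result(timeout=timeout_s)
        except TimeoutError:
            return local_result
```

A production version would cancel or detach the pending cloud request instead of waiting for the pool to shut down.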
Step S142: and in response to the condition that the preset condition is met, taking the cloud processing result as a first processing result of the image to be registered.
The condition that the preset condition is met means that the cloud processing result can be used, and at the moment, the terminal can respond to the condition that the preset condition is met, and the cloud processing result is used as a first processing result of the image to be registered so as to utilize the cloud processing result, so that the computing capability of the cloud can be utilized.
Step S143: and in response to the condition that the preset condition is not met, taking the local processing result as a first processing result of the image to be registered.
Under the condition that the preset condition is not met, the cloud processing result cannot be used for image registration, and at the moment, the terminal can respond to the condition that the preset condition is not met, and take the local processing result as a first processing result of the image to be registered, so that the image registration can be continuously executed.
Therefore, by judging whether the cloud processing result meets the preset condition or not, the computing power of the cloud can be fully utilized when the preset condition is met; when the preset condition is not met, the local processing result is used as the first processing result of the image to be registered, so that the image registration can be continuously executed.
In one embodiment, after obtaining the first processing result of the image to be registered, the image processing method of the present application may further include the following step S21.
Step S21: and sequentially taking the second image frames as images to be tracked, and obtaining a second processing result of the images to be tracked based on the reference processing result of the reference image frame, the images to be tracked and the image information in the reference image frame, wherein the reference image frame is the image frame before the images to be tracked.
The second image frames are sequentially used as the images to be tracked, for example based on the order in which the second image frames are obtained, so as to obtain a second processing result for each image to be tracked. In a specific implementation scenario, the second processing result may be directly determined as a transformation parameter between the image to be tracked and the final target image; in another implementation scenario, the second processing result may also be the pose of the terminal. The transformation parameter between the image to be tracked and the final target image may be obtained with the same image registration algorithm, and the pose of the terminal may be obtained with a general image tracking algorithm, which are not described in detail herein. Thus, by sequentially using the second image frames as images to be tracked, image tracking of the second image frames is subsequently realized.
In one implementation scenario, the first image frame is a different image frame than the second image frame. For example, after the 1 st image frame is used as the first image frame, the subsequent 2 nd image frame is used as the second image frame for image tracking. After the 10 th image frame is used as the first image frame, the subsequent 11 th image frame is used as the second image frame for image tracking. In another implementation scenario, at least a portion of the first image frame may be a second image frame. For example, the 10 th image frame may be used as the first image frame, and the 10 th image frame may be used as the second image frame. By having the first image frame and the second image frame be different image frames or at least part of the first image frame be the second image frame, image registration may be performed for the first image frame or image tracking may be performed for the second image frame, respectively.
The image information in the image to be tracked and the image information in the reference image frame can be understood as all information obtained after the image to be tracked and the reference image frame are processed. For example, feature extraction may be performed on the image to be tracked and the reference image frame respectively based on a feature extraction algorithm to obtain feature information about feature points in the image to be tracked and the reference image frame, which may be regarded as image information in the image to be tracked and the reference image frame. By using the image information in the image to be tracked and the reference image frame, the corresponding transformation parameters or the corresponding pose variation between the image to be tracked and the reference image frame can be obtained. The corresponding transformation parameters or the corresponding pose change amounts can be obtained by the same image registration method or image tracking method, and are not described in detail herein.
In one implementation scenario, the reference image frame is an image frame preceding the image to be tracked. In one implementation scenario, the reference image frame is the ith frame before the image to be tracked, and i is an integer greater than or equal to 1. If there is a part of the second image frame before the second image frame as the image to be tracked, the reference image frame may be the first image frame or the second image frame.
In one implementation scenario, the reference processing result is derived based on the first processing result.
In a specific implementation scenario, when the reference image frame is the first image frame, the reference image frame is the image to be registered, and at this time, the first processing result may be directly used as the reference processing result. When the first processing result is the transformation parameter of the image to be registered and the target image, the reference processing result may be the transformation parameter of the image to be registered and the target image, and the reference processing result may also be the pose of the terminal obtained based on the transformation parameter. When the first processing result is the pose of the terminal, the reference processing result can be directly determined as the pose of the terminal.
In another implementation scenario, when the reference image frame is a second image frame, the reference processing result may still be determined based on the first processing result. Specifically, a relative processing result of the reference image frame with respect to its preceding n image frames (n ≥ 1) and the processing result of those preceding frames may be obtained, the latter being derived from the first processing result, and the two combined to give the reference processing result. For example, when the 1st image frame is a first image frame, the 2nd image frame is a second image frame, and the 2nd image frame is the reference image frame, the relative processing result of the 2nd image frame with respect to the 1st image frame (the homography matrix between the two frames, or the change in terminal pose between them) and the processing result (first processing result) of the 1st image frame may be obtained, yielding the processing result (reference processing result) of the 2nd image frame. Thereafter, when the 3rd image frame is the second image frame and the reference image frame, the relative processing result of the 3rd image frame with respect to the 2nd image frame and the processing result of the 2nd image frame may be obtained, yielding the processing result (reference processing result) of the 3rd image frame. Because the processing result of the 2nd image frame is obtained based on the first processing result, the processing result (reference processing result) of the 3rd image frame can likewise be considered to be determined based on the first processing result.
In another specific implementation scenario, the processing result (reference processing result) of the 3 rd image frame may also be obtained by obtaining the relative processing result of the 3 rd image frame with respect to the 1 st image frame and obtaining the first processing result of the 1 st image frame. The specific determination method may be adjusted according to the need, and is not limited herein. In a specific implementation scenario, after the first processing result is obtained, the first image frame corresponding to the first processing result and each subsequent image frame may be used as a reference image frame, so as to implement subsequent continuous tracking on the image frames.
In one implementation scenario, the first processing result is a first transformation parameter between the image to be registered and the final target image (the local target image or the cloud target image), and the second processing result is a second transformation parameter between the image to be tracked and the final target image. At this time, the reference processing result obtained based on the first processing result (first transformation parameter) may be a reference transformation parameter between the reference image frame and the final target image, and then the second transformation parameter may be obtained using the reference transformation parameter and a corresponding transformation parameter between the image to be tracked and the reference image frame.
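Obtaining the second transformation parameter from the reference transformation parameter and the frame-to-frame transformation reduces to a matrix product; in the sketch below the mapping directions of the two homographies are assumptions:

```python
import numpy as np

def chain_transform(h_ref, h_rel):
    """Second transformation parameter between the image to be tracked and
    the final target image, obtained by chaining the reference transformation
    parameter (assumed: target -> reference frame) with the relative
    transformation (assumed: reference frame -> image to be tracked)."""
    return h_rel @ h_ref
```

For pose-based processing results, the analogous operation composes the reference pose with the pose change between the reference frame and the image to be tracked.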
In an implementation scenario, the first processing result is the pose of the image to be registered, and the second processing result is the pose of the image to be tracked. At this time, the reference processing result obtained based on the first processing result (the pose of the image to be registered) may be the pose of the reference image frame (the pose when the terminal captures the reference image frame), and then the pose of the image to be tracked is obtained using the pose of the reference image frame, the amount of change in the corresponding pose between the image to be tracked and the reference image frame.
In an implementation scenario, the first processing result is a first transformation parameter, and the second processing result is a pose of an image to be tracked, where the image processing method further includes: and obtaining the pose of the image to be registered by using the first transformation parameter. Thus, the pose of the image to be registered is obtained, and the pose of the image to be tracked is finally obtained.
Therefore, by setting the second processing result to a different type (the second transformation parameter or the pose of the image to be tracked), selection can be subsequently made as needed.
In a disclosed embodiment, the step of performing the first image registration on the image to be registered by using the local target image to obtain the local processing result is performed by the first thread.
At least one of the following steps is executed by a second thread: obtaining a second processing result of the image to be tracked based on the reference processing result of the reference image frame and the image information in the image to be tracked and the reference image frame; sending the image to be registered to the cloud; and obtaining a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result. Additionally, the first thread and the second thread are processed asynchronously, i.e., they may be executed asynchronously. After the first processing result is obtained, the second processing result can be continuously obtained, so that the second processing result of the image to be tracked is continuously obtained, realizing asynchronous processing of image registration and image tracking.
In general, the image registration steps ("performing first image registration on the image to be registered by using the local target image to obtain the local processing result" and "sending the image to be registered to the cloud") require a relatively long time (algorithm running time) to produce a result, whereas image tracking takes much less time. By making the first thread and the second thread asynchronous, image registration can run while image tracking proceeds, and tracking does not need to wait for the registration result (the first processing result). The terminal can therefore obtain the tracking result (the second processing result) in time, which increases the response speed of the terminal and reduces delay.
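The asynchronous split between slow registration and fast tracking can be sketched as follows, with Python threads standing in for the first and second threads (a browser implementation would use a worker thread instead). The timings and variable names are illustrative, not from the patent.

```python
import threading
import time

registration_result = {}

def first_thread_registration():
    """Stand-in for the first thread: a slow image registration step."""
    time.sleep(0.2)  # simulate a long-running registration algorithm
    registration_result["pose"] = "registered-pose"

tracking_results = []

# Start registration asynchronously; tracking does not wait for it.
reg = threading.Thread(target=first_thread_registration)
reg.start()

# Stand-in for the second thread: fast per-frame tracking keeps running
# while registration is still in flight, so the terminal stays responsive.
for frame in range(5):
    time.sleep(0.01)  # simulate a cheap tracking step
    tracking_results.append(f"tracked-frame-{frame}")

reg.join()
```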
When the image processing method of the present application is executed in a browser, that is, on the web page side, the first thread is, for example, a worker thread. By creating and using a worker thread on the web page side, the web page can execute multi-threaded tasks, which improves the running speed of the image processing method on the web page side.
In one implementation scenario, some or all of the steps performed by the first thread or the second thread are implemented in WebAssembly (Wasm). By executing some or all of the steps of the first thread or the second thread on the web page side in Wasm, the computing power of the terminal can be fully utilized, the usage efficiency of the device is improved, the running speed of the whole image processing method can be increased, and delay is reduced.
Referring to fig. 5, fig. 5 is a flowchart illustrating an image processing method according to a second embodiment of the present application. In this embodiment, before performing the above "performing the first image registration on the image to be registered by using the local target image to obtain the local processing result", the image processing method further includes performing the following steps by using a second thread:
Step S31: the first thread is initialized.
The initialization of the first thread may be a conventional thread initialization process, which is not described herein again. By initializing the first thread, steps such as local image registration (step S12) may be subsequently performed using the first thread.
Step S32: rendering and displaying the image to be tracked based on the second processing result of the image to be tracked, after the second thread obtains the second processing result.
Rendering and displaying the image to be tracked based on its second processing result specifically means rendering and displaying the image to be tracked according to its pose, i.e., the pose of the terminal when the image to be tracked was captured. It can be understood that if the second processing result is a transformation parameter between the image to be tracked and the final target image, the pose of the terminal can first be obtained from that transformation parameter; if the second processing result is already the pose of the image to be tracked, the image to be tracked is rendered and displayed directly according to that pose.
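When the processing result is a transformation parameter such as a planar homography, the terminal pose can be recovered with the standard decomposition H ∝ K[r1 r2 t], valid for points on the z = 0 plane. This is a generic sketch under the assumption of known camera intrinsics K, not the patent's specific algorithm.

```python
import numpy as np

def pose_from_homography(H: np.ndarray, K: np.ndarray):
    """Recover rotation R and translation t of a camera viewing the
    z = 0 plane from homography H, given intrinsics K (H ~ K[r1 r2 t])."""
    A = np.linalg.inv(K) @ H
    scale = 1.0 / np.linalg.norm(A[:, 0])  # r1 must be unit length
    r1 = scale * A[:, 0]
    r2 = scale * A[:, 1]
    r3 = np.cross(r1, r2)                  # complete the rotation basis
    t = scale * A[:, 2]
    return np.column_stack([r1, r2, r3]), t

# Build a homography from a known pose and check the round trip.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
t = np.array([0.2, -0.1, 1.5])
H = K @ np.column_stack([R[:, 0], R[:, 1], t])

R_rec, t_rec = pose_from_homography(H, K)
```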
Step S33: rendering and displaying the image to be registered based on the first processing result of the image to be registered, in the case that the first thread has obtained the first processing result and the image to be registered has not yet been displayed.
When the first thread has obtained the first processing result of the image to be registered and the image to be registered has not yet been displayed, the image to be registered can be rendered and displayed based on the first processing result. It can be understood that if the first processing result is a transformation parameter between the image to be registered and the final target image, the pose of the terminal can first be obtained from that transformation parameter; if the first processing result is already the pose of the image to be registered, the image to be registered is rendered and displayed directly according to that pose.
Therefore, by having the second thread render and display the image to be tracked, or render and display the image to be registered, image frames can be processed and interaction with the real environment can be realized.
Referring to fig. 6, fig. 6 is a schematic diagram of a framework of an embodiment of an image processing terminal according to the present application. The image processing terminal 60 includes an image acquisition module 61, a local registration module 62, a cloud registration module 63, and a determination module 64. The image acquisition module 61 is used for acquiring the image to be registered; the local registration module 62 is configured to perform first image registration on the image to be registered by using a local target image to obtain a local processing result; the cloud registration module 63 is configured to send the image to be registered to the cloud, so that the cloud performs second image registration on the image to be registered by using a cloud target image to obtain a cloud processing result; the determination module 64 is configured to obtain a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result.
The local registration module 62 is configured to perform first image registration on an image to be registered by using a local target image to obtain a local processing result, and includes: finding out at least one first target image from the first target image set as a local target image; registering the image to be registered and the local target image by using a local registration mode to obtain local transformation parameters between the image to be registered and the local target image; obtaining a local processing result based on the local transformation parameters; the cloud processing result is obtained by registering the image to be registered and the cloud target image in a cloud registration mode by the cloud, and the cloud target image is from the second target image set.
Wherein at least some of the images in the first target image set and the second target image set are identical; and/or the number of images in the first target image set is less than the number of images in the second target image set; and/or the computing power or computing time required by the local registration mode is less than that required by the cloud registration mode.
The local registration module 62 is configured to find at least one first target image from the first target image set as a local target image, and includes: determining feature similarity between the image to be registered and the first target image based on feature representation of feature points in the image to be registered and the first target image; selecting at least one first target image with the characteristic similarity meeting a preset similarity requirement as a local target image; the local registration module 62 is configured to register the image to be registered and the local target image in a local registration manner, so as to obtain a local transformation parameter between the image to be registered and the local target image, and includes: determining at least one group of local matching point pairs between the image to be registered and the local target image based on the characteristic representation of the characteristic points in the image to be registered and the local target image; and obtaining local transformation parameters based on at least one group of local matching point pairs.
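The two stages the local registration module performs, retrieving a local target image by feature similarity and then forming matching point pairs, can be sketched with cosine similarity over feature descriptors. The descriptor dimension, threshold, and similarity measure here are illustrative assumptions, not the patent's specific choices.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between two sets of descriptors."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def image_similarity(query_desc, target_desc) -> float:
    """Image-level similarity: mean best-match similarity of each
    query descriptor against the target image's descriptors."""
    return float(cosine_sim(query_desc, target_desc).max(axis=1).mean())

def match_points(query_desc, target_desc, thresh=0.9):
    """Nearest-neighbour matching; keep point pairs above thresh."""
    sims = cosine_sim(query_desc, target_desc)
    pairs = []
    for i in range(sims.shape[0]):
        j = int(sims[i].argmax())
        if sims[i, j] >= thresh:
            pairs.append((i, j))
    return pairs

rng = np.random.default_rng(0)
query = rng.normal(size=(10, 32))                     # image to be registered
target_a = query + 0.01 * rng.normal(size=(10, 32))   # near-duplicate image
target_b = rng.normal(size=(10, 32))                  # unrelated image

# Retrieval: pick the target image with the highest image-level similarity.
sims = {"a": image_similarity(query, target_a),
        "b": image_similarity(query, target_b)}
best = max(sims, key=sims.get)

# Per-point matching against the chosen target yields local matching pairs.
pairs = match_points(query, target_a)
```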
The determination module 64 is configured to obtain the first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result, and specifically to: in response to the preset condition being met, take the cloud processing result as the first processing result of the image to be registered; and in response to the preset condition not being met, take the local processing result as the first processing result of the image to be registered.
The preset condition is that a cloud processing result is received within a preset time.
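The selection rule, using the cloud processing result if it is received within the preset time and otherwise falling back to the local result, can be sketched with a future and a timeout. The stand-in cloud calls and timing values below are illustrative.

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def pick_result(cloud_call, local_result, timeout: float):
    """Prefer the cloud processing result if it arrives within
    `timeout` seconds; otherwise fall back to the local result."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(cloud_call)
        try:
            return future.result(timeout=timeout)
        except TimeoutError:
            return local_result

def slow_cloud():
    time.sleep(0.5)   # cloud result arrives too late
    return "cloud"

def fast_cloud():
    return "cloud"    # cloud result arrives in time

print(pick_result(fast_cloud, "local", timeout=0.2))  # cloud
print(pick_result(slow_cloud, "local", timeout=0.1))  # local
```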
The image acquisition module 61 is configured to acquire the image to be registered, and specifically to: acquire image frames captured by a photographing device, the image frames including a first image frame and second image frames; and take the first image frame as the image to be registered.
The image processing terminal 60 further includes an image tracking module, which is configured to sequentially use the second image frames as images to be tracked, and obtain a second processing result of the images to be tracked based on a reference processing result of the reference image frame, the images to be tracked, and image information in the reference image frame, where the reference image frame is an image frame before the images to be tracked, and the reference processing result is determined based on the first processing result.
The step of performing the first image registration on the image to be registered by using the local target image to obtain the local processing result is executed by the first thread; at least one of the steps of obtaining the second processing result of the image to be tracked based on the reference processing result of the reference image frame, the image to be tracked, and the image information in the reference image frame, of sending the image to be registered to the cloud, and of obtaining the first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result is executed by the second thread; wherein the first thread and the second thread are processed asynchronously.
The first processing result is a first transformation parameter between the image to be registered and the final target image, the final target image is a local target image or a cloud target image, and the second processing result is a second transformation parameter between the image to be tracked and the reference image frame; or the first processing result is the pose of the image to be registered, and the second processing result is the pose of the image to be tracked; or, the first processing result is a first transformation parameter, the second processing result is a pose of the image to be tracked, and the method further comprises executing the following steps by using a second thread: and obtaining the pose of the image to be registered by using the first transformation parameter.
Before the local registration module 62 performs the first image registration on the image to be registered to obtain the local processing result, the second thread is further used to perform the following step: initializing the first thread. And/or, the second thread is further used to perform at least one of the following: rendering and displaying the image to be tracked based on the second processing result of the image to be tracked after the second thread obtains the second processing result; and rendering and displaying the image to be registered based on the first processing result of the image to be registered when the first thread has obtained the first processing result and the image to be registered has not yet been displayed.
Referring to fig. 7, fig. 7 is a schematic diagram of a frame of an electronic device according to an embodiment of the present application. The electronic device 70 comprises a memory 701 and a processor 702 coupled to each other, and the processor 702 is configured to execute program instructions stored in the memory 701 to implement the steps of any of the embodiments of the image processing method described above. In one particular implementation scenario, the electronic device 70 may include, but is not limited to, a microcomputer and a server; the electronic device 70 may also be a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 702 is configured to control itself and the memory 701 to implement the steps of any of the above-described embodiments of the image processing method. The processor 702 may also be referred to as a CPU (Central Processing Unit). The processor 702 may be an integrated circuit chip having signal processing capabilities. The processor 702 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 702 may be implemented jointly by a plurality of integrated circuit chips.
Referring to fig. 8, fig. 8 is a block diagram illustrating an embodiment of a computer-readable storage medium according to the present application. The computer readable storage medium 80 stores program instructions 801 that can be executed by the processor, the program instructions 801 being for implementing the steps of any of the image processing method embodiments described above.
According to the scheme, the local processing result is obtained by the first image registration, or the cloud processing result is obtained by the second image registration, so that the terminal can utilize the computing power of the cloud as well as its own local computing power; when running the image registration algorithm, the terminal can thus either obtain the processing result quickly or obtain a relatively accurate registration result.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.