Image classification method and device
1. An image classification method, characterized in that it comprises the following steps:
step 1: extracting image features of the training classification images to obtain a training classification image feature group;
step 2: performing feature analysis on the training classification image feature group, which specifically comprises the following steps: normalizing each feature in the training classification image feature group to obtain a feature value; setting a plurality of comparison values; calculating the difference between each feature value and each comparison value to judge the distance between them; and taking the comparison value with the minimum calculated difference as the classification center of that feature value;
step 3: for each comparison value, taking the feature values whose classification center is that comparison value as a classification group;
step 4: establishing a classification tree with the classification groups as nodes, and classifying the images to be classified, which specifically comprises the following steps: performing feature extraction on the images to be classified to obtain their image features; taking the image features of all the images to be classified as an image feature group to be classified; and substituting the image feature group to be classified directly into the classification tree for classification, wherein during classification the image features of the images to be classified are taken from the image feature group to be classified one by one and put into the classification tree for classification, until all the image features in the image feature group to be classified have been taken out, thereby completing the image classification.
2. The method of claim 1, wherein extracting the image features of the training classification images in step 1 comprises: performing direct pixel mapping on the pixels of a training classification image to obtain M-dimensional pixel mapping coefficients of the pixels, wherein M is an integer greater than 256; obtaining M direct pixel maps corresponding to the training classification image from the pixel mapping coefficients, wherein the value at any coordinate point in the k-th of the M direct pixel maps is the value of the k-th dimension of the pixel mapping coefficient at that coordinate point, and k is a positive integer less than or equal to M; pooling the M direct pixel maps respectively to obtain M-dimensional pooled features of the pixels to be extracted, wherein the M direct pixel maps correspond one-to-one to the M dimensions of the pooled features; performing dimensionality reduction on the pooled features to obtain reduced features of the pixels to be extracted that represent the pooled features, wherein the dimensionality of the reduced features is smaller than that of the pooled features; and taking the obtained reduced features as the extracted image features of the training classification images.
3. The method of claim 2, wherein performing direct pixel mapping on the pixels of the training classification image to obtain the M-dimensional pixel mapping coefficients of the pixels comprises: performing direct pixel mapping on the pixels of the training classification image using the formula T = …, wherein T is the pixel mapping coefficient; α is an adjustment coefficient with a value range of 0.2 to 0.5; and R(x, y), G(x, y) and B(x, y) denote the R, G and B values of the pixel (x, y) in the training classification image.
4. The method of claim 3, wherein pooling the M direct pixel maps to obtain the M-dimensional pooled features of the pixels to be extracted comprises: performing the pooling process using the formula F = …, wherein p denotes the probability of occurrence of each direct pixel map among all the direct pixel maps, I denotes a direct pixel map, and F denotes the resulting pooled feature.
5. The method of claim 1, wherein the classification tree is a multi-way tree comprising at least two parts, and each node in the classification tree corresponds to a respective classification group; a classification model corresponding to each parent node is trained according to the feature values corresponding to each node in the classification tree, wherein the feature values are type-labeled in advance and stored in the corresponding nodes, each parent node corresponds to at least one child node, and the classification model is used for dividing the training data into the corresponding child nodes; an image to be classified is acquired, wherein the image to be classified is data to be predicted whose classification group is unknown; and the image to be classified is classified step by step through the classification models of the nodes in the classification tree.
6. The method of claim 5, wherein training the classification model corresponding to each parent node according to the feature values corresponding to each node in the classification tree comprises: acquiring the feature values corresponding to each child node of the current parent node and the classification type corresponding to each child node; training the classification model corresponding to the current parent node with a preset model training algorithm according to the feature values and classification types corresponding to the child nodes, wherein the preset model training algorithm comprises at least one of a support vector machine (SVM) algorithm, a K-nearest-neighbor (KNN) classification algorithm, a decision tree algorithm, and a naive Bayes (NBM) algorithm; and storing the trained classification model in the current parent node.
7. The method of claim 6, wherein training the classification model corresponding to each parent node according to the training data corresponding to each node in the classification tree further comprises: adding a training task to a waiting queue, wherein the training task indicates that the classification model corresponding to a parent node is to be trained; detecting whether the number of training tasks in an execution queue is less than a threshold; and if the number of training tasks in the execution queue is less than the threshold, adding the training tasks in the waiting queue to the execution queue and executing the steps of acquiring the feature values corresponding to each child node of the current parent node and the classification types corresponding to the child nodes.
8. The method of claim 7, further comprising, before extracting the image features of the training classification images, a step of image preprocessing of the training classification images, which specifically comprises: filtering a training classification image according to its pixel position information and pixel gray value information to generate a filtered image, wherein the pixel position information is the spatial distance between a first pixel in the training classification image and the other pixels within a neighborhood established around the first pixel, and the pixel gray value information is the difference in gray value between the first pixel and the pixels surrounding it; enhancing the filtered image according to its local pixel information and global pixel information to generate an enhanced image; generating a class gradient map from the enhanced image; and binarizing the class gradient map to generate a binary image.
9. The method according to claim 8, wherein filtering the training classification image according to its pixel position information and pixel gray value information specifically comprises: generating a Gaussian template according to the pixel position information of the training classification image; generating a gray value difference template according to the pixel gray value information of the training classification image, wherein the Gaussian template and the gray value difference template have the same size; multiplying each template coefficient in the Gaussian template by the template coefficient at the corresponding position in the gray value difference template, and taking the product as the template coefficient at the corresponding position in the generated filter coefficient template; and filtering the training classification image with the filter coefficient template.
10. An image classification apparatus for implementing the method of any one of claims 1 to 9.
Background
Image classification is an image processing method that distinguishes objects of different types based on the different features reflected in image information. It uses a computer to perform quantitative analysis of an image and assigns each pixel or region of the image to one of several categories, replacing human visual interpretation.
Image classification methods generally include the following:
1. Color-feature-based indexing techniques: color is a visual characteristic of an object's surface, and every object has its own color features; for example, green is commonly associated with trees or grassland and blue with the sea or the sky, and objects of the same class tend to have similar color features, so objects can be distinguished by their color features. Color feature indexes are divided into global color feature indexes and local color feature indexes.
2. Texture-based image classification techniques: texture is another important feature of an image; in essence it describes the spatial distribution of gray levels in a pixel's neighborhood, and since texture features have already yielded rich research results in pattern recognition, computer vision, and related fields, they can be borrowed for image classification. In the early 1970s, Haralick et al. proposed the gray-level co-occurrence matrix representation of texture features, which extracts the spatial correlation of texture gray levels: a gray-level co-occurrence matrix is first built from the distances and directions between pixels, and meaningful statistics are then extracted from this matrix as the texture feature vector. Based on psychological studies of human visual perception of texture, Tamura et al. proposed six texture attributes that model texture visually: coarseness, contrast, directionality, line-likeness, regularity, and roughness. This texture representation is used by the QBIC and MARS systems. In the early 1990s, after the theoretical foundation of the wavelet transform was established, many researchers began investigating how to represent texture features with wavelet transforms. Smith and Chang used statistics (mean and variance) extracted from wavelet subbands as texture features; their algorithm achieved 90% accuracy on 112 Brodatz texture images. To exploit the characteristics of the intermediate bands, Chang and Kuo developed a tree-structured wavelet transform that further improves classification accuracy. Other researchers have combined wavelet transforms with other transforms for better performance; for example, Thyagarajan et al. combined the wavelet transform with the co-occurrence matrix to take advantage of both statistics-based and transform-based texture analysis algorithms.
3. Shape-based image classification techniques: shape is one of the important visual features of an image in two-dimensional image space. A shape is generally considered to be the region enclosed by a closed contour curve, so describing a shape involves describing both the contour boundary and the region it encloses. Descriptions of shape contour features mainly include straight-line-segment descriptions, spline-fitted curves, Fourier descriptors, and Gaussian parametric curves. Eakins et al. proposed a set of redrawing rules and a simplified expression of shape contours using line segments and circular arcs, and then defined two families of shape functions, adjacency and shape, to classify shapes.
Disclosure of Invention
The main purpose of the present invention is to provide an image classification method and device. Unlike prior-art image classification methods, the present method classifies images based on their features: during classification, comparison values are set to determine classification centers, and a classification tree is then constructed to classify the images to be classified. This greatly simplifies the classification algorithm and process while keeping the accuracy of the classification results at a high level.
In order to achieve this purpose, the technical solution of the present invention is realized as follows:
an image classification method, said method performing the steps of:
step 1: extracting image features of the training classification images to obtain a training classification image feature group;
step 2: performing feature analysis on the training classification image feature group, which specifically comprises the following steps: normalizing each feature in the training classification image feature group to obtain a feature value; setting a plurality of comparison values; calculating the difference between each feature value and each comparison value to judge the distance between them; and taking the comparison value with the minimum calculated difference as the classification center of that feature value;
step 3: for each comparison value, taking the feature values whose classification center is that comparison value as a classification group;
step 4: establishing a classification tree with the classification groups as nodes, and classifying the images to be classified, which specifically comprises the following steps: performing feature extraction on the images to be classified to obtain their image features; taking the image features of all the images to be classified as an image feature group to be classified; and substituting the image feature group to be classified directly into the classification tree for classification, wherein during classification the image features of the images to be classified are taken from the image feature group to be classified one by one and put into the classification tree for classification, until all the image features in the image feature group to be classified have been taken out, thereby completing the image classification.
Further, extracting the image features of the training classification images in step 1 comprises: performing direct pixel mapping on the pixels of a training classification image to obtain M-dimensional pixel mapping coefficients of the pixels, wherein M is an integer greater than 256; obtaining M direct pixel maps corresponding to the training classification image from the pixel mapping coefficients, wherein the value at any coordinate point in the k-th of the M direct pixel maps is the value of the k-th dimension of the pixel mapping coefficient at that coordinate point, and k is a positive integer less than or equal to M; pooling the M direct pixel maps respectively to obtain M-dimensional pooled features of the pixels to be extracted, wherein the M direct pixel maps correspond one-to-one to the M dimensions of the pooled features; performing dimensionality reduction on the pooled features to obtain reduced features of the pixels to be extracted that represent the pooled features, wherein the dimensionality of the reduced features is smaller than that of the pooled features; and taking the obtained reduced features as the extracted image features of the training classification images.
Further, the method for performing direct pixel mapping on the pixels of the training classification image to obtain the M-dimensional pixel mapping coefficients of the pixels performs the following steps: direct pixel mapping is performed on the pixels of the training classification image using the formula T = …, wherein T is the pixel mapping coefficient; α is an adjustment coefficient with a value range of 0.2 to 0.5; and R(x, y), G(x, y) and B(x, y) denote the R, G and B values of the pixel (x, y) in the training classification image.
Further, the method for pooling the M direct pixel maps to obtain the M-dimensional pooled features of the pixels to be extracted performs the following steps: the pooling process is performed using the formula F = …, wherein p denotes the probability of occurrence of each direct pixel map among all the direct pixel maps, I denotes a direct pixel map, and F denotes the resulting pooled feature.
Furthermore, the classification tree is a multi-way tree comprising at least two parts, and each node in the classification tree corresponds to a respective classification group; a classification model corresponding to each parent node is trained according to the feature values corresponding to each node in the classification tree, wherein the feature values are type-labeled in advance and stored in the corresponding nodes, each parent node corresponds to at least one child node, and the classification model is used for dividing the training data into the corresponding child nodes; an image to be classified is acquired, wherein the image to be classified is data to be predicted whose classification group is unknown; and the image to be classified is classified step by step through the classification models of the nodes in the classification tree.
Further, training the classification model corresponding to each parent node according to the feature values corresponding to each node in the classification tree comprises: acquiring the feature values corresponding to each child node of the current parent node and the classification type corresponding to each child node; training the classification model corresponding to the current parent node with a preset model training algorithm according to the feature values and classification types corresponding to the child nodes, wherein the preset model training algorithm comprises at least one of a support vector machine (SVM) algorithm, a K-nearest-neighbor (KNN) classification algorithm, a decision tree algorithm, and a naive Bayes (NBM) algorithm; and storing the trained classification model in the current parent node.
Further, training the classification model corresponding to each parent node according to the training data corresponding to each node in the classification tree further comprises: adding a training task to a waiting queue, wherein the training task indicates that the classification model corresponding to a parent node is to be trained; detecting whether the number of training tasks in an execution queue is less than a threshold; and if the number of training tasks in the execution queue is less than the threshold, adding the training tasks in the waiting queue to the execution queue and executing the steps of acquiring the feature values corresponding to each child node of the current parent node and the classification types corresponding to the child nodes.
Further, before extracting the image features of the training classification images, the method further includes a step of image preprocessing of the training classification images, which specifically comprises: filtering a training classification image according to its pixel position information and pixel gray value information to generate a filtered image, wherein the pixel position information is the spatial distance between a first pixel in the training classification image and the other pixels within a neighborhood established around the first pixel, and the pixel gray value information is the difference in gray value between the first pixel and the pixels surrounding it; enhancing the filtered image according to its local pixel information and global pixel information to generate an enhanced image; generating a class gradient map from the enhanced image; and binarizing the class gradient map to generate a binary image.
Further, filtering the training classification image according to its pixel position information and pixel gray value information specifically comprises: generating a Gaussian template according to the pixel position information of the training classification image; generating a gray value difference template according to the pixel gray value information of the training classification image, wherein the Gaussian template and the gray value difference template have the same size; multiplying each template coefficient in the Gaussian template by the template coefficient at the corresponding position in the gray value difference template, and taking the product as the template coefficient at the corresponding position in the generated filter coefficient template; and filtering the training classification image with the filter coefficient template.
An image classification device for realizing the method.
The image classification method and device have the following beneficial effects: unlike image classification methods in the prior art, the method classifies images based on their features, determines classification centers by setting comparison values during classification, and then constructs a classification tree to classify the images to be classified. This greatly simplifies the classification algorithm and process while keeping the accuracy of the classification results at a high level.
Drawings
Fig. 1 is a schematic method flow diagram of an image classification method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a classification tree of the image classification method and apparatus according to the embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a classification group of an image classification method and apparatus according to an embodiment of the present invention;
Fig. 4 is a graph of the efficiency of the image classification method and apparatus according to the embodiment of the present invention as a function of the number of test fields, compared experimentally with the prior art.
Detailed Description
The technical solution of the present invention is further described in detail below with reference to the following detailed description and the accompanying drawings:
example 1
As shown in Fig. 1,
an image classification method, said method performing the steps of:
step 1: extracting image features of the training classification images to obtain a training classification image feature group;
step 2: performing feature analysis on the training classification image feature group, which specifically comprises the following steps: normalizing each feature in the training classification image feature group to obtain a feature value; setting a plurality of comparison values; calculating the difference between each feature value and each comparison value to judge the distance between them; and taking the comparison value with the minimum calculated difference as the classification center of that feature value;
step 3: for each comparison value, taking the feature values whose classification center is that comparison value as a classification group;
step 4: establishing a classification tree with the classification groups as nodes, and classifying the images to be classified, which specifically comprises the following steps: performing feature extraction on the images to be classified to obtain their image features; taking the image features of all the images to be classified as an image feature group to be classified; and substituting the image feature group to be classified directly into the classification tree for classification, wherein during classification the image features of the images to be classified are taken from the image feature group to be classified one by one and put into the classification tree for classification, until all the image features in the image feature group to be classified have been taken out, thereby completing the image classification.
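For illustration only, the following is a minimal sketch of steps 2 and 3, assuming min-max normalization and evenly spaced comparison values; the embodiment fixes neither choice, and the function and parameter names are hypothetical.

```python
import numpy as np

def assign_classification_centers(features, num_comparison_values=5):
    features = np.asarray(features, dtype=float)
    # Normalize each feature to [0, 1] to obtain the feature values.
    f_min, f_max = features.min(), features.max()
    values = (features - f_min) / (f_max - f_min + 1e-12)
    # Set a plurality of comparison values (here: evenly spaced in [0, 1]).
    comparisons = np.linspace(0.0, 1.0, num_comparison_values)
    # The comparison value with the minimum absolute difference from a
    # feature value becomes that feature value's classification center.
    diffs = np.abs(values[:, None] - comparisons[None, :])
    centers = diffs.argmin(axis=1)
    # Step 3: feature values sharing a classification center form a group.
    groups = {c: values[centers == c] for c in range(num_comparison_values)}
    return comparisons, centers, groups
```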
Example 2
On the basis of the above embodiment, extracting the image features of the training classification images in step 1 comprises: performing direct pixel mapping on the pixels of a training classification image to obtain M-dimensional pixel mapping coefficients of the pixels, wherein M is an integer greater than 256; obtaining M direct pixel maps corresponding to the training classification image from the pixel mapping coefficients, wherein the value at any coordinate point in the k-th of the M direct pixel maps is the value of the k-th dimension of the pixel mapping coefficient at that coordinate point, and k is a positive integer less than or equal to M; pooling the M direct pixel maps respectively to obtain M-dimensional pooled features of the pixels to be extracted, wherein the M direct pixel maps correspond one-to-one to the M dimensions of the pooled features; performing dimensionality reduction on the pooled features to obtain reduced features of the pixels to be extracted that represent the pooled features, wherein the dimensionality of the reduced features is smaller than that of the pooled features; and taking the obtained reduced features as the extracted image features of the training classification images.
Specifically, the image features mainly include color features, texture features, shape features, and spatial relationship features of the image.
The color feature is a global feature that describes the surface properties of the scene corresponding to an image or image region. Texture features are likewise global features describing those surface properties. Shape features come in two forms: contour features, which concern the outer boundary of an object, and region features, which concern the entire shape region. The spatial relationship feature refers to the mutual spatial positions or relative directional relationships among the multiple targets segmented from an image; these relationships can be further divided into connection/adjacency, overlap, and inclusion/containment relationships, among others.
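To make the step-1 pipeline of this embodiment concrete, the sketch below chains the operations of Examples 3 and 4 and applies a dimensionality reduction. The embodiment does not name the reduction method, so PCA is used here purely as one plausible stand-in; `direct_pixel_maps` and `pool_maps` stand for the operations sketched under Examples 3 and 4, and all names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def extract_image_features(images, direct_pixel_maps, pool_maps, reduced_dim=64):
    pooled = []
    for img in images:
        maps = direct_pixel_maps(img)    # M direct pixel maps (Example 3)
        pooled.append(pool_maps(maps))   # M-dimensional pooled feature (Example 4)
    pooled = np.stack(pooled)            # shape: (num_images, M)
    # Dimensionality reduction: the reduced dimension must be smaller than M
    # (and, for PCA, no larger than the number of images).
    pca = PCA(n_components=reduced_dim)
    reduced = pca.fit_transform(pooled)
    return reduced                       # training classification image feature group
```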
Example 3
On the basis of the previous embodiment, the method for performing direct pixel mapping on the pixels of the training classification image to obtain the M-dimensional pixel mapping coefficients of the pixels performs the following steps: direct pixel mapping is performed on the pixels of the training classification image using the formula T = …, wherein T is the pixel mapping coefficient; α is an adjustment coefficient with a value range of 0.2 to 0.5; and R(x, y), G(x, y) and B(x, y) denote the R, G and B values of the pixel (x, y) in the training classification image.
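The mapping formula itself does not survive in the text above, so the sketch below assumes a hypothetical α-weighted combination of the R, G and B values for each of the M dimensions; only the interface (an RGB image in, M coefficient maps out, with α in the stated 0.2 to 0.5 range and M greater than 256) follows the embodiment.

```python
import numpy as np

def direct_pixel_maps(image_rgb, M=300, alpha=0.3):
    # image_rgb: H x W x 3 array; M must be an integer greater than 256.
    h, w, _ = image_rgb.shape
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    maps = np.empty((M, h, w))
    for k in range(M):
        # Hypothetical k-th dimension of the pixel mapping coefficient T:
        # a different alpha-weighted blend of R, G, B per dimension.
        t = k / (M - 1)
        maps[k] = alpha * ((1 - t) * r + t * g) + (1 - alpha) * b
    return maps  # value at (x, y) in the k-th map = k-th dim of T at (x, y)
```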
Example 4
On the basis of the above embodiment, the method for pooling the M direct pixel maps to obtain the M-dimensional pooled features of the pixels to be extracted performs the following steps: the pooling process is performed using the formula F = …, wherein p denotes the probability of occurrence of each direct pixel map among all the direct pixel maps, I denotes a direct pixel map, and F denotes the resulting pooled feature.
Specifically, pooling, i.e., down-sampling (subsampling), reduces the size of the data. Common pooling methods are max pooling and mean pooling, of which max pooling is the most widely used. Here, pooling is performed with a pooling kernel using max pooling.
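A minimal max-pooling sketch matching this description follows; collapsing each pooled map to a single scalar, so that the M maps yield an M-dimensional pooled feature, is an assumption made for simplicity.

```python
import numpy as np

def max_pool(feature_map, kernel=2):
    # Down-sample one map by taking the maximum over each kernel x kernel block.
    h, w = feature_map.shape
    h, w = h - h % kernel, w - w % kernel  # crop to a multiple of the kernel
    blocks = feature_map[:h, :w].reshape(h // kernel, kernel, w // kernel, kernel)
    return blocks.max(axis=(1, 3))

def pool_maps(maps, kernel=2):
    # One pooled value per direct pixel map -> M-dimensional pooled feature.
    return np.array([max_pool(m, kernel).mean() for m in maps])
```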
Example 5
On the basis of the previous embodiment, the classification tree is a multi-way tree comprising at least two parts, and each node in the classification tree corresponds to a respective classification group; a classification model corresponding to each parent node is trained according to the feature values corresponding to each node in the classification tree, wherein the feature values are type-labeled in advance and stored in the corresponding nodes, each parent node corresponds to at least one child node, and the classification model is used for dividing the training data into the corresponding child nodes; an image to be classified is acquired, wherein the image to be classified is data to be predicted whose classification group is unknown; and the image to be classified is classified step by step through the classification models of the nodes in the classification tree.
Specifically, tree structures exist for convenient and fast lookup. The height of a tree sets an unavoidable lower bound on lookup time, and under given data conditions the height and width of a tree constrain each other (just as the length and width of a rectangle of given area constrain each other). The simplest member of the tree family, the binary tree, is easy to implement but of little practical value, most often because its height is too great. Multi-way (n-ary) trees were proposed and implemented to overcome this shortcoming of binary trees; typical n-ary trees include the 2-3-4 tree, the red-black tree, and the B-tree.
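The sketch below shows one possible layout for such a multi-way classification tree, in which each node stores its classification group and pre-labeled feature values, and each parent node stores the classification model (trained as in Example 6) that routes data to its child nodes; the class design is illustrative, not mandated by the embodiment.

```python
class ClassificationNode:
    def __init__(self, group_id, model=None):
        self.group_id = group_id    # classification group of this node
        self.feature_values = []    # type-labeled feature values stored here
        self.model = model          # classifier stored in a parent node
        self.children = {}          # label predicted by model -> child node

    def classify(self, feature):
        # Classify step by step: descend through the per-node models
        # until a leaf node is reached.
        node = self
        while node.children:
            label = node.model.predict([feature])[0]
            node = node.children[label]
        return node.group_id
```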
Example 6
On the basis of the above embodiment, training the classification model corresponding to each parent node according to the feature values corresponding to each node in the classification tree comprises: acquiring the feature values corresponding to each child node of the current parent node and the classification type corresponding to each child node; training the classification model corresponding to the current parent node with a preset model training algorithm according to the feature values and classification types corresponding to the child nodes, wherein the preset model training algorithm comprises at least one of a support vector machine (SVM) algorithm, a K-nearest-neighbor (KNN) classification algorithm, a decision tree algorithm, and a naive Bayes (NBM) algorithm; and storing the trained classification model in the current parent node.
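For illustration, the sketch below trains the classification model of one parent node with scikit-learn, using the SVM algorithm by default and KNN as an alternative; the attribute names (children, feature_values) are assumptions that follow the Example 5 sketch.

```python
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def train_parent_node(parent, algorithm="svm"):
    # Gather the feature values and classification types of the children.
    X, y = [], []
    for label, child in parent.children.items():
        for feature in child.feature_values:  # pre-labeled feature values
            X.append(feature)
            y.append(label)                   # classification type of child
    model = SVC() if algorithm == "svm" else KNeighborsClassifier()
    model.fit(X, y)
    parent.model = model                      # store the model in the parent node
    return model
```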
Example 7
On the basis of the previous embodiment, training the classification model corresponding to each parent node according to the training data corresponding to each node in the classification tree further comprises: adding a training task to a waiting queue, wherein the training task indicates that the classification model corresponding to a parent node is to be trained; detecting whether the number of training tasks in an execution queue is less than a threshold; and if the number of training tasks in the execution queue is less than the threshold, adding the training tasks in the waiting queue to the execution queue and executing the steps of acquiring the feature values corresponding to each child node of the current parent node and the classification types corresponding to the child nodes.
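The queue mechanism of this embodiment can be sketched as follows; the threshold value and the single-threaded loop are illustrative simplifications of what would normally be a concurrent scheduler.

```python
from collections import deque

def schedule_training(parents, threshold=4):
    waiting = deque(parents)       # each task: train one parent node's model
    executing = deque()
    while waiting or executing:
        # Move tasks over while the execution queue is below the threshold.
        while waiting and len(executing) < threshold:
            executing.append(waiting.popleft())
        parent = executing.popleft()
        train_parent_node(parent)  # from the Example 6 sketch above
```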
Example 8
On the basis of the above embodiment, before extracting the image features of the training classification images, the method further includes a step of image preprocessing of the training classification images, which specifically comprises: filtering a training classification image according to its pixel position information and pixel gray value information to generate a filtered image, wherein the pixel position information is the spatial distance between a first pixel in the training classification image and the other pixels within a neighborhood established around the first pixel, and the pixel gray value information is the difference in gray value between the first pixel and the pixels surrounding it; enhancing the filtered image according to its local pixel information and global pixel information to generate an enhanced image; generating a class gradient map from the enhanced image; and binarizing the class gradient map to generate a binary image.
Specifically, filtering is the operation of removing specific frequency bands from a signal and is an important measure for suppressing and preventing interference. It is also, in probability theory, a method of estimating one random process from observations of a related random process. The term "filtering" originates in communication theory, where it denotes a technique for extracting a useful signal from a received signal containing interference: the "received signal" corresponds to the observed random process and the "useful signal" to the process being estimated. For example, when radar is used to track an airplane, the measured position data contain measurement errors and other random interference; estimating the airplane's position, velocity, acceleration, and so on at each moment as accurately as possible from these data, and predicting its future position, is a filtering and prediction problem. Such problems abound in electronics, aerospace, control engineering, and other scientific and technical fields. Historically, the earliest approach considered was Wiener filtering; later, in the 1960s, R.E. Kalman and R.S. Bucy proposed Kalman filtering, and the general nonlinear filtering problem remains an active area of study.
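To make the preprocessing chain of this embodiment concrete, the sketch below uses standard OpenCV operations as stand-ins: a bilateral filter for the position-and-gray-value filtering, histogram equalization for enhancement, a Sobel gradient magnitude as the class gradient map, and Otsu thresholding for binarization. The concrete operators are assumptions; the filter itself is detailed in Example 9.

```python
import cv2
import numpy as np

def preprocess(gray_image):
    # Filter using pixel position and gray value information (see Example 9).
    filtered = cv2.bilateralFilter(gray_image, 9, 75, 75)
    # Enhance using local and global pixel information.
    enhanced = cv2.equalizeHist(filtered)
    # Build a class gradient map from the enhanced image.
    gx = cv2.Sobel(enhanced, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(enhanced, cv2.CV_64F, 0, 1)
    gradient = cv2.convertScaleAbs(np.sqrt(gx ** 2 + gy ** 2))
    # Binarize the class gradient map.
    _, binary = cv2.threshold(gradient, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```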
Example 9
On the basis of the previous embodiment, filtering the training classification image according to its pixel position information and pixel gray value information specifically comprises: generating a Gaussian template according to the pixel position information of the training classification image; generating a gray value difference template according to the pixel gray value information of the training classification image, wherein the Gaussian template and the gray value difference template have the same size; multiplying each template coefficient in the Gaussian template by the template coefficient at the corresponding position in the gray value difference template, and taking the product as the template coefficient at the corresponding position in the generated filter coefficient template; and filtering the training classification image with the filter coefficient template.
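A minimal sketch of this filter at a single pixel follows: a Gaussian template built from pixel positions is multiplied element-wise by a gray-value difference template of the same size, and the product serves as the filter coefficient template. The Gaussian form chosen for the gray-value term and the sigma values are assumptions.

```python
import numpy as np

def filter_pixel(img, y, x, radius=2, sigma_s=2.0, sigma_r=20.0):
    # Assumes (y, x) lies at least `radius` pixels from the image border.
    patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(float)
    dy, dx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # Gaussian template from pixel position information (spatial distance).
    gaussian = np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma_s ** 2))
    # Gray value difference template from pixel gray value information.
    gray_diff = np.exp(-(patch - float(img[y, x])) ** 2 / (2 * sigma_r ** 2))
    coeff = gaussian * gray_diff                # filter coefficient template
    return (coeff * patch).sum() / coeff.sum()  # filtered value at (y, x)
```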
Example 10
An image classification device for realizing the method.
The above description is only an embodiment of the present invention and is not intended to limit its scope; any structural change made according to the present invention without departing from its spirit shall fall within the protection scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments, and will not be described herein again.
It should be noted that, the system provided in the foregoing embodiment is only illustrated by dividing the functional modules, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are further decomposed or combined, for example, the modules in the foregoing embodiment may be combined into one module, or may be further split into multiple sub-modules, so as to complete all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those of skill in the art will appreciate that the various illustrative modules and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that programs corresponding to the software modules and method steps may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.