Method and device for detecting roadside parking lot berths based on deep learning
1. A method for detecting roadside parking lot berths based on deep learning, characterized by comprising the following steps:
acquiring an image containing at least one berth in a predetermined monitoring area, and detecting the image through a corner point detection model to obtain the position and type of each visible berth corner point in the image;
segmenting the image through a semantic segmentation model to obtain a segmentation map containing each berth boundary line region;
analyzing the positional relationship between each visible berth corner point in the image and each berth boundary line region according to the position and type of each visible berth corner point and each berth boundary line region in the segmentation map;
and determining the position of each berth in the image according to the positional relationship.
2. The method according to claim 1, wherein the acquiring an image containing at least one berth in a predetermined monitoring area, detecting the image through a corner point detection model, and obtaining the position and type of each visible berth corner point in the image comprises:
acquiring an image containing at least one berth in a predetermined monitoring area, obtaining the position of each visible berth corner point in the image through a corner point detection model, and intercepting a corner point region of each berth corner point in the image;
determining the type of each visible berth corner point in the corner point region through a convolutional neural network consisting of a preset number of layers;
wherein the type of each berth corner point is any one of an L-shaped corner point, a T-shaped corner point, an I-shaped corner point and a non-corner point.
3. The method according to claim 1 or 2, wherein before the step of detecting the image through the corner point detection model to obtain the position and type of each visible berth corner point in the image, the method comprises:
acquiring a corner point training set, and labeling a berth corner point rectangular frame region and a berth corner point type in each image in the corner point training set;
and training on the labeled corner point training set through a gradient descent algorithm to obtain the corner point detection model.
4. The method according to claim 3, wherein the obtaining, through the corner point detection model, the position of each visible berth corner point in the image and intercepting a corner point region of each berth corner point in the image comprises:
obtaining a plurality of features of the image at different scales through a preset target detection model and a preset network structure, and obtaining target objects of different sizes from the features at the different scales respectively;
and identifying the berth corner points among the target objects, determining a rectangular frame region containing each berth corner point in the image, and intercepting the corner point region within the rectangular frame region.
5. The method according to claim 4, wherein the segmenting the image through a semantic segmentation model to obtain a segmentation map containing each berth boundary line region comprises:
inputting the image into the semantic segmentation model to obtain a feature map of the image;
determining a pixel region corresponding to each pixel in the feature map, and analyzing the semantic category of each pixel region;
determining, according to the semantic category of each pixel region, each pixel whose semantic category is the berth boundary line region, to obtain a segmentation map containing each berth boundary line region;
wherein the semantic categories include any one of a background and a ground solid line, the ground solid line including at least one of a lane line and a parking line.
6. The method according to claim 5, wherein the analyzing the positional relationship between each visible berth corner point in the image and each berth boundary line region according to the position and type of each visible berth corner point and each berth boundary line region in the segmentation map comprises:
determining whether the line segment between every two of the visible berth corner points lies on a ground solid line;
and if so, determining, according to the segmentation map, the set of line segments between pairs of berth corner points that lie on ground solid lines, to obtain a corner point connection graph.
7. The method according to claim 6, wherein the determining the position of each berth in the image according to the positional relationship comprises:
determining whether the connection graph contains a region to be detected connected by at least three line segments;
if so, determining whether the type of each berth corner point in the region to be detected is an L-shaped corner point or a T-shaped corner point;
and if so, determining that the region to be detected is a berth.
8. A device for detecting roadside parking lot berths based on deep learning, characterized by comprising:
a detection module, configured to acquire an image containing at least one berth in a predetermined monitoring area and detect the image through a corner point detection model to obtain the position and type of each visible berth corner point in the image;
a segmentation module, configured to segment the image through a semantic segmentation model to obtain a segmentation map containing each berth boundary line region;
an analysis module, configured to analyze the positional relationship between each visible berth corner point in the image and each berth boundary line region according to the position and type of each visible berth corner point and each berth boundary line region in the segmentation map;
and a determining module, configured to determine the position of each berth in the image according to the positional relationship.
9. The device according to claim 8, wherein the detection module comprises:
an acquisition and interception unit, configured to acquire an image containing at least one berth in a predetermined monitoring area, obtain the position of each visible berth corner point in the image through a corner point detection model, and intercept the corner point region of each berth corner point in the image;
a determining unit, configured to determine the type of each visible berth corner point in the corner point region through a convolutional neural network consisting of a preset number of layers;
wherein the type of each berth corner point is any one of an L-shaped corner point, a T-shaped corner point, an I-shaped corner point and a non-corner point.
10. The device according to claim 8 or 9, further comprising:
a labeling module, configured to acquire a corner point training set and label the berth corner point rectangular frame region and the berth corner point type in each image in the corner point training set;
and a training module, configured to train on the labeled corner point training set through a gradient descent algorithm to obtain the corner point detection model.
11. The device according to claim 10, characterized in that the acquisition and interception unit is specifically configured to:
obtain a plurality of features of the image at different scales through a preset target detection model and a preset network structure, and obtain target objects of different sizes from the features at the different scales respectively;
and identify the berth corner points among the target objects, determine a rectangular frame region containing each berth corner point in the image, and intercept the corner point region within the rectangular frame region.
12. The device according to claim 11, characterized in that the segmentation module is specifically configured to:
input the image into the semantic segmentation model to obtain a feature map of the image;
determine a pixel region corresponding to each pixel in the feature map, and analyze the semantic category of each pixel region;
determine, according to the semantic category of each pixel region, each pixel whose semantic category is the berth boundary line region, to obtain a segmentation map containing each berth boundary line region;
wherein the semantic categories include any one of a background and a ground solid line, the ground solid line including at least one of a lane line and a parking line.
13. The device according to claim 12, characterized in that the analysis module is specifically configured to:
determine whether the line segment between every two of the visible berth corner points lies on a ground solid line;
and if so, determine, according to the segmentation map, the set of line segments between pairs of berth corner points that lie on ground solid lines, to obtain a corner point connection graph.
14. The device according to claim 13, characterized in that the determining module is specifically configured to:
determine whether the connection graph contains a region to be detected connected by at least three line segments;
if so, determine whether the type of each berth corner point in the region to be detected is an L-shaped corner point or a T-shaped corner point;
and if so, determine that the region to be detected is a berth.
Background
In the prior art, a roadside parking lot management system based on high-mounted video cameras is generally constructed from network cameras, cloud computing and parking lot management equipment, and serves as an information service system for vehicles entering and leaving roadside parking lots. The roadside parking lot management system captures images and videos of vehicle information through the cameras and analyzes and processes the captured images with computer vision technology, thereby achieving integrated dynamic and static management of vehicles. In this visual scene, the vehicles and the berths are the two main management targets, where berth management includes berth positioning, berth occupancy status and the like. The accuracy of berth positioning directly affects the detection of berth occupancy.
Existing berth positioning methods include manual calibration, which achieves high positioning accuracy but incurs extremely high labor costs as parking system services expand at scale; for a city-scale deployment with millions of berths, the cost of manual calibration is prohibitive. In addition, a camera is inevitably disturbed by external forces and may shift, so that the calibrated berth positions must be re-calibrated, multiplying the labor cost. Therefore, automatic berth calibration methods are gradually replacing manual calibration; however, existing automatic berth calibration methods, such as those based on Hough line detection, are not suitable for complex scenes. A method suitable for automatically calibrating berths in complex scenes is therefore needed.
Disclosure of Invention
The embodiments of the present invention provide a method and a device for detecting roadside parking lot berths based on deep learning, which can accurately locate berth positions in complex roadside parking scenes without manual operation.
In one aspect, an embodiment of the present invention provides a method for detecting roadside parking lot berths based on deep learning, including:
acquiring an image containing at least one berth in a predetermined monitoring area, and detecting the image through a corner point detection model to obtain the position and type of each visible berth corner point in the image;
segmenting the image through a semantic segmentation model to obtain a segmentation map containing each berth boundary line region;
analyzing the positional relationship between each visible berth corner point in the image and each berth boundary line region according to the position and type of each visible berth corner point and each berth boundary line region in the segmentation map;
and determining the position of each berth in the image according to the positional relationship.
Further, the acquiring an image containing at least one berth in a predetermined monitoring area, detecting the image through a corner point detection model, and obtaining the position and type of each visible berth corner point in the image includes:
acquiring an image containing at least one berth in a predetermined monitoring area, obtaining the position of each visible berth corner point in the image through a corner point detection model, and intercepting a corner point region of each berth corner point in the image;
determining the type of each visible berth corner point in the corner point region through a convolutional neural network consisting of a preset number of layers;
wherein the type of each berth corner point is any one of an L-shaped corner point, a T-shaped corner point, an I-shaped corner point and a non-corner point.
Further, before the step of detecting the image through the corner point detection model to obtain the position and type of each visible berth corner point in the image, the method includes:
acquiring a corner point training set, and labeling a berth corner point rectangular frame region and a berth corner point type in each image in the corner point training set;
and training on the labeled corner point training set through a gradient descent algorithm to obtain the corner point detection model.
Further, the obtaining, through the corner point detection model, the position of each visible berth corner point in the image and intercepting a corner point region of each berth corner point in the image includes:
obtaining a plurality of features of the image at different scales through a preset target detection model and a preset network structure, and obtaining target objects of different sizes from the features at the different scales respectively;
and identifying the berth corner points among the target objects, determining a rectangular frame region containing each berth corner point in the image, and intercepting the corner point region within the rectangular frame region.
Further, the segmenting the image through a semantic segmentation model to obtain a segmentation map containing each berth boundary line region includes:
inputting the image into the semantic segmentation model to obtain a feature map of the image;
determining a pixel region corresponding to each pixel in the feature map, and analyzing the semantic category of each pixel region;
determining, according to the semantic category of each pixel region, each pixel whose semantic category is the berth boundary line region, to obtain a segmentation map containing each berth boundary line region;
wherein the semantic categories include any one of a background and a ground solid line, the ground solid line including at least one of a lane line and a parking line.
Further, the analyzing the positional relationship between each visible berth corner point in the image and each berth boundary line region according to the position and type of each visible berth corner point and each berth boundary line region in the segmentation map includes:
determining whether the line segment between every two of the visible berth corner points lies on a ground solid line;
and if so, determining, according to the segmentation map, the set of line segments between pairs of berth corner points that lie on ground solid lines, to obtain a corner point connection graph.
Further, the determining the position of each berth in the image according to the positional relationship includes:
determining whether the connection graph contains a region to be detected connected by at least three line segments;
if so, determining whether the type of each berth corner point in the region to be detected is an L-shaped corner point or a T-shaped corner point;
and if so, determining that the region to be detected is a berth.
In another aspect, an embodiment of the present invention provides a device for detecting roadside parking lot berths based on deep learning, including:
a detection module, configured to acquire an image containing at least one berth in a predetermined monitoring area and detect the image through a corner point detection model to obtain the position and type of each visible berth corner point in the image;
a segmentation module, configured to segment the image through a semantic segmentation model to obtain a segmentation map containing each berth boundary line region;
an analysis module, configured to analyze the positional relationship between each visible berth corner point in the image and each berth boundary line region according to the position and type of each visible berth corner point and each berth boundary line region in the segmentation map;
and a determining module, configured to determine the position of each berth in the image according to the positional relationship.
Further, the detection module includes:
an acquisition and interception unit, configured to acquire an image containing at least one berth in a predetermined monitoring area, obtain the position of each visible berth corner point in the image through a corner point detection model, and intercept the corner point region of each berth corner point in the image;
a determining unit, configured to determine the type of each visible berth corner point in the corner point region through a convolutional neural network consisting of a preset number of layers;
wherein the type of each berth corner point is any one of an L-shaped corner point, a T-shaped corner point, an I-shaped corner point and a non-corner point.
Further, the device further comprises:
a labeling module, configured to acquire the corner point training set and label the berth corner point rectangular frame region and the berth corner point type in each image in the corner point training set;
and a training module, configured to train on the labeled corner point training set through a gradient descent algorithm to obtain the corner point detection model.
Further, the acquisition and interception unit is specifically configured to:
obtain a plurality of features of the image at different scales through a preset target detection model and a preset network structure, and obtain target objects of different sizes from the features at the different scales respectively;
and identify the berth corner points among the target objects, determine a rectangular frame region containing each berth corner point in the image, and intercept the corner point region within the rectangular frame region.
Further, the segmentation module is specifically configured to:
input the image into the semantic segmentation model to obtain a feature map of the image;
determine a pixel region corresponding to each pixel in the feature map, and analyze the semantic category of each pixel region;
determine, according to the semantic category of each pixel region, each pixel whose semantic category is the berth boundary line region, to obtain a segmentation map containing each berth boundary line region;
wherein the semantic categories include any one of a background and a ground solid line, the ground solid line including at least one of a lane line and a parking line.
Further, the analysis module is specifically configured to:
determine whether the line segment between every two of the visible berth corner points lies on a ground solid line;
and if so, determine, according to the segmentation map, the set of line segments between pairs of berth corner points that lie on ground solid lines, to obtain a corner point connection graph.
Further, the determining module is specifically configured to:
determine whether the connection graph contains a region to be detected connected by at least three line segments;
if so, determine whether the type of each berth corner point in the region to be detected is an L-shaped corner point or a T-shaped corner point;
and if so, determine that the region to be detected is a berth.
The above technical solutions have the following beneficial effects: according to the present invention, the position and type of each visible berth corner point in the image can be determined efficiently and accurately, and the berth boundary line regions are obtained by segmenting the image while ensuring timeliness and accuracy, providing a necessary precondition for the subsequent accurate positioning of berth positions; the positional relationship between each visible berth corner point and each berth boundary line region in the image is analyzed without manual operation, so that berth positions can be accurately located in complex roadside parking scenes, greatly reducing roadside parking management costs.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of a method for detecting roadside parking lot berths based on deep learning according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a device for detecting roadside parking lot berths based on deep learning according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an image captured by a camera in a roadside parking lot scene in a preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of berth corner points in a preferred embodiment of the present invention;
FIG. 5 is a schematic flowchart of the corner point detection model training process in a preferred embodiment of the present invention;
FIG. 6 is a schematic diagram of berths captured by a camera in a roadside parking lot scene in a preferred embodiment of the present invention;
FIG. 7 is a schematic diagram of the types of berth corner points in a preferred embodiment of the present invention;
FIG. 8 is a diagram illustrating an example original image from the image set used to train the semantic segmentation model and the segmentation image annotated on that original image according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of the connecting lines between berth corner points in a preferred embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings of the embodiments. It is obvious that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments derived by those skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The technical solutions of the embodiments of the present invention have the following beneficial effects: the position and type of each visible berth corner point in the image can be determined efficiently and accurately, and the berth boundary line regions are obtained by segmenting the image while ensuring timeliness and accuracy, providing a necessary precondition for the subsequent accurate positioning of berth positions; the positional relationship between each visible berth corner point and each berth boundary line region in the image is analyzed without manual operation, so that berth positions can be accurately located in complex roadside parking scenes, greatly reducing roadside parking management costs.
The above technical solutions of the embodiments of the present invention are described in detail below with reference to application examples.
The application examples of the present invention aim to accurately locate berth positions in complex roadside parking scenes without manual operation.
In one possible implementation, for example in a roadside parking management system, a corner point training set, such as corner point training set D, is collected from images of the predetermined monitoring area acquired by the video acquisition device, and the berth corner point rectangular frame region and berth corner point type in each image in the corner point training set D are labeled. The rectangular frame labels corresponding to the images in the corner point training set D are thereby obtained, i.e., a rectangular region centered on each berth corner point together with the type of each berth corner point in each image, such as the berth corner points {G3}, {G1, G2} and {G3, G4, G6} of the images in FIG. 4. Subsequently, YOLOv3 (the third version of the You Only Look Once series of target detection algorithms) is used as the corner point detection model, and an optimal corner point detection model with a two-stage cascaded convolutional neural network structure is obtained by training on the corner point training set D through a gradient descent algorithm; a schematic flowchart of the corner point detection model training process is shown in FIG. 5.
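The training step above can be pictured as a plain supervised loop over the labeled set D. The sketch below is a minimal illustration only, assuming a PyTorch-style detector; `train_corner_detector`, `compute_loss` and the sample layout are hypothetical names standing in for the YOLOv3-style model and its loss, not APIs taken from the patent text.

```python
import torch

CORNER_TYPES = ["L", "T", "I", "non-corner"]  # the four berth corner point categories

def train_corner_detector(model, corner_train_set, epochs=10, lr=1e-3):
    """Plain gradient-descent training over the labeled corner point training set D.

    corner_train_set yields (image [3,H,W], boxes [N,4] as (x1,y1,x2,y2), type_ids [N]).
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for image, boxes, type_ids in corner_train_set:
            opt.zero_grad()
            # Detector-specific loss combining box regression and corner-type
            # classification; compute_loss is a stand-in for the detection head.
            loss = model.compute_loss(image.unsqueeze(0), boxes, type_ids)
            loss.backward()
            opt.step()
    return model
```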
During berth positioning in the roadside parking management system, images containing at least one berth in the predetermined monitoring area are first acquired, and each image is detected through the trained corner point detection model to obtain the position and type of each visible berth corner point in each image. Each image is then segmented through the semantic segmentation model to obtain a segmentation map of the image, each segmentation map containing the berth boundary line regions. For each image, the positional relationship between each visible berth corner point and each berth boundary line region is analyzed according to the position and type of each visible berth corner point and each berth boundary line region in the segmentation map, and the position of each berth in the image is determined according to this positional relationship.
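At a high level, the four steps can be wired together as shown below. This is a minimal sketch in which `detect_corners`, `segment_lines` and `analyze_graph` are hypothetical callables standing in for the trained corner point detection model, the semantic segmentation model and the graph analysis of steps three and four; none of these names come from the patent text.

```python
from typing import Callable, List, Tuple
import numpy as np

Corner = Tuple[int, int, str]   # (x, y, type), type in {"L", "T", "I", "non-corner"}
Berth = List[Tuple[int, int]]   # one berth, given by the corner points that bound it

def position_berths(
    image: np.ndarray,
    detect_corners: Callable[[np.ndarray], List[Corner]],
    segment_lines: Callable[[np.ndarray], np.ndarray],
    analyze_graph: Callable[[List[Corner], np.ndarray], List[Berth]],
) -> List[Berth]:
    """Run the four steps on one image of the monitored area."""
    corners = detect_corners(image)           # step 1: visible corner positions and types
    line_mask = segment_lines(image)          # step 2: 1 = ground solid line, 0 = background
    return analyze_graph(corners, line_mask)  # steps 3-4: connection graph -> berth positions
```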
It should be noted that the video acquisition device in the embodiment of the present invention is a monocular camera installed at the roadside parking lot. The position, angle and height of the camera are adjusted so that multiple roadside berths can be monitored simultaneously. As shown in FIG. 6, the image acquired by the adjusted camera contains berths such as berth A, berth B and berth C. The images acquired by the camera in this state serve as the data source both for training the subsequent detection and segmentation models and for the parking management system.
In a possible implementation, the acquiring an image containing at least one berth in a predetermined monitoring area and detecting the image through a corner point detection model to obtain the position and type of each visible berth corner point in the image includes: acquiring an image containing at least one berth in a predetermined monitoring area, obtaining the position of each visible berth corner point in the image through a corner point detection model, and intercepting a corner point region of each berth corner point in the image; and determining the type of each visible berth corner point in the corner point region through a convolutional neural network consisting of a preset number of layers; wherein the type of each berth corner point is any one of an L-shaped corner point, a T-shaped corner point, an I-shaped corner point and a non-corner point.
The obtaining the position of each visible berth corner point in the image through the corner point detection model and intercepting the corner point region of each berth corner point in the image includes: obtaining a plurality of features of the image at different scales through a preset target detection model and a preset network structure, and obtaining target objects of different sizes from the features at the different scales respectively; and identifying the berth corner points among the target objects, determining a rectangular frame region containing each berth corner point in the image, and intercepting the corner point region within the rectangular frame region.
For example, during berth positioning in the roadside parking management system, an image in the predetermined monitoring area, such as image A shown in FIG. 3, is obtained and it is determined whether image A contains at least one berth. If so, image A is input into the first stage of the corner point detection model, which uses the deep learning target detection model YOLOv3 together with an FPN (Feature Pyramid Network) structure. YOLOv3 predicts and outputs features at three different scales and detects target objects of different sizes on these three scales; the berth corner points among the target objects are identified and the rectangular frame region of each berth corner point in image A is output. In the second stage of the corner point detection model, the regions of image A delimited by these rectangular frame regions are cropped out, each cropped corner point region is input into a classification network of 5 convolutional layers, and the type of each berth corner point in image A is obtained; the corner point types output by the classification network are 'L'-shaped corner points, 'T'-shaped corner points, 'I'-shaped corner points and non-corner points. The types of berth corner points are shown in FIG. 7.
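A minimal sketch of the second stage only is given below: each detected rectangular frame is cropped out of the image and classified by a small network of 5 convolutional layers into the four corner types. It assumes PyTorch, assumes the stage-one detector already provides the boxes, and uses hypothetical names (`CornerTypeNet`, `classify_corners`) and an illustrative channel layout, not details from the patent text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

CORNER_TYPES = ["L", "T", "I", "non-corner"]

class CornerTypeNet(nn.Module):
    """5 convolutional layers followed by a 4-way classification head."""
    def __init__(self, num_classes=4):
        super().__init__()
        chans = [3, 16, 32, 64, 64, 64]            # illustrative channel widths
        self.convs = nn.ModuleList(
            [nn.Conv2d(chans[i], chans[i + 1], kernel_size=3, padding=1) for i in range(5)]
        )
        self.head = nn.Linear(chans[-1], num_classes)

    def forward(self, x):
        for conv in self.convs:
            x = F.max_pool2d(F.relu(conv(x)), 2)   # downsample after each conv layer
        x = x.mean(dim=(2, 3))                      # global average pooling
        return self.head(x)

def classify_corners(image: torch.Tensor, boxes, net: CornerTypeNet, size=32):
    """image: [3,H,W] tensor; boxes: list of (x1,y1,x2,y2) from the first stage."""
    types = []
    for x1, y1, x2, y2 in boxes:
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)  # intercept the corner point region
        crop = F.interpolate(crop, size=(size, size), mode="bilinear", align_corners=False)
        pred = net(crop).argmax(dim=1).item()
        types.append(CORNER_TYPES[pred])
    return types
```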
In one possible implementation, the segmenting the image through a semantic segmentation model to obtain a segmentation map containing each berth boundary line region includes: inputting the image into the semantic segmentation model to obtain a feature map of the image; determining a pixel region corresponding to each pixel in the feature map, and analyzing the semantic category of each pixel region; and determining, according to the semantic category of each pixel region, each pixel whose semantic category is the berth boundary line region, to obtain a segmentation map containing each berth boundary line region.
The semantic categories include any one of a background and a ground solid line, the ground solid line including at least one of a lane line and a parking line.
For example, during berth positioning in the roadside parking management system, an image A containing at least one berth in the predetermined monitoring area is obtained, and image A is detected through the corner point detection model to obtain the position and type of each visible berth corner point in image A. Image A is then input into the semantic segmentation model; this embodiment adopts the deep learning semantic segmentation model PSPNet (Pyramid Scene Parsing Network), a segmentation model with real-time performance, and uses it to segment the image to be detected into two categories, background and ground solid line. PSPNet is composed of a ResNet (Residual Network) backbone using dilated convolution and a PPM (Pyramid Pooling Module). The original image to be detected, such as image A, is input into PSPNet to obtain a feature map of image A, the size of the output feature map being 1/8 of that of image A. A pixel region corresponding to each pixel in the feature map is then determined and the semantic category of each pixel region is analyzed: the semantic category of each pixel on the feature map is predicted, specifically the probability that the pixel belongs to the ground solid line or to the background, and the semantic category of each pixel region is determined from these probability values. Finally, each pixel whose semantic category is a berth boundary line region is determined according to the semantic category of each pixel region, yielding a segmentation map containing each berth boundary line region.
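The inference step can be sketched as follows: a two-class model (background vs. ground solid line) outputs per-pixel scores at 1/8 of the input resolution, which are upsampled and argmax-ed into a binary line mask. This is a minimal sketch assuming PyTorch; `model` stands in for a trained PSPNet-style network and is not defined here, and `segment_ground_lines` is a hypothetical name.

```python
import torch
import torch.nn.functional as F

def segment_ground_lines(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """image: [3, H, W] float tensor; returns a [H, W] mask, 1 = ground solid line."""
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))          # [1, 2, H/8, W/8] per-pixel class scores
        logits = F.interpolate(
            logits, size=image.shape[1:], mode="bilinear", align_corners=False
        )                                           # back to full image resolution
        probs = logits.softmax(dim=1)               # probabilities of background / solid line
        mask = probs.argmax(dim=1).squeeze(0)       # 0 = background, 1 = ground solid line
    return mask
```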
The method further includes pre-training the semantic segmentation model used to segment the berth lines. To pre-train this convolutional neural network for berth line segmentation, an image set S needs to be collected from the camera; the image set S contains the original camera images and their annotation information, the annotation information being a segmentation map. In the semantic category of each pixel in the segmentation map, 0 denotes the background and 1 denotes a ground solid line, where the ground solid lines include parking lines and lane lines; in the annotated segmentation map of FIG. 8, the white area is the background and the black area is the ground solid line. The berth line segmentation model is trained on the image set S through a gradient descent algorithm, and finally the trained semantic segmentation model of the berth lines is exported; this model can be used to obtain all ground solid line regions in the image to be detected.
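Pre-training on the image set S reduces to per-pixel two-class cross-entropy under gradient descent. The sketch below is illustrative only, assuming PyTorch and a model that returns full-resolution logits (otherwise the logits would be upsampled as in the inference sketch); the function and file names are hypothetical.

```python
import torch
import torch.nn.functional as F

def train_segmentation(model, image_set_s, epochs=20, lr=1e-3):
    """image_set_s yields (image [3,H,W] float, label_map [H,W] long, 0=background, 1=line)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for image, label_map in image_set_s:
            opt.zero_grad()
            logits = model(image.unsqueeze(0))                      # [1, 2, H, W]
            loss = F.cross_entropy(logits, label_map.unsqueeze(0))  # per-pixel 2-class loss
            loss.backward()
            opt.step()
    torch.save(model.state_dict(), "berth_line_segmentation.pt")    # export the trained model
    return model
```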
It should be noted that, in the embodiment of the present invention, the berth line segmentation problem is treated as an image semantic segmentation problem. Because berth lines and lane lines are highly similar, the embodiment treats lane lines and berth lines as line segments of the same category, i.e., both belong to the ground solid line; during segmentation, the ground solid line is treated as the foreground category and all other areas as background. This provides a necessary precondition for subsequently analyzing, efficiently and accurately, the positional relationship between each visible berth corner point and each berth boundary line region in the image; at the same time, adopting the deep learning semantic segmentation model PSPNet ensures both the real-time performance and the accuracy of image segmentation, which further improves berth positioning precision.
In a possible implementation, the analyzing the positional relationship between each visible berth corner point in the image and each berth boundary line region according to the position and type of each visible berth corner point and each berth boundary line region in the segmentation map includes: determining whether the line segment between every two of the visible berth corner points lies on a ground solid line; and if so, determining, according to the segmentation map, the set of line segments between pairs of berth corner points that lie on ground solid lines, to obtain a corner point connection graph.
The determining the position of each berth in the image according to the positional relationship includes: determining whether the connection graph contains a region to be detected connected by at least three line segments; if so, determining whether the type of each berth corner point in the region to be detected is an L-shaped corner point or a T-shaped corner point; and if so, determining that the region to be detected is a berth.
For example, during berth positioning in the roadside parking management system, according to the positions of the visible berth corner points in image A obtained by the corner point detection model, as shown in FIG. 9, 11 corner points {V1, V2, V3, V4, V5, V6, V7, V8, V9, V10, V11} are obtained in the image. Using the segmentation map containing the berth boundary line regions, it is determined whether the region between each pair of corner points is a ground solid line; if so, the two corner points are connected, yielding a connection graph with the line segments {e1, e2, e3, e4, e5, e6, e7}. It is then determined whether the connection graph contains a region to be detected connected by at least three line segments; if so, it is determined whether the berth corner point types in the region to be detected are 'L'-shaped or 'T'-shaped, and if so, the region to be detected is determined to be a berth. In FIG. 9, solid line e6 connects only two 'I'-shaped corner points, so e6 is a segmented non-berth line.
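One plausible way to realize this connection-graph analysis is sketched below: the solid-line test samples points along the segment between two corners against the binary mask, and a "region to be detected" is read here as a chain of three connected segments whose corners are all L- or T-shaped. The sampling count, ratio threshold and helper names are illustrative assumptions, not taken from the patent text.

```python
from itertools import combinations
import numpy as np

def on_solid_line(p1, p2, mask, samples=50, min_ratio=0.9):
    """True if most points sampled along the segment p1-p2 fall on the line mask."""
    xs = np.linspace(p1[0], p2[0], samples).round().astype(int)
    ys = np.linspace(p1[1], p2[1], samples).round().astype(int)
    return mask[ys, xs].mean() >= min_ratio

def build_connection_graph(corners, mask):
    """corners: list of (x, y, type). Returns index pairs joined by a ground solid line."""
    return [
        (i, j)
        for i, j in combinations(range(len(corners)), 2)
        if on_solid_line(corners[i][:2], corners[j][:2], mask)
    ]

def find_berths(corners, edges):
    """Chains of three connected segments whose corner points are all L- or T-shaped."""
    adj = {i: set() for i in range(len(corners))}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    seen, berths = set(), []
    for a in adj:
        for b in adj[a]:
            for c in adj[b] - {a}:
                for d in adj[c] - {a, b}:
                    chain = (a, b, c, d)            # four corners, three line segments
                    if frozenset(chain) in seen:
                        continue
                    if all(corners[k][2] in ("L", "T") for k in chain):
                        seen.add(frozenset(chain))
                        berths.append([corners[k][:2] for k in chain])
    return berths
```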
An embodiment of the present invention provides a device for detecting roadside parking lot berths based on deep learning, which can implement the method embodiments provided above; for its specific function implementation, reference is made to the description in the method embodiments, which is not repeated here.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art will recognize that many further combinations and permutations of the various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or the claims is intended to mean a "non-exclusive or".
Those of skill in the art will further appreciate that the various illustrative logical blocks, units, and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate the interchangeability of hardware and software, various illustrative components, elements, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design requirements of the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The various illustrative logical blocks, or elements, described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be located in a user terminal. In the alternative, the processor and the storage medium may reside in different components in a user terminal.
In one or more exemplary designs, the functions described above in connection with the embodiments of the invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media that facilitate transfer of a computer program from one place to another. Storage media may be any available media that can be accessed by a general-purpose or special-purpose computer. For example, such computer-readable media can include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store program code in the form of instructions or data structures and which can be read by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. In addition, any connection is properly termed a computer-readable medium; thus, if the software is transmitted from a website, server, or other remote source via a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wirelessly, e.g., by infrared, radio or microwave, those media are included in the definition of computer-readable medium. Disk and disc, as used herein, include compact disc, laser disc, optical disc, DVD, floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above may also be included within computer-readable media.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.