Brain segmentation method fusing differential geometric information
1. A brain segmentation method fusing differential geometric information comprises the following steps:
step S1: performing normalization preprocessing on an MR image set, wherein the MR image set comprises a T1 weighted MR image set of normal brain tissues or a multi-modal MR image set of brain tumors;
step S2: introducing a Jacobian matrix and a Laplace operator into a preprocessing result of the MR image set so as to emphasize image edge information through differential geometric processing;
step S3: performing data enhancement on the MR image set subjected to differential geometric processing to obtain an enhanced data set;
step S4: training a neural network model by using the enhanced data set to realize image segmentation, reconstructing the segmentation result, and outputting the prediction of the segmentation result.
2. The method according to claim 1, wherein in step S2, for each MR image, the differential geometric processing is performed according to the following steps:
step S21: regarding the MR image as a transformation function f(x, y), where the gray value is expressed as the function value at each position and changes with the position information in the x and y directions; the function for calculating the Jacobian matrix is expressed as:

J_f(x, y) = [ ∂f/∂x   ∂f/∂y ]

wherein J_f(x, y) denotes the Jacobian matrix of the image function; the elements in the matrix are the partial derivatives in the x direction and the y direction respectively, and are used to reflect the trend of gray-value variation in each direction;
step S22: applying to the MR image the Laplace operator, a second-order differential operator in two-dimensional Euclidean space, and differencing its second derivatives in the x and y directions to obtain the Laplace operator of the discrete function; for a two-dimensional function f(x, y), the second-order differences in the x and y directions are respectively expressed as:

∂²f/∂x² = f(x + 1, y) + f(x - 1, y) - 2f(x, y)
∂²f/∂y² = f(x, y + 1) + f(x, y - 1) - 2f(x, y)
the difference form of the Laplace operator obtained by calculation is expressed as:

∇²f(x, y) = f(x + 1, y) + f(x - 1, y) + f(x, y + 1) + f(x, y - 1) - 4f(x, y)
written as a spatial filter, it takes the form:

0   1   0
1  -4   1
0   1   0
the spatial filter is moved over the original MR image line by line; at each position, the filter values are multiplied by the overlapping pixels and summed, and the sum is assigned to the pixel at the center point.
3. The method according to claim 1, wherein in step S1, the normalization preprocessing of the MR image set is Z-score data standardization, which scales the data within a fixed interval and converts the data into dimensionless values fitting a normal distribution; the transformation function is expressed as:

z = (x - μ) / σ

where μ is the mean of the samples and σ is the standard deviation of the samples.
4. The method according to claim 1, wherein in step S1, when the MR image set is a T1-weighted MR image set of normal brain tissues, the preprocessing further comprises histogram matching and equalization, performed according to the following steps:
reading original image information, and converting the original image information into a gray level histogram image to obtain matrix information of the image;
counting the frequency of different gray scales, and calculating to obtain a probability distribution cumulative function of the gray scales;
equalizing the cumulative distribution function;
applying the equalized distribution function to the original image to homogenize it.
5. The method according to claim 1, wherein in step S1, when the MR image set is a multi-modal MR image set of a brain tumor, data labeling is performed according to the following steps:
slicing the data of each mode, discarding the data without a segmentation label, and splicing all the modes together to be used as the multichannel input of the neural network model;
the correspondence between the segmented whole tumor, enhancing tumor and tumor core and the labels is expressed as:
WT=ED+ET+NET
TC=ET+NET
wherein NET is the necrotic region (label 1), ED is the edema region (label 2), ET is the enhancing tumor region (label 4), and the background portion is label 0; WT represents the whole tumor, ET the enhancing tumor, and TC the tumor core.
6. The method according to claim 1, wherein the neural network model adopts the U-shaped structure and hierarchy of Unet, comprising multiple downsampling stages and corresponding multiple upsampling stages, and the convolution processing of the last layer is set as follows: N convolutions of size 1 × 1 are used, followed by activation with the soft-max function, so that the number of output channels becomes N, where N is equal to the number of target regions to be segmented.
7. The method of claim 1, wherein the neural network model is trained using Dice coefficients as a loss function.
8. The method according to claim 1, wherein in step S3, the enhanced data set is obtained by flipping, rotating, scaling, and displacing the MR image set.
9. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
10. A computer device comprising a memory and a processor, on which memory a computer program is stored which is executable on the processor, characterized in that the steps of the method of any of claims 1 to 8 are implemented when the processor executes the program.
Background
Magnetic Resonance Imaging (MRI) is generally selected for brain functional imaging and is widely applied in brain disease analysis, because MRI medical images offer high contrast and high spatial resolution for soft tissues with small density differences. MRI can clearly display soft tissues, anatomical structures and pathological changes, supports multi-orientation and multi-sequence imaging of lesions, and provides abundant information for clinical diagnosis; compared with other medical imaging modalities, it causes little ionizing-radiation damage to the human body and is non-invasive. However, in actual brain images, the small intensity difference and low contrast between gray matter and white matter, together with noise, motion artifacts and partial volume effects, make the final segmentation result insufficiently accurate. Therefore, accurate segmentation of brain MRI images is of great significance for medical diagnosis.
Image segmentation is a key and common technique in medical image processing and is widely applied in clinical and scientific research, for example in lesion-process visualization, surgical planning, lesion identification, and three-dimensional localization. Segmentation extracts target features from the image background for subsequent clinical measurement and analysis. Deep learning plays a vital role in the field of medical image segmentation, achieving good results in the prediction and segmentation of image targets by virtue of the strong computing power of neural networks.
In recent years, research in the field of medical image segmentation at home and abroad has mainly fallen into two categories: methods based on traditional segmentation and methods based on deep learning. Traditional segmentation methods mainly judge based on the geometry, gray scale, texture and other properties of the image. The most common characterization is gray level, which mainly involves threshold, region, boundary and cluster information; the segmentation target is judged by calculating the distribution and magnitude of the gray values. For example, threshold-based methods rely on the difference in gray values between different tissues, and the target region can be determined by comparing gray values. In practical application, because of the low contrast between different tissues, the gray values and texture features of the image are similar, and the segmentation results of traditional methods are not accurate enough. Recently, with advances in hardware, many deep-learning-based frameworks have appeared, markedly improving the running speed and segmentation accuracy of segmentation algorithms. Two representative approaches exist: 1) extracting the global and local features required for segmentation with a Convolutional Neural Network (CNN); 2) performing pixel-level prediction on the full-size image with a Fully Convolutional Network (FCN), surpassing the localization performance of traditional methods. According to the categories of deep learning, the two broad classes of methods can be divided into segmentation based on supervised learning and segmentation based on unsupervised learning. However, in current deep-learning-based image segmentation schemes, problems such as uneven gray scale and indistinct edges in the images still lead to poor segmentation results.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a brain segmentation method fusing differential geometric information, which comprises the following steps:
step S1: performing normalization preprocessing on an MR image set, wherein the MR image set comprises a T1 weighted MR image set of normal brain tissues or a multi-modal MR image set of brain tumors;
step S2: introducing a Jacobian matrix and a Laplace operator into a preprocessing result of the MR image set so as to emphasize image edge information through differential geometric processing;
step S3: performing data enhancement on the MR image set subjected to differential geometric processing to obtain an enhanced data set;
step S4: and training a neural network model by using the enhanced data set, realizing image segmentation, reconstructing a segmentation result and further outputting prediction of the segmentation result.
Compared with the prior art, the method provides a segmentation network introducing differential geometric information based on a neural network framework, makes full use of the tissue edge information of the image, and improves the segmentation accuracy of the network. The brain tissue and tumor segmentation network constructed by the invention achieves good performance, helps doctors analyze the state of an illness, reduces misjudgment, and is of great significance in disease diagnosis and treatment.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic diagram of an improved Unet network architecture according to one embodiment of the present invention;
FIG. 2 is a flow chart of a brain tissue Unet segmentation experiment according to an embodiment of the present invention;
FIG. 3 is an image of a brain tissue Unet segmentation result according to an embodiment of the present invention;
FIG. 4 is a flow chart of a brain tumor segmentation experiment according to another embodiment of the present invention;
FIG. 5 is a schematic diagram of the result of brain tumor segmentation according to another embodiment of the present invention;
FIG. 6 is a brain tumor data tag visualization image according to another embodiment of the present invention;
FIG. 7 is a diagram of data enhancement effects according to another embodiment of the present invention;
in the figure, Max Pooling-Max Pooling; Upsampling-Upsampling; Pooling-Pooling; 2 DUnet-two dimensional Unet.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative and not a limitation. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In brief, the brain segmentation method fusing differential geometric information provided by the invention comprises the following steps. Step S110: performing preprocessing such as gray-level standardization, histogram matching and equalization on N1 sets of T1-weighted MR images of normal brain tissue, and performing preprocessing such as standardization and cropping on N2 sets of multi-modal brain tumor MR images, eliminating the adverse effect of over-bright or over-dark uneven gray values on segmentation. Step S120: introducing the Jacobian determinant and the Laplace operator into the preprocessing results of the normal-brain-tissue MR image set and the multi-modal tumor MR image set, emphasizing the edge information of the images and completing the differential geometric processing. Step S130: applying techniques such as flipping, rotation, scaling and shifting to realize data enhancement, thereby expanding the amount of effective training data for neural network model training. Step S140: training a neural network model to realize segmentation and reconstruction of the images, the final output being the prediction of the segmentation result. For example, accurate segmentation of gray matter, white matter, cerebrospinal fluid and the like can be realized for normal brain tissue MR images, and segmentation of the whole tumor, enhancing tumor, tumor core and the like can be realized for multi-modal brain tumor images.
The preprocessing process in step S110 can realize the normalization of the image and primarily eliminate the noise in the MR image, so as to improve the accuracy and efficiency of the subsequent image segmentation.
For example, for N1 sets of T1-weighted brain MR images and N2 sets of multi-modal brain tumor MR images, the Z-score data standardization method is adopted, specifically as follows:
The mean and standard deviation of the original data are used to scale the data into a small fixed interval, so that the data are converted into dimensionless values conforming to a normal distribution and the limitation of data units is removed. The transformation function of the method is:

z = (x - μ) / σ

where μ is defined as the mean of the sample and σ is the standard deviation of the sample.
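For illustration, the Z-score standardization step can be sketched as follows (the function name and toy data are illustrative only and not part of the invention):

```python
import numpy as np

def z_score_normalize(image):
    """Z-score standardization of an MR image: subtract the sample mean
    and divide by the sample standard deviation, z = (x - mu) / sigma."""
    mu = image.mean()
    sigma = image.std()
    if sigma == 0:  # guard against constant images
        return image - mu
    return (image - mu) / sigma

# Example: a toy 2x3 "image"
img = np.array([[1.0, 2.0, 3.0],
                [4.0, 5.0, 6.0]])
z = z_score_normalize(img)
# The result has zero mean and unit standard deviation.
```

After this transformation, images acquired with different scanner settings become comparable, which stabilizes subsequent network training.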
For example, histogram matching first equalizes the MR image toward a normalized uniform histogram and then, using the uniform histogram as an intermediary, applies the inverse of the reference image's equalization to match the reference histogram. The specific algorithm of histogram equalization comprises: reading the original image information and converting it into a gray-level histogram to obtain the matrix information of the image; counting the frequencies of the different gray levels and calculating the cumulative probability distribution function of the gray levels; equalizing the cumulative distribution function; and applying the equalized distribution function to the original image to homogenize it.
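The four equalization steps listed above can be sketched as follows (a minimal implementation for 8-bit grayscale images; names and toy data are illustrative):

```python
import numpy as np

def histogram_equalize(gray, levels=256):
    """Histogram equalization following the steps in the text:
    1) count gray-level frequencies, 2) accumulate them into a cumulative
    distribution function (CDF), 3) rescale the CDF to the full gray range,
    4) map the result back onto the original image."""
    hist = np.bincount(gray.ravel(), minlength=levels)   # frequency of each gray level
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                       # cumulative probability in [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)  # equalized lookup table
    return lut[gray]                                     # apply to the original image

rng = np.random.default_rng(0)
img = rng.integers(100, 120, size=(64, 64), dtype=np.uint8)  # low-contrast image
eq = histogram_equalize(img)
# The equalized image spreads the narrow 100-119 range over the full 0-255 range.
```

The lookup-table form makes the mapping a single vectorized indexing operation, which matches how equalization is typically applied per slice.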
The following describes the process of fusing differential geometric information proposed by the invention through specific embodiments. For example, on the IBSR (Internet Brain Segmentation Repository) data set of 18 standard medical image sets, it is verified that introducing edge information through the Jacobian determinant and the Laplace operator effectively improves the segmentation accuracy of gray matter, white matter and cerebrospinal fluid in brain tissue. As another example, based on Unet, edge information based on differential geometry is introduced through the Jacobian determinant and the Laplace operator for the segmentation of brain tissue images and multi-modal brain tumor images, with verification completed on the IBSR2018 and BraTS2018 data sets.
First, Jacobian determinant
The invention proposes embodying the edge information of a brain image in the Jacobian determinant. Specifically, in vector differential geometry, the Jacobian matrix is the matrix formed in a certain way by the first-order partial derivatives of a vector-valued function. If the matrix is square, its determinant is referred to as the Jacobian determinant.
Mathematically, the following definition can be made: suppose the function f maps an n-dimensional vector x ∈ R^n to an m-dimensional vector f(x) ∈ R^m, where f consists of m real-valued functions. The Jacobian matrix J_f of this function can be defined as:

J_f = [ ∂f_1/∂x_1  …  ∂f_1/∂x_n ]
      [     ⋮             ⋮     ]
      [ ∂f_m/∂x_1  …  ∂f_m/∂x_n ]

For a single element therein, it can be defined as:

(J_f)_ij = ∂f_i / ∂x_j
for an image, it can be assumed that the function f (x) is a transformation from a two-dimensional space to a two-dimensional space, the gray value is expressed as a function value at each position, the gray value will change along with the position information in the x and y directions, respectively, and the function for calculating the Jacobian matrix is:
in the two-dimensional space of the image, the partial derivatives are respectively calculated in the x direction and the y direction, so that the change of the gray value in each direction can be calculated, the edge parts of different tissues have certain discrimination on the medical image, the change is obvious, the change can be embodied in the Jacobian, and the change trend of the gray value can be seen by solving the Jacobian of the image. The jacobian matrix of the image will contain considerable edge information.
Second, Laplace operator
The Laplace operator can detect both steep edges and slowly changing edges from the zero crossings between its positive and negative peaks; because of this sensitivity to edge changes, it is widely applied in edge detection. The Laplace operator is the second-order differential operator of n-dimensional Euclidean space, defined as the divergence of the gradient; it is the simplest isotropic differential operator and is rotation invariant. The Laplacian of a two-dimensional function is its isotropic second derivative. To make it suitable for digital image processing, the operator can be converted into a discrete form.
The derivative of a discrete function degenerates into a difference; the one-dimensional first-order and second-order difference equations are expressed as:

∂f/∂x = f(x + 1) - f(x)
∂²f/∂x² = f(x + 1) + f(x - 1) - 2f(x)
the Laplace operator of the discrete function is obtained by respectively differentiating the second derivatives of the Laplace operator in the x direction and the y direction, so that for a two-dimensional function f (x, y), the second differences in the x direction and the y direction are respectively expressed as:
the difference form of the Laplace operator obtained by calculation is as follows:
Written as a spatial filter, it is:

0   1   0
1  -4   1
0   1   0
the spatial filter can be used to directly operate on the image, the specific operation form is basically consistent with other filters, the image is moved on the original image line by line, the numerical value of the image is multiplied by the overlapped pixels to be summed, and the summation is assigned to the pixel of the central point.
Since image edges are regions where the gray information jumps, it can be seen from the operator that if a pixel's gray value is very close to those of its four neighbors (up, down, left and right), the newly computed value at that point is close to 0. The Laplace sharpening template therefore clearly picks out regions of pixel change and is very sensitive at edge parts.
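The sliding-template operation described above can be sketched directly (a straightforward, unoptimized convolution with zero padding; a production pipeline would use a library convolution instead):

```python
import numpy as np

LAPLACE_KERNEL = np.array([[0,  1, 0],
                           [1, -4, 1],
                           [0,  1, 0]], dtype=np.float64)

def laplace_filter(image):
    """Slide the 3x3 Laplacian template over the image, multiply it with
    the overlapping pixels, sum, and assign the result to the center pixel
    (zero padding at the borders)."""
    padded = np.pad(image.astype(np.float64), 1)
    out = np.zeros(image.shape, dtype=np.float64)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * LAPLACE_KERNEL)
    return out

# In a perfectly flat region the response is exactly 0 (each neighbor
# equals the center); only edges and the zero-padded borders respond.
flat = np.full((4, 4), 7.0)
resp = laplace_filter(flat)
```

This illustrates the sensitivity property stated in the text: interior pixels of a constant region yield 0, while any gray-value jump produces a strong response.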
Third, constructing an image data set
In one embodiment, the IBSR2018 and BraTS2018 data sets are used in the experiments. The IBSR2018 data set contains T1-sequence brain MR images of 18 healthy subjects and can serve as a standard for tissue quantification and segmentation evaluation. The 18 sets of data comprise 10 training sets, 5 validation sets and 3 test sets, where the training and validation data have corresponding ground truth (real labels). The data are 3D T1-modality volumes with segmentation labels, acquired on a 1.5T MR scanner and preprocessed with bias-field correction and skull stripping, which facilitates subsequent image preprocessing and network segmentation. The data set provides 3 tissue annotations (cerebrospinal fluid, gray matter and white matter), with the background part marked as 0, so network computation can be carried out directly on the annotated labels to obtain the target result.
The BraTS2018 data set contains multi-modal glioma MR data of 285 subjects, covering the four modalities t1, t2, flair and t1ce. For example, three regions need to be segmented: Whole Tumor (WT), Enhancing Tumor (ET) and Tumor Core (TC). In the experiments, the MR data of all four modalities are read; each modality comprises 155 slices of size 240 × 240.
Fourth, constructing an image segmentation model
The image segmentation model may employ various types of neural network models. In one embodiment, image segmentation is framed by the Unet model. Unet is an improved semantic segmentation network based on Fully Convolutional Networks (FCNs) and mainly consists of two parts, an encoding layer and a decoding layer. The encoding layer performs feature extraction and compresses the features into a feature map. The decoding layer performs upsampling along an expansion path and decodes the extracted feature map into a segmentation prediction image of the same size as the original image.
For the scene whose final goal is segmentation in three-dimensional brain tumor MR images, the basic structure of the 2DUnet network is adjusted for applicability, considering the relevant image information contained along the Z-axis under multi-modal data, as shown in fig. 1. In the last layer, 4 convolutions of size 1 × 1 are adopted and activated by Softmax, converting the number of output channels to the number of tissues to be segmented, so that the four segmentation targets of Cerebrospinal Fluid (CSF), Gray Matter (GM), White Matter (WM) and background can be output simultaneously.
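The final-layer mechanics can be illustrated with plain NumPy (a 1 × 1 convolution is just a per-pixel linear map over the channels; the function name, toy shapes and random weights are illustrative, not the trained model):

```python
import numpy as np

def conv1x1_softmax(features, weights, bias):
    """Final Unet layer as described: a 1x1 convolution mapping C feature
    channels to N output channels (one per target class), followed by a
    channel-wise soft-max so each pixel gets a probability over N classes.
    features: (H, W, C); weights: (C, N); bias: (N,)."""
    logits = features @ weights + bias            # 1x1 conv == per-pixel linear map
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
feat = rng.normal(size=(8, 8, 16))                # toy feature map, C = 16
w = rng.normal(size=(16, 4))                      # N = 4: CSF, GM, WM, background
b = np.zeros(4)
probs = conv1x1_softmax(feat, w, b)
# probs has shape (8, 8, 4); each pixel's class probabilities sum to 1.
```

Taking the argmax over the last axis then yields the per-pixel class prediction that forms the segmentation map.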
Fifth, evaluation criteria
In one embodiment, the Dice coefficient (DSC), the most common evaluation index in medical image segmentation, is used as the evaluation index of the segmentation results; besides comparing the degree of match between the segmented region and the actual region, it also reflects repeatability. The DSC value ranges from 0 to 1; the closer the value is to 1, the higher the similarity between the two sets and the better the segmentation result, and vice versa. The formula is:

DSC = 2TP / (2TP + FP + FN)
wherein TP indicates true positive, FP indicates false positive, and FN indicates false negative.
For the case of segmenting 4 target regions, the coefficient needs to be calculated for each partial target region, so the loss function is expressed as:

Loss = 1 - (1/4) · Σ DSC_i,  i = 1, …, 4
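A minimal sketch of the Dice computation and the multi-region loss follows (the averaging form of the loss is a common convention assumed here, since the exact expression is not spelled out in the text):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between binary prediction and binary
    ground-truth masks: DSC = 2*TP / (2*TP + FP + FN)."""
    tp = np.sum(pred * target)            # true positives
    fp = np.sum(pred * (1 - target))      # false positives
    fn = np.sum((1 - pred) * target)      # false negatives
    return (2 * tp + eps) / (2 * tp + fp + fn + eps)

def dice_loss(pred_masks, target_masks):
    """Average (1 - DSC) over the target regions, as one common form
    of the multi-region Dice loss."""
    scores = [dice_coefficient(p, t) for p, t in zip(pred_masks, target_masks)]
    return 1.0 - float(np.mean(scores))

pred = np.array([1, 1, 0, 0])
gt   = np.array([1, 0, 1, 0])
# TP = 1, FP = 1, FN = 1  ->  DSC = 2 / (2 + 1 + 1) = 0.5
```

The small eps term keeps the coefficient well defined when both masks are empty.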
Sixth, evaluation results
The effect of the method for applying the differential geometric information to the brain tissue segmentation and the brain tumor segmentation is verified respectively through the IBSR2018 data set and the network segmentation result of the BraTS2018 data set.
On the IBSR2018 data set, the input original images are first preprocessed with gray-level standardization, histogram matching and histogram equalization. A comparison experiment is then carried out among three settings: no geometric information added, Jacobian determinant processing added (denoted Unet + JD), and Laplace processing added (denoted Unet + Laplace); the processed data are segmented with the Unet framework, and the network finally outputs the predicted segmentation result. The experimental flow is shown in fig. 2, the differential geometric processing results in fig. 3, and the experimental results in table 1. It can be seen that, with all other experimental conditions unchanged, adding Jacobian determinant information or Laplace operator information effectively improves the segmentation accuracy of gray matter, white matter and cerebrospinal fluid.
TABLE 1 Unet segmentation brain tissue results fused with differential geometry (evaluation index: Dice coefficient)
For BraTS2018, z-score standardization is first performed on the data, eliminating the adverse effects of uneven gray values and unlabeled data slices on subsequent image segmentation. The data of each modality are sliced, the data labels are visualized as shown in fig. 6, and slices without segmentation labels are discarded. In the differential geometric processing part, the JD image and the Laplace image are computed from the obtained slice data. After the data enhancement technique increases the amount of effective training data, the data are input into the Unet network for segmentation; the experimental flow is shown in fig. 4.
In one embodiment, the data tag processing for the multi-modal brain tumor MR image set comprises:
the data of each mode is sliced, the data without a segmentation label is abandoned, and then all the modes are spliced together to be used as the input of four channels.
Similar label processing is carried out on the image masks; the correspondence between the three regions to be segmented (WT, ET, TC) and the labels is:
WT=ED+ET+NET (13)
TC=ET+NET (14)
wherein NET is the necrotic region (label 1), ED is the edema region (label 2), ET is the enhancing tumor region (label 4), and the background part is label 0. For the training of WT, TC and ET, the corresponding labels are read and training is performed separately for each individual target, so that only one predicted target image appears at the final output layer.
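The label correspondence WT = ED + ET + NET and TC = ET + NET can be sketched as a mask-derivation step (function name and toy label map are illustrative):

```python
import numpy as np

# Label convention from the text: necrosis (NET) = 1, edema (ED) = 2,
# enhancing tumor (ET) = 4, background = 0.
def region_masks(label_map):
    """Derive the three training targets from a label map according to
    WT = ED + ET + NET and TC = ET + NET."""
    wt = np.isin(label_map, (1, 2, 4)).astype(np.uint8)  # whole tumor
    tc = np.isin(label_map, (1, 4)).astype(np.uint8)     # tumor core
    et = (label_map == 4).astype(np.uint8)               # enhancing tumor
    return {"WT": wt, "TC": tc, "ET": et}

labels = np.array([[0, 1, 2],
                   [4, 0, 1]])
masks = region_masks(labels)
# WT covers labels 1, 2 and 4; TC covers 1 and 4; ET only label 4.
```

Each derived binary mask can then serve as the single ground-truth target for its separate training run, as described above.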
In one embodiment, the data enhancement comprises obtaining the enhanced data set by flipping, rotating, scaling and displacing the original image data set. As shown in fig. 7, the enhanced data have substantially the same shape as the original image but differ slightly, so the neural network treats them as different images; they can therefore be used as new effective training data to improve model segmentation accuracy.
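A minimal augmentation sketch of the kind described (random flips, 90-degree rotations and integer shifts; a real pipeline would also apply continuous rotations and rescaling, and must transform the label mask identically):

```python
import numpy as np

def augment(image, rng):
    """Simple geometric augmentations: random flips, a rotation by a
    multiple of 90 degrees, and a small integer pixel displacement."""
    out = image
    if rng.random() < 0.5:
        out = np.fliplr(out)                     # horizontal flip
    if rng.random() < 0.5:
        out = np.flipud(out)                     # vertical flip
    out = np.rot90(out, k=int(rng.integers(0, 4)))  # rotation by k * 90 degrees
    shift = rng.integers(-2, 3, size=2)          # displacement in pixels
    out = np.roll(out, shift=tuple(shift), axis=(0, 1))
    return out

rng = np.random.default_rng(42)
img = np.arange(36).reshape(6, 6)
aug = augment(img, rng)
# Shape and gray-value content are preserved; only the geometry changes.
```

Because all of these operations are permutations of the pixel grid, the augmented image keeps exactly the same multiset of gray values as the original.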
The tumor segmentation results are shown in fig. 5 and table 2. Experimental results show that the method can effectively segment the multi-modal brain tumor data set.
TABLE 2 Unet brain tumor segmentation results fused with geometric information (evaluation index: Dice coefficient)
Accordingly, the present invention also provides a brain segmentation system fusing differential geometric information for implementing one or more aspects of the above method. For example, the system comprises: a preprocessing unit for performing normalization preprocessing on an MR image set, wherein the MR image set comprises a T1-weighted MR image set of normal brain tissues or a multi-modal MR image set of brain tumors; a differential geometry processing unit for introducing a Jacobian matrix and a Laplace operator into the preprocessing result of the MR image set to emphasize image edge information through differential geometric processing; a data enhancement unit for performing data enhancement on the differential-geometry-processed MR image set to obtain an enhanced data set; and a segmentation prediction unit for training a neural network model with the enhanced data set, realizing image segmentation, reconstructing the segmentation result and outputting the prediction of the segmentation result. In this system, each unit can be realized by software, dedicated hardware, an FPGA, or the like.
It is noted that those skilled in the art can make appropriate changes or modifications to the above-described embodiments, for example, to achieve more or less target region segmentation, or to train a neural network model using other loss functions, etc., without departing from the spirit and scope of the present invention. In addition, the present invention does not limit the number of convolution layers, the size of convolution kernels, and the like in the neural network model.
In conclusion, the invention utilizes a neural network model, preferably the improved Unet model, introduces edge information through the Jacobian matrix and the Laplace operator, and explores the influence of differential geometric information on the model. The segmentation results on the IBSR2018 data set show that the method can effectively improve segmentation accuracy. The method is further extended to multi-modal brain tumor data: brain tumors are effectively segmented on the multi-modal BraTS2018 data set, and the results show that adding geometric information improves tumor segmentation accuracy. The invention can help doctors analyze the state of an illness and reduce misjudgment.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded to respective computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), may be personalized with state information of the computer-readable program instructions, and this electronic circuitry may execute the computer-readable program instructions to implement aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary; it is not exhaustive and is not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.