Muscle CT image delineation method, system, electronic equipment and machine storage medium
1. A muscle CT image delineation method is characterized by comprising the following steps:
preprocessing the acquired muscle CT image to obtain a preprocessed muscle image sequence;
positioning and identifying the muscle image sequence in the CT sagittal position to obtain a sagittal position positioning image of the target muscle; wherein the target muscle comprises a muscle or group of muscles at a specified location;
performing segmentation processing on the sagittal location image of the target muscle at an axial position level to obtain a segmentation result and a parameter result of the CT image of the muscle or the muscle group at the designated position; the segmentation result includes a plurality of muscle parameters.
2. The muscle CT image delineation method according to claim 1, wherein the step of preprocessing the acquired muscle CT images to obtain a muscle image sequence comprises:
resampling the muscle CT image to obtain a resampled muscle CT image; wherein the muscle CT image comprises a plurality of images at vertebral body levels;
extracting the region of interest of the muscle CT image after resampling to obtain a muscle region image; wherein the muscle region image comprises a muscle region to be segmented;
and carrying out normalization processing on the muscle area image to obtain the muscle image sequence.
3. The muscle CT image delineation method of claim 1, wherein the step of performing location identification on the muscle image sequence in CT sagittal location to obtain a sagittal location image of the target muscle comprises:
positioning and identifying the muscle image sequence in the CT sagittal position based on a pre-trained positioning neural network to obtain a sagittal position positioning image of the target muscle;
the pre-trained positioning neural network comprises, connected in sequence, an input layer, a first specified number of first convolutional layers, a first maximum pooling layer, a first specified number of second convolutional layers, a second maximum pooling layer, a first specified number of third convolutional layers, a third maximum pooling layer, a second specified number of fourth convolutional layers, a fourth maximum pooling layer, a third specified number of fully connected layers, and an output layer.
4. The muscle CT image delineation method according to claim 3, wherein the first specified number is 2; the second specified number is 1; the third specified number is 2.
5. The muscle CT image delineation method according to claim 1, wherein the step of performing segmentation processing on the sagittal positioning image of the target muscle at an axial level to obtain the segmentation result of the CT image of the muscle or muscle group at the designated position comprises:
inputting the sagittal positioning image of the target muscle into a pre-trained segmentation neural network to obtain a plurality of segmentation results of the CT image of the muscle or the muscle group at the designated position;
the pre-trained segmented neural network comprises an input layer, a forward segmented sub-network, a reverse segmented sub-network, a convolutional layer and a softmax layer which are connected in sequence; wherein the forward segmentation subnetwork comprises a fourth preset number of convolution residual modules-pooling layer pairs; the inverse partitioning sub-network includes a fifth preset number of convolution residual module-inverse pooling layer pairs.
6. The method of claim 1, wherein the muscle parameters include a muscle cross-sectional area, a muscle area index, a mean CT value, and a degree of muscle fat infiltration.
7. The muscle CT image delineation method of claim 6, wherein the degree of muscle fat infiltration is characterized by a range of fat CT values and a range of muscle CT values; wherein the fat CT value range is −190 to −30 HU and the muscle CT value range is −29 to 150 HU.
8. A muscle CT image delineation system, the system comprising:
the preprocessing module is used for preprocessing the acquired muscle CT image to obtain a preprocessed muscle image sequence;
the positioning identification module is used for positioning and identifying the muscle image sequence in the CT sagittal position to obtain a sagittal position positioning image of the target muscle; wherein the target muscle comprises a muscle or group of muscles at a specified location;
the segmentation processing module is used for performing segmentation processing on the sagittal positioning image of the target muscle at the axial level to obtain a segmentation result of the CT image of the muscle or the muscle group at the designated position; the segmentation result includes a plurality of muscle parameters.
9. An electronic device comprising a processor and a memory, wherein the memory stores machine executable instructions executable by the processor, and the processor executes the machine executable instructions to implement the muscle CT image delineation method according to any one of claims 1 to 7.
10. A machine storage medium having stored thereon machine executable instructions which, when invoked and executed by a processor, cause the processor to carry out the muscle CT image delineation method of any one of claims 1 to 7.
Background
With the aging of the population, the population suffering from sarcopenia is increasing day by day, and the sarcopenia affects the population more year by year in the next decades. Sarcopenia mainly causes common complications such as reduced quality of life, impaired cardio-pulmonary function, falling down and fracture. In addition, sarcopenia is closely related to the treatment effect and prognosis of some serious malignant diseases.
The acquisition and recording of muscle parameters has become an essential step in sarcopenia research. The main muscle parameter indexes are muscle mass and muscle function (muscle strength or activity). Muscle delineation at the CT axial level is the gold standard for measuring muscle size with the CT image examination technique, and the main evaluation parameters include Skeletal Muscle Area (SMA), Skeletal Muscle Index (SMI), and muscle radiation attenuation (MRA, the mean muscle CT value).
At present, muscle delineation with the CT examination technique is mainly carried out by manual or semi-manual methods. These methods consume a large amount of manpower, suffer from large inter-operator variability and operator fatigue, show poor standard consistency across regions, and involve heavy data quantification work, all of which hinder muscle delineation and measurement. As a result, muscle delineation progresses slowly, is error-prone, and is inefficient.
Disclosure of Invention
The invention aims to provide a muscle CT image delineation method, system, electronic device, and machine storage medium, which can quickly and accurately measure muscle segmentation parameter information from a three-dimensional medical image, thereby improving the efficiency of muscle delineation on medical images and reducing labor cost.
In a first aspect, the present invention provides a muscle CT image delineation method, comprising: preprocessing the acquired muscle CT image to obtain a preprocessed muscle image sequence; performing positioning identification on the muscle image sequence in the CT sagittal position to obtain a sagittal positioning image of the target muscle, wherein the target muscle comprises a muscle or muscle group at a specified location; and performing segmentation processing on the sagittal positioning image of the target muscle at the axial level to obtain a segmentation result of the CT image of the muscle or muscle group at the specified position; the segmentation result includes a plurality of muscle parameters.
In an alternative embodiment, the step of preprocessing the acquired muscle CT images to obtain a muscle image sequence includes: resampling the muscle CT image to obtain a resampled muscle CT image; the muscle CT image comprises a plurality of image images of the vertebral body level; extracting the region of interest of the muscle CT image after resampling to obtain a muscle region image; the muscle area image comprises a muscle area to be segmented; and carrying out normalization processing on the muscle area image to obtain a muscle image sequence.
In an alternative embodiment, the step of performing positioning identification on the muscle image sequence in the CT sagittal position to obtain the sagittal positioning image of the target muscle includes: performing positioning identification on the muscle image sequence in the CT sagittal position based on a pre-trained positioning neural network to obtain the sagittal positioning image of the target muscle; the pre-trained positioning neural network comprises, connected in sequence, an input layer, a first specified number of first convolutional layers, a first maximum pooling layer, a first specified number of second convolutional layers, a second maximum pooling layer, a first specified number of third convolutional layers, a third maximum pooling layer, a second specified number of fourth convolutional layers, a fourth maximum pooling layer, a third specified number of fully connected layers, and an output layer.
In an alternative embodiment, the first specified number is 2; the second specified number is 1; the third specified number is 2.
In an alternative embodiment, the step of performing segmentation processing on the sagittal positioning image of the target muscle at the axial level to obtain a segmentation result of the CT image of the muscle or muscle group at the designated position includes: inputting the sagittal positioning image of the target muscle into a pre-trained segmentation neural network to obtain the segmentation result of the CT image of the muscle or muscle group at the designated position; the pre-trained segmentation neural network comprises an input layer, a forward segmentation sub-network, a reverse segmentation sub-network, a convolutional layer, and a softmax layer which are connected in sequence; the forward segmentation sub-network comprises a fourth preset number of convolution residual module-pooling layer pairs; the reverse segmentation sub-network includes a fifth preset number of convolution residual module-unpooling layer pairs.
In an alternative embodiment, the muscle parameters include muscle cross-sectional area, muscle area index, mean CT value, and degree of muscle fat infiltration.
In an alternative embodiment, the degree of muscle fat infiltration is characterized by a range of fat CT values and a range of muscle CT values; wherein the fat CT value range is −190 to −30 HU and the muscle CT value range is −29 to 150 HU.
In a second aspect, the present invention provides a muscle CT image delineation system, comprising: the preprocessing module is used for preprocessing the acquired muscle CT image to obtain a preprocessed muscle image sequence; the positioning identification module is used for positioning and identifying the muscle image sequence in the CT sagittal position to obtain a sagittal position positioning image of the target muscle; wherein the target muscle comprises a muscle or group of muscles at the specified location; the segmentation processing module is used for performing segmentation processing on the sagittal positioning image of the target muscle at the axial level to obtain a segmentation result of the CT image of the muscle or muscle group at the designated position; the segmentation result includes a variety of muscle parameters.
In a third aspect, the present invention provides an electronic device, including a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to implement the muscle CT image delineation method according to any one of the foregoing embodiments.
In a fourth aspect, the present invention provides a machine-storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the muscle CT image delineation method of any one of the preceding embodiments.
The invention provides a muscle CT image delineation method, system, electronic device, and machine storage medium. According to the method, the preprocessed muscle image is subjected to positioning identification and segmentation processing to obtain a plurality of muscle parameters, such as muscle cross-sectional area, muscle area index, average CT value, and degree of muscle fat infiltration. Measurement of muscle segmentation parameter information from a three-dimensional medical image can thus be realized quickly and accurately, improving the efficiency of muscle-oriented medical image processing and reducing labor cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart illustrating a muscle CT image delineation method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a positioning neural network according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a specific segmented neural network according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a muscle CT image delineation system according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings or the orientations or positional relationships that the products of the present invention are conventionally placed in use, and are only used for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the devices or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly and may, for example, be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The prior art of muscle delineation with the CT examination technique mainly relies on manual or semi-manual methods, which consume a large amount of manpower, suffer from large inter-operator variability and operator fatigue, show poor standard consistency across regions, and involve heavy data quantification work, all of which hinder muscle delineation and measurement. In view of this, the embodiment of the invention provides a muscle CT image delineation method, system, electronic device, and machine storage medium, which can quickly and accurately measure muscle segmentation parameter information from a three-dimensional medical image, thereby improving the efficiency of muscle-oriented medical image processing and reducing labor cost.
For convenience of understanding, firstly, a detailed description is given to a muscle CT image delineation method provided by an embodiment of the present invention, referring to a schematic flow chart of the muscle CT image delineation method shown in fig. 1, the method mainly includes the following steps S102 to S106:
and S102, preprocessing the acquired muscle CT image to obtain a preprocessed muscle image sequence.
The acquired muscle CT image is a three-dimensional medical image originally acquired with the Computed Tomography (CT) image examination technique.
To ensure the accuracy of the subsequent positioning identification and segmentation processing of the muscle CT image, the preprocessing operation may include data sampling of the acquired muscle CT image and extraction of the region that contains the region to be processed, so that the muscle image sequence obtained after preprocessing supports accurate positioning and segmentation.
The muscle image sequence is a three-dimensional medical image sequence, namely the three-dimensional medical image after preprocessing operation. The muscle image sequence may be a slice sequence in a CT three-dimensional image, and the slice sequence in the CT three-dimensional image may include a sequence of medical images with various slice intervals, different slice numbers, and various CT resolutions.
Step S104, performing positioning identification on the muscle image sequence in the CT sagittal position to obtain a sagittal positioning image of the target muscle.
The target muscle includes a muscle or a muscle group at a designated position. Positioning identification is performed by inputting the muscle image sequence into a pre-trained positioning neural network. The pre-trained positioning neural network is built from convolution, pooling, and fully connected layers in an encoding-decoding style; its specific structure is determined by experiment and by the characteristics of the medical image sequences.
The resulting muscle positioning image is a two-dimensional medical image, namely a CT slice: the positioning neural network determines the CT slice on which the subsequent analysis is performed.
Step S106, performing segmentation processing on the muscle positioning image to obtain a segmentation result for the muscle image.
The muscle positioning image obtained in this way is input into the image segmentation neural network to obtain an accurate segmentation result. The segmentation neural network may be a modified U-net neural network or a modified Fully Convolutional Network (FCN). The segmentation result comprises a plurality of muscle parameters, including muscle cross-sectional area, muscle area index, average CT value, and degree of muscle fat infiltration. It can be understood that the segmentation result can also be delineated automatically for different muscle parameters of the muscle image to obtain the area of each segmented region.
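As a hedged illustration of how such parameters might be computed from a segmentation result, the sketch below derives the four values from an axial HU slice and a binary mask. The pixel spacing and patient height are assumptions made for the example; the HU thresholds follow the fat and muscle CT value ranges stated in this document.

```python
import numpy as np

# Assumed values for illustration only: pixel spacing and patient height
# are not specified by this document.
PIXEL_AREA_CM2 = 0.08 * 0.08      # assumed 0.8 mm x 0.8 mm pixel spacing
FAT_HU = (-190, -30)              # fat CT value range (HU), per the text
MUSCLE_HU = (-29, 150)            # muscle CT value range (HU), per the text

def muscle_parameters(hu_slice, mask, height_m):
    """Derive the four parameters from an axial HU slice and a binary
    muscle mask (a sketch, not the patented implementation)."""
    roi = hu_slice[mask > 0]
    sma_cm2 = float(mask.sum()) * PIXEL_AREA_CM2           # muscle cross-sectional area
    smi = sma_cm2 / height_m ** 2                          # muscle area index (cm^2/m^2)
    mean_ct = float(roi.mean())                            # mean CT value (HU)
    fat = ((roi >= FAT_HU[0]) & (roi <= FAT_HU[1])).sum()
    fat_fraction = float(fat) / roi.size                   # degree of fat infiltration
    return sma_cm2, smi, mean_ct, fat_fraction
```

A single fat-range voxel inside the mask then raises the fat fraction and lowers the mean CT value, which is how fat infiltration shows up in these measurements.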
The segmentation result produced by the image segmentation neural network can replace manual muscle delineation. Because muscle slice positioning and muscle segmentation are performed by combining two neural networks (the positioning neural network and the image segmentation neural network), the efficiency of intelligent identification and the accuracy of segmentation are improved, the cost of end-to-end identification with a three-dimensional neural network on a three-dimensional medical image is avoided, and the accuracy of muscle image segmentation is improved.
According to the muscle CT image delineation method provided by the embodiment of the invention, positioning identification and segmentation processing of the preprocessed muscle image yield muscle parameters including muscle cross-sectional area, muscle area index, average CT value, and degree of muscle fat infiltration. Measurement of muscle segmentation parameter information from the three-dimensional medical image can be realized quickly and accurately, improving the efficiency of muscle-oriented medical image processing and reducing labor cost.
In one embodiment, to ensure that the quality of the input to the positioning neural network meets the requirement of positioning segmentation, the acquired muscle CT images are first preprocessed to obtain a muscle image sequence. In specific implementation, the following steps 2.1) to 2.3) may be employed:
step 2.1), resampling the muscle CT image to obtain a resampled muscle CT image; the muscle CT image comprises a plurality of images at vertebral body levels;
step 2.2), extracting an interested area of the resampled muscle CT image to obtain a muscle area image; the muscle area image comprises a muscle area to be segmented;
and 2.3) carrying out normalization processing on the muscle area image to obtain a muscle image sequence.
For step 2.1), resampling can be performed at a preselected sampling interval to ensure the data sampling quality of the muscle CT image.
In addition, the CT modality adopted may be thoracic CT, abdominal CT, lumbar CT, or pelvic CT, and the vertebral bodies of the images at the multiple vertebral body levels may be T4 (thoracic vertebra), T8-T12 (thoracic vertebrae), L1-L5 (lumbar vertebrae), and S1 (sacral vertebra).
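The resampling in step 2.1) can be sketched as follows; the scipy-based linear interpolation and the isotropic 1 mm target spacing are assumptions of this sketch, not specified by the embodiment.

```python
import numpy as np
from scipy import ndimage

def resample(volume, spacing_mm, target_mm=(1.0, 1.0, 1.0)):
    """Resample a CT volume to a preselected sampling interval.
    spacing_mm is the source voxel spacing; order=1 gives linear
    interpolation between voxels."""
    zoom = [s / t for s, t in zip(spacing_mm, target_mm)]
    return ndimage.zoom(volume, zoom, order=1)
```

For example, a volume with 2 mm slice spacing resampled to a 1 mm grid doubles its slice count while the in-plane dimensions stay unchanged.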
For step 2.2), the region of interest may be extracted by selecting a fixed slice region for each type of CT image; extracting this slice region ensures that the muscle region image includes the muscle region to be segmented. The muscle region to be segmented must contain the exact area to be segmented, so that a complete and accurate segmentation result can be obtained in the subsequent segmentation.
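Steps 2.2) and 2.3) can be sketched together; the fixed crop box and the soft-tissue HU window used for normalization are illustrative assumptions, since the embodiment does not specify them.

```python
import numpy as np

def extract_roi(ct_slice, box):
    """Crop a fixed slice region; box = (row0, row1, col0, col1).
    The fixed region would differ per CT type and is assumed here."""
    r0, r1, c0, c1 = box
    return ct_slice[r0:r1, c0:c1]

def normalize(hu, lo=-200.0, hi=300.0):
    """Clip to an assumed soft-tissue HU window and rescale to [0, 1]
    so the values are suitable as neural-network input."""
    hu = np.clip(hu, lo, hi)
    return (hu - lo) / (hi - lo)
```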
In addition, this embodiment provides another specific example of muscle CT image preprocessing, which first processes the acquired muscle image and determines a reference axis. Taking a CT image as an example, starting from the normalized DICOM image, the spine is detected and segmented, the vertebral bodies are segmented into independent units, and a scaled reference axis can be formed along the cranio-caudal longitudinal axis, incorporating a threshold range, morphological characteristics, and the like. This reference axis is used to position the correct vertebral body level in preparation for muscle segmentation.
Furthermore, the uncompressed DICOM data is preprocessed by the medical image module processing platform and converted into data that can be directly input into the positioning neural network of this embodiment. The collected data is processed in advance and the target muscles are manually delineated to build a standard database, which is randomly divided into 5 subsets: 4 subsets serve as the training set and 1 subset serves as the test set. The deep learning system formed by the positioning neural network and the segmentation neural network is trained with a 5-fold cross test to segment the muscles. In this way, muscle segmentation is performed at multiple vertebral body levels, the segmentation precision is analyzed, and the spatial overlap between manual and automatic segmentation is measured.
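The random 5-subset division and the overlap measurement described above can be sketched as follows; the fixed random seed and the Dice coefficient as the spatial overlap measure are assumptions of this sketch.

```python
import random

def five_fold_splits(case_ids, seed=0):
    """Randomly divide the database into 5 subsets; each fold uses 4
    subsets as the training set and 1 as the test set."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)
    subsets = [ids[i::5] for i in range(5)]
    for k in range(5):
        train = [c for i, s in enumerate(subsets) if i != k for c in s]
        yield train, subsets[k]

def dice(auto, manual):
    """Spatial overlap between automatic and manual binary masks
    (Dice coefficient, assumed here as the overlap measure)."""
    inter = sum(a and b for a, b in zip(auto, manual))
    return 2.0 * inter / (sum(auto) + sum(manual))
```

Each case appears in the test set of exactly one fold, so the reported overlap statistics cover the whole database.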
Further, after the muscle image sequence is obtained, it can be input into the pre-trained positioning neural network, and positioning identification is performed on the muscle image sequence in the CT sagittal position to obtain the muscle positioning image. The pre-trained positioning neural network comprises, connected in sequence, an input layer, a first specified number of first convolutional layers, a first maximum pooling layer, a first specified number of second convolutional layers, a second maximum pooling layer, a first specified number of third convolutional layers, a third maximum pooling layer, a second specified number of fourth convolutional layers, a fourth maximum pooling layer, a third specified number of fully connected layers, and an output layer. Fig. 2 shows a specific structure of the positioning neural network; for the network shown in fig. 2, the first specified number is 2, the second specified number is 1, and the third specified number is 2.
Further, the specific parameters of each layer in the above-mentioned positioning neural network are as follows:
the first layer is an input layer, and the input is a slice sequence in a single-channel CT three-dimensional image.
The second layer is convolutional layer Conv1 with convolution kernel 3 x 3, number of input channels 1, number of output channels 6, and shift step s of 1.
The third layer is convolutional layer Conv2, with convolution kernel 3 × 3, number of input channels 6, number of output channels 6, and shift step s of 1, followed by a ReLU activation layer.
The fourth layer is the max pooling layer MaxP3, using a filter of 2 x 2, with a moving step s of 2.
The fifth layer is convolutional layer Conv4, with convolution kernel 3 × 3, number of input channels 6, number of output channels 16, and shift step s of 1.
The sixth layer is convolutional layer Conv5, with convolution kernel 3 × 3, number of input channels 16, number of output channels 16, and shift step s of 1, followed by a ReLU activation layer.
The seventh layer is a maximum pooling layer MaxP6, using a filter of 2 x 2, with a moving step s of 2.
The eighth layer is convolutional layer Conv7 with convolution kernel 3 x 3, number of input channels 16, number of output channels 16, and shift step s of 1.
The ninth layer is convolutional layer Conv8, with convolution kernel 3 × 3, number of input channels 16, number of output channels 32, and shift step s of 1, followed by a ReLU activation layer.
The tenth layer is the max pooling layer MaxP9, with 2 x 2 filters and a moving step s of 2.
The eleventh layer is convolutional layer Conv10, with convolution kernel 3 × 3, number of input channels 32, number of output channels 64, and shift step s of 1, followed by a ReLU activation layer.
The twelfth layer is a maximum pooling layer MaxP11, using a filter of 2 x 2, with a moving step s of 2.
The thirteenth layer is a full connection layer, the number of input channels is 64, and the number of output channels is 240.
The fourteenth layer is a fully connected layer, the number of input channels is 240, and the number of output channels is 84.
And the fifteenth layer is an output layer and adopts a sigmoid activation function to carry out positioning output.
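For orientation, the channel and spatial bookkeeping of the fifteen layers above can be traced in a few lines; 'same' padding for the 3 × 3 convolutions is assumed, since the embodiment does not state the padding.

```python
def trace_shapes(h, w):
    """Trace (channels, height, width) through the positioning network's
    convolution/pooling stack; the result feeds the fully connected layers
    (64 -> 240 -> 84 -> sigmoid output)."""
    layers = [
        ('conv', 6), ('conv', 6), ('pool', 0),     # Conv1, Conv2, MaxP3
        ('conv', 16), ('conv', 16), ('pool', 0),   # Conv4, Conv5, MaxP6
        ('conv', 16), ('conv', 32), ('pool', 0),   # Conv7, Conv8, MaxP9
        ('conv', 64), ('pool', 0),                 # Conv10, MaxP11
    ]
    c = 1                                          # single-channel CT input
    for kind, out_c in layers:
        if kind == 'conv':
            c = out_c                              # 3x3, stride 1, assumed same padding
        else:
            h, w = h // 2, w // 2                  # 2x2 max pooling, stride 2
    return c, h, w
```

The four pooling layers reduce each spatial dimension by a factor of 16 while the channel count grows from 1 to 64.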
Furthermore, the muscle positioning image output by the pre-trained positioning neural network is segmented. In specific implementation, the segmentation result of the CT image of the muscle or muscle group at the designated position can be obtained by inputting the sagittal positioning image of the target muscle into the pre-trained segmentation neural network. The segmentation result mainly comprises the muscle cross-sectional area, the muscle area index, the average CT value, and the degree of muscle fat infiltration. Preferably, to capture the variation of each parameter accurately, time-dynamic curves of the four parameter values can also be included.
For ease of understanding, this embodiment provides a segmentation neural network based on a modified U-net neural network, which may include an input layer, a forward segmentation sub-network, a convolution residual module, a reverse segmentation sub-network, a convolutional layer, and a softmax layer connected in sequence. The forward segmentation sub-network comprises a fourth preset number of convolution residual module-pooling layer pairs, each being a convolution residual module followed by a pooling layer; the reverse segmentation sub-network includes a fifth preset number of convolution residual module-unpooling layer pairs, each being an unpooling layer followed by a convolution residual module. Fig. 3 shows the structure of a specific segmentation neural network; in this example, the fourth preset number is 4 and the fifth preset number is 4.
Specifically, the layers of the segmentation neural network are as follows:
the first layer is an input layer, and a single-channel CT two-dimensional slice image, namely a muscle positioning image output by a positioning neural network, is input.
The second layer is a convolution residual block ResBlock1, with an input channel number of 1 and an output channel number of 16, followed by a PReLU activation layer (not shown). ResBlock1 includes a convolution path with a convolution kernel size of 5 × 5 and a shift step s of 1, and a residual path that performs element-wise addition with the output of the convolution path.
The third layer is a pooling layer MaxP2, using a filter of 2 x 2, with a moving step s of 2.
The fourth layer is a convolution residual block ResBlock3, with an input channel number of 16 and an output channel number of 32, followed by a PReLU activation layer (not shown). ResBlock3 includes a convolution path with a convolution kernel size of 5 × 5 and a shift step s of 1, and a residual path that performs element-wise addition with the output of the convolution path.
The fifth layer is a pooling layer MaxP4, using a filter of 2 x 2, with a moving step s of 2.
The sixth layer is the convolution residual block ResBlock5, with 32 input channels and 64 output channels, followed by the prilu function activation layer (not shown). The convolution residual block5 comprises a convolution path and a residual path, the convolution kernel size of the convolution path is 5 × 5, the moving step s is 1, a 2-layer convolution neural network is adopted, and the residual path is subjected to element addition on the basis of the convolution path.
The seventh layer is a pooling layer MaxP6, using a filter of 2 x 2, with a moving step s of 2.
The eighth layer is a convolution residual block ResBlock7, with an input channel number of 64 and an output channel number of 128, followed by a prilu function activation layer (not shown). The convolution residual block7 comprises a convolution path and a residual path, the convolution kernel size of the convolution path is 5 × 5, the moving step s is 1, a 3-layer convolution neural network is adopted, and the residual path is subjected to element addition on the basis of the convolution path.
The ninth layer is a pooling layer MaxP8, using a filter of 2 x 2, with a moving step s of 2.
The tenth layer is a convolution residual block ResBlock9, with an input channel number of 128 and an output channel number of 256, followed by a prilu function activation layer (not shown). The convolution residual block9 comprises a convolution path and a residual path, the convolution kernel size of the convolution path is 5 × 5, the moving step s is 1, a 3-layer convolution neural network is adopted, and the residual path is subjected to element addition on the basis of the convolution path.
The eleventh layer is an unpooling layer MaxP10, using a 2 × 2 filter with a stride s of 2.
The twelfth layer is a convolution residual block ResBlock11 with 256 input channels and 256 output channels, followed by a PReLU activation layer (not shown). ResBlock11 comprises a convolution path and a residual path; the convolution path uses a 5 × 5 convolution kernel, a stride s of 1 and three convolutional layers, and the residual path is added element-wise to the output of the convolution path.
The thirteenth layer is an unpooling layer MaxP12, using a 2 × 2 filter with a stride s of 2.
The fourteenth layer is a convolution residual block ResBlock13 with 256 input channels and 128 output channels, followed by a PReLU activation layer (not shown). ResBlock13 comprises a convolution path and a residual path; the convolution path uses a 5 × 5 convolution kernel, a stride s of 1 and two convolutional layers, and the residual path is added element-wise to the output of the convolution path.
The fifteenth layer is an unpooling layer MaxP14, using a 2 × 2 filter with a stride s of 2.
The sixteenth layer is a convolution residual block ResBlock15 with 128 input channels and 64 output channels, followed by a PReLU activation layer (not shown). ResBlock15 comprises a convolution path and a residual path; the convolution path uses a 5 × 5 convolution kernel, a stride s of 1 and one convolutional layer, and the residual path is added element-wise to the output of the convolution path.
The seventeenth layer is an unpooling layer MaxP16, using a 2 × 2 filter with a stride s of 2.
The eighteenth layer is a convolution residual block ResBlock17 with 64 input channels and 32 output channels, followed by a PReLU activation layer (not shown). ResBlock17 comprises a convolution path and a residual path; the convolution path uses a 5 × 5 convolution kernel, a stride s of 1 and one convolutional layer, and the residual path is added element-wise to the output of the convolution path.
The nineteenth layer is a convolutional layer Conv18 with 32 input channels, 7 output channels, a 5 × 5 convolution kernel and a stride s of 1.
The twentieth layer is a softmax layer, which normalizes the channel outputs of the convolutional layer Conv18 to produce the segmentation result described above.
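As a sanity check on the layer arrangement above, the channel and spatial dimensions can be traced through the network in plain Python. This is a hypothetical bookkeeping sketch only (no actual convolutions are computed): it assumes each residual block preserves spatial size (5 × 5 kernels with suitable padding) while changing the channel count, each 2 × 2 pooling layer halves the spatial size, and each 2 × 2 unpooling layer doubles it.

```python
# Hypothetical shape trace of the 20-layer segmentation network described above.
# ("res", c) / ("conv", c): set channel count to c, spatial size unchanged;
# ("pool",): halve spatial size; ("unpool",): double spatial size.
LAYERS = [
    ("res", 16), ("pool",), ("res", 32), ("pool",),    # ResBlock1 .. MaxP4
    ("res", 64), ("pool",), ("res", 128), ("pool",),   # ResBlock5 .. MaxP8
    ("res", 256),                                      # ResBlock9 (bottleneck)
    ("unpool",), ("res", 256), ("unpool",), ("res", 128),
    ("unpool",), ("res", 64), ("unpool",), ("res", 32),
    ("conv", 7),                                       # Conv18; softmax keeps the shape
]

def trace(c, h, w):
    """Return (channels, height, width) after each layer, in order."""
    shapes = []
    for spec in LAYERS:
        kind = spec[0]
        if kind in ("res", "conv"):
            c = spec[1]
        elif kind == "pool":
            h, w = h // 2, w // 2
        elif kind == "unpool":
            h, w = h * 2, w * 2
        shapes.append((c, h, w))
    return shapes

shapes = trace(1, 512, 512)   # a single-channel 512 x 512 CT slice
print(shapes[-1])             # → (7, 512, 512)
```

The four pooling/unpooling pairs cancel, so a 512 × 512 input slice yields a 512 × 512 output with 7 channels, one per segmentation class.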
Using the positioning neural network and the segmentation neural network described above, the region of interest (ROI) of the muscles and muscle groups is delineated according to the differences in CT values among different human tissues. From the segmented muscle ROI, the area, the area index and the mean CT value of the axial muscles and muscle groups are then calculated; next, thresholding on the CT value removes all fat tissue inside the delineated contour, yielding the pure muscle area, from which the percentage of muscle fat infiltration within the segmented region is computed. These muscle parameters allow the muscle quality to be evaluated accurately.
In the quantitative evaluation of muscle, the muscle cross-sectional area and the height-corrected muscle area index (MI) are measured. The muscle cross-sectional area is the area of the designated muscle or muscle group obtained by the software's automatic segmentation; the area value is then normalized by height to give the muscle area index, where MI = area / height² (cm²/m²).
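The area index computation above amounts to one line of arithmetic; a minimal sketch follows, with the example values chosen purely for illustration:

```python
def muscle_area_index(area_cm2, height_m):
    """Muscle area index MI = area / height^2, in cm^2/m^2."""
    return area_cm2 / height_m ** 2

# e.g. a hypothetical segmented muscle area of 45.0 cm^2 for a patient 1.70 m tall
mi = muscle_area_index(45.0, 1.70)
print(round(mi, 2))  # → 15.57
```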
When muscle quality evaluation is carried out, all fat tissue areas within the segmentation result are removed by thresholding, and the muscle fat infiltration degree (MFI) is calculated: MFI (%) = (pure muscle area obtained after removing the fat contained in the muscle) / (automatically segmented muscle cross-sectional area) × 100%.
The muscle images applicable to the present embodiment may include CT images of a specified number of muscle groups at specified positions. In one embodiment, taking abdominal and lumbar CT as an example, 19 measurements can be included, as follows: skeletal muscle area; total, left and right psoas major area; total, left and right posterior spinal muscle group area; total, left and right quadratus lumborum area; total, left and right paraspinal muscle group area; total, left and right rectus abdominis area; total, left and right abdominal sidewall muscle group area.
Since the segmented contiguous region may still contain fat tissue with lower CT values inside, all fat tissue regions inside the segmentation result are removed by thresholding, and the muscle fat infiltration degree (MFI) is calculated; it can be expressed as a percentage. The fat CT value range is -190 to -30 HU, and the muscle CT value range is -29 to 150 HU. After fat removal, the MFI is obtained as the pure muscle area divided by the automatically segmented muscle cross-sectional area.
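The fat-removal step can be sketched with the HU thresholds given above. This is a hypothetical illustration on a flat list of HU values from inside a delineated contour; a real implementation would operate on the 2-D pixel mask and multiply pixel counts by the pixel area:

```python
FAT_RANGE = (-190, -30)     # fat CT value range, HU
MUSCLE_RANGE = (-29, 150)   # muscle CT value range, HU

def mfi_percent(hu_values):
    """Pure-muscle area / total segmented area, in percent (as defined in the text).

    hu_values: HU value of every pixel inside the automatically segmented contour.
    Pixels outside the muscle HU range (e.g. infiltrated fat) are excluded
    from the pure-muscle count.
    """
    total = len(hu_values)
    muscle = sum(1 for v in hu_values if MUSCLE_RANGE[0] <= v <= MUSCLE_RANGE[1])
    return 100.0 * muscle / total

# 8 pixels inside a hypothetical contour: 6 in the muscle range, 2 in the fat range
pixels = [40, 55, -100, 10, 120, -50, 30, 0]
print(mfi_percent(pixels))  # → 75.0
```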
In conclusion, the muscle parameter quantitative analysis method based on deep-learning muscle delineation and feature analysis can effectively establish a relevant database, obtain the muscle cutoff values of normal healthy people and quantify the muscle parameters. Further, an imaging and clinical system can be established to realize personal sarcopenia risk assessment. For serious diseases such as tumors and diabetes, a system built on this method can be used for monitoring and evaluation; the prognosis of a patient can be estimated more accurately, limited manual intervention performed, and the survival rate and quality of life of patients improved. Compared with manual or semi-manual operation, the method achieves consistent delineation standards, is fast, free of fatigue and convenient for data collection; it can rapidly produce a large amount of research data, acquire parameters of the skeletal muscle system of a population, and establish standard parameters for that population.
In view of the foregoing muscle CT image delineation method, an embodiment of the present invention further provides a muscle CT image delineation system. As shown in Fig. 4, the system mainly includes the following components:
a preprocessing module 402, configured to preprocess the acquired muscle CT image to obtain a preprocessed muscle image sequence;
a positioning identification module 404, configured to perform positioning identification on the muscle image sequence at a CT sagittal position to obtain a sagittal position positioning image of the target muscle; wherein the target muscle comprises a muscle or group of muscles at the specified location;
a segmentation processing module 406, configured to perform segmentation processing on the sagittal location image of the target muscle at the axial level to obtain a segmentation result of the CT image of the muscle or muscle group at the specified location; the segmentation result includes a variety of muscle parameters.
The muscle CT image delineation system provided by the embodiment of the invention can obtain muscle parameters, including muscle cross-sectional area, muscle area index, mean CT value and muscle fat infiltration degree, by performing positioning identification and segmentation processing on the preprocessed muscle images, and can quickly and accurately measure muscle segmentation parameters of three-dimensional medical images, thereby improving the processing efficiency of muscle medical images and reducing labor cost.
In some embodiments, the preprocessing module 402 is configured to resample the muscle CT image to obtain a resampled muscle CT image, where the muscle CT image comprises a plurality of images at vertebral body levels; extract the region of interest of the resampled muscle CT image to obtain a muscle region image, where the muscle region image comprises the muscle region to be segmented; and normalize the muscle region image to obtain the muscle image sequence.
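The normalization step of the preprocessing module can be sketched as a min-max rescaling of the HU values. The clipping window below is an assumption chosen to cover the fat and muscle HU ranges mentioned in this document, not the values used by the embodiment:

```python
def normalize_slice(hu_pixels, lo=-190.0, hi=150.0):
    """Clip HU values to [lo, hi] and rescale to [0, 1].

    lo/hi are hypothetical window bounds spanning the fat (-190 to -30 HU)
    and muscle (-29 to 150 HU) ranges cited in this document.
    """
    clipped = [min(max(v, lo), hi) for v in hu_pixels]
    return [(v - lo) / (hi - lo) for v in clipped]

out = normalize_slice([-300.0, -190.0, -20.0, 150.0, 400.0])
print(out[0], out[-1])  # → 0.0 1.0
```

Values below the window clip to 0 and values above it clip to 1, so the network always sees inputs on a fixed [0, 1] scale regardless of scanner calibration drift.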
In some embodiments, the positioning identification module 404 is further configured to perform positioning identification on the muscle image sequence in the CT sagittal position based on a pre-trained positioning neural network to obtain a muscle positioning image. The pre-trained positioning neural network comprises, connected in sequence, an input layer, a first specified number of first convolutional layers, a first max pooling layer, a first specified number of second convolutional layers, a second max pooling layer, a first specified number of third convolutional layers, a third max pooling layer, a second specified number of fourth convolutional layers, a fourth max pooling layer, a third specified number of fully-connected layers, and an output layer.
In some embodiments, the first specified number is 2; the second specified number is 1; the third specified number is 2.
In some embodiments, the segmentation processing module 406 is further configured to input the sagittal location image of the target muscle into a pre-trained segmentation neural network to obtain a segmentation result of the CT image for the muscle or muscle group at the specified location. The pre-trained segmentation neural network comprises an input layer, a forward segmentation sub-network, a backward segmentation sub-network, a convolutional layer and a softmax layer connected in sequence; the forward segmentation sub-network comprises a fourth preset number of convolution-residual-module/pooling-layer pairs, and the backward segmentation sub-network comprises a fifth preset number of unpooling-layer/convolution-residual-module pairs.
In some embodiments, the muscle parameters include muscle cross-sectional area, muscle area index, mean CT value, degree of muscle fat infiltration.
In some embodiments, the degree of muscle fat infiltration is characterized by a fat CT value range and a muscle CT value range; the fat CT value range is -190 to -30 HU, and the muscle CT value range is -29 to 150 HU.
The system provided by the embodiment of the present invention has the same implementation principle and technical effect as the foregoing method embodiment, and for the sake of brief description, no mention is made in the system embodiment, and reference may be made to the corresponding contents in the foregoing method embodiment.
The embodiment of the invention provides electronic equipment, which particularly comprises a processor and a storage device; the storage means has stored thereon a computer program which, when executed by the processor, performs the method of any of the above described embodiments.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, where the electronic device 100 includes: the device comprises a processor 50, a memory 51, a bus 52 and a communication interface 53, wherein the processor 50, the communication interface 53 and the memory 51 are connected through the bus 52; the processor 50 is arranged to execute executable modules, such as computer programs, stored in the memory 51.
The memory 51 may include a high-speed random access memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between the network element of this system and at least one other network element is realized through at least one communication interface 53 (which may be wired or wireless), and the Internet, a wide area network, a local area network, a metropolitan area network and the like may be used.
The bus 52 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 5, but this does not indicate only one bus or one type of bus.
The memory 51 is used for storing a program, the processor 50 executes the program after receiving an execution instruction, and the method executed by the system defined by the flow process disclosed in any embodiment of the invention can be applied to the processor 50, or implemented by the processor 50.
The processor 50 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 50. The processor 50 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may thereby be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM or registers. The storage medium is located in the memory 51, and the processor 50 reads the information in the memory 51 and completes the steps of the above method in combination with its hardware.
The embodiments of the present invention further provide, for the muscle CT image delineation method and system and the electronic device, a computer program product comprising a computer-readable storage medium storing non-volatile program code executable by a processor; when the computer program stored on the computer-readable storage medium is executed by the processor, the method described in the foregoing method embodiments is performed.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing embodiments, and is not described herein again.
The computer program product of the readable storage medium provided in the embodiment of the present invention includes a computer readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiment, and specific implementation may refer to the method embodiment, which is not described herein again.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.