Artificial intelligence-based automatic segmentation method for aorta structure image

Document No. 9351 · Published: 2021-09-17

1. A method for automatic segmentation of aorta structure images based on artificial intelligence, characterized by comprising the following steps:

step 1: divide the decoding stages of the segmentation network;

step 2: acquire label images and prepare the datasets. Label images are obtained through manual annotation of the original CT images of the heart structure; the original-image dataset and the label-image dataset are then prepared by slicing the original CT images and the label images, respectively;

step 3: supervision and loss calculation at the first decoding stage. The original image is used for supervision at the end of the decoder in the first decoding stage, and the loss between the predicted image and the label image is calculated through a loss function to obtain the main loss L_main; the loss function is one commonly used in the field, preferably the cross-entropy loss function;

step 4: supervision and loss calculation for the second and subsequent decoding stages, realized by the following steps:

step 4.1: enlarge or reduce the label image so that its size matches the size of the feature image at the corresponding decoding stage; the label image is scaled by an image-scaling method commonly used in the field, preferably bilinear interpolation;

step 4.2: segment the label image and convert it into a multi-channel image using one-hot encoding, with each target structure occupying one channel of the label image;

step 4.3: add noise to the label images of the different channels and extract the morphological gradient. First, noise is added to the scaled multi-channel label images; the noise is of a kind commonly used in the field, preferably Gaussian or salt-and-pepper noise. Second, the morphological gradient of the noise-added label images is extracted;

step 4.4: perform a convolution operation on the feature images of each decoding stage and output a multi-channel image whose number of channels matches that of the label image of the corresponding decoding stage;

step 4.5: perform morphological feature extraction on the multi-channel feature images of each decoding stage to obtain their morphological gradients;

step 4.6: calculate the loss between the morphological gradient of the noise-added label image and the morphological gradient of the feature image of the corresponding decoding stage. The loss function, denoted LossFunc, is one commonly used in the prior art, preferably the cross-entropy loss function, and is applied as follows:

L_k = (1/N) · Σ_{i=1}^{N} LossFunc(p_i, g_i)

where k denotes a decoding stage, L_k the loss value of the k-th decoding stage, i a channel, N the total number of channels, p_i the morphological gradient of the multi-channel feature image, and g_i the morphological gradient of the label image;

step 4.7: accumulate the loss values of the second and subsequent decoding stages to obtain the overall loss value of the deep supervision; the accumulation may be a weighted sum with proportions adjusted as needed, specifically:

L_aux = (1/(n−1)) · Σ_{k=2}^{n} λ_k · L_k

where k denotes a decoding stage, n the total number of decoding stages, n−1 the number of decoding stages excluding the first, λ_k the weight of each decoding stage, and L_aux the overall loss value of the deep supervision;

step 4.8: accumulate the main loss value and the overall loss value of the deep supervision to obtain the total loss value; the accumulation may be a weighted sum with proportions adjusted as needed, specifically:

L = L_main + γ · L_aux

where L denotes the total loss value, L_main the main loss value, L_aux the overall loss value of the deep supervision, and γ the weight;

step 5: take L as the final loss value, and train and optimize the network parameters with a selected optimizer according to the back-propagation algorithm; the optimizer is preferably SGD or Adam.

2. The method as claimed in claim 1, wherein in step 2 the original CT image and the label image both contain three-dimensional image information; after multi-planar reconstruction, the sagittal, coronal, and transverse planes are obtained; the original CT image and the label image are sliced along these three planes to obtain two-dimensional original images for each plane and two-dimensional label images matching them, from which the original-image dataset and the label-image dataset are prepared, respectively.

3. The method for automatic segmentation of aortic structure images based on artificial intelligence as claimed in claim 1, wherein in step 4.2 the data used in TAVR/TAVI procedures are CT-based images of the aortic root structure, and the physiological structures and pathological tissues involved in the procedure mainly comprise the aorta, the left ventricle, and calcified tissue. These three target structures have different characteristics: the overall shape of the aorta is clear and regular, but the imaging difference between its edge and the surrounding structures is small, so the body is easy to segment while the edge is difficult; the interior of the left ventricle is complex in shape, as is the structure at its junction with the aortic valve; and the calcifications are randomly distributed and varied in shape. The aim of this stage is to accurately segment these three target structures in the label image.

4. The method for automatic segmentation of aortic structure images based on artificial intelligence as claimed in claim 1, wherein in step 4.3, because different decoding stages contain different levels of semantic information, supervising uniformly with the original label image ignores those differences and limits the performance gain from deep supervision. Therefore, to better model the specificity of the features, a different degree of Gaussian noise is added at each decoder stage: the deeper the stage, the more abstract its features and the more noise is required, so the noise level increases stage by stage from the second decoding stage onward. The specific noise level is determined as follows:

a. add noise of different degrees at the second decoding stage, and determine the noise level of this stage through comparison experiments;

b. on that basis, add noise of a degree greater than that of the second decoding stage to the third decoding stage, and determine its noise level through comparison experiments;

c. continuing from each preceding decoding stage, add noise to the next decoding stage in turn, thereby determining the noise level required at each decoding stage.

5. The method for automatic segmentation of aortic structure images based on artificial intelligence as claimed in claim 1, wherein the morphological gradient in step 4.3 is extracted as follows:

a. dilate the noise-added label image to obtain a dilated image;

b. erode the noise-added label image to obtain an eroded image;

c. apply an exclusive-OR operation to the dilated image and the eroded image to obtain the morphological gradient of the noise-added label image.

6. The method for automatic segmentation of aortic structure images based on artificial intelligence as claimed in claim 1, wherein in step 4.7 the weight λ_k is adjusted experimentally for different datasets, specifically as follows: first, a dataset is fed into a neural network to obtain predicted images; then the loss values between the predicted images and the label images are calculated and optimized through the loss function with preset weights; different weights yield different loss values and optimize the neural network to different effects, and the optimal weight values are selected through comparison experiments.

7. The method for automatic segmentation of aortic structure images based on artificial intelligence as claimed in claim 6, wherein the neural network is a network commonly used for medical image segmentation, such as FCN, UNet, UNet++, or such a network pre-trained in the encoding stage, preferably a UNet pre-trained on ImageNet.

Background

The aortic root is located in the central part of the heart, with the aortic sinus below it. The aortic sinus is wedged cylindrically between the mitral and tricuspid valves; its base is completely embedded in the surrounding tissue, and its posterior half is entirely surrounded by the two atria. The coronary arteries that supply the heart itself typically open into the left and right coronary sinuses within the aortic sinus. The aortic valve lies at the bottom of the aortic root; the junction of the aortic sinus and the left ventricular outflow tract forms the boundary between the aorta and the left ventricle and lies at the morphological center of the heart, which is also its hemodynamic center. The aortic valve acts as a one-way valve between the aorta and the left ventricle, preventing aortic blood from flowing back into the left ventricle during diastole and allowing blood to flow from the left ventricle into the aorta during systole.

The aortic valve plays an important role in maintaining normal blood supply to the heart and the whole body. However, owing to congenital, rheumatic, and degenerative changes, it can develop diseases such as aortic stenosis (AS) and aortic regurgitation (AR), which seriously affect systemic blood supply, harm health, and reduce quality of life; severe aortic valve disease directly threatens the patient's life.

In recent years, interventional treatment of aortic valve disease has been widely adopted, bringing hope to patients at high surgical risk or with contraindications to surgery. Transcatheter aortic valve replacement (TAVR), also known as transcatheter aortic valve implantation (TAVI), places an assembled prosthetic aortic valve at the diseased aortic valve via a catheter, functionally completing the valve replacement. Because TAVR is an interventional operation performed without direct vision, the physician must carry out a detailed, in-depth, individualized measurement and evaluation of the patient's aortic root and adjacent physiological structures based on preoperative imaging, and formulate the surgical strategy and select the instruments based on those results.

Imaging evaluation is the core of TAVR/TAVI preoperative assessment; it covers the anatomy of the native aortic valve, the virtual aortic annulus, the aortic root, the coronary arteries, and the vascular access, and determines whether TAVR is suitable and which valve model to implant. Multi-slice computed tomography (MSCT) is currently one of the most important means of TAVR imaging evaluation and the main basis for deciding whether a patient is suitable for TAVR and for selecting the prosthetic-valve model. Through multi-planar reconstruction, the valve morphology can be observed in multiple sections, and the leaflet thickness, degree of calcification, and volume occupied at the aortic root can be assessed; the circumference and area of the virtual annulus are measured in the annular plane, from which the annular diameters (circumference-derived diameter, area-derived diameter, and major and minor diameters) are calculated. On this basis, parameters of regions such as the left ventricular outflow tract (LVOT), the sinus of Valsalva, the sinotubular junction (STJ), and the ascending aorta are measured, providing a basis for valve model and type selection and allowing the risk of paravalvular leak to be analyzed and predicted. MSCT can also be used to assess coronary ostia height, predict the risk of coronary occlusion, evaluate coronary lesions, and assess the surgical access.

Currently, in the field of CT image post-processing, there are software tools for anatomical-structure measurement, such as FluoroCT, 3mensio, and cvi42. Relying on experience and an understanding of aortic-root anatomy, the physician must place points, delineate, and measure the relevant structures of the aortic root in a purely manual or semi-automatic manner; the selection and extraction of feature regions in the image are performed entirely by hand.

The invention aims to provide a fully automatic, artificial-intelligence-based segmentation method for aorta structure images that improves the accuracy of image segmentation, particularly of structure-contour segmentation, thereby improving the accuracy of the three-dimensional model built from the segmentation results and, in turn, the accuracy of localization and measurement of the related structures, ultimately improving the efficiency and accuracy of TAVR/TAVI preoperative evaluation. It overcomes the shortcomings of manual measurement: inaccuracy, subjectivity, human error, and poor reproducibility.

Disclosure of Invention

Given the characteristics of the core image data of TAVR/TAVI procedures, the segmentation difficulty is concentrated mainly at structure edges; that is, inaccurate segmentation of each structure's contour degrades the performance of automatic segmentation methods. The invention therefore aims to provide an artificial-intelligence-based automatic segmentation method for aorta structure images that segments the target regions more accurately, completes regions that were incompletely segmented, and effectively improves the segmentation results.

The invention is realized by the following technical scheme:

A method for automatic segmentation of aorta structure images based on artificial intelligence, characterized by comprising the following steps:

step 1: divide the decoding stages of the segmentation network;

step 2: acquire label images and prepare the datasets. Label images are obtained through manual annotation of the original CT images of the heart structure; the original-image dataset and the label-image dataset are then prepared by slicing the original CT images and the label images, respectively;

step 3: supervision and loss calculation at the first decoding stage. The original image is used for supervision at the end of the decoder in the first decoding stage, and the loss between the predicted image and the label image is calculated through a loss function to obtain the main loss L_main; the loss function is one commonly used in the field, preferably the cross-entropy loss function;

step 4: supervision and loss calculation for the second and subsequent decoding stages, realized by the following steps:

step 4.1: enlarge or reduce the label image so that its size matches the size of the feature image at the corresponding decoding stage; the label image is scaled by an image-scaling method commonly used in the field, preferably bilinear interpolation;
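As an illustrative sketch of step 4.1 (not part of the claimed method), bilinear resizing of a 2-D label map can be written in plain NumPy; a real pipeline would typically call a library routine such as `torch.nn.functional.interpolate` or `cv2.resize` instead:

```python
import numpy as np

def resize_bilinear(label, out_h, out_w):
    """Resize a 2-D label map (H, W) to (out_h, out_w) with bilinear interpolation."""
    in_h, in_w = label.shape
    # Map each output pixel back to fractional coordinates in the input grid.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]          # vertical interpolation weights
    wx = (xs - x0)[None, :]          # horizontal interpolation weights
    a = label[np.ix_(y0, x0)]        # the four neighbouring corners
    b = label[np.ix_(y0, x1)]
    c = label[np.ix_(y1, x0)]
    d = label[np.ix_(y1, x1)]
    return (a * (1 - wy) * (1 - wx) + b * (1 - wy) * wx
            + c * wy * (1 - wx) + d * wy * wx)

lbl = np.array([[0.0, 0.0],
                [1.0, 1.0]])
resized = resize_bilinear(lbl, 3, 3)   # middle row interpolates to 0.5
```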

step 4.2: segment the label image and convert it into a multi-channel image using one-hot encoding, with each target structure occupying one channel of the label image;
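A minimal sketch of the one-hot conversion in step 4.2; the class indices below are hypothetical, chosen only to match the three target structures named in the description:

```python
import numpy as np

def to_one_hot(label, num_classes):
    # (H, W) integer label map -> (num_classes, H, W) binary image;
    # channel i is 1 wherever the pixel belongs to class i.
    return (np.arange(num_classes)[:, None, None] == label[None, :, :]).astype(np.float32)

# Hypothetical class indices: 0 = background, 1 = aorta,
# 2 = left ventricle, 3 = calcified tissue.
lbl = np.array([[0, 1],
                [2, 3]])
one_hot = to_one_hot(lbl, 4)
```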

step 4.3: add noise to the label images of the different channels and extract the morphological gradient. First, noise is added to the scaled multi-channel label images; the noise is of a kind commonly used in the field, preferably Gaussian or salt-and-pepper noise. Second, the morphological gradient of the noise-added label images is extracted;
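The stage-dependent Gaussian noise of step 4.3 can be sketched as follows; `base_sigma` and the linear schedule are illustrative assumptions, since the patent determines the per-stage noise level by comparison experiments:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def add_stage_noise(channels, stage, base_sigma=0.05):
    """Add Gaussian noise whose strength grows with the decoding stage.
    stage 1 gets no noise; each deeper stage gets one more step of base_sigma."""
    sigma = base_sigma * (stage - 1)
    return channels + rng.normal(0.0, sigma, channels.shape)

clean = np.zeros((3, 4, 4))            # a 3-channel one-hot label image
noisy = add_stage_noise(clean, stage=3)
```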

step 4.4: perform a convolution operation on the feature images of each decoding stage and output a multi-channel image whose number of channels matches that of the label image of the corresponding decoding stage;

step 4.5: perform morphological feature extraction on the multi-channel feature images of each decoding stage to obtain their morphological gradients;

step 4.6: calculate the loss between the morphological gradient of the noise-added label image and the morphological gradient of the feature image of the corresponding decoding stage. The loss function, denoted LossFunc, is one commonly used in the prior art, preferably the cross-entropy loss function, and is applied as follows:

L_k = (1/N) · Σ_{i=1}^{N} LossFunc(p_i, g_i)

where k denotes a decoding stage, L_k the loss value of the k-th decoding stage, i a channel, N the total number of channels, p_i the morphological gradient of the multi-channel feature image, and g_i the morphological gradient of the label image;
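Taking LossFunc to be binary cross-entropy (the preferred choice above), the per-stage loss can be sketched as follows; this is a minimal NumPy version that assumes the predictions `p` hold probabilities:

```python
import numpy as np

def stage_loss(p, g, eps=1e-7):
    """L_k: channel-averaged binary cross-entropy between the predicted
    morphological gradients p and the label gradients g, both shaped (N, H, W)."""
    p = np.clip(p, eps, 1.0 - eps)     # guard against log(0)
    return float(np.mean(-(g * np.log(p) + (1.0 - g) * np.log(1.0 - p))))

p = np.full((2, 2, 2), 0.5)            # an uninformative prediction
g = np.ones((2, 2, 2))
loss = stage_loss(p, g)                # -log(0.5) for every element
```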

step 4.7: accumulate the loss values of the second and subsequent decoding stages to obtain the overall loss value of the deep supervision; the accumulation may be a weighted sum with proportions adjusted as needed, specifically:

L_aux = (1/(n−1)) · Σ_{k=2}^{n} λ_k · L_k

where k denotes a decoding stage, n the total number of decoding stages, n−1 the number of decoding stages excluding the first, λ_k the weight of each decoding stage, and L_aux the overall loss value of the deep supervision;

step 4.8: accumulate the main loss value and the overall loss value of the deep supervision to obtain the total loss value; the accumulation may be a weighted sum with proportions adjusted as needed, specifically:

L = L_main + γ · L_aux

where L denotes the total loss value, L_main the main loss value, L_aux the overall loss value of the deep supervision, and γ the weight;
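Steps 4.7 and 4.8 can be sketched together as plain arithmetic; every number below (per-stage losses, weights λ_k, and γ) is a placeholder, since the real values come from training and comparison experiments:

```python
# n = 5 decoding stages; per-stage losses L_k for k = 2..5 (placeholders).
stage_losses = {2: 0.40, 3: 0.25, 4: 0.15, 5: 0.10}
lam = {2: 1.0, 3: 0.8, 4: 0.6, 5: 0.4}      # lambda_k, tuned by experiment
n = 5

# Step 4.7: weighted accumulation over the second and subsequent stages.
L_aux = sum(lam[k] * stage_losses[k] for k in stage_losses) / (n - 1)

# Step 4.8: combine with the main loss.
gamma = 0.5                                  # illustrative weight
L_main = 0.9
L = L_main + gamma * L_aux                   # overall training loss
```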

step 5: take L as the final loss value, and train and optimize the network parameters with a selected optimizer according to the back-propagation algorithm; the optimizer is preferably SGD or Adam.

According to the method for automatic segmentation of aorta structure images based on artificial intelligence, in step 2 the original CT image and the label image both contain three-dimensional image information; after multi-planar reconstruction, the sagittal, coronal, and transverse planes are obtained; the original CT image and the label image are sliced along these three planes to obtain two-dimensional original images for each plane and two-dimensional label images matching them, from which the original-image dataset and the label-image dataset are prepared, respectively.
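A minimal sketch of slicing a reconstructed volume along the three planes; the toy array and the axis convention (transverse, coronal, sagittal indices in that order) are assumptions for illustration, since real data would come from DICOM series:

```python
import numpy as np

# Toy 3-D volume standing in for a reconstructed CT scan.
vol = np.arange(2 * 3 * 4).reshape(2, 3, 4)

# One stack of 2-D slices per plane.
transverse = [vol[z, :, :] for z in range(vol.shape[0])]
coronal    = [vol[:, y, :] for y in range(vol.shape[1])]
sagittal   = [vol[:, :, x] for x in range(vol.shape[2])]
```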

According to the method for automatic segmentation of aortic structure images based on artificial intelligence, in step 4.2 the data used in TAVR/TAVI procedures are CT-based images of the aortic root structure, and the physiological structures and pathological tissues involved in the procedure mainly comprise the aorta, the left ventricle, and calcified tissue. These three target structures have different characteristics: the overall shape of the aorta is clear and regular, but the imaging difference between its edge and the surrounding structures is small, so the body is easy to segment while the edge is difficult; the interior of the left ventricle is complex in shape, as is the structure at its junction with the aortic valve; and the calcifications are randomly distributed and varied in shape. The aim of this stage is to accurately segment these three target structures in the label image.

According to the method for automatic segmentation of aorta structure images based on artificial intelligence, in step 4.3, because different decoding stages contain different levels of semantic information, supervising uniformly with the original label image ignores those differences and limits the performance gain from deep supervision. Therefore, to better model the specificity of the features, a different degree of Gaussian noise is added at each decoder stage: the deeper the stage, the more abstract its features and the more noise is required, so the noise level increases stage by stage from the second decoding stage onward. The specific noise level is determined as follows:

a. add noise of different degrees at the second decoding stage, and determine the noise level of this stage through comparison experiments;

b. on that basis, add noise of a degree greater than that of the second decoding stage to the third decoding stage, and determine its noise level through comparison experiments;

c. continuing from each preceding decoding stage, add noise to the next decoding stage in turn, thereby determining the noise level required at each decoding stage.

According to the method for automatic segmentation of aorta structure images based on artificial intelligence, the morphological gradient in step 4.3 is extracted as follows:

a. dilate the noise-added label image to obtain a dilated image;

b. erode the noise-added label image to obtain an eroded image;

c. apply an exclusive-OR operation to the dilated image and the eroded image to obtain the morphological gradient of the noise-added label image.
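The three steps above can be sketched for a binary label channel as follows; this hand-rolled 3x3 morphology is a minimal NumPy stand-in for library routines such as `scipy.ndimage.binary_dilation`/`binary_erosion`:

```python
import numpy as np

def _morph(mask, op, pad_val):
    # 3x3 sliding-window morphology: logical_or -> dilation, logical_and -> erosion.
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=pad_val)
    shifts = [p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return op.reduce(shifts)

def morph_gradient(mask):
    img_dila = _morph(mask, np.logical_or, False)    # step a: dilation
    img_ero = _morph(mask, np.logical_and, True)     # step b: erosion
    return np.logical_xor(img_dila, img_ero)         # step c: XOR -> contour band

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True                                # a 3x3 square object
grad = morph_gradient(mask)                          # True only on the contour band
```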

According to the method for automatic segmentation of aorta structure images based on artificial intelligence, in step 4.7 the weight λ_k is adjusted experimentally for different datasets, specifically as follows: first, a dataset is fed into a neural network to obtain predicted images; then the loss values between the predicted images and the label images are calculated and optimized through the loss function with preset weights; different weights yield different loss values and optimize the neural network to different effects, and the optimal weight values are selected through comparison experiments.

According to the method for automatic segmentation of aorta structure images based on artificial intelligence, the neural network is a network commonly used for medical image segmentation, such as FCN, UNet, UNet++, or such a network pre-trained in the encoding stage, preferably a UNet pre-trained on ImageNet.

The beneficial effects of the invention are as follows: compared with existing image-processing methods, this artificial-intelligence-based method removes obvious mis-segmented regions, improves the segmentation results, segments the target regions more accurately, and completes regions that were incompletely segmented. It provides more accurate image data for subsequent construction of the three-dimensional model and effectively improves the efficiency and precision of TAVR/TAVI preoperative evaluation.

Drawings

FIG. 1 is a flow chart of an artificial intelligence based method for automatic segmentation of an aorta image according to the present invention;

FIG. 2 is a schematic diagram of an artificial intelligence based aorta image automatic segmentation method of the present invention;

FIG. 3a is a CT slice image of a first embodiment of the present invention;

FIG. 3b is a label image of the first embodiment of the present invention;

FIG. 3c is a label image after adding noise according to the first embodiment of the present invention;

FIG. 3d is a morphological gradient map of the label image after noise addition according to the first embodiment of the present invention;

FIG. 3e shows the predicted segmentation result by the existing basic method in the first embodiment of the present invention;

FIG. 3f shows the segmentation result predicted by the method of the present invention in the first embodiment of the present invention;

FIG. 4a is a CT slice image of a second embodiment of the present invention;

FIG. 4b is a label image of a second embodiment of the present invention;

FIG. 4c is a label image after adding noise according to a second embodiment of the present invention;

FIG. 4d(1) is a morphological gradient map of the left ventricular outflow tract structure of the noise-added label image according to the second embodiment of the present invention;

FIG. 4d(2) is a morphological gradient map of the aorta structure of the noise-added label image according to the second embodiment of the present invention;

FIG. 4e shows the predicted segmentation result by the existing basic method in the second embodiment of the present invention;

FIG. 4f shows the segmentation result predicted by the method of the present invention in the second embodiment of the present invention.

Detailed Description

The invention will be further illustrated with reference to the following specific examples. It should be understood that the examples are only for illustrating the present invention and are not intended to limit the scope of the present invention. In addition, it should be understood that various changes or modifications can be made by those skilled in the art after reading the disclosure of the present invention, and such equivalents also fall within the scope of the invention.

As shown in FIG. 1, the method for automatically segmenting the aorta structure image based on artificial intelligence comprises the following 5 steps:

Step 1: the decoding stages of the segmentation network are divided into four or five stages; this embodiment uses five decoding stages, as shown in FIG. 2.

Step 2: acquire label images and prepare the datasets. Label images are obtained through manual annotation of the original CT images of the heart structure. Both the original CT image and the label image contain three-dimensional image information; after multi-planar reconstruction, the sagittal, coronal, and transverse planes are obtained. The original CT image and the label image are sliced along these three planes to obtain two-dimensional original images for each plane (FIGS. 3a and 4a) and matching two-dimensional label images (FIGS. 3b and 4b), from which the original-image dataset and the label-image dataset are prepared, respectively.

Step 3: supervision and loss calculation at the first decoding stage. The original image is used to supervise the end of the first-stage decoder, and the loss between the predicted image and the label image is calculated through a loss function to obtain the main loss L_main; the loss function is one commonly used in the field, preferably the cross-entropy loss function.

Step 4: supervision and loss calculation for the second and subsequent decoding stages, realized by the following steps:

step 4.1: the label image is enlarged or reduced to have the same size as the feature image size at the corresponding decoding stage, and the label image scaling method is an image scaling method generally used in the art, and preferably a bilinear difference method.

Step 4.2: the label image is segmented, and because the data used in the TAVR/TAVI operation is an image of the aortic root structure based on the CT image, the physiological structures and pathological tissues involved in the operation mainly include the aorta, the left ventricle and calcified tissues. The three target structures have different characteristics, wherein the whole form of the aorta is clear and regular, but the imaging difference between the edge part and the peripheral structure is small, so that the main body is easy to segment but the edge is difficult to segment; the shape in the left ventricle is complex and the structure of the junction of the left ventricle and the aortic valve is complex; calcifications are distributed randomly and in different shapes. The goal of this stage is to accurately segment the three target structures in the label image, and convert the label image into a multi-channel image using one-hot encoding (one-hot), where each target structure occupies one channel of the label image.

Step 4.3: first, noise is added to the scaled multi-channel label images; the noise is of a kind commonly used in the field, and Gaussian or salt-and-pepper noise is used in this embodiment. Because different decoding stages contain different levels of semantic information, supervising uniformly with the original label image ignores those differences and limits the performance gain from deep supervision. Therefore, to better model the specificity of the features, a different degree of Gaussian noise is added at each decoder stage: the deeper the stage, the more abstract its features and the more noise is required, so the noise level increases stage by stage from the second decoding stage onward. The specific noise level is determined as follows:

a. first, add noise of different degrees at the second decoding stage and determine its noise level through comparison experiments, based on the differing accuracies of the resulting images;

b. on that basis, add noise of a degree greater than that of the second decoding stage to the third decoding stage, and determine its noise level through comparison experiments;

c. continuing from each preceding decoding stage, add noise to the next decoding stage in turn, thereby determining the noise level required at each decoding stage.

FIGS. 3c and 4c show examples of the noise-added label images in the two embodiments, respectively.

Second, the morphological gradient of the noise-added label image img is extracted, specifically as follows:

a. dilate the noise-added label image img to obtain the dilated image img_dila;

b. erode the noise-added label image img to obtain the eroded image img_ero;

c. apply an exclusive-OR operation to img_dila and img_ero to obtain the morphological gradient img_gradient of the noise-added label image.

Fig. 3d and Figs. 4d(1) and 4d(2) are examples of the extracted morphological gradients in the two embodiments, respectively.
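Operations a-c can be sketched in plain numpy as below; the 3x3 structuring element and the 0.5 binarisation threshold are assumptions, since the text fixes neither:

```python
import numpy as np

def _dilate3(m):
    """3x3 binary dilation: each pixel becomes the max over its neighborhood."""
    p = np.pad(m, 1)
    h, w = m.shape
    return np.max([p[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)

def _erode3(m):
    """3x3 binary erosion: each pixel becomes the min over its neighborhood."""
    p = np.pad(m, 1)
    h, w = m.shape
    return np.min([p[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)

def morph_gradient(img, thresh=0.5):
    """Per-channel morphological gradient of a (C, H, W) noised label image:
    XOR of the dilated (img_dila) and eroded (img_ero) masks, which for
    binary masks marks the structure boundary (operations a-c above)."""
    binary = (img > thresh).astype(np.uint8)
    return np.stack([_dilate3(c) ^ _erode3(c) for c in binary])
```

Applied to a one-hot channel, the result keeps only a thin band around each target structure, which is what the deep supervision compares.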

Step 4.4: a convolution operation is performed on the feature image of each decoding stage to output a multi-channel image whose number of channels matches that of the label image at the corresponding decoding stage.

Step 4.5: morphological gradient extraction is performed on the multi-channel feature image of each decoding stage to obtain the morphological gradient of that feature image.
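The channel-matching convolution of Step 4.4 amounts to a 1x1 convolution head on each decoder feature map. A minimal numpy sketch (in a framework the weights would be learned parameters):

```python
import numpy as np

def conv1x1(features, weight, bias):
    """Map a (C_in, H, W) feature image to (C_out, H, W) so the channel
    count matches the stage's multi-channel label image. A 1x1
    convolution is a per-pixel linear map over channels."""
    out = np.tensordot(weight, features, axes=([1], [0]))  # (C_out, H, W)
    return out + bias[:, None, None]
```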

Step 4.6: loss calculation is performed between the morphological gradient of the noised label image and the morphological gradient of the feature image at the corresponding decoding stage. The loss function LossFunc is a loss function generally used in the prior art, preferably the cross-entropy loss function, and the loss of each decoding stage is:

L_k = (1/N) Σ_{i=1}^{N} LossFunc(p_i, g_i)

where k denotes a decoding stage, L_k denotes the loss value of the k-th decoding stage, i denotes a channel, N denotes the total number of channels, p_i denotes the morphological gradient of the multi-channel feature image, and g_i denotes the morphological gradient of the label image.
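With LossFunc taken as per-channel binary cross entropy (the preferred choice above), the stage loss L_k can be sketched as:

```python
import numpy as np

def stage_loss(p_grad, g_grad, eps=1e-7):
    """L_k = (1/N) * sum_i LossFunc(p_i, g_i): binary cross entropy
    between the predicted gradient p_i and the label gradient g_i,
    averaged over the N channels. p_grad is assumed to hold
    probabilities in (0, 1), e.g. after a sigmoid."""
    p = np.clip(p_grad.astype(np.float64), eps, 1.0 - eps)
    g = g_grad.astype(np.float64)
    bce = -(g * np.log(p) + (1.0 - g) * np.log(1.0 - p))  # per-pixel BCE
    n = p.shape[0]                                        # channel count N
    return float(bce.reshape(n, -1).mean(axis=1).sum() / n)
```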

Step 4.7: the loss values of the second and later decoding stages are accumulated to obtain the overall deep-supervision loss value; the accumulation may be a weighted sum adjusted in different proportions, specifically:

L_aux = Σ_{k=2}^{n} λ_k L_k

where k is a decoding stage, n is the total number of decoding stages, n−1 is the number of decoding stages excluding the first decoding stage, λ_k are the weights of the different decoding stages, and L_aux is the overall deep-supervision loss value. The weights λ_k are adjusted experimentally according to different data sets, as follows: first, the data set is fed into a neural network to obtain predicted images, where the neural network is one commonly used for medical image segmentation, such as FCN, UNet or UNet++ whose encoding stage has been pre-trained, preferably a UNet pre-trained on ImageNet; then the loss values of the predicted images and the label images are calculated and optimized by the loss function under preset weights. Different weights lead to different loss values and optimize the neural network to different effects; the optimal weight values are selected by comparison tests.
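The weighted accumulation over stages 2..n is then simply:

```python
def deep_supervision_loss(stage_losses, lambdas):
    """L_aux = sum_{k=2}^{n} lambda_k * L_k. stage_losses holds the
    per-stage values L_2..L_n of the second and later decoding stages,
    and lambdas the experimentally chosen weights lambda_2..lambda_n."""
    assert len(stage_losses) == len(lambdas)
    return sum(lam * lk for lam, lk in zip(lambdas, stage_losses))
```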

Step 4.8: the main loss value and the overall deep-supervision loss value are accumulated to obtain the total loss value; the accumulation may be a weighted sum adjusted in different proportions, specifically:

L = L_main + γ·L_aux

where L is the total loss value, L_main is the main loss value, L_aux is the overall deep-supervision loss value, and γ is the weight.
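Combining the main loss with the deep-supervision term (the value of γ is a tuning choice, not fixed by the text):

```python
def total_loss(l_main, l_aux, gamma=1.0):
    """L = L_main + gamma * L_aux, the overall training objective."""
    return l_main + gamma * l_aux
```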

Step 5: L is taken as the final loss value, and the network parameters are trained and optimized with a selected optimizer according to the back-propagation algorithm; the optimizer is preferably SGD or Adam.

The advantageous effects of the present invention are illustrated by the first and second embodiments. As shown in Fig. 3e, after image segmentation with the baseline method in the first embodiment, a region (2) outside the original structure appears; this region does not belong to the target and is a recognition error of the neural network, while region (1) is a mis-segmented region. Thus, even when the segmentation of the left ventricular outflow tract (1) is similar, the baseline method produces mis-segmentation or recognition errors. Fig. 3f shows the result of image segmentation by the method of the present invention, which produces no erroneous segmentation and effectively improves the segmentation accuracy compared with the prior art.

In the second embodiment, as shown in Fig. 4e, the aorta (3) region is hardly segmented at all after image segmentation with the baseline method. Fig. 4f shows the result of image segmentation by the method of the present invention: the aorta structure is segmented more accurately, and with a similar segmentation of the left ventricular outflow tract (1) region, the segmentation effect is clearly better.

The baseline method used in the above embodiments is a UNet network with a ResNet-34 encoder pre-trained on ImageNet, i.e., a ResNet-34 backbone.
