Medical image processing method and device
1. A medical image processing method, characterized by comprising:
acquiring a medical image, wherein the medical image comprises a plurality of image layers;
in the medical image, determining a target region;
determining a target image layer to which the target region belongs;
and adjusting the target image layer to a first layer thickness, and adjusting the remaining image layers other than the target image layer to a second layer thickness, to form an adjusted medical image, wherein the first layer thickness is smaller than the second layer thickness.
2. The method of claim 1, wherein determining a target region in the medical image comprises:
determining contour information of a region of interest from the medical image;
and determining a target region based on the contour information.
3. The method of claim 1, wherein determining a target region in the medical image comprises:
in response to receiving a region selection request sent by a terminal, wherein the region selection request comprises contour information of a selected region;
and determining the target region according to the contour information of the selected region.
4. The method of claim 2, wherein determining the target region based on the contour information comprises:
determining a region surrounded by the contour information of the region of interest as a target region;
alternatively,
and determining a region with fuzzy boundaries in the region of interest as a target region.
5. The method of claim 1,
further comprising: setting a coordinate system for the medical image, and determining a coordinate range corresponding to each image layer;
the determining the target image layer to which the target region belongs comprises:
determining coordinate information of the target region;
and screening out a plurality of target image layers occupied by the target region according to the coordinate information of the target region and the coordinate range corresponding to each image layer.
6. The method of claim 5, wherein the determining the coordinate information of the target region comprises:
determining coordinate information of the region contour of the target region.
7. The method of claim 5,
further comprising: setting one coordinate axis in the coordinate system to be parallel to the stacking direction of the plurality of image layers;
the determining the coordinate information of the target region comprises: determining coordinate information of at least two coordinate points in the target region that are physically farthest apart along the coordinate axis parallel to the stacking direction of the plurality of image layers.
8. The method according to any one of claims 1 to 7,
further comprising: constructing and storing a mapping relation between the feature type and the layer thickness;
analyzing a target feature type to which the target region belongs;
the adjusting the target image layer to a first layer thickness includes:
and adjusting the target image layer to be the layer thickness corresponding to the target feature type included in the mapping relation.
9. The method according to claim 8, wherein the adjusting the target image layer to a layer thickness corresponding to the target feature type included in the mapping relationship comprises:
for each of the target image layers, performing:
and determining whether the original scan layer thickness of the target image layer is greater than the layer thickness corresponding to the target feature type, and if not, compressing the target image layer to the layer thickness corresponding to the target feature type.
10. The method of claim 9, further comprising:
when it is determined that the original scan layer thickness of the image layer is greater than the layer thickness corresponding to the target feature type,
adjusting the layer thickness of the image layer by interpolation so that it equals the layer thickness corresponding to the target feature type; or keeping the layer thickness of the image layer at the original scan layer thickness.
11. The method of any of claims 1 to 7, further comprising:
acquiring behavior information of a terminal user with respect to the medical image;
determining a corresponding layer thickness for the terminal user according to the behavior information;
the adjusting the target image layer to a first layer thickness includes:
in response to receiving a query request for the medical image sent by a terminal, acquiring information of a target terminal user from the query request;
determining a target layer thickness for the target terminal user based on the information of the target terminal user;
and adjusting the target image layer to the target layer thickness.
12. A medical image processing apparatus, characterized by comprising: an acquisition module and a processing module, wherein,
the acquisition module is configured to acquire a medical image, wherein the medical image includes a plurality of image layers;
the processing module is configured to select a target region in the medical image; determine a target image layer to which the target region belongs; and adjust the target image layer to a first layer thickness and adjust the remaining image layers other than the target image layer to a second layer thickness, to form an adjusted medical image, wherein the first layer thickness is smaller than the second layer thickness.
13. An electronic device, comprising:
one or more processors;
a storage device storing one or more programs which,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-11.
14. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-11.
Background
With advances in medical imaging technology, medical images obtained by scanning equipment have become clearer and the volume of medical image data has grown accordingly; for example, a complete set of CT image data typically occupies hundreds of megabytes of storage space.
Currently, to reduce the excessive consumption of storage resources by medical images, medical images are generally compressed to a large extent. In particular, for a three-dimensional medical image occupying a large amount of storage space, composed of a plurality of medical image slices of a certain layer thickness, it is common to compress the three-dimensional medical image heavily by increasing each medical image slice to as large a layer thickness as possible, and then to analyze the signs or features of a region of interest based on the compressed three-dimensional medical image.
This compression approach causes the loss of important image information from the three-dimensional medical image, so that some signs or features cannot be fully represented.
Disclosure of Invention
In view of this, embodiments of the present invention provide a medical image processing method and apparatus, which can not only improve resource utilization effectively, but also reflect the image signs or features more completely.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a medical image processing method including:
acquiring a medical image, wherein the medical image comprises a plurality of image layers;
in the medical image, determining a target region;
determining a target image layer to which the target region belongs;
and adjusting the target image layer to a first layer thickness, and adjusting the remaining image layers other than the target image layer to a second layer thickness, to form an adjusted medical image, wherein the first layer thickness is smaller than the second layer thickness.
Optionally, in the medical image, determining a target region comprises:
determining contour information of a region of interest from the medical image;
and determining the region surrounded by the contour information of the region of interest as a target region.
Optionally, in the medical image, determining a target region comprises:
in response to receiving a region selection request sent by a terminal, wherein the region selection request comprises contour information of a selected region;
and determining the target region according to the contour information of the selected region.
Optionally, the medical image processing method further includes: setting a coordinate system for the medical image, and determining a coordinate range corresponding to each image layer;
the determining the target image layer to which the target region belongs includes:
determining coordinate information of the target region;
and screening out a plurality of target image layers occupied by the target region according to the coordinate information of the target region and the coordinate range corresponding to each image layer.
Optionally, the determining the coordinate information of the target region includes:
determining coordinate information of the region contour of the target region.
Optionally, the medical image processing method further includes: setting one coordinate axis in the coordinate system to be parallel to the stacking direction of the plurality of image layers;
the determining the coordinate information of the target region includes: determining coordinate information of at least two coordinate points in the target region that are physically farthest apart along the coordinate axis parallel to the stacking direction of the plurality of image layers.
Optionally, the medical image processing method further includes: constructing and storing a mapping relation between the feature type and the layer thickness;
analyzing a target feature type to which the target region belongs;
the adjusting the target image layer to a first layer thickness includes:
and adjusting the target image layer to be the layer thickness corresponding to the target feature type included in the mapping relation.
Optionally, the adjusting the target image layer to a layer thickness corresponding to the target feature type included in the mapping relationship includes:
for each of the target image layers, performing:
and determining whether the original scan layer thickness of the target image layer is greater than the layer thickness corresponding to the target feature type, and if not, compressing the target image layer to the layer thickness corresponding to the target feature type.
Optionally, the medical image processing method further includes:
when it is determined that the original scan layer thickness of the image layer is greater than the layer thickness corresponding to the target feature type,
adjusting the layer thickness of the image layer by interpolation so that it equals the layer thickness corresponding to the target feature type; or keeping the layer thickness of the image layer at the original scan layer thickness.
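The interpolation-based adjustment described above can be sketched in Python with NumPy. This is an illustrative sketch only: the function name `interpolate_layers` and the choice of linear interpolation along the stacking axis are assumptions, not the claimed implementation.

```python
import numpy as np

def interpolate_layers(volume, orig_thickness_mm, new_thickness_mm):
    """Resample a slice stack along the stacking (z) axis by linear
    interpolation, changing the slice spacing from orig_thickness_mm
    to new_thickness_mm. Illustrative sketch only."""
    n = volume.shape[0]
    old_z = np.arange(n) * orig_thickness_mm            # z position of each slice
    new_z = np.arange(0, old_z[-1] + 1e-9, new_thickness_mm)
    out = np.empty((len(new_z), *volume.shape[1:]), dtype=volume.dtype)
    # interpolate each (row, col) pixel column independently along z
    for idx in np.ndindex(volume.shape[1:]):
        out[(slice(None), *idx)] = np.interp(new_z, old_z, volume[(slice(None), *idx)])
    return out

# four slices at 4 mm spacing, thinned to 2 mm spacing -> seven slices
vol = np.arange(16, dtype=float).reshape(4, 2, 2)
thin = interpolate_layers(vol, 4.0, 2.0)
print(thin.shape)  # (7, 2, 2)
```

Linear interpolation is one simple option; real systems may prefer spline or sinc resampling for image quality.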
Optionally, the medical image processing method further includes:
acquiring behavior information of a terminal user with respect to the medical image;
determining a corresponding layer thickness for the terminal user according to the behavior information;
the adjusting the target image layer to a first layer thickness includes:
in response to receiving a query request for the medical image sent by a terminal, acquiring information of a target terminal user from the query request;
determining a target layer thickness for the target terminal user based on the information of the target terminal user;
and adjusting the target image layer to the target layer thickness.
In a second aspect, an embodiment of the present invention provides a medical image processing apparatus, including: an acquisition module and a processing module, wherein,
the acquisition module is configured to acquire a medical image, wherein the medical image includes a plurality of image layers;
the processing module is configured to select a target region in the medical image; determine a target image layer to which the target region belongs; and adjust the target image layer to a first layer thickness and adjust the remaining image layers other than the target image layer to a second layer thickness, to form an adjusted medical image, wherein the first layer thickness is smaller than the second layer thickness.
One embodiment of the above invention has the following advantage or beneficial effect: the target image layer to which the target region belongs is adjusted to a first layer thickness, and the remaining image layers other than the target image layer are adjusted to a second layer thickness, where the first layer thickness is smaller than the second layer thickness. In other words, the target image layer to which the target region belongs and the remaining image layers are given different layer thicknesses, so that the layer thickness of the target region lets a user observe key information, while the layer thickness of the remaining image layers reduces the occupation of storage resources.
Further effects of the above optional implementations will be described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a diagram of a system architecture for an application scenario of an embodiment of the present invention;
fig. 2 is a schematic diagram of a main flow of a medical image processing method according to an embodiment of the invention;
FIG. 3 is a schematic illustration of a medical image in relation to an image layer according to an embodiment of the invention;
FIG. 4 is a schematic illustration of a target area according to one embodiment of the present invention;
FIG. 5 is a schematic diagram of a main flow of determining a target image layer to which a target region belongs according to one embodiment of the present invention;
FIG. 6 is a schematic illustration of a relationship between a three-dimensional coordinate system and a three-dimensional medical image according to one embodiment of the invention;
FIG. 7 is a schematic diagram of a main flow of adjusting layer thicknesses of target image layers according to one embodiment of the invention;
FIG. 8 is a schematic diagram of a main flow of adjusting a layer thickness of a target image layer according to another embodiment of the present invention;
fig. 9 is a schematic diagram of the main modules of a medical image processing apparatus according to an embodiment of the present invention;
fig. 10 is a schematic configuration diagram of a computer system suitable for implementing a terminal device or an image processing server according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic view of an application scenario of a medical image processing method and a medical image processing apparatus according to an embodiment of the present invention. As shown in fig. 1, the application scenario may include: imaging system 110, network 120, image processing server 130, terminal devices 140, 150, 160, and database 170. Network 120 is used, among other things, to provide a medium for communication links between imaging system 110, image processing server 130, terminal devices 140, 150, 160, and database 170. Network 120 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The imaging system 110 may include a scanning device 111 and a visualization server 112, where the visualization server 112 converts scan data scanned by the scanning device 111 into pixels to form a sliced two-dimensional medical image or a three-dimensional medical image. For example, the imaging system 110 may be a positron emission tomography (PET) system, a positron emission tomography/computed tomography (PET/CT) system, a single-photon emission computed tomography/computed tomography (SPECT/CT) system, a computed tomography (CT) system, a medical ultrasound examination system, a nuclear magnetic resonance imaging (NMRI) system, a magnetic resonance imaging (MRI) system, a cardiovascular imaging system, or a digital radiography (DR) system. In an application scenario of the embodiment of the present invention, the various imaging systems described above may all communicate with the image processing server 130, the terminal devices 140, 150, and 160, and the database 170 through a network.
The scanning device 111 may be a digital subtraction angiography scanner, a magnetic resonance angiography scanner, a tomography scanner, a positron emission computed tomography scanner, a single-photon emission computed tomography scanner, a medical ultrasound examination device, a nuclear magnetic resonance imaging scanner, a digital radiography scanner, or the like. The combination of the scanning device 111 and the visualization server 112 may form the imaging system 110 described above.
The two-dimensional medical image or the three-dimensional medical image formed by the imaging system 110 can be stored in the database 170 for the subsequent image processing server 130 and the terminal devices 140, 150, 160 to retrieve the required medical image from the database 170. In addition, the imaging system 110 may also directly provide the medical image to the image processing server 130 or the terminal devices 140, 150, 160, etc.
The database 170 may be a conventional database or a database deployed on a storage cloud.
The image processing server 130 may be a server that performs compression, pixel correction, three-dimensional reconstruction, and the like on the medical image formed by the imaging system 110. The image processing server 130 may store the processed image in a database 170 and/or provide the processed image to the terminal devices 140, 150, 160, etc.
A user may use the terminal devices 140, 150, 160 to obtain medical images from the imaging system 110, the database 170, and/or the image processing server 130 through the network 120. The user may also use the terminal devices 140, 150, 160 to manually select a target region to be analyzed in the medical image; the terminal devices 140, 150, 160 then transmit the selected target region to the imaging system 110 or the image processing server 130, so that the imaging system 110 or the image processing server 130 further processes the target region, for example by readjusting the layer thickness, adjusting pixels, and so on.
The terminal devices 140, 150, 160 may be various electronic devices having display screens and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
It should be noted that the medical image processing method provided by the embodiment of the present invention is generally executed by the imaging system 110 or the image processing server 130, and accordingly, the image processing apparatus is generally disposed in the imaging system 110 or the image processing server 130.
It should be understood that the number of imaging systems, networks, image processing servers, terminal devices, and databases in fig. 1 are merely illustrative. There may be any number of imaging systems, networks, image processing servers, terminal devices, and databases, as desired for implementation.
A medical image processing method implemented by the imaging system 110 or the image processing server 130 according to an embodiment of the present invention may be shown in fig. 2. As can be seen from fig. 2, the medical image processing method may comprise the steps of:
step S201: acquiring a medical image, wherein the medical image comprises a plurality of image layers;
the medical image may be a three-dimensional image obtained by the imaging system 110 by performing processing such as scanning of a living body and image reconstruction of a scanning result. The organism may include any one or more of tissues, organs, specimens, and other parts of the body. For example, the organism may include one or more of the head, breast, limbs, heart, blood vessels, intestines, stomach, bladder, gall bladder, pelvic cavity, spine, bone, chest cavity, pleura, abdomen, and the like. Thus, the medical image may be a lung scan image, an angiographic image, a head CT image, a whole body nuclear magnetic scan image, or the like.
The scanning device 111 in the imaging system 110 shown in fig. 1 generally performs slice scanning on a living body and forms a three-dimensional image as shown in fig. 3 by superimposing and reconstructing the plurality of two-dimensional images generated by the slice scans (i.e., the two-dimensional image generated by one slice scan is one image layer). As shown in fig. 3, the medical image 300 is formed by superimposing a plurality of image layers 310. For the convenience of subsequent analysis based on fig. 3, the medical image 300 further includes a lesion region 320 and a lesion sign 330. In the three-dimensional medical image shown in fig. 3 obtained by the imaging system 110, the image layers generally have the same layer thickness; at present, whether the three-dimensional medical image is compressed by the imaging system 110 or by the image processing server 130 shown in fig. 1, the image layers are compressed to the same layer thickness.
It is worth noting that compressing the medical image essentially means increasing the layer thickness of each image layer: as the layer thickness increases, the number of pixel points contained in the medical image decreases, which compresses the medical image and reduces the storage space it occupies. However, over-compression may cause some signs in the medical image to disappear or become difficult for a device to capture or for medical workers to observe. Therefore, how to ensure that all signs in the medical image, or the signs of concern to medical workers, are clearly presented is a problem to be solved.
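The relationship between layer thickness and storage footprint can be illustrated with a minimal Python sketch. The slice-averaging strategy and the function name `compress_layers` are illustrative assumptions, not the compression scheme prescribed by the embodiments.

```python
import numpy as np

def compress_layers(volume, orig_thickness_mm, new_thickness_mm):
    """Increase the layer thickness of a slice stack by averaging groups
    of adjacent slices, reducing the total pixel count. Sketch only."""
    factor = int(new_thickness_mm // orig_thickness_mm)
    if factor <= 1:
        return volume                                  # nothing to compress
    n = (volume.shape[0] // factor) * factor           # drop any trailing remainder
    grouped = volume[:n].reshape(-1, factor, *volume.shape[1:])
    return grouped.mean(axis=1)

volume = np.random.rand(16, 64, 64)              # 16 slices at 1 mm spacing
compressed = compress_layers(volume, 1.0, 4.0)   # resample to 4 mm spacing
print(volume.nbytes, compressed.nbytes)          # pixel count drops by the factor
```

Averaging four 1 mm slices into one 4 mm slice cuts storage fourfold, but any sign thinner than the new layer thickness is smeared across the averaged slice — exactly the information loss the Background describes.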
Step S202: selecting a target region in the medical image;
The target region refers to a region of concern to a medical worker; it may be a region of interest, or a region selected by the medical worker through a terminal.
Accordingly, this step of selecting the target region may be implemented in two specific ways.
The first specific implementation: determine contour information of the region of interest from the medical image, and determine the target region based on the contour information. Here, a region of interest (ROI) generally refers, in machine vision and image processing, to a region to be processed that is delineated from an image by a box, circle, ellipse, irregular polygon, or the like; in the embodiment of the present invention, the region of interest refers to the lesion region 320 shown in fig. 3, delineated by existing techniques such as AI techniques. The contour information may include the spatial coordinates of each pixel point on the contour, the pixel values of those pixel points, and the like.
The determining of the target region based on the contour information may be: determining the region surrounded by the contour information of the region of interest (i.e., the region of interest itself) as the target region. For example, fig. 3 shows the lesion region 320 as the target region. Alternatively, a region with a blurred boundary in the region of interest may be determined as the target region. A blurred boundary means that the pixel values of some pixel points included in the contour information are close to the pixel values of the surrounding pixel points, where "close" means that the difference between them is not greater than a preset pixel difference threshold, which can be set according to actual requirements; as a result, that part of the contour of the region of interest can hardly be distinguished from the background. The blurred-boundary region may refer to the part of the contour of the region of interest whose pixel values are close to those of the adjacent background, or to a region containing that part of the contour. By selecting the blurred-boundary region as the target region, the thickness of the image layers containing it can be reduced in subsequent processing, so that the blurred region becomes clear and the boundary of the lesion region is better delineated.
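A minimal sketch of the blurred-boundary test described above, assuming an 8-neighbourhood comparison against the preset pixel difference threshold. The function name and the neighbourhood choice are illustrative assumptions, not prescribed by the method.

```python
import numpy as np

def blurred_contour_points(image, contour_pts, diff_threshold=10):
    """Return contour points whose pixel value is 'close' to all
    surrounding pixels, i.e. candidate blurred-boundary points.
    diff_threshold plays the role of the preset pixel difference
    threshold from the text. Illustrative sketch only."""
    fuzzy = []
    h, w = image.shape
    for (r, c) in contour_pts:
        # gather the 8-neighbourhood around the contour pixel (clipped at edges)
        r0, r1 = max(r - 1, 0), min(r + 2, h)
        c0, c1 = max(c - 1, 0), min(c + 2, w)
        neigh = image[r0:r1, c0:c1].astype(int)
        # blurred if every neighbour differs by no more than the threshold
        if np.all(np.abs(neigh - int(image[r, c])) <= diff_threshold):
            fuzzy.append((r, c))
    return fuzzy
```

A contour point sitting on a sharp intensity edge fails the test (some neighbour differs by more than the threshold); a point indistinguishable from its background passes and is flagged as blurred.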
The second specific implementation: in response to receiving a region selection request sent by a terminal, where the region selection request includes contour information of the selected region, determine the target region according to the contour information of the selected region. For example, if a user (a medical worker) selects the contour 340 shown in fig. 4 on a medical image displayed on the terminal, the region surrounded by the contour 340 can be determined as the target region from the contour information of the contour 340, such as three-dimensional coordinates, pixel values of pixel points, and the like.
Through the above approaches, different regions (such as a lesion region, a region with a blurred lesion boundary, or a region of concern to a medical worker) can be selected as the target region according to what the medical worker is focusing on. For example, if the medical worker is focusing on the boundary of a lesion region, the target region may be the blurred-boundary region; if the medical worker is focusing on the lesion region itself, the target region may be the lesion region. This meets the different requirements of medical image processing.
Step S203: determining a target image layer to which the target region belongs;
For example, the three-dimensional medical image shown in fig. 3 includes image layers T1 through T8. With the lesion region 320 shown in fig. 3 as the target region, step S203 can determine that the target image layers to which the target region belongs are T3, T4, T5, and T6.
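The mapping from a region's extent along the stacking axis to the image layers it occupies can be sketched as an interval-overlap test. The half-open bounds, the zero-based indices (T1 is index 0), and the function name are illustrative assumptions.

```python
def layers_for_region(layer_bounds, z_min, z_max):
    """layer_bounds[i] = (bottom_z, top_z) of image layer i.
    Return indices of layers whose z-range overlaps the region's
    z-extent [z_min, z_max]. Illustrative sketch only."""
    return [i for i, (lo, hi) in enumerate(layer_bounds)
            if hi > z_min and lo < z_max]

# eight 10 mm layers T1..T8 stacked along z
bounds = [(i * 10, (i + 1) * 10) for i in range(8)]
# a lesion spanning z = 25 mm .. 55 mm occupies T3..T6 (indices 2..5)
print(layers_for_region(bounds, 25, 55))  # [2, 3, 4, 5]
```

This mirrors the fig. 3 example: a lesion whose z-extent crosses layers T3 through T6 yields exactly those four target image layers.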
Step S204: and adjusting the target image layer to be a first layer thickness, and adjusting the rest image layers except the target image layer to be a second layer thickness to form the adjusted medical image, wherein the first layer thickness is smaller than the second layer thickness.
In this step, the first layer thickness and the second layer thickness may be preset, or may be adjusted accordingly according to the user's needs.
In the embodiment shown in fig. 2, the target region is selected, the target image layer to which the target region belongs is adjusted to a first layer thickness, and the remaining image layers other than the target image layer are adjusted to a second layer thickness, where the first layer thickness is smaller than the second layer thickness. In other words, the target image layer and the remaining image layers are given different layer thicknesses, so that the layer thickness of the target region lets a user observe key information, while the layer thickness of the remaining image layers reduces the occupation of storage resources.
In addition, because the target image layers of the target region are separated from the remaining image layers, and the definition of the remaining image layers generally does not affect the target region, the remaining image layers can be compressed as much as possible; that is, their layer thickness can be increased as much as possible, effectively reducing the storage space occupied by the medical image.
In the embodiment of the present invention, as shown in fig. 5, determining the target image layer to which the target region belongs may be implemented as follows:
step S501: setting a coordinate system for the medical image, and determining a coordinate range corresponding to each image layer;
the coordinate system for medical images is typically a three-dimensional coordinate system, and the orientation of the coordinate axes may be arbitrary. In a preferred embodiment, as shown in fig. 6, one coordinate axis (e.g., z axis) is parallel to the stacking direction of the plurality of image layers, and the bottom of the three-dimensional medical image is located on the plane of the other two coordinate axes. More preferably, the three-dimensional coordinate system is drawn with the lower left corner of the three-dimensional medical image as the origin of coordinates, such that one coordinate axis (e.g., z axis) is parallel to the stacking direction of the plurality of image layers, and the bottom of the three-dimensional medical image is located on the plane where the other two coordinate axes are located.
For the relationship between the three-dimensional coordinate system and the three-dimensional medical image shown in fig. 6, the coordinate range corresponding to each image layer may be determined as the z-axis coordinates of the top and bottom surfaces of each image layer, or as the z-axis coordinate of the interface between each two adjacent image layers.
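Deriving the per-layer coordinate ranges from the stacked layer thicknesses might look like the following sketch; stacking from z = 0 and the function name are assumptions for illustration.

```python
def layer_z_ranges(thicknesses):
    """Given each layer's thickness (mm), return the (bottom_z, top_z)
    range of each layer, stacking upward from z = 0. Sketch only."""
    ranges, z = [], 0.0
    for t in thicknesses:
        ranges.append((z, z + t))
        z += t
    return ranges

# two 5 mm layers followed by two 2 mm layers
print(layer_z_ranges([5, 5, 2, 2]))
# [(0.0, 5.0), (5.0, 10.0), (10.0, 12.0), (12.0, 14.0)]
```

Note that the top z of each layer equals the bottom z of the next, matching the "interface between each two adjacent image layers" formulation above.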
Step S502: determining coordinate information of a target area;
this step S502 can be implemented in two ways.
A first implementation of determining the coordinate information of the target region: determine coordinate information of the region contour of the target region. As shown in fig. 6, with the lesion region 320 as the target region, this step may determine the coordinate information of the contour surrounding the lesion region 320; with the coordinate system shown in fig. 6 (one coordinate axis parallel to the stacking direction of the plurality of image layers), the coordinate information may be the z-axis coordinates of each pixel point on the contour.
A second implementation of determining coordinate information of a target area: in the case where one coordinate axis of the set coordinate system is parallel to the stacking direction of the plurality of image layers, as shown in fig. 6, determine the coordinate information of the at least two coordinate points of the target region that are farthest apart in physical distance along that coordinate axis. For example, A and B identified in fig. 6 are the two coordinate points with the farthest physical distance. Because only these extreme points need to be located, the process of searching for the target image layers is effectively simplified, which further saves computing-resource overhead.
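A minimal sketch of the second implementation, assuming contour points are (x, y, z) tuples and the z-axis is the stacking direction (the function name is hypothetical):

```python
def extreme_z_points(contour_points):
    """Return the two contour points farthest apart along the stacking
    direction, i.e. those with the minimum and maximum z coordinate
    (the points A and B of fig. 6)."""
    lowest = min(contour_points, key=lambda p: p[2])
    highest = max(contour_points, key=lambda p: p[2])
    return lowest, highest

points = [(1, 2, 7.5), (3, 1, 2.0), (2, 2, 5.0)]
print(extreme_z_points(points))
```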
Step S503: and screening a plurality of target image layers occupied by the target area according to the coordinate information of the target area and the coordinate range corresponding to each image layer.
For the first implementation manner of the coordinate information of the target area, the specific implementation manner of step S503 is: if the coordinates of a pixel point on the area outline of the target area fall within the coordinate range of an image layer, that image layer is determined as a target image layer.
It should be noted that if the target image layers found in this way are not contiguous, the image layers at the positions of the discontinuities also need to be selected as target image layers. For example, in the embodiment shown in fig. 3, the target image layers selected are T6, T5, and T3, so a discontinuity exists between T3 and T5; the image layer T4 at that position is therefore also required as a target image layer.
For the second implementation manner of determining the coordinate information of the target area, the specific implementation manner of step S503 is: determine the image layers where the at least two farthest coordinate points are located, together with every image layer between them, as target image layers. Referring to fig. 6 and fig. 3, the image layers where the two farthest coordinate points are located are T6 and T3, so T6 and T3 are target image layers, and T4 and T5, which lie between them, are also target image layers.
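The continuity requirement can be sketched as follows: taking the contiguous run between the lowest and highest selected layer indices automatically fills any gap, such as T4 in the example above (the function name is illustrative):

```python
def fill_discontinuities(layer_indices):
    """Given the indices of the selected target image layers (e.g. T6, T5, T3
    as the indices 6, 5, 3), also include the layers at gap positions, so the
    result is the contiguous run from the lowest to the highest index."""
    return list(range(min(layer_indices), max(layer_indices) + 1))

print(fill_discontinuities([6, 5, 3]))  # T4 is added between T3 and T5
```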
In an embodiment of the present invention, as shown in fig. 7, the specific implementation of adjusting the target image layer to be the first layer thickness may include the following steps:
step S701: constructing and storing a mapping relation between the feature type and the layer thickness;
step S702: analyzing the target feature type to which the target area belongs;
step S703: and adjusting the target image layer to be the layer thickness corresponding to the target characteristic type included in the mapping relation.
The feature types may include, but are not limited to: calcification, fibrous cap, lipid core, degree of calcification, and the like.
For example, suppose the mapping relationship includes: calcification degree 0 maps to layer thickness C1, calcification degree 1 maps to layer thickness C2, calcification degree 2 maps to layer thickness C3, fibrous cap maps to layer thickness C4, and lipid core maps to layer thickness C5. If the focal region is the target region and analysis by the prior art finds that the target feature type of the focal region is calcification degree 1, then the layer thickness of each target image layer to which the focal region belongs is adjusted to C2. It is worth noting that the mapping relationship can be extended according to the actual situation, making it more complete and helping to produce clearer and more accurate medical images.
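Steps S701 to S703 can be sketched with a dictionary as the stored mapping relation; the feature names follow the example above, while the millimeter values standing in for C1 to C5 are assumptions for illustration only:

```python
# Hypothetical mapping between feature type and layer thickness (mm);
# the symbolic thicknesses C1..C5 are given assumed numeric values here.
THICKNESS_BY_FEATURE = {
    "calcification degree 0": 1.25,   # C1 (assumed)
    "calcification degree 1": 0.625,  # C2 (assumed)
    "calcification degree 2": 0.5,    # C3 (assumed)
    "fibrous cap": 0.625,             # C4 (assumed)
    "lipid core": 0.5,                # C5 (assumed)
}

def thickness_for(target_feature_type, default=5.0):
    """Step S703: look up the layer thickness mapped to the analyzed
    target feature type; fall back to a default if unmapped."""
    return THICKNESS_BY_FEATURE.get(target_feature_type, default)

print(thickness_for("calcification degree 1"))
```

New feature types can be added to the dictionary as the mapping relation is extended.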
In an embodiment of the present invention, the specific implementation of adjusting the target image layer to a layer thickness corresponding to the target feature type included in the mapping relationship may include:
for each target image layer, performing:
judging whether the original scanning layer thickness of the target image layer is larger than the layer thickness corresponding to the target feature type;
if so, adjusting the layer thickness of the image layer by interpolation so that it equals the layer thickness corresponding to the target feature type, or keeping the layer thickness of the image layer at the original scanning layer thickness;
otherwise, compressing the target image layer based on the layer thickness corresponding to the target feature type.
Through the above process, two processing modes are available for the case where the original scanning layer thickness of the target image layer is larger than the layer thickness corresponding to the target feature type, which makes the image processing flexible. In addition, adjusting the layer thickness of the image layer by interpolation overcomes the limitation of the original scanning layer thickness of the target image layer, further improving the definition of the target image layers of the target area.
The interpolation may use interpolation methods conventional in image processing, such as the nearest-neighbor method, bilinear interpolation, and cubic interpolation.
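As a one-dimensional stand-in for interpolating a thicker scan onto a finer layer grid, linear interpolation along z could look like the following sketch (real slice data would be 2-D arrays, and both grids are assumed uniform with at least two input slices):

```python
def resample_slices(slice_values, orig_thickness, target_thickness):
    """Linearly interpolate a stack of per-slice values from the original
    layer-thickness grid onto a (finer or coarser) target grid."""
    total_extent = (len(slice_values) - 1) * orig_thickness
    n_new = int(total_extent / target_thickness) + 1
    resampled = []
    for i in range(n_new):
        z = i * target_thickness
        # Index of the slice pair bracketing z, and the blend weight t.
        k = min(int(z / orig_thickness), len(slice_values) - 2)
        t = z / orig_thickness - k
        resampled.append(slice_values[k] * (1 - t) + slice_values[k + 1] * t)
    return resampled

# Two 5 mm slices interpolated onto a 2.5 mm grid.
print(resample_slices([0.0, 10.0], 5.0, 2.5))
```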
In addition, the second layer thickness may be determined based on the first layer thickness. For example, after the first layer thickness is determined, in order for the processed medical image to reach a size-reduction target, the second layer thickness may be increased according to the reduction target and the first layer thickness. This effectively reduces the size of the processed medical image and thus its storage occupancy.
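One hedged reading of this paragraph: if the reduction target is taken as a cap on the total slice count of the processed image, the second layer thickness can be solved from the first. The names and the formulation below are assumptions for illustration:

```python
def second_layer_thickness(target_extent_mm, rest_extent_mm,
                           first_thickness_mm, max_total_slices):
    """Given the z-extent covered by the target layers and by the remaining
    layers, choose the second layer thickness so that the total slice count
    does not exceed the reduction target (max_total_slices)."""
    target_slices = target_extent_mm / first_thickness_mm
    remaining_budget = max_total_slices - target_slices
    if remaining_budget <= 0:
        raise ValueError("reduction target too small for the first thickness")
    return rest_extent_mm / remaining_budget

# 20 mm of target region at 0.5 mm plus 80 mm of the rest within 80 slices.
print(second_layer_thickness(20.0, 80.0, 0.5, 80))
```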
In an embodiment of the present invention, as shown in fig. 8, the adjusting of the layer thickness of the target image layer may be further performed by:
step S801: acquiring behavior information of a terminal user aiming at the medical image;
the behavior information may include the degree of enlargement, degree of reduction, parameters of the selected medical image such as layer thickness parameters, etc., of the medical image by the end user.
Step S802: determining a corresponding layer thickness for the end user based on the behavior information;
the determined layer thickness can be obtained by counting the number of times the end user has viewed the recording of the medical image.
Step S803: in response to receiving a query request for a medical image sent by a terminal, acquiring information of a terminal target user from the query request;
step S804: determining a target layer thickness for the end target user based on the information of the end target user;
step S805: the target image layer is adjusted to the target layer thickness.
For example, for an aneurysm of 5 mm or more, the original scanning layer thickness is 5 mm, and the layer thickness obtained by the method of fig. 7 is 0.625 mm. However, tracking the doctor's slice-browsing speed and dwell habits shows that a reconstructed layer thickness of 2.0 mm for the region of interest already meets the doctor's requirements, so the target layer thickness is dynamically adjusted to 2.0 mm.
Through this embodiment, the thickness of the target image layer can be customized to suit the personal habits of different medical workers.
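Steps S801 and S802 might be sketched as a dwell-time statistic over viewing records, in the spirit of the 2.0 mm example above (the record format and function name are assumptions):

```python
def preferred_thickness(view_records, candidates=(0.625, 1.25, 2.0, 5.0)):
    """From records of (layer_thickness_mm, dwell_seconds), return the
    candidate thickness closest to the one the user dwelt on the most."""
    dwell = {}
    for thickness, seconds in view_records:
        dwell[thickness] = dwell.get(thickness, 0.0) + seconds
    most_viewed = max(dwell, key=dwell.get)
    return min(candidates, key=lambda c: abs(c - most_viewed))

# The doctor lingers on 2.0 mm reconstructions far longer than on 0.625 mm.
print(preferred_thickness([(2.0, 30.0), (0.625, 5.0), (2.0, 12.0)]))
```

Steps S803 to S805 would then look up this per-user thickness when a query request arrives and adjust the target image layer accordingly.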
As shown in fig. 9, an embodiment of the present invention provides a medical image processing apparatus 900, where the medical image processing apparatus 900 may include: an acquisition module 901 and a processing module 902, wherein,
an acquiring module 901 configured to acquire a medical image, wherein the medical image includes a plurality of image layers;
a processing module 902 for determining a target region in a medical image; determining a target image layer to which the target area belongs; and adjusting the target image layer to a first layer thickness and the remaining image layers except the target image layer to a second layer thickness to form the adjusted medical image, wherein the first layer thickness is smaller than the second layer thickness.
In an embodiment of the present invention, the processing module 902 is configured to determine contour information of a region of interest from a medical image; based on the contour information, a target region is determined.
In this embodiment of the present invention, the processing module 902 is further configured to respond to receiving a region selection request sent by a terminal, where the region selection request includes profile information of a selected region; and determining the target area according to the contour information of the selected area.
In this embodiment of the present invention, the processing module 902 is further configured to determine an area surrounded by the contour information of the region of interest as a target area.
In this embodiment of the present invention, the processing module 902 is further configured to determine an area with a blurred boundary in the region of interest as the target area.
In this embodiment of the present invention, the processing module 902 is further configured to set a coordinate system for the medical image, and determine a coordinate range corresponding to each image layer; determining coordinate information of a target area; and screening a plurality of target image layers occupied by the target area according to the coordinate information of the target area and the coordinate range corresponding to each image layer.
In this embodiment of the present invention, the processing module 902 is further configured to determine coordinate information for the region contour of the target region.
In this embodiment of the present invention, the processing module 902 is further configured to set any coordinate axis in the coordinate system to be parallel to the stacking direction of the plurality of image layers; in the stacking direction of the plurality of image layers, coordinate information of at least two coordinate points in the target region that are the farthest in physical distance in the coordinate axis direction parallel to the stacking direction of the plurality of image layers is determined.
In this embodiment of the present invention, the processing module 902 is further configured to construct and store a mapping relationship between the feature type and the layer thickness; analyzing the target feature type to which the target area belongs; and adjusting the target image layer to be the layer thickness corresponding to the target characteristic type included in the mapping relation.
In this embodiment of the present invention, the processing module 902 is further configured to, for each target image layer, perform: and judging whether the original scanning layer thickness of the target image layer is larger than the layer thickness corresponding to the target characteristic type, if not, compressing the target image layer based on the layer thickness corresponding to the target characteristic type.
In this embodiment of the present invention, the processing module 902 is further configured to, when it is determined that the original scanning layer thickness of the image layer is greater than the layer thickness corresponding to the target feature type, adjust the layer thickness of the image layer in an interpolation manner, so that the layer thickness of the image layer is equal to the layer thickness corresponding to the target feature type, or adjust the layer thickness of the image layer to the original scanning layer thickness.
In this embodiment of the present invention, the processing module 902 is further configured to obtain behavior information of the end user for the medical image; determining a corresponding layer thickness for the end user based on the behavior information; in response to receiving a query request for a medical image sent by a terminal, acquiring information of a terminal target user from the query request; determining a target layer thickness for the end target user based on the information of the end target user; the target image layer is adjusted to the target layer thickness.
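The apparatus 900 of fig. 9 could be sketched as a class with the two modules folded into methods; the class and method names below are illustrative, not from the embodiment:

```python
class MedicalImageProcessingApparatus:
    """Sketch of apparatus 900: an acquisition step (module 901) and a
    processing step (module 902) that assigns the first layer thickness to
    target layers and the second layer thickness to the rest."""

    def acquire(self, num_layers):
        # Stand-in for the acquisition module 901: record the layer count.
        self.num_layers = num_layers

    def process(self, target_layers, first_thickness, second_thickness):
        # Stand-in for the processing module 902: per-layer thickness map,
        # with the first thickness smaller than the second.
        assert first_thickness < second_thickness
        return {i: (first_thickness if i in target_layers else second_thickness)
                for i in range(self.num_layers)}

apparatus = MedicalImageProcessingApparatus()
apparatus.acquire(4)
print(apparatus.process({1, 2}, 0.625, 5.0))
```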
Referring now to FIG. 10, a schematic block diagram of a computer system 1000 suitable for implementing a terminal device or an image processing server of embodiments of the present invention is shown. The terminal device shown in fig. 10 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in fig. 10, the computer system 1000 includes a Central Processing Unit (CPU) 1001 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the system 1000 are also stored. The CPU 1001, ROM 1002, and RAM 1003 are connected to each other via a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage portion 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a LAN card, a modem, or the like. The communication section 1009 performs communication processing via a network such as the internet. A drive 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as necessary, so that a computer program read out therefrom is installed into the storage section 1008 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication part 1009 and/or installed from the removable medium 1011. The computer program executes the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 1001.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes an acquisition module and a processing module. The names of these modules do not in some cases constitute a limitation of the module itself, for example, the acquisition module may also be described as a "module for acquiring medical images".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments, or may be separate and not incorporated into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: acquire a medical image, wherein the medical image comprises a plurality of image layers; determine a target area in the medical image; determine a target image layer to which the target area belongs; and adjust the target image layer to a first layer thickness and the remaining image layers except the target image layer to a second layer thickness to form the adjusted medical image, wherein the first layer thickness is smaller than the second layer thickness.
According to the technical scheme of the embodiments of the present invention, a target area is selected, the target image layers to which the target area belongs are adjusted to a first layer thickness, and the remaining image layers are adjusted to a second layer thickness, where the first layer thickness is smaller than the second layer thickness. That is, the target image layers and the remaining image layers are given different layer thicknesses, so that the layer thickness of the target area lets the user observe key information, while the layer thickness of the remaining image layers reduces the occupation of storage resources.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.