Large-field-of-view CL reconstruction method, device, equipment and medium in offset scanning mode
1. A method for large field of view CL reconstruction in an offset scanning mode, comprising:
acquiring a set of original projection image sequences of a scanned sample by a real detector;
establishing a virtual detector and determining the spatial position of the virtual detector;
converting a group of original projection image sequences acquired by the real detector onto the virtual detector according to the space coordinate relationship between the real detector and the virtual detector to form a group of new projection image sequences on the virtual detector;
calculating a reconstruction geometric parameter of a virtual detector CT scanning system; and
weighting the new projection image sequence on the virtual detector, and obtaining a computed tomography image of the scanned sample based on the weighted projection data and the reconstruction geometric parameters.
2. The method of claim 1, wherein the acquiring a set of original projection image sequences of the scanned sample by a real detector comprises:
scanning the scanned sample with a cone-beam X-ray during a time period in which the sample turntable rotates continuously at a constant speed through one full circle, and acquiring, by the real detector, a set of original projection image sequences,
wherein the rotation axis of the sample turntable has a predetermined inclination angle with respect to the main ray beam.
3. The method for reconstructing a large field of view CL in an offset scan mode as claimed in claim 2, wherein said establishing a virtual detector and determining the spatial position of the virtual detector comprises:
rotating the plane in which the real detector lies clockwise about the X axis by a predetermined angle to obtain a predetermined plane;
constructing the virtual detector on the predetermined plane, and calculating the spatial coordinates of the four boundary points of the virtual detector according to the spatial mapping relationship, via the ray source focal spot, between projections on the real detector and projections on the virtual detector, so as to determine the spatial position of the virtual detector; and
correcting the spatial position of the virtual detector to account for truncation of the projection data acquired by the real detector caused by the offset of the sample stage;
wherein the predetermined angle is π/2 − α, α being the predetermined inclination angle of the rotation axis of the sample turntable.
4. The method for reconstructing the large field of view CL in the offset scanning mode as claimed in claim 1, wherein said converting a set of original projection image sequences acquired by the real detector onto the virtual detector according to the spatial coordinate relationship between the real detector and the virtual detector to form a set of new projection image sequences on the virtual detector comprises:
establishing a two-dimensional detector cell array within a boundary region of the virtual detector to determine spatial coordinates of the cell array of the virtual detector;
for each of the different rotation angles of the sample turntable, calculating, from the spatial coordinates of the unit array of the virtual detector, the spatial coordinates of the intersection point between the plane in which the real detector lies and the ray beam connecting the ray source focal spot with each detector unit in the unit array of the virtual detector;
obtaining a projection value of each intersection point from the projection data of the real detector units in the neighborhood of that intersection point, so as to obtain a projection value of each detector unit; and
obtaining a projection image of the virtual detector based on the projection value of each detector unit, the projection images for the different rotation angles forming the set of new projection image sequences on the virtual detector.
5. The method for reconstructing a large field of view CL in an offset scan mode as claimed in claim 1, wherein said weighting a new set of projection image sequences on the virtual detector comprises:
weighting a new set of projection image sequences obtained by the virtual detector by the following formula:
wherein R(l, t) represents the new set of projection image sequences obtained by the virtual detector, COR represents the offset distance of the sample turntable, and FDD' represents the distance from the ray source focal spot to the virtual detector plane.
6. The method of claim 1, wherein the reconstruction geometric parameters of the virtual detector CT scanning system include: the coordinates of the projection point of the ray source focal spot in the virtual detector plane, and the distance from the ray source focal spot to the virtual detector plane.
7. The method for large field of view CL reconstruction in an offset scan mode as claimed in claim 1, wherein the offset scan mode comprises a sample stage offset scan mode and a detector offset scan mode.
8. A large field of view CL reconstruction apparatus in an offset scan mode, comprising:
the original projection image sequence acquisition unit is used for acquiring a group of original projection image sequences of the scanned sample through a real detector;
the virtual detector position determining unit is used for establishing a virtual detector and determining the spatial position of the virtual detector;
a new projection image sequence forming unit, configured to convert a set of original projection image sequences acquired by the real detector onto the virtual detector according to a spatial coordinate relationship between the real detector and the virtual detector, so as to form a set of new projection image sequences on the virtual detector;
the reconstruction geometric parameter calculation unit is used for calculating the reconstruction geometric parameters of the virtual detector CT scanning system; and
the computed tomography image acquisition unit is used for weighting the set of new projection image sequences on the virtual detector and acquiring a computed tomography image of the scanned sample based on the weighted projection data and the reconstruction geometric parameters.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the computer program implementing the steps of the method for reconstruction of a large field of view CL in an offset scan mode according to any of claims 1-7.
10. A non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for reconstructing a large field of view CL in an offset scan mode according to any one of claims 1-7.
Background
Computed Tomography (CT), as an excellent nondestructive testing technology, is widely applied in fields such as circuit board inspection, medical diagnosis and aerospace. However, for the inspection of flat, plate-shaped objects, conventional CT suffers from problems such as a limited detection space and low utilization of the ray source energy. Computed laminography (CL) technology can overcome these problems and therefore has a unique advantage for the imaging inspection of plate-shaped objects.
However, when a CL scanning system inspects a large flat plate-shaped sample, it is limited by the size of the area-array detector, which may result in an insufficient imaging field of view and reduced detection efficiency.
Disclosure of Invention
To solve the problems in the prior art, the present invention provides a method and an apparatus for reconstructing a large field of view CL in an offset scanning mode, an electronic device, and a storage medium.
In a first aspect, the present invention provides a method for reconstructing a large field of view CL in an offset scanning mode, including:
acquiring a set of original projection image sequences of a scanned sample by a real detector;
establishing a virtual detector and determining the spatial position of the virtual detector;
converting a group of original projection image sequences acquired by the real detector onto the virtual detector according to the space coordinate relationship between the real detector and the virtual detector to form a group of new projection image sequences on the virtual detector;
calculating a reconstruction geometric parameter of a virtual detector CT scanning system; and
weighting the new projection image sequence on the virtual detector, and obtaining a computed tomography image of the scanned sample based on the weighted projection data and the reconstruction geometric parameters.
Further, the acquiring a set of original projection image sequences of the scanned sample by the real detector includes:
scanning the scanned sample with a cone-beam X-ray during a time period in which the sample turntable rotates continuously at a constant speed through one full circle, and acquiring, by the real detector, a set of original projection image sequences,
wherein the rotation axis of the sample turntable has a predetermined inclination angle with respect to the main ray beam.
Further, the establishing a virtual detector and determining a spatial position of the virtual detector includes:
rotating the plane in which the real detector lies clockwise about the X axis by a predetermined angle to obtain a predetermined plane;
constructing the virtual detector on the predetermined plane, and calculating the spatial coordinates of the four boundary points of the virtual detector according to the spatial mapping relationship, via the ray source focal spot, between projections on the real detector and projections on the virtual detector, so as to determine the spatial position of the virtual detector; and
correcting the spatial position of the virtual detector to account for truncation of the projection data acquired by the real detector caused by the offset of the sample stage;
wherein the predetermined angle is π/2 − α, α being the predetermined inclination angle of the rotation axis of the sample turntable.
Further, the converting a set of original projection image sequences acquired by the real detector onto the virtual detector according to the spatial coordinate relationship between the real detector and the virtual detector to form a set of new projection image sequences on the virtual detector includes:
establishing a two-dimensional detector cell array within a boundary region of the virtual detector to determine spatial coordinates of the cell array of the virtual detector;
for each of the different rotation angles of the sample turntable, calculating, from the spatial coordinates of the unit array of the virtual detector, the spatial coordinates of the intersection point between the plane in which the real detector lies and the ray beam connecting the ray source focal spot with each detector unit in the unit array of the virtual detector;
obtaining a projection value of each intersection point from the projection data of the real detector units in the neighborhood of that intersection point, so as to obtain a projection value of each detector unit; and
obtaining a projection image of the virtual detector based on the projection value of each detector unit, the projection images for the different rotation angles forming the set of new projection image sequences on the virtual detector.
Further, the weighting the new set of projection image sequences on the virtual detector includes:
weighting a new set of projection image sequences obtained by the virtual detector by the following formula:
wherein R(l, t) represents the new set of projection image sequences obtained by said virtual detector, COR represents the offset distance of the sample turntable, and FDD' represents the distance from the ray source focal spot to the virtual detector plane.
Further, the reconstruction geometric parameters of the virtual detector CT scanning system include: the coordinates of the projection point of the ray source focal spot in the virtual detector plane, and the distance from the ray source focal spot to the virtual detector plane.
Further, the offset scanning mode comprises a sample stage offset scanning mode and a detector offset scanning mode.
In a second aspect, the present invention provides a large field of view CL reconstruction apparatus in an offset scan mode, comprising:
the original projection image sequence acquisition unit is used for acquiring a group of original projection image sequences of the scanned sample through a real detector;
the virtual detector position determining unit is used for establishing a virtual detector and determining the spatial position of the virtual detector;
a new projection image sequence forming unit, configured to convert a set of original projection image sequences acquired by the real detector onto the virtual detector according to a spatial coordinate relationship between the real detector and the virtual detector, so as to form a set of new projection image sequences on the virtual detector;
the reconstruction geometric parameter calculation unit is used for calculating the reconstruction geometric parameters of the virtual detector CT scanning system; and
the computed tomography image acquisition unit is used for weighting the set of new projection image sequences on the virtual detector and acquiring a computed tomography image of the scanned sample based on the weighted projection data and the reconstruction geometric parameters.
In a third aspect, the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor, when executing the computer program, implements the steps of the large field of view CL reconstruction method in the offset scan mode according to any one of the first aspect.
In a fourth aspect, the invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method for reconstructing a large field of view CL in an offset scan mode according to any one of the first aspects.
The invention achieves large field of view CL reconstruction in an offset scanning mode, can roughly double the CL imaging field of view without changing the conventional CL scanning geometric layout, and therefore has good engineering application value.
Drawings
Fig. 1 is a flowchart of a large field of view CL reconstruction method in an offset scanning mode according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a sample stage offset CL scan provided by an embodiment of the invention;
fig. 3(a) and 3(b) are a three-dimensional schematic diagram of a stage-offset CL scan and a side view of a stage-offset CL scan, respectively, according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a transformation of projection data between a real detector and a virtual detector according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a large field-of-view CL reconstruction apparatus in an offset scanning mode according to an embodiment of the present invention; and
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Description of reference numerals:
1: an X-ray source; 2: an X-ray principal plane; 3: a sample turntable; 4: a scanned sample; 5: a real detector; 6: a virtual detector.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a method for reconstructing a large field of view CL in an offset scanning mode according to an embodiment of the present invention. Referring to fig. 1, the method includes the steps of:
step S101: acquiring a set of original projection image sequences of a scanned sample by a real detector;
step S103: establishing a virtual detector and determining the spatial position of the virtual detector;
step S105: converting a group of original projection image sequences acquired by the real detector onto the virtual detector according to the space coordinate relationship between the real detector and the virtual detector to form a group of new projection image sequences on the virtual detector;
step S107: calculating a reconstruction geometric parameter of a virtual detector CT scanning system; and
step S109: weighting the new projection image sequence on the virtual detector, and acquiring a computed tomography image of the scanned sample based on the weighted projection data and the reconstruction geometric parameters.
In an embodiment of the invention, the offset scanning mode comprises a stage offset scanning mode and a detector offset scanning mode.
The method of the present invention will be described below by taking a sample stage offset scanning mode as an example.
Fig. 2 shows a schematic diagram of sample stage offset CL scanning. As shown in fig. 2, the sample stage offset CL scanning apparatus includes an X-ray source 1, an X-ray principal plane 2, a sample turntable 3, a scanned sample 4, a real detector 5 and a virtual detector 6, where O denotes the projection point of the source focal spot on the detector plane, O_s denotes the projection point on the detector plane of the intersection of the sample turntable 3 with the X-ray principal plane 2, and α denotes the inclination angle of the rotation axis of the sample turntable 3.
In an embodiment of the present invention, specifically, step S101 (acquiring a set of original projection image sequences of the scanned sample by the real detector) is as follows:
The offset distance of the sample turntable 3 (hereinafter referred to as COR) and the inclination angle of the rotation axis of the sample turntable 3 (hereinafter referred to as α) are set according to the size and shape of the scanned sample. The sample turntable 3 then rotates continuously at a constant speed through 360°, and the real detector 5 acquires a projection image of the scanned sample at fixed angular intervals during the rotation, thereby obtaining a set of original projection image sequences.
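As a minimal sketch of the acquisition bookkeeping in step S101 (the concrete values are taken from the simulation experiment described later in this text, and the variable names are ours):

```python
import numpy as np

n_proj = 720                              # projections over one full revolution
COR = 42.4                                # offset distance of the sample turntable, in mm
alpha = np.deg2rad(-60.0)                 # inclination angle of the turntable rotation axis
angles = np.linspace(0.0, 2.0 * np.pi, n_proj, endpoint=False)
# During the constant-speed rotation, the real detector reads out one projection
# image G[l, t] at each angle in `angles`, giving the original projection sequence.
```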
Hereinafter, the processes of steps S103, S105, S107, S109 will be described in detail with reference to fig. 3(a) and 3(b) and fig. 4.
In the embodiment of the present invention, specifically, step S103 (establishing a virtual detector and determining the spatial position of the virtual detector) is as follows:
In fig. 3(a) and 3(b), the detector unit array of the real detector 5 is n_w × n_h, the pixel size is P, and the width and height of the rectangular region ABCD in which the real detector 5 lies are W_BD = n_w·P and H_BD = n_h·P. The projection point of the focal spot S of the ray source 1 on the real detector 5 is O_F, and the distance from the focal spot S of the ray source 1 to the real detector 5 is FDD. A spatial coordinate system O_F-XYZ is established with the projection point O_F as the origin, and the position of the projection point O_F on the real detector 5 is (F_w·P, F_h·P). The spatial coordinates of any detector unit G on the real detector 5 are written as (x_G, 0, z_G, Q_G), where Q_G is the intensity value at G. The two-dimensional matrix of a projection image obtained by the real detector 5 is denoted G[l, t], l ∈ [1, n_h], t ∈ [1, n_w]; this matrix corresponds one-to-one with the two-dimensional detector unit array of the real detector 5. Denoting the element of the two-dimensional matrix G[l, t] corresponding to the detector unit G as G(l_0, t_0), the spatial coordinates of the detector unit G can be expressed as the following formula (1):
(x_G, 0, z_G, Q_G) = ((t_0 − F_w)·P, 0, (F_h − l_0)·P, G(l_0, t_0))    (1)
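A minimal sketch of formula (1) in code, assuming the coordinate conventions stated above; the function name is ours:

```python
import numpy as np

def real_cell_coords(l0, t0, Fw, Fh, P):
    """Spatial coordinates (x_G, y_G, z_G) of real-detector unit G[l0, t0] in O_F-XYZ.

    The real detector lies in the plane y = 0; P is the pixel size and (Fw, Fh) the
    pixel position of the projection point O_F, as in formula (1).
    """
    return np.array([(t0 - Fw) * P, 0.0, (Fh - l0) * P])
```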
the linear equation of the radiation beam GS corresponding to the detector unit G can be expressed as the following formula (2):
The plane Σ_ABCD is rotated clockwise about the X axis by the angle (π/2 − α) to obtain the plane Σ_MNPQ, and intersecting this plane with the cone beam S-ABCD gives the quadrilateral region MNPQ. The plane Σ_MNPQ can be expressed as the following equation (3):
y·sin α − z·cos α = 0    (3)
The projection point of the focal spot S of the ray source 1 on the plane Σ_MNPQ is O_F'. As can be seen from FIG. 3(b), the rotation axis of the sample turntable 3 and the plane Σ_MNPQ are both perpendicular to O_F'S and are therefore parallel to each other. When the virtual detector 6 is constructed in the plane Σ_MNPQ, the ray beam O_F'S is the main ray beam of the scanning structure corresponding to the virtual detector 6; with respect to this structure the equivalent inclination angle of the rotation axis of the sample turntable 3 is π/2, so the scanning structure formed by the virtual detector 6, the sample turntable 3 and the ray source 1 meets the requirements of a sample stage offset CT scanning structure. The ray beam GS has exactly one intersection point with the plane Σ_MNPQ, denoted R, and since the intensity of the beam does not change during straight-line propagation in vacuum, the intensity value Q_G of the beam at point G equals its intensity value Q_R at point R, i.e. Q_G = Q_R. By traversing all ray beams in the cone beam S-ABCD, the intensity values of the ray projection points in the rectangular region ABCD and those in the quadrilateral region MNPQ can thus be put into one-to-one correspondence; this mapping relation is denoted T_RG. From equations (2) and (3), the spatial coordinates of the unique intersection point R of the beam GS with the plane Σ_MNPQ can be obtained, i.e. the mapping T_RG can be expressed as the following equation (4):
the detector sigma can be converted by the mapping relationGThe resulting projection data are converted into a quadrilateral region MNPQ.
An area-array detector used in a conventional cone-beam CT scanning system, i.e. a two-dimensional detector unit array, has the same number of detector units in every row and likewise in every column, so that the projection data acquired by the detector at each exposure take the form of a two-dimensional matrix, i.e. a projection image. From FIG. 3(a), the geometry of the quadrilateral region MNPQ is such that QP ∥ MN, but the region is not rectangular. Therefore, a point P' is selected on QP such that P'N ⊥ MN, the side MN is extended and a point M' is selected such that QM' ⊥ MN, which yields a rectangular region M'NP'Q. As can be seen from fig. 3(a), the offset of the sample turntable causes the projection to be truncated on the side of the real detector 5 close to edge BC; when the projection data are converted to the quadrilateral region MNPQ, the truncation likewise appears on the corresponding side, and the rectangular region M'NP'Q is constructed by discarding the projection data of the triangular region NP'P. For a cone-beam scanning mode with a small cone angle and an inclination angle α that is not very large, the area of the region NP'P whose projection data are discarded is small, and the corresponding side length of the triangular region NP'P can be expressed as the following formula (5):
it can be found that for small cone angle and small turntable tilt angle CL offset scans, such as α 45 °, FDD/HBD10, the focus of the ray source is on the area array detector sigmaGProjected point O onFCoincident with the centre of the detector i.e. (F)w,Fh)=(nw/2,nhAt the time of/2) the reaction,the amount of projection data discarded by this section is small. In addition, for a detector-biased CL scan configuration, the projection point O of the source focus SFX axis coordinate value FwIs smaller so thatThe loss is reduced. The real detector 5 has no projection truncation when the projection data on the side close to the AD side is converted to the QM side of the quadrangular region MNPQ, so that the expanded triangular region QM' M has no influence on the subsequent reconstruction as long as the fixed value Q is usedPIt is sufficient to fill in the projection data. This results in a spatial rectangular region M 'NP' Q at the boundary of the virtual detector 7. The spatial coordinates of the four vertices of the rectangular region ABCD are obtained from the formula (1), and the spatial coordinates of the four vertices of the rectangular region MNPQ can be obtained by substituting the spatial coordinates into the formula (4) Namely, it is
From the geometric relationship, M'(x_M', y_M', z_M') = (x_Q, y_N, z_N) and P'(x_P', y_P', z_P') = (x_N, y_Q, z_Q). This yields the spatial coordinates of the four vertices of the rectangle M'NP'Q of the virtual detector 6 as well as the width and height of the virtual detector, W_NQ = x_Q − x_N and H_NQ = √((y_Q − y_N)² + (z_Q − z_N)²), respectively.
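A short sketch of this boundary computation, assuming the corner points N and Q have already been mapped onto the plane Σ_MNPQ with the intersection routine above; which real-detector corner maps to which vertex of MNPQ follows Fig. 3(a), so the corners are passed in explicitly:

```python
import numpy as np

def virtual_detector_extent(N, Q):
    """Width W_NQ and height H_NQ of the rectangular region M'NP'Q from corners N, Q."""
    W_NQ = abs(Q[0] - N[0])                        # extent along X: x_Q - x_N
    H_NQ = np.hypot(Q[1] - N[1], Q[2] - N[2])      # in-plane extent |NP'|
    return W_NQ, H_NQ

# The auxiliary vertices of the rectangle then follow directly:
#   M' = (x_Q, y_N, z_N)   and   P' = (x_N, y_Q, z_Q)
```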
in an embodiment of the present invention, specifically, step S105 (converting a set of original projection image sequences acquired by the real detector onto the virtual detector according to the spatial coordinate relationship between the real detector and the virtual detector to form a set of new projection image sequences on the virtual detector) is as follows:
the spatial position of the two-dimensional detector cell array of the virtual detector 6 is obtained by dividing the rectangular region M 'NP' Q into two-dimensional grids. The size of the grid cells indicates that the pixel size of the virtual detector 6 is equal to the pixel size P of the real detector 5, hence the pairThe size of a two-dimensional grid divided inside the rectangular region M 'NP' Q is nL×nT=HNQ/P×WNQP, the detector cell array of the virtual detector 6 is nL×nT. The two-dimensional matrix of a projection image obtained by the virtual detector 6 is denoted R l, t],l∈[1,nL],t∈[1,nT]Then the two-dimensional matrix is in one-to-one correspondence with the two-dimensional mesh vertices divided inside the rectangular region M 'NP' Q. Two-dimensional matrix R [ l, t ]]Any one of the elements R (l)0,t0) The vertex r of the corresponding two-dimensional mesh is in the coordinate system OF-spatial coordinates r (x) at XYZr,yr,zr,Qr) Can be expressed as the following formula (6):
(x_r, y_r, z_r, Q_r) = (x_N + t_0·P, y_N − l_0·P·cos α, z_N − l_0·P·sin α, R(l_0, t_0))    (6)
where Q_r represents the intensity value of the ray beam at the grid vertex r. According to the mapping relation T_RG, the spatial coordinates of the projection point G_r on the real detector Σ_G that corresponds to the ray beam passing through the vertex r can be calculated as the following formula (7):
the projection point G can be obtained from the formula (1)rTwo-dimensional matrix G [ l, t ] corresponding to projection data at real detector 5]Subscript of the element of (a) is G (l)r,tr) This results in the elements R (l) of the two-dimensional matrix on the virtual detector 60,t0)=G(lr,tr). As shown in FIG. 4, the subscript l is typicallyr,trNot an integer, but a two-dimensional matrix G [ l, t]Is located at the grid vertices, i.e., subscripts are integers. Therefore, to calculate G (l)r,tr) Requires a two-dimensional matrix G [ l, t ]]The projection value of the point is calculated by adopting an interpolation operation mode. Here, the intensity attenuation of the radiation beam as it travels along a straight line is ignored, so that the intensity value of the intersection of any radiation beam with the real detector is equal to its intensity value with the virtual detectorThe intensity values of the intersection points of the detector, the intensity values of the rays are received by the detector and converted into corresponding projection values. And carrying out interpolation operation on the projection data in the neighborhood of the intersection point on the real detector to obtain the projection value of the intersection point.
In this context, the projection value of a detector unit is also the intensity value of the ray at that detector unit. The projection value G(l_r, t_r) at [l_r, t_r] is calculated by bilinear interpolation from the projection values of the four vertices a, b, c, d of its neighborhood, and the projection values corresponding to the four points a, b, c, d are as follows:
where the function floor() denotes rounding down and the function ceil() denotes rounding up; the following expression (8) can then be obtained from the bilinear interpolation formula:
the above solving process is performed by traversing the elements of the two-dimensional matrix R l, t, to obtain a projection image of the virtual detector 6. A group of projection image sequences on the virtual detector 6 are obtained by subjecting the two-dimensional matrix G [ l, t ] of the projection image sequences of the real detector 5 under all the rotation angles of the sample turntable to the projection conversion method.
In an embodiment of the present invention, specifically, step S107 (calculating the reconstruction geometry parameters of the virtual detector CT scanning system) is as follows:
The reconstruction geometric parameters of the virtual detector CT scanning structure are mainly the projection point O_F' of the focal spot S of the ray source 1 on the virtual detector 6 and the distance FDD' from the focal spot S of the ray source 1 to the plane MNPQ. As shown in fig. 3(b), the following expression (9) can be derived from the geometric relationship:
As for the position coordinates of the projection point O_F', the spatial coordinate system O_F-XYZ is first rotated about the X axis by the angle β = π/2 − α to obtain the spatial coordinate system O_F-XY'Z'. The coordinates of the vertex N(x'_N, y'_N, z'_N) in the spatial coordinate system O_F-XY'Z' are (x_N, 0, y_N·cos α + z_N·sin α). The coordinates of the focal spot S of the ray source 1 in this coordinate system are (0, FDD·sin α, FDD·cos α), and the spatial coordinates of the projection point O_F' of the focal spot S of the ray source 1 on the plane MNPQ are (0, 0, FDD·cos α). The position coordinates of the projection point O_F' relative to the boundary vertex N of the virtual detector 6 can then be obtained as the following expression (10):
(F_w'·P, F_h'·P) = (−x_N, FDD·cos α − y_N·cos α − z_N·sin α)    (10)
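A sketch of step S107; since formula (9) is not reproduced in this text, the source-to-virtual-plane distance is computed here as FDD' = FDD·|sin α|, which follows from the assumed geometry above rather than from the patent's own expression:

```python
import numpy as np

def virtual_geometry(N, FDD, alpha, P):
    """Return FDD' and the projection-point pixel coordinates (F_w', F_h') of formula (10).

    N is the boundary vertex of the virtual detector in O_F-XYZ coordinates.
    """
    FDD_p = FDD * abs(np.sin(alpha))                              # assumed form of (9)
    Fw_p = -N[0] / P                                              # formula (10)
    Fh_p = (FDD * np.cos(alpha) - N[1] * np.cos(alpha) - N[2] * np.sin(alpha)) / P
    return FDD_p, Fw_p, Fh_p
```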
in an embodiment of the present invention, step S109 (weighting a new set of projection image sequences on the virtual detector, and obtaining a computed tomography image of the scanned sample based on the weighted projection data and the reconstruction geometry parameters) is as follows:
weighting processing is required for the projection data obtained by the virtual detector 6 to remove redundant projection data. The expression of the weighting function used here is the following expression (11):
wherein, in formula (11), one symbol denotes the angle of the ray beam relative to the ray beam passing through the rotation axis of the sample turntable, another denotes the angular range over which the projection data are redundant, and R(l, t) denotes the projection image sequence obtained by the virtual detector; then, using the weighted projection data R'[l, t] and the reconstruction geometric parameters, the computed tomography image of the scanned sample can be obtained with the FDK reconstruction method.
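Formula (11) is likewise not reproduced here, so the following sketch only illustrates the kind of pre-weighting described: a smooth sin² transition over the redundant band around the projected rotation axis, as commonly used for 360° offset scans. The band position u_axis and half-width overlap are assumed inputs, and this is not necessarily the exact weighting function of the invention.

```python
import numpy as np

def weight_offset_projection(R, pixel, Fw_p, u_axis, overlap):
    """Column-wise pre-weighting of one virtual-detector projection R[l, t].

    pixel   : virtual detector pixel size (mm)
    Fw_p    : column position of the source projection point on the virtual detector
    u_axis  : lateral position of the projected rotation axis (mm), assumed input
    overlap : half-width of the redundant band (mm), assumed input
    """
    n_l, n_t = R.shape
    # lateral coordinate of each detector column relative to the projected rotation axis
    u = (np.arange(n_t) - Fw_p) * pixel - u_axis
    w = np.ones(n_t)
    band = np.abs(u) <= overlap
    w[band] = np.sin(np.pi / 4.0 * (u[band] / overlap + 1.0)) ** 2  # w(u) + w(-u) = 1
    w[u < -overlap] = 0.0   # columns beyond the band (if any) are covered by conjugate views
    return R * w[np.newaxis, :]
```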
To further illustrate the large field of view CL reconstruction method in the offset scanning mode provided by the embodiment of the present invention, a simulation experiment is performed on the steps of the method, in which the scanned sample is a simulated three-dimensional cylindrical phantom with a diameter of 800 voxels and a height of 150 voxels, the voxel size being 0.2 mm. The simulation experiment comprises the following specific steps:
(1) Setting the simulation experiment parameters: real detector size W_BD = 600 and H_BD = 700, pixel size 0.2 mm, distance from the ray source focal spot to the real detector FDD = 1000 mm, inclination angle of the sample turntable rotation axis α = −60°, offset distance of the sample turntable COR = 42.4 mm, and number of projections 720. The two-dimensional matrices G[l, t] of the original projection image sequence are thus obtained;
(2) Calculating the spatial position of the virtual detector according to formulas (1) to (5), giving a virtual detector width and height of 600 × 808. Because the offset scan truncates the projection on the detector, the size of the projection data region NP'P that has to be discarded is 24 pixels, and the padding value selected for the region QM'M is Q_P = 0;
(3) Calculating the two-dimensional matrices R[l, t] of the projection image sequence of the virtual detector according to formulas (6) to (8);
(4) calculating the reconstruction geometric parameters of the sample stage offset CT scanning structure corresponding to the virtual detector according to the formulas (9) and (10), wherein the reconstruction geometric parameters are shown in the following table;
Table 1 Geometric parameters of the virtual detector CT scanning system
(5) Weighting the projection data R(l, t) of the virtual detector with the weighting function to obtain projection data R'(l, t), and then obtaining the computed tomography image of the scanned phantom with the FDK reconstruction method.
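Putting the simulation steps together, a hedged driver sketch (reusing the helper functions sketched earlier; fdk_reconstruct is a placeholder for an external cone-beam FDK implementation, not an API defined here) might look like:

```python
import numpy as np

FDD, P = 1000.0, 0.2                      # mm
alpha = np.deg2rad(-60.0)
COR, n_proj = 42.4, 720
angles = np.linspace(0.0, 2.0 * np.pi, n_proj, endpoint=False)

# (1) G_all[k] : original projection G[l, t] at the k-th rotation angle (acquired data)
# (2) virtual-detector extent from the mapped corners N, Q (see virtual_detector_extent)
# (3) R_all[k] = convert_projection(G_all[k], n_L, n_T, map_to_real_indices)
# (4) FDD_p, Fw_p, Fh_p = virtual_geometry(N, FDD, alpha, P)
# (5) R_w = [weight_offset_projection(R, P, Fw_p, u_axis, overlap) for R in R_all]
#     volume = fdk_reconstruct(R_w, angles, FDD_p, Fw_p, Fh_p)   # placeholder call
```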
Fig. 5 is a large field of view CL reconstruction apparatus in an offset scan mode according to an embodiment of the present invention. Referring to fig. 5, the apparatus 500 includes:
an original projection image sequence acquisition unit 501, configured to acquire a set of original projection image sequences of a scanned sample through a real detector;
a virtual detector position determination unit 503, configured to establish a virtual detector and determine a spatial position of the virtual detector;
a new projection image sequence forming unit 505, configured to convert a set of original projection image sequences acquired by the real detector onto the virtual detector according to a spatial coordinate relationship between the real detector and the virtual detector, so as to form a set of new projection image sequences on the virtual detector;
a reconstruction geometric parameter calculation unit 507, configured to calculate a reconstruction geometric parameter of the virtual detector CT scanning system; and
a computed tomography image obtaining unit 509, configured to perform weighting processing on the set of new projection image sequences on the virtual detector, and obtain a computed tomography image of the scanned sample based on the weighted projection data and the reconstruction geometric parameters.
As is clear from the above, the units 501 to 509 of the apparatus 500 may respectively perform the steps of the method described with reference to the above embodiments, and the details thereof will not be described here.
By establishing the virtual detector, the embodiment of the invention converts a real-detector CL offset scan into a virtual-detector CT offset scan, and finally obtains the computed tomography image of the scanned sample using a pre-weighted FDK reconstruction method.
In another aspect, the present invention provides an electronic device. As shown in fig. 6, electronic device 600 includes a processor 601, memory 602, a communication interface 603, and a communication bus 604.
The processor 601, the memory 602, and the communication interface 603 complete communication with each other through the communication bus 604;
the processor 601 is used to call the computer program in the memory 602, and the processor 601, when executing the computer program, implements the steps of the large field of view CL reconstruction method in the offset scan mode provided by the embodiment of the present invention as described above.
Further, the computer program in the memory may be implemented in the form of a software functional unit and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on such understanding, the technical solution of the present invention or a part thereof, which essentially contributes to the prior art, can be embodied in the form of a software product, which is stored in a storage medium and includes several computer programs to make a computer device (which may be a personal computer, a server, or a network device) execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention provides a non-transitory computer readable storage medium having stored thereon a computer program which, when being executed by a processor, implements the steps of the method for reconstructing a large field of view CL in an offset scan mode as provided by the embodiments of the present invention described above.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Finally, it should be noted that: the above examples are only for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.