Three-dimensional reconstruction method for chickens containing RGBDT information


1. A three-dimensional reconstruction method for chickens containing RGBDT information is characterized by comprising the following steps:

1) a chicken image acquisition system is established, comprising a camera device and a computer, wherein the camera device is connected with the computer through a network cable and is fixedly installed right above the chicken or the heating calibration plate; the camera device mainly comprises a first visible light camera C1, a second visible light camera C2, and a thermal infrared camera T1, which are arranged in parallel and at intervals;

2) according to Zhang Zhengyou's plane calibration method, the first visible light camera C1, the second visible light camera C2, and the thermal infrared camera T1 in the chicken image acquisition system synchronously shoot the heating calibration plate at N different angles, and N groups of heating calibration plate images are acquired by each camera; using the N groups of heating calibration plate images acquired by each camera, camera calibration is performed to obtain the extrinsic parameter matrix EXa, the intrinsic parameter matrix Ka, and the distortion coefficients of the first visible light camera C1, the extrinsic parameter matrix EXb, the intrinsic parameter matrix Kb, and the distortion coefficients of the second visible light camera C2, and the intrinsic parameter matrix Kc and the distortion coefficients of the thermal infrared camera T1;

3) according to the extrinsic parameter matrix EXa of the first visible light camera C1 and the extrinsic parameter matrix EXb of the second visible light camera C2, stereo calibration is carried out on the first visible light camera C1 and the second visible light camera C2 to obtain the rotation matrix R and the translation vector t between the first visible light camera C1 and the second visible light camera C2;

4) the chicken image acquisition system is used to acquire original images of the chicken, and the distortion coefficients of the first visible light camera C1, the second visible light camera C2, and the thermal infrared camera T1 are used to perform a distortion removal operation on the original images of the chicken to obtain undistorted images;

5) according to the intrinsic parameter matrix Ka of the first visible light camera C1 and the intrinsic parameter matrix Kc of the thermal infrared camera T1, a scale and position transformation matrix Q is constructed; after the scale and position of the undistorted thermal infrared image Ic1 are adjusted with the matrix Q, a projection transformation matrix is used to perform spatial transformation, obtaining the registered thermal infrared image Ic3;

6) according to the intrinsic parameter matrix Ka of the first visible light camera C1, the intrinsic parameter matrix Kb of the second visible light camera C2, the rotation matrix R, and the translation vector t, the first undistorted visible light image Ia1 and the second undistorted visible light image Ib1 are subjected to stereo correction and parallax estimation processing to obtain the depth image Idepth;

7) using the Bouguet epipolar rectification method, the registered thermal infrared image Ic3 is subjected to the same stereo correction as the first undistorted visible light image Ia1, obtaining the corrected thermal infrared image Iir;

8) using the first corrected visible light image Ia2 and the depth image Idepth, a scene three-dimensional color point cloud Pc is constructed; using the corrected thermal infrared image Iir and the depth image Idepth, a scene three-dimensional temperature field point cloud Pt is constructed; a depth threshold d is selected according to the known distance between the ground and the camera, and the points above the ground surface are extracted from the scene three-dimensional color point cloud Pc and the scene three-dimensional temperature field point cloud Pt according to the depth threshold d, obtaining the background-removed three-dimensional color point cloud P'c and three-dimensional temperature field point cloud P't, respectively;

9) using the DBSCAN clustering method, the background-removed three-dimensional color point cloud P'c and three-dimensional temperature field point cloud P't are clustered respectively to obtain the corresponding clustering results; in each clustering result, the cluster with the most points is selected as the chicken three-dimensional color point cloud P''c and the chicken three-dimensional temperature field point cloud P''t, respectively; the chicken three-dimensional color point cloud P''c and the chicken three-dimensional temperature field point cloud P''t are concatenated to obtain the chicken point cloud model M1 containing RGBDT information.

2. The three-dimensional reconstruction method for chickens containing RGBDT information according to claim 1, wherein in the step 2), the intrinsic parameter matrix Ka of the first visible light camera C1, the intrinsic parameter matrix Kb of the second visible light camera C2, and the intrinsic parameter matrix Kc of the thermal infrared camera T1 are respectively expressed as:

$$K_a = \begin{bmatrix} f_x^a & 0 & u_0^a \\ 0 & f_y^a & v_0^a \\ 0 & 0 & 1 \end{bmatrix}, \qquad K_b = \begin{bmatrix} f_x^b & 0 & u_0^b \\ 0 & f_y^b & v_0^b \\ 0 & 0 & 1 \end{bmatrix}, \qquad K_c = \begin{bmatrix} f_x^c & 0 & u_0^c \\ 0 & f_y^c & v_0^c \\ 0 & 0 & 1 \end{bmatrix}$$

wherein $f_x^a$ and $f_y^a$ respectively represent the lateral and longitudinal focal lengths of the lens of the first visible light camera C1, and $u_0^a$ and $v_0^a$ respectively represent the two coordinate values of the principal point of the first visible light camera C1 in the pixel coordinate system; $f_x^b$, $f_y^b$, $u_0^b$, and $v_0^b$ represent the corresponding quantities for the second visible light camera C2; and $f_x^c$, $f_y^c$, $u_0^c$, and $v_0^c$ represent the corresponding quantities for the thermal infrared camera T1.

3. The three-dimensional reconstruction method for chickens containing RGBDT information according to claim 1, wherein the step 4) is specifically as follows:

4.1) using the first visible light camera C1, the second visible light camera C2, and the thermal infrared camera T1 in the chicken image acquisition system to synchronously shoot the chicken, then respectively acquiring the first original visible light image Ia0, the second original visible light image Ib0, and the original thermal infrared image Ic0;

4.2) using the distortion coefficients of the first visible light camera C1, the second visible light camera C2, and the thermal infrared camera T1, the first original visible light image Ia0, the second original visible light image Ib0, and the original thermal infrared image Ic0 are respectively subjected to the distortion removal operation, obtaining the first undistorted visible light image Ia1, the second undistorted visible light image Ib1, and the undistorted thermal infrared image Ic1; wherein the distortion removal operation comprises radial and tangential distortion correction and is mainly set by the following formulas:

$$u_d = (u - u_0)\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) + 2 p_1 x y + p_2 \left(r^2 + 2 x^2\right) + u_0$$

$$v_d = (v - v_0)\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) + p_1 \left(r^2 + 2 y^2\right) + 2 p_2 x y + v_0$$

$$r^2 = x^2 + y^2$$

wherein u and v respectively represent the two coordinate values of a pixel in the first original visible light image Ia0, the second original visible light image Ib0, or the original thermal infrared image Ic0; ud and vd respectively represent the two coordinate values of the corresponding pixel in the first undistorted visible light image Ia1, the second undistorted visible light image Ib1, or the undistorted thermal infrared image Ic1; x and y denote the corresponding normalized image-plane coordinates; k1, k2, and k3 respectively represent the first, second, and third radial distortion coefficients of the first visible light camera C1, the second visible light camera C2, or the thermal infrared camera T1; p1 and p2 respectively represent the first and second tangential distortion coefficients of the corresponding camera; r denotes the radial distance of the point (x, y) from the optical axis; u0 and v0 respectively represent the two coordinate values of the principal point of the corresponding camera in the pixel coordinate system.

4. The three-dimensional reconstruction method for chickens containing RGBDT information according to claim 1, wherein the step 5) is specifically as follows:

5.1) according to the intrinsic parameter matrix Ka of the first visible light camera C1 and the intrinsic parameter matrix Kc of the thermal infrared camera T1, a scale and position transformation matrix Q is constructed; after the scale and position of the undistorted thermal infrared image Ic1 are adjusted with the matrix Q, the primary thermal infrared transformed image Ic2 is obtained, set by the following formula:

$$I_{c2} = Q\, I_{c1}, \qquad Q = \begin{bmatrix} \alpha_{a/c} & 0 & T_x \\ 0 & \beta_{a/c} & T_y \\ 0 & 0 & 1 \end{bmatrix}, \qquad T_x = \frac{c_a - \alpha_{a/c}\, c_c}{2}, \qquad T_y = \frac{r_a - \beta_{a/c}\, r_c}{2}$$

wherein αa/c, βa/c, Tx, and Ty respectively represent the transverse magnification factor, longitudinal magnification factor, transverse translation distance, and longitudinal translation distance applied to the undistorted thermal infrared image Ic1; ca and cc respectively represent the numbers of columns of the first undistorted visible light image Ia1 and the undistorted thermal infrared image Ic1; ra and rc respectively represent their numbers of rows; Tx and Ty respectively represent the translation amounts of the undistorted thermal infrared image Ic1 in the lateral and longitudinal directions of the pixel coordinate system;

5.2) the N groups of heating calibration plate images acquired by the first visible light camera C1 in the step 2) are taken; the checkerboard corner coordinates are sequentially extracted from each heating calibration plate image acquired by the first visible light camera C1, and the extracted checkerboard corner coordinates are sequentially stored in the visible light corner matrix M1;

5.3) the N groups of heating calibration plate images acquired by the thermal infrared camera T1 in the step 2) are taken and transformed in scale and position using the scale and position transformation matrix Q, obtaining N groups of transformed calibration plate images; the checkerboard corner coordinates are sequentially extracted from the transformed calibration plate images, and the extracted checkerboard corner coordinates are sequentially stored in the thermal infrared corner matrix M2;

5.4) a checkerboard corner point pair set is formed from the visible light corner matrix M1 and the thermal infrared corner matrix M2; the M-estimator sample consensus algorithm is adopted to eliminate wrong matching point pairs and screen out the optimal L matching point pairs; based on the optimal L matching point pairs, the projection matrix P is solved using a projection transformation model; the solved projection matrix P is used to apply translation, rotation, and perspective operations to the primary thermal infrared transformed image Ic2, obtaining the registered thermal infrared image Ic3, set by the following formula:

$$I_{c3} = P\, I_{c2}$$

5. The three-dimensional reconstruction method for chickens containing RGBDT information according to claim 1, wherein the step 6) is specifically as follows:

6.1) according to the intrinsic parameter matrix Ka of the first visible light camera C1, the intrinsic parameter matrix Kb of the second visible light camera C2, the rotation matrix R, and the translation vector t, the Bouguet epipolar rectification method is used to perform stereo correction on the first undistorted visible light image Ia1 and the second undistorted visible light image Ib1, obtaining the first corrected visible light image Ia2 and the second corrected visible light image Ib2, respectively;

6.2) the first corrected visible light image Ia2 and the second corrected visible light image Ib2 are input into an adaptive aggregation network model for stereo matching and parallax estimation, and the parallax image Idisp is output;

6.3) the triangulation principle is used to perform depth solution on the parallax image Idisp, obtaining the depth image Idepth.

6. The three-dimensional reconstruction method for chickens containing RGBDT information according to claim 5, wherein the step 6.1) is specifically as follows:

first, according to the intrinsic parameter matrix Ka of the first visible light camera C1, the intrinsic parameter matrix Kb of the second visible light camera C2, the rotation matrix R, and the translation vector t, the camera coordinate systems of the first visible light camera C1 and the second visible light camera C2 are each rotated toward the other, the rotation angle being half of the included angle between the two camera coordinate systems and the rotation directions being opposite, so that the imaging planes of the first visible light camera C1 and the second visible light camera C2 become coplanar; the specific formula is as follows:

$$r_l = R^{1/2}, \qquad r_r = R^{-1/2}$$

then, the first visible light camera C1 and the second visible light camera C2 are rotated around their optical axes so that the two imaging planes become row-aligned; the specific formula is as follows:

$$e_1 = \frac{t}{\lVert t \rVert}, \qquad e_2 = \frac{\left[-t_y,\ t_x,\ 0\right]^T}{\sqrt{t_x^2 + t_y^2}}, \qquad e_3 = e_1 \times e_2, \qquad r_{rect} = \begin{bmatrix} e_1^T \\ e_2^T \\ e_3^T \end{bmatrix}$$

$$R_l = r_{rect}\, r_l, \qquad R_r = r_{rect}\, r_r$$

wherein R is the rotation matrix between the first visible light camera C1 and the second visible light camera C2; rl is the first camera rotation matrix by which the first visible light camera C1 rotates about its own camera coordinate system; rr is the second camera rotation matrix by which the second visible light camera C2 rotates about its own camera coordinate system; rrect is the rotation matrix by which the first visible light camera C1 or the second visible light camera C2 rotates about its own optical axis; Rl is the first stereo correction matrix combining the two rotation operations of the first visible light camera C1; Rr is the second stereo correction matrix combining the two rotation operations of the second visible light camera C2;

finally, Rl and Rr are respectively left-multiplied to the first undistorted visible light image Ia1 and the second undistorted visible light image Ib1, and the two transformed visible light images are cropped so that their sizes are the same as those of the first undistorted visible light image Ia1 or the second undistorted visible light image Ib1; the cropped images are respectively taken as the first corrected visible light image Ia2 and the second corrected visible light image Ib2.

Background

The livestock and poultry breeding industry is a major industry supporting China's national economy. High-quality consumption demands and scarce labor resources are pushing the industry to transform toward informatization, refinement, and intelligence; the demand for intelligent monitoring of the physiological health of livestock and poultry is particularly urgent, and the fusion of multi-source livestock and poultry images is therefore very important.

Because traditional two-dimensional images lack real three-dimensional data and can hardly reflect the body type and body condition of livestock and poultry, more and more researchers have in recent years used three-dimensional reconstruction technology to estimate animal body type and weight (Mortensen A K, Lisouski P, Ahrendt P. Weight prediction of broiler chickens using 3D computer vision [J]. Computers & Electronics in Agriculture, 2016, 123: 319-326). However, such methods can only obtain the color and depth information of the animal, while for chickens the body surface temperature information is crucial: it can visually reflect the physiological health status of the chicken. For example, when the ambient temperature is too high, chickens have difficulty dissipating heat and may enter a hyperthermic, heat-stressed state, which impairs their physiological comfort and welfare (Souza-Junior J B F, El-Sabrout K, Arruda A M V D, et al.). With the development of thermal imaging technology, infrared thermal imaging has been shown to have remarkable advantages in evaluating the thermal physiological state of animals. If the color, temperature, and three-dimensional data of chickens can undergo multi-source fusion, the health and welfare condition of the chickens can be visually observed and judged, which is conducive to improving the intelligence level of livestock and poultry monitoring.

Binocular stereoscopic vision is an important branch of computer vision: by performing stereo matching on binocular images, a depth image of the scene can be obtained and its three-dimensional geometric information reconstructed. Directly using two thermal imagers for three-dimensional reconstruction is problematic, because thermal infrared images have low resolution and blurred details, so the reconstructed three-dimensional data would carry large errors; a binocular visible light camera pair is therefore better suited for acquiring the three-dimensional geometric data. Stereo matching is the most critical step in binocular reconstruction. Traditional stereo matching algorithms are often strongly affected by illumination and can hardly overcome weak-texture regions in images, leading to large matching errors. Influenced by the wave of deep learning, more and more researchers have applied deep learning methods to stereo matching research and obtained better results, such as the adaptive aggregation network AANet+ (Xu H, Zhang J. AANet: Adaptive Aggregation Network for Efficient Stereo Matching. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 1959-1968).

In view of the above, it is not suitable to directly use two thermal imagers for three-dimensional reconstruction; and in order to make the visible light image, the thermal infrared image, and the depth image correspond to each other at the pixel level, image registration between the visible light image and the thermal infrared image is needed. However, the visible light image has high spatial resolution and image contrast, while the thermal infrared image has low resolution and poor image detail, so conventional feature-based image registration methods, such as the SIFT and SURF feature detection algorithms, can hardly detect correct matching pairs and cannot guarantee the fusion of the heterogeneous data.

Disclosure of Invention

In order to solve the problems in the prior art, the invention provides a three-dimensional reconstruction method for chickens containing RGBDT information. The method constructs a three-dimensional model of the chicken containing color, depth, and temperature (RGB-Depth-Thermal, RGBDT) information, solves the problem that matching pairs are difficult to find between visible light images and thermal infrared images, and adopts the adaptive aggregation network AANet+ for stereo matching, improving the robustness of the algorithm.

The technical scheme adopted by the invention for solving the technical problems is as follows:

the method comprises the following steps:

1) a chicken image acquisition system is established, comprising a camera device and a computer, wherein the camera device is connected with the computer through a network cable and is fixedly installed right above the chicken or the heating calibration plate; the camera device mainly comprises a first visible light camera C1, a second visible light camera C2, and a thermal infrared camera T1, which are arranged in parallel and at intervals;

2) according to Zhang Zhengyou's plane calibration method, the first visible light camera C1, the second visible light camera C2, and the thermal infrared camera T1 in the chicken image acquisition system synchronously shoot the heating calibration plate at N different angles, and N groups of heating calibration plate images are acquired by each camera; using the N groups of heating calibration plate images acquired by each camera, camera calibration is performed to obtain the extrinsic parameter matrix EXa, the intrinsic parameter matrix Ka, and the distortion coefficients of the first visible light camera C1, the extrinsic parameter matrix EXb, the intrinsic parameter matrix Kb, and the distortion coefficients of the second visible light camera C2, and the intrinsic parameter matrix Kc and the distortion coefficients of the thermal infrared camera T1;

3) according to the extrinsic parameter matrix EXa of the first visible light camera C1 and the extrinsic parameter matrix EXb of the second visible light camera C2, stereo calibration is carried out on the first visible light camera C1 and the second visible light camera C2 to obtain the rotation matrix R and the translation vector t between the first visible light camera C1 and the second visible light camera C2;

4) the chicken image acquisition system is used to acquire original images of the chicken, and the distortion coefficients of the first visible light camera C1, the second visible light camera C2, and the thermal infrared camera T1 are used to perform a distortion removal operation on the original images of the chicken to obtain undistorted images;

5) according to the intrinsic parameter matrix Ka of the first visible light camera C1 and the intrinsic parameter matrix Kc of the thermal infrared camera T1, a scale and position transformation matrix Q is constructed; after the scale and position of the undistorted thermal infrared image Ic1 are adjusted with the matrix Q, a projection transformation matrix is used to perform spatial transformation, obtaining the registered thermal infrared image Ic3;

6) according to the intrinsic parameter matrix Ka of the first visible light camera C1, the intrinsic parameter matrix Kb of the second visible light camera C2, the rotation matrix R, and the translation vector t, the first undistorted visible light image Ia1 and the second undistorted visible light image Ib1 are subjected to stereo correction and parallax estimation processing to obtain the depth image Idepth;

7) using the Bouguet epipolar rectification method, the registered thermal infrared image Ic3 is subjected to the same stereo correction as the first undistorted visible light image Ia1, obtaining the corrected thermal infrared image Iir;

8) using the first corrected visible light image Ia2 and the depth image Idepth, a scene three-dimensional color point cloud Pc is constructed; using the corrected thermal infrared image Iir and the depth image Idepth, a scene three-dimensional temperature field point cloud Pt is constructed; a depth threshold d is selected according to the known distance between the ground and the camera, and the points above the ground surface are extracted from the scene three-dimensional color point cloud Pc and the scene three-dimensional temperature field point cloud Pt according to the depth threshold d, obtaining the background-removed three-dimensional color point cloud P'c and three-dimensional temperature field point cloud P't, respectively;

9) using the DBSCAN clustering method, the background-removed three-dimensional color point cloud P'c and three-dimensional temperature field point cloud P't are clustered respectively to obtain the corresponding clustering results; in each clustering result, the cluster with the most points is selected as the chicken three-dimensional color point cloud P''c and the chicken three-dimensional temperature field point cloud P''t, respectively; the chicken three-dimensional color point cloud P''c and the chicken three-dimensional temperature field point cloud P''t are concatenated to obtain the chicken point cloud model M1 containing RGBDT information.

In the step 2), the intrinsic parameter matrix Ka of the first visible light camera C1, the intrinsic parameter matrix Kb of the second visible light camera C2, and the intrinsic parameter matrix Kc of the thermal infrared camera T1 are respectively expressed as:

$$K_a = \begin{bmatrix} f_x^a & 0 & u_0^a \\ 0 & f_y^a & v_0^a \\ 0 & 0 & 1 \end{bmatrix}, \qquad K_b = \begin{bmatrix} f_x^b & 0 & u_0^b \\ 0 & f_y^b & v_0^b \\ 0 & 0 & 1 \end{bmatrix}, \qquad K_c = \begin{bmatrix} f_x^c & 0 & u_0^c \\ 0 & f_y^c & v_0^c \\ 0 & 0 & 1 \end{bmatrix}$$

wherein $f_x^a$ and $f_y^a$ respectively represent the lateral and longitudinal focal lengths of the lens of the first visible light camera C1, and $u_0^a$ and $v_0^a$ respectively represent the two coordinate values of the principal point of the first visible light camera C1 in the pixel coordinate system; $f_x^b$, $f_y^b$, $u_0^b$, and $v_0^b$ represent the corresponding quantities for the second visible light camera C2; and $f_x^c$, $f_y^c$, $u_0^c$, and $v_0^c$ represent the corresponding quantities for the thermal infrared camera T1.

The step 4) is specifically as follows:

4.1) using the first visible light camera C1, the second visible light camera C2, and the thermal infrared camera T1 in the chicken image acquisition system to synchronously shoot the chicken, then respectively acquiring the first original visible light image Ia0, the second original visible light image Ib0, and the original thermal infrared image Ic0;

4.2) using the distortion coefficients of the first visible light camera C1, the second visible light camera C2, and the thermal infrared camera T1, the first original visible light image Ia0, the second original visible light image Ib0, and the original thermal infrared image Ic0 are respectively subjected to the distortion removal operation, obtaining the first undistorted visible light image Ia1, the second undistorted visible light image Ib1, and the undistorted thermal infrared image Ic1; wherein the distortion removal operation comprises radial and tangential distortion correction and is mainly set by the following formulas:

$$u_d = (u - u_0)\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) + 2 p_1 x y + p_2 \left(r^2 + 2 x^2\right) + u_0$$

$$v_d = (v - v_0)\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) + p_1 \left(r^2 + 2 y^2\right) + 2 p_2 x y + v_0$$

$$r^2 = x^2 + y^2$$

wherein u and v respectively represent the two coordinate values of a pixel in the first original visible light image Ia0, the second original visible light image Ib0, or the original thermal infrared image Ic0; ud and vd respectively represent the two coordinate values of the corresponding pixel in the first undistorted visible light image Ia1, the second undistorted visible light image Ib1, or the undistorted thermal infrared image Ic1; x and y denote the corresponding normalized image-plane coordinates; k1, k2, and k3 respectively represent the first, second, and third radial distortion coefficients of the first visible light camera C1, the second visible light camera C2, or the thermal infrared camera T1; p1 and p2 respectively represent the first and second tangential distortion coefficients of the corresponding camera; r denotes the radial distance of the point (x, y) from the optical axis; u0 and v0 respectively represent the two coordinate values of the principal point of the corresponding camera in the pixel coordinate system.

The step 5) is specifically as follows:

5.1) according to the intrinsic parameter matrix Ka of the first visible light camera C1 and the intrinsic parameter matrix Kc of the thermal infrared camera T1, a scale and position transformation matrix Q is constructed; after the scale and position of the undistorted thermal infrared image Ic1 are adjusted with the matrix Q, the primary thermal infrared transformed image Ic2 is obtained, set by the following formula:

$$I_{c2} = Q\, I_{c1}, \qquad Q = \begin{bmatrix} \alpha_{a/c} & 0 & T_x \\ 0 & \beta_{a/c} & T_y \\ 0 & 0 & 1 \end{bmatrix}, \qquad T_x = \frac{c_a - \alpha_{a/c}\, c_c}{2}, \qquad T_y = \frac{r_a - \beta_{a/c}\, r_c}{2}$$

wherein αa/c, βa/c, Tx, and Ty respectively represent the transverse magnification factor, longitudinal magnification factor, transverse translation distance, and longitudinal translation distance applied to the undistorted thermal infrared image Ic1; ca and cc respectively represent the numbers of columns of the first undistorted visible light image Ia1 and the undistorted thermal infrared image Ic1; ra and rc respectively represent their numbers of rows; Tx and Ty respectively represent the translation amounts of the undistorted thermal infrared image Ic1 in the lateral and longitudinal directions of the pixel coordinate system;

5.2) the N groups of heating calibration plate images acquired by the first visible light camera C1 in the step 2) are taken; the checkerboard corner coordinates are sequentially extracted from each heating calibration plate image acquired by the first visible light camera C1, and the extracted checkerboard corner coordinates are sequentially stored in the visible light corner matrix M1;

5.3) the N groups of heating calibration plate images acquired by the thermal infrared camera T1 in the step 2) are taken and transformed in scale and position using the scale and position transformation matrix Q, obtaining N groups of transformed calibration plate images; the checkerboard corner coordinates are sequentially extracted from the transformed calibration plate images, and the extracted checkerboard corner coordinates are sequentially stored in the thermal infrared corner matrix M2;

5.4) a checkerboard corner point pair set is formed from the visible light corner matrix M1 and the thermal infrared corner matrix M2; the M-estimator sample consensus algorithm is adopted to eliminate wrong matching point pairs and screen out the optimal L matching point pairs; based on the optimal L matching point pairs, the projection matrix P is solved using a projection transformation model; the solved projection matrix P is used to apply translation, rotation, and perspective operations to the primary thermal infrared transformed image Ic2, obtaining the registered thermal infrared image Ic3, set by the following formula:

$$I_{c3} = P\, I_{c2}$$

the step 6) is specifically as follows:

6.1) according to the intrinsic parameter matrix Ka of the first visible light camera C1, the intrinsic parameter matrix Kb of the second visible light camera C2, the rotation matrix R, and the translation vector t, the Bouguet epipolar rectification method is used to perform stereo correction on the first undistorted visible light image Ia1 and the second undistorted visible light image Ib1, obtaining the first corrected visible light image Ia2 and the second corrected visible light image Ib2, respectively;

6.2) the first corrected visible light image Ia2 and the second corrected visible light image Ib2 are input into an adaptive aggregation network model for stereo matching and parallax estimation, and the parallax image Idisp is output;

6.3) the triangulation principle is used to perform depth solution on the parallax image Idisp, obtaining the depth image Idepth.

The step 6.1) is specifically as follows:

first, according to the intrinsic parameter matrix Ka of the first visible light camera C1, the intrinsic parameter matrix Kb of the second visible light camera C2, the rotation matrix R, and the translation vector t, the camera coordinate systems of the first visible light camera C1 and the second visible light camera C2 are each rotated toward the other, the rotation angle being half of the included angle between the two camera coordinate systems and the rotation directions being opposite, so that the imaging planes of the first visible light camera C1 and the second visible light camera C2 become coplanar; the specific formula is as follows:

$$r_l = R^{1/2}, \qquad r_r = R^{-1/2}$$

then, the first visible light camera C1 and the second visible light camera C2 are rotated around their optical axes so that the two imaging planes become row-aligned; the specific formula is as follows:

$$e_1 = \frac{t}{\lVert t \rVert}, \qquad e_2 = \frac{\left[-t_y,\ t_x,\ 0\right]^T}{\sqrt{t_x^2 + t_y^2}}, \qquad e_3 = e_1 \times e_2, \qquad r_{rect} = \begin{bmatrix} e_1^T \\ e_2^T \\ e_3^T \end{bmatrix}$$

$$R_l = r_{rect}\, r_l, \qquad R_r = r_{rect}\, r_r$$

wherein R is the rotation matrix between the first visible light camera C1 and the second visible light camera C2; rl is the first camera rotation matrix by which the first visible light camera C1 rotates about its own camera coordinate system; rr is the second camera rotation matrix by which the second visible light camera C2 rotates about its own camera coordinate system; rrect is the rotation matrix by which the first visible light camera C1 or the second visible light camera C2 rotates about its own optical axis; Rl is the first stereo correction matrix combining the two rotation operations of the first visible light camera C1; Rr is the second stereo correction matrix combining the two rotation operations of the second visible light camera C2;

finally, Rl and Rr are respectively left-multiplied to the first undistorted visible light image Ia1 and the second undistorted visible light image Ib1, and the two transformed visible light images are cropped so that their sizes are the same as those of the first undistorted visible light image Ia1 or the second undistorted visible light image Ib1; the cropped images are respectively taken as the first corrected visible light image Ia2 and the second corrected visible light image Ib2.

The invention has the beneficial effects that:

According to the invention, a point cloud model of the chicken containing RGBDT information is constructed, the registration problem between visible light and thermal infrared images is solved, and the adaptive aggregation network AANet+ is adopted for stereo matching, improving the robustness of the algorithm.

The chicken point cloud model containing RGBDT information can visually reflect the feather color, temperature, and body condition of each part of the chicken, supporting automatic and intelligent management of the breeding process.

Drawings

FIG. 1 is a general flow diagram of the present invention.

Fig. 2 is a schematic diagram of the chicken image acquisition system of the present invention.

Fig. 3 is a schematic diagram of the camera calibration and registration process of the present invention.

FIG. 4 is a schematic diagram of the original image Ia0 acquired by the visible light camera C1 in an embodiment of the present invention.

FIG. 5 is a schematic diagram of the original image Ib0 acquired by the visible light camera C2 in an embodiment of the present invention.

FIG. 6 is a schematic diagram of the original image Ic0 acquired by the thermal infrared camera T1 in an embodiment of the present invention.

FIG. 7 is a schematic diagram of the stereo-corrected image Ia2 of the visible light camera C1 in an embodiment of the present invention.

FIG. 8 is a schematic diagram of the stereo-corrected image Ib2 of the visible light camera C2 in an embodiment of the present invention.

FIG. 9 is a schematic diagram of the depth image Idepth obtained after stereo matching of FIG. 7 and FIG. 8 in an embodiment of the present invention.

FIG. 10 is a schematic diagram of the thermal infrared image Iir obtained from Ic0 after image registration and stereo correction transformation in an embodiment of the present invention.

FIG. 11 is a schematic diagram of the scene three-dimensional color point cloud Pc in an embodiment of the present invention.

FIG. 12 is a schematic diagram of the scene three-dimensional temperature field point cloud Pt in an embodiment of the present invention.

FIG. 13 is a schematic diagram of the background-removed three-dimensional color point cloud P'c in an embodiment of the present invention.

FIG. 14 is a schematic diagram of the background-removed three-dimensional temperature field point cloud P't in an embodiment of the present invention.

FIG. 15 is a schematic diagram of the chicken point cloud model M1 containing RGBDT information in an embodiment of the present invention.

Detailed Description

The invention is further described with reference to the following figures and specific embodiments.

As shown in fig. 1, the specific process of the embodiment of the present invention is as follows:

1) a chicken image acquisition system is established, as shown in fig. 2, comprising a camera device and a computer, wherein the camera device is connected with the computer through a network cable and is fixedly installed right above the chicken or the heating calibration plate; the camera device mainly comprises a first visible light camera C1, a second visible light camera C2, and a thermal infrared camera T1, arranged in parallel and at intervals. The heating calibration plate is formed by attaching a heating plate to the back of a checkerboard calibration plate. The distance between the two visible light cameras is 11 cm, the distance between the first visible light camera C1 and the thermal infrared camera T1 is 4 cm, and the installation height of the system above the ground is 150 cm. The image resolution of the first visible light camera C1 and the second visible light camera C2 is 1920 pix × 1080 pix, and the image resolution of the thermal infrared camera T1 is 384 pix × 288 pix. The checkerboard is a 16 × 15 array of black and white squares, each square measuring 25 mm × 25 mm.

2) As shown in FIG. 3, according to Zhang Zhengyou's plane calibration method, the first visible light camera C1, the second visible light camera C2, and the thermal infrared camera T1 in the chicken image acquisition system synchronously shoot the heating calibration plate at N different angles, and N groups of heating calibration plate images are acquired, where N > 15; in this embodiment, N = 23. Using the N groups of heating calibration plate images acquired by each camera, camera calibration is performed to obtain the extrinsic parameter matrix EXa, the intrinsic parameter matrix Ka, and the distortion coefficients of the first visible light camera C1, the extrinsic parameter matrix EXb, the intrinsic parameter matrix Kb, and the distortion coefficients of the second visible light camera C2, and the intrinsic parameter matrix Kc and the distortion coefficients of the thermal infrared camera T1;

In step 2), the intrinsic parameter matrix Ka of the first visible light camera C1, the intrinsic parameter matrix Kb of the second visible light camera C2, and the intrinsic parameter matrix Kc of the thermal infrared camera T1 are respectively expressed as:

$$K_a = \begin{bmatrix} f_x^a & 0 & u_0^a \\ 0 & f_y^a & v_0^a \\ 0 & 0 & 1 \end{bmatrix}, \qquad K_b = \begin{bmatrix} f_x^b & 0 & u_0^b \\ 0 & f_y^b & v_0^b \\ 0 & 0 & 1 \end{bmatrix}, \qquad K_c = \begin{bmatrix} f_x^c & 0 & u_0^c \\ 0 & f_y^c & v_0^c \\ 0 & 0 & 1 \end{bmatrix}$$

wherein $f_x^a$ and $f_y^a$ respectively represent the lateral and longitudinal focal lengths of the lens of the first visible light camera C1, and $u_0^a$ and $v_0^a$ respectively represent the two coordinate values of the principal point of the first visible light camera C1 in the pixel coordinate system; $f_x^b$, $f_y^b$, $u_0^b$, and $v_0^b$ represent the corresponding quantities for the second visible light camera C2; and $f_x^c$, $f_y^c$, $u_0^c$, and $v_0^c$ represent the corresponding quantities for the thermal infrared camera T1.
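By way of illustration, the per-camera calibration of step 2) maps directly onto OpenCV's implementation of Zhang's method. The sketch below is a minimal, non-authoritative example using the embodiment's board geometry (a 16 × 15 checkerboard of squares has 15 × 14 inner corners at a 25 mm pitch); the image paths are hypothetical, and corner detection on the thermal images may additionally require contrast adjustment in practice.

```python
import glob

import cv2
import numpy as np

PATTERN = (15, 14)   # inner corners of the 16 x 15 checkerboard of squares
SQUARE_MM = 25.0     # square size in the embodiment

# Board points in the calibration-plate frame (Z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

def calibrate(image_glob):
    """Zhang's planar calibration for one camera from N board views."""
    obj_pts, img_pts, size = [], [], None
    for path in sorted(glob.glob(image_glob)):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if not found:
            continue
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)
    # K is the 3x3 intrinsic matrix; dist holds (k1, k2, p1, p2, k3).
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, size, None, None)
    return K, dist, rvecs, tvecs

K_a, dist_a, rv_a, tv_a = calibrate("c1/*.png")  # hypothetical paths
K_b, dist_b, rv_b, tv_b = calibrate("c2/*.png")
K_c, dist_c, rv_c, tv_c = calibrate("t1/*.png")
```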

3) According to the extrinsic parameter matrix EXa of the first visible light camera C1 and the extrinsic parameter matrix EXb of the second visible light camera C2, stereo calibration is carried out on the first visible light camera C1 and the second visible light camera C2 to obtain the rotation matrix R and the translation vector t between the first visible light camera C1 and the second visible light camera C2;
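Continuing the sketch above, the stereo calibration of step 3) can be reproduced with cv2.stereoCalibrate; here obj_pts, img_pts_a, and img_pts_b are assumed to hold the per-view board points and the corresponding corner lists of the two visible cameras, with intrinsics fixed to the mono-calibration results.

```python
image_size = (1920, 1080)        # C1/C2 resolution in the embodiment
flags = cv2.CALIB_FIX_INTRINSIC  # keep K_a, K_b from the mono calibration
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-5)
# R and t are the rotation matrix and translation vector between C1 and C2.
rms, _, _, _, _, R, t, E, F = cv2.stereoCalibrate(
    obj_pts, img_pts_a, img_pts_b,
    K_a, dist_a, K_b, dist_b, image_size,
    criteria=criteria, flags=flags)
```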

4) the chicken image acquisition system is used to acquire original images of the chicken, and the distortion coefficients of the first visible light camera C1, the second visible light camera C2, and the thermal infrared camera T1 are used to perform a distortion removal operation on the original images of the chicken to obtain undistorted images;

the step 4) is specifically as follows:

4.1) using the first visible light camera C1, the second visible light camera C2, and the thermal infrared camera T1 in the chicken image acquisition system to synchronously shoot the chicken, then respectively acquiring the first original visible light image Ia0, the second original visible light image Ib0, and the original thermal infrared image Ic0, as shown in fig. 4-6, respectively;

4.2) using the distortion coefficients of the first visible light camera C1, the second visible light camera C2, and the thermal infrared camera T1, the first original visible light image Ia0, the second original visible light image Ib0, and the original thermal infrared image Ic0 are respectively subjected to the distortion removal operation, obtaining the first undistorted visible light image Ia1, the second undistorted visible light image Ib1, and the undistorted thermal infrared image Ic1. The distortion removal operation comprises radial and tangential distortion correction and is mainly set by the following formulas:

$$u_d = (u - u_0)\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) + 2 p_1 x y + p_2 \left(r^2 + 2 x^2\right) + u_0$$

$$v_d = (v - v_0)\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) + p_1 \left(r^2 + 2 y^2\right) + 2 p_2 x y + v_0$$

$$r^2 = x^2 + y^2$$

wherein u and v respectively represent the two coordinate values of a pixel in the first original visible light image Ia0, the second original visible light image Ib0, or the original thermal infrared image Ic0; ud and vd respectively represent the two coordinate values of the corresponding pixel in the first undistorted visible light image Ia1, the second undistorted visible light image Ib1, or the undistorted thermal infrared image Ic1; x and y denote the corresponding normalized image-plane coordinates; k1, k2, and k3 respectively represent the first, second, and third radial distortion coefficients of the first visible light camera C1, the second visible light camera C2, or the thermal infrared camera T1; p1 and p2 respectively represent the first and second tangential distortion coefficients of the corresponding camera; r denotes the radial distance of the point (x, y) from the optical axis; u0 and v0 respectively represent the two coordinate values of the principal point of the corresponding camera in the pixel coordinate system.
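A minimal sketch of step 4.2): OpenCV's undistort applies the same radial-tangential model as the formulas above, with its distortion vector stored in the order (k1, k2, p1, p2, k3); the frames I_a0, I_b0, I_c0 and the calibration results carry over from the earlier sketches.

```python
import cv2

I_a1 = cv2.undistort(I_a0, K_a, dist_a)  # first undistorted visible image
I_b1 = cv2.undistort(I_b0, K_b, dist_b)  # second undistorted visible image
I_c1 = cv2.undistort(I_c0, K_c, dist_c)  # undistorted thermal image
```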

5) According to the intrinsic parameter matrix Ka of the first visible light camera C1 and the intrinsic parameter matrix Kc of the thermal infrared camera T1, a scale and position transformation matrix Q is constructed; after the scale and position of the undistorted thermal infrared image Ic1 are adjusted with the matrix Q, a projection transformation matrix is used to perform spatial transformation, obtaining the registered thermal infrared image Ic3.

The step 5) is specifically as follows:

5.1) according to the intrinsic parameter matrix Ka of the first visible light camera C1 and the intrinsic parameter matrix Kc of the thermal infrared camera T1, a scale and position transformation matrix Q is constructed; after the scale and position of the undistorted thermal infrared image Ic1 are adjusted with the matrix Q, the undistorted thermal infrared image Ic1 and the first undistorted visible light image Ia1 have the same focal length and principal point, the resolution and position differences are preliminarily eliminated, and the primary thermal infrared transformed image Ic2 is obtained, set by the following formula:

$$I_{c2} = Q\, I_{c1}, \qquad Q = \begin{bmatrix} \alpha_{a/c} & 0 & T_x \\ 0 & \beta_{a/c} & T_y \\ 0 & 0 & 1 \end{bmatrix}, \qquad T_x = \frac{c_a - \alpha_{a/c}\, c_c}{2}, \qquad T_y = \frac{r_a - \beta_{a/c}\, r_c}{2}$$

wherein αa/c, βa/c, Tx, and Ty respectively represent the transverse magnification factor, longitudinal magnification factor, transverse translation distance, and longitudinal translation distance applied to the undistorted thermal infrared image Ic1. In the specific embodiment, αa/c is the ratio of the horizontal focal lengths of the first visible light camera C1 and the thermal infrared camera T1, and βa/c is the ratio of their vertical focal lengths; ca and cc respectively represent the numbers of columns of the first undistorted visible light image Ia1 and the undistorted thermal infrared image Ic1, and ra and rc their numbers of rows; Tx and Ty respectively represent the translation amounts of the undistorted thermal infrared image Ic1 in the lateral and longitudinal directions of the pixel coordinate system.

in the embodiment of the present invention, it is,

5.2) the N groups of heating calibration plate images acquired by the first visible light camera C1 in the step 2) are taken; the checkerboard corner coordinates are sequentially extracted from each heating calibration plate image acquired by the first visible light camera C1, and the extracted checkerboard corner coordinates are sequentially stored in the visible light corner matrix M1;

5.3) the N groups of heating calibration plate images acquired by the thermal infrared camera T1 in the step 2) are taken and transformed in scale and position using the scale and position transformation matrix Q, obtaining N groups of transformed calibration plate images; the checkerboard corner coordinates are sequentially extracted from the transformed calibration plate images, and the extracted checkerboard corner coordinates are sequentially stored in the thermal infrared corner matrix M2;

5.4) a checkerboard corner point pair set is formed from the visible light corner matrix M1 and the thermal infrared corner matrix M2; the M-estimator sample consensus (MSAC) method is adopted to eliminate wrong matching point pairs and screen out the optimal L matching point pairs; based on the optimal L matching point pairs, the projection matrix P is solved using a projection transformation model; the solved projection matrix P is used to apply translation, rotation, and perspective operations to the primary thermal infrared transformed image Ic2, obtaining the registered thermal infrared image Ic3, set by the following formula:

$$I_{c3} = P\, I_{c2}$$

in the present embodiment, it is preferred that,

6) according to the intrinsic parameter matrix Ka of the first visible light camera C1, the intrinsic parameter matrix Kb of the second visible light camera C2, the rotation matrix R, and the translation vector t, the first undistorted visible light image Ia1 and the second undistorted visible light image Ib1 are subjected to stereo correction and parallax estimation processing to obtain the depth image Idepth.

The step 6) is specifically as follows:

6.1) according to the intrinsic parameter matrix Ka of the first visible light camera C1, the intrinsic parameter matrix Kb of the second visible light camera C2, the rotation matrix R, and the translation vector t, the Bouguet epipolar rectification method is used to perform stereo correction on the first undistorted visible light image Ia1 and the second undistorted visible light image Ib1, obtaining the first corrected visible light image Ia2 and the second corrected visible light image Ib2, respectively, as shown in fig. 7 and 8.

The step 6.1) is specifically as follows:

first, according to the intrinsic parameter matrix Ka of the first visible light camera C1, the intrinsic parameter matrix Kb of the second visible light camera C2, the rotation matrix R, and the translation vector t, the camera coordinate systems of the first visible light camera C1 and the second visible light camera C2 are each rotated toward the other, the rotation angle being half of the included angle between the two camera coordinate systems and the rotation directions being opposite, so that the imaging planes of the first visible light camera C1 and the second visible light camera C2 become coplanar; the specific formula is as follows:

$$r_l = R^{1/2}, \qquad r_r = R^{-1/2}$$

then, the first visible light camera C1 and the second visible light camera C2 are rotated around their optical axes so that the two imaging planes become row-aligned; the specific formula is as follows:

$$e_1 = \frac{t}{\lVert t \rVert}, \qquad e_2 = \frac{\left[-t_y,\ t_x,\ 0\right]^T}{\sqrt{t_x^2 + t_y^2}}, \qquad e_3 = e_1 \times e_2, \qquad r_{rect} = \begin{bmatrix} e_1^T \\ e_2^T \\ e_3^T \end{bmatrix}$$

$$R_l = r_{rect}\, r_l, \qquad R_r = r_{rect}\, r_r$$

wherein R is the rotation matrix between the first visible light camera C1 and the second visible light camera C2; rl is the first camera rotation matrix by which the first visible light camera C1 rotates about its own camera coordinate system; rr is the second camera rotation matrix by which the second visible light camera C2 rotates about its own camera coordinate system; rrect is the rotation matrix by which the first visible light camera C1 or the second visible light camera C2 rotates about its own optical axis; Rl is the first stereo correction matrix combining the two rotation operations of the first visible light camera C1; Rr is the second stereo correction matrix combining the two rotation operations of the second visible light camera C2;

finally, Rl and Rr are respectively left-multiplied to the first undistorted visible light image Ia1 and the second undistorted visible light image Ib1, and the two transformed visible light images are cropped so that their sizes are the same as those of the first undistorted visible light image Ia1 or the second undistorted visible light image Ib1; the cropped images are respectively taken as the first corrected visible light image Ia2 and the second corrected visible light image Ib2.
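A sketch of step 6.1) using OpenCV's Bouguet implementation: cv2.stereoRectify returns the rectifying rotations Rl and Rr together with new projection matrices, and the images are resampled with remap lookup tables, which is the left-multiplication described above expressed as a warp. Variables carry over from the earlier sketches.

```python
import cv2

R_l, R_r, P_l, P_r, Q_reproj, roi_l, roi_r = cv2.stereoRectify(
    K_a, dist_a, K_b, dist_b, image_size, R, t, alpha=0)
map_ax, map_ay = cv2.initUndistortRectifyMap(
    K_a, dist_a, R_l, P_l, image_size, cv2.CV_32FC1)
map_bx, map_by = cv2.initUndistortRectifyMap(
    K_b, dist_b, R_r, P_r, image_size, cv2.CV_32FC1)
I_a2 = cv2.remap(I_a0, map_ax, map_ay, cv2.INTER_LINEAR)  # corrected C1 image
I_b2 = cv2.remap(I_b0, map_bx, map_by, cv2.INTER_LINEAR)  # corrected C2 image
```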

6.2) the first corrected visible light image Ia2 and the second corrected visible light image Ib2 are input into the adaptive aggregation network AANet+ model for stereo matching and parallax estimation, and the parallax image Idisp is output;

6.3) the triangulation principle is used to perform depth solution on the parallax image Idisp, obtaining the depth image Idepth, as shown in fig. 9;
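A sketch of step 6.3): with rectified images, triangulation reduces to Z = f·B/disparity, where f is the rectified focal length in pixels and B the baseline length recovered from the stereo calibration (about 11 cm in the embodiment); I_disp is assumed to be the AANet+ output.

```python
import numpy as np

f = P_l[0, 0]                  # rectified focal length, pixels
B = float(np.linalg.norm(t))   # baseline length from stereo calibration
valid = I_disp > 0
I_depth = np.zeros_like(I_disp, dtype=np.float32)
I_depth[valid] = f * B / I_disp[valid]  # depth in the units of t
```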

7) using the Bouguet epipolar rectification method, the registered thermal infrared image Ic3 is subjected to the same stereo correction as the first undistorted visible light image Ia1, obtaining the corrected thermal infrared image Iir, as shown in fig. 10; that is, the first stereo correction matrix Rl of the two rotation operations of the first visible light camera C1 is left-multiplied to the registered thermal infrared image Ic3, and the transformed thermal infrared image is then cropped to the size of the registered thermal infrared image Ic3 and taken as the corrected thermal infrared image Iir.
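Because the registered thermal image Ic3 already lies in the undistorted C1 frame, the left-multiplication by Rl in step 7) can be applied as a pure-rotation homography built from the rectified intrinsics; a sketch, continuing the variables above:

```python
import cv2
import numpy as np

H = P_l[:, :3] @ R_l @ np.linalg.inv(K_a)  # K_new * R_l * K_old^-1
I_ir = cv2.warpPerspective(I_c3, H, image_size)  # corrected thermal image
```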

8) using the first corrected visible light image Ia2 and the depth image Idepth, the scene three-dimensional color point cloud Pc is constructed; using the corrected thermal infrared image Iir and the depth image Idepth, the scene three-dimensional temperature field point cloud Pt is constructed, as shown in fig. 11 and 12. The depth threshold d is selected according to the known distance between the ground and the camera; in the implementation, d is 145 mm. According to the depth threshold d, the points above the ground surface are extracted from the scene three-dimensional color point cloud Pc and the scene three-dimensional temperature field point cloud Pt, obtaining the background-removed three-dimensional color point cloud P'c and three-dimensional temperature field point cloud P't, respectively.
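A sketch of step 8), continuing the variables above: each pixel with valid depth is back-projected through the rectified intrinsics, the color or temperature value is attached, and background points are discarded with the depth threshold d.

```python
import numpy as np

fx, fy = P_l[0, 0], P_l[1, 1]
cx, cy = P_l[0, 2], P_l[1, 2]
v_idx, u_idx = np.indices(I_depth.shape)
Z = I_depth.astype(np.float32)
X = (u_idx - cx) * Z / fx
Y = (v_idx - cy) * Z / fy
xyz = np.dstack([X, Y, Z]).reshape(-1, 3)
rgb = I_a2.reshape(-1, 3).astype(np.float32)
tmp = I_ir.reshape(-1, 1).astype(np.float32)
P_c = np.hstack([xyz, rgb])              # scene color cloud (X Y Z R G B)
P_t = np.hstack([xyz, tmp])              # scene temperature cloud (X Y Z T)
fg = (xyz[:, 2] > 0) & (xyz[:, 2] < d)   # keep points above the ground
Pc_fg, Pt_fg = P_c[fg], P_t[fg]
```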

9) using the DBSCAN clustering method, the background-removed three-dimensional color point cloud P'c and three-dimensional temperature field point cloud P't are clustered respectively to obtain the corresponding clustering results, as shown in fig. 13 and 14; in each clustering result, the cluster with the most points is selected as the chicken three-dimensional color point cloud P''c and the chicken three-dimensional temperature field point cloud P''t, respectively; the chicken three-dimensional color point cloud P''c and the chicken three-dimensional temperature field point cloud P''t are concatenated to obtain the chicken point cloud model M1 containing RGBDT information, as shown in fig. 15.
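A sketch of step 9), with scikit-learn's DBSCAN standing in for the DB-SCAN step. For brevity it clusters the shared XYZ coordinates once and applies the same mask to both clouds before cascading them into M1, whereas the method clusters the two clouds separately; eps and min_samples are illustrative values, not from the publication.

```python
import numpy as np
from sklearn.cluster import DBSCAN

labels = DBSCAN(eps=20.0, min_samples=30).fit_predict(Pc_fg[:, :3])
largest = np.bincount(labels[labels >= 0]).argmax()  # biggest cluster = chicken
keep = labels == largest
Pcc = Pc_fg[keep]                    # chicken color cloud P''c
Ptt = Pt_fg[keep]                    # chicken temperature cloud P''t
M1 = np.hstack([Pcc, Ptt[:, 3:4]])   # RGBDT model: X Y Z R G B T
```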

The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the present invention in any way, and any modifications, equivalents and equivalent variations made in accordance with the technical spirit of the present invention are intended to be included within the scope of the present invention.
