Bionic false color image fusion model and method based on sidewinder visual imaging


1. A bionic false color image fusion model based on sidewinder visual imaging is characterized by comprising the following components:

an image preprocessing module for extracting the common information and the unique information of an input infrared source image and an input visible light source image and preprocessing the infrared source image and the visible light source image;

a rattlesnake dual-mode cell mechanism simulation module for performing rattlesnake dual-mode cell mechanism simulation on the preprocessed infrared source image and visible light source image through rattlesnake dual-mode cell mathematical models to obtain six rattlesnake dual-mode cell output signals;

an enhanced image generation module for enhancing the six rattlesnake dual-mode cell output signals to obtain enhanced images;

a fusion signal generation module for performing fusion processing on the enhanced images to obtain fusion signals; and

a false color fusion image generation module for mapping the fusion signals to different color channels of the RGB color space to generate a false color fusion image.

2. The bionic false color image fusion model based on sidewinder visual imaging of claim 1, wherein the image preprocessing module comprises:

a common information acquisition unit for acquiring common information components of the infrared source image and the visible light source image;

a unique information acquisition unit for acquiring unique information components of the infrared source image and the visible light source image; and

a preprocessing unit for subtracting the unique information component of the visible light source image from the infrared source image to obtain a preprocessing result of the infrared source image, and subtracting the unique information component of the infrared source image from the visible light source image to obtain a preprocessing result of the visible light source image.

3. The bionic false color image fusion model based on sidewinder visual imaging, characterized in that the common information component of the infrared source image and the visible light source image is calculated as follows:

Ir(i,j)∩Ivis(i,j)=min{Ir(i,j),Ivis(i,j)}

wherein Ir(i,j) represents the infrared source image, Ivis(i,j) represents the visible light source image, (i,j) denotes a corresponding pixel location in the two images, and Ir(i,j)∩Ivis(i,j) represents the common information component of the two;

the unique information components of the infrared source image and the visible light source image are respectively calculated as follows:

Ir(i,j)*=Ir(i,j)-Ir(i,j)∩Ivis(i,j)

Ivis(i,j)*=Ivis(i,j)-Ir(i,j)∩Ivis(i,j)

wherein Ir(i,j)* represents the unique information component of the infrared source image Ir(i,j), and Ivis(i,j)* represents the unique information component of the visible light source image Ivis(i,j).

4. The bionic false color image fusion model based on sidewinder visual imaging according to claim 1, wherein the sidewinder dual-mode cell mathematical models comprise a visible light enhanced infrared cell mathematical model, a visible light suppressed infrared cell mathematical model, an infrared enhanced visible light cell mathematical model, an infrared suppressed visible light cell mathematical model, an AND cell mathematical model and an OR cell mathematical model.

5. The bionic false color image fusion model based on sidewinder visual imaging as claimed in claim 4, wherein the expression of the visible light enhanced infrared cell mathematical model is as follows:

I+IR←V(i,j)=IIR(i,j)exp[IV(i,j)]

wherein I+IR←V(i,j) denotes the image obtained after the visible light signal enhances the infrared signal, IIR(i,j) represents the infrared image, and IV(i,j) represents the visible light image;

the expression of the mathematical model of the visible light inhibition infrared cell is as follows:

I-IR←V(i,j)=IIR(i,j)log[IV(i,j)+1]

wherein, I-IR←V(i, j) represents an image obtained after visible light inhibits infrared;

the expression of the infrared enhanced visible light cell mathematical model is as follows:

I+V←IR(i,j)=IV(i,j)exp[IIR(i,j)]

wherein, I+V←IR(i, j) represents an image obtained after infrared enhancement of the visible light signal;

the expression of the mathematical model of the infrared inhibition visible light cell is as follows:

I-V←IR(i,j)=IV(i,j)log[IIR(i,j)+1]

wherein I-V←IR(i,j) represents the image obtained after the infrared signal suppresses the visible light signal;

the expression of the AND cell mathematical model is as follows:

when IV(i,j) < IIR(i,j), the fusion result is:

IAND(i,j)=mIV(i,j)+nIIR(i,j)

when IV(i,j) > IIR(i,j), the fusion result is:

IAND(i,j)=nIV(i,j)+mIIR(i,j)

wherein m > 0.5, n < 0.5, and IAND(i,j) represents the image obtained by the weighted AND operation on the infrared image and the visible light image;

the expression of the OR cell mathematical model is as follows:

when IV(i,j) < IIR(i,j), the fusion result is:

IOR(i,j)=nIV(i,j)+mIIR(i,j)

when IV(i,j) > IIR(i,j), the fusion result is:

IOR(i,j)=mIV(i,j)+nIIR(i,j)

wherein m > 0.5, n < 0.5, and IOR(i,j) represents the image obtained by the weighted OR operation on the visible light image and the infrared image.

6. The bionic false color image fusion model based on sidewinder visual imaging, characterized in that the six sidewinder dual-mode cell output signals comprise an AND output signal, an OR output signal, an infrared enhanced visible light output signal, an infrared suppressed visible light output signal, a visible light enhanced infrared output signal and a visible light suppressed infrared output signal.

7. The bionic false color image fusion model based on sidewinder visual imaging of claim 6, wherein the enhanced image generation module comprises:

an enhanced image +OR_AND generating unit for feeding the OR output signal and the AND output signal into the central excitation region and the surround suppression region of an ON-center receptive field, respectively, to generate the enhanced image +OR_AND;

an enhanced image +VIS generating unit for feeding the infrared enhanced visible light output signal and the infrared suppressed visible light output signal into the central excitation region and the surround suppression region of an ON-center receptive field, respectively, to generate the enhanced image +VIS; and

an enhanced image +IR generating unit for feeding the visible light enhanced infrared output signal and the visible light suppressed infrared output signal into the central suppression region and the surround excitation region of an OFF-center receptive field, respectively, to obtain the enhanced image +IR.

8. The bionic false color image fusion model based on sidewinder visual imaging of claim 7, wherein the fusion signal generation module comprises:

an image feed-in unit for feeding the enhanced image +OR_AND, the enhanced image +VIS and the enhanced image +IR into the central and surround regions of two ON-center receptive fields, respectively, to obtain a fusion signal +VIS+OR_AND and a fusion signal +VIS+IR; and

a linear OR operation unit for linearly OR-ing the enhanced image +VIS and the enhanced image +OR_AND to generate a fusion signal +OR_AND∪+VIS.

9. A bionic false color image fusion method based on sidewinder visual imaging is characterized by comprising the following steps:

acquiring an infrared source image and a visible light source image to be processed;

inputting the acquired infrared source image and visible light source image into the bionic false color image fusion model based on sidewinder visual imaging according to any one of claims 1-8, and outputting a false color fusion image.

Background

Image fusion aims to integrate the complementary image information captured by multiple sensors of the same scene into a single fused image containing more information, from which more accurate information can then be extracted. To further the study of image fusion, some researchers have taken the sidewinder as a research object and simulated its visual imaging mechanism; for example, A. M. Waxman et al. of the Massachusetts Institute of Technology proposed a fusion architecture for low-light and infrared images using a visual receptive field model that simulates the working principle of the sidewinder's dual-mode cells.

In the Waxman fusion structure, the ON/OFF structure reflects the contrast-perception property of the center-surround antagonistic receptive field: the first stage is an enhancement stage, and the second stage processes infrared-enhanced visible light and infrared-suppressed visible light, consistent with the infrared/visible fusion mechanism of sidewinder vision. The Waxman structure thus simulates the infrared-enhanced visible light cell and the infrared-suppressed visible light cell; however, although the infrared signal undergoes OFF antagonism and ON antagonism respectively before being fed into the surround region of the ganglion cell, it remains essentially an inhibitory signal, so the enhancement of the visible light signal by the infrared signal is weak. As a result, the fused image obtained is unsatisfactory in color rendition, its targets are not salient, and its details are not prominent.

Therefore, how to provide a bionic false color image fusion method based on sidewinder visual imaging with a better fusion effect is a problem that urgently needs to be solved by those skilled in the art.

Disclosure of Invention

In view of the above, the invention provides a bionic false color image fusion model and method based on sidewinder visual imaging, which solve the problems of existing image fusion methods such as unsatisfactory color rendition of the fused image, insufficiently salient targets and insufficiently prominent details.

In order to achieve the purpose, the invention adopts the following technical scheme:

in one aspect, the invention provides a bionic false color image fusion model based on sidewinder visual imaging, which comprises:

an image preprocessing module, used for extracting the common information and the unique information of the input infrared source image and visible light source image and preprocessing the infrared source image and the visible light source image;

a rattlesnake dual-mode cell mechanism simulation module, which performs rattlesnake dual-mode cell mechanism simulation on the preprocessed infrared source image and visible light source image through rattlesnake dual-mode cell mathematical models to obtain six rattlesnake dual-mode cell output signals;

an enhanced image generation module, used for enhancing the six rattlesnake dual-mode cell output signals to obtain enhanced images;

a fusion signal generation module, used for performing fusion processing on the enhanced images to obtain fusion signals; and

a false color fusion image generation module, used for mapping the fusion signals to different color channels of the RGB color space to generate a false color fusion image.

Further, the image pre-processing module comprises:

a common information acquisition unit for acquiring common information components of the infrared source image and the visible light source image, that is:

Ir(i,j)∩Ivis(i,j)=min{Ir(i,j),Ivis(i,j)}

wherein Ir(i,j) represents the infrared source image, Ivis(i,j) represents the visible light source image, (i,j) denotes a corresponding pixel location in the two images, and Ir(i,j)∩Ivis(i,j) represents the common information component of the two;

a unique information acquisition unit for acquiring unique information components of the infrared source image and the visible light source image, that is:

Ir(i,j)*=Ir(i,j)-Ir(i,j)∩Ivis(i,j)

Ivis(i,j)*=Ivis(i,j)-Ir(i,j)∩Ivis(i,j)

wherein Ir(i,j)* represents the unique information component of the infrared source image Ir(i,j), and Ivis(i,j)* represents the unique information component of the visible light source image Ivis(i,j);

a preprocessing unit, used for subtracting the unique information component of the visible light source image from the infrared source image to obtain the preprocessing result of the infrared source image, and subtracting the unique information component of the infrared source image from the visible light source image to obtain the preprocessing result of the visible light source image.

Further, the sidewinder dual-mode cell mathematical models comprise a visible light enhanced infrared cell mathematical model, a visible light suppressed infrared cell mathematical model, an infrared enhanced visible light cell mathematical model, an infrared suppressed visible light cell mathematical model, an AND cell mathematical model and an OR cell mathematical model.

Further, the expression of the mathematical model of the visible light-enhanced infrared cell is as follows:

I+IR←V(i,j)=IIR(i,j)exp[IV(i,j)]

wherein I+IR←V(i,j) denotes the image obtained after the visible light signal enhances the infrared signal, IIR(i,j) represents the infrared image, and IV(i,j) represents the visible light image;

the expression of the mathematical model of the visible light inhibition infrared cell is as follows:

I-IR←V(i,j)=IIR(i,j)log[IV(i,j)+1]

wherein, I-IR←V(i, j) represents an image obtained after visible light inhibits infrared;

the expression of the infrared enhanced visible light cell mathematical model is as follows:

I+V←IR(i,j)=IV(i,j)exp[IIR(i,j)]

wherein, I+V←IR(i, j) represents an image obtained after infrared enhancement of the visible light signal;

the expression of the mathematical model of the infrared inhibition visible light cell is as follows:

I-V←IR(i,j)=IV(i,j)log[IIR(i,j)+1]

wherein, I-V←IR(i, j) represents an image obtained after infrared suppression of visible light signals;

the expression of the AND cell mathematical model is as follows:

when IV(i,j) < IIR(i,j), the fusion result is:

IAND(i,j)=mIV(i,j)+nIIR(i,j)

when IV(i,j) > IIR(i,j), the fusion result is:

IAND(i,j)=nIV(i,j)+mIIR(i,j)

wherein m > 0.5, n < 0.5, and IAND(i,j) represents the image obtained by the weighted AND operation on the infrared image and the visible light image;

the expression of the OR cell mathematical model is as follows:

when IV(i,j) < IIR(i,j), the fusion result is:

IOR(i,j)=nIV(i,j)+mIIR(i,j)

when IV(i,j) > IIR(i,j), the fusion result is:

IOR(i,j)=mIV(i,j)+nIIR(i,j)

wherein m > 0.5, n < 0.5, and IOR(i,j) represents the image obtained by the weighted OR operation on the visible light image and the infrared image.
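As a quick worked example of the AND and OR weighting above (illustrative values only, e.g. m = 0.6, n = 0.4, IV(i,j) = 0.2 and IIR(i,j) = 0.8, so that IV(i,j) < IIR(i,j)):

```latex
I_{AND}(i,j) = 0.6 \times 0.2 + 0.4 \times 0.8 = 0.44, \qquad
I_{OR}(i,j)  = 0.4 \times 0.2 + 0.6 \times 0.8 = 0.56
```

That is, the OR cell tracks the stronger of the two stimuli while the AND cell is pulled toward the weaker one.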

Further, the six rattlesnake dual-mode cell output signals comprise an AND output signal, an OR output signal, an infrared enhanced visible light output signal, an infrared suppressed visible light output signal, a visible light enhanced infrared output signal and a visible light suppressed infrared output signal.

Further, the enhanced image generation module includes:

an enhanced image +OR_AND generating unit for feeding the OR output signal and the AND output signal into the central excitation region and the surround suppression region of an ON-center receptive field, respectively, to generate the enhanced image +OR_AND;

an enhanced image +VIS generating unit for feeding the infrared enhanced visible light output signal and the infrared suppressed visible light output signal into the central excitation region and the surround suppression region of an ON-center receptive field, respectively, to generate the enhanced image +VIS; and

an enhanced image +IR generating unit for feeding the visible light enhanced infrared output signal and the visible light suppressed infrared output signal into the central suppression region and the surround excitation region of an OFF-center receptive field, respectively, to obtain the enhanced image +IR.

Further, the fusion signal generation module includes:

an image feed-in unit for feeding the enhanced image +OR_AND, the enhanced image +VIS and the enhanced image +IR into the central and surround regions of two ON-center receptive fields, respectively, to obtain a fusion signal +VIS+OR_AND and a fusion signal +VIS+IR; and

a linear OR operation unit for linearly OR-ing the enhanced image +VIS and the enhanced image +OR_AND to generate a fusion signal +OR_AND∪+VIS.

On the other hand, the invention also provides a bionic false color image fusion method based on the sidewinder visual imaging, which comprises the following steps:

acquiring an infrared source image and a visible light source image to be processed;

inputting the acquired infrared source image and visible light source image into the above bionic false color image fusion model based on sidewinder visual imaging, and outputting a false color fusion image.

According to the above technical solutions, and compared with the prior art, the invention discloses a bionic false color image fusion model and method based on sidewinder visual imaging. The model preprocesses the images by extracting the common information and the unique information of the infrared and visible light source images, which improves the quality of the fused image; an image fusion structure is designed by introducing the rattlesnake dual-mode cell mathematical models, so that the rattlesnake dual-mode cell fusion mechanism is effectively utilized and the rattlesnake visual perception mechanism is better simulated; the resulting fused image has better color rendition, clearer details and more prominent targets, and better matches the visual characteristics of the human eye.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.

FIG. 1 is a schematic structural diagram of a bionic false color image fusion model based on sidewinder visual imaging provided by the present invention;

FIG. 2 is a schematic diagram of an implementation of an image pre-processing module;

FIG. 3 is a schematic diagram of the structure of the ON-center receptive field model and the OFF-center receptive field model;

FIG. 4 is a schematic diagram of an implementation flow of a bionic false color image fusion method based on sidewinder visual imaging provided by the invention;

fig. 5 is a schematic diagram of an implementation principle of a bionic false color image fusion method based on the visual imaging of the rattlesnake.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

In one aspect, referring to fig. 1, an embodiment of the present invention discloses a bionic false color image fusion model based on sidewinder visual imaging, which includes:

the image preprocessing module 1 is used for extracting the common information and unique information of an input infrared source image and an input visible light source image and preprocessing the infrared source image and the visible light source image;

the rattlesnake dual-mode cell mechanism simulation module 2 is used for performing rattlesnake dual-mode cell mechanism simulation on the preprocessed infrared source image and visible light source image through the rattlesnake dual-mode cell mathematical models, so as to obtain six rattlesnake dual-mode cell output signals;

the enhanced image generation module 3 is used for enhancing the six rattlesnake dual-mode cell output signals to obtain enhanced images;

the fusion signal generation module 4 is used for carrying out fusion processing on the enhanced image to obtain a fusion signal; and

and the false color fusion image generation module 5 is used for mapping the fusion signal to different color channels of the RGB color space to generate a false color fusion image.

Specifically, the image preprocessing module 1 includes:

the common information acquisition unit is used for acquiring common information components of the infrared source image and the visible light source image, namely:

Ir(i,j)∩Ivis(i,j)=min{Ir(i,j),Ivis(i,j)}

wherein Ir(i,j) represents the infrared source image, Ivis(i,j) represents the visible light source image, (i,j) denotes a corresponding pixel location in the two images, and Ir(i,j)∩Ivis(i,j) represents the common information component of the two;

the unique information acquisition unit is used for acquiring the unique information components of the infrared source image and the visible light source image, namely:

Ir(i,j)*=Ir(i,j)-Ir(i,j)∩Ivis(i,j)

Ivis(i,j)*=Ivis(i,j)-Ir(i,j)∩Ivis(i,j)

wherein Ir(i,j)* represents the unique information component of the infrared source image Ir(i,j), and Ivis(i,j)* represents the unique information component of the visible light source image Ivis(i,j);

a preprocessing unit, which subtracts the unique information component Ivis(i,j)* of the visible light source image from the infrared source image Ir(i,j) to obtain the preprocessing result of the infrared source image, namely Ir(i,j)-Ivis(i,j)*, and subtracts the unique information component Ir(i,j)* of the infrared source image from the visible light source image Ivis(i,j) to obtain the preprocessing result of the visible light source image, namely Ivis(i,j)-Ir(i,j)*. The results Ir(i,j)-Ivis(i,j)* and Ivis(i,j)-Ir(i,j)* are taken as the preprocessed infrared image and visible light image, denoted IR and VIS respectively, namely:

IR=Ir(i,j)-Ivis(i,j)*

VIS=Ivis(i,j)-Ir(i,j)*

Fig. 2 illustrates how the units of the image preprocessing module extract and preprocess the common and unique information of the infrared source image and the visible light source image to finally obtain the preprocessed infrared image IR and visible light image VIS.

The preprocessing operation in this embodiment processes the source images input for fusion so as to retain or enhance certain image information and discard image information that is unimportant for subsequent processing, thereby enhancing the images and further improving the quality of the final fused image.

When the infrared image and the visible light image are fused into a single image for display, the image information of the two source images must inevitably be selected and weighted. The subtraction described above reduces the proportion of the image information shared by the infrared and visible light source images and highlights the image information that the infrared source image possesses but the visible light source image lacks; the purpose of the corresponding subtraction applied to the visible light source image is the same. This makes it easier for the fused image to integrate and present the information of both source images during the subsequent fusion. A minimal sketch of this preprocessing is given below.
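The sketch assumes registered grayscale source images of equal size converted to floating point; the function and variable names are illustrative and not part of the original disclosure.

```python
# Minimal sketch of the image preprocessing module (assumed NumPy implementation).
import numpy as np

def preprocess(ir_src: np.ndarray, vis_src: np.ndarray):
    """Return the preprocessed infrared image IR and visible light image VIS."""
    ir = ir_src.astype(np.float64)
    vis = vis_src.astype(np.float64)
    common = np.minimum(ir, vis)   # Ir ∩ Ivis = min{Ir, Ivis}
    ir_unique = ir - common        # Ir*  : information unique to the infrared image
    vis_unique = vis - common      # Ivis*: information unique to the visible image
    ir_pre = ir - vis_unique       # IR  = Ir  - Ivis*
    vis_pre = vis - ir_unique      # VIS = Ivis - Ir*
    return ir_pre, vis_pre
```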

In this embodiment, the rattlesnake dual-mode cell mathematical models include a visible light enhanced infrared cell mathematical model, a visible light suppressed infrared cell mathematical model, an infrared enhanced visible light cell mathematical model, an infrared suppressed visible light cell mathematical model, an AND cell mathematical model and an OR cell mathematical model.

In the visible light enhanced infrared cell, the infrared stimulus is dominant and therefore occupies the leading position in the cell's mathematical model, whereas the visible light stimulus produces no response on its own and only plays an auxiliary enhancing role; this enhancing effect of the visible light image can be represented by an exponential function. The mathematical model of the visible light enhanced infrared cell is therefore obtained as follows:

I+IR←V(i,j)=IIR(i,j)exp[IV(i,j)]

wherein I+IR←V(i,j) denotes the image obtained after the visible light signal enhances the infrared signal, IIR(i,j) represents the infrared image, and IV(i,j) represents the visible light image.

In the visible light suppressed infrared cell, the infrared stimulus is likewise dominant and occupies the leading position in the cell's mathematical model, whereas the visible light stimulus produces no response on its own and plays an auxiliary suppressing role; this suppressing effect of the visible light image can be represented by a logarithmic function. The mathematical model of the visible light suppressed infrared cell is therefore obtained as follows:

I-IR←V(i,j)=IIR(i,j)log[IV(i,j)+1]

wherein, I-IR←V(i, j) represents the image obtained after visible light suppresses infrared.

In the infrared enhanced visible light cell, the visible light stimulus is dominant and occupies the leading position in the cell's mathematical model, whereas the infrared stimulus produces no response on its own and plays an auxiliary enhancing role; this enhancing effect of the infrared image can be represented by an exponential function. The mathematical model of the infrared enhanced visible light cell is therefore obtained as follows:

I+V←IR(i,j)=IV(i,j)exp[IIR(i,j)]

wherein, I+V←IR(i, j) represents an image obtained after infrared enhancement of the visible light signal.

In the infrared suppressed visible light cell, the visible light stimulus is dominant and occupies the leading position in the cell's mathematical model, whereas the infrared stimulus produces no response on its own and plays an auxiliary suppressing role; this suppressing effect of the infrared image can be represented by a logarithmic function. The mathematical model of the infrared suppressed visible light cell is therefore obtained as follows:

I-V←IR(i,j)=IV(i,j)log[IIR(i,j)+1]

wherein, I-V←IR(i, j) represents an image obtained after infrared suppression of visible light signals;

in the method, when two kinds of signal stimuli exist simultaneously in the cell, the cell has a relatively obvious response, the infrared signal and the visible light signal have no substantial difference, and only the magnitude of the respective stimulus intensities can influence the response, so that the combined effect of the visible light image and the infrared image can be simulated in a 'weighted sum' mode, and finally, the mathematical model of the cell is obtained as follows:

when IV(i,j) < IIR(i,j), the fusion result is:

IAND(i,j)=mIV(i,j)+nIIR(i,j)

when IV(i,j) > IIR(i,j), the fusion result is:

IAND(i,j)=nIV(i,j)+mIIR(i,j)

wherein m > 0.5, n < 0.5, and IAND(i,j) represents the image obtained by the weighted AND operation on the infrared image and the visible light image.

For the OR cell, either the infrared stimulus or the visible light stimulus alone will produce a response, and when both stimuli are present simultaneously the response is further enhanced.

In the OR cell, a response is generated by the independent action of either the infrared stimulus or the visible light stimulus, and a gain effect is produced when the two stimuli exist simultaneously, reflecting a cooperative, mutually reinforcing relationship between the two signals. The cooperative effect of the visible light image and the infrared image is therefore simulated by a weighted OR operation, and the mathematical model of the OR cell is obtained as follows:

when IV(i,j) < IIR(i,j), the fusion result is:

IOR(i,j)=nIV(i,j)+mIIR(i,j)

when IV(i,j) > IIR(i,j), the fusion result is:

IOR(i,j)=mIV(i,j)+nIIR(i,j)

wherein m > 0.5, n < 0.5, and IOR(i,j) represents the image obtained by the weighted OR operation on the visible light image and the infrared image.

The six rattlesnake dual-mode cell mathematical models above are used to process the visible light image (VIS) and the infrared image (IR), yielding the six rattlesnake dual-mode cell output signals: the AND output signal V∩IR, the OR output signal V∪IR, the infrared enhanced visible light output signal +V←IR, the infrared suppressed visible light output signal -V←IR, the visible light enhanced infrared output signal +IR←V and the visible light suppressed infrared output signal -IR←V. The sketch below illustrates how these six outputs can be computed.
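The sketch assumes the preprocessed images VIS and IR are floating-point arrays (nominally in [0, 1]); the dictionary keys and the default weights m = 0.6 and n = 0.4 are illustrative assumptions.

```python
# Sketch of the six rattlesnake dual-mode cell outputs.
import numpy as np

def dual_mode_outputs(vis: np.ndarray, ir: np.ndarray, m: float = 0.6, n: float = 0.4):
    out = {}
    out["+IR<-V"] = ir * np.exp(vis)         # visible light enhances infrared
    out["-IR<-V"] = ir * np.log(vis + 1.0)   # visible light suppresses infrared
    out["+V<-IR"] = vis * np.exp(ir)         # infrared enhances visible light
    out["-V<-IR"] = vis * np.log(ir + 1.0)   # infrared suppresses visible light
    # AND cell (V∩IR): the larger weight m falls on the weaker of the two stimuli
    out["AND"] = np.where(vis < ir, m * vis + n * ir, n * vis + m * ir)
    # OR cell (V∪IR): the larger weight m falls on the stronger of the two stimuli
    out["OR"] = np.where(vis < ir, n * vis + m * ir, m * vis + n * ir)
    return out
```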

Specifically, the enhanced image generation module 3 includes:

an enhanced image +OR_AND generating unit, which feeds the OR output signal V∪IR into the central excitation region of an ON-center receptive field and feeds the AND output signal V∩IR into the surround suppression region of the ON-center receptive field to generate the enhanced image +OR_AND;

an enhanced image +VIS generating unit, which feeds the infrared enhanced visible light output signal +V←IR into the central excitation region of an ON-center receptive field and feeds the infrared suppressed visible light output signal -V←IR into the surround suppression region of the ON-center receptive field to generate the enhanced image +VIS; and

an enhanced image +IR generating unit, which feeds the visible light enhanced infrared output signal +IR←V into the central suppression region of an OFF-center receptive field and feeds the visible light suppressed infrared output signal -IR←V into the surround excitation region of the OFF-center receptive field to obtain the enhanced image +IR.

In this embodiment, the enhanced image generation module 3 uses the visual receptive field and its mathematical model to perform enhancement processing on the six rattlesnake dual-mode cell output signals and obtain the enhanced images.

The visual receptive field and its mathematical model are explained as follows:

physiological characteristics indicate that the basic action mode of the receptor field of retinal nerve cells is the spatial antagonism of concentric circles, and the two types of action modes can be divided into: one is the ON-center/OFF-surround system (i.e., ON-center excitation/OFF surround suppression receptive field), commonly referred to as simply the ON-center receptive field, and the structure is shown as a in FIG. 3. And the other is the OFF-center/ON-surround system (i.e., OFF center suppression/ON surround excitation receptive field), commonly referred to as OFF-center receptive field for short, the structure of which is shown in FIG. 3 b. The ganglion cell receptor domain can be simulated by a Gaussian difference function model through mathematical modeling, the cell activity of different regions of the ganglion cell receptor domain can be described by Gaussian distribution, and the sensitivity of the ganglion cell receptor domain is gradually reduced from the center to the periphery.

One dynamical description of the center-surround antagonistic receptive field is the passive membrane equation. Based on this description, the steady-state outputs of the receptive-field dynamics equation are given as follows:

Steady-state output of the ON antagonistic system:

Steady-state output of the OFF antagonistic system:

wherein Ck(i,j) and Sk(i,j) represent the convolutions of the central input image and the surround input image with a Gaussian function, respectively, A is an attenuation constant and E is a polarization constant.
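A commonly used steady-state solution of the passive membrane (shunting) equation, written here only as an assumed sketch using the attenuation constant A and polarization constant E named above, is:

```latex
% Assumed standard shunting steady state (not quoted from the original)
x_{\mathrm{ON}}(i,j)  = \frac{E\,[C_k(i,j)-S_k(i,j)]}{A + C_k(i,j) + S_k(i,j)}, \qquad
x_{\mathrm{OFF}}(i,j) = \frac{E\,[S_k(i,j)-C_k(i,j)]}{A + C_k(i,j) + S_k(i,j)}
```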

Wherein, Ck(i, j) is the center of the receptive field, and the expression is:

Sk(i,j) is the surround region response of the receptive field, with the expression:

Sk(i,j)=Ik(i,j)*Ws(i,j)

wherein Ik(i,j) is the input image, * is the convolution operator, Wc and Ws are the Gaussian distribution functions of the central region and the surround region, the sizes of the corresponding Gaussian templates are m×n and p×q respectively, and σc and σs are the spatial constants of the central and surround regions; the subscripts c and s distinguish the central region (Center) from the surround region (Surround).
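A minimal SciPy/NumPy sketch of this center-surround processing is given below. The Gaussian convolutions follow the description above; the output formula reuses the assumed shunting steady state from the earlier sketch, and the constants and function names are illustrative assumptions only.

```python
# Sketch of the ON-center and OFF-center receptive-field responses used by module 3.
import numpy as np
from scipy.ndimage import gaussian_filter

A, E = 1.0, 10.0             # attenuation and polarization constants (illustrative)
SIGMA_C, SIGMA_S = 1.0, 4.0  # spatial constants of the center and surround regions

def on_center(center_img: np.ndarray, surround_img: np.ndarray) -> np.ndarray:
    """ON-center excitation / surround suppression response (assumed steady state)."""
    C = gaussian_filter(center_img, SIGMA_C)    # Ck(i,j) = Ik(i,j) * Wc
    S = gaussian_filter(surround_img, SIGMA_S)  # Sk(i,j) = Ik(i,j) * Ws
    return E * (C - S) / (A + C + S)

def off_center(center_img: np.ndarray, surround_img: np.ndarray) -> np.ndarray:
    """OFF-center suppression / surround excitation response (assumed steady state)."""
    C = gaussian_filter(center_img, SIGMA_C)
    S = gaussian_filter(surround_img, SIGMA_S)
    return E * (S - C) / (A + C + S)

# Enhanced images of module 3, with out = dual_mode_outputs(vis, ir) from the earlier sketch:
# plus_or_and = on_center(out["OR"], out["AND"])         # +OR_AND
# plus_vis    = on_center(out["+V<-IR"], out["-V<-IR"])  # +VIS
# plus_ir     = off_center(out["+IR<-V"], out["-IR<-V"]) # +IR
```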

Specifically, the fusion signal generation module 4 includes:

an image feed-in unit, which is used for feeding the enhanced image +OR_AND, the enhanced image +VIS and the enhanced image +IR into the central and surround regions of two ON-center receptive fields, respectively, to obtain the two fusion signals +VIS+OR_AND and +VIS+IR; and

a linear OR operation unit, which is used for linearly OR-ing the enhanced image +VIS and the enhanced image +OR_AND to generate the fusion signal +OR_AND∪+VIS.

Finally, the false color fusion image generation module 5 maps the fusion signals +VIS+OR_AND, +OR_AND∪+VIS and +VIS+IR obtained from the fusion signal generation module to the R, G and B channels of the RGB color space, respectively, and takes the resulting image as the finally generated false color fusion image.
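The following sketch ties modules 4 and 5 together, reusing the hypothetical on_center() helper and dual-mode outputs from the earlier sketches. Which enhanced image drives the center versus the surround of each receptive field, and the use of the weighted-OR rule for the linear OR step, are assumptions made only for illustration; the R, G, B channel assignment follows the text above.

```python
# Sketch of fusion-signal generation (module 4) and RGB mapping (module 5).
# Assumption: +VIS drives the centers, while +OR_AND and +IR drive the surrounds of the
# two ON-center receptive fields; the "linear OR" reuses the weighted-OR rule of the OR cell.
import numpy as np

def normalize(x: np.ndarray) -> np.ndarray:
    """Rescale an array to [0, 1] for display."""
    x = x - x.min()
    return x / (x.max() + 1e-12)

def fuse_to_rgb(plus_or_and, plus_vis, plus_ir, m: float = 0.6, n: float = 0.4):
    vis_or_and = on_center(plus_vis, plus_or_and)  # fusion signal +VIS+OR_AND
    vis_ir = on_center(plus_vis, plus_ir)          # fusion signal +VIS+IR
    or_and_or_vis = np.where(plus_vis < plus_or_and,   # fusion signal +OR_AND∪+VIS
                             n * plus_vis + m * plus_or_and,
                             m * plus_vis + n * plus_or_and)
    return np.dstack([normalize(vis_or_and),    # R channel: +VIS+OR_AND
                      normalize(or_and_or_vis), # G channel: +OR_AND∪+VIS
                      normalize(vis_ir)])       # B channel: +VIS+IR
```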

On the other hand, referring to fig. 4 and fig. 5, the embodiment of the invention also discloses a bionic false color image fusion method based on the sidewinder visual imaging, which comprises the following steps:

s1: acquiring an infrared source image and a visible light source image to be processed;

s2: and inputting the acquired infrared source image and visible light source image into the bionic false color image fusion model based on the sidewinder visual imaging, and outputting a false color fusion image.

In summary, from a bionic perspective the embodiment of the invention designs a false color image fusion model based on the imaging system of the rattlesnake visual system for obtaining fused infrared and visible light images. Image preprocessing is performed by extracting the common information and unique information of the infrared and visible light images, which improves the quality of the fused image. An image fusion structure is designed by introducing the rattlesnake dual-mode cell mathematical models, so that the rattlesnake dual-mode cell fusion mechanism is effectively utilized and the rattlesnake visual perception mechanism is better simulated. At the same time, the bionic false color image fusion method better simulates the rattlesnake's fusion of infrared and visible light images: the resulting fused image has better color rendition, presents targets such as people more clearly, performs better on certain details, better mitigates the influence of illumination, smoke occlusion and weather conditions on imaging, and better matches the visual characteristics of the human eye, which facilitates later observation, understanding and further study.

The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.

The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
