System and method for image deblurring in a vehicle
1. A method of deblurring an image in a vehicle, the method comprising:
receiving, via at least one processor, a blurred input image from an imaging device;
receiving, via at least one processor, vehicle motion data from at least one vehicle motion sensor;
receiving, via at least one processor, a depth map corresponding to the blurred input image from a depth sensor;
determining, via at least one processor, a point spread function based on the vehicle motion data and the depth map;
calculating, via at least one processor, a deblurring matrix based on the point spread function;
deblurring, via at least one processor, the blurred input image based on the deblurring matrix and the blurred input image, thereby providing a deblurred output image; and
controlling, via the at least one processor, a function of the vehicle based on the deblurred output image.
2. The method of claim 1, comprising computing a blur matrix based on the point spread function; and calculating a regularized deblurring matrix corresponding to the deblurring matrix based on the blur matrix and using a deconvolution function.
3. The method of claim 2, wherein the deconvolution function is a Tikhonov regularized deconvolution function.
4. The method of claim 1, wherein the point spread function represents a degree of smearing in image space of each pixel of the blurred input image due to motion of the real world scene relative to the imaging device resulting from vehicle motion, wherein each pixel in the two-dimensional image space of the blurred input image has at least one corresponding location in the real world scene, wherein the vehicle motion is defined by the vehicle motion data, wherein the degree of smearing in image space of each pixel of the blurred input image is inversely proportional to a depth of the at least one corresponding location in real world space, and wherein the depth in real world space is defined by the depth map.
5. The method of claim 1, wherein the imaging device is a side-view camera of the vehicle, and
the method further comprises, in the event of translational vehicle motion:
estimating, via at least one processor, a magnitude of optical flow lines based on the depth map and the vehicle motion data;
determining, via at least one processor, a point spread function based on the magnitude of the optical flow lines;
calculating, via at least one processor, a blur matrix based on the point spread function;
calculating, via at least one processor, a regularized deblurring matrix based on the blur matrix and using a deconvolution function; and
deblurring, via the at least one processor, the blurred input image based on the regularized deblurring matrix and the blurred input image to provide a deblurred output image.
6. The method of claim 1, wherein the imaging device is a forward or rear view camera of the vehicle,
the method further comprises, in the event of translational vehicle motion:
resampling, via at least one processor, the blurred input image and the depth map to polar coordinates converging at a focus of expansion of the imaging device;
estimating, via at least one processor, a magnitude of optical flow lines based on the vehicle motion data;
determining, via at least one processor, a point spread function based on the magnitude of the optical flow lines;
calculating, via at least one processor, a blur matrix based on the point spread function and the resampled depth map;
calculating, via at least one processor, a regularized deblurring matrix based on the blur matrix and using a deconvolution function;
deblurring, via at least one processor, the blurred input image based on the regularized deblurring matrix and the resampled blurred input image, thereby providing a polar deblurred image; and
resampling, via at least one processor, the polar deblurred image to cartesian coordinates to provide a deblurred output image.
7. The method of claim 1, comprising:
estimating, via at least one processor, a magnitude and a direction of optical flow lines in cartesian coordinates based on the depth map and the vehicle motion, thereby estimating optical flow;
resampling, via at least one processor, the optical flow along the optical flow lines from the cartesian coordinates;
resampling, via at least one processor, the blurred input image along the optical flow lines from the cartesian coordinates;
determining, via at least one processor, a point spread function based on the vehicle motion and the resampled optical flow;
calculating, via at least one processor, a blur matrix based on the point spread function;
calculating, via at least one processor, a regularized deblurring matrix based on the blur matrix and using a deconvolution function;
deblurring, via the at least one processor, the blurred input image based on the regularized deblurring matrix and the resampled blurred input image, thereby providing an optical flow coordinate deblurred image; and
resampling, via the at least one processor, the optical flow coordinate deblurred image to cartesian coordinates to provide a deblurred output image.
8. The method of claim 1, wherein the steps of determining a point spread function, calculating a deblurring matrix, and deblurring the blurred input image are performed on and for each single row of the blurred input image.
9. A vehicle, comprising:
an imaging device;
a vehicle controller;
a vehicle actuator;
at least one vehicle motion sensor;
a depth sensor; and
at least one processor configured to execute program instructions, wherein the program instructions are configured to cause the at least one processor to:
receive a blurred input image from the imaging device;
receive vehicle motion data from the at least one vehicle motion sensor;
receive a depth map corresponding to the blurred input image from the depth sensor;
determine a point spread function based on the vehicle motion data and the depth map;
calculate a deblurring matrix based on the point spread function; and
deblur the blurred input image based on the deblurring matrix and the blurred input image, thereby providing a deblurred output image;
wherein the vehicle controller is configured to control a function of the vehicle via the vehicle actuator based on the deblurred output image.
10. The vehicle of claim 9, wherein the program instructions are configured to cause the at least one processor to:
estimate a magnitude and a direction of optical flow lines in cartesian coordinates based on the depth map and the vehicle motion, thereby estimating optical flow;
resample the optical flow along the optical flow lines from the cartesian coordinates;
resample the blurred input image along the optical flow lines from the cartesian coordinates;
determine a point spread function based on the vehicle motion and the resampled optical flow;
calculate a blur matrix based on the point spread function;
calculate a regularized deblurring matrix based on the blur matrix and using a deconvolution function;
deblur the blurred input image based on the regularized deblurring matrix and the resampled blurred input image, thereby providing an optical flow coordinate deblurred image; and
resample the optical flow coordinate deblurred image to cartesian coordinates to provide a deblurred output image.
Background
The image captured by a camera may be blurred for a variety of reasons. For example, the camera may move or shake during image capture. Image blur may also be caused by optical aberrations. Chromatic blur is also common, whereby the degree of refraction differs for different wavelengths. Non-blind deconvolution techniques are known by which a blurred input image is processed to obtain a sharper, deblurred output image. According to such deconvolution techniques, a blur matrix is utilized to transform the blurred input image into a deblurred output image. The blur matrix may be determined from a point spread function representing the nature of the expected blur effect. In the case where the camera is attached to a moving vehicle, a point spread function is derived based on knowledge of the vehicle motion, and a deblurring kernel is determined based on the point spread function. That is, blur sources are generally well known in imaging, and the blur process can be well modeled using Point Spread Functions (PSFs) either measured directly or derived from knowledge of the blur physics.
Motion-induced blur is a common artifact that affects the performance of perception, localization, and other algorithms in various applications, including automated vehicle systems. Motion blur occurs when the imaging device moves relative to the real scene to be captured, especially when the camera exposure time is increased. The exposure time may be extended to improve the signal-to-noise ratio of the captured image, for example, in low light conditions.
Optical flow is a pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by relative motion between an imaging device and the scene. The optical flow represents the motion blur that will be present in a captured image of the scene. Motion-induced blur has proven difficult to model with a suitable PSF for a variety of reasons. In a vehicular environment, although the motion of the vehicle is known, the motion blur due to the motion of the vehicle is not the same for all objects in the real-world scene.
Accordingly, it is desirable to provide systems and methods that can remove motion-induced blur accurately and efficiently in terms of processing resources. Furthermore, other desirable features and characteristics of the present invention will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.
Disclosure of Invention
According to an exemplary embodiment, a method of deblurring an image in a vehicle is provided. The method includes receiving, via a processor, a blurred input image from an imaging device, receiving vehicle motion data from one or more vehicle motion sensors, and receiving a depth map corresponding to the blurred input image from a depth sensor. The processor determines a point spread function based on the vehicle motion data and the depth map, calculates a deblurring matrix based on the point spread function, and deblurs the blurred input image based on the deblurring matrix and the blurred input image, thereby providing a deblurred output image. A function of the vehicle is controlled based on the deblurred output image.
In an embodiment, the method comprises computing a blur matrix based on the point spread function; and calculating a regularized deblurring matrix corresponding to the deblurring matrix based on the blur matrix and using a deconvolution function. In one embodiment, the deconvolution function is a Tikhonov regularized deconvolution function.
In an embodiment, the point spread function represents a degree of smearing in the image space of each pixel of the blurred input image caused by movement of the real world scene relative to the imaging device due to vehicle motion. Each pixel in the two-dimensional image space of the blurred input image has one or more corresponding locations in the real-world scene. Vehicle motion is defined by vehicle motion data. The extent of smearing in image space for each pixel of the blurred input image is inversely proportional to the depth of one or more corresponding locations in real world space. The depth in real world space is defined by a depth map.
In one embodiment, the imaging device is a side view camera of the vehicle. In the case of translational vehicle motion, the method includes the processor steps of: estimating a magnitude of optical flow lines based on the depth map and the vehicle motion data; determining a point spread function based on the magnitude of the optical flow lines; calculating a blur matrix based on the point spread function; calculating a regularized deblurring matrix based on the blur matrix and using a deconvolution function; and deblurring the blurred input image based on the regularized deblurring matrix and the blurred input image, thereby providing the deblurred output image.
In one embodiment, the imaging device is a forward or rear view camera of the vehicle. In the case of translational vehicle motion, the method includes the processor steps of: resampling the blurred input image and the depth map to polar coordinates converging at a focus of expansion of the imaging device; estimating a magnitude of optical flow lines based on the resampled depth map and the vehicle motion data; determining a point spread function based on the magnitude of the optical flow lines; calculating a blur matrix based on the point spread function; calculating a regularized deblurring matrix based on the blur matrix and using a deconvolution function; deblurring the blurred input image based on the regularized deblurring matrix and the resampled blurred input image, thereby providing a polar deblurred image; and resampling the polar deblurred image to cartesian coordinates, thereby providing a deblurred output image.
In one embodiment, the method includes estimating, via a processor, a magnitude and a direction of optical flow lines in cartesian coordinates based on the depth map and the vehicle motion, thereby estimating optical flow. The optical flow is resampled from cartesian coordinates along the optical flow lines. The blurred input image is resampled from cartesian coordinates along the optical flow lines. A point spread function is determined based on the vehicle motion and the resampled optical flow. A blur matrix is calculated based on the point spread function. A regularized deblurring matrix is computed based on the blur matrix and using a deconvolution function. The method includes deblurring, via a processor, the blurred input image based on the regularized deblurring matrix and the resampled blurred input image to provide an optical flow coordinate deblurred image. The optical flow coordinate deblurred image is resampled to cartesian coordinates to provide a deblurred output image.
In an embodiment, the steps of determining the point spread function, calculating the deblurring matrix, and deblurring the blurred input image are performed on, and repeated for, each single row of the blurred input image.
In another embodiment, a system for image deblurring in a vehicle is provided. The system includes an imaging device, a vehicle controller, a depth sensor, and a processor configured to execute program instructions. The program instructions are configured to cause the processor to receive a blurred input image from the imaging device, receive vehicle motion data, receive a depth map corresponding to the blurred input image from the depth sensor, determine a point spread function based on the vehicle motion data and the depth map, calculate a deblurring matrix based on the point spread function, and deblur the blurred input image based on the deblurring matrix and the blurred input image, thereby providing a deblurred output image. The vehicle controller is configured to control a function of the vehicle based on the deblurred output image.
In an embodiment, the program instructions are configured to cause the processor to calculate a blur matrix based on the point spread function and calculate a regularized deblurring matrix corresponding to the deblurring matrix based on the blur matrix and using a deconvolution function. In one embodiment, the deconvolution function is a Tikhonov regularized deconvolution function.
In an embodiment, the point spread function represents a degree of smearing in image space of each pixel of the blurred input image due to movement of the real world scene relative to the imaging device caused by vehicle motion. Each pixel in the two-dimensional image space of the blurred input image has one or more corresponding locations in the real-world scene. Vehicle motion is defined by vehicle motion data. The extent of smearing in image space for each pixel of the blurred input image is inversely proportional to the depth of one or more corresponding locations in real world space. The depth in real world space is defined by a depth map.
In one embodiment, the imaging device is a side view camera of the vehicle. The program instructions are configured to cause the processor to perform the following steps in the case of translational vehicle motion: estimating a magnitude of optical flow lines based on the depth map and the vehicle motion data; determining a point spread function based on the magnitude of the optical flow lines; calculating a blur matrix based on the point spread function; calculating a regularized deblurring matrix based on the blur matrix and using a deconvolution function; and deblurring the blurred input image based on the regularized deblurring matrix and the blurred input image, thereby providing the deblurred output image.
In one embodiment, the imaging device is a forward or rear view camera of the vehicle. The program instructions are configured to cause the processor to perform the following steps in the case of translational vehicle motion: resampling the blurred input image and the depth map to polar coordinates converging at a focus of expansion of the imaging device; estimating a magnitude of optical flow lines based on the resampled depth map and the vehicle motion data; determining a point spread function based on the magnitude of the optical flow lines; calculating a blur matrix based on the point spread function; calculating a regularized deblurring matrix based on the blur matrix and using a deconvolution function; deblurring the blurred input image based on the regularized deblurring matrix and the resampled blurred input image, thereby providing a polar deblurred image; and resampling the polar deblurred image to cartesian coordinates, thereby providing a deblurred output image.
In an embodiment, the program instructions are configured to cause the processor to: estimate the magnitude and direction of optical flow lines in cartesian coordinates based on the depth map and vehicle motion, thereby estimating optical flow; resample the optical flow from the cartesian coordinates along the optical flow lines; resample the blurred input image along the optical flow lines from cartesian coordinates; determine a point spread function based on the vehicle motion and the resampled optical flow; calculate a blur matrix based on the point spread function; calculate a regularized deblurring matrix based on the blur matrix and using a deconvolution function; deblur the blurred input image based on the regularized deblurring matrix and the resampled blurred input image, thereby providing an optical flow coordinate deblurred image; and resample the optical flow coordinate deblurred image to cartesian coordinates, thereby providing a deblurred output image.
In an embodiment, the steps of determining the point spread function, calculating the deblurring matrix, and deblurring the blurred input image are performed on, and repeated for, each single row of the blurred input image.
In another embodiment, a vehicle is provided. The vehicle includes an imaging device; a vehicle controller; a vehicle actuator; a depth sensor; one or more vehicle motion sensors; and a processor configured to execute program instructions. The program instructions are configured to cause the processor to: receive a blurred input image from the imaging device; receive vehicle motion data from the one or more vehicle motion sensors; receive a depth map corresponding to the blurred input image from the depth sensor; determine a point spread function based on the vehicle motion data and the depth map; calculate a deblurring matrix based on the point spread function; and deblur the blurred input image based on the deblurring matrix and the blurred input image, thereby providing a deblurred output image. The vehicle controller is configured to control a function of the vehicle via the vehicle actuator based on the deblurred output image.
In an embodiment, the program instructions are configured to cause the processor to calculate a blur matrix based on the point spread function and calculate a regularized deblurring matrix corresponding to the deblurring matrix based on the blur matrix and using a deconvolution function.
In an embodiment, the program instructions are configured to cause the processor to: estimate the magnitude and direction of optical flow lines in cartesian coordinates based on the depth map and vehicle motion, thereby estimating optical flow; resample the optical flow from cartesian coordinates along the optical flow lines; resample the blurred input image along the optical flow lines from cartesian coordinates; determine a point spread function based on the vehicle motion and the resampled optical flow; calculate a blur matrix based on the point spread function; calculate a regularized deblurring matrix based on the blur matrix and using a deconvolution function; deblur the blurred input image based on the regularized deblurring matrix and the resampled blurred input image, thereby providing an optical flow coordinate deblurred image; and resample the optical flow coordinate deblurred image to cartesian coordinates, thereby providing a deblurred output image.
In an embodiment, the steps of determining the point spread function, calculating the deblurring matrix, and deblurring the blurred input image are performed on, and repeated for, each single row of the blurred input image.
Drawings
The present disclosure will hereinafter be described in conjunction with the following drawing figures, wherein like numerals denote like elements, and wherein:
FIG. 1 is a functional block diagram of a system for non-blind image deblurring according to an exemplary embodiment;
FIG. 2 is a functional block diagram of data processing in a regularized deconvolution sub-module in accordance with an illustrative embodiment;
FIG. 3 is a functional block diagram of data processing in a method for motion artifact image deblurring in the case of linear motion and side view imaging devices according to an exemplary embodiment;
FIG. 4 is a functional block diagram of data processing in a method for motion artifact image deblurring in the case of linear motion and forward or backward looking imaging devices according to an exemplary embodiment;
FIG. 5 is a functional block diagram of data processing in a method for motion artifact image deblurring in the general case, according to an exemplary embodiment;
FIG. 6 shows image transformations during simulation of motion artifact deblurring in the case of a side-looking camera according to an exemplary embodiment; and
FIG. 7 shows image transformations during simulation of motion artifact deblurring in the case of a forward-view or rear-view camera according to an exemplary embodiment.
Detailed Description
The following detailed description is merely exemplary in nature and is not intended to limit the disclosure or the application and uses thereof. Furthermore, there is no intention to be bound by any theory presented in the preceding background or the following detailed description.
As used herein, the term "module" refers to any hardware, software, firmware, electronic control component, processing logic, and/or processor device, alone or in any combination, including but not limited to: an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
Embodiments of the disclosure may be described herein in terms of functional and/or logical block components and various processing steps. It should be appreciated that such block components may be realized by any number of hardware, software, and/or firmware components configured to perform the specified functions. For example, embodiments of the present disclosure may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. In addition, those skilled in the art will appreciate that embodiments of the present disclosure can be practiced in conjunction with any number of systems, and that the systems described herein are merely exemplary embodiments of the disclosure.
For the sake of brevity, conventional techniques related to signal processing, data transmission, signaling, control, and other functional aspects of the systems (and the individual operating components of the systems) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent example functional relationships and/or physical couplings between the various elements. It should be noted that many alternative or additional functional relationships or physical connections may be present in an embodiment of the disclosure.
Systems and methods for motion blur removal are described herein in which depth maps from one or more depth sensors are utilized to estimate optical flow in images captured by an imaging device. The depth map provides depth information for features captured in the image, allowing at least the magnitude of the optical flow to be estimated so that accurate motion artifact deblurring can be performed. The deblurring systems and methods disclosed herein are based in part on the recognition that motion-induced blur depends on image depth. During exposure, objects closer to the imaging device will move more in image space (have larger optical flow values) than objects further away. Thus, objects that are farther away will exhibit less optical flow and therefore less motion blur. The systems and methods described herein take into account depth information of captured images in order to perform a depth-variable motion deblurring process. In some embodiments described herein, the blurred input image is fed into a deconvolution algorithm. The deconvolution algorithm uses a depth-dependent Point Spread Function (PSF) based on the depth map. In some embodiments, the blurred input image is resampled along the optical flow lines prior to deconvolution.
In the embodiments described herein, the imaging apparatus is mounted to a vehicle. Autonomous Vehicle (AV) systems and/or Advanced Driver Assistance Systems (ADAS) use the deblurred output images as inputs to a vehicle controller. Since the deblurred output image will be sharper than in current systems, the vehicle controller can make decisions based on better source information, potentially allowing safer operation.
Fig. 1 shows a system 10 for image deblurring (non-blind). The system 10 includes a vehicle 12, an imaging device 14 mounted to the vehicle 12, a vehicle motion sensor 16, one or more depth sensors 92, a vehicle controller 18, a vehicle actuator 96, and an image processing system 26.
The system 10 is shown in the context of (e.g., included in) a vehicle 12, particularly an automobile. However, the system 10 is useful in other vehicle environments, such as aircraft, marine vessels, and the like. The system 10 may also be applied outside of a vehicular environment, to any electronic device that captures images prone to motion blur, such as mobile phones, cameras, and tablet devices. The present disclosure relates particularly, but not exclusively, to blur due to motion of the vehicle 12 at night or at other times when the exposure time is extended.
In various embodiments, the vehicle 12 is an autonomous vehicle, and the system 10 is incorporated into the autonomous vehicle 12. However, the system 10 may be used with any kind of vehicle (autonomous or otherwise) that includes an imaging device 14, the imaging device 14 producing images that are subject to motion-induced blur. The autonomous vehicle 12 is, for example, a vehicle that is automatically controlled to transport passengers from one location to another. The vehicle 12 is depicted in the illustrated embodiment as a passenger vehicle, but it should be understood that any other vehicle, including motorcycles, trucks, Sport Utility Vehicles (SUVs), Recreational Vehicles (RVs), boats, airplanes, etc., may also be used. In the exemplary embodiment, the autonomous vehicle 12 is a so-called Level Four or Level Five automation system. A Level Four system indicates "high automation," referring to the driving-mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene. A Level Five system indicates "full automation," referring to the full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver.
In an embodiment, the vehicle 12 includes a vehicle controller 18, the vehicle controller 18 controlling one or more vehicle functions based on images from the imaging device 14. The vehicle controller 18 may include one or more advanced driver assistance systems that provide electronic driver assistance based on images from the imaging device 14. The vehicle controller 18 may include an autonomous or semi-autonomous driving system that controls the vehicle 12 via one or more vehicle actuators 96 (e.g., actuators of a propulsion, braking, and steering system) based on imaging inputs from the imaging device 14. In an embodiment, the vehicle controller 18 includes a control module that receives the deblurred output image 20 (e.g., as a frame of video or as a still image) from the image processing system 26 to determine the control instructions 90 to be applied to the vehicle actuators 96. The control module of the vehicle controller 18 may run localization and context-aware algorithms that process the deblurred output image 20 to determine the control instructions 90. The deblurred output image 20 removes or greatly reduces the self-motion-induced blur by using a deblurring matrix that has been calculated to account for vehicle motion, exposure time, and depth information for a given input image, as will be further described herein. In all of these embodiments, a better deblurred image will allow the vehicle controller 18 to more safely control the vehicle 12.
According to various embodiments, the system 10 includes an imaging device 14 (front, rear, or side mounted camera) or a plurality of such imaging devices 14. The imaging device 14 is any suitable camera or video device that produces images. For purposes of this disclosure, the image is assumed to include blur (and is therefore labeled as a blurred input image 24) due to motion blur caused by relative motion between the real world scene and the imaging device 14 caused by motion of the vehicle 12. The imaging device 14 may be a color imaging device or a grayscale imaging device. The imaging device 14 may operate in the visible and/or infrared spectrum. The imaging device 14 may produce a one-, two-, or three-dimensional (1D, 2D, or 3D) image that is used as the blurred input image 24.
The vehicle motion sensors 16 include various sensors used by a vehicle controller 18 to control operation of the vehicle 12. Of particular relevance to the present disclosure are velocity sensors, such as wheel velocity sensors, acceleration sensors, such as accelerometers and/or gyroscopes, and other vehicle motion sensors 16 that provide vehicle motion data 22 representative of parameters of sensed vehicle motion. As further described herein, the image processing system 26 uses the vehicle motion data 22 to determine the PSF. In some embodiments, the vehicle motion data 22 may be estimated from other data sources, rather than sensed directly. For example, the vehicle motion data 22 may be estimated based on the perception capabilities of the vehicle 12.
The one or more depth sensors 92 provide a depth channel for the blurred input image 24 so that three-dimensional position information for features in the blurred input image may be obtained. The depth sensor 92 thus produces a depth map 94 corresponding to the blurred input image 24. The depth sensor 92 may be any of a number of types, including a stereo camera system, a lidar device, a time-of-flight (TOF) camera, a radar device, an ultrasound device, and a laser range finder, and may be associated with appropriate processing capabilities to allow depth or range information to be obtained. Although the depth sensor 92 and the imaging device 14 are shown as separate devices in FIG. 1, they may be integrated devices, such as with an enhanced CCD camera.
With continued reference to FIG. 1, the image processing system 26 includes at least one processor 70, memory 72, and the like. The processor 70 may execute program instructions 74 stored in the memory 72. The processor 70 may refer to a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a dedicated processor on which the methods and functions according to the present invention are performed. The memory 72 may be comprised of volatile and/or non-volatile storage media. For example, the memory 72 may include Read Only Memory (ROM) and/or Random Access Memory (RAM). The memory 72 stores at least one instruction that is executed by the processor 70 to implement the blocks, modules, and method steps described herein. The modules implemented by the image processing system 26 include a Point Spread Function (PSF) module 28, a blur matrix calculation module 30, a deblurring module 34, and a regularization deconvolution sub-module 36. Although modules 28, 30, 34, and 36 (described further below) are shown separately from the processor 70, the memory 72, and the programming instructions 74, this is purely for visualization. Indeed, the modules 28, 30, 34, and 36 are embodied by the programming instructions 74 stored in the memory 72 and executable by the one or more processors 70 of the image processing system 26.
The image processing system 26 is configured, through programming instructions 74 executed on the processor 70 (as further described below), to receive the blurred input image 24 from the imaging device 14, the depth map 94 from the one or more depth sensors 92, and the vehicle motion data 22 from the vehicle motion sensor 16. The image processing system 26 determines the PSF based not only on the vehicle motion data 22 (e.g., movement speed) and the camera data 76 (e.g., exposure time), but also on depth information for the blurred input image 24 to account for the fact that more distant objects move with a lesser amount of blur than closer objects. The image processing system 26 calculates a deblurring matrix based on the point spread function through a deconvolution process. The image processing system 26 provides the deblurred output image 20 by operating the deblurring matrix on the blurred input image 24.
The blur of the input image can be mathematically represented by the following equation:
I_B = I × K_B (Equation 1)
where I_B is the blurred input image 24, I is the unknown, unblurred image corresponding to the deblurred output image 20, and K_B is a blur matrix that models the PSF describing the blur properties in the blurred input image 24. Since the present disclosure relates to non-blind deblurring, the PSF is assumed to be known, and the blur matrix K_B can be derived from the PSF. PSFs for all blur modes are known in the art, including blur caused by motion of the imaging device 14 during exposure. In theory, the inverse of the blur matrix (denoted K_B^-1) could be multiplied with the blurred input image 24 to recover the unblurred image I. However, noise in the blurred input image 24 makes simple deconvolution impractical. During the deconvolution process, the noise component is amplified in an uncontrolled manner, which may result in a deconvolved image that is no sharper (or even more blurred) than the original blurred input image 24. One solution to this noise amplification problem is to deblur the blurred input image 24 using a regularized inverse of the blur matrix. Such regularized deconvolution functions are known in the art. The regularized deconvolution function relies on a regularization parameter λ to mitigate the effects of noise. Referring to FIG. 1, the regularization deconvolution sub-module 36 receives the blurred input image 24 and operates a regularized deconvolution function 44 thereon.
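For illustration, the following minimal sketch (not part of the patent disclosure; the row length, blur length, and noise level are assumptions chosen for the example) builds a one-dimensional blur matrix K_B, forms a blurred row per Equation 1 with a little sensor noise, and shows why directly inverting K_B is ill-advised, motivating the regularized deconvolution discussed above.

```python
# Hedged 1-D sketch of Equation 1 and of naive (unregularized) deconvolution.
import numpy as np

n, blur_len = 64, 7
rng = np.random.default_rng(0)

# Banded "box" blur matrix K_B: output pixel i averages blur_len input pixels.
K_B = np.zeros((n, n))
for i in range(n):
    K_B[i, i:min(i + blur_len, n)] = 1.0 / blur_len

I_sharp = np.zeros(n)
I_sharp[20:28] = 1.0                                   # unknown sharp row I
I_B = K_B @ I_sharp + 1e-3 * rng.standard_normal(n)    # Equation 1 plus sensor noise

# Direct inversion: small singular values of K_B amplify the noise term.
I_naive = np.linalg.solve(K_B, I_B)
print(np.linalg.cond(K_B), np.abs(I_naive - I_sharp).max())
```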
The image processing system 26 includes a PSF module 28 that receives the vehicle motion data 22, including at least speed data, and the depth map data 94. These inputs are processed by the PSF module 28 to determine PSF data 31 representing the PSF. During the course of operation of the PSF module 28, the optical flow lines are implicitly or explicitly estimated, as will be further described herein. The PSF data 31 may vary depending on vehicle motion (e.g., the faster the vehicle, the greater the spread or blur defined by the PSF) and camera data 76 representing relevant camera parameters obtained from the imaging device 14 (e.g., exposure time, such that the longer the exposure time, the greater the blur assuming constant vehicle speed). The PSF data 31 is determined by the point spread function module 28, which includes an optical flow modeling function for determining an expected PSF based on the vehicle motion data 22 and the camera data 76. In one embodiment, assuming translational motion of the vehicle 12 with the imaging device 14 pointing in a direction perpendicular to the velocity vector (side-looking imaging device 14), the point spread function module 28 determines the point spread function data 31 based on:
u = f·X/Z, v = f·Y/Z (Equation 2)
where u and v are coordinates in two-dimensional image space, X, Y and Z are coordinates in three-dimensional real world space, f is the focal length of the imaging device 14 and is predetermined for the particular system 10, V is the speed of the vehicle 12, which is derived from the vehicle motion data 22, and t corresponds to the exposure time of the imaging device 14, which is derived from the camera data 76. Assuming translational motion of the imaging device 14 pointing in a direction perpendicular to the velocity vector (side-looking imaging device 14), the smearing is expected to occur only along the u-axis, and the degree of smearing is expected to be:
L(u, v) = f·V·t / D(u, v) (Equation 3)
where D is a depth map defined by depth map data 94:
D(u, v) = Z(X(u), Y(v)) (Equation 4)
Optical flow is a pattern of apparent motion of objects, surfaces, and edges in an image scene caused by relative motion between the imaging device 14 and the real-world scene. In this example of translational motion of the imaging device 14, the optical flow lines occur in only one dimension, along the u-direction in image space, which corresponds to the direction of the translational motion in real world space. Thus, the magnitude of the optical flow lines is estimated by Equation 3. The PSF module defines the PSF data 31 based on the optical flow lines implicitly estimated from Equation 3.
As can be understood from equations 2 and 3, the amount of movement of the image in image space during the exposure time is inversely proportional to the depth of the imaged object in real world space. The present disclosure proposes to use the depth map data 94 in determining the PSF data 31 to take depth information into account in order to determine a depth adaptive point spread function that allows accurate motion artifact deblurring.
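As a concrete illustration of this depth dependence, the short sketch below (an example under stated assumptions, not the patent's code; the focal length, speed, and exposure values are invented) evaluates the smear length of Equation 3 for one row of a depth map: halving the depth doubles the number of pixels over which a point smears.

```python
# Hedged sketch of the depth-adaptive smear length L = f * V * t / D for one row.
import numpy as np

def smear_length_px(depth_row_m, focal_px=1000.0, speed_mps=15.0, exposure_s=0.02):
    """Per-pixel smear length in pixels for one image row (parameter values assumed)."""
    depth_row_m = np.asarray(depth_row_m, dtype=float)
    return focal_px * speed_mps * exposure_s / np.clip(depth_row_m, 1e-3, None)

depth_row = [2.0, 5.0, 10.0, 40.0]        # metres, an illustrative depth map row
print(smear_length_px(depth_row))         # [150.  60.  30.   7.5] pixels
```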
The blur matrix calculation module 30 converts the PSF defined in the PSF data 31 into matrix form and outputs a corresponding blur matrix 32. The blur matrix 32 takes the form of a banded matrix in which each row contains the PSF for the corresponding pixel (Equation 5). The PSF in Equation 5 is obtained from the PSF data 31, with each pixel's PSF extending over L pixels along the direction of the optical flow, where L is derived from Equation 3 (Equation 6).
the regularization deconvolution submodule 36 receives a representation blur momentMatrix KBAnd utilized to perform a regularized deconvolution on the blurred input image 24 to generate a deconvolved image 40.
FIG. 2 illustrates an exemplary data flow diagram of the regularized deconvolution function 44 used by the regularization deconvolution sub-module 36. In this case, the regularized deconvolution function 44 is a Tikhonov regularized deconvolution function. As shown in FIG. 2, the PSF module 28 generates PSF data 31 representing a PSF, which PSF data 31 is converted into the blur matrix K_B defined by the blur matrix 32, as described above. The blur matrix 32 is subjected to a Singular Value Decomposition (SVD) to generate the U, S and V decomposition matrices according to Equation 7:
K_B = U S V^T (Equation 7)
The regularized inverse of the blur matrix is found to be:
K_B^-1 (regularized) = V S(S^2 + λ^2 I)^-1 U^T (Equation 8)
where I in Equation 8 denotes the identity matrix. The estimate of I (the unblurred version of the blurred input image 24) is then:
I = V S(S^2 + λ^2 I)^-1 U^T I_B (Equation 9)
where I_B is the blurred input image 24.
With continued reference to FIG. 2, since the blur matrix K_B has been decomposed, the regularized deconvolution function 44 is composed of the decomposition matrices 80, 82, 84. The matrix 82 is a function of the regularization parameter λ 78 and has the form S(S^2 + λ^2 I)^-1. The regularization parameter 78 may be a constant for the particular system 10. The blurred input image 24 is multiplied by the decomposition matrices 80, 82, 84 as part of the regularized Tikhonov deconvolution function 44 to provide the deconvolved image 40. Other deconvolution functions are known and may be suitable, such as a Wiener regularized deconvolution function.
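A minimal NumPy sketch of this Tikhonov pipeline is shown below, assuming a one-dimensional blur matrix and an arbitrarily chosen λ; it mirrors the decomposition of Equation 7 and the damped inverse of Equations 8 and 9, but is an illustration rather than the module's actual implementation.

```python
# Hedged sketch of Tikhonov-regularized deconvolution of one image row via SVD.
import numpy as np

def tikhonov_deblur_row(K_B, I_B, lam=0.05):
    """Apply U^T, the damped singular values, then V to the blurred row."""
    U, s, Vt = np.linalg.svd(K_B)              # Equation 7: K_B = U S V^T
    s_damped = s / (s**2 + lam**2)             # S (S^2 + lam^2 I)^-1 applied as a vector
    return Vt.T @ (s_damped * (U.T @ I_B))     # Equations 8 and 9 applied to I_B

# Example usage with the K_B and I_B from the earlier sketches:
# I_deblurred = tikhonov_deblur_row(K_B, I_B, lam=0.05)
```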
Referring back to FIG. 1, the deconvolved image 40 may have image artifacts as an inherent result of the regularized deconvolution process. For this reason, the deblurring module 34 may include a Convolutional Neural Network (CNN) sub-module (not shown) or some other artifact removal sub-module that removes any image artifacts in the deconvolved image 40. After any further image artifact removal processing, the deblurring module 34 outputs the deblurred output image 20 based on the deconvolved image 40.
Referring now to fig. 3-5, with continued reference to fig. 1 and 2, a flow chart illustrates a method that may be performed by the image processing system 26 of fig. 1 in accordance with the present disclosure. As can be appreciated in light of this disclosure, the order of operations within the method is not limited to being performed in the order shown in fig. 3-5, but may be performed in one or more varying orders as applicable and in accordance with this disclosure. In various embodiments, the method may be scheduled to run based on one or more predetermined events, and/or may run continuously during operation of the autonomous vehicle 12.
FIG. 3 is a functional block diagram of data processing 300 in a method for motion artifact image deblurring in the case of linear motion and a side view imaging device 14, according to an embodiment. The data processing 300 may be activated when the image processing system 26 determines, based on the vehicle motion data 22, that the vehicle 12 is translating in the y-direction (where the facing direction of the imaging device 14 is in the x-direction). In process 302, a blurred input image 24 is obtained by the side-looking imaging device 14. In process 304, vehicle motion data 22 is obtained by the vehicle motion sensor 16. In process 306, depth map data 94 is obtained by the depth sensor 92. In process 308, optical flow lines are estimated. Since the present case is pure translational motion, the optical flow lines are straight lines, independent of the depth of the imaged object, and extend in the y-direction. Thus, the result of Equation 3 is embodied in the PSF data 31, which implicitly estimates the magnitude of the optical flow lines of the blurred input image 24. Process 308 estimates the optical flow lines by computing the result of Equation 3 based on the vehicle speed, the exposure time, and the depth map. Process 310 calculates the blur matrix 32 according to Equation 5 above. Process 312 deconvolves the blur matrix 32 into a regularized deblurring matrix using the deconvolution function 44. In process 314, the regularized deblurring matrix from process 312 is applied to the blurred input image from process 302, thereby outputting the deblurred output image 20 in process 316.
In an embodiment, some of the processes 300 are performed row by row, thereby facilitating simplified one-dimensional image processing. That is, the data processing 310, 312, and 314 in block 320 is performed for each row. A single row of image data is read from the two-dimensional blurred input image 24, whose rows and columns span the two image dimensions. In processes 310 and 312, the blur matrix and the regularized deblurring matrix are computed in one dimension (e.g., for a single row) based on the depth information for that single row of the blurred input image 24. In this manner, the one-dimensional Tikhonov regularized deconvolution function 44 described above is used in process 312. The regularized deblurring matrix from process 312 is applied to the single row of image data of the blurred input image to produce a single row of output data. The processes 310, 312, and 314 are repeated for each row of the blurred input image 24. In process 316, the rows of deblurred output image data are combined and output, thereby providing the deblurred output image 20.
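Putting the pieces together, a hedged sketch of this row-by-row loop (processes 310 through 314) is given below; it reuses the illustrative helpers smear_length_px, blur_matrix_row, and tikhonov_deblur_row sketched earlier and assumes the blurred image and depth map are aligned 2-D NumPy arrays of equal shape.

```python
# Hedged sketch of block 320: deblur a side-view image one row at a time.
import numpy as np

def deblur_side_view(blurred_img, depth_map, focal_px, speed_mps, exposure_s, lam=0.05):
    out = np.empty_like(blurred_img, dtype=float)
    for r in range(blurred_img.shape[0]):                        # one row at a time
        L_row = smear_length_px(depth_map[r], focal_px, speed_mps, exposure_s)
        K_B = blur_matrix_row(L_row)                             # process 310
        out[r] = tikhonov_deblur_row(K_B, blurred_img[r], lam)   # processes 312 and 314
    return out
```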
FIG. 6 shows an image 600 generated during the simulation of motion artifact deblurring according to the data processing of FIG. 3 in the case of a side view camera and translational motion, according to an exemplary embodiment. The blurred input image 602 includes blur in the u-direction of image space, which corresponds to the direction of translational motion of the vehicle 12. The image 600 includes a depth map 604, the depth map 604 corresponding (in size and position) to the blurred input image 602. According to the process 300 of FIG. 3, the PSF is calculated for each row of the blurred input image 602 using the corresponding row of the depth map 604, thereby implicitly estimating the magnitude of the optical flow lines in the row, or u, direction according to process 308. From the PSF, a regularized deblurring matrix is computed using processes 310 and 312, and it is applied to the corresponding row of the blurred input image 602 in process 314. These processes 310, 312, and 314 are repeated to produce a deblurred output image 606. Image 608 shows ground truth for purposes of comparison with the deblurred output image 606. As can be seen by comparing the blurred input image 602 and the deblurred output image 606 in FIG. 6, efficient one-dimensional image processing steps have been used to form a sharp image.
FIG. 4 is a functional block diagram of data processing 400 in a method for motion artifact image deblurring in the case of linear motion and a forward or rear view imaging device 14, according to an embodiment. The data processing 400 may be activated when the image processing system 26 determines, based on the vehicle motion data 22, that the vehicle 12 is translating in the y-direction (where the facing direction of the imaging device 14 is also in the y-direction). In translational forward or backward motion, and using a forward or rear looking imaging device 14 aligned with the translational motion of the vehicle 12, the optical flow is aligned with a set of straight lines converging at the focus of expansion (FOE) of the imaging device 14. The FOE may, but need not, coincide with the center of the image captured by the imaging device 14. Its position depends on the orientation of the imaging device 14 relative to the vehicle motion.
In process 402, a blurred input image 24 is obtained by the forward or rear looking imaging device 14. In process 404, vehicle motion data 22 is obtained by the vehicle motion sensor 16. In process 406, depth map data 94 is obtained by the depth sensor 92. In process 410, the blurred input image 24 is resampled from cartesian coordinates to polar coordinates centered on the FOE. In the present example, the motion-induced blur occurs only in the radial direction, along lines of constant θ that converge at the FOE of the imaging device 14. By converting the blurred input image into polar coordinates, the motion blur is aligned along the u-direction in image space, so that Equations 2 through 9 above still apply. That is, by resampling the blurred input image 24, the problem of motion deblurring using depth information has been made solvable by the aforementioned one-dimensional approach. In process 412, the depth map defined by the depth map data 94 is resampled to polar coordinates centered on the FOE, for reasons similar to those outlined above.
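The sketch below illustrates one way such a cartesian-to-polar resampling around the FOE could be carried out; SciPy's map_coordinates, the grid sizes, and the interpolation settings are assumptions chosen for the example, not details taken from the patent.

```python
# Hedged sketch: resample an image so that each output row is a line of constant
# theta through the FOE, making radial motion blur one-dimensional per row.
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image, foe_rc, n_theta=360, n_radius=400):
    r_max = float(np.hypot(*image.shape))                # generous radial extent
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(0.0, r_max, n_radius)
    tt, rr = np.meshgrid(thetas, radii, indexing="ij")
    rows = foe_rc[0] + rr * np.sin(tt)                   # cartesian sample positions
    cols = foe_rc[1] + rr * np.cos(tt)
    return map_coordinates(image, [rows, cols], order=1, mode="nearest")

# Example usage (FOE assumed at the image centre):
# polar_img = to_polar(blurred_img, foe_rc=(blurred_img.shape[0] // 2,
#                                           blurred_img.shape[1] // 2))
```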
In process 408, optical flow lines are estimated. Since the present case is pure translational motion, the optical flow lines are straight lines regardless of the depth of the imaged object and extend in the radial direction. Thus, the result of Equation 3 is reflected in the PSF data 31, which implicitly estimates the magnitude of the optical flow lines of the blurred input image 24. Process 408 estimates the optical flow lines by calculating the result of Equation 3 based on the vehicle speed, the exposure time, and the depth map. Process 414 calculates the blur matrix 32 according to Equation 5 above. According to process 414, the PSF is defined with respect to the resampled depth map. That is, the PSF uses the values of the resampled depth map at each coordinate [m, n] (or pixel index) of the resampled depth map. As a result of the resampling process, the u and v coordinates in the depth map and the blurred input image 24 are aligned with r (radial extent) and θ (polar angle). Process 416 deconvolves the blur matrix 32 into a regularized deblurring matrix using the deconvolution function 44. In process 418, the regularized deblurring matrix from process 416 is applied to the resampled input image from process 410 to produce a polar deblurred output image. In process 420, the polar deblurred output image is resampled, or converted, to cartesian coordinates such that the deblurred output image 20 is output in process 422.
Some of the processes 400 are performed row by row, similar to the processes described with respect to FIG. 3, thereby facilitating simplified one-dimensional image processing. That is, the data processing 414, 416, and 418 in block 430 is repeated for each single row of the resampled input image from process 410 and each single row of the resampled depth map from process 412.
FIG. 7 shows an image 700 generated during the simulation of motion artifact deblurring according to the data processing of FIG. 4 in the case of a forward-looking or rear-looking camera and translational motion, according to an exemplary embodiment. The blurred input image 702 includes blur in the r-direction of image space along lines of constant θ that converge at the FOE of the imaging device 14. The image 700 includes a depth map 704, the depth map 704 corresponding (in size, position, and FOE) to the blurred input image 702.
The blurred input image 702 and the depth map 704 are resampled into polar coordinates according to processes 410 and 412 of FIG. 4 to produce the resampled input image 706 and resampled depth map 708 shown in FIG. 7. In processes 408 and 414, the values of the resampled depth map 708 are used to determine the point spread function and the blur matrix, which implicitly estimate the magnitude of the optical flow lines. In process 416, a regularized deblurring matrix is computed row by row based on the corresponding row of the resampled depth map 708, and in process 418 it is applied to the corresponding single row of the resampled input image 706. These processes 414, 416, and 418 are repeated for all rows of the resampled input image 706 and the resampled depth map 708 to produce a polar coordinate deblurred output image, which is then resampled to cartesian coordinates in process 420 to produce the deblurred output image 710. Image 712 shows a ground truth image, whose sharpness closely corresponds to the deblurred output image 710. As can be seen by comparing the blurred input image 702 and the deblurred output image 710 in FIG. 7, efficient one-dimensional image processing steps have been used to form a sharp image substantially free of motion-induced blur.
FIGS. 3 and 4 depict data processing for motion artifact image deblurring in the specific cases of translational motion with side view and front/rear view cameras, respectively, according to exemplary embodiments. The embodiments of FIGS. 3 and 4 are practical in themselves and cover a wide range of usage scenarios for vehicles. However, a more general approach can be derived from the same principles, one that does not rely on assumptions about translational motion and specific camera orientation. FIG. 5 shows data processing 500 of a method for image deblurring according to a more general exemplary embodiment. The data processing of FIG. 5 is applicable to mixed (rotational plus translational) motion of the vehicle 12. When the vehicle 12 undergoes mixed motion, the direction of the optical flow lines and the magnitude of the optical flow depend on depth and will vary locally throughout the blurred input image 24. This is in contrast to the case of translational motion of the vehicle 12, where the direction of the optical flow lines is uniform throughout the blurred input image 24 and only their magnitude varies as a function of depth.
The data processing 500 may be activated when the image processing system 26 determines that the vehicle 12 has significant motion contributions, both translationally and rotationally, based on the vehicle motion data 22. It should be noted that for pure rotational motion, the optical flow is not depth dependent and therefore does not need to be corrected by the depth adaptive motion deblurring systems and methods described herein.
In process 502, a blurred input image 24 is obtained by the imaging device 14. In process 504, vehicle motion data 22 is obtained by the vehicle motion sensor 16. In the present embodiment, the vehicle motion data includes three-dimensional velocity and acceleration information. In process 506, depth map data 94 is obtained by the depth sensor 92. In process 508, the optical flow is estimated. In contrast to the processes of FIGS. 3 and 4, the optical flow is estimated explicitly rather than implicitly. The optical flow may be determined by:
ẋ = (x·V_z - V_x)/Z + x·y·Ω_X - (1 + x²)·Ω_Y + y·Ω_Z
ẏ = (y·V_z - V_y)/Z + (1 + y²)·Ω_X - x·y·Ω_Y - x·Ω_Z (Equation 10)
where x and y represent the normalized coordinates of image features in the real world:
x = X/Z, y = Y/Z
X, Y and Z are the regular three-dimensional coordinates of image features in the real world, V_x, V_y and V_z are the velocity vector components along each of the x, y and z axes, and:
Ω = (Ω_X Ω_Y Ω_Z)^T (Equation 11)
Ω is an angular velocity vector, which is obtained from vehicle motion data 22, in particular acceleration data obtained from an Inertial Measurement Unit (IMU) included in the vehicle motion sensor 16.
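For illustration only, the sketch below evaluates a dense ego-motion flow field of the kind described by Equations 10 and 11, using the standard instantaneous-motion model for a pinhole camera; the sign conventions, the pixel-unit conversion, and all parameter names are assumptions and would need to match the actual camera and vehicle coordinate frames.

```python
# Hedged sketch: per-pixel flow displacement over the exposure from depth,
# translational velocity V, and angular velocity Omega (conventions assumed).
import numpy as np

def ego_motion_flow(depth, f_px, cx, cy, V, Omega, exposure_s):
    """Return (du, dv), the per-pixel displacement in pixels during the exposure."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / f_px                               # normalized image coordinates
    y = (v - cy) / f_px
    Vx, Vy, Vz = V
    Ox, Oy, Oz = Omega                                # Equation 11
    Z = np.clip(depth, 1e-3, None)
    x_dot = (x * Vz - Vx) / Z + x * y * Ox - (1 + x**2) * Oy + y * Oz
    y_dot = (y * Vz - Vy) / Z + (1 + y**2) * Ox - x * y * Oy - x * Oz
    return x_dot * f_px * exposure_s, y_dot * f_px * exposure_s
```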
Process 508 provides an optical flow map that includes the magnitude and direction of the optical flow lines at each coordinate. The optical flow may vary locally throughout the optical flow map. In process 510, the blurred input image 24 is resampled from cartesian coordinates along the optical flow lines based on the estimated optical flow obtained in process 508. The resampling step comprises resampling to a local coordinate system based on the optical flow lines at each pixel of the blurred input image. In process 512, the optical flow map itself, which has been obtained in process 508, is resampled along the optical flow lines.
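One simple way to picture this resampling along the flow lines is sketched below: for every pixel, the image is sampled at a fixed number of points stepped along that pixel's local flow vector, producing locally aligned samples on which a one-dimensional deblurring can operate. The sampling scheme and sample count are assumptions for the example, not the patent's resampling procedure.

```python
# Hedged sketch: sample an image at points along each pixel's optical flow line.
import numpy as np
from scipy.ndimage import map_coordinates

def sample_along_flow(image, du, dv, n_samples=15):
    """Return an array of shape (n_samples, h, w) of samples along each flow line."""
    h, w = image.shape
    v0, u0 = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    taus = np.linspace(0.0, 1.0, n_samples)
    samples = [map_coordinates(image, [v0 + t * dv, u0 + t * du],
                               order=1, mode="nearest") for t in taus]
    return np.stack(samples, axis=0)
```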
Process 514 calculates the blur matrix 32 according to Equation 5 above. In accordance with process 514, the PSF is defined with respect to the resampled optical flow map obtained from process 512. That is, the PSF uses the values of the resampled optical flow map at each of its coordinates [m, n] (or pixel indices). The resampling process aligns the blurred input image 24 and the complex optical flow in the optical flow map one-dimensionally (in the u-direction of image space), so that the one-dimensional deblurring equations described above remain operational. Process 516 deconvolves the blur matrix 32 into a regularized deblurring matrix using the deconvolution function 44. In process 518, the regularized deblurring matrix from process 516 is applied to the resampled input image from process 510 to produce an optical flow coordinate deblurred output image. In process 520, the optical flow coordinate deblurred output image is resampled to cartesian coordinates such that the deblurred output image 20 is output in process 522.
Some of the processes 500 are performed row by row, similar to those described with respect to FIGS. 3 and 4, thereby facilitating simplified one-dimensional image processing. That is, the data processing 514, 516, and 518 in block 530 is repeated for each single row of the resampled input image from process 510 and the resampled optical flow map from process 512.
Thus, for the most general case, the optical flow is calculated in cartesian coordinates using Equations 10 and 11 above. The optical flow itself is then resampled along the optical flow lines. In this way, the optical flow is aligned with the resampled input image, and deblurring can be performed independently for each row of the resampled image. Since, in the embodiments described herein with reference to FIGS. 3 through 5, deblurring is performed row by row, it can be efficiently parallelized across more than one processor 70.
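Because each row is deblurred independently, the work parallelizes naturally; the short sketch below shows one ordinary way to spread the illustrative per-row helpers from the earlier sketches across worker threads (NumPy releases the interpreter lock during the SVD). It is an assumption-based example, not a statement about the patent's implementation.

```python
# Hedged sketch: parallelize the row-wise deblurring across threads.
from concurrent.futures import ThreadPoolExecutor

def deblur_rows_parallel(rows, depth_rows, focal_px, speed_mps, exposure_s, lam=0.05):
    def one_row(img_row, depth_row):
        L_row = smear_length_px(depth_row, focal_px, speed_mps, exposure_s)
        return tikhonov_deblur_row(blur_matrix_row(L_row), img_row, lam)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(one_row, rows, depth_rows))
```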
In an embodiment of the present disclosure, the deblurred output image 20 is output by the image processing system 26 and received by the vehicle controller 18. The vehicle controller 18 generates the control instructions 90 based on the deblurred output image. The control commands 90 are output to vehicle actuators 96 to control one or more functions of the vehicle 12, such as steering, braking, and propulsion.
It should be understood that the disclosed methods, systems, and vehicles may differ from those illustrated in the figures and described herein. For example, the vehicle 12 and the image processing system 26 and/or various components thereof may differ from that shown in fig. 1 and 2 and described in connection with fig. 1 and 2. Additionally, it will be appreciated that certain steps of the method may differ from those shown in fig. 3-5. It should similarly be appreciated that certain steps of the above-described methods may occur simultaneously or in a different order than shown in fig. 3 through 5.
While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the disclosure in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the exemplary embodiment or exemplary embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope of the application and its legal equivalents.