
defog: Image Dehazing in Python | Image Processing | Image Optimization

2019-09-27 12:13, Friday · Category: Python · Views: 8


Image Dehazing Implementation Plan

 

  • Introduction
  • Method

In a foggy scene, atmospheric particles scatter the light, so part of the light reflected from object surfaces in the scene is scattered away. The light intensity is therefore reduced and decays exponentially with the propagation distance; at the same time, ambient atmospheric light is added to the reflected light along the way, and its contribution changes as the propagation distance grows.

Based on this theory of atmospheric light scattering, computer vision and graphics use a widely adopted atmospheric scattering model:

    I(x) = J(x) t(x) + A (1 − t(x))

Here I is the captured hazy image, J is the scene image after dehazing, t is the transmission of the propagation medium, A is the global atmospheric light value, and x indexes the pixels of the image.
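The forward direction of this model is how hazy training images are typically synthesized from clean ones. A minimal sketch with NumPy (function name and toy values are illustrative, not from the original):

```python
import numpy as np

def synthesize_haze(J, t, A):
    """Apply the atmospheric scattering model I = J*t + A*(1 - t).

    J : clean scene image, float array in [0, 1], shape (H, W, 3)
    t : transmission map in (0, 1], shape (H, W)
    A : global atmospheric light, scalar or length-3 vector
    """
    t = t[..., np.newaxis]          # broadcast over colour channels
    return J * t + A * (1.0 - t)

# A thicker haze (smaller t) pushes every pixel towards the airlight A.
J = np.zeros((2, 2, 3))             # a black scene
t = np.full((2, 2), 0.25)           # 75% of the light is scattered away
I = synthesize_haze(J, t, A=0.8)
print(I[0, 0])                      # → [0.6 0.6 0.6]
```

With J = 0 the output is simply A(1 − t) = 0.8 × 0.75 = 0.6 at every pixel, matching the model's prediction that dense haze washes everything out towards the atmospheric light.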

Among existing methods, some use a CNN to estimate t(x) and then recover J(x) from the atmospheric scattering model. The problem with this is that A must also be estimated well for the result to be good; otherwise, no matter how accurate the estimate of t(x) is, a good dehazing effect cannot be achieved. Other methods use CNNs to estimate t(x) and A separately and then recover J(x) from the model, which gives more reliable results. The model proposed here follows this second approach to dehaze images.

  • Network Architecture

The model consists of four parts (blue boxes in the figure):

  • Transmission Map Estimation

The Transmission Map Estimation network produces the transmission map of the image, i.e. t(x). Its structure is shown below:

 

This network is a densely connected encoder-decoder that uses dense blocks as its basic building unit. Dense blocks keep the advantages of DenseNet: they guarantee information flow between layers, which better preserves spatial structure and also helps the network converge during training. The encoder uses a pre-trained DenseNet-121, consisting of one conv layer and three dense blocks. The decoder consists of five dense blocks and one conv layer.

Global contextual information helps to describe image features. To let local features also capture the global structure of the image, the network applies pooling at four different scales, so the encoder-decoder part outputs four feature maps at different resolutions (1/4, 1/8, 1/16, 1/32). These are upsampled back to the original size and concatenated with the encoder-decoder output, yielding information at multiple scales.
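The pool-upsample-concatenate idea can be sketched with plain NumPy (block-average pooling and nearest-neighbour upsampling stand in for the network's learned layers; shapes and scale factors are illustrative):

```python
import numpy as np

def avg_pool(x, k):
    """Block-average pooling of a (H, W, C) feature map by factor k."""
    H, W, C = x.shape
    return x.reshape(H // k, k, W // k, k, C).mean(axis=(1, 3))

def upsample_nearest(x, k):
    """Nearest-neighbour upsampling by factor k."""
    return x.repeat(k, axis=0).repeat(k, axis=1)

def pyramid_features(feat, scales=(4, 8, 16, 32)):
    """Pool at several scales, upsample back, concatenate with the input."""
    maps = [feat]
    for k in scales:
        maps.append(upsample_nearest(avg_pool(feat, k), k))
    return np.concatenate(maps, axis=-1)

feat = np.random.rand(32, 32, 8)    # toy encoder-decoder output
out = pyramid_features(feat)
print(out.shape)                    # (32, 32, 40): input + one map per scale
```

Each pooled map summarises the features over larger and larger neighbourhoods, so the concatenated result mixes local detail with increasingly global context.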

  • Atmospheric Light Estimation

The Atmospheric Light Estimation network produces the atmospheric light A(x). Since the atmospheric light is assumed uniform for a given image, A(x) is a 2D map of the same size as the input image, and we adopt a U-Net for this task. U-Net is an encoder-decoder: the encoder gradually reduces the spatial dimensions through pooling, and the decoder gradually restores the details and spatial dimensions of objects. Shortcut (skip) connections between the encoder and decoder help the decoder recover object details. U-Net is widely used for image-to-image problems.

U-Net structure:

It has roughly 20 convolutional layers, with 4 downsampling and 4 upsampling stages.

  • Atmospheric Scattering Model

The Atmospheric Scattering Model part applies the atmospheric scattering model rearranged to solve for the clean image:

    J(x) = (I(x) − A (1 − t(x))) / t(x)

Substituting the t(x) and A produced by the two networks above, together with the hazy image I(x), into this formula yields the dehazed image J(x).
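The inversion step is a few lines of array arithmetic. A sketch with NumPy, where the lower clamp `t_min` is a common practical safeguard (an assumption here, not stated in the original):

```python
import numpy as np

def dehaze(I, t, A, t_min=0.1):
    """Invert the scattering model: J = (I - A*(1 - t)) / t.

    t is clamped below by t_min to avoid amplifying noise where the
    estimated transmission is close to zero.
    """
    t = np.clip(t, t_min, 1.0)[..., np.newaxis]
    return np.clip((I - A * (1.0 - t)) / t, 0.0, 1.0)

# Round trip: hazing a clean scene and then inverting recovers it.
J = np.random.rand(4, 4, 3)
t = np.full((4, 4), 0.5)
A = 0.8
I = J * t[..., None] + A * (1.0 - t[..., None])
print(np.allclose(dehaze(I, t, A), J))   # True
```

In practice t and A come from the two estimation networks rather than being known, so the recovery is only as good as those estimates, which is exactly why the text argues for estimating both.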

  • Discriminator

This part borrows the idea of a GAN but uses only the discriminator. It consists of four conv layers and one fc layer (following the paper “Single Image Dehazing via Convolutional Generative Adversarial Network”). The discriminator compares the dehazed image computed in part (3) with the corresponding clean (haze-free) image, and the network is trained until the discriminator can no longer tell whether its input is a dehazed image or a real clean image. This yields a good dehazing result.
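A minimal PyTorch sketch of such a four-conv + one-fc discriminator; channel widths, kernel sizes, and the global-average-pool before the fc layer are illustrative assumptions, not taken from the referenced paper:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Sketch: 4 strided conv layers followed by 1 fully connected layer."""
    def __init__(self, in_ch=3):
        super().__init__()
        layers, ch = [], in_ch
        for out_ch in (64, 128, 256, 512):   # assumed channel widths
            layers += [nn.Conv2d(ch, out_ch, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch = out_ch
        self.features = nn.Sequential(*layers)
        self.fc = nn.Linear(512, 1)          # real (clean) vs. fake (dehazed)

    def forward(self, x):
        h = self.features(x)
        h = h.mean(dim=(2, 3))               # global average pool -> (N, 512)
        return torch.sigmoid(self.fc(h))

D = Discriminator()
fake = torch.rand(2, 3, 64, 64)              # a batch of dehazed images
print(D(fake).shape)                         # torch.Size([2, 1])
```

During training the discriminator would see dehazed images labelled fake and ground-truth clean images labelled real, with the dehazing networks updated adversarially against it.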

  • Remarks

The scheme proposed here mainly follows the paper “Densely Connected Pyramid Dehazing Network”. For lack of time we could not implement the code and verify the results, so we avoid large changes, but there is one innovation relative to the paper, at the input of the discriminator in part (4). The original work concatenates the transmission map with the dehazed image (i.e. transmission map + dehazed image) and feeds the pair to the discriminator, so that the three parts (estimated transmission map, dehazed image, transmission map + dehazed image) are pushed to match the ground-truth three parts (true transmission map, clean image, true transmission map + clean image); exploiting their joint distribution improves the dehazing result. Here we instead feed the dehazed image and the ground-truth haze-free image directly to the discriminator, which should work even better, provided the dataset contains paired hazy and haze-free images.

 

 

 

The advantages of this modification:

1. The original work concatenates the transmission map with the dehazed image as the discriminator input; in the loss function, optimising their joint distribution better exploits the structural correlation between them. The issue is that the dehazed image is itself computed from the transmission map t(x) and A via the scattering model, so once A is fixed the relationship between the two is

    t(x) = (I(x) − A) / (J(x) − A)

Hence comparing the dehazed image directly with the clean image should, in theory, give an effect similar to comparing (dehazed image + transmission map) with (clean image + true transmission map), while being simpler (it simplifies the loss function).
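A quick numerical check that, for fixed A, the dehazed image determines the transmission map (values below are arbitrary toy data):

```python
import numpy as np

# From I = J*t + A*(1 - t) it follows that I - A = t*(J - A),
# so t = (I - A) / (J - A) whenever J differs from A.
A = 0.9
J = np.array([0.2, 0.4, 0.6])        # clean intensities (all != A)
t = np.array([0.3, 0.5, 0.7])
I = J * t + A * (1.0 - t)
t_rec = (I - A) / (J - A)
print(np.allclose(t_rec, t))         # True
```

Since t is a deterministic function of (I, J, A), the transmission map adds no independent information to the discriminator's input once A is fixed, which is the argument for dropping it.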

2. The loss function of the original method is as follows:

This loss function puts too much weight on the quality of the transmission map t(x). The transmission map is similar to a depth map: it captures the rough outlines of objects in the picture, but a colour image carries much richer information, including colour, texture, and the appearance of small objects. If the loss function over-emphasises the transmission map, this information in the colour image may be neglected, hurting the dehazing result.

 

As the figure below shows, the transmission map captures only part of the features of the original image.


 


 

