
# defog Python assignment | image processing assignment | image optimization assignment

2019-09-27 12:13 Friday · Category: Python assignment · Views: 74

## Image defogging implementation

### Introduction

#### Method introduction

In a foggy scene, atmospheric particles scatter the light reflected from object surfaces, so the light intensity reaching the camera decreases and decays exponentially with propagation distance. At the same time, ambient atmospheric light is added along the propagation path, and its contribution changes as the distance increases.

According to the above theory of atmospheric light scattering, a widely used atmospheric scattering model in computer vision and graphics is:

I(x) = J(x) · t(x) + A · (1 − t(x))

where I is the captured foggy image, J is the scene radiance after defogging, t is the transmission of the light propagation medium, A is the atmospheric light value, and x indexes the pixels of the image.
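As a quick sketch of this model (a minimal NumPy illustration, not the paper's code; the function name is ours), a hazy image can be synthesized from a clear image, a transmission map, and an atmospheric light value:

```python
import numpy as np

def synthesize_haze(J, t, A):
    """Apply the atmospheric scattering model I = J*t + A*(1 - t).

    J: clear image, shape (H, W, 3), values in [0, 1]
    t: transmission map, shape (H, W), values in (0, 1]
    A: scalar atmospheric light value in [0, 1]
    """
    t = t[..., np.newaxis]          # broadcast over the color channels
    return J * t + A * (1.0 - t)

# Toy example: a mid-gray scene seen through 50% transmission.
J = np.full((4, 4, 3), 0.5)
t = np.full((4, 4), 0.5)
I = synthesize_haze(J, t, A=1.0)   # every pixel becomes 0.5*0.5 + 1.0*0.5 = 0.75
```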

Among existing methods, some use a CNN to estimate t(x) and then recover J(x) from the atmospheric scattering model. The problem is that the defogging result is good only when A is also well estimated; otherwise, even an accurate t(x) cannot achieve a good defogging effect. Other methods use CNNs to estimate both t(x) and A, and then compute J(x) from the model, which gives more reliable results. The model proposed here achieves image defogging in this second way.

### Network Architecture

The model is divided into four parts (blue boxes in the architecture figure):

### Transmission Map Estimation

The Transmission Map Estimation network produces the transmission map of the image, t(x). Its structure is shown below:

The network is a densely connected encoder-decoder that uses dense blocks as its basic unit. Dense blocks retain the key advantage of DenseNet: information is passed between different network layers, which better preserves spatial structure and helps the network converge during training. The encoder uses a pre-trained DenseNet-121 front end, consisting of one conv layer and three dense block layers; the decoder consists of five dense blocks and one conv layer.

Global context information helps express image features. To let local information represent the global structure of the image, the network pools at four different scales, so the encoder-decoder outputs feature maps at four scales (1/4, 1/8, 1/16, 1/32). These are upsampled back to the original size and concatenated with the encoder-decoder output features, thereby capturing information at different scales.
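The multi-scale pooling step described above can be sketched in PyTorch roughly as follows. This is a simplified illustration: the DenseNet-121 backbone is omitted, and the module name `PyramidPooling` and the channel counts are our own assumptions, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """Pool the encoder-decoder output at four scales (1/4, 1/8, 1/16, 1/32),
    upsample each pooled map back to the input size, and concatenate."""
    def __init__(self, in_channels, out_channels=1):
        super().__init__()
        self.scales = (4, 8, 16, 32)
        # One 1x1 conv per scale compresses the pooled features.
        self.convs = nn.ModuleList(
            nn.Conv2d(in_channels, out_channels, kernel_size=1)
            for _ in self.scales
        )

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x]
        for scale, conv in zip(self.scales, self.convs):
            pooled = F.avg_pool2d(x, kernel_size=scale)          # 1/scale resolution
            feats.append(F.interpolate(conv(pooled), size=(h, w),
                                       mode="bilinear", align_corners=False))
        return torch.cat(feats, dim=1)

x = torch.randn(1, 8, 64, 64)
y = PyramidPooling(8)(x)   # 8 input channels + 4 pooled maps -> 12 channels
```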

### Atmospheric Light Estimation

The Atmospheric Light Estimation network estimates the atmospheric light value A(x). Since atmospheric light is assumed uniform over a given image, A(x) is a 2D map with the same size as the input image, so we use a U-Net. U-Net is an encoder-decoder structure: the encoder progressively reduces the spatial dimensions through pooling, and the decoder progressively restores object detail and spatial dimensions. Shortcut (skip) connections between the encoder and decoder help the decoder recover the details of the target. U-Net is widely used for image-to-image problems.

###### U-Net network structure:

It has roughly 20 convolutional layers, with 4 downsampling and 4 upsampling stages.
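A minimal U-Net-style encoder-decoder with a skip connection might look like this in PyTorch. This sketch has far fewer layers than the ~20-conv network described above, and all names and channel counts are ours; it only illustrates the down/up/skip pattern.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """One downsampling and one upsampling stage with a skip connection."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)                        # encoder reduces spatial dims
        self.mid = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(ch, ch, 2, stride=2)  # decoder restores them
        # After the skip concatenation the channel count doubles.
        self.dec = nn.Conv2d(2 * ch, 3, 3, padding=1)

    def forward(self, x):
        e = self.enc(x)
        m = self.up(self.mid(self.down(e)))
        return self.dec(torch.cat([e, m], dim=1))  # skip connection

out = TinyUNet()(torch.randn(1, 3, 32, 32))   # same spatial size as the input
```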

### Atmospheric Scattering Model

The Atmospheric Scattering Model part rearranges the atmospheric scattering formula to solve for the defogged image:

J(x) = (I(x) − A · (1 − t(x))) / t(x)

Substituting the t(x) and A produced by the two networks above, together with the foggy image I(x), into this formula yields the defogged image J(x).
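A direct NumPy sketch of this inversion follows. Clamping t away from zero for numerical stability is our own choice for the sketch, not something the text specifies:

```python
import numpy as np

def defog(I, t, A, t_min=0.1):
    """Recover J from I via J = (I - A*(1 - t)) / t."""
    t = np.clip(t, t_min, 1.0)[..., np.newaxis]   # avoid division by near-zero t
    J = (I - A * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)

# Round trip: synthesizing haze over a clear image and inverting recovers it.
J_true = np.full((4, 4, 3), 0.5)
t = np.full((4, 4), 0.5)
I = J_true * t[..., np.newaxis] + 1.0 * (1.0 - t[..., np.newaxis])
J_rec = defog(I, t, A=1.0)
```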

#### Discriminator

This part borrows from GAN, using only the discriminator. It consists of four conv layers and one fully connected layer (following the paper "Single Image Dehazing via a Convolutional Generative Adversarial Network"). The discriminator compares the defogged image computed in (3) with the original fog-free image, and the network is trained until the discriminator cannot tell whether its input is a defogged image or an original image, which yields a better defogging effect.
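A four-conv, one-fc discriminator could be sketched as below. The channel counts, strides, and assumed 64×64 input size are our own guesses for illustration; the referenced paper's exact configuration may differ.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Four strided conv layers followed by one fully connected layer
    emitting a single real/fake probability."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.fc = nn.Linear(64 * 4 * 4, 1)   # assumes 64x64 inputs (halved 4 times)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return torch.sigmoid(self.fc(f))

score = Discriminator()(torch.randn(2, 3, 64, 64))   # one score per image
```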

#### Remarks

The proposed scheme mainly follows the paper "Densely Connected Pyramid Dehazing Network". Because there was no time to verify the code, it was not practical to make major changes; the innovation relative to the paper lies in the input of the discriminator in part (4). The original paper concatenates the transmission map with the defogged image (i.e. transmission + defogged) as the discriminator input, so that the triple (transmission map, defogged image, transmission + defogged) matches the distribution of (ground-truth transmission map, fog-free image, their concatenation), which improves the defogging effect. We instead feed the defogged image and the fog-free image directly as the discriminator input; the effect should be better, provided that the dataset contains paired foggy and fog-free images.

##### The benefits of this improvement:

1. The original paper concatenates the transmission map with the defogged image as the discriminator input; in the loss function, optimizing their joint distribution better exploits the structural correlation between them. The problem is that the defogged image is itself computed from the transmission map t(x) and A. When A is fixed, the relationship between the two is given by:

J(x) = (I(x) − A) / t(x) + A

So directly comparing the defogged image with the original image can, in theory, achieve an effect similar to comparing (defogged + transmission) with (original + original transmission), while being simpler (it simplifies the loss function).
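The identity J = (I − A)/t + A follows from expanding (I − A(1 − t))/t = (I − A + A·t)/t. A quick NumPy check, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
I = rng.uniform(0.0, 1.0, (8, 8))   # foggy intensities
t = rng.uniform(0.1, 1.0, (8, 8))   # transmission, bounded away from 0
A = 0.9                             # atmospheric light value

J1 = (I - A * (1.0 - t)) / t   # inverted scattering model
J2 = (I - A) / t + A           # rearranged form
```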

2. The loss function of the original method is as follows:

This loss function weights the effect of the transmission map t(x) too heavily. The transmission map is similar to a depth map and can represent the outlines of objects in the picture, but a color image contains much more information: color, texture, the appearance of small objects, and so on. If the loss function over-emphasizes the transmission map, this information in the color image may be ignored, which hurts the defogging result.

As shown in the figure below, the transmission map can only represent some of the features of the original image.