Research on lightweight algorithm for gangue detection based on improved Yolov5 | Scientific Reports – Nature.com

The following improvements have been made to Yolov5s. The EfficientViT network, proposed by Liu et al.22, cascades groups of attention modules and assigns a different split of the full feature to each attention head, which saves computational cost and increases attention diversity. Comprehensive experiments demonstrate that its efficiency is significantly better than that of existing efficient models, yielding a better speed-accuracy trade-off. MPDIoU, proposed by Ma et al.23, is a modern bounding box similarity comparison metric based on minimum point distance; it incorporates all the relevant factors considered in existing loss functions, i.e., overlapping or non-overlapping areas, centroid distances, and width and height deviations, while simplifying the computation process. C3_Faster builds on Partial Convolution (PConv), a recent technique proposed by Chen et al.24 that performs spatial feature extraction more efficiently by reducing both redundant computation and memory access. Based on PConv, the authors additionally propose FasterNet, a novel family of neural networks that achieves higher running speed than alternatives on different devices without compromising accuracy on visual tasks. The lightweight improvement of Yolov5s requires a reduction in both the number of parameters and the amount of computation, which all of the above methods provide, satisfying the experimental requirements.
Thus, first, the entire backbone network in the original Yolov5s is replaced by the EfficientViT network in the backbone module; second, the C3 module is replaced by C3_Faster in the HEAD module; third, the Neck region of the Yolov5 model is appropriately streamlined by deleting the 20×20 feature map branch, which has the largest receptive field and is suited to detecting objects of larger size; and finally, MPDIoU replaces CIOU while the SE attention mechanism is introduced, which helps the model better fuse valuable features and improve detection performance. A schematic of the structure of the improved model is shown in Fig. 2.

Structure of Yolov5s improved model.

EfficientViT is a lightweight network model. EfficientViT designs a new building block with a sandwich layout, namely a single memory-bound MHSA layer placed between efficient FFN layers, which improves channel communication while increasing memory efficiency. EfficientViT also proposes a cascaded group attention module that feeds each attention head a different split of the full feature25; the overall framework is shown in Fig. 3. The network contains three stages, each of which comprises a number of sandwich structures consisting of 2N DWConv layers (spatial local communication), FFN layers (channel communication), and cascaded group attention. Cascaded group attention differs from previous MHSA in that the input is first split across heads and Q, K, and V are then generated per head. In addition, to learn richer feature maps and increase model capacity, the output of each head is added to the input of the next head. Finally, the outputs of all heads are concatenated and mapped through a linear layer to obtain the final output, which is expressed as:

$$\widetilde{X}_{ij} = \mathrm{Attn}\left(X_{ij} W_{ij}^{Q},\; X_{ij} W_{ij}^{K},\; X_{ij} W_{ij}^{V}\right)$$

(1)

$$\widetilde{X}_{i+1} = \mathrm{Concat}\left[\widetilde{X}_{ij}\right]_{j=1:h} W_{i}^{P}$$

(2)

$$X'_{ij} = X_{ij} + \widetilde{X}_{i(j-1)}, \quad 1 < j \le h$$

(3)

The jth head in Eqs. (1) and (2) computes self-attention on \(X_{ij}\), the jth partition of the input feature \(X_i\), i.e., \(X_i = [X_{i1}, X_{i2}, \ldots, X_{ih}]\) with \(1 \le j \le h\), where h is the total number of heads. \(W_{ij}^{Q}\), \(W_{ij}^{K}\), and \(W_{ij}^{V}\) are the projection layers that map the input feature split into different subspaces, and \(W_{i}^{P}\) is a linear layer that projects the concatenated output features back to a dimension consistent with the input.

In Eq. (3), \(X'_{ij}\) is the sum of the jth input split \(X_{ij}\) and the output \(\widetilde{X}_{i(j-1)}\) of the (j-1)th head computed according to Eq. (1). It replaces \(X_{ij}\) as the input feature for the jth head when computing self-attention. In addition, a token interaction layer is applied after the Q projection, which allows self-attention to jointly capture local and global relations and greatly enhances the feature representation.
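The cascaded group attention of Eqs. (1)-(3) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the token layout (N tokens × C channels) and the square per-head projection shapes are assumptions chosen so the cascade sum in Eq. (3) is dimensionally valid.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cascaded_group_attention(X, Wq, Wk, Wv, Wp, h):
    # X: (N, C) tokens; Wq[j], Wk[j], Wv[j]: (C//h, C//h) per-head projections
    # (hypothetical shapes so each head's output matches the next head's split);
    # Wp: (C, C) output projection.
    splits = np.split(X, h, axis=1)            # X_i = [X_i1, ..., X_ih]
    outputs, prev = [], None
    for j in range(h):
        x_in = splits[j] if prev is None else splits[j] + prev   # Eq. (3)
        Q, K, V = x_in @ Wq[j], x_in @ Wk[j], x_in @ Wv[j]
        A = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=-1)      # Eq. (1)
        prev = A @ V                           # head output feeds the next head
        outputs.append(prev)
    return np.concatenate(outputs, axis=1) @ Wp                  # Eq. (2)
```

The key difference from standard MHSA is visible in the loop: each head receives its own feature split plus the previous head's output, rather than all heads attending to the full feature.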

The loss function is an influential component of a neural network; its main role is to measure the distance between the network's prediction and the desired output: the closer the two are, the smaller the value of the loss function. The loss functions of the YOLO algorithm family mainly include the localization loss (lossrect), the confidence prediction loss (lossobj), and the category loss (losscls). The localization loss function used by Yolov5 is the CIOU function, which is computed as follows.

$$CIOU\_Loss = 1 - IOU + \frac{\rho^{2}(a, a^{gt})}{c^{2}} + \alpha \mu$$

(4)

$$\alpha = \frac{\mu}{(1 - IOU) + \mu}$$

(5)

$$\mu = \frac{4}{\pi^{2}}\left[\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right]^{2}$$

(6)

In Eqs. (4)-(6), a and \(a^{gt}\) are the centroids of the predicted and target boxes, respectively, and \(\rho\) is the Euclidean distance between the two centroids; c is the diagonal length of the smallest enclosing region of the predicted and target boxes; \(\alpha\) is a weighting coefficient; \(\mu\) measures the consistency of the aspect ratios of the two boxes. Here, h and w are the height and width of the predicted box, respectively, while \(h^{gt}\) and \(w^{gt}\) are the height and width of the target box. The CIOU function mainly attends to the overlapping parts of the predicted and target boxes. The MPDIoU loss function is therefore used instead.
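For reference, the CIOU computation of Eqs. (4)-(6) can be sketched for a single box pair. This is an illustrative NumPy version, assuming boxes are given in (x1, y1, x2, y2) corner format; the small epsilon in the denominator of Eq. (5) is added only to avoid division by zero for perfectly matching boxes.

```python
import numpy as np

def ciou_loss(box_p, box_g):
    # box_p: predicted box, box_g: target box, both (x1, y1, x2, y2)
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    iou = inter / (area_p + area_g - inter)
    # squared centroid distance rho^2 and enclosing-box diagonal c^2
    rho2 = ((box_p[0] + box_p[2] - box_g[0] - box_g[2]) / 2) ** 2 \
         + ((box_p[1] + box_p[3] - box_g[1] - box_g[3]) / 2) ** 2
    c2 = (max(box_p[2], box_g[2]) - min(box_p[0], box_g[0])) ** 2 \
       + (max(box_p[3], box_g[3]) - min(box_p[1], box_g[1])) ** 2
    # aspect-ratio consistency term, Eq. (6)
    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wg, hg = box_g[2] - box_g[0], box_g[3] - box_g[1]
    mu = 4 / np.pi ** 2 * (np.arctan(wg / hg) - np.arctan(wp / hp)) ** 2
    alpha = mu / ((1 - iou) + mu + 1e-9)       # Eq. (5)
    return 1 - iou + rho2 / c2 + alpha * mu    # Eq. (4)
```

When the predicted and target boxes coincide, IoU is 1 and both penalty terms vanish, so the loss is 0.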

MPDIoU is a bounding box similarity comparison metric based on the minimum point distance that includes all the relevant factors considered in existing loss functions. MPDIoU simplifies the similarity comparison between two bounding boxes and is suitable for both overlapping and non-overlapping bounding box regression. Therefore, MPDIoU can serve as a decent alternative to the intersection over union (IoU) as a metric in all performance measures of 2D/3D computer vision tasks. It also simplifies the computation by directly minimizing the distances between the top-left and bottom-right points of the predicted bounding box and the ground-truth bounding box. MPDIoU is computed as follows.

$$d_{1}^{2} = (x_{1}^{B} - x_{1}^{A})^{2} + (y_{1}^{B} - y_{1}^{A})^{2}$$

(7)

$$d_{2}^{2} = (x_{2}^{B} - x_{2}^{A})^{2} + (y_{2}^{B} - y_{2}^{A})^{2}$$

(8)

$$MPDIoU = \frac{A \cap B}{A \cup B} - \frac{d_{1}^{2}}{w^{2} + h^{2}} - \frac{d_{2}^{2}}{w^{2} + h^{2}}$$

(9)

In Eqs. (7)-(9), \(d_1\) and \(d_2\) denote the minimum point distances between two arbitrary shapes \(A, B \subseteq \mathbb{S} \in \mathbb{R}^{n}\), and w and h are the width and height of the input image; the output is MPDIoU. Let \((x_{1}^{A}, y_{1}^{A})\) and \((x_{2}^{A}, y_{2}^{A})\) denote the coordinates of the top-left and bottom-right points of A, and let \((x_{1}^{B}, y_{1}^{B})\) and \((x_{2}^{B}, y_{2}^{B})\) denote the coordinates of the top-left and bottom-right points of B, respectively.
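Eqs. (7)-(9) translate directly into a short function. The sketch below is illustrative, assuming axis-aligned boxes in (x1, y1, x2, y2) corner format, with the image width and height passed in for the normalization term.

```python
def mpdiou(box_a, box_b, img_w, img_h):
    # box_a: ground-truth box A, box_b: predicted box B, both (x1, y1, x2, y2)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / (area_a + area_b - inter)
    d1 = (box_b[0] - box_a[0]) ** 2 + (box_b[1] - box_a[1]) ** 2  # Eq. (7)
    d2 = (box_b[2] - box_a[2]) ** 2 + (box_b[3] - box_a[3]) ** 2  # Eq. (8)
    norm = img_w ** 2 + img_h ** 2
    return iou - d1 / norm - d2 / norm                            # Eq. (9)
```

Note that unlike CIOU, the corner-distance penalties remain informative even when the boxes do not overlap (IoU = 0), which is what makes MPDIoU suitable for non-overlapping regression.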

The object detection head is the part of the feature pyramid used to perform object detection and includes multiple convolutional, pooling, and fully connected layers, among others. In the Yolov5 model, the detection head module is mainly responsible for performing detection on the multiple feature maps extracted from the backbone network. The module consists of three main parts. The C3 module is an essential part of the Yolov5 network; its main role is to increase the depth and receptive field of the network and improve its feature extraction capability. C3-Faster is built from multiple Faster_Blocks and can replace the C3 module in Yolov5 to accelerate network inference, where each Faster_Block combines the lightweight Partial Convolution (PConv) proposed in the literature21 with additional operations. The C3 module is therefore replaced with C3-Faster in the HEAD module.
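The core idea of PConv, on which Faster_Block is built, is to convolve only a fraction of the channels and pass the rest through untouched, cutting both FLOPs and memory traffic. The NumPy sketch below illustrates this channel-partitioning idea only; the 3×3 kernel, the 1/4 partition ratio, and the naive sliding-window loop are assumptions for clarity, not the paper's implementation.

```python
import numpy as np

def partial_conv(x, weight, n_div=4):
    # x: (C, H, W) feature map; weight: (Cp, Cp, 3, 3) kernel acting only on
    # the first Cp = C // n_div channels; remaining channels are passed through.
    C, H, W = x.shape
    cp = C // n_div
    xp = np.pad(x[:cp], ((0, 0), (1, 1), (1, 1)))  # same-size 3x3 convolution
    out = np.empty((cp, H, W))
    for i in range(H):
        for j in range(W):
            patch = xp[:, i:i + 3, j:j + 3]
            out[:, i, j] = np.tensordot(weight, patch, axes=([1, 2, 3], [0, 1, 2]))
    return np.concatenate([out, x[cp:]], axis=0)   # untouched channels appended
```

With n_div = 4, only a quarter of the channels incur convolution cost, which is the source of the reduced redundant computation and memory access described above.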

The Neck region in the Yolov5 model uses a multipath structure to aggregate features and enhance network feature fusion. The coal and gangue objects are small relative to the whole image, making the part of the Neck dedicated to large-object detection redundant. To improve detection speed, the Neck region of the Yolov5 model is appropriately streamlined by removing the 20×20 feature map branch, which has the largest receptive field and is suited to detecting objects of larger sizes. This elimination reduces model complexity and improves the real-time performance of detection, as shown in Fig. 4.

Improved neck and prediction structure.

The SE attention mechanism is introduced into the original model to improve object detection accuracy. The SE attention mechanism consists of three parts, namely Squeeze, Excitation, and feature map recalibration (Scale), whose main purpose is to enhance useful features. First, the Squeeze operation compresses the feature map along the spatial dimensions using global average pooling, turning each two-dimensional feature channel into a single real number, so a W×H×C feature map is compressed into a 1×1×C vector whose dimension matches the number of input channels. Next, the Excitation operation applies fully connected layers to this vector, followed by a Sigmoid activation function, to obtain normalized channel weights; the importance of each channel is thus learned from the global information. Finally, the Scale operation multiplies the weight of each feature channel obtained from the Excitation step with the corresponding channel of the original feature map, recalibrating the feature map channel by channel. The SE module is shown in Fig. 5.
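The Squeeze, Excitation, and Scale steps above can be sketched compactly. This is a minimal NumPy illustration, assuming a channels-first (C, H, W) layout and a two-layer ReLU/Sigmoid excitation with a reduction-ratio bottleneck; it is not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, W1, W2):
    # x: (C, H, W) feature map; W1: (C, C//r), W2: (C//r, C) excitation weights
    z = x.mean(axis=(1, 2))                    # Squeeze: global average pooling -> (C,)
    s = sigmoid(np.maximum(z @ W1, 0.0) @ W2)  # Excitation: FC-ReLU-FC-Sigmoid -> (C,)
    return x * s[:, None, None]                # Scale: reweight each channel
```

Each channel of the output is the original channel multiplied by a learned weight in (0, 1), which is exactly the recalibration described above.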
