
Academic Journal of Agriculture & Life Sciences, 2025, 6(1); doi: 10.25236/AJALS.2025.060118.

BS-Yolov8n: An Improved Yolov8n Network for Tomato Detection at Different Ripeness Degrees in Complex Greenhouse Environments

Author(s)

Zhengdong Li1, Yu Wang1, Minghua Han2, Zhou Zheng1

Corresponding Author:
Yu Wang
Affiliation(s)

1College of Information Engineering, Nanjing University of Finance and Economics, Nanjing, China

2Changshu Binjiang Agricultural Technology Co., Ltd., Suzhou, China

Abstract

With the advancement of artificial intelligence, computer vision has become a widely adopted substitute for human visual observation. However, the complexity of greenhouse tomato-growing environments makes it challenging to assess tomato ripeness quickly and accurately with computer vision. To address these challenges, we incorporate SPD-Conv and BoTNet into the YOLOv8n network to enhance its feature-extraction and target-recognition capabilities in greenhouse tomato-growing environments. In our experiments, we compare the performance of YOLOv8n with that of BS-YOLOv8n. Empirical findings demonstrate that the proposed BS-YOLOv8n outperforms YOLOv8n in both the accuracy and the response speed of tomato recognition in complex greenhouse environments.
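The SPD-Conv module referenced above replaces strided convolution and pooling with a lossless space-to-depth rearrangement followed by a non-strided convolution, which helps preserve fine detail for small or low-resolution targets; BoTNet swaps bottleneck convolutions for multi-head self-attention. As a minimal NumPy sketch (an illustration of the general space-to-depth idea, not the authors' implementation), the rearrangement moves each 2x2 spatial neighborhood into the channel dimension:

```python
import numpy as np

def space_to_depth(x, scale=2):
    """Rearrange a (C, H, W) feature map into (C*scale^2, H/scale, W/scale).

    Spatial resolution is reduced by `scale`, but every value is kept by
    stacking the scale*scale spatial offsets as extra channel groups --
    unlike strided convolution or pooling, no information is discarded.
    """
    c, h, w = x.shape
    assert h % scale == 0 and w % scale == 0, "H and W must be divisible by scale"
    # One slice per (row offset, column offset) within each scale x scale block.
    slices = [x[:, i::scale, j::scale] for i in range(scale) for j in range(scale)]
    return np.concatenate(slices, axis=0)

x = np.arange(2 * 4 * 4, dtype=np.float32).reshape(2, 4, 4)
y = space_to_depth(x)
print(y.shape)  # (8, 2, 2)
```

In SPD-Conv this rearrangement is followed by an ordinary (stride-1) convolution over the enlarged channel dimension, so downsampling happens without throwing away the fine-grained features that small fruit targets depend on.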

Keywords

Fruit Detection; Tomato; Deep Learning; Production Forecasts

Cite This Paper

Zhengdong Li, Yu Wang, Minghua Han, Zhou Zheng. BS-Yolov8n: An Improved Yolov8n Network for Tomato Detection at Different Ripeness Degrees in Complex Greenhouse Environments. Academic Journal of Agriculture & Life Sciences (2025), Vol. 6, Issue 1: 130-136. https://doi.org/10.25236/AJALS.2025.060118.

References

[1] C. Wang, F. Gu, J. Chen, et al., “Assessing the response of yield and comprehensive fruit quality of tomato grown in greenhouse to deficit irrigation and nitrogen application strategies,” Agricultural water management, vol. 161, pp. 9–19, 2015.

[2] W. J. Kuijpers, M. J. van de Molengraft, S. van Mourik, A. van't Ooster, S. Hemming, and E. J. van Henten, "Model selection with a common structure: Tomato crop growth models," Biosystems engineering, vol. 187, pp. 247–257, 2019.

[3] A. Kamilaris and F. X. Prenafeta-Boldú, "Deep learning in agriculture: A survey," Computers and electronics in agriculture, vol. 147, pp. 70–90, 2018.

[4] A. Cravero, S. Pardo, S. Sepúlveda, and L. Muñoz, “Challenges to use machine learning in agricultural big data: A systematic literature review,” Agronomy, vol. 12, no. 3, p. 748, 2022.

[5] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 580–587.

[6] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” IEEE transactions on pattern analysis and machine intelligence, vol. 39, no. 6, pp. 1137–1149, 2016.

[7] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask r-cnn,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2961–2969.

[8] W. Liu, D. Anguelov, D. Erhan, et al., "Ssd: Single shot multibox detector," in Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I 14, Springer, 2016, pp. 21–37.

[9] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779–788.

[10] D. Wang and D. He, "Fusion of mask rcnn and attention mechanism for instance segmentation of apples under complex background," Computers and Electronics in Agriculture, vol. 196, p. 106864, 2022.

[11] Z. Wang, Y. Ling, X. Wang, et al., “An improved faster r-cnn model for multi-object tomato maturity detection in complex scenarios,” Ecological Informatics, vol. 72, p. 101886, 2022.

[12] F. Liu, Y. Liu, S. Lin, W. Guo, F. Xu, and Z. Bai, "Fast recognition method for tomatoes under complex environments based on improved yolo," Transactions of the Chinese society for agricultural machinery, vol. 51, no. 6, pp. 229–237, 2020.

[13] B. Yan, P. Fan, X. Lei, Z. Liu, and F. Yang, "A real-time apple targets detection method for picking robot based on improved yolov5," Remote Sensing, vol. 13, no. 9, p. 1619, 2021.

[14] Y. Xue, N. Huang, S. Tu, L. Mao, A. Yang, X. Zhu, X. Yang, and P. Chen, "Immature mango detection based on improved YOLOv2," Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), vol. 34, no. 7, pp. 173–179, 2018. DOI: 10.11975/j.issn.1002-6819.2018.07.022.

[15] R. Tang, Y. Lei, B. Luo, J. Zhang, and J. Mu, “Yolov7-plum: Advancing plum fruit detection in natural environments with deep learning,” Plants, vol. 12, no. 15, p. 2883, 2023.

[16] G. Jocher, A. Chaurasia, and J. Qiu, Ultralytics YOLO, version 8.0.0, Jan. 2023. [Online]. Available: https://github.com/ultralytics/ultralytics.

[17] C.-Y. Wang, H.-Y. M. Liao, Y.-H. Wu, P.-Y. Chen, J.-W. Hsieh, and I.-H. Yeh, “Cspnet: A new backbone that can enhance learning capability of cnn,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, 2020, pp. 390–391.

[18] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, “Path aggregation network for instance segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 8759–8768.

[19] R. Sunkara and T. Luo, “No more strided convolutions or pooling: A new cnn building block for low resolution images and small objects,” in Joint European conference on machine learning and knowledge discovery in databases, Springer, 2022, pp. 443–459.

[20] A. Srinivas, T.-Y. Lin, N. Parmar, J. Shlens, P. Abbeel, and A. Vaswani, “Bottleneck transformers for visual recognition,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021, pp. 16519–16529.

[21] A. Vaswani, N. Shazeer, N. Parmar, et al., "Attention is all you need," in Advances in Neural Information Processing Systems, 2017.

[22] P. Adarsh, P. Rathi, and M. Kumar, “Yolo v3-tiny: Object detection and recognition using one stage improved model,” in 2020 6th international conference on advanced computing and communication systems (ICACCS), IEEE, 2020, pp. 687–694.

[23] G. Jocher, Ultralytics/yolov5: v3.1 – Bug fixes and performance improvements, version v3.1, Oct. 2020. [Online]. Available: https://github.com/ultralytics/yolov5. DOI: 10.5281/zenodo.4154370.

[24] C. Li, L. Li, H. Jiang, et al., “Yolov6: A single-stage object detection framework for industrial applications,” arXiv preprint arXiv:2209.02976, 2022.

[25] C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, "Yolov7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2023, pp. 7464–7475.