Indexed in
  • Open J-Gate
  • Genamics JournalSeek
  • Academic Keys
  • ResearchBible
  • Cosmos IF
  • Access to Global Online Research in Agriculture (AGORA)
  • Electronic Journals Library
  • RefSeek
  • Directory of Research Journal Indexing (DRJI)
  • Hamdard University
  • EBSCO AZ
  • OCLC – WorldCat
  • Scholarsteer
  • SWB Online Catalog
  • Virtual Library of Biology (vifabio)
  • Publons
  • Geneva Foundation for Medical Education and Research
  • Euro Pub
  • Google Scholar

Abstract

Explainable Convolutional Neural Network Based Tree Species Classification Using Multispectral Images from an Unmanned Aerial Vehicle

Ling-Wei Chen, Pin-Hui Lee, Yueh-Min Huang*

We seek to address labor shortages, in particular the aging workforce in rural areas, and thus facilitate agricultural management. The movement and operation of agricultural equipment in Taiwan is complicated by the fact that many commercial crops are planted on hillsides. For mixed crops in such sloped farming areas, identifying tree species aids agricultural management and reduces the labor needed for farming operations. General optical images collected by visible-light cameras are sufficient for recording but yield suboptimal results in tree species identification. A multispectral camera, by contrast, makes it possible to identify plants from their spectral responses. We present a method for tree species classification using UAV visible-light and multispectral imagery. We leverage differences in spectral reflectance values between tree species and use near-infrared band images to improve the model's classification performance. CNN-based deep neural models are widely used and achieve high accuracies, but 100% correct results are difficult to attain, and model complexity generally increases with performance, leading to uncertainty about the system's final decisions. Explainable AI extracts key information and interprets it to yield a better understanding of the model's conclusions or actions. We use visualization (four pixel-level attribution methods and one region-level attribution method) to interpret the model post hoc. Among the pixel-level attribution methods, Blur IG best represents texture features, and region-level attribution represents leaf regions more effectively than pixel-level attribution, which aids human understanding.
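The spectral-reflectance differences the abstract leverages are commonly summarized with a vegetation index computed from the red and near-infrared bands. The sketch below shows one such index (NDVI) in NumPy; the band arrays, shapes, and function name are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalized Difference Vegetation Index from NIR and red reflectance.

    Healthy vegetation reflects strongly in the near-infrared and absorbs
    red light, so NDVI helps separate vegetated pixels from soil or water
    and can differ between tree species with distinct canopy spectra.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    # eps guards against division by zero on dark pixels.
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance patches: high NIR with low red suggests dense canopy.
nir_band = np.array([[0.8, 0.7],
                     [0.3, 0.1]])
red_band = np.array([[0.1, 0.2],
                     [0.3, 0.1]])
print(ndvi(nir_band, red_band))
```

In a real workflow, an index image like this would be stacked with the visible-light channels as an extra input plane for the CNN rather than used on its own.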