Tian et al. (2026) RTCC-Net: tropical cyclone generation classification model based on multi-source information fusion
Identification
- Journal: Scientific Reports
- Year: 2026
- Date: 2026-01-08
- Authors: Wei Tian, Xiaotian Li, Jiachen Fan, Hang Zhao
- DOI: 10.1038/s41598-025-33207-z
Research Groups
- School of Software, Nanjing University of Information Science and Technology, Nanjing, China
- Department of Atmospheric and Oceanic Sciences, Fudan University, Shanghai, China
- School of Atmospheric Sciences, Nanjing University of Information Science and Technology, Nanjing, China
Short Summary
This paper proposes RTCC-Net, a deep learning model that fuses multi-source information (infrared images, convective cores, and polar-coordinate representations) to classify whether a tropical cloud cluster will develop into a tropical cyclone within 24 hours. The model achieves a high detection rate of 98.7% and a low false alarm rate of 1.3%, significantly outperforming existing methods.
Objective
- To develop a highly accurate and stable deep learning model (RTCC-Net) capable of classifying tropical cyclogenesis 24 hours in advance by effectively extracting and fusing key spatiotemporal features from multi-source satellite data.
Study Configuration
- Spatial Scale: Tropical cloud clusters, approximately 870 km × 870 km (8° × 8°), represented by 112 × 112 pixel infrared images on GridSat's 0.07° equal-angle grid. Polar coordinate images are 233 × 69 pixels (angular resolution approximately 1.546°, radial resolution approximately 6 km per pixel). Global coverage across six major tropical cyclone basins.
- Temporal Scale: Prediction 24 hours in advance. Input data from three consecutive time steps (t-12 hours, t-6 hours, t hours). Data coverage from 1980-2023 (GridSat), 1982-2018 (TCC Dataset), and IBTrACS (updated three times per week).
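The 233 × 69 polar image is obtained by resampling the Cartesian infrared image onto a (theta, r) grid centered on the cloud cluster. A minimal sketch of such a resampling is shown below; the grid sizes follow the paper, but the nearest-neighbor sampling scheme is an illustrative assumption, not the authors' exact implementation.

```python
import numpy as np

def to_polar(ir: np.ndarray, n_theta: int = 233, n_r: int = 69) -> np.ndarray:
    """Resample a square Cartesian IR image onto a polar (theta, r) grid
    centered on the image center, via nearest-neighbor lookup.
    Illustrative sketch; not the paper's exact transform."""
    h, w = ir.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)                                   # stay inside the image
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(0.0, r_max, n_r)
    tt, rr = np.meshgrid(thetas, radii, indexing="ij")    # shape (n_theta, n_r)
    ys = np.clip(np.rint(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.rint(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return ir[ys, xs]

# A 112 x 112 Cartesian image maps to a 233 x 69 polar image.
img = np.arange(112 * 112, dtype=float).reshape(112, 112)
polar = to_polar(img)
print(polar.shape)  # (233, 69)
```

In this representation, rotational (angular) structure becomes variation along rows and radial structure along columns, which is why the paper feeds it to a dedicated ResNet subnet.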
Methodology and Data
- Models used:
- RTCC-Net (Real Time Tropical Cyclogenesis Classification-Net)
- Subnet 1: Convolutional Neural Network (CNN) with R(2+1)D-like convolution strategy for satellite infrared imagery.
- Subnet 2: CNN with R(2+1)D-like convolution strategy for convective core data.
- Subnet 3: Reconstructed 12-layer ResNet for polar coordinate images.
- Vision Transformer (ViT) encoder with a global attention mechanism for feature fusion and classification.
- Cross-entropy loss function.
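The "R(2+1)D-like" strategy named above factorizes a full 3D (space-time) convolution into a 2D spatial convolution followed by a 1D temporal convolution, with the intermediate channel count chosen to roughly match the 3D parameter budget (the formula from Tran et al.'s R(2+1)D work). A small parameter-count sketch, with illustrative channel sizes not taken from the paper:

```python
def r2plus1d_mid_channels(n_in: int, n_out: int, t: int = 3, d: int = 3) -> int:
    """Intermediate channel count M that makes a (2+1)D factorization
    match the parameter count of a full t x d x d 3D convolution
    (Tran et al.'s R(2+1)D formula, bias terms ignored)."""
    return (t * d * d * n_in * n_out) // (d * d * n_in + t * n_out)

def params_3d(n_in: int, n_out: int, t: int = 3, d: int = 3) -> int:
    # One t x d x d kernel per (input, output) channel pair.
    return t * d * d * n_in * n_out

def params_2plus1d(n_in: int, n_out: int, t: int = 3, d: int = 3) -> int:
    # Spatial 1 x d x d conv (n_in -> M), then temporal t x 1 x 1 conv (M -> n_out).
    m = r2plus1d_mid_channels(n_in, n_out, t, d)
    return d * d * n_in * m + t * m * n_out

# With 64 -> 64 channels and a 3 x 3 x 3 kernel, the factorization matches
# the 3D parameter budget exactly while inserting an extra nonlinearity.
print(params_3d(64, 64), params_2plus1d(64, 64))  # 110592 110592
```

The benefit, as argued in the R(2+1)D literature, is doubled nonlinearity and easier optimization at roughly constant parameter count, which suits the three-step (t-12 h, t-6 h, t) input sequences here.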
- Data sources:
- GridSat-B1: Geostationary satellite infrared window channel images (0.07° resolution, 1980-2023).
- IBTrACS (International Best Track Archive for Climate Stewardship): Comprehensive global tropical cyclone best track dataset (6-hour intervals).
- Global Tropical Cloud Cluster (TCC) Dataset: Global tropical cloud cluster activity (3-hour temporal resolution, 1982-2018).
- Convective core maps: Computed in real-time from GridSat infrared images.
- Polar-coordinate representations: Transformed from Cartesian infrared images.
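Convective core maps are derived in real time from the IR brightness temperatures. The paper does not spell out the rule in this summary, but a common approach in the TC literature is to flag pixels that are both colder than a brightness-temperature threshold and local minima of their neighborhood; the sketch below follows that assumption (both the 208 K threshold and the 3 × 3 local-minimum rule are illustrative, not the authors' exact definition).

```python
import numpy as np

def convective_core_map(tb: np.ndarray, threshold_k: float = 208.0) -> np.ndarray:
    """Binary map of candidate convective cores: pixels colder than a
    brightness-temperature threshold AND no warmer than any 3 x 3 neighbor.
    Threshold and neighborhood rule are illustrative assumptions."""
    h, w = tb.shape
    padded = np.pad(tb, 1, mode="edge")
    # Stack the 8 neighbor shifts (center excluded).
    neighbors = np.stack([
        padded[dy:dy + h, dx:dx + w]
        for dy in (0, 1, 2) for dx in (0, 1, 2)
        if not (dy == 1 and dx == 1)
    ])
    is_local_min = np.all(tb <= neighbors, axis=0)
    return (tb < threshold_k) & is_local_min

# A single cold pixel in a warm field is flagged as a core.
field = np.full((5, 5), 280.0)
field[2, 2] = 200.0
print(convective_core_map(field).sum())  # 1
```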
Main Results
- The RTCC-Net model achieved a detection rate (POD) of 98.7% and a false alarm rate (FAR) of 1.3% for classifying tropical cyclogenesis 24 hours in advance, with a Heidke Skill Score (HSS) of 0.973.
- The model significantly outperformed previous deep learning and traditional machine learning baselines, demonstrating that strong classification performance is achievable from infrared-derived inputs alone.
- Multi-source information fusion progressively improved model performance: the full RTCC-Net (IR, convective cores, polar images) achieved a POD of 98.4% and FAR of 1.6% for the Genesis class, compared to 89.1% POD and 3.0% FAR with IR data alone.
- The model generalized well to longer lead times, maintaining Genesis detection rates of 74.6%-81.5% (POD) and false alarm rates of 1.6%-2.6% (FAR) when predicting 30-60 hours in advance.
- GradCAM visualizations confirmed that the model effectively focuses on critical regions, such as the principal parts of cloud clusters in IR images and the cluster center in polar images, for accurate classification.
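The POD, FAR, and HSS figures above come from the standard 2 × 2 forecast-verification contingency table. A minimal sketch of the three scores; the counts below are illustrative, not the paper's, and FAR is computed here as the false-alarm ratio b/(a+b):

```python
def scores(hits: int, misses: int, false_alarms: int, correct_negs: int):
    """Contingency-table verification scores: probability of detection (POD),
    false alarm ratio (FAR, taken as b/(a+b)), and Heidke Skill Score (HSS)."""
    a, b, c, d = hits, false_alarms, misses, correct_negs
    pod = a / (a + c)
    far = b / (a + b)
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, far, hss

# Illustrative counts: 987 hits, 13 misses, 13 false alarms, 987 correct negatives.
pod, far, hss = scores(987, 13, 13, 987)
print(round(pod, 3), round(far, 3), round(hss, 3))  # 0.987 0.013 0.974
```

HSS measures skill relative to random chance (0 = no skill, 1 = perfect), so the reported 0.973 indicates near-perfect separation of developing and non-developing clusters on the test set.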
Contributions
- Proposed RTCC-Net, an end-to-end deep learning framework for tropical cyclogenesis classification based on multi-source information fusion (satellite images, physical factors, and radial structures).
- Introduced a novel multi-source data fusion strategy that processes sequential information from satellite images, real-time computed convective cores, and polar-coordinate representations through dedicated CNNs, subsequently fused by a Vision Transformer.
- Incorporated convective core maps and a physical prior matrix to enhance the model's focus on key regions of convective activity, which are crucial for tropical cyclone formation.
- Utilized polar projection of satellite images to naturally capture radial and angular variations, providing a more intuitive representation of the rotational structure of tropical cyclones.
- Achieved state-of-the-art classification accuracy and stability, significantly surpassing existing models, particularly in reducing false alarm rates while maintaining high detection rates.
Funding
- National Natural Science Foundation of China (42375147)
Citation
@article{Tian2026RTCCNet,
  author  = {Tian, Wei and Li, Xiaotian and Fan, Jiachen and Zhao, Hang},
  title   = {RTCC-Net: tropical cyclone generation classification model based on multi-source information fusion},
  journal = {Scientific Reports},
  year    = {2026},
  doi     = {10.1038/s41598-025-33207-z},
  url     = {https://doi.org/10.1038/s41598-025-33207-z}
}
Original Source: https://doi.org/10.1038/s41598-025-33207-z