Authors - Madhusmita Mishra, R. Kanagavalli

Abstract - This comprehensive survey examines advancements in semi-supervised learning (SSL) techniques developed to address imbalanced multi-class classification problems across a variety of real-world applications, including healthcare, fraud detection, and industrial monitoring. Traditional machine learning models often struggle with highly skewed data distributions, leading to biased predictions that favour majority classes while overlooking minority classes. SSL, which leverages both labelled and unlabelled data, has emerged as a promising approach, reducing the need for extensive labelled datasets while improving model generalization for minority classes. This review focuses on methodologies such as re-sampling, cost-sensitive learning, ensemble learning, hybrid techniques, active learning, and evolutionary algorithms, each offering a distinct approach to mitigating the impact of class imbalance. Re-sampling methods, such as SMOTE (Synthetic Minority Over-sampling Technique) and its variants, augment minority classes by creating synthetic samples, addressing imbalances within SSL frameworks. Cost-sensitive learning introduces penalties for misclassifications, improving sensitivity to minority classes, while ensemble learning methods, like bagging and boosting, combine multiple classifiers to enhance predictive accuracy in multi-class settings. Additionally, hybrid techniques that integrate re-sampling with cost-sensitive approaches show promise in balancing class representation and improving model robustness. Active learning, which iteratively selects the most informative samples, and meta-learning, which enables models to adapt dynamically to different class distributions, offer further avenues for tackling class imbalance in SSL applications.
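To make the SMOTE idea mentioned above concrete, the following is a minimal sketch of the core interpolation step: each synthetic sample is placed on the line segment between a minority-class point and one of its k nearest minority-class neighbours. The function name and API here are illustrative, not taken from the survey or any particular library.

```python
import numpy as np

def smote_sample(minority, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: generate n_new synthetic samples by
    interpolating between each chosen minority point and one of its
    k nearest minority-class neighbours (illustrative implementation)."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        # distances from x to every minority point
        d = np.linalg.norm(minority - x, axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        gap = rng.random()                    # interpolation factor in [0, 1)
        synthetic.append(x + gap * (minority[j] - x))
    return np.array(synthetic)

# Example: 6 minority points in 2-D, create 4 synthetic ones
minority = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],
                     [1.1, 1.3], [0.8, 0.8], [1.3, 1.0]])
new_points = smote_sample(minority, n_new=4, k=3, rng=0)
print(new_points.shape)  # (4, 2)
```

Because each synthetic point is a convex combination of two real minority points, the new samples stay inside the region occupied by the minority class rather than being arbitrary noise, which is what makes the technique useful for rebalancing skewed training sets.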