Virtual Room 7E
Thursday, January 30
 

3:00pm IST

Opening Remarks
Thursday January 30, 2025 3:00pm - 3:05pm IST
Moderator
Virtual Room E Pune, India

3:00pm IST

A Comparative Evaluation of ML, DL, and Transformer Models in Arabic Sentiment Analysis
Thursday January 30, 2025 3:00pm - 5:00pm IST
Authors - Amani A. Aladeemy, Sachin N. Deshmukh
Abstract - Sentiment analysis (SA) discerns the subjective tone within text, categorising it as positive, neutral, or negative. Arabic Sentiment Analysis (ASA) presents distinct obstacles owing to the language's intricate morphology, many dialects, and elaborate linguistic structures. This study compares SA models for Arabic text across multiple datasets, evaluating traditional machine learning (ML) algorithms, such as Random Forest (RF) and Support Vector Machine (SVM); deep learning (DL) models, including Bidirectional Long Short-Term Memory (BiLSTM) and Bidirectional Gated Recurrent Unit (BiGRU); and transformer-based models such as BERT, AraBERT, and XLM-RoBERTa. Experiments on the HARD, Khooli, AJGT, and Ar-Tweet datasets, covering Modern Standard Arabic (MSA) and dialects such as Gulf and Egyptian, demonstrate that transformer-based models, particularly AraBERT v02, achieve the highest accuracy of 93.9% on the HARD dataset. The study highlights the significance of dataset characteristics and the advantages of advanced models, offering valuable insights into Arabic NLP and advancing SA research.
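As a rough illustration of the transformer route the abstract reports as strongest, the sketch below fine-tunes an AraBERT-style checkpoint for three-class Arabic sentiment with Hugging Face Transformers; the model ID, toy examples, and hyperparameters are assumptions, not the authors' configuration.

```python
# Minimal sketch: fine-tuning an AraBERT-style transformer for 3-class Arabic
# sentiment classification. The checkpoint, labels, and hyperparameters are
# illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "aubmindlab/bert-base-arabertv02"   # assumed checkpoint for "AraBERT v02"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=3)

texts = ["الخدمة ممتازة", "التجربة سيئة جدا"]      # toy examples (positive, negative)
labels = torch.tensor([2, 0])                      # 0 = negative, 1 = neutral, 2 = positive
enc = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                                 # a few toy epochs
    out = model(**enc, labels=labels)              # cross-entropy loss computed internally
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    preds = model(**enc).logits.argmax(dim=-1)
print(preds.tolist())
```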
Paper Presenter
Virtual Room E Pune, India

3:00pm IST

A comparative study of Rainfall Prediction on Indian Regions using Gradient Boosting and Random Forest Algorithms
Thursday January 30, 2025 3:00pm - 5:00pm IST
Authors - Ritika Upadhyay, Eshita Dey, Munmun Patra, Roji Khatun, Chinmoy Kar, Somenath Chaterjee
Abstract - Predicting rainfall accurately is crucial for a country like India, which has a diverse economy. Agriculture is a vital part of life for many rural communities in India, making timely rainfall a significant concern for improving agricultural yields. However, predicting rainfall has become increasingly challenging due to drastic climate change, resulting in more frequent natural calamities such as floods and soil erosion. To address this issue, extensive research is underway to enhance rainfall prediction so that people can take appropriate precautions to protect their crops. Current predictive models tend to be complex statistical frameworks that are expensive in terms of both computation and budget. As a more effective alternative, this work proposes combining historical data with machine learning algorithms. The research aims to improve rainfall prediction using algorithms such as Gradient Boosting and Random Forest, with models evaluated using metrics such as Mean Squared Error (MSE) and Root Mean Squared Error (RMSE). The study considers historical rainfall data from 1813 to 2006 for different regions of India.
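For a concrete starting point, the following sketch compares scikit-learn's Gradient Boosting and Random Forest regressors using the MSE and RMSE metrics named in the abstract; the features and data are synthetic placeholders rather than the historical rainfall records used in the paper.

```python
# Minimal sketch: Gradient Boosting vs. Random Forest regression with MSE/RMSE.
# The feature columns and synthetic data below are placeholders, not the paper's dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                              # e.g. year, month, region code, prior rainfall
y = 100 + 20 * X[:, 3] + rng.normal(scale=5, size=500)     # synthetic rainfall (mm)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("Gradient Boosting", GradientBoostingRegressor(random_state=0)),
                    ("Random Forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name}: MSE={mse:.2f}, RMSE={np.sqrt(mse):.2f}")
```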
Paper Presenter
Virtual Room E Pune, India

3:00pm IST

AFR System: Optimizing Traffic Signals for Emergency Vehicle Prioritization
Thursday January 30, 2025 3:00pm - 5:00pm IST
Authors - G. Kalanandhini, Vigneshwaran D., R. Karthika, S. Pushpalatha, D. Sakthipriya
Abstract - Traffic delays faced by emergency vehicles often result in critical time loss, endangering lives. The Aid First Responders (AFR) system addresses this issue by using LoRa SX1276 communication modules configured as transmitters in emergency vehicles to communicate with receivers at traffic junctions. The system triggers automatic green signals for approaching emergency vehicles, ensuring uninterrupted passage. GPS NEO-6M modules provide directional data, while a centralized authorization mechanism prevents misuse. Specific vehicle IDs allow prioritized response, with fire engines taking the highest priority, followed by ambulances and police cars. Traffic police are notified to manage simultaneous interventions effectively. Normal traffic-signal operation resumes when no emergency vehicle is detected. The AFR system uses an Arduino Nano for the LoRa modules, an Arduino UNO for traffic control, and an ESP8266 for authorization. This integration improves emergency response times and enhances safety for both patients and responders, representing a notable advancement in urban traffic management.
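The deployed system runs on Arduino hardware; purely as an illustration of the priority arbitration described above (fire engine over ambulance over police, with authorization checks and a fallback to normal operation), here is a hedged Python sketch whose message fields and vehicle IDs are invented for the example.

```python
# Illustrative sketch (not the Arduino firmware): junction-side priority arbitration.
# Vehicle classes, ID formats, and message fields are assumptions for illustration only.
PRIORITY = {"FIRE": 0, "AMBULANCE": 1, "POLICE": 2}   # lower value = higher priority

def select_signal(active_requests, authorized_ids):
    """Pick which approach gets the green light from pending LoRa requests.

    active_requests: list of dicts like {"vehicle_id": ..., "cls": ..., "approach": ...}
    authorized_ids:  set of IDs cleared by the central authorization component.
    """
    valid = [r for r in active_requests
             if r["vehicle_id"] in authorized_ids and r["cls"] in PRIORITY]
    if not valid:
        return "NORMAL_CYCLE"            # no emergency vehicle detected
    best = min(valid, key=lambda r: PRIORITY[r["cls"]])
    return f"GREEN:{best['approach']}"   # hold green for the winning approach

# Example: an ambulance and a fire engine arrive at the same junction together.
requests = [{"vehicle_id": "AMB-12", "cls": "AMBULANCE", "approach": "NORTH"},
            {"vehicle_id": "FIRE-03", "cls": "FIRE", "approach": "EAST"}]
print(select_signal(requests, authorized_ids={"AMB-12", "FIRE-03"}))  # GREEN:EAST
```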
Paper Presenter
Virtual Room E Pune, India

3:00pm IST

Analysis of the Audio in the Game of Cricket Using Machine Learning
Thursday January 30, 2025 3:00pm - 5:00pm IST
Authors - Varun M V, Venkat Raghavendra A H, V Hemanth, Ashwini Bhat
Abstract - This work presents a comprehensive analysis of cricket sounds, focusing on the interaction of the ball with the bat and the wicket. The study aims to distinguish between edged, shot, and bowled audio in both noisy and noise-free environments. After feature extraction, machine learning models (XGBoost and Random Forest) were trained to accurately classify these distinct cricketing events. This not only enriches cricket analysis by facilitating informed decision-making and insights into player performance, but also showcases the potential of audio-based sports analytics.
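A minimal sketch of the kind of pipeline the abstract outlines: summary audio features feeding XGBoost and Random Forest classifiers. The MFCC feature choice, file names, and labels are assumptions rather than the authors' exact setup.

```python
# Illustrative sketch: mean-MFCC features per clip, classified by XGBoost and
# Random Forest. File names and labels below are placeholders for a real dataset.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

def extract_features(path):
    """Load a clip and summarise it as a 13-dim mean-MFCC vector."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Hypothetical labelled clips: 0 = edged, 1 = shot, 2 = bowled.
clips = [("edge_01.wav", 0), ("shot_01.wav", 1), ("bowled_01.wav", 2)]  # stands in for the full dataset
X = np.stack([extract_features(path) for path, _ in clips])
y = np.array([label for _, label in clips])

for model in (XGBClassifier(eval_metric="mlogloss"), RandomForestClassifier(random_state=0)):
    model.fit(X, y)                     # a real study would evaluate on a held-out split
    print(type(model).__name__, "training accuracy:", model.score(X, y))
```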
Paper Presenter
Varun M V
Virtual Room E Pune, India

3:00pm IST

Cross-Modality Attention Networks for Multi Phase Lung Tumour Detection
Thursday January 30, 2025 3:00pm - 5:00pm IST
Authors - N. Janani
Abstract - In clinical practice, nearly 18-20% of cases go unnoticed or are misdiagnosed due to overlapping and subtle features in imaging, especially in complicated cases. We tackle this with a Cross-Modality Attention Network (CMAN), which integrates details from multi-phase CT scans. By leveraging the distinct advantages of each scan phase, this method provides a comprehensive understanding of the tumor's structure and characteristics. The cross-modality approach employs an attention mechanism to integrate information from multiple scan modalities, each capturing unique details. This process emphasizes the most critical tumor-related features while effectively minimizing noise, ensuring enhanced classification accuracy. Achieving an accuracy of 98.47% on the LIDC-IDRI dataset, CMAN significantly reduces misdiagnosis in complex cases. This approach can help fill diagnostic gaps, facilitating more informed clinical decision-making and improved patient outcomes.
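The abstract does not spell out the CMAN architecture; the PyTorch sketch below only illustrates the general idea of cross-modality attention fusing two CT-phase feature streams, with layer sizes, the two-phase setup, and the backbone assumed rather than taken from the paper.

```python
# Illustrative PyTorch sketch of cross-modality attention fusion between two
# CT-phase feature streams. Dimensions and design choices are assumptions.
import torch
import torch.nn as nn

class CrossModalityFusion(nn.Module):
    def __init__(self, dim=256, heads=4, num_classes=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (batch, tokens, dim) features from two scan phases,
        # e.g. produced by a shared CNN/ViT backbone (not shown).
        fused, _ = self.attn(query=feat_a, key=feat_b, value=feat_b)  # phase A attends to phase B
        fused = self.norm(fused + feat_a)                             # residual connection
        return self.head(fused.mean(dim=1))                           # pool tokens, classify

model = CrossModalityFusion()
phase_arterial = torch.randn(2, 49, 256)   # dummy features for 2 patients
phase_venous = torch.randn(2, 49, 256)
print(model(phase_arterial, phase_venous).shape)   # torch.Size([2, 2])
```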
Paper Presenter
N. Janani
Virtual Room E Pune, India

3:00pm IST

Enhancing Visual Question Answering for Medical Images using Transformers and Convolutional Autoencoder
Thursday January 30, 2025 3:00pm - 5:00pm IST
Authors - Parekh Rikita Dhaval, Hiteishi M. Diwanji
Abstract - Visual Question Answering (VQA) is an emerging multidisciplinary research field at the intersection of computer vision and natural language processing, and medical VQA is one of its prominent areas. A medical image and a clinical question about that image are given as input to the VQA model, which responds with a corresponding answer in natural language. The aim of medical VQA is to enhance the interpretability of medical image data, improving diagnostic accuracy, clinical decision-making, and patient care. This paper presents a novel framework that integrates a Vision Transformer (ViT), a language transformer (BERT), and a Convolutional Autoencoder (CAE) to improve performance on the medical VQA task. The Vision Transformer captures complex visual features from medical images, while BERT processes the corresponding clinical question to understand its context and generate meaningful language embeddings. To further enhance visual feature extraction, a Convolutional Autoencoder [1], [2] is incorporated to preprocess and denoise the medical images, capturing essential patterns and compressing the images without losing key features, thereby providing cleaner input to the ViT. The combined use of these three components enables the model to align visual features with textual information effectively, leading to more precise and context-aware answers. We evaluate the proposed ViT+BERT+CAE model on the benchmark medical VQA dataset MEDVQA-2019, showing significant improvements over traditional methods based solely on convolutional or recurrent networks. The results demonstrate a significant increase in accuracy, precision, recall, F1-score, and WUPS score after applying the Convolutional Autoencoder in the preprocessing stage.
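As a hedged sketch of the described pipeline, the code below denoises an image with a small convolutional autoencoder, encodes image and question with ViT and BERT, and classifies the concatenated embeddings over an answer vocabulary; the checkpoints, dimensions, and fusion-by-concatenation choice are assumptions, not the paper's exact model.

```python
# Illustrative sketch of a CAE -> ViT + BERT -> answer-classifier pipeline.
# Checkpoints, answer-vocabulary size, and fusion strategy are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer, ViTModel

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                                     nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))      # denoised/reconstructed image

class MedVQAModel(nn.Module):
    def __init__(self, num_answers=100):
        super().__init__()
        self.cae = ConvAutoencoder()
        self.vit = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = nn.Linear(768 + 768, num_answers)

    def forward(self, pixel_values, input_ids, attention_mask):
        clean = self.cae(pixel_values)                                   # CAE preprocessing
        img = self.vit(pixel_values=clean).last_hidden_state[:, 0]       # ViT [CLS] embedding
        txt = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).pooler_output     # BERT question embedding
        return self.classifier(torch.cat([img, txt], dim=-1))            # answer logits

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = MedVQAModel()
q = tokenizer("Is there a fracture in this image?", return_tensors="pt")
logits = model(torch.rand(1, 3, 224, 224), q["input_ids"], q["attention_mask"])
print(logits.shape)
```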
Paper Presenter
Virtual Room E Pune, India

3:00pm IST

FFAFER: Fiducial Focus Augmentation for Facial Expression Recognition
Thursday January 30, 2025 3:00pm - 5:00pm IST
Authors - Ritu Raj Pradhan, Darshan Gera, P. Sunil Kumar
Abstract - This paper explores static facial expression recognition (FER) and presents a novel facial augmentation technique designed to enhance model training. By utilizing pre-trained facial landmark detection models, we analyze the spatial structure of faces within the FER training dataset. Based on the predicted landmark coordinates, facial images are augmented by strategically masking patches of varying sizes at key landmark locations. This approach emphasizes the structural significance of facial landmarks while preserving other critical facial features, enabling models to capture both global facial structure and nuanced expression-related details. Extensive experiments on benchmark datasets validate the effectiveness of the proposed method, showcasing its potential to improve FER performance, particularly in challenging scenarios.
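A minimal NumPy sketch of the augmentation idea, assuming landmark coordinates come from an external pretrained detector; the patch sizes, number of masked patches, and dummy coordinates are illustrative only.

```python
# Illustrative sketch of fiducial-focus augmentation: masking square patches of
# varying size at predicted facial landmark locations. The landmark detector is
# assumed to be external; coordinates and patch sizes here are placeholders.
import numpy as np

def mask_landmark_patches(image, landmarks, size_range=(8, 24), num_patches=5, rng=None):
    """Zero out square patches centred on randomly chosen landmarks.

    image:     HxWxC uint8 face crop.
    landmarks: (N, 2) array of (x, y) coordinates from a pretrained landmark model.
    """
    rng = rng or np.random.default_rng()
    out = image.copy()
    h, w = out.shape[:2]
    chosen = rng.choice(len(landmarks), size=min(num_patches, len(landmarks)), replace=False)
    for idx in chosen:
        x, y = landmarks[idx].astype(int)
        half = rng.integers(size_range[0], size_range[1]) // 2
        y0, y1 = max(0, y - half), min(h, y + half)
        x0, x1 = max(0, x - half), min(w, x + half)
        out[y0:y1, x0:x1] = 0                     # mask the patch
    return out

# Usage with dummy data (a real pipeline would use detector-predicted landmarks).
face = np.full((112, 112, 3), 255, dtype=np.uint8)
pts = np.array([[30, 40], [80, 40], [56, 60], [40, 85], [72, 85]], dtype=float)
augmented = mask_landmark_patches(face, pts, rng=np.random.default_rng(0))
```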
Paper Presenter
Virtual Room E Pune, India

3:00pm IST

Hybrid Model of Chaos Theory and Quantum Techniques for Portfolio Optimisation
Thursday January 30, 2025 3:00pm - 5:00pm IST
Authors - Deepali, Karuna Kadian, Kashish Arora, Saumya Johar, Liza
Abstract - The stock market has become increasingly unpredictable in recent years due to factors such as public sentiment, the economy, and geopolitical issues. Traditional methods such as time-series models and Long Short-Term Memory (LSTM) networks often make incorrect predictions because they rely mostly on historical stock-market data and fail to capture how the market behaves or how its chaotic behavior can be analyzed. These models may therefore lead to poor investment decisions. Our proposed methodology takes a hybrid approach, using chaos theory and sentiment analysis to overcome these challenges by analyzing how stock prices might change according to public sentiment. We analyze 65,000 tweets about 95 organizations and their stocks and use chaos theory to find hidden patterns in stock movements. Because classical computers require high computational time for complex problems such as stock-market prediction, we combine these approaches with the Quantum Approximate Optimization Algorithm (QAOA) to solve the complex patterns of stock-price prediction faster and more accurately than classical methods. Sentiment analysis and chaos theory are combined with QAOA, a combinatorial optimization algorithm, to optimize the stock portfolio based on specific stock metrics, including the F1 score from sentiment analysis and chaos-theory assessments, and to search for organizations offering stability and low-risk, high-return stocks. This aids investors and traders in making informed decisions about where to invest for low risk and high returns.
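To make the combinatorial formulation concrete, the sketch below builds a small QUBO-style selection objective from per-stock metrics (sentiment F1, a chaos assessment, expected return) and solves it by brute force; QAOA would optimize an objective of this form on quantum hardware, and the weights, metrics, and budget penalty here are invented for illustration.

```python
# Illustrative sketch: a QUBO-style portfolio-selection objective built from
# per-stock metrics and solved by brute force. Weights and values are assumptions.
import itertools
import numpy as np

stocks = ["A", "B", "C", "D"]
f1 = np.array([0.82, 0.64, 0.91, 0.70])        # sentiment-model F1 per stock (assumed)
chaos = np.array([0.30, 0.75, 0.20, 0.55])      # higher = more chaotic/unstable (assumed)
ret = np.array([0.08, 0.15, 0.10, 0.12])        # expected return (assumed)

score = 1.0 * ret + 0.5 * f1 - 0.8 * chaos      # per-stock desirability (illustrative weights)
budget = 2                                       # pick exactly two stocks
penalty = 2.0                                    # enforces the budget constraint

def objective(x):
    """QUBO-style cost: maximize score of chosen stocks, penalize wrong portfolio size."""
    x = np.asarray(x)
    return -score @ x + penalty * (x.sum() - budget) ** 2

best = min(itertools.product([0, 1], repeat=len(stocks)), key=objective)
print("selected:", [s for s, bit in zip(stocks, best) if bit])   # brute-force optimum
```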
Paper Presenter
Virtual Room E Pune, India

3:00pm IST

Learning beyond Limits: Exploring Augmented Reality and Virtual Reality in Education and Training
Thursday January 30, 2025 3:00pm - 5:00pm IST
Authors - Dipti Varpe, Kalyani Kulkarni, Vaidehi Deshpande, Vedaant Deshpande, Vaishnavi Habbu
Abstract - Augmented Reality (AR) and Virtual Reality (VR) enhance traditional pedagogical methods by providing immersive, interactive and experiential learning environments, while catering to diverse learning styles. The paper examines their effectiveness in improving knowledge retention, fostering engagement, and enabling hands-on practice in simulated real-world scenarios, citing comparisons with traditional teaching tools. In education, AR and VR allow visualization of abstract concepts, collaborative virtual environments and gamified learning experiences that make complex subjects accessible and engaging. For training purposes, these technologies are instrumental in safe skill acquisition, particularly in high-risk fields such as healthcare and military operations. Challenges such as high costs of facility maintenance and safe implementation are also addressed. This review concludes with recommendations for leveraging this technology to create impactful and scalable solutions for learners and trainees in various disciplines.
Paper Presenter
Virtual Room E Pune, India

3:00pm IST

Quality assessment of Fruits and Vegetables using Deep Learning
Thursday January 30, 2025 3:00pm - 5:00pm IST
Authors - R.V. Sai Sriram, A. Srujan, K. Rahul, K. Sathvik, Para Upendar
Abstract - Freshness plays a crucial role in determining the quality of fruits and vegetables, directly impacting consumer health and influencing nutritional value. Fresh produce used in food processing industries must go through multiple stages—harvesting, sorting, classification, grading, and more—before reaching the customer. This paper introduces an organized and precise approach for classifying and detecting the freshness of fruits and vegetables. Leveraging advanced deep learning models, particularly convolutional neural networks (CNNs), this method analyzes images of produce. The training and evaluation dataset is large and varied, including diverse fruits and vegetables in various conditions. Freshness is determined by analyzing key features like color, texture, shape, and size. For example, fresh produce typically shows vibrant color and is free from mold or brown spots. Traditional methods for assessing quality through manual inspection and sorting are often slow and error prone. Automated detection techniques can significantly mitigate these challenges. Therefore, this paper proposes an automated approach to freshness detection, which first identifies whether an image shows a fruit or vegetable and then classifies it as either fresh or rotten. The ResNet18 deep learning model is employed for this identification and classification task. It also estimates the size of the fruit/vegetable using OpenCV. The qualitative analysis of this approach demonstrates outstanding performance on the fruits and vegetables dataset.
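A hedged sketch of the classification-plus-measurement idea: torchvision's ResNet18 with a replaced final layer for fresh/rotten classes, plus a crude OpenCV bounding-box size estimate. The class names, Otsu thresholding, and pixels-per-centimetre scale are assumptions, not the paper's calibration.

```python
# Illustrative sketch: ResNet18 classifier for fresh/rotten produce plus an
# OpenCV bounding-box size estimate. Labels, threshold, and scale are assumptions.
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

CLASSES = ["fresh_apple", "rotten_apple", "fresh_banana", "rotten_banana"]  # assumed labels

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))   # new head, to be fine-tuned on produce data
model.eval()

preprocess = transforms.Compose([transforms.ToTensor(),
                                 transforms.Resize((224, 224), antialias=True)])

def classify_and_measure(bgr_image, pixels_per_cm=20.0):
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = model(preprocess(rgb).unsqueeze(0))
    label = CLASSES[int(logits.argmax(dim=1))]

    # Crude size estimate: largest contour's bounding box, converted via an assumed scale.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    w_cm = h_cm = 0.0
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        w_cm, h_cm = w / pixels_per_cm, h / pixels_per_cm
    return label, (w_cm, h_cm)

# Usage: label, size = classify_and_measure(cv2.imread("banana.jpg"))
```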
Paper Presenter
Virtual Room E Pune, India

4:45pm IST

Session Chair Remarks
Thursday January 30, 2025 4:45pm - 4:50pm IST
Invited Guest/Session Chair

Dr. Kaushal Shah

Assistant Professor, Pandit Deendayal Energy University, Gujarat, India.
Virtual Room E Pune, India

4:50pm IST

Closing Remarks
Thursday January 30, 2025 4:50pm - 5:00pm IST
Moderator
Virtual Room E Pune, India
 

