Type: Virtual Room 9E
Friday, January 31
 

12:15pm IST

Opening Remarks
Friday January 31, 2025 12:15pm - 12:20pm IST
Moderator
Friday January 31, 2025 12:15pm - 12:20pm IST
Virtual Room E Pune, India

12:15pm IST

A Multilingual Deep Learning Approach for Sign Language Recognition
Friday January 31, 2025 12:15pm - 2:15pm IST
Authors - A.Kousar Nikhath, J.Ananya Reddy, P.Aishwarya, S.Maadhurya Sri, P.Gowthami
Abstract - Sign language is the communication medium between people who can hear and speak and those who cannot. This project contributes to the development of technologies that benefit individuals with disabilities, studying in depth the use of computer vision and deep learning. Sign-language gestures serve as the input, and the software produces an accurate translation in the regional language. The work is further motivated by the broader potential of Artificial Intelligence in assistive technology and timely health diagnosis. A Convolutional Neural Network (CNN) is employed to extract characteristics from the hand movements that make up the signs; these features are used to train the model to classify gestures and recognize them in real time. Computer vision techniques are additionally included for preprocessing and for improving recognition accuracy. The functionality of the sign language recognition system is assessed through a variety of experiments measuring accuracy and speed. Overall, the developed Sign Language Recognition System, integrating deep learning and computer vision techniques, enables precise and fast recognition of sign-language gestures, and its integration with a translator not only adds multi-language support but also ensures correct translation into regional languages.
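As a rough illustration of the CNN classification step the abstract describes (a minimal sketch, not the authors' implementation; the input size, class count, and training data are placeholder assumptions):

```python
# Minimal sketch of a small CNN gesture classifier in Keras.
# Image size and number of sign classes are assumptions, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26          # assumption: one class per static hand sign
IMG_SIZE = (64, 64)       # assumption: resized grayscale hand crops

model = models.Sequential([
    layers.Input(shape=(*IMG_SIZE, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.1, epochs=20)
```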
Paper Presenter
Friday January 31, 2025 12:15pm - 2:15pm IST
Virtual Room E Pune, India

12:15pm IST

A review on Imagify and Image NFT Marketplace
Friday January 31, 2025 12:15pm - 2:15pm IST
Authors - Hritesh Kumar Shanty, Padirolu Moses, Tulasiram Nimmagadda, Samson Anosh Babu Parisapogu
Abstract - In today’s digital world, combining image editing with secure NFT trading is essential. Imagify addresses this need by offering a unified platform with advanced artificial intelligence tools for image enhancement, recoloring, restoration, and object removal, empowering users to customize images to their preferences. Imagify also simplifies the NFT creation process, allowing users to seamlessly transform their edited images into NFTs that can be bought and sold on a blockchain-secured marketplace. This ensures transparent and secure transactions, providing peace of mind for both creators and buyers. With a flexible, credit-based system, users pay only for the features they choose, making it a cost-effective option. By merging intuitive image editing with a streamlined NFT marketplace, Imagify offers an accessible, user-friendly platform where creators and collectors can engage in digital image trading confidently. This integration creates an efficient and transparent process, supporting both casual creators and seasoned collectors seeking a secure, comprehensive solution for managing and trading digital images.
Paper Presenter
Friday January 31, 2025 12:15pm - 2:15pm IST
Virtual Room E Pune, India

12:15pm IST

AI and Computer Vision Techniques for Fitness Training and Form Analysis: A Comprehensive Review
Friday January 31, 2025 12:15pm - 2:15pm IST
Authors - R V S S Surya Abhishek, T Sridevi
Abstract - This paper provides an overview of the current state of AI-based approaches to virtual fitness coaching, focusing on posture estimation and exercise tracking with real-time feedback. Advances in pose estimation models, including OpenPose, MediaPipe, and AlphaPose, are boosting personalized exercise correction and injury prevention in fitness applications. The literature ranges from 2D to 3D pose estimation and includes action recognition and deep learning frameworks for movement analysis and user engagement. Current models still leave much room for improvement in adapting to individual needs and environments: real-time accuracy is often not matched by personalized feedback or robustness to exercise variations. The paper discusses the approaches currently in use, their applications, and their challenges, and suggests directions for improving the adaptability and customization of AI fitness solutions so that they more closely emulate human trainers.
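The kind of form analysis this review surveys can be sketched with an off-the-shelf pose estimator; the snippet below uses MediaPipe Pose to compute an elbow angle from detected landmarks (the image path and the 160-degree extension threshold are illustrative assumptions, not values from the paper):

```python
# Illustrative sketch: joint-angle measurement from MediaPipe Pose landmarks.
import numpy as np
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by points a-b-c, each given as (x, y)."""
    a, b, c = np.array(a), np.array(b), np.array(c)
    cosang = np.dot(a - b, c - b) / (np.linalg.norm(a - b) * np.linalg.norm(c - b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

with mp_pose.Pose(static_image_mode=True) as pose:
    frame = cv2.imread("curl_frame.jpg")      # placeholder image path (assumption)
    if frame is not None:
        res = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if res.pose_landmarks:
            lm = res.pose_landmarks.landmark
            P = mp_pose.PoseLandmark
            shoulder = (lm[P.LEFT_SHOULDER.value].x, lm[P.LEFT_SHOULDER.value].y)
            elbow = (lm[P.LEFT_ELBOW.value].x, lm[P.LEFT_ELBOW.value].y)
            wrist = (lm[P.LEFT_WRIST.value].x, lm[P.LEFT_WRIST.value].y)
            angle = joint_angle(shoulder, elbow, wrist)
            # Illustrative feedback rule: treat > 160 degrees as full extension.
            print(f"Elbow angle: {angle:.1f} deg",
                  "- full extension" if angle > 160 else "")
```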
Paper Presenter
Friday January 31, 2025 12:15pm - 2:15pm IST
Virtual Room E Pune, India

12:15pm IST

Comparative Analysis of Deep Learning Models for Speech-to-Text and Text-to-Speech conversion
Friday January 31, 2025 12:15pm - 2:15pm IST
Authors - Mrudul Dixit, Rajiya Landage, Prachi Raut
Abstract - The paper presents a comprehensive comparison of Speech-to-Text (STT) and Text-to-Speech (TTS) models, two foundational technologies in natural language processing and human-computer interaction. It examines the evolution of these models, focusing on state-of-the-art approaches such as Whisper Automatic Speech Recognition (ASR), DeepSpeech, Wav2vec, Kaldi, and SpeechBrain for STT, and Tacotron, WaveNet, gTTS, and FastSpeech for TTS. Through an analysis of architectures, performance metrics, and applications, the paper highlights the strengths and limitations of each model, particularly in domains requiring high accuracy, multilingual support, and real-time processing. The paper also explores the challenges faced by STT and TTS systems, including handling diverse languages, coping with background noise, and generating natural-sounding speech. Recent advances in end-to-end models, transfer learning, and multimodal approaches are pushing the boundaries of both STT and TTS technologies. By providing a detailed comparison and identifying future research directions, this review aims to guide researchers and practitioners in selecting and developing speech models for various applications, particularly in enhancing accessibility for specially-abled individuals.
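To give a minimal sense of how such STT and TTS models are exercised and scored, the sketch below transcribes a clip with Whisper, computes word error rate against a reference transcript, and synthesizes speech with gTTS (file names and the reference text are placeholder assumptions, not the paper's benchmark setup):

```python
# Illustrative sketch only: one STT model, one metric, one TTS model.
import whisper                 # pip install openai-whisper
from jiwer import wer          # pip install jiwer
from gtts import gTTS          # pip install gTTS

# Speech-to-Text: Whisper ASR
model = whisper.load_model("base")
hypothesis = model.transcribe("sample.wav")["text"]      # placeholder audio file

# A common STT metric: word error rate against a ground-truth transcript
reference = "the quick brown fox jumps over the lazy dog"    # placeholder reference
print("WER:", wer(reference, hypothesis.lower().strip()))

# Text-to-Speech: gTTS
gTTS("This sentence was synthesized for comparison.", lang="en").save("tts_out.mp3")
```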
Paper Presenter
Friday January 31, 2025 12:15pm - 2:15pm IST
Virtual Room E Pune, India

12:15pm IST

Deep Learning Model for Lip-Based Speech Synthesis
Friday January 31, 2025 12:15pm - 2:15pm IST
Authors - A.Kousar Nikhath, Aanchal Jain, Ananya D, Ramana Teja
Abstract - The project focuses on creating an advanced system for visual speech recognition by performing lipreading at the sentence level. Traditional approaches, which were limited to word-level recognition, often lacked sufficient contextual understanding and real-world usability. This work aims to overcome those limitations by utilizing cutting-edge deep learning models, such as CNNs, RNNs, and hybrid architectures, to effectively process visual inputs and generate coherent speech predictions. The system's development follows a systematic approach, beginning with a review of existing solutions and their shortcomings. The proposed framework captures both temporal and spatial dynamics of lip movements using specialized neural networks, significantly enhancing the accuracy of sentence-level predictions. Extensive testing on diverse datasets validates the system’s efficiency, scalability, and practical applications. This study underscores the critical role of robust feature extraction, sequential data modeling, and hierarchical processing in achieving effective sentence-level lipreading. The results demonstrate notable improvements in performance metrics. Additionally, the project outlines future advancements, including optimizing the system for real-time processing and resource-constrained environments, paving the way for practical implementation in multiple fields.
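One common way to realize the CNN-plus-RNN hybrid the abstract mentions is a frame-wise CNN feeding a bidirectional LSTM; the sketch below is a generic assumed architecture in Keras, not the authors' model (sequence length, mouth-crop size, and vocabulary size are placeholders):

```python
# Generic spatiotemporal lipreading sketch: per-frame CNN features + BiLSTM.
import tensorflow as tf
from tensorflow.keras import layers, models

T, H, W = 75, 50, 100        # frames per clip, mouth-crop height/width (assumed)
VOCAB = 40                   # character-level output size (assumed)

model = models.Sequential([
    layers.Input(shape=(T, H, W, 1)),
    layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Flatten()),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.Dense(VOCAB, activation="softmax"),   # per-frame character distribution
])
model.summary()
# Training would typically use a CTC-style loss to align frames with characters.
```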
Paper Presenter
Friday January 31, 2025 12:15pm - 2:15pm IST
Virtual Room E Pune, India

12:15pm IST

Exploration of Galactic Redshift and Its Impact on Galaxy Properties Using Machine Learning
Friday January 31, 2025 12:15pm - 2:15pm IST
Authors - Randeep Singh Klair, Gurkunwar Singh, Ritik Verma, Satvik Rawal, Rajan Kakkar, Agamnoor Singh Vasir, Nilimp Rathore
Abstract - The most accurate way to measure galaxy redshifts is using spectroscopy, but it takes a lot of computer power and telescope time. Despite their speed and scalability, photometric techniques are less precise. Thanks to large astronomical datasets, machine learning has become a potent technique for increasing cosmology research’s scalability and accuracy. On datasets such as the Sloan Digital Sky Survey, algorithms such as k-Nearest Neighbors, Random Forests, Support Vector Machines, Gradient Boosting, and Neural Networks are assessed using metrics like R-squared, Mean Absolute Error, and Root Mean Square Error. Ensemble approaches provide reliable accuracy, whereas neural networks are excellent at capturing non-linear correlations. Improvements in feature selection, hyperparameter tuning, and interpretability are essential to improving machine learning applications for photometric redshift estimation and providing deeper insights into cosmic structure and development.
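The evaluation protocol named in the abstract (regressors compared via R-squared, MAE, and RMSE) can be sketched as follows; synthetic magnitudes stand in for SDSS photometry, so the numbers are illustrative only:

```python
# Illustrative comparison of two regressors for photometric redshift estimation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))        # stand-in for 5-band (u, g, r, i, z) magnitudes
z = 0.4 * X[:, 0] - 0.2 * X[:, 2] + rng.normal(scale=0.05, size=2000)  # toy redshift

X_tr, X_te, z_tr, z_te = train_test_split(X, z, test_size=0.25, random_state=0)

for name, reg in [("kNN", KNeighborsRegressor(n_neighbors=10)),
                  ("Random Forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    pred = reg.fit(X_tr, z_tr).predict(X_te)
    rmse = np.sqrt(mean_squared_error(z_te, pred))
    print(f"{name}: R2={r2_score(z_te, pred):.3f} "
          f"MAE={mean_absolute_error(z_te, pred):.3f} RMSE={rmse:.3f}")
```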
Paper Presenter
Friday January 31, 2025 12:15pm - 2:15pm IST
Virtual Room E Pune, India

12:15pm IST

Innovative Bow Tie Antenna Design for Enhanced MRI Imaging
Friday January 31, 2025 12:15pm - 2:15pm IST
Authors - Sudha K L, Navya Holla K, Kavita Guddad
Abstract - The antenna is a vital component of the Magnetic Resonance Imaging (MRI) machine: it receives the radio-frequency signals emitted by protons in the body after the RF pulse is turned off. Specialized high-frequency antennas can improve the quality, clarity, and resolution of the resulting MRI images. This paper deals with the design of a Bow Tie antenna for the X-Band (8–12 GHz), used in ultra-high-field MRI systems. The antenna is designed, simulated, and analysed using the Ansys HFSS tool, and the fabricated antenna is tested in an anechoic chamber to verify its operation. The reflection coefficient at 10.5 GHz is found to be around -14 dB for the simulated antenna and -12 dB for the fabricated antenna, which is satisfactory for practical application. Differences between the measured and simulated values are attributed to cable loss in the measuring apparatus.
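For context on the reported values, a reflection coefficient in dB converts to a linear magnitude, reflected-power fraction, and VSWR via standard formulas; the short sketch below applies them to the quoted -14 dB and -12 dB figures (the paper itself reports only the dB values):

```python
# What the reported S11 figures imply, via standard conversions.
def s11_summary(s11_db):
    gamma = 10 ** (s11_db / 20)        # |Gamma| from dB
    reflected_power = gamma ** 2       # fraction of incident power reflected
    vswr = (1 + gamma) / (1 - gamma)
    return gamma, reflected_power, vswr

for label, s11_db in [("simulated", -14), ("fabricated", -12)]:
    g, p, v = s11_summary(s11_db)
    print(f"{label}: |Gamma|={g:.3f}, reflected power={p * 100:.1f}%, VSWR={v:.2f}")
```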
Paper Presenter
Friday January 31, 2025 12:15pm - 2:15pm IST
Virtual Room E Pune, India

12:15pm IST

Integrating Federated Transfer Learning and Blockchain to Enhance IoT Security: A Comprehensive Survey
Friday January 31, 2025 12:15pm - 2:15pm IST
Authors - Bharati B Pannyagol, S.L Deshpande, Rohit Kaliwal, Bharati Chilad
Abstract - The Internet of Things has revolutionized markets by connecting previously isolated devices, but this integration raises security risks from malicious nodes that can corrupt data or disrupt operations. This survey evaluates Federated Learning as a decentralized node-identification technique and highlights its advantages over standard machine learning approaches: Internet of Things devices can collaborate on model training while protecting sensitive data and reducing network use. The interaction of Federated Learning and Blockchain creates a robust framework addressing critical IoT challenges such as data privacy, security, and trust. Blockchain enhances this system by providing a decentralized, tamper-resistant ledger that ensures data integrity and transparency, while smart contracts facilitate automated processes, including model validation and incentive distribution. Although this integrated approach improves data protection and scalability, challenges such as computational demands and consensus delays remain. The survey discusses practical applications, challenges, and future research directions for combining Federated Learning and Blockchain in IoT systems.
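The federated-averaging idea underlying such collaborative training can be illustrated generically (a NumPy sketch of FedAvg with hypothetical clients, not tied to any specific IoT or blockchain framework):

```python
# Generic FedAvg sketch: average client model parameters weighted by dataset size.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Combine per-client parameter lists into a global model by weighted averaging."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(n_layers)
    ]

# Three hypothetical IoT clients, each holding a two-layer model (as weight arrays)
clients = [[np.ones((4, 2)) * i, np.ones(2) * i] for i in (1.0, 2.0, 3.0)]
sizes = [100, 50, 50]
global_model = fed_avg(clients, sizes)
print(global_model[1])   # weighted average of the bias vectors -> [1.75 1.75]
```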
Paper Presenter
Friday January 31, 2025 12:15pm - 2:15pm IST
Virtual Room E Pune, India

12:15pm IST

Power Electronics: A Pivotal Role in Strengthening Cybersecurity
Friday January 31, 2025 12:15pm - 2:15pm IST
Authors - Bhadouriya Khushi Mukeshsingh, Rajput Adityasingh Shashikantsingh, Patel Swayam Vinodkumar, Ashish P. Patel, Nirav D. Mehta, Anwarul M. Haque
Abstract - As digital infrastructure becomes more interconnected, effective cyber security has never been more important. This article explores how advances in power electronics technology can support and improve cyber security frameworks. Energy management strategies, control systems, and semiconductor technologies can be used to increase system resilience against vulnerabilities that serve as possible entry points for cyber attackers. The research discussed in this article seeks to demonstrate that optimized distribution systems with adaptive control techniques can improve the stability and reliability of critical infrastructure, even in the face of cyber threats. The article discusses the inter-relationship between energy management and cyber security, showing how power electronics can be instrumental in developing a holistic security strategy, and describes a proposed approach to integrating power electronics into cyber security to create an adaptive, robust defence mechanism. This study provides valuable insights into the design of systems that are not only efficient but also fortified against evolving cyber threats, contributing to the broader understanding of how technology convergence can enhance overall infrastructure security.
Paper Presenter
Friday January 31, 2025 12:15pm - 2:15pm IST
Virtual Room E Pune, India

12:15pm IST

YOLO Algorithm-Based Effective Orange Detection and Localization with Improved Data Augmentation
Friday January 31, 2025 12:15pm - 2:15pm IST
Authors - Madhura Shankarpure, Dipti D. Patil
Abstract - This paper presents a robust framework for YOLO (You Only Look Once)-based orange detection and localization in photos and videos. The system combines contour-based bounding-box localization with deep learning-based object recognition for increased accuracy. Transfer learning was used to fine-tune a pre-trained YOLOv10 model on the Fruit 360 dataset, and data augmentation techniques such as random rotations, brightness changes, and scaling were applied to improve the model's resilience. As part of the real-time video processing methodology, bounding boxes are drawn around identified oranges with a confidence threshold greater than 0.5. The model performed well on a balanced test dataset, achieving 95% accuracy, 92% precision, and 90% recall. These findings show how well YOLO works when combined with conventional computer vision methods for real-world uses such as automated fruit sorting, fruit harvesting, and real-time market monitoring. The processed video output confirms the system's suitability for real-world situations.
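The real-time detection loop described above might look roughly like the following (assumptions: the Ultralytics Python API, a hypothetical fine-tuned weights file, and a placeholder video path; none of these are taken from the paper):

```python
# Rough sketch of YOLO inference on video with a 0.5 confidence threshold.
import cv2
from ultralytics import YOLO

model = YOLO("orange_yolov10.pt")       # placeholder: YOLOv10 fine-tuned on oranges

cap = cv2.VideoCapture("orchard.mp4")   # placeholder video path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for r in model(frame, conf=0.5, verbose=False):   # keep detections above 0.5
        for box in r.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 165, 255), 2)
            cv2.putText(frame, f"orange {float(box.conf[0]):.2f}",
                        (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 165, 255), 1)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```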
Paper Presenter
Friday January 31, 2025 12:15pm - 2:15pm IST
Virtual Room E Pune, India

2:00pm IST

Session Chair Remarks
Friday January 31, 2025 2:00pm - 2:05pm IST
Invited Guest/Session Chair

Dr. Vandna Rani Verma

Associate Professor, CSE Department, Galgotias College of Engineering and Technology, Greater Noida, India
Friday January 31, 2025 2:00pm - 2:05pm IST
Virtual Room E Pune, India

2:05pm IST

Closing Remarks
Friday January 31, 2025 2:05pm - 2:15pm IST
Moderator
Friday January 31, 2025 2:05pm - 2:15pm IST
Virtual Room E Pune, India
 
