Authors - Ketaki Bhoyar, Suvarna Patil Abstract - Fashion is the canvas of our identity: it can be inclusive, expressive, and sustainable. In the growing landscape of fashion, every individual faces an overwhelming array of choices, making it difficult to assemble a personalized wardrobe that reflects their preferences, taste, and needs. Traditional Fashion Recommendation Systems (FRSs) rely heavily on manual design, which limits their ability to scale and adapt to ever-evolving styles. Around the world, a large number of users buy clothes online through e-commerce websites, which rely primarily on recommender systems. Appropriate recommendations from an FRS enhance user satisfaction and make shopping more enjoyable and accessible. Artificial Intelligence (AI) tools have revolutionized FRSs, enabling them to move beyond conventional methods by incorporating contextual data, user preferences, and visual content to produce more individualized suggestions. Recently, Generative Adversarial Networks (GANs) have emerged as a potent technique for enhancing these systems by generating diverse fashion designs with high fidelity. This paper presents a systematic review of the parameters used to evaluate FRSs based on generative algorithms. Various parameters for assessing system performance and recommendation quality are analyzed, along with a detailed analysis of the input parameters to consider when designing an efficient AI-based FRS (AI-FRS). Research gaps identified by surveying numerous review papers are also explored. This review will help in selecting evaluation parameters for developing and examining more efficient AI-based FRSs.
Authors - Shobha K, Rajashekhara S Abstract - The Service Selection Board (SSB) evaluates candidates for admission to military services like the Indian Army, Navy, and Air Force through a rigorous five- to six-day selection process. This process assesses a candidate’s psychological and physical fitness, communication skills, and leadership qualities. Despite its importance, the low selection rate highlights a lack of preparation platforms for aspirants. Many candidates cannot afford offline coaching, and no comprehensive online platforms exist to simulate SSB tests. The proposed solution is an interactive online platform offering real-time test simulations, feedback, and guidance, replicating the SSB interview experience to enhance aspirants’ chances of success.
Authors - Payal Khode, Shailesh Gahane, Arya Kapse, Pankajkumar Anawade, Deepak Sharma Abstract - The COVID-19 pandemic has led to a widespread trend toward remote work, drastically altering the nature of the traditional workplace. While there are many advantages to working remotely, such as flexibility and less time spent traveling, there are also major cybersecurity risks: the inherent vulnerabilities in technologies used for remote work pose a persistent threat. Social distancing measures imposed by the pandemic forced employees to work from home, which increased internet usage, and malicious hackers have exploited these widespread changes to launch extensive phone scams, phishing attacks, and other computer-based exploits. Organizations quickly embraced remote work without fully understanding its impact on cybersecurity; because remote work policies were widely adopted without first consulting cybersecurity experts or implementing comprehensive security measures, vulnerabilities have multiplied. This study focuses on people, because they are the weakest link in cybersecurity, and highlights how important it is to protect business and personal information when working remotely. The aim of this research is to investigate the cybersecurity risks and challenges that companies and organizations encounter as employees change their work habits to work remotely during the COVID-19 pandemic.
Authors - Ananya Solanki, Leander Braganza, Aarol D’Souza, Sana Shaikh Abstract - For plant lovers, there has always been a barrier to accessing a wider variety of plants, owing to the absence of a dedicated marketplace for buying and selling them. This paper proposes such a marketplace: a platform with an interactive Augmented Reality (AR) feature that enables users to visualize the plants they select in their chosen environment. Furthermore, it uses the location-based Air Quality Index (AQI) to recommend plants suited to the user’s location, and it educates users by providing plant-care tips.
Authors - Prajkta Dandavate, Ameya Badge, Mohit Badgujar, Aditi Badkas, Rutuja Badgujar, Orison Bachute, Vedant Badve Abstract - This paper presents a system that integrates personalized music recommendations with real-time face-based emotion recognition through adaptive, emotion-driven user interaction. Given video continuously streamed from a PC camera, a convolutional neural network (CNN) classifies facial expressions into defined emotion categories such as sadness and happiness, achieving roughly 65% classification accuracy. The system detects emotions in a room in real time and builds up a music playlist accordingly, remaining smooth and adaptive: a multi-threaded architecture allows it to constantly readjust the emotional responsiveness of the interaction. Beyond entertainment, the paper explores applications in home automation, healthcare, and mental health, as well as opportunities for emotion-driven content and advertisements that match users’ real-time emotional states. It brings to the foreground the prospects of machine learning and real-time processing for creating deeply personalized, emotionally driven user experiences across diverse settings.
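The multi-threaded adaptation described above can be sketched with Python's standard threading and queue modules. This is an illustrative skeleton only: the emotion labels, playlist names, and worker structure are assumptions, and in the real system the labels would come from the CNN classifier rather than being pushed by hand.

```python
import queue
import threading

# Hypothetical emotion-to-playlist mapping (illustrative names only)
EMOTION_PLAYLISTS = {"happiness": ["upbeat-1", "upbeat-2"],
                     "sadness": ["calm-1", "calm-2"]}

def playlist_worker(emotion_queue, playlist, stop_event):
    """Continuously re-targets the playlist to the latest detected emotion."""
    while not stop_event.is_set():
        try:
            emotion = emotion_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        playlist.clear()
        playlist.extend(EMOTION_PLAYLISTS.get(emotion, []))
        emotion_queue.task_done()

emotions = queue.Queue()
playlist = []
stop = threading.Event()
worker = threading.Thread(target=playlist_worker,
                          args=(emotions, playlist, stop))
worker.start()

# In the real system these labels would be produced by the CNN in a
# separate capture/classification thread.
emotions.put("sadness")
emotions.put("happiness")
emotions.join()          # wait until the worker has processed both labels
stop.set()
worker.join()
print(playlist)          # playlist now reflects the most recent emotion
```

Decoupling capture/classification from playlist management through a queue is what keeps the interaction "smooth and adaptive": a slow playlist update never blocks frame processing.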
Authors - Shailesh Gahane, Payal Khode, Arya Kapse, Deepak Sharma, Pankajkumar Anawade Abstract - In recent years, the construction sector in Mozambique has suffered substantial loss of life from workplace accidents, mainly due to weak enforcement of international occupational health and safety (OHS) standards, poor control of the production process, and inadequate employee orientation. Risk analysis and management holds that risks are partially known, change over time, and can be managed, in the sense that human action can alter their form and/or the magnitude of their effect. The field of artificial intelligence (AI) is experiencing rapid growth and is increasingly integrated into various sectors, including healthcare, industry, education, and the workplace. The overall objective of this work is to develop an environmental, health, and safety management system integrating AI and blockchain to prevent accidents, facilitate decision-making, and comply with international construction regulations at sites in Maputo, Mozambique. To achieve this goal, the system will focus on administration and legal compliance, education and training, safety and emergency
Authors - Swapnil M Maladkar, Praveen M Dhulavvagol, S G Totad Abstract - Blockchain technology has emerged as a powerful tool for secure, decentralized data management across various industries, but it faces significant scalability challenges due to the limitations of existing sharding methods. Traditional static sharding approaches often result in inefficient resource allocation, while adaptive sharding techniques can lead to increased complexity and delayed adjustments, hampering overall system performance. This paper proposes an innovative blockchain network management approach by integrating Long Short-Term Memory (LSTM) models with dynamic sharding. This system leverages predictive analytics to optimize real-time sharding adjustments, significantly enhancing blockchain performance. By addressing the shortcomings of both static and adaptive sharding methods, the proposed approach avoids the extra infrastructure and delays associated with Layer 2 solutions. Future research will focus on advancing LSTM techniques, integrating them with other optimization strategies, and testing in real-world scenarios to further enhance scalability and efficiency. This LSTM-integrated dynamic sharding method represents a significant step forward in blockchain network optimization, offering a more efficient and adaptable solution for contemporary blockchain applications. Experimental results reveal a 22% increase in transaction throughput and a 25% reduction in latency compared to conventional static sharding.
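The core idea above, acting on a forecast before load arrives, can be sketched independently of the LSTM itself. In this hedged illustration the per-shard forecast is stubbed as a plain dictionary (in the paper it would come from the trained LSTM), and the split/merge thresholds are invented for the example.

```python
# Assumed thresholds (illustrative, not from the paper)
SPLIT_THRESHOLD = 1000   # tx/s above which a shard is split
MERGE_THRESHOLD = 200    # tx/s below which shards are merge candidates

def plan_resharding(predicted_load):
    """predicted_load: dict shard_id -> LSTM-forecast tx/s for the next window.
    Returns a resharding plan applied *before* the load materializes."""
    to_split = [s for s, load in predicted_load.items()
                if load > SPLIT_THRESHOLD]
    underloaded = sorted(s for s, load in predicted_load.items()
                         if load < MERGE_THRESHOLD)
    # pair up underloaded shards for merging
    to_merge = [(underloaded[i], underloaded[i + 1])
                for i in range(0, len(underloaded) - 1, 2)]
    return {"split": to_split, "merge": to_merge}

# Stub standing in for the LSTM's output
forecast = {"s0": 1500, "s1": 150, "s2": 90, "s3": 600}
plan = plan_resharding(forecast)
print(plan)  # {'split': ['s0'], 'merge': [('s1', 's2')]}
```

The predictive step is what distinguishes this from reactive adaptive sharding: adjustments are scheduled from the forecast rather than triggered after congestion is observed.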
Authors - Prema Sahane, Anand Dhadiwal, Devvrath Datkhile, Harshal Deore, Atharva Shinde, Amruta Hingmire Abstract - This paper surveys healthcare applications built to digitalize the healthcare sector with the help of modern technologies, describing the need for each application along with its advantages and disadvantages. Although many electronic health record management systems exist, their accuracy and efficiency fall short of what society needs. Maintaining records manually wastes considerable time, and patients find it difficult to track their previous records. Our system, “Medicard”, is therefore an application for interaction between doctors, patients, and pharmacists. It is a multi-tasking application covering healthcare tasks such as centralized storage of patient health records, drug analysis, allergy analysis, online receipt generation, community creation, booking doctors’ appointments, and online payment, with three distinct interfaces for doctors, patients, and pharmacists.
Authors - Anant Chovatiya, Priyanka Patel Abstract - Attendance management holds significant importance for all organizations, whether educational institutions or public- and private-sector bodies, serving as a determining factor in their success. Efficiently tracking individuals within the organization, including employees and students, is crucial for optimizing their performance, yet managing attendance during lecture periods has become a challenging endeavor: manual calculation of attendance percentages often results in errors, inefficiency, and wasted time. In response to the challenges posed by traditional paper-based practices in educational institutions, this paper introduces a digital solution for managing university lecture slots and attendance. The proposed system, named the "Speed Check system," streamlines faculty and student attendance processes through a mobile application, eliminating manual recording and reducing paper consumption. Leveraging a cloud-based NoSQL database, real-time data synchronization ensures seamless communication across users. The system offers distinct functionalities for Time Table Coordinators and Attendance Coordinators, facilitating efficient slot scheduling, modification, and attendance marking. Built with the Flutter SDK and Firebase, the application provides a user-friendly interface and robust data protection. Future enhancements include role-based access control and advanced analytics for informed decision-making. Overall, this digital solution represents a significant stride towards optimizing academic administration and enhancing the effectiveness of attendance management in educational institutions.
Authors - Nirali Arora, Harsh Mathur, Vishal Ratansing Patil Abstract - Achieving relevance in search results is difficult in today's complex information environment, particularly when single-algorithm ranking models struggle to account for the variety of user circumstances. To improve search relevance across diverse contexts, this study presents a unified ranking strategy that integrates multiple algorithms. The hybrid system adapts dynamically to user intent and situational details by combining conventional models such as BM25 and PageRank with neural techniques such as BERT-based transformers and learning-to-rank algorithms. A key component of this strategy is a context-recognition mechanism that continuously evaluates user history, query type, and behavioural patterns to fine-tune relevance scores according to the particular requirements of each search context. This method, called Contextual Rank, combines algorithmic scores to prioritize relevance, enabling more flexibility and responsiveness to user demands. The paper also discusses theoretical ramifications, covering challenges such as scalability and processing requirements as well as gains in relevance. By highlighting the benefits of unified ranking models, it opens new avenues for contextual optimization in recommendation systems and search engines, paving the way for improved user experiences across a range of search settings.
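The score-combination idea behind such a hybrid ranker can be sketched as a context-weighted blend. This is not the paper's Contextual Rank formula (which is not given in the abstract); the linear interpolation, the weights, and the document scores below are all illustrative assumptions.

```python
# Hedged sketch: blend a normalized lexical (BM25-style) score with a
# neural relevance score, where the context-recognition mechanism sets
# how much weight the neural model gets for this query.
def fuse(lexical, neural, context_weight):
    """context_weight in [0, 1]: higher for conversational/natural-language
    queries, lower for short keyword lookups (assumed heuristic)."""
    return (1 - context_weight) * lexical + context_weight * neural

# (normalized_bm25, neural_score) per document -- made-up numbers
docs = {"d1": (2.1, 0.40), "d2": (1.2, 0.95)}

# A keyword-style query leans on BM25; a natural-language query on BERT.
keyword_rank = sorted(docs, key=lambda d: fuse(*docs[d], 0.2), reverse=True)
natural_rank = sorted(docs, key=lambda d: fuse(*docs[d], 0.9), reverse=True)
print(keyword_rank, natural_rank)  # ['d1', 'd2'] ['d2', 'd1']
```

The same two documents swap positions purely because the context weight changes, which is the behavior the context-recognition mechanism is meant to produce.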
Authors - Tushar Kulkarni, Pradyumna Khadilkar, Behara Roshan Kumar, Rupesh Jaiswal Abstract - In the modern era of rapid Internet access, it is essential to safeguard electronic devices and the sensitive data they contain from cybercriminals, who constantly find new ways to exploit users by seeking holes in the system and manipulating them. Globally, the average cost of a data breach in 2024 is expected to exceed $4 million, an increase of ten percent from the previous year. A better intrusion detection system capable of handling both legacy and zero-day threats is needed. To address this, we reviewed several relevant publications, most of which were published in the last five years. This has enabled us to compile the latest techniques and breakthroughs. Although NSL-KDD, CICIDS-2017, and UNSW-NB15 received the most attention, other datasets were included. The performance of various intrusion detection systems has been compared in the literature that has been cited. We have proposed a novel methodology known as hybrid intrusion detection system (Hybrid-IDS), which combines both signature- and anomaly-based IDS.
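The complementary roles of the two detector types in a hybrid IDS can be shown with a toy sketch. The signature set, the traffic statistics, and the 3-sigma rule below are invented for illustration; the actual Hybrid-IDS design is described in the paper itself.

```python
# Assumed signature set and baseline traffic statistics (illustrative)
SIGNATURES = {"sql_injection", "known_worm_payload"}
BASELINE_MEAN, BASELINE_STD = 120.0, 15.0   # e.g. bytes-per-packet stats

def classify(packet):
    """Signature check catches known (legacy) threats; the anomaly check
    flags traffic that deviates from the learned baseline (zero-day)."""
    if packet["pattern"] in SIGNATURES:
        return "alert:signature"
    z = abs(packet["size"] - BASELINE_MEAN) / BASELINE_STD
    if z > 3.0:                              # 3-sigma deviation rule (assumed)
        return "alert:anomaly"
    return "benign"

print(classify({"pattern": "sql_injection", "size": 118}))  # alert:signature
print(classify({"pattern": "unknown", "size": 400}))        # alert:anomaly
print(classify({"pattern": "unknown", "size": 125}))        # benign
```

The second case is the motivation for hybridization: no signature matches, yet the anomaly path still raises an alert.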
Authors - Vaishali Rajput, Swapnil Patil, Diksha Shingne, Uzair Tajmat, Yashodip Undre, Urja Wagh Abstract - With the rapid advancements in artificial intelligence (AI) and machine learning (ML) within the Ed-Tech sector, our project explores the development of an Adaptive Learning Platform aimed at enhancing student comprehension and engagement. The platform is specifically designed to cater to diverse learning styles by delivering personalized learning experiences that adapt to each individual’s pace, needs, and preferences. Leveraging AI-driven features, the platform ensures that students achieve mastery of core concepts. By identifying patterns in user interactions, such as repeated engagement with specific video segments or prolonged pauses, the system recognizes areas where learners may struggle and responds with customized explanations powered by GPT technology to clarify complex concepts in text form. Upon successful course completion, the system generates certificates as formal recognition of the user's mastery, providing tangible proof of their progress and achievement. This innovative approach not only addresses the inherent limitations in traditional e-learning platforms by fostering a dynamic and responsive learning environment but also sets a new benchmark in personalized education technology. By incorporating cutting-edge AI and ML, this platform promises to significantly enhance student learning efficiency, knowledge retention, and overall academic performance, offering a transformative shift in how education is delivered and experienced.
Authors - Sara Umalkar, Aditya Toshniwal, Varad Vanga, Siddhesh Upasani, Vedant Joshi, Sanchit Joshi Abstract - This paper surveys the integration of an Online Judge System (OJS) within a Learning Management System (LMS) for engineering education, focusing on its role in enhancing programming skills and student engagement. The OJS automates the evaluation of coding assignments, providing real-time feedback, scalability and secure code execution through containerization and sandboxing techniques. The LMS, equipped with interactive modules and gamification, uses the OJS to assess algorithmic solutions against predefined test cases, supporting both learning and competitive programming. The study explores key features such as automated grading, performance evaluation and robust security measures to ensure fairness. By analyzing existing OJS platforms and their applications, this paper highlights the effectiveness of such systems in fostering problem-solving skills and preparing students for industry challenges. The survey also identifies emerging trends and opportunities for improving the OJS in educational settings.
Authors - Ruchita Borikar, Sakshi Thorat, A.S. Ingole, U.A. Kandare Abstract - Credit card fraud remains a significant issue in the financial industry, with increasing numbers of transactions processed online and through digital platforms. Traditional fraud detection systems, which rely on predefined rules and manual analysis, are no longer adequate against the growing complexity and scale of fraudulent activity. In response, machine learning (ML) has emerged as an effective solution for detecting fraud in real time, offering the ability to analyze large datasets and recognize patterns that may indicate suspicious behavior. This project builds a fraud detection system using machine learning methods, based on a dataset sourced from Kaggle and guided by an IEEE paper. The project involves several key stages, starting with raw data preprocessing and feature selection, followed by training and evaluating models using Decision Trees, Logistic Regression, Random Forest, and Support Vector Machines (SVM). Additionally, the models are evaluated through 5-fold cross-validation to ensure robustness. The system not only enhances fraud detection accuracy but also minimizes false positives, thereby improving overall efficiency.
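The 5-fold cross-validation protocol mentioned above can be illustrated with a minimal stdlib-only sketch. The classifier here is a toy single-feature threshold rule on synthetic data, standing in for the Decision Tree / Logistic Regression / Random Forest / SVM models the project actually trains.

```python
import random

random.seed(0)
# Synthetic two-class data: one "feature" per row (stand-in for real features)
data = [(random.gauss(1.0, 0.3), 0) for _ in range(100)] + \
       [(random.gauss(2.0, 0.3), 1) for _ in range(100)]
random.shuffle(data)

def train(rows):
    """Toy 'fit': threshold halfway between the two class means."""
    m0 = sum(x for x, y in rows if y == 0) / sum(1 for _, y in rows if y == 0)
    m1 = sum(x for x, y in rows if y == 1) / sum(1 for _, y in rows if y == 1)
    return (m0 + m1) / 2

def accuracy(threshold, rows):
    return sum((x > threshold) == bool(y) for x, y in rows) / len(rows)

# 5-fold cross-validation: each fold serves once as the held-out test set
k = 5
fold = len(data) // k
scores = []
for i in range(k):
    test_rows = data[i * fold:(i + 1) * fold]
    train_rows = data[:i * fold] + data[(i + 1) * fold:]
    scores.append(accuracy(train(train_rows), test_rows))
mean_acc = sum(scores) / k
print(f"5-fold accuracies: {[round(s, 2) for s in scores]}, mean {mean_acc:.2f}")
```

The point of the protocol is that every sample is tested exactly once on a model that never saw it during training, so the mean accuracy is a less optimistic estimate than a single train/test split.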
Authors - Md Imran Alam, Swagatika Sahoo, Angshuman Jana Abstract - Wildlife is a crucial part of our environment that must be protected to preserve various animal species and their habitats. Effective wildlife conservation faces numerous challenges, including securing funding, preventing illegal wildlife trade, protecting animals from unnecessary harm, ensuring proper treatment, and managing shelters for domesticated animals. Current solutions to these challenges are often centralized, making them vulnerable to corruption and single points of failure. The adoption of blockchain technology to wildlife protection provides secure tracking of animal medical records, transparent management of funding transactions, and real-time alerts to authorities regarding potential threats, all automated through smart contracts. This paper explores the theoretical and practical benefits of integrating blockchain into animal welfare systems. By addressing key issues like illegal trade and medical record management, blockchain technology can significantly improve the security and effectiveness of wildlife conservation efforts.
Authors - Vaishali S. Pawar, Mukesh A. Zaveri, Radhika P. Chandwadkar, Varsha H. Patil Abstract - Pattern recognition involves identifying specific patterns or features within given data. Social network analysis, fraud detection, biological and medical networks, recommendation systems, telecommunication networks, traffic and transportation networks, computer vision and image processing, and natural language processing (NLP) are among its constantly expanding applications. Graphs are a potent model used across many fields of computer science and technology. This study presents a graph-database-based technique for graphical symbol identification. The proposed method employs graph-based clustering of the graph database, which markedly decreases the computational complexity of graph matching. The algorithm is assessed on a substantial number of hand-drawn input images, and the results indicate that it surpasses previous algorithms.
Authors - Aatish Aher, Chinmay Sanjay Lonkar, Ganak Bangard, Sanjay T. Gandhe Abstract - Bluetooth Low Energy (BLE) technology is increasingly recognized as an effective indoor navigation solution, particularly in areas where GPS cannot function. This review focuses on BLE-based systems tailored for visually impaired users, examining their advantages and limitations. It compares various indoor localization methods such as Wi-Fi, BLE, and image processing in terms of accuracy, efficiency, and deployment simplicity. BLE beacons are highlighted for their low power consumption, adaptability, and cost-effectiveness, making them suitable for large-scale, real-time navigation. To enhance accessibility, the review discusses integrating assistive features like audio navigation using BLE signals paired with voice assistants, ensuring hands-free operation. Additionally, it covers technologies such as haptic feedback and obstacle detection, which provide non-verbal cues to alert users about obstacles or landmarks. The system architecture is explored, focusing on app integration, user-friendly interfaces for visually impaired users, and cloud management for delivering real-time updates. By identifying research gaps, this review suggests directions for future development of BLE navigation systems aimed at enhancing the independence and mobility of visually impaired users in indoor settings.
Authors - Nivedita Shimbre, Prema Sahane, Shrutika Amzire, Arya Ganorkar, Pavan Kulkarni, Pawan Bondre Abstract - As India moves toward digitalisation, farmers also need to become smart. They need a system that recommends crops, crop varieties, and proper scheduling methods for increasing yield. Selecting the best crop varieties for a given area requires factors such as soil moisture, fertility, and climatic conditions. Our system suggests crops and varieties on the basis of the historical record of crop production and the time each crop requires. It also focuses on recommending suitable fertilizers for crops, along with the schedule and quantity of fertilizer to be used. The system uses machine learning algorithms such as Support Vector Machine, Random Forest Regressor, Gradient Boosting, and Linear Regression to recommend the best crops, varieties, and fertilizers, and it provides automatic irrigation management using IoT.
Authors - Jay Bodra, Anshuman Prajapati, Priyanka Patel Abstract - India is one of the leading agricultural countries in the world, and the nation's economy depends heavily on agriculture. For good crop yield, prediction of precipitation is necessary to increase agricultural output and ensure a supply of food and water to maintain public health. To reduce the issue of drought and floods occurring in the nation, wise use of rainfall water should be planned for and implemented. Numerous studies have been carried out utilizing data mining and machine learning approaches on environmental datasets from various nations in order to forecast rainfall. This study's primary goal is to pinpoint the amount of rainfall in several regions of India in the past hundred years and apply machine learning techniques to forecast the amount of rain that will fall in a particular month and year in a given region. The dataset was collected from the government site of the rainfall database for performing machine learning techniques. The Random Forest model's ensemble approach, robustness to noise, ability to handle nonlinear relationships, feature importance analysis, scalability, and tuning flexibility make it a particularly effective choice for rainfall prediction in this project. Its versatility and performance make it a valuable asset for providing accurate and reliable rainfall forecasts to support decision-making in various sectors, such as agriculture, water resource management, and disaster preparedness.
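The ensemble idea behind the Random Forest approach described above can be illustrated with a stdlib-only toy: bagging many weak learners (here, one-split decision stumps on a bootstrap sample) and averaging their predictions. The seasonal rainfall data is synthetic, and a real pipeline would use a full Random Forest on the historical government dataset rather than stumps.

```python
import random

random.seed(1)
# Synthetic (month_index, rainfall_mm): higher rainfall in monsoon months 6-9
data = [(m, 50 + (150 if 6 <= m % 12 <= 9 else 0) + random.gauss(0, 10))
        for m in range(240)]

def fit_stump(rows):
    """Pick the single month-of-year split minimizing squared error."""
    best = None
    for cut in range(1, 12):
        left = [y for x, y in rows if x % 12 < cut]
        right = [y for x, y in rows if x % 12 >= cut]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, cut, ml, mr)
    _, cut, ml, mr = best
    return lambda x: ml if x % 12 < cut else mr

# Bagging: each stump sees a bootstrap resample of the data
forest = [fit_stump(random.choices(data, k=len(data))) for _ in range(25)]

def predict(x):
    return sum(tree(x) for tree in forest) / len(forest)

print(round(predict(7), 1), round(predict(1), 1))  # monsoon vs dry month
```

Averaging over bootstrap resamples is what gives the ensemble its robustness to noise, one of the properties the abstract cites in favor of Random Forest.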
Authors - Mangesh Salunke, Tilak Shah, Vishal Bhokre, R. Sreemathy Abstract - Chatbots, also known as conversational agents, are interactive software systems that respond to users’ queries using artificial intelligence. While traditional machine-learning chatbots have shown promise, LLM-powered chatbots offer more natural and relevant conversations, enhancing the user experience. The rise of OpenAI’s ChatGPT, Google’s Gemini, LangChain, and similar tools has widened the horizon of chatbot applications to almost every sector, including education, healthcare, banking, entertainment, e-commerce, and telecommunications. The main objective of this comprehensive study is to explore and analyze current advancements in chatbot development using different artificial intelligence techniques. This survey examines key trends in chatbot development, the components and techniques used, and the evaluation metrics employed to measure performance, including accuracy, BLEU, ROUGE, and relevancy. The results suggest that LLM-powered chatbots facilitate more natural and contextually appropriate conversations than traditional machine-learning models, leading to a marked enhancement in user experience. By synthesizing insights from existing research, we aim to provide a comprehensive understanding of RAG-based chatbot technology.
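Two of the overlap metrics named above can be sketched in a few lines. This is a deliberate simplification: real BLEU uses n-gram precisions up to 4-grams plus a brevity penalty, and ROUGE is a suite of measures; the single-reference unigram versions below only convey the core counting idea.

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """BLEU-1-style clipped precision: matched candidate words / candidate length."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum(min(c, ref[w]) for w, c in cand.items())
    return overlap / max(1, sum(cand.values()))

def rouge1_recall(candidate, reference):
    """ROUGE-1 recall: matched reference words / reference length."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum(min(r, cand[w]) for w, r in ref.items())
    return overlap / max(1, sum(ref.values()))

ref = "the bot answers the question"
hyp = "the bot answers a question"
print(unigram_precision(hyp, ref))  # 4/5 = 0.8
print(rouge1_recall(hyp, ref))      # 4/5 = 0.8
```

Precision asks how much of the chatbot's reply is supported by the reference; recall asks how much of the reference the reply covered, which is why both are reported in chatbot evaluations.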
Authors - Aditya Bhabal, Aditi Bharimalla, Shruti Balankhe, Vaibhav Chavan, Vaibhav Narawade Abstract - The rapid growth of online food delivery (OFD) businesses, driven by technological advancement and changing consumer behavior, has made customer reviews essential for improving service quality, forecasting demand, and raising customer satisfaction. The sheer volume of unstructured data, however, makes conventional analysis methods ineffective. This review therefore draws insights from diverse studies applying deep learning, reinforcement learning, and ensemble learning to customer reviews of food delivery platforms. Through sentiment analysis, demand forecasting, dynamic order recommendation, and personalized marketing, these studies demonstrate how machine learning can improve operational efficiency. The review also surveys challenges around data imbalance, scalability, and sustainability, pointing to further research on OFD platforms' capabilities for optimized, personalized services that account for environmental and social impacts.
Authors - Manav Bagthaliya, Madhav Desai, Priyanka Patel Abstract - This paper introduces a black-and-white image colorization model based on deep learning techniques and OpenCV, combining available open-source toolkits. The model works in the Lab color space: a convolutional neural network (CNN) processes the grayscale input (the luminance, or L, channel) and predicts the chromatic values (the a and b channels). The colorized image is then generated by merging the predicted channels with the original L channel. This approach uses deep learning to enhance the quality and fidelity of the reconstructed images. Experimental results verify that the proposed method is both valid and flexible and can vividly restore color to monochrome photographs in domains such as historical photo restoration and artistic creation.
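The channel-merging step can be sketched with NumPy arrays standing in for the image buffers. The CNN's a/b prediction is stubbed with a neutral-chroma placeholder (an assumption for illustration); in the full pipeline OpenCV's `cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)` would convert the merged Lab result back to a displayable color image.

```python
import numpy as np

h, w = 4, 4
# Luminance channel in OpenCV's float Lab convention, L in [0, 100]
L = (np.random.rand(h, w) * 100.0).astype(np.float32)

def predict_ab(L_channel):
    """Stand-in for the CNN: returns neutral chroma (a = b = 0)."""
    return np.zeros((*L_channel.shape, 2), dtype=np.float32)

ab = predict_ab(L)                                   # H x W x 2 chroma
lab = np.concatenate([L[..., None], ab], axis=-1)    # H x W x 3 Lab image
print(lab.shape)  # (4, 4, 3)
```

Because the original L channel is passed through untouched, the colorized output keeps exactly the luminance structure of the input photograph; only the chroma is synthesized.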
Authors - Kavisruthi K, Anet Reji, Adhrushta V, Sangeetha Gunasekar Abstract - The aim of this study is to identify and analyze the factors that contribute to positive emotions expressed by visitors about the Taj Mahal, a top tourist destination in India, based on user-generated content. Reviews were scraped from TripAdvisor.com, from which 77 themes were identified using BERTopic modeling, reflecting both positive and negative aspects of the visitor experience. These 77 themes were grouped into 9 broader themes (value for money, history, time of visit, architecture, location, facility, tours, memories, and crowd) that influence the positive and negative emotions of visitors to heritage sites. Logistic regression is used to study the impact of these variables on customer satisfaction. Except for the variables location and memories, all variables significantly affect whether visitors give a star rating. These insights help tourist destination managers develop better strategies for enhancing tourist satisfaction.
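The second stage of the analysis, regressing a binary satisfaction outcome on theme indicators, can be sketched with a small NumPy-only logistic regression. The theme names, effect sizes, and data are synthetic stand-ins, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
# Binary theme indicators per review: [architecture, crowd] (assumed themes)
X = rng.integers(0, 2, size=(n, 2)).astype(float)
# Assumed "true" effects: architecture raises, crowd lowers, satisfaction
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] - 0.2
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

# Fit by plain gradient ascent on the log-likelihood
Xb = np.column_stack([np.ones(n), X])   # prepend intercept column
w = np.zeros(3)                          # [bias, w_architecture, w_crowd]
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ w))
    w += 0.1 * Xb.T @ (y - p) / n
print(np.round(w, 2))  # bias, architecture coefficient (+), crowd coefficient (-)
```

The signs of the fitted coefficients are what the study interprets: a positive coefficient means mentions of that theme raise the odds of a positive rating.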
Authors - Pushpavati V. Kanaje, Pratik Raj Dahal, Shivam Anant Ghorpade, Aditya Jaikumar Sharma, Atharva Dinesh Phatak, Parth Ravindra Sawant Abstract - In this paper, we explore the various technologies that can be implemented to streamline the process of obtaining medication and to make it a patient-centric experience, examining the range of solutions and methods available. These advancements will be beneficial because they help remove the discrepancies that persist in conventional methods. New technologies open numerous doors for the medical sector to improve substantially and to advance beyond what existing models can do. Techniques such as telemedicine, digital health records, and mobile health apps have all been put to the test in recent years, and with the surge of new methods, these existing techniques can be further enhanced and made to work far more efficiently.
Authors - Anil R. Surve, Vijay R. Ghorpade, Ganesh S. Sagade Abstract - Service-Oriented Architecture (SOA) consists of autonomous, standardized, and self-describing components known as services that communicate with each other and are provisioned according to agreed rules. The architecture is widely used worldwide and has been validated for dynamic, automatic, and self-configuring distributed systems, such as automation systems. This paper explores the SOA paradigm for active spaces in an IoT environment, with devices realized through the Devices Profile for Web Services (DPWS): context information is acquired, processed, and submitted to a composition engine in order to provision services that suit the context. Profiled users in the context are identified using RFID tags, and composition plans are then created for every user. A six-phase composition process is employed to complete this task. The DPWSim simulator is deployed to illustrate and test the system, the WS4D explorer is used to scan the available devices in the network, and an IoT platform comprising various sensors and IoT environments provides context-information notifications.
Authors - Nilay Vaidya, Kamini Solanki, Krishna Kant, Jay Panchal Abstract - With the seamless integration of state-of-the-art technology and customization of learning experiences, Education 5.0 signifies a transformative shift in the educational landscape. This paradigm leverages big data analytics, virtual reality, and artificial intelligence (AI) to personalize learning paths and facilitate the development of critical 21st-century skills. Unlike outdated one-size-fits-all methods, Education 5.0 emphasizes personalized learning, enhancing student engagement and productivity. This transformation calls for collaborative efforts between educators, students, and industry stakeholders, ensuring education is relevant and future-ready. This chapter highlights the potential of Education 5.0 to foster flexible, inclusive, and progressive learning environments. Blended learning models, which combine AI technology with traditional teaching methods, demonstrate how tailored learning paths can improve outcomes. AI-based solutions offer engaging learning experiences through gamification, virtual reality, and simulations while streamlining administrative tasks like attendance and grading. By providing data-driven insights, AI helps teachers identify growth areas and adjust strategies in real-time to improve student success. This approach enhances accessibility, offering diverse learning methods, and ensuring tools are available for students with impairments. Scalable and adaptable, blended learning encourages lifelong learning, collaborative skills development, and peer interaction through AI powered platforms.
Authors - Jay Madane, Aniket Jaiswal, Sejal Balkhande, Kirti Deshpande Abstract - Using machine learning (ML) and artificial intelligence (AI) methods, this article proposes the development of 360-degree feedback software for the Indian government. The system's objective is to automatically analyze and classify news articles from local media sources according to department and sentiment. Press Information Bureau (PIB) officials will receive real-time notifications concerning unfavorable stories, and stakeholders will be able to efficiently visualize and filter news coverage through an intuitive dashboard. With the use of this program, the Indian government would be able to make more informed decisions by increasing the effectiveness of media monitoring.
Authors - B Shilpa, Shaik Abdul Nabi, G Sudha Reddy, Puranam Revanth Kumar, Thayyaba Khatoon Mohammad Abstract - The Internet of Medical Things (IoMT) has made it possible for digital devices to collect, infer, and disseminate health-related data through the use of cloud computing. Securing data for use in health care poses unique challenges, and various studies have been carried out with the aim of securing healthcare data. The best way to protect sensitive data is to encrypt it so that no one can decipher it. Conventional encryption methods are inapplicable to e-health data due to capacity, redundancy, and data-size restrictions, particularly when patient data is transmitted across unsecured channels. Due to the inherent dangers of data loss and confidentiality breaches, patients may no longer be able to fully protect the privacy of their data contents. Researchers have recognized these security threats and proposed various data encryption methods to address the problem. As a result, the area of computer security is deeply concerned with finding solutions to the security and privacy issues associated with IoMT. This research presents an intrusion detection system (IDS) for IoMT that utilizes the machine learning techniques Decision Tree (DT), Naive Bayes (NB), and K-Nearest Neighbor (KNN). Feature scaling using minimum-maximum (min-max) normalization was performed on the CIC IoMT 2024 dataset in a way that prevents information leakage into the test data; the effectiveness of the output was then evaluated, ensuring that the scaling process was correctly implemented as the initial step of this approach. Five types of attacks are identified in this dataset: DDoS, DoS, RECON, MQTT, and Spoofing. Principal Component Analysis (PCA) was used to reduce dimensionality in the subsequent stage.
The suggested methods achieve a high detection rate, with an accuracy of 98.2%, specificity of 97.6%, recall of 98.0%, and F1-score of 97.8%, offering a viable option for protecting IoMT devices from attacks.
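As a minimal sketch of the preprocessing chain described above (min-max scaling fit on the training split only to avoid leakage, PCA for dimensionality reduction, then one of the evaluated classifiers, here KNN), the following uses synthetic data in place of the CIC IoMT 2024 dataset; the feature counts, component count, and labels are illustrative assumptions, not the paper's configuration:

```python
# Illustrative IDS preprocessing chain: min-max scaling, PCA, KNN.
# Synthetic data stands in for the CIC IoMT 2024 dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))                 # stand-in traffic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in attack/benign label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fitting the scaler inside the pipeline guarantees that test data never
# influences the scaling parameters (no information leakage).
pipe = Pipeline([
    ("scale", MinMaxScaler()),
    ("pca", PCA(n_components=5)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
pipe.fit(X_tr, y_tr)
acc = pipe.score(X_te, y_te)
print(round(acc, 3))
```

In the paper's setting, the same pipeline would be fit once per classifier (DT, NB, KNN) and scored with accuracy, specificity, recall, and F1.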
Authors - Vinay Nawandar, Sakshi Rokade, Nitin B. Patil Abstract - This paper reviews recent progress in Smart Surveillance Systems, with an emphasis on their roles in crowd control, crime prevention, and behavior monitoring in educational and workplace environments. The adoption of technologies like Machine Learning and Artificial Intelligence, particularly deep learning models such as YOLO (You Only Look Once), has greatly improved the ability to detect and evaluate events in real time. The paper also explores the drawbacks of traditional surveillance systems, including human error and inefficiencies, and how modern AI-driven solutions are addressing these issues. In addition, it discusses key approaches in face detection, behavior analysis, and anomaly detection, along with a comparative evaluation of various algorithms in intelligent surveillance. It also highlights potential energy-saving strategies and future developments in AI-driven surveillance.
Authors - Pradnya Vishwas Chitrao, Pravin Kumar Bhoyar, Rajiv Divekar Abstract - Background: Significant advancements in technology, the economy, and society have ushered in the twenty-first century. Advances in IT will significantly expand opportunities in energy, commerce, production, transportation, education, and health, and new learning opportunities have multiplied due to advancements in information and communication technology. Numerous educational institutions, including colleges, universities, and organizations, employ online learning to augment students' knowledge, skills, and capacities in an efficient and participatory way. By helping students grasp fundamental ideas in topics like physics, arithmetic, and reading, technology can broaden the scope of what they learn. Use of Technology by Businesses to Enhance Education in India: Given that both parents work these days, there is great demand for resources that meet the needs of the student body as well as of parents. A great deal of computer and internet technology has become available in the twenty-first century, and it is being employed in the creation of educational resources for students. Zucate is one such business, started by Dr. Moitreyee Goswami and Ms. Roli Pandey. Objective: The company Zucate, which aims to establish "a teacher-learner-parent ecosystem with learner at the center," is the subject of this paper (https://www.f6s.com/zucate). The aim of this study is to investigate how two women entrepreneurs used technology economically to help students, particularly school-age children, learn independently beyond school hours. It also aims to study how they survived the pandemic and expanded the enterprise. Research Methodology: The principal research method employed by the researchers involved conducting in-person interviews with the enterprise's founders.
In addition, they referred to papers from reputed databases and other secondary sources. Importance of the Research: The research is significant because it sheds light on how technical knowledge can be utilized to develop pedagogical and learning materials that help children and learners understand ideas and learn on their own, without having to rely on pricey, time-consuming tuition or on expensive, tech-heavy learning resources provided by traditional businesses. It is also important because it helps us understand how women entrepreneurs strategize and use technology to expand their enterprises and help them flourish.
Authors - Sandeep Shinde, Parth Kedari, Atharva Khaire, Shaunak Karvir, Omkar Kumbhar Abstract - Using cutting-edge technologies like Flask, web technology, API rendering, Make It Talk, and machine learning (ML), an AI smart tutor bot is implemented with the goal of giving users an engaging and customized learning experience. The bot uses machine learning techniques to analyze responses and generates quiz-style questions with multiple-choice options and extended answers, allowing for quick feedback. Additionally, it has an interview mode in which the user engages with an AI avatar that reads their body language and facial expressions. Using written material and specialized alphabets, the AI avatar is dynamically trained, gaining comprehensive knowledge and providing an accurate evaluation of user performance. The article details the system architecture, how the different technologies were integrated, and the process for training the avatar and gauging user response. Through user feedback and experimental trials, the AI Smart Tutor Bot's performance is assessed, showcasing its potential as an advanced teaching tool that can adapt to each student's unique learning needs while boosting comprehension and engagement.
Authors - Bhagyashree D. Lambture, Madhavi A. Pradhan Abstract - This research presents a comparative examination of machine learning (ML) algorithms for the qualitative evaluation of medicinal plants, incorporating information from sources such as phytochemical qualities, geographic information, environmental conditions, and traditional medicinal knowledge. The study investigates the effectiveness of various machine learning algorithms in categorizing and forecasting the medicinal value of plants based on multi-modal data. To determine which method is most effective for evaluating complicated and diverse datasets, a full evaluation is carried out using well-known machine learning models, including decision trees, random forests, support vector machines, and deep learning algorithms. Key criteria such as accuracy, precision, recall, F1-score, and computational efficiency are used to evaluate the performance of each method. To better understand the role each data source plays in determining the medicinal potential of plants, feature importance and interpretability are also investigated. The findings of this comparative analysis offer vital insights into the usefulness of machine learning for medicinal plant assessment and provide a basis for the ongoing development of AI-driven tools in pharmacological research and plant-based drug discovery. Contributing to the expanding fields of computational botany and natural product science, this study aims to improve the precision and effectiveness of medicinal plant evaluation.
Authors - Azhar Abbas, Farha Abstract - Fraudulent claims in the insurance industry lead to significant financial losses and negatively affect both policyholders and insurance firms. Machine learning is revolutionizing fraud detection, going beyond ordinary rule-based systems by automating and optimizing detection processes. The current work proposes a novel hybrid approach that combines supervised and unsupervised machine learning techniques to detect insurance fraud accurately and robustly. The framework includes three primary models, Decision Tree, Random Forest, and a Voting Classifier, which improve detection performance on real-world datasets. In addition, an embedding-based model interprets sequential claims data, and a statistically validated network is used to detect patterns of collusion and fraud among related entities. Extensive experimentation on large-scale motor and general insurance datasets shows that the proposed hybrid model achieved an accuracy of 89.60%. Hyperparameter tuning and data preprocessing were used to further refine the model's performance, counterbalancing issues arising from class imbalance and from the complexity and variation of fraud types. The methodology outperformed existing models, proving better at identifying rare, sophisticated cases of fraud. The practical implications of deploying machine learning models in the insurance sector are also discussed, covering best practices for data governance, model interpretability, and stakeholder trust. In future work, this approach will be improved by incorporating real-time analytics for quicker detection, enhancing interpretability features, and adapting the model to emerging fraud patterns in evolving data environments.
Authors - Poonam Yadav, Meenu Vijarania, Meenakshi Malik, Neha Chhabra, Ganesh Kumar Dixit Abstract - Parkinson's disease is an aging-associated degenerative brain illness that results in the degeneration of certain brain regions. An early medical diagnosis of Parkinson's disease (PD) is difficult for medical professionals to make with precision. Magnetic Resonance Imaging (MRI) and single-photon emission computed tomography (SPECT) are two medical imaging strategies that can be used to non-invasively and safely assess quantitative aspects of brain health. Strong machine learning and deep learning methods, along with efficient medical imaging techniques for evaluating neurological wellness, are necessary for the accurate identification of PD. In this study, we have used a dataset of MRI images and evaluated three deep learning models, ResNet50, MobileNetV2, and InceptionV3, for early diagnosis of PD. Of the three, MobileNetV2 demonstrated superior accuracy in training, testing, and validation, with rates of 99%, 94%, and 96%, respectively. With its effectiveness and precision, MobileNetV2 shows strong potential for future PD identification from MRI scans. By tackling the remaining issues and investigating the future directions outlined here, we may further advance the development of dependable and easily accessible AI-powered solutions for early diagnosis and better patient care.
Authors - Aye Thiri Nyunt, Nishi Vora, Devanshi Vaghela, Brij Kotak, Ravi Chauhan, Kirtirajsinh Zala Abstract - This paper presents an AI and machine learning based dualistic Gesture-to-Speech and Speech-to-Gesture framework. The core of this initiative is to enable machines and humans to converse with each other by translating physical body movements into intelligible speech and vice versa. We used deep learning models, namely Convolutional Neural Networks (CNNs), to train our system on a dataset consisting of human gestural movements and the corresponding speech patterns. The Gesture-to-Speech module performs real-time gesture recognition and interpretation using computer vision, translating gestures into speech output containing the words and phrases conveyed by the gestures. The Speech-to-Gesture module, on the other hand, takes speech as input and produces context-related gestures, the main purpose of which is to improve user interaction and experience. The system was tested in multiple applications, including sign language and webcam-based settings. Further research will extend the flexibility of the system to various languages, cultural backgrounds, and individual gesture styles, ultimately enabling a high level of customization. We designed the CNN architecture for real-time gesture recognition and performed data preprocessing to increase accuracy across different types of gestures. We built Gesture-to-Speech translation using an LSTM, then added a Text-to-Speech engine for natural-sounding output. We then developed Speech-to-Gesture and refined the gestures through a CNN-based network to ensure fluid transitions. Everything was coordinated so that gestures and speech remain synchronous for natural real-time interaction.
We also describe how to integrate, test, and further optimize the models with dropout and batch normalization for higher performance.
Authors - Varun Maniappan, Praghaadeesh R, Bharathi Mohan G, Prasanna Kumar R Abstract - This paper constitutes a comprehensive review of how language models have changed, focusing specifically on the trend toward smaller and more efficient models rather than large, resource-hungry ones. We discuss technological progress in language models as applied to attention mechanisms, positional embeddings, and architectural enhancements. The bottleneck of LLMs has been their high computational requirements, which has kept them from becoming more widely used tools. In this paper, we outline how some very recent innovations, notably Flash Attention and small language models (SLMs), address these limitations, paying special attention to the Mamba architecture, which uses state-space models. Moreover, we describe the emerging trend of open-source language models, reviewing major technology companies' efforts such as Microsoft's Phi and Google's Gemma series. We trace the evolution from early transformer models to current open-source implementations and report on future work to be done in making AI more accessible and efficient. Our analysis shows how such advances democratize AI technology while maintaining high performance standards.
Authors - S.K. Manjula Shree, Shreya Vytla, J. Santharoopan, Harisudha Kuresan, A. Anilet Bala, D. Vijayalakshmi Abstract - The goal is to use a Random Forest classifier to categorize future price movements as "up" or "down" in order to forecast stock market trends. To guide investing strategies, this model examines pertinent attributes and historical stock data. The study compares the effectiveness of Logistic Regression, Support Vector Machines (SVM), and the Random Forest classifier in forecasting stock market movements. As an ensemble approach, Random Forest is very resilient under erratic market conditions since it excels at handling noisy, complex data and capturing non-linear patterns. SVM performs best on smaller, more structured datasets, though noise and non-linearity can be problematic. Despite its simplicity and interpretability, Logistic Regression is constrained by its linear character and struggles to account for the dynamic, non-linear behavior of stock prices. In recall-focused tasks, Logistic Regression is helpful because it performs well at identifying true positives (such as preventing missed opportunities in stock predictions). SVM's reliance on kernel functions makes it computationally expensive, but it can be helpful when handling smaller datasets with clear patterns where accuracy is needed. All things considered, Random Forest offers the best results, with around 99% accuracy, especially for difficult stock market prediction tasks.
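To make the "up"/"down" framing concrete, here is a hedged sketch, not the authors' code, that casts trend prediction as binary classification with a Random Forest; the synthetic random-walk returns, window length, and chronological split are illustrative assumptions:

```python
# Framing trend prediction as binary classification: features are the
# previous `window` daily returns, the label is whether the next move
# is "up". Synthetic returns stand in for real stock data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, size=1000)     # stand-in daily returns

window = 5
X = np.array([returns[i - window:i] for i in range(window, len(returns))])
y = (returns[window:] > 0).astype(int)          # 1 = "up", 0 = "down"

# Chronological split: train on the past, test on the future.
split = int(0.7 * len(X))
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:split], y[:split])
acc = clf.score(X[split:], y[split:])
print(round(acc, 3))
```

On i.i.d. synthetic returns the score hovers near chance (0.5), which illustrates why the choice of informative attributes is central to the approach the abstract describes.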
Authors - Umakant Singh, Ankur Khare Abstract - This paper aims to find the optimal model for stock price forecasting. In examining the different approaches and aspects that need to be considered, the Decision Tree and Gradient Boosted Trees models are explored, and a more practical approach for making more accurate stock predictions is proposed. A dataset of stock market values from the prior year is considered first and optimized for analysis through pre-processing; the pre-processing of the raw dataset is therefore also a main emphasis of this work. Decision Tree and Gradient Boosted Tree models are then applied to the pre-processed dataset and the results are analyzed. In addition, the paper addresses issues related to the use of forecasting systems in real situations and the accuracy of the predicted values. It also presents a machine learning model for predicting stock stability in financial markets. Successful stock price forecasting greatly benefits stock market organizations and provides real solutions to problems faced by investors.
Authors - Aneesh Kaleru, Chaitanya Neredumalli, Mrudul Reddy, Ramakrishna Kolikipogu Abstract - One major risk factor that contributes to traffic accidents globally is poor visibility in foggy conditions. Fog seriously threatens drivers because it weakens contrast, hides important objects, and makes lane markings almost invisible. This paper summarizes recent developments in visibility enhancement methods for foggy conditions, with a focus on image defogging combined with object detection and lane assistance. We analyze the application of models such as Conditional Generative Adversarial Networks (cGANs), Single Shot Multibox Detectors (SSD), the All-in-One Defogging Network (AODNet), and You Only Look Once (YOLO) from the perspective of deep learning and computer vision. These methods have the potential to increase driver safety in inclement weather by identifying obstacles, improving visibility, and offering lane guidance. The review also covers the limitations of these solutions, such as computational demands and real-time processing requirements. Our goal is to provide researchers and practitioners with a comprehensive understanding of current methods and their uses, enabling the development of effective visibility enhancement systems that can prevent accidents and save lives.
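Many learning-based defoggers, AODNet in particular, build on the atmospheric scattering model I = J·t + A·(1 − t), where I is the foggy image, J the clear scene, t the transmission, and A the atmospheric light. A minimal numpy sketch, assuming t and A are known constants (in practice the network estimates them per pixel), shows that the model is exactly invertible:

```python
# Atmospheric scattering model: synthesize fog from a clear scene,
# then invert the model to recover the scene exactly.
import numpy as np

rng = np.random.default_rng(2)
J = rng.uniform(0, 1, size=(4, 4, 3))   # clear scene (toy RGB image)
t = 0.6                                  # transmission (uniform here)
A = 0.9                                  # atmospheric light

I = J * t + A * (1 - t)                  # foggy observation
J_hat = (I - A * (1 - t)) / t            # invert the scattering model

print(np.allclose(J, J_hat))
```

The hard part in practice is that t and A are unknown; the surveyed deep models earn their keep by estimating these quantities (or an equivalent joint term) directly from the foggy input.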
Authors - Sindhu C, Taruni Mamidipaka, Yoga Sreedhar Reddy Kakanuru, Summia Parveen, Saradha S Abstract - India is a country with a rich ancient historical legacy that has preserved vast cultural and linguistic knowledge through stone inscriptions. Extracting text from ancient stone inscriptions and translating it into a language understandable by everyone is very challenging due to script variations, natural wear, and the unevenly degraded surfaces of stone carvings. Our idea is to build a model that can extract text from stone inscriptions written in the Telugu language and translate it into other Indian local languages. A Region-Based Convolutional Neural Network (R-CNN) model integrated with Tesseract OCR is trained on a custom dataset of 30,000 labelled images of Telugu script, encompassing Achulu (vowels), Hallulu (consonants), and Vathulu. By achieving 96% accuracy in character detection, the model demonstrates significant reliability in recognizing Telugu characters from degraded and complex inscriptions. Data augmentation techniques, including rotations, flips, and shifts, were used to further enhance the model's robustness to the different orientations and environmental conditions encountered in historical artifacts. The text extracted from the image is ultimately translated into Indian local languages using an API-based translation module, enabling seamless interpretation of ancient content. This research contributes a comprehensive and automated solution for cultural preservation, providing a scalable method to digitize inscriptions integral to Telugu heritage and linguistic history and make them accessible to everyone.
Authors - Satya Kiranmai Tadepalli, Sujith Kumar Akkanapelli, Sree Harsha Deshamoni, Pranav Bingi Abstract - This paper analyzes in detail how generative AI and encoder-based architectures are drastically changing video generation from multimodal inputs such as images and text. CNNs, RNNs, and Transformers encode divergent modalities that blend into the seamless synthesis of realistic video sequences. The work draws on generative models such as GANs and VAEs to bridge from static images to video generation, which represents a significant leap forward in video-creation technology. It also details the complexities of multimodal input, balancing temporal coherence with the semantic alignment of what is produced. In this context, the role of encoders is to translate visual and textual information into actionable representations for generating video. The paper surveys recent progress in adopting generative AI and multimodal encoders, discusses the challenges encountered today, and outlines possible future directions, ultimately emphasizing their potential to assist video-related tasks and transform the multimedia and AI communities.
Authors - Vathsal Tammewar, Bharat Sharma, Dharti Sorte, R. Sreemathy Abstract - Accelerated facial aging using GANs has been a key area of interest in generative modeling and facial analysis, offering significant breakthroughs in age progression and regression. This survey conducts an extensive review of GAN-based approaches for accelerated facial aging, emphasizing highly realistic and controllable aging transformations. Many of these methods are applied in forensic investigations, entertainment, and age-invariant facial recognition systems, vividly demonstrating their versatility and practical relevance. While recent breakthroughs hold great promise, several issues remain: achieving high-fidelity transformations that preserve important facial details, reducing biases arising from imbalanced datasets, and maintaining temporal consistency when age progressions or regressions span sequential ages. Computational efficiency and real-time applicability also remain critical areas of focus. This paper probes the strengths, limitations, and open challenges of existing approaches, emphasizing the importance of innovations such as improved loss functions, diverse and representative training datasets, and hybrid architectures. The survey thus synthesizes current progress and outlines future research directions for advancing GAN-based facial aging technologies.
Authors - Khushi Mantri, Abhishek Masne, Shruti Patil, Girish Mundada Abstract - In medical diagnostics, identifying bone fractures is a crucial task that traditionally depends on radiologists interpreting X-ray images. However, human factors like experience or exhaustion can occasionally cause delays or inaccuracies in diagnosis. This research examines the construction of an automated system for bone fracture identification utilizing Convolutional Neural Networks (CNNs), a deep learning method that performs especially well in image processing. Using a labelled dataset of X-ray images, the suggested method can detect fractures efficiently and accurately. Prior to feature extraction by CNN layers, which are trained to distinguish between fractured and non-fractured bones, the images are pre-processed to improve clarity. To assist medical practitioners in making prompt, correct judgments, the final classification aims to increase diagnostic accuracy while decreasing the time needed for analysis. This overview paper, which also examines recent developments in CNN-based medical image classification, discusses the potential of incorporating machine learning into healthcare to lower diagnostic errors and enhance patient outcomes.
Authors - Shradha Jain, Sneha Suman, Insha Khan, Ashwani Kumar, Surbhi Sharma Abstract - With continual advancements in deep learning, the potential misuse of deepfakes is increasing, and their detection is a major area of work. A detection model is trained to recognize patterns in input data, while deepfakes reproduce those patterns in a fabricated way. Sometimes small, intentional changes are added to the data points; these changes are undetectable to humans yet confuse the learning model. Such changes are called adversarial perturbations. Compressive adversarial perturbations aim to make these changes even smaller and harder to detect. The authors explore a sophisticated framework, ComPAD (Compressive Adversarial Perturbations and Detection), which is used to detect adversarial attacks. This paper explores detection strategies and provides a comparative analysis of methods used by different researchers. Various datasets, including UADFV, DeepfakeTIMIT, LFW, FF++, and Deeperforensics, are evaluated to achieve the highest metrics. Methods based on convolutional neural networks, particle swarm optimization, genetic algorithms, and D4 (Disjoint Diffusion Deep Face Detection) are used for detection. The authors also discuss challenges such as the generalization of models to new data, the continuous evolution of adversarial perturbations that enables persistent attacks, and scalability issues for real-time deepfake detection, concluding that such models can significantly improve accuracy, robustness, and generalization.
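As a toy illustration of the adversarial-perturbation idea, not the ComPAD framework itself, the following applies an FGSM-style sign-of-gradient step to a fixed linear classifier; the weights, input, and step size are hypothetical values chosen for the demonstration:

```python
# A small, sign-of-gradient change to the input flips the prediction
# of a fixed linear classifier while staying tiny per feature. Attacks
# on deepfake detectors exploit the same principle against deep nets.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -3.0])        # fixed linear model weights
x = np.array([0.5, 0.5])         # clean input, true label y = 0
y = 0

p = sigmoid(w @ x)               # below 0.5 -> predicted class 0 (correct)

# Gradient of the cross-entropy loss with respect to the input.
grad_x = (p - y) * w
eps = 0.3
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv)       # above 0.5 -> predicted class 1 (flipped)
print(p < 0.5, p_adv > 0.5)
```

Compressive variants aim to shrink the per-feature budget eps further while still flipping the decision, which is what makes them hard to detect.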
Authors - Krisha Zalaria, Jaitej Singh, Priyanka Patel Abstract - The ubiquitous use of mobile phones in modern society has sparked increasing concern in environments where their usage is restricted, such as hospitals, schools, religious sites, and hazardous zones. Mobile phones, although integral to daily life, pose risks such as privacy breaches, interference with sensitive equipment, and even serious safety hazards. In response, this paper investigates the efficacy of various state-of-the-art object detection models for real-time mobile phone detection in restricted areas. We benchmarked YOLOv8, YOLOv9, EfficientDet, Faster R-CNN, and Mask R-CNN to identify optimal solutions balancing speed, accuracy, and adaptability. This study introduces a two-class detection framework to distinguish between individuals texting or talking on the phone, catering to differing levels of restriction. Evaluations using a customized, diverse dataset reveal YOLOv8 and YOLOv9 as superior, achieving high precision and recall, thus positioning these models as effective solutions for scalable, real-time surveillance systems in sensitive environments. Our research contributes significant insights into mobile phone detection, paving the way for enhanced safety and privacy in restricted zones.
Authors - Aniket Gupta, Chris Dsouza, Sarah Pradhan, Amiya Kumar Tripathy, Phiroj Shaikh Abstract - This paper is based on the development of voice chatbots and their configuration to ensure that e-commerce websites meet all customer care requirements. The authors discuss the introduction of natural language processing in an e-commerce company and review recent developments in that area. The research specifically focuses on natural language processing techniques, the steps involved in developing a chatbot, problems encountered during design, and the functions and benefits of voice-based chatbots in e-commerce. The study emphasizes chatbots as tools to support customer service systems. Keywords: Machine Learning, Natural Language Processing, Data Analysis, Customer support system, and CHATBOT.
Authors - Siddharth Lalwani, Abhiram Joshi, Atharva Jagdale, M.V.Munot, R. C. Jaiswal Abstract - Empathetic response generation is a rapidly evolving field focused on developing AI systems capable of recognizing, understanding, and responding to human emotions in a meaningful way. This paper investigates the integration of the Big Five OCEAN personality model with generative AI to generate emotionally relevant, personalized responses tailored to individual users' personality traits. The Big Five model categorizes individuals into five core personality dimensions—Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. By combining this model with advanced generative AI techniques, the system can deliver empathetic responses aligned with users' emotional states and personality profiles. Through the use of various machine learning algorithms, the study demonstrates that incorporating personality traits significantly improves the quality, accuracy, and emotional resonance of AI-generated responses, leading to more effective human-AI interactions.
Authors - Shaga Anoosha, B Seetharamulu Abstract - Brain tumor segmentation is a critical task in medical imaging, essential for accurate diagnosis and treatment planning. Recent advancements in federated learning (FL) and deep learning (DL) offer promising solutions to the challenges posed by traditional centralized learning methods, particularly regarding data privacy and security. This review paper delves into state-of-the-art approaches that combine FL and DL to enhance brain tumor segmentation. Each institution trains a deep learning model, typically a Convolutional Neural Network (CNN) or a specialized architecture like U-Net, on its local dataset. U-Net, particularly effective for image segmentation tasks, consists of an encoder that extracts hierarchical features from MRI scans and a decoder that reconstructs the segmented output, creating a segmentation map outlining tumor boundaries. Instead of sharing raw MRI scans, federated learning allows each institution to share model updates with a central server. The central server aggregates the updates from all participating institutions to create a global model using Federated Averaging, which averages the weights of the local models. The updated global model is then sent back to each institution, which continues training on its local data using this improved model. This iterative process ensures high accuracy, robustness, and privacy preservation, making it a promising approach for collaborative brain tumor detection and segmentation. By combining the strengths of federated learning and deep learning, these state-of-the-art methodologies provide a powerful solution to the challenges posed by traditional centralized models. This integration not only improves segmentation performance but also ensures that sensitive patient data remains secure.
As advancements in this field progress, the collaborative use of these state-of-the-art techniques is poised to significantly enhance diagnostic accuracy and improve patient outcomes in medical imaging.
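The aggregation step described above, Federated Averaging, can be illustrated with a minimal sketch: each institution's layer weights are averaged, optionally weighted by local dataset size. The two-client example is hypothetical and uses toy one-layer "models", not real segmentation networks.

```python
import numpy as np

def federated_average(local_weights, client_sizes=None):
    """Aggregate per-client model weights with Federated Averaging (FedAvg).

    local_weights: one list of layer arrays per institution.
    client_sizes: optional local dataset sizes used to weight the average;
    equal weighting is assumed when omitted.
    """
    n_clients = len(local_weights)
    if client_sizes is None:
        client_sizes = [1] * n_clients
    total = float(sum(client_sizes))
    global_weights = []
    # For each layer, take the size-weighted mean across clients.
    for layer_idx in range(len(local_weights[0])):
        layer = sum(
            (client_sizes[c] / total) * local_weights[c][layer_idx]
            for c in range(n_clients)
        )
        global_weights.append(layer)
    return global_weights

# Two hypothetical institutions, each with a one-layer "model".
client_a = [np.array([1.0, 3.0])]
client_b = [np.array([3.0, 5.0])]
print(federated_average([client_a, client_b])[0])  # → [2. 4.]
```

In a full FL round, this global model would be broadcast back to the institutions for the next iteration of local training.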
Authors - Adarsh Singh Jadon, Rohit Agrawal, Aditya A. Shastri Abstract - This study investigated the efficacy of various deep learning models in performing sentiment analysis on code-mixed Hinglish text, a hybrid language widely used in digital communication. Hinglish presents unique challenges due to its informal nature, frequent code-switching, and complex linguistic structure. This research leverages datasets from the SemEval-2020 Task 9 competition and employs models such as RNN (LSTM), BERT-LSTM, CNN, and a proposed Hybrid LSTM-GRU with 1D-CNN model. The proposed model, combining the strengths of LSTM and GRU units with a 1D-CNN, demonstrated superior performance with an accuracy of 93.21%, precision of 93.57%, and recall of 93.02%, along with sensitivity and specificity of 93.62% and 93.24% respectively. It also achieved an F1 score of 93.44%. We also evaluated the model on other parameters, such as PPV, PNV, RPV, and RNV. This model outperformed existing approaches, including the HF-CSA model from the SemEval-2020 dataset, which achieved an accuracy of 76.18%.
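The metrics reported in abstracts like the one above all derive from a binary confusion matrix. A minimal sketch in plain Python (the counts below are illustrative, not the paper's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Common binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)   # also called positive predictive value (PPV)
    recall = tp / (tp + fn)      # also called sensitivity
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)         # negative predictive value
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "npv": npv, "f1": f1}

# Illustrative counts only.
m = binary_metrics(tp=90, fp=10, tn=85, fn=15)
print(round(m["accuracy"], 3))  # → 0.875
```

Multi-class settings such as the three-way (positive/neutral/negative) sentiment task compute these per class and then average them.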
Authors - Chetana Shravage, Shubhangi Vairagar, Priya Metri, Akanksha Madhukar Pawar, Bhagyashri Dhananjay Dhande, Siddhi Vaibhav Firodiya, Tanmay Pramod Kale Abstract - Emotion recognition has gained significant popularity, driven by its wide range of applications. Emotion recognition methods rely on various human cues such as speech, facial expressions, and body postures or gestures, and often combine several cues to achieve better accuracy. This paper explores different methods that use these human cues.
Authors - Ravin Kumar Abstract - This paper introduces the Adaptive Base Representation (ABR) Theorem and proposes a novel number system that offers a structured alternative to the binary number system for digital computers. The ABR number system enables each decimal number to be represented uniquely using the same number of bits, n, as the binary encoding. Theoretical foundations and mathematical formulations demonstrate that ABR can encode the same integer range as binary, validating its potential as a viable alternative. Additionally, the ABR number system is compatible with existing data compression algorithms like Huffman coding and arithmetic coding, as well as error detection and correction mechanisms such as Hamming codes. We further explore practical applications, including digital steganography, to illustrate the utility of ABR in information theory and digital encoding, suggesting that the ABR number system could inspire new approaches in digital data representation and computational design.
Authors - Aarya Pendharkar, Tanmay Pampatwar, Mrunal Zombade, Ashwini Bankar Abstract - This study offers an effective approach for forecasting changes in stock prices using a binary classification model that makes use of sentiment analysis, technical indicators, and historical stock data. The model forecasts whether a stock will gain or lose the following day, rather than predicting actual stock prices. Technical indicators including moving averages, the Relative Strength Index (RSI), and Bollinger Bands are among the input elements, along with historical price data (open, close, high, low, and volume). Market news and social media data are subjected to sentiment analysis, which produces sentiment ratings (positive, neutral, or negative) in order to identify general patterns in market sentiment. When combined with technical indicators, these sentiment scores provide additional context for stock movements. The model uses machine learning techniques like XGBoost, SVC, Logistic Regression, and Random Forest, and it outputs a confidence score and a binary forecast. Performance indicators like accuracy, precision, recall, and F1 score are used to assess the model's efficacy. Backtesting is also performed to evaluate robustness and performance on historical data. The suggested model offers a comprehensive perspective of stock movements by integrating technical and sentiment-based features, yielding better predictive ability than conventional models that use only past price data.
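One of the input features named above, the RSI, can be sketched in a few lines. This is the simple-average variant rather than Wilder's exponential smoothing, and the price series in the usage example is made up for illustration.

```python
def rsi(prices, period=14):
    """Relative Strength Index over closing prices (simple-average variant)."""
    gains, losses = [], []
    for prev, cur in zip(prices, prices[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Average gain/loss over the last `period` price changes.
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_loss == 0:
        return 100.0  # no losses in the window: maximally overbought
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

# Hypothetical closing prices; values near 70+ are read as overbought,
# near 30 or below as oversold.
closes = [10, 11, 10, 12, 11, 13, 12, 14, 13, 15, 14, 16, 15, 17, 16]
print(round(rsi(closes), 1))  # → 65.0
```

Features like this one would be combined with sentiment scores into a single row per trading day before training the binary classifier.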
Authors - Divyashree H.B., Shirshendu Roy, Supraja Eduru, Dev Sharma, Prathamesh M.Naik Abstract - In today's scenario, most farmers still practice conventional farming, which demands hard physical labour. Day-to-day tasks such as watering the crop are performed manually, without measuring the temperature or knowing the soil moisture in the field. Because these practices are passed down from generation to generation, farmers lose manpower and water instead of gaining anything, which leads to low production and lower income. The smart agriculture system developed here gives assurance about the soil's water level and fertility by using several sensors: temperature sensors, soil moisture sensors, and humidity sensors. These sensors, working in coordination and integrated with IoT and a Raspberry Pi, make farming more convenient and limit the farmers' excessive work. A sensor placed on the water tank and interconnected with the pump source sends an alert notification to the farmer's phone about the need for water supply. Most electricity-related problems can be resolved by connecting the sensors to a power source and integrating them with the cloud, so that every control of the farm is at the farmer's fingertips. Similarly, when the soil moisture sensor detects that the soil requires water, a notification reaches the user's phone directly so that irrigation can be performed. Farmers who own livestock also carry the responsibility of grazing, during which cattle may get lost or stray from the pathway; a collar tracker with map support is beneficial at such times. Abnormal livestock behaviour can be detected, and feeding and water-tank refilling can be done with a single click. Parameters such as milk thickness and health issues can be managed, not only for cows but for other livestock as well. Climate and weather conditions are updated directly in the application, with data analytics support for managing expenses and graph guidance for soil moisture, temperature, and irrigation. The dashboard displays the water-tank fill percentage, air composition, whether drip or sprinkler irrigation is needed, temperature, humidity, and live cattle locations on custom maps. Application usage guidance and query support are provided for smooth use of the application.
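The alert logic described above is essentially threshold-based. A minimal sketch of the decision step, with hypothetical threshold values (the deployed system's actual thresholds are not given in the abstract):

```python
def irrigation_alerts(soil_moisture_pct, tank_level_pct,
                      moisture_threshold=30.0, tank_threshold=20.0):
    """Decide which notifications to push to the farmer's phone.

    Threshold defaults are illustrative assumptions, not values
    from the deployed system.
    """
    alerts = []
    if tank_level_pct < tank_threshold:
        alerts.append("refill water tank")
    if soil_moisture_pct < moisture_threshold:
        if tank_level_pct >= tank_threshold:
            alerts.append("start irrigation pump")
        else:
            alerts.append("irrigation needed, but tank is low")
    return alerts

print(irrigation_alerts(soil_moisture_pct=25, tank_level_pct=80))
# → ['start irrigation pump']
```

On the Raspberry Pi, sensor readings would feed this decision periodically, and the resulting alerts would be pushed to the phone via the cloud integration.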
Authors - Ankit Patne, Hritika Phapale, Kaushik Aduri, Hemantkumar B Mali Abstract - Cloud-based Learning Management Systems (LMS) are secure online platforms that enable L&D professionals to upload their resources and build a comprehensive suite of learning materials. This paper presents an overview of the cloud LMS technology landscape and examines architecture, scalability solutions, and security perspectives on deploying these tools. We also look at how these platforms incorporate machine learning to personalize learning experiences. Case studies of platforms such as Coursera illustrate practical ways to implement and maintain performance improvements. By reviewing the current literature, the paper identifies the major benefits of cloud technologies in improving educational outcomes, including reduced cost, better scalability, and enhanced security. The study contributes to the evolving knowledge base of cloud-based education and sheds new light on how cloud LMS can revolutionize IT security education delivery.
Authors - R.Mehala, K.Mahesh Abstract - Content-preserving warping for 3D video stabilization modifies the output of a hand-held video camera so that it appears to have been captured with a guided camera motion. The technique simulates 3D camera movements by warping the video to look as though it was captured from adjacent views, and its algorithms successfully reproduce dynamic scenes from a single source video by focusing solely on perceptual plausibility rather than perfect reconstruction. A particular desired camera path can be selected automatically. The computed warp maintains the content of each video frame while adhering to the sparse displacements suggested by the recovered 3D structure. Experiments stabilizing difficult videos with dynamic scenes show that the method works well.
Authors - K.Sarvani, Dinesh, Bijith Narayanan, Aayush Rai Abstract - Digital finance has become a buzzword in every financial service for identifying any country's solvency position and competitive environment. This study examines the performance of the banking sector with respect to macroeconomic variables to assess the solvency and profitability position of commercial banks in India. Two macroeconomic variables, namely gross domestic product and inflation, were considered to identify the performance of non-performing assets of the public sector banks. There are twelve public sector banks in India as of 2013-24 as per the RBI database, and all of them were considered for the ten-year study. The financial data of the public sector banks was collected from PROWSSIQ, and the macroeconomic variables were taken from Economic Times data published on web sources. The findings of the study are that non-performing assets are negatively correlated with inflation and GDP growth rates. The adjusted R-squared value is 61 percent, implying that the regressors explain a substantial part of the relationship between the dependent and independent variables. Forecasting the performance of non-performing assets was done using the SARIMA model. It is found that for all the selected banks, non-performing assets are continuously increasing, which implies that the recovery of bad debts may be improved by the adoption of new fintech apps, a positive sign for the performance of the banks in the coming years.
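The correlation analysis underlying findings like the one above is a Pearson correlation between yearly series. A minimal sketch, with entirely synthetic NPA and GDP-growth figures (not the study's data):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical yearly series: the NPA ratio falls as GDP growth rises,
# giving a strongly negative correlation.
npa = [9.0, 8.2, 7.5, 6.9, 6.1]
gdp = [4.0, 5.1, 6.3, 7.0, 8.2]
print(pearson_r(npa, gdp) < 0)  # → True
```

The SARIMA forecasting step would then fit a seasonal ARIMA model to each bank's NPA series, typically via a statistics package such as statsmodels.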
Authors - Anudeep Arora, Neha Arora, Neha Tomer, Ranjeeta Kaur, Vibha Soni, Lida Mariam George, Anil Kumar Gupta, Prashant Vats Abstract - Effective human resource management is a major issue for the Business Process Outsourcing (BPO) business, which is marked by a high staff turnover rate and a dynamic operating environment. These issues are frequently not adequately addressed by traditional HR management techniques, which results in inefficiencies and higher expenses. BPO companies may improve employee engagement, optimize staffing levels, and anticipate workforce demands with the use of predictive analytics, which makes it a potent option. The use of predictive analytics for efficient HR utilization in the BPO sector is examined in this article. It explores important technologies, tools, and processes; talks about the advantages and difficulties of implementation; and provides case studies of effective deployments. BPO firms may increase labor productivity, lower attrition, and boost overall company success by utilizing predictive analytics.
Authors - Kavita Patil, Rohit Patil, Vedanti Koyande, Amaya Singh Thakur, Kshitij Kadam, Kavita Moholkar Abstract - This paper evaluates a chatbot system designed for personalized business interactions using advanced Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG). The system combines proprietary business data with external databases to improve contextual relevance. A comparative analysis of leading LLMs—Gemini Pro, GPT-4, Claude 2, GPT-3.5, and LLaMA 2—was conducted across benchmarks like MMLU, GSM8K, BigBench Hard, HumanEval, and DROP. Gemini Pro outperformed the others, with scores of 88.9% on MMLU, 86.3% on GSM8K, 78.1% on BigBench Hard, 73.5% on HumanEval, and 79.2% on DROP, showcasing its strength in complex reasoning and long-context retrieval. Fine-tuned with business-specific data, Gemini Pro sets a new standard for high-accuracy, scalable chatbot solutions, ideal for enterprise applications.
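The RAG pipeline described above retrieves relevant business documents and prepends them to the user query before the LLM call. A minimal sketch of that retrieval step using bag-of-words cosine similarity; production systems use dense embeddings and a vector store, and the documents and query here are hypothetical:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Prepend retrieved business context to the query before calling an LLM."""
    context = "\n".join(retrieve(query, documents, k=1))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [  # hypothetical proprietary business data
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday.",
]
print(build_prompt("how long do refunds take", docs))
```

The assembled prompt would then be sent to the chosen LLM (Gemini Pro in the paper's evaluation), grounding its answer in the retrieved context.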
Authors - Darshana Naik, Aishwarya Bhagat, Amman Baheti, Atharva Kulkarni, Hitesh Kumar Abstract - This paper examines the potential of AI-powered chatbots to address the growing global need for accessible and effective mental health support. It traces the evolution of chatbots, from rudimentary systems to sophisticated AI-driven platforms, emphasizing advancements in artificial intelligence and natural language processing that enable personalized responses. Driven by the need to overcome barriers of cost, availability, and stigma in mental health care, the paper explores chatbot integration strategies. These include using chatbots for screening and triage, extending therapist reach, bridging care gaps, reaching underserved populations, and leveraging data for personalized interventions. While chatbots show promise in delivering therapeutic support and improving symptoms, they are envisioned as a complement to, rather than a replacement for, traditional therapy. The paper advocates for leveraging AI to enhance the scalability, reach, and personalization of mental health care, ultimately aiming to improve global mental health outcomes. By exploring both the potential and the challenges of AI-powered chatbots, this paper contributes to the ongoing dialogue about the future of mental health care in an increasingly digital world.
Authors - Ashwini Bhosale, Laxmi Patil, Gitanjali Netake, Sakshi Surwase, Rutuja Gade, Prema Sahane Abstract - This paper discusses a project that aims to create a system for translating sign language into spoken words while also recognizing the emotions of the signer. The goal is to make communication easier for Deaf and hard-of-hearing individuals by converting hand gestures into speech and reflecting the signer’s emotional tone in the voice output. This would make conversations feel more natural and expressive, enhancing interactions in both social and work environments. The project uses computer vision and Convolutional Neural Networks (CNNs) to accurately recognize various sign language gestures. To identify emotions, it uses deep learning models like VGG-16 and ResNet, which focus on facial expressions. It also uses Long Short-Term Memory (LSTM) networks to analyze audio input and detect emotional tones in speech. For turning sign language into spoken words, the system employs Text-to-Speech (TTS) technologies like Tacotron 2 and WaveGlow. These tools create natural-sounding speech, and the detected emotions are added to the voice by adjusting tone, pitch, and speed to match the signer’s feelings. With real-time processing and an easy-to-use interface, this system aims to provide quick translation and emotion detection. The expected result is a fully functional system that not only translates sign language into speech but also effectively conveys emotions, making communication more inclusive for Deaf and hard-of-hearing individuals.