Authors - Sahil Shelote, Ritesh Chaudhari, Payal Sirmokadam, Rupali Kamathe, Meghana Deshpande, Vandana Hanchate, Sheetal Borde Abstract - Traditional traffic enforcement methods face significant challenges in effectively detecting and resolving violations that endanger public safety. Using the ESP32-CAM module for video capture, YOLOv3 for object detection, and OCR for license plate recognition, this work offers an innovative approach to improving road safety and traffic management. The ESP32-CAM module captures real-time video of intersections. What sets this research apart is the integration of YOLOv3, an advanced object detection model, to detect possible traffic violations such as missing helmets and rider violations. OCR technology extracts license plate information, ensuring accurate identification of the vehicle involved in a violation. The system then generates e-challans and sends the registered vehicle owner an SMS with a payment-gateway link. This represents an important development in traffic management and safety, with promising results in terms of increased compliance, reduced accidents, and general improvements in road safety. The ESP32-CAM, integrated with YOLOv3 and OCR technologies, provides an efficient, technology-based solution for improving public safety on the road.
Authors - Rohini Hongal, Supriya K, Rajeshwari M, Rahil Sanadi Abstract - Computer vision applications like object detection, picture matching, 3D reconstruction, and depth estimation in navigation rely on the synchronization of stereo frames. In stereo vision, two cameras separated by a known distance capture images that are analyzed for differences. To use stereo images in any application, synchronization between the corresponding frames must be ensured. This paper presents an approach to detect the synchronization between stereo pair images. The synchronization information between the stereo frames can be obtained in two ways: by using the temporal data of the image pair, or by analyzing the spatial data in the images. This study uses the temporal data, i.e. the timestamps of the stereo images, and validates the results with the spatial data to identify the stereo image pair as synchronous or asynchronous. The spatial algorithm is executed once the timestamp algorithm identifies a possible synchronization. To generate a template and extract spatial information from the left frame, this technique makes use of the Sobel filter. An appropriate correlation approach is then used to match the template to the right, right+1, and right-1 frames, and the frame with the highest correlation is chosen. If the chosen frame is the right frame, the frames are deemed synchronized. On the other hand, the frames are considered asynchronous if the frame with the highest correlation is either the right+1 or right-1 frame. The suggested approach offers an accuracy of 90.33% for static datasets and 96.67% for frame synchronization. The technique also provides information on the duration of asynchrony when frames are not synchronized. A variety of computer vision applications that depend on synchronized stereo frames could benefit greatly from the presented technique.
It allows for more reliable object detection, picture matching, and 3D reconstruction by precisely detecting the synchronization state, which improves visual perception and comprehension in real-world circumstances.
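The spatial check described above (a Sobel template from the left frame, correlated against the right, right+1, and right-1 frames, with the highest-correlation frame deciding synchrony) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a pure-NumPy Sobel operator and normalized cross-correlation stand in for whatever filter and correlation implementations the paper actually uses.

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def ncc(a, b):
    """Normalized cross-correlation between two equal-size edge maps."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def check_sync(left, candidates):
    """candidates: dict like {'right-1': ..., 'right': ..., 'right+1': ...}.
    Returns (best_label, synchronized) -- synchronized iff 'right' wins."""
    template = sobel_edges(left)
    scores = {k: ncc(template, sobel_edges(v)) for k, v in candidates.items()}
    best = max(scores, key=scores.get)
    return best, best == 'right'
```

In a full pipeline the timestamp check would run first, and this spatial matching would only be invoked on candidate pairs it flags.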
Authors - Karuppasamy M, Jansi Rani M, Poorani K Abstract - Diabetes is a leading cause of mortality, as its prevalence is rising globally. Because it contributes to various complications, it leads to a high mortality rate. Early diagnosis and prediction of contributing features can be achieved with the assistance of machine learning models. These models are instrumental in assisting healthcare sectors with prediction, diagnosis, prognosis, and disease prevention. If diseases are found at earlier stages, many lives could be saved. To that end, machine learning models are developed to find diseases at earlier stages. However, the accuracy of such predictions is often unsatisfactory. This proposed work explores techniques to predict diabetes at earlier stages. Several data mining approaches to XAI are discussed. The major features contributing to diabetes are also identified with the feature importance technique. This provides a clearer understanding of which features contribute most to diabetic progression. The proposed model achieved 94% accuracy with random forest, which is further elaborated with Explainable AI (XAI).
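The workflow the abstract describes (a random-forest classifier plus feature-importance analysis) can be sketched with scikit-learn on synthetic data; the feature names, data, and hyperparameters below are illustrative assumptions, not the authors' setup or dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 400
X = rng.normal(size=(n, 4))   # hypothetical columns: glucose, bmi, age, noise
# Label driven mainly by the first feature ("glucose" in this toy setup).
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

# Impurity-based importances rank which features drive the prediction,
# the step the abstract uses to explain diabetic progression.
ranked = np.argsort(clf.feature_importances_)[::-1]   # most influential first
```

On a real dataset, the ranked importances would be passed to an XAI tool for per-patient explanations.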
Authors - Payal Khode, Shailesh Gahane, Arya Kapse, Pankajkumar Anawade, Deepak Sharma Abstract - Security has always remained among the most important areas of universal concern as the world deals with dynamic change in technology. Against this background, this paper explores the frailties of current technological gadgets such as mobile phones, Internet of Things (IoT) devices, and personal computers, which are prone to a range of cyber threats. A comprehensive examination of the security threat shows how application weaknesses, system susceptibilities, and network-based threats allow attackers to erode user confidentiality and data integrity. Moreover, this study compares traditional and modern assessment and protection mechanisms, including cryptography techniques, flow inspection tools, signals intelligence technologies, and hardware-based and artificial intelligence-based security measures, with the intention of identifying the most effective paradigm for combatting these threats. In this way, the present paper contributes to ongoing work in the field aimed at designing new countermeasures that reduce the vulnerability of assorted present-day technologies to cyber threats.
Authors - N V Bharani Subramanya Kumar, C V Mahesh Reddy, CH. Samyana Reddy, Krishn Chand Kewat, Laxmi Narsimha Talluri, Shaik Mohammed, Rahil Sarfaraz, Sushama Rani Dutta Abstract - This work improves on existing methods by developing a novel deep convolutional neural network (CNN) architecture for image classification, specifically targeting the CIFAR-10 dataset [4], which consists of 60,000 color images (32 × 32 pixels) divided into 10 classes. The model architecture incorporates a number of convolution and pooling layers followed by fully connected layers to better learn the complex structure within the input's spatial configuration. The typical challenge of overfitting is addressed with techniques such as data augmentation and dropout regularization. The experimental evidence shows that the deep CNN outperforms traditional models on image recognition and classification problems, proving robust in discerning the differences among the image categories within the CIFAR-10 dataset.
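A minimal PyTorch sketch of a CIFAR-10-style CNN with stacked convolution/pooling layers, dropout regularization, and fully connected layers; the layer counts and channel widths are our illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Two conv/pool stages, then dropout and fully connected layers,
# sized for 32x32 RGB inputs and 10 CIFAR-10 classes.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
    nn.Flatten(),
    nn.Dropout(0.5),                 # dropout regularization against overfitting
    nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
    nn.Linear(128, 10),              # one logit per CIFAR-10 class
)

logits = model(torch.randn(4, 3, 32, 32))  # batch of 4 fake 32x32 RGB images
```

Data augmentation (random crops, horizontal flips) would be applied in the training dataloader rather than in the model itself.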
Authors - Shubham Kadam, Chhitij Raj, Pankajkumar Anawade, Deepak Sharma, Utkarsha Wanjari, Vijendra Sahu, Anurag Luharia Abstract - This paper examines the modern role of information and communication technology (ICT) in healthcare, which has revolutionised patient care, data management, and service delivery. While ICT was initially used solely for administrative purposes, it is now broadly defined to include a range of technologies, such as electronic health records (EHR), telemedicine, and analytics, that improve operational efficiency, patient access, and quality of care. Innovations such as AI and cloud computing provide real-time data access that helps healthcare professionals make better decisions and improves patient outcomes. In particular, the paper showcases the government's initiative to create an integrated digital health system. The study highlights the need for strategic implementation of ICT to optimize health outcomes and the availability of, and access to, services, particularly in resource-poor settings.
Authors - Sachin Naik, Rajeshree Khande, Sheetal Rajapurkar, Kartik Dalvi, Shubham Rajpure, Vaibhav Kalhapure Abstract - SmartMail Insights is an intelligent web-based toolkit for email management that goes beyond the basic functions of most online mail applications. Through automated priority ranking, auto-responses, and email summarization, it helps users separate urgent and important mail from messages that can wait. Its machine-learning algorithms sort emails by content and sender, with configurable filters to highlight important messages. Auto-replies are powered by NLP, and text summarization makes long messages easier to read. Nevertheless, SmartMail Insights could advance its current categorization model: incorporating detection of spam and promotional emails, and refining categories such as personal and business, would help improve categorization accuracy.
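To illustrate the priority-ranking idea only, here is a toy keyword-based ranker; SmartMail Insights uses learned ML models, and the keyword list and scoring rule below are purely hypothetical stand-ins for such a model.

```python
URGENT_TERMS = {"urgent", "asap", "deadline", "invoice"}  # illustrative keywords

def priority_score(subject, body):
    """Toy priority score (not SmartMail's actual model): count urgency
    keywords appearing anywhere in the subject or body."""
    text = f"{subject} {body}".lower()
    return sum(term in text for term in URGENT_TERMS)

def rank_inbox(emails):
    """emails: list of (subject, body) tuples; highest-priority first."""
    return sorted(emails, key=lambda e: priority_score(*e), reverse=True)
```

A learned classifier would replace `priority_score`, scoring on content and sender features instead of a fixed keyword list.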
Authors - Shailesh Gahane, Payal Khode, Arya Kapse, Deepak Sharma, Pankajkumar Anawade Abstract - Accessibility for every type of user, including users with disabilities, ensures that websites and applications are developed so that everyone can access them in this electronic world. This paper reports important techniques and best practices for developing accessible websites and applications, examining the effectiveness of established accessibility guidelines, the role of assistive technologies, and inclusive design strategies. The paper has three targets. The first concerns the practical application and effectiveness of the core accessibility standards, including the Web Content Accessibility Guidelines (WCAG) and the Americans with Disabilities Act (ADA), in promoting compliance and inclusion. The second concerns how assistive technologies, like screen readers and voice-control programs, interact with web applications, along with best-practice recommendations for optimizing access through these tools. The third concerns inclusive design strategies, addressing color contrast, font selection, and responsiveness, meant to improve accessibility for visual, auditory, and cognitive impairments. This research gives a comprehensive understanding of present techniques and best practices in accessible web and app development, and of how developers can enhance usability and ensure digital inclusivity for all users.
Authors - Madhuri Thorat, Priyanshu Kapadnis, Neel Kothimbire, Rameshkumar Choudhary, Atharva Jadhav Abstract - The emergence of Generative AI has led to the development of various tools that present new opportunities for businesses and professionals engaged in content creation. The education sector is undergoing a significant transformation in the methods of content development and delivery. AI models and tools facilitate the creation of customized learning materials and effective visuals that enhance and simplify the educational experience. The advent of Large Language Models (LLMs) such as GPT and Text-to-Image models like Stable Diffusion has fundamentally changed and expedited the content generation process. The capability to generate high-quality visuals from textual descriptions has exceeded expectations from just a few years ago. Nevertheless, current research predominantly concentrates on text generation from text, with a notable lack of studies exploring the use of multimodal generation capabilities to tackle critical challenges in instruction supported by multimodal data. In this paper, we propose a framework for generating situational video content based on English poetry, which is executed through several phases: context analysis, prompt generation, image generation, and video synthesis. This comprehensive process necessitates various types of AI models, including text-to-text, text-to-video, text-to-audio, and image-to-image. This project illustrates the potential of combining multiple generative AI models to produce rich multimedia experiences derived from textual content.
Authors - Rashmy Moray, Sridevi Chenammasetti, Shikha Jain, Ankita, Shivani Abstract - This study explores the determinants influencing the adoption of robo-advisory services among Generation Z and Millennials. Leveraging the DeLone and McLean Information Systems Success (DM ISS) model, the research examines four key dimensions—system quality, information quality, service quality, and user satisfaction—to evaluate their impact on users' intention to adopt these services. A structured questionnaire was utilized to collect primary data, which was analyzed using Structural Equation Modeling (SEM) via SmartPLS software. Findings highlight that service quality and user satisfaction significantly influence the adoption intent of robo-advisory services. This research expands the DM ISS model's application to robo-advisory services, providing valuable insights for stakeholders on how these dimensions contribute to user satisfaction and overall system performance.
Authors - Sunil Kumar, Sanya Shree, Saswati Gogoi, Anshika Shreshth Abstract - The evolution of wireless communication, which led to the introduction of cellular networks, has enabled the highly interconnected world we experience today. This research paper discusses the developmental course from the first generation (1G), which introduced analog voice communication, to the higher generations of networks, which brought digital signals into play. Multiple access technologies and significant emerging technologies of cellular networks from 1G to 5G are discussed. A comparative analysis among different generations of networks is presented, along with a vision of the forthcoming sixth generation. The role of the widely used fifth generation (5G) in defence, healthcare, and education is discussed along with other applications. The challenges and future directions of the sixth-generation (6G) Wireless Communication Network (WCN), which aims for ultra-low latency and extremely high energy efficiency by leveraging artificial intelligence, are also discussed.
Authors - Shradha Naik, Suja Palaniswamy, Nicola Conci, Vishal Metri Abstract - Anomaly detection in videos from CCTV cameras can be an important strategy for crime analysis and prevention. The main focus of our work is detecting the crime of chain snatching in videos captured in India. Due to the absence of a training set of similar Indian videos, it is challenging to design a classifier for this crime. Hence a technique called Model-Agnostic Meta-Learning (MAML) is used to train a network on the well-known UCF-Crime dataset for the detection of chain snatching in a dataset custom-built by us. MAML is further developed into a method called Sampling-based Meta-Learning Anomaly Detection (SMLAD). With this, the characteristics of MAML are used to automatically classify chain snatching as an anomaly, obtaining best accuracy and AUC scores of 86% and 84%, respectively. Thus, the proposed work demonstrates the efficacy of MAML in correctly classifying chain snatching, which constitutes completely unseen data, as a crime-related anomaly.
Authors - Sanal Kumar S P, Arun K Abstract - Aspiring researchers must choose an appropriate Ph.D. subject. However, the complexity of the regulations and the large number of possible choices, especially in the context of cross- and multi-disciplinary approaches, make this challenging. The manual processing of applications by universities is time-consuming and prone to errors, leading to inefficiencies and inordinate delays. We created DSPredict, a novel approach that employs machine learning to identify the most appropriate Ph.D. subject for each applicant. Our methodology assesses application profiles and predicts the most suitable subjects. The findings suggest that DSPredict surpasses traditional methods, resulting in increased accuracy and a significantly shorter time to identify appropriate subjects.
Authors - Sudha S K, Aji S Abstract - Rapid advancements in video surveillance and analysis require advanced frameworks capable of detecting, segmenting, and tracking objects in complex, dynamic scenes. This paper introduces DySAMRefine, a novel dynamic scene adaptive mask refinement strategy for robust video object segmentation and tracking (VOST) in dynamic environments. DySAMRefine is built upon a Mask R-CNN pipeline for instance-level segmentation and incorporates a long short-term memory (LSTM) network to capture temporal dependencies, ensuring smooth and consistent object tracking across frames. A spatio-temporal attention block (STAB) is introduced to maintain temporal coherence, supported by a temporal consistency loss (TCL) that penalizes abrupt changes in masks between consecutive frames, promoting temporal smoothness. DySAMRefine dynamically adjusts mask refinement based on the complexity of the scene and optimizes performance in static and highly dynamic environments through a deformable convolutional network (DCN). The training process employs an efficient mixed precision scheme to minimize computational overhead, enabling real-time performance without sacrificing tracking precision. Extensive experiments and ablation analysis demonstrate that DySAMRefine enhances the accuracy and robustness of VOST, achieving superior J&F scores on benchmark datasets.
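One plausible reading of the temporal consistency loss (TCL) named above, which penalizes abrupt mask changes between consecutive frames, is the mean absolute difference between soft masks of adjacent frames; the exact formula below is our assumption, not the paper's.

```python
import numpy as np

def temporal_consistency_loss(masks):
    """One plausible TCL form (our reading of the abstract, not the paper's
    exact formula): mean absolute change between soft masks of consecutive
    frames. Evaluates to 0 for a perfectly static mask sequence.

    masks: array of shape (T, H, W) with values in [0, 1]."""
    masks = np.asarray(masks, float)
    return float(np.mean(np.abs(np.diff(masks, axis=0))))
```

In training, this term would be added to the segmentation loss with a weighting coefficient, discouraging flicker between frames.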
Authors - Roshan Kamthe, Yash Gaikwad, Shubham Pawar, Kishan Chandel, Pushpavati Kanaje Abstract - The purpose of this research is to develop a system that can identify hand movements, facilitating easier communication for the deaf and mute. Apart from providing voice output for calls coming in from hearing individuals, the system also includes a mobile application that allows users to communicate through hand gestures. Our solution gives those who are hard of hearing or deaf a straightforward way to communicate by utilizing modern technologies like computer vision and machine learning. The goal of this project is to develop a hand gesture detection system that will improve communication accessibility for people with speech and hearing problems, especially the deaf and mute community. The project's primary objective is to provide these individuals with a clear communication solution.
Authors - Atharva Desai, Anurag Raut, Aditya Thatte, Ramchandra Mangrulkar Abstract - This project proposes an advanced, multi-threaded, open-source NoSQL database architecture designed to extend and improve upon existing database systems. The architecture utilizes a shared-nothing approach, sharding the keyspace into multiple parts, each managed by a dedicated thread. By employing hash-based ownership, the need for synchronization is eliminated, thereby reducing performance bottlenecks. The system is optimized for distribution within a single machine, leveraging a thread pool technique to manage potential thread overhead efficiently. Additionally, the database replaces traditional hash tables with dashtables, which minimize rehashing overhead and optimize memory usage by segmenting the hash space into smaller, more manageable portions. This novel approach significantly improves efficiency and scalability, providing a compelling alternative to existing solutions like Redis.
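The shared-nothing, hash-based-ownership idea can be sketched in a few lines: each shard's data is touched only by its owner thread, so the data itself needs no locks. The design below is a hypothetical Python illustration of the principle, not the project's actual (presumably systems-level) code.

```python
import queue
import threading

NUM_SHARDS = 4

class ShardedStore:
    """Shared-nothing sketch: keys are routed to shards by hash, and each
    shard's dict is accessed only by its dedicated worker thread, so no
    synchronization is needed on the data itself."""

    def __init__(self):
        self.queues = [queue.Queue() for _ in range(NUM_SHARDS)]
        for i in range(NUM_SHARDS):
            threading.Thread(target=self._worker, args=(i,), daemon=True).start()

    def _worker(self, shard_id):
        data = {}  # private to this thread: never shared, never locked
        for op, key, value, reply in iter(self.queues[shard_id].get, None):
            if op == 'set':
                data[key] = value
            reply.put(data.get(key))

    def _shard(self, key):
        return hash(key) % NUM_SHARDS  # hash-based ownership

    def set(self, key, value):
        reply = queue.Queue()
        self.queues[self._shard(key)].put(('set', key, value, reply))
        return reply.get()

    def get(self, key):
        reply = queue.Queue()
        self.queues[self._shard(key)].put(('get', key, None, reply))
        return reply.get()
```

A real engine would use fixed-size shard structures (such as the dashtables the abstract mentions) and avoid a reply queue per request, but the ownership routing is the same.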
Authors - Trupal J. Patel, Mahek D. Viradiya, Jaykumar B. Patel, Dhruvi J. Patel, Prisha M. Patel, Dhruv Dalwadi Abstract - In an era of surging urbanization, managing waste effectively has become a major concern. This research paper provides a solution: a cutting-edge system for real-time monitoring and management of waste bins using IoT sensors integrated with cloud computing technologies. The system uses an ultrasonic sensor (HC-SR04) to precisely and accurately gauge waste levels and a DHT22 sensor to monitor environmental conditions. The precisely collected data enhances the efficiency of waste management. The collected data is processed by a Raspberry Pi, the core unit of the system, which transmits the information to a cloud platform for analysis and visualization. This makes it possible for stakeholders to access real-time insights into waste levels and environmental factors, continuously improving decision-making. Moreover, the system integrates predictive analysis to forecast waste collection trends, enabling the optimization of collection schedules and minimizing unnecessary collection trips. In this way, operational costs can be reduced and service efficiency improved. This approach not only addresses logistical challenges but also serves sustainable waste management practices. Ultimately, this research illustrates the potential of IoT technologies to create smarter and more adaptive urban environments.
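Converting an ultrasonic echo distance into a bin fill level is a simple geometric calculation; the bin depth and sensor offset below are example values, not the paper's deployment parameters.

```python
def fill_percent(distance_cm, bin_depth_cm=100.0, sensor_offset_cm=2.0):
    """Convert an HC-SR04 echo distance (sensor mounted at the lid, facing
    down) into a fill percentage. Depth and offset are example values:
    when the bin is empty the echo travels the full depth, and when it is
    full the echo returns from just below the sensor."""
    usable = bin_depth_cm - sensor_offset_cm
    level = bin_depth_cm - distance_cm   # waste surface height above the floor
    return max(0.0, min(100.0, 100.0 * level / usable))
```

On the device, this value would be sent alongside the DHT22 temperature/humidity readings to the cloud platform.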
Authors - Divyashree HB, Deepthi Chamkur V, Preesha Tandon, Laranya Subudhi Abstract - In today's technology age, Ground Control Station application software for offboard-mode and manual control of unmanned aerial vehicles is essential for a variety of onboard activities like tracking, surveillance, and patrolling. This study discusses software that controls and collects important data from unmanned aerial vehicles. The program is developed in Python 3, and the graphical user interface is created with the Qt5 framework. ROS Melodic, the robot operating system, facilitates communication and networking. The software allows the operator to control the drone's forward, backward, up, down, left, and right motions. The live feed from the RGB camera (day camera) and the night vision camera can be viewed and saved as snapshots. It is also possible to save the live-stream footage to a CD. Object tracking and detection functions are offered for surveillance purposes. The software can also be used to operate a gimbal fitted to the drone. The entire program is beta-tested in the Gazebo real-world simulation, and the experimental findings are based on a real-world hexacopter flight.
Authors - Sandeep M. Chaware, Mohit Matte, Pratik Dahagaonkar, Anurag Deotale, Laukik Pagar, Jayesh Sarwade Abstract - Agriculture encompasses land cultivation, crop growing, and livestock raising. A nation's economic growth depends on its agricultural sector; agriculture accounts for about 58% of a nation's primary revenue source. Until now, farmers have sown and cultivated based on favorable weather and soil conditions without considering the future supply and demand of crops or the type of agriculture practiced, often reducing profits from agriculture. Typically, when demand for a crop is low and supply is high, the price drops too low, leading to debt for the farmer, and vice versa. Predicting what crops should be grown or what type of agriculture should be adopted is essential in today's world to meet people's needs and increase farmer productivity. Machine learning, data mining, and data analytics can be used to collect data, train models, and predict market demand, supply chains, the in-demand type of agriculture, and the location of agriculture for revenue-generating farming. This will help reduce losses for farmers. Given the ongoing changes in the world, the proposed machine learning assistant helps determine how to manage agriculture intelligently and guides an individual towards profitable agriculture. This work's primary goal is to sustain a single farm profitably while achieving high output at reasonable expense. Questions involving price comparisons, government schemes, plant protection, animal husbandry, weather, and fertilizer management are addressed by the proposed method.
Authors - Priyanshi Desai, Parth Shah Abstract - The increasing adoption of voice-controlled IoT devices, such as Amazon Alexa, Google Home, and Apple’s Siri, has transformed modern interactions with smart systems in various sectors, including home automation, healthcare, and industry. While these devices offer convenience and enhanced accessibility, they are also vulnerable to significant cybersecurity threats. This paper examines the security challenges associated with voice-controlled IoT systems, focusing on key vulnerabilities such as voice spoofing, man-in-the-middle attacks, insecure APIs, and data privacy concerns. Additionally, the paper explores various attack vectors, including adversarial attacks and physical tampering, and assesses current mitigation techniques like biometric voice authentication, secure data transmission, and anomaly detection. Privacy concerns are also discussed, particularly in relation to data retention and third-party access. As the use of these systems continues to grow, advanced cybersecurity measures, including quantum-resistant encryption and enhanced biometric methods, are essential for securing voice-controlled IoT devices. Furthermore, the establishment of regulatory frameworks to govern the handling of voice data is critical. This paper concludes by identifying future directions to improve the security and privacy of voice-controlled IoT devices, emphasizing the need for innovative solutions to counter an expanding array of cyber threats.
Authors - Shiva Kumar Bandaru, Upendra Pratap Singh Abstract - In a federated learning-based setup, parameter aggregation plays a pivotal role in obtaining global parameter estimates that assimilate the knowledge learned by the different clients. With an efficient parameter aggregation strategy, the global parameter estimates derived are more generalizable, accelerating the local client training in the subsequent communication rounds. We propose a novel m-ary improvisation-based parameter aggregation algorithm to obtain the global parameters. Specifically, after a threshold number of communication rounds has elapsed, the performance of the clients is evaluated on an independent test set, and the clients with better generalization are labeled as strong and do not participate in the next set of a threshold number of communication rounds. In this way, weak clients participate in the federated learning for more communication rounds; after the next set of threshold communication rounds has elapsed, the clients undergo a similar evaluation to be labeled as strong or weak again. The proposed algorithm ensures weak clients get more attention/exposure to learn the model parameters collaboratively. The global model trained on the BraTS2020 dataset in a federated learning-based framework reports Dice coefficient, Jaccard index, and pixel accuracy values of 0.8851, 0.8965, and 99.92%, respectively. Further, we show empirically that the training time for the different clients reduces from 180 minutes in the first phase of federated learning to only 64.8 minutes in the last phase, highlighting an accelerated training process. Consequently, the results reported by the proposed federated learning-based segmentation model highlight its usability for efficiently carrying out brain segmentation involving private and sensitive brain scans.
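A simplified sketch of the strong/weak client idea described above: after a threshold number of rounds, clients scoring above a cutoff on the independent test set are labeled strong and sit out the next phase, while the remaining weak clients keep training and are averaged into the global estimate. The code is our reading of the abstract, not the authors' algorithm.

```python
import numpy as np

def phase_aggregate(client_params, client_scores, threshold):
    """Label clients strong/weak by held-out score and average only the weak
    (still-participating) clients' parameters into the global estimate.

    client_params: list of same-shape parameter arrays, one per client.
    client_scores: held-out generalization score per client.
    Returns (global_params, weak_client_indices)."""
    weak = [i for i, s in enumerate(client_scores) if s < threshold]
    # Degenerate case (every client strong): fall back to averaging all.
    active = weak if weak else list(range(len(client_params)))
    global_params = np.mean([client_params[i] for i in active], axis=0)
    return global_params, weak
```

Over successive phases, clients would be re-evaluated and relabeled, giving weak clients more exposure exactly as the abstract describes.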
Authors - Saurav Kumar, Shivani, Rashmy Moray, Shikha Jain, Sridevi Chennamsetti Abstract - The aim of this study is to inspect the factors determining the use of Web 3.0 meta-based banking services. Diffusion of innovation theory has been used to explain the influence of perceived factors on attitude and behavioural intention to use meta-based banking services. A structured questionnaire was applied as the primary source of data collection, and the gathered data was analyzed using structural equation modeling as the statistical technique to achieve the stated objectives. SmartPLS was employed as the statistical tool, and the outcomes reveal that compatibility, observability, and trialability have a significant impact on attitude towards the usage intent of Web 3.0-based meta banking services. The study proves significant in the field of banking on the metaverse for various stakeholders and policy makers, and is helpful for understanding customers' perceptions of the usage of Web 3.0-based banking.
Authors - Shubham Kishor Kadam, Pankajkumar Anawade, Deepak Sharma, Anurag Luharia Abstract - Artificial Intelligence (AI) may be defined as the utilization of computer systems to undertake processes typical of human intelligence. AI is a comparatively new and actively developing scientific direction that can qualitatively change many social processes. In the context of increased usage of AI, different educational settings are applying this technology to create new perspectives in pedagogy. Today it is utilized to sift through incalculable quantities of information in order to discover patterns that help devise better and more appropriate policies and educational strategies than the existing ones. This paper determines the pertinence of AI to education, along with the challenges of using AI in education.
Authors - Harishh N, Drisya Murali, Suresh M Abstract - This study explores the possibilities of green logistics and the adoption of biodegradable packaging in freight transportation, focusing on the impact on reducing packaging waste and bringing in sustainability. The research uses the Grey Influence Analysis (GINA) methodology to analyze eleven significant identified factors that impact the adoption of biodegradable packaging in freight transportation. The primary role of packaging is to protect products during storage and transport, reduce costs, and ensure sustainable, safe product distribution. The study also highlights the importance of improving the material properties of packaging, which can mitigate or minimize adverse environmental impacts. The findings highlight the need for varied perspectives in future studies and for a comprehensive understanding of the relationships among the factors influencing biodegradable packaging in freight transportation.
Authors - Utkarsha Wanjari, Shubham Kadam, Chhitij Raj, Pankajkumar Anawade, Deepak Sharma Abstract - The digital divide continues to be a global issue, marking the marginalization between groups with access to Information and Communication Technology (ICT) and those without. This report looks at the crucial role of ICT in bridging this gap and ensuring integral social and economic development. ICT holds tremendous transformative potential through its power to enrich education, modify healthcare delivery systems, and strengthen governance through digital inclusion. Economically, it propels innovation, expands access to global markets, and creates financial inclusion through digital tools. Still, significant challenges persist in the form of infrastructure deficits, digital literacy gaps, and socioeconomic inequalities. Through case studies and successful global initiatives, this report distills best practices and strategies for working around these challenges. It draws attention to public-private partnership efforts, policy reform, and investment in ICT infrastructure and training. Bridging the digital divide is not just a technical task but a pathway to achieving equitable and sustainable development in an increasingly digitalizing world.
Authors - Vasudha V. Ayyannavar, Lokesh B. Bhajantri Abstract - The healthcare sector is rapidly evolving, making the continuous exchange of healthcare data essential for both patient care and operational efficiency. In today's landscape, file and data synchronization is no longer optional but a crucial requirement. This work presents a real-time data synchronization system tailored for hospital records management, enabling seamless and secure communication among healthcare users. The system uses real-time synchronization to ensure that updates made on the server are instantly reflected across all connected clients. A robust architecture is developed to support both MySQL and MongoDB databases, offering flexible data storage. It is built with Node.js and Express.js, utilizing Socket.IO for real-time, bidirectional communication. On the front end, HTML, CSS, and JavaScript are combined with Bootstrap to create a responsive and user-friendly interface, allowing easy data input and retrieval by healthcare users. The proposed solution ensures conflict-free data dissemination across various devices and is compared against existing methods on key metrics such as synchronization time, memory usage, and data accuracy. Overall, the system aims to enhance hospital records management through a reliable, scalable, and intuitive real-time synchronization solution.
Authors - Ganesh Haricharan Mungara, Pranai Govind Soorneedi, Karthik Mungara, C.N.S.Vinoth Kumar Abstract - The proliferation of smartphones has transformed communication, work, and information access. However, this convenience has brought significant security challenges, particularly from malware that can compromise user data and privacy. Despite numerous antivirus applications, detecting and removing malware from Android devices remains a challenge. Current solutions often fail to detect sophisticated malware, necessitating the intervention of cybersecurity experts, which can compromise user privacy. This project aims to develop a tool that detects malware on Android devices based on installed applications, eliminating the need for users to install third-party software. The proposed solution leverages pattern matching by checking installed packages against a database of known malware. If a match is found, the tool indicates potential malware presence. This method offers a privacy-preserving approach, focusing on app behavior rather than relying solely on signatures, making it harder for malware to evade detection. The tool addresses the limitations of existing antivirus solutions, which often require extensive permissions and access to personal data. By providing a user-friendly interface and ensuring privacy, this project aims to enhance the overall security of Android devices. Future enhancements include incorporating machine learning models to improve detection accuracy and expanding the tool to other mobile platforms like iOS. This innovative approach offers a reliable and privacy-focused alternative for malware detection on Android devices.
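At its core, the package-matching step can be pictured as a set intersection between installed package identifiers and a malware database. The package names below are invented for illustration; the real tool's database format is not specified in the abstract:

```python
# Illustrative database of known-malicious package identifiers (made up).
KNOWN_MALWARE = {"com.example.fakebank", "com.example.adspy"}

def scan_installed(packages):
    """Return the installed packages that match the malware database, sorted."""
    return sorted(set(packages) & KNOWN_MALWARE)

installed = ["com.android.chrome", "com.example.fakebank", "com.maps.app"]
print(scan_installed(installed))  # → ['com.example.fakebank']
```

A production tool would keep the database updated remotely and would combine this lookup with the behavioral checks the abstract mentions.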
Authors - Pratibha Verma, Sanat Kumar Sahu, Latika Tamrakar Abstract - Coronary Artery Disease (CAD) is a major health crisis among populations worldwide, so an effective system for identifying CAD is needed. In this study we built a classification-based model to address the CAD problem. The ensemble bagging classification method combines multiple classifier models and their joint outputs to achieve a unified classification outcome. This technique has been applied to CAD using Artificial Neural Network (ANN) models. The ANN-based models are the Multi-layer Perceptron Network (MLPN or MLP), Radial Basis Function Network (RBFN), ensemble bagging RBFN (EB-RBFN), and ensemble bagging MLP (EB-MLP). Our experimental outcomes indicate that the proposed ensemble bagging model significantly enhances classification accuracy on the dataset when compared to the individual MLP and RBFN classifiers. This ensemble model consistently delivers more accurate and valuable classification results. Its implementation substantially improves CAD diagnostic accuracy, enabling more precise identification of patients affected by this condition. These findings imply that ensemble learning techniques, specifically ensemble bagging with ANN models, hold great potential for enhancing the precision of CAD diagnosis. This advancement has the potential to improve patient management and treatment outcomes.
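As a rough illustration of the ensemble bagging mechanics (bootstrap resampling plus majority voting), here is a minimal sketch that uses one-feature decision stumps in place of the paper's ANN base learners, on toy data:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """One-feature decision stump: pick the midpoint threshold and sign
    minimizing training error."""
    xs = np.unique(X)
    cands = (xs[:-1] + xs[1:]) / 2 if len(xs) > 1 else xs
    best = (np.inf, xs[0], 1)
    for t in cands:
        for sign in (1, -1):
            err = np.mean(np.where(sign * (X - t) > 0, 1, 0) != y)
            if err < best[0]:
                best = (err, t, sign)
    return best[1], best[2]

def bagging_fit(X, y, n_models=25):
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))   # bootstrap resample
        models.append(fit_stump(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    votes = np.array([np.where(s * (X - t) > 0, 1, 0) for t, s in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)   # majority vote

X = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9, 1.0])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
models = bagging_fit(X, y)
print(bagging_predict(models, X))
```

The same fit-on-bootstrap / aggregate-by-vote structure applies when the base learner is an MLP or RBFN instead of a stump.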
Authors - Pampati Sreya, Yashaswi D, Stephen R, Gobinath R, Ramkumar S Abstract - Predicting stock prices remains a challenging problem due to the highly dynamic and non-linear nature of financial markets. Traditional statistical models like ARIMA and GARCH often fail to capture the complexities inherent in stock market data. This paper investigates the use of deep learning techniques, focusing on Convolutional Neural Networks (CNNs) and a hybrid CNN-LSTM ensemble model for stock price prediction in the Indian stock market. The CNN model efficiently extracts temporal patterns from sequential data, while the CNN-LSTM ensemble leverages temporal dependencies for improved long-term prediction accuracy. Historical data from Tata Motors, spanning over two decades, was used to train and evaluate the models. Experimental results highlight the CNN-LSTM ensemble's superior performance in capturing volatile trends and long-term dependencies, with a notable decrease in test loss compared to standalone CNN. This study underscores the effectiveness of hybrid deep learning architectures in enhancing prediction reliability, paving the way for more adaptive and robust financial forecasting systems.
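Sequence models such as the CNN and CNN-LSTM described here are typically trained on fixed-length windows sliced from the historical price series. A minimal sketch of that preprocessing step (the lookback length and the data are placeholders, not the study's settings):

```python
import numpy as np

def make_windows(series, lookback):
    """Turn a 1-D price series into (X, y) pairs for sequence models:
    each sample is `lookback` consecutive values; the target is the next value."""
    X = np.array([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y   # add a channel axis -> (samples, timesteps, 1)

prices = np.arange(10, dtype=float)        # stand-in for closing prices
X, y = make_windows(prices, lookback=3)
print(X.shape, y.shape)                    # (7, 3, 1) (7,)
```

The resulting 3-D array is the shape that CNN and LSTM layers in common deep learning frameworks expect as input.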
Authors - Mohmed Umar, Jeevakala Siva Rama Krishna Abstract - In the era of complete digital connectivity, it is the need of the hour to keep networks safe from a wide range of cyberattacks. Traditional Network Intrusion Detection Systems (NIDS) rely mainly on signature-based approaches; though highly efficient at identifying known threats, they are weak at discovering new and evolving attacks, such as zero-day vulnerabilities, resulting in higher false positives and lower detection efficiency. We present a novel NIDS based on ensemble methods in machine learning, namely Random Forest and Bagging classifiers, which improve detection accuracy while reducing false alarms. We conduct extensive evaluations based on systematic data preprocessing, feature selection, and model training against benchmark datasets such as KDD Cup 99 and NSL-KDD. The proposed system achieves a detection accuracy of 99.81%, along with an F1 score of 99.82% and an AUC score of 99.81%, significantly surpassing the performance of traditional approaches. These results show the aptness of machine learning methodologies for enhancing network security, providing a flexible and scalable solution suited to real-time deployment in extensive environments. Future work will focus on further developing the scalability of the system and minimizing latency to ensure seamless real-time operation.
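The reported metrics can be computed from predictions without any ML library. The sketch below shows the textbook definitions of F1 and AUC (the latter as the probability that a randomly chosen positive outranks a randomly chosen negative), on toy labels rather than the paper's data:

```python
import numpy as np

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for binary labels."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def auc_score(y_true, scores):
    """Probability a random positive scores above a random negative (ties count half)."""
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

y_true = np.array([1, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 0, 1])
scores = np.array([0.9, 0.4, 0.3, 0.2, 0.8])
print(f1_score(y_true, y_pred), auc_score(y_true, scores))  # 0.8 and 1.0
```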
Authors - Kajal Joseph, Deepa Parasar Abstract - This study conducts a predictive analysis of company status using various machine learning algorithms, aiming to identify the models that deliver the highest accuracy and reliability for decision-making in finance and business intelligence. The study employs a range of algorithms, including Logistic Regression, DecisionTreeClassifier, Random Forest, Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Naive Bayes, Gradient Boosting Machines (GBM), XGBoost, AdaBoost, LightGBM, CatBoost, and Extra Trees Model, each rigorously tested on a preprocessed dataset split into training and testing sets to ensure robust validation. (Kunjir et al., 2020) Results indicate that ensemble models, particularly XGBoost and Random Forest, outperformed other methods, achieving accuracy rates exceeding 93%. This high level of performance highlights the value of ensemble techniques for handling complex predictive tasks, showcasing their suitability for applications where precise forecasting is critical. The study underscores the importance of model selection in predictive analytics, as it directly impacts the reliability of predictions in financial contexts. These findings suggest that machine learning, especially ensemble models like XGBoost and Random Forest, can significantly improve the accuracy of company status predictions, offering a dependable tool for stakeholders operating in uncertain environments. This research contributes valuable insights into the efficacy of machine learning in predictive tasks, advocating for data-driven decision-making approaches that can enhance business intelligence and strategic planning. (Liaw et al., 2019)
Authors - Meenu Suresh, Tonny Binoy, Saritha M S, Vimal Babu P, Dheeraj N, Aiswarya R Lakshmi Abstract - The present work introduces a video steganography technique which employs the Finite Ridgelet Transform (FRT) and Elliptic Curve Cryptography (ECC)-ChaCha20 encryption to hide confidential information. The proposed method begins by identifying key frames through the detection of scene changes. The FRT is employed to analyze the key frames, extracting their orientation and the subbands within which the secret data is encoded. To boost security, the ECC-ChaCha20 encryption technique serves as a preprocessing step prior to incorporating the secret data. The technique attains an embedding capacity of 72%, an SSIM of 0.9890, and PSNR values ranging from 70 dB to 72 dB. The experimental results highlight that the algorithm, besides boosting security, also ensures superior resilience and video quality.
Authors - Jagriti Singh Chundawat, Ashish Kumar, Monika Saini Abstract - The purpose of this paper is to optimize the availability of a thermal power plant. A thermal power plant (TPP) is a comprehensive system with multiple interconnected subsystems used for power generation. This TPP system has three subsystems: boiler, superheater, and reheater. These subsystems are connected to each other in a series configuration. To improve the availability of the system, the steady-state availability is derived with the help of normalizing equations, and the Chapman-Kolmogorov equations are derived from the Markov birth-death process. The system's failure and repair rates are statistically independent and exponentially distributed. The numerical results show that availability increases from 0.997903 to 0.998725 as the repair rate increases.
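For a single repairable unit, the steady-state balance equations can be solved numerically and checked against the closed form A = μ/(λ+μ). The failure and repair rates below are hypothetical, not the paper's values:

```python
import numpy as np

# Two-state Markov model of one repairable unit: state 0 = up, state 1 = down.
# In steady state the Chapman-Kolmogorov equations reduce to pi @ Q = 0
# together with the normalizing condition sum(pi) = 1.
lam, mu = 0.002, 0.5               # hypothetical failure and repair rates (per hour)
Q = np.array([[-lam,  lam],        # generator matrix
              [  mu,  -mu]])

# Replace one balance equation with the normalization constraint, then solve.
A = np.vstack([Q.T[:-1], np.ones(2)])
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi[0], mu / (lam + mu))      # numeric solution matches the closed form
```

The multi-subsystem model in the paper works the same way, just with a larger state space and generator matrix.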
Authors - Mannem Sri Nishma, Satendra Gupta, Tapas Saini, Harshada Suryawanshi, Anoop Kumar Abstract - Face recognition-based authentication has become a critical component in today's digital landscape, particularly as most business activities transition to online platforms. This is especially evident in the finance and banking sectors, which have shown significant interest in adopting online processes. By leveraging this technology, these industries can enhance operational efficiency, promote business growth, reduce reliance on manpower, and automate several processes effectively. However, face recognition systems are susceptible to face spoofing attacks, where malicious actors can attempt to deceive these systems using facial images or videos. Some attackers even use masks resembling authorized individuals to trick recognition cameras into perceiving them as real users. To counter such threats, liveness detection has emerged as a critical research area, focusing on identifying and preventing face spoofing attempts. The proposed approach utilizes a deep learning technique tailored for face liveness detection. The experiments are conducted using the Replay-Mobile, MSU-MFSD, and Casia-FASD datasets, which are widely used for recognizing live and spoofed faces, together with our own dataset. The system achieved an impressive area under the ROC curve (AUC) of 0.99, demonstrating its effectiveness in detecting face spoofing.
Authors - Kusuma B S, Meghana Murthy B V, Preksha R, Srushti M P, C Balarengadurai Abstract - Communication barriers between the Deaf community and hearing people remain among the major challenges facing modern society. The paper proposes a system that translates Indian Sign Language (ISL) gestures into audio and video outputs, enabling non-signers to interact easily with signers. Advanced machine learning techniques, such as Support Vector Machines and Convolutional Neural Networks, enable the tool to recognize ISL motions in real time and convert them into the corresponding video and audio formats. In this respect, the paper aims to make communication more accessible and bridge the communication gap by recognizing and translating gestures. Real-time recognition algorithms overcome the challenges of hand gesture detection to provide an intuitive and seamless interaction experience. This approach is an effective strategy for enhancing communication in government and industry, with a special focus on smart writing. Results confirm the method's promise for broader social interaction by significantly improving the speed and accuracy of communication for deaf individuals.
Authors - Bitan Pratihar Abstract - We, human beings, have two different forms of memory, namely pulling memory and pushing memory (also known as working memory). A pure pulling memory pulls a person towards itself; consequently, he/she spends a significant amount of time memorizing the incident but does not gain anything significant in his/her decision making directly. On the other hand, a pure pushing memory pushes a human being to take decisions, and thus it may have a direct influence on his/her learning. However, neither pure pulling memory nor pure pushing memory alone may be beneficial to effective learning in the human brain. A proper combination of pulling and pushing memories may be required to ensure a significant effect of memory on the learning of neural networks. The novelty of this study lies in formulating this combination as an optimization problem and solving it using a recently proposed nature-inspired intelligent optimization tool. The effectiveness of this novel idea of correlating the combined form of memory with the learning of neural networks has been demonstrated on two well-known data sets. This combined form of memories is found to have a significant influence on the learning of neural networks, and the proposed approach may have the potential to solve the well-known memory loss problem of neural networks.
Authors - Martin Mollay, Deepak Sharma, Pankajkumar Anawade, Chetan Parlikar Abstract - The primary intention of the present study is to examine the impact of laws such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) on the digital marketing landscape. These data protection regulations impose a stringent regime on how firms gather, process, and protect the privacy of their clients. The main bottlenecks for marketers are therefore stricter consent mechanisms, less available data, and fewer options for personalization. Nevertheless, new technologies continue to emerge, and their effectiveness is reshaping the field. Firms increasingly turn to first-party data, reducing the need for intermediaries: they can collect information directly from the consumer, which naturally results in more productive and meaningful customer relationships. Advanced technologies such as artificial intelligence and machine learning, which can work with smaller datasets, also give companies a window to discover customized, and possibly even more valuable, insights from customer behavior without invading individual privacy or running afoul of the law. Additionally, the price of compliance with the regulations is high, notably for Small and Medium-sized Enterprises (SMEs); in return, compliance is a cost-effective way to win consumers' trust in the brand and make them loyal to it in the long run. In this new era, ethical marketing follows an evolutionary path in which transparency and the value of consumers' private space are the main topics. Personal data can be acquired in ways that are not compliant with privacy laws.
However, zero-party data or consumer information given to businesses might still be the source for personalized experiences that are privacy compliant.
Authors - Sachi Joshi, Upesh Patel Abstract - Cancer is a grave category of illnesses in which the body's aberrant cells proliferate and spread uncontrollably. It can appear in nearly every tissue or organ and take many different forms, each with its own distinct set of symptoms and side effects. Environmental variables, lifestyle decisions, and genetic abnormalities are typically linked to the development of cancer. The varied approaches to cancer diagnosis are examined in this study, with a focus on early detection and therapeutic strategies. This literature review covers a wide range of cancer types, such as brain tumours, leukaemia, and breast, lung, and cervical cancers, and offers recommendations for creating reliable machine learning-enhanced cancer detection techniques. The research elucidates several applications, techniques, and comparative analyses in this significant subject, ranging from imaging analysis to biomarker identification. The study explores the developing methods that lead to a more precise diagnosis. The study offers insights with a thorough examination of the benefits, drawbacks, and innovations of each technique, ranging from conventional diagnostic procedures to state-of-the-art technologies. It also directs future research efforts towards the hunt for more effective personalized illness management.
Authors - Akshay Honnavalli, Hrishi Preetham G L, Aditya Rao, Preethi P Abstract - In today's information-driven world, organizing vast amounts of textual data is crucial. Topic modelling, a subfield of NLP, enables the discovery of thematic structures in large text corpora, summarizing and categorizing documents by identifying prevalent topics. For Hindi speakers, adapting topic modelling methods used for English texts to Hindi is beneficial, as much of the research has focused primarily on English. This research addresses this gap by focusing on Hindi language topic modelling using a news category dataset, providing a comparative analysis between traditional approaches like LDA, LSA, NMF and BERT-based approaches. In this study, six open-source embedding models supporting Hindi were evaluated. Among these, the l3cube-pune/hindi-sentence-similarity-sbert model exhibited strong performance, achieving coherence scores of 0.783 and 0.797 for N-gram (1,1) and N-gram (1,2), respectively. Average coherence scores of all embedding models significantly exceeded traditional models, highlighting the potential of embedding models for Hindi topic modelling. Also, this research introduces a novel method to assign meaningful category labels to discovered topics by using dataset annotations, enhancing the interpretation of topic clusters. The findings illustrate both the strengths and areas for improvement in adapting these models to better capture the nuances of Hindi texts.
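Topic coherence can be measured in several ways, and the study's exact variant is not stated. As one common illustration, the UMass coherence of a topic's top words over a toy corpus can be computed like this (the documents and topic words are placeholders):

```python
from itertools import combinations
from math import log

def umass_coherence(topic_words, docs):
    """UMass-style coherence: sum over word pairs of
    log((co-document frequency + 1) / document frequency of the first word)."""
    docsets = [set(d) for d in docs]
    def df(*words):
        return sum(all(w in ds for w in words) for ds in docsets)
    score = 0.0
    for wi, wj in combinations(topic_words, 2):
        score += log((df(wi, wj) + 1) / df(wi))
    return score

docs = [["market", "stock", "price"], ["market", "price"], ["cricket", "team"]]
print(umass_coherence(["market", "price"], docs))  # co-occurring words score higher
print(umass_coherence(["market", "team"], docs))   # words that never co-occur score lower
```

Higher scores indicate that a topic's top words tend to appear in the same documents, which is the intuition behind the coherence numbers reported above.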
Authors - Guhan Senthil Sambandam, Priyadarshini J Abstract - Machine learning has significantly impacted daily life, with machine translation emerging as a rapidly advancing domain. In healthcare, machine learning presents opportunities for innovation, particularly in translating medical documents into low-resource languages like Tamil. This research develops a transformer-based model fine-tuned for medical terminology translation from English to Tamil. A major challenge was the lack of English-Tamil medical datasets, addressed through innovative data collection methods, such as extracting bilingual subtitles from Tamil YouTube videos. These datasets complement existing resources to enhance model performance. The final model was deployed as a REST API using a Flask-based server, integrated into a React Native mobile application. The app enables users to scan English medical documents, extract text via on-device Optical Character Recognition (OCR), and obtain Tamil translations. By combining advanced Natural Language Processing (NLP) techniques with user-friendly application design, this end-to-end system bridges linguistic gaps in healthcare, providing Tamil-speaking populations with improved access to critical medical information. This study highlights the potential of NLP-driven solutions to address healthcare disparities and demonstrates the feasibility of adapting machine translation systems to specialized domains with resource limitations. The approach also emphasizes scalability for broader applications in similar low-resource settings.
Authors - Sakshi Limkar, Chhandavi Gowardhan, Devyani Dahake, Sneha Naik, Arti Vasant Bang Abstract - Chatbots powered by artificial intelligence (AI) are becoming increasingly innovative tools in the field of mental health treatment. They provide scalable, affordable, and easily accessible support for people struggling with stress, anxiety, depression, and other mental health conditions. These conversational bots provide real-time therapeutic interventions, promoting emotional well-being by mimicking human interaction through Natural Language Processing (NLP) and Machine Learning (ML) techniques. We have therefore developed a chatbot for mental health assistance named CalmConnect. It is designed to assist users in identifying and addressing mental health concerns.
Authors - Akshar Thakor, Tanya Khunteta, Kaushal Shah, Hargeet Kaur Abstract - It is essential to control access to data. Since the beginning of digital communication, scholars have worked on ways to prevent eavesdropping on data, and they have succeeded through cryptographic techniques such as RSA and AES. Modern quantum computing has been shown to be capable of breaking these, so research and development in this field have been rapid in the last decade. It is better to take precautions while the threat is in its infancy and develop a future-proof cryptographic technique. The following article describes the issues with contemporary ciphers and how they are vulnerable to quantum computers, then suggests lattice-based cryptography as a strong contender for the solution to this vulnerability, presenting its benefits and the properties that synergize with certain domains of digital technology. It explains why it is a strong contender by providing examples of its performance in IoT devices, which are prevalent today and will only increase as this era progresses. By providing a comprehensive overview of developments in this realm, it presents to new researchers the importance of Lattice-Based Cryptography and suggests why it should be the focus of the field of cryptography in the future.
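A toy Learning-With-Errors (LWE) scheme illustrates the lattice-based idea of hiding a secret behind noisy linear equations. The parameters below are far too small for real security, and the construction is a textbook sketch, not any standardized scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
q, n = 3329, 16                      # toy parameters: modulus and dimension

def keygen():
    s = rng.integers(0, q, n)                     # secret vector
    A = rng.integers(0, q, (n * 4, n))            # public random matrix
    e = rng.integers(-1, 2, n * 4)                # small noise in {-1, 0, 1}
    b = (A @ s + e) % q                           # noisy linear equations
    return s, (A, b)

def encrypt(pub, bit):
    A, b = pub
    rows = rng.integers(0, 2, len(b)).astype(bool)   # random subset of samples
    u = A[rows].sum(axis=0) % q
    v = (b[rows].sum() + bit * (q // 2)) % q         # encode the bit near q/2
    return u, v

def decrypt(s, ct):
    u, v = ct
    d = (v - u @ s) % q
    return int(min(d, q - d) > q // 4)   # closer to q/2 means bit 1

s, pub = keygen()
for bit in (0, 1):
    assert decrypt(s, encrypt(pub, bit)) == bit
print("round-trip ok")
```

Decryption works because the accumulated noise (at most 64 here) stays well below q/4, so the encoded bit is always recoverable; recovering s from (A, b) is the hard lattice problem.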
Authors - Nitin Pandit, Sandeep Chaware, Adit Bagati, Yashraj Shegokar, Omkar Jadhav, Om Nikam Abstract - Denial-of-service (DoS) attacks have become common among hackers, who use them to gain reputation and respect in the cyber underground. A denial-of-service attack essentially means denying legitimate users the network services of a target network or server. Its main purpose is to ensure that legitimate users are temporarily unable to use the services on the network. In other words, a DoS attack is an attack that exhausts the target's memory so that legitimate users cannot be served, or that sends packets the target cannot process, causing the target to fail, reboot, or deny service to legitimate users. We develop online DoS protection software that can protect web servers.
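One common building block of DoS protection software is per-client rate limiting, for example a token bucket. The abstract does not describe the paper's actual mechanism, so this is a generic sketch with made-up limits:

```python
import time

class TokenBucket:
    """Classic token-bucket rate limiter: allow at most `rate` requests per
    second on average, with bursts of up to `capacity` requests."""
    def __init__(self, rate, capacity, now=None):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # refill tokens for the time elapsed since the last request
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2, capacity=5, now=0.0)
burst = [bucket.allow(now=0.0) for _ in range(7)]   # 7 requests at once
print(burst)                  # first 5 pass, the rest are dropped
print(bucket.allow(now=1.0))  # one second later, tokens have refilled
```

A protection layer would keep one bucket per client IP and drop or delay requests once a bucket is empty, which blunts flood-style attacks while letting normal traffic through.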
Authors - Vedant Patil, Bhargavi Bhende, Omkar Jadhav, Gitanjali Shinde, Kavita Moholkar Abstract - The increasing realism of AI-generated faces, driven by advancements in Generative Adversarial Networks (GANs) like StyleGAN and ProGAN, poses significant challenges in security, identity verification, and digital forensics. Current detection methods, primarily relying on Convolutional Neural Networks (CNNs), struggle to identify subtle artifacts in high-quality synthetic imagery. This paper proposes a hybrid model combining Vision Transformers (ViT) and XceptionNet in a soft-voting ensemble framework. ViT captures global spatial patterns, while XceptionNet excels in detecting localized texture inconsistencies. The ensemble achieves 92.3% accuracy, 92.5% precision, and an F1-score of 0.922 on a dataset of 188,800 real and AI-generated faces. Extensive experiments demonstrate the model’s robustness against diverse deepfake architectures, including those with minimal artifacts. This approach offers a state-of-the-art solution for differentiating real and AI-generated faces, with significant implications for fraud prevention, content moderation, and digital forensics.
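The soft-voting step itself is simple: average the two models' class-probability outputs and take the argmax. The probabilities below are invented placeholders standing in for ViT and XceptionNet outputs:

```python
import numpy as np

def soft_vote(prob_a, prob_b, weights=(0.5, 0.5)):
    """Weighted average of two models' class probabilities, then argmax per sample."""
    avg = weights[0] * np.asarray(prob_a) + weights[1] * np.asarray(prob_b)
    return avg.argmax(axis=1)

# hypothetical P(real, fake) outputs for three images
vit      = np.array([[0.30, 0.70], [0.80, 0.20], [0.45, 0.55]])
xception = np.array([[0.40, 0.60], [0.90, 0.10], [0.60, 0.40]])
print(soft_vote(vit, xception))   # → [1 0 0]
```

Note the third sample: the two models disagree, and averaging lets the more confident model (here XceptionNet) tip the decision.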
Authors - Rupali Ramdas Shevale, Monika Sharad Deshmukh Abstract - For efficient real-time decision-making in a variety of domains, including cybersecurity, finance, and the Internet of Things, accurate and trustworthy event categorization is crucial. By maximizing feature integration, this study explores how incorporating Redpanda, a real-time data streaming platform, into predictive algorithms might improve event categorization. Continuous, high-throughput data processing is made possible by Redpanda's low-latency, fault-tolerant architecture, which enables the real-time extraction of a variety of accurate attributes. Predictive models may use Redpanda's capability to access current, augmented feature sets, which will greatly increase classification accuracy and dependability. The integration process is thoroughly examined in the research, along with its effects on feature variety, model accuracy, and system robustness. The benefits of real-time data streaming in predictive analytics are demonstrated by empirical results, which indicate a significant boost in event categorization performance. By improving feature extraction and enhancing the dependability of predictive systems in dynamic contexts, the results establish Redpanda as a scalable and robust solution.
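In practice, real-time feature extraction over a stream often reduces to maintaining rolling statistics over a window of recent events. The sketch below uses a plain deque in place of a Redpanda consumer, with made-up event values, to illustrate the idea:

```python
from collections import deque

class StreamFeatures:
    """Maintain rolling features over the last `size` events of a stream."""
    def __init__(self, size):
        self.window = deque(maxlen=size)   # old events are evicted automatically

    def push(self, value):
        self.window.append(value)
        n = len(self.window)
        return {"count": n, "mean": sum(self.window) / n, "max": max(self.window)}

feats = StreamFeatures(size=3)
for v in [10, 20, 60, 30]:          # stand-in for values consumed from a topic
    latest = feats.push(v)
print(latest)                       # features over the last 3 events only
```

In a Redpanda deployment the `push` calls would be driven by a Kafka-compatible consumer, and the resulting feature vectors fed to the classifier with low latency.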
Authors - Jayashri D.Palkar, Anuradha S. Deshpande Abstract - Crop protection plays a vital role in the food supply: agricultural production depends on how healthy the crops are, and any adverse condition on crops leads to economic loss. Grapes are an important and widely cultivated crop, grown primarily in Mediterranean regions, with a market of over 189 billion United States dollars. They are grown for consumption as fresh fruit as well as in various processed forms such as drinks and sweets. Unlike many other plants, grapes can continue to thrive and develop despite disease, so their disease-control mechanisms must function well. At the same time, frequent misdiagnosis of these infections can lead to inadequate treatment of the known diseases, inducing even larger losses of 5-80% of the crop under inspection. Current computer-based solutions may not be precise enough, leading to high running costs, operational difficulties, and image quality issues due to distortions. The body of literature on algorithms for the detection and classification of grape crop diseases remains vast and continues to grow rapidly with newly emerging algorithms. This paper presents an overview of different disease-detection algorithms for optimizing grape disease detection, thereby aiding farmers in choosing the appropriate algorithm based on particular diseases and weather conditions. This study presents a systematic review of the various methods implemented in the literature and provides a framework for the use of AI-ML for effective disease detection.
Authors - Shilpa M Katikar, Vikas B Maral, Nagaraju Bogiri, Vilas D Ghonge, Pawan S Malik, Suyash B Karkhele Abstract - Effective forecasting and modeling in food demand supply chains are critical to minimizing waste, reducing costs, and ensuring product availability. This paper explores a comprehensive approach to forecasting food demand by leveraging regression-based models for analysis. We investigate how various machine learning regressors can predict food demand more accurately by examining key supply chain factors such as seasonal trends, price fluctuations, and consumer behavior. The study implements and compares multiple regressors, including ensemble learning models and neural network models, to assess their performance in predicting demand. We found that Gradient Boosting and XGBoost achieve the best overall forecasting accuracy and provide optimized solutions for the food supply chain. This research focuses on identifying the best modeling techniques, helping end users make proper decisions and bringing efficiency to food demand management.
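To make the gradient boosting idea concrete, here is a tiny from-scratch version for squared loss, where each round fits a one-split stump to the current residuals. The demand values are placeholders, and a real system would use a library such as XGBoost rather than this sketch:

```python
import numpy as np

def boost_fit(X, y, n_rounds=50, lr=0.1):
    """Tiny gradient boosting for squared loss: each round fits a one-split
    'stump' to the residuals and adds it with a small learning rate."""
    pred = np.full_like(y, y.mean())
    stumps = []
    for _ in range(n_rounds):
        resid = y - pred
        best = None
        for t in np.unique(X)[:-1]:          # try every split on the one feature
            left, right = resid[X <= t].mean(), resid[X > t].mean()
            sse = ((resid - np.where(X <= t, left, right)) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, t, left, right)
        _, t, left, right = best
        pred = pred + lr * np.where(X <= t, left, right)
        stumps.append((t, left, right))
    return y.mean(), stumps

def boost_predict(base, stumps, X, lr=0.1):
    pred = np.full(len(X), base)
    for t, left, right in stumps:
        pred += lr * np.where(X <= t, left, right)
    return pred

X = np.array([1.0, 2.0, 3.0, 4.0])            # stand-in for a weekly index
y = np.array([10.0, 12.0, 30.0, 34.0])        # stand-in for observed demand
base, stumps = boost_fit(X, y)
print(np.round(boost_predict(base, stumps, X), 1))
```

Each round corrects what the previous rounds got wrong, which is why boosting handles the seasonal and price-driven patterns mentioned above so well.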
Authors - S. B. Hema Anjali, Manikanta Sai Sumeeth, Sushama Rani Dutta Abstract - This study applies machine learning to predicting health insurance costs, a relevant issue given the increasing need for such estimates in a post-COVID-19 world. Using the Medical Cost Personal Dataset available on Kaggle, which offers 1,338 entries, we applied various models, notably XGBoost, Gradient Boosting Machine (GBM), Random Forest, and Support Vector Machines (SVM). Among our results, XGBoost gives the most accurate estimates, though implementing this technique was computationally expensive; Random Forest was less demanding and also proved highly effective. We also discuss how the big data paradigm was implemented using Spark to enhance performance when working on large datasets. As a whole, this work positions XGBoost as the leading model for health insurance cost prediction, while noting that there is further scope for improvement in deploying ML methods for decision-making in healthcare processes.
Authors - Shrikant Bhopale, Tahseen Mulla, Madhav Salunkhe, Sagarkumar Dange, Sagar Patil, Rohit Raut Abstract - Cardio-Vascular Disease (CVD) continues to be a prominent issue in worldwide health, emphasizing the crucial importance of accurate forecasting and timely prevention. Machine learning (ML) has become a vital tool in the quest to improve CVD diagnosis. The present study conducts a comparative performance analysis of various machine learning (ML) algorithms, including Naïve Bayes, Logistic Regression, Random Forest, Decision Tree, Artificial Neural Network, Support Vector Machine, and XGBoost, in the prediction of CVD. Our results reveal that XGBoost outshines other models, achieving outstanding accuracy, precision, recall, and F-measure. Its exceptional ability to balance precision and recall makes it an excellent choice for the early identification of CVD. This study makes a valuable addition to the expanding field of study on CVD prediction. It underscores the significance of employing advanced ML algorithms, which have the potential to significantly influence public health outcomes.
Authors - Yathin Reddy Duvuru, Seshank Mahadev, Saranya P Abstract - In this paper, we implement a deep learning model for photovoltaic (PV) power forecasting using Global Horizontal Irradiance (GHI) values, which are the major determinant of photovoltaic cell power output. We use a multilayer Long Short-Term Memory (LSTM) model combined with explainable AI (XAI) techniques, aimed at improving the interpretability of predictions across various forecasting horizons. The GHI data undergoes thorough pre-processing, including cleaning and downsampling, to ensure data quality and computational efficiency. The LSTM model is designed with multiple layers to capture temporal dependencies and nonlinearities, which are crucial for accurately forecasting PV power under variable environmental conditions. To evaluate model performance, multiple error metrics such as R², MAE, RMSE, and MAPE are utilized. In addition, a benchmark model is built as a reference to compare against the LSTM-based model, providing a baseline for assessing performance improvements. The use of XAI further enables the interpretation of the LSTM model's predictions, providing an understanding of feature importance and model behavior. We use the SHAP library to perform XAI analysis by calculating Shapley Values, and demonstrate how the SHAP library can be used on 3D LSTM data. Furthermore, the SHAP graphs provide a sense of the importance of each feature's role in the prediction.
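The error metrics named above have standard definitions that can be sketched directly; the true and predicted values below are placeholders, not the study's results:

```python
import numpy as np

def forecast_errors(y_true, y_pred):
    """Standard regression error metrics for forecast evaluation."""
    err = y_true - y_pred
    return {
        "MAE":  np.mean(np.abs(err)),                      # mean absolute error
        "RMSE": np.sqrt(np.mean(err ** 2)),                # root mean squared error
        "MAPE": np.mean(np.abs(err / y_true)) * 100,       # mean absolute % error
        "R2":   1 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2),
    }

y_true = np.array([100.0, 200.0, 300.0])   # stand-in for observed PV power
y_pred = np.array([110.0, 190.0, 300.0])   # stand-in for model forecasts
m = forecast_errors(y_true, y_pred)
print(m)
```

RMSE penalizes large errors more heavily than MAE, while MAPE and R² are scale-free, which is why papers typically report all four together.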
Authors - Suresh V Reddy, Sanjay Bhargava Abstract - Cybercrime on social media platforms such as Facebook and Twitter has emerged as a significant challenge due to the open, interactive nature of these platforms. Various machine learning (ML) and deep learning (DL) techniques have been deployed to detect different forms of cybercrime, including phishing, spamming, hate speech, and identity theft. This paper provides a comparative analysis of these approaches, focusing on their application to cybercrime detection on Facebook and Twitter. Through a detailed literature review, we evaluate the strengths and weaknesses of these techniques, considering their performance and scalability. Moreover, the ethical challenges and the need for privacy-preserving mechanisms are discussed, along with future directions for research.
Authors - Rakesh Babu B, Rajesh V, Syed Inthiyaz, Srinivasa Rao K, Sri Sravan V Abstract - Brain tumours are life-threatening disorders with significant fatality rates. Patients have a higher chance of survival when brain tumours are diagnosed early and treated effectively. Therefore, to improve and accelerate the early identification of brain tumours, computerized segmentation and classification techniques are needed. Tumours can be detected safely and promptly using brain scans such as computed tomography (CT), magnetic resonance imaging (MRI), and other techniques. Recent developments in artificial intelligence (AI) have brought revolutionary changes across many disciplines, and AI models are becoming essential tools for interpreting images in the biomedical field. Deep learning in particular demonstrates an extraordinary capacity to handle enormous data collections, revolutionizing numerous areas of the biomedical profession. This article evaluates state-of-the-art AI-based segmentation and classification systems and identifies the major classes of brain tumours. The learning capability and effectiveness of AI approaches are assessed. The Convolutional Neural Network (CNN) is one AI technique that has demonstrated remarkable performance in analysing medical imagery. Consequently, this review focuses on the processing of medical imagery, particularly brain MRI images, and examines various deep learning model architectures in addition to CNN.
Authors - Shubham Kadam, Chhitij Raj, Pankajkumar Anawade, Deepak Sharma, Utkarsha Wanjari, Janhvi Shirbhate, Sharvari Pipare Abstract - Artificial Intelligence (AI) is increasingly being hailed as the key to the future of healthcare supply chain management in countries such as India, where healthcare is a particularly complex setting for an integrated supply chain. This review presents the various data-driven AI technologies, such as Machine Learning (ML), Natural Language Processing (NLP), Computer Vision, and Robotic Process Automation (RPA), that help automate essential processes like demand forecasting, inventory management, and cold chain logistics in an efficient and timely manner. AI helps deliver vital supplies on time and minimizes service disruptions by utilizing predictive analytics and real-time monitoring. However, high implementation costs, data privacy concerns, the need for integration with legacy systems, and a shortage of skilled professionals remain barriers to AI adoption. To realize the full potential AI offers healthcare logistics, these issues must be addressed. Upcoming research directions include further development in quantum computing, IoT integration, and collaborative AI platforms to fulfil resilience and sustainability objectives for supply chains. The results underscore the potential of AI to transform health supply chains and provide an opportunity to realize more scalable, responsive, and efficient health services.
Authors - Rajeshree Khande, Sachin Naik, Akshay Tayade, Amar Kale, Kunal Phalke Abstract - The authors propose an LSTM-XGBoost hybrid model for portfolio optimization and stock price prediction. The model combines XGBoost, a gradient-boosting algorithm well suited to structured data, with Long Short-Term Memory (LSTM) networks, which excel at characterizing temporal relationships in time-series data. The LSTM model captures the patterns and trends over time in the ordered data typical of stock markets, while XGBoost refines the LSTM's predicted outputs to improve the precision and overall efficiency of the model. The authors apply this hybrid to variables such as volatility and moving averages derived from the historical NIFTY50 stock price index, obtaining a total model accuracy of 98.33%. The authors also use the Sharpe ratio to maintain an optimal portfolio, since it shows investors the expected stock return per unit of risk. This research contributes to enhancing financial forecasting by integrating deep learning and machine learning techniques, ultimately offering the formulation of a new risk-averse portfolio alongside stock price prediction.
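The Sharpe ratio mentioned above is, in its basic form, the mean excess return divided by the standard deviation of returns. A minimal sketch (illustrative only; the paper's exact formulation, e.g. any annualization factor, may differ):

```python
import math

def sharpe_ratio(returns, risk_free_rate=0.0):
    """Basic Sharpe ratio: mean excess return over the sample
    standard deviation of returns (no annualization applied)."""
    excess = [r - risk_free_rate for r in returns]
    mean = sum(excess) / len(excess)
    # Sample variance (Bessel's correction, n - 1 in the denominator).
    var = sum((e - mean) ** 2 for e in excess) / (len(excess) - 1)
    return mean / math.sqrt(var)
```

A higher ratio indicates more expected return for the same amount of volatility, which is why it is a common objective in portfolio selection.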
Authors - Jaiditya Nair, Sunil Kumar Abstract - The increasing demand for AI-driven solutions in software development has spurred research into generating code from natural language prompts. This paper presents a Retrieval-Augmented Generation (RAG) pipeline for code generation, making use of embedding models, contextual retrieval, and advanced language models such as Mistral and CodeLLama. The approach incorporates document indexing and metadata extraction to create context-aware code snippets; the pipeline's final output is a Python file containing the generated code.
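The contextual-retrieval step of such a pipeline can be illustrated with a toy scorer. The sketch below substitutes a bag-of-words cosine similarity for the embedding models the paper actually uses; all names here are hypothetical:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query;
    the retrieved text would then be fed to the generator as context."""
    q = Counter(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]
```

In a real RAG pipeline the `Counter` vectors would be replaced by dense embeddings and the ranked snippets passed into the language model's prompt.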
Authors - Nishita Shekhar Bala, Sree Vani Bandi, Stephen R, Ravi Dandu, Balakrishnan C Abstract - The internet has become an essential part of human life, affecting domains including education, business, social interaction, and mental health. It pushes society forward by driving innovation, enhancing learning techniques, connecting people across the globe, and providing access to vast resources, making it a valuable tool in modern society. But it also brings problems such as internet addiction, sleep disorders, and health complications. This abstract discusses the dual impact of internet use, focusing on its significant benefits and possible dangers. Hence, internet use must be managed so that its benefits can be enjoyed while its harmful effects on human life are reduced.
Authors - Chandan Raj B R, A. Yasaswi, Deepika K, Uday Bhaskar Reddy, Delina Yadav K, Joshna K Abstract - Communicating with deaf individuals can be quite difficult. This article addresses the complexity of Indian Sign Language (ISL) character classification. Sign language alone is often insufficient for the hearing- and speech-impaired, because their hand gestures may appear confusing to those who have not learnt the language; communication should be two-way. In this paper, we discuss how a language can be learnt through sign language. Images are processed using computer vision operations, including grayscale conversion, dilation, and masking. We employ Convolutional Neural Networks (CNN) to train on and recognize images, and our model achieves an accuracy of approximately 95%. Gestures serve as a nonverbal communication tool: people with hearing or speech difficulties frequently use them to communicate with others or among themselves. Many experiments are undertaken each year, with numerous articles published in journals and conferences, and research on vision-based gesture recognition is ongoing. This work focuses on three areas: information retrieval, environmental information, and gesture representation. In terms of identity verification, we also evaluated the authentication system's effectiveness. Gestures are generated by the physical movement of the human hand, and gesture recognition also contributes to improvements in autonomous vehicle operation. This paper uses the convolutional neural network (CNN) classification technique to detect and recognize human gestures. The workflow consists of region-of-interest extraction via masking, finger segmentation, normalization of the segmented finger images, and finger recognition using a CNN classifier. The mask separates the hand portion of the image from the rest of the image.
The histogram equalization approach is used to improve the contrast of each pixel in an image. This work uses a variety of scanning techniques to classify fingers from hand photographs. The segmented fingers from the hand image are fed into the CNN classification algorithm, which assigns the image to one of several classes. This research proposes gesture detection and recognition methods based on CNN classification, and the technique achieves good performance relative to cutting-edge methodologies.
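The histogram equalization step mentioned above maps each intensity through the normalized cumulative distribution of the image. A minimal sketch on a flat list of grayscale values (illustrative only; the paper presumably applies an equivalent library routine to full images):

```python
def equalize_histogram(pixels, levels=256):
    """Histogram equalization for a flat list of grayscale intensities.
    Each intensity is remapped via the normalized cumulative histogram,
    stretching the contrast across the full [0, levels-1] range."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution of intensities.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Standard remapping: round((cdf(v) - cdf_min) / (n - cdf_min) * (L - 1)).
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]
```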
Authors - Anudeep Arora, Ranjeeta Kaur, Neha Tomer, Vibha Soni, Neha Arora, Anil Kumar Gupta, Lida Mariam George, Prashant Vats Abstract - The incorporation of data analytics into internal audit operations is a noteworthy advancement in enhancing the efficacy and productivity of audits. In this paradigm, strategic analysis refers to using data-driven insights to evaluate risks, expedite audit procedures, and strengthen organizational controls. This article examines the use of strategic analysis in data analytics and internal audits, including key techniques, advantages, and difficulties. It discusses how sophisticated data analytics methods, such as machine learning, statistical analysis, and visualization software, can change the way auditing is done today. In addition, the paper examines case studies and potential future developments in the field, giving readers a thorough understanding of the various ways internal auditors might use data analytics to deliver audit results that are more precise and useful.
Authors - Artika Singh, Manisha Jailia Abstract - Effective management of infectious disease outbreaks relies heavily on informed decision-making processes. Several approaches to decision-making have been proposed, among them expert decision-making, creative problem solving, public engagement, and decision-making under deep uncertainty (DMDU) in outbreak management (OM). The integration of these aspects is critical to enhancing the responsiveness and efficiency of public health interventions. This paper discusses the current state of expert decision-making processes, the role of creativity in managing complex situations, and the impact and challenges of incorporating public and patient engagement (PPE) in OM. It concludes with recommendations for future research and practice to improve outbreak management strategies.
Authors - Prateeksha P Malagi, Priyanka R Patil, Shamshuddin K G, Suneeta V Budihal Abstract - The advent of blockchain technology presents a transformative opportunity for enhancing the integrity and efficiency of voting systems. This paper explores the design and implementation of a blockchain-based voting system aimed at addressing common challenges faced in traditional electoral processes, such as voter fraud, lack of transparency, and low participation rates. By leveraging the decentralized and immutable nature of blockchain, our proposed system ensures secure voter authentication, real-time vote tracking, and tamper-proof record keeping. The study outlines the technical architecture, including smart contracts and cryptographic techniques, while evaluating the system's performance through simulated voting scenarios. Furthermore, we discuss the implications of this technology for promoting democratic engagement and restoring public trust in electoral outcomes. Our findings suggest that a blockchain-based voting system not only enhances security and transparency but also offers a scalable solution to modern electoral challenges.
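The tamper-proof record keeping described above rests on hash chaining: each block's hash covers the previous block's hash, so altering any recorded vote invalidates every later link. A toy sketch of that idea (illustrative only, not the paper's implementation; a real system would add distributed consensus and proper voter authentication):

```python
import hashlib
import json

def add_vote(chain, voter_id, candidate):
    """Append a vote as a block whose hash covers the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = {"voter": voter_id, "candidate": candidate, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    chain.append({**payload, "hash": digest})
    return chain

def verify_chain(chain):
    """Recompute every hash and check each block links to its predecessor."""
    prev = "0" * 64
    for block in chain:
        payload = {k: block[k] for k in ("voter", "candidate", "prev")}
        recomputed = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or recomputed != block["hash"]:
            return False
        prev = block["hash"]
    return True
```

Any edit to a stored vote changes its recomputed hash, so `verify_chain` detects the tampering without needing to trust the storage layer.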