Authors - Bimal Patel, Jalpesh Vasa, Ravi Patel, Martin Parmar, Krunal Maheriya Abstract - Software robustness, a vital component of software quality, encompasses key attributes such as reliability, usability, efficiency, maintainability, and portability. This paper offers a comprehensive overview of these attributes and examines the role of optimization tools in enhancing software robustness. Reliability, which ensures a system’s consistent dependability, is achieved through techniques like redundancy, error handling, and extensive testing. Usability, focusing on the user experience, is improved through user-centered design, usability testing, and heuristic evaluation. Efficiency targets the optimal use of system resources such as CPU and memory, with performance profiling, load testing, and code optimization techniques helping identify and resolve bottlenecks. Maintainability, ensuring that systems can be easily modified or updated, is enhanced through modularity, code readability, and design patterns that simplify future changes. Portability, which allows software to operate across diverse platforms, is achieved through cross-platform frameworks and containerization technologies such as Docker and Kubernetes. Optimization tools, including profilers, load testing tools, static code analyzers, and dependency management tools, play a critical role in maintaining software robustness. These tools help identify performance issues, ensure resource efficiency, and improve code quality. By leveraging these tools, developers and project managers can build more reliable, efficient, and maintainable software systems. This paper serves as a valuable resource for improving the overall quality, resilience, and portability of software products.
Authors - Siddhesh Joshi, Manoj Naidu, Athrva Kulkarni, Sahil Kadam, Nilesh P. Sable, Pranjal Pandit Abstract - Augmented Reality (AR) is an emerging type of experience in which computer-generated content, tied to certain places and/or activities, is added to the Real World (RW). AR is starting to appear in audio-visual media (news, entertainment, sports, etc.) and is making a real and exciting appearance in other areas of life (e-commerce, travel, marketing, etc.) [2]. This paper proposes the development of a marker-based AR system that overlays interactive 3D models of industrial machinery onto real-world views. The proposed project will be based on Unity software, which will make it feasible to present complex industrial equipment and move away from typical product manuals that are only text-laden. Users can intuitively see and interact with different parts of their machinery during maintenance, troubleshooting, or training through the AR-based system. This will likely ensure greater user interaction with a shorter learning time for a technical operation and is, therefore, most beneficial for industries that depend heavily on machinery setups and maintenance. This paper describes the methodology and the probable effects of marker-based AR in digitizing product manuals, offering a first-hand view of the future of digital documentation across industrial life.
Authors - Ashita C. Kolla, Dattatray G. Takale, Parikshit N. Mahalle, Bipin Sule, Gopal Deshmukh Abstract - This research paper focuses on algorithmic bias in facial recognition technology with respect to parameters such as race and hairstyle. It involves a CNN model trained after data pre-processing and custom annotation. It further discusses advanced methods for dataset balancing, such as normalization and sampling, along with detailed annotations covering characteristics of different races and hairstyles. Compared to other models, the CNN model combines powerful feature extraction with bias mitigation methods such as adversarial training and annotation to improve the quality of predictions. The results reveal that the model makes significant progress, with good performance and reduced bias. This study helps the industry develop more reliable FRT systems with effective strategies for reducing bias while maintaining accuracy. These advancements are important for applications across industries where unbiased facial recognition is essential for fairness and effectiveness.
Authors - Shrivardhansinh Jadeja, Bimal Patel, Jalpesh Vasa Abstract - The evolution of software development methodologies has profoundly influenced the gaming industry, marking a transition from traditional approaches to agile frameworks that emphasize flexibility and responsiveness. Traditional methodologies often struggled to meet the rapidly changing demands of game design and player expectations, leading to extended development cycles and reduced competitiveness. In contrast, agile methodologies have emerged as a viable solution, focusing on iterative development, continuous feedback, and collaboration among cross-functional teams. This paradigm shift has enabled companies like Riot Games to significantly enhance operational efficiency and swiftly respond to market dynamics. This research paper investigates Riot Games' agile transformation as a case study, illustrating how the adoption of agile practices has fortified its competitive advantage over alternative players in the gaming sector. By examining Riot's innovative team structures, leadership models, and iterative development processes, this study elucidates the critical role of agility in fostering creativity, improving product quality, and accelerating time-to-market. The findings highlight the necessity of embracing agile methodologies not only for individual organizations but also for the broader gaming industry seeking sustainability and growth in an increasingly competitive landscape. Ultimately, this paper offers valuable insights into how agile transformation can act as a catalyst for success in game development, providing a framework for other companies aiming to enhance their competitive positioning.
Authors - Rashmy Moray, Sejal Vaishav, Sangam Dey, Sridevi Chennamsetti, Harsha Thorve Abstract - This paper investigates the impact of behavioural biases, specifically Loss Aversion, Regret Aversion, Reference Dependence, and Risk Perception, on algorithmic trading using the framework of Prospect Theory. Primary data were collected through a structured questionnaire from traders who use algorithmic trading. The statistical tool SmartPLS was employed to assess the endogenous factors and the behavioural biases that influence the intention to trade using algorithms. The findings indicate that Risk Perception and Reference Dependence significantly impact trading intent, whereas Loss Aversion and Regret Aversion do not show a significant influence. This suggests that the systematic and emotion-free nature of algorithmic trading minimizes the effects of certain emotional biases. The study contributes a deeper understanding of the behavioural biases of traders adopting algorithmic trading and offers a distinctive path for future research.
Authors - Padmanabh khunt, Martin Parmar, Het Khatusuriya, Mrugendra Rahevar, Bimal Patel, Krunal Maheriya Abstract - The smart garage system presented in this paper incorporates advanced security and remote-control functionalities to enhance the user experience and ensure secure access. The implementation of a One-Time Password (OTP) authentication mechanism provides an additional layer of security, effectively preventing unauthorized access to the garage. Central to the system are ESP32 microcontrollers, which facilitate reliable and efficient communication between the keypad, relay module, and the mobile application. Utilizing LoRa communication, the system achieves long-range wireless connectivity, enabling seamless interaction between ESP32 microcontrollers even in areas with limited network coverage. The mobile application, developed using React Native, offers a user-friendly interface for homeowners, featuring login/signup options, direct garage door control, and OTP generation for secure access. A robust server backend, built with Node.js and supported by a MongoDB database, ensures efficient management of user data, including login credentials and generated OTPs. Furthermore, an admin panel is integrated to enhance user administration and access control capabilities. This comprehensive smart garage system not only improves security but also provides convenience and reliability for modern homeowners.
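To make the OTP flow in the abstract above concrete, here is a minimal sketch of a time-limited, single-use OTP store. The actual backend is described as Node.js with MongoDB, so this Python version, its in-memory store, and the 120-second validity window are purely illustrative assumptions.

```python
import secrets
import time

OTP_TTL_SECONDS = 120          # hypothetical validity window
_active_otps = {}              # in-memory store standing in for the MongoDB backend

def generate_otp(user_id: str, digits: int = 6) -> str:
    """Create a random numeric OTP and remember when it expires."""
    code = "".join(secrets.choice("0123456789") for _ in range(digits))
    _active_otps[user_id] = (code, time.time() + OTP_TTL_SECONDS)
    return code

def verify_otp(user_id: str, code: str) -> bool:
    """Accept the code only once and only before it expires."""
    entry = _active_otps.get(user_id)
    if entry is None:
        return False
    stored_code, expires_at = entry
    if time.time() > expires_at or not secrets.compare_digest(stored_code, code):
        return False
    del _active_otps[user_id]  # single use
    return True

if __name__ == "__main__":
    otp = generate_otp("homeowner-1")
    print("OTP sent to app:", otp)
    print("keypad entry accepted:", verify_otp("homeowner-1", otp))
```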
Authors - Varalatchoumy M, Syed Hayath, Dinesh D, Dhanush C P, Manu R, V Sadhana Abstract - This paper presents an advanced Generative AI-powered system for video-to-text summarization, leveraging state-of-the-art Computer Vision (CV) technologies and Natural Language Processing (NLP) techniques. The developed system addresses the growing need to extract key information efficiently from lengthy videos across diverse domains such as education, entertainment, sports, and instructional content. By integrating visual and textual data, it pinpoints essential moments and generates concise summaries that capture the core message of the video, reducing the time users spend understanding extensive media. At the heart of this system lies a robust, open-source large language model (LLM), finetuned to produce human-like summaries from video transcripts. The system processes visual cues using advanced CV techniques—such as keyframe extraction and scene segmentation—and textual cues via Automatic Speech Recognition (ASR), which converts audio into text. This dual approach facilitates a deep understanding of spoken and visual content, ensuring that summaries are precise, relevant, and contextually accurate. The system has been evaluated on a diverse dataset, comprising videos of various genres, qualities, and lengths, demonstrating its capability to generalize effectively across a wide spectrum of content. Applications of this video summarization tool include content management, video indexing, educational platforms, and beyond, offering significant time-saving benefits to users and organizations. By incorporating real-time feedback, the system continuously refines its summarization techniques, enhancing accuracy and ensuring that users quickly access the most relevant information, thereby promoting greater accessibility and usability of video content.
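The keyframe-extraction step mentioned above can be illustrated with a simple frame-differencing heuristic. This sketch uses OpenCV with a hypothetical difference threshold and video filename; it stands in for, rather than reproduces, the system's actual CV pipeline.

```python
import cv2
import numpy as np

def extract_keyframes(video_path: str, diff_threshold: float = 30.0):
    """Return indices of frames whose mean absolute difference from the
    previous grayscale frame exceeds a threshold (a crude scene-change cue)."""
    cap = cv2.VideoCapture(video_path)
    keyframes, prev_gray, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is None or np.mean(cv2.absdiff(gray, prev_gray)) > diff_threshold:
            keyframes.append(idx)
        prev_gray = gray
        idx += 1
    cap.release()
    return keyframes

# keyframes = extract_keyframes("lecture.mp4")   # hypothetical input video
```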
Authors - Chetana Shravage, Shubhangi Vairagar, Priya Metri, Shreya C. Jaygude, Pradnya P. Sonawane, Pradnya D. Kudwe, Siddheshwari S. Patil Abstract - This project presents an innovative phishing detection system that addresses the limitations of traditional methods by combining URL-based and content-based features to accurately identify fraudulent websites. Unlike conventional approaches that rely heavily on blacklisting and heuristics, which struggle with zero-day attacks and frequent updates, this system employs machine learning algorithms to automatically extract and analyze critical features from URLs and webpage content. By leveraging a comprehensive dataset consisting of fraudulent (phishing) websites along with legitimate websites, the system aims to improve detection rates and optimize performance, as measured by evaluation metrics such as accuracy, precision, F1-score, recall, and false-positive rate. The system makes use of selected machine learning models such as Random Forest, Decision Tree, and Support Vector Machine (SVM), which provide the benefit of increased scalability, robustness, and improved effectiveness in phishing detection. Ultimately, this project aims to deliver a scalable, real-time detection solution that effectively mitigates phishing threats in a rapidly evolving landscape.
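As a rough illustration of URL-based lexical feature extraction feeding a Random Forest, the sketch below uses a handful of common URL features and a two-sample toy dataset; the feature set, data, and hyperparameters are assumptions, not the system's actual design, and a real deployment would add the content-based features the abstract mentions.

```python
from urllib.parse import urlparse

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def url_features(url: str) -> list:
    """Simple lexical features commonly used in phishing detection."""
    parsed = urlparse(url)
    host = parsed.netloc
    return [
        len(url),                              # overall URL length
        url.count("."),                        # dot count (subdomain abuse)
        url.count("-"),                        # hyphen count
        int("@" in url),                       # '@' symbol present
        int(parsed.scheme == "https"),         # HTTPS used
        int(any(c.isdigit() for c in host)),   # digits in hostname
    ]

# Toy labelled corpus: 0 = legitimate, 1 = phishing
urls = ["https://example.com/login", "http://paypa1-secure-update.com/@verify"]
labels = [0, 1]
X = np.array([url_features(u) for u in urls])
y = np.array(labels)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X, y)
print(clf.predict(np.array([url_features("http://free-gift-card.win/claim")])))
```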
Authors - Chetana Shravage, Shubhangi Vairagar, Priya Metri, Rohit Rajendra Kalaburgi, Harsh Anil Shah, Abhinandan Vaibhav Sharma, Shubham Shatrughun Godge Abstract - Heart sound categorization is critical for the early identification and detection of cardiovascular illness. Recently, deep learning methods have yielded promising improvements in the accuracy of heart sound classification systems. This work introduces a unique transformer-based model for heart sound classification that uses a powerful attention mechanism to capture both local and global dependencies in heart sound data. Transformers, in contrast to traditional models that rely on handcrafted features or recurrent networks, can dynamically focus on the most important characteristics in time-series data, making them well suited to handling the complexity and variability of phonocardiogram (PCG) signals.
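A minimal sketch of a transformer encoder classifying sequences of PCG spectrogram frames is shown below; the layer sizes, mean-pooling readout, and binary normal/abnormal output are illustrative assumptions rather than the proposed model's configuration.

```python
import torch
import torch.nn as nn

class PCGTransformer(nn.Module):
    """Toy transformer encoder that classifies a sequence of spectrogram frames."""
    def __init__(self, n_features=64, d_model=128, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, x):                       # x: (batch, time, n_features)
        h = self.encoder(self.input_proj(x))
        return self.classifier(h.mean(dim=1))   # mean-pool over time

model = PCGTransformer()
dummy = torch.randn(8, 200, 64)                 # 8 recordings, 200 frames, 64 mel bins
print(model(dummy).shape)                       # torch.Size([8, 2])
```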
Authors - Shubhangi Vairagar, Chetana Shravage, Priya Merti, Nikhil M. Ingale, Sakshi N. Gaikwad, Dhruv G. Yaranalkar, Atharva R. Pimple Abstract - Artificial Intelligence, specifically large language models and generative AI, has dramatically changed the finance industry. Drawing on a number of research studies, the current article discusses findings that provide insight into different applications of artificial intelligence in financial operations, including improved predictive analytics, operational efficiency, and quality of decision-making. The most critical findings point towards the efficiency of LLMs in achieving automation, improving financial analysis, and complying with enforcement standards, while the use of such technologies raises concerns around security, privacy, and ethics. In particular, the long-term implications for financial decision-making and the potential consequences of using these technologies stand out as ethical red flags. Thus, the review contributes new knowledge on AI in finance and provides grounds for further research aimed at the fully responsible development of AI.
Authors - Ashutosh Patil, Gayatri Bhangle, Sejal Kadam, Poonam Vetal, Prajakta Shinde Abstract - Advances in deep learning have enabled effective applications in agriculture, including fruit disease detection. Accurate identification of diseases in fruits of the Annonaceae and Rutaceae families is crucial for yield and quality improvements. Many studies employ deep learning models such as CNN, ResNet, VGG, and DenseNet for disease detection across fruits such as apple, orange, guava, and grapes. This article reviews recent research on deep learning for fruit disease detection and classification, focusing on model performance, data utilization, and visualization techniques. We analyze existing studies to identify optimal strategies for these fruit species and other underrepresented crops, outlining challenges and areas for future research across various types of fruit species and their families.
Authors - Susheela Vishnoi, Monika Roopak, Prashant Vats Abstract - Pathological tissue image categorization is essential in medical diagnostics, offering insights into disease types, progression, and treatment alternatives. The significant variability in tissue morphology and the overlapping visual patterns across different classes complicate accurate categorization. This study introduces an improved categorization model utilizing a bag-of-features (BoF) methodology integrated with the Roulette Wheel Whale Optimization Algorithm (RWWOA) to enhance classification accuracy and optimize feature selection efficiency. The proposed model utilizes the BoF technique to extract discriminative features from tissue images, thereby generating a feature-rich dictionary that represents various pathological structures. The RWWOA is employed to optimize feature selection, reducing dimensionality and concentrating on the most pertinent features for precise categorization. Our method integrates the exploration capabilities of the Whale Optimization Algorithm (WOA) with the probabilistic selection mechanism of the roulette wheel, dynamically balancing exploitation and exploration, which enhances convergence speed and categorization accuracy. Experimental results indicate that the RWWOA-BoF method outperforms traditional methods across various datasets, showing enhancements in classification precision, recall, and F1-score. This method offers a reliable resource for aiding pathologists in diagnostic imaging, which may expedite diagnostic processes and improve consistency in clinical practice.
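To illustrate the roulette-wheel selection mechanism referenced above, the sketch below performs fitness-proportional selection over candidate feature subsets. It deliberately omits the whale optimization update rules, and the binary-mask encoding and random fitness values are assumptions for demonstration only.

```python
import numpy as np

def roulette_wheel_select(population, fitness, rng=None):
    """Pick one candidate with probability proportional to its fitness."""
    rng = rng or np.random.default_rng()
    fitness = np.asarray(fitness, dtype=float)
    probs = fitness / fitness.sum()          # assumes non-negative fitness values
    idx = rng.choice(len(population), p=probs)
    return population[idx]

# Candidate feature subsets encoded as binary masks over a BoF dictionary of 50 words
population = [np.random.randint(0, 2, size=50) for _ in range(10)]
fitness = [np.random.rand() for _ in population]   # e.g. a validation F1-score
chosen = roulette_wheel_select(population, fitness)
print("selected features:", int(chosen.sum()), "of", len(chosen))
```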
Authors - Rowan Cowper, Grant Oosterwyk, Jean-Paul Van Belle Abstract - The rapid and widespread growth of AI use has brought about a number of important areas for research. This paper aims to examine the human factors that impact AI use: whether demographic attributes, trust in AI, and perceptions about AI influence whether someone will use AI or not. A survey among South African industry and academic respondents was used. The key findings of this study include that age and education have a significant impact on trust in AI. Domain knowledge and education levels were significant indicators of perception of AI, with higher levels of domain knowledge and education leading to lower perceptions of AI. Both AI trust and perception were found to have a significant positive impact on whether someone made use of AI or not. These findings may inform decision makers on targeted interventions, such as education, to increase the use of AI in industry and academic contexts. Hopefully further academic research will also validate our findings in other research contexts, such as India and/or different population segments.
Authors - Mihlali Mqoqi, Marita Turpin, Jean-Paul Van Belle Abstract - The manufacturing industry is undergoing significant changes due to the emergence of artificial intelligence (AI) technologies. This transition has implications, particularly with regards to the skills necessary to adopt and leverage these technologies effectively. This paper addresses the issue of the expanding gap between the skill set in the current workforce and those required for the future of work. A systematic literature review was conducted to investigate these skills, which resulted in the selection of 28 studies out of 216 initially identified that offered insights into the benefits and applications of AI in manufacturing. The findings show how AI has altered manufacturing processes through predictive maintenance, quality assurance, and product design. Further, there is a need for targeted upskilling and reskilling programs to bridge the current skill gap and equip the workforce to meet the changing demands of the industry. Initiatives that could be implemented for successful skills development are also discussed.
Authors - Anupama Pandey, Anubha Dwivedi, Rashmy Moray, Vivek Divekar, Shikha Jain, Sridevi Chennamsetti Abstract - The objective of the research is to identify the factors that affect the adoption of digital currencies by Generation Z and millennials, utilizing the UTAUT model. It highlights the role of performance expectancy, effort expectancy, social influence, and facilitating conditions in the use of digital currency. A designed questionnaire was used to collect primary data from Gen Z and millennials. The statistical technique SmartPLS was used to analyse the correlation and impact of the various UTAUT constructs on the intent to use digital currency. Results highlight that performance expectancy and social influence have a significant influence on the adoption of innovative solutions. The paper also describes issues concerning digital infrastructure and various regulations. Prospective strategies to increase the acceptability of digital currencies among young people are advanced, articulating their perceived usefulness and making their use easier.
Authors - G.G. Rajput, Sumitra M Mudda Abstract - This paper presents abnormality detection using segmentation techniques for leg fracture segmentation in animal X-ray images. Gaussian filtering is used to remove noise from the X-ray images, and the fracture is then segmented by thresholding operations. Experiments are performed on a clinical data set to assess the severity of the fracture for the thresholding segmentation methods studied. Features are extracted from the segmented images using GLCM techniques, and an SVM classifier is used to determine whether or not a given animal X-ray image is fractured.
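A compact sketch of the pipeline described above (Gaussian smoothing, threshold segmentation, GLCM texture features, SVM classification) is given below using scikit-image and scikit-learn. The Otsu threshold, the specific GLCM properties, and the random toy data are assumptions, not the paper's exact settings.

```python
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(image_u8):
    """Texture features from a grey-level co-occurrence matrix."""
    glcm = graycomatrix(image_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return [graycoprops(glcm, p)[0, 0] for p in props]

def segment_and_describe(xray):
    """Gaussian smoothing -> Otsu threshold -> GLCM features of the masked region."""
    smoothed = gaussian(xray, sigma=1.0)
    mask = smoothed > threshold_otsu(smoothed)
    masked = (smoothed * mask * 255).astype(np.uint8)
    return glcm_features(masked)

# Toy training run on random "images"; real use would load clinical X-rays.
X = np.array([segment_and_describe(np.random.rand(64, 64)) for _ in range(20)])
y = np.array([0, 1] * 10)                  # 1 = fractured, 0 = not fractured (toy labels)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```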
Authors - Rakhi Bharadwaj, Nikhil Patil, Riddhi Patel, Raj Pagar, Suchita Padhye Abstract - The integration of AI within mental health has continued to be an emergent area of interest as more and more mobile mental health applications, including chatbots, appear. Such chatbots are the focus of this paper, and their design based on NLP, machine learning, and sentiment analysis is illustrated, showing how they help people who suffer from anxiety and depression by providing individual therapeutic support. Although these applications equip users with means of tracking moods, sharing concerns, and getting ideas about mental health, they are not a license to practice. The study also responds to fundamental questions about their usability, usefulness, privacy, security, and protection of data. Because the need for mental health services continues to increase in parallel with technological progress, one has to maintain the harmony between innovation and ethics. Discussing current mental health platforms, their technological models, and case studies of their usage, this paper also emphasizes the related risks, such as the misuse of AI in mental health treatment. Beyond the polished language and the fascinating conversations with machines, the goal of the study is to provide theoretical perspectives as well as pragmatic suggestions for how AI can be used to improve mental health care responsibly.
Authors - Anushka Ashok Pote, Laukik Nitin Marathe, Suvarna Abhijit Patil, Sneha Kanawade, Deepali Samir Hajare, Varsha Pandagre, Arti Singh, Rasika Kachore Abstract - Due to the increasing vulnerability to different types of diseases, the use of artificial intelligence is rapidly increasing in the healthcare industry for creating systems that provide diagnosis, treatment, and patient care. One of the major challenges faced in recent days is that most traditional healthcare systems are not transparent and comprehensible. This review explores the importance of Explainable Artificial Intelligence in advancing precision medicine, focusing on personalized treatment and disease prediction. Despite being powerful, traditional AI models function as "black boxes," which do not offer any insight into how decisions are made. This limits their application in critical sectors like healthcare, where trust and accountability are crucial. Explainable AI makes systems more transparent and interpretable, allowing healthcare professionals to understand and trust AI-driven insights. It exhibits significant enhancements in diagnostic accuracy and treatment personalization across areas such as oncology, cardiovascular disease, and neurology. The review compares explainability-driven models with traditional models and reveals that XAI-based models offer better accuracy along with precision. They provide interpretable decision-making, which makes them more suitable for clinical applications, even though these systems face certain challenges such as computational complexity and the need for standardized evaluation metrics. This paper highlights the transformative potential of XAI in the healthcare industry by fostering more ethical, transparent, and patient-centered solutions. XAI is poised to revolutionize precision medicine by improving patient outcomes and making significant contributions to the healthcare industry.
Authors - Pinakci Kathotia, Ridhima Rathore, Kush Gupta, Eshita Vijay, Ujwala Kshirsagar Abstract - The developing subject of contactless voltage sensing and its revolutionary effects on the electrical sector are explored here. The basics of contactless voltage detection are first described, along with how it differs from conventional techniques. The paper then goes into detail on the different uses for this technology, such as non-invasive electrical testing and its employment in high-voltage environments. It also discusses the advantages of contactless voltage detection, such as improved efficiency and safety, as well as the difficulties that must still be overcome for this technology to reach its full potential. Examples from everyday life are used throughout the paper to highlight the practical applications of contactless voltage detection.
Authors - Vishalsinh Bais, Mansi Pagdhune, Wamik Khan, Gaurav Maske, Aditya Umredkar, Amol P. Bhagat Abstract - The rise of deepfakes has raised questions about the veracity of digital content and has led to substantial research into reliable detection techniques. In this study, we introduce a new deepfake detection approach based on the MesoNet architecture and the use of convolutional neural networks (CNNs). The proposed model has a multi-layer structure that includes convolutional layers, pooling layers, and dropout techniques to effectively extract and discriminate features. Trained on a dataset that includes our own forged images and deepfakes, this model shows promising results in identifying manipulated content. With the leaky ReLU activation function, our MesoNet model shows great promise in accurately distinguishing deepfakes from real images. Our experimental results demonstrate its effectiveness in distinguishing between forged and real images, demonstrating its value as a powerful tool in the fight against digitally manipulated content.
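A MesoNet-style network of the kind described above can be sketched as follows in PyTorch; the filter counts, pooling schedule, and 256x256 input size loosely follow the public Meso4 design but are assumptions here, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MesoNetLike(nn.Module):
    """Compact MesoNet-style CNN: four conv blocks, then a small dense head
    with leaky ReLU and dropout, ending in a real-vs-deepfake probability."""
    def __init__(self):
        super().__init__()
        blocks, in_ch = [], 3
        for out_ch, pool in [(8, 2), (8, 2), (16, 2), (16, 4)]:
            blocks += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                       nn.BatchNorm2d(out_ch),
                       nn.LeakyReLU(0.1),
                       nn.MaxPool2d(pool)]
            in_ch = out_ch
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(16 * 8 * 8, 16),   # feature map is 8x8 for 256x256 inputs
            nn.LeakyReLU(0.1),
            nn.Dropout(0.5),
            nn.Linear(16, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.head(self.features(x))

model = MesoNetLike()
fake_batch = torch.randn(4, 3, 256, 256)
print(model(fake_batch).shape)            # torch.Size([4, 1])
```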
Authors - Shruti Gandhi, Charmy Patel Abstract - This research presents an innovative framework for developing educational chatbots that redefine student support by integrating advanced natural language understanding (NLU), intent recognition, and pragmatic analysis. Leveraging machine learning techniques, including pre-trained models like BERT, the chatbot achieves state-of-the-art performance in recognizing user intents and delivering contextually relevant responses. By addressing the limitations of traditional systems, such as poor personalization and difficulty in handling nuanced queries, this framework enables dynamic, adaptive, and engaging interactions. The chatbot transcends conventional query handling through pragmatic analysis, allowing it to interpret subtle nuances, emotional states, and real-world contexts. This ensures personalized responses that align with individual learning needs, fostering deeper student engagement and comprehension. Finetuned with diverse datasets and instructional materials, the system is robust and scalable, making it suitable for a wide range of educational applications. This approach also emphasizes human-like interaction, combining emotional intelligence with context-aware capabilities to create a supportive learning environment. By enhancing response accuracy, adaptability, and user engagement, the chatbot sets a new benchmark in educational technology. Ultimately, this research demonstrates transformative potential in creating intelligent, scalable, and highly effective tools for modern education, paving the way for a more personalized and interactive learning experience.
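A minimal sketch of BERT-based intent recognition of the kind described above, using the Hugging Face transformers library; the intent labels are hypothetical, and the classification head would need fine-tuning on educational query data before its predictions are meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

INTENTS = ["ask_deadline", "request_material", "explain_concept", "small_talk"]  # hypothetical

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(INTENTS))   # head is untrained; fine-tuning omitted

def predict_intent(utterance: str) -> str:
    """Map a student utterance to the highest-scoring intent label."""
    inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return INTENTS[int(logits.argmax(dim=-1))]

print(predict_intent("Can you explain recursion with an example?"))
```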
Authors - Komal V. Papanwar Abstract - MorseMate is a user-friendly Morse code converter designed to simplify the process of converting Morse code to alphanumerical values by allowing input through three buttons for three separate functions involved in the transmission of the code. This approach eliminates the need for timing-based input, making Morse code more accessible and easier to use. The device, powered by an ESP8266 microcontroller and featuring a 128x64 Organic Light Emitting Diode (OLED) display, converts Morse code into readable text in real-time. Users can input their sequences with the press of a button, and a long press of the submit button clears the display, allowing for continuous use. Custom I2C configuration provides flexibility in hardware setup, while the compact design ensures portability. This paper elaborates on how the system combines simplicity, efficiency, and practicality, to make Morse code more accessible to a wider audience.
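The three-button decoding flow described above can be modelled in a few lines. The actual device runs on an ESP8266 with an OLED display, so this Python sketch only illustrates the dot/dash/submit decoding logic, with the class and method names invented for the example.

```python
MORSE_TO_CHAR = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z", "-----": "0", ".----": "1", "..---": "2", "...--": "3",
    "....-": "4", ".....": "5", "-....": "6", "--...": "7",
    "---..": "8", "----.": "9",
}

class MorseBuffer:
    """Accumulates dot/dash presses and decodes on 'submit', mirroring the
    three-button flow (dot, dash, submit) described above."""
    def __init__(self):
        self.sequence = ""
        self.text = ""

    def press_dot(self):
        self.sequence += "."

    def press_dash(self):
        self.sequence += "-"

    def press_submit(self):
        self.text += MORSE_TO_CHAR.get(self.sequence, "?")  # '?' for unknown codes
        self.sequence = ""
        return self.text

buf = MorseBuffer()
for press in [buf.press_dot, buf.press_dash, buf.press_submit]:  # ".-" decodes to "A"
    press()
print(buf.text)   # A
```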
Authors - Tanuja Zende, Ramachandra. Pujeri, Suvarna Pawar Abstract - Human Emotion Recognition (HER) has gained considerable attention in recent years, driven by advances in machine learning, acoustic signal analysis, and natural language processing. This survey systematically addresses advancements in HER using speech as a principal modality. Motivated by the importance of emotional intelligence in HCI, social robotics, and mental health assessment, this review provides a complete analysis of approaches, including feature extraction techniques, classification algorithms, and data representations used in the field of speech emotion recognition. Furthermore, this survey categorizes existing methods into traditional rule-based systems, machine learning algorithms, and state-of-the-art deep learning frameworks, stressing their strengths and limitations. Additionally, challenges such as the complexity of human emotions, the influence of contextual factors, and the need for annotated datasets to train robust emotion recognition systems are also examined. Current trends in multimodal emotion recognition (MER) and the incorporation of speech with other modalities are also discussed to provide a complete view. This amalgamation of existing literature aims to inform future research directions in emotion recognition systems, enhancing their applicability across varied fields.
Authors - Akshat Vashisht, Ishika Tekade, Juhi Shah, Aniket Sawarn, Deependra Singh Yadav, Palash Sontakke, Rajkumar Patil Abstract - Segmenting grape leaf stress condition accurately is a critical step in precision agriculture, as it enables early detection and treatment to mitigate crop losses. In this research, we propose a novel approach leveraging the Segment Anything Model 2 (SAM-2) for precise segmentation of stress condition regions on grape leaves. SAM-2 is a foundation model for promptable visual segmentation in images and videos. This model can generate high-quality masks with minimal user input, which makes it an ideal tool for such tasks. The SAM-2 model was tested on field images and we achieved an accuracy of nearly 70% without fine-tuning. Experimental results demonstrate that SAM-2 outperforms traditional segmentation models like U2Net and V-net. Data augmentation can improve the performance of SAM-2, especially in challenging tasks such as detecting early-stage leaf spots or stress condition symptoms in overlapping leaves. Techniques such as rotation, adjusting brightness, and scaling, can simulate different conditions, balance the dataset and improve generalization. This helps SAM-2 to adapt to different scenarios and improve its ability to detect complex patterns. This shows the potential of SAM-2 in agricultural applications and provides a framework that can be integrated into advanced plant monitoring systems. By automating the segmentation process with minimal user intervention, SAM-2 significantly reduces the labour-intensive task of manual stress condition detection, thus saving time and resources in agricultural operations. Integrating SAM-2 into the grape leaf stress condition detection pipeline enhances the precision and reliability of stress condition identification systems. Furthermore, its adaptability across different segmentation tasks provides a foundation for scalable and automated plant health monitoring systems. This study establishes SAM-2 as a transformative tool for advancing sustainable farming practices and precision agriculture.
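The augmentation techniques mentioned above (rotation, brightness adjustment, scaling) can be sketched with torchvision transforms; the specific parameter ranges and the image filename below are illustrative assumptions. Note that when segmentation masks are involved, the same geometric transforms would need to be applied to the masks as well.

```python
from PIL import Image
from torchvision import transforms

# Augmentations mirroring the rotation / brightness / scaling mentioned above.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=20),                    # simulate camera tilt
    transforms.ColorJitter(brightness=0.3),                   # simulate lighting changes
    transforms.RandomResizedCrop(size=512, scale=(0.7, 1.0)), # simulate scale variation
    transforms.RandomHorizontalFlip(),
])

leaf = Image.open("grape_leaf.jpg")            # hypothetical field image
augmented_views = [augment(leaf) for _ in range(8)]
```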
Authors - Safwan Ahmed, Rojas Binny, S N Velukutty, Animesh Giri Abstract - Industrial Internet of Things (IIoT) networks have started to play a crucial role in revolutionizing manufacturing and Industry 4.0. However, the distributed nature of IIoT and its relative infancy make it a prime target for cyberattacks. This paper proposes a new approach to address the threats faced by industries using a conversational Artificial Intelligence (AI)-interfaced deep neural network model for detecting attacks on an IIoT network. The proposed approach extends to threat mitigation and has been evaluated using an extensive IIoT network traffic data set, demonstrating 93% accuracy in detecting 14 of the most common threats plaguing the industry. This model introduces the integration of conversational AI with deep learning, offering a user-friendly interface for naive users and accurate threat detection. The broader impact of this work lies in its potential to significantly enhance access to a robust and accurate real-time cyber-threat detection and mitigation system, thus contributing to a more secure and resilient industrial landscape.
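As a rough illustration of a deep neural network over pre-processed IIoT flow records, the sketch below runs one training step of a small feed-forward classifier; the feature count, layer sizes, and the 15-class output (14 attack types plus benign traffic) are assumptions rather than the evaluated model, and the conversational AI interface is not shown.

```python
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES = 48, 15   # illustrative: 14 attack types + benign traffic

model = nn.Sequential(
    nn.Linear(N_FEATURES, 128), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(64, N_CLASSES),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step on random tensors standing in for pre-processed flow records.
X = torch.randn(256, N_FEATURES)
y = torch.randint(0, N_CLASSES, (256,))
optimizer.zero_grad()
loss = loss_fn(model(X), y)
loss.backward()
optimizer.step()
print(f"toy batch loss: {loss.item():.3f}")
```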