Authors - Kajal Joseph, Deepa Parasar Abstract - This study conducts a predictive analysis of company status using various machine learning algorithms, aiming to identify the models that deliver the highest accuracy and reliability for decision-making in finance and business intelligence. The study employs a range of algorithms, including Logistic Regression, Decision Tree, Random Forest, Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Naive Bayes, Gradient Boosting Machines (GBM), XGBoost, AdaBoost, LightGBM, CatBoost, and Extra Trees, each rigorously tested on a preprocessed dataset split into training and testing sets to ensure robust validation (Kunjir et al., 2020). Results indicate that ensemble models, particularly XGBoost and Random Forest, outperformed other methods, achieving accuracy rates exceeding 93%. This high level of performance highlights the value of ensemble techniques for handling complex predictive tasks, showcasing their suitability for applications where precise forecasting is critical. The study underscores the importance of model selection in predictive analytics, as it directly impacts the reliability of predictions in financial contexts. These findings suggest that machine learning, especially ensemble models such as XGBoost and Random Forest, can significantly improve the accuracy of company status predictions, offering a dependable tool for stakeholders operating in uncertain environments. This research contributes valuable insights into the efficacy of machine learning in predictive tasks, advocating for data-driven decision-making approaches that can enhance business intelligence and strategic planning (Liaw et al., 2019).
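A minimal sketch of the kind of model comparison this study describes, using a synthetic binary classification dataset as a stand-in for the preprocessed company-status data (the dataset, split ratio, and hyperparameters here are illustrative assumptions, not the paper's):

```python
# Hypothetical benchmarking loop: train several classifiers on the same
# train/test split and compare test accuracy. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier  # assumes the xgboost package is installed

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)
models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "XGBoost": XGBClassifier(n_estimators=200, eval_metric="logloss"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```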
Authors - Meenu Suresh, Tonny Binoy, Saritha M S, Vimal Babu P, Dheeraj N, Aiswarya R Lakshmi Abstract - The present work introduces a video steganography technique which employs the Finite Ridgelet Transform (FRT) and Elliptic Curve Cryptography (ECC)-ChaCha20 encryption to hide confidential information. The proposed method begins by identifying key frames through the detection of scene changes. The FRT is employed to analyze the key frames, extracting the orientations and subbands within which the secret data is encoded. To boost security, the ECC-ChaCha20 encryption technique serves as a preprocessing step prior to incorporating the secret data. The technique attains an embedding capacity of 72%, an SSIM of 0.9890, and PSNR values ranging from 70 dB to 72 dB. The experimental results highlight that the algorithm, besides boosting security, also ensures superior resilience and video quality.
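For concreteness, a minimal sketch of the ChaCha20 preprocessing step only, using the Python cryptography package (the FRT embedding and the ECC key agreement are outside this snippet, and the random key and nonce here stand in for whatever the ECC stage would derive):

```python
# Illustrative ChaCha20 encryption of the secret payload before embedding;
# in the paper's scheme the key material would come from the ECC stage.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

key = os.urandom(32)    # placeholder for an ECC-derived key
nonce = os.urandom(16)  # this library's ChaCha20 takes a 16-byte nonce

secret = b"confidential payload to hide in the key frames"
encryptor = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
ciphertext = encryptor.update(secret)  # this byte stream is what gets embedded
```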
Authors - Jagriti Singh Chundawat, Ashish Kumar, Monika Saini Abstract - The purpose of this paper is to optimize the availability of a thermal power plant. A thermal power plant (TPP) is a comprehensive system with multiple interconnected subsystems used for power generation. The TPP system considered here has three subsystems, namely a boiler, a superheater, and a reheater, connected to each other in a series configuration. To improve the availability of the system, the steady-state availability is derived with the help of normalizing equations, and the Chapman-Kolmogorov equations are derived from a Markov birth-death process. The system's failure and repair rates are statistically independent and exponentially distributed. The numerical results show that availability increases from 0.997903 to 0.998725 as the repair rate increases.
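As a toy illustration of the steady-state machinery (Chapman-Kolmogorov balance plus a normalizing equation), a sketch for a single repairable unit with assumed failure rate lam and repair rate mu; the paper's three-subsystem model has more states but follows the same pattern:

```python
# Steady-state availability of a two-state (up/down) Markov birth-death model;
# the rates below are illustrative, not taken from the paper.
import numpy as np

lam, mu = 0.002, 0.95  # hypothetical failure and repair rates

# Generator matrix Q over states {0: up, 1: down}; the steady-state vector pi
# solves pi @ Q = 0 together with the normalizing equation sum(pi) = 1.
Q = np.array([[-lam, lam],
              [ mu, -mu]])
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("availability:", pi[0])  # analytically, mu / (lam + mu)
```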
Authors - Mannem Sri Nishma, Satendra Gupta, Tapas Saini, Harshada Suryawanshi, Anoop Kumar Abstract - Face recognition-based authentication has become a critical component in today's digital landscape, particularly as most business activities transition to online platforms. This is especially evident in the finance and banking sectors, which have shown significant interest in adopting online processes. By leveraging this technology, these industries can enhance operational efficiency, promote business growth, reduce reliance on manpower, and automate several processes effectively. However, face recognition systems are susceptible to face spoofing attacks, in which malicious actors attempt to deceive these systems using facial images or videos. Some attackers even use masks resembling authorized individuals to trick recognition cameras into perceiving them as real users. To counter such threats, liveness detection has emerged as a critical research area, focusing on identifying and preventing face spoofing attempts. The proposed approach utilizes a deep learning technique tailored for face liveness detection. The experiments are conducted using Replay-Mobile, MSU-MFSD, and CASIA-FASD, which are widely used datasets for recognizing live and spoofed faces, as well as our own dataset. The system achieved an impressive area under the ROC curve (AUC) of 0.99, demonstrating its effectiveness in detecting face spoofing.
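As a brief note on the reported metric, the AUC is computed from the detector's continuous liveness scores against ground-truth live/spoof labels, e.g. (the scores and labels below are placeholders, not the paper's outputs):

```python
# Illustrative AUC computation for a liveness detector's per-sample scores.
from sklearn.metrics import roc_auc_score

y_true = [1, 1, 0, 0, 1, 0]                      # 1 = live face, 0 = spoof
y_score = [0.92, 0.85, 0.10, 0.35, 0.78, 0.05]   # model's liveness scores
print("AUC:", roc_auc_score(y_true, y_score))
```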
Authors - Kusuma B S, Meghana Murthy B V, Preksha R, Srushti M P, C Balarengadurai Abstract - Communication barriers between the Deaf community and hearing people remain among the major challenges facing modern society. The paper proposes a system that translates Indian Sign Language (ISL) gestures into audio and video outputs, enabling non-signers to interact easily with signers. Advanced machine learning techniques, such as Support Vector Machines and Convolutional Neural Networks, are used to enable this tool to recognize ISL motions in real time and convert them into the appropriate video and audio formats. In this respect, the paper aims to "make communication more accessible and bridge the gap in communication in which gestures are recognized and translated." Real-time recognition algorithms overcome the challenges of hand gesture detection to provide an intuitive and seamless interaction experience. This approach is an effective strategy for enhancing communication in government and industry, with a special focus on smart writing. The results confirm the method's promise for broader social interaction by significantly improving the speed and accuracy of gesture recognition for deaf individuals.
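A minimal sketch of the CNN side of such a gesture recognizer, assuming fixed-size hand images; the architecture, input size, and class count are illustrative, not the paper's:

```python
# Hypothetical CNN for classifying ISL gesture images (Keras); layer sizes
# and num_classes are placeholders.
from tensorflow.keras import layers, models

num_classes = 26  # assumption: one class per ISL alphabet sign

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```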
Authors - Bitan Pratihar Abstract - We human beings have two different forms of memory, namely pulling memory and pushing memory (the latter also known as working memory). A pure pulling memory pulls a person towards itself; consequently, he or she spends a significant amount of time memorizing the incident but does not gain anything significant for his or her decision making directly. On the other hand, a pure pushing memory pushes a human being to take decisions, and thus it may have a direct influence on his or her learning. However, neither pure pulling memory nor pure pushing memory alone may be beneficial to effective learning in the human brain. A proper combination of pulling and pushing memories may be required to ensure a significant effect of memory on the learning of neural networks. The novelty of this study lies in formulating this combination as an optimization problem and solving it using a recently proposed nature-inspired intelligent optimization tool. The effectiveness of this novel idea of correlating the combined form of memory with the learning of neural networks has been demonstrated on two well-known data sets. This combined form of memories is found to have a significant influence on the learning of neural networks, and the proposed approach may have the potential to solve the well-known memory loss problem of neural networks.
Authors - Martin Mollay, Deepak Sharma, Pankajkumar Anawade, Chetan Parlikar Abstract - The primary research intention of the present study is to find out the impacts of laws such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) on the digital marketing landscape. This set of regulations relates to data protection, imposing a stringent regime on how firms gather, process, and hold their clients' private data. The main bottlenecks for marketers are therefore stricter consent mechanisms, less available data, and fewer options for personalization. Still, new technologies keep being launched, and their effectiveness is reshaping the field. Firms mostly turn to first-party data, which calls the need for intermediaries into question: they can collect information directly from consumers, which naturally results in much more productive and meaningful customer relationships. Adopting advanced technologies such as artificial intelligence and machine learning, which can work with smaller datasets, also gives companies a window to discover a large number of customized, and possibly even more valuable, insights into customer behavior without invading individual privacy in ways the law treats as a threat. Additionally, the cost of compliance with the regulations is high, notably for Small and Medium-sized Enterprises (SMEs). At the same time, compliance is the most cost-effective way to win consumers' trust in the brand and make them loyal to it in the long run. In this new era, ethical marketing follows an evolutionary journey in which complete openness and the value of consumers' private space are the main topics. Personal data can be acquired in ways that are not compliant with privacy laws; however, zero-party data, i.e., information consumers willingly give to businesses, can still be the source of personalized experiences that are privacy compliant.
Authors - Sachi Joshi, Upesh Patel Abstract - Cancer is a grave category of illnesses in which the body's aberrant cells proliferate and spread uncontrollably. It can appear in nearly every tissue or organ and take many different forms, each with its own distinct set of symptoms and side effects. Environmental variables, lifestyle decisions, and genetic abnormalities are typically linked to the development of cancer. The varied approaches to cancer diagnosis are examined in this study, with a focus on early detection and therapeutic strategies. This literature review covers a wide range of cancer types, such as brain tumours, leukaemia, and breast, lung, and cervical cancer, and offers recommendations for creating reliable machine learning-enhanced cancer detection techniques. The research elucidates several applications, techniques, and comparative analyses in this significant subject, ranging from imaging analysis to biomarker identification. The study explores the developing methods that lead to a more precise diagnosis and offers insights through a thorough examination of the benefits, drawbacks, and innovations of each technique, ranging from conventional diagnostic procedures to state-of-the-art technologies. It also directs future research efforts towards the search for more effective personalized illness management.
Authors - Akshay Honnavalli, Hrishi Preetham G L, Aditya Rao, Preethi P Abstract - In today's information-driven world, organizing vast amounts of textual data is crucial. Topic modelling, a subfield of NLP, enables the discovery of thematic structures in large text corpora, summarizing and categorizing documents by identifying prevalent topics. For Hindi speakers, adapting topic modelling methods used for English texts to Hindi is beneficial, as much of the research has focused primarily on English. This research addresses this gap by focusing on Hindi-language topic modelling using a news category dataset, providing a comparative analysis between traditional approaches such as LDA, LSA, and NMF and BERT-based approaches. In this study, six open-source embedding models supporting Hindi were evaluated. Among these, the l3cube-pune/hindi-sentence-similarity-sbert model exhibited strong performance, achieving coherence scores of 0.783 and 0.797 for N-gram (1,1) and N-gram (1,2), respectively. The average coherence scores of all embedding models significantly exceeded those of the traditional models, highlighting the potential of embedding models for Hindi topic modelling. This research also introduces a novel method to assign meaningful category labels to discovered topics by using dataset annotations, enhancing the interpretation of topic clusters. The findings illustrate both the strengths and areas for improvement in adapting these models to better capture the nuances of Hindi texts.
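A compact sketch of the embedding-plus-clustering step behind such BERT-based topic modelling, using the l3cube-pune/hindi-sentence-similarity-sbert model named above; the tiny corpus and cluster count are placeholders:

```python
# Illustrative pipeline: embed Hindi documents with a sentence-transformer,
# then cluster the vectors into topics. Corpus and k are toy placeholders.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

docs = [
    "शेयर बाजार में आज तेजी देखी गई",
    "क्रिकेट टीम ने मैच जीत लिया",
    "सरकार ने नई शिक्षा नीति की घोषणा की",
    "फिल्म ने बॉक्स ऑफिस पर रिकॉर्ड बनाया",
]
model = SentenceTransformer("l3cube-pune/hindi-sentence-similarity-sbert")
embeddings = model.encode(docs)

k = 2  # assumed topic count; a real corpus would support more clusters
labels = KMeans(n_clusters=k, random_state=42, n_init=10).fit_predict(embeddings)
print(labels)
```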
Authors - Guhan Senthil Sambandam, Priyadarshini J Abstract - Machine learning has significantly impacted daily life, with machine translation emerging as a rapidly advancing domain. In healthcare, machine learning presents opportunities for innovation, particularly in translating medical documents into low-resource languages like Tamil. This research develops a transformer-based model fine-tuned for medical terminology translation from English to Tamil. A major challenge was the lack of English-Tamil medical datasets, addressed through innovative data collection methods, such as extracting bilingual subtitles from Tamil YouTube videos. These datasets complement existing resources to enhance model performance. The final model was deployed as a REST API using a Flask-based server, integrated into a React Native mobile application. The app enables users to scan English medical documents, extract text via on-device Optical Character Recognition (OCR), and obtain Tamil translations. By combining advanced Natural Language Processing (NLP) techniques with user-friendly application design, this end-to-end system bridges linguistic gaps in healthcare, providing Tamil-speaking populations with improved access to critical medical information. This study highlights the potential of NLP-driven solutions to address healthcare disparities and demonstrates the feasibility of adapting machine translation systems to specialized domains with resource limitations. The approach also emphasizes scalability for broader applications in similar low-resource settings.
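A minimal sketch of the Flask REST endpoint described above, assuming a fine-tuned Hugging Face seq2seq checkpoint; the checkpoint path and route name are illustrative assumptions:

```python
# Hypothetical Flask endpoint serving an English-to-Tamil translation model;
# "./en-ta-medical-model" is a placeholder for the fine-tuned checkpoint.
from flask import Flask, jsonify, request
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

app = Flask(__name__)
tokenizer = AutoTokenizer.from_pretrained("./en-ta-medical-model")
model = AutoModelForSeq2SeqLM.from_pretrained("./en-ta-medical-model")

@app.route("/translate", methods=["POST"])
def translate():
    text = request.get_json()["text"]  # English text from the app's OCR step
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, max_new_tokens=256)
    tamil = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return jsonify({"tamil": tamil})

if __name__ == "__main__":
    app.run()
```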