Authors - K Sai Geethanjali, Nidhi Umashankar, Rajesh I S, Jagannathan K, Manjunath Sargur Krishnamurthy, Maithri C Abstract - This survey provides a comprehensive review of the methods used for lung cancer detection through thoracic CT images, focusing on various image processing techniques and machine learning algorithms. Initially, the paper discusses the anatomy and functionality of the lungs within the respiratory system. The review examines image processing methods such as cleft detection, rib and bone identification, and segmentation of the lung, bronchi, and pulmonary veins. A detailed literature review covers both basic image enhancement techniques and advanced machine learning methods, including Random Forests (RF), Decision Trees (DT), Support Vector Machines (SVM), K-Nearest Neighbors (KNN), Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Gradient Boosting. The review highlights the necessity for reliable validation techniques, explores alternative technologies, and addresses ethical issues associated with the use of patient data. The findings aim to assist researchers and practitioners in developing more accurate and efficient diagnostic tools for lung cancer detection by providing a concise review, thereby helping to save time and focus efforts on the most promising advancements.
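As a concrete illustration of the classical pipeline such surveys cover, the sketch below compares a few of the listed classifiers with scikit-learn; the feature matrix and labels are random placeholders standing in for features extracted from segmented CT regions, not any dataset from the reviewed papers.

```python
# A minimal sketch, assuming CT scans have already been segmented and
# reduced to a feature matrix X (e.g., texture/shape descriptors) with
# binary labels y; the data here are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))    # placeholder features per nodule/region
y = rng.integers(0, 2, size=200)  # placeholder benign/malignant labels

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```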
Authors - Bijeesh TV, Bejoy BJ, Krishna Sreekumar, T Punitha Reddy Abstract - Integrating artificial intelligence (AI) and advanced imaging technologies in medical diagnostics is revolutionizing brain tumor recurrence prediction. This study aims to develop a precise prognosis model following Gamma Knife radiation therapy by utilizing state-of-the-art architectures such as EfficientNetV2 and Vision Transformers (ViTs), alongside transfer learning. The research identifies complex patterns and features in brain tumor images by leveraging models pre-trained on large-scale image datasets, enabling more accurate and reliable recurrence predictions. EfficientNetV2 and ViTs achieved prediction accuracies of 98.1% and 94.85%, respectively. The study's comprehensive development lifecycle includes dataset collection, preparation, model training, and evaluation, with rigorous testing to ensure performance and clinical relevance. Successful implementation of the proposed model will significantly enhance clinical decision-making, providing critical insights into patient prognosis and treatment strategies. By improving the prediction of tumor recurrence, this research advances neuro-oncology, enhances patient outcomes, and personalizes treatment plans. This approach also enhances training efficiency and generalization to unseen data, ultimately increasing the clinical utility of the predictive model in real-world healthcare settings.
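A minimal transfer-learning sketch along the lines described, using the torchvision EfficientNetV2-S backbone; freezing the backbone, the two-class head, and the input size are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal transfer-learning sketch, assuming binary recurrence labels;
# the dataset loader is omitted and the two-class head is an assumption
# based on the abstract (recurrence vs. no recurrence).
import torch
import torch.nn as nn
from torchvision.models import efficientnet_v2_s, EfficientNet_V2_S_Weights

model = efficientnet_v2_s(weights=EfficientNet_V2_S_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the pre-trained backbone
    p.requires_grad = False
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, 2)  # new 2-class head

optimizer = torch.optim.AdamW(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 384, 384)       # placeholder batch of image slices
logits = model(x)
loss = criterion(logits, torch.tensor([0, 1, 0, 1]))
loss.backward()                       # gradients flow only into the head
optimizer.step()
```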
Authors - Melwin Lewis, Gaurav Mishra, Sahil Singh, Sana Shaikh Abstract - This paper focuses on the development of a 2D fighting game using Simple DirectMedia Layer 2 (“SDL2”), Good Game Peace Out (“GGPO”), and the Godot Game Engine. The project was built with the Godot Engine, and the prototype was implemented in C++ with the intent of showcasing the GGPO library for the implementation of rollback networking in fighting games, a technique that makes seamless online play possible without added input delay.
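A conceptual sketch of rollback networking (not the GGPO C API): the session predicts the remote player's input, and when the real input arrives late and differs, it restores a saved state and re-simulates the intervening frames. The state and input types here are simplified assumptions.

```python
# Conceptual rollback-networking sketch; GGPO implements this pattern in C,
# this Python version only illustrates the save/predict/rollback/resim loop.
import copy

class RollbackSession:
    def __init__(self, initial_state):
        self.state = initial_state
        self.saved = {}    # frame -> state snapshot
        self.inputs = {}   # frame -> (local, remote) inputs used
        self.frame = 0

    def advance(self, local, predicted_remote):
        self.saved[self.frame] = copy.deepcopy(self.state)
        self.inputs[self.frame] = (local, predicted_remote)
        self.state = simulate(self.state, local, predicted_remote)
        self.frame += 1

    def on_remote_input(self, frame, actual_remote):
        local, predicted = self.inputs[frame]
        if predicted == actual_remote:
            return                          # prediction held, nothing to do
        self.state = self.saved[frame]      # rollback to the saved snapshot
        self.inputs[frame] = (local, actual_remote)
        for f in range(frame, self.frame):  # re-simulate corrected frames
            l, r = self.inputs[f]
            self.saved[f] = copy.deepcopy(self.state)
            self.state = simulate(self.state, l, r)

def simulate(state, local, remote):
    # placeholder deterministic update; a real game steps physics/combat here
    return {"tick": state["tick"] + 1, "last": (local, remote)}

session = RollbackSession({"tick": 0, "last": None})
session.advance(local="punch", predicted_remote="idle")  # frame 0, predicted
session.advance(local="idle", predicted_remote="idle")   # frame 1
session.on_remote_input(0, "kick")  # late input differs -> rollback + resim
print(session.state)
```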
Authors - Anudeep Arora, Neha Tomer, Vibha Soni, Neha Arora, Anil Kumar Gupta, Lida Mariam George, Ranjeeta Kaur, Prashant Vats Abstract - In the rapidly changing healthcare sector, improving patient outcomes, maximizing operational efficiency, and guiding strategic decision-making all depend on the capacity to analyze and interpret data effectively. Identifying and analyzing outliers is a major challenge in healthcare analytics, as outliers can strongly influence the accuracy and dependability of data-driven conclusions. This article examines the significance of business outlier analysis in healthcare analytics, along with its methods, applications, and consequences for payers, providers, and legislators. By detecting and resolving outliers, healthcare organizations can strengthen their analytical capabilities, improving forecast accuracy, resource allocation, and ultimately patient care.
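One common outlier-analysis technique that fits the setting described is an Isolation Forest; the sketch below flags anomalous records in simulated data, where the columns (claim amount, length of stay) are hypothetical stand-ins for real healthcare metrics.

```python
# A minimal outlier-detection sketch on simulated claims data; the column
# meanings and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# columns: claim amount ($), length of stay (days) -- hypothetical metrics
claims = rng.normal(loc=[1200.0, 4.0], scale=[300.0, 1.5], size=(500, 2))
claims[:5] *= 8                          # inject a few extreme records

detector = IsolationForest(contamination=0.01, random_state=42)
labels = detector.fit_predict(claims)    # -1 flags an outlier
print("flagged records:", np.where(labels == -1)[0])
```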
Authors - Aswini N, Kavitha D Abstract - Obstacle detection is vital for safe navigation in autonomous driving; however, adverse weather conditions like fog, rain, low light, and snow can compromise image quality and reduce detection accuracy. This paper presents a pipeline to enhance image quality under extreme conditions using traditional image processing techniques, followed by obstacle detection with the You Only Look Once (YOLO) deep learning model. Initially, image quality is improved using Contrast Limited Adaptive Histogram Equalization (CLAHE) followed by bilateral filtering to enhance visibility and preserve edge details. The enhanced images are then processed by a pre-trained YOLOv7 model for obstacle detection. This approach highlights the effectiveness of integrating traditional enhancement techniques with deep learning for robust obstacle detection, even under adverse weather, offering a promising solution for enhancing autonomous vehicle reliability.
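A minimal sketch of the enhancement stage with OpenCV; applying CLAHE to the LAB luminance channel is a common choice assumed here, and the file name and parameter values are illustrative rather than the authors' tuned settings.

```python
# CLAHE on the luminance channel followed by bilateral filtering; the input
# path is a placeholder and the parameters are illustrative assumptions.
import cv2

img = cv2.imread("foggy_road.jpg")          # placeholder input frame
assert img is not None, "replace with a real image path"

lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)  # equalize luminance only
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)
enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

# edge-preserving smoothing before detection
enhanced = cv2.bilateralFilter(enhanced, d=9, sigmaColor=75, sigmaSpace=75)
cv2.imwrite("enhanced.jpg", enhanced)       # feed this frame to YOLOv7
```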
Authors - Hetansh Shah, Himangi Agrawal, Dhaval Shah Abstract - The paper outlines the design, implementation, and evaluation of the SHA-256 cryptographic hash function on an FPGA platform, focusing on its use in Bitcoin mining. SHA-256 is a key part of the Bitcoin system, generating unique hash values from data to keep it secure and intact. The goal was to create a fast, resource-efficient, hardware-based version of SHA-256 using VHDL and implement it on the ZedBoard FPGA development platform. The main focus was on the VHDL implementation, making it modular and pipelined to improve speed and resource efficiency. The ZedBoard, which features the Xilinx Zynq-7000 SoC, was used for the hardware implementation. The design also included message buffering, preprocessing, and a pipelined hash computation, allowing the system to handle incoming data in real time while producing hash outputs quickly. The algorithm's functionality was verified using simulation tools in Xilinx Vivado, and the hardware implementation results were compared to previous works. The results show that the proposed method utilizes fewer resources than previous works while maintaining a throughput 27% greater than the software solution. The hardware design significantly outperforms both software and SW/HW (HLS) versions in speed and energy use. The total on-chip power utilized was 12.898 W.
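For reference, the sketch below shows the SHA-256 message-schedule expansion that a pipelined design computes for each 512-bit block, written in Python purely to illustrate the arithmetic (per FIPS 180-4), not the authors' VHDL.

```python
# SHA-256 message-schedule expansion (FIPS 180-4): 16 input words are
# expanded to 64 schedule words; all arithmetic is modulo 2^32.
def rotr(x, n):
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

def message_schedule(block):
    """block: 64 bytes -> list of 64 32-bit schedule words W[0..63]."""
    w = [int.from_bytes(block[i:i + 4], "big") for i in range(0, 64, 4)]
    for t in range(16, 64):
        s0 = rotr(w[t - 15], 7) ^ rotr(w[t - 15], 18) ^ (w[t - 15] >> 3)
        s1 = rotr(w[t - 2], 17) ^ rotr(w[t - 2], 19) ^ (w[t - 2] >> 10)
        w.append((w[t - 16] + s0 + w[t - 7] + s1) & 0xFFFFFFFF)
    return w

print(hex(message_schedule(b"\x00" * 64)[63]))  # last word for a zero block
```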
Authors - Janwale Asaram Pandurang, Minal Dutta, Savita Mohurle Abstract - The Black Friday shopping event is now one of the most anticipated retail events worldwide, offering huge discounts and promotions across various product categories. For sellers, understanding customer purchasing behavior during this period is important for predicting sales, managing inventory, and planning marketing strategies. This paper focuses on developing a machine learning model that predicts customer spending capacity from previous Black Friday data, considering factors such as demographics, product types, and past purchases. After the dataset was collected and preprocessed, exploratory data analysis was conducted to find important trends. Several machine learning models, including Linear Regression, K-Nearest Neighbors (KNN) Regression, Decision Tree Regression, and Random Forest Regression, were applied and tested. The Random Forest Regression model, with an R² value of 0.81, showed the strongest predictive accuracy among these models. Such models can help sellers improve their productivity and increase revenue.
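A minimal sketch of the best-performing model reported above; the features are random placeholders standing in for the demographic and product fields a Black Friday dataset typically carries, so the printed R² will not match the paper's 0.81.

```python
# Random Forest Regression with an R^2 evaluation, on synthetic data; the
# feature semantics (age group, city, product type, ...) are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 6))   # placeholder demographic/product features
y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=1000)  # spend proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=7)
model = RandomForestRegressor(n_estimators=300, random_state=7)
model.fit(X_tr, y_tr)
print("R^2:", r2_score(y_te, model.predict(X_te)))
```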
Authors - Srikaanth Chockalingam, Saummya B. Gaikwad, Lokesh P. Shengolkar, Dhanbir S. Sethi Abstract - This paper presents an innovative microcontroller-based system designed to convert text files into Braille script, making Braille content more accessible for visually impaired users. The system leverages an ARM-based microcontroller and servo motors to enable real-time, mechanical translation of text into tactile Braille characters. To facilitate ease of use and allow offline operation, an SD card serves as the primary storage medium for text files, enabling users to load and convert documents without requiring an internet connection or additional devices. This design emphasizes affordability, scalability, and usability, with the primary aim of making Braille conversion technology more accessible to educational institutions, libraries, and individuals, particularly in resource-limited settings. By reducing dependency on costly, proprietary Braille technology, this system can improve access to information and literacy among visually impaired communities, especially in developing countries where Braille materials are often scarce or prohibitively expensive. The paper thoroughly explores the system's hardware and software components, detailing the architecture and function of each element within the overall design. Energy efficiency is emphasized to extend the device's operational time, and manufacturing costs are minimized to keep the solution within a low-cost budget. These design choices make this Braille converter a sustainable option for broad deployment and adoption. Further development aims to expand the device's functionality by integrating wireless connectivity for text input, allowing users to access a greater range of content through online sources. Additionally, future iterations could support a larger tactile display, accommodating more Braille cells simultaneously, which would improve the reading experience and enhance the system's application in educational environments.
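A minimal sketch of the text-to-Braille mapping stage such a device needs; the six-bit cell encoding follows standard Grade-1 Braille for the letters shown, while the servo-driver hook is a hypothetical placeholder for the firmware's actuator control.

```python
# Text-to-Braille cell mapping; only a few letters are shown, and
# set_servo() is a hypothetical stand-in for the real actuator driver.
BRAILLE = {                 # (dot1, dot2, dot3, dot4, dot5, dot6)
    "a": (1, 0, 0, 0, 0, 0),
    "b": (1, 1, 0, 0, 0, 0),
    "c": (1, 0, 0, 1, 0, 0),
    "d": (1, 0, 0, 1, 1, 0),
    "e": (1, 0, 0, 0, 1, 0),
}

def set_servo(pin, raised):
    # hypothetical hook: the firmware would drive the pin's servo here
    print(f"dot {pin}: {'up' if raised else 'down'}")

def render_cell(char):
    dots = BRAILLE.get(char.lower())
    if dots is None:
        return                          # unsupported character: skip for now
    for pin, raised in enumerate(dots, start=1):
        set_servo(pin, raised)

for ch in "bad":
    render_cell(ch)
```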
Authors - Olukayode Oki, Abayomi Agbeyangi, Jose Lukose Abstract - Subsistence farming is an essential means of livelihood in numerous areas of Sub-Saharan Africa, with a significant segment of the population depending on it for food security. However, animal welfare in these agricultural systems encounters persistent challenges due to resource constraints and insufficient infrastructure. In recent years, technological integration has been seen as a viable answer to these difficulties by enhancing livestock monitoring, healthcare, and overall farm management. This study investigates the effects of technological integration on animal well-being, with an emphasis on a case study from Nxarhuni Village in the Eastern Cape province of South Africa. The study surveys a random sample of 63 subsistence farmers to investigate the intricacies of technology adoption in rural areas, highlighting the necessity for informed strategies and sustainable agricultural practices. Both descriptive and regression analyses were employed to highlight the trends, relationships, and significant predictors of technology adoption. The descriptive analysis reveals that 56.6% of respondents had a positive perception of technology, even though challenges like animal health concerns, environmental conditions, and financial constraints persist. Regression analysis results indicate that socioeconomic status (coef = 1.4468, p = 0.059) and gender (coef = -1.1786, p = 0.062) are key predictors of technology adoption. The study recommends specialised educational programs, improvements in infrastructure, and community engagement to support sustainable technology use and enhance animal care practices.
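A minimal sketch of the kind of regression reported above (coefficients and p-values for adoption predictors), using statsmodels on simulated data; the variables and effect sizes are loosely modeled on the reported results, not the study's survey responses.

```python
# Logistic regression of technology adoption on simulated predictors;
# all data here are synthetic stand-ins for the survey responses.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 63                                    # sample size from the abstract
ses = rng.normal(size=n)                  # socioeconomic status (standardized)
gender = rng.integers(0, 2, size=n)       # 0/1 indicator
logit_p = 1.4 * ses - 1.2 * gender        # assumed effect sizes
adopted = rng.random(n) < 1 / (1 + np.exp(-logit_p))

X = sm.add_constant(np.column_stack([ses, gender]))
result = sm.Logit(adopted.astype(int), X).fit(disp=0)
print(result.summary())                   # coefficients with p-values
```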
Authors - Dinesh Rajput, Prajwal Nimbone, Siddhesh Kasat, Mousami Munot, Rupesh Jaiswal Abstract - We introduce a neural-network-based system that combines real-time avatar functionality with text-to-speech (TTS) synthesis. The system can produce speech in the voices of various talkers, including ones that were not seen during training. To generate a speaker embedding from a brief reference voice sample, the system makes use of a dedicated encoder trained on a large volume of voice data. Conditioned on this speaker embedding, the synthesizer converts text into a mel-spectrogram, and a vocoder turns it into an audio waveform. Concurrently, the produced speech is synced with a three-dimensional avatar that produces the corresponding lip motions in real time. This method transfers the speaker variability learned by the encoder to the TTS task, enabling the system to mimic genuine conversation in the voices of unseen speakers. On a web interface, the integrated avatar system ensures precise lip syncing of speech with facial movements. We also demonstrate that training the encoder on a diverse speaker dataset markedly improves the system's ability to adapt to novel voices. In addition, using random speaker embeddings, the model can generate unique voices distinct from those heard during training while retaining smooth synchronization with the avatar's visual output, further showcasing its capacity to produce high-quality, interactive voice-cloning experiences.
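A dataflow sketch of the cloning pipeline described above; every module is a hypothetical stand-in (random tensors in place of trained networks) meant only to show how the encoder, synthesizer, vocoder, and lip-sync stages connect.

```python
# Encoder -> synthesizer -> vocoder -> lip-sync dataflow; all modules are
# hypothetical placeholders, not the authors' trained networks.
import torch

def speaker_encoder(reference_wav):
    # real system: trained encoder -> fixed-size speaker embedding
    return torch.randn(256)

def synthesizer(text, speaker_embedding):
    # real system: text + embedding -> mel-spectrogram (frames x mel bins)
    n_frames = 20 * len(text)
    return torch.randn(n_frames, 80)

def vocoder(mel):
    # real system: mel-spectrogram -> waveform samples (e.g., 256 per frame)
    return torch.randn(mel.shape[0] * 256)

def lip_sync(mel):
    # real system: per-frame visemes driving the 3-D avatar's mouth
    return mel.mean(dim=1)          # one articulation value per frame

embedding = speaker_encoder("reference.wav")   # placeholder path
mel = synthesizer("hello world", embedding)
audio = vocoder(mel)
visemes = lip_sync(mel)                        # shares mel's time axis
print(audio.shape, visemes.shape)
```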