Authors - Sheela S. J, Rajeshwari B. S, Harsha M, Subhash T. D, Tejas H. S, Thanmaya Ganesh C. S, Harsha S. M, Keerthana T. V Abstract - Cardiovascular diseases (CVDs) are among the leading causes of death globally. Key statistics on CVDs for 2019 are as follows: 17.9 million deaths in total, accounting for 32% of global deaths, of which 85% (approximately 15.2 million) were due to heart attacks and strokes. Hence, early diagnosis plays a crucial role in reducing heart-related disease. Usually, healthcare professionals collect initial cardiac data using their quintessential instrument, the stethoscope. Traditional stethoscopes have significant drawbacks, such as weak sound enhancement and limited noise-filtering capabilities. Moreover, low-frequency signals, such as those below 50 Hz, may not be heard because of the variation in sensitivity of the human ear. Hence, the use of conventional stethoscopes requires experienced medical practitioners. To overcome these limitations, it is necessary to develop a device more sophisticated than the conventional stethoscope. In this context, the proposed work aims at the development of a digital stethoscope capable of displaying heart and lung sounds separately. Further, the proposed digital stethoscope permits documenting, converting, and transmitting heart and lung sounds digitally in the dB range, thereby reducing unnecessary travel to medical facilities. The results of the proposed stethoscope are compared and validated against conventional techniques.
Authors - Prem Gaikwad, Parth Masal, Mandar Kulkarni, Mousami P. Turuk Abstract - Visual Language Models (VLMs) are an emerging technology that integrates computer vision with natural language processing, offering transformative potential for healthcare. VLMs significantly enhance disease detection, diagnosis, and report generation by enabling automated analysis and interpretation of medical images. These models are designed to support healthcare professionals by streamlining workflows, improving diagnostic accuracy, and assisting in clinical decision-making. Applications include early disease detection through image analysis, automated report generation, and integration with electronic health records (EHR) for personalized medicine. Despite their promise, challenges such as data privacy, interpretability, and the scarcity of labelled datasets remain. However, ongoing advancements in AI-driven medical systems and the integration of multimodal data can potentially revolutionize patient care and operational efficiency in healthcare settings. Addressing these challenges is crucial for realizing the full potential of VLMs in clinical practice.
Authors - Kamini Solanki, Nilay Vaidya, Jaimin Undavia, Jay Panchal Abstract - Polycystic ovary disease (PCOD) is a condition in which the ovaries of women of childbearing age produce too many immature or partially mature eggs. Over time, these eggs develop into cysts within the ovaries. These cysts can lead to enlargement of the ovaries and elevated production of male hormones (androgens). Consequently, this hormonal imbalance can result in a range of issues such as fertility challenges, irregular menstrual cycles, unanticipated weight gain, and various other health complications. The associated symptoms often exert a long-term impact on both the physical and mental well-being of affected women. Statistics indicate that approximately 34% of individuals facing PCOD also grapple with depressive symptoms, while almost 45% experience anxiety. The primary objective of the proposed framework is to detect and classify PCOD from standard ultrasound images, with the assistance of large-volume datasets, using a deep learning model. PCOD significantly affects women's reproductive health, leading to various long-term complications. This work introduces a novel framework for automated PCOD detection that integrates Convolutional Neural Networks (CNNs) with deep learning, applied to ultrasound imaging. Unlike traditional diagnostic methods, which rely on manual interpretation and are prone to subjectivity, the proposed system leverages the powerful feature-extraction capabilities of CNNs to classify infected and non-infected ovaries with 100% accuracy. This high level of precision outperforms existing models and can be seamlessly integrated into clinical workflows for real-time diagnosis during sonography, facilitating early detection and improved fertility management.
By focusing on a deep learning approach, this work provides a scalable, reliable, and automated solution for PCOD diagnosis, marking a significant advancement in the use of medical imaging with artificial intelligence.
Authors - Poornima E. Gundgurti, Shrinivasrao B. Kulkarni Abstract - Latent fingerprints play a crucial role in forensic investigations, driven by both public demand and advancements in biometrics research. Despite substantial efforts in developing algorithms for latent fingerprint matching systems, numerous challenges persist. This study introduces a novel approach to latent fingerprint matching, addressing these limitations through hybrid optimization techniques. Recognizing latent fingerprints as pivotal evidence in law enforcement, our comprehensive method encompasses fingerprint pre-processing, feature extraction, and matching stages. The proposed latent fingerprint matching utilizes a novel approach named the Randomization Gravity Search Forest algorithm (RGSFA). Acknowledging the shortcomings of traditional techniques, our method enhances convergence speed and performance evaluation by integrating weighted factors. Precision, recall, F-measure, and recognition rate serve as performance metrics. The proposed approach achieves a high recognition rate of 99.9% and is successful in identifying and matching latent fingerprints, furthering the development of biometric-based personal verification techniques in forensic science and law enforcement. Experimental analyses, using publicly accessible low-quality latent fingerprints from the FVC-2004 datasets, demonstrate that the proposed framework outperforms various state-of-the-art approaches.
Authors - Krunal Maheriya, Mrugendra Rahevar, Martin Parmar, Deep Kothadiya, Arpita Shah Abstract - Plant diseases pose a significant threat to agricultural output, causing food insecurity and economic losses. Early detection is crucial for effective treatment and control. Traditional diagnosis methods are labor-intensive, time-consuming, and require specialized knowledge, making them unsuitable for large-scale use. This study presents a novel approach for classifying cassava leaf diseases using stacked convolutional neural networks (CNNs). The proposed model leverages pre-trained ResNet-18 features to enhance feature learning and classification accuracy. The dataset includes images of cassava leaves with various diseases, such as Cassava Mosaic Disease (CMD), Cassava Green Mottle (CGM), Cassava Bacterial Blight (CBB), and Cassava Brown Streak Disease (CBSD). Our method begins with data preparation, including image augmentation to increase robustness and variability. The ResNet-18 model is then used to extract high-level features, which are fed into a stacked CNN architecture made up of several convolutional layers, pooling layers, and non-linear activation functions. A fully connected layer is then used for classification. Experimental results demonstrate high accuracy in categorizing cassava leaf diseases. The proposed stacked CNN architecture combined with pre-trained ResNet-18 features offers a significant improvement over conventional machine learning and image processing methods. This study advances precision agriculture by providing a scalable and effective method for early disease identification, enabling farmers to control diseases more accurately and promptly, thereby increasing crop yield. The findings point to the promise of deep learning techniques in agricultural applications and provide directions for further study to create more complex models for the classification and diagnosis of plant diseases.
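The stacked-CNN stage described above can be illustrated with a minimal, self-contained sketch: one convolution, ReLU, and max-pooling pass over a stand-in feature map, written in plain NumPy. This only shows the layer mechanics; the feature map, kernel, and sizes are hypothetical, not the authors' ResNet-18 pipeline.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel map with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, s=2):
    """Non-overlapping s-by-s max pooling."""
    H2, W2 = x.shape[0] // s, x.shape[1] // s
    return x[:H2*s, :W2*s].reshape(H2, s, W2, s).max(axis=(1, 3))

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 16))        # stand-in for a ResNet-18 feature map
kernel = rng.standard_normal((3, 3))
out = max_pool(relu(conv2d(feat, kernel)))  # one conv -> ReLU -> pool stage
print(out.shape)  # (7, 7)
```

Stacking several such conv/ReLU/pool stages, followed by a fully connected layer, yields the architecture the abstract describes.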
Authors - Ruchi Tripathi, Anjan Mishra, Subrata Mondal, Arunangshu Giri, Dipanwita Chakrabarty, Wendrila Biswas Abstract - Agricultural products constitute a significant share of the retail industry. The growing popularity of the digital ecosystem can immensely affect the agricultural sector as well. Both consumers and retailers can benefit from the Internet of Things (IoT), as it has vast applications in agricultural product retailing. IoT helps a retailer establish an efficient supply chain with minimum wastage without compromising on quality. On the other side, it delivers authentic real-time information to consumers so that they can make efficient decisions. This study has identified some factors that yield decision satisfaction for consumers through the application of IoT in the agricultural product retailing sector.
Authors - Khush Zambare, Amol Wagh, Sukhada Mahale, Mayank Sohani Abstract - In today's all-digital world, search engines are the entry points to knowing most things. However, most search engines serve the general user, while profession-specific needs go unattended. A search for "Amazon" will return results for the e-commerce giant, even when the user is an environmental scientist looking for information about the Amazon rainforest or a cloud developer searching for Amazon Web Services (AWS). This generic approach leads to inefficiencies, as users need to sift through a lot of irrelevant information. This paper proposes a browser extension that personalizes search results using Artificial Intelligence and Machine Learning, with the aim of catering to individual users based on their profession, interests, and specific needs. The solution dynamically re-ranks search results as it learns from user behavior and search patterns, surfacing the most relevant information to save users' precious time. The paper discusses current trends in SEO, AI/ML applications, and personalization techniques to outline how this solution can revolutionize the search engine experience.
Authors - Suruchi Pandey, Hemlata Vivek Gaikwad Abstract - AI is rapidly being integrated across various sectors for more personalized and efficient training. This research explores the potential of AI in various training methods, along with the challenges and vast opportunities for learning and growth in using it. The potential for AI-driven training is vast, spanning fields like corporate, healthcare, education, and the military. This study examines how emerging technologies like virtual reality, augmented reality, and simulation-based training can personalize learning experiences, enhance skill development, and provide real-time feedback. It also addresses critical challenges to implementing AI in training, such as costs and data privacy concerns. Additionally, the paper discusses how AI-enabled training could transform traditional learning and development practices, opening up new possibilities for advanced, adaptive learning methods.
Authors - Shripad Kanakdande, Atharva Kanherkar, Ayush Dhoot, P.B.Tathe Abstract - Efficient inventory forecasting and waste management are essential for streamlining supply chains and cutting expenses, particularly in sectors like retail and food services where inadequate stock management can lead to large losses and environmental damage. This study presents a data-driven approach to inventory prediction that makes use of sophisticated machine learning models that evaluate past data, sales patterns, and seasonal fluctuations. The model seeks to increase demand forecasting accuracy by utilizing predictive capabilities, which would ultimately result in improved stock management and customer satisfaction. In order to help organizations reduce waste and increase resource efficiency, it also focuses on improving waste management through real-time monitoring and forecasting of surplus inventory. Furthermore, combining sustainable practices with predictive analytics promotes long-term corporate viability while minimizing environmental harm. In addition to increasing operational effectiveness, this all-encompassing strategy supports more general environmental sustainability goals. The suggested framework gives businesses a practical way to optimize and streamline their supply chain operations while fulfilling sustainability goals by offering a complete solution that can minimize the ecological footprint and the costs associated with keeping inventory.
Authors - U.Sakthi, Aman Parasher, Akash Varma Datla Abstract - This work seeks to classify various ship categories in the high-resolution optical remote sensing dataset known as FGSC-23 using deep learning models. The dataset contains 23 types of ships, but for this study six categories are selected: Medical Ship, Hovercraft, Submarine, Fishing Boat, Passenger Ship, and Liquified Gas Ship. The adopted ship categories were used to train four deep learning models: VGG16, EfficientNet, ResNet50v2, and MobileNetv2. Accuracy, precision, and AUC were used to evaluate the models, among which ResNet50v2 proved the most accurate. Using these models, it should be possible to achieve practical deployment for fine-grained classification of ships, contributing to enhanced maritime surveillance techniques. The ResNet50v2 model had the highest precision of 0.9058, while MobileNetv2 had the highest AUC of 0.9932. The identified models are analyzed further in this work to illustrate their advantages and shortcomings in fine-grained ship classification tasks. The practical implications of this research transcend theoretical comparisons of performance metrics, as useful information is provided to improve security applications in the maritime domain, surveillance, and monitoring systems. Categorization and identification of ships is a very important process in global maritime business because it feeds decision-making processes in fields like security and surveillance, fishing control, search and rescue, and conservation of the environment. The highlighted models, namely ResNet50v2 and MobileNetv2, proved to be robust in real-time application scenarios because of their ability to accurately and proficiently distinguish between the ship types.
In addition, this study suggests the promising possibility of further improving these models using strategies like transfer learning, data augmentation, and hyperparameter optimization, which would enable them to perform impressively on other maritime datasets. Therefore, the outcomes are beneficial for furthering work in automated ship detection and classification and are important for enhancing the overall effectiveness and safety of navies across the globe.
Authors - Shruti Anghan, Tirth Chaklasiya, Priyanka Patel Abstract - Technology is an indispensable tool that many industries use to transcend limitations and arrive at the best possible results. The agricultural sector constitutes a very significant part of the Indian economy. Half of the country's workforce is still employed by the agriculture industry. The natural environment within which agriculture operates plays a critical role in shaping the sector, and it throws up many challenges in real farming operations. Most agricultural processes in the country remain old-fashioned, and the industry is not ready to step into new technologies. Effective technology can enhance production and reduce the greatest barriers in the field. Today, farmers mostly plant crops based not on soil quality but on the market value of the crop and what the crops can return to them. This can harm both the land and the farmer. Properly applied, modern technologies such as machine learning and deep learning can help revolutionize this industry. This work shows how to apply these technologies properly to give the farmer maximum support in the area of crop advice.
Authors - Bimal Patel, Ravi Patel, Jalpesh Vasa, Mikin Patel Abstract - The study delves into Tableau's unique characteristics, including its intuitive interface, robust analytics capabilities, and advanced visualization features. By leveraging these features, Tableau empowers users to transform complex datasets into actionable insights, facilitating data-driven decision-making across various domains. The paper explores the extensive applications of Tableau in key industries such as finance, healthcare, retail, and education. In finance, Tableau aids in risk management and performance analysis, while in healthcare, it enhances patient care and operational efficiency through detailed data visualizations. The retail sector benefits from Tableau's ability to analyze sales performance and customer behavior, and in education, it tracks student performance and engagement metrics. Additionally, this research identifies and addresses common challenges associated with data visualization using Tableau, such as handling large datasets, ensuring data accuracy, and maintaining user engagement. The paper provides practical solutions and best practices to overcome these hurdles, ensuring optimal use of Tableau's capabilities. The paper shows how Tableau can be used to help different industries with their specific needs and problems using real-life examples. This study serves as a valuable resource for professionals and researchers seeking to maximize the potential of Tableau in their respective fields.
Authors - Aditi Zeminder, Vaibhav Patil, Prathamesh Raibhole, S V Gaikwad Abstract - This paper presents part of a study of a collaborative robot (cobot) designed for the optimization of work tasks, focusing on task selection and workplace optimization. This project investigates best practices by developing a kinematic editing library and using ROS and RViz to perform simulations that analyze and improve motion planning. We conducted an exhaustive review of the existing research literature on collaborative robot control and efficiency, and examined the usage of commercial collaborative software, such as Elephant Robotics' myCobot and Dobot, to inform the interface design. A Kivy-based control interface was designed to allow users to effectively interact with the robots and adjust parameters to complete tasks. This paper provides an overview of the process adopted and the challenges encountered during development and initial testing, and lays the groundwork for future developments including hardware integration and additional kinematic optimization.
Authors - Eshwari Khurd, Shravani Kamthankar, Avani Kelkar, Ravinder B. Yerram Abstract - One of the major challenges in speech recognition, medical imaging, and multimedia processing for radar or weather-forecasting applications is noise interference in audio and image signals, which invariably affects algorithmic precision and dependability. Denoising is responsible for removing unwanted noise while keeping the necessary details in the signal intact. Effective denoising methods for audio and image signals are under continuous research, evaluated across multiple parameters with priority given to the signal-to-noise ratio (SNR). In this paper, we survey various such denoising methods, with a focus on those using Principal Component Analysis (PCA) and Ensemble Empirical Mode Decomposition (EEMD).
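As a concrete illustration of the PCA-based denoising the survey covers, the sketch below projects a noisy multi-channel signal onto its dominant principal component via the SVD and compares reconstruction error against the clean signal. The synthetic sinusoidal data and the rank choice are assumptions for the demonstration, not drawn from any surveyed method.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 200)
clean = np.outer(np.sin(2 * np.pi * 5 * t), np.ones(50))   # 50 correlated channels
noisy = clean + 0.5 * rng.standard_normal(clean.shape)

# PCA via SVD: keep only the dominant principal component
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
k = 1
denoised = (U[:, :k] * s[:k]) @ Vt[:k, :]

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_denoised < err_noisy)  # True: the rank-k reconstruction is closer to the clean signal
```

Keeping only the leading components discards the directions dominated by noise, which is why SNR improves when the underlying signal is low-rank across channels.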
Authors - Renuka Deshmukh, Babasaheb Jadhav, Srinivas Subbarao Pasumarti, Mittal Mohite Abstract - In response to the issue of growing garbage, researchers, foundations, and businesses worldwide have developed concepts and created new technologies that sped up the process. Trash comes from a variety of sources, including municipal solid waste (such as discarded food, paper, cardboard, plastics, and textiles) and industrial waste (such as ashes, hazardous wastes, and materials used in building and demolition). Contemporary waste management methods often take sociological factors into account in addition to technological ones. This review paper's goal is to discuss the potential applications of cutting-edge digital technology in the waste disposal sector. With reference to smart cities, this study aims to comprehend the environment, including the opportunities, barriers, current best practices, and the catalysts and facilitators of Industry 4.0 technologies. An innovative approach for examining the use of digital technology in smart city transformation is proposed in this study. The suggested conceptual framework is analyzed in light of research done in both developed and developing nations. The study offers case studies and digital technology applications in trash management. This article examines the ways in which waste management firms are utilizing cutting-edge technology to transform waste management and contribute to the development of a healthier tomorrow.
Authors - Arvin Nooli, Preethi P Abstract - Recognizing emotional states from electroencephalogram (EEG) signal data is challenging due to its large dimension and intricate spatial dependencies. Our project illustrates a novel approach to EEG data analysis for emotion recognition tasks that employs Dynamic Graph Convolutional Neural Networks (DGCNN). The architecture takes advantage of the inherent graph structure of EEG electrodes to effectively capture spatial relationships and dependencies. Using a refined DGCNN model to process and classify the data into four primary emotional states (Happy, Sad, Fear, and Neutral), we configured the DGCNN with 20 input features per electrode, optimized across 62 electrodes, and utilized multi-layered graph convolutions. The model achieved an overall classification accuracy of 97%, with similarly high macro and weighted average scores for precision, recall, and F1-score, demonstrating its resilience and accuracy.
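The spatial aggregation a graph convolution performs over EEG electrodes can be sketched as a single symmetric-normalized layer, using the 62-electrode, 20-feature sizes from the abstract. The random adjacency, weights, and hidden width are placeholders; a DGCNN additionally learns the adjacency matrix itself, which this minimal sketch does not do.

```python
import numpy as np

rng = np.random.default_rng(1)
n_electrodes, n_feats, n_hidden = 62, 20, 8        # sizes taken from the abstract

A = (rng.random((n_electrodes, n_electrodes)) < 0.1).astype(float)
A = np.maximum(A, A.T)                             # symmetric electrode adjacency
A_hat = A + np.eye(n_electrodes)                   # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt           # symmetric normalization

H = rng.standard_normal((n_electrodes, n_feats))   # per-electrode feature vectors
W = rng.standard_normal((n_feats, n_hidden))
H_next = np.maximum(A_norm @ H @ W, 0.0)           # one graph-convolution layer (ReLU)
print(H_next.shape)  # (62, 8)
```

Each electrode's new representation mixes its own features with those of its graph neighbors, which is how spatial dependencies between electrodes enter the model.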
Authors - Chitraksh Madan Singh, Yash Kumar, Lakshya Gattani, A.Anilet Bala, Harisudha Kuresan Abstract - This study presents an analysis of Instagram reach using Passive Aggressive, Decision Tree, Random Forest, and Linear Regression models. The goal is to predict the impressions generated by posts based on features like likes, saves, comments, shares, profile visits, and follows. Using Instagram data, machine learning algorithms are applied to forecast the post reach, helping marketers optimize content strategies. Quantitative metrics such as Mean Squared Error (MSE) and R-squared (R2) are used to evaluate model performance, with Random Forest showing superior accuracy compared to other models.
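A hedged sketch of the evaluation protocol described above, using scikit-learn on synthetic stand-ins for the post-level features (likes, saves, comments, shares, profile visits, follows); the coefficients and noise level are invented for illustration, not taken from the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 500
# Synthetic stand-ins for likes, saves, comments, shares, profile visits, follows
X = rng.poisson(lam=[200, 40, 15, 10, 60, 8], size=(n, 6)).astype(float)
impressions = X @ np.array([3.0, 6.0, 4.0, 8.0, 2.0, 10.0]) + rng.normal(0, 50, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, impressions, random_state=0)
for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__,
          "MSE=%.1f" % mean_squared_error(y_te, pred),
          "R2=%.3f" % r2_score(y_te, pred))
```

The same MSE/R2 comparison extends directly to the Passive Aggressive and Decision Tree regressors the paper evaluates.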
Authors - Shreyas Shewalkar, Shweta Autade, Aditi Sonje, M.R. Kale Abstract - With the growing need for automated text recognition and image processing, we have explored techniques that enhance the accuracy of handwritten character recognition while simultaneously addressing image restoration challenges. Handwritten English Character Recognition leverages deep learning (DL) techniques to classify and accurately identify characters from scanned or photographed documents. A deep learning-based approach is employed to recognize the patterns in handwritten text, ensuring high precision in distinguishing between characters despite variances in writing styles. In addition to recognition, colorization of grayscale images has gained attention, where DL models predict and apply realistic colors to black and white images. The recognition process applies CNN (Convolutional Neural Networks) for character identification.
Authors - Sangjukta Halder, Renuka Deshmukh Abstract - This study scrutinizes the impact of ICT-driven financial literacy agendas in India, focusing on their role in promoting financial inclusion and enhancing governance. By leveraging digital tools such as mobile apps, online courses, and e-governance platforms, these programs have effectively increased financial literacy, particularly among underserved populations. The research highlights that while challenges such as the digital divide, language barriers, and varying levels of digital literacy persist, these programs significantly empower citizens to make informed financial choices and participate more actively in public fiscal management. The incorporation of financial literacy into digital platforms also fosters greater transparency and accountability in governance. Legislators, educators, and tech developers may benefit greatly from the insights this research offers for improving these programs. Additionally, it makes recommendations for future research topics to investigate the long-term effects of ICT-powered financial literacy programs on financial behaviours and governance across various socioeconomic situations in India.
Authors - Yasharth Sonar, Piyush Wajage, Khushi Sunke, Anagha Bidkar Abstract - Emotion recognition from speech is a crucial part of human-computer interaction and has applications in entertainment, healthcare, and customer service. This work presents a speech emotion recognition system that integrates machine learning and deep learning techniques. The system processes speech data using Mel Frequency Cepstral Coefficient (MFCC), Chroma, and Mel Spectrogram features extracted from the RAVDESS dataset. A variety of classifiers are employed, including a neural-network-based multi-layer perceptron, Random Forest, Decision Trees, Support Vector Machines, and other traditional machine learning models. To capture the temporal and spatial components of speech signals, we created a hybrid deep learning model that combines convolutional neural networks (CNN) with long short-term memory (LSTM) networks. The CNN-LSTM model outperformed the others in identifying eight emotions: neutral, calm, angry, fearful, happy, sad, disgusted, and surprised. This study demonstrates how well deep learning and conventional approaches may be combined to recognize speech emotions.
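Assuming MFCC-style feature vectors have already been extracted (in practice via an audio library such as librosa), the classical-baseline comparison can be sketched with scikit-learn as below; the synthetic, class-separated features and the four-emotion subset are placeholders, not the RAVDESS features used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)
emotions = ["neutral", "calm", "happy", "sad"]   # subset of the 8 RAVDESS classes
n_per, n_feats = 60, 40                          # 40 stand-in MFCC coefficients

# Synthetic class-separated vectors standing in for extracted MFCC features
X = np.vstack([rng.standard_normal((n_per, n_feats)) + 2.0 * i
               for i in range(len(emotions))])
y = np.repeat(np.arange(len(emotions)), n_per)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
for clf in (SVC(), RandomForestClassifier(random_state=0)):
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(clf).__name__, "accuracy=%.2f" % acc)
```

The CNN-LSTM hybrid replaces these classifiers with learned temporal feature extraction, but the train/evaluate loop is the same.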
Authors - K Sai Geethanjali, Nidhi Umashankar, Rajesh I S, Jagannathan K, Manjunath Sargur Krishnamurthy, Maithri C Abstract - This survey provides a comprehensive review of the methods used for lung cancer detection through thoracic CT images, focusing on various image processing techniques and machine learning algorithms. Initially, the paper discusses the anatomy and functionality of the lungs within the respiratory system. The review examines image processing methods such as cleft detection, rib and bone identification, and segmentation of the lung, bronchi, and pulmonary veins. A detailed literature review covers both basic image enhancement techniques and advanced machine learning methods, including Random Forests (RF), Decision Trees (DT), Support Vector Machines (SVM), K-Nearest Neighbors (KNN), Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Gradient Boosting. The review highlights the necessity for reliable validation techniques, explores alternative technologies, and addresses ethical issues associated with the use of patient data. The findings aim to assist researchers and practitioners in developing more accurate and efficient diagnostic tools for lung cancer detection by providing a concise review, thereby helping to save time and focus efforts on the most promising advancements.
Authors - Bijeesh TV, Bejoy BJ, Krishna Sreekumar, T Punitha Reddy Abstract - Integrating artificial intelligence (AI) and advanced imaging technologies in medical diagnostics is revolutionizing brain tumor recurrence prediction. This study aims to develop a precise prognosis model following Gamma Knife radiation therapy by utilizing state-of-the-art architectures such as EfficientNetV2 and Vision Transformers (ViTs), alongside transfer learning. The research identifies complex patterns and features in brain tumor images by leveraging pre-trained models on large-scale image datasets, enabling more accurate and reliable recurrence predictions. EfficientNetV2 and Vision Transformers (ViTs) produced prediction accuracy of 98.1% and 94.85% respectively. The study’s comprehensive development lifecycle includes dataset collection, preparation, model training, and evaluation, with rigorous testing to ensure performance and clinical relevance. Successful implementation of the proposed model will significantly enhance clinical decision-making, providing critical insights into patient prognosis and treatment strategies. By improving the prediction of tumor recurrence, this research advances neuro-oncology, enhances patient outcomes, and personalizes treatment plans. This approach enhances training efficiency and generalization to unseen data, ultimately increasing the clinical utility of the predictive model in real-world healthcare settings.
Authors - Melwin Lewis, Gaurav Mishra, Sahil Singh, Sana Shaikh Abstract - This paper focuses on the development of a 2D Fighting Game, using Simple DirectMedia Layer 2 (“SDL2”), Good Game Peace Out (“GGPO”) and the Godot Game Engine. This project was made with the help of the Godot Engine and the prototype test was implemented in the C++ Language with the intent to showcase the GGPO library for the implementation of Rollback Networking in Fighting Games, a technology which makes seamless online-play possible without input delay.
Authors - Anudeep Arora, Neha Tomer, Vibha Soni, Neha Arora, Anil Kumar Gupta, Lida Mariam George, Ranjeeta Kaur, Prashant Vats Abstract - Improving patient outcomes, maximizing operational efficiency, and guiding strategic decision-making all depend on the capacity to analyze and interpret data effectively in the quickly changing healthcare sector. Finding and analyzing outliers is a major difficulty in healthcare analytics as it can have a big influence on the accuracy and dependability of data-driven conclusions. The significance of business outlier analysis in healthcare analytics is examined in this article, along with its methods, uses, and consequences for payers, providers, and legislators. Healthcare companies may improve their analytical skills, which will improve patient care by improving forecast accuracy and resource allocation. This can be achieved by detecting and resolving outliers.
Authors - Aswini N, Kavitha D Abstract - Obstacle detection is vital for safe navigation in autonomous driving; however, adverse weather conditions like fog, rain, low light, and snow can compromise image quality and reduce detection accuracy. This paper presents a pipeline to enhance image quality under extreme conditions using traditional image processing techniques, followed by obstacle detection with the You Only Look Once (YOLO) deep learning model. Initially, image quality is improved using Contrast Limited Adaptive Histogram Equalization (CLAHE) followed by bilateral filtering to enhance visibility and preserve edge details. The enhanced images are then processed by pre-trained YOLO v7 model for obstacle detection. This approach highlights the effectiveness of integrating traditional enhancement techniques with deep learning for robust obstacle detection, even under adverse weather, offering a promising solution for enhancing autonomous vehicle reliability.
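A simplified stand-in for the enhancement stage: global histogram equalization in plain NumPy (CLAHE works similarly but equalizes per tile with a clip limit, typically via OpenCV's cv2.createCLAHE). The low-contrast test frame is synthetic, and the bilateral-filter and YOLO stages are not modeled here.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of an 8-bit grayscale image
    (a simplified stand-in for CLAHE, which equalizes per tile with clipping)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

rng = np.random.default_rng(5)
foggy = rng.integers(100, 160, size=(64, 64)).astype(np.uint8)  # low-contrast "foggy" frame
enhanced = hist_equalize(foggy)
print(foggy.max() - foggy.min(), "->", enhanced.max() - enhanced.min())
```

Stretching the intensity range in this way is what restores the contrast that detectors such as YOLO rely on in fog or low light.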
Authors - Hetansh Shah, Himangi Agrawal, Dhaval Shah Abstract - The paper outlines the design, implementation, and evaluation of the SHA-256 cryptographic hash function on an FPGA platform, focusing on its use in Bitcoin mining. SHA-256 is a key part of the Bitcoin system, generating unique hash values from data to keep it secure and intact. The goal was to create a fast, resource-efficient, hardware-based version of SHA-256 in VHDL and implement it on the ZedBoard FPGA development platform. The main focus was on the VHDL implementation, making it modular and pipelined to improve speed and the efficiency of resource utilization. The ZedBoard, featuring the Xilinx Zynq-7000 SoC, was used for the hardware implementation. The design also included message buffering, preprocessing, and a pipeline for hash computation, allowing the system to handle incoming data in real time while producing hash outputs quickly. The algorithm's functionality was verified using simulation tools in Xilinx Vivado, and the hardware implementation results were compared to previous works. The proposed method utilizes fewer resources than previous works while maintaining a throughput 27% greater than the software solution. The hardware design significantly outperforms both software and SW/HW (HLS) versions in speed and energy use. The total on-chip power utilized was 12.898 W.
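As a software reference point for the hardware core: Bitcoin applies SHA-256 twice to an 80-byte block header. The Python hashlib sketch below includes a standard single-hash test vector, which is also handy for verifying a hardware implementation against a known answer; the all-zero header is a placeholder, not real block data.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256, the function the FPGA pipeline accelerates."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Standard SHA-256 test vector, useful for checking a hardware core's output
assert hashlib.sha256(b"abc").hexdigest() == (
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad")

header = bytes(80)               # placeholder 80-byte block header
digest = double_sha256(header)
print(digest[::-1].hex(), len(digest))  # Bitcoin displays hashes byte-reversed
```

A pipelined hardware design computes the same 64-round compression function, but overlaps the rounds of successive messages to raise throughput.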
Authors - Janwale Asaram Pandurang, Minal Dutta, Savita Mohurle Abstract - The Black Friday shopping event is now one of the most eagerly awaited retail events worldwide, offering huge discounts and promotions across various product categories. For sellers, it is important to understand customer purchasing behaviour during this period in order to forecast sales, manage inventory, and plan marketing strategies. This research paper focuses on developing a machine learning model that predicts customer spending based on previous Black Friday data, considering factors such as demographics, product types, and previous purchases. After the dataset was collected and processed, exploratory data analysis was conducted to identify important trends. Different machine learning models, such as Linear Regression, K-Nearest-Neighbours (KNN) Regression, Decision Tree Regression, and Random Forest Regression, were applied and tested. Among these, the Random Forest model, with an R2 value of 0.81, showed the strongest predictive accuracy. This study focuses on machine learning models that will help sellers improve their productivity and increase revenue.
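The R2 metric used above to rank the models can be shown with a minimal sketch: an ordinary-least-squares fit and its R2 score, stdlib only. The data are hypothetical, not the paper's Black Friday dataset:

```python
# Minimal OLS fit plus R^2 score, illustrating how regression models
# are compared. Toy data: prior purchase count vs. hypothetical spend.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def r_squared(xs, ys, slope, intercept):
    my = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot   # 1.0 = perfect fit, 0.0 = mean baseline

xs = [1, 2, 3, 4, 5]                 # e.g. number of previous purchases
ys = [120, 210, 290, 405, 500]       # e.g. spend in dollars
slope, intercept = fit_line(xs, ys)
print(round(r_squared(xs, ys, slope, intercept), 3))  # 0.997
```

An R2 of 0.81, as reported for the Random Forest model, means the model explains 81% of the variance in customer spending relative to simply predicting the mean.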
Authors - Srikaanth Chockalingam, Saummya B. Gaikwad, Lokesh P. Shengolkar, Dhanbir S. Sethi Abstract - This paper presents an innovative microcontroller-based system designed to convert text files into Braille script, making Braille content more accessible for visually impaired users. The system leverages an ARM-based microcontroller and servo motors to enable real-time, mechanical translation of text into tactile Braille characters. To facilitate ease of use and to allow offline operation, an SD card is used as the primary storage medium for text files, enabling users to load and convert documents without requiring an internet connection or additional devices. This design emphasizes affordability, scalability, and usability, with the primary aim of making Braille conversion technology more accessible to educational institutions, libraries, and individuals, particularly in resource-limited settings. By reducing dependency on costly, proprietary Braille technology, this system can improve access to information and literacy among visually impaired communities, especially in developing countries where Braille materials are often scarce or prohibitively expensive. The paper thoroughly explores the system's hardware and software components, detailing the architecture and function of each element within the overall design. Energy efficiency is emphasized to extend the device's operational time, and manufacturing costs are minimized to keep the solution within a low-cost budget. These design choices make this Braille converter a sustainable option for broad deployment and adoption. Further development aims to expand the device's functionality by integrating wireless connectivity for text input, allowing users to access a greater range of content through online sources.
Additionally, future iterations could support a larger tactile display, accommodating more Braille cells simultaneously, which would improve the reading experience for users and enhance the system’s application in educational environments.
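The core translation step in a device like this can be sketched briefly. This is a hedged, stdlib-only illustration of Grade-1 Braille mapping for the letters a-j, rendered as Unicode Braille patterns; the paper's system would instead raise servo-driven pins for the dots listed per letter:

```python
# Text-to-Braille translation sketch. Dot numbering follows the standard
# 2x3 cell: dots 1-2-3 down the left column, 4-5-6 down the right.
# Unicode Braille patterns start at U+2800; dot n sets bit n-1.

LETTER_DOTS = {  # Grade-1 Braille, letters a-j (the "decade" patterns)
    "a": [1], "b": [1, 2], "c": [1, 4], "d": [1, 4, 5], "e": [1, 5],
    "f": [1, 2, 4], "g": [1, 2, 4, 5], "h": [1, 2, 5],
    "i": [2, 4], "j": [2, 4, 5],
}

def to_braille(text: str) -> str:
    cells = []
    for ch in text.lower():
        bits = 0
        for dot in LETTER_DOTS.get(ch, []):
            bits |= 1 << (dot - 1)       # raise this pin on the display
        cells.append(chr(0x2800 + bits)) # render the cell as Unicode
    return "".join(cells)

print(to_braille("bad"))  # ⠃⠁⠙
```

On the actual hardware, the per-letter dot lists would drive the servo actuators directly; the Unicode rendering here just makes the mapping visible for testing.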
Authors - Olukayode Oki, Abayomi Agbeyangi, Jose Lukose Abstract - Subsistence farming is an essential means of livelihood in numerous areas of Sub-Saharan Africa, with a significant segment of the population depending on it for food security. However, animal welfare in these agricultural systems encounters persistent challenges due to resource constraints and insufficient infrastructure. In recent years, technological integration has been seen as a viable answer to these difficulties by enhancing livestock monitoring, healthcare, and overall farm management. This study investigates the effects of technological integration on enhancing animal well-being, with an emphasis on a case study from Nxarhuni Village in the Eastern Cape province of South Africa. The study surveys a random sample of 63 subsistence farmers to investigate the intricacies of technology adoption in rural areas, highlighting the necessity for informed strategies and sustainable agricultural practices. Both descriptive and regression analyses were employed to highlight the trends, relationships, and significant predictors of technology adoption. The descriptive analysis reveals that 56.6% of respondents had a positive perception of technology, even though challenges like animal health concerns, environmental conditions, and financial constraints persist. Regression analysis results indicate that socioeconomic status (coef = 1.4468, p = 0.059) and gender (coef = -1.1786, p = 0.062) are key predictors of technology adoption. The study recommends the need for specialised educational programs, improvement in infrastructure, and community engagement to support sustainable technology use and enhance animal care practices.
Authors - Dinesh Rajput, Prajwal Nimbone, Siddhesh Kasat, Mousami Munot, Rupesh Jaiswal Abstract - We introduce a neural-network-based system that combines real-time avatar functionality with text-to-speech (TTS) synthesis. The system can produce speech in the voices of various talkers, including ones that were not seen during training. To generate a speaker embedding from a brief reference voice sample, the system makes use of a dedicated encoder trained on a large volume of voice data. Conditioned on this speaker embedding, the model converts text into a mel-spectrogram, and a vocoder turns it into an audio waveform. Concurrently, the produced speech is synced with a three-dimensional avatar that produces the corresponding lip motions in real time. This approach transfers the speaker variability learned by the encoder to the TTS task, enabling the system to mimic genuine conversation in the voices of unseen speakers. On a web interface, precise lip syncing of speech with facial movements is ensured through integration with the avatar system. We also demonstrate that training the encoder on a diverse speaker dataset markedly improves the system's ability to adapt to novel voices. In addition, using random speaker embeddings, the model can generate unique voices distinct from those heard during training while retaining smooth synchronization with the avatar's visual output, further showcasing its capacity to produce high-quality, interactive voice-cloning experiences.
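Speaker embeddings like those produced by the encoder above are typically compared with cosine similarity: two utterances from the same speaker should score close to 1.0, different speakers much lower. A stdlib-only sketch with hypothetical 4-dimensional embeddings (real systems use hundreds of dimensions):

```python
# Cosine similarity between speaker embedding vectors. Toy 4-d vectors
# stand in for the encoder's output; the comparison logic is the same.
import math

def cosine(u, v):
    """Cosine of the angle between two vectors: 1.0 = identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

ref = [0.9, 0.1, 0.3, 0.2]                 # reference utterance embedding
same_speaker = cosine(ref, [0.8, 0.15, 0.35, 0.25])
diff_speaker = cosine(ref, [0.1, 0.9, 0.1, 0.8])
print(same_speaker > diff_speaker)  # True
```

This similarity check is also how random speaker embeddings yield "unseen" voices: a randomly sampled point in the embedding space is far, in cosine terms, from every training speaker, yet the TTS decoder can still condition on it.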