Authors - Hemlata V. Gaikwad, Sushma S. Kulkarni Abstract - The present article explores the dynamic changes in the graduate attributes required to develop Industrial Revolution 4.0 (IR4)-ready engineers. The assurance of graduate attributes assumes greater importance given that only 20% of engineers are employable for any job in the knowledge economy (NER, 2019). Engineering institutes across India are under pressure and striving hard to equip graduates with the right set of attributes to enhance their employability. The authors first examine the shifts in graduate attributes from the first industrial revolution to the fourth to identify the most important ones from an employment perspective. Second, we recommend strategies at multiple levels to develop the identified attributes by mapping them with program educational objectives. Finally, we argue that all these strategies must be aligned to strengthen outcomes-based education and, as a result, prepare employable and environmentally sensitive graduates responsible for a sustainable future.
Authors - Payal Khode, Shailesh Gahane, Arya Kapse, Pankajkumar Anawade, Deepak Sharma Abstract - IoT has brought significant changes to many aspects of society, to social and economic life, and to how society interacts with the environment and physical space. From light switches, door locks, and self-driving cars to bracelets that track a person’s health and industrial sensors that control machines, the Internet of Things has brought us into a world of increased connectivity filled with convenience, efficiency, and automation. However, this connectivity has introduced a new level of insecurity, since it makes it easier for cybercriminals to break into systems, manipulate critical services, and steal sensitive information. Consequently, this research paper analyzes the complex nature of IoT security and identifies the primary threats inherent in IoT devices and networks. Comparing existing models, it shows how these vulnerabilities can enable organized attacks encompassing the theft of personal information, compromise of organizational accounts, and disruption of infrastructure and utilities. Moreover, the paper argues that IoT security is a deeply complex problem owing to its technical heterogeneity, lack of standards, and dynamic nature. By analyzing these vulnerabilities and risks, the paper aims to clarify the significance of sound security measures and protective actions that must be put in place to reduce potential threats in the IoT sphere.
Authors - Rohini Hongal, Rahil Sanadi, Supriya Katwe, Rajeshwari .M Abstract - Image compression involves reducing digital image file sizes while keeping their quality intact. Interval arithmetic, a mathematical technique, deals with ranges of values rather than precise numbers, allowing for robust computations across various applications. Vector quantization is a data compression method that groups similar data points to represent data efficiently. This study attempts to create an advanced image compression method by integrating Convolutional Neural Networks (CNNs) and Interval-Arithmetic Vector Quantization (IAVQ), and also examines and validates the practical relevance of attribute preservation. The proposed framework involves two main stages: training and compression. During the training stage, a CNN is trained to learn feature representations that encapsulate important image characteristics such as contrast and luminance intensity. In the compression stage, the trained CNN extracts features from input images, and interval-arithmetic-based quantization maps these features to predefined quantization intervals, considering attributes like sum, difference, and product. This project thoroughly examines normal compression and interval arithmetic compression, with the latter displaying promising results. Notably, interval arithmetic compression consistently yields superior outcomes in pixels per second, compressed file size, and PSNR compared to normal compression techniques. The IAVQ method achieves nearly 20 per cent higher compression quality, reduces pixel count by 0.5 to 0.9 per second, lowers PSNR by 0.7 to 2.3 dB, and saves 11 to 19 KB of storage compared to the standard method across all image types.
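The interval-arithmetic quantization step described above can be sketched minimally; the bin edges, interval count, and midpoint representation below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hypothetical sketch: each extracted feature value is mapped to one of a
# set of predefined quantization intervals and represented by the interval
# midpoint. The eight intervals over [0, 1] are an illustrative assumption.
def interval_quantize(features, edges):
    """Map each feature value to the midpoint of its enclosing interval."""
    features = np.asarray(features, dtype=float)
    # np.digitize gives the index of the interval each value falls into
    idx = np.clip(np.digitize(features, edges) - 1, 0, len(edges) - 2)
    midpoints = (edges[:-1] + edges[1:]) / 2.0
    return midpoints[idx], idx

edges = np.linspace(0.0, 1.0, 9)           # 8 predefined intervals
feats = np.array([0.03, 0.48, 0.52, 0.97]) # toy CNN feature values
quantized, codes = interval_quantize(feats, edges)
```

Storing the interval index (`codes`) instead of the raw feature is what yields the compression; the midpoint is used at reconstruction time.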
Authors - Harsh Murjani, Kabir Mota, Nishat Shaikh Abstract - This research paper explores the critical importance of Open Source Intelligence (OSINT) in modern law enforcement practices. The study aims to elucidate the essential role of OSINT tools and methodologies in enhancing the capabilities of law enforcement agencies to gather, analyze, and utilize intelligence from publicly available sources. The findings underscore the significant impact of OSINT on various aspects of law enforcement operations, including proactive threat detection, investigative support, and strategic decision-making. Moreover, the study identifies key benefits of OSINT, such as its cost-effectiveness, scalability, and ability to provide timely and actionable intelligence.
Authors - Abdul Razzak R Yergatti, Prajwal Shiggavi, Mohammed Azharuddin, Suneeta V Boodihal Abstract - Traditional defense solutions like intrusion detection and deep packet inspection are often not accurate enough. These techniques include signature-based detection, which uses known patterns, and heuristic or behavioral analysis, which evaluates program behavior to detect suspect activities. The demand for more advanced and continuously innovative methods to combat malware, botnets, and other malicious activities is urgent. Machine Learning (ML) has emerged as a promising approach due to increasing computing power and reduced costs, offering potential as either an alternative or a complementary defense mechanism that enhances detection accuracy by learning from large datasets of known malware behaviors. This investigation delves into the capability of machine learning to detect malware within a network. Initially, a thorough analysis of the NetFlow datasets is conducted, resulting in the extraction of 22 distinct characteristics. Subsequently, a feature selection procedure is employed to compare these characteristics against each other. Following this, five machine learning algorithms are assessed using a NetFlow dataset that encompasses typical botnets. The outcomes reveal that the Random Forest Classifier successfully identifies over 95% of the botnets in 8 out of the 13 scenarios, with detection rates exceeding 55% even in the most challenging datasets.
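The Random Forest stage described above can be sketched on synthetic stand-in data; the feature count, class separation, and train/test split below are illustrative assumptions, not the paper's 22-feature NetFlow dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for flow features (duration, packets, bytes, ...);
# botnet flows are simulated as a shifted distribution for illustration.
n = 1000
benign = rng.normal(loc=0.0, scale=1.0, size=(n, 6))
botnet = rng.normal(loc=1.5, scale=1.0, size=(n, 6))
X = np.vstack([benign, botnet])
y = np.array([0] * n + [1] * n)            # 1 = botnet flow

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

On real NetFlow data, the feature selection step would precede this, narrowing the 22 extracted characteristics to the most discriminative subset.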
Authors - Shailesh Gahane, Payal Khode, Arya Kapse, Deepak Sharma, Pankajkumar Anawade Abstract - Enhancing education is an essential concern of our community. Everyone would wish for smaller classes and schools, but technology cannot materialize that physically; for the instructor, though, technology may function as a "force multiplier." To guide the research, a comprehensive questionnaire was formulated to gather pertinent data from primary sources, and a survey was conducted in the targeted area to determine technology's influence. Using the questionnaire, in-depth conversations were carried out with key data sources to understand perspectives, mindsets, and behaviors, allowing the researchers to make any recommendations they deem necessary and helpful. Statistical tools such as tabulation, grouping, percentages, averages, and hypothesis testing are applied to process the questionnaire responses. The following streams are considered: arts, science, commerce, engineering, and medicine. In the information age, we can communicate with one another in ways that were previously unthinkable, and educators and administrators face the new problem of working out how best to use these tools. Technology has clear advantages: using web conferencing or other tools, parents and teachers may be able to collaborate virtually, whether they use the internet for virtual communication with professionals or fellow students or for research purposes. These programs also impart to students the technology skills needed to succeed in the modern workforce.
Authors - More Swami Das, N.N.S.S.S.Adithya, Gunupudi Rajesh Kumar, R. P. Ram Kumar Abstract - In image processing and computer vision, human activity detection is a significant task. There are various techniques and approaches for key point detection that identify external skeleton key points; some methods detect key points and recognise the human pose. The proposed work utilises the Random Forest (RF) approach and MediaPipe to classify human activity into 15 classes. The model was trained with 30,000 samples. The objective of this paper is to capture human pose features, such as limb angles and key points, and train a machine learning model to recognise human actions using MediaPipe. In the future, this work can be extended to capture poses from real-time video using intelligent key point methods to identify actions and human facial expressions.
Authors - Muskan Dave, Mrugendrasinh Rahevar, Arpita Shah Abstract - Bone fractures are a widespread problem these days, brought on by a variety of events such as unhealthy lifestyle choices and car collisions. The human body's ability to move and take on different postures depends on its bones, and a fracture can cause serious pain, swelling, and difficulty moving the affected limb. X-rays are a useful and affordable technique for finding fractures, and a missed fracture has serious consequences for the patient. The detection of bone fractures using CNNs can be achieved with several distinct algorithms. This paper discusses several commonly used approaches, including transfer learning with a pre-trained CNN model such as VGG, which optimizes time and resources across different CNN models. Multi-view CNN algorithms use multiple X-ray perspectives of the same bone, such as anteroposterior and lateral views, to improve the accuracy of fracture detection. Hybrid CNN algorithms combine multiple CNN models, such as a 2D CNN and a 3D CNN, to further improve accuracy. Existing systems are being investigated and developed continuously, and the right choice also depends on the available dataset, the type of imaging, and the requirements of the use case. Deep learning uses additional hidden layers of artificial neural networks inspired by the brain. This paper summarises deep learning approaches for detecting bone fractures.
Authors - More Swami Das, Gunupudi Rajesh Kumar, R. P. Ram Kumar Abstract - Cloud computing enables ubiquitous, on-demand, convenient network access to a shared pool of computing resources. Cloud services are provided by organizations that manage huge volumes of data. The problem is to provide cloud security and availability of services to all authenticated users. In this work, we use a Cloud Security Model (i.e., encryption and decryption of cloud data) to enhance security, and also increase availability through virtualization technologies like Hyper-V and efficient utilization of cloud services. In the future, this architecture can be extended with hacking prevention and trust models.
Authors - Vanishree Pabalkar, Ruby Chanda, Debjyoti Abstract - In recent manufacturing, production lines and work in process are considered the crucial aspects that determine production performance. TAKT comes from the German Taktzeit, meaning the beat or rhythm of music; takt time is a tool that paces production, defined as the available production time within a defined range divided by customer demand. The current study explains the way in which technologically advanced cutting tools can be used for cycle time reduction, and the impact of these advanced cutting tools on production performance is studied here. The objective is to increase productivity of a particular aspect by assessing the challenges that occur in the production process; ways to remove the obstacles these challenges create are also identified, and the actions taken to enhance production are discussed. Value stream mapping has been used to explain the current state of production and provide suitable solutions. Technologically advanced cutting tools reduce both cycle time and cost in the production process.
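The takt time definition above (available production time divided by customer demand) can be computed directly; the shift length and demand figures below are illustrative, not from the study:

```python
def takt_time(available_minutes, customer_demand_units):
    """Takt time: available production time divided by customer demand."""
    return available_minutes / customer_demand_units

# Illustrative numbers: an 8-hour shift with 60 minutes of breaks,
# and a daily customer demand of 210 units.
available = 8 * 60 - 60           # 420 minutes of available production time
takt = takt_time(available, 210)  # 2.0 minutes per unit
```

If the measured cycle time exceeds this takt value, production falls behind demand, which is exactly where cycle time reduction via better cutting tools pays off.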
Authors - Smita Mehendale, Reena (Mahapatra) Lenka Abstract - This system is designed to make healthcare services responsive to visually impaired patients’ needs in various circumstances, predominantly during a medical emergency. Using voice-assisted AI and various proposed subsystems, the system serves multiple stakeholders, including visually impaired patients, family members, friends, neighbors, hospitals, healthcare providers, insurance companies and agents, private medical attendants and agencies, pharmacies, blood banks, and medical equipment providers. The design and implementation of a voice-assisted, personalized, comprehensive medical service, both emergency and non-emergency, uses data shared by patients and healthcare service providers such as hospitals, pharmacies, and pathology labs, along with allied services like insurance and medical attendant services. The system uses a voice-assistant chatbot to communicate with patients and a user interface for medical and allied service providers. It coordinates communication between multiple service providers, clearly showing the entire patient care cycle for the visually impaired, a special group of people.
Authors - Jay Bhatt, Bimal Patel, Anshuman Prajapati, Jalpesh Vasa Abstract - Software engineering has progressed extensively, adopting structured methodologies and systematic frameworks for developing reliable, scalable, and efficient systems. A key advancement has been the introduction of software process models, which guide development activities. Traditional models use linear, sequential phases, but are rigid and less suited for projects with changing requirements. To address dynamic market demands and evolving business needs, the Agile methodology emerged, providing an iterative, flexible approach to software development. Agile promotes incremental delivery, collaborative team dynamics, and continuous customer feedback, making it highly effective in rapidly changing environments. Agile methodologies have expanded beyond software development into industries like media. With fast-evolving technology and shifting audience behaviors, media companies are under pressure to innovate. BBC News adopted Agile to overhaul its content production and delivery processes. This transition has improved newsroom agility, enabling faster response times, fostering cross-functional collaboration, and enhancing iteration capabilities in digital media workflows. The shift to Agile represents a strategic transformation, positioning BBC News to adapt to audience demands and technological advancements. This paper investigates the integration of Agile at BBC News, detailing the operational benefits, challenges, and the methodology’s influence on sustaining their leadership in the competitive news industry.
Authors - Ankit Aal, Priyanka Patel Abstract - In today’s interconnected world, ignoring ethical decision-making can have dire consequences. As businesses expand and globalize, the pressure to cut corners and maximize profits can lead to severe ethical breaches. William C. Butcher, retired chairman of the Chase Manhattan Corporation, highlighted the growing recognition that ethics in business is not a luxury but a necessity. Rooted in the concept of “ethos,” the importance of ethics has evolved, especially as business practices have become more complex. Over the decades, unethical business behavior has left a significant mark: the 1960s were defined by social upheaval, the 1980s by rampant financial scandals, and the 1990s by the challenges of a newly globalized economy. However, the rapid growth of markets was paralleled by troubling issues such as the exploitation of child labor, environmental degradation, and product counterfeiting. The 21st century introduced even more sophisticated threats—cybercrimes, intellectual property theft, and workplace discrimination—placing companies at greater risk if they neglected ethical practices. Despite the increasing awareness of these challenges, many businesses still struggle to balance profit with principle. Those that fail to integrate ethics into their strategies risk damaging their reputation, alienating customers, and facing legal repercussions. On the other hand, companies that proactively embrace ethical standards benefit from increased trust, a loyal workforce, and sustainable profitability. As ethics become an integral part of strategic business planning, they act not only as a safeguard against malpractice but also as a catalyst for long-term success in the global marketplace.
Authors - Samrat Subodh Thorat, Dinesh Vitthalrao Rojatkar, Prashant R Deshmukh Abstract - Vehicular Ad hoc Networks (VANETs) play a vital role in enhancing road safety, managing traffic, and providing infotainment services. Various protocols have been developed to facilitate communication in VANETs, each with its advantages and limitations. This paper presents a comparative analysis of different VANET protocols, examining their performance, scalability, and reliability. It also illustrates the need for a hybrid protocol using satellite communication and GPS networks to overcome existing issues and challenges, thus improving overall system efficiency.
Authors - Ruby Chanda, Rahul Dhaigude Abstract - Given the urgency with which climate change must be addressed, there is a need to map, analyse, and visually convey a product's environmental impact over its complete life cycle or a specific aspect of it. If "what-if" scenarios can additionally be supported, this can accelerate decision making towards improved environmental outcomes. In practice, such decisions must also account for their economic ramifications, so the map needs to support a mix of environmental and operational efficiency metrics. This paper explores the adaptation of a leading commercial value stream mapping software (eVSM Mix) for this purpose. Value stream maps come from the Lean domain and provide a high-level view of the activities required to provide customer value. The work involved has technical and marketing aspects tied to a new product introduction and to a new customer segment.
Authors - Masakona Wavhothe, Khutso Lebea Abstract - This paper focuses on the vulnerabilities present in Bluetooth Low Energy (BLE) beacons. The introductory section covers the background of BLE technology, the motivation for studying the topic, the problem statement, and the structure of the paper. The subsequent section presents the case study used to explore the topic in greater detail. The background section then examines BLE beacons in depth, explaining their applications and vulnerabilities. The paper concludes by highlighting the key findings established in the research and suggesting how the study can be improved.
Authors - Mahek Viradiya, Shivam Patel, Sansriti Ishwar, Veer Parmar, Simran Kachchhi, Utsavi Patel, Hardikkumar Jayswal, Axat Patel Abstract - Moisture content identification in soil is crucial for various applications in agriculture, construction, and environmental monitoring. Traditional methods for moisture detection often involve labor-intensive processes and specialized equipment that can be invasive, time-consuming, and expensive. This study explores the use of spectrometry data, acquired through multispectral sensors covering the visible light and near-infrared (NIR) spectrum from 400-1000 nm, for rapid and accurate moisture identification in soil and sand samples. The sensors leverage on-chip filtering to integrate up to eight wavelength-selective photodiodes into a compact 9x9 mm array, facilitating the development of simpler and smaller optical devices. The neural network model comprises an input layer, one hidden layer, and an output layer, developed using the TensorFlow and Keras libraries. It was trained using the Adam optimizer and sparse categorical cross-entropy loss function for 35 epochs with a batch size of 16. Results indicate that the neural network model and appropriate classifiers can successfully classify soil moisture levels into 4 distinct categories on the given dataset, demonstrating its potential as a cost-effective and efficient alternative to traditional soil moisture measurement techniques.
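The network shape described above (8 spectral inputs, one hidden layer, 4 moisture classes) can be approximated in a minimal sketch; the paper builds the model with TensorFlow/Keras, but scikit-learn's MLPClassifier (also trained with adam) stands in here, and the synthetic spectra and hidden-layer width are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Synthetic stand-in for 8-channel spectral readings: each moisture
# category is simulated as a cluster of reflectance vectors.
n_per_class = 200
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(n_per_class, 8))
               for c in range(4)])          # 4 moisture categories
y = np.repeat(np.arange(4), n_per_class)

# 8 inputs -> one hidden layer -> 4-class output, trained with adam
clf = MLPClassifier(hidden_layer_sizes=(16,), solver="adam",
                    max_iter=500, random_state=0).fit(X, y)
acc = clf.score(X, y)
```

The real model additionally uses sparse categorical cross-entropy, 35 epochs, and batch size 16, which map naturally onto the Keras `compile`/`fit` arguments rather than scikit-learn's defaults.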
Authors - Ruby Chanda, Reena Lenka Abstract - E-games and gamification stand out among the innovative pedagogical techniques brought about by the swift integration of digital technology in educational settings because of their capacity to revolutionize the learning process. To outline the development, present trends, and future research directions of e-games and gamification in education, this study uses a predictive bibliometric analysis. Using an extensive dataset of Scopus publications from large academic databases, we apply cutting-edge bibliometric methods to pinpoint important research themes, significant figures, and foundational works in this emerging subject. Our study shows that, particularly over the previous ten years, there has been a perceptible upsurge in scholarly interest in and publications about e-games and gamification, indicative of the growing understanding of these technologies' capacity to engage and encourage students. The study shows that this research area is multidisciplinary, with notable contributions from computer science, psychology, educational technology, and game design. The design and execution of educational games, the psychological principles behind gamified learning environments, and the effectiveness of gamification in improving learning outcomes are among the key study areas that have been highlighted. Our study offers useful insights for academics, educators, and policymakers looking to maximize the educational potential of e-games and gamification by identifying existing tendencies and predicting future developments. In addition to highlighting the current status of research, this predictive bibliometric study also lays out a roadmap for future studies and applications in the digital frontier of education.
Authors - Ruby Chanda, Vanishree Pabalkar Abstract - This research examines the revolutionary potential of artificial intelligence (AI) to improve agricultural practices and outcomes in India. India, a country that depends mostly on agriculture, faces many difficulties, such as erratic weather patterns, pest infestations, and ineffective resource management. AI technologies, including machine learning, predictive analytics, and IoT-enabled devices, provide creative answers to these issues. AI has the capacity to analyse enormous volumes of data and deliver timely insights and practical recommendations to farmers, resulting in increased agricultural yields, more efficient use of resources, and sustainable farming methods. This paper looks at the use of AI in Indian agriculture today, namely in the areas of automated irrigation systems, insect detection, and precision farming. It also covers the socioeconomic effects of AI adoption, emphasising how farmers could benefit from higher productivity and profits. The study concludes with an analysis of the opportunities and obstacles related to AI deployment in the Indian agriculture industry, emphasising the supportive policies and infrastructure necessary to fully realise AI's benefits.
Authors - Ruby Chanda, Reena Lenka Abstract - This work uses image processing to automate MCQ correction in an extremely simple way, removing the barriers to multiple-choice assessment correction. We use the Open Source Computer Vision Library (OpenCV) to process and score the responses. The use of Multiple Choice Questions (MCQs) to test a person's knowledge has increased progressively. These tests can be evaluated either using OMR technology or manually. In practice, it is very difficult to have an OMR machine available under all conditions, and at the same time manual correction is tedious and error-prone. These drawbacks are overcome in our proposed framework by using a digital image-processing technique to score the responses on the OMR sheet.
Authors - Ravi Tene, Dasari Kalyani, N. Sudhakar Yadav, Kondabala Renuka, Gunupudi Rajesh Kumar, Nimmala Mangathayaru Abstract - A deepfake is a misleading video or image that looks genuine. Generative Adversarial Networks (GANs) are a well-known technique in machine learning and can generate large amounts of fake content, including human writing, with wide deep-learning models. The generator model learns to sample points from a latent space so that new samples from the same distribution can be fed in to produce varied but realistic outputs. For most applications, deepfakes can be convincingly created using GANs, and there are widespread fears on the Internet related to them. The authors use a ResNeXt CNN combined with LSTMs in a deep learning network to identify faked regions in deepfake videos, employing Python facial-recognition and C++ computer-vision libraries to locate faces in the video. Fake videos are further validated using models trained on various edge groups.
Authors - Vanishree Pabalkar, Rahul Dhaigude Abstract - The purpose of this research study is to understand the advanced technology prevailing in the EV market along with its challenges and strategic solutions. The research validates customer feedback and allows companies to get closer to the true opinion of potential Indian customers; in addition, this can eliminate misunderstandings and problems and enable better trade. The study was conducted to understand the factors that influence the choice of an electric vehicle. It examines the purchasing behavior of consumers buying electric cars by identifying the importance ratings assigned to different factors during the selection process, analyses the reasons for a brand's success by identifying its levels of excellence across the characteristics that current users value, and identifies the gap between importance ratings and current ratings.
Authors - Vatsal Suchak, Harmin Rana, Ayush Verma, Nilesh Dubey, Hardikkumar Jayswal, Dipika Damodar, Chirag Patel Abstract - Cattle farming plays a crucial role in global food production, but monitoring the health of large herds poses significant challenges. Traditional manual inspections are inefficient, reactive, and prone to error, highlighting the need for scalable, automated health monitoring systems. This paper introduces a smart cattle health monitoring system that utilizes Internet of Things technology and machine learning algorithms to provide real-time health tracking. The system uses proposed wearable devices equipped with ESP32 microcontrollers and sensors to monitor cattle’s vital parameters, such as body temperature and heart rate. Data collected from the devices is transmitted to a local XAMPP server and analysed by an edge-computing device, a Jetson Nano, which processes the data using supervised and unsupervised machine learning models for anomaly detection. If health anomalies are detected, the system sends real-time alerts to farmers, allowing for timely intervention. The system’s design focuses on local processing for low-latency performance, scalability for large herds, and robust security measures. This project demonstrates the potential of IoT-based livestock health monitoring systems to enhance productivity, improve animal welfare, and reduce economic losses due to illness.
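The unsupervised anomaly-detection step could, for example, be sketched with an Isolation Forest on synthetic vitals; the sensor values, model choice, and contamination rate below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Synthetic vitals: healthy cattle body temperature around 38.5 C and
# heart rate around 60 bpm (illustrative figures, not from the paper).
normal = np.column_stack([rng.normal(38.5, 0.3, 500),   # temperature (C)
                          rng.normal(60.0, 5.0, 500)])  # heart rate (bpm)
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A feverish reading with elevated heart rate should be flagged (-1 = anomaly)
readings = np.array([[38.4, 62.0],    # typical reading
                     [41.5, 110.0]])  # anomalous reading
flags = model.predict(readings)
```

On the Jetson Nano, a flagged reading would trigger the real-time alert to the farmer described above.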
Authors - Teena Bambal, Dipesh Chavan, Nikhil Gadiwadd, Deepak M. Shinde Abstract - This survey paper investigates the application of artificial intelligence (AI) and machine learning (ML) techniques for the early detection and diagnosis of liver disease. Traditional methods of liver disease diagnosis, such as blood tests and imaging techniques, can be time-consuming and prone to human error. AI-based approaches offer the potential to improve accuracy, efficiency, and accessibility of liver disease diagnosis. The research investigates a range of AI and ML algorithms, such as decision trees, support vector machines, random forests, neural networks, and deep learning models. These algorithms are applied to analyze large datasets containing patient information and medical test results. The performance of the models is evaluated using metrics such as F1-score, precision, accuracy, recall, and AUC. The findings demonstrate the effectiveness of AI-based approaches in accurately detecting liver disease. Compared to traditional methods, AI models can provide more reliable and timely diagnoses, leading to improved patient outcomes. The research highlights the potential of AI to revolutionize the field of liver disease management and improve global healthcare.
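The evaluation metrics the survey lists (accuracy, precision, recall, F1-score, AUC) can be computed as follows; the labels and scores are illustrative toy values, not real patient data:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Toy example: 1 = liver disease present, 0 = absent (illustrative only).
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]      # ground-truth diagnoses
y_pred  = [1, 0, 1, 0, 0, 1, 1, 0]      # model's hard predictions
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3]  # predicted probabilities

metrics = {
    "accuracy":  accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall":    recall_score(y_true, y_pred),
    "f1":        f1_score(y_true, y_pred),
    "auc":       roc_auc_score(y_true, y_score),  # uses scores, not labels
}
```

Note that AUC is computed from the probability scores rather than the thresholded predictions, which is why the surveyed studies report it alongside the label-based metrics.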
Authors - Yatin Nargotra, Tanya Jagavkar, Tushar Birajdar, L.P.Patil Abstract - The Mahavitaran Help App is a mobile application aimed at revolutionizing the process of reporting electrical outages in India. Current systems for outage reporting are often slow, inefficient, and lack the integration needed to quickly address user complaints. The Mahavitaran Help App simplifies this process by allowing users to submit complaints via mobile devices, integrating location services with Google Maps and supporting the upload of complaint-relevant images. Moreover, this project introduces a critical migration from Firebase to AWS or Google Cloud, offering improved scalability, reliability, and faster processing of complaints. This paper presents a detailed review of existing mobile complaint management systems and explores cloud-based scalability and security features, including OTP authentication for securing user data.
Authors - Rani S. Lande, Amol P. Bhagat, Priti A. Khodke Abstract - Visual memes have become a pervasive form of communication in digital spaces, presenting a unique challenge and opportunity for content analysis due to their blend of visual, textual, and often humorous elements. This paper reviews and synthesizes methodologies employed in the analysis of visual memes, aiming to provide a comprehensive overview of current practices and future directions. The methodologies discussed encompass a range of approaches, including qualitative, quantitative, and mixed-methods strategies. Qualitative methods delve into semiotic analysis, exploring how visual and textual components interact to convey meaning and cultural references. Quantitative approaches employ computational tools to analyze large datasets, focusing on metrics such as image recognition, sentiment analysis, and virality metrics. Mixed-methods studies combine these approaches to offer nuanced insights into the multifaceted nature of visual memes. Challenges in visual memes content analysis include the rapid evolution of meme formats, cultural context sensitivity, and the ethical implications of meme reuse and modification. Additionally, the paper explores emerging trends such as deep learning techniques for image recognition and natural language processing for text analysis within memes. By synthesizing these methodologies, this paper aims to provide researchers and practitioners with a foundational understanding of how to effectively analyze visual memes, highlighting opportunities for interdisciplinary research and applications in fields ranging from communication studies to digital humanities and beyond.
Authors - Nitika Sharma, Rohan Patel, Hardikkumar Jayswal, Nilesh Dubey, Hasti Vakani, Mithil Mistry, Dipika Damodar, Shital Sharma Abstract - This study explores the use of advanced machine learning models to forecast trends in Apple’s stock market performance. Stock market forecasting presents a formidable challenge, given the inherent volatility and unpredictability of market behavior. The study investigates various advanced models, such as Logistic Regression, XGBoost, Artificial Neural Networks, Recurrent Neural Networks, Long Short-Term Memory (LSTM), and ARIMA, for predicting stock prices. Analyzing historical data spanning from 2014 to 2024, which includes Apple's daily stock prices and trading volume metrics, the research applies Grid Search optimization to fine-tune model parameters, thus enhancing predictive accuracy. The findings reveal that LSTM achieved the highest accuracy at 96.50%, followed closely by ARIMA at 90.91%. These results highlight the critical role of machine learning in improving stock price predictions, thereby facilitating more informed investment decisions.
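Grid search of the kind the abstract above describes can be illustrated with a toy autoregressive forecaster; the sketch below (Python with NumPy, on a synthetic series, not the authors' Apple dataset or their LSTM/ARIMA models) tunes a single hyperparameter, the lag order, by held-out mean squared error:

```python
import numpy as np

def fit_ar(train, p):
    """Fit an AR(p) model by least squares; return the coefficient vector."""
    X = np.column_stack([train[i:len(train) - p + i] for i in range(p)])
    y = train[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast_mse(series, p, split):
    """One-step-ahead forecasts on the held-out tail; return MSE."""
    coef = fit_ar(series[:split], p)
    preds, actual = [], []
    for t in range(split, len(series)):
        preds.append(series[t - p:t] @ coef)
        actual.append(series[t])
    return float(np.mean((np.array(preds) - np.array(actual)) ** 2))

rng = np.random.default_rng(0)
# Synthetic "price" series with drift, standing in for real stock data.
series = np.cumsum(rng.normal(0.1, 1.0, 300))
split = 250

# Grid search: pick the lag order with the lowest held-out MSE.
grid = {p: forecast_mse(series, p, split) for p in (1, 2, 5, 10)}
best_p = min(grid, key=grid.get)
print(best_p, round(grid[best_p], 3))
```

The same loop generalizes to searching several hyperparameters jointly, which is what Grid Search optimization does for the models in the study.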
Authors - Anant Nikam, Atharva Gangapure, Samarth Deshpande, Sonali Shinkar Abstract - Among the primary concerns in the digital era, secure sharing of data stands prominent. The integrity, confidentiality, and authenticity of shared information are significant concerns. Blockchain technology promises much towards overcoming such challenges due to its decentralized and immutable nature. It significantly enhances data security through encryption techniques, such as hashing and digital signatures, which eliminate the need for middlemen in transactions while also reducing the probability of data breaches. It leverages smart contracts that provide mechanisms for automating access control, so that data is shared appropriately, according to agreed terms, among the participants in a trustless environment. Practical use cases from healthcare, supply chain management, and finance illustrate these mechanisms. The findings show how blockchain technology can revolutionize secure data sharing by providing a strengthened, decentralized infrastructure that promotes trust, transparency, and accountability among the stakeholders involved.
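The hash-linking that underpins the immutability claim above can be sketched in a few lines of standard-library Python; this is an illustrative toy chain with made-up records, not a production blockchain:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents (excluding its own hash field), deterministically.
    payload = json.dumps({k: v for k, v in block.items() if k != "hash"},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    block = {"index": len(chain),
             "prev": chain[-1]["hash"] if chain else "0" * 64,
             "data": data}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain):
    """A chain is valid iff every hash matches its contents and its predecessor."""
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b):
            return False
        if i > 0 and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, {"record": "patient-consent", "shared_with": "hospital-A"})
add_block(chain, {"record": "shipment-42", "shared_with": "supplier-B"})
print(verify(chain))            # True: untampered
chain[0]["data"]["shared_with"] = "attacker"
print(verify(chain))            # False: tampering breaks the first hash
```

Because each block commits to its predecessor's hash, altering any earlier record invalidates every later block, which is the tamper-evidence property the abstract relies on.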
Authors - Dibyendu Rath, Arunangshu Giri, Dipanwita Chakrabarty, Puja Tiwari, Satakshi Chatterjee, Shamba Chatterjee Abstract - The study reveals how mobile health apps and information technology can play a pivotal role in healthcare improvement, especially for rural populations that suffer from inadequate medical infrastructure. Healthcare apps facilitate users by providing 24x7 accessibility at a cost-effective rate. The study used cross-sectional surveys to analyze responses across different demographic profiles, such as age, gender, qualification, and income group. This study has identified some key factors that help engage customers with healthcare apps. The study also reveals that trust in a healthcare app enhances the intention to adopt it, and that trust is positively influenced by Perceived Benefits (PB) and negatively influenced by Perceived Risks (PR) and Technology Anxiety (TA). Four hypotheses were formulated to validate the relationships among the factors. Finally, the study balances the benefits and risks of using healthcare apps and offers guidance on how m-health technology can increase adoption intentions.
Authors - Vasu Agrawal, Nupur Chaudhari, Tanisha Bharadiya, Manisha Sagade Abstract - The recent progress in AI and deep learning has significantly transformed the public safety landscape, particularly in the area of real-time threat detection in public domains. As urban environments grow more complex and densely populated, traditional surveillance systems are no longer enough for monitoring large crowds, detecting potential threats, or ensuring public safety. This has necessitated the development of automated systems that can process large volumes of video in real time to pick out anomalies, suspicious behaviors, and objects liable to imperil security. We delve into the core methodologies behind object detection models such as YOLO, Faster R-CNN, and SSD, and compare them. To further improve accuracy in detecting anomalous and illegal activities and to reduce false positives and negatives, we created a custom dataset by fusing data from different sources; such systems enhance the overall reliability of surveillance.
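A building block common to evaluating all the detectors named above (YOLO, Faster R-CNN, SSD) is intersection-over-union, which decides whether a predicted box counts as a correct detection; a minimal sketch with made-up boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

# A prediction is typically counted as a true positive when IoU >= 0.5
# against a ground-truth box; false positives/negatives follow from this.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / (100 + 100 - 50) = 1/3
```

The false-positive and false-negative rates the abstract mentions are tallied by applying exactly this threshold test across all predictions and ground-truth boxes.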
Authors - Sheela S. J, Rajeshwari B. S, Harsha M, Subhash T. D, Tejas H. S, Thanmaya Ganesh C. S, Harsha S. M, Keerthana T. V Abstract - Cardiovascular diseases (CVDs) are among the leading causes of death globally. Key 2019 statistics on CVDs: 17.9 million total deaths, accounting for 32% of global deaths, of which 85% (approximately 15.2 million) were due to heart attacks and strokes. Hence, early diagnosis plays a crucial role in reducing heart-related disease. Usually, healthcare professionals collect initial cardiac data using their quintessential instrument, the stethoscope. Traditional stethoscopes have significant drawbacks, such as weak sound enhancement and limited noise-filtering capabilities. Moreover, low-frequency signals, such as those below 50 Hz, may not be heard because of the variation in sensitivity of the human ear. Hence, the use of conventional stethoscopes requires experienced medical practitioners. To overcome these limitations, it is necessary to develop a device more sophisticated than the conventional stethoscope. In this context, the proposed work aims at the development of a digital stethoscope capable of displaying heart and lung sounds separately. Further, the proposed digital stethoscope permits documenting, converting, and transmitting heart and lung sounds digitally in the dB range, thereby reducing unnecessary travel to medical facilities. The proposed stethoscope's results are compared and validated against conventional techniques.
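Two of the signal-processing steps implied above, converting sound amplitude to the dB range and suppressing out-of-band interference, can be sketched with NumPy; the sampling rate, frequencies, and the crude moving-average filter below are illustrative assumptions, not the device's actual design:

```python
import numpy as np

def to_db(x, ref=1.0, floor=1e-12):
    """Amplitude to decibels relative to `ref` (floor avoids log of zero)."""
    return 20.0 * np.log10(np.maximum(np.abs(x), floor) / ref)

def moving_average(x, k):
    """Crude low-pass filter: k-sample moving average, same length as input."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

fs = 2000                                   # assumed sampling rate, Hz
t = np.arange(fs) / fs
heart = np.sin(2 * np.pi * 40 * t)          # 40 Hz component (heart-sound band)
noise = 0.3 * np.sin(2 * np.pi * 400 * t)   # 400 Hz interference
# The moving average passes the 40 Hz component largely intact while
# strongly attenuating the 400 Hz interference.
filtered = moving_average(heart + noise, k=9)

print(round(float(to_db(np.array([0.1]))[0]), 1))   # amplitude 0.1 -> -20.0 dB
```

A real digital stethoscope would use a properly designed band-pass filter for the heart and lung bands; the moving average here only illustrates the principle.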
Authors - Prem Gaikwad, Parth Masal, Mandar Kulkarni, Mousami P. Turuk Abstract - Visual Language Models (VLMs) are an emerging technology that integrates computer vision with natural language processing, offering transformative potential for healthcare. VLMs significantly enhance disease detection, diagnosis, and report generation by enabling automated analysis and interpretation of medical images. These models are designed to support healthcare professionals by streamlining workflows, improving diagnostic accuracy, and assisting in clinical decision-making. Applications include early disease detection through image analysis, automated report generation, and integration with electronic health records (EHR) for personalized medicine. Despite their promise, challenges such as data privacy, interpretability, and the scarcity of labelled datasets remain. However, ongoing advancements in AI-driven medical systems and the integration of multimodal data can potentially revolutionize patient care and operational efficiency in healthcare settings. Addressing these challenges is crucial for realizing the full potential of VLMs in clinical practice.
Authors - Kamini Solanki, Nilay Vaidya, Jaimin Undavia, Jay Panchal Abstract - Polycystic ovary disease (PCOD) is a condition in which the ovaries of women of childbearing age produce too many immature or partially mature eggs. As time passes, these eggs develop into cysts within the ovaries. These cysts can lead to enlargement of the ovaries and an elevated production of male hormones (androgens). Consequently, this hormonal imbalance can result in a range of issues such as fertility challenges, irregular menstrual cycles, unanticipated weight gain, and various other health complications. The associated symptoms often exert a long-term impact on both the physical and mental well-being of affected women. Statistics indicate that approximately 34% of individuals facing PCOD also grapple with depressive symptoms, while almost 45% experience anxiety. The primary objective of the proposed framework is to detect and classify PCOD from standard X-ray pictures, with the assistance of large datasets, using a deep learning model. Polycystic Ovary Disease (PCOD) significantly affects women's reproductive health, leading to various long-term complications. This work introduces a novel framework for automated PCOD detection that integrates Convolutional Neural Networks (CNNs) with deep learning, applied to ultrasound imaging. Unlike traditional diagnostic methods, which rely on manual interpretation and are prone to subjectivity, the proposed system leverages the powerful feature-extraction capabilities of CNNs to classify infected and non-infected ovaries with 100% accuracy. This high level of precision outperforms existing models and can be seamlessly integrated into clinical workflows for real-time diagnosis during sonography, facilitating early detection and improved fertility management. By focusing on a deep learning approach, this work provides a scalable, reliable, and automated solution for PCOD diagnosis, marking a significant advancement in the use of medical imaging with artificial intelligence.
Authors - Poornima E. Gundgurti, Shrinivasrao B. Kulkarni Abstract - Latent fingerprints play a crucial role in forensic investigations, driven by both public demand and advancements in biometrics research. Despite substantial efforts in developing algorithms for latent fingerprint matching systems, numerous challenges persist. This study introduces a novel approach to latent fingerprint matching, addressing these limitations through hybrid optimization techniques. Recognizing latent fingerprints as pivotal evidence in law enforcement, our comprehensive method encompasses fingerprint pre-processing, feature extraction, and matching stages. The proposed latent fingerprint matching utilizes a novel approach, the Randomization Gravity Search Forest algorithm (RGSFA). Acknowledging the shortcomings of traditional techniques, our method enhances convergence speed and performance by integrating weighted factors. Precision, recall, F-measure, and recognition rate serve as performance metrics. The proposed approach achieves a high recognition rate of 99.9% and is successful in identifying and matching latent fingerprints, furthering the development of biometric-based personal verification techniques in forensic science and law enforcement. Experimental analyses, using publicly accessible low-quality latent fingerprints from the FVC-2004 datasets, demonstrate that the proposed framework outperforms various state-of-the-art approaches.
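The precision, recall, and F-measure metrics cited above follow directly from counts of true matches, false matches, and missed matches; a small sketch with hypothetical counts (not the paper's results):

```python
def prf(tp, fp, fn):
    """Precision, recall, and F-measure from match counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# Hypothetical tally for a matcher: 95 correct matches, 5 false matches,
# 5 genuine pairs missed.
p, r, f = prf(tp=95, fp=5, fn=5)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.95 0.95 0.95
```

Recognition rate, the fourth metric named, is simply the fraction of query prints whose correct mate appears at (or near) the top of the candidate list.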
Authors - Krunal Maheriya, Mrugendra Rahevar, Martin Parmar, Deep Kothadiya, Arpita Shah Abstract - Plant diseases pose a significant threat to agricultural output, causing food insecurity and economic losses. Early detection is crucial for effective treatment and control. Traditional diagnosis methods are labor intensive, time-consuming, and require specialized knowledge, making them unsuitable for large scale use. This study presents a novel approach for classifying cassava leaf diseases using stacked convolutional neural networks (CNNs). The proposed model leverages pre-trained ResNet-18 features to enhance feature learning and classification accuracy. The dataset includes images of cassava leaves with various diseases, such as Cassava Mosaic Disease (CMD), Cassava Green Mottle (CGM), Cassava Bacterial Blight (CBB), and Cassava Brown Streak Disease (CBSD). Our method begins with data preparation, including image augmentation to increase robustness and variability. The ResNet-18 model is then used to extract high-level features, which are then fed into a stacked CNN architecture made up of pooling layers, several convolutional layers, and non-linear activation functions. A fully connected layer is then used for classification. Experimental results demonstrate high accuracy in categorizing cassava leaf diseases. The proprietary stacked CNN architecture combined with pre-trained ResNet-18 features offers a significant improvement over conventional machine learning and image processing methods. This study advances precision agriculture by providing a scalable and effective method for early disease identification, enabling farmers to control diseases more accurately and promptly, thereby increasing crop yield. The findings point to the promise of deep learning techniques in agricultural applications and provide directions for further study to create more complex models for the classification and diagnosis of plant diseases.
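The convolution, activation, and pooling stages described above can be sketched minimally in NumPy; the image patch and kernel below are toy stand-ins, not the ResNet-18 features or the trained cassava model:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Non-linear activation: clamp negatives to zero."""
    return np.maximum(x, 0.0)

def max_pool(x, k=2):
    """k-by-k max pooling (truncating edges that don't divide evenly)."""
    h, w = x.shape[0] // k * k, x.shape[1] // k * k
    return x[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

rng = np.random.default_rng(1)
leaf = rng.random((8, 8))            # stand-in for a leaf-image patch
edge = np.array([[1.0, -1.0]])       # toy horizontal-edge kernel
feat = max_pool(relu(conv2d(leaf, edge)))
print(feat.shape)
```

Stacking several such conv-activation-pool stages, with learned kernels instead of the fixed one here, is what the "stacked CNN" in the abstract refers to.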
Authors - Ruchi Tripathi, Anjan Mishra, Subrata Mondal, Arunangshu Giri, Dipanwita Chakrabarty, Wendrila Biswas Abstract - Agricultural products constitute a significant share of the retail industry. The growing popularity of the digital ecosystem can immensely affect the agricultural sector as well. Both consumers and retailers can benefit from the Internet of Things (IoT), as it has vast application in agricultural product retailing. IoT helps a retailer establish an efficient supply chain with minimum wastage without compromising quality. On the other side, it delivers authentic real-time information to consumers so that they can make efficient decisions. This study has identified factors that yield decision satisfaction for consumers through the application of IoT in the agricultural product retailing sector.
Authors - Khush Zambare, Amol Wagh, Sukhada Mahale, Mayank Sohani Abstract - In today's all-digital world, search engines are the main entry points to knowledge. However, most search engines serve the general user, and profession-specific needs often go unattended. A search for "Amazon" will return results for the e-commerce giant, even when the user is an environmental scientist looking for information about the Amazon rainforest or a cloud developer searching for Amazon Web Services (AWS). This generic approach leads to inefficiency, as users must sift through a lot of irrelevant information. This paper proposes a browser extension that personalizes search results using Artificial Intelligence and Machine Learning, with the aim of catering to individual users based on their profession, interests, and specific needs. The solution dynamically re-ranks search results as it learns from user behavior and search patterns, providing the most relevant information and saving users' precious time. The paper discusses current trends in SEO, AI/ML applications, and personalization techniques to outline how this solution can revolutionize the search engine experience.
Authors - Suruchi Pandey, Hemlata Vivek Gaikwad Abstract - AI is rapidly being integrated across sectors to deliver more personalized and efficient training. This research explores the potential of AI in various training methods, along with the challenges and vast opportunities for learning and growth in using it. The potential for AI-driven training is vast, spanning fields like corporate, healthcare, education, and the military. This study examines how emerging technologies like virtual reality, augmented reality, and simulation-based training can personalize learning experiences, enhance skill development, and provide real-time feedback. It also addresses critical challenges to implementing AI in training, such as costs and data privacy concerns. Additionally, the paper discusses how AI-enabled training could transform traditional learning and development practices, opening up new possibilities for advanced, adaptive learning methods.
Authors - Shripad Kanakdande, Atharva Kanherkar, Ayush Dhoot, P.B.Tathe Abstract - Efficient inventory forecasting and waste management are essential for streamlining supply chains and cutting expenses, particularly in sectors like retail and food services where inadequate stock management can lead to large losses and environmental damage. This study presents a data-driven approach to inventory prediction that makes use of sophisticated machine learning models that evaluate past data, sales patterns, and seasonal fluctuations. The model seeks to increase demand forecasting accuracy by utilizing predictive skills, which would ultimately result in improved stock management and customer satisfaction. In order to help organizations reduce waste and increase resource efficiency, it also focuses on improving waste management through real-time monitoring and forecasting of surplus inventory. Furthermore, combining sustainable practices with predictive analytics promotes long-term corporate viability while minimizing environmental harm. In addition to increasing operational effectiveness, this all-encompassing strategy supports more general environmental sustainability goals. The suggested framework gives businesses a practical way to optimize and streamline their supply chain operations while fulfilling sustainability goals by offering a complete solution that can minimize the ecological footprint and the costs associated with keeping inventory.
Authors - U.Sakthi, Aman Parasher, Akash Varma Datla Abstract - This work seeks to classify various ship categories in the high-resolution optical remote sensing dataset known as FGSC-23 using deep learning models. The dataset contains 23 types of ships, but for this study six categories are selected: Medical Ship, Hovercraft, Submarine, Fishing Boat, Passenger Ship, and Liquified Gas Ship. These ship categories were used to train four deep learning models: VGG16, EfficientNet, ResNet50v2, and MobileNetv2. Accuracy, precision, and AUC were used to evaluate the models, with ResNet50v2 emerging as the most accurate. Using these models, practical deployment for fine-grained ship classification becomes feasible, contributing to enhanced maritime surveillance techniques. The ResNet50v2 model had the highest precision of 0.9058, while MobileNetv2 had the highest AUC of 0.9932. The identified models are analyzed further in this work to illustrate their advantages and shortcomings for fine-grained ship classification tasks. The practical implications of this research transcend theoretical comparisons of performance metrics, as useful information is provided to improve security, surveillance, and monitoring applications in the maritime domain. Categorization and identification of ships is a very important process in the global maritime business because it feeds decision-making in fields like security and surveillance, fishing control, search and rescue, and conservation of the environment. The highlighted models, namely ResNet50v2 and MobileNetv2, proved robust in real-time scenarios because of their ability to accurately and proficiently distinguish between ship types. In addition, this study suggests the possibility of further improving these models using strategies such as transfer learning, data augmentation, and hyperparameter optimization, which would enable them to perform impressively on other maritime datasets. The outcomes are therefore beneficial for furthering work in automated ship detection and classification, and important for enhancing the overall effectiveness and safety of navies across the globe.
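The AUC metric used to compare the models above can be computed directly as the probability that a positive example outscores a negative one; the scores below are hypothetical, not the paper's outputs:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability a positive outranks a negative (ties count half)."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for one ship class (one-vs-rest).
pos = [0.9, 0.8, 0.75, 0.6]   # scores on true "Submarine" images
neg = [0.7, 0.4, 0.3, 0.1]    # scores on images of all other classes
print(auc(pos, neg))           # 15 correct orderings out of 16 -> 0.9375
```

An AUC of 0.9932, as reported for MobileNetv2, means a randomly chosen positive outranks a randomly chosen negative more than 99% of the time.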
Authors - Shruti Anghan, Tirth Chaklasiya, Priyanka Patel Abstract - Technology is an indispensable tool that many industries use to transcend and arrive at the best possible results. The agricultural sector constitutes a very significant part of the Indian economy; half of the country's workforce is still employed by the agriculture industry. The natural environment within which it operates plays a critical role in affecting the agricultural sector, and it throws up many challenges in real farming operations. Most agricultural processes in the country remain old-fashioned, and the industry has been slow to adopt new technologies. Effective technology can enhance production and reduce the greatest barriers in the field. Today, farmers mostly choose crops not based on soil quality but on the market value of the crop and what it can return to them. This can harm both the land and the farmer. Properly applied, modern technologies such as machine learning and deep learning can help revolutionize these practices. This paper shows how to apply these technologies properly to give the farmer maximum support in the field of crop advice.
Authors - Bimal Patel, Ravi Patel, Jalpesh Vasa, Mikin Patel Abstract - The study delves into Tableau's unique characteristics, including its intuitive interface, robust analytics capabilities, and advanced visualization features. By leveraging these features, Tableau empowers users to transform complex datasets into actionable insights, facilitating data-driven decision-making across various domains. The paper explores the extensive applications of Tableau in key industries such as finance, healthcare, retail, and education. In finance, Tableau aids in risk management and performance analysis, while in healthcare, it enhances patient care and operational efficiency through detailed data visualizations. The retail sector benefits from Tableau's ability to analyze sales performance and customer behavior, and in education, it tracks student performance and engagement metrics. Additionally, this research identifies and addresses common challenges associated with data visualization using Tableau, such as handling large datasets, ensuring data accuracy, and maintaining user engagement. The paper provides practical solutions and best practices to overcome these hurdles, ensuring optimal use of Tableau's capabilities. The paper shows how Tableau can be used to help different industries with their specific needs and problems using real-life examples. This study serves as a valuable resource for professionals and researchers seeking to maximize the potential of Tableau in their respective fields.
Authors - Aditi Zeminder, Vaibhav Patil, Prathamesh Raibhole, S V Gaikwad Abstract - This paper presents part of a study of a collaborative robot (cobot) designed for the optimization of work tasks, focusing on task selection and the workplace. The project investigates best practices by developing a kinematic editing library and using ROS and RViz to perform simulations that analyze and improve motion planning. We conducted an exhaustive review of the existing research literature on collaborative robot control and efficiency, and we examine the use of commercial collaborative software, such as Elephant Robotics' myCobot and Dobot, to inform the interface design. The Kivy-based control interface was designed to allow users to interact effectively with the robots and adjust parameters to complete tasks. This paper provides an overview of the process adopted and the challenges encountered during development and initial testing, and lays the groundwork for future developments including hardware integration and additional kinematic optimization.
Authors - Eshwari Khurd, Shravani Kamthankar, Avani Kelkar, Ravinder B. Yerram Abstract - One of the major challenges in speech recognition, medical imaging, and multimedia processing for radar or weather-forecasting applications is noise interference in audio and image signals, which invariably affects algorithmic precision and dependability. Denoising removes unwanted noise while keeping the necessary details of the signal intact. Effective denoising methods for audio and image signals remain an area of continuous research, evaluated across multiple parameters with priority given to the signal-to-noise ratio (SNR). In this paper, we survey such denoising methods, focusing on those using Principal Component Analysis (PCA) and Ensemble Empirical Mode Decomposition (EEMD).
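PCA-based denoising of the kind surveyed above amounts to projecting noisy frames onto their top principal components and reconstructing; a minimal NumPy sketch on synthetic data (the dimensions and noise level are illustrative assumptions):

```python
import numpy as np

def pca_denoise(X, k):
    """Project rows of X onto the top-k principal components and reconstruct."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (U[:, :k] * S[:k]) @ Vt[:k] + mean

rng = np.random.default_rng(0)
# 200 signal frames that actually live in a 2-D subspace of 16 dimensions.
basis = rng.normal(size=(2, 16))
clean = rng.normal(size=(200, 2)) @ basis
noisy = clean + 0.1 * rng.normal(size=clean.shape)

denoised = pca_denoise(noisy, k=2)
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_denoised < err_noisy)   # truncating to the signal subspace helps
```

The improvement is exactly the SNR gain such surveys measure: noise energy outside the retained subspace is discarded while the signal, which lies inside it, is preserved.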
Authors - Renuka Deshmukh, Babasaheb Jadhav, Srinivas Subbarao Pasumarti, Mittal Mohite Abstract - In response to the issue of growing garbage, researchers, foundations, and businesses worldwide have developed concepts and created new technologies that sped up the process. Trash comes from a variety of sources, including municipal solid waste (such as discarded food, paper, cardboard, plastics, and textiles) and industrial waste (such as ashes, hazardous wastes, and construction and demolition materials). Contemporary waste management methods often take sociological factors into account in addition to technological ones. This review paper's goal is to discuss the potential applications of cutting-edge digital technology in the waste disposal sector. With reference to smart cities, this study aims to comprehend the environment, including the opportunities, barriers, current best practices, and the catalysts and facilitators of Industry 4.0 technologies. An innovative approach for examining the use of digital technology in smart city transformation is put forward in this study. The suggested conceptual framework is analyzed in light of research done in both developed and developing nations. The study offers case studies and digital technology applications in trash management. This article examines the ways in which waste management firms are utilizing cutting-edge technology to transform waste management and contribute to the development of a healthier tomorrow.
Authors - Arvin Nooli, Preethi P Abstract - Recognizing emotional states from electroencephalogram (EEG) signal data is challenging due to its high dimensionality and intricate spatial dependencies. Our project illustrates a novel approach to EEG data analysis for emotion recognition tasks that employs Dynamic Graph Convolutional Neural Networks (DGCNN). The architecture takes advantage of the inherent graph structure of EEG electrodes to effectively capture spatial relationships and dependencies. A refined DGCNN model processes and classifies the data into four primary emotional states (Happy, Sad, Fear, and Neutral); we configured the DGCNN with 20 input features per electrode, optimized across 62 electrodes, and utilized multi-layered graph convolutions. The model achieved an overall classification accuracy of 97%, with similarly high macro and weighted average scores for precision, recall, and F1-score, demonstrating its resilience and accuracy.
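The graph-convolution operation at the heart of a DGCNN can be sketched for a toy electrode graph; the adjacency, features, and weights below are illustrative stand-ins, not the trained 62-electrode model:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 . X . W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Toy electrode graph: 4 "electrodes" in a chain, 3 features each.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))          # per-electrode feature vectors
W = rng.normal(size=(3, 8))          # weights (random here, learned in practice)
H = gcn_layer(A, X, W)
print(H.shape)                        # (4, 8): one embedding per electrode
```

The "dynamic" aspect of a DGCNN comes from learning the adjacency matrix itself during training rather than fixing it, as done with the hard-coded chain here.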
Authors - Chitraksh Madan Singh, Yash Kumar, Lakshya Gattani, A.Anilet Bala, Harisudha Kuresan Abstract - This study presents an analysis of Instagram reach using Passive Aggressive, Decision Tree, Random Forest, and Linear Regression models. The goal is to predict the impressions generated by posts based on features like likes, saves, comments, shares, profile visits, and follows. Using Instagram data, machine learning algorithms are applied to forecast the post reach, helping marketers optimize content strategies. Quantitative metrics such as Mean Squared Error (MSE) and R-squared (R2) are used to evaluate model performance, with Random Forest showing superior accuracy compared to other models.
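The MSE and R-squared metrics used above are straightforward to compute once a model produces predictions; the sketch below fits a least-squares baseline to synthetic "post feature" data (hypothetical, not the study's Instagram dataset):

```python
import numpy as np

def mse_r2(y_true, y_pred):
    """Mean squared error and coefficient of determination."""
    resid = y_true - y_pred
    mse = float(np.mean(resid ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - float(np.sum(resid ** 2)) / ss_tot
    return mse, r2

rng = np.random.default_rng(0)
# Synthetic features: likes and saves per post, plus noisy impressions.
likes = rng.integers(50, 500, size=100).astype(float)
saves = rng.integers(5, 80, size=100).astype(float)
X = np.column_stack([np.ones(100), likes, saves])
impressions = 100 + 12 * likes + 30 * saves + rng.normal(0, 50, 100)

coef, *_ = np.linalg.lstsq(X, impressions, rcond=None)
mse, r2 = mse_r2(impressions, X @ coef)
print(r2 > 0.95)   # near-linear data, so the fit explains most variance
```

Random Forest's reported advantage in the study would show up here as a lower MSE and higher R-squared than this linear baseline on held-out posts.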
Authors - Shreyas Shewalkar, Shweta Autade, Aditi Sonje, M.R. Kale Abstract - With the growing need for automated text recognition and image processing, we have explored techniques that enhance the accuracy of handwritten character recognition while simultaneously addressing image restoration challenges. Handwritten English Character Recognition leverages deep learning (DL) techniques to classify and accurately identify characters from scanned or photographed documents. A deep learning-based approach is employed to recognize the patterns in handwritten text, ensuring high precision in distinguishing between characters despite variances in writing styles. In addition to recognition, colorization of grayscale images has gained attention, where DL models predict and apply realistic colors to black and white images. The recognition process applies CNN (Convolutional Neural Networks) for character identification.
Authors - Sangjukta Halder, Renuka Deshmukh Abstract - This study scrutinizes the impact of ICT-driven financial literacy agendas in India, focusing on their role in promoting financial inclusion and enhancing governance. By leveraging digital tools such as mobile apps, online courses, and e-governance platforms, these programs have effectively increased financial literacy, particularly among underserved populations. The research highlights that while challenges such as the digital divide, language barriers, and varying levels of digital literacy persist, these programs significantly empower citizens to make informed financial choices and participate more actively in public fiscal management. The incorporation of financial literacy into digital platforms also fosters greater transparency and accountability in governance. The insights this research offers may greatly benefit legislators, educators, and technology developers seeking to improve these programs. Additionally, it makes recommendations for future research to investigate the long-term effects of ICT-powered financial literacy programs on financial behaviours and governance across various socioeconomic situations in India.
Authors - Yasharth Sonar, Piyush Wajage, Khushi Sunke, Anagha Bidkar Abstract - Emotion recognition from speech is a crucial part of human-computer interaction and has applications in entertainment, healthcare, and customer service. This work presents a speech emotion recognition system that integrates machine learning and deep learning techniques. The system processes speech data using Mel Frequency Cepstral Coefficients (MFCC), Chroma, and Mel Spectrogram features extracted from the RAVDESS dataset. A variety of classifiers are employed, including a neural-network-based multi-layer perceptron, Random Forest, Decision Trees, Support Vector Machines, and other traditional machine learning models. To capture the temporal and spatial components of speech signals, we created a hybrid deep learning model that combines convolutional neural networks (CNN) with long short-term memory (LSTM) networks. In accuracy at identifying eight emotions (neutral, calm, furious, afraid, happy, sad, disgusted, and surprised), the CNN-LSTM model outperformed the others. This study demonstrates how well deep learning and conventional approaches may be combined to recognize speech emotions.
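The MFCC features mentioned above are computed as a DCT of log mel-filterbank energies over a windowed frame; a compact NumPy sketch (the sampling rate, frame size, and filter counts are common defaults, not necessarily the paper's settings):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, fs):
    """Triangular mel-spaced filters over the positive FFT bins."""
    pts = mel_to_hz(np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for j in range(l, c):
            fb[i - 1, j] = (j - l) / max(c - l, 1)
        for j in range(c, r):
            fb[i - 1, j] = (r - j) / max(r - c, 1)
    return fb

def mfcc_frame(frame, fs, n_filters=26, n_coeffs=13):
    """MFCCs for one frame: power spectrum -> mel energies -> log -> DCT-II."""
    n_fft = len(frame)
    spec = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    mel_energies = mel_filterbank(n_filters, n_fft, fs) @ spec
    log_e = np.log(mel_energies + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), n + 0.5) / n_filters)
    return dct @ log_e

fs = 16000
t = np.arange(512) / fs
frame = np.sin(2 * np.pi * 440 * t)      # a 440 Hz tone as a stand-in for speech
coeffs = mfcc_frame(frame, fs)
print(coeffs.shape)                       # (13,)
```

Stacking these per-frame vectors over time produces the feature matrix that the classifiers and the CNN-LSTM model consume.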
Authors - K Sai Geethanjali, Nidhi Umashankar, Rajesh I S, Jagannathan K, Manjunath Sargur Krishnamurthy, Maithri C Abstract - This survey provides a comprehensive review of the methods used for lung cancer detection through thoracic CT images, focusing on various image processing techniques and machine learning algorithms. Initially, the paper discusses the anatomy and functionality of the lungs within the respiratory system. The review examines image processing methods such as cleft detection, rib and bone identification, and segmentation of the lung, bronchi, and pulmonary veins. A detailed literature review covers both basic image enhancement techniques and advanced machine learning methods, including Random Forests (RF), Decision Trees (DT), Support Vector Machines (SVM), K-Nearest Neighbors (KNN), Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Gradient Boosting. The review highlights the necessity for reliable validation techniques, explores alternative technologies, and addresses ethical issues associated with the use of patient data. The findings aim to assist researchers and practitioners in developing more accurate and efficient diagnostic tools for lung cancer detection by providing a concise review, thereby helping to save time and focus efforts on the most promising advancements.
Authors - Bijeesh TV, Bejoy BJ, Krishna Sreekumar, T Punitha Reddy Abstract - Integrating artificial intelligence (AI) and advanced imaging technologies in medical diagnostics is revolutionizing brain tumor recurrence prediction. This study aims to develop a precise prognosis model following Gamma Knife radiation therapy by utilizing state-of-the-art architectures such as EfficientNetV2 and Vision Transformers (ViTs), alongside transfer learning. The research identifies complex patterns and features in brain tumor images by leveraging models pre-trained on large-scale image datasets, enabling more accurate and reliable recurrence predictions. EfficientNetV2 and Vision Transformers (ViTs) achieved prediction accuracies of 98.1% and 94.85%, respectively. The study’s comprehensive development lifecycle includes dataset collection, preparation, model training, and evaluation, with rigorous testing to ensure performance and clinical relevance. Successful implementation of the proposed model will significantly enhance clinical decision-making, providing critical insights into patient prognosis and treatment strategies. By improving the prediction of tumor recurrence, this research advances neuro-oncology, enhances patient outcomes, and personalizes treatment plans. This approach enhances training efficiency and generalization to unseen data, ultimately increasing the clinical utility of the predictive model in real-world healthcare settings.
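Transfer learning here means reusing a feature extractor trained on large-scale image data and fitting only a small head on the medical dataset. The toy sketch below is illustrative only (it is not the paper's EfficientNetV2/ViT code): a frozen random projection stands in for the pre-trained backbone, and a logistic head is the only part that trains.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor: frozen weights that are never updated,
# standing in for an EfficientNetV2/ViT backbone.
W_frozen = rng.normal(size=(64, 16)) / 8.0
def extract(x):
    return np.tanh(x @ W_frozen)          # (n, 16) feature vectors

# Synthetic inputs and labels; labels are a function of the frozen
# features, so a small trainable head can recover them.
X = rng.normal(size=(400, 64))
F = extract(X)
h_true = rng.normal(size=16)
y = (F @ h_true > 0).astype(float)

# Trainable head: logistic regression fitted with plain gradient descent.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    grad = p - y
    w -= 0.5 * F.T @ grad / len(y)
    b -= 0.5 * grad.mean()

acc = ((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == y).mean()
```

The design point the abstract relies on is exactly this split: the expensive representation is inherited, so only the small head must be learned from the limited clinical dataset.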
Authors - Melwin Lewis, Gaurav Mishra, Sahil Singh, Sana Shaikh Abstract - This paper focuses on the development of a 2D fighting game using Simple DirectMedia Layer 2 (“SDL2”), Good Game Peace Out (“GGPO”), and the Godot Game Engine. The project was built with the help of the Godot Engine, and the prototype test was implemented in C++ with the intent to showcase the GGPO library for implementing rollback networking in fighting games, a technique that makes seamless online play possible by hiding network latency rather than adding input delay.
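The core rollback idea is: simulate immediately with a predicted remote input, snapshot the game state every frame, and when the real input arrives late and differs from the prediction, restore the snapshot and resimulate. A minimal language-agnostic sketch of that loop (this illustrates the idea GGPO implements, not GGPO's actual C API):

```python
class Rollback:
    """Minimal rollback netcode sketch: predict the remote player's
    input, snapshot every frame, resimulate on misprediction."""

    def __init__(self):
        self.frame = 0
        self.state = 0              # stand-in for the full game state
        self.local = {}             # frame -> local input
        self.remote = {}            # frame -> confirmed remote input
        self.predicted = {}        # frame -> remote input we guessed
        self.snapshots = {0: 0}     # frame -> saved state

    def advance(self, state, lin, rin):
        # Toy deterministic game logic folding in both players' inputs.
        return state * 3 + lin + 2 * rin

    def tick(self, local_input):
        # Predict the remote input by repeating the last confirmed one.
        pred = self.remote[max(self.remote)] if self.remote else 0
        self.local[self.frame] = local_input
        self.predicted[self.frame] = pred
        self.state = self.advance(self.state, local_input, pred)
        self.frame += 1
        self.snapshots[self.frame] = self.state

    def confirm_remote(self, frame, rin):
        self.remote[frame] = rin
        if self.predicted.get(frame, rin) != rin:
            # Misprediction: roll back and replay every frame since.
            self.state = self.snapshots[frame]
            for f in range(frame, self.frame):
                r = self.remote.get(f, self.predicted[f])
                self.predicted[f] = r
                self.state = self.advance(self.state, self.local[f], r)
                self.snapshots[f + 1] = self.state
```

Determinism of the simulation step is what makes this work: replaying the same inputs from a snapshot must reproduce the same state on both machines.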
Authors - Anudeep Arora, Neha Tomer, Vibha Soni, Neha Arora, Anil Kumar Gupta, Lida Mariam George, Ranjeeta Kaur, Prashant Vats Abstract - Improving patient outcomes, maximizing operational efficiency, and guiding strategic decision-making all depend on the capacity to analyze and interpret data effectively in the quickly changing healthcare sector. Finding and analyzing outliers is a major challenge in healthcare analytics because outliers can significantly affect the accuracy and reliability of data-driven conclusions. The significance of business outlier analysis in healthcare analytics is examined in this article, along with its methods, uses, and consequences for payers, providers, and legislators. By detecting and resolving outliers, healthcare organizations can strengthen their analytical capabilities, improving forecast accuracy and resource allocation and, in turn, patient care.
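The abstract does not commit to a specific detection method; a common starting point is the interquartile-range (IQR) rule, sketched here on made-up claim amounts:

```python
def iqr_outliers(values, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR]."""
    xs = sorted(values)

    def quantile(q):
        # Linear interpolation between the closest ranks.
        pos = q * (len(xs) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(xs) - 1)
        frac = pos - lo
        return xs[lo] * (1 - frac) + xs[hi] * frac

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lo_b, hi_b = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo_b or v > hi_b]

# Hypothetical claim amounts with one implausible entry.
claims = [120, 135, 128, 140, 131, 125, 9800, 133]
flagged = iqr_outliers(claims)
```

Whether a flagged point is a data-entry error to be resolved or a genuine rare event (e.g. a catastrophic claim) is a domain decision; the rule only surfaces candidates for review.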
Authors - Aswini N, Kavitha D Abstract - Obstacle detection is vital for safe navigation in autonomous driving; however, adverse weather conditions like fog, rain, low light, and snow can compromise image quality and reduce detection accuracy. This paper presents a pipeline to enhance image quality under extreme conditions using traditional image processing techniques, followed by obstacle detection with the You Only Look Once (YOLO) deep learning model. Initially, image quality is improved using Contrast Limited Adaptive Histogram Equalization (CLAHE) followed by bilateral filtering to enhance visibility and preserve edge details. The enhanced images are then processed by a pre-trained YOLOv7 model for obstacle detection. This approach highlights the effectiveness of integrating traditional enhancement techniques with deep learning for robust obstacle detection, even under adverse weather, offering a promising solution for enhancing autonomous vehicle reliability.
Authors - Hetansh Shah, Himangi Agrawal, Dhaval Shah Abstract - The paper outlines the design, implementation, and evaluation of the SHA-256 cryptographic hash function on an FPGA platform, focusing on its use in Bitcoin mining. SHA-256 is a key part of the Bitcoin system, generating unique hash values from data to keep it secure and intact. The goal was to create a fast, resource-efficient, hardware-based version of SHA-256 using VHDL and implement it on the ZedBoard FPGA development platform. The main focus was on the VHDL implementation, making it modular and pipelined to improve speed and resource efficiency. The ZedBoard, featuring the Xilinx Zynq-7000 SoC, was used for the hardware implementation. The design also included message buffering, preprocessing, and a pipeline for hash computation, allowing the system to handle incoming data in real time while producing hash outputs quickly. The algorithm’s functionality was verified using simulation tools in Xilinx Vivado, and the hardware implementation results were compared to previous works. The results clearly show that the proposed method utilizes fewer resources than previous works while maintaining a throughput 27% greater than the software solution. The hardware design significantly outperforms software as well as SW/HW (HLS) versions in speed and energy use. The total on-chip power utilized was 12.898 W.
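For context on why SHA-256 throughput is the figure of merit in mining: Bitcoin hashes an 80-byte block header twice with SHA-256, and miners search for a nonce whose double hash falls below a difficulty target. A software reference using Python's standard library (the paper's hardware pipeline computes this same function, only far faster per watt):

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    # Bitcoin applies SHA-256 twice to the serialized block header.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix: bytes, difficulty_zero_bytes: int,
         max_nonce: int = 1_000_000):
    """Toy search: find a nonce whose double hash starts with zero bytes.
    Real difficulty is a numeric target, not a byte-prefix check."""
    target = b"\x00" * difficulty_zero_bytes
    for nonce in range(max_nonce):
        h = double_sha256(header_prefix + struct.pack("<I", nonce))
        if h.startswith(target):
            return nonce, h.hex()
    return None
```

Because a single-bit change in the nonce flips roughly half the output bits (the avalanche effect), there is no shortcut: mining reduces to brute-force hashing, which is why a pipelined hardware core with higher hashes-per-joule matters.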
Authors - Janwale Asaram Pandurang, Minal Dutta, Savita Mohurle Abstract - The Black Friday shopping event is one of the most awaited events worldwide nowadays, offering huge discounts and promotions across various product categories. For sellers, it is important to understand customer purchasing behavior during this period to predict sales, manage inventory, and plan marketing strategies. This research paper focuses on developing a machine learning model that predicts customer spending based on previous Black Friday data, considering factors such as demographics, product types, and previous purchases. After collecting and processing the dataset, exploratory data analysis was conducted to find important trends. Different machine learning models, including Linear Regression, K-Nearest Neighbors (KNN) Regression, Decision Tree Regression, and Random Forest Regression, were applied and tested. The Random Forest Regression model, with an R² value of 0.81, showed the strongest predictive accuracy among these models. This study focuses on machine learning models that will help sellers improve their productivity and increase revenue.
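The comparison described above can be sketched with scikit-learn on synthetic data standing in for the Black Friday features (the dataset, preprocessing, and hyperparameters here are illustrative, not the paper's):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
# Stand-ins for demographic / product features and a spend target
# with both linear and nonlinear structure.
X = rng.normal(size=(1000, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] + X[:, 2] ** 2 + rng.normal(scale=0.5, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "Linear": LinearRegression(),
    "KNN": KNeighborsRegressor(n_neighbors=10),
    "DecisionTree": DecisionTreeRegressor(random_state=0),
    "RandomForest": RandomForestRegressor(n_estimators=100, random_state=0),
}
scores = {name: r2_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```

Scoring every model on the same held-out split, as here, is what makes an R² comparison like the paper's 0.81 for Random Forest meaningful.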
Authors - Srikaanth Chockalingam, Saummya B. Gaikwad, Lokesh P. Shengolkar, Dhanbir S. Sethi Abstract - This paper presents an innovative microcontroller-based system designed to convert text files into Braille script, making Braille content more accessible for visually impaired users. The system leverages an ARM-based microcontroller and servo motors to enable real-time, mechanical translation of text into tactile Braille characters. To facilitate ease of use and to allow offline operation, an SD card is used as the primary storage medium for text files, enabling users to load and convert documents without requiring an internet connection or additional devices. This design emphasizes affordability, scalability, and usability, with the primary aim of making Braille conversion technology more accessible to educational institutions, libraries, and individuals, particularly in resource-limited settings. By reducing dependency on costly, proprietary Braille technology, this system can improve access to information and literacy among visually impaired communities, especially in developing countries where Braille materials are often scarce or prohibitively expensive. The paper thoroughly explores the system’s hardware and software components, detailing the architecture and function of each element within the overall design. A focus on energy efficiency is highlighted to extend the device’s operational time, and efforts to minimize manufacturing costs ensure this solution remains within a low-cost budget. These design choices make this Braille converter a sustainable option for broad deployment and adoption. Further development aims to expand the device's functionality by integrating wireless connectivity for text input, allowing users to access a greater range of content through online sources. 
Additionally, future iterations could support a larger tactile display, accommodating more Braille cells simultaneously, which would improve the reading experience for users and enhance the system’s application in educational environments.
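The core translation step in such a device maps each character to a six-dot cell. A Grade-1 (uncontracted) sketch for the letters a-z using the Unicode Braille Patterns block (the device itself would drive servo positions rather than emit characters, so this only illustrates the mapping):

```python
# Dot masks for letters a-j; bit values follow Unicode's U+2800 block:
# dot1=0x01, dot2=0x02, dot3=0x04, dot4=0x08, dot5=0x10, dot6=0x20.
A_J = [0x01, 0x03, 0x09, 0x19, 0x11, 0x0B, 0x1B, 0x13, 0x0A, 0x1A]

def braille_cell(ch: str) -> int:
    c = ch.lower()
    if c == " ":
        return 0x00
    if c == "w":                       # w is irregular: dots 2,4,5,6
        return 0x3A
    i = ord(c) - ord("a")
    if i < 10:                         # a-j: upper-cell letters
        return A_J[i]
    if i < 20:                         # k-t: same shapes plus dot 3
        return A_J[i - 10] | 0x04
    order = "uvxyz"                    # u-z (minus w): plus dots 3 and 6
    return A_J[order.index(c)] | 0x24

def to_braille(text: str) -> str:
    return "".join(chr(0x2800 + braille_cell(c)) for c in text)
```

On the microcontroller, each returned mask would set six solenoid or servo pins instead of selecting a codepoint; the lookup logic stays the same, which is what keeps the translation step cheap enough for a low-cost ARM part.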
Authors - Olukayode Oki, Abayomi Agbeyangi, Jose Lukose Abstract - Subsistence farming is an essential means of livelihood in numerous areas of Sub-Saharan Africa, with a significant segment of the population depending on it for food security. However, animal welfare in these agricultural systems encounters persistent challenges due to resource constraints and insufficient infrastructure. In recent years, technological integration has been seen as a viable answer to these difficulties by enhancing livestock monitoring, healthcare, and overall farm management. This study investigates the effects of technological integration on enhancing animal well-being, with an emphasis on a case study from Nxarhuni Village in the Eastern Cape province of South Africa. The study surveys a random sample of 63 subsistence farmers to investigate the intricacies of technology adoption in rural areas, highlighting the necessity for informed strategies and sustainable agricultural practices. Both descriptive and regression analyses were employed to highlight the trends, relationships, and significant predictors of technology adoption. The descriptive analysis reveals that 56.6% of respondents had a positive perception of technology, even though challenges like animal health concerns, environmental conditions, and financial constraints persist. Regression analysis results indicate that socioeconomic status (coef = 1.4468, p = 0.059) and gender (coef = -1.1786, p = 0.062) are key predictors of technology adoption. The study recommends the need for specialised educational programs, improvement in infrastructure, and community engagement to support sustainable technology use and enhance animal care practices.
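If the reported model is a logistic regression on a binary adopt/not-adopt outcome (the abstract does not state the link function, so this is an assumption), the coefficients convert directly into odds ratios:

```python
import math

# Coefficients as reported in the study's regression analysis.
coefs = {"socioeconomic_status": 1.4468, "gender": -1.1786}

# Under a logistic model, exp(coef) is the multiplicative change in
# the odds of adoption per one-unit change in that predictor.
odds_ratios = {name: math.exp(c) for name, c in coefs.items()}
# socioeconomic_status -> roughly 4.2x higher odds of adoption;
# gender -> roughly 0.31x, i.e. substantially lower odds.
```

Note both p-values sit just above 0.05, so these are suggestive rather than conventionally significant predictors, which is consistent with the study's hedged phrasing of "key predictors".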
Authors - Dinesh Rajput, Prajwal Nimbone, Siddhesh Kasat, Mousami Munot, Rupesh Jaiswal Abstract - We introduce a neural network-based system that combines real-time avatar functionality with text-to-speech (TTS) synthesis. The system can produce speech in the voices of various talkers, including ones that were not seen during training. To generate a speaker embedding from a brief reference voice sample, the system makes use of a dedicated encoder trained on a large volume of voice data. Conditioned on this speaker embedding, the system converts text into a mel-spectrogram, which a vocoder turns into an audio waveform. Concurrently, the produced speech is synced with a three-dimensional avatar that produces the corresponding lip motions in real time. This method transfers the speaker variability learned by the encoder to the TTS task, enabling the system to mimic genuine conversation in the voices of unseen speakers. On a web interface, precise lip syncing of speech with facial movements is ensured via the integration of the avatar system. We also demonstrate that training the encoder on a diverse speaker dataset markedly improves the system's ability to adapt to novel voices. In addition, the use of random speaker embeddings demonstrates the model's capacity to generate unique voices distinct from those heard during training while retaining smooth synchronization with the avatar's visual output, showcasing the model's ability to deliver high-quality, interactive voice cloning experiences.
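The encoder's job is to map a short reference clip to a fixed-length speaker embedding, and nearby embeddings correspond to similar voices. A toy sketch of that similarity structure (the embeddings below are random stand-ins, not outputs of a trained encoder):

```python
import numpy as np

rng = np.random.default_rng(1)

def cosine(a, b):
    # Cosine similarity, the usual metric for comparing speaker embeddings.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend enrolled-speaker embeddings; a trained encoder would produce
# these from each speaker's reference audio.
speakers = {name: rng.normal(size=256) for name in ["spk_a", "spk_b", "spk_c"]}

# A new "reference clip" embedding: a noisy copy of spk_b's voice.
query = speakers["spk_b"] + 0.1 * rng.normal(size=256)

# The TTS model would be conditioned on `query` directly; matching it
# against enrolled speakers shows the embedding space is discriminative.
closest = max(speakers, key=lambda s: cosine(speakers[s], query))
```

Sampling a random point in this space instead of encoding a real clip is exactly the mechanism the abstract uses to generate novel voices unseen in training.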