Authors - Anushka Ashok Pote, Laukik Nitin Marathe, Suvarna Abhijit Patil, Sneha Kanawade, Deepali Samir Hajare, Varsha Pandagre, Arti Singh, Rasika Kachore

Abstract - With the rising prevalence of various diseases, artificial intelligence is being adopted rapidly in the healthcare industry to build systems for diagnosis, treatment, and patient care. A major challenge, however, is that most traditional AI-driven healthcare systems are neither transparent nor comprehensible. This review explores the role of Explainable Artificial Intelligence (XAI) in advancing precision medicine, with a focus on personalized treatment and disease prediction. Despite their power, traditional AI models function as "black boxes," offering no insight into how decisions are made, which limits their application in critical domains such as healthcare, where trust and accountability are crucial. Explainable AI makes systems more transparent and interpretable, allowing healthcare professionals to understand and trust AI-driven insights. XAI has demonstrated significant improvements in diagnostic accuracy and treatment personalization across areas such as oncology, cardiovascular disease, and neurology. The review compares explainability-driven models with traditional models and finds that XAI-based models achieve better accuracy and precision while providing interpretable decision-making, making them more suitable for clinical applications. These systems nevertheless face challenges, including computational complexity and the need for standardized evaluation metrics. This paper highlights the transformative potential of XAI in the healthcare industry in fostering more ethical, transparent, and patient-centered solutions. XAI is poised to revolutionize precision medicine by improving patient outcomes and making significant contributions to the field.