This project compares LIME and SHAP for explaining heart disease predictions on the UCI Heart Disease dataset. Several models were built from scratch, with XGBoost achieving the highest accuracy (83.3%). SHAP produced stable, theoretically grounded feature attributions, while LIME offered intuitive instance-level insights whose consistency improved when explanations were aggregated across instances. Together, the two methods show how combining interpretability tools can make machine learning in healthcare both accurate and transparent. A minimal sketch of the pipeline appears below.
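
The sketch below shows one way such a pipeline could be wired together, assuming the `xgboost`, `shap`, `lime`, and `scikit-learn` packages. `make_classification` stands in for the actual UCI Heart Disease loading and preprocessing (not shown here), and the feature names, class labels, and aggregation count are illustrative placeholders rather than the project's real configuration.

```python
import numpy as np
import shap
import xgboost as xgb
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Placeholder data: the real project uses the UCI Heart Disease
# dataset (13 clinical features); this synthetic set only mirrors its shape.
X, y = make_classification(n_samples=300, n_features=13, random_state=0)
feature_names = [f"feat_{i}" for i in range(X.shape[1])]  # illustrative names
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Gradient-boosted trees, the best-performing model in the summary above.
model = xgb.XGBClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# SHAP: exact, game-theoretic attributions for tree ensembles.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
mean_abs_shap = np.abs(shap_values).mean(axis=0)  # global importance per feature

# LIME: a local surrogate explanation around a single test instance.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["no disease", "disease"],  # assumed label names
    mode="classification",
)
single = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(single.as_list())

# Aggregated LIME: average absolute weights over many instances to
# smooth out the sampling noise of any one local explanation.
n_agg = 50  # illustrative choice of how many instances to aggregate
agg_lime = np.zeros(X.shape[1])
for row in X_test[:n_agg]:
    exp = lime_explainer.explain_instance(
        row, model.predict_proba, num_features=X.shape[1])
    for idx, weight in exp.as_map()[1]:  # (feature index, weight) pairs
        agg_lime[idx] += abs(weight) / n_agg

# Side-by-side global view of the two attribution methods.
for name, s, l in zip(feature_names, mean_abs_shap, agg_lime):
    print(f"{name}: mean|SHAP|={s:.3f}  mean|LIME|={l:.3f}")
```

Averaging absolute LIME weights across instances trades some per-instance fidelity for stability, which is the consistency benefit the summary attributes to the aggregated approach.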