Multimodal Prediction of Alzheimer’s Disease
Washington University in St. Louis, Fall 2024
This project implements a multimodal approach for predicting Alzheimer's Disease using machine learning and deep learning techniques. Developed as part of Washington University in St. Louis's CSE 419A: Introduction to AI for Health course, the project uses the OASIS-1 dataset, combining brain MRI scans with clinical data for early detection and prediction of Alzheimer's Disease.
Key Features
- Multimodal data integration from OASIS-1 dataset
- Deep learning models for image analysis
- Machine learning models for clinical data analysis
- Comprehensive performance evaluation metrics
- Visual analysis of model predictions
- Combined classifier leveraging both imaging and clinical data (a fusion sketch follows this list)
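
The combined classifier can be illustrated with a minimal late-fusion sketch: image embeddings (e.g., taken from a CNN's penultimate layer) are concatenated with tabular clinical features, and a single classifier is trained on the fused vector. The feature names, dimensions, and choice of logistic regression below are illustrative assumptions, not the project's actual implementation.

```python
# Hypothetical late-fusion sketch: concatenate CNN image embeddings with
# clinical features and train one classifier on the fused representation.
# Shapes and feature names are placeholders, not taken from the project code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_subjects = 200

# Stand-ins for per-subject features the pipeline would produce:
image_embeddings = rng.normal(size=(n_subjects, 64))   # e.g. CNN penultimate-layer features
clinical_features = rng.normal(size=(n_subjects, 6))   # e.g. age, MMSE, eTIV, nWBV, ...
labels = rng.integers(0, 2, size=n_subjects)           # 1 = probable AD, 0 = control

# Late fusion: concatenate the two modalities into one feature vector per subject.
fused = np.hstack([image_embeddings, clinical_features])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, stratify=labels, random_state=0
)

# Scaling happens inside the pipeline, so it is fit only on the training split.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("Held-out ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

In this fusion scheme the imaging and clinical branches can be developed and evaluated independently, then joined at the feature level, which is one common way to combine heterogeneous modalities.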
Technologies & Tools
- Python 3.12
- Deep Learning Frameworks: TensorFlow/Keras, PyTorch
- Machine Learning: Scikit-learn, XGBoost
- Data Analysis: Pandas & NumPy
- Visualization: Matplotlib & Seaborn
- Development Environment: Jupyter Notebooks
Project Components
- Main implementation notebook with data processing, model training, and evaluation
- Detailed project report in NeurIPS format
- CNN architecture visualization and implementation (a minimal sketch follows this list)
- Final classifier architecture combining multiple modalities
- Comprehensive performance analysis and visualization
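
For the imaging branch, a small Keras CNN for classifying 2D MRI slices is sketched below. The input size, layer widths, and two-class output are illustrative assumptions rather than the architecture used in the notebook.

```python
# Hypothetical 2D CNN sketch for MRI slice classification in Keras.
# All hyperparameters here are placeholders, not the project's final architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_slice_cnn(input_shape=(176, 176, 1), num_classes=2):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_slice_cnn()
model.summary()  # prints the layer-by-layer architecture
```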
Project Documentation
- GitHub Repository
- Project Report: Available in NeurIPS format
- Presentation: Final Demo slides covering methodology and results