About Me
Hi, I'm David Kan, a second-year computer science major with a minor in business and a fintech certificate. I've gained hands-on experience in AI and machine learning through research internships, where I worked on projects such as mental disorder classification from EEG data and model optimization for image classification. These experiences have inspired me to pursue a career applying AI to solve real-world problems. Currently, I'm working on a computer vision project with Dr. Bradley Barnes to automate attendance tracking through image classification. In addition to my technical pursuits, I'm actively involved in organizations like the Terry Fintech Society, where I'm exploring the intersection of technology and business. I'm excited about opportunities to grow in AI, but I'm also eager to explore other areas of computer science to gain a broader understanding of how different fields interact and to build a strong foundation for creating innovative solutions.
Experience
National Science Foundation
- Developed competency in machine learning techniques for EEG data classification using PyTorch.
- Set up a Lambda machine learning workstation with the cuDNN and PyTorch libraries.
Spiking Networks and Wavelet Denoising for Robust EEG Feature Extraction
- Developed and trained a novel Spiking Neural Network on the BCI Competition IV 2a motor imagery dataset, achieving 84% testing accuracy.
- Converted EEG data into spectrograms for use in image classification (see the sketch after this list).
- Continuing work on the MODMA mental-disorder analysis dataset for a conference submission.
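A minimal sketch of the EEG-to-spectrogram step described above, assuming raw trials arrive as (channels, samples) NumPy arrays; the sampling rate, channel count, and STFT parameters here are illustrative stand-ins, not the exact project settings.

```python
# Sketch: convert a raw EEG trial into a stack of log-magnitude spectrograms
# that can be fed to a CNN/SNN as a multi-channel "image".
# Shapes and STFT parameters are illustrative assumptions.
import numpy as np
from scipy.signal import stft

def eeg_trial_to_spectrogram(trial, fs=250, nperseg=64, noverlap=32):
    """Turn one EEG trial of shape (channels, samples) into an array of
    shape (channels, freq_bins, time_bins)."""
    specs = []
    for channel in trial:
        _, _, Zxx = stft(channel, fs=fs, nperseg=nperseg, noverlap=noverlap)
        specs.append(np.log1p(np.abs(Zxx)))  # log scaling compresses dynamic range
    return np.stack(specs)

# Example: a fake 22-channel, 4-second motor-imagery trial sampled at 250 Hz
trial = np.random.randn(22, 4 * 250)
spec = eeg_trial_to_spectrogram(trial)
print(spec.shape)  # (22, freq_bins, time_bins), usable as a multi-channel image
```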
Kennesaw State University
Investigating the Effects of Different Activation Functions on the Performance of CNN Image Classifiers
- Conducted an analysis of four activation functions (ReLU, Leaky ReLU, ELU, and Tanh) and their impact on the performance of a CNN image classifier on the CIFAR-10 benchmark dataset (see the sketch after this list).
- Evaluated performance using accuracy, loss, and F1-score metrics, finding that ELU performed best, reaching 90% training accuracy with the VGG-9 model architecture.
- Gained experience with machine learning techniques for image classification problems using TensorFlow.
- Addressed a relative gap in research surrounding activation functions.
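A minimal sketch of the comparison setup, assuming a small Keras CNN trained briefly on CIFAR-10; the architecture and training budget are illustrative placeholders, not the VGG-9 configuration used in the project.

```python
# Sketch: rebuild the same small CNN with different activation functions
# and compare them on CIFAR-10. Only the activation is varied.
import tensorflow as tf

ACTIVATIONS = {
    "relu": "relu",
    "leaky_relu": tf.nn.leaky_relu,
    "elu": "elu",
    "tanh": "tanh",
}

def build_cnn(activation):
    # Small convolutional classifier; illustrative, not VGG-9.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation=activation, input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation=activation),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation=activation),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

for name, act in ACTIVATIONS.items():
    model = build_cnn(act)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"{name}: test accuracy {acc:.3f}")
```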
Kennesaw State University
Federated Learning with Knowledge Distillation and Integrated Gradient: A Communication-Efficient
Approach to Enhance Model Performance
- Researched current state-of-the-art methods in federated learning and knowledge distillation (a minimal distillation-loss sketch follows this list).
- Designed and created graphics illustrating the system architecture.
- Wrote the Introduction, Related Works, and System Diagram sections of the research paper.
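A minimal sketch of the knowledge-distillation idea behind the paper, assuming a standard soft-target loss in PyTorch; the temperature and weighting below are illustrative assumptions, not values from the actual system.

```python
# Sketch: knowledge-distillation loss combining hard labels with the
# teacher's temperature-softened outputs. Hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Hard-label term: standard cross-entropy against the true labels.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL divergence between temperature-softened distributions.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kd

# Example with random logits for a 10-class problem
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```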