
Utilizing Machine Learning and 3D Neuroimaging to Predict Hearing Loss: A Comparative Analysis of Dimensionality Reduction and Regression Techniques

Published 30 Apr 2024 in cs.LG and cs.CV (arXiv:2405.00142v2)

Abstract: In this project, we explored machine learning approaches for predicting hearing-loss thresholds from 3D images of the brain's gray matter. We addressed the problem in two phases. In the first phase, we trained a 3D CNN to compress the high-dimensional input into a latent space and decode it back into the original image, yielding a rich feature representation of the input. In the second phase, we used this model to reduce each input to its latent features and trained standard machine learning models on those features to predict hearing thresholds. For dimensionality reduction we experimented with autoencoders and variational autoencoders, and for regressing the thresholds we explored random forests, XGBoost, and a multi-layer perceptron. Splitting the given data set into training and testing sets, we achieved test-set RMSE values of 8.80 for PT500 and 22.57 for PT4000, with the multi-layer perceptron yielding the lowest RMSE among the models. Our approach leverages the capability of VAEs to capture complex, non-linear relationships within high-dimensional neuroimaging data. We rigorously evaluated the models using various metrics, focusing on root mean squared error (RMSE); the results highlight the efficacy of the multi-layer neural network, which outperformed the other techniques in accuracy. This project advances the application of data mining in medical diagnostics and deepens our understanding of age-related hearing loss through innovative machine-learning frameworks.
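The two-phase pipeline described in the abstract can be sketched on synthetic data as follows. This is a minimal illustration, not the paper's implementation: PCA stands in for the 3D CNN autoencoder/VAE of phase one (scikit-learn has no 3D convolutional autoencoder), and the data, dimensions, and target are all toy values invented for the example.

```python
# Minimal sketch of a two-phase pipeline: (1) reduce high-dimensional
# inputs to a latent representation, (2) regress thresholds on the
# latent features with several standard models and compare RMSE.
# PCA is a stand-in for the paper's 3D CNN autoencoder/VAE.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n, voxels, latent_dim = 200, 512, 16          # toy sizes, not the study's

# Flattened stand-ins for gray-matter volumes, and a synthetic threshold
# that depends on part of the input plus noise.
X = rng.normal(size=(n, voxels))
y = X[:, :latent_dim].sum(axis=1) + rng.normal(scale=0.5, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Phase 1: compress the high-dimensional input into a latent space.
pca = PCA(n_components=latent_dim).fit(X_tr)
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

# Phase 2: regress thresholds on the latent features.
models = {
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "mlp": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                        random_state=0),
}
for name, model in models.items():
    model.fit(Z_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(Z_te)) ** 0.5
    print(f"{name}: test RMSE = {rmse:.3f}")
```

Swapping the PCA step for a learned encoder (and XGBoost in for the forest) recovers the structure the abstract describes: the regression phase only ever sees the low-dimensional latent features.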

