AI prediction of cardiovascular events using opportunistic epicardial adipose tissue assessments from CT calcium score (2401.16190v1)

Published 29 Jan 2024 in q-bio.QM and cs.AI

Abstract: Background: Recent studies have used basic epicardial adipose tissue (EAT) assessments (e.g., volume and mean HU) to predict risk of atherosclerosis-related, major adverse cardiovascular events (MACE). Objectives: Create novel, hand-crafted EAT features, 'fat-omics', to capture the pathophysiology of EAT and improve MACE prediction. Methods: We segmented EAT using a previously-validated deep learning method with optional manual correction. We extracted 148 radiomic features (morphological, spatial, and intensity) and used Cox elastic-net for feature reduction and prediction of MACE. Results: Traditional fat features gave marginal prediction (EAT-volume/EAT-mean-HU/BMI gave C-index 0.53/0.55/0.57, respectively). Significant improvement was obtained with 15 fat-omics features (C-index=0.69, test set). High-risk features included volume-of-voxels-having-elevated-HU-[-50, -30-HU] and HU-negative-skewness, both of which assess high HU, which has been implicated in fat inflammation. Other high-risk features include kurtosis-of-EAT-thickness, reflecting the heterogeneity of thicknesses, and EAT-volume-in-the-top-25%-of-the-heart, emphasizing adipose near the proximal coronary arteries. Kaplan-Meier plots of Cox-identified, high- and low-risk patients were well separated when split at the median fat-omics risk, with the high-risk group having a hazard ratio (HR) 2.4 times that of the low-risk group (P<0.001). Conclusion: Preliminary findings indicate an opportunity to use more finely tuned, explainable assessments of EAT for improved cardiovascular risk prediction.
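The abstract evaluates every model with the concordance index (C-index), the standard discrimination metric for survival models such as the Cox elastic-net used here. As a minimal illustration (not the authors' code, and with toy data invented for this sketch), Harrell's C-index counts, over all comparable patient pairs, how often the patient who experienced the event earlier was also assigned the higher predicted risk:

```python
from itertools import combinations

def concordance_index(times, events, risks):
    """Harrell's C-index for right-censored survival data.

    A pair (i, j) is comparable when the subject with the shorter
    observed time experienced the event; the pair is concordant when
    that subject also has the higher predicted risk. Ties in risk
    count as half-concordant; tied times are skipped for simplicity.
    """
    concordant = 0.0
    comparable = 0
    for i, j in combinations(range(len(times)), 2):
        # order the pair so that subject i has the shorter observed time
        if times[j] < times[i]:
            i, j = j, i
        if times[i] == times[j]:
            continue  # tied observation times: skipped in this sketch
        if not events[i]:
            continue  # earlier subject was censored: pair not comparable
        comparable += 1
        if risks[i] > risks[j]:
            concordant += 1.0
        elif risks[i] == risks[j]:
            concordant += 0.5
    return concordant / comparable

# Hypothetical toy data: follow-up times, event indicators (1 = MACE,
# 0 = censored), and a model's predicted risk scores.
times  = [2, 4, 6, 8, 10]
events = [1, 1, 0, 1, 0]
risks  = [0.9, 0.5, 0.7, 0.6, 0.1]
print(concordance_index(times, events, risks))  # → 0.75
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why the jump from ~0.55 (volume, mean HU, BMI alone) to 0.69 (15 fat-omics features) represents a meaningful gain in discrimination.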

Authors (11)
  1. Tao Hu
  2. Joshua Freeze
  3. Prerna Singh
  4. Justin Kim
  5. Yingnan Song
  6. Hao Wu
  7. Juhwan Lee
  8. Sadeer Al-Kindi
  9. Sanjay Rajagopalan
  10. David L. Wilson
  11. Ammar Hoori
