
Accelerating Flash Calculation through Deep Learning Methods (1809.07311v2)

Published 19 Sep 2018 in cs.CE, physics.chem-ph, and physics.comp-ph

Abstract: Over the past two decades, researchers have made remarkable progress in accelerating flash calculation, which is useful in a variety of engineering processes. In this paper, the general phase-splitting problem statement and the flash calculation procedure based on the Successive Substitution Method are reviewed, and their main shortcomings are pointed out. Two acceleration methods, Newton's method and the Sparse Grids Method, are then presented for comparison with the deep learning model proposed in this paper. A detailed introduction, from artificial neural networks to deep learning methods, is provided along with the authors' own remarks. The factors in the deep learning model are investigated to show their effect on the final result, and the selected model is used in a flash calculation predictor and compared against the other methods mentioned above. Results from the optimized deep learning model match the experimental data well while requiring the shortest CPU time. Further comparisons with experimental data demonstrate the robustness of the model.
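As context for the baseline the abstract reviews, the sketch below outlines a minimal two-phase flash via the Successive Substitution Method: K-values are initialized (here with the Wilson correlation), the Rachford-Rice equation is solved for the vapor fraction, and K-values are updated from liquid/vapor fugacity-coefficient ratios until convergence. The function names and the user-supplied `fugacity_ratio` equation-of-state routine are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def wilson_K(Tc, Pc, omega, T, P):
    """Wilson correlation for initial K-value estimates."""
    return (Pc / P) * np.exp(5.373 * (1.0 + omega) * (1.0 - Tc / T))

def rachford_rice(beta, z, K):
    """Rachford-Rice objective: sum_i z_i (K_i - 1) / (1 + beta (K_i - 1))."""
    return np.sum(z * (K - 1.0) / (1.0 + beta * (K - 1.0)))

def solve_beta(z, K, tol=1e-10, max_iter=200):
    """Solve for the vapor fraction beta in (0, 1) by bisection.
    The objective is monotonically decreasing in beta."""
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if rachford_rice(mid, z, K) > 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def successive_substitution(z, K0, fugacity_ratio, tol=1e-8, max_iter=500):
    """Successive substitution flash loop.

    z             : overall mole fractions
    K0            : initial K-value estimates (e.g. from wilson_K)
    fugacity_ratio: user-supplied EOS routine returning phi_L(x) / phi_V(y)
                    per component (assumed here, not shown)
    """
    K = K0.copy()
    for _ in range(max_iter):
        beta = solve_beta(z, K)
        x = z / (1.0 + beta * (K - 1.0))   # liquid mole fractions
        y = K * x                          # vapor mole fractions
        K_new = fugacity_ratio(x, y)       # equilibrium condition: K = phi_L / phi_V
        if np.max(np.abs(K_new / K - 1.0)) < tol:
            return beta, x, y
        K = K_new
    return beta, x, y
```

The deep learning surrogate discussed in the paper aims to bypass this iteration entirely by predicting the phase-split result directly from the feed conditions, which is where the reported CPU-time savings come from.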

Citations (47)
