The Global Active Subspace Method
Abstract: We present a new dimension reduction method, the global active subspace method. The method uses expected values of finite differences of the underlying function to identify the important directions, and builds a surrogate model on the resulting lower-dimensional subspace. The method generalizes the active subspace method, which uses gradient information of the function to construct a reduced model. We develop the error analysis for the global active subspace method, and present numerical examples comparing it with the active subspace method. The results show that the global active subspace method is accurate, efficient, and robust with respect to noise or lack of smoothness in the underlying function.
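For context, the classical active subspace construction that the abstract refers to can be sketched as follows. This is a minimal illustration, not the paper's global method: it estimates the matrix C = E[∇f ∇fᵀ] by Monte Carlo, with gradients approximated by central finite differences; the test function, sample count, and step size are all assumptions for the demo.

```python
import numpy as np

# Illustrative test function: varies only along one direction w,
# so its active subspace is the span of w.
rng = np.random.default_rng(0)
d = 5
w = np.array([1.0, 2.0, 0.0, 0.0, 0.0])
f = lambda x: np.sin(x @ w)

# Monte Carlo estimate of C = E[grad f(x) grad f(x)^T], with each
# gradient approximated by central finite differences (step h assumed).
n, h = 200, 1e-4
C = np.zeros((d, d))
for _ in range(n):
    x = rng.uniform(-1.0, 1.0, d)
    g = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(d)])
    C += np.outer(g, g)
C /= n

# Eigenvectors of C with large eigenvalues span the active subspace;
# a surrogate model is then built over those few directions.
eigvals, eigvecs = np.linalg.eigh(C)
u = eigvecs[:, -1]  # dominant direction
alignment = abs(u @ w / np.linalg.norm(w))
print(alignment)  # near 1: the estimated direction aligns with w
```

The global method described in the abstract replaces the pointwise gradients above with expected finite differences of the function itself, which is what makes it usable when f is noisy or non-smooth.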