A preconditioned deepest descent algorithm for a class of optimization problems involving the $p(x)$-Laplacian operator

Published 22 May 2022 in math.NA, cs.NA, and math.FA | arXiv:2205.10945v2

Abstract: In this paper we are concerned with a class of optimization problems involving the $p(x)$-Laplacian operator, which arise in imaging and signal analysis. We study the well-posedness of this class of problems in an amalgam space, assuming that the variable exponent $p(x)$ is a log-H\"older continuous function. Further, we propose a preconditioned descent algorithm for the numerical solution of the problem, based on a "frozen exponent" approach in a finite-dimensional space. Finally, we carry out several numerical experiments to show the advantages of our method. Specifically, we study two detailed examples whose motivation lies in a possible extension of the proposed technique to image processing.
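To illustrate the flavor of the "frozen exponent" idea mentioned in the abstract, the following is a minimal, hypothetical 1D sketch (not the paper's actual algorithm or preconditioner): a discrete variable-exponent energy is minimized by descent steps in which the exponent $p(x)$ is treated as a fixed weight when forming the gradient of the $p(x)$-Laplacian term. All names, the step size, the regularization parameter, and the test problem are illustrative assumptions.

```python
import numpy as np

# Hypothetical 1D sketch (NOT the paper's exact scheme): minimize
#   J(u) = sum_i |(Du)_i|^{p_i} / p_i + (lam/2) * ||u - f||^2
# by gradient descent, where the variable exponent p(x) is "frozen"
# (used as a fixed pointwise weight) when assembling the descent direction.

n = 64
x = np.linspace(0.0, 1.0, n)
p = 1.5 + 0.3 * np.sin(2 * np.pi * x[:-1])  # smooth variable exponent in (1, 2)
f = np.where(x < 0.5, 0.0, 1.0)             # piecewise-constant data with a jump
lam = 0.1                                    # fidelity weight (illustrative)
eps = 1e-2                                   # regularization near |Du| = 0

def D(u):
    # Forward differences on a uniform grid.
    return np.diff(u)

def energy(u):
    return np.sum(np.abs(D(u)) ** p / p) + 0.5 * lam * np.sum((u - f) ** 2)

def descent_direction(u):
    g = D(u)
    # Frozen-exponent weight: (|Du| + eps)^(p-2) multiplies Du, as in a
    # lagged-diffusivity / weighted-Laplacian step.
    flux = (np.abs(g) + eps) ** (p - 2.0) * g
    dJ = lam * (u - f)
    dJ[:-1] -= flux                          # scatter the negative divergence
    dJ[1:] += flux
    return dJ

u = f.copy()
tau = 0.01                                   # small step keeps the energy decreasing
energies = [energy(u)]
for _ in range(200):
    u -= tau * descent_direction(u)
    energies.append(energy(u))
```

In this toy setting the energy decreases along the iterates; the paper's method additionally employs a preconditioner and works in the amalgam-space framework, neither of which is reproduced here.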
