Computer Science > Computational Engineering, Finance, and Science

Title: Introducing a microstructure-embedded autoencoder approach for reconstructing high-resolution solution field data from a reduced parametric space

Abstract: In this study, we develop a novel multi-fidelity deep learning approach that transforms low-fidelity solution maps into high-fidelity ones by incorporating parametric space information into a standard autoencoder architecture. Integrating this parametric space information significantly reduces the amount of training data needed to predict high-fidelity solutions from low-fidelity ones. As a test problem, we consider two-dimensional steady-state heat transfer in a highly heterogeneous material microstructure. The heat conductivity coefficients for two different materials are condensed from a 101 x 101 grid to smaller grids. We then solve the boundary value problem on the coarsest grid using a pre-trained physics-informed neural operator network known as Finite Operator Learning (FOL). The resulting low-fidelity solution is subsequently upscaled back to the 101 x 101 grid using a newly designed enhanced autoencoder. The novelty of this enhanced autoencoder lies in concatenating heat conductivity maps of different resolutions to the decoder segment at distinct steps; hence, the algorithm is named the microstructure-embedded autoencoder (MEA). We compare the MEA outcomes with those of the finite element method, the standard U-Net, and various other upscaling techniques, including interpolation functions and feedforward neural networks (FFNN). Our analysis shows that the MEA outperforms these methods on test cases in terms of both computational efficiency and error. As a result, the MEA can serve as a supplement to neural operator networks, effectively upscaling low-fidelity solutions to high fidelity while preserving critical details that traditional upscaling methods, such as interpolation, often lose, particularly at sharp material interfaces.
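The central design choice described above, feeding the heat-conductivity map resampled to several resolutions into successive decoder stages, can be illustrated with a short sketch. The following PyTorch snippet is a minimal illustration only, not the authors' implementation: the number of stages, channel widths, activation functions, and the assumed 26 x 26 coarse-grid input are placeholders chosen for readability. Only the idea of concatenating the resampled conductivity map to each decoder stage comes from the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MicrostructureEmbeddedAutoencoder(nn.Module):
    """Sketch of the MEA idea: upscale a coarse solution field while
    concatenating the heat-conductivity map, resampled to each decoder
    resolution, to the decoder features (all sizes assumed)."""

    def __init__(self, ch=16):
        super().__init__()
        # Encoder: compress the low-fidelity solution map.
        self.enc1 = nn.Conv2d(1, ch, 3, stride=2, padding=1)
        self.enc2 = nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1)
        # Decoder: each stage takes one extra input channel holding the
        # conductivity map resampled to that stage's resolution.
        self.dec1 = nn.ConvTranspose2d(2 * ch + 1, ch, 4, stride=2, padding=1)
        self.dec2 = nn.ConvTranspose2d(ch + 1, ch, 4, stride=2, padding=1)
        self.out = nn.Conv2d(ch + 1, 1, 3, padding=1)

    def forward(self, u_low, k_map):
        # u_low: (B, 1, h, w) low-fidelity temperature field
        # k_map: (B, 1, 101, 101) high-resolution conductivity map
        def k_at(x):
            # Resample the conductivity map to the current feature size.
            return F.interpolate(k_map, size=x.shape[-2:], mode="nearest")

        h = F.relu(self.enc1(u_low))
        h = F.relu(self.enc2(h))
        h = F.relu(self.dec1(torch.cat([h, k_at(h)], dim=1)))
        h = F.relu(self.dec2(torch.cat([h, k_at(h)], dim=1)))
        # Resize to the target 101 x 101 grid before the final layer.
        h = F.interpolate(h, size=k_map.shape[-2:], mode="bilinear",
                          align_corners=False)
        return self.out(torch.cat([h, k_at(h)], dim=1))

# Usage (shapes assumed for illustration): a 26 x 26 coarse solution and a
# 101 x 101 conductivity map yield a 101 x 101 high-fidelity reconstruction.
model = MicrostructureEmbeddedAutoencoder()
u_low = torch.randn(2, 1, 26, 26)
k = torch.randn(2, 1, 101, 101)
u_high = model(u_low, k)  # shape: (2, 1, 101, 101)

The design choice worth noting is that, unlike a plain autoencoder, the decoder here is conditioned on the microstructure at every resolution, which is what the abstract credits for preserving sharp material interfaces that interpolation-based upscaling smears out.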
Subjects: Computational Engineering, Finance, and Science (cs.CE); Machine Learning (cs.LG)
Cite as: arXiv:2405.01975 [cs.CE]
  (or arXiv:2405.01975v2 [cs.CE] for this version)

Submission history

From: Natalie Rauter [view email]
[v1] Fri, 3 May 2024 10:00:36 GMT (42349kb,D)
[v2] Tue, 7 May 2024 11:28:02 GMT (14262kb,D)
