
While it may not make sense to the lay person, this type of parallel-coordinates plot can say a lot to the trained eye. For instance, this plot conveys that we can accommodate uncertainty in the laser velocity and beam size variables and still meet a certain powder melt-depth threshold, but not in laser power. Each line represents the data from a different simulation, and the red lines represent the simulations that produced the desired melt-depth results.
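A plot of this kind can be produced with a few lines of Python. The following is a minimal sketch using pandas and matplotlib; the variable names, ranges, and the melt-depth threshold are hypothetical placeholders, not values from the actual study.

```python
# Minimal sketch of a parallel-coordinates plot; all data here is synthetic.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(0)
n = 50
df = pd.DataFrame({
    "laser_power":    rng.uniform(150, 400, n),   # W
    "laser_velocity": rng.uniform(0.5, 2.0, n),   # m/s
    "beam_size":      rng.uniform(50, 100, n),    # microns
    "melt_depth":     rng.uniform(20, 120, n),    # microns
})
# Color each line by whether the simulated melt depth exceeded a
# (hypothetical) threshold of 60 microns.
df["meets_threshold"] = np.where(df["melt_depth"] > 60, "desired", "other")

# Scale each axis to [0, 1] so variables with different units share one plot.
num_cols = ["laser_power", "laser_velocity", "beam_size", "melt_depth"]
df[num_cols] = (df[num_cols] - df[num_cols].min()) / \
               (df[num_cols].max() - df[num_cols].min())

parallel_coordinates(df, class_column="meets_threshold",
                     color=["red", "lightgray"])
plt.title("Simulation inputs vs. melt depth (synthetic data)")
plt.show()
```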

Making sense of data to reduce trial and error

Data mining and uncertainty quantification aim to help us generate a predictive understanding of how to process additive manufacturing powder to achieve unique part properties.

Part of an internally funded strategic initiative to accelerate the certification of additively manufactured metals, this effort draws useful insights from simulation and experimental data: for example, how altering laser velocity or beam size changes how deeply the powder bed melts, which in turn changes a part's strength or ductility. The goal is to develop predictive models that reduce trial and error with novel parts and guarantee their performance.

Uncertainty quantification is a tool that the Laboratory has been developing for several years. It helps manage research by focusing effort on the most important problems, identifying where experiments can provide input, and showing where additional theory, simulation, and modeling are required.

Areas of investigation

  • Uncertainty analysis: What impact do parameter and model uncertainties have on model outputs? (See the sketch following this list.)
  • Sensitivity analysis: Which parameters contribute most to the output uncertainties?
  • Inverse uncertainty quantification: How do we estimate input uncertainties from experimental observations of the outputs?
  • Calibration: How do we use experimental data to find the best parameter values?
  • Risk analysis: In view of uncertainty, how do we quantify risk?
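As a concrete illustration of the first two questions, the following minimal sketch propagates assumed input uncertainties through a toy melt-depth model by Monte Carlo sampling, then ranks the inputs with a crude correlation-based sensitivity measure. The model function and the input distributions are illustrative placeholders, not the project's physics codes.

```python
# Forward uncertainty propagation by Monte Carlo, with a crude
# correlation-based sensitivity ranking. All inputs are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Assumed (not calibrated) input uncertainties.
power    = rng.normal(300, 15, n)     # laser power, W
velocity = rng.normal(1.2, 0.1, n)    # laser velocity, m/s
beam     = rng.normal(70, 5, n)       # beam size, microns

def melt_depth(p, v, b):
    """Toy stand-in for a melt-pool simulation (not a physical model)."""
    return 0.3 * p / (v * np.sqrt(b))

depth = melt_depth(power, velocity, beam)
print(f"mean depth {depth.mean():.1f}, std {depth.std():.1f}")

# Squared correlation of each input with the output as a rough
# indicator of which parameter drives the output uncertainty.
for name, x in [("power", power), ("velocity", velocity), ("beam", beam)]:
    r = np.corrcoef(x, depth)[0, 1]
    print(f"{name:8s} r^2 = {r**2:.2f}")
```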

The uncertainty quantification task has been broadened to include both data mining, which is used to gain insight into the data produced by experiments and simulations, and uncertainty quantification proper, or reasoning under uncertainty, which allows us to reason about the outputs (that is, the properties of the manufactured part) given uncertain inputs (such as the variables controlling the manufacturing process).

Approach

The data mining part of this effort has several aspects. The first is the analysis of images and simulation output to convert low-level data at pixels, grid points, or particles into higher-level features, such as distributions of grain sizes or shapes (Love et al., 2007; Kamath et al., 2011), or the identification of parameters in scaling laws (Kamath et al., 2009; Kamath et al., 2010).

We are analyzing images from electron backscatter diffraction (EBSD) and other instruments used to characterize the manufactured part, images of the powder bed (Körner et al., 2011; Schwerdtfeger et al., 2012), and the simulation outputs from the different models. The higher-level features extracted from these data can be used to validate the simulations (Kamath et al., 2007; Körner et al., 2011), to transfer information between scales, or to gain scientific insights.
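The pixel-to-feature step can be illustrated with a minimal sketch: segment an image into grains and compute a grain-size distribution. The code below uses scikit-image on a synthetic image standing in for real EBSD or powder-bed data; the Otsu thresholding choice is an assumption for illustration, not the project's actual pipeline.

```python
# Segment a synthetic "micrograph" into regions and measure their sizes.
import numpy as np
from skimage import filters, measure

rng = np.random.default_rng(2)
# Smoothed random field standing in for a real characterization image.
image = filters.gaussian(rng.random((256, 256)), sigma=4)

# Threshold, label connected regions, and measure each region's area.
binary = image > filters.threshold_otsu(image)
labels = measure.label(binary)
areas = np.array([r.area for r in measure.regionprops(labels)])

print(f"{labels.max()} grains; mean area {areas.mean():.0f} px, "
      f"median {np.median(areas):.0f} px")
```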

The second aspect of data mining is to use the information extracted from the simulations and the experiments in further analysis. For example, we could represent an experiment using its inputs (laser power, laser speed, powder size distribution, etc.) and the corresponding outputs (strength, ductility, image features of the part, etc.).

A collection of such experiments, compiled into a table, can be analyzed using dimension-reduction methods to identify which input features are important for a particular output or to build data-driven models to predict an output based on the inputs (Kamath, 2009). For example, we could identify which image features result in improved strength of the part or predict the ductility based on the process parameters. This aspect can be considered as building a signature for parts with desired properties.
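A minimal sketch of this signature-building idea follows, fitting a random forest to a synthetic table of experiments and ranking input importance. The column names and the toy input-output relationship are hypothetical, not data from the project.

```python
# Fit a data-driven model mapping process inputs to an output property,
# then rank the inputs by importance. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 200
X = rng.uniform(0, 1, (n, 3))        # columns: power, speed, powder size
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=n)  # toy "strength"

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["power", "speed", "powder_size"],
                     model.feature_importances_):
    print(f"{name:12s} importance {imp:.2f}")
```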

The third use of data mining is to complement the work on design of experiments by using ideas from active learning and surrogate models to identify which simulations and experiments should be run next, either to best fill gaps in the existing data or to provide the most information.
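One common way to realize this idea, shown as a sketch below, is to fit a Gaussian-process surrogate to the runs completed so far and propose the candidate input where the surrogate is least certain. The objective function and candidate grid are illustrative placeholders, not the project's design-of-experiments machinery.

```python
# Active-learning sketch: suggest the next run at the point of maximum
# predictive uncertainty of a Gaussian-process surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
X_run = rng.uniform(0, 1, (10, 1))     # inputs already run (scaled to [0, 1])
y_run = np.sin(6 * X_run).ravel()      # toy observed output

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X_run, y_run)

candidates = np.linspace(0, 1, 200).reshape(-1, 1)
mean, std = gp.predict(candidates, return_std=True)
next_x = candidates[np.argmax(std)]
print(f"next suggested run: x = {next_x[0]:.3f} (std {std.max():.3f})")
```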

Research expectations

As we run different simulations and experiments, we expect to generate enough data to enable us to use probabilistic techniques, such as Gaussian processes and uncertain decision trees, and incorporate uncertainties into the analysis. The similarities between data mining and uncertainty quantification techniques (Kamath, 2012) will allow us to build on prior work. In addition, once we have enough data to build good surrogate models, we can use them for further analysis. For example, we could perturb a data point representing the inputs to an experiment and determine how much the outputs change, using both the surrogate models and real experiments.
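A minimal sketch of such a perturbation study, using a Gaussian-process surrogate trained on synthetic data (the inputs, outputs, and the 5% perturbation below are all illustrative):

```python
# Perturb one input of a baseline experiment and read off the surrogate's
# predicted change in the output. All numbers here are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, (30, 2))                            # (power, speed)
y = X[:, 0] - 0.5 * X[:, 1] + 0.05 * rng.normal(size=30)  # toy melt depth
surrogate = GaussianProcessRegressor().fit(X, y)

baseline = np.array([[0.5, 0.5]])
perturbed = baseline + np.array([[0.05, 0.0]])            # nudge laser power
delta = surrogate.predict(perturbed) - surrogate.predict(baseline)
print(f"predicted melt-depth change: {delta[0]:+.3f}")
```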

  • Uncertainty quantification to support decisions under uncertainty.
  • Feature selection to identify important inputs.
  • Data-driven models as code surrogates.
  • Active learning to choose new sample points.
  • Image analysis for validation and experimental insights.
  • Uncertainty estimation using data-driven models, Gaussian processes, Bayesian inference, and so on.

Chandrika Kamath

  • kamath2@llnl.gov

    Research Staff, Informatics Group - Center for Applied Scientific Computing