A new approach to artificial intelligence that builds in uncertainty
Date:
October 19, 2020
Source:
University of Delaware
Summary:
Artificial intelligence isn't perfect. In fact, it's only as good
as the methods and data built into it. Researchers have detailed a
new approach to artificial intelligence that builds uncertainty,
error, physical laws, expert knowledge and missing data into its
calculations and leads ultimately to much more trustworthy models.
FULL STORY
==========================================================================
They call it artificial intelligence -- not because the intelligence is
somehow fake. It's real intelligence, but it's still made by humans. That
means AI -- a power tool that can add speed, efficiency, insight and
accuracy to a researcher's work -- has many limitations.
==========================================================================
It's only as good as the methods and data it has been given. On its
own, it doesn't know if information is missing, how much weight to
give differing kinds of information or whether the data it draws on
is incorrect or corrupted. It can't deal precisely with uncertainty or
random events -- unless it learns how.
Because it relies exclusively on data, as machine-learning models
usually do, it does not leverage the knowledge experts have accumulated
over years or the physical models that underpin physical and chemical
phenomena. And it has been hard to teach the computer to organize and
integrate information from widely different sources.
Now researchers at the University of Delaware and the University of Massachusetts-Amherst have published details of a new approach to
artificial intelligence that builds uncertainty, error, physical laws,
expert knowledge and missing data into its calculations and leads
ultimately to much more trustworthy models. The new method provides
guarantees typically lacking from AI models, showing how valuable --
or not -- the model can be for achieving the desired result.
Joshua Lansford, a doctoral student in UD's Department of Chemical and Biomolecular Engineering, and Prof. Dion Vlachos, director of UD's
Catalysis Center for Energy Innovation, are co-authors on the paper
published Oct. 14 in the journal Science Advances. Also contributing
were Jinchao Feng and Markos Katsoulakis of the Department of Mathematics
and Statistics at the University of Massachusetts-Amherst.
The new mathematical framework could produce greater efficiency, precision
and innovation for computer models used in many fields of research. Such
models provide powerful ways to analyze data, study materials and complex interactions and tweak variables in virtual ways instead of in the lab.
"Traditionally in physical modelings, we build a model first using only
our physical intuition and expert knowledge about the system," Lansford
said. "Then after that, we measure uncertainty in predictions due to
error in underlying variables, often relying on brute-force methods,
where we sample, then run the model and see what happens." Effective,
accurate models save time and resources and point researchers to more
efficient methods, new materials, greater precision and innovative
approaches they might not otherwise consider.
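The brute-force check Lansford describes amounts to Monte Carlo error
propagation: sample the uncertain inputs, run the model for each sample
and look at the spread of the outputs. The short Python sketch below
illustrates only that general idea; the toy Arrhenius-style model, the
parameter values and the assumed uncertainty are invented for
illustration and are not taken from the paper.

    # Minimal sketch of brute-force (Monte Carlo) uncertainty propagation.
    # The model and its parameter values are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    def reaction_rate(binding_energy_eV, temperature_K=300.0):
        # Toy Arrhenius-style expression: the prediction depends
        # exponentially on an uncertain binding energy.
        k_B = 8.617e-5  # Boltzmann constant in eV/K
        return np.exp(-binding_energy_eV / (k_B * temperature_K))

    # Assume the underlying variable is known only to about +/- 0.05 eV:
    # sample it and push every sample through the model.
    samples = rng.normal(loc=1.0, scale=0.05, size=10_000)
    rates = reaction_rate(samples)

    print("mean prediction:", rates.mean())
    print("95% interval:   ", np.percentile(rates, [2.5, 97.5]))

The width of that interval is the "what happens" Lansford refers to:
it shows how much of the prediction error traces back to uncertainty in
the underlying variables.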
==========================================================================
The paper describes how the new mathematical framework works in a chemical reaction known as the oxygen reduction reaction, but it is applicable
to many kinds of modeling, Lansford said.
"The chemistries and materials we need to make things faster or even make
them possible -- like fuel cells -- are highly complex," he said. "We
need precision.... And if you want to make a more active catalyst, you
need to have bounds on your prediction error. By intelligently deciding
where to put your efforts, you can tighten the area to explore.
"Uncertainty is accounted for in the design of our model," Lansford
said. "Now it is no longer a deterministic model. It is a probabilistic
one." With these new mathematical developments in place, the model itself identifies what data are needed to reduce model error, he said. Then a
higher level of theory can be used to produce more accurate data or more
data can be generated, leading to even smaller error boundaries on the predictions and shrinking the area to explore.
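The authors' framework does this with more sophisticated mathematical
machinery, but the basic idea of letting the model point to the data
worth improving can be illustrated with a much simpler, one-at-a-time
sensitivity screen: perturb each uncertain input in turn and see which
one widens the prediction spread the most. The model, parameter names
and uncertainties below are hypothetical.

    # Sketch: rank uncertain inputs by how much each one inflates the
    # prediction spread, to decide where better data would help most.
    # Everything here is illustrative, not the authors' method.
    import numpy as np

    rng = np.random.default_rng(1)

    def model(e_adsorption, e_barrier):
        # Toy surrogate whose prediction depends on two uncertain energies.
        return np.exp(-e_barrier / 0.05) * (1.0 + e_adsorption)

    nominal = {"e_adsorption": 0.30, "e_barrier": 0.60}
    sigma = {"e_adsorption": 0.05, "e_barrier": 0.02}

    for name in nominal:
        p = dict(nominal)
        # Vary one input at a time, holding the others at nominal values.
        p[name] = rng.normal(nominal[name], sigma[name], size=5_000)
        spread = np.std(model(p["e_adsorption"], p["e_barrier"]))
        print(name, "prediction std:", spread)
    # The input with the larger spread is where more accurate data
    # (for example, a higher level of theory) would tighten the bounds.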
"Those calculations are time-consuming to generate, so we're often dealing
with small datasets -- 10-15 data points. That's where the need comes
in to apportion error." That's still not a money-back guarantee that
using a specific substance or approach will deliver precisely the product desired. But it is much closer to a guarantee than you could get before.
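With only 10 to 15 points, resampling gives a quick, if crude, picture
of how much a fitted model can be trusted. The sketch below bootstraps
a small synthetic dataset to put an interval around a fitted slope; the
data and the linear model are invented purely to illustrate the
small-data problem, not taken from the study.

    # Sketch: bootstrap a ~12-point dataset to estimate error bounds on a fit.
    # The data and the linear model are synthetic and illustrative only.
    import numpy as np

    rng = np.random.default_rng(2)

    x = np.linspace(0.0, 1.0, 12)                      # hypothetical descriptor
    y = 2.0 * x + 0.5 + rng.normal(0.0, 0.1, x.size)   # noisy "measurements"

    slopes = []
    for _ in range(2_000):
        idx = rng.integers(0, x.size, size=x.size)     # resample with replacement
        slope, _ = np.polyfit(x[idx], y[idx], deg=1)
        slopes.append(slope)

    lo, hi = np.percentile(slopes, [2.5, 97.5])
    print("fitted slope 95% interval:", lo, "to", hi)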
==========================================================================
This new method of model design could greatly enhance work in renewable
energy, battery technology, climate change mitigation, drug discovery, astronomy, economics, physics, chemistry and biology, to name just a
few examples.
Artificial intelligence doesn't mean human expertise is no longer
needed. Quite the opposite.
The expert knowledge that emerges from the laboratory and the rigors
of scientific inquiry is essential, foundational material for any
computational model.
==========================================================================
Story Source:
Materials provided by University of Delaware. Original written by Beth Miller.
Note: Content may be edited for style and length.
==========================================================================
Journal Reference:
1. Jinchao Feng, Joshua L. Lansford, Markos A. Katsoulakis, Dionisios G. Vlachos. Explainable and trustworthy artificial intelligence for correctable modeling in chemical sciences. Science Advances, 2020; 6 (42): eabc3204. DOI: 10.1126/sciadv.abc3204
==========================================================================
Link to news story:
https://www.sciencedaily.com/releases/2020/10/201019125537.htm