Foreign-Literature Translation: An Intelligent Question-Answering Teaching Assistant Based on Machine Learning




Probabilistic machine learning and artificial intelligence

Headnote

How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.

The key idea behind the probabilistic framework for machine learning is that learning can be thought of as inferring plausible models to explain observed data. A machine can use such models to make predictions about future data, and to take decisions that are rational given these predictions. Uncertainty plays a fundamental part in all of this. Observed data can be consistent with many models, and therefore which model is appropriate, given the data, is uncertain. Similarly, predictions about future data and the future consequences of actions are uncertain. Probability theory provides a framework for modelling uncertainty.
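
To make this idea concrete, the following is a minimal sketch (not taken from the article): two candidate models of a coin are compared via Bayes' rule, and the resulting posterior over models is averaged to predict the next flip. The models, priors and data below are illustrative assumptions chosen for clarity.

```python
# Minimal sketch: learning as inference with Bayes' rule.
# Two candidate models explain a sequence of coin flips; we infer which model is
# plausible given the data and use the posterior to predict the next flip.

import numpy as np

data = np.array([1, 1, 0, 1, 1, 1, 0, 1])  # observed flips (1 = heads)

# Candidate models: each assigns a probability to "heads" (illustrative values).
models = {"fair": 0.5, "biased": 0.8}
prior = {"fair": 0.5, "biased": 0.5}  # prior belief over models

# Likelihood of the data under each model: p(D | m)
likelihood = {
    m: (p ** data.sum()) * ((1 - p) ** (len(data) - data.sum()))
    for m, p in models.items()
}

# Bayes' rule: p(m | D) is proportional to p(D | m) p(m)
evidence = sum(likelihood[m] * prior[m] for m in models)
posterior = {m: likelihood[m] * prior[m] / evidence for m in models}

# Posterior predictive for the next flip: average the models' predictions,
# weighted by how plausible each model is given the data.
p_next_heads = sum(posterior[m] * models[m] for m in models)

print(posterior)      # uncertainty about which model is appropriate
print(p_next_heads)   # prediction that accounts for that uncertainty
```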

This Review starts with an introduction to the probabilistic approach to machine learning and Bayesian inference, and then discusses some of the state-of-the-art advances in the field. Many aspects of learning and intelligence crucially depend on the careful probabilistic representation of uncertainty. Probabilistic approaches have only recently become mainstream in artificial intelligence [1], robotics [2] and machine learning [3,4]. Even now, there is controversy in these fields about how important it is to fully represent uncertainty. For example, advances using deep neural networks to solve challenging pattern-recognition problems such as speech recognition [5], image classification [6,7] and prediction of words in text [8] do not overtly represent the uncertainty in the structure or parameters of those networks. However, my focus will not be on these kinds of pattern-recognition problems, which are characterized by the availability of large amounts of data, but on problems for which uncertainty is really a key ingredient, for example where a decision may depend on the amount of uncertainty.
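
As an illustration of a decision that depends on the amount of uncertainty (not an example from the article), the sketch below chooses among actions by expected loss, including the option of deferring to a human expert; the `decide` helper and the loss values are hypothetical assumptions.

```python
# Minimal sketch: an uncertainty-aware decision via expected loss.
# Given a model's predictive probability that a condition is present, pick the
# action with the lowest expected loss. The loss values are illustrative only.

def decide(p_positive: float) -> str:
    """Pick the action with the lowest expected loss under the model's uncertainty."""
    losses = {
        "treat": (1 - p_positive) * 20.0,    # cost of acting when the condition is absent
        "ignore": p_positive * 100.0,        # cost of failing to act when it is present
        "refer_to_expert": 5.0,              # fixed cost of deferring the decision
    }
    return min(losses, key=losses.get)

# A confident prediction and an uncertain one lead to different decisions.
print(decide(0.95))  # -> "treat"
print(decide(0.55))  # -> "refer_to_expert"
```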

I highlight five areas of current research at the frontier of probabilistic machine learning, emphasizing areas that are of broad relevance to scientists across many fields: probabilistic programming, which is a general framework for expressing probabilistic models as computer programs and which could have a major impact on scientific modelling; Bayesian optimization, which is an approach to globally optimizing unknown functions; probabilistic data compression; automating the discovery of plausible and interpretable models from data; and hierarchical modelling for learning many related models, for example for personalized medicine or recommendation. Although considerable challenges remain, the coming decade promises substantial advances in artificial intelligence and machine learning based on the probabilistic framework.
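
To give a flavour of one of these areas, here is a minimal Bayesian-optimization sketch (not the article's method): a Gaussian-process surrogate from scikit-learn models an unknown one-dimensional objective, and the next evaluation point is chosen by the expected-improvement acquisition function. The objective, kernel and settings are illustrative assumptions.

```python
# Minimal sketch of Bayesian optimization with a GP surrogate (illustrative).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):                      # stands in for an unknown, expensive function
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

X = np.array([[-0.9], [1.1]])          # initial evaluations
y = objective(X).ravel()
grid = np.linspace(-2, 2, 400).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.max()
    # Expected improvement: how much we expect to improve on the best value so far.
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print(X[np.argmax(y)], y.max())        # best input found and its value
```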

Probabilistic modelling and representing uncertainty

At the most basic level, machine learning seeks to develop methods for computers to improve their performance at certain tasks on the basis of observed data. Typical examples of such tasks might include detecting pedestrians in images taken from an autonomous vehicle, classifying gene-expression patterns from leukaemia patients into subtypes by clinical outcome, or translating English sentences into French. However, as I discuss, the scope of machine-learning tasks is even broader than these pattern classification or mapping tasks, and can include optimization and decision making, compressing data and automatically extracting interpretable models from data.

Data are the key ingredients of all machine-learning systems. But data, even so-called big data, are useless on their own until one extracts knowledge or inferences from them. Almost all machine-learning tasks can be formulated as making inferences about missing or latent data from the observed data - I will variously use the terms inference, prediction or forecasting to refer to this general task. Elaborating on the example mentioned above, consider classifying people with leukaemia into one of the four main subtypes of this disease on the basis of each person's measured gene-expression patterns. Here, the observed data are pairs of gene-expression patterns and labelled subtypes, and the unobserved or missing data to be inferred are the subtypes for new patients. To make inferences about unobserved data from the observed data, the learning system needs to make some assumptions; taken together, these assumptions constitute a model. A model can be very simple and rigid, such as a classic statistical linear regression model, or complex and flexible, such as a large and deep neural network, or even a model with infinitely many parameters. I return to this point in the next section. A model is considered to be well defined if it can make forecasts or predictions about unobserved data having been trained on observed data (otherwise, if the model cannot make predictions it cannot be falsified, in the sense of the philosopher Karl Popper's proposal for evaluating hypotheses, or, as the theoretical physicist Wolfgang Pauli said, the model is 'not even wrong'). For example, in the classification setting, a well-defined model should be able to predict the subtype of a new patient from that patient's gene-expression pattern.
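
To make the classification example concrete, the following sketch uses synthetic data (not real gene-expression measurements) and a Gaussian naive Bayes classifier, chosen purely for illustration rather than because it is the model discussed in the article: the observed data are (expression pattern, subtype) pairs, and the output for a new patient is a distribution over subtypes rather than a single hard label.

```python
# Minimal sketch: inferring missing labels (subtypes) from observed labelled data,
# with a probabilistic classifier that reports its uncertainty. Data are synthetic.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_genes, n_subtypes = 20, 4

# Observed data: expression patterns with known subtype labels (synthetic).
means = rng.normal(scale=3.0, size=(n_subtypes, n_genes))   # per-subtype profiles
y_train = rng.integers(0, n_subtypes, size=200)
X_train = means[y_train] + rng.normal(size=(200, n_genes))

model = GaussianNB().fit(X_train, y_train)

# Unobserved data: a new patient's expression pattern with unknown subtype.
x_new = means[2] + rng.normal(size=n_genes)
probs = model.predict_proba(x_new.reshape(1, -1))[0]
print({f"subtype_{k}": round(p, 3) for k, p in enumerate(probs)})
```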
