UE Mathematical Foundations of Machine Learning

Degree programmes including this course unit:

Description

Machine Learning is one of the key areas of Artificial Intelligence. It concerns the study and development of quantitative models that enable a computer to perform tasks without being explicitly programmed to do them. Learning, in this context, means recognizing complex patterns and making intelligent decisions. The difficulty of this task lies in the fact that the set of all possible decisions, over all possible inputs, is usually far too complex to enumerate. To get around this, machine learning algorithms are designed to gain knowledge about the problem at hand from a limited set of observed data drawn from that problem.

To illustrate this principle, consider supervised learning, where a prediction function, which infers a predicted output for a given input, is learned over a finite set of labeled training examples. Each instance of this set is a pair consisting of a vector characterizing an observation in a given vector space and the desired response (also called the desired output) for that observation. After the training step, the function returned by the algorithm should make predictions on new examples, which were not used in the learning process, with the lowest possible probability of error. The underlying assumption is that the training examples are representative of the prediction problem to which the function will be applied. We therefore expect the learning algorithm to produce a function with good generalization performance, not one that merely reproduces the outputs associated with the training examples. Guarantees of learnability for this process were studied in the theory of machine learning largely initiated by Vladimir Vapnik; these guarantees depend on the size of the training set and on the complexity of the class of functions in which the algorithm searches for the prediction function.

Emerging technologies, particularly those related to the development of the Internet, have reshaped the field of machine learning with new learning frameworks designed to better tackle the resulting problems. One of these frameworks is learning with partially labeled data, or semi-supervised learning, whose development is motivated by the effort required to construct labeled training sets for some problems, while large amounts of unlabeled data can be gathered easily for the same problems. The inherent assumption is that unlabeled data contain relevant information about the task to be solved, so it is natural to try to extract this information in order to provide the learning algorithm with more evidence. This has given rise to a number of works that use a small amount of labeled data together with a large amount of unlabeled data to learn a prediction function.
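The two settings described above can be illustrated with a short, self-contained sketch. The code below is not part of the course materials: it assumes scikit-learn and its bundled Iris dataset purely as a stand-in problem, and uses self-training as one simple semi-supervised strategy among many.

```python
# Illustrative sketch only (not part of the course materials); it assumes
# scikit-learn and its bundled Iris dataset purely as a stand-in problem.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier

# Each training example is a pair: a feature vector and its desired output.
X, y = load_iris(return_X_y=True)

# Supervised learning: learn on labeled examples, then estimate
# generalization on examples that were never used during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Supervised test accuracy:", model.score(X_test, y_test))

# Semi-supervised learning: keep only a small labeled subset and mark the
# rest as unlabeled (-1); self-training exploits the unlabeled examples by
# iteratively adding confident pseudo-labels and retraining.
rng = np.random.RandomState(0)
y_partial = y_train.copy()
unlabeled = rng.rand(len(y_train)) > 0.2   # hide roughly 80% of the labels
y_partial[unlabeled] = -1
semi = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
semi.fit(X_train, y_partial)
print("Semi-supervised test accuracy:", semi.score(X_test, y_test))
```

The held-out test accuracy in the first part is the empirical counterpart of the generalization performance discussed above; the second part mimics the semi-supervised setting in which only a small fraction of the training outputs is available.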

The intent of this course is to provide a broad introduction to the field of Machine Learning, including a discussion of each of the major frameworks: supervised, unsupervised, semi-supervised, and reinforcement learning.

Additional information

Language(s): English