MIE324 Introduction to Machine Intelligence
This course provides students with an overview of the field, introduces them to basic techniques, and illustrates these through case studies. Techniques include linear and logistic regression, support-vector machines, and neural networks, and their use to improve decision making, either through better predictions or directly within optimization models. A significant component of the course is exposure to a state-of-the-art machine-learning software framework through a series of assignments. In the culminating design project, students work in teams to build a larger-scale machine-learning application, and communicate and demonstrate their accomplishments.
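To give a flavour of one technique named above, logistic regression, here is a minimal self-contained sketch (pure Python with made-up toy data, not course material): a one-feature classifier fit by gradient descent on the log loss.

```python
import math

def sigmoid(z):
    """Logistic function: maps a real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit weight w and bias b by full-batch gradient descent on the mean log loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)       # predicted probability of class 1
            grad_w += (p - y) * x / n    # gradient of the mean log loss w.r.t. w
            grad_b += (p - y) / n        # gradient w.r.t. b
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy 1-D data: class 0 for small x, class 1 for large x.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)

def predict(x):
    return 1 if sigmoid(w * x + b) >= 0.5 else 0
```

The same model generalizes to many features by replacing the scalar product `w * x` with a dot product; frameworks used in the course assignments automate exactly this kind of gradient computation.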
ROB311 Artificial Intelligence
This course introduces the fundamental principles of artificial intelligence and explores the subject matter in rigorous mathematical terms. Topics include the history and philosophy of AI, search methods in problem solving, knowledge and reasoning, probabilistic reasoning, decision trees, Markov decision processes, natural language processing, and elements of machine learning such as neural-network paradigms.
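One topic listed above, Markov decision processes, can be illustrated with a short value-iteration sketch (a minimal pure-Python example on a made-up two-state MDP, not course material):

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Compute optimal state values by iterating the Bellman optimality update.

    P[a][s][s2] is the probability of moving from state s to s2 under action a;
    R[s][a] is the immediate reward for taking action a in state s.
    """
    n_states = len(R)
    n_actions = len(P)
    V = [0.0] * n_states
    while True:
        V_new = [
            max(
                R[s][a] + gamma * sum(P[a][s][s2] * V[s2] for s2 in range(n_states))
                for a in range(n_actions)
            )
            for s in range(n_states)
        ]
        if max(abs(v, ) if False else abs(v - v_new) for v, v_new in zip(V, V_new)) < tol:
            return V_new
        V = V_new

# Hypothetical two-state MDP: in state 0, action 1 ("go") pays reward 1 and moves
# to the absorbing state 1; action 0 ("stay") pays nothing. So V*(0) = 1, V*(1) = 0.
P = [
    [[1.0, 0.0], [0.0, 1.0]],  # action 0: stay put
    [[0.0, 1.0], [0.0, 1.0]],  # action 1: go to state 1
]
R = [[0.0, 1.0], [0.0, 0.0]]
V = value_iteration(P, R)
```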
ECE368 Probabilistic Reasoning
This course focuses on different classes of probabilistic models and how, based on those models, one deduces actionable information from data. The course starts by reviewing basic concepts of probability, including random variables and first- and second-order statistics. Building from this foundation, the course then covers probabilistic models: vector models (e.g., the multivariate Gaussian), temporal models (e.g., stationarity and hidden Markov models), and graphical models (e.g., factor graphs). On the inference side, topics such as hypothesis testing, marginalization, estimation, and message passing are covered. Applications of these tools span a wide range of data-processing domains, including machine learning, communications, search, recommendation systems, finance, robotics, and navigation.
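As a small illustration of one inference topic above, marginalization in a hidden Markov model, here is a sketch of the forward algorithm in pure Python (made-up parameters, not course material):

```python
def forward(obs, pi, A, B):
    """Likelihood of an observation sequence under an HMM, marginalizing
    over all hidden-state paths one time step at a time.

    pi[s]   : initial probability of hidden state s
    A[s][t] : transition probability from state s to state t
    B[s][o] : probability that state s emits observation symbol o
    """
    n = len(pi)
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
            for t in range(n)
        ]
    return sum(alpha)

# Hypothetical two-state, two-symbol model.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
```

A useful sanity check is that the likelihoods of all possible observation sequences of a fixed length sum to one, since the recursion is an exact marginalization over hidden states.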
ECE421 Introduction to Machine Learning
An introduction to the basic theory, the fundamental algorithms, and the computational toolboxes of machine learning. The focus is on a balanced treatment of practical and theoretical approaches, along with experience with relevant software packages. Supervised learning methods covered in the course include linear models for classification and regression, neural networks, and support vector machines. Unsupervised learning methods covered in the course include principal component analysis, k-means clustering, and Gaussian mixture models. Theoretical topics include bounds on the generalization error, the bias-variance tradeoff, and the Vapnik-Chervonenkis (VC) dimension. Techniques to control overfitting, including regularization and validation, are covered.
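One of the unsupervised methods above, k-means clustering, reduces to a short alternating loop; a minimal pure-Python sketch on made-up 1-D data (not course material):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's algorithm on 1-D data: alternate between assigning each point
    to its nearest centre and moving each centre to its cluster mean."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centres from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: (p - centers[j]) ** 2)
            clusters[nearest].append(p)
        centers = [
            sum(c) / len(c) if c else centers[i]  # keep an empty cluster's centre
            for i, c in enumerate(clusters)
        ]
    return sorted(centers)

# Two well-separated groups, around 1.0 and 9.0.
points = [0.8, 1.0, 1.2, 8.5, 9.0, 9.5]
centers = kmeans(points, k=2)
```

Each iteration can only decrease the within-cluster squared error, which is why the loop converges, though only to a local optimum; in practice one runs it from several random initializations.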