One reason I find artificial intelligence so interesting is the opportunity to think about the nature of thinking. About the only thing cooler than that is learning how learning takes place. In the past, I've organized the class in a somewhat traditional machine learning framework. For the Spring 2015 offering, I want to focus on the essence of inference. But before we can think about inference, we should consider what it is that we consume and produce when we make inferences; we tend to call such stuff information.
This topics course will explore the fundamental theorems of information and coding, and then examine mechanisms for automating inference. With those as background, we will look at the relationships between the two and, more generally, between information and everything else.
The course will be organized as a seminar with some small project components. As students in this course, you will give multiple presentations on material from the text or other assigned readings. You will also implement some of the techniques we study and evaluate your systems' learning behavior through code walks and demonstrations on real-world data sets.