(Other posts tagged ai-course.)
Here is my ninth participant's report from the Stanford Introduction to Artificial Intelligence course.
1. I wrote last week that the units on Games, Game Theory, and Advanced Planning had been hard work; and that the associated homework had felt demanding and been very time-consuming. Securing an adequate mark on that homework despite too many "stupid errors" means I probably learned more than I had realised. But I also know that I was answering some of the questions from the "stored fat" of knowledge I already had, rather than from what the course had been teaching me.
2. This feeling has been accentuated by work this week on three short units about computer vision, some of which covered the basic optics I learned through childhood tinkering with science kits and home-made cameras. So, a lighter week, and a slightly dissatisfying one: in a future course perhaps a bit more is needed on how the underlying principles of computer vision are applied in practice?
3. For the first time I contributed a question, and voted on a few others, in advance of this week's "office hours" session, mainly to see how the process works. Questions are gathered from students using Google Moderator (if you have or create a Google account you should be able to review them, and the way the process is being organised, at AI Class Office Hours for Week 7).
4. On the assumption that the number of active students on the AI course is now down to about 25,000, only a small proportion (less than 2%) of participants are actively engaged with the office hours process. (Shades of Nielsen's 90:9:1 rule.) But this did not make the video recordings of Norvig and Thrun responding to questions any the less useful. See for example Thrun talking about the relationship between problem solving and reading ("Don't read too much: and you can quote me on that"), or Norvig and Thrun talking about the differences between satisficing and optimising, with specific reference to the creation of the AI course itself.
5. If there is a problem with this approach to office hours, it is that students have no real control over which questions the teachers respond to. In point of fact, the Google Moderator ZIP/CSV file of questions and votes shows that a substantial cluster of the most-voted-on questions was ignored in this session:
- Can you provide some statistics about ai-class? Like, how many active students are there or how many of us switched from advanced to basic track etc... [59 votes 3/106];
- Can I (we) please see the distribution of scores for the Midterm? I would like to know if my score was very good or if the questions were relatively easy? [54 votes 5/106];
- How many people are still on the advanced track? What are the median, mean and variance for the homework and the midterm exams? Are these scores in line with your expectations? [48 votes 6/106];
- Is there any chance we could get some statistics on course participation? [41 votes 8/106];
- Can we see a distribution of grades for midterm? [21 votes 16/106];
- How about giving us stats on the midterm? I'm sure that you do this for your regular classes, so why not for this one? [20 votes 20/106].
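The ranking above came from eyeballing the Google Moderator export by hand, but the same tally can be produced programmatically. Here is a minimal sketch of the idea; the column names ("question", "votes") and the sample rows are assumptions for illustration, since the real export's schema may differ:

```python
import csv
from io import StringIO

# Stand-in for the Google Moderator CSV export; in practice you would
# open the file extracted from the downloaded ZIP instead.
sample = """question,votes
"Can you provide some statistics about ai-class?",59
"Can I (we) please see the distribution of scores for the Midterm?",54
"How many people are still on the advanced track?",48
"""

# Parse the rows, then rank questions by vote count, highest first.
rows = list(csv.DictReader(StringIO(sample)))
ranked = sorted(rows, key=lambda r: int(r["votes"]), reverse=True)

for rank, row in enumerate(ranked, start=1):
    print(f"{rank}/{len(ranked)} [{row['votes']} votes] {row['question']}")
```

Swapping `StringIO(sample)` for `open("moderator_export.csv")` (a hypothetical filename) would run the same tally over the full 106-question file.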
6. I've no objection at all to these information-seeking questions not being answered through the valuable and discursive medium of a video recording; but I do think that some of the data sought should be made available, or the reasons for not doing so stated.
7. Finally, to the screenshot at the top of this post. It shows how a plug-in for the Chrome browser integrates real-time student discussion on the OSQA-based question-and-answer site for the course straight into the space below each course video. The plug-in is the work of Filip Wasilewski, and I encourage you to read his explanation of its development.