Word-cloud by Sea Otter
(Other posts tagged ai-course. Post originally published on 20/12/2011, with paragraph one updated to incorporate completion numbers kindly provided by David Stavens of Know Labs, and a new concluding sentence to the final paragraph.)
Along with just over 20,000 others (some 3,000 fewer than had taken the midterm exam) I completed the final examination for the free online Introduction to Artificial Intelligence course taught by Sebastian Thrun and Peter Norvig. Here is my final participant's report from the course.
1. The final section of the course concerned Natural Language Processing. I've had an interest in machine translation for some years [e.g.], and it was this interest that initially made me aware of Peter Norvig's work. So for me the best part of the course came last. If you want to gain an underlying appreciation of the science of natural language processing, it will take you a couple of hours to work through the course's 42 short videos about NLP, starting here. It is probably worth doing despite a certain amount of dependency on earlier sections of the AI course.
2. The course has been mercifully free of programming assignments, being capable of completion with pen and paper, a calculator and, on a couple of occasions, a slide rule unused since 1973. To conclude the NLP unit there were two optional programming problems, both of which could also be tackled without programming. I did the second of the problems (recovering a message from a shredded version) using scissors and adhesive tape:
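For the curious, the shredded-message problem can of course also be attacked in code. The sketch below is my own illustration, not the course's reference solution: it cuts a block of text into two-character vertical strips and reassembles them greedily using a crude boundary-plausibility score. A serious solution would score boundaries with real letter-bigram statistics drawn from an English corpus.

```python
def shred(lines, width=2):
    """Cut a block of text into vertical strips `width` characters wide."""
    n = max(len(line) for line in lines)
    padded = [line.ljust(n) for line in lines]
    return [[line[i:i + width] for line in padded]
            for i in range(0, n, width)]

def join_score(a, b):
    """Crude plausibility of placing strip b immediately after strip a:
    reward letter-letter boundaries, tolerate spaces."""
    score = 0
    for left_piece, right_piece in zip(a, b):
        left, right = left_piece[-1], right_piece[0]
        if left.isalpha() and right.isalpha():
            score += 2
        elif left == " " or right == " ":
            score += 1
    return score

def reassemble(strips):
    """Greedy reconstruction: try each strip as the left edge, then
    repeatedly append whichever remaining strip scores best."""
    best_chain, best_total = None, -1
    for start in range(len(strips)):
        chain = [strips[start]]
        pool = strips[:start] + strips[start + 1:]
        total = 0
        while pool:
            nxt = max(pool, key=lambda p: join_score(chain[-1], p))
            total += join_score(chain[-1], nxt)
            chain.append(nxt)
            pool.remove(nxt)
        if total > best_total:
            best_chain, best_total = chain, total
    return best_chain
```

The greedy search is not guaranteed to find the true ordering on real text, which is exactly why the problem is interesting: the quality of the boundary score, not the search, does most of the work.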
3. The final examination took place over the weekend, with a one-day extension granted because the course web site was subjected to a denial of service attack in the hours leading up to the submission deadline. In the run-up to the deadline the unofficial discussion forum to which the course linked was busy, as were the volunteer moderators, whose frustration at users' attempts to find clues to how to answer the exam questions was apparent. (I've got mixed feelings about "help" in an open book exam. Why is finding out how to answer a question through searching and reading any different from finding out through discussion?)
4. Ambiguity is obviously a known and general issue in exam questions, made worse for students whose first language is not the language in which the exam is written, and this showed up in the questions posed in the forum. In a production version of a course like this you would expect more care with the wording of exam questions, and a more systematic process for catching ambiguity and issuing pre-deadline guidance to students. (A more expensive alternative would be proper pre-release testing of all questions, with a group of users whose first language is not English.) Either approach would avoid the need to make post-exam announcements about acceptable alternative solutions to questions. Alongside this there are plenty of difficult-to-tackle machine-marking issues that would ideally need to be addressed in a production version of the course, and AI surely has a part to play here in helping to identify (and perhaps penalise less harshly) systematic errors, or the kinds of errors that can be revealed if a candidate "shows their working".
5. I'll end with this screenshot of a posting by a student on a pre-Web online course I helped develop and run nearly 20 years ago. Compared with the AI course:
- the technology was different (dialup using 2400 baud modems connecting to a "point of presence" run by the phone company, connecting through to a PortaCOM conferencing server in Aarhus, Denmark);
- the marginal cost per student hour on line was two or three orders of magnitude greater;
- the content was different (texts emailed to students through the conferencing system);
- the learning process was different (small group online discussions using an asynchronous text-based conferencing system, rather than learning from video-clips of experts talking and explaining; and there was no assessment);
- the subject matter was different (a course for trade union representatives about European integration);
- the teachers were bog-standard practitioners rather than people with stellar reputations in their field;
- the number of students was four orders of magnitude smaller (15 students as compared with, initially, >150,000);
- the "staff student ratio" was four orders of magnitude larger.
But the underlying sense of connection between students and teachers felt similar; and the way in which education would be changed irrevocably by the Internet was already apparent. I feel exceptionally lucky to have had the chance to take part in this course, which, if the data now collected is analysed by Thrun, Norvig, and the others involved, will genuinely prove to have been the "bold experiment" that was promised at the start.