Thursday, December 15, 2011

Stanford classes -- what I'd do next


Now that the ML and AI courses are at an end, here are some of the things I would do moving forward.

Both courses already have a basic track, where students just watch the lectures and do the in-lecture quizzes, and an advanced track, where students also complete weekly assignments. I think we can be certain that there were students who just watched a few lectures, many who completed every assignment, and others who fell at all points in between.

On top of this, there were students who made use of the online discussion groups and those who didn't.

This means there was a wide range of experiences to be had.

With this in mind, here's what I would do.

Suggestions dealing with basic site content:

More practice problems, particularly in AI


While there were in-video quizzes each week that provided practice, it would have been nice if there were a link to additional optional problems (preferably with solutions available). This would be easy to implement. The ML class would also benefit from this, but since you could retake the weekly assignments and get some variation on the questions, it wouldn't be as necessary.

Better reference materials

Reference sections would be nice as well. The AI staff posted related sections from the text, but there were a number of great online resources I discovered by reading the discussion groups. Perhaps some of these could be linked to from the main site.

Grading


I'm pretty sure that having weekly assignments that were actually graded helped keep me honest. The fact that the ML course allowed as many submissions as you wanted while the AI course was one-shot didn't matter; I put the same effort into both classes. In a way, I preferred the ML course. I was frustrated a few times when I mis-entered something on a homework or forgot to convert units and got a lower grade than I thought I should have (I know, the grade doesn't really count).

I'd actually kind of like the AI course to move more towards the ML class model. The grades don't really count for anything anyway, and even if they did, there are so many X-factors.

For example, if someone has to do the weekly assignment early due to obligations later in the week, he or she can't make use of clarifications. Likewise, students probably had widely varying amounts of time to dedicate to the course. Contrast that with the traditional undergrad, who probably has a workload similar to that of the other people in his or her classes. In the ML class, none of it really mattered.

Office Hours:


I wasn't a huge fan of the office hour questions in the AI class, but I very much liked the idea of seeing the profs directly answering weekly questions; it helped connect the instructors and the class. This was lacking in the ML class and should be added.


On running the class in the future:

What made these classes different from other online lectures was that these were "live," with a staff releasing new content, opening and closing assignments, and adjusting as the course progressed. Each class also had a large number of people taking it at the same time. That's far different from, say, someone arbitrarily deciding to watch videos from an OpenCourseWare course.

I'd like to believe that the live staff, real deadlines, and large cohorts had a significant psychological effect. I've started online courses in the past but rarely finished them. I think the weekly deadlines and "live" aspect of the course got me to start early each week and forced me to stay up to date.

With this in mind, Stanford could just run the courses again in a similar manner, possibly with someone else acting as "instructor" to field office hours and oversee the course.

In addition, I'd allow people to take the courses in the following ways:

Solo:


Since many people probably didn't avail themselves of the discussion groups, there's no reason not to allow someone to start at any time. All that would be needed is the ability to have them submit projects, quizzes, etc. If the system could do that, anyone could take the course at any time, albeit without interaction with others.

Cohort:


People could sign up with a start date or a number of students in mind. When that's reached, a cohort group can start the class. The discussion pages could be modified so that each cohort goes to its own discussion page, and the system can dole out lectures and assignments on a predetermined schedule. This would allow the course to start at a range of times while making sure that students had a community of learners to support each other via the discussion groups.

Facilitated:


Similar to Cohort, but someone would sign up as a facilitator. They would moderate the discussion group and control the flow of lectures and assignments. There could even be a way of "licensing" facilitators so they could run official versions of the classes. This way, a local group or school could run the class on its own schedule.


So, there you have it: how I'd modify Stanford's great educational experiment. Next time, I'll share my thoughts on online education and how it's (mis)used in our high schools.

Thursday, December 8, 2011

ML and AI Courses - how they were taught


This is the first in a three-part series.

Part 1 covers my take on how the courses were presented.
In Part 2, I'll discuss my take on how to improve the experience.
And finally, in Part 3, we'll look at online education with an emphasis on the high school market.

As some of you know, I've been taking the online Machine Learning and Artificial Intelligence courses offered by Stanford this semester. I took my AI class a hundred years ago, and I never formally studied ML, so I figured this would be a fun way to keep current.

Lots of people have already "reviewed" the courses, compared the instructors, assignments, and what have you. Now that the courses are almost over, I thought I'd try to look at it a little differently, wearing my hat as a high school CS educator rather than just a consumer.

I've enjoyed both courses tremendously and I'd like to thank everyone involved in making them available to the public.

Teaching style:


Every teacher has their own style.  Here's my take on our three instructors. I don't think any one style is universally better than another; rather, different styles speak to different students.

Peter Norvig: 

While watching Professor Norvig's videos, I felt that he was the learned sage imparting information. He's the wise man in the village that everyone turns to for answers.

Andrew Ng:

I felt like I was with a tutor or a coach: everything was gently presented, and at the end of each lecture I looked back and said, "Wow, I got all of that; it made sense." Since he was the only lecturer for the ML class, I'll explain in more detail in the next section.

Sebastian Thrun:

I can't come up with an analogy for Professor Thrun, but I could feel him saying, "Let's try something neat, make some mistakes, explore, and learn a whole bunch as a result." It took a while to get used to this, particularly when being asked questions before being given enough information to approach them. Once used to the technique, however, I really enjoyed his approach to teaching.

Conclusions: I would love the opportunity to sit in on live classes with all three, as sitting in on a class can be very different from watching a video, but being on the east coast, I don't think that will happen any time soon.


Lecture style:


In the ML lectures, Prof. Ng gently guided the viewer through the topics, generally first describing the various parts of the topic in question and then bringing it all together, completely describing the algorithm or technique.

There are points in the lectures where Prof. Ng states that the material is hard and that he had a tough time with some of it. This empathy and his assurances go a long way. I found the lectures easy to absorb and generally didn't have to think too hard. By itself this might have limited the educational experience, but combined with the assignments, it worked great.

The AI class had a different approach. The class was frequently tasked with solving problems before material was presented. This turned me off early on. As the class progressed, the professors started to emphasize that your quiz scores didn't matter (they appeared on the website but weren't counted in the final grade, not that the final grade mattered anyway) and that these questions were meant to get you thinking about the topic more deeply. Once I started looking at the approach from this point of view, I enjoyed the class much more.

That said, I found the ML class lectures much more self-contained; in the AI class, I found myself looking for additional resources to learn the "base" material at times.

The AI lectures forced me to think more than the ML lectures did, which is probably a good thing, since there were no programming projects to take up the slack.

Conclusions: Styles differ, but both can be effective. I could make as much or as little a mental effort as I wanted for the ML class, and I'd get out of it what I put in. The AI class required more effort to get anything out of it; its approach forced you to think, where the ML class encouraged you to think. In the end, I put comparable amounts of time into both and got about the same amount out of each.

Homework and Projects:


Both classes had weekly homework assignments. Without these, I would probably have slacked off on the videos.

In the AI class, these were submitted over the course of the week and then graded. Results and explanation videos were provided after grading was done. The process was fine, but I found the interface occasionally frustrating. There were some complaints on the message boards about losing points due to mis-entry or insufficient accuracy of answers. I had a few problems with both, but since I wasn't obsessed with getting a perfect score, it didn't bother me too much.

I'm not sure how great the assignments were as assessments, but attempting them and then watching the video explanations turned out to be a strong pedagogical approach. I would recommend including the explanation videos in the regular sequence for the in-lecture quizzes. I frequently gleaned a tidbit or two from them even when I answered the questions correctly.

The only downside to the AI class quizzes and homeworks is that they were all in video form. A PDF of the midterm was published, and something similar, at least for the weekly homework assignments, would be a plus.

The ML class also had weekly assignments. They were in the form of an interactive five-question quiz. You could attempt them up to 100 times, and your top score would count towards your grade.

The real value added by these assignments was the explanations provided when you answered a question wrong. There were even a couple of times I answered a question or two incorrectly on purpose to see the explanations. This style of assessment provided a feedback loop that could really help students make sure they understood the work.

The one thing the AI class lacked that the ML class included was programming assignments. That was probably a good thing for me, since I don't think I would have had the time to complete both courses with that added burden. That said, I loved the ML class programming assignments.

For the most part, they were extremely well constructed, stepping the student through all of the week's topics. By the end of each project, we had a working system and a good understanding of the week's concepts. You could take shortcuts and finish the assignments by merely copying and coding up formulas, but if you did it right, you'd learn a lot.

The only assignment that I felt was less than stellar was the SVM project. Even then, it had redeeming features. For part of the project we had to process emails and build a table of word counts. Not directly related to SVMs but something that's frequently done with data to be processed and therefore still worthwhile.

Conclusions: The programming projects really reinforced the lecture content in the ML class, and I would imagine that adding them to the AI class would benefit students. Even without them, one could go to the actual Stanford class's website and work on its projects.

Other random thoughts:


Both courses used the website, email, and Twitter to periodically communicate information, but the AI class did one thing the ML class didn't: it periodically sent messages of congratulations and encouragement. The staff also repeatedly mentioned how well we were all doing in the lectures and in the office hours. Prof. Ng also provided encouraging words, but they seemed more self-contained and generic.


On the other hand, I wasn't happy with the large number of hints and deadline extensions that the AI class offered. I felt that it rewarded people who left things for the last minute and gave them an advantage over students who were more diligent, or who had to complete the week's work early and could not take advantage of the last-minute hints and extensions. Ultimately it doesn't matter, but that's the type of thing that pushes my buttons.

Conclusions: Again, both courses were great, but the AI course seemed to do a better job of connecting with the class, that is, making me feel like I was part of the class rather than just watching.


Wow, that was long. I hope someone finds this interesting. In the next installment, I'll talk about what I would do if I were moving these projects ahead.


Saturday, December 3, 2011

Where's Waldo - Text style



Ok, it's a word search.

We're always looking for interesting applications to build lessons around. Over the years, I've tried different things when teaching two-dimensional arrays: simple game boards, representing a crossword puzzle, tables of various sorts, etc.

This year, JonAlf, one of my amazingly talented colleagues, decided to go with building a word search. I decided to steal the idea. It's a great one.

I thought I'd use this post to go through the project and why I like it.

Ultimately, the students end up with a program that will generate an n by m word search filled with random words from a dictionary. We gave the kids a skeleton of the base class. The only actual code we had to supply was the method that loaded a dictionary file into memory. You can check out the assignment here and the finished code here (we updated the repository as the project developed).
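For reference, a loader like that can be very simple. This is just a minimal sketch of mine, assuming a plain text dictionary file with one word per line; it's not necessarily the code we handed out:

import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.Scanner;

// Sketch of a dictionary loader: read one word per line into a list.
// The method we actually supplied may differ in name and details.
public static ArrayList<String> loadDictionary(String filename) throws FileNotFoundException {
    ArrayList<String> words = new ArrayList<String>();
    Scanner in = new Scanner(new File(filename));
    while (in.hasNext()) {
        words.add(in.next());
    }
    in.close();
    return words;
}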


The first parts of the project are pretty mundane. The kids write a couple of constructors and toString, basically just practice traversing a 2D array. The project starts to get interesting in part 2, when they write the methods that add words into the grid. First horizontally:
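The horizontal version comes out something like this. It's my reconstruction rather than the exact repo code; I'm assuming a char[][] grid field and an EMPTY marker for unfilled cells:

// Sketch: add a word left to right, starting at (row, col).
// Assumes a char[][] grid field and an EMPTY constant for blank cells.
public boolean addWordHorizontal(String word, int row, int col) {
    if (col + word.length() > grid[0].length) {
        return false; // the word runs off the right edge
    }
    // Each cell must be empty or already hold the matching letter.
    for (int i = 0; i < word.length(); i++) {
        char cell = grid[row][col + i];
        if (cell != EMPTY && cell != word.charAt(i)) {
            return false;
        }
    }
    // Safe to place the word.
    for (int i = 0; i < word.length(); i++) {
        grid[row][col + i] = word.charAt(i);
    }
    return true;
}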



After they write the method to add words vertically, we can start to refine things. We notice that the routines are essentially the same. The only difference between adding a word horizontally and vertically is what we add to the row and column each time: for one, there's a column delta of +1; for the other, a row delta of +1. Further, they realize that adding diagonal words just needs both deltas. This leads us to factoring out the common aspects of the code and writing something like:
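(Again, a sketch with the same assumed grid field and EMPTY marker; the version in our repo may differ.)

// Sketch: one method for all orientations. deltaRow and deltaCol are
// each -1, 0, or +1; (0, 1) is horizontal, (1, 0) is vertical, and
// (1, 1) is a down-right diagonal.
public boolean addWords(String word, int row, int col, int deltaRow, int deltaCol) {
    int endRow = row + deltaRow * (word.length() - 1);
    int endCol = col + deltaCol * (word.length() - 1);
    if (row < 0 || row >= grid.length || col < 0 || col >= grid[0].length
            || endRow < 0 || endRow >= grid.length || endCol < 0 || endCol >= grid[0].length) {
        return false; // the word would run off the grid
    }
    // Each cell must be empty or already hold the matching letter.
    for (int i = 0; i < word.length(); i++) {
        char cell = grid[row + deltaRow * i][col + deltaCol * i];
        if (cell != EMPTY && cell != word.charAt(i)) {
            return false;
        }
    }
    // Place the word.
    for (int i = 0; i < word.length(); i++) {
        grid[row + deltaRow * i][col + deltaCol * i] = word.charAt(i);
    }
    return true;
}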



All of a sudden, they've written one piece of code that can add words in 8 orientations.

After filling the rest of the grid with random letters, we turn our attention to building a random puzzle.

This part of the project involves using an ArrayList of words. Our students frequently mix up array and ArrayList notation early on, so a project that uses both, in clearly delineated areas, helps the students get more comfortable with each.

For this piece, the code is again straightforward. Students run a loop that gets a random word from our dictionary and tries to place it in our grid at a random location with a randomly chosen orientation. We get to see another nice little refinement when we move from the typical first take at building a random puzzle, which uses a three- (or more) way if statement to select how to add words:
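(A sketch from memory; attempts, rand, and dictionary are placeholder names, and addWordVertical and addWordDiagonal are the other per-orientation methods mentioned above.)

// First take: pick one of the orientation-specific methods with an if chain.
for (int i = 0; i < attempts; i++) {
    String word = dictionary.get(rand.nextInt(dictionary.size()));
    int row = rand.nextInt(grid.length);
    int col = rand.nextInt(grid[0].length);
    int orientation = rand.nextInt(3);
    if (orientation == 0) {
        addWordHorizontal(word, row, col);
    } else if (orientation == 1) {
        addWordVertical(word, row, col);
    } else {
        addWordDiagonal(word, row, col);
    }
}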



to using our more general addWords method described above:
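(Same placeholder names as in the sketch above.)

// Refinement: pick the two deltas at random and let addWords handle
// all eight orientations, skipping the degenerate (0, 0) case.
for (int i = 0; i < attempts; i++) {
    String word = dictionary.get(rand.nextInt(dictionary.size()));
    int row = rand.nextInt(grid.length);
    int col = rand.nextInt(grid[0].length);
    int deltaRow = rand.nextInt(3) - 1; // -1, 0, or 1
    int deltaCol = rand.nextInt(3) - 1;
    if (deltaRow != 0 || deltaCol != 0) {
        addWords(word, row, col, deltaRow, deltaCol);
    }
}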




When we were all done, we had some time to project the word search on the board, and fun was had by all.

Peter, another one of our CS teachers, had a great suggestion that I think I'll try: start a competition to have the students modify the program so that it generates as densely packed a word search as possible (giving higher scores first for longer words, then for number of words).

Between the way the project broke down, the topics covered, and the little refinements, I really enjoyed working with my classes on this project -- I'm hoping they enjoyed it as well.