Wednesday, December 31, 2014

2014 in Review

As 2014 comes to an end, I've decided to continue my newfound tradition of summarizing my thoughts in a "year in review" post.  So here are some thoughts on academia, machine learning, and theory, again in no particular order.
  • Every company seems to have its own superstar leading a neural networks effort, and deep learning keeps making impressive advances.  My hope is that a nicer learning theory gets developed around this topic.
  • Computer science enrollments continue to soar, and the "sea change" may be here to stay. It's becoming a better and better time to study computer science.
  • On the other hand, research labs have continued to be vulnerable.  Perhaps we'll see the reverse trend, with academic jobs temporarily making up for losses in the research job market.
  • It's an interesting time for online education, which has had some setbacks recently.  Yet, it seems even Yale would rather stream Harvard CS50 than hire enough faculty to teach its introductory computer science course.
  • With the release of The Imitation Game, more people than ever will know about Alan Turing.  But will their impressions be accurate?
  • My favorite "popular" AI article this year was on computer Go.  My guess is that by 2020, computers will be able to beat the best humans.
  • After teaching a learning theory course last semester, next semester I'll be teaching a graduate-level "Foundations of Data Science" course, loosely following Hopcroft and Kannan's new book.  I'll have to make some tough choices about what material to include and what to skip (a small sketch of the book's flavor follows this list).  Any thoughts?
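
To give a quick taste of the book's flavor, here is a small numpy sketch (my own toy illustration, not material from the book or the course) of one of its recurring themes: independent random unit vectors in high dimension are nearly orthogonal, with pairwise inner products concentrating at scale 1/sqrt(d).

    import numpy as np

    # Toy illustration: random unit vectors in R^d are nearly orthogonal
    # when d is large -- a classic high-dimensional phenomenon from the
    # book's opening chapters.
    rng = np.random.default_rng(0)

    def random_unit_vectors(n, d):
        """Draw n points uniformly at random from the unit sphere in R^d."""
        x = rng.standard_normal((n, d))
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    for d in (3, 100, 10000):
        v = random_unit_vectors(200, d)
        dots = v @ v.T                        # all pairwise inner products
        off = dots[~np.eye(200, dtype=bool)]  # drop the diagonal (self-products)
        print("d=%5d  max |<u,v>| = %.3f  (compare 1/sqrt(d) = %.3f)"
              % (d, np.abs(off).max(), 1 / np.sqrt(d)))

Running it, the largest off-diagonal inner product shrinks roughly like 1/sqrt(d) as the dimension grows, which is the concentration-of-measure story the early chapters build on.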

2 comments:

  1. I was running a reading group two years ago based on the Hopcroft-Kannan book. Back then it was called "Computer Science for the Information Age". We covered the first 7 chapters during one semester and I liked it a lot. If I had to do it again, I would probably skip the G(n,p) chapter in favor of covering more of the rest.
