We try to follow a flipped-classroom model wherein students study traditional lecture content and complete straightforward self-guided exercises before our class meeting times. During class, we’ll review trouble spots and expand upon some of the more subtle aspects covered in at-home assignments. In addition, we use our limited time together to take on more challenging lab exercises and to hold small-group discussions on material not easily reduced to assigned programming exercises or readings.

  • Assignments will appear on our course website several weeks in advance
  • Homework assignments are listed with the class date they are due
  • Larger homework assignments are generally scheduled over the weekends
  • DO NOT LET YOURSELF GET STUCK.  If you are unclear about any assigned homework, please take notes, bring questions to the following class, and continue through any difficult sections to the end of the assignment.
  • DO NOT FALL BEHIND.  This course is cumulative: most weeks depend upon concepts learned in prior weeks.  Please contact the instructors quickly to work around any obstacles you may encounter.
  • FRONT-LOAD WORK.  The first 4 full weeks are the most important in the class and will likely provide value far beyond it.  Please pay particular attention to homework and attendance in these critical weeks.

 

PART I (Weeks 1-5): Python Essentials

 

Week 1 – Introduction to AI for Humanities

Friday, Aug 30th:  

  • Introduction and Outline of Course
  • Complete all assignments listed under Monday, Sep 3rd over the weekend and be prepared to discuss, use, and hand in papers as listed.

 

Week 2 – AI Models and the Limits of AI

Monday, Sep 3rd: (to be completed before the start of class)

Wednesday, Sep 5th:

Friday, Sep 7th:

Sunday, Sep 9th:

  • 2-5pm O’Connor House (same room as regular class)
  • Python Tutorial Sessions
  • Laptop Configuration – especially (but not only) for Windows 10 users

 

Week 3 – Digital Humanities and Stylometry

Monday, Sep 10th:

Wednesday, Sep 12th:

Friday, Sep 14th:

 

Week 4 – SciPy Stack: Wrangling Information into Data

Monday, Sep 17th:

  • NOTE:  Prof Elkins notes that these two readings are challenging for those without significant programming experience.  Read over these two library descriptions as best you can. We’ll go over programming exercises in class that demonstrate and reinforce the main features of numpy and pandas used in machine learning.  Turn in a printout of the last page on Moodle.  A minimal sketch of those features appears after this list.
  • Numpy Quickstart Tutorial, SciPy.org
  • 10 Minutes to Pandas, PyData.org
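For orientation, here is a minimal sketch, ours rather than the tutorials’, of the handful of numpy and pandas features that come up most in machine learning: arrays, vectorized math, and tabular selection/grouping. The authors and word counts are made-up toy data.

    # A minimal sketch (not from the assigned tutorials) of the NumPy/pandas
    # features most used in machine learning. The DataFrame contents are toy data.
    import numpy as np
    import pandas as pd

    # NumPy: homogeneous n-dimensional arrays with vectorized operations
    X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    print(X.shape)         # (3, 2)
    print(X.mean(axis=0))  # column means, computed without a Python loop

    # pandas: labeled tables built on top of NumPy arrays
    df = pd.DataFrame({"author": ["Austen", "Melville", "Austen"],
                       "word_count": [120000, 209000, 160000]})
    print(df[df["author"] == "Austen"])               # boolean-mask row selection
    print(df.groupby("author")["word_count"].mean())  # split-apply-combine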

Wednesday, Sep 19th:

Friday, Sep 21st:

 

Week 5 – Visual Storytelling

Monday, Sep 24th:

Wednesday, Sep 26th:

Friday, Sep 28th:

 

PART II (Weeks 6-7): Machine Learning with Scikit-Learn

 

Week 6 – Machine Learning I, What is Intelligence?

Monday, Oct 1st:  Administrative:

Programming Assignment:

  • Open this Pokemon Partial Notebook and save it to your local Google Drive.  This is the notebook we worked halfway through in class on Friday.  Edit the rest of the code cells in the notebook (replace the ‘???’ with the correct syntax) so it executes properly, and submit your final corrected Jupyter notebook via Moodle for credit.
  • If you have any problems, please email Prof Elkins (elkinsk@kenyon.edu) and let us know if you’d like to attend a special session this Sunday from 3-4pm in the Seminar Room in O’Connor House.

Reading/Video Assignments (~40mins total):

Don’t be intimidated: there are many links, but they are generally very short videos. The goal is to give you a high-level view of future directions in making Machine Learning/AI easier to use.  Don’t focus on the details; we’ll do that in class.  Instead, focus on the overall process of creating an ML/AI pipeline using these forward-looking tools.

Wednesday, Oct 3rd:

Friday, Oct 5th:

Listen to this podcast that defines the field of Computational Neuroscience and raises questions about how we model reality and the human mind.  Then compare this with the Science Wars described in the Wikipedia article.

Questions for Discussion

  • What was the dominant model of scientific explanation until recently?
  • How are the New Mechanists different (and what tradition do they come out of)?
  • In what two ways have computers affected neuroscience? (Include the definition of computational neuroscience.)
  • The philosopher John Searle notes that “simulated thunderstorms don’t grow wheat.” How is modeling thinking different if you accept both strands of computational neuroscience?
  • What is computational chauvinism?
  • What are the speakers’ thoughts about this when it comes to disciplinary independence?
  • Do philosophers who think about the mind need to understand the brain and “implementation”?
  • How might we (or might we not) decide whether a model is true?

Week 7 – Machine Learning II, Political, Economic, and Social Issues

Monday, Oct 8th:

Here is a link to The New York Times coverage of what is being compared to the Alan Sokal affair many years ago. They do a good job of representing the diversity of reactions to it:

  • Hoaxers Slip Breastaurants and Dog Park Sex into Journals, Jennifer Schuessler, NYTimes.com, 4 Oct 2018
  • Think about this NYTimes article with regard to our discussion on Friday and the question of Modeling Truth and Reality.  How is this the same as or different from the Replication Crisis in sciences like Psychology?

Wednesday, Oct 10th:

This week we begin to study ML at both a higher level of abstraction and with a more realistic degree of complexity.  Read through the following tutorials.  Do not be discouraged if you don’t understand everything discussed; that’s expected given this assignment.  Just persevere, skim over sections you don’t understand, and read through to the end to get the overall gist.  Come to class with specific questions.  We’ll walk through the theory and practice of decision trees, XGBoost, and overall stacked model architecture in class.
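To preview what we’ll walk through, here is a minimal sketch, not taken from the tutorials, of a single decision tree, a gradient-boosted model (the family XGBoost belongs to; scikit-learn’s GradientBoostingClassifier stands in for it here), and a simple stacked architecture, all on a built-in toy dataset.

    # A minimal sketch of a decision tree, boosting, and stacking in scikit-learn.
    # GradientBoostingClassifier is used as a stand-in for XGBoost.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    boost = GradientBoostingClassifier(random_state=0)

    # Stacking: a meta-learner is trained on the base models' predictions
    stack = StackingClassifier(estimators=[("tree", tree), ("boost", boost)],
                               final_estimator=LogisticRegression())
    stack.fit(X_train, y_train)
    print(stack.score(X_test, y_test))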

[NOTE: STUDENT PRESENTATIONS BEGIN MID-WEEK]

 

PART III (Week 8): Computational Linguistics and Vectorized Text

 

Week 8 – Computational Linguistics and Teaching Machines to Read

Monday, Oct 15th

We end our coverage of Machine Learning with an overall recap as well as a special focus on Decision Trees.  Beyond individual trees, we’ll also explore ways we can combine them into Random Forests or create ensembles with non-tree models. Finally, we ask whether ML is just glorified statistics and how complexity can perhaps help us understand types and degrees of ‘Intelligence’ in AI.
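As a pointer, here is a minimal sketch, ours rather than any assigned reading’s, of both moves: combining many trees into a Random Forest, and mixing tree and non-tree models into a voting ensemble.

    # A minimal sketch of today's recap: a Random Forest, plus an ensemble
    # that combines tree and non-tree models by majority vote.
    from sklearn.datasets import load_wine
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    X, y = load_wine(return_X_y=True)

    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    ensemble = VotingClassifier(estimators=[
        ("forest", forest),
        ("logreg", LogisticRegression(max_iter=5000)),
        ("nb", GaussianNB())])

    # Compare the forest alone against the mixed ensemble via cross-validation
    print(cross_val_score(forest, X, y, cv=5).mean())
    print(cross_val_score(ensemble, X, y, cv=5).mean())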

Wednesday, Oct 17th

Friday, Oct 19th

 

PART IV (Weeks 9-11): Neural Networks

Week 9 – Introduction to Neural Networks: The Basis of Intelligence?

Monday, October 22nd

  • Instead of a screenshot, complete and submit a hyperlink on the Moodle class webpage to your corrected/completed version of this Jupyter notebook: Introduction to Scikit-Learn recapping ML.  Remember to change the permissions on your local copy of this corrected Jupyter notebook so we can access it, verify execution, and grade it (grant read permissions to chunj@kenyon.edu and elkinsk@kenyon.edu).
  • Chapter 1:  What is Deep Learning?, Francois Chollet, Deep Learning with Python, Manning, Nov 2017

Wednesday, October 24th

Friday, October 26th

 

Week 10 – Machine Vision and CNNs, Neuroscience and Psychology of Perception

Monday, October 29th

(We suggest you watch the 2 videos on word vectorization, topic modeling and visualization first.  They review and expand upon concepts covered in both class and previous readings that will help you understand the Stanford Literary Lab Pamphlet.)

Wednesday, October 31st

Friday, November 2nd

NOTE:  We’ll be calling on students who have not participated as much on topics directly related to the readings.  Please come prepared to answer specific questions about, and offer interpretations of, the readings.

 

[NOTE: WEEKLY PROJECT LAB STARTS THIS WEEK]

Week 11 – Text Processing and RNNs, Theory of Mind and Consciousness

Monday, Nov 5th

This weekend’s Jupyter notebook assignment is a sample Image Classification model using the CNNs we just studied.  Sign up for a free account on Floydhub.com, which hosts the notebook we’ll use. Configure your computer not to go into sleep mode for at least an hour while you let the CNN model train on the dataset.

A Jupyter notebook titled ‘image-classification-demo‘ will automagically appear in your Projects when you create your account. Just open and run this notebook, reading through the text blocks/comments as you execute each code cell.  Don’t panic if you don’t understand everything; we’ll review the notebook in class. Again, DO read the textual explanations before each code block, as well as the ‘# comments’ lines within code blocks, to get a feel for what is happening.

This exercise will expose you to a more realistic CNN architecture (based upon Chollet’s Xception model) and illustrate the important concepts of transfer learning and photo data augmentation.  It will also give you a better feel for the computational complexity of DNNs: it will take approximately 30min-1hr to train (learning over 2 million trainable parameters in our model over 3 epochs, on only a fraction of the original dataset and classes, with a CPU).  Production models may take days or weeks to train depending upon the architecture/# of parameters, dataset size, required performance metrics, hardware (CPU/GPU/TPU+RAM) and decomposition for parallel execution.

Find a hyperlink to ONE dog photo on the Intertubes (don’t share images) and insert it into the second-to-last code block to have your trained model predict which of the 10 breeds it ‘thinks’ it is and with what probability. Email chunj@kenyon.edu with (a) the URL of your selected dog jpg, cut-and-pasted into the body of the email, and (b) the text output from your model’s prediction/probability. We’ll use these 2 pieces of information to verify assignment completion.
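For orientation before class, here is a minimal sketch, ours and not the FloydHub notebook itself, of the two techniques the exercise illustrates, written against the Keras 2.x API of that era; the directory path and the 10-breed head are placeholder assumptions.

    # A minimal sketch of transfer learning + data augmentation in Keras 2.x.
    # "dogs/train" and the 10-class output head are placeholder assumptions.
    from keras.applications import Xception
    from keras.layers import Dense, GlobalAveragePooling2D
    from keras.models import Model
    from keras.preprocessing.image import ImageDataGenerator

    # Transfer learning: reuse ImageNet-trained convolutional features
    base = Xception(weights="imagenet", include_top=False,
                    input_shape=(299, 299, 3))
    for layer in base.layers:
        layer.trainable = False  # freeze the ~20M pretrained parameters

    x = GlobalAveragePooling2D()(base.output)
    outputs = Dense(10, activation="softmax")(x)  # one output per dog breed
    model = Model(base.input, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # Data augmentation: random flips/shifts stretch a small photo dataset
    augment = ImageDataGenerator(rescale=1.0 / 255, horizontal_flip=True,
                                 width_shift_range=0.1, rotation_range=15)
    train = augment.flow_from_directory("dogs/train", target_size=(299, 299),
                                        batch_size=32, class_mode="categorical")
    model.fit_generator(train, steps_per_epoch=len(train), epochs=3)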

Wednesday, Nov 7th

Friday, Nov 9th

We began our study of Neural Networks with three basic building blocks of Deep Neural Networks: Feed Forward, Convolutional and Recurrent Neural Networks (FFN, CNN and RNN). Until now, our focus has been on higher-level abstractions and visualizations to broadly understand how each structure can be used and intuit how they operate.

Today we’ll go deeper and get into implementation details by exploring the Keras Deep Learning Library we use in most of our examples. We’ll explore the Keras API by examining simple FFN, CNN and RNN implementations. In doing so we’ll see common structural motifs and operational concerns. Equally important, we’ll highlight the practical decisions that contextualize each line of code we explore.
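As a preview of the skeleton we’ll trace, here is a minimal sketch of a Keras FFN on MNIST; the CNN and RNN examples we’ll examine follow the same define-compile-fit-evaluate motif with different layer types. The layer sizes here are our own arbitrary choices.

    # A minimal Keras FFN sketch: the define/compile/fit/evaluate motif
    # shared by the FFN, CNN and RNN examples we'll read in class.
    from keras.datasets import mnist
    from keras.layers import Dense
    from keras.models import Sequential
    from keras.utils import to_categorical

    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype("float32") / 255
    x_test = x_test.reshape(-1, 784).astype("float32") / 255
    y_train, y_test = to_categorical(y_train), to_categorical(y_test)

    model = Sequential([Dense(128, activation="relu", input_shape=(784,)),
                        Dense(10, activation="softmax")])
    model.compile(optimizer="rmsprop", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=2, batch_size=128)
    print(model.evaluate(x_test, y_test))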

PART V (Weeks 12-13): Reinforcement Learning and Generative Adversarial Networks

Week 12 – Reinforcement Learning, Independent Learning in an Incomplete World

Monday, Nov 12th

Today we have a number of generally short videos/readings.  We start with two very short videos illustrating the surprising ability of Reinforcement Learning (RL) to teach itself without human guidance or intervention.  Two longer videos outline RL in general and describe Monte Carlo Tree Search (MCTS), both key components in understanding AlphaGo Zero. A web tutorial walks us through using RL/Deep Q-Learning to master OpenAI’s pole-balancing game, which we’ll see in class.  Finally, we look at a study that contrasts how AI and humans play video games.

The Deep Q-Learning link below can get a bit technical, but read on through and focus on the general architecture it uses to independently observe, learn and master tasks in its environment. Think about parallels with how humans learn through interactions with their environment.
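To make that observe-learn loop concrete, here is a minimal tabular Q-learning sketch, ours rather than the tutorial’s, on Gym’s small FrozenLake environment; the tutorial uses the pole-balancing game with a neural network in place of the table, but the update rule is the same. It assumes the classic Gym API of that era.

    # A minimal tabular Q-learning sketch using the pre-2021 Gym API.
    # A DQN replaces the Q table with a neural network; the update is the same.
    import random

    import gym

    env = gym.make("FrozenLake-v0")  # small discrete environment, so a table fits
    Q = [[0.0] * env.action_space.n for _ in range(env.observation_space.n)]
    alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration

    for episode in range(5000):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore
            if random.random() < epsilon:
                action = env.action_space.sample()
            else:
                action = max(range(env.action_space.n), key=lambda a: Q[state][a])
            next_state, reward, done, _ = env.step(action)
            # Q-learning update: move Q(s, a) toward reward + discounted future value
            future = 0.0 if done else gamma * max(Q[next_state])
            Q[state][action] += alpha * (reward + future - Q[state][action])
            state = next_state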

Wednesday, Nov 14th

  • Move 37 Explained, Siraj Raval, Nov 2018 (11:35)
  • Why Has Critique Run Out of Steam?, Bruno Latour, Critical Inquiry, Winter 2004
    • BRING TO CLASS ANSWERS TO THE FOLLOWING QUESTIONS:
      • Why has critique run out of steam?
      • What is Critical Gesture Move One?
      • What is Critical Gesture Move Two?
      • Why is the critic always right?
      • What is the critical trick?
      • How do we move from matters of fact to matters of concern?
      • (bonus for humanists) What does Heidegger have to do with all of this?
  • (read up to, but not including, ‘Even a Good Reward, Local Optima Can be Hard to Escape’) Deep Reinforcement Learning Doesn’t Work Yet, Alex Irpan, Jun 2018
  • (optional) Spinning Up as a Deep RL Researcher, OpenAI.com, Oct 2018

Friday, Nov 16th

 

THANKSGIVING BREAK:  Monday, Nov 19th – Friday, Nov 23rd

 

Week 13 – Generative Models and GANs, Creating Art

Monday, Nov 26th

With the cancellation of class the previous Friday, we’ll have to find a way to make up our DH tools and techniques seminar.  Nonetheless, we want to keep our focus on AI and DNNs and continue on to our last DNN techniques, which are all related to generative deep learning.

Your assignment over the long Thanksgiving break is to read one chapter on Generative Deep Learning (you can access the entire book via LBIS.kenyon.edu).  This is a great chapter that reviews and extends what you’ve learned about FFN/FCN, CNN and RNN/LSTM models with techniques like style transfer as well as VAE and GAN architectures.  There are a few important terms/concepts in this reading that we have not encountered yet, so we will review them in class.  Do not be discouraged; read through the entire chapter and bring any questions you have to class.  You will need to be familiar with much of this information to complete our final class programming assignments on Deep Neural Networks.
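As a preview of the chapter’s central idea, here is a minimal GAN sketch, ours rather than Chollet’s code, in the Keras 2.x style: a generator learns to fool a discriminator, and the two are trained adversarially. The layer sizes and the flattened 28x28 ‘image’ shape are toy assumptions.

    # A minimal GAN sketch (toy sizes; not Chollet's chapter code).
    import numpy as np
    from keras.layers import Dense, LeakyReLU
    from keras.models import Sequential
    from keras.optimizers import Adam

    latent_dim = 32

    # Generator: maps random noise to a fake "image" (flattened 28x28)
    generator = Sequential([Dense(128, input_dim=latent_dim), LeakyReLU(0.2),
                            Dense(784, activation="tanh")])

    # Discriminator: classifies inputs as real (1) or generated (0)
    discriminator = Sequential([Dense(128, input_dim=784), LeakyReLU(0.2),
                                Dense(1, activation="sigmoid")])
    discriminator.compile(optimizer=Adam(lr=0.0002),
                          loss="binary_crossentropy")

    # Adversarial model: freeze the discriminator, then train the generator
    # to push the discriminator's output toward "real"
    discriminator.trainable = False
    gan = Sequential([generator, discriminator])
    gan.compile(optimizer=Adam(lr=0.0002), loss="binary_crossentropy")

    # One adversarial step on a batch of pure noise
    noise = np.random.normal(size=(64, latent_dim))
    fake_images = generator.predict(noise)
    discriminator.train_on_batch(fake_images, np.zeros((64, 1)))  # learn "fake"
    gan.train_on_batch(noise, np.ones((64, 1)))  # generator aims for "real"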

Wednesday, Nov 28th

Friday, Nov 30th

  • MIT Quest for Intelligence (4:19 Video) (Why study AI)
  • Finalize Outline of Class Project: Research Guidelines, Resources and Examples
  • Catch up on/review previous readings on RL/GAN architecture/code, time permitting

 

PART VI (Weeks 14-15): Genetic Algorithms and Probabilistic Models

 

Week 14 – Core Concepts, Optimization and Visualizations of Neural Networks with Keras/TensorBoard

In our second-to-last week, we focus on core neural network concepts, the Keras/TensorBoard framework, and optimizations/visualizations at both the code and theory levels.  This will reinforce and make concrete many of the concepts we introduced earlier and leave you with a stronger working knowledge of neural networks.  The lectures this week will move quickly and assume the content/ideas contained in the readings assigned below. Together, the readings and lectures this week should boost your confidence and sense of mastery of Neural Networks.
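To make the TensorBoard half concrete, here is a minimal sketch, with a made-up toy model and dataset, of logging Keras training metrics to TensorBoard via a callback.

    # A minimal Keras + TensorBoard sketch: log metrics during training,
    # then inspect them in the browser. Model and data are toy stand-ins.
    import numpy as np
    from keras.callbacks import TensorBoard
    from keras.layers import Dense
    from keras.models import Sequential

    x = np.random.random((1000, 20))
    y = (x.sum(axis=1) > 10).astype("float32")
    model = Sequential([Dense(16, activation="relu", input_shape=(20,)),
                        Dense(1, activation="sigmoid")])
    model.compile(optimizer="rmsprop", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Write per-epoch loss/accuracy (and weight histograms) to ./logs
    tensorboard = TensorBoard(log_dir="./logs", histogram_freq=1)
    model.fit(x, y, epochs=5, validation_split=0.2, callbacks=[tensorboard])
    # Then, from a terminal:  tensorboard --logdir ./logs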

Monday, Dec 3rd

Wednesday, Dec 5th

Friday, Dec 7th

Week 15 – Bayesian Statistics, Probabilistic Programming, Genetic Algorithms and Neuromorphic Computing

Monday, Dec 10th

Wednesday, Dec 12th

Friday, Dec 14th

Congratulations – END OF SEMESTER!!!

Exercise Topics Below

 

 
