We follow a flipped-classroom model: students study traditional lecture content and complete straightforward self-guided exercises before our class meetings. During class, we review trouble spots and expand upon the more subtle aspects of the at-home assignments. We also use our limited time together to take on more challenging lab exercises and to hold small-group discussions on material not easily reduced to assigned programming exercises or readings.
- Assignments will appear on our course website several weeks in advance
- Homework assignments are listed with the class date they are due
- Larger homework assignments are generally scheduled over weekends
- DO NOT LET YOURSELF GET STUCK. If you are unclear about any assigned homework, take notes, bring your questions to the following class, and continue through any difficult sections to the end of the assignment.
- DO NOT FALL BEHIND. This course is cumulative: most weeks depend on concepts learned in prior weeks. Please contact the instructors promptly to work around any obstacles you encounter.
- FRONT-LOAD YOUR WORK. The first four full weeks are the most important in the course and will likely provide value far beyond this class. Please pay particular attention to homework and attendance during these critical weeks.
PART I (Weeks 1-5): Python Essentials
Week 1 – Introduction to AI for Humanities
Friday, Aug 30th:
- Introduction and Outline of Course
- Complete all assignments listed under Monday, Sep 3rd over the weekend and be prepared to discuss, use, and hand in papers as listed.
Week 2 – AI models and The Limits of AI
Monday, Sep 3rd: (to be completed before the start of class)
- Take the online Philosophy & Theory of AI Survey (print out your answers before clicking the submit button; hand in the printout at the beginning of class or email a digital version)
- Complete the first half (2 hrs) of DataCamp.com's free interactive online course Intro to Python for Data Science (additional learning options at Python.org and LearnPython.org)
Wednesday, Sep 5th:
- Create accounts on Google Colaboratory and Microsoft Azure Machine Learning Studio and test that you can log in. (You will need these accounts on Wednesday to create several canonical 'hello, world!' programs on these two cloud platforms; a minimal example follows this list.)
- MIT Tech Reunion Panel: The Science + Engineering of Intelligence (0:00-33:00), 8 Jun 2018
- Stanford Seminar: AI: Current and Future Paradigms and Implications (0:00-22:00, optional 45:00-1:11:00), 31 May 2018
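For reference, the canonical first program is a single line; running it in a fresh notebook cell on either cloud platform confirms your account works:

```python
# Run in a new notebook cell on Colab or Azure ML Studio.
print('hello, world!')
```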
Friday, Sep 7th:
- Why Technology Favors Tyranny, Yuval Noah Harari, The Atlantic Monthly, Oct 2018
- How the Enlightenment Ends, Henry Kissinger, The Atlantic Monthly, Jun 2018
- (only Section 6. Contrary Views on the Main Question) Computing Machinery and Intelligence, A.M. Turing, 1950
Sunday, Sep 9th:
- 2-5pm O’Connor House (same room as regular class)
- Python Tutorial Sessions
- Laptop Configuration – especially (but not only) for Windows 10 users
Week 3 – Digital Humanities and Stylometry
Monday, Sep 10th:
- Complete the second half (2 hrs) of DataCamp.com's free interactive online course Intro to Python for Data Science (additional learning options at Python.org and LearnPython.org)
- RegexOne.com Tutorial (print out the last exercise page and hand it in as proof; a short Python regex sketch follows this list)
- Student-Faculty Interview: prioritized list of 10 features for a 'Good Fit' hire at Kenyon
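For those who want a preview of how the RegexOne patterns carry over to Python, here is a minimal sketch using the standard-library re module (the sample text is invented):

```python
import re

text = "Contact elkinsk@kenyon.edu or chunj@kenyon.edu for help."

# Find every .edu email address: word characters/dots, an @, a domain.
emails = re.findall(r"\b[\w.]+@[\w.]+\.edu\b", text)
print(emails)  # ['elkinsk@kenyon.edu', 'chunj@kenyon.edu']
```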
Wednesday, Sep 12th:
- Neoliberal Tools (and Archives): A Political History of Digital Humanities, Allington, Brouillette, Golumbia, LA Review of Books, 1 May 2016
- (select two) Responses (Part I and Part II) to Allington, Brouillette and Golumbia
- (optional) Introduction to Stylometry (up to but not including 'First Stylometric Test: Mendenhall's Characteristic Curves of Composition'), The Programming Historian
Friday, Sep 14th:
- Quickly browse The Alan Turing Institute's Data Science and Digital Humanities website
- Object Oriented Ontology (try to bring 3 main points to class)
Week 4 – SciPy Stack: Wrangling Information into Data
Monday, Sep 17th:
- NOTE: Prof Elkins notes that these two readings are challenging for those without significant programming experience. Read over the two library descriptions as best you can; a short sketch after this list previews the core features. We'll go over programming exercises in class that demonstrate and reinforce the main features of numpy and pandas used in machine learning. Turn in a printout of the last page via Moodle.
- Numpy Quickstart Tutorial, SciPy.org
- 10 Minutes to Pandas, PyData.org
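As a taste of what the two tutorials cover, here is a minimal sketch of the NumPy and pandas features that recur in ML work (the book data is invented):

```python
import numpy as np
import pandas as pd

# NumPy: fast arrays with vectorized (element-wise) arithmetic.
word_counts = np.array([160000, 75000, 107000, 161000])
print(word_counts.mean(), word_counts.max())

# pandas: labeled tables (DataFrames) built on top of NumPy arrays.
df = pd.DataFrame({
    "title": ["Emma", "Frankenstein", "Jane Eyre", "Dracula"],
    "year": [1815, 1818, 1847, 1897],
    "words": word_counts,
})
# Select, filter, and summarize: operations ML pipelines use constantly.
print(df[df["year"] < 1850]["words"].describe())
```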
Wednesday, Sep 19th:
- (only exercises 0-6) SQL Tutorials, SQLzoo.net (a minimal sketch of running SQL from Python follows this list)
- The Bestseller Code, Anatomy of the blockbuster novel, Archer and Jockers
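If you'd like to see how the same SELECT/WHERE/ORDER BY queries from the SQLzoo exercises run inside Python, here is a minimal sketch using the standard-library sqlite3 module (the table is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE novels (title TEXT, author TEXT, year INTEGER)")
conn.executemany("INSERT INTO novels VALUES (?, ?, ?)", [
    ("Emma", "Austen", 1815),
    ("Dracula", "Stoker", 1897),
    ("Frankenstein", "Shelley", 1818),
])

# WHERE and ORDER BY: the heart of the early SQLzoo exercises.
for row in conn.execute(
        "SELECT title, year FROM novels WHERE year < 1850 ORDER BY year"):
    print(row)  # ('Emma', 1815) then ('Frankenstein', 1818)
```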
Friday, Sep 21st:
- The Emotional Arcs of Stories are Dominated by Six Basic Shapes, Andy Reagan, ArXiv.org, 26 Sep 2016 (Read the main body, p. 1-10, and Appendices A and B, p. S1-S4. Optionally, those interested in math can read Appendix C, and literature students may be interested in Appendix D)
Week 5 – Visual Storytelling
Monday, Sep 24th:
- Matplotlib Tutorial: Python Programming, Karlijn Willems, DataCamp.com (a short plotting sketch follows this list)
- Python Seaborn Tutorial for Beginners, Karlijn Willems, DataCamp.com
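As a preview of both tutorials, here is a minimal sketch of the matplotlib/seaborn workflow (the numbers are invented):

```python
import matplotlib.pyplot as plt
import seaborn as sns

sns.set()  # apply seaborn's nicer default styling to matplotlib

years = [1815, 1818, 1847, 1897]
words = [160000, 75000, 107000, 161000]

plt.plot(years, words, marker="o")
plt.xlabel("Publication year")
plt.ylabel("Word count")
plt.title("Novel lengths over time")
plt.show()
```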
Wednesday, Sep 26th:
- The Trouble with Bias, (49:30), Kate Crawford, NIPS 2017 Keynote, 2017
- Humanities on the Brain, James Williford, Humanities, National Endowment for the Humanities, 2012
Friday, Sep 28th:
- Go to https://public.tableau.com/s/ and sign up for a free account
- Biases: An Introduction, Rob Bensinger
- Reducing Bias and Ensuring Fairness in Data Science, Henry Hinnefeld, The Civis Journal, 22 Feb 2018
- How to Lie with Data, Karolis Urbonas, Amazon, DataScienceCentral.com, Apr 2017
PART II (Weeks 6-7): Machine Learning with Scikit-Learn
Week 6 – Machine Learning I, What is Intelligence?
Monday, Oct 1st:
Administrative:
- Go to https://algorithmia.com and sign up for a free account
- Go to https://data.world and sign up for a free account
- Go to https://www.openml.org and sign up for a free account
Programming Assignment:
- Open this Pokemon Partial Notebook and save it to your local Google Drive. This is the notebook we worked halfway through in class on Friday. Edit the rest of the code cells in the notebook (replace the '???' with the correct syntax) so it executes properly, and submit your final corrected Jupyter notebook via Moodle for credit.
- If you have any problems, please email Prof Elkins (elkinsk@kenyon.edu) and let us know if you'd like to attend a special session this Sunday, Sep 30th, from 3-4pm in the Seminar Room in O'Connor House.
Reading/Video Assignments (~40mins total):
Don't be intimidated: there are many links, but they are generally very short videos. The goal is to give you a high-level view of the directions in which Machine Learning/AI is being made easier to use. Don't focus on the details; we'll do that in class. Instead, focus on the overall process of creating an ML/AI pipeline using these forward-looking tools.
- What is data.world?, Data.world, Jun 2018 (2:17)
- (Algorithmia) API in 60 Seconds, Data.world/YouTube.com, Oct 2017 (1:04)
- TensorFlow: Machine Learning for Everyone, Google, Feb 2017 (4:03)
- The 7 Steps to Machine Learning, Google, Aug 2017 (10:35)
- Introducing Cloud AutoML, Google, Jan 2018 (1:42)
- AutoML Vision (Part 1), Google, Jul 2018 (8:00)
- AutoML Vision (Part 2), Google, Jul 2018 (3:36)
- Computational Universe (Stephen Wolfram), MIT AGI, Mar 2018 (19:26-27:45)
Wednesday, Oct 3rd:
- (only do the first free section/first bar at the bottom) Diagnose Data for Cleaning, DataCamp.com
Friday, Oct 5th:
Listen to this podcast that defines the field of Computational Neuroscience and raises questions about how we model reality and the human mind. Then compare this with the Science Wars described in the Wikipedia article.
- The Science Wars, Wikipedia
- (only the Intro and Sources of Uncertainty sections) Uncertainty Quantification, Wikipedia
- What is an Explanation? (Part 1), Unsupervised Learning Podcast, 29 Aug 2018
Questions for Discussion
- What was the dominant model of scientific explanation until recently?
- How are the New Mechanists different (and what tradition do they come out of)?
- In what two ways have computers affected neuroscience? (Include the definition of computational neuroscience.)
- The philosopher John Searle notes that "simulated thunderstorms don't grow wheat." How is modeling thinking different if you accept both strands of computational neuroscience?
- What is computational chauvinism?
- What are the speakers’ thoughts about this when it comes to disciplinary independence?
- Do philosophers who think about the mind need to understand the brain and “implementation”?
- How might we (or might we not) decide whether a model is true?
Week 7 – Machine Learning II, Political, Economic, and Social Issues
Monday, Oct 8th:
Here is a link to The New York Times coverage of what is being compared to the Alan Sokal affair many years ago. They do a good job of representing the diversity of reactions to it:
- Hoaxers Slip Breastaurants and Dog Park Sex into Journals, Jennifer Schuessler, NYTimes.com, 4 Oct 2018
- Think about this NYTimes article with regard to our discussion on Friday and the question of Modeling Truth and Reality. How is this the same as or different from the Replication Crisis in sciences like Psychology?
- Fitting a Line to Data, StatQuest via YouTube (9:21)
- (optional) Introduction to Linear Regression, Pt 1, StatQuest via YouTube (27:26)
- Copy and complete the Jupyter notebook Introduction to Linear Regression UK1850Crime, then submit an electronic printout/PDF to Moodle (a minimal regression sketch follows this list)
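For reference, fitting a line with scikit-learn takes only a few calls; this minimal sketch uses invented points rather than the UK1850Crime data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Fit y = a*x + b to a handful of (x, y) points.
X = np.array([[1], [2], [3], [4], [5]])   # feature column (must be 2-D)
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])  # target values

model = LinearRegression().fit(X, y)
print("slope:", model.coef_[0], "intercept:", model.intercept_)
print("prediction at x=6:", model.predict([[6]])[0])
```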
Wednesday, Oct 10th:
This week we begin to study ML at both a higher level of abstraction and with a more realistic degree of complexity. Read through the following tutorials. Do not be discouraged if you don't understand everything discussed; that's expected given this assignment. Just persevere, skim over sections you don't understand, and read through to the end to get the overall gist. Come to class with specific questions; a small sketch follows the list below. We'll walk through the theory and practice of decision trees, XGBoost, and overall stacked model architecture in class.
- Decision Trees in Python with Scikit-Learn, StackAbuse, Feb 2018
- Introduction to Python Ensembles, DataQuest, Jan 2018
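Here is a minimal sketch of the pattern both tutorials build up to: a single decision tree versus an ensemble of trees (scikit-learn's built-in Iris data stands in for a real dataset):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One tree: interpretable, but prone to overfitting.
tree = DecisionTreeClassifier().fit(X_train, y_train)
print("single tree accuracy:", tree.score(X_test, y_test))

# Many trees averaged together (a random forest) usually generalize better.
forest = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print("forest accuracy:", forest.score(X_test, y_test))
```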
[NOTE: STUDENT PRESENTATIONS BEGIN MID-WEEK]
PART III (Week 8): Computational Linguistics and Vectorized Text
Week 8 – Computational Linguistics and Teaching Machines to Read
Monday, Oct 15th:
We end our coverage of Machine Learning with an overall recap as well as a special focus on Decision Trees. Beyond individual trees, we'll also explore ways to combine them into Random Forests or create ensembles along with non-tree models. Finally, we ask whether ML is just glorified statistics and how complexity can perhaps help us understand types and degrees of 'Intelligence' in AI.
- No, Machine Learning is not just glorified Statistics, Joe Davison, Jun 2018 (read a few of the longer contrary comments below the article)
- P vs NP and the Computational Complexity Zoo, (10:43) Hackerdashery via YouTube, Aug 2014
- What is Machine Learning?, Jake VanderPlas, Python Data Science Handbook
Wednesday, Oct 17th
- In Depth: Decision Trees and Random Forests, Jake VanderPlas, Python Data Science Handbook
- Getting Started with Competitions: A Peer to Peer Guide, Will Koehrsen, Kaggle.com, Aug 2018
Friday, Oct 19th
- REDUCED READINGS to accommodate in-class presentations
- (Caution: spicy language) Machine Translation. From the Cold War to Deep Learning, Vasily Zubarev, 7 Feb 2018
PART IV (Weeks 9-11): Neural Networks
Week 9 – Introduction to Neural Networks: The Basis of Intelligence?
Monday, October 22nd
- Instead of a screenshot, complete this Jupyter notebook, Introduction to Scikit-Learn (recapping ML), and submit a hyperlink to your corrected/completed version on the Moodle class webpage. Remember to change the permissions on your local copy of the corrected notebook so we can access it, verify execution, and grade it (grant read permission to chunj@kenyon.edu and elkinsk@kenyon.edu).
- Chapter 1: What is Deep Learning?, Francois Chollet, Deep Learning with Python, Manning, Nov 2017
Wednesday, October 24th
- Neural Networks Video Series by 3Blue1Brown:
- Chapter 1: But what *is* a Neural Network? (19:14), Oct 2017
- Chapter 2: Gradient Descent, How Neural Networks Learn (21:01), Oct 2017
- (run in Google Colab) Train Your First Neural Network: Basic Classification Tutorial (a minimal Keras sketch follows this list)
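If you'd like a compact view of what the Colab tutorial builds, here is a minimal feed-forward classifier in tf.keras (MNIST digits stand in for the tutorial's data):

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # image -> flat vector
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one score per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3)
print("test accuracy:", model.evaluate(x_test, y_test)[1])
```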
Friday, October 26th
- Neural Networks Video Series by 3Blue1Brown:
- Chapter 3: What is backpropagation really doing? (13:54) Nov 2017
- (optional) Chapter 4: Backpropagation Calculus (10:18) Nov 2017
- (run in Google Colab) Text Classification with Movie Reviews Tutorial
- (run in Google Colab) Predict House Prices: Regression Tutorial
Week 10 – Machine Vision and CNNs, Neuroscience and Psychology of Perception
Monday, October 29th
(We suggest you watch the 2 videos on word vectorization, topic modeling and visualization first. They review and expand upon concepts covered in both class and previous readings that will help you understand the Stanford Literary Lab Pamphlet.)
- Patterns and Interpretation, Stanford Literary Lab, Pamphlet #15, Sep 2017
- word2vec (introduction and TensorFlow implementation), Minsuk Heo (9:47) (a tiny word2vec sketch follows this list)
- LDAvis: A Method for Visualizing and Interpreting Topic Models, statgraphics, Jan 2015 (8:08); play with an online interactive demo of movie reviews
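To make word vectorization concrete, here is a tiny word2vec sketch using the gensim library (assumes gensim 4+; the three-sentence corpus is invented and far too small for real use):

```python
from gensim.models import Word2Vec

sentences = [
    ["the", "king", "rules", "the", "land"],
    ["the", "queen", "rules", "the", "land"],
    ["a", "farmer", "works", "the", "land"],
]
# Each word becomes a dense vector; words appearing in similar
# contexts end up with similar vectors.
model = Word2Vec(sentences, vector_size=20, window=2, min_count=1, epochs=50)
print(model.wv.most_similar("king", topn=2))
```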
Wednesday, October 31st
- A Walk-through of Mammalian Vision System, Allen Institute, 2012 (30:00-40:00)
- Neural Mechanism of Recognition Part 1, James DiCarlo, MIT, Apr 2018 (0:00-25:00)
- Visualizing A Convolutional Neural Network (Interactive drawing exercise)
- Image Kernels, Victor Powell (Interactive image kernel demo)
- (We'll review topics like convolutions and max pooling in class; a minimal sketch of the pattern follows this list. Just read through the article to get a high-level understanding of CNNs and orient yourself for the lecture.) Visualizing Parts of a CNN using Keras and Cats, Hackernoon, Erik Reppel, Jan 2017
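Here is a minimal sketch of the convolution-plus-max-pooling pattern the article visualizes, written with tf.keras (the input shape is illustrative):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # Slide 32 learned 3x3 filters (kernels) across the image.
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu",
                           input_shape=(64, 64, 3)),
    # Max pooling keeps the strongest response in each 2x2 patch,
    # halving the spatial resolution.
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()  # prints each layer's output shape and parameter count
```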
Friday, November 2nd
NOTE: We’ll be calling on students who have not participated as much on topics directly related to readings. Please come prepared to be called upon with specific questions and interpretations of the readings.
- (skim the beginning and focus on how the machine learning techniques are designed to extract pathos from images of the human form within the constraints of the art theory presented) Totentanz: Operationalizing Aby Warburg's Pathosformeln, Leonardo Impett, Franco Moretti, Stanford Literary Lab, Nov 2017
- (Read only first overview page) OpenCV Overview, Tutorialspoint.com
- (Read for the overview and code template, not the math details; a minimal sketch follows this list) Face Detection using Haar Cascades, OpenCV.org
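The OpenCV code template boils down to a few lines; here is a minimal sketch (requires the opencv-python package; 'photo.jpg' is a placeholder filename):

```python
import cv2

# Load the pretrained frontal-face cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, w, h) rectangle per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imwrite("faces.jpg", img)
```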
[NOTE: WEEKLY PROJECT LAB STARTS THIS WEEK]
Week 11 – Text Processing and RNNs, Theory of Mind and Consciousness
Monday, Nov 5th
This weekend's Jupyter notebook assignment is a sample image-classification model using the CNNs we just studied. Sign up for a free account on Floydhub.com, which hosts the notebook we'll use. Configure your computer not to go into sleep mode for at least an hour while you let the CNN model train on the dataset.
A Jupyter notebook titled 'image-classification-demo' will automagically appear in your Projects when you create your account. Open and run this notebook, reading through the text blocks/comments as you execute each code cell. Don't panic if you don't understand everything; we'll review the notebook in class. Again, DO read the textual explanations before each code block as well as the '# comments' lines within code blocks to get a feel for what is happening.
This exercise will expose you to a more realistic CNN architecture (based upon Chollet's Xception model) and illustrate the important concepts of transfer learning and photo data augmentation (a minimal transfer-learning sketch follows this list). It will also give you a better feel for the computational complexity of DNNs, as it will take approximately 30 min-1 hr to train (learning over 2 million trainable parameters in our model over 3 epochs, on only a fraction of the original dataset and classes, with a CPU). Production models may take days or weeks to train depending upon the architecture/number of parameters, dataset size, required performance metrics, hardware (CPU/GPU/TPU + RAM), and decomposition for parallel execution.
Find a hyperlink to ONE dog photo on the Intertubes (don't share images) and insert it into the second-to-last code block to have your trained model predict which of the 10 breeds it 'thinks' the dog is, and with what probability. Email chunj@kenyon.edu with (a) the URL of your selected dog JPG pasted into the body of the email and (b) the text output from your model's prediction/probability. We'll use these two pieces of information to verify assignment completion.
- (Jupyter notebook on Floydhub.com) After creating your free account, go to the 'Projects' tab and open the sample notebook titled 'image-classification-demo'.
- A Friendly Introduction to Recurrent Neural Networks, Luis Serrano, Udacity, Aug 2017 (23:43)
- Bankspeak: The Language of World Bank Reports 1946-2012, Franco Moretti and Dominique Pestre, Stanford Literary Lab, Mar 2015
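As promised above, here is a minimal transfer-learning sketch in the spirit of the FloydHub notebook: reuse a pretrained Xception base and train only a small new classifier head (10 outputs for the 10 breeds; the data pipeline is omitted):

```python
import tensorflow as tf

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # one score per breed
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=3)  # trains only the new head
```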
Wednesday, Nov 7th
- (Skip the middle code; read the beginning descriptions and the end examples, which we'll reference to implement a text generator in class) The Unreasonable Effectiveness of Recurrent Neural Networks, Andrej Karpathy, May 2015
- Natural Language Processing, Wikipedia.org (scan to understand the main subfields in NLP)
- Learning Explanatory Rules from Noisy Data, DeepMind, Jan 2018 (a clear distinction between symbolic and DNN AI with a hint towards future research)
Friday, Nov 9th
We began our study of Neural Networks with three basic building blocks of Deep Neural Networks: Feed Forward, Convolutional and Recurrent Neural Networks (FFN, CNN and RNN). Until now, our focus has been on higher-level abstractions and visualizations to broadly understand how each structure can be used and intuit how they operate.
Today we'll go deeper into implementation details by exploring the Keras deep learning library we use in most of our examples. We'll explore the Keras API by examining simple FFN, CNN, and RNN implementations. In doing so, we'll see common structural motifs and operational concerns. Equally important, we'll highlight the practical decisions that contextualize each line of code we explore.
- How to Solve 90% of NLP Problems: a Step-by-Step Guide, Emmanuel Ameisen, Insight Data Science, Jan 2018 (excuse the business-speak, but this article condenses a lot of NLP wisdom in a very approachable way)
- Keras: The Python Deep Learning Library (read from the top of this web page, including 'Guiding principles' and 'Getting started: 30 seconds to Keras', up to but not including the 'Installation' subtitle), Keras.io
- Getting Started with the Keras Sequential Model (read from the top of this web page, including 'Specifying the input shape', 'Compilation', 'Training', and the two sections on Multilayer Perceptrons (MLPs), up to but not including 'VGG-like convnet'); a minimal RNN sketch in the same Sequential style follows this list
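In the same Sequential style as the guide, here is a minimal RNN sketch: an Embedding layer feeds an LSTM for binary text classification (vocabulary size and layer widths are illustrative):

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Embedding(input_dim=10000, output_dim=64),  # word id -> vector
    keras.layers.LSTM(32),                # reads the sequence, carries a state
    keras.layers.Dense(1, activation="sigmoid"),  # e.g., positive vs. negative
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```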
PART V (Weeks 12-13): Reinforcement Learning and Generative Adversarial Networks
Week 12 – Reinforcement Learning, Independent Learning in an Incomplete World
Monday, Nov 12th
Today we have a number of generally short videos/readings. We start with two very short videos illustrating the surprising ability of Reinforcement Learning (RL) to teach itself without human guidance or intervention. Two longer videos outline RL in general and describe Monte Carlo Tree Search (MCTS), both key components in understanding AlphaGo Zero. A web tutorial walks us through using RL/Deep Q-Learning to master OpenAI's pole-balancing game, which we'll see in class. Finally, we look at a study that contrasts how AI and humans play video games.
The Deep Q-Learning link below can get a bit technical, but read on through and focus on the general architecture it uses to independently observe, learn, and master tasks in its environment. Think about parallels with how humans learn through interactions with their environment.
- Google’s DeepMind AI Just Taught Itself to Walk, Tech Insider, Jul 2017 (1:50)
- AlphaGo Official Trailer, AlphaGoMovie.com, (1:30)
- An Introduction to Reinforcement Learning, Arxiv Insights, Apr 2018 (16:27)
- Monte Carlo Tree Search (MCTS) Tutorial, Full Stack Academy, May 2017 (12:38)
- (Focus on how RL/Deep Q-Learning architects a solution to problem solving in unfamiliar environments without explicit rules and with minimal feedback, not on the math/code; a heavily simplified sketch follows this list.) Deep Q-Learning with Keras and Gym, Keon.io, Feb 2017
- Why Humans Learn Faster than AI… For Now, MIT Technology Review, March 2018
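Here is the heavily simplified Deep Q-Learning sketch mentioned above, for CartPole in the spirit of the Keon.io tutorial; it omits the replay memory and target network a real agent needs, and assumes the classic gym step/reset API:

```python
import random
import numpy as np
import gym
from tensorflow import keras

env = gym.make("CartPole-v1")

# The Q-network: maps a 4-number state to a value for each of the 2 actions.
model = keras.Sequential([
    keras.layers.Dense(24, activation="relu", input_shape=(4,)),
    keras.layers.Dense(24, activation="relu"),
    keras.layers.Dense(2, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")

gamma, epsilon = 0.95, 0.1  # discount factor, exploration rate
state = env.reset().reshape(1, 4)
for step in range(200):
    q = model.predict(state)
    # Epsilon-greedy: mostly act on the network's value estimates.
    action = env.action_space.sample() if random.random() < epsilon \
        else int(np.argmax(q))
    next_state, reward, done, _ = env.step(action)
    next_state = next_state.reshape(1, 4)
    # Bellman target: reward now plus discounted best value later.
    target = q.copy()
    target[0, action] = reward if done \
        else reward + gamma * np.max(model.predict(next_state))
    model.fit(state, target, verbose=0)  # one gradient step toward the target
    state = env.reset().reshape(1, 4) if done else next_state
```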
Wednesday, Nov 14th
- Move 37 Explained, Siraj Raval, Nov 2018 (11:35)
- Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern, Bruno Latour, Critical Inquiry, Winter 2004
- BRING TO CLASS ANSWERS TO THE FOLLOWING QUESTIONS:
- Why has critique run out of steam?
- What is Critical Gesture Move One?
- What is Critical Gesture Move Two?
- Why is the critic always right?
- What is the critical trick?
- How do we move from matters of fact to matters of concern?
- (bonus for humanists) What does Heidegger have to do with all of this?
- (read up to but not including 'Even a Good Reward, Local Optima Can be Hard to Escape') Deep Reinforcement Learning Doesn't Work Yet, Alex Irpan, Jun 2018
- (optional) Spinning Up as a Deep RL Researcher, OpenAI.com, Oct 2018
Friday, Nov 16th
- (for those working on their own PCs/Macs) Install the R programming language, then RStudio and RapidMiner, before class. Be sure to sign up for a free RapidMiner.com account, which we'll need in class.
- (from earlier in the course) Be sure you have both (a) a Google Gmail.com account and (b) a Microsoft Outlook.com account, with remembered passwords. You'll need these to explore their AI/ML cloud services.
- Introducing OpenAI, OpenAI.com
- Dataism Is Our New God, Yuval Noah Harari, New Perspectives Quarterly, Spring 2017
- The Genius Neuroscientist Who Might Hold the Key to True AI, Shaun Raviv, Wired.com, Nov 2018
- (optional) Advice for Short-Term ML Projects, Tim Rocktaschel, Aug 2018
THANKSGIVING BREAK: Monday, Nov 19th – Friday, Nov 23rd
Week 13 – Generative Models and GANs, Creating Art
Monday, Nov 26th
With the cancellation of class the previous Friday, we'll have to find a way to make up our DH tools and techniques seminar. Nonetheless, we want to keep our focus on AI and DNNs and continue on to our last DNN techniques, which are all related to generative deep learning.
Your assignment over the long Thanksgiving break is to read one chapter on Generative Deep Learning (you can access the entire book via LBIS.kenyon.edu). This is a great chapter that reviews and extends what you've learned about FFN/FCN, CNN, and RNN/LSTM models with techniques like style transfer as well as VAE and GAN architectures (a minimal GAN sketch follows the reading below). There are a few important terms/concepts in this reading that we have not encountered yet, so we will review them in class. Do not be discouraged: read through the entire chapter and bring any questions you have to class. You will need to be familiar with much of this information to complete our final class programming assignments on Deep Neural Networks.
- Generative Deep Learning, Francois Chollet, Deep Learning with Python, Manning, 2017
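As flagged above, here is a minimal sketch of a GAN's two halves; this is a dense toy version rather than the convolutional one in the chapter (layer sizes are illustrative and the training loop is omitted):

```python
from tensorflow import keras

latent_dim = 32  # size of the random 'noise' vector the generator starts from

# Generator: noise vector -> fake 28x28 image.
generator = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    keras.layers.Dense(28 * 28, activation="sigmoid"),
    keras.layers.Reshape((28, 28)),
])

# Discriminator: image -> probability that it is real rather than generated.
discriminator = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")
# Training alternates between the two: the generator tries to fool the
# discriminator, and each network's progress forces the other to improve.
```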
Wednesday, Nov 28th
- Generating Custom Photo-Realistic Faces using AI, Shaobo Guan, Oct 2018
- Imaginary Worlds Dreamed by BigGAN, AIweirdness.com, Oct 2018 (OpenReview.net)
Friday, Nov 30th
- MIT Quest for Intelligence (4:19 Video) (Why study AI)
- Finalize Outline of Class Project: Research Guidelines, Resources and Examples
- Catch up on/review previous readings on RL/GAN architecture/code, time permitting
PART VI (Weeks 14-15): Genetic Algorithms and Probabilistic Models
Week 14 – Core Concepts, Optimization and Visualizations of Neural Networks with Keras/TensorBoard
In our second-to-last week, we focus on core neural network concepts, the Keras/TensorBoard framework, and optimizations/visualizations at both the code and theory levels. This reinforces and makes concrete many of the concepts we introduced earlier, leaving you with a stronger working knowledge of neural networks. The lectures this week will move quickly and assume the content/ideas contained in the readings assigned below. Together, this week's readings and lectures should boost your confidence toward mastery of Neural Networks.
Monday, Dec 3rd
- Keras Explained, Siraj Raval, Jan 2018 (9:20)
- 25 Must Know Terms and Concepts for Beginners in Deep Learning, AnalyticsVidhya.com, May 2017
- Deep Learning with Python, TensorFlow and Keras Tutorial, sentdex, Aug 2018 (20:33)
- Keras Tutorial: Deep Learning in Python, DataCamp.com, May 2017
Wednesday, Dec 5th
- Using TensorBoard with Keras, TensorFlow/Google, Nov 2018 (1:57)
- Hands-on TensorBoard, Google Developers, Feb 2018 (23:46) (a minimal sketch of the Keras TensorBoard callback follows this list)
- Hyperparameter Optimization with Keras, Mikko, May 2018
- Talos GitHub Doc (Keras Model Optimization), Mikko
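As noted above, wiring TensorBoard into a Keras model takes one callback; here is a minimal sketch (the model and the 'logs/' directory are arbitrary choices):

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# The callback writes loss/metric curves (and the graph) during training.
tb = keras.callbacks.TensorBoard(log_dir="logs")
# model.fit(x, y, epochs=10, callbacks=[tb])
# Then, from a terminal:  tensorboard --logdir logs
```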
Friday, Dec 7th
- Explainable AI, DARPA (perhaps THE biggest issue in AI today)
- Stakeholders in Explainable AI, Alun Preece et al., Sep 2018
- An Introduction to Explainable AI, Patrick Ferris, FreeCodeCamp.org, Aug 2018
- Ensembling ConvNets Using Keras, Max Lawnboy, Medium.com, Dec 2017
Week 15 – Bayesian Statistics, Probabilistic Programming, Genetic Algorithms and Neuromorphic Computing
Monday, Dec 10th
- 13 Practical Ideas to Make You A Better Forecaster, Mark Steed, Farnam Street, Jun 2015
- Conditional Probability Explained Visually (Bayes Theorem), Art of the Problem, 2013 (5:05)
- A Visual Guide to Bayes Theorem, Julia Galef, Jul 2015 (11:24)
- Visualizing Bayes' Theorem, Oscar Bonilla, May 2009 (a small worked example follows this list)
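Here is the small worked example promised above, echoing the videos: a fairly accurate test for a condition only 1% of people have (all numbers invented):

```python
p_condition = 0.01               # prior P(C)
p_pos_given_condition = 0.90     # sensitivity P(+|C)
p_pos_given_healthy = 0.10       # false-positive rate P(+|not C)

# Total probability of testing positive.
p_pos = (p_pos_given_condition * p_condition
         + p_pos_given_healthy * (1 - p_condition))

# Bayes' theorem: P(C|+) = P(+|C) * P(C) / P(+)
p_condition_given_pos = p_pos_given_condition * p_condition / p_pos
print(round(p_condition_given_pos, 3))  # ~0.083: a '+' is still probably wrong
```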
Wednesday, Dec 12th
- Meta Learning, Siraj Raval, Oct 2018 (10:17)
- An Intro to Probabilistic Programming, Siraj Raval, Nov 2017 (8:56)
- The Evolution of a Traveling Salesman, Eric Stoltz, Towards Data Science, Jul 2018 (a tiny genetic-algorithm sketch follows this list)
- The Surprising Creativity of Digital Evolution (p. 1-5), Joel Lehman et al., Aug 2018
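Here is the tiny genetic-algorithm sketch promised above: it evolves a target phrase rather than salesman routes, but the selection/crossover/mutation loop is the same pattern the article applies to TSP:

```python
import random
import string

TARGET = "hello world"
CHARS = string.ascii_lowercase + " "

def fitness(s):               # how many characters already match the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):     # occasionally swap in a random character
    return "".join(c if random.random() > rate else random.choice(CHARS)
                   for c in s)

def crossover(a, b):          # splice two parents at a random point
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

pop = ["".join(random.choice(CHARS) for _ in TARGET) for _ in range(200)]
for gen in range(500):
    pop.sort(key=fitness, reverse=True)  # rank by fitness
    if pop[0] == TARGET:
        break
    parents = pop[:50]                   # selection: keep the fittest quarter
    pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
           for _ in range(200)]
print(gen, max(pop, key=fitness))
```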
Friday, Dec 14th
- Reinforcement Learning or Evolutionary Strategies? Nature has a solution: Both, Arthur Juliani, Beyond Intelligence/Medium, Apr 2017
- Human Brain Project, Nov 2012 (7:25)
- Brain to Brain Communication, 2 Minute Papers, Oct 2018 (2:59)
- Libet Experiments (Do humans have free will?), The Information Philosopher
- MIT: Long-Term AI Future, Russell Stewart, Dec 2018 (32:30-1:36:00)
Congratulations – END OF SEMESTER !!!
Exercise Topics Below
- Inverse Reinforcement Learning and Inferring Human Preferences with Dylan Hadfield-Menell, Future of Life Institute, Apr 2018 (60:00)
- A Survey of Research Questions for Robust and Beneficial AI, Future of Life Institute
- A Comprehensive List of Hyperparameter Optimization Tuning Solutions, Mikko, May 2018
- FAT: What-If Tool, PAIR, Google, Sept 2018
- Seq2Seq-Viz, Harvard, Apr 2018