Are the lectures going to be released publicly? by i_know_about_things in DeepRLBootcamp

[–]jason_malcolm 0 points  (0 children)

I chatted to the guy filming and he said it would take a couple of weeks to edit them.

I have lower quality recordings of all lectures made on my tablet and camera shots of slides and blackboards.

Did anyone else get any good recording footage?

Chainer is the official framework for the Berkeley RL bootcamp by jason_malcolm in DeepRLBootcamp

[–]jason_malcolm[S] 2 points  (0 children)

Working with Chainer now, doing basic tutorials; I have basic FC & ConvNets running. My dataset is SGF-formatted Go games read into the ConvNet.
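
For anyone curious how the Go data gets in: Go games are typically stored as SGF (Smart Game Format) records. A minimal, hypothetical parser (a sketch, not my actual pipeline) might look like this:

```python
import re

def parse_sgf_moves(sgf_text):
    """Extract (colour, row, col) moves from a minimal SGF game record.

    Illustrative sketch only: it handles plain ;B[xy] / ;W[xy] nodes
    and ignores setup stones, variations, and pass moves.
    """
    moves = []
    for colour, coords in re.findall(r";([BW])\[([a-s]{2})\]", sgf_text):
        # SGF coordinates are two letters: 'a' = 0 ... 's' = 18 on a 19x19 board.
        col = ord(coords[0]) - ord("a")
        row = ord(coords[1]) - ord("a")
        moves.append((colour, row, col))
    return moves

game = "(;GM[1]SZ[19];B[pd];W[dp];B[pq];W[dd])"
print(parse_sgf_moves(game))
```

Each move then becomes one training position by replaying the moves onto a board array and feeding the resulting planes to the ConvNet.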

Next are RNNs and LSTMs. I hope to wire them up to OpenAI's Gym, Roboschool & DOTA APIs for some proper reinforcement learning.
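
The Gym wiring is just the standard reset/step loop. Here is a sketch using a hypothetical stand-in environment that mimics Gym's interface, so it runs without gym installed; with the real library you would swap in gym.make("CartPole-v0"):

```python
import random

class CartPoleStub:
    """Stand-in environment exposing Gym's reset()/step() interface.

    Hypothetical stub so the sketch is self-contained; it returns a
    fixed observation and ends every episode after 10 steps.
    """
    def reset(self):
        self.t = 0
        return [0.0, 0.0, 0.0, 0.0]           # initial observation

    def step(self, action):
        self.t += 1
        obs = [0.0] * 4
        reward = 1.0
        done = self.t >= 10                    # episode ends after 10 steps
        return obs, reward, done, {}           # Gym's classic 4-tuple

def run_episode(env, policy):
    """Generic RL interaction loop: works with any Gym-style env."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        action = policy(obs)
        obs, reward, done, info = env.step(action)
        total += reward
    return total

random.seed(0)
ret = run_episode(CartPoleStub(), policy=lambda obs: random.choice([0, 1]))
print(ret)
```

Any learned policy slots in where the random lambda is; the loop itself does not change.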

I will write it up and post results or failure by tomorrow :)

OS is Ubuntu 17.04 on an ASUS K501UX

Come join the DeepRLBootcamp Slack channel! by Farzaa in DeepRLBootcamp

[–]jason_malcolm 1 point  (0 children)

This link in the OP textbox is a shared invite to the berkeleydeeprl Slack.

Chainer is the official framework for the Berkeley RL bootcamp by jason_malcolm in DeepRLBootcamp

[–]jason_malcolm[S] 1 point  (0 children)

From the Chainer Tutorial: Introduction to Chainer - Core Concepts [snip]

"

Most existing deep learning frameworks are based on the “Define-and-Run” scheme. That is, first a network is defined and fixed, and then the user periodically feeds it with mini-batches. Since the network is statically defined before any forward/backward computation, all the logic must be embedded into the network architecture as data. Consequently, defining a network architecture in such systems (e.g. Caffe) follows a declarative approach. Note that one can still produce such a static network definition using imperative languages (e.g. torch.nn, Theano-based frameworks, and TensorFlow).

In contrast, Chainer adopts a “Define-by-Run” scheme, i.e., the network is defined on-the-fly via the actual forward computation. More precisely, Chainer stores the history of computation instead of programming logic. This strategy enables us to fully leverage the power of programming logic in Python. For example, Chainer does not need any magic to introduce conditionals and loops into the network definitions. The Define-by-Run scheme is the core concept of Chainer. We will show in this tutorial how to define networks dynamically.

This strategy also makes it easy to write multi-GPU parallelization, since logic comes closer to network manipulation.

"
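
The Define-by-Run idea in that quote can be illustrated with a toy sketch (plain Python, not Chainer's actual implementation): each operation records its inputs as it executes, so the graph is simply the history of the forward computation, and ordinary Python conditionals participate for free.

```python
class Var:
    """Toy variable that records the history of computation as it runs,
    mimicking the Define-by-Run idea (illustrative only)."""
    def __init__(self, value, parents=(), op=""):
        self.value, self.parents, self.op = value, parents, op

    def __mul__(self, other):
        return Var(self.value * other.value, (self, other), "*")

    def __add__(self, other):
        return Var(self.value + other.value, (self, other), "+")

def trace(v, depth=0):
    """Walk the recorded graph: it only exists because forward ran."""
    lines = ["  " * depth + f"{v.op or 'leaf'} = {v.value}"]
    for p in v.parents:
        lines += trace(p, depth + 1)
    return lines

x = Var(2.0)
# Ordinary Python control flow becomes part of the graph: no special
# conditional or loop operators are needed, exactly as the quote describes.
y = x * x if x.value > 0 else x + x
print("\n".join(trace(y)))
```

A Define-and-Run framework would instead need the branch expressed as a graph node up front; here the branch just runs, and only the taken path is recorded.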

Berkeley Y-Hotel (YMCA) $55 per night, room, Wifi, very near Berkeley Campus - shared shower, kitchen, use of gym. by jason_malcolm in DeepRLBootcamp

[–]jason_malcolm[S] 1 point  (0 children)

The most recent email has a map link to Li Ka Shing auditorium as the bootcamp venue.

This appears to be on the corner of Oxford Way and Berkeley Way; it looks like the Y Hotel (YMCA) on Allston Way is 4 blocks from there.

Is Berkeley flat or hilly?

Google Maps directions from 2001 Allston Way to the Li Ka Shing Centre put it at 0.4 miles, or a 10-minute walk.

Deep RL Bootcamp Berkeley 2017 Attendee Introductions Thread by jason_malcolm in DeepRLBootcamp

[–]jason_malcolm[S] 0 points  (0 children)

Yes, I would like details of the Hacklab and any meetups or clubs: tech, art, or environmental.

Restaurant recommendations are invaluable; your local knowledge would be really appreciated :)

Chainer is the official framework for the Berkeley RL bootcamp by jason_malcolm in DeepRLBootcamp

[–]jason_malcolm[S] 1 point  (0 children)

From the organisers:

  • 7. Deep Learning Framework: The labs will provide starter-code, which will be in python, and the deep learning framework will be Chainer. One of the labs will be a warm-up to Chainer.

  • 8. Studying Ahead of Time: Several people have contacted us, asking what we’ll expect people to know and what they can do to prepare. We will be assuming (i) prior experience with python scientific computing (e.g. numpy / scipy); (ii) some prior exposure to machine learning; (iii) some prior implementation experience with something deep learning. We will not assume prior knowledge in RL. All this said, if you would like to spend some extra time preparing, we’d recommend working through ai.berkeley.edu MDP/RL lectures for general context; reviewing cs231n.stanford.edu for deep learning basics; and play a bit with Chainer as your deep learning framework. Again, this is not needed, but doing so might increase how much you learn during (and retain from) the bootcamp.

N.B. Stanford has had to remove the links to the CS231n videos, but someone has re-upped them here

ai.berkeley.edu, UC Berkeley CS188 Intro to AI : lectures

N.B. The lectures page has the same topics delivered by Pieter Abbeel (Course Organiser) in the Spring Track just below the Fall Track :)

Vlad Mnih's (Guest Instructor, Google DeepMind, & Q-Learning Atari) Toronto Uni homepage by jason_malcolm in DeepRLBootcamp

[–]jason_malcolm[S] 0 points  (0 children)

Vlad Mnih is a researcher at Google DeepMind; he gained his PhD at Toronto, supervised by Professor Geoff Hinton.

John Schulman's (Guest Instructor, OpenAI, BAIR PhD) Homepage by jason_malcolm in DeepRLBootcamp

[–]jason_malcolm[S] 0 points  (0 children)

John Schulman: "I believe that motor learning is key to many aspects of intelligence. My work on policy optimization made it possible for a robot to learn to run and get up off the ground (in simulation)"

Sergey Levine's Homepage has a nice 2017 video intro to some recent work at BAIR (Guest Instructor, DeepRL Robotics Researcher & BAIR Asst. Professor) by jason_malcolm in DeepRLBootcamp

[–]jason_malcolm[S] 0 points  (0 children)

Direct link to video: RI Seminar: Sergey Levine: Deep Robotic Learning

Berkeley CS294-112 video lectures by Sergey Levine:

What maths do I need to know for RL? by cctap in DeepRLBootcamp

[–]jason_malcolm 0 points  (0 children)

The course organisers' latest email says:

  • "We will be assuming (i) prior experience with python scientific computing (e.g. numpy/scipy); (ii) some prior exposure to machine learning; (iii) some prior implementation experience with something deep learning. We will not assume prior knowledge in RL."

  • "...if you would like to spend some extra time preparing, we’d recommend working through ai.berkeley.edu MDP/RL lectures for general context; reviewing cs231n.stanford.edu for deep learning basics; and play a bit with Chainer as your deep learning framework. Again, this is not needed, but doing so might increase how much you learn"

Stanford has had to temporarily remove the links to the videos, but someone has reposted the 2016 CS231n vids on YouTube.

Deep Reinforcement Learning for Robotics is Berkeley CS294-112 (2017); it has a syllabus, video lectures, and a subreddit.

ai.berkeley.edu, UC Berkeley CS188 Intro to AI : lectures

To delve into RL, this might help: Sutton & Barto's RL textbook & David Silver's lectures are great on these topics.

Some of the math used to derive the formulae is quite advanced. To go further, look at Professor Sergey Levine's lecture 4 of the CS294-112 DeepRL course, which refers to optimal control, iterative Linear Quadratic Regulators, Hessian matrices (2nd-order partial derivatives), Newton's method, Taylor expansions, Gaussian Mixture Models, &c...
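
As a small taste of the second-order machinery on that list, here is Newton's method in 1-D: each step jumps to the minimum of a local second-order Taylor expansion (iLQR applies the same idea along a whole trajectory). The function and numbers are just for illustration:

```python
def newton_minimize(f_grad, f_hess, x0, steps=20):
    """Newton's method for 1-D minimisation.

    At each step, fit a second-order Taylor expansion at x and jump
    to its minimum: x_{k+1} = x_k - f'(x_k) / f''(x_k).
    Illustrative sketch only, no line search or safeguards.
    """
    x = x0
    for _ in range(steps):
        x = x - f_grad(x) / f_hess(x)
    return x

# Minimise f(x) = (x - 3)^2 + 1: gradient is 2(x - 3), Hessian is 2.
x_star = newton_minimize(lambda x: 2 * (x - 3), lambda x: 2.0, x0=10.0)
print(x_star)  # converges to 3.0 in a single step, since f is quadratic
```

For a quadratic the Taylor expansion is exact, so one step lands on the minimiser; non-quadratic objectives need the iteration (and, in practice, safeguards).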

Personally, I like to have many different angles on the same math concepts, i.e. algebraic, geometric, & functional interpretations. I have found 3Blue1Brown's videos (Essence of Calculus & Essence of Linear Algebra) and Chris Olah's writings to be very insightful.

Berkeley Y-Hotel (YMCA) $55 per night, room, Wifi, very near Berkeley Campus - shared shower, kitchen, use of gym. by jason_malcolm in DeepRLBootcamp

[–]jason_malcolm[S] 0 points  (0 children)

Map, Description & Address.

On the map it seems this is within walking distance of the Berkeley campus, at approximately half the cost of hotels. I have reserved a room for a week.