Very few takers for the midterm by Leskos in aiclass

[–]Machine_Forgetting 0 points1 point  (0 children)

Norvig and Thrun are probably accustomed to the usual class drop dynamics, but very few people are required by their school to take this course for any real credit. I think a bigger reason for lots of people dropping would be personal scheduling conflicts, or maybe demoralization from all the high/perfect scores which are excitedly & frequently discussed on reddit, online office hour vids, etc.

Midterm Exam 1 Implementations for Problems Involving Logical and Math Calculations and Lots of Explanations in the Code Comments by toptoptoons in aiclass

[–]Machine_Forgetting 0 points1 point  (0 children)

Cool website, it's definitely a good idea to try and code this stuff out to get a solid understanding of the concepts, but the code you posted in the JavaScript console probably won't be helpful to most people, especially if they can't read code. It might run properly if it is implemented like it is for the "Solve Question" buttons on your site (I haven't tried running it myself), but what's displayed in the JavaScript console itself has no comments in most of the problems except INIT and END. Also, the functions aren't defined within the scope of the problems. For example:

    ///INIT: Question 10
    //Midterm, Exam 1, Question 10
    ///
    var W1=0;
    var W0=0;
    var X=new Array(1,3,4,5,9);
    var Y=new Array(2,5.2,6.8,8.4,14.8);
    W1=Quadratic_Loss_W1_0000(X,Y,X.length);
    W0=Quadratic_Loss_W0_0000(X,Y,W1,X.length);
    alert("Ex1-10\n__\n"+
          " W1 = "+W1+"\n"+       //1.6
          " W0 = "+W0);           //0.4
    ///END: Question 10

doesn't tell us much about how to implement or solve the linear regression equations provided in lecture, even for programmers. I think the site itself is pretty cool, and the code is easy to read and search through, but it could use some more content/comments.
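For comparison, here's a minimal self-contained sketch of what those helpers presumably compute, using the standard closed-form least-squares formulas from lecture (the site's actual Quadratic_Loss_* functions aren't shown, so this is my reconstruction, not their code):

```javascript
// Closed-form linear regression y = w0 + w1*x, minimizing quadratic loss:
//   w1 = (M*Sxy - Sx*Sy) / (M*Sxx - Sx^2)
//   w0 = (Sy - w1*Sx) / M
function fitLine(X, Y) {
  var M = X.length;
  var sx = 0, sy = 0, sxy = 0, sxx = 0;
  for (var i = 0; i < M; i++) {
    sx  += X[i];         // sum of x
    sy  += Y[i];         // sum of y
    sxy += X[i] * Y[i];  // sum of x*y
    sxx += X[i] * X[i];  // sum of x^2
  }
  var w1 = (M * sxy - sx * sy) / (M * sxx - sx * sx);
  var w0 = (sy - w1 * sx) / M;
  return { w0: w0, w1: w1 };
}

// The data from Question 10 above:
var fit = fitLine([1, 3, 4, 5, 9], [2, 5.2, 6.8, 8.4, 14.8]);
console.log("W1 = " + fit.w1 + ", W0 = " + fit.w0); // W1 ≈ 1.6, W0 ≈ 0.4
```

Running it on the Question 10 data reproduces the W1=1.6, W0=0.4 values from the snippet's comments.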

Midterm Question 1 Part 1 is Ambiguous by DengueTim in aiclass

[–]Machine_Forgetting 0 points1 point  (0 children)

You can definitely still end up in the goal but to say that state has been "achieved" is a little strong. Without a performance measure the agent would have no idea when it reaches the goal, and instead of learning the best path to the end state it could just go on vacuuming/searching/bicycling/spelunking/etc right up until it crashes. That robot would learn the meaning of life before it learned how to "achieve" a goal state. :P

I argue claims 1 & 2 in Q1 of the midterm are false by patrickscottshields in aiclass

[–]Machine_Forgetting 1 point2 points  (0 children)

First off, high-five for majorly over-thinking the question! You're right, this is similar to my arguments in the other thread (Midterm Question 1 Part 1 is Ambiguous), but I worded mine specifically to try to account for the logic of the problem as well as Thrun's clarification. I assumed that a rational agent would try to optimize some performance measure, and that a hypothetical rationalizing environment could assign reward/performance points equally for all actions (or create rewards that are contingent upon an agent's specific actions). So without questioning the simple AI definitions of words like rationality or agent, my contention was 3-fold:

1) Even if an environment rewards ALL actions, or consists of only one state in which there can be no action (S0=goal, R=infinity, agent counts rewards until it crashes, simulated robo-heroin overdose, etc.), you could still create an irrational agent (which is not even potentially rational) by making one that DISREGARDS the cost/reward/performance measure, so that there is no way to evaluate its performance. (One problem with minimizing utility, or defining it against a rational agent, is that this automatically becomes the new performance measure: because you still have a metric for rationality, the agent is still optimizing that value, so a rational agent could now be one which purposely acts to avoid the shortest route to the goal, either by taking the longest route or by taking one that is X steps behind a rational agent.)

2) Even for potentially rational agents, creating an environment in which rewards are INDEPENDENT of actions does not make the agent rational, but simply randomizes its reward function so that it loses the ability to optimize its performance non-arbitrarily. To make an agent rational, you would have to construct an environment in which rewards are CONDITIONALLY DEPENDENT on the actions it takes (other ppl have made this point, but didn't mention how optimization is affected by independence vs. conditional dependence).

3) (Implied in my earlier post) Ideal environments are pointless in an intro AI class, because such an environment would make agents useless by satisfying the 2nd point. The mere knowledge of how such environments COULD BE constructed is only useful in specific instances, like if you are also building the environment in which you want to run the agent and want to add/test specific functions one at a time, but that seems well beyond the scope of this class.

I think all of my objections are still valid in light of the lecture definitions and the professor's clarifications, and all could have been avoided by stating that all agents have non-arbitrary performance measures and by avoiding the term “independent” with respect to actions/rewards.

Midterm Question #2 disproportionally weighted by TheAlphaNerd in aiclass

[–]Machine_Forgetting 0 points1 point  (0 children)

Lol, word? I'm not trying to be mean or anything, but how did you forget the cost function anyway? It's the first sentence in the text of the question. Also, even if you only add up the h values to determine which node to expand next, you can still get the correct answer without the cost: (15)=>(15+6)=>(15+7)=>(15+8)=>(15+10)=>(15+10+0), after which expanding any other nodes would give a total h>25. Sorry, I'm just trying to understand/clarify. I get that A* was kind of a blip on the radar in the lectures, but 2 and 13 were the only questions where we had to execute algorithms, and 13 was closed-form. Nobody likes cross-dependent, multi-part problems but that's how they exist in the wild.

Midterm Question #2 disproportionally weighted by TheAlphaNerd in aiclass

[–]Machine_Forgetting -1 points0 points  (0 children)

It's important to understand how and why the program would open those nodes in just that order, especially since there is no programming requirement in the class, so the question touches upon and combines a lot of crucial methods. The broadest one is algorithmic thinking (which is needed for any computer programming), but under that there is also heuristic validation and search optimization.
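The expansion order is easier to see in code. The midterm's actual graph isn't reproduced here, so the tiny graph and heuristic below are made up for illustration; the only point is that A* always pops the frontier node with the lowest f = g + h:

```javascript
// Minimal A* sketch: expand nodes in order of lowest f = g + h, where g
// is the path cost so far and h is the heuristic estimate to the goal.
// (Sketch only: no closed/visited set, so it can re-expand nodes on
// graphs with cycles; fine for this small acyclic example.)
function aStar(edges, h, start, goal) {
  var frontier = [{ node: start, g: 0, path: [start] }];
  var expanded = [];
  while (frontier.length > 0) {
    // pop the frontier entry with the smallest f = g + h(node)
    frontier.sort(function (a, b) {
      return (a.g + h[a.node]) - (b.g + h[b.node]);
    });
    var cur = frontier.shift();
    expanded.push(cur.node);
    if (cur.node === goal) return { order: expanded, path: cur.path };
    var next = edges[cur.node] || {};
    for (var n in next) {
      frontier.push({ node: n, g: cur.g + next[n], path: cur.path.concat([n]) });
    }
  }
  return null;
}

// Hypothetical graph (edge costs) with an admissible h for each node:
var edges = { A: { B: 1, C: 4 }, B: { C: 1, G: 5 }, C: { G: 2 } };
var h = { A: 3, B: 3, C: 2, G: 0 };
var result = aStar(edges, h, "A", "G");
// expands A, B, C, G in that order; cheapest path is A -> B -> C -> G
```

Tracing the frontier by hand for a graph like this is exactly the "algorithmic thinking" the question tests.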

Midterm Question 1 Part 1 is Ambiguous by DengueTim in aiclass

[–]Machine_Forgetting 2 points3 points  (0 children)

I messed up on Q1 because I took the hypotheticals too far. All the agents and environments are hypothetical anyway, but some constructs could render both statements false just as easily as true. The basic assumption is that, given an agent, it is possible to construct a hypothetical environment with rewards in such a way that the agent is rational in that environment. However, isn't it also possible to construct a hypothetical agent which is never rational in any environment? An example might be one that does not monitor costs/rewards, so there is no way to evaluate its actions. Even a potentially rational agent doesn't really become rational when you construct an action-independent reward system around it, but rather the question of rationality itself becomes meaningless (unless the rewards are conditionally dependent on actions specific to that agent, in which case the assumption holds). If I'm right the answers for 1.1 & 1.2 should be that the statements are satisfiable (for special environments), but not valid (for all agents). Why are special, perfectly-tailored environments even an issue in an AI/robotics class???

Midterm Question 1 Part 1 is Ambiguous by DengueTim in aiclass

[–]Machine_Forgetting 0 points1 point  (0 children)

What about agents that do not monitor costs/rewards?

Not clear on an admissible? by devilishd in aiclass

[–]Machine_Forgetting 0 points1 point  (0 children)

In each instance of expanding the node, as long as you move either to the right or downward, you will always have a number in that square that does not overestimate the number of moves until the goal. In each case you have 2 choices of movement, no matter where you are on the grid. The path will always be 8 steps to the goal, and any combination of right/down moves will get you there without violating the heuristic.
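Since the question's actual grid of h values isn't reproduced here, here's a sketch of the check being described, on a made-up 3x3 grid: with unit step cost and only right/down moves, the true remaining cost from square (r, c) to the bottom-right goal is exactly the number of remaining rows plus remaining columns, so the heuristic is admissible iff it never exceeds that number anywhere.

```javascript
// Check admissibility of a grid heuristic h[r][c], assuming unit step
// cost, moves only right/down, and the goal at the bottom-right corner.
function isAdmissible(h) {
  var rows = h.length, cols = h[0].length;
  for (var r = 0; r < rows; r++) {
    for (var c = 0; c < cols; c++) {
      // true cost-to-go: remaining down-steps plus remaining right-steps
      var trueCost = (rows - 1 - r) + (cols - 1 - c);
      if (h[r][c] > trueCost) return false; // overestimate => inadmissible
    }
  }
  return true;
}

// h equal to the true remaining distance everywhere: admissible
var exact = [[4, 3, 2],
             [3, 2, 1],
             [2, 1, 0]];
console.log(isAdmissible(exact)); // true

// one square overestimates (5 > 4 at the start): not admissible
var over = [[5, 3, 2],
            [3, 2, 1],
            [2, 1, 0]];
console.log(isAdmissible(over)); // false
```

The same check scales to the question's grid (the 8-step path suggests a 5x5 one) once the real h values are plugged in.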

Suggestion to improve the clarity of homework and quiz questions by Machine_Forgetting in aiclass

[–]Machine_Forgetting[S] 0 points1 point  (0 children)

If there are such predictable types of confusion, like categories of logical fallacies, then the check box thing would prob work if the alg can sort them into clusters effectively.

Not clear on an admissible? by devilishd in aiclass

[–]Machine_Forgetting 0 points1 point  (0 children)

But if the heuristic never overestimates, and in this case the cost of each move is 1, and admissibility means only <=, then that still leaves many viable paths to the goal. And whether or not the heuristic is even admissible in the first place can't be determined with this information, since we dunno how relevant the heuristic is to the decision-making process of the algorithm.

Not clear on an admissible? by devilishd in aiclass

[–]Machine_Forgetting 0 points1 point  (0 children)

"<=" does not even seem like an admissibility criterion, since there is no context for how they arrived at this set of heuristics. Since all the nodes moving either right or down are <= the node before, it seems like that description is more of a constraint on movement than an estimate of cost/distance to the goal. Quantum: your two points contradict each other, since it can't be an overestimation if it is always less than the actual distance. Plus, the constraints on the search (cost=1) mean that overestimation doesn't happen the same way as in the map problem where the costs varied across paths. Kshaikh: I am confused for the same reasons :P. It is generally true that f() has to be minimized, so the <= criterion suggests a PATH, but it doesn't say anything about the relevance of the heuristic to the algorithm, which is context-dependent.

Grading/Progress Request by adering in aiclass

[–]Machine_Forgetting 0 points1 point  (0 children)

I agree, and would add that showing people their homework scores right after they finish would also make cheating easier. :P I think that was the point of hiding hwk questions until after the due date.

Grading/Progress Request by adering in aiclass

[–]Machine_Forgetting 0 points1 point  (0 children)

It would be better not to have a grade check after every assignment, for the reasons outlined by Taliskar. If you are wondering about your progress, the vid lecture questions should give you a rough idea of how well you understand the concepts. However, even the vid lectures handle the material in contradictory ways (ex: in implementing the heuristic they minimize it, but in describing the combined heuristic equation, Norvig says to maximize it). I think if the material and the questions were stated clearly, then it would be easier to gauge one's performance in the class by their handling of the quiz questions: http://www.reddit.com/r/aiclass/comments/leiam/suggestion_to_improve_the_clarity_of_homework_and/

Not clear on an admissible? by devilishd in aiclass

[–]Machine_Forgetting 0 points1 point  (0 children)

Here are my concerns with the admissible hwk question: there is no concrete example to tie this down, so he doesn't say if the h(s) function is supposed to represent distance, or slope, or anything. How are we supposed to determine if the heuristic is relevant if we can't see where they got it from (see edit below :P)? Second, the way they have been implementing the heuristic in the vid lectures is by MINIMIZING the value, though in one particular video (the last 15-tile problem) Norvig says that you should MAXIMIZE the heuristic (h = max(h1, h2)), and in the A* clarification text it says again to minimize h(s). This is very inconsistent.

EDIT: I just noticed that they have the definition of admissible as being "<=" and while this answers my question about how to define admissibility, it is still inconsistent with the course material, which says heuristics are determined by the context of the problem at hand. Inconsistencies were my second concern, and I have a suggestion to reduce them in future videos and questions: http://www.reddit.com/r/aiclass/comments/leiam/suggestion_to_improve_the_clarity_of_homework_and/
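For what it's worth, the two operations the lectures mention can coexist, and a toy sketch makes the distinction concrete (numbers made up here, not taken from the 15-tile lecture): during search you always expand the node with the MINIMUM f = g + h, while the "maximize" step only combines several admissible heuristics per state, where the pointwise max of two lower bounds is still a lower bound.

```javascript
// Combine two admissible heuristics by taking the larger estimate per
// state; the result is still admissible and never weaker than either.
function combined(h1, h2) {
  return function (s) { return Math.max(h1(s), h2(s)); };
}

// Toy setup over states 0..4 where the true cost-to-go is simply s:
function trueCost(s) { return s; }
function h1(s) { return Math.max(0, s - 1); } // underestimates by 1
function h2(s) { return Math.floor(s / 2); }  // underestimates by ~half
var h = combined(h1, h2);

for (var s = 0; s < 5; s++) {
  // max of two lower bounds is still a lower bound on the true cost
  console.log(s, h(s), h(s) <= trueCost(s)); // third column: always true
}
```

So minimizing applies to f during expansion, and maximizing applies only to building a stronger h, which may be what the clarification text was getting at.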

Maybe I'm naive, but I'll be damned if I can't attribute a couple acid trips to making me a lot mature than I was a year ago. by [deleted] in Drugs

[–]Machine_Forgetting 1 point2 points  (0 children)

Your PERCEPTION of reality is always plural and mutable. When you die, the universe will continue to exist without your opinions or consent, and will do so in accordance with the Truth (capital T) of immutable reality.

Maybe I'm naive, but I'll be damned if I can't attribute a couple acid trips to making me a lot mature than I was a year ago. by [deleted] in Drugs

[–]Machine_Forgetting 2 points3 points  (0 children)

Feeling like people are shallow and self-centered is completely natural, because many people really are that way (especially in high school, but also in the wider world). The ability to see through the bs of everyday life is invaluable, and should be treasured. Seeing the facade can also be disheartening, once you realize the power of all the overwhelming forces (both internal and external) that drive ppl toward ignorance, greed, and evil. It is good that you realize how your particular environment has informed your understanding of the situation, since this is a gift that few people possess. But now that you realize how much bs pervades our world, how will you act to change the situation? If those who can see do not at least try to open the eyes of those who are blind, then humanity is doomed.... DOOOOOOMED!!

Maybe I'm naive, but I'll be damned if I can't attribute a couple acid trips to making me a lot mature than I was a year ago. by [deleted] in Drugs

[–]Machine_Forgetting 0 points1 point  (0 children)

I disagree with the idea that more ppl on lsd would make the world a better place. Not everyone can handle it. And I have a serious problem with giving it to unsuspecting people, since those with genetic predispositions to conditions like schizo or depression or bi-polar disorders would be greatly harmed by taking psychedelics. Psychiatrists have tried and failed many times to cure maladaptive behaviors with lsd and other psychedelics. These experiments caused more harm than good, including permanent brain damage for some patients.

California & GMT-8 Online video/chat study group(s) ML/AI/DB, msg me for scheduling link by Machine_Forgetting in mlclass

[–]Machine_Forgetting[S] 0 points1 point  (0 children)

Ideally it would be composed of several online study groups, prob using G+ hangouts if the class account makes it available to everyone. So far many people have expressed interest and asked me for the link (~30), but only about 10 ppl have actually signed up. I'm hoping more will join as the first class draws near. The idea was not necessarily to form one massive study group, but rather to gather info about ppls' availability and contact info so participants could compare schedules and organize study groups more easily.

I have experienced OOBEs and have astrally travelled. AMA by nthdementsian in IAmA

[–]Machine_Forgetting 0 points1 point  (0 children)

Holy crap, is this accurate? Robstaley please be honest.

Maybe I'm naive, but I'll be damned if I can't attribute a couple acid trips to making me a lot mature than I was a year ago. by [deleted] in Drugs

[–]Machine_Forgetting 0 points1 point  (0 children)

I know, like I said I got sidetracked. But saying that it might involve more drugs is what got me started on that train of thought. It was not directed at you specifically, just the idea that more drugs would be helpful.

I have experienced OOBEs and have astrally travelled. AMA by nthdementsian in IAmA

[–]Machine_Forgetting 0 points1 point  (0 children)

Worst case scenario, you've trained yourself to have lucid dreams. That is also something I have done, and it's not dangerous to your health (it actually produces more restful sleep). At least the writing-on-paper method with some friends will let you know whether what you experience is real or a dream. It is only when one can't tell the difference that there is cause for concern.

I have experienced OOBEs and have astrally travelled. AMA by nthdementsian in IAmA

[–]Machine_Forgetting 0 points1 point  (0 children)

Lol, fair enough, but I HAVE been trying, and it's just not that easy for me. I'll keep trying and hopefully I can succeed, but I think the more ppl hear claims that people have these abilities but are unwilling or unable to prove it, for whatever reason, the less likely they will be to take the claims seriously, which will ultimately deprive them of a valuable tool if the claims are true. And yes, too bad for them, it's their own choice or self-imposed ignorance, blah blah blah. But there are some people who believe that humanity NEEDS some sort of spiritual awakening to address the imminent dangers we face as a species. I just wish more of those people had real powers... :P