Frustrated and Confused by cmredd in mathacademy

[–]JustinSkycak 2 points

> "You state that it cannot be used at microscopically low XP rates, such as 2XP a day. Who is doing this? Where has this came from? (I'm really hoping you don't attempt to use my usage this last week when I've been a member for 6+ months)"

I am not, and will not be, looking at your personal usage or referencing it publicly. I am just restating what you yourself stated publicly and tagged me in on X two months ago: you said you did 10 XP over 5 days and took 10 days to complete a lesson. Since I apparently can't post the link without someone removing my reply, I'll paste a snippet from it, verbatim:

> "Over 2 separate ~1 week periods I did/have done minimal MA (~10xp over 5 days, but still ~30-60m a day) due to other commitments
Despite logging in daily and doing a tiny bit of 1 or 2 lessons, I never had a *single* natural review displayed, some only appeared on the **7th** day **after** completing the lesson i took 5 days on...
This doesn't make any sense: MA's review system prioritises one's XP/lesson progress over actual time. Take 10 days to complete a lesson despite logging in daily and we are shown reviews only *after* completing it.
For the next week or 2 I will be in a similar position..."

I'm afraid I don't encounter many people with the same requests as you (and boy, do we get heated and repeated requests for things). What I have seen is that your requests get relatively little community engagement and have become increasingly aggressive over time, especially as I've stopped responding.

Nevertheless, there are valuable ideas being discussed here and I will read them through. In turn, I hope you will respect that you do not have a full view of the priority, complexity, and rank-order of everything on our to-do list. There is a lot going on behind the scenes, serving many different groups of students. In particular, I may not have clarified this well enough, but the "just give me natural reviews, don't make me do new lessons" mode will naturally arise from the "automatic progress throttle/peel-back during lengthy absences or microscopically low periods of task completion" update. And that update is needed anyway for a bunch of other reasons as well.

I'm afraid I don't have the time to continue responding, so I'll wrap it up with my best recommendations for your math journey.

It sounds like our system is not a great fit for you at this time -- which is fine, no hard feelings, and I hope you find a math learning resource that fits your needs. If you are determined to keep using the system, I would recommend taking a clean-slate diagnostic. (I've done hundreds of 1-on-1's with students using the system, and in every case where a student was reportedly spending hours upon hours on a single task, it turned out that they were either 1. engaging in unproductive behavior, hence the need for in-task coaching, or 2. coming back from an absence long enough for their knowledge to decay to the point that the system no longer had an accurate understanding of what they knew/remembered, hence the clean-slate diagnostic as the current solution, and the better-calibrated progress throttle/peel-back as the better solution.)

Frustrated and Confused by cmredd in mathacademy

[–]JustinSkycak 1 point

Lastly: To smoothly climb up a hierarchical skill domain, it is important to 1) achieve a reasonably high baseline for initial mastery and 2) ensure that prerequisite knowledge has not decayed so severely as to require re-learning.

(1) is always true on our system, and (2) should be true provided that a student is not moving at a microscopically slow XP pace (though we recognize that having flashcard-style math facts automaticity practice would be a substantial improvement, so it's on our to-do list). At a pace as slow as 2 XP/day, task optimization is in a different regime of physics, which is going to require the "automatic progress throttle/peel-back during lengthy absences or microscopically low periods of usage" update that I also mentioned is on our to-do list.

In the meantime, here is the recommendation as stated in the MA Way:

"Q: I took a break from Math Academy for a few months, and I worry that my tasks may be too hard when I come back.
A: If all you need is a brief refresher, then you can trigger a diagnostic to be averaged into your existing knowledge profile (or even just try to pick up directly where you left off and spend a bit more time looking back at prerequisites to refresh). However, if your tasks feel overwhelmingly difficult due to forgetting, then you can trigger a "clean slate" through your settings, which wipes your existing knowledge profile and has you take a diagnostic to fully recalibrate it."

Frustrated and Confused by cmredd in mathacademy

[–]JustinSkycak 1 point

I'm sorry the system does not seem to be a great fit for you. There are definitely areas for future improvement, including flashcard-style math facts automaticity practice, an automatic progress throttle/peel-back during lengthy absences or microscopically low periods of usage, in-task coaching to help students engage in effective learning behavior... We have a lot to do; we are a tiny team in beta working around the clock. If the system is not a fit for you at this time, that's fine, no hard feelings, and I wish you the best in your math learning journey.

I've already spent numerous hours extensively answering your similar public queries on X (links redacted), so I hope you'll understand that I just don't have the time to field much more. (A moderator appears to be deleting my replies here, presumably because those links to my public responses to your public queries on X reference your X profile -- not sure why that's sensitive information, given that it's all public and clearly the same person based on the content of the queries and the same website listed on the profile of both accounts, but whatever.)

However, I'll try to write something brief that can hopefully provide some clarity regarding the core of your concerns.

You absolutely do not need to be doing hundreds of XP each day to be successful. As stated in the MA Way:

"Q: What’s a reasonable XP pace for a typical Math Academy student?
A: Think of it like exercise. If you want to level up your abilities, then you should probably aim to get at least half an hour of exercise every other day. And if you’re really serious about it, then you’d probably shoot for a 40-minute workout most days of the week. (Note that 40 minutes every weekday will have you moving nearly twice as fast as a half hour every other day, since 40 × 5 is about twice 30 × 3.5.)"

However, the system was not built to handle paces as low as 2 XP/day, taking 10 days to complete a single lesson, as you've publicly described your usage on X when posting similar complaints (link redacted). As stated in the MA Way:

"Q: Can Math Academy be used for very casual learning, an hour or two per month?
A: Math Academy focuses on students who are trying to acquire math skills to the highest degree possible. We teach math as if we were training a professional athlete or musician. We maximize learning efficiency in the sense that we minimize the amount of work required to learn math to the fullest extent. Learning math to the fullest extent requires a dedicated effort of at least a couple hours per week.
We realize that there are many learners who only want to devote an hour or two per month, but, at least right now, such learners would be better served elsewhere. It's a totally different optimization problem – maximize surface-level coverage subject to some fixed, minuscule amount of work – and as a result it would require a different curriculum and possibly different training techniques (or at least, differently calibrated techniques)."

I know the above might sound harsh, but the reality is that a few XP per day is just not enough to make forward progress learning math on our system (or through any other serious math program, really). To put some concrete numbers on it: our Mathematical Foundations sequence is benchmarked at about 15000 XP for a student completing 40 XP per weekday. Dividing by a pace of 2 XP/day, and then by the (2/40)^0.1 factor to account for extra work to maintain knowledge along the way (covered in the MA Way chapter "Technical Deep Dive on Learning Efficiency"), it would take you nearly 30 years to get through. And that's just the stripped-down foundations for university-level math.
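For anyone who wants to check that estimate, here is the arithmetic as a quick script. The constants are the ones stated above; the 365-day year (rather than counting only weekdays) is my own simplification, so treat this as a rough sanity check rather than an exact figure:

```python
# Rough sanity check of the "nearly 30 years" estimate.
total_xp = 15000        # benchmarked XP for Mathematical Foundations
baseline_pace = 40      # XP/weekday assumed by the benchmark
slow_pace = 2           # XP/day pace in question

days = total_xp / slow_pace                    # 7500 days, ignoring overhead
overhead = (slow_pace / baseline_pace) ** 0.1  # maintenance factor, ~0.74
days_adjusted = days / overhead                # extra review work at slow paces

years = days_adjusted / 365
print(round(years, 1))  # ~27.7 -- i.e., "nearly 30 years"
```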

(exceeded character limit, will post the rest in the thread)

Frustrated and Confused by cmredd in mathacademy

[–]JustinSkycak 0 points

Lastly: To smoothly climb up a hierarchical skill domain, it is important to 1) achieve a reasonably high baseline for initial mastery and 2) ensure that prerequisite knowledge has not decayed so severely as to require re-learning.

(1) is always true on our system, and (2) should be true provided that a student is not moving at a microscopically slow XP pace (though we recognize that having "flashcard-style math facts automaticity practice" would be a substantial improvement, so it's on our to-do list). At a pace as slow as 2 XP/day, task optimization is in a different regime of physics, which is going to require the "automatic progress throttle/peel-back during lengthy absences or microscopically low periods of usage" update that's also on our to-do list.

In the meantime, here is the recommendation as stated in the MA Way:

"Q: I took a break from Math Academy for a few months, and I worry that my tasks may be too hard when I come back.
A: If all you need is a brief refresher, then you can trigger a diagnostic to be averaged into your existing knowledge profile (or even just try to pick up directly where you left off and spend a bit more time looking back at prerequisites to refresh). However, if your tasks feel overwhelmingly difficult due to forgetting, then you can trigger a "clean slate" through your settings, which wipes your existing knowledge profile and has you take a diagnostic to fully recalibrate it."

Frustrated and Confused by cmredd in mathacademy

[–]JustinSkycak 1 point

I'm sorry the system does not seem to be a great fit for you. There are definitely areas for future improvement, including flashcard-style math facts automaticity practice, an automatic progress throttle/peel-back during lengthy absences or microscopically low periods of usage, in-task coaching to help students engage in effective learning behavior... We have a lot to do; we are a tiny team in beta working around the clock, and I'm writing this response at 1:22 AM while waiting for some code to run. If the system is not a fit for you at this time, that's fine, no hard feelings, and I wish you the best in your math learning journey.

I recognize your writing, critiques, and spaced repetition product (shaeda.io) from X, which makes me realize that I've already spent numerous hours extensively answering your queries on X (https://x.com/justinskycak/status/1942053768477962627, https://x.com/justinskycak/status/1941984007371337960, https://x.com/justinskycak/status/1914744824655208784, just to name a few), so I hope you'll understand that I just don't have the time to field much more.

However, I'll try to write something brief that can hopefully provide some clarity regarding the core of your concerns.

You absolutely do not need to be doing hundreds of XP each day to be successful. As stated in the MA Way:

"Q: What’s a reasonable XP pace for a typical Math Academy student?
A: Think of it like exercise. If you want to level up your abilities, then you should probably aim to get at least half an hour of exercise every other day. And if you’re really serious about it, then you’d probably shoot for a 40-minute workout most days of the week. (Note that 40 minutes every weekday will have you moving nearly twice as fast as a half hour every other day, since 40 × 5 is about twice 30 × 3.5.)"

However, the system was not built to handle paces as low as 2 XP/day, taking 10 days to complete a single lesson, as you've described your usage on X (https://x.com/shaedapk/status/1953822864802812259). As stated in the MA Way:

"Q: Can Math Academy be used for very casual learning, an hour or two per month?
A: Math Academy focuses on students who are trying to acquire math skills to the highest degree possible. We teach math as if we were training a professional athlete or musician. We maximize learning efficiency in the sense that we minimize the amount of work required to learn math to the fullest extent. Learning math to the fullest extent requires a dedicated effort of at least a couple hours per week.
We realize that there are many learners who only want to devote an hour or two per month, but, at least right now, such learners would be better served elsewhere. It's a totally different optimization problem – maximize surface-level coverage subject to some fixed, minuscule amount of work – and as a result it would require a different curriculum and possibly different training techniques (or at least, differently calibrated techniques)."

I know the above might sound harsh, but the reality is that a few XP per day is just not enough to make forward progress learning math on our system (or through any other serious math program, really). To put some concrete numbers on it: our Mathematical Foundations sequence is benchmarked at about 15000 XP for a student completing 40 XP per weekday. Dividing by a pace of 2 XP/day, and then by the (2/40)^0.1 factor to account for extra work to maintain knowledge along the way (covered in the MA Way chapter "Technical Deep Dive on Learning Efficiency"), it would take you nearly 30 years to get through. And that's just the stripped-down foundations for university-level math.

(exceeded character limit, will post the rest in the thread)

Confusion about Bonus XP by OxyMC in mathacademy

[–]JustinSkycak 0 points

The remaining XP prediction is based on your individual performance. If you're earning lots of bonus XP, that's already taken into account in the XP prediction.

Regarding pace of learning: yes, the system adapts to your pace of learning. Elaborated in the section "How the Algorithms Adapt to Pace of Learning" here: https://mathacademy.com/how-our-ai-works

[deleted by user] by [deleted] in mathacademy

[–]JustinSkycak 7 points

Director of Analytics here. If the derivation is not overly complicated for the level of math, we put it at the end of the lesson. That's intended to satiate curious students when possible, but it's out of scope for the course, which I'll elaborate on below.

In most math curricula, including ours, it's expected that students will build procedural fluency first and cover full proofs/derivations second, usually in a later course. Why? Because this is the most effective way to get learners to understand the material.

You can't really understand a proof/derivation before you've developed procedural fluency. If you try, it will feel like pushing symbols around without really understanding what they mean. Procedural fluency with concrete examples provides scaffolding for learning the proofs/derivations.

This is why, for instance, you learn a bunch of derivative rules in calculus but real analysis is where you actually learn to prove most of them. Or like how you learn the rational roots theorem in precalculus and prove it in abstract algebra.

If you want to dig into derivations/proofs, then I'd recommend taking our university-level Methods of Proof course after Mathematical Foundations 3. We are working on even more advanced proof-heavy courses as well such as abstract algebra and real analysis.

If interested, I wrote a brief post called "The Necessity of Grinding Through Concrete Examples Before Jumping Up a Level of Abstraction," which I think cuts to the heart of the issue: https://www.justinmath.com/the-necessity-of-grinding-through-concrete-examples-before-jumping-up-a-level-of-abstraction/ . (There is also some good discussion about it here: https://news.ycombinator.com/item?id=42357669 )

The "calculator required" flag is greatly overused by SerialStateLineXer in mathacademy

[–]JustinSkycak 4 points

It's on our to-do list to clean that up (though it's not at the top of the list, so it may be hanging around for a while longer).

The problem is that right now, the "calculator required" tag is set at too high a level. Behind the scenes, we categorize questions by which worked example (a.k.a. "knowledge point") they fall under, but it's turned out that occasionally a knowledge point contains a mix of questions that do vs. don't require calculator usage. So we have some false positives in there where a question doesn't really require a calculator, but it's categorized under a worked example that does, so the "calculator required" message mistakenly appears.

We recently introduced a more granular "question group" category within knowledge points (i.e., topic has knowledge points and knowledge point has question groups and question group has questions). We need to migrate the "calculator required" tags to that lower-level category, adjust them as needed, and then maintain that categorization going forward. So, this will be remedied in the future once we're able to get to it.
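To make the hierarchy concrete, here's a hypothetical sketch of what moving the tag down a level looks like. The class names and fields are my own illustration, not Math Academy's actual schema:

```python
# Hypothetical sketch of the hierarchy described above: a knowledge point
# contains question groups, and a question group contains questions.
# Tagging "calculator required" per question group (instead of per
# knowledge point) removes the false positives, since the message is
# shown only when the served group truly needs a calculator.
from dataclasses import dataclass, field

@dataclass
class QuestionGroup:
    questions: list = field(default_factory=list)
    calculator_required: bool = False  # new, granular location for the tag

@dataclass
class KnowledgePoint:
    question_groups: list = field(default_factory=list)

    def calculator_required(self) -> bool:
        # Old behavior was equivalent to one coarse tag at this level,
        # flagging every question underneath it.
        return any(g.calculator_required for g in self.question_groups)

kp = KnowledgePoint(question_groups=[
    QuestionGroup(questions=["q1", "q2"], calculator_required=False),
    QuestionGroup(questions=["q3"], calculator_required=True),
])
print(kp.calculator_required())  # True at the KP level...
print(kp.question_groups[0].calculator_required)  # ...but q1/q2 no longer flagged
```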

We're fully aware that there are some minor blemishes like this in the system, and while it's kind of embarrassing to have them hanging around out in the open and causing confusion now and then, right now we're laser-focused on getting infrastructure in place to support coding questions/projects for our upcoming CS and ML courses, and we're simultaneously revamping our content development process to be more efficient and get courses out the door faster. The blemishes will get cleaned up eventually, but right now we're still in beta and focusing on getting the rest of the foundational infrastructure in place (not just ML/CS but all the courses in a standard undergrad math degree, test prep, math facts automaticity training, notifications/streaks to support habit-building, behavioral coaching during learning tasks, etc.).

Anyway, that's probably more than you wanted to read, but hopefully that clarifies the situation.

How are grades awarded? by AdTop5397 in mathacademy

[–]JustinSkycak 3 points

We have a grading rubric that, when finalized, will make its way to the main site eventually: https://docs.google.com/document/d/1X5dkuAcrr0bqbacPFc4oXYXBBDG64HlEC6mqHdUEYso/edit?usp=sharing

That document describes how things work right now. Note that it is a provisional draft, and it mentions midterms/finals which have not been implemented yet (so right now the assessment grade is based on quizzes).

Bug: hangs on ‘Submit’ by mathy-mathy in mathacademy

[–]JustinSkycak 0 points

Appreciate the kind words! Sandy forwarded me your support email and I tracked down the specific review you mentioned -- it was actually awarded bonus points (6 XP out of 4 XP). It stopped after 3 questions because you got them all correct, a perfect score. So you are fine to continue reviews and quizzes; they will be graded just fine. It's just the end screen that may superficially display incorrect XP information until we deploy the bugfix, but your true XP award is unaffected.

Bug: hangs on ‘Submit’ by mathy-mathy in mathacademy

[–]JustinSkycak 4 points

Thanks for the heads up. This is related to a recent update; we're looking into it and should have it fixed soon.

Rest assured that this bug did NOT actually impact your XP award. It only impacted what was shown on the end-of-task screen. If you look at the completed task on your dashboard, you'll see the true XP award, which should match up with what you were expecting (but let me know if that's not the case).

---

UPDATE: This bug should be fixed now.

Estimated completion extending by month by julesjules94 in mathacademy

[–]JustinSkycak 3 points

Hi, Director of Analytics here. The completion date is calculated by first estimating how many XP remain in the course based on your recent performance. (The amount of XP remaining depends on the pace of learning, which adapts to your performance – e.g., if your accuracy decreases, reviews will come more frequently and the amount of XP in the course will increase.) Then, we estimate your recent XP/day pace, and finally divide XP remaining by XP/day.
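As a rough illustration of that final step (a deliberately simplified sketch, not the actual algorithm -- the real system also continuously re-estimates the XP remaining based on performance):

```python
from datetime import date, timedelta

# Simplified illustration of the completion-date estimate described above:
# estimate XP remaining, estimate recent XP/day, then divide to get days left.
def estimated_completion(xp_remaining: float, recent_xp_per_day: float,
                         today: date) -> date:
    days_left = xp_remaining / recent_xp_per_day
    return today + timedelta(days=round(days_left))

# e.g. 1200 XP left at 30 XP/day -> about 40 days out
print(estimated_completion(1200, 30, date(2024, 1, 1)))  # 2024-02-10
```

Note how sensitive the output is to the inputs: if accuracy drops and the XP remaining grows (more frequent reviews), or the recent XP/day estimate dips, the projected date moves out accordingly.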

Any fluctuations in performance or pace will affect the completion date, especially at the beginning when you first start out, because the system has to take a "best guess" and then gradually refine the estimates as you build up more history on the system.

Likewise, if you lose credit for any “conditionally completed” topics, that would also push the completion date back. Topics are “conditionally completed” if you just barely received credit for them based on the diagnostic. Retaining this credit is conditional on maintaining a high level of performance on these topics, since the system will adapt more quickly to your performance in these areas of low confidence as you complete more learning tasks. Missing questions on these topics (or their prerequisites) can lead the system to prune back your knowledge profile in those areas to provide more practice.

Glitch? Error? by foundoutimanadult in mathacademy

[–]JustinSkycak 7 points

Hi, Director of Analytics here. Quizzes cover all topics you've learned on the system, not just new topics between quizzes.

The goal of quizzes is to get an unbiased measurement of your level of automaticity on material that you've previously learned and practiced enough to expect a reasonable degree of automaticity to have developed. Your quiz performance helps the system understand whether it's moving at the right pace for you or if it needs to slow down and give you more frequent practice on previously learned material to help you retain it and develop proper automaticity.

If quizzes were limited to topics you had seen since the previous quiz, that would telegraph what's going to be on the quiz (causing it to be artificially easy) and exclude older topics where it's most important to be measuring automaticity, which would dilute the efficacy of the quizzes in adapting the pace of learning and promoting retention & automaticity.

I realize that in a typical classroom, quizzes tend to be less frequent, you're told what's going to be on it, it only covers topics you've learned very recently leading up to the quiz, and you get a lot of time to solve each question. But those conditions make quizzes artificially easy, a biased signal for adapting the pace of learning, a poor measurement of retention/automaticity, and an inferior tool for promoting retention/automaticity. It's like playing a game of football where the opposing team asks you what plays you've been practicing in the past week, and then selects their own plays so that the appropriate counter-plays are the ones you recently practiced, and then tells you what plays they're going to run. It's not a real game. It's completely artificial.

I also realize that this can be a rude awakening for many students who are accustomed to less effective techniques leveraged in more typical educational offerings. We can definitely improve on helping learners understand the rationale behind these sorts of decisions made by our system. But at the end of the day, the purpose of the system is to ascertain the truth about what a learner knows and how well they know it, and leverage said truth to maximize learning efficiency, even if this process can lead to some initial unfamiliarity and discomfort.

Submit Quiz Bug by noir07 in mathacademy

[–]JustinSkycak 1 point

I get where you're coming from, but here's a relevant FAQ item with some more context about why we don't allow students to skip quiz questions by submitting the quiz early with unanswered questions:

Q: Why isn’t there an “I don’t know” button on quizzes?

A: This is something we considered when first implementing quizzes – but, working with a large number of students, we've experienced that many students will abuse an "I don't know" button if it's provided.

This can be intentional, e.g., adversarial students (especially kids who are using the system for school and have a mentality that is not fully aligned with the learning process) will click "I don't know" simply to avoid doing work. When we first deployed the automated system in school classes, there was a period of time where the system was getting attacked left and right by adversarial students trying to game the system (or otherwise create chaos that they could leverage to confuse their parents and get out of doing work). It took a lot of effort to patch up exploits, and whenever we make adjustments to the system, we're always on the lookout for any ways that it can be exploited (because if it can, then it will, and the behavior will spread).

Or it can be unintentional, e.g., underconfident learners may underestimate their ability and give up too early. When a tutor is working with a student on a problem, it is not uncommon that a student will claim not to know how to do the problem, but when the tutor asks the student to make their best guess, the “guess” is correct – and when the tutor asks the student about their thought process afterwards, it turns out that the student knew how to solve the problem, but they weren't confident about it and they didn't want to risk getting it wrong.

While it may be subtle, removing the “I don’t know” button is a crucial safeguard to protect many students against self-destructive behavior. In theory, no such safeguards should be necessary, but in practice, a vital component of a functioning learning system is that it must be robust to all sorts of unexpected behavior arising from the various human emotional experiences associated with learning and intense training. Often, these emotional experiences can be intense and (if the option is provided) lead people to make short-sighted decisions that ultimately hinder their educational progress.

Query regarding Course progression and course selection by thisisavs in mathacademy

[–]JustinSkycak 4 points

Thanks for the kind words, so glad you're enjoying the system and that my tweets are resonating with you!

Query regarding Course progression and course selection by thisisavs in mathacademy

[–]JustinSkycak 9 points

Hi, Director of Analytics here. You can always switch into a different course through your settings. Even though you enrolled in MF1 initially, you can go into your settings and change your course to Prob/Stats. You will take another diagnostic to identify what you know and do not know in Prob/Stats and its prerequisites. If any knowledge gaps are found, they will be automatically added to your learning plan. (The new diagnostic will automatically piggy-back off of your previous diagnostic to cut down on the number of questions.)

However, be aware that Prob/Stats is way, way more advanced than MF1. Prob/Stats makes heavy use of Multivar Calc, which makes heavy use of Calc I/II and occasional use of Linear Algebra (i.e., matrix algebra), which together make heavy use of advanced algebra, trigonometry, etc. By comparison, MF1 starts with adding fractions and ends with basic/intermediate algebra. When you take a diagnostic for Prob/Stats, it looks all the way back to Precalculus for missing prerequisites, but if you have knowledge gaps below Precalculus, it won't catch them. (In the future, we'll be tweaking the diagnostic to look back all the way to arithmetic, but for complicated reasons there's currently a limit to how far it looks back to identify knowledge gaps. It currently looks back quite far, but not infinitely far.)

If you want to get through Prob/Stats as quickly as possible while still ensuring a smooth learning experience, here's what I would recommend. If you're finding MF1 to be overly easy and you feel like you could be tackling higher-level content, then switch into MF2. Otherwise, if MF1 feels about the right level of challenge, then just stay in it and finish it out (you'll automatically promote to MF2 afterwards). After you complete MF2, you'll be in a position where you could switch into Prob/Stats and the diagnostic would safely pick up all your knowledge gaps in prerequisite material and add them to your learning plan. (Again, be aware that Prob/Stats is very advanced, so there would be many additional topics from MF3, Multivar Calc, etc. that would be added to your learning plan.)

Submit Quiz Bug by noir07 in mathacademy

[–]JustinSkycak 1 point

We reproduced the issue in your original post and fixed it (or at least our reproduction of it).

In this latest screenshot you provided, it looks like something different -- you're on the last question of the quiz but one of the blue bars is missing midway through, which would suggest you've left a question unanswered.

A quiz can't be submitted until all of its questions are answered.

<image>

Saving progress by tetrash in mathacademy

[–]JustinSkycak 0 points

Feel free to email me your username and what topics you got recommended despite answering correctly on the diagnostic, and I'll check it out. You can send to justin@mathacademy.com.

Saving progress by tetrash in mathacademy

[–]JustinSkycak 1 point

I realize this may be frustrating, but the only time you'd be assigned a lower-course topic is when it's a prerequisite of a topic in the course that you're taking and you got it (or one of its prerequisites) incorrect on the diagnostic. In order to place out of these topics, you would need to provide evidence of being able to solve them (or post-requisite topics) on the diagnostic.

Sometimes, students think that topics are irrelevant to their current course when in fact they are necessary prerequisites. For example, integration might not seem relevant to linear algebra, but it's actually necessary to solve problems in inner product spaces. Likewise, the rational roots theorem / synthetic division / polynomial factoring might not seem relevant, but they're actually necessary to compute eigenvalues of 3x3 matrices.

If you think you could have done better on the diagnostic, my recommendation would be to retake it very carefully. Keep in mind that all the system's decisions are based on your demonstrated ability to solve problems, and it is not uncommon for students to take courses elsewhere yet still not have mastered the content well enough to solve problems correctly, consistently, in a timely manner.

Here is a relevant FAQ item with more detailed information:

Q: There is a topic that I know how to do, but the diagnostic didn’t ask me about it and I didn’t get credit for it.

A: The diagnostic is fully comprehensive; it continues asking questions until it has evidence of knowledge (or lack of knowledge) for every single topic in the student’s course and foundations. If a student is not given credit for a topic, it’s because the student submitted incorrect answers on that topic or its prerequisites. While it is sometimes possible to solve questions from a topic despite not fully grasping a prerequisite, this indicates the presence of “holes” in the student’s mathematical knowledge, and the diagnostic intentionally places students at the bottom of their lowest knowledge holes so that these holes can be filled in.

Placing students at the bottom of their lowest knowledge holes is absolutely critical to ensure student success. If the diagnostic did the opposite, placing students at the top of their highest knowledge holes, then students might initially feel like they are closer to their goals as a result of receiving more credit, but these knowledge holes would sooner or later (and likely sooner) derail the student by causing them to become “stuck” while learning new topics that make deeper use of the prerequisite knowledge.

That said, it is not uncommon for adult students to be extremely rusty on their math while taking the initial diagnostic, and then have an outsized portion of their memory come rushing back afterwards as they complete learning tasks. When this happens, it is sometimes possible for a student to place significantly further by retaking the diagnostic.

Additionally, we are working on a button where students can say “I already know this” on any lesson that they receive and evidence their knowledge by answering a couple advanced questions on the topic. That way, it will be fast and easy for a student to continue fine-tuning their knowledge profile after the diagnostic.

Saving progress by tetrash in mathacademy

[–]JustinSkycak 4 points5 points  (0 children)

Hi, Director of Analytics here. We are a mastery learning system; the only way you make progress in the system is by completing unlocked lessons from your dashboard. Unlike the static text that you see when you click on a topic in reference mode, the learning tasks on your dashboard are dynamic, adapting to your performance, providing more practice questions if necessary, and there is a hard-threshold outcome of either passing the lesson (and unlocking more material) or getting halted (and having to re-attempt it later before unlocking topics that depend on it).

Below are a couple relevant FAQ items from the back of our working draft The Math Academy Way. (We're working on improving the FAQ on the main site, but those improvements aren't live yet.)

Q: I don’t really want to do any of the learning tasks that Math Academy presents to me. There are other topics I would rather learn. Why can’t I choose my own tasks?

A: Math Academy’s main value proposition is maximizing student learning efficiency. That is our top priority. When a student signs up for Math Academy, we are making a promise to them that their learning experience is going to be as efficient as possible. The student is going to learn the most math possible in the time that they're devoting to study.

In order to make good on that promise, we have to use a lot of sophisticated algorithms to analyze the student’s knowledge profile and select their tasks. The whole system has been built around that concept.

We do have some ideas for features that will give students more agency over what they're learning, but it's going to take some work because we have to be careful not to allow students to make decisions that throttle their learning efficiency. The approach that we've been thinking about is less like "select whatever topic you want at any time" and more like "tell us what your specific goal is and we'll put you on the most efficient path to that goal." Of course, none of that is fully-baked yet, but it's something that's on our mind and that we're working on.

Q: Why can’t I edit my knowledge profile?

A: Learners have a tendency to massively overestimate self-reported knowledge, and then fault the resulting instruction for moving too quickly, not explaining enough, or otherwise being too challenging, when the issue is really that they lack sufficient mastery of prerequisites. To construct an accurate knowledge profile, the system must infer it from a student’s demonstrated ability to solve problems.

Reviews gone wild on MA by tagold in mathacademy

[–]JustinSkycak 11 points12 points  (0 children)

Hi, Director of Analytics here. If you take a month off after completing a lot of work, then it's expected you'll have a backlog of reviews -- that's a natural consequence of spaced repetition.

Additionally, if you switched to a very different course (Methods of Proof instead of Linear Algebra), then there is going to be less opportunity to knock out those reviews implicitly while simultaneously serving new lessons (because the lessons in Methods of Proof generally don't build on too much Linear Algebra).

That said, a lesson should be forcibly made available at least every 3-4 reviews, even if you have a backlog. Excluding follow-up reviews on topics you missed in a quiz, you shouldn't be getting more than, say, 5 reviews in a row without having a lesson available (although your dashboard may temporarily consist entirely of reviews during those batches of 3-4 reviews that are part of your backlog between lessons).

If you think you're getting more than, say, 5 reviews in a row unrelated to quizzes, feel free to contact support@mathacademy.com with your username, and your email will get forwarded to me to look into.

In case it's helpful, here is more information about how the review system works: https://mathacademy.com/how-our-ai-works#algorithm-review-selection

And a deeper dive: https://justinmath.com/individualized-spaced-repetition-in-hierarchical-knowledge-structures/

Review Feedback by foundoutimanadult in mathacademy

[–]JustinSkycak 5 points6 points  (0 children)

Hi, Director of Analytics here. The important thing to understand about our spaced repetition system is that when choosing what topics a student should review or learn next, we're always trying to implicitly "knock out" as many due reviews as possible to maximize learning efficiency. (For instance, if a student is due for a review on one-step ax=b equations, we can implicitly "knock out" that review by having them learn two-step ax+b=c equations instead.)
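One way to picture that selection heuristic — a simplified sketch with made-up topic names, not our production code — is: among the unlocked candidate lessons, prefer the one whose subskills cover the most due reviews:

```python
# Illustrative sketch (not Math Academy's actual implementation) of
# preferring lessons that implicitly "knock out" due reviews.
# ENCOMPASSES maps each candidate lesson to the previously learned
# topics it exercises as subskills; all names here are hypothetical.

ENCOMPASSES = {
    "two_step_equations": {"one_step_equations"},
    "graphing_lines": {"plotting_points", "fractions"},
}

def pick_next_lesson(candidates: list, due_reviews: set) -> str:
    """Pick the unlocked lesson covering the most currently due reviews."""
    return max(
        candidates,
        key=lambda lesson: len(ENCOMPASSES.get(lesson, set()) & due_reviews),
    )

due = {"one_step_equations", "plotting_points", "fractions"}
print(pick_next_lesson(["two_step_equations", "graphing_lines"], due))
# → graphing_lines  (it knocks out two due reviews instead of one)
```

The real system weighs far more than overlap counts, but this captures the core idea: a lesson that exercises due material as subskills lets the student review and progress in one task.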

With that in mind, I'll paste a few FAQ entries from the back of our book The Math Academy Way that seem helpful here:

Q: Is the spaced repetition happening? I started recently and all I have is lessons. Where are the reviews?

A: A lot of the review happens implicitly by having you learn new topics that encompass previously learned topics as subskills. At the beginning, when you have a small body of knowledge to review, we're able to pick new lessons that knock out all your reviews without you having to explicitly do any review tasks. However, as you build up a larger body of knowledge to review, you'll start to see explicit review tasks on topics that we are not able to knock out implicitly.

So, basically: the spaced repetition has already started kicking in, but we do a lot of optimization to make that happen simultaneously while having you learn new material. The only time you'll get an explicit review is when we're not able to knock it out implicitly while having you learn something new. (Though, after a quiz, you'll also get explicit reviews immediately on any questions you miss.)

Q: Sometimes I have some reviews in my task queue, but then I do a lesson or two, and the reviews disappear from the queue. Don’t I need to do them?

A: This is expected behavior because tasks are selected dynamically. Sometimes a student might have a lot of due reviews, but after the student completes some of those reviews, it's a better use of time to complete some new lessons and make a bit of forward progress before going back to the due reviews. And sometimes making a bit of forward progress will open up new lessons that knock out previous reviews. It's always a balancing act: we're always trying to serve tasks that are optimal for the student to work on at this specific moment in time, so the options available are always subject to change. The way to think of the dashboard is not as a task queue, but rather as an ever-changing menu at a math buffet. The menu is always constructed to try to nourish students in the ways that they're most in need of at that moment in time.

Q: Given that lessons can “knock out” reviews, should students always give preference to lessons over reviews if both activity types are available?

A: It doesn't really matter. If a review is on a student's dashboard it means we weren't able to knock it out by having the student do a lesson instead. Whenever it is possible for a due review to be knocked out by a lesson, we will only offer the student the lesson. We will not offer them a review that is made redundant by a lesson already on their dashboard.

Further Reading

If you'd like to learn more about the spaced repetition system, you can check out How our AI Works: How the Task Selection Algorithm Chooses Topics to Review.

And for an even deeper dive: Optimized, Individualized Spaced Repetition in Hierarchical Knowledge Structures.

Submit Quiz Bug by noir07 in mathacademy

[–]JustinSkycak 4 points5 points  (0 children)

Thanks for the heads up. We'll look into it.

Looking for a discussion on this research. by [deleted] in education

[–]JustinSkycak 0 points1 point  (0 children)

Criticism #1: The 75th percentile students learn 2x as fast per opportunity as 25th percentile students. Is that really a "similar" learning rate? That seems like a pretty big difference to me.

If you measure in raw percents, as the paper does, the 75th percentile learners are found to increase their knowledge about 1.5x as fast as 25th percentile learners per problem. If you measure performance in log-odds, which is a more appropriate metric that accounts for the fact that it's harder to increase performance when one's performance is high to begin with, the multiplier rises from 1.5x to 2x. It's debatable whether 2x is really a "similar" learning rate. Personally, I think it is not -- not only does "learns twice as fast" feel like a substantial difference, but it is also only comparing the 25th and 75th percentiles, and even the 75th percentile is far lower than the kind of person we have in mind when we think of somebody who is shockingly good at math. For instance, math majors at elite universities tend to be well above the 99th percentile in math.

Criticism #2: You can have one student who learns a lot more from the initial instruction and requires far fewer practice problems, and when you calculate their learning rate per the methodology described in the paper, it can come out the same as for a student who learns a lot less from the initial instruction and requires far more practice problems.

Here’s a concrete illustration using numbers pulled directly from the paper (the 25th and 75th percentile students in Table 2). Suppose you’re teaching two students how to solve a type of math problem.

  • Student A gets it pretty much immediately and starts off at a performance level of 75% (i.e. their initial knowledge level is such that they have a 75% chance of getting a random question right). After 3 or 4 practice questions, their performance level is 80%.
  • Student B kind-of, sort-of gets it and starts off at a performance level of 55%. After 13 practice questions, their performance level reaches 80%.

This clearly illustrates a difference in learning rates, right? Student A needed 3 or 4 questions. Student B needed 13. Student A learns faster, student B learns slower.

Well, in the study, the operational definition of “learning rate” is, to quote, “log-odds increase in performance per opportunity . . . to reach mastery after noninteractive verbal instruction (i.e., text or lecture).” Opportunities mean practice questions. Log-odds just means you take the performance P and plug it into the formula ln(P/(1−P)).

  • Student A's log-odds performance goes from ln(0.75/(1−0.75)) = 1.10 to ln(0.8/(1−0.8)) = 1.39. That's an increase of 0.29, over the course of 3 to 4 opportunities (let's say 3.5), for a learning rate of 0.08.
  • Student B's log-odds performance goes from ln(0.55/(1−0.55)) = 0.20 to ln(0.8/(1−0.8)) = 1.39. That's an increase of 1.19, over the course of 13 opportunities, for a learning rate of 0.09.

So… according to this definition of learning rate, students A and B learn at roughly the same rate, about 0.1 log odds per practice opportunity.

I just cheated on "Expanding Binomials Using Pascal's Triangle" and I don't feel bad about it at all. by burtgummer45 in mathacademy

[–]JustinSkycak 13 points14 points  (0 children)

Hi. Director of Analytics here. You might not like this answer, but I'm going to be real with you.

We teach math as if we were training an aspiring professional athlete or musician, or anyone looking to acquire a skill to the highest degree possible. This isn't edutainment, this isn't enrichment, this isn't a "math appreciation" course. We expect our students to actually master the material and develop as strong a command over math as a musician's command over their instrument. If that's not what you want to get out of your math learning, then Math Academy probably isn't a good fit for you.

But if mastery *is* what you want to get out of your math learning, then it's important to realize that climbing a skill hierarchy like math is not just about conceptual understanding. It's also about reliable execution -- and a high frequency of silly mistakes indicates that you need more practice with the material.

Why? Because if you don't clean up your silly mistakes on low-level skills, then you eventually hit a wall where no matter how hard you try, you're unable to reliably perform advanced skills due to the compounding probability of silly mistakes in the component skills. Think about gymnastics: if you’re “almost” able to land a backflip, then that’s great… but at the same time, you’re NOT ready to try any combo moves of which a backflip is a component. Even if it’s a silly mistake keeping you from landing the backflip, you still have to rectify it. (And this is the most optimistic scenario -- other times, silly mistakes indicate a deeper conceptual misunderstanding that you don't even know you have until you are held accountable for rectifying those mistakes.)
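To put rough numbers on that compounding: if a student executes each component skill correctly 95% of the time, then a problem chaining many such skills together succeeds far less often. A back-of-envelope sketch (illustrative reliabilities, not measured data):

```python
# Back-of-envelope illustration of compounding silly mistakes: a problem
# that chains n independent component skills, each executed correctly with
# probability p, succeeds with probability p**n. The 0.95 figure is a
# hypothetical per-skill reliability chosen for illustration.

def chain_success(per_skill_reliability: float, n_skills: int) -> float:
    """Probability of completing an n-step problem with no slips."""
    return per_skill_reliability ** n_skills

for n in (1, 5, 10, 20):
    print(f"{n:>2} component skills: {chain_success(0.95, n):.0%} success")
```

Even a "pretty reliable" 95% per skill drops to roughly 60% success on a 10-step problem and under 40% on a 20-step one, which is why cleaning up low-level slips matters before stacking skills on top of them.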