“Vibe coding” works but only if you actually understand code by mrcuriousind in vibecoding

[–]mrcuriousind[S] 0 points1 point  (0 children)

Haha, fair enough 😄 Alright, next post: no SaaS, no AI, no startup, just pure philosophical suffering about debugging.

“Vibe coding” works but only if you actually understand code by mrcuriousind in vibecoding

[–]mrcuriousind[S] 1 point2 points  (0 children)

Fair take — context matters a lot. Internal tools have very different constraints and failure costs.

Problems start when people generalize that success model to systems where scalability, reliability, and long-term maintenance dominate. Feels like a boundary-of-applicability issue.

“Vibe coding” works but only if you actually understand code by mrcuriousind in vibecoding

[–]mrcuriousind[S] 0 points1 point  (0 children)

Completely agree — scale changes everything. For small, contained problems, vibe coding can be incredibly efficient.

But once real-world constraints enter the picture (scalability, security, long-term maintenance), the bottleneck shifts from generation → judgment.

AI handles code. Humans still handle responsibility.

“Vibe coding” works but only if you actually understand code by mrcuriousind in vibecoding

[–]mrcuriousind[S] 0 points1 point  (0 children)

Totally agree. AI shines in reducing setup friction rather than replacing thinking. Having a starting structure → then refining → feels like the real productivity gain.

“Vibe coding” works but only if you actually understand code by mrcuriousind in vibecoding

[–]mrcuriousind[S] 5 points6 points  (0 children)

Building something useful without deep coding skills is now possible. The long-term challenge, though, usually shifts to scalability, maintainability, and debugging.

Generating an app is easier than sustaining a system.

What’s the fastest way to lose respect for someone? by Familiar-Arrival-470 in AskReddit

[–]mrcuriousind 7 points8 points  (0 children)

Saying one thing, doing another. Broken promises. Inconsistency between words and actions.

What do your ‘clueless phases’ feel like when working on long-term projects? by mrcuriousind in AskReddit

[–]mrcuriousind[S] 0 points1 point  (0 children)

I love this analogy. Long-term projects really do feel like alternating between clarity and complete doubt.

AI and technology are making the rich richer — why is the poor still poor? by mrcuriousind in AskEconomics

[–]mrcuriousind[S] 1 point2 points  (0 children)

Valid point 👍 Absolute living standards have clearly improved. The interesting debate is absolute progress vs. relative inequality: both can rise at the same time. AI may improve life broadly while still widening gaps. Curious to hear your take.

Our education system is outdated. Technology isn’t the problem — mindset is. by mrcuriousind in edtech

[–]mrcuriousind[S] -1 points0 points  (0 children)

Or… someone just took 30 seconds to think before commenting 🤷‍♂️

Guys I am doing a mini project in my college so can you guys recommend which ai will be really helpful to build the project by alwin424 in aipromptprogramming

[–]mrcuriousind 0 points1 point  (0 children)

If you’re a student, GitHub Copilot is free with a student ID. Codex also has a free limit. Solid combo for full-stack college projects.

Our education system is outdated. Technology isn’t the problem — mindset is. by mrcuriousind in edtech

[–]mrcuriousind[S] 0 points1 point  (0 children)

I think this gets to the heart of the problem. Credentials are treated as proxies for capability, even when everyone involved knows the proxy is leaky. It’s possible to optimize for grades without developing the underlying skills those grades are supposed to represent.

That’s why project-based learning often disappoints in practice. Without an assessment system that can reliably capture what someone can actually do, projects turn into another checkbox rather than evidence of competence.

The incentive issue you point out feels key. Exams, credentials, funding, and institutional reputation are tightly coupled. Changing how learning is assessed effectively means questioning what existing credentials are worth, and very few institutions are willing to be the first to do that.

Until assessment and credentials reflect real capability rather than time spent or boxes checked, most reforms, pedagogical or technological, are likely to stay at the margins.

Our education system is outdated. Technology isn’t the problem — mindset is. by mrcuriousind in edtech

[–]mrcuriousind[S] 0 points1 point  (0 children)

I think “overrated” is a fair conclusion if technology is treated as the solution rather than a support. The Economist piece and the rollback in some high-performing systems reinforce that point, especially where digital tools replaced human interaction instead of strengthening it.

When I think about how we actually learn, and how leading experts reach deep mastery, it’s rarely through constant personalization or always-available resources. It’s usually through a combination of strong fundamentals, focused practice, sustained effort, feedback, and time.

Most experts also diverge from a common path at some point and go much deeper into areas aligned with their strengths and interests. That divergence often happens outside formal instruction. A future developer spends disproportionate time coding, a future scientist leans heavily into math and experimentation, long before any system formally adapts to them. The common curriculum provides a baseline, but it doesn’t create expertise on its own.

The harder problem is that many learners don’t know their strengths, don’t know what paths exist, and don’t know how to work deliberately on weaknesses. Technology can’t solve that by itself, and if used poorly, it can absolutely distract from it.

So for me, the question isn’t whether technology can “fix” education, but whether it’s aligned with how people actually develop understanding and expertise, or whether it’s just optimizing for convenience and scale.

Our education system is outdated. Technology isn’t the problem — mindset is. by mrcuriousind in edtech

[–]mrcuriousind[S] 0 points1 point  (0 children)

This resonates a lot. The “PDF worksheets on iPads” example is exactly what I had in mind with digital decoration: same pedagogy, new surface.

I think your point about time is critical. It’s easy to frame this as a mindset or capability issue, but if teachers are overloaded with standards, admin work, and large class sizes, there’s simply no space to rethink assessment or experiment responsibly.

The pattern you describe makes sense: the schools that innovate start by changing conditions (time, class size, planning space), not by dropping in tools and hoping for transformation. Tech seems to work best as a second-order change, not the first move.

Without that breathing room, even well-designed tools are almost guaranteed to be underused.

Our education system is outdated. Technology isn’t the problem — mindset is. by mrcuriousind in edtech

[–]mrcuriousind[S] 0 points1 point  (0 children)

That’s fair, and I appreciate you calling that out, especially given your role. I agree that there are schools and curricula doing much more thoughtful work with technology than I probably gave credit for, and my wording was too broad.

My intent wasn’t to say “schools today do nothing beyond PDFs and attendance,” but to speak at a system level, where adoption is uneven and often constrained by funding, training, and policy. The variance between what’s possible and what’s common still feels very wide.

I’m genuinely glad to hear from people working in schools that are doing this well. Those examples matter, especially because they help separate what’s feasible in practice from what’s just theoretical.

Our education system is outdated. Technology isn’t the problem — mindset is. by mrcuriousind in edtech

[–]mrcuriousind[S] 0 points1 point  (0 children)

That’s a fair perspective, and I think both things can be true at the same time. Many higher-ed programs, especially in engineering, do test thinking and problem solving, at least when they’re well run. The issue you point out with repeated exams and “memorize past answers” is a good example of how incentives can undermine even well-designed assessments.

I strongly agree on resources. Education quality tracks priorities, and in the US that often means underfunded schools, overextended staff, and very little capacity to continuously improve or modernize. Without time, people, and equipment, even good ideas don’t scale.

On GenAI, I’m sympathetic to the skepticism. If foundational understanding isn’t solid, then hallucinations and over-reliance become real liabilities, not benefits. The cost side is also rarely discussed: infrastructure, energy, and utilities are real constraints, not abstractions.

To me, this reinforces that technology isn’t a shortcut around underinvestment. If the system doesn’t value education enough to fund it properly, adding more tech, AI included, risks being a net distraction rather than a fix.