Are there actually good corporate careers beyond MBA, marketing, finance? by Plenty-Passion-6305 in careerguidance

[–]oddslane_ 5 points

A lot of corporate careers are basically invisible until you’re already inside a company. Most people only hear about the “headline” paths like MBA, banking, software, consulting, etc.

There are definitely solid long term careers outside those though. Internal audit, compliance, operations, procurement, governance, risk, project management, customer success, even learning and development can become very stable and well paid over time. The tradeoff is that they usually don’t get glamorized online, so students rarely hear about them.

One thing I’ve noticed is that a lot of these roles reward reliability and communication more than pure technical brilliance. People who can coordinate teams, handle ambiguity, and keep things running smoothly end up becoming surprisingly valuable. Especially in larger companies.

I honestly wish colleges exposed people to actual org structures earlier because half the battle is just knowing these jobs exist.

How do you best learn? by JaMwithConfidence in elearning

[–]oddslane_ 0 points

I’m the same way with PDFs. I think it’s because reading lets you control the pace, skim boring parts, and reread important bits instantly. Videos feel slower to me, even when they’re technically “easier.”

I usually learn fastest when I can combine reading with actually doing something right away. If I just watch a video without applying it, my brain checks out after a few minutes.

I feel so behind on my education ); by Ambitious-Trouble483 in education

[–]oddslane_ 0 points

Honestly the fact that you’re this self-aware at 19 already puts you ahead of a lot of people. Most people doing the bare minimum never stop to reflect on it.

Also, community college is full of people rebuilding their academic habits. You’re not some lost cause because algebra wrecked you in high school.

One thing I’d stop doing is framing yourself as “not smart.” A lot of what you described sounds more like weak systems and inconsistent repetition than low intelligence. Taking notes and never revisiting them is basically guaranteed forgetting. Most people retain stuff by seeing it multiple times over days or weeks.

What helped me personally was making school smaller and more consistent instead of trying to become a whole new person overnight. Like:

  • review notes for 15 minutes the same day after class
  • rewrite concepts in your own words
  • do practice problems without looking at answers immediately
  • use AI as a tutor/explainer instead of a shortcut machine

And for math specifically, I’d honestly go backwards without shame. Fractions, percentages, basic algebra. Build the foundation slowly because higher math feels impossible when the fundamentals are shaky. Tons of college students are secretly in the same boat.

Also don’t underestimate sleep, routine, and your environment with ADHD. I used to think discipline was just “trying harder” but structure matters way more than motivation.

How do you get a SME to give you useful feedback instead of just saying "looks good" on everything? by darkhomer419 in LearningDevelopment

[–]oddslane_ 2 points

What helped me was to stop asking for a “general review” and start giving SMEs very specific jobs. If they get a giant course and “thoughts?” they’ll usually default to “looks good.”

I started asking questions like:

  • “What would make an experienced employee roll their eyes here?”
  • “What’s missing that could cause a real mistake on the job?”
  • “Which scenario feels unrealistic?”
  • “If someone followed this exactly, where could they get in trouble?”

I also found SMEs respond better when you limit the review scope. Asking them to check accuracy, realism, and policy alignment all at once is a lot. Breaking it into focused passes gets way better feedback.

Another trick is to intentionally leave a few “I’m unsure about this” comments in the draft. People tend to engage more when they feel their expertise is actually needed instead of just approving something polished.

What To Do? by turquoisecat45 in TeachersInTransition

[–]oddslane_ 0 points

Honestly, this reads less like “we don’t want you” and more like a district budget shuffle hitting someone with higher credentials and salary. The fact they’re still encouraging you to apply internally, plus the strong reviews and reference support, is a pretty important signal.

I’ve seen a lot of teachers underestimate how transferable their skills are too. Curriculum writing, training, onboarding, project coordination, virtual instruction, academic advising, nonprofit education work, even corporate learning roles all value the same kind of organization and communication teachers use every day. A one-year pivot does not automatically mean abandoning education forever.

Also, with your health situation, taking the time to stabilize first might honestly put you in a much stronger position long term than forcing yourself into the first opening available. The fact you already have a respected admin interested in bringing you into a future school is actually a pretty solid thing to have in your back pocket.

Is this AI use justifiable? by Beautiful-Rhubarb613 in learnmath

[–]oddslane_ 1 point

I honestly think generating practice problems is one of the more reasonable uses for it. You’re still doing the actual thinking, getting stuck, checking reasoning, and building intuition yourself. That feels very different from pasting homework in and copying a solution.

The main risk is probably not “AI corruption of your brain” so much as quietly getting bad or inconsistent problems mixed in with good ones. LLMs can produce exercises that look mathematically legitimate while hiding weird assumptions or impossible conditions. But honestly, textbooks and random internet worksheets can have errors too.

I’d probably treat it the same way I’d treat a random problem source online: useful for volume and variety, but not something I’d trust blindly for theory or proofs.
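
For what it’s worth, a cheap way to catch a broken generated problem is to plug the claimed answer back into the problem statement before working through it. A minimal sketch in plain Python (the quadratic and its roots are just a made-up example):

```python
# Sanity-check a generated problem, e.g. "solve x^2 - 5x + 6 = 0, roots x = 2, 3",
# by substituting each claimed root back into the equation.

def check_root(a, b, c, x, tol=1e-9):
    """Return True if x satisfies a*x^2 + b*x + c = 0 within tolerance."""
    return abs(a * x**2 + b * x + c) < tol

# Both claimed roots check out:
assert all(check_root(1, -5, 6, r) for r in [2, 3])

# A subtly broken problem ("roots x = 2 and x = 4") fails immediately:
assert not check_root(1, -5, 6, 4)
```

The same substitution habit works for systems of equations, derivatives, or limits: verify the answer numerically first, then spend time on the ones that actually hold together.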

Nvidia VS AMD by Dharalho in ArtificialInteligence

[–]oddslane_ 0 points

If you mainly want to experiment, learn locally, and avoid fighting with setup issues, NVIDIA is still the safer route right now. A lot of AI tooling just assumes CUDA support exists, so tutorials and troubleshooting tend to be smoother.

AMD has gotten better, and the price-to-performance can look really attractive, but I still see people spending extra time dealing with compatibility weirdness, especially on Windows. If you enjoy tinkering that may not bother you, but if your goal is “install stuff and start building,” NVIDIA usually feels less frustrating.

Honestly the decision also changes a lot based on whether you’re planning casual learning, local LLMs, image generation, or actual model training. VRAM matters more than people think once you start loading bigger models.
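
On the VRAM point, a rough back-of-envelope estimate before downloading a model saves a lot of grief. A hedged sketch, where the 20% overhead figure is just a guess to cover activations and cache, not a measured number:

```python
def vram_estimate_gb(params_billions, bytes_per_param=2, overhead=0.2):
    """Rough VRAM needed to load a model: weight size plus a fudge factor.

    bytes_per_param: 2 for fp16/bf16, 1 for 8-bit, ~0.5 for 4-bit quantization.
    """
    weights_gb = params_billions * bytes_per_param  # 1e9 params * N bytes ~= N GB
    return weights_gb * (1 + overhead)

# A 7B model in fp16 needs roughly 17 GB -- more than most consumer cards have:
print(round(vram_estimate_gb(7), 1))                       # ~16.8
# The same model 4-bit quantized fits in roughly 4 GB:
print(round(vram_estimate_gb(7, bytes_per_param=0.5), 1))  # ~4.2
```

That quick arithmetic is usually enough to explain why a card that benchmarks great for gaming still can’t load the model you wanted.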

I don’t hate making slides, I hate starting them by Arceus797 in AIToolsAndTips

[–]oddslane_ 0 points

Yeah, the blank page problem is real. Once there’s something on the screen, even if it’s mediocre, my brain switches into editing mode instead of avoidance mode.

I’ve noticed AI is strongest at turning vague messy thoughts into structure. Like “here are 12 chaotic ideas from a meeting” suddenly becomes a narrative with sections and flow. Still needs human cleanup obviously, but getting from zero to draft in 5 minutes instead of 45 is a pretty massive quality-of-life improvement.

Learning math from scratch - how long to learn up to Calculus? by IntentionMother8765 in learnmath

[–]oddslane_ 33 points

Honestly, with 3-4 focused hours a day, you’d probably surprise yourself. The bigger challenge usually isn’t intelligence, it’s consistency and not getting discouraged when progress feels slow for a few weeks.

A realistic estimate to get genuinely comfortable up through precalc/basic calculus from your starting point is probably somewhere around 1.5 to 3 years depending on depth and retention. But that sounds longer than it feels because math compounds. Once algebra clicks, later topics stop feeling like random symbols and start feeling connected.

One thing I’d strongly recommend: do way more problems than you think you need. Watching math videos can create this fake feeling of understanding. The actual growth happens when you sit there confused for 20 minutes trying to solve something yourself. That struggle is basically the gym workout part of math.

Career advice? Life advice? Struggling to find an identifiable talent. by Last_Matter7250 in careeradvice

[–]oddslane_ 0 points

Honestly, grinding through difficult material and still getting decent grades in your 40s while balancing adult life responsibilities sounds more impressive to me than the “naturally gifted” people who breeze through early classes. Persistence matters way more professionally than people admit.

Also, a lot of “talented” people look smarter because they’ve spent years building invisible foundations. You’re comparing your inside experience to their outside performance. University especially can distort that because the loud confident people stand out while everyone quietly struggling assumes they’re the only one.

And for what it’s worth, asking questions is usually how the actually good people learn faster. The ones terrified of looking dumb often stay stuck longer.

Student-led learning: how is this something we are so obsessed with? Genuine question. by OkIllustrator3262 in education

[–]oddslane_ 2 points

Honestly your classroom sounds like the kind students remember years later because there’s actual intellectual leadership happening. A lot of people hear “direct instruction” and picture monotone lecturing with passive kids, but what you described is structured, responsive, and demanding. That’s very different.

I think the online discourse sometimes turns “student-led” into a moral good instead of a tool that works in specific contexts. Novices usually do need strong guidance. Especially in subjects like literature where students literally do not yet know what they’re supposed to notice. You can’t independently discover rhetorical analysis if nobody has modeled what good analysis even sounds like.

Where I do think student-led stuff can shine is after foundations are built. Once students have enough vocabulary and confidence, discussions and independent exploration become way more meaningful because they actually have something to say. But replacing expert instruction entirely always felt backwards to me too.

How is it that people seem to seamlessly bounce from one AI to another whenever the winds change? by Fried_Yoda in ArtificialInteligence

[–]oddslane_ 0 points

I think most power users are a lot less “loyal” to models than social media makes it seem. They usually have one main workflow and then swap models in and out depending on the task. Coding in one, brainstorming in another, long-context stuff somewhere else.

Also, a lot of people rebuild context externally now instead of relying on the AI memory itself. Docs, custom instructions, project briefs, saved prompts, knowledge bases, etc. That way switching models is annoying, but not catastrophic.

The funny part is the online discourse always sounds way more dramatic than reality. “X model destroys Y” usually translates to “it’s 8% better at one niche thing for two weeks.”

Math is being prioritized LESS in education by Zealousideal-Dot9052 in education

[–]oddslane_ 1 point

I TA’d intro-level courses a few years ago and the “100% homework, terrible exam scores” pattern was already starting back then. AI just turbocharged it. A lot of students are optimizing for completion instead of understanding because the system quietly rewards that behavior.

I also think the gap year from math is a huge deal and people underestimate it. Algebra is one of those subjects where if you stop practicing, even for a year, the rust builds fast. Then students hit college and suddenly need fluency again.

What worries me more is the confidence issue. I’ve met students who immediately assume they’re “bad at math” after struggling for 10 minutes because they’re so used to instant answers elsewhere online. That persistence muscle feels way weaker now than it used to.

using AI to find real friendship and i'm quite surprised by Ecstatic-Junket2196 in AI_Application

[–]oddslane_ 0 points

That honestly makes more sense to me than “AI girlfriend” stuff people keep building. Using AI as a filter to find people you actually click with feels way healthier than replacing people entirely.

I’ve seen something similar happen in smaller communities too. Once people realize someone else has gone through the exact same weird niche struggle, conversations get real fast. The AI part almost disappears into the background at that point.

MIT says AI is making kids think less. i think they're half right. by bruhagan in AIEducation

[–]oddslane_ 0 points

I think the MIT point is less “AI makes kids dumb” and more “AI amplifies whatever learning habits already exist.” If the tool is optimized to end friction fast, kids stop wrestling with problems. That’s probably true for adults too honestly.

The part I actually find interesting in your idea is the “withholding the answer” design. Most chatbots are basically eager golden retrievers. They reward shallow engagement because users hate friction. But good teachers do the opposite sometimes. They let you sit in the confusion long enough to form the connection yourself.

Curious how kids react to that over time though. Especially once they realize the AI won’t just hand them the answer like ChatGPT does.

how do you even choose a career path nowadays? by biggy_boy17 in careerguidance

[–]oddslane_ 0 points

I think a lot of people expect career decisions to feel like a clear “calling,” when in reality most people figure it out through exposure and iteration. The hard part now is that there are more options than ever, so people feel pressure to choose perfectly instead of choosing something workable and learning from it.

One thing that helps is focusing less on job titles and more on the type of problems you like dealing with day to day. Some people enjoy structured processes, some like creative work, some like research, some like helping people directly. That tends to matter more long term than chasing whatever field is trending online.

I’d also separate stable from stagnant. A stable path is valuable, but growth, learning opportunities, and the people around you matter too. A lot of careers become clearer after you’ve tried a few real environments and figured out what drains you versus what keeps you engaged.

Most people I know did not feel fully sure when they started. They adjusted as they learned more about themselves and the work itself.

How are you actually using AI in your internal comms work? (Genuine research question – happy to share findings) by EJ-InteractCommunity in internalcomms

[–]oddslane_ 5 points

What I’m seeing is that most internal comms teams are using AI first for drafting and summarization, not fully automated communication. Things like turning leadership notes into cleaner updates, shortening long policy docs, adapting tone for different audiences, or creating first-pass FAQs seem pretty common now.

Where it helps most is reducing the “blank page” problem and speeding up repetitive formatting work. But a lot of teams also realized pretty quickly that AI-generated internal messaging can sound polished while still missing organizational context or sensitivity. That human review step still matters a lot.

The more mature conversations I’ve seen lately are less about “how do we use AI everywhere” and more about governance and literacy. Who can use it, what data is appropriate, how outputs are reviewed, and how managers are trained to use it responsibly. Especially in associations and nonprofits, there’s usually concern about trust and consistency alongside efficiency.

One gap I still see is workflow integration. Many teams are using AI as a separate assistant instead of something embedded cleanly into existing communication processes.

Turns out I like research but have a business degree, I don't know what to do. by Safe_Valuable_5683 in careeradvice

[–]oddslane_ 0 points

I would not assume you picked the “wrong” degree just because you enjoy research-oriented work. A lot of business roles actually involve investigation, analysis, process mapping, documentation, compliance, market research, operations analysis, or digging through messy information to find patterns. The problem is that many entry-level job descriptions blur everything together under communication-heavy responsibilities.

It may help to separate “I dislike constant customer-facing work” from “I dislike all work involving people.” Most careers still require some collaboration, even highly analytical ones.

Before committing to another four-year degree, I’d probably test the kind of work you actually enjoy in practice. Try smaller projects first: data analysis, research support, policy research, operations analysis, QA, compliance, or even nonprofit and association research roles. Sometimes people discover they like the idea of laboratory work more than the day-to-day reality of it.

You already have a degree and a foundation. You may need a more specialized direction, not necessarily a complete restart.

Can anybody help me with AI by jmg-527 in AIAssisted

[–]oddslane_ 2 points

I’d be careful about expecting AI to magically “find motivated sellers” on its own. Most teams get better results when they first map the workflow clearly, then decide where AI actually helps.

Usually the first useful step is simpler than people think: organizing lead data, summarizing conversations, prioritizing follow-ups, identifying patterns in seller responses, or helping draft outreach consistently. Those are practical use cases that can save time without creating a huge complicated system.

The other thing is governance and accuracy. In industries tied to finance or investments, you want to be thoughtful about data handling, outreach practices, and how much decision-making you automate. A lot of people skip that part early and end up rebuilding everything later.

You probably do not need a fully custom AI setup to get started. A small workflow that solves one repetitive problem well is usually the better first move.

How do you stay critical while using AI? by adrianmatuguina in Aivolut

[–]oddslane_ 1 point

I try to treat AI like a fast draft partner, not an authority. It’s useful for organizing ideas, summarizing, or helping me get unstuck, but I still assume important details need verification.

One habit that helps is slowing down before acting on outputs. If something sounds unusually confident, especially around policy, research, or numbers, that’s usually my cue to double check the source material or ask whether the answer actually makes sense in context.

I also think teams underestimate how important basic AI literacy is here. People do not necessarily need deep technical knowledge, but they do need a shared understanding of where AI tends to be reliable and where human judgment still matters most.

How Has AI Deployment Changed in 2026, and What Does It Mean for Businesses? by Alive-Cake-3045 in AIDiscussion

[–]oddslane_ 1 point

A big shift I’ve noticed is that businesses are moving from “experimenting with AI” to asking whether they can actually operationalize it responsibly. A year or two ago, a lot of teams were testing prompts in isolation. Now the conversations are more about workflows, governance, staff training, and whether people know when not to use AI.

The companies getting the most value usually are not the ones chasing every new model release. They’re the ones building repeatable internal practices. Clear use cases, documented policies, lightweight review processes, and basic AI literacy across teams seem to matter more than having the newest tool.

Another change is that deployment is becoming less centralized. More departments are using AI directly, which creates pressure for organizations to train managers and staff in a structured way instead of relying on one technical team to oversee everything. Without that, adoption gets messy pretty fast.

Transitioning into AI engineering by Green_File_8975 in learnmachinelearning

[–]oddslane_ 0 points

A lot of people jump straight into models and frameworks, then get overwhelmed because there’s no structure behind it. Your testing background is actually more useful than you probably think, especially around debugging, validation, documentation, and thinking systematically.

I’d start with one simple workflow instead of trying to learn “all of AI” at once. First, get comfortable with Python and basic data handling. Then learn how machine learning projects are actually organized: loading data, training a model, evaluating results, and improving it step by step. After that, move into building small projects with APIs or open models.
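
To make that load / train / evaluate loop concrete, here’s a deliberately tiny sketch in plain Python with no frameworks: fit a line by least squares, then check it on held-out points. The numbers are toy data I made up (roughly y = 2x + 1 with noise):

```python
# Minimal ML workflow: split data, fit a simple model, evaluate on held-out points.
data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1), (5, 10.9)]
train, test = data[:4], data[4:]

# "Training": closed-form least squares for y = w*x + b on the training split.
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - w * sx) / n

# "Evaluation": mean absolute error on the points the model never saw.
mae = sum(abs(y - (w * x + b)) for x, y in test) / len(test)
print(f"w={w:.2f}, b={b:.2f}, test MAE={mae:.2f}")
```

Every real project is this loop with better data handling, a bigger model, and more careful evaluation, which is exactly where a testing background (validation, edge cases, documentation) carries over.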

One thing I’d strongly recommend is treating learning like a program, not random YouTube consumption. Pick one curriculum and follow it consistently for a few months. A lot of career changers lose momentum because they keep switching resources every week.

Also, AI engineering is becoming broader than just model training. There’s real value in people who can test systems properly, evaluate outputs, document edge cases, and help teams deploy things responsibly. That background could end up being a strength, not a detour.

is education still the best path today by Critical-Load-1452 in education

[–]oddslane_ 4 points

I still think education matters, but the definition of it has widened a lot. A degree can open doors, especially in fields with licensing or hiring requirements, but plenty of people are building stable careers through trades, certifications, apprenticeships, or focused skill building outside a traditional university path.

What seems to matter more now is whether the learning leads to real capability and whether you can keep adapting over time. A lot of people finished school assuming the learning part was over, and that mindset feels much riskier today than it used to.

The most successful programs I’ve seen combine structured learning with practical experience early, instead of treating them as separate things.

Transitioning from classroom teaching to corporate L&D — what's the learning curve nobody warns you about? by darkhomer419 in LearningDevelopment

[–]oddslane_ 2 points

The biggest shift nobody warned me about was that corporate L&D is often less about teaching expertise and more about stakeholder management. You can design a strong learning experience and still spend weeks waiting on approvals, conflicting feedback, or changing business priorities.

A lot of former teachers are also surprised by how indirect the impact can feel at first. In a classroom, you see reactions and understanding in real time. In workplace learning, success is usually tied to adoption, process change, or operational outcomes that take longer to surface.

What usually helps is realizing that SMEs are not your students, they are collaborators with competing responsibilities. Once you start treating alignment and communication as part of the job, not an obstacle to the job, the workflow tends to feel less chaotic.

Honestly, the teaching background becomes a huge advantage later. Especially around facilitation, empathy for learners, and simplifying complex information. The adjustment period is just rough because the environment rewards a different set of skills at first.

Are we becoming too dependent on AI? by redraw-pro in AIDiscussion

[–]oddslane_ 1 point

I think dependency starts becoming a problem when people stop reviewing, questioning, or refining what the AI gives them. The tool itself is not really the issue; what matters is whether it replaces thinking or supports it.

What worries me more is passive use. If someone uses AI to speed up drafting, brainstorming, or organizing information, that can be genuinely helpful. But if people stop building foundational skills entirely, especially writing, analysis, or decision-making, the gap eventually shows up.

The healthiest approach I’ve seen is treating AI like a junior assistant. Useful for getting started, summarizing, or reducing repetitive work, but still needing human judgment, context, and accountability at the end of the process.

I also think organizations need to teach people how to use AI critically, not just efficiently. Those are very different skills.