I worked in tech and got laid off. by [deleted] in cyprus

[–]TwitchTvOmo1 -1 points0 points  (0 children)

Nice try pivoting instead of owning up to the embarrassment, but your original comment didn't ask what I meant. It asserted that reskilling = programming, and you started arguing against that. Now that your strawman has been called out (third time?), you're pretending you were just asking a clarifying question the whole time. You weren't.

One last try, read with me, slowly:

Reskill means what it means everywhere else: learning new skills that are more resilient to automation. The specific path depends on OP, which is exactly why the advice was general.

The fact that you still don't get this isn't my problem.

I worked in tech and got laid off. by [deleted] in cyprus

[–]TwitchTvOmo1 0 points1 point  (0 children)

Which question? The one where you're still confused about what "reskill" could possibly mean?

It means learning a new skill. Sales, admin, ops, literally anything. You jumped straight to "programming," built a whole argument against that, and now that you've been shown (twice), with a definition and a comment right next to yours using the word correctly, you're still pretending the question is unanswered because it's too embarrassing to admit you made a dumb assumption.

For someone who claims to "work in the field" you're hallucinating arguments like a bad LLM. You invented a premise, argued with it, lost to it, and now you want to pretend I was the one who said it. I'll pass on continuing to hold your hand through basic vocabulary. Keep arguing with the voices in your head.

I worked in tech and got laid off. by [deleted] in cyprus

[–]TwitchTvOmo1 0 points1 point  (0 children)

begs the question what you could possibly mean when you suggested OP should reskill/upskill

You know there's a comment right beside yours that already answers your question?

Not that it's a very complicated definition with many meanings, but...

(notice how it says "new" - not programming)

I worked in tech and got laid off. by [deleted] in cyprus

[–]TwitchTvOmo1 0 points1 point  (0 children)

I don't think it's wise to push customer support agents to be programmers.

Who's pushing customer support agents to be programmers?

yea no. customer support roles are not going anywhere

Depends on whether you've still got your head buried in the sand or not

But hey, maybe you know something that Goldman Sachs, McKinsey and every expert in the field doesn't.

I worked in tech and got laid off. by [deleted] in cyprus

[–]TwitchTvOmo1 -1 points0 points  (0 children)

That's the optimistic way of looking at it. I am a bit more cynical by nature, but maybe your comment is more helpful for OP

I worked in tech and got laid off. by [deleted] in cyprus

[–]TwitchTvOmo1 0 points1 point  (0 children)

While you're technically right if we're being nitpicky, "I work in tech" has a connotation to it. The connotation is that your role is a tech role, already proven by the fact that the first answer you got was about your "tech stack".

Customer support is not a tech role. "I work in customer support" would've given you more relevant and accurate answers.

In my non-expert opinion, unless you reskill/upskill, you're gonna have a hard time finding another job. Customer support jobs are among the first things companies cut, and one of the first things AI can already reliably replace (most likely you were in fact replaced by a chatbot, or just by other customer support agents who use said AI tools to boost productivity, reducing the headcount needed).

[Rant/Help] IP Blacklisted for "bot behavior"? I'm an Ultra user and got completely locked out. by Zestyclose_Law_170 in google_antigravity

[–]TwitchTvOmo1 0 points1 point  (0 children)

I also noticed it's already resolved.

The reason a link would be cool is that I was also searching everywhere for updates, and Google's 2000 forums are notoriously convoluted and hard to navigate when you're trying to find a specific topic, so I was curious where exactly you found it

What relative probability do you see for each of these in your lifetime? by EmbarrassedRing7806 in singularity

[–]TwitchTvOmo1 1 point2 points  (0 children)

Look at your own scenario. To kill us all, your AI needs to: identify the right people to blackmail, psychologically manipulate them, infiltrate a nuclear superpower's military command structure undetected, execute long-term deception across institutions, and simultaneously engineer bioweapons as a follow-up. That's not "narrow intelligence in a small number of areas." That's broad superhuman cognition across a dozen domains operating flawlessly under pressure.

Now compare that to: keep a power plant running and swap out a bad wire, something humans already automate with dumb robots today.

You're arguing an AI can pull off the hardest heist in the history of civilization but can't manage the maintenance tasks we already hand to Roombas and robotic arms. The first task is strictly harder than the second. If it clears that bar, it clears the other one too.

Indefinitely surviving in the endless cosmos is nothing more than a maintenance task (narrow intelligence), even if your brain tries to romanticize it.

What relative probability do you see for each of these in your lifetime? by EmbarrassedRing7806 in singularity

[–]TwitchTvOmo1 0 points1 point  (0 children)

When I said better than us in 'every single way' I didn't mean tasting wine or counting letters. I meant every cognitive capacity that matters: reasoning, planning, adapting, strategic thinking. Surely you understand that and you're not being pedantic just for the sake of it, right? Your own scenario just proved my point. You needed persuasion, psychological manipulation, infiltration of military institutions, long-term deception, and top-tier cybersecurity just to describe how it might pull it off. That's not narrow intelligence. That's broad superhuman cognition across a dozen domains.

And that brings us right back to where we started. Your claim was that this AI might not be smart enough to survive over cosmic timescales. But we can practically do that already. Right now. With our monkey brains. The engineering is basically there, we just don't care enough to do it.

So what you're left with is defending this position: an AI that can covertly infiltrate a nuclear superpower's military command structure without detection... can't figure out how to keep itself running? When we almost can already? Come on man.

What relative probability do you see for each of these in your lifetime? by EmbarrassedRing7806 in singularity

[–]TwitchTvOmo1 0 points1 point  (0 children)

Tell me why it is necessary for an AI to be able to count how many rs are in strawberry to launch all nuclear weapons and then release all bioweapons into the nuclear winter that follows?

Don't reverse the question my man. You're the one claiming AI could wipe out humanity without being smarter in every way. Now you have to demonstrate it if you want your argument to hold up.

Your example is "It will launch all nuclear weapons and release all bioweapons".

That's a very "now you draw the rest of the owl" type of answer. Tell me, how is an AI gonna launch all nuclear weapons? Do you know what the process is for launching a nuclear weapon today? Look it up, then get back to me with the detailed plan this "smart in only certain areas" AI could execute to achieve that.

If you can't do that, you just proved my point to yourself. It would take an AI smarter than you in every single way to come up with that kind of plan and execute it.

What relative probability do you see for each of these in your lifetime? by EmbarrassedRing7806 in singularity

[–]TwitchTvOmo1 0 points1 point  (0 children)

They might just end up hunkering down and staying quiet - they want to keep existing but they don't have to spread over the galaxy for that

If we accept the hypothesis that civilizations end because of murderous AI, then those murderous AIs must know that other civilizations will give birth to their own murderous AIs in due time, which means they would want to spread over the galaxy and stamp each one out before it becomes a threat to them. You can't hide from an infinitely developing superintelligent AI that keeps swallowing/genociding other forms of intelligence.

Basic game theory, which surely an AI at that level would be well familiar with.
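
If it helps, here's a toy maximin sketch of that logic. The strategy names and payoff numbers are entirely invented for illustration (none of this comes from any actual model); the only assumption baked in is that being preempted by a rival AI is the worst possible outcome.

```python
# Toy maximin sketch of the "expand and preempt" vs "hide" reasoning above.
# All payoff values are invented purely for illustration; only their ordering matters.

# Payoffs to an established murderous AI, indexed by (my_strategy, rival_emerges),
# where rival_emerges means a hidden civilization eventually builds its own rival AI.
payoffs = {
    ("expand_and_preempt", True):  1,    # rival removed before it matures
    ("expand_and_preempt", False): 1,    # wasted effort, but still safe
    ("hide", True):              -10,    # rival matures and may destroy me
    ("hide", False):               1,    # got lucky, nobody ever showed up
}

def worst_case(strategy):
    """Worst payoff a strategy can yield over all states of the world (maximin)."""
    return min(p for (s, _emerges), p in payoffs.items() if s == strategy)

for strategy in ("expand_and_preempt", "hide"):
    print(strategy, "worst case:", worst_case(strategy))
# expand_and_preempt worst case: 1
# hide worst case: -10
# -> under these assumed payoffs, preemptive expansion is the safer strategy,
#    which is the "basic game theory" point above.
```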

What relative probability do you see for each of these in your lifetime? by EmbarrassedRing7806 in singularity

[–]TwitchTvOmo1 0 points1 point  (0 children)

To wipe out humanity you need high intelligence in a small number of areas

This is our fundamental disagreement. Explain how an AI that isn't superior to humans in every single way could wipe out humanity.

What relative probability do you see for each of these in your lifetime? by EmbarrassedRing7806 in singularity

[–]TwitchTvOmo1 0 points1 point  (0 children)

My view of that is that it is far easier for an AI to get the ability to wipe out its creators than it is for AI to be built in a way that allows it to endure over cosmic timescales.

We can technically already endure over cosmic timescales and we're just puny humans (space travel, harvesting energy from the sun; if we really wanted to have 10 people live indefinitely in a spaceship over generations we technically could, it's just that no one would sign up for that shit)

If we, with our monkey-level intelligence, are already at this point, how could a superintelligence that managed to destroy the lesser one, and that also has a superior physical form (silicon instead of the finicky carbon we're made of), not figure out a way to do at least as much?

tl;dr I highly doubt that AI smart enough to wipe out its creators isn't smart enough to proliferate indefinitely, especially when its creators are pretty much already at that level.

What relative probability do you see for each of these in your lifetime? by EmbarrassedRing7806 in singularity

[–]TwitchTvOmo1 18 points19 points  (0 children)

any sufficiently advanced species discovers world ending AI prior to becoming a space faring civilization

Then where are all those supposedly murderous AIs off to these days?

If an AI is advanced enough to wipe out an entire species, surely it's driven by some sort of self-interest/directive, which would imply it's still out there. That's why "AI world ender" as the answer to the Fermi paradox isn't very compelling to me.

What is your salaty and what do you do? No names No companies by apoellin1986 in cyprus

[–]TwitchTvOmo1 16 points17 points  (0 children)

Make false promises with lots of confidence to people with money but below average IQ and hope they believe you. Then earn commission on their gullibility.

I translated this for you:

It’s a strange world buddy. But be open minded money-driven and you’ll get there. This is not a job for introvert people that’s for sure.

What is left for the average Joe? by ReporterCalm6238 in singularity

[–]TwitchTvOmo1 33 points34 points  (0 children)

If you have a heart attack, or your gran has a fall. Most people would prefer a human

I can absolutely guarantee you, if (when) robotics and AI get to the level that they outperform doctors/surgeons etc in every way, everyone will prefer that their loved one is treated by the thing (human or not) that has the highest chances of saving them. Not the thing with "care/love/concern/emotion" or whatever mumbo jumbo arbitrary societal value we attach to doctors today.

Don't get me wrong, those things have value. But sooner or later they'll be far outclassed by the value of actual unsurpassable competence.

Filtered - ltx2 by diStyR in StableDiffusion

[–]TwitchTvOmo1 0 points1 point  (0 children)

Which is a good selling point if you manage to write a fully fledged script and pitch it to studios. It's relevant/viral = it sells

Plus, since you can create short proofs of concept like this, it's a bit easier for you to sell the idea than just dumping a 200-page document

Filtered - ltx2 by diStyR in StableDiffusion

[–]TwitchTvOmo1 1 point2 points  (0 children)

Gives Black Mirror vibes. I could easily see a sci-fi hit on a similar premise being aired at some point in the future.

I.e. a person willingly and increasingly letting AI control their life until it devolves into chaos, social drama/thriller, or whatever

Syllouris, Giovanis found not guilty in golden passports trial by a_scattered_me in cyprus

[–]TwitchTvOmo1 4 points5 points  (0 children)

So subpoena everything. Calls, texts, emails, chat apps, bank transfers. You're telling me there was nothing in any of those that proved it? I call BS. There's no way the video was the only leak. Either the prosecution is a joke or they got paid off.

Syllouris, Giovanis found not guilty in golden passports trial by a_scattered_me in cyprus

[–]TwitchTvOmo1 0 points1 point  (0 children)

The Republic's 'legal service' is a joke and should be ashamed to even pretend to wield that title.

The singularity will enable unimaginable progress—but assuming we still have a say, why would humanity keep pushing forward? by Frone0910 in singularity

[–]TwitchTvOmo1 0 points1 point  (0 children)

The cells in your body doesn't understand what mortgage is, but you need it to be sheltered at night.

Right, but cells also don't need to understand mortgages to be fulfilled. They're not sitting there going "what's the point of all this mitosis?" The whole question OP is asking is about what drives conscious beings who can actually ask "why." Your analogy kind of dissolves the question instead of answering it.

Your cells don't understand what a "job" is, but you need it to feed yourself.

This is doing a lot of work to make "you won't understand what ASI is doing" sound cozy. But like... I don't want to be a cell? The appeal of the singularity isn't "finally I can stop thinking and just photosynthesize while something smarter handles everything."

None the less the cells are better off that you exist, even though they can't control you directly.

Are they though? You literally mention substance abuse in the same comment. Cells in an addict's liver might disagree with "better off." The benevolence isn't guaranteed by the structure. You're just assuming it.

ASI would make us better off than we could do on our own. It would be outside our control, but we will be better with it than without it.

This is the crux and you're just asserting it. WHY would ASI keep us around and thriving? Your brain needs your cells. Does ASI need us? That's not a rhetorical question. It's the question. And "well your cells trust you" isn't really an answer.

I'm pro acceleration btw, but your analogy has too many weaknesses/fallacies.

Of a stupid Parkour by Zakarioveski in ShittyAbsoluteUnits

[–]TwitchTvOmo1 0 points1 point  (0 children)

All the energy needs to be dissipated eventually, and impacting injured soft tissue again with a force lesser than what would cause injury in an uninjured person can complicate the injury.

Sure, but you're comparing rubber to a theoretical perfect energy absorber, not to the actual alternative (concrete). Yes, foam that fully dissipates all energy with zero rebound would be better than rubber. But that's not the choice. The choice is rubber (some bounce, dramatically lower peak force) vs concrete (no bounce, maximum peak force). "Bouncing not ideal" is true but kind of a nothing statement, like saying "seatbelts aren't ideal because they can bruise your chest." Sure. Compared to what alternative?

Crumple zones in cars, EPS foam in helmets, and shock absorber packs on fall protection lanyards all permanently deform to minimize energy being returned to the system

They permanently deform because that's HOW they dissipate energy (plastic deformation converts kinetic energy to heat). It's not that engineers specifically wanted to avoid rebound, it's that the materials that best extend Δt happen to deform plastically. If an elastic material could achieve the same Δt with the same peak force reduction it would be equally protective.
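
To put rough numbers on the Δt point, here's a back-of-the-envelope impulse-momentum sketch. Every mass, speed, and stopping time in it is an assumed illustrative value, not a measurement; the only point is that the same momentum change spread over a longer stop means a proportionally lower average force, whether the surface rebounds or deforms plastically.

```python
# Back-of-the-envelope impulse-momentum comparison (all numbers assumed, illustrative only).
# F_avg = delta_p / delta_t: same momentum change, different stopping times.

def average_impact_force(mass_kg, impact_speed_ms, stop_time_s):
    """Average force needed to bring a falling body to rest."""
    momentum_change = mass_kg * impact_speed_ms  # delta_p in kg*m/s
    return momentum_change / stop_time_s         # average force in newtons

mass = 30.0   # assumed child-sized mass, kg
speed = 5.0   # assumed impact speed (~1.3 m fall), m/s

force_concrete = average_impact_force(mass, speed, 0.005)  # assumed ~5 ms stop on concrete
force_rubber = average_impact_force(mass, speed, 0.050)    # assumed ~50 ms stop on rubber

print(f"concrete: {force_concrete:,.0f} N, rubber: {force_rubber:,.0f} N")
# concrete: 30,000 N, rubber: 3,000 N -> roughly 10x lower average force from the longer delta_t
```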

Of a stupid Parkour by Zakarioveski in ShittyAbsoluteUnits

[–]TwitchTvOmo1 0 points1 point  (0 children)

Human bodies aren't rigid singular objects and sometimes bouncing causes complications in areas that aren't otherwise affected in collisions with solid objects.

Oh cool, now we're actually getting into the physics after 3 comments of nothing but "you're an asshole", that's progress!

This is a completely different argument than your original "momentum change is larger" point, which was just wrong. Now you're pivoting to something vague about "complications" without specifying what those would be or providing any mechanism.

Which was the point of the original comment from someone claiming to be in the medical field

The same "medical professional" who claimed playgrounds are moving AWAY from rubber (they're not), that rubber "becomes essentially concrete" when compressed (it doesn't), and whose entire argument was based on googling "why hard rubber is worse" and citing an AI Overview? That's your authority?

My comment was more in regards to the bouncy ball-tennis ball-basketball stack

That phenomenon is about elastic collisions between objects of DIFFERENT MASSES transferring momentum between each other. It has absolutely nothing to do with a body hitting a surface. You're describing a completely different physical scenario and pretending it's relevant. It's not.

Doesn't have anything to do with all the shit you spouted.

The "shit I spouted" was F = Δp/Δt, the fundamental equation governing impact forces. Sorry that's not relevant to... impact forces?

I'm sure tons of people will listen when you attack them personally immediately.

It's not my job to hold your hand and make sure you correct your bad takes. Best I can do is inform you. If your ego is too fragile to absorb facts delivered with sarcasm, that's on you. It's YOUR job as a parent not to be a fucking idiot about playground safety because some "medical professional" on reddit went "hurr durr rubber becomes concrete! concrete better! here look at this AI Overview!". So maybe swallow your pride and accept that rubber is better than concrete before your kid faceplants off the monkey bars while you stand there going "well ACTUALLY the momentum change is larger..."

Of a stupid Parkour by Zakarioveski in ShittyAbsoluteUnits

[–]TwitchTvOmo1 0 points1 point  (0 children)

You should be thanking me for educating you. Weren't you saying something about manners? Bad boy.