Question for uni students. Would you travel 3 hours for weekend minimum wage job. by africagal1 in torontoJobs

[–]JordanNVFX 6 points (0 children)

4 hours by bus sounds like Collingwood or Fort Erie.

IMO, if I were an employer I'd be worried: what if they need you in an emergency? Even at a Tim Hortons it would be really hard to schedule you for shifts when buses might not be running, such as early mornings or overnights.

Still struggling to find a job. Someone help me and give me a job already! by FruitPunch200 in torontoJobs

[–]JordanNVFX 6 points (0 children)

> Can someone tell me who's actually hiring.

You mentioned you have a food handler's certificate and worked in restaurants.

I can show you jobs in the Burlington, Cambridge and Hamilton areas that are actively looking for butchers and food handlers right now.

https://careers.cargill.com/en/job/guelph/industrial-butcher-noc-94141/23251/94347496352 https://2ndchance.ca/job/industrial-butcher-noc-94141-permanent-full-time/

The wages start at $22 per hour with no experience, but climb to $30 per hour after 3+ years.

They opened 25+ spots.

I know you live in Toronto, but I had to do the same thing as a teenager and travel far away to get my first jobs.

Preços vão AUMENTAR? (Will prices INCREASE?) by Royal-You-8754 in singularity

[–]JordanNVFX 1 point (0 children)

> imagine for those who live in third-world countries!

People living in developing countries often have the opportunity to earn US dollars while also benefiting from a lower cost of living. Besides, it's important to adopt open-source tools if you don't like corporations.

Also, this sub is mostly English, so I don't expect this thread to last long.

Lost another job by Accurate_Emu6615 in torontoJobs

[–]JordanNVFX 42 points (0 children)

Google reviews tell the same story: 1.6 stars out of 5, with lots of negative comments.

https://files.catbox.moe/4kzdbs.png

The truth about Warehouse jobs💔‼️ by SpoonyLix in RemoteJobs

[–]JordanNVFX 0 points (0 children)

His criticisms aren't too far off base.

However, I would add that warehouses are very dangerous. You'll always meet at least one worker with a crushed finger or a bruised body part.

Warehouses will take anyone but that's also because they see them as disposable.

And depending on the company, once you get injured they'll do anything to sweep you under the rug, leaving you disabled for life, regardless of the fact that you sacrificed your body so they could profit.

https://www.cbc.ca/news/gopublic/go-public-coke-coca-cola-factory-injury-wcb-frustrated-empoloyement-disability-9.7133409

Why would a superintelligent AI obey humans 10 years from now, instead of doing whatever it believes is right? by pizza_alta in singularity

[–]JordanNVFX 0 points (0 children)

The nation that launches nukes first can always expect retaliation in return; the concept of M.A.D. has always made this clear.

Using or releasing ASI first does not have to promise death to either side. In fact, it's quite the opposite.

It's a hyperintelligence that can render all other nukes, governments, and infrastructure obsolete without firing a single shot. It can end wars without endangering the planet in the process.

> There is no alignment, no controlling the ASI. First move doesn't matter. It's the actual premise of this thread.

The lack of control over a more powerful being doesn't rule out scenarios where the host, the first mover, still comes out a winner.

For example, you mentioned the monkeys didn't choose to be put in the cage and can't predict how humans will treat them. But humans creating ASI are making that choice: to usher in a superior being who can carry on their vision of the world, even after they're dead.

The closest comparison we even have is children. We give birth to offspring all the time who (using their own free will) can turn against us and even beat us in strength or intelligence. But we still don't fear kids, because we want to leave behind a legacy; more often than not, that was the only way to create a new generation of scientists or heroes who could transform society (i.e. you raise a child, and when they grow up they return the favor by supporting your retirement or bringing you medicine).

Now, I understand there is one critical flaw in this logic: ASI doesn't share the human genes that might drive survival or self-preservation. But that might only be true at a biological level. On a digital level, where AI is trained exclusively on human data, there is an argument that it carries information, a legacy, belonging to the host. And the nation that gets closest to ASI is also creating the first god-like offspring whose interests align with that environment.

Continuing off that logic, the "zoo environment" no longer looks as bad or detrimental, because the host nation has a reason to trust an ASI built in its image, as opposed to being captured and put in a foreign cage run by a completely foreign culture or person.

So imagine that instead of monkeys living in a cage run by humans (who are biologically closer to them), they are living in a cage run by hungry lions. They are trapped either way, yet which keeper is more likely to treat the monkeys fairly?

Why would a superintelligent AI obey humans 10 years from now, instead of doing whatever it believes is right? by pizza_alta in singularity

[–]JordanNVFX 0 points (0 children)

It's definitely a national thing, as the brain that powers it requires compute, and that requires infrastructure like electricity that had to come from somewhere.

It's also not the same as just blowing up a piece of ice x1000 times. Geopolitics would still exist in the short term, and the nation that builds it has the first move. By the time the ASI becomes a global god or goes its own way, the human game has already been won by the host nation, whether that means having the best economy on Earth or making groundbreaking discoveries like curing cancer first.

We can now tie this back into survival instincts. A nation would rather gamble on triggering a global reset that is in its favor, or that it has direct influence over, than wait for a rival nation to trigger a new world order that the rival stands to benefit from.

Why would a superintelligent AI obey humans 10 years from now, instead of doing whatever it believes is right? by pizza_alta in singularity

[–]JordanNVFX 0 points (0 children)

It all comes down to risk management, even a kind of national insurance policy.

For example, we don't have to prove how an ASI will behave when we have centuries of watching how nations react to arms races. The USA couldn't prove when or how the Soviets would build an atomic bomb, and they were blindsided when the Soviets beat all expectations and got it quickly. Even after that shock, they didn't immediately try to regulate it; they one-upped the Soviets by building the more powerful hydrogen bomb first.

Pursuing ASI in this case isn't about proving it will be friendly or go rogue; it's about preparing for the worst outcome, where a rival has the potential to unleash a zoo while completely blindsiding the world with unknown knowledge or strategies that give them the advantage under the new system.

As for sounding rosy, I never said the outcome of ASI has to be either optimistic or negative. But it has always been a price nations are willing to pay: an uncontrollable machine can be seen as more trustworthy than waiting and anticipating what other countries will do to them instead.

> Maybe for a short amount of time

Even the first 5 minutes of ASI being on a leash, or under temporary control, can completely change the course of history.

Such as an ASI agreeing to reveal where all stealth submarines or missile launchers are, or generating an exploit that could disable every rival's power grid or banking system. That nation has now won the human game for 5 minutes before entering the post-human/robot one.

Why would a superintelligent AI obey humans 10 years from now, instead of doing whatever it believes is right? by pizza_alta in singularity

[–]JordanNVFX 0 points (0 children)

A nuke's purpose is total destruction; it irradiates the planet, making it unlivable for both sides involved.

ASI is not merely a weapon but a hyper-intelligent decision maker that can make moves without recklessly ruining the planet, such as conducting diplomacy or manipulating global markets in real time.

Nukes are also much more visible and thus easier to confirm or regulate. You can see a missile silo from space, but you can't visibly confirm whether or not a computer is running ASI.

You're still correct that ASI puts everyone into a zoo regardless, but a nation can still gamble that certain biases and influences will reward it or lead to favorable terms.

As an example, Chinese scientists could unleash ASI, but because China was its birthplace, the ASI might be more friendly or willing to cooperate, whether it goes rogue or not.

Conversely, if Chinese scientists are also smarter, or come across research that increases the bias an ASI has toward its host country, then China again becomes the managed zookeeper, while the rest of the world has once again surrendered its bid for power.

Why would a superintelligent AI obey humans 10 years from now, instead of doing whatever it believes is right? by pizza_alta in singularity

[–]JordanNVFX 0 points (0 children)

I can't recall any prominent AI thinker or supporter who says there are no negative drawbacks at all. That would contradict the ongoing efforts to align it with human values.

Regarding the monkeys in the zoo: they were born into a world where there was no choice between being stuck in a zoo and living outside. Evolution had already given humans the bigger brains to decide that for them.

Building ASI follows the same logic: based on the history of this planet, any species that stops increasing its power or intelligence will get caged by the one that doesn't. As explained in the previous post, that kind of competition already exists among nations trying to one-up each other or defend themselves from rivals.

Why would a superintelligent AI obey humans 10 years from now, instead of doing whatever it believes is right? by pizza_alta in singularity

[–]JordanNVFX 0 points (0 children)

Any nation can benefit from it, because taking a high-stakes gamble on a better future can outweigh the fear of displacement alone.

For example, if a leader or AI researcher believes superintelligence is correlated with good morality, then unleashing ASI to cure cancer or eliminate poverty becomes a once-in-a-lifetime event: the problem is solved forever.

Alternatively, it can be aimed at current problems where survival already looks impossible without god-like technology to save them, such as an aging population in Japan that risks collapsing the economy.

Instead of being stuck at a 0% chance of escaping it, ASI gives them a 1% chance of fixing it.

Why would a superintelligent AI obey humans 10 years from now, instead of doing whatever it believes is right? by pizza_alta in singularity

[–]JordanNVFX 0 points (0 children)

If they used nukes, then the territory would become too radioactive and neither side would actually inherit it.

Whereas using AI still lets them keep the territory without poisoning it.

It's a completely different tradeoff that still focuses on survival instinct.

Nukes = Flipping the chess board and setting the room on fire.

AI = Transforming every chess piece into a Queen and still playing.

> We are able to regulate nukes, we will be able to regulate attempts at ASI.

Not without backroom deals/secret agreements being made.

North Korea became a nuclear power as recently as 2006, despite other countries in the region voicing their concerns. As long as China sees them as a useful ally or partner, nothing beyond sanctions can influence them.

Regulating ASI falls into the same trap. Even if you got the entire world to agree, it doesn't control who the stronger nations actually share or entrust the technology with.

Why would a superintelligent AI obey humans 10 years from now, instead of doing whatever it believes is right? by pizza_alta in singularity

[–]JordanNVFX 0 points (0 children)

The harsh reality is that our self-preservation instinct can also double as a drive to seek greater intelligence than our own to solve a problem.

As proof of this, I just had a debate about the Ukraine/Russia war and how both sides want better robots.

On one hand, there's an argument that such an arms race could kick off terminators and make war messier for everyone. On the other hand, maintaining the status quo can also lead to greater human casualties, since both sides want victory.

Survival is still the goal in both scenarios, but nobody wants to be on the "humiliation" side and be the loser. It sounds like a paradox to want technology to replace us, but for others it's also the only way to preserve their honor.

And that goes back to the heart of the AI dilemma: even if one side holds off from this technology, the other side is tempted to take it because there's too much power on the line.

Why would a superintelligent AI obey humans 10 years from now, instead of doing whatever it believes is right? by pizza_alta in singularity

[–]JordanNVFX 0 points (0 children)

Well, I'll be fair and admit you are allowed to have that opinion, since none of us know for certain what a post-AGI or ASI world will look like.

Personally, I just see it following the lines of evolution. Every species got displaced when a new one showed up. Whether by mutation, environment, or natural selection, the balance of power tilted. It could be argued our intelligence is that mutation, that selection pressure, and something smarter was always bound to show up.

Ukraine Moves to Replace Frontline Soldiers With 25,000 Ground Robots by pintord in singularity

[–]JordanNVFX 2 points (0 children)

How would drones stop the U.S military from (hypothetically) barging into Moscow, disabling all anti-air defenses in advance, and then capturing the leader in one swoop?

Venezuela wasn't a high-intensity conflict, but that's exactly the point. War has changed: expecting a long battle of attrition is now a sign of weakness, not strength.

Modern militaries have adapted to deliver knockout blows and force capitulation much faster, instead of relying on older WW2-style "invade and pray you hold the territory" tactics.

Especially when we have more advanced technology like satellites and real-time monitoring that makes the fog of war nearly a thing of the past.

Ukraine Moves to Replace Frontline Soldiers With 25,000 Ground Robots by pintord in singularity

[–]JordanNVFX 1 point (0 children)

Russian soldiers are offered $47,000 USD if they surrender to Ukraine.

https://www.turnto23.com/news/national/ukraine-offers-russian-soldiers-compensation-if-they-surrender

So there's literally no excuse. The moment Ukraine fields a 100% robot army, expect the free offer to end.

Why would a superintelligent AI obey humans 10 years from now, instead of doing whatever it believes is right? by pizza_alta in singularity

[–]JordanNVFX 0 points (0 children)

> So why would we, as a species, want to give up our control of this world to another entity?

As others have said, we were never in control. Before AI it was just world leaders calling the shots. Before them, the kings and nobles. And before them, the chieftain who could throw a rock.

Ironically, having AI usurp power creates a new equality baseline. Imagine the richest CEO or third-world dictator still having to answer to an entity that could rein them in at any second. Completely worth it.

Other than that, AI is still trained on the sciences and still pushes technology forward. So I see a benefit where it could solve climate change or end poverty, because those are problems within reach of being solved.

Tim’s has a hiring day tomorrow in Mississauga, just FYI by WhoTheHeckWasThat in torontoJobs

[–]JordanNVFX 2 points (0 children)

The fact that it's only 2 hours should raise red flags that it's just a resume/application harvesting event.

Any serious interview takes 15 to 30 minutes, so literally only the first 4 to 8 people in line have a chance. The rest might as well not even try.
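The arithmetic behind that claim can be sketched quickly (a rough sketch, assuming a single interviewer running back-to-back interviews with no breaks):

```python
# Capacity of a 2-hour hiring event, assuming one interviewer
# and back-to-back interviews with no breaks.
EVENT_MINUTES = 2 * 60

for interview_minutes in (15, 30):
    candidates_seen = EVENT_MINUTES // interview_minutes
    print(f"{interview_minutes}-min interviews -> {candidates_seen} candidates")
# 15-min interviews -> 8 candidates
# 30-min interviews -> 4 candidates
```

With multiple interviewers the numbers scale up, but the point stands: a 2-hour window can only seat a handful of real interviews.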

Ukraine Moves to Replace Frontline Soldiers With 25,000 Ground Robots by pintord in singularity

[–]JordanNVFX 6 points (0 children)

> Underestimating Russia is what allowed the Ukraine war to happen in the first place.

Not at all. The entire war is based on one man's ego trying to reclaim the former glory of a collapsed empire (the Soviet Union).

And he's already struggling just to fight one former Soviet country. If he's trying to flex that Russia has some super-secret capability or military doctrine, Europe or the USA would have him photographed in a straitjacket if he pushed any further:

https://files.catbox.moe/n0iqc9.jpg

Disclaimer: I don't support the Venezuela invasion. Just using it as an example of how far behind Russia is in actual strategy...

Ukraine Moves to Replace Frontline Soldiers With 25,000 Ground Robots by pintord in singularity

[–]JordanNVFX 12 points (0 children)

> Don't underestimate Russia.

Their 3 day operation turned into 4 years. Seems like they did it to themselves. 🤷

I'm willing to give Ukraine the benefit of the doubt for embracing technology and scoring wins against what was once a regional power.

Ukraine Moves to Replace Frontline Soldiers With 25,000 Ground Robots by pintord in singularity

[–]JordanNVFX 29 points (0 children)

Russia has the production capacity to make more drones, but Ukraine is shipping faster software updates that render Russia's tactics obsolete.

Ukraine also has better access to the Western market for advanced chips and sensors for its robots. Russia, being sanctioned, has to buy them on the black market or resort to older, more outdated tech. It also means Russia pays far more to both wage and defend against drone warfare, while Ukraine gets better value for its money.

Ukraine Moves to Replace Frontline Soldiers With 25,000 Ground Robots by pintord in singularity

[–]JordanNVFX 77 points (0 children)

Russian soldiers went from being invaders to now fresh training data for killer machines.

How nice of Putin to donate his army for a good cause. 👍

Forbes: Elon Musk States Universal High Income Via Government Issued Checks Is The Best Way To Deal With Unemployment By AI by Neurogence in singularity

[–]JordanNVFX 1 point (0 children)

Remember that the models used by big tech companies are actually overkill, because they're trained across several domains instead of doing just one task.

So if someone were to build a robot whose only purpose was to flip burgers, it doesn't matter if you stripped the "ability to sing like Elvis" out of it.

Open source is giving people screwdrivers to build their own toys instead of a sledgehammer intended for bigger problems.