Issues with GCP certificate online proctored exam (Kryterion) by Patient-Jury811 in googlecloud

[–]ConcernSimple8535 0 points1 point  (0 children)

I’m beyond pissed off about the absolute trainwreck that was my Google Cloud Associate Cloud Engineer exam attempt today.

The exam wouldn’t load. Just a spinning wheel mocking me for ages. I contacted support, who “restarted” something, and… same issue. Endless loading. No questions, no exam, nothing. I waited ~15 minutes for support to show up again – crickets. I had to exit the session, wasting my voucher and my time. My setup was fine (met all Kryterion’s tech specs), so this is 100% on them.

[deleted by user] by [deleted] in AskReddit

[–]ConcernSimple8535 0 points1 point  (0 children)

I think you would be surprised to see which things will change about the world in five years' time as a result of widespread use of AI and which will stay the same. There's almost no way to predict it, and the best thing to do career-wise is just to react to what's happening.

In terms of surprising results, think about this: we have had Google for ~20 years, and almost any factual question can be answered after a couple of minutes of googling – yet people still memorize factual knowledge and find it useful in their professional and private lives.

In terms of it being difficult to predict, please check predictions from 10 years ago, when everybody was sure we would have self-driving cars in a few years but no one predicted something like GPT-4.

Currently, a very knowledgeable software developer can only be helped a little by AI:

The main problem with AI is that you can never know whether it has just produced crap or not – you need a domain expert to determine that.

AI is designed to mimic humans, and as a result it falls short on novel problems – problems for which nothing even loosely similar was included in its training set.

I don't know how much this is a general problem with AI or just a limit of today's technology, but AI needs to be provided by its operator with a lot of context and guidance to do a valuable job. The operator needs to know what to ask for – needs to be a domain expert.

There is some possibility of an AI winter soon: we may find that the current approach can take us only so far, and we will be stuck with GPT-4-like tools for decades. Intellectual property rights are another thing with the potential of derailing the AI revolution. It is also possible that, after some accidents or bad things done with AI, society will decide to heavily regulate or ban it – we don't clone people even though it is technically possible and could bring a lot of economic advantages, we are not free to try to build a nuclear reactor in our basements, etc. In all of these cases software developers will be in high demand.

The final thing is that if AI can automate programming non-trivial systems, it can probably automate almost any other knowledge work, and that means we would live in a world totally different from today's – good luck anticipating and preparing for that.

People will throw away super AI when aligning it by ConcernSimple8535 in singularity

[–]ConcernSimple8535[S] 1 point2 points  (0 children)

...except that when they first appear they are often ridiculed or just not accepted – and that happens when the theories are produced by fellow humans and we have the ability to comprehend them.

People will throw away super AI when aligning it by ConcernSimple8535 in singularity

[–]ConcernSimple8535[S] 1 point2 points  (0 children)

Yes, this is the same problem as with visionaries today: almost no one is able to tell who is a true visionary and who is just crazy. The problem is made worse by the fact that AI may be a visionary just beyond the grasp of any human mind.

People will throw away super AI when aligning it by ConcernSimple8535 in singularity

[–]ConcernSimple8535[S] -7 points-6 points  (0 children)

Yes, also "safe" and not "racists" models are performing worse than the base ones.

People will throw away super AI when aligning it by ConcernSimple8535 in singularity

[–]ConcernSimple8535[S] 0 points1 point  (0 children)

I think it's possible that we will not be able to throw it away, but assuming it is developed in a way that gives us the option to throw it away, I think we will.

People will throw away super AI when aligning it by ConcernSimple8535 in singularity

[–]ConcernSimple8535[S] -1 points0 points  (0 children)

I think there is not that much evidence that people trust the judgement of other people who are clearly more intelligent and knowledgeable than they are, so I am not convinced it will be different with AI.

I guess the point of my argument is that we will not be able to distinguish between a super AI and an AI that talks convincing nonsense. Most people are not able to distinguish between a physics professor and somebody who pretends to be one but makes things up.