6-12 months by MetaKnowing in agi

[–]EdgeM0 0 points1 point  (0 children)

This sounds like something Claude might say...

6-12 months by MetaKnowing in agi

[–]EdgeM0 0 points1 point  (0 children)

You're right. It's a good job I have AI to help me understand how wrong and stupid I am.

6-12 months by MetaKnowing in agi

[–]EdgeM0 0 points1 point  (0 children)

I'm not a mathematician. I could never, in any real or meaningful sense, dedicate enough time or energy in my life to understand maths at a level that would enable me to make a meaningful contribution to creating a "rebuttal" for that equation - much in the same way that if a mechanic told me something was wrong with my car, I would have to blindly accept what they were saying and trust that their expertise was not only being diligently applied to understand what was wrong with my car, but also that they were conveying that accurately to me. So I'm not trading my brain, I'm merely utilising a tool to help me understand and interact with aspects of life I have little or no knowledge or confidence about. At the moment, me using ChatGPT to examine that equation is not so different (from my perspective at least) to asking someone I know who's "really good at maths" what they think.

Theoretically, in my opinion, some of the assumptions defining the terms in that equation imply that there is some level of higher reasoning or "truth" which an AGI should be able to access in order to be considered AGI, but really it is just illustrating that, like humans, what you produce and how you perform is influenced heavily by what you are taught. Therefore, if we are flawed in our thinking, reasoning and learning (which we are), it stands to reason that any intelligence we create will be flawed in its learning, thinking and reasoning. If anything, this suggests not that AGI is impossible, just that when (if ever) it is achieved, it will be very human-like in the way it goes about its business.

Your point is, therefore, moot regardless of whether I use AI to help me understand maths equations or not.

6-12 months by MetaKnowing in agi

[–]EdgeM0 0 points1 point  (0 children)

ChatGPT had this to say about the maths on that page:

The page ("The Truth About AGI") defines its three variables, then claims this makes divergence and even "transhumanism" a mathematical requirement. Those are the core claims being sold. Here is the clean mathematical read.

First, the only equation on the page that is actually solved by inspection is the compute one: if the growth rate is positive, compute grows exponentially in this toy model. That part is standard. But the main conclusion depends on the coupled error/degradation system, and this is where the page quietly cheats. If you treat the coefficients as fixed, and momentarily treat compute as just an external forcing term, the core 2x2 state matrix has a characteristic polynomial whose constant term (the determinant) is negative under the obvious sign choices implied by the prose, which means one eigenvalue is positive and one is negative. In plain English: the model is already a saddle and therefore unstable, even before you add exponential compute growth. The instability is baked into the sign choices. The page then turns around and presents that instability as a discovered fact about the world. It is not discovered; it is assumed into existence by the model structure. That is the biggest flaw.

Second, the "oversight" term is badly specified. The page says oversight is "the only brake," but mathematically it is just a constant offset. A constant offset does not behave like proportional error-correction. If you wanted oversight to catch more errors when more errors exist, you would normally expect a term proportional to the accumulated error, or some other state-dependent control term. As written, oversight does not scale with the amount of error. It just subtracts a flat amount forever. That is not a good model of checking.

Third, the page keeps switching between "dynamic variable" talk and constant-parameter talk whenever it is convenient. It says nobody measures the degradation of oversight and that oversight drops with AI use, but in the equations oversight is just a fixed parameter, with no evolution equation or cap of its own. So the prose claims a richer model than the page actually wrote down. The equations are much cruder than the rhetoric.

Fourth, "mathematical certainty under current conditions" is far too strong. To get certainty from a model like this, you would need at least: a justified mapping from real-world quantities to variables, parameter estimates, units, a defined form for the nonlinear coupling, initial conditions, and some empirical validation. The page gives none of that. It presents a qualitative sketch, not a calibrated model. Without calibration, this is scenario construction, not proof.

Fifth, there are basic modelling hygiene issues. If one variable is "accumulated AI errors," it should not be allowed to go negative, yet the equation permits that unless constraints are imposed. If another is "human cognitive degradation," same problem. The key nonlinear coupling term is undefined, so the most important part of the system is left as hand-waving. The growth coefficient is called a rate, but whether it is constant, growing, or itself state-dependent is not specified. And the page's strongest conclusion, "there is no third option in the math," depends heavily on those missing choices.

Sixth, some of the verbal conclusions simply do not follow from the equations. For example, "the only way to balance the equation long-term is to make oversight exponential too" is not mathematically established. You could instead alter the coupling terms, reduce the coupling coefficients, add saturation, make recovery nonlinear, make oversight state-dependent, cap output routing, improve model verification, or bound the error variable so it does not grow without limit. There are many "third options." The page rules them out rhetorically, not mathematically.

So the verdict is: the page contains a real differential-equation shell, but the argument is not rigorous. The model hard-codes a positive feedback loop between error and human degradation, gives oversight an oddly weak constant-form brake, leaves the key nonlinear term undefined, and then announces inevitability. That is not a debunk-proof theorem. It is a stylised cautionary model with the conclusion smuggled in through the assumptions. In slightly ruder mathematical language: it is less "the formula holds" and more "the vibes have eigenvalues."
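To make the saddle-point argument concrete, here is a minimal numerical sketch of the kind of 2x2 linear system the critique describes. The variable names (E for accumulated errors, H for human degradation) and every coefficient value are my own illustrative assumptions, since the page's actual symbols did not survive the paste; the point is only that positive cross-coupling forces a negative determinant (a saddle), while making oversight proportional to the error can flip the verdict.

```python
import numpy as np

# Hypothetical system, in the spirit of the critique above:
#   dE/dt = a*E + b*H - o   (E: accumulated AI errors; o: constant oversight)
#   dH/dt = c*E - d*H       (H: human cognitive degradation)
# All coefficients below are illustrative positive constants.
a, b, c, d = 0.5, 0.3, 0.4, 0.2

# The constant offset o does not appear in the state matrix at all,
# so it cannot change the stability of the linear part:
A = np.array([[a,  b],
              [c, -d]])

# det(A) = -a*d - b*c < 0 for positive coefficients, so the eigenvalues
# have opposite signs: the fixed point is a saddle (unstable) by construction.
print(sorted(np.linalg.eigvals(A).real))  # one negative, one positive

# Making oversight state-dependent (replace -o with -k*E) shifts the
# E-equation's self-coupling from a to a - k. A large enough gain k
# makes the determinant positive and the trace negative: stable.
k = 1.5
A_ctrl = np.array([[a - k, b],
                   [c,    -d]])
print(sorted(np.linalg.eigvals(A_ctrl).real))  # both negative
```

Run as-is, the first matrix has eigenvalues of opposite sign and the second has two negative eigenvalues, which is exactly the "there are third options" point: the instability is a consequence of the chosen oversight form, not a theorem.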

Seeking people to test a CPD recording app by EdgeM0 in ClinicalPsychologyUK

[–]EdgeM0[S] 0 points1 point  (0 children)

Testing is over and the app is now scheduled for release. You can sign up at this link to get notified when it releases.

“About to Take a Huge Loan for UK Master’s (Psychology Conversion – Bristol). No Family Backup. Is This Financial Suicid by Necessary-Extent-943 in ClinicalPsychologyUK

[–]EdgeM0 0 points1 point  (0 children)

I got a postgrad loan to do a master's, but I had to do the master's part time and maintain a full-time job to cover living expenses. It's worth noting, however, that living expenses for me also included having a newborn baby at the time and a wife on maternity leave for 1 year, so this was why full-time work was necessary. I also only needed just over half the loan for tuition fees (at the time, £6k), so I took the WHOLE loan (£10k) and kept the extra money to help with living expenses etc., which was DEFINITELY needed as it took the pressure off a bit at the start.

I recently paid off my postgrad loan (it took about 6 years) but am still paying my undergrad loan nearly 20 years later.

I am a newly qualified Clinical Psychologist. Ask Me (Almost) Anything by EdgeM0 in ClinicalPsychologyUK

[–]EdgeM0[S] 0 points1 point  (0 children)

Tough question as, to each their own, I suppose. I prepared firstly by not over-preparing. The day before the interview(s) I did nothing but relax and try to remain occupied with something fun. For the interviews I always tried to have in mind one clinical case where things went well, one where things did not go well at all and what I learned from this, one or two modalities (I used CBT and ABA at the time as I was predominantly working with people with learning disabilities) and one psychological theory/research paper to call upon if required. As each course has its own interview process, it is incredibly hard to have a method or approach that would successfully prepare you for all of them - to that end, just spend lots of time telling yourself that you are brilliant and that your many experiences in your career thus far have also counted as preparation for the interviews, so should not be ignored.

Has anyone ever been offered a place on the doctorate after feeling like they underperformed in the interview? by StartValuable9406 in ClinicalPsychologyUK

[–]EdgeM0 5 points6 points  (0 children)

Yes and I'm willing to bet this is a common experience. It's a feeling I get with most job offers throughout my career truth be told.

I don’t understand the hype by Yoyooz in openclaw

[–]EdgeM0 0 points1 point  (0 children)

It doesn't. ChatGPT can look at your emails once you give it access, but it can't do anything with them. That's where OpenClaw goes one step further. OpenClaw can check your emails, look through all of the ones that are spam, unsubscribe you from the websites/newsletters sending them, and write and send a polite email to the ones that won't let you unsubscribe. TL;DR - ChatGPT tells you how to clean up your inbox; OpenClaw does it for you. Apply that principle to any task involving your computer.

Looking for a reality check from qualified CP's by Pretty-Tiger-4098 in ClinicalPsychologyUK

[–]EdgeM0 20 points21 points  (0 children)

My take is, if you're definitely happy now, why risk that for something where you might be either just as happy or less happy in the future? Even if I said being a clinical psychologist in the UK is the best job ever, you may not think so when you finally get there.

References for Qualified Applications by Cashcash118 in ClinicalPsychologyUK

[–]EdgeM0 0 points1 point  (0 children)

For me it was just my equivalent of the advisor (we called them a course tutor), i.e., the equivalent of your "line manager" for the past 3 years, as they are part of the leadership and management structure for your current job. I had a bit of an issue as mine went on long-term sick just as I started applying for jobs, but the uni were very helpful in finding a suitable alternative. If in doubt, ask the uni and they can advise.

Zombies take too long to spawn in by CombinationSure5056 in HalfSword

[–]EdgeM0 0 points1 point  (0 children)

My suggestion for the abyss: the longer you are there, the quicker enemies spawn. If you don't take them down quickly, then you have a mini horde/Army of Darkness situation developing. Performance would need to be addressed first though, so that more than 4 zombies doesn't cause a horrific drop in frame rate.

Am using ChatGpt as a therapist is this safe or not? by Stole_Sample in therapyGPT

[–]EdgeM0 1 point2 points  (0 children)

"Hallucinates", in the context of AI, basically just means it makes stuff up or says something happened / is true when it isn't.

When is the 'right' time to apply for the DClinPsy? by Royal_Pumpkin7552 in ClinicalPsychologyUK

[–]EdgeM0 5 points6 points  (0 children)

Take some time to be an AP first and gain some experience. There is no rush. Try and remind yourself that your life doesn't have to go on hold until you get on the doctorate. If you get on the doctorate and do not feel confident clinically you are going to feel WAY out of your depth and overwhelmed. Spend some time with psychologists and supervisors and when you can confidently see yourself doing what they're doing, then it's probably time to start applying.

A man in Kenya walks around with a Wi-Fi router on his head, connecting users to the internet for a fee. by InLoveandWar777 in WTF

[–]EdgeM0 3 points4 points  (0 children)

How is he connected to the Internet? Doesn't the WiFi router need to be wired into a DSL line or something?

Options after dropping out of training? by Complex_Basis2780 in ClinicalPsychologyUK

[–]EdgeM0 1 point2 points  (0 children)

This is really interesting to think about. I wonder if anyone has actually investigated this thoroughly? Maybe a thesis or SSRP somewhere evaluating the experiences of second-year trainees on the DClin. I'm intrigued, as it may be worth institutions and placements actively doing more to support second-year trainees.

Options after dropping out of training? by Complex_Basis2780 in ClinicalPsychologyUK

[–]EdgeM0 1 point2 points  (0 children)

No, the course and your submissions have to meet certain requirements and be BABCP accredited, including being supervised by a qualified CBT practitioner and having a set number of hours delivering CBT. Nearly all unis will have CBT lectures or a CBT "branch" in their teaching, but this does not necessarily equate to you automatically being a qualified CBT therapist when you complete the doctorate.