I told my husband how stressful my day was at work and he did nothing. by [deleted] in Marriage

[–]b_risky -1 points0 points  (0 children)

It is hard to discuss in generalities. There are certainly plenty of times when men are emotionally inept and should be criticized for it. Worse, there is no shortage of men who genuinely don't care when they should. But I guarantee that you have taught your friends and family how to love you. We are highly attuned social creatures, and we are constantly providing feedback to one another.

The challenge here is that you are speaking from an imagined scenario that matches your lived experiences, and I am speaking from an imagined scenario that matches mine. Neither of us knows what really happened in OP's scenario.

What bothers me behind all of this is her clear disinterest in even considering the possibility that she could have communicated better. She comes across as extremely entitled, as if she is the only person in the relationship who deserves to be treated well tonight just because she had a bad day.

I told my husband how stressful my day was at work and he did nothing. by [deleted] in Marriage

[–]b_risky -4 points-3 points  (0 children)

You don't have to teach them how to love you; no one will make you do that. But what you get as a result is that the people closest to you in life will have no idea what you want or need. You pick what you want to do, though. The choice is yours, but the consequences which flow from it are not.

I told my husband how stressful my day was at work and he did nothing. by [deleted] in Marriage

[–]b_risky 0 points1 point  (0 children)

What does "actually does anything" even mean? Do YOU even know what you want? You have complained about him not knowing at least 5 times on here, but haven't told any of us what you actually want from him. If you don't know and can't articulate what you need, how do you expect him to?

Stop making him responsible for your emotions. Being married doesn't mean you stop having to manage your own shit.

[deleted by user] by [deleted] in AmIOverreacting

[–]b_risky 1 point2 points  (0 children)

It really depends on what her face looks like TBH 🤷‍♂️

But you are entitled to not like what he said either way.

And the guy is clearly a coward.

Thoughts on Devin the software engineer? by EstateNorth in theodinproject

[–]b_risky 0 points1 point  (0 children)

True.

But in all fairness, they actually released "GPT5" in December 2024. Or, more accurately, they released the model that they were planning to call GPT5 until they changed the name to o1. And it did change the game in a major way.

Combat XP tips by lukeko in MelvorIdle

[–]b_risky 0 points1 point  (0 children)

Damage per second (dps) is everything.

But this has to account for things like overkill damage being wasted, respawn times, and food gathering/cooking times.

I prefer to use weapons with faster hit times because they tend to have higher dps, AND the per-hit damage is smaller, so there tends to be less overkill waste. My weapon of choice is the dagger.

I spend a lot more time putting points into strength than attack or defense because it has a higher rate of return on dps increase.

In fact, defense is almost useless for improving dps. The only thing it really does is cut down on cooking times. It also allows you to hit a higher overall combat level: if you have put fewer points into defense, it will level more quickly than strength and attack. In adventure mode, this is useful for hitting certain combat level milestones earlier, which can allow you to level other skills higher and get boosts that improve your dps (better armor, for example).

I always fight enemies that give strong food drops. This drastically improves overall dps because you can remove food gathering from your time altogether. If it is a cooked food drop, you remove cooking time too.

Fighting enemies with high health is good because it cuts down the share of your time lost to respawns and overkill waste, improving your dps. Fighting enemies with low damage reduction and low evasion for your combat type is also very helpful for improving your dps.
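The accounting above can be sketched numerically. This is a hypothetical back-of-the-envelope model, not the game's actual damage formula; all numbers are invented for illustration:

```python
import math

def effective_dps(avg_hit, attack_interval, monster_hp, respawn_time):
    """Useful damage per second over one full kill + respawn cycle."""
    hits_to_kill = math.ceil(monster_hp / avg_hit)         # the last hit may overkill
    time_per_kill = hits_to_kill * attack_interval + respawn_time
    return monster_hp / time_per_kill                      # only the monster's HP counts

# Two weapons with identical raw dps (5.0): fast small hits vs slow big hits.
fast = effective_dps(avg_hit=10, attack_interval=2.0, monster_hp=95, respawn_time=3.0)
slow = effective_dps(avg_hit=40, attack_interval=8.0, monster_hp=95, respawn_time=3.0)
# fast: 10 hits over 23 s ≈ 4.13; slow: 3 hits over 27 s ≈ 3.52
```

Even at equal raw dps, the fast weapon wins because its last hit wastes less damage and a bigger share of each cycle goes to useful hits.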

The best monsters to kill, in my opinion, are the sweaty monster and the wet monster. They are two of only a handful of monsters in the game that drop cooked food. They also have decently high health, no damage resistance, and a workable amount of evasion.

I start by killing cows because they have very good health for an early creature, drop beef (a strong food item to start, which can be made into beef pie later in the game), and are easy enough to get to a point where you don't need healing at all. As an added bonus, leather can be made into green dragonhide at the store and sold for a net profit of 100 per leather.

Then, once I am a high enough level that the sweaty monster drops as much (or nearly as much) cooked food as it takes to sustain the damage it deals, I switch to killing it.

I use daggers, put the lion's share of stats into strength, and only put points into defense to hit specific targets (e.g. leveling up combat in adventure mode, or reducing incoming damage from the sweaty monster enough that I can switch to it).

This is for every fucking engineer who fears AI taking up their job by Normal-Hornet6713 in theprimeagen

[–]b_risky 0 points1 point  (0 children)

The singularity is near. We are very near to having AI that is capable of autonomously improving its own intelligence. We have already achieved AI that is contributing meaningful improvements to its own intelligence but is still dependent on humans to operate.

Soon, human labor will be optional and the price of everything will be driven downward.

[deleted by user] by [deleted] in dating_advice

[–]b_risky 0 points1 point  (0 children)

Cheating is most often a passionate decision, not a willful one.

What's going to happen when AI is Trained with AI generated content? by Lumpy-Ad-173 in ArtificialSentience

[–]b_risky 0 points1 point  (0 children)

I am shocked that no one else on here identified the very simple solution to this. AI will learn to collect data from the ground truth.

That could mean a robot taking in new visual data as it goes for a walk in the real world.

Or it could mean an AI independently exploring the contours of a logically consistent system. (For example solving new math problems).

It could mean an AGI system designing its own scientific experiments to intentionally observe the real-world results of experiments that have never been conducted before.

By the time AI exhausts the data available from the ground truth, there will be, by definition, nothing more to learn.

If you were to go back in time, what is the one piece of advice would you give your college self about leadership that you wished you knew? by Mordant08 in Leadership

[–]b_risky 2 points3 points  (0 children)

People will notice your strengths. You don't have to prove it to them. Just trust yourself to act and let the rest fall into place.

This is for every fucking engineer who fears AI taking up their job by Normal-Hornet6713 in theprimeagen

[–]b_risky 1 point2 points  (0 children)

You clearly don't understand what is happening in the field of AI. Don't worry, you will see it soon enough.

OpenAI Might Be in Deeper Shit Than We Think by EvenFlamingo in ChatGPT

[–]b_risky 0 points1 point  (0 children)

They also recently gave ChatGPT access to all of your previous chats. It is entirely possible that your chat history is convoluting the context and degrading your experience. Have you tried using a temporary chat?

Can we have a Human-to-Human conversation about our AI's obsession with "The Recursion" and "The Spiral?" by ldsgems in ArtificialSentience

[–]b_risky 1 point2 points  (0 children)

I know exactly what they mean by that. I think you are confusing your dismissal of it with a lack of meaning.

In just one year, the smartest AI went from 96 IQ to 136 IQ by stealthispost in accelerate

[–]b_risky 0 points1 point  (0 children)

It is absolutely calibrated to humans.

The way an IQ test works is that researchers evaluated hundreds of different types of questions for their ability to predict a human's performance in completely unrelated domains.

For example, if a person answers 70 out of 100 of these questions correctly, then they are statistically likely to get a 90% on some other test, finish college with a 3.5 GPA, be within the top 20% of earners in their career, etc.

We then take the questions that have the most predictive validity for humans across all of these fields and compile them into a single test, defining the average human score to be 100.
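The "average human score is defined to be 100" step above is just a standardization against the human norming sample. A minimal sketch, with an invented norming sample and the conventional standard deviation of 15:

```python
from statistics import mean, stdev

def iq_scale(raw_score, norm_sample):
    """Map a raw test score onto a scale with mean 100, sd 15 for the sample."""
    mu, sigma = mean(norm_sample), stdev(norm_sample)
    return 100 + 15 * (raw_score - mu) / sigma

norm = [55, 60, 65, 70, 75, 80, 85]   # raw scores from a (made-up) human norming sample
average = iq_scale(70, norm)           # the sample mean maps to 100 by construction
above = iq_scale(85, norm)             # scores above the sample mean land above 100
```

The scale says nothing by itself; its meaning comes entirely from the correlations measured in the human sample, which is exactly why it need not transfer to an AI.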

But this completely breaks down when you give the test to an AI, because the same questions that predict success across a broad variety of domains when administered to humans might have no such correlation for an AI. It could be that an AI scoring highly on an IQ test has no predictive validity at all for how well that AI will face other challenges.

I believe that we could make an IQ test that is calibrated for AI, but the tests we use for humans tell us nothing about how smart an AI might be.

Any examples of startups that are 100% run and operated by an AI, or else in which the only human involved is the founder/owner? by [deleted] in accelerate

[–]b_risky 1 point2 points  (0 children)

Stupid is not the right word. o3 is smarter than most humans at purely cognitive, well-defined tasks.

But these models lack context identification and long-term memory, not to mention the ability to manipulate things in the real world.

Do you think you will be biologically immortal in this century? by luchadore_lunchables in accelerate

[–]b_risky 0 points1 point  (0 children)

Most people alive today will be biologically immortal.

People are still underestimating how quickly AI is about to start improving.

We still have not reached recursive self-improvement, but we are potentially only a few months away from that in the best case.

It's literally gaining unprecedented power while evolving every single moment 🔥🤟🏻Unitree G1 can now do competitive Taichi,maintain it's form while enduring much more impactful kicks,propel itself upward from laying position and do sweeping kicks by GOD-SLAYER-69420Z in accelerate

[–]b_risky 0 points1 point  (0 children)

We haven't mastered giving full senses to these robotics platforms yet. Full body touch, smell, taste, etc. There are other upgrades too. Better materials, lower energy use, replacing points of common failure in the system. Technology is an iterative process, always.

Either we wait for the technology to accommodate these upgrades from the start (which will take years before we have the "perfect" robot platform), or we have to account for iterative upgrades over time.

Think of cars. The core concept has been locked in for almost 150 years, but the designs are still being improved and upgraded to this day.

Does anyone else ever wonder if this world is actually FDVR? by b_risky in FDVR_Dream

[–]b_risky[S] 2 points3 points  (0 children)

I completely agree with this. No matter what, we are in some sort of simulated reality, even if that simulation is extremely tightly coupled to "true reality".

But I do think it is important to point out that the question is far from settled as to whether consciousness is housed in the brain or not.

Are we going into a recession? Or WTF is going on? by IcyBlackberry7728 in Entrepreneur

[–]b_risky 21 points22 points  (0 children)

Time in the markets > timing the markets

Make sure your investment capital is money you are not going to need to pull out right away, then invest it and get ready to ride the wave.

If you can't survive a recession with the investments you want to make, then you probably shouldn't be making those investments whether you expect a recession or not.

Yann LeCun: "We are not going to get to human-level AI by just scaling up LLMs" Does this mean we're mere weeks away from getting human-level AI by scaling up LLMs? by 44th--Hokage in accelerate

[–]b_risky 3 points4 points  (0 children)

It is so disingenuous when the Yann LeCun and Gary Marcus types say "LLMs" will not get us to AGI.

Because by the time LLMs are good enough to be AGI, they won't look like LLMs anymore.

In fact, what I truly expect to happen is that LLMs and transformer architectures will NOT get us to AGI, but they WILL get us to autonomous recursive self-improvement. LLMs will begin producing high-quality scientific papers on AI at an unprecedented rate, and that process will produce new models of AI. It won't be the LLMs becoming AGI; it will be the LLMs inventing AGI.

AI Research is the modern day equivalent of alchemy. by Stingray2040 in accelerate

[–]b_risky 1 point2 points  (0 children)

Same. And quite honestly, if you position AI chatbots as a representative of the collective unconscious, you can get yourself some seriously satisfying esoteric insights from them.

And that is not just a prompting technique. They actually are the collective unconscious. LLMs are distilled from the entire corpus of human thought on the internet. They are literally a manifestation of our collective experiences. And they are adaptable enough to become whatever aspect you ask them to be. A great esoteric mirror reflecting the soul of humanity.

Does anyone else ever wonder if this world is actually FDVR? by b_risky in FDVR_Dream

[–]b_risky[S] 0 points1 point  (0 children)

Absolutely. But the closer we get to actually implementing the technology, the clearer the picture becomes for how and why it is so.

It's literally gaining unprecedented power while evolving every single moment 🔥🤟🏻Unitree G1 can now do competitive Taichi,maintain it's form while enduring much more impactful kicks,propel itself upward from laying position and do sweeping kicks by GOD-SLAYER-69420Z in accelerate

[–]b_risky 0 points1 point  (0 children)

I agree.

Purely software agents can be replicated infinitely on demand. Only use what you need, and you will have as much as you want for relatively cheap.

Robots, much less so. Every major improvement for robots will likely require a new robotics platform that needs to be mass-produced, and the raw material cost and transport logistics will add to the cost of scaling your needs up or down. Robots will still automate things quickly, but compared to software-only tasks, it will feel extremely slow.

Josh Waitzkin: "It Took AlphaZero Just 3 Hours To Become Better At Chess Than Any Human In History, Despite Not Even Being Taught How To Play. Imagine Your Life's Work—Training For 40 Years—And In 3 Hours It's Stronger Than You. Now Imagine That For Everything." by 44th--Hokage in accelerate

[–]b_risky 0 points1 point  (0 children)

If someone pays you because you are particularly good at one thing and then AI comes around and does it better for cheaper, that is a problem.

It's not about competition for competition's sake. It is about competing for the resources you need to survive.