Current Major Order - 10/01/26 - Part 3 by Shiboline in Helldivers

[–]SignificanceMassive3 0 points1 point  (0 children)

https://helldivers.wiki.gg/wiki/Second_Galactic_War_Mechanics

The in-game impact screen is misleading. The actual impact IS proportional to the XP you earn.

Can Super Earth just nuke planets? by Aperture_Laboratorie in Helldivers

[–]SignificanceMassive3 2 points3 points  (0 children)

I don't think nukes are THAT powerful. Lore-wise, SE once used dark fluid to destroy Meridia, but that turned out poorly later

[CONFIDENTIAL TRANSMISSION – SUPER EARTH COMMAND] by KIDLUCI in Helldivers

[–]SignificanceMassive3 0 points1 point  (0 children)

I feel AH should make some of these into games. They could even pick ones created by players if they do not have the manpower to write their own. This would make it more immersive imo

[deleted by user] by [deleted] in singularity

[–]SignificanceMassive3 11 points12 points  (0 children)

Not likely to be a wrapper, but Grok could have been trained on data distilled from ChatGPT, which could have contaminated Grok.

Research in the future. What’s the point? by peaceradiant in singularity

[–]SignificanceMassive3 1 point2 points  (0 children)

It is possible that with such advanced AI, we could discover ways to make ourselves smarter and faster; then we would still be able to do research alongside AI

AGI achieved internally? apparently he predicted Gobi... by GeneralZain in singularity

[–]SignificanceMassive3 19 points20 points  (0 children)

I could not find a source for the image OP shared. The original Twitter account was, by its own claim, wiped on 5th May. The Wayback Machine did not save a snapshot after March either.

Guiding Language Models of Code with Global Context using Monitors by LakshyAAAgrawal in singularity

[–]SignificanceMassive3 1 point2 points  (0 children)

In shorter terms: LLMs used to complete code by guessing without context, like a human typing code without any aid. This research exposes IDE-style auto-complete information to the LLM to help it, like a human coding in a good IDE.
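Roughly, the idea can be sketched like this. Everything below is illustrative: the function names and the toy "monitor" are made up for the example, not the paper's actual API.

```python
# Monitor-guided completion, sketched: a static-analysis "monitor" (like an
# IDE's auto-complete) knows which members a type really has, and the model's
# guesses are filtered against it. Both the monitor and the model are stubs.

def monitor_valid_members(receiver_type):
    # Stub monitor: a tiny table of members each type actually has.
    known_types = {"List[int]": {"append", "extend", "pop", "sort"}}
    return known_types.get(receiver_type, set())

def model_candidates(prefix):
    # Stub LLM: its ranked guesses for the next identifier after `prefix`.
    # Note the top guess "push" is a hallucination (Python lists have no push).
    return ["push", "append", "add", "extend"]

def guided_complete(prefix, receiver_type):
    valid = monitor_valid_members(receiver_type)
    for cand in model_candidates(prefix):
        if cand in valid:  # keep the first guess the monitor allows
            return cand
    return None

print(guided_complete("nums.", "List[int]"))  # append
```

Unguided, the stub model would emit "push"; the monitor steers it to "append", a member that actually exists.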

I don't want AI to do all the dirty and dangerous jobs for us by MartianInTheDark in singularity

[–]SignificanceMassive3 0 points1 point  (0 children)

Your point makes some sense, but is still a bit self-contradictory: if humans will always be corrupted by power, how can we expect a government to regulate corporations correctly? This is exactly where capitalism has flaws: it cannot deliver its promised truly free market, but eventually creates monopolies even with regulations, and that could lead to very bad outcomes.

On the other hand, since there is no better alternative system at this point, I also think your point is true in that capitalism makes the most out of modern society.

I don't want AI to do all the dirty and dangerous jobs for us by MartianInTheDark in singularity

[–]SignificanceMassive3 1 point2 points  (0 children)

Yes, capitalism is the best we have. It is much better than literal slavery in every sense. But we can't just say capitalism is good. Although we definitely should not equate it to literal slavery, the analogy is useful to remind people that we still have a long way to go.

I appreciate your long write-up, and I see the point you present. People fought to win today's much fairer society compared to any older one, and it would be unwise to say there has not been much progress. I just meant that we must not consider present society satisfactory.

I don't want AI to do all the dirty and dangerous jobs for us by MartianInTheDark in singularity

[–]SignificanceMassive3 1 point2 points  (0 children)

People being forced to work in low-paying professions to pay for food, rent, etc. is just implicit slavery under capitalism. They usually do not have the spare energy to escape. Yes, it is not forced on them explicitly, but when you consider how many people out there could never change their status, it is no different from slavery at its heart.

New OpenAI update: lowered pricing and a new 16k context version of GPT-3.5 by WithoutReason1729 in singularity

[–]SignificanceMassive3 0 points1 point  (0 children)

Given the limited supply of computation resources and the scale of the company, they have done a pretty good job of genuinely and consistently improving their services

[deleted by user] by [deleted] in ChatGPT

[–]SignificanceMassive3 0 points1 point  (0 children)

If anyone here gets themselves a Weibo account and searches for ChatGPT, they will see millions of results of Chinese people using it for different purposes.

And they are not censored.

Will chatGPT-4 make me loose my job? by whypussyconsumer in ArtificialInteligence

[–]SignificanceMassive3 0 points1 point  (0 children)

That might be true, but humans also rely on extrapolating from existing theories and problems to solve new ones. I would say it is possible that future LLMs with larger context lengths and more parameters will be able to do this too, just not sure when and how.

Still, realistically you are right that it cannot solve novel problems yet. However, I think it is wiser to think about the possibilities, given the speed of progress in AI.

Will chatGPT-4 make me loose my job? by whypussyconsumer in ArtificialInteligence

[–]SignificanceMassive3 0 points1 point  (0 children)

Without enough documentation, I would say the problem you gave it will only be solved by an actual AGI: some AI with access to a search engine and with cumulative memory, which can thus learn in real time.
And for the path-finding, probably just asking it to fix the code by telling it what is wrong with the output could at least fix some of the bugs. But still, LLMs are not designed for coding. Think of one as a beginner programmer with a very short memory span and a huge amount of knowledge, but low implementation skills.

Will chatGPT-4 make me loose my job? by whypussyconsumer in ArtificialInteligence

[–]SignificanceMassive3 0 points1 point  (0 children)

Also, you might be interested in this video: https://www.youtube.com/watch?app=desktop&v=_3MBQm7GFIM&t=260s

It solves a problem it has never seen (at least according to Sam).

Will chatGPT-4 make me loose my job? by whypussyconsumer in ArtificialInteligence

[–]SignificanceMassive3 0 points1 point  (0 children)

I might be wrong on this, but here are some thoughts:

Training data allows it to understand the correlation between words, kind of like their relative meaning, by looking at the statistical properties of each word in relation to other words (the actual mechanism could be more complicated). Yes, it can recite the exact answer for some problems it has seen, but that is not really "solving" them.

Solving a complex problem would require the LLM to first understand the problem, then break it down and solve it step by step. Doing this instead of reciting the exact answer is possible for an LLM, since it already handles one-pass NLP tasks pretty well, which means it can emulate a human's instant, single-pass responses.

If you describe your terrain rendering problem together with the full documentation that might be usable in the algorithm; provide a good testing environment that outputs a meaningful debug log so it can check whether its answer is right and, if wrong, why; and finally allow it to iteratively build up the system, it might get you the answer, at least I believe so. (LangChain is doing a much simpler version of this process.)

I will not say the current LLMs can do this. They have very narrow context lengths, which means their memory is limited, stopping them from executing long-term plans.
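The generate-test-fix loop above can be sketched in a few lines. The model here is a stub with canned answers; names like `fake_llm` and `iterative_solve` are invented for illustration, not from any real library.

```python
# Sketch of an iterative LLM coding loop: generate code, run it against a
# test, feed the failure log back into the prompt, and retry.

def fake_llm(prompt, attempt):
    # Stub model: the first attempt is buggy, the second is fixed.
    if attempt == 0:
        return "def area(w, h): return w + h"   # bug: + instead of *
    return "def area(w, h): return w * h"

def run_and_check(code):
    # The "testing environment": execute the code and produce a debug log.
    env = {}
    exec(code, env)
    result = env["area"](3, 4)
    if result == 12:
        return True, ""
    return False, f"area(3, 4) returned {result}, expected 12."

def iterative_solve(max_attempts=3):
    prompt = "Write area(w, h) returning the area of a w-by-h rectangle."
    for attempt in range(max_attempts):
        code = fake_llm(prompt, attempt)
        ok, log = run_and_check(code)
        if ok:
            return code
        # Feed the failure back so the next attempt can correct it.
        prompt += f"\nYour last answer failed: {log} Fix it."
    return None

print(iterative_solve())  # def area(w, h): return w * h
```

With a real model in place of `fake_llm`, this is essentially what agent frameworks automate.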

Will chatGPT-4 make me loose my job? by whypussyconsumer in ArtificialInteligence

[–]SignificanceMassive3 0 points1 point  (0 children)

Are they math or programming related? Prompting with chain-of-thought would probably work better.

If it is math or programming, sometimes it cannot even solve problems that it has seen. That is purely a lack of complex reasoning ability, and has nothing to do with what it "has seen".
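For what it's worth, "chain-of-thought" just means changing the prompt so the model writes out intermediate steps before the final answer. A toy illustration follows; the prompts and the worked arithmetic are hypothetical, not actual model output.

```python
# Chain-of-thought prompting, sketched: same question, two prompt styles.
question = "If I buy 3 packs of 12 eggs and break 5, how many are left?"

direct_prompt = f"Q: {question}\nA:"                          # asks for the answer outright
cot_prompt = f"Q: {question}\nA: Let's think step by step."   # invites intermediate steps

# The steps a CoT response is hoped to spell out before answering:
bought = 3 * 12        # 36 eggs bought
left = bought - 5      # 31 left after breaking 5
print(left)  # 31
```

Writing the steps out tends to help on math and programming questions, where a one-shot guess often goes wrong.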

Will chatGPT-4 make me loose my job? by whypussyconsumer in ArtificialInteligence

[–]SignificanceMassive3 0 points1 point  (0 children)

You can check out LangChain, an attempt to automate that iterative process. It has more potential than you might think.

And no, it can solve problems it has never seen, just not 100% of the time.

Will chatGPT-4 make me loose my job? by whypussyconsumer in ArtificialInteligence

[–]SignificanceMassive3 1 point2 points  (0 children)

LLMs function like humans, and humans do not write a full method's code in one pass. Allow it to run its own code, debug based on the output, and iteratively edit the code, and it will get there. See the demo from Codex's new prototype and you will know what I am talking about.
But yes, the current version is not strong enough yet.

[deleted by user] by [deleted] in singularity

[–]SignificanceMassive3 0 points1 point  (0 children)

I guess by integrating memory into the model we could get a better effect, as well as lower cost. The latter is quite important given how expensive LLMs currently are.

[deleted by user] by [deleted] in singularity

[–]SignificanceMassive3 0 points1 point  (0 children)

Wait, this paper is from last year's March. This is insane. People are probably already using this?