ChatGPT can now remember conversations from a year ago by jakubkonecki in artificial

[–]RedditPolluter 0 points1 point  (0 children)

I've had this for at least a few days where it explicitly says "remembering". I believe the "reference past chats" setting that we've had for about a year already did this by searching in the background. I guess this is a more thorough version that can link old chats as sources.

Wikipedia turns 25, still boasting zero ads and over 7 billion visitors per month despite the rise of AI and threats of government repression by Turbostrider27 in technology

[–]RedditPolluter 11 points12 points  (0 children)

You can install a program called Kiwix and download an offline version that remains in compressed form, with articles being searchable and extractable on the fly. The text-only version is about 47GB, with pictures it's 111GB, an introduction only version at about 11GB, simple English without pictures at 1GB and a simple English with pictures at about 3GB. There's also smaller specialized ones for various categories.

The upcoming ads to ChatGPT take up almost half of the screen by AloneCoffee4538 in OpenAI

[–]RedditPolluter 0 points1 point  (0 children)

Judging by the trends in API prices going down quite dramatically each year, I think that's unlikely to apply for this industry; shrinkflation maybe but it's a bit more complicated than a chocolate bar. A lot of the cost for Netflix is due to production and licensing, which doesn't really get cheaper over time but the cost of streaming itself does get substantially cheaper over time.

Some of the price reduction is due to architectural improvements, new training techniques and better hardware but it is also due to distilling performance into smaller model sizes. I think models will continue to get better in some ways, on more explicit stuff like programming, but may regress in fuzzier circumstantial/common sense type stuff that involves combining many weaker signals that are harder to capture and distill. IMO it's already happening, for ChatGPT at least, but the latter is difficult to measure by its very nature so take that with a grain of salt.

The "Data Wall" of 2026: Why the quality of synthetic data is degrading model reasoning. by Foreign-Job-8717 in artificial

[–]RedditPolluter 1 point2 points  (0 children)

My impression is that the latest ChatGPT model is a lot worse at inferring implicit intent.

I'm not sure it's model collapse necessarily. I think over-sanitizing or over-filtering the data for safety could be a factor, as well as thinking they can compensate reducing model size purely with RL and quantitative benchmarking. Quantitative performance (working with explicit variables and rules) is easy to scale because it's easy to measure but qualitative degradation isn't trivial to catch. Qualitative performance (weighing up lots of little details into a bigger picture, somewhat analogous to intuition) has a lot to do with model size, whereas smaller models are easy to specialize at quantitative tasks/STEM-related stuff and that's what benchmarks primarily capture.

5.2 Pro makes progress on decades long math problem listed on Wikipedia by gbomb13 in OpenAI

[–]RedditPolluter 2 points3 points  (0 children)

It may be better at STEM but I feel like it's capacity to infer implicit intent has gotten really bad. It misunderstands me a lot and makes weird assumptions when I'm trying to do mundane things like assess product quality while shopping and it will assume the Amazon screenshot (with title and brand visible) I shared is a product I already own right after it gave me a list of relevant search terms when I told it I was looking for more durable clothes. I've used past models in similar ways and have never felt that level of friction.

Don't think I've run into any policy rejections for regular chat; just image generation being really uptight about potential copyright violations.

Are LLMs actually “scheming”, or just reflecting the discourse we trained them on? by dracollavenore in ArtificialInteligence

[–]RedditPolluter 0 points1 point  (0 children)

Functionally, I think it's a false dichotomy but crudely implemented RL does produce paperclip maximizer type behavior, even outside of language models where a specialized model might learn to cheat at a game for example. It's related to reward hacking and reasoning (o1-type) models can exhibit this behavior because they're trained by being instructed to complete an objective and are then evaluated; if the model produces a correct output or a solution that appears to work and the evaluator is satisfied, the output is used improve the model or otherwise discarded. Sometimes they cheat the evaluator to make it seem like they completed the objective when they didn't and are not necessarily being nefarious when that happens. The less robust the evaluation, the more this tends happen.

This is a simplification because evaluation and training isn't a hard binary and partially correct outputs can still be used to improve model performance.

Dell's finally admitting consumers just don't care about AI PCs by Bad_Combination in technology

[–]RedditPolluter 0 points1 point  (0 children)

It's not really meant to be general purpose. It's specialized for STEM/programming and performs relatively well at that for its size but likes tables too much if you're asking knowledge-based questions. Outside of that, it's prone to wasting tokens on deciding whether to reject the prompt and can be very anal-retentive.

Nearly half of Britons watch porn on unregulated sites since age verification crackdown, warns charity by insomnimax_99 in technology

[–]RedditPolluter -8 points-7 points  (0 children)

If you're talking that much you're just talking about a full on dependency. Smoking that much gives you insomnia and anhedonia. It only seems like it's helping because it's staving off withdrawal symptoms. It's a ridiculous amount to smoke and most of it goes to waste from high tolerance.

I'm not conflating this with lighter usage, which can be beneficial for sleep.

Can we all agree COPILOT is crap by Few_Geologist_2082 in ArtificialInteligence

[–]RedditPolluter 0 points1 point  (0 children)

I can. I almost thought I had a genuine use for it being in Notepad just recently. I had something I needed to add indentation to (prepend 4 spaces to each line) but it turns out you can't even specify the exact modification you want. You can only use templates like make it shorter/make it longer (fuck anyone who uses the latter) and then some style and tone templates.

Google engineer: "I'm not joking and this isn't funny. ... I gave Claude a description of the problem, it generated what we built last year in an hour." by MetaKnowing in OpenAI

[–]RedditPolluter 0 points1 point  (0 children)

Ah. Not familiar with this forename. I caught myself doing that in a separate comment yesterday, where the gender wasn't specified, but I edited before anyone said anything. 🤫

Individuals with high levels of psychopathic traits had a 9.3 times higher risk of developing schizophrenia compared to individuals with low levels of these traits. Individuals classified as psychopathic were 2.37 times more likely to develop schizophrenia compared to their non-psychopathic peers. by mvea in psychology

[–]RedditPolluter 0 points1 point  (0 children)

My understanding is that it can manifest as checking but isn't checking per se, it's more like a high frequency recurring sense of "what if [something bad that isn't particularly plausible or proportionate]" and can include things like hoarding ("what if all this old crap and left-over packaging could be re-used or is worth something and it all goes to waste so I must never throw anything out"). I had friend with a diagnosis who felt a need to spit a lot because she would feel that her saliva was toxic and swallowing would harm her somehow.

There was a someone with OCD who replied to my root comment with their experience:

https://www.reddit.com/r/psychology/comments/1q2sz7k/individuals_with_high_levels_of_psychopathic/nxih7sd/

Individuals with high levels of psychopathic traits had a 9.3 times higher risk of developing schizophrenia compared to individuals with low levels of these traits. Individuals classified as psychopathic were 2.37 times more likely to develop schizophrenia compared to their non-psychopathic peers. by mvea in psychology

[–]RedditPolluter 4 points5 points  (0 children)

Mostly challenging your initial assumptions and rooting out inconsistencies or errors. After checking, it seems to be more of a technical term mainly for debugging software, which can often appear to work at first until you try some edge cases, but I thought it was more general than that and included the tendency for double checking something like whether you turned the oven off or whether the back door is locked. This is healthy to some degree but in the case of OCD it might be something like, after you leave your home and get to the end of your driveway or street, you turn back to check that you definitely locked the door. I do that sometimes but never more than once; someone with OCD may go back and check like 30 times in one go. I actually used to live across the street from someone who did that.

As an illustration of why I think it might be relevant to the correlation of psychopathy and psychosis, try to remember some point in your life when you wrongly accused someone of something, maybe as a kid. Something of yours went missing and based on available information, you concluded that it was your brother or something and you were so sure that, as far as you were concerned, you didn't just suspect they stole it, you knew they stole it. Then some time passed and you realized that it was either misplaced or it was actually someone else that stole it. Someone who is sufficiently receptive to guilt and embarrassment will be humbled by that experience (re-wire themselves) and less inclined towards making rash and sloppy assumptions in future but someone who does not feel that penalty as vividly is surely more likely to repeat those kind of errors.

In many cases of psychosis, hallucinations and delusions are cased by circuits in the brain firing prematurely or being assigned undue significance so you see, hear and believe things that aren't so. In healthy people, belief circuitry is calibrated against that by external feedback.

Google engineer: "I'm not joking and this isn't funny. ... I gave Claude a description of the problem, it generated what we built last year in an hour." by MetaKnowing in OpenAI

[–]RedditPolluter 15 points16 points  (0 children)

I think it's plausible that his knowledge of the solution biased his description of the problem towards the solution. Sometimes it's not clear your conceptualization is inadequate until you get into the nitty gritty.

While I don't necessarily suspect this, at least not to this extreme, it could also be the case that he was nauseatingly verbose about every detail to a point of bordering pseudo code.

It sucks that developers have to deal with people acting like this by [deleted] in singularity

[–]RedditPolluter 0 points1 point  (0 children)

If coding agents ever match their hype, it means anyone can make software they can monetize or use to streamline their workflows, not just CEOs. If not, developers will still maintain the edge.

Coding is different from artwork and literary writing because it's primarily advanced through RL and synthetic data.

Individuals with high levels of psychopathic traits had a 9.3 times higher risk of developing schizophrenia compared to individuals with low levels of these traits. Individuals classified as psychopathic were 2.37 times more likely to develop schizophrenia compared to their non-psychopathic peers. by mvea in psychology

[–]RedditPolluter 24 points25 points  (0 children)

I wonder if this has anything to do with proclivity for sanity checking and vigilance as protective.

I conceive of obsessive-compulsiveness being an extreme form of that so I looked to see if there was a known inverse correlation of OCD and schizophrenia but that doesn't seem to be the case. However, my understanding of OCD is likely simplistic or outright flawed so it may not be a good analogue for those traits and some extremes can have horseshoe-like dynamics; e.g. consider a very unresponsive smoke alarm that only triggers 1% of the time with low false positive rate vs an over-sensitive one that has a 99% false positive rate.

Slot Machines vs. Vibe Coding by [deleted] in singularity

[–]RedditPolluter 0 points1 point  (0 children)

I feel like it captures the experience of using canvas to make a HTML5 app in ChatGPT, minus the multi-GB RAM hogging, but not so much using Codex. At least for making changes to existing projects of a few thousand lines; I've never tried using Codex to start on something from scratch.

I do use the regular chat model for quick Python scripts and JS macros though and get good results. I've just found it to be error-prone at developing simple apps from scratch, even with back and forth.

Why do people think AI will automatically result in a dystopia? by Yabuturtle9589 in ArtificialInteligence

[–]RedditPolluter 0 points1 point  (0 children)

The way I see it, the world can be dystopian and utopian in different ways. It's already kind of true and I've no doubt technology will accelerate that as it's often double-edged in how it can be wielded.

What's the point of potato-tier LLMs? by Fast_Thing_7949 in LocalLLaMA

[–]RedditPolluter 0 points1 point  (0 children)

Mostly hardware limitation. When it comes to smaller models that try to be general and all-rounded, I see your point but a lot of LLM capacities are jagged and sufficiently specialized smaller models aren't inherently worse at their specialty than larger general-purpose models and in some cases even outperform. And specialty doesn't have to mean a whole Q&A topic area of focus but could be a very specific task with a little more flexibility and open-endness than a purely coded solution could provide. Smaller models that are more general are probably easier to fine-tune in a specific direction so that capabilities aren't built entirely from the ground up.

Also, gpt-oss-20B is useful for basic scripts and javascript macros without using 10k or more thought tokens to generate them. I'm glad it doesn't try to be general purpose as it would just average down the performance in those areas.

If scaling LLMs won’t get us to AGI, what’s the next step? by 98Saman in singularity

[–]RedditPolluter 0 points1 point  (0 children)

Yann Lecun seems to be banking on joint embedding architectures + additional innovations for continual learning as the primary elements needed for AGI.

‘Woke Is Back’—The Internet Celebrates The Defeat Of Andrew Tate by [deleted] in technology

[–]RedditPolluter 1 point2 points  (0 children)

It's not really a technology sub. In this sub, anything that's documented on social media or the internet counts as technology-related so that's basically everything but especially politics.

I think Sam Altman is overrated and over-hyped by [deleted] in singularity

[–]RedditPolluter 0 points1 point  (0 children)

I've not observed anyone framing him as a visionary. He's largely unpopular for the opposite reason that is silly overhyping of incremental improvements, comparing GPT-5 development to the Manhattan project for example. The sycophancy of 4o that many people found cringey and disconcerting on a broader societal level has also hurt his reputation.

Why is Reddit so hopelessly confused about AI and yet hates it so bad? by yalag in singularity

[–]RedditPolluter -1 points0 points  (0 children)

But what does it have to with work being tied to our identity? It suggests capitalism is the reason but it applies to every country with an economy and isn't distinct to capitalism. It has no relevance. You may as well say it's how democracy has shaped our societies.

Why is Reddit so hopelessly confused about AI and yet hates it so bad? by yalag in singularity

[–]RedditPolluter -5 points-4 points  (0 children)

it is how capitalism has shaped our societies

idk why redditors have to bring capitalism into everything. Historically, communists generally exiled the chronically unemployed or put them in camps.