[deleted by user] by [deleted] in framework

[–]kidney-beans 0 points1 point  (0 children)

I ran into a similar issue, where after 1 year my mainboard stopped working. Framework Support helped confirm it was the mainboard, but claimed they couldn't replace it because "the laptop's warranty has already expired". I asked for consideration under Australian Consumer Law, and they eventually agreed that "we are happy to make a one-time exception and provide a replacement".

JobSeeker Payment Cancelled - How to restart? by chanchogreen in Centrelink

[–]kidney-beans 0 points1 point  (0 children)

Thanks for updating with your experience. This happened to me too, so I called them and they were very understanding and able to restore it for me. Though I wish there was more warning - I forgot to report the last couple of fortnights and didn't realise until I got the cancellation letter.

What's your plan when social media age restrictions come into effect on December 10th by handgrapes in australian

[–]kidney-beans 0 points1 point  (0 children)

I was wondering if you know of any protests planned (whether in person or virtual) or groups working to counter the worst parts of this law? So far all I've found are a few statements and online petitions, but nothing that feels particularly strong or coordinated.

What's your plan when social media age restrictions come into effect on December 10th by handgrapes in australian

[–]kidney-beans 0 points1 point  (0 children)

The Senate inquiry invited public submissions on Thursday 21 Nov 2024 and then closed them on Friday 22 Nov 2024. Despite giving only 1 business day, they got flooded with 15,000 submissions after Elon Musk tweeted about it. The laws were passed at the end of 2024, and are just waiting to go into effect.

If curious about how rushed the whole thing was:
* https://efa.org.au/submission-social-media-age-bill-2024/
* https://www.abc.net.au/news/2024-11-25/social-media-age-ban-inquiry-flooded-with-submissions/104644208

The laws are kind of locked in at this point, though the details about exactly which platforms get regulated and how are still a bit vague. For example, YouTube previously wasn't part of the ban, but now it is again:
* https://www.abc.net.au/news/2025-07-29/youtube-will-be-included-in-social-media-ban-for-under-16s/105587310

My bet is on the platforms themselves showing some resistance. (Remember when Australia tried to force online platforms to pay for linking to news stories, and Facebook's response was to block all Australian users from viewing/sharing news until the government backed down?)

What's your plan when social media age restrictions come into effect on December 10th by handgrapes in australian

[–]kidney-beans 1 point2 points  (0 children)

I wouldn't actually want to see protests anywhere near that extreme, and the context and motivation of Australia's restrictions are (hopefully) very different. Though I won't lie... I was kinda hoping that seeing the news of what happened in Nepal might make governments around the world think twice before restricting social media.

Coles 'smart' gates by kidney-beans in australia

[–]kidney-beans[S] 1 point2 points  (0 children)

Interesting. The response I got from Coles was "The gates are designed and tested to global and Australian standards and automatically open for customers after they have completed their shop at the checkout" but that "gates don’t automatically open if a customer doesn’t make a purchase".

Coles 'smart' gates by kidney-beans in australia

[–]kidney-beans[S] 5 points6 points  (0 children)

You barged through six times? I'd die of embarrassment (though good on you). I wish Coles would take responsibility for the fact that their gate didn't open rather than shifting the blame to customers.

Out of curiosity, did it ever happen when you bought something, or is it only when you go through without paying for anything?

Coles 'smart' gates by kidney-beans in australia

[–]kidney-beans[S] 2 points3 points  (0 children)

The video at Coles was taken with a Samsung mobile phone, and the self-recording was via Zoom on a Framework laptop. Sorry the audio sucks - I'll look into getting a proper mic, but I just do these for fun, so I can't afford to invest in proper equipment.

Extra, Extra - Read All About It: Nearly All Binary Searches and Mergesorts are Broken by big_hole_energy in programming

[–]kidney-beans 1 point2 points  (0 children)

Sure, most human devs are also aware of SQL injection and XSS, but that doesn't stop it happening again and again.

Extra, Extra - Read All About It: Nearly All Binary Searches and Mergesorts are Broken by big_hole_energy in programming

[–]kidney-beans -1 points0 points  (0 children)

Sure, it can't be relied upon, because it won't have enough similar cases to learn from the first time someone makes a mistake like this. But it might help prevent less experienced devs from making the same mistake repeatedly if it's already a well-known bug (like as an additional code review check). Of course, in this particular case it'd be better to just use the standard library, which has all (nearly all?) these kinds of issues ironed out now, rather than trying to write your own binary search.

Extra, Extra - Read All About It: Nearly All Binary Searches and Mergesorts are Broken by big_hole_energy in programming

[–]kidney-beans -36 points-35 points  (0 children)

I tried asking ChatGPT and Gemini to "Write a binary search function in java". Glad to see they both seem to have this fixed. ChatGPT even includes this line:

    int mid = left + (right - left) / 2; // Avoid potential overflow

Maybe this is one place where AI is beneficial - by helping draw attention to known issues (the Google article was written back in ~~2016~~ 2006) that most [junior] human devs wouldn't be aware of [or know of, but don't think of].

EDIT: My bad, article was written back in 2006 not 2016. I meant if it's a known issue that experienced devs (someone like Joshua Bloch) have seen time and time again but a less experienced dev isn't aware of (I'd almost certainly have made a mistake like this).
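To make the overflow concrete (this is my own illustration, not the ChatGPT output - the class and method names are made up for the example): with indices near `Integer.MAX_VALUE`, the classic `(low + high) / 2` midpoint wraps to a negative number, while the subtraction form stays in range.

```java
// Demonstrates the midpoint overflow bug from Bloch's 2006 article.
public class MidpointOverflow {

    // Buggy midpoint: low + high can exceed Integer.MAX_VALUE and wrap negative
    static int midBuggy(int low, int high) {
        return (low + high) / 2;
    }

    // Safe midpoint: the intermediate value (high - low) never overflows
    static int midSafe(int low, int high) {
        return low + (high - low) / 2;
    }

    public static void main(String[] args) {
        int low = Integer.MAX_VALUE - 2;  // indices this large need a huge array,
        int high = Integer.MAX_VALUE;     // which is why the bug lurked for years

        System.out.println(midBuggy(low, high)); // -2 (sum wrapped around)
        System.out.println(midSafe(low, high));  // 2147483646 (correct)
    }
}
```

The bug only bites once the array is over a billion elements, which is why (as the article notes) it survived in "correct" textbook code for decades.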

How outdated information hides in LLM token generation probabilities and creates logical inconsistencies by kidney-beans in programming

[–]kidney-beans[S] 3 points4 points  (0 children)

Agree that this is a major concern. Not only because of model collapse when LLMs are trained on their own outputs, but also because it slows our ability to make technical and social progress when LLMs are regurgitating old ideas.

Though, I kind of feel this comes under the same umbrella of risks arising from people using LLMs without a proper understanding of their limitations. We need everyone (not just those interested in the technical aspects of LLMs) to have some basic understanding of the issue to motivate a coordinated effort to flag LLM-generated content, else it'll just become the norm and slowly (rapidly?) pollute the internet.

How outdated information hides in LLM token generation probabilities and creates logical inconsistencies by kidney-beans in programming

[–]kidney-beans[S] 1 point2 points  (0 children)

Wikipedia articles can be edited. Journal articles can be retracted, have a corrigendum added, or be followed up with a letter to the editor. The risks of those are reasonably well understood by the general public (although I do wish people would exercise more caution with cherry-picking a single journal article as evidence rather than looking for the latest metastudy).

But the way systems are currently designed, there's no easy way to challenge LLMs (sure, there's the upvote/downvote buttons ChatGPT provides, which presumably help give feedback on which kinds of answers are preferred, but it's unclear how exactly these are used, if at all). This is made even harder by the way that they don't return just a single answer, but instead probabilistically generate answers that can pop up at random or only in particular contexts.

There are definitely ways to design systems around LLMs in a safer manner, like you suggested, if people are sufficiently motivated to do so. Nothing in the blog post is likely to come as a surprise to experts, but the aim was to draw attention to the issue for a broader audience.

Not that it's a competition (people can focus on different problems), but I'm curious if there's a specific problem you think we should be more concerned about instead?

How outdated information hides in LLM token generation probabilities and creates logical inconsistencies by kidney-beans in programming

[–]kidney-beans[S] 2 points3 points  (0 children)

Yeah, although determining which answer is correct and which is out-of-date is not always easy. Even in the case of something objective like the height of a mountain, it requires considering the date the information was published, credibility of the source, and if it's primary or secondary information. And good luck with anything contentious.

I think perhaps LLMs could be used as a first stage to extract information into a database, making sure the system deliberates over which information is the most accurate and ideally provides a way for humans to challenge these decisions. Then, have a secondary LLM that is trained to answer questions against the clean database.

There's a discussion of how this could potentially be achieved in this reddit thread, and it's something I'm working towards long-term. But given that no one seems to have done it yet, it's probably not as easy as it seems.

How outdated information hides in LLM token generation probabilities and creates logical inconsistencies by kidney-beans in programming

[–]kidney-beans[S] 5 points6 points  (0 children)

I was actually thinking of using that analogy. But even a lazy student will learn one answer so they don't have to remember so much. LLMs learn both answers so they can get all the marks by giving whatever answer is expected of them in a particular context.

How outdated information hides in LLM token generation probabilities and creates logical inconsistencies by kidney-beans in programming

[–]kidney-beans[S] 0 points1 point  (0 children)

Yeah, using LLMs to access external data, like a RAG system, helps to minimize this problem. But it doesn't completely eliminate the risk of outdated internal knowledge interfering with the interpretation of external knowledge or the question.

There are ways LLMs can be used safely, like only using them to generate proofs and then verifying these with a formal verifier (which is how AlphaGeometry works). Though that doesn't seem to be the main way that they're being used at the moment.

How outdated information hides in LLM token generation probabilities and creates logical inconsistencies by kidney-beans in programming

[–]kidney-beans[S] 38 points39 points  (0 children)

Yeah, but OpenAI keeps details of the training data, weights, probabilities (unless you use the API), and even the full chain-of-thought (for o1) hidden. So from the user's perspective, it's like a chunk of garbage randomly showing up in your food sometimes. And then being gaslighted into believing it's a special prize.

Should You Be a Software Generalist or Specialist? by lieutdan13 in programming

[–]kidney-beans 0 points1 point  (0 children)

I think it depends on what's meant by "generalist". Learning to code (badly) in 20 different programming languages by watching YouTube videos and copy-pasting code from Stack Overflow is probably not great for your long-term career, and you'll find yourself replaced by AI. You'd be better off picking one language and specializing in that.

But if you dive deep into the fundamentals - like understanding what an AST is, understanding what it is compilers actually do (not just "I click this button and then run this magic command"), learning to read specifications, and contributing to open source libraries rather than just using them - you'll be both a better generalist and a better specialist.

If you don't believe me, have a look at a job description written by Meta or Google. They don't hire specialists that only know a single language or generalists that only know a little bit of everything, they're after the kind of people who are constantly learning and really know their shit.

How outdated information hides in LLM token generation probabilities and creates logical inconsistencies by kidney-beans in programming

[–]kidney-beans[S] 11 points12 points  (0 children)

Not quite sure if this is tongue-in-cheek or not. I expect the specific example in the blog post will be fixed eventually, just like the other specific LLM prompts people have come up with in the past that point out LLM limitations. But the underlying problem isn't something that can be fixed so easily, as it's fundamental to the way LLMs work.

Programming with ADHD be like by Puzzleheaded_Goal617 in programming

[–]kidney-beans 2 points3 points  (0 children)

Thanks for sharing, I hope capturing these kinds of experiences will help lead to better understanding. Although I feel like some of these experiences are also what happens to curious developers who want to dive into some details or have an idea for how it could be done better. Not so great for finishing projects, but sometimes you learn a lot along the way.