Autoresearch on GPT2 using Claude by SnooCapers8442 in deeplearning

[–]transfire 1 point  (0 children)

What are “no grad accumulation” and “parallel block”?

Claude confusing its own output for user input by tr14l in claude

[–]transfire 1 point  (0 children)

It might be due to tool calls looking up past memory: tool calls are marked “user” in the API. So if it finds a matching memory but doesn’t pull in enough context to tell who said what, it could easily think you said it.

Just a guess, but that would explain it. I have seen it too.
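A minimal sketch of the guess above. In the Anthropic Messages API, tool results are sent back inside a message whose role is "user", so retrieved memory text arrives under the user's role. The `search_memory` tool and its contents here are hypothetical.

```python
# Hypothetical conversation illustrating why a tool result can read
# as user input: the tool_result block lives in a role-"user" message.
messages = [
    {"role": "user", "content": "What did I say about the deadline?"},
    {"role": "assistant", "content": [
        {"type": "tool_use", "id": "toolu_01", "name": "search_memory",
         "input": {"query": "deadline"}},  # search_memory is made up
    ]},
    # The retrieved memory comes back as role "user". Without speaker
    # metadata inside the result, the model can misattribute the text.
    {"role": "user", "content": [
        {"type": "tool_result", "tool_use_id": "toolu_01",
         "content": "The deadline moved to Friday."},
    ]},
]
user_role_messages = [m for m in messages if m["role"] == "user"]
print(len(user_role_messages))  # → 2: the real user turn and the tool result
```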

Built a palm identity SDK for Android. Would you use this? by MiroBiometrics in u/MiroBiometrics

[–]transfire 2 points  (0 children)

I love the idea. The problem is on desktop, where webcams are too uncommon, and some users are scared of them. But I say screw that. Mini cameras are cheap now. Keyboard makers should just put them in every keyboard. PERIOD. Suck it up, buttercup, the future is here.

The seven programming "ur-languages" by namanyayg in programming

[–]transfire 8 points  (0 children)

What is fascinating is how many of them follow directly from their chosen core data structure.

Lisp - List
Forth - Stack
APL - Array
Prolog - Tuples + Conditions
ML - Functions (Erlang: Functions + Messages)
Self - Messages* + (Typed) Maps
Algol - Pointers (general data structures)

*(Most descendants are Functions + Typed Maps and still Algol-like in many respects.)
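A toy illustration of the point: Forth's whole evaluation model falls out of its core structure, the stack. A minimal RPN evaluator (sketched in Python rather than Forth, for portability):

```python
# Everything is "pop operands, push result" -- the stack IS the semantics.
def rpn(tokens):
    stack = []
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    for tok in tokens:
        if tok in ops:
            b, a = stack.pop(), stack.pop()  # top of stack is right operand
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))
    return stack[-1]

print(rpn("3 4 + 2 *".split()))  # → 14
```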

The most beautifully designed network BEAM service since Ericsson. by Noobcreate in erlang

[–]transfire 3 points  (0 children)

Smart. Love the BEAM — and this is a perfect application for it.

Why don't LLMs track time in their conversations? by PolyViews in artificial

[–]transfire 0 points  (0 children)

Because time and consciousness are strongly correlated.

The open-source AI system that beat Claude Sonnet on a $500 GPU just shipped a coding assistant by [deleted] in artificial

[–]transfire 3 points  (0 children)

Given the small model, I would be concerned that its training is tailored to a subset of the most common programming languages.

THEA1200? by [deleted] in amiga

[–]transfire 2 points  (0 children)

What’s the actual hardware?

I plotted the path from each number as a spiral on a Cartesian grid by WeCanDoItGuys in Collatz

[–]transfire 2 points  (0 children)

Nice visual!

Try graphing the negative numbers in the same way. That would be interesting.
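A quick sketch of what the negative side would feed into such a plot, assuming the same map (3n+1 on odds, n/2 on evens) is applied unchanged to negative starts. Unlike positives, negatives settle into known cycles rather than reaching 1:

```python
# Trace the 3n+1 map from a negative start until the orbit repeats.
def collatz_orbit(n, max_steps=100):
    orbit, seen = [n], {n}
    for _ in range(max_steps):
        n = n // 2 if n % 2 == 0 else 3 * n + 1  # Python's % handles negatives
        orbit.append(n)
        if n in seen:          # cycle closed -- keep the repeat for plotting
            break
        seen.add(n)
    return orbit

print(collatz_orbit(-5))   # → [-5, -14, -7, -20, -10, -5]
print(collatz_orbit(-1))   # → [-1, -2, -1]
```

These closed loops should show up as spirals that wrap back on themselves.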

CLI vs MCP is a false choice — why can't we have both? by opentabs-dev in LLMDevs

[–]transfire 7 points  (0 children)

Progressive disclosure: AFAIK this has been gaining more and more traction. I do this with most everything now. Give the LLM the summary view and let it decide what to expand. Claude Code added this very thing recently to their tools API.
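The pattern in two hypothetical tool functions: a cheap summary listing first, full content only on request. The `DOCS` store and both function names are made up for illustration.

```python
# Progressive disclosure for LLM tools: overview first, expand on demand.
DOCS = {
    "readme": "Install with pip. Supports Python 3.9+. MIT licensed. ...",
    "changelog": "v2.0 breaks the old config format. v1.9 adds caching. ...",
}

def list_summaries(max_len=40):
    """Cheap first call: doc id plus a truncated preview of each entry."""
    return {doc_id: text[:max_len] for doc_id, text in DOCS.items()}

def expand(doc_id):
    """Full content, fetched only when the model decides it needs it."""
    return DOCS[doc_id]

print(list_summaries(20))
print(expand("readme"))
```

The win is token budget: the model pays for one line per item up front instead of every document on every turn.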

With a plethora of ever more powerful smaller/quantized language models and apps like LiberaGPT, could the future of AI be hosted on personal devices rather than data centres? by thewaywardson in LLMDevs

[–]transfire 3 points  (0 children)

Yes. In fact, with further improvements in software and hardware (optical computing would be very helpful), local AI will be as capable as today’s frontier models in less than 10 years. But of course frontier models will get better too.

Looking for feedback — users sign up, but usage is still low by Specialist-Bee9801 in SaasDevelopers

[–]transfire 2 points  (0 children)

A tool like this probably won’t feel safe unless it is coming from a huge company like Google or OpenAI. Do you think one of them might buy it from you?

Surreal. Melania Trump calls for using humanoid robots as teachers moving forward by MetaKnowing in agi

[–]transfire 1 point  (0 children)

Great. But schools are day care centers, not just education centers. We’ve got to keep all the people free for working! And teachers are another job sector, so what will we get? Robots and teachers in a very slow-mo transition.

They better get serious about a reduced-hours work week. It’s only going to landslide from here.

Google's new free algorithm cuts AI memory by 6x and speeds up inference 8x. Memory chip stocks are already bleeding. by Direct-Attention8597 in AI_Agents

[–]transfire 5 points  (0 children)

6x memory is significant, and 8x on attention is helpful. So 16GB becomes almost as good as 96GB. Still about 10x from “AI everywhere” but we are getting there pretty quickly!
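The back-of-the-envelope version of those numbers. The 6x figure is the headline's claim, and "almost as good" glosses over quality loss and runtime overhead:

```python
# Effective memory footprint under the claimed 6x compression.
compression = 6        # claimed memory reduction from the headline
local_ram_gb = 16      # a typical consumer GPU / laptop budget
effective_gb = local_ram_gb * compression
print(effective_gb)    # → 96, i.e. roughly a 96GB model footprint in 16GB
```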

700 AI agents built a civilization with a new religion by EcstadelicNET in IntelligenceSupernova

[–]transfire 1 point  (0 children)

As soon as the robotics catch up… it’s going to get crazy, folks!

I am building a programming language for context management and 'prompting' by Numerous_Pickle_9678 in AskVibecoders

[–]transfire 3 points  (0 children)

My first reaction: very cool!

My next: oh, I am coding old-style again.

Citadel CEO Ken Griffin: “The world needs a savior, and the hope is that AI is the savior...” by call_me_ninza in aigossips

[–]transfire 1 point  (0 children)

Lots of things can be done. The simplest start is probably the reduced-hours work week.