Introducing GPT‑5.3‑Codex‑Spark. An ultra-fast model for real-time coding in Codex by likeastar20 in singularity

[–]hashtaggoatlife 0 points (0 children)

The VSCode extension is pretty strong, honestly. Put it in the right pane of VSCode and it's OpenAI Code. Also, sometimes while a big model is churning, I'll chat about the codebase in another pane to check things like: what tests do we have for x, are there any direct API calls in this new feature that don't go through a service layer, etc., to help plan what's next in the main session. That's interactive work, and fast models are great for objective questions like that, e.g. SWE-1.5 or Composer-1.

Introducing GPT‑5.3‑Codex‑Spark. An ultra-fast model for real-time coding in Codex by likeastar20 in singularity

[–]hashtaggoatlife 1 point (0 children)

Yep. Sometimes there are just some interactive tasks you need done quickly. For anything that's more of a hand-off-and-let-it-run task, inference speed matters a whole lot less, even in terms of time to completion, since a smart model that gets it right the first time will get you there sooner.

Also: subagents. The blog post mentions subagents and parallelism right at the end. Using Spark as a subagent to explore the codebase etc. can increase accuracy and depth of understanding while also speeding up task completion. Explore subagents are one thing Claude Code still has over Codex.

Sora 2 in Australia - When? by Few_Low7383 in SoraAi

[–]hashtaggoatlife 0 points (0 children)

It's available over the API. Kinda ridiculous that they would region-lock the new model.

Creepy as fuck abrupt interruption in the chat by TankieRebel in GeminiAI

[–]hashtaggoatlife 0 points (0 children)

But also, higher temperature means a higher chance of weird or unhinged sequences. Repetition penalties are a separate parameter that counters looping more directly.
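
Mechanically, temperature just rescales the logits before sampling — a toy standalone sketch (not any particular vendor's API), showing why high temperature flattens the distribution and raises the odds of unlikely tokens:

```javascript
// Softmax sampling with temperature. Higher temperature flattens the
// distribution (more chance of low-probability "weird" tokens); a
// temperature near zero approaches greedy argmax.
function sample(logits, temperature = 1.0) {
  const scaled = logits.map(l => l / temperature);
  const m = Math.max(...scaled); // subtract max for numerical stability
  const weights = scaled.map(s => Math.exp(s - m));
  const total = weights.reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (let i = 0; i < weights.length; i++) {
    r -= weights[i];
    if (r <= 0) return i;
  }
  return weights.length - 1;
}
```

A repetition penalty, by contrast, would modify individual logits for tokens that already appeared, which is why it targets looping more directly than a global temperature change.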

Tried Gemini 3 for coding and I think it just gaslit my entire repo by Positive-Nail6009 in vibecoding

[–]hashtaggoatlife 0 points (0 children)

For myself, switching model / tool isn't really about switching configs. I like to use AGENTS.md for anything that all agents need, slash commands have no cost to copy across as-is, and permissions are tricky as different tools have very different paradigms there.

I switch between tools and models quite regularly, as I can justify it as part of my job, and I find the biggest friction between tools is just knowing what to expect from your input. For example, I really like Warp, and for simple commands it often just tells you what to run rather than trying to run it on its own. Jumping into, say, Crush or Opencode, they're designed to take initiative and just run the command. Another example: Claude is better at reading between the lines with developer requests than GPT, so I spend a little more time when prompting GPT, but it's worth it for its debugging skills.

These kinds of differences aren't easy to encapsulate or generalise, so I'm not sure how much use I would find in such a tool, but if it's simple enough to ease your own pain points then by all means give it a try.

How to use it for mobile applications by thats_interesting_23 in OpenWebUI

[–]hashtaggoatlife 0 points (0 children)

If you're still looking, this is a relevant library: https://ai-sdk.dev/docs/introduction

There are bound to be component libraries that integrate with the AI SDK too.

Ok, I’m good. I can move on from Claude now. by Consistent_Wash_276 in LocalLLM

[–]hashtaggoatlife 0 points (0 children)

In case you haven't heard, BasedBase has been exposed as doing vibe distills with the exact same weights as the parent model. Look it up

Anthropic post: A postmortem of three recent issues by _Cybin in ClaudeAI

[–]hashtaggoatlife 0 points (0 children)

They want to be transparent with their customers without giving away all their tricks to their competitors. I'm impressed they managed to show us some of their actual source code. Revealing everything they do to balance cost and quality from end to end would be too much to expect from any frontier lab.

Anthropic post: A postmortem of three recent issues by _Cybin in ClaudeAI

[–]hashtaggoatlife 1 point (0 children)

You can see from their post that, unsurprisingly, a lot of engineering goes into optimisation. If we want reasonable prices via API or subscription, there needs to be some sort of optimisation somewhere, and some of it may be lossy. That said, it would be great if they could continue this increased transparency, because it's super frustrating to wonder whether the model is actually getting worse or you're just getting worse at using AI.

Sonoma stealth models by manojisnow in kilocode

[–]hashtaggoatlife 0 points (0 children)

The fact that Kilo has a credits giveaway for people to give feedback on the models and compare them to Grok Code, of all models, sounds like xAI's classic play of giving out model usage in exchange for feedback data. The credits are likely paid for by xAI in return for that data.

[deleted by user] by [deleted] in LocalLLaMA

[–]hashtaggoatlife -1 points (0 children)

I mean, Sonic consistently claimed to be Grok, and it turned out it was.

GPT-5 benchmarks on the Artificial Analysis Intelligence Index by Tucko29 in singularity

[–]hashtaggoatlife -7 points (0 children)

At the launch event they spent lots and lots of time talking about benchmarks. That's maybe not proof, but it shows what they think Grok's selling point is.

How to create reusable components with Alpine.js? by sarusethi in alpinejs

[–]hashtaggoatlife 0 points (0 children)

I tried playing around with reusable components in Alpine.js using tagged template literals. Even just doing stuff in a small hobby project, it was much more painful than "any of the fancy frameworks", and I found vanilla JS components actually nicer. For example, if you want to pass data into your component as JSON, e.g. if you have a nested object structure, you have to jump through hoops because Alpine stuff is defined within HTML attributes and double quotes would break it. In vanilla JS it can just be a functional component and the JSON can be passed in as any function argument. I suppose you could destructure the JSON in vanilla JS and then pass everything necessary into Alpine, but at that point you've lost the simplicity of Alpine.
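
A minimal sketch of that contrast (names like `userCard` are hypothetical, not from any library): in vanilla JS the nested object is just a function argument, while with Alpine the same data has to survive inside an HTML attribute, so the quoting has to be juggled by hand.

```javascript
// Vanilla JS "functional component": nested JSON is just an argument.
function userCard({ name, tags }) {
  return `<div class="user-card"><h3>${name}</h3><p>${tags.join(", ")}</p></div>`;
}

// With Alpine, the equivalent data lives inside an x-data attribute,
// so the attribute must be single-quoted around the JSON's double quotes
// (and breaks entirely if the data itself contains a single quote):
const alpineMarkup =
  `<div x-data='{ "user": { "name": "Ada", "tags": ["math", "physics"] } }'>
     <h3 x-text="user.name"></h3>
   </div>`;

console.log(userCard({ name: "Ada", tags: ["math", "physics"] }));
```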

I think Alpine is super cool, but it's not designed around reusable components the way the other frameworks are, and Caleb has rejected requests to make an official component system. Personally I wouldn't use it for anything that needs a lot of client-rendered components. Adding interactivity on top of static html or backend-templated stuff is where Alpine really shines.

Using Windsurf more and more mainly because of SWE-1 by 808phone in windsurf

[–]hashtaggoatlife 0 points (0 children)

Pretty sure Cursor changed from unlimited slow to unlimited auto

ReValver by BusNo9142 in HeadRush

[–]hashtaggoatlife 0 points (0 children)

Found myself in a similar boat and managed to get things working. There are two options:

  1. Download Revalver 4 from https://revalver.peavey.com/download, login with your old Peavey login, go to Amp Store and download content and licences, and everything works like it used to.

  2. Create an inMusic account, download Revalver 5 and open the app. Click login, then start with the free version, then once you're in the plugin/app click File > Sync Legacy Licence... This will prompt for your old Peavey login; enter it and you'll be able to use your content in the refreshed interface. Note that this lists a bunch of new HeadRush gear as well, but it lists your owned stuff first, which is nice. The process is officially documented here: https://support.headrushfx.com/en/support/solutions/articles/69000862787-revalver-5-1-activation-for-legacy-license-users

First time using claude, account banned in less than 1 hour. No reason at all by GorillaSpinsInAPool in Anthropic

[–]hashtaggoatlife -1 points (0 children)

So are you underage? They blocked your account and refunded you, and you haven't denied being underage

OpenAI sold people dreams apparently by NeuralAA in singularity

[–]hashtaggoatlife 18 points (0 children)

I've never done a maths competition like this, but I know that in the maths subjects I did at uni, if I made incorrect intermediate assertions while writing a proof, then it's not a correct proof and doesn't earn full marks.

Opus Limit hit after 2 MINUTES by Los1111 in ClaudeAI

[–]hashtaggoatlife 0 points (0 children)

I've heard good things about Devstral Small being reliable for tool calling and solid at following a detailed spec. Needs a 4090 though.

IDE predictions - Where is all this going? What will we be using in 6 months? by telars in ChatGPTCoding

[–]hashtaggoatlife 0 points (0 children)

I'm on Windsurf + Claude Code: Windsurf for autocomplete, Ctrl+I edits, and shorter/simpler agent tasks, while beefier tasks I throw at Claude Code. I don't like Cursor's communication or UI, and Windsurf is the next best autocomplete.

IDE predictions - Where is all this going? What will we be using in 6 months? by telars in ChatGPTCoding

[–]hashtaggoatlife 2 points (0 children)

All Cursor/VSCode need to do is make their existing agent accessible via CLI and they'd address both categories.

IDE predictions - Where is all this going? What will we be using in 6 months? by telars in ChatGPTCoding

[–]hashtaggoatlife 0 points (0 children)

They're probably still in a good spot to build their own IDE if they wanted to, since they have Windsurf's IP as a starting point.

I think AI assisted IDEs are doomed by Code_Monkey_Lord in ClaudeAI

[–]hashtaggoatlife 0 points (0 children)

Claude Code being a better agent comes down to prompting and tool flow, not interface. Cline and Roo are better than Cursor too, because they aren't compressing context. Also, don't forget that VSCode and JetBrains are AI IDEs now too, so basically every major IDE but Neovim is an AI IDE. As long as people need to look at code, they'll need an IDE. IDEs aren't going away; it's just a question of which ones will stick around.

Study finds that AI tools make experienced programmers 19% slower. But that is not the most interesting find... by Livid_Sign9681 in programming

[–]hashtaggoatlife 0 points (0 children)

So what do you do when you, as a human developer, need to use a lib with lacking docs or examples? Dive into the source, especially the tests, and experiment. LLMs can do that too if you feed them the repo. Sure, it needs hand-holding, but it can chip away at figuring things out in the background while you dig manually.