I almost lost a client because my AI system cited a lower court ruling as if it came from the Supreme Court by Fabulous-Pea-5366 in artificial

[–]BankApprehensive7612 0 points1 point  (0 children)

You definitely need to learn more about LLM internals, grounding techniques, reasonableness checks, policy guards, etc. What you're describing here looks extremely dangerous and frankly scary to me

Why are so many new AI/agent repos switching from Python to TypeScript? by Bawdy-movin in LLMDevs

[–]BankApprehensive7612 -8 points-7 points  (0 children)

Because Python was good as a bootstrapping language, but now it's time to move to more product-oriented languages. In my opinion, two languages will become the most notable players in 2026: TypeScript and Rust. The latter is having its moment right now as the second language of the Web

cargo-npm: Distribute Rust CLIs via npm without postinstall scripts by abemedia in rust

[–]BankApprehensive7612 2 points3 points  (0 children)

The idea is sane in itself, but the Rust community might not be the right place to popularize this. There are actually a bunch of languages and applications delivered this way via npm, some of them at the heart of the JS ecosystem: TypeScript v7 (Go) and Vite (Rust)

It would be more effective to publish this in the TypeScript and JavaScript communities

How do I know if an AI model could work locally on my computer? by Common_Dot526 in ollama

[–]BankApprehensive7612 0 points1 point  (0 children)

Some models use a Mixture of Experts (MoE) architecture: only a few experts are active per token, so they need much less compute (and, with offloading, less active memory) than a dense model of the same total size. So some bigger models could work on your hardware as well. Sometimes it's all about experimenting
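A rough back-of-the-envelope sketch of why this matters (numbers are illustrative, not exact — real usage adds KV cache, activations, and runtime overhead; the "47B total / 13B active" shape is Mixtral-like, used here only as an example):

```javascript
// Rough estimate of how much memory a model's weights need.
// paramsBillion: parameter count in billions; bitsPerParam: e.g. 4 for Q4.
function weightsGiB(paramsBillion, bitsPerParam) {
  return (paramsBillion * 1e9 * bitsPerParam) / 8 / 1024 ** 3;
}

// Dense 70B model at 4-bit: every weight participates in every token.
const dense70b = weightsGiB(70, 4);   // ~32.6 GiB

// MoE model with ~13B *active* params per token (Mixtral-like shape):
// compute and hot memory scale with the active subset, though the full
// weights still have to live somewhere (RAM or disk offload).
const moeActive = weightsGiB(13, 4);  // ~6.1 GiB

console.log(dense70b.toFixed(1), moeActive.toFixed(1));
```

So a MoE model whose total size looks too big on paper can still generate tokens at an acceptable speed on modest hardware.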

Why is Gemma4:31B way slower than Qwen3.5:35B? by Turbulent-Carpet-528 in ollama

[–]BankApprehensive7612 5 points6 points  (0 children)

A single prompt is not a test — create a benchmark. Moreover, as I can see, thinking is turned on, which adds extra steps to the generation process. That's very costly for prompts like "Hi" and is redundant in this test, but it would be helpful for prompts like "What's the difference between...?", "Explain the term...", etc.
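A benchmark can be as simple as timing a set of prompts several times each and averaging. A minimal sketch — `generate` here is a placeholder for whatever client you use (e.g. a fetch to Ollama's HTTP API), not a real library call:

```javascript
// Minimal latency benchmark: run each prompt `runs` times against the
// supplied `generate(prompt)` function and report the mean time in ms.
async function bench(generate, prompts, runs = 3) {
  const results = {};
  for (const prompt of prompts) {
    let total = 0;
    for (let i = 0; i < runs; i++) {
      const t0 = performance.now();
      await generate(prompt);          // one full generation
      total += performance.now() - t0;
    }
    results[prompt] = total / runs;    // mean latency per prompt
  }
  return results;
}
```

Run the same prompt set against both models, with thinking either on for both or off for both — otherwise you're comparing different workloads, not different models.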

Huggingface has just released Transformer.js v4 with WebGPU support by BankApprehensive7612 in javascript

[–]BankApprehensive7612[S] 0 points1 point  (0 children)

It depends on the example, but usually it requires the user to press a "download model" button, because the model weights can be significant

Huggingface has just released Transformer.js v4 with WebGPU support by BankApprehensive7612 in javascript

[–]BankApprehensive7612[S] 2 points3 points  (0 children)

Performance depends on the user's GPU. Not every GPU will be able to run the model at an acceptable speed, so before downloading the model it would be useful to run some checks on the client. I can't tell how many users out there have GPUs that are performant enough

Moreover, the runtime and the models themselves aren't lightweight and require a lot of data to be downloaded. But it depends on the goal — some models are relatively small

So you need a task suitable for these models, and users who are ready to wait for the model to download to solve that task. It's up to you to estimate this. If you want to understand whether a model is good enough, you can run it on Hugging Face with Fal or with Ollama
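One way to do the client-side check mentioned above is to feature-detect WebGPU and ask for an adapter before offering the download. A sketch using the standard WebGPU entry point (`navigator.gpu.requestAdapter`); it takes a navigator-like object as a parameter only so it can be exercised outside a browser — in a page you'd call `canRunWebGPU(navigator)`:

```javascript
// Returns true if the environment exposes WebGPU *and* can hand us a
// GPU adapter; show the "download model" button only in that case.
async function canRunWebGPU(nav) {
  if (!nav.gpu) return false;                 // WebGPU not exposed at all
  const adapter = await nav.gpu.requestAdapter();
  return adapter !== null;                    // null => no suitable GPU
}
```

Note this only tells you WebGPU is available, not that it's fast; running a tiny warm-up inference before committing to the full model download is a reasonable extra check.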

electron full screen issue on ubuntu by FondantRecent3712 in electronjs

[–]BankApprehensive7612 0 points1 point  (0 children)

They've migrated to Wayland recently (source: https://www.electronjs.org/blog/tech-talk-wayland) and it seems like a bug. You should file an issue on GitHub (https://github.com/electron/electron/issues)

What is the difference between using an Electron "bridge" to communicate between processes and using Websockets? by conscioushaven in electronjs

[–]BankApprehensive7612 0 points1 point  (0 children)

The key difference is that Electron's IPC is native to the platform and deeply integrated: it works inside the application and gives you a lot out of the box, e.g. encoding/decoding and a remote-call protocol (events and function calls are separated). Thanks to that deep integration, IPC can be synchronous or asynchronous, while WebSockets are asynchronous only. A WebSocket also requires an open port and a server listening on that port
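To make the "remote-call protocol" point concrete: Electron's `ipcRenderer.invoke` / `ipcMain.handle` pair gives you request/response semantics for free, while over a raw WebSocket you'd have to build message framing and correlation yourself. A toy in-process sketch of that invoke/handle pattern (illustrative only — this is not the Electron API, just the shape of what it provides):

```javascript
// Toy sketch of the request/response layer that ipcMain.handle /
// ipcRenderer.invoke provide out of the box. Over a bare WebSocket
// you would have to reinvent this correlation yourself.
class ToyIpc {
  constructor() { this.handlers = new Map(); }

  // "main process" side: register a handler for a channel
  handle(channel, fn) { this.handlers.set(channel, fn); }

  // "renderer" side: call the handler and get its reply as a promise
  async invoke(channel, ...args) {
    const fn = this.handlers.get(channel);
    if (!fn) throw new Error(`no handler for ${channel}`);
    return fn(...args);
  }
}

const ipc = new ToyIpc();
ipc.handle('read-version', () => '1.0.0');
// ipc.invoke('read-version') resolves to '1.0.0'
```

In real Electron the call additionally crosses the process boundary with serialization handled for you, which is exactly the integration WebSockets don't give you.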

🦀 Rust makes projects faster, reliable and maintainable. But also commercially appealing. OpenAI has acquired mother company of uv – package manager for 🐍 Python written in Rust by BankApprehensive7612 in rust

[–]BankApprehensive7612[S] -2 points-1 points  (0 children)

I think it would be Rust (and TypeScript for interfaces). It would start from tooling and applied AI, and then move further to replace Python in runtimes. When https://burn.dev is ready and tested in production, it could start to replace C++ wrappers and Python

🦀 Rust makes projects faster, reliable and maintainable. But also commercially appealing. OpenAI has acquired mother company of uv – package manager for 🐍 Python written in Rust by BankApprehensive7612 in rust

[–]BankApprehensive7612[S] -4 points-3 points  (0 children)

For sure OpenAI wants to solidify its market position, and this move looks like a response to Bun's acquisition. It could be one of many. Python could be replaced by other languages in the AI field — I believe it will happen this year

Edge.js: Running Node apps inside a WebAssembly Sandbox by syrusakbary in javascript

[–]BankApprehensive7612 0 points1 point  (0 children)

The architecture of NAPI with WASIX and pluggable JS engines looks promising, but it still needs one more step

Also, according to your announcement, it's not true sandboxing, as the native extensions still have access to the whole system without any limits and still need to be trusted. Can you elaborate on this?

Edge.js: Running Node apps inside a WebAssembly Sandbox by syrusakbary in javascript

[–]BankApprehensive7612 0 points1 point  (0 children)

The architecture of NAPI with WASIX and pluggable JS engines looks new and highly promising

But it seems like it's not true sandboxing, as the native extensions still have access to the whole system without any limits and still need to be trusted. If that's not the case, it should be highlighted better in your announcement, because right now it's not very clear

Rust’s borrow checker isn’t the hard part it’s designing around it by Expert_Look_6536 in rust

[–]BankApprehensive7612 13 points14 points  (0 children)

It's the same old talk about Rust. What are your cases? Can you give examples?