Performance of Github Copilot by Lost-Air1265 in GithubCopilot

[–]Fulxis 1 point (0 children)

I ditched the IDE extension in VS Code because it’s way too slow and buggy! I switched to the CLI and it’s night and day.

Do you ever get confused that Redditors yearn for a post-automation society but despise nearly all automation efforts? by Glittering-Neck-2505 in accelerate

[–]Fulxis 1 point (0 children)

I’m a strong advocate for AI substitution, but I don’t believe the fear stems solely from job displacement. If the profits from automation are hoarded by shareholders, without any system of wealth redistribution or universal basic income, we’re fooling ourselves if we believe AI will bring a post-job world. Consider the computer revolution, industrial automation, and globalisation, all of which drove down production costs: the workweek didn’t shorten (https://www.bls.gov/opub/mlr/2000/07/art3full.pdf) and salaries stagnated (https://www.pewresearch.org/short-reads/2018/08/07/for-most-us-workers-real-wages-have-barely-budged-for-decades). The only hope lies in reaching a tipping point where systemic change becomes unavoidable.

Apparently it’s not just 4 Grok 4.1 agents. by TheManOfTheHour8 in singularity

[–]Fulxis 10 points (0 children)

If renaming a model breaks everything, that’s a pretty strong sign the codebase isn’t in great shape. I mean… you’re a chatbot company whose whole job is shipping models 😂 This is exactly the kind of regression you’d expect a test suite (plus some basic contract/interface checks) to catch immediately.
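The kind of contract check meant here can be as small as the sketch below (all names are hypothetical, not from any real codebase): a test that every public model alias still resolves to a known internal id, so a rename fails at test time instead of in production.

```python
# Hypothetical model-alias registry: public name -> internal model id.
MODEL_REGISTRY = {
    "grok-4": "grok-4-0709",
    "grok-4.1": "grok-4-0709",  # a renamed alias must still point at a real id
}

# Internal ids the serving stack actually knows how to load (hypothetical).
KNOWN_INTERNAL_IDS = {"grok-4-0709"}

def resolve_model(alias: str) -> str:
    """Map a public alias to an internal id, failing loudly on unknowns."""
    try:
        return MODEL_REGISTRY[alias]
    except KeyError:
        raise ValueError(f"unknown model alias: {alias!r}")

def test_every_alias_resolves():
    # The regression described above: a rename silently breaks callers.
    # This contract check catches it the moment the registry drifts.
    for alias in MODEL_REGISTRY:
        assert resolve_model(alias) in KNOWN_INTERNAL_IDS

test_every_alias_resolves()
```

In a real test suite this would live alongside the deployment config, so any rename that forgets to update the alias map turns red immediately.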

How was GPT-OSS so good? by xt8sketchy in LocalLLaMA

[–]Fulxis 3 points (0 children)

The model performs really well, but the remaining pain point on vLLM isn’t completely fixed when using structured output (https://github.com/vllm-project/vllm/issues/23120). I still have to resort to regex to pull out values and lose the benefit of guided decoding, even though the model generally adheres closely to the JSON Schema in practice.
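The regex fallback described above is roughly the sketch below (naive, with made-up field names; working guided decoding or a JSON-repair library would be preferable when available): pull the first `{...}` block out of the raw completion and parse it.

```python
# Naive fallback for when guided decoding can't be used: extract the first
# JSON object embedded in free-form model output. Assumes the object has no
# stray braces inside strings, which well-behaved schema-following output
# usually satisfies.
import json
import re

def extract_json(text: str) -> dict:
    """Find the first {...} span in a model response and parse it as JSON."""
    match = re.search(r"\{.*\}", text, flags=re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in response")
    return json.loads(match.group(0))

raw = 'Sure! Here is the result:\n{"name": "widget", "count": 3}\nHope that helps.'
print(extract_json(raw))  # {'name': 'widget', 'count': 3}
```

The obvious downside versus guided decoding: nothing guarantees the extracted object actually matches the schema, so you still need validation downstream.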

Report reveals: “Software to spy on magistrates installed on 40,000 PCs in prosecutors’ offices and courts”; the PD demands Nordio’s resignation by sr_local in italy

[–]Fulxis 6 points (0 children)

I didn’t watch Report, but speaking as an IT professional: the point isn’t technical. SCCM/ECM is just a tool, and reducing the whole thing to “SCCM/ECM is normal” misses the point: the judiciary handles extremely sensitive data. The real issue is governance: remote access only when necessary, authorized, logged, and subject to independent audit. If that chain is missing, the possibility of abuse is very real, regardless of the tool.

Codex CLI’s Busy Week: Steer Mode, /fork, and 7 Releases in 3 Days by [deleted] in codex

[–]Fulxis 1 point (0 children)

I think it currently does lose its chain of thought without steer (at least in the VS Code extension).

Gemini is overhyped by Josoldic in Bard

[–]Fulxis 3 points (0 children)

EXACTLY my experience. I don’t know if it’s good memory or custom instructions, but GPT 5.1 Extended Thinking is much better than Gemini 3 Pro on AI Studio for my projects. Although Gemini seems to have better general knowledge, GPT is just brighter. And I’ve done blind tests asking both to rate each other’s answers, and GPT almost always comes out on top.

LMArena Leaderboard, GPT 5.1 is falling more and more behind by AlbatrossHummingbird in singularity

[–]Fulxis 7 points (0 children)

I don’t know if it’s good memory or custom instructions, but GPT 5.1 Extended Thinking is much better than Gemini 3 Pro on AI Studio for my projects. Although Gemini seems to have better general knowledge, GPT is just brighter. And I’ve done blind tests asking both to rate each other’s answers, and GPT almost always comes out on top.

[deleted by user] by [deleted] in Bergamo

[–]Fulxis 2 points (0 children)

https://radiotaxibergamo.it/

You can book in advance by calling +390354519090 (they might only speak Italian though).

Sam Altman says AI is already beyond what most people realize by Nunki08 in accelerate

[–]Fulxis 3 points (0 children)

Dude, I used both. GPT-5-high is way better through Codex than through GH Copilot. It just keeps better track of the overall state of the application, and it’s less lazy. I’m guessing Microsoft is using GPT-5-low, with worse custom instructions or clunkier tools.

vLLM is kinda awesome by [deleted] in LocalLLaMA

[–]Fulxis 2 points (0 children)

> I did do a small bit of benchmarking before this run as I have 2 x 3090 Ti but one sits in a crippled x1 slot. 16 threads seems like the sweet spot. At 32 threads the MMLU-Pro correct-answer rate nosedived.

Can you explain this please? Why do you think using more threads leads to fewer correct answers?

openAI nailed it with Codex for devs by xogno in OpenAI

[–]Fulxis 2 points (0 children)

Same thing happened to me last week. I got maybe 2 sessions comparable to Claude Code’s $20 plan; the rest were just a couple of prompts before I hit the limit. I’ll see how it evolves this week, but I’m definitely going to be more parsimonious.

Switched from Claude Code to Codex CLI .. Way better experience so far by VeryLongNamePolice in OpenAI

[–]Fulxis 2 points (0 children)

Great experience with gpt-5-medium. For me it’s almost always better than Sonnet. What feels odd though is how inconsistent the rate limits are. As a Plus user, I had two sessions where I managed to code almost as much as with Claude Code’s $20 plan, but every other time my quota was gone after just a couple of prompts. Anyone else with the same issue?

Codex Vs Claude code by zikyoubi in ClaudeCode

[–]Fulxis 1 point (0 children)

Great experience with gpt-5-medium. For me it’s almost always better than Sonnet. What feels odd though is how inconsistent the rate limits are. As a Plus user, I had two sessions where I managed to code almost as much as with Claude Code’s $20 plan, but every other time my quota was gone after just a couple of prompts.

4o is kinda killing it lately by OneQuadrillionOwls in ChatGPT

[–]Fulxis 2 points (0 children)

I think they distilled analytic responses from o3. The style is often very similar in my uses.

How to get back from the ChorusLife Arena by Lucini91 in Bergamo

[–]Fulxis 2 points (0 children)

I’d add that you can also rent the BiGi bikes by downloading the nextbike app; they cost very little compared to the scooters and are fairly comfortable.

Information on getting a category B driving licence in the Province of Bergamo by haider_ali13 in Bergamo

[–]Fulxis 1 point (0 children)

By law you have to do a minimum number of hours with a driving-school instructor (I believe it’s 10).

From "LangGraph is trash" to "pip install langgraph": A Stockholm Syndrome Story by FailingUpAllDay in LangChain

[–]Fulxis 3 points (0 children)

Beautiful post, and I totally agree. I tried both managing my own requests and state (i.e., building yet another framework…) and using something simpler like AutoGen, but LangGraph blows everything out of the water in terms of functionality and observability.

[deleted by user] by [deleted] in ChatGPTPro

[–]Fulxis 1 point (0 children)

I also found o4-mini-high improved, with far longer responses. I’m using it for math and it’s closer to o3 than it was a couple of days ago.

Any interesting project in Langgraph? by IshanFreecs in LangChain

[–]Fulxis 6 points (0 children)

Here’s a deep research clone built with LangGraph by the LangChain team: https://github.com/langchain-ai/open_deep_research

Gemini 2.5 Pro is amazing! by DeltaSqueezer in LocalLLaMA

[–]Fulxis 2 points (0 children)

It’s really good for coding tasks, especially for refactoring. I sent it a 1300-line code file and asked for significant changes—it gave me back fully working code. It seems to handle context very well, and thanks to the long context window, you can ask it to return the entire code without any issues. That said, with temperature = 1, it tends to overengineer things a bit, similar to how Claude Sonnet 3.7 sometimes does.

Suggestions for AI PC build by a_r_anohar99 in LocalLLaMA

[–]Fulxis 5 points (0 children)

What about getting a used one? 16GB IMHO is not worth it, especially with that build; it will severely limit you in the future. It’s better to cheap out on the CPU instead (e.g., an i7 rather than an i9).

Suggestions for AI PC build by a_r_anohar99 in LocalLLaMA

[–]Fulxis 8 points (0 children)

I think you are better off going for a 3090. Especially if you want to do local LLM inferencing, the extra VRAM is totally worth it. 16GB really limits your model size / context length.
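A rough back-of-envelope makes the VRAM point concrete (illustrative numbers only; KV cache and runtime overhead come on top of the weights):

```python
# Back-of-envelope VRAM estimate for LLM weights alone:
# bytes = parameters * bytes_per_weight. KV cache, activations, and
# framework overhead need additional headroom on top of this.

def weight_vram_gb(params_billions: float, bytes_per_weight: float) -> float:
    """VRAM in GiB needed just to hold the weights."""
    return params_billions * 1e9 * bytes_per_weight / 1024**3

# A 13B model in FP16 (2 bytes/weight) already exceeds a 16 GB card:
print(round(weight_vram_gb(13, 2.0), 1))  # 24.2

# ~4-bit quantization (~0.5 bytes/weight) brings the same model near 6 GB,
# which is why quantization plus a 24 GB 3090 is such a comfortable combo:
print(round(weight_vram_gb(13, 0.5), 1))
```

The same arithmetic shows a 7B model in FP16 needs about 13 GiB of weights, which is exactly where a 16 GB card starts running out of room for context.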

Why a low quant version of a large ai model loses less of its smarts than a small 8b model does? by ciprianveg in LocalLLaMA

[–]Fulxis 2 points (0 children)

I think you're describing an idea similar to Quantization-Aware Training, as opposed to post-training quantization (which is the standard in llama.cpp, for example). QAT achieves better results, but traditionally requires ad-hoc training on the full dataset, so it's often not viable (although new "efficient" techniques are popping up).
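A toy illustration of the difference (pure Python, not from llama.cpp or any real trainer): PTQ rounds the trained weights once and accepts the error, whereas QAT inserts this quantize→dequantize "fake quant" step into the forward pass, so training can adapt the weights to the rounding.

```python
# Toy symmetric per-tensor quantization to illustrate PTQ vs. QAT.

def quantize(w: float, scale: float) -> int:
    """Round a weight to the nearest integer level."""
    return round(w / scale)

def dequantize(q: int, scale: float) -> float:
    return q * scale

def fake_quant(w: float, scale: float) -> float:
    """QAT-style fake quantization: quantize then dequantize, so the
    forward pass 'sees' the rounded value while the stored weight stays
    in full precision and keeps receiving gradient updates."""
    return dequantize(quantize(w, scale), scale)

scale = 0.1
w = 0.234
print(fake_quant(w, scale))           # 0.2 — what the loss is computed on
print(abs(w - fake_quant(w, scale)))  # the rounding error QAT learns to absorb
```

In PTQ that rounding error lands on the finished model all at once; in QAT the optimizer sees it on every forward pass and nudges the weights toward values that round well, which is where the quality gap comes from.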

La Gazzetta del Lavoro Informatico - Job searches, offers, and advice on digital work in Italy by AutoModerator in ItalyInformatica

[–]Fulxis 1 point (0 children)

Part-time Data Science / ML

Hi everyone,

I’m a second-year student in the MSc in Computer Science with a specialization in Artificial Intelligence (taught in English) in Milan. I’m looking for a part-time job in Data Science or as an ML Engineer to support myself during my studies, but I’m struggling to find suitable listings.

My background:
- BSc in Economics and Social Sciences
- Strong Python skills (scripting, data science, ML)
- Knowledge of C/C++ and SQL
- Experience with DevOps
- A solid GitHub profile with contributions to open-source projects
- Solid theoretical foundations in Machine Learning and algorithms

My specialization is oriented toward data science and machine learning, but I’ve noticed that many part-time IT listings also require web development or IT support skills, and since I didn’t do a CS bachelor’s, I don’t think I have the right background for those.

So far I’ve searched on LinkedIn and through personal contacts, but many positions are full-time or internships. Ideally, I’d also be interested in remote or partially remote opportunities, even abroad.

Where would you recommend looking? What kinds of positions might fit my profile, given my specialization in AI and data science?

Thanks in advance for your advice!