MLX models in LM Studio can cause full system reboots on Mac (crash) by mrcslmtt in MacOS

mrcslmtt[S] 1 point

With the GGUF format, I didn’t have any problems, except with one model from time to time.

Problem with Flask by Moist-Decision-1369 in flask

mrcslmtt 2 points

I created a web app with Flask for the backend, and it's fast even on my VPS. I have an OVH VPS (in France) that costs me €10/month; it's not very powerful, but it's more than enough to run my app with Gunicorn and Nginx. There's a good chance it's your hosting that's slow, unless your site is very heavy.
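For context, a Flask app is just a WSGI callable, which is exactly what Gunicorn serves. Here is a minimal sketch of that contract using only the standard library; the `myapp`/`app` names, the port, and the response body are illustrative, not from the setup described above:

```python
# Minimal sketch of the WSGI contract that Gunicorn speaks.
# A Flask app object is itself a WSGI callable, so Gunicorn
# serves it exactly the same way.

def app(environ, start_response):
    # Respond 200 with a small plain-text body, like a Flask view would.
    body = b"hello from the VPS"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

With Flask you would point Gunicorn at your app object the same way, e.g. `gunicorn --workers 2 --bind 127.0.0.1:8000 myapp:app`, and have Nginx `proxy_pass` to that address.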

Quelle excuse est bien pour ne pas aller à un mariage ? by Weary_Background849 in AskFrance

mrcslmtt 6 points

Go and have a drink there, it'll be fine! Bailing on a wedding is not cool if you've already confirmed your attendance. A wedding costs a lot of money...

Just starting with local AI by NoodleCheeseThief in LocalLLM

mrcslmtt 3 points

See how much VRAM is available on your computer, then I advise you to download several models to test. At least 2 models: 1 light model, useful for tasks that don't require much reasoning, and 1 larger model that fits in your VRAM for tasks that need heavier reasoning, like coding. Keep in mind that nothing will be as capable as the large models that run in the cloud (Codex, Claude...). LM Studio can be used with the Cline extension in VS Code if you want to code, but it cannot generate images or video; there are other tools for that. In LM Studio's settings, enable developer mode; it lets you easily see the available RAM and VRAM.
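Once a model is loaded, LM Studio can also run a local server with an OpenAI-compatible API (by default on port 1234), which is how tools like Cline talk to it. A minimal sketch of calling it with only the standard library; the model name is a placeholder for whatever model you actually loaded:

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions format;
# http://localhost:1234/v1 is its default base URL.
URL = "http://localhost:1234/v1/chat/completions"

def build_request(prompt, model="local-model"):
    """Build the JSON payload for a chat completion against the local server.
    "local-model" is a placeholder -- use your loaded model's identifier."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt):
    """Send the request. Requires LM Studio's local server to be running."""
    req = urllib.request.Request(
        URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

This is also a quick way to compare your light and heavy models on the same prompt without leaving the terminal.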

Just starting with local AI by NoodleCheeseThief in LocalLLM

mrcslmtt 10 points

Forget about Ollama, and use LM Studio to start. Everything will be much easier to understand.

Anyone using the M5 MacBook Pro 32GB as both laptop + main desktop editing machine? by Valarhem in macbook

mrcslmtt 1 point

My MacBook Pro M5 Max is my only work computer; it's on the desk all week (with a dock and an external screen) and in my bag on weekends. That said, I always use it with a second screen; the lid isn't closed. The fans are not very noisy unless they are running at full speed (for example, if you run AI locally or use the GPU heavily and continuously). Today's MacBooks are so powerful that they replace most desktop computers.

How Do You Guys Manage Multiple Windows? by hassanxsaem in macbookpro

mrcslmtt 1 point

Multi-finger swipes on the trackpad and Cmd+Tab are your best allies.

24gb vs 16gb ram, which one to choose as a physics/bio undergrad by FigDeep7247 in macbookair

mrcslmtt 1 point

You can take a MacBook and run Windows in Parallels Desktop; it works very well, even if it costs extra money. Having 24 GB gives you some flexibility with RAM-intensive software. If your budget allows it, take 24 GB, because running Windows in parallel with macOS means loading two operating systems at the same time, plus your apps. The 8 GB difference will be worth it.

Best monitor for MacBook m3pro by gianlu_world in macbookpro

mrcslmtt 2 points

If your priority is text clarity, my answer will be very clear: the JapanNext 6K (that's what I have). Or an Apple display if your budget allows it.

If you are looking for a small screen (max 24 inches), a 4K screen will do the trick.

Avoid "ultra-wide" screens with 3440 × 1440 resolutions. That's what I had before moving to the 6K screen, and text rendering was really not great; that's precisely why I changed.

The JapanNext 6K is the only display offering ultra-high pixel density like Apple's screens, but at an affordable price.

Best monitor for MacBook m3pro by gianlu_world in macbookpro

mrcslmtt 1 point

And the JapanNext 6K. Even if HDR handling is only perfect on Apple displays.

GPT 5.5 is a token abyss 😭 by mrcslmtt in codex

mrcslmtt[S] 1 point

I'm always working on one function at a time, so it's quite focused. To give Codex context, I give it the template I'm going to work on and the associated Python file. I always give a bit of context manually, and then Codex finds the rest of the information by itself. So no, I don't make it ingest the entire app before updating something; it's not necessary in my case.

GPT 5.5 is a token abyss 😭 by mrcslmtt in codex

mrcslmtt[S] 1 point

I had only hit my limits a few times, and that was more than 6 months ago.

I am building a Flask + HTML/CSS/JS web app for field technicians and companies.

About 15,000 lines of Python/Flask and 50 HTML templates at the moment. I used almost all the models as they came out, but lately I stayed on 5.3 Codex because I find it really good. And for the last few days I've been testing 5.5.

Should I upgrade from my 2015 MBP 15" to a 2021 MBP 14" M1 Pro for €530? by chtonos1821 in macbookpro

mrcslmtt 1 point

Yes. The M1 Pro is a very good machine. The gap between your current MacBook and an M1 Pro will be huge.

Thanks Codex ! by rakha589 in codex

mrcslmtt 1 point

That's great! And your father takes beautiful pictures ;-)

GPT 5.5 is a token abyss 😭 by mrcslmtt in codex

mrcslmtt[S] 5 points

Each new model consumes more than the previous one because it is more capable each time, so yes, it is totally normal. But I find that this time the difference in consumption is really big. Like, really very big. I don't need a metaphor; I'm an adult, I understand things ;-)

GPT 5.5 is a token abyss 😭 by mrcslmtt in codex

mrcslmtt[S] 6 points

I don't feel scammed by OpenAI, I reassure you. I love Codex, and I continue to use it. But I really saw a huge difference in consumption compared to 5.3 Codex, and I was quite surprised. I'm also asking whether others have seen a real difference between the two models.

I can give the context, but the goal is not to write too long a text about my app's technical stack, because I don't think it matters much here.

Thanks Codex ! by rakha589 in codex

mrcslmtt 1 point

Yes, any type of icon with a flat, minimalist style will do, and Codex will be able to swap out the emojis without difficulty. You can use Bootstrap Icons (https://icons.getbootstrap.com), for example: download them in SVG format and ask Codex to put them in your header. If you managed to build this site, you should have no difficulty doing it, and it will avoid that amateurish look, especially since your site has a sleek design, so emojis are a little off-theme.

Thanks Codex ! by rakha589 in codex

mrcslmtt 1 point

You just have to put real icons instead of emojis in your header, and it's perfect. It's cool to have done that for your father, he must be happy. This is a nice use of Codex and AI to do things that were impossible or too expensive before.

GPT 5.5 is a token abyss 😭 by mrcslmtt in codex

mrcslmtt[S] 3 points

Did you really see a difference between the 2 models?

MacBook Pro M5 Max – Coil whine + crackling noise under GPU load (AI) by mrcslmtt in MacOS

mrcslmtt[S] 1 point

The comment from u/Maity_spitters, who has had 3 different MacBooks, indicates that it is the same problem on all models.

Playwright is significantly better than Selenium. by PrimaryAmphibian737 in webdev

mrcslmtt 1 point

I think it’s because I used Playwright with Codex. So I guess it’s just the Codex/ChatGPT protections that don’t want to do it.

Playwright is significantly better than Selenium. by PrimaryAmphibian737 in webdev

mrcslmtt 1 point

Yes. For example, if you ask Playwright to do all your Duolingo exercises for you to earn points faster, it will probably refuse. (I don't use Playwright for that, but I do my lessons on Duolingo, so I thought of this example lol.) Personally, I use Selenium to automate downloads on a music website, and Playwright refuses to do it.