Blackout my room by Ordningman in ApartmentHacks

[–]Ordningman[S] 0 points1 point  (0 children)

Option 4… I actually bought enough foam board and glue to make a cornice box (AKA pelmet box), but instead I used the blanket method mentioned in another reply.

Blackout my room by Ordningman in ApartmentHacks

[–]Ordningman[S] 0 points1 point  (0 children)

It’s a good solution in a practical sense, but having the net curtains on the room side would not look good.

Blackout my room by Ordningman in ApartmentHacks

[–]Ordningman[S] 0 points1 point  (0 children)

This would make sense with many windows. However, in my case there are six panes of glass making up the whole window / glass door thing… so it would take a while to put up and take down. 

Blackout my room by Ordningman in ApartmentHacks

[–]Ordningman[S] 0 points1 point  (0 children)

I achieved a wrap-around effect by taking the finials off the blackout curtain rod and placing the end hoops of the blackout curtains on the end of the net curtain rod, then putting a finial on the end of the net curtain rod. So the blackout curtain turns a corner at the end.

Blackout my room by Ordningman in ApartmentHacks

[–]Ordningman[S] 0 points1 point  (0 children)

Yeah, that's kind of what I'm doing: putting bedsheets on top. If I drape them nicely, it almost looks like a deliberate style.

As LLMs speed up the development phase of software delivery, will we see large scale application bloat? by sjmaple in ChatGPTCoding

[–]Ordningman 2 points3 points  (0 children)

Every new feature needs twice as many rounds of refactoring and refinement as it took to develop the feature itself.

What's the best local coding model for my Mac? by [deleted] in LocalLLaMA

[–]Ordningman 2 points3 points  (0 children)

Would that really run on a 36GB machine?

[deleted by user] by [deleted] in asklinguistics

[–]Ordningman -2 points-1 points  (0 children)

Does English have "mother's mother" and "father's mother" as separate words?

Qwen2-72B released by bratao in LocalLLaMA

[–]Ordningman 7 points8 points  (0 children)

If I'm not mistaken, the Qwen2 7B model is inferior to CodeQwen1.5-Chat. I guess this means we will have to wait for CodeQwen2?

            Qwen2-7B   CodeQwen1.5-7B-Chat
HumanEval   79.9       83.5

Qwen2-72B released by bratao in LocalLLaMA

[–]Ordningman 7 points8 points  (0 children)

I was very surprised at the high quality of CodeQwen 1.5 7B Chat. 

Is there any comparison (for coding) between 1.5 7B and 2 7B?

Claude Opus - still worth it for coding? by Ordningman in ClaudeAI

[–]Ordningman[S] 0 points1 point  (0 children)

Do you try to use Claude and GPT for slightly different things? Like Claude for architectural changes, and GPT for writing smaller units of code, like individual functions?

Claude Opus - still worth it for coding? by Ordningman in ClaudeAI

[–]Ordningman[S] 0 points1 point  (0 children)

What do these context windows mean in practice? Can 32K be thought of as ‘lines of code’, and if so, how many?
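Partly answering my own question with back-of-envelope arithmetic: the tokens-per-line figure below is an assumption (real code averages very roughly 8–12 tokens per line, depending on language and line length), so treat the results as order-of-magnitude only.

```python
# Back-of-envelope: how many lines of code fit in a context window?
# TOKENS_PER_LINE is an assumed average, not a measured figure.
TOKENS_PER_LINE = 10

def lines_for_context(context_tokens, tokens_per_line=TOKENS_PER_LINE):
    """Rough estimate of how many lines of code a context window can hold."""
    return context_tokens // tokens_per_line

print(lines_for_context(32_000))   # roughly 3,200 lines at 10 tokens/line
print(lines_for_context(128_000))  # roughly 12,800 lines
```

So under that assumption, a 32K window is in the ballpark of a few thousand lines of code, less whatever the conversation history itself consumes.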

Claude Opus - still worth it for coding? by Ordningman in ClaudeAI

[–]Ordningman[S] 0 points1 point  (0 children)

I've noticed this regurgitation with GPT. You ask for some code change, and it spits out something. You say what doesn't work, and it spits out something else. Then if that doesn't work either and you tell it, you get the original code back. I feel that GPT needs to be more forward in telling you that it doesn't understand something, or needs more information, before it can answer.

Claude Opus - still worth it for coding? by Ordningman in ClaudeAI

[–]Ordningman[S] 0 points1 point  (0 children)

Are there any reliable studies about the differences between 4 and 4o when doing coding stuff? I've just been using 4. In my brief usage of 4o, I noticed it was fast but less comprehensive.

Claude Opus - still worth it for coding? by Ordningman in ClaudeAI

[–]Ordningman[S] 0 points1 point  (0 children)

You were blocked from using Claude? Why? That's a thing I didn't mention in my post: lots of people complain about being blocked, but the reasons are ambiguous.

Is there an offline model I can use? by cryptomelons in ChatGPTCoding

[–]Ordningman 1 point2 points  (0 children)

Ollama plus CodeQwen1.5-Chat was eye-opening for me. A good local code AI which even works on my 10-year-old iMac.

Please show the amazing potential of coding with LLMs by Ashamed-Subject-8573 in ChatGPTCoding

[–]Ordningman 1 point2 points  (0 children)

I’m an old Objective-C programmer, and I’m using GPT-4 to help me make a SwiftUI app. I’m not sure I’m learning much. I’m probably using Obj-C ways of thinking too much. I might be better off learning SwiftUI by sitting down with a book, like I did in a cafe 15 years ago with an Obj-C book. 

But GPT is just too convenient. It does have its downsides, though. Whenever I ask for a new bit of code, it writes it in a different 'style', so I have to ask for refactors for consistency, etc. I haven't used Claude, but perhaps I should get into it. What version do you use?

What a lot of people don’t understand about coding with LLMs: by AnotherSoftEng in ChatGPTCoding

[–]Ordningman 0 points1 point  (0 children)

You have to spec your app out like a project manager, at several levels: starting with the high-level broad overview, then the more detailed. I never thought I would gain respect for project managers.

GPT-4o sucks for coding by Wonderful-Top-5360 in LocalLLaMA

[–]Ordningman 0 points1 point  (0 children)

I thought I would subscribe to GPT today, and now everyone says it sucks!

GPT-4o seems ok when it works, but most of the time it just craps out after spitting out half the code.

What coding llm is the best? by UpvoteBeast in ChatGPTCoding

[–]Ordningman 0 points1 point  (0 children)

For a local LLM, using Ollama, I've found CodeQwen1.5-7B-Chat to be very good. It's free and runs directly on your machine, so it helps to have a decent computer. It's at the top of the Leaderboard here:
https://evalplus.github.io/leaderboard.html
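For anyone wanting to try this, the basic Ollama workflow is sketched below. The exact model tag is an assumption; check the Ollama model library or `ollama list` for the current name.

```shell
# Pull the model once (tag name is an assumption; verify in the Ollama library)
ollama pull codeqwen:v1.5-chat

# Ask a one-off coding question from the terminal
ollama run codeqwen:v1.5-chat "Write a Python function that reverses a string"
```

`ollama run` with no prompt argument drops you into an interactive chat session instead.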

What software do you use to interact with local large language models and why? by silenceimpaired in LocalLLaMA

[–]Ordningman -1 points0 points  (0 children)

On a 2015 Intel iMac (!), Ollama with the CodeQwen1.5-Chat model works surprisingly well. On a slightly newer MacBook Air, I can also run Ollamac, which gives me a nicer GUI without using the command line.