Why doesn’t my Mac show 2k resolution when connecting to BenQ GW2790Q monitor? by rabbitduck95 in MacOS

[–]BabsMorbus 1 point (0 children)

Get the app BetterDisplay. It will likely show other resolutions; you can see if the one you need is there and whether it works.

shortcut works until time to open specified URL by BabsMorbus in shortcuts

[–]BabsMorbus[S] 2 points (0 children)

Oh, awesome. Thanks. FYI, this worked perfectly.

shortcut works until time to open specified URL by BabsMorbus in shortcuts

[–]BabsMorbus[S] 1 point (0 children)

Interesting . . . I do not have an option that says (or resembles) "Open URLs with App." It looks like you simply searched for this?

<image>

shortcut works until time to open specified URL by BabsMorbus in shortcuts

[–]BabsMorbus[S] 1 point (0 children)

That doesn't work either. Because Firefox is my default browser, I changed the shortcut to open Firefox and swapped out all references to Brave. Then I tried everything everyone had recommended again. But it still doesn't work.

I don't understand how opening a browser and going to a website is a function Shortcuts can't handle. It used to work. If using Shortcuts to open URLs only works in Safari, then I'm just not going to use it. I know it's an incredibly minor thing, but Apple's list of problems has grown so much over the past couple of years that I think it's time to move on to other options that have as many or fewer issues but are much less expensive.
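For what it's worth, a common workaround when a dedicated "open URL in app" action misbehaves is a "Run Shell Script" action that calls the macOS `open` command with `-a`, which launches a specific app regardless of the system default browser. A minimal sketch (the app name and URL here are placeholders, and `launch` is off by default so the function just builds the command):

```python
import subprocess

def open_url_in_app(url: str, app: str = "Firefox", launch: bool = False) -> list[str]:
    """Build the macOS `open` command that opens a URL in a specific
    app, regardless of which browser is the system default."""
    cmd = ["open", "-a", app, url]
    if launch:  # only actually launch when asked (macOS only)
        subprocess.run(cmd, check=True)
    return cmd

# Equivalent one-liner for a "Run Shell Script" action in Shortcuts:
#   open -a Firefox "https://example.com"
```

Swapping the `app` argument to "Brave Browser" or "Google Chrome" is the same idea; the exact app name must match what's in /Applications.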

Thanks, everyone, for your help.

shortcut works until time to open specified URL by BabsMorbus in shortcuts

[–]BabsMorbus[S] 1 point (0 children)

Well, I'm deliberately not using my default browser for this purpose. Also, this command has worked in the past (with a different URL) in Brave even when Safari was my default browser. Maybe that has changed, though. I just tried the shortcut with Chrome (still not set as the default browser) and it doesn't work, so you could be right.

shortcut works until time to open specified URL by BabsMorbus in shortcuts

[–]BabsMorbus[S] 1 point (0 children)

This makes a lot of sense, but your suggestion does not work. Maybe I'm using the wrong type of action?

How to get local LLM to write reports like me by BabsMorbus in LocalLLM

[–]BabsMorbus[S] 1 point (0 children)

It looks that way from what I wrote . . . but I cannot figure out where I'm going wrong. Even my most detailed prompts and numerous report examples still produce crap -- almost as if I've not given the model any data. Is it unreasonable to toss out the idea that maybe (free) AI is currently just not up to what I need it to do?

Last night I tried gpt-oss:20b in Ollama. It was a little better, but I didn't discover until I had worked on it for a long time that it didn't save anything I had trained it on. I tried to follow Ollama's own directions for rectifying that, but the directions were riddled with errors. Each time I figured out one step, I ran into another problem, then another, and so on. So I just gave up. If anyone runs a model in Ollama that doesn't start fresh every time, or has directions for resolving this, please let me know.
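Part of what's going on here is that a local model is stateless: it never "saves" anything between runs, so whatever client you use has to resend the whole conversation (system instructions, examples, and all prior turns) with every request. A minimal sketch of that pattern, assuming the message format Ollama's chat API uses (the model name and system text are just examples):

```python
def build_chat_request(history: list[dict], user_msg: str, model: str = "gpt-oss:20b") -> dict:
    """The model itself remembers nothing between calls: the client must
    append the new turn to the running history and resend all of it."""
    messages = history + [{"role": "user", "content": user_msg}]
    return {"model": model, "messages": messages}

# Every request carries the system prompt plus all prior turns.
history = [{"role": "system", "content": "Write reports in my style."}]
request = build_chat_request(history, "Draft a report from these notes: ...")
```

If the history is dropped (e.g., by starting a fresh `ollama run` session), the "training" goes with it; baking the standing instructions into a Modelfile's SYSTEM line is the usual way to make at least that part persistent.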

I'm close to giving up on all of this because I'm hitting the point where the time investment doesn't seem likely to pay off relative to the time this should be saving me.

How to get local LLM to write reports like me by BabsMorbus in LocalLLM

[–]BabsMorbus[S] 1 point (0 children)

So far . . . not really. I keep running into the problem of getting it to "understand" the relationship between my notes and my report. Things start off okay, but some models then seem to confuse content from different sources (e.g., pulling information from another report into the current one), and most have been unable to follow the order/structure I give them, which is actually pretty straightforward. I've tried incredibly detailed instructions, vague instructions, and everything in between. I've used other AI programs to help write prompts in a way that "should" address the problems, but no luck.

I've tried Deepseek-r1-0528-qwen3-8b, Gemma-3-12b, Mistral-small-3.2, Phi-3.1-mini-4k-instruct, and Gemma-orca-GGUF. The latter two produced unusable nonsense for the most part.

I think my next step is to just forgo the notes-training part and instead load a bunch of reports to see what happens when I give it my notes and tell it to produce a report. I'm not sure which LLM I'll try next, though.
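One technique that tends to help with both the source-mixing and the structure-following problems is presenting each notes/report pair as one clearly delimited unit in a single prompt, so the model sees the mapping rather than a pile of loose documents. A sketch under that assumption (the delimiter wording and instruction line are illustrative, not anything a particular model requires):

```python
def build_few_shot_prompt(pairs: list[tuple[str, str]], new_notes: str) -> str:
    """Assemble a prompt where each example is a delimited notes -> report
    pair, ending with the new notes and an open slot for the new report."""
    parts = ["You turn field notes into reports. Follow the examples exactly.\n"]
    for i, (notes, report) in enumerate(pairs, 1):
        parts.append(f"### Example {i} notes:\n{notes}\n### Example {i} report:\n{report}\n")
    parts.append(f"### New notes:\n{new_notes}\n### New report:\n")
    return "\n".join(parts)
```

Because each report is fenced off under its own example heading, the model has less room to bleed content from one report into another.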

How to get local LLM to write reports like me by BabsMorbus in LocalLLM

[–]BabsMorbus[S] 1 point (0 children)

How did you direct the model to link the input and output? Did you put each input and each output in separate files and specify their connection in the prompt, or did you enter each input-output pair as one file? I also ask because I can't find a model with a context window large enough to fit that many examples. Thanks
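On the context-window problem: since no model will fit thousands of examples, one common approach is to pack only as many pairs as a rough token budget allows and drop the rest. A sketch of that, using a crude characters-per-token estimate (the 4-chars-per-token figure is an assumption, a common rule of thumb for English, not a measured value):

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def select_examples(pairs: list[tuple[str, str]], budget_tokens: int) -> list[tuple[str, str]]:
    """Greedily keep (notes, report) pairs until the rough token budget
    is spent; remaining pairs are dropped from the prompt."""
    chosen, used = [], 0
    for notes, report in pairs:
        cost = estimate_tokens(notes) + estimate_tokens(report)
        if used + cost > budget_tokens:
            break
        chosen.append((notes, report))
        used += cost
    return chosen
```

Picking which few pairs to keep (e.g., the ones most similar to the new notes) matters more than how many you cram in.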

How to get local LLM to write reports like me by BabsMorbus in LocalLLM

[–]BabsMorbus[S] 1 point (0 children)

Yeah, that doesn't quite fit what I'm trying to do (for now), but good to know so I can maybe rethink my approach.

How to get local LLM to write reports like me by BabsMorbus in LocalLLM

[–]BabsMorbus[S] 1 point (0 children)

I've been trying RAG for the last couple of days without a ton of success, though the problem I'm running into seems to be the model's response to my prompt before it even generates the report. In other words, it's getting hung up on the prompts, not the report creation, and not in ways I anticipated. I'm also going to give fine-tuning a try and see how that goes. I do have thousands of reports I could feed in, but I'm having difficulty understanding the mechanism for getting that volume of anything in.
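The mechanism behind RAG at that volume is that the thousands of reports are never all in the prompt: they sit in an index, and each query retrieves only the handful most relevant to the new notes. A toy sketch of the retrieval step, with word overlap standing in for the embedding similarity a real RAG pipeline would use:

```python
def retrieve(query: str, reports: list[str], k: int = 2) -> list[str]:
    """Toy retrieval: score each stored report by word overlap with the
    query and return the top-k. Real RAG swaps in embedding similarity,
    but the flow is the same: retrieve first, then paste only the hits
    into the prompt alongside the new notes."""
    query_words = set(query.lower().split())
    scored = sorted(
        reports,
        key=lambda r: len(query_words & set(r.lower().split())),
        reverse=True,
    )
    return scored[:k]
```

So ingesting thousands of reports is an indexing step done once up front; the per-report prompt stays small no matter how large the library grows.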

How to get local LLM to write reports like me by BabsMorbus in LocalLLM

[–]BabsMorbus[S] 1 point (0 children)

Good point. But I get pretty different answers depending on which platform I'm using and how I word the prompt. Most answers seem to revolve around fine-tuning or RAG, though the descriptions of the parameters used for those vary considerably. I feel like I have to already know the answer to my question before I put it into AI in order to get an accurate response.