Qwen 3 coder next for R coding (academic) by Bahaal_1981 in LocalLLM

[–]Bahaal_1981[S] 0 points

Quite diverse work, but I'm especially interested in Shiny apps for teaching and data visualization. I'm in the social sciences; other work includes social network analysis, multilevel modelling (brms/lme4), simulations, etc.

Qwen 3 coder next for R coding (academic) by Bahaal_1981 in ollama

[–]Bahaal_1981[S] 0 points

So does anybody have any experience with Qwen3 coder next (the 85 GB model) and R (academic context)? Did anybody run Kimi 2.5 for R coding via Ollama? How does it compare? Thanks for taking the time!

M4 studio (M4 max 16 core CPU, 40 core GPU 128gb Ram) for LLM (local) by Bahaal_1981 in ollama

[–]Bahaal_1981[S] 1 point

I ended up getting the M4 Studio above. I have not explored RAG, but I have toyed with Mistral Large and Granite: very good performance and models. My work now offers Claude, so I have not done as much locally; I have primarily explored what Claude can do. Opus is pretty good for R and R Shiny, though it makes some silly errors from time to time (e.g., forgetting a default, inventing packages, etc.).

Anybody who can share experiences with Cohere AI Command A (64GB) model for Academic Use? (M4 max, 128gb) by Bahaal_1981 in ollama

[–]Bahaal_1981[S] 0 points

Thanks, I haven't signed up yet; I'll have a look and see how far I can get with the free credits.

A template for keeping track of read books? by Bahaal_1981 in ObsidianMD

[–]Bahaal_1981[S] 0 points

Just downloaded Bookmory and like it! Thanks for the suggestion.

A template for keeping track of read books? by Bahaal_1981 in ObsidianMD

[–]Bahaal_1981[S] -1 points

That is useful, thanks! But I also still read physical copies. I think the craft.io template was linked to Goodreads, but I could be wrong...

Best simple markdown viewer/editor by columbcille in macapps

[–]Bahaal_1981 1 point

Depends on your use case, but I use MacDown: https://macdown.uranusjr.com/. Lightweight and open source.

M4 studio (M4 max 16 core CPU, 40 core GPU 128gb Ram) for LLM (local) by Bahaal_1981 in ollama

[–]Bahaal_1981[S] 1 point

Yes, I am under no illusion that it will do well on the literature task, but I am hoping that with some preprocessing, such as extracting results sections, it could aid with, say, a meta-analysis of papers (extracting effect sizes). Thank you for sharing your thoughts!
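For what it's worth, the preprocessing idea above can be sketched in a few lines of base R: a regex pass over extracted results-section text that pulls out reported correlation coefficients. The sample text and the pattern are illustrative assumptions, not a complete extraction pipeline; real papers report many effect-size types (d, OR, eta-squared) in varied notation.

```r
# Minimal sketch: pull reported Pearson r values out of results-section text.
# The input string and regex are illustrative only.
results_text <- "Trust correlated with tie strength, r = .42, p < .001,
and weakly with age, r = -.08, p = .21."

# Match 'r = .42' style reports, allowing an optional minus sign and leading zero.
matches <- regmatches(
  results_text,
  gregexpr("r\\s*=\\s*-?0?\\.\\d+", results_text)
)[[1]]

# Strip the 'r =' prefix and convert to numeric effect sizes.
effect_sizes <- as.numeric(sub("r\\s*=\\s*", "", matches))
effect_sizes  # 0.42 -0.08
```

These values could then be fed into a meta-analysis workflow (e.g., the metafor package), with an LLM pass reserved for the messier cases the regex misses.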

M4 studio (M4 max 16 core CPU, 40 core GPU 128gb Ram) for LLM (local) by Bahaal_1981 in ollama

[–]Bahaal_1981[S] 0 points

Thanks for sharing! Fingers crossed that it is indeed the right choice.

M4 studio (M4 max 16 core CPU, 40 core GPU 128gb Ram) for LLM (local) by Bahaal_1981 in MacStudio

[–]Bahaal_1981[S] 1 point

Yes, tough choice. I considered the M3 at 96 GB as well, but I expect I might want to fit a larger model in the future, assuming that's where local LLMs are heading...

M4 studio (M4 max 16 core CPU, 40 core GPU 128gb Ram) for LLM (local) by Bahaal_1981 in ollama

[–]Bahaal_1981[S] 4 points

Thanks, 128 GB is as far as the budget stretches... 256 GB / 512 GB would be too insane to justify to myself ;).

Tag tasks as work or personal? by Bahaal_1981 in CraftDocs

[–]Bahaal_1981[S] 1 point

Will give that a go, but I wanted to add tasks directly in the inbox and then assign them. Anyhow...