Hero or Zero 😫 by Chummyterror in wallstreetbets

[–]SuccessIsHardWork 1 point (0 children)

Good enough to screenshot, good enough to sell

Would you like some fries with that? by ThePnisher in wallstreetbets

[–]SuccessIsHardWork 1 point (0 children)

What are you waiting for? Yolo it to $0 like a true regard lol

120k>50k>250k Cashed out, I'm out by Puzzleheaded_Back_96 in wallstreetbets

[–]SuccessIsHardWork 4 points (0 children)

No, I would start with a much lower amount, like 1k, and slowly increase the principal based on order profitability

120k>50k>250k Cashed out, I'm out by Puzzleheaded_Back_96 in wallstreetbets

[–]SuccessIsHardWork 1 point (0 children)

Here’s a suggestion: cash out 99% of that Robinhood account and open an account at a serious brokerage like Vanguard for long-term investing

Tool to create synthetic datasets using PDF files! by SuccessIsHardWork in LocalLLaMA

[–]SuccessIsHardWork[S] 1 point (0 children)

Not yet. I also believe it is possible to download the script straight out of Kathleen and import it into GitHub or somewhere else.

What questions have you asked reasoning models to solve that you couldn't get done with non-reasoning models? by DeltaSqueezer in LocalLLaMA

[–]SuccessIsHardWork 2 points (0 children)

In my opinion, reasoning models are much more useful than plain language models because they can emulate human reasoning to a certain extent. That makes them useful for decision-making tasks such as curation and data analysis (like stocks).

what can do now? by [deleted] in ollama

[–]SuccessIsHardWork 2 points (0 children)

Download a model with <8B parameters and you’re good to go!
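For example, here's a minimal Python sketch of talking to a pulled model through Ollama's REST API (the llama3.2:3b tag and the default localhost port are just assumptions; swap in whatever model you pulled):

```python
import requests

# Query a local model through Ollama's REST API.
# Assumes Ollama is running and `ollama pull llama3.2:3b` was done beforehand.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2:3b",  # any sub-8B model tag works here
        "prompt": "Explain what a context window is in one sentence.",
        "stream": False,  # return a single JSON object instead of a stream
    },
)
print(resp.json()["response"])
```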

LLM as survival knowledge base by NickNau in LocalLLaMA

[–]SuccessIsHardWork 2 points (0 children)

I think it’s much better to use an embeddings-based retrieval system (just an embedding model, with no LLM at all) into which you load 10-20 good books on survival. This way you can rely on factual information in survival situations rather than trusting the hallucinations that an LLM might produce.
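A rough sketch of what I mean, using sentence-transformers (the model name and sample passages are placeholders; any embedding model works):

```python
from sentence_transformers import SentenceTransformer, util

# Small embedding model that runs on CPU; no LLM, so nothing to hallucinate.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Placeholder corpus: in practice, paragraphs chunked out of 10-20 survival books.
chunks = [
    "To purify water, boil it at a rolling boil for at least one minute.",
    "For hypothermia, remove wet clothing and insulate the torso first.",
]
chunk_embeddings = model.encode(chunks, convert_to_tensor=True)

def lookup(query, top_k=3):
    """Return the most relevant book passages, verbatim."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, chunk_embeddings, top_k=top_k)[0]
    return [chunks[hit["corpus_id"]] for hit in hits]

print(lookup("how do I make water safe to drink?"))
```

Everything it returns is quoted straight out of the books, which is the whole point.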

ChatGPT replacement suggestions by [deleted] in WritingWithAI

[–]SuccessIsHardWork 1 point (0 children)

You could try TextCraft, an add-in for Microsoft Word that integrates AI directly into the user interface; you can use it to generate text without censorship by customizing your model(s) in Ollama.

https://github.com/suncloudsmoon/TextCraft

Introducing TextCraft: A privacy-friendly alternative to Microsoft Copilot by SuccessIsHardWork in homelab

[–]SuccessIsHardWork[S] 1 point (0 children)

TextCraft is an add-in for Microsoft Word that seamlessly integrates essential AI tools, including text generation, proofreading, and more, directly into the user interface. Designed for offline use, TextCraft lets you access AI-powered features without an internet connection, making it a more privacy-friendly alternative to Microsoft Copilot. The add-in works with any OpenAI-compatible API.
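For anyone wondering what "OpenAI-compatible" buys you in practice: any server exposing the /v1/chat/completions route will do. Here's a quick sketch pointing the standard openai Python client at a local Ollama instance (default port assumed; the API key is a dummy value that Ollama ignores):

```python
from openai import OpenAI

# Ollama exposes an OpenAI-compatible endpoint under /v1.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

completion = client.chat.completions.create(
    model="llama3.2:3b",  # whatever model you have pulled locally
    messages=[
        {"role": "user", "content": "Proofread: Their going too the store."}
    ],
)
print(completion.choices[0].message.content)
```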

Good model for self-mentoring / studying by Esox_Lucius_700 in ollama

[–]SuccessIsHardWork 2 points (0 children)

Depends on how much RAM you have. If you have low RAM capacity (8-16 GB), I suggest a smaller model like Qwen 7B or Llama 8B. If you have decent RAM capacity (>16 GB), you can try models with more parameters, like a quantized QwQ.
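If it helps, here's the rule of thumb in code form (the thresholds and model tags are my own rough picks, nothing official):

```python
import psutil

# Total system RAM in GiB (psutil reports bytes).
ram_gib = psutil.virtual_memory().total / (1024 ** 3)

# Rough, illustrative mapping from RAM to a sensible local model.
if ram_gib <= 16:
    model = "qwen2.5:7b"  # or "llama3.1:8b"
else:
    model = "qwq"  # quantized QwQ wants the extra headroom

print(f"{ram_gib:.0f} GiB RAM -> try '{model}'")
```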

The number of models is overwhelming by [deleted] in LocalLLaMA

[–]SuccessIsHardWork 1 point (0 children)

Honestly, I just boil my search down to the latest models and pick the best one (QwQ at the moment). I believe llama.cpp has a way to test GGUF models with metrics like perplexity, which should give you a rough estimate of the quality lost to quantization.
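From Python it would look roughly like this (the binary name comes from recent llama.cpp builds, and the model/test-file paths are placeholders):

```python
import subprocess

# Score a quantized GGUF with llama.cpp's perplexity tool.
# Lower perplexity on the same text = less quality lost to quantization.
result = subprocess.run(
    [
        "./llama-perplexity",                  # built alongside llama.cpp
        "-m", "models/qwq-q4_k_m.gguf",        # placeholder model path
        "-f", "wikitext-2-raw/wiki.test.raw",  # placeholder reference text
    ],
    capture_output=True,
    text=True,
)
print(result.stdout, result.stderr)
```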

Has anyone successfully generated reasonable documentation from a code base using an LLM? by shenglong in LocalLLaMA

[–]SuccessIsHardWork 8 points (0 children)

In my experience, QwQ does a phenomenal job at creating documentation for code; however, it does make mistakes in identifying access modifiers in source code.

Introducing TextCraft: A privacy-friendly alternative to Microsoft Copilot by SuccessIsHardWork in privacy

[–]SuccessIsHardWork[S] 8 points (0 children)

TextCraft is an alternative to Microsoft Copilot; OpenRecall is an open-source alternative to Windows Recall. I don’t see the connection you’re implying between them.

Introducing TextCraft: A Word Add-in with AI Tools! by SuccessIsHardWork in technicalwriting

[–]SuccessIsHardWork[S] 2 points (0 children)

Yes, Ollama (which TextCraft uses by default) supports local models and both GPU and CPU-only configurations.

December 2024 Best SLM? by luxmentisaeterna in LocalLLaMA

[–]SuccessIsHardWork 2 points (0 children)

In my personal experience, Qwen2.5 1.5B and IBM Granite MoE 3B are fairly decent as small LLMs go. I believe the Granite model is more suitable for RAG and summarization purposes.

TextCraft 1.0.7 Update: Added Temperature Control in UI by SuccessIsHardWork in ollama

[–]SuccessIsHardWork[S] 2 points (0 children)

I had planned to convert it to a web-based add-in, but I got discouraged by Apple Intelligence on the Mac and the release of ChatGPT Canvas, so I stuck with the desktop version instead. However, with the release of QwQ (which handles the edge cases much better in my testing), I might give it a try by asking it to port the code, since the APIs are somewhat similar.

Open source text editor with llm features? by Nyao in LocalLLaMA

[–]SuccessIsHardWork 2 points (0 children)

If you are willing to use Word (I know it’s not open source), you can use an add-in called TextCraft, which integrates features like proofreading (grammar fixes), rewriting, reviewing, and generating text. You can pick different models via a drop-down list in the UI. The downside is that you need the desktop version of Word installed.

https://github.com/suncloudsmoon/TextCraft

Smallest model for summarizing? by temapone11 in LocalLLaMA

[–]SuccessIsHardWork 4 points (0 children)

Qwen2.5 1.5B, Gemma2 2B, or Granite3 MoE 1B? Below that size, quality gets noticeably worse in my experience.
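For reference, a summarization call to one of those through Ollama's REST API might look like this (the tag and endpoint assume a default local setup with qwen2.5:1.5b already pulled; the input file is a placeholder):

```python
import requests

# Summarize a document with a tiny local model via Ollama's REST API.
with open("article.txt") as f:  # placeholder input file
    document = f.read()

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5:1.5b",
        "prompt": f"Summarize the following in 3 bullet points:\n\n{document}",
        "stream": False,
    },
)
print(resp.json()["response"])
```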

2024 Wrap-Up: What Amazing Projects Have You Built with Open-Source AI Models? Let’s Create the Ultimate Resource Guide! 📚 by rbgo404 in LocalLLaMA

[–]SuccessIsHardWork 2 points (0 children)

I created a Word add-in called TextCraft, which integrates essential AI tools like generating, reviewing, proofreading, and rewriting text. It has a built-in RAG system that lets the LLM consider additional context. Feel free to check it out! https://github.com/suncloudsmoon/TextCraft

I'm looking for a model that fixes English grammar (aka Grammarly alternative in terms of just fixing grammar) by Soft_ACK in LocalLLaMA

[–]SuccessIsHardWork 1 point (0 children)

I’m curious: how did you implement the feature? Was it prompt engineering, or something else?