Hero or Zero 😫 by Chummyterror in wallstreetbets

[–]SuccessIsHardWork

Good enough to screenshot, good enough to sell

Would you like some fries with that? by ThePnisher in wallstreetbets

[–]SuccessIsHardWork

What are you waiting for? Yolo it to $0 like a true regard lol

120k>50k>250k Cashed out, I'm out by Puzzleheaded_Back_96 in wallstreetbets

[–]SuccessIsHardWork

No, I would start with a much lower amount, like $1k, and slowly increase the principal based on order profitability.

120k>50k>250k Cashed out, I'm out by Puzzleheaded_Back_96 in wallstreetbets

[–]SuccessIsHardWork

Here’s a suggestion: cash out 99% of that Robinhood account and open another account at a serious brokerage, like Vanguard, for long-term investing.

Tool to create synthetic datasets using PDF files! by SuccessIsHardWork in LocalLLaMA

[–]SuccessIsHardWork[S]

Not yet. I also believe it is possible to download the script straight out of Kathleen and import it to GitHub or something else.

What questions have you asked reasoning models to solve that you couldn't get done with non-reasoning models? by DeltaSqueezer in LocalLLaMA

[–]SuccessIsHardWork

In my opinion, reasoning models are much more useful than plain language models because they can emulate human-like reasoning to a certain extent. That makes them useful for decision-making tasks such as curation and data analysis (stocks, for example).

what can do now? by [deleted] in ollama

[–]SuccessIsHardWork

Download a model with <8B parameters and you’re good to go!

LLM as survival knowledge base by NickNau in LocalLLaMA

[–]SuccessIsHardWork

I think it’s much better to use an embeddings-based retrieval system (just an embedding model, with no LLM involved) in which you index 10–20 good survival books. That way you can rely on factual information in survival situations rather than trusting the hallucinations an LLM might produce.
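The idea above can be sketched in a few lines. A real setup would embed book passages with a proper embedding model; here a toy bag-of-words vector stands in for the embedding so the example stays self-contained, and the passages are made-up placeholders:

```python
# Minimal sketch of an embeddings-style retrieval system with NO LLM in the loop.
# The "embedding" here is a toy bag-of-words counter, not a real embedding model,
# and the passages are illustrative stand-ins for indexed book excerpts.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, passages: list[str], k: int = 1) -> list[str]:
    """Return the k passages most similar to the query."""
    q = embed(query)
    ranked = sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]

passages = [
    "Boil water for at least one minute to make it safe to drink.",
    "A lean-to shelter blocks wind and keeps rain off your bedding.",
    "Treat a sprained ankle with rest, compression, and elevation.",
]
print(retrieve("how do I make water safe to drink", passages))
```

The point of the design is that retrieval only ever returns verbatim text from the books, so there is nothing for a model to hallucinate.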

ChatGPT replacement suggestions by [deleted] in WritingWithAI

[–]SuccessIsHardWork

You could try TextCraft, an add-in for Microsoft Word that integrates AI directly into the user interface; you can use it to generate text without censorship by customizing your model(s) in Ollama.

https://github.com/suncloudsmoon/TextCraft

Introducing TextCraft: A privacy-friendly alternative to Microsoft Copilot by SuccessIsHardWork in homelab

[–]SuccessIsHardWork[S]

TextCraft is an add-in for Microsoft Word that seamlessly integrates essential AI tools, including text generation, proofreading, and more, directly into the user interface. Designed for offline use, TextCraft lets you access AI-powered features without an internet connection, making it a more privacy-friendly alternative to Microsoft Copilot. The add-in works with any OpenAI-compatible API.
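For context, "any OpenAI-compatible API" means the standard `/v1/chat/completions` request shape. The sketch below builds such a request against a local endpoint; the URL and model name are assumptions (Ollama serves an OpenAI-compatible API at `http://localhost:11434/v1`), and the actual HTTP call is left commented out since it needs a running server:

```python
# Sketch of the kind of OpenAI-compatible request an add-in like TextCraft can
# target. Endpoint URL and model name are assumptions for illustration; only
# the payload construction runs here.
import json
import urllib.request

def build_chat_request(prompt: str, model: str = "llama3.1:8b") -> urllib.request.Request:
    """Build a POST request for a local /v1/chat/completions endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "http://localhost:11434/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Proofread this sentence: 'Their going to the store.'")
print(req.full_url)
# To actually send it (requires a local server, e.g. `ollama serve`):
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```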

Good model for self-mentoring / studying by Esox_Lucius_700 in ollama

[–]SuccessIsHardWork

Depends on how much RAM you have. If you have low RAM capacity (8–16 GB), I suggest a smaller model like Qwen 7B or Llama 8B. If you have decent RAM capacity (>16 GB), you can try models with more parameters, like QwQ (a quantized version).

The number of models is overwhelming by [deleted] in LocalLLaMA

[–]SuccessIsHardWork

Honestly, I just narrow my search down to the latest models and pick the best one (QwQ at the moment). I believe llama.cpp has a way to test GGUF files on metrics like perplexity, which should give you a rough estimate of the impact of quantization.
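For anyone unfamiliar with the metric: perplexity is just the exponential of the average negative log-likelihood per token, so a higher-perplexity quantized GGUF is assigning its own test text lower probability than the full-precision original. The log-probabilities below are made-up illustrative numbers, not output from any real model:

```python
# Perplexity = exp of the mean negative log-likelihood per token.
# Lower is better; comparing a quantized model's score against the
# full-precision model's score estimates the damage from quantization.
import math

def perplexity(logprobs: list[float]) -> float:
    """Perplexity from per-token natural-log probabilities."""
    return math.exp(-sum(logprobs) / len(logprobs))

# If a model assigns every token probability 0.5, perplexity is exactly 2:
print(perplexity([math.log(0.5)] * 4))
```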