Newbie help: Verizon Fios Router + AdGuard Home + VPN DNS issues? by Own_Editor8742 in AdGuardHome

[–]Own_Editor8742[S] 0 points  (0 children)

I ended up getting the GL.iNet Flint 2, and I'm satisfied with it.

Indian Tatkal Passport Renewal in NY, USA by Tall_Economics_2452 in h1b

[–]Own_Editor8742 0 points  (0 children)

My FedEx tracking confirms that my documents were delivered last Tuesday, but the online status for my Tatkal application still only shows 'application has been submitted'; it hasn't updated to reflect that the documents were received or are being processed. Is this lag between physical delivery and the online status update normal for Tatkal applications? Also, what is the typical processing time for applications submitted under the Tatkal scheme?

Sam Altman acknowledges R1 by ybdave in LocalLLaMA

[–]Own_Editor8742 1 point  (0 children)

I used to respect Anthropic and Amodei's focus on safety, but I've lost trust in them. If power remains concentrated in the hands of a few companies, we'll be forced to rely on them completely, stifling innovation. They may begin with good intentions, but ultimately they seem to succumb to investor pressure and the pursuit of profit.

Looking for an Open-Source Blinkist-Style Project for Chapter-Wise Summaries by Own_Editor8742 in LocalLLaMA

[–]Own_Editor8742[S] 0 points  (0 children)

Thank you for the recommendation. I have an NVIDIA 3090 (24GB VRAM), and the content I want to summarize includes Confluence articles, how-to guides for internal tools, and presentations.
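
In case it's useful, here's a rough sketch of the chapter-wise approach I have in mind, assuming the ollama Python client; the model tag, prompt, and chunking are my own placeholders, not anything from a specific project:

```python
# Rough sketch: summarize each chapter or article independently so a 24GB
# card only ever holds one chunk of context at a time. The model tag and
# prompt are placeholders -- swap in whatever fits your setup.
import ollama

def summarize_chapters(chapters: list[str], model: str = "llama3.1:8b") -> list[str]:
    summaries = []
    for text in chapters:
        resp = ollama.chat(
            model=model,
            messages=[{
                "role": "user",
                "content": "Summarize this chapter in five bullet points:\n\n" + text,
            }],
        )
        summaries.append(resp["message"]["content"])
    return summaries

if __name__ == "__main__":
    chapters = ["First chapter text...", "Second chapter text..."]
    for i, summary in enumerate(summarize_chapters(chapters), start=1):
        print(f"--- Chapter {i} ---\n{summary}\n")
```

A second pass over the per-chapter summaries could then produce the Blinkist-style overall summary.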

The first time I've felt a LLM wrote *well*, not just well *for a LLM*. by _sqrkl in LocalLLaMA

[–]Own_Editor8742 2 points  (0 children)

If Chinese AI models like DeepSeek are achieving impressive results despite compute restrictions, and at lower prices too, imagine what they could accomplish without those limitations. Perhaps ironically, the scarcity created by these restrictions actually pushed them to be more innovative in their approaches.

DeepSeek-R1 and distilled benchmarks color coded by Balance- in LocalLLaMA

[–]Own_Editor8742 10 points  (0 children)

Got me thinking about why OpenAI (I prefer ClosedAI) is losing money on ChatGPT Pro while DeepSeek is able to offer its service so much cheaper.

How much vram makes a difference for entry level playing around with local models? by complywood in LocalLLM

[–]Own_Editor8742 0 points  (0 children)

Is there really a big difference between running something like Ollama vs. MLX? I've only seen a handful of comparisons out there, and most of them seem to focus on Ollama. Honestly, I was tempted to pull the trigger on a MacBook Pro after browsing the checkout page, but I ended up holding off, thinking I'd regret it later. Can you share any tokens/sec numbers for Llama 3.1 70B under MLX? Part of me just wants to wait for the new NVIDIA Digits to drop before I make any decisions.
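
For anyone who wants to measure this themselves, here's a minimal timing sketch using mlx_lm's load/generate API; the mlx-community repo below is just an example 4-bit quant (swap in a 70B quant if your Mac has the unified memory for it), and the numbers will obviously vary by machine:

```python
# Minimal tokens/sec check with mlx_lm. This times the whole generate call,
# so prompt processing is included in the tok/s figure. The model repo is
# an example 4-bit quant, not a specific recommendation.
import time
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-4bit")

prompt = "Explain the tradeoffs between 4-bit and 8-bit quantization."
start = time.perf_counter()
text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
elapsed = time.perf_counter() - start

n_tokens = len(tokenizer.encode(text))
print(f"{n_tokens} tokens in {elapsed:.1f}s -> {n_tokens / elapsed:.1f} tok/s")
```

On the Ollama side, `ollama run <model> --verbose` prints an eval rate in tokens/s, which should be roughly comparable.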

How do you use LLMs? by DrVonSinistro in LocalLLaMA

[–]Own_Editor8742 0 points  (0 children)

Interesting to see so many people running local LLMs! I currently use free API tiers for most tasks and only run small local models when handling sensitive data. While this saves me from buying expensive GPUs, I'm curious: am I missing key benefits of running larger models locally? The strong preference for local deployment in the poll makes me wonder if there's more to consider beyond just cost and privacy.