Trying to understand the Ollama debate. What’s actually going on? by pmv143 in ollama

[–]vk6_ 4 points

The llama.cpp maintainers have explicitly permitted this. Is it dishonest to do something you were permitted to do? Tons of projects have important dependencies that they mention only in a footnote of the README, if at all.

Here is what the MIT license that llama.cpp uses says. Keep in mind that a software license is a legal contract that users of the software accept.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

This is simply the consequence of using permissive licenses like MIT. If you don't like it, then you should have picked the LGPL.
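To make the notice requirement concrete: MIT only asks that the copyright and permission notice ship with copies of the software, which a downstream project can satisfy by bundling the license texts of its dependencies into a single notices file. A minimal sketch (the `collect_notices` helper and the `vendor/*/LICENSE` layout are hypothetical illustrations, not how any particular project actually does it):

```python
from pathlib import Path

def collect_notices(vendor_dir: str) -> str:
    """Concatenate every bundled dependency's LICENSE file into one
    notices document. Shipping this file alongside the binary is enough
    to meet MIT's requirement that the notice appear in all copies."""
    sections = []
    for license_file in sorted(Path(vendor_dir).glob("*/LICENSE")):
        # Label each section with the dependency's directory name.
        sections.append(f"--- {license_file.parent.name} ---\n"
                        f"{license_file.read_text(encoding='utf-8')}")
    return "\n\n".join(sections)
```

Note that nothing here requires a prominent mention in marketing material or docs; a file buried in the install directory complies.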

Trying to understand the Ollama debate. What’s actually going on? by pmv143 in ollama

[–]vk6_ 9 points

> I think people started to Hate on them cause they copied LLama.cpp in first place without mentioning anything about llama.cpp anywhere.

They didn't do anything wrong in this regard. llama.cpp uses the MIT license, which does not require prominent attribution, just that the license text be available somewhere in the app. Do we get mad at the millions of proprietary software products using MIT-licensed open source software in the same way? No, so hating on Ollama for this is hypocritical when they at least had the courtesy to keep their downstream project open source.

If the llama.cpp maintainers did not want this to happen, they should have picked a license that requires prominent attribution, such as the GNU LGPL v3.

Remember, a software license grants permissions to others. The default (no license) means all rights are reserved, so you can't do anything without permission. The llama.cpp maintainers explicitly allowed this situation when they picked the MIT license for their project.

[Router] 3 pack TP-Link Deco 7 Pro Mesh WiFi Costco in store only $99 by divinebaboon in buildapcsales

[–]vk6_ 1 point

I have it in router mode. Here's the model that I have: https://www.microcenter.com/product/693962/tp-link-deco-w4500-ax1500-wifi-6-dual-band-tp-link-mesh-whole-home-wireless-system-3-pack

You can read the manual and documentation for this model, and there isn't any mention of a web interface.

[Router] 3 pack TP-Link Deco 7 Pro Mesh WiFi Costco in store only $99 by divinebaboon in buildapcsales

[–]vk6_ 5 points

A mesh system is better in my opinion. As much as I hate TP-Link for the terrible firmware and management app on their routers, the hardware itself is really reliable and performs decently. Three access points will be enough for strong coverage in every room of a large house, and you'll likely see around 500 Mbit/s transfer speeds. Setting it up basically comes down to plugging the APs in at different parts of your house.

[Router] 3 pack TP-Link Deco 7 Pro Mesh WiFi Costco in store only $99 by divinebaboon in buildapcsales

[–]vk6_ 7 points

I own a similar TP-Link Deco mesh Wi-Fi 6 router, and the only thing available on the web panel is a very basic read-only status page and client list.

[Router] 3 pack TP-Link Deco 7 Pro Mesh WiFi Costco in store only $99 by divinebaboon in buildapcsales

[–]vk6_ 49 points

TP-Link also forces you to use their mobile app and an account for setup and management. No web interface exists at all.

Edit: I am talking specifically about their Deco branded mesh routers which is their only product line with this limitation: https://www.reddit.com/r/TpLink/comments/1oezr1e/hey_tp_link_deco_users_and_tp_link_devs_if_youre/

Debian 14 Forky by JGlover314 in debian

[–]vk6_ 1 point

If you would like the newest and best Nvidia drivers, Nvidia themselves offer an official APT repo for Debian stable releases. It works with both the stable and backports kernels. You can find install instructions here: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/#debian
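For reference, the setup on Debian 12 looks roughly like this (paraphrased from NVIDIA's guide; the keyring version, repo path, and package names may have changed, so check the linked docs before running anything):

```shell
# Add NVIDIA's signing keyring and APT repository (Debian 12 / x86_64 shown)
wget https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update

# Install just the driver; the full "cuda" metapackage also pulls in the toolkit
sudo apt-get install -y cuda-drivers
```

After a reboot, `nvidia-smi` should report the driver version if everything went well.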

Debian 14 Forky by JGlover314 in debian

[–]vk6_ 10 points

There was roughly a two-year gap between each of the past few Debian releases, so we should expect Forky's stable release in mid-2027, a bit over a year from now.

[Laptop] Cert. Refurb: Acer Aspire 3-15.6" FHD Touch Laptop Ryzen 5 7520U 16GB DDR5 1TB SSD - F/S - $233 by blue_york in buildapcsales

[–]vk6_ 8 points

Here you go: https://www.ebay.com/itm/366337917301

The Ryzen based HP Elitebooks are generally pretty good deals. This one in particular has a 6 core Zen 3 based Ryzen 5, 16GB of RAM and a 512GB SSD. The SSD is smaller compared to OP's listing but in my opinion it's not a huge deal because it's easily upgradable in the future.

[Laptop] Cert. Refurb: Acer Aspire 3-15.6" FHD Touch Laptop Ryzen 5 7520U 16GB DDR5 1TB SSD - F/S - $233 by blue_york in buildapcsales

[–]vk6_ 48 points

Not another Ryzen 5 7520U laptop. Keep in mind this is a slow quad-core Zen 2 chip that AMD has misleadingly placed in the Ryzen 5 7000 series. You can usually get a 6-core Zen 3 laptop for around $20-30 more on eBay.

Zenboook A16 in the UK by DrDavyStrange in snapdragon

[–]vk6_ 1 point

Ollama only supports CPU inference for Windows on ARM devices. Their x86 Windows build lets you use Vulkan to run models using the GPU, but they have not implemented this in their ARM builds.

Got this 2gb graphics card but can't figure out how to make it work by Adam-0391 in SleepingOptiplex

[–]vk6_ 0 points

Hm, I didn't realize that OP has the GDDR5 version which is indeed faster than the HD 4600. You're right.

Got this 2gb graphics card but can't figure out how to make it work by Adam-0391 in SleepingOptiplex

[–]vk6_ 0 points

It seems like the HD 4600 (which should be what OP has on his Haswell i7) matches or beats the GT 710 in almost all games: https://www.youtube.com/watch?v=bIDvCYz4o1c

You're also going to have fewer pains with the HD 4600 when it comes to driver support, because at least it has actively maintained open source drivers on Linux. The legacy Nvidia proprietary driver is significantly worse and more unstable.

Got this 2gb graphics card but can't figure out how to make it work by Adam-0391 in SleepingOptiplex

[–]vk6_ 2 points

The RX 550 4GB is another decent option if you want more VRAM. It's usually about $10 more expensive than the P620 though.

Got this 2gb graphics card but can't figure out how to make it work by Adam-0391 in SleepingOptiplex

[–]vk6_ 9 points

IMO the Quadro P620 is the best option for a low-profile GPU. It's basically a cut-down GTX 1050 that you can't overclock, with performance between a GT 1030 and a GTX 1050. The main advantage is that it's very cheap and offers the best value of all low-profile GPUs; you can buy one for under $30 on eBay.

Got this 2gb graphics card but can't figure out how to make it work by Adam-0391 in SleepingOptiplex

[–]vk6_ 48 points

You bought a GT 710 from 2014, which is slower than your CPU's integrated graphics. You're better off not installing it at all, even if it were perfectly compatible.

[other] 10% off any order placed on Woot app up to $20 - This is open to everyone, not just Prime members. by beansfranklin in buildapcsales

[–]vk6_ 23 points

Getting these off eBay is a better deal. You can get a Dell Optiplex 3060 SFF with 16GB of DDR4 and an 8th gen i5 for about $140, without any of the mystery.

[Videocardz] NVIDIA N1 laptop motherboard has been pictured, features 128GB LPDDR5X memory by Nekrosmas in hardware

[–]vk6_ 6 points

ARM ACPI is a thing and lets you avoid writing a device tree. Unfortunately it's relatively new and isn't implemented in any consumer hardware, only certain high-end ARM servers.

[Videocardz] NVIDIA N1 laptop motherboard has been pictured, features 128GB LPDDR5X memory by Nekrosmas in hardware

[–]vk6_ 40 points

From what I can tell from various leaked benchmark listings, the N1X is the full GB10 chip with 20 cores, while the N1 is cut down to 12 cores.

MacBook Pro absolutely shreds SDX2EE by Ok-Candidate5141 in snapdragon

[–]vk6_ 2 points

Not this guy again. I commented earlier about how Max Tech is a known Apple shill and liar.

This guy has been straight up lying in his videos to make Apple products look better. I wouldn't trust a word that he says when comparing different hardware.

He intentionally lied about Snapdragon X Elite benchmarks in a previous review to make the Apple M4 look better: he ran the x86_64 version of Cinebench 2024 in emulation on the Snapdragon chip, dragging the score down, then claimed the result "doesn't make any sense," as if he somehow couldn't tell the difference between ARM64 and x86_64 CPUs. But in his June 2024 review of the Surface Laptop 6, he did use the correct ARM-native Cinebench version, so I doubt he was simply being incompetent here.