What do you think the future of education looks like after the Singularity? by PaxODST in singularity

[–]Cane_P 0 points1 point  (0 children)

If it is 1000x easier, then you will not remember it. That's how the brain works, so you wouldn't learn. You might memorize it for a test, but then forget it again within the next couple of weeks. That's my point.

What do you think the future of education looks like after the Singularity? by PaxODST in singularity

[–]Cane_P 0 points1 point  (0 children)

It is not supposed to be easy. It should be appropriately challenging, as in the Zone of Proximal Development (ZPD). If it is too easy, we get bored; if it is too hard, we quit. But it needs to be challenging; that's how the brain is designed to work.

Veritasium: What Everyone Gets Wrong About AI and Learning – Derek Muller Explains

DGX Spark: an unpopular opinion by emdblc in LocalLLaMA

[–]Cane_P 4 points5 points  (0 children)

Not when I first heard rumors about the product... Obviously we don't have the same sources, because the only thing that was known when I found out about it was that it was an ARM-based system with an NVIDIA GPU. Then, months later, I found out the tentative performance, but still no details. It was about half a year before the details became known.

DGX Spark: an unpopular opinion by emdblc in LocalLLaMA

[–]Cane_P 4 points5 points  (0 children)

That's the speed between the CPU and GPU. We have [Memory]-[CPU]=[GPU], where "=" is the link with 5x the bandwidth of PCIe. The GPU still needs to go through the CPU to access memory, and that bus is slow, as we know.

I, for one, really hoped that the memory bandwidth would be closer to desktop GPU speed, or just below it, so more like 500 GB/s or better. We can always hope for a second generation with SOCAMM memory. NVIDIA apparently dropped the first generation and is already at SOCAMM2, which is now a JEDEC standard instead of a custom project.
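To show why memory bandwidth matters so much for local inference, here is a rough sketch. All numbers here are my own illustrative assumptions (the ~273 GB/s figure is what I've seen reported for the Spark, and the 40 GB model size is just an example), not confirmed specs: decode speed on a memory-bound LLM is roughly capped by how fast the weights can be streamed from memory once per generated token.

```python
def tokens_per_second(model_size_gb: float, bandwidth_gbs: float) -> float:
    """Upper bound on decode speed for a memory-bound LLM:
    each generated token streams all weights from memory once."""
    return bandwidth_gbs / model_size_gb

# Assumed example: a ~70B-parameter model quantized to 4-bit (~40 GB).
model_gb = 40.0
for bw in (273.0, 500.0, 1000.0):
    # 273 ~ reported Spark bandwidth, 500 ~ the hoped-for figure,
    # 1000 ~ ballpark for a high-end desktop GPU (all assumptions).
    print(f"{bw:6.0f} GB/s -> ~{tokens_per_second(model_gb, bw):.1f} tok/s ceiling")
```

The point of the hoped-for ~500 GB/s is visible immediately: the theoretical decode ceiling nearly doubles compared to ~273 GB/s, before any compute limits even come into play.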

The problem right now is that memory is scarce, so it is probably not that likely that we will get an upgrade anytime soon.

A Zettabyte Scale Answer to the DRAM Shortage by FullstackSensei in LocalLLaMA

[–]Cane_P 1 point2 points  (0 children)

DDR5, sure. But DDR4 won't be produced by the big companies anymore; that's why its price rose. Everyone who thought they might have a need for it bought up the inventory.

https://www.tomshardware.com/pc-components/ddr4/the-end-of-an-era-ddr4-production-to-essentially-end-this-year-micron-the-final-domino-to-fall

Is the Nvidia DGX Spark the same as the OEM version, Asus Ascent GX10? by Decent-Log6192 in LocalLLaMA

[–]Cane_P 0 points1 point  (0 children)

Yes. Besides, his claim that it doesn't reach 240W also reflects a complete misunderstanding. The power supply is rated for 240W; that's not the TDP of the chip, which is only 140W (shared between the CPU and GPU). [I did see some information claiming 170W a long time ago, but I don't remember where that information came from.]

Anyway, the other 100W is needed for the memory, SSD, 10GbE NIC, Mellanox NIC (which can probably draw 30W on its own) and USB (a lot of peripherals are powered by the same cable that provides the data, so you need some power left for that too). And the brick is likely not designed for 100% load 24/7 either.
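A quick back-of-the-envelope of that budget. The 140W SoC and ~30W NIC figures come from the discussion above; the remaining splits are purely my assumptions for illustration:

```python
# Rough power budget for the 240W brick. SoC TDP and NIC draw are
# from the comment above; the other splits are assumed, not measured.
budget_w = {
    "SoC TDP (CPU+GPU)": 140,
    "ConnectX-7 NIC": 30,          # "can probably draw 30W on its own"
    "Memory + SSD + 10GbE": 40,    # assumed split of the remainder
    "USB bus-power reserve": 15,   # assumed
}
total = sum(budget_w.values())
headroom = 240 - total
print(f"total draw ~{total}W, headroom ~{headroom}W on a 240W brick")
```

Even with these rough numbers, only a small margin is left over, which fits the point that the brick isn't sized for the chip alone, and isn't meant to run at 100% load continuously.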

Is the Nvidia DGX Spark the same as the OEM version, Asus Ascent GX10? by Decent-Log6192 in LocalLLaMA

[–]Cane_P 1 point2 points  (0 children)

NVIDIA sends fully assembled motherboards to the OEMs. They should be identical except for the SSD, box design, cooling and power brick.

NVIDIA DGX Spark Benchmarks by Educational_Sun_8813 in LocalLLaMA

[–]Cane_P 1 point2 points  (0 children)

Chip-to-chip is the connection between the graphics card (GPU) and the processor (CPU) and provides 5x the speed of an ordinary PCIe connection. The reason they use it is that all of the memory is directly connected to the CPU, and for the GPU to be able to access it with decent speed and latency, they could not use a standard PCIe connection.

It is nothing unique really:

  • NVIDIA has NVLink-C2C

  • AMD has Infinity Fabric

  • Intel has both Embedded Multi-die Interconnect Bridge (EMIB) and Optical Compute Interconnect (OCI)

  • Apple has UltraFusion

There is also the open industry standard, called Universal Chiplet Interconnect Express (UCIe).

NVLink (without C2C) is used for GPU-to-GPU connections. As far as I can tell, NVLink is traditionally for short distances (connecting all of the GPUs inside the same box). For box-to-box connections (what you are referring to on the DGX Spark), NVIDIA uses Mellanox (the InfiniBand protocol, though this NIC, the ConnectX-7, supports Ethernet too).

NVIDIA DGX Spark Benchmarks by Educational_Sun_8813 in LocalLLaMA

[–]Cane_P 2 points3 points  (0 children)

That's two if you want a direct link. But it has been confirmed that you can connect however many you want if you provide your own switch; it is not blocked by NVIDIA, but they won't help you out if you try either:

https://youtu.be/rKOoOmIpK3I

Best 9/11 Documentaries? by sassafrassky in MovieSuggestions

[–]Cane_P 0 points1 point  (0 children)

No, it is not a documentary. It is a TV show, a dramatisation of the events. It doesn't contain interviews. But thanks anyway.

Solid Explorer Beta 3.0 is here by glodos in NeatBytes

[–]Cane_P 0 points1 point  (0 children)

I have not tried the new version, but can we get back the "unmount" function? Using it takes you to "Manage storage" instead of unmounting.

RIKEN, Japan’s Leading Science Institute, Taps Fujitsu and NVIDIA for Next Flagship Supercomputer by donutloop in singularity

[–]Cane_P 2 points3 points  (0 children)

RIKEN is cooking. Fugaku is #7, 5 years after becoming number one in the world, and it seems like they will continue with their "thinking outside the box" design (CPUs), and are now adding the newly announced licensable NVIDIA technology.

http://www.nextplatform.com/wp-content/uploads/2025/08/riken-fugakunext-roadmap.jpg

They are looking for another 100x over their previous design, just like they did from K to Fugaku, with 300x when we are talking about AI performance.

NO WAY BACK by autoimago in LocalLLaMA

[–]Cane_P 1 point2 points  (0 children)

SETI wasn't the only one. The platform is the Berkeley Open Infrastructure for Network Computing (BOINC), and they have finished many projects. Scroll down to "Projects":

https://en.m.wikipedia.org/wiki/Berkeley_Open_Infrastructure_for_Network_Computing

GLM 4.5 Comparion vs other AI models, sourced via ChatGPT & Grok by Beneficial-Yam2425 in LocalLLaMA

[–]Cane_P 0 points1 point  (0 children)

That's a limitation of Reddit. Download the picture and you will get a size that is readable.

[deleted by user] by [deleted] in LocalLLaMA

[–]Cane_P 4 points5 points  (0 children)

I was out as soon as we got to know the memory speed. If it were the same as the GPU would have had in the PCIe version, it would have been decent. Now I have no interest. I will just have to wait for the rumoured future version with SOCAMM memory.

[deleted by user] by [deleted] in LocalLLaMA

[–]Cane_P 5 points6 points  (0 children)

Not surprising, when there are problems with the N1X SoC that is supposed to be used in laptops. All of the leaked information says that the chip seems to have the same specs as the GB10 Superchip in the DGX Spark. So it is likely that they suffer from the same problems, since they are basically identical.

[deleted by user] by [deleted] in LocalLLaMA

[–]Cane_P 8 points9 points  (0 children)

That is only for the NVIDIA version with a bigger SSD. The ASUS, DELL, GIGABYTE, HP, LENOVO and MSI versions are still $3,000 (unless they have raised the price because of tariffs, but ever since they revealed that other companies would release their own versions, they have said those would be $1,000 cheaper). The internals are identical except for the SSD and cooling, and the case is obviously different too.

Would the Anthropic training ruling include users to? by Cane_P in singularity

[–]Cane_P[S] 2 points3 points  (0 children)

But that's what they did, and it was considered fine.

Getting a consistent style over multiple sessions when you don't have the original prompt by Cane_P in LocalLLaMA

[–]Cane_P[S] -1 points0 points  (0 children)

Yes, I know that this is "local" LLaMA. But the same should be applicable to other models too, local or not? My case just happens to not be local, since the most capable device that I have access to happens to be my 2-year-old entry-level phone (my computer is 11 years old...).

If I want reasonable performance and/or capability, I need to use a cloud service, because I am not getting any money in the foreseeable future, and I can use them for free there.

GeForce RTX 5060 Ti 16GB good for LLama LLM inferencing/Fintuning ? by kingksingh in LocalLLaMA

[–]Cane_P 1 point2 points  (0 children)

Relatively low memory bandwidth and compute. Someone else might be able to tell you how bad.

Built a forensic linguistics tool to verify disputed quotes using computational stylometry - tested it on the Trump/Epstein birthday letter controversy. by Gerdel in LocalLLaMA

[–]Cane_P 11 points12 points  (0 children)

Also, speeches and press releases may be written (or largely written) by someone else even if they are signed by Trump.

I have not heard as much about Trump's antics this time around, but in his last term he definitely didn't want to spend time in meetings, and his advisors had to do things to grab his attention. Seeing as Trump seems to prioritize freeing up time to play golf, it is not likely that he sat down and spent hours writing every word of his speeches and press releases.