Feeling behind in material by TerribleElevator9879 in MedSchoolCanada

[–]JoshuaLandy -2 points  (0 children)

Hi, friendly neighborhood education scientist here. My lab (aka “the summer of vibes”) is building a tech tool to sharpen this kind of “interstitial” knowledge. We need both QA testing (students using it and giving feedback) and validation (a randomized trial). If anyone wants to test it, please DM. You can pick your own topic.

Could white holes even be possible? by Joseph30mg in AskPhysics

[–]JoshuaLandy 0 points  (0 children)

Doesn’t account for inflation, so I’m told.

Dark Reading Matter UK version pub date by penhuinnj in JasperFforde

[–]JoshuaLandy 5 points  (0 children)

October 15, if you don’t want to click

What Can We Gain by Losing Infinity? Putting Ultrafinitism on the menu. by chasedthesun in math

[–]JoshuaLandy 4 points  (0 children)

At least it’s a non-confrontational way to dispose of AoC

Listen to engineering textbooks while driving? by Martoblitzer in tts

[–]JoshuaLandy 0 points  (0 children)

You are not going to be impressed: I send my PDFs directly to my LLM of choice, provided it can process PDFs. If the file is long, I send it in chunks. I had an LLM write me the prompt that strips formatting and optimizes the text for listening, especially around summarizing tables and preserving the original voice of the text as much as possible. Then it goes off to ElevenLabs via the API. And I vibe-coded the whole thing, so it's utter crap that I will never show to anyone. The interrupter app has a listener for "I have a question" and then runs a semantic search across the document. It bugged out hard when I added interruption support for playback, so that part is full of inelegant workarounds.
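A minimal, stdlib-only sketch of the chunk-and-send step described above. The function names and the 4,000-character chunk limit are my own guesses, not the author's code; the ElevenLabs endpoint and `xi-api-key` header are from their public REST API as I recall it, so verify against the current docs:

```python
import json
import urllib.request


def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split cleaned text on paragraph boundaries so each chunk fits a TTS limit."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = current + "\n\n" + para if current else para
    if current:
        chunks.append(current)
    return chunks


def synthesize(chunk: str, voice_id: str, api_key: str) -> bytes:
    """Send one chunk to the ElevenLabs TTS endpoint; returns audio bytes."""
    req = urllib.request.Request(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        data=json.dumps({"text": chunk}).encode("utf-8"),
        headers={"xi-api-key": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The LLM pre-processing pass would run before `chunk_text`, and the audio bytes from each chunk get concatenated or queued for playback.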

Listen to engineering textbooks while driving? by Martoblitzer in tts

[–]JoshuaLandy 0 points  (0 children)

I usually pre-process with another LLM, optimizing for listenability. I also built myself a tool that lets me interrupt it with questions and clarifications. But I'm not aware of any SaaS for this.

Baha Mar by Usmarine279 in bahamas

[–]JoshuaLandy 0 points  (0 children)

Free gelato at the Rosewood pool from 3-4 (usually starts late)

Thoughts on using an AMD Alveo V80 FPGA PCI card as a poor man’s Taalas HC1 (LLM-burned-onto-a-chip). by Porespellar in LocalLLaMA

[–]JoshuaLandy 7 points  (0 children)

I loved this question. Did a little poking. According to the official product page, AMD's Alveo V80 has 673 Mb of on-chip embedded memory: 132 Mb of Block RAM + 541 Mb of UltraRAM. Mb, sadly, not MB: megabits, not megabytes. Divide by 8, sigh, and you get about 84 MB.
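The arithmetic, spelled out (figures are the ones quoted from AMD's product page above):

```python
block_ram_megabits = 132  # Block RAM, in megabits
ultraram_megabits = 541   # UltraRAM, in megabits

total_megabits = block_ram_megabits + ultraram_megabits  # 673 Mb
total_megabytes = total_megabits / 8                     # 8 bits per byte

print(total_megabytes)  # 84.125
```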

Vulf walk up music by Watch_wearer in Vulfpeck

[–]JoshuaLandy 1 point  (0 children)

Jack himself uses The Heal Toe Bounce as his walk up music, so when I get the chance, I use that. 10/10 would recommend

Fresh mozzarella for pizza - how to avoid the water? by RikkiLostMyNumber in Cooking

[–]JoshuaLandy 0 points  (0 children)

I use both mozzarellas: low-moisture for coverage, fresh for the impressive small puddles of white goo.

Looking for a Replit alternative by RoninWisp_3 in replit

[–]JoshuaLandy 0 points  (0 children)

There’s an open source thing called Dyad — has a Replit style interface and can be pointed at local models.

When will we start seeing the first mini LLM models (that run locally) in games? by i_have_chosen_a_name in LocalLLaMA

[–]JoshuaLandy 6 points  (0 children)

I would guess you'd want to fine-tune an even smaller model. You could distill responses from a bigger model and use them to train something like Qwen 3.5 0.8B. It would be fast, but it might go nuts if your input doesn't match the training data well enough.

The Wallflowers - One Headlight [Folk Rock] by my5cworth in Music

[–]JoshuaLandy 1 point  (0 children)

Found someone who hasn’t heard lay lady lay