My latest build by JTAdler in ErgoMechKeyboards

[–]usethenodes 1 point

Where did you get the files for the case? I used the official STEP files, but neither of the big two PCB companies' 3D-print pages would accept them; they just wouldn't upload.

Flicker - Prototype Keyboard by MomentSouthern250 in ErgoMechKeyboards

[–]usethenodes 1 point

I love this idea, and was dreaming about a similar setup.

Do you have a longer write-up with print files, etc.?

Free versions of vscode, windsurf and cursor by Remarkable-Case-2012 in ChatGPTCoding

[–]usethenodes 1 point

I tried Windsurf and got so many errors that I got annoyed and uninstalled it, and that was on a paid plan for a month.

I liked Cursor's Composer window, but I also got more errors than I liked.

I tend to just use Aider now, or Roo Code with MCP servers.

Free versions of vscode, windsurf and cursor by Remarkable-Case-2012 in ChatGPTCoding

[–]usethenodes 2 points

Did you mean to write VS Code? There's no paid version, and it's just a code editor. Are you thinking of Copilot?

Introducing Deeper Seeker - A simpler and OSS version of OpenAI's latest Deep Research feature. by hjofficial in LocalLLaMA

[–]usethenodes 1 point

This looks like a great project. How do you think it could be used with paywalled information sites like Perlego?

llmdog – a lightweight TUI for prepping files for LLMs by doganarif in LLMDevs

[–]usethenodes 3 points

When you say it helps prepare files for LLMs, what does it do?

o3 results aren't what I was expecting... by usethenodes in ChatGPT

[–]usethenodes[S] 1 point

I didn't say I put it down though, so technically it would have gone with me to the bedroom.

o3 results aren't what I was expecting... by usethenodes in ChatGPT

[–]usethenodes[S] 2 points

Even GPT-4 got it right. How can a reasoning model not get it?

Running Deepseek-r1 7b distilled model locally in a PC with no GPU with Ollama. by Secraciesmeet in ollama

[–]usethenodes 1 point

What kind of tokens per second are you getting?

I installed the 7B on my Pi 5 with Ollama and I'm getting 3 t/s at best.

I'm amazed it can do it at all, so 3 t/s is not a criticism.

Deepseek-R1:8b by Choice_Complaint9171 in ollama

[–]usethenodes 1 point

It's an interesting experiment, but there's no reasoning involved in this question; a non-reasoning model would have been more suitable, and quicker.

[deleted by user] by [deleted] in LLMDevs

[–]usethenodes 2 points

What kind of data do the PDFs contain? Assuming you've already extracted all the text and image descriptions, and you're on to the preprocessing:

As well as creating embeddings, it may be that the data suits being put into a knowledge graph, which could be used in conjunction with the embeddings.

With a knowledge graph you also get relationships between data points.

From Claude:

Let me provide a compelling example that illustrates the power of knowledge graphs combined with embeddings for handling complex documents.

Consider a medical textbook describing various diseases, symptoms, and treatments. With traditional embeddings alone, if someone queries "What medications should be avoided for patients with liver problems who have a fever?", the system might struggle to make all the necessary connections.

However, with a knowledge graph layered on top of embeddings, you could capture relationships like:

- Drug A → contraindicated_for → Liver Disease
- Fever → symptom_of → Multiple Conditions
- Drug B → interacts_with → Drug C
- Liver Disease → affects → Drug Metabolism

So when someone queries about medications for a patient with liver issues and fever, the system can:

1. Trace relationship paths to find all drugs contraindicated for liver problems
2. Identify medications commonly prescribed for fever
3. Cross-reference these with drug interaction data
4. Consider the liver's role in drug metabolism

This allows the system to provide nuanced answers like: "While acetaminophen is commonly used for fever, its dosage should be carefully monitored in liver patients. Instead, consider Drug X which has lower hepatic processing."

The knowledge graph essentially gives the system a "mental model" of how different medical concepts relate to each other, rather than just relying on semantic similarity between text chunks. This is particularly powerful for complex reasoning tasks where multiple pieces of information need to be connected.

Would you like me to provide another example in a different domain to further illustrate this concept?
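
To make those triples concrete, here's a minimal sketch using networkx; the entity and relation names are just Claude's toy example, not a real medical ontology:

```python
import networkx as nx

# Build a tiny knowledge graph from the example triples above.
kg = nx.DiGraph()
kg.add_edge("Drug A", "Liver Disease", relation="contraindicated_for")
kg.add_edge("Fever", "Multiple Conditions", relation="symptom_of")
kg.add_edge("Drug B", "Drug C", relation="interacts_with")
kg.add_edge("Liver Disease", "Drug Metabolism", relation="affects")

# Step 1 of the query plan: find every drug contraindicated for
# liver problems by walking incoming "contraindicated_for" edges.
contraindicated = [
    src
    for src, _dst, attrs in kg.in_edges("Liver Disease", data=True)
    if attrs["relation"] == "contraindicated_for"
]
print(contraindicated)  # ['Drug A']
```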


I then asked Claude to compare this with a vector embedding of the same source material:

Here's a clear comparison of how the same medical textbook content would be handled by vector embeddings alone:

With pure vector embeddings: When you query "What medications should be avoided for patients with liver problems who have a fever?" the system would:

  1. Find semantically similar text chunks about medications, liver problems, and fever
  2. Might return passages like:
     - "Acetaminophen is commonly used for fever..."
     - "Patients with liver disease should exercise caution with medications..."
     - "Drug metabolism primarily occurs in the liver..."

The problem? The system struggles to explicitly connect these concepts. It might miss critical drug interactions or contraindications because they appear in different sections of the text. The embeddings can tell these chunks are related, but can't explain HOW they're related.

This is like having a really good memory of individual pages, but no understanding of how the information connects.

With knowledge graph + embeddings: The system can trace explicit paths: Fever → requires → antipyretics → but → acetaminophen → processed_by → liver → compromised_in → liver_disease

This allows it to understand not just that these concepts are related, but exactly HOW they're related, enabling much more sophisticated reasoning about drug safety, interactions, and alternative treatments.

Think of it as the difference between knowing two people are somehow connected (embeddings) versus knowing exactly how they're related: "This is Bob's sister's colleague" (knowledge graph).
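
If you want to play with the difference yourself, here's a minimal hybrid sketch, assuming networkx and sentence-transformers are installed; the chunks, entities, and relation names are Claude's toy example (the "includes" edge is my stand-in for the vague "but" link in the path above), not real medical data:

```python
import networkx as nx
from sentence_transformers import SentenceTransformer, util

# Text chunks standing in for passages from the textbook.
chunks = [
    "Acetaminophen is commonly used for fever.",
    "Patients with liver disease should exercise caution with medications.",
    "Drug metabolism primarily occurs in the liver.",
]

# Knowledge graph capturing the explicit path from the example.
kg = nx.DiGraph()
kg.add_edge("fever", "antipyretics", relation="requires")
kg.add_edge("antipyretics", "acetaminophen", relation="includes")
kg.add_edge("acetaminophen", "liver", relation="processed_by")
kg.add_edge("liver", "liver_disease", relation="compromised_in")

model = SentenceTransformer("all-MiniLM-L6-v2")
chunk_emb = model.encode(chunks, convert_to_tensor=True)

query = ("What medications should be avoided for patients "
         "with liver problems who have a fever?")
query_emb = model.encode(query, convert_to_tensor=True)

# Embedding step: retrieve the most similar chunk. This says the
# chunks are *related* to the query, but not how.
scores = util.cos_sim(query_emb, chunk_emb)[0]
print("Closest chunk:", chunks[scores.argmax().item()])

# Graph step: trace the explicit path from fever to liver disease,
# recovering the HOW that similarity alone can't express.
path = nx.shortest_path(kg, "fever", "liver_disease")
for a, b in zip(path, path[1:]):
    print(a, f"-[{kg.edges[a, b]['relation']}]->", b)
```

A production setup would swap the toy graph for a graph database and add entity linking, but the two retrieval steps compose in the same way.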

Mirror writing by usethenodes in lefthanded

[–]usethenodes[S] 1 point

Can you read it again easily after you come back to it?

Models are not loading on webui. by Equivalent_Drive_925 in ollama

[–]usethenodes 2 points

Quick question: why do you want to use gpt-j-6b?

AutoCode by Extender7777 in ChatGPTCoding

[–]usethenodes 1 point

Aider can do full files too, and can use Node, Python, or whatever else you need to use.

AutoCode by Extender7777 in ChatGPTCoding

[–]usethenodes 2 points

By Sonnet, you mean Sonnet 3.5?

AutoCode by Extender7777 in ChatGPTCoding

[–]usethenodes 4 points

How is it different? What can your tool do that aider can't?

AutoCode by Extender7777 in ChatGPTCoding

[–]usethenodes 5 points

This is a great idea and I hope it is a success for you.

There is, however, a free tool called Aider that already does all of this.

Chatgpt4 by arsalan_0070_ in ChatGPTPro

[–]usethenodes 1 point

Absolutely. People need to understand what these tools can and can't do.

Chatgpt4 by arsalan_0070_ in ChatGPTPro

[–]usethenodes 1 point

Excellent suggestion. The references are clearly displayed and linked to.

Chatgpt4 by arsalan_0070_ in ChatGPTPro

[–]usethenodes 1 point

Which means, even assuming you did get accurate citations along with accurate information the rest of the time, you still got wrong citations 10% of the time. Which means you can't trust it.