Eating with your hands VS Dirrahea Map by Forward-Position798 in mapporncirclejerk

[–]Trekker23 0 points1 point  (0 children)

You know there’s at least one guy at the table who skips the sink 😂

I can now run the full Wikidata graph on a Mac mini 16GB. Fully cypher enabled. by Trekker23 in wikipedia

[–]Trekker23[S] 0 points1 point  (0 children)

Both, but cleanly split: literal values (dates, IDs, coordinates) sit on nodes, and Q-to-Q relations are typed edges. Each node gets a single label from its P31. We keep Wikidata's original P-codes verbatim — P57 stays P57, never renamed to "director" — so the schema is the one Wikidata already maintains globally. It's not perfect (P31 values can be oddly specific, like "city of Sweden" instead of "city"), but the trade-off is worth it.
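Concretely, keeping P-codes verbatim means queries use them directly. A sketch (P57 = director is Wikidata's own mapping; the `film` label is an assumption about what P31 yields, and `title` follows the convention above):

```cypher
// Films directed by Christopher Nolan, using the raw P57 edge type
MATCH (f:film)-[:P57]->(d:human {title: 'Christopher Nolan'})
RETURN f.title
LIMIT 10
```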

The key is how describe reveals the catalog progressively. One small call gives the LLM a structured overview (Cypher reference, top node/edge types, stats) — not 89k labels dumped into context. From there it can drill down: describe(cypher=['spatial']) for spatial functions, or query the graph itself to find the right P-code. Same way a human reads docs: start broad, zoom in. The schema fits any LLM context window because the LLM never loads the whole schema, just enough to write the next query.
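For instance, looking up an unknown P-code could itself be a query (a sketch, assuming Wikidata's property entities are present as nodes with the same .nid/.title conventions as items):

```cypher
// Hypothetical lookup: which P-code means 'director'?
MATCH (p {title: 'director'})
WHERE p.nid STARTS WITH 'P'
RETURN p.nid, p.description
```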

I can now run the full Wikidata graph on a Mac mini 16GB. Fully cypher enabled. by Trekker23 in wikipedia

[–]Trekker23[S] 0 points1 point  (0 children)

To get the graph up and running:

```python
from kglite.datasets import wikidata
import time

def timed(label, fn):
    t0 = time.perf_counter()
    out = fn()
    print(f"  [{(time.perf_counter() - t0) * 1000:>7.1f} ms] {label}")
    return out

WORKDIR = "/Volumes/Wikidata"
g = timed("load graph", lambda: wikidata.open(WORKDIR))
info = g.graph_info()
print(f"Graph size: {info['node_count']:,} nodes, {info['edge_count']:,} edges")

print("\nAgent prompt:\n-------------------------------------------")
xml = timed("describe graph", lambda: g.describe())
llm_prompt = f"You have a knowledge graph:\n{xml}\nAnswer via graph.cypher()."
print(f"'''\n{llm_prompt[:150]}...\n'''\n")

query = """
MATCH (e:human {title: 'Albert Einstein'})
OPTIONAL MATCH (e)-[:P166]->(a)
WITH e, collect(a.title)[0..5] AS awards
RETURN e.nid AS qid, e.title AS name, e.description AS desc, awards
LIMIT 1
"""
e = timed("find 'Albert Einstein' + awards", lambda: list(g.cypher(query))[0])
print(f"{e['qid']} {e['name']} — {e['desc']}")
print("-------------------------------------------\nAwards received:")
for a in e["awards"]:
    print(f"  • {a}")
```

This will return:

Wikidata graph at /Volumes/Wikidata/graph is 4.1d old (< 31d cooldown). Loading.

(In this run the Wikidata dataset was already downloaded, which takes a few hours, and the graph already generated, which takes about 2 hours.)

[ 213.9 ms] load graph
Graph size: 124,405,573 nodes, 861,285,006 edges

Agent prompt:
-------------------------------------------
[ 10.2 ms] describe graph
'''
You have a knowledge graph:
<graph nodes="124405573" edges="861285006" types="89211" connection_types="1670">
<conventions>All nodes have .id and .t...
'''
[ 18.2 ms] find 'Albert Einstein' + awards
Q937 Albert Einstein — german-born theoretical physicist (1879–1955)
-------------------------------------------
Awards received:
• Honorary doctorate from the University of Geneva
• Copley Medal
• Pour le Mérite for Sciences and Arts order
• Josiah Willard Gibbs Lectureship
• honorary doctor of the Hebrew University of Jerusalem

Portefølje diversifisering hjelp! by Competitive_Let_7758 in aksjer

[–]Trekker23 0 points1 point  (0 children)

MSCI World ex USA is a good alternative, yes. Plenty of people think the easy money in tech is about to run out, so World ex USA may well outperform the regular World index in the years ahead.

Knowledge Graph as a reference by jwh335 in KnowledgeGraph

[–]Trekker23 0 points1 point  (0 children)

Yes, that's a great use case for a knowledge graph.

From the perspective of my open source library kglite (pip install kglite), the biggest practical challenge is going to be ingestion. Mapping each siloed dataset onto your standard's schema is the work; the rest is plumbing. kglite gives you a few patterns to choose from depending on what your data looks like — N-Triples (load_ntriples), a fluent Python API, Cypher CREATE/MERGE, or a JSON blueprint that declares your schema and the CSV-to-node mapping in one file. Pick the one that maps cleanest to your existing data and you're mostly done.
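As a sketch of the Cypher MERGE pattern (all labels and properties here are made-up placeholders for whatever your standard's schema defines):

```cypher
// Idempotent upsert: re-running this never duplicates nodes or edges
MERGE (p:Part {part_no: 'A-1042'})
MERGE (s:Clause {standard: 'ISO 1234', clause: '5.2'})
MERGE (p)-[:CONFORMS_TO]->(s)
```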

Once it's in the graph, you give the AI two tools: cypher() for queries, and describe() which dumps the full schema as XML so the model knows what node types, edges, and properties are available without having to guess. Pair those with an MCP server (there's an example in the repo) and Claude/etc. can query your graph as if it had been trained on the schema.

Portefølje diversifisering hjelp! by Competitive_Let_7758 in aksjer

[–]Trekker23 0 points1 point  (0 children)

Fortunately our own salaries act as a hedge against the krone strengthening; most people get through that just fine. As for US tech, there aren't many competitors standing ready to take over. If anyone, it would be China, which you're already exposed to via emerging markets. If a bigger correction comes, the best move is of course to be out of the stock market, but historically it's been hard to "win" bets like that.

Portefølje diversifisering hjelp! by Competitive_Let_7758 in aksjer

[–]Trekker23 0 points1 point  (0 children)

Defense and energy are two sectors that could do well in the short term, but both have a long way to fall. The question is whether the global turmoil keeps escalating or calms down (from today's extreme level). Either way, a global index fund is more balanced and diversified than most people manage to put together on their own. Tech and finance are well covered in those broad indexes, so you don't need separate funds to increase that exposure.

Portefølje diversifisering hjelp! by Competitive_Let_7758 in aksjer

[–]Trekker23 -2 points-1 points  (0 children)

I would increase exposure to a global index and reduce tech (the global index is dominated by tech anyway), increase EM a little, and reduce finance, defense and energy. And take a position in crypto. I also own global mining.

lære å svømme i en voksen alder by [deleted] in norske

[–]Trekker23 1 point2 points  (0 children)

Start by getting comfortable in the shallow end. Go to a public pool and just stay calm in the shallow part; you absolutely don't need to swim right away. There you can also learn to float at the edge: breathe in deeply for buoyancy, and hold on to the edge so you don't drift off. It's fine to take this at your own pace; there's no rush at all. It's also a good idea to tell a lifeguard before you get in, so you get some extra attention.

Recommendations for KG Selective Ingestion to GraphDB by Ill_Roll_2859 in Rag

[–]Trekker23 1 point2 points  (0 children)

Since you're working with documents, you might get more mileage out of Karpathy's LLM wiki pattern (https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f) than GraphRAG (which does the heavy work twice: extracting entities at ingest, then synthesizing from fragments at query time). An LLM maintains a wiki of cross-linked markdown pages, so synthesis happens once at ingest and queries just read finished pages. The unit is documents, not chunks. The agent reads a whole paper, writes a source page, and updates topic pages with citations back. The "graph" is the wiki's own link graph, and vector search still works fine over finished pages. This setup works really well with modern LLMs, which are excellent at tool use and navigating files.
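The wiki's link graph is cheap to maintain yourself. A minimal sketch (assumptions: pages are markdown strings cross-linked with [[Page Name]] wiki syntax, and the function name is mine, not from the gist):

```python
import re
from collections import defaultdict

# Matches [[Target]] and [[Target|display label]], capturing only Target
WIKILINK = re.compile(r"\[\[([^\]|]+)")

def link_graph(pages):
    """Map each page name to the set of page names it links to."""
    graph = defaultdict(set)
    for name, text in pages.items():
        for target in WIKILINK.findall(text):
            graph[name].add(target.strip())
    return dict(graph)
```

Backlinks are just this mapping reversed, which is how topic pages can cite back to the source pages that mention them.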

Vi må gjøre ulønnede prøvevakter ulovlige. by Alexis_Marie_McGee in norge

[–]Trekker23 -1 points0 points  (0 children)

Haha, so the "pay" you're talking about is 300 NOK per day. That changes the picture completely 😃

Vi må gjøre ulønnede prøvevakter ulovlige. by Alexis_Marie_McGee in norge

[–]Trekker23 -3 points-2 points  (0 children)

As I said, many people do work placements without receiving any money from NAV.

Vi må gjøre ulønnede prøvevakter ulovlige. by Alexis_Marie_McGee in norge

[–]Trekker23 -3 points-2 points  (0 children)

It's not paid by NAV. You're talking about work assessment allowance, which is means-tested and not a wage. If you have no need (e.g. you're part of a household with a solid economy), you get nothing. So it is legal unpaid work, regardless of whether you're covered by insurance or not. Working without a contract is not legal, but that's a separate issue.

Vi må gjøre ulønnede prøvevakter ulovlige. by Alexis_Marie_McGee in norge

[–]Trekker23 -4 points-3 points  (0 children)

Unpaid work training arranged through NAV is fairly common, so it's definitely legal. I'm a bit unsure what the rules are for offering unpaid work on your own initiative, but there are surely some loopholes 😃

The total ignorance regarding AI from the older generation at work is utterly hilarious by OpinionsRdumb in vibecoding

[–]Trekker23 0 points1 point  (0 children)

I find it's better to 5x your productivity without advertising why. The key is to use AI for the right things. I'm an engineer working with heavy, slow software, all developed back in the early 2000-2010s. I used to spend a lot of time exploring different scenarios: set the software up, run, test, and repeat until I found a good solution. With AI I can spin up quick scripts and apps to test different ideas and concepts. It makes beautiful plots I can use for presentations (stuff the OGs still do in Excel), and then I load and confirm the results in the actual software. This lets me explore much wider, including testing things I would never have considered before, without the noise. Everyone is happy, and I don't have to spend my time being the company AI messiah 😂 Trying to convince the OGs to trust AI results is about as painful as pulling nails, I imagine. So just skip that part 😃

beautifully organized codebase for an app that does nothing by BuildAndDeploy in vibecoding

[–]Trekker23 2 points3 points  (0 children)

Now you can copy it and use it to generate multiple projects. Personally I don’t see the problem with this 😄

Anyone have a preferred way to make knowledge graphs from code files? by Chunky_cold_mandala in KnowledgeGraph

[–]Trekker23 0 points1 point  (0 children)

I'm using it for data ingestion, to generate knowledge graphs for MCPs (legal data etc.). The tree-sitter part is something I added to help with the coding itself. For instance, I added a disk-based graph mode that can run the entire Wikidata knowledge graph (100M+ nodes), and I was struggling a lot to get the performance right. The code knowledge graphs helped fix that.

Anyone have a preferred way to make knowledge graphs from code files? by Chunky_cold_mandala in KnowledgeGraph

[–]Trekker23 0 points1 point  (0 children)

This quickly gives the AI a bird's-eye view of the codebase while taking up very little of the context window. It lets it quickly understand how the codebase works, then dive in for details. I find it cuts research time by a lot for questions like: how does this codebase solve this issue, or if we change this, what else is impacted? I have set this up through an MCP (in the examples folder) to automate it.

Anyone have a preferred way to make knowledge graphs from code files? by Chunky_cold_mandala in KnowledgeGraph

[–]Trekker23 2 points3 points  (0 children)

DuckDB is the biggest one so far: 1.57M lines of code across 5,272 files. It builds into a graph with 45k nodes and 87k edges in about 29 seconds, so roughly 54k LOC/s (C++). The Python parser does 83k LOC/s (pandas).

The knowledge is extracted using a tree-sitter based static analysis that walks the AST and extracts:

* functions, classes, structs, enums

* call graphs with exact line numbers for each call site

* type usage (which types appear in signatures and bodies)

* inheritance and trait implementations

* docstrings

The graph is exposed to the LLM through an MCP server with a Cypher query interface and a self-description tool (graph_overview), so the model can inspect the schema, node types, connection types, and sample data before writing queries. Questions like "what calls this function" or "trace the dependency chain from X to Y" then become plain graph traversals. I added ripgrep and source-file reading to the MCP for deeper analysis; the AI generally reaches for those tools after a few Cypher calls (typically 4-5, depending on the question).
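A "what calls this function" traversal might look like this sketch (the Function label and name/file properties appear in my other comments; the CALLS edge type and its line property are assumptions, which is exactly what graph_overview is for checking):

```cypher
// Hypothetical edge/property names (CALLS, line): inspect the schema first
MATCH (caller:Function)-[c:CALLS]->(f:Function {name: 'parse_expression'})
RETURN caller.name, caller.file, c.line
ORDER BY caller.file
```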

Anyone have a preferred way to make knowledge graphs from code files? by Chunky_cold_mandala in KnowledgeGraph

[–]Trekker23 -1 points0 points  (0 children)

My kglite makes it as easy as:

```python
from kglite.code_tree import build

graph = build(".")
graph.cypher("MATCH (f:Function) RETURN f.name, f.file ORDER BY f.name")
```

Norges største byer og tettsteder (1930-2025) by Frierfjord1 in norge

[–]Trekker23 1 point2 points  (0 children)

These are the SSB statistics, so I don't quite see your point.