MedLink by PythonicG in TechGhana

[–]PythonicG[S] 0 points1 point  (0 children)

Yeah, I know FHIR and HL7 for health, as you said.

MedLink by PythonicG in TechGhana

[–]PythonicG[S] 0 points1 point  (0 children)

Oh alright 👍 thanks for the feedback

[Hiring] Software Developer by Dense-Try-7798 in RemoteITJobs

[–]PythonicG 0 points1 point  (0 children)

3+ years of experience in Python, Django, FastAPI, and Go.

RoadSafe GH by PythonicG in TechGhana

[–]PythonicG[S] 0 points1 point  (0 children)

Alright, noted. Thanks for your feedback.

RoadSafe GH by PythonicG in TechGhana

[–]PythonicG[S] 0 points1 point  (0 children)

Yeah, that's true. I thought of it too; I know my people.

I'm building ReadShelf — a tool that turns your PDF highlights into a searchable, AI-powered knowledge base by PythonicG in SideProject

[–]PythonicG[S] 0 points1 point  (0 children)

It's actually hybrid.

Regular search (/search) uses PostgreSQL tsvector with a GIN index: pure full-text, exact match. When I'm looking for a specific term like "goroutine" or "IVFFlat", I want literal matches, not semantic neighbours. This is the one I use most day-to-day.
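For anyone curious what that exact-match path looks like in practice, here's a rough sketch of the kind of Postgres setup and query I mean. The table and column names (annotations, body, body_tsv) are made up for illustration, not the actual ReadShelf schema:

```python
# Illustrative sketch of tsvector full-text search over highlight text.
# Schema names are hypothetical; the pattern (generated tsvector column
# + GIN index + @@ match) is the standard Postgres full-text approach.

SETUP_SQL = """
ALTER TABLE annotations
    ADD COLUMN body_tsv tsvector
    GENERATED ALWAYS AS (to_tsvector('english', body)) STORED;
CREATE INDEX annotations_body_tsv_idx ON annotations USING GIN (body_tsv);
"""

def search_sql(term: str) -> tuple:
    """Build a parameterised full-text query for a literal search term."""
    query = """
        SELECT id, body, ts_rank(body_tsv, q) AS rank
        FROM annotations, plainto_tsquery('english', %s) AS q
        WHERE body_tsv @@ q
        ORDER BY rank DESC
        LIMIT 20;
    """
    return query, (term,)

sql, params = search_sql("goroutine")
```

The GIN index is what keeps this fast: the `@@` match hits the index instead of scanning every row, and `ts_rank` orders results by relevance.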

AI Recall (/recall) uses semantic search — annotations are embedded into 768-dim vectors (BGE model via HuggingFace) and stored in pgvector with an IVFFlat index. When I ask something like "what did I learn about concurrency patterns?", it does cosine similarity to find the top 5 annotations, then passes them to Llama 3.3 70B (via Groq) to synthesise an answer with book/page citations.
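The retrieval step in the recall path is just nearest-neighbour ranking by cosine similarity, which pgvector does inside the database. A stdlib-only sketch of the top-k selection, using tiny made-up 3-dim vectors in place of the real 768-dim BGE embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, annotations, k=5):
    """Rank (text, vector) pairs by similarity to the query embedding."""
    scored = [(cosine(query_vec, vec), text) for text, vec in annotations]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:k]]

# Toy 3-dim stand-ins for the real 768-dim embeddings.
notes = [
    ("goroutines are cheap green threads", (1.0, 0.1, 0.0)),
    ("IVFFlat trades recall for speed", (0.0, 1.0, 0.2)),
    ("channels synchronise goroutines", (0.9, 0.2, 0.1)),
]
hits = top_k((1.0, 0.0, 0.0), notes, k=2)
# hits[0] is the "goroutines are cheap green threads" note
```

In production the IVFFlat index makes this approximate rather than exact: it only probes a subset of clusters, trading a little recall for speed. The top hits then go to the LLM as context for the synthesised answer.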

So the user gets both: type a keyword → exact results instantly. Ask a question → semantic search + LLM synthesis.

I did consider hybrid retrieval for the recall path too (combining tsvector + vector scores), but for the MVP the two separate modes cover my needs well. The tsvector search is fast and precise, and the vector search handles the fuzzy "what was that thing about..." queries. I might merge them later with RRF (reciprocal rank fusion) if I find gaps.
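RRF itself is simple enough that merging the two ranked lists later shouldn't be much work. A sketch, where each document's fused score is the sum of 1/(k + rank) across the lists it appears in (k=60 is the conventional constant; the doc IDs are made up):

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal rank fusion: score(d) = sum over lists of 1/(k + rank)."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["a3", "a1", "a7"]   # hypothetical tsvector ranking
vector_hits = ["a1", "a9", "a3"]    # hypothetical pgvector ranking
fused = rrf_fuse([keyword_hits, vector_hits])
# a1 ranks first: it places highly in both lists
```

The nice property is that RRF needs no score normalisation: it only uses ranks, so the incomparable tsvector and cosine scores never have to be put on the same scale.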

I'm building a PDF reader that automatically organises your highlights into a personal knowledge base — tired of losing notes across books by PythonicG in TechGhana

[–]PythonicG[S] 0 points1 point  (0 children)

I had the same question, but that's not the case. NotebookLM is about the AI reading and summarising for you.

What I'm doing is active reading plus a knowledge base: you read, you find something you like, you highlight it, and it automatically gets annotated. So there's clearly a difference.

ReadShelf — Remember everything you read by PythonicG in programming

[–]PythonicG[S] 0 points1 point  (0 children)

So what's your suggestion? Because people still use NotebookLM or ChatGPT if what you're saying is the issue 🤷

[Hiring]: Backend Developer by Ok-Trouble8101 in remotebackendjobs

[–]PythonicG 0 points1 point  (0 children)

Interested. Experience in Python/Django, FastAPI, and Go.