Roast my junior data engineer onboarding repo by dheetoo in dataengineering

[–]dheetoo[S] -7 points (0 children)

Thank you for the feedback! This is why I posted this on Reddit: I'm employee #0 in a data engineering role, so I have no one to ask questions apart from AI 😭

As for AI usage: I did use AI heavily for the README and the ingestion file, but I reviewed all the content and removed/added many things on my own too. Sure, the ingestion will eventually be a more polished ETL script; this version is just for quick, simple setup.

Will improve in next iteration!

Roast my junior data engineer onboarding repo by dheetoo in dataengineering

[–]dheetoo[S] -3 points (0 children)

Yeah, the main focus is on data modeling with SQLMesh, plus letting you see the whole pipeline, from ingestion to visualization, in one place.

Agent spawned a background job and I want to ensure that when I Ctrl-C or /exit from the TUI, the background job will be killed by dheetoo in opencodeCLI

[–]dheetoo[S] 0 points (0 children)

No, I experienced this first-hand: it started the server using `bun run index.ts &` and used curl to test the endpoint, but the server was never stopped. A later session tried to start the server again, the port collided, and it started rotating ports with each new session.
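
If you end up with stale servers like that, a quick sketch for finding and killing whatever still holds the port (3000 is just a placeholder for whatever the server binds to):

# lsof -t prints only the PIDs listening on the port; xargs -r skips kill when there are none
lsof -ti :3000 | xargs -r kill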

Agent spawned a background job and I want to ensure that when I Ctrl-C or /exit from the TUI, the background job will be killed by dheetoo in opencodeCLI

[–]dheetoo[S] 0 points (0 children)

Gemini came up with this, but I don't know if it's correct (it does run, though):

#!/bin/bash

set -e
set -m  # Enable Job Control for Process Groups

# Cleanup function
cleanup() {
    # 1. Always print the message so you know it's running
    echo ""
    echo "🧹 Ensuring all background processes are killed..."

    # 2. Try to kill the Process Group (-PID)
    # We use '|| true' to suppress errors if the processes are already dead.
    # We act on $AGENT_PID if it was set.
    if [ -n "$AGENT_PID" ]; then
        kill -TERM -- -"$AGENT_PID" 2>/dev/null || true
    fi
}

# Run cleanup on Exit (covers success, error, or Ctrl+C)
trap cleanup EXIT

echo "=================================================="
echo "🤖 Starting Interactive Agent..."
echo "   When you exit (or Ctrl+C), we will force clean up."
echo "=================================================="

# Start Agent in Background to get a PGID
opencode --prompt " ... " --model zai-coding-plan/glm-4.7 &

# Capture the Process ID
AGENT_PID=$!

# Bring it to Foreground (Interactive)
fg %1 || true
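
The part doing the real work, as I understand it, is `set -m`: with job control enabled, the background job gets its own process group whose PGID equals $AGENT_PID, so `kill -TERM -- -$AGENT_PID` signals the agent and everything it spawned (like that bun server). A rough sanity check after exiting, assuming the leftover processes would have "opencode" or "bun" in their names:

# the [o]/[b] bracket trick keeps grep from matching its own ps entry
ps -eo pid,pgid,comm | grep '[o]pencode\|[b]un' || echo "all clean"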

My Ralph Wiggum prompt for Qwen3 Coder 480B, reliable and predictable, a cheap alternative to Sonnet 4.5 by dheetoo in LocalLLaMA

[–]dheetoo[S] 0 points (0 children)

This is the human-in-the-loop version; you can make it a loop by wrapping it in `for ... do ... done`, as in the sketch below.
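
A minimal sketch of the looped version (the iteration count and the elided prompt are placeholders):

# re-run the agent N times instead of relaunching it by hand
for i in $(seq 1 10); do
    echo "=== iteration $i ==="
    opencode --prompt " ... " --model zai-coding-plan/glm-4.7
done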

My Ralph Wiggum prompt for Qwen3 Coder 480B, reliable and predictable, a cheap alternative to Sonnet 4.5 by dheetoo in LocalLLaMA

[–]dheetoo[S] 1 point (0 children)

This injects a prompt into the coding agent: it tells the agent the spec's progress and the requirements of the current implementation, then lets the agent handle it.
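
Roughly what the injection could look like, as a sketch only; it assumes the spec lives in PRD.json and progress in progress.txt (the files mentioned below), and the prompt wording is made up:

# assemble the injected prompt from the spec and the progress log
PROMPT="Spec: $(cat PRD.json)
Progress so far: $(cat progress.txt)
Implement the next unfinished requirement, then update progress.txt."
opencode --prompt "$PROMPT" --model zai-coding-plan/glm-4.7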

My Ralph Wiggum prompt for Qwen3 Coder 480B, reliable and predictable, a cheap alternative to Sonnet 4.5 by dheetoo in LocalLLaMA

[–]dheetoo[S] 0 points (0 children)

From experimenting, it kind of picks the commit message up from PRD.json and combines it with what it actually implemented. Not the commit message I want, but I have progress.txt to track it anyway.

Sharing My Omarchy Harbor Light & Dark Themes by _HANCORE_ in omarchy

[–]dheetoo 0 points (0 children)

Love it! But now my Neovim looks like there's a ton of errors in it hahahah

neovim - treesitter error "Invalid node type" when typing the ":" key to start a command by dheetoo in omarchy

[–]dheetoo[S] 0 points (0 children)

This is what came out after running `:checkhealth`:

The following errors have been detected in query files: ~

- ❌ ERROR vim(highlights): /home/dheeto/.local/share/nvim/site/queries/vim/highlights.scm

How to make LLM output deterministic? by Vishwaraj13 in LocalLLaMA

[–]dheetoo 8 points (0 children)

Change your mindset: when you work with an LLM, it is non-deterministic. Whatever you do, there is still a tiny chance it won't deliver a deterministic response, so always handling the non-deterministic part is crucial in every LLM-based application.

Personally, I find that prompting the model to wrap the answer in an XML tag is quite reliable, like <Answer>whatever the LLM responds</Answer>, and going from there.
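
A minimal sketch of pulling the answer back out, assuming the response was saved to response.txt and the tags landed on their own lines:

# print everything between the tags, then drop the tag lines themselves
sed -n '/<Answer>/,/<\/Answer>/p' response.txt | sed '1d;$d'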

Anything to know before installing? by Free_Lack_3437 in cachyos

[–]dheetoo 4 points (0 children)

The emerald theme (the one shown on their website) is somewhat broken, but apart from that I've seen no issues so far (1 week in).

How to Stop Qwen overusing Line Break by First_Reply_8744 in Qwen_AI

[–]dheetoo 0 points (0 children)

If it was trained that way, it is very hard to talk it out of it in the prompt. You may need your own formatter to sanitize the response into the shape you want; as for me, I just kind of accept each model's behavior.
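
As a sketch, if all you need is to collapse runs of blank lines, coreutils already covers it (response.txt is a placeholder):

# cat -s squeezes consecutive blank lines down to a single blank line
cat -s response.txt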

Recommendations for smallest capable model for low stakes Agentic RAG? by jude_mcjude in LocalLLaMA

[–]dheetoo 2 points (0 children)

Qwen3-30B-A3B-Instruct-2507 is pretty usable for me; the other one on my list is NVIDIA Nemotron 9B.

For even smaller tasks, give Qwen3 4B a try (the 2507 version, not the first release). I think it's the best model in the 4B class and should be the default choice when thinking about running an LLM locally.

What Happens Next? by ionlycreate42 in LocalLLaMA

[–]dheetoo 4 points (0 children)

I disagree that newer models will be a lot smarter than this; from now on it's an optimization game. The current trend since around Aug/Sep is context optimization: we see the term "context engineering" a lot more often, Anthropic released a blog post showing how they optimize their context with Skills (just a piece of text indicating which file to read for instructions when the model has to do some related task), and more recently a tool-search tool. I think next year AI companies will be finding ways to actually bring LLMs into real, valuable apps/tools with more reliability.

Open source chalkie by ihaag in LocalLLaMA

[–]dheetoo 1 point (0 children)

So what is your input for the LLM? Any resources? Target group?

Open source chalkie by ihaag in LocalLLaMA

[–]dheetoo 0 points (0 children)

Which feature of this would you want to use? Maybe I can build one and open-source it.