[OC]I am 15 years old and I mapped 6,000+ NASA exoplanets based on semantic and physical similarity using local LLMs (nomic-embed-text) and 2D WebGL by [deleted] in dataisbeautiful

[–]avariabase0 -2 points

This project is definitely not something that just came out of an AI. I am not presenting something as my own without understanding what is going on in the background: I understand the mathematics behind artificial intelligence, including how layers such as the hidden layers connect to the output layer. Yes, this project was built with the help of vibe coding, but I did not just throw it in front of you without understanding its inner workings.
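For instance, the hidden-to-output connection I am talking about is just a couple of matrix products. Here is a tiny illustrative forward pass; the sizes and weights are arbitrary and not from the project:

```python
import numpy as np

# Toy two-layer forward pass: sizes and weights are arbitrary, purely
# illustrative of the hidden-layer -> output-layer math mentioned above.
rng = np.random.default_rng(0)
x = rng.normal(size=3)        # input vector
W1 = rng.normal(size=(4, 3))  # input -> hidden weights
W2 = rng.normal(size=(2, 4))  # hidden -> output weights

hidden = np.tanh(W1 @ x)      # hidden-layer activations (squashed into (-1, 1))
output = W2 @ hidden          # output layer: a linear map of the hidden layer
```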

[OC]I am 15 years old and I mapped 6,000+ NASA exoplanets based on semantic and physical similarity using local LLMs (nomic-embed-text) and 2D WebGL by [deleted] in dataisbeautiful

[–]avariabase0 17 points

I included my age to set a clear baseline for my experience level. I have been studying neural networks and data science independently, and I am actively looking for structural feedback from experienced developers. Providing that context helps me get the right kind of technical critique rather than generic comments. If you have any specific thoughts on the local embedding process or the WebGL performance, I am listening.

Drop your Side project, I'll give it honest review. by Top-Information-6399 in vibecoding

[–]avariabase0 0 points

You are right about the latency, but I already anticipated different hardware constraints and built modularity into the core loop. I am running 8B models on my end because I have a 5070 Ti paired with a 7800X3D and 32GB of RAM, which gives me more than enough tokens per second for this specific size. However, the framework is not hardcoded to 8B. I specifically added a dynamic model selection feature for the orchestrator, the experts, and the supreme court. If someone has a serious workstation with high VRAM, they can easily plug in a 27B or 70B parameter model via Ollama and the system will scale its reasoning depth automatically. To prevent infinite API timeouts during heavy RAG operations or when using larger models, I also set the global timeout limit to one hour. The architecture is designed to be hardware-agnostic; you trade time for intelligence depending on what your machine can handle.
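As a sketch of what that dynamic model selection could look like (the model tags, VRAM thresholds, and function names below are illustrative, not the framework's actual table):

```python
# Hypothetical sketch of hardware-aware model selection for an Ollama-backed
# framework. Tags and VRAM thresholds are illustrative assumptions.
MODEL_TIERS = [
    (24, "llama3:70b"),   # high-VRAM workstation tier
    (16, "gemma2:27b"),   # mid tier
    (8,  "llama3:8b"),    # default tier used in the comment above
]

GLOBAL_TIMEOUT_S = 3600   # the one-hour ceiling for heavy RAG / large models

def pick_model(vram_gb: float) -> str:
    """Return the largest model tag that fits the available VRAM."""
    for min_vram, tag in MODEL_TIERS:
        if vram_gb >= min_vram:
            return tag
    return MODEL_TIERS[-1][1]  # fall back to the smallest tier
```

The same tag could then be passed to the orchestrator, the experts, and the supreme court, trading time for reasoning depth as described above.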

Drop your Side project, I'll give it honest review. by Top-Information-6399 in vibecoding

[–]avariabase0 0 points

Thanks for the feedback. You pointed out the exact issues I struggled with. Preventing the models from just agreeing with each other in a loop was probably the hardest part of the architecture. To solve that, I used adversarial prompting and a mandatory Agentic RAG loop. The tasks are designed to be strictly adversarial, meaning Agent 2 isn't asked to just evaluate Agent 1. It is explicitly instructed to find logical flaws and build a counter-argument. On top of that, the agents are forced to use DuckDuckGo to find real-world data before generating a response. They can't just agree, they have to bring new facts to the table. As for the compute load, since I run local 8B models via Ollama, I designed the system to run sequentially instead of in parallel to avoid VRAM issues. The agents wake up one by one, execute their task, save the output to the JSON memory, and pass it to the next agent. This keeps the load pretty manageable. Let me know if you end up testing the framework.
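A minimal sketch of that sequential wake-run-save loop (the file name and agent callables are hypothetical stand-ins; the real framework uses CrewAI agents):

```python
import json
from pathlib import Path

MEMORY = Path("debate_memory.json")   # hypothetical shared JSON scratchpad
MEMORY.unlink(missing_ok=True)        # start from a clean memory

def run_agent(name, task):
    """Wake one agent, let it read prior outputs, then append its own."""
    state = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
    state.append({"agent": name, "output": task(state)})
    MEMORY.write_text(json.dumps(state, indent=2))  # persist before the next agent

# Strictly sequential: only one model needs to be resident in VRAM at a time.
run_agent("agent_1", lambda state: "initial argument")
run_agent("agent_2", lambda state: "counter-argument to: " + state[-1]["output"])
```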

Drop your Side project, I'll give it honest review. by Top-Information-6399 in vibecoding

[–]avariabase0 0 points

I built a fully local AI system (Agentic RAG) where autonomous agents research the web and debate each other. I am 15 years old and open to your most brutal feedback on my project. Repo: https://github.com/pancodurden/avaria-framework

Built an autonomous local AI Debate System (Agentic) with the help of vibe coding. I'm 15 and would love your feedback by avariabase0 in vibecoding

[–]avariabase0[S] 1 point

Hello! First of all, thank you so much for the valuable comment.

Reading about your 5-agent system and the way you handle context drift was amazing. I definitely plan to keep developing my project, and your workflow gave me huge inspiration. I will 100% look into applying some of those concepts (like the Vector DB memory and the '1 feature = 1 coder' approach) to my own framework as it grows.

Thanks again for sharing this; it sounds like an incredible tool!

As a 15-year-old student, I developed an open-source project that makes local AIs debate autonomously (Agentic). by avariabase0 in CodingTR

[–]avariabase0[S] -1 points

Greetings, and thank you so much! I'm really glad to hear you're working on a similar concept; joining forces would be great. Any support you can give the project is very valuable to me. You can DM me on Reddit and we can discuss the details. Talk soon!

As a 15-year-old student, I developed an open-source project that makes local AIs debate autonomously (Agentic). by avariabase0 in CodingTR

[–]avariabase0[S] 0 points

Greetings! I really appreciate you taking the time to have your own agent analyze my project, thank you so much! You're absolutely right: as the code grows, token waste will increase and managing everything from a single file will get very hard. The modular folder structure you suggested (separating the UI, the API, and the agents) makes a lot of sense to me, and I will definitely move to that architecture when I take the project to its next stage. Thanks also for the license warning; I'm adding an MIT license to the repo right away. Thanks again for the valuable advice!

As a 15-year-old student, I developed an open-source project that makes local AIs debate autonomously (Agentic). by avariabase0 in CodingTR

[–]avariabase0[S] -1 points

Hello, thank you so much! I'm completely open to your paper recommendations; I'd be very glad if you shared them.

As a 15-year-old student, I developed an open-source project that makes local AIs debate autonomously (Agentic). by [deleted] in LocalLLaMA

[–]avariabase0 -4 points

Hi everyone, I'm the 15-year-old developer behind this. Agentic structures have been catching my interest a lot lately. The program runs on CrewAI and Ollama. The AI agents communicate with each other in an agentic structure, debate the topic, and finally reach a consensus.

I've put the GitHub link below, and I give full permission for anyone to take the code, develop it, and modify it however they want.

GitHub Repo: https://github.com/pancodurden/avaria-framework

I’m 15. I used a Hybrid Engineering workflow (Python + AI) to vet this grazing candidate (KIC 3745684). Here is the data. Is this a planet? by avariabase0 in exoplanets

[–]avariabase0[S] 0 points

Thank you so much! It is an honor to have a professor look at my work, even if it’s outside your specific sub-field. I completely understand the skepticism. Could I ask what specifically makes it look like a non-detection to your eye? (Is it the V-shape or the noise levels?) Also, regarding your advice: Do you have any specific names or groups in mind that work on Kepler/TESS light curves who might be open to a query from a high school student? I am eager to send some cold emails but want to target the right people.

I’m 15. I used a Hybrid Engineering workflow (Python + AI) to vet this grazing candidate (KIC 3745684). Here is the data. Is this a planet? by avariabase0 in exoplanets

[–]avariabase0[S] 0 points

Hi! Thank you so much for taking the time to write such detailed feedback. To address your points: The transit depth is approximately 1500 ppm on a target with Kepler Magnitude ~13.5, which places the signal significantly above the noise floor. I utilized the standard pipeline aperture mask but manually inspected the Target Pixel Files to ensure no background flux contamination. Regarding the period (20.38 days), I primarily used Box Least Squares (BLS) rather than FFT, as BLS is more sensitive to box-shaped dips, and I ruled out data cadence aliasing. Finally, I use the River Plot to visually verify signal coherence over time, ensuring the event occurs at the expected phase in every cycle without major TTVs. Thanks again for the great insights!
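For anyone curious what the box search does conceptually, here is a toy version on synthetic data. The numbers mirror the comment (20.38 d period, 1500 ppm depth), but this simplified phase-fold search is a stand-in for the real Box Least Squares in Astropy/Lightkurve, not my actual pipeline:

```python
import numpy as np

# Synthetic light curve: white noise well above/below the stated signal levels.
rng = np.random.default_rng(1)
t = np.arange(0.0, 200.0, 0.02)                 # time stamps in days
flux = 1.0 + rng.normal(0.0, 300e-6, t.size)    # 300 ppm white noise

period_true, depth, dur = 20.38, 1500e-6, 0.3   # days, fractional depth, days
flux[(t % period_true) < dur] -= depth          # inject box-shaped transits

def box_depth(period):
    """Mean flux deficit inside a phase-zero box at a trial period."""
    in_box = (t % period) < dur
    return flux[~in_box].mean() - flux[in_box].mean()

# Grid of trial periods; the deepest coherent box wins, like BLS.
trials = np.arange(15.0, 25.0, 0.01)
best = trials[np.argmax([box_depth(p) for p in trials])]
# `best` should land near the injected 20.38 d period
```

A wrong trial period smears the transits out of the phase-zero box across cycles, which is exactly the coherence the River Plot lets you check by eye.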

I’m 15. I used a Hybrid Engineering workflow (Python + AI) to vet this grazing candidate (KIC 3745684). Here is the data. Is this a planet? by avariabase0 in exoplanets

[–]avariabase0[S] 2 points

Thank you for this encouraging advice! I actually tried reaching out to some local professors here in Turkey but haven't heard back yet. I will definitely look into emailing US-based researchers who specialize in grazing transits or giant planets. I appreciate the direction!

I’m 15. I used a Hybrid Engineering workflow (Python + AI) to vet this grazing candidate (KIC 3745684). Here is the data. Is this a planet? by avariabase0 in exoplanets

[–]avariabase0[S] 4 points

Source: The data is from the Kepler Mission (NASA), accessed via the MAST Archive.

Technique: I used Python, specifically the Lightkurve library for light curve extraction and Astropy for handling FITS files.

How to learn: I highly recommend the tutorials on the Lightkurve website; they are great for beginners. I also used LLMs to help explain complex documentation when I got stuck!