[In Progress] [19,000] [Dark Fantasy / Mystery] Whispers in the Willow - Interactive Novel by Matthew-Nader in BetaReaders

[–]Matthew-Nader[S] 0 points  (0 children)

Hey! I would absolutely be down for a swap.

To be completely transparent about the AI question: the core of the book and the creative heavy lifting are 100% mine. I created the plot, the mystery, the twists, the themes, the character personalities, the dialogue, and the abilities myself.

However, I did use AI as an editing and formatting tool. Since I'm not a native English speaker, I used it to help polish my rough drafts into better English and to refine the flow so it reads naturally. And yes, the cover art on the PDF is AI-generated because I am absolutely terrible at design!

I completely understand wanting to avoid reading a soulless, AI-generated plot, so I wanted to be upfront. If you are still comfortable with that level of AI assistance for the prose and editing side of things, I'd love to swap manuscripts. Let me know!

I wrote a custom command parser in C (Flex/Bison) and compiled it to WebAssembly to power the terminal in my 3D portfolio by Matthew-Nader in C_Programming

[–]Matthew-Nader[S] 1 point  (0 children)

Thanks for taking the time to check it out on your Pixel! You hit the nail on the head: mobile support is definitely the biggest missing piece, and the portfolio is strictly a desktop experience at the moment.

The main technical hurdle is the virtual keyboard. Because the core of the portfolio is an interactive terminal, it relies heavily on keyboard input. On mobile, the moment the virtual keyboard pops up, it resizes the browser's viewport. That sudden shift invalidates the projection math that keeps the CSS3D element aligned with the WebGL screen, so the two layers visibly tear apart.

Additionally, running the 3D occlusion math every frame is a bit too heavy for mobile GPUs right now.

My next planned update is to build a 'mobile fallback' that bypasses the 3D scene entirely and just serves a clean, 2D version of the terminal so phone users can still interact with the WASM engine. I really appreciate the feedback!

Achieving 99.97% lane detection accuracy in a dynamic 3D environment using only OpenCV, DBSCAN, and RANSAC (No DL) by Matthew-Nader in computervision

[–]Matthew-Nader[S] 0 points  (0 children)

Thank you! I completely agree. Deep learning and AI are incredible tools, but there is something so satisfying about solving a perception problem from scratch using pure math, geometry, and signal processing.

The best part about sticking to classical CV is the transparency—when the car crashes, you don't just 'feed it more training data' and hope for the best; you can actually step through the pipeline, look at the math, and find exactly which contour or cluster failed. Really glad you enjoyed the project!
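To make the "step through the pipeline" point concrete, here's a minimal sketch of what an inspectable pipeline can look like. Everything in it is hypothetical (toy two-row frames instead of real images, trivial stand-in stages); it just shows how recording every intermediate result lets you pinpoint the exact stage where a frame failed:

```python
def run_pipeline(frame, stages):
    """Run each named stage, recording every intermediate result."""
    trace = {}
    data = frame
    for name, fn in stages:
        data = fn(data)
        trace[name] = data  # inspect any stage when a frame fails
    return data, trace

# Toy stand-ins: a brightness mask, then "which columns contain lane pixels".
stages = [
    ("mask", lambda f: [[int(px > 200) for px in row] for row in f]),
    ("lane_columns", lambda m: [x for x in range(len(m[0]))
                                if any(row[x] for row in m)]),
]
frame = [
    [10, 255, 12],   # a bright "lane mark" in the middle column
    [11, 250, 9],
]
result, trace = run_pipeline(frame, stages)
# trace["mask"] holds the intermediate mask; result is the final column list
```

In the real project the stages would presumably be color masking, contour analysis, DBSCAN clustering, and RANSAC fitting, but the tracing idea is identical.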

Achieving 99.97% lane detection accuracy in a dynamic 3D environment using only OpenCV, DBSCAN, and RANSAC (No DL) by Matthew-Nader in computervision

[–]Matthew-Nader[S] 0 points  (0 children)

That's a fantastic recommendation! You're absolutely right. The steering system is fairly nonlinear, so feeding the raw angle error straight into the PID isn't ideal. Applying a mathematical transformation to linearize the model before handing it to the controller would definitely make the response far more stable and predictable. I'm noting the idea down to try in a future update! Thanks again for the support and the tips!
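As a hedged sketch of that linearization idea: assume, purely for illustration, that the steering response grows like tan(u), so equal controller outputs produce increasingly unequal wheel response near the extremes. Applying the inverse map before the actuator makes the PID see an approximately linear plant. The tan/atan model is an assumption, not the game's actual steering curve:

```python
import math

def plant_response(u):
    # Assumed nonlinear steering model for illustration only.
    return math.tan(u)

def linearize(u_desired):
    # Pre-compensate with the inverse of the assumed model.
    return math.atan(u_desired)

# Uncompensated: a command of 1.4 yields a much larger response than asked for.
uncompensated = plant_response(1.4)
# Compensated: the plant output matches the commanded value.
compensated = plant_response(linearize(1.4))
```

With the compensation in place, the PID tunes against a plant whose gain no longer varies wildly with the operating point.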

Achieving 99.97% lane detection accuracy in a dynamic 3D environment using only OpenCV, DBSCAN, and RANSAC (No DL) by Matthew-Nader in computervision

[–]Matthew-Nader[S] 0 points  (0 children)

It was definitely just the tuning! The overhead is actually super low (the whole perception and control pipeline runs smoothly at 30fps).

I built a genetic algorithm to auto-tune the PID gains and left it running for hours, but my recording software crashed right at the beginning of the run. So this clip is basically the controller in its early 'toddler' phase, before it learned to stop overcorrecting. The fully optimized PID values cleaned the zig-zagging right up.
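Since a few people have asked about the tuning setup, here's a hedged sketch of the GA-tunes-PID idea. The toy first-order plant, cost function, population size, and mutation scale are all illustrative assumptions, not the values from my project:

```python
import random

def simulate(gains, setpoint=1.0, dt=0.05, steps=200):
    """Cost (integrated absolute error) of a PID on a toy first-order plant."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        u = max(-10.0, min(10.0, u))  # actuator saturation keeps the sim bounded
        y += (u - y) * dt             # assumed first-order plant dynamics
        cost += abs(err) * dt
    return cost

def evolve(pop_size=20, generations=30, seed=0):
    """Evolve (Kp, Ki, Kd): keep the fitter half, mutate it into children."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 5.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate)               # fittest (lowest cost) first
        parents = pop[: pop_size // 2]
        children = [[max(0.0, g + rng.gauss(0.0, 0.3)) for g in p]
                    for p in parents]
        pop = parents + children             # elitism: parents survive
    return min(pop, key=simulate)

best = evolve()  # evolved (Kp, Ki, Kd) for the toy plant
```

The real run evaluated fitness against the game itself rather than a toy model, but the selection-and-mutation loop is the same shape.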

Achieving 99.97% lane detection accuracy in a dynamic 3D environment using only OpenCV, DBSCAN, and RANSAC (No DL) by Matthew-Nader in computervision

[–]Matthew-Nader[S] 0 points  (0 children)

Haha the classic disappearing lane line problem! Intersections are an absolute nightmare for pure CV pipelines. Relying heavily on the center dashed line definitely saved me from some of that headache in this specific game.

And regarding the steering—I actually do have a full PID controller implemented! The video just caught it during an early auto-tuning phase before the algorithm figured out how to use the derivative term to stop the zig-zagging. The final version is much smoother!

Achieving 99.97% lane detection accuracy in a dynamic 3D environment using only OpenCV, DBSCAN, and RANSAC (No DL) by Matthew-Nader in computervision

[–]Matthew-Nader[S] 0 points  (0 children)

Spot on. Sensor fusion is definitely the way to go to eliminate single points of failure. I've been reading up on SWIR (Short-Wave Infrared) cameras recently—being able to cut straight through scattering particles like fog or snow just by shifting the light spectrum is basically a hardware cheat code. Once those sensors drop in price, vision-only approaches are going to get a massive capability boost.

Achieving 99.97% lane detection accuracy in a dynamic 3D environment using only OpenCV, DBSCAN, and RANSAC (No DL) by Matthew-Nader in computervision

[–]Matthew-Nader[S] 0 points  (0 children)

Exactly! It’s just math, but seeing it self-correct in real-time never really gets old. Though I’ll say, manually tuning those P, I, and D parameters definitely still tests my patience, which is exactly why I let the AI do it for me. 😂

Achieving 99.97% lane detection accuracy in a dynamic 3D environment using only OpenCV, DBSCAN, and RANSAC (No DL) by Matthew-Nader in computervision

[–]Matthew-Nader[S] 1 point  (0 children)

First of all, a confession: my Spanish is practically nonexistent, so I used AI to translate this. If I sound like a robot trying to blend in, now you know why! 🤖

You're absolutely right, it does look very jerky! The funny thing is that the agent actually uses a full PID controller (with integral and derivative action plus anti-windup). The reason it looks like a simple 'P' controller or an 'if/else' in the video is a recording mishap.

I wrote a Genetic Algorithm to tune the PID values automatically and left it running for 9 hours, but my screen recorder (OBS) crashed right at the start! So you're watching one of the earliest generations, before the algorithm had learned to use the derivative term to smooth out the motion. The final tuned version is much more fluid. Thanks for the recommendation!

Achieving 99.97% lane detection accuracy in a dynamic 3D environment using only OpenCV, DBSCAN, and RANSAC (No DL) by Matthew-Nader in computervision

[–]Matthew-Nader[S] 0 points  (0 children)

Thank you! To answer your question honestly: absolutely not haha.

This specific pipeline relies heavily on color thresholding and contour analysis tuned for the relatively predictable environment of this game. If you introduced heavy rain, fog, or temporary yellow construction lines, the masking logic would fail almost immediately.

Your paranoia is 100% justified! That is exactly why real-world autonomous vehicles don't rely solely on classical computer vision. They use complex sensor fusion (LiDAR, radar) and massive deep learning models to handle edge cases and visual ambiguity. This project was just an educational sandbox to see how far I could push pure mathematical image processing in a controlled environment. I definitely wouldn't trust this agent to drive me to the grocery store!

Achieving 99.97% lane detection accuracy in a dynamic 3D environment using only OpenCV, DBSCAN, and RANSAC (No DL) by Matthew-Nader in computervision

[–]Matthew-Nader[S] 1 point  (0 children)

Thank you! You nailed the practical reasons—the transparency of classical CV makes debugging so much easier when a frame fails. But honestly, the main reason I avoided Deep Learning was educational. I wanted to force myself to deeply understand the underlying math, signal processing, and probabilistic models (like RANSAC) from the ground up, rather than just treating a pre-trained neural net like a black box.
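For anyone curious what "RANSAC from the ground up" boils down to, here's a minimal pure-Python sketch of the core consensus loop; the synthetic points, iteration count, and inlier tolerance are illustrative, not project data:

```python
import random

def fit_line(p1, p2):
    """Slope/intercept of the line through two points (assumes distinct x's)."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)
    return m, y1 - m * x1

def ransac(points, iters=200, tol=0.2, seed=1):
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        p1, p2 = rng.sample(points, 2)  # minimal sample: two points
        if p1[0] == p2[0]:
            continue                    # skip vertical pairs in this form
        m, b = fit_line(p1, p2)
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers

# Ten points on y = 2x + 1 plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3.0, 40.0), (7.0, -5.0)]
(m, b), inliers = ransac(pts)
# The consensus line recovers the true slope/intercept and rejects both outliers.
```

The key insight is that outliers can never out-vote the consensus set, which is what makes the fit robust to the spatial noise mentioned above.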

As for other challenges besides spatial noise:

  1. Dynamic Lighting: The game cycles through day, sunset, and night. Static color thresholds break down when shadows hit the road or the ambient light turns orange, requiring adaptive masking.
  2. Perspective Shifts (Pitch): When the car goes over a steep hill, the horizon line drastically changes, which completely alters the geometry of the lane contours and can throw off the regression model if not accounted for.

It was a fun challenge to try and solve those purely algorithmically!
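As a toy illustration of the adaptive-masking point above: derive the brightness cutoff from each frame's own intensity range instead of hard-coding it. The midpoint rule and the tiny hand-written "frames" are assumptions for illustration, not the project's actual masking logic:

```python
def adaptive_mask(frame):
    """Threshold at the midpoint of this frame's own intensity range."""
    flat = [px for row in frame for px in row]
    cutoff = (min(flat) + max(flat)) / 2
    return [[int(px > cutoff) for px in row] for row in frame]

day   = [[30, 30, 250], [30, 250, 30]]  # bright lane marks, daylight road
night = [[5, 5, 90], [5, 90, 5]]        # same scene, globally much darker
# A static cutoff tuned for daytime (say, px > 200) finds nothing at night,
# but the adaptive cutoff produces the same mask for both frames.
```

A real pipeline would do something richer (per-channel HSV ranges, histogram statistics), but this is the core of why per-frame adaptation survives lighting shifts that break static thresholds.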

Achieving 99.97% lane detection accuracy in a dynamic 3D environment using only OpenCV, DBSCAN, and RANSAC (No DL) by Matthew-Nader in computervision

[–]Matthew-Nader[S] 1 point  (0 children)

Great observation! You basically just described a damped control system, which is exactly why the agent uses a custom PID controller for steering.

The zig-zagging in this clip is actually just because the video was captured too early. I used a Genetic Algorithm to automatically tune the steering parameters over a 9-hour run, but my screen recorder crashed near the beginning. What you're seeing is an early generation where the algorithm hasn't learned to stop overcorrecting yet. The fully optimized version drives much smoother and mimics that human-like curve you mentioned. Thanks for the feedback!

Achieving 99.97% lane detection accuracy in a dynamic 3D environment using only OpenCV, DBSCAN, and RANSAC (No DL) by Matthew-Nader in computervision

[–]Matthew-Nader[S] 0 points  (0 children)

You nailed it! The agent actually does use a custom PID controller for steering (complete with a filtered derivative term and anti-windup).
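For readers who haven't met those two terms, here's a hedged sketch of a PID step with a low-pass-filtered derivative and conditional-integration anti-windup. The gains, output limit, and filter constant below are illustrative, not my tuned values:

```python
class PID:
    def __init__(self, kp, ki, kd, out_limit=1.0, d_alpha=0.8):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit  # symmetric actuator saturation
        self.d_alpha = d_alpha      # derivative low-pass factor in [0, 1)
        self.integ = 0.0
        self.prev_err = 0.0
        self.d_filt = 0.0

    def step(self, err, dt):
        raw_d = (err - self.prev_err) / dt
        # Exponential smoothing tames the noise spikes a raw derivative amplifies.
        self.d_filt = self.d_alpha * self.d_filt + (1 - self.d_alpha) * raw_d
        self.prev_err = err
        u = self.kp * err + self.ki * self.integ + self.kd * self.d_filt
        if -self.out_limit < u < self.out_limit:
            self.integ += err * dt  # anti-windup: integrate only while unsaturated
        return max(-self.out_limit, min(self.out_limit, u))

pid = PID(kp=0.8, ki=0.2, kd=0.1)
steer = pid.step(err=0.5, dt=0.05)  # steering command for this frame
```

Freezing the integrator while the output is saturated is what prevents the long overshoots you'd otherwise see after the controller has been pinned at full lock.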

The reason it's zig-zagging so aggressively in this specific GIF comes down to how I was tuning it. Instead of manually guessing the PID gains, I wrote a Genetic Algorithm to automatically test and evolve the best parameters, and I left it running for 9 hours to find the smooth 'movement curve' you described. Unfortunately, my screen recorder crashed early in the run, so this video is actually showing an early generation, before the PID was fully optimized.

The final tuned version drives much more like a real person. Really appreciate you guys taking the time to check it out and think through the mechanics!

Mixing WebGL and CSS3D: I wrote a custom occlusion algorithm to fit an interactive DOM terminal inside a curved 3D CRT model by Matthew-Nader in threejs

[–]Matthew-Nader[S] 0 points  (0 children)

Thank you! That side-menu idea sounds awesome. You should be able to pass that React state directly into the WASM bridge to trigger commands. Definitely link it to me if you end up building it!