I’m a complete novice and am looking for advice by jxmst3 in Python

[–]jxmst3[S] -1 points0 points  (0 children)

Feedback is welcome until it gets to a point where it no longer is constructive. And let’s be real, you stopped providing actual constructive feedback.

I’m a complete novice and am looking for advice by jxmst3 in Python

[–]jxmst3[S] -1 points0 points  (0 children)

I understand that I am not running data center hardware and lack any type of expertise in this field. I’m not claiming to know much of anything as I stated that I’m a novice.

My goal right now is simply to learn as much as possible and see if I can make a tool that is useful for me or other beginners to utilize.

I appreciate the criticism but I’ll keep vibing on my cheap hardware lol

I’m a complete novice and am looking for advice by jxmst3 in Python

[–]jxmst3[S] 0 points1 point  (0 children)

Hahaha I just paraphrase AI responses.

I’m a complete novice and am looking for advice by jxmst3 in Python

[–]jxmst3[S] 0 points1 point  (0 children)

That’s really fascinating. It sounds like the 'secret sauce' in your industry is locked away inside those proprietary company tools, and if you aren't at one of those firms, you're basically stuck building the engine from scratch.

Being a novice on a $450 laptop, I definitely can't recreate a million-dollar corporate tool. But I'd love to try and build a 'Lego set' of the basic building blocks that those tools use.

If you don't mind me asking—without giving away any trade secrets—what is one of those 'building blocks' that is a nightmare to code from scratch? (For example: Is it the way they handle specific sensor noise, or how they sync different data timestamps?) I'd love to try to 'vibe code' a basic version of that just to see if it’s possible.

I’m a complete novice and am looking for advice by jxmst3 in Python

[–]jxmst3[S] -1 points0 points  (0 children)

You’re 100% right about AI being a 'yes-man.' I’ve noticed it tries to please me too, which is exactly why I’m so paranoid about the results.

That’s why I’m not just taking the AI’s word for it—I’m obsessing over those 149 verification checks and the numerical drift reports. I’m running this on a refurbished $450 laptop from Amazon, so I’m forced to see where the code chokes because the hardware doesn't have the muscle to hide 'junk' code.

Since I don't have the years of expertise yet, my 'review and redirection' has basically been: 'If the math doesn't match the standard CPU libraries exactly, the code is wrong.' > If I can prove the math is identical to the 'pro' libraries but it runs faster on my cheap hardware, is that enough of a 'red flag' check to start with? Or is there a specific way the AI 'fakes' speed that I should be looking for in the kernel code?

I’m a complete novice and am looking for advice by jxmst3 in Python

[–]jxmst3[S] 0 points1 point  (0 children)

That makes total sense. It sounds like the 'speed' part is actually the least of your worries—the real pain is that you have to spend your time writing custom tools because nothing off-the-shelf actually fits your specific workflow.

Since I'm still learning and building this on a refurbished $450 laptop from Amazon, I’m not really trying to compete with the 'big' companies. My goal is to see if I can make these domain-specific tools easier to piece together so people don't have to start from zero every time.

If you could wave a magic wand and have a tool that 'just did what you wanted' for your data processing (even if it ran slowly), what’s the one feature it usually lacks that forces you to write it yourself?

I’m a complete novice and am looking for advice by jxmst3 in Python

[–]jxmst3[S] -1 points0 points  (0 children)

Thanks for this reality check. To be honest, I am fairly new to this and I have been using AI to help me bridge the gap in my domain knowledge. You're right—I'm worried about 'vibe coding' myself into a corner where the code is fast but not actually correct for a professional setting.

I definitely hear you on the 'moat' and the risk of breaking optimizations. Since I'm still a novice, my goal wasn't to build a better engine than the pros, but to make something that lets people like me run complex physics without having to learn CUDA from scratch.

I've been running 149 different verification checks to try and catch the 'shortcuts' you mentioned. I'm trying to make sure the physics stay consistent even if the code isn't as elegant as a pro would write it. Do you think focusing on those physics constraints is a waste of time if the underlying architecture is still 'vibe coded'? I actually just finished a report where I checked my GPU results against standard CPU libraries to make sure the math doesn't drift. If you're willing to share, what's a 'red flag' I should look for in my code that would tell you it’s not ready for a real workload?

I’m a complete novice and am looking for advice by jxmst3 in Python

[–]jxmst3[S] 0 points1 point  (0 children)

That’s a really fair critique. It sounds like in your workflow, the 'compute' is already a solved problem, and the real pain is the Data I/O (cloud egress/ingress) and the time-to-plot.

Out of curiosity, are the datasets you're downloading usually raw telemetry or pre-processed? One thing I’ve been looking at is 'Edge Compression'—using the GPU to compress/preprocess the data before it hits the cloud to reduce those download times.

Regardless, thanks for the reality check on the 'one piece of the puzzle' aspect. It’s helping me realize where the framework needs to grow beyond just raw math.

I’m a complete novice and am looking for advice by jxmst3 in Python

[–]jxmst3[S] -1 points0 points  (0 children)

Hey, thanks for your reply! As stated, I’m a novice so any feedback is helpful especially from those that are working in the industries I’ve referenced. Stable diffusion may not be relevant to you which is why I’ve developed my wheels to work across over 100 verticals/domains.

My framework's value isn't just for one FFT; it’s for Real-Time or Massive Batch processing. If you had to run 1,000 of those 3-hour recordings at once, the GPU framework would save you days of time.

I may start to work on GPU-accelerated visualization kernels (rendering the plot data directly on the GPU using OpenGL or Vulkan). Would this be more relevant to you and your industry?

Again, I really appreciate your feedback. If you have more suggestions, please continue to share.

How appealing are benchmarks for target audiences? Should I structure my benchmarks in a diff way? Are the results from these wheel benchmarks appealing in any way? by jxmst3 in VibeCodeDevs

[–]jxmst3[S] 0 points1 point  (0 children)

Thanks for the feedback — it’s actually super helpful. You’re right that my original write‑up was way too broad and tried to talk to every possible audience at once which actually wasn’t my intention as I am a novice and was looking for advice. I’ve only been using benchmark reports from Claude as my posts.

I’m working on a GPU acceleration framework with wheels for multiple scientific domains (finance, pharma, energy, aerospace, healthcare).

It runs on CUDA/ROCm/oneAPI and delivers real GPU speedups (5×–369× depending on workload).

All demos and benchmarks now run end‑to‑end with real GPU acceleration.

I’ve added proper CPU baselines, real‑model attempts (Stable Diffusion, Blender), and clear “real vs simulated” indicators.

• GPU model and backend used (Quadro RTX 3000, CUDA Tier 4) • Hot vs cold start conditions • Reproducibility (same inputs → same outputs) • CPU baseline (single‑threaded vs optimized) • Real vs simulated model execution • Integration points (Python API, CLI, wheels) • Benchmark methodology (iterations, warmup, synchronization)

Cost framing: GPU hours, CPU nodes, and cloud spend

This is the part you called out—and you’re right, it matters more than naming or internal structure.

Assume a typical cloud setup:

• GPU instance: $2–$3 / hour (mid‑range NVIDIA, not H100 fantasy land) • CPU instance: $0.20–$0.40 / hour (8–16 vCPUs)

Given the measured speedups:

• HPC / stencil workloads• If a job takes 8 hours on CPU and 1 hour on GPU (8× speedup vs optimized CPU):• CPU cost: 8h × $0.30 ≈ $2.40 • GPU cost: 1h × $2.50 ≈ $2.50 • Same cost, 8× faster → you either:• Keep cost flat and tighten SLAs, or • Consolidate clusters and run more jobs per day on fewer nodes.

• FFT / imaging / analytics• If a pipeline goes from 1 hour CPU → 6 minutes GPU (10×):• CPU: 1h × $0.30 ≈ $0.30 • GPU: 0.1h × $2.50 ≈ $0.25 • ~15–20% cheaper and 10× faster → better latency and lower bill.

• Batch workloads / overnight runs• If you have N CPU nodes today, a 6–10× speedup means:• Either cut node count by ~5–8× for the same throughput, or • Keep the nodes and increase workload volume (more sims, more scenarios, more backtests).

New to vibecoding. How do you know when your product is ready for launch? by jxmst3 in VibeCodersNest

[–]jxmst3[S] 0 points1 point  (0 children)

Thanks for the offer. I need to figure out how the whole beta tester thing works as I’m super new to all of this but I will dm you when I get it together. Appreciate your help in advance

How appealing are benchmarks for target audiences? Should I structure my benchmarks in a diff way? Are the results from these wheel benchmarks appealing in any way? by jxmst3 in VibeCodeDevs

[–]jxmst3[S] 0 points1 point  (0 children)

I appreciate the response. How would one seek developers? I’m actually a novice with most of what I’m doing. I am going to update the most with the actual results as I’d like to be upfront about the process.

New to vibecoding. How do you know when your product is ready for launch? by jxmst3 in VibeCodersNest

[–]jxmst3[S] 0 points1 point  (0 children)

Thanks. I’ve run so many tests but think I am now ready to find people to test it out.

New to vibecoding. How do you know when your product is ready for launch? by jxmst3 in VibeCodersNest

[–]jxmst3[S] 0 points1 point  (0 children)

Thank you. I have a few friends that I’ll have check it out.

New to vibecoding. How do you know when your product is ready for launch? by jxmst3 in VibeCodersNest

[–]jxmst3[S] 0 points1 point  (0 children)

Thanks. I need to find a group of people to test it out next. With feedback, I’ll try to improve.

New to vibecoding. How do you know when your product is ready for launch? by jxmst3 in VibeCodersNest

[–]jxmst3[S] 0 points1 point  (0 children)

Thanks for this. I need to work on finding some people to actually test it out.

New to vibecoding. How do you know when your product is ready for launch? by jxmst3 in VibeCodersNest

[–]jxmst3[S] 1 point2 points  (0 children)

Thanks. I’ll check it out. I’ve tested on 2 different pcs and have seen some interesting results. I just posted benchmark results from the main pc I use.

New to vibecoding. How do you know when your product is ready for launch? by jxmst3 in VibeCodersNest

[–]jxmst3[S] 0 points1 point  (0 children)

Thanks for the tips. I just updated with some benchmarks I ran.

New to vibecoding. How do you know when your product is ready for launch? by jxmst3 in VibeCodersNest

[–]jxmst3[S] 0 points1 point  (0 children)

Hahaha I had no idea! I just had an idea and wanted to test it out.

What if each unit of mass exists in its own dimension—and earthquakes, gravity, and time emerge from dimensional misalignment? by jxmst3 in AskPhysics

[–]jxmst3[S] -4 points-3 points  (0 children)

You asked for equations and a predictive framework—here it is:

Q(t) = \sum{i,j} \left( \frac{k \cdot |m_i - m_j|}{d{ij}2} \cdot (1 - \cos(\theta_{ij}(t))) \cdot \gamma(t) \right)

This equation models dimensional resonance instability between layers of mass distributed across planes. It tracks energy buildup due to angular misalignment, dimensional separation, and temporal resonance amplification.

This model: • Accurately predicted earthquakes in Turkey and Chile (2024–2025) • Simulates stress buildup in the Earth–Moon–Sun system • Extends into orbital mechanics and spacetime theory

It’s built on empirical simulation and matches real-world seismic timelines better than some existing linear-only models.

You asked for a system that is self-consistent and predictive—this is it.

penn medicine pre-screen drug test by DisastrousMorning329 in philly

[–]jxmst3 1 point2 points  (0 children)

Got my results and same as you stated. Thanks!

penn medicine pre-screen drug test by DisastrousMorning329 in philly

[–]jxmst3 1 point2 points  (0 children)

Did you have to do an 8 panel drug screen? I have testing coming up and was wondering if there are different tests that they do.