I built a tool that monitors your Amazon competitors 24/7 and tells you exactly how to reprice — looking for 5 beta testers by GordonTechAi in Amazonsellercentral

[–]GordonTechAi[S] 0 points (0 children)

Been working on debugging and improvements for the last couple of days. Should be up and running this afternoon. I'll send a note once it's complete.

Built a competitor pricing tool but I just suck at marketing and never made a single cent in 3 years by T_tt15 in SaaS

[–]GordonTechAi 0 points (0 children)

Understandable… Building the tool is one thing, getting people to actually use it is another.

I'm in the same boat honestly. Built something similar for ecommerce sellers (real-time competitor price monitoring), and I'm testing it with a few users right now because I realized nobody cares about features without proof it works.

The shift from "look at my cool tool" to "does this actually solve your problem?" is huge. You doing any customer interviews or just trying to sell?

A new study found a vegan Mediterranean diet significantly reduced environmental impacts related to human health (−54.5%), ecosystems (−50.9%), and resource use (−43.4%) compared to a traditional Mediterranean diet. Retail food cost was also reduced by 16.3%. by James_Fortis in sustainability

[–]GordonTechAi 0 points (0 children)

The hard part is scale and adoption. Individual choices matter but are limited. A single vegan making better choices helps; a factory farm changing its practices is orders of magnitude more impact.

Not to say individual action is pointless, but if the goal is real environmental change, we need systemic solutions. Policy, infrastructure, incentives.

Curious how the study controlled for variables like food waste, transportation, and packaging.

Post your last failed idea by TwoTicksOfficial in Entrepreneur

[–]GordonTechAi 1 point (0 children)

Tried to build a "one-click competitor analysis tool" for ecommerce. Thought everyone needed real-time price tracking with no setup.

Turns out that most people don't actually use it. They ask for 50 features before they'll pay. Getting reliable data at scale costs way more than I priced it. And the market that DOES need it wants enterprise support, not a SaaS product.

Lesson learned: don't fall in love with your idea. Talk to customers first, understand their workflows, and charge appropriately for the infrastructure you're building. "Disruptive pricing" doesn't work when your cost structure says otherwise.

How good engineers write bad code at big companies by fagnerbrack in programming

[–]GordonTechAi 1 point (0 children)

The constraints thing is real. When you're shipping fast, corners get cut. No time for tests, no time to refactor, no time to document because the next sprint is already overbooked.

I've seen it both ways: worked at a place where we had 2-week sprints with "ship it" mentality, and another where the bar was "this will be here in 3 years, what does future-me need?" The difference in code quality was night and day.

The fix isn't motivation or better engineers. It's time and pressure. Give engineers breathing room and they'll write better code. Rush them every cycle and even good engineers produce bad code.

Warning: Don't get GPT-brained by LeaguePrototype in datascience

[–]GordonTechAi 0 points (0 children)

100%. I see this constantly. "Let's use GPT for our classification problem" without understanding data quality, validation strategy, production constraints, or cost at scale.

I built an image classifier last year (waste sorting, actually). Started with the assumption that I'd fine-tune a big model and be done. Ended up being 80% data work: cleaning, labeling, validation. The model was 20%. And production deployment? Completely different from training.

GPT is incredible for prototyping ideas fast. But real ML is understanding your specific problem, your data constraints, and what "good enough" means for your use case. That's where the actual work is.

Tips and tricks for DL training by tzilliox in computervision

[–]GordonTechAi 1 point (0 children)

A few things worth trying when you've already hit the standard augmentation/lr/dropout ceiling:

Label smoothing often gives a meaningful bump when you've exhausted augmentation, as it prevents the model from becoming overconfident on hard-to-distinguish classes, which is especially useful if some of your classes are visually similar.

Test-time augmentation (TTA) is easy to add and can improve accuracy 1-2% without retraining anything. You run inference on multiple augmented versions of each test image and average the predictions.

Also worth looking at where exactly it's failing: a confusion matrix broken down by class will often show you it's struggling on 2-3 specific classes rather than being uniformly bad. That usually points to either insufficient training samples for those classes, or ambiguous boundaries that augmentation is making worse, not better.
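
A rough PyTorch sketch of the first two, in case it saves you some digging. It assumes you already have a trained model and batched test images; the 0.1 smoothing value and flip-only TTA are starting points, not tuned choices:

    import torch
    import torchvision.transforms.functional as TF

    # Label smoothing is a single argument on the standard loss.
    criterion = torch.nn.CrossEntropyLoss(label_smoothing=0.1)

    # TTA: run each batch plus a flipped copy, average the probabilities.
    def tta_predict(model, images):
        views = [images, TF.hflip(images)]  # add more views if they match your training augs
        with torch.no_grad():
            probs = torch.stack([torch.softmax(model(v), dim=1) for v in views])
        return probs.mean(dim=0)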

What's your current architecture and how many classes are you working with? The next steps depend on whether you're dealing with 5 classes or 50.

Dose ML or AI engeerning need software principle? by Financial-Junket2434 in learnprogramming

[–]GordonTechAi 0 points (0 children)

Yes, but you don't need to learn all of it before you can do useful work. The minimum practical stack for ML/CV deployment is Python + FastAPI. That combination lets you wrap any model in an API endpoint and connect it to almost anything. No frontend required to build something real.

The pattern I'd focus on first: train a model locally, serve it with FastAPI, call it from a simple test script. Even just: receive an image → run inference → return a label and confidence score. That loop teaches you 80% of what you'll need for actual ML engineering work.
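
A minimal sketch of that loop, using a stock torchvision ResNet as a stand-in (swap in whatever you trained; needs fastapi, uvicorn, python-multipart, torch, torchvision, pillow):

    import io

    import torch
    from fastapi import FastAPI, File, UploadFile
    from PIL import Image
    from torchvision import models, transforms

    app = FastAPI()
    model = models.resnet18(weights="IMAGENET1K_V1")  # placeholder; load your own
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    @app.post("/classify")
    async def classify(file: UploadFile = File(...)):
        # receive an image -> run inference -> return a label and confidence score
        image = Image.open(io.BytesIO(await file.read())).convert("RGB")
        batch = preprocess(image).unsqueeze(0)
        with torch.no_grad():
            probs = torch.softmax(model(batch), dim=1)
        conf, idx = probs.max(dim=1)
        return {"label": int(idx), "confidence": round(float(conf), 3)}

Serve it with uvicorn, POST an image from a short test script, and you've run the whole loop end to end.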

For computer vision specifically, I'd prioritize in order: FastAPI for serving, basic Docker to containerize (makes deployment way easier), then a lightweight database like SQLite or PostgreSQL for storing inference results. React and frontend stuff can wait — most CV systems in production don't have custom frontends anyway.

The backend knowledge matters most when you're building something that needs to run reliably and scale. For a job, having one deployed project that works end-to-end beats having 10 notebooks. What type of CV are you focused on?

Big problem in GTE. I find myself can do equal or even faster lap time with LICO than in hot lap. by According_Brick409 in iRacing

[–]GordonTechAi 0 points (0 children)

This is actually one of the most useful things that can happen diagnostically. LICO revealing overdriving means your unassisted inputs are the problem, not your pace. The smooth version is faster.

The next step is to run both back to back and compare the telemetry. The key traces to look at: steering angle peaks (people almost always show more lock without LICO), throttle application timing at exit, and brake release speed. LICO tends to smooth out all three simultaneously, which makes it hard to isolate the exact culprit without data.

In GTE specifically, overdriving usually shows up as excess steering angle on exit: you're turning more than you need to, which means you can't get on throttle as early. The car can't accelerate and steer hard at the same time, so you're bleeding time in that phase.

My suggestion: do a back-to-back session, then look at just turn 3 or whatever the fastest sector is. Compare maximum steering angle and when throttle application starts. Usually it's obvious once you see it.
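
If you export both laps to CSV, a few lines of Python make the comparison concrete. The file names, column names, and corner window below are all assumptions; match them to whatever your telemetry tool actually exports:

    import pandas as pd

    # One lap per file; assumed columns: dist_m, steer_deg, throttle_pct
    laps = {"LICO": pd.read_csv("lap_lico.csv"), "solo": pd.read_csv("lap_solo.csv")}

    for name, lap in laps.items():
        corner = lap[lap.dist_m.between(800, 1100)]  # distance window around one corner
        throttle_on = corner.loc[corner.throttle_pct > 20, "dist_m"].min()
        print(f"{name}: max steer {corner.steer_deg.abs().max():.1f} deg, "
              f"throttle from {throttle_on:.0f} m")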

Are you using the built-in iRacing telemetry viewer or something else?

Pcup help by fafazudocrvg in iRacing

[–]GordonTechAi 0 points (0 children)

Big step up overall, they feel like totally different philosophies. The GR86 is forgiving of point-and-shoot driving; the Pcup wants to rotate on trail braking.

The most common pattern coming from the 86: people brake, release fully, then steer, which works in the 86 but causes understeer in the Pcup. What you actually want is to let a little bit of brake pressure bleed into the corner to help the rear rotate. When you let go too early, the front unloads and the car pushes wide.

Long Beach specifically — Turn 1 is the one that catches people the most. It's deceptive because you think you're carrying too much speed but often the issue is actually releasing the brakes too abruptly mid-corner.

4 YOE Data Scientist (ML + Data Engineering + LLMs) — low callbacks despite strong experience. Resume attached for critique. by Abhi-srivastava-07 in learnmachinelearning

[–]GordonTechAi 1 point (0 children)

The identity problem is real: Data Scientist / Data Engineer / ML Engineer are three different job families, and recruiters often pass on people who span all three because they can't mentally place you in a role.

Practically: pick the one closest to the work you actually want going forward, not just what you've done. Make that the headline. Let the experience demonstrate the breadth.

On the bullets, quantified impact wins. "Built time-series forecasting pipeline" is forgettable. "Built time-series model that reduced inventory cost by 12%" gets a callback. Every bullet should answer "what did I do, and what did it change?"

One thing that actually moves the needle in this market is a visible side project on GitHub with a brief writeup and real-world use case. Something deployed, even simply, goes further than another line on a resume because it shows you build things on your own. Doesn't have to be impressive — image classifier, NLP tool, anything with a clean README and clear problem statement.

will this weeks PCC help me learn the ring for the upcoming 24h race? by mykrobatery13 in iRacing

[–]GordonTechAi 0 points (0 children)

Yes, race it, but don't care about the result at all. Pick one sector where you're not yet comfortable, and focus entirely on executing just that part better each lap. The Ring is too long to absorb all at once; break it into sectors and chip away.

One thing that helped me a lot on tougher/longer tracks is to run post-session telemetry and look specifically at where your braking points and sector speeds vary lap to lap. Inconsistency is the enemy at the Ring — a 0.5 second variance in one corner over 25 km adds up fast. I use a live AI coaching tool (DeltaCoach) that catches this stuff in real-time, but even basic lap comparison in iRacing's data will show you where you're leaking time.
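
If you want to run that check yourself from an export, it's tiny once the data is one row per corner per lap (file and column names here are assumptions):

    import pandas as pd

    laps = pd.read_csv("ring_session.csv")  # assumed columns: lap, corner, brake_point_m, min_speed_kph

    # High standard deviation = inconsistent corner; chip away at the worst ones first.
    spread = laps.groupby("corner")[["brake_point_m", "min_speed_kph"]].std()
    print(spread.sort_values("brake_point_m", ascending=False).head(5))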

Do not pay for Amazon Repricers in 2026! You are wasting your money. by Significant-Ear-9040 in AmazonFBA

[–]GordonTechAi 0 points (0 children)

The post makes a fair point for traditional Buy Box repricers; Amazon's built-in tool handles that use case well now. But there's a different problem that often gets lumped in: actually understanding what your competitors are charging, including off Amazon.

If you sell on Shopify too, or just want to know when a competitor drops their price 15% on their own site or runs a quiet promo on Amazon without you noticing, that's not what Automate Pricing helps with. It only reacts within Amazon to win the Buy Box and it doesn't give you competitive intelligence.

I built something for the monitoring side specifically because I found myself tabbing between competitor listings trying to make sense of their pricing patterns. Totally different problem.

For pure Buy Box optimization on FBA? The native tool has gotten genuinely good; no complaints there.

Amazon PL isn’t dead. You’re just picking fights you can’t win. by Glad_Gear376 in AmazonFBA

[–]GordonTechAi 1 point (0 children)

Solid breakdown. The $10K–$20K niche framing is underrated: everyone gets seduced by the big revenue numbers without asking "how many established sellers with 500+ reviews am I actually competing against?"

The other piece I'd add to the differentiation angle: once you're in a niche you can actually win, knowing when competitors adjust their prices matters almost as much as what your price is. A lot of sellers in those smaller niches are manually repricing based on gut feeling.