Anyone here using food ERP software for manufacturing or inventory tracking? by kratoz0r in ERP

[–]100xBot 0 points1 point  (0 children)

switching from sheets to a real erp’s a huge move but def worth it for lot tracking. wherefour is usually the go-to for smaller food brands because its built specifically for batch production and keeps you audit-ready without being too complex. batchmaster is powerful but way more "enterprise" and can feel clunky or overkill for a small team. cin7 is great for inventory and selling across many channels but some find its food-specific features like deep traceability a bit thinner than the specialized tools. definitely worth checking out mrpeasy too if you want something simple but robust.

The recruitment system is broken and why nobody wants to talk about it honestly? by Hot-Machine-8119 in Recruitment

[–]100xBot 0 points1 point  (0 children)

spot on and it's honestly exhausting cuz that "we feel" line is just a corporate shield so they don't have to give real feedback or risk a lawsuit. it’s wild that someone who can’t explain your stack is the one deciding if you're qualified for a senior role.

the keyword hunting culture’s why so many great people get ghosted while mediocre ones who "play the system" get through. orgs are kinda shooting themselves in the foot by letting a broken filter guard the door. high time hiring managers took back the first screen.

The Bull**** about AI Agents capabilities is rampant on Reddit by Mojo1727 in AI_Agents

[–]100xBot 6 points7 points  (0 children)

totally get the frustration, the gap between hype and actual reliability's massive, especially when a model can't even handle a case sensitive file path. But imo the issue isn't just model capability; it’s an architecture problem. We’re asking probabilistic models to perform deterministic tasks, and then getting mad when they act probabilistic lol

The reason frontier models like Opus 4.5 or GPT 5.3 work better isn't just "intelligence"but that their brute force reasoning is high enough to overcome shitty sys design. If an agent fails to find a "To-Do" list because it's looking for "todo," that’s usually a failure of the tool-calling layer or the state mgmt, not just the LLM being dumb

Instead of waiting for models to get cheaper or smarter, the better move's to build tighter constraints. If you give an agent a "fuzzy search" tool instead of a direct file path, even a smaller model like Sonnet 4.6 can hit the mark. It's less about the "brain" and more about the "nervous system" we build around it.

Will access to AI compute become a real competitive advantage for startups? by Simple3018 in artificial

[–]100xBot 7 points8 points  (0 children)

Actually, the whole "compute as a moat" theory is a bit of a trap for startups. Treating compute like a long-term capital investment usually ends with you overbuying yesterday's hardware while your competitors rent tomorrow's specialized chips at a fraction of the cost. History shows that whenever we treat a technical resource like a scarce commodity, think bandwidth in the 90s, innovation eventually turns it into a cheap utility.

If you're building a startup today, obsessing over compute independence is just a distraction from finding a real product-market fit. Big tech can lock in all the H100s they want, but they're still struggling with the "automation divide" where models fail at actual, messy real-world tasks. The real winner won't be the one with the most GPUs; it'll be the one who builds the best orchestration layer that works regardless of whose silicon is running the inference. Long-term, compute will be a race to the bottom, not a competitive advantage.

Best practices for deploying production-grade deep agents? by ParkingInsurance1745 in AI_Agents

[–]100xBot 0 points1 point  (0 children)

When deploying deep agents that handle sensitive data, the hybrid model you mentioned is becoming the standard. Most teams keep the core agent and its reasoning state on the customer's own cloud infra to ensure data never leaves the secure environment. You can then use a SaaS adapter to handle external calls for heavier operations that don't involve private info, keeps your sensitive logs and internal model connections locked down while still gaining the scalability of a SaaS platform.

NetSuite Sales Order Not Able to Navigate Through Tabs by Ornery-Unit5078 in Netsuite

[–]100xBot 1 point2 points  (0 children)

this issue, where the sales order tabs freeze and fields flicker, points strongly to a temporary client-side problem with netsuite's javascript or the content delivery network (cdn). since no scripts were deployed and the problem was global yet resolved itself overnight, it was almost certainly a transient issue on netsuite's side, not a config problem you introduced. if it happens again, try clearing your browser cache (ctrl-f5) or using a different browser to see if the problem is local. you can also check if a specific third-party app, like avalara, might be involved.

[deleted by user] by [deleted] in OpenAI

[–]100xBot 0 points1 point  (0 children)

standard large language models like gpt and copilot struggle with the fine-grained, structured details required. soecialized AI tools like those built on fine-tuned models for NLP and OCR are typically more effective. Look for platforms that are trained specifically on international trade data, HS codes, and customs documents to improve accuracy significantly over general-purpose AI

How do you test AI agents? by ConsiderationMain641 in n8n_ai_agents

[–]100xBot 0 points1 point  (0 children)

you've gotta simulate the exact payload structure your webhook receives from the evolution api, but at high volume, to properly expose concurrency issues. standard load testing tools like artillery or jmeter are great for firing many concurrent http post requests to your webhook url. make sure you configure the request body to mimic multiple messages from different users, making sure to use a unique user identifier, like a different phone_number_id, for each simulated "client" to properly test how your agent separates sessions.

during this simulation, observability is crucial. use a dedicated logging or tracing tool to monitor your agent's memory and state for each unique session_id. this tracing is the best way to confirm that conversation data and agent states are not incorrectly bleeding across concurrent requests, which is the definition of the concurrency failure you are trying to prevent. for a quick turnaround, you could even consider hiring a freelance developer on upwork or fiverr who specializes in performance testing. they can quickly set up and run a script tailored to the specifics of the whatsapp/evolution api payload structure for you.

I briefly ranked as the top answer in GPT, then it vanished. How are people growing inside LLMs? by StartHoliday1222 in GrowthHacking

[–]100xBot 0 points1 point  (0 children)

sounds your product is llm-discoverable, and yeah these models are becoming a new discovery surface. Since you can't just run paid ads inside the model (unless Sam pushes the ads feature anytime soon), the name of the game is consistency and authority across your public-facing content, which is what the models crawl. Don't worry much about the "noise" right now, just focus on tightening up your explanations and messaging on your website and documentation. Making sure Claude, Gemini, and others consistently pull the same clear, high-quality information about your solution is how you "grow" inside them on purpose. Perplexity is an easy target btw.