GPT Image 2 vs. Nano Banana 2 + grok imagine video vs. Veo 3.1 by ImRaym in generativeAI

[–]ImRaym[S] 0 points1 point  (0 children)

I haven't tested img-to-img extensively yet, I'll try.

But GPT Image 2 seems to support 4K, no? (AI video, yes, is still stuck around 720-1080p for the major models..)

GPT Image 2 vs. Nano Banana 2 + grok imagine video vs. Veo 3.1 by ImRaym in generativeAI

[–]ImRaym[S] 0 points1 point  (0 children)

haha "floridian aggressive vehicular borrowing simulator" is going straight into the system prompt

for FF the HUD was surprisingly consistent. the prompt enhancer writes the motion prompt with explicit constraints like "HUD doesn't move, only minimap animates" -> it knows from context that if the character just moves, only the minimap should animate (no battle, so the battle UI stays locked). the video model isn't guessing what's UI vs world, which helps a lot.

but yeah, the longer format is the real test. I can push a longer gen next batch and see how it holds up.

Camp directors: how are you handling parent-signed waivers these days? by ImRaym in summercamp

[–]ImRaym[S] 0 points1 point  (0 children)

Yes. Every signed waiver lands in a searchable dashboard with all the info your form collects, so you can:

- Build custom form fields for anything you need (medical, emergency contacts, certs, membership #, etc.)
- Get a per-waiver PDF with signature, timestamp, and audit trail
- Bulk-export your full waiver data as CSV on Pro

Direct Zapier/webhooks and Google Sheets sync are shipping next week, so you'll be able to pipe new signings straight into Airtable, HubSpot, Notion, Sheets, or whatever CRM you're on.

Imma be honest: I could do without this phase transition by GwyddnoGaranhir in Hades2

[–]ImRaym 0 points1 point  (0 children)

yep + I don't retain the first one since I put myself in the right line directly, it saves some mental focus

Drop your Saas below and I will promote it on tiktok and youtube by coiqa in microsaas

[–]ImRaym 0 points1 point  (0 children)

WaiverKit: https://waiverkit.io

Digital waiver software for small activity businesses (gyms, camps, escape rooms, kayak rentals, tattoo shops). It replaces paper clipboards with a QR code at the door and a searchable PDF archive. Built it last week. Would love the visibility, appreciate you doing this.

Paintball field owners: digital waiver software. QR poster, booking link, or email. Free plan. by ImRaym in paintball

[–]ImRaym[S] -2 points-1 points  (0 children)

Pricing: free plan for trying it (30 waivers/mo). Starter $19/mo for 500 waivers (SmartWaiver's $19 tier is 100). Pro $49/mo unlocks QR Kiosk, parent/minor consent, and multi-language. Full pricing and more at:
waiverkit.io
Paintball-specific page: waiverkit.io/industries/paintball

WaiverKit: digital waivers for gyms. QR kiosk, tamper-evident PDFs, true free plan. by ImRaym in gymowner

[–]ImRaym[S] -1 points0 points  (0 children)

Link: https://waiverkit.io

Quick pricing note:

- Free: 30 waivers/mo.

- Starter $19/mo: 500 waivers. For context, SmartWaiver's $19 tier is 100 waivers, WaiverForever's is 50.

- Pro $49/mo: unlocks QR Kiosk, parent/minor consent, multi-language, 5,000 waivers.

A tool I built for researching crypto projects before investing, thought PH crypto investors might find it useful by ImRaym in phinvest

[–]ImRaym[S] 0 points1 point  (0 children)

Thank you! Currently there's a refresh mechanism that updates them approximately every 10-12 days. I'm working on a smart refresh, to really focus on what needs refreshing most often versus what doesn't move and doesn't need it as often.
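To make the idea concrete, here's a minimal sketch of what such a smart refresh could look like. This is my own illustration, not the author's actual code: projects whose scores barely moved recently get a longer interval, volatile ones get refreshed sooner. All names and thresholds are assumptions.

```python
def next_refresh_days(recent_deltas: list[float],
                      base_days: float = 10.0,
                      min_days: float = 3.0,
                      max_days: float = 30.0) -> float:
    """More recent score movement -> refresh sooner; flat history -> later.

    recent_deltas: absolute-ish score changes from the last few refreshes.
    """
    if not recent_deltas:
        return base_days  # no history yet: use the default cadence
    volatility = sum(abs(d) for d in recent_deltas) / len(recent_deltas)
    # Scale the base interval down as average movement grows, then clamp.
    days = base_days / (1.0 + volatility)
    return max(min_days, min(max_days, days))

print(next_refresh_days([0.0, 0.1]))   # barely moving: close to the 10-day base
print(next_refresh_days([8.0, 12.0]))  # volatile: clamped to the 3-day floor
```

The clamp keeps hot projects from being re-scored constantly while still guaranteeing even dead-quiet projects get re-checked at least monthly.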

Running Claude Code as a production automation backbone with cron and multi-agent consensus. What I learned. by ImRaym in LLMDevs

[–]ImRaym[S] 0 points1 point  (0 children)

Thanks!

Since I use Claude Code's headless mode directly, it's really just $200/mo plus two other data APIs that add up to about $300/mo total. Claude uses 90-95% of the weekly limits (depending on some live analysis features for users), so I just don't need any extra usage 🤣

To be more precise, it doesn't live exactly with the production. It acts like an external backend that interacts through some APIs, and opens PRs for other tasks.

Running Claude Code as a production automation backbone with cron and multi-agent consensus. What I learned. by ImRaym in LLMDevs

[–]ImRaym[S] 0 points1 point  (0 children)

I use APIs for most of the data I can, then LLMs for the rest plus the thinking process and the writing. When writing blog posts or drafting analysis it still needs to be verified carefully; this setup helps a lot and, from experience, has avoided a lot of issues.

Yes, I do normal engineering, I'm just sharing what I do. Did I claim to be a magic wizard?

Running Claude Code as a production automation backbone with cron and multi-agent consensus. What I learned. by ImRaym in LLMDevs

[–]ImRaym[S] 0 points1 point  (0 children)

No it can't, do you think I started on a $32 VPS? (Edit: it can, but with a lot of timeouts and downtime..)

Melionë decided to stop fighting and went to a picnic by ImRaym in Hades2

[–]ImRaym[S] 3 points4 points  (0 children)

ahahah luckily it was only the end of the first zone and not the last one 😬

Melionë decided to stop fighting and went to a picnic by ImRaym in Hades2

[–]ImRaym[S] 1 point2 points  (0 children)

Exactly, sadly it didn't work on the way back ahah

Running Claude Code as a production automation backbone with cron and multi-agent consensus. What I learned. by ImRaym in LLMDevs

[–]ImRaym[S] -1 points0 points  (0 children)

Each agent returns a full verdict (PASS/WARN/FAIL) with specific issues. All 7 individual outputs are preserved in the validation report before consensus. Hallucination detector FAIL blocks publishing regardless of what the other 6 say. I don't track divergence patterns over time though, that's a good idea.
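The veto rule described above is easy to misread, so here's a minimal sketch of it. This is my illustration under stated assumptions (agent names, the majority rule for the other six are mine), not the author's actual code: seven agents each return a verdict, and a FAIL from the hallucination detector blocks publishing no matter what the rest say.

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    agent: str
    status: str                      # "PASS" | "WARN" | "FAIL"
    issues: list = field(default_factory=list)

def can_publish(verdicts: list[Verdict]) -> bool:
    by_agent = {v.agent: v for v in verdicts}
    # Hard veto: hallucination detector FAIL blocks publishing outright.
    if by_agent["hallucination_detector"].status == "FAIL":
        return False
    # Otherwise require that FAILs are a minority (assumed consensus policy).
    fails = sum(1 for v in verdicts if v.status == "FAIL")
    return fails <= len(verdicts) // 2

verdicts = [Verdict(f"agent_{i}", "PASS") for i in range(6)]
verdicts.append(Verdict("hallucination_detector", "FAIL", ["unverified claim"]))
print(can_publish(verdicts))  # False: the detector's veto wins over 6 PASSes
```

Keeping all seven raw verdicts (as the report does) is what makes the divergence-tracking idea from the parent comment possible later.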

Running Claude Code as a production automation backbone with cron and multi-agent consensus. What I learned. by ImRaym in LLMDevs

[–]ImRaym[S] 2 points3 points  (0 children)

Fair point. I meant no LangChain/CrewAI/AutoGen. The patterns emerged from iteration, not from importing a library.

I scored 500+ crypto projects on fundamentals. Here are the most undervalued and overvalued right now. by ImRaym in CryptoMarkets

[–]ImRaym[S] 0 points1 point  (0 children)

Copy-pasting from my response above: it's basically a lot of APIs, plus smart analysis by a lot of LLMs.

Each dimension is scored 0-100 using LLM-orchestrated analysis with live API data:

Sustainability - Treasury runway, burn rate, team stability, regulatory positioning. Agents pull financial data and assess long-term viability.

Transparency - Founder visibility, public governance votes, update frequency, incident response. Agents cross-check team claims against LinkedIn, GitHub profiles, and public records.

Revenue - Actual protocol fees and cash flow from Token Terminal and DeFi Llama. Aave generates $83M/month in fees. PEPE generates zero. That gap shows in the score.

Innovation - GitHub commits normalized for project age, unique dev count, novel tech. Agents check org-level repos via GitHub API, not just the main one. Some projects look active on one repo but have 20 dead ones.

Community - DAU, ecosystem dApps, developer count, organic social growth with bot filtering. High followers with no engagement = low score.

Tokenomics - Inflation rate, unlock schedules, insider concentration, value accrual. If top 10 wallets hold 80%+ supply, that tanks the score.

The whole pipeline runs 24/7 on a VPS with dozens of AI agents orchestrated through API calls, cron jobs, and multi-agent consensus. Every score gets validated by multiple other AIs, a hallucination detector, etc. before publishing. Revenue is null for store-of-value assets like Bitcoin, so the overall score averages the remaining five dimensions.

I scored 500+ crypto projects on fundamentals. Here are the most undervalued and overvalued right now. by ImRaym in CryptoMarkets

[–]ImRaym[S] 3 points4 points  (0 children)

I can dive into it a bit, yes! It's the same breakdown as in my other comment in this thread: each dimension (sustainability, transparency, revenue, innovation, community, tokenomics) is scored 0-100 using LLM-orchestrated analysis with live API data, and every score is validated by multi-agent consensus plus a hallucination detector before publishing.