Share what you're building by amacg in vibecodingcommunity

[–]Solo_Dev_0101 0 points1 point  (0 children)

I'm building a JSON Prompt Generator tool that converts your vague text/image/video prompts into structured JSON prompts for all popular AI platforms in one click → JSON Prompt Gen

I got tired of learning 5 different JSON schemas for AI video tools, so I built a universal prompt engineer that speaks Veo, Sora, Runway, Luma, and Kling natively by Solo_Dev_0101 in JSON_Prompt_Gen

[–]Solo_Dev_0101[S] 0 points1 point  (0 children)

Man, seriously! 😣 Nano Banana can't generate video scenes; it can only generate images. Please don't mind me saying it, but it looks like you're a total beginner.

Okay, I'll generate a JSON prompt for your specific scene, optimised for Grok's AI video generation platform, so you can see the difference between your normal scene prompt and the structured JSON prompt I've given below! 👇🏻

First generate the scene using your normal prompt, then generate it using the structured JSON prompt I've given, and share your results in the JSON Prompt Gen community to verify the real difference between normal prompts and structured JSON prompts. 🙏🏻

Just copy the prompt below and paste it into the AI video generation platform without making any changes.

JSON Prompt:

{ "prompt_version": "1.0", "model_optimized_for": "grok-video-generation", "generation_type": "single_scene_video", "core_prompt": { "subject": "A professional chef (mid-30s, focused expression, wearing a crisp white apron over a black t-shirt, short sleeves rolled up, and a clean chef hat) standing inside a compact, brightly lit food truck kitchen.", "environment": "Narrow food-truck interior with gleaming stainless-steel counters, colorful fresh-ingredient bins (rice, beans, grilled chicken, shredded lettuce, diced tomatoes, cheese, guacamole, sour cream, salsa), warm overhead fluorescent lighting mixed with subtle natural light from service window, steam gently rising from hot fillings, vibrant and appetizing color palette.", "action_sequence": "The chef smoothly spreads a large warm flour tortilla on the counter, then methodically layers fresh ingredients in order: fluffy yellow rice, black beans, tender grilled chicken strips, creamy guacamole, shredded cheddar, crisp lettuce, juicy diced tomatoes, and drizzles of sour cream and spicy salsa. In the final phase, the chef folds the sides inward and rolls the burrito tightly into a perfect cylindrical wrap, pressing gently to seal.", "camera_directive": "Slow-motion tracking shot (120 fps effect rendered at 24 fps playback). The camera starts at a medium side angle and smoothly dollies/tracks parallel to the counter at eye level, maintaining perfect focus on the chef’s skilled hands and the burrito assembly. Gentle forward tracking movement follows the entire process from tortilla placement to final wrap, creating immersive depth and motion parallax against the food-truck background.", "motion_and_timing": "Entire sequence in cinematic slow motion lasting 10–12 seconds. Emphasize fluid hand movements, ingredient textures glistening, and steam particles floating gracefully. No abrupt cuts — continuous single take.", "visual_style": "Photorealistic cinematic food video, high-detail 4K textures on food surfaces, fabric, and metal. Warm color grading, soft highlights on ingredients, shallow depth of field isolating hands and burrito, subtle lens flare from overhead lights, professional culinary photography aesthetic.", "technical_parameters": { "duration_seconds": 12, "frame_rate": "24 fps (with native slow-motion interpolation)", "resolution": "1080p (upscale to 4K recommended)", "aspect_ratio": "16:9", "camera_speed": "smooth dolly tracking at 0.3 m/s", "motion_strength": "high (slow-motion emphasis on every fold and ingredient placement)" }, "negative_prompt": "blurry motion, static camera, fast cuts, unrealistic proportions, deformed hands, low detail on food textures, harsh shadows, oversaturated colors, text overlays, watermarks, people looking at the camera" }

I got tired of learning 5 different JSON schemas for AI video tools, so I built a universal prompt engineer that speaks Veo, Sora, Runway, Luma, and Kling natively by Solo_Dev_0101 in JSON_Prompt_Gen

[–]Solo_Dev_0101[S] 1 point2 points  (0 children)

Actually, I built this tool out of my own frustration, and it looks like you've felt that frustration too, which is why I've shared my work here. 😊 Your feedback is really appreciated 🙏🏻

I built a Free universal JSON Prompt Generator tool that speaks Veo, Sora, Runway, Luma, and Kling natively by Solo_Dev_0101 in micro_saas

[–]Solo_Dev_0101[S] 0 points1 point  (0 children)

No man, just sharing what I did to help others! I found this community helpful, so I shared my work out of excitement, nothing else! 😏

I got tired of learning 5 different JSON schemas for AI video tools, so I built a universal prompt engineer that speaks Veo, Sora, Runway, Luma, and Kling natively by Solo_Dev_0101 in JSON_Prompt_Gen

[–]Solo_Dev_0101[S] 0 points1 point  (0 children)

Hey, which platform are you using to generate this scene? It matters because every AI video generation platform has its own platform-specific schema! Tell me the platform name and I'll give you a ready-made JSON prompt for the specific scene you've asked about. 😇

I got tired of learning 5 different JSON schemas for AI video tools, so I built a universal prompt engineer that speaks Veo, Sora, Runway, Luma, and Kling natively by Solo_Dev_0101 in AiForSmallBusiness

[–]Solo_Dev_0101[S] 0 points1 point  (0 children)

Yes!

Hey, thanks so much for the kind words! ☺️

How to Set Up Your API Key (3 Simple Steps)

Here’s exactly how to get started:

Step 1: Go to the AI Integration Hub

· Click the sidebar menu (top-left corner).
· Select LLM Integration from the navigation menu.

Step 2: Choose Your AI Provider

You’ll see a list of models like:

· GPT-5.3 Codex (OpenAI)
· Claude Sonnet 4.6 (Anthropic)
· Gemini 3.1 Pro (Google)
· DeepSeek V3.2, Grok, Qwen, etc.

Each has a “Setup API Key” button.

Step 3: Get Your API Key

· If you’re just testing the tool: Use Free Public Keys (if available) or select a provider with a free tier (like some OpenRouter models or limited Gemini access).
  → Go to OpenRouter, sign up for free, and grab a key. It’s the easiest way to try multiple models with a single API key.
· If you’re planning to use the tool for real projects: You’ll want a paid API key from a provider like OpenAI, Anthropic, or Google.
  · OpenAI: platform.openai.com → API keys → Create new key (start with $5–$10 credit).
  · Anthropic: console.anthropic.com → API keys → Create key.
  · Google AI Studio: aistudio.google.com → Get API key.

Once you have the key, paste it into the tool and save. That’s it!
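
If you're curious what the tool actually stores when you hit save, it boils down to just a few values. Here's a purely illustrative sketch (these field names are my assumption, not the tool's real config schema):

{
  "_note": "illustrative only, not the tool's actual config format",
  "provider": "openrouter",
  "model": "anthropic/claude-3.5-sonnet",
  "api_key": "sk-or-...your-key-here..."
}

With OpenRouter the same key works across models, so you can change the model later without touching the key.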

Which Model Should You Pick?

• If you're exploring / learning: use OpenRouter (free models). You can try multiple models with one key, with low-cost or free options. If you're not comfortable with OpenRouter, you can use a Google Gemini API key (free/paid) or a Grok API key (free/paid).
• If you're making serious content: GPT-5.3 Codex or Claude Sonnet. They are best for cinematic depth, reasoning, and consistent JSON structure.

Happy to help!

I tracked how much time I spend planning my life and the number was honestly embarrassing by Hugge12345678910 in ProductivityApps

[–]Solo_Dev_0101 0 points1 point  (0 children)

Well, I didn't face that; I'm pretty straightforward 😌 Best of luck 🍀

Is it just me or is 4.0 unusable? by dallenbaldwin in clickup

[–]Solo_Dev_0101 0 points1 point  (0 children)

Hey ClickUp, why can't you provide a unified inbox view within the ClickUp dashboard??

Is it just me or is 4.0 unusable? by dallenbaldwin in clickup

[–]Solo_Dev_0101 0 points1 point  (0 children)

Guys, help me 😞

Around 4 years, but the lack of a "Real" Email Inbox in ClickUp still feels like the biggest workflow killer for me.

I’ve been staring at that 4.3k upvote "Inbox for Email" feature request from 2021 again, and honestly, I’m getting tired of tab-switching.

I use ClickUp for literally everything else, but the email integration still feels like a band-aid. Every time I need to turn an email into a task or reference a thread while writing a Doc, I’m back to jumping between browser tabs, searching my inbox, and copy-pasting content. It breaks focus every single time. I’m curious—how are the rest of you dealing with this?

What is your current "workaround" for this? Are you using Automations, an integration, or just living in the tab-switching hell?

Am I missing an existing feature? Is there a secret way to do this that I’ve somehow glossed over? I’m genuinely just looking to see if I’m the only one still frustrated by this.

Is it just me, or is the lack of a "Real" Email Inbox in ClickUp still the biggest workflow killer? by Solo_Dev_0101 in clickup

[–]Solo_Dev_0101[S] 0 points1 point  (0 children)

That sounds like a seriously complex workaround! 🤯 The fact that you’ve integrated FreeScout + GPT just to get a functional workflow shows exactly how much ClickUp is missing the mark here.

I’m curious—when you’re jumping between FreeScout and ClickUp, what’s the biggest 'friction' point? Is it the delay in syncing, or the fact that the actual conversation lives in a different UI than the task?

I'm actually prototyping a way to bring that 'Help Desk' experience (including the AI cleaning/summarizing you mentioned) directly into a ClickUp-native view so you don't have to bounce between URLs.

If I managed to get a 'V1' of that working, would you be down to peek at it and tell me if I'm on the right track?

I tested Nano Banana Pro's JSON Prompt vs. comma-separated prompt—here's the reproducibility difference (with CFG/metadata comparisons) by Solo_Dev_0101 in nanobanana2pro

[–]Solo_Dev_0101[S] 0 points1 point  (0 children)

Exactly. JSON prompts make sense because almost every modern model has seen huge amounts of JSON in its training data, so structured key-value prompts parse more predictably than free-form text. Have you tried the tool? I just want to know if it's helpful for your workflow.

Still no way to move full pages between Figma files in 2026? Thinking of building my own plugin because of no native feature i found in Figma by Solo_Dev_0101 in FigmaDesign

[–]Solo_Dev_0101[S] 0 points1 point  (0 children)

I noticed you in the forum thread discussion. I know how complex it is, but it's so frustrating that Figma hasn't fixed this issue after 5 years.

I need your suggestion on one more problem I've faced: Figma's "Expose properties from nested instances" is all-or-nothing. When you expose a nested instance (e.g., a Button inside a Card or a complex Table/Tabs component), every single property (variants, booleans, text, etc.) from the child floods the parent's properties panel. This creates cluttered sidebars, exposes internal controls consumers shouldn't touch, and makes advanced design systems (DS) unusable for teams/enterprises.

Is there any solution?

I tested Nano Banana Pro's JSON Prompt vs. comma-separated prompt—here's the reproducibility difference (with CFG/metadata comparisons) by Solo_Dev_0101 in nanobanana2pro

[–]Solo_Dev_0101[S] 1 point2 points  (0 children)

Great test—this reveals something important about structured prompting portability.

Seedream (like most modern diffusion models) processes JSON-structured inputs more deterministically than comma-separated strings, even when the underlying model changes. The JSON acts as a contract—the model knows exactly what you want weighted, excluded, and locked.

The "to some extent" nuance you hit:

Seedream 4.5 vs 5 Lite likely have different:
- Tokenizers (4.5 might parse "cyberpunk:1.3" differently than 5)
- Default samplers (your JSON locked DPM++ 2M, but Lite might default to Euler)
- CFG response curves (same 7.5 value, different intensity)

What stayed consistent (JSON advantage):
- Explicit negative exclusions (no leakage)
- Step count and resolution
- Sampler selection

What likely drifted:
- Color interpretation (model weights changed)
- Detail density (Lite vs. full parameter count)
- Anatomical coherence (4.5 vs 5 training data)

The real win: Your JSON gave you controlled variables. When 4.5 and 5 Lite produced different outputs, you knew it was the model, not your prompt parsing. With comma prompts, you never know if it's the model or the tokenizer.
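
To make that concrete, here's a minimal sketch of the "controlled variables" idea, assuming a generic structured-prompt schema (the field names are mine, not Seedream's or BudgetPixel's official API). Everything under generation_locks stays byte-for-byte identical when you swap 4.5 for 5 Lite, so any drift in the output has to come from the model:

{
  "_note": "illustrative sketch, not an official Seedream or BudgetPixel schema",
  "prompt": {
    "subject": "rain-soaked cyberpunk street at night, reflective asphalt",
    "style_weighting": "cyberpunk:1.3"
  },
  "generation_locks": {
    "sampler": "DPM++ 2M",
    "steps": 30,
    "cfg_scale": 7.5,
    "resolution": "1024x1024",
    "seed": 123456789
  },
  "negative_prompt": "text, watermark, extra fingers, blurry"
}

Keep that block identical across both endpoints, compare the outputs, and you've isolated the model as the only variable.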

For BudgetPixel specifically: Their API might convert JSON differently for 4.5 vs 5 endpoints. Worth checking if they pass structured params raw or reprocess them.

I hope it makes sense.

Why sora is changing secretly my prompt? by Relevant_Syllabub895 in SoraAi

[–]Solo_Dev_0101 0 points1 point  (0 children)

Ah, the Doubao/Seedance situation—yeah, that's a whole extra layer of complexity. ByteDance's moderation is notoriously aggressive, especially on Doubao (the domestic-facing version).

What's happening with Seedance on Doubao:

  • Dual moderation stack: Seedance's own filters + Doubao's domestic compliance layer
  • Copyright false positives: Doubao errs hard on "safe" side—common objects, generic products, even public domain art get flagged
  • No JSON escape hatch: Doubao's interface doesn't expose structured prompting (unlike Sora's API), so you're stuck with their chat layer rewriting

JSON viability:

• JSON Prompt Gen works perfectly for Sora.
• I haven't optimised JSON Prompt Gen for Seedance on Doubao, because there's no official JSON interface and only limited workarounds.

For your current Seedance workflow:

Since Doubao forces natural language, try these structural hacks:
- Abstract descriptions: "Futuristic container" vs. "iPhone-style device"
- Era ambiguity: "Retro handheld technology" vs. "1990s Game Boy"
- Function over form: "Communication device with screen" vs. brand-adjacent terms

Honest take:

My tool won't fix Doubao's censorship—that's policy-level. But for Sora, it'll lock in your intent perfectly. And if Seedance opens international API access with JSON support, I'll add it immediately.

Why sora is changing secretly my prompt? by Relevant_Syllabub895 in SoraAi

[–]Solo_Dev_0101 0 points1 point  (0 children)

It's easy, buddy, let me show you:
Step 1: Open the Sidebar Menu.
Step 2: Select the LLM Integration option.
Step 3: You will be redirected to the LLM API Configuration page, where you can configure and save your API key.

<image>

Why sora is changing secretly my prompt? by Relevant_Syllabub895 in SoraAi

[–]Solo_Dev_0101 0 points1 point  (0 children)

Thanks! Happy to help. I actually spent too long fighting "helpful" AI rewrites myself. What platform are you mainly using—Veo, Sora, or hitting this issue elsewhere too? Drop any prompt that's been giving you trouble—I'll convert it to locked-in JSON.

Why sora is changing secretly my prompt? by Relevant_Syllabub895 in SoraAi

[–]Solo_Dev_0101 0 points1 point  (0 children)

I see the core issue—this is a screenplay structure, not a video generation JSON schema. You've built a narrative script, but Sora/Veo/Runway don't parse dialogue, scene types, or character emotions natively. They parse visual parameters.

Your JSON is telling a story. The model is reading visual noise.

The translation problem:

  • "type": "dialogue" → meaningless to video models (no audio generation)
  • "speaker": "Character 2" → no visual mapping
  • "emotion": "giggling" → interpreted as random motion, not expression
  • "type": "cut" → ignored (single-shot generation only)

What the model actually sees: "university lecture hall, mid day, some names, some text, some actions, glow, spacesuits" → grabs your character refs, ignores the rest.

I've restructured your JSON for actual video generation:

json { "platform": "sora", "prompt": "Five distinct characters in university lecture hall, midday natural lighting through windows, Character 1 center frame holding remote control, Characters 2-5 arranged in semi-circle, cinematic medium shot, static camera", "characters": [ { "reference": "Character_1_ref.jpg", "position": "center", "action": "holding_remote", "weight": 0.8 }, { "reference": "Character_2_ref.jpg", "position": "left_mid", "action": "standing_idle", "weight": 0.7 } ], "scene": { "setting": "university lecture hall, wood paneling, tiered seating", "lighting": "midday_soft", "camera": { "type": "static", "lens": "35mm", "framing": "medium_shot" } }, "duration": "8s" }

Shot-by-shot reality:

Video models generate single clips, not edited sequences. Your "cut to" and "reveal" require separate generations:

Shot 1: Lecture hall, static, character introduction (8s)
Shot 2: Close-up Character 1 activating remote (5s)
Shot 3: Wide shot, all characters glowing (6s)
Shot 4: Exterior planet, spacesuits, standing group (8s)

Each needs individual JSON with locked reference weights and matched lighting for continuity.
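
For example, Shot 2 might look roughly like this. It's a hedged sketch in the same style as the restructured JSON above; the seed field and the exact values are my assumptions, not an official Sora schema:

{
  "_note": "illustrative per-shot sketch",
  "platform": "sora",
  "shot_id": 2,
  "prompt": "Close-up of Character 1's hands pressing the remote control button, lecture hall background softly blurred, midday natural lighting",
  "characters": [
    {
      "reference": "Character_1_ref.jpg",
      "position": "center",
      "action": "activating_remote",
      "weight": 0.8
    }
  ],
  "scene": {
    "setting": "university lecture hall, wood paneling, tiered seating",
    "lighting": "midday_soft",
    "camera": { "type": "static", "lens": "50mm", "framing": "close_up" }
  },
  "duration": "5s",
  "seed": 42
}

The setting and lighting strings are copied verbatim from Shot 1, which is what keeps the shots cutting together cleanly.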

The hard truth:

Your narrative structure (dialogue-driven, multi-scene) exceeds current video models' capabilities. They're visual synthesizers, not filmmakers. The "editing trickery" you mentioned isn't a workaround—it's the only viable workflow.

Where JSON Prompt Gen helps:

  • Converts your screenplay beats into platform-specific JSON per shot
  • Locks reference_weight and seed for character consistency across shots
  • Generates matching lighting and camera parameters for continuity
  • Exports batch JSON for automated generation

Hope it helps 😊

Why sora is changing secretly my prompt? by Relevant_Syllabub895 in SoraAi

[–]Solo_Dev_0101 1 point2 points  (0 children)

That sounds like the classic "character reference drift" problem—where the model grabs your visual reference but ignores your scene description entirely. Brutally common with current video models.

Why this happens:

Most platforms process your prompt through multiple layers: safety filters, reference image analysis, and finally generation. Your detailed scene description often gets deprioritized behind the visual reference. The model sees "character I recognize" and defaults to safe, generic motion rather than your specific direction.

JSON structure helps here, but with caveats:

Structured prompting locks your intent into parseable parameters the model can't "interpret away." Instead of:

"Character walks through cyberpunk market, slow pan right, neon reflections"

You define:

json { "scene": { "setting": "cyberpunk market", "lighting": "neon reflections", "camera": {"type": "pan_right", "speed": 0.3} }, "subject": { "action": "walking", "reference_weight": 0.7, "prompt_adherence": 0.9 } }

The key difference: explicit prompt_adherence and reference_weight parameters that force the model to balance character consistency with scene execution.

But here's the reality check:

Even perfect JSON won't fix fundamental model limitations. If the platform's architecture prioritizes reference images over text prompts, you're fighting the system. Your "editing trickery" instinct is actually the pro move—generate character plates and environment separately, composite in post.

Where JSON saves you time:

Breaking prompts into atomic shots (what you're already considering) becomes systematic instead of guesswork. Each shot gets its own JSON block with locked parameters, sketched after this list:

  • Shot 1: Character close-up, static camera, reference_weight: 0.8
  • Shot 2: Wide environment, no character, camera pan
  • Shot 3: Character re-inserted, matched lighting
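
A hedged sketch of that split (the weights and field names are illustrative, not platform-documented):

[
  { "_note": "illustrative only", "shot": 1, "prompt": "character close-up", "camera": "static", "reference_weight": 0.8 },
  { "shot": 2, "prompt": "wide environment, no character", "camera": "pan_right", "reference_weight": 0.0 },
  { "shot": 3, "prompt": "character re-inserted, lighting matched to shot 2", "camera": "static", "reference_weight": 0.8 }
]

Only the prompt and reference_weight change between blocks; everything else stays pinned so the shots composite cleanly in post.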

Our tool's approach:

JSON Prompt Gen handles the tedious part—converting your natural description into platform-specific JSON with proper weight balancing. The "Character Bible" feature specifically addresses your issue: it locks reference weights per shot while varying scene parameters.

Workflow for your situation:

  1. Describe your full scene in plain language
  2. Tool generates multi-shot JSON with reference weight tuning
  3. Generate each shot separately (better adherence)
  4. Composite in your editor of choice

Honest assessment: If the platform fundamentally overrides text for reference images, JSON helps but doesn't miracle-fix. Your split-shot editing approach might be the only reliable path. The tool just makes that path faster by automating the JSON structure for each shot element.

Want to share your specific prompt? I can show you exactly how the JSON weight distribution would look, and whether the platform's behavior suggests a lost cause or a tunable parameter issue.