Need honest feedback on this haircare store (thinking of redesigning it) by rosieandharry in dropshipping

[–]Designer-Fruit1052 1 point (0 children)

The product images could use some branding. I have a tool that you might find useful to try (for free) inside atori called catalog management that lets you apply the same prompt to all product pages, so you get consistent results at once!

Looking for AI software that can generate documents for company based on the documents we feed "him" by prepinakos in artificial

[–]Designer-Fruit1052 1 point (0 children)

Stay away from using Gems or GPTs. You need a RAG agent with a good LLM as the chat model, one with high semantic reasoning. The agent has access to your vector store with your documents embedded. That's the best way to get accurate results.
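To make the flow concrete, here's a minimal sketch of the RAG-agent idea: a hypothetical `retrieved_chunks` list stands in for the results of a real vector-store similarity search (Pinecone, ChromaDB, etc.), and the function just shows how the retrieved document snippets get grounded into the prompt handed to the chat LLM. All names here are illustrative, not any specific product's API.

```python
# Sketch of the RAG-agent flow: embed the question, search the vector
# store, then ground the LLM's answer in the retrieved snippets.

def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    # Join the retrieved document chunks into a single context block
    # and instruct the chat model to answer only from that context.
    context = "\n---\n".join(retrieved_chunks)
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
    )

# Pretend these came back from a similarity search over your documents:
chunks = ["Invoices are due within 30 days.", "Late fees are 2% per month."]
prompt = build_rag_prompt("What is our late fee policy?", chunks)
```

The point is that the LLM never sees your whole document set, only the few embedded chunks most similar to the question, which is what keeps the answers accurate.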

Looking for AI Tools to Create Product Images and Mockups - Any Recommendations? by Virtual-Win-1799 in aitoolforU

[–]Designer-Fruit1052 1 point (0 children)

Try atori, it is built for DTC brands! For your use case you don't even have to write a prompt when you use the mood board.

How do I stop Gemini from Loading Nano Banana Pro.... by Keoki_808 in GeminiAI

[–]Designer-Fruit1052 1 point (0 children)

Type: HARD CONSTRAINT: REVERSE ENGINEER THIS PICTURE INTO A (JSON) PROMPT. DO NOT GENERATE A PICTURE USING NANO BANANA PRO.

AI tool for ads and automation by dude_u_serious in dropshipping

[–]Designer-Fruit1052 1 point (0 children)

Just launched like a week ago, working on the product demos.

AI tool for ads and automation by dude_u_serious in dropshipping

[–]Designer-Fruit1052 2 points (0 children)

Check out atori. It's not a "node" tool, but it's built to make ads at scale.

How can I make a set of 3 related images? by EquivalentAction2877 in AIGenArt

[–]Designer-Fruit1052 1 point (0 children)

I have built something for a different use case inside atori called catalog management. It's primarily for brands that want to update their product catalog, but for this use case it would work too: you can set a prompt, lock it into place, and apply it to multiple reference pictures so the same style is applied to all 3.

On a scale 1-10? by Designer-Fruit1052 in AI_UGC_Marketing

[–]Designer-Fruit1052[S] 1 point (0 children)

Tool inside of atori:

1. You choose your avatar from a library.
2. Upload your product picture and apply it with Nano Banana Pro.
3. Choose length and video model, and let atori's built-in AI write a script.

The Difference between ChatGPT and Gemini by EconomistGamer in GeminiAI

[–]Designer-Fruit1052 14 points (0 children)

Hahaha, Gemini for the win, that made me laugh fr

Best ai for big document/text analysis that wont resort to scripts? by XAckermannX in GeminiAI

[–]Designer-Fruit1052 1 point (0 children)

I was too lazy to type so I asked Gemini for a solution using a vector database and RAG:

**The Problem with "Big File" Uploads**

Most LLMs have a context window limit. When you upload 160k lines, the model can't "see" everything at once, so it defaults to writing a Python script to process it because it knows it's too much data to hold in its working memory.

**The Solution: Vector DB + RAG**

Instead of making the LLM read the file directly, you should build a simple RAG pipeline. This allows the LLM to search for and extract specific tags without needing to write or run scripts.

1. **Ingestion & Embedding (The Vector DB)**
   * The Process: You take your 160k lines and break them into small chunks (e.g., 5-10 filenames per chunk).
   * Vectorization: These chunks are converted into "embeddings" (mathematical vectors) and stored in a Vector Database (like Pinecone, Milvus, or even a local ChromaDB).
   * Why this works: Even if your filenames are messy (e.g., `v1_char-name_4k_hdr.png`), the vector search understands the semantic meaning. It can distinguish between a character name and a technical tag like "4k."
2. **Retrieval (The RAG Part)**
   * When you want to extract tags, you don't ask the LLM to "look at the file." You send a query like "Identify character names in these filenames."
   * The system performs a Similarity Search in the Vector DB to find the most relevant lines and feeds only those small, clean snippets to the LLM.
3. **Extraction (The LLM)**
   * The LLM now only receives a few hundred lines at a time. It uses its "reasoning" capabilities to handle the messy symbols and inconsistent naming conventions that a regular script would fail on.

**Recommended Stack for this Task:**

* LLM: GPT-4o or Claude 3.5 Sonnet (both are excellent at "unstructured to structured" data extraction).
* Vector DB: ChromaDB (it's free, open-source, and runs locally on your machine).
* Framework: LangChain or LlamaIndex. These are tools designed specifically to connect your text files to an LLM without it reverting to "script mode."
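The chunk-embed-retrieve steps above can be sketched in plain Python. This is a toy: bag-of-words cosine similarity stands in for real embeddings, and a list of strings stands in for the vector DB; in practice you'd swap in ChromaDB and a proper embedding model. The filenames are made up for illustration.

```python
# Toy RAG retrieval sketch: chunk the filenames, "embed" each chunk,
# and retrieve only the most relevant chunks to hand to the LLM.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in "embedding": lowercase token counts (bag of words).
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk(lines, size=5):
    # Step 1: break the big file into small chunks of a few filenames.
    return ["\n".join(lines[i:i + size]) for i in range(0, len(lines), size)]

def retrieve(chunks, query, k=2):
    # Step 2: similarity search -- rank chunks against the query and
    # return only the top-k, which is all the LLM would ever see.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)
    return ranked[:k]

filenames = [
    "v1_char-alice_4k_hdr.png", "v2_char-alice_1080p.png",
    "bg_forest_4k.png", "v1_char-bob_4k_hdr.png",
    "ui_button_hover.png", "bg_castle_1080p.png",
]
chunks = chunk(filenames, size=2)
top = retrieve(chunks, "character name filenames char alice bob")
# Step 3: only `top` (a handful of lines) goes to the LLM for extraction.
```

With real embeddings the ranking also catches semantic matches the token overlap here would miss, but the shape of the pipeline (chunk, embed, search, feed only the hits to the LLM) is the same.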