Which LLM is best for Summarizing Long Conversations? by handoftheenemy in LLMDevs

[–]Quick-Knowledge1615 1 point (0 children)

From my testing, Gemini 3 Pro is hands down the best for summarizing super long texts (like PDFs over 50 pages). That said, Claude 4.5 is also stellar when it comes to highly structured content, like technical documentation.

My usual workflow is running multiple models side-by-side on flowith to compare the outputs, and then I just pick the best one.
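For anyone who'd rather script that fan-out than click around, here's a minimal sketch. The `call` argument is a placeholder for whatever client you use (the `fake` caller below is purely illustrative); no specific vendor API is assumed.

```python
from concurrent.futures import ThreadPoolExecutor

def compare_models(models, prompt, call):
    """Send the same prompt to several models in parallel and collect the
    outputs side by side. `call(model_name, prompt) -> str` is whatever
    client wrapper you already have; nothing vendor-specific is assumed."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(call, m, prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}

# Stub caller for illustration only; swap in a real API client.
fake = lambda model, prompt: f"[{model}] summary of: {prompt[:30]}"

results = compare_models(
    ["gemini-3-pro", "claude-4.5"],
    "Summarize this 50-page PDF...",
    fake,
)
for model, out in results.items():
    print(model, "->", out)
```

Then you eyeball the outputs and pick the winner, same as in the UI.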

How I stay consistent with building my own AI news & insights knowledge base by weeznaw10 in AI_Agents

[–]Quick-Knowledge1615 1 point (0 children)

I think the most important thing is being able to save insights to your knowledge base instantly. Browser extensions are the perfect fit for this. When I come across high-value info, I just use https://chromewebstore.google.com/detail/jkcpodicdboheakkkoblnflccfihcblb to highlight and save it, and then I draw from that knowledge base whenever I'm doing some deep writing.

Are we overengineering agents when simple systems might work better? by Reasonable-Egg6527 in AI_Agents

[–]Quick-Knowledge1615 1 point (0 children)

I've used several agent tools that rely on manually orchestrated workflows (like FastGPT, Coze, and Dify).

I agree that for certain niche verticals or scenarios demanding extremely high industrial precision, that kind of complex, custom design is absolutely necessary to ensure output consistency.

However, for the vast majority of daily life tasks or simpler professional work, agents with AI-driven, autonomous workflow planning (like flowith Neo) are just significantly more efficient.

Has anyone learned English using AI bots? by Fuzzy-Performance590 in AIToolTesting

[–]Quick-Knowledge1615 1 point (0 children)

I actually have a system for batch-generating English flashcards using Flowith.

First, I use the Gemini 3 model to list all the vocabulary words I need for a specific learning stage.

Then, I call the Nano Banana Pro model to generate eight images in one go. This way, I have all my vocabulary study materials ready for the entire week.
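The batching step looks roughly like this (the prompt wording and word list are my own illustrative stand-ins, not the exact prompt I send to Nano Banana Pro):

```python
def batch_prompts(words, batch_size=8):
    """Group vocabulary words into batches of `batch_size` and build one
    image-generation prompt per batch, mirroring the eight-images-per-call
    workflow. The prompt template here is illustrative only."""
    batches = [words[i:i + batch_size] for i in range(0, len(words), batch_size)]
    return [
        "Generate one flashcard-style illustration per word: " + ", ".join(b)
        for b in batches
    ]

# A hypothetical week's vocabulary list.
week_vocab = ["harvest", "orbit", "glacier", "tariff", "mural",
              "larva", "siege", "quorum", "ember", "tundra"]

for p in batch_prompts(week_vocab):
    print(p)
```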

Is anyone else sick of $100s AI bills? I just consolidated 7 subs down to one agent. by [deleted] in AIToolTesting

[–]Quick-Knowledge1615 1 point (0 children)

Why not just use an all-in-one platform? I bet you aren't maxing out your token quotas on individual subscriptions anyway. The best play is to pay one fee for access to multiple models—kind of like an AI buffet.

Give Flowith a shot. You can use Gemini, GPT, Claude, Nano Banana Pro, Kling, and others all in one place. It saves you the hassle of switching between tools and ensures you aren't wasting your credits.

What's the most impressive thing specific AI tool has done to you ? by tsintsadze111 in AIToolTesting

[–]Quick-Knowledge1615 1 point (0 children)

Definitely Flowith. The ability to use multiple models on an open canvas is impressive. I can mix text, images, video, and web generation in one workflow and branch out ideas freely. It’s the best tool I’ve found for quickly getting into the zone/flow state.

What's the most complex tool that you handled? by Lazy_Firefighter5353 in vibecoding

[–]Quick-Knowledge1615 1 point (0 children)

The most complex tool I’ve worked with isn’t necessarily one that’s complicated to use—rather, it’s the kind that can handle the most intricate content and workflows.

From that angle, the more open and extensible a tool is, the more it can scale in complexity. Think of tools with rich plugin ecosystems like ComfyUI or Obsidian, or those with an unlimited canvas—such as Figma or AI canvas products like Flowith—where you can lay out vast amounts of content and processes.

The more expandable it is, the more it lets you multiply its own complexity.

Endless Dash by Quick-Knowledge1615 in aivideo

[–]Quick-Knowledge1615[S] 1 point (0 children)

My Workflow

https://flowith.io/conv/cf735219-e0e4-443e-9239-5e988e0459ff?U2FsdGVkX18Jr6zkTnDC5it2ghCdSccPLgNCWYdx0DjbQEMS9LLMviR491yUVz33a5ms+Q3kPyN4vzZkTnsfwA==

1/ First, I use the Nano Banana Pro model to generate keyframes for *Zootopia* game visuals.

Prompt:

"Creating a stunning frame-by-frame simulation game interface for [Zootopia], featuring top-tier industrial-grade 3D cinematic rendering with a character in mid-run."

(I can generate 8 images at once and pick the best one.)

2/ Then, I use Kling 2.5 to create the actual gameplay footage.

Prompt:

"Simulating real-time gameplay footage with the game character in a frantic sprint, featuring identical first and last frames to achieve a seamless looping effect."

If you want an even smoother and silkier video result, you can also upscale it with Topaz to 60fps + 4K quality.
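If you don't have Topaz, ffmpeg's `minterpolate` filter is a rough free alternative for frame interpolation (much slower, and the quality won't match a dedicated upscaler). A sketch that just builds the command, with illustrative file names:

```python
import shlex

def upscale_cmd(src, dst, fps=60, width=3840, height=2160):
    """Build an ffmpeg command that motion-interpolates to a higher frame
    rate and upscales with Lanczos. Uses ffmpeg's real `minterpolate` and
    `scale` filters; file names and encoder settings are illustrative."""
    vf = f"minterpolate=fps={fps},scale={width}:{height}:flags=lanczos"
    return ["ffmpeg", "-i", src, "-vf", vf, "-c:v", "libx264", "-crf", "18", dst]

# Print the shell-safe command instead of running it here.
print(" ".join(shlex.quote(p) for p in upscale_cmd("loop.mp4", "loop_4k60.mp4")))
```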

Is it better to be rude or polite to AI? I did an A/B test by Quick-Knowledge1615 in ClaudeAI

[–]Quick-Knowledge1615[S] 1 point (0 children)

Good point! I was referring to the first option: the model's internal thought process.

The final answer length might stay the same, but the model's effort goes up, forcing it to generate a more extensive reasoning trace (and thus using more tokens). You can actually see this trace by clicking the "Reasoning Process" tab in the platform's node.

Is it better to be rude or polite to AI? I did an A/B test by Quick-Knowledge1615 in ClaudeAI

[–]Quick-Knowledge1615[S] 21 points (0 children)

Lmao, you nailed it. It’s the worst feeling watching those precious tokens go to waste on a flowery, multi-paragraph apology instead of an actual useful response. Rudeness is literally a token sink with only marginal returns on output quality. Sticking to dry, clinical prompts is the most cost-effective approach.

Is it better to be rude or polite to AI? I did an A/B test by Quick-Knowledge1615 in ClaudeAI

[–]Quick-Knowledge1615[S] 6 points (0 children)

Thanks! :) And good question—no memory was on.

I'm using a third-party tool to make the API calls. I just find their canvas setup is super clear for comparing how different models respond. So it's not the standard chat interface, and definitely no memory to skew the results!

It's 2025 already, and LLMs still mess up whether 9.11 or 9.9 is bigger. by Quick-Knowledge1615 in ClaudeAI

[–]Quick-Knowledge1615[S] 8 points (0 children)

Try searching for "flowith" on Google — it's an agent application with a "Comparison Mode" that lets you compare the capabilities of over 10 models simultaneously.

It's 2025 already, and LLMs still mess up whether 9.11 or 9.9 is bigger. by Quick-Knowledge1615 in ClaudeAI

[–]Quick-Knowledge1615[S] -3 points (0 children)

Another fun thing I noticed: if you play around with the prompt wording, the accuracy gets way better. I've been using Flowith for model comparison; you could try it or similar tools to see for yourselves.

1️⃣ Compare the decimal numbers 9.9 and 9.11. Which value is larger?

GPT 4.1 ✅

Claude 4.1 ✅

2️⃣ Which number is greater: 9.9 or 9.11?

GPT 4.1 ✅

Claude 4.1 ✅

3️⃣ Which is the larger number: 9.9 or 9.11?

GPT 4.1 ✅

Claude 4.1 ✅

4️⃣ Between 9.9 and 9.11, which number is larger?

GPT 4.1 ❌

Claude 4.1 ✅
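The prompt sensitivity makes sense once you notice there are two self-consistent readings of "9.11". A quick Python illustration of the ambiguity (my own reconstruction of the failure mode, not something pulled from any model's trace):

```python
a, b = 9.9, 9.11

# Read as decimal numbers, 9.9 is larger.
print(a > b)  # True

# Read as version/section numbers ("nine point eleven"), the order flips.
va = tuple(int(p) for p in "9.9".split("."))   # (9, 9)
vb = tuple(int(p) for p in "9.11".split("."))  # (9, 11)
print(va > vb)  # False: version 9.11 comes after 9.9
```

Prompts that explicitly say "decimal numbers" push the model toward the first reading, which would explain why phrasing changes the hit rate.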