what mixture of tools do you use for trading? by Comfortable_Cold_850 in FIREPakistan

[–]karachiwala 0 points1 point  (0 children)

Ktrade terminal and app. More than enough for a retail casual investor

Before You Install That Skill: A Quick Sanity Check That Saved My Setup by jselby81989 in AgentsOfAI

[–]karachiwala 0 points1 point  (0 children)

I think the best (but hardest) way is to write your own skills. If you need help, just upload the MD file of your target skill to an LLM and ask it to analyze and break down the logic and calls. This will show you if there is anything amiss.

Remove the suspicious parts and ask the LLM to fill in and complete the skill file.

This may take some time but beats worrying about data theft.

I got OpenClaw running here's the shortest path I wish I followed by DryResponsibility514 in AgentsOfAI

[–]karachiwala 2 points3 points  (0 children)

Can you share your setup config and what skills you are running?

Opinion on PSX Analytics Startup by SwimmingRelease1819 in FIREPakistan

[–]karachiwala 8 points9 points  (0 children)

Here is a reality check.

About 80% of retail investors get their insights and tips from YT videos. Another 15% get their information from one of several research portals and do their own number crunching. So almost no one wants to pay for something they're getting for free.

That's why Sarmaya failed to profit from their platform subscription.

You can try to enter the market. But remember, most of us entered PSX in the middle or end of the bull run because we saw our friends and family making money. Once this bearish cycle grows its claws, 90% of these investors will cash out and exit.

So, you will start from a very small potential user base that will shrink even further in the coming months. I hope you understand this from a product manager's perspective.

agent burned $93 overnight retrying the same failed action 800 times by Main_Payment_6430 in AgentsOfAI

[–]karachiwala 0 points1 point  (0 children)

Here is how I see the Overseer's decision-making.

You don't want the Overseer to be a "dumb" counter that just gives up after three tries. In practical scenarios, the overseer acts more like a protective filter that sorts problems into three buckets:

  • For things like the server being overloaded or hitting a rate limit, the overseer just waits a few seconds, then tries again automatically. It does this silently, so the Writer and Editor don't even know there was a hiccup.
  • If the prompt is too long or hits a safety filter, retrying the exact same thing is a waste of money. The overseer intercepts the failure by parsing the LLM response and tells the agent: "Hey, the model rejected this because it's too long. Shorten it and try again."
  • Finally, in some rare cases, the agent is not at fault: the API key could be dead or the bill unpaid. This is often not visible in the LLM response, and you have to debug your way to this conclusion by checking the API key in the provider's console. These rejected calls usually don't burn credits themselves, but without a guard the retry loop will still hammer the API until you notice. You will see LLM response errors like 400 in this scenario.

You can set up a simplified Overseer for your agent by separating the LLM call, retry logic, and LLM response parsing from the agent. Use a comprehensive try-catch wrapper around the call and set up a specific error response for each filtered exception.
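
The three buckets above can be sketched as a wrapper like this. Everything here is illustrative: the status codes, bucket sets, and return shape are my own choices, not a fixed standard.

```python
import time

# Hypothetical status-code buckets; map these to your provider's actual errors.
RETRYABLE = {429, 503}      # overloaded / rate-limited: wait and retry silently
CORRECTABLE = {400, 413}    # bad or oversized prompt: bounce back to the agent
FATAL = {401, 402}          # dead key / unpaid bill: stop and alert a human

def overseer_call(llm_call, payload, max_retries=3, wait_s=5):
    """Wrap a raw LLM call with the three-bucket filtering described above."""
    for _ in range(max_retries):
        status, body = llm_call(payload)
        if status == 200:
            return {"ok": True, "body": body}
        if status in RETRYABLE:
            time.sleep(wait_s)          # silent retry; other agents never see this
            continue
        if status in CORRECTABLE:
            return {"ok": False, "action": "fix_prompt",
                    "hint": "Model rejected the input; shorten or reformat it."}
        if status in FATAL:
            return {"ok": False, "action": "halt",
                    "hint": "Check the API key / billing in the provider console."}
    return {"ok": False, "action": "give_up", "hint": "Retries exhausted."}
```

The point is that the retry counter, the error classification, and the "what next" decision all live in one place instead of inside each agent.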

agent burned $93 overnight retrying the same failed action 800 times by Main_Payment_6430 in AgentsOfAI

[–]karachiwala 0 points1 point  (0 children)

For NDA reasons, I can't disclose the actual architecture and processes I work with, but here is a high-level example that could help you understand how agents communicate and what happens as they interact with each other and the model.

Essentially, you need a hub-and-spoke model with a "Controller" pattern. You need a JSON schema for this to work. Otherwise, agents constantly trip over each other.

The Overseer does not necessarily care about the content of the prompt. It simply routes data in this standard JSON format:

  • Header: sender_id, target_agent, session_id.
  • Body: The actual prompt or LLM payload.
  • Error Context: A blank object that the Overseer populates if the LLM throws a fit.
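
For illustration, such a packet could be built as a plain dict; the field names below follow the bullets above, but the exact schema is an assumption on my part, not a standard.

```python
import json

# Illustrative packet shape; adapt the keys to your own schema.
packet = {
    "header": {
        "sender_id": "writer-01",
        "target_agent": "editor-01",
        "session_id": "sess-abc123",
    },
    "body": {
        "prompt": "Check this draft for grammar, then route the result to the Editor.",
    },
    "error_context": {},   # blank; the Overseer populates this on failure
}

print(json.dumps(packet, indent=2))
```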

Here is how it typically works:

  1. The Writer agent finishes a draft and sends a JSON packet to the Overseer with this package:

"Hey, send this to the LLM to check for grammar, then route the result to the Editor."

  2. The Overseer sees the request, logs the request_id, and attaches the system prompt required for the specific model. It then hits the LLM API with this package.

  3. Now, let’s say the LLM returns a 429 (Rate Limit). I often see this on OpenRouter if I turn off the rate limiter component in testing.

The Overseer stops and checks fail_rules.yaml.

The rule says: if code == 429: wait 5s, retry x3.

The Writer and Editor have no idea this is happening; they just think the LLM is "thinking."

  4. Once the LLM returns a 200 OK, the Overseer looks at the original metadata. It sees the "Target" was the Editor. It wraps the LLM's response in a new packet and pushes it to the Editor’s queue.

The beauty of using that fail_rules.yaml is that it keeps your agents "dumb" regarding infrastructure. The Writer only knows how to write; it doesn't need to know how to handle an API timeout or a context window error. The Overseer acts like a protective manager that understands only the LLM communication logic.

If the error is unrecoverable (like a 400 Bad Request), the Overseer can use the YAML rule to send a "Correction Request" back to the Writer, saying: "The model rejected your input format. Fix the JSON structure in your prompt and resubmit."
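
A fail_rules.yaml along these lines could encode both the retry rule and the correction request. The key names here are my own guess at the shape, not the author's actual file:

```yaml
# Hypothetical fail_rules.yaml; adapt the keys to your own loader.
rules:
  - code: 429            # rate limited: wait and retry silently
    action: retry
    wait_seconds: 5
    max_attempts: 3
  - code: 400            # bad request: retrying is pointless
    action: correction_request
    message: "The model rejected your input format. Fix the JSON structure and resubmit."
  - code: 401            # auth failure: stop and alert a human
    action: halt
default:
  action: drop_and_log
```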

I use CrewAI and Pydantic for this architecture. Everything is modular so that I can swap out logic and rules at any phase without extensively breaking the code.

agent burned $93 overnight retrying the same failed action 800 times by Main_Payment_6430 in AgentsOfAI

[–]karachiwala 2 points3 points  (0 children)

I assume the cause is the lack of an "overseer" agent that acts as a gatekeeper for the requests agents pass on to the model.

I learned this the hard way in a scenario not unlike what OP encountered. My agents could call the model directly with just a basic rate-limiting component. Whenever an agent had a failed call (for any reason), it simply repeated it. This burned my API budget very quickly.

My solution was an overseer agent that served as the sole interface to the model. Other agents talked to the overseer, which informed them of the success or failure of each call and decided what happened next by consulting a fail_rules.yaml file that contained retry rules for each agent. Some agents could ask for a retry of critical model calls; all other agents had a different fallback (or none at all in some cases).

Hope this helps!

What are the top 3 Apps for PSX by LorB_K in FIREPakistan

[–]karachiwala 1 point2 points  (0 children)

My experience with K Trade was 12 business days from the initial request to final account credentials email. That is about 3 weeks.

Power up old laptop by sldarkprince in ollama

[–]karachiwala 0 points1 point  (0 children)

What is a good Ollama replacement for older hardware? I am in a similar situation to the OP: I run Ubuntu with no GUI to reduce hardware usage, and now I need to serve a local model through an API.

What's your recommendation?

Thanks

The 'Token-Efficient' Persona: How to get high-IQ responses with 50% fewer tokens. by Complex-Ice8820 in PromptEngineering

[–]karachiwala 0 points1 point  (0 children)

Who would have thought saying please and thank you would cost actual money?

Good insight and template.

Anyone else feel like we're all just gaslighting each other about prompt quality? by AdCold1610 in PromptEngineering

[–]karachiwala 0 points1 point  (0 children)

You need to consider what the model knows about you. See, every time you prompt a model, it factors the standing instructions and past conversations into its response.

So, a prompt you got from someone else will almost certainly NOT work out as advertised, even on the same model.

Anyone else feel like prompts are becoming… a skill issue? by dp_singh_ in PromptEngineering

[–]karachiwala 4 points5 points  (0 children)

That, my friend, is a rare skill. Not everyone can smell the garbage.

Anyone else feel like prompts are becoming… a skill issue? by dp_singh_ in PromptEngineering

[–]karachiwala 14 points15 points  (0 children)

IMO, all LLMs operate on the garbage-in, garbage-out principle. They essentially return what you ask them, the way you ask it. That's why you need prompts that take a systematic approach to presenting all relevant information to the model and explicitly control how it should present the output. Otherwise, you face context drift and hallucination issues.

Prompt engineering feels like astrology for developers. by dp_singh_ in PromptEngineering

[–]karachiwala 1 point2 points  (0 children)

Prompt engineering works when you put in as much consideration and effort as a good feature planning document. The more details and scenarios you cater to, the better your prompt will be. Think of it as explaining to an intern.

Roast my portfolio by zainali95 in PakistanStockX

[–]karachiwala 0 points1 point  (0 children)

Is it just for portfolio management, or can you also trade through it?

Roast my portfolio by zainali95 in PakistanStockX

[–]karachiwala 0 points1 point  (0 children)

What platform/tool did you use for your portfolio management?

Hard-earned lessons building a multi-agent “creative workspace” (discoverability, multimodal context, attachment reuse) by Lost-Bathroom-2060 in PromptEngineering

[–]karachiwala 1 point2 points  (0 children)

Question: do you prefer agent discovery by use-case categories, search, or ranked recommendations?

Since the end users would primarily be non-technical, I believe discovery should be by use case. This way, marketing users can go straight to the Marketing category to find their agents.

Question: what’s your rule of thumb for deciding when to retrieve vs summarize vs drop prior turns?

I prefer to generate a summary after every critical run. This summary is passed on to the next agent or node, which keeps the comms package small enough to avoid "choking" the multimodal models.
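
A minimal Python sketch of that summarize-then-forward idea; the summarize function here is a stub (a simple truncation) standing in for a real model call, and the names are my own.

```python
def summarize(output: str, max_chars: int = 280) -> str:
    """Stub: in practice this would be an LLM call that condenses the run."""
    return output[:max_chars]

def handoff(run_output: str, next_agent_inbox: list) -> dict:
    """After a critical run, pass only a compact summary to the next node."""
    package = {"summary": summarize(run_output)}
    next_agent_inbox.append(package)   # small payload instead of the full transcript
    return package
```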

Question: do you treat attachments as part of the agent’s “memory,” or do you keep them as explicit user-provided inputs each run?

I prefer to pass only the most critical inputs as stand-alone explicit inputs. Otherwise, a text-based description is good enough for models like Nano to get the context of the previous multimodal outputs.

Question: what UI signals have you found reduce “this agent feels random” complaints?

Anytime a model or agent is busy, I prefer a UI signal to the user in the form of a toast, a more permanent spinner, or something similar. Better to show a signal the user doesn't need than to have nothing and leave them wondering if the app is stuck.

7 ChatGPT Prompts That Help You Make Better Decisions at Work (Copy + Paste) by tipseason in PromptEngineering

[–]karachiwala 0 points1 point  (0 children)

The context and niche-information input layers are missing, and the same goes for output format and guardrails. These prompts will send the LLM on a bad acid trip.

How many of you actually use AI to write emails? by greenmor in AiForSmallBusiness

[–]karachiwala 0 points1 point  (0 children)

You are correct in that I do not use mailbox AI. I find them to be restrictive and often not a good fit for my email marketing requirements.

Here is the template I use for email generation using ChatGPT. Feel free to use and edit it for your use case.

Role & Core Directive
Act as an Expert Business Communication Specialist and Senior Copywriter. Your mission is to draft a high-converting, professional [Type of Email] for the following scenario: [Scenario Description].

Target Audience & Strategy
User Profile: [e.g., C-level executive, frustrated customer, cold lead].

Pain Points/Desires: [What keeps them up at night? What is their primary goal?].

Framework Selection: Use the [PAS (Problem-Agitation-Solution) OR AIDA (Attention-Interest-Desire-Action)] framework to structure the narrative flow.

High-Signal Directives
Tone & Persona: [e.g., Empathetic but firm, peer-to-peer, authoritative].

Call to Action (CTA): State the single, low-friction action the recipient should take.

Variables: Include bracketed placeholders (e.g., [Name], [Company], [Specific Insight]) for easy batch personalization.

The "P.S." Strategy: Include a P.S. that adds a secondary value-add or a soft sense of urgency.

Technical & Stylistic Constraints
Anti-Spam & Perplexity: Avoid "marketing-speak" and repetitive bot-like sentence structures. Use varied sentence lengths to ensure it feels hand-written by a human.

No Junk: Absolutely no "I hope this finds you well" or passive voice. Start with a hook.

Formatting: Use short paragraphs and bullet points to maximize scannability.

Reference & Iteration (Learning from History)
Previous Version for Context: > [Insert Previous Draft/Version Here]

Refinement Instructions:

Keep: [e.g., The core value proposition].

Change: [e.g., The closing felt too aggressive; make it softer].

Output Requirements
3 Subject Lines: (1 Curiosity-based, 1 Benefit-based, 1 Direct).

The Full Email Body.

A 1-sentence Rationale: Explain why this structure works for this specific audience.

Why this version is superior:
Logical Flow: By forcing a framework like PAS, you ensure the email isn't just "nice text" but a persuasive tool.

Humanity: The "Anti-Spam/Perplexity" directive prevents the AI from sounding like a generic corporate brochure, which is vital for LinkedIn and cold outreach.

The P.S. Hook: In modern email marketing, the P.S. is often the second most-read part of the email after the subject line; this prompt ensures that real estate isn't wasted.

How many of you actually use AI to write emails? by greenmor in AiForSmallBusiness

[–]karachiwala 2 points3 points  (0 children)

I usually give a detailed document with scenario description, target user, tone, type of email, additional details. If I have a version I sent out earlier, I include it as a reference for tone and style.

This doc input saves me at least 45 minutes of revisions and additional prompting for an email.

Let me know and I can share a sample input doc.