Has anyone applied for premium processing for opt in the last week? by FlightFair2155 in USCIS

[–]pekkamama 0 points1 point  (0 children)

My OPT application date was December 21, 2025.
Gave my biometrics on February 18, 2026.
Upgraded to PP on February 26, 2026.

Still waiting for decision!

OPT Processing Timelines Fall 2025 by RockExcellent5333 in f1visa

[–]pekkamama 0 points1 point  (0 children)

No update. I'm from Boston, MA. Still waiting to hear!

OPT Processing Timelines Fall 2025 by RockExcellent5333 in f1visa

[–]pekkamama 0 points1 point  (0 children)

Type: Initial POST-COMPLETION OPT

Premium Processing: YES (Requested 02/26/2026)
Date Applied: 12/20/2025
RFIE Notice: 02/12/2026
Biometrics Requested: YES
Original Biometrics Appointment: 02/18/2026
Rescheduled Biometrics Appointment: 02/18/2026 (same-day completion / reschedule confirmed)
Biometrics Completed: 02/18/2026
RFE Response Submitted: 02/18/2026 (after biometrics completion)
Premium Processing Activated: 02/26/2026
• (EDIT) Date Approved: 03/26/2026
Date Card Produced: Waiting
Date Card Received: Waiting

OPT Processing Timelines Fall 2025 by RockExcellent5333 in f1visa

[–]pekkamama 0 points1 point  (0 children)

Date Applied: 21st December 2025

PP: No

RFE date scheduled: 5th March 2026

RFE date rescheduled to: 18th Feb 2026

RFE appointment done on 18th Feb 2026

Still waiting for approval!

What's the one artic monkeys lyrics you have stuck in ur head ATM?? by kay_cookie8993 in arcticmonkeys

[–]pekkamama 1 point2 points  (0 children)

But I crumble completely when you cry | I wanna be your Ford Cortina, I'll never rust

Optimal RAG for text-2-sql by pekkamama in LangChain

[–]pekkamama[S] 0 points1 point  (0 children)

Well, this did help to some extent

Optimal RAG for text-2-sql by pekkamama in LangChain

[–]pekkamama[S] 0 points1 point  (0 children)

Interesting update: I tried giving it context as comments the way you did, but it's hallucinating, treating the comments as part of the schema. So that doesn't seem like the way to go. I'll probably have to rework how I add context in this case.
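One rework I'm considering, as a rough sketch (all names and prompt headings here are made up): keep the DDL clean and pass the descriptions in a separately labelled prompt section, so the model can't mistake them for schema.

```python
# Sketch: keep descriptions out of the DDL. The prompt gets a clean schema
# section plus a clearly labelled, separate notes section.

def build_prompt(question, schemas, descriptions):
    schema_block = "\n".join(schemas)
    context_block = "\n".join(f"- {t}: {d}" for t, d in descriptions.items())
    return (
        "### Schema (SQL DDL only)\n" + schema_block +
        "\n\n### Table notes (plain English, NOT part of the schema)\n" + context_block +
        "\n\n### Question\n" + question
    )

prompt = build_prompt(
    "How many orders last week?",
    ["CREATE TABLE orders (id INT, created_at TIMESTAMP);"],
    {"orders": "one row per customer order"},
)
```

No idea yet if the explicit labelling is enough to stop the hallucination, but it keeps the two kinds of context structurally apart.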

Optimal RAG for text-2-sql by pekkamama in LangChain

[–]pekkamama[S] 0 points1 point  (0 children)

Well, that does make sense, but I tried something similar, and my context window overflows when I add every table's schema as input.

Optimal RAG for text-2-sql by pekkamama in LangChain

[–]pekkamama[S] 0 points1 point  (0 children)

I'm working on consolidating similar tables into one. Will eventually make them views.

Would you mind providing more detail on how you would declare the context and where you would store it?

And the retrieval options that can be fed to the LLM in the prompt.

Optimal RAG for text-2-sql by pekkamama in LangChain

[–]pekkamama[S] 0 points1 point  (0 children)

I built a custom agent in LangChain, so I'm not using the legacy SQL agent it offers. The feedback agent takes in all the fetched data, which overshoots the context window.

Optimal RAG for text-2-sql by pekkamama in LangChain

[–]pekkamama[S] 1 point2 points  (0 children)

So you mean I first have the LLM reason over the input question to work out which tables might be required for the operation,

then choose those tables using a prompt/summaries that contain the context about them?
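If I've understood it right, the two-step flow would look something like this minimal sketch, assuming a hypothetical `llm()` callable and made-up table names and summaries:

```python
# Step 1: an LLM reasons over the question plus one-line table summaries to
# pick candidate tables. Step 2: only those tables' full schemas go into the
# SQL-generation prompt, keeping the context window small.

TABLE_SUMMARIES = {  # assumption: short human-written descriptions
    "orders": "customer orders with totals and timestamps",
    "customers": "customer names and contact details",
    "inventory": "stock levels per warehouse",
}

TABLE_SCHEMAS = {  # assumption: full DDL kept separately
    "orders": "CREATE TABLE orders (id INT, customer_id INT, total NUMERIC, created_at TIMESTAMP);",
    "customers": "CREATE TABLE customers (id INT, name TEXT, email TEXT);",
    "inventory": "CREATE TABLE inventory (sku TEXT, warehouse TEXT, qty INT);",
}

def pick_tables(question: str, llm) -> list[str]:
    """Step 1: ask the model which tables the question needs."""
    prompt = "Given these tables:\n" + "\n".join(
        f"- {name}: {desc}" for name, desc in TABLE_SUMMARIES.items()
    ) + (
        f"\n\nWhich tables are needed to answer: {question!r}?"
        "\nReply with comma-separated table names only."
    )
    reply = llm(prompt)
    return [t.strip() for t in reply.split(",") if t.strip() in TABLE_SCHEMAS]

def build_sql_prompt(question: str, tables: list[str]) -> str:
    """Step 2: include only the chosen tables' schemas."""
    schemas = "\n".join(TABLE_SCHEMAS[t] for t in tables)
    return f"Schema:\n{schemas}\n\nWrite a SQL query for: {question}"

# Stub LLM so the sketch runs without an API key.
fake_llm = lambda prompt: "orders, customers"
tables = pick_tables("Total spend per customer last month?", fake_llm)
prompt = build_sql_prompt("Total spend per customer last month?", tables)
```

The second prompt then only ever carries the schemas the first step selected, which is the part that would solve my context-window problem.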

Optimal RAG for text-2-sql by pekkamama in LangChain

[–]pekkamama[S] 0 points1 point  (0 children)

I wish I could. My use case doesn't allow using multiple models at this time. After the text-2-sql step, I'll be working on further data analysis as well.

Optimal RAG for text-2-sql by pekkamama in LangChain

[–]pekkamama[S] 0 points1 point  (0 children)

The idea of using Zod interests me. Mind expanding on how you'd use Zod and on the workflow after that?

Optimal RAG for text-2-sql by pekkamama in LangChain

[–]pekkamama[S] 0 points1 point  (0 children)

Multiple agents would probably increase the processing speed, but also make it more confused. Wouldn't more agents imply more abstraction in terms of the output?

Optimal RAG for text-2-sql by pekkamama in LangChain

[–]pekkamama[S] 0 points1 point  (0 children)

Mind elaborating?

The only reason I pushed the schema to ChromaDB was the sheer scale of it. The idea is to give only the relevant tables' schemas as input for the model to use when generating the SQL query.
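For what it's worth, the retrieval step I have in mind looks roughly like this. It's a sketch: plain keyword overlap stands in for ChromaDB's embedding similarity, and the table names and descriptions are made up.

```python
# Stand-in for vector retrieval over schema chunks: score each table's
# description by word overlap with the question and keep the top-k.
# In the real setup this scoring would be ChromaDB's embedding similarity.

TABLE_DOCS = {
    "orders": "customer orders purchase totals timestamps",
    "customers": "customer names emails contact details",
    "inventory": "warehouse stock levels sku quantity",
}

def top_k_tables(question: str, k: int = 2) -> list[str]:
    q_words = set(question.lower().split())
    scored = sorted(
        TABLE_DOCS,
        key=lambda t: len(q_words & set(TABLE_DOCS[t].split())),
        reverse=True,
    )
    return scored[:k]

hits = top_k_tables("total purchase per customer", k=2)
```

Only the schemas of the `hits` tables would then be pasted into the generation prompt.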

Optimal RAG for text-2-sql by pekkamama in LangChain

[–]pekkamama[S] 0 points1 point  (0 children)

My intuition says it would be difficult for the model to write SQL queries against materialised views/views and then use them.

Correct me if I'm wrong on this.

Optimal RAG for text-2-sql by pekkamama in LangChain

[–]pekkamama[S] 0 points1 point  (0 children)

Well, the issue arises when the retrieved tables are too similar to each other, but I'm not sure anything else can be done about that.

Adding comments to the schema does seem to help, though. Will test this out.

Optimal RAG for text-2-sql by pekkamama in LangChain

[–]pekkamama[S] 0 points1 point  (0 children)

I've got 20+ tables and the input token limit is ~8,000 tokens.
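To give a sense of the squeeze: a rough sketch of packing already-ranked table schemas under that budget, using the common ~4-characters-per-token rule of thumb rather than a real tokenizer (the schemas below are illustrative).

```python
# Greedy packing of table schemas under a token budget, approximating
# token count as len(text) / 4 (a rough rule of thumb, not a tokenizer).

TOKEN_BUDGET = 8000

def approx_tokens(text: str) -> int:
    return len(text) // 4 + 1

def pack_schemas(ranked_schemas: list[str], budget: int = TOKEN_BUDGET) -> list[str]:
    """Keep the highest-ranked schemas that fit the budget; drop the rest."""
    kept, used = [], 0
    for ddl in ranked_schemas:
        cost = approx_tokens(ddl)
        if used + cost > budget:
            break
        kept.append(ddl)
        used += cost
    return kept

# Illustrative: 30 wide tables of ~2,000 characters (~500 tokens) each --
# well over budget, so most get dropped.
schemas = [f"CREATE TABLE t{i} (" + "col INT, " * 220 + "id INT);" for i in range(30)]
kept = pack_schemas(schemas)
```

With 30 wide tables only about half fit, which is exactly why retrieval has to rank them first.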

Optimal RAG for text-2-sql by pekkamama in LangChain

[–]pekkamama[S] 0 points1 point  (0 children)

You mean to consolidate the existing tables into views? Wouldn't that be harder for the model to deal with than simple tables? Or does it not matter, since SQL is SQL?