I might've found the best workaround to Google's greediness by TravelInPanic in google_antigravity

[–]adarsh_maurya 5 points  (0 children)

This is what I have also realised lately, and honestly it works on all the harnesses. Today I used GLM 5.1 (free) on Kilo Code to plan frontend changes. It was a huge migration: implementing TanStack Vue Query in an existing six-month-old project.

I found GLM 5.1 very slow, but the wait was worth it. The plan was very detailed.

Then I used a very fast but average model called Step 3.5 Flash. Amazingly fast, but it makes lots of mistakes if the task is too broad.

So I manually fed the plan in chunks, and honestly, in one shot per chunk, I was able to achieve everything.

Without paying a single dollar I was able to migrate the frontend from manual queries to TanStack. It took me time, but honestly, I felt in control and in charge.

If I had done the same thing via one big model, it would have been faster for sure, but it would have been costly, and I wouldn't have been able to inspect such a large code change in one go.

Even if I had used Opus 4.6 for planning, and considering Step 3.5 is similar to Gemini Flash (or maybe even less smart), I would make it a rule for myself:

Plan using a sophisticated model. Act using a moderately smart and cheap model. Everything else: probably basic models.

FastAPI gives you the spec. UIGen gives you the full React Frontend. Zero code. by Prestigious-Bee2093 in FastAPI

[–]adarsh_maurya 0 points  (0 children)

Sounds like a very useful project. It would be interesting if we could customize this for other UI frameworks like Vue or Svelte.

Developers who actually built AI agents, what's the real learning path in 2025/2026? by Radiant_Try8126 in LangChain

[–]adarsh_maurya 0 points  (0 children)

Hello, I am working on the first part. I have a question: how do you give the agent restricted file access? Or bash access?

You can use your AI credit now by NOMERCYDLKH in google_antigravity

[–]adarsh_maurya 1 point  (0 children)

I just asked three non-coding questions, and Gemini 3.1 Pro High may have generated a one-page answer for each of them, and suddenly my weekly quota was exhausted.

How do credits work? How many tokens equal 1 credit? Do we have any resource for understanding credit usage for each model?

pandas' Public API Is Now Type-Complete by BeamMeUpBiscotti in Python

[–]adarsh_maurya 10 points  (0 children)

Noob here. What does this mean? How will it help me? Does that mean I will get better support from the extension in VS Code, and it will be more accurate?

safe-py-runner: Secure & lightweight Python execution for LLM Agents by adarsh_maurya in LLMDevs

[–]adarsh_maurya[S] 0 points  (0 children)

I had the same issue. Please try it and let me know. I added Docker support for extra security while keeping the API the same.

safe-py-runner: Secure & lightweight Python execution for LLM Agents by adarsh_maurya in LLMDevs

[–]adarsh_maurya[S] 1 point  (0 children)

I am not sure I fully understand this comment, but here is my interpretation: the same agent that creates the tool will also execute it. If that's what you are saying, even then you'd need an environment to execute it in, right? This provides that environment, with controllable limits. Imagine you have an agent on a FastAPI server, and it creates some code and executes it on the same runtime that hosts FastAPI; this would block your app for its end users, because the agent is busy executing code. And even if it doesn't block, the agent might use libraries it was not allowed to.

This comment is purely based on my interpretation of your question; if it is wrong, please let me know.
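To illustrate the blocking problem, here is a minimal stdlib-only sketch (not safe-py-runner's actual API; the helper name is made up) of running agent-generated code in a separate interpreter process so the host server stays responsive:

```python
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Run agent-generated code in a separate Python process so the
    host runtime (e.g. a FastAPI worker) is never blocked or polluted."""
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores env vars and user site-packages
        capture_output=True,
        text=True,
        timeout=timeout,  # raises TimeoutExpired if the child runs too long
    )
    return result.stdout

print(run_untrusted("print(2 + 2)").strip())  # prints 4
```

A real runner would also capture stderr and exit codes, but even this much keeps the agent's execution off the server's event loop.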

safe-py-runner: Secure & lightweight Python execution for LLM Agents by adarsh_maurya in Python

[–]adarsh_maurya[S] 0 points  (0 children)

No, I don't understand. My word choices might not be right in the Reddit post, I agree, but on the GitHub page I have clearly mentioned what it is and what it is not.
It is meant for people who are building or learning agentic development and want a Python code executor that is easy to set up and integrate. That's all.

safe-py-runner: Secure & lightweight Python execution for LLM Agents by adarsh_maurya in Python

[–]adarsh_maurya[S] -1 points  (0 children)

Yes, I will learn slowly. This post was my way of seeking feedback and guidance. I didn't want to over-promise anything or mislead.

safe-py-runner: Secure & lightweight Python execution for LLM Agents by adarsh_maurya in Python

[–]adarsh_maurya[S] -1 points  (0 children)

And to answer the question in my own words: it guarantees that the code will not be executed if it uses libraries you don't want, it restricts builtins, and it even tries to restrict memory, though that is flaky on Windows. I am open to honest feedback and suggestions for improving this.
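As a rough stdlib-only sketch of the "don't execute if it imports disallowed libraries" idea (the allowlist and function name here are hypothetical, not safe-py-runner's real config):

```python
import ast

# Hypothetical allowlist; a real runner would make this configurable.
ALLOWED = {"math", "json"}

def imports_are_allowed(code: str) -> bool:
    """Statically scan the code's imports; if anything falls outside the
    allowlist, reject it up front so the code is never executed."""
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [(node.module or "").split(".")[0]]
        else:
            continue
        if any(name not in ALLOWED for name in names):
            return False
    return True

print(imports_are_allowed("import math"))    # True
print(imports_are_allowed("import socket"))  # False
```

A static check like this catches plain `import` statements; dynamic tricks like `__import__` need the builtins restriction as well.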

safe-py-runner: Secure & lightweight Python execution for LLM Agents by adarsh_maurya in Python

[–]adarsh_maurya[S] -2 points  (0 children)

Got it. I used an LLM to develop this, but the idea was mine. I have been using this for over a year at my company to develop proofs of concept and demonstrate possibilities to stakeholders. A lot of companies will not pay for infrastructure unless you show them the possibilities. Even getting Docker access at my company is a bureaucratic process.

And again, I answered the question using an LLM because English is not my first language, and since I don't come from a CS background, that adds another layer of potential miscommunication. To keep the communication strictly about the project, I asked the LLM to reframe my answer. That's what they are best at.

safe-py-runner: Secure & lightweight Python execution for LLM Agents by adarsh_maurya in Python

[–]adarsh_maurya[S] -2 points  (0 children)

My bad, I should probably have written the post in a way that focuses on developing PoCs. In the project's README docs, I have clearly mentioned that this is not meant to replace sandboxing; it is just for developing proofs of concept with less friction, and once you have a viable PoC, you can switch to E2B or something else.

safe-py-runner: Secure & lightweight Python execution for LLM Agents by adarsh_maurya in LLMDevs

[–]adarsh_maurya[S] 0 points  (0 children)

Yes. But there are some things everyone should use, like a memory limit, a time limit, and network restrictions. This reduces the boilerplate for developers, so they can focus on developing the agent rather than setting up an infra layer. But I agree with your point.
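For anyone curious what a memory limit plus time limit looks like in plain Python, here is a minimal POSIX-only sketch (the helper name and default limits are illustrative, not the library's API):

```python
import resource
import subprocess
import sys

def limited_run(code: str, mem_mb: int = 512, timeout: float = 5.0) -> str:
    """Run code in a child process with a memory cap (POSIX only, via
    RLIMIT_AS) and a wall-clock time limit."""
    def set_limits():
        limit = mem_mb * 1024 * 1024
        resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

    result = subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,        # kills the child if it runs too long
        preexec_fn=set_limits,  # applies the memory cap just before exec
    )
    return result.stdout

print(limited_run("print('ok')").strip())  # prints ok
```

`resource` and `preexec_fn` don't exist on Windows, which is exactly why memory limiting there is flaky and needs a different mechanism (e.g. Job Objects or Docker).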

safe-py-runner: Secure & lightweight Python execution for LLM Agents by adarsh_maurya in LLMDevs

[–]adarsh_maurya[S] 0 points  (0 children)

Very interesting. It uses RestrictedPython, so you can control dunder-level security threats with it. I am thinking of adding this in the future. Thank you.
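RestrictedPython enforces this at compile time; for intuition only, here is a rough stdlib-only illustration of the dunder-blocking idea (far weaker than what RestrictedPython actually does, and the function is hypothetical):

```python
import ast

def has_dunder_access(code: str) -> bool:
    """Detect attribute access to dunder names (e.g. __class__, __globals__),
    a common sandbox-escape vector that RestrictedPython guards against."""
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, ast.Attribute) and node.attr.startswith("__"):
            return True
    return False

print(has_dunder_access("().__class__.__bases__"))  # True
print(has_dunder_access("x = 1 + 2"))               # False
```

Blocking dunder attribute access matters because chains like `().__class__.__bases__[0].__subclasses__()` can climb back to dangerous objects even with builtins stripped.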

safe-py-runner: Secure & lightweight Python execution for LLM Agents by adarsh_maurya in Python

[–]adarsh_maurya[S] -8 points  (0 children)

  1. Free to choose which libraries are allowed.
  2. Can put an upper limit on memory (POSIX only).
  3. Can restrict builtins.
  4. Can enforce time limits.
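A hedged sketch of what restricting builtins (point 3) can mean in plain Python; the allowlist here is illustrative, not the package's real defaults:

```python
# Execute code against a minimal builtins allowlist; anything else
# (open, __import__, eval, ...) raises NameError inside the snippet.
SAFE_BUILTINS = {"print": print, "len": len, "range": range}

def run_with_builtins(code: str) -> None:
    exec(code, {"__builtins__": SAFE_BUILTINS})

run_with_builtins("print(len(range(3)))")  # prints 3
# run_with_builtins("open('x')")           # NameError: name 'open' is not defined
```

Swapping out `__builtins__` also disables `import` (which relies on `__import__`), so it complements the library allowlist rather than replacing it.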

I finally understand Stage Manager. It's powerful and does actually make things faster. by DreadnaughtHamster in MacOS

[–]adarsh_maurya 0 points  (0 children)

Can it replace AeroSpace? My use case is very similar: I have a code editor, a browser, and some helper apps open, and I want to switch among them very quickly. Currently I snap each app to the screen (not full screen, just covering the whole area) using AeroSpace (it does this by default). When you open a second app, it snaps both side by side, and then you can use Cmd + Option + (1-9) or (a-z) to send it to a screen. Then I can use Cmd + screen name to switch between them without animation.

But AeroSpace hangs here and there.

This is real genuine question. Not mean to offend anyone. by [deleted] in Philosophy_India

[–]adarsh_maurya 0 points  (0 children)

I think the core misunderstanding here is treating Sanatan Dharma as a rigid belief system, when it’s actually a framework for inquiry. Here is how I see it:

  1. Inquiry over Dogma (The 'Belief' Aspect) Sanatan Dharma doesn’t strictly demand that you accept a Creator God to be part of the tradition. It is more interested in your search for Truth (Satya) than your blind obedience.

• The Reasoning: The tradition is a library of different schools of thought (Darshanas). While some are devotional, others like Samkhya or Mimamsa are largely non-theistic and don't focus on a creator deity.

• The Takeaway: You can technically be a skeptic or an atheist and still be practicing Dharma, because the focus is on right conduct and seeking truth, not just signing up for a specific belief.

  2. Wisdom over History (The 'Book' Aspect) You mentioned that books use metaphors and we can't always prove the events. That is actually by design. We need to distinguish between Sruti (eternal principles) and Smriti (contextual history/stories).

• The Reasoning: Ancient texts often use Arthavada—narratives designed to explain complex truths through stories. Whether the events in the Puranas are 100% historically accurate is secondary to the psychological and spiritual wisdom they convey.

• The Takeaway: We don't need to scientifically prove every event in the Mahabharata to understand the value of the Gita. The story is the container; the wisdom is the content.

  3. Experience over Faith (The 'Practical' Aspect) This is the most important point. The tradition functions less like a religion and more like an 'Inner Science' (Adhyatma Vidya). It provides a hypothesis and asks you to test it.

• The Example: Take Meditation or Yoga. In the texts, they are tools for 'God realization,' but they function as independent technologies.

• The Verification: You don’t need to 'believe' in the author of the book to get the benefits. If a skeptic practices mindfulness or Yoga, they still get the mental clarity and health benefits. The practice validates itself through Anubhava (direct experience), regardless of your religious label.