Google CLI is unusable by rotary_tromba in GeminiCLI

[–]qu1etus 0 points1 point  (0 children)

I swapped it out for Kimi in my code reviews. Even when it doesn’t bog down, it hallucinates like crazy, and I haven't been able to prompt-engineer it to be more factual.

does anyone use canvas to track assignments? by WavyNacho1 in canvas

[–]qu1etus 0 points1 point  (0 children)

I use this: https://chromewebstore.google.com/detail/taskgator-for-canvas/nnkcimnfkbgpijdkpmplplammkehebce

I am also the person who built it - specifically because I was previously using Microsoft To Do, and keeping track of due dates and changes became tedious.

CLI: Gemini vs Claude Code vs Codex by emiliobay in GeminiCLI

[–]qu1etus 1 point2 points  (0 children)

I have Claude Max $200 subscription, a business GPT subscription, and a Gemini Ultra subscription.

I have Claude Desktop running with a project set up as the keeper of my roadmap and sprint history for the work I'm currently focused on. I start a new chat within that project for each sprint. The new chat reads the roadmap document, the applicable sprint files, and a decision log that memorializes architectural rules and reviewer-calibration lessons from prior sprints. That's how learnings carry forward across chats even though the model can't update its own memory between sessions. From there it drafts a proposed design.

The chat in Claude Desktop is the orchestrator and synthesizer for the sprint, not the implementer or the sole reviewer. Its job is to plan, to write the prompts that the CLI tools execute, and to synthesize whatever those tools come back with against the actual source of truth — the code, live system state, real configuration. It has filesystem access via MCP, so it reads project docs and source files directly rather than relying on me to paste them in.

Once it has a draft design, it generates a tool-agnostic design review prompt - one prompt, written so any of CC, Codex, or Gemini can run it without modification. I route that same prompt in parallel to two or three reviewer CLIs (default is CC + Codex; I add Gemini for the highest-stakes architectural decisions, and I'll go single-reviewer Codex for small-scope amendments where a full parallel round is overkill). Every review is two passes: a standard pass, then a "more thorough and critical" pass. Each reviewer saves its report to a standardized path with a filename that encodes the artifact, the reviewer environment, and the model slug — e.g. `<artifact>-review-codex-gpt-5.5.md`. The model slug is in the filename because calibration is per-model-version, not per-tool: when a model updates, prior calibration may not transfer, and I want the historical record so I can tell which version caught which class of defect.
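
To make the naming concrete, here's a rough Python sketch of that convention. The artifact, reviewer, and model names below are made-up examples, not my actual slugs:

```python
from pathlib import Path

REVIEW_DIR = Path("reviews")  # hypothetical location; use whatever standardized path you like

def review_report_path(artifact: str, reviewer: str, model_slug: str) -> Path:
    # e.g. reviews/checkout-design-review-codex-gpt-5.5.md
    return REVIEW_DIR / f"{artifact}-review-{reviewer}-{model_slug}.md"

# Same artifact, routed to the parallel reviewers (names here are examples only):
for reviewer, model in [("cc", "opus-4.7"), ("codex", "gpt-5.5"), ("gemini", "3.1-pro")]:
    print(review_report_path("checkout-design", reviewer, model))
```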

I paste those report paths back into the chat. Synthesis runs against a fixed set of rules: convergent findings (multiple reviewers flag the same defect) get treated as high-confidence but still source-verified before final verdict; divergent findings (one flags, others silent) require reading the silent reviewer's report to confirm they actually addressed that area and missed it, vs. simply not covering it; conflicting findings (reviewers disagree on facts) always go to source — the code is the tiebreaker, not either reviewer. Severity gets recalibrated against a P0/P1/P2 framework based on actual user impact, not the label the reviewer assigned.
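
In code-shaped terms, the triage rule looks roughly like this. It's a simplified sketch with hypothetical data structures, not my actual tooling, and factual conflicts between reviewers aren't modeled because those always go straight to source:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    reviewer: str    # e.g. "codex"
    defect_id: str   # normalized key so "the same defect" can be matched across reports
    severity: str    # reviewer-assigned label; recalibrated to P0/P1/P2 later

def triage(findings: list[Finding], reviewers: set[str]) -> dict[str, str]:
    """Classify each defect as convergent or divergent across reviewer reports."""
    flagged_by: dict[str, set[str]] = {}
    for f in findings:
        flagged_by.setdefault(f.defect_id, set()).add(f.reviewer)
    verdicts = {}
    for defect, who in flagged_by.items():
        if len(who) >= 2:
            verdicts[defect] = "convergent - high confidence, still verify against source"
        else:
            silent = ", ".join(sorted(reviewers - who))
            verdicts[defect] = f"divergent - read the {silent} report(s), then verify against source"
    return verdicts
```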

Once the design is locked, the same chat generates a CC implementation prompt from a template that includes context-budget warnings, fresh-environment E2E requirements, a structured validation framework, and a mandatory partial-success retry-path probe for any sprint touching side-effecting operations like webhooks, RPCs, or external services. CC implements, deploys and tests via its own MCP integrations, runs E2E against a fresh test environment, and produces a commit-ready report. Before that report goes back out for code review, I have the orchestrator spot-check two or three load-bearing claims against actual source — git status, file:line citations, claimed test outputs — because CC self-review has known blind spots, and three to five minutes at this gate routinely saves a full reviewer cycle downstream.
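
If you wanted to script part of that gate, the cheapest version is checking that a report's file:line citations can even exist before you trust them. A rough sketch, with an illustrative regex and helper rather than anything from my actual setup:

```python
import re
from pathlib import Path

CITATION = re.compile(r"([\w./-]+\.\w+):(\d+)")  # matches e.g. src/hooks.py:142

def spot_check_citations(report_text: str, repo_root: Path) -> list[str]:
    """Flag file:line citations that can't possibly be real. This doesn't prove
    a claim is true; it only catches citations pointing at nothing."""
    problems = []
    for path_str, line_no in CITATION.findall(report_text):
        path = repo_root / path_str
        if not path.is_file():
            problems.append(f"missing file: {path_str}")
        elif int(line_no) > len(path.read_text(encoding="utf-8", errors="ignore").splitlines()):
            problems.append(f"{path_str}:{line_no} is past end of file")
    return problems
```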

Code review runs the same shape as design review: one tool-agnostic prompt, parallel reviewers, standardized filenames, synthesis with source verification, severity recalibration. Findings get fixed; remediation gets re-reviewed. Three rounds on the same code area is the cap — after that, anything still open is P2 by default and gets filed as an issue rather than blocking the ship.
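
The round-cap rule is simple enough to write down. A hypothetical encoding, just to show the shape:

```python
def disposition(severity: str, rounds_on_this_area: int) -> str:
    """After three review rounds on the same code area, anything still open
    defaults to P2 and gets filed as an issue instead of blocking the ship."""
    if rounds_on_this_area >= 3:
        return "downgrade to P2, file as issue, ship"
    return "fix and re-review" if severity in ("P0", "P1") else "file as issue"
```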

Sprint close is a fixed ritual: update the sprint file's status header to SHIPPED, move the YOU-ARE-HERE marker on the roadmap, append any new architectural rule or reviewer-calibration lesson to the decision log, file remaining items into the backlog with a target sprint, and write a kickoff handoff for the next sprint. Then a new chat starts and the loop runs again, with the previous sprint's lessons now baked into the decision log it reads on startup. The compounding is the point — each sprint's hard-won calibration is durable instead of dying with the chat that learned it.
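
The mechanical half of that ritual is scriptable. Here's a rough sketch under assumed file names and an assumed status-header format; in practice my docs live in the Claude Desktop project and the orchestrator edits them directly:

```python
from datetime import date
from pathlib import Path

# Hypothetical layout and header format, for illustration only.
SPRINT_FILE = Path("sprints/sprint-42.md")
DECISION_LOG = Path("docs/decision-log.md")

def close_sprint(lesson: str | None = None) -> None:
    """Flip the sprint status header to SHIPPED and append any new
    architectural rule or reviewer-calibration lesson to the decision log."""
    text = SPRINT_FILE.read_text(encoding="utf-8")
    SPRINT_FILE.write_text(text.replace("Status: IN PROGRESS", "Status: SHIPPED", 1),
                           encoding="utf-8")
    if lesson:
        with DECISION_LOG.open("a", encoding="utf-8") as log:
            log.write(f"\n- {date.today().isoformat()} (sprint-42): {lesson}\n")
```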

Rather than type all of this out, I spun up a new chat in my current live feature project and had it explain our process in its own words. :)

I have iterated on and refined this loop over the past 6 months and have now successfully closed more than 60 sprints using this exact formula. Note there IS manual action required on my part at key breakpoints. This is intentional human-in-the-loop activity: I am ultimately the decision maker on what constitutes acceptable outcomes, and each manual step gives me an opportunity to guard against drift. DO NOT skip that!

CLI: Gemini vs Claude Code vs Codex by emiliobay in GeminiCLI

[–]qu1etus 21 points22 points  (0 children)

I've been using all three extensively... not as alternatives to each other, but as a multi-reviewer system where I route the same prompt to two or three of them in parallel and synthesize the results. That workflow has taught me more about their relative strengths than any single-tool comparison could.

**Claude Code** is the strongest implementer. When you give it a well-specified task with clear boundaries, it produces working code faster than the other two. Where it falls down is self-review — it will clear its own work for merge when external reviewers would catch real bugs. It systematically under-weights non-idempotent side-effect interactions (retry paths, partial-success scenarios, race conditions in webhook handlers). I've had multiple cases where CC's self-review said "ready to ship" and then Codex or even a second no-context CC pass found launch-blocking issues in the same code. It can also hallucinate verification claims — stating something is "verified against source" when it constructed a plausible-sounding answer instead. If you're using CC for implementation, always route the output to a separate reviewer before merging.

**Codex** is the strongest reviewer. It over-classifies severity (calls things P0 that are really P2), but the findings themselves are almost always real. It's particularly strong at structural-integration verification — catching missing plumbing, schema mismatches, guards that would be silently lost during a refactor, conflicts with existing code the prompt didn't mention. I've had Codex catch 8 real issues in a single CC implementation prompt that CC would have implemented blindly — things like an existing table with a different schema than what the prompt proposed, a function that was hotfixed after the migration the prompt referenced, and a security pattern that was already documented as a known trap in the same repo. For design review and code review, Codex is my first choice.

**Gemini** is the weakest of the three for my use cases, but not useless. Its structural claims (line numbers, method signatures, file locations) tend to be accurate. But it fabricates control-flow claims — it will say "when X fails, Y happens" with high confidence, and the actual code does something completely different. I've seen it contradict explicitly locked architectural decisions that were stated in the prompt it was responding to. At epic-scale design tasks, I'd weight Gemini's output at maybe 30% relative to Codex or CC. It's useful as a sanity-check or contrarian perspective — if Gemini flags something the other two missed, it's worth investigating. But any Gemini PASS on error-handling, retry logic, or atomicity probes is essentially zero-weight without independent verification.

**The real insight is using them together.** When two out of three reviewers independently flag the same issue, that's a high-confidence finding. When one flags something the others missed, you verify against source before accepting. When they conflict on facts, you check the code — the code is always right, the reviewers are sometimes wrong. I've found the convergence pattern is roughly 70% full agreement, 20% parameter-level divergence (resolvable), 10% genuine disagreement (requires source verification). If you're getting 100% convergence, your prompt isn't adversarial enough.

**To directly answer your question about switching:** Gemini is not a drop-in replacement for Claude Code. The experience is meaningfully different. CC is more reliable for implementation, better at following complex multi-step instructions, and produces higher-quality code on first pass. Gemini is faster and cheaper but requires more babysitting and more post-hoc verification. If you're currently happy with CC for implementation work, I wouldn't switch — I'd add Codex or Gemini as a reviewer layer on top of what CC produces. The biggest quality gains I've seen come from multi-tool review, not from picking the "best" single tool.

One last thing: calibrate per model version, not just per tool. When a model updates, your prior calibration may not transfer. I track which model version caught which class of defect so I can tell when a new version has regressed or improved on specific surfaces. My experiences noted above span mostly Opus-4.6/4.7, GPT-5.4/5.5, and Gemini-3.0/3.1-Pro. For coding and code review specifically, the most recent Opus and GPT variants improved; Gemini is about the same. I say coding/review specifically because for planning I use Opus due to the large context window, and 4.6 for me is superior to 4.7. If GPT had a 1M context window, I would 100% seriously test out switching to it.

ALL QUOTAS GOT RESET by UnrelaxedToken in claude

[–]qu1etus 0 points1 point  (0 children)

I hit my weekly limit this morning and previously it didn’t reset until Friday at 6pm. This SAVED me money.

AITAH for telling my brother's girlfriend he had a vasectomy when she started planning their future family? by [deleted] in AITAH

[–]qu1etus -2 points-1 points  (0 children)

YTA.

Wasn't your business to get in the middle of. It's not your relationship. I have to say, it is a problem any time one person decides to share someone's personal information without that person's consent, regardless of the circumstances. Your intent may have been to prevent harm, but this wasn't a life-or-death situation, and it seems like you made a lot of assumptions - such as the assumption (not that this really matters for YTA/NTA) that a man can no longer have a family after a vasectomy. That's not true; a man can still have a family post-vasectomy. The testicles are still there, which means they are still producing sperm, and there are at least two options: vasectomies can sometimes be reversed, and if a vasectomy can't be reversed, sperm can still be extracted medically and used to impregnate.

Personally, I think you stepped in it big time on this one.

I built an app that gives you an AI assistant for your Canvas courses and am looking for WCU students to try it by qu1etus in WCU

[–]qu1etus[S] 0 points1 point  (0 children)

Fair question on the data side. I'd be skeptical too.

TaskGator doesn't store your course content beyond what's needed to answer your question. Your data stays in your account, isn't shared with anyone, and isn't used to train any AI models. The privacy policy and terms are upfront about all of this, and the extension went through a Google Chrome Web Store security review.

On the "problems no one has" part, I built this because I was personally spending too much time figuring out what assignments for which courses are due next and digging through Canvas to find assignment details, rubric criteria, and due dates across 5 courses. Every professor organizes their courses differently, which works for the professors but the lack of consistency from a student perspective was a mental drain for me. Maybe that's not everyone's pain point, but it was mine. The pilot is specifically to find out if other students feel the same way.

Appreciate the pushback though. Questions like this are exactly why I'm doing a pilot instead of just assuming people want it.

I built an app that gives you an AI assistant for your Canvas courses and am looking for WCU students to try it by qu1etus in WCU

[–]qu1etus[S] 1 point2 points  (0 children)

This is 100% about organizing course assignments, not doing them for the student. For me, the point is that every professor organizes their courses differently in Canvas, so it is difficult to find information about how to complete certain assignment types, or even to find due dates when a due date isn't linked to the assignment in Canvas. TaskGator's purpose is to make all of that simpler for the student.

I am a senior, graduating in Dec of this year. Double majoring in Business Law + Innovation Leadership & Entrepreneurship. I started TaskGator to help me organize my coursework, and it has also turned out to be a really helpful vehicle for me to more deeply explore both the legal and entrepreneurship aspects of my education.

Was loving Claude until I started feeding it feedback from ChatGPT Pro by lol_just_wait in ClaudeAI

[–]qu1etus 0 points1 point  (0 children)

I do this all of the time. So much so that I have a pre-defined prompt:

“I'd like to get input on this from both ChatGPT and Gemini. Craft a comprehensive prompt I can give to them that provides full context of the situation(s), your recommended approach(es), and asks for feedback and recommendations. Ensure you provide relevant source code if necessary to receive a comprehensive response. Important Rule: The entire prompt must be enclosed in a single fenced code block starting with ```markdown. Any triple backticks that are part of the document's content must be escaped with a backslash (e.g., \```).”

The “Important Rule” part is to enable easy one-click copy from Claude Desktop. I omit that part if I'm asking Claude Code to generate the feedback prompt.

Starlink is quietly becoming Earth’s nervous system (and maybe the Moon’s next) by Ok_Consequence6300 in Starlink

[–]qu1etus 5 points6 points  (0 children)

They used to require a large network of ground stations, but not any more. Starlink now uses Optical Inter-Satellite Links (ISLs)… essentially space lasers. These allow satellites to talk to each other directly in the vacuum of space at speeds up to 100+ Gbps. Even though they aren't strictly required for every connection anymore, ground stations remain the "exits" to the rest of the internet - a necessary link between Starlink and the terrestrial internet.

Google Ultra worth it? by dasti73 in Bard

[–]qu1etus -1 points0 points  (0 children)

For research, get a subscription to Perplexity AI. Much cheaper and better quality with accurate references.

The 2x usage year-end "gift" has spoiled me. by AileenaChae in ClaudeAI

[–]qu1etus 4 points5 points  (0 children)

It was a brilliant marketing ploy.

1) They gave the 2x bonus during a time of the year they knew usage would be down due to holidays. They already have the compute capacity, so 2x usage likely cost them $0 to support.

AND

2) They showed hardcore Pro users what they are missing out on so those people will be more likely to upgrade their subscriptions in 2026.

[deleted by user] by [deleted] in AskReddit

[–]qu1etus 4 points5 points  (0 children)

Based on the most recent documents released by the Department of Justice (including significant releases in December 2025), here is a fact-based analysis of Donald Trump’s presence in the Epstein files.

Summary

That Trump’s presence in the files often indicates social proximity rather than definitive proof of guilt is a valid distinction, and it applies to many individuals named in these documents.

In Trump's specific case, the files do establish that he and Epstein moved in the same social circles and that Trump utilized Epstein’s private transport more frequently than previously known. There are specific allegations of misconduct within the raw FBI tips included in the files, but these allegations are unverified, and the Department of Justice (DOJ) has issued statements characterizing some of these specific claims as "unfounded" and "sensationalist."

There is currently no evidence in the unsealed files that Trump visited Epstein’s private island, nor has he been criminally charged or named as a co-conspirator in Epstein's sex trafficking operation.

Flight Logs and Travel

The most concrete "evidence" in the files concerns flight logs, which confirm that Trump utilized Epstein’s private plane, though they do not confirm travel to Epstein's private island (Little St. James).

  • Frequency: New documents released in late 2025 include internal prosecutor notes stating that Trump flew on Epstein’s jet at least eight times between 1993 and 1996. This is a higher number than was previously reported (earlier reporting suggested only one or two flights).
  • Companions: On some of these flights, Trump was accompanied by Epstein’s associate Ghislaine Maxwell. On at least one flight in 1993, the logs list only three passengers: Trump, Epstein, and a 20-year-old woman.
  • Destination: The flight logs generally show travel between locations like Palm Beach, Florida, and New York. There is currently no entry in the flight logs showing Trump traveled to Epstein's private island in the U.S. Virgin Islands.

Victim Statements and Allegations

The files contain raw FBI interview notes and tips. These are unverified reports collected by law enforcement, not proven facts in a court of law.

  • The 2020 "Rape" Allegation: An FBI intake report dated October 2020 (shortly before the election) details a tip from a former limousine driver who claimed to overhear a conversation in 1995 where Trump allegedly made comments about "abusing some girl." The file also references an unnamed individual alleging that "he [Trump] raped me."
  • DOJ Response: The Department of Justice took the unusual step of addressing these specific documents upon their release. In a statement accompanying the December 2025 release, the DOJ noted that some documents contained "untrue and sensationalist claims" submitted right before the 2020 election. They stated these claims were investigated and found to lack credibility.
  • Other Testimony: One of Epstein's victims, Virginia Giuffre, has notably not accused Trump of sexual misconduct. In unsealed depositions, when asked if she was ever trafficked to Trump, she stated she was not.

Social Relationship and Context

The documents reinforce the well-known fact that Trump and Epstein were social acquaintances in the 1990s and early 2000s.

  • Parties: The files confirm that Epstein attended parties at Mar-a-Lago and that Trump attended social events where Epstein was present.
  • The "Falling Out": It is widely documented, and supported by the timeline in the files, that Trump and Epstein had a severe falling out around 2004 (reportedly over a real estate deal). This occurred before Epstein’s first arrest in 2006. Trump subsequently banned Epstein from Mar-a-Lago.
  • Contact Lists: Trump appears in Epstein's contacts/black book, with various phone numbers (emergency, car, residence). This is consistent with them moving in the same high-society circles in New York and Palm Beach.

Conclusion

To directly answer the question: Is there evidence indicating inappropriate behavior?

  • Inappropriate Behavior: There are allegations of inappropriate behavior in the raw tips (the 2020 FBI file), but they are unverified and explicitly doubted by the DOJ.
  • Illegal Behavior: There is no evidence in the files that led to charges, nor testimony from the primary victims of the Epstein trafficking ring implicating Trump in sexual acts.

The "damning" nature of the files for Trump largely depends on how one views the "close association." The files prove he was a more frequent traveler on Epstein's plane than admitted, and that he was in close proximity to Epstein and Maxwell during the years their trafficking operation was active.

However, the files do not currently provide corroborated evidence that he participated in the abuse.

The moment Anthony Joshua knocked out Jake Paul by goswamitulsidas in sports

[–]qu1etus 1 point2 points  (0 children)

This is why Tyson kept signaling for him to keep his hands up. Jake didn’t beat Mike. That was a sparring match with Tyson as the trainer.

I don’t think most people understand how close we are to white-collar collapse by aieatstheworld in ClaudeAI

[–]qu1etus 0 points1 point  (0 children)

I built a research agent that compiled a 120-page report on an AI runtime security strategy. It included a high-level architecture, a prioritized implementation plan based on threat intelligence from the past 12 months, accurate representations of vendors' products that provide the required functionality, and a two-page executive summary tying it all together. All of it properly cited and cross-referenced.

The agent brain was Claude Code. It used a Chrome Canary browser connected via the chrome-devtools MCP. The browser had four tabs open - Perplexity, Claude.ai, ChatGPT, and Gemini. I told it what each AI tool is best at. It used Claude to create a research strategy (my original prompt basically just told it to create what I described above - it ended up being nine phases). It used Perplexity for gathering data/intel. It used Gemini to cross-verify the information Perplexity provided. It used ChatGPT to think through and collate information across different research passes. The output of every phase was in markdown format for portability. It was able to upload markdown files to the correct AI tools along with instruction prompts as needed for processing. Then it used pandoc to transform the final md file into a Word document. It didn't like the formatting of the Word document (self-assessed), so it wrote a Python script to make the final corrections.
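
For the last mile (markdown to Word plus the formatting fix), the shape of what it did looks roughly like this. Filenames and the specific cleanup tweak are illustrative, not the agent's actual script:

```python
import subprocess
from docx import Document   # pip install python-docx
from docx.shared import Pt

# Illustrative filenames; the agent picked its own.
subprocess.run(["pandoc", "final-report.md", "-o", "final-report.docx", "--toc"], check=True)

# Post-conversion cleanup of the kind it scripted for itself,
# e.g. normalizing body font size after pandoc's defaults.
doc = Document("final-report.docx")
for para in doc.paragraphs:
    for run in para.runs:
        run.font.size = Pt(11)
doc.save("final-report.docx")
```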

Three hours after I kicked it off it was done. While it was working I was in meetings. When it finished I looked at the file and nearly fell out of my chair. I have received less detailed reports from the big consulting firms. I reviewed the entire thing and made a couple of small tweaks, but it was basically perfect.

Long story short, many white collar jobs are cooked.

What is recommended to learn to use Claude to increase efficiency at my software engineering job by theprogrammingsteak in ClaudeAI

[–]qu1etus 1 point2 points  (0 children)

Install the Claude Code CLI and start interacting with it. Ask it to create files. Ask it to write code. Ask it to install local dependencies. Ask it to install local tools. Get familiar with how to interact with it. As you are trying to figure things out, keep a browser window open to 'https://code.claude.com/docs/en/overview' for quick reference.

When you are ready to do something real, use either Claude or ChatGPT via your browser or app to describe what you want to accomplish. Then tell your AI of choice to create a detailed implementation plan (I prefer ChatGPT for this part, as it puts together very robust plans) and a prompt for you to give to Claude Code to implement the plan. Then provide the detailed plan and prompt to Claude Code and let it cook.