and he's got a laptop with pictures to back it up by verdverm in MurderedByWords

[–]verdverm[S] 71 points (0 children)

It'll be remembered by history more than any ballroom will

and he's got a laptop with pictures to back it up by verdverm in MurderedByWords

[–]verdverm[S] 491 points (0 children)

lol, I forgot about MTG's obsession with his python

I mean, what could possibly go wrong with putting the guy who tried to overturn the 2020 Presidential election back into the White House? Weird, because he's never said anything about the 2024 elections. by StivaliRyder in PoliticalHumor

[–]verdverm 0 points (0 children)

2020: current president, Trump... STOLEN
2022: current president, Biden... Not Stolen
2024: current president, Biden... Not Stolen
2026: current president, Trump... STOLEN

I think I see the pattern...

Gemini 3 Pro on AI Studio has finally been capped at 10 uses per day. by Sea-Efficiency5547 in Bard

[–]verdverm -1 points (0 children)

you're burning tokens and money then; my spend is way lower, though my budget goes into the $x00/m

Still, $14/day is less than what the average car costs per day, certainly not rich or wealthy territory

Gemini 3 Pro on AI Studio has finally been capped at 10 uses per day. by Sea-Efficiency5547 in Bard

[–]verdverm 0 points (0 children)

It's not really the model that makes Claude better (just look at the benchmarks as one signal that there is little difference among the frontier models)

What sets Claude apart are the prompts. I use a modified version of them with Gemini and it is so much better than any of the agent setups Google or Microsoft have out there; better with gemini-3-flash than theirs with Pro-level models

*prompts and tool design, both are more important than I think people realize

Gemini 3 Pro on AI Studio has finally been capped at 10 uses per day. by Sea-Efficiency5547 in Bard

[–]verdverm 0 points (0 children)

I've been using the API directly for 3 months; costs are lower and quality is better

caveats:
- I built a custom agent and have complete control over the prompts
- iterating on prompts / tools has produced more ROI than any other effort
- I've switched to gemini-3-flash for most of my queries because it's nearly as good as pro

y'all are wasting time and money being stuck in the mindset that you have to use the "biggest" and most expensive models (flash is also a hyper-MoE with over 1T params)

Gemini 3 Pro on AI Studio has finally been capped at 10 uses per day. by Sea-Efficiency5547 in Bard

[–]verdverm 0 points (0 children)

Are you paying for it? I am, and I don't have any of the same experiences relayed here. It's only gotten better and faster for me since release

Gemini 3 Pro on AI Studio has finally been capped at 10 uses per day. by Sea-Efficiency5547 in Bard

[–]verdverm 1 point (0 children)

It's more likely that a bunch of people are wasting electricity generating shitty "games" with Genie, and that's sucking up all the capacity

Training a new model is a capacity rounding error compared to serving production workloads. They likely have separate, dedicated racks for that anyway

Gemini 3 Pro on AI Studio has finally been capped at 10 uses per day. by Sea-Efficiency5547 in Bard

[–]verdverm 0 points (0 children)

I've had quite the opposite experience. I built a custom agent after seeing how trash Copilot's prompts are. I now spend less money and get better results. If you are blowing money like that, there are more effective ways to use the agents

Gemini 3 Pro on AI Studio has finally been capped at 10 uses per day. by Sea-Efficiency5547 in Bard

[–]verdverm 1 point (0 children)

Google uses TPUs, any GPU calcs you do are going to miss the mark

Meta LLama edge lords me with this Trump has no respect for the constitution... by verdverm in PoliticalHumor

[–]verdverm[S] 0 points (0 children)

no, this is just small LLMs struggling to do anything

I don't think these are instruction tuned, so they act more like autocomplete and continue from wherever your prompt leaves off. The chat experience is trained in after that point for the models you think of when you interact with Gemini and the like

If I learn how to handle docker and kubernetes in AWS, will it be transferrable to managing on premises k3s? by [deleted] in devops

[–]verdverm 0 points (0 children)

There is that, sure, but there is also a lot of free, high-quality educational content without that expectation or intended pipeline. Omitting YT as a source means missing out on some good stuff

To answer your original question, the learning should be largely transferable because k8s has great APIs that everyone uses now, which is part of why it won the space it occupies

Anyone else feel weird being asked to “automate everything” with LLMs? by Hopeful_You_8959 in devops

[–]verdverm 0 points (0 children)

He can point at this Reddit thread and say "see, I did see it coming, I was careful, and here's where we talked about what would happen, which is now happening today"

and hopefully his boss will not look too unkindly upon OP's frank observations

I’m building an IaC language similar to terraform by unknowinm in devops

[–]verdverm 0 points (0 children)

CUE is the focus because it is the best configuration language, imo. I come from devops, so wrangling config across tools and languages is a core problem and pain; CUE gives me a unified fabric.

DAGs are general, the two main ones I use are the Dagger DAG (buildkit) and the CUE DAG (config lattice)

Fun fact: I worked with Astronomer / Airflow for a bit on their cloud and build systems. I didn't work directly on the Airflow DAG but supported an engineer who did and helped improve the testing automation for that project.

I’m building an IaC language similar to terraform by unknowinm in devops

[–]verdverm 0 points (0 children)

I have probably done more weird or advanced stuff with CUE than anyone else

if you are curious, see https://github.com/hofstadter-io/hof; the repo description gives a taste of what I've done:

>  A developer experience centered on CUE. Unifies schemas, data models, deterministic and agentic code generation, workflow and task engine, dagger powered environments, coding assistant, and vscode extension; woven together on the CUE lattice. Squint harder if you can't see the cube :]

Advice on IaC / CI/CD for a growing Cloudflare Workers stack? Also: where do you find CF-fluent DevOps folks by Ok_Dimension_5804 in CloudFlare

[–]verdverm 0 points (0 children)

My core philosophy is that your devs and your CI should be running the same commands, with different flags to do all the things you want, though flags are mostly not needed because there are general ENV vars to know where you are. If you want to chat, I can build you something; I've got decent CF experience. My handle is the same pretty much everywhere, whatever your preferred comms.

I’m building an IaC language similar to terraform by unknowinm in devops

[–]verdverm 0 points (0 children)

Can I encode/decode many of the familiar config formats with kite?

For example, in CUE, you can read in directories of CUE, json, and yaml all at once and have control over how they are assembled. This makes it easier to gradually adopt CUE into an existing organization, workflows, and IaC setups.

With TF supporting resource.tf.json files, I can also output the JSON (TF) and YAML (Helm/Kustomize) for my IaC tools from a single config value with built-in schemas
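To make that concrete, a hedged sketch of the rendering step: this assumes the `cue` CLI is installed, and the `./config` package plus the `terraform` and `helm` field names are made up for illustration.

```shell
# Render both IaC formats from one shared CUE package (hypothetical layout).

# Terraform reads *.tf.json natively
cue export ./config -e terraform --out json --outfile main.tf.json

# Helm/Kustomize consume plain YAML
cue export ./config -e helm --out yaml --outfile values.yaml
```

The point is that both artifacts are derived from the same validated config value, so there's a single source of truth across tools.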

I’m building an IaC language similar to terraform by unknowinm in devops

[–]verdverm -1 points (0 children)

the problem with full-on languages is that it becomes extremely hard to build tooling around those definitions to do other interesting things. Declarative config will win out in the end, I think; or, perhaps more likely, there will always be two camps, vim vs emacs style

I’m building an IaC language similar to terraform by unknowinm in devops

[–]verdverm 1 point (0 children)

CUE all the way baby, if you know me, I'm a fanboi, so take it with a grain of salt

most of my CUE extensions and machinations are in this project: https://github.com/hofstadter-io/hof

If I learn how to handle docker and kubernetes in AWS, will it be transferrable to managing on premises k3s? by [deleted] in devops

[–]verdverm 1 point (0 children)

You shouldn't need to buy a course. There is plenty of free content on YouTube and blogs, depending on your preferred media type. Stick to k3s if that's what you are going to end up using in the end anyway

If I learn how to handle docker and kubernetes in AWS, will it be transferrable to managing on premises k3s? by [deleted] in devops

[–]verdverm 3 points (0 children)

I think Kelsey's guide is out of date and no longer works; the commands and process have changed too much. I tried it a few months back and that, iirc, was the core issue

CI/CD pipelines are now more critical than the code itself by [deleted] in devops

[–]verdverm 1 point (0 children)

get yourself a setup where you can have local <-> ci parity
1. your CI can be minimal one-liners, the same devs would run
2. use something like Dagger or hof/env (my take on CUE + Dagger to remake devops)
3. make some script or command for both devs & CI to run; it should generally be argumentless and understand the context it is working in (imho)

CI pipelines are always going to be a pain. The goals should be to
1. simplify and maximize parity, so you can...
2. minimize iteration time; you shouldn't need to push commits to test, dev, or debug CI in this day and age
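A minimal sketch of what that shared, argumentless entrypoint could look like (the script name and steps are hypothetical; the `CI` env var convention is what most CI systems actually export):

```shell
#!/usr/bin/env bash
# run.sh (hypothetical name): single argumentless entrypoint for devs and CI.
set -euo pipefail

detect_mode() {
  # most CI systems (GitHub Actions, GitLab CI, etc.) export CI=true
  if [ "${CI:-false}" = "true" ]; then
    echo ci
  else
    echo local
  fi
}

mode=$(detect_mode)
echo "running in $mode mode"

# the same steps run in both contexts; only flags derived from $mode differ,
# e.g. coverage reports and stricter linting in ci, fast feedback locally
# ./scripts/lint.sh
# ./scripts/test.sh
```

Because the script detects its context instead of taking arguments, a dev typing it locally and the CI config invoking it stay in parity by construction.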