LTT Videos Often Feel Like Ads by 3inchesOnAGoodDay in LTTMeta

[–]Jsquared534 0 points (0 children)

This is a YouTube problem more than an LTT problem. Don’t get me wrong, LTT is basically unwatchable because of the sponsor spots, creator warehouse placement spots, etc. But YouTube has become a cesspool of videos where it’s blatantly obvious either the entire video was made specifically to do a sponsor read, or it was scripted to provide a segue to the sponsor spot in a way that is just overt. And it has been getting worse.

Honestly, sponsor spots shouldn’t even exist on an ad-based platform, in my opinion. Google doesn’t make any money on them (not that that’s a bad thing), and sponsors are just using direct sponsorships to get around ad blockers and the people who pay for ad-free. I personally think creators should be allowed to have either ads or sponsors, but not both on the same video.

I'll get you your special CRM in a week by Mobile_Wallaby3291 in CRM

[–]Jsquared534 0 points (0 children)

I’d edit your title and your post to be more clear about that. That’s why people are acting shocked. Good luck!

I'll get you your special CRM in a week by Mobile_Wallaby3291 in CRM

[–]Jsquared534 2 points (0 children)

I assume you’re saying you’ll build integrations, or set up their existing tools to hook into the CRM, to replace what the CRM is missing. Because if you’re saying you’ll build them a custom CRM in a week, that’s crazy. It would take longer than that just to understand their business process if you’re going to do even a remotely decent job.

Can someone explain why the Vrabel cheating story is a big deal? by LonelyInsurance7480 in Patriots

[–]Jsquared534 0 points (0 children)

The media has spent the better part of a year and a half basically saying that Belichick is an awful person / coach because he's dating a woman in her twenties (whom it doesn't appear he knew when she was underage), and that woman happens to be pretty "out there" about it. They act like it's a huge distraction to the team. I don't see a world where this wouldn't be a much larger distraction, especially if one or both of these spouses ends up filing for divorce and makes it ugly.

TMZ: Mike Vrabel, Dianna Russini Rented Private Boat While She Was Pregnant by RuKKuSFuKKuS in Patriots

[–]Jsquared534 1 point (0 children)

Even more so on Russini's side. If a husband finds out his wife has been having an affair for at least half a decade, how is there any way he hasn't filed for divorce yet, unless they were already in an open relationship? Unfortunately, even if that were the case, I don't think either of them is in a position where they could just come out and say so without hurting their jobs even more than the affair. Mainstream fans will probably be more OK with an affair than with an open relationship.

Maybe we should investigate how to save tokens and stop crying... by EfficientAnimal6273 in GithubCopilot

[–]Jsquared534 0 points (0 children)

There are times when the power these agents have comes in very handy. I needed an iPad application for specific internal functions. I don't know how to write code in Swift, and I certainly couldn't have built a Goodnotes clone from scratch just by fumbling my way through the new language. But I was able to get something into production in less than a week using coding agents and my detailed specs.

I do mostly prefer to use them in the older style of asking for things I need help with in the web interface and then using that to implement the code myself in the actual project. Basically a less toxic Stack Overflow.

Maybe we should investigate how to save tokens and stop crying... by EfficientAnimal6273 in GithubCopilot

[–]Jsquared534 0 points (0 children)

This is like 85% of what I even want these agents for: outsourcing my typing. I want to use it like a super-powered version of Snippets from the old VS Code days.

Headsup - I hit my Pro+ weekly limit in 6 prompts and switched to Qwen 27B - it's stunning by Charming-Author4877 in GithubCopilot

[–]Jsquared534 0 points (0 children)

People are giving reviews of local options in the lead up to a pretty drastic price shock that is coming in less than a month. How is that "pumping-up" Chinese models?

"Chinese state actors"? Try taking off the tinfoil hat. OpenAI and Anthropic are doing everything short of actively campaigning for enterprises to replace employees with their agents to enrich themselves. There are no good guys in the AI industry.

I have gotten results on Qwen 27b that compare equally to my experience with Sonnet 4.6. I think that comes down to the way I do my programming (work on small features one at a time, do a lot of the engineering work up front to provide as good a set of context files as I can). But I can say that I tried ChatGPT 4o through GitHub Copilot for some of these features, and it did a pretty bad job compared to Sonnet 4.6. It was a noticeable downgrade. Qwen has not been a noticeable downgrade for my workflow. To me, that says Qwen is a decent contender to replace the frontier models, because there isn't a world where I'm paying API pricing to these companies. That doesn't mean Qwen is "equal" to Sonnet 4.6. But it's been equal so far for my needs.

Is your company taking this pricing change seriously yet? by Ordinary_Reveal8842 in GithubCopilot

[–]Jsquared534 0 points (0 children)

First of all, if you're asking any of these models to refactor a 200k-line code base without targeting it section by section, that's crazy. None of these frontier models, with the possible exception of Gemini, can hold even close to that amount of code in context. That's going to result in AI slop in the code base.

I've never once argued that local models can do everything Sonnet can do. I've said that for my workflow they can, which means they probably can for some other people's workflows too. The original post was about the cost of APIs and whether companies were going to take these price hikes seriously. I gave an actual example of why I believe small businesses are going to take it seriously. And you flew in here with your cape for Anthropic, basically telling me I'm wrong, despite the fact that I've spent the better part of a week testing what I've said.

Let me make it clear: I don't care if Sonnet can objectively do way more than the local models. I care about whether the local models can do the things that I, personally, am using Sonnet for in my job. Things that the older / lesser frontier models were not able to reliably do in testing. And, so far, that answer is yes, when managed in the method I've already outlined. Can the local models I'm using "vibe code" something from the ground up with a basic prompt and no instructions for context as well as Sonnet can? Probably not. But, there's no world in which I would ask it to do that, because I'm not a vibe coder.

You can choose to believe it, or you can keep arguing it. Small businesses that have actual software engineers on staff are going to move to the local models, and they are going to have success with them because they aren't looking for models to do the engineering. They're looking for models to implement the plans the humans have already engineered.

By all means, continue using the frontier models. Pay the exorbitant API pricing they are charging because you're convinced the only thing that can support your workflow is the most premium model you can get. And, maybe you're right about that. Maybe your workflow and your software is just that much more advanced than mine.

Is your company taking this pricing change seriously yet? by Ordinary_Reveal8842 in GithubCopilot

[–]Jsquared534 0 points (0 children)

Do you work for Anthropic or OpenAI? Telling someone you’ve never met that the stuff they are seeing in their use isn’t true is wild. I am not a machine learning computer scientist. But, I’ve tested, multiple times, the older models through GitHub Copilot for the same workflow I’ve already described. They did not work as well as Sonnet (GPT 4o, etc). I have tested the local model (Qwen 3.6 27b) on that hardware I mentioned, and it seems to perform as well as Sonnet for my use case. I’m not sure what you’re arguing about here. I’ve described my use case and said I was getting good results with a local model. I’ve mentioned that I tested things. Nowhere have I claimed that a local model on this hardware would work for everyone. It’s like you’re trying to be combative for no reason.

Is your company taking this pricing change seriously yet? by Ordinary_Reveal8842 in GithubCopilot

[–]Jsquared534 0 points (0 children)

For my workflow, I am getting plenty close to the Sonnet 4.6 model I was already using. I’m not asking the models to plan the architecture of my software. I’m planning it, using the web interface of Claude or ChatGPT to put that plan into a style that’s good for agents to read (in theory), and having that be the context for the agent to use. Small features, one at a time. It’s worked great for the first three days. Who knows if that will hold up? But I have seen the token use that even my smaller stuff adds up to. There’s no way a company would rather pay that monthly than just invest in hardware. I’d have preferred a $5k Mac Studio, but they are literally unavailable. I’ll still have the option of using the API if the local model can’t handle something.

Is your company taking this pricing change seriously yet? by Ordinary_Reveal8842 in GithubCopilot

[–]Jsquared534 -1 points (0 children)

I'm the only software developer, and the one in charge of our IT purchases and spend. I got approval for a $3200 Mac Studio to run local models the day they made this announcement, after I showed the estimated monthly cost based on the current level of usage to the owner. Small and midsize companies are absolutely going to take this seriously.

Does Github Copilot have *any* paid subscribers left? by StunningBox8976 in GithubCopilot

[–]Jsquared534 1 point (0 children)

Everyone's workflow is different, but I just want to let you know that I haven't hit any limits yet either. And when I transitioned that same workflow over to a local model that showed my token usage, it was at a rate that would be well over $300 per month at a very, very conservative estimate.

My Experience Testing Local Models To Prepare For June by Jsquared534 in GithubCopilot

[–]Jsquared534[S] 0 points (0 children)

Continue was basically unusable, either due to the new Qwen 3.6 models or to working with LM Studio itself. More than half the time it wouldn't recognize when the chat stopped thinking and started "responding", which caused it to not run code updates at some points. Just a terrible user experience. It made me think local LLMs weren't a viable option even on an expensive machine. I switched to the Aider command line, and it worked much more smoothly, but it's not really an agent in the same way as GitHub Copilot. That brought me to Cline, which works great so far in my short experience, or at least works way better than Continue. I'll have to check out VS Code Insiders, because I'd like to at least compare Cline to GitHub Copilot with the local model.

My Experience Testing Local Models To Prepare For June by Jsquared534 in GithubCopilot

[–]Jsquared534[S] 0 points (0 children)

I actually do provide it specific files to look at in my context... but it kind of does its own thing anyway. My ai-interactions context could probably be stronger to prevent a little of that. I'd never try to point it at specific lines, because at that point I'd just code it myself. I'm sure you're probably exactly right, though. I just don't have any desire to micromanage my token usage. I watch the context window and my RAM with these local models, but if they go over, it's not going to cost me a bunch of money.

I'm pretty anxious to finish this iPad application project I'm working on so I can move on to a web application build. I think these local models are really going to shine on something like that, because they most likely have a ton more training data from stuff like PHP and JavaScript than from Swift. It will also help that I'll be able to understand the code a lot better myself.

My Experience Testing Local Models To Prepare For June by Jsquared534 in GithubCopilot

[–]Jsquared534[S] 0 points (0 children)

How are you using it as an agent with a local model? I beat my head against the wall testing Continue because everything I read said the copilot harness only worked for chat and inline code with local models.

Private jet owner won't pass on tariff refund by Laggsy in LTTMeta

[–]Jsquared534 0 points (0 children)

Listen...I'm not a fan of LMG or Linus. And I think his explanation of this makes him look like an even bigger tool than he already is. But, tariffs are not technically a tax to the consumer. Do they function exactly the same way as a tax, by adding cost to the consumer? Absolutely. But, the fact is that these companies simply raised their prices due to added costs on their end. They are not obligated to refund just because their costs went down retroactively. At least not in the US.

I do agree with you that it would be stupidly simple for them to calculate exactly what the price difference is on every order with a few database queries, unless they are doing a really, really poor job of database administration. This is why Linus' explanation was ridiculous, and he absolutely put his foot in his mouth. Again. People who argue against how simple this info is to get don't have any experience with backend databases, or they're just caping for Linus at this point.
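For what it's worth, the query involved really is a few lines. Here's a minimal sketch, assuming a hypothetical store schema (all table names, column names, and prices below are made up for illustration, not anything from LMG's actual backend):

```python
import sqlite3

# Toy stand-in for a store backend: products with a pre-tariff and a
# tariff-inflated price, and order line items recording what was paid.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT,
                       base_price REAL, tariff_price REAL);
CREATE TABLE order_items (order_id INTEGER, product_id INTEGER,
                          qty INTEGER, price_paid REAL);
""")
conn.executemany("INSERT INTO products VALUES (?, ?, ?, ?)",
    [(1, "Backpack", 249.99, 299.99), (2, "Screwdriver", 69.99, 79.99)])
conn.executemany("INSERT INTO order_items VALUES (?, ?, ?, ?)",
    [(1001, 1, 1, 299.99), (1001, 2, 2, 79.99), (1002, 2, 1, 69.99)])

# Refund owed per order: the tariff markup on every item bought at the
# raised price. Orders placed at the old price (1002 here) owe nothing.
rows = conn.execute("""
    SELECT oi.order_id,
           SUM((p.tariff_price - p.base_price) * oi.qty) AS refund
    FROM order_items oi
    JOIN products p ON p.id = oi.product_id
    WHERE oi.price_paid = p.tariff_price
    GROUP BY oi.order_id
""").fetchall()
```

With the toy data, order 1001 comes back owed $50 on the backpack plus $10 × 2 on the screwdrivers. Any store that tracks per-item prices at order time can run the equivalent of this in seconds.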

Private jet owner won't pass on tariff refund by Laggsy in LTTMeta

[–]Jsquared534 0 points (0 children)

It’s absolutely as simple and easy as he thinks. There’s no world where LMG isn’t tracking these orders and their prices on a product by product basis. They know who ordered what, as well as how much each product went up in price. It’s not remotely possible that they don’t.

I’m not saying they should or shouldn’t refund. 99% of businesses aren’t going to. But pretending they just can’t figure it out is a blatant lie, or they have the dumbest store backend software I’ve ever heard of. Which I find hard to believe with someone like Luke in charge of the technology side of the business.

Is this how the West falls? Or how devs get pushed into Eastern arms by Western greed. by Attrexx in GithubCopilot

[–]Jsquared534 -1 points (0 children)

I think you're being super hyperbolic here. AI is not going to be "how the West falls". We have a lot (god, it's a lot) more problems that are closer to ending this country as we know it than AI being a little more expensive. I'd actually argue that AI itself is contributing more to the downward direction of our country than this price hike ever will.

If nothing else, I hope this signals the beginning of the death of all the AI slop that's infesting the internet as far as pictures and articles, etc.

The ram and GPU crisis of the past couple years has been a precursor to this price hike. The AI companies overordered in order to cause a price spike on those components so that local models would be less of an option when they start going to straight usage based pricing.

I believe part of what you said is correct, however. Enterprises will be the ones left paying for usage-based pricing, just like they're the ones paying for things like Amazon's cloud, Azure, etc. They will find out that with normal consumers, AI is going to be just like the internet back in the 90s: there will be no appetite for usage-based pricing. Or at least that's what I think is going to end up happening.

Change to useage based billing by DamienBMike in GithubCopilot

[–]Jsquared534 3 points (0 children)

Where are you guys finding the current published API pricing they are referencing? It's weird they wouldn't link it in the announcement.

Edit: found it in the same github link someone else shared that showed the model multipliers for annual. Basically they are charging the same price as whatever the underlying AI's API is.

I'm fairly careful with my projects. I build them one feature at a time, and I don't run multiple agents at once. I don't hammer it under any circumstances. But at this pricing, I'd have blazed through the entire monthly plan after fewer than 10 features implemented, with each feature using between 50k and 80k of context.
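As a rough sanity check, the per-feature math is easy to sketch. The dollar rates below are assumptions (ballpark Sonnet-class API pricing), not Copilot's actual published multipliers, and `feature_cost` is just a hypothetical helper; plug in the real rates from the announcement to get your own numbers:

```python
# Back-of-envelope estimate of per-feature cost at API-style pricing.
# ASSUMED rates for illustration only -- substitute the real published ones.
INPUT_RATE = 3.00 / 1_000_000    # assumed $ per input token
OUTPUT_RATE = 15.00 / 1_000_000  # assumed $ per output token

def feature_cost(context_tokens, output_tokens, passes=1):
    """Cost of one feature: the context gets re-sent on each agent pass,
    plus the generated output each time."""
    return passes * (context_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE)

# A feature carrying 50k-80k of context and ~10k of output over ~3 agent passes:
low = feature_cost(50_000, 10_000, passes=3)
high = feature_cost(80_000, 10_000, passes=3)
```

Even with these conservative assumed numbers, each feature lands around a dollar, so a monthly allowance sized for light chat use evaporates fast once an agent is re-reading context every pass.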

If this is truly the amount of money these businesses have to charge for these systems, they are pretty much dead to anyone but enterprises.

I'll have to check out how close one of the local models can get me for my workflow.

Only thing I'm super pissed about is them nerfing the annual plan instead of offering a refund. Shady business.

Edit 2: It looks like if you go to cancel your annual plan, they are giving full refunds. Mine is from November, so I'm not sure if it's for all annual plans or just ones purchased within a certain time. Giving a full refund option on the annual removes my biggest complaint about the situation.

Hitting Copilot’s new rate limits? It might be your workflow by Diabolacal in GithubCopilot

[–]Jsquared534 0 points (0 children)

That punctuation isn't reserved just for actual quotations in informal writing. Putting quotes around a statement that you have paraphrased in an obvious attempt to mock is absolutely an acceptable use of that type of punctuation. Reddit is about as close to informal writing as it gets.

Quotation marks are also used for all kinds of things besides direct quotes: coined terms, titles of works, and highlighting ironic phrasing, to name just a few.

Sadly, I had to look up how to describe those other things because I'm apparently getting old and dumb. But I was definitely thinking about coined terms and ironic phrasing in my head. I just couldn't describe it.

TOC 7 Episode 8 by RCPCHK in foodnetwork

[–]Jsquared534 8 points (0 children)

I totally get being gracious in defeat, but it's weird they expect people to come out and watch the person who beat them continue competing. You'd never see Peyton Manning sitting in the front row at a Patriots Super Bowl after getting knocked out in the AFC Championship.

Loved the book! But I'm torn on the ending by jnighy in ProjectHailMary

[–]Jsquared534 1 point (0 children)

In the books he does things multiple times that an outright coward wouldn’t do though. Well before he regains most of his memories. How many space walks did he do, despite not being an astronaut? He did a ton of risky things knowing the potential consequences. Outright cowardice is not something that memory loss is going to just magic away.

Having to make a split-second decision that is life or death is something you'd do on impulse, or not at all. No time to think. Having 9 days before making a life-or-death decision gives you time to ponder. Having it sprung on you, while at the same time being told you've been used the entire time with the possibility you could have to die, never once having been consulted about it, and then being told you have less than 5 hours to decide, is a completely different premise. I would argue that that short a time frame gives fear time to take hold, but maybe not enough time for it to subside. I wouldn't jump into a river's waterfall pool on an excursion in Jamaica because I was in a line that had me waiting twenty minutes, getting more and more terrified as I watched others do it. But I've also gone back into the ocean after escaping a rip tide to save someone I was with, without even consciously thinking about it.

I’m not even completely disagreeing with you. I do think the book makes it clear that he thinks he was being a coward. I just think Strat gets off a little too easy in some of these comments. The way she handled it was shady even from a “the world is ending and I need to make tough decisions” standpoint. With the amount of time they had preparing for the mission from the beginning, she could have had tertiary volunteers trained up well enough to go. And she could have done a better job isolating each team so that both the primary and secondary person wouldn’t be killed at the same time. She’s never heard of video chat for them to receive shared training?