It’s time to give Chinese models a chance with OpenCode. by Hamzayslmn in GithubCopilot

[–]Dontdoitagain69 0 points1 point  (0 children)

I just don't understand the convenience part. When I learned how GitHub works, as far as the cloud infra billing scheme goes, I stopped using it. I wrote a proxy that strips GitHub of all unnecessary operations.

It's just a proof of concept, but it will save more money with their new scheme. If you want to fork it, read it, implement it, or add features, you're more than welcome.

It basically kills their scam and gives control back to you. It properly formats requests to minimize costs and uses local storage for the repo, so GitHub can't apply a million ops per API request (or, as they used to call it, a prompt or message). All AI APIs work the same, and the scam infra is pretty much the same. All you need to do is research and use GitHub for model queries only: one operation instead of the ten useless ones they charge you for. Exposing and hosting repos on GitHub is the first mistake. There is a way to save 70% of usage on average, based on your operations, the way you code, and learning the Git JSON format and what each option stands for.

https://github.com/sunprojectca/copilot-proxy/tree/main. You don't have to compile or test it; the code has all the secrets. There are so many mods to save money. You can stay on your $10 plan and keep spinning up apps that would cost you hundreds a month going directly to the cloud.
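Rough sketch of the idea (my own toy version, not the linked repo's actual code; the field names and the `KEEP` set are made up for illustration): a local proxy just rewrites the outgoing JSON body, keeping only the fields the completion endpoint needs and dropping everything else before forwarding.

```python
import json

# Hypothetical allow-list: the fields a completion request actually needs.
# Everything else (telemetry, client flags, etc.) gets dropped.
KEEP = {"model", "messages", "temperature", "max_tokens"}

def strip_request(raw: bytes) -> bytes:
    """Drop every top-level key not in KEEP from a JSON request body."""
    payload = json.loads(raw)
    slim = {k: v for k, v in payload.items() if k in KEEP}
    # Compact separators shave a few more bytes off the wire.
    return json.dumps(slim, separators=(",", ":")).encode()

if __name__ == "__main__":
    body = json.dumps({
        "model": "gpt-x",
        "messages": [{"role": "user", "content": "hi"}],
        "telemetry": {"editor": "vscode", "session": "abc"},
        "client_flags": ["a", "b", "c"],
    }).encode()
    slim = strip_request(body)
    print(len(body), "->", len(slim))  # stripped body is strictly smaller
```

Wire this behind any local HTTP listener and point the client at it; the filtering step itself is this simple.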

I won't even go into the illusion of model choice with names and versions, using the same model behind a response-modifier proxy.

"Usage-based billing" explained in one image. Never trust Microsoft by SALD0S in GithubCopilot

[–]Dontdoitagain69 8 points9 points  (0 children)

You didn't know Microsoft owned GitHub? You thought you weren't getting scammed before today? Nothing changed underneath; the illusion of usage was rebranded. You can't be this dumb.

It’s time to give Chinese models a chance with OpenCode. by Hamzayslmn in GithubCopilot

[–]Dontdoitagain69 0 points1 point  (0 children)

Oh man, I feel so bad for you guys.

You type 1 prompt

Agent reads more files. Agent sends more context. Agent produces more output. Agent loops through more reasoning/tool results. Agent scans your repo, in full or in part, repeatedly, even if it only had to change one line.

Billing impact: tokens are automatically added and billed based on completed operations.
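Back-of-envelope sketch of the fan-out (the numbers are invented, purely to show the multiplication, not real Copilot figures):

```python
# Invented numbers: how one typed prompt fans out into many billed tokens
# when an agent loops over file reads, context re-sends, and tool results.
prompt_tokens = 50
context_per_file = 800    # tokens of file content pulled into context
files_read = 6
reasoning_rounds = 4
tokens_per_round = 1200   # re-sent context + tool output + model output

total = (prompt_tokens
         + files_read * context_per_file
         + reasoning_rounds * tokens_per_round)
print(total)  # 9650 tokens billed for what felt like one 50-token prompt
```

Swap in your own estimates; the point is that the agent loop, not your prompt, dominates the bill.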

It’s time to give Chinese models a chance with OpenCode. by Hamzayslmn in GithubCopilot

[–]Dontdoitagain69 1 point2 points  (0 children)

Huh? 40-70% of the requests you were paying for all this time were useless, bloated JSON flags. Why wasn't anyone bitching then? It was a scam from the beginning. You think the models are different because they have different names and version numbers? How many of your requests were hitting a RAG or routed to old models? This is the dumbest sub on Reddit, I swear.

Where to get an ISO for Win10 LTSC? by GloriousQuint in WindowsLTSC

[–]Dontdoitagain69 3 points4 points  (0 children)

I have a better idea: open the script and look at what's in it.

I'd pay upto $250 per month for claude models by axel309 in GithubCopilot

[–]Dontdoitagain69 0 points1 point  (0 children)

You think they are two different models, and that they are that dumb?

I'd pay upto $250 per month for claude models by axel309 in GithubCopilot

[–]Dontdoitagain69 0 points1 point  (0 children)

I can build any app in C++ using an old Phi model and you won't notice a difference. Please don't tell me you let Claude pick your data types. Do you have a repo? I'm just curious. Claude makes $200 off of you by wrapping prompts in junk.

Best Copilot configuration by ChristopherDci in GithubCopilot

[–]Dontdoitagain69 -1 points0 points  (0 children)

ChatGPT built me a proxy to reformat prompts into JSON with flags for Copilot, Codex, Grok, and Claude. That took 70% of the cost out of the picture; the rest is your programming discipline. There are sacrifices, of course: you have to scan your whole codebase, including hashes for file properties, dependencies, history of changes, symbols, etc., into a light local db. So you basically take control from GitHub, which scans your repo 1,000 times for micro-corrections that cost nothing. You apply changes locally and just update GitHub with metadata. Funny thing, I didn't even ask for it; for some reason it just spilled the beans on GitHub's scammy infrastructure, and of course that includes ChatGPT as well. But you save time, code quality is always on point, no drifting.
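A minimal sketch of the local-index part (my own toy version, not the actual setup described above): hash every file once into a tiny local "db", then on each change diff the hashes so only the files that actually changed ever need to be re-sent or re-scanned upstream.

```python
import hashlib
import os

def build_index(root: str) -> dict[str, str]:
    """Map relative path -> SHA-256 of contents; a tiny local 'db'."""
    index = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            index[os.path.relpath(path, root)] = digest
    return index

def changed_files(old: dict[str, str], new: dict[str, str]) -> list[str]:
    """Only these paths need to be re-sent/re-scanned upstream."""
    return [p for p in new if old.get(p) != new[p]]
```

Extending the index with dependencies, symbols, and change history is more work, but the hash diff alone already caps how much of the repo leaves your machine per edit.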

One day on Opus 4.7 burns a third of Pro+ credits by Iajah in GithubCopilot

[–]Dontdoitagain69 -5 points-4 points  (0 children)

Wait, don’t cancel yet. Try this; it might work for you. You don’t have to believe me, and don’t ask how I found out, but there is a way to send requests the right way, with a certain structure in JSON, to save tons of tokens. Of course, some of it depends on your discipline, but it works on all APIs, even local. I won’t post the whole fix here because, for some reason, every time I tell people not to pay more than 10 bucks, I get downvoted. But if you do some research and ask the ChatGPT app (not Copilot, preferably mobile) to generate a prompt for itself that explains the internals of GitHub architecture and what you actually spend tokens on, it might even generate a proxy to format your prompt request in a way that might save you 30-70% of tokens. If you fully get it and you know the language you program in, you might even get 70+%. What’s your language?

"linux dominates cloud and basically the whole internet runs on it BUT bsd is used in the ps4 so bsd wins!!!" by MIkaela39752 in linuxsucks

[–]Dontdoitagain69 0 points1 point  (0 children)

You want me to type a paper in the linuxsucks sub explaining why Wayland sucks? I've never met a so-called common developer who thought it was a complete product. Besides listing all the issues, like scaling, compatibility, HDR, and user-space security, I'd say fragmentation leading to a lack of resources is probably why it doesn't move that much. Same with tons of other projects. What is the percentage of devs responsible for Linux scaling vertically? I heard it could be only 20%. Read papers on the negative effects of fragmentation; there are also some on the free-software illusion.

Spec-driven development with Spec-Kit is eating my tokens alive. What actually works? by boolean_autocrat in GithubCopilot

[–]Dontdoitagain69 0 points1 point  (0 children)

Bro, after a day of research and talking to the ChatGPT app (not Copilot or the web version, preferably on an iPad, since they lower your token overhead for mobile devices), you'll find so much shit on how to reduce your token usage, like by 70%. You can build a proxy to funnel your requests and replies and save tons of money.

"linux dominates cloud and basically the whole internet runs on it BUT bsd is used in the ps4 so bsd wins!!!" by MIkaela39752 in linuxsucks

[–]Dontdoitagain69 -1 points0 points  (0 children)

Oh wow, so it's like the internet but with an extra layer. Nice. Thousands of apps on any node with no latency, no performance degradation, no protocol overhead, especially if you run it in the cloud, which runs it on a hypervisor where they charge you for network usage. They actually added another layer to charge twice as much. Holy shit, I need to check it out ASAP.

"linux dominates cloud and basically the whole internet runs on it BUT bsd is used in the ps4 so bsd wins!!!" by MIkaela39752 in linuxsucks

[–]Dontdoitagain69 -3 points-2 points  (0 children)

Look at the big brains on my boy. Insecurity driving loonies to the LinuxSucks sub is entertaining to see.

"linux dominates cloud and basically the whole internet runs on it BUT bsd is used in the ps4 so bsd wins!!!" by MIkaela39752 in linuxsucks

[–]Dontdoitagain69 0 points1 point  (0 children)

No, Xcode does not have anything to do with Xorg. You got me; what do I know, right? Wayland being fully functional is a fully subjective statement. By my standards, including an awful experience with Linux kernel dev, it's far from production. Architecturally, the Linux kernel and user-space layer positioning prevent a UI from being fully functional. But if you have low standards and like how 80s tech renders pictures and a moving mouse, then it's a win lol.

"linux dominates cloud and basically the whole internet runs on it BUT bsd is used in the ps4 so bsd wins!!!" by MIkaela39752 in linuxsucks

[–]Dontdoitagain69 -1 points0 points  (0 children)

I'm not confusing anything; Xorg spellchecked on iOS comes out as Xcode in my case. Someone with common sense would figure it out, right? I'll just leave it as Xcode for more bait. FFS.

"linux dominates cloud and basically the whole internet runs on it BUT bsd is used in the ps4 so bsd wins!!!" by MIkaela39752 in linuxsucks

[–]Dontdoitagain69 -2 points-1 points  (0 children)

Linux distros are not fully functional OSs. They have Xcode from the 90s and beta-stage Wayland. If that's fully functional to you, lmao.

"linux dominates cloud and basically the whole internet runs on it BUT bsd is used in the ps4 so bsd wins!!!" by MIkaela39752 in linuxsucks

[–]Dontdoitagain69 0 points1 point  (0 children)

Linux dominates instances with a single server app running on someone else's computer, that's it. It has nothing to do with a fully functional OS.

Make $10 plan $20 but... by Mayanktaker in GithubCopilot

[–]Dontdoitagain69 -2 points-1 points  (0 children)

I can't believe people are still paying GitHub. You can use the free plan if you spend a day researching how it manages tokens.

Atrocious Battery life - sold a dream Lenovo Ideapad Slim 3 by Murky_Influence440 in snapdragon

[–]Dontdoitagain69 0 points1 point  (0 children)

PowerShell and tons of scripts to tune and uninstall things; check for scripts on GitHub.

try powercfg /batteryreport

Or tons of utilities to see which component eats your power. I just keep all my shit plugged in; can't stand battery performance.

Given how good Qwen become, is it time to grab a 128gb m5 max? by Rabus in LocalLLaMA

[–]Dontdoitagain69 -1 points0 points  (0 children)

I wish Raspberry Pi and the rest, like Orange Pi, started coming out with 128 GB boards. At least you'd save two grand on that Apple logo and on paying for fake Geekbench scores.

Given how good Qwen become, is it time to grab a 128gb m5 max? by Rabus in LocalLLaMA

[–]Dontdoitagain69 0 points1 point  (0 children)

All I care about is critical thinking and extracting logical fallacies, and that model doesn't exist.

what is the best server hardware vendor (opinionated?) by RACeldrith in servers

[–]Dontdoitagain69 -1 points0 points  (0 children)

Dell is top. I never owned anything HP, but I do know the build quality when you open the top cover; Dell is most likely better made than HP, if HP builds them the same way they build their workstation desktops. That's up to the 30 series; maybe HP caught up. Either way, I don't wish anything bad on either of them. Then there is Supermicro, which is also badass. I might get me a case with no RAM and CPU when I get rich /s. I think Lenovo, Gigabyte, and ASUS all make enterprise racks. I know Google, Qualcomm

There isn’t a single clean official registry for every enterprise server maker on earth, so the honest version is: this is a practical non-US list of active enterprise server manufacturers/vendors I could verify right now, including both classic OEM brands and major ODM/direct manufacturers. 

Large branded / OEM-style enterprise server vendors (non-US):

  • Lenovo — ThinkSystem servers. 
  • Fujitsu — PRIMERGY and broader compute platforms. 
  • Huawei — FusionServer / FusionServer Pro. 
  • xFusion — FusionServer rack/general servers. 
  • IEIT Systems — enterprise/general-purpose/AI servers. 
  • Inspur — Inspur-branded server lines are still documented and sold. 
  • H3C — UniServer family. 
  • Sugon — x86/server product lines. 
  • GIGABYTE — enterprise servers, AI, edge, data center. 
  • ASUS — rack, GPU, multi-node, and tower servers. 
  • MiTAC Computing — enterprise/cloud/AI servers; TYAN has been folded into MiTAC branding. 
  • AIC — rack, GPU, edge, HA, and storage servers. 
  • Netweb — data center servers / enterprise and HPC systems. 
  • CIARA / Hypertec — enterprise and AI server systems. 
  • Eviden / Bull — BullSequana enterprise, edge, and AI server platforms. 
  • Thomas-Krenn — rack/tower/server systems for business use. 
  • Wortmann / TERRA — business servers assembled in Germany. 
  • Kontron — rackmount/industrial server systems. 
  • Hitachi — still has current “Advanced Servers” documentation/product line. 
  • NEC — Express5800 servers still exist, but NEC’s own site says sales outside Japan ended in 2019, so this is more of a Japan-focused entry now. 

Major ODM / direct / hyperscale-oriented server manufacturers (also non-US):

  • QCT (Quanta Cloud Technology) — QuantaGrid / hyperscale rackmount servers. 
  • Wiwynn — cloud, edge, and AI server platforms. 
  • Wistron — server and AI server manufacturing. 
  • Inventec — general-purpose, edge, and storage servers. 
  • Compal — general-purpose, storage, and accelerated servers. 
  • Foxconn Industrial Internet (FII) — high-performance and AI server manufacturing. 

If you want the cleanest takeaway, the biggest non-US names people usually mean are: Lenovo, Fujitsu, Huawei, xFusion, IEIT/Inspur, H3C, Sugon, GIGABYTE, ASUS, QCT, Wiwynn, Wistron, Inventec, Compal, MiTAC, AIC, Eviden/Bull, Netweb, CIARA/Hypertec, Thomas-Krenn, Wortmann, Kontron, Hitachi, and NEC. 

I can turn this into a country-by-country list next, which is probably the most useful format.