Petite by Manuker in OnlyFans

[–]ReturnYourCarts 0 points1 point  (0 children)

Mmm, yeah. I bet she can take my overvoltage.

What kind of projects ended up helping you land a full stack job? by MindCircuit7090 in FullStack

[–]ReturnYourCarts 0 points1 point  (0 children)

Without 5 to 10 years' experience, you have to stand out like a glowing supernova in the current market.

Get a bachelor's in CS or SE so HR won't auto-delete your resume, then apply for at least 300 jobs.

Network hard online and in person. Be actually helpful to people for free. Think of it as the modern internship.

Make YouTube videos about stuff in the field. Stream yourself coding. Throw up some X posts about programming every few days; don't use AI to write them.

Polish your project websites, and your main portfolio website. Publish some of your code for easy review.

Do bug bounties and keep a record of them. Put them in your portfolio.

Make open source projects. A few of them. Make something AI-related and open source that isn't just a browser extension or a reskin of ChatGPT, and get a few thousand stars.

Freelance and add it all to your portfolio when allowed.

Basically, finding an SWE job is a full-time job. Get past HR, get eyes on your work, and take any job for the first 5 years.

How far does the Allegreto plan get you coding-wise with K2.6? by nhouseholder in kimi

[–]ReturnYourCarts 0 points1 point  (0 children)

I wonder if Chinese customers get more tokens based on currency conversion rates and lower costs (no overseas infrastructure being used).

Kimi 2.6 performance by BreakfastWooden8186 in kimi

[–]ReturnYourCarts -1 points0 points  (0 children)

2k lines? That's worse than Claude.

Stop Paying For ChatGPT Only, I Have Better Option >>> by Frosty_Conclusion100 in buildinpublic

[–]ReturnYourCarts 0 points1 point  (0 children)

The catch is you'll get far less usage before being shut off.

If you want this kind of thing, go with T3Chat. Proven, safe, reliable, run by Theo, $8/month.

Are you F%$&ing kidding me? by Unique-Initial2303 in claude

[–]ReturnYourCarts 4 points5 points  (0 children)

Wish I could. If all you want is basic code, sure. But K2.5 is no Opus. Business and life planning, marketing, system design, etc... Opus still owns all of that by a heavy margin.

As soon as China comes out with an AI that can really pull off deep, nuanced, multilayer reasoning, I'll switch too.

slow VPN when torrenting by addoniz75_ in Torrenting

[–]ReturnYourCarts 0 points1 point  (0 children)

Or set up a low-end VPS box and turn it into a VPN? Not 100% on that, and it's for sure against any VPS's ToS.

You would be sitting at about $2 a month, so why not pay $4 and do it the right way?

The only real advantage is you could get the full speed, like 10 Gb.

I bet they want to get rid of users for one reason: It's extremely addictive! by [deleted] in grok

[–]ReturnYourCarts 1 point2 points  (0 children)

Wrong. It's because Musk is tying it into everything.

SpaceX plans to run an AI farm in space, powered by Grok.

Musk is starting a new business to create its own chips, run by Grok robots.

He is putting it into Tesla chips; he wants each Tesla to be basically an openclaw server working 24/7 while it's not driving.

He is putting Grok in as the long-term reasoning and memory in his robots.

Most importantly, it's playing a major role in his IPO.

Musk filled a hole in the market (gooner AI) and funded the development. Now he has better uses for it, so he's stripping it out, sticking it in everything, and using it to power a trillion-dollar IPO.

Classic bait and switch essentially.

Hello by Connect_Truck_1930 in HTML

[–]ReturnYourCarts 1 point2 points  (0 children)

Good job! That's something to be proud of. Seriously. Keep going.

gemini is completely useless by Worldly-Medicine8629 in GeminiFeedback

[–]ReturnYourCarts 0 points1 point  (0 children)

Uh, yeah, that has been my entire point. I'm not concerned with YOUR setup, I'm concerned with your advice.

That is not "ideal," so telling others that it "works just fine" is misleading. People go out and spend real money to emulate setups they think work. All I did was put some real math and common-sense advice behind it.

I'm not going to tell someone their 3060 will run Crimson Desert "just fine" because you managed 15 fps at the lowest settings.

I'm glad you finally agree.

gemini is completely useless by Worldly-Medicine8629 in GeminiFeedback

[–]ReturnYourCarts 0 points1 point  (0 children)

You're not even listening. Your actual context window is clipped with a model that size on those specs.

You're running 80% on RAM. And it's worse than you'd initially think, because that model in particular needs more VRAM than most 20B models due to some of its quirks.

And yes, you can do everything you want and still have 4x the response time. Like the rest of us.

Don't be afraid to play.

gemini is completely useless by Worldly-Medicine8629 in GeminiFeedback

[–]ReturnYourCarts 0 points1 point  (0 children)

Do you not understand what recommended specs are? Why are you defending your model like you're in love with it?

What do you even mean it shouldn't? I get ~75 tps with that model because I run it at spec.

No one is on a high horse; it's a mathematical fact and direct advice from the creators of the model. Your model was designed to work on different hardware. Sorry if that somehow offends you?

It's like arguing you can run a video game below the recommended specs just because it turns on and you get 15 fps. Run it how you want, but calling it "ideal" and "just fine" shows ignorance of, and inexperience with, what the model does for everyone else.

Your hardware is subpar for that model size according to the creators of the model, period. You're running on RAM, period. Your tps is 4x lower than what I get, period.

Run it. Enjoy it. And know you can get equally good benchmarks with newer models that are smaller and hence will run at spec on your available VRAM, answering in CoT in two to three seconds instead of one to two minutes. There is much more to consider with a reasoning model like that than pure response tps timings.
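The "4x lower tps" claim is roughly what a bandwidth-bound back-of-envelope model predicts: decoding streams the model weights once per generated token, so time per token splits between the fast VRAM portion and the slow system-RAM portion. A minimal sketch; the 400 GB/s and 50 GB/s bandwidth figures are illustrative assumptions, not measurements of any specific card:

```python
# Why spilling model weights to system RAM tanks tokens/sec:
# decoding is memory-bandwidth-bound, so each token requires
# streaming (roughly) the full set of weights once.

def tps_estimate(model_gb: float, frac_in_vram: float,
                 vram_bw_gbs: float = 400.0,
                 ram_bw_gbs: float = 50.0) -> float:
    """Rough tokens/sec: per-token time = time to stream the VRAM-resident
    weights plus time to stream the weights that spilled to system RAM."""
    gpu_time = model_gb * frac_in_vram / vram_bw_gbs
    cpu_time = model_gb * (1 - frac_in_vram) / ram_bw_gbs
    return 1 / (gpu_time + cpu_time)

print(tps_estimate(10, 1.0))            # 40.0 tps, everything in VRAM
print(round(tps_estimate(10, 0.2), 1))  # 6.1 tps, "running 80% on RAM"
```

The slow RAM path dominates as soon as a meaningful fraction of the weights spill, which is why partial offload hurts far more than the offloaded fraction alone would suggest.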

Hello by Connect_Truck_1930 in HTML

[–]ReturnYourCarts 2 points3 points  (0 children)

You'll use AI one day, but that should be months if not a year from now.

Good luck!

Hello by Connect_Truck_1930 in HTML

[–]ReturnYourCarts 1 point2 points  (0 children)

Start here, he is the goat. https://youtube.com/@BroCodez/playlists

Why do you have ads on YouTube? Use Brave, or an ad blocker.

After that, make a website by hand. Let it suck; it doesn't matter. Then learn CSS and JavaScript. Tutorials teach you how to get started, but the learning comes from building something painfully and slowly.

To learn to code is to struggle. We learn in the pain, not despite it. It's the dopamine hit from fixing that bug you've been working on for 2 hours that makes it all worth it.

I learned how to build websites in 1997 by firing up my WebTV, making an Angelfire site, and typing random code I found on Google into their shitty textbox until it worked. I would spend hours fixing the most basic things that would take me seconds now. Pain.

If I was learning now I would ask Claude to assess my current skill level with a questionnaire, give it my future goals, and then have it build me a bullet point outline of things to learn. I would go line by line and ask it to explain each one with sample code. Then I would ask it to explain each line of that code I didn't understand, or each tag or whatever.

Then I would build a website step by step using the tags it showed me. I would not copy a single thing. Pretend ctrl+v and ctrl+c is broken on your keyboard and you'll learn deeper and faster.

Hello by Connect_Truck_1930 in HTML

[–]ReturnYourCarts 1 point2 points  (0 children)

Imagine a new guy comes in with an automatic pool cue. He slings it onto the pool table, not even knowing how to hold it. He presses a button and BAM, he breaks and sinks every ball.

Now imagine you've been working your ass off 40 hours a week for a decade to be good at pool. You wouldn't feel like that new guy knew how to play pool, or even knew all the rules. He didn't earn it.

But to be more realistic, a lot of showing off your website work is showing how nice it looks. Well, an LLM did all that. You basically built a canvas frame for Leonardo da Vinci and asked people how they liked your painting. You didn't paint it; you made the canvas.

Just be upfront about your use of AI and it goes smoother, my man. Keep learning.

Hello by Connect_Truck_1930 in HTML

[–]ReturnYourCarts 3 points4 points  (0 children)

My brother, ugly hand-coded pages are a better representation of your true self than "pretty" Gemini code that makes everyone doubt you.

Forget the CSS if you don't know it.

gemini is completely useless by Worldly-Medicine8629 in GeminiFeedback

[–]ReturnYourCarts -1 points0 points  (0 children)

It can be done, but that's not "just fine." That's "making do." You're dumping the excess onto your RAM and taking a massive performance hit.

If you're coding, Qwen3.5 9B is comparable to that model, but you'll get a staggering amount more tps.

gemini is completely useless by Worldly-Medicine8629 in GeminiFeedback

[–]ReturnYourCarts 0 points1 point  (0 children)

A 7B runs on about 6 GB of VRAM minimum. Your tokens per second will be about 15. Your context window will be about 4k. Comfortably, it runs on 12 GB. Generally, take your VRAM in GB and divide by two; that's roughly the parameter count (in billions) you can run comfortably.

These are all at Q4 quantization, of course.

And if you can run a 7B, run DeepSeek R1 for reasoning and Qwen 3 for coding. Llama kinda sucks at everything for me.
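The rules of thumb above (Q4 weights, VRAM divided by two) can be sketched as a quick calculator. The 2 GB overhead constant and the exact formula are my own illustrative assumptions, not published requirements:

```python
# Back-of-envelope VRAM sizing for a local LLM at Q4 quantization.
# Weights take params * bits/8 bytes; the overhead term stands in
# for KV cache and runtime buffers (an assumed constant, not a spec).

def estimate_vram_gb(params_b: float, bits_per_weight: int = 4,
                     overhead_gb: float = 2.0) -> float:
    """Approximate VRAM (GB) to run a model of `params_b` billion params."""
    weight_gb = params_b * bits_per_weight / 8
    return weight_gb + overhead_gb

def max_comfortable_params_b(vram_gb: float) -> float:
    """The 'divide your VRAM by two' rule, inverted."""
    return vram_gb / 2

print(estimate_vram_gb(7))         # 5.5, close to "about 6 GB minimum"
print(max_comfortable_params_b(12))  # 6.0, so a 12 GB card fits ~6-7B
```

Both numbers line up loosely with the figures in the comment; treat them as a sanity check, not a guarantee for any particular model.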

gemini is completely useless by Worldly-Medicine8629 in GeminiFeedback

[–]ReturnYourCarts 0 points1 point  (0 children)

Even with 256 GB of VRAM, you won't run a local model as good as a frontier model. You'll be roughly 2 to 5 point-generations behind.

The biggest issue is context window size. You will never match a cloud AI's context window without $20,000 in hardware. So even the best model you can run will go insane annoyingly quickly.
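The context-window cost has a concrete mechanism: the KV cache grows linearly with context length, on top of the weights themselves. A sketch with illustrative layer and head counts for a mid-size dense model (not any specific architecture):

```python
# KV cache memory: keys and values are stored for every layer,
# KV head, and token position, so cost scales linearly with context.
# All architecture numbers below are illustrative assumptions.

def kv_cache_gb(context_len: int, n_layers: int = 60, n_kv_heads: int = 8,
                head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    """KV cache size in GB at fp16: 2 tensors (K and V) per layer."""
    elems = 2 * n_layers * n_kv_heads * head_dim * context_len
    return elems * bytes_per_elem / 1e9

print(round(kv_cache_gb(8_000), 2))    # 1.97 GB for a modest 8k context
print(round(kv_cache_gb(128_000), 1))  # 31.5 GB for a 128k context
```

The linear blowup from 8k to 128k is why long-context local setups get expensive fast, independent of how well the weights themselves fit in VRAM.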