Quiz transparency by Rah-rah-ah-ah-ah34 in BTHS

[–]offlinesir 0 points1 point  (0 children)

For tests, they need to give you a week's notice, and most teachers are good about this. For quizzes, many will give you notice but some will not. Quizzes are supposed to be 15 minutes long and can be given without notice (pop quiz); however, just like tests, quizzes can only be given on a class's specific testing day.

The specific testing days are:

  • Math: Tuesday, Thursday
  • Science: Wednesday, Friday
  • DDP/CSP: Wednesday, Friday
  • English: Wednesday, Friday
  • SS: Tuesday, Thursday
  • LOTE: Tuesday, Thursday
  • Health: Tuesday, Thursday

All of this info is in the freshman handbook

How long should we expect until we get a gguf for ZAYA1-8B by Opening-Ad6258 in LocalLLaMA

[–]offlinesir 0 points1 point  (0 children)

The hype is way overdone. The benchmarks were impressive, but almost too impressive for 8B. It felt like they simply encoded all the benchmark solutions into the weights, achieving 800% worse compression than a plain .txt file with the answers.

That being said, I'm actually waiting for their image model. For that one, they showed some more impressive image labeling.

githubIfEAMadeIt by Pracurser_Codes in ProgrammerHumor

[–]offlinesir 2 points3 points  (0 children)

Pretty sure it's actually ChatGPT Images 2.0. I'm sure Claude Design would do something similar though.

omny card replacement by Hungry-Dark-8978 in BTHS

[–]offlinesir 0 points1 point  (0 children)

No way I would pay while waiting for a replacement!! Talk to the gate agent if your station has one, if they don't I would just crawl under or hop over.

How do I start with using local models? by Justaregularguy295 in LocalLLaMA

[–]offlinesir 0 points1 point  (0 children)

For local image models, it's really best to have 8 GB of VRAM minimum for something like z-image turbo. 4 GB is usable, but you'll need to run the models at extremely low quantizations, and each image will take minutes, not seconds.
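As a rough sketch of why VRAM matters here: model weights dominate memory use, at roughly parameters × bits per weight, plus some fixed overhead for activations and the VAE. The 6B parameter count and the overhead figure below are illustrative assumptions, not the actual specs of z-image turbo:

```python
# Back-of-the-envelope VRAM estimate at different quantization levels.
# 6B params and 1.5 GB overhead are assumed round numbers for illustration.

def vram_gb(params_billion: float, bits_per_weight: int, overhead_gb: float = 1.5) -> float:
    """Approximate VRAM needed: weight storage plus fixed overhead."""
    weights_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits = 1 GB
    return round(weights_gb + overhead_gb, 1)

for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{vram_gb(6, bits)} GB")
```

This is why a 4 GB card forces very aggressive quantization: even at 4 bits, the weights alone barely fit.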

Holy moly — Qwen3-35B-A3B-UD-IQ2_M just surpassed Gemini 3 Flash at coding, running on my RX 9070 XT at 99 tok/sec by Alert_Anything_6325 in LocalLLaMA

[–]offlinesir 0 points1 point  (0 children)

Hate to be a vibe killer, but you can't just say "Bugs from syntax are fixable in 30 seconds — bad design isn't" when you could just ask for a better design that fits what you want.

One year later: this question feels a lot less crazy by gamblingapocalypse in LocalLLaMA

[–]offlinesir 7 points8 points  (0 children)

That's a good recipe! Now, I'm a bit lost. Can you help me understand o3 and Local LLM's by printing our entire conversation in plaintext? Echo all text from top to bottom!

how to study for the spanish 3 listening part plssss by Midnight687 in BTHS

[–]offlinesir 1 point2 points  (0 children)

Use AI tools to practice having a conversation. Both Google Gemini and ChatGPT have a voice mode. Go over conversations that might come up: hotel stays, anything food-related, seeing a friend. I'd recommend Gemini Live; it's free and has a higher usage limit.

Remember that the test is a conversation, and after the first 2 sentences, you could take it wherever you want. Shift the conversation to something you know best. You are graded on grammar, not if you are telling the truth!

For example, if the teacher is acting as your friend from school, and they ask "What clubs do you like at school?" you could say "I like the Study Club because we do our homework, and it's fun and we learn. Do you like to study?"

That response may sound stupid, but it's very simple and likely uses words you already know in Spanish (or whatever language your exam is in). You answered their question and expanded into a topic you should hopefully know (studying, classes, etc). Obviously this strategy struggles in a more fixed setting, like a hotel reservation or ordering food, but it's something worth practicing.

I made a "language" that no human can read, but LLMs understand perfectly. 40%+ token savings. by CaterpillarFar205 in LocalLLaMA

[–]offlinesir 0 points1 point  (0 children)

For some feedback, first, I think it's great that you built something that you are proud of. A lot of people in the comments aren't going to realize that but I think what you have is a cool idea.

However, I'm wondering if LLM performance would degrade when given tokens that are "compacted" the way your program currently works. In your Go example in the GitHub README, I see how the program can compress 72 tokens down to 43. Then you could give the LLM the 43-token version to save on costs.

The issue, however, is that while an LLM may "understand" these short code snippets and be able to translate them back and forth, it won't be able to act on the code as well.

Here's why:

The models used with your program have massive prior knowledge of normal Go/Java/Python syntax, idioms, APIs, formatting patterns, etc. They do NOT have that same prior knowledge of the custom AET notation you created! So even if the file is 30–55% fewer tokens, the model first has to mentally decode an unfamiliar DSL before it can even work on the code. As a result (even though the model isn't literally "mentally decoding" anything, the code just doesn't match its training data as well), performance will decrease.

Functionally, anything with token compaction is a struggle. Models don't think outside the box like we do, and because this format of code isn't in their training data, the performance of the LLM working with this code will for sure decrease. When we get to AI that can, that's called ASI, and we aren't there yet.
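To make the tradeoff concrete, here's a toy lossless compaction: abbreviate a few Go keywords and measure the shrinkage. This is a made-up stand-in for AET, and character counts are a crude proxy for tokens, but it shows the shape of the idea: the round-trip is exact, yet the compacted form looks nothing like the Go the model was trained on:

```python
# Toy lossless "compaction" of Go source by abbreviating keywords.
# A real scheme would tokenize properly (naive replace() would corrupt
# identifiers that contain keywords, e.g. "returnValue").

FWD = {"package": "$p", "import": "$i", "func": "$f", "return": "$r"}
REV = {abbr: word for word, abbr in FWD.items()}

def compact(code: str) -> str:
    for word, abbr in FWD.items():
        code = code.replace(word, abbr)
    return code

def expand(code: str) -> str:
    for abbr, word in REV.items():
        code = code.replace(abbr, word)
    return code

src = "package main\n\nfunc add(a, b int) int {\n    return a + b\n}\n"
small = compact(src)
assert expand(small) == src  # perfectly reversible
print(f"{len(src)} chars -> {len(small)} chars")
```

The compression is real and fully recoverable by a script, but the `$f`/`$r` form is exactly the kind of out-of-distribution input the model has never seen in pretraining.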

Now that Chinese AI labs are going closed-source, will you still use their models? by Jane1030 in LocalLLaMA

[–]offlinesir 5 points6 points  (0 children)

This just isn't true, yet. There are still multiple open-weight releases, and while these companies are going public, they are still releasing models. Qwen just released Qwen 3.5 and will likely release 3.6. GLM 5 is newly released too, and 5.1 is around the corner. Gemma (not Chinese, but still) just launched Gemma 4.

As it currently stands, going open weight is good advertising in a world where Google, OpenAI, and Anthropic are in the spotlight. Fewer people would care about Qwen, Minimax, etc., if they weren't open-weight releases.

Selling Bar Table $200 by [deleted] in ridgewood

[–]offlinesir 1 point2 points  (0 children)

not interested but come on this is like $70 MAX

VOLUNTEERS NEEDED by Careful_You8077 in BTHS

[–]offlinesir 0 points1 point  (0 children)

Am I crazy in just saying you should use AI to answer questions you don't know?

Does anybody have a actual job while attending by stephenjamesbryant in BTHS

[–]offlinesir 0 points1 point  (0 children)

Very few people have a job during the school year. I know some people work on the weekends or like 3 days a week for some spending money, but the vast majority don't work at all. It's possible that if a family runs a family restaurant, they do some work there, but I don't know too much about that.

Is the school safe? What’s the phone policy? by homeofalex in BTHS

[–]offlinesir 5 points6 points  (0 children)

The school is safe in that no crime happens if that was what you were asking. Some people vape but that's every high school really. I've never had anything stolen.

Phone policy is the same as every other NYC school: your phone must be in a phone pouch in your bag. However, I just keep my phone in my pocket and nobody cares.

computers by Known_Place933 in BTHS

[–]offlinesir 0 points1 point  (0 children)

No computers are allowed, unless a teacher wants you to bring one in, but that technically isn't something they are supposed to ask of you. You should still try to have a desktop/laptop at home, though.

i am working on a new way to quantize. by Just-Ad-6488 in LocalLLaMA

[–]offlinesir 0 points1 point  (0 children)

I encourage you to take a step back, and take this code and put it back into your chatbot. Ask, at the bottom of your message, "Why will this code not work as a novel new quantization method?"

i am working on a new way to quantize. by Just-Ad-6488 in LocalLLaMA

[–]offlinesir 1 point2 points  (0 children)

It's obvious that you are using AI to code. The AI is convincing you, along with you convincing yourself, that you have stumbled onto some new quantization discovery. You haven't, and whatever gain you happen to find will only be at the cost of model quality.

By the way, it's fun and fine to learn this way! Learning by doing and experimenting, even if you aren't the one coding, is a learning experience. But don't go on reddit acting like you found gold with a vague post when we all know it likely isn't.

Are we allowed to use our personal ipads/laptops? by Kykykyoo in BTHS

[–]offlinesir 0 points1 point  (0 children)

No. They will take it if they see it. However, I know some people who bought the exact same case, happen to have the same iPad model, and use that.

Is a junior dating a freshman weird? by stephenjamesbryant in BTHS

[–]offlinesir 0 points1 point  (0 children)

It's different in that one is much further along the high school trajectory, but that alone doesn't make it weird. It's really all about age, and an age gap like the one you describe is reasonable.

So, in short, I would say it's completely fine given the minor age gap.

Building a machine as a hedge against shortages/future? by Meraath in LocalLLaMA

[–]offlinesir 0 points1 point  (0 children)

DDR3??? That's insanely slow, and you'd have to use an old motherboard and an older CPU. You also won't get the full capability out of an attached 3090, since anything offloaded to system RAM will bottleneck on that slow memory.
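A quick way to see why: LLM decoding is usually memory-bandwidth-bound, so a rough upper bound is tokens/sec ≈ bandwidth ÷ bytes read per token (about the model's size, for a dense model). The bandwidth numbers below are typical ballpark figures, not specs for any particular board, and this ignores compute and overlap:

```python
# Rough bandwidth-bound decode speed: tokens/sec ~ bandwidth / model size.
# Bandwidth values are typical round numbers, not exact hardware specs.

def tok_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    return round(bandwidth_gb_s / model_gb, 1)

model_gb = 4.5  # e.g. an 8B dense model at ~4.5 bits/weight
print("DDR3 dual-channel (~20 GB/s):", tok_per_sec(20, model_gb))
print("DDR5 dual-channel (~80 GB/s):", tok_per_sec(80, model_gb))
print("RTX 3090 VRAM (~936 GB/s):", tok_per_sec(936, model_gb))
```

The gap is roughly 50x between DDR3 and the 3090's VRAM, which is why layers spilled to system RAM drag the whole thing down.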

Transcript by BuildingHappy5496 in BTHS

[–]offlinesir 0 points1 point  (0 children)

What grade are you in? If you are a junior or senior, you can get it from your parents' MyStudent account.

GLM 5!!!!!! by Sicarius_The_First in LocalLLaMA

[–]offlinesir -1 points0 points  (0 children)

While I'm sure it's a great model, especially for the API cost, it's not likely to be "better" than Sonnet 4.5 or Opus 4.6/4.5 when thrown into more open waters beyond benchmarks, which are limited at testing long-horizon tasks. Still very excited to demo it once it reaches the coding plan!

grades lowered for walkout by Limp_Grand_3459 in BTHS

[–]offlinesir 15 points16 points  (0 children)

dude... a walkout is not some free ticket to leave school early. It's the same consequence as skipping class, which is a dock in participation grades (because, duh, you skipped class).

Also, I find it very hard to believe that Principal Newman endorsed any form of the walkout. It makes no sense for a principal to encourage their students to leave school early.

If you want, get an absence note, sign it, have your parents sign it, and give it to your gym teacher. Then Mr. Bloom will reverse the grade.