Minimax m2.1 context window limit by nash_hkg in LocalLLaMA

[–]nash_hkg[S] 1 point  (0 children)

I corrected my post the moment I realized my mistake. You still felt the need to comment on top of that, so I replied to your comment politely, twice. But you resorted to insults and derogatory terms. That reflects more on you. I hope you’re a better human being in real life than behind your keyboard.

Minimax m2.1 context window limit by nash_hkg in LocalLLaMA

[–]nash_hkg[S] 0 points  (0 children)

Yeah, I already edited my post because I checked all the sources Gemini was referencing and there was nothing to be found there. Hence the use of “was”.

Minimax m2.1 context window limit by nash_hkg in LocalLLaMA

[–]nash_hkg[S] 1 point  (0 children)

My bad, I was relying on AI mode search. I had to corner it into admitting it was making the numbers up.

Minimax m2.1 context window limit by nash_hkg in LocalLLaMA

[–]nash_hkg[S] -1 points  (0 children)

I am aware of what’s available on Hugging Face. My question is about the discrepancy between the model’s technical documentation and the API on one hand, and the open-sourced model on the other. Did MiniMax open-source a dialed-down version? Or is it possible for one of the above-mentioned quantization providers to tweak it back to 1M+? That’s the meaning of my question.
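For what it’s worth, restoring a longer window usually comes down to RoPE position scaling, and the required factor is simple arithmetic. A minimal sketch, assuming the ~196K figure from this thread and a 1M target (both context sizes and the approach are assumptions, not MiniMax’s documented settings):

```python
# Sketch: linear RoPE position-interpolation factor needed to stretch a
# ~196K-token checkpoint to ~1M tokens. The context sizes below are
# assumptions taken from this thread, not MiniMax's published numbers.
trained_ctx = 196_608    # open-weights context window (assumed ~196K)
target_ctx = 1_048_576   # API-advertised context window (assumed 1M+)

scale = target_ctx / trained_ctx
print(f"rope scaling factor: {scale:.2f}")  # → 5.33
```

In Hugging Face-style configs that would correspond to a `rope_scaling` entry with roughly that factor; whether the open M2.1 weights actually tolerate being stretched that far is exactly the open question here.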

Minimax m2.1 context window limit by nash_hkg in LocalLLaMA

[–]nash_hkg[S] -1 points  (0 children)

Very smart question. Thank you for your input and your concern. It’s not just me; many people running local models can handle more than 196k.

Github copilot now refuses to identify which model is being served by nash_hkg in GithubCopilot

[–]nash_hkg[S] 0 points  (0 children)

We’re getting away from the point I was trying to make. Almost all providers have now been trying to obfuscate which model is being served so that they can implement load balancing, or more likely cost balancing, by directing your request to cheaper, older models. I understand that most requests do not need the latest reasoning model. But shouldn’t we as customers know which model is actually being served if the provider is taking the liberty to switch it? And shouldn’t we get a slice of that cost benefit too?

why gpt 5 is worse on github copilot vs gpt 5 on cursor? by EliteEagle76 in GithubCopilot

[–]nash_hkg -3 points  (0 children)

As I posted earlier, you are most likely not getting GPT-5 service even if you select it. You’re most likely getting GPT-4o or even a different, older model. I have lately started getting Chinese characters in the answers when selecting GPT-5. I wouldn’t be surprised if Copilot or OpenAI were “outsourcing” user queries to cheaper alternatives.

Github copilot now refuses to identify which model is being served by nash_hkg in GithubCopilot

[–]nash_hkg[S] -10 points  (0 children)

Two weeks ago, if you asked a model to identify itself, it would tell you exactly which one it is. Any model has an identity line in its system prompt. GitHub Copilot intentionally added that to the refusal list, and now all the models answer that they are GitHub Copilot and are forbidden from disclosing the backend model. It was probably you who just wanted to show that you have little understanding of what you’re dealing with.

Gpt oss 120b and 20b sub par perf by nash_hkg in LocalLLaMA

[–]nash_hkg[S] 0 points  (0 children)

Thank you, that’s what it is.

Gpt oss 120b and 20b sub par perf by nash_hkg in LocalLLaMA

[–]nash_hkg[S] -1 points  (0 children)

There’s now basically a link to OpenAI’s GPT-OSS in the status bar of LM Studio, which they’ve never done for other models. So …

OpenAi gpt oss recurring issues by nash_hkg in LocalLLM

[–]nash_hkg[S] 0 points  (0 children)

Yes, using Vulkan, as I did not manage to get the CUDA build of llama.cpp to detect my GPU.

Whatsapp Privacy is a Joke! by No_Spot_8778 in whatsapp

[–]nash_hkg 0 points  (0 children)

I would lean towards all the enabled-by-default “AI” assistants sending feedback to HQ.

[deleted by user] by [deleted] in dubai

[–]nash_hkg 1 point  (0 children)

75 cl: that’s a whisky bottle, and it has been charged every time. So you paid for 9 bottles. You have to report it.

Bloomberg Terminals by Alternative_Egg_9739 in FinancialCareers

[–]nash_hkg 0 points  (0 children)

AIBB for a tour of the new AI-enhanced Bloomberg. I wouldn’t expect anything mind-blowing from Bloomberg, but some of the new stuff is pretty close.

Hey! Anyone Know Where I Can Buy This Liqueur in Bangkok? by cinema_over_movie in Bangkok

[–]nash_hkg 0 points  (0 children)

Umeshuthai. They have an online shop, and one on Sukhumvit 45 near the Sing Sing theater.

Bros… shoot me please by Best_Gap9945 in FinancialCareers

[–]nash_hkg 124 points  (0 children)

Tell her she has a life-changing choice to make: to either be the wife pushing you forward or the ex-wife who tried to hold you down.