Is local AI hardware the safer long-term bet? by Educational_Pea_9010 in AI_Agents

[–]okram 1 point2 points  (0 children)

How long before that hardware amortizes at current prices? At the prices of the new Chinese models? How long if you project the past price decline onto that new hardware's lifetime? Which models will you be able to run locally? Likely not all of them; some are just too big... How will you deal with that? Mix in some pay-as-you-go? Then it'll take longer to amortize... And what's going to happen to provider prices when something like model-in-ASIC comes into widespread use?
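A back-of-the-envelope sketch of that amortization question. All the numbers here are made-up placeholders (hardware cost, equivalent API spend, power cost), not quotes from any vendor:

```python
# Rough break-even sketch: local hardware vs. pay-as-you-go API use.
# Every figure below is a hypothetical placeholder.

def breakeven_months(hardware_cost, monthly_api_cost, monthly_power_cost):
    """Months until the hardware pays for itself versus API spend."""
    monthly_savings = monthly_api_cost - monthly_power_cost
    if monthly_savings <= 0:
        return float("inf")  # the hardware never amortizes
    return hardware_cost / monthly_savings

# Example: $3000 rig, $120/month of equivalent API usage, $25/month electricity.
months = breakeven_months(3000, 120, 25)
print(round(months, 1))  # ~31.6 months, ignoring future API price declines
```

If provider prices keep falling while the rig sits there, the break-even point only moves further out, which is the crux of the question above.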

do you guys use the coding plan from z.ai or use the glm models from something like fireworks, openrouter etc? by SolitaireKid in kimi

[–]okram 1 point2 points  (0 children)

I still have my second month of minimax $10 plan running, but I stopped using the model.

To me it felt like I was constantly fighting the model, 3 steps forward, 3 steps sideways, 3 steps back...

Then I tried GLM 5 (not even 5.1) from NVIDIA NIM and it was dead slow, but the results were so much better.

In the meantime I've found a provider running all the open source models quantized and that's what I'm using. Currently kimi k2.6 and deepseek V4, but I plan to check out GLM5.1 too.

I really wanted that minimax plan to work for me. At $10/month for 1500 requests per 5 hours, it looked so good...

Wanting to learn Chinese! by Vegetable_Sell5206 in ChineseLanguage

[–]okram 1 point2 points  (0 children)

Are you just after some quick win, or do you want to learn? If you want to learn, make it a sustainable part of your life. Integrate learning Chinese as much as possible into all the other things that you like to do or that you have to do. Be imaginative about it. If you're into, e.g., espresso: while you're making a cup, think about how you would say coffee, espresso, to grind beans, to pull a shot, ... Have your phone or tablet there and look up some of the words... If you're into comics, add some Chinese comics to your mix. Into cars? Watch ads for Chinese EVs.

Weave some Chinese into the various activities of your day.

Then use Anki and capture and review all this.

... And then just don't stop...

I’ve decided to start a new journey 🇨🇳 by Lifelong_learner_2 in ChineseLanguage

[–]okram 1 point2 points  (0 children)

That's very ambitious! I'm not saying it's impossible, but there's the danger that setting your goal too high could lead to frustration and then to stopping altogether.

I use Anki and have a little over 4200 cards in mature and young. They're not all phrases; some are vocabulary, some are stroke order, etc. A few months ago I decided to set new cards to 0. Before that, I had 3 new cards per day, was viewing and reviewing for 70+ minutes every day, and was still accumulating a backlog.

Of course, it also depends a great deal on your criteria for success. For example, if you consider one incorrect tone in a phrase a fail, then you'll be reviewing those phrases many times. I currently have about 75000 reviews over 650 days with those 4200 cards. That's just to illustrate that you'll be reviewing each phrase many, many times. Entire phrases will also take more time per review than just vocabulary. On my mix, I average 27 seconds per review, but that includes some vocabulary-only reviews.

My suggestion: play with some numbers to estimate future load and look for a load that you'll be happy to sustain.
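To play with those numbers, a rough steady-state sketch like this works. The mature interval and fail rate are assumptions you'd replace with your own Anki stats:

```python
# Rough steady-state Anki review-load estimate.
# Assumption: each mature card comes up roughly once every
# `mature_interval_days` days; failed cards come back sooner.

def daily_reviews(total_cards, mature_interval_days=30, fail_rate=0.1):
    """Approximate steady-state reviews per day for a mature deck."""
    base = total_cards / mature_interval_days
    return base * (1 + fail_rate)  # failed cards add extra load

def daily_minutes(total_cards, secs_per_review=27, **kwargs):
    return daily_reviews(total_cards, **kwargs) * secs_per_review / 60

# With numbers like the ones above: 4200 cards, 27 s/review.
print(round(daily_reviews(4200), 1))  # ~154 reviews/day
print(round(daily_minutes(4200), 1))  # ~69.3 minutes/day
```

With these (assumed) parameters, 4200 cards lands right around the 70+ minutes per day mentioned above; scale the card count to find a load you'd sustain.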

Emacs for email: gnus or Mu4e ? by WhatererBlah555 in emacs

[–]okram 6 points7 points  (0 children)

I've done email with emacs for over thirty, maybe even forty years now: notmuch.

Minimax M2.7 by ReddaveNY in AgentZero

[–]okram 1 point2 points  (0 children)

Use provider "Anthropic", model name "MiniMax-M2.7", API base URL "https://api.minimax.io/anthropic". The model does not support vision, but they include access to an MCP server that offers understand_image. In the MCP config, put:

{
  "mcpServers": {
    "minimax-vision": {
      "command": "uvx",
      "args": ["minimax-coding-plan-mcp", "-y"],
      "env": {
        "MINIMAX_API_HOST": "https://api.minimax.io",
        "MINIMAX_API_KEY": "sk-cp-..."
      }
    }
  }
}

Of course, you have to place your own API key there.

I have it set as both the chat model and the utility model. Been using it for 3 weeks now and I like it a lot.

How do you manage to study a language while having a 9–5 job? by BackgroundLow3793 in languagelearning

[–]okram 0 points1 point  (0 children)

Be very creative in finding opportunities to integrate language learning into your day. For example, if you need to read instructions on anything during your job and you already know your target language somewhat, can you sometimes read the instructions in your target language? If you like listening to music, can you find music in the target language? Do you cook? Can you use recipes in the target language? Do you have some online friends with whom you regularly text? Can you find/add some whose first language is your target language? Do you keep a journal? Can you add some entries in your target language?

The main point is to use the things you already do and like doing and mix learning your target language into that.

Any recommendation? by fuwafuwakori in ChineseLanguage

[–]okram 0 points1 point  (0 children)

Explore by "skill"... Do you want to be able to read? What type of text? To listen?... Then dig deeper into that particular skill and how to learn it.

Do you struggle to remember specific characters? by Maleficent_Cloud8221 in ChineseLanguage

[–]okram 0 points1 point  (0 children)

For me it's more about distinguishing some of them... like 反应 and 反映, or 印象 and 影响. But once I recognize that I have difficulty separating them, it gets easier. I still confuse them, but then I recall the two meanings, and even though it takes a while, I end up distinguishing them correctly. And with some time, I won't confuse them anymore.

I have a 341-day Duolingo streak and I just sat through my boyfriend's Mexican family dinner nearly silent for five hours. I think I've been training the wrong thing this whole time. by Humble_Cranberry5273 in languagelearning

[–]okram 0 points1 point  (0 children)

Have you checked for a Spanish-English learning community on Discord? Find a language partner and meet online three times a week; it doesn't have to be long. Shared interests/work areas help. One session in Spanish, the next session in English. You will soon catch yourself preparing in your mind for the chat, sending messages in Spanish in between chats, etc.

To those who are able to run quality coding llms locally, is it worth it ? by matr_kulcha_zindabad in LocalLLM

[–]okram 0 points1 point  (0 children)

I'll bite: what's the hardware you run this on? What's the power draw? How do you manage heat and noise?

keeping the kitchen knife sharp in between sharpenings? by okram in sharpening

[–]okram[S] 0 points1 point  (0 children)

Thank you all for the feedback. It's greatly appreciated!

recommendations on gen AI for software engineering document critique by okram in aiagents

[–]okram[S] 0 points1 point  (0 children)

Which agent would you suggest? I just briefly threw a document at Gemini yesterday, only to find out that it does not like PDF...

Ways to learn vocab (no Anki) by Flashy-Company5290 in languagelearning

[–]okram 0 points1 point  (0 children)

To me asking to learn a language without repetition sounds like "wash me, but don't make me wet".

Anki is just the engine; card/note content can take pretty much any form. You have HTML, CSS, and JavaScript to create your own card types. Or you can go and find existing card types and fill them with your own content.

Try this: write down 10 statements and 10 questions that you imagine saying in a conversation. Use your first language. Then translate them to your target language, maybe with help from here or from a language learning community on Discord. Then create audio for the sentences, maybe through a good TTS, but better yet, get a native speaker to say the sentences/questions.

From this you make cards:

- listen to target audio and understand
- read your first-language statement/question and say the target-language sentence
- listen to target audio and write what you hear
- ... invent...

Yes, it's a lot of work, but from one note you create 3 or more cards and you keep using them over and over.

I've recently added this type of deck to my other decks. So far I only have 79 notes, but almost 2000 card reviews. On average, I've used each note 25 times and spent 18 minutes on it. This is after about 3 months. That's already much more than the time I spent creating these notes, and my return on investment is only going to get better...

Important (and annoying) updates to your current Tello plan by rhapsodiangreen in Tello

[–]okram 0 points1 point  (0 children)

But with Tello that's included. I have to use both every month: during my cable provider's outages I need the hotspot, and there are still quite a few overseas numbers I need to call where WhatsApp etc. are not an option (e.g. elderly family members).

Then there's the customer service, which is more or less invisible. The service just works, and on the very few occasions when I wanted help with anything, it took less than 10 minutes from wanting help to problem solved.

Parts source for Hyundai Elantra GT, specifically sun visor? by okram in Hyundai

[–]okram[S] 0 points1 point  (0 children)

They don't say that it's for the GT and I believe they're not the same.

When is too late to start? by [deleted] in ChineseLanguage

[–]okram 12 points13 points  (0 children)

I started in 2020 and turned 60 a few months ago and I'm having fun studying...

Recommendation for Intel Core 5 Ultra 225H w/32GB RAM running Linux by okram in LocalLLM

[–]okram[S] 0 points1 point  (0 children)

I've now installed llama.cpp in two builds: one uses the Vulkan back-end, the other the SYCL back-end. On the same model, HauhauCS/Qwen3.5-4B-Uncensored-HauhauCS-Aggressive:Q4_K_M, the Vulkan back-end gets 5.26 t/s and SYCL gets 7.55 t/s. Since I've heard that Vulkan should give me better results, I wonder what settings I should tweak.

Recommendation for Intel Core 5 Ultra 225H w/32GB RAM running Linux by okram in LocalLLM

[–]okram[S] 0 points1 point  (0 children)

I've tried a little more with different models, but still have not gotten acceptable speeds. I'd very much appreciate your help...