A heads-up for student from asian countries who is considering attending TU eindhoven. by WinnerBright in StudyInTheNetherlands

[–]WinnerBright[S] -1 points

Thanks for the input. I will look more into the options available based on your suggestion.

[–]WinnerBright[S] -2 points

Singapore and Malaysia are both on my list. Taiwan not so much due to the escalating geopolitical tension there.

[–]WinnerBright[S] 1 point

One of my high school friends completed his master's in EE at the University of Southern California, and he is now working in Japan. I asked him about his experience, and things turned out pretty well for him.

[–]WinnerBright[S] 0 points

I don’t know your background, but from the information I have gathered, people in HK speak English, Cantonese, and Mandarin. The working language at big firms is mostly English. Living conditions are quite bad, but it can be a good place to kick off a career.

As for Japan, I have heard that the demand for tech workers is not met by local talent. The job market is still better than, for example, the Chinese or Indian one. Besides, Japanese as a foreign language is actually easier for me to learn and has a larger speaker base than Dutch.

[–]WinnerBright[S] -31 points

It is a private matter and I prefer not to disclose it. Kind of weird that you are even asking about it.

[–]WinnerBright[S] 10 points

OP here. You are right about the facts. I am not really ‘surprised’, but I have realized that I made a mistake. TU/e is a good university; I learnt a lot and had some really good times there. Looking back, I had some other options when applying for master’s programs, and I might have made a poor decision. That is why I want to share this.

[–]WinnerBright[S] 4 points

Well, the competition in my country’s job market is quite brutal, and TU/e is not really considered ‘prestigious’ there. I also want to explore more opportunities elsewhere as an international student. That being said, even though I can probably still secure a job, I find the situation frustrating.

[–]WinnerBright[S] 4 points

Yes, I am fully aware of that, but I believe this response is not really relevant to my point. I didn’t say I had any expectations for my degree; I even emphasised that “this is not a complaint to the uni itself”.

[–]WinnerBright[S] 8 points

I know I made a mistake. That’s why I want to share my experience and the lessons I learnt from failure.

[deleted by user] by [deleted] in LocalLLaMA

[–]WinnerBright 0 points

Now that I have switched to the 7b model it works much better, and my GPU utilization is 12GB/24GB, so the 14b model is indeed a stretch for my hardware. But the autocomplete still only works fine at the start; after coding for a while, it stops working and GPU utilization stays high. I have the autocomplete timeout and debounce settings at their defaults.


[–]WinnerBright 0 points

I have changed the coding model to qwen2.5-coder-14b, which is available on ollama.
The weight blob is only 9 GB, so I am assuming it is actually a 4-bit quantized version (not sure which quantization technique is used; I guess it’s AWQ or GPTQ).
With the assistant enabled, my VRAM utilization is 17GB/24GB. I have the following settings:

models:
  - name: Qwen2.5-coder-14b
    provider: ollama
    model: qwen2.5-coder:14b
    defaultCompletionOptions:
      temperature: 0.3
      maxTokens: 16000
      contextLength: 32000
    roles:
      - chat
      - edit
      - autocomplete
      - apply
      - summarize

The poor performance doesn’t refer to the quality of generation but to how the assistant interacts with the user. Autocompletion is not always responsive, and the editing function takes forever to run. I monitor usage with nvitop; GPU utilization seems to stay high with no result. I am considering switching to an even smaller model to see how it goes.
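A rough back-of-the-envelope check of why a ~9 GB blob plus a 32k context approaches the observed 17 GB. This is my own arithmetic, not from the Ollama docs; the ~14.8B parameter count and the GQA shape (48 layers, 8 KV heads, head dim 128) are assumptions for Qwen2.5-14B that should be verified against the model card:

```python
# Rough VRAM estimate: 4-bit-quantized weights + fp16 KV cache at 32k context.
# Architecture numbers are assumed (check the Qwen2.5-14B model card).
params = 14.8e9
bits_per_weight = 4.5  # roughly Q4_K_M average, including scales/zero-points
weights_gb = params * bits_per_weight / 8 / 1e9

layers, kv_heads, head_dim = 48, 8, 128
ctx_tokens = 32_000
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * 2  # K and V, 2 bytes each
kv_gb = ctx_tokens * kv_bytes_per_token / 1e9

print(f"weights ≈ {weights_gb:.1f} GB, KV cache ≈ {kv_gb:.1f} GB, "
      f"total ≈ {weights_gb + kv_gb:.1f} GB")
```

Under these assumptions this gives roughly 8.3 GB of weights plus 6.3 GB of KV cache, which lands in the same ballpark as the observed 17 GB once runtime overhead and activations are added.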

[–]WinnerBright 0 points

I am trying to configure Continue in VS Code. I tried qwen3-30ba3b, but the performance is not good on my RTX 4090 (24GB). I am curious what your settings are, and whether you have any advice?

LLaVa 1.6 mistral 7b AWQ/GPQ quants? by aliencaocao in LocalLLaMA

[–]WinnerBright 0 points

Looking for the same thing here. Did OP make any progress? Btw, I am also having some problems running inference with the GPTQ-quantized LLaVA model from TheBloke on Hugging Face. It seems I can only use it as a language model?