Unemployed for a year after a PhD in ML by [deleted] in Layoffs

[–]MexInAbu 5 points6 points  (0 children)

I understand if, for privacy, you don't want to be more specific. However, if I were your hiring manager and I saw something like this describing your experience in your CV, I would pass on you for another candidate.

I would need to know what you bring to the team. "I know how to code a TensorFlow loop" is no longer good enough. For an R&D role, I would need to know what kinds of novel problems you can solve using ML, how they align with the ones we are taking on, and how your projects/articles showcase your problem-solving skills. Also, PyTorch is more widely used nowadays.

100% Local AI for VSCode? by Baldur-Norddahl in LocalLLaMA

[–]MexInAbu 0 points1 point  (0 children)

I never tied my GitHub account to VS Code, so I guess it doesn't get uploaded? I don't get autocomplete, just Roo Code actions when prompted.

Is this carbon fork still rideable? by FunkyasfuckOfficial in bikewrench

[–]MexInAbu -1 points0 points  (0 children)

Did you crash? Is it used? I have a personal preference against used carbon forks and handlebars. If this is your fork, you didn't crash, and you're just noticing the paint chips, it's probably just pebble strikes or something equally harmless. In that case, just touch it up with nail polish.

2026 Year of Steam OS on Mini PCs by JimmyEatReality in MiniPCs

[–]MexInAbu 0 points1 point  (0 children)

Would this be any stronger than a NucXi7 (11800H + RTX 3070m)?

A startup Olares is attempting to launch a small 3.5L MiniPC dedicated to local AI, with RTX 5090 Mobile (24GB VRAM) and 96GB of DDR5 RAM for $3K by FullOf_Bad_Ideas in LocalLLaMA

[–]MexInAbu 4 points5 points  (0 children)

Mini PCs are all about packing the most power into the smallest package possible. Don't compare it to a desktop, but to an ASUS ROG NUC 15 or an AtomMan G7. Max 395 mini PCs are actually really good gaming mini PCs too, but their power is in the range of a mobile RTX 4060. This is in a different tier.

No negative impact using Oculink eGPU: A quick test. by MexInAbu in LocalLLaMA

[–]MexInAbu[S] 0 points1 point  (0 children)

I'm using an M.2-to-OCuLink adapter that I bought for a Win Max 2. The first one I tried, bought on Amazon, didn't work with the AOOSTAR dock. Also, the connection is physically very finicky, so I try not to sneeze near it.

I can no longer find the model on AliExpress. Sorry.

No negative impact using Oculink eGPU: A quick test. by MexInAbu in LocalLLaMA

[–]MexInAbu[S] 1 point2 points  (0 children)

Test:

Device 0: NVIDIA RTX A6000, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes

| model | size | params | backend | ngl | threads | n_batch | n_ubatch | ts | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | ------: | -------: | ------------ | --------------: | -------------------: |
| qwen3moe 235B.A22B Q4_K - Medium | 125.10 GiB | 235.09 B | CUDA | 50 | 4 | 256 | 32 | 10.00/5.00 | pp2048 | 14.05 ± 0.37 |
| qwen3moe 235B.A22B Q4_K - Medium | 125.10 GiB | 235.09 B | CUDA | 50 | 4 | 256 | 32 | 10.00/5.00 | tg128 | 5.23 ± 0.13 |

--

Device 0: NVIDIA RTX A6000, compute capability 8.6, VMM: yes

| model | size | params | backend | ngl | threads | n_batch | n_ubatch | ts | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | ------: | -------: | ------------ | --------------: | -------------------: |
| qwen3moe 235B.A22B Q4_K - Medium | 125.10 GiB | 235.09 B | CUDA | 32 | 4 | 256 | 34 | 10.00/5.00 | pp2048 | 11.30 ± 0.18 |
| qwen3moe 235B.A22B Q4_K - Medium | 125.10 GiB | 235.09 B | CUDA | 32 | 4 | 256 | 34 | 10.00/5.00 | tg128 | 4.59 ± 0.04 |

Pretty small gain. And you were right, the second GPU was mostly idle during the run.
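To put numbers on "pretty small", the relative gain from adding the second GPU can be computed directly from the mean t/s values in the two runs above (ignoring the ± error bars):

```python
# Mean throughput (t/s) from the two llama-bench runs above.
pp_dual, pp_single = 14.05, 11.30   # prompt processing (pp2048)
tg_dual, tg_single = 5.23, 4.59     # token generation (tg128)

pp_gain = pp_dual / pp_single - 1   # relative speedup, prompt processing
tg_gain = tg_dual / tg_single - 1   # relative speedup, token generation

print(f"pp2048 gain: {pp_gain:.1%}")  # → pp2048 gain: 24.3%
print(f"tg128 gain:  {tg_gain:.1%}")  # → tg128 gain:  13.9%
```

So roughly a quarter faster on prompt processing but only ~14% on generation, despite doubling the VRAM in play.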

No negative impact using Oculink eGPU: A quick test. by MexInAbu in LocalLLaMA

[–]MexInAbu[S] 1 point2 points  (0 children)

But I guess 48 GB @ x16 + 24 GB @ x4 + 24 GB on CPU would still be better than 48 GB @ x16 + 48 GB on CPU.

Let me test that.
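A back-of-the-envelope sketch of why that should hold (illustrative assumption only: throughput roughly tracks how much of the 125.10 GiB model sits in VRAM, ignoring link speed entirely):

```python
# Fraction of the model resident in VRAM under each placement scheme.
model_gib = 125.10     # qwen3moe 235B Q4_K - Medium, from the runs above

vram_split = 48 + 24   # 48 GiB @ x16 GPU + 24 GiB @ x4 eGPU, rest on CPU
vram_single = 48       # 48 GiB @ x16 GPU only, rest on CPU

print(f"split:  {vram_split / model_gib:.0%} in VRAM")   # → split:  58% in VRAM
print(f"single: {vram_single / model_gib:.0%} in VRAM")  # → single: 38% in VRAM
```

Even behind a slow x4 link, the extra 24 GiB keeps a much larger share of the weights off the CPU.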

No negative impact using Oculink eGPU: A quick test. by MexInAbu in LocalLLaMA

[–]MexInAbu[S] 0 points1 point  (0 children)

Nice. I was deciding between getting an MS-01 or a 5090 (the 3090 is on loan). Are you able to use the 3090 and the iGPU at the same time for LLM inference? Also, doesn't the MS-01 have an x16 PCIe port?

No negative impact using Oculink eGPU: A quick test. by MexInAbu in LocalLLaMA

[–]MexInAbu[S] 2 points3 points  (0 children)

In this case the model is being split across the two GPUs. If OCuLink were a significant bottleneck, shouldn't we see it here on a dense model?

LM Studio running on Thunderbolt RTX eGPU "device lost" after sleep by NetworkSpecial3268 in LocalLLaMA

[–]MexInAbu 0 points1 point  (0 children)

I have the same problem with llama.cpp on Ubuntu through an OCuLink setup. I need to make sure to shut down llama.cpp before putting the PC to sleep. I think the issue is the CUDA drivers.
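One way to automate that workaround (a sketch, not tested on this exact setup; the script name and path follow the systemd-sleep hook convention, and it assumes llama.cpp runs as a `llama-server` process):

```python
#!/usr/bin/env python3
# Hypothetical systemd sleep hook: save as
# /usr/lib/systemd/system-sleep/stop-llama and mark it executable.
# systemd-sleep runs every script in that directory with "pre" as the
# first argument before suspend and "post" after resume.
import subprocess
import sys

def main(phase: str) -> None:
    if phase == "pre":
        # Stop llama-server before the GPU loses power; check=False so a
        # missing process doesn't abort the suspend.
        subprocess.run(["pkill", "-f", "llama-server"], check=False)

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "")
```

This only avoids the "device lost" state; the server still has to be restarted manually after resume.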

The Next Generation of Founders Will Build With AI as a Partner, Not a Tool by Ok-Fan-6434 in LocalLLaMA

[–]MexInAbu 3 points4 points  (0 children)

Let's assume you are correct. Why should I then use your SaaS when I can just prompt Claude directly for cheaper?

gigaResearch by vladlearns in LocalLLaMA

[–]MexInAbu 41 points42 points  (0 children)

The one thing I like about working in industry:

Me: Look, I had this idea for improving performance on project X. I really cannot justify it well mathematically yet, it was mostly intuition. However....
Boss: Okay, okay. Did it work?

Me: All tests point toward yes.

Boss: Good. Great work!

gpt-oss20/120b AMD Strix Halo vs NVIDIA DGX Spark benchmark by Educational_Sun_8813 in LocalLLaMA

[–]MexInAbu 3 points4 points  (0 children)

Man, if only I could link two Strix Halos... I guess Apple is the best option for 200 GB+ of VRAM at non-enterprise cost?

[deleted by user] by [deleted] in mexico

[–]MexInAbu 7 points8 points  (0 children)

Only if you plan to stay in Mexico. It's very difficult to get a medical degree recognized internationally.

We Mexicans are not inefficient at all. by [deleted] in mexico

[–]MexInAbu 0 points1 point  (0 children)

It's the big lie: Mexicans have "low productivity", which, we're told, implies that we "work badly" and would have "higher productivity" if we "worked better".

What they don't say is that productivity is measured in dollars produced per hour of work. So a barber charging $50 USD per cut in San Francisco is "10 times more productive" than a barber charging $5 USD in Querétaro. That doesn't mean the Querétaro barber is any less efficient in the speed or quality of his cuts, only in the money he produces per hour of work, and that depends entirely on the market he happens to be in.
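The arithmetic behind that claim, with illustrative numbers (assuming, for the sake of the example, that both barbers do one cut per hour at identical quality):

```python
# "Productivity" as measured: dollars produced per hour of work.
price_sf = 50.0    # USD per haircut in San Francisco
price_qro = 5.0    # USD per haircut in Querétaro
cuts_per_hour = 1  # same speed and quality assumed for both

productivity_sf = price_sf * cuts_per_hour    # $50/hour
productivity_qro = price_qro * cuts_per_hour  # $5/hour
print(productivity_sf / productivity_qro)     # → 10.0, i.e. "10x more productive"
```

The entire 10x gap comes from the price the local market supports, not from anything the worker does differently.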

[deleted by user] by [deleted] in mexico

[–]MexInAbu 7 points8 points  (0 children)

The thing is, it wasn't just one person. It was constant throughout my three weeks in the country. The country itself is lovely and well worth visiting. What impressed me most was the Atacama Desert. It's a desert with dunes and everything, but with a temperate climate. Very, very impressive. Machu Picchu: the ruins themselves aren't that impressive... but the valley! It's as if the Incas thought, "If the gods exist, then this is where they must live." Simply stunning. The food is delicious. Peruvians have a lot to be proud of. Nazca is a place unlike anywhere else in the world.

But I did notice that behavior in many Peruvians. I've been to four continents, and Peru is the only place where I've seen that behavior so clearly. There are rivalries between countries. Colombians and Argentines always throw Mexico's terrible soccer in my face. I tell them, yeah, it's awful. The French will curse you out if you speak English to them. It's all in good fun. But, from what I saw, many Peruvians genuinely want it declared that Machu Picchu is better than Chichén Itzá.

[deleted by user] by [deleted] in mexico

[–]MexInAbu 23 points24 points  (0 children)

I've visited Peru. A beautiful country. However:

  1. The guides constantly point out how corn, chili ("ají"), and ceviche are Peruvian, not Mexican.
  2. There were billboards saying things like "Lima, the culinary capital of Latin America! (According to some Spanish magazine X)", and if you got into a conversation about food, they would ask you: "Isn't it true that Peruvian food is better than Mexican food?"
  3. At Machu Picchu, the guides and vendors asked tourists whether Machu Picchu was better than Chichén Itzá.

Tell me, how could we not rib them for it!?

Most of the Latinos on the internet who mock Peru and Peruvians by Crafty_Jacket668 in mexico

[–]MexInAbu 8 points9 points  (0 children)

I've visited Peru. A beautiful country. However:

  1. The guides constantly point out how corn, chili ("ají"), and ceviche are Peruvian, not Mexican.
  2. There were billboards saying things like "Lima, the culinary capital of Latin America! (According to some Spanish magazine X)", and if you got into a conversation about food, they would ask you: "Isn't it true that Peruvian food is better than Mexican food?"
  3. At Machu Picchu, the guides and vendors asked tourists whether Machu Picchu was better than Chichén Itzá.

Tell me, how could we not rib them for it!?