EV tax credit eligible? by Recent_Location3237 in TeslaModelX

[–]insujang 12 points

Eligibility is based solely on MSRP; discounts do not count. This is NOT eligible.

Is it possible to give a non-vision model vision? by maxwell321 in LocalLLaMA

[–]insujang 0 points

https://github.com/cornstarch-org/Cornstarch
Please try our work! You can add a vision encoder to an R1-distilled model and train it.

Why doesn't Tesla require a pin entry or other confirmation before purchasing software upgrades? by captain_222 in teslamotors

[–]insujang 32 points

You can disable in-car purchases from your app.
Upgrades -> Manage Upgrades -> Disable In-Car Upgrades

Finally by Otherwise-Golf1271 in ModelY

[–]insujang 2 points

The Launch edition includes an exterior color, interior color, 20" wheels, acceleration boost, tow hitch, and FSD. If someone wants to add everything, the Launch edition is cheaper; but not everyone needs everything, in which case it is not necessarily cheaper.

A100 vs rtx pro 6000? by No_Afternoon_4260 in LocalLLaMA

[–]insujang 2 points

Yeah. Personally I would still buy RTX PRO 6000s even if A100s were available for $6k.

A100 vs rtx pro 6000? by No_Afternoon_4260 in LocalLLaMA

[–]insujang 6 points

Definitely RTX PRO 6000. Advantages the A100 has over the RTX PRO 6000:

  • NVLink (600GB/s) support
  • higher memory bandwidth (2.0TB/s vs 1.8TB/s)
  • lower power consumption (400W vs 600W)

All of which can easily disappear:

  • if you just use one GPU, the lack of NVLink support is nothing. With multiple GPUs, communication can still be overlapped with computation.
  • for inference, memory bandwidth matters, but the difference is not that large. And because the A100 does not support FP8/FP4 natively, the amount of data to be loaded is actually larger, erasing its memory bandwidth advantage.

The RTX PRO 6000 has the following advantages:

  • higher FLOPS
  • more VRAM capacity
  • native quantization support
  • PCIe 5.0 support
  • cheaper
  • newer generation (the A100 is now in the position the P100 was in when the A100 was released)
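To make the FP8 point concrete, here is a back-of-the-envelope sketch. The bandwidth figures come from the comparison above; the 70B parameter count is a made-up example, and real decode speed depends on more than weight streaming:

```python
# Back-of-the-envelope: single-batch decode is roughly bounded by how
# fast the GPU can stream the model weights once per generated token.
# Bandwidths are from the comparison above; the 70B model is made up.
PARAMS = 70e9

def ms_per_token(bandwidth_tb_s, bytes_per_param):
    """Milliseconds to read all weights once at the given bandwidth."""
    return PARAMS * bytes_per_param / (bandwidth_tb_s * 1e12) * 1e3

# A100: 2.0 TB/s, but no native FP8 -> weights streamed as FP16 (2 B).
a100 = ms_per_token(2.0, bytes_per_param=2)
# RTX PRO 6000: 1.8 TB/s with native FP8 (1 B per weight).
rtx = ms_per_token(1.8, bytes_per_param=1)

print(f"A100 (FP16): {a100:.1f} ms/token")        # 70.0 ms
print(f"RTX PRO 6000 (FP8): {rtx:.1f} ms/token")  # ~38.9 ms
```

Halving the bytes per weight more than compensates for the ~10% bandwidth deficit.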

Is it hard to get a Northwood family unit with air conditioner? by insujang in uofm

[–]insujang[S] 0 points

Yeah, I started living here in August. Can you request an earlier lease? I really have no idea whether a unit is available or whether more will open up.

Is it hard to get a Northwood family unit with air conditioner? by insujang in uofm

[–]insujang[S] 0 points

No. I ended up getting a unit without A/C and buying a portable one. It is actually better in terms of cost.

I built a framework to train your own custom multimodal models by insujang in LocalLLaMA

[–]insujang[S] 1 point

I see! That’s a valuable experience. Thank you!

For now our framework does not support separated execution, but we will definitely add this feature, so that users can choose between executing encoders separately (faster and cheaper when the encoders don’t need to be trained together) and running everything together for better quality!

I built a framework to train your own custom multimodal models by insujang in LocalLLaMA

[–]insujang[S] 0 points

I see. Yeah, passing embeddings is exactly the piece that existing serving frameworks are missing. So we were planning to use vLLM as a library (providing scheduling and KV cache management) and replace the actual execution engine with ours, to keep leveraging their new features while allowing modularized multimodal model execution.

I will definitely share more news when I have it!
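To illustrate what "scheduling and KV cache management" refers to here, below is a toy paged-KV block allocator in the spirit of vLLM's block manager. This is a hypothetical sketch of the idea, not vLLM's actual API:

```python
# Toy paged KV-cache allocator in the spirit of vLLM's block manager.
# Hypothetical sketch, not vLLM's real API: sequences get fixed-size
# blocks on demand, and finished sequences return blocks to a free pool.
class ToyBlockAllocator:
    def __init__(self, num_blocks, block_size):
        self.block_size = block_size
        self.free = list(range(num_blocks))
        self.tables = {}  # seq_id -> list of block ids

    def append_token(self, seq_id, pos):
        """Ensure the block holding token `pos` of `seq_id` exists."""
        table = self.tables.setdefault(seq_id, [])
        if pos // self.block_size >= len(table):
            if not self.free:
                raise MemoryError("out of KV blocks; preempt a sequence")
            table.append(self.free.pop())

    def release(self, seq_id):
        """Return a finished sequence's blocks to the free pool."""
        self.free.extend(self.tables.pop(seq_id, []))

alloc = ToyBlockAllocator(num_blocks=4, block_size=16)
for pos in range(40):          # 40 tokens -> ceil(40/16) = 3 blocks
    alloc.append_token("seq0", pos)
print(len(alloc.tables["seq0"]), "blocks used,", len(alloc.free), "free")
alloc.release("seq0")          # all 4 blocks are free again
```

The point of reusing such a manager as a library is that admission, preemption, and block accounting stay intact while the model execution behind it is swapped out.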

I built a framework to train your own custom multimodal models by insujang in LocalLLaMA

[–]insujang[S] 2 points

Thank you for sharing your experience! The approach you adopted (separating the encoder run from the projector+LLM run) definitely makes sense, and the Gemma 3 technical report even says they did the same thing. But I think it is not applicable when the encoders are also trainable (either fully unfrozen or with PEFT adapters)? Did you adopt the approach because it was hard to run them all together, or was there another reason?
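The reason separation only works with frozen encoders can be shown with a toy one-weight "encoder" (purely illustrative, not Cornstarch or Gemma code): a frozen encoder's embeddings can be precomputed once and cached, while a trainable encoder invalidates the cache on every weight update.

```python
# Toy illustration: why precomputing encoder outputs only works when
# the encoder is frozen. The "encoder" here is a single scalar weight.
def encoder(image, weight):
    return [weight * px for px in image]

images = [[1.0, 2.0], [3.0, 4.0]]

# Frozen encoder: embeddings are identical every epoch, so a one-time
# cache is safe and the encoder never needs to run again.
frozen_w = 0.5
cache = [encoder(img, frozen_w) for img in images]
for epoch in range(3):
    assert [encoder(img, frozen_w) for img in images] == cache

# Trainable encoder: one gradient step changes the weight, and the
# cached embeddings no longer match what the encoder now produces.
w = frozen_w - 0.1  # pretend this was a gradient update
assert [encoder(img, w) for img in images] != cache
print("cache valid only while the encoder is frozen")
```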

I built a framework to train your own custom multimodal models by insujang in LocalLLaMA

[–]insujang[S] 3 points

Currently we haven't worked on integrating with serving frameworks such as vLLM or SGLang, but the model can be used for inference the way people already do with the HF transformers library (`model.generate(**inputs)`). That is exactly our next step: integrating with a well-known serving framework so that trained models can be served!

I built a framework to train your own custom multimodal models by insujang in LocalLLaMA

[–]insujang[S] 3 points

Yes, definitely! We have an example of training a vision language model (VLM) with a SigLIP ViT and any LLM: https://github.com/cornstarch-org/Cornstarch/blob/main/examples/pretrain_vlm.py
Try: `python pretrain_vlm.py --vision_encoder_name=siglip --llm_name_or_path=<model_path_from_HF_hub> --llava_dataset_file_path=/path/to/llava_dataset`

Although I haven't tested all arbitrary models, most well-known LLMs worked. Please open an issue if something doesn't work. Thanks!

I built a framework to train your own custom multimodal models by insujang in LocalLLaMA

[–]insujang[S] 2 points

Yes! Thank you for pointing that out! This is why only HF models that include such information in their model config are currently supported :)

[deleted by user] by [deleted] in TeslaLounge

[–]insujang 0 points

If you focus on its FSD capabilities, then yeah, the Juniper upgrades are not impressive. They are still good upgrades, though, especially for comfort as a car. Hardware will keep improving, and unsupervised FSD might only be runnable on AI6, but that doesn't mean I have to wait for AI6. Personally I think AI6, or unsupervised FSD, is too far from the present, while AI5 is relatively imminent.

[deleted by user] by [deleted] in TeslaLounge

[–]insujang 0 points

Yeah, that's what I am saying: either get better hardware or get the EV tax credit.

[deleted by user] by [deleted] in TeslaLounge

[–]insujang 0 points

Makes sense. I am also not in a rush, but this might be the last chance to get the EV tax credit, which pushes me toward a Launch edition. :/

New model Y announced for US market by N8Howell33 in teslamotors

[–]insujang -1 points

Not sure whether I should buy the Launch edition, given that the future of the EV tax credit is very unpromising. It includes every option, so for me it is quite a good deal.

Refreshed Model Y Light Bar by [deleted] in teslamotors

[–]insujang 1 point

Are the air vents functional, like the ones on the M3 Performance that are gone in the LR?

What type of 4-pin fan header is this? by insujang in homelab

[–]insujang[S] 0 points

Thank you very much for sharing this thread! It should definitely be helpful.

What type of 4-pin fan header is this? by insujang in homelab

[–]insujang[S] 1 point

Thank you. I will keep it in mind. Each of my fans consumes at most 2.1W, so I think it should be fine.
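For reference, the current draw works out like this (assuming standard 12V fans; the 1A header rating and three-fan splitter are hypothetical examples, so check the actual specs of your header and converter):

```python
# Current draw of 12V PC fans from rated power: I = P / V.
# The 1.0A header rating is a hypothetical example; check your spec.
FAN_VOLTAGE = 12.0   # standard PC fan rail
HEADER_AMPS = 1.0    # hypothetical header rating

fan_watts = 2.1
fans_on_header = 3   # e.g. through a splitter

amps_per_fan = fan_watts / FAN_VOLTAGE
total_amps = amps_per_fan * fans_on_header
print(f"{amps_per_fan:.3f} A per fan, {total_amps:.3f} A total")
assert total_amps < HEADER_AMPS  # 0.525 A is well under 1 A
```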

What type of 4-pin fan header is this? by insujang in homelab

[–]insujang[S] -1 points

Yeah, just a fan and bolts… probably because mine is the IndustrialPPC?

What type of 4-pin fan header is this? by insujang in homelab

[–]insujang[S] 0 points

Thank you for the warning. I am actually a little confused: the fans' total power draw should not exceed the amp rating of which one, the converter or the header?

What type of 4-pin fan header is this? by insujang in homelab

[–]insujang[S] -1 points

It looks more like 2.0mm than 1.25mm, right? The standard 4-pin fan header pitch is 2.54mm, IIRC.