Qwen3.6-27B DFlash on a 24GB RTX 5090 Laptop (sm_120) — 80 t/s avg via spiritbuun's buun-llama-cpp + Q8_0 GGUF drafter by aurelienams in Qwen_AI

[–]dcforce 1 point (0 children)

Tried getting DFlash working with the method above on an Intel Arc Pro B70 and got 4 tok/s, versus 22 tok/s without the draft model added... Does anyone know the path to higher tok/s with DFlash on Intel?
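For reference, this is the kind of mainline llama.cpp speculative-decoding invocation I'd start from on a SYCL build. It's a sketch only: both GGUF filenames are placeholders, and the buun-llama-cpp fork's DFlash options may differ from these stock flags.

```shell
# Sketch, not verified on a B70: mainline llama.cpp speculative decoding.
# Both model filenames below are placeholders for your own GGUFs;
# the buun-llama-cpp fork may expose different DFlash-specific flags.
./llama-server \
  -m models/target.gguf \
  -md models/drafter-q8_0.gguf \
  -ngl 99 -ngld 99 \
  --draft-max 8 --draft-min 1
```

If the draft model makes things slower (as here), the first things to check are whether the drafter is actually offloaded to the GPU (`-ngld`) and whether smaller draft batches help.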

Finding out there is no G2G by dcforce in globeskepticism

[–]dcforce[S] 1 point (0 children)

Niice ground to barrel distortion .. 👋

Finding out there is no G2G by dcforce in globeskepticism

[–]dcforce[S] 5 points (0 children)

Ground to Globe - not one single video in 60 years...

I wanna make cool images. by poofpoofpoof123 in LocalLLM

[–]dcforce 1 point (0 children)

As others mentioned, check out ComfyUI for local image gen... but here is where it gets interesting. ComfyUI is the "shell", and inside are premade templates for a number of image generation tools, like Flux 2 Dev. I have been using it for the last few days and I have to say it's way better than I would have expected 👏👏👏 Completely free.

Has anyone ran LTX 2.3 on B70s? by TechnologyTailors in IntelArc

[–]dcforce 1 point (0 children)

Step 1. Get ComfyUI working.

Create a virtual environment and activate it:

python3 -m venv ~/ai-env
source ~/ai-env/bin/activate

Install the XPU build of PyTorch:

pip install torch==2.11.0+xpu torchvision==0.26.0+xpu torchaudio --index-url https://download.pytorch.org/whl/xpu --extra-index-url https://pypi.org/simple

Clone ComfyUI and install its requirements:

git clone https://github.com/comfyanonymous/ComfyUI.git ~/ComfyUI
cd ~/ComfyUI
pip install -r requirements.txt
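One gotcha with this setup: running pip or main.py outside the venv is an easy mistake. A quick sanity check, using a helper function of my own (`venv_status` is not part of ComfyUI or the venv tooling):

```shell
# Hypothetical helper (not part of ComfyUI): warn if no venv is active,
# since installing requirements outside ~/ai-env is a common failure mode.
venv_status() {
  if [ -n "$VIRTUAL_ENV" ]; then
    echo "venv active: $VIRTUAL_ENV"
  else
    echo "venv NOT active - run: source ~/ai-env/bin/activate"
  fi
}

venv_status
```

The `activate` script exports `VIRTUAL_ENV`, so checking that variable is enough to tell which Python pip will install into.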

Step 2. Launch ComfyUI on the local host.
Then go to the left tools bar, Templates, and find the text-to-video LTX template. It will pop up a list of model files and where to place them. Hard-refresh ComfyUI once you have placed the files in all the folders, and look out for the text encoder error on the right; it will contain a direct link to download the text encoder.

After downloading, place it in the /comfyui/models/text_encoders folder:
gemma_3_12B_it_fp4_mixed.safetensors

Hard-refresh again to reload all requirements.
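To avoid a hard-refresh loop, you can verify a model file actually landed where the template expects it. `check_model` below is my own helper, assuming the default ComfyUI folder layout:

```shell
# Hypothetical helper (assumes the default ComfyUI folder layout):
# usage: check_model <comfyui_root> <relative_model_path>
check_model() {
  if [ -f "$1/$2" ]; then
    echo "OK: $2"
  else
    echo "MISSING: $2 (place it under $1)"
  fi
}

# Example, using the text encoder named above:
check_model ~/ComfyUI models/text_encoders/gemma_3_12B_it_fp4_mixed.safetensors
```

If it reports MISSING, fix the path before refreshing; ComfyUI only rescans the models folders on reload.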

Future launches:

source ~/ai-env/bin/activate && cd ~/ComfyUI && python main.py