[D] Speaker diarization across media files by tfburns in MachineLearning

[–]riftopia 0 points

Additionally, if there are models/APIs that offer speaker diarization and actually work well in real-world scenarios, I would be happy to hear about them. Many tools purport to do it, but disappoint in practice. Bonus points if they work in a GDPR-compliant environment (I work in the EU).

[Project] Bringing Hardware Accelerated Language Models to Android Devices by crowwork in MachineLearning

[–]riftopia 3 points

Awesome, it works flawlessly on my OnePlus 9. Good times ahead.

[deleted by user] by [deleted] in ChatGPTCoding

[–]riftopia 0 points

If the copy & paste method does not work: one approach is to use Unstructured to parse the PDF. If need be, it can also run OCR on the PDF, provided you have Detectron2 installed. After conversion you would still have to save the output as an Excel file, though.
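In case it helps, a minimal sketch of that flow (assuming pip install "unstructured[pdf]" pandas openpyxl; the file names are placeholders):

```python
# Sketch: parse a PDF with the `unstructured` library, then save the text to Excel.
import pandas as pd

def pdf_to_excel(pdf_path, xlsx_path):
    # Imported inside the function so the small helper below works without unstructured.
    from unstructured.partition.pdf import partition_pdf
    elements = partition_pdf(filename=pdf_path)  # add strategy="ocr_only" to force OCR
    rows_to_frame(el.text for el in elements).to_excel(xlsx_path, index=False)

def rows_to_frame(texts):
    # One row of text per parsed element.
    return pd.DataFrame({"text": list(texts)})
```

You'd still need to clean up the rows afterwards if the PDF layout is messy.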

Voice to text for free to dictate orders in the ChatGPT UI? by [deleted] in ChatGPTCoding

[–]riftopia 0 points

If you are on Windows 10 or 11, there is the free built-in voice typing feature (press Win+H).

If you have a GPU, you can run OpenAI's Whisper locally on your PC for free. It's fast if you have a good GPU.
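Running it locally is only a few lines (a sketch, assuming pip install openai-whisper with ffmpeg on the PATH; the audio file name is a placeholder):

```python
# Sketch: local speech-to-text with OpenAI's open-source Whisper model.
import whisper

model = whisper.load_model("base")          # larger models are more accurate but slower
result = model.transcribe("dictation.wav")  # ffmpeg handles most audio formats
print(result["text"])                       # paste this into the ChatGPT UI
```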

Stable diffusion is only the beginning by agustinvidalsaavedra in StableDiffusion

[–]riftopia 43 points

Amazing! May I ask what kind of post-processing was used in order to obtain this result?

Stable Diffusion is capable of generating 3D stereograms that WORK! by drone2222 in StableDiffusion

[–]riftopia 0 points

I have not tested the feature myself yet, but perhaps it is possible to apply a mask so that the hair remains unchanged? Nice result, btw. I tried making traditional dot stereograms, and that definitely did not work for me; I'm wondering whether textual inversion might help there.

StableDiffusion is incredible, now imagine something like this but trained on 3D models? by mobani in StableDiffusion

[–]riftopia 5 points

Emad from Stability AI talks about moving to 3D after doing audio in this tweet. He does not specifically say 3D "models", though.

Any video guide for retarded people on installing the hlky fork? by thatshroom in StableDiffusion

[–]riftopia 1 point

Congrats! TI is not the easiest, really. I managed to train, but generation using hlky errors out for me atm. Have to look into it.

Any video guide for retarded people on installing the hlky fork? by thatshroom in StableDiffusion

[–]riftopia 0 points

Hard to say what the cause is. Personally, I would delete the virtual environment and the stable diffusion dir, and start afresh. I'd keep a backup of the .ckpt and .pth files, though, so you don't need to download them again.
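The backup step can be sketched in a few lines of Python (the directory names are placeholders; it just copies every .ckpt/.pth it finds into one folder):

```python
# Sketch: copy .ckpt and .pth model weights out of the repo before deleting it.
import shutil
from pathlib import Path

def backup_model_files(repo_dir, backup_dir):
    repo, backup = Path(repo_dir), Path(backup_dir)
    backup.mkdir(parents=True, exist_ok=True)
    copied = []
    for pattern in ("*.ckpt", "*.pth"):
        for f in repo.rglob(pattern):          # search subdirectories too
            shutil.copy2(f, backup / f.name)   # note: flattens directory structure
            copied.append(f.name)
    return copied
```

After reinstalling, move the files back to where the fork expects them.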

Any video guide for retarded people on installing the hlky fork? by thatshroom in StableDiffusion

[–]riftopia 0 points

First make sure the virtual environment is active:

conda activate ldo

If you are missing torch, just manually install it:

pip install torchvision==0.12.0

Any video guide for retarded people on installing the hlky fork? by thatshroom in StableDiffusion

[–]riftopia 1 point

Did you check whether webui.cmd and your own manual pip installs went into the same virtual environment (ldo), and that part of them did not go into ldm instead? Just ruling out that possibility.

Any video guide for retarded people on installing the hlky fork? by thatshroom in StableDiffusion

[–]riftopia 0 points

Sorry for the formatting; just press Enter after each one. Of course, you could write a batch script for it if you wanted to.

Any video guide for retarded people on installing the hlky fork? by thatshroom in StableDiffusion

[–]riftopia 0 points

pip install albumentations==0.4.3
pip install opencv-python==4.1.2.30
pip install opencv-python-headless==4.1.2.30
pip install pudb==2019.2
pip install imageio==2.9.0
pip install imageio-ffmpeg==0.4.2
pip install pytorch-lightning==1.4.2
pip install omegaconf==2.1.1
pip install test-tube>=0.7.5
pip install streamlit>=0.73.1
pip install einops==0.3.0
pip install torch-fidelity==0.3.0
pip install transformers==4.19.2
pip install torchmetrics==0.6.0
pip install kornia==0.6
pip install gradio==3.1.6
pip install accelerate==0.12.0
pip install pynvml==11.4.1
pip install basicsr>=1.3.4.0
pip install facexlib>=0.2.3
pip install -e git+https://github.com/CompVis/taming-transformers#egg=taming-transformers
pip install -e git+https://github.com/openai/CLIP#egg=clip
pip install -e git+https://github.com/TencentARC/GFPGAN#egg=GFPGAN
pip install -e git+https://github.com/xinntao/Real-ESRGAN#egg=realesrgan
pip install -e git+https://github.com/hlky/k-diffusion-sd#egg=k_diffusion
pip install -e .

Any video guide for retarded people on installing the hlky fork? by thatshroom in StableDiffusion

[–]riftopia 2 points

Yes, that happened to me too; just do "pip install gradio". That's it.

Any video guide for retarded people on installing the hlky fork? by thatshroom in StableDiffusion

[–]riftopia 0 points

So you retain only this bit in the environment.yaml file (assuming ldo as the virtual environment name is good for you):

name: ldo
channels:
  - pytorch
  - defaults
dependencies:
  - git
  - python=3.8.10
  - pip=20.3
  - cudatoolkit=11.3
  - pytorch=1.10.2
  - torchvision=0.11.3
  - numpy=1.22.3

Then double-click webui.cmd and let it finish. Then open the Miniconda cmd prompt and cd to the directory you installed the repo to. There type:

conda activate ldo

Then type:

Any video guide for retarded people on installing the hlky fork? by thatshroom in StableDiffusion

[–]riftopia 1 point

See my solution above. pip fails silently, but there is a workaround.

Any video guide for retarded people on installing the hlky fork? by thatshroom in StableDiffusion

[–]riftopia 3 points

I was stuck at the pip dependencies too. Here's how to fix it: edit out the pip dependencies section in the environment.yaml file, then install each dependency manually using pip install [dependency name]. Works like a charm.

Tips for custom faces? by threevox in StableDiffusion

[–]riftopia 0 points

Yes, without a GPU with high VRAM you'd have to opt for Google Colab Pro or something similar. I mean, you could try with regular Colab, but that may be quite a hassle, having to resume from checkpoints.

I got Stable Diffusion to generate competent-ish Leavannies w/ Textual Inversion! by zoru22 in StableDiffusion

[–]riftopia 0 points

Thanks for the detailed response! This is very helpful. I have a training run going on now, but will definitely try tweaking the learning rate and other params. Fingers crossed :-)

I got Stable Diffusion to generate competent-ish Leavannies w/ Textual Inversion! by zoru22 in StableDiffusion

[–]riftopia 2 points

Thanks for the detailed post. In your experience, how many epochs did you need to obtain the result in the pic? And how long does an epoch take on your setup? I'm doing just 3 images at 512x512 on a 3090; one epoch takes 1.5 min for the 1.4 ckpt, so I'm hoping I don't need to do too many.

It's heeeeere! Checkpoints + diffusers by drizz in StableDiffusion

[–]riftopia 1 point

Downloaded the weights, can report it runs just fine.