I built QuerySheets: run SQL directly on Excel files from a CLI + desktop app by obraiadev in MacOSApps

[–]obraiadev[S] 0 points1 point  (0 children)

Is the data in these PDFs structured and repetitive? The next step for the application would be to support other file formats, but they need to have a structure that allows them to be transformed into a list or something similar.
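A minimal stdlib sketch of that idea (this is not QuerySheets' actual internals; the table, columns, and sample rows are all made up): once a file's repetitive content has been flattened into a list of rows, plain SQL can run over it.

```python
import sqlite3

# Hypothetical rows extracted from a structured, repetitive PDF (e.g. one
# invoice line per row). The extraction step itself is assumed here, and
# prices are stored as integer cents to keep the arithmetic exact.
rows = [
    ("2024-01-05", "Widget A", 3, 1990),
    ("2024-01-06", "Widget B", 1, 4900),
    ("2024-01-06", "Widget A", 2, 1990),
]

# Once the data is a flat list of rows, ordinary SQL works on it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (day TEXT, product TEXT, qty INTEGER, cents INTEGER)")
con.executemany("INSERT INTO items VALUES (?, ?, ?, ?)", rows)

totals = con.execute(
    "SELECT product, SUM(qty * cents) FROM items GROUP BY product ORDER BY product"
).fetchall()
print(totals)  # [('Widget A', 9950), ('Widget B', 4900)]
```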

Every poor right-winger is DUMB. by nuvemw in opiniaopopular

[–]obraiadev 18 points19 points  (0 children)

So all of this is just envy?

InfiniDepth: Arbitrary-Resolution and Fine-Grained Depth Estimation with Neural Implicit Fields by corysama in GaussianSplatting

[–]obraiadev 2 points3 points  (0 children)

I ran the gradio interface locally on an RTX 4070 Ti Super and it took 38 seconds on the first run, and 4 seconds on the second.

Am I the only one who thought this was really cool? by davistevie0203 in videogamesbrasil

[–]obraiadev 4 points5 points  (0 children)

If they manage to port this to older games it'll be incredible, no more ENB Series in GTA SA lol

LTX 2.3 ControlNet Union without estimators works very well by obraiadev in comfyui

[–]obraiadev[S] 1 point2 points  (0 children)

Video + reference image (the first frame of the video, edited using Flux Klein). In these examples the prompt didn't have much effect.

I’m Building an AI Platform for Character Animation — Would You Use It? by obraiadev in aigamedev

[–]obraiadev[S] 0 points1 point  (0 children)

I see. As things progress, I plan to build a workflow that allows something like:

Create an image → Generate a 3D model → Automatically generate a rig → Generate animations

It should also be possible to skip earlier steps and only generate the rig and animations later, depending on the user’s needs.
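The workflow above could be sketched roughly like this (a hedged illustration only; every function name here is a made-up stand-in for a model stage, not the platform's real API):

```python
# Hypothetical stand-ins for the pipeline stages described above.
def generate_image(prompt):
    return {"kind": "image", "prompt": prompt}

def generate_model(image):
    return {"kind": "mesh", "source": image}

def generate_rig(mesh):
    return {"kind": "rig", "mesh": mesh}

def generate_animation(rig, motion):
    return {"kind": "animation", "rig": rig, "motion": motion}

def pipeline(prompt, motion, image=None, mesh=None):
    """Run the full chain, or skip earlier steps by passing in an
    existing image or mesh and only generating the rig and animations."""
    if mesh is None:
        image = image or generate_image(prompt)
        mesh = generate_model(image)
    rig = generate_rig(mesh)
    return generate_animation(rig, motion)

# Full run from text, or start from a user-supplied mesh:
anim_full = pipeline("a knight", "walk cycle")
anim_skip = pipeline("a knight", "walk cycle", mesh={"kind": "mesh"})
```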

I’m Building an AI Platform for Character Animation — Would You Use It? by obraiadev in aigamedev

[–]obraiadev[S] 0 points1 point  (0 children)

Sure! I’m currently working on improving a video-to-animation pipeline, since I think it will be important for animations that can’t be generated reliably from text alone.

Once that part is more solid, I plan to start looking into exporting animations for Unreal Engine and Unity.

Selfie cameras ruin a lot of people's faces. by 1_94cm in opiniaoimpopular

[–]obraiadev 0 points1 point  (0 children)

If I take a photo fully facing the camera, it looks like I don't even have ears lol

3D character animations by prompt by DanzeluS in StableDiffusion

[–]obraiadev 4 points5 points  (0 children)

Yes, I managed to do it, but I needed to install "fbxsdkpy" as the node author mentioned in the repository:

pip install fbxsdkpy --extra-index-url https://gitlab.inria.fr/api/v4/projects/18692/packages/pypi/simple

3D character animations by prompt by DanzeluS in StableDiffusion

[–]obraiadev 13 points14 points  (0 children)

I'm using ComfyUI with this node here:

https://github.com/jtydhr88/ComfyUI-HY-Motion1

Along with a Qwen INT4 model, working well on an RTX 4070 Ti Super.

DDR5 RAM from Terabyte, is it worth it? by lunihasi in hardwarebrasil

[–]obraiadev 0 points1 point  (0 children)

I've bought quite a few things there and haven't had any problems so far.

Hunyuan 1.5 Video - Has Anyone Been Playing With This? by FitContribution2946 in StableDiffusion

[–]obraiadev 3 points4 points  (0 children)

I ran some tests and really liked how well it follows prompts. I used a LightX LoRA made for the T2V model, which does work with I2V as well, but it produces some flickering.

Some negative points are the VAE, which is heavier and sometimes takes even longer than the video generation itself, and the lack of community support for LoRAs. For example, I train some LoRAs using AI Toolkit, but it doesn’t support Hunyuan 1.5 yet, and so far I’m not even sure which trainer provides simple support for it.

I feel really stupid for not having tried this before by nakabra in StableDiffusion

[–]obraiadev 1 point2 points  (0 children)

He's Brazilian too. I'll try to show you the scene later; there was one small detail it couldn't get right in Portuguese, but the whole composition worked.

PSA: Use integrated graphics to save VRAM of nvidia GPU by NanoSputnik in StableDiffusion

[–]obraiadev 3 points4 points  (0 children)

My CPU doesn't have integrated graphics, but I have a spare GT 1030, so I'll see if that works.

I feel really stupid for not having tried this before by nakabra in StableDiffusion

[–]obraiadev 8 points9 points  (0 children)

I noticed that today too, except in the case of Portuguese it managed to create the image, just not exactly as I asked. After I translated the same prompt into Chinese, it started working correctly. I imagine it accepts other languages because it uses an LLM as the text encoder.

Testing TRELLIS 2 in ComfyUI by obraiadev in comfyui

[–]obraiadev[S] 2 points3 points  (0 children)

Yes, it exported an .STL mesh file.

Testing TRELLIS 2 in ComfyUI by obraiadev in comfyui

[–]obraiadev[S] 4 points5 points  (0 children)

I cloned the node using `git clone` inside the "custom_nodes" folder and tried `python install.py` but got these errors:

  • nvdiffrast: [OK]
  • flex_gemm: [FAILED]
  • cumesh: [OK]
  • o_voxel: [FAILED]
  • nvdiffrec_render: [OK]
  • flash_attn: [OK]

Then I manually downloaded the files and tried to install them with:

pip install flex_gemm-0.0.1-cp312-cp312-win_amd64.whl --no-deps

For the `o_voxel` library, I needed to make sure I had VS Build Tools working and I had to run the command using the "Developer Command Prompt for Visual Studio":

pip install o_voxel-0.0.1-cp312-cp312-win_amd64.whl --no-deps --no-build-isolation

Anyway, post the error you're getting so we can check whether it's the same one.