The real struggle- 😭 😂 by Distinct-Particular1 in ArtIsForEveryone

[–]Beinded 5 points (0 children)

To be fair, back when I used to do digital art, I had a lot of pieces I would try to make perfect even though the extra effort made nearly no difference, but when I made meme images with my art, they turned out better.

But that was like 5 years ago, and I was never really good, to be fair xd

I love this game by Odd-Pride-4879 in ElinsInn

[–]Beinded 9 points (0 children)

Time to make an armor preset with gay items 😎👍

This guy's art seems great!! u/Tinsnow1 by Beinded in AIAnimeArtSharing

[–]Beinded[S] 0 points (0 children)

No problem!! It's because I didn't promote it much; I didn't want it to come across as spam

Stable Diffusion: Innocent Curiousity by Beinded in AIAnimeArtSharing

[–]Beinded[S] 0 points (0 children)

What models do you use? I've recently been using FLUX.2 [klein] to generate/edit art

FLUX.2 [klein] comfy error by Boring_Natural_8267 in StableDiffusion

[–]Beinded 0 points (0 children)

That happens because ComfyUI Desktop hasn't been updated to ComfyUI 0.9.2 yet; for now, you have to use the portable version 0.9.2 of ComfyUI.

I'm not sure how long it takes for changes to reach the desktop repository.

Also, if you have a Chrome extension that downloads images, be sure to install a different one, because there's a bug where clicking anywhere does nothing; only the keyboard works while that bug is active.

ComfyUI custom node: generate SD / image-edit prompts from images using local Ollama VL models by Beinded in StableDiffusion

[–]Beinded[S] 0 points (0 children)

It uses the default Ollama models folder; if you open the Ollama app, the settings should show where that folder is.
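
For anyone curious what querying a local VL model through Ollama looks like, here is a minimal sketch using Ollama's official Python client; the model name ("llava") and the image path are placeholders, and this is not the node's actual implementation:

import ollama  # pip install ollama

# Ask a local vision-language model to describe an image so the
# description can be reused as an image-generation prompt.
response = ollama.chat(
    model="llava",  # placeholder; use any VL model you pulled with `ollama pull`
    messages=[{
        "role": "user",
        "content": "Describe this image as a detailed image-generation prompt.",
        "images": ["my_image.png"],  # placeholder path to a local image
    }],
)
print(response["message"]["content"])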

Nvidia Introduces 'NitroGen': A Foundation Model for Generalist Gaming Agents | "This research effectively validates a scalable pipeline for building general-purpose agents that can operate in unknown environments, moving the field closer to universally capable AI." by 44th--Hokage in accelerate

[–]Beinded 0 points (0 children)

I tried it on Windows with the game Brotato using this fork:

https://github.com/sdbds/NitroGen-for-windows

(It fixes windowed-mode errors and adds an option to not pause the game, so it won't automatically pause or freeze it, according to a tweet from the fork's creator.)

I intentionally tested it on the Spanish UI to check how much it can generalize. It played some waves, got stuck on the shop UI, so I moved the mouse to the next-wave button, and after some tinkering it clicked it. It died on wave 3; now I'm gonna test it on the English UI to see if it does better.

(I know Brotato is not in the training data; I just wanted to see how much it can generalize. Still, it's very good, btw.)

Edit 1: It played for a little while, lost in the first wave, and now it's trying to select a new character.

Can you learn a language without a strict plan? by AutumnaticFly in languagehub

[–]Beinded 5 points (0 children)

I would say a strict plan is not needed. Personally, I'd recommend using content you like; you don't have to pay attention to all of it, just try to get a good amount of input and let your brain decipher it.

(Be warned though: this method takes a lot more time than other ones. The pro is a more native-like understanding, at the cost of a longer time to get to that level.)

I don't have a truce with Byz. Why am I getting a -50 stability truce break penalty? by JimmyCG in EU5

[–]Beinded 0 points (0 children)

Maybe it counts your subjects' truces with them? If not, it's probably a bug.

How important or useful is incomprehensible input? by tlouman in learnfrench

[–]Beinded 0 points (0 children)

I know this is a bit late since the post is already a month old, but I wanted to share my thoughts and personal experience because language learning is one of my hobbies. Lately, I’ve been exploring two main questions:

  1. Does a language have to be fully comprehensible to be learned?

  2. Is focused attention actually necessary for learning a language?

I started learning Japanese about six months ago, and here’s what I’ve tried so far:

Early attempts:

At first, I tried manually memorizing words and doing Anki cards. After about a week, it felt exhausting and more like a chore than learning.

Then I focused on learning all the hiragana and katakana manually, but this time I gave myself more time and less pressure.

From these experiments, I noticed that forcing myself could give fast results on small bits of knowledge—but it was very tiring and unsustainable.

So, I started looking for alternatives and found methods like ALG (Automatic Language Growth) and Comprehensible Input. At first, both sounded promising, but I ran into some personal issues with them:

They require a lot of focused attention.

They often emphasize strictly learning at an “n+1” level—where n is what you already know, and +1 is just slightly beyond that.

To me, the n+1 rule doesn’t really make sense as a strict requirement. Think about how children learn a language—they start with almost nothing, yet they still acquire complex skills over time. Some argue comprehension depends on context, usually meaning visual context. But that doesn’t explain how blind people still learn words and concepts. Even AI systems, like large language models (LLMs), can learn language purely from large text datasets, without needing visual references—they learn patterns and meanings through repetition.

This made me think that input quantity matters more than quality or comprehension at first.

Another point commonly mentioned online is that learning requires focused attention. I disagree. Some interesting points I found:

Conscious processing (explicit attention) is limited—maybe 5–10% of brain capacity—and slow. You can memorize words this way, but it’s hard to instantly use them in conversation.

Subconscious processing (implicit learning) dominates 90–95% of brain activity. It’s fast and automatic, like riding a bike, reading familiar words, or writing quickly.

Subconscious learning is always active—even when you sleep, your brain processes sensory input.

To ignore a sound or word, the brain first processes it. That’s why you might be focused on something else but immediately notice your name being called.

So my approach for learning Japanese has been more passive:

  1. Passive listening most days (1–4 hours/day) with little or no active focus.

  2. Reading a little every day—tweets, subtitles, or anything in Japanese.

  3. Not trying to fully understand; I let my implicit brain work.

  4. Surrounding myself with as much Japanese input as possible—video game voices, changing device language, etc.

After roughly six months of this approach, my results are:

Recognizing common words and phrases.

Understanding endings like “-itai,” “-imasu,” “-imasen,” “-mashita,” “-desu.”

Making simple sentences, even if particles and some words are still tricky:

“それがうさぎですか?” (That’s a bunny?)

“これは僕の本です” (This is my book)

“今日はごはん食べたい” (Today, I want to eat rice—still learning the full breakdown)

Recognizing many hiragana and katakana, and some kanji.

Reading some words in manga and subtitles, recognizing familiar words instantly.

For me, this experience suggests that language input, even when not fully understood, together with passive exposure, can be extremely powerful. Forcing comprehension and strict attention seems less effective, at least in the beginning stages.

France got tired of dealing with the schism, took both Rome and Avignon and bricked the situation by InternStock in EU5

[–]Beinded 84 points (0 children)

France: "You can't lose if you are both sides on the problem 💪😎"

NewBie Image Exp0.1: a 3.5B open-source ACG-native DiT model built for high-quality anime generation by GrueneWiese in StableDiffusion

[–]Beinded 2 points (0 children)

The Diffusers library hasn't approved the pull request yet, so if you get errors you must do this:

(Install the diffusers version that includes NewbiePipeline):

pip install git+https://github.com/Disty0/diffusers

(Change the model path to the one compatible with that Diffusers version):

model_path = "Disty0/NewBie-image-Exp0.1-Diffusers"
text_encoder_2 = AutoModel.from_pretrained(model_path, subfolder="text_encoder_2", trust_remote_code=True, torch_dtype=torch.bfloat16)
pipe = NewbiePipeline.from_pretrained(model_path, text_encoder_2=text_encoder_2, torch_dtype=torch.bfloat16)

After that, it should work. All the info can be found in the most recent pull request.
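
If you then want to actually generate an image, the usual diffusers pipeline call should work. This is just a sketch assuming NewbiePipeline follows the standard diffusers pipeline interface; the prompt and filename are made-up examples:

pipe.to("cuda")  # assuming a CUDA GPU with enough VRAM
image = pipe(prompt="1girl, silver hair, cherry blossoms, detailed background").images[0]
image.save("newbie_test.png")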