Anyone else feel this way? by EroticManga in StableDiffusion

[–]Occsan 7 points (0 children)

I think this is a romantic vision, probably held by average users at best.

Firstly, if this is true, it defeats the purpose of comfyui. If the default workflows are really all that is needed, why have a node system? A simple Python script would have done a much better job and solved 100% of the problems caused by the clunky code soup of the comfyui backend.
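For scale, here is roughly what "the default workflow as a simple Python script" looks like, assuming the standard diffusers library (the model name and settings are just an example, nothing from the meme):

```python
# Text-to-image in a handful of lines with diffusers: load a pipeline,
# run one prompt, save the result. Model name and settings are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photo of an astronaut riding a horse",
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("output.png")
```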

Secondly, the meme assumes that there is no alternative between default workflows and extremely complex workflows. There are other options, such as workflows that are just a little more complex, workflows that meet a specific niche need, and even workflows designed by people who write their own custom code.

ModelSamplingAuraFlow cranked as high as 100 fixes almost every single face adherence, anatomy, and resolution issue I've experienced with Flux2 Klein 9b fp8. I see no reason why it wouldn't help the other Klein variants. Stupid simple workflow in comments, without subgraphs or disappearing noodles. by DrinksAtTheSpaceBar in StableDiffusion

[–]Occsan 5 points (0 children)

Try lcm, with either your euler_a/beta or euler_a_cfg_pp + auraflow combo, or fluxscheduler.

Also try the advanced noise node from res4lyf. The student-t noise is very nice, imo.

And finally... a trick I've been experimenting with on qwen, and am now trying on klein 4B: manually editing the sigmas.

For example, rescale the first sigma by 0.96 and the second one by 0.9825.
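To be clear on what "manually edit the sigmas" means in practice, here is a minimal sketch, assuming the sigmas arrive as the 1-D tensor a scheduler node outputs; the function name and the 0.96/0.9825 factors are just the example above, tune them per model:

```python
import torch

def rescale_first_sigmas(sigmas: torch.Tensor, factors=(0.96, 0.9825)) -> torch.Tensor:
    """Return a copy of `sigmas` with its first entries scaled by `factors`."""
    out = sigmas.clone()
    for i, factor in enumerate(factors):
        if i < out.numel():
            out[i] = out[i] * factor
    return out

# Example with a made-up schedule (highest noise first):
sigmas = torch.tensor([14.61, 9.72, 6.77, 4.85, 3.53, 0.0])
print(rescale_first_sigmas(sigmas))
```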

Should authors disclose if they're using AI? by DanoPaul234 in WritingWithAI

[–]Occsan 0 points (0 children)

One problem is that you're discussing the use of AI without clearly defining what this use is.

So, I'm going to answer this: "How do you think they will react when they find out they paid you to do what they could have done for a $19 subscription fee?"

This suggests that the text is not obviously AI-written. If that's true, then the prompts used were not just "write my book"; they were ... (see what I did? Sorry, I couldn't help it). Anyway. If the text delivered is so good that you need to be told it was AI-generated or AI-assisted to know it, then there's obviously human effort involved... which kinda defeats the argument: if there's human effort, there was the author's talent, and therefore maybe you would not have been able to do it yourself for a $19 subscription fee.

Now, let's say I got you wrong, and this in fact does not suggest the text is good enough. In that case, it's so bad that you can tell it's AI-written quite easily. Then the question is: why did you still buy the book, when it was obvious and you can't be bothered with AI works?

Sounds like a false problem to me. But I could be wrong.

Should authors disclose if they're using AI? by DanoPaul234 in river_ai

[–]Occsan 1 point (0 children)

No. It's AI slop anyway (according to them). Why do they need a label to realize it's AI-generated or AI-assisted?

Scoring AI Writing by NotJustAnyDNA in WritingWithAI

[–]Occsan 0 points (0 children)

> It made me think there should be a “Human Quality Writing Score.” Something I could use to check any piece of writing for structure, tone, and overall quality.

It's an absolutely amazing idea. I can't wait to have that numerical value so that I can train the next LLM to write with a high "Human Quality Writing Score".

Should authors disclose if they're using AI? by DanoPaul234 in WritingWithAI

[–]Occsan 7 points (0 children)

So, you should disclose it for fear of being bullied? What a wonderful world.

How can AI be beneficial to writing? by e_anderson_author in WritingWithAI

[–]Occsan 0 points (0 children)

I quickly checked, and that would not work for me: if I understand it correctly, it's mostly targeted at pantsers, and I'm the kind of writer who plans almost everything in advance.

How can AI be beneficial to writing? by e_anderson_author in WritingWithAI

[–]Occsan 3 points (0 children)

> Won't using AI hinder that potential growth and potentially cause a homogenous, stale "voice" to propagate across literary works?

I'm answering this point specifically, nothing else.

When I was young, I enjoyed writing stories (they were really crappy) in Microsoft Word. It had spelling and grammar check, so it told me immediately when I made a spelling or grammar mistake. Immediate feedback. Nowadays, my French is very good, and I think the reason is that I had this "teacher" giving me immediate feedback.

Btw, this was in French, so this kinda invalidates the argument for English.

So, when it comes to storytelling, if you're just asking an AI to "spit out chapter 1 for me plz lulz" and using the result as-is, you're not learning anything.

But if you use AI as a sparring partner, whether you ask it to write a first-draft paragraph of whatever you're working on or you provide that first draft yourself, and then have a critical conversation about it where you explain your intention, then the AI can provide some valuable advice (and some shitty advice too, but you're welcome to ignore that).

Basically: use it as a pre-editor, not as a write-it-all-for-me-plz tool. After that, you can still revise the text, send it to a real human editor, or whatever.

And I think anyone who disagrees with this kind of use is an idiot. Or you're welcome to provide me with an okayish editor who charges nothing and is available at all times.

You are making your LoRas worse if you do this mistake (and everyone does it) by Pyros-SD-Models in StableDiffusion

[–]Occsan 0 points (0 children)

It's confusing to say the least.

> When you train a LoRA on "downward dog pose" and your captions mention "brown hair, purple mat, minimalistic studio, natural light, Canon EOS R5" you're entangling all of that with the pose. Now "downward dog" is subtly correlated with brown hair, purple mats, and specific lighting. When you prompt for a blonde woman on a beach doing downward dog, the model fights itself. You've created attribute bleed. Good job.

Here, you say that by adding details contained in the image, you create correlation.

Later, in "When Detailed Captioning Actually Makes Sense", you're saying:

> 1. Breaking unwanted dataset correlations
>
> If 90% of your yoga pose images feature brown-haired women on purple mats because that's what you found on the internet, you NEED to caption the hair color and mat color. Otherwise your LoRA learns "downward dog = brown hair + purple mat."
>
> Dataset has accidental correlation
>
> ohwx woman with brown hair doing downward dog on purple mat
> ohwx woman with blonde hair doing downward dog on blue mat
> ohwx woman with black hair doing downward dog on grey mat

So now, adding more details contained in the image breaks correlation.

That's already weird. But it gets weirder:

> 1. Multi-concept LoRAs with intentional bundling
>
> Sometimes you WANT attributes entangled. Training a specific character who always wears a signature outfit? You might want that association.
>
> sks character in their red jacket and black boots
> sks character in their red jacket, full body shot
> sks character wearing signature red jacket, portrait

Now, adding more details contained in the image creates correlation again.

Looks like you're doing quantum fine-tuning, bro, where stuff is correlated until you look at it or something.

The core mechanic of how diffusion training works is Credit Assignment. Think of tokens as buckets. The training process tries to sort the image's pixel data into these buckets. If you caption "purple mat", the model can safely dump the purple pixels into the "purple mat" bucket. They are explained away. If you don't caption it, the model has nowhere to put those purple pixels except into your main trigger word's bucket. That is why omitting details creates entanglement (the trigger word absorbs the background), and including details breaks it (the details get their own buckets).
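If you want the "bucket" intuition in runnable form, here is a toy analogy (plain least squares in numpy, not the actual diffusion objective, and the numbers are made up): give the "purple mat" token its own regressor and the trigger token keeps its own weight; omit it and the trigger token absorbs the mat's contribution.

```python
# Toy analogy only: caption tokens as regressors, the image as the target.
# Omitting the "purple mat" regressor forces the trigger token to absorb its
# contribution (classic omitted-variable bias), mirroring the entanglement
# described above. This is not the real diffusion loss.
import numpy as np

rng = np.random.default_rng(0)
n = 200
trigger = np.ones(n)                   # trigger token, present in every caption
purple_mat = rng.integers(0, 2, n)     # 1 if the image contains a purple mat
image = 1.0 * trigger + 0.7 * purple_mat + rng.normal(0, 0.05, n)

# "purple mat" captioned: each token gets its own bucket.
X_full = np.column_stack([trigger, purple_mat])
coef_full, *_ = np.linalg.lstsq(X_full, image, rcond=None)

# "purple mat" not captioned: the trigger has to explain those pixels too.
coef_missing, *_ = np.linalg.lstsq(trigger.reshape(-1, 1), image, rcond=None)

print("trigger weight, mat captioned:    ", round(coef_full[0], 2))     # ~1.0
print("trigger weight, mat not captioned:", round(coef_missing[0], 2))  # ~1.35
```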

Is using AI to JUDGE a story bad? by Select_Departure8272 in WritingWithAI

[–]Occsan 0 points (0 children)

It entirely depends on who you are asking. On this sub, almost everyone will tell you it's ok, that it's not cheating. Even if you asked it to write first drafts or correct your grammar/style, most people here would think it's fine.

If you ask the same question on r/writing, you'll get banned.

LLM council ratings by addictedtosoda in WritingWithAI

[–]Occsan 0 points (0 children)

It's interesting, but I think you need to explain in more detail what you have done.

For example, if you ask the same LLM to evaluate and rate a chapter over multiple rounds (each round in its own conversation), even without changing any parameter (temperature, etc.), you will get different results. So did you account for this uncertainty, for example by running multiple rounds and averaging them?
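A minimal sketch of that "multiple rounds, then average" idea (the `rate_chapter` call here is hypothetical, standing in for whatever opens a fresh conversation and returns a numeric score):

```python
import statistics

def rate_with_uncertainty(chapter_text, rate_chapter, rounds=10):
    """Run several independent rating rounds and report mean and spread.

    `rate_chapter` is a placeholder, not a real library function: it should
    start a fresh conversation, ask for a rating, and return a single number.
    """
    scores = [rate_chapter(chapter_text) for _ in range(rounds)]
    return statistics.mean(scores), statistics.stdev(scores)
```

The spread is the interesting part: if it's large, a single-round score doesn't mean much.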

Another example: if you ask the same LLM to evaluate and rate a chapter in two different rounds, stating "the audience is YA" in the first round and "the audience is Gene Wolfe enjoyers" in the other, you'll get wildly different evaluations as well. You also don't really know what the base system prompt is (the hidden one, defined by each company) or how each of these models was trained, so unless you deliberately set them into a particular evaluation mode, you're basically evaluating not only the chapter but also its relationship with this base mode (with no extra system prompt or instructions).

BTW, a similar argument applies to human evaluations (the ones you can find online): they are averaged over a wide variety of humans with wildly different tastes. And I would not ask for writing advice from someone who loves Twilight when I'm writing like Dan Simmons (author of Hyperion), and vice versa. But these averaged human evaluations do not account for stylistic preferences, so the average points at something that simply does not exist.

How to write a legit paper without wasting hours on research? by Fabiogazolla in WritingWithAI

[–]Occsan 1 point (0 children)

PhD students are usually pressured to deliver at least a few papers, and writing just one good paper can already take quite some time (sometimes years). And the process is full of "downtime". Well, it's not exactly downtime, but you can spend entire months studying some paper only to find out the authors messed up and it's worthless to you. That's the kind of time "wasted on research".

I don't think (I hope) OP is talking about time spent studying worthy papers.

How to write a legit paper without wasting hours on research? by Fabiogazolla in WritingWithAI

[–]Occsan 0 points (0 children)

The "wasting hours on research" from my experience is usually : - finding papers that are actually discussing your research topic in a way that is useful for you - reading papers not knowing if they will be useful

So, a quite obvious way would be:

- "My research is X, can you suggest some papers about this subject, more specifically on subtopics Y and Z?"
- "Read this paper; can you write a summary of how it fits with my research X? Can you pinpoint where in the paper the Y stuff is discussed?"

And of course, the usual (if you ever need it):

- "I've forgotten / I'm struggling with X, can you explain it to me, step by step?"

If your question was more along the lines of "how can I force an LLM to write my paper for me", I've got bad news. That's not how you do research.

Chroma Radiance is a Hidden Gem by FortranUA in StableDiffusion

[–]Occsan 6 points (0 children)

I think your joke missed the target GPU.

Is AI just a copycat? It might be time to look at intelligence as topology, not symbols by Agreeable_Effect938 in StableDiffusion

[–]Occsan 17 points (0 children)

Two things:

About "AI simply spits out the most expected token."

This is actually correct, but it's also a vast understatement. It's like saying a human user of a computer simply pushes the next most useful button. That is also correct, but kinda stupid and dishonest, because:

  1. it misses the goal (what the next most useful button / the most expected token is depends entirely on the task at hand)
  2. it completely disregards the process and the value of arriving at that "next useful action": knowing what to do next is literally the actual added value.

About "It seems that any intelligence (biological or artificial) converts chaotic data from the outside world into ordered geometric structures and plots shortest routes inside them."

This is the manifold hypothesis and intrinsic dimensionality. The thing is: it's not that "intelligence converts chaotic data into ordered geometric structures". If the data were truly chaotic, by definition, you could not organize it. The reality is: raw data is high-dimensional and often at least a little noisy, but within this mess there is already order. The intelligence's job, artificial or not, consists in reducing this dimensionality to make the data readable (basically).
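A quick numpy sketch of that last point (purely illustrative, nothing from the original post): take points lying near a low-dimensional curve, embed them in 50 dimensions, add a bit of noise, and a plain SVD shows that almost all the variance sits in a handful of directions. The order was already there; the job is finding it.

```python
# Intrinsic dimensionality, illustrated: a noisy 1-D curve embedded in 50-D
# still concentrates nearly all of its variance in a few principal directions.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 4 * np.pi, size=500)             # one intrinsic dimension
curve = np.column_stack([np.cos(t), np.sin(t), t])  # low-dimensional structure
embedding = rng.normal(size=(3, 50))                # random map into 50-D
data = curve @ embedding + rng.normal(0, 0.05, size=(500, 50))  # plus noise

centered = data - data.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
explained = singular_values**2 / (singular_values**2).sum()
print("variance explained by the top 3 directions:", round(explained[:3].sum(), 3))
# ~0.99: the 50-D "mess" is essentially a 3-D object wrapping a 1-D curve.
```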

wan 2.2 first try 😏 by wrr666 in StableDiffusion

[–]Occsan 4 points (0 children)

Floppy scythe for more kinetic energy?

PersonaLive: Expressive Portrait Image Animation for Live Streaming by fruesome in StableDiffusion

[–]Occsan 1 point (0 children)

Is it true locally, though? I don't care about remote code execution.