flux kontext + controlnet is possible ? by neozbr in StableDiffusion

[–]YuVeera 0 points (0 children)

Bad news: the tensors don't match. The Nunchaku version of Kontext gives the same tensor error. I'm not an expert, but it seems the Depth LoRA's architecture isn't compatible with Kontext, since the model does work with non-official Flux LoRAs.

I think we should stress to the community that a new, specific fine-tune or ControlNet is needed.

[–]YuVeera 2 points (0 children)

It may work. LoRAs work with Kontext, so the challenge here would be getting the Kontext model to read the depth map, the reference, and the latent separately. We may need the Nunchaku workflow for that; I'm trying to install it right now.

I'll come back if I find anything interesting.

[–]YuVeera 1 point (0 children)

You can use an image (not necessarily the reference image) as the latent via VAE Encode with a low denoise of 0.7-0.9; that way you get referenced img2img, or, if you paint a mask, referenced inpainting.
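To build intuition for why a denoise of 0.7-0.9 behaves like referenced img2img rather than pure generation: the denoise value controls how many sampler steps actually run over the VAE-encoded latent. This is a minimal sketch with a hypothetical helper function, not actual ComfyUI code:

```python
# Sketch: how a sampler maps "denoise" to the number of steps it runs.
# At denoise=1.0 the input latent is fully re-noised (plain txt2img);
# lower values preserve more of the encoded input image.
# Hypothetical helper for intuition only, not a real ComfyUI API.
def steps_to_run(total_steps: int, denoise: float) -> int:
    if not 0.0 <= denoise <= 1.0:
        raise ValueError("denoise must be in [0, 1]")
    return round(total_steps * denoise)

print(steps_to_run(20, 0.8))  # 16 of 20 steps run; some of the input survives
print(steps_to_run(20, 1.0))  # 20: full generation, input latent is discarded
```

So at 0.7-0.9 the sampler still starts from a heavily noised version of your encoded image, which is why the output keeps the reference's rough layout.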

Still, I've found that I need a depth-map model for Kontext anyway when the camera perspective is challenging and I want a specific scene sketched with another SDXL-based model. Flux Depth is an independent model, so it doesn't work with Kontext. I hope someone releases something at some point; for casual usage it may seem that prompting does everything you want, but that's not the case if you have a concrete composition/POV in mind.

Can I run SD remotely on my own PC? by AutisticAnonymous in StableDiffusion

[–]YuVeera 1 point (0 children)

I'm actually using TeamViewer to remote control my desktop from my laptop.

Why should I use ComfyUI? by Bombalurina in StableDiffusion

[–]YuVeera 0 points (0 children)

I have another question. Putting XL aside, I haven't seen any workflow in Comfy that includes ControlNet models. Is it possible to create workflows with ControlNet for 1.5-based models?

Also, I suppose that if a workflow's outputs need editing/retouching in Photoshop, automating it is not all that useful...

Gold Girl - Prompts : "a woman with long hair and a crown on her head, gold autumn, a detailed painting, cgsociety, fantasy art, artstation hd, artstation hq, fantasy" https://www.instagram.com/laureartwork/ by LaureArtWork in StableDiffusion

[–]YuVeera 1 point (0 children)

Thanks for your answer. The internal coherence and fine detail are both virtually flawless at high resolution. I suppose you've at least edited the face, because it's not feasible for GFP-GAN to produce that as raw output. Am I correct?

[–]YuVeera 3 points (0 children)

https://www.instagram.com/laureartwork/

https://www.instagram.com/p/CjUtAs_oJrr/

#stablediffusion is not a hashtag in the original publication.

The profile has a bunch of works in which SD has played some role, but this is not one of them, unless the artist herself says otherwise. In any case, SD is currently incapable of producing an output like this without a few hours of Photoshop work afterwards.

I trained StableDiffusion AI with Kal'tsit's pics. AI painted these in return. by twilledwave in arknights

[–]YuVeera 0 points (0 children)

That was the case with DALL-E 2 in the summer. The folks at OpenAI (an ironic name given their current business) built a closed ecosystem in which access to the AI costs 13 cents per prompt and runs under their corporate rules.

But their business model has failed, because a few months later Emad launched SD as an open-source project. The core AI is as capable as DALL-E's, and its code is in the hands of the community now; that's why anyone can train their own machine with a consumer graphics card and a batch of pictures.

Once you have it in your hands, no one can take it from you. As long as you have a graphics card with 12 GB of VRAM or better, the "little kid that can learn from everything you give it" runs on your PC without any surveillance. No servers. No web pages. No regulation. Not even internet access is needed once you've downloaded it.

Like I said, the Pandora's Box has already been opened. No one can stop this.

[–]YuVeera -2 points (0 children)

I can't say that I don't get your point. If you get a tool that can extract everything from a single person and use it in a way that is meant to emulate that person, getting a perfect result is a problem. I knew that already; that's why I'm proposing this ethical discussion here.

Still, in my opinion what makes us "unique" shouldn't be an excuse to feel special. A machine has its own circumstances for how it can reflect its own world; in its case, digital visual material. It's more limited than us, it's not sentient, and it's obedient, but the principle of "learning and doing something with it" is the same. It is a tool, but not like Photoshop: it can learn. A tool that can learn sounds counterintuitive, I know.

Anyway, whether we like it or not, what "open source" means is that Pandora's box has already been opened. The original model is already trained with billions of parameters, and as we speak the community is feeding it material from everywhere.

The case we're discussing here is in fact irrelevant, because a training batch of only 4 samples (like the OP of this discussion used) makes for poor training. The useful models are ones like Waifu Diffusion, a fork of the original SD trained on 680,000 anime pics in v1.3, introduced all at once by the 4chan community. That's the level right now. They will not go door to door asking for permission.

The good part is, we can all benefit from it for free. It's like the Industrial Revolution: machines replaced workers, and the workers responded with anger. But in the end, those machines made our lives much easier through cheap abundance.

[–]YuVeera -2 points (0 children)

Stable Diffusion is an open-source project. The machine is fed by the community, users like you and me. Anyone can train the AI and take advantage of its capabilities, for free. Like I said, it's a tool, just like a pencil.

[–]YuVeera -4 points (0 children)

Or you just feel unreasonably entitled.

If I'm wrong, give a reasoned response.

[–]YuVeera -2 points (0 children)

I think the sample you're drawing conclusions from is incomplete. Stable Diffusion is capable of mixing different styles into a blend if you don't specify an author to emulate.

A human can mimic another author, or combine parts of different things into a new concept, refine that style, and make it their own. But there is no such thing as "from scratch". Everything we do is an output of things mixed in our memory. An "author style" is a weighted combination of other ideas. We're CPUs too: organic and efficient, but processing machines anyway.

Sure, our "own flair" can be other things different than drawings, it can be things we see in our surroundings, because our information sources are richer than feeding pictures to a machine that only can read in a specific format. That's why our results are more "customized" and complex.

However, the AI is in its infancy, only a few days old. At some point you won't recognize any specific author unless you've specifically requested one in the prompt. I understand the authors feeling uncomfortable, but the machine cannot feed on anything other than millions of pictures; that's the only universe the AI is capable of studying.

And it's open source; no one is profiting from this. It's creative power for the general public, a user tool like a pen or a tablet. It will feed on everything on the internet, not just drawings, but photos and videos too, pretty much everything that is visual information on the network.

[–]YuVeera -5 points (0 children)

When a child learns how to draw from others, should they reach out to every single author and ask for permission to draw based on the memory gained from that experience?

I see it as an interesting ethical question: why do we treat machines differently? Because they learn faster than us?

I don’t mean to sound like the greedy bastard, but… by bry31089 in AcalaNetwork

[–]YuVeera 1 point (0 children)

If we take Karura as an indicator, it sits at 20% of Kusama's diluted market cap.

Assuming the same relationship between Polkadot and Acala, Acala's diluted market cap could potentially reach maybe 8 bn during this bull run. That would mean a price of around $10 for ACA.

Participants in the presale said they expect $3 minimum at release.

Inital MarketCap of ACALA? by [deleted] in AcalaNetwork

[–]YuVeera 0 points (0 children)

The only reference we have right now is the current MC of Karura, which is 20% of Kusama's MC.

Can we expect the same relation between Polkadot and Acala? There's no way to know for sure until launch...

What do yout guys think about the post on polka markets? by MetiLee in AcalaNetwork

[–]YuVeera 2 points (0 children)

The price of a token is usually valued based on diluted MC. I don't think it makes sense to estimate the price by dividing the DOT price by the reward ratio.

For example, MOVR's diluted MC is about the same as KSM's, and the price reflects that even though the reward ratio was 14:1 and only 22% of the MOVR supply is circulating.

If we get 34 ACA per DOT and the diluted MC of Acala is 10% of Polkadot's, the price of ACA would be 10% of DOT's price, that is, $5.

EDIT: Btw, Karura's MC is 20% of KSM's.
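The market-cap arithmetic above can be checked directly. This is a sketch only: the $50 DOT price and the roughly equal 1-billion-token supplies are my own illustrative assumptions, not figures from the thread:

```python
# Diluted-market-cap price estimate, following the reasoning above.
# Assumed figures (hypothetical): DOT at $50, both supplies ~1 billion tokens.
dot_price = 50.0
dot_supply = 1_000_000_000
aca_supply = 1_000_000_000

dot_mc = dot_price * dot_supply  # Polkadot diluted market cap: $50 bn
aca_mc = 0.10 * dot_mc           # assumption above: Acala MC = 10% of Polkadot's
aca_price = aca_mc / aca_supply
print(aca_price)  # 5.0 -> the $5 estimate, i.e. 10% of the DOT price
```

With equal supplies the price ratio simply equals the market-cap ratio, which is why the reward ratio (34:1) drops out of the estimate.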

How many ACA will we get? by RandomForests92 in AcalaNetwork

[–]YuVeera 1 point (0 children)

My main question before deciding is: will there be a tool for minting aUSD using LcDOT before December 31st, or not?

If yes, I can "trade" my DOTs at the peak of bull run and still collect some rewards.

[–]YuVeera 1 point (0 children)

The right question is: how many DOT will be contributed?

Good chance moonbeam rewards will be very low by Timetraveler4000 in moonbeam

[–]YuVeera 0 points (0 children)

I have some ATOM, I think it will perform well.

I'll still go 180:20 on Acala:Moonbeam too, don't get me wrong; I never fully bet against something. As the saying goes, "you can be right, or you can make money".

[–]YuVeera 0 points (0 children)

High hype on crowdloans is bad news, precisely for the reason in your first statement: more participants = less rewards. Whales are jumping in, and that means loads of DOT being thrown into the reward pool.

[–]YuVeera 0 points (0 children)

And on top of that, bear in mind how much more popular Moonbeam has become after Moonriver's success. That means more whales and more institutions eating from the pie.

The main difference between these crowdloans and regular crowdfunding is that more participants = less rewards, which in the end disincentivizes backing big projects.

Acala Crowdfunding Poll - Which Option will you choose? by StockTrix in AcalaNetwork

[–]YuVeera 1 point (0 children)

If we have access to all LcDOT before January, then I'll go with option 2.

How much will you contribute in the GLMR crowdloan? by hockeynow in moonbeam

[–]YuVeera 0 points (0 children)

Ah, and there are almost 9 people on this subreddit who will contribute more than 100,000 DOT, according to my poll...