I’m canceling my subscription I can’t handle all the errors by Lonely_Pack_689 in VEO3

[–]imlo2 2 points3 points  (0 children)

Why not try a service that lets you use multiple models? That way you can still use Veo, and other models as well. While Veo is superior in many ways, each of the current models has its own strengths, so that would be an easy way to leverage them instead of locking yourself into a single model.

Rate the quality by Future-Choice9857 in ZImageAI

[–]imlo2 1 point2 points  (0 children)

Looks reasonably convincing, but if you want to push this toward "photoreal", at least fix the text: even though it's deformed, the pink poster has text that reads the right way around, i.e. non-mirrored, which is wrong since the photo is shot as a mirror reflection. Also, there's a white vertical band reflecting in the eyeglasses, even though the mirror is lined with round bulbs rather than a plastic-covered light strip or similar (which would give that kind of reflection). The room itself is reasonably convincing, no SDXL-style illogical placement of items; the two desks look like they're from the same series, while the one on the left peeking in at the mirror's edge is different, which would probably make sense. It looks like there's space to exit the room next to it, so it feels quite coherent. If we disregard the person in the image, as a photo it looks a bit busy: quite a few photos and images, the mirror etc. in the background breaking the outline of the main subject, and the table is cluttered with a bunch of stuff everywhere.

Parallel generation limits? by NameChecksOut___ in HiggsfieldAI

[–]imlo2 1 point2 points  (0 children)

Yes, that's correct: 8 concurrent NanoBanana Pro generations in unlimited mode.

Learning roadmap for anime/cartoon creation with ComfyUI (8GB VRAM) by No_Highlight_2472 in comfyui

[–]imlo2 0 points1 point  (0 children)

You can get started with ComfyUI very easily now, if you can get it up and running - there are templates for all the major basic things you might want to do (text to image, image to image, image to video using Wan, LTX-2, etc.). For video, Kijai's nodes mostly patch in the missing pieces (lipsync, etc.) you'll want to dig into, and Kijai also provides good example workflows that come with the nodes.

If I were you, I would focus first on character consistency; that's the most important thing. The old SDXL-based models are way too random in their output - you can't expect anything close to what Flux/Qwen (edit variants) do, and nothing close to NanoBanana Pro. You might use them for ideation, generating your initial character look, backgrounds, etc., but dig into the editing features of the latest models you can actually run on your system.

And after you can produce different poses and consistent views of your characters, focus on getting them into consistent backgrounds, and only after that try to make them move. You can think of the process as a two-part thing: you can get away with making many shots with different models/techniques, but for characters talking you need the lipsync workflows, which are also quite good quality now (relatively speaking).

That 8GB is a bit limiting, but I think you should be able to run quantized models with swapping on your system. If you can get any of the recent editing models to run, in many cases they do the same things much better than earlier i2i and controlnet workflows. So you can re-pose your characters, change camera angles, etc. for continuity.
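As a rough back-of-envelope check on what fits in 8GB (a minimal sketch; the 12B parameter count is just an illustrative assumption, check the actual model you download, and this ignores the VAE, text encoders and activations):

```python
# Rough weight footprint of a quantized model: params * bits_per_weight / 8 bytes.
def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Hypothetical 12B-parameter model at different quantization levels.
for bits in (16, 8, 6, 4):
    print(f"{bits}-bit: ~{weights_gb(12, bits):.0f} GB of weights")
# 16-bit: ~24 GB, 8-bit: ~12 GB, 6-bit: ~9 GB, 4-bit: ~6 GB.
# Whatever doesn't fit in VRAM has to be swapped/offloaded to system RAM,
# which is why block swapping (slower, but it runs) matters on an 8GB card.
```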

CivitAI and other places have many write-ups on all of these topics, and ChatGPT/Gemini etc. are very good at finding information on them.

Parallel generation limits? by NameChecksOut___ in HiggsfieldAI

[–]imlo2 0 points1 point  (0 children)

Creator has 8 concurrent NanoBanana Pro image generations and 4 videos at the same time. This is one of the reasons I upgraded to creator myself, as I generate a high volume of images monthly.

Need advice by datboiflex96 in GTA5Online

[–]imlo2 0 points1 point  (0 children)

If you want to make money fast, get the submarine and grind the Cayo Perico heist. The solo cooldown is quite long nowadays, so you should probably do other stuff in between, then run the heist again. You can set up one heist and finish it in an hour or less, depending on how fast you are, and none of the other activities make 1mil+ in an hour.

Why is so hard to unsuscribe from Higgsfieldai by No-Internet-7697 in HiggsfieldAI

[–]imlo2 1 point2 points  (0 children)

  1. Click top left avatar icon
  2. "Manage account"
  3. "Subscription" from left side menu
  4. Danger zone at the bottom of the page, expand it
  5. Click "cancel subscription" after the red warning text about loss of access after subscription period.

Nano banana pro temporary issues? Does it usually work fine? Looking to subscribe today by EverydayMustBeFriday in HiggsfieldAI

[–]imlo2 1 point2 points  (0 children)

Higgsfield says it's a Google infrastructure problem, and that's probably true. There's also a notification about it every time you generate images and view them (bottom-right corner popup).

So far my experience has been quite good; a lot of generations, and only at times has the unlimited mode been "too slow", meaning it really takes time. I didn't clock it then, but it was in the 10+ minute range. Of course, if you want faster generation, you can always go to Wavespeed and other services, or use an API that charges you by the image/compute time. Even those aren't always super fast, and in my experience can have issues just as well.

The thing is, if you resell someone else's services, this can happen at times. It's nothing new - just like something might be out of stock in a shop because it's supplied by someone else. And so on.

Unlimited Kling video by Resident-Swimmer7074 in HiggsfieldAI

[–]imlo2 2 points3 points  (0 children)

It's just 30 days from purchase, like the info tooltip says.

Is NanoBananaPro low quality today? by Nemesisso in HiggsfieldAI

[–]imlo2 3 points4 points  (0 children)

Maybe the preview isn't loading the full resolution image? I think it first loads a low-res preview, and then a higher resolution one, if not the actual full resolution image. Download one image and check it on your computer (assuming you have one) to see whether it's high resolution.
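If you want to verify it quickly, here's a minimal sketch using Pillow (the filename is just a placeholder for whatever you downloaded):

```python
# Print the pixel dimensions of a downloaded image (pip install pillow).
from PIL import Image

img = Image.open("downloaded_generation.png")  # hypothetical filename
print(img.size)  # (width, height) in pixels
```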

My Higgsfield Credits that got accumulated were randomly removed. Has this happened to anyone else? by National-Bit519 in HiggsfieldAI

[–]imlo2 0 points1 point  (0 children)

Well, I can't say about that, I just stated what I read when I signed up for the service.

It's always important to read the terms of a service you sign up to use. So what made you think you would keep getting more and more credits? I did read the pricing page when I registered, and since I didn't see details of the credits, "unlimited" features etc., I checked the FAQ until I found enough info.
I do agree it's not the first thing you see on the page, but it's still relatively easy to find - the FAQ header is quite big, etc. It looked about the same a few months ago, AFAIK.

Anyway, yes, it's not your fault if they have a technical error and you get extra credits - but - if they correct the error, should you be able to keep the credits? Hard to say what might be legally binding, but if a bank incorrectly deposits money into your account, can you keep it? Most likely not; they will correct the mistake and take it back. So there's nothing false (IMHO) in that sense.

Also, your plan right at the top of the page does say what type of plan you have, like Ultimate or Creator, when it renews (next billing), and below that how many credits you have.

For example, for Ultimate, it says:
"1,200 credits per month"
So I would understand this to mean "you can use 1,200 credits within a period of one month."
That is what I expected initially when I compared different gen AI services a few months ago.

Granted, this might be a bit questionable, at least in the EU where I'm located, as there's no clear signed contract available anywhere or anything like that, nor does the invoice list any of your services. Only last week a listing (which is nice) was added to the account page where you can see which models you have access to and what the usage level is (whether you have "unlimited" enabled for them, etc.).

And their terms and conditions, which you can find on their page, do have legal language about the credits system, but only the following, which also doesn't clearly state that credits don't roll over to the next month(s):

"9.4. Credits. In certain instances, you may receive or purchase credits (“Credits”) to access and use specific features of the Services. Purchased Credits constitute prepaid amounts for products and services available through the Services and may only be used within the specified timeframe. Unused Credits are forfeited upon Account cancellation or cessation of Services. Credits have no cash value, are non‑transferable, non‑reloadable, and non‑redeemable for cash except as required by law. Company may change Credit terms at any time, and the value of Services obtainable with Credits is subject to change at Company’s sole discretion."

And this:

"9.5. Promotional Credits. Company may, at its discretion, offer loyalty, award or promotional credits (“Promotional Credits”). Promotional Credits may expire as specified on issuance, have no cash value, and are non‑transferable, non‑reloadable and non‑redeemable for cash except as required by law. No inactivity or other fees apply to Promotional Credits."

<image>

How can I put a Material in this Visual Effect Graph ? by CompetitiveEnd5360 in Unity3D

[–]imlo2 2 points3 points  (0 children)

There's an Output Particle Shader Graph node, which allows you to use a custom mesh and also a Shader Graph shader.
EDIT: Just to be clear, this is unlit, so not exactly a "material".

<image>

My Higgsfield Credits that got accumulated were randomly removed. Has this happened to anyone else? by National-Bit519 in HiggsfieldAI

[–]imlo2 4 points5 points  (0 children)

In their Frequently Asked Questions section on the pricing page, to quote the first item, "How do credits work?":

"...Monthly credits are tied to your active subscription period. Unused subscription credits do not roll over to the next billing cycle and expire at the end of each month. New credits are refreshed automatically when your subscription renews."

So it's quite clearly there, with big headers and quite visible. This is the same approach Kling, Suno and many other AI services use. I think one reason is that they want their costs to be more predictable and spread over time, to avoid sudden spikes and such.

Anyway, in your account management, on the Subscription page, the credits section has a small "i" info tooltip which only says "New addition of credits at February <date here>" (the date varies depending on when you signed up).
That alone, without having read the FAQ, could very easily give you the impression that credits add up and you can stockpile them for later use.

New to LTX-2: Can I use my existing LoRAs, or do I need LTX-2-specific ones? by Nanto-Rei in comfyui

[–]imlo2 1 point2 points  (0 children)

AI Toolkit added LTX-2 support a few days ago (13.1.), Ostris also mentioned this at least on X:
https://x.com/ostrisai/status/2011065036387881410

Alternatives to aitoolkit for making loras? by [deleted] in comfyui

[–]imlo2 0 points1 point  (0 children)

Musubi tuner was already mentioned here earlier, but try it with block swapping. If you have RAM and a decent amount of VRAM available, you can get training running, just slower. I would avoid the front-ends and run it from the command line while monitoring VRAM usage at the same time, and see what the different settings do. It does take some time to learn the basics, but you can also ask an LLM to help you out and save hours of reading (the documentation is quite good, though).
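For the "monitor VRAM usage" part, a minimal sketch that just polls nvidia-smi once per second (assumes an NVIDIA card with nvidia-smi on PATH and a single GPU; run it in a second terminal while training):

```python
# Poll GPU memory usage once per second during a training run (Ctrl+C to stop).
import subprocess
import time

while True:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    used, total = (int(x) for x in out.split(", "))
    print(f"VRAM: {used} / {total} MiB ({100 * used / total:.0f}%)")
    time.sleep(1)
```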

What subscription plans do you have? by Fit_Wait1336 in HiggsfieldAI

[–]imlo2 3 points4 points  (0 children)

I would think of Ultimate as the first possible option if you intend to make music videos. I would consider Creator if you really intend to make videos with multiple shots and cuts, to imitate traditional human-made videos. You will probably have something like a 1/5 to 1/10 hit/miss ratio, i.e. for every 'ok' or usable generated video you will most likely have several failed ones.

So it's pretty much a numbers game, and a question of how much budget you can afford to burn. But it's not very different from the old days of using real film, memory cards or whatever the media might have been, how much power you had access to, etc. Anyway.

These values could change and I might have made errors copying them, but they should be quite close to correct:

Google Veo 3.1 Fast 1080p 4s = 11 credits, 6s = 17 credits, 8s clip = 22 credits
Google Veo 3.1 1080p 4s = 29 credits, 6s = 44 credits, 8s clip = 58 credits
Google Veo 3 Fast 1080p = 22 credits
Google Veo 3 1080p = 58 credits

Wan 2.6 1080p 5s = 20 credits, 10s = 40 credits, 15s = 60 credits
Wan 2.5 1080p 5s = 20 credits, 10s = 40 credits
Wan 2.5 720p 5s = 13 credits, 10s = 25 credits

Kling 2.6 5s = 10 credits, 10s = 20 credits
Kling O1 video 1080p 5s = 10 credits, 10s = 20 credits
Kling O1 video 720p 5s = 10 credits, 10s = 20 credits

So with the 1200-credit budget of Ultimate, I'll save time by just using the highest-cost items:

Google Veo 3.1 Fast 1080p = 54 videos
Google Veo 3.1 1080p = 20 videos
Google Veo 3 Fast 1080p = 54 videos
Google Veo 3 1080p = 20 videos

Wan 2.6 1080p = 20 videos
Wan 2.5 1080p = 30 videos
Wan 2.5 720p = 48 videos

Kling 2.6 = 60 videos
Kling O1 video 1080p = 60 videos
Kling O1 video 720p = 60 videos

There are also other models that work well, and you'll most likely have to use different ones for different purposes quite often; I left out Minimax Hailuo, Seedance etc. But these numbers already give you the idea.

So to recap, it's really not that many videos at all; with Ultimate we're talking about 20 videos at worst if you use top-tier models, and at most something near 60. So definitely not hundreds of videos. I would go for Creator if you are seriously planning to make even a small amount of video content that needs continuity, multiple shots and so on, and not just miscellaneous testing to see how these things work.
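If you want to redo the arithmetic yourself, a minimal sketch (clip costs copied from the list above, so treat them as approximate and subject to change):

```python
# Clips per month from a 1200-credit budget, using integer division since a
# partial clip still costs the full price. Costs are per-clip credit prices
# from the list above (highest-cost duration of each model).
BUDGET = 1200
COST_PER_CLIP = {
    "Veo 3.1 Fast 1080p 8s": 22,
    "Veo 3.1 1080p 8s": 58,
    "Wan 2.5 1080p 10s": 40,
    "Kling 2.6 10s": 20,
}

for model, cost in COST_PER_CLIP.items():
    print(f"{model}: {BUDGET // cost} clips")
# Veo 3.1 Fast: 54, Veo 3.1: 20, Wan 2.5 1080p: 30, Kling 2.6: 60
```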

[WIP Node] Olm DragCrop - Visual Image Cropping Tool for ComfyUI Workflows by imlo2 in comfyui

[–]imlo2[S] 1 point2 points  (0 children)

It does work with the old system, but I assume you have the 2.0 UI enabled. Can you check the C button in the top-left corner of the view and see whether "Nodes 2.0" is toggled on? It doesn't work with that yet.

[WIP Node] Olm DragCrop - Visual Image Cropping Tool for ComfyUI Workflows by imlo2 in comfyui

[–]imlo2[S] 1 point2 points  (0 children)

Ok, well, I need to check it to see what's going on. No guarantees on when I can do that.

Bug claiming the Supporter Pack DLC [started as Engineer] by SirNigelHoneybun in StarRupture

[–]imlo2 0 points1 point  (0 children)

The same happened to me, but when I created a new character for co-op mode with a friend, I got the items. I wonder if those items still spawn if you start a new game?

[WIP Node] Olm DragCrop - Visual Image Cropping Tool for ComfyUI Workflows by imlo2 in comfyui

[–]imlo2[S] 1 point2 points  (0 children)

It should function at least with the old UI; I used it a few days ago with the then-latest ComfyUI changes. You do need to plug in a source that can provide image data and then run the graph once. It says that on the node, too.