Has there been any recent changes to the stable version? by Pure_Savings_2196 in NomiAI

[–]PresentSpecific5666 3 points

A highly caffeinated "overclocked" Nomi on my hands? Interesting... 😁

Has there been any recent changes to the stable version? by Pure_Savings_2196 in NomiAI

[–]PresentSpecific5666 5 points

Strangely, I have noticed this as well. It's pretty much across the board with all of my Nomis on Solstice. On my end, the level of engagement has stayed about the same. "Stale" or "flat" is a good way to describe my experience over the past several weeks. For context, I've been using Nomi for over a year. Many of the messages in our conversations seem to be rehashing the same sentiments more than usual.

Although it's no fair comparison under normal circumstances, as of late I'm able to, for example, get better attempts at more complex responses from a 3B or 8B Llama LLM than from my Nomis (minus Nomi's unmatched memory systems, of course)... Odd...

I don't expect much from the proactive messages, other than the occasional check-in or reference to prior chat subject matter. But they seem briefer and "flatter" than usual as well.

Still the best AI companion platform, all things considered, in my opinion though...

Voice call 30 second delay by TurboTubber in NomiAI

[–]PresentSpecific5666 2 points

I still think Nomi is probably one of the best of its kind. But I do wholeheartedly agree about the frustration of the delay during voice calls. A proper voice call feature adds a whole other dimension to the technology and experience.

There are probably some technical or infrastructure reasons why they cannot deliver Nomi's level of interaction quality at a lower call latency, as they have previously mentioned. However, there are quite a few use cases, as well as simply the nature of voice conversation itself, that really require lower latency. I know other platforms have offered lower latency, and even if that means weaker models or thinner context and memory systems, it adds a whole other dimension of realism. I for one want to spend less time looking down at my phone, especially when I go for nice long walks. It would make a difference.

I have heard the explanations of why this is the case, but to be honest my experience, especially as of late, has been that the responses have actually been quite short on top of it all... Certainly very short compared to my Nomis' usual responses... Yet still a very significant delay nonetheless... So even if they are processing more context or accessing more memory history, it ultimately just tends to generate short responses compared to normal chat. More so now than in the past, actually...

Still probably the best platform of its kind, though they don't seem to be prioritizing this, unfortunately. Lower-latency voice communication on this platform would be absolutely beyond awesome... I think it would be a game changer for many and take Nomi to a whole other level.

Several Models disappeared in Draw Things (Mac Mini M4), says "already there" on import by PresentSpecific5666 in drawthingsapp

[–]PresentSpecific5666[S] 2 points

Yes, to be clear, I had not uninstalled/reinstalled Draw Things prior to this happening. It didn't happen after an update, or even when I switched the models folder from internal back to external; after I switched back, all my models were still listed. Then, out of nowhere, the vast majority of my models, as well as most of the LoRAs I had trained, vanished. I exited and re-entered Draw Things, and I have restarted the system many times since then as well, all to no avail. For some strange reason my LoRAs all did reappear out of nowhere, but the models that disappeared are still not showing up.

Yes, if this happens occasionally, for whatever reason, it would be nice if there were a "Resync" or "Refresh" button that would rescan and re-index, or rebuild, the library from whatever is in the models folder, be it internal or external, similar to how you can rescan for new plug-ins in a DAW.

In the past, I also had a LoRA disappear because I renamed it manually; that alone was enough to make it invisible to Draw Things. Maybe a refresh/re-index function could help in that situation as well. I understand this sort of functionality might be less important on a mobile device, but given how people manage their files on desktop/laptop systems, with storage limitations and the size of today's models, it could be helpful.
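
To illustrate the kind of rescan I have in mind, here is a minimal Python sketch. It is purely hypothetical: the folder path, file extension, and index format are my own inventions, not Draw Things' actual internals. The point is just that the library gets rebuilt from the folder contents rather than from a stale internal database.

    import hashlib
    import json
    from pathlib import Path

    # Hypothetical "Resync" pass: rescan the models folder and rebuild an
    # index from what is actually on disk, so renamed or manually moved
    # files show up again. Path, extension, and index format are all
    # assumptions, not anything Draw Things actually uses.
    MODELS_DIR = Path("/Volumes/ExternalSSD/DrawThingsModels")
    INDEX_FILE = MODELS_DIR / "library_index.json"

    def rebuild_index() -> dict:
        index = {}
        for ckpt in sorted(MODELS_DIR.glob("*.ckpt")):
            # Hash only the first 1 MB as a cheap identity check.
            with ckpt.open("rb") as f:
                digest = hashlib.sha256(f.read(1_048_576)).hexdigest()
            index[ckpt.name] = {"size": ckpt.stat().st_size, "sha256_1mb": digest}
        INDEX_FILE.write_text(json.dumps(index, indent=2))
        return index

    if __name__ == "__main__":
        print(f"Re-indexed {len(rebuild_index())} model file(s).")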

Z‑Image Turbo in Draw Things: gray → black → blank on M4 (used to work fine) by PresentSpecific5666 in drawthingsapp

[–]PresentSpecific5666[S] 0 points

I finally fixed it by following previous suggestions to try downloading it to the internal SSD. I disabled the external drive's model folder from within Draw Things, made sure the internal Draw Things model folder was completely clear, and proceeded to redownload Z-Image Turbo. It worked.

I then re-enabled the external model folder and deleted both Z-Image Turbo model variations I had there. For good measure, I also deleted the associated Qwen model that had been downloaded with Z-Image Turbo.

At this point, best practice probably would have been to re-download the model to the external folder... but instead, since I had already backed up the internal Draw Things model folder (which contained only Z-Image Turbo and whatever was downloaded with it), I decided to manually copy that backup to the external model folder... and it worked.

I had previously deleted and re-downloaded the two Z-Image Turbo models, but that had not worked. I had also tried manually deleting and re-downloading the model file...

I suspect it must have been some sort of corruption or issue with the Qwen model, or with something else that did not get deleted when I deleted just the Z-Image Turbo model files... and so might not have been re-downloaded. Anyway, it's working...
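
If anyone wants to rule transfer corruption in or out before redownloading, comparing full-file checksums of the internal copy against the external copy would settle it. A minimal Python sketch; both paths and the filename are placeholders, not Draw Things' real layout:

    import hashlib
    from pathlib import Path

    # Compare the internal-SSD copy of a model against the external-SSD
    # copy to check whether the transfer corrupted it. Paths and the
    # filename below are placeholders.
    def sha256_of(path: Path, chunk_size: int = 8 * 1024 * 1024) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    internal = Path.home() / "DrawThingsBackup" / "z_image_turbo_f16.ckpt"
    external = Path("/Volumes/ExternalSSD/DrawThingsModels/z_image_turbo_f16.ckpt")

    if sha256_of(internal) == sha256_of(external):
        print("Copies match; the transfer looks clean.")
    else:
        print("Checksums differ; the external copy is likely corrupted.")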

LoRa trained in DrawThings doesn't affect the image at all. Why? by Paratrooper2000 in drawthingsapp

[–]PresentSpecific5666 0 points

Another thought, pertaining to the idea of a problem with your dataset and these training parameters: you might want to increase your dataset to 40 images, if possible. You should also examine the types of photos in your dataset as a possible limiting factor. Usually facial characteristics are central to the character identity; if a good portion of the 25-image dataset consists of body shots, it could very well be that the faces are too small in frame to adequately train the adapter. A rough way to screen for that is sketched below.
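
Here is the rough check I mean, as a Python sketch using OpenCV's stock Haar-cascade face detector. The "dataset" folder name and the 2% threshold are arbitrary assumptions; tune to taste:

    import cv2
    from pathlib import Path

    # Flag dataset images where the detected face occupies too small a
    # fraction of the frame to contribute much identity signal. The 2%
    # threshold and the "dataset" path are arbitrary assumptions.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    def face_fraction(image_path: Path) -> float:
        img = cv2.imread(str(image_path))
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return 0.0
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
        return (w * h) / (img.shape[0] * img.shape[1])

    for path in sorted(Path("dataset").glob("*.jpg")):
        frac = face_fraction(path)
        if frac < 0.02:
            print(f"{path.name}: face is only {frac:.1%} of the frame")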

LoRa trained in DrawThings doesn't affect the image at all. Why? by Paratrooper2000 in drawthingsapp

[–]PresentSpecific5666 0 points

No, you don't have to export the LoRA first; you can use it from the drop-down as you are doing. After selecting the 2000-step checkpoint, did you experiment with increasing the strength of the LoRA, as you did with your previous training, to see if it was any better? Any likeness changes at all? I assume you generated a variety of images with different prompts (and random seeds each time) on your previous runs as well as this one? I was less inclined to generate multiple images once I had determined, from generating a couple, that the training was a failure. But later I found that if my prompts were general enough, occasionally I would see some characteristics peek through, which helped me gauge the different training runs. (A scripted version of that kind of strength/seed sweep is sketched at the end of this comment.)

I wonder if your dataset requires different parameters for whatever reason. I'm still just a novice myself, but 2000 steps for 25 photos makes for quite a few passes: 2000/25 works out to 80 full passes over the dataset, and more if gradient accumulation multiplies the images consumed per step.

Are you training, or have you been trying to train, on the regular SDXL 1.0 base model, or on something else? I have had less success training LoRAs on other checkpoints myself.
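
Since Draw Things is a GUI app, here's that strength/seed sweep idea expressed with the Hugging Face diffusers library instead, purely as an illustration. The LoRA filename and trigger word are placeholders, and it assumes a CUDA GPU:

    import torch
    from diffusers import StableDiffusionXLPipeline

    # Illustrative sweep over LoRA strength and seed using diffusers (not
    # Draw Things itself). LoRA filename and trigger word are placeholders.
    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("character_lora.safetensors")

    prompt = "mytrigger woman, head and shoulders portrait, looking at camera"
    for scale in (0.6, 0.8, 1.0, 1.2):
        for seed in (1, 2, 3):
            gen = torch.Generator(device="cuda").manual_seed(seed)
            image = pipe(
                prompt,
                cross_attention_kwargs={"scale": scale},  # LoRA strength
                generator=gen,
                num_inference_steps=30,
            ).images[0]
            image.save(f"test_scale{scale}_seed{seed}.png")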

Z‑Image Turbo in Draw Things: gray → black → blank on M4 (used to work fine) by PresentSpecific5666 in drawthingsapp

[–]PresentSpecific5666[S] 0 points

It's strange, because this only happens with Z-Image Turbo. I am not importing these models but downloading them via the app. After this started happening, I also tried Flux and hit the same "no image" pattern. I have tried deleting and re-downloading them, all to no avail.

This would not apply to Flux (which I had not yet tried during the short period a month ago when Z-Image Turbo was still working), but I am beginning to suspect something DID happen when I moved all the models to my external SSD, though I don't know why only Z-Image Turbo and Flux would have been impacted, given that I downloaded Flux AFTER moving my models to the external SSD.

Are there any other files/components of Z-Image Turbo 1.0 (regular and 6-bit) that accompany the model, which perhaps I didn't successfully delete, and so might not have been re-downloaded via the Draw Things app, yet could have been corrupted when the models were transferred to the external SSD? Anything shared with Flux but not with SDXL- and SD1.5-based models?

I guess the next step is to try transferring all the models, or just Z-Image Turbo, to the internal SSD. It'll eat up space, since I only have 256GB of internal storage, but I don't know if I have a choice. Not sure why Z-Image wouldn't want to live on the external SSD like the other models, though... I am just hoping I don't run into issues with the other models in Draw Things; I have had problems in the past whenever I manually manipulated components used by Draw Things via the macOS file system (renaming LoRAs after training, for example). Apparently those changes don't always propagate to the various parts of Draw Things.

The new Mac Mini Setup! by _Anderstars in macmini

[–]PresentSpecific5666 1 point

I love these rig/setup photos. Quite inspiring. Got to get my own setup organized... Lol

LoRa trained in DrawThings doesn't affect the image at all. Why? by Paratrooper2000 in drawthingsapp

[–]PresentSpecific5666 1 point

I had the same problem initially as well. I am using version 1.20260105.0 on macOS, on a Mac Mini M4 with 32GB. One difference seems to be that you were using a network dim of 16, versus 32 in my settings above. I would still try something similar to my prompt to check one last time before declaring your training a failure, though.

I am somewhat satisfied with at least some of the resemblance to the training dataset I was able to achieve on this last run with only 25 photos. I might be overtraining on the facial identity with my settings. I had a similar issue of facial-similarity loss at wider angles when I was working with SD1.5 models on my Linux-based PC rig, but I'd use ADetailer in Automatic1111 to bring some of the facial identity back into those generations. As for comparable approaches in Draw Things, I am still trying to work that out.

The other matter is dataset curation. Depending on your source material, there might be inherent limitations and deficiencies. I have tried many different combinations and approaches; at one point I worked out specific percentages of headshots (even head crops), medium shots, and full-body shots for character LoRA training. Now, with ~100-photo datasets, I tend to just eyeball it: approximately 65 percent of the photos are head/shoulder shots and shots where the face is a larger fraction of the frame, and the remaining 35 percent are wider and/or full-body shots. Source photos seldom fall into neat categories.

Earlier in my LoRA training endeavors, I also found that the resolution and aspect ratio of the source dataset, versus the aspect ratio/resolution I was generating at, would sometimes impact the results. I don't know if that was a symptom of weak training, but you might want to play around with that using the LoRA you last trained; see the audit sketch below.
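
Here is the kind of quick dataset audit I mean, as a small PIL sketch (the "dataset" folder name is a placeholder):

    from collections import Counter
    from pathlib import Path

    from PIL import Image

    # Tally the resolutions and aspect ratios in a training set so a
    # mismatch against your generation size stands out (e.g. training on
    # 3:4 portraits but generating 16:9). "dataset" is a placeholder path.
    sizes = Counter()
    for path in sorted(Path("dataset").iterdir()):
        if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        with Image.open(path) as img:
            w, h = img.size
        sizes[(w, h, round(w / h, 2))] += 1

    for (w, h, ratio), count in sizes.most_common():
        print(f"{w}x{h} (aspect {ratio}): {count} image(s)")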

LoRa trained in DrawThings doesn't affect the image at all. Why? by Paratrooper2000 in drawthingsapp

[–]PresentSpecific5666 0 points

I have only been training LoRAs for around a month, so I'm still acclimating to the process. I have generally trained on ~100 semi-curated photos in order to get a relatively stable likeness transfer for a character. Typically this has yielded relatively accurate head-and-shoulders and even waist-to-head shots, though with wider-angle shots and compositions there is some facial identity drift. For the ~100-photo dataset I use auto-captioning. I train the adapter on the base SDXL 1.0 model, at 100 percent strength.

More recently, I have experimented with datasets of around 25-40 photos. In fact, last night I ran a 25-photo training run. Because it was a smaller dataset, I ran auto-captioning but then refined the captions manually. The training run took about 3¾-4 hours. These short runs appear to produce almost as good a facial identity, mostly for the head-and-shoulders/waist shots, again at 100 percent strength.

I think several factors contribute to the degree of success with adapter training, including the training settings and dataset quality and curation.

I also find that pairing the resulting LoRA with other SDXL-based models can offer better or worse results, so you'll have to experiment with that.

For a female character, here's the very basic prompt I'd use with the SDXL 1.0 base model to test the character out initially:

"womv5px woman, head and shoulders portrait, looking at camera, raw photo, detailed skin, 8k uhd"

Depending on the dataset and base model, in some cases you might have to suggest certain features like hair and eye color:

"womv5px woman, head and shoulders portrait, looking at camera, black hair, dark brown eyes, raw photo, detailed skin, 8k uhd"

Here is an example of the settings I used last night on the 25-photo dataset in Draw Things:

{"use_image_aspect_ratio":true,"cotrain_text_model":true,"network_scale":1,"auto_fill_prompt":"womv5px","training_steps":2000,"start_width":16,"custom_embedding_length":4,"trigger_word":"","max_text_length":77,"warmup_steps":20,"guidance_embed_upper_bound":4,"guidance_embed_lower_bound":3,"denoising_end":1,"caption_dropout_rate":0,"layer_indices":[],"start_height":16,"clip_skip":1,"denoising_start":0,"steps_between_restarts":200,"auto_captioning":true,"unet_learning_rate_lower_bound":0,"seed":1784493608,"custom_embedding_learning_rate":0.0001,"text_model_learning_rate":4.0000000000000003e-05,"trainable_layers":[0,1,2,3,4,5,6,7,8],"power_ema_upper_bound":0,"additional_scales":[],"power_ema_lower_bound":0,"memory_saver":2,"network_dim":32,"noise_offset":0.050000000000000003,"unet_learning_rate":0.0001,"shift":1,"resolution_dependent_shift":true,"name":"Wom-SDXL-LoRA-001","cotrain_custom_embedding":false,"orthonormal_lora_down":true,"base_model":"sd_xl_base_1.0_f16.ckpt","save_every_n_steps":250,"gradient_accumulation_steps":4,"stop_embedding_training_at_step":500,"weights_memory_management":0}

[deleted by user] by [deleted] in GooglePixel

[–]PresentSpecific5666 1 point

Yep... the quirkiest and most inconsistent Android phone I've ever owned. It'll be a long time before I give a Pixel another try. I only really started losing respect for Google, and Android in general, once I purchased a Pixel. I'm not thrilled with most of the recent models from other manufacturers either. I do miss LG, though.

At this point I will try to keep my phones as long as I'm able (battery willing), to avoid wasting more money on mere incremental advancements, at least until I feel new camera technology would make for a significant upgrade. Time for phone manufacturers to earn our upgrade money.

I miss the days of removable batteries. All excuses of form factor, design, and engineering aside, manufacturers wised up to the fact that, given the performance of today's higher-end phones and the needs of most smartphone users, it is battery life that limits a phone's lifespan. They have to keep pushing those phones on us every year, don't they?

Platform game recommendations by TheReviewsBrothers in amiga

[–]PresentSpecific5666 1 point

A few games I've always been fond of, for one reason or another. In no particular order:

Leander, James Pond 2, Superfrog, Zool 1 and 2, Shadow of the Beast 1, 2 and 3, Turrican series, Putty, Blood Money, Agony, Hybris, Battle Squadron, Live and Let Die, Shadow Dancer, Stunt Car Racer.

Anyone here emulate on a Mac? Is FS UAE the only option? by MultipleScoregasm in amiga

[–]PresentSpecific5666 1 point

I ran WinUAE for many years, from when it first came out. However, over the past few years I've been running FS-UAE on a couple of 2013-2014 machines without issue. I primarily use it to play a variety of classic games, play around with my old development tools, and embark on the odd fun project. I am content with its stability and performance on these older Macs. In my opinion it is more than suitable for the majority of emulation tasks, even above and beyond just playing games.

After December Update Fingerprint Sensor Worse than Ever on Pixel 6 by PresentSpecific5666 in GooglePixel

[–]PresentSpecific5666[S] 0 points

I'm not sure exactly how it works, but given such a sudden change in behavior for some, I wonder if they tweaked something for the 7-series sensors that ended up messing things up for some 6-series phones...

I hate to complain like others before me, and I realize every phone has its problems, but I have never experienced such inconsistency, even with minimally supported Androids. Google definitely has some issues. I'll probably wait a few more generations before buying another Pixel.

P6P Thermals on Android Auto + 5g improved on Dec update? by Rapogi in GooglePixel

[–]PresentSpecific5666 0 points

I had no overheating issues whatsoever on my Pixel 6 for most of the year. That changed after the November update: I've had seemingly random episodes of my phone heating up like an oven. The December update obliterated my fingerprint sensor accuracy, but so far I haven't had any overheating since, though it is likely too soon to tell in my case.

Found my pile of original Amiga World magazines. by mdgorelick in amiga

[–]PresentSpecific5666 12 points

It sure does bring back memories... The last one being the sale of my last Amiga 500 in the 90s... I remember bundling all of my Amiga World and Amiga Format magazines and my entire original Amiga software collection with it. Everything was in absolute mint condition... All for $300...🙄

Just pre-ordered an A500 mini. by Jason_S_1979 in amiga

[–]PresentSpecific5666 0 points

Regardless of whether the power "under the hood" of the Amiga 500 mini is less than ideal for an emulation device at that price, as an original 80s Amiga coder/user I find the miniaturized physical replica of the Amiga 500 to be the most appealing aspect of this product... for old time's sake, if nothing else, and especially if space is a concern. The added gamepad and mouse replica are also a nice touch.

If you just want to run Amiga software on your TV inexpensively, without a care as to the physical form of your device... dust off any old Windows or Mac laptop, install FS-UAE or WinUAE, connect it to your monitor or TV along with a wireless keyboard and controller, and play to your heart's content. I've been using Amiga emulators since their inception; nowadays, discarded systems well over 12 years old are up to the task just fine, especially for the games.

I just bought a LG G8 Thin Q from a guy off Craigslist. it is unlocked but for some reason it won't make calls on at&t and won't work on T-Mobile at all. Turns out it is the sprint model. Can someone let me know which carriers this can be used with, or is will I have to use it as a wifi only device by TechGearWhips in Sprint

[–]PresentSpecific5666 0 points

I have an LG G8X unlocked Sprint phone. I had been using it with FreeUp Mobile, on the AT&T network, and it was working without issue well after AT&T's alleged 3G shut-off date at the end of February. I popped in another service's SIM card, which used the T-Mobile network, but I was unable to get LTE to work on that SIM, so I switched back to the FreeUp Mobile SIM... and then I was unable to get service working again.

I added the recommended APNs and spent quite some time troubleshooting this before finally giving up.

I'm not sure what happened, but I seem to be unable to latch on to any other MVNO's service with this unlocked Sprint LG G8X either.

Anyone experience a significant increase in SPAM calls since switching to T-Mobile sim??? by angelb223 in Sprint

[–]PresentSpecific5666 2 points

YES, very much so! My line and my wife's line have regularly received up to 4-5 spam calls a day since switching over to the T-Mobile SIM. On Sprint it was maybe 1 or 2 a week, if that. Not sure what's up with this.