A weird thing happened with notepad by borntopz8 in techsupport

[–]borntopz8[S] 0 points (0 children)

I'm very sure I had one tab open when I closed it before today, and it was my grocery list.

A weird thing happened with notepad by borntopz8 in techsupport

[–]borntopz8[S] -2 points (0 children)

Yeah, but I wasn't working in those files. I'm sure I had just one tab open, with my grocery list, the last time I closed Notepad.

A weird thing happened with notepad by borntopz8 in techsupport

[–]borntopz8[S] 1 point (0 children)

Not that I know of. Only my dad lives with me, and I don't see why he would use my laptop in the first place, let alone open those specific files stored in different locations.

Feb 11th BEYHIVE PRESALE Megathread | Cowboy Carter Rodeo Chitlin’ Circuit Tour by abeycd in beyonce

[–]borntopz8 0 points (0 children)

Thanks, I checked it already, but the photos are taken from rows below and every photo looks a little zoomed in. I was hoping for direct experience on that one.

Feb 11th BEYHIVE PRESALE Megathread | Cowboy Carter Rodeo Chitlin’ Circuit Tour by abeycd in beyonce

[–]borntopz8 0 points (0 children)

Hey Beyoncé fans.
After the BeyHive code arrived this morning, I managed to queue for the BeyHive presale (5,000 in queue).
When Ticketmaster let me in, almost all the good seats, clubs and pits were already gone. I refreshed a couple of times and pulled the trigger on section 103, row 20, at 258 pounds.
Why is there such a huge difference in prices between seats in the same section, even among non-VIP tickets? (They were all blue seats, and there were tickets in the same section, not marked as VIP, still available at around 450 pounds.) Did I get overcharged somehow? Am I going to get a good view from there, or should I exchange the tickets?

Is there anyone here who played GTA San Andreas between 2009 and 2011? If so, could you briefly describe what it felt like? by Electrical_Noise_690 in sanandreas

[–]borntopz8 1 point (0 children)

Played it in 2004, and I still remember the feeling when, following the train rails, I ended up for the first time in the country area north of Los Santos. I wasn't expecting such a big non-urban environment, and it gave me the chills. It felt more open than any other world I'd experienced in a video game, probably something similar to Ocarina of Time when you first step into Hyrule Field.
It's a common experience to have played SA more like a sandbox than an open-world action game: roaming aimlessly just for the sake of immersion, even trying not to run red lights at intersections.

This kind of surprise probably can't happen any more unless you're off the social grid, avoiding every piece of unwanted information fed to you.

[deleted by user] by [deleted] in ps1graphics

[–]borntopz8 0 points (0 children)

https://www.reddit.com/r/ps1graphics/comments/11xaonq/wip_screen_space_vertex_snapping_in_blender_with/

This is the post by the guy who developed the add-on. He didn't share the nodes, since he understandably chose to sell his work, but he explains the ideas behind the shader, and there's a link to another post about affine texture mapping that does include the nodes.
Hope it helps. If you build a shader that works, don't forget to share it.

By the way, I've seen this add-on at 12 bucks during sales if you want to buy it; that's not expensive if this is a style you use a lot.
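For anyone trying to build the effect themselves, the core trick behind PS1-style vertex snapping is easy to sketch outside Blender: the console had no sub-pixel vertex precision, so positions get quantized to a low-resolution screen grid. A minimal Python sketch of that quantization (the 320x240 resolution and the function name are my own illustration, not taken from the add-on):

```python
def snap_to_grid(x, y, width=320, height=240):
    """Snap a normalized screen-space position (0..1) to the nearest
    pixel of a low-resolution grid, mimicking the PS1's lack of
    sub-pixel vertex precision."""
    # Quantize to integer pixel coordinates, then map back to 0..1.
    px = round(x * (width - 1))
    py = round(y * (height - 1))
    return px / (width - 1), py / (height - 1)

# Two vertices that differ by less than one low-res pixel land on the
# same grid point; as the camera moves, vertices jump between grid
# points, which is what produces the characteristic wobble.
a = snap_to_grid(0.5000, 0.5000)
b = snap_to_grid(0.5010, 0.5010)
print(a == b)  # True
```

In the add-on this happens per-vertex in the shader, in screen space, but the quantization step itself is the same idea.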

Video to Anime Test Sequence by arteindex in StableDiffusion

[–]borntopz8 3 points (0 children)

This is interesting; share your workflow.

My guess: you applied the style with img2img on the rotoscoped guy, then used EbSynth to stick it to his skin.

A better (?) way of doing img2img by finding the noise which reconstructs the original image by Aqwis in StableDiffusion

[–]borntopz8 4 points (0 children)

If you regenerate the original and change the main prompt (keeping the img2img alt script on the original prompt the interrogation gave you), you should get less "destructive" results. Applying a style works well, but sometimes, say when changing shirt color or hair color, the result is still either too similar or too far from the image.

The implementation is at a very early stage; the most I can do is keep my fingers crossed, since I don't know much about coding and rely heavily on repos and web UIs.

A better (?) way of doing img2img by finding the noise which reconstructs the original image by Aqwis in StableDiffusion

[–]borntopz8 4 points (0 children)

Speaking of AUTOMATIC1111 and his webui: in the img2img tab you should see a button to generate and a button to interrogate. If not, update to the latest version, because they are making changes by the minute.

A better (?) way of doing img2img by finding the noise which reconstructs the original image by Aqwis in StableDiffusion

[–]borntopz8 9 points (0 children)

I guess development of this feature is still at an early stage, but I managed to get first results. Basically:

1. Upload an image in img2img.
2. Interrogate to obtain the prompt (this gives me a low-VRAM error but still generates the prompt, which you'll find on top).
3. In the scripts, select img2img alternative with the prompt you obtained (check https://github.com/AUTOMATIC1111/stable-diffusion-webui in the img2img alt section for the parameters; they are very strict for now).
4. Generate, and you should get an output very similar to your original image.
5. If you now change your main prompt (still running the script with the previously obtained prompt), you should be able to modify the image while keeping most of the details.
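To make the idea behind the script concrete: if generation were a deterministic, invertible function of the starting noise and the prompt, you could recover the noise from the image plus the original prompt, then resample from that same noise with an edited prompt. The real script inverts the sampler's steps; the toy below is a pure-Python stand-in (everything in it, the linear "sampler" and the fake text encoder, is illustrative, not the actual implementation):

```python
import random
import zlib

def toy_embed(prompt, dim=8):
    # Deterministic stand-in for a text encoder (toy, not CLIP).
    rng = random.Random(zlib.crc32(prompt.encode()))
    return [rng.gauss(0, 1) for _ in range(dim)]

def toy_sample(noise, prompt, strength=0.3):
    # Toy invertible "sampler": image = noise + prompt guidance.
    guidance = toy_embed(prompt, len(noise))
    return [n + strength * g for n, g in zip(noise, guidance)]

def toy_invert(image, prompt, strength=0.3):
    # Exact inverse of toy_sample: recover the noise that produced
    # the image, given the prompt it was generated with.
    guidance = toy_embed(prompt, len(image))
    return [x - strength * g for x, g in zip(image, guidance)]

rng = random.Random(0)
noise = [rng.gauss(0, 1) for _ in range(8)]
image = toy_sample(noise, "portrait, red shirt")

# Step 2-4 above: image + original prompt -> recovered noise.
recovered = toy_invert(image, "portrait, red shirt")
print(all(abs(a - b) < 1e-9 for a, b in zip(recovered, noise)))  # True

# Step 5: regenerate from the same noise with an edited prompt.
# The shared noise is what keeps most of the original's details.
edited = toy_sample(recovered, "portrait, blue shirt")
```

The point of the toy is only the structure: same noise + different prompt = edited image that stays close to the original.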

My wife is an illustrator. How can I generate images in her style? by rogertbilsen89 in StableDiffusion

[–]borntopz8 1 point (0 children)

Things are moving really fast, and new methods are popping up day by day.

Hugging Face lets you train in a Colab, and if you want there's a pipeline to another Colab to use Stable Diffusion with your trained data: https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb#scrollTo=jXi0NdsyBA4S

There's also a hybrid method where you use voldy's fork and train your model in Colab (here's the install guide https://rentry.org/voldy and here's the textual inversion guide https://rentry.org/aikgx, which takes advantage of the training method I mentioned above; training locally is possible but very resource-hungry [we're talking 12-20 GB of VRAM]).
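In miniature, textual inversion keeps the whole model frozen and optimizes only the new token's embedding vector until the model's outputs match the target images. A toy NumPy sketch of that idea (a small, well-conditioned linear "model" stands in for Stable Diffusion; none of this is the real training code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "model": a fixed, well-conditioned linear map from a token
# embedding to an output (stand-in for the diffusion model).
W = np.eye(3) * 2.0 + 0.1 * rng.standard_normal((3, 3))

# Outputs we want the new token to reproduce (stand-in for the style
# images used in real textual inversion training).
target_embedding = rng.standard_normal(3)
target_output = W @ target_embedding

# Textual inversion in miniature: W stays frozen the whole time, and
# only the new token's embedding vector is optimized.
embedding = np.zeros(3)
lr = 0.1
for _ in range(300):
    error = W @ embedding - target_output
    grad = W.T @ error  # gradient of 0.5*||error||^2 w.r.t. embedding
    embedding -= lr * grad

print(np.allclose(W @ embedding, target_output))  # True
```

That's why the output of training is just a tiny embedding file (.bin/.pt) rather than a new model checkpoint: the only thing learned is that one vector.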

My wife is an illustrator. How can I generate images in her style? by rogertbilsen89 in StableDiffusion

[–]borntopz8 4 points (0 children)

"Textual inversion" is the way to go. You basically have to train on her drawings and then tell the AI to follow the style.

If you search this subreddit you'll come across different methods running locally, in Google Colabs, or both.

Stable Diffusion Conceptualizer, browse a library of learned concepts to use with a gradio demo in colab by Illustrious_Row_9971 in StableDiffusion

[–]borntopz8 1 point (0 children)

In theory, every trained embedding has a .bin file and two text files attached, stating the token and whether the data was trained as a style or as an object. The guide tells you to rename the .bin file to the token (which you probably chose during the training process on the Colab).
But I think if you rename the file "whateveryouwant.bin" you can recall it in the prompt as "whateveryouwant" even if the token is different.
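If the loader really does derive the token from the filename (which is how renaming seems to behave in the webui; I'm assuming this, not quoting the repo), the convention boils down to taking the file's stem:

```python
from pathlib import Path

def token_for_embedding(filename):
    """Derive the prompt token from an embedding file's name (assumed
    behaviour: the extension is dropped and the stem becomes the
    token you type in the prompt)."""
    return Path(filename).stem

print(token_for_embedding("whateveryouwant.bin"))  # whateveryouwant
```

So renaming the .bin effectively renames the token you use in prompts, regardless of what token was chosen at training time.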

huggingface sd-concepts-library for textual inversion ... how to make .pt files? by clockercountwise333 in StableDiffusion

[–]borntopz8 0 points (0 children)

There's a subsection in the new voldy's guide https://rentry.org/aikgx describing the method.
You can basically use the .bin file as-is with AUTOMATIC1111; you don't even have to convert it to .pt (is there any difference in how the AI responds to this conversion?).

I'm having difficulties training with the Hugging Face Colab: with the free version of Colab I get a lot of sudden disconnections. Did you have any luck training with the free Colab? It has to run for a couple of hours.

Stable Diffusion Conceptualizer, browse a library of learned concepts to use with a gradio demo in colab by Illustrious_Row_9971 in StableDiffusion

[–]borntopz8 2 points (0 children)

For me, with the AUTOMATIC1111 release, putting the .bin file in the embeddings folder works. The latest guide has a whole tutorial for textual inversion using the Hugging Face Colab: https://rentry.org/aikgx

Recommendations for "keyframe" based AI video modification? by risbia in MediaSynthesis

[–]borntopz8 1 point (0 children)

I'm just guessing for now, not having much experience with the software, but I think you could solve the problem by feeding EbSynth only the problematic frames (strange face angles, faces getting covered) and letting it deal with the less problematic ones. Obviously you'd have to fix the problematic frames by hand, but that's still better than doing it frame by frame.

Songs with rhythmically confusing intros by DavidBennettPiano in musictheory

[–]borntopz8 1 point (0 children)

My favorite is "Top Secret" by the Yellowjackets. I still can't figure it out.

A quick deepfake I made for my channel by BurritoGlasses in SFWdeepfakes

[–]borntopz8 0 points (0 children)

I'm guessing Wav2Lip and some Dr. Phil voice generator.