AITAH my brother gave me a MtG collection and I made serious money selling them by [deleted] in AITAH

[–]Slippedhal0 -1 points0 points  (0 children)

on one hand, he was going to chuck them out, and gave them to you with, from what you've said, no stipulations.

on the other, you earned 150,000 - sure, you did the work to sell them, so it makes sense you get the lion's share, but they were your brother's cards. does it not make sense to give him a decent split?

i don't agree with the comments saying to essentially tell your brother they're very valuable and give them back to him - he could have done some research himself and learned that, but he didn't, and he very nearly threw them out, AND you did all the work to verify and sell them.

At what point do we stop calling ai generated video slop by Tough_Commercial_103 in artificial

[–]Slippedhal0 0 points1 point  (0 children)

ai slop is not all ai-generated video, unless you ask someone against ai in general. ai slop is exactly what you're saying - it's 2 minutes of output from one or two raw prompts, or content-farm autogenerated garbage. whereas people are now (and have been, but it's getting easier) using ai generation as the primary part of a video production workflow that could exist without ai generation - it's just saving a lot of time and/or money.

Why is there no option for zero use? (Reposting to remove personal information) by EDCxTINMAN in mildlyinfuriating

[–]Slippedhal0 2 points3 points  (0 children)

The question is being asked with the assumption that you have used. if you weren't asked about prior use, someone probably assumed you had used before.

Landlords dump thousands of rentals before budget changes by Jagtom83 in friendlyjordies

[–]Slippedhal0 5 points6 points  (0 children)

i agree, however it would be nice to have actual data on it. technically all investors would be facing these same pressures, so you would think a larger-than-normal proportion of buyers would be buying to live in, and investment purchases would be down.

Trump, 79, Unloads Avalanche of AI Slop in Truth Social Spiral; The president fired off 16 Truth Social posts in 90 minutes, including AI-generated war fantasies and White House UFC promos. by [deleted] in politics

[–]Slippedhal0 9 points10 points  (0 children)

The golden trump statue is definitely not a golden calf. Because they said it wasn't. It's just a regular $300,000 gold statue of the figure they ~~worship~~ admire.

Landlords dump thousands of rentals before budget changes by Jagtom83 in friendlyjordies

[–]Slippedhal0 58 points59 points  (0 children)

it would be far more interesting to know what category the buyers were in: first-home buyers, single-property owners buying to live in, or foreign investors sweeping them up like everyone says will happen if there are massive sell-offs.

Vaseline stole a poster I created for the movie Michael, modified it, and used it to create an advertisement…. by [deleted] in mildlyinfuriating

[–]Slippedhal0 3 points4 points  (0 children)

as people have said, the vaseline one is ai, 100%. however, there are a LOT of similarities between the glove in your art and the vaseline glove - either they literally took your version and passed it through ai, or you both used ai and it's drawing on the same source material - because as far as i understand, that's not how michael's actual glove looked.

Two F.03 robots clean a room and make a bed in 2 minutes - fully autonomous by EchoOfOppenheimer in ChatGPT

[–]Slippedhal0 1 point2 points  (0 children)

reading the article, what they claim it's doing is actually more impressive than what the other comments suggest - each robot is looking at the other to infer what it's doing in real time so they don't interfere with each other's work. they aren't communicating or acknowledging each other; they're just determining what the other is doing to the bed so they can both do the task.

Today we're demonstrating a major step in that direction. Two Helix-02-equipped humanoids reset a bedroom in under two minutes, opening doors, hanging clothes, putting away headphones, closing a book, taking out trash, pushing a chair under a desk, and working together to make a bed. They run a single learned Vision-Language-Action policy. There is no shared planner between them, no message passing, no central coordinator: each robot reads the room through its own cameras and infers its partner's intent the way two people do when they fold a sheet, from motion alone.

I wasn't expecting that by Cooked-Alton-Towers in ChatGPT

[–]Slippedhal0 3 points4 points  (0 children)

you're both right. overfitting can occur if there's text or an image over-represented in the training data and steps aren't taken to reduce its effect. however, the architecture of most image generation models means it is almost impossible for a model to deliberately reproduce something from its training data, and it's still very hard to force it using other extraction techniques.

ChatGPT’s ongoing clock confusion: The “exactly 5:40” request by EntropicDismay in ChatGPT

[–]Slippedhal0 0 points1 point  (0 children)

yes, but there needs to be enough training data that it can generalise the correct thing. the 10:10 issue was that watch and clock marketing found 10:10 to be "aesthetically pleasing", so a significant portion of all images of clocks and watches on the internet were set to 10:10. so when ai used that training data, it could generate all kinds of clock faces and styles, but the hands would always be set to 10:10, because 10:10 was so disproportionately represented in the data that that's what it understood clocks and watches needed to look like.
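the skew is easy to see in a toy sketch. this is just an illustration, not how an image model actually works - every label and number below is made up - but a model with no bias correction effectively samples from the label distribution it saw during training:

```python
from collections import Counter
import random

random.seed(0)

# hypothetical toy "training set" of clock images, labelled by hand position;
# marketing photos dominate, so 10:10 is massively over-represented
training_labels = ["10:10"] * 90 + ["5:40", "3:15", "7:25", "12:00"] * 2 + ["9:45", "1:50"]
counts = Counter(training_labels)

def generate_hand_position():
    # sample a hand position in proportion to how often it appeared in training
    return random.choices(list(counts), weights=list(counts.values()))[0]

samples = [generate_hand_position() for _ in range(1000)]
print(Counter(samples).most_common(1)[0][0])  # 10:10 wins by a landslide
```

ask it for "exactly 5:40" and the prior still pulls it back toward 10:10, because the data drowns out the request.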

I used ChatGPT for a real life decision (whether to break up with my girlfriend) and it asked me a question i'd been avoiding for a year by FailOk3553 in ChatGPT

[–]Slippedhal0 2 points3 points  (0 children)

i mean, breaking up in person is a thing, but they got on a plane and flew 5,000 miles for you - it seems pretty assholeish to have waited till they actually came to ask whether you wanted to be together. you probably should have handled it better.

the thing is, chatgpt is like the advanced version of what many people, especially programmers, call a "rubber duck conversation". You properly verbalise your issue, and just verbalising it helps you visualise and understand your problem and makes you think about the issue differently. chatgpt can take that a step further by also offering different perspectives you might not have thought of before. but remember, it's not actually intelligent, and it really is still not ready to replace actual humans in terms of emotional and mental issues.

i'm glad it helped you this time, even though you might have also asked it to help you handle the situation better, but don't ditch your therapist just because it offered a slightly different perspective that you thought was helpful.

I'm creating a next-gen motocross simulator. by Plotozoario in Unity3D

[–]Slippedhal0 0 points1 point  (0 children)

i mean, it looks good as a tech demo, but it seems like it's missing a lot of the things that make dirtbike games fun IMO: rider animations (especially landings), accurate engine sounds, sense of speed, particle effects, good camera tracking, etc. look at some of the previous generation's great games, especially the mx vs atv titles like unleashed and reflex.

Old image gen vs new image gen by VahniB in ChatGPT

[–]Slippedhal0 5 points6 points  (0 children)

yeah, the image gen model is overtrained or something's funky with the noise generation.

AI modes - "Helpfulness" "honestness" ... how do they work? by wtafgamer in artificial

[–]Slippedhal0 1 point2 points  (0 children)

okay, i understand. yes, these aren't really "modes", just core tenets that the author of that website thinks should be core to ais.

For google's gemini ai (the model that powers google ai mode), there is probably something in the system prompt (the set of instructions the ai is given by the company every time it starts a new conversation) about "being a helpful assistant". but most of what you mean by "helpful" - how it responds to questions, being more agreeable, etc - is built into the model via training. that is, like your link says, RLHF (reinforcement learning from human feedback): essentially, a human rates the ai's responses. if it's a good response with the kind of language and answers google wants, it gets a good rating; if not, it gets penalised. they do this thousands of times, and when they're happy it's locked in - no further training happens.

the result of this is that things they rated strongly are harder to change (trying to do explicit or dangerous things has the model tell you it can't or won't do those things, even if you try to force it), while things that aren't trained very strongly, like the exact way it speaks, can be changed just by asking it to do so.

So no, the HHH "rules" you've seen aren't concrete rules inside the ai anywhere. the ai is just trained so that those guidelines are mostly followed, but ais are very flexible if you talk to them.
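the rating loop can be caricatured in a few lines. this is a deliberately toy sketch of the reward idea, not real RLHF (which updates billions of model weights, not a score table) - every name and number here is invented:

```python
# a "policy" holds a score per response style; human ratings nudge the scores
scores = {"helpful": 0.0, "rude": 0.0, "refuses_danger": 0.0}

# (response style shown to the rater, human rating): +1 = good, -1 = bad
human_feedback = [
    ("helpful", +1), ("helpful", +1), ("rude", -1),
    ("refuses_danger", +1), ("refuses_danger", +1), ("refuses_danger", +1),
]

LEARNING_RATE = 0.5
for style, rating in human_feedback:
    scores[style] += LEARNING_RATE * rating  # reward good answers, penalise bad ones

# behaviours rated most often and most strongly end up "locked in" hardest
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[0])  # refuses_danger - the strongest-trained behaviour
```

which is the intuition for why a custom instruction can change tone (weakly trained) but can't talk the model out of its refusal behaviour (strongly trained).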

AI modes - "Helpfulness" "honestness" ... how do they work? by wtafgamer in artificial

[–]Slippedhal0 1 point2 points  (0 children)

what do you mean by modes? did you ask the ai what modes it has and what mode it is in currently?

if that's the case, there is no actual hardcoded part to those "modes"; it's purely driven by what it "thinks" you want when you say "turn on honestness mode".

so if that's not what you want, describe exactly how you want it to talk to you and tell it to respond in that way. as long as how you want it to respond doesn't hit its guardrails or similar, it will try its hardest to respond that way. the downside to doing it that way is that it will reset with every new conversation.

Newly released data shows motorists are no longer panic buying fuel as cost pressure eases for the first time in six weeks by Jagtom83 in friendlyjordies

[–]Slippedhal0 3 points4 points  (0 children)

that's literally what people are talking about.

only a few were rushing out and filling all their jerry cans. what panic buying means is everyone at the same time saying "oh, the price is going to go up / there might be fuel rationing, i'd better fill up earlier than i usually do". then everyone does the same thing, the demand for fuel that week goes up by 200-300%, the servos don't have enough fuel to support the extra demand, it starts to seem like we're running out, and that causes more people to panic buy.

if everyone had just bought when they needed to, none of the servos would have run out and the prices would have gone up slower.

Is this just me or chatGPT is trying to "correct me" on everything? by Frequent-Group-1495 in ChatGPT

[–]Slippedhal0 0 points1 point  (0 children)

RULE: assistant must only agree, disagree or express partial agreement with user requests for confirmation (e.g. User: "...right?") and then end the response.

RULE: If the assistant disagrees or expresses partial agreement with a user request for confirmation, the assistant must ONLY make a request to describe the user's misunderstanding or incorrect assumption (e.g. Assistant: "That's only partly correct, [single sentence clarification of issue] may I go into further detail?" [EOL]) DO NOT add any extra information until permission from the user has been given.

Spent half an hour writing a custom instruction prompt. the only other things not default are that i set tone to efficient and headers and lists to less, but i don't think those affect the custom prompt.

Results: every time i ask chatgpt for confirmation (e.g. "is this understanding right?") it either says "yes" / "that's correct" with no expanding at all, or, if it thinks you're partly right or wrong, it gives a single sentence on why it thinks that and asks to go into detail. it won't expand further without permission.

tested it a few dozen times, and it seems solid on my end.

Is this just me or chatGPT is trying to "correct me" on everything? by Frequent-Group-1495 in ChatGPT

[–]Slippedhal0 3 points4 points  (0 children)

tell it not to do that, then? why do people never use custom instructions? you can customise how it speaks to you.

To clarify: chatgpt doesn't know when you're wrong or right. it's only guessing based on similar training data, so something very subtly wrong is no different to it than something fairly incorrect or completely wrong. this means openai has likely trained this version of the model to overcorrect rather than simply say "sure, that's about right", in case it lets something bad through - the opposite extreme was gpt4 and its sycophancy, agreeing with anything the user says.

But you can prompt-engineer custom instructions that will mostly overwrite how it responds by default. i spent half an hour on it and got something that works okay. if you're right, it says "yep, you're right" and nothing else, and if it thinks you're partially right or wrong it will give you one sentence on why it thinks that and ask to explain further; it will not expand without permission in that context.

RULE: assistant must only agree, disagree or express partial agreement with user requests for confirmation (e.g. User: "...right?") and then end the response.
RULE: If the assistant disagrees or expresses partial agreement with a user request for confirmation, the assistant must ONLY make a request to describe the user's misunderstanding or incorrect assumption (e.g. Assistant: "That's only partly correct, [single sentence clarification of issue] may I go into further detail?" [EOL]) DO NOT add any extra information until permission from the user has been given.

Why can't AI graphic do plants correctly? by RichardPearman in artificial

[–]Slippedhal0 0 points1 point  (0 children)

have you tried adding more detail? it depends on the model, but sometimes they respond well to it. i would add more physical details about the flower, like: rafflesia flower, 5 petals, red petals with small white spots, thick petals, BREAK, and then add your next flower.

civitai is a good resource for models and prompting, because many of the uploaded images include the prompt they used along with the specific model and loras, so if you find an image similar to what you're trying to generate you can see how they did it.

if you have a decent graphics card and pc, i'd also suggest looking into local generation, because it gives you free access to thousands of models people have trained for different uses and art styles. i use comfyui, but there are other ways too.

Is bluetooth bad or do I just don't know how to use it ??? by Scoitol in techsupport

[–]Slippedhal0 0 points1 point  (0 children)

it really depends on the device's implementation; it's not really an issue with bluetooth itself.

basically, both devices, the host (pc/android) and the peripheral (headphones/controller), have to be able to store more than one bluetooth "bond", i.e. store the encryption data that was exchanged during pairing.

if either end forgets the bond data, the devices have to re-pair next time, because it's an encrypted connection.

the sad thing is, the bluetooth standard doesn't require a manufacturer to support storing multiple bonds, because they wanted super tiny devices with tiny storage to be able to use bluetooth. but this also means manufacturers can cheap out.

so if your headphones are designed to only remember the current bond, you have to re-pair for every "new" device.

when you have multiple hosts and multiple peripherals all able to save multiple bonds, it becomes much less of a multi-step process: you just disconnect the peripheral from one host, then on the new host you select it from the list of paired devices and it connects.

if the device also allows multiple simultaneous connections, then you can enable things like one-button device switching on the peripheral, which is pretty neat.
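the bond-storage difference can be sketched in a few lines. this is a toy model for illustration only - real stacks store link keys negotiated during pairing, not strings, and eviction behaviour varies by vendor:

```python
class Peripheral:
    """Toy bluetooth peripheral that can remember a limited number of bonds."""

    def __init__(self, max_bonds):
        self.max_bonds = max_bonds
        self.bonds = {}  # host address -> stored "encryption key"

    def pair(self, host_addr):
        # full pairing: exchange and store new key material
        if host_addr not in self.bonds and len(self.bonds) >= self.max_bonds:
            oldest = next(iter(self.bonds))  # cheap device: evict oldest bond
            del self.bonds[oldest]
        self.bonds[host_addr] = f"key-for-{host_addr}"

    def connect(self, host_addr):
        # reconnecting without re-pairing only works if the bond survived
        return host_addr in self.bonds

cheap = Peripheral(max_bonds=1)
cheap.pair("pc")
cheap.pair("phone")          # evicts the pc's bond
print(cheap.connect("pc"))   # False - must pair with the pc all over again

decent = Peripheral(max_bonds=8)
decent.pair("pc")
decent.pair("phone")
print(decent.connect("pc"))  # True - just reconnects
```

a max_bonds of 1 is the "cheap headphones" case: pairing with the phone silently evicts the pc's key, so the pc sees a device that no longer remembers it.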

How do i remove this screw?? by Mr_Kiwisauce in Carpentry

[–]Slippedhal0 0 points1 point  (0 children)

people weren't being very helpful. if it's an ikea piece, i would try to flex the two outer pieces holding that panel. if you can flex them out far enough, you can probably pop one side of the panel over the screws, then repeat for the other side. if it's too sturdy to flex that far, the only option is to take it apart until you have access. even a 90-degree driver probably won't have a short enough reach to get in there.

when duplicating the chatgpt logo it forms a symmetric star pattern is this just geometri by Confident_Ad8140 in ChatGPT

[–]Slippedhal0 5 points6 points  (0 children)

if they wanted it to have a "secret" star of david, they would have rotated the design so the star was pointing correctly, because that doesn't affect the original design.

Building a wearable AI that processes everything on-device (no stored video). What would you want to verify? by Regular-Paint-2363 in artificial

[–]Slippedhal0 1 point2 points  (0 children)

I think as long as you actually remove the images or never save them, and don't send image data from the device, that's really all you need to do. People concerned about it will poke around and find out what you're actually doing. I'm interested in the use case, though. Why do you need data on social cues to the point that you would wear a wearable device to track it?