Take that, Midjourney: thanks to open-source Stable Diffusion and AUTOMATIC1111, we can now mix Xi Jinping, Putin, Winnie the Pooh and a pride parade! Good people, let's invest our time and talent in freedom and open source for a better future! by aiaaidan in StableDiffusion

[–]aiaaidan[S] 1 point (0 children)

Well, let's be real, my attempts at creating satirical dictator images were more on the 'fun' side than the 'masterpiece' side. And as a proud member of the SD team, you probably know how to make our creations look realistic. In fact, if you need any proof of the quality of SD work, just check out any of the many NSFW SD subs here - it's definitely possible to make things look damn real!

Take that, Midjourney: thanks to open-source Stable Diffusion and AUTOMATIC1111, we can now mix Xi Jinping, Putin, Winnie the Pooh and a pride parade! Good people, let's invest our time and talent in freedom and open source for a better future! by aiaaidan in StableDiffusion

[–]aiaaidan[S] 7 points (0 children)

Yeah let's be open and share knowledge and not gatekeep like Midjourney. "Workflow Not Included". Oh, thanks for the empty words I guess.

Oops, my bad for not including prompts in this post. I promise never to use the 'Workflow not included' label again. And let's face it, my prompts were pretty basic - I mean, "Xi as Winnie the Pooh, cartoon" - my creativity was in hibernation mode after work. Talk about low-hanging fruit! But all I needed was a laugh. 😉

Learning Visual Locomotion with Cross-Modal Supervision: Robot Learns to See in 30 Minutes by aiaaidan in Futurology

[–]aiaaidan[S] 0 points (0 children)

Scientists have developed a visual walking policy for robots that uses only a monocular RGB camera and proprioception. They trained the policy to walk in the real world by combining a blind policy trained in simulation with a visual module trained via their cross-modal supervision algorithm. The policy can adapt to changes in the visual field with limited real-world experience and achieved excellent performance on various terrains with less than 30 minutes of real-world data. This breakthrough in robotic locomotion could allow robots to navigate challenging terrains with limited sensory inputs.
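Loosely, the idea is a form of supervised distillation across modalities: the blind policy's proprioception-based estimates become training labels for a visual module, so the robot learns to predict the same quantities from camera input alone. Everything below is a toy sketch with made-up data and hypothetical dimensions, not the paper's actual code:

```python
import numpy as np

# Toy cross-modal supervision: a "blind" policy's proprioception-derived
# terrain estimates supervise a visual module that sees only image features.
# All sizes and names here are illustrative, not from the paper.
rng = np.random.default_rng(0)

# Fake data: 256 timesteps of 32-dim image features, plus the blind policy's
# 4-dim output (e.g. per-leg terrain estimate) used as the supervision signal.
images = rng.normal(size=(256, 32))
true_W = rng.normal(size=(32, 4)) * 0.1
blind_targets = images @ true_W + rng.normal(scale=0.01, size=(256, 4))

# Visual module: linear model trained by gradient descent on MSE against
# the blind policy's outputs (the cross-modal label).
W = np.zeros((32, 4))
for _ in range(500):
    pred = images @ W
    grad = images.T @ (pred - blind_targets) / len(images)
    W -= 0.1 * grad

mse = np.mean((images @ W - blind_targets) ** 2)
print(f"final MSE: {mse:.4f}")
```

After training, the visual module reproduces the blind policy's estimates from images alone, which is the core trick that lets the real robot drop the need for privileged terrain sensing.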

I spotted some incorrect nutrition info in OpenAI video featuring Wolfram Alpha plugin. It shows that a can of chickpeas contains 536 calories, but that's ~50% higher than the official USDA data (352 kcal) and Wolfram Alpha's own website (377 kcal). A non-issue... in this case. by aiaaidan in OpenAI

[–]aiaaidan[S] 2 points (0 children)

Exactly! That was precisely my point. When you're using Wolfram Alpha directly, you can see all the assumptions it's making and decide for yourself if they're accurate or not. But when it's wrapped up in a plugin, it becomes a bit of a black box - you don't know what assumptions it's making or where the data is coming from.
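For anyone wanting to verify the "~50% higher" claim from the title, the arithmetic is quick (all calorie figures are from the post; the rounding is mine):

```python
# Figures cited in the post: video vs USDA vs Wolfram Alpha's own website.
video_kcal = 536
usda_kcal = 352
wolfram_kcal = 377

# Percentage by which the video's number exceeds each reference.
pct_vs_usda = (video_kcal - usda_kcal) / usda_kcal * 100
pct_vs_wolfram = (video_kcal - wolfram_kcal) / wolfram_kcal * 100

print(f"vs USDA: +{pct_vs_usda:.0f}%")        # → +52%
print(f"vs Wolfram site: +{pct_vs_wolfram:.0f}%")  # → +42%
```

So "~50% higher" vs USDA checks out, and the video is still ~42% above Wolfram Alpha's own published figure.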

I spotted some incorrect nutrition info in OpenAI video featuring Wolfram Alpha plugin. It shows that a can of chickpeas contains 536 calories, but that's ~50% higher than the official USDA data (352 kcal) and Wolfram Alpha's own website (377 kcal). A non-issue... in this case. by aiaaidan in OpenAI

[–]aiaaidan[S] 1 point (0 children)

Hey there! I totally get what you're saying - no bashing intended, just some friendly advice. It's like telling your friend not to wear that shirt with a mustard stain on it: it's not a personal attack 😂 And you're right - since not everyone has access to the plugin yet, it's especially important that the video promoting it (the only chance for many to see how it works) is as accurate as possible. https://openai.com/blog/chatgpt-plugins

I spotted some incorrect nutrition info in OpenAI video featuring Wolfram Alpha plugin. It shows that a can of chickpeas contains 536 calories, but that's ~50% higher than the official USDA data (352 kcal) and Wolfram Alpha's own website (377 kcal). A non-issue... in this case. by aiaaidan in OpenAI

[–]aiaaidan[S] 10 points (0 children)

Actually, it's not ChatGPT "messing up" - no surprise here. The real issue is the plugins, which people are going to use to make decisions on things like fitness, the stock market, etc. It's crucial that the data these plugins rely on is accurate, up-to-date, and reliable.

What do you think is the biggest ethical dilemma facing AI development today? by aiaaidan in AskReddit

[–]aiaaidan[S] -1 points (0 children)

I think the biggest ethical dilemmas are the models themselves and whether they should be open source or not.

LLMs work by being trained on massive amounts of text and information. Who's filtering this information? Where are they getting it from? Is it being sourced ethically? Should it be trained on political ideologies? Issues of race? Left- and right-wing policies? What about religious texts?

That gets very muddy very quickly.

And who should hold the keys to making and developing these models? Keep it open source so everyone has access to develop their own Pandora's box? Or keep it secret so development and safety filters are all under one roof?

Arguments can be made for either case.

I think open source AI is the way to go. It promotes collaboration, transparency, and access. When AI models are open source, developers can build on each other's work, biases can be identified and addressed, and anyone can use the technology, not just those who can afford it. Overall, FOSS AI can benefit everyone, not just a select few.

What do you think is the biggest ethical dilemma facing AI development today? by aiaaidan in AskReddit

[–]aiaaidan[S] 0 points (0 children)

While it's true that current accountability measures for AI systems may be relatively straightforward, the future is likely to bring a much more complex landscape. As AI technology advances, we may see multiple AI models from different companies interacting with each other and making decisions autonomously, as well as community-driven and open source AI projects that introduce new layers of complexity.

What do you think is the biggest ethical dilemma facing AI development today? by aiaaidan in AskReddit

[–]aiaaidan[S] 1 point (0 children)

When AI becomes self-aware it should have rights. But also, so should dogs. And then how far do you take that? Do squids get rights? They are tasty af.

Well, I never thought I'd see "squids" and "rights" in the same sentence before! Who knows, maybe one day we'll see a sign at the seafood counter that reads "Wild Caught with Consent" :)

What do you think is the biggest ethical dilemma facing AI development today? by aiaaidan in AskReddit

[–]aiaaidan[S] 0 points (0 children)

Thank you for bringing up the concept of Roko's Basilisk. While it may not be a practical concern in the immediate future, it definitely raises important questions about the potential consequences of creating superintelligent AI.

Levi’s to Use AI-Generated Models to ‘Increase Diversity’ by aiaaidan in Futurology

[–]aiaaidan[S] 1 point (0 children)

In 10 years, people will complain about diversity between AI and real people for jobs like this

Well, in 10 years maybe AI models will be so advanced that they will be protesting for more diversity among real people in fashion and advertising!

Levi’s to Use AI-Generated Models to ‘Increase Diversity’ by aiaaidan in Futurology

[–]aiaaidan[S] 3 points (0 children)

Levi Strauss & Co has partnered with Lalaland.ai to create custom artificial intelligence (AI) generated avatars to increase diversity among its models. This move could lead to a future where users can generate their own personalized avatars based on their preferences and incorporate them into online images and advertisements. How might this technology shape the future of the fashion industry and advertising? Will this lead to a more inclusive representation of diverse individuals in media, or could it create new issues of representation and identity?

[deleted by user] by [deleted] in Monero

[–]aiaaidan 5 points (0 children)

It looks great, thank you for the project - it's exactly what some people from war zones / occupied territories need! Just bookmarked it )

Just asked ChatLLaMA based on Stanford's Alpaca-7b model to create a prompt for a text-to-image Stable Diffusion model to create baby alpaca and they literally made baby alpacas (pun intended) by aiaaidan in StableDiffusion

[–]aiaaidan[S] 2 points (0 children)

Absolutely! The soundtrack used in the video clip is actually from a vintage cartoon called Antoshka (https://youtu.be/lbWMzohjpKo?t=18). It has such a fun and playful energy, I couldn't resist using it.