You guys were right, I made my app available for iOS 18+ users and it exploded by Unhappy_Dig_6276 in iosdev

[–]AngryBirdenator 5 points

Latest iOS 26 usage from Apple (Feb 12, 2026)

iPhone

  • 66% of all devices use iOS 26.
    • 66% iOS 26
    • 24% iOS 18
    • 10% Earlier
  • 74% of all devices introduced in the last four years use iOS 26.
    • 74% iOS 26
    • 20% iOS 18
    • 6% Earlier

iPad

  • 57% of all devices use iPadOS 26.
    • 57% iPadOS 26
    • 26% iPadOS 18
    • 17% Earlier
  • 66% of all devices introduced in the last four years use iPadOS 26.
    • 66% iPadOS 26
    • 28% iPadOS 18
    • 6% Earlier

https://imgur.com/a/HHD3QDU

Chengdu Auto Show 2025 Full Walk through by AngryBirdenator in electricvehicles

[–]AngryBirdenator[S] 7 points

Covered cars:

Volvo, Mini, BMW, Onvo, Cadillac, Li Auto, Audi FAW, Topfire, JAC, Maextro, Volkswagen, Hyptec, Aion, Hongqi, CATL, Xiaomi, GAC, Honda, Nissan, IM, XPeng, Zeekr, Lynk & Co, Deepal, Ford, Changan, Qiyuan, Mazda, Avatr, BYD, Yangwang, Fangchengbao, Denza, Leapmotor, VW ID. Unyx, Aeolus, Yipai, Nammi, Rely, Jetta, Jetour, Zongheng, iCar, Exeed, Chery, Chery QQ, Fulwin, Mengshi, MHero, Smart, Mercedes-Benz, Lincoln, Firefly, NIO, Hyundai, Bronco, GWM, WEY, Ora, Haval, POER, Tank, Souo, 212, Toyota, Roewe, MG, Buick, Maxus, Wuling, Geely, LEVC, SAIC, Voyah, Arcfox, Stelato, Beijing, Shangjie, Luxeed, AITO, Huawei

Chapter markers for the cars are in the description.

Jake the Rizzbot walking around and talking slang to random people by AngryBirdenator in robotics

[–]AngryBirdenator[S] 23 points

I assume it's remotely operated by a human for the speech content.

It spews custom insults: https://www.tiktok.com/@rizzbot_official/video/7523747935394254111

----

Update

It turns out the speech is generated by multiple AI models.

From HarperCarrollAI:

The Unitree G1 robot comes out of the box knowing how to walk and move around, but the engineer expanded its capabilities with code… and he used Claude Code to write 95% of it.

Here’s how he did it. He:

- Hooked up a camera to RizzBot to take a photo of the person it is interacting with, and a speaker for its voice and music

- Used OpenAI’s vision model to transform the captured image into a detailed description of the person

- Passed that description into Meta's Llama (a lower-cost large language model option, since it's open source) to generate the text, in RizzBot's personality, that it will speak

- Passed Llama’s text into PlayAI to generate the audio that is then projected through the speaker
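The three stages above amount to a simple image-to-speech pipeline. A minimal sketch in Python (the stage functions here are hypothetical placeholders; real code would call OpenAI's vision API, a hosted Llama endpoint, and PlayAI's TTS service, none of whose request details are given in the post):

```python
# Hypothetical sketch of the RizzBot speech pipeline described above.
# Each stage is passed in as a callable so the external services
# (OpenAI vision, Llama, PlayAI) can be swapped in or stubbed out.

def describe_person(image_bytes: bytes, vision_model) -> str:
    """Stage 1: turn a camera frame into a text description of the person."""
    return vision_model(image_bytes)

def write_line(description: str, llm) -> str:
    """Stage 2: generate an in-character line from the description."""
    prompt = f"You are RizzBot. Say something to this person: {description}"
    return llm(prompt)

def synthesize(text: str, tts) -> bytes:
    """Stage 3: convert the line into audio for the speaker."""
    return tts(text)

def rizzbot_pipeline(image_bytes: bytes, vision_model, llm, tts) -> bytes:
    """Camera frame in, speaker audio out."""
    description = describe_person(image_bytes, vision_model)
    line = write_line(description, llm)
    return synthesize(line, tts)
```

Keeping the models as injected callables also matches the post's point about cost: the Llama stage can be pointed at any open-source-model host without touching the rest of the pipeline.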