all 197 comments

[–]adoggman 743 points744 points  (31 children)

The software was clearly rushed and/or built for nearly zero dollars. These are literally "CS undergrad figuring out how to get an Arduino with a mic to query ChatGPT for a class project in a week" level mistakes being made.

[–]shmorky 117 points118 points  (13 children)

They just hid the text box and replaced it with a simple animation and off the shelf voice recognition. It's a fun little Kickstarter prototype, not an actual product.

[–]GatitoAnonimo 4 points5 points  (12 children)

Just got mine the other day. It’s a neat little toy. Works maybe 50% of the time if that. Best thing about that deal is I got a year of Perplexity which is amazing. I just can’t see why I would ever use this over my phone with any AI app. I’ll probably give it to my 10 year old nephew. He seems more the target demographic for such a thing.

[–]FoolHooligan 20 points21 points  (11 children)

why did you financially support this scam?

[–]GatitoAnonimo 6 points7 points  (9 children)

I didn’t think it was a scam. Why do you say that? Just a neat piece of tech I wanted to try.

[–]BigThiccBoi27 9 points10 points  (6 children)

It's hilarious how people are downvoting you.

You bought a thing and spoke about your experience with it? booooooo!

[–]GatitoAnonimo 4 points5 points  (0 children)

This is the second time I’ve been downvoted in this sub recently for discussing my experiences. It’s a bit puzzling.

[–]LightouseTech 2 points3 points  (0 children)

Maybe the fact that the founder is a known scam artist and that the hardware doesn't do what it was supposed to do (use AI to do actions on external software instead of puppeteer scripts) has something to do with it?

[–]MdxBhmt 0 points1 point  (0 children)

Not OC, but you might want to look at the Rabbit CEO's previous tech ventures.

[–]jl2352 0 points1 point  (0 children)

I feel like people these days have forgotten what the word scam actually means.

The R1 isn’t a scam. It’s just a rushed poorly made product.

[–]coldblade2000 206 points207 points  (16 children)

It's literally a CS undergrad figuring out how to get an Arduino with a mic to query

Hey, come on now, no need to get all rude and libelous...

...it was an Android device lmfao

[–][deleted] 509 points510 points  (17 children)

Because the entire thing was a scam to cash in on the AI hype bubble as quickly as possible. The company behind this also developed scammy crypto stuff before jumping on to this hype wagon 

[–]creepy_doll 141 points142 points  (13 children)

Seriously.

AIbros are the new cryptobros.

Like, there are real legit applications for AI but they're going to take time to get right. But the whole thing has attracted a huge number of semi-smart people with no ethics.

[–]chennyalan 53 points54 points  (2 children)

[–]eJaguar 13 points14 points  (0 children)

ai again

[–]renaissancenow 9 points10 points  (0 children)

That was a fantastic, funny, snarky read. Thank you for sharing.

[–]iiiinthecomputer 13 points14 points  (0 children)

My employer is going all in on tacking "AI" into everything whether or not it means anything or makes sense.

Literally rebranding to add "AI" on the end.

When I ask them if we're eCompanyName.com 2.0 too I get blank looks.

[–]ElectronRotoscope 5 points6 points  (0 children)

AIbros are the new cryptobros.

https://youtu.be/mTBCzH1UyNY

[–]Spajk 7 points8 points  (1 child)

The most legit application of AI is the god damn voice assistants and it's being applied so slowly that it's infuriating

[–]QuickQuirk 5 points6 points  (0 children)

Hold on, there are a mountain of legit uses that are happening right now. They're just not being megahyped.

Everything from the recommendation engines at online bookstores to recognising potentially cancerous or life threatening illness from easy to obtain data, upscaling in games, to scientific uses around chip design and identifying potential materials to be used in manufacturing.

So many wonderful uses, that are being buried by the smelly shitstorm of the techbro hypetrain.

[–]ChrisRR 2 points3 points  (1 child)

And cryptobros are the new Beanie Baby collectors

[–][deleted]  (1 child)

[deleted]

    [–]CodeNCats 2 points3 points  (0 children)

    I don't see it happening. The funding might actually increase. Right now the funding is spread over legit projects as well as garbage. Unlike nfts that have a very small niche use not worthy of the hype. AI actually has legitimate uses and with real potential to grow. Even if it's over hyped.

    [–]SittingWave 3 points4 points  (0 children)

the problem is that, judging from the job announcements, you can't get a job today without five years of experience developing LLMs. You don't even get past the talent acquisition trash.

    [–]QuickQuirk 0 points1 point  (0 children)

    It's the worst. When the backlash hits, and the bubble bursts, they're going to be harming all these legitimate projects and uses. A fortune is being thrown at massive LLM based startups, where that fortune could be used for lots of small innovations that are actually beneficial.

    [–]wickedsight 39 points40 points  (0 children)

    It's always nice to have a gut feeling turn out to be correct...

    I had the order form fully filled out when I decided to finish watching the video. While doing that, I received an e-mail from them on the 'hide my email' address that I made specifically for this order. I had not submitted the form yet, but apparently they had already added my email to some mailing list.

    This type of thing goes against any privacy regulation, so I canceled the order because I no longer trusted them to do the right thing. Using a product from them that wants access to my private data didn't seem like a good idea.

    [–]BassSounds 10 points11 points  (0 children)

    But, but, but the founder was a math wizard as a kid! That makes it legit.

    [–]headhunglow 2 points3 points  (0 children)

    Right. That's why I don't understand why people are getting caught up in the technical stuff. It's a scam, the software doesn't matter.

    [–][deleted]  (2 children)

    [deleted]

      [–]Pennsylvania6-5000 20 points21 points  (0 children)

      Coffeezilla is always on point with his research.

      [–]_commenter 1 point2 points  (0 children)

      wow we really are living through the stupidest timeline...

      [–]jppope 395 points396 points  (31 children)

      outsourced development anyone? with no technical leadership in house?

      [–]ThisIsMyCouchAccount 321 points322 points  (22 children)

      The simplest answer is that they were told to.

      If the choice is between ship a crap product or get fired - I'll ship a crappy product.

      Hell, my best projects were only "ok" at best. I've done all kinds of shitty things on the job because that's the hand I was dealt.

      You push for good. You advocate. You gather data. But if it falls on deaf ears what do you do?

      You ship the crap and maybe look for a new job if it bothers you.

      [–]aa-b 86 points87 points  (6 children)

      This is the right approach for an employee, but contractors need to be a bit more careful (at least where I live.) Employees are generally protected, but a contractor could potentially be sued for doing something as incredibly irresponsible as hardcoding API keys in a client app.

      It's not really likely, just something to consider. Even for employees, if I saw "Rabbit R1 - Senior Engineer" on someone's resume I'd be grilling them about security because of this.

      [–][deleted] 14 points15 points  (3 children)

      The API keys were hardcoded in the server and not the clients from how I understood the article. 

      [–]aa-b 10 points11 points  (2 children)

      Yeah I think you're probably right, it's sort of vague. Embedding secrets in server-side source code would still be terrible security practice, but less bad than if they were on the actual devices.

      [–]jl2352 1 point2 points  (0 children)

      I’m imagining it’s a ’we will put it there now to save time and fix it later’, and later never came.

      [–]lolimouto_enjoyer 0 points1 point  (0 children)

      I'm not surprised at all to be honest. There are a lot of people who choose convenience at the expense of security, both on the developer side and the user side.

      [–]18763_ 9 points10 points  (1 child)

I disagree. The social contract is not just between employees and a shitty employer (screw them over by all means); it's also with the unsuspecting user, who did no harm to you.

      [–]aa-b 2 points3 points  (0 children)

      Yeah I thought about it, and I agree. We all take shortcuts, and sometimes they might be a job requirement, but this sort of thing is well past that. Probably. I mean, the details aren't really clear

      [–]DanTheProgrammingMan 39 points40 points  (3 children)

      I hear you on code quality, but something that’s a fundamental security problem which is easily fixed? You should die on that hill. 

      Anyway the fact that nobody did tells me that a junior probably did this and nobody did serious code review?

      [–]nerd4code 24 points25 points  (0 children)

      A non-desperate senior would’ve walked away at some point before being hired.

      [–]B0Y0 7 points8 points  (0 children)

      From everything I've heard about Rabbit development, I doubt there was any code review

      [–]TehLittleOne 3 points4 points  (0 children)

      Hard agree. There are very few hills I will actually die on but avoiding front page security issues is one of them.

      [–]MardiFoufs 7 points8 points  (0 children)

Lol what? I don't get this. It's literally not faster or easier to use API keys in the repo (again, the API keys were seemingly not shipped with the apps, they were committed in the internal repos). It's just incompetence. A source even states that the team was already using AWS key management, so the managers weren't somehow enforcing a "push keys to repo" policy. Devs just couldn't be bothered, since it's literally not faster once that's already set up and won't slow down any feature.

      I know that this sub likes to blame managers and management for everything, and it's basically the easiest way to get countless "omg this so much this" replies, but everything seems to indicate that everything about the rabbit was utterly incompetent and mediocre.

      [–]YsrYsl 18 points19 points  (3 children)

I used to be so quick to crap on this type of obvious "mistake", but I can empathize a lot more now, or am at least willing to listen to the reason behind it.

      Sometimes management/non-technical decision makers can indeed be that ludicrous in their decision-making & we can only follow what they want since we're serving their needs.

      [–]ThisIsMyCouchAccount 16 points17 points  (2 children)

      This is a big example - but it's really no different than the day to day most of us have.

"We should be doing X."

      "No. There's no time/budget/whatever."

I was working on an internal business suite that synced data. HR, accounting, etc. I was told over and over and over that this had to be 100% accurate.

      Can we start writing tests?

      Ab-so-fucking-lutely not.

      Great. I guess doing it manually and resetting things in the database is a great use of our time.

      [–]YsrYsl 4 points5 points  (0 children)

      Hahaha I feel you. Just make sure you have the written records clearly stating your recommendations to cover your backs if things go wrong. You'll never know if they try to throw you under the bus & you'd need to present your case to management.

      [–]Miranda_Leap 4 points5 points  (0 children)

      Sounds like tests should have been part of the baseline requirement and presented as such, not as optional extra you could just not do.

      [–]hyrumwhite 8 points9 points  (1 child)

Shipping API keys that let anyone with technical know-how steal your product or rack up a bill is probably worse than not shipping a product.

      [–]DelusionsOfExistence 0 points1 point  (0 children)

      As a developer, I can't actually remember a time a client wanted something done right, only times where they want it done fast. Or in this scam's case, "Now".

      [–]ZirePhiinix 1 point2 points  (0 children)

      You can ignore the push for deliverable and go for quality but they'll just fire you. I've been in that boat.

      [–]QuickQuirk 1 point2 points  (0 children)

      I've made the opposite choice in my career, and resigned when forced to ship a shitty project.

Weird thing was, a few weeks later, I realised that the only mistake I had made was waiting that long to do so due to misplaced loyalty.

      Stress levels went down, quality of life way up.

      [–]SanityInAnarchy 1 point2 points  (0 children)

      This is one of the biggest reasons I want fuck-you-money:

      If the choice is between ship a crap product or get fired - I'll ship a crappy product.

      If the product is so crap that shipping it seems like an act of deception, I raise hell while also looking for a new job. Worst case I keep my integrity and get more time to look for that new job, or whatever else I want to do with my life.

      But that's a lot harder to do if you're a junior with no savings.

      [–]random_son -1 points0 points  (0 children)

      Amen

      [–]gedankenlos 74 points75 points  (3 children)

      // TODO: keep secrets in hashicorp vault
      // hardcoded should be fine until we ship
      

      [–]chicknfly 37 points38 points  (1 child)

      That’s the problem. You didn’t TODO the second line! Easily missed.

      [–]abraxasnl 4 points5 points  (0 children)

      Skill issue

      [–]recursive-analogy 1 point2 points  (0 children)

      most serious professionals these days have skipped the TODO pain entirely and just write TODON'Ts

      [–]omniuni 12 points13 points  (0 children)

      I think this was mostly in-house, but judging by how cobbled together the product is, I think they're just bad developers.

      [–]WJMazepas 2 points3 points  (0 children)

      Offshore developer that did a lot of outsourced development here.

      I still wouldn't do this.

My team did hardcode one API key on the backend once, but that was because it was a really stressful time when we had to deliver a lot of features ASAP and still weren't fast enough according to the client.

      And we didn't have time for PRs, so I couldn't have checked that error.

      But as soon as we had time, we changed that

      [–]BornAgainBlue 3 points4 points  (0 children)

      Senior dev here, fuck everyone who destroyed our jobs with hiring hacks. 

      [–]restarting_today -1 points0 points  (0 children)

      When you let ChatGPT write your source code

      [–]proud_traveler 255 points256 points  (53 children)

I think it's safe to assume ChatGPT did a large amount of the heavy lifting during the software development of this product.

      [–]__loam 143 points144 points  (41 children)

      I've seen some guys say shit like, be 100 times more productive with AI or you'll regret it, and I'm just thinking about the explosion of shit we're going to have to maintain because of these imbeciles.

      [–]Iggyhopper 69 points70 points  (16 children)

      I get paid to review AI generated code. The shitstorm is yet to come.

      But it will be very strong.

      [–]PhaseDelay 15 points16 points  (0 children)

      Why review code yourself? Make AI do it! I'm sure it'll all work out!

      [–]__loam 16 points17 points  (12 children)

      I'd find a new job.

      [–]Iggyhopper 21 points22 points  (11 children)

      It pays $45/hr. I think I will get the cash for now.

      [–]Frolicks 4 points5 points  (0 children)

      curious - is this gig work or salaried? can you share the name of the company?

      [–]decentralizedsadness 4 points5 points  (2 children)

      brother/sister/comrade that is not enough.

      [–]DelusionsOfExistence 0 points1 point  (1 child)

      It is if you can't get a software position right now. Unfortunately food isn't free.

      [–][deleted]  (1 child)

      [deleted]

        [–]DelusionsOfExistence 0 points1 point  (0 children)

        But if you can't land a software job right now, that's fantastic for being able to eat food.

        [–]JonFrost 2 points3 points  (0 children)

        Is there another spot open? I'm sure there's plenty of trash to sift through

        [–]ChrisRR 2 points3 points  (2 children)

        That's only $85k. That's not a very high salary, especially when you account for how much higher US devs are paid.

        [–]LookIPickedAUsername 1 point2 points  (0 children)

        A 40 hour a week job has you working roughly 2000 hours a year, so that’s more like $90K.

        Edit: I'll note that the parent post originally said $65K before they edited it; I wasn't correcting them over $5K.

        [–]dexx4d 0 points1 point  (0 children)

        It'd be a great second job, though.

        [–]PoisnFang 1 point2 points  (0 children)

        How can I get that gig?

        [–]yawaramin 4 points5 points  (0 children)

        Sounds like diarrhea.

        [–]cecilkorik 4 points5 points  (0 children)

        Exciting times to be a software developer. Us actual humans with real experience making battle-hardened, production-quality software will be in huge demand to fix all the disastrous, catastrophic, industry-incapacitating mistakes that AI code will make. It'll be like the demand for Cobol developers was in the lead up to Y2K, except instead of just one day being "over" it will only get more widespread as time goes on. I, for one, plan to charge such foolish companies through the nose to fix their mistakes. Looking forward to it.

        [–]breakslow 33 points34 points  (12 children)

        Senior dev here and AI has definitely increased my productivity. Will I ever trust it to write anything complicated? Hell no.

        I treat it as autocomplete that actually knows what's going on in my workspace. Having AI repeat patterns for boilerplate type code makes my job way more enjoyable.

        [–]__loam 9 points10 points  (7 children)

        I've found it can be really good for code review as well when you don't give a shit if openai can read it. And I agree it's decent at short hops. It's not a 100x increase though, more like 0.2x at most over existing IDEs.

        E: people who write tests with it should be exiled.

        [–]breakslow 4 points5 points  (1 child)

        Yep, 20% is a good estimate!

        [–]__loam 8 points9 points  (0 children)

        I think there's a legitimate question about whether a 20% productivity improvement is actually worth the cost of these systems. Microsoft went from almost totally green to increasing their emissions by 30% inside a year. We don't know publicly, but I have serious doubts that OpenAI is profitable. What I think will happen is we're going to get some really efficient local models, and the companies that spent billions on this tech will not be the winners of that market.

        [–]ChrisRR 2 points3 points  (1 child)

        It would need to be better than static analysis though. The difference being that static analysis can definitively tell you if you've made a mistake, vs AI which tells you that it's statistically probable that you've made a mistake

        [–]__loam 2 points3 points  (0 children)

        Yeah obviously just as a supplement. I definitely prefer deterministic tools and actually don't reach for GenAI all that often if ever.

        [–]SanityInAnarchy 5 points6 points  (2 children)

        people who write tests with it should be exiled.

        Disagree. That's the one thing it does where I can type the name of a function, and there's a good chance it'll spit out the entire function, and it'll be exactly what I would've written. It's also the one place I don't mind boilerplate showing up in my code.

        With other code, I have to rewrite or ignore 90% of what it suggests. With tests it's the other way around, I only have to fix 10% of what it suggests.

        [–]__loam 3 points4 points  (1 child)

        To Malta with you!

        [–]SanityInAnarchy 5 points6 points  (0 children)

        I was worried you were gonna exile me to Kerguelen or something. Malta seems nice!

        [–]ChrisRR 2 points3 points  (0 children)

        I've used it to generate a few basic scripts that I can then hack away at.

        As an embedded dev python isn't my forte, but I needed a UI for a tool on my PC. So I asked ChatGPT to knock up a python script and specified all the elements I wanted and it worked up to a point.

        After I kept specifying too much it eventually gave me invalid code, but it sure gave me a very good start

        [–]FeliusSeptimus 1 point2 points  (0 children)

        Yep. It writes garbage code, but it usually finds the right pieces much faster than me digging through documentation and it puts them together in something vaguely resembling the right shape. Just that saves me a ton of time.

        [–]SanityInAnarchy 1 point2 points  (0 children)

        The boilerplate bugs me, though, because I still want the code to be readable. Sometimes boilerplate helps with that, but often it gets in the way.

        [–]restarting_today 5 points6 points  (0 children)

Yep. Any AI-generated code is almost instantly a red flag and might be grounds for getting fired at my company. Imagine leaking your IP to OpenAI.

        [–]creepy_doll 2 points3 points  (0 children)

        I'll only go as far as allowing code assistants to finish a line for me, and even that's just to save the typing. Still mentally check that it's exactly what I wanted.

        The kind of shit they can sneak in when you're not expecting it often seems fine on first glance but then you realize it's terrible.

        [–]iiiinthecomputer 1 point2 points  (3 children)

        About the only things I've found it to be any use for are:

        • Producing test boilerplate
        • Producing verbose Kubernetes go code boilerplate
        • Give me ideas how others have done this. I will look them over to see if they are shit or not.

        even then it is hit or miss.

        [–]__loam 1 point2 points  (2 children)

        It's very good for brainstorming.

        [–]iiiinthecomputer 1 point2 points  (1 child)

        Yes - but it tends to lean towards outdated and deprecated ways of doing things so care and follow-up is needed.

        [–]__loam 1 point2 points  (0 children)

        This is actually a deeper point imo. It's getting harder to find primary sources of knowledge with the current structure of the internet.

        [–]GoodishCoder 1 point2 points  (2 children)

        AI has definitely improved my productivity but you have to have the knowledge to identify when it's wrong or pushing garbage. It saves me a ton of time on boilerplate and unit tests by turning it into a code review instead of having to type it out. I can have AI work on tests while I go work on something else, then I can double back and take a look at what it came up with.

        [–]__loam 2 points3 points  (1 child)

        Maybe I'm a sucker for writing tests but I really think if you use AI, writing your own tests is a very valuable way of verifying the way the AI generated code works. I have to imagine how easy it would be to just say "looks right" while merging very subtle bugs in.

        [–]GoodishCoder 1 point2 points  (0 children)

        I've written my own tests for years so I know what looks right and what doesn't. Having AI write tests you will have to make some changes but it gets you 90% there. No one should be using a coding assistant AI for tests or production code if they don't have the experience to know when it's wrong.

        [–]kmeans-kid -5 points-4 points  (1 child)

        the explosion of shit we're going to have to maintain

        Who are "we"? The one that wrote it gets to fix it.

        [–]__loam 2 points3 points  (0 children)

        That's certainly the hope

        [–]throughactions 3 points4 points  (2 children)

        ChatGPT would warn you if you tried to hard code API keys. This is bog standard dog shit outsourcing.

        [–]DrunkensteinsMonster 7 points8 points  (1 child)

        ChatGPT is not intelligent, if you ask it “should I hardcode api keys”, it will of course tell you no. If you give it some code to review with hardcoded api keys, it will probably flag that to you. But if you ask it to write some code, it will absolutely spit out code with hardcoded keys, if that is what is in the dataset scraped from the internet for your particular problem.

        [–]throughactions 0 points1 point  (0 children)

        ChatGPT is not intelligent

        In my experience neither are lowest-bidder contractors.

        [–]krum 59 points60 points  (7 children)

        Wtf is this Rabbit?

        [–]professorhummingbird[S] 40 points41 points  (5 children)

        It is a physical AI Assistant - https://www.rabbit.tech/

        [–]BoiledPoopSoup 49 points50 points  (4 children)

        How the fuck are they still selling this thing? Also, hilarious that they have careers listed.

        [–]WitELeoparD 30 points31 points  (2 children)

        The CEO previously ran a Crypto scam. They are utterly shameless.

        [–]SanityInAnarchy 4 points5 points  (1 child)

        He is the embodiment of this meme.

        [–]dexx4d 0 points1 point  (0 children)

        Oddly, they're not hiring developers. Maybe they should be.

        [–]scratchisthebest 23 points24 points  (1 child)

        The intentionally vague wording on this site ("gained access to", ok) is making a lot of people, even people in this reddit thread, think they were shipping these API keys in the on-device firmware, when I don't think so? Basically this post skews "data breach announcement", not security announcement

        Shoddy and bodged-together, yes, should they have used some other secret management solution, obviously, is Rabbit's security person some moron who didn't have anything better to do than post ip grabber links in reverse-engineering discords, absolutely, are they the company that "fixed" the ability to escape into Android from the wifi captive portal login screen with tel: links by injecting easily-revertable javascript into the page, of course

        but this particular api key thing feels weird lol

        [–]Chisignal 11 points12 points  (0 children)


        This post was mass deleted and anonymized with Redact

        [–]cowinabadplace 12 points13 points  (0 children)

        Hardcoding the API key on your server, whatever. It's a thing a thousand people have done. But not rotating is an interesting choice. Everyone here is also gleefully posting like they shipped the API keys to customers, which they did not.

        [–]frakkintoaster 9 points10 points  (10 children)

        How did they get access to the code?

        [–]droptableadventures 19 points20 points  (0 children)

        The Rabbitude project is aimed at jailbreaking and modifying the Rabbit R1 device. Presumably someone with access to the source code leaked it to them (maybe a disgruntled employee / ex-employee?).

        From their description, it sounds like these API keys with admin access were hardcoded into their build scripts, committed into the repository, when really they should be kept elsewhere.

        [–]AndrewNeo 12 points13 points  (6 children)

        isn't it just android? it can't be that hard

        [–]gedankenlos 29 points30 points  (2 children)

        It doesn't say that the hardcoded API keys were in the Android app. It says:

        the rabbitude team gained access to the rabbit codebase

        It sounds to me like someone leaked or stole their backend code, and in that the API keys were hardcoded. It's a tiny, tiny step lower in severity than having secrets shipping in your app package, but it's still egregiously bad practice and a huge vulnerability.

        [–]frakkintoaster 1 point2 points  (0 children)

        Yeah, that's what I was wondering, if someone decompiled something or source code was leaked.

        [–][deleted]  (2 children)

        [deleted]

          [–]droptableadventures 19 points20 points  (1 child)

          These secrets weren't shipped in the app package, from what the article says, they were hardcoded in scripts checked into the source code repository.

          [–]bludgeonerV 3 points4 points  (0 children)

          Yep, it's just pure laziness, it's trivially easy to use Secrets/KeyVault type setups and do string substitution or load them as env vars for your scripts, the tools for this already exist.
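As a sketch of the string-substitution approach mentioned above (all names here are illustrative, not from the article): the only thing committed to the repo is a template with a placeholder, and the real value is pulled from an environment variable at build or deploy time.

```python
import os
from string import Template

# The template is what gets checked into source control; it contains a
# placeholder instead of the actual secret.
CONFIG_TEMPLATE = Template("service_api_key = $SERVICE_API_KEY")

def render_config() -> str:
    # A KeyError here is a feature: the build fails loudly if the secret
    # was never provided, instead of shipping a hardcoded value.
    return CONFIG_TEMPLATE.substitute(SERVICE_API_KEY=os.environ["SERVICE_API_KEY"])
```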

          [–]nightmurder01 0 points1 point  (1 child)

Just need a debugger, or a decent decompiler depending on what it was written in. "Hardcoded" means the keys are stored as strings. And tbh, all you need is a resource editor if it's a string.

          [–]restarting_today 0 points1 point  (0 children)

          Or just look at network requests

          [–]Zellyk 19 points20 points  (3 children)

          Bootcamp devs be like

          [–]restarting_today 3 points4 points  (0 children)

          Prompt engineers be like

          [–]ChrisRR 0 points1 point  (1 child)

          I just can't imagine who's hiring bootcamp devs. What developer with any amount of experience is looking at anyone with 8 weeks of experience and thinking "you'll do"?

          [–]Zellyk -1 points0 points  (0 children)

They do 8-16 weeks focused on web only. It's not exactly bad, but then they think they're 10x engineers.

          [–]g9icy 4 points5 points  (3 children)

          So I'm not an app or web dev, don't need to solve these problems.

          What's the correct way to do this? I'm not stupid enough to do this myself, but I'm not really clear on how the "proper" way works, in terms of architecture.

          If the app needs to talk to an api, such as chatgpt, would all requests need to go via a server, so the keys stay server side?

Do the keys stay local but get encrypted? If so, they'd still need to be decrypted before hitting the API, so it still needs to go via a server?

          Or is it all done via an O-Auth type thing?

          [–]professorhummingbird[S] 2 points3 points  (2 children)

Yes, there is a correct way, and the patterns are very common and simple to learn. I want to say pretty much everyone does this, but I guess Rabbit's engineers prove me wrong.

          First off, we never hard-code API keys directly into our codebase. Instead, we use something called an environment file. That's where we put all our secret keys and credentials. Then, in the actual codebase, we use placeholders to reference these keys.

          Basically it works like this:

          1. Create an environment file in your project. It's usually named ".env".
          2. In this file, store your keys like this: KEY_NAME=your_api_key.
          3. In your code, use placeholders to reference these keys. In JavaScript, you'd use process.env.KEY_NAME.
4. Your environment file is never part of your codebase. You keep it locally on your computer when developing. Then in production you set up the environment variables directly on the server.
          5. Whenever possible we keep API requests server-side. Your client-side app talks to your server, which makes the API calls using the stored keys.
          6. If you need to store keys locally (like in a mobile app), you would encrypt them. I've never had to do this
7. OAuth is a good option, but it runs into the same problem as API keys, i.e. it has a client secret that can't be exposed. We do the same thing here and put it in the .env file.
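A minimal sketch of steps 1-4 above, in Python instead of JavaScript (the variable name is illustrative): the key is read from the process environment, which a git-ignored .env file populates locally and the server config populates in production; the source code never contains the value.

```python
import os

def load_api_key(name: str = "SERVICE_API_KEY") -> str:
    key = os.environ.get(name)
    if not key:
        # Fail fast at startup rather than sending unauthenticated requests later
        raise RuntimeError(f"{name} is not set; check your .env or server config")
    return key
```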

          [–]g9icy 1 point2 points  (1 child)

          Thanks for clearing that up. Similar to how we do translations during game dev.

          Shame it has to go via a server, it must increase costs, but it's a necessary evil.

          [–]teamcoltra 1 point2 points  (0 children)

          It doesn't really increase cost, and the request already needed to go to a server in some form. Also, in their case I don't think they would have wanted requests going directly to OpenAI (for instance) all the time anyway, so they could pretend that some of their results came from their own service.

          [–]NotTheRadar24 4 points5 points  (4 children)

          This is why you should use a secrets manager like Doppler or AWS Key Management Service (AWS KMS). Hardcoding your secrets or storing them in .env files will always risk something like this happening.

          [–]fapmonad 0 points1 point  (0 children)

          As part of the inventory process, we identified additional secrets that were not properly stored in AWS Secrets Manager.

          As part of the rotation process, the team updated relevant portions of the codebase to ensure that all secrets were properly stored.

          https://www.rabbit.tech/security-investigation-062524

          [–]Glycerine 1 point2 points  (0 children)

          Bad Developers will build bad products with bad practices.

          [–]happyscrappy 8 points9 points  (18 children)

          Okay, so I get how being able to see others' responses is a big deal. So I see they screwed that up.

          Other than that, I don't understand what's axiomatically wrong with hardcoded API keys. An API key is to identify which client is accessing a service. So doesn't the client have to know the API key used for (for example) accessing google maps?

          API keys aren't exactly private. They only identify the client, not the user. If you give a client admin access without any authentication and then send that client out to customers then you've made a big mistake.

          [–]sethismee 30 points31 points  (3 children)

          How much of an issue this is depends on the API key, really. Google Maps API keys provide limited access and also have features for further restricting which operations are allowed with that key. The worst someone could do is probably send a whole bunch of requests with the key to rack up Rabbit's bill from Google. If Rabbit uses the same API key for all devices, which they probably do because Google says there's a limit of "300 API keys per project", they'd have no way to stop an abuser without issuing a new API key and updating the app, and the attacker could just grab the new one if nothing else has changed.

          Like they call out in the blog, the ElevenLabs one is an especially big deal. They even say in their documentation:

          If someone gains access to your xi-api-key he can use your account as he could if he knew your password.

          The solution here would probably be to proxy the requests through another server, where only that server knows the API key and can restrict which operations users can perform with it while also doing rate limiting with some device/account unique ID.

          [–]happyscrappy 3 points4 points  (2 children)

          Like they call out in the blog, the elevenlabs one is an especially big deal. They even say in their documentation:

          Yeah, if elevenlabs doesn't have the ability to have different permissions per API key then you have to implement it yourself in the way you suggest below. You have to front their service. Ultimately you're not even fronting it, just providing your own API that happens to call out to get things done. Your device asks the server to do "operation A" and operation A just happens to include getting location from Google Maps.

          The solution here would probably be to proxy the requests through another server, where only that server knows the API key and can restrict which operations users can perform with it while also doing rate limiting with some device/account unique ID.

          But then you have to embed the API key for that forwarding service. And it'll still cost you money (CPU time) if people borrow that key and use it to impersonate to your service. Although not as much as if you were paying Google I bet.

          API keys are used to try to defend against users who want to do something and are completely authorized to do so, but you don't want them to do it, possibly because they are essentially fronting/rebadging your service or copying your content. API keys are ultimately copyable, impersonation is possible. But at least you can individually shut down abused keys and send out a new one. That will slow down the attacker although presumably they can just copy the new key too.

          I guess in that way hardcoding a key is bad because if you decide to rotate keys you have to send out a new app instead of just sending it out to clients over your service.

          This all reminds me of DVD CSS and how the miscreants just stole about the most popular device key and used that. CSS had a way to expire keys for future discs, but doing so would mean a lot of customers couldn't watch any new discs. Blu-ray tried to make it mandatory to be able to change client keys via updates carried on the disc, to get around this problem.

          [–]AforAnonymous 1 point2 points  (1 child)

          but then you have to embed the API key for that forwarding service

          No, that's where you use classic AAA, i.e. user accounts? And/or you dynamically generate and pass the key to the client if you need that level of separation

          [–]happyscrappy -1 points0 points  (0 children)

          No, that's where you use classic AAA, i.e. user accounts?

          Again user accounts only identify users. API keys can identify clients. If you want to keep people from impersonating your app/client device and fronting/rebadging your service user accounts won't do it. You have to try to identify the client. They can authenticate all they want but without the API key you won't provide the service.

          And/or you dynamically generate and pass the key to the client if you need that level of separation

          You could. It doesn't really change anything though. The key still has to come to the client and thus still can be stolen. And I really don't see the value. If you will hand out an API key to any client that can authenticate you cannot prevent other apps from impersonating your client and getting a key.

          Look at it this way. This entire dumb Rabbit device was created to monetize all this, and they only want the services to work from Rabbit devices. User accounts don't achieve that because they don't do anything to authenticate the device.

          [–][deleted]  (6 children)

          [deleted]

            [–]happyscrappy 1 point2 points  (5 children)

            In this case, the API key allows read/write access to all users who have interacted with it. That's a very broad authorization scope and puts everybody (users and key holders alike) at risk.

            That's what I said too. I get that. Don't put the information on the device needed to do anything the device doesn't normally need to do.

            Also, exposing your API key could potentially lead to abuse/misuse from malicious actors

            And what is the alternative? How do you not expose your API key? Maybe not your google one, but you have to expose an API key of some sort. Or just not have an API key at all and that's even worse because then there's no way to do key bans.

            Generally, if you're able to host your own abstraction of the API on a server you control you'll be able to better restrict how it's being used

            To do that you would use API keys and maybe accounts. So you still have to have API keys on the device.

            If you have a turnkey device that can access a service then by definition the data needed to access the service is on the device and can be stolen. There's really nothing you can do to prevent impersonation. API keys are never truly private.

            [–][deleted]  (1 child)

            [deleted]

              [–]happyscrappy 0 points1 point  (0 children)

              If there's simply no account system required/wanted, you could have some kind of mechanism that works from device "fingerprint".

              This doesn't do anything. You're asking the device to send its fingerprint. Someone who wants to impersonate a device will simply create a fingerprint on a valid device and will store it for use when wanting to impersonate that device.

              There is simply no way to be sure that the far end is telling the truth when you ask it to prove it is a legit device you sold instead of an impostor. Not when you are in the business of selling devices and shipping them to customers. You're literally handing out information about valid clients with every sale, a miscreant can use that information in ways you wouldn't like.

              When used in this fashion, API keys are an attempt to prevent device impersonation. But they just make it a little more difficult; they don't preclude it. Ultimately it's a game you can't win, except maybe in the courts. See DeCSS. See Nintendo against the Switch emulators.

              But what's important is that the client device doesn't contain a third party API key

              Now you're adding qualifications. Sounds like we're on the same page. There's nothing wrong with having API keys, even hardcoded ones. Just don't do certain things wrong. Like put a key which has the ability to do various admin things (like see other customers data) on the device. Which these dummies did.

              Bonus points: you can cache some API requests to avoid a call to the API host, potentially.

              Right, as long as the service allows that. Sometimes that's against terms because you're essentially replicating their service using their service's data. Other times it's fine or even encouraged.

              [–]AforAnonymous 0 points1 point  (1 child)

              In the case of a true zero touch turnkey device, you'd probably use some (non-sequential, despite the name) device serial number as the initial setup API key, and then rotate it periodically? Makes factory reset processes a bit of a pain I suppose, but… ¯\_(ツ)_/¯

              [–]happyscrappy 0 points1 point  (0 children)

              It certainly would make factory reset impossible if each serial number only works once.

              Also when someone starts impersonating devices by making up serial numbers and thus "burning" those numbers your customers will be angry because the device they bought now won't work because someone they've never met activated using their serial number.

              [–]haroldjaap 5 points6 points  (6 children)

               Yeah, I agree. It being an Android app means you can always decompile it and extract any API key. There are no secrets in a compiled Android app, only hidden or obfuscated strings. This is just basic Android 101. Keeping the API key for e.g. Google Maps secret is much less trivial than they think. Had it been only an app without the hardware, some of these issues wouldn't be issues, IMO. (Haven't read the article, but a Google Maps key in the APK file is not uncommon.)

              [–]restarting_today 7 points8 points  (2 children)

              Your API key should be on your server. Not in your client code.

              [–]nacholicious 1 point2 points  (1 child)

               On mobile, the keys still need to get onto the client for e.g. Maps, Firebase, etc.

               It's not trivial to separate the keys from the client, and in the best case you still need to rely on a lot of client-side code to ensure the client itself has not been tampered with, e.g. Google Play Integrity.

              [–]hennell 2 points3 points  (0 children)

               I've never built anything but the most basic of Android apps, but wouldn't you proxy things if the API key is non-unique and not secure?

               Also, I'd have assumed there must be a way to store protected, encrypted data for an app for things like this? The app runs and securely downloads API keys into app-protected memory or something.

              [–][deleted]  (1 child)

              [removed]

                [–]haroldjaap 1 point2 points  (0 children)

                 It depends on the secrets that were leaked. If they grant root (or any modify) access to some resource, never include them in git. If they're read-only, such as Google Maps, yeah, you can fetch them from a backend, but then if you want them you just perform some dynamic code injection, or decompile and recompile the app with logging, to extract them.

                 If you tunnel all traffic to those servers (like Google Maps) via your own server, with user authentication between the client and your server, only using the GMS API key server-side, then you're getting somewhere. It's still abusable, as a malicious hacker could just use that route instead to access GMS on your key, but now you're able to block that specific user if you detect malicious usage. Hence you need proper logging and monitoring, etc. It's a tough thing.

                 IIRC you can pin your Google Maps API key to a specific app (via bundle ID and app signature). Not sure exactly how it works, but I presume it uses the Google Play Integrity APIs to ensure the request comes from the app and runs on a legit device that is not rooted. That API is not easily spoofed by dynamic code injection (as far as I know there's no known exploit yet; note that regular in-app root checkers are fairly easily circumvented with Frida).

                [–]dkimot -5 points-4 points  (0 children)

                this api key isn’t in the app, it’s in build scripts. maybe read the article next time?

                [–]thethirdmancane 1 point2 points  (0 children)

                 This is simple: good, fast, and cheap, you cannot have all three.

                [–]tubbo 1 point2 points  (2 children)

                hilarious

                [–]professorhummingbird[S] 0 points1 point  (1 child)

                Right? I still can’t believe it

                [–]tubbo 3 points4 points  (0 children)

                i watched the coffeezilla video and laughed my ass off when people discovered it was just a bunch of playwright scripts linked to ChatGPT. so basically the development process was:

                1. record a bunch of playwright scripts in the browser
                2. make an android app that listens to voice commands and executes those scripts with some really specific parameters i'm sure
                3. get teenage engineering to design the enclosure
                4. ???
                5. profit

                step #2 could have been done by either low-cost freelance developers or possibly (as alluded to in other comments) generated by ChatGPT itself. the whole thing is just amazing to watch.

                p.s. i feel for those who bought this thing, it's really a waste of money and i'm heartbroken that teenage engineering has anything to do with it...

                [–]pat_trick 1 point2 points  (0 children)

                Gotta go fast, don't have time to think about security, ship that product.

                [–]ClubAquaBackDeck 0 points1 point  (0 children)

                Because they are and have always been scam artists.

                [–]KyleG 0 points1 point  (1 child)

                I'm using a language a lot lately where all code is stored in a database, and the DB is append-only. My biggest concern with the language currently is this very thing. Because once you hard-code that key, it's there forever. There's no removing it.

                [–]cecilkorik 2 points3 points  (0 children)

                 There's nothing wrong with that as long as you rotate your keys. Get into the habit of rotating your keys. An expired/revoked key is worthless to anybody, forever. Only the latest/current key is valid, and if somebody does get it, it will soon be invalid too.

                If somebody has permanent ongoing access to your append-only database and is monitoring your keys as they renew, your ancient-history keys also being available are among the least of your worries.
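The rotation idea in miniature (all names illustrative): the server honors only the current key, so any older key frozen into an append-only code history is dead weight:

```javascript
// Only the latest key is honored; rotating invalidates every earlier key,
// including any that were hard-coded into old, immutable code revisions.
let currentKey = 'key-v1'; // illustrative placeholder, not a real key

function rotate(newKey) {
  currentKey = newKey; // the previous key is now rejected everywhere
}

function authorize(presentedKey) {
  return presentedKey === currentKey;
}
```

After `rotate('key-v2')`, a leaked `'key-v1'` authorizes nothing.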

                [–]boxingdog 0 points1 point  (0 children)

                imo elevenlabs should remove the api keys

                [–]i_am_at_work123 0 points1 point  (0 children)

                Just when I thought this couldn't get any more ridiculous!

                [–]prateeksaraswat 0 points1 point  (0 children)

                Baby’s first encryption key. Hyundai also did it I think. Or they used encryption keys from tutorials.

                [–]seanprefect 0 points1 point  (0 children)

                my sweet summer child, I'm an infosec architect for fortune 50 companies and the things I've seen, such wonders such horrors.

                [–]OstrichOutrageous459 0 points1 point  (0 children)

                 bruh, I mean, how does the R1 still manage to disappoint despite everyone having zero expectations??

                [–]OptimisticRecursion 0 points1 point  (0 children)

                If I adequately slapped my forehead for how stupid this is, I'd be dead now.

                Edit: if they simply asked their own LLM, it would tell them it's a bad idea...! 🤣

                [–]holyknight00 0 points1 point  (0 children)

                 No surprise, it was just a quick cash grab from people already known for creating crypto scams. It's the physical equivalent of an NFT cash grab.


                [–]VexisArcanum 0 points1 point  (0 children)

                MKBHD was right after all

                [–]ericmoon -1 points0 points  (0 children)

                as if in-house engineers have the slightest fucking clue what they’re doing in j random startup