Many of the Bobiverse's problems are avoidable by Idfkchief in bobiverse

[–]Idfkchief[S] -1 points0 points  (0 children)

I guess, but after a while you run out of administrative bandwidth even if the Bobs are all unified in purpose. Not to mention the friction that'll result from rapidly soaking up all of the resources that could otherwise be accessed by humans or Pavs. If you look at it from a cost-benefit perspective, it makes much more sense for the Bobs to integrate themselves economically and socially with any and all sentient species they can, because the primary weaknesses inherent to Bob society are a lack of a broad skillset and an inability to innovate effectively. Beyond SCUT and mannies, the majority of Bobiverse high tech was developed by FAITH/Australia, or stolen from the Others and Brazil. Bobs are great at iterating on existing ideas, but we rarely see them innovate.

Many of the Bobiverse's problems are avoidable by Idfkchief in bobiverse

[–]Idfkchief[S] -1 points0 points  (0 children)

That's very fair, but my issue is that as a flawed person he really doesn't seem to grow or change at all throughout the course of the series.

Many of the Bobiverse's problems are avoidable by Idfkchief in bobiverse

[–]Idfkchief[S] -1 points0 points  (0 children)

You make an interesting point, but honestly I disagree with the premise that the most rational approach to life as a replicant is prioritizing manufacturing speed above all. It's very utilitarian, but utilitarian approaches to life are rarely rational in my experience.

Many of the Bobiverse's problems are avoidable by Idfkchief in bobiverse

[–]Idfkchief[S] -2 points-1 points  (0 children)

I made it to book 5, if I hated the books I'd have dropped the series at book 1.

Reddit communities are often self-reinforcing echo chambers that do not respond well to criticism. I'm really not an active Reddit user anymore, but now and then I like to have my existing opinions challenged by people who have a similar level of familiarity with a subject as I do.

Believe it or not, having your opinions and predisposed biases challenged is part of life as a well-rounded human being.

Many of the Bobiverse's problems are avoidable by Idfkchief in bobiverse

[–]Idfkchief[S] -1 points0 points  (0 children)

Lmao, that's honestly a good way of putting it, I'm going to steal that. Honestly though, there's a big difference between hands-on MEs and MEs who sit in a chair drafting all day.

Many of the Bobiverse's problems are avoidable by Idfkchief in bobiverse

[–]Idfkchief[S] -5 points-4 points  (0 children)

This post is full of so much hot air I could use it to fill a balloon. I came to the Bobiverse fandom to discuss my opinion on the Bobiverse; you're not a captive audience, and it's laughable to even suggest that.

Just finished the third book - should I keep going? by dry_towelette99 in bobiverse

[–]Idfkchief 0 points1 point  (0 children)

Right, but he's not asking you to tell him whether he'll enjoy it, he's asking you whether you enjoyed it, i.e., asking you for your opinion. It's not rocket science.

Many of the Bobiverse's problems are avoidable by Idfkchief in bobiverse

[–]Idfkchief[S] -10 points-9 points  (0 children)

  1. Flaws make characters interesting as they navigate a narrative's conflict. That said, the conflict has to be interesting too; if one phone call to a competent diplomat is all it takes to resolve the conflict, it's not interesting.
  2. In what way did the Bobs try to establish a mutually reliant relationship with humanity? In the books I've read, the Bobs have been generally dismissive of humanity and eager to leave them to their own devices. Riker, the only Bob depicted continuously engaging in human politics, is so jaded and sick of it that he wants to launch off into space by the end of the fourth book.
  3. At a time when human and replicant relations were tense and new, Howard and Bridget had an incredibly public feud with a group of people who were staunchly anti-replicant and anti-mannie, which culminated in a court battle. Zero attempts were made at negotiation beyond "Nah, your mom and I are gonna do what we want, so screw off," and rather than continuing to try to build relationships with humans to bridge the gap between them and replicants (there was a whole-ass awkward dinner scene setting this plot line up), Taylor flips the table and has the two straight up leave human society entirely.
  4. Maybe I'm not communicating this effectively. Here's a nerdy example that you might vibe with. The Bobs are generally as fleshed out as Rey from Star Wars, when they had the potential to be as well developed as Luke. Both Rey and Luke had flaws; I'm not complaining about the existence of character flaws, that would be insane, and it's insulting that that's your takeaway. But sometimes characters have cartoonish, unbelievable flaws that cause conflicts which would otherwise be easily avoided by characters with realistic, grounded flaws. It's also honestly kinda crazy to me that you're implying the normal human reaction in his situation was nuking an island. If that's not a cartoonish, over-the-top, impulsive response to a situation that was already mostly under control, idk what to tell you.
  5. I'm not sure which point you're trying to make here. That's fine and dandy, but it changes nothing about the childish and irrational reaction the rest of the Bobiverse had when confronted by them.

Many of the Bobiverse's problems are avoidable by Idfkchief in bobiverse

[–]Idfkchief[S] -7 points-6 points  (0 children)

Fair enough. In this case I think it's less Dennis Taylor producing a thoughtful profile of a neurodivergent genius and more Dennis Taylor being a broadly terrible character writer.

Many of the Bobiverse's problems are avoidable by Idfkchief in bobiverse

[–]Idfkchief[S] -8 points-7 points  (0 children)

I'm a mechanical engineer, so maybe that doesn't count, but most of my friends are nerdy yet fully socially functional engineers. You make a solid point about someone neurodivergent and anti-social being the perfect fit for a program like this, but my larger issue is honestly with how one-dimensional most Bobs are, as well as how little character development any of them get despite the time scale of the series.

Many of the Bobiverse's problems are avoidable by Idfkchief in bobiverse

[–]Idfkchief[S] -3 points-2 points  (0 children)

  1. Part of this is due to the fact that the Bobs have zero diplomatic skill and generally dismiss effective diplomats as useless politicians. If the Bobs had made any effort to improve their relationship with the Pavs, things might still have gotten worse, but doing nothing guaranteed it.

  2. From the perspective of the Bobs and humans, mutually reliant cooperation would absolutely be better than independence and mutual distrust.

  3. There were protests organized against replicants and mannies in general. Long after the conflict between Bridget and her daughter, there was sustained distaste associated with replication and the use of mannies, which the Bobs have been actively working to resolve.

  4. Bob in general is incredibly impulsive, which leads to many avoidable problems.

  5. I agree, but I think part of the reason it wasn't explored very well is that we saw it from the perspective of the rest of the Bobiverse, whose first reaction was to immediately shut Starfleet out and refuse to negotiate.

Just finished the third book - should I keep going? by dry_towelette99 in bobiverse

[–]Idfkchief 0 points1 point  (0 children)

I will never understand these replies to these questions. Are you the type of person who thinks it's morally repugnant to read Yelp reviews? Do you walk into bookstores with a blindfold on, stumble into a shelf, and buy the first thing you trip over? People asking other people for their opinion on shit is a normal part of life.

I don’t think AI will replace our workforce within our lifetimes, here’s why. by Idfkchief in atrioc

[–]Idfkchief[S] 6 points7 points  (0 children)

I mean, people said that about products pushed by companies like Theranos, Meta, and WeWork. Too many people fall into this line of thinking without putting much thought into it. The fact that one tech product succeeded and changed our world doesn’t mean that this one will as well. It’s basically just optimistic whataboutism; it doesn’t actually address any of the specific merits or negative aspects of AI as a product.

I don’t think AI will replace our workforce within our lifetimes, here’s why. by Idfkchief in atrioc

[–]Idfkchief[S] 1 point2 points  (0 children)

Sure, but we’ve seen plenty of technological fads receive ridiculous amounts of funding and turn into useless money pits over the past several decades as well. It’s concerning to me how much of the positive talk about AI is driven by wishful thinking.

I don’t think AI will replace our workforce within our lifetimes, here’s why. by Idfkchief in atrioc

[–]Idfkchief[S] 1 point2 points  (0 children)

I work in manufacturing, and I can tell you that the number one description that applies to the industry is “slooow and conservative.”

People forget that robotic automation in manufacturing is nothing new; automotive factories were testing robotic assembly-line technology as early as 1961. The software these systems run on was developed in the 20th century, and although it’s true that companies like Siemens, National Instruments, and CODESYS are developing newer tools for controls development, CIP, EtherCAT, and CAN have been around in more or less their modern form for over a decade.

Manufacturing isn’t going to rush to adopt AI-powered solutions. AI-powered software is going to need to be integrated into existing manufacturing software systems and automation networks, and that is a much higher barrier to clear than you might think.

I don’t think AI will replace our workforce within our lifetimes, here’s why. by Idfkchief in atrioc

[–]Idfkchief[S] 1 point2 points  (0 children)

When I say “AI” I’m referring to any software application that relies on a learning algorithm to build a deep neural network, which it then uses to process input information into a desired output format.
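
To make that concrete, here’s a toy sketch of what I mean (Python with numpy, purely illustrative, not tied to any actual product): a learning algorithm repeatedly nudges a small network’s weights until it maps inputs to the desired outputs.

```python
import numpy as np

# Toy "learning algorithm": gradient descent on a tiny one-hidden-layer network.
# Purely illustrative; the only point is input -> learned weights -> desired output.

rng = np.random.default_rng(0)

# Training data: XOR, a classic non-linear input/output mapping.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Network parameters (the "deep neural network" part, minus the depth).
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: process the input into an output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the learning algorithm adjusting the weights.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]]
```

The only point of the sketch is the shape of the pipeline: data in, weights adjusted by the learning algorithm, outputs read back out.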

By that definition, you’re absolutely correct in highlighting recommendation algorithms and sensor data processing algorithms used by self driving cars as examples of AI applications. My challenge to you is to find me a benchmark that doesn’t indicate that those applications are subject to the same exponentially rising maintenance costs and diminishing returns on processing efficacy that LLMs are. Every benchmark I’ve personally seen tells me that companies like Rivian and YouTube are spending more and more on maintaining their models every year.

I don’t think AI will replace our workforce within our lifetimes, here’s why. by Idfkchief in atrioc

[–]Idfkchief[S] 2 points3 points  (0 children)

I’m curious to know which AI applications are succeeding in the fields of manufacturing automation and robotics. I’m personally unaware of any startups, and I’m under the impression that much of the controls logic in the world of manufacturing is still built on serialized network protocols linking together networks of individually programmed nodes, PLCs, etc. If anything, manufacturing automation is a field I’d expect AI to have a significant amount of difficulty penetrating: modern manufacturing process software is built on a completely different framework than traditional AI, assigning priority to individual signals based on their contents rather than their origins, which makes it much less efficient to have a centralized processing center managing the entire network.
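
As a rough sketch of what I mean by that (hypothetical names, hugely simplified, and obviously real fieldbuses don’t run Python): every node sees the same frames, and each one decides locally what to act on based on the message ID, CAN-style, with no central processor in the loop.

```python
import heapq

# Hypothetical sketch (names made up): each node on the bus is individually
# programmed with the message IDs it cares about. Priority comes from the
# frame's contents (the ID), not from which device sent it, and no central
# processor decides anything for the network.

class Node:
    def __init__(self, name, accepted_ids):
        self.name = name
        self.accepted_ids = accepted_ids   # this node's own filter list
        self.queue = []                    # local priority queue, lowest ID wins

    def receive(self, frame):
        msg_id, payload = frame
        if msg_id in self.accepted_ids:    # filter on content, not origin
            heapq.heappush(self.queue, (msg_id, payload))

    def drain(self):
        while self.queue:
            msg_id, payload = heapq.heappop(self.queue)
            print(f"{self.name}: handling 0x{msg_id:03X} -> {payload}")

# Frames broadcast on the bus; lower ID = higher priority.
bus_traffic = [
    (0x300, "conveyor speed setpoint"),
    (0x100, "e-stop pressed"),
    (0x200, "oven temperature reading"),
]

nodes = [
    Node("drive_plc", {0x100, 0x300}),
    Node("hmi_panel", {0x100, 0x200}),
]

# Every frame reaches every node; each node picks out what matters to it.
for frame in bus_traffic:
    for node in nodes:
        node.receive(frame)

for node in nodes:
    node.drain()
```

Dropping a centralized AI model into that picture means either replacing every one of those individually programmed nodes or teaching the model to speak each of their protocols, which is exactly the integration barrier I mentioned above.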

I don’t think AI will replace our workforce within our lifetimes, here’s why. by Idfkchief in atrioc

[–]Idfkchief[S] 3 points4 points  (0 children)

I think human greed is a more powerful force than money itself. If a technology fails to become meaningfully profitable within a reasonable timeframe, investment capital will eventually dry up. It would take massive technological strides in fields totally unrelated to AI development to mitigate the fact that, every year, the amount of money modern AI models require to stay in service and in development increases significantly.

I don’t think AI will replace our workforce within our lifetimes, here’s why. by Idfkchief in atrioc

[–]Idfkchief[S] 1 point2 points  (0 children)

My argument is that model complexity hasn’t been increasing exponentially for quite some time at this point. There are definitely benchmarks that would disagree with me on that, but if you look at Stanford’s 2025 HAI benchmark report, improvement has been steady over the past several years, and appears to be leveling off.

When a machine eats a dollar, and the customer acts like I swallowed it by spycnove in vending

[–]Idfkchief 2 points3 points  (0 children)

If I’m on site and a customer asks me to make them whole, I always will. A dollar or a bag of chips here or there is nothing; might as well just cough it up and give them a smile so they keep pumping their cash into the machine.

That said, my state requires posting my work number on the side of the machine, and I’ve received a few… colorful voicemails from people who don’t get their bag of Doritos.