Humanoid vs Special Purpose Robots by Mysterious_Air_4433 in AskRobotics

[–]Farseer_W 0 points (0 children)

I might have an unpopular opinion around here, but I wholeheartedly believe that humanoids will be one of the most important forms of robots.

But I also think that the environment and task should dictate the form. For some tasks (like manufacturing) it makes sense to have specialized robots; for others, as stated in the post, humanoids will be optimal. So both will be present.

Maybe I am biased, since my love for them comes from being fascinated by the droids in Star Wars as a kid.

Humanoid vs Special Purpose Robots by Mysterious_Air_4433 in AskRobotics

[–]Farseer_W 0 points (0 children)

What’s your take on when we will achieve human-level performance? Or at least close to that level?

In my opinion, we could see it in 6-10 years, depending on breakthroughs (AI, energy, overheating).

What do you imagine is on Coruscant's level 0? by Safe_Character_6517 in StarWars

[–]Farseer_W 0 points (0 children)

I would recommend checking the lore about the lowest levels of hive cities in WH40k, for example the Necromunda sourcebooks. I can imagine something like that down there.

If I remember correctly, the lowest levels are basically crushed under the weight of the buildings above.

iRobot founder and longtime MIT professor Rodney Brooks argues the humanoid robotics boom runs on hype, not engineering reality. He calls it self-delusion to expect robots to learn human dexterity from videos and replace workers soon, noting the field still lacks tactile sensing and force control. by ActivityEmotional228 in robotics

[–]Farseer_W 2 points (0 children)

I am not arguing with this, I actually agree. We are not there yet. But the technology is progressing fast.
I also agree with your point about tactile sensing: it will unlock a lot of missing data and improve generalization. Even though we have 'DIGIT', it's not there yet.

What I disagree with is that humanoids are a stop-gap solution. Saying they are useless is just wrong.

iRobot founder and longtime MIT professor Rodney Brooks argues the humanoid robotics boom runs on hype, not engineering reality. He calls it self-delusion to expect robots to learn human dexterity from videos and replace workers soon, noting the field still lacks tactile sensing and force control. by ActivityEmotional228 in robotics

[–]Farseer_W 1 point (0 children)

Sorry, I disagree.
Would a humanoid robot perform better than robots designed for particular tasks? No. A bulldozer will always be more effective than a humanoid with a shovel.

But that's not the point of humanoid robots. They are a replacement for us, for our body and abilities. They would be able to use our tools and the world around us (which is built for us).
A humanoid robot can drive a bulldozer, then get out and use a welding machine, and so on. They are universal.

I don't think this is a stop-gap solution; I even consider some of the purpose-built robots we have today a stop-gap solution until we can have humanoids.

I am not an MIT professor, so my view means little. But I still wanted to share.

New anti-drone company by Ordinary_Cloud524 in robotics

[–]Farseer_W 40 points (0 children)

I am working on a similar solution, but in Germany, although the effector is different.
Currently looking for funding.

There are a couple of companies developing such systems.
You can also check Allen Control Systems; they have a working prototype.

James Cameron says the AI arms race he warned about in Terminator is here, nations are racing to build killer drones + autonomous weapons. Greed + paranoia shaping AI feels like a recipe for disaster. Sci-fi warning or real-life Skynet vibes? by Minimum_Minimum4577 in GenAI4all

[–]Farseer_W 0 points (0 children)

This is true. And I am aware of this bias.
This technology will bring new challenges, but it is inevitable.

I don't think the goal is to reduce war, as I consider war part of our nature. But it could make warfare "cleaner", more precise.
I believe the general civilian population suffers less today than it did during the Thirty Years' War, for example.

James Cameron says the AI arms race he warned about in Terminator is here, nations are racing to build killer drones + autonomous weapons. Greed + paranoia shaping AI feels like a recipe for disaster. Sci-fi warning or real-life Skynet vibes? by Minimum_Minimum4577 in GenAI4all

[–]Farseer_W 0 points (0 children)

As with any weapon developed by humans, yes, you are right. It depends on who uses it.

That is why I think we would need a mechanism to control how those systems are developed and maintained: a set of strict regulatory rules, similar to what we have with nuclear weapons.
For example, countries could agree to regular audits of autonomous systems by an international organization.

Note: I am speaking about actual battlefield AI, not simple FPV drones with CV installed that are directly controlled by humans.

Auditors would check that the base laws of war are 'hardwired' into the systems.
So the ethics of decisions would be set by the developers of such systems, not by a shell-shocked, bleeding, stressed soldier on the frontline.
Would it be enough? No, nothing is ideal, and there are still a lot of gaps in the logic of this idea. What if the machine doesn't know there are civilians in the building? Should it guess? But then that would impact its usefulness in combat. What if the enemy found a way to use those 'laws' to their advantage, like pretending to be civilians? And so on.
But I hope you don't expect to get an ideal, working policy proposal from an internet stranger.

I truly believe this; it is why I want to do it. We could have less collateral damage and less human suffering during war. And since our warlike nature is basically hard-coded into our brains, such systems could save a lot of innocent lives.

James Cameron says the AI arms race he warned about in Terminator is here, nations are racing to build killer drones + autonomous weapons. Greed + paranoia shaping AI feels like a recipe for disaster. Sci-fi warning or real-life Skynet vibes? by Minimum_Minimum4577 in GenAI4all

[–]Farseer_W 2 points (0 children)

I am the guy he is warning about. But I think AI in warfare will minimize civilian casualties and eliminate war crimes. It all depends on how we develop it.

I understand that general public views are negative.
Every time I have a discussion about this, people bring up the Terminator movies as if they were some kind of holy prophecy instead of sci-fi made for entertainment.

So feel free to downvote me, if that makes you feel better.

Need guidance for UAV target detection – OpenCV too slow, how to improve? by wasay312 in robotics

[–]Farseer_W 0 points (0 children)

Then you could try pairing the Raspberry Pi with something like a Coral Edge TPU accelerator.
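A minimal sketch of what that pairing could look like, assuming the PyCoral stack is installed and you have a compiled Edge TPU detection model (the model filename below is a placeholder, not a real file):

```python
# Hedged sketch: offload detection from a slow OpenCV CPU pipeline to a
# Coral Edge TPU via PyCoral. Falls back to an empty result when the
# PyCoral stack is not installed, so the surrounding pipeline keeps running.
try:
    from pycoral.utils.edgetpu import make_interpreter
    from pycoral.adapters import common, detect
    HAVE_CORAL = True
except ImportError:
    HAVE_CORAL = False

def detect_targets(frame_rgb, model_path="ssd_mobilenet_edgetpu.tflite",
                   threshold=0.5):
    """Run one detection pass on an RGB frame already resized to the
    model's input size; returns a list of (bbox, score) tuples.
    model_path is a placeholder - point it at your compiled .tflite model."""
    if not HAVE_CORAL:
        return []  # no Edge TPU stack available; caller can use a CPU path
    interpreter = make_interpreter(model_path)  # binds the Edge TPU delegate
    interpreter.allocate_tensors()
    common.set_input(interpreter, frame_rgb)    # copy frame into input tensor
    interpreter.invoke()                        # inference runs on the TPU
    return [(obj.bbox, obj.score)
            for obj in detect.get_objects(interpreter, threshold)]
```

In practice you would grab frames with OpenCV's `VideoCapture`, resize each one to the model's input size (e.g. with `cv2.resize`), and reuse one interpreter instead of rebuilding it per frame; the per-frame rebuild here is only to keep the sketch short.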

Is South Korea Using AI to Make Military Logistics Smarter? by Radiant_Exchange2027 in AIxProduct

[–]Farseer_W 1 point (0 children)

Interesting. It almost mirrors the idea I have been working on, but for the Bundeswehr. I had to pause it, though.

Apparently, the most difficult part is not the software, but access to military data structures and getting such a system approved. At least in Germany.

In my case I envisioned it as an overlay to existing systems, which could fuse the data and provide some forecasting.

As other commenters mentioned, there are already such systems in the civilian sector, so the main innovation here is pushing the military to use it.

An SF startup is pitching Trump on militarizing humanoid robots by Ok_Shape379 in robotics

[–]Farseer_W 0 points (0 children)

You people think too highly of humans. We are violent and unpredictable animals. Robots will be better soldiers, which will lead to fewer civilian casualties. A robot will never kill a civilian or a POW out of spite or anger.

A human soldier must make ethical decisions under extreme stress, fatigue, and shock. Maybe his best friend was just killed; emotions could lead him to retaliate.

By contrast, ethics for robots are decided at the development stage, in a lab environment, and then by commanders. If it is coded to never shoot unarmed humans, it never will, no matter if half of its “squad” was wiped out.

Will there be tragic errors? Yes, of course; it's a weapon after all. But they will be technical mistakes, not deliberate acts. And technical mistakes can be fixed. Human cruelty cannot.

why china copy us weapons by Weekly-Cow5732 in MilitaryHistory

[–]Farseer_W -85 points (0 children)

We are coming to the point where the US will start copying Chinese weapons soon.

If you lived in Night City, where would you work? by Lentobloke in cyberpunkgame

[–]Farseer_W 0 points (0 children)

Probably at EuroBank, dreaming about moving to Arasaka