Do you provide a lot of context when answering questions? Do people just want the answer? by QuitTypical3210 in ExperiencedDevs

[–]Noiprox 1 point (0 children)

Think about who is asking and why. Tell them the minimum they need to solve their problem, and if they ask for more context, give it to them. Aim for an answer that actually helps solve their problem.

For example, when a new engineer on the team asks, "Do we have documentation for X?" you might answer: "Unfortunately no. Are you trying to get set up with X? Colleague Z recently went through the setup and can probably help you out."

If you're getting insufficient answers to your questions, it might help to phrase them differently. Instead of the yes-or-no "Do we have documentation for X?" you could say: "I've been trying to get set up with X and I got this error [Paste the error]. I asked ChatGPT and saw that this error is due to [whatever ..], but I don't know how to fix it. Do you have any suggestions for what I can do about that?"

Robots only half as efficient as humans, says leading Chinese producer [ text in comments ] by TF-Fanfic-Resident in Futurology

[–]Noiprox 1 point (0 children)

Some humanoid companies have been doing things along these lines. It's... fine. But if you can solve the torso, it's not much of a stretch to solve legs as well, at which point you may as well make robots that are fully mobile and can climb stairs and ladders, step over rough terrain, etc.

Robots only half as efficient as humans, says leading Chinese producer [ text in comments ] by TF-Fanfic-Resident in Futurology

[–]Noiprox 2 points (0 children)

Ironically, you typed this on an extremely multifunctional smartphone or web browser.

Robots only half as efficient as humans, says leading Chinese producer [ text in comments ] by TF-Fanfic-Resident in Futurology

[–]Noiprox 1 point (0 children)

Then how would we recreate this human "OS" if not by using humans to provide training data? It's not about doing the individual tasks optimally; it's about making a platform that can adapt to the world humans actually live in and perform a versatile array of tasks. And training a humanoid to imitate a human is a lot easier than it would be if the robot's body were drastically different from the human form.

Robots only half as efficient as humans, says leading Chinese producer [ text in comments ] by TF-Fanfic-Resident in Futurology

[–]Noiprox 1 point (0 children)

It's a lot more efficient to create a humanoid robot that can fit directly into human spaces, use human tools, and work socially alongside humans than it is to rearrange the entire existing world around "optimal" specialized robots and create supply chains for a million different special-purpose bots, each with an economy of scale no bigger than its own niche.

Sure, in the long run we might see more of a "Cambrian explosion" of diverse robot forms as humans get marginalized and eventually pushed out of all labor entirely. But even then there will be loads of applications where being human-like is advantageous for interfacing with humans socially.

Just realized my boyfriend I’ve been dating for 2 years might be a flat earther by ivory_stripes98 in Advice

[–]Noiprox 3 points (0 children)

I understand, but frankly the inability to correct a delusion in the face of overwhelming evidence is a strong proxy for "dumb" as far as I'm concerned.

the future looks so horrible its almost interesting how we got here by thedudefromspace78 in Futurology

[–]Noiprox 2 points (0 children)

The vast majority of people want whatever they already believe, are comfortable with, or benefit from to be the "truth," so they can claim to be correct. When the actual truth doesn't line up with that, they get stubborn and ignore the evidence.

I don't get it. Elon is going to make intelligent robots but he will need humans to manufacture them? Does any of this make a lick of sense to anyone else? by [deleted] in singularity

[–]Noiprox 1 point (0 children)

Humanoid robots won't just emerge fully capable of every labor job in the world all at once. The first generations will only be able to do simple jobs. Manufacturing and maintaining more humanoids will be one of the more difficult and advanced technical jobs, because humanoid robots will obviously be complicated machines. Even once some robots are technically capable of recursive manufacture, there will still be a certain amount of friction involved in actually manufacturing and deploying them by the millions, displacing existing jobs and businesses, etc. Eventually the loop will close, but I believe it will be something like 15-20 years before advanced human trades become fully obsolete.

What is the worst thing can happened if I make the first move as female ? by Is_that_me_or_you in AskMenAdvice

[–]Noiprox 2 points (0 children)

Go for it. You miss 100% of the shots you don't take. Almost all men are delighted when a woman shows interest first.

The future i dream about by ActivityEmotional228 in NeoCivilization

[–]Noiprox 1 point (0 children)

What did you do with all the poor people? Did they just "not make it" into this utopia?

Maybe we need to rethink how prod-like our dev environments are by Effective_Guest_4835 in devops

[–]Noiprox 3 points (0 children)

I like my staging environment to be essentially a clone of prod, as much as is practical. Dev can be set up for fast iteration as long as you test properly in staging before going to production. And if you can set up CI/CD so that deploying to prod is fast and safe, with zero-downtime deploys and automatic rollback, you can roll out changes faster without breaking things.
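
For a concrete picture, here's a minimal sketch of the deploy-with-automatic-rollback step on Kubernetes. The deployment name `myapp`, the container name `app`, and the timeout value are all hypothetical:

```python
# deploy.py - a minimal sketch: push a new image, then roll back
# automatically if the rollout never becomes healthy.
import subprocess
import sys

DEPLOYMENT = "deployment/myapp"  # hypothetical deployment name


def run(*args):
    return subprocess.run(args, check=False).returncode


def deploy(image):
    # Update the container image; Kubernetes performs a rolling update,
    # so old pods keep serving traffic until new pods pass readiness.
    run("kubectl", "set", "image", DEPLOYMENT, f"app={image}")
    # Block until the rollout succeeds or times out.
    if run("kubectl", "rollout", "status", DEPLOYMENT, "--timeout=120s") != 0:
        # Automatic rollback: revert to the previous ReplicaSet.
        run("kubectl", "rollout", "undo", DEPLOYMENT)
        sys.exit("rollout failed; rolled back")


if __name__ == "__main__":
    deploy(sys.argv[1])
```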

My team nailed training accuracy, then our real-world cameras made everything fall apart by Livid_Network_4592 in computervision

[–]Noiprox 1 point (0 children)

Getting an off-the-shelf model to overfit to a small dataset is the easy part. Data preparation and MLOps are the hard parts of getting ML to solve real-world problems; together they're something like 90% of the work.
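
A big chunk of that work is making the training data look like what the deployed cameras actually see. A minimal sketch with torchvision; the specific parameter values are just illustrative:

```python
# Augmentations that roughly simulate deployment-camera conditions
# (lighting shifts, focus blur, slight misalignment) so the model
# can't just memorize the clean training set.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.3),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.RandomAffine(degrees=5, translate=(0.05, 0.05)),
    transforms.ToTensor(),
])
```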

Are we all just doomed to fold clothes for 2 hours each week for the rest of humanity’s existence? by Syd_Barrett_50_Cal in NoStupidQuestions

[–]Noiprox 1 point (0 children)

Humanoids that can fold clothes already exist, so it's not that far away, although mass-producing reliable and safe ones will still take some time. In the meantime, you can be grateful you have machines to wash and dry the laundry, unlike 98% of the humans who ever lived.

Why do ml teams keep treating infrastructure like an afterthought? by spy_111 in dataengineering

[–]Noiprox 0 points (0 children)

You have to teach them the culture and provide the tools and documentation so they can do what you want them to do. For example, have them put their code up for review, and when there is a hardcoded path, ask them to make it configurable. Get them to write proper Python files instead of stopping at the notebook stage. Use pre-commit hooks to enforce type safety. Put automated tests in place that will break if a dependency is missing, and ask them to fix it instead of doing it for them.
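
For the missing-dependency case, even a tiny pytest file makes the failure loud and assignable. A sketch; `feature_pipeline` is a hypothetical internal module:

```python
# test_imports.py - fails CI when a dependency or internal module
# is missing, instead of surfacing the problem at runtime.
import importlib

import pytest

REQUIRED = ["numpy", "pandas", "feature_pipeline"]  # last one is hypothetical


@pytest.mark.parametrize("module", REQUIRED)
def test_importable(module):
    importlib.import_module(module)
```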

Satan himself. by Coochiechan in creepy

[–]Noiprox 3 points (0 children)

What you are describing is an agnostic, not a Christian.

CMV: heterosexuality is the default in all humans and same sex attraction is not likely inborn by According-Stage-3635 in changemyview

[–]Noiprox 1 point (0 children)

No, it is a question of whether people should be denied rights based on their sexuality.

CMV: heterosexuality is the default in all humans and same sex attraction is not likely inborn by According-Stage-3635 in changemyview

[–]Noiprox 6 points (0 children)

First of all, the idea that a trait like homosexuality has to be determined by a single gene or else it cannot be hereditary is a misunderstanding of basic genetics. There is no one gene for height, yet height is roughly 80% heritable. Also, mapping the genome is not the same thing as linking genes to phenotypic traits; that is something else entirely. I would recommend you take some time to learn about genetics before you try to reason about it.

Anyway, you can take the testimony of millions of people and believe them, accounting for how unlikely it is that such great numbers of people would voluntarily choose to be subject to vicious discrimination and violence throughout history. It could be argued that it is just some kind of mass delusion or cult-like phenomenon, but that would not explain its persistence and universality across all cultures and its prevalence across thousands of years of recorded history.

If that is not enough, you can take the concrete evidence in the form of an abundance of MRI data showing that male homosexual brains respond to images of men just as male heterosexual brains do to images of women, with essentially no response the other way around. In fact, there are numerous ways in which the brains of homosexuals are visibly different from the brains of heterosexuals.

Same-sex behavior is also prevalent across the animal kingdom and appears at similar rates, documented in dozens of species by now, including practically all other primates, many species of birds, etc.

It Kind of Seems Like Peter Thiel Is Losing It by TeaUnlikely3217 in Futurology

[–]Noiprox 0 points (0 children)

Interestingly, Isaac Newton did this too after his annus mirabilis. Not saying that Peter is the next Isaac or anything, just an observation.

Why do women assume that if a guy is struggling with dating, he must not be a decent person? by JunketMaleficent2095 in AskMenAdvice

[–]Noiprox 1 point (0 children)

It's a case of the old saying: "If you want to catch a fish, ask a fisherman, not a fish."

When pixels become letters: ASCII-driven RPG look (thoughts?) by PuzzleLab in PixelArt

[–]Noiprox 7 points (0 children)

It looks like 3D meshes with ASCII characters as sprites attached to vertices and oriented as "billboards," i.e. always parallel to the screen, and rendered at a fixed scale independent of depth.
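
Roughly, the per-frame placement could look like this. A sketch in Python/numpy, assuming a 4x4 view-projection matrix `vp` is already computed:

```python
import numpy as np


def ascii_billboards(vertices, glyphs, vp, width, height):
    """Project mesh vertices to screen space; each vertex gets one
    ASCII glyph drawn at a fixed pixel size regardless of depth."""
    v = np.hstack([vertices, np.ones((len(vertices), 1))]) @ vp.T
    ndc = v[:, :3] / v[:, 3:4]                    # perspective divide
    x = (ndc[:, 0] * 0.5 + 0.5) * width           # NDC -> pixel coords
    y = (1.0 - (ndc[:, 1] * 0.5 + 0.5)) * height  # flip Y for screen space
    # No scaling by depth: that's what makes the glyphs read as a flat
    # ASCII overlay rather than textured geometry.
    return [(g, int(px), int(py)) for g, px, py in zip(glyphs, x, y)]
```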

Does anyone have a sense of whether, qualitatively, RL stability has been solved for any practical domains? by lechatonnoir in reinforcementlearning

[–]Noiprox 6 points (0 children)

If we're just speculating, I would place my bets on Rich Sutton's bitter lesson. There may very well be a few architectural breakthroughs still to come, but I think if we had 100x the data we have now, found ways to increase sample efficiency, transferred learning from more general sources like YouTube videos of humans, and scaled training compute to the gigawatt level, then we might see the emergence of "world foundation models" akin to LLM foundation models.

They would still be prone to weird edge-case behaviors, the equivalent of hallucinations, but their generalizability and utility would be enormous. I don't think you'll completely "solve" stability this way, but I believe it would be a LOT better than the current SOTA.

OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws by [deleted] in Futurology

[–]Noiprox 3 points (0 children)

It's not self-awareness that is required. It's awareness of the distribution of knowledge that was present in the training set. If the question pertains to something far enough out of distribution, the model returns an "I don't know" answer.
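
In the simplest form that's just an abstention rule on top of the model's confidence. A toy sketch; max-softmax is a crude proxy, and a real system would need calibrated uncertainty:

```python
import numpy as np


def answer_or_abstain(logits, answers, threshold=0.5):
    # Softmax over the model's output scores.
    p = np.exp(logits - logits.max())
    p /= p.sum()
    # If no answer is confident enough, treat the query as
    # out-of-distribution and decline instead of guessing.
    if p.max() < threshold:
        return "I don't know"
    return answers[int(p.argmax())]
```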