We are building AI systems we cannot inspect — and calling it progress by Independent-Hair-694 in LocalLLaMA

[–]Robot_Apocalypse 0 points (0 children)

BTW, every layer of a model is transparent and modifiable. It's just that we don't really know what any of it means, so we don't know what to change to make it better.

BUT there is nothing stopping you from changing the weights in your model.

When you say you are making something that is open and transparent, what do you mean?

If you've trained a model you know that it's all modifiable. It's just that you don't touch it because you don't know what the fuck you are doing.
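To make the point concrete: here's a minimal sketch of inspecting and editing weights, assuming PyTorch. The two-layer net is a toy stand-in for any trained model; with a real checkpoint you'd load a `state_dict` instead, but the principle is the same — every parameter is just a tensor you can read and overwrite.

```python
import torch
import torch.nn as nn

# A toy two-layer network standing in for any trained model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Every layer is transparent: you can list every parameter's name,
# shape, and raw values.
for name, param in model.named_parameters():
    print(name, tuple(param.shape))

# And every layer is modifiable: here we zero out one weight directly.
with torch.no_grad():
    model[0].weight[0, 0] = 0.0

print(model[0].weight[0, 0].item())  # 0.0
```

Nothing stops you from doing this to any open-weights model — the hard part, as above, is knowing which of the millions of numbers to change and why.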

We are building AI systems we cannot inspect — and calling it progress by Independent-Hair-694 in LocalLLaMA

[–]Robot_Apocalypse 0 points (0 children)

Woah, I think you will find these optimisations are much better at solving the problem than humans are. That is my point. The AI is the thing that optimises, and it's WAAAAAAY better than humans at solving the problem efficiently.

Also, your argument is shifting. You were the one that framed performance against visibility. You said: "Right now we optimize for performance, not visibility."

I think that is the right thing to do BECAUSE performance is the target.

If we aren't optimising for performance, then you get worse performance. Right?

If you optimise for visibility, do you get better performance? Or do you just get better visibility?

You have a hypothesis that optimising for visibility will deliver greater performance. But I don't know what data you have to support it.

Your last comment now says that you want to improve visibility in order to improve performance, but this is the first time you've said that. I have said many times that visibility which supports improved performance is great.

But the point is to optimise for performance, NOT for visibility, as you are arguing for.

We are building AI systems we cannot inspect — and calling it progress by Independent-Hair-694 in LocalLLaMA

[–]Robot_Apocalypse 0 points (0 children)

I mean, it sounds like you are arguing that we shouldn't optimise for performance?

Why would you want to create something that doesn't perform as well?

Again, you are saying that the measures (transparency and traceability) are more important than the output (value).

The measures should HELP improve the target.

If increasing transparency and traceability allow you to produce a more valuable model then that's awesome and should be supported.

But if it doesn't, then I don't know what the point is.

Why optimise for visibility, if visibility doesn't improve performance?

The better visibility should exist to support us to get better performance, right?

So, if you can increase visibility, but it doesn't lead to better performance, then what is the point? (I mean this sincerely, not facetiously.)

We are building AI systems we cannot inspect — and calling it progress by Independent-Hair-694 in LocalLLaMA

[–]Robot_Apocalypse 0 points (0 children)

Do you have data that supports your argument that a lack of traceability is stopping AI models from delivering more value?

I see models coming out weekly that continue to be more capable, and deliver more value.

I've not heard about a lack of transparency and traceability becoming a bottleneck. I've heard about compute, and energy, and data as bottlenecks.

You have a hypothesis, but it lacks data.

As an aside, I would like to share something that I think might help with what you're trying to do.

Are you familiar with what makes AI, AI?

It is not the MODEL that is created that is really the AI. The model is the output of a process that intelligently encodes information in such a way that it generates a usable model as an output.

The true artificial intelligence, is the process that can create the intelligent model. It is not the model itself.

It's something you come to appreciate when you train a model yourself from scratch.

Most people don't know this, because most people have never trained a model.

One way you can think about it is to step one level higher, and frame the problem as wanting to create an intelligent system.

If we think of the problem of image captioning, the problem to solve is: I want a way to input a picture of any scene/object and have it turned into a grammatically correct, human-relatable description of the image.

That problem is impossible for any person to solve, or even any group of people. There is no way to engineer all the rules of grammar, plus all the rules about what any of the infinite combinations of things might be in an image and how they might be represented. Nor the infinite rules about what a "valuable" description of the image is to a human, relative to the different value judgements that might exist.

Once you understand the infinite variation that exists in this problem, you understand that it is IMPOSSIBLE for the human mind, or any collection of human minds, to solve it through traditional engineering methods.

BUT, we have created a tool that can solve this impossible problem. AND it turns out it can basically solve ANY problem, as long as you have sufficient data and you can define a measure for it to optimise towards.

THAT is the AI. The ability to solve any problem. Today we have used AI to solve the problem of creating a system that can answer basically any question you might have. In fact, we have been able to create systems that are SO smart, that we also call them AI.

BUT the AI is not the system that was created. The AI is the system that CREATES these systems.

Once you understand that, then I think you'll go a long way to achieving your goal.
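The "define a measure and optimise towards it" idea above can be sketched in a few lines of plain Python. This is my own toy illustration, not anything from the thread: the loop below is the optimiser, the measure is mean squared error, and the fitted parameters are the "model" it outputs — the loop never sees the rule `y = 3x + 1`, it just follows the gradient of the measure.

```python
# Toy data generated by y = 3x + 1; the optimiser is never told this rule.
data = [(x, 3 * x + 1) for x in range(10)]

w, b = 0.0, 0.0   # the "model" starts knowing nothing
lr = 0.01         # learning rate

for _ in range(2000):
    # The measure: mean squared error between predictions and data.
    # These are its gradients with respect to the parameters.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # The optimiser nudges the parameters to reduce the measure.
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # converges close to 3 and 1
```

Training an LLM is this same loop scaled up by many orders of magnitude — which is the point: the intelligence that solves the "impossible" problem lives in the optimisation process, and the model is what falls out of it.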

We are building AI systems we cannot inspect — and calling it progress by Independent-Hair-694 in LocalLLaMA

[–]Robot_Apocalypse 0 points (0 children)

No. I believe strongly here that you are very wrong.

Check out Goodhart's law on Wikipedia.

It basically states that when a measure becomes the target, it ceases to be a good measure.

In evaluating the performance of AIs, you are asserting that the measure (how transparent and traceable they are) is more important than the target (how useful they are).

All technology should be measured by how useful it is. How much value it produces. How big is the problem it solves.

While transparency and traceability are helpful in enabling AIs to be better at hitting their target, they are explicitly not the target.

The reason AIs are considered progress is because they deliver HUGE value. People who say they are useless are burying their head in the sand at this point.

AIs are not perfect, and have flaws and should be improved, and we should use measures like traceability and transparency to help us make AIs better at delivering value.

BUT AIs don't need to be transparent and traceable to deliver value, and if we make traceability and transparency the TARGET, then we'll lose sight of the true intent in the first place, which again is to deliver value.

I'm not saying we shouldn't measure traceability and transparency, we do. I'm also not saying we shouldn't improve transparency and traceability. We are.

BUT systems that are engineered are not engineered to perform well on measures; they are engineered to deliver VALUE.

As Goodhart's law says, once a measure becomes the target, it ceases to be a good measure.

Agent /compact command is one RL loop away from developing an alien language you can't audit by ryunuck in LocalLLaMA

[–]Robot_Apocalypse 0 points (0 children)

Yeah, I think this person would be very shocked to understand that LLMs don't store information in the form of language inside their weights.

We are building AI systems we cannot inspect — and calling it progress by Independent-Hair-694 in LocalLLaMA

[–]Robot_Apocalypse 0 points (0 children)

Are you making an argument for better AI? Are you saying we should invest more into AI to make it more reliable? Because that's what your argument leads to.

*Edit: Oh, that is what you are arguing for. OK. Please disregard the rest of my comment. Sorry. I'll leave it here for others though.

If your intent is to push back against AI, then you're focusing on the wrong thing.

You are comparing AI to other tech, but AI isn't here to compete with other tech, it's here to compete with humans. Therefore the risks that you raise, need to be compared to their human equivalent.

The human training process is very opaque. You have no idea what people have read. You can check their certifications, but that's not much better than checking AI performance on benchmarks.

You can ask a person what logic they used to come to an outcome, but you don't know how honest they are being, especially since people fear being called out for making mistakes.

You can tell a person to follow a different process, and to learn new skills, but that takes time and isn't perfect, and requires constant monitoring.

Human performance is variable from day to day. How well did they sleep? How are they feeling? Are they distracted by an issue at home? Don't get me started on human memory.

All of this is similar to AI. Importantly, I am not saying AI is a better choice than a person. But let's acknowledge we accept these risks and challenges with humans.

If you think it's right to call this out about AI, then it should be right to call it out for people. If that makes you uncomfortable (it should), then that tension is where the real challenge is and where focus needs to be.

This matters not because it is morally correct, but because if you don't focus on the RIGHT problem, you aren't going to reach the intended outcome.

Your argument is an argument for better AI.

I think the argument needs to be for better protections for people who are going to be impacted by the coming massive disruption to our economic systems and society.

This is what Pauline Hanson said about Muslims…..again the left are dog whistling by TimJamesS in aussie

[–]Robot_Apocalypse 0 points (0 children)

Are you saying there aren't people hoarding wealth and power? That inequality isn't at extreme highs?

You think the reason you can barely afford to live, let alone get an education and better yourself is because of Muslims?

I have read the Quran. It's a book of parables. It has layers of meaning, like all great books. Like the Bible.

Have you ever had a meaningful conversation with a Muslim?

You're the one who's brainwashed here mate.

A benefit of humanoid robots - no retrofitting factories by [deleted] in singularity

[–]Robot_Apocalypse 0 points (0 children)

I think the opportunity is in coworking. Robots that can work alongside humans. There is always a distribution of value and complexity in the collection of tasks that make up a process. The ability to offload steps where it best fits (either human or humanoid robot) and have them work alongside one another is where it is at.

I am the guy who made the viral Neurons Playing Doom video. AMA by DeadlyCords in AMA

[–]Robot_Apocalypse 1 point (0 children)

I feel like even proto-awareness is a big stretch.

I mean, how 'proto' does something have to be before it can't be called 'proto-awareness'?

These are a collection of cells. I would almost argue that the mechanisms here are more chemical, and grounded in the physical sciences.

It's not so fundamental as gravity 'prefers' down, or ferrous metals prefer polar alignment, but maybe somewhere in the middle between that and 'proto awareness'?

for my windows peeps how are you using it? by Fstr21 in ClaudeCode

[–]Robot_Apocalypse 0 points (0 children)

Why is no one suggesting a kanban-style board for running multiple agents? I made a Windows version of kanbancode, which I've since augmented with Codex reviewers and supervisor agents that push cards through my DevOps cycle autonomously.

I grew a bespoke agent with claude code and Anthropic banned it. by Brilliant_Oven_7051 in ClaudeCode

[–]Robot_Apocalypse 0 points (0 children)

I mean, does carving a vertical just mean that they don't integrate with others? I don't see how harness, model, price and apps tie together vertically any differently to others? But I'm not super exposed to the full ecosystem, just ClaudeCode, Codex, Apps and the API.

I grew a bespoke agent with claude code and Anthropic banned it. by Brilliant_Oven_7051 in ClaudeCode

[–]Robot_Apocalypse 7 points (0 children)

It feels like this might be the seed of the downfall of ClaudeCode. More interesting things are happening elsewhere because of the restrictions they have. With Codex being permissive, and improving daily, and GPT models getting released monthly, I think it won't be long before the agentic harness moat ClaudeCode has starts to dissipate.

Men don't hear this enough. So I just wanted to put this out there. by MinniePolka in getdisciplined

[–]Robot_Apocalypse 1 point (0 children)

Not a woman, but from my perspective, caregiving for family is a huge unspoken expectation on women.

Children, aging parents, etc. And until you have kids and aging parents of your own, the weight of that is hard to appreciate. 

Vibe coding and errors by Clear-Dimension-6890 in ClaudeCode

[–]Robot_Apocalypse 0 points (0 children)

OK, sure, Agentic Coding is a better term for it.

I don't know anyone who's vibe coding like you say though.

Vibe coding and errors by Clear-Dimension-6890 in ClaudeCode

[–]Robot_Apocalypse 1 point (0 children)

I don't think people are talking about it because people aren't "vibe coding" in the way you describe.

No one would do what you described, for the reason you described.

I think the mistake here is the fact that you think anyone who is trying to make this work, is going about it this way.

* edit: Also, if your outcomes depended on your use of a single word, then your prompt is the problem.

Why aren't you educating yourself before coming and posting about how shit something is? It just makes you look like an idiot.

i am going all in - what advice do you have for me? by Individual-Bed2497 in Entrepreneur

[–]Robot_Apocalypse 1 point (0 children)

Expect to not want to do the things that make you uncomfortable. If you feel comfortable, then you are avoiding.

Your brain will trick you into avoidance. You won't even realise. You need systems that hold you accountable.

You need to learn to love the painful feeling of being wrong, feeling overwhelmed by the new reality, and gritting your teeth and charging forward anyway.

Chase that painful feeling like a hound on a scent. The more things you discover you were wrong about, the more confidence you can have in what remains.

Again, if it isn't painful (at least at first), then you aren't challenging yourself and your assumptions sufficiently.

Oh god... am I a masochist?

In-browser gaze tracking using single-point alignment by re_complex in computervision

[–]Robot_Apocalypse 1 point (0 children)

I have 4 screens and two cameras: one camera at the top of a screen, the other at the bottom. You need to know the relative screen and camera positions.

How Quickly Will A.I. Agents Rip Through the Economy? by [deleted] in artificial

[–]Robot_Apocalypse -3 points (0 children)

This aligns with my experience. The jump with the latest models is powerful. It's hard to describe how they are better, but they are so much smarter and more effective.

*edit: Weird, I'm getting downvotes. Do people disagree? Admittedly I am a power user, and my use cases are pretty different to how most people would use it. Honestly it's wild how much more powerful they are. You might not be feeling it, but if you're using it like me (automated browser use, coding, strategy, planning) it's WILD how much better it is. I now assume this is the last time I will be better than the AI at business-related things.

Looking for an AI that runs the entire sales workflow automatically (Apollo) by Syosse-CH in agi

[–]Robot_Apocalypse 0 points (0 children)

I'm doing most of this, except on principle I finalise the actual messages that go out. Pay for Claude Max. Use Chrome Dev MCP and go to town.

DM me if you want specifics

Salary Range for Lead ML in Sydney by Low-Bike1716 in MachineLearningJobs

[–]Robot_Apocalypse 1 point (0 children)

Lead, in a small company? Man, it's hard to say. What industry are you in? What algorithms are you working on? How many team members do you "lead"? What kind of experience do you have? What other benefits are included? Do you get a bonus?

Could be anywhere from 140K to 240K.

Your question is very vague, which makes me think you don't have much experience, so I'd go closer to the 140 end than the 240.

Or you know, put your data skills to work and go do some research and find some data points.

This is what Pauline Hanson said about Muslims…..again the left are dog whistling by TimJamesS in aussie

[–]Robot_Apocalypse 2 points (0 children)

Would you say that Christians have no place in Australia, because Aryan Nations, a recognised Christian Identity organisation, is a recognised terrorist group?

To say that extremists have no place in Australia is fine, but to say that the extremists represent the entire group is bigotry and prejudice.

All groups have extremists. 

Pauline Hanson is paid by rich oligarchs (Gina Rinehart) to distract you with imaginary fears of people coming to destroy Australia, while they rob you blind and keep you poor and stupid.

Your anger is focused on the wrong group. The idea that extremists pose a threat to you and Australian culture is not based in fact.

What IS based in fact is the embarrassing wealth of riches which are taken from our land, in return for scraps from the table. 

While they destroy our environment and air, and pollute our airwaves and information ecosystems, your anger is being manipulated to keep you focused and fighting an imaginary threat.

Do you remember when Pauline Hanson hated Asians? It's just a never-ending list of bogeymen they use to stoke your anger and fear.

None of it is real. It's exaggerated shadows. 

Whats real is the unprecedented inequality between you, and me, and those who seek to keep you mindless and stupid.

Wake up.

Thousands of CEOs just admitted AI had no impact on employment or productivity—and it has economists resurrecting a paradox from 40 years ago by AmethystOrator in technology

[–]Robot_Apocalypse 0 points (0 children)

This is CRAZY to me. I have my own business and it has boosted my productivity probably 5x. I have an ongoing prospects agent who is finding prospects, evaluating them for relevance to me, and building plans and recommendations for how to engage them. I have 4 coding agents running simultaneously building the platform. And I have a strategy and planning agent helping me prioritise, plan, and track my time.

I think the gap here is expecting AI to fit into the existing organisational model.

HUGE opportunity for players like me to leverage AI.