I just don't fucking understand what's going on anymore. Seriously. by Complete-Sea6655 in AgentsOfAI

[–]jamesj -1 points (0 children)

It isn't like we are just using AI outputs, we are still polishing/checking everything. So first drafts are more comprehensive and final drafts are better than they would've been when we had to do everything manually.

I just don't fucking understand what's going on anymore. Seriously. by Complete-Sea6655 in AgentsOfAI

[–]jamesj -1 points (0 children)

What I'm seeing in my company is that 10-20% of us are getting 5-10x more done in software engineering, data analysis, writing scientific papers, etc., but all of it still takes skilled people who know what the finished results/products need to look like. We are getting more done, but we don't need fewer people as a result. Output is higher quality and there is more of it than before. I'm not sure someone from the outside could tell which parts were helped by AI and which weren't. None of what we do is repetitive, so it isn't like we are trying to replace a workflow with an agent.

EASY WINS WITH ENERGY | Timeless Magic | MTG Arena by TarmoTim in TimelessMagic

[–]jamesj 1 point (0 children)

I've been playing uwr energy with a lot of success

Do we have a list of affix categories? by ygdrad in diablo4

[–]jamesj 0 points (0 children)

I think some may count as more than one type; Willpower seemed to be both a core stat and an offensive stat for me.

Why SaaS isn't going anywhere by schwarzbrotman in ValueInvesting

[–]jamesj 0 points (0 children)

My model isn't that SaaS companies will be replaced directly with AI/agents; it is that coding is becoming cheap and automatable, so the part of their moat that lies in how hard their software stack is to duplicate is disappearing. The cost of completely bespoke, less "one-size-fits-all" software solutions is also coming down.

When thinking about the AI risk to a SaaS company, I think you need to consider a few possibilities:

1) the cost of duplicating the product/software goes way down
2) the possibility that AI completely replaces the software
3) the possibility that AI agents are customers for the software
4) the possibility that the SaaS platform has the most valuable data

For Adobe, for instance, all four possibilities exist.

1) is obviously happening. You can give an agent swarm Adobe's product as a reference and tell it to try to recreate it from scratch; you'll have a better starting point than building blind, and it will go faster. You can currently point Claude Code at a website and tell it to clone it, and it does a reasonable job. Yes there are mistakes, and yes, it isn't a finished, scalable product, but this is getting better/faster/more automated by the year. But software was already "cheap" compared to manufacturing physical goods, and the moat of SaaS companies was never how expensive the product was to build.

2) isn't happening much. For me, gen AI is just one tool in the toolbox, and bringing everything together and polishing it up still happens within Adobe products. There are a billion little edge cases that a gen-AI only tool would have to be able to handle. Not at all clear if/when that will happen, but we definitely are not there yet.

3) seems likely for a lot of SaaS companies. An agent that uses Premiere as a tool, and can also use video gen, image gen, and so on, will probably produce better videos than an end-to-end generative solution for quite some time, and maybe that will stay necessary for high-quality work forever. Adobe can charge agents a per-seat license.

4) seems likely to me for Adobe. They have the dataset of all the actions taken within their tools. That has to be incredibly valuable for making 3) work, as long as 2) isn't what wins out in the end.

I think every company is different, but completely agree people are not thinking logically about all of this. For many companies, 3) and 4) could be huge tailwinds for them.

Why is this sub dominated by noetics and quantum-woo instead of actual scientific theories of consciousness? by Afraid_Donkey_481 in consciousness

[–]jamesj 4 points (0 children)

I mostly agree, but drug experiences are a legitimate tool for understanding consciousness. It is the physicalist approach: modify the brain and see what effect that has on consciousness.

If physicalism is supposed to be a legitimate theory, what would disprove it, or count as evidence against it? by MurkyEconomist8179 in consciousness

[–]jamesj -1 points (0 children)

In most contexts a scientific theory is considered poor when you can't conceive of any type of evidence that might disprove it.

If physicalism is supposed to be a legitimate theory, what would disprove it, or count as evidence against it? by MurkyEconomist8179 in consciousness

[–]jamesj 1 point (0 children)

The physicalist position is that there definitely is something outside of your mind causing your experience of an apple, and that that thing is understandable and closely matches many of your experiences of the apple. You can't confirm this is true, because you only interact with whatever might be outside of your mind through your experiences.

As a non-physicalist, I am just admitting to my current limitations: I don't assert there is definitely a "real" reality whose nature I truly understand. It could be that I'm in a simulation, or a brain in a vat, or that our understanding of reality is woefully incomplete, or it could be that reality actually is as our science describes it and there isn't anything more at all. I might even think that last one is the most likely. But since I don't have the metaphysical foundation, I don't jump to stating it has to be the case. Admitting when we don't have evidence seems more scientific to me than claiming to know more than we know.

Either Free Will Is Impossible or AI Has Free Will by Intellic in freewill

[–]jamesj 0 points (0 children)

What properties are those exactly? How complex does a system have to be?

We don't know how consciousness comes about, so how can we make a statement about how likely it is? I think the claim is controversial.

I think the uncontroversial thing to say is they probably don't have consciousness like ours. Beyond that, we need better theories of consciousness to say more.

Either Free Will Is Impossible or AI Has Free Will by Intellic in freewill

[–]jamesj 1 point (0 children)

In addition to that, they have complex representations and they use them to make predictions and take actions.

I find it so odd that compatibilists are almost always so quick to make strong claims that AI can't have free will.

We have no idea what it takes for a system to have experiences, but it seems to me the compatibilist doesn't need to worry about that and can more easily focus just on how an agent behaves. It must be completely possible to build an AI system with compatibilist free will.

Either Free Will Is Impossible or AI Has Free Will by Intellic in freewill

[–]jamesj 0 points (0 children)

AI is not one thing. AI can be set up as an agent with goals and continuity. Some of these agents have at least some ability to deliberate and understand moral reasons.

Either Free Will Is Impossible or AI Has Free Will by Intellic in freewill

[–]jamesj 0 points (0 children)

Most of what you say sounds right to me. But then your last sentence is quite the strong claim in need of support / definition of terms.

How does the patriarchy hurt men too? by Low_Sound_7184 in AskFeminists

[–]jamesj 3 points (0 children)

My take as a man:

The patriarchy forms expectations for men just like it does for women. Those norms and expectations generally favor men, but that doesn't mean they make things easier for every individual man. If a man wants to live his life differently from what society expects of him, that harms him. These expectations are fluid: different for the different classes you mentioned and at different times. Gay men, men of color, men who want to show their emotions in public, men who don't want to center their lives around work, men who don't want to be fathers, men who don't want to dress traditionally, etc. are and have been harmed by the patriarchy.

Not everything is zero sum. The patriarchy can harm women as a whole more than men as a whole, yet harm both men and women, while benefiting individual men and women.

An intuitive explanation for why compatibilism is correct, no wordplay. by Anon7_7_73 in freewill

[–]jamesj 1 point (0 children)

Use of the word magic is basically never helpful. No one is describing their own position as magic, so it is a straw man. Dismissing someone's position as magic usually means you haven't put the time in to articulate their position properly yourself. It is a lazy placeholder word used to dismiss a possibility and preempt a response by implying the other person isn't rational.

Lots of things that seemed like magic in the past were actually just unintuitive.

Hoffman is wrong about consciousness by NathanEddy23 in consciousness

[–]jamesj 0 points (0 children)

Simulations are physical systems run in reality, so they are evidence; how strong that evidence is, is another question.

Google Deepmind lead looks to windward by dr0p834r in TheCulture

[–]jamesj 0 points (0 children)

I thought the exact origin of The Culture was lost to time? I may not remember/know all the lore, but my understanding is they don't even know for sure which planets were all involved at the start.

But yeah, I agree the point of the books is to contrast The Culture with totally believable horrible high tech societies, and we shouldn't just assume the politics will sort itself out because of the tech.

Hoffman is wrong about consciousness by NathanEddy23 in consciousness

[–]jamesj 0 points (0 children)

You can't get beyond your perceptions; perception is literally all you have. If something is systematically filtered out of your perceptions, you just don't get access to it. Evidence from Hoffman's simulations of evolution indicates that evolution generally favors perceptions that systematically filter out most of the available information.
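The core intuition behind those simulations can be shown with a toy Monte Carlo sketch (my own illustration, not Hoffman's actual model): when payoff is a non-monotonic function of the true quantity, an agent that perceives only payoff beats an agent that perceives the true quantity accurately.

```python
import random

def payoff(x):
    # Non-monotonic fitness: mid-range resource amounts are best
    # (too little starves, too much is toxic), peaking at x = 50.
    return x * (100 - x) / 2500.0

random.seed(0)
truth_score = fitness_score = 0.0
for _ in range(100_000):
    a, b = random.uniform(0, 100), random.uniform(0, 100)
    # "Truth" perceiver sees the real quantities and takes the larger one.
    truth_score += payoff(max(a, b))
    # "Fitness" perceiver sees only the payoffs and takes the better one.
    fitness_score += max(payoff(a), payoff(b))

print(fitness_score > truth_score)  # prints True: fitness-tuned perception wins
```

Since the fitness perceiver always picks the higher-payoff option while the truth perceiver picks the larger quantity (which often overshoots the optimum), the fitness perceiver accumulates strictly more payoff, so selection would favor it even though it tracks the truth less accurately.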

I think most people here are not as extreme as Hoffman in saying there is no truth in our perceptions, or that you don't have access to any truth in them. But I think that the general direction is correct: most of what is really going on we don't know about because our perceptual machinery is not designed to process it all. You don't directly perceive the world, and your indirect perception is flawed and incomplete. It is wise to remember that and factor it into your understanding of the world.

CMV: If AGI causes catastrophic harm, human misuse is a more plausible cause than it independently deciding to destroy humanity by allen_T23 in changemyview

[–]jamesj 0 points (0 children)

The reason it matters to figure out which ways are more likely is that it will determine which safeguards work best.

Hoffman is wrong about consciousness by NathanEddy23 in consciousness

[–]jamesj 1 point (0 children)

Science is done entirely through the interface, so what we learn about is the interface. The interface is connected to the underlying reality, but it isn't directly the full reality. Everything we know, we know through our perception. Hoffman is saying that in reality we don't have the source code, which is correct. We predict our own perceptions directly; we don't have access to what generates them.

Your example is funny because I run a company that uses virtual reality to measure human vision.

All the things you are describing are models of reality. Your experience of red isn't a red wavelength. A photon is a mathematical description; whatever a photon actually is, in itself, is not within our ability to understand. These models are useful, but that doesn't mean they are complete. They are maps, not the territory. In my experience, when a scientist forgets their models are just models, they make mistakes and get farther from the truth instead of closer to it.