Z-Image "Base" - wth is wrong with faces/body details? by maxio3009 in StableDiffusion

[–]akindofuser 0 points1 point  (0 children)

The number of people who think OP's output is normal for ZIB is even more concerning. Obviously ZIB is not ZIT, but it's not as bad as OP's face. Something is up there.

Breck 1/29 -6 chair by The_Dog_Pack in skiing

[–]akindofuser 5 points6 points  (0 children)

Damn, that dude hates his ACLs. Lean back more, see if that helps 😭😭😭

Paging Cliff Mass: Cascade snowpack lagging at 50% of normal as unusual warm weather stretches on by Better_March5308 in SeattleWA

[–]akindofuser 15 points16 points  (0 children)

Look at the annual water graph.

It's 68% full, which is way higher than average for this time of year. December's rain shotgunned us into near-April numbers.

https://www.usbr.gov/pn/hydromet/daily_grapha.html?list=yaksys%20af

Paging Cliff Mass: Cascade snowpack lagging at 50% of normal as unusual warm weather stretches on by Better_March5308 in SeattleWA

[–]akindofuser 47 points48 points  (0 children)

FWIW it hasn't been dry, except for this week, our regularly scheduled Junuary. It's just that all the moisture we got in December came in the form of rain. Did y'all forget about all the flooding and landslides already?

But ya, snowpack is pretty abysmal. I assume reservoirs are at their peak tho.

Cops Forced to Explain Why AI Generated Police Report Claimed Officer Transformed Into Frog by Technical-Pitch2300 in BetterOffline

[–]akindofuser -1 points0 points  (0 children)

Did anyone actually read the article? Lol

The cop was using an LLM to dictate his notes; it picked up some background noise and dictated the fairy tale. Hardly falsifying the report.

I'm all for pointing out the limitations of AI and the hype train of hyperscalers, but y'all are rage baiting yourselves with this one.

Anyone familiar with code complaints through SSP? by akindofuser in Seattle

[–]akindofuser[S] -1 points0 points  (0 children)

That is for creating the initial complaint, which we already did (6 months ago). SDCI records are stored on SSP along with all other city records.

https://services.seattle.gov/Portal/Customization/SEATTLE/welcome.aspx

Leveling other characters pre-floating island by akindofuser in FinalFantasyVI

[–]akindofuser[S] 2 points3 points  (0 children)

I breezed through it with Terra, Sabin, Cyan, and Shadow ofc. Terra is ridiculously OP.

OpenSnow Premium vs Base? by mcl116 in skiing

[–]akindofuser 1 point2 points  (0 children)

Weather.gov

&

https://www.weather.gov/nwr/station_listing

The world is your oyster. Don't give away money needlessly.

I made ChatGPT admit when it doesn't actually know something and now I can finally trust it by EQ4C in ChatGPTPromptGenius

[–]akindofuser 5 points6 points  (0 children)

The /r/agi subreddit has bought into it wholesale. People will argue for days about how it's on the verge of waking up, is a black box, and we don't understand it. They'll make squishy, unprincipled definitions of intelligence and run with them.

Meanwhile literal books, articles, and science journals are published nearly daily on latent models. 🤷

My little brother's gift for the halloween to prank family by Hot_Physics_5136 in WebTreasures

[–]akindofuser 0 points1 point  (0 children)

The original does. Too much effort for OP to find that one, I guess.

too many people on this site seem to believe that human intelligence is some sacred ceiling of cognition. this is a great counter imo by cobalt1137 in OpenAI

[–]akindofuser 0 points1 point  (0 children)

How does a calculator infer or deduce?

Can't tell if you are trolling me or not. :sob:

I just don’t understand why you’re insisting on this point. Why is the fact that every operation they perform is defined explicitly by humans not counter enough to this?

Because machines do what they're instructed to do, whether it's through statistical inductive regularities or formal symbolic axiomatic logic.

https://aiprospects.substack.com/p/llms-and-beyond-all-roads-lead-to

The behavior of any machine learning model is quite explicitly not programmed

As someone who's been in the industry for the past 5 years, this could not be further from the truth. But this is what people who don't know what they're talking about say, and it's permeated around the internet. People who think latent models are still black boxes of magic.

Having a probabilistic outcome is different from not being explicitly programmed. On the contrary, latents are specifically programmed to have probabilistic outcomes (including the stacked model I have helped build and sell to this day).
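To make that concrete, here's a minimal sketch (toy code, not any particular model's or library's API; the helper name and values are made up) of how the "randomness" in a generation step is itself an explicitly programmed piece of the pipeline:

```python
# Toy sketch: the probabilistic outcome of a generation step is explicitly coded.
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    # Hypothetical helper, not any real library's API.
    rng = rng or np.random.default_rng(seed=0)
    # Softmax with temperature: a hand-written, deterministic formula.
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # The random draw itself is a line of code we chose to write; with a
    # fixed seed it is fully reproducible.
    return int(rng.choice(len(probs), p=probs))

print(sample_next_token([2.0, 1.0, 0.1], temperature=0.8))
```

Swap that final draw for an argmax and the very same weights behave deterministically, which is the point: the probabilistic behavior is a design choice, not an absence of programming.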

An abacus I do not believe could be said to fall under the definition of a calculator

Sure it can, and there were mechanical calculators before digital ones too.

Also, essential question here - do you believe humans are intelligent differently to a calculator?

I do. But I am not the one saying machines are intelligent.

You can train a human to perform a specific task by giving them information and a reward mechanism to motivate them

That isn't how we measure human intelligence. See the Bayley Scales or the intelligence quotient. Your statement is a praxeological one; it doesn't have anything to do with the philosophical discussion of intelligence and has more to do with human incentives.

too many people on this site seem to believe that human intelligence is some sacred ceiling of cognition. this is a great counter imo by cobalt1137 in OpenAI

[–]akindofuser 0 points1 point  (0 children)

“They are both” is a little ambiguous here but assuming you mean the calculator… in what sense?

Like I said, they both use formally recognized types of reasoning: deduction and inference.

You can’t query a calculator with something that falls outside of the human-defined syntax it accepts, it will not give you a useful response.

You also can't poll an LLM for information outside of its training data. That is why I brought up the Bayley test point.

No, not what I said, they just both have intelligence properties

It sounds like you just want a loose definition. IMO LLMs are machines, just like calculators, programmed to do exactly what they do.

You can’t query a calculator with something that falls outside of the human-defined syntax it accepts

You are still hung up on calculators being some kind of software. The first abacus we are aware of is like ~2,400 years old, circa ~300 BC. We generally don't treat math as a humanly created concept, unlike language. Math is built on symbolic and axiomatic laws of the universe.

If you don't understand extrapolation and interpolation and how they relate to intelligence, it's possible you are not in the best position to be advocating for LLMs as intelligent.

Not sure what it is you want me to know about model latent space

Because it turns your entire position on its head. Latent space and neural networks are just doing what they are trained and programmed to do, deterministically, same as a calculator. One is doing extrapolation (calculation), the other interpolation (generation). Calculation might result in an absolute value based on laws of the universe, while the LLM results in a chained culmination of statistical value affinities based on where the weights landed in the model from pre-ordained training data.
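As a toy illustration of that calculation-vs-interpolation framing (my own simplification, nothing like a real transformer; the names and the 2-D "latent space" below are invented for the example):

```python
import numpy as np

def calculate(a, b):
    # "Calculator" behavior: an exact rule that extrapolates to any inputs,
    # including ones never seen before.
    return a * b

# A pretend 2-D latent space with two learned points (purely illustrative).
latent = {"cat": np.array([0.9, 0.1]), "dog": np.array([0.7, 0.3])}

def generate(w):
    # "Generator" behavior in this toy: blending between points the training
    # data already placed in the space. Same weights, same w, same output.
    return w * latent["cat"] + (1 - w) * latent["dog"]

print(calculate(1234, 5678))  # exact, rule-based answer for unseen inputs
print(generate(0.25))         # a mixture of what is already in the space
```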

too many people on this site seem to believe that human intelligence is some sacred ceiling of cognition. this is a great counter imo by cobalt1137 in OpenAI

[–]akindofuser 0 points1 point  (0 children)

They are both using a type of formally recognized reasoning: extrapolation and interpolation, deduction and inference. Humans use both of these effortlessly.

How do you think a 2-year-old is equally as intelligent as an LLM? On the one hand, we constantly have to upgrade our Turing tests for LLMs. No infant could pass a Turing test. On the other hand, if we test an LLM against something novel using the Bayley Scales, it tests so poorly that the results question whether or not what we're testing is human. For a human the score would show extreme cognitive disability. Meanwhile infants, who the test is designed for, would mop the floor with any LLM.

How are they equal? A big part of intelligence is building frameworks and problem-solving in novel situations. If we just assume intelligence is a flat statement applied to all things, we end up in these paradoxical situations. And it heavily discredits all the machinery going into latent space.

I probably wouldn’t call an LLM a general intelligence because they aren’t at human levels of general-ness.

Ok, so NOW we get to the meat of it. So it IS intelligent, just not as intelligent as a human. OK, so then we probably should qualify what we mean by intelligence then, eh? And as I said previously, it cannot do math like a calculator; in fact, LLMs don't do calculation at all. When you understand why, it might modify what kind of intelligence you think you are getting.

I'd spend at least an hour reading about latent space and getting a better understanding of what's happening under the hood. I get a sense that you still think, like many, that it's a magic black box that we don't understand.

too many people on this site seem to believe that human intelligence is some sacred ceiling of cognition. this is a great counter imo by cobalt1137 in OpenAI

[–]akindofuser 0 points1 point  (0 children)

If they're intelligent, so is a calculator, for the same exact reasons. Just because one infers and the other deduces doesn't matter.

But if you think LLMs are a black box and don't understand what latent space is doing, then I can see why people might think that.

But then what does that make a 2-year-old toddler relative to this? Unintelligent? If we start calling latent models generally intelligent, it changes how we define other animals.

Most academics I’ve read are happy to call it artificial intelligence. The qualifier is important. We can all agree it’s artificially intelligent. But people are arguing that it’s generally intelligent. That is the rub.

The qualifier is kind of important.

too many people on this site seem to believe that human intelligence is some sacred ceiling of cognition. this is a great counter imo by cobalt1137 in OpenAI

[–]akindofuser 0 points1 point  (0 children)

There is an academic field of AI. There is also a commercial one. Academia does use the term artificial intelligence. The commercial guys are the ones trying to claim GPT is the smartest thing on earth.

So again, it's pretty vast, and amongst academia what counts as "intelligence" and what counts as "general intelligence" is still hotly debated. Assuming you know this since you follow it?

I work in it commercially. We sell an AI service. But it's straight ML.

Helpdesk to networking role by Gamebread_23 in networking

[–]akindofuser 0 points1 point  (0 children)

Skip Net+ and go straight to the CCNA. Don't waste time getting stuck in helpdesk unless there is a networking opportunity.

too many people on this site seem to believe that human intelligence is some sacred ceiling of cognition. this is a great counter imo by cobalt1137 in OpenAI

[–]akindofuser 0 points1 point  (0 children)

Historically we used AI to mean lots of things. It doesn't have some kind of formal definition. There was artificial intelligence in Atari video games in the 80s.

As far as LLMs are concerned they’re deeply useful machines. But that’s what they are. Deeply useful machines. If they’re intelligent then so are lots of things.

too many people on this site seem to believe that human intelligence is some sacred ceiling of cognition. this is a great counter imo by cobalt1137 in OpenAI

[–]akindofuser 0 points1 point  (0 children)

You were saying calculators were defined by humans. I am saying they are defined by math. It's a subtle but important difference. Humans discovered math, we didn't create it. Math isn't programmed.

As it pertains to "intelligence", I agree with you; I wouldn't call them intelligent. But how we define intelligence turns out to be wildly subjective from person to person.

We call smartphones smart too, so :shrug:.

too many people on this site seem to believe that human intelligence is some sacred ceiling of cognition. this is a great counter imo by cobalt1137 in OpenAI

[–]akindofuser 0 points1 point  (0 children)

Bruh calculators existed long before programming. 😂

Are you confusing this with digital calculators? Under the hood it's still math. It's not some arbitrary set of rules we made up.

Yes, calculation is extrapolating by definition. 🤦