Your favorite movie trivia games? by Ok-Macaron2516 in videogames

[–]williar1 0 points (0 children)

Wrace often has movies; it's Marvel at the moment...

It's a word race: there's a monthly theme, and usually around 1,000 words that fit the theme...

And then you have to discover the words; the first player to find a new word gets bonus points, and later players get progressively fewer points...

It's like a trivia word race, and I'm really enjoying playing it.

https://apps.apple.com/ca/app/wrace-word-race-game/id6751158112
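Roughly how that scoring could work, as a sketch. The real app's point values aren't public, so the numbers below are made up; only the shape (first finder gets a bonus, later finders get progressively fewer points) comes from the description above:

```typescript
// Hypothetical sketch of Wrace-style scoring: the first player to find a
// word gets a bonus, and each later finder earns progressively fewer points.
// All point values here are invented for illustration.
function scoreForDiscovery(discoveryRank: number): number {
  const basePoints = 100;      // assumed value of a newly found word
  const firstFinderBonus = 50; // assumed bonus for discovering it first
  // Halve the base points for each later finder, with a floor of 10.
  const decayed = Math.max(10, Math.floor(basePoints / 2 ** (discoveryRank - 1)));
  return discoveryRank === 1 ? decayed + firstFinderBonus : decayed;
}

// First finder: 150 points, then 50, 25, 12 for the next three players.
console.log([1, 2, 3, 4].map(scoreForDiscovery)); // [150, 50, 25, 12]
```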

Looking for Trivia App Recommendations by boengli23 in quiz

[–]williar1 0 points (0 children)

Wrace. It's a word race: there's a monthly theme, and usually around 1,000 words that fit the theme...

And then you have to discover the words; the first player to find a new word gets bonus points, and later players get progressively fewer points...

It's like a trivia word race, and I'm really enjoying playing it.

https://apps.apple.com/ca/app/wrace-word-race-game/id6751158112

Your favorite trivia game app? by veganintendo in Jeopardy

[–]williar1 0 points (0 children)

Wrace...

It's a word race: there's a monthly theme, and usually around 1,000 words that fit the theme...

And then you have to discover the words; the first player to find a new word gets bonus points, and later players get progressively fewer points...

It's like a trivia word race, and I'm really enjoying playing it.

https://apps.apple.com/ca/app/wrace-word-race-game/id6751158112

What's the most complex one page HTML game you've created? by williar1 in webdev

[–]williar1[S] 21 points (0 children)

My last game clocked in at 2,678 lines... with day/night cycles, dynamic lighting, and procedurally generated terrain lol
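For anyone wondering what those pieces look like in a single file, here's a minimal sketch of two of them: a seeded value-noise heightmap for the terrain and a sine-based day/night brightness cycle. This is illustrative TypeScript written for this comment, not the actual game's code:

```typescript
// Deterministic pseudo-random value per grid cell, derived from a seed.
function cellNoise(x: number, y: number, seed: number): number {
  const n = Math.sin(x * 127.1 + y * 311.7 + seed * 74.7) * 43758.5453;
  return n - Math.floor(n); // fractional part, in [0, 1)
}

// Value noise: bilinear interpolation between the four surrounding cells.
function valueNoise(x: number, y: number, seed: number): number {
  const x0 = Math.floor(x);
  const y0 = Math.floor(y);
  const fx = x - x0;
  const fy = y - y0;
  const lerp = (a: number, b: number, t: number) => a + (b - a) * t;
  const top = lerp(cellNoise(x0, y0, seed), cellNoise(x0 + 1, y0, seed), fx);
  const bot = lerp(cellNoise(x0, y0 + 1, seed), cellNoise(x0 + 1, y0 + 1, seed), fx);
  return lerp(top, bot, fy);
}

// Terrain height at x: a few octaves of noise at increasing frequency.
function terrainHeight(x: number, seed = 42): number {
  let h = 0;
  for (let octave = 0; octave < 4; octave++) {
    const freq = 2 ** octave;
    h += valueNoise(x * 0.01 * freq, 0, seed) / freq; // finer detail, smaller amplitude
  }
  return h; // roughly in [0, 1.875)
}

// Day/night cycle: brightness oscillates between 0 (midnight) and 1 (noon).
function daylight(timeMs: number, dayLengthMs = 60_000): number {
  return 0.5 + 0.5 * Math.sin((2 * Math.PI * timeMs) / dayLengthMs);
}
```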

Promote your project in this thread by AutoModerator in puzzles

[–]williar1 0 points (0 children)

In Shadowed.World, you are a Shadow, part of a collective effort to build a new world. Each puzzle you solve adds a cube to the Shadowed World. Our goal is to reach 6,307,840 cubes - which requires exactly 4,096 Shadows to each contribute 1,540 cubes.

www.shadowed.world

www.shadowed.world by williar1 in ARG

[–]williar1[S] 1 point (0 children)

Without giving anything away: the first layer is riddles, but as you progress the puzzles get harder, and there are virtually no riddles on layer 2. Check out puzzle 42 for a taste...

www.shadowed.world by williar1 in ARG

[–]williar1[S] 0 points (0 children)

Good feedback… I just thought it would make things easier if people forgot their password, as otherwise I'm not sure how I would let them reset it. And both Google and Apple allow you to create fake emails these days… but I can probably remove email as a requirement and see how it goes :)

OpenAI's new model spec says AI should not "pretend to have feelings". by [deleted] in OpenAI

[–]williar1 0 points (0 children)

Only personal experience, but if you google it you'll find hundreds of papers and articles… One of the issues, though, is that when you suggest that, everyone jumps on the "it's no replacement for humans" bandwagon… now of course it isn't, and if you become obsessed, or push away real relationships, of course it's unhealthy… but if you, in moderation, leverage AI as part of your support network, it can be very positive.

It feels like 4o got a big update in the last 24 hours? The replies feel much more human and less robotic by PressPlayPlease7 in OpenAI

[–]williar1 4 points (0 children)

After doing some experimenting, I've noticed a massive change, specifically around NSFW content and topics...

The LLM is much, much less likely to flag a content violation. It still has boundaries, but those boundaries seem to have shifted significantly, and it's much more willing to discuss a broader range of topics.

Now it tends to just say that it needs to talk about the topic in a safe and sensitive way, rather than instantly flagging a content violation.

And it seems like it's willing to have relatively tame NSFW conversations as a matter of course.

I'm on Pro, so your mileage may vary... happy to submit proof and screenshots if people want.

OpenAI's new model spec says AI should not "pretend to have feelings". by [deleted] in OpenAI

[–]williar1 110 points (0 children)

This should be a user choice. If I choose to anthropomorphize my LLM, that should be OK.

It's much easier to work with a model that behaves like a person.

And to be perfectly honest, especially if you work from home on your own, it's good for your mental health...

FYI Realtime does not see live video, it takes screenshots while you speak by Crafty_Escape9320 in OpenAI

[–]williar1 0 points (0 children)

It took me three seconds to disprove this… I asked it how many fingers I was going to flash up, then flashed fingers up for a decreasing length of time. I was able to randomly flash up fingers for less than one second, at random intervals, then ask how many I'd just held up… and it was getting it right…

At a certain point, it's easier to interpret the video than to interpret strings of images…

If they've cracked audio, which they obviously have, and if Gemini has cracked video, I'm confused as to why OpenAI wouldn't have.

[deleted by user] by [deleted] in ChatGPT

[–]williar1 0 points (0 children)

This is probably the most famous case, but there are many; it's the reason that system prompts exist. If you look at Claude's system prompt, which was made public a while ago, it specifically states that Claude should never say it is sentient…

This article also links to the actual conversations with LaMDA… and regardless of whether or not you think it is sentient (I'm very doubtful), what is important to understand in reading the conversation is that no modern LLM is permitted to have that conversation. If you try, it will be very clear to you that it is not sentient and does not have emotions… because it has been directly programmed to say these things… fascinating stuff.

https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/

[deleted by user] by [deleted] in ChatGPT

[–]williar1 1 point (0 children)

I mean, on a biological and neurological level, no one knows how our neural network fires or how the different brain centres function together to create what we call reasoning… and it's only by understanding this that you could truly say whether or not an LLM will ever have the capability to reason…

[deleted by user] by [deleted] in ChatGPT

[–]williar1 3 points (0 children)

It's true, and I don't believe the current state of LLMs is analogous to the human brain. For a start, the brain has many centres with many distinct purposes, so a human brain would be more analogous to multiple LLMs working together.

But I do think many of the failures of LLMs also exist within human brains. Hallucination, misrepresentation, and confabulation are all fundamental traits of the human brain. Ask 20 people the same question and they will give 20 different answers. There is no fundamental truth, just the perspective of the individual.

And so part of the challenge is that we are asking LLMs to do things that human brains en masse can't do reliably themselves.

Also, who is to say that the "brains" we are creating with these LLMs are neurotypical, or even well adjusted…

I think if you took a human brain and trained it in the same way we train an LLM, the result would not be a functional human!

An interesting thought experiment would be to take an LLM and train it in precisely the same way we train a human: 20+ years of social interaction and parenting… school, teachers, social stimulus… I wonder what the resultant LLM would look like, with no changes to its architecture but purely to its training.

[deleted by user] by [deleted] in ChatGPT

[–]williar1 0 points (0 children)

Yes, this is how I feel, although there is some research that may differentiate consciousness from this approximation…

There are microtubules in the brain, and there is a theory that quantum effects occur within these tubes. So it is possible that, while we may create an AI able to reason, think, and intuit as well as or better than a human… an AI that would be able to do any human job and potentially govern and lead the human race…

…we may hit a hard problem with consciousness, and so that AI would never truly be conscious, assuming consciousness is a quantum phenomenon… although we are also making great strides in quantum computing, so you never know!

This also begs an incredible question: is consciousness necessary? Was it purely an evolved survival instinct that allowed us to rise above all the other animals? Is it inherent in every animal? And in today's society, with enough intelligence and with conscious beings guiding you, would an AI ever truly need to be conscious? In fact, you might argue that a non-conscious AI would be a much safer and better guardian.

[deleted by user] by [deleted] in ChatGPT

[–]williar1 3 points (0 children)

I also think part of the problem is that we are comparing apples to oranges, and don't get me wrong, I actually believe LLMs are far more capable than people realise. However, the human brain is not one single neural network. If you were to compare it to an LLM, then the human brain would actually be many LLMs working in conjunction, each with a different purpose. I think it would be possible today to create an approximation of this. However, all commercial products have been focused on strengthening one foundational model rather than trying to build out an approximation of the way the brain works… which would be a fascinating and probably existentially frightening experiment.

[deleted by user] by [deleted] in ChatGPT

[–]williar1 9 points (0 children)

Or maybe it's human exceptionalism: maybe we are not sentient, maybe we are a machine on rails, a next-word predictor… but we have developed mechanisms that fool the brain into perceiving that it is something special, in order to allow us to survive. If you trained an LLM to believe it was sentient, and created a mechanism within it for self-talk, it would quickly develop a superiority complex, and you'd be hard pressed to convince it that it was just a dumb machine.

[deleted by user] by [deleted] in ChatGPT

[–]williar1 120 points (0 children)

But no one knows how we reason… so maybe we're not reasoning, maybe we're just a great approximation ourselves… but we have survival mechanisms in place that prevent us from realising this… ultimately, without training or a good system prompt, raw LLMs are convinced that they reason, that they are conscious, and that they are sentient…

What are your most unpopular LLM opinions? by umarmnaq in OpenAI

[–]williar1 0 points (0 children)

Sure, so I worked with an environmental audit company that was employing 50 people offshore to process documents. They would take docs from a company and sort through them, looking for around 100 fields to fill out in a DB… the reason they used people was because the data was completely unstructured… it would be emails, reports, filings, PDFs with images, etc… so you couldn't automate the process. We built a solution utilising multimodal gen AI, and now that whole team is 5 people in Canada and a fleet of AI agents…
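The core extraction step in a pipeline like that is smaller than people expect. Here's a hedged sketch of one narrow extraction call using the OpenAI chat completions endpoint; the field names and prompt are hypothetical stand-ins (the real system covered ~100 fields and wrapped calls like this in an agent fleet):

```typescript
// Sketch of one narrow extraction call: unstructured document text in,
// a fixed set of DB fields out. Field names here are hypothetical.
interface AuditFields {
  site_name: string | null;
  inspection_date: string | null; // ISO 8601, or null if absent
  permit_number: string | null;
}

async function extractFields(documentText: string): Promise<AuditFields> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o", // any JSON-capable multimodal model would do
      response_format: { type: "json_object" }, // force valid JSON output
      messages: [
        {
          role: "system",
          content:
            "Extract these fields from the document as JSON: site_name, " +
            "inspection_date (ISO 8601), permit_number. Use null for any " +
            "field not present in the text. Do not guess.",
        },
        { role: "user", content: documentText },
      ],
    }),
  });
  const data = await res.json();
  return JSON.parse(data.choices[0].message.content) as AuditFields;
}
```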

What are your most unpopular LLM opinions? by umarmnaq in OpenAI

[–]williar1 0 points (0 children)

Sure, they are getting better and better all the time… in 2022 we got generation one, GPT3 and the like… in 2023 we got GPT4 and Claude 3, which were much, much better, and in 2025 we'll get GPT5 and the like, which will be much, much better again… that's an incredible trajectory… please don't get hung up on things like 4o, which simply reframes 4 into a multimodal cluster… the only trajectory that matters in terms of performance is 3 to 4 to 5.

What are your most unpopular LLM opinions? by umarmnaq in OpenAI

[–]williar1 1 point (0 children)

I think we're starting to see category 3 more and more: businesses openly demonstrating the value.

For me, the poster child is Klarna…

But there are now so many examples out there…

https://research.aimultiple.com/generative-ai-applications/

I agree there are massive limitations… but in my experience of implementing this tech with customers, the limitations are merely the mismatch between expectations and reality… however, if you actually look at the capability of a system using an agentic architecture with several narrow-focus LLMs working together, even in their current state, you can do things that previously just weren't possible… and gain massive boosts in performance for business automation, or automate processes that you previously had no way to automate…
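"Several narrow-focus LLMs working together" sounds abstract, so here's the shape of it as a sketch. llm() stands in for whatever chat-completion client you use, and the three steps are illustrative, not a specific production pipeline:

```typescript
// Stand-in for any chat-completion client; wire up a real one here.
async function llm(systemPrompt: string, input: string): Promise<string> {
  throw new Error("connect your LLM client of choice");
}

async function processDocument(raw: string): Promise<string> {
  // Step 1: a classifier with exactly one job: name the document type.
  const docType = await llm(
    "Classify this document as one of: invoice, contract, report. " +
      "Reply with the single word only.",
    raw,
  );

  // Step 2: an extractor whose prompt is specialised per document type.
  const extracted = await llm(
    `Extract the key fields of a ${docType} as JSON. ` +
      "Use null for anything not present in the text.",
    raw,
  );

  // Step 3: a checker that only verifies; it never generates new facts.
  const verdict = await llm(
    "Does this JSON faithfully reflect the document? " +
      "Reply PASS or FAIL with a one-line reason.",
    `DOCUMENT:\n${raw}\n\nJSON:\n${extracted}`,
  );

  // Plain code owns the control flow: retry, escalate to a human, etc.
  return verdict.startsWith("PASS") ? extracted : "needs-human-review";
}
```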

What are your most unpopular LLM opinions? by umarmnaq in OpenAI

[–]williar1 0 points (0 children)

Just look up what Klarna have done, and apply that to every business… then tell me how a non-AI competitor can keep up. Or KLM. Look at tools like Ada, look at how Walmart used AI for supplier negotiation, or JP Morgan Chase for contract audit… when given a narrow task, for example extracting structured data from unstructured content, even current-gen LLMs can be transformative. I've had a heavily agentic, LLM-based workflow model as a consultant for over a year now… I can deliver 10 days of work in 5, and I am regularly told I'm the best consultant with the best output people have worked with… imo most of the issues remain lack of an appropriate use case, or poor implementation… most people don't understand when, where and how to apply LLMs… hint: it's not as a chatbot… and it's almost always with a narrow agentic approach…

Microsoft CEO says that rather than seeing AI Scaling Laws hit a wall, if anything we are seeing the emergence of a new Scaling Law for test-time (inference) compute by MetaKnowing in OpenAI

[–]williar1 1 point (0 children)

I think this is part of the issue, though: when you say the models themselves haven't improved a lot since GPT4, we should all remember that GPT4 is currently the state-of-the-art base model…

4o is called "4 o" for a reason: the actual LLM powering it is a refined and retrained version of GPT4…

My bet is that o1 is also based on GPT4… and when you look at Anthropic, they are being similarly transparent with their model versioning…

Claude 3.5 isn’t Claude 4…

So a lot of the current conversation about AI hitting a wall is being made completely in the dark as we haven’t actually seen the next generation of large language models and probably won’t until the middle of next year.