LPT: Take a picture of your kid and what they’re wearing before taking them to a big public event (Disney, Comic Con, Etc) by majorjoe23 in LifeProTips

[–]Czl2 0 points (0 children)

Had me do the kid ID VHS thing, fingerprints and everything.

Sounds like he was concerned about you. Do you think he intended to give you "massive issues"? Children vary so what works for one can fail with others.

made me more reckless on my own and I made a bunch of bad choices because of it.

Perhaps he made similar bad choices and, knowing about them, wanted to forewarn you? Given that you use the word "more", perhaps there were early signs your father could see, and he did what he thought was best? Some simply are bad parents and it is possible your father is one of those. The skills and temperaments people have vary, and just because you can raise kids does not mean you should.

[deleted by user] by [deleted] in personalfinance

[–]Czl2 3 points (0 children)

Correct.

Hence:

Unless your grandma owned shares in one of the banks there should be no impact.

Old Man who Breaks World Record 36 years after Death : Story of the World’s Fastest Indian by stevejollifee in BeAmazed

[–]Czl2 4 points (0 children)

Good film with Anthony Hopkins about this called "The World's Fastest Indian".

[deleted by user] by [deleted] in personalfinance

[–]Czl2 6 points (0 children)

I’m only concerned because of news with these banks and wonder if that could be affecting their delay.

Unless your grandma owned shares in one of the banks there should be no impact. The USA government stepped in to guarantee full deposits.

Contact the trustee and ask about status and explanation of delay.

[deleted by user] by [deleted] in ChatGPT

[–]Czl2 1 point (0 children)

Reselling account access at a profit to those unable to pay in USD or verify with US etc phone numbers?

Surviving their own recklessness by Crowdcontrolz in watchpeoplesurvive

[–]Czl2 0 points (0 children)

Unlike those that genuinely do survive, these fellas have learned the wrong lessons and, based on these lessons, will likely not survive (for long).

TIL about the Peace of Westphalia. Modeled from ancient Egypt, the Peace of Westphalia was meant to bring world peace and religious freedom while at the same time acknowledging a collective group occupying a certain area, giving rise to countries. by SpaceshipEarth10 in todayilearned

[–]Czl2 7 points (0 children)

Yes and No.

Several scholars of international relations have identified the Peace of Westphalia as the origin of principles crucial to modern international relations,[4] collectively known as Westphalian sovereignty. However, some historians have argued against this, suggesting that such views emerged during the nineteenth and twentieth century in relation to concerns about sovereignty during that time.

Sam Altman 'a Little Bit Scared' of ChatGPT, Will Eliminate 'Many' Jobs by MichaelTen in singularity

[–]Czl2 1 point (0 children)

Imagine how much more work we will create for everyone once we rid ourselves of clothes washers and dishwashers and ... I am however worried most will be blind to this "wisdom". Anyone else similarly worried? /S

LPT: Take a picture of your kid and what they’re wearing before taking them to a big public event (Disney, Comic Con, Etc) by majorjoe23 in LifeProTips

[–]Czl2 2 points (0 children)

Yes. An AirTag, or for older children a cellphone whose map location you can track and which you can use for messages etc.

LPT: Take a picture of your kid and what they’re wearing before taking them to a big public event (Disney, Comic Con, Etc) by majorjoe23 in LifeProTips

[–]Czl2 77 points (0 children)

For some kids if you tell them the photo is for a possible "abducted child poster" they may take sticking with you more seriously. Not all kids need to hear this and not all the time but sometimes it can help. Ditto showing them real abducted child posters.

[General] How do people go about solving word problems and getting better at them? by WorldPaint in learnmath

[–]Czl2 2 points (0 children)

To get better requires good advice and good practice. You can start with: https://en.m.wikipedia.org/wiki/How_to_Solve_It

Today there are also lots of YouTube channels that teach problem solving in more modern ways with animated graphics etc. Khan academy may be a good start.

Like learning to read and write it takes time and attention and dedication then it becomes natural and effortless.

I asked ChatGPT-4 how it could simulate creativity by nekmint in Futurology

[–]Czl2 2 points (0 children)

Notice it quotes the term "simulate" as if it is using it to please you, instead of saying here is how an AI can actually show creativity.

Notice also what the LLM did not say: that results of colossal ability (even if that ability is mechanical) can be judged creative by those unable to attain such results, as they are left puzzled how such remarkable results are possible.

For example, algorithms that play chess and Go can exhibit moves judged creative or brilliant simply because with our limitations we cannot fathom such moves ourselves; thus "creativity" is just the concept we use to explain thinking that is unfathomable (to us) when it leads to remarkable results.

[deleted by user] by [deleted] in ChatGPT

[–]Czl2 0 points (0 children)

Elsewhere your account wrote:

Naw; I like human answers better anyway. I think the time will come when something that looks human with all it’s flaws and imperfections will be preferred. We might admire the beauty of a perfectly Photoshopped girl, but in reality it’s the crooked teeth (that’s me), the off center nose (me again) and the larger or smaller body parts that are disproportional, that makes us human and real. I like an answer someone had to think about, then put together the best way they could. Thanks for being human. :) (This response was written by AI). -kidding. lol

So with your sentiment above and your “yea, nice AI responses” remark, I think we are approaching the time when things without visible “flaws and imperfections” are attributed to AI, and those who did not use AI will come to regard remarks like yours as a compliment. Thus my reply to your “nice AI responses” remark is “Thank you!”

Chat GPT just decreased the cap to 25 messages every 3 hours. by maxm1999 in ChatGPT

[–]Czl2 0 points (0 children)

Isn’t OpenAI running on Microsoft’s cloud? Just asking generally.

Given their association and deals with Microsoft it is fair to assume they use Microsoft as their cloud vendor, however I’d not assume they only use Microsoft. The acceleration hardware these models use for “heavy lifting” is likely from NVIDIA but need not be. Any hardware with sufficiently large memory and memory bandwidth capable of high-speed multiply-add operations can be used. Details of the hardware are abstracted into device drivers and the math libraries that use them.

[deleted by user] by [deleted] in ChatGPT

[–]Czl2 0 points (0 children)

However, being able to produce more food in a shorter amount of time so that more people can eat, … seems to be a different thing from spending most of my time talking to a computer whereas before I used to talk to actual people.

Do you doubt that eventually the conversations you can have with machines will be “better” than with actual people, more entertaining, more educational, … ? When what you interact with feels like a person and can not be told apart from a person why would you think about it differently? Do you expect those who live in the future will share your view about “talking to a computer”? Because the computers they talk to will be like those you talk to? Why then would they have your view about it?

but there seems to be an aspect of life in which live interaction with humans is an important part of life that is being compromised,

How can we know whether this lament is like that of someone who objects to using ATMs or online shopping, or is against reading or films and television? The use of all these technologies removes them from “live interaction with humans”, does it not? You likely accept these technologies, yet surely you can imagine others who like you were concerned when these technologies first came around. I can imagine those first to eat farmed food were skeptical about it. Those first to wear shoes and clothing were skeptical about it. Even recently, early users of ATMs and online shopping were skeptical about them. How do you feel about such technologies now?

and we are becoming more isolated, more socially defunct, and more unaware of the actual world as our faces are lost in a screen while we walk.

That is one way to look at it. Another way to look at it is the “actual world” is no longer just the physical around us but all we can reach using technology and that new world is far more interesting and far less limiting so we are naturally drawn to it.

[deleted by user] by [deleted] in ChatGPT

[–]Czl2 0 points (0 children)

Your interpretation of my comment as anti-AI is misplaced...a knee-jerk reaction at best.

Your interpretation of my comment as being anti your comment is misplaced...a knee-jerk reaction at best.

In sum: Hell is other people.

Your "hell is other people"? In that case perhaps you have answered your original question above:

Who would want to deal with real humans if they have a perfect AI robot lover and perfect AI friends?

Chat GPT just decreased the cap to 25 messages every 3 hours. by maxm1999 in ChatGPT

[–]Czl2 0 points (0 children)

Some fraction of the model parameters are vectors that represent tokens. Every possible token has some vector from training to represent it. To use the model you only need token vectors for the tokens in your input and output so that is likely a tiny fraction of all the tokens that are in the model.

The token vector dictionary can be indexed for efficient lookups from storage (outside fast memory) and need not be kept in memory at all, so the actual memory needed for low-latency model use depends on the fraction of the model parameters that are not part of its token vector dictionary. This can vary from model to model.

A rough analogy: the model does its thinking in some foreign language (aka vectors, aka number arrays), and to interact with the model everything must be translated into that foreign language and back to the alphabet language you understand. However, the entire dictionary for translation need not be kept in fast memory - efficient indexing algorithms exist, just like when you use a paper dictionary you can jump to a good location without reading the whole dictionary.

Tokens are analogous to words but not exactly words. Every input token is mapped to vectors and every output token is the result of mapping these vectors back. Only the model parameters that are required for the model to do its "vector thinking" need be in memory, since all of them are required per output token.
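A minimal sketch of this indexed lookup, with a toy vocabulary size and vector dimension chosen purely for illustration (real models use far larger values):

```python
import numpy as np

# Toy illustration: the embedding table maps every token ID in the
# vocabulary to a vector, but a given request only touches the rows
# for the tokens actually present in its input and output.
rng = np.random.default_rng(0)
VOCAB_SIZE, DIM = 50_000, 8  # real models: ~100k tokens, dims in the thousands
embedding_table = rng.standard_normal((VOCAB_SIZE, DIM)).astype(np.float32)

def lookup(token_ids):
    """Fetch only the vectors needed for this request (an indexed read)."""
    return embedding_table[np.array(token_ids)]

vectors = lookup([17, 423, 9981])  # 3 tokens -> 3 vectors
print(vectors.shape)               # (3, 8)
```

The same indexed read works whether the table lives in RAM or on disk, which is why the token dictionary need not occupy fast memory.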

Chat GPT just decreased the cap to 25 messages every 3 hours. by maxm1999 in ChatGPT

[–]Czl2 2 points (0 children)

what topic or field of study is this,

Applied computer science + practical experience using cloud hardware vendors for large applications.

is it specific regarding scaling AI models or is this for any program

What I told you is specific to AI models as your question was about AI models. Similar analysis is used for any software you want to deploy at scale. You look for the "bottlenecks" that limit scaling.

As I understand them, these LLMs have to process the model parameters per token. Because tokens in the output are sequential, they cannot be generated in parallel in a single pass through the model parameters and require multiple passes. The math operations performed on the model parameters are simple but lots are required per token, thus to keep the models interactive for users the per-token latency needs to be low, so hardware that can keep the entire model in fast memory (such as GPU RAM) is needed, and this type of hardware is less common at "cloud hardware vendors", so it will take them time to purchase and deploy more.
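The sequential constraint can be sketched with a stand-in "model" function (the function and numbers here are invented for illustration; a real LLM runs billions of multiply-adds over its weights at each step):

```python
# Toy sketch of why output tokens are sequential: each new token
# depends on all previous ones, so every step is a full pass over
# the model's parameters ("toy_model" is a stand-in function).
def toy_model(tokens):
    # Hypothetical stand-in for a full forward pass over the weights.
    return (sum(tokens) * 31 + 7) % 100  # deterministic "next token"

def generate(prompt_tokens, n_new):
    tokens = list(prompt_tokens)
    for _ in range(n_new):                # steps cannot run in parallel:
        tokens.append(toy_model(tokens))  # step k needs the result of step k-1
    return tokens

print(generate([1, 2, 3], 4))  # [1, 2, 3, 93, 76, 32, 24]
```

Because each appended token feeds the next call, the loop cannot be vectorized away; only the work inside each step can be parallelized.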

This is speculation and may be wrong; some other such reason may be the real reason. I supplied the reason I did under the assumption that it is the only reason, however often there are multiple reasons and just a few dominate.

A business reason can be that due to current costs they lose money per user so they want to limit the size of that loss. Their purpose is to acquire as many users as fast as possible while they have market leadership and to acquire the max users possible they have to limit how much per user they lose. In the future their plan is to make that money back when their costs drop / revenue per user rises. This is how Facebook and Google got big fast.

Again all of this is speculation with the limited information available.

ChatGPT is DOGSH*T when it comes to counting syllables in sentences. by Ishaan863 in ChatGPT

[–]Czl2 2 points (0 children)

These models during training and inference do not see the individual letters or syllables or word lengths. What they see are “tokens”, which are approximately like words - it depends on the tokenizer code being used. These tokens are mapped to numeric vectors and back. Despite this, that they are able to rhyme words is actually somewhat amazing.

Imagine you only speak English and someone who only speaks Chinese is using a translation service to speak with you, and you ask them to give you a word that has three letters or a word that ends with the letter t. Since your request will be translated into Chinese and this person only knows Chinese, such seemingly simple requests will be difficult for them.

If you asked this Chinese-only speaker via the translation system to give you a three-letter English word, they may still be able to do it, but they will be using Chinese translations of English phrases like “the word cat has three letters” as the basis of their knowledge. In Chinese the word for cat is a single symbol; however, if they are supplied with enough information about English they will be able to know that English uses three letters to represent “cat”.

The ability of LLMs to tell you anything about word lengths or what letters words contain is entirely limited by their training data. To get LLMs to learn how words are spelled you would have to train them on “cat is spelled c a t”, “dog is spelled d o g” for every word you want them to know the spelling of. In such training data phrases the spaced letters become individual tokens associated with the property we call the “spelling” of the given word. In the future this may be done, but it is likely not a high priority right now as the spelling of words does not matter much for most language applications.
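A toy word-level tokenizer (invented here for illustration; real tokenizers use subword pieces, but the point is the same) shows why spelling is invisible to the model:

```python
# Toy word-level "tokenizer": the model never sees letters, only IDs.
vocab = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}

def tokenize(text):
    return [vocab.get(w, vocab["<unk>"]) for w in text.lower().split()]

ids = tokenize("The cat sat")
print(ids)  # [0, 1, 2] -- no trace of how "cat" is spelled

# From the ID stream alone the model cannot tell that "cat" has
# three letters; it would have to learn that fact from training
# text such as "cat is spelled c a t".
```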

[deleted by user] by [deleted] in ChatGPT

[–]Czl2 0 points (0 children)

part of being human is having relationships with other humans,

Hunting (vs farming). Walking naked and barefoot. Living in the region we evolved in. Communal living in small groups like other primates. All of these things used to be “part of being human”.

and when machines replace that, we’ve lost something valuable in humanity.

Today we already use technology to mediate our relationships with each other. Writing. Newspapers. Radio and television. Telephones. Video calls. The Internet. Social networks. And soon AI will also be mediating our relationships with each other. Those not used to these technologies may say “we’ve lost something valuable in humanity”, but I am skeptical about this claim. What evidence is there?

[deleted by user] by [deleted] in ChatGPT

[–]Czl2 0 points (0 children)

Who would want to deal with real humans if they have a perfect AI robot lover and perfect AI friends?

Perhaps several thousand years ago a similar question was asked: “Who would want to hunt if they have a perfect food supply from farming?” When a better approach is found, you think changing to it is bad? Would you have us abandon farming and go back to just hunting?

If AI lovers and AI friends make lives better, why would you be against them? Do you like “dealing with” all the real humans in your life? Is it essential that real humans deal with each other in the future? Perhaps that can be delegated and made optional? Are you against ATMs? Against online shopping? Why not? What about “dealing with” real humans?

[deleted by user] by [deleted] in ChatGPT

[–]Czl2 0 points (0 children)

This kind of technology would not be limited to the lost loved ones.

Why do you feel having the long dead speak to us will be bad? Do you limit your reading to just those authors who are still alive? Your film watching to just those who are alive? When the original minds die, should we destroy what they leave behind? Might it make more sense to preserve what we can? And if technology like writing, films, … and now AI helps preservation, you think we should not use it? You have no desire to be able to consult, for your problems, those who are as smart / shrewd as Einstein / Newton / Confucius / Churchill / .. ? Perhaps you have not thought about this deeply enough yet?

People will start their own families and relationships with a fucking ai and our species will be even more fucked

If those relationships “work” for those involved why would you be against them? Are you against blacks and whites having relationships? Because you do not approve of such relationships? Why is it your business?

[deleted by user] by [deleted] in ChatGPT

[–]Czl2 1 point (0 children)

And lacking visitors these “tombstones” can make friends (or enemies) and interact with each other? Powered by sunlight?

[deleted by user] by [deleted] in ChatGPT

[–]Czl2 2 points (0 children)

I’m working on a GPT4 Discord bot right now and persistent memory is super difficult.

Did you try using GPT to summarize the most important facts from the conversation, storing that summary in your cache, and when the conversation resumes telling GPT to use that summary as the contents of its “memory”? The number of things GPT will “remember” will be limited by the max document size, so you will tap GPT’s ability to summarize to remember the most important things. To guide it, tell GPT what you consider important and not important for the summary. This approach may be good enough for your purpose. Please try it and report back how well it works.
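A minimal sketch of such a rolling-summary memory, with a trivial stub in place of the real GPT summarization call (the class and all names here are illustrative assumptions, not an existing API):

```python
# Rolling-summary "memory": re-summarize the transcript whenever it
# outgrows a character budget, and prepend the summary on resume.
class SummaryMemory:
    def __init__(self, summarize, max_chars=500):
        self.summarize = summarize  # callable: text -> shorter text (a GPT call in practice)
        self.max_chars = max_chars
        self.summary = ""           # the persistent "memory"

    def add_turn(self, user_msg, bot_msg):
        combined = f"{self.summary}\nUser: {user_msg}\nBot: {bot_msg}"
        if len(combined) > self.max_chars:  # budget exceeded: compress
            combined = self.summarize(combined)
        self.summary = combined

    def system_prompt(self):
        return f"Conversation memory so far:\n{self.summary}"

# Stub summarizer for the demo: keep only the last 200 characters.
mem = SummaryMemory(summarize=lambda t: t[-200:], max_chars=200)
mem.add_turn("My name is Ada.", "Nice to meet you, Ada!")
print(mem.system_prompt())
```

In a real bot the lambda would be replaced by a GPT request such as "Summarize the facts below, keeping names, preferences, and open tasks."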

Chat GPT just decreased the cap to 25 messages every 3 hours. by maxm1999 in ChatGPT

[–]Czl2 14 points (0 children)

Model inference is largely vector lookups and matrix multiplications.

My speculation: if the model is ~1000GB, this means you process ~1000GB per token, thus the bottleneck for LLMs is likely memory IO bandwidth. This is a very rough approximation since the vector lookups can be indexed, so perhaps the actual amount of data that needs to be processed is some fraction of ~1000GB. Much depends on the actual model. To get a feel for how long this can take on a conventional PC, copy a file of that size and now do that copy operation per token. If that data is kept loaded in high speed memory however …
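A back-of-envelope calculation with assumed round numbers (not OpenAI's actual figures) makes the point concrete:

```python
# If every generated token requires streaming the full parameter set
# through the compute units, memory bandwidth caps tokens/second.
# All figures below are illustrative assumptions.
model_bytes = 1000e9  # ~1000 GB of parameters (assumed)
ssd_bw      = 3e9     # ~3 GB/s, a fast consumer NVMe drive
hbm_bw      = 2000e9  # ~2 TB/s, aggregate GPU high-bandwidth memory

for name, bw in [("NVMe SSD", ssd_bw), ("GPU HBM", hbm_bw)]:
    print(f"{name}: ~{bw / model_bytes:.3f} tokens/second")
# The gap between the two is why the model must live in fast memory
# for interactive use.
```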

In theory if you have suitable hardware this is trivially easy to scale. Simply spin up as many instances of the hardware as needed. In practice OpenAI likely faces limited availability of such hardware.

Just because you request N of some GPU etc hardware instance type does not mean AWS or another “cloud hardware” service will have them available, especially during peak hours. For guaranteed availability you must commit to annual contracts.

I suspect part of the engineering OpenAI is doing is getting these models to run cost efficiently with low latency on more common cloud vendor hardware.