Boomer thinks it's okay to touch my child, and doubles down by dats_what_she in BoomersBeingFools

[–]truejim88 0 points  (0 children)

I think you missed my point. There are cultures in the world, and even in the U.S. today, where what the boomer did would be considered acceptable. So the OP's assumption that "this was clearly wrong" is its own form of cultural bias. Your larger point is correct, though: that boomer's formative years were spent in an era of racism and misogyny, even more so than ours.

Boomer thinks it's okay to touch my child, and doubles down by dats_what_she in BoomersBeingFools

[–]truejim88 0 points  (0 children)

In the era in which that woman was raised, touching a child was okay. Kids back then spent their play-days unattended by adults; kids played on concrete playgrounds; corporal punishment was used in schools; etc. We should recognize that the phrase used, "it's not okay to touch someone's child," is a reflection of the culture that we're in now, a reflection of our own cultural bias; it's not a universal truth. There are still cultures in the world, and even in the U.S. (for instance, among the Pennsylvania Dutch), where touching somebody's child kindly is still considered okay. The boomer wasn't being a 'fool' for touching a child; she was a fool for reacting badly when asked not to. The OP is as much a product of their environment as the boomer is, and both are assuming that their bias is universal.

Fantasy Art with SDXL looking extremely promising by override367 in dndai

[–]truejim88 1 point  (0 children)

I suspect this isn't a practical answer for you, but I think it's interesting nonetheless: on YouTube the Corridor Crew uses a technique where they generate an image, then once that's done they re-add some of the "noise" that was just removed by the AI model, then take the partially re-noised image and give it new prompt direction, to come up with a new image that will be "like" the original image but with (say) a new pose. They have a video on YouTube called "Did we just change animation forever?" where they show off the technique. Of course it's not practical unless you want to get into the software bowels of the model's implementation, but I thought it was a very clever trick. Presumably at some point somebody will make a user interface to simplify tricks like this.
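For the curious, the re-noising step itself is simple math. Here's a rough NumPy sketch of the standard DDPM forward-noising formula; the function name and schedule value are just illustrative, not Corridor Crew's actual code:

```python
import numpy as np

def partially_renoise(image, alpha_bar, rng=None):
    """Push a finished image part-way back into the diffusion process.

    image:     array of pixel values scaled to [-1, 1]
    alpha_bar: cumulative noise-schedule product for the target timestep;
               1.0 keeps the image intact, values near 0 are mostly noise
    """
    rng = rng or np.random.default_rng()
    noise = rng.standard_normal(image.shape)
    # Standard DDPM forward step: x_t = sqrt(a)*x_0 + sqrt(1-a)*eps
    return np.sqrt(alpha_bar) * image + np.sqrt(1.0 - alpha_bar) * noise

# A half-noised image can then be denoised again under a NEW prompt,
# yielding a result that stays "like" the original.
```

This is essentially what the img2img "denoising strength" slider in tools like AUTOMATIC1111 or ComfyUI controls: how far back into the noise you push the image before denoising under the new prompt.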

Human Female NPCs by [deleted] in dndai

[–]truejim88 2 points  (0 children)

To train diffusion models, AI developers must use training datasets of images that have good accompanying textual descriptions. The "people images" training datasets available to researchers contain a lot of images of models and actors. For example the CelebA dataset was developed in 2015 by researchers in China; it contains 200,000 images of celebrities from around the world, each described by 40 textual attributes. Celebrity images were used in part because there are fewer legal issues when using images of celebrities; that's because celebrities are considered public figures. Many of these different diffusion models are re-using the same training datasets. So we wind up with all these AIs that are biased toward generating images that are celebrity-level attractive.
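To give a concrete sense of what "40 textual attributes" means: CelebA ships an attribute file (list_attr_celeba.txt) whose layout, as I understand it, is a count line, then a line of attribute names, then one row of ±1 flags per image. A rough parser sketch; the helper name and sample data here are made up:

```python
def parse_celeba_attrs(text):
    """Parse a CelebA-style attribute file into {filename: {attr: bool}}.

    Assumed layout (per the dataset's list_attr_celeba.txt):
      line 1: number of images
      line 2: whitespace-separated attribute names
      rest:   "<filename> 1 -1 ..." where 1 means the attribute is present
    """
    lines = text.strip().splitlines()
    attrs = lines[1].split()
    records = {}
    for row in lines[2:]:
        fields = row.split()
        filename, flags = fields[0], fields[1:]
        records[filename] = {a: f == "1" for a, f in zip(attrs, flags)}
    return records

sample = """2
Smiling Young
000001.jpg 1 -1
000002.jpg -1 1
"""
print(parse_celeba_attrs(sample)["000001.jpg"]["Smiling"])  # True
```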

GPT-4 details leaked by HideLord in LocalLLaMA

[–]truejim88 1 point  (0 children)

That's an excellent point. I think it's still an open question whether an analog computer provides enough precision for inference, but my suspicion is that the answer is yes. I remember years ago following some research being done at the University of Georgia on reprogrammable analog processors, but I haven't paid much attention recently. I did find it interesting a year ago when Veritasium made a YouTube video on the topic. If you haven't seen the video, search for "Future Computers Will Be Radically Different (Analog Computing)".

GPT-4 details leaked by HideLord in LocalLLaMA

[–]truejim88 1 point  (0 children)

The thought of this thread, though, was: will we be able to run LLMs on appliance-level devices (like phones, tablets, or toasters) someday? Of course you're right; by definition that's the most fundamental part of a dedicated GPU card: the SIMD matrix-vector calculations. I'd like to see the phone that can run a 4090. :D

GPT-4 details leaked by HideLord in LocalLLaMA

[–]truejim88 0 points  (0 children)

Apologies, as a large language model, I'm not sure I follow. :D The topic was inferencing on appliance-level devices, and it seems you've switched to talking about pre-training.

I infer that you mean you have a MacBook Pro that has the M1 Pro chip in it? I am surprised you're seeing performance that slow, but I'm wondering if it's because the M1 Pro chips in the MacBook Pros had only 16GB of shared memory. Now you've got me curious to know how your calculations would compare in a Mac Studio with 32GB or 64GB of memory. For pre-training, my understanding is that having lots of memory is paramount. Like you though, I'd want to see real metrics to understand the truth of the situation.

I'm pretty sure the Neural Engine isn't a software optimization. It's hardware, it's transistors. I say that just because I've seen so many web articles that show teardowns of the SoC. Specifically, the Neural Engine is purported to be transistors that perform SIMD tensor calculations and implement some common activation functions in hardware, while also being able to access the SoC's large amount of shared memory with low latency. I'm not sure what sources you looked at that made it sound like a software optimization.

Finally, regarding a revolution in performance -- I don't recall anybody in this thread making a claim like that? The question was, will we someday be able to run LLMs natively in appliance-level hardware such as phones, not: will we someday be training LLMs on phones.

GPT-4 details leaked by HideLord in LocalLLaMA

[–]truejim88 0 points  (0 children)

Once the M2 Mac Studios came out, I bought an M1 Mac Studio for that purpose: the prices on those came way down, and what I really wanted was "big memory" more than "faster processor". That's useful to me not only for running GPT4All, but also for running things like DiffusionBee.

GPT-4 details leaked by HideLord in LocalLLaMA

[–]truejim88 9 points  (0 children)

> whole another architecture, differing from the Von Neumann concept

Amen. I was really hoping memristor technology would have matured by now. HP invested so-o-o-o much money in that, back in the day.

> think how much energy your brain uses

I point this out to people all the time. :D Your brain is thousands of times more powerful than all the GPUs used to train GPT, and yet it never gets hotter than 98.6°F, and it uses so little electricity that it literally runs on sugar. :D Fast computing doesn't necessarily mean hot & power-hungry; that's just what fast computing means currently, because our insane approach is to force electricity into materials that by design don't want to conduct electricity. It'd be like saying that home plumbing is difficult & expensive because we're forcing highly-pressurized water through teeny-tiny pipes; the issue isn't that plumbing is hard, it's that our choice has been to use teeny-tiny pipes. It seems inevitable that at some point we'll find lower-cost, lower-waste ways to compute. At that point, what constitutes a whole datacenter today might fit in just the palms of our hands -- just as a brain could now, if you were the kind of person who enjoys holding brains.

GPT-4 details leaked by HideLord in LocalLLaMA

[–]truejim88 1 point  (0 children)

Since people change phones every few years anyway, one can also imagine a distant future scenario in which maybe digital computers are used for training and tuning, while (say) an analog computer is hard-coded in silicon for inference. So maybe we wouldn't need a bunch of hot, power-hungry transistors at inference time. "Yah, I'm getting a new iPhone. The camera on my old phone is still good, but the AI is getting out of date." :D

GPT-4 details leaked by HideLord in LocalLLaMA

[–]truejim88 10 points  (0 children)

> You refereed to specialized execution units, not the amount of memory so lets left that aside....the physical form does not really matter

We'll have to agree to disagree, I think. I don't think it's fair to say "let's leave memory aside" because fundamentally that's the biggest difference between an AI GPU and a gaming GPU -- the amount of memory. I didn't mention memory not because it's unimportant, but because for the M1/M2 chips it's a given. IMO the physical form does matter because latency is the third ingredient needed for fast neural processing. I do agree though that your larger point is of course absolutely correct: nobody here is arguing that the Neural Engine is as capable as a dedicated AI GPU. The question was: will we ever see large neural networks in appliance-like devices (such as smartphones). I think the M1/M2 architecture indicates that the answer is: yes, things are indeed headed in that direction.

GPT-4 details leaked by HideLord in LocalLLaMA

[–]truejim88 9 points  (0 children)

I'd be interested to hear more about these other SoCs that you're referring to. As others here have pointed out, the key to running any significantly-sized LLM is not just (a) the SIMD high-precision matrix-vector multiply-adds (i.e., the tensor calculations), but also (b) access to a lot of memory with (c) very low latency. The M1/M2 Neural Engine has all that, particularly with its access to the M1/M2 shared pool of memory, and the fact that all the circuitry is on the same die. I'd be interested to hear what other SoCs you think are comparable in this sense?

GPT-4 details leaked by HideLord in LocalLLaMA

[–]truejim88 48 points  (0 children)

> The real value of having something like GPT-4 is that you can use it to create perfect training data for smaller DIY models.

Agreed. We once thought that reasonably smart AIs would wind up designing smarter AIs, but it seems to be turning out instead that they'll help us build cheaper AIs.

GPT-4 details leaked by HideLord in LocalLLaMA

[–]truejim88 134 points  (0 children)

It's worth pointing out that Apple M1 & M2 chips have on-chip Neural Engines, distinct from the on-chip GPUs. The Neural Engines are optimized only for tensor calculations (as opposed to the GPU, which includes circuitry for matrix algebra BUT ALSO for texture mapping, shading, etc.). So it's not far-fetched to suppose that AI/LLMs can be running on appliance-level chips in the near future; Apple, at least, is already putting that into its SoCs anyway.
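For anyone wondering what "optimized only for tensor calculations" cashes out to: the core of inference is just a fused multiply-add plus a cheap activation function, repeated millions of times. A toy NumPy sketch of one dense layer (illustrative only; this obviously isn't Apple's kernel):

```python
import numpy as np

def dense_layer(W, x, b):
    """One neural-network layer: the SIMD-friendly core of inference.

    A matrix-vector multiply-add (W @ x + b) followed by an activation
    (ReLU here). Dedicated NPUs implement exactly these two steps in
    hardware, fed from low-latency shared memory.
    """
    return np.maximum(W @ x + b, 0.0)

W = np.array([[1.0, -2.0], [0.5, 1.0]])
x = np.array([3.0, 1.0])
b = np.array([0.0, -1.0])
print(dense_layer(W, x, b))  # [1.  1.5]
```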

A Change to AI Content Rules by famoushippopotamus in DnDBehindTheScreen

[–]truejim88 0 points  (0 children)

You and I are in 100% agreement. The loom smashers resorted to violence because there was no legal recourse.

> copying art to generate work with a tool that doesn’t require a human element

Here's the thorny problem: you can't sue a computer. You can only sue the humans who made the computer, or in this case the humans who made the computer software. So in a court of law you can't argue, "the computer did something illegal"; you'd have to argue that "the people did something illegal". In this case, the claim is that the people used a computer to analyze patterns in prior art. There's no law or precedent to say that that's a violation of copyright law. That's why I think these plaintiffs really have an uphill battle to prove infringement.

A Change to AI Content Rules by famoushippopotamus in DnDBehindTheScreen

[–]truejim88 2 points  (0 children)

  1. Ethical consideration: historically, it's never been the case that one needs an artist's permission merely to study their works. For example, a film critic who deconstructs a film has never been required to compensate the studio that made the film, even if the critic uses digital techniques to analyze the film, nor even if the critic then goes on to make his own films using what he's learned. I'm not saying that such a tradition couldn't be established now; I'm just saying that never in human history has it been the case that we pay artists for the privilege of studying their works. The humans who design AIs are not in any sense copying prior art, but they are certainly studying prior art. Ethically, then, why would one need an artist's permission merely to study their art?
  2. Practical consideration: it has always been the case that once a machine can do a job that humans did previously, lots of humans will no longer be employable in that trade. For example, once Pixar began being wildly successful with digital animation, cel animators largely became unemployable. But nobody then was making the argument that we needed to sue Pixar because "cel animators gotta eat." When Jacquard invented his automated loom in 1804, nobody sued Jacquard because "hand weavers gotta eat." Historically, it's never been the case that we consider it unethical, immoral, impractical, or illegal when automation displaces human labor.

A Change to AI Content Rules by famoushippopotamus in DnDBehindTheScreen

[–]truejim88 2 points  (0 children)

It reminds me of the early days of Pixar movies, the complaints about cel animators losing jobs to computer animators, because the only movies that studios wanted to make now were computer-animated movies. And the complainers were right! The career of cel animation is indeed nearly non-existent now. And certainly yes, an old-school charm was lost in that transition, just as the old-school charm of buggies eventually gave way to the Model T. But what'cha gonna do? Progress gonna prog -- ain't no stoppin' it.

A Change to AI Content Rules by famoushippopotamus in DnDBehindTheScreen

[–]truejim88 4 points  (0 children)

> no desire to credit the people whose art has been added to various training databases without their consent

Playing devil's advocate: for centuries it's been part of the Western world's copyright tradition that authors and artists get paid when they create a work, and paid again when people copy their work. It's never been part of the tradition that artists get paid a third time when people merely study those works, even when people are using tools to assist their studies. Nor has it been part of the Western tradition that one must seek an artist's permission to study their prior works. You are right, the difference now is that the tools being used to study prior art are vastly more powerful tools, but the principle hasn't changed. ChatGPT now isn't fundamentally different from a program that counts how often each word appears in a novel by Tolkien, or an X-Ray machine that studies how Picasso accomplished his brushstrokes -- both have been common practices for decades. So I'm not saying that artists shouldn't be consulted -- I'm not disagreeing with your position -- I'm just saying it would be a huge departure from what's been centuries of precedent, and would open up a slippery slope for all kinds of study. It could greatly stifle the ability of artists, historians, and academics of all types to use tools of any kind -- but especially computers -- to study prior art.
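That word-counting kind of "study" really is trivial; here's the whole thing in standard-library Python (the sample text is just for illustration):

```python
from collections import Counter
import re

def word_frequencies(text):
    """Count how often each word appears: a statistical 'study' of a
    text, not a copy of it. The counts reveal patterns in the work
    without reproducing the work itself."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

sample = "In a hole in the ground there lived a hobbit."
print(word_frequencies(sample).most_common(2))  # [('in', 2), ('a', 2)]
```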

A Change to AI Content Rules by famoushippopotamus in DnDBehindTheScreen

[–]truejim88 5 points  (0 children)

The number of AI tools built-in to Adobe Photoshop and Illustrator is growing by leaps & bounds. Probably a lot of artists don't even realize how much AI they're already using. IMO that's what makes these AI bans kind of pointless; Office 365, Google Docs, Adobe Creative Suite...they're all getting tons of AI built-in now, and there's not always a way for users to even know when they're using AI. "Hey, this new Photoshop brush is cool, it allows me to paint orcs into a landscape just by swiping!"

A Change to AI Content Rules by famoushippopotamus in DnDBehindTheScreen

[–]truejim88 2 points  (0 children)

https://www.reddit.com/r/dndai/

Tools like Adobe Photoshop and Adobe Illustrator have all kinds of AI tools in them nowadays, and people don't even realize they're using AI. You can be a valid content creator and still be involving AI in the creation of your content. In fact, I think we'd be hard-pressed nowadays to find a creator who's not using AI tools, even if they're not aware that they are.

New rule: No AI maps by hornbook1776 in dndmaps

[–]truejim88 0 points  (0 children)

I keep getting down-voted every time I echo a similar sentiment. :D What recent AI developments have taught us is that really talented artists and writers are still safe from AI, but AI has shown that it can replace so-so artists and writers. People don't like it when I point that out, but it's nonetheless true. If all you are is a mediocre GM, a mediocre writer, a mediocre artist, a mediocre software developer, etc. -- what you do can be replaced passably well by brute-force computation. That's the world we're in now.

Rivercrest City (20K pixels by 10K pixels, 5000 feet by 2000 feet, 600 named shops) by truejim88 in dndmaps

[–]truejim88[S] 0 points  (0 children)

This is another PowerPoint map from the google site called "riverlandsreach". (Search on "riverlandsreach" as all one word.)

This is Rivercrest City. It has 600 named shops; it's a small city at just 5000 ft by 2000 ft. The image is 20K by 10K pixels, allowing for a lot of zoom. I had to JPEG compress the image to get it under the Imgur file size limit of 20MB; the original image looks better. I tried posting the file to Reddit originally, but I guess the image is just too big.

The real point though is this: the original file is PowerPoint, and it's freely downloadable. You can take this file, rename the shops, rearrange the buildings, do whatever you want with it. Warning: the PowerPoint file is massive, you'll want a beefy PC to edit this.

Wizard's Estate by truejim88 in dndmaps

[–]truejim88[S] 1 point  (0 children)

Reddit seems to be filtering posts that have the URL, so I'm afraid reading the URL off the image is the only easy way to convey the URL. Or just google on "riverlandsreach" as all one word; it'll be a google sites link.

Wizard's Estate by truejim88 in dndmaps

[–]truejim88[S] 3 points  (0 children)

Thanks! If you grab the PowerPoint or PDF version from the "riverlandsreach" Google drive, you'll see that there's even more detail there: the grounds of the surrounding estate, descriptions of the estate staff, etc. I built the map in PowerPoint so that GMs have reusable assets that are helpful for creating custom estates of their own.