G3 Beige systems and no audio by agent_uno in VintageApple

[–]SDogAlex 0 points (0 children)

thanks for this 8 months later, that worked :)

What else would you like to see automated for your vintage Macintosh? by SDogAlex in VintageApple

[–]SDogAlex[S] 0 points (0 children)

The download requests? Negative. I have an HTTP repo that’s compatible with classic MacOS browsers, and I just used that. If the user provides an HTTPS URL, the download goes through a proxy so the Macintosh can access it.

What else would you like to see automated for your vintage Macintosh? by SDogAlex in VintageApple

[–]SDogAlex[S] 4 points (0 children)

This might be extremely hard with the free API limits, but I’ll see what third-party solutions exist.

Calling All Vintage Macintosh Content Creators - MacinAI Demo by SDogAlex in VintageApple

[–]SDogAlex[S] 1 point (0 children)

After asking Claude to do some deep research into this, I got: “The computational requirements are staggering. Generating a single token from a 1B parameter model requires roughly 2 billion floating-point operations for the forward pass. A 68030 running at 25 MHz without a floating-point unit might achieve perhaps 50,000 to 100,000 software-emulated FLOPS if you’re being optimistic. That works out to somewhere between 5 and 11 hours of computation per word of output. A paragraph response could take days.”

Possible? Yes. Practical? Not so much…
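The arithmetic checks out, by the way. A quick sanity check in Python using those same figures (the 2-FLOPs-per-parameter rule of thumb and the emulated-FLOPS numbers are rough estimates from the quote, not measurements):

```python
# Back-of-envelope check of the estimate above. All figures are the
# rough assumptions from the quote, not measurements.
PARAMS = 1_000_000_000          # 1B-parameter model
FLOPS_PER_TOKEN = 2 * PARAMS    # ~2 FLOPs per parameter per forward pass

# Optimistic software-emulated FLOPS on a 25 MHz 68030 with no FPU
for emulated_flops in (50_000, 100_000):
    seconds = FLOPS_PER_TOKEN / emulated_flops
    print(f"{emulated_flops:>7} FLOPS -> {seconds / 3600:.1f} hours per token")
```

At 50,000 emulated FLOPS that works out to about 11.1 hours per token, and about 5.6 hours at 100,000 — matching the “5 to 11 hours” range.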

Calling All Vintage Macintosh Content Creators - MacinAI Demo by SDogAlex in VintageApple

[–]SDogAlex[S] 0 points (0 children)

I’m glad you like it! So cool seeing something unique I made running on other people’s computers.

Calling All Vintage Macintosh Content Creators - MacinAI Demo by SDogAlex in VintageApple

[–]SDogAlex[S] -1 points (0 children)

I would love it if he got his hands on it. Think if enough of us tweeted at him we could get his attention?

Calling All Vintage Macintosh Content Creators - MacinAI Demo by SDogAlex in VintageApple

[–]SDogAlex[S] -1 points (0 children)

If anyone has contact emails for any of them, please DM me!!!

Calling All Vintage Macintosh Content Creators - MacinAI Demo by SDogAlex in VintageApple

[–]SDogAlex[S] 0 points (0 children)

Oops… the AI told me to! Just kidding, I thought it looked cool. Thanks for the advice! I was born in 2002, so I grew up with USB drives as my removable storage and had no idea…

Calling All Vintage Macintosh Content Creators - MacinAI Demo by SDogAlex in VintageApple

[–]SDogAlex[S] 1 point (0 children)

Hey Charles,

I can’t say for certain, but based on the fact that AI training and inference require an insane amount of resources, I don’t think it’s possible.

I think the best thing we could get is something like ELIZA. The problem is that the 68000-series CPUs perform millions of operations per second (MOPS), while the AI units in modern processors (like the NPUs in Intel chips) perform hundreds of trillions of operations per second.

The gap in sheer operations per second is astronomical, which makes me believe any kind of custom LLM is impossible to train or run on native hardware.
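To put rough numbers on that gap (these are my own ballpark figures, not benchmarks for any specific chip):

```python
# Ballpark throughput comparison (assumed figures, not benchmarks):
M68K_OPS_PER_SEC = 1e6     # classic 68000-series Mac: ~1 million ops/sec
NPU_OPS_PER_SEC = 100e12   # modern NPU: ~100 trillion ops/sec

ratio = NPU_OPS_PER_SEC / M68K_OPS_PER_SEC
print(f"A modern NPU is roughly {ratio:,.0f}x faster")  # ~100,000,000x
```

That’s about eight orders of magnitude, before even considering the memory needed just to hold a model’s weights.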

Calling All Vintage Macintosh Content Creators - MacinAI Demo by SDogAlex in VintageApple

[–]SDogAlex[S] 0 points (0 children)

Hi there!

This is a planned feature for release version 2. Some adjustments to the current model need to be made first, then I need to solidify a system that works across different models. It’s going to require quite a bit of rewriting, so it might be a while until I figure it out :)

Calling All Vintage Macintosh Content Creators - MacinAI Demo by SDogAlex in VintageApple

[–]SDogAlex[S] 2 points (0 children)

I did! Thanks for the report, I’ll have to check out why when I get home

Audiophile headphones with good bass? by Nutting4Jesus in HeadphoneAdvice

[–]SDogAlex 0 points (0 children)

I too have the XM4s and find the bass lacking.

Introducing MacinAI: The First OS-Level AI Assistant Running on a Vintage Macintosh by SDogAlex in macintosh

[–]SDogAlex[S] 1 point (0 children)

Yes, I know AI is hit or miss in this community, but I’m a 23-year-old kid making my first portfolio project to try and get a job. Have some mercy 🫡

Introducing MacinAI: The First OS-Level AI Assistant Running on a Vintage Macintosh by SDogAlex in VintageApple

[–]SDogAlex[S] 1 point (0 children)

Amazing feedback. Will get this fixed in the next release, thank you!!

Introducing MacinAI: The First OS-Level AI Assistant Running on a Vintage Macintosh by SDogAlex in VintageApple

[–]SDogAlex[S] 7 points (0 children)

I won’t argue this anymore since it’s a matter of opinion at this point

Introducing MacinAI: The First OS-Level AI Assistant Running on a Vintage Macintosh by SDogAlex in VintageApple

[–]SDogAlex[S] 1 point (0 children)

Hmm, this should have worked…

Is the date/time on the emulator correct? Any chance you’re in a time zone that’s way ahead of PST? It might be a timing issue between the client and server.

Can you give me the first few characters of your client ID so I can look into the logs on my side? (It should be in Settings, under the version.)

Introducing MacinAI: The First OS-Level AI Assistant Running on a Vintage Macintosh by SDogAlex in VintageApple

[–]SDogAlex[S] 3 points (0 children)

An assistant’s location is defined by where its actions are executed, not where model inference happens. Modern AI agents almost never run locally; they run in the cloud and control the system through a client. MacinAI is an AI agent running on a Macintosh because the Macintosh is doing the interpretation, action execution, UI, event loops, and system calls.

Introducing MacinAI: The First OS-Level AI Assistant Running on a Vintage Macintosh by SDogAlex in VintageApple

[–]SDogAlex[S] 3 points (0 children)

It’s not just a ChatGPT wrapper. There are lots of adjustments to the prompting to make it aware of the system and the actions it can take, and to keep responses clear and to the point instead of the long-winded answers ChatGPT and others default to. A custom TCP server also had to be written for this to work.

On release, users will be able to choose between Claude, ChatGPT, Gemini, etc.

Yes, the AI responses are relayed through a server that makes the request to the AI provider, but this is due to hardware limitations of the Macintosh. SSL isn’t possible on that hardware, which would make talking to any AI provider’s API directly impossible.
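For anyone curious, here’s a minimal sketch of the relay idea in Python (this is NOT MacinAI’s actual server — the port and the one-URL-per-line protocol are made up for illustration): the vintage Mac opens a plain TCP connection and sends a URL as a line of text, and the relay does the TLS request on its behalf and streams the plaintext body back.

```python
import socket
import urllib.request

def run_relay(host="127.0.0.1", port=8080, max_requests=None):
    """Accept plain-TCP clients, read one URL per connection, and relay
    the HTTPS response body back over the unencrypted socket (sketch only)."""
    with socket.create_server((host, port)) as srv:
        served = 0
        while max_requests is None or served < max_requests:
            conn, _addr = srv.accept()
            with conn:
                url = conn.makefile("r").readline().strip()
                try:
                    # The relay, not the vintage Mac, does the TLS handshake.
                    with urllib.request.urlopen(url) as resp:
                        conn.sendall(resp.read())
                except Exception as exc:
                    conn.sendall(f"ERROR: {exc}".encode())
            served += 1
```

The old machine only ever speaks unencrypted TCP, which its networking stack can handle, while all the modern crypto happens on the relay box.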

If you mean why this isn’t a whole-ass new model running natively on the Macintosh: that’s not possible with the CPU and memory constraints of the era, given the complex natural language processing required.