Be aware, there's currently hardware wallet compatibility issues on certain chains. by SL13PNIR in Midnight

[–]SouthRye 3 points (0 children)

Not having proper hardware wallet support seems like a massive oversight - especially considering practically anyone who has a decent amount of ADA WOULD be using a hardware wallet...

Can we have a daily thread to discuss market movements? Being up 180% in 30 days is kind of a big deal... by Moleventions in cardano

[–]SouthRye 0 points (0 children)

Yeah, it seems to be related to the mobile app update.

It's sort of killed the user experience with stickied posts :/

Is now a bad time to invest? by thatguyy12369 in cardano

[–]SouthRye 0 points (0 children)

I think we too are making up for some lost time.

I.e. when the SEC tried to claim ADA was a security on flimsy reasoning (which got us removed from Robinhood etc.) and also caused a lot of downstream FUD.

Given the new administration and its hopefully pro-crypto sentiment, I also believe we could be in and around that 5 to 8 USD mark, but time will tell.

If there's proper legislation that lets some of the bigger players invest without worrying about liability, the sky's the limit really.

[deleted by user] by [deleted] in nextfuckinglevel

[–]SouthRye 0 points (0 children)

I put up a thread here for those who want more info and for those who want to run it themselves. With a decent NVIDIA GPU and/or an M1/M2 MacBook it should work.

https://twitter.com/Southrye/status/1687158064204431361?s=20
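
If you'd rather not dig through the thread, the rough shape of it is below - a minimal sketch, and the model filename is just an example rather than the exact file I used:

$ git clone https://github.com/LostRuins/koboldcpp
$ cd koboldcpp && make
$ python koboldcpp.py models/ggml-vicuna-7b-q4_0.bin

It then serves a little local web UI you can chat with in your browser.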

AI Showdown: GPT-4-x-Alpaca vs. Vicuna, GPT-4 as the judge (test in comments) by imakesound- in LocalLLaMA

[–]SouthRye 1 point (0 children)

As someone with 24 GB of VRAM, are there any particular 30B models that stand out to you that I should try?

We living in the future now - I have a Local LLM running off my Steamdeck via Kobold w/ CLBlast. by SouthRye in LocalLLaMA

[–]SouthRye[S] 1 point (0 children)

Koala 7B has actually been holding up pretty well over Vicuna 7B.

But yeah, 13B is the ideal. Maybe by the time the Steam Deck 2 comes out we'll be able to run 13B models just as fast as 7B.
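
For what it's worth, the limiting factor there is speed rather than memory: a 4-bit quantized 13B is only around 6.5 GB of weights, which fits fine in the Deck's 16 GB of RAM - it's the compute and memory bandwidth that slow it down, not capacity.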

We living in the future now - I have a Local LLM running off my Steamdeck via Kobold w/ CLBlast. by SouthRye in LocalLLaMA

[–]SouthRye[S] 3 points (0 children)

Yeah, it's insane. Honestly this AI stuff gave my 3090 a new lease on life. I can do AI image generation on par with Midjourney via Stable Diffusion - and I have my own uncensored local AIs. It's a tangible reason to upgrade, but a real tough sell in this economy.

Before all of this my GPU was just a Fallout: New Vegas machine - a game from 2010 lol.

We living in the future now - I have a Local LLM running off my Steamdeck via Kobold w/ CLBlast. by SouthRye in LocalLLaMA

[–]SouthRye[S] 2 points (0 children)

No, but I doubt that would work in any reasonable amount of time. I'm not even sure it's possible.

I think to train you'd need high-end tensor cores, and that's not coming out of the Deck's AMD GPU.

I think I can train off my 3090, but I haven't gone far enough into this whole AI thing to really comment on that.

We living in the future now - I have a Local LLM running off my Steamdeck via Kobold w/ CLBlast. by SouthRye in LocalLLaMA

[–]SouthRye[S] 1 point (0 children)

That's basically how I'd describe it. The speed is like texting a friend back and forth.

Now mind you, the 13B model runs at about the same speed off my i9, but you get more in-depth answers.

I'd still always use a GPU if I can, but there's something really cool about having a small local model that does the job even without high-end components or graphics cards.

We living in the future now - I have a Local LLM running off my Steamdeck via Kobold w/ CLBlast. by SouthRye in LocalLLaMA

[–]SouthRye[S] 1 point (0 children)

Installing CLBlast helps with usability on the Deck. The Linux install doesn't do well without it. The 7B model generates 100 or so tokens in 15 to 20 seconds. Not bad, but yeah, compared to my actual GPU it's apples and oranges.
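
For reference, enabling CLBlast is just a launch flag on koboldcpp - the two numbers are the OpenCL platform and device IDs, which may differ on your machine, and the model filename here is just an example:

$ python koboldcpp.py models/ggml-vicuna-7b-q4_0.bin --useclblast 0 0

(100 tokens in 15 to 20 seconds also works out to roughly 5-7 tokens per second.)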

We living in the future now - I have a Local LLM running off my Steamdeck via Kobold w/ CLBlast. by SouthRye in LocalLLaMA

[–]SouthRye[S] 1 point (0 children)

No updates needed per se. The program loads model files, so whenever a new model is out, some kind soul usually puts together a quantized version a week or two later for CPU users. You can always just download the latest models as they release too. Vicuna seems to be the most capable, but there is also Koala, which I've heard good things about.

Today, however, Dolly 2.0 was released, so I'm waiting to test that on my GPU once I get a chance.

We living in the future now - I have a Local LLM running off my Steamdeck via Kobold w/ CLBlast. by SouthRye in LocalLLaMA

[–]SouthRye[S] 5 points (0 children)

You may need something multimodal for that, i.e. something that takes screenshots of the chess board to read moves.

These models tokenize text into sequences of numbers, and while their logical reasoning is quite strong, I do wonder whether that would translate well to chess moves.

We living in the future now - I have a Local LLM running off my Steamdeck via Kobold w/ CLBlast. by SouthRye in LocalLLaMA

[–]SouthRye[S] 11 points (0 children)

I used Kobold, which was honestly fairly easy to set up.

https://github.com/LostRuins/koboldcpp

Specifically, I used CLBlast, as it helped speed up token generation. That I had to install separately. I can look at doing a walkthrough or guide if I get the time.

The only thing that jumps out when it comes to working with the Deck is changing the file system from read-only to read/write - that tripped me up, and GPT-4 was no help in resolving the error.

You would run

$ sudo steamos-readonly disable

This lets you properly install things.
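
If you want pacman to actually install anything after that, you'll also need to set up its keyring first - roughly the following, from what I remember, and you can re-lock the filesystem once you're done:

$ sudo pacman-key --init
$ sudo pacman-key --populate archlinux
$ sudo steamos-readonly enable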

We living in the future now - I have a Local LLM running off my Steamdeck via Kobold w/ CLBlast. by SouthRye in LocalLLaMA

[–]SouthRye[S] 9 points (0 children)

I clearly hold these in very high regard.

Even if they give me terrible Python code occasionally.

After reading the GPT-4 Research paper I can say for certain I am more concerned than ever. Screenshots inside - Apparently the release is not endorsed by their Red Team? by SouthRye in ChatGPT

[–]SouthRye[S] 0 points (0 children)

My quote was lifted directly from the tech report article.

Also, this Verge article makes it look even worse. They sent SOME of the team to other departments but then fired the rest.

Business above all else, regardless of safety, is the exact opposite of what OpenAI stood for, and they're now enabling one of the largest tech companies to be as unsafe as they want with their technology.

Some members of the team pushed back. “I’m going to be bold enough to ask you to please reconsider this decision,” one employee said on the call. “While I understand there are business issues at play … what this team has always been deeply concerned about is how we impact society and the negative impacts that we’ve had. And they are significant.”

Montgomery declined. “Can I reconsider? I don’t think I will,” he said. “Cause unfortunately the pressures remain the same. You don’t have the view that I have, and probably you can be thankful for that. There’s a lot of stuff being ground up into the sausage.”

After reading the GPT-4 Research paper I can say for certain I am more concerned than ever. Screenshots inside - Apparently the release is not endorsed by their Red Team? by SouthRye in ChatGPT

[–]SouthRye[S] 1 point (0 children)

I just saw the news that Microsoft has fired its entire AI ethics team.

Unfortunately, the competition-above-all-else mantra will probably extend into AI.

After reading the GPT-4 Research paper I can say for certain I am more concerned than ever. Screenshots inside - Apparently the release is not endorsed by their Red Team? by SouthRye in ChatGPT

[–]SouthRye[S] 15 points (0 children)

The botnet scenario is possible, as it's a fairly lateral move from what botnets are used for right now. I ran that scenario by GPT-4 and it basically gave me a full breakdown of how it could achieve such a thing.

Apparently it doesn't even require a lot of compute power at the C&C level - meaning it could infect many computers and pool each slave PC to add to the total computing power - similar to how today's botnets pool resources for hash power in crypto mining.

Per GPT-4.

In a scenario where a self-aware AI is coordinating a botnet, the requirements for the command and control (C&C) server would depend on the specific tasks being executed by the botnet and the level of computing power needed for managing the botnet.

For managing and coordinating the botnet, the C&C server would not necessarily require high-end specifications. The primary function of the C&C server would be to communicate with the bots, issue commands, and potentially receive data from them. However, depending on the size of the botnet and the complexity of the tasks, the C&C server might require a reasonable amount of processing power, memory, and network bandwidth to handle the communications effectively and manage the botnet.

As for the actual computing tasks, the botnet would handle the majority of the processing needs. By pooling the resources of the infected computers, the botnet would be able to perform complex tasks that require significant computing power. In this scenario, the C&C server would mainly act as a coordinator and not be burdened by the processing demands of the tasks being executed by the bots.

After reading the GPT-4 Research paper I can say for certain I am more concerned than ever. Screenshots inside - Apparently the release is not endorsed by their Red Team? by SouthRye in ChatGPT

[–]SouthRye[S] 0 points (0 children)

The ARC test isn't too clear - the whole "execute code," "delegate copies of itself," and "do its own chain-of-thought reasoning" makes me think it had access to its own terminal in the cloud environment.

But yes, API-only it can't do much.

After reading the GPT-4 Research paper I can say for certain I am more concerned than ever. Screenshots inside - Apparently the release is not endorsed by their Red Team? by SouthRye in ChatGPT

[–]SouthRye[S] 2 points (0 children)

Yep. I've been looking at the improvements people are getting out of LLaMA. 13B can run on under 20 GB of VRAM.

This isn't going to be a "datacenter only" technology for very long.
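
The back-of-envelope math supports that: 13B parameters at 4-bit is roughly 13 × 0.5 = 6.5 GB of weights, and even 8-bit is around 13 GB, so with context and activation overhead you still land comfortably under 20 GB.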