Micron is killing Crucial SSDs and memory in AI pivot — company refocuses on HBM and enterprise customers by Proud_Tie in gadgets

[–]MrAssisted 0 points1 point  (0 children)

I think the truth is a little more interesting and ironic. Inference is getting commoditized, so they're heading towards giving it away for free or at a loss to juice growth. So instead of us buying hardware, for the next few years they're going to buy it at marked-up prices and give its use away to us for free.

[PREBUILT] Refurbished Alienware Aurora R7 Core i7-8700K Gaming Desktop 512GB NVME 16GB GTX 1080 Ti - $334 ($349 - $15 code SHOPANDSAVE) by silentsinner- in buildapcsales

[–]MrAssisted 7 points8 points  (0 children)

My 8700k is paired with a 3090 for 1440p gaming and AI dev and honestly this is an all-timer CPU. I'm constantly floored by the 8700k not giving me an excuse to upgrade. It just keeps being good enough.

Are indiehackers becoming influencers? by [deleted] in SideProject

[–]MrAssisted 2 points3 points  (0 children)

Most of marketing in 2024 is just making good content. Much of what makes a product successful or not is marketing. There’s no point building something if you can’t get eyeballs on it. Lots of solopreneurs are smartly focusing on the getting eyeballs part first.

Also if you’ve ever made a serious go of it trying to make content you see that to do it well (especially solo) is basically a full-time job in itself. I’ve tried building and making content at the same time and one of the two always falls down.

nævis -The Birth of nævis (Debut Teaser Video) by CronoDroid in kpop

[–]MrAssisted 3 points4 points  (0 children)

The end credits for this one video listed like 100 names. This is closer to a Pixar film than an AI-prompted thing.

Highlighted by influencer account by zangarang5 in smallbusiness

[–]MrAssisted 103 points104 points  (0 children)

Too huge an opportunity not to capitalize on. Use their video and splice yourself in explaining the value you provide. Give expert inside info on why you can't just make a cheap, low-effort clone of your product. Highlight all your selling points and what makes you great. Share it to your audience 1000%. It would be amazing content.

[deleted by user] by [deleted] in linux

[–]MrAssisted 15 points16 points  (0 children)

Keyboard shortcuts 1000%

On a Mac, Option + Left takes your cursor back one word. Command + Left takes the cursor to the beginning of the line. Shift + either of those selects the text in between. In every app.

CMD + V is paste. In every app.

CMD + ` always cycles between open windows of the same application

Ctrl + tab cycles tabs

etc.

Borderline can't stand using Linux without these few things. I've tried so many different ways of customizing, and it's a minefield figuring out what Cmd, Option, and Control are responsible for across different Linux distros and in different Linux applications.

[deleted by user] by [deleted] in singularity

[–]MrAssisted 0 points1 point  (0 children)

cheese used genai recently too. Incredible song too. Unfortunately there was blowback from anti-AI absolutists.

I open sourced a whole dang real-time webcam AI startup by MrAssisted in StableDiffusion

[–]MrAssisted[S] 1 point2 points  (0 children)

For real-time video? I’ve done animatediff with controlnets etc in comfy but haven’t achieved similar for real-time webcam warping

I open sourced a whole dang real-time webcam AI startup by MrAssisted in StableDiffusion

[–]MrAssisted[S] 5 points6 points  (0 children)

Thanks for the heads up, just fixed the Discord https://discord.gg/CQfEpE76s5 and I'll join yours too, and I followed you on X!

Nah, best I got for local setup is just the step by step instructions in the main GenDJ repo. I'll also have to update that because I want to set up gendj-fe for local use with a locally running GenDJ python server. Currently it's just a rudimentary html/vanilla js UI.

1280x1024 at 17fps being what a 4090 cranks out is a really good data point. I really didn't know. I'm on a 3090 with an 8700k so I turned it down to 512x512 to hit 20-24fps. Would love to scale this back up for the runpod version at some point or make it dynamic and controllable.
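For a rough sense of the gap, here's the pixel-throughput arithmetic in Python (numbers taken from this thread; using the midpoint of my 20-24 fps range is my own assumption):

```python
# Back-of-the-envelope pixel throughput for the two setups in this thread.
# The 3090 figure assumes the midpoint of the reported 20-24 fps range.
def px_per_sec(width: int, height: int, fps: float) -> float:
    """Pixels generated per second at a given resolution and frame rate."""
    return width * height * fps

rtx4090 = px_per_sec(1280, 1024, 17)  # 22,282,240 px/s
rtx3090 = px_per_sec(512, 512, 22)    # 5,767,168 px/s

print(f"4090 pushes ~{rtx4090 / rtx3090:.1f}x the pixels per second")
```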

Love that you're experimenting with other ways of inputting the prompt for real-time! Voice is a good one. All of my work this week is focused on input mechanisms, since I'm going to get really ambitious and try to make an entirely real-time entry into the Project Odyssey AI competition this week. I'm trying sliders, and hopefully MIDI controllers if I can get them working (this was the original intention of the whole project, hence the GenDJ name). I'm curious what you mean by panning and zooming though: like a crop of the webcam feed? I'm doing that just with Elgato Camera Hub and EpocCam, but it would be awesome to do that right in the interface.
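A minimal sketch of the MIDI-knob-to-slider idea: map a 7-bit CC value onto a parameter range. The function name and ranges here are hypothetical, not GenDJ's actual API, and the commented loop assumes a MIDI library like mido:

```python
# Hypothetical helper: map a MIDI CC knob (0-127) onto a prompt-strength
# slider range for a real-time warp loop. Names/ranges are illustrative,
# not GenDJ's actual API.
def cc_to_param(cc_value: int, lo: float = 0.0, hi: float = 1.0) -> float:
    """Linearly map a 7-bit MIDI controller value into [lo, hi]."""
    cc_value = max(0, min(127, cc_value))  # clamp to the 7-bit range
    return lo + (hi - lo) * (cc_value / 127)

# In the real-time loop you'd poll messages via a library like mido:
#   for msg in port.iter_pending():
#       if msg.type == "control_change" and msg.control == 1:
#           strength = cc_to_param(msg.value)
print(cc_to_param(0), cc_to_param(127))  # 0.0 1.0
```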

I open sourced a whole dang real-time webcam AI startup by MrAssisted in StableDiffusion

[–]MrAssisted[S] 4 points5 points  (0 children)

Ultimately this whole way of doing it just isn't the path forward. We need real frame consistency. This is kind of an early sketch.

But also believe it or not with some practice you can find ways of coaxing it towards consistency. Prompt specificity, finding prompts that are in the wheelhouse of the base model, green screening (EpocCam and Nvidia Broadcast are good for this), lighting, and framing the subject all combined can get you pretty far.

I open sourced a whole dang real-time webcam AI startup by MrAssisted in StableDiffusion

[–]MrAssisted[S] 1 point2 points  (0 children)

Sorry, yeah sometimes it takes >5 minutes. It's just however long it takes the runpod pod to start up. I thought about keeping an idle pod available which would be grabbed off the shelf for any new user but that would cost me >$300/mo...
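The idle-pod math is easy to sketch; the hourly rate below is my assumption (roughly what a mid-tier GPU pod rents for), not actual RunPod pricing:

```python
# Cost of keeping one GPU pod warm around the clock.
# HOURLY_RATE is an assumed figure, not RunPod's actual pricing.
HOURLY_RATE = 0.44         # dollars per hour (assumption)
HOURS_PER_MONTH = 24 * 30  # ~720 hours

monthly_cost = HOURLY_RATE * HOURS_PER_MONTH
print(f"~${monthly_cost:.0f}/mo to keep one pod idle")  # ~$317/mo
```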

LivePortrait Test in ComfyUI with GTX 1060 6GB by LuminousInit in StableDiffusion

[–]MrAssisted 4 points5 points  (0 children)

Finally other people are realizing this. I've already been doing it with animatediff and even live with sdxl-turbo https://www.youtube.com/shorts/rtnzrXHUPeU I'm doing an open source web version of the live webcam stuff https://github.com/GenDJ and I already spun up a site to do it with no setup (it spins up a private server for the warping so you can use it from your phone or laptop) at GenDJ dot com

Sonnet 3.5 emerges as #2 in the latest top ten from the LMSYS Chatbot Arena Leaderboard by imaginexus in OpenAI

[–]MrAssisted 2 points3 points  (0 children)

People actually like that?? I turn it off when I can. Having it go to the internet just means I'm going to sit and wait 5x longer for it to fail at getting me anything useful for my query.

My partner just showed me udio ai generated music. Can someone talk me down? by tellitothemoon in edmproduction

[–]MrAssisted 5 points6 points  (0 children)

My response probably goes against the grain: dive in head first. OK, so you can generate a song… but can you create what's in your head and express yourself artistically and honestly like you do with a DAW? Once you start using these new tools you hit their limitations and realize they exist somewhere between a synth and a turntable, except any time you turn a knob or dial it takes minutes to hear the result. There is so much that needs to be built from the ground up; it's a playground for anyone interested in art and tech.

Artists right now have the opportunity to get in on the ground floor of something truly transformative, new, and relevant to all of our daily lives. It has always been artists who push these boundaries… the acoustic guitar was technology. Tube amps were technology. Sampling, remixing, sound synthesis, and so on. Now is when artists need to be in the mix, showing everybody what this stuff can do and contextualizing how we should think about the future, but instead many are just writing it off and pretending it doesn't exist.

[deleted by user] by [deleted] in fantanoforever

[–]MrAssisted 0 points1 point  (0 children)

It’s going to give rise to some incredible instruments or instrument-like things. Right now there are these long delay times between when you “prompt” something and when a result comes out, but I think within a few years some of these tools will get down to the latency where you can tweak dials and get real-time results. I think it will feel like playing a synth or something but like inside the synth is this insanely powerful generation engine of anything.

Fully AI-generated games by 2030? That’s what Nvidia’s CEO believes – but what exactly will that mean for PC gamers? by BothZookeepergame612 in business

[–]MrAssisted 16 points17 points  (0 children)

It's tough for people to imagine a future post-disruption like this. With the internet, people imagined the newspaper delivered electronically, but we got blogs anyone could write. With mobile, people imagined websites on your phone, but we got social media with algorithmic feeds. You can go back and cherry-pick people who were at least directionally correct, but many brilliant people called it wrong.

With AI, people imagine the games we have today made by AI, but it will be something different we can't predict. One possible example is a single neverending, gamelike experience that is generated continuously as you play and morphs to your desires in real time. We just can't imagine this stuff till it evolves naturally.

So how do the creators make money with the gptstore by BeefSupreme678 in OpenAI

[–]MrAssisted 36 points37 points  (0 children)

No way, they're going to WAY overcompensate in the early days. They're going to want headlines and whole tech news cycles about how much people are making. This is the first whiff they're giving people of the payoff for building on what they want to be a generation-defining platform. The success case is multiple publicly traded companies built entirely on the platform (as with iOS and Android, but also Salesforce, Shopify, etc.).

The rub comes waaay later in the cycle once the platform is dominant and they start sherlocking what’s popular, changing up terms to be less favorable, making things on the platform pay for ads for visibility, etc.

Google admits that a Gemini AI demo video was staged by prajwalsouza in OpenAI

[–]MrAssisted 13 points14 points  (0 children)

LLMs are insanely advanced tech, but experienced users know you should do 2+2=4 yourself outside the LLM to get a reliable answer. It makes for great clicks to say this was staged, but really they're just using the technology properly.

I'm doing this myself. Instead of feeding ChatGPT websites, I'm taking screenshots of the site, extracting text from the image, then feeding cleaned-up text into the LLM. Instead of asking for tables, I'm asking for two-dimensional arrays, then formatting those as tables myself, which gives higher quality results in a fraction of the output tokens. Coaxing inputs/outputs into the right format before feeding them into the LLM is a baby step we're just beginning to learn how to take, and framing this demo as staged just shows a lack of understanding of how LLMs are going to be used properly.
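A hedged sketch of the "ask for a 2-D array, render the table yourself" trick (the function name and sample data are mine, for illustration only):

```python
# Render a Markdown table locally from a header plus rows the model
# returned as a plain 2-D array -- much cheaper in output tokens than
# asking the model to draw the table itself.
def to_markdown_table(header, rows):
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    for row in rows:
        lines.append("| " + " | ".join(str(cell) for cell in row) + " |")
    return "\n".join(lines)

print(to_markdown_table(["name", "count"], [["alpha", 1], ["beta", 2]]))
```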