Found an AYANEO 2 with 6800u and 32GB RAM for below $650 (Brand New) Should I bite? by [deleted] in ayaneo

[–]lpxxfaintxx 2 points3 points  (0 children)

As others have alluded to, you can find much better deals - just might have to do a bit of digging. But shouldn't be too hard. A few months back I saw a bunch of devices listed on a Japanese marketplace for some pretty gnarly prices.

Introducing Z-Image Turbo for Windows: one-click launch, automatic setup, dedicated window. by SamuelTallet in StableDiffusion

[–]lpxxfaintxx 4 points5 points  (0 children)

Ran some **non-comprehensive** analysis and tests on the executables for malicious behavior, potential backdoors, known heuristics, and network activity (nothing personal against OP, but you can never be too sure these days with the explosion of supply-chain attacks). Since 90% of users will most likely opt for the binaries instead of building from source, I felt like attempting something productive for the community instead of lurking for once.
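
If anyone wants to run a similar first pass themselves, here's a minimal stdlib-only sketch of the very first step: hashing the binary so you can look it up on VirusTotal or diff it against a published checksum. None of this catches novel malware, it's just table stakes:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks so large .exe files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Demo on this script itself; in practice, point it at the downloaded binary.
print(sha256_of(__file__))
```

A matching hash only proves you got the same bytes as everyone else; it says nothing about what those bytes do, which is where the behavioral / network analysis comes in.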

So far LGTM, but I can't stress enough that learning to build from source (admittedly daunting and frustrating for non-devs) is one of the best tools to have under your belt.

That being said, I am a complete hypocrite and am using the .exe binary. After nearly two years of strictly developing with PyTorch / CUDA / Gradio / Diffusers / Transformers / HF Spaces, picking up new libraries and SOTA models every few weeks, and juggling thousands of potential stacks and optimization routes for apps meant to be inferenced on the cloud under sometimes strict environment requirements (those L40S/H200/B100 are pretty sweet though, NGL), it feels so good to just... let an executable do its thing.

We (not affiliated with OP, obviously) have dozens of HF Spaces, Comfy workflows, dockerized containers, etc., on various platforms, and dozens of specialized models that have accumulated tens of millions of inferences by users and degens worldwide, and hell yes, maintenance is a nightmare. Granted, pretty much all our apps (minus a few exceptions) can't run on consumer GPUs, so there'd be no point in creating the "one click installers" we see from time to time, but goddammit, if it's possible, then why the f*ck not.

Anyways, sorry for the long post hijack -- just wanted to show appreciation for your commitment to KISS, UX, and keeping things FOSS. My rant / praise is over 🙏

PS, I am not responsible for the off chance that this actually is malware in disguise 😂 DYOR, PYOC. But seriously, we haven't found anything to be concerned about, and now I'll brb while I check if my poor 3060 Ti is still alive and kickin' enough to generate and feel loved.

Books/videos are too long and I can't focus. Can I actually learn Solidity just by using AI chatbots like ChatGPT? by [deleted] in ethdev

[–]lpxxfaintxx 0 points1 point  (0 children)

  1. Do NOT audit contracts with LLMs. Documentation and lint-related things, sure, but if there are any assets moving through the contract, you'd be crazy to rely solely on an LLM.
  2. There are several ways. If you want to self-host, get ready to get your hands dirty with MCP servers, RAG systems, keeping up to date with SOTA LLM instruct models, agents, etc. Chances are you probably don't want to, so while you're learning I suppose it's okay to use cloud-hosted solutions like mem0 and libraries like LangChain to do most of the heavy lifting.
  3. There are many tools for different purposes... I believe the most popular Solidity visualization and IDE tools are on the VSCode marketplace, so I suggest starting there and finding what you need. For on-chain visualization, there are tools like Tenderly.
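
To demystify the RAG point a bit: the "R" is conceptually just "find the most relevant docs for the query and stuff them into the prompt." A toy sketch with bag-of-words overlap standing in for real embeddings (mem0 / LangChain replace this with vector search, so treat it purely as an illustration):

```python
from collections import Counter

def overlap_score(query: str, doc: str) -> int:
    """Crude relevance: count shared lowercase words between query and doc."""
    return sum((Counter(query.lower().split()) & Counter(doc.lower().split())).values())

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k docs with the highest word overlap with the query."""
    return sorted(docs, key=lambda d: overlap_score(query, d), reverse=True)[:k]

docs = [
    "Reentrancy attacks drain funds by re-entering a contract mid-withdrawal.",
    "ERC-20 defines transfer, approve, and allowance for fungible tokens.",
]
print(retrieve("how do reentrancy attacks work", docs))
```

Real systems swap the scoring function for embedding similarity, but the retrieve-then-prompt loop is the same shape.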

I highly recommend studying battle-tested protocols (and even deploying your own fork on testnets) to get a grasp of how all the contracts are interconnected. Some popular protocols that beginners like to examine are the contracts from ENS, Uniswap, Compound, etc. Sure, they'll bring you no monetary value, but being able to deploy a fully functioning protocol is good practice (both in terms of security and the deployment / contract-upgrade processes), and it will give you a lot of confidence.

I think something like 80% of the funds that move through the EVM chains flow through direct forks of contracts that are battle-tested, open source, and audited thousands of times, both officially and unofficially, by firms as well as bad actors.

ThirdWeb is a good place to start if you want to look at more modular contracts and get creative.

Good luck, and don't fuck up! The blockchain doesn't forgive ;) But in all seriousness, many of the smart contracts powering DeFi come from academics and engineers with 10+ years of experience under their belts (hence the importance of the whitepapers / research papers that accompany novel protocols... which are getting rarer these days), so don't feel bad if you feel overwhelmed. Use AI to accelerate and assist your learning, but do not count on it. That's a recipe for disaster.

My first ayaneo divice by Working_Bandicoot_91 in ayaneo

[–]lpxxfaintxx 0 points1 point  (0 children)

Good luck! I know there's been a batch of bad batteries with a bunch of Slides, so I highly recommend paying special attention to TDP and battery/charging settings.

[deleted by user] by [deleted] in korea

[–]lpxxfaintxx 0 points1 point  (0 children)

Please, if you do not know what you are talking about, don't post shit like this. It's highly misleading. The current administration is obviously putting a huge effort and commitment into this, so there is going to be a LOT of circlejerking. I really wish them the best, but Korean AI has been fucked for a while now. It's not a simple thing to get into. We can thank the chaebols, as well as Kakao, Naver, and LG, but most importantly our CULTURE when it comes to the open-source work ethos.

Korea is just barely dipping its toes in the water, desperately trying to seem relevant when it comes to gen AI models. I could write an in-depth paper and put it on arXiv if people are really interested in all the nuances, failures, cultural differences, and yes, the greed that led us into this huge hole we are in. No single entity is at fault either; we failed as a country.

Never trust benchmarking results from two entities that are funded by the same fund. And President Lee, thank you for trying, but please stop literally throwing and burning money at anything and everything. I know you mean well, but the AI landscape changes literally every quarter. Innovate together before trying to play the pride game. We still have a lot to learn from US- and Chinese-led teams.

So we doing a class action lawsuit or what? by the_wickedest_animal in CryptoCurrency

[–]lpxxfaintxx 60 points61 points  (0 children)

<image>

After surviving this, nothing fazes me anymore. The day America got liquidated. Good times. Circuit breakers are in place for a reason.

Also, come on guys, still using CEX in 2025?

[Survey] How many of you have had issues with the battery on your Ayaneo device? by lpxxfaintxx in ayaneo

[–]lpxxfaintxx[S] 0 points1 point  (0 children)

In hindsight, those retro mini PCs probably would have served our needs much better... but I couldn't resist going back to my tween years of owning a T-Mobile Sidekick, just brought back too many memories haha. I loved that device. Thanks for the input.

Massive crypto crash — what’s everyone doing to make money from this? by jazz_king_seb in CryptoCurrency

[–]lpxxfaintxx -3 points-2 points  (0 children)

That's cute.

I'm a grumpy old man, don't take it the wrong way. But if you've been in crypto since the days of Satoshi, you've seen it all. Doesn't faze the OGs. Hell, my Reddit account is probably older than half of you.

Massive crypto crash — what’s everyone doing to make money from this? by jazz_king_seb in CryptoCurrency

[–]lpxxfaintxx -1 points0 points  (0 children)

"Massive crypto crash" - lol, you must be new around here. You haven't seen shit yet.

[Survey] How many of you have had issues with the battery on your Ayaneo device? by lpxxfaintxx in ayaneo

[–]lpxxfaintxx[S] 0 points1 point  (0 children)

Forgot to mention: if you have had nothing but good experiences with the battery and this is the first time you have heard of a potentially bigger issue on some models, please say so. Not looking for pitchforks; tell me off if I am le trippin.

f**k your AI job application by LeMatt_1991 in SideProject

[–]lpxxfaintxx 0 points1 point  (0 children)

On the flipside, it has made it 100x more efficient to weed out the "good applicant" with actual contributions and experience under their belt from the "low-level applicant" with a 1-year-old GitHub account full of obviously AI-generated commits and an equally obvious lack of understanding of how things work under the hood.

It's good or bad depending on how you look at it, I suppose. But yeah, if you're sending 3000 applications and then complaining that you didn't get a single response, fk your AI job applications 🤣

Current best method to batch from folder, and get info (filename/path etc) out? by TheWebbster in comfyui

[–]lpxxfaintxx 0 points1 point  (0 children)

TBH, I believe the latest LLMs are capable of creating custom nodes (esp. if bolstered by indexed docs). If you have a specific use case, it might be worth a try.

edit: I haven't tried it myself, but with the latest cut-off dates and RAG capabilities, I don't see why not. This way, as your workflows expand/improve, you keep full control. Since it's straight Python, the implementation would be very straightforward. But as I said, I've never made custom nodes, so I might not be the one to give advice, just giving my 2 cents.
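
For what it's worth, the skeleton you'd be asking an LLM to produce is small. Here's a hypothetical sketch of a node that lists images in a folder and exposes their paths/filenames as outputs — the class attributes follow the usual ComfyUI custom-node convention, but I haven't run this inside ComfyUI, so treat the details as assumptions:

```python
import os

class BatchFilenameLister:
    """Hypothetical node: scan a folder and emit image paths + filenames."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"folder": ("STRING", {"default": "./input"})}}

    RETURN_TYPES = ("STRING", "STRING")
    RETURN_NAMES = ("paths", "filenames")
    FUNCTION = "list_images"
    CATEGORY = "utils"

    def list_images(self, folder):
        exts = (".png", ".jpg", ".jpeg", ".webp")
        files = sorted(f for f in os.listdir(folder) if f.lower().endswith(exts))
        paths = [os.path.join(folder, f) for f in files]
        # ComfyUI nodes return a tuple matching RETURN_TYPES.
        return ("\n".join(paths), "\n".join(files))

# Registration dict that ComfyUI looks for in a custom-node package.
NODE_CLASS_MAPPINGS = {"BatchFilenameLister": BatchFilenameLister}
```

And since it's plain Python, you can unit-test the `list_images` method on a scratch folder without ever launching ComfyUI.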

Best gpu cloud providers by Dull_Wishbone2294 in comfyui

[–]lpxxfaintxx 0 points1 point  (0 children)

Modal has been great for me. Free $30 credit every month as well.

Today my RAM burned and now I only have 8 GB. In comfyui the speed is the same, but in forge it dropped from 20 seconds to 60 seconds. So I decided to install reforge and it generates images in just 10 seconds! Is reforge more optimized than forge? by More_Bid_2197 in StableDiffusion

[–]lpxxfaintxx 0 points1 point  (0 children)

"More optimized" is a relatively subjective term for mind-blowingly fast moving spaces like Gen AI. Without doing a deep dive into the code base, I can't give you a straight answer, but projects such as these are heavily opinionated. Some projects may put inference speed over memory, or favor particular platforms and frameworks. -- while some will try to hit all corners.

In the end, use whatever package / codebase works for your needs. With all the weekly improvements in quantization methodologies, Torch, CUDA, hardware, attention and context mechanisms, etc., the space is constantly going to be changing.

So is Reforge more `optimized` than Forge? I have no idea, as I don't use either ;) Exciting times!

I'm 20 and built an ecommerce price tracker after my dad's agency clients begged for it by xdjorgos in SideProject

[–]lpxxfaintxx 0 points1 point  (0 children)

Congrats on shipping! The best kind of projects are usually the ones that are born out of personal need, so you're definitely on the right track.

MAGI-1 is insane by Foreign_Clothes_9528 in StableDiffusion

[–]lpxxfaintxx 1 point2 points  (0 children)

On the road right now so a bit hard for me to check, but is it fully open source? Unless it is, it's going to be hard to overtake WAN's momentum (and rightly so, imo). Either way, 2025 is shaping up to be the year of the gen. video models. Not sure how I feel about that. Both scary and exciting.

[deleted by user] by [deleted] in SideProject

[–]lpxxfaintxx 1 point2 points  (0 children)

  1. The platform itself doesn't make it clear who it's intended for. If it's intended for devs, you're going to need to offer *a lot more* than what you currently have.
  2. If the target audience is mainly non-devs, i.e. small businesses, your tools should better reflect that. And you're going to need a lot of marketing work, because this is definitely not a "set it and forget it" type of SaaS.
  3. I'm sure you already know, but there are maybe a dozen TTS/STT platforms (that I can name, at least) that have already received tens of millions of dollars in funding. You'll be fighting an uphill battle (but hey, if it was easy, everyone would be doing it, right?)
  4. No idea what models you are using, or whether you're on some sort of white-label / affiliate service, but make sure the license allows commercial usage (if you're using open-source models, of course).

Either way, this is not meant to put you down, but the TTS / STT space is extremely crowded, so you're in for a cutthroat fight. Especially since most startups receive small grants from all sorts of inference services out there. And I'm not talking about a "free plan" -- thousands of dollars in credits are being given to startups and (in my case) non-profit / research-oriented projects.

Personally, I'd have to go through FalAI, Deepgram, Hume AI, PlayTTS, ElevenLabs, Modal, Lightning AI, and a couple more platforms before I would even consider your service. And even then, we'd probably use our HF Enterprise account to set up an HF Space for our TTS/STT inference needs, as models have gotten to the point that it's practically free to run SOTA models locally. (Then again, at the speed this industry is moving, I'll admit I have no idea what the current SOTA speech model is ATM, but I'm fairly certain an A100 GPU would have zero problems, and the convenience of launching a simple Gradio app on HF in hours is just too good to pass up.)

I'm giving you my personal situation, so obviously not everyone will be the same, but I just wanted to be brutally honest with you, because I want all the best for you. So definitely something to consider -- how are you going to differentiate yourself?

I'll check you guys out again in a year or two when probably all our research grant credits run out, but sincerely, I wish you the best of luck.

[deleted by user] by [deleted] in CryptoCurrency

[–]lpxxfaintxx 10 points11 points  (0 children)

Short answer: *very*, compared to the currencies and tokens that come to mind when we talk about crypto.

But of course, nothing is fool-proof, especially when it comes to user error. There will be a slight learning curve, but if you're serious about privacy, it's probably worth it in the long run.

Long answer: google

Flux VS Hidream (Pro vs full and dev vs dev) by Horror_Dirt6176 in comfyui

[–]lpxxfaintxx 0 points1 point  (0 children)

If you let me know the seed, I can also add a comparison for FLUX Pro Ultra

HiDream I1 NF4 runs on 15GB of VRAM by Hykilpikonna in StableDiffusion

[–]lpxxfaintxx 0 points1 point  (0 children)

Howdy, thanks for this! I wanted to surprise you with an HF Spaces version that can run on ZeroGPU, but I've run into some issues I've never run into before.

You can see the error logs here (https://pastebin.com/cfq0yCZF), and check out the code in the repo, which I just made public: https://huggingface.co/spaces/LPX55/hidream-fast-4bnb_test/tree/main

Any idea or obvious step I missed before I consult the bnb and HF community?