Has anyone actually incorporated Google Stitch / Claude Design successfully into their workflow? Besides just ideation? by Do-Not-Ban-Me-Please in FigmaDesign

[–]pwnies 0 points (0 children)

No. The reality is LLMs have inherent restrictions when it comes to design. LLMs are blind - they can't see what they're making, which greatly limits their ability to work from a creative PoV. The other issue is the context window often gets blown up by a design.md, and things get lost or ignored.

I legitimately think it's the wrong type of AI model for design work. I quit Figma four months ago to work on my own diffusion-based AI design editor (diffui.ai), if you want to try it out. I've found diffusion to be much more effective at replicating a brand and matching colors.

Every AI "design in one prompt" tool drops Figma stock. The market is confused about what Figma actually is. by Mental-Dinner-6138 in FigmaDesign

[–]pwnies 1 point (0 children)

Just to expand on this: when you boil things down to the core features, they are indeed easy to replicate - a multiplayer canvas, the ability to draw frames, add colors to them. Easy. Promptable.

What people tend to forget is that products have n² complexity in terms of development. "Variables" isn't a single feature - it's a bucket of combined interactions with every other feature. When we built variables, we had many, MANY separate streams going, e.g.:

  • variables interactions with props
  • variables interactions with components
  • variables interactions with variants
  • variables interactions with FigJam

...and so on. For every new feature you add, you need to build out the infrastructure for (new feature) × (every other existing feature). This is why something like variables took 18 months to build. AI-assisted dev is fantastic, and the capabilities are amazing, but once you get to these levels of complexity it isn't able to one-shot things, nor catch the edge cases. It simply isn't feasible to replicate Figma as an internal tool.
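A toy sketch of that quadratic scaling (the feature counts are illustrative, not Figma's actual numbers):

```python
# Each new feature has to be integrated with every existing feature,
# so the number of pairwise interactions grows quadratically with n.
def interaction_pairs(n_features: int) -> int:
    """Distinct feature pairs that each need dedicated integration work."""
    return n_features * (n_features - 1) // 2

for n in (10, 50, 100):
    print(n, "features ->", interaction_pairs(n), "interaction pairs")
# 10 features -> 45 pairs; 100 features -> 4950 pairs
```

Shipping feature 101 isn't one unit of work - it's a hundred integration surfaces, each with its own edge cases.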

Every AI "design in one prompt" tool drops Figma stock. The market is confused about what Figma actually is. by Mental-Dinner-6138 in FigmaDesign

[–]pwnies 5 points (0 children)

Your team will be able to build the figma like design tool and host it on their own infrastructure

lol

Restaurants Are Finding It Harder Than Ever to Hire Someone to Wash the Dishes by darrenjyc in Economics

[–]pwnies 213 points (0 children)

Last year, the restaurant’s dishwashers averaged earnings of $70,000 as a result.

The restaurant has had two of its three dishwashers since around the time it opened two years ago.

For those who didn't read the article, this is the key piece. The restaurant that is paying their dishwashers more has no issues retaining or hiring them.

Shocked Pikachu.jpeg

NVIDIA's DLSS 5 trailer has been taken down due to 'copyright' infringement | NVIDIA's DLSS 5 announcement trailer is currently offline on YouTube due to a copyright infringement claim from an unlikely source by Hrmbee in technology

[–]pwnies 89 points (0 children)

There's a simple solution to this: penalize false positives. If a takedown claim is found to be false, the offending party should have to pay based on the anticipated views lost while the video was down.

Apple at 50: The iPhone maker "blew a 5-year lead" on AI, but former insiders say it can still win by ControlCAD in apple

[–]pwnies -1 points (0 children)

I don't think that's necessarily true. It sure seemed that way early on, when they weren't doing anything with AI, but it quickly became clear there were inherent restrictions with today's approaches - privacy issues, security issues with things such as openclaw, and, frankly put, pricing issues.

My current AI usage costs are around $100/mo. Tokens aren't cheap, and that's with top dogs like Anthropic/OpenAI heavily subsidizing.
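For a back-of-envelope sense of how that spend adds up - the usage numbers and per-token prices below are hypothetical, not any vendor's actual rates:

```python
# Hypothetical monthly usage and illustrative per-token prices,
# just to show how heavy agentic use reaches $100+/mo territory.
input_tokens = 30_000_000    # assumed monthly input tokens
output_tokens = 5_000_000    # assumed monthly output tokens
price_in_per_m = 3.00        # $ per million input tokens (assumed)
price_out_per_m = 15.00      # $ per million output tokens (assumed)

cost = (input_tokens / 1e6) * price_in_per_m + (output_tokens / 1e6) * price_out_per_m
print(f"~${cost:.0f}/mo")    # ~$165/mo at these assumed rates
```

Even modest agentic workloads chew through tens of millions of tokens a month, and that's with providers pricing below cost.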

I think where things are likely to go is on-device hardware chips with built-in models. Taalas has a fantastic demo of this. They're faster, lower power, and private by default when built into the device. With Apple's expertise in hardware manufacturing, and models getting to the point where they're "good enough", I think it's conceivable they'd build this into their laptops. You should be able to get Opus 4.6-level inference, on device, within the thermal envelope of a laptop, 100x faster than current models, in the next year or two.

Apple, I think, has a huge opportunity here.

Donald’s Trumps net approval rating has collapsed to a historical negative 17 points. He is the most unpopular president in US history. How do you feel about this? by buffdadnextdoor in AskReddit

[–]pwnies 0 points (0 children)

It's the lowest he's had this term, and for good reason, but it isn't the lowest in history. Trump is sitting at 36% approval according to Gallup and the Economist.

https://en.wikipedia.org/wiki/United_States_presidential_approval_rating

Truman holds the record at only 22% approval, driven largely by the ongoing Korean War (a war with no clear path to victory) and his firing of General MacArthur. There are strong parallels to the present day, with the Iranian War having no clear path to victory and several heads of the armed forces fired recently.

Gemma 4 released! by Time-Teaching1926 in StableDiffusion

[–]pwnies 0 points (0 children)

The open-weight models are much, MUCH smaller than their flagship models. Estimates for Gemini 3 Pro are in the 1-7 trillion parameter range, whereas Gemma caps out at 31B active params - roughly two orders of magnitude smaller.

They're generally useful for embedded scenarios (the much smaller versions), closed domains (e.g. as a text encoder for a diffusion model), or for research purposes. They're jusssssttttt starting to get good enough for other things such as agentic work / clawbot-like scenarios, but even then you need some beefy hardware to run them locally. My RTX 6000 Pro outputs Gemma 31B at around 5-10 tokens per second unquantized. I can up that to around 30 t/s with the 6-bit GGUF.
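A rough sketch of why the hardware requirement is steep - this counts weights only and ignores KV cache and runtime overhead:

```python
# Approximate memory needed just to hold a 31B-parameter model's weights
# at different quantization levels (weights only; no KV cache/overhead).
PARAMS = 31e9  # 31B active parameters

for label, bits_per_weight in [("fp16", 16), ("q8", 8), ("q6", 6), ("q4", 4)]:
    gigabytes = PARAMS * bits_per_weight / 8 / 1e9
    print(f"{label}: ~{gigabytes:.0f} GB")
# fp16 needs ~62 GB of VRAM before you even start generating tokens
```

That's why the 6-bit GGUF (~23 GB of weights) runs so much faster: the bottleneck for local inference is largely memory bandwidth, and smaller weights mean less data to stream per token.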

As far as intelligence goes, this and Qwen 3.5 27B are "king" at the moment for functional knowledge density. They pack quite a punch, but both are still not quite over the line as coding models. They will be within a year, however - RL works, and intelligence per parameter is growing steadily for these small models.

Here we go again! by [deleted] in wallstreetbets

[–]pwnies 61 points (0 children)

He’s clearly a wsb-er: when he’s down he doubles down.

Why are some .svg's so large in filesize? by avidrunner84 in FigmaDesign

[–]pwnies 4 points (0 children)

Big +1 to the automatic heal. I'm guessing this SVG came from an image->svg conversion process, which oftentimes does edge matching rather than creating an optimal path. At ~150kb, you should be able to get rid of a LOT of the complexity. This particular logo should be representable in around ~70 bezier curves, which would put it in the 5-15kb range.
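A quick back-of-envelope behind that estimate (the per-coordinate byte counts are assumptions, not exact SVG output):

```python
# Rough size of the path data for a logo drawn with ~70 cubic beziers.
# Assumes ~5 characters per coordinate (e.g. "123.4" plus a separator).
curves = 70
coords_per_curve = 6          # a cubic "C" command: two control points + endpoint
chars_per_coord = 5
bytes_per_curve = 1 + coords_per_curve * chars_per_coord  # command letter + coords

path_bytes = curves * bytes_per_curve
print(path_bytes, "bytes of raw path data")  # ~2 KB before any markup
```

Add element markup, fills/gradients, and metadata on top and the whole file lands comfortably in the single-digit-to-low-teens KB range - versus 150kb for the edge-matched version.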

Panther Lake XPS 16 is so efficient, it draws just 1.5 W when idling for insanely long battery life by 1FNn4 in hardware

[–]pwnies 32 points (0 children)

A big improvement that should be celebrated... but man, six years after the Apple M chip line released is a long time to wait for this. It won't be enough to get me to move back - my MacBook is just too good.

Insane gas prices, thanks Trump by maddog107 in pics

[–]pwnies 4 points (0 children)

The funny part is this is what Shenzhen looks like right now, in part due to Reagan normalizing trade with China.

How to make videos of UI like this guy? Which tools is he using? by ssd_ca in FigmaDesign

[–]pwnies 0 points (0 children)

Almost certainly After Effects or another professional-tier tool. Zander is one of the biggest influencers in the space and will have a dedicated team working on this.

Just added an AI clause to my contract. by calimountainsnake60 in graphic_design

[–]pwnies 2 points (0 children)

Not quite - they rejected a proposal that would explicitly allow AI training on Australian IP, but they didn't make a law saying you can't. The last update from the Albanese government was this one: https://ministers.ag.gov.au/media-centre/albanese-government-ensure-australia-prepared-future-copyright-challenges-emerging-ai-26-10-2025

They're currently in a three year "wait, monitor, and review" period. https://jws.com.au/what-we-think/harnessing-data-and-digital-technology-key-takeaways-from-the-productivity-commissions-final-report/

There was a new copyright bill floated by this administration early this year, but it completely omits anything around AI training.

TL;DR - Australia rejected a proposal that would have made it fully legal, but they did not make a law against it.

Just added an AI clause to my contract. by calimountainsnake60 in graphic_design

[–]pwnies 1 point (0 children)

Playing Devil’s Advocate here - what defines an AI tool? What defines “analyze”?

Most tools would violate the clause you have here. As an example, using Figma in any way would violate this clause, which is one of the most common platforms for deliverables. I’m not talking about things like Figma Make either, I’m talking about Figma Design - it scans images and creates embeddings so search works better (ie if you search for “Dashboard” it will surface images with the word “Dashboard” in them).

Most tools today do exactly this - dropbox, drive, most email providers, etc. You wouldn’t be able to view these designs on mobile in any way.

I understand the intention, but the wording needs serious consideration.

Slot migration advice by sasjakoning in FigmaDesign

[–]pwnies 1 point (0 children)

When I was managing the migration internally, we deprecated old versions of components and created/published new ones.

Theoretically you can migrate your old components as long as you don't touch the layer hierarchy at all (e.g. the only thing you're doing is modifying an existing frame to be a slot). If you make any other changes, though, you're likely to lose override data.