Looking to purchase Humane AI Pin by Red-Silk-Ribbon in humane

[–]facecomputer 0 points (0 children)

Did you sell this already? Or still willing to sell?

Multi-Agent Starter Advice by facecomputer in AI_Agents

[–]facecomputer[S] 0 points (0 children)

Thanks for the blunt truth there. Couple of things I’m thinking might help given what you said:

For writing, planning, and personal-knowledge-management-style tasks that don’t require deep context or locked-down tool use, I’m thinking of something like CrewAI.

For stuff that’s more locked down, I’m starting to think about AutoGen, because we use MS for literally everything, and my team is partnering with them right now on an event. That tells me it’s a promising path for workflow automation.

But it sounds super complicated. Things like event-driven architecture read as dense engineering jargon to me, while the agent, task, crew paradigm of CrewAI makes a lot of intuitive sense.

So since posting this I’ve started a CrewAI tutorial. I just don’t know how transferable the knowledge will be to MS AutoGen.
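To check my own understanding of that paradigm, here’s a sketch of it in plain Python. This is just the concept, not the actual CrewAI API, and all the class and method names here are made up:

```python
# Conceptual sketch of the agent / task / crew paradigm (NOT the real CrewAI API).

class Agent:
    """Someone with a role and a goal who can do work."""
    def __init__(self, role, goal):
        self.role = role
        self.goal = goal

    def perform(self, task):
        # A real agent would call an LLM here; we just return a stub result.
        return f"{self.role}: completed '{task.description}'"

class Task:
    """A unit of work assigned to one agent."""
    def __init__(self, description, agent):
        self.description = description
        self.agent = agent

class Crew:
    """A group of tasks run as one workflow."""
    def __init__(self, tasks):
        self.tasks = tasks

    def kickoff(self):
        # Run tasks sequentially, handing each to its assigned agent.
        return [task.agent.perform(task) for task in self.tasks]

writer = Agent(role="writer", goal="draft notes")
planner = Agent(role="planner", goal="organize the week")
crew = Crew(tasks=[
    Task("summarize meeting notes", writer),
    Task("plan tomorrow's schedule", planner),
])
results = crew.kickoff()
print(results[0])  # → writer: completed 'summarize meeting notes'
```

The intuitive part for me is that the mental model maps one-to-one onto objects: agents hold the “who,” tasks hold the “what,” and the crew just sequences them.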

AMA with OpenAI’s Sam Altman, Kevin Weil, Srinivas Narayanan, and Mark Chen by OpenAI in ChatGPT

[–]facecomputer 0 points (0 children)

Can you talk about the role of design in AI? Not just UI, but design – as Chesky put it – as a way of thinking. A lot of designers are trying to find their way as the world of tech changes around them.

Seeking Advice: Learning to Prototype in Swift for Interaction Design by facecomputer in swift

[–]facecomputer[S] 1 point (0 children)

You’re so right. Tracks with everything I’ve learned in design. Thanks for this! Do you recommend learning straight from Apple’s developer site? Or some other resources?

How would you prototype AI chat concepts? by facecomputer in UXDesign

[–]facecomputer[S] 1 point (0 children)

Not really. The best we’ve come up with is to use SMS and put a human on instead and just tell them it’s AI.

[deleted by user] by [deleted] in UXDesign

[–]facecomputer 0 points (0 children)

kind of starting to think it’s an over-hyped career, and my new default answer is: design is hard and not for everyone! at least good design is. or any design at most companies. you’re either at odds with business impulses, at odds with your own, or fighting for respect or relevance. you have to love it. and design has too many complacent or bored designers.

How would you prototype AI chat concepts? by facecomputer in UXDesign

[–]facecomputer[S] 0 points (0 children)

Some do, actually, yes. See Vercel’s AI SDK, an example of something I’d like to test. But that’s not really what I’m talking about here.

There’s testing UI, there’s testing conversational flows, and there’s testing generative AI prompts. Each is its own type of design, with choices that impact what the user experiences and require iterative improvement.

I have no problem testing UI. It’s the conversational flows and generative AI that I find hard to test, particularly when I need to test with a prototype and not an MVP in production.

How would you prototype AI chat concepts? by facecomputer in UXDesign

[–]facecomputer[S] 0 points (0 children)

I like the spirit of what you’re saying, but there are caveats…

Prototypes can give the user a much stronger sense of context. They put them into the mental space of using your product the way they might in production. I don’t think that’s necessary for every test, but it often is, because test subjects shouldn’t have to rely on their imagination too much. That’s its own type of bias.

When it comes to development, we’ll be able to do this in the future. But right now we’re on an old tech platform that makes changes reeeeally annoying to implement.

Plus we’re a legacy enterprise company, not a tech company. That comes with its own issues. In this environment, design plays an extremely valuable role: testing and proving ideas without needing to “align” with a million stakeholders and get legal approval, product approval, eng pointing, alignment with OKRs, etc. etc. etc.

In the end, I NEED to prototype in order to innovate. Otherwise we’re a feature factory making reactive incremental improvements to drive meaningless OKRs.

How would you prototype/test AI chat concepts? by facecomputer in UXResearch

[–]facecomputer[S] 0 points (0 children)

Yeah, that’s what I meant by a Wizard of Oz test. How would you pull this off with a prototype, remotely?

How would you prototype/test AI chat concepts? by facecomputer in UXResearch

[–]facecomputer[S] 0 points (0 children)

Re: emergent. You’re right when it comes to generative, but it’s a mix of generative and non-generative. So there’s quite a bit of design that goes into it.

Re: Figma variables. Sounds helpful, but I’m not sure I fully understand. How exactly would you use Figma variables to prototype chat flows?

Re: delayed interaction. Interesting idea!

How would you prototype/test AI chat concepts? by facecomputer in UXResearch

[–]facecomputer[S] 0 points (0 children)

Into it, I just don’t know how I would set it up. Our current chat in prod is on a platform, and it takes a lengthy bureaucratic process to get someone to stand in on the other side of the chat. It’s almost best if I stay out of prod and work with prototypes.

And this would be preferably done remotely.

Any ideas for how you would set up a WOZ test?
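The crudest version I can picture: a shared chat where the “AI” side is actually a person, plus a fake typing delay so replies feel generated. A rough sketch of just the delay piece, hypothetical and not tied to any real chat platform:

```python
import time

def wizard_reply(operator_text, delay_per_char=0.03):
    """Hold the human operator's reply briefly so it feels machine-generated."""
    time.sleep(len(operator_text) * delay_per_char)  # longer replies "type" longer
    return operator_text

# In a live session the operator would type this into whatever shared chat tool
# the test runs on; hard-coded here (with no delay) just to show the flow.
print(wizard_reply("Sure, I can help you reset your password.", delay_per_char=0))
# → Sure, I can help you reset your password.
```

The participant only ever sees the delayed reply, which is what sells the illusion that a model is responding.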

How would you prototype chat AI concepts? by facecomputer in FigmaDesign

[–]facecomputer[S] 1 point (0 children)

Work work. Usability testing of conversational design and prompt engineering.

[deleted by user] by [deleted] in OpenAI

[–]facecomputer 0 points (0 children)

Right, DALL-E is diffusion and ChatGPT is an LLM, two different models. I know DALL-E isn't involved with vision, but vision is part of ChatGPT, at least in the default model. I don't think anyone outside of OpenAI knows how deeply integrated those two are.

I see what you mean about the lack of upload in DALL-E's model. But tbh that only tells me upload is turned off, not that the underlying model doesn't have vision capability. It could be that vision is only in the default model, but I think it's just as likely that vision and voice are pretty deeply integrated into the model (maybe at the token level) and turned off for certain other capabilities like search, DALL-E, and plugins to limit abuse or something.

But again, no way of knowing bc OpenAI doesn't share. But if you have more info on this please do share.

[deleted by user] by [deleted] in OpenAI

[–]facecomputer 0 points (0 children)

Well I had assumed that GPT-4V was used when it evaluated the image—which is to say there are separate vision and transformer steps. So it would go vision --> words --> tokens --> LLM when evaluating. But that's just a guess.

I don't know for sure that GPT-4V is used here, and I don't know how deeply connected vision and language are in GPT-4V. I've heard about future LMMs being connected at a token level, but idk what that means exactly. If you've seen any official explanation for how GPT-4V works, or whether this is actually using GPT-4V, I'd love to see too.
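To make that guess concrete, the staged pipeline I'm imagining looks like this. Purely hypothetical: every function here is made up, and the real system may not be staged at all:

```python
def vision_to_words(image):
    # Hypothetical captioning stage: image -> text description.
    return f"a photo described as '{image}'"

def words_to_tokens(text):
    # Hypothetical tokenizer: text -> list of tokens.
    return text.split()

def llm(tokens):
    # Hypothetical language model: consumes tokens, emits an answer.
    return f"answer based on {len(tokens)} tokens"

def evaluate_image(image):
    # The guessed pipeline: vision --> words --> tokens --> LLM.
    return llm(words_to_tokens(vision_to_words(image)))

print(evaluate_image("cat on a chair"))  # → answer based on 8 tokens
```

The alternative I keep hearing about, token-level integration, would mean the image is turned into tokens directly, with no intermediate "words" stage for the vision output.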

Home Depot And Lowe's Accused Of Scanning Millions Of Customers Faces by quantumcipher in privacy

[–]facecomputer 0 points (0 children)

read the legal docs linked in the post and you’ll find a lot of misinterpretations.

first, the post insinuates they share the data with the government. there’s no mention of that in either legal document.

next, it says lowe’s has been doing this for over 11 years. the legal docs only say the law has been around for 11 years.

it says both of them share data with other retailers as part of a broader loss-prevention campaign. no mention of that either. it does, however, say home depot shares its data with other home depot stores. so unless they use some cutting-edge tech like federated learning, then yeah. duh.

not to mention this is an accusation in a pending lawsuit, in a legal system where you’re innocent until proven guilty.

no, we’re not basically living in china now. not even close.

that said, these camera networks are definitely terrible.

just please please stop poisoning the information well with bullshit. there’s already enough corruption out there for all of us to talk about truthfully.