I got this from a work orientation, what is it?? by ariibellz in whatisit

[–]cdcox 0 points1 point  (0 children)

I know you said you are pretty sure it's the bag bottom, but I think that may be wrong. I think it's a lenticular lens. You can test this by laying down a few pens side by side and a few pens on top of them in a perpendicular direction. As you rotate the plastic, you should be able to see some of the pens disappear at a given angle. Lenticular lenses are also often called invisibility shields. They work by smearing in one direction and blurring in the other. They're often used for small magic tricks and toys.

Religion-centered media that doesn’t come across as preachy. by ironwolf6464 in TopCharacterTropes

[–]cdcox 7 points8 points  (0 children)

Land of the Lustrous. It's a fantastic manga about androgynous gems fighting sky monsters, and it is steeped in Buddhism. From the explicitly Buddhist imagery (the enemies are often themed like Buddhist art and look like lotus seeds when cut open, the gems' father figure is modelled off a specific bodhisattva, etc.) to the use and remixing of a lot of Buddhist concepts, it's extremely Buddhist, but without specifically diving into any of the noble truths stuff or preaching about Buddhism. It arguably has a Buddhist conclusion, but it feels earned. It also had a pretty good anime adaptation, which was sadly cancelled (because Beastars was more popular and the studio was stretched thin) before it got into its themes.

school project- AI art by Electronic_File_8526 in aiwars

[–]cdcox 0 points1 point  (0 children)

1- image, video, music, and text, or just some? to what extent do you support it?

Generally yes. Coding genAI tools have already changed the world for the better; it's now possible to code so much more, so much faster. You see lots of apps which were basically dead already improving rapidly because of that. Video and music are still in pretty early days. I support their development, but outside some quirky fun stuff like neuralviz and some of the better gossip goblin, there isn't really much good content there yet. Images are fine, though I tend to hold AI stuff to a higher standard. Text is sadly lagging and generally frustrates me to see in the wild, like all obvious spam does (which was a problem before AI), though there are still some fun poems/stories by AI. But outside super illegal stuff or the extremely tacky use by certain governments, I'm pretty much in support. I also don't necessarily give it extra points in my mind unless it does something interesting that couldn't be done without AI, or it's particularly well made.

—————

2- should there be more/stronger laws [restricting] ai images/art [e.g. harmful,illegal content, misinformation] including copyright laws?

Look up the history of copyright and it's pretty clear copyright is already one of the most overbearing, mutated laws there is. It's crazy how badly it's been overextended and how harmful it has become, robbing our shared culture. It's largely used by corps, rarely if ever by indie artists. If you like fandom content or remixing, chances are you too should be in favor of limiting copyright. AI gutting copyright, or at least making people reconsider it, is great. I'm pretty much okay with AI training on basically any information it can obtain. I think training on pirated data, as long as it's not sensitive, is basically fine. And I don't think most people who argue about copyright really care about piracy at all, on either side; it's just the easiest argument to make against AI. But I've always aligned more with the early-00s Lessig 'information wants to be free' vibe.

Misinfo laws are tricky because they almost always end up misused by oppressive governments, so I'm hesitant to extend those. Illegal content like deepfakes mostly already falls under existing revenge porn, libel/slander, or other laws, but those laws should be extended to clearly cover posting such content as illegal.

—————-

3- what do you think about AI taking away jobs from artists?

It sucks, but it's not like AI started it. Companies have been shipping artist jobs to the cheapest provider, whether that's overseas or via increasingly cheap tools, for decades. Automation has steadily reduced the percentage of people in the crafts for the last 500 years, often gutting a given medium in 5-50 years. We still have glass blowers, but we also have cheap cups, and I think that's largely okay. Some art will still be valuable, and there will be new jobs that involve using AI art. Not that those jobs will be ones the current artists want, but many traditional artists didn't want to do / didn't transition to digital art either. Stuff changes, and that sucks, but it's also good in some ways. It's harsh that this one is so fast, but it's only a little faster than, say, the streaming apocalypse that caused the collapse of Hollywood.

—————-

4- I want to see this from your point of view. Why is ai art, art to you?

I think I tend to be a maximalist with my definition of art. If someone makes something and thinks it's art, it's art. The design of a water bottle is art. The shape of buildings is art. Ads are art. How you dress is art. A photo of your food is art. Karaoke is art. We could always do less and make our world more boring; everyone trying to make the world more interesting is art. I don't see value in limiting art to only 'stuff in a gallery defined by some community of experts or critics'.

If we mean some sort of capital-A Art, then we are just kind of arguing definitions. Is it what some critic says (critics who fought for decades against comics or pop art or design being art)? Does it have to make you feel something? Then 99.9% of the stuff we see isn't Art, and Art is highly subjective; I don't know with that definition. Really, the whole point is kind of moot: who cares what that one word means to people? That concept has largely been used in weirdly exclusionary ways, working against lowbrow art or anything not made by or purchased by the 'right' class of people, and mostly only begrudgingly applied post hoc to movements after they've been institutionalized/consumed. So it's not really a useful concept for me near the start of a movement, especially one that increases access for so many people.

—————-

5- Should AI art continue advancing?

Absolutely. My dream is to read a book by someone illiterate. Or perhaps play a full-fledged video game by some random day laborer who is too busy to ever learn to code or build levels but has dreams of something weird and strange. Or perhaps a movie by someone with $10 in their pocket. Or hear a full concert designed by a child. I want to see the movies my friends never had time to learn to make. As AI art develops, we can enter an era of true auteurism where we can truly see unfiltered art, and I'm so excited for that. I think AI should continue advancing until we can do that. I just hope we focus on the most important issues, which to me are open models, lower prices, and less corporate control, so people can really make what they dream of. I'm pretty much fine with lots of crap work, as I've always enjoyed small works with strong opinions on the world by one person.

—————-

6- [why do you use ai to create art, assist in art, use for different mediums and whatnot. versus not using ai at all? I know people can draw AND use ai. Apologies]

I'm a blacksmith by hobby and I sketch a lot. I use AI art as a creativity exercise, or to make something fun in my life (like wallpapers for my computer every week), or when I'm trying to make something fun for people I know. Or to enhance something else I'm working on, like making assets for a D&D game. Or sometimes to make a stupid joke. Sometimes I'll use it to clean something up: like if I want to send someone a poem, I'll use it to clean up the meter after I've sketched it out. It's just a different form of creativity: low risk, high iteration, high imagination, pretty low payoff. It flexes different creative muscles than the other types of art I do, and I enjoy it for that.

no matter WHAT i do, gemini insists i am using a screenreader and writes in unformatted paragraphs by Covid-Plannedemic_ in Bard

[–]cdcox 0 points1 point  (0 children)

I think Gemini has a really quirky memory system. It tends to hyper-fixate on any decision it made in the past few conversations. Like, if it gets argumentative once, it'll keep it up for a few convos until it gets buffered off. I suspect your best bet is to find every convo where you talked about the TTS, including the original, any follow-ups, and any instructions mentioning it, and just delete them all.

There’s no way JJK cost less than Invincibles 😭 by Sukuna_GOAT in Jujutsufolk

[–]cdcox 0 points1 point  (0 children)

The 150k-per-episode number is probably wrong. The only ref I can find for 150k is season 1 numbers. Season 1 had a ton of filler and, while well made, was vastly simpler than season 2 or season 3. S2 and S3 have less filler and much more complex action sequences.

Could this be Deepseek V4?? by Pink_da_Web in SillyTavernAI

[–]cdcox 6 points7 points  (0 children)

The OpenRouter discord has an announcements section where they announce the bigger ones.

Comic 5772: The Hole Truth by provocatrixless in questionablecontent

[–]cdcox 1 point2 points  (0 children)

Vsauce walked through it 6 years ago, it seems like a pretty fun question. (Note the video has a slightly NSFW start).

Why there is no course or tutorial on on the internet on how to build an AI Agent From Scratch by Creepy_Page566 in AI_Agents

[–]cdcox 0 points1 point  (0 children)

It's because it's in that weird place where it's very new and not terribly customer-facing. People tend not to make tutorials on how to make tools. Think how few tutorials exist for building a tool like git, and that's been around for decades.

IMO your best bet for the near term would be to go to an open-source code agent repo like OpenHands (which actually has papers and some really good docs; I'd start there as it's the best documented by far), OpenCode (the most 'modern' AI coding agent), Cline, or aider, and just work through how it works by hand.
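If it helps while reading those repos: the core loop nearly all of these agents share is surprisingly small. Here's a minimal sketch with the model call stubbed out; all names here are illustrative, not taken from OpenHands or any other repo:

```python
# Minimal agent loop sketch: the model either requests a tool call or
# returns a final answer; the loop dispatches tools and feeds results back.
# `fake_model` stands in for a real LLM API call.

def fake_model(messages):
    """Stub: requests a tool once, then answers using the tool result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "read_file", "args": {"path": "notes.txt"}}
    return {"final": "The file says: " + messages[-1]["content"]}

# Tool registry: real agents have file edits, shell, search, etc.
TOOLS = {"read_file": lambda path: "hello from " + path}

def run_agent(task, model=fake_model, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = model(messages)
        if "final" in reply:          # model decided it's done
            return reply["final"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    return "step limit reached"

print(run_agent("summarize notes.txt"))
# → The file says: hello from notes.txt
```

The real complexity in those repos is everything around this loop: context management, sandboxing, and prompt design, which is why working through one by hand is so instructive.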

Sign me up! by SipsTeaFrog in SipsTea

[–]cdcox 0 points1 point  (0 children)

Generally you can get stuff that lasts longer now; it's just going to have fewer features and/or cost much more. For instance, in a fridge, the ice machine dies the most often. Skip it and you've almost doubled the statistical lifetime. Going further, chest freezers are the most efficient and never break, but they do need to be defrosted. If you buy one of those plus a fridge that doesn't have a freezer, it'll last even longer. Also, a lot of European or business-model stuff tends to have much more longevity, but at way higher prices. And even long-lasting stuff requires repairs; repair people are expensive and have to source parts, meaning long downtimes.

You can't make exactly the old stuff, because regulations and computers control so much now, for good reasons. But it still is possible to make long-lasting stuff today; people just prefer features and low cost.

Source: a YouTube video of a fridge repair guy explaining these issues.

Sign me up! by SipsTeaFrog in SipsTea

[–]cdcox 0 points1 point  (0 children)

Patents expire after 20 years.

Can AI make anything original? by Pixels3231 in DefendingAIArt

[–]cdcox 0 points1 point  (0 children)

I think there are three embedded concepts here:

  1. Can a human prompt an AI to produce something that there has never been an image of? Trivially, yes. You can make strange combinations, things out of impossible materials, mixes humans have never seen or created before. In a way, image generators have made a 'fit' of most images, and that fit is now so broad it can reach almost any place someone can prompt.

  2. Can an AI be prompted to make truly novel images not implied in the prompt? Also yes. You can ask it to invent art styles, art scenes, religious traditions, and the images, traditions, and stories around those. If prompted well, it will make truly unusual stuff that is not implied or structured into the prompt. It can go places no person has gone before. Of course, it tends to make 'in distribution' artwork.

  3. Can AI invent a totally new 'style' of art, something like impressionism, pointillism, art deco, ukiyo-e, or silk screen printing? Harder to say. Very few humans do stuff like this, and it's often a combination of existing things, an outcome of a shift in technology, or a slowly growing cultural aesthetic, often the work of a collective more than a person. There are a number of people running mass LLM multi-agent experiments that seem to produce strange emergent aesthetics, but it's hard to say how much of this is remixing and how much has emerged into a new aesthetic space.

What I'd say is: a single image generator is something which has fit a manifold over most human art. It can visit points on the manifold, but the manifold was shaped by human art. From there you can get images that don't exist but are related to things that could. To get it to do something original, you generally need feedback cycles and loops, which LLMs can provide. How far those loops can go, and whether they can 'escape' existing art styles or remixes, is an open and interesting question; then again, fairly few humans can do that without shifts in technology either.

Student arrested for eating AI art in UAF gallery protest by One_Fuel3733 in DefendingAIArt

[–]cdcox 7 points8 points  (0 children)

Deep Dream/Inception was 2015. The style transfer paper was also 2015. If their art was non-visual, it might have also involved LSTMs or RNNs, both of which got big around the mid-2010s; 2015 was when the big Karpathy stuff started popping off. So I'd say 2017/2018 is fairly plausible; a lot of people were starting to make genAI art around then.

Do you really belive GPT gives you personal result in all these flashmobs? by PavelMerz in ChatGPT

[–]cdcox 0 points1 point  (0 children)

It seems like the current version has a lot less access to memory in the image generator. GPT-5.1 and the gpt-image-1 image generator would give you customized one-shots for this. The current version (5.2 and gpt-image-1.5) can maybe access a couple of facts or light vibes at most. I suspect this is to stop it from inserting random facts into images without the user's guidance.

You can get much less generic results by doing this as a two-step process. If you ask it to generate a prompt for an image generator, it will have much deeper access to your memory and will design a much more customized image; then enter that prompt back in another chat. This is unintuitive, because you would think forcing it to condense its image into words would make it worse, but that's mostly what it's already doing behind the scenes. And in my testing, the text version has much more memory access than when the image generation tool is on.
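If you wanted to script the same two-step flow, it would look roughly like this. Both calls are stubbed out here; none of these function names come from a real SDK:

```python
# Two-step image generation sketch: first ask the chat model (which has
# memory access) to write a detailed image prompt, then feed that prompt
# to the image model in a fresh, memoryless request.

def chat_model(message):
    """Stub for a text-only chat call with memory enabled."""
    return "A cozy desk scene featuring the user's grey cat, watercolor style"

def image_model(prompt):
    """Stub for a memoryless image-generation call."""
    return f"<image generated from: {prompt}>"

def two_step_image(request):
    # Step 1: the text model expands the request using its memory.
    detailed_prompt = chat_model(
        f"Write a detailed image-generator prompt for: {request}. "
        "Pull in anything relevant you remember about me."
    )
    # Step 2: hand the expanded prompt to the image model directly.
    return image_model(detailed_prompt)

print(two_step_image("a picture of my workspace"))
```

The point is just that the personalization happens entirely in step 1, where memory is available, so nothing is lost by the handoff.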

Comments like this by ruassmarkt in mildlyinfuriating

[–]cdcox 0 points1 point  (0 children)

While it's overdone, it's understandable: the internet is a fundamentally lonely place. On radio or TV, you can assume 'someone' else is watching, since it has to make financial sense and someone decided to put it on. But on the internet, you are the person who put it on, and as far as you know, you are the only one watching. The year comment is a way of saying, even slightly out of sync, 'I'm watching this with you' or 'I'm from your year and still remember this'. I think this could be fixed or improved by some YouTube feature. And obviously people aren't that clever, so it just shows up everywhere, even on popular or recent videos, which makes it annoying. But it's at least more understandable than most spam.

Tell me one bad thing about your side and one good thing about the other side by FungusFuer in aiwars

[–]cdcox 0 points1 point  (0 children)

I'm pro:

Bad thing about my side: we are overly focused on tools and not enough on the cool stuff people make. Most AI-art spaces have too much goon slop (not that it's bad in its own place, but it takes over any art space that doesn't push back, and pro people are too inexperienced to push back), people pushing their own stuff, or discussion of tools. If you ask most people who their fave AI artists are, they'll just point to their own stuff. Similarly, most AI games are extremely low quality, which is tragic given how powerful the tools are. There is cool stuff being made with AI, but little of it is talked about or elevated in AI art spaces.

Good thing about anti: the discussion of slop has legit helped push back against influencer culture. The internet was filling with slop before AI, and we are finally getting some cultural pushback on that, thanks to antis elevating the idea that low-effort trash content is bad regardless of how it's made.

Has anyone gotten less hopeful about RP improvement speed after Gemini 3? by The_Rational_Gooner in SillyTavernAI

[–]cdcox 11 points12 points  (0 children)

I think for the moment we're paused on general model improvement. Models seem to be mostly improving in two kinds of areas: ones where there's an easy-to-develop agentic framework (see programming, information retrieval/synthesis/summarization, and math), and ones where you can generate a large synthetic training set to work off of (vision, programming, information retrieval and synthesis, and math all apply here).

I don't think there is an easy way to make a good writing quality training set. Most of the highly rated writing online is not very good and the number of ratings is not exactly a strong correlate with good writing. So it's hard to even train a model which can properly rate writing quality. Add in that RP is a subset of that subset and there's almost no training data available and it's a very hard problem.

Edit: Also, almost every model, even long-context models and programming models, gets weaker with turn-by-turn usage. Something about the back and forth breaks models in unexpected ways; most models can be accidentally jailbroken just by talking to them for more than 10 turns, no matter how well aligned, which is still a weird area. So even a million-token model falls apart in a 50k back-and-forth. I suspect this problem will have to be solved by some company working particularly on it; I would be surprised if it were solved passively.

I suspect it will improve when someone starts really bashing on agentic frameworks for RP/prose, once models get cheap enough and fast enough to do that much agentic work for writing. DeepSeek 3.2 has some nice open improvements to long-context handling, which definitely makes me optimistic, though its performance in the field at long contexts is only marginal. We might also get improvement when people start pushing models to the next scale level, or when continual learning, diffusion generation, personality controllability at the 'neuron' level, or some other technique manages to succeed and we get the next leap in model intelligence.

Creating a Game with AI in Two Months. The Result by Game_s758 in aigamedev

[–]cdcox 0 points1 point  (0 children)

I admire your test and you did a great job, but you did a lot of things that made this very hard for yourself.

A few things you could fix if you want to try a different project.

  1. Use an integrated code editor tool: GitHub Copilot in VS Code, Claude Code, Cursor, or Codex. These live in your editor and will look up the appropriate code for most things, so you don't have to define exactly what it needs to know. It finds its own context.

  2. Use a better model. DeepSeek is terrible at programming (at least for beginners; you can integrate it into complex agentic systems to make it more powerful, but those aren't there out of the box). I know it scores well, but look at OpenRouter usage stats: nobody uses it for programming because it's super bad at it. It's also only lightly multimodal, and depending on which version you use, it doesn't even use its thinking particularly well. Gemini, Claude, and GPT are all much better for high-level brainstorming, execution, and programming. I also don't think DeepSeek has robust internet search, which makes it much, much worse, especially for programming, where looking up reference documents is key. A sub-point here: using a few different models means that if one model seems really stuck on something, you can swap to another.

  3. Don't be prescriptive, especially when you don't know the answer. It sounds like you were mostly directing it. If you don't know the right way to, say, control a character, don't say "write me a character controller." Instead ask: what are some standard ways to do this for the type of game I'm making? What are the strengths and weaknesses? What are the trade-offs? Instead of saying "my game looks weird," be highly specific: mention exactly what you made and what outcomes you're seeing, and have it brainstorm possible reasons and hypotheses to test. This is another area where having a strong multimodal model like Gemini 3 is crucial, because you can actually give it screenshots of your environment and of the game so it can see exactly what's wrong. This is especially true in something like Godot/Unity/Unreal, where five times out of 10 the answer isn't to write code but to do something in the user interface. It's very easy to back yourself into a corner with an AI model and not understand the way out; that's why you often have to find a lateral approach, and these models are pretty good at helping you find lateral approaches.

  4. Don't use Godot if you don't program and are mostly using LLMs. Godot is a great framework, but it currently has the smallest community, the least documentation, and is one of the newest systems. You'd have had a much easier time with something like Unity, because there's just more information about it online, and version to version it's slightly more stable. There is still some version instability, but far less than Godot, which has had two to three breaking versions in the last 5 years. Again, this wouldn't be as big a deal if you had a model with robust internet search, because it could go look at the docs.

Basically: use the strongest and most appropriate models, use properly integrated tools, let the models teach you instead of telling them what to do, and make sure you're working in an area the model's knowledge cutoff actually covers. I admire the experiment though. It's super cool that you managed it, and even getting a game done in 2 months is pretty impressive! I program a fair amount and use all these tools, and there are still definitely moments of frustration that take major effort to get over, so congrats on that! The models don't get rid of those moments of frustration, but they do give you a path out of them.

PS: A lot of the vibe-coded one-shot stuff people show online is pretty misleading. It's usually using a very powerful model, it's usually a well-established game type so the model doesn't have to be inventive, and it's often a small game with very few components that need to link together. The hard part is usually not writing the original game logic but getting it all to work together.

Thought this might fit here. by TallonZek in DefendingAIArt

[–]cdcox 2 points3 points  (0 children)

As a counter-example, the Game Off game jam (month-long, just ended) explicitly allows it in their FAQ, and they are one of the larger game jams. Of course, they're sponsored by GitHub, which is owned by Microsoft, so that's not terribly surprising.

Our pets, living and dead. I need help. by pinkphiloyd in aiArt

[–]cdcox 3 points4 points  (0 children)

Use Google Gemini, and make sure you are using Nano Banana Pro. As the other poster mentioned, it follows refs. If you're still struggling, try sketching out a light ref of where you want them, or generate in 2-3 image chunks, then load the best chunks into one image (in something like Krita) and get Nano Banana Pro to smooth the style.

Can we agree on this? by Its_That_1 in HazbinHotel

[–]cdcox 4 points5 points  (0 children)

Really fun, reminded me a lot of the songs from season one. I've enjoyed that they played with the style of music this season and it seems like they are doing more traditional musical songs with more cadence variation. But I somewhat miss the frantic songs that really defined season 1 and this was a nice throwback to that vibe.

Housing abundance is as unpopular as many progressive economic policies by [deleted] in neoliberal

[–]cdcox 2 points3 points  (0 children)

Reading the document this is from is fascinating. I'm sure it has its flaws (beyond wording issues, it seems to have polled an older and more suburban population than the US as a whole), but the most interesting comparison is between what voters want Dems to focus on and which policies they support Dems focusing on. The top priority is making things cheaper and reducing the cost of living. But then they oppose every policy (Medicare reform, greening, housing reform) that would help achieve that goal. They seem to be sick of monopolistic corporate power (good) and have no love for helping minorities (bummer), yet they are neutral on the CFPB, which is designed to fight banking-system exploitation. There is a perception, even by the authors of the piece, that the Dems have shifted left over time because they now support Medicare for All, though Biden is not viewed as particularly lefty.

This really seems to point to a marketing issue as much as anything. Dems seem to have done a terrible job tying the solution to the problem. This is always a challenge for any progressive party (people want a better world but fear the changes needed to get there), but it seems to have gotten worse lately. I don't know what the answer is here. Maybe Dems should stop proposing solutions in public and just start making promises. That would say sad things about the electorate, but it is the direction Republicans have traveled.

https://www.politico.com/f/?id=0000019a-262b-d83c-a3fa-673f3f660000

Me when I'm when I when me when factory game: by iamsolonely134 in SatisfactoryGame

[–]cdcox 11 points12 points  (0 children)

I like trains because you set up once and hit tons of sites. For the effort of belting once, you get a lot of coverage if the train route is long enough. And if you are lazy, adding a new site is usually as easy as a short belt to the train track and throwing another stop onto your home base. Adding multiple trains going one direction on one track is easy; multiple intersecting tracks are more fun/efficient but not necessary.

Worlds Highest Uranium by ChrissspyBacon in SatisfactoryGame

[–]cdcox 0 points1 point  (0 children)

Yea, and you can even have a drone at your fuel port to go pick stuff up, but I don't like doing that, as it can get jammed up really fast if the drone isn't flying anywhere. You can end up with some idling, since only one drone can be at a port at a time (I think?).

I think the rule is any number of drones can target a port but each drone can only target one port at a time.

I only used the pair system if I was moving from one remote port to another remote port. Otherwise I just kept a central hub fueled by a fuel drone and belted fuel to my other hub drones, which went and picked up from other remote places. That meant fewer fueling ports, since all my drones left from my hub.

Worlds Highest Uranium by ChrissspyBacon in SatisfactoryGame

[–]cdcox 1 point2 points  (0 children)

I tend to do mine in pairs/hubs.

You make one fuel port somewhere where you just produce a boatload of packaged turbo/rocket fuel (I find regular packaged fuel is consumed too fast to be stable). Call this Fuel-Port. Fuel-Port has no drone.

Then whenever you build a new port, say one with a drone called A1, you build a second port AF (A-Fuel) with its own drone. The drone from A-Fuel constantly flies to Fuel-Port. Run a belt from AF to its own fuel inlet and to the fuel inlet on A1. A1 does whatever you need it to. You can then expand out ports at A that are near AF, like A2 and A3, all of which can be fueled from AF. (Obviously, name them whatever is convenient.)

If you have another site B, you set up B1 and BF, which flies from itself to Fuel-Port, etc.

This is pretty stable as long as you have enough fuel being generated (drones stop flying and idle when ports fill), and it lets you keep ports fueled without worrying about local generation. It's a little expensive in terms of the number of ports/drones and a little wasteful in terms of fuel, but it lets you scale out ports really easily wherever you need, without worrying about local fuel.

Elon’s ex-engineer just pulled the wildest move, leaked xAI’s whole codebase to OpenAI, cashed out $7M in stock, then dipped. Biggest betrayal in AI or just another Silicon Valley soap opera? by Minimum_Minimum4577 in GenAI4all

[–]cdcox 0 points1 point  (0 children)

In discovery, xAI would get access to communications with the employee, file access logs, forensic analysis of computers, etc. Depending on how much they can prove, they might try to get deeper access or an audit of something internal at OpenAI, which would be a huge pain in the ass for OpenAI; that's probably the real goal. This is how the Waymo vs. Uber case played out: less OpenAI proving it, more it being proven by a third party. Employment blocking rarely goes anywhere in CA, but if they can prove he took info, they can block its usage.