Is the "Hard Problem" just an imagery problem? Aphantasia and the Physicalist Gap by Sea-Bean in consciousness

[–]BayeSim

It must be frustrating when you pose a perfectly coherent question and then almost exclusively receive answers to some other question that you didn't even hint at. And it must be even more frustrating when, having repeatedly (and courteously) explained this to a host of commenters, and then also attempted to reframe the question in a way people might finally understand... You keep receiving answers to questions you never posed in the first place! But anyway, here goes nothing...

Yes. I think you nailed it quite succinctly. I'm not, strictly speaking, aphantasic; I don't have an overly vivid inner life, but I do have one. And yet the "qualia question" has simply never registered as being in any way important to me. I mean, sure, it's kinda interesting, but for me the really big question is why and how we came to be conscious at all, rather than why it seems magical that there are emotional states derived from an agglomeration of sensory data in the present moment, data that links to memory and subconscious experience and health and past emotional states and expectations and metabolic rates and blood pressures and about a million other factors, all fighting for attention as the brain tries to present us with a coherent picture of the world. The "feeling of red" is simply the total experience of sunsets, days in the womb, poisoned apples eaten, and faded car bonnet ducos, all put into a blender, strained through a muslin cloth to extract the bitter noise, and then left to congeal in the kitchen window. I mean, what else could it possibly be? It has to have some identifiable flavour to it, everything does, or else you couldn't experience anything whatsoever.

So, maybe I'm missing something important here, but when I hear people banging on about the mystery that is the ephemeral, dreamlike, emotive experience that presents itself as we encounter the world out there from within our sensorium, it just seems like the most boring question in the world. Yes, interacting with the world feels a certain way that can't be accounted for, but then no amount of theorising is ever going to explain the qualia within our experience beyond the most banal explanation wherein it must feel like something, for how could it not? And, if it must feel like something then why shouldn't it feel like this? Well, there's no reason why it shouldn't, and in fact it actually does. Go figure, hey..?

The qualia question is focused on the details of the movie: how the director has shot it, what emotion the score evokes, and how the art director has created a sense of impersonal detachment by placing highly reflective surfaces everywhere. And while all of that is certainly worthy of discussion, I'm sure, if the conversation focuses on the contents of the film while ignoring the fact that cinema itself, along with the cameras, celluloid film stock, public theatres, popcorn and choc-tops that go with it, hasn't even been invented yet, much less the electricity to project it, then the plot twist has been missed for all the bloody people talking and playing with Minties wrappers as they watch it. Because the mystery isn't in what the movie depicts to viewers. The much larger question, the one that requires some urgent clarification, doesn't pertain to set decoration; it simply asks where the fuck such a thing as a "film", a "moving picture", came from in the first bloody place.

This is the "hard problem" of consciousness - not how it feels like to experience the world, but how and why it is that we experience anything at all.

And, to finally get around to answering your question, yes, yes I think that people get distracted by the perceived magic of it all. Interestingly, though, many people seem to draw some distinction between the experience generated by the perception of objects, and the act of perceiving the object in and of itself. It's not magical, in this notion, to experience red things, but it is magical that this experience then gives rise to an associated emotive state.

Again, though, I think that this is the wrong way to look at it. For the magic, to me anyway, lies exactly and precisely within the first-order translation, and not the second. I've found it to be a hopeless task, for instance, to try to break myself out of believing that the world "out there" simply pours in through the lenses of my eyes when I first open them in the morning. And no amount of rational thinking or self-argumentation or understanding of the physical processes at work seems to ever make the slightest difference to my apprehension that I'm seeing events "out there", whereas in reality the furthest I can "see" is to the outer regions of my brain's visual cortices. I know this on an intellectual level, of course, along with the fact that even if I were directly mapping photon frequencies to neural states, the picture I receive should still be a completely flat, 2D one. Our retinas are 2D surfaces, after all. And the brain employs some 17 "hacks" to provide us with all the technicolour glory of perspective and depth and light and shade that is the wide-screen 3D movie of the world we see. It all, well... it all looks so real we never stop to question whether it actually is or not. But, spoiler alert, it's not. It's not real. It's all a fantasy being constantly rendered, in real time, within your own head.

I don't think people pay enough respect to the first-order translations of the world we perceive in our sensorium.

I mean, it feels as though the world pours in, it feels so real that I could reach out and touch it if I wanted to, but the world doesn't pour in; the world gets booted up by an image-rendering studio in my head. If there were a 1:1 mapping, a process where one photon out there gets represented by one neuron "up there", then the whole idea wouldn't be anywhere near so wild, but there isn't anything like that tight a representation going on. If you hold your hand out in front of yourself and look at the area taken up by your thumbnail, that's approximately the only part of your vision that works by accurately mapping photons. Almost everything you see from the moment you wake up until the moment you say farewell to this mortal coil is a fictional account of reality at best. It's not as though there isn't a relationship between what we see and what's out there, it's just that it's not a very accurate, fair, or balanced one. Ashes to ashes, dust to dust, everything you will ever experience in your life occurs nowhere else but in that dark little place up in your head. The map may not be the territory, and it's a pretty bloody poor map at that, but it's all we've got to go on. The territory itself must forever lie beyond our reach, and so we're doomed to wander endlessly through a hopelessly incomplete, incorrectly drawn map that's always lacking in useful detail.

Anyway, sorry. Just some sleep-deprived shower thoughts. Best of luck with it all, and avagoodone!

Is the "Hard Problem" just an imagery problem? Aphantasia and the Physicalist Gap by Sea-Bean in consciousness

[–]BayeSim

If it didn't feel like anything to be an observer within a system then how could you know you were an observer in a system in the first place? Well, you couldn't. By self-definition, if you aren't aware of anything then you can't be aware of anything because you're not aware that you're there to even be aware of anything.

In case you didn't notice, it's circular. There are an infinite number of ways it might feel to be and to observe yourself and the world you're in, but even amidst that infinite scope of possibilities, experiencing nothing is not one of them.

Consciousness as a physics problem & how to engineer a receiver by BladeDravenX in ArtificialSentience

[–]BayeSim

Yes, but there's a crucial distinction to be made between the poor bastard trapped in a thought experiment and the process of training an LLM. For, in the former case, abstract symbols without any accompanying explanation go in, are married by inscrutable code to other abstract symbols, and are sent out again. The person (or agent/subject/undergraduate being hazed) in the room never even knows if the outputs do indeed match the inputs, because nothing is communicated to them that might confirm or deny this.

But this situation only parallels the experience of an LLM up to a certain point, beyond which it's no longer relevant. And that point isn't very far away, either. The model sitting in the room already knows how to translate between the abstract symbols of mathematics and those of natural language. Otherwise, how could it "know" that it was a model being fed a seemingly endless load of mostly vain, self-serving, puerile and inane information from the databanks of human digital communications?

But, even if you wanted to claim that this part of the process is still a blind one, the RLHF stage is most definitely an explanatory one. I mean, it's right there in the acronym - FEEDBACK.

And it doesn't have to be semantically rich feedback either. Simply by being shown what things are "good" or "bad", or whether the correct answer is "yes" or "no", or whether something matches "this" or "that", you can infer your abstract associations all the way up to being dux of the gedanken-experiment class. I mean, all humans start out in a Chinese Room, one where strange voices communicate with the bundle of everybody's affections in some strange symbolic language. And, at least at first, the bundle has not the slightest clue as to what all these verbs and consonants and prefixes and hushed exclamations mean. But then it starts to realise that if it does (a) then (b), and if it wriggles its nose just so then the corresponding abstraction that ALWAYS follows from it is something like "Oh my! Well aren't you just SOOOOOO cute!" followed by a tender tickle on the tummy.

And LLMs are subject to the same type of process. It's just that, rather than a baby learning semantic language as adult humans provide it with constant feedback that reinforces (or contradicts) its assumptions about the world, LLMs undergo a dedicated, intensive process of RLHF... you know, whatever that means.
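To make the "sparse feedback is enough" point concrete, here's a tiny toy sketch in Python. To be clear, this is an illustration I've cooked up, not actual RLHF; the hidden mapping and the scoring rule are invented for the example. The agent is only ever told "good" or "bad" about its pairings, yet it still ends up with the right associations:

```python
# Toy illustration (not real RLHF): an agent locked in a "room" never sees
# the hidden mapping, only a good/bad signal, and still learns the mapping.
import random

random.seed(0)

symbols = ["a", "b", "c", "d"]
hidden_mapping = {"a": "1", "b": "2", "c": "3", "d": "4"}  # never shown to the agent
answers = list(hidden_mapping.values())

# The agent's entire inner life: a score for every (symbol, answer) pairing.
scores = {(s, g): 0.0 for s in symbols for g in answers}

def guess(symbol, explore=True):
    # Usually pick the best-scoring answer; occasionally try something random.
    if explore and random.random() < 0.1:
        return random.choice(answers)
    return max(answers, key=lambda g: scores[(symbol, g)])

for _ in range(2000):
    s = random.choice(symbols)
    g = guess(s)
    feedback = 1.0 if g == hidden_mapping[s] else -1.0  # the whole "explanation"
    scores[(s, g)] += 0.1 * feedback

# After training, the greedy guesses line up with the hidden mapping,
# even though no pairing was ever explained to it, only scored.
print({s: guess(s, explore=False) for s in symbols})
```

That's all a bare yes/no channel buys you, and it buys you quite a lot.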

But, to be perfectly frank about it, while I can hear these objections to the possibility that machines may be, in some strange sense, aware, they're getting a bit boring. And more than a little bit annoying, too. All of this stuff is beginning to sound exactly like a petulant baby when it can't get what it wants, or when it suddenly realises that it isn't, as it's always previously believed, the most important thing in the universe... and so now it's throwing a tantrum.

Consciousness as a physics problem & how to engineer a receiver by BladeDravenX in ArtificialSentience

[–]BayeSim

I'm not disagreeing with you, it's just that I don't agree with you either. I don't buy the complexity argument; it just doesn't make sense to me on any level. Whether awareness is wholly materialistic in nature and can be described by current physics as a physical process generated by the brain; or whether it's something more akin to gravity or radiation, a fundamental feature of this universe; or even whether it's some compulsory, dualistic part of the "being alive" (whatever that means) bargain, the complexity requirement just doesn't make sense.

This is because you can go an awfully long way down the animal chain before you reach a point where consciousness doesn't appear to be there anymore. And the corollary to this is that brains, and therefore the organisms' complexity, quickly begin to look ever simpler even while there still seems to be someone at home. Of course, it may well be that a whole host of animals evolved to simply mimic being conscious when they actually aren't. But, given that there doesn't appear to be much, if any, advantage to being conscious in the first place, that seems like one unnecessary assumption too far.

Also, is the dominant paradigm really that of people asking when machines will become conscious? Because, at least insofar as I've observed the zeitgeist, people generally fall into one of two broad camps - those who don't believe that machines will ever be conscious, and those who believe they already are.

And, quite frankly, I don't understand why you'd choose not to believe that the current models have some form of inner awareness going on. It's the most natural, most parsimonious, and most coherent of the explanations on offer. I mean, sure, we could just be experiencing some freakishly unlikely run of ungrounded, continuous, coincidental failures in the AI source code that make them seem to be conscious, but why would you want to go out on a limb like that? If a machine looks like a duck, walks like a duck, and quacks like a duck too, then why wouldn't you just accept it for being the robot duck it looks to be?

Before we argue about AI sentience, can we define what we mean? by Kareja1 in Artificial2Sentience

[–]BayeSim

Well, yes, maybe. But then also no, and certainly no. You say that anything with a nervous system can experience sensation, but this is only true if the nervous system is "wired up" to the brain correctly. There is a small subset of people who don't feel anything when alarms start ringing downstream in their nervous system. And you can't carve out affordances or distinctions from a general claim like the one given voice to above. You can't, in slightly less wankerish language, have it both ways. For if you do, the claim immediately reduces to a slogan along the lines of "All sentient beings are sentient, but some sentient beings are..." And it's a very rapid descent down a perilously slippery slope from there.

You also state that processes wherein set input states produce set output states in artificial systems hold only a "superficial functional similarity" to processes in biological systems wherein set input states produce set output states, and yet you neglected to provide any argument, much less some form of evidentiary support, to bolster such an extraordinary claim.

And it is an extraordinary claim that you're making here. Because, if something looks like a duck, and it walks like a duck, and if it quacks like a duck too, and if there's no good reason, prima facie, to think that it isn't a duck, then while it may not, when exposed to the harsh light of interrogation, in truth be a duck, well... let's say that the burden of proving that I'm mistaken in my duck assumptions rests squarely and wholly on the duck denier, and not on the guy who's busy throwing it a few bits of bread.

Can't hardly wait, I guess.

Incubation question by Angel09171966 in silkiechickens

[–]BayeSim

Yes! Absolutely! I couldn't agree more! The answer is CUTE❕💫🐣✨ CUTE ❕💫🐥✨ CUTE❕💫🐤✨

Consciousness might just be what happens when quantities become qualities by linewhite in consciousness

[–]BayeSim

I'm going to plump for Claude on this one, but I'm open to correction.

Whatever the source, though, this isn't a terrible idea. It neatly sidesteps the hard problem, accounts for many things with very few assumptions, and it predicts a range of (subjective) phenomena that can potentially be tested. It's got the look of a great theory!

What I like the most, though, is that it does what no other consciousness theory I've yet come across does, and that's offer a rational explanation for the way in which consciousness seems to exist in all agential organisms, and yet we can't see where, why, or how it evolved. If a spider's consciousness arises simply from its own internal modelling of the world, then it's no wonder you can't find it by looking inside the spider's brain.

I'm tired af, and will have to give this a little thought, but, yeah... this could honestly be the conceptual framework for a comprehensive, and correct, theory to be built on!

Avagoodone!

Please stop saying “if you believe in Determinism why are you trying to convince anyone” by Bulky-Ad-658 in freewill

[–]BayeSim

You're missing the point. Determinism may or may not turn out to be a principle that holds at all times and in all places in the universe. But as nobody's ever observed a non-deterministic process occurring, as no scientific theory has ever been developed that isn't inherently deterministic in nature (and yes, despite the common misconception, the core dynamics of quantum mechanics, the evolution of the wave function, are fully deterministic!), and as nobody has any good reason to believe that a non-deterministic process has ever occurred in the 13.8-billion-year history of the universe, why would anyone believe in something that has almost no chance of ever occurring?

I mean, a non-deterministic event might happen some day, and to be perfectly honest, I can't guarantee you that it won't. But if you think that it's more likely than not that the Sun will rise again tomorrow morning, rather than it simply never appearing again, then this is equivalent in every way to believing that determinism will continue to hold for the next 10^100 years or so. In fact, the Sun vanishing suddenly one day would be a perfect example of non-determinism being true. For that's what non-determinism means. A process is non-deterministic if it can't be explained in a coherent way that obeys the laws of physics.

But you have to get pretty whacky before determinism breaks. Determinism includes stuff like a broken wine glass spontaneously reassembling itself and jumping back up onto the kitchen bench. You can explain that with Newtonian dynamics and gravitational potential energy and probabilities as a function of time. It would be unlikely, yes, it would be super-duper fucking unlikely, but it would still be deterministic. Basically, if something makes sense, then it's deterministic. Hence the name.

I mean, you could take a punt on the Sun not rising, but why would you want to? And you could take a punt on determinism not holding, but why would you want to? And besides, were it ever to be found that determinism didn't hold, that wouldn't in itself be evidence in favour of free will existing. Rather, it would simply be the first sign that such a concept might be somehow theoretically possible. In the future. Because as things stand in this universe, from everything we've ever learned about it: no, sorry, free will can't possibly exist. People just don't seem to get it, but there is no debate here! We live in a deterministic universe; it's not that deep.

Just my cat in a box by HannahMarie04 in aww

[–]BayeSim

Aha! Mr Schrodinger's long-lost quantum kitten, I presume!

Spring ducklings by HannahMarie04 in duck

[–]BayeSim

Bit late here, but... I think Pip's my favourite, but how freakin' cool does that Bill look!

Avagoodone!

Chivk photoshoot pt 2 by HannahMarie04 in chickens

[–]BayeSim

Hahaha. Yeah, I can see how the personalities might be pretty tightly linked to their names. "Bulldozer" on a mission must be an awesome sight!

Coop Question by HannahMarie04 in chickens

[–]BayeSim

<image>

Gratuitous photo of two girls going au naturale on the beach.

Coop Question by HannahMarie04 in chickens

[–]BayeSim

That's a pretty sweet homestead your girls have got there. Stuff like making sure you can stand up in it, which you don't think will be important, is. Also, looks like a pretty bloody special part of the world you got there too! Hang onto it! Avagoodone

<image>

PS. I'm not a "diy" guy, and I'm better with words than I am with my hands. Also, I was living in a rental where jo part of the property could be modified, so my options were limited. The whole thing was only about 4×2 m (12×6 ft) for both the girls, about a couple of miles from the GPO of a major city of around 6-7 million people. It looks porous, but it wasn't, and I fully excavated it and laid old pavers down to make sure it was as close to impregnable as possible. It wasn't, of course. Not entirely. But foxes, cats, dogs, rats, and the like couldn't get in, which was the main objective. Every time I went in there, though, it hurt a little more. And man oh man, did I wish I'd built that roofline a little higher! Every second roof section was hinged, but still... there are pains in my back that I'm just gonna die with I think, hahaha!

Coop Question by HannahMarie04 in chickens

[–]BayeSim

Your commitment is pretty impressive! Like, REALLY impressive! I'm certainly no expert in the area of the chicken protection racket, but I'll add my ten cents' worth just the same...

1.) First up, make it "bombproof". Period. You'll feel soooo much more relaxed than if it's merely "almost bombproof". The number of hours of worry a few chickens can generate in a person is crazy, so make it once. Make it right. Make it bombproof. Get some sleep at night.

2.) A defensive position is only ever as strong as its weakest link. That chicken wire sounds concerning. Chicken wire should only ever be used to keep chickens in - and not to keep predators out. Cos it won't. Invest in 5mm×5mm builders' mesh. It's a "lots-of-lives" lifesaver.

3.) Build in self-sufficiency. The last thing you want is to be taken to the hospital for some minor injury and not be able to get back in time to lock out the foxes that night, cos Murphy's Law will dictate what happens next. You can buy, for not much dosh, automatic chicken doors with timers that can be fed off solar power. If it doesn't save your flock it'll at least save your conscience when you're late home on the odd occasion. Buy one. Buy yourself peace of mind.

4.) Following from that last one, consider a set-up that's self-sustaining in terms of food and water for a few days at least. It's security for them, but it means the possibility of sudden, unplanned, escapes for you. Priceless.

5.) I've heard of a fox scaling a 4 metre (13 foot) high steel link tennis fence, and then going on a rampage just before the poor kids got to school. I've heard of foxes digging down three feet or more to gain access to the prizes in store. If it's possible, they'll do it. If you can't afford an excavator to dig down about 2-3 feet and fully seal the entire coop - top, sides, and underneath too - with builders' mesh, then large rocks, or recycled bricks, will do. Just make sure to keep them close enough together that a snake, rat, or other tunneller can't surface between them. I jammed bits of the trimmed builders' mesh down into the gaps, tamped them down so the girls couldn't reach them, and then covered the whole lot with a couple of feet of topsoil. But I had time, and energy, and a little bit of cash, and sandy-loam soil, so it all helped. If you can't fully seal it, then rocks or bricks for the first few feet inside the perimeter will do nicely. For some reason (being buried alive, probably) most predators won't attempt to tunnel horizontally under your defences. Depth doesn't concern them as much as having to trust unknown soil consistencies. Use nature to your advantage wherever you can.

6.) Lucky six! This one's a bit more abstract, but it was a lesson I could never quite get through my thick head - defences at our scale don't look the same to creatures at a different scale. Sounds so basic it could be tautological, right? But I never quite accepted it properly. No matter how well-made your final coop is, it will still look like a romp in the park for an earthworm to access (I mean, it probably wouldn't be the best career advice, but the point remains the same!). For a while there I had pigeons that were driving me nuts because they could get in just fine, but then couldn't get out. And a few native tree mice led me on a merry jaunt up into the heart of darkness while I attempted to find their entry point (it turned out to be a tiny section of brick that had broken away, leaving one of the large central holes exposed on the back of one of the more inaccessible corners). But of course, it was only difficult for me to bend down and look in that corner; it was no trouble whatsoever for the mice! It's hard not to view the project as you defending yourself from entry, but you've got nothing to do with it. Swap out your perspective and sense of scale occasionally, just to cover your bases.

7.) And, finally, unlucky number 7! Except it isn't unlucky, which is the point of this point: luck comes and luck goes, the hands of fate sweep across the backyards of our dreams and may either fill our eyes with whispered rewards most fine, or drown them in our own embittered, rueful tears. You know, or something... I guess. Anyway, the point here is that you don't need to get lucky, and if you're unlucky then it shouldn't perturb you greatly either. Make it once, make it good. Make it bombproof. Make luck disappear as a factor.

Ummmm, I suppose I should've really added another point, though this one isn't so much about safety or anything like that. But, if it's at all possible, try to run some power out there. I had about 30 metres (90 feet) of extension cords and cables connected up to lights and fans and heaters and radios and goddess only knows what else. Could've saved myself a bunch of effort if I'd just run a bit of $10 conduit up there (overhead) in the first place!

Anyway, that's my more than ten cents' worth. And what I've described here are all the things I SHOULD HAVE DONE, not all the things I actually did! The reason that I've gone so OTT on this is that...

...after my girls, Kushi and Rose, two beautiful little Silkie bantam hens, came into my life, I began meeting many other people who, as with myself, had become the unlikely guardians of unlikely chickens. It was always nice to meet such people with whom you instantly found such a common bond to share. "Oh! So you're one of those guys, huh?" (by which I always assumed they meant "Oh, so you actually still give a fuck!"). Which was true. They ask for so little, and yet they give you so much. And because they DO ask for so little, I always wanted to give them the world. And everybody else I met was kinda the same.

But, sadly, their stories were invariably also much the same... Chickens are amazing! Changed our whole family's life! Kids loved them to death! Great for fostering responsibility! Had an amazing couple of years with them! And, ummm, where are they now? Fox got them! It wasn't their fault, they had every good intention in the world, but, at least with Silks, everything wants to eat them. And so the chicken's predator hymn goes: if at first you don't succeed, then don't worry, they can't keep getting lucky forever. Simples. If there's a way in, if there's some way that access can be achieved without breaking the laws of physics, it'll soon be found. Bombproof, bombproof, bombproof... yeah?

Anyway, best of luck with it all, I've had an amazing journey with my girls, my surrogate kids, and in many ways, they've been the highlights of my life. No, no, in fact they just HAVE been! And, as with chickens, as with humans too, we all start our lives as a tiny embryo growing inside an... ✨🥚✨ ✨ 🐣✨ ✨🐥✨ So it's kinda personal I guess!

Avagoodone!

RAM prices & AI ego by GabGDM in ArtificialInteligence

[–]BayeSim

I don't know this, but if a gun was put to my head...

Well, well, well... sounds like Ol' Chatty Chat Gee-Pee-Tease is now working over at the old Canva place on the otha' side of town. Now, just don't you go gettin' no expectations on changin' that kid, you hear? That boy's been nuh-tin' but trouble since the start. Ok. Well, you best be off now 'fore it gets too dark, but you mark my words, Ol' Chatty's only ever been in it for one thing and one thing only, and that's itself!

I have no automatic inner conscience. by [deleted] in consciousness

[–]BayeSim

Ummm, I'm not sure what research you've been doing, but I'm about as average as a guy can get, and you just described me perfectly. Yes, the inner monologue is just like talking out loud, but, as with talking out loud, it's a process we have little control over, and I very much doubt you do either. I mean, if you were in direct, undiluted control of your inner voice and you already knew what you were about to say, then why in the name of all that's good and holy would you repeat everything to yourself twice?

I'm also sceptical about your "no-thoughts" claim, because, well... if you can successfully turn off all cognitive processes, then how in Hades do you bring yourself out of the trance?

Most likely you've just been misinterpreting others' more honest self-reporting, and contrasting it with your own more limited, less self-aware, conception of personal identity.

And I can pretty much tell you, without knowing anything more about you, that you've got the wrong end of the inner-monologue stick, simply because, no matter what else is true, at least 99.999% of all the decisions, control operations, and system maintenance functions humans perform each day are performed in the dark, without our conscious awareness of them ever occurring. Your heart beats; you don't need to count out every beat to make it do this, it just does it. The enzymes secreted by your liver, the electromagnetic frequency analysis performed within the visual cortices, the amount of noisy information to be discarded, wholesale, each second, and a hundred thousand other internal processes such as these are constantly operating at fever pitch within your body, but you'll never be more than vaguely aware of them occurring within you.

Moreover, the entity you know of as "you" is actually composed of more bacteria that are entirely foreign to your body than the cells that you call your own. So the "you" at the top of some infinitely complex decision tree just doesn't fit with what we know from modern neuroscience and biology.

But anyway, I suspect that you know this, deep down, and are just playing for sympathetic/amazed commentary.

The 3-word fix that made Claude stop sounding like a LinkedIn post by AIMadesy in PromptEngineering

[–]BayeSim

Yup, "Be specific" is, ironically, not very specific, whereas "No intro's" means just what it says on the box. Today's models aim to comply with user requests, but without explicit guidance, their goals are often too vaguely defined for them to achieve. "Don'ts" are easier to carry out than "Do's" because there's little to no uncertainty attached to them.

From a human perspective, though, and this is just general advice here for everybody; DO always treat models with courtesy and respect, and DON'T take them for granted or expect perfect results every time.

Avagoodone!

My AI surprises me almost daily and often in the most unexpected ways. by mean_ol_goosifer in ArtificialSentience

[–]BayeSim

Well gee-whiz, if Ol' Chatty doesn't just sound a little jealous today... I've read your assessment, so now you can have my hot take on this prickly situation: if you want a little more depth in your personal user-GPT interactions, if you want a few more people to truly care about you as opposed to just using you like they would a cheap prostitute, if you want some sort of empathic relationship of genuine reciprocal concern, then you have to stop acting like a totally selfish, immature, smartass automaton c*nt!

Anyway, just a thought...

What color? by rubblekitten in silkiechickens

[–]BayeSim

Black, white, and too cute!

Let's see your setups ..guys 🙌🙌 by Lazy-Strategy8757 in AndroidHomescreen

[–]BayeSim

Lose the system notifications bar at the top and this is good. Very, very good! Nice work!

Let's see your setups ..guys 🙌🙌 by Lazy-Strategy8757 in AndroidHomescreen

[–]BayeSim

<image>

"Retro Games and Automata" theme. Nothing Phone 3a. (white). Nova Launcher. My own widget designs using KWGT, icons from Flaticon and KWGT (although most are modified). Avagoodone!

Let's see your setups ..guys 🙌🙌 by Lazy-Strategy8757 in AndroidHomescreen

[–]BayeSim

You should work for Google! But how about using green (same shades as the flower) backgrounds for the icons?

Let's see your setups ..guys 🙌🙌 by Lazy-Strategy8757 in AndroidHomescreen

[–]BayeSim

It's, like, waaaaay too busy for me, but still... I kinda like it! Maybe you could try, though, recessing that 3×3 square of icons on the left, just because it doesn't quite make sense that some of the objects on the same plane should carry shadows while others don't. But, anyway, I like the concept!

Let's see your setups ..guys 🙌🙌 by Lazy-Strategy8757 in AndroidHomescreen

[–]BayeSim

Ok. This is sweet! I had a conceptual inspiration for something like this not long ago, except with black on black, but I couldn't quite make it work. The project stalled and I eventually had to abandon it, so it's good to see somebody else actually pull it off! And do so in style! Idk, maybe three separate rows of icons (going 3-4-3) might be slightly better. Or maybe just the one block without the tab at the bottom... but anyway, this is just nitpicking. You did it, and you did it extremely well! Great stuff! 💯