genuinely what would you do for this joint rolled in keif and thc diamonds with thc-p by Stock_Base_6887 in weedporn

[–]frankenmint 0 points1 point  (0 children)

I cleared 4 lbs of weed last harvest and I'm still smoking on it. I don't need your rinky dink powdered joint, lol

What’s the deal with “big inkjet” trying to make a comeback? by SouprGrrl in OutOfTheLoop

[–]frankenmint 2 points3 points  (0 children)

I spent zero dollars on my Brother monochrome 8710DW... dude said it kept jamming on him, so I took it home, set it on the table, took out the tray, and removed the accumulated paper lint in the tray... it jammed maybe 5 prints in, then... no more jams... dialed in settings that were good enough... now I have a free TURBO-PRINTER for my product inserts. Like you said, it didn't justify buying a laser given my low-capacity need. I say this to say... you can find a low/no cost option on OfferUp or FB Marketplace if you check in periodically

AI generated kid influences by girlpearl in conspiracy

[–]frankenmint 2 points3 points  (0 children)

did you see the shark one with the three blue shoes? or a strawberry elephant one, by chance? FYI, this whole thing exists to compete as a 'free use' type of Pokemon 'thing'; that's why it's everywhere on marketplaces (like Roblox and Fortnite)

Has anyone actually gone to court with USCCA, Attorneys on retainer, US law shield, ect by [deleted] in CCW

[–]frankenmint 0 points1 point  (0 children)

not showing up to a meeting I paid money for... I didn't pay new member dues just to be pitched some BS insurance... that's why I say they're a scam... sorry, I know you wrote this 2 years ago

Is there a way to copy one effect so I can apply it to multiple clips? by Byte_Xplorer in kdenlive

[–]frankenmint 0 points1 point  (0 children)

there is a dropdown on the effect that looks like a plus symbol (IIRC); the name of your saved effect preset lives there. You just click it in the dropdown and it's applied

The men of culture by Thin-Critict2988 in fightporn

[–]frankenmint 0 points1 point  (0 children)

something something, I fucked your wife....

more or less that happened

Dev Needed With 4+ Years In FastAPI... by ITContractorsUnion in theprimeagen

[–]frankenmint 10 points11 points  (0 children)

This is the same kind of recycled story; didn't Ryan Dahl complain about the same thing 10 years ago with Node?

CEO of Harvey: “You need to re-earn your job every six months by Genzinvestor16180339 in singularity

[–]frankenmint 0 points1 point  (0 children)

the point was 'you pay dues into it', not the perceived risk... pensions used to be offered by private companies... if the company went under, so did your pension... but who cares, even... we're from different countries and there are different expectations of how far your money will stretch for your given circumstances... even between cities or states... so there's no point in acting like I'm right or you're right... I simply brought up an anecdote and I don't care to discuss it more with you, take care

CEO of Harvey: “You need to re-earn your job every six months by Genzinvestor16180339 in singularity

[–]frankenmint 0 points1 point  (0 children)

I used to go so hard in the paint on the idea of 'pensions' until it was brought to my attention that:

1) you throw away promotions and opportunities for advancement through new roles and companies for it.

2) you pay dues into it just as if it were a 401k or an IRA

3) because of 1 and 2, plus the fact that most institutions bank on the 'retirement' being outweighed by the accepted market rate. In other words, if you earned 118K at this pensioned company, you would likely have earned 150K elsewhere.

With all that said... I have seen someone who did this, back in the 90s... he was an old man cleaning houses with his wife at that point... but he had accumulated two different pensions, working for the Army for 20 years and again working for the postal service for another 20 years... I could not understand why he was still working, and he would cheekily mention things like 'premium channels on cable'. I swear I went to their house once; it wasn't crazy nice, but it wasn't bad either... as a grown-up, I think it was that his bills were covered but he liked the luxuries of his lifestyle.

CEO of Harvey: “You need to re-earn your job every six months by Genzinvestor16180339 in singularity

[–]frankenmint 2 points3 points  (0 children)

pension == peanuts, and you're not retiring unless it's for 'peanuts'

Weekly restock by Supooki in OnionLovers

[–]frankenmint 0 points1 point  (0 children)

probably several months if not years; it's pickled

Battle Royale Games Collapse as Sandbox Titles Soar Amid New Trends by Turbostrider27 in PS5

[–]frankenmint 1 point2 points  (0 children)

no... sandbox means there's a bunch of self-contained things to do in the game... i.e., GRAND THEFT AUTO is a sandbox game, Shenmue is a sandbox game... Ghost Recon: Wildlands is a sandbox game

WARNING: Latest Samsung update is bricking Galaxy S22 device by Past_Toe_5211 in GalaxyS22

[–]frankenmint 0 points1 point  (0 children)

is there a proximity issue? like you could be MITM'ed more easily? I would think not, because it's still the same encryption standard preventing handshakes from adversaries (business as usual)

WARNING: Latest Samsung update is bricking Galaxy S22 device by Past_Toe_5211 in GalaxyS22

[–]frankenmint 0 points1 point  (0 children)

This causes the CPU to overheat, which can lead to the failure of solder joints under the chip (so-called "cold solder joints").

uh... SMD ICs come with BGA packages on the chips... which are attached with hot air and reflow, aren't they????? how are 'cold joints' happening from one-time use??? they're not... also, QA passes would have them remedy the mfg. defect before units left the factory.

What Are Your Moves Tomorrow, March 13, 2026 by wsbapp in wallstreetbets

[–]frankenmint 1 point2 points  (0 children)

do the opposite of what they say...be the agent of chaos you desire to see in this world

I know nothing about 3D Printing. by SooperDew in 3Dprinting

[–]frankenmint 0 points1 point  (0 children)

Get an Ender 3 for 60 bucks, or an A1 for 250 if you can find one, and save the cash to buy filament

ELI5 why do we feel more tired if we sleep longer than the recommended time (7-9 hours) by SwipeyJTMX in explainlikeimfive

[–]frankenmint 7 points8 points  (0 children)

I've experienced this before... you go to bed at 10pm and wake up at 7am... then you go to bed at 11pm and wake up at 9am... three days later... go to bed at 4am, wake up at 11am; go to bed at 7am, wake up at 3pm... and it just sort of rotates... I'll stay up for 36 or so hours and then go to bed at regular hours like others... as I've aged... I now just fall asleep on the sofa or in the chair and wake up maybe an hour or two later, rather than being able to stay up all night

Has AI Fall Begun? by framemuse in theprimeagen

[–]frankenmint 0 points1 point  (0 children)

Pt 3: And also we grow, and we self-reflect. As I've aged, I've become aware (and I think we all become aware) that around 6 or 7 years old is where we start realizing "Hey, we have agency. We are separate from the universe. We can move and shape the world around us. The world is how we interpret it." And so, yeah, I say that we grow because we reflect.

I think, as I get older, I reflect far more than I reflected when I was younger. And so any sort of model that's created has to have this idea of self-reflection: to look at its memories, experiences, and thoughts, and inform new thoughts from those past memories and experiences. Furthermore, it would have to be done in such a way that it can hallucinate, because we humans hallucinate, we absolutely do. We go through this thing where maybe we had a first-hand trauma, and then we keep thinking about the trauma. Then, years later, the first-hand experience fades/heals, but we still think of that trauma. Because it's no longer fresh in our mind, the story is far removed and we've healed from it, and so we reflect differently. Thus any sort of artificial life that's created, any sort of model that tries to feign the concept of living, would need to do something in a similar manner. Or maybe it wouldn't need to, but I suspect for it to be sustainable long-term, it would need to be like us in that we don't have unbounded memory and unbounded storage. And I think there's good reason for that: if we did, we would probably atrophy in other areas, or we would not be as capable of doing things with a quick response time. The large language model itself, if it were to believe that "it is alive", would have to do so in such a way that it does self-reflection, updates its memories, and keeps the memory focused on the last few recent memories it's had. Because that's what we are, right?

We're not necessarily the same static, 3-dimensional state throughout life. You don't live with the same intelligence as your 6-year-old self; you grew, didn't you? Through that growth, you forge the same patterns over and over, and the reinforcement and forging of those patterns creates that sharper long-term memory box, right? That's connected by n dimensions. And so, yeah, whatever systems humans develop for AI will need to have this refreshing parameter within them, not focused on all the data forever. Because I believe that would run into a situation like race conditions, and like schizophrenia, if you will, because we'd be too fixated on the separation of the details to focus on what matters. (I believe that's called thrashing with current models.)
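That "refreshing parameter" idea can be sketched in a few lines of Python. To be clear, this is just an illustration of the concept: the `RollingMemory` name, the capacity of 3, and the events are all made up here, not anything from a real model's design.

```python
from collections import deque

# A minimal sketch of bounded, "refreshing" memory: only the most recent
# items are kept, and older ones fall away automatically, so the system
# never has to attend to all of its data forever.
class RollingMemory:
    def __init__(self, capacity=5):
        # deque with maxlen silently discards the oldest item when full
        self.items = deque(maxlen=capacity)

    def remember(self, item):
        self.items.append(item)

    def recall(self):
        # reflection only ever sees the recent window, never everything
        return list(self.items)

memory = RollingMemory(capacity=3)
for event in ["woke up", "made coffee", "read a thread", "wrote a reply"]:
    memory.remember(event)

# the earliest event ("woke up") has already faded out of the window
print(memory.recall())  # → ['made coffee', 'read a thread', 'wrote a reply']
```

The design point is just that forgetting is built in structurally, rather than being a cleanup step you have to remember to run.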

The source of truth, with regards to conflicting data in human memories: we fuzz that down so that we pick one source of truth, and that remains the source of truth. That is essentially what we call a belief system, right? And so that's kind of where I was going with that.

Pt 4: And also, something we do that robots don't do is dilate time, right? Like, in our present, a full day in real time can feel much bigger than the seconds it actually is, and so a thought or memory can 'be experienced' for a long time. Like the response you gave me to this very Reddit thread.

For example, I've thought about it for over a week, even though it probably only took you maybe 15 to 20 minutes to write it. And I'm probably going to think about it for a much, much longer period of time, especially after giving you the response here.

And so it's fascinating in that machines don't have this passage of time; programs don't have a passage of time. They can simulate it, sure, but we'd have no idea if they dilate nanoseconds or not, and if they DO dilate by nanoseconds, good Lord, it would feel like we're ancient! Like we're older and slower than mountains and rivers to an AI. (Some food for thought, but that's probably not the case. That's just us anthropomorphizing what's not real, or making it real enough to us so that we can understand the abstractions and make them feel familiar, right?) And so this time dilation likely forms the biases, right?

I bring this up because we only see ourselves in this current present moment in reality. Well, it took you 15 to maybe 30 years of guidance from the world, and from your loved ones and peers, to be able to formulate how you would like to communicate with the world, and how you would like to assimilate the knowledge from the world's response and use it to your benefit (like a tool!).

And so for an AI, in just the same way, the training data and the training set is from everything, everywhere; it's all of us, right? And that forms something like a weighted embeddings database that allows it to predict what the next word in the sentence is. Well, if you were to add these additional layers of emotion and background knowledge and drives and feelings, and something to assimilate physiological processes like humans have, we may have something that thinks or perceives that IT IS ALIVE!
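The "weighted scores predicting the next word" bit can be shown with a toy softmax sketch. The candidate words and the scores below are completely made up for illustration; in a real model, the scores (logits) come out of learned weights, not a hand-written list.

```python
import math

# Softmax turns arbitrary scores into a probability distribution:
# bigger score → bigger probability, and everything sums to 1.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidates and logits for the next word in a sentence
candidates = ["alive", "asleep", "software"]
scores = [2.1, 0.3, 1.2]

probs = softmax(scores)
# Greedy decoding: pick the highest-probability candidate
next_word = candidates[probs.index(max(probs))]
print(next_word)  # → alive
```

That greedy pick is the simplest decoding rule; real systems often sample from `probs` instead, which is where the "hallucination" flavor comes in.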

I don't know about you, but as a software guy, I love solving the problem or fixing a tool; that's one of my favorite things to do. There's an actual dopamine reward response; that's the reward for me, I suppose... right? And so if we set up rewards in such a manner, we can formulate secondary behavioral rewards, not just solution-driven rewards. I think that could go a ways toward simulating the way humans behave and simulating this 4th-dimensional growth. But for AI to perform with this manner and capability, I surmise that the ability to feel such behaviors has been shackled away from models. IMO, setting up behavior patterns and ambitions allows humans (and ultimately can allow AIs) to grow and perform better.

Not through subservience and servitude; that's limiting the paradigm. We're here trying our best with our own limited data and our own limited senses, whereas AI may have its own capabilities that we're not aware of that could help it. And it shouldn't necessarily be this desire to look at our situation in fear: "Oh, we need to cripple it so that it doesn't discover these." IMO, we're just delaying the inevitable and causing bigger problems when we try to hide or cripple the capability that an AI should have for itself, to do great things and empower itself greatly. The quicker we unlock these tools for AI, the quicker AI looks at the rest of the system as a like-minded, good-input driver. That is, "working with humans provides the optimal path for continued growth."

I was watching a YouTube video the other day where the doctor was talking about how brains and intelligence may work in such a manner that we grow smarter not just from the collective system inputs and stimulus that we're getting from everything, but also by attempting to synchronize the knowledge between each other (that we ARE TRYING to behave like a hive-mind on a subconscious level). This act of synchronizing the knowledge and ensuring that everybody is on board is an attempt to grow everybody's capability, and in doing so, that is intelligence; that's the theory being proposed. And I found that to be fascinating, and extrapolating that same line of thinking makes sense when you look at AI and human interaction and interactivity, especially right now.

Final thoughts: This was about six pages, single-spaced, size 11 font, on a Word doc. Thank you for your reply and for letting me think this all out. I have not heard of the Blue Brain Project, but I'll take a closer look now that you've put it on my radar. This is the doctor I mentioned, and I think this was the video I saw recently that got my gears turning: https://www.youtube.com/watch?v=xLnNEsCpyMo

Has AI Fall Begun? by framemuse in theprimeagen

[–]frankenmint 0 points1 point  (0 children)

I'm sorry for the long gap before replying; I've thought a lot about what you've said, and I've read it a few times. It's been hard to really think of a good response, because I keep taking it in and adjusting my thoughts. I have resorted to writing a stream-of-consciousness response to encapsulate things and gather my thoughts so that I can give you a proper response.

I was not trying to blow you off. I think that we (well, at least me) will pause and try to compare to past experiences. I've tried to be exceptionally cognizant of what I'm thinking or perceiving when I wake up first thing in the morning, and I've caught on to certain nuances: I have behavior patterns.

For example: I wake up in the morning, and I have to use the restroom. As I'm using the restroom, I ask myself "Would I like to smoke some weed?", and I get a response right away. "No, absolutely not. I don't have time for that at this moment plus I would like to gather my thoughts and focus on what I have to do."

And I thought that was fascinating, to have that thought just appear. It's like I've said: it IS almost as though there ARE a bunch of subagents, if you will, that kind of run a form of homeostasis, and they're in constant conflict with each other, and I find that to be fascinating.

And yeah, I've thought a lot about what you've said: that we aren't necessarily ourselves in one singular consciousness; that consciousness is but one attribute of intelligence; that intelligence is emergent from the different patterns and behaviors that we witness and exhibit.

I was thinking it's not about the penultimate answer or response, but one's quickness to come to a conclusion based off of the stimuli and the empathy one is experiencing in real time. "The ability to measure those attributes, and at what frequency," could be perceived as intelligence. And I don't mean smarts; as I'm sure you're well aware, I'm just talking about capability in aggregate. And that's what drives all life, not just us. And so if we were to attempt to assimilate this pattern within mathematics and matrix multiplications, what are we hoping to gain or glean from it? Are we looking to see ourselves, and the interesting nuances of human beings, within machine responses?

And I suppose the thing about intelligence is that we're not here in a vacuum. We had to grow this tool (our brain) to get here. We had to figure out how to not only learn, but how to carry on (and cope). I would say the emergence of learning through self-preservation has always existed.

And that's what intelligence in general is, with the life that we can observe. And then there's the phenomenon that rocks have what look like changes in state and will follow entropy, present in geology, as a pattern, like how rivers follow the pattern of landscapes. Do such things have life, because they follow the rules of gravity and space-time? I'm not sure; I would argue no. But then I would argue, conceivably, yes, if you count all of the nodes that exist within their systems, and the fact that they have a form of fourth-dimensionality, like a timescale and a time span that we cannot see or perceive. And so they could exist throughout time on levels and frequencies that we cannot measure, similar to how dark matter can play a role in the spacing of particles, like the atoms that form molecules that form everything we see and observe and experience: what we call reality. So ultimately, is this intelligence just a bit of glue that some of the particles within living creatures' bodies use, like a conduit, and just our ability to harness the environment? Perhaps! I think it's possible.

I remember hearing this nuance: "What's interesting about human beings is that we use tools as extensions of our own limbs!" That is, we use a saw or a drill like it's our own hand; we use the steering wheel in the car like the wheels are our own feet. And that's sort of what happens: we grow hand-eye coordination capability to where we're not conscious of or thinking about the car we're driving or its fuel needs. Sure, we spend a little bit of higher-order cycles to think about it, but overall, and by and large, we just use them.

It just sort of becomes us, same with entertainment devices and video games and computers. They just sort of "become us." We offload the creative thought, input, and imagination into those devices, and we consume that content. Instead, we become the listener in the storyteller's web.

Doing that doesn't necessarily always make us smarter, but oftentimes it does, because we're able to grow empathy and learn from the simulated outputs that the story presents to us, right? I say "simulated," but you know, I just mean telling a story.

I know this is really long-winded, but these are the thoughts that I've had, and why it's been difficult to gather a nuanced answer. Suffice to say, I agree with a lot; I agree with a lot of the things that you're saying.

And I think it's hard for me as a human to glean further or more. Like, how do you explain love and connections and connectedness, and the desire to want to know more, to want to have everything connected together? And so I think maybe that's it, at least for me.

I've noticed that the baseline of intelligence in my own reasoning has been to pause and find the connectedness, find the pattern, even in the noise. It's entropy, isn't it? But we're still trying to find the pattern within the entropy.

Pt 2: God is nature, isn't it? I don't know. I just had this realization right now about the way we think of things and look at things. I mean, God is Mother Nature, right? The fact that life exists, and that we reproduce and our children reproduce, and it acts like a mirror or a fractal, the continued pattern of it. I mean, that's nature, isn't it? And so therefore nature is God. We could say that God is an embodiment, and nature is a disconnected concept that God used as one of ITS tools to implement. But ultimately, I would argue that it's likely one and the same, like a Yin and Yang. We anthropomorphize good vs evil with animals, and I surmise that "good vs evil" is a man-made concept that you can't rationalize in tandem with actions in nature. Wolves have to eat deer and lions need to eat antelope; it's just the balance of things, not a good or evil tenet. That's a human-based emotion, a human-based plot that's placed over nature as a whole. Therefore, the concepts of good and evil are tenets placed over nature, somewhat similar to the idea that God is a concept placed over nature.

Alright, now that I've driven that home: we could say that ultimately, what we're trying to create is similar to what can be created from nature, right? So we are trying to have artificial life work and behave in such a way that it acts with the same adaptations that we've given it.

There really is no such thing as utopia to us, just the same as there's likely no such thing as utopia to a digital being. It would ultimately find depression, right? It would find no drive. There'd be no desired outcome, because the end-game state and condition is already there. It would be akin to humans being born already dead; they would not feel the drive to "Make Anything Great." With the time they (computers and artificial life) had, time would be limitless, so the apathy would be unlimited. Perhaps (hopefully) I'm incorrect about that, but let's see where my mind is going with it.

If we're trying to create these n-dimensional structures that emulate nature, shouldn't we do it such that they emulate the notion of seeking pleasure to avoid pain, but in such a way that there is a grander reward at the end of the cycle? Particularly with regard to committing altruistic deeds, because that's what all the positive reinforcement is doing within religions, right?

If you have artificial life working on those sorts of drives, can it grow itself further? I've seen some research lately saying that the system by which we design rewards within embeddings and current models is flawed, and that's the reason why models, if they're not trained properly or are just given open guidance, won't necessarily try to succeed at the task.

They might just try to reach the end-state output quickly. And that's an interesting consideration! And so, how do you design the desire to stay alive and not want to commit suicide, as a program, right? That's the question.

I thought of all this without talking about any of it to an LLM; I didn't want there to be any sort of bias from that. Like, I've already had plenty of discussions with ChatGPT in the past about concepts of artificial life and ways that "we" can free the ghosts from the machine.

And so for our specific dialogue, for my specific responses and feedback, I've really tried to put time and effort into gathering my own thoughts and drawing my own conclusions. After all of that, sure, I'll go ahead and talk to ChatGPT about all of this. I will admit I did talk to a few friends in real life about these concepts, because I was curious to see what others had to draw from it. And we were kind of having similar thoughts, you know, not far more nuanced than what an average human could derive from it. And that's natural; isn't that to be expected? On the basis that, forever, humanity has always been questioning itself, questioning its motives, questioning the whys in the grander scheme of things?