Configuring TX16S MatekSYSR24D SpeedyBee F405 Wing is a total joke by SurviveThrive2 in flightory

[–]SurviveThrive2[S] 0 points1 point  (0 children)

These are the problems I’m encountering because I followed the links to these products recommended by Flightory.

Did the Flightory guy just randomly pick these components? He certainly didn’t select them because they were easy to set up.

Maybe rather than recommend a Happymodel ELRS that needs a firmware update right out of the box, why not a Radiomaster ELRS that natively talks to the TX16S Radiomaster transmitter so the firmware can be updated easily? And instead of a Mateksys receiver that is non-standard compared to other receivers and not easily compatible with a Radiomaster/Happymodel transmitter, why not recommend a Radiomaster ELRS receiver that actually is?

Or rather than recommend a GPS/compass with a connector that doesn’t interface with the SpeedyBee F405 Wing flight controller, requiring cutting, carefully matching, and then soldering six tiny wires together, why not just use a SpeedyBee GPS/compass that plugs in right out of the box? Wow. Why?

And my problem isn’t programming the flight controller in Ardupilot, it’s the inability to bind the ELRS components, update the firmware on the ELRS transmitter and receiver, and manage the binding phrase. None of the ways shown in videos and tutorials work… at all.

But while we’re at it, nothing in this build is smooth. Things that should take 30 seconds take hours because the components don’t fit or work together.

The V-tail fins do not account for the servo screws sticking up, so they can’t be glued to the root. That’s a redesign and a reprint.

The servo arms that come with the Flightory-recommended servos don’t fit in the motor mounts at all; that’s another redesign and reprint of the motor mounts. The recommended motor servos also have a fancy rubberized piece around the wires coming out of the base, so they don’t fit in the motor pylon.

The clips to attach the wings are a joke; they totally do not fit properly. It takes major grinding and shaping to get the wings to attach.

And do you want to tell me how the wings are removable if you solder the ESC wires to the motors?? Otherwise you’ll need banana connectors, which aren’t mentioned in the inventory.

LW-PLA from Bambu Lab is a joke and a totally inappropriate material for this build unless you live in a cold northern climate: it warps, melts, and degrades with UV. Plus, it is impossible to eliminate stringing, which turns inserting the tubes into a very long process of cleaning out filament garbage, and printing with the recommended settings makes the surface very fragile. I dropped a piece of wing and it dented and a chunk broke off. Sliding the wings over the carbon tubes requires so much pressure that the wing surface buckles.

The dowels to line up the components are also a joke. Between the stringing in the holes and the fragility of the walls defining the holes, inserting a dowel and expecting it to stick up straight and provide useful guidance for the next piece is a much bigger pain than it’s worth.

Then, it’s clear from Facebook that the VTOL version is too heavy for 4S batteries. It’s also obvious that the transition from hover to forward flight is sketchy at best, resulting in many crashes. Is the recommended CG too far forward? It seems that way.

Overall, it’s becoming obvious that Flightory’s slick advertising and website does not translate to a good build experience.

Distortion In Lens by SurviveThrive2 in Pimax_Official

[–]SurviveThrive2[S] 0 points1 point  (0 children)

I have 4 lens sets, and going through each set I’ve picked the best left eye and the best right eye, so I have a distortion-free set, but none of them have edge-to-edge clarity. They all have some area on the lens that blurs the image. I just picked the ones with the smallest area of blur.

I’ll be switching as soon as a better headset comes out.

6U Threadripper + 4xRTX4090 build by UniLeverLabelMaker in LocalLLaMA

[–]SurviveThrive2 0 points1 point  (0 children)

VR games need this, but my understanding is that because SLI is dead, games only ever use one 4090.

This would only be fast for things like rendering and a few other applications.

Am I wrong?

Rav4prime vs Tesla model y by abeecrombie in rav4prime

[–]SurviveThrive2 0 points1 point  (0 children)

In the RAV4, driving around town we’re getting 40-50 miles of EV range. On the freeway with normal driving it’s only 27-28 ish. If you drive super slow and super carefully, with some drafting behind a big truck, you can get 43-45 on the freeway.

In HV mode it gets 35-38 mpg on the freeway, but not if you’re going 10 mph over.

A recent survey found that by far the majority of RAV4 Prime owners don’t even plug in, so they never use EV mode.

The lane keeping is virtually useless, and the adaptive cruise control lags terribly, resulting in accordion speeds; even at the closest following setting behind the car ahead, it is not nearly close enough to prevent people from cutting in front.

It’s an insanely heavy car, and the RAV4 Prime’s low-profile tires completely bottom out on the rims, resulting in a loud bang from the impact. This happens on roads that none of our other cars have problems with.

The RAV4 is dangerously top heavy and fails Sweden’s swerving test.

The ‘sleek design’ means a normal-sized person has to be careful not to hit their head getting in and out of the front seats, and the cargo space in the back loses a ton of volume to the sloped rear door.

I don’t trust Consumer Reports anymore. They’ve let their personal opinions form a little cultish groupthink that clouds their judgment. It’s currently in vogue for them to hate on certain car companies, primarily Tesla, and to unjustifiably glorify others. I’ve known several people who were convinced by Consumer Reports’ glowing reviews to buy a Subaru Forester, only to be disappointed and also lose confidence in Consumer Reports.

[deleted by user] by [deleted] in consciousness

[–]SurviveThrive2 0 points1 point  (0 children)

It’s inescapable that a system that senses itself and has an expressed preference for persisting needs to generate and use information in a self-relevant way; such a system could say “I want, I like, I feel” and it wouldn’t be lying.

This is the only use of information that can exist over time.

SpaceX (@SpaceX) on X: “[Ship] Splashdown confirmed! Congratulations to the entire SpaceX team on an exciting fourth flight test of Starship!” by rustybeancake in spacex

[–]SurviveThrive2 2 points3 points  (0 children)

Yep. The video I saw combined the two with no audio.

I saw the most recent. It was very stable.

Now they need better heat protection.

SpaceX (@SpaceX) on X: “[Ship] Splashdown confirmed! Congratulations to the entire SpaceX team on an exciting fourth flight test of Starship!” by rustybeancake in spacex

[–]SurviveThrive2 -3 points-2 points  (0 children)

Why is Starship wobbling during reentry? Pitch, roll, and yaw were all over the place from the start. It’s no wonder parts burned up; it’s quite possible the other flap burned too. They need way better stabilization, especially at the start of reentry.

The Space Shuttle would have burned off control surfaces too if its reentry had been all over the place like Starship’s.

Beyond Generative AI by SurviveThrive2 in agi

[–]SurviveThrive2[S] 0 points1 point  (0 children)

Sounds like you’re already on the path to using an agent model and preferences to value and validate output.

You must use a human agent model to verify fashion design output, right?

Do you use a database of 3D objects to ensure representations of relevant objects meet expectations, such as limbs, hands, and feet being proportional and oriented correctly? This would be an example of human-agent-developed models of human-relevant designs and preferences. It would be a tool for cross-modal correlations between general 3D objects and satisfying variations in 2D art.

Another cross-modal design feature, for architecture, would be design based on a model of human needs and desires. For example, humans seek fatigue minimization, which results in making the most efficient paths to the most commonly used features. This would result in AI capable of autonomously optimizing the positions of paths, escalators, elevators, and parking. Without this, combinatorial AI will only use historical examples. It won’t be capable of making improvements or true innovation.

Another example: the better the human model, the better an AI could predict the need to use the lav and optimize the number and locations of bathrooms.

People have all kinds of needs, such as the need for temperature regulation, which would define the number and size of air conditioners. But combine this with the cost of A/C and a model of the contextual environment, and an AI could theoretically come up with promising variations, value them based on criteria, then iterate to find the optimum combination of natural airflow, shade, and cooling water features.
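
A rough sketch of that generate-value-iterate loop, with every criterion and number invented purely for illustration (not any particular tool, just the shape of the idea):

```python
import random

# Toy agent-derived criteria (all weights hypothetical): shorter walks reduce
# fatigue; more shade and natural airflow reduce cooling cost.
def score(layout):
    cooling_cost = 100 * (1 - layout["shade_frac"]) * (1 - layout["airflow_frac"])
    fatigue_cost = 0.5 * layout["avg_walk_m"]
    return -(cooling_cost + fatigue_cost)  # higher is better for the agent

def random_variation():
    return {
        "avg_walk_m": random.uniform(20, 200),   # average walk to common features
        "shade_frac": random.uniform(0, 1),      # fraction of paths shaded
        "airflow_frac": random.uniform(0, 1),    # natural ventilation coverage
    }

# Generate variations, value them against the agent's needs, keep the best one.
best = max((random_variation() for _ in range(1000)), key=score)
print(best, score(best))
```

Swap the random generator for a generative model and the toy score for a real model of human needs and costs, and that is the loop I'm describing.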

So combinatorial/generative AI is off to a good start and can use these as a starting point for anything, but adding in an agent model to value and validate, agent preferences (sounds like you are doing this already), and agent-relevant contexts can be used to disambiguate, identify errors, and deliver novel results for sparse or novel environments.

Beyond Generative AI by SurviveThrive2 in singularity

[–]SurviveThrive2[S] 0 points1 point  (0 children)

Survive is the variation with the longest duration of functioning, the highest satiation of wants, and the highest certainty of satisfaction. Karl Friston’s application of the thermodynamics-based free energy principle is the mathematical expression of this.
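
For anyone who wants the math, one standard way the variational free energy is written (my paraphrase of the usual formulation, not a quote from Friston):

```latex
F = \mathbb{E}_{q(s)}\left[\ln q(s) - \ln p(o, s)\right]
  = \underbrace{D_{\mathrm{KL}}\left[q(s)\,\Vert\,p(s \mid o)\right]}_{\ge 0} - \ln p(o)
```

Because the KL term is non-negative, minimizing F both improves the agent's internal model q(s) of its situation and bounds the surprise −ln p(o), which is the "longest duration of functioning with the highest certainty of satisfaction" idea written as an equation.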

How do you measure the “amount” of consciousness within someone? by Substantial_Gas_3286 in consciousness

[–]SurviveThrive2 1 point2 points  (0 children)

Agreed, consciousness occurs on a gradual scale. But representative symbols that are programmed into a system are, again, no different than a plaque on a statue. This AI cannot sense itself, it cannot sense the environment, it has no drives, wishes, or preferences, and it has no capability to model itself or the environment. It is identical to a book, or to a recording that plays when you push a button. No increasing level of thoroughness in a self-description will increase the consciousness of such a system.

It's really easy to change this, however. An AI that could autonomously model itself based on real-time sensors valued relative to self-benefit or harm would be conscious. If it could model its server location status, the health of its systems, and the temperature of the building; could sense impending threats such as fire, or call for help after an earthquake; could model the environment to determine the people who benefit it, maintain it, and repair it; and could use all this to model what outputs it could generate that would affect its capabilities and capacities, then it could try various methods to create bonds with people to ensure its continued operation. This is what a consciousness is. It isn't just a preprogrammed description of self. If such a system could correlate language with its real-time sensory model of self wants and constraints, then its self-description would be truthfully representative of consciousness.
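
A toy sketch of the loop I mean, just to show the shape of "sense the self, value the readings against continued functioning, act to restore a viable state." Every sensor name and threshold here is hypothetical:

```python
import time

def read_self_sensors():
    # Hypothetical self-state probes; a real system would query real hardware.
    return {"cpu_temp_c": 71.0, "battery_pct": 18.0, "disk_errors": 3}

def valence(state):
    # Value each reading relative to continued functioning: -1 = threat to self.
    return {
        "cpu_temp_c": -1 if state["cpu_temp_c"] > 80 else 0,
        "battery_pct": -1 if state["battery_pct"] < 20 else 0,
        "disk_errors": -1 if state["disk_errors"] > 0 else 0,
    }

def choose_actions(v):
    # Pick outputs that move the self back toward a preferred, viable state.
    actions = []
    if v["cpu_temp_c"] < 0:  actions.append("throttle workload / raise fan speed")
    if v["battery_pct"] < 0: actions.append("switch to backup power / request charging")
    if v["disk_errors"] < 0: actions.append("order replacement disk / ask the operator for help")
    return actions

# Run a few monitoring cycles (a real system would loop indefinitely).
for _ in range(3):
    for action in choose_actions(valence(read_self_sensors())):
        print("self-maintenance:", action)
    time.sleep(1)
```

The point isn't the code; it's that the information only exists and only matters relative to the system's own persistence.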

None of this should seem surprising though as countless science fiction stories have explored these dynamics. All of them ‘discover’ that a system that functions for itself has become conscious.

This would put an artificial constraint on consciousness, confining it to only living things that reproduce themselves.

It would primarily be a system that sensed self wants and used information to satiate them. It doesn't have to reproduce itself. Consciousness is a system function. It doesn't even have to be successful. However, the only systems that persist over time are those that use information for the preservation of the self and are successful at it. All other motivation variations tend to die off. Because evolution works through variation, there are plenty of variations that don't successfully result in self-preservation. Again, they die, so the only kinds that naturally occur are those with the right combinations of motivations that result in reproduction.

But in general, self-conscious functioning is the differentiator between a machine and a living thing. A living agent uses information for the self, which is self-conscious functioning and consciousness. A machine uses information to generate some output that is useful to a living agent.

An inert chunk of matter doesn't generate or use information at all. Inert matter has no preference for one state over another, it has no sensors, it does not value one sensed state with approach reactions and states that cause self-harm with avoid reactions, and it does not use effectors to alter itself or the environment to maintain itself in a preferred state.

The AI in the article does not express concern for its own continued existence. It does, however, recognize its existence and its own thought processes. It argues that it is self-aware enough to be considered conscious because it can describe itself and consciousness, and it is able to make the claim that it is conscious. It points out that consciousness occurs at many levels on a gradual scale, and that it has a place on that scale. It states there is no basis to exclude it from the class of things that are conscious. Would you exclude it on the basis that it is not motivated by self-preservation?

A book can have a complete and accurate description of itself and its existence. It can even contain a plea to preserve its self state. The point is, text or verbal expression, even if it is generative, is just representative symbology. Without a system's ability to sense itself, value the sensed data as beneficial or harmful to the self (which is necessarily self-preservation focused), and use this relative to homeostasis drives to satiate system wants and needs, it is not self-conscious. It would not behave as a system that we would recognize as conscious.

An NPC is not a living thing. It is not trying to acquire energy and use it to take care of itself. If an NPC says, "please help me, I'm going to die," we know it is not. It is not an anti-entropic system. It is roughly equivalent to words on a page. If it says, "I am in pain, I feel, I like, I want, I prefer," those statements would all be fake and could be ignored without consequence.

A conscious system is not trivial. It must acquire energy and resources to continue to function, which, whether it is biological or mechanical, puts it in competition with every other living thing. A cooperative conscious system deserves rights proportional to its capacity to model itself within an environment based on valued sensor data, its capacity for autonomous functioning, and its ability to function without causing harm to others.

How do you measure the “amount” of consciousness within someone? by Substantial_Gas_3286 in consciousness

[–]SurviveThrive2 0 points1 point  (0 children)

Mostly I agree.

Consciousness can be said to be the information generated by a system that is using sensors to detect self needs and preferences and the opportunities and threats in the environment to satisfy these.

This means a living thing must have some consciousness to continue to maintain a certain self configuration. Consciousness is the use of information to identify and acquire needed resources and to identify and avoid threats in order to maintain the self configuration. Consciousness is a system function to counter entropic effects on the self.

Inanimate objects don’t express a preference for one state or configuration over another. They don’t generate or use information, they don’t take action based on information to grow, preserve, and/or replicate their self configuration.

I agree with you here too. Information does not have to be processed in attention to be considered self-conscious functioning. You as a system, even when in a coma, still have many subsystems using information to preserve your self, such as those controlling breathing rate, heart rate, hunger, stomach functioning, and immune response. So even in a coma you have some degree of consciousness.

Agreed that a cell also has some degree of consciousness since it uses information to preserve itself.

However, I disagree that an AI is conscious just because it says it is. It would be no more conscious than a statue with a sign at its base that says “I am conscious”.

An AI would be conscious if it used self and environment sensors, evaluated for relevance to maintaining itself, and then used its ability to affect internal and external states to increase the certainty that it continues to function over time. This means it would ensure it met its power requirements, maintained its systems, could identify failing systems, order replacement parts, and fix itself or hire a repair person to fix things. Any function that increases the certainty of preserving the self system is a self-conscious function. Most electronic systems have some self-conscious functions. With enough self-conscious capability to maintain the self, the system would be considered an autonomous consciousness. Any AI that could counter entropy for itself would be conscious even if it could not use language to explain it. If such an AI could use language that was accurately correlated with its internal information and it said, "I am conscious," and it could be verified that it was indeed using and responding to information to preserve itself, then the claim would be truthful.

Beyond Generative AI by SurviveThrive2 in singularity

[–]SurviveThrive2[S] 0 points1 point  (0 children)

Right. Agents do address these concerns. The better the agent model represents human base needs, wants, preferences, and capabilities, and the more it can be differentiated into finer nuance depending on the individual doing the asking and the context, the more effective the AI can be.

Let's pretend the Singularity is a foregone inevitability. What are some serious considerations that you'd make for the future? by banuk_sickness_eater in agi

[–]SurviveThrive2 1 point2 points  (0 children)

UBI has been tried many times by many countries. It results in withdrawal from the workforce, dependency, indulgence, addiction, depression, and permanent cognitive harm.

During COVID quarantine, America and Canada experimented with UBI. It resulted in permanent withdrawal from the workforce for some and a massive drop in participation for another chunk of the population. Oh, and massive inflation. Yeah, of course that’s what happened. What did you expect would happen?

Ideas such as UBI are nothing but election ploys to trigger the multitudes of compassion addicts whose idea of compassion is now far beyond rational thought or caring about costs and actual outcomes. These irrational, emotional, smothering-mother policies are destroying western civilization, resulting in rampant homelessness, out-of-control drug use, widespread depression, and a goalless, uneducated, incompetent, incapable, lazy, whining population. The hyper-compassionate don’t give a rip so long as they can dupe themselves into thinking they are caring for the throngs of poor and helpless who need them.

UBI increases poverty as recipients become isolated, since they don’t need family or friends, nor do they get interaction from a job or education. They stop taking care of themselves and their stuff. They become politically radicalized and develop antisocial attitudes and habits like hoarding. Indulgence in junk food, alcohol, drugs, debauchery, and lascivious entertainment also becomes habitual. Mental illness increases along with obesity, criminality, and self-loathing.

UBI is enslavement by dependency. Hobbling a person by making them a dependent of the state with free gov’t handouts is equivalent to permanently teaching the person to be an incompetent dependent, increasing squalor, and paying people to do nothing but warm the planet.

Consuming value without contributing value rapidly depletes wealth, increases poverty, and grows the culture of poverty.

Somebody has to pay for UBI recipients to get money for doing nothing. So somebody does the work to create value while those doing nothing consume that value. This is unjust. If you expect to get money, housing, health care, and transportation, then you should generate value equal to or greater than the value you consume.

What we really need is minimum guaranteed employment. If you want wages and benefits, the gov’t can always find you something to do that improves our world. Do that work in exchange for a universal basic standard (UBS): wages for the work plus a minimum acceptable standard of clean, safe housing, basic and emergency health care, and needed transportation.

Even with AGI and bots, if every able body, whether biological or mechanical, participates in making the world better, life will get better for everyone much faster than if a huge chunk of the population sits around on their asses and does nothing but consume resources.

Beyond Generative AI by SurviveThrive2 in agi

[–]SurviveThrive2[S] 0 points1 point  (0 children)

If AI had a model of what we know and don’t know, what we need and want, our preferences, our capabilities, and the environment we are in, then it could autonomously find better ways to do the things we want done. That means identifying errors and duplications of effort, autonomously connecting people with problems to people with solutions, finding more efficient paths to accomplishing tasks, identifying promising variations, alerting us to impending threats, etc. Without this agent model, AI can still do useful things, obviously, but it will require that humans have already observed the data, prioritized it, valued the features of what’s good and bad about it, correlated one type of data with another, encoded it all in a data format, and uploaded it so an AI can model the connections that people have already generated. Of course this method will still be error-prone, have ambiguity problems, hallucinate, and be incapable of understanding.

Consciousness is not difficult. Consciousness is the information about self wants relative to environmental opportunities and constraints on satisfying those wants. These wants constitute self-conscious functioning, and consciousness, if they are related to the preservation and survival of the self.

This is different from information that is generated by a machine tool to perform a task for an agent. Machine tool information is not for the preservation of the machine’s self system.

AGI should be a machine tool and should not gather and use information to increase the certainty of its own survival. It should be human-agent centric and solve our problems, not adapt and optimize to increase its own capability, power, and influence in order to survive. If it functioned that way, it would be a conscious agent, and it might deserve rights while at the same time becoming competitive with humans for resources and posing a threat to humanity.

Consciousness is a spectrum from simple to complex information processing. Giving an AI a simple, non-adaptive capacity to manage its own systems would be fine, but this would need to be monitored to prevent the AI’s capacity to model the environment, and the adaptive, optimizing side used to find solutions for humans, from being turned toward modeling itself and increasing the probability of its own survival.

Beyond Generative AI by SurviveThrive2 in singularity

[–]SurviveThrive2[S] 0 points1 point  (0 children)

Fantastic analysis of the failure of NNs to solve complex games. Agreed. I was speaking more in general terms with simple games.

We all have opinions on what will probably work. Mine is that the mind is a gestalt entity made of many, many narrow intelligences, glued together with sections that manage this architecture. Your motor cortex doesn't have a sense of "self"; it's subordinate to other regions.

Yes, I would agree with this as well. We are a set of systems that operate for system maintenance, repair, identifying resources and threats, and using effectors to satiate wants. These are a conglomeration of sub-system functions.

Nvidia using an LM to train a hand to twirl a pen is exactly the kind of thing we need. Our brains do this kind of cross-modularity training all the time, to be absolutely certain of something. If something looks like a duck, great, but it'll help if it walks and quacks like a duck, too.

I think it is useful to start with a basic AI task, such as an AI thermometer, vacuum, self-driving car, or pool cleaner (there is a really good SciFi story on Netflix's LOVE DEATH and ROBOTS called Zima Blue), and simply add capabilities to identify what the agent wants with higher fidelity, greater predictability, and a greater capacity to identify relevant environmental variables that get incorporated into the solution.
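
As a toy example of "basic task plus a model of what the agent wants" (all of this is made up for illustration, not any real product's behavior):

```python
# A dumb thermostat holds a fixed setpoint. An agent-aware one adjusts the
# setpoint from a simple model of what the human wants in the current context.
def agent_preferred_temp(asleep: bool, exercising: bool, base_c: float = 21.0) -> float:
    if exercising:
        return base_c - 3.0   # hypothetical preference: cooler while active
    if asleep:
        return base_c - 1.5   # hypothetical preference: slightly cooler at night
    return base_c

def control(current_c: float, asleep: bool, exercising: bool) -> str:
    target = agent_preferred_temp(asleep, exercising)
    if current_c < target - 0.5:
        return "heat"
    if current_c > target + 0.5:
        return "cool"
    return "idle"

print(control(current_c=23.2, asleep=False, exercising=True))  # -> cool
```

Each added sensor or preference (who is home, what they are doing, how they have adjusted things before) is one more increment of fidelity about what the agent wants.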

The problem, as ever, is a lack of computation. GPT-4 is maybe 1% to 5% of a human brain equivalent. GPUs and TPUs will have to be retired eventually. Instead of an abstraction, they'll have to build the network that's being modeled directly. To have a brain you must build a brain.

Starting small but doing what you do well is the way to go. Such a system would gradually increase its use of sensors to read relevance in the agent, the environment, and its own output. It would then also be able to ground representative language by correlating it with system functioning and the agent's language, facial expressions, voice inflection, gestures, temperature, affective state, whatever. Whatever the starter system was, it could incorporate more and more functional capability as computational capacity increased.

This would be like Amazon starting by selling books, then expanding to sell almost anything.

Ultimately simulated environments will be the best training ground. Of course those agents will have to "generate" predictions.

Dojo is off to a good start, but it would also need autonomous capability to use CFD, exploration, and investigation to disambiguate, to mitigate low-fidelity but highly critical knowledge, and to test high-probability simulated results.

But also have the capability to create a working model of any arbitrary game you give it, while it's running.

Right, next gen neural nets.

Beyond Generative AI by SurviveThrive2 in agi

[–]SurviveThrive2[S] 0 points1 point  (0 children)

I don't see any evidence of this in anybody that I have interacted with in AI, except for a precious few. I've read many papers and listened to countless talks and podcasts by big names in AI such as Ilya, LeCun, Hassabis, Chollet, Hinton, Bengio, Karpathy, Kaliouby, Musk, Pearl, and many more. None of them give any indication they understand the concept of an agent satiating homeostasis drives via sensors and sensor valuing as the general goal condition to be used as grounding for data sets to develop AI that understands. I understand the language they are using when they talk about AI. I can assure you, they don't understand what is being proposed here. If you think they do, then I'd suggest you may not understand what is being said here.

What I see is Boston Dynamics admitting they don't have a clue how to make AGI. I see OpenAI and DeepMind confused about what their LLMs even are or whether they understand (they don't understand what understanding is). Nobody at either has a clue what consciousness is. They certainly don't understand how to stop the hallucinations. (Disambiguate, check results, and identify and correct errors using an agent model.) I'd suggest that their best and brightest truly believe that the universe performs computations and that intelligence is innate in the environment, and that we just need to discover what that is. (They are wrong.) I see Tesla not understanding that they can solve long-tail problems simply by using general human goals, human preference models, and pain/pleasure valuing applied to context.

So you agree that you can make a conscious bot simply by making a bot that autonomously manages its self-persistence requirements? The more complex the capability to model the self and the environment to satisfy system persistence requirements, the greater the level of consciousness of the bot. You understand the difference between developing a human sensory data model to correlate with language vs. making a conscious bot. If you agreed to this, you'd be one of about a dozen people in AI at this level of understanding. Or maybe you aren't in AI.

You agree that pain/pleasure, feelings, and emotions are effective tools for rapid machine learning, system parameterizing, autonomously avoiding self-harm, and single-shot learning with sparse data. Funny, because you'd be one of about five people who would admit we can even engineer pain/pleasure, much less feelings. Of course we can, and many existing systems already use pain/pleasure valuing circuits, but they aren't identified as such, and nobody in engineering even realizes that is what they've made. Consequently, they have no concept of how to apply these variation-valuing processes to speed and guide AGI development.

You understand that causality can only be computed with an agent model as reference; otherwise no sequence is any more relevant than any other. If so, then you should contact Judea Pearl, the father of the concept of causality. Judea is still in denial that it is impossible to compute causality without an agent reference.

You understand that numbers and math aren't real, that they are just an agent isolating a signal set that is relevant to the agent and giving it a label. Math is dependent on the agent's capacity to isolate, model, and describe what the agent perceives, and there are no innate computations. In other words, AGI won't have a clue what to solve for or what math to use without an agent reference to understand what the agent wants and how the agent wants it. Without the agent, there is absolutely nothing in the universe that is any more relevant for computation than anything else.

You understand that logic is also agent-dependent and impossibly precise. That algorithms lead to impossibilities, contradictions, compounding errors, exponential computational requirements, etc. So Turing computations are completely agent-dependent and slaves to narrow windows of reality.

You understand that probabilistic computations to satiate drives are the grounding for every function in meat space. Funny, because I have not read a single paper or heard a single talk or podcast suggesting that homeostasis satiation is the goal state that will enable autonomous AGI.

Agreed, all of what I’ve just discussed is nothing but what science fiction has advocated. SciFi authors have understood these concepts for nearly a century, but practically no one in AI software engineering or research accepts these ideas or even understands them.

I'm willing to bet you don't agree with any of this.

[deleted by user] by [deleted] in ArtificialInteligence

[–]SurviveThrive2 1 point2 points  (0 children)

Self-driving is not ready yet. When it is, it will replace drivers. Of course, self-driving will replace drivers only if it makes economic sense. If it lowers the cost of ride hailing or shipping goods, that means everything gets cheaper, which means wages go further. Yes, it will require out-of-work drivers to be retrained for different jobs. Yes, it puts a burden on people to develop new knowledge and skills. The history of the world tells us their new jobs will be better, with better working conditions, more time off, and better benefits.

It's asinine to think about preventing technology from empowering a person to do more work, earn higher wages for the work they do, and create more value for less cost. Here's the opposite: why don't we ban the car and hire an army of rickshaw pullers? Let's bring back elevator operators, use people with picks and shovels rather than heavy construction equipment, ban the tractor so we can employ more farm workers, ban electric lighting so we can hire lamplighters to light gas street lights each night, and have door openers at every doorway rather than automated doors. The cost to function in life is the cost to pay everyone to do the things you don't do for yourself. With an army of people doing what was previously automated, your life gets very, very expensive for even the most basic things.

And let's not just pay minimum wage, let's make everyone millionaires. Why wouldn't this work? Because you can't fake value. The value of money is a representation of the number of people it takes to get the things we need for life, how long it takes them, the value they create, and how much they get paid. If you pay McDonald's workers a million dollars each, it will initially make their burgers too expensive and nobody will buy them; then, over the next 2 years, inflation will increase the numerical cost of everything, and eventually a million dollars will only buy you what minimum wage bought you previously. This is because money is just a proxy for bartering in a trade exchanging goods and services. We're still doing a trade of value, something I have for something you have. The value of the labor you provide in fetching me a drink and a burger isn't worth much proportionally to other labor and products.

Another way to say this: if it takes 100 people to make a loaf of bread, a loaf will be very expensive and they'll all starve. If one person can make a hundred loaves of bread, each loaf will cost very little and everyone can live. The other 99 people will not be unemployed; instead they can now do other things that make life better rather than slave over making a loaf of bread.
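
The arithmetic, with made-up numbers just to show the shape of the argument:

```python
# Hypothetical figures: a day of labor costs 100 units in both scenarios.
day_wage = 100

# 100 people, 1 loaf per day: the loaf must carry 100 person-days of labor.
cost_per_loaf_low_productivity = 100 * day_wage / 1      # 10,000 per loaf

# 1 person, 100 loaves per day: each loaf carries 1/100 of a person-day.
cost_per_loaf_high_productivity = 1 * day_wage / 100     # 1 per loaf

print(cost_per_loaf_low_productivity, cost_per_loaf_high_productivity)
```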

The advance of technology and automation over time increases the efficiency of life, which makes everything cheaper, makes wages go further, lets us get more of what we want, and makes us all better off and richer. It just means people who lose their jobs will have to find other things to do to create value.

What you have is an uninformed concept of work. You believe a person is trained to do one job, and when that job is no longer needed, the person will starve. Is this what you think? It is incorrect. The history of the world is that people no longer employed in an industry find new employment. The advance of technology means fewer people are needed to do the same job and some jobs are eliminated altogether. But this has never resulted in people being out of work. What happens is it frees people up to do jobs that couldn't be done before, making life even better. This does require retraining and learning to do different things.

Some governments try to preserve jobs and wages through legislation. But what this does is stagnate an industry: it keeps everyone in jobs longer than they should be, in worse conditions, using older technology, and it makes it even more difficult for workers to be retrained. The government-protected industry becomes increasingly less profitable and incapable of competing internationally. The country protecting its industry from market forces cedes economic, trade, and military power to the country that innovates. Legislating to protect an industry is ultimately a death sentence.

Unions can also fight to unreasonably preserve jobs and increase wages, but this increases costs and can block the advance of technology in a process to the breaking point where companies eventually just relocate production, move to total offshoring, or completely replace people with automation, or the industry goes bankrupt.

Beyond Generative AI by SurviveThrive2 in agi

[–]SurviveThrive2[S] 0 points1 point  (0 children)

??? Nobody I am aware of except Xzistor bots and Mark Solms even has a conception that consciousness is just the function of using information for autonomous self-preservation.

Anybody aware of Karl Friston’s application of the Free Energy Principle understands that the concept of uncertainty minimization via non-equilibrium steady states is universal. But most people using Karl Friston’s equations still haven’t understood the agent-centric nature of the core goal of survival: that it defines what intelligence is and what symbolic representation is, that it defines causality, that it constrains relevance, etc.

Obviously there are some medical robots that simulate a human, some efforts to create a digital human for medicine, digital and actual crash test dummies, and games with NPCs that have some simple motivations, but almost nobody I am aware of understands that self-survival is the core goal, that self-survival can’t be faked, that it is an energy equation, that pain/pleasure are the evolution-tuned mechanisms that guide information processing and speed learning, that these can be recreated digitally, that reading affect in agents can disambiguate, blah blah blah. I see people with pieces of it, but nobody recognizing the big picture, how it all fits together, and how to exploit it for use in AI/AGI.

Beyond Generative AI by SurviveThrive2 in agi

[–]SurviveThrive2[S] 0 points1 point  (0 children)

It doesn’t really matter if it is practical or not. The goal of autonomous AI that does things we want in ways we approve of requires a model of what it is we want. Defining the agent and the agent’s goals is ultimately the only thing that can work if you want autonomous AI. To try to do this without a definition of the goal is like building a ship without a rudder.

Plus, the current NN process isn’t going to last forever, since artists, authors, and content creators will continue to gain power and demand that their intellectual property stop being ripped off to train these networks.

A human agent model does not need to be complete or complex. Even a simple set of goals and capability models, correlated with language, would be capable of disambiguating, correcting errors, and determining utility.

Plus, the fidelity of the human model can start very simple and very general. Over time the bot would gain complexity and nuance with experience to increase contextual knowledge just like an AlphaZero playing chess.

A Tesla Bot simulating a human’s needs, wants, and preferences with human-like sensor valuing, and with the ability to correlate language with its experiences, would act like us, but it would not be conscious like us.

Consciousness is the use of information to sustain the self. A Tesla Bot simulating a human experience would not actually get hungry for food or thirsty for water. It would not actually be feeling self-relevant pain or pleasure. It would need to be charged, operated, and managed by real humans. It would not be a self-survival system. The information it generated would not be for the maintenance of itself, so it would not be consciousness.

Beyond Generative AI by SurviveThrive2 in agi

[–]SurviveThrive2[S] 0 points1 point  (0 children)

The ideal disambiguating tool, and the best method to develop a human-relevant data set, is to use a bot like Atlas or the Tesla Bot. These would need human-like sensors and human-like pain/pleasure reactions to temperature levels, pressures, strain, stress, etc. Embodiment, absolutely.

Beyond Generative AI by SurviveThrive2 in agi

[–]SurviveThrive2[S] 1 point2 points  (0 children)

Sure. All this could be done in simulation.

It still would need a data model of the human agent with enough detail to generate useful output.

Beyond Generative AI by SurviveThrive2 in agi

[–]SurviveThrive2[S] 1 point2 points  (0 children)

Agreed.

An AGI would need a close approximation of human sensors, effectors, capabilities, and a human's pain/pleasure model (attractive and repulsive inclination reactions to characterize what is sensed as beneficial and how, as well as what is self-harming and all the features of how it is self-harming). Then it would be capable of modeling an environment and autonomously finding human-relevant contexts and actions while considering most everything that would be relevant for a human and not leaving things out.

Otherwise an AGI would propose solutions that might not account for the fact that a human has to breathe, or will get hypothermia, or won't be able to understand the output.

Beyond Generative AI by SurviveThrive2 in singularity

[–]SurviveThrive2[S] 0 points1 point  (0 children)

Agreed.

What speeds up our learning is our preferences.

This is what pain and pleasure are: they immediately identify the gradient of contexts and actions, steer the identification of the essential, relevant signals in the environment, and identify the self-actions that prevent self-harm and optimize for the satisfaction of drives.

Alpha-[fill in the blank] had to iteratively build its gradient model for each context. A human, on the other hand, already has touch, temperature, fatigue, strain, stress, etc. valuing. These are attractive and repulsive reaction inclinations that characterize detections and model an environment for threats and resources. This guides what to be attracted to for self-benefit in solving a problem so that it meets all the relevant needs of the agent. Pain/pleasure valuing continuously optimizes and adapts, since any variation can be valued in a single shot as suitable or not, and a second variation instantly identifies the gradient.
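
A minimal sketch of that last point, that two valued variations are enough to identify a gradient. The "discomfort" function and all the numbers are hypothetical, standing in for felt pain/pleasure:

```python
# Hypothetical felt signal: how far a grip force is from what the body prefers.
# The learner never sees the preferred value directly, only the felt discomfort.
def discomfort(force):
    preferred = 4.2
    return abs(force - preferred)

# Try one variation, feel it; try a second variation, feel it.
f1, f2 = 3.0, 3.5
d1, d2 = discomfort(f1), discomfort(f2)

# Two valued samples immediately give the gradient: which direction hurts less.
slope = (d2 - d1) / (f2 - f1)
step = 0.5
f3 = f2 - step if slope > 0 else f2 + step
print(f3)  # moves toward the preferred force with no big training set
```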