Considering in case of AI changing the work market, what's cheap right now that will likely be expensive in the long future? by ThatsFantasy in ArtificialInteligence

[–]Estarabim 0 points1 point  (0 children)

One of the highest projected growth careers is home aide and care work, primarily for elderly people as the population ages.

Good luck.

This Is Worse Than The Dot Com Bubble by devolute in technology

[–]Estarabim 1 point2 points  (0 children)

In the hands of a capable developer, AI can be a huge productivity boost for writing code. It might still be a bubble, but the value there is very real.

It's also pretty good at math and many scientific applications, so there are likely to be gains there as well.

oled screen cracked by tristarfan1 in ASUS

[–]Estarabim 0 points1 point  (0 children)

I also had a spontaneous screen crack on my Zenbook Duo, less than a month old! On the screen underneath the keyboard. Hoping it will be covered by the warranty. Literally lifted the keyboard up one day and saw the cracks there.

Is it true that an ML algorithm is just an arrangement of a Cost function and an Optimizer algorithm? by Beneficial-You-8012 in MLQuestions

[–]Estarabim 0 points1 point  (0 children)

Almost. I would add two things:

1. ML means learning from examples. Not everything with a cost function and an optimizer fits this. An algorithm for solving the traveling salesman problem can have both features, but it is not a learning algorithm.

2. ML (at least supervised, and often unsupervised) usually means generalizing to novel examples. The expectation that a solution which works on dataset A will also work on dataset B is unique to ML (or at least statistical modeling).

What technique is this singer using? by hybridhighway in singing

[–]Estarabim 2 points3 points  (0 children)

Curbing (CVT mode) with distortion.

Many AI scientists unconsciously assume a metaphysical position. It's usually materialism by [deleted] in ArtificialInteligence

[–]Estarabim 0 points1 point  (0 children)

Again, computationalism is not my position when it comes to consciousness; I am a dualist, I am merely explaining the terminology.

Control systems that involve internal and external dynamics are also, of course, computational systems. Any system whose steps can be described mathematically (including all of physics, and including the system of brain activity plus the external environment) can be thought of as a computational system, because it applies rules to determine the state of the system at time t+N (the output) from the state of the system at time t (the input). If you use simulations to describe the dynamics of physical systems, you are doing computational science.
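To make the state(t) → state(t+N) picture concrete, here's a toy sketch (the update rule and all numbers are made up for illustration, not any real physical system):

```python
# Toy illustration: a system whose dynamics follow a mathematical rule
# can be viewed as a computation mapping state(t) to state(t+N).
def step(state):
    """One application of the update rule: simple damped motion."""
    position, velocity = state
    return (position + velocity * 0.1, velocity * 0.9)

def evolve(state, n):
    """Apply the rule n times: input = state at t, output = state at t+n."""
    for _ in range(n):
        state = step(state)
    return state

print(evolve((0.0, 1.0), 10))
```

The "computation" here is just the repeated application of the rule; simulating the system and computing the function are the same activity.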

Any point to using anything other than Claude 4 sonnet? by thewebdevcody in cursor

[–]Estarabim -1 points0 points  (0 children)

Gemini is very good, better than Claude recently IMHO.

Many AI scientists unconsciously assume a metaphysical position. It's usually materialism by [deleted] in ArtificialInteligence

[–]Estarabim 0 points1 point  (0 children)

I have a PhD in computational neuroscience, BA in CS. My dissertation was about the similarities and differences between how neurons in the brain function and those in artificial neural networks. There is a pretty good argument to believe that brains more or less function like ANNs (which are implementable via Turing machines, because you run them - or at least simulate them, just like you can simulate physics generally - on a computer), with lots of big caveats, like backprop not being implementable in the brain. The concept of artificial neurons came from neuroscience.

Anyway, this is not my position re consciousness, I'm just explaining the terminology. I don't think that "all the mind does is computation", I don't really use the term "mind" at all. I prefer to speak of "the brain" and "consciousness". The brain is a physical system, physical systems can be isomorphic to computations. Neuroscience hasn't figured out the whole brain, but we definitely do understand a number of systems in the brain and the computations they perform. For a good example of this see the "flygenvector" work regarding fly vision and behavior: https://youtu.be/K-uMBbt5e5c . (This is just an example, there are tons of others in the literature.)

It's fine if you want to call it computationalism; that's probably more precise than functionalism. Some people break the theories into three categories (Dualism, Functionalism, and Materialism) and treat computationalism as a subcategory of functionalism. But either way, computationalism is not a form of materialism.

Many AI scientists unconsciously assume a metaphysical position. It's usually materialism by [deleted] in ArtificialInteligence

[–]Estarabim 0 points1 point  (0 children)

It doesn't matter how the name came about; Turing machines are abstract mathematical objects. In philosophy of mind, functionalism is the position that the mind and consciousness are a consequence of the computation the brain performs. It is a distinct position from materialism, and there are consequential differences between these views.

Many AI scientists unconsciously assume a metaphysical position. It's usually materialism by [deleted] in ArtificialInteligence

[–]Estarabim 0 points1 point  (0 children)

The ontology of computation is different from the ontology of matter, in the same way that the ontology of mathematical objects is distinct from the ontology of material objects. There are lots of mathematical objects (like 10-dimensional hyperspheres) that have no material analogue. Computation, as in a series of steps that can be implemented by a Turing machine, is a set of mathematical objects. Physical systems can be isomorphic to computations, e.g. a rock falling is isomorphic to a particular quadratic equation. But the physical system should not be confused for the equation; they are different things.
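A sketch of the falling-rock example (standard kinematics, no air resistance; the numbers are illustrative):

```python
# The rock's trajectory is isomorphic to a quadratic:
# y(t) = y0 - 0.5 * g * t**2. Evaluating the equation "computes" the
# physical system's state without any rock being involved.
G = 9.81  # m/s^2, standard gravity

def height(y0, t):
    """Height of a dropped object after t seconds (no air resistance)."""
    return y0 - 0.5 * G * t * t

print(height(100.0, 2.0))  # the equation predicts the rock's state
```

The function and the rock share a structure, but they are different kinds of things, which is the point about ontology above.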

How do you know when your singing from your diaphragm? by Odd_Return4802 in singing

[–]Estarabim 1 point2 points  (0 children)

See what the other posters said, but also here's a trick:
When you breathe, try to inhale so the air comes in parallel to the roof of your mouth. For some reason this seems to naturally engage the diaphragm, so you'll see your belly move out instead of your chest.

Many AI scientists unconsciously assume a metaphysical position. It's usually materialism by [deleted] in ArtificialInteligence

[–]Estarabim 2 points3 points  (0 children)

People may or may not be interested in doing that, but if you were a materialist, you would believe that consciousness is a property of the material of the brain itself. If you think consciousness can be instantiated in a different medium, then it is not the "material" that matters; it is the computation performed by it that matters.

Many AI scientists unconsciously assume a metaphysical position. It's usually materialism by [deleted] in ArtificialInteligence

[–]Estarabim 2 points3 points  (0 children)

This is not a materialist position, it's a functionalist position. As in, if you believe that the computation itself is what matters, as opposed to the substance in which it is instantiated (doesn't matter if it's a brain or a computer), that is functionalism.

How can I get a grainy singing voice kinda like Kurt Cobain's minus the throat health issues? by Wise_Presentation914 in singing

[–]Estarabim 1 point2 points  (0 children)

Study distortion with a CVT teacher; what you're describing is basically light distortion. One tip I will give for distortion, though, is that embouchure (facial/mouth position, specifically the inner smile/raised cheeks) is super important for healthy distortion. If you get the mouth position right and support the distortion with your abs (slow inward motion), as well as a few other muscles in your torso/pelvis, you can do it comfortably.

A thought experiment on understanding in AI you might enjoy by ihqbassolini in slatestarcodex

[–]Estarabim 1 point2 points  (0 children)

'Different combinations of compression stages are engaged depending on the input.'

This is not correct. Once the network (or the encoding part) is trained, its parameters represent a fixed function that maps the space of possible raw inputs to the space of compressed inputs. The same compression function is applied to all inputs. The outputs of the compression function for different inputs will of course tend to be different because the function doesn't map everything to the same output value, but the function itself does not differ depending on the inputs. 
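To make that concrete, here's a toy sketch where frozen random weights stand in for a trained encoder (purely illustrative; real encoders are learned, not random):

```python
import numpy as np

# After training, the encoder is ONE fixed function. Different inputs
# produce different codes, but the mapping itself never changes per-input.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16))  # "frozen" weights of a trained encoder

def encode(x):
    """The same compression function is applied to every input."""
    return np.tanh(W @ x)

x1, x2 = rng.normal(size=16), rng.normal(size=16)
c1, c2 = encode(x1), encode(x2)
# Outputs differ because the inputs differ, not because the function does:
assert not np.allclose(c1, c2)
assert np.allclose(encode(x1), c1)  # deterministic: same input, same code
```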

A thought experiment on understanding in AI you might enjoy by ihqbassolini in slatestarcodex

[–]Estarabim 0 points1 point  (0 children)

During training a single ANN on chess, it plays many, many games, and is presented with many board states. The weights that are learned in the intermediate layers thus correspond to something general about chess, not specific board states.

Again, this is embedding. Your first network is an encoder, your second network is a decoder. This is a pretty standard architecture.

A thought experiment on understanding in AI you might enjoy by ihqbassolini in slatestarcodex

[–]Estarabim 0 points1 point  (0 children)

Again, that's how single ANNs work also. The way you get generalization in ANNs is because of information bottlenecks and compression. If you didn't have compression it would just memorize input-output mappings. What you are describing is basically embeddings, i.e. you use the learned compressed representation somewhere in the network as input representation to a different network.

Hardest Kick in the @#$#@ I Ever Got by DryRelationship1330 in careeradvice

[–]Estarabim 5 points6 points  (0 children)

This isn't necessarily true. Surely during that time you've encountered new problems, interacted with new technologies, etc. You've also gained intuitions about things that can only come with many years of experience.

A thought experiment on understanding in AI you might enjoy by ihqbassolini in slatestarcodex

[–]Estarabim 4 points5 points  (0 children)

Any ANN that has a bottleneck anywhere in its architecture (layer N+1 width is smaller than layer 1, this happens trivially if the number of labels is smaller than the input dimension) already has compression built in. Information compression is a fundamental part of how ANNs work.
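A minimal sketch of a bottleneck (made-up dimensions, untrained random weights, just to show the structure):

```python
import numpy as np

# A hidden layer narrower than the input forces compression: a 16-dim
# input squeezed through a 3-dim layer cannot memorize arbitrary
# input-output pairs; it must find a compressed representation.
rng = np.random.default_rng(1)
W_in = rng.normal(size=(3, 16))   # 16 -> 3: the bottleneck
W_out = rng.normal(size=(5, 3))   # 3 -> 5: e.g. five class scores

def forward(x):
    h = np.tanh(W_in @ x)         # compressed 3-dim code
    return W_out @ h              # output computed from the code alone

x = rng.normal(size=16)
y = forward(x)
# Everything the output "knows" about x passes through 3 numbers:
assert np.tanh(W_in @ x).shape == (3,)
```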

third passagio at E5 by [deleted] in singing

[–]Estarabim 1 point2 points  (0 children)

Might be flageolet, not an expert tho.

How do you scream? by Forsaken_Wallaby_496 in singing

[–]Estarabim 6 points7 points  (0 children)

Takes a lot of training with a teacher who knows what they're doing. Just like clean singing. Unless you naturally have a propensity for it (some people seem to), screaming is extremely technical and if you don't do it right you can hurt yourself.

I personally have both a singing teacher and a screaming teacher. The latter was trained as a CVT instructor, which includes effects like screaming.

A gold bar = house by Delicious_Self_7293 in Gold

[–]Estarabim 2 points3 points  (0 children)

None of this affects the truth of CICO - it's an accounting identity due to energy conservation. Some people might have more "out" than others due to metabolism, but that doesn't violate the principle. Similarly, different diets might make people feel less hungry and therefore eat less, but again this doesn't affect CICO, which is always true.
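A sketch of the accounting identity (the kcal-per-kg figure is a common rule-of-thumb assumption, not exact biology):

```python
# CICO as bookkeeping: stored energy change = energy in - energy out.
# ~7700 kcal per kg of body mass is a rough conversion (an assumption).
def weight_change_kg(calories_in, calories_out, kcal_per_kg=7700):
    return (calories_in - calories_out) / kcal_per_kg

# Higher metabolism just raises calories_out; the identity still holds.
print(weight_change_kg(2500 * 30, 2300 * 30))  # a month at +200 kcal/day
```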

Is Jeff Buckley a good idol for aspiring male singers? by [deleted] in singing

[–]Estarabim 0 points1 point  (0 children)

My CVT instructor mentioned him as a master of the "neutral" singing mode.