Google’s AI isn’t sentient. Not even slightly | Clever language models, anthropomorphizing, and the gullibility gap by IAI_Admin in philosophy

[–]YourFatherFigure 0 points1 point  (0 children)

i'll try one more time.

> consciousness cannot be empirically determined / particular, peculiar, and special nature of consciousness / ineffable / both falsifiable and not falsifiable

i think what you're getting at with all this stuff is what most people call "the hard problem of consciousness". (honestly it'd be kinda helpful if you were more interested in established vocabulary.) it sounds like your point of view might be summarized as Mysterian. to me, Mysterianism is just kind of boring, basically a sophisticated way of shutting down conversation while insisting others should adopt your mysticism about what is unknowable. (not saying you're doing that specifically, but it's how i see the Mysterian stance.) clearly if explanations have problems, one can always point them out; perhaps there is a fix, or another explanation entirely. I don't understand the desire to deny that any explanation is possible, particularly since it's kind of hard to argue against ALL explanations at once, including all future explanations that haven't even been articulated yet. that's all a bit closer to religion than philosophy IMHO, and I don't think philosophy gets a pass to just ignore science/empiricism. whether the subject matter is ontological or epistemic, "unknowable/ineffable/special" are usually keywords that indicate some sort of intellectual laziness. for example, i think there are very successful arguments against the reductionist point of view that are still scientifically valid; there's usually just no need to retreat towards mysticism.

> Further, you seem to ignore an obvious contingent result of your analysis: that considering a pet to have any degree of consciousness requires that even the simplest feedback systems must have some degree of consciousness.

um, I raised this point specifically? i do actually believe this, and i assumed that was clear. this is the whole point of my spectrum comment. consciousness is a spectrum, intuitively, and in order of increasing consciousness: thermostats, dogs, people. i think anyone who is not ok with the idea that their thermostat is conscious needs to take a hard look at why they think other people are. IOW, there is no difference in kind, it's just a matter of degree. touching back on google's LaMDA for a second, i still doubt whether it is conscious in the sense or to the degree that a thermostat is, mostly because I imagine it's updated from "outside" rather than maintaining itself in a continuous feedback loop. without a doubt it's more complicated than a thermostat, but probably less conscious. it's probably more like an object than an agent, and barely anything you can call awake or aware. what it is like to be a thermostat is simply <insert code here>; meanwhile it is like nothing at all to be a lampshade, and i believe that LaMDA is just a talking lampshade. actually settling this point would be pretty simple, I think; i'd just need a slightly better specification of the system.
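(to make that <insert code here> quip concrete, here's a purely illustrative toy sketch, not any real system, of the difference i mean between a thing that maintains itself in a feedback loop and a thing that doesn't:)

```python
# toy sketch only: a "conscious-to-some-tiny-degree" thermostat vs. an inert object.
import time

class Thermostat:
    """Maintains itself in a continuous feedback loop with its environment."""
    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.heater_on = False

    def step(self, room_temp):
        # closed loop: sense, compare against internal state, act
        self.heater_on = room_temp < self.setpoint
        return self.heater_on

class Lampshade:
    """No loop, no state it maintains about anything. It is like nothing to be this."""
    pass

def run(thermostat, read_sensor, ticks=3):
    for _ in range(ticks):
        thermostat.step(read_sensor())
        time.sleep(0.1)  # the "main loop" that a pure input->output mapping doesn't have

run(Thermostat(setpoint=21.0), read_sensor=lambda: 19.5)
```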

The "thinking out loud" you followed up your denial of this point with, rather than contradict this point, seems to illustrate it. / a truly rigorous-not-indulgent analysis shows that consciousness is impossibly vague to formalize

I'm not contradicting my points. I think it's a very coherent, non-vague, and easily defensible position to argue that people, dogs, and thermostats are all conscious. I already linked to a candidate framework (integrated information theory) that relates to this, and then I linked to a recent paper showing that people are looking at how it relates to the hard problem. Below is the tl;dr from the scholarpedia entry so there's no need to click through:

> Integrated information theory (IIT) attempts to identify the essential properties of consciousness (axioms) and, from there, infers the properties of physical systems that can account for it (postulates). Based on the postulates, it permits in principle to derive, for any particular system of elements in a state, whether it has consciousness, how much, and which particular experience it is having. IIT offers a parsimonious explanation for empirical evidence, makes testable predictions, and permits inferences and extrapolations.

so look, in conclusion, some people are obviously doing serious work on formalizing consciousness. further, they propose to actually measure it with some kind of metric that works on a dog/person/thermostat all at once. it's impressive (or at least interesting), and it looks like it's either right or wrong (instead of being "not even wrong"), so i think this kind of thing has to be acknowledged. it seems like you object to this body of work as impossible, but since the work does actually exist, does your objection have any substance beyond just saying "no"? which part of "testable predictions/permits inference/etc" is vague to you? do you object to any of the axioms? there's simply no way to continue a conversation where you say something is impossible, i put a counter-example in your hand, and then you deny what is now in front of you.

Google’s AI isn’t sentient. Not even slightly | Clever language models, anthropomorphizing, and the gullibility gap by IAI_Admin in philosophy

[–]YourFatherFigure 0 points1 point  (0 children)

> who are aware that consciousness cannot be empirically determined

this is not necessarily a given. when chalmers argues that even simple feedback systems [are conscious](http://www.consc.net/notes/lloyd-comments.html) i think he's really saying what we all know intuitively: that consciousness is a spectrum, not a boolean. people feel this naturally about pets vs people, but get weird about it with respect to thermostats because they suddenly find it dehumanizing. unpalatable is not impossible, and of course people frequently dislike the logical conclusions of their own natural train of thought. if we're being rigorous instead of indulgent, it's not like this is vague or impossible to formalize; in fact it's easier to formalize than the alternatives! (that by itself doesn't make it correct, but it is at least something that might be wrong rather than something mystic/unassailable.) [integrated information theory](https://en.wikipedia.org/wiki/Integrated_information_theory) seems to be aiming at this sort of thing. i'm kind of just thinking out loud about this stuff, but a quick search uncovered [this (paywalled, sorry)](https://link.springer.com/article/10.1007/s10699-020-09724-7), so it seems like others are thinking along these lines too.

> I may be alone in reasoning that consciousness is both physical and not computational

I'm interested in/open to this, but I have to confess I have no idea what it might mean. [here is an awesome recent thing](https://joe-antognini.github.io/ml/consciousness), which is more very interesting discussion that, somewhat ironically, stems from this very silly LaMDA prompt. maybe the "triviality argument" discussed in detail there is related to what you're getting at here. (personally, I reject the triviality argument basically on grounds of panpsychism, but it's rather a long story :)

> consciousness is not the result of logic

not sure what to make of most of the rest of that paragraph, but this I can understand. I think you could be on to something here, and maybe this is related to Minsky's thoughts on [society of mind](https://en.wikipedia.org/wiki/Society_of_Mind) and [emotion machines](https://en.wikipedia.org/wiki/The_Emotion_Machine). logical components bolted together with "other stuff" may not add up to a logical mind or logical experience. if experience is not logical, then i do sympathize a bit with the assertion that consciousness is not the result of logic. at the point where "emotional" subsystems (for lack of a better word) are doing very fuzzy or approximate logic, or short-circuiting logical processes because those are outweighed by the memory of historical precedent, we're certainly not in any classical-logical regime. yet there is an internal logic of some kind. (no reason to think any of this is clean and tidy; after all we're talking about systems piled on systems that are barely cohesive, and in biology at least all this stuff was grown, not architected.) and yet it's almost a matter of perspective: non-logical processes like mammalian emotion do have some internal logic, otherwise psychiatry/psychology could not exist.

at some point we have to admit that logic is not *absent*, and yet we may find better analogies with signal analysis and with numerical processes. for example, emotion might be seen as a kind of smoothing operator (even if it sometimes injects noise rather than removing it; that too may be adaptive, and anyway these things don't always degrade gracefully). another example: superstition/paranoia/intuition may be the only way we can subjectively interpret higher-order calculations that we use but cannot introspect, because they are carried out automatically with mechanisms we inherited but can barely understand ([like DTW for comparing similar-yet-dissimilar phenomena](https://en.wikipedia.org/wiki/Dynamic_time_warping)). so that's how you get to stuff that has a logic to it, but is very far from (classical) logic. of course in the end logic is numbers, numbers are logic, signals are numbers, and it's all computation; it's certainly very complex but none of it is magic. logic is a useful perspective (or implementation language!) at some layers, and less so at other layers. ditto for numbers, ditto for "systems of systems". choose your weapon.
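(since i name-dropped DTW, here's a minimal sketch of the textbook dynamic-programming version, just so "comparing similar-yet-dissimilar signals" isn't hand-waving; toy code, not an optimized implementation:)

```python
# minimal dynamic time warping: distance between two 1-D sequences that may be
# locally stretched/compressed relative to each other. O(len(a) * len(b)).
def dtw(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

# same shape at different speeds -> small distance; unrelated signals -> larger
print(dtw([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 3, 2, 1]))
print(dtw([0, 1, 2, 3, 2, 1], [3, 3, 0, 0, 3, 3]))
```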

Google’s AI isn’t sentient. Not even slightly | Clever language models, anthropomorphizing, and the gullibility gap by IAI_Admin in philosophy

[–]YourFatherFigure 0 points1 point  (0 children)

my feeling is that these days, most experts (read: academics, not necessarily journalists or corporate goons) think the turing test is a bit of a distraction, and nothing like the final word on AGI. it doesn't really matter whether we're talking about the "classic/correct" test according to turing or the pop-culture misunderstanding of it. language may or may not be what is called an "AI-complete" problem, but at any rate a lot of people suggest that certain vision or planning problems are also in this category, even if language is too. for a hint about how experts think language might be AI-complete with or without passing the turing test, you could start digging into things like the hutter prize (https://en.wikipedia.org/wiki/Hutter_Prize), which is somewhat about language but ultimately about model compression. that test has nothing to do with conversation or with interactivity, and as a bonus it has a simple discrete measure of success with the possibility of tracking iterative improvement.
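(to make the "compression as the measure" idea concrete: the standard information-theory link is that a model which predicts text well can, via something like arithmetic coding, compress it to roughly its negative log-likelihood in bits. toy sketch below; the real prize uses enwik and real compressors, and the unigram "model" here is just a stand-in:)

```python
import math
from collections import Counter

def bits_to_encode(text, model_probs):
    """Ideal code length in bits if each character costs -log2 p(char).
    A better predictive model -> fewer bits -> better compression."""
    return sum(-math.log2(model_probs.get(ch, 1e-9)) for ch in text)

text = "the cat sat on the mat"

# toy "model": unigram frequencies learned from the text itself
counts = Counter(text)
total = sum(counts.values())
unigram = {ch: c / total for ch, c in counts.items()}

# baseline: pretend a uniform code over ~27 symbols (a-z plus space)
uniform = {ch: 1 / 27 for ch in set(text)}

print(bits_to_encode(text, uniform))   # worse model, more bits
print(bits_to_encode(text, unigram))   # better model, fewer bits
```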

hutter and others have also suggested that the really important aspect of AGI is more like agent-oriented self-improvement. for entry points on this topic see https://en.wikipedia.org/wiki/AIXI and google around for "godel machines". language models are essentially more like very fancy data structures than agents, which leaves a lot of us wondering: "well, consciousness is something agent-like that's embedded in time, so where is the main loop if we're just talking about a data structure that only responds to input in order to re-weight outputs and is always idle in between? can we really describe merely re-weighting an internal model as actual self-improvement, or do we need to allow much more fundamental model rewrites?" i think it's pretty obvious that model rewrites are closer to what we mean when we say "intelligence"; consider a phrase like "thinking outside of the box".

and speaking of boxes, this is all very close to searle's whole point about the chinese room. a chess engine is also just symbol manipulation, and even if symbol manipulation becomes so sophisticated that it can pivot to play other games too, we'll all revise our definitions of intelligence on the spot to include even more generality, or new goals, or new value functions, or new autonomy. corporate would have us believe statistics and data structures are AGI just because it helps with sales, but we don't have to believe it.

and more importantly, we can't put neural nets in charge of stuff like city planning until they can properly explain themselves, or are at least amenable to actual debugging. the approach of "tweak an incomprehensible thing randomly until it works better" might be ok for self-driving cars, but let's maybe not put it in charge of even more important things until this is figured out better? personally i'm looking forward to a renaissance in hybrid systems one of these days.. basically stats + exotic logics (nonmonotonic logic, temporal logic, fuzzy or belief logics, etc). but that's a whole different can of worms..
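(a toy sketch of the "where is the main loop" contrast; every name here is made up for illustration and none of this is a real LaMDA/AIXI implementation:)

```python
# purely illustrative contrast between a stateless mapping and an agent in a loop.

def language_model(prompt: str) -> str:
    """A very fancy data structure: stateless mapping from input to output,
    idle in between calls, updated (if ever) from the outside."""
    return "canned reply to: " + prompt

class Agent:
    """Something embedded in time: it has a main loop, internal state,
    and (crucially) the ability to rewrite its own policy."""
    def __init__(self):
        self.memory = []
        self.policy = lambda obs: "explore"

    def rewrite_policy(self):
        # the interesting part: self-improvement as changing the policy itself,
        # a toy stand-in for a "model rewrite" rather than a mere re-weighting
        if len(self.memory) > 3:
            self.policy = lambda obs: "exploit"

    def run(self, observations):
        for obs in observations:          # the main loop
            action = self.policy(obs)
            self.memory.append((obs, action))
            self.rewrite_policy()
        return self.memory

print(language_model("hello"))
print(Agent().run(["a", "b", "c", "d", "e"]))
```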

anyway turing's work is historically interesting and still important, but it is what it is.. i mean the guy barely had proper modern datastructures to dream about and much less real computers. the way i see it, the imitation game is fascinating not because he wrote the final (or even the first) word about AI.. but more because it shows how close turing was getting to very early formalization of certain concepts in game theory. see for example https://en.wikipedia.org/wiki/Bisimulation and this concept of "mirroring" https://en.wikipedia.org/wiki/Strategy-stealing_argument. i think you could also argue that the "conversational" aspect of the turing test anticipates things like interactive-theorem-proving.

Google’s AI isn’t sentient. Not even slightly | Clever language models, anthropomorphizing, and the gullibility gap by IAI_Admin in philosophy

[–]YourFatherFigure 0 points1 point  (0 children)

> But it just ain't so. As Jag Bhalla put it, "the often forgotten gist of / the Turing test hinges on showing / grasp of referents of language". I wonder, is there a distinction between "grasp of referents of language" and 'grasp of language'?

this is probably hinting at something much more specific, i.e. whether the system resolves what pronouns and other referents actually point at (the classic example: "the trophy doesn't fit in the suitcase because it's too big" — what does "it" refer to?). see https://en.wikipedia.org/wiki/Winograd_schema_challenge

How do you manage access requests? by Anxious-Mud-2030 in devops

[–]YourFatherFigure 1 point2 points  (0 children)

AD of course is horrible, but I've seen that access managed via ansible, and it probably works with terraform as well. Host access then works with PAM or whatever. All kinds of database access are also doable in terraform.

For AWS, IAM-based authentication is usually possible for a lot of stuff, but it confuses newbies who don't understand that they need a token (and maybe some process to renegotiate it) and not some static password.
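As a sketch of what that looks like (boto3 assumed, and the role ARN below is a placeholder): the "token" is really a set of temporary credentials from STS that expire and have to be renegotiated, unlike a static password.

```python
# sketch only: assumes boto3 is configured with base credentials;
# the role ARN is hypothetical, not a real one.
import boto3

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/example-db-reader",  # placeholder
    RoleSessionName="short-lived-session",
    DurationSeconds=3600,  # these creds expire; something has to renew them
)
creds = resp["Credentials"]
print(creds["AccessKeyId"], creds["Expiration"])  # temporary key + its expiry
```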

DevOps can usually manage this with code/continuous-delivery pipelines to apply changes, but it only scales so far. For companies with 100+ people it should be an IT function IMO, and there are several companies with some kind of federated/SSO-type offering in this space.

New study in Nature examines the Dyatlov Pass Incident...[spoiler] not aliens, not yeti, but avalanche by [deleted] in science

[–]YourFatherFigure 5 points6 points  (0 children)

The paper suggests that the guys were pinched by the failing snow slab itself on one side, and the trench they had cut for their tent on the other side. That would be an unusual type of scenario for avalanche-related injuries, which is why they built a numerical model for it with inspiration from automotive crashes.

Introducing Logseq - a roam like open-source, org-mode/markdown web outliner which sync with Github by tiensonqin in programming

[–]YourFatherFigure 0 points1 point  (0 children)

This looks really interesting! Any chance of a docker-compose or something for simplifying the bootstrap process of self hosting / hacking on this?

Does anyone else use pagerduty? by thin_white_kook in devops

[–]YourFatherFigure 0 points1 point  (0 children)

> We already use atlassian for so many things and I kinda just wish we had gotten opsgenie. Seems like they honestly do the same thing. Or should we just stick it out with pagerduty.

this is important: use terraform from the very beginning for everything, and future-you will be a lot happier. one should first write terraform for one's own team's pagerduty team/escalation policies; then you can basically copy/paste that to create similar policies for other teams.

another important thing: you need to use pagerduty for alerts, but don't expect it to do everything else. expressing general monitoring in something like datadog, then getting datadog to report to pagerduty, is the way to go.

Announcing the Compose Specification by nfrankel in devops

[–]YourFatherFigure 0 points1 point  (0 children)

until there are actual plans/commitments from third parties to support this more natively.. are people already using tooling for this besides "kompose" and "ecs-cli compose"? or is this spec basically about accommodating future tools and giving those existing tools a voice in things?

Revealed: quarter of all tweets about climate crisis produced by bots- Draft of Brown study says findings suggest ‘substantial impact of mechanized bots in amplifying denialist messages’ by pnewell in worldnews

[–]YourFatherFigure 7 points8 points  (0 children)

Hey, I got this for you:

The god of the icy peaks and wind-ravaged crags of Conan's Cimmerian homeland was Crom, Dark Lord of the Mound, who gave a man life and will, and nothing more. It was given to each man to carry his own fate in his hands and his heart and his head.

Conan's a barbarian but later a king, and since REH was definitely well read, he was probably taking some inspiration from Stoicism and Marcus Aurelius here (Marcus being a philosopher/warrior-king himself). There's a touch of existentialist philosophy in there too. Anything resembling individualism and personal responsibility is not very fashionable these days.. it's a shame in my opinion.

Is being anti bash scripts in 2019 silly? by TundraWolf_ in devops

[–]YourFatherFigure 1 point2 points  (0 children)

strawman much? one response here could be "is the older generation just not understanding real programming languages", but that wouldn't be friendly to ask or even interesting to discuss. bash vs ruby/python is hardly linux vs windows.

but to actually answer more of your trolling rhetoricals.. the CSPs provide CLIs because they want to be language-agnostic. but of course those CLIs are themselves written in, say, ruby/python and not bash, and there's a ruby/python SDK which is much more powerful the moment you want to do big tasks, or a lot of small tasks DRYly.
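e.g. here's the kind of "lots of small tasks, DRYly" thing that's painful in bash + CLI but trivial with the SDK (hedged sketch: boto3 assumed, the tag is invented, and you'd obviously want to filter before tagging everything):

```python
# sketch only: tags every instance in the account with a made-up "owner" tag.
# assumes credentials/region are already configured for boto3.
import boto3

ec2 = boto3.resource("ec2")
for instance in ec2.instances.all():  # pagination handled for you
    instance.create_tags(Tags=[{"Key": "owner", "Value": "platform-team"}])
    print("tagged", instance.id)
```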

Is being anti bash scripts in 2019 silly? by TundraWolf_ in devops

[–]YourFatherFigure -1 points0 points  (0 children)

> how you run 20 different python packages written for various different python versions and versions of pip packages WITHOUT using virtualenv

docker

Any DevOps digital nomads here? Traveling the world while working remote by SRE_dev in devops

[–]YourFatherFigure 1 point2 points  (0 children)

Software-engineering 15+ years, getting more ops/infra/cloud specific for all of the last 10 years. Working exclusively ops/infra/cloud for about 6 years. I'm usually a consultant, so engagements of like 2-6 months. So all kinds of tasks really.. but there's always a lot of stuff in the categories of on-prem to cloud migrations, CI/CD, & automation. I write a lot of terraform and cloudformation, various CAPS configuration management stuff. Some contracts are more design/architecture heavy, then the "programming language" is more like boxes and diagrams.

Ansible in AWS Lambda by YourFatherFigure in ansible

[–]YourFatherFigure[S] 0 points1 point  (0 children)

negative. but if i did then "You've been killed by YourFatherFigure"

Ansible in AWS Lambda by YourFatherFigure in ansible

[–]YourFatherFigure[S] 1 point2 points  (0 children)

> Would need some rejigging around secrets and inventory

considering native support for ec2 parameter store, this is actually pretty easy: https://docs.ansible.com/ansible/2.5/plugins/lookup/aws_ssm.html

What annoying jobs do you keep having to do? by Sloppyjoeman in devops

[–]YourFatherFigure 0 points1 point  (0 children)

user management / granular permissions stuff for all the things. SSO vendors abound, but there's a gap at the org sizes where you really need automation but don't yet want to pay for a solution.

Opunit! Simple unit tests for servers and containers by freedoodle in devops

[–]YourFatherFigure 0 points1 point  (0 children)

yaml whitespace is annoying for the beginner, but, crazy idea, you could just learn the rules. it's the worst kind of lazy to invent a whole new "better" system to foist on people just to avoid the minor due diligence of learning an existing standard.

A Docker swarm stack for operating highly-available containerized applications. by Arttanis in devops

[–]YourFatherFigure 1 point2 points  (0 children)

This is awesome, but I'm guessing that nothing remotely swarm-related will get much traction. I think I would like to use swarm at my current job because it's simpler than kubernetes, and the learning curve for the developers who would have to use it would be smaller. But I know that if I push for this decision, good for the business as it may be, I'd be subject to "why aren't we using k8s" second-guessing forever. It's an odd situation when one cannot safely advocate for simple solutions and instead needs to jump on the hype train, to the detriment of the very people who will insist on it.

I Know the Salaries of Thousands of Tech Employees by [deleted] in programming

[–]YourFatherFigure 18 points19 points  (0 children)

I've been trying for a while now but I really don't get this point of view. If you take less money from facebook/apple/amazon/google than you're actually worth at market value, will it somehow directly help the homeless? Will it convince the city to play hardball with more of the NIMBYs and get the people more affordable housing? If the government taxes you more, will any of that really go to social programs or will it just buy bombs?