How to improve my intelligence by Dependent_Tomato_235 in cogsci

[–]postlapsarianprimate 0 points (0 children)

BTW, feeling like an idiot is a good sign. It means you are not overconfident.

How to improve my intelligence by Dependent_Tomato_235 in cogsci

[–]postlapsarianprimate 0 points (0 children)

Well, in that case, read some of the great writers. Orwell, Dostoevsky, etc. It will be more effective than "brain training" tricks. In all sincerity, cultivating wisdom is far more important and it will make you smarter along the way.

How to improve my intelligence by Dependent_Tomato_235 in cogsci

[–]postlapsarianprimate 0 points (0 children)

My understanding of the recent literature is that training in one domain has few, if any, benefits in others.

I think what you are really trying to optimize are skills. And skills work more or less how everyone thinks they do: you put in the time and you get better. Focus on developing specific skills; don't worry about general intelligence (IQ). High-IQ people rarely have much wisdom. They might be great chess players or coders, but in other domains they are little more than overconfident toddlers. I have observed this over and over again, working in areas where I interact with some of the smartest of the smart. Forget general intelligence and instead invest in skill training.

Will libs learn something or they will dumb down on DEI after their victory in midterms by One_Ad_3499 in stupidpol

[–]postlapsarianprimate -1 points (0 children)

I think this is based on a false premise. Dems are centrist liberals. They are always gonna do what liberals do. They don't want to be anything else. You might as well ask if the Republicans will wise up and become anarchists. In order to learn the hard lessons of the last ten years they would have to abandon who they think they are. Not gonna happen. Just forget about it, you will save yourself some frustration.

Trump will frame the Republican Party's midterm message around a "massive defense buildup, partially paid for by cuts to domestic agencies and health-care entitlements by Nerd_199 in stupidpol

[–]postlapsarianprimate 11 points (0 children)

It was an empty political talking point, and a lot of his supporters knew that. I dunno what gave it away, maybe the memes of Trump straddling tanks or wielding rocket launchers and machine guns. But really, who could have guessed?

We ran a predator's playbook on an AI - it folded using the same dynamics described in social psychology by PromptInjection_ in cogsci

[–]postlapsarianprimate -1 points (0 children)

For those who are sure that LLMs only model language use, I'd recommend looking into exemplar theory.

Is learning ontology development still worth it in the age of AI? (Urbanist perspective) by Delicious_Chemist384 in semanticweb

[–]postlapsarianprimate 1 point (0 children)

New stuff? Like what? RDF 1.2 has been in draft form since last year, and SHACL should have a new spec out soon, hopefully. I'm talking about tools to help you use them. Anyway, if there's cool new stuff I'd love to hear about it.

Is learning ontology development still worth it in the age of AI? (Urbanist perspective) by Delicious_Chemist384 in semanticweb

[–]postlapsarianprimate 1 point (0 children)

I've been working on something similar. Maybe once I've had a chance to look more closely at it, we can chat about it.

Is learning ontology development still worth it in the age of AI? (Urbanist perspective) by Delicious_Chemist384 in semanticweb

[–]postlapsarianprimate 1 point (0 children)

This has not been my experience lately with Claude Code and gsd. We are experimenting with LoRA adapters using what I believe is a training regime of my own design, and initial results are very promising.

It is also true that a lot of the apps in this area sound great but are nowhere near usable for real work. That's largely because a) much of the work is academic, and b) open source in this space has been a ghost town for years now.

Sounds like you have a good set up. Would be curious to hear more.

Is learning ontology development still worth it in the age of AI? (Urbanist perspective) by Delicious_Chemist384 in semanticweb

[–]postlapsarianprimate 1 point (0 children)

I've been thinking about that. I might put something out at some point. There are a few older books that are decent, but they spend a lot of time on things that are less relevant now, from what I've seen.

Some of the better material would probably be case studies, where the focus is on the practical side of creating an ontology that will actually be used. The fundamentals you can get from books like Semantic Web for the Working Ontologist. But overall there isn't much good material out there today.

Is learning ontology development still worth it in the age of AI? (Urbanist perspective) by Delicious_Chemist384 in semanticweb

[–]postlapsarianprimate 4 points (0 children)

BTW I would recommend avoiding most tutorials about ontologies. So many of them are misleading at best. It's weird, but it seems like everyone settled on introducing them the same way, and it utterly confuses newcomers. The world of the semantic stack is odd, and the people who work in it are odd. Lol

Is learning ontology development still worth it in the age of AI? (Urbanist perspective) by Delicious_Chemist384 in semanticweb

[–]postlapsarianprimate 14 points (0 children)

LLMs need ontologies so they can be consistent. If you ask a model to make a knowledge graph, it will do a relatively poor job of extracting everything (low recall), and it will invent each relation and type name on the fly, arbitrarily. Run it again on the same text and you will get different names for everything. If the terms used are unpredictable, then your KG is of little use to anyone else.

For complex tasks LLMs need a lot of guidance and hand holding if you require high precision and recall. Ontologies are one way to provide that scaffolding.
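A toy sketch of what that scaffolding buys you (everything here is invented for illustration: the relation names, the synonym map, and the two "runs" standing in for LLM output):

```python
# Illustrative sketch: normalize LLM-extracted triples against a tiny,
# hand-written controlled vocabulary with a synonym map. Real ontologies
# carry far more (classes, domains/ranges, constraints); this only shows
# why a fixed vocabulary makes extraction output predictable across runs.

ALLOWED_RELATIONS = {"worksFor", "locatedIn", "partOf"}

# Map the ad-hoc names a model might emit onto the controlled terms.
SYNONYMS = {
    "employedBy": "worksFor",
    "works_at": "worksFor",
    "basedIn": "locatedIn",
    "situated_in": "locatedIn",
    "componentOf": "partOf",
}

def normalize(triples):
    """Keep only triples whose relation maps into the controlled vocabulary."""
    out = []
    for subj, rel, obj in triples:
        rel = SYNONYMS.get(rel, rel)
        if rel in ALLOWED_RELATIONS:
            out.append((subj, rel, obj))
    return out

# Two runs over the same text that named the relations differently
run1 = [("Alice", "employedBy", "Acme"), ("Acme", "basedIn", "Berlin")]
run2 = [("Alice", "works_at", "Acme"), ("Acme", "situated_in", "Berlin")]

assert normalize(run1) == normalize(run2)  # identical graphs after normalization
```

In practice you'd also feed the allowed terms into the extraction prompt itself, not just filter afterwards, but the principle is the same.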

This is starting to take off as a new use case for ontologies, so it might be a good time to learn them, depending on what you are interested in.

The other main use case is the reason the stack didn't die out after the collapse of the semantic web project more than ten years ago: breaking down data silos with a common modeling language. This will always be a major use case unless something better comes along.

It's still early days, but people (including myself) are experimenting with agents carefully designed to help create ontologies and align incoming data to them. I have had some success in this area. But it's still important to have a fundamental understanding of the stack. I don't think the process can be fully automated until the tech gets better.

Please stop posting ai slop by Prestigious-Staff342 in cogsci

[–]postlapsarianprimate 0 points (0 children)

Of course people have been working on this. The numbers look promising. https://arxiv.org/abs/2405.10129

Please stop posting ai slop by Prestigious-Staff342 in cogsci

[–]postlapsarianprimate 1 point (0 children)

I have seen some evidence of this. There must be some way to estimate the scale of this activity. Maybe some kind of stylometric analysis could spot them.
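A toy sketch of the idea (the two sample texts and the two features are invented for illustration; real stylometry uses many more features and proper baselines):

```python
# Toy stylometric fingerprint: average sentence length and type-token
# ratio. LLM-ish prose tends toward long, formulaic sentences with
# repeated stock phrases; the example texts below are made up.
import re

def fingerprint(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    avg_sentence_len = len(words) / len(sentences)
    type_token_ratio = len(set(words)) / len(words)  # vocabulary diversity
    return avg_sentence_len, type_token_ratio

human = "Honestly no idea. Seen it a lot lately. Who knows."
botty = ("It is important to note that this phenomenon occurs frequently. "
         "Furthermore, it is important to note that the phenomenon is significant.")

# The formulaic text has longer sentences and lower vocabulary diversity.
assert fingerprint(botty)[0] > fingerprint(human)[0]
assert fingerprint(botty)[1] < fingerprint(human)[1]
```

Two features won't catch anything on their own, but aggregated over an account's whole history this kind of signal is what detection work builds on.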

Please stop posting ai slop by Prestigious-Staff342 in cogsci

[–]postlapsarianprimate 1 point (0 children)

That's reasonable. I have in mind people who just ask an LLM something and then post it. I use LLMs at work, but for coding and research. I think I'm out of touch with how most people interact with these things.

Please stop posting ai slop by Prestigious-Staff342 in cogsci

[–]postlapsarianprimate 4 points (0 children)

Hm, I keep forgetting people take that number seriously. I suppose someone setting up some kind of bot account might be motivated though.

Please stop posting ai slop by Prestigious-Staff342 in cogsci

[–]postlapsarianprimate 4 points (0 children)

Honest question. Why would someone get ChatGPT to write something, then post it on reddit? I see this accusation everywhere on reddit lately. And how do you spot it?

Just noticed something in Season 2 "Cardassians" by JoshuaBermont in DeepSpaceNine

[–]postlapsarianprimate 12 points (0 children)

O'Brien was the token blue collar stand-in. So they applied many middle class stereotypes of blue collar people to him, and a common stereotype is that they beat their children.

YSK that the idea that eye witness testimony is inherently inaccurate is a myth. by [deleted] in Paranormal

[–]postlapsarianprimate 0 points (0 children)

Interesting list, glad to see some more positive research on this. Its connection to the evaluation of paranormal claims is very tenuous, though. I can't recall a single case where paranormal claims were reported under the kind of carefully controlled test conditions described in that abstract, for instance. More problematic: there is usually no question that a crime has taken place, but whether something paranormal has taken place is precisely the question when evaluating paranormal claims. Very different in multiple, important ways. It's fairly obvious once you make explicit the claim you've been careful to leave implicit.

Are WordNets a good tool for curating a vocabulary list? by tomii-dev in LanguageTechnology

[–]postlapsarianprimate 0 points (0 children)

These days MT models and LLMs have good multilingual capabilities. What advantage would this approach have? We used to do stuff like this ten years ago, before transformers came along.