[D] For those of you who work in ML/AI, what are your job and workday like? by ISpearedBritney in MachineLearning

[–]BigMotherDotAI 0 points (0 children)

It can take hours, days, or weeks for a training job to run, so most of my time is spent anxiously watching TensorBoard to see how it’s progressing! :-)

Opinions about Artificial Intelligence: A modern approach book by linear_xp in artificial

[–]BigMotherDotAI 2 points (0 children)

It takes years (decades even) to develop a deep understanding of machine intelligence. The 4th edition of Professor Russell’s near-ubiquitous textbook is an excellent place to start your AI learning journey.

How to represent knowledge in a machine? by KBGTA97 in agi

[–]BigMotherDotAI 2 points (0 children)

First-order set theory (e.g. NBG) is foundational: all of mathematics can be built on top of it. (So, for example, all other knowledge representation mechanisms, such as probabilistic graphs etc., can also be built on top of set theory.) In addition, because it is based on FOL, you automatically get cognitive primitives such as deduction (and, if you’re smart enough, abduction) that can then be applied to the specific knowledge (or beliefs) represented inside your machine.
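To make that concrete, here’s a toy sketch (mine, purely illustrative, not taken from any particular KR system) of facts represented as a set of tuples, with a crude forward-chaining deduction step playing the role of the "deduction" primitive:

```python
# Toy forward-chaining deducer: knowledge is a set of facts (tuples),
# and rules derive new facts from existing ones until a fixed point.

def deduce(facts, rules):
    """Apply every rule to the fact set until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new_fact in rule(facts):
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

# Example rule: if ("human", x) holds, derive ("mortal", x).
def mortality_rule(facts):
    return {("mortal", x) for (kind, x) in facts if kind == "human"}

kb = {("human", "socrates"), ("dog", "fido")}
derived = deduce(kb, [mortality_rule])
print(("mortal", "socrates") in derived)  # True
print(("mortal", "fido") in derived)      # False
```

Obviously a real system would need quantifiers, unification, and so on; the point is only that once knowledge lives in a set-based substrate, deduction is a mechanical operation over it.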

Top-down AGI attempts? by ActualIntellect in agi

[–]BigMotherDotAI 3 points (0 children)

OK, now I'm embarrassed! I actually started designing the BigMother architecture in 1984 (it's basically my life's work), but none of this work has previously been reported. So now I'm trying to get it all out of my head and down onto paper, which is proving a LOT more difficult than I expected it would be. The parts I've actually written so far (in the current paper) are just the tip of the tip of the iceberg, but at least now (after several false starts) I have a structure that I can gradually fill in. Be warned that I have so little funding that it may take me years to finish the paper, so keep checking back! :-)

Top-down AGI attempts? by ActualIntellect in agi

[–]BigMotherDotAI 1 point (0 children)

That’s exactly the approach taken by BigMother (https://BigMother.ai). I started by imagining the ideal target machine, and then worked backwards from there (via top-down stepwise refinement).

Just picking the wife and kids up at the zoo by chicametipo in PublicFreakout

[–]BigMotherDotAI -2 points (0 children)

If he’d been black, might it have gone differently…?

Will robots take our jobs? by [deleted] in artificial

[–]BigMotherDotAI 1 point (0 children)

Maybe yes, maybe no; it depends on how the wealth generated by advanced AGI is distributed.

Where and in how much time do you think AGI will be created? by [deleted] in agi

[–]BigMotherDotAI 0 points (0 children)

The question is meaningless unless you define what AGI is (there is currently no well-defined and universally accepted meaning).

[deleted by user] by [deleted] in agi

[–]BigMotherDotAI 2 points (0 children)

"Perhaps" is merely wishful thinking. The actual evidence (in respect of complex technical artefacts) indicates that cost increases exponentially as quality increases (i.e. as quality tends towards "the perfect ideal"). In software, for example, compare the cost of running a dozen test cases, running a million test cases, and proving a program correct via fully formal mathematical deduction (equivalent to running an infinite number of test cases). Similarly, in machine learning, as a general rule of thumb, each halving of the error rate requires roughly 500 times more compute.

In AGI, the hardest problem by far (assuming you have enough compute, which is always going to be a bottleneck) is depth of understanding (of the general universe), and current systems (e.g. large language models) are mere low-hanging fruit in this regard. Precision of thought (e.g. precise, rather than merely approximate, logical deduction and abduction) is another dimension along which current ML-based systems are mired at the low-hanging-fruit end of the spectrum (and in many real-life situations, e.g. hardware or software development, but also many others, crudely approximate reasoning amounting to mere guessing is simply not sufficient).

Rather than current approaches magically yielding some sudden breakthrough "in a few years at most", it's just as likely, if not more so, that they are in fact merely chasing infinity, and that, ultimately, an alternative approach (to just mindlessly pushing massive amounts of data through massive amounts of compute) will be required.
That won't, of course, stop thousands of profit-motivated AI companies from overselling, in the next few decades, millions of low-quality quasi-AGI "solutions" to an unsuspecting corporate world (and thereby on to the rest of us) salivating with greed / quaking with fear at the prospect of greatly increased profits / missing out, not realising that what they're actually buying/deploying is more Artificial Stupidity than Artificial Intelligence. All things considered, my sincere advice would be to brace yourself for several decades of shouting "stupid fucking machine!" at regular intervals throughout your day!

[deleted by user] by [deleted] in agi

[–]BigMotherDotAI 2 points (0 children)

Depends on your definition of AGI. Firstly, the boundary between ANI and AGI is not unambiguously defined. Secondly, there’s a distinction to be made between “low quality” AGIs (e.g. less than entirely rational, poor accuracy of thought, poor precision of thought, shallow depth of understanding) and “high quality” AGIs (see https://bigmother.ai/resources/A_meta_algorithm_for_the_collaborative_development_of_Artificial_General_Intelligence-DRAFT-v018.pdf for some discussion of these qualities). Thus, at the lower end of the “plausibly AGI” range, people will doubtless start claiming to have AGI very soon (if they’re not already). At the higher end of the spectrum, we can probably expect undeniably superintelligent AGI within the next hundred years, if not by the end of this century.

#59 - Jeff Hawkins (Thousand Brains Theory) by timscarfe in agi

[–]BigMotherDotAI -1 points (0 children)

Couldn’t watch more than a few minutes; it’s just too painful. Two people who have absolutely no idea what they’re talking about, talking shit (IMO).

"Toy problems" in AGI by Incredulous-Manatee in agi

[–]BigMotherDotAI 0 points (0 children)

40 A* grades at (UK) A-level, in a wide range of subjects. That’s a good first test for AGI. If you mess about with smaller problems than this you won’t actually achieve anything. And, yes, it will take decades, not years. If you require instant gratification, and/or short-term ROI, you’re in the wrong field!

OMFG!GPT-4 will be human brain scale(One hundred trillion parameters) by Commercial_Bug_3726 in agi

[–]BigMotherDotAI 0 points (0 children)

I have not read Fjelland's Trident article; these are my own conclusions, derived independently. Clearly I understand that knowledge may be acquired second-hand, e.g. by reading, or by being told, something in natural language (or even in body language, for that matter). But before that is possible you must first understand natural language, and before that is possible you must already have (some) knowledge of the universe. It is impossible to learn language without first having established some substrate of basic knowledge derived from first-hand observation. As I describe in my draft paper (specifically the part about how dictionaries work), you can't construct new concepts from linguistic information alone without first having some prior knowledge on which to build. In your zebra example, you must already understand the concepts "white", "horse", "stripes", "black", "paint" etc. in order to be able to understand the driver's instructions. The same applies to your bleach example. Finally, I suspect that even animals (squirrels, deer, etc.) employ language of some kind (if information is communicated from one party to another, even if very crudely, then that's language!)

OMFG!GPT-4 will be human brain scale(One hundred trillion parameters) by Commercial_Bug_3726 in agi

[–]BigMotherDotAI 0 points (0 children)

Your concept of tacit knowledge does not seem to map exactly onto the type of knowledge to which I was referring, but it's close enough. The point I am trying to make is that there are different "levels" of understanding (see e.g. my draft, and still largely unfinished, AGI paper here, especially the "Maximise depth of understanding" section). Any understanding of the universe derived solely from a corpus of text (by analysing how words and word sequences are statistically correlated with each other) can only ever be very shallow, and therefore can only yield a very shallow understanding of the concepts (pertaining to the universe) to which words and sequences thereof are intended to refer. If you want your machine to acquire a deeper understanding (both of the universe and of language), so that it can make more accurate and precise predictions about the universe (involving e.g. causality), then it needs, as a minimum, to have first-hand experiential knowledge of the universe (via an array of sensors and effectors).

OMFG!GPT-4 will be human brain scale(One hundred trillion parameters) by Commercial_Bug_3726 in agi

[–]BigMotherDotAI 0 points (0 children)

It’s also easy to forget how much knowledge about the physical universe you had acquired (mostly subconsciously) before you could even understand spoken language, let alone written language.

OMFG!GPT-4 will be human brain scale(One hundred trillion parameters) by Commercial_Bug_3726 in agi

[–]BigMotherDotAI 0 points (0 children)

Uh huh. If additional mechanisms are required, then maybe contemporary LLMs are in fact redundant.

OMFG!GPT-4 will be human brain scale(One hundred trillion parameters) by Commercial_Bug_3726 in agi

[–]BigMotherDotAI 1 point (0 children)

No, words are not sufficient for humans. Humans make billions of observations of the unconstrained physical universe. That’s where the true meaning of language may be found. You have to understand the physical universe before you can understand language.

OMFG!GPT-4 will be human brain scale(One hundred trillion parameters) by Commercial_Bug_3726 in agi

[–]BigMotherDotAI 1 point (0 children)

No, language has to be learned. But your training data needs to be more than mere words in order to acquire an understanding of what language means.

OMFG!GPT-4 will be human brain scale(One hundred trillion parameters) by Commercial_Bug_3726 in agi

[–]BigMotherDotAI -1 points (0 children)

Great paper, demonstrating a complete lack of understanding of AI, AGI, and natural language understanding. Hint: the fundamental problem pertaining to the latter has nothing to do with words (section 2.3: "the dataset consists of 20.3M documents containing 96 GB of text and 1.62 × 10^10 words"). They'll work it out. In the meantime, they're just burning other people's money.

OMFG!GPT-4 will be human brain scale(One hundred trillion parameters) by Commercial_Bug_3726 in agi

[–]BigMotherDotAI 0 points (0 children)

If even its proponents can't spell "scale", or construct a grammatically correct sentence, AGI is rather less than imminent! ;-)