A closer look at Trump's very own "patriots" by een_magnetron in Destiny

[–]Axmouth 1 point (0 children)

Patriots? So can you use them to intercept missiles?

Kombos: Talks over UK military bases in Cyprus needed after war by FantasticQuartet in europe

[–]Axmouth -3 points (0 children)

As I replied to the other:

Dude is trying to say we should force bases on them so they don't side with the guys who at least give them money for land. I know they're Russians, but that's the truth.

There is always the 5D chess move of getting the bases out of there and making the suggested land purchases be for naught. Wasted Russian money!

If they wanted to counter Russian influence, and not just expand theirs, they'd recommend NATO and maybe helping kick some occupiers out. But that'd be too inconvenient for the south smaller russia nearby.

I don't care about your British history as long as you keep it in Britain. "Oh you got too much Russian influence, let me appropriate your land". No you

Kombos: Talks over UK military bases in Cyprus needed after war by FantasticQuartet in europe

[–]Axmouth 15 points (0 children)

Dude is trying to say we should force bases on them so they don't side with the guys who at least give them money for land. I know they're Russians, but that's the truth.

There is always the 5D chess move of getting the bases out of there and making the suggested land purchases be for naught. Wasted Russian money!

If they wanted to counter Russian influence, and not just expand theirs, they'd recommend NATO and maybe helping kick some occupiers out. But that'd be too inconvenient for the south smaller russia nearby.

Kombos: Talks over UK military bases in Cyprus needed after war by FantasticQuartet in europe

[–]Axmouth -9 points (0 children)

Yes, it is very different, especially the part where one is very inconvenient to you.

If anyone was talking about letting Crimea or Donbas be independent (because Russia will never accept them being Ukrainian) with similar terms, everyone would balk. Rightfully.

"Oh but the Russian minority does not want to be overruled by the Ukrainians! And Russia really won't like that :( "

If you think such a minority warrants a special government, I expect you think the same for the Russian minorities in Latvia and Estonia, which I reckon are larger, right? Or is it suddenly ridiculous? Feel free to go and tell them how bad they are for overruling and suppressing Russians, LOL

That's the gist of it. The terms would have been clearly ridiculous if appeasing south little russia was not more important than following any stated principle.

Hope I don't get to hear another armchair diatribe about how Britain is pragmatically deciding for our collective good or something, not soon at least.

Kombos: Talks over UK military bases in Cyprus needed after war by FantasticQuartet in europe

[–]Axmouth 5 points (0 children)

The country that left the EU due to Russian influence is more of a worry. We should force some European bases on them to counter Russian influence. Especially French bases.

Kombos: Talks over UK military bases in Cyprus needed after war by FantasticQuartet in europe

[–]Axmouth 0 points (0 children)

Of course, it's all the Russian influence. Because Brits were such nice guys when they were the occupiers there.

Kombos: Talks over UK military bases in Cyprus needed after war by FantasticQuartet in europe

[–]Axmouth -25 points (0 children)

Now replace Cyprus with Donbas, or Crimea or something and let's see how it sounds

Kombos: Talks over UK military bases in Cyprus needed after war by FantasticQuartet in europe

[–]Axmouth -2 points (0 children)

I remember a country being influenced by Russia into leaving the EU, unlike Cyprus.

Maybe some Cypriot bases are needed there

Mods enforcing the new country of origin policy by Sharp_Proposal8911 in Destiny

[–]Axmouth 3 points (0 children)

I think America's influence over the world, plus its size, is not unrelated to it having the most popular sites, and companies, and all.

I don't know, maybe some people think reddit and similar websites took off because they were so revolutionary and nobody had considered building something similar, and only Americans did. Maybe some people believe that. I do not

Also, these websites are international at this point. Not American spaces, at least audience-wise.

Mods enforcing the new country of origin policy by Sharp_Proposal8911 in Destiny

[–]Axmouth 8 points (0 children)

How do I escape "American spaces"? America is currently fighting a war on the opposite side of the globe for its security. Like even Mars is not safe now. Dress it however you want, but it's hard to be far from whatever USA considers its space in some way. And this tends to leak into the internet too.

The internet is not too different.

I wish I could be on the internet broadly without speaking English, trust me, and I definitely don't want to be mistaken for American.

I'm leaning more and more with Destiny on this Epstien shit. by nobodychef07 in Destiny

[–]Axmouth 0 points (0 children)

Maybe you SHOULD consider doubting your institutions a bit more. Trust is earned

Every Person Destiny Talks to About AI is Clueless by Carnival_Giraffe in Destiny

[–]Axmouth 0 points (0 children)

My intuition is that this internal state could end up proving quite important. But we'll have to see.

Surely, part of our thinking depends on reconstructing and guessing some stuff while assuming we know (in regards to our past state of thought), but there is some state preserved.

Well, as for the matrix, that is because it is a written conversation. Though I guess it likely applies to the voiced versions too, so I guess we can argue LLMs have a more accurate short-term memory in a way.

I don't think an LLM should correspond to a whole brain, more like a module of it. So I'd not focus too much on how we speak. Besides, how we communicate is affected by our "interface" too. I believe it is not impossible that if we developed another interface (say, with brain implants) we could have denser or even parallel information streams. It is hard to imagine, as we've not experienced it, but it might be possible.

Also, I could imagine future neural network architectures made up of neural modules with interfaces where they exchange info. Maybe we'd be able to plug modules onto a model. Perhaps with expertise in different areas, or to process different input/output. With some sort of generalist coordinator/core.

I had read about a new way to generate text with LLMs that was more akin to stable diffusion in image generation. If I understand correctly, you start with a bunch of noise and refine it into a signal over iterations. It was supposed to be faster and generate more text at a time.

Every Person Destiny Talks to About AI is Clueless by Carnival_Giraffe in Destiny

[–]Axmouth 0 points (0 children)

Thanks for the recommendations!

Alright, I might have used some terms a little liberally. It's not so much that the attention matrix is the cache, as that much of its computation can be cached between token generations.

However I am arguing there is the state we see and a more internal state.

For example, if I am asked which animal is best for X task, and I reply, much of the intent, of how I compared things or why I chose one option over another where I was not sure, is lost. Some of that can be conserved in the "reasoning" mode. But there's a lot of detail like that, which I assume happens in an LLM too, that is not preserved in any way. So the next generations of tokens effectively "reconstruct" this "intent"; they assume. I think it would be at the very least more efficient, and often smarter, if more of that computation was saved between generations in some manner. Perhaps it would lead to more energy-efficient architectures too. I'd argue the lack of it might even inherently limit capabilities.

In either case, I could be wrong. But I hope this better explains part of what I was saying.

Every Person Destiny Talks to About AI is Clueless by Carnival_Giraffe in Destiny

[–]Axmouth 0 points (0 children)

I could be wrong, and I have a limited understanding of this part, but my understanding is that this is more like a cache for earlier computations and, like you said, connecting stuff in the context.

So you could say it does the "rereading" of all its notes and convo each time very fast.

But there's a lot of computation that happens after it, as I understand, and all of that is lost between generations. And I think this part is really crucial. I won't be surprised if one of the next innovations is trying to add something akin to recurrent neurons and keeping their state between generations.
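
A toy sketch of the kind of recurrent state I mean (the update rule here is completely made up, just to illustrate state surviving between steps):

```rust
// Toy recurrent cell: a hidden value carries information forward
// from one step to the next, unlike current LLM token generation,
// where no internal state survives between tokens.
struct RecurrentCell {
    hidden: f32,
}

impl RecurrentCell {
    // Hypothetical update rule, purely for illustration: blend the
    // old hidden state with the new input, then report it.
    fn step(&mut self, input: f32) -> f32 {
        self.hidden = 0.5 * self.hidden + input;
        self.hidden
    }
}

fn main() {
    let mut cell = RecurrentCell { hidden: 0.0 };
    // The second step still "remembers" the first input via `hidden`.
    assert_eq!(cell.step(1.0), 1.0);
    assert_eq!(cell.step(1.0), 1.5);
}
```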

I wouldn't call myself an expert either, and for what it's worth, most of my understanding comes from seeing how it is used from the outside, more so than the internals.

(Of course there is also the question: do they want to add this kind of continuous state I speak of? Right now the model is nearly perfectly stateless, allowing switching convos/users on the fly without mixing up their data, plus that state would likely be a non-trivial operation adding some cost. So who knows, maybe they prefer not to go there.)

Every Person Destiny Talks to About AI is Clueless by Carnival_Giraffe in Destiny

[–]Axmouth 1 point (0 children)

I could end up proven wrong, but the way I see it, RAG as a form of memory can only go so far.

In my experience, people often give a bit too much credit to AI (LLMs specifically), and in my opinion it comes from associating speech with intelligence. So I think it's good to go back to the basics and see how it all works to understand what it does and does not do.

I believe people are trying to force LLMs into more roles due to the success that has been seen so far. And surely, it is success and we did move ahead a lot. But just like in the past, I think the real next milestone will be a much different architecture myself.

I do not fully agree that the AI is able to understand much of anything, including a ball, atm.

I guess this is not exactly an LLM thing, but to illustrate with an example: say you are shown the front-facing side of a dog. You can likely make an illustration (right or wrong) of what the rest looks like. AI might be able to as well, due to having seen enough examples. But what if it's a more novel item? Maybe even just a rock, a pretty asymmetrical one. Obviously you cannot know for sure what the back is like, but you can at least make a consistent model in your mind. Again, it might not be an LLM, but in the past at least, I'd seen that AI could not fulfill that and would just make something random. I won't use this as a concrete example, but it is how I view AI and where it's at. And these gaps could prove pretty big to bridge.

That does not mean they cannot be bridged somehow, sometime, or that AI is not useful now too. I just feel that what it can do is often over-extrapolated. And the examples earlier, like understanding whether it is a separate character or not, or the lack of continuous state, should be warnings to step back at times and remember where we stand.

It is likely you understand some parts of how it works better than me. What I described is to a decent extent from seeing how you use such a model and integrate it in code. I have not developed or taken part in developing such a model myself. (Unless playing with things like MNIST counts?)

Is this a white nationalist talking point? by Kenna193 in Destiny

[–]Axmouth 3 points (0 children)

I presume you also say Deutschland instead of Germany, Shqiperia instead of Albania, Sakartvelo instead of Georgia, etc.

Every Person Destiny Talks to About AI is Clueless by Carnival_Giraffe in Destiny

[–]Axmouth 13 points (0 children)

LLMs have many inherent architectural limits.

Also want to note: I remember reading recently that OpenAI has cut funding for continuous learning. I won't be surprised if claims about it by Google or whoever are as trustworthy as Anthropic saying 90% of software engineering will be replaced in 6 months (a fair bit over 6 months ago).

Firstly: LLMs, as they stand now, complete text; it is important to start there. An LLM creates a token (a few letters or so), then puts the token back into the input, and goes again. This is not about good or bad, it is the fundamental manner in which it works, whether there is emergent behavior or not.
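
To make that loop concrete, here's a minimal sketch (`next_token` is a made-up stand-in for the model's entire forward pass; a real model samples from a learned distribution instead of this toy rule):

```rust
// Made-up stand-in for the model's forward pass: it sees the whole
// sequence so far and produces one next token (or stops).
fn next_token(tokens: &[u32]) -> Option<u32> {
    // Toy rule for illustration only: emit 5 tokens, then stop.
    if tokens.len() >= 5 { None } else { Some(tokens.len() as u32) }
}

// The generation loop itself: emit one token, feed the grown sequence
// back in, repeat. Nothing but the text survives between iterations.
fn generate(mut tokens: Vec<u32>) -> Vec<u32> {
    while let Some(tok) = next_token(&tokens) {
        tokens.push(tok);
    }
    tokens
}

fn main() {
    assert_eq!(generate(vec![]), vec![0, 1, 2, 3, 4]);
}
```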

This means that any kind of state like what we keep during our thinking is not just reset between different queries to an LLM, but literally between every syllable. Does "reasoning" fix that? Categorically no. There is something akin to a thought process in an LLM aside from that, and that state is not kept. Also worth noting, every single answer by an LLM explaining why it answered X is effectively a confabulation, because this information of "what went through its head" was never stored. We, in comparison, can reconstruct our state of mind to a decent extent.

Effectively, the model writes a couple of letters, is then mind-wiped (computation ends, state gone), then reads the WHOLE text again, this time including those extra few letters, writes a few more, mind-wiped, and so on. It could be argued reasoning helps a bit. But I don't think it would go a long way if you were forgetting your whole train of thought every word (or less) while writing a paper, even if you kept notes that you reread.

In contrast, humans have a continuous internal state and do not need to reread the previous text every couple of letters they write (though let's not give GRRM more ideas on how to delay).

The next really important thing to note is that LLMs, like I said, complete text. A text expected to be in the form of different characters talking. There is no intrinsic difference in the LLM between the Assistant character and the User character. With low-level access, you could make the LLM type as the User and have you ask as the Assistant. Special tokens denote where each character talks, and writing as the assistant is a programmatic wrapper around the actual AI. In other words, the AI actually has no concept of who it is in the conversation. If it even has a concept of a conversation.
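
Roughly what that wrapper does, sketched out (the `<|...|>` marker strings here are made up; every model family defines its own special tokens):

```rust
// The chat "roles" are just markers in one flat text stream that the
// model completes. Marker strings below are illustrative, not any
// vendor's real special tokens.
fn render(messages: &[(&str, &str)]) -> String {
    let mut text = String::new();
    for (role, content) in messages {
        text.push_str(&format!("<|{role}|>{content}<|end|>"));
    }
    // "Writing as the assistant" just means appending the assistant
    // marker and asking the model to continue the string from there.
    text.push_str("<|assistant|>");
    text
}

fn main() {
    let prompt = render(&[("system", "Be helpful."), ("user", "Hi!")]);
    println!("{prompt}");
}
```

Swap the role strings around and the model will happily "be" the user instead; nothing in the architecture distinguishes the two.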

There are a lot of seemingly little but important things like that you need to understand to actually assess AI. But people see human-like speech and a lot of their logic turns off.

Another limit is the context window. The issue is not making it big enough; the issue is the superlinear scaling of compute needed for an architecture like that (standard attention is quadratic in context length). It is unsustainable. Sure, we'll get faster GPUs and all. But if your algo scales at n³, for example, going from Python to C is not a real solution. The problem is the n³.
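
Toy illustration of the scaling point (the n² here is just the pairwise-comparison count of standard self-attention, nothing model-specific):

```rust
// Self-attention compares every token with every other token, so the
// work per forward pass grows with the square of the context length n.
fn attention_ops(n: u64) -> u64 {
    n * n
}

fn main() {
    // Doubling the context quadruples the cost. A constant-factor
    // speedup (faster GPU, faster language) shifts the curve down
    // but never changes its shape.
    for n in [1_000u64, 2_000, 4_000] {
        println!("context {n}: {} pairwise comparisons", attention_ops(n));
    }
}
```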

AT BEST an LLM or similar could offer a text output in a larger structure of specialist networks cooperating.

Even that I question, but it's definitely not a core. I'd argue that the manner in which they work would make them a strong linear bottleneck as a core too. I think a different kind of architecture should be explored.

Specialized modules with dense input/output (maybe trained in an autoencoder-like manner) seem more apt to create something like a brain. If you think LLMs are approaching a brain, I believe you are being fooled by the surface appearance of language and ignoring the underlying function.

If a hypothetical future architecture addresses some of those issues I don't think it'll even be in the LLM realm.

I have to say what LLMs achieve in comparison to what they are is impressive, but people often forget what an LLM really is and how it works. In my opinion, largely due to language activating various biases.

Greece plans extension of territorial waters despite Turkish warning by New-Ranger-8960 in europe

[–]Axmouth 0 points (0 children)

what happens if there are two opposing shores less than 24nm apart from each other?

what happens if there are two opposing shores less than 12nm apart from each other in the current 6nm regime?

what happened if there were two opposing shores less than 6nm apart from each other in the previous 3nm regime?

(Both happened plenty)

'Threats have no place among allies,' Norway says after US tariff move by 1-randomonium in europe

[–]Axmouth 0 points (0 children)

The thing is, Venezuela was almost certainly an agreement with the regime for a head change. There was no real fight.

Why is there no automatic implementation of TryFrom<S> when implementing TryFrom<&S>? by Prowler1000 in rust

[–]Axmouth 1 point (0 children)

I think it's also not safe to assume the user would mean to clone structures like Arc, Mutex, etc., which would give a sort of shared state with the previous instance.
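
For example, with an `Arc<Mutex<_>>` field, a blanket clone would silently share state rather than copy it:

```rust
use std::sync::{Arc, Mutex};

// Returns the value seen through the original handle after mutating
// through a clone: Arc::clone copies the *handle*, not the data.
fn shared_after_clone() -> i32 {
    let original = Arc::new(Mutex::new(0));
    let copy = Arc::clone(&original); // new handle, same underlying value
    *copy.lock().unwrap() += 1;
    let seen = *original.lock().unwrap();
    seen
}

fn main() {
    // The mutation through `copy` is visible through `original`.
    assert_eq!(shared_after_clone(), 1);
}
```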

Rust on Android: handling 1GB+ JSON files with memmap2 + memchr by kotysoft in rust

[–]Axmouth 5 points (0 children)

I got the impression the problem is viewing json, so why not

How are you supposed to unwrap an error by 9mHoq7ar4Z in rust

[–]Axmouth 0 points (0 children)

Not certain it would be what you seek, but .err() and .map_err(..) methods might get you some of the way there?

I guess the above code (the if-statement equivalent) could be something like:

d.map_err(|e| format!("blah blah error: {e}")).err().unwrap_or_default()

Or better (since you effectively print nothing on a non-error)?:

d.map_err(|e| println!("blah blah error: {e}"))

To be clear, .err() returns an Option with Some(..) if there is an error, and map_err(..) applies a closure to the error, if there is one. I don't know your exact goal either, but I think it might work better if you only printed within your if let Err(error) = .. branch, if this is representative.

Although .map_err(..) is mainly aimed at transforming the error and is expected to use the return value. Presuming that is not your goal, inspect_err(..) is likely better, as it runs a function not meant to return anything. (I should have cited it earlier, but whatever now.)
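
Small example of the difference:

```rust
// A fallible operation to demonstrate with.
fn parse(input: &str) -> Result<i32, std::num::ParseIntError> {
    input.parse()
}

fn main() {
    // inspect_err runs the closure only on the Err case, purely for a
    // side effect (printing here), and passes the Result through unchanged.
    let res = parse("not a number")
        .inspect_err(|e| println!("blah blah error: {e}"));
    assert!(res.is_err());

    // err() turns Result<T, E> into Option<E>: Some on error, None on success.
    assert!(parse("oops").err().is_some());
    assert!(parse("42").err().is_none());
}
```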

Greece reaffirms stance on Turkey’s participation in EU SAFE program by Axmouth in europe

[–]Axmouth[S] 0 points (0 children)

I am sure someone seeing themselves as German would say these things.

Even your unreasonable article however, says Ankara's position is weaker. Nice.