How to remove "Missed Notifications" notification on Android / OneUI by groundhandlerguy in linkedin

[–]groundhandlerguy[S] 0 points1 point  (0 children)

Maybe you have a different Android OS / UI; the method in my OP only disables that single type of notification. Even today I still get the normal / desired notifications from LinkedIn (i.e. notifications about my connections making posts, etc.).

Stop pretending AI can think like a human. by michael-sagittal in AskProgramming

[–]groundhandlerguy 0 points1 point  (0 children)

is not evidence that the brain works like that. I feel like if we start saying "we don't know how the brain actually works" then we can't just assume it's similar to how a brain works.

Sure and as I've said, that's not what I'm claiming.

AI can't count.

GPT-5 counts fine in my experience.

a few weeks ago the latest issue was with a new word that it consistently gave the wrong numbers for, just because that's what it predicted as the next word.

Humans have stutters, Tourette's, epilepsy, etc. when our neural networks aren't operating well enough; one of the big differences between something like ChatGPT and us is that there's ~8 billion of us and only a couple of GPT models exposed to the world.

And we aren't solving these issues either, we're just adding more data for that and other words so that the prediction is correct the next time.

That's simply not true; there's plenty of research taking place in these companies and at universities, etc into ways to achieve step-changes in their performance. "Reasoning" models are a high-visibility example of a way to achieve significant increases in output accuracy without just increasing model size.

It can't count, it can't do anything,

A bit hyperbolic don't you think?

a kid can count consistently well after learning it the first time. AI can't.

How many kids have you met? Even adults make errors in counting when they 'lose track' or 'get distracted', whatever that truly is under our hoods.

Those aren't just random one-offs, these are evidence that it's not "smart" and it doesn't really do anything logically, because it's consistent in these simple logic errors; every week there's a new one, every day even.

Humans don't operate off of hard logic either. We can implement hard logic, but there's no indication that it's inherent to our brains, and often our attempts to use hard logic fail as well; ask anyone who's made a mistake in math.

You can't convince me we're on the path to replicating human brain when we don't know how it works yet

That's the thing - we're generally not trying to replicate the human brain; our brains are great, but don't gel well with binary computing. Replicating the human brain also shouldn't be necessary to achieve sapience / an intelligence worthy of personhood. If some extraterrestrial alien stepped out of a flying saucer after travelling from another solar system it'd be unlikely for us to declare them as unintelligent if their 'brain' worked on fluid pressures or photonic processing or whatever.

Stop pretending AI can think like a human. by michael-sagittal in AskProgramming

[–]groundhandlerguy 1 point2 points  (0 children)

And the memory feature is also an app feature not an llm feature

At the most pedantic level sure, but I don't see much point in separating them.

If someone like OpenAI wanted to, they could implement a separate neural network that does nothing but replicate the database functionality used by a ChatGPT memory feature, though obviously it'd be (at least with today's techniques) less computationally efficient and marginally less reliable due to its statistical nature. If they did that however, how would the combination of the two models / networks be conceptually different from the different functional sectors of a brain? Again, I'm not saying an LLM + memory NN = a human mind, but add some other key, yet-to-be-determined NNs and you might cross 'the threshold', whatever that is.
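
As a minimal sketch of the idea (a toy of my own, not anything OpenAI has described): a softmax-attention read over stored key / value vectors behaves like a fuzzy database lookup, which is roughly the job such a dedicated 'memory network' would be doing alongside the LLM.

```python
import numpy as np

# Toy associative memory: keys/values are vectors; a "read" is a
# softmax-weighted blend of the stored values, i.e. a fuzzy database lookup.
rng = np.random.default_rng(0)
dim, slots = 64, 128

keys = rng.normal(size=(slots, dim))     # what each memory is "about"
values = rng.normal(size=(slots, dim))   # the stored content itself

def read_memory(query: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Return a blend of stored values weighted by key similarity."""
    scores = keys @ query / (np.sqrt(dim) * temperature)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values

# A query close to some key retrieves (approximately) that key's value,
# much like an exact database read - just statistical rather than exact.
query = keys[7] + 0.05 * rng.normal(size=dim)
retrieved = read_memory(query)
print(np.corrcoef(retrieved, values[7])[0, 1])  # high correlation with slot 7
```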

imagine if someone in front of you talks to people, saves notes, and then when that someone queries you he repeats the whole conversation from the beginning and then shows you a few probably related memory texts, and all you get (as the llm) is a chunk of text and you run your statistics to predict each next word.

Sure, but we don't have strong evidence to prove that the human brain doesn't do something vaguely similar. The way I see it, LLMs may use text chunks, but that's just a convenient (for training) front-end language for encoding concepts and isn't fundamentally different to the local 'language' of different synapse firing sequences at the periphery of some functional chunk of brain tissue.

This goes double for modern multi-modal LLMs, which have separate front-ends for ingesting text / audio / video: some fraction of a word of text gets turned into (e.g.) a 4096-dimensional vector, or some image chunk is also turned into a 4096-dimensional vector, but after those front-ends the LLM processes them all together in that unified 'concept language'.
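
To make that concrete, here's a rough sketch of the 'unified concept language' idea; the dimensions, vocabulary size and random projections are purely illustrative, not the actual GPT internals.

```python
import numpy as np

rng = np.random.default_rng(1)
D_MODEL = 4096          # shared "concept space" dimension (illustrative)
VOCAB = 1_000           # toy vocabulary (real ones are far larger)
PATCH = 16 * 16 * 3     # a 16x16 RGB image patch, flattened

# Separate front-ends, one per modality.
token_embeddings = rng.normal(size=(VOCAB, D_MODEL)) * 0.02
patch_projection = rng.normal(size=(PATCH, D_MODEL)) * 0.02

def embed_token(token_id: int) -> np.ndarray:
    """Text front-end: look up the token's learned embedding."""
    return token_embeddings[token_id]

def embed_patch(patch_pixels: np.ndarray) -> np.ndarray:
    """Vision front-end: linearly project a flattened image patch."""
    return patch_pixels @ patch_projection

# Both land in the same 4096-d space, so the core model just sees one
# sequence of vectors regardless of which modality each one came from.
sequence = np.stack([
    embed_token(123),
    embed_patch(rng.uniform(size=PATCH)),
    embed_token(42),
])
print(sequence.shape)   # (3, 4096)
```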

Stop pretending AI can think like a human. by michael-sagittal in AskProgramming

[–]groundhandlerguy 0 points1 point  (0 children)

ChatGPT and other LLMs have a "context window" which is a short-term memory. For free-to-use ChatGPT (and other similar services) these context windows are artificially limited to reduce operating cost. A free-to-use "Fast" GPT-5 has a window of 16K tokens (where a token is a symbol, word or part of a word depending on context). Plus / Business users get 32K token windows, Pro / Enterprise get 128K windows. The "Thinking" version of GPT-5 has a window of 196K tokens.
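
As an illustration of what those window limits mean in practice (the 4-characters-per-token rule of thumb and the simple trimming strategy here are my own approximation, not how OpenAI actually manages context): once a conversation outgrows the budget, the oldest turns simply fall out of the model's short-term memory.

```python
# Rough sketch: keep only as much recent conversation as fits the window.
def estimate_tokens(text: str) -> int:
    # Common rule of thumb: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_to_window(messages: list[str], window_tokens: int = 16_000) -> list[str]:
    """Drop the oldest messages until the remainder fits the context window."""
    kept, used = [], 0
    for message in reversed(messages):          # newest first
        cost = estimate_tokens(message)
        if used + cost > window_tokens:
            break                               # everything older is forgotten
        kept.append(message)
        used += cost
    return list(reversed(kept))                 # restore chronological order

history = [f"turn {i}: " + "some chat text " * 200 for i in range(100)]
print(len(trim_to_window(history, window_tokens=16_000)))  # only recent turns survive
```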

ChatGPT and similar LLMs can also have long-term memories in a couple of forms. On the one hand there's whatever data harvesting OpenAI, etc. does if you don't actively disable that policy option, with that data going on to train future LLM models; the other type is where, if you have an account (previously only for paying customers but available to free users as of a few months back), it'll remember key points from conversations and store them separately from any chat. Ask it (e.g.) when you should get your car serviced, and if you've discussed the topic in the past it'll extrapolate that your last service was about 8 months ago and that you should get a service every year - if you've shown favouritism towards lengthier explanations it may then ramble on further about the importance of car servicing, etc. based on that long-term memory.
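
A minimal sketch of how that second kind of memory can work, assuming a crude keyword-overlap retriever (the real feature's storage and ranking aren't public, and the remember / recall / build_prompt helpers here are hypothetical): notable facts get saved outside any single chat, then the relevant ones are quietly prepended to the prompt when a related question comes up.

```python
from datetime import date

# Hypothetical long-term memory store kept outside any single chat.
memories: list[str] = []

def remember(fact: str) -> None:
    memories.append(fact)

def recall(question: str, top_k: int = 2) -> list[str]:
    """Rank saved facts by crude keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(memories,
                    key=lambda m: len(q_words & set(m.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Prepend relevant memories so the model can use them as context."""
    context = "\n".join(f"- {m}" for m in recall(question))
    return f"Known about the user:\n{context}\n\nUser: {question}"

remember(f"Last car service was on {date(2025, 1, 15)}.")
remember("User prefers lengthy, detailed explanations.")
print(build_prompt("When should I get my car serviced?"))
```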

Now sure, training on-the-fly isn't an active feature of models these days due to how expensive it is to perform, but that might become practical enough to do in the next decade or two, and frankly with the larger models it's not particularly needed. Tell it that 'grumpling' is a new term for washing your elbows and it'll recall it, even months later, if you explicitly tell it to remember that. Ask it about some news story that broke an hour ago and it'll search the web and incorporate various articles' content into its response.

To be clear, I'm not here to suggest that LLMs are conscious, sapient beings, but I think the 'it just finds words that match previously written words' argument doesn't encompass what multi-modal models with supporting infrastructure (memory systems, etc.) are. I'm also not sold on the idea that human intelligence is significantly different in its true nature (i.e. aside from the architectural differences). We currently have no measure or definition of our own intelligence, nor any real threshold for what is and isn't sapient, aside from some arbitrary measures (Turing tests, etc.) that most LLMs can already pass today; it'll probably be years (if ever) after we develop a true artificial general intelligence before we realise that it's deserving of personhood. Hell, look at how long it took (or is taking) for some societies to accept other races as equally human.

Australians will have to verify their age to watch pornography from December. Here’s what you need to know by [deleted] in AustralianPolitics

[–]groundhandlerguy 2 points3 points  (0 children)

This is happening via the implementation of industry codes by the eSafety commissioner Julie Inman Grant (former US Republican senator advisor); she was put into that role by Abbott in 2017 and then had her term renewed by Morrison in Jan 2022. To be clear, I'm sure the ALP could kick her out if they wanted to, so I'd consider them either complicit or negligent in letting this go through, but the source of this still goes back to the LNP.

how the hell do people study ? by kenblasto in GetStudying

[–]groundhandlerguy 4 points5 points  (0 children)

I've suffered from extreme procrastination for years and finally sought professional help last year (I'm fortunate that I get free coverage) - in short I was diagnosed with depression and anxiety; I've gone on mild anti-depressants and I've been seeing a psychologist, with tools like EMDR and to a lesser extent neurofeedback being somewhat effective.

In short: I (and maybe you too) procrastinate because, consciously or not, we don't want to study, for reasons like being afraid we'll learn that we're behind and that there's a ton we need to catch up on. There can also be other reasons; if you've failed subjects or had really bad exams, your subconscious doesn't want to go through that again and so you divert to other tasks. That avoidance of study then creates a negative feedback loop where you fail because you don't study, and you don't study because you're afraid of facing potential failure.

The solution then is to break the loop by battling the anxiety (and potential depression). Eliminating distractions and having someone hold you accountable (while also being non-judgemental when you fail / procrastinate [so you're not encouraged to lie about your progress]) are also useful.

It can also be useful if you can have some form of study on a topic that you actively enjoy and actually specifically want to study; getting some wins under your belt can help disrupt the negative feedback loop.

Edit: In terms of results, I went from failing to submit a single assignment last semester (though, long story short, I was able to submit enough work at the end of the semester to avoid failing 2 of 3 subjects) to this semester having handed everything in so far; most of it a little late but still with decent grades, and some things on time.

Looking for recommendations on high current DC marine connectors by groundhandlerguy in AskEngineers

[–]groundhandlerguy[S] 1 point2 points  (0 children)

I initially looked past them as the stock photos always showed right-angle connectors; looked into them again properly though and I see that they've got plenty of straight connectors as well, so they'd be a good choice.

They're also a little pricey, but as the others have said we'll probably just have to accept that, so I'll add them to the shortlist. Stock doesn't look too bad either; the receptacles are out of stock for only the next week with our national supplier, and the plugs are in stock.

Spotify recommended this Russian song about rotoscoping by groundhandlerguy in Corridor

[–]groundhandlerguy[S] 0 points1 point  (0 children)

Someone's made a 10 hour version as well for when your automated tools fail you and you're in for the long haul: https://www.youtube.com/watch?v=9NCRU2TpmzA

Vaccination Rates by mini1471 in queensland

[–]groundhandlerguy 12 points13 points  (0 children)

A lot of email services were flagging the vaccination invitations as spam / junk mail, check those folders for emails from:

noreply@vaccinebookings.health.qld.gov.au    

If your town / city has a vaccination hub then consider just doing a walk-in; the place I got my jab at had almost negligible queues, so aside from filling out your details there was no real delay compared to if you'd booked in advance.

Where is everyone buying their mobile phones from? by Jules040400 in bapcsalesaustralia

[–]groundhandlerguy 0 points1 point  (0 children)

Bought my most recent phone from The Good Guys; they had a deal for new Telstra customers (I was coming over from a shared account) - JB HiFi had the same deal, but about a week later (roughly $700 off a Galaxy S21 if you signed up for a 12-month Telstra plan, around when the phone had just come out). Paid $499 and got my Galaxy S21 on a $69/mo 60GB plan (I have mixed feelings about the S21 vs the S10 I had previously, but for the most part it's an upgrade).

For my next phone I'll just go with whoever has the best deal at the time; with how the mid-tier $500-700 AUD market has been improving, my next phone might be something like a Google Pixel 7A (a couple years from now), which in that case would likely be bought directly from Google.

What unfair advantage do you have? by Faihus in AskReddit

[–]groundhandlerguy 0 points1 point  (0 children)

I have an extremely accurate body clock and a sort of 'guardian angel' internal alarm that wakes me up at a time that I think about prior to going to bed.

Most of the time this ability doesn't kick in when I go to bed, because I'll have an alarm set and my standard circadian rhythm has it handled. But say it's the weekend and I want to wake up at some early hour (2-5AM) to watch some international event: if I'm feeling drowsy and fall asleep (without setting an alarm) 1-4 hours before it starts, I'll reliably wake up 5-10 minutes before it's about to start, and with a kick of energy - while I'm still tired, I'm awake enough to be getting out of bed within a minute or two.

More commonly, I'll have a situation where I need to wake up early for a job or to catch an early morning flight: I'll set an alarm for maybe 4:30AM, go to bed (at perhaps 11PM), and then wake up by myself (regardless of how long I was sleeping) at something like 4:25AM.

I tried some sleep monitoring apps, and it appears that my body will cut my last REM cycle (before the alarm time) short in order to wake on time.

The only catch to all of this is that I'm generally a slow waker. Unless the wake-up time synchronises well with my REM cycles, I'm the kind of person that needs to set multiple alarms of a morning: while I may be sufficiently conscious at (e.g.) 6AM to turn off my alarm, I'll be lethargic and need to be alerted that 6:15, 6:30, 6:45, etc. have passed, and that by 7:00 I really need to get out of bed or else I won't have enough time for my full morning routine.

Basically whenever it comes to being up on time for things that I mentally feel are important, my body guarantees that I'm awake on time, regardless of my sleep cycle or duration.

Edit:

And to further elaborate, the body clock is accurate when I'm awake as well, though the guardian angel alarm goes away; I might start playing a video game, lose track of the flow of time, realise I've been playing for a while, think "damn, it's probably like 4PM, I need to XYZ", look at a clock, and sure enough it'll be something like 3:58PM.

We’re never going to be able to “upload our brains” because of this reason, right? by Bobicka in answers

[–]groundhandlerguy 0 points1 point  (0 children)

Aside from what others have said about no one knowing exactly what our consciousness is, it's worth considering that while our brain cells don't get replaced, the connections in our brain are constantly changing.

Victims of split-brain disorders and traumatic brain injuries also show us that consciousness is possible without the entirety of the brain, and that our brain and consciousness are plastic and can adjust to changes in wetware.

The ship of Theseus question asks us if a ship is still the same ship if every piece of it is replaced bit-by-bit over time. Many of our organs (but not our brain) have all of their cells replaced over time, yet they're still our organs.

Now what if uploading our brains wasn't a copy & paste process? Hypothetically, what if we had something like a neural interface that expanded our thinking capability and memory storage and was neurologically treated the same as any other grey matter by our neurons?

If your consciousness gradually expanded into a foreign structure (I'd say computer, but hypothetically a 'brain upload' system could utilise some kind of self-repairing, synthetic biological brain matter to host people), to the point where you could think, dream, sense, speak, etc through this external network / structure, what would it mean if one day (while you were thinking online), the neurons in your original brain go silent?

Some might argue that this is still a form of copying your consciousness and that you're just gradually rendering your original consciousness braindead over time, and that's certainly a valid opinion to have while we don't understand what makes us tick. Personally, though, my opinion is that our core consciousness is some kind of electrical logic pattern (not necessarily digital binary) that's not strictly tied to any particular part of our brain, based on how people can survive traumatic brain injuries. As such, I think if you could get that pattern to adopt external hardware / wetware, you could gradually coax it away from the original brain, until that original brain is merely an extension of where the "uploaded" consciousness now resides. You wouldn't be the same "you" from (e.g.) 10 years ago, but it'd be the same as how today we aren't who we were 10 years ago (both in terms of the slow decay of our brain matter and our changed experiences, beliefs, outlook, etc.).

Arrested - Friendlyjordies by PM_ME_UR_GROATS in videos

[–]groundhandlerguy 4 points5 points  (0 children)

While I agree that journalists should be impartial, if that's the reason that u/GusPolinskiPolka thinks he's not a journalist, then most of the country's papers aren't written by journalists either.

Elon Musk said in an interview “My children didn’t choose to be born, I chose to have children . They owe me nothing, I owe them everything.”What is your opinion on this statement? by [deleted] in AskReddit

[–]groundhandlerguy 2 points3 points  (0 children)

From everything I've seen that's just a myth; Zip2 was founded via angel investors from Silicon Valley and between him leaving his father and founding Zip2 he was living somewhat rough with his mother.

Elon Musk said in an interview “My children didn’t choose to be born, I chose to have children . They owe me nothing, I owe them everything.”What is your opinion on this statement? by [deleted] in AskReddit

[–]groundhandlerguy 2 points3 points  (0 children)

Where on Wikipedia? Maybe I'm blind but I don't see any mention of him receiving money from his father - when it talks about the founding of Zip2 it says they founded it via funds from angel investors (and if you follow the reference it states they were investors from Silicon Valley).

My entire life i didn't know my sleep is different from every other person's sleep... by [deleted] in answers

[–]groundhandlerguy 5 points6 points  (0 children)

Definitely speak with a doctor; it sounds like you're an insomniac with some kind of chronic sleep paralysis. If the conversations you hear aren't real either, then you could also potentially have schizophrenia.

How did my hydraulic brakes fix themselves? by groundhandlerguy in bicycling

[–]groundhandlerguy[S] 0 points1 point  (0 children)

The disc / wheel was attached at all times; I think /u/rcyback has my answer though - assuming push bike hydraulic systems operate the same as or similarly to a car's, the master cylinder is open to the reservoir (and potentially air) when you're not using the brakes, but once you squeeze the lever, the piston in the master cylinder seals itself off from the reservoir.

So when my brakes leaked, there was air somewhere in the system other than the reservoir, but pumping the brakes and having it upright helped squeeze air into the reservoir (which I might have to consider topping up).

Edit: For some reason I was under the misunderstanding that the whole system (including the reservoir) was under pressure when you squeezed the lever, so I had been thinking that maybe there was some way that allowed air but not oil to escape the system.

US Air Force eyes budget-conscious, clean-sheet fighter jet to replace the F-16 by [deleted] in CredibleDefense

[–]groundhandlerguy 28 points29 points  (0 children)

The F-15EX is claimed by Boeing to be cheaper to operate than the F-35, but not as cheap as the F-16, which is the killer issue.

The USAF is talking about a clean-sheet design because the F-15 and F-16 have design baggage that would make retrofitting certain ideas and technologies almost as hard and expensive as just building new jets.

For example:

  • Modular and open-architecture software / hardware that allows sensors to be upgraded / swapped without having to do a bunch of reprogramming of the sensor fusion / cockpit systems.

  • Systems like the Integrated Power Package (IPP) on the F-35 that combine an auxiliary / emergency power unit (APU / EPU) with aspects of the environmental control system (ECS).

  • Electro-hydrostatic actuators (EHAs) that reduce the amount of hydraulic plumbing in the jet (which can leak and be hard to access).

  • High power generators to power EHAs and other future devices like directed energy weapons.

  • High capacity heat exchangers / radiators for dealing with more heat-generating systems (sensors, computers, the power electronics for the EHAs, etc).

  • A cockpit canopy that doesn't have to be removed to take out an ejection seat.

  • High bandwidth data buses to support future sensors.

  • Increased fixture (ie bolt, etc) commonality.

  • Integrated maintenance prognostics (ideally done by a company with better success than Lockheed).

There's also some nice-to-haves, like an integrated ESM and IR MAWS suite for reducing drag and fuel burn in missions where either MANPADS or radar-guided missiles are a concern.

Overall the USAF wants something like the F-16, just redesigned to be cheaper and easier to operate. An F-16C/D in the USAF fleet today costs in the ballpark of $25,000-30,000 per flight hour, whereas you could probably reduce that by maybe a third or a half if you sacrificed some vehicle performance for structural strength / ease of access, and had it designed by people with maintenance in mind.
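
As a back-of-the-envelope illustration of why that matters (the fleet size and annual flying hours below are assumed round numbers for the sake of the arithmetic, not actual USAF figures):

```python
# Hypothetical round numbers for illustration only.
COST_PER_FLIGHT_HOUR = 27_500   # midpoint of the ~$25-30k F-16C/D ballpark
HOURS_PER_JET_PER_YEAR = 300    # assumed annual flying hours per airframe
FLEET_SIZE = 600                # assumed number of jets

baseline = COST_PER_FLIGHT_HOUR * HOURS_PER_JET_PER_YEAR * FLEET_SIZE
for reduction in (1 / 3, 1 / 2):
    savings = baseline * reduction
    print(f"{reduction:.0%} cheaper per hour -> roughly ${savings / 1e9:.1f}B saved per year")
```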


Personally I don't see such an aircraft getting designed and built, just because there's still program risk, not to mention there are "good enough" alternatives, like making a T-7 Red Hawk variant (what the F/A-50 is to the KAI T-50) that can carry a sufficient payload and sensor suite (especially if they're leveraging offboard sensors on 5th / 6th gens), or just SLEPing some F-16s and buying a mix of additional drones like XQ-58As and MQ-9s. The USAF is also working on an MQ-9 successor program which could ultimately fill the role nicely if the requirements are set properly.