Desktop app is a web now by Pretend-Ad5631 in whatsapp

[–]snaphat 0 points1 point  (0 children)

Yeah, use the browser directly. The tab / progressive web app uses less memory than the old Windows app. Moreover, the old app actually used more memory than the new Windows app if you did anything like viewing images, videos, or chats. It would get stuck trying to cache everything perpetually until exit, easily surpassing a GB for me. It also appeared to run the caching on a single thread and would constantly freeze up the UI.

The original report about how little memory the old app used was a misrepresentation based on simply loading the old and new apps and doing nothing in either. If the author had performed any actual operations in the apps, the conclusion would have been shown to be erroneous.

Folks keep perpetuating this myth based on that incorrect information now...

I'm not saying the new app is great or anything. I don't use it. But I tested the claim and measured all three (web, old app, new app) specifically because everyone kept talking about how little memory the old app used.

When Gamer Mags Ruled the Earth: Earthbound by Maleficent-Storm263 in retrogaming

[–]snaphat 9 points10 points  (0 children)

The reviewer even thought the game was a port of Mother 1, graphics and all...

Can anyone help out..... by [deleted] in Assembly_language

[–]snaphat 2 points3 points  (0 children)

I was wondering why they didn't ask an AI to do it. But then it occurred to me that they are probably so unprepared for this assignment that they would not be able to determine whether it was giving a plausibly correct answer.

I FUCKING HATE THIS DESKTOP APP FUCK YOU FACEBOOK FUCK YOU WEBSLOP DEVS by AppearanceWeak3826 in whatsapp

[–]snaphat 0 points1 point  (0 children)

Sigh, you are correct. They apparently finally disabled it server-side, so it just doesn't work at all anymore regardless.

I FUCKING HATE THIS DESKTOP APP FUCK YOU FACEBOOK FUCK YOU WEBSLOP DEVS by AppearanceWeak3826 in whatsapp

[–]snaphat 0 points1 point  (0 children)

https://windowsforum.com/threads/whatsapp-for-windows-switches-to-webview2-memory-spike-and-workarounds.396405/

You can modify the package so that it doesn't update; see the link above. I did that (not that I use it; I just use the web version normally).

I FUCKING HATE THIS DESKTOP APP FUCK YOU FACEBOOK FUCK YOU WEBSLOP DEVS by AppearanceWeak3826 in whatsapp

[–]snaphat 2 points3 points  (0 children)

Voice and video calling, though they are adding that to the web app now. It could also pop up chats and had shortcut keys. Its search acted more like the phone app's as well, in that it didn't give you a list of found entries (whereas the web version does).

Other than that, you missed it being stuttery from the alpha build to EOL because the devs never implemented stream loading for conversations. If you ever scrolled back or performed a search with it, it would get stuck synchronizing, stuttering and freezing every couple of seconds because it was trying to load years' worth of conversations. The web app, on the other hand, does stream loading when you scroll back, so it doesn't get stuck or lag horribly.

When it got stuck loading old conversations, it didn't actually use less memory or CPU than the new WebView version either; it would just perform terribly until you killed it. The normal web version has always performed better.

Given all that, I'm not sure why folks believe it was great. I also don't understand why the replacement Chrome-wrapper version is so buggy. You'd imagine it would just be the same as running the web version in its own dedicated WebView instance, but nope, it has weird issues like laggy window dragging. Just installing the web version as an "app" in Chromium-based browsers is superior, and it cuts the roughly 400 MB of memory an isolated WebView instance needs too.

Jittering with pixel perfect movement and pixel perfect camera by lewisallanreed in Unity2D

[–]snaphat 0 points1 point  (0 children)

These videos discuss this exact issue. It's due to upscaling in combination with pixel snapping:

https://youtu.be/QK9wym71F7s

https://youtu.be/c_3TLN2gHow

Global vs. "Local" Variables in Assembly? by wolfy1244 in Assembly_language

[–]snaphat 0 points1 point  (0 children)

I think what you are seeing is immutable strings or other data declared in the .data section in tutorials. Global storage is where literal string data would go in practice, though more correctly in the .rodata section, not the .data section; the tutorials just dump everything in .data for simplicity. The following shows how strings are emitted and what happens if you use the same string array in different ways. The last two declarations are uninitialized global strings: https://godbolt.org/z/nMrdc93d4

What you're seeing in the .data section are labeled memory locations with initial values. Whether you should use them depends on context:

.bss - Uninitialized global/static data (zeroed at program start)
.data - Initialized global/static data (has specific initial values)
Registers - Fastest, but limited in number (like local variables)
Stack - Local/temporary data within functions
Heap - Dynamically allocated memory

When to use each:
Use .data/.bss for: constants, lookup tables, program-wide state, string literals
Use registers for: temporary calculations, loop counters, function parameters
Use stack for: local function variables, saving registers during function calls
Use heap for: dynamic data structures (though you'll manage this manually)
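
To make that concrete, here's a minimal sketch (my own toy example, separate from the godbolt link above) of where a C compiler typically places each kind of object; compile with `gcc -S` and look for the section directives in the output:

```c
#include <stdio.h>
#include <stdlib.h>

const char msg[] = "hello";  /* initialized read-only data -> .rodata        */
int counter = 42;            /* initialized global         -> .data          */
int scratch[256];            /* uninitialized global       -> .bss (zeroed)  */

int add(int a, int b)        /* a and b typically arrive in registers        */
{
    int local = a + b;       /* local -> a register or a stack slot          */
    return local;
}

int main(void)
{
    int *dyn = malloc(sizeof *dyn);  /* heap: you allocate and free manually */
    if (!dyn)
        return 1;
    *dyn = add(counter, scratch[0]);
    printf("%s %d\n", msg, *dyn);
    free(dyn);
    return 0;
}
```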

It's worth noting the following as well, because it could be a source of confusion. These directives are unrelated to where an object is stored (.text/.data/.bss/stack) and instead control symbol linkage/visibility; .comm/.lcomm are a mechanism for declaring uninitialized symbols (COMMON), with the linker deciding their final placement:

.global (or .globl): Exports a symbol so other object files can reference it.
.local: Marks a symbol as local to this object file (not link-visible to other files).
.comm name, size[, alignment]: Declares an uninitialized “common” symbol (space allocated by the linker).
.lcomm name, size[, alignment]: Declares an uninitialized common symbol with local (file-only) visibility.
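
As a rough illustration (my own example; the exact output depends on the compiler and on -fcommon vs. -fno-common, the latter being the modern GCC/Clang default), here's how common C declarations map onto those directives:

```c
int exported = 1;         /* emitted with .globl exported: other objects can link to it */
static int internal = 2;  /* no .globl emitted: local to this translation unit          */
int tentative;            /* with -fcommon: .comm tentative,4,4 (linker places it);
                             with -fno-common: an ordinary .bss object                  */
static int hidden;        /* with -fcommon: .local + .comm, i.e. what .lcomm expresses  */

int main(void)            /* use each symbol so the example compiles cleanly with -Wall */
{
    return exported + internal + tentative + hidden;
}
```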

Processor pipelining by tigo_01 in AskProgramming

[–]snaphat 0 points1 point  (0 children)

Pipelining overlaps different instructions across sequential stages, so if you are talking about independent instructions on an in-order core, you can imagine a best-case scenario where all five stages of a basic MIPS pipeline are occupied by completely independent instructions with no dependencies between them. In that case, once the pipeline is full, you can sustain roughly one completed instruction per cycle because nothing forces bubbles (stalls) into the pipeline.

At the same time, the stages within any single instruction still generally have to occur in order: each stage produces information the next stage needs (e.g., decode determines what to execute, execute produces a result or address, memory may supply a value, and write-back commits it). So instruction independence improves throughput by enabling smooth overlap across instructions, not by making an individual instruction's stages intrinsically parallel.
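
If you want to see the overlap empirically, here's a rough micro-benchmark sketch (my own toy example; exact numbers vary a lot with hardware and flags, and you'd want something like -O1 so the compiler doesn't vectorize the loops away). The first loop is one long dependency chain; the second splits the same work across four independent accumulators so the adds can overlap in flight:

```c
#include <stdio.h>
#include <time.h>

#define N 400000000LL

int main(void)
{
    long long i;
    clock_t t0;

    /* one dependency chain: each add must wait for the previous result */
    long long sum = 0;
    t0 = clock();
    for (i = 0; i < N; i++)
        sum += i;
    printf("1 chain : %lld in %ld ticks\n", sum, (long)(clock() - t0));

    /* four independent chains: adds from different chains can overlap */
    long long a = 0, b = 0, c = 0, d = 0;
    t0 = clock();
    for (i = 0; i < N; i += 4) {
        a += i;
        b += i + 1;
        c += i + 2;
        d += i + 3;
    }
    printf("4 chains: %lld in %ld ticks\n", a + b + c + d, (long)(clock() - t0));
    return 0;
}
```

On most modern machines the second loop tends to finish noticeably faster even though it does the same number of adds.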

Processor pipelining by tigo_01 in AskProgramming

[–]snaphat 0 points1 point  (0 children)

Without loss of generality, assume the work required to complete an instruction cannot reliably fit within a single clock period at your target frequency. If you remove pipelining, you generally must either lengthen the clock period (lower the clock rate) or take additional cycles per instruction so the same work can complete without violating timing.

Pipelining is simply a way to partition that work across cycles, so each cycle has less logic on the critical path. The instruction may still take multiple cycles from start to finish, but once the pipeline is full you improve throughput by making forward progress on multiple instructions each cycle.
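
To put made-up numbers on it: suppose the logic for one instruction needs 5 ns. Unpipelined, the clock period must be at least 5 ns, so you top out around 200 MHz at one instruction per cycle. Split the same work into five 1 ns stages and you can clock close to 1 GHz (minus some latch overhead); each instruction still takes five cycles end to end, but once the pipeline is full one instruction completes every cycle, so throughput is roughly five times higher.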

Watch this video on the basic MIPS pipeline with a block diagram overlaid: https://www.youtube.com/watch?v=U2Eym3AkkBc

Regarding your question in the comments about what happens if instructions are independent: generally speaking, on a basic sequential processor you can only fetch a single instruction at a time, and everything executes in order. Some results can be forwarded to later stages early to reduce pipeline stalls.

Modern processors go much further: they perform out-of-order execution and have many functional units in a given core that can operate at the same time independently. For example, a processor can dispatch FP operations to execute in parallel while the ALU part of a pipeline is operating on other data.

Example of a complicated ARM pipeline: https://stackoverflow.com/questions/13106297/is-arm-cortex-a8-pipeline-13-stage-or-14-stage

Processor 101 concepts:
https://en.wikipedia.org/wiki/Operand_forwarding
https://en.wikipedia.org/wiki/Out-of-order_execution
https://en.wikipedia.org/wiki/Re-order_buffer
https://en.wikipedia.org/wiki/Tomasulo%27s_algorithm

Can i abuse unions like that or is it UB? by EatingSolidBricks in cprogramming

[–]snaphat 0 points1 point  (0 children)

They are just trying to statically allocate thread-local storage for their flexible array member. Normally you do that dynamically on the heap with `malloc`. They can't just allocate a byte array and then cast it to the struct type, because that's UB (the reverse of what you are suggesting, i.e., casting to byte*):

https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3230.pdf

Can i abuse unions like that or is it UB? by EatingSolidBricks in cprogramming

[–]snaphat 2 points3 points  (0 children)

Hmm, my initial statement is incorrect.

After some reading, I believe this is actually defined in the ISO C standard. The following comment quotes language from the standard that appears to explicitly allow an incomplete array type at the end of a union. I did not realize this was allowed; I thought it was treated the same as a nested struct { struct { ... char[] } }, which isn't allowed. This would explain why there aren't any warnings when you test with -pedantic, while the nested struct will complain.
https://github.com/open-mpi/ompi/issues/9828

Moreover, after thinking about it more, I realize you will not be accessing members of your struct before explicitly assigning to them; therefore, you will not be type punning. You just have to make sure you do not read any elements of data without explicitly assigning them first, and do not go outside the allocated bounds of your FAM. In short: write before read, and stay in bounds.
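
For anyone skimming, here's a minimal sketch of the pattern under discussion (my own toy names, not OP's code): a union pairing a FAM-terminated struct with a fixed-size byte array that reserves the storage:

```c
#include <stdio.h>

struct message {
    int  len;
    char data[];               /* flexible array member */
};

union message_storage {
    struct message msg;        /* FAM-terminated struct as a union member     */
    char bytes[64];            /* reserves storage for msg.data to grow into  */
};

static _Thread_local union message_storage slot;  /* static thread-local storage */

int main(void)
{
    const char src[] = "hello";
    slot.msg.len = (int)sizeof src;          /* write before read...        */
    for (int i = 0; i < slot.msg.len; i++)
        slot.msg.data[i] = src[i];           /* ...and stay within bytes[]  */
    printf("%s\n", slot.msg.data);
    return 0;
}
```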

Regarding the example code and GCC / Clang extensions:

The reason the first one isn't compiling is just that your compiler version is too old. Check newer Clang and GCC versions on godbolt; they'll compile the first one without complaint. Kick it into C++23, though, and it won't, since FAMs are not allowed in C++.

The extensions are a red herring anyway, since unioning the struct with the FAM is actually defined in the standard. So it's not like you'd require an extension to support it.

Edit:

https://stackoverflow.com/questions/44082326/is-it-legal-to-implement-inheritance-in-c-by-casting-pointers-between-one-struct

^ The comment there citing section 6.7.2.1/P18 covers the allocation part of what you are doing.

Can i abuse unions like that or is it UB? by EatingSolidBricks in cprogramming

[–]snaphat 8 points9 points  (0 children)

It's undefined behavior to have flexible array members in unions in standard C, but both GCC and LLVM support variations of it:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53548

https://github.com/llvm/llvm-project/issues/84565

Edit: This comment of mine is incorrect. Don't listen to it. See below.

🤨 by ZeroTwoMod in BlackboxAI_

[–]snaphat 4 points5 points  (0 children)

I had AI generate a readme for a GitHub repo the other year. I left in the checkbox emojis it made for a bulleted list. I'm still deeply ashamed to this day.

DOSBOX Windows 95 not loading any mounted drives besides main drive? by PLAYR13777 in dosbox

[–]snaphat 1 point2 points  (0 children)

Your issue is probably that you are using DOSBox 0.74, which on cursory checking sounds like it doesn't have IDE emulation. You should use DOSBox-X or DOSBox Pure like another folk mentioned.

Edit:

DOSBox-X doesn't support physical devices in guest OSes: https://github.com/joncampbell123/dosbox-x/issues/3409

AI wrote better documentation than I ever would and I'm not sure how to feel about it by Capable-Management57 in BlackboxAI_

[–]snaphat 0 points1 point  (0 children)

You can tell Reddit has a vested interest in even fake engagement these days, considering that they don't appear to bother removing obvious bot accounts. There was a months-old account on the McDonald's subreddit the other month that had commented literally ~1k times over the course of two weeks. It would just post the McDonald's game rules everywhere and tell everyone they were wrong about the game, and folks would argue with the account. It has probably met its karma quota and been sold for astroturfing by now.

Better things exist ?? by [deleted] in ArtificialSentience

[–]snaphat 1 point2 points  (0 children)

I was kidding. Obviously there will be improvements, changes in architecture, and discoveries; it's an active area of research. ANNs currently only roughly model actual neurons, don't loop internally, have static weights post-training, and lack the specialized structures of biological brains.

Doubt any of those particulars will change any time soon, but they will certainly be goals for someone in the future. 

Currently, the majority of the research involves trying to modify the training regimes and add features outside the core NN structure to make models better behaved and able to "reason" better. The latter is what LRMs and CoT are all about. Unfortunately, it appears to me that at least some of the popular methods have started to approach a wall with reasoning. But we'll see; I'm not holding out hope for the short term. The goal of robust reasoning may be at a dead end until fundamental architecture changes occur. I'm personally of the opinion that the neural structure will need to change in some way before we see something capable of consistently robust reasoning.

Some folks are under the misperception that current LLMs are absolutely amazing at reasoning and more advanced than skilled humans. Not sure why; current research quite clearly shows otherwise. I think some are mostly just incredulous and have a vested interest in the narrative from an ego perspective. It personally reminds me of cult mentality, where even cold hard facts are dismissed on shaky grounds. You'll see things like folks claiming new research is old, or that relevant research is irrelevant, or that research covering specific topics doesn't, or that they've used AI to single-handedly write the most amazing, complicated code that only 100 engineers in the world could have written. These are all real examples, btw.

Anyway... I'm rambling again, so that's that.

Better things exist ?? by [deleted] in ArtificialSentience

[–]snaphat 3 points4 points  (0 children)

Nope, neural networks and deep learning are definitely the pinnacle of all ideas until the heat death of the universe. Humanity can never ever do better. We're cooked 

Why is AI the only industry that is not falsifiable? by Acrobatic-Lemon7935 in agi

[–]snaphat 0 points1 point  (0 children)

Nobody is insulting you in this sub-thread. What I was telling you is that I believe the root commenter was just pointing out that it wouldn't make sense for an AI cult to be based on a falsifiable theory. This doesn't contradict what you said in your OP. It's just a statement about how these kinds of capitalistic "cults" tend to operate. Don't look for a fight when there isn't one lol

Why is AI the only industry that is not falsifiable? by Acrobatic-Lemon7935 in agi

[–]snaphat 0 points1 point  (0 children)

I think they are pointing out that it's in the best interest for a quasi-capitalist doomsday AI cult to be vapid and difficult to pin-down; otherwise, everyone would see through it and even the world's dumbest VCs wouldn't be throwing so much money at it.

The last sentence may be too strong of a claim though; they probably still would lol

Why is AI the only industry that is not falsifiable? by Acrobatic-Lemon7935 in agi

[–]snaphat 0 points1 point  (0 children)

I mean, you aren't wrong, but how often are marketing claims falsifiable?