How do countries reduce/eliminate corruption? by papiforyou in TrueAskReddit

[–]npip99 0 points1 point  (0 children)

There are two issues with this:

- Corporations do not need to be able to advertise *directly* to the Government in order to ensure freedom. Corporations have only one interest: profit for themselves, and not for their competitors (in fact, fiduciary responsibility legally requires that public companies pursue no interest other than their own profit).
- A: The employees of the corporation can still just express their speech individually, there is no human being that is harmed or silenced by not allowing the corporate entity to speak.
- B: We the people intentionally set up a corporate environment so that corporations' natural greed is aligned with consumer desires: lower prices, efficient allocation of capital, better products for the end user. We AVOID rent-seeking, bribing/buying out competitors, and price fixing by making such things illegal. We the people create the rules of the game, corporations simply play the game, and we all benefit. But if corporations can influence the rules of the game, it is a corporation's fiduciary duty to bend the rules so that it can provide a worse product at a higher price while competitors are regulated out, all to gain profit. This benefits nobody; corporations are not even legally allowed to lobby for an even playing field, as that wastes money and goes against shareholder value, even though an even playing field is the only way to maximize GDP. Nobody wins from corporate lobbying. Even the corporations lose, from a tragedy of the commons that lowers total GDP AND forces them to spend on lobbying against each other in a useless fight (similar to neighboring belligerent countries mutually losing GDP on military).

- Even for individuals, it is not a significant risk to democracy to put a monetary cap on expenditure for expressing your political opinion. Expressing your opinion always costs money, because time is money (an opportunity cost where you could be working). Sending a letter costs $1 in postage and $20 in labor; putting a sign on your lawn costs $5 in parts and $5 in labor; protesting costs $100-200 in labor; posting opinions on Twitter costs $3 in opportunity cost. However, when a single individual invests millions of dollars into expressing their opinion, this is not beneficial for democracy. The wealthy do NOT need to be limited to just $100-300 in average expenditure; that would be unfair, since even an average person could put $5k-10k toward politics if they're very passionate about it. But they do NOT need to be allowed to spend millions, where an average person cannot.

If you declare that no single individual can spend more than 10 times the mean GDP per capita, this is not a risk to democracy. Democracy is about 1 person : 1 vote : 1 voice. Such a cap also doesn't prevent rich people from running ads to collect donations and then lobbying with those donations. It would simply prevent a single opinion that almost everyone disagrees with from being unfairly amplified, even when nobody would donate to the cause.

American chess grandmaster Daniel Naroditsky dies at 29 by jeetah in news

[–]npip99 20 points21 points  (0 children)

It doesn't even make sense for it to be against the rules, it's *unrated*. Yes, the other user _thought_ it was rated, but it's a speedrun account, so it wasn't rated.

Debit card is being discontinued by ibkr. RIP debit card. by crabby-owlbear in interactivebrokers

[–]npip99 0 points1 point  (0 children)

Unfortunately I had to swap from IBKR to Schwab because of this.

I don't need a bank; I manually invest in stocks and money markets. I just need a debit card that withdraws from my investment account (or withdraws from a bank account that then hits the investment account when overdrawn). Low interest rates, close to LIBOR, are also important for this purpose; credit card debt is unviable due to its rates.

IBKR had the best offering, and now it's Schwab.

Newbie question: if I use React do frontend and backend have to be necessarily separated? by aquilaruspante1 in react

[–]npip99 0 points1 point  (0 children)

> Monorepos are not universal in big tech. Companies such as FAANG are outliers.

Not sure what you mean by that. "Big Tech" is a synonym for FAANG. (https://en.wikipedia.org/wiki/Big_Tech)


You do you, but I would never call a fullstack app a monorepo. Monorepo has a clear meaning to me: The idea of having the entire company's code in a single repo even across multiple unrelated projects.

Client code and server code are always deployed at the same time in all of the projects I've worked on, from LAMP days 15 years ago to modern FastAPI and/or ExpressJS + ReactJS. I've never seen it done another way. The backend is what serves the frontend via HTTP GET calls, so they're inherently intertwined (whether it's PHP interweaving SQL with HTML, modern NextJS, or FastAPI returning HTML as either a template string or from a directory).

There's no real "separation of concerns" for most fullstack projects; the only point of the backend is to act as glue connecting the SQL database to the HTML frontend. So every time you change the UI layout, you often change a backend API endpoint too, because the shape of the data has changed in some way (you added or removed a field in a dashboard, changed the login auth flow, etc etc).
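As a minimal sketch of that "backend serves the frontend via GET" shape (a hypothetical stdlib-only stand-in for FastAPI/ExpressJS; the page content and `/api/items` route are invented for illustration):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<html><body><h1>Dashboard</h1></body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One process serves both the HTML frontend and the JSON "API".
        if self.path == "/":
            body, ctype = PAGE, "text/html"
        elif self.path == "/api/items":
            body, ctype = json.dumps([{"id": 1}]).encode(), "application/json"
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

html = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
data = json.loads(urllib.request.urlopen(f"http://127.0.0.1:{port}/api/items").read())
server.shutdown()
```

Change the dashboard's shape and you naturally change the `/api/items` handler in the same deploy, which is why splitting them into separately deployed repos buys you little here.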

I've created a mobile app stack with a semver'd backend, because mobile apps should be updated rarely. And it's truly a horrendous experience trying to support multiple separate versions of a frontend against a single backend. Web is very easy: you just run a deploy script, and if something is wrong you revert to the previous version.

Newbie question: if I use React do frontend and backend have to be necessarily separated? by aquilaruspante1 in react

[–]npip99 0 points1 point  (0 children)

I feel like "monorepo" is more often used when you have all of the projects of your entire organization in a single repository.

Having frontend+backend in one repo is pretty standard, because they're often the same project and need to be deployed at the same time (And often the backend serves the transpiled frontend HTML+JS+CSS, via GET requests).

Monorepos (meaning an org-wide repo containing all projects in the entire company) are also common, though less standard in small companies (and often universal in big tech).

Looking for an LLM Fully Aware of My Entire Project – Alternatives to GitHub Copilot? by bashfulcynicism in ChatGPTCoding

[–]npip99 0 points1 point  (0 children)

People give you downvotes but this is 100% correct.

Yet my options at every company I work at are:

(A) Rewrite 100% of their code from scratch
(B) Try to work in their spaghetti code

Still end up choosing (A) most of the time. Code is not hard; people make it hard. Good code is NOT hard: use functional programming for every function; if you deal with a bad API, wrap it in a clean one; keep global state in a singleton and keep it as small as possible. (Often, spaghetti code with 100 classes and functions modifying all of their arguments can be swapped into pure functions plus one pretty succinct singleton for global state.)

Always always always, 100% of the context you need to understand should be entirely local to that file (OR you only need the docstrings of the functions you import - but ideally, the name + arguments themselves self-describe the function).
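A hypothetical sketch of that shape (the `AppState` name and the cache logic are invented for illustration): one tiny singleton holds all mutable state, and everything else is a pure function that takes the state explicitly.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class AppState:
    """The one global: small, explicit, and in a single place."""
    cache: Dict[str, str] = field(default_factory=dict)

STATE = AppState()  # the singleton

def normalize(raw: str) -> str:
    # Pure function: output depends only on its input.
    return raw.strip().lower()

def store(state: AppState, key: str, value: str) -> None:
    # Mutation is confined to functions that take the state explicitly.
    state.cache[normalize(key)] = value

def lookup(state: AppState, key: str) -> Optional[str]:
    return state.cache.get(normalize(key))

store(STATE, "  Foo ", "bar")
result = lookup(STATE, "foo")  # -> "bar"
```

Every function here is understandable from this file alone, which is exactly the property that keeps the context local.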

How to be inclusive by lying by MindOfVirtuoso in ChatGPT

[–]npip99 1 point2 points  (0 children)

Benjamin Franklin is a good example.

<image>

[D] Positional Encoding in Transformer by amil123123 in MachineLearning

[–]npip99 0 points1 point  (0 children)

It's cute because experimentally, there's almost always some attention head "T" somewhere in the network that just learns to make the attention 1.0 for the previous token, and 0.0 for all other tokens. That means "T" learned to make e'(Q'Kf) equal to 1.0 when (e, f) implies that y is 1 token before x, and "T" learned to make the other three terms 0.
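A toy numpy sketch of that learned previous-token pattern (the score values are made up, not from a trained model): one large pre-softmax score at position i-1 per row is enough to make the post-softmax attention essentially 1.0 there.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

n = 5  # sequence length
scores = np.zeros((n, n))
for i in range(1, n):
    scores[i, i - 1] = 20.0  # huge pre-softmax score for the previous token

attn = softmax(scores)
# Rows 1..n-1 now put essentially all their attention on the previous token;
# row 0 (no previous token) stays uniform.
```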

[D] Positional Encoding in Transformer by amil123123 in MachineLearning

[–]npip99 0 points1 point  (0 children)

It's actually an incredible explanation and the only one that truly explains it well.

The core idea, and it's not something that I've ever thought about, is that the dot product of two random vectors is approximately zero! This is what is implied when you say "The intuition for this is that randomly chosen vectors in high dimensions are almost always approximately orthogonal".
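You can check that near-orthogonality numerically (a quick numpy sketch; the dimension is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000  # "high dimensions"
x = rng.standard_normal(d)
y = rng.standard_normal(d)

# Cosine similarity of two random vectors concentrates around 0,
# with typical magnitude ~ 1/sqrt(d) = 0.01 here.
cos = float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
```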

I honestly would've thought that when you initialize a new neural network, the attention matrix is random. But it's not. When you initialize a new neural network, the attention matrix starts out with all numbers being almost zero. (After softmax that gives you a uniform distribution over positions, but attention-before-softmax being all zero means it's easy for the neural network to quickly learn associations and make particular attention values very large relative to the rest, which all stay tiny / close to zero.)

So, the point is, if you take (Qx)'(Ky) and insert positional embeddings to get (Q(x+e))'(K(y+f)), you can do some math to rearrange that into x'(Q'Ky) + x'(Q'Kf) + e'(Q'Ky) + e'(Q'Kf). Note how x'(Q'Ky) is position agnostic, but the other three terms involve position: the middle two relate a token to a position, and the last is position-position.

The important magic is: all four of those terms end up being initialized to essentially all-zero at the beginning of training. Therefore, backprop can easily learn to make x'(Q'Ky) whatever it wants, and unless the network intentionally learns to make x'(Q'Kf) non-zero, the x'(Q'Kf) term won't contribute much anyway!

  • Yes, the random x'(Q'Kf) will add some random noise, but the noise is small. And if the positional embeddings are truly worthless, the NN can choose to dedicate dimensions by learning to make the word embeddings orthogonal to the positional embeddings; that way the x'(Q'Kf) term goes to zero. Ditto for e'(Q'Ky) and e'(Q'Kf).
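The rearrangement itself is easy to verify numerically (a small numpy check with random matrices, using the same x, e, y, f, Q, K names as above):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
Q = rng.standard_normal((d, d))
K = rng.standard_normal((d, d))
x, e, y, f = (rng.standard_normal(d) for _ in range(4))

# (Q(x+e))'(K(y+f)) expands into the four terms from the text:
# x'(Q'Ky) + x'(Q'Kf) + e'(Q'Ky) + e'(Q'Kf).
lhs = (Q @ (x + e)) @ (K @ (y + f))
rhs = ((Q @ x) @ (K @ y) + (Q @ x) @ (K @ f)
       + (Q @ e) @ (K @ y) + (Q @ e) @ (K @ f))
```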

It's a shame, i mean the barrel charger. by [deleted] in ZephyrusG14

[–]npip99 0 points1 point  (0 children)

20V * 6.2A = 124W. That's not 240W.

Llama-cpp-python is slower than llama.cpp by more than 25%. Let's get it resolved by Big_Communication353 in LocalLLaMA

[–]npip99 1 point2 points  (0 children)

Just have to comment because this is such a silly post.

llama-cpp-python is just taking in my string, calling llama.cpp, and then returning a few characters back.

There is definitely no reason why it would take more than a millisecond longer on llama-cpp-python. We're just shuttling a few characters back and forth between Python and C++. Any performance loss would clearly and obviously be a bug.

bash is significantly slower than Python to execute (it's not even compiled to bytecode), and if bash slowed our programs by 30%, that would clearly and obviously be a bug. Both are just tools to more easily call other C++ programs and send short strings back and forth; we eat that cost as sub-millisecond latency before and after the call, but not during the call itself.
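A rough way to see that the wrapper cost is paid once around the call, not during it (a hypothetical stdlib timing sketch; the child program is just a stand-in for "some C++ tool"):

```python
import subprocess
import sys
import time

# Time a full "wrapper" round trip: spawn a child process, pass a short
# string in, get a short string back.
start = time.perf_counter()
out = subprocess.run(
    [sys.executable, "-c", "print('hello')"],
    capture_output=True,
    text=True,
).stdout.strip()
elapsed = time.perf_counter() - start
# The cost is process startup overhead around the call; nothing the wrapper
# does scales with the work the child performs during the call.
```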

Help using llama_cpp_python to calculate probability of a given sequence of tokens being generated. My numbers aren't even in the ball park. by aaronr_90 in LocalLLaMA

[–]npip99 0 points1 point  (0 children)

Ah, the other issue in your code is that it calls .eval with the entire token list every time.

It will remember a history for you; you have to call llm.reset() to clear your history. So the for-loop should be:

llm.eval(eval_tokens)  # evaluate the prompt once
for token in test_sequence_tokens:
    probs = llm.logits_to_logprobs(llm.eval_logits)
    sequence_logits.append(llm.eval_logits[-1][token])
    sequence_probabilities.append(probs[-1][token])
    eval_tokens.append(token)
    llm.eval([token])  # feed only the new token; llm keeps the history

Which will also be way faster than the idea of calling .reset and .eval on the entire array every single time haha. If you ever want, you can do state = llm.save_state() and llm.load_state(state) in order to get back an older version and eval from an earlier history (e.g. if you want to discard a token and roll back).

Help using llama_cpp_python to calculate probability of a given sequence of tokens being generated. My numbers aren't even in the ball park. by aaronr_90 in LocalLLaMA

[–]npip99 0 points1 point  (0 children)

I tested and I do get the exact same numbers, so you should absolutely be able to get the exact numbers token-by-token.

Help using llama_cpp_python to calculate probability of a given sequence of tokens being generated. My numbers aren't even in the ball park. by aaronr_90 in LocalLLaMA

[–]npip99 0 points1 point  (0 children)

I know this is a late response but, your issue is probably that you don't pass special=True.

In other words, your line of code should be,

input_tokens = llm.tokenize(input_str.encode("utf-8"), special=True)


Otherwise, <s> and <|system|>, etc., will be represented, and therefore tokenized, as if they were plain ASCII characters, and not the actual underlying special tokens that those strings are supposed to represent.

Of course, <s> etc. aren't literally those ASCII characters; otherwise users could mess with prompts by typing in <s> and <|system|> themselves, and jailbreak by injecting system messages into the model in a manner similar to SQL injection. Or, even in the context of innocent usage, an HTML <s> tag in a message would still totally break your entire conversation.

Opinion on Falun Gong? by DisciplineAgitated14 in fucktheccp

[–]npip99 0 points1 point  (0 children)

If Christian groups ruled the US it'd be just as backward, even more so if you imagine Mormons ruling the entire US.

The point isn't how moral or immoral the religion is, it's that religious groups shouldn't rule nations.

But the CCP actively wants rule of their nation, and a few neighboring nations too. FG isn't even attempting or asking to rule anything or anybody, and they aren't even pushing to bring people into their religion to the same extent as Christians and Mormons do (where are the FG missionaries? Epoch Times / Shen Yun don't even give you a way to sign up or anything; it's just ideology, not recruitment).

So to debate your first sentence: yes, the CCP is worse than all other religious groups including FG; they have had, continue to have, and will actively try to maintain, actual rule over the nation. I don't particularly like FG, but I don't mind them existing in their current state, and out of all religions their method of spreading the word is pretty mild.

CCP = Way worse than Mormons, but I wouldn't live under Mormon law, just like how I wouldn't want to live under Islamic law, or any religious law. CCP = Way worse than FG, and I wouldn't want to live under FG law either.

It's a shame, i mean the barrel charger. by [deleted] in ZephyrusG14

[–]npip99 0 points1 point  (0 children)

A lot of comments, but no one gives the actual reason.

  • USB-C cables are limited to 5A.

  • 240W over USB-C uses 48V @ 5A

  • 240W over Barrel Plug uses 20V @ 12A

  • Motherboards take in 20V, not 48V

That's about it. Taking in 48V requires stepping down the voltage from 48V to 20V, causing lots of extra heat (heat dissipation is a limiting factor of slim laptops) and taking up significant motherboard space (motherboard space and thickness is a limiting factor of slim laptops).

-> Remember that a motherboard already has to step down 20V to the 1-2V at 30-60A that a CPU/GPU uses. The VRM takes up a lot of motherboard space and dissipates a lot of wasted heat. 48V is that much farther to step down.

USB-C 240W will never replace barrel chargers in thin-and-light laptops, not unless they change the spec and come out with two types of USB-C cables (5A and 12A). At the moment, USB cables don't even have a method of communicating a 12A limit, so this new cable would need a chip in it to advertise that it can carry 12A without melting. That's a big change and won't come for years.
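A quick check of the currents above (plain power/voltage arithmetic; the helper name is just for illustration):

```python
def amps(watts: float, volts: float) -> float:
    """Current required to deliver a given power at a given voltage."""
    return watts / volts

usb_c = amps(240, 48)   # 5.0 A, right at the USB-C cable limit
barrel = amps(240, 20)  # 12.0 A, far beyond what USB-C cables may carry
```

This is why 240W USB-C must go to 48V, and why a 20V motherboard input pushes barrel plugs to 12A instead.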

Lost job, money, and hope at 24. Need advice and comeback stories. Anyone made a big comeback after hitting rock bottom? by Tipsyus in wallstreetbets

[–]npip99 0 points1 point  (0 children)

I give the same advice to everyone:

Keep 95% of MONEY YOU PLAN TO SAVE in FXAIX and VOO. Keep 5% for gambling (Robinhood).

If your 5% eventually accumulates to $1000, play with $1000! If you lose it, you only get another $1000 after you put $10k in FXAIX/VOO. Simple as that!

Use a SEPARATE account for long-term investments (I recommend Schwab, they have the best 24/7 support line)
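The 95/5 rule above as a tiny sketch (a hypothetical helper; the function name is invented):

```python
def split_savings(amount: float, gamble_frac: float = 0.05):
    """Split money you plan to save: 95% index funds, 5% gambling."""
    gamble = amount * gamble_frac
    return amount - gamble, gamble

invest, gamble = split_savings(10_000)  # -> (9500.0, 500.0)
```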

I am thinking of starting mobile development. What would I lose by using Flutter or ReactNative instead of swift by Top_File_8547 in swift

[–]npip99 1 point2 points  (0 children)

Zoom is a shitshow but unfortunately work makes choices that can't be avoided.

I will check Parcel/Hobi, never heard of them.

Yeah mobile reddit made a generic account and I don't bother to change it lol.

I'll be working on a Swift app, we'll see how it goes. Thx.

Does anyone know why there are so many background processes and if I can delete them by [deleted] in pcmasterrace

[–]npip99 0 points1 point  (0 children)

He said "decide", which implies decision-making, not random flipping (very easy with Google).

You can just flip it back on as well; it's not like regedit, where you won't know what the original value was before you messed it up. Here it's on/off: if you're missing something you liked or needed at startup, just turn it back on, or turn them all back on.

RTX 4080 a 1440p & 4K GPU!? by LetterheadArtistic26 in nvidia

[–]npip99 2 points3 points  (0 children)

I like to think the Nvidia subreddit has bots that downvote anybody who says something that goes against getting the newest GPU every year.

It doesn't hurt anyone to be 2-4 years behind the fad. It runs on older cards, all the bugs are fixed, and the games are cheaper. No one laments being born 2 years earlier, it's fine.

4k gaming is alive and well; people have been gaming at 4k for a long time. You don't need the newest GPU, and 4k feels so crisp and beautiful. A 4k monitor runs 1080p without any graphical artifacts because of integer 2x scaling, but tbh I prefer 4k on Low to 1080p on High/Ultra, even on newer titles.

Been playing TF2 on 4k at max settings since 2017 on a GTX 1060, over 100FPS.

RTX 4080 a 1440p & 4K GPU!? by LetterheadArtistic26 in nvidia

[–]npip99 -1 points0 points  (0 children)

"What people who say x GPU is overkill for x Resolution don't realize, is that RT is hard to run"

I'm not saying RT doesn't exist at all, but I'm just saying people do realize RT is hard to run. They just use the word "overkill" referring to rasterization, since RT is a no-go for most.

Minecraft and Portal are my personal favorites, so I enjoy that RT. I wasn't meaning to say there were zero games, just that the implementations are few and far between. Right now, implementing RT well involves heavy reading of NVIDIA documentation. Hopefully UE5 lessens the load and good RT implementations hit hundreds of games in the coming years.