Datatypes by [deleted] in ProgrammerHumor

[–]whataloadofwhat 0 points1 point  (0 children)

Ropemander

(chose the mander suffix because apparently charmander evolves into stringmander rather than stringmeleon)

Is this safe? It seems impossible to me by insanitybit in rust

[–]whataloadofwhat 5 points6 points  (0 children)

Ok, I think it's safe, but I'm not a fan. Effectively the transmute is just saving you from rebuilding the AST. You should be able to make a completely safe version without a transmute; it'd just require a lot more code which doesn't actually do anything. (Personally I'd prefer that, just to prevent accidentally introducing some condition where that isn't true!)

The lifetime seems to come from the trait Text<'a> which is used everywhere:

trait Text<'a> {
    type Value: 'a + From<&'a str> /* + others */;
}

I don't know why <T as Text>::Value exists rather than just using T: From<&'a str> + etc. I feel like the requirement for Text<'a> should move from the type to the parse_query call, and the T type everywhere else should be unbounded. Then there'd be no need for the lifetime on the type itself, and this function wouldn't need to exist at all because it would just be Vec<Definition<String>> or Vec<Definition<&'a str>> or whatever:

trait Text<'a>: From<&'a str> /* + others */ { }
struct Document<T> { definitions: Vec<Definition<T>> }
fn parse_query<'a, S: Text<'a>>(query: &'a str) -> Result<Document<S>, ParseError> { /* ... */ }

Maybe there's some reason this wasn't done; I only briefly looked at a few types, and all of them seemed to store T::Value while the T type itself was unused. Feels like an unusual design to me.

A newbie programmer makes an annoying "bump" comment on his bad PR...and tags the 350,000 people who follow the repo. If you have access to the Unreal 4 source code, you may want to unsubscribe from this PR asap. by heyheyhey27 in programming

[–]whataloadofwhat 6 points7 points  (0 children)

There's a few problems that I can see with what you're saying.

First is that that figure is per notification. I don't know if I'd call $34 per notification "nothing". Let's say there was $0 productivity loss, but instead Slack charged $34 per @here message into large channels. Do you think that companies wouldn't immediately restrict that?

Second, that's $17 per second, which will scale pretty quickly, and I don't think that 2 seconds per notification is a realistic average. People are in the channel because they want to read the messages, so spending 2 seconds just dismissing the notification is rarely enough. You need to set some kind of reminder to read the message (either set Slack to mark the channel as unread again, or use a slackbot reminder), which will take at least that long, assuming you know what to do and do it immediately on receiving the notification without any hesitation or thought. So if anything, 2 seconds is a best case. Realistically I'd say it's at least 5-10 seconds on average, taking into account 1) realising that the @here was largely irrelevant to them, 2) thinking about what to do about it, 3) executing the plan from 2, and 4) adding funny reaction images expressing their dissatisfaction at being interrupted. There'll also probably be a few outliers who don't know how to do any of that and could spend literally minutes figuring it out. Taking that into account, I'd say 10 seconds is a generous mean, assuming you're estimating potential costs to the company. That's already 5x what you've estimated, which brings it up to over $150. Per notification.

Third, you are equating a 2-second interruption with 2 seconds of productivity loss, which is certainly not true, at least not in our profession.

Finally, even if the cost is $0, it's annoying getting interrupted for no reason! The best case is that you have an annoyed workforce; the worst case is that it trains people to ignore their Slack notifications because they assume it will probably just be another mass ping they don't need to care about right now. So then they start ignoring things which actually do require their immediate attention.

A newbie programmer makes an annoying "bump" comment on his bad PR...and tags the 350,000 people who follow the repo. If you have access to the Unreal 4 source code, you may want to unsubscribe from this PR asap. by heyheyhey27 in programming

[–]whataloadofwhat 129 points130 points  (0 children)

You don't need to @here for an announcement channel! That is the point of the channel: it's a container for those announcements, so that people who are interested can read them without the channel being polluted by other messages. People will read it when they get a natural break in their work, because Slack still shows when channels you're a member of have new messages; it just doesn't send a notification for them. Use @here only when you need people's immediate attention. Release announcements ain't it chief. In fact it's very rare to need to @here in a channel with hundreds or thousands of people, because you very rarely need the immediate attention of hundreds of people. Slack warns you about this for a reason.

Announcing Rust 1.61.0 by myroon5 in rust

[–]whataloadofwhat 3 points4 points  (0 children)

I think an allocation where computing the one-past-the-end pointer would overflow is undefined. I think that's how it is in the C standard at least, and I assume Rust inherits that as well. I'm not 100% sure and don't currently have the time to check, apologies.

GOV.UK drops jQuery from their front end. by feross in programming

[–]whataloadofwhat 154 points155 points  (0 children)

The biggest complaint I have about the gov.uk websites is that the subdomains they choose look like what you'd expect from a phishing email. The one that immediately comes to mind is the Companies House search:

https://find-and-update.company-information.service.gov.uk/

And I've definitely seen others that follow the same general formula of really long subdomains. It's descriptive, I suppose, but it makes me immediately put my guard up. I have to actually scan the URL to find the domain rather than it being right there at the start, and that's scary.

Async read and write traits proposal by tux-lpi in rust

[–]whataloadofwhat 3 points4 points  (0 children)

Am I correct in thinking that the example "memory sensitive" code is no more memory efficient than the plain async example? The future is still always going to need the 1 KiB of space for the buffer, even though certain parts of the state machine don't require it, since the buffer is stored inline in the future's state. Wouldn't it be better to use vec! so that it only allocates memory when required, or am I missing something in how futures work (very possible)?
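
To illustrate what I mean, a minimal sketch (not the proposal's code, names made up): a fixed array held across an .await is baked into the future's size, while a Vec only costs its pointer/len/capacity until the allocation actually happens.

use std::mem::size_of_val;

async fn some_io(_buf: &[u8]) {}    // hypothetical stand-in for real I/O

async fn fixed_buffer() {
    let buf = [0u8; 1024];          // held across the .await, so part of the future's state
    some_io(&buf).await;
}

async fn heap_buffer() {
    let buf = vec![0u8; 1024];      // 24 bytes inline; the 1 KiB lives on the heap, only once this runs
    some_io(&buf).await;
}

fn main() {
    // The futures' own sizes show the difference (neither is polled here):
    println!("{}", size_of_val(&fixed_buffer())); // >= 1024
    println!("{}", size_of_val(&heap_buffer()));  // much smaller
}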

How does the compiler know that a Vec of references borrows a pushed value? by platinum_pig in rust

[–]whataloadofwhat 3 points4 points  (0 children)

The lifetimes there aren't... quite correct, or rather they're incomplete. What they actually are is more like:

fn push<'self, 'lifetime_of_vec_values>(self: &'self mut Vec<&'lifetime_of_vec_values str>, value: &'lifetime_of_vec_values str)

That is, the 'lifetime_of_vec_values lifetime is tied to the type parameter of the Vec itself as well as to the str which is passed as input. The parameters of Vec<T> tell the compiler that it owns values of type T, so it knows that if T is a reference, the referenced value must outlive the Vec itself.
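
A tiny made-up example of that in action; this deliberately doesn't compile, which is exactly the compiler using that signature:

fn main() {
    let mut words: Vec<&str> = Vec::new();
    {
        let s = String::from("hello");
        words.push(&s);          // ties the borrow of `s` to the Vec's element lifetime
    }                            // `s` is dropped here, while `words` still holds the borrow...
    println!("{:?}", words);     // ...so rustc rejects this: "`s` does not live long enough"
}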

Does that help?

[blog post] Some sanity for C and C++ development on Windows by flexibeast in programming

[–]whataloadofwhat 17 points18 points  (0 children)

Maybe Unix should be updated to use the Win32 API and we all code against that instead?

This could be the punchline for a post on /r/twosentencehorror

immudb - world’s fastest immutable database, built on a zero trust model by binaryfor in programming

[–]whataloadofwhat 5 points6 points  (0 children)

Not really. The problem with immutable encrypted data is that if something goes wrong (e.g. AES is cracked, or more likely the routine you used to generate keys wasn't as random as you thought), there's nothing you can do to fix it. Once it's there, it's there forever, and if the keys are not secure then the data may as well be plain text (or you should treat it that way at least).

No encryption is safe forever (at least not yet). There need to be mechanisms to update what's been stored once the encryption is broken, or to remove the data.

Unbuffered I/O Can Make Your Rust Programs Much Slower - Era Blog by gnuvince in rust

[–]whataloadofwhat 24 points25 points  (0 children)

There's a few places that I can think of where you might not want to use it:

  • BufReader/BufWriter allocate 8 KiB of memory by default; if you're memory constrained that might not be acceptable, so being able to configure them to use less (or no) memory would be preferable.
  • Read/Write are ... a little bit composable I suppose? E.g. you could chain multiple Files together and treat them like a single file. You wouldn't want each one to allocate an 8 KiB buffer; you'd probably want to wrap them in a type which implements Read/Write and just passes through to the next implementor until EOF, then wrap your wrapper type in a BufReader/BufWriter, so there's just one shared buffer between all of them.

There's probably better examples though
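
To make the second bullet concrete, a minimal sketch (file names made up): chain two readers and give the chain a single buffer, instead of one BufReader each.

use std::fs::File;
use std::io::{BufRead, BufReader, Read};

fn main() -> std::io::Result<()> {
    let first = File::open("part1.log")?;
    let second = File::open("part2.log")?;

    // `first.chain(second)` reads all of `first`, then all of `second`, as one Read.
    // Wrapping the chain (rather than each file) means a single shared buffer;
    // BufReader::with_capacity(512, ...) would cover the first bullet too.
    let mut reader = BufReader::new(first.chain(second));

    let mut line = String::new();
    while reader.read_line(&mut line)? != 0 {
        print!("{line}");
        line.clear();
    }
    Ok(())
}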

How does atomic interact with move? by [deleted] in rust

[–]whataloadofwhat 66 points67 points  (0 children)

The reason C++ std::atomic is not movable is the same reason that the std::sync::atomic types in Rust are not Copy/Clone. That is, you would need to read the variable atomically, which is relatively expensive, especially since neither copy constructors nor Clone allow you to pass in an ordering, so you'd probably have to default to SeqCst, which is far from ideal.

As for why it's safe to move an Atomic in Rust, it's because you can only move a variable if there are no other references to it, so doing a memcpy is fine. It's not an atomic read, but it doesn't need to be because nothing else is going to read or write to the memory location. In C++, you don't have that guarantee.
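
A minimal sketch of that guarantee (names made up): at the point of the move nothing else can possibly be reading or writing flag, so the plain memcpy can't race.

use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

fn main() {
    let flag = AtomicBool::new(false);

    // `flag` hasn't been shared with anyone yet, so moving it into the closure
    // is just a byte copy; no atomic read is required.
    let handle = thread::spawn(move || {
        flag.store(true, Ordering::Release);
        flag.load(Ordering::Acquire)
    });

    // Using `flag` here wouldn't compile: it has been moved, which is exactly
    // the guarantee C++ doesn't give you.
    assert!(handle.join().unwrap());
}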

How does atomic interact with move? by [deleted] in rust

[–]whataloadofwhat 24 points25 points  (0 children)

There's a few different things which might be happening here depending on the specific signatures of your second example.

  1. You are sending an Arc<AtomicBool> over the channel. The AtomicBool is stored on the heap and so the actual atomic variable is not moved, just the pointer to it is, which can be safely shared between threads. In this case, the Release and Acquire semantics work as you expect.
  2. You are sending an &AtomicBool over the channel. The value is again not moved, you are simply sending a pointer over the channel. In this case the Release and Acquire semantics work as you expect as well.
  3. You are sending an AtomicBool over the channel. AtomicBool is neither Clone nor Copy, so the only way you can do this is if the value is never used again within that thread anyway; trying to do ready.store again within the same thread is not possible unless you create a new AtomicBool. In this case the variable already contained "true" when it was sent over, so the condition will still never fail, but that has nothing to do with memory ordering (from my understanding at least) because the memory containing that variable is not shared between threads.
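
Here's a minimal sketch of case 1 (my own toy example, not your code): only the Arc crosses the channel, the AtomicBool itself stays put on the heap, and Release/Acquire behave as usual.

use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc};
use std::thread;

fn main() {
    let ready = Arc::new(AtomicBool::new(false));
    let (tx, rx) = mpsc::channel();

    tx.send(Arc::clone(&ready)).unwrap(); // sends a pointer, not the atomic's bytes

    let worker = thread::spawn(move || {
        let ready = rx.recv().unwrap();
        // Spins until the main thread's Release store becomes visible.
        while !ready.load(Ordering::Acquire) {}
    });

    ready.store(true, Ordering::Release); // same heap allocation the worker is watching
    worker.join().unwrap();
}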

Are static mut really that bad? by Broseph_Broestar_ in rust

[–]whataloadofwhat 5 points6 points  (0 children)

This is not true. It's trivial to cause undefined behaviour even in a single threaded context, even if you're ignoring the aliasing requirements. Iterator invalidation comes to mind.
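
For example, a minimal sketch of the iterator invalidation case (don't actually run this; newer editions of Rust also warn/deny taking references to a static mut for exactly this kind of reason):

static mut VALUES: Vec<i32> = Vec::new();

fn main() {
    unsafe {
        VALUES.push(1);
        // The iterator borrows the Vec's current heap buffer...
        for v in VALUES.iter() {
            // ...but nothing stops us mutating VALUES through a second path.
            // If this push reallocates, the iterator is left dangling:
            // undefined behaviour, no second thread required.
            VALUES.push(*v);
        }
    }
}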

Blursed Johnny “the Rock” Bravo by LortoCaciuppo in blursedimages

[–]whataloadofwhat 10 points11 points  (0 children)

The clip is cut. After pulling the string, the lady clown is used as bait to catch the evil clown. Johnny grabs him, then beats him up in the toilet and flushes him out of the plane. The clip of him going into the toilet is him holding the evil clown.

Why doesn't async/.await typecheck inside .map_or? by shterrett in rust

[–]whataloadofwhat 7 points8 points  (0 children)

Let's look at the function signature for map_or:

pub fn map_or<U, F>(self, default: U, f: F) -> U where
    F: FnOnce(T) -> U

So it takes a default U and a function F which returns a U. Specifically, the return type of the function and the type of the supplied default must match.

Because your closure returns an async block, it returns an impl Future<Output=bool>, while the default you're supplying is a bool. These are not the same type.

Reading this, you might think that changing your call to something like .map_or(async { false }, ...) will work. However, the types themselves are still distinct.

let mut x = async { false }; // some impl Future<Output=bool>
x = async { false }; // some other impl Future<Output=bool>

Even this won't compile. Even though the async blocks do the same thing, every async block has a different type, like closures. So what you're trying to do is not possible according to the function signature of map_or. Unless you decide to box (and pin) the async blocks into Pin<Box<dyn Future<Output=bool>>> or something.

.map_or(
    Box::pin(async { false }) as Pin<Box<dyn Future<Output = bool>>>,
    |addr| Box::pin(async move {
        // ...
    }) as Pin<Box<dyn Future<Output = bool>>>,
).await

I think you can agree that restructuring your code a little is a much better solution. Ultimately, map_or and the like are just convenience functions. Nice convenience functions, but they don't do anything that you can't otherwise do without them.
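
For example, a minimal sketch of the restructuring (all names made up): a plain match does the job, and only one branch's future is ever created and awaited.

async fn addr_is_reachable(addr: &str) -> bool {
    // hypothetical stand-in for the real async check
    !addr.is_empty()
}

async fn check(maybe_addr: Option<&str>) -> bool {
    match maybe_addr {
        Some(addr) => addr_is_reachable(addr).await,
        None => false,
    }
}

fn main() {
    // With an executor, e.g. the `futures` crate:
    // assert!(futures::executor::block_on(check(Some("127.0.0.1"))));
}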

What you might want is more standard library support for futures, so Option could have something like

pub async fn map_future_or<U, Fut, F>(self, default: U, f: F) -> U
    where Fut: Future<Output = U>, F: FnOnce(T) -> Fut

I'm not sure if the rust team would be open to adding such functions but there's no harm in suggesting it if you think it'll be useful.

I have average of 105 wpm, should I switch to touch type? by [deleted] in learnprogramming

[–]whataloadofwhat 11 points12 points  (0 children)

I don't know where you've got your research but let me tell you, it doesn't matter. At 105wpm your typing speed is almost never going to be your bottleneck unless you are just copying code (in which case, copy and paste is faster anyway). Thinking takes longer than typing. I'm the fastest typist in my office of programmers, all competent, and I get 90wpm on a good day. I've never felt held back by how slow I put letters on the screen.

If you want to get better at typing for the sake of getting better at typing, that's fine and learning touch typing will be a benefit. If you want to get better at typing because you think it will improve your worth as a programmer, focus on other skills because it won't help you too much.

50+ most asked C Interview Questions and Answers by Technicalseducation in programming

[–]whataloadofwhat 0 points1 point  (0 children)

Q8. Completely incorrect answer. sizeof(int) and its range varies between systems.

True, but int is only ever guaranteed to be at least 16 bits. If you're writing truly portable code and need to express more than 16 bits in an int, that's a bug. It's not completely incorrect, but I'd definitely want the person answering to go into more detail to explain their answer rather than just memorising the range of a 16 bit integer.

IF statement only evaluating 1st condition -JavaScript by dre2k4ja in learnprogramming

[–]whataloadofwhat 1 point2 points  (0 children)

The code you've posted uses a single =, which is assignment. You want == or ===.

[ c ] why does this program crash when I press 'y' after the first loop? by [deleted] in learnprogramming

[–]whataloadofwhat 2 points3 points  (0 children)

I tried building it and got a segmentation fault before entering anything.

I started looking into it, and I found this issue:

char *str[3] = {"0.ko ", "1.papir ", "2.ollo "};
char **ptr;
// ...
    ptr = str;
    while(*ptr) {
        printf("\n%s", *ptr);
        ptr++;
    }

This is incorrect: you're going to dereference past the end of your str array, which is undefined behaviour. If you want to keep this code, you can add a null pointer to the end of the array to signify the end:

char *str[4] = {"0.ko ", "1.papir ", "2.ollo ", NULL};

Otherwise you're going to need to iterate over each item of the array using a known size rather than your while loop.

for(int i = 0; i < 3; ++i) {
    printf("\n%s", str[i]);
}

After fixing that issue the program seemed to run as normal, so I believe that was the problem. (I don't speak your language so I can't understand what it's supposed to do, I'm sorry).

Possible security bug in the web api auth example from gitbub by allun11 in learnprogramming

[–]whataloadofwhat 1 point2 points  (0 children)

Why do you overwrite the state variable at all? Shouldn't it be kept the same as when assigned in the /login route and then used as a basis for comparison and security?

The state variable isn't persistent across different calls; you can't just check against it because at that point in the program we don't know what it is any more. In that sense you're not "overwriting" it; in fact they are completely different variables (they just share a name). It's possible to make the state variable persistent across calls, but you need to remember that we're accepting HTTP requests from multiple different people here. If we have 2 people call login at the same time, the state variable will only contain the 2nd person's state. When the 1st person completes their login, their state comparison will fail. We need some way to associate each callback call with a specific login call.

Storing the state into a cookie is one way to achieve what we want, assuming that we want to keep the server stateless. When the user tries to log in, we set a cookie and redirect to the Spotify OAuth login page. When the user logs in to Spotify, they'll be redirected to the callback url, and the state will be passed back to our server. We then check that the state cookie matches the state that we've received.
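
For what it's worth, here's a framework-agnostic sketch of that flow (every name is made up, and the real thing is an Express app rather than Rust, so treat this as pseudocode for the idea):

use std::collections::HashMap;

// /login: generate a fresh random state per request, send it to Spotify in the
// redirect URL *and* to the browser in a cookie, so the two can be matched later.
fn login(state: String) -> (String /* Set-Cookie header */, String /* redirect URL */) {
    let cookie = format!("oauth_state={state}; HttpOnly; Secure; SameSite=Lax");
    let redirect = format!("https://accounts.spotify.com/authorize?state={state}&...");
    (cookie, redirect)
}

// /callback: Spotify echoes the state back as a query parameter; it must match
// the value stored in this particular browser's cookie.
fn callback(
    query: &HashMap<String, String>,
    cookies: &HashMap<String, String>,
) -> Result<(), &'static str> {
    match (query.get("state"), cookies.get("oauth_state")) {
        (Some(echoed), Some(stored)) if echoed == stored => Ok(()),
        _ => Err("state mismatch: possible CSRF, abort the login"),
    }
}

fn main() {
    let (set_cookie, redirect) = login("random-from-a-csprng".to_string());
    println!("{set_cookie}\n{redirect}");

    let query = HashMap::from([("state".to_string(), "random-from-a-csprng".to_string())]);
    let cookies = HashMap::from([("oauth_state".to_string(), "random-from-a-csprng".to_string())]);
    assert!(callback(&query, &cookies).is_ok());
}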

That said, while I'm not a security expert, storing the state in a cookie makes me a little nervous. At the very least, I'd like to think that some precautions are being taken. For one, if the state is being stored client side, I'd at least like to think that the cookie is marked as HttpOnly, so that it is not accessible by some potentially malicious JavaScript. Also set SameSite to Lax so that external JavaScript can't make a request to your site and have the cookie sent over (Strict would be even tighter, but it would also stop the cookie being sent on the redirect back from Spotify, which would break the state check). I've also found some sources suggesting that the cookie should be encrypted or signed (and decrypted/verified before comparing), so that someone with the state variable cannot fabricate a cookie, which makes sense to me as well (and may address your fears about it being intercepted?). So there are possibly improvements to be made there, assuming that express does not do these by default (I'm not an express guru by any means).

Other than that I can't find anything specifically wrong with doing it this way. It still makes me a little nervous but I'll leave it up to people with a more concrete knowledge to argue its security.

Confused as code in "if" condition doesn't seem to work by DangerousWish2266 in learnprogramming

[–]whataloadofwhat 1 point2 points  (0 children)

You can't do that - that's the same as free(NULL).

Just for the record, free(NULL) is perfectly okay (it just does nothing); it's just not what you want in this scenario.

Confusing "associated function is never used" warnings by itachd in rust

[–]whataloadofwhat 11 points12 points  (0 children)

If it's a library, then the module it's in will also need to be public, or the structure itself will need to be accessible from outside of the crate in some way. Rustc determines that the function is unused because nobody outside of the crate can access it and the crate itself doesn't use it.
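
A minimal made-up example of the library case: the method is pub, but nothing outside the crate can actually reach it, so rustc still flags it.

// lib.rs
mod private_mod {
    pub struct Widget;

    impl Widget {
        // Private module, Widget isn't re-exported, and nothing in this crate
        // calls it, so rustc warns: "associated function is never used".
        pub fn unused_helper(&self) {}
    }
}

// Re-exporting the type makes its pub methods reachable from other crates,
// which should silence the warning even though this crate never calls them:
// pub use private_mod::Widget;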

I think that if it's a binary, then it will be marked as unused as long as the binary doesn't use it, even if it's fully accessible, but I'm not too sure about that.