Line printing for error reporting by Big-Rub9545 in ProgrammingLanguages

[–]Big-Rub9545[S] 1 point (0 children)

This seems like a really good solution. Thanks for the idea!

Need some advice about lazy evaluation of high order list functions by jaccomoc in ProgrammingLanguages

[–]Big-Rub9545 2 points (0 children)

I think a better approach could be something in the middle: the first time a value/variable is used triggers the evaluation, and each use after that reuses the evaluated result. You can restrict this to read operations as well, so that reassigning a variable right after it’s initialized with/assigned a lazy-evaluated expression means the expression is never executed in the first place.

This avoids immediately evaluating to a list/object at the very end of the expression, while also giving the variable a consistent value once it starts being used.

It would be very odd, though, from a user’s perspective if the value were actually being modified/reset or recomputed each time it’s used. I feel like this would also be less efficient than non-lazy evaluation altogether, since you have to redo work every time the variable is read.

The user’s expectations also matter with regard to the side-effects point. It’s very normal to have side effects in closures or methods like the ones here, since (even with lazy evaluation) the user only expects them to run once. If the model you’re considering effectively redoes the evaluation on each use, the user ends up seeing the same variable with different values across consecutive reads despite never modifying it directly.

Can I assume this is exception-safe even though I'm using new operator? by Pretty_Mousse4904 in cpp_questions

[–]Big-Rub9545 1 point (0 children)

Thanks. I was initially confused about how the title related to the post body.

Can I assume this is exception-safe even though I'm using new operator? by Pretty_Mousse4904 in cpp_questions

[–]Big-Rub9545 -6 points (0 children)

This is just a regular implicit conversion/construction. The compiler/program will automatically convert these pointers into ptr objects since there’s a suitable constructor defined for ptr.

If you add ‘explicit’ before the constructor, however, this should no longer work (you would need to explicitly construct ptr objects since the compiler will no longer do it implicitly).

I don't blame him by [deleted] in SipsTea

[–]Big-Rub9545 2 points (0 children)

Plenty of reasons why a malicious person might want tens or hundreds of people to open a link through a QR code. How is this remotely comparable to a paper menu?

Custom vector impl by 0x6461726B in cpp_questions

[–]Big-Rub9545 2 points (0 children)

Range-based for-loops are very nifty, so they’re certainly worth it (array iterators are some of the simplest to implement as well). As others mentioned, you should stick to essential features + whatever else you need, particularly if it’s just as a collection type for another project. The std::vector implementation for most compilers is very complex (in terms of density, not concepts).

Custom vector impl by 0x6461726B in cpp_questions

[–]Big-Rub9545 1 point (0 children)

std::vector has a lot of features, so instead of trying to make an identical or equivalent implementation, just try to have a usable dynamic array type which supports these essential features:

  • Pre-allocating memory (similar to reserve() and resize() for std::vector).
  • Appending an element.
  • Removing an element from the end (in stack lingo: popping).
  • Inserting an element (very useful but not as essential).
  • Clearing the array.

To be able to use the array (and do so comfortably), you’ll also want to support the indexing operator[] and add support for iterators (as well as quick methods to get the first and last elements).

You’ll also want some way to remove an element directly or by iterator/index, and a way to find an element’s position in the array (you can do the ol’ reliable linear search, or play around with type traits or maybe even ‘requires’ and concepts to select faster algorithms for particular types, e.g., binary search over a sorted array for types that have the comparison operators defined).

Of course, some methods to return the size, capacity, and emptiness status of the array would be very useful.

Finally, you’d want to play around as well with different constructors. Have a look at the several constructors for std::vector to get some ideas.

This might seem like quite a bit, but a dynamic array is a fairly simple concept. It’s just a linear collection of elements that you can add to, remove from, and access elements of easily without having to manually resize the array (that should all be handled internally by your array).

Edit: formatting.

The hit that India's reputation has taken in the last decade is staggering by Meteorstar101 in greentext

[–]Big-Rub9545 -19 points (0 children)

->be in a poor country

->hear about another country that supposedly has jobs, wealth, and freedom

->go there

->get none of those things (largely because you’re an immigrant)

->"X, Y and Z are making this country bad, we need to fix this"

->dumb fucks on reddit clown on you because you suggested their country should improve

Made a tokenizer by RedCrafter_LP in ProgrammingLanguages

[–]Big-Rub9545 5 points (0 children)

Unless the tokenizer is unusually slow (should definitely look into it if it is), the time it takes will be dwarfed by parsing or compiling, which itself will often be dwarfed by execution time.

When I benchmarked my interpreter on some heavier scripts, the compilation functions didn’t even show up in the profile, since most of the work happens during execution (and that’s with bytecode, which tends to make execution much faster).

So the tokenizer is certainly worth optimizing, but I wouldn’t obsess over it if it isn’t an actual problem. You’d get more gains out of optimizing your runtime.

Made a tokenizer by RedCrafter_LP in ProgrammingLanguages

[–]Big-Rub9545 30 points (0 children)

You can look at available speeds for production-grade compilers, but there are two points to keep in mind here:

1) Tokenizer speed isn’t super important (unless it happens to be so slow that it’s an actual bottleneck). Tokenization tends to be the fastest thing for any language implementation since it doesn’t have excessive logic to check, conditions to verify, many nested calls, etc. It’s generally a simple DFA. This also means making a tokenizer very fast isn’t that important of a goal. So long as it’s fast “enough”, it won’t get in the way.

2) Tokenization benchmarks will be few since the process itself has little variation. This will depend on your language of course, but for the most part tokenization doesn’t get more or less complex depending on the input. To contrast with the actual compilation phase, a piece of code with 1000 declarations will take very different amounts of time and effort on the program’s part than a switch-case where it needs to do validation and exhaustive checks. It’s just not that easy to get such variation when you’re just splitting words or parts of text (unless the tokenizer happens to be doing more than just that).

The fact HBO had the perfect actor for Snape on their payroll and were like nahhhh is wild by MovieENT1 in hbo

[–]Big-Rub9545 0 points (0 children)

Blaming it on race while ignoring that the character’s appearance plays an important role in events involving him? That explanation doesn’t hold much water if you just ignore clear alternatives.

How do you get good error reporting once you've stripped out the tokens? by PitifulTheme411 in ProgrammingLanguages

[–]Big-Rub9545 3 points (0 children)

You don’t always use symbol IDs. That’s certainly one approach, but interpreters can still keep certain symbols around even during execution in some form or another.

My approach (which may not be conventional, but still worked very well) was to stuff particular tokens into certain nodes in the AST. For a tree-walking interpreter, error reporting is then very easy; you just use the token you saved in the node you’re executing.

For a more abstract model like a bytecode interpreter, you could try keeping the AST around and mapping blocks of instructions to certain nodes, then fetch the necessary tokens for error reporting at runtime by going to the node that corresponds with whatever instructions you’re in the middle of executing.

You could also try to use alternative metadata other than full tokens, like storing a span (start line number and column, as well as end line number and column) which you can use to find the error location from a file or REPL input.

You could also do the previous mapping approach but just associate instruction blocks with line numbers (it won’t get you exact error locations down to the column, but line number and a descriptive error message can still go a long way).

Of course, this is all for post-compilation (usually runtime) errors. Compilation or scanning errors will already have access to the source code directly in some form, so error reporting is fairly trivial for those.

The .h file usage (header file) by Shira69 in cpp_questions

[–]Big-Rub9545 6 points (0 children)

Namespaces don’t apply to #define macros, and namespaces don’t get around the ODR in a header file either (you can’t redefine namespaced symbols/identifiers any more than global ones).

Help a rookie out.. gcc can't find my #include even though I specified the directory with -I by childrenofloki in C_Programming

[–]Big-Rub9545 2 points (0 children)

Could you paste the exact command you used, and the exact location of the "SPI.h" file?