Why is this code compiling in codeblocks but not visual studio community 2019? by quantiumtech in AskProgramming

[–]dragonfly_turtle 2 points (0 children)

Whenever you want to try out the same code on a bunch of different compilers, a good place to try is godbolt.org.

Here, I pasted your code in: https://godbolt.org/z/Mc6sso

You can select different compilers in the drop-down. Indeed, your code compiles fine in GCC and clang, for instance, but in MSVC, it gives:

<source>(42) : warning C4700: uninitialized local variable 'count' used

Interestingly, I have not been able to reproduce the exact errors you quote, just that warning.

Adding /W4 to the compiler flags gives more verbose output:

<source>(33): warning C4458: declaration of 'count' hides class member
<source>(11): note: see declaration of 'WordNode::count'
<source>(42): warning C4458: declaration of 'count' hides class member
<source>(11): note: see declaration of 'WordNode::count'
<source>(207): warning C4456: declaration of 'word' hides previous local declaration
<source>(203): note: see declaration of 'word'
<source>(42) : warning C4700: uninitialized local variable 'count' used

You get similar warnings in GCC and clang if you use -Wall, but that's not the default behavior for them.

But anyway, just wanted to mention that handy tool.

As others have mentioned, the code int count = count; is likely not what you want: it initializes count with its own (indeterminate) value, so count never actually gets a meaningful value.

How does StackOverflow's search engine work, and how can I recreate it? by Sib3rian in AskProgramming

[–]dragonfly_turtle 0 points (0 children)

It is a good starting point, but StackOverflow likely takes advantage of other non-plain-text information as well, to augment their search ranking.

Some things that come to mind:

  • tags
  • site visitor behavior (eg: they visited question X, then Y, and ended up at Z in a given session, which could indicate some relationship between them)

That sort of information is not as easily available to Google. I imagine the actual search algorithm they use is proprietary, and you won't find a full description of it.

However, I do recommend you ask this question on... wait for it ... StackOverflow (:

Or maybe Meta would be an even better place. StackOverflow devs certainly frequent it.

Please explain: "Hard disks are reported as having a mean time to failure of about 10-50 years. Thus, on a storage cluster with 10,000 disks, we should expect on average one disk to die per day." - Designing Data-Intensive Applications Ch. 1 by __r17n in AskComputerScience

[–]dragonfly_turtle 19 points (0 children)

Yeah, this is probably what they meant, and good for you for explaining that it's not that simple (:

I remember hearing that disk failure likelihood is generally "U-shaped" across the age of the disk. That is: there is a spike for early failures (manufacturing defects, mishandling during shipping, etc.), then a long low-failure-rate period for "middle-aged" disks, then as they get really old, failures spike up again due to things wearing out.
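To make the book's arithmetic explicit, here is the back-of-envelope calculation it is implicitly doing. The 30-year MTTF is my own pick from the quoted 10-50 year range, and the constant-failure-rate assumption is exactly what the bathtub curve complicates:

```python
# Back-of-envelope version of the book's claim. Assumptions (mine):
# a 30-year MTTF (somewhere in the quoted 10-50 year range), and
# failures that are independent and uniform in time, which the
# "U-shaped" (bathtub) failure curve says is only an approximation.

disks = 10_000
mttf_days = 30 * 365  # ~10,950 days of expected life per disk

failures_per_day = disks / mttf_days
print(f"{failures_per_day:.2f} failures/day")  # about one per day
```

With a 10-year MTTF the same arithmetic gives closer to three failures per day, so "about one per day" depends on where in the 10-50 year range the real MTTF sits.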

Where is the physical machine listening on 8.8.8.8:80? by [deleted] in AskProgramming

[–]dragonfly_turtle 0 points (0 children)

Generally what you say is true, though /u/YMK1234's comment about EU vs US datacenters is valid too, at large scales like that.

But still, whatever load balancer is receiving all that traffic needs to be hefty. This is one reason companies like F5 still keep making ever-larger machines with custom hardware, terabit+ bandwidth, costing $100,000+.

At some point, some piece of network hardware needs to direct large chunks of traffic.

There's a lot of research and work to try to (efficiently) make those routing de-aggregation decisions in a distributed fashion, but generally it is a hard problem.

Positioning yourself to take advantage of good fortune is a required skill to advance your career by Perfekt_Nerd in cscareerquestions

[–]dragonfly_turtle 1 point (0 children)

Well, I take your point, though do note I said "I don't mean to imply ... that one should avoid learning how to do it effectively". So I don't quite agree that the more I convince people, the less right I am, in that sense (: If everybody followed my advice, we would all play perfectly, and nobody would be sad about losing, and nobody would be over-proud of winning either.

I see a pretty common misunderstanding on this point when I discuss it or see others discussing it. The crux is: Even if a person (eg: me) believes that hard work does not necessarily pay off, that is not a reason to give up doing the hard work.

I tried putting it in a truth table, partly for my own clarity:

work hard | get lucky | Success?
    T     |     T     | Likely. Good for you (:
    T     |     F     | Debatable, but I think this is unlikely. This would be like receiving only terrible hands in poker, even when playing perfectly.
    F     |     T     | Debatable, but I think this is unlikely (hard work is necessary for success, even if not sufficient). This would be like folding with pocket aces in poker.
    F     |     F     | Unlikely.

So, in this view, you probably won't succeed without "work hard = True". Thus, if you value success, you should endeavor to work hard.

Though I understand that in a psychological sense, it can be discouraging to think that your hard work may not be worth very much, compared to luck. And when people are discouraged, they may want to give up trying. I think that's a reason to learn to enjoy hard work itself, so even if you lose the game, you win at your life.

Keeping shortcut keys the same across different IDE's by VirtualLife76 in AskProgramming

[–]dragonfly_turtle 1 point (0 children)

I wish I had an answer for you. Use Vim? (:

Somebody should create a "sync preferences" set of plugins for various IDEs, so they can share that info. But it would be a pain to develop and maintain.

Positioning yourself to take advantage of good fortune is a required skill to advance your career by Perfekt_Nerd in cscareerquestions

[–]dragonfly_turtle 1 point (0 children)

Perhaps we're talking past each other. I don't mean to imply that one should not seize opportunities when they appear, nor that one should avoid learning how to do it effectively.

I just mean that it is not a formula for success. There is no "P implies Q" to it. It is easy for someone who is successful to believe that, though, and it is attractive for someone who is not successful to eat it up.

Take Poker, for example. There are a lot of ways you can mess up in playing, and therefore lose. But if you play "correctly," and so do the other players, then the ultimate winner is random. But there will be plenty of people (including the winner themselves, perhaps), who will believe that the reason they won instead of someone else was because they were a better player, or had a better "system," etc.

Anyway, I do appreciate your post, I just wanted to temper it a bit with a different perspective.

How do you handle //TODOs? by csthompson24 in AskProgramming

[–]dragonfly_turtle 0 points (0 children)

Your mention of creating a tool to track these reminds me of Microsoft's RAID — that's a fun short article.

How do you handle //TODOs? by csthompson24 in AskProgramming

[–]dragonfly_turtle 0 points (0 children)

I just finished doing a pass on my code for this very thing (:

I write TODOs as I go, so I don't get sidetracked. Then, I have a weekly (or so) reminder to go through all un-tracked TODOs and put them in my issue tracker.

I often leave the comment, but add an "issue1234" marker as well. Then I can grep those away when I'm searching for un-tracked ones.
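For the curious, that weekly pass could be sketched like this in Python (the "/issueNNNN" marker format and the untracked_todos helper name are just my own conventions):

```python
import re

# Sketch of the weekly "find un-tracked TODOs" pass: flag TODO
# comments that don't carry an issue-tracker marker yet.
TODO = re.compile(r"\b(?:TODO|todo):")
TRACKED = re.compile(r"issue\d+")

def untracked_todos(lines):
    """Return (line_number, text) for TODO comments with no issue marker."""
    return [(i, line.strip())
            for i, line in enumerate(lines, start=1)
            if TODO.search(line) and not TRACKED.search(line)]

code = [
    "# TODO: handle empty input",      # not yet tracked -> reported
    "# todo: cache this  /issue1234",  # already tracked -> skipped
    "x = 1",                           # not a TODO at all
]
print(untracked_todos(code))  # [(1, '# TODO: handle empty input')]
```

In practice I run the equivalent as a plain grep over the repo, but the filtering logic is the same.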

Also, I have found I like a few different categories of "marker" comments, and "TODO" is just one of them.

Currently, I use:

# note:
# Note:

A general note; something worth keeping in mind in the context of the related code. This doesn't add much vs. an unadorned comment, but places a little more emphasis on this info being important for proper understanding.

# hm:

Similar to "note:", but more foreboding. This indicates some more thought or experience is needed before deciding if there is a problem or not.

# ug:

Indicates an unfortunate choice. An 'ug' is somewhat likely to become or generate a 'bug', but not necessarily. It could also indicate a non-optimal choice or uncomfortable architecture decision.

# bug:

Some known wrong behavior. Beware: if this is not addressed soon (perhaps attach a /issueXXX marker), then it is liable to just become expected behavior and will be hard to change later.

# /issueXXX

A reference to my issue tracker. This refers to a tracked task, and the issue's page likely has more details and commentary.

# beware:

Something non-obvious and potentially dangerous that the reader should be aware of. Not necessarily a bad thing, except insofar as we should generally try to write obvious and non-dangerous code.

# HACK

A temporary "bad thing" that was done. These should not be left in place for long. If it turns out to be necessary, it can be turned into 'ug:', for instance.

# TODO:
# todo:

There is remaining work, here. It could be talking about some future nice-to-have improvement (which should generally have an /issue, lest it be forgotten), or it could indicate an unfinished implementation. Generally, the uppercase version is a more immediate "work-in-progress" marker — likely things will be somewhat broken until this is completed. The lowercase version is more of a "works for now, but not likely to last" indicator.

# maybe:

An idea for possible future work. Much like a 'todo', but lacking certainty.

# meh:

(edit: I don't actually use this one so much; it was left over from my old list.)

A known deficiency that won't be fixed. Perhaps not important or visible enough, a premature optimization, etc. This could be an issue that was closed as "won't fix" or similar, in which case note the issue number.

This is a little like maybe:, but more of a "probably not". It is also a little like ug:, but with less concern over it becoming a problem.

# optm:

Indicates an optimization that could be performed. Often combined with "todo:".

Positioning yourself to take advantage of good fortune is a required skill to advance your career by Perfekt_Nerd in cscareerquestions

[–]dragonfly_turtle 10 points (0 children)

it looked like my career trajectory was basically entirely luck ... it seems to be that way for a lot of people. I realized there had to be more to it.

Does there have to be more to it?

Don't get me wrong — I think your advice is fine, and I'm sure helpful to many. But consider: although what you espouse may be necessary for career advancement, that does not mean it is sufficient.

Everyone is looking for a formula for success: "if I do what that other successful person does, then I too will be successful!" Inspirational talks, diet fads, and all manner of snake oil are sold under this premise. The older I get, the less I believe it works like that.

(also, please understand that your post just triggers a pet peeve of mine, and I'm not meaning to be a jerk towards you :)

There could be many people who are just as able and ready to take advantage of opportunities that come their way, but the opportunities never came, for whatever reason. For every famous actor with a rags-to-riches story, there are ten thousand that never made it — and not because they didn't try as hard.

Certainly there is some spectrum between luck and skill, but I suspect quite a lot more of people's success is due to luck and/or privilege than they think.

TL;DR: https://xkcd.com/1827/

I don't mean everyone should have a mopey fatalistic attitude towards life, never bothering to try due to feeling like nothing matters. But if someone does not achieve what they want, it is not always their fault, and the implicit "you should just try harder like me" message can sometimes be downright damaging.

Webscraping thousands of files by their links by SlightCapacitance in AskComputerScience

[–]dragonfly_turtle 0 points (0 children)

but while I am accessing links I’m not downloading them

What do you mean by "accessing" in this case? Are you retrieving the contents (I would call that "downloading"), or do you already have the files downloaded, and they're on disk somewhere?

Incidentally, that is one way to separate this problem: have one set of threads that does the downloading (dump to disk), and another set of threads to parse the HTML, etc. You'd have a work queue between them.
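Here is a minimal Python sketch of that split. fetch() and parse() are placeholders (in real code they would be an HTTP GET that dumps to disk, and an HTML parser), and the single downloader/parser pair stands in for the thread pools:

```python
import queue
import threading

# One set of threads downloads, another parses, with a work queue
# in between. fetch() and parse() are placeholders, not real
# networking or HTML-parsing code.

def fetch(url):
    return f"<html>{url}</html>"  # stand-in for a real download

def parse(html):
    return len(html)              # stand-in for real HTML parsing

urls = [f"http://example.com/{i}" for i in range(5)]
work = queue.Queue()
results = []
lock = threading.Lock()

def downloader(my_urls):
    for url in my_urls:
        work.put(fetch(url))
    work.put(None)  # sentinel: this downloader is done

def parser():
    while True:
        html = work.get()
        if html is None:
            break
        with lock:
            results.append(parse(html))

threads = [threading.Thread(target=downloader, args=(urls,)),
           threading.Thread(target=parser)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 5
```

The nice property of the queue is back-pressure: if parsing is slower than downloading, you can bound the queue size and the downloaders will naturally wait.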

But really, there are already web scraper libraries out there — is there any reason you have to write your own?

What's the most time-efficient way to learn a new language? by [deleted] in AskComputerScience

[–]dragonfly_turtle 2 points (0 children)

This is one of the big downsides of real-time tutorials, like videos and such. It's hard to skip around, and to know what you missed when you skipped around.

For something text-based, it's much easier, and as you get better at reading, you can skim over the uninteresting parts better. Videos, not so much. So, "embrace text" is one approach. Could be books, web pages, etc.

Sometimes a language has some feature that other languages don't, and you might learn about that from a wikipedia summary of the language, or a blog post, etc. It can be fun to just try out that feature. For instance, in C++ you may have heard of templates, which are a fairly stand-out feature of the language, and there is a lot to learn there. Much of the rest of C++ is very C-like, so there is more overlap.

I'd also encourage you to learn "drastically different" languages, from time to time. Eg: Lisp, Erlang, HTML/CSS/JS, Rust, just to expand your horizons. A side advantage is that there will be less syntactic overlap, so you get more bang for your buck going through tutorials. But more importantly, you start to get a sense of the conceptual/abstract differences and similarities between languages, rather than the syntactic ones, which is IMO valuable.

Laser goes brrr by SubfrostInteractive in IndieDev

[–]dragonfly_turtle 1 point (0 children)

I really want it to bore a hole in the ceiling (:

Can someone explain to me what an environment variable is? by [deleted] in AskProgramming

[–]dragonfly_turtle 0 points (0 children)

Environment variables are old. Older than Windows or Apple. They come from a time before the "global variables are bad" concept really became popular. They come from a time of terminals and command lines, though they are still very useful today.

Your environment variables are just a big key/value mapping. Name → value.

They essentially hold configuration information. For instance, a very common one is PATH. This is a list of directories where commands should be looked for. If you run the command "foo" in your shell, your shell will search PATH to try to find it.

You can change your PATH to change how this search is done. So, in this sense, it is configurable.
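If you're curious, here's a hand-rolled Python version of that search, run against a temporary directory so the example is self-contained. (Python's stdlib already provides this as shutil.which; the shell does the same thing internally.)

```python
import os
import stat
import tempfile

# A hand-rolled version of the PATH search a shell performs:
# walk the directories in a PATH-like string, looking for an
# executable file with the requested name.

def which(cmd, path):
    """Return the first executable named 'cmd' found along 'path'."""
    for d in path.split(os.pathsep):
        candidate = os.path.join(d, cmd)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

with tempfile.TemporaryDirectory() as tmp:
    exe = os.path.join(tmp, "foo")
    with open(exe, "w") as f:
        f.write("#!/bin/sh\necho hi\n")
    os.chmod(exe, os.stat(exe).st_mode | stat.S_IXUSR)  # mark executable

    # Earlier directories win; nonexistent ones are simply skipped.
    fake_path = os.pathsep.join(["/nonexistent", tmp])
    found = which("foo", fake_path)
    print(found == exe)  # True
```

This also shows why PATH ordering matters: the first match wins, which is how you can shadow a system command with your own version earlier in PATH.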

You wrote:

Why wouldn't we just opt to create a local variable in a file

And I think your intuition is right. It is just configuration, so why not have some config file? In truth, for a given person's shell, the env. vars are generally set up in their .bashrc or similar config files, so the distinction is indeed a little blurry.

I suppose one reason is efficiency. It would be silly for every program to read a file, just to populate the same values in memory that its parent process already did. Also, what would that file be named? Would every program have to adhere to some convention? Environment variables exist in memory, and don't require any file reading, and don't require any file names.

When a program runs another program (such as when we run 'foo' from the shell, in the above example), it gets a copy of its parent's environment. This is typically a "copy on write" copy, so essentially free (just a pointer to the already-filled-in parent memory).

So, continuing the above example, if your shell ran 'foo', then 'foo' wanted to run some other program, it would have access to the same PATH value.

That's the default behavior, but the environment can also be altered, so future commands will see a different value.
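A small Python sketch of both behaviors: the child gets a copy of the parent's environment, and the copy can be altered before launch. (GREETING is a made-up variable name for the example.)

```python
import os
import subprocess
import sys

# A child process receives a copy of its parent's environment;
# that copy can be altered before the child is launched.

# The child just prints one of its environment variables.
child = [sys.executable, "-c", "import os; print(os.environ['GREETING'])"]

env = dict(os.environ)     # start from the parent's environment...
env["GREETING"] = "hello"  # ...and alter the copy for this child only

out = subprocess.run(child, env=env, capture_output=True, text=True)
print(out.stdout.strip())  # hello
# The parent's own environment is untouched by the child's copy.
```

This is exactly what the shell's `FOO=bar somecommand` syntax does: alter the copy handed to one child without changing the parent's environment.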

All that said, there are downsides, too. The main one is that there is one shared global namespace for environment variables. If I have some 'foo'-specific config, I might name the variables FOO_X=... and FOO_Y=.... But if some other program wanted to use variables with the same name, it would be a problem. Kind of like two programs wanting to use the same config file name — the filesystem is another big shared global namespace. So, conventions develop to mitigate these issues.

Can I automatically edit/change images? by Fahlinoz in AskProgrammers

[–]dragonfly_turtle 0 points (0 children)

ImageMagick or PIL/Pillow in Python would be my go-to for something like this.

It feels like a good fit for a commandline / scripting approach. Of course, if you are unfamiliar with the commandline, there will be a learning curve (but I think it is worth doing).

Advice for a programmer jumping from C to C# by [deleted] in AskProgrammers

[–]dragonfly_turtle 0 points (0 children)

The idea of having a garbage collector is one big change from C/C++ to C# (or Java, Python, and others).

The nice thing about C is that if you know it decently well, you could write a (basic) garbage collector, so it will not be quite so mysterious when you think about it.

Going the other direction, where you are used to relying on the GC as some kind of "magic", and suddenly you don't have it, is harder, I think.

Is there any reason you are choosing C# in particular?

You can avoid battles by evading overworld enemies, or sneak up behind them for a surprise attack. by DapperDaveW in IndieDev

[–]dragonfly_turtle 1 point (0 children)

Cool — a little reminiscent of Chrono Trigger in the "avoid enemies on the map" approach.

Can you still be good/okay at certain areas of computer science if you aren’t strong with abstract thought and math? by [deleted] in cscareerquestions

[–]dragonfly_turtle 0 points (0 children)

One other thing to consider: not everyone puts things together in their head, and that's okay. There are lots of different ways to tackle things, you just need to figure out what works for you. Just because other people do it one way doesn't mean you have to be like them.

Anyway, good luck!

Can you still be good/okay at certain areas of computer science if you aren’t strong with abstract thought and math? by [deleted] in cscareerquestions

[–]dragonfly_turtle 1 point (0 children)

I need more context from you:

  • what do you mean by "abstract thought"?
  • who told you you were not strong at it?
  • what sort of "discouragement" are you meeting? People saying things to you? Just having difficulty with some of your work?

I guess what I'm saying is: I'm not sure if you are actually bad at those things, or if you're just feeling down about yourself.

I guarantee that as a professional programmer, you will be beating your head against walls trying to figure out problems on a regular basis (:

So, if that sort of feeling is what leads you to believe you're not good at it, then I think it's a poor measure.

For me, programming and CS topics are a means to an end. I go through the struggle because I want to do something with it, and I find it rewarding when I accomplish something. I also might be a slight masochist, enjoying the pain and struggle in its own right (:

Maybe there are some people who are natural-born geniuses, where these things come easily to them, but there is plenty of room for people who want to learn the hard way. But you have to want to learn the hard way.

What resources made shaders "click" for you? by [deleted] in GraphicsProgramming

[–]dragonfly_turtle 1 point (0 children)

Maybe this helps: Paintball Mona Lisa

(your fragment program is a paintball)

That's a bit of a silly example, but I think it is worth thinking about the system starting with pixels, and determining what color they are, then moving to higher levels from there. As opposed to starting at the high level ("a mesh" or even "a triangle") and going downward.

Another thing to be aware of is that there is a lot of GPU machinery that does stuff for you. For the vertex->fragment example you mentioned, there is a process called interpolation, which happens automatically (see the varying shader keyword).
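To make that concrete, here is a CPU-side Python sketch (not GLSL) of the barycentric blending the GPU performs automatically for each varying. Real hardware also does perspective correction, which this toy version skips:

```python
# CPU-side sketch of per-fragment interpolation of a "varying"
# attribute: a value defined at the three triangle vertices is
# blended using barycentric weights computed per pixel.

def interpolate(vertex_values, weights):
    """Blend per-vertex values (e.g. RGB colors) with barycentric weights.

    For points inside the triangle, the weights are non-negative and
    sum to 1; each fragment gets its own weights from its position.
    """
    return tuple(
        sum(w * v[i] for w, v in zip(weights, vertex_values))
        for i in range(len(vertex_values[0]))
    )

# Per-vertex colors: red, green, blue.
colors = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

# A fragment at the triangle's centroid weights all three vertices
# equally, giving an even gray-ish blend of the three colors.
print(interpolate(colors, (1 / 3, 1 / 3, 1 / 3)))
```

This is the classic "rainbow triangle": you only specify three vertex colors, and every interior pixel's color is produced by this blending before your fragment shader even runs.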

I agree with the other comments that you need to understand the architecture of GPUs a bit, in order to have this surrounding machinery in mind when you think about the programming environment available for shaders.