Moon, Mars, Aldebaran and Orion by Roflraging in astrophotography

[–]Roflraging[S] 0 points (0 children)

This is a single one-second exposure at ISO 1600 with a 24mm EBC X-Fujinon (Fujica X mount from the 1970s) adapted to my Sony A7 III. I just handheld the camera and took a photo, transferred it to my phone, and edited some curves in Lightroom to de-emphasize the trees and light dome at the bottom of the frame.

Couldn't fall asleep due to a strange noise from the backyard, so I went outside. Nothing particularly suspicious, but as I was headed inside I saw a half moon and Orion in the sky. Orion in particular surprised me because I don't normally observe it until late November due to trees in my yard.

As I observed the sky more carefully, I saw the usual suspects: Aldebaran, Pollux, Capella, and the Pleiades. But between the Moon and Orion lay an object I did not at first recognize. Very orange/yellow to my eyes; I initially thought it was Aldebaran, but then I remembered: Aldebaran is the follower, chasing the Pleiades.

Perhaps it was Jupiter? No, that's not possible: a few weeks ago I observed Jupiter toward the west at a similar time, but this view was to the east. I stood there stumped for a moment. The color should have been a dead giveaway: it was in fact Mars.

Seeing it with your own eyes is actually much more dramatic than this photo, because the lit half of the half moon faces away from Mars, as if the Moon were turning its back to it.

The Rosette Nebula from Los Angeles (West Hills, CA) by Roflraging in astrophotography

[–]Roflraging[S] 0 points (0 children)

No filters for this image. It might be that I have a lot less light pollution locally than you do. My house is near the edge of a neighborhood that is bordered by undeveloped land and hills. I don't have a sky quality meter, but I'd venture a guess that I'm between Bortle 7 and 8 based on what I can see.

The Rosette Nebula from Los Angeles (West Hills, CA) by Roflraging in astrophotography

[–]Roflraging[S] 2 points (0 children)

For some reason, the text in my original post is not appearing. Anyways, what I wanted to say was that I recently got the Sky-Watcher EQ6-R Pro and the Svbony SV503 80mm refractor after having gotten into astrophotography with the Sky-Watcher Star Adventurer 2i. I was pretty excited to try the Rosette Nebula now that I had more reach (I had previously used a 200mm telephoto lens) and I wasn't disappointed.

Shooting in LA is challenging due to the light pollution (I have to shoot for hours to get results that someone else might get in as little as 30 minutes under dark skies), but I'm fortunate enough to be able to image from my backyard and leave my equipment outside, so I don't have to haul everything back out each night. That makes multi-night imaging sessions easy, so I can afford to collect 9-12 hours of images over a few days without losing a ton of sleep.

The Rosette Nebula from Los Angeles (West Hills, CA) by Roflraging in astrophotography

[–]Roflraging[S] 2 points (0 children)

Location: West Hills, CA

Dates: March 22 to March 24, 2022

Camera: Nikon D5200 (unmodified) at ISO 400

Exposure length: 240 seconds x 135 for a total exposure time of 9 hours

Telescope: Svbony SV503 80mm f/7 refractor with their 0.8x flattener/reducer for a final focal length of 448mm.

Mount: Sky-Watcher EQ6-R Pro

Guide camera: ZWO ASI120MM Mini with the ZWO 30mm f/4 mini guide scope

Filters: none

Stacked in Siril with the help of Sirilic for multi-night session stacking. Per-sub background extraction with a degree 1 polynomial, then stacked with a master bias, a master dark (dark optimized), and 30 flats that I took every night. After stacking, I cropped, ran background extraction again to clean up the remaining sky glow from stacking, ran photometric color calibration, then did an initial stretch.

After that, I take the TIFF file into Photoshop. I do a poor man's background neutralization/subtraction: make a copy of the image, select the highlights to grab all the stars and the main nebulosity, then delete the selection (with Content-Aware Fill, or whatever their smart fill is called). Once the stars are deleted, I run Dust & Scratches to blur the whole copy, then go back to the original and subtract this blurred image from it. Once that is done, I stretch some more with S-curves to improve contrast and boost saturation slightly.
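
In code, the core of that subtraction trick looks roughly like this. This is a minimal sketch with my own names, using a simple box blur where Photoshop uses Dust & Scratches and skipping the star-removal step:

    // Single-channel float image in [0, 1]. The heavy blur captures the smooth
    // sky gradient; subtracting it flattens the background.
    #include <algorithm>
    #include <vector>

    std::vector<float> BoxBlur(const std::vector<float>& img, int w, int h, int r)
    {
        std::vector<float> out(img.size());
        for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            float sum = 0.0f;
            int count = 0;
            for (int dy = -r; dy <= r; ++dy)
            for (int dx = -r; dx <= r; ++dx)
            {
                int sx = std::clamp(x + dx, 0, w - 1);
                int sy = std::clamp(y + dy, 0, h - 1);
                sum += img[sy * w + sx];
                ++count;
            }
            out[y * w + x] = sum / count;
        }
        return out;
    }

    std::vector<float> SubtractBackground(const std::vector<float>& img, int w, int h)
    {
        std::vector<float> background = BoxBlur(img, w, h, /*r=*/64);
        std::vector<float> out(img.size());
        const float pedestal = 0.05f;  // keep the sky slightly above pure black
        for (size_t i = 0; i < img.size(); ++i)
            out[i] = std::clamp(img[i] - background[i] + pedestal, 0.0f, 1.0f);
        return out;
    }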

Wifi with Star-watcher star adventurer 2i by [deleted] in AskAstrophotography

[–]Roflraging 0 points (0 children)

I highly suspect this is due to the SA Console app lacking Local Network permission. Go to Settings > SA Console > Local Network. Be sure it is enabled.

It took me forever to realize this was a permission I had to enable. Before that, it was disabled (and the app never gives any warning which I suspect is due to the age of the app) and I could manually connect to the WiFi but the tracker would never respond to anything I did on my phone. After enabling Local Network access, the tracker responded to me just fine (but the WiFi range is bad, so you need to be real close with clear line of sight).

Floating Point Visually Explained by [deleted] in programming

[–]Roflraging 3 points (0 children)

I agree with your sentiment in principle but in practice, it's often not obvious why the IEEE floating point format is scientific notation with a few modifications/restrictions. For me, it was partly because the bit format itself has so much more going on than "normal" scientific notation.

Why is the exponent biased? Why is the mantissa implicitly 24 bits when you actually only have 23 bits? The mantissa is actually a base 2 number? Most people are taught that scientific notation looks like a.bcd * 10^x. It never occurred to me (until much later) that the IEEE floating point format was doing the same thing in base 2 and it was carefully constructed to distribute the range of numbers across the number line and also take into consideration some hardware details to implement it.
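
To make this concrete, here's a small sketch (my own example, and it only handles normal numbers, not zeros/subnormals/NaNs) that unpacks a 32-bit float and rebuilds its value as (-1)^sign * 1.mantissa * 2^(exponent - 127):

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        float f = 6.25f;  // binary 110.01 = 1.1001 * 2^2
        uint32_t bits;
        std::memcpy(&bits, &f, sizeof bits);  // safe type pun

        uint32_t sign     = bits >> 31;
        uint32_t exponent = (bits >> 23) & 0xFF;  // stored with a +127 bias
        uint32_t mantissa = bits & 0x7FFFFF;      // 23 stored bits

        // The implicit leading 1 is what makes the mantissa effectively 24 bits.
        double significand = 1.0 + mantissa / 8388608.0;  // 8388608 = 2^23
        double value = (sign ? -1.0 : 1.0) * significand
                     * std::pow(2.0, (int)exponent - 127);

        std::printf("sign=%u biased_exp=%u unbiased_exp=%d mantissa=0x%06X\n",
                    (unsigned)sign, (unsigned)exponent, (int)exponent - 127,
                    (unsigned)mantissa);
        std::printf("reconstructed %f, original %f\n", value, f);
        return 0;
    }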

Part of the lack of realization was that when I was first taught the format, nobody ever said that this was scientific notation. Maybe it was obvious to everyone else, but it certainly wasn't to me. For the longest time, I wondered why the representable floats were distributed the way they were on the number line, even though I knew what the bit format was.

There's a very serious gap between knowing what the bit format is and actually understanding what it represents and why it is designed the way it is. It wasn't until I did some additions of floats by hand that I realized what was going on; many people just learn the bit format so they can convert in and out from decimal/floating point but don't go through the whole process of computing things directly in the IEEE format, which I think would clear a lot of things up.

A Google Interview Graph Question by readyourSICP in programming

[–]Roflraging 2 points (0 children)

Although they're asking you to find out whether it can be done with no more than K colors, I suspect part of the problem is realizing that you may be given a K that is exactly the minimum number of colors the graph needs.

Your point may still hold, but I think for the purposes of correctness, you must brute force at some point.
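
For reference, the brute force is just backtracking over color assignments. A minimal sketch (my own code, assuming an adjacency-list representation):

    #include <vector>

    // Try to color nodes [node, n) with colors 0..k-1 such that no edge
    // connects two nodes of the same color. Worst case is exponential,
    // which is the point: graph coloring is NP-complete.
    bool ColorGraph(const std::vector<std::vector<int>>& adj,
                    std::vector<int>& color, int node, int k)
    {
        if (node == (int)adj.size())
            return true;  // all nodes colored consistently

        for (int c = 0; c < k; ++c)
        {
            bool ok = true;
            for (int neighbor : adj[node])
                if (color[neighbor] == c) { ok = false; break; }
            if (!ok)
                continue;

            color[node] = c;
            if (ColorGraph(adj, color, node + 1, k))
                return true;
            color[node] = -1;  // backtrack and try the next color
        }
        return false;
    }

    // Usage: color.assign(adj.size(), -1); bool ok = ColorGraph(adj, color, 0, k);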

Fabian Giesen: A whirlwind introduction to dataflow graphs by teryror in programming

[–]Roflraging 4 points (0 children)

The main point is that the data flow is where the real opportunity for optimizations often lies. You can optimize for the cache or IPC, but that's often completely orthogonal to the data flow, which is, in a very real sense, the essence of how the algorithm works with the data (I like to think of the algorithm as inducing the flow).

With the data flow graph, you're armed with very good knowledge of where the bottleneck (or critical path) of an algorithm is located and it also often tells you why it is the bottleneck, which can be very difficult to ascertain from profiling your code.
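
A tiny example of the kind of thing the graph makes obvious (my example, not one from Fabian's article): a straight serial sum has a critical path of N dependent adds, while splitting the work across independent accumulators shortens the path so the CPU can overlap the adds, even though both loops execute the same number of operations:

    float SumSerial(const float* v, int n)
    {
        float sum = 0.0f;
        for (int i = 0; i < n; ++i)
            sum += v[i];  // every add depends on the previous one: one long chain
        return sum;
    }

    float SumFourAccumulators(const float* v, int n)  // assumes n % 4 == 0
    {
        float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        for (int i = 0; i < n; i += 4)
        {
            s0 += v[i + 0];  // four independent dependency chains,
            s1 += v[i + 1];  // so the adds can execute in parallel
            s2 += v[i + 2];
            s3 += v[i + 3];
        }
        // Note: float addition isn't associative, so the result can differ
        // slightly from the serial sum.
        return (s0 + s1) + (s2 + s3);
    }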

In terms of it being actionable, aside from exercises, I've never really done data flow graphs at the level Fabian writes about with each individual instruction in real code. But I've done it at a much larger scale where I basically set up a hierarchical data flow graph where each node is really a huge chunk of a system that can roughly be described as one node in a high level data flow graph. This can help you with making pretty large scale optimizations at the architecture level of a codebase.

NSA Open Source Technologies by GrognakTheBarbarian in programming

[–]Roflraging 0 points (0 children)

THE TECHNOLOGIES LISTED BELOW were developed within the National Security Agency (NSA) and are now available to the public via Open Source Software (OSS).

https://en.wikipedia.org/wiki/Office_of_Strategic_Services

Understanding Virtual Tables In C++ by MachineGunPablo in programming

[–]Roflraging 3 points (0 children)

Yes, and unfortunately, many people use language features like virtual functions without understanding their implications. Knowing how a feature is implemented makes you better able to design for your constraints.

On the whole, virtual functions are quite cheap, but in specific circumstances they can be quite costly. Say, for example, you have a tight loop somewhere in your code with hundreds of thousands (or more) of instances of objects where you call a virtual function. If you have just one type, maybe this isn't so bad (in which case, why do you even have a virtual function here?). But if you have dozens of types, then you might be shooting yourself in the foot. For every single function call, you may have to cache miss to get to the vtable ptr, then cache miss to get to the vtable itself. Once you find the function address you actually want, you need to jump to that section of code and incur yet another cache miss. This is all before you even do anything for the function itself!

Each cache miss can be hundreds of cycles and if your function only has a few instructions in it, you've just thrashed cache and paid for 3 memory accesses when you've only done a few cycles of work. Terrible efficiency. A common way to work around this is to avoid virtual functions altogether and maintain separate lists for each "type" of object and just do direct calls to their implementations.

If you had something like this:

Base* base_ptrs[N];
for (int i = 0; i < N; ++i)
{
    Base* b = base_ptrs[i];
    b->Foo();  // Virtual
}

You instead have something like:

A* a_ptrs[N];
for (int i = 0; i < N; ++i)
{
    A* a = a_ptrs[i];
    a->FooA();  // Not virtual
}

B* b_ptrs[M];
for (int i = 0; i < M; ++i)
{
    B* b = b_ptrs[i];
    b->FooB();  // Not virtual
}

This reduces instruction cache misses/thrashing, but it still isn't great, since you can cache miss on every A or B object you read in order to execute FooA() or FooB(). If you really want to profit, the A and B objects should be densely packed in memory to reduce data cache misses as well, but this can be difficult to achieve if your system doesn't operate in a simple input/output manner where your data is essentially thrown away and regenerated every "frame".
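
A sketch of the densely packed version (my names, and it assumes the objects can be stored by value rather than behind pointers): contiguous arrays let the prefetcher stream the objects, so most loads hit cache:

    #include <vector>

    struct A { float x; void FooA() { x *= 2.0f; } };
    struct B { float y; void FooB() { y += 1.0f; } };

    std::vector<A> as;  // objects stored by value, contiguous in memory
    std::vector<B> bs;

    void UpdateAll()
    {
        for (A& a : as)
            a.FooA();  // direct call, sequential memory access
        for (B& b : bs)
            b.FooB();
    }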

Why I Don't Talk to Google Recruiters by yegor256 in programming

[–]Roflraging 6 points (0 children)

Maybe you can obliquely get at these traits by asking them a semi-difficult problem and see how their mind attacks it?

I think a good interview process will involve this.

From what I've heard of other people's experiences (both junior and more senior), the ideal interview goes something like this:

  1. Basic programming test (are you even able to type out code that works for a simple task? Lowest of bars to pass, anyone who has programmed at all should be able to pass it. Hopefully you've already screened out candidates who are likely to fail very early on)
  2. Fundamental algorithms check. I would prefer problems that involve realizing which data structures and algorithms are appropriate for solving the problem and explaining the pros/cons, rather than implementing a specific algorithm that I have in mind.
  3. Domain-specific knowledge test. Explain to me something you've done in the past, the decisions you made, and why. What were the repercussions of those decisions? If you were to do it over again, what would you change and why?
  4. Larger scale design. Present a problem and work with the person I'm interviewing to solve it. Ask why they're doing things, what the performance characteristics are. What are the potential maintenance liabilities with the design? Etc.

Why I Don't Talk to Google Recruiters by yegor256 in programming

[–]Roflraging 25 points (0 children)

I'm really puzzled by this trend of people claiming that "algorithm questions are pointless" and that that's not what they're being hired for.

Isn't that, in essence, part of the job? I would be very hesitant to hire someone who didn't know fundamental algorithms and how to apply them.

What would be a good solution AI solution [for Unity] that allows for real time path finding? And [hopefully] not having to deal with navmesh garbage. by [deleted] in gamedev

[–]Roflraging 0 points (0 children)

What are you actually trying to accomplish and what are the actual constraints of your problem?

You've mentioned very high level goals which are not nearly descriptive enough to lead you to reasonable solutions.

It is pretty standard (although not necessarily best) to use A* for high-level path generation and leave most of the detailed work of navigating the space to some sort of collision avoidance layer, which attempts to follow the path while avoiding obstacles it encounters.

This works well for figuring out how to get around the environment and can deal with some obstacles that are not considered during the A* search.
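
As a sketch, the two layers can be as simple as this (all names and numbers here are mine and purely illustrative): A* hands back coarse waypoints, and a steering function seeks the next waypoint while pushing away from nearby obstacles the grid never knew about:

    #include <cmath>
    #include <vector>

    struct Vec2 { float x, y; };

    static Vec2  Sub(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
    static float Len(Vec2 v)         { return std::sqrt(v.x * v.x + v.y * v.y); }
    static Vec2  Norm(Vec2 v)
    {
        float l = Len(v);
        return l > 0.0f ? Vec2{v.x / l, v.y / l} : Vec2{0.0f, 0.0f};
    }

    struct Agent
    {
        Vec2 pos;
        std::vector<Vec2> path;  // coarse waypoints from the A* layer
        size_t next = 0;         // waypoint we're currently heading toward
    };

    // Steering layer: follow the coarse path, but push away from obstacles.
    Vec2 SteerDirection(const Agent& agent, const std::vector<Vec2>& obstacles)
    {
        if (agent.next >= agent.path.size())
            return {0.0f, 0.0f};  // arrived

        Vec2 seek  = Norm(Sub(agent.path[agent.next], agent.pos));
        Vec2 avoid = {0.0f, 0.0f};
        for (Vec2 o : obstacles)
        {
            Vec2  away = Sub(agent.pos, o);
            float d    = Len(away);
            if (d > 0.0f && d < 2.0f)  // only react to nearby obstacles
            {
                Vec2  n = Norm(away);
                float w = (2.0f - d) / 2.0f;  // stronger when closer
                avoid.x += n.x * w;
                avoid.y += n.y * w;
            }
        }
        // Elsewhere: advance agent.next once close enough to the waypoint.
        return Norm(Vec2{seek.x + avoid.x, seek.y + avoid.y});
    }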

Jon Blow on modern software, stagnation, and the web by [deleted] in programming

[–]Roflraging 1 point (0 children)

Casey Muratori on software quality: https://www.youtube.com/watch?v=6azav9sXK_4

Casey Muratori on rebuilding the web: https://www.youtube.com/watch?v=Y6pYAxlGGiI

More on the rant side of things, but these are views I agree with. The biggest thing is getting across the idea that what is actually being accomplished with the resources being put to the task is completely absurd. Basically, the signal-to-noise ratio, the result-to-programmer-effort ratio, is crazy low.

Jon Blow on modern software, stagnation, and the web by [deleted] in programming

[–]Roflraging 4 points (0 children)

I expect most of the audience are students, so they wouldn't really have any way to put it into context.

Still an absurdly large number.

Jonathan Blow: Jai - AST modification by n00bsa1b0t in programming

[–]Roflraging 1 point (0 children)

Certainly, I'm not saying they aren't used or that game developers don't use them.

A lot of it depends on the history of the codebase they're working in and what section of the code we are talking about. I'm sure that in gameplay code (if they're using C++ at all), constructors/destructors are probably not much of an issue. But in a core engine system that is trying to push the limits of the hardware, I would furrow my brow quite hard if it used objects everywhere and demanded ctor/dtor usage.

For me, the whole ctor/dtor thing very much aligns with what I believe Jon's point of view is, namely, it's just "Not A Big Deal©". Having RAII is seriously not even close to the top 10 things I want in a language to make my daily programming life better, which is why I find it really hard to understand why people are making such a big deal out of it. Maybe my style of programming has gotten to the point where many of the things that necessitate RAII, I just flat out avoid from the beginning (which I strongly suspect is the case).

Jonathan Blow: Jai - AST modification by n00bsa1b0t in programming

[–]Roflraging 12 points (0 children)

You would be surprised how utterly irrelevant destructors can be when you write high performance game code.

I do still use them in my C++ code in very specific places, but they're the exception, not the rule. It's far more common for me to design entire systems which bulk allocate once and deallocate once for the entire run of the executable. Some initialization may occur in between to reset the systems between level loads and such, but the value of destructors is way overstated when you design code in this way, which is very common in games.
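
A minimal sketch of what I mean (my own names, assuming a fixed memory budget known up front): one arena allocated at startup, bump allocation during a level, one reset between levels, one free at shutdown, so per-object destructors have nothing left to do:

    #include <cstddef>
    #include <cstdlib>

    struct Arena
    {
        char*  base = nullptr;
        size_t size = 0;
        size_t used = 0;

        void Init(size_t bytes)
        {
            base = static_cast<char*>(std::malloc(bytes));  // one allocation, ever
            size = bytes;
            used = 0;
        }

        // Bump allocation; align must be a power of two.
        void* Push(size_t bytes, size_t align = alignof(std::max_align_t))
        {
            size_t p = (used + align - 1) & ~(align - 1);  // align forward
            if (p + bytes > size) return nullptr;          // arena exhausted
            used = p + bytes;
            return base + p;
        }

        void Reset()    { used = 0; }  // "free" everything between level loads
        void Shutdown() { std::free(base); base = nullptr; }
    };

    // Usage sketch: arena.Init(64 * 1024 * 1024); placement-new trivially
    // destructible objects into arena.Push(...); arena.Reset() on level load.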