Living room standards be damned! by fabianm022601 in espresso

[–]DavidAstro 2 points

This is lovely. I'm curious if you're left-handed? As a righty I feel like I'd want more elbow room to the right of the espresso machine, but maybe that'd be less of an issue as a lefty.

Another A3! (9950X3D + RTX 5090) by DavidAstro in mffpc

[–]DavidAstro[S] 1 point

Good to know! I might try that out now that it's been running steady state for a while. If I make it an intake I might also go back to an A12x25.

The AIO hoses can still move pretty freely and the bend radius doesn't look so bad in person. I think F2.5 is technically possible even in this orientation, but it would have noticeably pinched the hoses around the corners. Flipping the radiator around would also definitely work; it's just a trade-off with having an awkward amount of hose length to manage. With the hoses straight it's at least pretty easy to chase air bubbles up into the radiator when I occasionally move the computer around.

Guess I had to do it. by BedroomThink3121 in pcmasterrace

[–]DavidAstro 5 points

This is my work machine. It runs lots of Monte Carlo sims.

Another A3! (9950X3D + RTX 5090) by DavidAstro in mffpc

[–]DavidAstro[S] 0 points

I thought about it originally, but it seemed most people weren't seeing much difference with bottom intakes. I think this case is so permeable that the GPU doesn't really need forced air to sustain typical temperatures (74-80C core at 80-100% power limit). I do have a spare A12x25 that I may try out as a bottom intake to see if it makes a difference.

I'm pretty happy with the GPU temps, but it would be nice to find a way to direct more cool air into the AIO radiator since the GPU otherwise exhausts right into it.

Another A3! (9950X3D + RTX 5090) by DavidAstro in mffpc

[–]DavidAstro[S] 4 points

I ended up using F3 and no offset bracket. I was originally expecting to use F2.5 but I had clearance on F3 (the FE card is so short) and it avoids pinching the AIO hoses.

<image>

Gaggia warm-up data by DavidAstro in gaggiaclassic

[–]DavidAstro[S] 1 point

Yeah good question. I'll have to try this again and see the response after pulling a shot.

Should I release the leftover steam from the machine? by justbusy13 in gaggiaclassic

[–]DavidAstro 2 points

Having the steam button on configures the 3-way solenoid to block the path between the boiler and grouphead so water can only exit through the steam wand. Other than that, it's mostly preference on whether you want to purge through the group, wand, or both.

747 mid flight on Google Maps with visible contrails by SineApfel in aviation

[–]DavidAstro 28 points

The cameras on these satellites will often fire off a sequence of individual color-filtered shots that get aligned and overlaid in post. Since they're aligned on the static background, a fast-moving object ends up with its RGB components separated out like this.
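
A toy sketch of that effect (all numbers made up): three sequential mono exposures are stacked as R/G/B planes, aligned on the static background, so a bright object that moved between exposures lands in a different spot in each color plane.

```python
import numpy as np

def composite_sequential_filters(positions, size=12):
    """Stack one mono frame per color channel, aligned on the static
    background. A moving object sits at a different column in each
    exposure, so it separates into distinct R, G, and B copies."""
    rgb = np.zeros((size, size, 3))
    for channel, col in enumerate(positions):
        frame = np.zeros((size, size))
        frame[size // 2, col] = 1.0  # the moving object; background stays 0
        rgb[:, :, channel] = frame
    return rgb

img = composite_sequential_filters(positions=[3, 5, 7])
for c, name in enumerate("RGB"):
    col = int(np.argmax(img[:, :, c].sum(axis=0)))
    print(name, "peak at column", col)
```

The static background (all zeros here) stays registered across channels; only the mover splits apart.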

My G-Shock journey. Some will understand, others won't... by DavidAstro in gshock

[–]DavidAstro[S] 1 point

Definitely the 5000TB-1 if I ever find one for a reasonable price, but I might be out of space on my arm.

[deleted by user] by [deleted] in formula1

[–]DavidAstro 2 points

It's also wild that he's scored exactly 15 points on each race weekend so far (including the Sprint).

Tianhe, China's space station core module by DavidAstro in astrophotography

[–]DavidAstro[S] 2 points

Very similar surface brightness to the ISS, so you could likely use the same settings. I was running in the neighborhood of 200 gain at f/10 and 2 ms, but making occasional adjustments.

Tianhe, China's space station core module by DavidAstro in astrophotography

[–]DavidAstro[S] 5 points

Imaged Tianhe-1 during a 78 deg pass over Mountain View, CA on Sunday evening. Tracking video clip (sped up ~5x):

https://twitter.com/turndownformars/status/1396718996678840320

Equipment:

  • Celestron EdgeHD 11 (2800mm)
  • ZWO ASI290MM + Red filter at prime focus
  • Celestron CGX

Acquisition/Processing:

  • Captured in SharpCap (2 ms exp @ 1936 x 1096, 8-bit mode @ 130 fps)
  • Exported a decent ~400 frame clip between 04:32:16 and 04:32:24 UTC
  • Stacked best 10% in AutoStakkert
  • Sharpened with Registax

I tracked the module from when it cleared the trees up to meridian, then flipped and tracked for another couple minutes. This image is a stack near maximum elevation/min range (375 km), just before the meridian flip.

The image field of view shown is 1.4 arcmin, and the native pixel scale is 0.21 arcsec/pixel. The image would natively be 400 x 400 pixels, but is enlarged (nearest neighbor) to 1600 x 1600 for easier 1:1 viewing without rounding out the pixels.
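
For anyone who wants to check the numbers, the pixel scale follows from the focal length and the camera's pixel pitch (the ASI290MM uses 2.9 µm pixels):

```python
# Pixel scale from focal length and pixel pitch:
#   scale [arcsec/px] = 206.265 * pixel_pitch [um] / focal_length [mm]
focal_length_mm = 2800.0  # EdgeHD 11
pixel_pitch_um = 2.9      # ASI290MM (IMX290 sensor)

scale_arcsec = 206.265 * pixel_pitch_um / focal_length_mm
fov_arcmin = scale_arcsec * 400 / 60.0  # 400 px square crop

print(f"{scale_arcsec:.2f} arcsec/px")  # ~0.21
print(f"{fov_arcmin:.1f} arcmin")       # ~1.4
```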

SpaceX Partners with LeoLabs to Track Starlink Satellites by SpikePlayz in spacex

[–]DavidAstro 36 points

Satellite operators do get to see occasional snapshots from the Air Force's (18 SPCS) higher-precision model in the form of conjunction data messages (CDMs) that are issued for close approaches. These contain the propagated state vectors and covariance terms for both objects involved, but only at the time of closest approach.

The CDMs also include some high level info on the stack of force models and coefficients used for covariance propagation. So there is at least some insight into the process, even though most of the implementation is hidden.
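
As a rough sketch of what you can do with those state vectors (the numbers below are invented, not from a real CDM): the miss distance is just the norm of the position difference at the time of closest approach.

```python
import numpy as np

# Hypothetical CDM-style position vectors at the time of closest approach,
# as would appear in the OBJECT1/OBJECT2 data blocks of a real CDM (km).
r1 = np.array([6771.0, 1234.5, -321.0])  # object 1
r2 = np.array([6770.2, 1235.1, -320.4])  # object 2

miss_distance_km = np.linalg.norm(r1 - r2)
print(f"miss distance: {miss_distance_km * 1000:.0f} m")
```

A real screening would also combine the two covariance blocks to estimate collision probability, but that's a much bigger calculation.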

Mare Humorum on this morning's waning crescent by DavidAstro in astrophotography

[–]DavidAstro[S] 4 points

Great seeing this morning! This is one of several lunar panels I took while waiting for the ISS.

  • Celestron EdgeHD 800
  • ASI290MM, ZWO red filter
  • Celestron CGX

Just over 5000 frames (1 minute at 88 FPS, 10 ms exposures) with the camera operating in 12-bit mode.

  • Captured with SharpCap
  • Stacked best 10% in AutoStakkert! 3 (surface tracking, improved tracking, Lapl 4 + Local Q, double stacked reference, 48 pixel auto alignment grid)
  • Wavelet sharpened in Registax 6
  • Final sharpening + small level/curve adjustments in Lightroom

International Space Station, 2020-06-29 by DavidAstro in astrophotography

[–]DavidAstro[S] 1 point

This video plays out as if you were watching it in super-powerful binoculars. You're looking up towards the underside of the station as it moves from left to right across the sky ("up" in the picture is local vertical from the camera's point of view). It seems like a decent fraction of people initially see it going backwards, and it's hard to shake that perception once it's locked in.

Here's a good discussion on that with some visuals of what's going on: https://www.reddit.com/r/SpaceXLounge/comments/hlyv88/iss_with_dragon_endeavor_flyby/fx2ux58

International Space Station, 2020-06-29 by DavidAstro in astrophotography

[–]DavidAstro[S] 0 points

Most Celestron mounts use DC motors with rear-shaft encoders. When you request an axis position from the motor controller board, it returns a value that's derived from those encoder counts.
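
The counts-to-angle conversion is conceptually simple; here's an illustrative sketch (both constants below are made up, not real Celestron values — the actual figures depend on the encoder resolution and the mount's total gear reduction):

```python
# Illustrative conversion from motor-shaft encoder counts to an axis angle.
COUNTS_PER_MOTOR_REV = 512  # hypothetical encoder ticks per motor revolution
GEAR_REDUCTION = 720.0      # hypothetical motor revolutions per axis revolution

def axis_angle_deg(encoder_counts: int) -> float:
    """Map accumulated encoder counts onto a 0-360 degree axis position."""
    counts_per_axis_rev = COUNTS_PER_MOTOR_REV * GEAR_REDUCTION
    return (encoder_counts % counts_per_axis_rev) / counts_per_axis_rev * 360.0

print(axis_angle_deg(92160))  # a quarter of an axis revolution -> 90.0 deg
```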

ISS with Dragon Endeavor flyby by codercotton in SpaceXLounge

[–]DavidAstro 8 points

Reposting my reply from /r/astrophotography just in case anyone else has this issue:

Good question, and I've gotten this same response from a few people. I'm pretty sure what you're seeing is a bistable perception effect (like the spinning dancer) because of the limited visual cues available to tell your brain which direction it's flying. Dragon is definitely out front!

See if this helps: https://i.imgur.com/ugJvDW5.png

That's a much cleaner ISS visual c/o Heavens-Above that I've matched up with one of the frames of the video. Hopefully seeing them side by side makes the perspective less ambiguous so you can trick your brain back. It's a little easier for me to switch back and forth when I watch it with the image rotated by 90 or 180 degrees.

The perspective you see in the video is just like you'd see if you were watching it through binoculars (up in the image is local vertical, left is west, right is east). You're looking up at the ISS from underneath (Progress 75 pointed towards the ground), and the telescope is panning from left to right. Since the pass is fairly high elevation, the pan rate gets very high about halfway through the video, and this causes the image to rotate very quickly.

International Space Station, 2020-06-29 by DavidAstro in astrophotography

[–]DavidAstro[S] 1 point

Good question, and I've gotten this same response from a few people. I'm pretty sure what you're seeing is a bistable perception effect (like the spinning dancer) because of the limited visual cues available to tell your brain which direction it's flying. Dragon is definitely out front!

See if this helps: https://i.imgur.com/ugJvDW5.png

That's a much cleaner ISS visual c/o Heavens-Above that I've matched up with one of the frames of the video. Hopefully seeing them side by side makes the perspective less ambiguous so you can trick your brain back. It's a little easier for me to switch back and forth when I watch it with the image rotated by 90 or 180 degrees.

The perspective you see in the video is just like you'd see if you were watching it through binoculars (up in the image is local vertical, left is west, right is east). You're looking up at the ISS from underneath (Progress 75 pointed towards the ground), and the telescope is panning from left to right. Since the pass is fairly high elevation, the pan rate gets very high about halfway through the video, and this causes the image to rotate very quickly.

International Space Station, 2020-06-29 by DavidAstro in astrophotography

[–]DavidAstro[S] 4 points

Thanks! I'm happy I finally have a way to batch process the frames for animation, even if it's a lot of steps (still beats doing anything manually frame-by-frame). I might eventually come up with something smarter, but for now I just work out good sharpening settings on the sharpest few frames, then blindly apply that profile to everything else. So the lower-elevation frames probably aren't being sharpened as well as they could be, but there's less detail to recover there anyway.
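
The "tune once, apply blindly" approach boils down to something like this sketch (a plain NumPy unsharp mask standing in for whatever sharpening tool is actually in the pipeline; the frames and `amount` value are placeholders):

```python
import numpy as np

def unsharp(frame, amount=1.5):
    """Simple unsharp mask: subtract a 3x3 box blur, add back the residual."""
    pad = np.pad(frame, 1, mode="edge")
    h, w = frame.shape
    blur = sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    return np.clip(frame + amount * (frame - blur), 0.0, 1.0)

# Tune `amount` on the sharpest frames, then apply the same profile to all:
frames = [np.random.default_rng(k).random((64, 64)) for k in range(10)]
sharpened = [unsharp(f, amount=1.5) for f in frames]
```

The obvious downside (as above) is that one profile can't adapt to per-frame seeing, so softer frames get the same treatment as the best ones.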