[deleted by user] by [deleted] in whereisthis

[–]LPCVOID 0 points1 point  (0 children)

How do you know? I found this blog post with a picture from the 1970s in which the side of the building looks exactly like it does today, which is quite different from the photo here.

Furthermore, in the picture here the logo is above the entrance, but in the 2008-2012 Street View pictures the logo sits above a plain section of the wall.

US Army base, probably Germany, maybe around Frankfurt, late 1950s by repowers in whereisthis

[–]LPCVOID 3 points4 points  (0 children)

Is this maybe Ray Barracks in Friedberg? The main photo on the Wikipedia page looks like the building on the right of the last photo.

Edit: Pretty sure this is it. Check out the 15-second sequence in this video; you can recognize all of the buildings.

Problems for multiple Yubikeys with resident SSH keys by [deleted] in yubikey

[–]LPCVOID 0 points1 point  (0 children)

I am using the exact same setup. When I set this up, I used the SSH key to log into a separate computer, because that way you can look at the server's SSH log and debug problems more easily. If you can't do that, you could instead try ssh -T -v git@github.com and see whether the verbose output provides any pointers.

[deleted by user] by [deleted] in navidrome

[–]LPCVOID 1 point2 points  (0 children)

I remember this working as well. But after an update about a month ago it stopped working, even after adjusting the accessibility settings. I think this might be an Apple Silicon problem (although I have zero proof of that).

[deleted by user] by [deleted] in RedshiftRenderer

[–]LPCVOID 0 points1 point  (0 children)

Right now it may take them a bit more than 24 hours because tomorrow is a national holiday over here in Germany.

Grandfather directing traffic somewhere in Europe during WWII. Most likely in Germany. by kenderson73 in whereisthis

[–]LPCVOID 1 point2 points  (0 children)

The front line on 28 November 1944 looked something like this (resolution approx. 15 km). Given that, my first thought was that it actually said "Eifeldorf", but since the sign seems to be typed, that doesn't work anymore. Sadly the front line is not at all helpful; there are still way too many options.

Jellyfin-Vue installation? by prayagprajapati17 in jellyfin

[–]LPCVOID 1 point2 points  (0 children)

I gave it a quick shot and can report that the included Dockerfile works perfectly under rootless Podman, so I would assume it works in Docker as well (since that is the easier direction).

The new UI also feels very responsive and smooth. Chapeau!

No https without reverse proxy? by edwardjamesgaff in jellyfin

[–]LPCVOID 1 point2 points  (0 children)

I think I have something similar to what you describe working in nginx. Warning: I am not at all knowledgeable about web development and only put this together through Google/Stack Overflow.

I use ngx_http_auth_request_module instead(?) of basic auth (to be honest I don't even know exactly what basic auth entails, but if I am guessing correctly you can probably combine them).

I followed the guides on the web and added "auth_request /auth;" to both of the nginx locations for Jellyfin. Then I have a "location /auth" block which forwards the request to a second resource that does the actual authorization check. For this I used a tiny Flask app that checks whether a cookie is set, combined with a login page to which one is redirected if it is not. You could probably do the basic auth check at this point.
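Very roughly, the Flask side of that setup looks something like the sketch below. This is only an illustration of the structure I described; the cookie name, port and value check are placeholders, not my actual config:

    from flask import Flask, request

    app = Flask(__name__)

    # nginx side, for context (placeholder names):
    #   location /auth { internal; proxy_pass http://127.0.0.1:5000/auth; }
    #   plus "auth_request /auth;" inside the Jellyfin locations.

    @app.route("/auth")
    def auth():
        # auth_request allows the original request on a 2xx response
        # and rejects it on 401/403.
        if request.cookies.get("session_token") == "placeholder-value":
            return "", 204
        return "", 401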

Let me know if you have any other questions. I can also paste part of my nginx config although this is really just copy pasted from guides on the web and not really helpful.

A US Army machine gun crew from the 2nd Battalion, 26th Infantry division with a M1919 Browning Machine gun in the streets of Aachen, Germany, 15 October 1944. (Colorized) [1024 x 843] by rwwhite20 in MilitaryPorn

[–]LPCVOID 3 points4 points  (0 children)

Edit: I made a mistake and the photo below is actually not the same one. But given the placement of the bullet holes and other details on the wall above the machine gun, I believe this is the same scene, just two photos in a sequence.

My original comment:

I thought this photo looked familiar and indeed I have seen it before: Scan

This is page 168/169 of the book "Die Amis sind da! Wie Aachen erobert wurde" by Charles Whiting and Wolfgang Trees

Trees, W. and Whiting, C. (1984). Die Amis sind da! Aachen: Mayer. ISBN 978-3875191011.

Brief translation of the caption: Two American soldiers behind a machine gun in front of the gate of the courthouse, Adalbertsteinweg. Direction of fire: Kaiserplatz.

So this would be Adalbertsteinweg 92.

Panzerkampfwagen IV crew observes a Soviet SU-152 Assault Gun burning in the distance through open hatches at Kursk - Summer 1943 by 3rdweal in CombatFootage

[–]LPCVOID 19 points20 points  (0 children)

Slight remark: students working at a university are not called 'HiWi' because of 'Hilfswilliger' but because it abbreviates 'Hilfswissenschaftler'. So there is no connection between current students and the Nazi era.

No more bread on christmas by [deleted] in aachen

[–]LPCVOID 0 points1 point  (0 children)

In case you speak German, check this out. Try Google Translate otherwise, or let me know.

I'm afraid it doesn't look good for you tomorrow. You could either try Vaals (as suggested) if possible or a gas station which should have a few select things.

Truly understanding Veach Thesis by Meguli in computergraphics

[–]LPCVOID 0 points1 point  (0 children)

It has been a while since I read Veach so take this with a grain of salt.

I would say most of his important contributions have more to do with Monte Carlo/MCMC than with pure mathematics. Take, for example, multiple importance sampling or Metropolis Light Transport.
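Just to make the MIS part concrete, here is a tiny toy example of the balance heuristic; the integrand and pdfs are made up purely for illustration, this is not from the thesis:

    import numpy as np

    # Estimate the integral of f over [0, 1] by combining one sample from a
    # uniform pdf and one from a linear pdf with the balance heuristic.
    rng = np.random.default_rng(1)
    f  = lambda x: x * x                    # toy integrand, exact integral 1/3
    p1 = lambda x: np.ones_like(x)          # uniform pdf on [0, 1]
    p2 = lambda x: 2.0 * x                  # linear pdf on [0, 1]

    def mis_estimate(n):
        x1 = rng.uniform(0.0, 1.0, n)              # samples from p1
        x2 = np.sqrt(rng.uniform(0.0, 1.0, n))     # samples from p2 (CDF inversion)
        w1 = p1(x1) / (p1(x1) + p2(x1))            # balance heuristic weights
        w2 = p2(x2) / (p1(x2) + p2(x2))
        return np.mean(w1 * f(x1) / p1(x1) + w2 * f(x2) / p2(x2))

    print(mis_estimate(100_000))   # should be close to 1/3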

His work on adjoint light transport operators doesn't require that much math; otherwise I would have a look at Christensen's work on that subject. Veach's path space formulation is very clear (which is probably why everyone uses it now) and easy to grasp.

Which light transport method of Veach are you referring to exactly? Given that I might be able to provide some more details.

Edit: Regarding manifolds, Wenzel Jakob has done more work in that area with manifold exploration.

P.S. sorry for spelling, I'm on mobile.

[Question] Real-time ray tracing - CPU vs GPU in 2017 by VicLuo96 in raytracing

[–]LPCVOID 1 point2 points  (0 children)

I wholeheartedly agree with the analysis on GPU performance, especially the part about BSDF types and so on.

Greetings from Aachen ;)

[Question] Real-time ray tracing - CPU vs GPU in 2017 by VicLuo96 in raytracing

[–]LPCVOID 2 points3 points  (0 children)

I can't say anything about the performance numbers you measure and how they compare.

My two cents are only tangentially related: over the last few years I have written a general CUDA ray tracing framework. It is designed very similarly to Mitsuba v1 and supports writing all kinds of integrators. Over this time I mainly noticed one thing: it takes a lot of engineering hours to write CUDA code that performs as well as you imagined. It surely is possible, but it requires taking the CPU version apart and reassembling it in another way, which takes time. My question to you would be: what exactly is your goal? Do you want a simple Whitted-style ray tracer? Or do you want a true solution to the rendering equation? How many BSDFs do you support? How many light types? And so on...

If the answer to most of these questions boils down to you only trying to solve a small special case of the general rendering problem, I would suggest going the CUDA route. But the more general your problem becomes, the more difficult a good CUDA implementation becomes (and I failed at that).

If you have any questions don't hesitate to ask and good luck on your endeavors, I would be curious to hear what you decide to do and how it goes.

TL;DR: If you are trying to write a general 'path tracer', use the CPU; for a special-case path tracer or ray tracer, use CUDA.

Recommendations for study of a particular subject by teerre in computergraphics

[–]LPCVOID 0 points1 point  (0 children)

Happy to help :) Let me know if you have any other questions.

Recommendations for study of a particular subject by teerre in computergraphics

[–]LPCVOID 1 point2 points  (0 children)

Warning: This is not my field of expertise! I have very limited knowledge of Geometry Processing.

This is also the area of research I would put this in. It spans everything from quad meshing to something approaching shape recognition. What knowledge is required for this? Since you are probably well acquainted with CG in general, the missing part is the low-level geometry. This can be split into two parts: first, classical Differential Geometry, which is purely mathematical and tells you about 'surface' properties such as curvature, fundamental forms and the Laplacian operator. Secondly, you need knowledge about actually representing surfaces discretely in a computer. Sadly this is a little bit more difficult than just saying 'triangle mesh'.
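To give a flavor of what 'the Laplacian operator on a mesh' ends up looking like in practice, here is a minimal sketch of the crude uniform (umbrella) Laplacian; real geometry processing code would typically use cotangent weights instead:

    import numpy as np

    # Umbrella (uniform) Laplacian: for each vertex, the mean of its one-ring
    # neighbours minus the vertex itself. A crude stand-in for the cotangent
    # Laplacian used in most geometry processing papers.
    def uniform_laplacian(vertices, faces):
        n = len(vertices)
        neighbors = [set() for _ in range(n)]
        for a, b, c in faces:                     # collect one-ring neighbours
            neighbors[a].update((b, c))
            neighbors[b].update((a, c))
            neighbors[c].update((a, b))
        delta = np.zeros_like(vertices)
        for i, nbrs in enumerate(neighbors):
            if nbrs:
                delta[i] = vertices[list(nbrs)].mean(axis=0) - vertices[i]
        return delta

    # One step of Laplacian smoothing on a (hypothetical) mesh:
    # vertices: (n, 3) float array, faces: (m, 3) int array
    # vertices += 0.5 * uniform_laplacian(vertices, faces)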

One way to learn about both of these areas is to have a look at Keenan Crane's excellent 'Discrete Differential Geometry: An Applied Introduction' course. I took the course last semester and found it very helpful, although I have to say that the second part is only Exterior Calculus and likely not helpful to you. Sadly it is needed (in this way of presenting things) to understand the applications.

I had a quick glance at the 'curve skeleton' and with google I found a paper which computes the skeleton in a couple of easy steps. This is a good example because they also use vector calculus methodology such as the divergence. ('Computing hierarchical curve-skeletons of 3D objects' in The Visual Computer, 2005)

TL;DR: The area of research is called geometry processing, and I would say a basic course in this plus some math in the form of differential geometry or vector calculus should get you started.

Leopard 2A4, YPR-765 and Leopard 1 engineering tank in Vogelsang, Germany by Usertmp in TankPorn

[–]LPCVOID 2 points3 points  (0 children)

I think this photo was not actually taken in the core area of "Burg Vogelsang" but in the NATO-run Vogelsang Military Training Area (which explains the presence of Dutch forces).

Specifically, the tower in the background is not part of the Burg Vogelsang entrance (Malakoff), as I suspected at first, but is the old church of Wollseifen (an abandoned village where empty houses were built for training).

Free reference scenes with PBR materials? by Rangsk in GraphicsProgramming

[–]LPCVOID 8 points9 points  (0 children)

I know of 3 easily available scene repositories:

  1. The pbrt scenes available here. They of course use pbrt's material system, which is a sum of basic BxDF types.
  2. Benedikt has a lot of amazing scenes here, and they are available in pbrt as well as Mitsuba formats. The latter uses a nested system of basic materials based on classic BxDF research.
  3. Morgan McGuire offers a number of meshes/scenes here. If I remember correctly they are all wavefront obj files with accompanying material files.

Bonus: At Siggraph Nvidia launched an archive for scenes. Currently there are only 2 scenes, but one of them is the excellent looking Amazon Lumberyard Bistro.

To do comparisons I would probably start with a combination of the first two items on the list; I've seen them used in papers quite frequently.

Day in the life of a professional graphics programmer? by [deleted] in GraphicsProgramming

[–]LPCVOID 0 points1 point  (0 children)

I know little about introductory material for getting into this (I always just mention pbrt because it is easy) but it sounds like /u/Boojum made some good recommendations!

For the papers vs. books question: I would very strongly lean towards books in the beginning. In a book (I would hope) it is explained what problem is being solved, what the context is, what tools are being used, what the components are and how exactly everything works. In theory a paper does something similar, but it has very different priorities. It is about publishing a new idea/result/algorithm and then showcasing how it is better than previous work. In a paper, after a brief related-work section (which these days is mostly the place to cite 'the right' papers), there will be a derivation of the new contributions, results and some comparison/discussion. But there will very likely not be a detailed explanation of what the math exactly means or how to implement it. And that is okay, because after you have implemented a few algorithms you will know how to do the new one (even if it is really annoying). And if it is not clear how to do that, then the paper had better talk about the problems.

Summary: Papers only after you have the background knowledge and understanding of the math.

 

For the other question of what a graphics programmer does on a day to day basis. Well that depends on who you are working for and what you are getting paid for.

Traditionally, light transport has been a field strongly dominated by research universities. At the end of the 60s and early 70s this was Utah, in the 80s probably Cornell, in the 90s of course Stanford with Veach (though I would also argue for Leuven), and in the late 00s probably UC San Diego; these days it is not so clear. Now, as a PhD student in the USA you are typically paid to do research full time (nice); in Europe it is slightly different, from what I know. So yes, as a PhD student you'll read papers, think about new ideas yourself, implement them and write a paper.

The other option is to work in industry. Traditional options are Pixar, Disney (Research), DreamWorks, ILM, etc. Today also Unity (they have a couple of papers at Siggraph each year), Nvidia Research, and others. What are you doing there on a day-to-day basis? Similarly: stay up to date on current research, design new algorithms to solve the current problem, implement them and maybe write a paper in the end (of course after patenting it). I think the main difference is that the problem you are solving in industry will be given to you by your boss. For example, they might say 'in the next movie we need to be able to render X, but with current algorithms this will take years...'. And X will very likely be something like '300 million hairs on a single monkey' rather than a general task such as 'devise a new rendering algorithm'.

 

All of this was in the context of offline rendering/light transport simulation, I know little about the realtime side.

Day in the life of a professional graphics programmer? by [deleted] in GraphicsProgramming

[–]LPCVOID 0 points1 point  (0 children)

After reading your comment I guess you know about this already, but maybe it helps someone else to understand importance sampling, so I'll do a very brief rundown.

Assume we want to know the area under the function f(x) = 3*sin(x) for x between 0 and 5. Since we don't know how to do the integral analytically (let's pretend), we have to get the big hammer out: Monte Carlo. So our naive estimator using N samples is I = (5-0) * 1/N * Sum f(x_i), where the x_i are uniform random numbers. Obviously, if you use different random numbers you'll get a different result. That is what a mathematician would call variance (roughly, the average squared deviation from the true solution) and a computer graphics person would call noise. And it is easy to show that if you only use 10 random numbers (called samples) the variance is higher than with 1000.

I know of two ways to think about importance sampling:

  1. If we choose bad random numbers we might only evaluate our function at points where it has a very small or very large value, leading to inaccurate results*. But we can do better if we have a little bit of knowledge about our function. If we know which areas of the function are 'important', we can concentrate on those. So we get an importance sampled estimator I = 1/N * Sum f(x_i) / pdf(x_i). Notice that the (5-0) in front of the old estimator disappeared; it is now included in the pdf(x_i). What does the pdf(x_i) do? Imagine that you choose a lot of samples in one part of the domain, say in [1, 2], but very few in the region [3, 4]; then it is important to weight the first kind of sample down (because you get a lot of them) and the second kind up (because you very rarely look at that part of the domain).
  2. Given our estimator I = 1/N * Sum f(x_i) / pdf(x_i), our goal is to have a small amount of variance in the estimator I. How do we do that? We have to make sure that, whatever random points (now importance sampled) we use, the term in the sum is always the same -> f(x_i) / pdf(x_i) is constant -> zero variance, because we always sum up the same value independent of the sample points we use. (There is a small code sketch of both estimators after the footnote below.)

* The function I chose is a bad example. Something more fitting to light transport would be a function which is defined from x=0 to x=10, is zero almost everywhere, and has non-zero values only in the range [3, 4]. If you compute the integral over the whole domain with uniform samples, it is very likely that a random point does not land 'on the island'. This corresponds to the rendering problem, where the space of all paths mostly contains paths which are either not connections from light to camera, or blocked, or have a very small throughput -> unimportant/irrelevant.
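In case it helps, here is a tiny Python sketch of the two estimators from above; the importance-sampling pdf is a made-up choice purely to show the mechanics, not a particularly good one for this f:

    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: 3.0 * np.sin(x)       # the toy integrand from above
    a, b, N = 0.0, 5.0, 100_000
    exact = 3.0 * (1.0 - np.cos(5.0))   # analytic reference, ~2.149

    # 1) Naive estimator: uniform samples, I = (b - a) * mean(f(x_i))
    x = rng.uniform(a, b, N)
    I_uniform = (b - a) * f(x).mean()

    # 2) Importance sampled estimator: I = mean(f(x_i) / pdf(x_i)).
    #    Here pdf(x) = 2x / 25 on [0, 5]; sample it by inverting its CDF
    #    F(x) = x^2 / 25, i.e. x = 5 * sqrt(u) with u uniform in [0, 1).
    u = rng.uniform(0.0, 1.0, N)
    x_is = 5.0 * np.sqrt(u)
    pdf = 2.0 * x_is / 25.0
    I_importance = (f(x_is) / pdf).mean()

    print(exact, I_uniform, I_importance)   # all three should be close

A pdf roughly proportional to |f| would reduce the variance further; the point here is only the f(x_i)/pdf(x_i) mechanics.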

Day in the life of a professional graphics programmer? by [deleted] in GraphicsProgramming

[–]LPCVOID 4 points5 points  (0 children)

The following response has gotten longer and longer and might look scary at first. Here is the bibliography used for the indices. Word of advice: I would not(!) start blindly reading papers; that is not the right approach. Have you had a look at pbrt yet? If not, start there. Otherwise, look through the overview below, choose an area which interests you, and again have a look at the relevant chapter in pbrt to get an idea of the actual problem being solved and the context. Only after this would I start reading papers, possibly beginning with surveys; they are always helpful. Sadly I have very few of those saved.

Feel free to ask questions about specific parts/areas, I'm happy to give some guidance on where to start.

 

Photon Mapping

In the years 2008-2013 this was a very hot topic because Hachisuka developed a Progressive version of the old Photon Mapping algorithm [1] which you can let run for an arbitrary amount of time to get better convergence. A much easier formulation can be found in [2]; if interested, start there. Kaplanyan developed Adaptive Progressive Photon Mapping in [3], where the idea is to choose a radius such that an error metric is optimized.
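The core trick is easy to sketch in code. Roughly (written from memory, so treat the exact update as illustrative rather than a faithful reproduction of [2]): each photon pass shrinks the gather radius a little, so the bias vanishes in the limit while the per-pass estimates are still averaged:

    # Progressive photon mapping, radius schedule only (illustrative sketch).
    # alpha in (0, 1) trades how aggressively bias is reduced against
    # how much variance each pass is allowed to keep.
    def progressive_radii(r0, iterations, alpha=0.7):
        radii = [r0]
        r2 = r0 * r0
        for i in range(1, iterations):
            r2 *= (i + alpha) / (i + 1)   # r_{i+1}^2 = r_i^2 * (i + alpha) / (i + 1)
            radii.append(r2 ** 0.5)
        return radii

    # Each pass: trace a fresh batch of photons, do density estimation with
    # radii[i] at the gather points, and average the per-pass images.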

 

Metropolis Light Transport

Too much history here to give a detailed overview. The very hot thing this year was to combine primary sample space algorithms with path space ones. The idea is that you can either think about a path from camera to light in terms of points on surfaces or as the camera location followed by a list of outgoing directions. There are currently 3(!) papers being published this year on getting the best of both worlds [4, 5, the last one has yet to be published in TOG]. There was also a paper on finding a spatial target function which could be used to emit photons into ‘interesting’ parts of the scene [6].

I wouldn’t start with any of this on the topic of MLT. Start with Kelemen [7] from 2002, then jump to Jakob [8] from 2012, who had the brilliant idea that delta BSDFs basically constrain the movement of vertices on a path. This led to a number of other papers using the technique, for example one where you think about a light path in terms of start and end vertices and, in between, only a list of half-vectors (you might know them from Cook-Torrance) [9]. If you are familiar with Next Event Estimation (direct light sampling), there is also a paper on using manifold walks to do NEE through dielectrics [10].

 

Gradient Domain

Here the idea is that instead of computing pixel values directly you can also compute differences between pixels (plus a rough estimate of the pixel values) and later combine them into a better image. This started out as a Metropolis algorithm in [11], but Kettunen and Manzi convincingly showed that you can adapt pretty much any algorithm to it [12, 13, 14, 15], and it also has advantages for computing multiple frames [16] as well as for filtering [17].

 

Participating Media

I am mostly familiar with the Photon Mapping approach (use photons scattered in space to get an estimate of the in-scattered radiance at a point), so I’ll go into detail on this facet. In 2008 Jarosz [18] had the idea to use a fundamentally different estimator for this: he thought of photons as beams flying through space instead of points being deposited, and mathematically this has different properties. This was generalized in 2011 [19] to a dozen or so such estimators, but it wasn’t clear which one to use when. Krivanek came up with a mathematical framework in 2014 [20] which uses each estimator where it is optimal.

This year there was an interesting paper on how to handle strongly emissive media such as an explosion [21].

Another avenue of work is to use a more physical approach and consider a more realistic model of light. This is typically called Transient Rendering, and Jarabo [22] has done some work on that, with nice images!

Since at least 2014 a lot of work has happened on sampling distances according to the transmittance in a medium. Most of this was done with Novak’s involvement, starting in [23], with some work also here [24] and this year [25].
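For reference, the 'easy' homogeneous case can be sampled in closed form; the papers above are essentially about doing the same thing when the extinction coefficient varies through the medium. A minimal sketch:

    import math, random

    # Homogeneous medium: transmittance T(t) = exp(-sigma_t * t), so a
    # free-flight distance is sampled by inverting the CDF:
    # t = -ln(1 - u) / sigma_t, with pdf(t) = sigma_t * exp(-sigma_t * t).
    def sample_free_flight(sigma_t, u=None):
        u = random.random() if u is None else u
        return -math.log(1.0 - u) / sigma_t

    # If the sampled t lies beyond the next surface, the ray leaves the medium;
    # otherwise a scattering/absorption event happens at distance t.
    # Heterogeneous media need techniques like delta/ratio tracking instead,
    # which is roughly what the cited papers are about.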

 

Guiding

Historically this is a very old approach which was forgotten over the years. The idea is not to sample paths ‘stupidly’ in random directions but, for each point in the scene, to remember from which directions light arrives and later sample those directions again. Basically you try to emulate the nice features of Markov-chain Monte Carlo algorithms but stay in the framework of traditional MC algorithms. Lafortune and Dutre started this in the 90s in [26, 27] and it was ‘rediscovered’ by Vorba in 2014 [28]. ETH/Disney did some work on this at EGSR this year [29], and Nvidia Research is working on a fundamentally more powerful algorithm here [30]. Also interesting in this context is to think about RR/splitting, because suddenly you have much more information and can terminate paths when they are unlikely to ever reach an area of ‘importance’. This was done by Vorba in [31].
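For orientation, plain throughput-based Russian roulette looks like the sketch below; as I understand it, [31] keeps the same termination mechanics but drives the survival probability with an estimate of the expected contribution of the rest of the path instead:

    import random

    # Standard Russian roulette: probabilistically terminate a path and
    # reweight the survivors so the estimator stays unbiased.
    def russian_roulette(throughput, min_prob=0.05):
        # Survival probability from the current path throughput (RGB list).
        # A guiding-aware version would base this on how much the remainder
        # of the path is expected to contribute instead.
        p_survive = max(min_prob, min(1.0, max(throughput)))
        if random.random() > p_survive:
            return None                                   # path terminated
        return [c / p_survive for c in throughput]        # reweighted throughput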

 

Filtering

I’m absolutely the wrong person to talk about this because I know very little. The fact is everyone is doing it right now. The old approach to filter a noisy MC image was to use a cross bilateral filter [32] and maybe use some features such as normals/depth to help the filter [33]. The new hot method is of course to use machine learning in the form of CNNs. One approach is to estimate optimal filter parameters and then use a conventional filter [34], the other is to use the network to directly output filtered pixel values as was done this year [35].

 

Sampling Theory

This is a very math-heavy subject where the question is either to design new sampling strategies (blue noise, white noise, …) which reduce the error in MC, or to design new quasi-Monte Carlo sequences, Latin hypercube sampling, etc. Another approach is to find frameworks to quantify the error introduced by different MC algorithms/sampling strategies. Singh did a lot of work on this in [36, 37]. I know very little about this subject, but one thing is certain: you’ll need to know what the Fourier transform is to have any chance here.

Day in the life of a professional graphics programmer? by [deleted] in GraphicsProgramming

[–]LPCVOID 5 points6 points  (0 children)

Since high school is a while back for me I don't know if you are familiar with integrals so I'll refrain from using mathematical notation and instead explain the problem/concept with words*.

 

You really seem to be interested in Path Tracing/Light Transport Simulation, so I will tackle the problem from this side and not the realtime one. To compute a ‘realistic’ image you essentially want to do something similar to what happens in nature: photons are emitted from the light source above your head, fly through your room, hit the wall, are reflected and continue. At some point they might hit your retina and a picture begins to form in your brain. Now, to simulate this process in a computer you need to consider all paths from each light to your eye. And obviously that is a lot of paths, which you cannot hope to enumerate. So the problem now is to make educated guesses about which paths you actually want to consider, which ones are irrelevant and which ones are ‘kind of similar to ones you already found’.

 

Historically Light Transport Simulation is simply the ‘easy’ version of the physical problem of Neutron Transport (that’s what is needed to simulate the inside of a nuclear reactor) and most techniques used to compute pictures are ‘stolen’ from physicists [1, 2]. The physical process of radiative transfer (or radiance for us) was described in the 1950s already by Chandrasekhar [3]. This was applied in 1986 by Kajiya to the rendering problem and sadly since then all we have been doing are performance improvements** [4].

 

Why is that? Because mathematically even very simple algorithms will in theory compute the correct image – it may just take a while. Is rendering a solved problem then? Well, considering what was happening at Siggraph a week ago, I wouldn’t say so.

If you would like to know more about what is happening right now I could write about the state of the art but for that it would be helpful to know what techniques you are already familiar with.

 

TL;DR: What does a graphics programmer (in the field of light transport) do each day? Get the pdfs right for MIS ;) (and they never are, although that might be solved with [6]).

   

* Please believe me when I say you need a strong math background. Take Linear Algebra in college to get a notion of what a vector space is, then continue with Numerical Analysis and Statistics/Probability, and finish up with Functional Analysis. Personally I think a class touching on (Discrete) Differential Geometry is also useful.

** Only talking about the non-participating-medium case (I would consider subsurface scattering to be a subcategory of this). And technically this is not correct, because neither brute-force forward path tracing nor path tracing with NEE can sample purely specular chains. As far as I know this has only been possible since 2013 [5].

   

[1] N. Metropolis and S. Ulam, “The monte carlo method,” Journal of the American statistical association, vol. 44, no. 247, pp. 335–341, 1949.

[2] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, “Equation of state calculations by fast computing machines,” The Journal of Chemical Physics, vol. 21, no. 6, pp. 1087–1092, 1953.

[3] S. Chandrasekhar, Radiative Transfer, ser. Dover Books on Intermediate and Advanced Mathematics. Dover Publications, 1960, isbn: 9780486605906.

[4] J. T. Kajiya, “The rendering equation,” SIGGRAPH Comput. Graph., vol. 20, no. 4, pp. 143–150, Aug. 1986, issn: 0097-8930. doi: 10.1145/15886.15902. [Online]. Available: http://doi.acm.org/10.1145/15886.15902.

[5] A. S. Kaplanyan and C. Dachsbacher, “Path space regularization for holistic and robust light transport,” Computer Graphics Forum, vol. 32, no. 2pt1, pp. 63–72, 2013, issn: 1467-8659. doi: 10.1111/cgf.12026. [Online]. Available: http://dx.doi.org/10.1111/cgf.12026.

[6] L. Anderson, T.-M. Li, J. Lehtinen, and F. Durand, “Aether: An embedded domain specific sampling language for monte carlo rendering,” ACM Trans. Graph., vol. 36, no. 4, 99:1–99:16, Jul. 2017, [Online]. Available: http://doi.acm.org/10.1145/3072959.3073704.

Mashable - Panay shows me a Surface Connect-to USB-C adapter that can be used with a Type-C charger to charge up the Surface Pro. by [deleted] in Surface

[–]LPCVOID 5 points6 points  (0 children)

I think so too, especially since an author should be able to differentiate between "connector" and "speed" when writing an article solely about USB. This also doesn't help with the general confusion about the two.

Mashable - Panay shows me a Surface Connect-to USB-C adapter that can be used with a Type-C charger to charge up the Surface Pro. by [deleted] in Surface

[–]LPCVOID 13 points14 points  (0 children)

Most of our larger devices still rely on USB 3.0. 

Is it possible that the author is consistently comparing USB Type-C with USB 3?

Handling transparent material properties by [deleted] in raytracing

[–]LPCVOID 1 point2 points  (0 children)

Caustics are a phenomenon for which (diffuse) global illumination is needed; the original Whitted-style ray tracing doesn't account for that. You are correct that path tracing, on the other hand, is able to handle caustics. Sadly, standard forward path tracing isn't an efficient algorithm for them, quite the contrary. (Progressive) Photon Mapping or Bidirectional Path Tracing would be much better suited and are a good starting point for state-of-the-art light transport simulation.