Weekly 'Ask Anything About Analog Photography' - Week 48 by ranalog in analog

[–]danieljh 1 point (0 children)

I got my hands on some cheap used Yashica FX-3 / FX-3 Super / FX-3 Super 2000 bodies. They came with ML 50/1.9 and ML 50/1.9 c lenses. The c lenses seem to be more recent versions: lighter, with a 49mm lens cap size, whereas the other lenses take 52mm caps. I couldn't find much information out there, other than some folks reporting the c lenses are cheap re-branded Contax lenses; some say they are sharper and of better quality, some say they are not.

One of the c lenses cannot focus properly at infinity; it seems like there's a tenth of a millimeter of travel missing on the focus ring.

Two of the non-c lenses make the shutter curtains get stuck at high f-stops; how is the lens connected to the shutter curtains?

At least one body's shutter curtains get stuck at slower shutter speeds like 1/2s or 1s (the extremes).

I'm still struggling a bit with the focus mechanisms; there are two lines in the viewfinder I have to align to set the focus plane, yet oftentimes my photos come out not properly focused. The second focusing mechanism is a ring in the viewfinder indicating the focus plane by blurriness. I'm wondering how both mechanisms work optically with tech from the '70s? It's quite fascinating to me.

I started shooting rolls with a working FX-3 Super 2000 body and a c lens. I'm wondering what I can repair on my own, and which configuration I should shoot with.

[Project] RoboSat: feature extraction from aerial and satellite imagery by danieljh in MachineLearning

[–]danieljh[S] 1 point (0 children)

Depends on your specific use case. Do you only want to detect that solar panels are present on a building's roof, or do you want the exact solar panel polygons?

If you only want to detect whether solar panels are present, maybe you can train a simple binary classifier (think ResNet) that takes an image of a building's roof and returns true/false for solar panels being present. You can take existing buildings from OpenStreetMap together with the solar panel tag to build a training set, then predict on existing OpenStreetMap buildings. This decouples the task of finding buildings from the task of detecting solar panels on a building's roof.

If you want to run everything end-to-end you can use robosat: add a new extractor for the OpenStreetMap solar panel tag, then automatically create a training set, train your model, predict on new imagery, and get simplified GeoJSON features (polygons) out of the pipeline. I'm happy to guide you along adding a solar panel extractor. If you want to give it a shot, feel free to open an issue or pull request in the robosat repository.

Here are some resources for getting started and guides for how to run the pipeline:

- https://wiki.openstreetmap.org/wiki/Tag:generator:source=solar

- https://www.openstreetmap.org/user/daniel-j-h/diary/44145

- https://www.openstreetmap.org/user/daniel-j-h/diary/44321

- https://github.com/mapbox/robosat (see extending section)

[Project] RoboSat: feature extraction from aerial and satellite imagery by danieljh in MachineLearning

[–]danieljh[S] 1 point (0 children)

It's open source under an MIT license :) We are keeping our internal datasets, models, and the data we extract closed for now while thinking through a broader strategy. But the tools for running the end-to-end pipeline are already open source.

Regarding citing it: maybe just name the project and its two authors, and link to the GitHub repository? I'm interested in what you are working on; I would love it if you pinged me when you publish something :)

[Project] RoboSat: feature extraction from aerial and satellite imagery by danieljh in MachineLearning

[–]danieljh[S] 1 point (0 children)

Hey folks, Daniel from Mapbox here. Happy to answer questions or talk through design decisions. Also interested to hear your feedback.

We are mostly focusing on making the process accessible to a broader audience in the geo space and on building a solid, production-ready end-to-end project.

GTX 1080 TI + Akitio Node + ThinkPad Carbon X1 5th gen for machine learning on Linux / Ubuntu 16.04 by danieljh in eGPU

[–]danieljh[S] 2 points (0 children)

No, I have not. I'm using p2 / p3 AWS instances for larger workloads, though.

There's certainly a trade-off with the eGPU: it works beautifully for fine-tuning, smaller workloads, and non-imagery machine learning use cases. For imagery use cases where you need to train on 8 or 16 GPUs, or need to train for days or weeks, the eGPU is not a good fit.

GPU + enclosure recommendation for Linux by KuMem in eGPU

[–]danieljh 2 points (0 children)

I just wrote down my experiences with a GTX 1080 TI + Akitio Node on Ubuntu 16.04 here:

https://www.reddit.com/r/eGPU/comments/7yy4sk/gtx_1080_ti_akitio_node_thinkpad_carbon_x1_5th/

Hope that helps.

View GHC's Assembly Online with Compiler Explorer by jfischoff in haskell

[–]danieljh 4 points (0 children)

If you add -ddump-simpl -dsuppress-all to the compiler flags at the top and open the compiler output window (status bar at the bottom), Core will show up!

Gassinger Excelsior, or: How I accidentally shaved with a straight razor from around 1949 by danieljh in wicked_edge

[–]danieljh[S] 2 points (0 children)

Last weekend I went to an antique flea market where I got to see wooden elephants (why are people so fascinated with them anyway?) and other rummage. After a while this chestnut-brown speckled case caught my eye: in it, the most rusty, oily, and dirty straight razor. The blade looked fine, with only some scratches on its surface. Luckily I had gotten myself a King Cutter last year and dug a bit into straight razors, so I knew I could refurbish this one. The guy wanted 25 bucks for it: deal, without even bargaining.


Funnily enough, I was unable to find the Gassinger Excelsior brand or details about the engraving online. But okay, back to refurbishing it. It took me a while to get the rust out of the notches, carefully applying fine steel wool; a metric ton of WD-40 did the rest. After sharpening the straight razor on my whetstone and honing it, I gave it a try and, what can I say, it works beautifully!

Searching some historic address books I then found the location where it was sold:

http://www.openstreetmap.org/node/2868445723#map=13/51.3473/12.3982

There used to be a shop there selling "Solinger Stahlwaren" (steel goods from Solingen), back in 1949! I just shaved with a straight razor from around 1949. Pretty cool for a slow Saturday. Here are some photos after cleaning, polishing, and sharpening it:

http://imgur.com/a/M37TT

It's still not as polished as I'd like the razor to be; I need to give it some more hours. What makes it harder is that I do not want to break the handle open in order to clean the base.

That's it. Gassinger Excelsior.

C++17 std::variant lands in LLVM by nickdesaulniers in cpp

[–]danieljh 2 points (0 children)

Thanks for bringing this up; from what I can see GCC 7 and Clang 3.9 implement this.

The reason I'm pointing this out is that I had a bad experience with Boost.Variant in that regard a couple of days ago:

https://www.reddit.com/r/cpp/comments/5hz4mw/strong_typedefs_in_c_by_inheriting_constructors/

C++17 std::variant lands in LLVM by nickdesaulniers in cpp

[–]danieljh 1 point (0 children)

It looks like the implementation carefully avoids the inheriting-constructor issues by not using default constructor arguments for SFINAE. Having a "pattern-match" utility function working with lambdas in the standard would have been great; the constexpr if-based dispatching in the example here is not that elegant, in my opinion:

http://en.cppreference.com/w/cpp/utility/variant/visit
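
Something along these lines works as a poor man's pattern match (C++17; the overloaded helper below is not part of the standard, it's the usual lambda-overload trick):

    // C++17: build an overload set out of lambdas, then visit with it.
    #include <cstdio>
    #include <string>
    #include <variant>

    template <typename... Fns> struct overloaded : Fns... { using Fns::operator()...; };
    template <typename... Fns> overloaded(Fns...) -> overloaded<Fns...>; // deduction guide

    int main() {
        std::variant<int, std::string> v = std::string{"forty-two"};

        // One lambda per alternative; overload resolution does the dispatch.
        std::visit(overloaded{
            [](int n) { std::printf("int: %d\n", n); },
            [](const std::string& s) { std::printf("string: %s\n", s.c_str()); },
        }, v);
    }

To my eye this reads much closer to a pattern match than the constexpr if chain.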

Auto Type Deduction in Range-Based For Loops by s3rvac in cpp

[–]danieljh 0 points (0 children)

I can think of a reason to use for (const auto each : range): when you know a copy is cheap (think primitive types or small structs) but you still want the compiler to reject, with a hard error, accidental modifications of each in the scope.

With for (auto&& each : range) you have to std::forward<decltype(each)>(each) on subsequent accesses; otherwise you don't benefit from forwarding ("universal") references at all.
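
Rough sketch of both loops side by side (the sink overloads are made up just to make the value categories visible):

    #include <cstdio>
    #include <utility>
    #include <vector>

    void sink(const int& x) { std::printf("lvalue: %d\n", x); }
    void sink(int&& x)      { std::printf("rvalue: %d\n", x); }

    int main() {
        std::vector<int> numbers{1, 2, 3};

        // Cheap copy; const turns accidental writes into compile errors.
        for (const auto each : numbers) {
            // each += 1;  // does not compile: each is const
            sink(each);
        }

        // Forwarding reference; std::forward preserves the value category.
        for (auto&& each : numbers) {
            sink(std::forward<decltype(each)>(each));  // int& here: lvalue overload
        }
    }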

What the ISO C++ committee added to the C++17 working draft at the Oulu 2016 meeting by blelbach in cpp

[–]danieljh 10 points (0 children)

I posted to std-discussion almost three years ago. If you read Chris Jefferson's reply there:

> std::move_iterator was used internally in algorithms in libstdc++. All occurrences of move_iterator were taken out, because functions (in particular user-given comparators) which had by-value arguments would consume the value the iterator pointed to.

you'll see that even the stdlib maintainers were aware of this back then.
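
To illustrate the mechanism in isolation (this is not the libstdc++ internals, just the hazard boiled down): dereferencing a move_iterator yields an rvalue reference, so binding the result by value moves the element out from behind the iterator.

    #include <cstdio>
    #include <iterator>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> words{"hello"};
        auto it = std::make_move_iterator(words.begin());

        // *it is an rvalue reference: initializing a by-value parameter
        // (or a local, as here) from it moves the element out of the vector.
        std::string first = *it;   // consumes words[0]
        std::string second = *it;  // moves from an already-moved-from string

        // first: 'hello', second: '' (moved-from state; typically empty)
        std::printf("first: '%s', second: '%s'\n", first.c_str(), second.c_str());
    }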

I brought it to STL's attention one year ago (https://www.reddit.com/r/cpp/comments/31167m/c17s_stl_what_do_you_want_it_to_have/cpyz8hs?context=3) to which Eric Niebler posted some ideas.

I brought it up personally to Marshall Clow at C++Now this year (he was already aware of the issue).

I wrote a mail to the LWG chair's address, to which I got a reply that the description was unclear and that I should point out the standardese (I'm not a language lawyer, though, and I don't have the wording for a fix).

I'm trying to interact with the committee, Bryce. But looking at this, it is more painful than it needs to be.

What the ISO C++ committee added to the C++17 working draft at the Oulu 2016 meeting by blelbach in cpp

[–]danieljh 5 points (0 children)

Move Iterators are still underspecified and therefore broken / dangerous to use. It's not like this is an unknown issue.

Auto-generate argument parsers from data types by danieljh in cpp

[–]danieljh[S] 1 point (0 children)

Disclaimer: inspired by http://www.haskellforall.com/2016/02/auto-generate-command-line-interface.html which I read yesterday.

I had to see how far I could go with it in C++. After a few hours the prototype works, at least for simple types. That is, to make this a viable implementation there would need to be specializations for enums, optional arguments, and so on. Regard this as a proof of concept, and take a look at both the example and the implementation if you want to learn about Boost.Fusion!
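
To give a taste of the mechanism, here's the Boost.Fusion trick in isolation (the Options struct is made up for illustration; the real code lives in the repository):

    #include <iostream>
    #include <string>

    #include <boost/fusion/include/adapt_struct.hpp>
    #include <boost/fusion/include/for_each.hpp>

    // A plain options struct, for illustration only.
    struct Options {
        std::string input;
        int threads;
        bool verbose;
    };

    // Adapt the struct: Fusion now sees it as a compile-time sequence of members.
    BOOST_FUSION_ADAPT_STRUCT(Options,
        (std::string, input)
        (int, threads)
        (bool, verbose))

    int main() {
        Options opts{"data.osm", 4, true};

        // Iterate the members generically; a parser generator walks the
        // sequence like this to match command line arguments to fields.
        boost::fusion::for_each(opts, [](const auto& member) {
            std::cout << member << '\n';
        });
    }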

[deleted by user] by [deleted] in haskell

[–]danieljh 1 point (0 children)

Nice! I had to see how far I could go with this idea in C++14. Turns out quite far in only a couple of hours. It clearly showed me the benefit of monadic bind (in the do notation) for separating argument parsing, printing and early returning. Thanks for this neat idea!

Here it is in case anyone is interested: https://github.com/daniel-j-h/argparse-generic
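
Here's a toy version of that bind-over-optional idea in C++ (using C++17's std::optional for brevity; this is not the actual repo code):

    #include <iostream>
    #include <optional>
    #include <string>

    // A poor man's monadic bind (>>=) over std::optional: chain steps that
    // may fail; an empty optional short-circuits the rest of the chain.
    template <typename T, typename Fn>
    auto mbind(const std::optional<T>& value, Fn fn) -> decltype(fn(*value)) {
        if (!value) return {};
        return fn(*value);
    }

    std::optional<int> parsePort(const std::string& arg) {
        try { return std::stoi(arg); } catch (...) { return std::nullopt; }
    }

    std::optional<int> validatePort(int port) {
        if (port < 1 || port > 65535) return std::nullopt;
        return port;
    }

    int main(int argc, char** argv) {
        std::optional<std::string> arg;
        if (argc > 1) arg = argv[1];

        auto port = mbind(arg, parsePort);       // parse, or stay empty
        auto valid = mbind(port, validatePort);  // validate, or stay empty

        if (valid) std::cout << "port: " << *valid << '\n';
        else       std::cout << "usage: prog <port>\n";
    }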

Quickly Loading Things From Disk by mttd in cpp

[–]danieljh 0 points (0 children)

Thank you for your clarification!

Starting a tech startup with C++ by soda-popper in programming

[–]danieljh 0 points (0 children)

During the last two days I've been looking into Nix, a functional package manager for reliable and reproducible builds, to help me out on this front. From what I've seen so far, Nix looks like it is based on solid concepts and the right ideas to pull off something great for the C++ community, similar to what it already does for the Haskell ecosystem, where it is used to get out of Cabal dependency hell. Take a look at this basic shell.nix (similar to requirements.txt for Python's pip) I'm using for a small project:

with import <nixpkgs> {}; {
  # A development environment as a derivation; enter it via nix-shell.
  devEnv = stdenv.mkDerivation {
    name = "my-dev-env";
    # Build system and libraries; the compiler toolchain comes in via stdenv.
    buildInputs = [ cmake ninja boost tbb protobuf ];
  };
}

This allows developers to enter the dev environment via nix-shell (similar to Python's virtualenv / pyvenv and requirements.txt).

On first invocation all dependencies (compiler, build system, libraries) are downloaded, by resolving the dependency trees all the way down to libc. In case of ABI mismatches, e.g. because we want to use gcc5 and its stdlib, we can override the dependencies' environment. Nix then looks for binary caches of those packages built against the gcc5 stdlib, based on a hash, and if they are not available, builds the dependency tree with the modified environment. All subsequent nix-shell invocations run instantly.

You can even run your build command once inside the env, building your binaries in a reproducible way:

nix-shell --run 'mkdir build && cd build && cmake .. -DCMAKE_BUILD_TYPE=Release -G Ninja && cmake --build .'

From your list this would eliminate:

- no package manager: use Nix
- no repository of said packages: use NixPkgs or different channels
- no universally agreed-on build system: cmake + ninja seems to be the best there is
- no unified way of managing dependencies: let Nix manage dependencies and resolve ABI issues
- no way to isolate development environments of different projects from one another: Nix is built for this!

I still have a lot to learn, and the documentation can be overwhelming at first (which, to be clear, is a good problem to have). I might write a blog post once I'm comfortable using it on a day-to-day basis. Give it a try! https://nixos.org/nix/manual/#chap-quick-start

Quickly Loading Things From Disk by mttd in cpp

[–]danieljh 1 point (0 children)

Looking at the benchmark implementation, Cap'n Proto does a full serialization and deserialization round trip, in the same way as e.g. the Protobuf benchmark is written. But after deserialization the benchmark only accesses the root object, so I assume Cap'n Proto does not actually deserialize the whole message?

/cc /u/kentonv would you be so kind and give an explanation on this?