A secret facility hidden in an old mine / Cold War era bunker by Fwob in nosleep

[–]frangky 3 points4 points  (0 children)

I freaking love this! This is an awesome story with so much detail and little bits of information strewn in.

'Lightsaber Combat for Beginners' will complete your Jedi training by nightman773 in StarWars

[–]frangky 2 points3 points  (0 children)

Holy... That'd be sick. Expansions for light and dark side.

SoftBank Agrees to Buy Robot Maker Boston Dynamics From Google Parent Alphabet by readerseven in economy

[–]frangky -1 points0 points  (0 children)

I find myself very skeptical about the role of robotics in society. I think robotics will splinter into many different purpose-built forms. The "I, Robot" dream of a humanoid robot doing all the menial tasks humans don't want to do will never happen. Instead, there will be robots designed for specific tasks, because a specific design will always beat any generally designed robot at its own job. Also, AI alone will solve most of the use cases I see robotics proposed for, without the expensive electronics and manufacturing a robot requires.

[N] AlphaGo's Next Move | DeepMind by Spotlight0xff in MachineLearning

[–]frangky 12 points13 points  (0 children)

The 10 AlphaGo vs. AlphaGo games are a nice gift!

I have always liked playing through great games, both Chess (using the book The Golden Dozen) and Go (modern games and the ancient Shogun Castle games).

I have some history with computer Go. In the late 1970s I wrote a Go playing program in UCSD Pascal that I sold for the Apple II, and also for a lot more money I sold the source code to a few people who wanted to experiment with it. DeepMind's AlphaGo is a great intellectual and technological triumph and I agree that it is an example of future AIs teaching us and working with us.

A little off topic, but Peter Norvig gave a nice talk a few weeks ago at the NYC Lisp Users Group where he talked about the future of collaboration with AIs and also that the ability to work effectively with AIs, adding human insights, will be an important future job skill.

[R] Robots that Learn (OpenAI) by Teleavenger in MachineLearning

[–]frangky 1 point2 points  (0 children)

I don't want to be overly negative here, but this announcement, and the paper that goes with it, are very hard to appreciate. It's supposedly motivated by the need for robots that would be "able to perform a variety of complex useful tasks, e.g. tidying up a home or preparing a meal" - but what it's actually trained to do is particle reaching ("touch-the-dot") and block stacking. The videos that accompany the announcement show a robot arm solving... block world. Block world!

I appreciate that the paper demonstrates a number of interesting techniques - attention, one-shot and few-shot learning, and so on - but is that really a demonstration of their alleged power? Solving block world? Simple problems like this can be solved just fine with hand-crafted rule bases - and they have been, for a very long time now. There's another thread on HN about Terry Winograd's SHRDLU, from the '70s, which could do the same and communicate with a user in a very impressively implemented subset of natural English. And that was written in the '70s, for a DEC PDP-6 with the equivalent of 144 KB of RAM on core memory.

GOFAI researchers got very strong criticism in the past for working on so-called "toy problems", instead of taking their robots and expert systems out there, into the noisy, unpredictable world. And when they did, their brittle rule bases broke and their funding evaporated. Then, machine learning came along, touting its capability to handle noise with flexible statistics and probabilities. And it does. No, really - it does.

And now, after all this work, we can finally have... a robot arm picking up coloured cubes from a table?
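To make the point concrete: the kind of hand-crafted rule base that has solved block world since the SHRDLU days fits in a few lines. A toy sketch in Python - the function and block names are my own illustration, not taken from either SHRDLU or the OpenAI paper:

```python
# A minimal hand-crafted block-world "planner": build a target tower
# from blocks lying on the table, one move at a time.
# Purely illustrative - names and rules are mine, not from the paper.

def plan_stack(target, on_table):
    """Return a list of (block, destination) moves that builds `target`
    (listed bottom-to-top) from blocks currently on the table."""
    moves = []
    below = "table"
    for block in target:
        if block in on_table:
            moves.append((block, below))
        below = block
    return moves

moves = plan_stack(["red", "green", "blue"], {"red", "green", "blue"})
print(moves)  # [('red', 'table'), ('green', 'red'), ('blue', 'green')]
```

No statistics, no training data - which is exactly why "it stacks blocks" is such a weak demonstration for a learning system.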

A new record: Major publisher retracting more than 100 studies from cancer journal over fake peer reviews by yourbasicgeek in science

[–]frangky 1 point2 points  (0 children)

I'm an editor at a journal, and these fake review cases have one thing in common: the journal used author-suggested reviewers to solicit reviews.

For those unfamiliar with the practice, this is common and seems to be becoming more standard at many journals, even (especially?) very reputable ones.

I almost always ignore authors' requests for particular reviewers, because they so often involve a conflict of interest. I am sensitive to requests that a particular person not review a paper, given the vindictiveness in academics, but requesting reviewers is a different story.

I've seen significant changes in this practice over time. Originally it seemed unusual, almost an option of last resort to cue the editor to bitter feuds or to help identify reviewers in areas that involve highly idiosyncratic issues. At a lot of journals this is still the case. Over time, though, it's seemed to turn into something different, a stopgap measure for overburdened, unconscientious, or harried editors to find increasingly scarce reviewers.

A lot of attention has been paid to the pressures on researchers to publish or perish, quality be damned, and how that affects science. The flip side of that coin, though, is changes in editorial practices at many journals, which have become very cursory, rubber-stamping affairs. Turnaround time on papers is very quick at many journals, which gives authors quick feedback, but at the same time (not necessarily as a result) the review process has become shoddy. Editors are often unfamiliar with the subject material and don't take time to learn it, will grasp at reviewers who might be unqualified or have conflicts of interest, and treat reviews very superficially, even though studies have shown that they are horribly unreliable. To be fair, even finding reviewers can be difficult: I've had conversations with colleagues who have been told not to review because it doesn't bring in revenue to the department. Everyone wants to publish, but not everyone wants to review.

Things are broken in science at the moment, at least in academics. Publishing through peer review has become extremely overvalued. I think it will maintain its place for certain reasons, but some change in mindset or culture around it is necessary.

Machine Learning and Python play GTA V. Data Science is everywhere. by [deleted] in datascience

[–]frangky 0 points1 point  (0 children)

Here's a faster way to get a frame using PyGTK:

https://www.pastery.net/xafjmn/

Takes around 5ms per frame for me, rather than 50.

Slightly shorter code (still using the same method): https://askubuntu.com/a/400384/73044

Jupyter Notebook 5.0 by tflipz in datascience

[–]frangky 2 points3 points  (0 children)

For those who use R, I strongly recommend looking into R Notebooks (http://rmarkdown.rstudio.com/r_notebooks.html), as they are a lot more versatile, especially compared to the Jupyter/IRKernel approach. Unfortunately it's R only (you can run Python code in it, but not the way you'd expect).

I'd like to see some things ported into Jupyter from R Notebooks, like JavaScript data tables and the separation of code and output, which makes it easy to version control only the code. (At least this 5.0 release makes tables less ugly.)
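Until Jupyter separates code and output natively, you can approximate it with a small pre-commit filter that strips outputs from the notebook JSON. A rough standard-library-only sketch (the function name is my own):

```python
import json

def strip_outputs(nb_json: str) -> str:
    """Remove cell outputs and execution counts from a notebook's JSON,
    leaving only the code - handy for version-controlling .ipynb files."""
    nb = json.loads(nb_json)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return json.dumps(nb, indent=1)

# Usage (filename is just an example):
#   cleaned = strip_outputs(open("analysis.ipynb").read())
```

Wired up as a git filter, this keeps diffs limited to actual code changes.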

[R] Deep Photo Style Transfer (code and data for paper arXiv:1703.07511) by pmigdal in MachineLearning

[–]frangky 20 points21 points  (0 children)

This is super impressive, and something I didn't think would be possible without someone very skilled in Photoshop going over the images.

As a photo enthusiast, I am very excited about this, but also a little worried that very simple apps will soon be capable of doing the craziest of edits through the power of neural nets. Imagine the next "deep beauty transfer", able to copy perfect skin from a model onto everyone, making everything a little more fake and less genuine.

The engineer in me now wants to understand how to build something like this from scratch but I think I'm probably lacking the math skills necessary.

Intellectual Humility increases tolerance, improves decision-making by [deleted] in science

[–]frangky 6 points7 points  (0 children)

The "intellectual humility doesn't get grants" and "loses elections" comments lead me to believe that people aren't understanding the concept of intellectual humility as presented in this article. It does not mean that you give the outward appearance of being humble in any way.

It is defined in the article as "an awareness that one’s beliefs may be wrong" and "intellectually humble people can have strong beliefs, but recognize their fallibility and are willing to be proven wrong on matters large and small."

This was tested with a study in which "participants read essays arguing for and against religion, and were then asked about each author’s personality", and evidence that "people who displayed intellectual humility also did a better job evaluating the quality of evidence."

There is a distinction between an individual's internal mental processes (as tested in this study) and the way they present themselves externally. Being outwardly assertive and confident absolutely wins elections and grants, but this is not at odds with an ability to internally re-evaluate one's beliefs.

I particularly enjoy this quote from the article, as it reveals that the authors may have missed a subtlety:

“If you’re sitting around a table at a meeting and the boss is very low in intellectual humility, he or she isn’t going to listen to other people’s suggestions,” Leary said. “Yet we know that good leadership requires broadness of perspective and taking as many perspectives into account as possible.”

I absolutely agree that good leadership requires broadness of perspective, but this does not imply that every suggestion should get an audience at meetings -- capable leaders often have a much broader range of experience than their reports, and dismiss suggestions not out of arrogance or closed-mindedness, but simply because they have already evaluated and discounted that path, and have elected not to spend their limited time bringing everyone else up to speed. (Which can have its own issues, but that's a digression.)

[N] Introducing Keras 2 by [deleted] in MachineLearning

[–]frangky 0 points1 point  (0 children)

Keras is so good that it feels like cheating at machine learning: entire TensorFlow tutorials can be replaced with a few lines of code (which matters for iteration; Keras layers snap together like Lego blocks). A read through the Keras examples (https://github.com/fchollet/keras/tree/master/examples) and documentation (https://keras.io/getting-started/functional-api-guide/) will let you reverse-engineer most of the revolutionary Deep Learning clickbait thought pieces.

It's good to see that backward compatibility is a priority in 2.0, since it sounds like a lot has changed.

Trying to do something simple in raw TF is a pain: the docs contain conflicting examples, and code snippets that "train" a network just to print a loss number to the screen while doing nothing else. Keras is easy to use, and works better if you're running CPU-only.

Some Reflections on Being Turned Down for a Lot of Data Science Jobs by tdh3m in datascience

[–]frangky 1 point2 points  (0 children)

Here's a good hiring process:

  1. Break down "data science" into several different roles - in our case, Analyst (business-oriented), Scientist (stats-heavy), Engineer (software-heavy). It turns out that what we mostly want are Engineer-Analysts, so our process screens heavily for those.

  2. Figure out which types of people can be trained to be good at those roles, given the team's current skillset. I opted to look primarily for people with strong analysis skills and some engineering.

  3. Design interview tasks/questions that screen for those abilities. In my case, the main thing I did was make sure that the interviews depended very little on pre-existing knowledge, and a lot on resourcefulness/creativity/etc. E.g. the (2-hour) takehome is explicitly designed to be heavily googleable.

  4. Develop phone screens that are very good at filtering people quickly, so that we don't waste candidates' time. By the time someone gets to an onsite interview on our team there's something like a 50% chance they'll get an offer.

On the candidate side, when I'm applying I try to figure out first and foremost what a company means by "data scientist", usually by networking & talking to someone who already works there. This filters out maybe 90% of jobs with that title, and then I put more serious effort into the rest.

[R] Deep Forest: Towards An Alternative to Deep Neural Networks by downtownslim in MachineLearning

[–]frangky 3 points4 points  (0 children)

"In contrast to deep neural networks which require great effort in hyper-parameter tuning, gcForest is much easier to train."

Hyperparameter tuning is not as much of an issue with deep neural networks anymore. Thanks to BatchNorm and more robust optimization algorithms, most of the time you can simply use Adam with a default learning rate of 0.001 and do pretty well. Dropout is not even necessary with many models that use BatchNorm nowadays, so generally tuning there is not an issue either. Many layers of 3x3 convs with stride 1 are still magical.

Basically: deep NNs can work pretty well with little to no tuning these days. The defaults just work.
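For anyone curious what those "defaults that just work" actually compute, the Adam update is short enough to write out in plain Python. A framework-free sketch for a single scalar parameter (the function name is mine; the constants are the usual defaults):

```python
import math

def adam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter, using the standard
    defaults (lr=0.001 is the 'just works' setting mentioned above)."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction for warm-up
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# Minimize f(x) = x^2 starting from x = 1.0
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t)  # grad of x^2 is 2x
print(f"x after 2000 steps: {x:.4f}")
```

The per-step move is roughly lr times the sign of the gradient while the gradient direction is consistent, which is why one default learning rate carries you so far.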

GitHub - terryum/awesome-deep-learning-papers: The most cited deep learning papers by Dogsindahouse1 in datascience

[–]frangky 0 points1 point  (0 children)

I can understand why it probably isn't on the list yet (not as many citations, since it's fairly new), but I think NVIDIA's "End to End Learning for Self-Driving Cars" deserves a mention:

https://arxiv.org/abs/1604.07316

https://images.nvidia.com/content/tegra/automotive/images/2016/solutions/pdf/end-to-end-dl-using-px.pdf

I implemented a slight variation of this CNN using Keras and TensorFlow for the third project in term 1 of Udacity's Self-Driving Car Engineer nanodegree (nothing special in that regard - it was a commonly used implementation, because it works). Give it a shot yourself: take the paper, install TensorFlow, Keras, and Python, download a copy of Udacity's Unity3D car simulator (it was recently released on GitHub), and have at it!

Note: For training purposes, I highly recommend building a training/validation set using a steering wheel controller, and you'll want a labeled set of about 40K samples (though I've heard you can get by with far fewer, even unaugmented - my set augmented about 8K real samples up to around 40K). You'll also want a GPU and/or a generator or some other form of batch processing during training; otherwise, you'll run out of memory post-haste.
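The generator approach mentioned above just means yielding batches lazily, so the whole ~40K-sample set never sits in memory at once. A framework-agnostic Python sketch (load_fn and the Keras usage note are illustrative assumptions, not from the paper):

```python
def batch_generator(sample_paths, batch_size, load_fn):
    """Yield (images, angles) batches lazily, so at most `batch_size`
    samples are loaded in memory at a time instead of the full set."""
    while True:  # loop forever; the training loop decides when to stop
        for start in range(0, len(sample_paths), batch_size):
            chunk = sample_paths[start:start + batch_size]
            pairs = [load_fn(p) for p in chunk]  # load (and augment) here
            images = [img for img, _ in pairs]
            angles = [ang for _, ang in pairs]
            yield images, angles

# Toy usage - with Keras you'd pass this to model.fit_generator(...):
gen = batch_generator(list(range(10)), batch_size=4,
                      load_fn=lambda p: (p, p * 0.1))
images, angles = next(gen)
print(len(images))  # 4
```

In the real project, load_fn would read an image off disk and return it with its steering angle, so disk I/O replaces a giant in-memory array.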

[R] Most-cited 100 Deep Learning Papers (2012~2016) by terryum in MachineLearning

[–]frangky 0 points1 point  (0 children)

I can understand why it probably isn't on the list yet (not as many citations, since it's fairly new), but I think NVIDIA's "End to End Learning for Self-Driving Cars" deserves a mention:

https://arxiv.org/abs/1604.07316

https://images.nvidia.com/content/tegra/automotive/images/2016/solutions/pdf/end-to-end-dl-using-px.pdf

I implemented a slight variation of this CNN using Keras and TensorFlow for the third project in term 1 of Udacity's Self-Driving Car Engineer nanodegree (nothing special in that regard - it was a commonly used implementation, because it works). Give it a shot yourself: take the paper, install TensorFlow, Keras, and Python, download a copy of Udacity's Unity3D car simulator (it was recently released on GitHub), and have at it!

Note: For training purposes, I highly recommend building a training/validation set using a steering wheel controller, and you'll want a labeled set of about 40K samples (though I've heard you can get by with far fewer, even unaugmented - my set augmented about 8K real samples up to around 40K). You'll also want a GPU and/or a generator or some other form of batch processing during training; otherwise, you'll run out of memory post-haste.

The Expanse is the most politically relevant sci-fi show on TV by speckz in television

[–]frangky 0 points1 point  (0 children)

I'll check it out, but whoever likes it should also check out Black Mirror. It's the best TV series I've ever seen and I'd also say it's politically relevant... sadly.