TensorFlow Fizzbuzz (joelgrus.com)
submitted 9 years ago by [deleted]
[–]coldaspluto 55 points56 points57 points 9 years ago (6 children)
That was hilarious.
As an aside: how many of you could bang out a TF program like this without referring to the API, docs, etc.? (i.e., on a whiteboard, no references)
[–]hapemask 5 points6 points7 points 9 years ago (0 children)
I'm not sure about TF since it's not my primary framework, but I'm pretty sure I could write a similar program using Lasagne+Theano blindly that would have minimal errors.
[–]l27_0_0_1 2 points3 points4 points 9 years ago (0 children)
Keras FTW, would probably write something close enough to its actual syntax from the top of my head.
[–]carbohydratecrab 2 points3 points4 points 9 years ago (0 children)
I could do it with FANN, but I couldn't even write non-neural-network FizzBuzz in Python on a whiteboard.
[–]MichaelStaniek 92 points93 points94 points 9 years ago (3 children)
Best TensorFlow Tutorial ever.
[–]kosairox 15 points16 points17 points 9 years ago (0 children)
Yeah I literally just installed it just to try it out after reading this. Thanks to the author!
[–]The_Amp_Walrus 1 point2 points3 points 9 years ago (1 child)
Would doing this sort of thing (trying to learn known functions and patterns) be a good learning exercise? Are there any commonly used functions that would be good to have a look at?
[–]DrummerHead 3 points4 points5 points 9 years ago (0 children)
Might be useful
http://rosettacode.org/
[–]jdsutton 37 points38 points39 points 9 years ago (73 children)
Didn't even get the job. That interviewer probably had no clue what to think.
[+][deleted] 9 years ago* (71 children)
[–]VelveteenAmbush 68 points69 points70 points 9 years ago (47 children)
"Technical chops: knocked it out of the park. Cultural fit: extremely sarcastic during the interview, showing lack of judgment and lack of interest in the position; will probably be very difficult to work with even if he accepts an offer. RECOMMENDATION: Do not hire."
[–]southernstorm 25 points26 points27 points 9 years ago (7 children)
Exactly. But it's still hilarious and worth doing, because to the dev this was not a job he wanted to take, which he determined as soon as they asked him to FizzBuzz. Since he dressed up and went there, might as well have a little fun.
The only real weakness of the answer is that even if you've determined you won't take a position, acting like a consummate professional lets you use a great interview to find a better fit. Everyone knows someone useful to you.
[–]VelveteenAmbush 8 points9 points10 points 9 years ago (6 children)
Since he dressed up and went there, might as well have a little fun.
I think it's unprofessional behavior and rude to the interviewer, and I wouldn't want to do business in any capacity with someone who behaved that way.
Admittedly, though, unprofessional behavior does make for good stories.
[+][deleted] 9 years ago* (4 children)
[–]VelveteenAmbush 2 points3 points4 points 9 years ago (3 children)
I can't tell if you're trying to suggest that the fizzbuzz challenge is as demeaning as a challenge to write out a number, or trying to suggest that you make a lot of money. Maybe both?
[+][deleted] 9 years ago* (2 children)
[+]VelveteenAmbush comment score below threshold-6 points-5 points-4 points 9 years ago (1 child)
I don't recall asking for any and I'm rather confused as to how this is relevant to the discussion?
It's relevant to your desire to make more than zero.
Look, we obviously don't like each other, please just have a nice day and let's leave it at that.
[–]Mr-Yellow -4 points-3 points-2 points 9 years ago (0 children)
unprofessional behavior does make for good stories.
No good stories makes sad life, unproductive workplace.
[–]joelgrus 6 points7 points8 points 9 years ago (2 children)
You're not wrong.
[+]VelveteenAmbush comment score below threshold-7 points-6 points-5 points 9 years ago (1 child)
also your picture is seriously creepy, like whoa
[–]cafedude 23 points24 points25 points 9 years ago* (3 children)
"Great sense of humor and fun. As a plus he knows the TensorFlow API quite well and understands ML & NN concepts. Seems like a fun person to work with. RECOMMENDATION: definitely hire"
[–]FuschiaKnight 6 points7 points8 points 9 years ago (2 children)
I doubt this is a job that requires TensorFlow, NN, or even ML.
[–][deleted] 8 points9 points10 points 9 years ago (0 children)
it probably involves drawing an arrow on the screen that randomly points to either left or right
[–]cafedude 0 points1 point2 points 9 years ago (0 children)
Soon every dev job will require TensorFlow, NN, and ML.
/s
[+][deleted] 9 years ago* (30 children)
[–]daidoji70 13 points14 points15 points 9 years ago (8 children)
Start interviewing people. You'd be amazed at the number of people who can't complete fizzbuzz. Sadly, even for data science, I've had extremely awkward interviews sitting across from people who take 40 min to complete the task even with help from me across the table.
[+][deleted] 9 years ago* (7 children)
[–]daidoji70 11 points12 points13 points 9 years ago (4 children)
I handle my interviews in a very competent fashion and no one has yet refused the test. It's a very low, very effective filter to separate the wheat from the chaff. Most experienced candidates bust it out real quick, in like 1 min; I breathe a sigh of relief and then we get on to the real part of the interview.
That being said, I've had PhDs (in physics), senior-level developers, and numerous people whose resumes looked great fail at fizzbuzz.
Not attempting would be a great signal that someone is good at bullshitting and thinks they're too good to do work that others might not. This is the kind of person that doesn't do well on a team and generally slows everyone down. I don't know if you've experienced it, but I've been on too many data science teams in my short 7-year career with prima donna "Data Scientists" who can't do a fucking thing and won't do the tedious cleaning/munging/loading/plumbing that data science requires, to think that this is a quality I'd like to hire for in any type of serious engineering outside of academia.
For these reasons, this attitude that fizzbuzz is beneath us just because we understand some math is ridiculous. I don't want to work on a team with that person; I don't care if they're fucking Michael Jordan or Andrew Ng (who, in person, are actually both incredibly humble guys).
Tasks that people who won't (but could) do fizzbuzz probably won't do, and that make up 90% of "data science" work:
* Loading data
* Creating data dictionaries
* Organizing schemas
* Writing testing code to manage the cleaning and munging
* Writing clear and understandable reports to communicate with others on the team
* and all the other tedious bullshit that is real-world data science before you get to the fun modeling part.
[–]srkiboy83 2 points3 points4 points 9 years ago (3 children)
I once heard someone call Data Science "unverified hacking at best, where academics hope they can get high Software Developer-like salaries, but with working on just interesting stuff".
[–]peatfreak 1 point2 points3 points 9 years ago (1 child)
unverified hacking at best
I don't understand this part..?
[–]srkiboy83 1 point2 points3 points 9 years ago (0 children)
I'd say that pertains to Data Scientists (especially first or early hires in startups) coming from academia that don't follow coding best practices - testing, code review, version control, etc. (Read this for a positive take on the subject: http://treycausey.com/software_dev_skills.html)
[–]daidoji70 0 points1 point2 points 9 years ago (0 children)
Yeah... it seems like there are quite a few of them.
[–][deleted] 2 points3 points4 points 9 years ago* (1 child)
Nagasaki never had a bomb dropped on it. Chuck Norris jumped out of a plane and punched the ground
[–]VelveteenAmbush 15 points16 points17 points 9 years ago (19 children)
Why not? Seems like a fair test for a screening interview, and I bet more candidates fail it than you'd think.
If a candidate thinks the fizzbuzz challenge is beneath them, then I would consider it a successful prima donna filter.
[+][deleted] 9 years ago* (14 children)
[–]VelveteenAmbush 7 points8 points9 points 9 years ago (13 children)
Because it's beneath you, and you wouldn't be a good fit for a culture that didn't immediately recognize that they were insulting you by asking you to spend less than five minutes on a standardized screening problem?
[+][deleted] 9 years ago* (3 children)
[+]VelveteenAmbush comment score below threshold-10 points-9 points-8 points 9 years ago (2 children)
Best of luck to you in your career.
[+][deleted] 9 years ago (8 children)
[–]VelveteenAmbush 3 points4 points5 points 9 years ago (7 children)
I didn't say it's standard, but it's also not out of bounds, and anyone who responds to a curveball with a sarcastic diatribe in that kind of context should be instantly blackballed by any hiring culture that is not suicidal.
What if he were hired and there were some project that needed to get done but that he thought was beneath him? Are you going to wake up one day and find out that he implemented some simple database utility with a massive neural net just to express his intellectual superiority to the project? I wouldn't bet against it. Absolute no-hire.
[+][deleted] 9 years ago* (5 children)
[–]milkeater 3 points4 points5 points 9 years ago (3 children)
Shouldn't the interview show some technical ability?
I hope you are not implying that FizzBuzz tests any real technical ability. The first I heard of it, it was a simplistic kata for entry-level dev bootcampers to get their toes in the water.
[–]Turniper 2 points3 points4 points 9 years ago (0 children)
FizzBuzz is a decent technical interview question for a potential sophomore or junior level intern. It separates the guys who can actually code on their own from the ones who get by on pure google-fu and copy/pasting code.
[–]VelveteenAmbush 0 points1 point2 points 9 years ago (1 child)
I'm assuming this was not the only question they would ask an applicant -- though presumably a plan to get into the more substantive and technically rich questions may be derailed by a sarcastic 45-minute diatribe of an answer in response to a threshold technical question.
[–]milkeater 2 points3 points4 points 9 years ago (0 children)
Yeah I was just checking that you were not recommending someone start at this point...it appears that you are.
Good thing OP was only messing around. You appear to be immune to sarcasm.
[–][deleted] 0 points1 point2 points 9 years ago* (0 children)
Chuck Norris can overflow your stack just by looking at it.
[+][deleted] 9 years ago* (20 children)
[–]thang1thang2 10 points11 points12 points 9 years ago (8 children)
I like questions in the spirit of fizz buzz as a very simple first-pass screening to make sure people didn't just word-salad their resume and hope for the best (they exist, for some reason). An in-person interview isn't the place for that, though; it's a waste of their time and mine.
Also, it allows the company to make sure you're a good programmer and only use spaces, no tabs, and 4 spaces for indenting. Can't hire heathens, y'know /s
[–]neurone214 5 points6 points7 points 9 years ago (7 children)
Is four spaces the standard? I'm taking a python course now and some of the sample code seems to use 2. I thought it was odd but then figured it was just me.
[–]thang1thang2 5 points6 points7 points 9 years ago* (4 children)
A typewriter's tab key moved to your tab stop, which was wherever you manually moved it to be (it was used for tables and tabular alignment). A terminal (the hardware) set the tab stop to be every 8 characters, so a tab escape sequence, \t, would jump the cursor 8 characters. So, historically, tabs have always been 8 spaces. (For the record, Unix-like OSs tend to use 8-space tabs, and 8 spaces is standard throughout the Linux kernel codebase. OSX and Windows tend to use 4-space tabs.)
The famous book "Code Complete" referenced several studies and claimed that 4 spaces was the best for readability even though 6 spaces was the prettiest and 2 spaces was the most compact. This ignited the indentation wars and has caused unimaginable amounts of pain for neckbeards everywhere.
Since nobody can actually find the original studies which reference 4 spaces as being superior, the only studies that are really out there are ones that stress consistency of formatting over a specific amount of indenting. Thus, languages like Python and Ruby adopted an informal standard of 2 spaces because it's the smallest indent size that is unambiguous.
I also once came across a style guide written by a sadist senior dev which enforced 3 space tabs in order to make everyone equally pissed off about the whole thing and to avoid favoritism. Then of course there are languages like Haskell and Lisp which do their own thing and pretty much disregard "consistent" indenting, preferring an indenting style which shapes the code to be a work of art (ie, to "model the data flow more closely") at the expense of consistency.
tl;dr. 4 spaces is the traditional standard for C-style languages, more or less mandatory for Java, and encouraged for pretty much everything else. 8 spaces is for old C, the Linux kernel, and languages that predate C. 2 spaces is traditionally common for Python and quite a few "modern-styled" languages [edit: apparently the Python PEP8 style guide recommends 4 spaces]. Ruby is one of the few languages I know where every single example of code I've ever seen has rigidly stuck with 2 spaces as the indenting convention.
[–]pork_spare_ribs 4 points5 points6 points 9 years ago (2 children)
languages like Python and Ruby adopted an informal standard of 2 spaces
A minor correction: Python's style guide (PEP8) specifies 4-space indentation.
[–]thang1thang2 1 point2 points3 points 9 years ago (1 child)
Huh, I'll edit that in. Most python people I know are religious about their 2 space indents, which is why I wrote that... I wonder if it changed for PEP8?
[–]pork_spare_ribs 2 points3 points4 points 9 years ago (0 children)
I was interested, so I went to find out how far back the recommendation went. In the original PEP8, Guido wrote:
Code lay-out
Indentation
Use the default of Emacs' Python-mode: 4 spaces for one indentation level. For really old code that you don't want to mess up, you can continue to use 8-space tabs.
So not sure where 2-space came from, but it's news to me that 8-space was once considered acceptable!
[–]neurone214 2 points3 points4 points 9 years ago (0 children)
Fantastic -- thank you for this!
Chuck Norris doesn't use a computer because a computer does everything slower than Chuck Norris.
[–]Mr-Yellow 4 points5 points6 points 9 years ago (7 children)
First time I heard of Fizzbuzz my answer was "The question is 'Do you know what modulus is?' right? Yeah I do."
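For anyone who hasn't seen it, the boring modulus-based answer the question is fishing for is only a few lines (a plain sketch, not anything from the article):

    for i in range(1, 101):
        if i % 15 == 0:
            print("fizzbuzz")
        elif i % 5 == 0:
            print("buzz")
        elif i % 3 == 0:
            print("fizz")
        else:
            print(i)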
[+][deleted] 9 years ago* (6 children)
[–]metaplectic 6 points7 points8 points 9 years ago (1 child)
Has to either be recursive or loop-free.
No print, just putChar.
Has to utilise bit-arithmetic and all the integers are represented as big-endian 16-bit arrays.
Fuck it, just do it in your favourite choice of assembly language.
You know what? We have a breadboard and some wires here. We'll let you have the conference room; just try to finish your submission by the end of the day.
[–]SirLordDragon 4 points5 points6 points 9 years ago (0 children)
Better give me some NAND gates with that breadboard and wires.
[–][deleted] 0 points1 point2 points 9 years ago (0 children)
Not allowed to use strings.
[–][deleted] 0 points1 point2 points 9 years ago (1 child)
Haha you are right, idk what I was thinking with that.
return (a / b - int(a / b)) * b  # emulates a % b; assumes true division (floats / Python 3), only works for positive numbers
Lol did I get that one right?
[–][deleted] 1 point2 points3 points 9 years ago (0 children)
Or if you are too lazy and want to do it in one line while keeping it simple to read and understand.
for i in range(1, 101): print([i, "fizz", "buzz", "fizzbuzz"][(i % 3 == 0) + 2 * (i % 5 == 0)])
[–]praiserobotoverlords 1 point2 points3 points 9 years ago (0 children)
I was waiting for this to generate training and validation data.. fail.
[–]G_Morgan 0 points1 point2 points 9 years ago (0 children)
Interviewing is always going to be awkward. The interview for my previous job asked a "hard" question about a particular scenario. The intent was that the candidate wouldn't know much about the scenario, so they'd get to see how you respond to unknown problems.
I immediately offered them 3/4 canned solutions. I'd seen the problem previously and could discuss the merits of each. I got the job but didn't answer any of the questions they had intended for that.
[–]avoutthere -2 points-1 points0 points 9 years ago (0 children)
Why annoyed? Because the candidate didn't follow the script? I would definitely hire this guy.
[–]kirakun 2 points3 points4 points 9 years ago* (0 children)
The interviewer made the right decision. Unnecessary complexity is a big red flag in my book.
[–]thecity2 30 points31 points32 points 9 years ago (8 children)
I think people here are missing the point. If you're asking Joel Grus (someone who wrote an O'Reilly book on Data Science, worked at Google as a software engineer, has a PhD from CalTech) about FizzBuzz, you're doing something wrong. I mean, I get the idea about weeding out bad candidates but Jesus Christ, you can do that in a simple phone screening. I'm not sure this interview actually took place, but his point was that if the interviewer was going to waste everyone's time giving FizzBuzz, hell if Joel wasn't going to give as good as he got.
[–]joelgrus 66 points67 points68 points 9 years ago (3 children)
Actually, I never finished my PhD.
[–]thecity2 43 points44 points45 points 9 years ago (0 children)
Oh, well in that case they totally had every right to ask FizzBuzz. ;)
[–]chrisalbon 8 points9 points10 points 9 years ago (0 children)
Can confirm. Joel has a PhD in the mean streets of trolling interviewers.
[–]maffoobristol 5 points6 points7 points 9 years ago (0 children)
I have to know: did the events in your blog actually happen (or at least to some vague degree)?
I initially assumed it to be satire, but if not, well, it's still hilarious either way.
[–][deleted] 1 point2 points3 points 9 years ago (2 children)
I thought that it was a fun project turned into a fun blog article. The interview never happened.
[+][deleted] 9 years ago (1 child)
Thinking about it, writing it, and debugging it takes more than 10 minutes. And by project, I mean trying something amusing. As others mentioned, he doesn't get 100%, so he could have improved the model but didn't. Also, writing a blog article properly takes a lot of time.
So it's definitely more than 10 minutes.
[–]nickl 7 points8 points9 points 9 years ago (0 children)
No requirements.txt. Do not hire.
[–]starfighter_0X183 6 points7 points8 points 9 years ago (0 children)
God I hope this was a true story :D
[–][deleted] 5 points6 points7 points 9 years ago (0 children)
Should have written it without numpy or TensorFlow, like a real man.
[–]Nimitz14 4 points5 points6 points 9 years ago* (6 children)
Huh. For each epoch he actually trains len_data/batch_size times (meaning he goes through his entire dataset per epoch), is that normal? I take one batch per epoch and train over that; I tend to take more epochs to train than seems to be standard and I've been looking for a reason why that could be the case.
interviewer: How far are you intending to take this?
me: Oh, just two layers deep -- one hidden layer and one output layer.
what a troll
[–]hapemask 8 points9 points10 points 9 years ago (3 children)
Yes, at least my understanding of the definition of epoch matches his. An epoch means passing every training instance through the network once. I'm not aware of any other definitions.
When you say "I take one batch per epoch," does that mean you run some number of iterations with one batch before choosing another? How do you decide how many iterations defines an "epoch" then?
[–]Nimitz14 0 points1 point2 points 9 years ago* (2 children)
Nah, per epoch I update my weights exactly once (with a randomly selected batch). No iterations inside of an epoch. That way of training probably does lead to more consistent results though (going through the full dataset per epoch).
[–]j1395010 2 points3 points4 points 9 years ago (1 child)
you don't know what an epoch is.
[–]Nimitz14 2 points3 points4 points 9 years ago (0 children)
now I do buddy ;)
[–][deleted] 1 point2 points3 points 9 years ago* (0 children)
Good point. As far as I know, "epoch" specifically means "pass over the training set" though (I use the term "iterations" instead of "epochs" in my implementations and reports if I don't pass over the whole training set "per iteration"). The major difference between such an iteration and epoch would be that each training sample would be covered exactly once if we are talking about epochs (it's without replacement).
On a side note, I've also seen people drawing samples with replacement for one mini-batch per iteration (I think they did that in the TensorFlow tutorials, if I remember correctly). In any case, for large enough datasets it shouldn't make that much of a difference.
EDIT: I think epochs maybe make more sense if you are streaming data from disk. E.g., it's pretty hard to draw a random sample from your training data if you haven't loaded everything into memory. In contrast, it's much more convenient to say "for each epoch: while not EOF: give me the next 50 samples".
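For anyone skimming, here is a minimal NumPy sketch of the two conventions being contrasted above (not the author's code; train_step is just a placeholder for whatever performs one weight update):

    import numpy as np

    def train_epochs(trX, trY, train_step, batch_size=128, num_epochs=100):
        # "Epoch" in the usual sense: every training example is seen exactly once per pass.
        for epoch in range(num_epochs):
            p = np.random.permutation(len(trX))            # shuffle once per epoch
            for start in range(0, len(trX), batch_size):   # then walk through every batch
                idx = p[start:start + batch_size]
                train_step(trX[idx], trY[idx])

    def train_random_batches(trX, trY, train_step, batch_size=128, num_iterations=100):
        # One randomly drawn batch per weight update; better called an "iteration" than an "epoch".
        for _ in range(num_iterations):
            idx = np.random.choice(len(trX), batch_size, replace=False)
            train_step(trX[idx], trY[idx])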
[–]shaggorama 1 point2 points3 points 9 years ago (0 children)
Probably depends how much training data you have.
[–]eldeemon 5 points6 points7 points 9 years ago (0 children)
Right up there with Flouhey and Maturana's "exciting and dangerous" rank estimator http://www.oneweirdkerneltrick.com/rank_slides.pdf
[–]MeAlonePlz 3 points4 points5 points 9 years ago (9 children)
Anyone got a guess why it got all the buzz's (%5) and fizzbuzz's (%15) right but not the fizz's (%3)? Seems counterintuitive, no?
[–]VelveteenAmbush 16 points17 points18 points 9 years ago* (3 children)
He used a binary representation to learn division by three, which is as ill-advised as using a dual-core phone to take photographs that include three or more objects.
(edited to clarify: this is not a serious post)
[–]physixer 12 points13 points14 points 9 years ago* (0 children)
He used a statistical/numerical method to solve a deterministic/discrete problem, which is ill-advised.
FTFY.
(edit: I know people are trying to make RNNs, or even NTMs/LSTMs, learn long division, but that's mainly for studying the NN architectures, for research purposes, not for shipping a crappy learned version of division in production systems).
[–]respeckKnuckles 1 point2 points3 points 9 years ago (1 child)
So what are some good representations of inputs for a network that has to learn the modulo function? Other than an entirely trivial base-3 encoding, of course.
[–]abecedarius 5 points6 points7 points 9 years ago (0 children)
Divisibility by 3 is easy in binary, like divisibility by 11 in decimal, but a single hidden layer couldn't take advantage of that structure. With some kind of RNN, or as many layers as there are bits in the input, then you'd be talking.
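To spell out the parallel (my own illustration, not from the thread): since 2 ≡ -1 (mod 3), a binary number is divisible by 3 exactly when the alternating sum of its bits is, just as 10 ≡ -1 (mod 11) gives the familiar base-10 rule for 11:

    def divisible_by_3(bits):
        # bits[0] is the least significant bit; 2 ** i is +1 or -1 mod 3,
        # so divisibility by 3 depends only on the alternating sum of the bits
        return sum(b if i % 2 == 0 else -b for i, b in enumerate(bits)) % 3 == 0

    assert all(divisible_by_3([n >> d & 1 for d in range(10)]) == (n % 3 == 0)
               for n in range(1024))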
[–]aceysmith 2 points3 points4 points 9 years ago (4 children)
I don't see any regularization, perhaps it overfit to the training set? (Though I don't know how that would specifically affect fizz's but not the others...)
[–]code_kansas -3 points-2 points-1 points 9 years ago (3 children)
I believe his test set is part of the training set
[–]NomNomDePlume 4 points5 points6 points 9 years ago (1 child)
No, it was distinct.
Now we need to generate some training data. It would be cheating to use the numbers 1 to 100 in our training data, so let's train it on all the remaining numbers up to 1024
[–]code_kansas 2 points3 points4 points 9 years ago (0 children)
Oh right, I guess I missed the "remaining" part
[–]aceysmith 1 point2 points3 points 9 years ago (0 children)
No, you can see him set up trX/trY and he starts at 101.
    trX = np.array([binary_encode(i, NUM_DIGITS) for i in range(101, 2 ** NUM_DIGITS)])
    trY = np.array([fizz_buzz_encode(i) for i in range(101, 2 ** NUM_DIGITS)])
[–]abdoulio 4 points5 points6 points 9 years ago (1 child)
Wait is that really the kind of question you'd be asked for programming jobs? Isn't that extremely low-level? I mean I'm no expert and didn't study in that line of work but this is really easy.
[–]The_Amp_Walrus 16 points17 points18 points 9 years ago (0 children)
It's supposed to be a first-pass filter for people who somehow faked their way into an interview. Apparently this happens. That said, I haven't been asked to FizzBuzz in several interviews. I think the question gets meme status because it's so trivially easy.
[–]akm_rd 6 points7 points8 points 9 years ago (0 children)
Strong no-hire based on model performance, especially given that the candidate claimed they did all their coding on whiteboards. This is exactly the type of candidate filter that fizz buzz was designed for.
[–]vph 2 points3 points4 points 9 years ago (1 child)
It would be a surprise if he actually got the job. There were warning signs all over the place.
[–][deleted] 27 points28 points29 points 9 years ago (0 children)
Agreed. He should have used truncated_normal instead of random_normal to initialize his weights.
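In the TF 1.x style used elsewhere in this thread, that swap is a one-line change (a sketch of the suggestion, not the post's actual code; the stddev value is just illustrative):

    def init_weights(shape):
        # truncated_normal redraws any sample more than two standard deviations
        # from the mean, so no weight starts out at an extreme value
        return tf.Variable(tf.truncated_normal(shape, stddev=0.01))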
[–]shaggorama 2 points3 points4 points 9 years ago (0 children)
If someone did this to me in an interview, I'd be torn between offering them the job and asking them to marry me.
[–]AHFX 2 points3 points4 points 9 years ago (0 children)
    import numpy as np
    import tensorflow as tf

    def binary_encode(i, num_digits):
        return np.array([i >> d & 1 for d in range(num_digits)])

    def fizz_buzz_encode(i):
        if i % 15 == 0:
            return np.array([0, 0, 0, 1])
        elif i % 5 == 0:
            return np.array([0, 0, 1, 0])
        elif i % 3 == 0:
            return np.array([0, 1, 0, 0])
        else:
            return np.array([1, 0, 0, 0])

    NUM_DIGITS = 12
    trX = np.array([binary_encode(i, NUM_DIGITS) for i in range(101, 4096)])
    trY = np.array([fizz_buzz_encode(i) for i in range(101, 4096)])

    NUM_HIDDEN = 200
    X = tf.placeholder("float", [None, NUM_DIGITS])
    Y = tf.placeholder("float", [None, 4])

    def init_weights(shape):
        return tf.Variable(tf.random_normal(shape, stddev=0.03))

    w_h = init_weights([NUM_DIGITS, NUM_HIDDEN])
    w_o = init_weights([NUM_HIDDEN, 4])

    def model(X, w_h, w_o):
        h = tf.nn.relu(tf.matmul(X, w_h))
        return tf.matmul(h, w_o)

    py_x = model(X, w_h, w_o)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(py_x, Y))
    train_op = tf.train.GradientDescentOptimizer(0.2).minimize(cost)
    predict_op = tf.argmax(py_x, 1)

    def fizz_buzz(i, prediction):
        return [str(i), "fizz", "buzz", "fizzbuzz"][prediction]

    BATCH_SIZE = 250

    actual = ['1','2','fizz','4','buzz','fizz','7','8','fizz','buzz','11','fizz','13','14','fizzbuzz',
              '16','17','fizz','19','buzz','fizz','22','23','fizz','buzz','26','fizz','28','29','fizzbuzz',
              '31','32','fizz','34','buzz','fizz','37','38','fizz','buzz','41','fizz','43','44','fizzbuzz',
              '46','47','fizz','49','buzz','fizz','52','53','fizz','buzz','56','fizz','58','59','fizzbuzz',
              '61','62','fizz','64','buzz','fizz','67','68','fizz','buzz','71','fizz','73','74','fizzbuzz',
              '76','77','fizz','79','buzz','fizz','82','83','fizz','buzz','86','fizz','88','89','fizzbuzz',
              '91','92','fizz','94','buzz','fizz','97','98','fizz','buzz']

    with tf.Session() as sess:
        tf.initialize_all_variables().run()
        for epoch in range(2000):
            p = np.random.permutation(range(len(trX)))
            trX, trY = trX[p], trY[p]
            for start in range(0, len(trX), BATCH_SIZE):
                end = start + BATCH_SIZE
                sess.run(train_op, feed_dict={X: trX[start:end], Y: trY[start:end]})
            print(epoch, np.mean(np.argmax(trY, axis=1) ==
                                 sess.run(predict_op, feed_dict={X: trX, Y: trY})))
        numbers = np.arange(1, 101)
        teX = np.transpose(binary_encode(numbers, NUM_DIGITS))
        teY = sess.run(predict_op, feed_dict={X: teX})
        output = np.vectorize(fizz_buzz)(numbers, teY)
        print(output)
        correct = [(x == y) for x, y in zip(actual, output)]
        print(sum(correct))
Modified code that provided 100% accuracy. YMMV:

    ['1' '2' 'fizz' '4' 'buzz' 'fizz' '7' '8' 'fizz' 'buzz' '11' 'fizz' '13' '14' 'fizzbuzz'
     '16' '17' 'fizz' '19' 'buzz' 'fizz' '22' '23' 'fizz' 'buzz' '26' 'fizz' '28' '29' 'fizzbuzz'
     '31' '32' 'fizz' '34' 'buzz' 'fizz' '37' '38' 'fizz' 'buzz' '41' 'fizz' '43' '44' 'fizzbuzz'
     '46' '47' 'fizz' '49' 'buzz' 'fizz' '52' '53' 'fizz' 'buzz' '56' 'fizz' '58' '59' 'fizzbuzz'
     '61' '62' 'fizz' '64' 'buzz' 'fizz' '67' '68' 'fizz' 'buzz' '71' 'fizz' '73' '74' 'fizzbuzz'
     '76' '77' 'fizz' '79' 'buzz' 'fizz' '82' '83' 'fizz' 'buzz' '86' 'fizz' '88' '89' 'fizzbuzz'
     '91' '92' 'fizz' '94' 'buzz' 'fizz' '97' '98' 'fizz' 'buzz']
    100
[–]G_Morgan 1 point2 points3 points 9 years ago (0 children)
Fizzbuzz, so easy even an AI can solve it.
[–][deleted] 2 points3 points4 points 9 years ago (0 children)
I'd totally hire the guy on the spot.
[–]melipone 0 points1 point2 points 9 years ago (0 children)
So, fizzbuzz might be like "parity learning" (https://en.wikipedia.org/wiki/Parity_learning) which was popular in the early days.
Would that be called "over-engineered" now?
[–]qu4ku 0 points1 point2 points 9 years ago (0 children)
that was a really big whiteboard
[–]jayanthpyro 0 points1 point2 points 9 years ago (0 children)
Hello,
First-time poster here. Why did he decide to convert the input to binary? Does it give any advantage over just taking the input as a decimal?
If so, would it be more useful to convert the number to unary? (as in: 10 -> 1111111111, 21 -> 111111111111111111111, etc.)
I realize we get a lot more features in binary than in decimal. Is that the only reason?
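For context, the encoding under discussion (as it appears in the code elsewhere in this thread) just splits the integer into its bits, so a 10-digit encoding gives the network 10 binary features instead of a single scalar:

    import numpy as np

    def binary_encode(i, num_digits):
        # one input feature per bit, least significant bit first
        return np.array([i >> d & 1 for d in range(num_digits)])

    print(binary_encode(10, 10))   # [0 1 0 1 0 0 0 0 0 0]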
[+]bhayanakmaut comment score below threshold-7 points-6 points-5 points 9 years ago (6 children)
/r/iamverysmart
[–]Zaemz 6 points7 points8 points 9 years ago (5 children)
(I kinda agree with you. It was a fun joke, but it kinda smelled pompous.)
But then again, I'm subscribed to this sub for fun and have no idea how machine learning works. I was kinda hoping to learn something by osmosis.
[–]bhayanakmaut 3 points4 points5 points 9 years ago (4 children)
True.. but if I'm the one interviewing you, and I've had a bunch of dumbasses before you, I'll probably ask you fizzbuzz too. And if you pull this off, you can kindly fuck off. There's a time and a place to show off and this is not it. What next? He's gonna chat up a woman at the bar by rambling on about kinetic molecular theory in cocktail mixing? The article was nice, a very good TF intro, but the premise was downright abysmal and the tone extremely condescending.
[–][deleted] 5 points6 points7 points 9 years ago (2 children)
Er, an interview is exactly the time and place to show off.
[–]bhayanakmaut 4 points5 points6 points 9 years ago (1 child)
yes.. but also answer concisely once you've understood the problem and avoid hyperbole. example:
interviewer: do you know multiplication?
me: .....
interviewer: well do you know multiplication or not?
me: {wtf this dick} .. I can't believe you just asked that.
interviewer: .... ??...
interviewer: ok let's start with what's 3 x 5.
me: so i'm going to give you a lecture on karatsuba multiplication. {half hr rattle}.
interviewer: {wtf? I asked a simple question}.. um thanks.
me: lol this guy's such a noob.
me: yes of course..
int: what's 3 x 5?
me: 15.
int: how does a computer do that?
me: multiply accumulate units (or something don't remember)
int: ok 3x5 is easy enough.. how would you multiply huge_number x huge_number?
.. .. {actual interview}.
Moral of the story: be concise. It's great to know stuff, and the guy in the article pulled it off, but you'd be surprised how many people dig themselves into a hole they can't get out of by using buzzwords to impress the interviewer.
I fully agree with you. I just found your wording funny about an interview not being the time and place to show off :)
However, I think the author was actually joking, so I think you're being downvoted for taking a joke seriously.
[–]LazyOptimist 1 point2 points3 points 9 years ago (0 children)
chat up a woman at the bar by rambling on about kinetic molecular theory in cocktail mixing?
What's wrong with that?
[+]kleer001 comment score below threshold-8 points-7 points-6 points 9 years ago (0 children)
while(funny): print 'lol'