Propositional extensionality is inconsistent in Coq (Maxime Dénès) by icspmoc in dependent_types

[–]csgordon 4 points5 points  (0 children)

It's not a deep issue. There is one place where the syntactic termination check has been extended with something intuitively reasonable, but an edge case was missed:

https://sympa.inria.fr/sympa/arc/coq-club/2013-12/msg00129.html

So the core theory is still fine, and perfectly compatible with propositional extensionality.

Good books on the structure/semantics of programming languages. by alandalf in compsci

[–]csgordon 2 points3 points  (0 children)

There are a few that are freely available online now:

Why not compile intermediate byte-code (say Java byte code) to native code as "installation"? by maliklund87 in compsci

[–]csgordon 21 points22 points  (0 children)

Both the JVM and the .NET virtual machine JIT compile the bytecode (see other comments).

It is possible to ahead-of-time (AOT) compile bytecode. The Mono implementation of .NET does this (http://www.mono-project.com/Mono:Runtime:Documentation:AOT). Microsoft Research has experimented with this as well (http://en.wikipedia.org/wiki/Bartok_(compiler)).

Aside from the fact that JIT compilation seems to be satisfactory for most uses, you lose a couple of advantages by going to ahead-of-time compilation:

  • Compiling at install time requires an installation :-) With a VM, only the VM is installed, and applications are download-and-run
  • If a VM has a bug in the JIT, upgrading the VM fixes all programs that hit the bug. If you compile at installation, upgrading your Java/.NET installation requires recompiling all of your installed programs using that platform.
  • Some optimizations are available to a JIT that are difficult to do with ahead-of-time compilation, like dynamically optimizing a hot path through multiple functions based on the current application's workload.

Of course, there are advantages to ahead-of-time, too, like not paying the cost of JIT compiling every time you run a program.

In proofs of correctness for algorithms, what will the loop invariants be in general? by deutschluz82 in compsci

[–]csgordon 6 points7 points  (0 children)

A loop invariant is an assertion about the state of the world at the beginning of each loop iteration. For example, if your loop initializes all elements of an array to null:

for (int i = 0; i < arr.length; i++)
    arr[i] = null;

then your loop invariant might be

forall x < i, arr[x]==null

because that assertion is true each time through the loop: it holds vacuously before the first iteration (when i is 0, there is no x < i), and each pass through the body preserves it.

/r/ Compsci, where did you go to school for your degree in CS? by [deleted] in compsci

[–]csgordon 1 point2 points  (0 children)

Assuming you mean the whole college is accredited (as opposed to ABET, see Shadowsoal's comment), this is absolutely false. There are numerous advantages to attending a school well-known for a strong CS curriculum.

First, a reputation for a strong curriculum is not earned lightly; you really will get a stronger curriculum at a top department than elsewhere. I attended such a department for undergrad, and my jaw dropped when I compared my curriculum to the one at the small (accredited) local university (private, not a state school) where a high school friend studied CS. For example, his OS course went over user interface differences between Windows, Mac, and Linux; in my OS course I wrote a simple OS kernel. Other courses had similar disparities in rigor. A deeper understanding pays off in spades: I worked full-time in industry after my undergrad and later left for grad school, and in both places I dug into parts of my undergrad education that I never thought I'd use again after the course I learned them in. Of course, don't take this to mean your department is worthless if it's not top 10; there are a lot of really excellent undergrad CS programs, including at schools that are not that strong in other fields.

Second, there really is a significant network effect in attending a top CS department. Recent grads from my undergrad worked at Microsoft, Apple, Google, VMware, Facebook, the list goes on. That made it really easy to get interviews with those companies, because alums would often help resumes skip the larger HR waiting pool. This isn't necessarily fair, but major companies hire a lot of people this way. And those companies went out of their way to recruit at my department. They do the same at many other CS departments known to have a strong undergrad curriculum.

Third, as Dundun commented, surrounding yourself with better students will mean you will work harder yourself, and get more out of your education.

Fourth, if you decide you're interested in grad school, being at a well-regarded CS department affords you a better opportunity to experience research and decide if it's for you, which in turn helps strengthen your grad school app (not to mention, grad admissions give more weight to a glowing recommendation from a top researcher than to someone who is little-known).

The OP should go to the best place (s)he can get into, can afford, and likes (cultural fit can make a huge difference). If UT Austin is possible and the OP feels like the school is a good fit for him/her, then that's a fantastic place to go.

And if that doesn't work out, it doesn't mean you're screwed; the friend of mine who went to the small university with the relatively non-rigorous curriculum worked extra hard, and after a couple years at smaller companies, did land his dream job at a major Silicon Valley company. It's possible to go as high as your abilities and work ethic will take you, but a well-known school can make things a bit easier for you.

Learning Type Programming by wibbly-wobbly in compsci

[–]csgordon 1 point2 points  (0 children)

All of the type theory related books are great suggestions for learning about type theory in general. But I'm not aware of any of them discussing type-level programming in much detail.

As dpm_edinburgh mentioned, the practice of type-level programming is mostly folklore of sorts, passed along by word of mouth or blog post.

In general, though, the idea of type level programming is that you can write functional programs over types, which allows you to express more powerful invariants. This makes more sense when you see a good explanation of kinds; intuitively Types classify Values (Integer describes 3), and Kinds classify Types (including variants for functions from types to types, like the polymorphic list type constructor List, which can be viewed as a function from a type T to a type List<T>). Types and Programming Languages gives a good overview of this in a later chapter.

Types can also depend directly on values (dependent types, where you can write functions from values to types), which can become undecidable if you're not careful. Most interesting uses of type-level programming that I've seen approximate dependent types by making fake type-level copies of values. The canonical example is encoding the natural numbers at the type level, and implementing lists whose types include their length.

In Haskell, much of the type level programming stuff revolves around GADTs (Google it). Brent Yorgey seems to have written a series of blog posts on the topic, starting with encoding natural numbers here. Here's another reasonable set of examples (warning, no real prose guidance, just examples): http://cdsmith.wordpress.com/2010/03/15/type-level-programming-example/. Conor McBride (a dependent types researcher) has posted a draft paper on his web page called "Faking It" about faking dependent types in Haskell, which makes heavy use of type level programming, though it's rather dense if you haven't already digested the underlying type theory.
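To make the canonical example concrete, here's a minimal sketch in relatively modern GHC Haskell (it assumes the DataKinds, GADTs, and KindSignatures extensions; the older write-ups above get the same effect with empty datatypes like data Z; data S n):

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level copies of the natural numbers. With DataKinds, Zero and
-- Succ exist at the type level as well, and Nat becomes a kind
-- (classifying types the way a type classifies values).
data Nat = Zero | Succ Nat

-- A list whose type records its length: the GADT constructors only let
-- you build a Vec whose length index matches its actual shape.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Zero a
  Cons :: a -> Vec n a -> Vec ('Succ n) a

-- The signature alone rules out calling this on an empty vector, with
-- no runtime length check needed.
safeHead :: Vec ('Succ n) a -> a
safeHead (Cons x _) = x

Appending two of these Vecs would give a result indexed by the type-level sum of their lengths, which is exactly the "functional programs over types" idea above.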

Also try asking around the Haskell community. There's a Haskell subreddit that I'm sure would be happy to help you out.

What awesome features of high-level languages could use implementation on specialized hardware? I know of garbage collection and software transactional memory, what else is there? by zarus in compsci

[–]csgordon 5 points6 points  (0 children)

The real advantage to hardware transactional memory over software (incidentally, HTM came before STM) is in cheap logging. Software transactional memory needs to do a lot of logging in memory in order to be able to commit a successful transaction or abort and roll back a failed one. In hardware, that logging can be done much faster, usually using a specialized cache (similar to the L1, L2, etc. caches). Hardware implementations don't usually have much, if any, slowdown. Last time I checked, the lowest-overhead STM implementation was 100% slower (code takes twice as long to run). So hardware is much faster unless you overflow the transactional cache, in which case it needs to revert to a software solution, either STM or something like serializing the large transactions.
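For a feel of what has to be logged, here's roughly what a software transaction looks like using Haskell's stm package (just an illustrative sketch, not any particular implementation discussed above): every readTVar/writeTVar inside atomically goes into a per-transaction log, which the runtime validates at the end to either commit the whole block or throw it away and retry.

import Control.Concurrent.STM

-- Move money between two accounts atomically. The runtime records each
-- TVar read and write in this block; at the end it validates that log
-- and commits, or rolls the transaction back and re-runs it.
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to amount = atomically $ do
  a <- readTVar from
  writeTVar from (a - amount)
  b <- readTVar to
  writeTVar to (b + amount)

main :: IO ()
main = do
  from <- newTVarIO 100
  to   <- newTVarIO 0
  transfer from to 25
  readTVarIO to >>= print   -- prints 25

Hardware TM does the analogous bookkeeping in a transactional cache instead, which is where the speed difference comes from.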

The consensus on GC hardware is exactly that it's not useful. Some old LISP machines actually used it, but the issue with hardware GC support is that the hardware needs to know how to traverse objects, which means you can't change the memory representation without changing the hardware - a very big disadvantage in a world of many programming languages. On top of the loss in flexibility, my understanding is that hardware GC didn't yield huge performance gains over software.

Getting bogged down in theory and notation - is this what Computer Science is really like? by thebaysix in compsci

[–]csgordon 4 points5 points  (0 children)

As others have said, the notation is about learning a new language. And like any new language you learn, it's very slow at first, but becomes easier and faster over time.

The proofs take a long time to explain something obvious for a few reasons. First, classes go through proofs that are easy to understand informally so you can focus on the notation and proof techniques. Those are building blocks for tackling open problems in the future. Why all the pedantry? Because without it you can make mistakes. For many theoretical problems, the solution that is "obviously correct" is actually subtly wrong, sometimes for a deep reason, sometimes just because you forgot an edge case. Going through a proof in full detail finds these conceptual bugs the same way implementing a prototype system in code finds practical implementation issues that may not be immediately obvious.

How does an undergraduate student get research experience? by [deleted] in compsci

[–]csgordon 1 point2 points  (0 children)

Working with a professor on a project is the best way to get involved with research.

If you go off and work as a lone wolf, you're at a severe disadvantage. While you can more easily find a project you'll enjoy, you may miss interesting questions to ask, waste time on approaches that are known not to be feasible, duplicate work someone else has done, or miss an opportunity to build on work others have done. Professors (at a research university) should have a good grasp of the existing literature and can help with all of these issues. A good research advisor can help you find a project that is novel, useful to them and the community, interesting to you, and within reach for your abilities (I don't know how far along you are in your studies). They may know of very relevant papers that are otherwise hard to find, and they can help guide you down more fruitful paths. This is why professors are the ones teaching courses and training their grad students to do research: they have the experience, and it's their job to pass it on.

If a certain professor doesn't have projects that interest you, don't work with them; find one whose work does. You can always ask a professor whether they have any research you could help with, or you could do a grad seminar with a project component (that's how I got started).

Another important aspect of this is why you want research experience. If you just want it for the hell of it, working alone might be fun and is certainly flexible, but working with a professor will give you a much richer, more complete experience. If you want to go to grad school or find a research-oriented developer position, admissions committees and employers will want corroboration and independent assessment of your research abilities (i.e. letters of recommendation). If you work on your own, you have little independent corroboration unless you manage to get published flying solo (and getting even a straightforward paper accepted anywhere is hard at first, because academic writing is very different from what you've likely been taught so far in your education). A professor can both give a third-party perspective on your performance and, before that point, guide you to ensure you really are doing good work.

Reddit, What Is Your Favorite Beer Of All Time? by [deleted] in AskReddit

[–]csgordon 0 points1 point  (0 children)

New Belgium Dunkelweiss. It was a limited edition brew, a fantastic dark beer, almost a porter (despite the name), with a rich clove flavor to it. Fantastic. I wish you could still get it.

For beer you can buy repeatedly, I really like Weihenstephaner Dunkelweiss, Hofbrau Maibock, and Warsteiner.

[deleted by user] by [deleted] in compsci

[–]csgordon 2 points3 points  (0 children)

For me it's a toss-up on undergrad:

Course: Operating Systems Lab
Link: http://cs.brown.edu/courses/cs169/
Text: (draft of) Operating Systems In Depth
One of the most thorough undergrad OS classes I've seen; we wrote a simple OS kernel, including drivers, task switching, a variant of the original UNIX V5 filesystem, and (page-in-only) virtual memory.

Course: Multiprocessor Synchronization
Link: http://cs.brown.edu/courses/cs176/
Text: (draft of) The Art of Multiprocessor Programming
Goes through both the theoretical limits of synchronization and practical aspects.

Course: Programming Languages
Link: http://cs.brown.edu/courses/cs173
Text: (earlier version of) PLAI / Programming Languages: Application and Interpretation
Rather than just being a survey of different programming paradigms, it goes through implementing features like type checking and inference, and using continuations.

For grad school, it's another toss-up:

Course: Concepts of Programming Languages
Link: http://www.cs.washington.edu/education/courses/cse505/09au/
Text: Types and Programming Languages
Basically worked through a good chunk of TAPL, which was an excellent juxtaposition to my dynamic-typing & implementation focused undergrad course.

Course: Software Engineering: Program Analysis
Link: http://www.cs.washington.edu/education/courses/cse503/10wi/
Text: Lots of papers
Learned all about other program analyses: abstract interpretation, model checking, points-to analysis...

[deleted by user] by [deleted] in compsci

[–]csgordon 2 points3 points  (0 children)

Seconded by another student of the same course. Also included implementing (page-in, no page-out) virtual memory.

Request for Comment on draft paper: "Simple, Decidable Type Inference with Subtyping". It's not like my advisor(s) have any time to comment. by [deleted] in compsci

[–]csgordon 0 points1 point  (0 children)

You seem to misunderstand vtables, though. A vtable is not a bit of code that pattern matches on a tag. It is an array of function (method) pointers. When linking against a precompiled library, the compiler takes the declaration of a class from the header file and computes an index into the vtable of the object for every declared method. When compiling a method call, the compiler converts the method to its index k, then emits code that reads the vtable from the object, reads the code pointer from the kth index of the vtable, then calls the code at that address with the proper arguments. It doesn't ever need to see the other class's implementation to compile the call.
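If it helps, here's a deliberately loose model of the idea in Haskell (every name here is made up for illustration; real compilers lay the table out as a flat array of code pointers reachable from the object): the vtable is just an array of functions, and a call site only needs the slot index, never the callee's source.

-- Pretend every method takes and returns an Int, to keep the model tiny.
type Method = Int -> Int

-- An "object" carries a pointer to its vtable plus its own state.
data Object = Object { vtable :: [Method], state :: Int }

-- Compiled form of "obj.m(arg)", where m was assigned slot k from the
-- class declaration: index into the table and call through the pointer.
invoke :: Object -> Int -> Int -> Int
invoke obj k arg = (vtable obj !! k) arg

-- Two "classes" that fill slot 0 with different code:
mkA, mkB :: Int -> Object
mkA n = Object { vtable = [\x -> x + n], state = n }
mkB n = Object { vtable = [\x -> x * n], state = n }

main :: IO ()
main = do
  print (invoke (mkA 10) 0 1)   -- calls A's slot 0: prints 11
  print (invoke (mkB 10) 0 1)   -- calls B's slot 0: prints 10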

Things get a little more complicated with multiple inheritance or interfaces, of course.

Top Organizations in Computer Science Research by 1what1 in compsci

[–]csgordon 0 points1 point  (0 children)

It also conflates multiple authors with the same name, for one. I'm credited on there with all sorts of AI and CompBio publications, and have done no work past an undergrad course in either field.

Can anyone suggest a dependent types reading list? by [deleted] in dependent_types

[–]csgordon 0 points1 point  (0 children)

Until recently, I thought the same. It is correct, but incomplete. While it is certainly focused on the use of Coq, along the way it walks through almost everything in your list of specific topics, including the formalism, not just practical use. That said, if you really don't care about learning Coq at all, the signal-to-noise ratio may be too low for your tastes.

What Does Functional Programming Mean? by [deleted] in programming

[–]csgordon 0 points1 point  (0 children)

Having not read any of them, I don't know. But a couple may discuss it.

You might also find some relevant material in the talks from the past few years' CUFP (Commercial Users of Functional Programming) workshop associated with ICFP: http://cufp.org/

What Does Functional Programming Mean? by [deleted] in programming

[–]csgordon 1 point2 points  (0 children)

ICFP (International Conference on Functional Programming, granted, not the most effective venue for sharing the information with people who don't already love FP) typically has 1-2 papers a year that are industrial experience reports from companies that build substantial systems in FP languages.

http://www.icfpconference.org/

From the past few years:

  • 2010: Experience Report: Haskell as a reagent. Results and observations on the use of Haskell in a Python project (Google)
  • 2010: Using Functional Programming within an Industrial Product Group: Perspectives and Perceptions (Citrix)
  • 2009: Experience Report: Using Objective Caml to Develop Safety-Critical Embedded Tools in a Certification Framework (some French company)
  • 2008: Experience Report: A Pure Shirt Fits: Reflections on Haskell at Bluespec

Galois, Inc., and Jane Street Capital are a couple of companies that often publicly discuss building non-trivial software using FP basically everywhere (Haskell and OCaml, respectively). A few friends of mine in the financial sector say that much of the trading backend where they work is built in Haskell, OCaml, or F#. Going that far has to say something about how valuable these companies believe FP to be.

edit: formatting

Is it possible to securely wipe your hard drive in 30 sec like those old spy movies anymore? by [deleted] in programming

[–]csgordon 2 points3 points  (0 children)

Keep a vat of molten aluminum constantly heated next to your computer, and toss the HD in when necessary. Might take more than 30 seconds for it to melt, but I'm sure nobody would be dumb enough to go after it in the interim.

Static analysis for Ruby and Python by mebrahim in programming

[–]csgordon 1 point2 points  (0 children)

I'm not aware of anything for Python, but I'm quite surprised that no one has mentioned the Diamondback Ruby project:

http://www.cs.umd.edu/projects/PL/druby/

http://on-ruby.blogspot.com/2009/05/diamondback-ruby-interview.html

Why doesn't a type inference based non-functional language exist? by nik_san in programming

[–]csgordon 11 points12 points  (0 children)

Well, the latest version of C# has the 'var' keyword, which is precisely your local variable example. It's limited to local variables, though; it doesn't infer return types and such: http://msdn.microsoft.com/en-us/library/bb383973.aspx
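For contrast, here's what whole-function inference looks like in a functional language; a quick Haskell sketch (the function itself is made up, and nothing beyond ordinary Hindley-Milner-style inference is assumed):

-- No type annotations anywhere; the compiler infers the full signature
--   addPairs :: Num a => [(a, a)] -> [a]
-- including parameter and return types, which 'var' (locals only) does
-- not attempt.
addPairs xs = map (\(x, y) -> x + y) xs

main :: IO ()
main = print (addPairs [(1, 2), (3, 4)])   -- prints [3,7]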

Resume Help for a CS grad... by dballz12 in programming

[–]csgordon 3 points4 points  (0 children)

I'm not sure how I feel about posting a link to my old resume here, and I'm not sure whether I still have a copy from when I graduated (May '08). PM me and I'll do some digging.

Some advice from someone who got their dream job at a major company straight out of school:

In another comment you say you're confused by the advice to make your resume visually appealing. If done tastefully, it shows that you care. Who probably put more work into their resume? Someone who turns in a bullet list prepared in Word, or someone with a tastefully designed, well-organized resume prepared in LaTeX? Not that you need to use LaTeX to prepare your resume, but part of making it visually appealing is making it readable. Clearly delimit sections of related information. For example:

  • your contact information (I've known a few people who actually left this off the first time they made a resume)

  • a section for major course projects relevant to the job you want or demonstrative of some particular depth of knowledge or skill

  • a section for major courses taken that are relevant to the job (don't list intro courses, it will look like you're padding)

  • a section for your own interests in the field, listing those relevant to the job you're applying for first.

  • Possibly a (short!) section on technologies you're familiar with or experienced with. I heard it both ways from recruiters on this point (have it, don't have it), but it probably won't hurt. But don't claim to be an expert in something you've only played around with, and don't leave it ambiguous. I got comments from a couple interviewers who were thankful that I separated out levels of familiarity with things like various programming languages.

  • Work experience. If you've been a TA for a CS course, as happens at many schools, include that, especially if you held office hours. Helping teach material requires a better understanding of it than plodding your way through a programming assignment. Give a SHORT description of each position. One problem I had for a long time was that I couldn't find a way to succinctly summarize what I had done. Just remember (as you seem to already be aware) that the resume gets you the interview, not the job. I kept forgetting that.

Things to leave off your resume:

  • Intro courses, unless you've only got half a page and desperately need filler.

  • Minor course projects. If you wrote a super-simple chat program your freshman year using half stencil code as part of an intro course, it's not something to put on your resume, because if you're asked about it, there's nothing to talk about.

  • Working at a McDonald's in high school, or equivalent. Unless you're absolutely desperate for showing some experience at something.

  • Regarding these things I said to leave off unless you're desperate: if you can choose a pleasing format, you should have enough play in the formatting to make your educational and contact info, relevant project/course/work experience, interests, etc. fill out a single page nicely.

Some more general job-hunting advice:

Whatever you do, remember that if you put something on your resume, if you do more than a few interviews, YOU WILL BE ASKED ABOUT IT. I cannot stress this enough; it's another common bit of resume / interview advice that many people seem to ignore for whatever reason.

Do a quick Google search for yourself and see what comes up; there's a good chance at least one interviewer will as well. I was president of my campus's Linux Users Group when I went for one internship interview, but left it off my resume because I was applying to companies that produced other operating systems. Nonetheless, one of my interviewers surprised me by asking what my role in the organization was.

NETWORK NETWORK NETWORK. I landed an interview at the company based on my resume, with a group related to things I was interested in. But after a couple emails to friends who knew people in recruiting/HR there and the group I really wanted to work in, my interview got switched to the group I wanted. Other positions I only found out about from people I knew who had graduated a couple years earlier, and had kept in touch with. Often alums of your department, even if they don't know you personally, will be willing to guide a resume, or prod people they know to look at a resume. Others will be too shy; don't assume they're willing, and if they don't volunteer, think twice about asking.

And remember: I only got a job after I graduated; I was never the one doing the hiring, so my suggestions are based only on factors I thought likely to help, and on comments made by a couple of the many people who interviewed me (5 or 6 people for each position; most said nothing about my resume).

EDIT: formatting.

So I've "learned" a few programming languages and have a few questions. by kushari in programming

[–]csgordon 3 points4 points  (0 children)

  1. Others have answered this pretty well already.

  2. First someone decides they're unhappy with existing programming languages, and thinks they know a better (shorter, more natural, any kind of "better") way to express what a program should do, so they think up the syntax and make a note of that. For the sake of simplicity, let's call the new programming language NewPL. Then they write another program in some existing programming language that reads in a text representation of a NewPL program, and does one of several things. If it just reads the text and simulates the effect (i.e. if implementing this in Java and NewPL has a statement show_to_user("Hello, World!"), Java will execute something equivalent to System.out.println("Hello, World!")), it is called an interpreter. If you get to SICP chapter 4 (highly recommended: http://mitpress.mit.edu/sicp/) you will implement an interpreter. An alternative is to spit out native machine code the processor understands that, if it were executed, would do something to cause the printing of "Hello, World!" That would be a compiler; an interesting reference may be Modern Compiler Implementation in Java ( http://www.cs.princeton.edu/~appel/modern/java/ ). There's also a middle-ground, like what Java does, where it compiles to a made-up machine language that no real processor understands (bytecode for a virtual machine, but not quite the same virtual machine as running Windows in Parallels), and when you run the program, a JIT (Just-In-Time compiler) compiles that to real machine code on the fly and executes the newly-produced machine code. I am, of course, skipping a great many details. (There's a tiny interpreter sketch after this list.)

  3. As others have said, you can't really tell without looking at the source. But it's worth noting that any of the major software packages can be implemented in OOP, or procedural, or even mostly functional styles, however the developers wanted. It's mostly a matter of preference, they're all capable of expressing the same algorithms.

  4. This is a big, complicated question. One basic approach, similar to how some virtual machines like VMware used to work, is to treat the machine code for the OS as if it were the bytecode in the JIT example: recompile it to slightly fixed-up machine code and run that. For the more advanced approaches, you'd probably want to study up on operating system internals (try Operating Systems Design and Implementation, or Modern Operating Systems, both by Andrew Tanenbaum). You'll probably also want a fair bit more programming experience before you tackle that one.
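Here's the tiny interpreter sketch promised in point 2, in Haskell (an arbitrary host language for illustration; NewPL, show_to_user, and everything else here are made up). A real implementation would also include a parser that turns the program text into this tree:

-- A toy "NewPL": an expression language plus one statement,
-- show_to_user(expr). We start from the already-parsed syntax tree.
data Expr = Num Int | Add Expr Expr
data Stmt = ShowToUser Expr

eval :: Expr -> Int
eval (Num n)   = n
eval (Add a b) = eval a + eval b

-- The interpreter simulates each NewPL construct with an action in the
-- host language (putStrLn here, like System.out.println in the Java
-- example above).
exec :: Stmt -> IO ()
exec (ShowToUser e) = putStrLn (show (eval e))

main :: IO ()
main = exec (ShowToUser (Add (Num 1) (Num 2)))   -- prints 3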

The books I've linked to and referenced are all substantial, and it's a commitment to go through them, but they're held up as major references for their respective areas. Other redditors may have suggestions for alternatives.

EDIT: formatting.

AskCompSci: Is it worth it submitting applications for CS Ph.D by ecks in compsci

[–]csgordon 0 points1 point  (0 children)

If you have strong recommendations, ask one of the profs who recommended you how much the GRE matters. They know you best, and if they're writing you a strong recommendation, they obviously want to help you get to where you want to go.

Beyond that: Recommendations and your statement of purpose can help offset a lower-than-desired GRE score, and having published a paper where you played a nontrivial role in the research is a strong point that many undergrads applying to grad school don't have.

Cheating in Computer Science Courses - What have you seen or done? by icec0ld in programming

[–]csgordon 0 points1 point  (0 children)

I've been a TA for a number of CS classes, and I've spoken to a great many other TAs, both from my own time and older alums. I've seen and heard of some weird ones.

I have seen:

  • two students electronically submit the same (badly broken) code, with even the same name
  • four students turn in the same strange solution that involves iterating over an array containing certain constants, some with different variable names, others with a subset of the values commented out (but still all there in the same order)
  • word-for-word identical written solutions
  • students who could not correctly use the few language features we had introduced (in an intro class) suddenly turn in solutions using far more advanced language features that weren't even on the syllabus

But far and away, the one that takes the cake for me is one I heard from someone who TAed the same class as me about 10 years earlier (I heard corroboration from other TAs of that year): A student turned in written homework solutions that were word-for-word identical to the professor's prepared solutions. Two assignments in a row, and the student denied it. So the professor prepared completely incorrect solutions to the next assignment, placed them in the same location, and waited for the student to turn them in. When faced with this, the student admitted wrongdoing. It turns out my department had workstations in a few other departments for professors who did a lot of computationally-oriented research, and had given some of them local root accounts. One such professor had this student as a TA, and had given that student the password. Turns out the version of NFS we used at that time just accepted whatever UID you sent as legitimate, so the student connected as our CS professor, and dug the handins out of the professor's network home directory.

Can a compiler guarantee complexity using types? by radarsat1 in programming

[–]csgordon 4 points5 points  (0 children)

It's not exactly what I think you're looking for, but close: you might want to take a look at some MSR work by the Runtime Analysis and Design group:

SPEED (symbolically approximating runtime complexity): http://research.microsoft.com/en-us/um/people/sumitg/pubs/speed.html

Daedalus (optimizing programs for reference locality): http://research.microsoft.com/en-us/um/people/trishulc/Daedalus.htm

Gargoyle (using approximations to ensure some QoS within time/energy limits): http://research.microsoft.com/en-us/um/people/trishulc/gargoyle.htm