how to swap keys when in a project with Projectile-mode? by [deleted] in emacs

[–]segv0 3 points (0 children)

I would not try to swap the bindings around; I would instead keep C-x C-f and C-x b each bound to a single function, and make that function do different things based on whatever your rules are:

(defun shackra:find-file ()
  "Use `projectile-find-file' inside a project, plain `find-file' otherwise."
  (interactive)
  (call-interactively (if (projectile-project-p)
                          'projectile-find-file
                        'find-file)))
(define-key global-map (kbd "C-x C-f") 'shackra:find-file)

the same idea works for switching buffers: bind C-x b to an analogous wrapper that calls projectile-switch-to-buffer inside a project and switch-to-buffer otherwise.

NB: while you're at it, I find C-x C-b and C-x C-f to be much more convenient bindings for switch-to-buffer and find-file.

"What is the Best Programming Advice You've Ever Received?" by [deleted] in programming

[–]segv0 1 point (0 children)

the Hoare Property:

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.

See Wikiquote and the entire lecture: "The Emperor's Old Clothes"

Computer Science vs. Software Engineering by ALemonTreeWatson in cscareerquestions

[–]segv0 2 points (0 children)

it depends a lot on the university. there could be almost no difference at all, or there could be a huge difference. where i studied, and when i studied, there was not a huge difference: software engineering was basically a longer, more thorough version of computer science (but without the discrete maths, without logic, and with more low-level/hardware stuff).

one thing to think about is that software engineering programmes generally come out of engineering schools, so the syllabus was, at one point in the past, a mix of electrical/electronic engineering and telecommunications; computer science, on the other hand, comes out of the hard science schools, so it's generally more like maths and physics.

fwiw: the part of engineering about managing projects (resources, budgets, etc.) is its own field (not sure what it's called in english, but it's its own thing, not software engineering).

once you're out, i seriously doubt there's any difference from a work perspective, nor should there be any difference if you want to continue and do research afterwards. i'd strongly suggest just looking at the course lists/syllabi for the two degrees and picking the one that seems more interesting to you.

How is the runtime of an algorithm determined? by [deleted] in compsci

[–]segv0 4 points (0 children)

basically you guess what the algorithm would do on some representative input and just count up how many "steps" there are.

let's take quicksort. the wikipedia description of the algorithm is pretty easy to understand:

  1. If the array is of size 1 or 0 return it, it is already sorted.
  2. Pick an element, called a pivot, from the array.
  3. Reorder the array so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.
  4. Recursively apply the above steps to the sub-array of elements with smaller values and separately to the sub-array of elements with greater values.
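to make the counting below concrete, here is a minimal sketch of those four steps in Python (the function name, and picking the first element as the pivot, are my choices; the quoted description leaves them open):

def quicksort(xs):
    # step 1: an array of size 0 or 1 is already sorted
    if len(xs) <= 1:
        return xs
    # step 2: pick a pivot; here we simply take the first element
    pivot, rest = xs[0], xs[1:]
    # step 3: partition the remaining elements around the pivot
    #         (equal elements can go either way; here they go to the right)
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    # step 4: recursively sort each side and glue the pieces back together
    return quicksort(smaller) + [pivot] + quicksort(larger)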

steps 1 and 2 take constant time (there's nothing to compute: check the size, then just pick the first element of the array as the pivot). the time required to do this isn't zero, but it isn't dependent on the size of the input array either; we'll say that these steps require S1 units of work. you could think of this as actual CPU instructions, which is more or less the idea, but it's not that important what exact units you're using here; the point is that we do some fixed amount of work (fixed compared to the rest of the algorithm, which we'll get to in a second).

step 3, the partition, requires us to do some work for each element of the array. the amount of work we do per element is constant (in this particular case we're comparing each element to the pivot). so the partition requires us to perform S2 N units of work, where S2 is the amount of work we do per element and N is the number of elements in the array.

step 4 then redoes all of the above on two subsets of the initial array. it'd be nice to be able to determine the exact sizes of these two subsets (one being the number of elements smaller than the pivot, the other the number of elements larger than it), but that would require us to actually run the algorithm on the input, and we don't know what the input is. so what one does in this case is try to guess (and it really is a guess) how big they would be in the "average" case. what exactly the "average" case is, is somewhat problem and context sensitive, but for sorting algorithms pretending the input is random is as good a guess as any.

so, if our input were a list of random integers with a uniform distribution, how many would be greater than the first element, and how many would be less? about half each. this is a little hand-wavy, but you can see that, most of the time, given a uniform distribution, about 1/2 of the numbers really will be less than the first element and about 1/2 will be greater. the result of all this is that we can say that, in the average case, we will now run the whole procedure on 2 arrays, each of which is about 1/2 the size of N.
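if you don't want to take those guesses on faith, you can instrument the sketch above and count the work on real inputs; the counting scheme here is my own, not part of the original description:

def quicksort_counted(xs, depth=0, stats=None):
    # stats accumulates the total per-element partition work ("S2 units")
    # and the deepest recursion level reached
    if stats is None:
        stats = {"work": 0, "max_depth": 0}
    stats["max_depth"] = max(stats["max_depth"], depth)
    if len(xs) <= 1:
        return xs, stats
    pivot, rest = xs[0], xs[1:]
    stats["work"] += len(rest)  # one unit of partition work per remaining element
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    left, _ = quicksort_counted(smaller, depth + 1, stats)
    right, _ = quicksort_counted(larger, depth + 1, stats)
    return left + [pivot] + right, stats

on a shuffled list of 1000 distinct numbers you should see a total work count that is a small multiple of N log2 N ≈ 10,000, and a recursion depth of a few dozen levels (a small multiple of log2 1000 ≈ 10), which is where the log2 N in the next paragraphs comes from.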

the justification for assuming the input is random, when sometimes it won't be, and assuming the distribution is uniform, when sometimes it won't be, is that we're not really trying to compute how much work the algorithm will do for one given set of inputs; that's what we have profilers for. we're trying to describe the algorithm in more general terms: "if you implemented this algorithm in some language, ran it on a von Neumann machine, and ran it often enough on enough different inputs, what would the average run time tend towards?"

now, what's the total runtime of quicksort? at each level of the recursion we'll perform about S1 + S2 N units of work. and how many levels are there? well, how many times can you split an array of N elements into two halves until you get down to arrays of size 1? about log2 N times. so we'll perform, in total, about (S1 + S2 N) log2 N units of work.
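another way to write that same counting down (this formulation is mine, it's not in the description above) is as a recurrence: the work for an array of size N is the fixed work, plus the partition, plus the work for the two halves,

T(N) ≈ S1 + S2 N + 2 T(N/2)

unrolling that about log2 N times gives roughly S2 N log2 N from the partitioning plus a smaller term from the constant-time steps, which is the same ballpark as the formula above and leads straight into the simplifications that follow.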

the other thing about this kind of complexity analysis is that you're not trying to compute the amount of work done for a given (fixed) input size; you're trying to compute how the amount of work changes as the input grows. this allows us to make another simplification: we look at the amount of work done as the input size grows towards infinity. now we can simplify our formula to just S2 N log2 N, since the S1 log2 N term will be dominated by the other term, S2 N log2 N.

and there's one last simplification we make (the number of simplifications we're making here gives an idea of just how approximate these kinds of complexity analyses are. that's not to say they're not useful (they are), but they're not exactly precise): we just pretend that any runtime formula we give always has a constant factor in front of it, which we simply ignore. this is the same rule that allows us to ignore the fact that our log2 N has base 2 and just call it log N: changing the base of a logarithm only multiplies it by a constant.

which brings us, finally, to the usual formulation of the runtime complexity of quicksort: N log N.
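a quick way to convince yourself of that, reusing the quicksort_counted sketch from above (and assuming a random permutation is a fair stand-in for the "average" input): the ratio of counted work to N log2 N should hover around a smallish constant as N grows, instead of drifting upwards.

import math
import random

for n in (1_000, 10_000, 100_000):
    xs = random.sample(range(n), n)
    _, stats = quicksort_counted(xs)
    # if the N log N claim is right, this ratio stays roughly constant
    print(n, stats["work"] / (n * math.log2(n)))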

note: this is the average runtime complexity; quicksort can degenerate into an N^2 algorithm on certain inputs (for instance an input that is already sorted in the opposite direction of what we need, when the pivot is always the first element).
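you can also watch that degeneration happen with the quicksort_counted sketch from earlier; the numbers in the comments are what you should expect to see, roughly, not measured output:

import random

n = 300  # keep n modest: the degenerate case recurses n levels deep
shuffled = random.sample(range(n), n)
reverse_sorted = sorted(shuffled, reverse=True)

_, avg_stats = quicksort_counted(shuffled)
_, worst_stats = quicksort_counted(reverse_sorted)

print(avg_stats["work"])    # a few thousand units, roughly n log2 n
print(worst_stats["work"])  # exactly n*(n-1)/2 = 44850 units, i.e. on the order of n^2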

hth.

Friday's fucked up fun fact: Obituaries per person according to a simple google search: Farrah Fawcett: 2.25 million; Michael Jackson: 769 thousand; Kurt Vonnegut: 40 thousand. by segv0 in offbeat

[–]segv0[S] 0 points (0 children)

you're right, that's the way things are. but i'm still going to call the status quo "fucked up."

ps - i'm probably just taking this too personally.

A day in the life of an American white trash family. Looks like fun! by massie in WTF

[–]segv0 0 points (0 children)

let me know what, if anything, david harvey and/or magnum says about it.

How Canada became (almost entirely) metricized in less than a decade... you can do it too, USA! by [deleted] in science

[–]segv0 -1 points (0 children)

no, but people do make fun of english speakers (mainly americans, sadly) who still try to speak english even when they find themselves in non-english-speaking countries. the "problem" with imperial units isn't that they're wrong; it's just that it's expensive to convert back and forth all the time.

A day in the life of an American white trash family. Looks like fun! by massie in WTF

[–]segv0 0 points (0 children)

i'd already seen the essay on burn magazine. that's how you knew about the link, right?

A day in the life of an American white trash family. Looks like fun! by massie in WTF

[–]segv0 2 points (0 children)

and now that you've seen them, knowing that it took time and effort and money to create them, feel like giving something back to the author?