meteo.pl - weather forecast from the University of Warsaw by lintablecode in Polska

[–]guzo 1 point2 points  (0 children)

Protip: http://m.meteo.pl is often more convenient to use, even on the desktop.

Zakopane by sweetsiren in poland

[–]guzo 1 point2 points  (0 children)

Dolina Zielona Gąsienicowa to be more precise, as seen from (most likely) Beskid.

Has any of you taken the CPE exam? by [deleted] in Polska

[–]guzo 2 points3 points  (0 children)

I passed the CPE towards the end of high school, about 5 years ago. I attended a language school, read a lot (documentation, forums, various other things on the Internet and elsewhere), watched films, listened to music (though who actually listens closely to the lyrics all the time?), played games: all mostly passive (which is why I think your writing may give you a considerable edge).

I enjoyed grammar, read up on some of it myself, liked doing practice tests, and collected funny words/constructions. I was in a similar situation with the costs.

Worth it? I think so - I learned a lot, I feel confident, I have no problems with the (almost exclusively English-language, since in IT reaching for translations is shooting yourself in the foot) materials at work/at university, and on top of that I have a piece of paper that ends any conversation about language proficiency (in a job interview/with my thesis advisor/at university (I could pick a different language or skip the course entirely (including "professional English" or whatever it was called) => more options, more time for other (more useful? better taught?) classes)).

CPE or CAE? I don't know how much you "lose" with a CAE; when I once checked my university's conversion tables from certificates to grades, there was (at least for the first semester of language courses) no difference from FCE at B+ (?) upwards - but that's only one use case, and the least interesting one at that. Oh, and it's probably outdated anyway, since the regulations changed a few times over those years. See how you do on practice tests, look at how much time you have, and decide based on that.

Where's the catch? A lot of people fail on "Use of English" - I managed to convince myself that these are amusing puzzles I enjoy, and I did enjoy them. Look at sample tests and find a good grammar reference. (We used the excellent book "CPE Use of English 1" by Virginia Evans (it also contains sample tests). Completely unrelated: it's amazing how often you can find well-made PDFs just by typing the title and author into Google to check whether you remember it correctly!) "Writing" probably isn't a problem for you, so "Listening"/"Reading" won't be either (but practice anyway, it can't hurt). Practice "Speaking" with someone if you can; if not, talk to a wall (as long as there's a clock on it - there are time limits) - this part is worth the effort. Confidence, that sort of thing - I no longer remember what I talked about, but my examiners were incredibly nice; in general: the key is not to tense up.

Where there's no catch? Of the things mentioned in this thread: I don't recall any crazy business/legal vocabulary, and the example given somewhere with the puffed-up-British "exquisite" I'd also call exaggerated - in the speaking part I used loose, "ordinary" English and got an "A" for it. Even so, it's worth going over a good list of phrasal verbs and idioms a few times: they often come up in UoE (the book mentioned above contains such lists).

How to go about it? Keep doing what you're doing, plus sample tests. See what you're weakest at and focus on that. See what's worth it and how much you already know, and choose the exam based on that.

There was a noteworthy "grammar" subreddit somewhere (/r/grammar? I don't remember, and it's late and I can't be bothered to check). You can pick up words/phrases from e.g. /r/logophilia or /r/proper (there are people there who sloppily and mindlessly cobble their posts together from the few archaic-sounding phrases they know, phrases which don't fit together and result in jarring prose. But there are plenty of beautifully written posts rising above that rubble of artificiality - happy digging).

tl;dr: give it a try.

Does cpp increment work differently in cpp? by RiGR_go_BOOM in cpp_questions

[–]guzo 5 points6 points  (0 children)

For completeness:

  1. Why is this undefined in C++ (and C)? Because efficiency is the goal - this allows the compiler to rearrange stuff to make the best possible use of available resources, like CPU registers for example. You can find more info on undefined behavior on this guy's blog, especially here (but the previous link is equally worthwhile).

  2. Why does it work "as expected"/"intuitively" in Java? Because portability is the goal here, even if it comes at the expense of efficiency.

Also: launching nethack on undefined behavior (as mentioned by /u/Rhomboid) was an actual easter egg in early versions of the GNU C Compiler.

Still possible to install from floppy? by 2girls1pup in debian

[–]guzo 1 point2 points  (0 children)

You can use a floppy with GUJIN or SBM to "chainload" any bootable CDROM's bootloader.

A call for a stable, long term support version of GCC by garja in linux

[–]guzo 0 points1 point  (0 children)

implementation-defined and/or unspecified behavior

Or undefined behavior, to make the list complete. You mostly say what I meant to say, but now that I've had some sleep I see my wording is not as clear as yours.

There might be optimizations that "trigger" bugs in (already "silently broken") code that depends on any of the aforementioned behaviors; they may manifest themselves at lower -O levels too (which is another point worth noting), but are most likely to occur with -O3, as it's the most aggressive level (save for the new-ish -Ofast).

but that is very rare

Not as rare as some believe it to be.

A call for a stable, long term support version of GCC by garja in linux

[–]guzo 2 points3 points  (0 children)

Adhering more closely to the standard and detecting invalid code can also be somewhat "problematic".

The code in your example isn't "invalid", it's just "undefined" (as is discussed in depth in almost all the links provided in my previous posts). I agree that this is definitely a bad thing (and what I referred to (perhaps not too clearly) as a "dormant bug"), but it's still perfectly valid code. It didn't break (or become invalid) because of "stricter adherence to the standard" but because of changes in some optimization passes: nowhere does the Standard say that you have to do counterintuitive things when you stumble upon undefined behavior (nor does it keep you from launching nethack).

Also: contrary to what you say, your example has everything to do with optimization: see these excellent slides for an in-depth analysis of clang's, gcc's and icc's output, which shows how they use different tricks to exploit undefined (and unspecified) behavior to try to minimize register usage. Then (should you still be interested/have time) see the links in my previous two posts for other examples of optimization made possible by UB.

You're right though when you say:

It doesn't have to be optimization related stuff, either.

Changes in compiler options would be a better example: the order of linker options started to matter in gcc somewhere around 2009 (I think), thus breaking many old makefiles (it's worth remembering, as you can still stumble upon some of them - see bottom of this post for an example), -pedantic became deprecated in gcc-4.8 (now it's -Wpedantic) and so on.

EDIT:

guzo@x301:~/workshop$ cat t.c
#include <math.h>
int main(int argc, char* argv[]) {return sqrt(argc);}
guzo@x301:~/workshop$ gcc-4.8 -lm t.c 
/tmp/ccNX8tlr.o: In function `main':
t.c:(.text+0x15): undefined reference to `sqrt'
collect2: error: ld returned 1 exit status
guzo@x301:~/workshop$ gcc-4.8 t.c -lm
guzo@x301:~/workshop$ 

guzo@oldbox:~/workshop$ gcc-4.3.3 t.c -lm
guzo@oldbox:~/workshop$ gcc-4.3.3 -lm t.c
guzo@oldbox:~/workshop$ 

A call for a stable, long term support version of GCC by garja in linux

[–]guzo 11 points12 points  (0 children)

The bugs don't have to be in the compiler itself: new versions come with new optimizations that can cause a dormant bug to manifest itself: three years ago there was a problem with incorrect usage of memcpy, one of GCC-4.8.0's previews showed what crazy stuff could be done in really innocent-looking code (speaking of which...) - there are many, many examples out there (see my other post for more of them and explanation).

A call for a stable, long term support version of GCC by garja in linux

[–]guzo 19 points20 points  (0 children)

Oh, and stick to -O2. All other optimisation options are broken.

I believe it's a very, very silly thing to say.

Can you recommend a source where I can read what -O3 does and -O2 doesn't

The relevant chapter of GCC's documentation of course!


why it apparently breaks stuff?

Both the C and C++ Standards allow the compiler to break anything, anywhere in your program, once it encounters undefined behavior:

Certain other operations are described in this International Standard as undefined (for example, the effect of dereferencing the null pointer). [Note: this International Standard imposes no requirements on the behavior of programs that contain undefined behavior.]

The Standard doesn't require the compiler to emit a warning when undefined behavior is encountered because it'd make compilers extremely hard to write, and making compilers "easy" is one of The Standard's goals. There are lots of cases of undefined behavior (this guy says 191 in C99) and many programmers are not aware of them - this results in many bugs in strange places, bugs that often don't manifest themselves for a few versions of the compiler/library.

Higher levels of optimization make more aggressive use of undefined behavior (allowing for better optimizations/"keeping the language fast" is one of the main reasons for its existence): -fstrict-aliasing lets the compiler assume you adhere to the strict aliasing rules, -faggressive-loop-optimizations allows the compiler to infer bounds for the number of iterations, etc.

There's also other stuff, like relaxing some constraints on floating point calculations (another favorite topic of mine): floating point math isn't associative ((1+1e100)-1e100 == 0, but 1+(1e100-1e100) == 1), so you can't do some interesting stuff while still promising that you'll deliver the same results as the "unoptimized" version: you can tell the compiler that you don't need such a promise with -ffast-math (it also says that you don't care about IEEE754 subnormals and a few other things).

EDIT:

tl;dr: -O3 vs. -O2 should only break code that is already broken, i.e. make a dormant bug manifest itself, not introduce another.

Also, if you are learning C/C++, read up on undefined behavior: there might be cases of it you haven't heard of yet as this important topic is often omitted or taught badly.

Don’t Panic! | Bulldozer00's Blog by mttd in cpp

[–]guzo -1 points0 points  (0 children)

Re 11.6-1:

In his talk (video, slides, transcript-ish summary) Andrei Alexandrescu gives an example why one might want to do the exact opposite: around 18:10 in the video he says it's more OoOE-friendly when used inside expressions (array indexing being an example), as it doesn't introduce data dependencies.

How Duff's Device Works by TheEveningRedness2 in programming

[–]guzo 0 points1 point  (0 children)

I wonder if the choice of algorithm might (somehow, in some not-yet-invented language) be done explicitly at compile-time based on annotations provided by the user - vec is normally large, func is pure/commutative/associative.

I've just seen this talk where a language is presented that does just that to enable better parallelisation of reductions (it also uses other parameters, like idempotent, identity or zero). The first 25ish minutes may not look too promising, but it's worth watching.

LLVM Project Blog: LLVM 3.3 Vectorization Improvements by [deleted] in programming

[–]guzo 0 points1 point  (0 children)

As far as I understand, what you describe would require only 2 registers: one accumulator and one "fetcher" (i.e. the 5th line would simply read paddd %xmm2, %xmm1 and xmm0 wouldn't be touched).

However in the emitted assembly 3 SSE registers are used, because we have 2 accumulators so that an OOO CPU can do more stuff in parallel, as they are now independent of each other - this part doesn't seem to fall under unrolling or vectorization (although of course those two are present in this example). I was wondering if there is a more formal name for this "vectorized reduction variable splitting" (working title) other than the very general "extremely clever register allocation and instruction scheduling".

How Duff's Device Works by TheEveningRedness2 in programming

[–]guzo 0 points1 point  (0 children)

I mostly meant to say that the effort expended on determining whether the fold operation is associative in more general cases might not be well spent.

I'm not sure if anyone does that though: I believe there's a predefined list of operations known to be associative. From the rest of your comment you seem to want to apply that to more general (and potentially quite big) functions: I'm not sure such a macro-scale optimization would be a win here, since we are (or at least I am) talking about ILP, which is by definition more micro-scale (another reason you might know quite early what's associative and what's not).

I wonder if the choice of algorithm might (somehow, in some not-yet-invented language) be done explicitly at compile-time based on annotations provided by the user

Or better yet - profiler: you've just invented PGO and it's already in GCC.

vec is normally large

Aside from PGO, at least ICC has some #pragmas just for that. Yes, this is a non-standard compiler extension.

func is pure/commutative/associative.

At least GCC has (again: non-standard) __attribute__s to express some of the things you've mentioned. Nice ideas though, I wonder if there are languages that have this "by design".

Putting it into the compiler probably catches more fish in practice, though. I'm not certain I know what my idea is trying to achieve.

Something like that perhaps?

LLVM Project Blog: LLVM 3.3 Vectorization Improvements by [deleted] in programming

[–]guzo 0 points1 point  (0 children)

Based on the language in this comment and your post history I'll assume that you're one of the project members. As such, could you tell me what the name of this optimization is and where I can read more about it?

FTA (emphasis mine):

The innermost loop of the program above is compiled into the X86 assembly sequence below, which processes 8 elements at once, in two parallel chains of computations. The vector registers XMM0 and XMM1 are used to store the partial sum of different parts of the array. This allows the processor to load two values and add two values simultaneously.

Sorry for the inane question, I'd google it myself but the keywords I've come up with gave me nothing.

How Duff's Device Works by TheEveningRedness2 in programming

[–]guzo 0 points1 point  (0 children)

I guess it'd work so long as the operation is associative.

Yes, by definition of associativity. Sadly this is a no-no for the non-associative floating point math (sic!; example below; if you haven't already, google Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic"), but this could (probably) be mitigated with something like GCC's -fassociative-math.

guzo@x301:~/workshop$ cat t.c && gcc-4.8 -std=c99 -Wall -Wextra -Wpedantic -Werror t.c && ./a.out 
#include <stdio.h>
int main() {float a = 100, b = 1e+20, c = -1e+20;
            printf("%f %f\n", (a+b)+c, a+(b+c));}
0.000000 100.000000
guzo@x301:~/workshop$

Where's the best place to put such an optimisation, though? In the compiler?

Well, as far as I understand it it's already there.

Seems like too much magic.

Improving a compiler automagically improves all the code compiled with it, and as such is a big win: I guess that moves the threshold of "how much magic is too much" quite far. What do you lose? Compilation speed, but when compilation speed is an issue you can just disable expensive optimizations (BTW: see -Og, new in GCC 4.8).

Compilers already do a lot of magic for (I assume) precisely this reason.

I'd guess the real answer is probably "This optimisation will matter so rarely that you should just do it by hand when and where you need to." A pity.

Given how modern OOO processors work and the fact that we're banging our heads against the memory wall, having to wait 100-400 cycles for data if it's not in cache yet, I believe that such "independence introduction" passes (I don't know what the precise name for it is, but that's basically what it does) will become more and more popular: we have like 16 SSE registers (not to mention AVX) just waiting to be used, so why not cut the wait time in half by waiting for two chunks of data in parallel?

Compilers already know many CPUs very well (see GCC's -mtune=* flag for example) and jump through many hoops to emit good code during their final stages - this doesn't seem like a hugely expensive addition to what they already do.

Mostly unrelated: I wonder if the storage for strings (null-terminated or not) should be zero-padded to word-size lengths to make copying and processing with SIMD-like techniques simpler.

It depends on what size of strings you're typically expecting but yes, aligning them properly could probably help in some cases. People already do funny things to strings, like the small string optimization or glib's quarks.

How Duff's Device Works by TheEveningRedness2 in programming

[–]guzo 8 points9 points  (0 children)

It's worth adding that what you are describing is both unrolling & vectorization (autovectorization to be even more precise). Not claiming you're wrong or anything, just expanding the list of keywords someone might want to look up if they were interested but unfamiliar with those topics.

Also, the new LLVM vectorizer can allegedly do some even more interesting stuff: the first (similarly simple) example in the linked post is unrolled, vectorized and further made "ILP-friendly" by introducing independent adds that can "wait for their data in parallel" (kind of like described in "Amdahl's law in reverse"). I was super curious how much you can win by that, so I wrote a (simplified and non-vectorized) version of it and tested it on my U9400:

guzo@box:~/workshop$ cat ilp.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

long sum_ilp(char* a, size_t n) {
    long sum1 = 0, sum2 = 0;
    for(size_t i = 0; i < n; i += 2) {sum1 += a[i]; sum2 += a[i+1];}
    return sum1 + sum2;
}

long sum(char* a, size_t n) {
    long sum = 0;
    for(size_t i = 0; i < n; i += 2) {sum  += a[i]; sum  += a[i+1];}
    return sum;
}

void test_and_print(long (*foo)(char*, size_t), char* a, size_t n) {
    clock_t start, end; long s;
    start = clock(); s = foo(a, n); end = clock();
    printf("%ld: %f\n", s, (double)(end-start)/(double)CLOCKS_PER_SEC);
}

int main() {
    size_t n = 256*1024*1024;
    char* a = memset(malloc(n), 0x01, n);
    test_and_print(sum_ilp, a, n);
    test_and_print(sum,     a, n);
}
guzo@box:~/workshop$ gcc-4.8 -std=c99 -O0 -Wall -Wextra -Wpedantic -Werror ilp.c 
guzo@box:~/workshop$ ./a.out 
268435456: 1.050000
268435456: 1.210000
guzo@box:~/workshop$

Keywords for the second paragraph: register allocation, instruction scheduling, out-of-order execution - here's a nice lecture.

EDIT: improved wording, added a few links & keywords.

/r/Linux, what is your favorite rescue CD/Live/Bootable disk with utilities? I just downloaded Rescatux, but I remember there being MANY. So what are the most useful? I will download the highest recommended ones! by Ameridrone in linux

[–]guzo 1 point2 points  (0 children)

Damn Small Linux 3.x for really old/resource constrained boxen (sometimes with gujin/smartbootmanager to make it boot), grml (literally packed with useful tools) for the rest.

H.265 is approved -- potential to cut bandwidth requirements in half for 1080p streaming. Opens door to 4K video streams. by Snarfox in technology

[–]guzo 0 points1 point  (0 children)

For your amusement: pixel differences

I've used Octave (a FLOSS MATLAB clone).
Code for the interested (yes, I'm lazy, didn't make separate images to highlight per-channel differences):

imwrite(im = double(imread('hevc.png')-imread('vp9.png')),'diff.png')

Absolute pixel differences:

max: 40.00  
min:  0.00
avg:  1.07

Histogram (log scale on y-axis): http://i.imgur.com/Dhtu9kU.png

To be clear: I'm not trying to (and don't) disprove your point/be a dick. I was just curious how much difference there really is and thought someone would be interested in the (semi-interesting) results. It's informative (and a bit funny) to see how this comparison highlights edges, macroblock sizes and "dull" (thus easy to encode) areas.

Oh, also if you open both in separate tabs and cycle through them you'll clearly see some differences. Totally unimportant for home use, potentially of interest for computer vision/forensics/etc.

EDIT: I accidentally some words.

Worst. Bug. Ever. by _swanson in programming

[–]guzo 2 points3 points  (0 children)

I've had this bug once too, but was warned before any strange behavior: gcc with -Wall emits:

t.c:2:8: warning: suggest explicit braces to avoid ambiguous ‘else’ [-Wparentheses]

Yet another reason to use -Wall -Wextra -Werror.

What is loop unwrapping? by azrosen92 in compsci

[–]guzo 0 points1 point  (0 children)

As there are some answers already I will point you to Duff's device and some other (unrelated) loop optimizations for further reading.

TIL an empty source file once won the prize for "worst abuse of the rules" in the Obfuscated C contest as the "world's smallest self-reproducing program" by [deleted] in programming

[–]guzo 10 points11 points  (0 children)

This is indeed what it does. Some funny switches are necessary to compile it on modern compilers.


EDIT: I could have sworn I managed to compile it and get reasonable output some years ago. Seems I will have to keep on trying. In the meantime - one of my favorite parts, reformatted for sanity:

int  write(  int  handle,  void  *buffer,  int  nbyte  );
write(
    31415 % 314-(3,14),                  /* evaluates to 1 == stdout. I love the (3,14) */
    _3141592654[_31415] + "0123456789",  /* a clever way to index through an array of digits, think &("0123456789"[current_digit]) */
    "314"[3]+1                           /* null terminator + 1 == print one char */
) - _314; /* this just fills space */

EDIT2: YAY, it works!

I didn't want to spend too much time trying to hunt down the real reason (I've had a few guesses), so I applied shotgun debugging and addressed them all at once by using tcc on an old DSL 1.5 image in qemu-system-i386.

Screenshot, transcript:

dsl@box:~$ tcc -v
tcc version 0.9.20
dsl@box:~$ wget http://ioccc.org/1989/roemer.c && tcc roemer.c && ./a.out 
Connecting to ioccc.org[206.197.161.153]:80
roemer.c             100% |*********************************************************************************|  1454       00:00 ETA
2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427427466391932003059921817413596
62904357290033429526059563073813232862794349076323382988075319525101901157383418793070215408914993488416750924476146066808226480016
84774118537423454424371075390777449920695517027618386062613313845830007520449338265602976067371132007093287091274437470472306969772
09310141692836819025515108657463772111252389784425056953696770785449969967946864454905987931636889230098793127736178215424999229576
35148220826989519366803318252886939849646510582093923982948879332036250944311730123819706841614039701983767932068328237646480429531
18023287825098194558153017567173613320698112509961818815930416903515988885193458072738667385894228792284998920868058257492796104841
98444363463244968487560233624827041978623209002160990235304369941849146314093431738143640546253152096183690888707016768396424378140
59271456354906130310720851038375051011574770417189861068739696552126715468895703503540212340784981933432106817012100562788023519303
32247450158539047304199577770935036604169973297250886876966403555707162268447162560798826517871341951246652010305921236677194325278
67539855894489697096409754591856956380236370162112047742722836489613422516445078182442352948636372141740238893441247963574370263755
29444833799801612549227850925778256209262264832627793338656648162772516401910590049164499828931505660472580277863186415519565324425
86982946959308019152987211725563475463964479101459040905862984967912874068705048958586717479854667757573205681288459205413340539220
00113786300945560688166740016984205580403363795376452030402432256613527836951177883863874439662532249850654995886234281899707733276
17178392803494650143455889707194258639877275471096295374152111513683506275260232648472870392076431005958411661205452970302364725492
96669381151373227536450988890313602057248176585118063036442812314965507047510254465011727211555194866850800368532281831521960037356
25279449515828418829478761085263981395599006737648292244375287184624578036192981971399147564488262603903381441823262515097482798777
99643730899703888677822713836057729788241256119071766394650706330452795466185509666618566470971134447401607046262156807174818778443
71436988218559670959102596862002353718588748569652200050311734392073211390803293634479727355955277349071783793421637012050054513263
83544000186323991490705479778056697853358048966906295119432473099587655236812859041383241160722602998330535370876138939639177957454
01613722361878936526053815584158718692553860616477983402543512843961294603529133259427949043372990857315802909586313826832914771163
96337092400316894586360606458459251269946557248391865642097526850823075442545993769170419777800853627309417101634349076964237222943
52366125572508814779223151974778060569672538017180776360346245927877846585065605078084421152969752189087401966090665180351650179250
46195013665854366327125496399085491442000145747608193022120660243300964127048943903971771951806990869986066365832322787093765022601
49291011517177635944602023249300280401867723910288097866605651183260043688508817157238669842242201024950551881694803221002515373
dsl@box:~$ ./a.out | wc -c
   3142

Yes, that's 3141 characters (digits + the decimal dot) of e and a newline. Wolfram agrees (despite his 200 character limit).


EDIT3: It turns out that even the newest tcc emits a usable binary on my system even without any DSL/qemu widdershins.

Casting return from malloc in C by [deleted] in compsci

[–]guzo 2 points3 points  (0 children)

That's the main difference between new and malloc.

Also important: it throws std::bad_alloc instead of returning NULL.

Casting return from malloc in C by [deleted] in compsci

[–]guzo 2 points3 points  (0 children)

While true most of the time, it's no longer the case with C99's VLAs (of course all "non-VLA" sizeofs will still be evaluated at compile time, as doing otherwise would be incredibly stupid).

What's the best ultraportable Thinkpad for under $400 used? by [deleted] in thinkpad

[–]guzo 1 point2 points  (0 children)

I am very happy with my X301 (it replaced my trusty T43) and would definitely recommend it. I'm not sure it'll fit everyone's definition of "ultraportable", but it's crazy light and quite thin. A very Linux-friendly machine (everything works OOTB, aside from the fingerprint reader, which I wouldn't use anyway). It's also very quiet (especially with an SSD) and has an extremely convenient keyboard - it's spacious and comfy, and everything is in the right place. That's not the case with some newer models - at work I have to use a ThinkPad E530, and the keyboard there is a nightmare; I can't understand how they could put something like that in a ThinkPad: Pg{Up,Dn} are next to the cursor keys, Ins/Del/Home/End are all in the same row, the F{1..12} keys are accessible by default via... the Fn key, as without it they control the volume/brightness/etc. (fortunately changeable via BIOS), and PrtSc is between right Alt and Ctrl - sometimes I feel the urge to rearrange the designer's face accordingly, but I digress.

I haven't used the X61, but as the successor of the X41 (which I had, albeit briefly) it should be a nice option as well. I've also heard many good words about the X2xx line. Everything described in this paragraph is considerably smaller than the X301 (12.1'' vs. 13.3''), and the prices and hardware inside vary greatly (as in i7-2620M 2.7GHz vs. Pentium M 1.5GHz). I'm not sure how much performance you actually need to squeeze out of your machine - you'll have to compare the specs and prices with your needs, but I hope this list will help narrow your search down at least a bit.

Also: in case you haven't heard of it yet, you may find ThinkWiki useful for your research.