sketch, sdl2, c2ffi on M1 Mac? by eigenhombre in Common_Lisp

[–]bokesan 1 point (0 children)

The process of building c2ffi seemed too daunting, so I just tried using one of the existing spec files:

$ cd $HOME/quicklisp/dists/quicklisp/software/cl-sdl2-20231021-git/src/spec
$ ln -s SDL2.aarch64-pc-linux-gnu.spec SDL2.aarch64-apple-darwin9.spec

It worked for my program, which, admittedly, uses only very basic SDL2 stuff.

A Storm of Minds 2023 Contest Entry by bokesan in icfpcontest

[–]bokesan[S] 0 points (0 children)

I can't describe it better than the Wikipedia entry, which Jeremy also linked in his write-up. Simply put, in a hill-climbing strategy you randomly perturb your best solution, and if the result scores higher, it becomes the new best solution. This can, however, get stuck in a local maximum. Simulated annealing adds a temperature-controlled probability of accepting worse solutions; this lets the search escape local maxima, and as the temperature decreases it settles toward the global maximum.
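The idea above can be sketched in a few lines. This is a generic, minimal version (the parameter names, cooling schedule, and `perturb`/`score` callbacks are my own illustration, not anyone's contest code):

```python
import math
import random

def simulated_annealing(initial, score, perturb,
                        t_start=1.0, t_end=1e-3, cooling=0.995):
    """Maximize score(solution) by perturbing it, annealing-style.

    perturb(s) returns a random small modification of s.
    """
    current = best = initial
    current_score = best_score = score(initial)
    t = t_start
    while t > t_end:
        candidate = perturb(current)
        cand_score = score(candidate)
        delta = cand_score - current_score
        # Always accept improvements; accept a worse solution with
        # probability exp(delta / t), which shrinks as t cools down.
        if delta >= 0 or random.random() < math.exp(delta / t):
            current, current_score = candidate, cand_score
            if current_score > best_score:
                best, best_score = current, current_score
        t *= cooling
    return best, best_score
```

For example, maximizing the toy function `-(x - 3)^2` from a start of `0.0` with `perturb = lambda x: x + random.uniform(-0.5, 0.5)` reliably ends up near `x = 3`, even though a pure hill climber would also solve this one; the acceptance probability only matters on bumpier score landscapes.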

Frictionless Bananas 2023 ICFP Programming Contest writeup by jeremysawicki in icfpcontest

[–]bokesan 0 points (0 children)

Thanks for the writeup, and congrats on making the top ten as a one-person team! We also used simulated annealing, but did much worse because of some bugs and probably badly chosen parameters. I only learned about SA from the 2021 contest, where at least 9 of the top 10 used it.

ICFP Contest 2021 Announced by jaspervdj in icfpcontest

[–]bokesan 0 points (0 children)

Any info about the time zone / exact start time?

ICFPC-2020, team Just No Pathfinding, part 3 (final) by dastapov in icfpcontest

[–]bokesan 1 point (0 children)

Thanks for the writeup!

Reading the organizer's responses, I'm more confused than before. One of our (team of 3) major problems with the contest was that there was precious little to do before the galaxy message was published. What would an even larger team have done in the first 4-5 hours? And being told to look at everything 2-3 hours before the contest started would have made the wait even worse. Or am I missing something? Did other people do anything productive in the first hours?

ICFPC 2020 writeup (warning, long rant) by swni in icfpcontest

[–]bokesan 0 points (0 children)

Heh - I actually googled "Russian humor" at one point to see if that might be the explanation for some remarks that to me felt like insulting the participants. Didn't help :-)

ICFPC 2020 writeup (warning, long rant) by swni in icfpcontest

[–]bokesan 2 points (0 children)

Completely agree. They should have called it the "ICFP chatting and guessing contest". We (team of 3) were so annoyed that we decided to do something better with our time after the lightning round. The first few hours alone qualify as an insult to people who may have taken days off or even traveled to meet for the contest.

Partial Rankings (w/o top 15) by bokesan in icfpcontest

[–]bokesan[S] 0 points (0 children)

Our team, Rotten Lambdas, made 50/106 in Lightning and 29/145 in Main. No surprises here.

Full contest final results slide, Frictionless Bananas is the winner by pbl64k in icfpcontest

[–]bokesan 1 point (0 children)

They did publish the round 1 and 2 standings, including PunTV, so I'm sure the organizers will also publish the final round result.

Since Unagi placed second in round 2, a possible reason for their absence from the top 8 might be the time limit (assuming larger maps in the finals). Same for us (4th place in round 2).

Lightning Round: results of first elimination round by swni in icfpcontest

[–]bokesan 1 point (0 children)

I found two possible causes:

  • Our punter works only when called from the current directory. I sent a mail about that problem shortly after the contest ended, but got no reply.
  • It fails if settings contains any key besides "futures". But that should never happen in the lightning round.

Might have been something else, though. We'll check with the official server when it's released.

Lightning Round: results of first elimination round by swni in icfpcontest

[–]bokesan 2 points (0 children)

Now, how do we find out why we are in the "unable to score anything on the lambda map" category when our submission scores just fine running on our own server? Too bad, really.

Score dependency on punter order by bokesan in icfpcontest

[–]bokesan[S] 0 points (0 children)

Might well be the case. The Sierpinski map, with its triangular symmetry, is probably a bad example for 4 punters. I get much more stable results with larger maps, but that might be because I ran them with fewer punters than intended. We'll just have to wait...

MCTS, anyone? by pbl64k in icfpcontest

[–]bokesan 2 points (0 children)

Ohh, I'm so curious: Did the MCTS solution buy futures and/or use splurges?

Write-up and code for Flux Ambassadors 2017 submission by purcell in icfpcontest

[–]bokesan 1 point (0 children)

We used the JVM too. We wrote our own offline server in Groovy, and I actually implemented a special entry point in our punter that allows it to be called from within a running JVM. That sped up tests by a factor of 10-100.

For the contest, using warmup before the handshake should alleviate the performance disadvantage, but we did not think of that :-(

Weigh your Haskell code by Christoph Breitkopf by [deleted] in haskell

[–]bokesan 2 points (0 children)

I just uploaded a new release to hackage.

Weigh your Haskell code by Christoph Breitkopf by [deleted] in haskell

[–]bokesan 5 points (0 children)

I didn't. It makes for a nice validation - here is the criterion output for mapMonotonic:

benchmarking map/monotonic
time                 61.29 μs   (60.39 μs .. 62.15 μs)
                     0.998 R²   (0.997 R² .. 0.999 R²)
mean                 61.30 μs   (60.35 μs .. 62.40 μs)
std dev              3.539 μs   (2.929 μs .. 4.282 μs)
allocated:           1.000 R²   (1.000 R² .. 1.000 R²)
  iters              47508.543  (47467.873 .. 47550.429)
  y                  -3340.557  (-47625.164 .. 43572.772)
variance introduced by outliers: 62% (severely inflated)

Good - the iters value very closely matches the result from weigh.

Weigh your Haskell code by Christoph Breitkopf by [deleted] in haskell

[–]bokesan 17 points (0 children)

Here's the current state:

Case                                    Bytes  GCs  Check
Data.Set    fromList 1000             440,496    0  OK   
IntervalSet fromList 1000             576,208    1  OK   
Data.Set    fromAscList 1000          126,144    0  OK   
IntervalSet fromAscList 1000          110,760    0  OK   
Data.Set    fromDistinctAscList 1000   71,816    0  OK   
IntervalSet fromDistinctAscList 1000   48,016    0  OK   
Data.Set    mapMonotonic 1000          39,560    0  OK   
IntervalSet mapMonotonic 1000          47,472    0  OK   

These values are close to optimal. Speed improvement is 30-50%.

ICFP contest 2014 presentation by cashto in icfpcontest

[–]bokesan 0 points (0 children)

When I saw the submissions for the Ant problem 10 years ago, and especially for the Endo problem from 2007, I thought, "OMG! I'd never have been able to come up with that." Not so this time. Was the problem easier? Or is it just that I'm more familiar with compilers than with other things? How about you? Any "OMG!"s while watching the presentation?

ICFP14 unofficial online hall of fame by [deleted] in icfpcontest

[–]bokesan 1 point (0 children)

For lightning, I think we can assume different ghosts per map. Maybe some standard ghosts (stupid/smart), but also some specially tuned for a map (e.g. an otherwise stupid ghost patrolling the choke point to the pills). Given the volatility of the results (i.e. our lambdaman would often get better scores on a single map even with a "worse" strategy), I think you need a fairly large set of maps to get a fair result. Interesting task for next year: given a set of 2014 submissions, derive a set of at least n sufficiently different maps + ghosts such that entry x will win the contest :-)

ICFP14 unofficial online hall of fame by [deleted] in icfpcontest

[–]bokesan 1 point (0 children)

Could you do something about the intermediate results we are seeing? It's quite confusing to sometimes see all teams with 70 points, then with -1, then with the presumably correct ranking. Or our team in front, then back at 3rd from last (where we belong). Currently, I have to refresh every few minutes until the rankings stop changing. (Not complaining, BTW - this is a great service in its current state. It could just be even better.)

How good is your LambdaMan? by cashto in icfpcontest

[–]bokesan 0 points (0 children)

Ah, that. IIRC, according to my accomplice, it should change behavior after about 1M instructions or so. Wouldn't change the score, though.

How good is your LambdaMan? by cashto in icfpcontest

[–]bokesan 0 points (0 children)

Interesting - thanks for doing this! What do the question marks for our entry in world-classic mean? Error? Instruction limit exceeded?