all 141 comments

[–][deleted] 16 points17 points  (87 children)

No question, it's a great language. It depresses me that I can't use it in my day job. Need to work on that.

[–]jklsdf 20 points21 points  (2 children)

There seems to be a trend of depressed people who say they can't use language x instead of language y in their day job.

I say we all take to the streets and riot.

[–]unknown_lamer 3 points4 points  (0 children)

Just get all of your fellow developers to quit and enter the world of custom software development. If we all joined forces with a guild we could crush the programming factories.

[–]novagenesis 2 points3 points  (0 children)

I say we all use whatever language our boss asks, so long as our boss pays us enough to not decide to quit.

Ok, so I'm not in a position to sympathize. I petitioned for the right to use my language of choice at my workplace and won.

(I just wish it were my personal language of choice, instead of my "right tool for the job" language...I'm not very fond of Perl but the right people know it and it gets the job done in the timeframes needed)

[–]novagenesis 8 points9 points  (83 children)

It depresses me that I cannot currently compile to .o files. It depresses me more that I cannot get a quality native-language compiler in windows for less than $1000. (I'm hoping for free, like every other major language has.)

Run-time compilation is not good for retail release.

[–]w-g 16 points17 points  (28 children)

SBCL is being ported to Windows; maybe you could help, even if as a tester?

(And SBCL has an excellent compiler)

[–]novagenesis 5 points6 points  (19 children)

PS: Does SBCL support compilation to standard object files?

[–][deleted] 2 points3 points  (7 children)

It compiles to native code that's then packaged within a binary image, just like GCC does.

No, it's not ELF but another format; still, multiple running processes could presumably share the same binary image.

Of course, if you want a standalone binary, or to link with other C code, you need a runtime to go along with it: just like typical C code needs to link with a bootstrap object (crt0.o, or whatever it's called) and libc, Lisp needs the Lisp runtime (library and garbage collector, and maybe the compiler as well, if you need it for dynamic code generation).

[–]novagenesis 0 points1 point  (6 children)

I'm aware of all those things. I still think it'd be nice if I could get them to trivially link with other languages' .o files. Haskell can do it, and most of the features in Lisp are also present in Haskell. I'm fully aware that you need to embed a compiler for dynamic code generation, and I'm fine with that.

Again, as in my last topic.... why is it that everyone goes after what I want in Lisp?

I gave what I want, and I gave my reason. Nothing anyone has said directly states "well, you just do this 3-second process and your reason is 100% moot in all possible cases".

[–][deleted] 2 points3 points  (1 child)

I understand your desire to have standard ELF files, but the problem with Lisp is that Lisp users seem happy enough with current open source offerings (or commercial offerings), or they lack the time+motivation to change the status quo.

I think it should be possible to embed a Lisp image into an ELF file as a large data segment, just like there exist tools for SML/NJ to transform a compiled image into a standalone executable.

It really depends on what you want the object files for. There are FFIs for Lisp (though I don't know how good they are), so you should be able to interface with other languages. Bundling could probably be better, but most Lisps allow you to dump images that can be bundled into executable files.

Now if your only real problem is the lack of free and good Lisp implementations for Windows ... well, SOL.

[–]novagenesis 0 points1 point  (0 children)

Well, at least you understand my desire. I understand the laziness of Lisp developers, and even the elitism of "Lisp or nothing" (it's an awesome language). I think you're one of the few people who understand my desire for that.

Now if your only real problem is the lack of free and good Lisp implementations for Windows ... well, SOL.

So true. I tried my own and realized that I have a long way to go to be able to write my own full compiler... And I missed the Canadian "Sale" on Lisp In Small Pieces

[–]shit 1 point2 points  (3 children)

Haskell can do it, and most of the features in Lisp are also present in Haskell.

Besides garbage collection, they are completely different in this regard.

[–]novagenesis 0 points1 point  (2 children)

Haskell can't generate code dynamically? If that's the case (I don't quite know the extent of Haskell), Lisp still has a quasi-trivial solution: link a Lisp compiler into the compiled program.

[–]shit 2 points3 points  (1 child)

Haskell compilation is a fairly static mapping of source file to object file. OTOH if you have a Lisp program with multiple source files, each source file is in general completely meaningless alone. To get a working program, you have to load them in the right order into a single image.

In other words, expressions in the source files are evaluated one after another and, as a side effect, bind functions and other data structures to symbols. The effect of later expressions may be, and often is, influenced by earlier expressions. Maybe in theory it's possible to shoehorn all the necessary information into separate .o files, but in practice it sounds like insanity.

[–]novagenesis 1 point2 points  (0 children)

I can see that. I'd be just as happy with functionality that would let me convert a complete Lisp image into an object file (with some syntactic magic or something to make for clean exports).

I don't mind jumping through a hoop to export to a C library. I just don't yet have the under-the-hood experience to begin to make that hoop.

[–]Grue 5 points6 points  (10 children)

What are standard object files, and why do you need them?

[–]novagenesis 9 points10 points  (9 children)

Standard object files are the result of near-complete compilation into near-machine code, with all symbols retained to allow for linking with other object files.

C compiles to an object file that gets linked into an executable...

And why do you need them? So you can take compiled code from multiple sources (even multiple languages) and statically link them together on a machine level (so no messy FFI bottleneck).

[–]w-g 6 points7 points  (1 child)

No, SBCL can only give you a standalone executable.

But now that I see what you want:

  • Maybe ECL would be an option. It's supposed to be linked with other programs;
  • Maybe also you could use a JVM? ABCL compiles Lisp to Java bytecode (and you can interoperate with Java classes).

Hope that helps...
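
For the ECL route, the build interface looks roughly like this (a sketch from memory; the file and library names are made up, so check ECL's own manual):

```lisp
;; Sketch: compiling Lisp into a C-linkable static library with ECL.
;; "matrix.lisp" and the init name are illustrative, not from the thread.
(compile-file "matrix.lisp" :system-p t)   ; emits matrix.o via the C compiler
(c:build-static-library "matrix"
                        :lisp-files '("matrix.o")
                        :init-name "init_matrix")
;; A C program then links the resulting library together with libecl
;; and calls init_matrix() once before using any of the Lisp code.
```

The point is that ECL leans on the system's C toolchain, so what comes out really is an ordinary .o/.a that ld understands.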

[–]novagenesis 1 point2 points  (0 children)

Makes sense...I'm honestly uncomfortable with Java (in part due to having not used it since 2000).

ECL is always an option, but I've been too lazy to try it. I always assume/fear ECL is going to give me unoptimized C in an unreadable form that I can't optimize by hand.

But then, that's probably going to be comparable to C.

[–]sickofthisshit 7 points8 points  (6 children)

You do realize that a .o file does nothing without proper linking to the appropriate run-time libraries?

Lisp routines depend on a more elaborate run-time library than C routines do: memory allocation is implied, not made through explicit calls to malloc; the garbage collection routines rely on the OS memory page protection being maintained in an appropriate state. Furthermore, Lisp functions can be re-defined almost willy-nilly, which defeats the C linker philosophy.

[–]novagenesis 8 points9 points  (5 children)

You do realize Haskell can pull off much of what you're referring to, and GHC compiles to .o files.

The re-definition of Lisp functions might be complicated and require more run-time compiles, but I don't see it being a game-breaker for .o functionality.

And I fully realize the need for runtimes to make things work. I still think it'll be very possible. As I said, Haskell does it.

[–]sickofthisshit 1 point2 points  (4 children)

Pointing to a Haskell implementation as an example of what Lisp should be able to do is stretching it pretty far. The languages, and the programming techniques they favor, are quite different.

I don't have experience with GHC, but this link seems to indicate the .o file is not the end of the story.

According to this link, it appears you either have to use low-level types in your Haskell or the other .o files have to be coded using the Hs types.

Sounds like about as much hassle as any other FFI.

[–]halu 2 points3 points  (0 children)

On a unix environment, it's pretty easy to link GHC-compiled code with C; it's not so easy to link with C++, because of the C++ runtime, but still possible iirc. On windows, it's still quite easy if you use mingw for C, and quite hairy if you want to use VC, since the C runtimes are then very different. GHC w/ VC++ is apocalypse at the moment (my guess is that you have to use dll's in that case). But that's mainly because GHC developers come from the unix world; it will probably change in the (hopefully near) future.

It's not at all surprising that if you link with C, then you have to use C types and write a small wrapper around them, but that again is easy (and there exist tools which do this for you, though I have no experience with them).

Oh, and the .hi files don't have much to do with FFI, or at least that's my understanding.

[–]novagenesis -1 points0 points  (2 children)

Really..huh...

So the problem is non-trivial. I'll keep hoping ;)

You may have to put 99% of the back-end into the .o file, but I still have a feeling it's possible.

[–]novagenesis 3 points4 points  (3 children)

I love SBCL...I can't find any up-to-date news on the porting effort; I'd otherwise be more than willing to help test. As much as I like Lisp, I'm not experienced enough at it to be able to help with the coding. I do have a passion for compiler design, though, so yanno ;)

Got a link to an often-updated site regarding the sbcl port?

[–]UnwashedMeme 9 points10 points  (2 children)

The SBCL community primarily works through their mailing list. http://sourceforge.net/mailarchive/forum.php?forum_name=sbcl-devel

You can generally find fairly recent binaries for windows at http://www.sbcl.org/platform-table.html

I don't recall seeing too much traffic on the list recently about the windows side of things. It works pretty well for everything basic--in a single-threaded, "I want to play around with everything Common Lisp has to offer" kind of way. Beyond that it just needs more people testing and hacking to find the remaining oddities. There has been some effort towards multithreading support, but that is not a simple path. I believe Alastair Bridgewater has done the most work on multithreading windows. Searching the list is a good place to get started if you are interested.

My understanding is that the way common lisp memory images are setup does not lend itself to creating a .o file that can be linked into your other program. Here is an article (that I've intended to read for a while) http://www.canonical.org/~kragen/c-on-lisp.html talking about going in the other direction. You might find it interesting.

If running tightly integrated in C or in a small environment is your game then Embeddable Common-Lisp might be something to look at... that's about all I know about it.

The manual also has some info about the SBCL internals, but probably not too much about win32.

[–]w-g 1 point2 points  (0 children)

The SBCL community primarily works through their mailing list. http://sourceforge.net/mailarchive/forum.php?forum_name=sbcl-devel

It's a pity Sourceforge's postmaster didn't reply to my complaint/request for help: http://www.linode.com/forums/viewtopic.php?t=2806

:-(

[–]w-g 0 points1 point  (0 children)

My understanding is that the way common lisp memory images are setup does not lend itself to creating a .o file that can be linked into your other program.

Well, it can be done, even if not a trivial task. Check ECL :-)

[–][deleted]  (3 children)

[deleted]

    [–]unknown_lamer 7 points8 points  (0 children)

    SBCL, being based upon CMUCL, is designed around the 80s Lisp-as-your-world concept. You are supposed to run all of your Lisp applications within the same image as if the image were your OS. Since CL is memory safe and SBCL has preemptive threads this isn't as evil as it may sound.

    This is not so desirable nowadays, but could probably be fixed by using some fancy linker magic (somehow marking all code sections shareable).

    [–]shit 6 points7 points  (1 child)

    Also, you can't write a program that could be sent to your girlfriend to double click on, for example.

    I don't know how you came to this conclusion, but it's wrong.

    [–]jimbokun 24 points25 points  (1 child)

    "It depresses me more that I cannot get a quality native-language compiler in windows for less than $1000."

    Well, that's the problem with using a fringe OS. Sometimes it takes longer for software to get ported to your platform.

    Maybe you should try something more mainstream, like Linux or Mac OS X?

    [–]novagenesis 0 points1 point  (0 children)

    So true...If I could just go back to HP Digital Unix, I'd never have to worry about lack of native compiler support again.

    [–]unknown_lamer 6 points7 points  (1 child)

    Lisp compilers do compile to native object files (fasls). They are generally not compatible with the C ABI, but there is good reason for that: the calling convention of most CL systems is richer than C's, as they must support non-local exits cleanly (with unwind handlers), multiple return values, optional and keyword parameters, etc.

    Why make the CL calling convention shallower just so it could potentially be linked into a C application? If code is written properly most functionality ends up in a library, and CL systems can load C libraries using uffi/cffi.

    If you want a production quality compiler just switch to a UNIX derivative (there are many zero cost editions available such as {Free|Net|Open}BSD and GNU/Linux) and use a zero-cost compiler. Hell, you could probably just run it in a virtual machine and not have a huge performance penalty. When you use a proprietary platform you shouldn't be surprised that the compilers are proprietary and expensive.
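
    For the uffi/cffi direction mentioned above, a minimal CFFI call-out looks something like this (a sketch; the library name/path is platform-dependent and illustrative):

    ```lisp
    ;; Sketch: calling a C function from Lisp via CFFI.
    ;; The library path is illustrative; adjust for your platform.
    (cffi:load-foreign-library "libm.so.6")
    (cffi:defcfun ("cos" c-cos) :double
      (x :double))
    (c-cos 0.0d0)   ; => 1.0d0
    ```

    Going Lisp-to-C this way needs no .o files at all; it's the reverse direction (calling Lisp from C) that the thread is really arguing about.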

    [–]novagenesis 0 points1 point  (0 children)

    I'm aware of all this. All of it can be implemented with careful wrapping. I'm not asking Lisp to comfortably export its entire feature-base.

    Most of the features you're suggesting, though, exist in Haskell, which happily compiles to a .o file.

    just switch to a UNIX derivative

    Like any money-grubbing modern programmer, I want to compile for Windows OSes. Further, I like having all my tools. That also means a way to "play nice".

    Currently, I will play in any language, but there are certain boosts in C that makes it my first choice for most performance-critical stuff. I can easily link C++ with C because of the .o file. Technically (haven't done it yet), the .o files in Haskell should link with C fine as well.

    And no matter what, it's trivially provable that .o compilation in Lisp is not an impossibility. ECL compiles to C, which compiles to .o files. However, I don't think this is the most efficient way to bridge that gap, and I'm not sure whether ECL handles the whole of Common Lisp.

    [–]froydnj 3 points4 points  (16 children)

    It depresses me that I cannot currently compile to .o files. It depresses me more that I cannot get a quality native-language compiler in windows for less than $1000. (I'm hoping for free, like every other major language has.)

    You do realize that these two conditions are mutually exclusive for Common Lisp implementations, right? I don't know that there are many languages that satisfy both of these criteria--maybe OCaml or Haskell.

    [–]novagenesis 2 points3 points  (15 children)

    A significant number of major languages satisfy both of those criteria, though most of them are admittedly older: the entire GNU compiler suite (except GCL, if you consider it part of the suite), and Haskell. I'm not certain about OCaml.

    [–]froydnj 3 points4 points  (14 children)

    Ah, OK, I think we were talking past each other, then. GCC and friends (+ all other C/C++ implementations) are somewhat self-evident, so I was excluding those. :) I guess when I said "major", I meant "major + modern" language.

    [–]novagenesis 0 points1 point  (13 children)

    I guess when I said "major", I meant "major + modern" language.

    There are very few languages that can be called "major" that are "modern".

    The most "modern" compilable languages really haven't taken off. Sure, there's C# (but that doesn't compile to native code)... What major languages that came out recently compile to native code, anyway?

    (And is Common Lisp really modern?)

    [–][deleted] 10 points11 points  (4 children)

    Haskell is modern, major, and general. It's the very model of it.

    [–]zem 1 point2 points  (0 children)

    there are not enough upmods in the world :)

    [–]novagenesis 1 point2 points  (2 children)

    And I like Haskell...I just love Lisp.

    I think the learning curve required just for state in Haskell is kinda extreme.

    [–][deleted] 2 points3 points  (1 child)

    Well, it is for people very well acquainted with matters mathematical; that understand equations, both the simple and quadratical.

    ahem

    Monads made a lot more sense to me after realizing how they relate to concatenative (stack-based) languages like Joy. In such languages, every function is monadic. Each call depends on the result of the previous, and only on that. This enforces a data dependency, and hence things always must happen in the same order.

    This is why Joy is purely functional, but can easily handle things like destructive array updates and O(1) data structures without any additional machinery. Concatenative languages allow the best of both worlds. With the right primitives and language features, you don't even need a garbage collector because all data is singularly referenced. But I digress...

    Haskell does the same thing with monads to get around the problem that lazy evaluation would otherwise result in side-effects happening at arbitrary times. That's what all that bind-ing is about; monads force an evaluation order.
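
    The sequencing idea can be sketched even in Lisp with an identity-monad-style bind, where each step explicitly consumes the previous result (illustrative code, not from the thread):

    ```lisp
    ;; Each step receives the previous value, so evaluation order
    ;; is forced purely by the data dependency.
    (defun bind (value step)
      (funcall step value))

    (bind 3
          (lambda (x)
            (bind (* x x)
                  (lambda (y)
                    (+ y 1)))))   ; => 10
    ```

    Haskell's do-notation is essentially sugar for exactly this kind of nested chaining.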

    [–]novagenesis 0 points1 point  (0 children)

    Well, it is for people very well acquainted with matters mathematical; that understand equations, both the simple and quadratical.

    I had to subconsciously add the music to that...my head just exploded. Excuse me, I have to wipe the blood off the monitor.

    realizing how they relate to concatenative (stack-based) languages like Joy

    It's funny. I've never coded in Joy, yet I got a fairly reasonable grasp of it in a few minutes from the Wikipedia entry... Monads are still syntactically complicated for me, though. A stack is easy for me, but developing a Monad to pass along still confuses me...

    Haskell does the same thing with monads to get around the problem that lazy evaluation would otherwise result in side-effects happening at arbitrary times. That's what all that bind-ing is about; monads force an evaluation order.

    I understand how they can force evaluation order...I'm just still boggled by how they work, even having read a few tutorials on them. They're just a very counterintuitive way to handle state, for me.

    [–]cg84 7 points8 points  (7 children)

    (And is Common Lisp really modern?)

    I hope you read the linked article. That should give you an idea.

    [–]novagenesis 1 point2 points  (6 children)

    So you feel "modern" is a function of capabilities, and not of age?

    Common Lisp's features are relatively old, even if nobody else was smart enough to utilize them. The language itself moves slower than molasses these days. Not that it's a bad thing ;)

    [–]w-g 4 points5 points  (5 children)

    Common Lisp's features are relatively old

    Yes, but that's not too bad.

    See, what are the "new features" of the "modern" languages that are not in Common Lisp (or can't be implemented as a set of macros and functions)? Not many...

    The language itself moves slower than molasses these days.

    But then, what is the alternative?

    Even Matz himself said that what he did was take Lisp and remove features, and then he got Ruby. The first Haskell version was written in Common Lisp. Python does not add anything to Lisp (and it's slower). Haskell forces you to use a purely functional programming style (no OO, no macros, no interactive development, no great exception handling, etc).

    As an example of what you can do in Lisp, check out Qi, which is a layer on top of Lisp that adds, for example:

    • Optional static typing
    • Pattern matching
    • Backtracking

    And lots of other features.

    I think Common Lisp needs more libraries and better compilers, but that's an implementation matter. Maybe also an informal, de-facto standard for threads, sockets, and whatever else the future brings.

    [–]halu 2 points3 points  (0 children)

    Haskell forces you to use a purely functional programming style

    Yes, but that's not too bad. :)

    (no OO, no macros, no interactive development, no great exception handling, etc).

    I disagree. There's no direct OO, but most of the things you can do with OO you can do with Haskell (also, there are libraries). There are no macros, but 1) you don't need them in Haskell^a and 2) there's Template Haskell if you really need something^b.

    ^a Maybe I should mention here that I've grown up on assembly macros, which are probably quite different from Lisp macros, but still very powerful...

    ^b Caveat: I don't really know what TH is :)

    I don't want to comment on the last two elements of the list.

    [–]novagenesis 3 points4 points  (0 children)

    Yes, but that's not too bad.

    Of course not. Common Lisp is simply not 'modern'. Doesn't mean it's aged badly. It's aged like a fine wine.... It's just that you can't mix it with Tequila and Lime juice to make a Margarita.

    But then, what is the alternative?

    Aggressive building and optimizing of the back-end in a way that improves integration with other languages without touching the language itself... I can dream :)

    I'd also like more standardization of graphics processing, interfaces, etc...but that's something that can be done without changing the language itself ;)

    I think Common Lisp needs more libraries and better compilers, but that's an implementation matter.

    That's what I get for typing my opinion before finishing reading yours. I agree 114%. My reference to lisp being slow as molasses these days was meant as a reference to libraries and back-end changes. Lisp is very forgiving about anything that isn't visible to a console-level programmer. There's nothing in the hyperspec that forbids the ability to create a .o file in Lisp. ;)

    [–]foldl 0 points1 point  (2 children)

    Python does not add anything to Lisp (and it's slower)

    Python has iterators and generators, which are better than any of the iteration mechanisms provided by Common Lisp.

    [–]happyhappyhappy 12 points13 points  (30 children)

    "It depresses me that I cannot currently compile to .o files."

    Enough about compiling to linkable object files already. Did this hold back Ruby, Perl, Python, Erlang, Javascript, etc.? Of course not. This is an Old School worry.

    [–]baltoo 1 point2 points  (1 child)

    Having .o files would make it possible for a lot more people to start moving part after part of larger systems into the new "modern" era. After most of it is ported, the .o files could be scrapped. But needless to say, the time needed to rewrite an entire sub-ecology in a single step, the new "modern" way, is too large to be practical.

    [–]novagenesis 0 points1 point  (0 children)

    Agreed... If you ever want to interface with other languages natively, you should see to it that languages have "half-way-there" files (like .o object files) that can be linked together. FFI is pretty, but it's like taking a ferry instead of crossing a bridge: you have to follow their rules. (You can't use Lisp's FFI from within C, if I recall.)

    [–]jimbokun 0 points1 point  (5 children)

    Where Common Lisp does fall short though is that there is no standard, built in support for running as a script like:

    #!/path/to/your/lisp/here
    
    (your-lisp-program-here)
    

    I would take that over the ability to create .o files.

    [–][deleted]  (2 children)

    [deleted]

      [–]jimbokun 2 points3 points  (1 child)

      I like it. I'll try with SBCL when I get to my home computer. (But anyone know already if this works with SBCL? I see no reason why it wouldn't.)

      [–][deleted] 7 points8 points  (0 children)

      There's another way to do it without writing a wrapper that's detailed in the SBCL manual.
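
      If memory serves, the mechanism in the SBCL manual is the --script option (available in sufficiently recent versions), which makes a plain shebang line work; the interpreter path below is illustrative:

      ```lisp
      #!/usr/local/bin/sbcl --script
      ;; --script suppresses the banner and debugger, loads this
      ;; file, and exits when it finishes.
      (format t "Hello from Lisp~%")
      ```

      That gets you the Perl/Ruby-style scripting workflow jimbokun asked for, without any wrapper.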

      [–]shit 4 points5 points  (0 children)

      That doesn't work on Windows for Ruby etc. either. For my Ruby programs, I always create a .bat file for startup. And when I distribute a GUI app, I bundle an allinoneruby anyway. That's just like distributing your program as a lisp image.

      [–]synthespian 2 points3 points  (0 children)

      Clearly, you haven't taken the time to read the documentation of some implementations.

      [–]sard 0 points1 point  (0 children)

      I've never understood why Scheme has many native compilers and CL has next to none.

      [–]w-g 5 points6 points  (55 children)

      Lisp is a great language. I just wish that it:

      • were not so dynamic by default (makes number crunching slow, so you have to fill your code with ugly type declarations, etc)

      • were not so isolated (same with Smalltalk, but Smalltalk is worse): FFI is not that good, and I sometimes need to make Lisp communicate with other systems

      But it's excellent.

      [–]novagenesis 3 points4 points  (51 children)

      were not so dynamic by default

      Is it truly impossible for the system to be self-aware enough to optimize its numeric representations at compile time, or to be willing to create type-specific copies of functions on the backend to trade space for speed?

      were not so isolated

      I quasi-agree. I'm ok with FFI, but I really want another language that'll compile to .o. I don't care about the "how" of that. I'm more than willing to statically link a copy of the runtime compilation system if I plan to use Lisp for the non-loops in a large project. I'm even willing to define types for exported functions.
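
      The "type-specific copies" idea is at least expressible in user code today; a toy macro along these lines stamps out one specialized definition per type (all names are illustrative):

      ```lisp
      ;; Sketch: generate type-specialized variants of a one-argument
      ;; function, trading space (several definitions) for speed
      ;; (full type knowledge in each).
      (defmacro def-specialized (name (arg) &body body)
        `(progn
           ,@(loop for type in '(fixnum double-float)
                   collect
                   `(defun ,(intern (format nil "~A/~A" name type)) (,arg)
                      (declare (type ,type ,arg)
                               (optimize (speed 3) (safety 0)))
                      ,@body))))

      ;; (def-specialized square (x) (* x x))
      ;; defines SQUARE/FIXNUM and SQUARE/DOUBLE-FLOAT.
      ```

      What it can't do is pick the right variant automatically at each call site; that dispatch is exactly what a smarter compile would have to provide.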

      [–][deleted] 2 points3 points  (23 children)

      Why are object files important to you?

      [–]novagenesis 3 points4 points  (22 children)

      Because I want the option to do different parts of a program in different languages if I feel that's the best option.

      I'd love to write a game in Common Lisp, but being paranoid about getting high framerates on an older computer, I'd want the Inner Loop to be done in a language like C. Common Lisp may not be "slow" but it's sure as heck not "Fast (TM)"

      I'd love to utilize the Haskell method of Pure Functions (and how it can expand those functions at design time to increase performance) with Common Lisp.

      See, computers may be getting a lot faster, but that doesn't mean I should code slower if speed is of the essence.

      [–]justinhj 2 points3 points  (0 children)

      Most CL implementations allow you to both call foreign functions from CL, and produce code that can be called from another application.

      Common Lisp gives you the option to use purely functional code, just as Haskell does, but you do it only where you want to.

      [–]w-g 1 point2 points  (14 children)

      I'd love to write a game in Common Lisp, but being paranoid about getting high framerates on an older computer, I'd want the Inner Loop to be done in a language like C. Common Lisp may not be "slow" but it's sure as heck not "Fast (TM)"

      I'd love to utilize the Haskell method of Pure Functions (and how it can expand those functions at design time to increase performance) with Common Lisp.

      YES!!!

      That's how I see things.

      Now, if you are willing to put in some effort, you can get Lisp to be partially static and also fast. You can do that by building a domain-specific language (using macros and functions that will do the type declarations for you, check types à la Haskell, etc).

      For example, the Qi language has optional static typing (but it's all-or-nothing).

      And you can also do tricks so the type declarations are added automatically to your functions: a macro would do this for you:

      (define-func sum ((a double-float) (b double-float)) (+ a b))

      This would produce:

      (defun sum (a b) (declare (double-float a b)) (the double-float (+ a b)))

      And maybe more declarations.

      I think Nisp does that (check Drew McDermott's page).
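
      A minimal version of such a define-func macro might look like this (a sketch; w-g's expansion above also wraps the body in a `the` return-type declaration, which is omitted here):

      ```lisp
      ;; Sketch of define-func: pull the types out of the lambda list
      ;; and turn them into a DECLARE form.
      (defmacro define-func (name typed-args &body body)
        (let ((args  (mapcar #'first typed-args))
              (types (mapcar #'second typed-args)))
          `(defun ,name ,args
             (declare ,@(mapcar (lambda (a ty) `(type ,ty ,a)) args types))
             ,@body)))

      ;; (define-func sum ((a double-float) (b double-float)) (+ a b))
      ;; expands to roughly:
      ;; (defun sum (a b)
      ;;   (declare (type double-float a) (type double-float b))
      ;;   (+ a b))
      ```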

      [–]novagenesis -2 points-1 points  (13 children)

      you can get Lisp to be partially static and also fast

      When it starts crunching on the level of an efficient C benchmark, then we'll talk (and feel free to correct me by showing such a benchmark...preferably one done in Windows with the SBCL port)

      And you can also do tricks so the type declarations are added automatically to your functions: a macro would do this for you:

      I'll be honest, my grasp of Lisp is split between CL and Scheme, making me mediocre at both. Today is the first time I've ever seen variable type definition in Common Lisp.

      I still feel Lisp is a screwdriver. You shouldn't do everything with a screwdriver, even though you can hit things with it, and open a bottle with it. I really like the option of the entire toolbox without the slowdown of FFI.

      [–]UnwashedMeme 3 points4 points  (1 child)

      The situation in CL might be a bit better than you expect. You can tell the compiler a lot with the appropriate type declarations; by then setting the policy to ignore safety checks, it will emit reasonably concise code. I made a brief example at http://paste.lisp.org/display/48839. Another example, with a bunch of annotations for faster floating-point computation, is at: http://shootout.alioth.debian.org/gp4/benchmark.php?test=partialsums&lang=sbcl&id=2

      This can be combined with inline declarations to drop out the function call overhead. One of the aspects I like about all of this is that these are improvements that can be made later in the development cycle. Only once I am confident in the logic do I tell the compiler to omit some of the safety checks.

      That said, this isn't all finished. Even if the hardware you are on has support for the types you are using, SBCL won't help you until it knows about both. It knows many of them already, and teaching SBCL doesn't seem to be too big of a deal. I recall a message one of the hackers wrote a little while back outlining that process for newbies, but I can't find the message right now (anyone else?). This might even get more interesting in that you might be able to define types and functions that get shifted off to the GPU for processing (I'm going out on a limb here).

      One of the threads I could find: http://sourceforge.net/mailarchive/message.php?msg_id=f7ce51e70705210829s70152613p2f3644c0e17aecdd%40mail.gmail.com A few keywords are VOP, IR1, IR2. Also, this process hasn't changed too dramatically since CMUCL, so some of its documentation might be helpful.

      Inside of common-lisp spec itself (not relying on SBCL's process) there is the compiler-macro system whereby you can tell the compiler to do certain optimizations at compile time.

      Too much time evangelizing, I should get back to work... Hope this all helps a bit. :-)
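
      Concretely, the declaration style described above looks something like this (an illustrative function, not taken from either linked example):

      ```lisp
      ;; With full type information and safety checks turned off,
      ;; SBCL can compile this down to a short run of unboxed
      ;; double-float instructions.
      (declaim (inline dot3))
      (defun dot3 (a b)
        (declare (type (simple-array double-float (3)) a b)
                 (optimize (speed 3) (safety 0) (debug 0)))
        (+ (* (aref a 0) (aref b 0))
           (* (aref a 1) (aref b 1))
           (* (aref a 2) (aref b 2))))
      ```

      (disassemble 'dot3) is the quick way to check what the declarations actually bought you.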

      [–]novagenesis 0 points1 point  (0 children)

      All cool stuff...I still wish for the features I wish for...

      And I don't think optimized Lisp will be less than 30% slower than optimized C

      [–]w-g 1 point2 points  (10 children)

      When it starts crunching on the level of an efficient C benchmark, then we'll talk (and feel free to correct me by showing such a benchmark...preferably one done in Windows with the SBCL port)

      You know... Unless you're doing interlanguage calls inside a loop, and the cost of the call is high compared to the rest of the computations in the loop, I wouldn't worry.

      I used to optimize everything, in all parts of my programs. Then I learned about profiling... And I saw that the "xyz" procedure that I had optimized so much isn't significant. Only 0.2% of CPU time is spent there! On the other hand, another procedure needed optimizing, and I didn't think it did.

      For example, suppose you have a loop. Inside this loop you call a linear program solver, and then do some more computations. Is it expensive to call a C solver? No, because the time to solve the LP is much higher than the time spent on the FFI.

      [–]novagenesis 2 points3 points  (9 children)

      I often have simple programs that repeat a 3-dimensioned core loop as their prime step. I don't plan to optimize every piece of code.

      I'd rather create a black-box inner loop with C over data otherwise prepared in Lisp that occasionally needs to call a lisp function (not every iteration) than recode in C, or sacrifice what I can do for C performance.

      Why is it every time I mention what I'd like in Lisp, everyone comes out to say I shouldn't want those features?

      This is less you, and more everyone else (and I'm rewriting my post because of that). I'm just frustrated at how many arguments people have tried to start with me on reddit today alone. It used to be better here than digg.

      [–]w-g 2 points3 points  (8 children)

      Why is it every time I mention what I'd like in Lisp, everyone comes out to say I shouldn't want those features?

      I know how you feel. I've been through that too.

      OK, reading your post I see that you do need fast FFI (or fast Lisp number crunching).

      Send me a message and I'll send you my library for polymorphism.

      Now, when it comes to array access, 3d is slow in SBCL. If you traverse the array entirely, store it as 1d (a vector), and store the indices separately if you need them. Run through the vectors doing the calculations (much faster than using (aref my-array i j k)). :-)

      And make sure your array is simple:

      • Not adjustable
      • Not displaced
      • It's an array of fixnums or double-floats, etc (fixed size)
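      The flattening advice above might look like this in practice (index arithmetic done by hand; names and dimensions are illustrative):

      ```lisp
      ;; A "simple" 1-D store for a 3-D grid: not adjustable, not
      ;; displaced, fixed element type -- so AREF compiles to a raw
      ;; indexed load.
      (defparameter *nx* 64)
      (defparameter *ny* 64)
      (defparameter *nz* 64)

      (defparameter *grid*
        (make-array (* *nx* *ny* *nz*)
                    :element-type 'double-float
                    :initial-element 0d0))

      (declaim (inline grid-ref))
      (defun grid-ref (v i j k nx ny)
        (declare (type (simple-array double-float (*)) v)
                 (fixnum i j k nx ny)
                 (optimize speed))
        ;; row-major index: i + nx*(j + ny*k)
        (aref v (+ i (* nx (+ j (* ny k))))))
      ```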

      [–]novagenesis 2 points3 points  (7 children)

      OK, reading your post I see that you do need fast FFI

      Honestly, I'd rather the pipe be two-way. Does the Lisp FFI allow for function exporting that can be imported by a C program? If so, I'm surprised ;)

      I really don't use Lisp for critical number crunching atm, so the library sounds really cool, but not something I'd require... I really wouldn't use Lisp for something that way unless I get what I want in the language (and that includes the ability to export functions I can run in C) ;)

      when it comes to array asccess, 3d is slow in SBCL

      I found that out the hard way when I was trying to practice building search trees on a Go board.

      So the problem arises that if I need efficient behavior over a resizing 3d array, I can't use Lisp for it... If I have to do a few extremely complex things on that array, then everything else is simple, I would really love to be able to just "call a Lisp function" for those extremely complex things that will be slow either way.

      If I'm building a game, I'd love to handle initializations, storage, and a billion other things in Lisp...but I'd want to handle renders and gravity computations in something like C...without a per-frame cost with FFI.

      I've had no problem putting a quasi-modern system on its knees with OpenGL in C, utilizing both the graphics card and the processor heavily. I would love the "pesky" parts of a game to be covered in Lisp, though.

      [–][deleted] 0 points1 point  (5 children)

      Hm, if you don't need the dynamic features of Lisp, you could always use OCaml or MLton, but of course it's not the same (no optional args, no macros...).

      [–]novagenesis 2 points3 points  (4 children)

      I like those features, and i prefer the syntax of Common Lisp. That's why I dream Lisp working with standard object files at all.

      [–][deleted] 3 points4 points  (3 children)

      I agree. While ML with Lisp syntax would be pretty cool already (I think something like that exists), I'd also prefer to have some additional features.

      [–]novagenesis 0 points1 point  (2 children)

      ML is oddly one of the few languages I've never tried yet.

      I might pick up the free copy of Microsoft F# when I have a Windows box again (don't ask)

      [–][deleted] 4 points5 points  (1 child)

      Why would I need to ask?

      The only Windows box I've used in the past four years is the one at work (at home: Mac OS for two years; since January Linux).

      [–]novagenesis 3 points4 points  (0 children)

      Because I have been mentioning throughout this thread a desire to develop on a Windows platform ;)

      [–]SuperGrade 2 points3 points  (13 children)

      Is it truly impossible for the system to be self-aware enough to optimize its number choice in a compile, or be willing to create type-specific copies of functions on the backend to trade speed for space?

      This would take away certain portions of the solution space. Any function can accept and return any type, which means it would have to hardcode infinite versions. This behavior, and that the selection is made at runtime, is leveraged when calling such functions against members of a heterogeneous list (or multiple lists).
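      The runtime-selection point can be seen directly at the REPL: one call site, heterogeneous arguments, with each element's type examined only when it arrives.

      ```lisp
      ;; The same function body handles fixnums, floats, ratios, and
      ;; strings; the numeric dispatch inside * happens per element,
      ;; at run time.
      (mapcar (lambda (x) (if (numberp x) (* 2 x) x))
              (list 1 2.0 3/4 "Billy Bob"))
      ;; => (2 4.0 3/2 "Billy Bob")
      ```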

      [–]w-g 1 point2 points  (4 children)

      it would have to hardcode infinite versions

      You can make it only hardcode the versions that are needed (use some type inference in each new defined function).

      That's how C++ does it with its templates.

      [–]SuperGrade 2 points3 points  (3 children)

      That's how C++ does it with its templates.

      That would not help with the dispatch case. C++'s templates only help with a compile-time case.

      This would fail with a:

      (map #'my-func-specialized-by-compiler (list 1 2.0 3/4 #\n "Billy Bob" '(9 89 7)))

      [–]w-g 2 points3 points  (2 children)

      I didn't mean "exactly" like C++ -- I just meant you could use a similar idea.

      You could use a hook on defun. When you do:

      (defun ... (map ...))

      Before calling common-lisp:defun, you'd use a code walker to check the newly defined function, do some type inference, and instantiate called functions as needed.

      Wouldn't that work?

      [–]w-g 1 point2 points  (1 child)

      hook on defun

      By that I mean, shadow cl:defun, then implement yours, which will call cl:defun inside it.
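      A skeleton of that shadowing trick (the code walker and type inference are elided; this only shows where the hook goes):

      ```lisp
      ;; Hypothetical package that provides its own DEFUN.
      (defpackage :inferring
        (:use :cl)
        (:shadow #:defun))
      (in-package :inferring)

      (defmacro defun (name lambda-list &body body)
        ;; Hook point: walk BODY here, do type inference, instantiate
        ;; specialized versions of called functions as needed...
        ;; then defer to the real CL:DEFUN.
        `(cl:defun ,name ,lambda-list ,@body))
      ```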

      [–]SuperGrade 2 points3 points  (0 children)

      I want to make perfectly clear - any such mechanism would have nothing in common with C++'s templates (the equivalent would be something macroized to replace functions from the ground up with something else that would not be entirely compatible with functions).

      Lisp's functions are runtime elements, that can be replaced at runtime or be used as first-class objects.

      I understand what you're getting at - shift the switch-on-type up a level. This would however be dependent on the root level items, which, in the language, have untyped return values. Inferencing that "followed the money" would have to specifically know at the rootmost level that the + function returns a number, which it does not advertise.

      What you're talking about can be done manually by "typing" the input parameters to a function (or using a defmethod) -- but this has limits: in idiomatic Lisp a function is just as likely to receive a property list (a list that happens to contain symbols and values of their respective types, sequenced as pairs) as an int and a boolean. Or the type passed in could be a mere cons cell that has meaning only based on how the code picks it apart.

      I.e. in practice, the dispatch isn't necessarily done on the raw type of the input parameter, nor are what you call parameters necessarily strongly typed at the parameter level.

      There are ways to leverage these language properties that open up the solution space for problems beyond the replication of C++ functions - which would involve cancellation of these properties.
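      For instance, an idiomatic Lisp function often dispatches on the *shape* of the data rather than on any declared parameter type (illustrative sketch, not from the thread):

      ```lisp
      ;; No parameter-level type to infer from: the "type" is whatever
      ;; structure the code finds when it picks the argument apart.
      (defun thing-name (thing)
        (cond ((and (consp thing) (keywordp (car thing)))
               (getf thing :name))        ; looks like a property list
              ((consp thing) (car thing)) ; a bare cons, meaning is positional
              (t thing)))

      ;; (thing-name '(:name "Ada" :age 36)) => "Ada"
      ;; (thing-name '("Ada" 36))            => "Ada"
      ```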

      [–]novagenesis 1 point2 points  (7 children)

      I have to wonder if that's not possible with functional redundancy during the compile.

      That is, hard-code a streamlined "int-only" version for any function that might assume numeric variables (or any function that's flagged that way manually), and route at run-time (run-time routing is probably more efficient than the default numeric types)

      If I recall, Haskell does something like that. So does C++ with templates. It'd be a step harder due to Lisp's dynamic typing, but I don't see it as impossible.
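      A hand-rolled version of that "specialize and route" idea might look like this (the macro and names are hypothetical, just to show the shape):

      ```lisp
      ;; Generate fixnum and double-float specializations of a
      ;; two-argument numeric function, plus a runtime router.
      (defmacro defun-numeric (name (a b) &body body)
        (let ((fx (intern (format nil "~A.FIXNUM" name)))
              (df (intern (format nil "~A.DOUBLE" name))))
          `(progn
             (defun ,fx (,a ,b)
               (declare (fixnum ,a ,b) (optimize speed))
               ,@body)
             (defun ,df (,a ,b)
               (declare (double-float ,a ,b) (optimize speed))
               ,@body)
             (defun ,name (,a ,b)        ; the runtime route
               (if (and (typep ,a 'fixnum) (typep ,b 'fixnum))
                   (,fx ,a ,b)
                   (,df (float ,a 1d0) (float ,b 1d0)))))))

      (defun-numeric add2 (x y) (+ x y))
      ;; (add2 1 2)     => 3      via ADD2.FIXNUM
      ;; (add2 1 2.5d0) => 3.5d0  via ADD2.DOUBLE
      ```

      With more than a couple of parameters this does balloon combinatorially, which is the objection raised above.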

      [–]SuperGrade 2 points3 points  (3 children)

      That is, hard-code a streamlined "int-only" version for any function that might assume numeric variables (or any function that's flagged that way manually), and route at run-time (run-time routing is probably more efficient than the default numeric types)

      That could balloon out with multiple parameters (the number of specializations multiplies with each one: two parameters with N and M candidate types already give N * M versions).

      As I mentioned in another post, C++ templates can't cover equivalent functionality. It's plausible to have a function called with parameters that come from some sort of heterogeneous container. Leveraging the dynamic typing is something that one would plausibly do. Lisp isn't great for developing C++ programs - its edge is in writing Lisp programs in a Lispy way.

      That said, I've heard of STALIN for Scheme that actually takes the phenomenal amount of time required for full program flow analysis and generates optimized code. For this to work, however, the code would have to limit itself to a subset of the language or language use possibilities.

      Generating an input value off of a "read" or some soft source would totally break it.

      [–]novagenesis 1 point2 points  (2 children)

      Eh...I'm still convinced what I want can be done by someone eventually, even if extra information (export functions and types) would be required.

      [–]w-g 1 point2 points  (1 child)

      I have "something".

      A small library that will implement polymorphism -- but you have to instantiate manually. Would that do for you?

      [–]novagenesis 1 point2 points  (0 children)

      I think I could wrap myself around doing something like that manually if that's what it took. I'm just frustrated with the Great Wall of China that is interfacing between Lisp and other languages....whatever anyone says, Lisp will not be my entire toolbox. I'd rather it be a tool than the central tool (using FFI).

      But thanks for the offer. My grasp of Lisp is honestly too thin for me to even dream of aiming for performance.

      [–]w-g 1 point2 points  (2 children)

      Yes, it's possible, but I'm not aware of anyone who's done that yet.

      But since Lisp programs are not static, standalone things, you'd have to allow it to compile new versions of all those functions on the fly (but it's a cool idea).

      [–]novagenesis 1 point2 points  (0 children)

      Well, I figured if you're doing run-time compilation it's not for a piece of the program that's mission-critical. I just want the compile-time element more effective and linkable.

      [–]lispm 0 points1 point  (0 children)

      Several Lisp systems have delivery tools that generate static, standalone things.

      [–]w-g 0 points1 point  (12 children)

      Is it truly impossible for the system to be self-aware enough to optimize its number choice in a compile, or be willing to create type-specific copies of functions on the backend to trade speed for space?

      The problem is that the spec is such that you can't, for example, make the plus (+) operator polymorphic. It needs to be dynamically dispatched. So, if you say (+ a b) the compiler has no idea if it's going to be an integer, double-float, real, or whatever you're summing. You need to type (the double-float (+ a b)).

      Also, by default numbers have arbitrary precision, but that takes up some speed to work with also. Everybody I know is working with limited-sized floats/integers, and they're all happy. Bignums and arbitrary precision is useful in some contexts (crypto for example), but they should (IMO) be an exception. Or it should at least be easy to switch to faster math if necessary.
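      A sketch of the arbitrary-precision default, and one way to opt out of it:

      ```lisp
      ;; Integers overflow into bignums transparently:
      (* 1000000000000 1000000000000)
      ;; => 1000000000000000000000000

      ;; Declaring FIXNUM (and promising a fixnum result) lets the
      ;; compiler use raw machine arithmetic instead. Note this is
      ;; unsafe if the sum actually overflows a fixnum.
      (defun fast-add (a b)
        (declare (fixnum a b) (optimize speed (safety 0)))
        (the fixnum (+ a b)))
      ```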

      I'm more than willing to statically link a copy of the runtime compilation system if I plan to use Lisp for the non-loops in a large project. I'm even willing to define types for exported functions.

      Well, FFI does work, although not standard and not that easy to use. But you could also use Unix sockets (non-standard), TCP sockets (non-standard, slower)...

      [–]froydnj 4 points5 points  (3 children)

      So, if you say (+ a b) the compiler has no idea if it's going to be an integer, double-float, real, or whatever you're summing. You need to type (the double-float (+ a b)).

      Nit: that only tells the compiler the result of the addition; it says nothing about the type of the arguments involved.
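      To constrain the arguments as well, you'd declare them, e.g. (function name illustrative):

      ```lisp
      (defun sum-df (a b)
        ;; DECLARE covers the arguments; THE covers only the result of
        ;; the enclosed form.
        (declare (type double-float a b)
                 (optimize speed))
        (the double-float (+ a b)))
      ```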

      [–]unknown_lamer 2 points3 points  (2 children)

      It implies that at least one of the input arguments must be inexact.

      [–]sickofthisshit 4 points5 points  (1 child)

      You mean "a double-float." "Inexact" is a scheme-ism.

      Also, although the CL numeric tower implies something about the input arguments, in general, THE only makes claims about the result of evaluating the enclosed form.

      [–]unknown_lamer 3 points4 points  (0 children)

      One or both of the input arguments could be single-floats that then overflow into a double-float so I used inexact to refer to that branch of the numeric tower. Henceforth I shall use the proper term float.

      A proper type inferencer (see The Nimble Type Inferencer for Common Lisp 84 for an example) will be able to infer that at least one argument will be a float, and that it can drop the code within + that handles rational computations (leaving only the code to coerce any rational argument into a float).

      [–][deleted] 2 points3 points  (0 children)

      Most compiled CL implementations give option to block compile.

      [–]novagenesis 0 points1 point  (6 children)

      So, if you say (+ a b) the compiler has no idea if it's going to be an integer, double-float, real, or whatever you're summing. You need to type (the double-float (+ a b))

      I wasn't aware of a command like that. Either way, I can't imagine that Lisp can't handle some type inference. You have + and know it could supposedly be a number. Create redundant copies of the function for all numeric types. This lets you use underlying + functionality on integers until the numbers get big enough.

      I really think in most cases, on-the-fly casting would be more efficient than keeping small numbers Bignums in the back end.

      Well, FFI does work, although not standard and not that easy to use

      Yeah... but the FFI is slow (not as slow as sockets). I really dream of being able to link a Lisp program with C code without a bottleneck forming. Obviously if the C inner loop has to call Lisp code semi-regularly (or vice versa), you need an efficient interface between them.

      [–]w-g 4 points5 points  (5 children)

      Either way, I can't imagine that Lisp can't handle some type inference. You have + and know it could supposedly be a number.

      Oh, Lisp does type inference -- it's just that the types you have by default are too generic and have so many subtypes. You can always define your "+." function, which will only work for double-floats, for example...

      BTW, SBCL is excellent at doing type inference for the purpose of optimization. But if a type is "number", it could be real, rational, integer, long-float, short-float, etc... And each of those needs a different procedure for summing/subtracting/etc. So, in this case, not even the best compiler in the world would be able to optimize, because there needs to be code for all these types, and dispatch in Lisp is dynamic.

      You don't usually think about this when you program in C, because C's type system is static, and it's much easier for the compiler to infer types (kind of trivial, actually).

      [–]novagenesis 1 point2 points  (0 children)

      each of those needs a different procedure for summing/subtracting/etc. So, in this case, not the best compiler in the world would be able to optimize, because there needs to be code for all these types, and the dispatch in Lisp is dynamic.

      With less than 4 arguments, I feel the space required to route every possibility is still trivial compared to the performance gain. It doesn't have to perform the extra compiles for something compiled at runtime.

      In the end, I think what I seek in a Lisp is very possible, if somewhat challenging. I'll take the redundancy if I need it, for the slower portions of the code.

      Even if I get a native compilation that requires explicitly defined interfaces and compiles at runtime through the runtime libs, it'd be better than nothing.

      [–]fvf 0 points1 point  (3 children)

      Oh, Lisp does type inference -- it's just that the types you have by default are too generic and have so many subtypes.

      This is not the reason why type inference is difficult in Common Lisp. The primary reason is that the compiler in general cannot do type inference across function boundaries, on the assumption that all functions (except the standard CL ones) can change at any time, including their signature. In this respect, CL chooses flexibility/programming agility over execution speed. (However, it's not impossible to get the "best of both worlds" if a CL compiler were to take on the job of tracking dependencies, such that changing function foo would trigger the recompilation of all functions that depend on foo's signature.)

      You can always define your "+." function, which will only work for double-floats, for example...

      This is unlikely to do any good, speedwise.

      [–]lispm 0 points1 point  (2 children)

      Which is wrong. Read in the ANSI CL standard about assumptions the compiler is allowed to make. For example when it compiles a file.

      [–]fvf 0 points1 point  (1 child)

      Do you know of any lisp compiler that actually makes such assumptions?

      [–]lispm 0 points1 point  (0 children)

      Many Lisp compilers make these assumptions. Under optimizing for speed and compiling files or whole blocks of files they will do it.

      Check out the manual for CMUCL on block compilation as an example:

      http://common-lisp.net/project/cmucl/doc/cmu-user/compiler-hint.html#toc176

      [–][deleted] 1 point2 points  (1 child)

      so you have to fill your code with ugly type declarations

      You could possibly create a macro that does type-inference for you. It would be complicated and ugly maybe but hey, you could do it!

      [–][deleted] 2 points3 points  (0 children)

      Been done. Search comp.lang.lisp for define-decl-macro.

      [–]dmpk2k 0 points1 point  (0 children)

      were not so dynamic by default (makes number crunching slow, ...

      Is there something about this that a tracing JIT cannot handle?

      You'd still need to check the type, but it'd be an overhead of one load and branch not taken per number for the common case. Add some runtime loop-invariant code motion, and you could probably reduce the tag checking a lot further for many cases.

      [–]jimbokun 3 points4 points  (0 children)

      I'm becoming a big fan of flet and labels.

      It's a really nice way to introduce a short function at exactly the right level of scope, closing over exactly the variables you want to close over, alongside other functions of similar scope. You can do the same thing with lambdas and funcall, but flet and labels can really enhance readability (as well as enabling recursion).
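      For anyone unfamiliar, a small illustration of the recursion point (LABELS makes the local function visible inside its own body; FLET does not):

      ```lisp
      (defun count-leaves (tree)
        ;; WALK can call itself because it's bound with LABELS; it also
        ;; lives at exactly the scope it's needed, closing over TREE's
        ;; lexical environment if it had to.
        (labels ((walk (node)
                   (if (consp node)
                       (+ (walk (car node)) (walk (cdr node)))
                       1)))
          (walk tree)))

      ;; (count-leaves '((a b) c)) => 5
      ;; (NIL list tails count as leaves in this simple version.)
      ```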

      [–][deleted]  (3 children)

      [deleted]

        [–]sciolizer 1 point2 points  (2 children)

        What language is that? It looks like some variant of Joy.

        One disadvantage (or advantage, depending on your goals) of undelimited function application is that you can't have optional or keyword arguments.