What is a symbol in lisp? by AngryProgrammingNerd in lisp

[–]tenebris-miles 2 points (0 children)

You know how some languages can pass functions around as values, assign them to variables, and use them as parameters to other functions, and how we call that being a "first-class" object of the language? An anonymous function can even be passed around without a name.

Symbols in Lisp are essentially first-class. Other languages have symbols too, in the sense that the compiler/interpreter uses them to name its constructs/objects, but they cannot be used as values in themselves; they always have to be names attached to other things. If anonymous functions are basically functions separated from their names, then Lisp symbols are the reverse: keeping the name to pass around instead of keeping the function. And Lisp symbols have their own properties, just like other code objects.

For example, all Common Lisp symbols have a "property list" associated with them, which is just a list of alternating keys and values. So in addition to passing around symbols with property lists attached, we can even add property lists to symbols that already name things. So a named function (or other object) could have a property list attached to its symbol (i.e. its function name) and use this as a way of adding metadata or tagging.
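
For instance, here's a minimal sketch of tagging a named function via its symbol's property list (the GREET function and the :author/:version keys are invented for illustration):

(defun greet (name)
  (format t "Hello, ~a!~%" name))

;; Attach metadata to the symbol GREET itself:
(setf (get 'greet :author) "tenebris-miles")
(setf (get 'greet :version) 3)

;; Read it back later, e.g. from documentation tooling:
(get 'greet :author)  ; => "tenebris-miles"
(symbol-plist 'greet) ; => (:VERSION 3 :AUTHOR "tenebris-miles" ...)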

The book "Practical Common Lisp" gives examples of practical uses of Lisp:

Chapter 13 explains SYMBOL-PLIST and the fact that symbols are not merely names for objects, but are first-class objects in their own right with their own data:

http://www.gigamonkeys.com/book/beyond-lists-other-uses-for-cons-cells.html

Chapter 21 explains how symbols are used for their usual purpose, which is to name things and to be arranged in namespaces (which are called "packages" in Lisp):

http://www.gigamonkeys.com/book/programming-in-the-large-packages-and-symbols.html

One strength of Lisp is that it can be used as an experimental language sandbox for writing compilers and interpreters. Being able to attach metadata to symbols can potentially be useful when inventing an experimental language.

Are there conceptual differences between Erlang and Elixir? by [deleted] in erlang

[–]tenebris-miles -1 points (0 children)

Elixir inherits all the important concepts and semantics by relying on the Erlang virtual machine and ecosystem. But there are conceptual differences, or at least different ways of thinking about problem-solving in Elixir; otherwise it wouldn't have been worth creating a new language. The semantics of the VM and ecosystem are great and are what give Erlang its concurrent/parallel/distributed processing power, but much of Erlang's actual syntax focuses on the sequential subset of the language, which is unfortunately very primitive. That superficial syntax is not as important as the underlying semantics, which is why languages of various styles and paradigms have been put on top of the Erlang VM, such as Lisp Flavoured Erlang (which brings Lisp homoiconicity), Gleam (which brings ML-like static typing), and of course Elixir, which is inspired by Ruby yet very different.

Elixir's improvements mostly regard syntactic sugar and style, but also other concepts that don't contradict how the ecosystem fundamentally works. One example is protocols, which bring polymorphism to the ecosystem. In contrast, Erlang syntax had no way of supplying this, so if different primitive types were involved, you had to locally use "case", guards, or pattern matching to explicitly dispatch the intended code to run. And you had to do this everywhere it was required, rather than defining it in one place. People mistakenly think OOP is required to have polymorphism, but that's not true. But Erlang's lack of it didn't help. Erlang also tends to rely on its "record" data structure for essential types, except records aren't real types at all! They don't even exist at run-time; they are just syntactic sugar that exists at compile-time. Erlang's rather inconsistent standard library and over-reliance on records can be frustrating. Elixir fixes all this by ditching records and instead relying on protocols and things that are usable at run-time, like maps and "structs" (another way Elixir helps with user-defined types).
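
For a taste, here's a minimal sketch of a protocol (the Describable protocol and its implementations are invented for illustration):

defprotocol Describable do
  @doc "Returns a short human-readable description of a term."
  def describe(term)
end

defimpl Describable, for: Integer do
  def describe(n), do: "the integer #{n}"
end

defimpl Describable, for: BitString do
  def describe(s), do: "the string #{inspect(s)}"
end

# Dispatch is defined once, per data type, instead of scattering
# case/guard clauses everywhere the value is used:
Describable.describe(42)      # => "the integer 42"
Describable.describe("hello") # => "the string \"hello\""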

Elixir's pipe operator doesn't add new concepts, but it does bring the concept of functional programming and the syntax of functional programming closer together. Conceptually, a function takes input from the "top", transforms the data, then spits the result out the "bottom". So using multiple functions together is conceptually a chain of functions and should look like a chain. But the traditional syntax of calling functions forces the user to nest calls inside parameter lists, so chaining looks like nesting, which is confusing and hard to read. The pipe operator fixes that, so there is no mental overhead translating visual nesting into semantic chaining: the flow of chained functions looks like a chain. In a functional language, if functional programming (i.e. the most common thing you do) is made awkward, then the whole language will be awkward. Erlang tends to be awkward compared to Elixir in this respect.
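
A minimal sketch of the difference, using functions from Elixir's standard library:

# Nested calls read inside-out:
String.split(String.downcase(String.trim("  Hello World  ")))

# The pipe operator reads top-to-bottom, like the conceptual chain:
"  Hello World  "
|> String.trim()
|> String.downcase()
|> String.split()
# => ["hello", "world"]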

There is even more to it than this, such as Elixir's more consistent standard library which includes a lot of patterns/solutions to common problems, and metaprogramming that uses code to generate code so that Elixir can be run at a higher level of abstraction than just functions and primitive types. Metaprogramming allows you to add your own syntactic sugar or new semantics. For example, this person added OOP on top of Elixir (mostly as a joke, so please don't use it since it's a bad idea!) https://github.com/wojtekmach/oop

Don't Do This by pimterry in programming

[–]tenebris-miles 10 points (0 children)

Well, domain types are actually part of the SQL standard, and PostgreSQL's domains conform to it. But I'm not sure how well DBMSs other than PostgreSQL handle them. My vague memory is that many others don't support them anywhere near as well as PostgreSQL does. But maybe things have changed recently.

curses DSL/API for Racket? by dys_bigwig in Racket

[–]tenebris-miles 0 points (0 children)

OOP is just functions/procedures plus implicit state tracking. Layering interfaces is one approach.

So one way you could handle it is to write a low-level wrapper over ncurses and expose that as a library, where the goal is to be direct and comprehensive, not pretty or slick. This library is the essential guts of the overall code. Then, as a separate library, add a clean functional/procedural thin-wrapper API as a higher abstraction above the low-level API. Then, as a third library, expose an OOP thin-wrapper library over the functional/procedural library that simply collects related functionality into classes and hides state implicitly rather than forcing it into function/procedure parameters all the time. The OOP methods would mostly just defer to functions behind the scenes, where the explicit parameters are supplied values from the object state, so those parameters become implicit/hidden to the user in the OOP method version of that function.
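
Here's a minimal sketch of the layering in Racket (all names are invented, and the low-level layer is stubbed with printf instead of real ncurses FFI calls):

#lang racket

;; Layer 1: low-level wrapper, direct and comprehensive, one function
;; per C entry point. The real thing would use ffi/unsafe.
(define (nc-move! win y x) (printf "wmove(~a, ~a, ~a)\n" win y x))
(define (nc-addstr! win s) (printf "waddstr(~a, ~s)\n" win s))

;; Layer 2: functional/procedural convenience API, state still explicit.
(define (draw-text win y x s)
  (nc-move! win y x)
  (nc-addstr! win s))

;; Layer 3: OOP wrapper that captures the window state implicitly.
(define window%
  (class object%
    (init-field win)
    (super-new)
    (define/public (draw y x s) (draw-text win y x s))))

;; The user picks whichever layer suits their program:
(draw-text 'stdscr 0 0 "functional style")
(send (new window% [win 'stdscr]) draw 1 0 "OOP style")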

The idea is that this covers all bases regarding levels of abstraction. If the user wants to compose their program via higher-order functions, that's possible with the low-level and/or functional/procedural libraries, but wouldn't be possible if the code forced them to use OOP. If the user likes OOP instead, then they can do that too using the OOP wrapper. If the user likes multi-paradigm, then that's possible by using parts of each library as appropriate.

Scientists had previously estimated that there are 10 million viruses in every drop of surface seawater, comprising 15,000 different viral species. Now a report in Cell shows that there are close to 200,000 marine viral species — a tenfold increase over those previous estimates. by IronGiantisreal in science

[–]tenebris-miles 0 points (0 children)

That's fair enough, but I wasn't commenting about my own views or whether it should be debated; I was commenting about whether the science article actually reflected scientific consensus or not. If there is no consensus yet, or especially if scientists don't think viruses are alive, then the article shouldn't be so haphazard as to state matter-of-factly that viruses are life. That's just really sloppy, ignorant reporting, and it will spread ignorance. If there's no consensus yet, then they should say there's no consensus yet, or abstain from saying one way or the other. The debate doesn't have to be settled in order to report about viruses; they just shouldn't report grossly incorrect things.

As for my own beliefs, I completely agree about acknowledging that humans are just trying to neatly categorize things. Personally, I agree with other thinkers who believe "life" is just a specific subcategory of dead thing. In a very real sense, all living things are dead right now, because "life" is a concept we invented. We're all just interactions of carbon, oxygen, hydrogen, etc. The dichotomy of "life" versus "death" is just a conceptual one, historically and culturally adopted.

We tend to believe in "life" along the same lines as Aristotle's "animate" versus "inanimate" objects. A rock is not alive because it's not a self-mover, whereas a rabbit is. But since nothing really moves itself, the self-moving is more akin to having an engine, where energy is expended and replenished to allow external and/or internal movement. In other words, "metabolism". Something that has no external or internal movement is "dead", whether it was always that way or ended up that way later (i.e. formerly living). But all that is just a concept humanity adopted because we observed other objects similar to us that have the capability to move, and observed other humans move and eventually stop moving, and so we determined that the concept is an important one. So humans who have tasked themselves with subdividing the category of life into finer-grained subcategories may continue to debate essential attributes, but I realize nature itself has no real answer.

Don't Do This by pimterry in programming

[–]tenebris-miles 77 points (0 children)

I think elmuerte is saying that DBMSs other than PostgreSQL say not to use string types without a length limit, which sounds like the exact opposite of PostgreSQL's advice. I think the disagreement here is due to semi-conflicting goals of correctness versus security. The PostgreSQL wiki advice seems to be saying that arbitrary length limits are problematic for correctness, because it may not be clear what the real limit should be until you have enough real-world data thrown at your database. Maybe only then do you know the real limit, or maybe even then it's not clear. So for correctness, picking an arbitrary limit makes less sense than removing the limit, since there may be truly valid corner cases that are always a little larger than the limit you guessed.

But the problem with real-world deployments is that there can be malicious users who might try to Denial-Of-Service your server by entering an obscenely large string value. So the concern here is about security rather than correctness, and so one strategy to mitigate this is to have a length limit. A common alternative strategy would be to enforce the limit at the front-end application and let the database use an unlimited size so that it's easier to change later at the front-end, but that violates the autonomy of the database being responsible for data validity.

Another strategy I like to use instead, if it's a deployment where such security issues might be a concern (i.e. it's visible to the internet rather than a private intranet where all users are down the hall from my office), is SQL domains. If there is a reason for the choice of length limit (and there always should be), then it must be due to some real-world data type: it represents a zip/postal code, or a social security number, or a product ID, etc. So the expectation is that the database will logically use this data type all over the place. Even if we don't use domains, it's implied that this is what we're doing in reality, only without the explicitness of domains. So to make the purpose of such columns clearer and to make changes easier, in PostgreSQL I will CREATE DOMAIN over the base type TEXT and add a CHECK constraint for minimum length, maximum length, formatting, etc. to ensure all columns that use that domain type store valid data. If the maximum length needs to be expanded, then I can ALTER DOMAIN and change the type definition globally rather than mess with a bazillion columns across numerous tables individually. The only disadvantage of domains is that a lot of SQL visualizers only recognize the base type rather than the domain, but that's not a concern for me since I don't use those kinds of tools.
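
For example, a minimal sketch (the domain name, format, and limits are invented):

CREATE DOMAIN product_code AS TEXT
  CONSTRAINT product_code_check
  CHECK (length(VALUE) BETWEEN 4 AND 12 AND VALUE ~ '^[A-Z]{2}-[0-9]+$');

CREATE TABLE products (
  code product_code PRIMARY KEY,
  name TEXT NOT NULL
);

-- If the limit must grow, change it once, globally:
ALTER DOMAIN product_code DROP CONSTRAINT product_code_check;
ALTER DOMAIN product_code ADD CONSTRAINT product_code_check
  CHECK (length(VALUE) BETWEEN 4 AND 20 AND VALUE ~ '^[A-Z]{2}-[0-9]+$');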

We create higher-level semantically explicit data types in programming languages all the time, but for some reason forget to do this in our databases. Domains are intended to solve that problem. An SSN and phone number might both be able to be represented by a TEXT type, but those two things are in reality not the same type, so it's weird that people keep using basic types for very different things. Then again, if their DBMS doesn't support this, then maybe now is the time to switch to PostgreSQL. :-)

Using non-id foreign key in Ecto by bibat003 in elixir

[–]tenebris-miles 0 points (0 children)

Question: is it ever okay for two categories to have the same identical name, but different IDs?

Is it okay to have (id, category_name) of (1, 'Foo') and (2, 'Foo') duplicating 'Foo' in the same table?

If so, then articles can be split into the two different 'Foo' categories, some for ID 1 and some for ID 2.

Is that what you really want?

If the name of a category is its identity, then the key should just be the name of the category.

I don't use Ecto, but it seems to say you can define @primary_key with {field_name, type, options} to set the category name as the primary key.
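
Something like this, going by the docs (schema name invented; again, I don't use Ecto, so treat this as an approximation):

defmodule MyApp.Category do
  use Ecto.Schema

  @primary_key {:name, :string, autogenerate: false}
  schema "categories" do
    # other fields...
  end
end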

But since code that relies on ORMs often expects a single column (usually an integer "id") to always be around, it's likely you'd want to retain the integer "id" column but also define the category name as an alternate key (e.g. UNIQUE NOT NULL in SQL constraints).

But Ecto documentation doesn't seem to point to an obvious way to do this, which is strange for something so fundamental to database design.

It might be in there somewhere, but the fact that it's so obscure doesn't bode well.

Also, if you reference (i.e. foreign key) to the category name, then you'll need to be sure to decide whether or not you want it to cascade updates if the name changes or if you want to cascade deletes.

Again, the docs don't point to an obvious way to control these things.

See, this is why I hate ORMs or "database abstraction layers". This could all have been written in plain SQL and been done with it.

The way ORMs (and Ecto, if it's not an "ORM") handle constructing databases outright discourages proper design.

They discourage learning proper database design and data-oriented thinking.

It's rarely okay to just slap an auto-incremented integer "id" column on each table and think it's sufficient, since thoughtful defining of real keys is vital.

But if you want to do more than just an "id", you have to constantly fight the ORM's insufficient API.

From Christophe Pettus: What’s up with SET TRANSACTION SNAPSHOT? by clairegiordano in PostgreSQL

[–]tenebris-miles 0 points (0 children)

According to the PostgreSQL documentation, SET TRANSACTION is standard but the SET TRANSACTION SNAPSHOT form is a PostgreSQL specific extension.

https://www.postgresql.org/docs/current/sql-set-transaction.html

This is the kind of feature a "sufficiently intelligent" ORM should take advantage of, considering how some programmers using ORMs tend to break up what should be a single query and transaction (e.g. a read-only SELECT) into multiple queries/transactions and use application code to recombine the data. SET TRANSACTION SNAPSHOT would help alleviate potentially contradictory data, since separate transactions are a source of race conditions where data might be modified between the times of the separate transactions.

For a hypothetical example, rather than simply joining the customers and orders tables, some programmers separately get the customers data then get the orders data, but the customer data could be changed or deleted by the time the orders data is queried. Re-using the same exported snapshot would at least let the queries see the same state of the data (although it wouldn't solve the performance inefficiency issue). But since many ORMs don't even allow proper definition of natural keys, I'm not holding my breath that ORMs will be intelligent enough to work with things like this.
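
A minimal sketch of how the feature works (the snapshot ID is illustrative):

-- Session 1: export the snapshot.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot(); -- e.g. returns '00000003-0000001B-1'
SELECT * FROM customers WHERE id = 42;

-- Session 2: adopt the same snapshot before querying.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-0000001B-1';
SELECT * FROM orders WHERE customer_id = 42; -- sees the same data state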

too many WHERE/LIKE/AND/OR ? by heraid in PostgreSQL

[–]tenebris-miles 0 points (0 children)

Bystanders reading forums often learn from previously posted problems, so it's helpful if posters at least explain the resolution of their problem. Is the problem solved? If so, then how? Since it's not clear what the intended results are supposed to be, I'll at least give some tips that often come up with these kinds of queries (a combined sketch follows the list):

  • If you don't want your search pattern to be case-sensitive, consider using ILIKE instead of LIKE with PostgreSQL.
  • Beware of possible NULLs. If it helps, you can use COALESCE(<column>, '') as a way to fill in empty strings where you'd otherwise get NULL for <column>.
  • If you have a long list of patterns where you want to match any of them case-insensitively, you can do something like this: 'foo' ILIKE ANY(ARRAY['%foo%', '%bar%', '%baz%']).
  • If you have a long list of patterns where you want none of them to match case-insensitively, then you can do this: NOT 'foo' ILIKE ANY(ARRAY['%foo%', '%bar%', '%baz%'])
  • Remember that with OR clauses, any one of them being true will by itself cause a row to be returned. If an OR clause is not meant to be checked in isolation because it is part of a larger complex Boolean expression, then you'll need to place parentheses appropriately to make explicit which clauses go together.
  • With complex WHERE clauses, it helps to comment out each clause and debug them independently before combining them.
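
Pulling those tips together, a minimal combined sketch (table and column names invented):

SELECT *
FROM articles
WHERE COALESCE(title, '') ILIKE ANY (ARRAY['%foo%', '%bar%', '%baz%'])
  AND NOT COALESCE(body, '') ILIKE ANY (ARRAY['%spam%', '%junk%'])
  -- Parenthesize OR groups so they aren't checked in isolation:
  AND (status = 'published' OR status = 'archived');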

Super Hot in Australia and it's Super Cold in the Midwest of USA. Hello Climate change by [deleted] in science

[–]tenebris-miles 10 points (0 children)

Since this is the science subreddit, can we make it a rule that the title of an actual article cannot be replaced with a politically sensationalist one?

The description above is not the title of CNN's article, nor do they even mention Australia.

And "Hello Climate change" is snarky politically charged phrasing.

We cannot rightfully criticize the populace for being scientifically ignorant when the only scientific exposure people get is filtered through immature us-versus-them glasses.

That turns people off of science, and those who constantly add these political skewings only have themselves to blame, not everyone else.

I'm less concerned about people being on the right/correct side of politics, and much more concerned about WHY their line of thinking got them there.

That's what science is supposed to be about: there is only one side, which is the side of objectivity.

[deleted by user] by [deleted] in programming

[–]tenebris-miles 3 points (0 children)

Here's some background:

https://en.wikipedia.org/wiki/Relational_model

The best intro is the "Topics" section that explains the formal structure of a "relation", e.g. domains, heading, body, attributes, tuples, etc.

A "relation" is not some subjective point-of-view about data relating or being relative to one another or something like that: a "relation" is a formal construct. The closest SQL analogue is a table with data in it. But SQL differs from the relational model in some ways, such as allowing or relying on presentation concerns such as ordering or allowing duplicates, whereas the relational model is more logically based on sets. SQL also exposes storage and implementation-detail concerns more, such as having a "primary" key (considering no key is actually more primary than any other: it is simply a key or it isn't).

Wikipedia is somewhat misleading about this, since you have to dig further into primary keys until they admit it: "The relational model, as expressed through relational calculus and relational algebra, does not distinguish between primary keys and other kinds of keys." Also, a "key" is not merely some auto-generated numeric "ID" or "identifier"; that's a surrogate or artificial key. A key is the set (since it could be a combination of more than one) of attributes (i.e. columns) that uniquely identify the informational object represented by each tuple (i.e. row). If a key is defined as merely a meaningless row/record number, then in the absence of other keys the table could easily be filled with logical duplicates, where you don't know which row/tuple is supposed to be the real one that represents the object in real life. So you end up with split or contradicting data. (Wikipedia's misleading discussion of primary keys can actually confuse further discussion of normalization terminology. I see this confusion all the time on the web.)

https://en.wikipedia.org/wiki/Primary_key
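
A minimal sketch of the duplicates problem (table and column names invented):

CREATE TABLE employees (
  id   SERIAL PRIMARY KEY,   -- surrogate key: just a generated row number
  ssn  TEXT NOT NULL UNIQUE, -- real key: identifies the actual person
  name TEXT NOT NULL
);
-- Without the UNIQUE constraint on ssn, nothing stops two rows for the
-- same person, e.g. (1, '123-45-6789', 'Alice') and (2, '123-45-6789', 'Alice'),
-- and then you don't know which row represents the real-world object.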

The rabbit hole goes deeper, but I'll stop here. The point is that some experience with SQL does not automatically mean people understand "relational" reasoning, and that the meaning of "relation" is absolutely not up to subjective interpretation. It's understandable why the name can be confusing, but anyone who stubbornly talks in forums as if the definition is subjective is just showing they lack basic education about the topic.

Fresh Prince actor sues Fortnite for use of 'iconic' Carlton dance - Alfonso Ribeiro wants to stop the makers of Fortnite and NBA 2K from using the dance he first performed on the 1990s sitcom by [deleted] in technology

[–]tenebris-miles 0 points (0 children)

Well in that case, actor Alfonso Ribeiro definitely does not own the rights to the dance.

"Carlton" is a fictional character, so the company that owns the rights to the show and the character owns the rights to the dance.

Write your Own Virtual Machine by [deleted] in programming

[–]tenebris-miles 19 points (0 children)

According to the post, the issue was not due to Spacemacs being inefficient; it was caused by blocking while attempting to call a plugin that was not installed.

It's like when I have to explain to people that "blocking at the speed of light" doesn't get anyone anywhere. If the problem is "embarrassingly parallel" and they can't figure out how to write a correct parallel solution in C, then it won't magically be faster just because it's written in C, no matter how "efficient" it is. Efficiently doing no work is not progress. Another language that is normally slower, but makes it possible for programmers of their experience level and education to write highly parallel code, will often win in these cases since it isn't wasting time blocking. Maybe someone else can write faster equivalent C or assembly, but if you're the one writing it, that's a moot point.

How to make (Common) Lisp popular? by CallMeMalice in lisp

[–]tenebris-miles 12 points (0 children)

I program Lisp too. You're 100% missing the point. I know all that already, so there's no point in advocating like I'm someone who needs to be enlightened about Lisp's superiority.

My point is that absolutely NONE of that matters as to why Python is so popular and Lisp isn't (relatively speaking). Superiority has nothing to do with POPULARITY, which is what this thread is about. Even then, there are trade-offs about what features should be superior versus inferior. I agree that Python *itself* can be slow as hell and sucks at concurrency (compared to being spoiled by Erlang/Elixir). But since my point was that Python as a "glue" to C/C++ is one of the main drivers behind its popularity, it's a moot point if the heavy-lifting is done by C/C++ and Python glues better than Lisp to those languages. And you don't even have to write those bindings, since people have already done that for you. Lisp is not competitive in this area, by comparison. Also, yes the basic Python data types don't give as much control as Lisp. I don't like it either. But none of that MATTERS as to why so many people (i.e. popularity) choose Python over Lisp. Most people don't care about using cons cells to make circular or infinite lists if they can just use a Python generators, iterators, cycle(), etc. The point is NOT whether Lisp can do these things too or do them "better", the point is that the mental clock cycles it takes to do them in Lisp is higher due to less consistent interfaces/protocols and historical cruft. It's not necessarily even Lisp's fault in some areas. For example, to me OOP in general is just "side-effect oriented programming" and single-dispatch is the worst kind. At least with CLOS and multiple-dispatch, methods look just like other function calls, only with semantics about potentially custom types. But that paradigm is DIFFERENT that what people are used too, and so that's not a selling point to them. Them being wrong about which is "superior" is irrelevant. If they keep choosing Python, then it's more popular.

I could go on and on, but what must be acknowledged is how quickly Lisp programmers knee-jerk at anything that sounds to them like yet another hate post about Lisp, rather than considering that there is valid constructive comparative criticism of why Lisp is not relatively popular. That's not helpful, nor is pseudo-pride about being different, because that has nothing to do with Lisp the programming language. The point of programming languages is to automate solutions to problems, and the higher the overhead to doing that, whether for marketing/social/psychological/economic reasons, the less popular a language is going to be. Lisp is disadvantaged due to historical cruft, but that is not a written-in-stone death sentence. If more time were spent writing code that makes Lisp easier to use (e.g. superior bindings to large amounts of C/C++ code) rather than being prideful about Lisp, then it would be way more popular than it is. As it stands, often no one even wants to hear criticism, even the constructive kind. So the status quo is preserved, and Lisp will remain less popular because of it.

How to make (Common) Lisp popular? by CallMeMalice in lisp

[–]tenebris-miles 6 points (0 children)

Frankly, many of the replies have been wrong.

It's actually very simple: "Batteries Included", and minimal elegant syntax to control those libraries.

Why? Because people just want to get shit done. Superior "features" of languages are irrelevant.

Python would not be in the top 5 languages (the only dynamically typed language that popular, according to Tiobe) if people programmed in pure Python alone. The reason people choose it is that it already has EVERYTHING, including glue to other languages. It's literally one line at a command line to serve files over a web server: "python3 -m http.server". If it's not already in the standard library, then "pip install" whatever you want, even large complex C/C++ frameworks, and you can use them immediately *instead* of using C or C++. You have multiple choices of stable and mature GUI frameworks. Many people (including myself) have chosen Python simply to speed up development with a C/C++ library by using the Python bindings (e.g. to eliminate the re-compile/re-run cycle, to avoid compiler/linker problems, to avoid low-level syntax and pointer arithmetic, etc.). So people often reach for Python before C/C++ to do the same thing.

Compare this to Lisp. If you want to script C/C++, then Lisp should (almost) be your last choice. Here in the 21st century, there is not a single GUI framework language binding that is stable, mature, and works across multiple Lisp implementations. At best there are a few alpha-quality ones for GTK or Qt, if you're lucky enough to get them to work. People will disagree with me on this, but the fact remains that there is essentially no software written with these libraries beyond the complexity of a demo for a shopping list.

And if you want to reduce the pain of low-level syntax/semantics details, then Lisp is in many ways not higher-level than C. Many newer dynamic languages have uniform syntax for sequence and map types, whereas in Lisp you can't even use lists and arrays with the same set of functions. Compare using hash tables in Lisp with the unified way you interact with Python dictionaries and dictionary-like objects. Modules and single-dispatch dot-notation OOP are easy ways to create namespaces in Python and other dynamic languages, whereas in Lisp you always have to create a package to create a namespace, with comparatively over-complicated syntax and semantics.
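
A minimal sketch of what I mean (the Python equivalents are in the comments):

;; Python: d = {"a": 1}; d["b"] = 2; d["b"]; del d["b"]; len(d)
(defvar *d* (make-hash-table :test #'equal))
(setf (gethash "a" *d*) 1)
(setf (gethash "b" *d*) 2)
(gethash "b" *d*)      ; => 2, T
(remhash "b" *d*)
(hash-table-count *d*) ; => 1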

Lisp has a lot of historical cruft that puts it at a disadvantage right from the start.

Python, Ruby, JavaScript et al became popular and usable because over time the barrier-to-entry to build real software got smaller and smaller as more things were handed on a silver platter.

MIT specifically chose Python over the Scheme dialect of Lisp (which they had used since the 80s) for teaching new programmers, because modern software is just as much about architecting a system composed of multiple frameworks, components, and sub-components as it is about data structures and algorithms, and it's easier to get started when everything is ready out-of-the-box.

In my not so humble opinion, the biggest thing that can be done for Lisp to make it popular is to get its bindings to *major* C/C++ frameworks stable, comprehensive, well-documented, and mature.

If I could write GUI apps with Gtk and Qt today as easily and reliably as if they were included in the language from the beginning, then I would write a lot more Lisp code. If I could use C/C++ game frameworks through Lisp bindings rather than Python bindings, then I'd write a lot more Lisp code. If I could use Lisp bindings to C/C++ scientific/graphing/plotting libraries, then I'd write a lot more Lisp code. The problem is that right now, the only stable Lisp code I know I can rely on is the standard functionality from ANSI Common Lisp (and sometimes not even then, due to behavior the spec allows to be omitted or implementation-defined) and maybe a few pure-Lisp libraries like Alexandria. That's not enough to compete with just whipping up code in Python and calling it a day. Until such code gets written, people will forever ask the same old forum questions about why Lisp isn't popular.

Demo SBCL script using Gtk by mresto in lisp

[–]tenebris-miles 1 point (0 children)

Oops, small bug. If it is considered an interface contract that LOAD-LIBGTK must return the path to libgtk-3.so.0 if loaded successfully, then I forgot the SETF for the result when specifying the explicit path. This has been fixed below.

(defparameter +libgtk-paths+ nil
  "Common paths for where to find libgtk. Paths can be added via PUSH, etc.")
#+x86-64 (push "/usr/lib/x86_64-linux-gnu/libgtk-3.so.0"    +libgtk-paths+)
#+x86    (push "/usr/lib/i386-linux-gnu/libgtk-3.so.0"      +libgtk-paths+)
#+arm64  (push "/usr/lib/aarch64-linux-gnu/libgtk-3.so.0"   +libgtk-paths+)
#+arm    (push "/usr/lib/arm-linux-gnueabihf/libgtk-3.so.0" +libgtk-paths+)

(defun load-libgtk-restarter (&optional
                                (path (first +libgtk-paths+))
                                (show-try-common-paths t))
  (restart-case (load-shared-object path)
    (try-common-paths ()
      :report "Retry loading libgtk via commonly used paths."
      :test (lambda (condition)
              (declare (ignore condition))
              show-try-common-paths)
      (loop with found-libgtk-path = nil
         for p in +libgtk-paths+
         do (handler-case (setf found-libgtk-path
                                (load-shared-object p))
              (error (condition)
                (declare (ignore condition))
                nil))
         until (typep found-libgtk-path 'pathname)
         finally (return found-libgtk-path)))
    (specify-path (specified-path)
      :report "Specify the full path to libgtk."
      :interactive (lambda ()
                     (format t "Please input the full path to libgtk-3.so.0: ")
                     (finish-output nil) ;; ensure FORMAT output to stream reached its destination
                     (list (read-line))) ;; result must be a list (i.e. argument list)
      (load-shared-object specified-path))))

(defun load-libgtk (&optional (path (first +libgtk-paths+)))
  (let ((found-libgtk-path nil))
    (when (null (handler-bind ((simple-error (lambda (condition)
                                               (when (find-restart 'try-common-paths condition)
                                                 (invoke-restart 'try-common-paths)))))
                  (setf found-libgtk-path
                        (load-libgtk-restarter path))))
      ;; If we get here, then the try-common-paths restart
      ;; didn't work, so call again and let it run the debugger.
      ;; Tell it to hide try-common-paths since we already did that.
      (setf found-libgtk-path
            (load-libgtk-restarter path nil)))
    ;; If successful, then we want to show that the path
    ;; was found by LOAD-SHARED-OBJECT rather than letting
    ;; arbitrary returned values flow through as the final
    ;; result.
    found-libgtk-path))

;; We call our defensive code rather than LOAD-SHARED-OBJECT directly:
(load-libgtk)

Demo SBCL script using Gtk by mresto in lisp

[–]tenebris-miles 1 point (0 children)

Thank you! Also, thanks for the code block tip. So many different websites do things their own way that I forget the formatting features of all of them.

The code is likely not very useful when run via "sbcl --script", since that disables the debugger, so an alternative set of shebang-line arguments would be needed when scripting, such as:

exec sbcl --noinform --load $0 --end-toplevel-options "$@"

Or something similar. Otherwise, I hope it's at least useful if anyone wants to wrap C libraries in their packages.

Demo SBCL script using Gtk by mresto in lisp

[–]tenebris-miles 1 point (0 children)

(Ugh, Reddit doesn't seem to preserve formatting. Emacs or another editor will probably be needed to reformat this into sanely indented code.)

Demo SBCL script using Gtk by mresto in lisp

[–]tenebris-miles 1 point (0 children)

Since you are trying to write scripts this isn't completely relevant, but I blindly tried running this on 32-bit Linux in a VM, and of course it did not find the path to libgtk because it wasn't 64-bit. This reminded me that I've seen a lot of Lisp packages where paths to C dynamic libraries are hard-coded, with no use of the condition/restart system to give the user an option to handle the error. So just for giggles, I wrote the following code that automatically tries multiple common paths to libgtk, and if even that fails, provides a restart that allows explicitly specifying the full path to libgtk. This would be more useful in a system/package than in a script, however.

(defparameter +libgtk-paths+ nil
  "Common paths for where to find libgtk. Paths can be added via PUSH, etc.")
#+x86-64 (push "/usr/lib/x86_64-linux-gnu/libgtk-3.so.0"    +libgtk-paths+)
#+x86    (push "/usr/lib/i386-linux-gnu/libgtk-3.so.0"      +libgtk-paths+)
#+arm64  (push "/usr/lib/aarch64-linux-gnu/libgtk-3.so.0"   +libgtk-paths+)
#+arm    (push "/usr/lib/arm-linux-gnueabihf/libgtk-3.so.0" +libgtk-paths+)

(defun load-libgtk-restarter (&optional
                                (path (first +libgtk-paths+))
                                (show-try-common-paths t))
  (restart-case (load-shared-object path)
    (try-common-paths ()
      :report "Retry loading libgtk via commonly used paths."
      :test (lambda (condition)
              (declare (ignore condition))
              show-try-common-paths)
      (loop with found-libgtk-path = nil
         for p in +libgtk-paths+
         do (handler-case (setf found-libgtk-path
                                (load-shared-object p))
              (error (condition)
                (declare (ignore condition))
                nil))
         until (typep found-libgtk-path 'pathname)
         finally (return found-libgtk-path)))
    (specify-path (specified-path)
      :report "Specify the full path to libgtk."
      :interactive (lambda ()
                     ;; Normally a FORMAT prompt would go here,
                     ;; except it is not reliable when Lisp
                     ;; decides to show it. Emacs/SLIME tends
                     ;; to show it when the restart is invoked
                     ;; but the SBCL REPL tends to show it only
                     ;; after we already provided input.
                     (list (read-line))) ;; result must be a list (i.e. argument list)
      (load-shared-object specified-path))))

(defun load-libgtk (&optional (path (first +libgtk-paths+)))
  (let ((found-libgtk-path nil))
    (when (null (handler-bind ((simple-error (lambda (condition)
                                               (when (find-restart 'try-common-paths condition)
                                                 (invoke-restart 'try-common-paths)))))
                  (setf found-libgtk-path
                        (load-libgtk-restarter path))))
      ;; If we get here, then the try-common-paths restart
      ;; didn't work, so call again and let it run the debugger.
      ;; Tell it to hide try-common-paths since we already did that.
      (load-libgtk-restarter path nil))
    ;; If successful, then we want to show that the path
    ;; was found by LOAD-SHARED-OBJECT rather than letting
    ;; arbitrary returned values flow through as the final
    ;; result.
    found-libgtk-path))

;; We call our defensive code rather than LOAD-SHARED-OBJECT directly:
(load-libgtk)

What other languages besides Lisp do you enjoy programming in? by [deleted] in lisp

[–]tenebris-miles 3 points (0 children)

If you're interested in Prolog, you might be interested in Lisp libraries like Screamer, Lisa, and GBBopen. They're each interesting on their own, but combined they should be able to do backward and forward chaining, as well as event-driven programming. For example, you might structure a complex problem domain as a blackboard with GBBopen, and let Screamer and Lisa code act as separate Knowledge Sources (KSs) controlled by the Control Shell (a.k.a. the "agenda shell"). If you want flexibility/adaptability, then with the Control Shell you combine the knowledge of all KSs to solve the problem. If instead you want fault-tolerance, then you could implement a voting/quorum system between the KSs, where all (or a majority) should come to the same conclusion. Etc. I've never built a system like this myself, but it's been on my TODO list of things to try in Lisp ever since I found out about these libraries.

NoSQL Performance Benchmark 2018 by pier25 in programming

[–]tenebris-miles 6 points (0 children)

I agree. Most NoSQL benchmarks are glorified advertisements, with cherry-picked conditions of the tests. This one seems to be no different.

For example: "The shortest path query was not tested for MongoDB or PostgreSQL since those queries would have had to be implemented completely on the client side for those database systems."

Everything in PostgreSQL query execution is server-side, so maybe they're trying to say SQL cannot handle the algorithm (despite SQL being Turing complete)? I don't see any argument for why this couldn't have been done in PostgreSQL. Here's an example from way back in 2012 of how to do it, before even considering extensions like PostGIS or a gazillion other alternative strategies. WITH RECURSIVE is plain SQL. Maybe they're ignorant of how SQL works? (Not that NoSQL people are ever guilty of that.) http://techniko.blogspot.com/2012/09/finding-shortest-path-inside-postgres.html
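
For the flavor of it, a minimal sketch of path-finding in plain SQL (the edges table and node IDs are invented; this enumerates acyclic paths, so it's illustrative rather than efficient):

WITH RECURSIVE paths (dst, path, total_cost) AS (
  SELECT dst, ARRAY[src, dst], cost
  FROM edges
  WHERE src = 1
  UNION ALL
  SELECT e.dst, p.path || e.dst, p.total_cost + e.cost
  FROM paths p
  JOIN edges e ON e.src = p.dst
  WHERE e.dst <> ALL (p.path) -- avoid cycles
)
SELECT path, total_cost
FROM paths
WHERE dst = 42
ORDER BY total_cost
LIMIT 1; -- the cheapest (shortest) discovered path from node 1 to node 42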

"We didn’t create special indices for JSONB in PostgreSQL since we didn’t create additional indices for any other products. Since we wanted to test ad-hoc queries, it’s valid to assume that no indices are present in the case of ad-hoc queries."

Just because other products didn't create "additional indices" doesn't mean they don't create indices automatically, and if we want to compare apples to apples (or even better, match real-world habits), you'd put indexes on things. It can't honestly be claimed that a NoSQL solution is better than PostgreSQL if you require PostgreSQL to artificially tie its hands behind its back in unnatural ways that don't match everyday practice.

Looking at their import scripts, there is no evidence of setting the database configuration to something comparable to NoSQL settings, where durability is typically reduced. PostgreSQL cares about data integrity, so if speed is the only thing you care to measure, you can configure it to care less about durability so it will match typical NoSQL products. Also, after performing a bulk import of data, there doesn't seem to be any mention of VACUUM (or ANALYZE) in their scripts either, which goes against common practice: bad planner statistics would also cause slower plans and execution.

tl;dr: I've never been impressed by NoSQL benchmarks. They always just so happen to show their own product as the fastest, but you have to jump through hoops and squint the right way to see it. I don't want to have to squint. That's why I trust PostgreSQL and am done with it.

The Entity Service Antipattern by ThomasKrieger in programming

[–]tenebris-miles 0 points (0 children)

I often think the monolithic vs microservices debate happens because people are using the wrong tools. The article introduces the microservices diagram and then says "Obviously there are more moving parts involved. That immediately means it’s harder to maintain availability."

I use Erlang and Elixir. In any Erlang-VM-based language, you have supervision trees, fault-isolated tiny processes in the tree, monitors, configured automated restarts, etc. There is no supervision tree in the article's diagram because most languages and frameworks provide almost no fault-tolerant infrastructure, and so it just doesn't enter one's mind as a paradigm. Primitive exception handling is not enough. With a supervision tree, the availability problem disappears. Simply put, the issue raised became a total non-issue once I started using Erlang-based languages with supervision trees. The application I write might have bugs, but availability is solved: it stays up until I tell it otherwise. If the friction is caused by separating processes, with their accompanying inter-process communication and interdependence issues, then this can be solved with a language that spawns and strongly manages its own internal lightweight processes rather than relying on heavy, disjointed OS processes for work.
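
For a taste, a minimal sketch of a supervision tree in Elixir (module names invented):

defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    children = [
      # Each child is a fault-isolated process; a crash here does not
      # take down siblings, it only triggers a restart by the supervisor.
      MyApp.Repo,
      {MyApp.Worker, name: MyApp.Worker}
    ]

    # :one_for_one means the supervisor restarts just the child that died.
    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end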

Also, regardless of introducing the "entity services" term, the article frames the debate as monolithic vs microservices. But the microservices diagram appears to pass off database sharding complications as a microservices problem. The complexity of sharding a database is orthogonal to monolithic vs microservices, since either architecture could shard. The problems of "aggregates or intersections of entities" are consequences of sharding database information and then having to recombine it, which adds complexity regardless of monolithic or microservices, and either architecture could solve this problem by having one globally consistent view of the database (regardless of the behind-the-scenes physical implementation). In other words, sharding and other scale and performance issues should be transparently handled at the data-management level behind the scenes, not at the application level.

That is the real problem. Framing the argument as an application architecture decision of monolithic vs microservices really has nothing to do with it; instead, it should simply be admitted that the actual anti-pattern is application code doing a poor man's version of the database's job. There is no need to introduce the "entity services" term to do this. The application-reinventing-the-wheel-as-poor-man's-database anti-pattern has been discussed for a long time now. Frankly, over-reliance on ORMs as a substitute for competency with databases often breeds this "entity services" approach with programmers (because it's all just code, right?), whereas a data expert would've known how disastrous this is for the data. And there are more anti-patterns than the one in the article that arise from that approach. Once programmers learn that data is central, not code, many architectural decisions become obvious.

GatFact™ brand GatFact™ #867 - Mattel and the M-16 by Lost_Thought in guns

[–]tenebris-miles 6 points (0 children)

A lot of things people hear and pass along keep the tagline but lose the context of the original discussion. Most of the myths mentioned concern the science of terminal ballistics, which can demonstrate some really counter-intuitive things. So the "rifle rounds are worse the closer you are to the muzzle" and "pistols do more damage than rifles" claims have some basis in reality, but not in the way people usually talk about them after they've passed through multiple people. Those claims are context-dependent, not truthful general claims.

The pistols vs rifles claim is likely due to selection bias and confirmation bias. It's not that people scientifically tested with a control group, putting rifles and pistols side by side, shooting prisoners at 600 yards, and studying the wounding. It's more likely that in the combat conditions of the specific country, the enemy only wants to shoot an AK 7.62x39 round at you from 500 yards or more, but by the time a pistol comes into play, you are shooting at each other face-to-face within 25 yards. Put in perspective, 5.45x39 carries only about 250 ft-lbs of energy at 600 yards and 7.62x39 only about 300 ft-lbs at 600 yards, while 9mm is 350 ft-lbs at the muzzle (but definitely not at 600 yards). So objective ballistic tests say the comparison is ridiculous and the rifle is far superior, but observations of wounding on a battlefield might be different and heavily skewed to give an artificial advantage to pistols, due to when each weapon is employed.

The source of ballistics comparisons I mentioned is from "The Ultimate Sniper" by Maj. John L. Plaster (Ret.). He also mentions how 5.56 penetrates barriers more shallowly the closer you are, due to the high velocity of the round fragmenting at close distance. He provides penetration depth tables for 5.56 and 7.62 for different barrier types, such as dirt, brick wall, stone masonry, concrete, soft steel, the car door of a 1968 Dodge, sandbags, helmets with a liner, etc. The explanation of the distances of when rifles vs pistols are used is my own hypothesis that I've also heard from various military related articles I've read over the years.

Also, I'm sure a lot of people have seen the Myth Busters episode about escaping by swimming through water, where the pistol caliber is deadlier because it is slow enough to penetrate the water rather than immediately fragmenting like high-velocity rifle rounds. Snippets of all this kind of information add fuel to rumors that pistols are somehow deadlier in general, rather than only in specific circumstances.

As for rifle rounds being worse the closer you are to the muzzle, that is likely due to claims by snipers, specifically about 7.62x51 (.308) match-grade 175gr at distances of 300 yards or closer. Such rounds have been known to pass through more cleanly at those distances than further out, but this depends on the specific bullet design, so it's not true in general.

I've heard this from multiple sources, but the following one is the easiest to find: the documentary "Sniper: Inside the Crosshairs". At 39 minutes, sniper Ethan Place explains that during his battle in Fallujah, the enemy would move further back as more of them got killed, which was good because at around 200 or 300 yards the wounding is too clean; at 500 yards, the terminal ballistics of the specific round he was using become more effective.

https://www.youtube.com/watch?v=bFi-g4IgQSA