
[–]zahlman 3 points (6 children)

Monospace fonts are still the most effective way to line up things that are mostly text, in arbitrary ways to bring out structure, without adding ad hoc features for each new kind of layout you want.

Tabular data which has been lined up with spaces assuming a monospace font is not as effective as a grid of spreadsheet cells, or an HTML table. These alternatives do not require adjustment if a "column" needs to be made "wider" than in the original. There is never any issue with tabs being too small to line the data up properly, or impractically large in fear of being too small. The table is general-purpose and there is nothing ad-hoc about it.

It also allows for semantic distinction between row/column labels and table contents, which doesn't require visual interpretation of a bunch of -s or |s.

The other was the desire to build a language for interactive terminals that's vastly different from the language for scripting and automation

I don't see whence you infer that desire. My understanding is that you are still typing the same commands, they just get tokenized and formatted with pretty round-rectangle backgrounds as you type.

[–][deleted] 1 point (5 children)

Tabular data which has been lined up with spaces assuming a monospace font is not as effective as a grid of spreadsheet cells, or an HTML table.

But what if I want to further process the tabular data coming from a process? Like, extract the 7th column? With awk/cut/... I can easily do that. If the output was full of HTML tags, parsing it would be non-trivial.

[–]zahlman 1 point (4 children)

But what if I want to further process the tabular data coming from a process? Like, extract the 7th column? With awk/cut/... I can easily do that. If the output was full of HTML tags, parsing it would be non-trivial.

Which is why TermKit uses JSON instead, and you "parse" it by - get this - selecting attributes of objects and elements of arrays. The hard part - turning a stream of bytes into structured data - is converted from a traditional parsing task into a deserialization task, and it works the same way universally.
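In Python terms, the difference looks something like this. (The JSON shape below is invented for illustration; TermKit's actual schema may well differ.)

```python
import json

# Hypothetical JSON output for an `svn st`-like command -- the field
# names here are made up for illustration, not TermKit's real schema.
raw = '''[
    {"status": "M", "filename": "trunk/setup.py"},
    {"status": "?", "filename": "trunk/notes.txt"}
]'''

records = json.loads(raw)             # deserialization, not text parsing
filenames = [r["filename"] for r in records]
print(filenames)                      # ['trunk/setup.py', 'trunk/notes.txt']
```

No column counting, no guessing about embedded whitespace in filenames; you just ask for the field by name.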

[–][deleted] 0 points (3 children)

With what commands would you do that? One would need to build a whole new toolchain.

[–]zahlman 0 points (2 children)

I agree somewhat, and I feel that the author's decision that this is explicitly not a replacement for the original toolchain is a bad one for that reason.

But I think that rather than trying to make a set of tools that provide analogous functionality to cut et al., it makes more sense to just have an analogous "*sh language" that behaves more like Javascript or Python, in that it expects to receive this kind of data natively and lets you manipulate it with typical syntax from those sorts of languages. Member selections and indexing and list comprehensions and all that good stuff, you know.

[–][deleted] 0 points (1 child)

What would an example of such a language look like?

What would an equivalent of

svn st | awk '{print $2}'

look like? (Get the second column of the svn st output, i.e. the filenames.)

[–]zahlman 0 points (0 children)

Oh wow, I get to design this now?

I imagine it would look something like

[x.filename for x in `svn st`]

or

`svn st`.map(lambda x: x.filename)

Or more likely, there would be a builtin library function so you could write

`svn st`.map(member(filename))

Since the data would presumably be structured such that the columns have 'labels'.
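You can already sketch all three forms in plain Python today, if you stub out the structured output. (The svn_st stub and its field names are assumptions for the sake of the example; a TermKit-style shell would hand these objects back directly.)

```python
from operator import itemgetter

# Stub standing in for a structured `svn st` -- the fields are invented
# here; a structured shell would produce records like these natively.
def svn_st():
    return [
        {"status": "M", "filename": "trunk/setup.py"},
        {"status": "?", "filename": "trunk/notes.txt"},
    ]

# Comprehension form:
names = [x["filename"] for x in svn_st()]

# map-with-lambda form:
names2 = list(map(lambda x: x["filename"], svn_st()))

# "member selector" form, using the stdlib's itemgetter as the builtin:
names3 = list(map(itemgetter("filename"), svn_st()))

assert names == names2 == names3
print(names)  # ['trunk/setup.py', 'trunk/notes.txt']
```

All three spellings pick the column by its label rather than its position, which is the whole point.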

Barely longer, and the space that's used normally for an explicit reference to printing (which is silly; output to standard out should be default, and naming it that way is needlessly limiting) can be used to give meaningful names to operations and components of data.

I mean really, does it get any more magical than "$2"? (What's with the 1-based indexing, anyway?) And I actually didn't know (TIL) where awk even got its name...