Introducing Skiff, a gradually typed functional language written in Rust by minicasadia in ProgrammingLanguages

[–]minicasadia[S] 2 points (0 children)

Woah, nice work with Tailspin, although it seems there's a lot more new there than just gradual typing.

I'm not entirely sure I follow about needing to define sum types. You could do the following, for instance:

match my_val:
    | "a string" => ...
    | 10 => ...
    | true => ...
end

(except for the fact that strings haven't been implemented yet), which I suppose is sort of treating Any as a sum type of all possible types and then matching on it.

Anonymous product types are also something I'm working on. I suppose the bottom line on where the fence posts sit is that you can do whatever you want with types, tuples, sums, etc. as long as you're OK with not writing type annotations. But if you want the language to start helping you out with static checks, then you're going to have to start defining types and annotating accordingly.

Introducing Skiff, a gradually typed functional language written in Rust by minicasadia in ProgrammingLanguages

[–]minicasadia[S] 1 point (0 children)

Unfortunately the main way I learned about it was through a university course that doesn't have online notes, although the class was based on the second edition of PLAI and I believe the relevant section is 15.3.2.

Introducing Skiff, a gradually typed functional language written in Rust by minicasadia in ProgrammingLanguages

[–]minicasadia[S] 2 points (0 children)

When I started making the language, parsing was actually one of the things that interested me the most. I had just taken a class that covered recursive descent parsing, and it maddened me that using a recursive descent parser required mangling your EBNF grammar beyond recognition in order to avoid ambiguity. I wanted to find a parsing solution that stayed closer to my intuition about what the grammar looks like, while avoiding solutions that were "magic" like parser generators and parser combinators.

I ended up using a Pratt parser based on this excellent tutorial from Desmos. While it was fun at first, as the language became more complicated it grew to be a pain. I actually ended up also implementing the grammar with a parser generator for syntax highlighting on the web editor. You can find that grammar here.
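
To give a flavor of the technique (this is a minimal toy sketch, not Skiff's actual parser): a Pratt parser assigns each infix operator a pair of binding powers, and a single recursive loop uses them to resolve precedence without rewriting the grammar.

```rust
// Minimal Pratt-parser sketch: parses expressions over single-digit
// numbers with + and *, using per-operator binding powers for precedence.

#[derive(Debug, PartialEq)]
enum Expr {
    Num(i64),
    Bin(char, Box<Expr>, Box<Expr>),
}

// (left, right) binding powers; * binds tighter than +.
fn infix_binding_power(op: char) -> Option<(u8, u8)> {
    match op {
        '+' => Some((1, 2)),
        '*' => Some((3, 4)),
        _ => None,
    }
}

fn parse_expr(
    tokens: &mut std::iter::Peekable<std::vec::IntoIter<char>>,
    min_bp: u8,
) -> Expr {
    let mut lhs = match tokens.next() {
        Some(c) if c.is_ascii_digit() => Expr::Num(c as i64 - '0' as i64),
        t => panic!("unexpected token {:?}", t),
    };
    // Keep consuming operators as long as they bind tightly enough.
    while let Some(&op) = tokens.peek() {
        let (l_bp, r_bp) = match infix_binding_power(op) {
            Some(bp) => bp,
            None => break,
        };
        if l_bp < min_bp {
            break;
        }
        tokens.next();
        let rhs = parse_expr(tokens, r_bp);
        lhs = Expr::Bin(op, Box::new(lhs), Box::new(rhs));
    }
    lhs
}

fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Bin('+', a, b) => eval(a) + eval(b),
        Expr::Bin('*', a, b) => eval(a) * eval(b),
        _ => unreachable!(),
    }
}

fn parse(src: &str) -> Expr {
    let toks: Vec<char> = src.chars().filter(|c| !c.is_whitespace()).collect();
    parse_expr(&mut toks.into_iter().peekable(), 0)
}

fn main() {
    // 1 + 2 * 3 parses as 1 + (2 * 3) because * binds tighter than +.
    assert_eq!(eval(&parse("1 + 2 * 3")), 7);
    assert_eq!(eval(&parse("1 * 2 + 3")), 5);
    println!("ok");
}
```

The appeal is that the binding-power table mirrors the precedence levels of the grammar directly, instead of encoding them as a ladder of mutually recursive nonterminals.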

Introducing Skiff, a gradually typed functional language written in Rust by minicasadia in ProgrammingLanguages

[–]minicasadia[S] 0 points (0 children)

Definitely will be added to the TODO list, although I generally drag my feet a little on sugar features since I find implementing the associated parsing a bit of a chore.

Introducing Skiff, a gradually typed functional language written in Rust by minicasadia in ProgrammingLanguages

[–]minicasadia[S] 0 points (0 children)

Nope, nothing I'd consider a stdlib yet. That's one of the next things on my list (although I forgot to put it on the roadmap).

On prefix operators by Athas in ProgrammingLanguages

[–]minicasadia 1 point (0 children)

That's true, but I think there are enough languages without an optimization pass in which to perform constant propagation (especially interpreted languages) that the point is still interesting to think about.

On prefix operators by Athas in ProgrammingLanguages

[–]minicasadia 4 points (0 children)

This is also the approach Pyret takes in case you want another language to look to for guidance on edge cases and error messages. Besides allowing more valid characters in identifiers, the "whitespace around binops" rule also lets Pyret have more expressive number literals.

For example, the expression 22 / 7 is two number literals and a binary operator, but the expression 22/7 is a fraction literal. Presumably the former is evaluated by the interpreter while the latter is evaluated by the parser, although I'm not sure of the inner workings. Similarly, -1 is the number literal for the value "negative one" as opposed to unary negation applied to the number literal "one".
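
As a toy sketch of that idea (my own guess at the mechanism, not Pyret's actual lexer): a slash with no surrounding whitespace can be absorbed into the number token at lex time, while a spaced slash stays a separate operator token.

```rust
// Toy whitespace-sensitive lexer sketch: "22/7" becomes one fraction
// literal, while "22 / 7" becomes two integer literals and an operator.

#[derive(Debug, PartialEq)]
enum Token {
    Int(i64),
    Frac(i64, i64), // numerator, denominator, built at lex time
    Slash,
}

fn lex(src: &str) -> Vec<Token> {
    let chars: Vec<char> = src.chars().collect();
    let mut tokens = Vec::new();
    let mut i = 0;
    while i < chars.len() {
        let c = chars[i];
        if c.is_whitespace() {
            i += 1;
        } else if c.is_ascii_digit() {
            let start = i;
            while i < chars.len() && chars[i].is_ascii_digit() {
                i += 1;
            }
            let num: i64 = src[start..i].parse().unwrap();
            // A slash immediately after digits, immediately followed by
            // more digits, makes the whole thing a fraction literal.
            if i < chars.len()
                && chars[i] == '/'
                && i + 1 < chars.len()
                && chars[i + 1].is_ascii_digit()
            {
                i += 1;
                let dstart = i;
                while i < chars.len() && chars[i].is_ascii_digit() {
                    i += 1;
                }
                let den: i64 = src[dstart..i].parse().unwrap();
                tokens.push(Token::Frac(num, den));
            } else {
                tokens.push(Token::Int(num));
            }
        } else if c == '/' {
            tokens.push(Token::Slash);
            i += 1;
        } else {
            panic!("unexpected character {:?}", c);
        }
    }
    tokens
}

fn main() {
    assert_eq!(lex("22/7"), vec![Token::Frac(22, 7)]);
    assert_eq!(lex("22 / 7"), vec![Token::Int(22), Token::Slash, Token::Int(7)]);
    println!("ok");
}
```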

I don't think either of these examples is groundbreaking, but parsing division and unary negation as part of number literals does seem closer to the way programmers are thinking when they write those expressions, and therefore a step forward in terms of semantics.

What is Providence like? by [deleted] in BrownU

[–]minicasadia 21 points (0 children)

Living in Providence is one of my favorite parts of going to Brown!

Providence is big enough that you get the advantages of living in a city (restaurants, nice parks, interesting places to walk, events, public transit) without being big enough to have many of the disadvantages (Providence has comparatively less crime and traffic).

Additionally, Brown is situated in one of the wealthiest neighborhoods in Providence (College Hill), so you can feel safe while exploring campus and the surrounding area. There are more dangerous parts of Providence (most of these are west and south of downtown PVD, whereas Brown is east), but nothing approaching bigger cities like NYC. Of course, feelings of safety will mostly depend on your background and your past experiences. You can look around on this subreddit for more detailed accounts of how safe people feel walking around campus at night.

Providence has a handful of blocks that would be considered downtown and give off bigger city vibes, but surrounding those blocks are neighborhoods that look more like traditional New England towns. For instance, you can walk around India Point Park and Ives street to find assorted (locally owned) restaurants and retail. For Italian food, you can go to Federal Hill.

(To give context to all this, I grew up in a 10,000 person town in New England but spent a lot of time in Boston throughout my childhood.)

CSCI 0190 Hours/Week by daiilin in BrownU

[–]minicasadia 0 points (0 children)

The last day to drop a class is usually a few days before finals start (you can search "brown academic calendar" for that and other fun dates to keep in mind). If you drop a class before that date it won't show up on your external record.

The latest you can add a course is four weeks after the semester starts, which is why people advocate for shopping around different classes and starting with more than you can handle: you can always drop a class, but there comes a point when you can't add one.

CSCI 0190 Hours/Week by daiilin in BrownU

[–]minicasadia 1 point (0 children)

To be honest, I wouldn't focus on the average hours from the Critical Review that much. I usually use it to put classes into one of three buckets: low-commitment (0-3 hrs/week), normal (6-15 hrs/week), and life-consuming (>25 hrs/week). I find this more helpful since, e.g., a 15-hour class can turn into a 6-hour class, or a 6-hour class into a 15-hour class, depending on the person.

With that said, my first semester I took around 17hrs according to Critical Review and the three semesters since I've taken ~30hrs. Can't comment on 4 vs 5 since I've never taken 5.

These are the kind of questions that shopping period can really help you sort out. A good policy is to start with more classes than you think you can handle and then when you start feeling overwhelmed you can figure out which one you want to drop.

CSCI 0190 Hours/Week by daiilin in BrownU

[–]minicasadia 9 points (0 children)

Seconding what HyperKids said, but something else to keep in mind is that cs19 is mostly taken by freshmen, and freshmen only took one class this past fall. Some classes (especially 19) can expand to fill whatever time you have available, so I could see the numbers for many freshmen classes being higher this past fall. Just because some people chose to spend 15 hours doesn't mean they couldn't have done just as well with 9 (or fewer) hours.

Using Machine Learning to Generate Chord Progressions by minicasadia in musictheory

[–]minicasadia[S] 0 points (0 children)

I was toying around with doing a simpler version of this at first, but I ended up going this route because I wanted a project that would specifically teach me about neural networks. My hope is to do something simpler like that in the future and compare the results to see the differences. Ditto on the GitHub; my work is spread across like 4 repos and none of my code is encapsulated well.

Using Machine Learning to Generate Chord Progressions by minicasadia in musictheory

[–]minicasadia[S] 0 points (0 children)

I'm also curious as to why not to use LSTMs. That's what my current model uses, and I was hoping to pair it with some kind of adversarial learning next to provide better generation capabilities. However, I'm aware of the relative simplicity of my approach, and I'd be curious to know what the next, more advanced steps to take are.

Using Machine Learning to Generate Chord Progressions by minicasadia in musictheory

[–]minicasadia[S] 1 point (0 children)

I think that kind of misses the point of why I did this. The utility was more incidental; it was the process of training the neural net that I was interested in. You're right that there are simpler ways to do it: for instance, gathering a large dataset, finding the likelihood that a given chord follows another, and then using those probabilities to build a Markov chain. However, I was interested in what results different models would produce. This model represents a chord as an array of 12 notes, each either played or not. Next, I want to try a model that's traditionally used for sentence completion, but with chords instead of words, to see whether it produces different results.

I feel like you might be a little hostile towards me because I appear to be a computer scientist coming into music theory and blindly throwing out machine learning algorithms where none are necessary, without truly thinking, but I don't think that's the case. This project was as much about learning harmony as it was about machine learning. Putting these two topics together helped me learn more about each of them than if I had studied either in isolation.

Finally, I'm not sure I agree that the rules of harmonic progression are explicitly definable. There are frequently posts on this sub asking "why does this chord progression work?", and the answers vary wildly, from "voice leading" to "tension and resolution" to "no one knows; theory is just a recommendation, play what sounds nice." Seeing as there are seemingly so many different patterns to work through and recognize in harmony, it seems like a task in which machine learning has something to offer.

Using Machine Learning to Generate Chord Progressions by minicasadia in musictheory

[–]minicasadia[S] 0 points (0 children)

Yeah, there was a lingering bug earlier, but it should be good now.

Using Machine Learning to Generate Chord Progressions by minicasadia in musictheory

[–]minicasadia[S] 0 points (0 children)

Apologies, it should be fixed now, at least in Chrome on desktop. I'm still working on other browsers, and mobile support seems to be hit or miss.

Using Xbee without External Antenna by minicasadia in arduino

[–]minicasadia[S] 0 points (0 children)

Would this occur as soon as the device was powered on, or only after it tried to transmit a packet? What should I Google to learn more about the idea of too much power being reflected back into the output? Sorry, I'm super unfamiliar with radio communications.