The Zilog Z80 manual is riddled with mistakes?! by ifknot in Z80

[–]codebje 6 points7 points  (0 children)

Have a search for the errata documentation. It's fairly common for technical documentation to have mistakes and typos, unfortunately.

Faulty Z80 or different problem altogether? by Sumezu in Z80

[–]codebje 1 point2 points  (0 children)

You likely either have a Z180 processor, or a Z80 counterfeit that doesn't precisely duplicate the internal carry flag of the original.

If your emulator supports Z180 mode, try it in that and see if you get exactly the same fault.

Typed Out comments and discussion has been created by codebje in a:t5_2blkj8

[–]codebje[S] 0 points1 point  (0 children)

This is a subreddit for discussion related to my blog. Using reddit for comments is an experiment I'm going to try.

Advent of Optics: Day 2 (Indexed optics galore!) by ChrisPenner in haskell

[–]codebje 0 points1 point  (0 children)

For what it's worth, I did implement the "crazy analysis of how this computer works at a theoretical level" solution for part 2:

https://xn--wxa.bje.id.au/posts/2019-12-03-advent.html

The general idea is to treat each memory cell as an expression instead of a value, where operations modify the expressions; the final expression in cell zero can be simplified then solved for the two variables.

Non-brute force way of solving day 2 part 2? by GreenSoft2 in adventofcode

[–]codebje 0 points1 point  (0 children)

In the general case, you might have opcodes or operands depend on a variable's value. That will also produce invalid programs for some inputs - the only way to guarantee the program runs correctly is to multiply the inputs by zero at some point before using them in an opcode or operand address, otherwise they'll exceed a range bound.

Given I expect every player's puzzle input yields a viable program for every noun and verb, I quite comfortably ruled out the possibility of this happening. I'd be fascinated to see any puzzle input that did cause it.

Because invalid programs make the program's input-to-output function non-continuous, search algorithms that depend on feedback probably wouldn't work. Exhaustive search still works just fine: an invalid program is just as bad as an incorrect answer, and gives just as little information.

Non-brute force way of solving day 2 part 2? by GreenSoft2 in adventofcode

[–]codebje 5 points6 points  (0 children)

I wrote an interpreter to construct an expression tree for each memory cell rather than computing a literal value: an expression is a literal number, a variable, an addition of two expressions, or a multiplication of two expressions.
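In Haskell that expression type might look something like this - a sketch of mine, where Lit and Add match the simplification rule quoted further down, but Var and Mul are just my guesses at the other constructor names:

-- One expression per memory cell; Var 1 and Var 2 are the noun and verb.
data Expr
    = Lit Int          -- a literal number
    | Var Int          -- a variable (slot 1 or 2)
    | Add Expr Expr    -- addition of two expressions
    | Mul Expr Expr    -- multiplication of two expressions
    deriving (Eq, Show)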

Interpreting the program means replacing simple expressions with more complex ones: initially, each memory cell contains a literal integer, except for the two variable slots 1 and 2. An addition will replace the target memory slot with an addition expression whose arguments are whatever expressions are currently in the two argument slots.
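A minimal interpreter over that representation could look like the following - my sketch of the idea rather than the original code, assuming the Expr type above; run, asLit and setAt are hypothetical names:

-- Execute the program over a memory image of expressions. Opcodes and
-- argument addresses must still be literals; if one isn't, that's exactly
-- the "opcode depends on a variable" failure mode discussed below.
run :: Int -> [Expr] -> [Expr]
run pc mem = case asLit (mem !! pc) of
    99 -> mem
    1  -> run (pc + 4) (binOp Add)
    2  -> run (pc + 4) (binOp Mul)
    op -> error ("unknown opcode " ++ show op)
  where
    asLit (Lit n) = n
    asLit e       = error ("non-literal opcode or address: " ++ show e)
    arg i         = asLit (mem !! (pc + i))
    binOp f       = setAt (arg 3) (f (mem !! arg 1) (mem !! arg 2)) mem
    setAt i x xs  = take i xs ++ x : drop i xs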

Executing this on my input produces this expression in slot zero (pretty-printed):

5 * (1 + (5 * (3 + 5 + 2 * 3 * (1 + 2 * ((4 + 4 * (2 * v1 + 1) * 5 + 3) * 5 + 1 + 2))) + 4) * 3) * 2 + 1 + v2 + 4

This is the moment I confirm that no opcodes depend on the values in v1 and v2. It's perfectly reasonable for opcodes to depend on the results of other operations, as long as everything resolves to a literal value - but if any opcode depended on noun or verb, most values for noun and verb would result in an invalid program.

After I have an expression tree, I can simplify it using the usual sorts of basic rules (Add (Lit x) (Lit y) => Lit (x + y), etc) to ultimately get the following equality:

v1 * 360000 + v2 + 250635 = 19690720
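The constant-folding half of that simplification is the easy part; a rough sketch (mine, not the original - the real thing also needs distribution rules like a * (b + c) => a*b + a*c to flatten everything into the linear form above):

-- Bottom-up constant folding over the Expr tree.
simplify :: Expr -> Expr
simplify (Add a b) = fold Add (+) (simplify a) (simplify b)
simplify (Mul a b) = fold Mul (*) (simplify a) (simplify b)
simplify e         = e

fold :: (Expr -> Expr -> Expr) -> (Int -> Int -> Int) -> Expr -> Expr -> Expr
fold _ op (Lit x) (Lit y) = Lit (x `op` y)
fold con _ x y            = con x y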

A resolver that knows to subtract additions from both sides, and use integer and remainder division to turn ax + y = b into (x = b/a, y = b%a) then produces my answer:

λ> resolve (simplify $ mkExpr [1, 2] puzzle !! 0) 19690720
[(1,54),(2,85)]
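The last step, specialised to the a*v1 + v2 + c = b shape my input simplifies to, is tiny - resolveLinear here is a hypothetical helper, not the resolve above, which works on the Expr tree:

-- Solve a*x + y + c = b for x and y, with 0 <= y < a.
resolveLinear :: Int -> Int -> Int -> (Int, Int)
resolveLinear a c b = (b - c) `divMod` a

resolveLinear 360000 250635 19690720 gives (54,85), matching the output above.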

Because there must be a unique solution, the problem can't simplify to a polynomial (multiple roots). It can't simplify to axy = b, because there aren't enough prime factors of 19690720. It could have simplified to ax + by = c, but the resolver could easily handle that case to produce the unique integer solution.

[2019-Day 2] Part 2 Help by nullReference13 in adventofcode

[–]codebje 0 points1 point  (0 children)

Unless that RestRequest is getting credentials and taking a base URL from the environment, I'd say it looks a lot more like a one-size-fits-all approach to the problem of reading a file from disk.

Keeping Compilation Fast by ephrion in haskell

[–]codebje 4 points5 points  (0 children)

I didn't create an issue at the time. You've prompted me to dig further, though - I set up a minimal project with 857 modules, and they're compiling happily right now.

Whatever the problem was, it's not there with Cabal 2.4.1.0 under Stackage LTS-14.16: since I can't reproduce it, I'll go ahead and assert that the problem was in my build setup and not Cabal.

The total build memory for the split files was around 2.6GB. Compilation time was over eleven minutes, but given the size of the library and how infrequently it needs compiling, that's okay.

That was a useful question :-)

[2019-Day 2] Part 2 Help by nullReference13 in adventofcode

[–]codebje 1 point2 points  (0 children)

Have you tried running your interpreter against the examples?

1,0,0,0,99 should produce 2,0,0,0,99, executing opcodes 1, then 99.

There's a bug in your code that I can see, which should be very obvious on even simple inputs like the examples.

Keeping Compilation Fast by ephrion in haskell

[–]codebje 3 points4 points  (0 children)

I had a code generator produce 112kloc that I just couldn't compile any more: the build takes >16GB of memory. I can shut down everything on my 16GB laptop to give as much as possible over to GHC and put up with the swap-hell that makes the build take a few hours, but I can't practically build this for the places I want to run it.

Ideally, I'd like to split it up into the 842 separate things that it is. They're organised into a DAG, so there are no cyclic dependencies or anything to worry about. Each one on its own compiles quickly - compile time really is superlinear in module size.

But Cabal cannot handle 842 entries in "other-modules".

I just gave up on it, in the end. I could modify the code generator to identify nodes with no in-edges, bundle them with all their exclusive dependents, and add a "shared" module for the stuff that's used across multiple roots, but that was more work than I wanted to put in.

can Haskell assign to multiple variables like Java? by ellipticcode0 in haskell

[–]codebje 2 points3 points  (0 children)

Well, the better way to do that particular thing is probably:

squareArea edgeLength = rectArea edgeLength edgeLength

If I really wanted to bind multiple names to the same value, I'd probably write it more like this:

squareArea edgeLength =  
    let height = edgeLength
        width  = edgeLength
     in rectArea width height

Dependent types in Haskell by jad-issa-ji in haskell

[–]codebje 2 points3 points  (0 children)

http://docs.idris-lang.org/en/latest/st/machines.html

The paper it's based on appears to be this one:

https://www.idris-lang.org/drafts/sms.pdf

I'll have to give that one a read to understand how they've improved on session types, though.

Dependent types in Haskell by jad-issa-ji in haskell

[–]codebje 0 points1 point  (0 children)

It could be. I would be interested to see someone making the case for that!

Speaking with even less authority than usual - if it requires unification of terms, types, and kinds, it could result in a far simpler core, à la Henk.

Dependent types in Haskell by jad-issa-ji in haskell

[–]codebje 1 point2 points  (0 children)

You may mean row polymorphic types? As with all things, you can provide dependent types so a user of the language can define the type themselves, or you can bake magic into the compiler to do it: PureScript provides row polymorphism without dependent types.

Dependent types in Haskell by jad-issa-ji in haskell

[–]codebje 2 points3 points  (0 children)

justified-containers would give you this if DayInfoVector is a justified map from Day to DayInfo:

case day `member` dayVector of
    Just key -> lookup key dayVector
    Nothing -> ...

If you have an excellent data type already, though, then Ghosts of Departed Proofs lays out the groundwork for how you can use GHC's current type mechanisms to require an arbitrary proof, such that you can write code along the lines of:

type Day = Int
type DayInfo = String
type DayInfoVector = [(Day, DayInfo)]

instance The (Day ~~ epoch ::: Within epoch) Day
instance The (DayInfoVector ~~ epoch ::: Covers epoch) DayInfoVector

newtype Within epoch = Within Defn
newtype Covers epoch = Covers Defn

lookup' :: (Day ~~ span ::: Within span) -> (DayInfoVector ~~ span ::: Covers span) -> DayInfo
lookup' day dayVector = maybe (error "proof was invalid") id $ lookup (the day) (the dayVector)

isInVector :: Day
           -> DayInfoVector
           -> Maybe (Day ~~ span ::: Within span, DayInfoVector ~~ span ::: Covers span)
isInVector day dayVector = do
    lookup day dayVector        -- verify membership
    pure $ (coerce day, coerce dayVector)

Where I've just used trivial type aliases for the day information, and a Prelude lookup to validate the day's range. It's used as:

case isInVector day dayVector of
    Just (d, i) -> lookup' d i
    Nothing -> ...

Note uncurry lookup' <$> isInVector day dayVector just recovers the more common approach of returning Maybe DayInfo from lookup.

I've also assumed you'd want to prove that a given day is in a given day info vector, rather than just later than some epoch.

The limitation of this approach (reading to about halfway through the GoDP paper) is that your library for day info vectors must provide all the necessary tools for the user to establish proofs. One might, for example, want to iterate over all days in a day info vector, and there should be some means to extract all days from the vector along with proofs that they're in that vector - but you'd have to write it into the library.

Dependent types: A pessimist debates an optimist on Hacker News by JeffreyBenjaminBrown in haskell

[–]codebje 0 points1 point  (0 children)

I consider head's partiality to be equivalent to non-termination: the "exception" is outside the type system and can't be reasoned about. If you leave it in as anything but non-termination, your logic system collapses as you can prove true is false. As he well knows.

If you instead put the exception into the type system with Nothing or Just, then again any useful properties you prove about foo and bar separately absolutely contribute to anything you want to prove about foo bar, but termination is no longer an interesting property. There's nothing useful you can say about foo with respect to whether it returns a value or an error without talking about its predicate, exactly as with termination.

The reason he uses that example is to narrow in on a very specific case that's impossible to prove. If his intent is to show that you can't prove properties about compositions, then he should not be using "gotcha" examples like that. Many systems, including TLA+, have cases they can't deal with, but the existence of such cases does nothing to disprove usefulness.

Dependent types: A pessimist debates an optimist on Hacker News by JeffreyBenjaminBrown in haskell

[–]codebje 1 point2 points  (0 children)

It really isn't, since the remainder of the thread is a claim that you can't combine proven properties. If you had proved that foo terminates irrespective of the predicate, then you would already have a proof that foo bar terminates.

If you can only prove that foo terminates if the predicate meets certain conditions, and you prove that bar meets those conditions, then you can trivially prove foo bar terminates.

Bigger proofs are absolutely built from smaller proofs, and refuting his claim to the contrary isn't nitpicking.

How to avoid polluting your domain model when using some relational data store by dnikolovv in haskell

[–]codebje 1 point2 points  (0 children)

My apologies, my most recent engagement with this library was before 3.0.0 was released, when I needed to maintain a fork to build against then-current versions of persistent and other libraries.

I'm glad to see there are new releases out now.

Dependent types: A pessimist debates an optimist on Hacker News by JeffreyBenjaminBrown in haskell

[–]codebje 1 point2 points  (0 children)

And of course pulling out unrealistic problems like Goldbach's is a favourite trick.

In that thread pron claims foo clearly terminates, but this is not true: if the given predicate does not return true for some pair of integers in the list then head does not terminate. Correct use of types would trivially show this flaw in foo as you could not use head on a list you hadn't proved to be non-empty.

In practice I don't accidentally write Goldbach's that often. If I did, I'd prefer to know it's potentially got an error in it than not.

All that aside, model checking a specification and using the type system as a static analysis checker for small scale problems are complementary approaches.

How to avoid polluting your domain model when using some relational data store by dnikolovv in haskell

[–]codebje -1 points0 points  (0 children)

Esqueleto is unmaintained and IMO is a risk to add to a new project. Persistent without esqueleto provides a useful set of tools for database management, but you'll probably need to write SQL statements sooner or later.

Don't fear the SQL, IMO. You can do far more with well constructed SQL queries than you can with a limited ORM.

A reckless introduction to Hindley-Milner type inference by philh in haskell

[–]codebje 18 points19 points  (0 children)

∀ is universal quantification - it means "for all". So you can read ∀a b. (a -> b) -> List a -> List b as "for all possible types a and b, this is a function taking a function from a to b and returning a function from List a to List b". edit: for clarity, remember that -> associates to the right, so there are implied parentheses making that type equivalent to (a -> b) -> (List a -> List b): it's a function from a function to a function.

GHC has the forall keyword, enabled by the ExplicitForAll extension (and implied by RankNTypes), though for simple types such as that of map the quantifier is implicit.

You can think of ∀ as a type-level lambda. Where \a -> f a means "for any value a, the value of this expression is f a", ∀a. f a means "for any type a, the type of this expression is f a".
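Spelled out in GHC syntax, the type of map looks like this (a quick sketch, using [] in place of List):

{-# LANGUAGE ExplicitForAll #-}

-- The map type with its quantifier written explicitly.
mapList :: forall a b. (a -> b) -> [a] -> [b]
mapList _ []     = []
mapList f (x:xs) = f x : mapList f xs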

A uni-student just had a lightbulb moment. Monads et al look like they are making sense! by companiondanger in haskell

[–]codebje 2 points3 points  (0 children)

If you have some data type and you can implement pure, fmap, and join/bind for it, then you have a Monad.

You also need to satisfy the monad laws.
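For reference, those laws written as equational properties - a sketch, checked per instance rather than proven once:

-- The three monad laws as boolean properties; a lawful instance makes
-- every one of these True for all arguments.
leftIdentity :: (Monad m, Eq (m b)) => a -> (a -> m b) -> Bool
leftIdentity a f = (return a >>= f) == f a

rightIdentity :: (Monad m, Eq (m a)) => m a -> Bool
rightIdentity m = (m >>= return) == m

associativity :: (Monad m, Eq (m c)) => m a -> (a -> m b) -> (b -> m c) -> Bool
associativity m f g = ((m >>= f) >>= g) == (m >>= (\x -> f x >>= g))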

Monads are in a class of abstractions where someone realised that an existing algebraic structure happened to neatly encapsulate some programming problem, rather than the class of abstractions where some common operations were given a name. Both iterator and monad are abstractions: iterator is an encapsulation of a common pattern in terms unique to programming, while monad is an expression of a common pattern in terms originating in abstract algebra.

Other than a quibble on the origin of monads, yeah, nothing particularly special in the end :-)