I used OCaml as my daily driver in grad school and my postdoc, and recommend my students use it. I think the main benefits of OCaml are:
Functional
Has mutable references
Not lazy
Haskell's type system is just better. But for large, complex systems that require performant code, it can be quite difficult to track the laziness, and sometimes life is just easier if I can use a mutable reference.
The author is completely correct that there's a big failure in the lack of a type-class-like system. This is why people ask so much for modular implicits. Core is also a decent standard library (though I don't know why they require `sexp_of_t` and `t_of_sexp` on seemingly all their data structures).
So I'd gladly use a different functional language with mutable references and strict evaluation. But there isn't any besides F#, and I have a Mac and don't personally want to figure out .NET on Mac.
They require `sexp_of_t` and `t_of_sexp` because their workflow is built around them, e.g. expectation testing [1] and tools such as [2]. It's a less noisy alternative to JSON for many people.
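For what it's worth, a minimal sketch of how this typically looks (assuming Core and ppx_sexp_conv are set up; the `point` type is just an example):

open Core

type point = { x : int; y : int } [@@deriving sexp]

let () =
  (* [@@deriving sexp] generates sexp_of_point and point_of_sexp,
     which expect tests and sexp-based tooling build on *)
  { x = 1; y = 2 } |> sexp_of_point |> Sexp.to_string_hum |> print_endline

which prints ((x 1) (y 2)).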
I think looking at it as a 'failure' or 'deficiency' is the wrong mindset. OCaml makes a different tradeoff than Haskell. In exchange for not having ad-hoc polymorphism, OCaml's functor system allows much quicker compilation and easier-to-understand type errors. Like everything in tech, it's a tradeoff.
Yeah, considering it was an open question for 12 years, I'd be pretty surprised if you solved it immediately after hearing the problem. If the interviewer had provided hints and you got there with some direction, I would find it more believable.
What's more likely: that somebody figured out, in under an hour, the solution to a problem that previously took over a decade, and then felt the need to brag about it to strangers on the internet; or that somebody lied online to feel better about themselves? People are certainly more likely to assume the latter.
I didn't know the history, I had no idea it was an open question for 12 years!
As I said, it was a very long time ago when I heard of the problem. I don't think it was in an interview, but I honestly don't remember the circumstances. I do remember being quite proud of myself, so I guess there's truly an element of bragging in my statement. Doesn't mean I lied though.
It appears the tortoise-and-hare solution was found by Floyd in 1967. Does that mean the problem was first posed around 1955? I'd love to learn more about it; do you have a source for that?
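For anyone unfamiliar, a minimal sketch of the tortoise-and-hare idea (cycle detection in a sequence generated by repeatedly applying a successor function, in O(1) extra space; the names here are just illustrative):

(* [next] returns the successor, or None when the sequence ends.
   The tortoise advances one step per iteration, the hare two; if
   they ever meet, the sequence has a cycle. *)
let has_cycle (next : 'a -> 'a option) (x0 : 'a) : bool =
  let rec go slow fast =
    match next slow, Option.bind (next fast) next with
    | Some slow', Some fast' -> slow' = fast' || go slow' fast'
    | _ -> false
  in
  go x0 x0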
This is a little tangential, but does anybody find it weird that, even on desktop, the article seems hyperoptimized for mobile? The huge margins made it feel like I was reading on a phone emulator instead of on a desktop.
Maybe that wasn't the best way to phrase it, but the question was asked and Prof Buzzard replied basically that in Lean it's straightforward to express quotients, while in Coq you get "setoid hell". This specific statement is at 1:04:20 in the video. The question is at 1:00:02, and is probably useful for context.
Aren't quotients non-constructive? Univalence is probably also relevant to this issue, given that it generalizes the treatment of 'equality' in a broadly similar direction to the one needed for quotients. Groupoids are after all a fairly natural generalization of setoids.
I can't say; perhaps Lean's definition of quotient isn't constructive (I hadn't looked or noticed that). But there is at least this construction of quotients in NuPRL.
I think quotients are not typical of constructive objects, in that if you construct some object and then project it into the quotient, you have the quotient, but you cannot project that object back out. You get subtly less: only that the resulting quotient satisfies some equivalence relation.
Under a specific set of assumptions, you may very well be able to construct the quotient... but under a different set of assumptions (one that includes the quotient itself as an assumption), perhaps you have the quotient while lacking the assumptions necessary to construct it yourself.
Anyhow, the argument is that they are non-constructive in a Humpty Dumpty sense: you cannot put it together -> take it apart -> put it back together again. Though it seems reasonable to consider the "put it together" phase as not entirely incompatible with constructivity?
Thanks for the reply! I'm gonna need some time to mull it over...
Would you be able to say how "normalizing" fits into this? By "normalizing" I mean applying some function that takes each equivalence class to a representative element (think simplifying fractions). Using something like that, we could round-trip a value (object -> quotient -> object), though at the end we might end up with a different value than what we put in. So yeah, a normalizing function seems like a useful thing when looking at quotients constructively. Sorry for the handwaviness; I hope I managed to get my point across!
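For instance, something like this (a small sketch for fractions; hypothetical helpers, assuming a nonzero denominator):

(* Pick a canonical representative of each equivalence class of pairs
   (n, d) under (n1, d1) ~ (n2, d2) iff n1 * d2 = n2 * d1 *)
let rec gcd a b = if b = 0 then a else gcd b (a mod b)

let normalize (n, d) =
  let g = gcd (abs n) (abs d) in
  let sign = if d < 0 then -1 else 1 in
  (sign * n / g, sign * d / g)

(* (2, 4) and (1, 2) are different values, but they round-trip to the
   same representative: normalize (2, 4) = normalize (1, 2) = (1, 2) *)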
So it seems that the definition of quotients in Lean is classical and justified by axioms, while NuPRL's at least seems constructive. That leads me to believe quotients as defined in Lean aren't constructive, but I'm not sure we can take the step to "quotients aren't constructive".
Anyhow it's an interesting question, and one which I too wish I had more clarity on.
As a layman I would describe them as implicits like in current Scala [0], but better (easier to reason about). One thing they allow is ad-hoc polymorphism, like typeclasses.
The ML programming language is known for its advanced module system. First, you have structures, which are your traditional idea of a module. Structures define types and data. Signatures describe the interface of a module and are like the module's "type." By making types abstract or not mentioning members, signatures achieve encapsulation of modules. Finally, you have functors, which are functions from modules to modules. The parameter is constrained by a signature, and the client can pass any module that conforms to the signature to get an output module that uses the passed module in its implementation.
The module system is used when a type must support certain operations. For example, the Set module (https://caml.inria.fr/pub/docs/manual-ocaml/libref/Set.html) defines a signature called OrderedType for types equipped with a comparison function, and any module that implements OrderedType can be used to make a set.
Notably, ML modules differ from Haskell typeclasses because more than one module can exist for the same combination of types. However, you must pass ML modules explicitly, whereas typeclass instances are passed implicitly. So, ML modules have more power than typeclasses, but at the cost of convenience.
Here is an example where modules are used to make a set:
module LtInt = struct
  type t = int
  (* Use built-in polymorphic comparison *)
  let compare = compare
end

module GtInt = struct
  type t = int
  let compare lhs rhs = compare rhs lhs
end

module LtIntSet = Set.Make(LtInt)
module GtIntSet = Set.Make(GtInt)
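For instance (a small usage sketch; Set's of_list needs OCaml >= 4.02), the two sets enumerate the same elements in opposite orders because they were built from different comparison modules:

let () =
  let lt = LtIntSet.of_list [3; 1; 2] in
  let gt = GtIntSet.of_list [3; 1; 2] in
  (* elements are returned in increasing order of each module's compare *)
  assert (LtIntSet.elements lt = [1; 2; 3]);
  assert (GtIntSet.elements gt = [3; 2; 1])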
However, modules can quickly get inconvenient:
module type Monoid = sig
  type t
  val op : t -> t -> t
  val e : t
end

module Addition = struct
  type t = int
  let op = (+)
  let e = 0
end

module Multiplication = struct
  type t = int
  let op = ( * )
  let e = 1
end

module FoldLeft (M : Monoid) = struct
  let fold_left = List.fold_left M.op M.e
end
Here, it's probably less verbose to just pass the operation and starting value directly to List.fold_left than to use modules! Debatably, this is just as inconvenient as Haskell's newtype solution to multiple instances, but when there is only one instance, typeclasses are much cleaner to use.
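To make that concrete (a small sketch reusing the modules above):

(* With the functor: instantiate FoldLeft, then call it *)
let sum = let module F = FoldLeft (Addition) in F.fold_left [1; 2; 3]

(* Without modules: just pass the operation and identity directly *)
let sum' = List.fold_left (+) 0 [1; 2; 3]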
Modular implicits are a long-awaited feature that will allow ML modules to be passed implicitly.
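For a rough idea of what that could look like, here is the sort of syntax from the modular implicits proposal (hypothetical; current OCaml does not compile this):

(* An implicit module is a candidate the compiler may pick automatically *)
implicit module Add_monoid = struct
  type t = int
  let op = (+)
  let e = 0
end

(* {M : Monoid} is an implicit module parameter, filled in by searching
   the implicit modules in scope *)
let fold_left {M : Monoid} (xs : M.t list) = List.fold_left M.op M.e xs

(* Usage would then look like an ordinary call: fold_left [1; 2; 3] *)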
I recently helped make a VS Code extension during a Microsoft internship, and I found the language server interface really easy to use. It makes writing an extension straightforward.