Using Haskell as my main language at work is pretty great, except for one thing. Having learned Agda first, I got really used to having dependent types at my disposal. Now I often find myself naturally thinking of a solution that makes use of dependent types, then having to translate this solution into Haskell. Of course, there is always a translation. But the translation may obfuscate what the code is doing, making it harder for future programmers to understand and maintain it.
The work I do has no relation to formal verification, nor is it "mission critical" that the code is correct. But I still find myself wishing I could use dependent types, just because they make writing programs and capturing business logic both faster and easier. A language with dependent types requires a less severe translation from the programmer's ideas to a compiling program.
=> More information about this toot | More toots from isAdisplayName@mathstodon.xyz
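As a concrete illustration of the translation the post describes (a sketch of my own, not the author's code): a length-indexed vector, which in Agda is a direct dependent type, must be encoded in Haskell via DataKinds and GADTs, promoting the length to the type level:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level naturals: DataKinds promotes the constructors Z and S
-- so they can index types, standing in for genuine dependent types.
data Nat = Z | S Nat

-- A vector whose length is tracked in its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- A total head: the type rules out the empty-vector case,
-- so no runtime check or Maybe is needed.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

main :: IO ()
main = print (vhead (VCons (1 :: Int) VNil))
```

In Agda the length would simply be an ordinary value; here it lives in a separate, promoted copy of Nat, which is exactly the kind of obfuscating translation the toot is pointing at.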
@isAdisplayName I can’t stand the PR campaign in programming languages that our stuff is only (or even especially!) good for “mission critical” programming. it’s the human parts that matter
=> More information about this toot | More toots from chrisamaphone@hci.social
@chrisamaphone absolutely! Dependent types make a programmer's life better. That's all the justification needed to research them and put energy into implementations.
I really hope the day soon comes when there is a widely supported, general-purpose language with dependent types. (Here I'm counting Haskell as a general-purpose language.)
=> More information about this toot | More toots from isAdisplayName@mathstodon.xyz
@isAdisplayName @chrisamaphone you're gonna LOVE idris! https://github.com/idris-lang/Idris2/
=> More information about this toot | More toots from rahulc29@mathstodon.xyz
@rahulc29 Idris is great! I've bumped up against Idris (and I think Chris has as well), and I love Edwin Brady's book "Type-Driven Development with Idris". The language does seem much more geared towards general-purpose use. But there is a vast number of peripheral features and kinks that would need to be worked out before Idris (or any other dependently typed language) can be used in production as a general-purpose language.
Even today, large companies using Haskell occasionally have to hire GHC consultants to fix bugs in the compiler or upstream some important feature. When Facebook started trying to use Haskell at scale, they essentially hired a GHC developer to make the project work.
=> More information about this toot | More toots from isAdisplayName@mathstodon.xyz
@isAdisplayName I think that's fair, we still cannot use Idris in "production". However, it is getting better by the day, so hopefully we can very soon :')
=> More information about this toot | More toots from rahulc29@mathstodon.xyz
@rahulc29 @isAdisplayName Yes... And also, I think that there is a pretty huge spectrum of what "production" means to different people. There are some definitions of "production" under which I could certainly use Idris — and ultimately the difficulties would be very similar to those one would encounter in a non-dependently typed language that hasn't had zillions poured into it by Facebook or Google or Apple.
I think in nearly all these cases, what prevents something from being usable "in production" is not so much language features & kinks & compiler bugs, but the lack of an ecosystem.
=> More information about this toot | More toots from jonmsterling@mathstodon.xyz
@jonmsterling @rahulc29 That's a great point. I think the lack of an ecosystem is one of the biggest impediments to adoption for Agda and Idris. Getting used to "oh, there's a library for that" is really, really nice. And it could kill a project if you suddenly realize you need to write an entire library from scratch. Many places can't justify that investment.
So here's to hoping some tech giant picks up a dependently typed language (please don't make it Lean lol)
=> More information about this toot | More toots from isAdisplayName@mathstodon.xyz
@isAdisplayName @rahulc29 Yeah :)
Well, honestly, it’s going to be Lean. I think we dependent type theorists have been a little reluctant to push for our work to go mainstream, because once you do this you can’t turn back and many mistakes get locked in for years or even decades. This is, I suppose, what is happening with Lean — which is simultaneously a huge win and a huge loss for type theory. (Mostly a win, IMO, but there are a number of cultural ideas embedded in the Lean project that I think are very harmful.)
=> More information about this toot | More toots from jonmsterling@mathstodon.xyz
@jonmsterling @isAdisplayName @rahulc29 can you give some examples of the harmful ideas?
=> More information about this toot | More toots from antidote@mathstodon.xyz
@antidote @isAdisplayName @rahulc29 I think the big one is "Equality is proved by tactics anyway, so there is no need to ensure that definitional equality has any global properties whatsoever". This leads to bad technical conclusions, like "It is OK that definitional equality is not transitive, you use 'simp' instead and nobody expects 'simp' to be able to reason transitively anyway".
This harmful idea is related to another harmful idea embedded in the project, namely "We don't need no stinkin', pointy-headed type theorists”.
=> More information about this toot | More toots from jonmsterling@mathstodon.xyz
@jonmsterling @antidote @isAdisplayName what does it mean for Lean's definitional equality to not be transitive? Lean's implementation violates transitivity of definitional equality?
=> More information about this toot | More toots from rahulc29@mathstodon.xyz
@rahulc29 @antidote @isAdisplayName Ah yes, thanks for the clarification.
=> More information about this toot | More toots from jonmsterling@mathstodon.xyz
@jonmsterling @antidote @isAdisplayName It is pretty counterintuitive, in my opinion. To my knowledge, one checks for definitional equality by normalising and then checking equality of normal forms. Does Lean not do that in its kernel? Or is there some known bug in the normalisation function that the community has deemed not-a-priority (presumably because tactics etc. work well enough)?
=> More information about this toot | More toots from rahulc29@mathstodon.xyz
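For contrast, here is a toy sketch of the naive scheme described in the toot above (my own illustration, not Lean's actual kernel algorithm): decide definitional equality of untyped lambda terms by normalising both sides and comparing normal forms. Equality defined this way is transitive by construction, which is precisely the property under discussion.

```haskell
-- Untyped lambda terms with de Bruijn indices, so alpha-equivalence
-- is plain structural equality.
data Term = Var Int | Lam Term | App Term Term
  deriving (Eq, Show)

-- Shift free variables at or above cutoff c by d.
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam t)   = Lam (shift d (c + 1) t)
shift d c (App f a) = App (shift d c f) (shift d c a)

-- Capture-avoiding substitution of s for index j.
subst :: Int -> Term -> Term -> Term
subst j s (Var k)   = if k == j then s else Var k
subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- Normal-order normalisation (terminates only for normalising terms).
normalize :: Term -> Term
normalize (App f a) =
  case normalize f of
    Lam b -> normalize (shift (-1) 0 (subst 0 (shift 1 0 a) b))
    f'    -> App f' (normalize a)
normalize (Lam t) = Lam (normalize t)
normalize t       = t

-- "Definitional equality": syntactic equality of normal forms.
-- Transitivity is immediate, since (==) on normal forms is transitive.
defEq :: Term -> Term -> Bool
defEq s t = normalize s == normalize t

main :: IO ()
main = print (defEq (App (Lam (Var 0)) (Lam (Var 0))) (Lam (Var 0)))
```

The subtlety in the thread is that a kernel's definitional equality may include principles (like proof irrelevance) that go beyond anything a normaliser of this shape can decide, and that is where the global properties can be lost.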
@rahulc29 @antidote @isAdisplayName Well, in practice you implement some algorithm that is equivalent to the one you describe, but more efficient. The problem with Lean is, to my recollection, that the definitional equality relation is very strong and includes things where you would have to invent a proof of some proposition out of thin air. When this theory is turned into an algorithm, there could be many potential impacts; for Lean, I seem to recall that the impact is that there are equations that can be proved by sequencing two "refls" but not by using a single "refl". Syntactically minded people identify this as a failure of subject reduction, but I think that description can obscure the practical impact.
=> More information about this toot | More toots from jonmsterling@mathstodon.xyz
@rahulc29 @antidote @isAdisplayName There are two instances of this problem in Lean, one easy to fix (or maybe already fixed?) and one harder to fix now without impacting a lot of library code. The latter is the ability to do recursion on proofs of proof-irrelevant inductive propositions; if this were required to be proof relevant, the problem would disappear. But I think it can be hard to make this change, considering the potential impacts on Lean code in the wild.
=> More information about this toot | More toots from jonmsterling@mathstodon.xyz
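The two ingredients Jon mentions can be seen directly in Lean 4 (a minimal sketch of the ingredients only; the actual transitivity counterexample obtained by combining them is more involved):

```lean
-- Proof irrelevance is definitional in Lean: any two proofs of the
-- same proposition are definitionally equal, so `rfl` closes this goal.
example {p : Prop} (h₁ h₂ : p) : h₁ = h₂ := rfl

-- Yet `Acc`, a Prop, enjoys "subsingleton elimination": its recursor
-- may target arbitrary universes. This is what powers well-founded
-- recursion, and recursion on proof-irrelevant proofs is the second
-- ingredient in the failure of transitivity discussed above.
#check @Acc.rec
```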
This content has been proxied by September (3851b).