So I like to joke occasionally on the Fediverse that when looking at Wikipedia's list of programming languages supporting dependent types[1]:
It's tons of extremely complex functional languages, formal theorem provers, and… Ada, a random Government Language dating back to 1983.
Also you'll note that at no point in any Ada documentation anywhere do they feel the need to bring up set theory or the λ cube when explaining how dependent types work :P
I just love the juxtaposition of the highly formalized academic/research/for-fun languages with a very bureaucratic-feeling language whose design philosophy is very similar to COBOL's.[3]
But then one of the few functional programmers/type theory people who follow me will usually ask about Ada's dependent types, and this time I felt the motivation to write a big post explaining them. And I really want to illustrate how the designers were just trying to achieve a specific goal, and coincidentally their solution meets the formal definition of dependent types.
Note that I'm really not knowledgeable about functional programming or type theory at all so I can't really speak much about it, although I am fairly skilled with Ada.
=> [1]: list of programming languages supporting dependent types | [2]: list of requirements that they considered mandatory for a high-integrity, embedded programming language
[3]: Like, Ada's entire design is basically the U.S. DoD making a list of requirements that they considered mandatory for a high-integrity, embedded programming language[2], and then paying government contractors to design a language meeting those requirements, without caring whether it was “fun” or “breaking new ground” or had “mathematical purity”. Not that that's necessarily a good design philosophy, it just makes the language very different from many; and it's what reminds me of COBOL, which was made similarly: businesses designing a language to their requirements without regard for any of the things academic computer scientists of the time cared about.
First off, for context, here's a very simplified summary of what I'm talking about, since it is a rare feature: dependent types are just “types that depend on some concrete value”. So, for example, a structure with an array field whose length depends on another field in the struct—one of the primary applications of Ada's dependent types[4].
I also need to touch on one of Ada's design philosophies, which is to avoid dynamic allocation unless actually necessary, for safety reasons, since it doesn't have a borrow checker for fully-safe allocation like Rust does[5]. The key problem is that when you use one single call stack to store both return addresses and local variables, a called function can't return the contents of one of its stack-allocated variables “upwards”, because the callee needs to be able to pop off the return address and have the stack pointer be precisely where it was before it got called. Instead, any larger-than-register return value has to have its stack slots allocated by the caller, with a size known to the caller. Ada solves this by having a second non-call stack[8] that functions can allocate compile-time-unknown amounts of space on and return that upward, without needing dynamic allocation and while preserving the nice scoping and free lexical deallocation that a stack provides. The only time you really need dynamic allocation in Ada is when you're interworking with C/C++/Fortran/COBOL[9], or if you have an aggregate type that contains an object of unknown length, e.g. an array of strings, where you have to dynamically allocate all the strings and then the array of pointers can be handled like normal (Ada has a rich container library, so it's rare you need to think about it at all). Allowing use of the second stack rather than dynamic allocation (i.e. avoiding needing a pointer elsewhere in order to encapsulate an undefined-length object within another object) is a key motivator for one style of Ada's dependent types.
=> Here is an addendum covering dynamic allocation and the secondary stack in much more detail.
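To make that concrete, here's a minimal sketch (the function name is mine) of returning an indefinite type; with GNAT, the result travels back to the caller on the secondary stack, with no heap allocation involved:

function Greeting (Name : String) return String is
begin
   -- The result's length depends on Name and is unknown at compile time,
   -- yet no dynamic allocation is needed to return it.
   return "Hello, " & Name & "!";
end Greeting;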
And now an overview of Ada's type system, because I feel their developing dependent types mostly independently is really just a natural extension of that type system. If you're familiar with Ada already, I'm just going over basic derived types and subtypes, discriminated records, and type predicates; and tangentially things like indefinite types and how they work with subprograms and being nested in other aggregate types.
Firstly, Ada's type system was ahead of its time in 1983, in the sense that it's focused on modeling (and enforcing) the intent of the type, and letting the compiler do the work of mapping your intent to underlying machine types[10]. So, for instance, while there is an Integer type that corresponds to some efficient machine integer like C's int, unless you need a generic integer you'd typically prefer to do something like this (this is probably not an ideal way to represent a social security number, it's just an example):
type Social_Security_Number is range 0 .. 999_99_9999;
This is a completely new numeric type, but you could also make it a derived type of Integer to allow well-defined type conversions (although the conversions would require an explicit typecast, since the two are distinct but compatible types; to avoid needing explicit casts you'd use the subtype keyword instead of type).
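For illustration, here's a sketch of both spellings (type names mine):

type SSN_Distinct is new Integer range 0 .. 999_99_9999;
-- Distinct type: mixing with Integer requires an explicit conversion,
-- e.g. Integer (My_SSN)

subtype SSN_Compatible is Integer range 0 .. 999_99_9999;
-- Subtype: freely mixes with Integer, but assignments are still range-checked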
As an extension of that, arrays allow using arbitrary integer types as their indices. They actually don't need to start at 0 or 1 or anything, they can use any bounds that make the most sense for your application (with the main caveat being that you have to use Arr'First & Arr'Last or Arr'Range, rather than being able to just go from 0 to Arr'Length - 1). So, for instance, here's an array type:
type My_Integer is range -20 .. 20;
type My_Array_Constrained is array (My_Integer) of Whatever_Type;
Or, more concisely if the integer type isn't needed anywhere outside of the array type:
type My_Array_Constrained is array (-20 .. 20) of Whatever_Type;
But that's fairly limiting, because instances of My_Array_Constrained must always be exactly 41 elements long, with bounds from -20 to 20. So you can make an array type whose bounds are determined at instantiation instead of at the type declaration:
type My_Integer is range -20 .. 20;
type My_Array is array (My_Integer range <>) of Whatever_Type;
To use that type, you could make an instance like this:
Arr_1 : My_Array(-15 .. -5);
Arr_2 : My_Array := (11 => 0, 12 => 1, 13 => 2, 14 => 3); -- implicitly determined length and bounds
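As a quick sketch of the attribute style (assuming Whatever_Type is an integer type, so the aggregate above makes sense):

for I in Arr_2'Range loop
   -- Iterates 11 .. 14; no 0-based assumptions anywhere
   Arr_2 (I) := Arr_2 (I) * 2;
end loop;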
It should be noted that while these instances are descendants of `My_Array`, they are still distinct types that are implicitly created on the fly, and are unnamed. You could declare a new named type instead if you wanted:
type My_Array_Constrained is new My_Array(-15 .. -5);
Arr_1 : My_Array_Constrained;
Both declarations of `Arr_1` are nearly equivalent, except the second one has a named type and can be easily assigned to another instance of `My_Array_Constrained` without explicitly copying slices of the same length. And now we technically already have dependent types, because arrays with indefinite bounds can be created based on a function parameter or another value unknown at compile time. For example (this is being excessively explicit about named types to prove it's true dependent typing):
type My_Integer is range -20 .. 20;
type My_Array is array (My_Integer range <>) of Integer;
subtype My_Length is My_Integer range 0 .. My_Integer'Last; -- Technically unnecessary
function Make_Array (Len : My_Length) return My_Array is
-- Declarative area, where all of the local variables and types and nested
-- functions and whatever would be declared
-- Make a subtype with exactly Len elements using the provided length.
-- Will raise an exception if the upper bound falls outside of the range
-- of My_Integer.
subtype Dynamically_Created_Array is My_Array (My_Integer'First .. My_Integer'First + Len - 1);
-- Create an instance of that subtype with all-zero contents.
Arr : Dynamically_Created_Array := (others => 0);
begin
-- Function body, where the procedural stuff is done. Presumably we
-- would do some processing here rather than just returning the array.
-- Returning Dynamically_Created_Array is safe because it's a compatible
-- subtype of My_Array.
return Arr;
end Make_Array;
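Calling it looks completely ordinary; the bounds of the result are determined by the runtime argument (a small usage sketch):

A : My_Array := Make_Array (10);
-- A'First = -20, A'Last = -11, length 10, contents all zero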
But of course there's more: what if you want to use `My_Array` inside of a record (Ada's term for a structure)? You could just have one of a fixed length, of course:
type My_Record_Constrained is record
Field : My_Array(-15 .. -5);
end record;
And note that Field's length can be computed at runtime if you need, just by using a function to compute one or both bounds instead of specifying literals. But all instances of `My_Record_Constrained` in the entire program would still have the same-sized field, which is only calculated once, at program startup[11]. It'd be really convenient to keep the flexibility to determine the size at instantiation time like we have with standalone arrays. So there's the concept of “discriminants”, which are like standard record fields but are treated specially and can be depended on by the types of the normal fields of the record. For instance:
type My_Record (Top : My_Integer) is record
Field : My_Array(My_Integer'First .. Top);
end record;
Rec : My_Record(-5);
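The discriminant then reads like any other field, and the array's bounds track it (these checks run when assertions are enabled):

pragma Assert (Rec.Top = -5);
pragma Assert (Rec.Field'Last = Rec.Top);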
That example could be extended with a second discriminant to allow defining both the start and end bounds, etc. And there, that's it, that's also a dependent type! There is an implicit unnamed type, recognized by the type system, that depends on a value at runtime. There are other things you can do besides using the discriminant as the length of an array: for example, you can have an embedded record whose discriminant depends on a discriminant of the outer record, and such. And there are also “variant records”, which are just tagged unions where the tag (any signed/unsigned integer or enumeration type) is declared in the same place as the discriminant is above.
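For the curious, a minimal sketch of what a variant record looks like (names mine):

type Shape_Kind is (Circle, Rectangle);
type Shape (Kind : Shape_Kind) is record
   case Kind is
      when Circle =>
         Radius : Float;
      when Rectangle =>
         Width, Height : Float;
   end case;
end record;
-- Reading S.Radius when S.Kind = Rectangle raises Constraint_Error:
-- the tag is actually enforced, unlike a bare C union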
But Ada 2012 introduced even more capable dependent types as part of its design by contract system[12][13]. Say you want to allow My_Record to specify both the lower and upper bounds as discriminants. Well, you can just do that; even though the instantiator could specify a lower bound that's greater than the upper bound, that's fine, because in Ada that's just how you make an array with a length of 0. But say, in a contrived example, that you always want at least one element. You could add a type predicate, which I think is somewhat analogous to what theorem provers would call “tactics”? You provide an expression acting on the value of the type, and it will be enforced as being true either at compile-time (for static predicates) or at runtime (for dynamic predicates). For example:
type My_Record (Top, Bottom : My_Integer) is record
   Field : My_Array (Bottom .. Top);
end record
   with Dynamic_Predicate => My_Record.Bottom <= My_Record.Top;
Or you could not use discriminants at all:
type Pair is record
   A, B : Integer;
end record
   with Dynamic_Predicate => Pair.B > Pair.A;
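The predicate is then checked whenever a value is created or assigned (with assertions enabled):

P : Pair := (A => 1, B => 2);  -- fine
-- P := (A => 2, B => 1);      -- would raise Assertion_Error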
Type predicates and invariants (and also function pre-/postconditions) are great for non-dependent-type uses too, but that's getting too off-topic for this already plodding post. Although it is neat to be able to do things like:
type Even is new Integer with Dynamic_Predicate => Even mod 2 = 0;
type Day_Of_Week is (Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday);
subtype Weekday is Day_Of_Week range Monday .. Friday; -- Filthy Americans creating this problem for themselves
subtype Weekend is Day_Of_Week with Static_Predicate => Weekend in Saturday | Sunday;
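And a subtype with a static predicate remains a first-class subtype; for instance, this sketch (assuming Ada.Text_IO is visible) only visits the values satisfying the predicate:

for D in Weekend loop
   Put_Line (Day_Of_Week'Image (D));  -- Sunday, then Saturday
end loop;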
[4]: And a natural extension of C's flexible array members, if their length were enforced to match the field in the struct storing it.
[5]: SPARK, the formally verified subset of Ada, does have a borrow checker though!
=> [6]: Comments in the GCC GNAT source code | [7]: documentation on configuring the second stack for embedded targets
[8]: Comments in the GCC GNAT source code[6] and some documentation on configuring the second stack for embedded targets[7] are the only real documentation on how the second stack works in detail that I know of. Probably because, while I consider it a brilliant solution, the secondary stack is actually an implementation detail and the standard just mandates “you have to be able to return indefinite types”. An implementation could in theory use dynamic allocation, inserting malloc and free calls appropriately in the caller and callee code.
[9]: The Ada specification has official, albeit optional-to-implement, APIs for interfacing with all of those languages lol
[10]: Not relevant to the point I'm making in this post, but if in a given application you do care about the machine types, there is a whole featureset of “representation clauses” that allow you to very precisely state the underlying size and layout of a primitive type, array, or structure/record. The compiler will generate the required bit-shifts and masks for you when reading or writing fields of an object of that type, and the specified representation is always how it's stored in memory and doesn't need a separate (de)serialization step, making it convenient for memory-mapped I/O (since Ada also lets you specify the precise address of a variable without needing pointers or poking the linker directly).
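As a flavor of what that looks like, a small sketch (a hypothetical 8-bit status register; names mine):

type Status_Register is record
   Ready : Boolean;
   Error : Boolean;
   Count : Integer range 0 .. 63;
end record;
for Status_Register use record
   Ready at 0 range 0 .. 0;  -- bit 0
   Error at 0 range 1 .. 1;  -- bit 1
   Count at 0 range 2 .. 7;  -- bits 2-7
end record;
for Status_Register'Size use 8;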
These features, amongst a few others similarly intended for easy low-level programming, are actually the main reason I personally use Ada, since their convenience and simplicity are unmatched in any other programming language in existence AFAIK.
[11]: To be precise: it is calculated when the “elaboration code” for the package containing that type is run, which is typically, but not always, at program startup or when dlopen()ed; it may happen at any point prior to the containing package or any dependent package being used. The precise point and order in which elaboration is performed is determined by the “binder”, and may be delayed past program startup if main() is overridden with a routine in some other language rather than the entrypoint emitted by the binder.
=> [12]: design by contract system
[13]: The other main thing that I love about the language
Something I find very interesting is that there's a strange interplay between a very get-things-done-oriented language like SPARK (the formally verified subset of Ada), which wants to verify the correctness of real programs, and functional programming theory, which provides ways to reason about the correctness of programs. And yet neither functional programming languages themselves nor the theory applied directly are suited for what SPARK does, because they prefer purity and usefulness for developing the theory further over things like allowing side effects in a way that doesn't confuse imperative programmers (coughmonadscough), or compiled code efficient enough for low-power embedded targets.
Note that I'm not implying that functional programming languages can't “get things done”; they just tend to be focused on different goals than Ada and SPARK are. Because if you're interested in the theory, why would you care about the small set of high-integrity embedded developers over being able to express what you need in the most natural way possible? And similarly, SPARK had concrete requirements for a specific set of tasks, different from what academics want, and it just happened that theorem provers and FP theory could be applied to those problems to great effect.
I feel somewhat like this about Ada as a whole. Really, lots of its type system stumbles across interesting concepts in type theory, but just from the designers reaching for some goal of correctness or flexibility rather than from actual type theory. Although I haven't had more than a casual IRC chat with any of the standardizers, so maybe they did actually get their ideas directly from type theory and just fit them into a non-academic language, rather than stumbling across the concepts like I hypothesize here.
⁂
=> also available on the web | and available on gopher
=> contact via email: alex [at] nytpu.com | or through anywhere else I'm at
=> backlinks
=> -- Copyright © 2024 nytpu - CC BY-SA 4.0