Let's introduce some notation for talking about functions. Since a computer program has to be thought of separately from the mathematical or real-life objects it represents, we must name these program procedures (they are, for the most part, functions) and call them methods. These methods are functional in the sense that their definition and evaluation cause no side effects. Let the identity brane (I-Brane) be the association of types with a set of formal parameter names:
{ "first parameter" : Integer, "second parameter" : Boolean }
The actual parameters, also known as the bindings for these parameters, are typed using $P_f$: the set of possible actual parameters of a method $f$ is a dependently typed I-Brane whose instances look like:
{ "first parameter" : 1, "second parameter" : False }
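To make the shapes concrete, here is a minimal Python sketch of these two objects: a brane as a mapping from parameter names to types, and an actual parameter checked against it. The names (`brane`, `conforms`) are illustrative, not part of the notation above.

```python
# Hypothetical model of an I-Brane: a mapping from formal parameter
# names to the types their bindings must instantiate.
brane = {"first parameter": int, "second parameter": bool}

def conforms(binding, brane):
    """True when an actual parameter (a binding) instantiates the brane:
    the same names, each bound to a value of the declared type."""
    return (binding.keys() == brane.keys()
            and all(isinstance(binding[name], t) for name, t in brane.items()))

# The actual parameter from the example above.
actual = {"first parameter": 1, "second parameter": False}
```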
For convenience, we endow $P_f$ and its members with mutators: for a parameter set we can write an addition to mean adding a parameter to the formal parameter specification, and a subtraction to mean removing a formal parameter. Similarly, removing a parameter $x$ from an actual parameter yields a value whose type belongs to the correspondingly reduced set of possible parameters, and which can be used in the full invocation of any method typed on that reduced set. Invocation on underspecified parameters automatically curries: applying a method to an underspecified parameter (a collection of parameters missing $x$) yields a method awaiting the binding for $x$.
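A sketch of the mutators and of automatic currying, again with illustrative names (`add_param`, `drop_param`, `curry`) and with bindings modeled as plain dicts:

```python
def add_param(binding, name, value):
    """Adding a parameter: extend a binding with a new named entry."""
    out = dict(binding)
    out[name] = value
    return out

def drop_param(binding, name):
    """Removing a parameter: a binding missing one of its entries."""
    out = dict(binding)
    del out[name]
    return out

def curry(f, partial_binding):
    """Invocation on an underspecified binding: return a method that
    waits for the missing parameters, then invokes f on the union."""
    return lambda rest: f({**partial_binding, **rest})
```

For example, if `f = lambda p: p["a"] + p["b"]`, then `curry(f, {"a": 1})` is a method awaiting `"b"`.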
Finally, we give the method differential a type signature. For a method $f$ and a formal parameter $x$ of interest, the differential of $f$ with respect to $x$ takes an actual parameter in $P_f$ together with a change in $x$ and produces the corresponding change in the output of $f$. Note that this differential should mostly be defined for methods that have $x$ as a formal parameter. To evaluate it, we assert that at any parameter in $P_f$, the application of the differential to a change in the parameter $x$ (from $x_0$ to $x_1$) results in the proper change in the output of $f$ itself.
Analogously, where calculus multiplies the derivative by the change in $x$, we have converted that multiplication into a method and evaluated it at the change to produce the change in output. This conversion is quite natural for computer programs. Since there is no universal way to properly write a difference such as $x_1 - x_0$ for all possible types, as we can in mathematical language, the solution is to encode the change itself in the form of a method.
Technically, if we encode changes as methods, the full-blown differential has a type of roughly this shape (writing $T_x$ for the type of the parameter $x$ and $O_f$ for the output type of $f$):

$$D_x f : P_f \to (T_x \to T_x) \to (O_f \to O_f)$$

And the equivalent expression of its meaning, assuming $p \in P_f$ and a change $dx : T_x \to T_x$, will be:

$$D_x f(p)(dx)\big(f(p)\big) = f(p'), \quad \text{where } p' \text{ is } p \text{ with } x \text{ replaced by } dx(x)$$
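As a sketch of this change-as-method differential, the following naive Python `differential` (an illustrative name, and a deliberately simple implementation) satisfies the defining property of carrying the old output to the output at the changed parameter:

```python
def differential(f, x):
    """Sketch of the differential of f in the parameter named x: given an
    actual parameter p and a change method dx on x, return a change
    method on the output.  Naive: it recomputes f at the changed
    parameter rather than transforming the old output."""
    def D(p, dx):
        def output_change(old_output):
            # Defined at f(p): carries f(p) to f of the changed parameter.
            changed = dict(p)
            changed[x] = dx(changed[x])
            return f(changed)
        return output_change
    return D
```

The defining property then reads `differential(f, "x")(p, dx)(f(p)) == f(p with x replaced by dx(x))`.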
Finally, we introduce a factored version of the program differential, requiring that it agree with the method-encoded form above. We may use any of these three forms as each becomes more convenient.
Constant and Order of Differential
An interesting idea to explore based on this differential is the order of dependence of a method on a parameter. If a method does not depend on a variable, its differential at every parameter is the identity function on the output: whatever the change in the variable, the output does not move. This is the zeroth-order case. But there are also first-order methods, whose differential is constant: the same at every parameter, though not the identity. In this case we can find an equivalent method typed as:
(*)
where the factored-out component is a method that does not depend on any input variables. And certainly there is something here resembling $ax + b$, with constant $a$.
Our encoding of changes as methods allows for arbitrarily complex changes in output value even when the input parameter changes are not small. Methods with constant differentials do more than constant functions, but are less dependent on their inputs than methods with non-constant differentials. We are therefore inspired to qualify, or even quantify, the complexity of the dependence a method has on its parameter: it is the complexity of the differential function.
(*) Here, the equivalence symbol means "essentially, or for all intents and purposes, the same." This seems like an important idea to formalize, perhaps in a next step of this effort.
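The zeroth- and first-order cases can be illustrated with the naive differential sketch from before (all names illustrative): a method that ignores `"x"` has an identity differential, and a method linear in `"x"` produces the same output change at every parameter.

```python
def differential(f, x):
    """Program differential, sketched naively: recompute f at the
    changed parameter."""
    def D(p, dx):
        def output_change(old_output):
            changed = dict(p)
            changed[x] = dx(changed[x])
            return f(changed)
        return output_change
    return D

# Zeroth order: this method ignores "x", so its differential in "x" is
# the identity change: the output never moves, whatever dx does.
const_f = lambda p: 42

# First order: this method is linear in "x", so the output change
# produced by a given dx is the same at every parameter p.
lin_f = lambda p: 3 * p["x"] + p["y"]
```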
Chain Rule
Relatedly, the simple treatment of composition and curried methods is to uncurry them into an essentially equivalent method before computing the differential. The actual implementation of that differential can be programmed using the chain rule. Consider a composition of methods in which the output of one method is bound to a parameter of the next, and suppose we would like to compute the differential of the composition with respect to a parameter of the inner method. After juggling the types and parameters a bit, one discovers that the differential can be written directly as a method:
This, then, is the chain rule for program differentials.
Todo: write the proof for this chain rule.
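Not a proof, but a sanity check one can run: a Python sketch of composition and of a chained differential, assuming the outer method reads the inner method's result through a single named parameter and does not read the differentiated parameter directly. All names (`compose`, `chain`) are illustrative.

```python
def differential(f, x):
    """Naive program differential, as sketched earlier."""
    def D(p, dx):
        def output_change(old_output):
            changed = dict(p)
            changed[x] = dx(changed[x])
            return f(changed)
        return output_change
    return D

def compose(g, y, f):
    """g after f: bind f's output to g's parameter named y; g's other
    parameters come from the same binding."""
    def h(p):
        q = dict(p)
        q[y] = f(p)
        return g(q)
    return h

def chain(g, y, f, x):
    """Chain rule sketch: the change method on f's output is itself a
    change method on g's input y, so feed it to g's differential."""
    def D(p, dx):
        q = dict(p)
        q[y] = f(p)
        df = differential(f, x)(p, dx)    # change method on f's output
        return differential(g, y)(q, df)  # lifted to a change on g's output
    return D
```

On an example where the outer method touches the differentiated parameter only through the composed value, the chained differential and the differential of the uncurried composition agree.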
The Limit
The reality of the matter is that the program differential is not quite the equivalent of partial differentiation over real functions. Inside the definition of the derivative there is a limit. If we could take the limit of programming objects, then we could arrive at a derivative as localized as the one we have for real functions. Instead of a limit, the program form of partial differentiation would ask for the behavior of the differential as the change method approaches the identity, meaning no change at all. But that is perhaps work for another entry, to iron out the details of program limits. We may yet achieve a unified world where mathematical differentiation is a sub-type of program differentiation.