Try F# site: A top-notch programming resource

Microsoft Research's Try F# tutorial site is the best resource Justin James has tried for learning F#. Read about the site's highlights, as well as its weak spots.

Over the years, I keep coming back to F# as a neat language for certain situations. A few weeks ago, Microsoft Research announced the beta of the Try F# site, and I decided to check it out. Not only is Try F# an excellent way to learn a little about F#, it also sets an excellent standard for programming tutorials.


It's quite easy to use the Try F# tutorials. Once you are in the tutorials, I suggest you click the Index icon in the top-right corner to keep the table of contents visible at all times. The format of the lessons is fantastic: You select a topic, it shows you text and sample code, and to the right there is a live, interactive F# interpreter. To make it even easier, you can click Copy And Run (sometimes labeled Load And Run) on the sample code to have it sent over to the interpreter and executed. I really liked how this format bridged the gap between static code samples and live experimentation, without the tedium of retyping code or having to grab it from a book publisher's website.

The tutorial text is very good as well. It lays the F# language out in plain language, highlighting along the way the key conceptual differences between F# and languages such as C# and Java. This is important for any F# tutorial, because it is easy for developers who are familiar with other languages to get lost on fundamental principles, such as the "let" statement binding a name to a value or function instead of assigning to a variable.
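To illustrate the point about "let" (this snippet is my own sketch, not taken from the tutorial): a binding in F# is immutable by default, so "let" introduces a name for a value or a function rather than a reassignable variable.

```fsharp
// "let" binds a name to a value or to a function; it is not mutable assignment.
let x = 2                 // x is bound to 2, immutably
let square n = n * n      // square is bound to a function
// x <- 3                 // compile error: x is not mutable
printfn "%d" (square x)   // prints 4
```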

F#'s three paradigms -- functional, imperative, and object-oriented -- are all briefly covered. As expected for a functional language, recursion is touched on. The tutorial also covers some advanced topics such as parallelism and asynchronous programming.
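As a taste of the recursion material (again, my own minimal example rather than the tutorial's), note that F# requires the "rec" keyword for a function to call itself:

```fsharp
// A recursive function must be marked "rec" in F#.
let rec factorial n =
    if n <= 1 then 1
    else n * factorial (n - 1)

printfn "%d" (factorial 5)   // prints 120
```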

Weak points

One glaring omission is .NET interop. While it is not a fun topic and can certainly scare away a newcomer, I think it is going to be on the minds of people who want to learn F#. After all, if you don't use .NET interop, does it really matter if it is a .NET language?
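For readers wondering what interop even looks like, here is a hedged sketch (not from the tutorial): F# can call any .NET assembly directly, so using the Base Class Library is no different from any other F# code.

```fsharp
// F# calls .NET libraries directly; here, file I/O from the Base Class Library.
open System.IO

let tmp = Path.GetTempFileName ()
File.WriteAllText (tmp, "hello from F#")
printfn "%s" (File.ReadAllText tmp)   // prints "hello from F#"
File.Delete tmp
```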

Another weak spot is the lack of information about how to use F# more effectively than other languages, or about which changes to your programming style yield the most benefit from F#. Again, this may be a bit too advanced for a tutorial like this, but it is exactly the kind of thing that matters to people who are taking a look at F# and want to know what it can do and why they should be interested in it.


If you fit the target audience for F# (i.e., people working on algorithms, primarily), the Try F# site is a great way to see if the language is for you. Out of all of the F# resources I have sampled over the years, this is the best I have seen. It only takes an hour or two to check out (depending on your previous background in functional languages and reading speed), and it is broken up into easily digested chunks, perfect for perusing on a lunch break or during other brief moments of downtime.




Justin James is the Lead Architect for Conigent.


F# features the imperative model (meaning: "run this code now, even if I don't care about the result"). That is not a good choice. Only the functional model is sound (meaning: "I need this output; execute whatever is required to produce it"). Truly imperative conditions are in fact extremely rare. What you actually need from a program is its final output, and that output depends only on the preconditions that must be met to produce it. The preconditions specified in a program need not be evaluated immediately, only when the output participant must run: when you launch a program, only one rendez-vous is initially a candidate for execution, the end of the program, once it has produced that output. All other execution points are delayed; they may be ready to run, but they will run only when they participate in a rendez-vous whose single output has been made runnable.

In a normal program (or algorithm) you do not need every step of a sequence to have been executed. Before ordering the imperative execution of a program, its rendez-vous is inactive; it only needs type inference to decide whether any of its input branches must be activated. Actual execution therefore first consists of preparing the execution context at the entry point of the sequence, while the sequence itself remains unexecuted, allowing that context to be transformed before it is run by imperatively demanding the sequence's output. Most elements of a program are not a sequence: unless one of them has a precondition depending on the result of another, they are all candidates to run concurrently, in any order.
This means that all these concurrent execution paths are not only left unexecuted when the rendez-vous is first entered (but not yet exited through its output path); they are also separate members of a single rendez-vous object, which activates its input branches only once its output branch has been activated. Activating branches, and propagating that activation in reverse execution order (or in parallel up multiple branches), is the job of the run-time execution machine (i.e. its compiler and optimizer). This machine decides for itself which resources to use to activate as many branches as possible, creating parallel execution contexts where possible (according to performance monitoring and resource constraints such as available memory) or sequencing them in arbitrary order, until every input branch has been activated and has met the rendez-vous. Programs should never need to specify an imperative execution path. In almost all cases this should be specified only by the set of preconditions, i.e. by the rendez-vous, which is crossed in two steps: on entry, to prepare the strongly typed execution context of each input branch (allowing type inference at this step), and on exit, once type inference has ordered all input branches to run because every input precondition was satisfied and the output branch was activated. I/O is an example: it can be modeled as a client/server relationship. A server waits for input, so it implements an event listener whose precondition is "an event has been received." But to receive that event, the server must activate the input branches of its rendez-vous (in any order, or concurrently; it does not matter). One of those branches is the output branch of a client (local or remote), which remains inactive until the server activates it. So it is the rendez-vous that tells the client its output is desired.
For asynchronous client events, the server essentially wraps its rendez-vous in a loop: many clients may be connected to it, and under this definition they can (and will) run in parallel, asynchronously, because they will all be activated. When the server exits a looping rendez-vous to return a result on an output branch, it does not actually leave the loop; it stays in the rendez-vous, and clients remain connected to it until their output has been processed by one of the parallel output branches. In short, even an output order is not imperative: output completion is simply one of the preconditions needed to terminate an input operation. You can see here the concept of "I/O pipes." If you need ordered output, you create a sequence, i.e. a chained suite of preconditions (output-completion events); such a chain has a single entry point and a single exit point, and only the exit point needs to be activated, since the activation event cascades upward from the exit point to the entry point through the chain of precondition rendez-vous. Everything in this model is correctly ordered by dependency, everything is parallelizable at each rendez-vous level, and sequences are the result of chaining preconditions. No programmer should need to specify an imperative execution model except in terms of sequences (basically, only to perform ordered output from arbitrarily ordered, parallelizable inputs). Serializing inputs is also possible with preconditions, by chaining the inputs into sequences. But like serialized outputs, serialized inputs are not all activated at once: activation starts from the final rendez-vous of the chain and propagates upward.
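For what it's worth, the event-listener-in-a-loop pattern described here is close to something F# already ships: the MailboxProcessor agent, where many clients post messages asynchronously and the server loops, replying to each. A minimal sketch (my own, not from the article):

```fsharp
// A server looping on its "rendez-vous": clients post messages
// asynchronously; the agent processes them one at a time and replies.
let server =
    MailboxProcessor<string * AsyncReplyChannel<string>>.Start (fun inbox ->
        let rec loop () = async {
            let! (msg, reply) = inbox.Receive ()   // wait for the next event
            reply.Reply ("echo: " + msg)           // output completion
            return! loop ()                        // stay in the loop
        }
        loop ())

let answer = server.PostAndReply (fun ch -> ("hi", ch))
printfn "%s" answer   // prints "echo: hi"
```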
Input, too, can be performed in a loop: a single input branch feeds the rendez-vous, and the output is looped back as another input of the same rendez-vous, whose precondition says that all input branches must be in the ready-to-run state. Exiting the rendez-vous by consuming one input value deactivates the initial branch, but the loop-back reactivates it to process the next input value. In other words, an input precondition should include in the rendez-vous the completion status of an output, and the reverse is true for output. Input and output rendez-vous are not different; both can be chained to perform serialization or to implement client/server dialogs. There are three kinds of rendez-vous (in fact only two): the sequence (one input branch and one output branch, the latter activated first); the loop (whose output branch is connected by some execution path back as an input branch of the same rendez-vous); and the classical rendez-vous with multiple input branches, which is just the sequence generalized to several inputs. In a pure functional program the parallel execution model is the default: it implies generating a single rendez-vous that is crossed only once all input branches have been activated (they can all be activated at the same time, then wait for their concurrent completion events). You should not need any separator in the programming language other than a simple space between items, each representing one input of the rendez-vous. You then activate the output branch of the rendez-vous, which means type inference starts there, based on the execution context of each (parallel) input branch. This is the step where a compiler can optimize things and eliminate input branches that are always complete without needing any execution (type inference can be performed once if the type has already been seen in such a context).
The compiler decides for itself how many input branches can be activated at once, in arbitrary order according to various constraints, creating parallel threads or launching remote execution where available and appropriate for the monitored performance (allowing scalability), or running them in sequence when parallelism is not possible. A program may still assign relative priorities to these parallel branches, but it should not block them. To insert a rendez-vous between parallel inputs (which are just space-separated), the simplest notation is a semicolon: 'a b' is a set of two parallel execution branches; it has no rendez-vous and cannot run. 'a b;' activates a rendez-vous at the end of the execution paths of the parallel branches a and b. At that point type inference occurs, and if the output of 'a' is a function, its code is needed and the lazy value of 'b' will be passed to it. Both branches are made runnable but are still not activated. The compiler may decide which of a or b to run first, or run them concurrently. Per the type-inference results, 'a' is a function whose lazily evaluated code must first be evaluated to yield an instance to which the lazy value of its first parameter 'b' is passed; that code will force the evaluation of 'b' only when it actually needs its value. Type inference then determines that the result is another parameterless function whose code returns a value of some type. But the ';' rendez-vous instructs that function to return its value by activating it on the input branch of the rendez-vous, which causes the two branches to be activated in parallel, and then the code of 'a' to act on 'b' when needed by activating it.
Once both input branches of the rendez-vous have been activated, it can be exited: the result is the function that operates according to the definition of 'a' and, if it needs it, the value of 'b'. 'a; b;' is the typical sequence, here with two chained rendez-vous: both branches a and b are inactive, and only the second rendez-vous is activated at the start. The sequence creates an execution context that is the result of this chaining of rendez-vous; in other words, this basic language creates a graph of rendez-vous. Extra syntax can create the other type of rendez-vous, the loop: 'a; {b}; c' has a single entry, a precondition on the output of 'a' connected as an input of the rendez-vous, the output of 'b' as another input of the same rendez-vous, and 'c' representing code that runs in sequence after the loop. 'a', '{b}', and 'c' are all separate, concurrent execution paths; they would all be active at the same time if there were no ';' to chain them into a sequence. The first ';' is a basic sequence; the second, after the '}', is a loop rendez-vous chained to a sequence. But a purely functional program will never need the sequence in a "timely" fashion: it is type inference and the chaining of rendez-vous that orders execution imperatively. Solution: drop the ';' entirely; it is not needed. Instead, if you need ordering, write the code for 'c' in 'a {b} c' so that it includes a rendez-vous depending on some property (completion status, inferred type, return value...) of another object of the program, here 'b' and/or 'c'. Typically, the code of b enumerates the content of some collection (including a basic range of integers) to create one parallel instance of some code per enumerated value; these parallel instances also run in parallel with 'a' and 'c'. If you need ordering, use explicit preconditions to chain the rendez-vous. Now, how do you write preconditions?
Just like traditional if statements: '(a ? b : c)', i.e. if the precondition from a is met then b is activable, otherwise c is. A rendez-vous is created on the outputs of a, b, and c; all three are activable, but only the rendez-vous input from 'a' is activated when the output of the rendez-vous occurs, forcing 'a' to be evaluated to obtain its default Boolean property. Depending on that value, the '?' operator activates b or c and immediately completes the rendez-vous for the other branch without executing it. Because 'a', and either 'b' or 'c', is now complete, the only remaining active branch is the code of 'b' or 'c', depending on the Boolean property of 'a'. Actually the ':' here is an operator/function that creates a function which activates either 'b' or 'c', and the '?' is an operator/function that operates on 'a' and the function 'b:c'. This could equally be written '(? a : b c)', more like a switch; the parentheses merely limit the scope of the operands for type inference. The ';' is just a shortcut for another rendez-vous: 'a ; b;' is the same as '(? (a).complete b);' in reverse Polish notation, or '((a).complete ? b);' in the more common infix notation, which could be inferred from type inference as well. The 'complete' property returns the completion event, which is always true once the activation of 'a' has carried its execution to completion status, reaching the first input of the rendez-vous; the second rendez-vous is built from the completion of b, which is activated but not runnable, because it is connected to the output of the first rendez-vous and depends on that rendez-vous being met.
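Incidentally, the activate-on-demand behavior sketched in this comment is approximated in today's F# by explicit laziness: a lazy value is not evaluated until something forces it, and the result is then cached. A minimal sketch of my own:

```fsharp
// "lazy" defers evaluation until the value is forced, roughly the
// activation-on-demand behavior the comment describes.
let expensive = lazy (printfn "evaluating..."; 6 * 7)
printfn "before force"             // nothing has been evaluated yet
printfn "%d" (expensive.Force ())  // prints "evaluating..." then 42
printfn "%d" (expensive.Force ())  // cached: prints 42 without re-evaluating
```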


The concept of postconditions (i.e. invariants) is notoriously difficult to program: this is why many postconditions are simply never written. It is much easier to reason about any algorithm using only preconditions. Preconditions are natively implemented by rendez-vous (used as a synchronization mechanism for parallel execution and multithreading); a rendez-vous implies a mutex, but only between the few participant objects specified in the preconditions. Ideally, any pure functional language should be based entirely on preconditions. Preconditions provide, for example, all that is needed to support strong type safety (the type is given by the resulting condition tested when the rendez-vous is met by all participants, so type inference can split an execution path later, because the rendez-vous has generated a fully updated execution context). Type inference can also be extremely efficient, because decisions based on the content of the execution context are likewise delayed until a precondition for one of the possible execution paths is met. Multiple execution paths are also possible from a single starting point: this yields parallel execution, automatic multithreading, and possibly transparent deployment across remote execution systems in a computing grid (including client/server collaboration over who actually runs the code -- which opens many possibilities for scalability). Let's drop postconditions (or invariants, which are equivalent) completely; they are not needed. We just need preconditions. The first pure functional language based entirely on preconditions (and only them) will win. It is the dream of mathematicians, theoreticians, and algorithmists, and later the dream of Software Quality Assurance teams. It will also make code reuse much more versatile.


All functional languages have the advantage that they can be naturally parallelized (this is not true of imperative languages like Basic, even VB.NET). Pure functional languages include the good old-fashioned Lisp (even if people hate its parentheses) and the excellent Eiffel. These languages (based on the very strong theory of the lambda calculus) are also excellent candidates as scripting languages, combining the benefits of compilers (for performance) and interpreters, plus native support for reflection, preservation of the semantics of all "values" (i.e. they are strongly typed), and versatile reuse of existing algorithms across many kinds of "data" or execution contexts, allowing true generalization (and true object-oriented type inference and complete polymorphism, without the cost implied by arbitrary execution-scheduling choices hardwired into the source code). These languages have a natural mechanism for eliminating dead code, and they allow far more optimization when performance is sought. They also greatly simplify parallel and multithreaded programming (fewer mutexes or critical sections are needed, which also means better performance; where they do occur, synchronization is the runtime's job, and it can create finer-grained mutexes or critical sections at the lowest machine level to interface the software with the various hardware executing it). Let's forget imperative languages. F# will fail, while Python and Java will keep winning. So will JavaScript, which is not really imperative and now has a functional definition with natural reflection capabilities, except that it still lacks a system for asserting postconditions/invariants (though it has at least a strong system for preconditions): all the impressive performance gains in JavaScript today are tied to its functional definition, and many more are still possible in JavaScript engines.
The programmers Microsoft wants to interest in F# are Python programmers and JavaScript programmers, but I doubt that will be enough to convince them: Microsoft's economic model does not favor languages that cannot be precompiled, because Microsoft sells compilers and optimizers, and does not want to invest further in runtime execution machines that provide compiler and reflection capabilities directly when used as if they were interpreters. By selling compilers and development platforms, Microsoft promotes its own APIs, its own OS, its own view of how to develop and deploy any application, and its online application distribution system (e.g. the Windows 8 application store)...


Why invent a new language that requires the use of the "let" statement? OK, there is also "let!" for asynchronous assignments (i.e. when passed to another function as a parameter, the actual value is not computed immediately but lazily, when it is needed and available, blocking only at the point where the value is really required). That may merit a separate statement, but even it could be abbreviated:

let x = 2
let! y = x * x

would probably become simply:

x = 2
!y = x * x

But then why not simply make all assignments lazy and asynchronous, as in a pure functional language? This could have been simpler still:

x = 2
y = x * x

Microsoft won't convince Python programmers to use F# (it is perfectly possible to make Python purely functional with lazy evaluation). The distinction F# makes between the two assignments using the ugly "let" (and "let!") statement, long since abandoned even in Basic (even in VB.NET), is just a local optimization that defeats the pure functional definition of the language. In other words, F# is not a functional language, since it enforces an execution schedule through its syntax instead of inferring it from the execution context and other constraints.
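For reference, here is what the "let" / "let!" pairing actually looks like in F# (a minimal sketch of my own; "let!" is only legal inside a computation expression such as async, where it awaits the inner computation):

```fsharp
// "let" is an ordinary, eager binding; "let!" awaits an async computation
// inside an async { } block.
let x = 2

let computation = async {
    let! y = async { return x * x }   // awaits the inner async; y = 4
    return y + 1
}

printfn "%d" (Async.RunSynchronously computation)   // prints 5
```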
This keeps algorithms hard to prove correct and unsuitable for strong mathematical definitions (unlike Java, which can easily be parallelized with lazy evaluation as well, thanks to its strict constraints on evaluation preconditions and its strict postconditions that enforce invariant rules). The local optimization and scheduling of instructions (when to evaluate the lazily assigned expressions as constraints are satisfied) is the job of the run-time compiler, which can take advantage of the actual capabilities of the host on which the application runs, with more or fewer available processors (possibly including remote ones), or of step-by-step debuggers, which may force earlier evaluation for inspection purposes when all other dependency constraints are satisfied.


The let statement just binds a name to a value. It's also used in ML, OCaml, and Haskell, and serves the same purpose. It shouldn't be confused with how it was used in VB. As for F# being a functional language, it certainly is, in the same way ML and OCaml are. These languages are strict and impure, being immutable by default but allowing mutability on an as-needed basis. Haskell is an example of a pure and lazy functional language. The main advantage of strict, impure functional languages like F# is their practicality and the ease with which they can interop with existing libraries. F# is a functional-first, multi-paradigm language supporting functional, imperative, and object-oriented programming. After reading your post I'm reminded of a line from Adam Sandler's "Billy Madison" from the principal: "Mr. Madison, what you've just said is one of the most insanely idiotic things I have ever heard. At no point in your rambling, incoherent response were you even close to anything that could be considered a rational thought. Everyone in this room is now dumber for having listened to it. I award you no points, and may God have mercy on your soul."
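The "immutable by default, mutable as needed" point can be seen in two lines of F# (my own sketch):

```fsharp
// Bindings are immutable unless explicitly marked "mutable".
let a = 1            // immutable binding, the default
// a <- 2            // compile error: a is not mutable
let mutable b = 1    // mutability is opt-in
b <- b + 1
printfn "%d %d" a b  // prints "1 2"
```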
