I was writing a new language the other day and I thought, “this puppy needs a REPL”!
But before I could write one, I had to decide how it would look and behave. I mean, I knew the basics: take something in, execute it, then display the result. But how do you open the help? How do you handle multi-line input? Can I use terminal colors? What does the prompt look like?
To answer that last one, I took a quick survey of my favorite languages - turns out they’ve all coalesced to the > prompt, but there are some fun variations:
> 2 + 3;;
val it : int = 5
F# is my favorite language, but the REPL is a bit busy for me. First, the language dictates this weird crying emoji (;;) in the input, and the result is always encumbered by `val it :` noise. But I still <3 you F#.
csharp> 2 + 3
5
Much cleaner! No semicolons, and just the answer. Well C# does show its vanity a bit with its name announcement on each line - but heh, it deserves it.
>>> 2 + 3
5
Elegant and bold at the same time. The >>> means Python, but you don’t actually say “Python”. So hipster, so cool. I’m sure I could copy >>> as remixing is hip these days; but no, I’d be trying too hard.
irb(main):009:0> 2 + 3
=> 5
OK, I get what you’re going for here Ruby. Part of me even likes it. But no. Too much. I would expect this kind of complexity and technical jargon when logging into my refrigerator - but my dev environments should have a little more refinement.
On the plus side - Ruby outputs the answer in yellow. I’m totally stealing that.
Oh and geeze, they use the same symbol as Calca - so how could I not love that?
> 2 + 3
5
Look at you. Simple, reasonable, well thought out. It’s like staring at an oil on canvas painting containing only the lowercase Helvetica a.
While we have standardized on > as the one prompt to rule them all, there is a fair amount of diversity as to what comes before it.
I’m a fan of simplicity and in the end, I went with C#’s vanity> prompt. Cause, like, I’m vain.
Over the past six months I have been working on a new .NET IDE for the iPad, and today I am very pleased to release it on the App Store.

Continuous gives you the power of a traditional desktop .NET IDE - full C# 6 and F# 4 language support with semantic highlighting and code completion - while also featuring live code execution so you don’t have to wait around for code to compile and run. Continuous works completely offline so you get super fast compiles and your code is secure.
Continuous gives you access to all of .NET’s standard library, F#’s core library, all of Xamarin’s iOS bindings, and Xamarin.Forms. Access to all of these libraries means you won’t be constrained by Continuous - you can write code exactly as you’re used to.
I love the iPad but was still stuck having to lug around my laptop if I ever wanted to do “real work”. Real work, in my world, means programming. There are indeed other IDEs for the iPad: there is the powerful Pythonista app and the brilliant Codea app. But neither of those apps was able to help me in my job: writing iOS apps in C# and F#. I couldn’t use my favorite languages on my favorite device, and that unfortunately relegated my iPad to a plaything.
That realization produced this tweet last December:
I resolve to use my iPad Pro for software development in 2016.
— Frank A. Krueger (@praeclarum) January 1, 2016
Well it took me a bit of time, but I finally have it: a .NET IDE on the iPad (and phone too!).
But it’s not “just an IDE”. I didn’t want it to simply be sufficient - I wanted it to be great. I also thought it was a nice time to push the state of the art in .NET IDEs a tad.
For ages compiled languages like C# and F# have forced a sequential development loop on programmers: the Code-Compile-Run-Test loop. We code something up, wait for it to compile, then wait for it to deploy and run, then we get to test it.
I hate waiting for compilation and deployment so I designed Continuous to minimize those steps. It does this by eagerly compiling your code - never waiting for you to tell it when to start. It runs your code as soon as those compiles complete successfully and displays the results of that execution right next to your code. Now you can focus on the code and the results of that code instead of being distracted by all the silly machinery of a compiler and IDE.
The benefits of making compilation and execution fast have surprised me. My iPad has become my favorite place to write apps now.
I could argue that I’m a more efficient programmer thanks to these changes. Perhaps I am more productive. But the truth is, I’m just happier using Continuous. I play with GUIs more now, trying new ideas and tweaking things left and right. It’s quite liberating and plain old fun to get nearly instant feedback on your work.
I hope you find these features as exciting as I do. Please visit the website if you want more details on them, or throw caution to the wind and buy Continuous on the App Store now to see them first-hand.
Continuous wouldn’t be possible if it wasn’t for .NET’s great open source ecosystem. Continuous uses Roslyn for compiling C# and FSharp.Compiler.Service for compiling F#. Continuous also relies heavily on Cecil (what problem can’t be solved with Cecil?). And Xamarin.Forms could only be included thanks to Xamarin open sourcing it.
And of course, none of this would be possible without mono and Xamarin.
I wrote Continuous in F# using Xamarin Studio. The code is more functional than object oriented and uses a redux style architecture. I don’t think I could have built such a large app with its sophisticated requirements without F# at my side. Three years ago I wasn’t sure how to write GUI apps in a functional language, now I question why I haven’t always done things this way.
(Source: continuous.codes)

Today I’m pleased to release Calca 1.4 for iOS. This is an exciting release for me for two reasons.
First, this is the best version of Calca; it supports:
Second, I’m trying something a little crazy with the price - Calca for iOS is now free! This means more people than ever can try Calca and see how it can be used to solve their problems.
How can an independent app developer survive making a free app? Don’t worry, I didn’t fill it with ads. Except one small ad: a request for a donation to support the development of Calca. I’m hoping that users will find enough value in it that they will contribute some money towards its development. These contributions in no way add features to the app - they only remove the donation request. Your contribution enables me to continue working on that app, and I thank you for it.
The donations are tied to a time period. This provides a way for you to choose an amount that you’re comfortable with and also provides a way for you to donate multiple times if you’re feeling like an awesome superhero of a person. #justsaying
This is a bit of an experiment - one that I hope will pan out because I am truly excited to see how many people will use the app now that it’s free. That said, I am open to failure and learning from it. Let’s see how the patronage model works!
Anyway, enough about that. Go get Calca and get calculating!
TLDR; I wrote a new Xamarin Studio add-in that dramatically reduces the number of Build and Run cycles you need to perform while developing an app. Please follow the instructions to install Continuous and let me know what you think!
UPDATE I renamed this project from “LiveCode” to “Continuous Coding” or “Continuous” for short because I was being harassed by a bunch of britishers. Bullying works, kids.
Since the beginning of time, there has been one limitation of running .NET code on iOS using Xamarin - System.Reflection.Emit doesn’t work. That means you cannot dynamically create executable code.
It’s not a serious limitation. .NET has had this ability for years but, as a community, we really only use it for one purpose: to make code fast. In that vein, this missing feature hasn’t really been a problem for us because the slow path is often just fine.
But there’s a second use of Emit: improving the development experience with things like REPLs.
While mono spear-headed the “C# Interactive” movement with the csharp REPL, they hadn’t been able to give us that tech when running on iOS.
Until now.
Xamarin has released their Xamarin Inspector tool that acts like the developer tools that you get with web browsers.
It’s really nifty. On one hand it gives you an inspectable visual tree of your live-running app - just like the DOM in a webapp. They even have a cool layer breakout 3D view.
On top of that, there is a REPL so that you can type in C# code and see the result. This acts like the “Command” window in the browser dev tools.
Put these two together and you have a fantastic tool to diagnose what a mess you made of the view hierarchy. ;-) Click the link above and install the Inspector, you won’t regret it.
Yes Xamarin Inspector is great, and I want to see more tools along these lines. I especially can’t wait to see if Xamarin uses this tool to help us write UI tests.
And yet, I have always been a bit unenthusiastic about classical REPLs. Surely it’s fun to have a command prompt and play around a bit, but I have never been comfortable with the fact that you are not working with “real code” - the code that actually gets built to ship your app.
Since the second dawn of time, IDEs have integrated REPLs with real code with a simple trick: they allow you to select some code from your real code and send that over as a snippet to the REPL.
Even this stupid little convenience makes a world of difference. I use the heck out of F# Interactive which gives me this exact feature, and it’s amazing.
Thanks to this tool, I find myself doing full app builds far less often.
Builds are the enemy for two reasons:
First, they lock up the IDE as you wait for big compilers to do their thing and as you wait for your app to restart. Of course, the IDE isn’t frozen, but my mental state is. I cannot edit code because I might screw up the compilation and because the debugger will get lost. So I go into a mental spin-loop watching the progress bar. It’s not healthy. (I used to check Twitter, but fixed that with an edit to /etc/hosts.)
Second, they re-initialize your context. If I’m working on one part of my app that’s far removed from the initial screens, then I have to dig back through the app to get to seeing what I’m actually interested in. If I was a better automated test writer, or a better designer, or a better planner, this wouldn’t be such a problem. But back to the real world…
A little while ago, I took a stab at doing something different from the REPL and wrote Calca. After some futzing around I found an environment that allowed me to see results as quickly as I could type them and it didn’t have the annoying necessity to keep sending code to the evaluator.
I want something like Calca for my day to day work. I want to write code and see the results immediately.
While watching James Montemagno’s live stream on the Inspector, I started to wonder how it worked.
I started to wonder if Xamarin snuck in dynamic assembly support into their newest versions. I wrote a quick app that referenced Mono.CSharp which hosts mono’s awesome dynamic evaluator, then tried to run the evaluator and got what I expected:
System.InvalidOperationException
No dynamic code for you.
After James finished up, I installed the Inspector and laughed at some of my view hierarchies. Great tool.
And on a whim I ran my test app again, and you won’t believe what happened next. The stupid thing ran.
That’s right, installing Xamarin Inspector makes dynamic assemblies work. (On the simulator at least.) I don’t know what dark and old magic makes this possible but the Xamarin engineers have come through again.
Well, we’re given a hint into this dark magic. In the Inspector docs, this passage appears as a “known limitation”:
As long as the Inspector addin/extension is installed and enabled in your IDE, we are injecting code into your app every time it starts in Debug mode
Haha, they call that a limitation. Dear Xamarin, enabling dynamic assemblies in all apps, at least in the development environment, is not only OK but please keep doing it. Please don’t see this as a limitation - this is a feature I never knew was possible and I don’t want to lose it.
When I saw my test program successfully evaluate code dynamically, I was aghast. Shocked because I didn’t expect it to work, and horrified by all the ideas that occurred to me. With great power comes great, oh forget it.
Little known fact: I spam Xamarin with bug reports and feature requests on a monthly basis. They are very tolerant of me and I appreciate it.
One of my last crazy ideas was a tool that I want to see integrated into the IDE that would enable live coding scenarios - all in an attempt to break the Build and Run cycle. It was a play off of Inspector with a little bit of influence from Example Centric Programming (pdf).
The whole premise was that I wanted to see live evaluations of whole classes and modules while I was working on them without having to manually send snippets to a REPL. I wanted the tool to monitor certain classes and to visualize them whenever I changed them.
Imagine creating a UI layout. We have two options: we can use a designer or we can write it in code. With a designer, we pay the price of being separated from logic but are awarded with instantaneous feedback (or instantaneousish if using autolayout). With code, we have the full power of logic and data, but are stuck with the Build and Run cycle.
With live code, we can have the best of both worlds. We write the UI using code, but we see the effects of our code instantaneously.
In two days I have been able to put together one tenth of the tool I described in my email. But even this small version of it has me really excited.
It is able to do two things:
Send code to the iOS simulator to be evaluated and then visualized. This is to enable classic scenarios where I sometimes just want to know the value of a particular expression.
Monitor whole classes that are evaluated and visualized whenever they are edited. This makes creating UIs super fun and is the part I’m most excited about.
Please go follow the instructions to run it and let me know what you think. (This only works in Xamarin Studio.)
I am not sure how well words can describe the tool, so I took the time to record a video of me using it. The video’s a bit long, but I think you can get the general idea after just a few minutes (and if you skip the first 6 minutes describing installation).
Check it out:
I hacked together a cool little tool that I’m pretty sure will become an invaluable asset. I still want to implement more of the features I described in my original design and make it work on other platforms.
Speaking of platforms, there is one major limitation: it only works in C#. While most won’t see that as a limitation, I have been doing a lot of coding in F# lately and would prefer the tool to work with that.
Unfortunately F# doesn’t ship with a simple compiler service like Mono.CSharp and I haven’t tried yet to get the compiler to compile itself under Xamarin. I’m sure that this is technically possible, but gosh that F# compiler is intimidating and I don’t know where to begin.
I’m also interested in seeing how much feedback this blog post and tool get. I often wonder if I’m just a nutter for hating Build cycles and can’t wait to be validated or invalidated by your response.
So say hello to me @praeclarum on Twitter and let me know if any of this looks good to you.
Drone Builder is a site I created to play with different DIY drone (multicopter) designs.
Building a drone isn’t rocket science but there is a lot to learn when making your first one. You first have to learn what parts you need and what all their parameters mean. Then you have to learn how they combine to produce different effects. On top of it all, you have to do it on a budget.
It’s a lot to take in, but it’s also a trying task when you know all of that. You still have to track down shipment times, compare reviews, maintain Excel sheets - it’s a messy process.
So, Drone Builder.
The UI is split into two areas: designs on the left and components on the right. Each component has a list of products sold by online merchants (Amazon and Banggood).
As you choose products on the right side, a design is built up on the left side. If you choose multiple products for a component, multiple designs will be built with all the possible combinations.
That combination of designs is the true power of Drone Builder - not only can you design one drone, but you can easily design multiple variations and compare them.
It’s a fun little app, I hope you’ll give it a try!
But you’re not here for the drones, you want to know about this F# and React thing.
To explain why I like React, let me compare it to the traditional way GUI apps are built.
I started building UIs with Visual Basic. In those days, application logic and UI logic mutated a large UI tree to create user experiences.
Well, it’s still how we do it. The HTML DOM is a large tree that we can manipulate with JavaScript. Building apps in HTML is roughly how we did it in VB. We may use fancy binding libraries nowadays, but we’re still mutating some application data and then mutating a UI tree to match it (and the other way around).
But, but, but. Time marches on and ideas evolve. We started to see some flaws with this architecture for apps.
First, it makes parallelism hard - if objects are mutated anytime, by anyone, then it’s hard to write parallel tasks that you can trust.
Next, we started to notice the dependency graphs were becoming incomprehensible. If a tap mutates a property of object A resulting in an event that mutates a property of object B that then mutates a property of A - we get ourselves into a potentially endless update cycle. We’ve all added code of the like:
void HandleEvent() {
updatingUI = true;
UpdateUI();
updatingUI = false;
}
All to break the mutation dependency chain for a brief moment. (Usually to guard against over-zealous UI events firing.)
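A minimal JavaScript sketch of that guard pattern (the names are mine, but the shape is the same):

```javascript
// Sketch of the guard flag above: `updatingUI` swallows the change event
// that the UI control echoes back at us while we are setting its value.
let updatingUI = false;
const events = [];

function updateUI(value) {
  events.push("render:" + value);
  handleEvent(value); // the control fires its "changed" event as we set it
}

function handleEvent(value) {
  if (updatingUI) return; // break the cycle: ignore events we caused
  updatingUI = true;
  updateUI(value);
  updatingUI = false;
}

handleEvent(42); // renders once; the echoed event is swallowed
```

Without the guard, `handleEvent` and `updateUI` would call each other forever.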
Even if you manage to avoid cycles, you have a wild graph of objects with a plethora of references - both explicit references and implicit ones from events with closures. That is to say, you are creating fertile ground for memory cycles that keep objects around past their welcome.
To combat this, one usually has to write “unbinding” code. This takes the form of unsubscribing from events and disposing of objects we know to be useless.
It feels a lot like writing destructors in C++ - simple enough to explain: for every event you subscribe to, make sure you unsubscribe. It’s a bookkeeping exercise; but who likes to keep books? One missed unsubscribe and you have a dangling object eating your memory and resources.
Lastly, mutation and its destruction of data becomes undesirable. Building an undo buffer becomes tricky business if we routinely overwrite data. Rolling back to a valid state after a failed operation is very tricky business. But these are trivial problems to solve when you don’t destroy data.
The enemy has been identified as mutation - both mutation of application data and mutation of the UI tree.
React enables creating UIs without mutation and rewards you for not mutating your model.
React flips this model by treating the UI like any other data structure.
HTML entities, such as div - the analog of native “views” - become light-weight objects instead of large and complex OS resources.
The idea is to map your application state into a light-weight UI tree. Data mapping is a familiar operation to any functional programmer and any .NET programmer that loves LINQ.
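As a sketch (the names here are made up), mapping state to a light-weight UI tree is just a function over data:

```javascript
// Map application state to a plain-object UI tree - no DOM is touched.
const renderList = state => ({
  tag: "ul",
  children: state.items.map(item => ({ tag: "li", text: item.name })),
});

const tree = renderList({ items: [{ name: "Frame" }, { name: "Motor" }] });
// tree is cheap data: { tag: "ul", children: [{ tag: "li", ... }, ...] }
```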
We never mutate the DOM directly. Instead we just keep creating new UIs - never destroying with mutation.
React then takes on the onerous task of synchronizing that tree with the heavy DOM. This is all done implicitly on behalf of the programmer.
Generally speaking this is a heavy-duty process, but its performance can be drastically increased if you use immutable data. This is because React can cache the results of previous generations if it is told that data hasn’t changed. The only way to know if data hasn’t changed is to compare it to old data - something that can only be done if you don’t destroy the old data. Thus, immutability.
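A sketch of why that caching works: with immutable data, “unchanged” is a reference comparison, not a deep walk (the `shouldRerender` name is my own):

```javascript
// With immutable data, "did anything change?" is a cheap reference check.
const model = Object.freeze({ title: "Drones", parts: Object.freeze([1, 2, 3]) });

const shouldRerender = (prev, next) => prev !== next; // O(1), no deep compare

const same = shouldRerender(model, model);    // false: same reference
const next = { ...model, title: "Drones!" };  // new object; the old one is intact
const changed = shouldRerender(model, next);  // true
```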
Writing these map functions can get a bit tedious - especially when designing UIs - so React introduces “components” with the JSX syntax. Each of these components maps a bit of your application state to UI state using declarative HTML syntax. Instead of `map` functions, you write markup templates. For those familiar with XAML, this is very analogous to a XAML page binding to a view model.
React is nicely architected with an emphasis on composing apps from many of these small components - each responsible for just a small part of the UI. When you combine all these little components, you can build up an information-rich page.
There are events in React, but you don’t handle them the way you did in VB. That is, you don’t mutate the app state, then mutate the UI component handling the event.
Instead of mutation, you clone the entire application state while making precision substitutions in that clone. This clone preserves the old application state while also giving the illusion of mutation.
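In plain JavaScript, the “clone with substitutions” move looks something like this (spread syntax makes the shallow copies):

```javascript
// Produce the next app state without touching the previous one.
const state1 = { selected: ["motor-a"], budget: 200 };
const state2 = { ...state1, selected: [...state1.selected, "esc-b"] };

// The old state survives, so undo is just "hold on to the old reference".
```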
You then notify the root of the UI tree - a component - that the app state has changed. It re-maps itself (a process called “rendering” in React) and recreates the UI tree. The DOM is subsequently (implicitly) mutated to match that tree.
You end up with an app that centralizes app state changes. Facebook has even gone so far as to codify such centralization in their Flux library.
To ameliorate the cost of cloning, persistent data structures (or, immutable data structures) are used. These are designed to make this “clone with substitutions” trick easy on the CPU and memory.
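A toy persistent structure shows the trick: new versions share structure with old ones instead of copying them.

```javascript
// Minimal persistent (cons) list: prepending allocates one node and
// shares the entire tail with the previous version.
const cons = (head, tail) => Object.freeze({ head, tail });

const a = cons(3, cons(2, cons(1, null))); // version 1: [3; 2; 1]
const b = cons(4, a);                      // version 2: [4; 3; 2; 1]

// b.tail === a: version 1 is untouched, and the "clone" cost one node.
```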
There’s just one problem in this new React world - JavaScript.
I have nothing against JavaScript - I find it to be a rather enchanting language in fact - but it was not designed with immutable data structures in mind. It has no syntax to help declare them. It has no syntax to clone them. It only knows about reference equality - not structural.
Facebook, the creators of React, recognized this and built another library to help out. This one is called Immutable. It’s a brilliant little library (50KB minified) that adds a lot of persistent data structures to JavaScript. If you’re willing to forego JavaScript’s standard way to create objects, then this library puts you well on the way to success.
But, but, but. Immutable is great, but there’s a bit more needed to write persistent transformations than what a set of generic data structures can provide.
Ideally, you will have a programming language that takes immutability seriously. Something like F# (or Elm, or Swift, or …).
Not only does F# have immutability baked into its design, but it has a large mature library of algorithms, data structures, and abstractions to help you write the logic for your app.
When I think of F#, I think of the Seq type. This is your generic pull-based infinite stream of data, and F# has a wonderfully powerful set of operations for working with them in non-destructive ways. It’s a very useful tool to have at your disposal, and it’s a missing feature of JavaScript.
For data modeling, F# also has union types and record types both with automatic structural comparison and hashing. These types are more specific than “plain old objects” and can be used to create more precise models of your problem.
From a programming standpoint, F# is great due to its simple and powerful syntax. Functions are quick to define and easy to combine into chains. The syntax is driven by whitespace, so it’s easier to refactor and move code blocks around than in, say, our curly-brace endowed languages.
And let’s not forget F#’s other advantage: F# Interactive, a REPL that lets you execute code while you’re writing it. Its nice IDE integration makes writing apps an amazingly satisfying experience.
If you would like to read more from me about using F# to create GUIs you can look at my slides from .NET FRINGE 2015.
But why am I talking about F#, isn’t Drone Builder a web app?
There is an insane library out there called FunScript that can output JavaScript code from your F# code.
Why do I say “insane library” and not “cool transpiler”? That’s because of its implementation. It turns out that F# has some amazingly powerful reflection capabilities that include the ability to retrieve the abstract syntax tree (AST) of your entire app.
Constructing the AST is the first step to building a compiler or transpiler. Normally you write a parser, and then a type system, and then a module system… You then write tricky code to add types to expressions and create data structures to form the AST. It’s a lot of work. But it’s exactly the work the F# compiler already performs whenever you compile your app. The genius of F# is that it makes the results of that effort (the typed AST) available to you at runtime.
All you have to do is mark the modules of your app with the ReflectedDefinition attribute. With that, the F# compiler will retain the AST and make it available to your app (and libraries like FunScript).
FunScript, armed with the full F# AST, is then able to generate JavaScript. This process, in general, is difficult and error prone (translating between two virtual machines) and FunScript handles it with aplomb.
It has an amazingly simple way to replace F# expressions with JavaScript versions using just an attribute. This little trick enabled the FunScript authors to port large swaths of the F# standard library (Core) to JavaScript and also makes it easy for your app to interact with other JavaScript libraries and the DOM not covered out of the box.
One other great bonus for using F# with FunScript is IntelliSense. Dynamically typed languages like JavaScript are hard to provide completion info for. But for statically typed languages, like F#, code completion is nigh trivial. That is to say, I get full IntelliSense as I’m coding my web app.
Part of that wonderful editing experience is thanks to the TypeScript team and their effort towards wrangling JavaScript libraries to publish “declaration” files. These files add type information to otherwise untyped JavaScript libraries. FunScript is able to use those TypeScript declaration files to provide IntelliSense for working with external JavaScript libraries and the DOM itself. It’s fantastic.
So how do you build one of these React + F# apps? Let me walk you through Drone Builder’s architecture.
Let’s start with the data model. The usual product-based suspects are declared:
type Component =
    | Frame of FrameOptions
    | Motor of MotorOptions
    | Esc of EscOptions
    | Propeller of PropellerOptions
    | FlightController of FlightControllerOptions
    | PowerDistribution
    | Battery of BatteryOptions
    | RadioReceiver of RadioReceiverOptions
    | RadioTransmitter of RadioTransmitterOptions

type Product =
    {
        Name : string
        Key : ProductKey
        Url : string
        ImageUrl : string
        DeliveryTime : int
        Price : float
        Currency : string
        Components : (int * Component)[]
    }

type DesignComponent =
    {
        Key : string
        ComponentInfo : ComponentInfo
        Component : Component
        Product : Product
    }

type Design =
    {
        Key : string
        Components : DesignComponent[]
        Purchases : (int * Product)[]
    }
That’s it. These types - 3 records and 1 union - comprise most of the data model. Products represent something that you can purchase online and contain a set of Components (and quantities). There is not a 1-1 mapping between products and components because online merchants love to bundle things together.
A Design and a DesignComponent describe one specific way to build a drone. They are calculated by an analyze function. More on that later…
There are also the Options types - these are just additional bags of data attached to each component case. Here’s MotorOptions to give you a flavor:
type MotorOptions =
    {
        Weight : float
        VelocityConstant : float
        Diameter : float
        MaxCells : int
        Model : MotorModel
    }
(Note that I’m able to make use of F#’s units of measure.)
Products are assembled together into a big global variable called products:
let products : Product[] =
    [|
        {
            Name = "EMAX MT2204 KV2300 + ARRIS 12A 2-3S ESC"
            Key = "A-B00Y0J5WLY"
            Url = "http://www.amazon.com/dp/B00Y0J5WLY/?tag=mecpar-20"
            ImageUrl = "http://ecx.images-amazon.com/images/I/51bguFHIyFL.jpg"
            DeliveryTime = 6*7
            Price = 105.00
            Currency = "USD"
            Components =
                [|
                    4, Motor { Diameter = 27.9; Weight = 25.0; VelocityConstant = 2300.0; MaxCells = 3; Model = MotorModels.M2204_2300 }
                    4, Esc { Weight = 12.0; ContinuousCurrent = 12.0<A>; BurstCurrent = 20.0<A> }
                |]
        }
        //...
    |]
First I played with loading the catalog from a JSON file - but eventually didn’t see the point in writing all the serialization/deserialization functions. F# has a very clean data declaration syntax, why not use it?
The downside is that the catalog gets merged into the code - but it sorta doesn’t matter because web browsers will need to download the code + catalog anyway.
The application’s logic is simple enough to state:
Users select products for components. Multiple products can be selected in one component. Whole components can be skipped if the user doesn’t care to choose.
Designs are produced by finding all the valid combinations of product selections.
Stats are generated for each design to help the user choose between them.
From a code stand-point, this boils down to needing to keep a set of selected products (per category), then writing the design combinator, then deriving stats.
A sketch of it looks something like:
type SelectedProduct =
    {
        ComponentKey : CompKey
        ProductKey : ProductKey
    }

let getDesigns (selProducts : Set<SelectedProduct>) : Design[] = ...
This function was not easy to write (80 loc, factored into 8 functions) and I won’t bore you with its implementation. I will say that it uses F# collections and F# pattern matching to great effect and I would be hesitant to write that algorithm in another language. It has to take care of generating combinations of designs and distributing bundled components - it sounded so easy when I first started it. :-)
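The core of that combinator - one product per component, every combination - can be sketched in a few lines (this is my simplification, not the real 80-line version):

```javascript
// Cartesian product of the per-component selections: each design picks
// exactly one product for every component that has selections.
function combineDesigns(selectionsPerComponent) {
  return selectionsPerComponent.reduce(
    (designs, products) =>
      designs.flatMap(design => products.map(p => [...design, p])),
    [[]]);
}

const designs = combineDesigns([["motorA", "motorB"], ["escX"]]);
// two motors x one esc -> two candidate designs
```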
It also calculates stats about the drone using a combination of physics calculations and data measured from motors.
Unfortunately motor profiles are terribly measured. There are about 4 variables you need to calculate thrust from a motor and online motor profiles often only provide you with 4 data points. In order to make any inferences from this terrible data, I had to write fancy math functions that calculate Jacobians on the fly to do linear extrapolation. Again, I’m thankful I had F# to help me through writing that code. Here’s a little snippet:
let getMaxThrust (v : float) (c : float<A>) (d : float) (p : float) (m : MotorModel) : float * float =
    let nearestPoints : MotorModelPoint[] = ...
    let p0 = nearestPoints.[0]
    let diff (fy : MotorModelPoint -> float) (fx : MotorModelPoint -> float) : float = ...
    let dtdv = diff (fun x -> float x.Thrust) (fun x -> float x.Volts)
    let dtdc = diff (fun x -> float x.Thrust) (fun x -> float x.Current_)
    let dtdd = diff (fun x -> float x.Thrust) (fun x -> float x.Diameter)
    let dtdp = diff (fun x -> float x.Thrust) (fun x -> float x.Pitch)
    let t =
        float p0.Thrust
        + dtdv * float (v - p0.Volts)
        + dtdc * float (c - p0.Current_)
        + dtdd * float (d - p0.Diameter)
        + dtdp * float (p - p0.Pitch)
Who says you don’t get to use calculus in your day to day work?!
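Stripped of units and F# syntax, the idea is a first-order Taylor step away from the nearest measured point (the slope and data values below are made up for illustration):

```javascript
// Estimate thrust at (volts, current) from a nearby measured point
// plus partial-derivative slopes - first-order Taylor extrapolation.
function extrapolateThrust(p0, slopes, volts, current) {
  return p0.thrust
    + slopes.dtdv * (volts - p0.volts)
    + slopes.dtdc * (current - p0.current);
}

const p0 = { thrust: 400, volts: 11.1, current: 10 }; // fabricated data point
const t = extrapolateThrust(p0, { dtdv: 30, dtdc: 12 }, 12.6, 12);
// 400 + 30 * 1.5 + 12 * 2 = 469
```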
So that’s about it for application logic. Time for a UI!
The UI is built up using a mix of HTML and custom React classes. Each React class is backed by an F# View Model object.
The view models are declared as an F# tree rooted at the “app” view model. This tree gets transformed into the React component tree by the JSX declarations.
Let’s look at one of the nodes on that tree. Here is the view model for the component selectors on the right side of the app:
type ComponentView =
    {
        Key : CompKey
        Info : ComponentInfo
        Options : OptionView[]
        Products : ProductView[]
    }
This view model is then paired up with a React JSX class (this is JavaScript):
var ComponentSelector = React.createClass({
    shouldComponentUpdate: function(nextProps, nextState) {
        return !componentEq (this.props.component) (nextProps.component);
    },
    render: function() {
        var comp = this.props.component;
        var info = comp.Info;
        var products = comp.Products;
        var options = comp.Options;
        return (
            <section><h1>{info.Title}</h1>
            </section>
        );
    }
});
That JSX declaration does a lot of things:
- It tests if it even needs to be updated by comparing its old binding to the new one. Doing these checks drastically improves React’s performance. In fact, it’s the whole reason we’re using immutable data structures to begin with (and, therefore, the whole reason I’m writing this article). The comparison is done by the componentEq global function; more on that later.
- The render function declares the outputted HTML.
- It also continues the mapping process by combining React classes with F# view models.
It’s pretty simple huh? Your UI layer becomes very straightforward to write. It’s basically all about unpacking variables, choosing some HTML, and then messing with CSS to get everything to look nice.
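For a sense of what a componentEq-style check involves, here is a hypothetical structural comparison in plain JavaScript. The real comparison functions are generated from the F# records’ Eq members, so this is only an illustration of the idea:

```javascript
// Deep structural equality: two values are equal if all of their
// fields are (recursively) equal. This is what lets React skip
// re-rendering a subtree whose view model has not changed.
function structuralEq(a, b) {
  if (a === b) return true; // same reference: trivially equal
  if (typeof a !== "object" || a === null ||
      typeof b !== "object" || b === null) return a === b;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(k => structuralEq(a[k], b[k]));
}
```

With immutable data the `a === b` fast path hits often, because unchanged subtrees are shared by reference between the old tree and the new one.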
The most important interaction in the app is the user toggling whether a product is selected. This is handled in the Product React class:
var Product = React.createClass({
    handleClick: function(event) {
        setProductSel (this.props.productView.ComponentKey) (this.props.productView.ProductKey) (!this.props.productView.Selected);
    },
    render: function() {
        var pv = this.props.productView;
        var priceEach = pv.PriceEach;
        ...
        return (
            <div>
                <div>
                    <img src={prod.ImageUrl} alt="image" />
                </div>
                <div>
                    {price}
                    <span>{prod.Name}</span>
                </div>
            </div>
        );
    }
});
When a product is clicked, the global function setProductSel is called.
Let’s take a look at it:
let setProductSel ck pk s =
    let k = ck, pk
    if s = TheApp.SelectedProducts.Contains k then ()
    else
        let a = TheApp
        let newApp =
            if s then { a with SelectedProducts = a.SelectedProducts.Add k }
            else { a with SelectedProducts = a.SelectedProducts.Remove k }
        updateAppState newApp
where TheApp is a global variable of type:
type AppState =
    {
        SelectedOptions : Set
        SelectedProducts : Set
    }
setProductSel is given a component key, a product key, and whether it is
selected. It then recreates the global app state with that new information.
It passes that app state on to the updateAppState function:
let updateAppState newState =
    TheApp <- newState
    TheAnalysis <- analyze newState
    for l in TheAppListeners do l ()
This is, basically, the only mutation in the app. It replaces the global app state with the new one (I could just as easily have retained it to create an undo buffer or something.)
It then calculates a new “analysis” which is just the rooted view model tree.
Lastly, it fires off an event to let the UI know that the state has changed.
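The whole update loop can be sketched in a few lines of JavaScript. The names here are illustrative, not the app’s actual identifiers:

```javascript
// One mutable slot holding an immutable state value, plus a listener list.
let theState = { selectedProducts: [] }; // treated as immutable
const listeners = [];

// The single mutation point: swap in the new state and notify the UI.
function updateState(newState) {
  theState = newState;
  listeners.forEach(l => l());
}

// State "changes" build a brand new object instead of mutating the old one.
function selectProduct(key) {
  updateState({
    ...theState,
    selectedProducts: [...theState.selectedProducts, key],
  });
}
```

Because old state values are never mutated, keeping a few of them around for an undo buffer really would be nearly free.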
I’ve described how the app runs, but how does it get started? This is the final bit of glue that merges the React class world with my F# view model world:
var DroneApplication = React.createClass({
    getInitialState: function () {
        var t = this;
        registerAppListener(function () {
            var a = getTheAnalysis();
            window.location.hash = a.LocationHash;
            t.setState ({ app: getTheApp(), analysis: a });
        });
        return { app: getTheApp(), analysis: getTheAnalysis() };
    },
    render: function () {
        var analysis = this.state.analysis;
        var comps = analysis.Components;
        return (
            <div>
                <header><summary>...</summary></header>
                {comps.map(function(c) {
                    return <ComponentSelector component={c} />;
                })}
            </div>
        );
    }
});
React.render (
    <DroneApplication />,
    document.getElementById('content'));
You can see that the first thing the root class does is to register for those state-updating events. It then returns the global app state (and analysis) as its own state. When an updated event is fired, it fetches its new state and invalidates itself.
And that’s it! The rest is just writing more view models and more HTML and CSS.
I have completely ignored the actual process for getting all this code into a packaged form. I’ll try to outline that process now.
Start by putting all the F# code into an F# console app. This is convenient because we can “run our app” from the command line to test our logic or do other wacky things.
It’s also necessary to have an app and not a library because someone has to call FunScript to generate JavaScript.
[<EntryPoint>]
let main argv =
    let js = FunScript.Compiler.compileWithoutReturn <@ appMain() @>
    let d = "../Site/build"
    System.IO.File.WriteAllText (System.IO.Path.Combine (d, "client.js"), js)
    0
The main entry point for our console app calls the FunScript compiler by
passing it a reference to a function called appMain. All code referenced
by appMain will end up getting compiled (FunScript has a nice dependency walker).
The console app ends by dumping out the generated JavaScript.
My appMain function acts like a standard JavaScript module and exports
a set of functions. Since I’m doing this in the browser, “export” means
that I assign it to the window object (it’s fine).
[<FunScript.JSEmitInline("(window[{0}] = {1})")>]
let external (n : string) (f : 'a) = ()
let appMain () =
    external "registerAppListener" registerAppListener
    external "loadPreviousAppState" loadPreviousAppState
    external "setOptionSel" setOptionSel
    external "setProductSel" setProductSel
    external "getTheApp" (fun () -> TheApp)
    external "getTheAnalysis" (fun () -> TheAnalysis)
    external "analysisEq" (fun (x : AppView) (y : AppView) -> x.Eq y)
    external "componentEq" (fun (x : ComponentView) (y : ComponentView) -> x.Eq y)
    external "designEq" (fun (x : DesignView) (y : DesignView) -> x.Eq y)
This appMain is perfect because it’s easy for me to make F# functions
available to JavaScript and it also satisfies FunScript’s dependency checker.
As a final bonus, it’s compatible with Google’s Closure Compiler.
We’re doing great: all the F# code has been turned into JavaScript thanks to the magic of FunScript. But the code it generates isn’t optimal. One might even say it’s unoptimized. It repeats whole expression branches when it doesn’t need to, it loves generating empty expressions, and it does not share generic implementations.
The 2,500 lines of F# code (1,200 logic + 1,300 catalog) get translated to 934 KB of JavaScript… A bit much.
Google to the rescue. Google has a fantastic JS compiler called Closure that does all the gross data flow analysis needed to clean out fat code.
I just crank it up to its max settings, pass it the generated code, and out pops a 168 KB minified file. Magic.
Yes, I still use Makefiles. Here’s what building the app looks like:
OPTIMIZATIONS = ADVANCED_OPTIMIZATIONS

all: public/index.html public/site.js

public/site.js: build/client.js build/components.js Makefile
	java -jar build/compiler.jar --externs react-externs.js --compilation_level $(OPTIMIZATIONS) --js build/client.js --js build/components.js --js_output_file public/site.js

build/components.js: src/components.js
	jsx src build

build/client.js: ../Scraper/Client.fs ../Scraper/Model.fs ../Scraper/Catalog.fs
	xbuild ../DroneBuilder.sln
	mono ../Scraper/bin/Debug/Scraper.exe
This file describes the 3 phases of the build:
1. Build the F# console app with xbuild, then run it to generate build/client.js.
2. Run the jsx tool to translate the React classes into plain JavaScript in build/components.js.
3. Run the Closure compiler over both files to produce the final minified public/site.js.
(The F# console app is called “Scraper” - for reasons.)
And would you believe it, it all works!
I am quite proud of the app. At first it was supposed to be a quick toy to help me with my hobby, but it quickly became fertile ground to try out some new ways to build apps.
I am completely sold on this way of architecting apps:
While I sometimes miss VB and mutating all the things - I don’t miss the bugs.
Apps written in the functional style are easier to write, easier to understand later, and easier to extend to new scenarios.
There are tradeoffs of course. Functional languages and libraries are great at handling trees but they suck at graphs - and I find most apps to be graphs.
I love FunScript, it’s one of the best transpilers I’ve ever used, but I don’t think I’ll ever use it again.
The problem is that it just doesn’t do any optimization and ends up generating code that JavaScript engines just can’t handle. For instance, Drone Builder is very slow when you first start clicking around on an iPhone (it’s fine in desktop browsers). It takes a long time for the browser to JIT all the methods it needs to make the app run fast.
On top of that, the error reporting in FunScript is horrendous. I love getting errors like “never” and “interface not found” with absolutely no indication which line of code triggered this bug.
I gave up on this project once because I couldn’t understand one of these errors and didn’t know what to change. (Finally I got lucky and changed the right thing.) Then it happened again towards the end of the project when it refused to compile equality comparisons.
Now, equality comparisons are one of the main reasons I’m using functional data types. It was a real blow, but I pushed on and wrote my own equality comparisons (I had gone too far to give up).
These types of problems are to be expected with a project like FunScript - bugs happen. The real crux though is that the maintainer of the project hasn’t worked on it in a while and is not interested in continuing work. So I’m not seeing a bright future of these bugs getting fixed.
The good news though is that this app has validated this style of programming. I just have to work on what tools I use to achieve it.
Argument: Nuget 3 makes my life harder, all in the name of solving a problem I don’t have.
Lemma: And it doesn’t solve any of the problems I currently have.
Conclusion: @#$E%^&&^%! meh…
My complaint against nuget 3 comes from its added burden and complexity hefted onto library developers.
Let me start by putting my cards on the table: if it’s hard for me to support your platform in my library, I’m not going to bother.
In my mind then, every effort towards improving nuget has to improve it from a library developer’s perspective. If you make it easy for developers, nuget will be filled with awesome libraries that can run on the ridiculous number of .NET runtimes out there. The ecosystem and community grow and we all get back to our jobs of making fun of C++ and JavaScript programmers.
If, on the other hand, you make it hard, as has been done with nuget 3, you get a whopping “meh” from people like me and a o_O from the community.
Library developers start on a platform. I start on Mac or iOS. I have only ever started two libraries where I set out to make them cross-platform. The rest I made cross-platform either because it was trivial (start with a PCL, more on that later) or because I was willing to make the Herculean commitment to make it cross-platform.
I say commitment because anyone can create a library once - a nuget (even a nuget 3) package is a tolerable time investment. What’s not tolerable is creating build scripts and build servers that can compile and package everything every time I make a minor code change. Then getting those build bots configured in a way that the community can use them? Forget about it. (I don’t mention any of the commercial build services because it’s hard to justify monetary investment in OSS projects. I don’t mention any of the free build services because they don’t support my kind of builds which usually involve Xamarin.)
Back to platforms. Now, I’ve started a new library on a platform.
In the bad old days before PCLs, to release the library, I would have to make a bunch of junk projects for each and every fragment of .NET, all to convince msbuild to make me a bunch of binaries. This is just a silly assortment of meaningless names - Windows RT, Windows Phone, Windows PCL, Windows UWP, blah, blah, blah. 1 Library turns into N Projects.
(Personally, I see this as a major design flaw of msbuild. Imagine how different the .NET ecosystem would be if msbuild was actually a Common Language tool that could handle sources from multiple languages, imagine if it could output binaries not tied to a single platform, but “fat binaries” that just worked. Imagine if it was a build bot and not some CLI app from 1970. This is a tool I’ve written for myself a couple times when I was in my deepest throes of nuget and .NET cross-platform depression. Never released any version of them cause they play hell with the IDEs, but man it bothers me that no one else sees the project system as one of .NET’s major flaws (more on that later).)
Thankfully, PCLs came and saved the day. 1 Library remains 1 Project. I could ignore .NET fragmentation if I just picked one of the supersets. This means that the majority of my libraries and code could now be shared without creating a hundred meaningless project titles and build scripts. I even write my apps using PCLs even when I don’t care about cross-platform. I do it because I might want to take that code and open source it. This is how I’ve always worked - I see a chunk of my app that I think others could benefit from, then I open source that bit.
With PCLs, open sourcing a library became trivial. I write a terrible XML file, I don’t have to create any new projects, and I just put nuget in my Makefile. Done. (And sorry that I’m conflating “Open Source” with “nuget”, but most .NET devs won’t even blink at a lib unless it’s on nuget.)
Of course, the necessarily platform-specific bits would have to be shaken out into their own projects. It’s not a perfect system, but it’s manageable. 1 Library turns into M Projects where M is the number of platforms I actually care about (it’s not the multitude of .NET fragments). This isn’t like a PCL where I want to run everywhere - this is a platform specific lib and I take on all the effort and commitment that it implies. (I wish this effort was smaller, but the IDEs don’t seem to care about library authors.)
Nuget 3 was an opportunity to fix the few things wrong with nuget and make the world a better place. Nuget 2 has a couple design mistakes that I would love to see corrected in a new version:
It has no concept of “families” of libraries so platform specific libs - or libs that have been partitioned on one axis or another - each act like standalone libraries. Look at the hilarity of the FunScript libraries. Look at the FSharp Data providers. Or, if you have a sufficiently stiff drink nearby, look at the numerous ASP.NET libraries. I have no idea what any of them are or how they’re related. Nuget has a very simple dependency graph that concerns itself only with binary dependencies, not conceptual. That’s to say, it works fine for machines, but is a long way from humane. If libraries could join families - the catalog could be cleaned up and lib devs would feel safer partitioning their libs.
That partitioning I mentioned? Libraries get split up for millions of reasons. Perhaps it’s due to platform. Perhaps a large feature is split out. Perhaps the lib developer loves the modern world of 1 class per library. Whatever their reasons, almost all large nugets are partitioned. Unfortunately, nuget (and its UI) leave it up to the consumer to reason out what those partition axes are and how they apply to a project. If these axes were first-class (reified, whatever), we could turn the catalog into a well-organized and friendly place for both lib developers and consumers. Instead, it’s just an FTP directory with a bunch of DLLs in it and a big sign: “You better RTFM”!
Even the simple dependency system is broken. If I add library A that depends on B, then remove A, I still have B lying around. This is just an embarrassing bug that should be fixed.
OK, maybe it’s unfair to judge nuget 3 on what it’s not. But with its slow update cycle - seemingly tied to Visual Studio - it’s hard not to regret missed opportunities.
Nuget 3 upends the entire ecosystem. Old nuget: PCLs + Platform Specific bits (finally we hit a panacea). New nuget: PCLs? (maybe? I honestly have no idea if I’m supposed to write PCLs anymore) + Platform stuff + CoreCLR. Wait, what? CoreCLR? You mean that thing that still can’t run Hello World? My nugets got torn to shreds to support that thing? I know it’s the future, and it’s an exciting future, but OMG we are a long way from there. You have introduced a new platform (that doesn’t work) and said that nuget is now based off of it.
Seriously, are PCLs deprecated now? A running theme in my criticism is a lack of communication about how to write libraries in this new world. I know enough to know that nuget 3 has a complicated facility to resolve between PCLs and “dotnet” - so I guess PCLs still work? But am I supposed to stop making them? Should my cross-plat libraries be dotnet based or PCL? No one will stand up and answer that question without their own several-paragraph prelude. If “dotnet” is the future, it’s one shrouded in mist.
I am so confused by DNX, DNVM, and that thing called project.json. I have no idea if these things are related to nuget 3, but they have the same scent. Let me repeat, I have no idea if this nuget 3 stuff has anything to do with those techs. I am so confused by buzzwords and cute project names and blog entries that I’ve completely lost the narrative. Those tools are supposedly how you run code on the CoreCLR (why oh why? why couldn’t we just have a simple executable? Oh? Because web people love environment variables? srsly?)
Or was it package.json? Confusion continues. Maybe next year we’ll have purpose.json. And then the year after, promise.json. And then, no-seriously-use-this-project.yaml (haven’t you all noticed yet that JSON is a terrible format for hand editing? XML is easier. YAML is easier. JavaScript is easier. TSON or any of the other *SONs are easier.).
Let’s say I choose to embrace “dotnet”. Well, I can’t because Xamarin doesn’t support it. This is a letter to Microsoft, so perhaps you don’t care. But it’s my main form of .NET consumption. If Xamarin doesn’t support it, it might as well not exist. I can guarantee you I will actively ignore nuget 3 until Xamarin supports it.
Still hypothetically embracing “dotnet”… what is up with the manual dependencies? Breaking up the BCL is some sick joke. I was in denial for a long time, then I got angry, now it just makes me sad. There is no more stdlib. We get, what, int and string? And now I have to import libraries for everything else? This partitioning may have some technical benefits, but I don’t see them. It’s just added effort for what? I guess I can now run newer versions of System.Collections and old versions of System.Text? In what world does someone need to do that? A reminder: users of .NET are on a platform - we may like to consume cross-plat libraries, but we use a specific platform. I use mono. It updates its libraries every year or so. It’s an exciting time of year - retesting apps and making changes and filing bug reports. The thought of libraries now following their own independent release schedules just makes me shudder.
Whatever, I’m on the losing side of history for wanting a monolithic class library. So let’s say I fall down into a well and my only way out is to solemnly commit to embracing “dotnet”. I am still confused about its relationship to PCLs. Every time I hear someone discuss the resolution rules for nuget 3, I dream of my peaceful days back in that well. If I install Visual Studio 2015 Community edition (thanks so much for that btw!) and create an additional project in parallel to my PCL project, now I’m managing two project files instead of one. One is classy and takes care of itself. The other has brain damage and I need to hand-hold it and its 100 dependencies. Or am I supposed to throw out the PCL?
Let’s say my time out of the well has reformed me and the CoreCLR is actually a viable target. Well, nuget 3’s file format is still a terrible bastardization of something that used to be simple. We keep shoving more and more rules and features into this schema until the file is a mixture of configuration and convention. I keep mentioning the resolution rules for nuget 3. Where are they written down? Which binaries does XS or VS pick given the set of available platforms? There are blog posts that make rough English impressionist-style drawings of this algorithm - but nothing definitive.
What I really want is a matrix with “nuget platform” as one axis and “real platform” as the other. Then, if I want a library that I know works on a given “real platform”, I merely have to look along its row and find which “nuget platforms” it corresponds to. Ideally, an organization with funding would maintain this matrix - Microsoft, the .NET Foundation, Xamarin, Mono, anyone. Except “the community”. The .NET community is important, but since we don’t get a say in nuget design decisions and since this matrix is becoming more and more complex with every nuget release, the people doing the damage should take responsibility.
I am sad that I desire such a matrix. Sad that .NET has fragmented so much that it’s needed. But instead of nuget 3 coalescing that fragmentation, it just created more.
You may be reading this document and shaking your head “he just doesn’t get it”.
That is 100% possible. Maybe nuget 3 actually improves my life and I’m acting like an out of touch old codger.
But I guess that’s my point too. If nuget 3 really is a fix for the fragmentation problem, then why is the present so gray and cloudy? Why are OSS library devs who have been doing this stuff for years so confused? For goodness sake, even Newtonsoft is confused and they are Microsoft’s darling example.
Why isn’t anyone shouting “PCLs are dead, all hail the Core CLR and its 100 dependencies!”
Is nuget 3 ahead of its time, or simply the answer to the wrong question? Only time will tell I guess.

TLDR; I wrote a website to share circuits made with my app iCircuit and I hope you’ll check it out.
iCircuit users create amazing things. For the past 5 years of reading support emails, I have been privy to just a fraction of these wonders. Circuits far bigger than I ever thought iCircuit could handle - circuits that were clever and required me going back to my college texts to understand - and circuits that just made me laugh. I learned something from each of them.
It was a shame that all these wonders were hidden in my inbox. Well, no more.
Introducing, the iCircuit Gallery - a community driven web site full of circuits.
Now iCircuit users have a place to upload and share their circuits with the world. Each circuit is lovingly rendered in SVG and can contain rich textual descriptions. Even if you’re not an iCircuit user, you can still learn a lot from the gallery.
I have seeded the site with the standard example circuits and Windows Phone users have (believe it or not) been able to upload circuits for years - so the site has some initial work in it already. But,
I am asking iCircuit users to share their designs - big or small - novel or standard - brilliant or otherwise. Share them with the world! There is great satisfaction to be had in sharing your work with others. I hope also to see educational examples pop up that take full advantage of the ability to document the circuit.
Simply click the Upload button, create an account (email optional), and pick the files off your device. Right now, that means Mac and Windows users have the easiest time with the gallery. I am working on iOS and Android updates to make uploading a snap there too.
I am very excited to see your designs!
I have lots of ideas on how to improve upon this initial release but hope to get some feedback from the community before pursuing any of them. For example, I hope to add Tags to help organize things and Comments if contributors desire.
Also, I will be integrating the gallery into the app to make browsing and uploading easier. Keep your eye out for updates!
Oh my, I wrote a website! With servers and all that. Part of the reason it took me 5 years to write this thing is that I am scared to death of running servers. My ability to manage a server only gives it a life span of a few months before some hacker is using it as a spam bot.
So what’s changed? App hosting is what’s changed. I adored Google App Engine for it remedied the whole server problem - host apps instead of servers - genius! They provided a great database and a great toolset.
But it wasn’t .NET and I always wanted to run the iCircuit engine on the server.
And then Azure came along. Azure has a million enterprisy “solutions” and one awesome service called Mobile Services. But their Cloud Service was the most confusing thing ever. It acted like an app host but also acted like a server. Which was it? So very confusing.
Well, Azure fixed that with a Web Apps service. Finally, after that little marketing spin and an assurance that I’m not managing a server, I became a customer.
Building the site was a snap with ASP.NET MVC. My only possible mistake is that I’m using Azure’s Table Storage - not sure how that decision will pan out. I foresee a future of migrating to SQL…
I am also scared to death about cloud pricing. Every page on the site has an HTTP and memory cache of 5 minutes. It’s ridiculously high. Almost as ridiculously high as my fear of cloud service pricing.
But there’s only one way to find out…
I’m terrible at coding interviews - some busy bee dusts off a tricky algorithm that they studied in college and asks you to (1) originate it from a poorly stated problem and (2) live code it in front of them.
This isn’t how I work. Like most programmers who survive more than a few years in this business, when faced with a novel or difficult problem I do the majority of my design work in my head - slowly.
The problem gets repeated endlessly: “The user wants to accomplish X, Y, and Z - I will need to talk to data sources I, J, K - I will use algorithms A, B, C - they are connected in this configuration or that - information will be on a screen that looks like…”
I try out all the permutations of data structures, objects, their relationships to one another, algorithms that I already know, and algorithms that I note to seek out. I think through the user interface - attempting to limit the number of choices the user has to make to do repetitive tasks while still trying to give them new power.
Steeped in years of OOP programming, all this design work culminates in an object schema in my head. Known classes and their relationships to other classes are built and toyed with. I refine this graph by running many algorithms across it to see how nasty my layers of abstraction and encapsulation make moving data around (remember, in the end, the most important thing to your program is the data - not how you represent it). I look at it to see how easy it will be to extend or flat out replace in the future.
This is a slow process. It’s why I have a list of 100 “potential next apps”. They’re up in my head (or at least a few top candidates) while I toss them around and poke and prod at their code.
Once a design is deemed robust, useful, and interesting enough, it’s time to sit down and code it. At this point you are basically limited by your programming language. This is why I’m a programming language nerd and relentless critic.
I don’t care about powerful programming languages because they save me from typing. I care about them because they allow me to get closer to my mental design than less powerful languages.
Designs of the mind are necessarily abstract - unconcerned with particulars of language. My “head design language” is just objects, interfaces, methods, properties, and events. Call this OOP 1.0. (As I learn functional programming, my language is slowly turning to records, abstract data types, interfaces, and functions.)
When I sit down to write these, any boilerplate that the language forces on me becomes an annoyance. C++ and Objective-C that require designing a memory strategy are profoundly annoying (I can barely get my own designs right, and now the fracking computer needs help too?). C#’s lack of metaprogramming and first class events is another annoyance. F#’s single-pass compiler that makes you order every declaration and even your source files (seriously, what decade is this?) is, you guessed it, annoying. Even trivial syntax gets annoying at this point - why do I have to write all those silly characters? { ; } oh my.
The tools we use also become obstacles. Intelligent IDEs that are intended to make coding easier become enemies with every spinning beach ball - with every hidden setting - with every error message. Imagine trying to create an intricate sand castle on the beach during a hurricane. No wonder text editors such as Sublime are such hits.
So your beautiful mental design gets compromised into some language or another. This is why we call it coding - we are encoding a design into some barbaric text format that only highly paid professionals and intelligent 13 year olds can understand. Anyway…
That’s all to say that it’s best to burn through all the bad designs in your head so that only the decent ones have to suffer this transition to code.
It’s a slow process but it can’t be sped up. No, test driven development is not an answer. TDD causes you to hash out a design - but one that’s biased to one consumer - the tests. It neglects the most important consumer - the end user. Also I am happy to throw out a design that I’ve been mulling over for a week. I have never once seen a TDD advocate throw away a week’s worth of Asserts - no they just get painfully “refactored” into the next design option.
It’s not a perfect process because your initial designs are never right. Certainly it saves you from writing endless numbers of throw away prototypes before you settle on a good design - but it won’t be a perfect design. It will have to be changed once you’ve implemented the app and learned what the app really is and how people really use it.
Submitting apps to the App Store is filled with many wonderful opportunities to be rejected. Let’s count them!
1. Compiling/Building your app is the first possible level of rejection. It’s usually your fault, but some days…
2. Signing your app is also an adventure in rejection, with the added joy of creating multitudes of profiles and app IDs that you really don’t know what to do with but are too afraid to delete.
3. Sometimes the phone itself will reject you next. Maybe Springboard is having a bad day, or maybe you really have made a mess of those profiles…
4. Hey look at me! The watch wants in on this game too! It likes to reject you for a variety of reasons but doesn’t like to tell you which. You’ll have to dig into the logs to find its secret motives.
5. Time to submit that puppy and get rejected by iTunes Connect! iTunes is actually pretty good at this whole rejection thing and does its best at helping you through the difficult times.
6. Well now that you’re uploaded, surely the app… whoops. Nope. Time for the little Prerelease Binaries to reject you. Oh you didn’t know about that esoteric requirement? You read every guide, right? Right?
7. Time to submit for review and let the humans… nope, wrong again. Another computer can reject you now before a human ever sees it. Watch your inbox cause iTunes Connect has no idea what that computer is doing.
8-1,000. Finally after all that, you can be rejected by a human. This rejection process is long, filled with unspoken truths, false assumptions, and bitter quibbles. But at least it’s a human…
1,001-1,024. It was all worth it, your app is in the store and is running gr… oh, it crashes on iPad 2s when you rotate the screen during the 5th moon of the year.
So close.