Saturday, December 31, 2011

Ice Cream Sandwich vs iOS 5 vs WP 7.5 Mango

Comparing Ice Cream Sandwich with iOS 5
In this document, I try to compare the APIs of ICS with those of iOS 5, so the comparison is from the developer's point of view. I am looking only at the public APIs documented on the Android SDK web site. I know there are other APIs in the source code and that some applications use them, but private APIs may change in the future and the applications using them would break.
Both systems are huge and not always very well documented. The best way to check assumptions is to write code and experiment, but that takes time I don't have. I used to develop for iOS but stopped at iOS 3.0 and only explored a few domains; I developed a bit for Android and stopped at Froyo. So this comparison obviously contains mistakes, and I am sure I have missed some features (especially on the ICS side). I hope my readers will help me improve it. I am far from being an expert in all of these domains, so this document exists first to improve my own understanding, and also for non-experts who'd like to know more about each OS.
So please don't be too harsh with me if you find some big mistakes :-) because I am sure you'll find some.
Comparing ICS and iOS 5 is a risky thing to do :-) So, for the trolls and zealots: don't waste your time and mine. You'll be censored heavily. Freedom of speech means you can express yourself on your own blog and I don't have to read it. It does not mean you can say anything here.
This document sometimes uses the word «OS» for Android or iOS. Strictly speaking this is incorrect since they are distributions, but most people speak that way, so the document follows the general trend.
I have tried to avoid copy/pasting extracts of the OS documentations but in some cases it was too tempting.
Finally, this document will be improved based upon the comments. So, I'll track here the list of changes I have made and the mistakes I have corrected.

Change history

  • First release
  • 12 / 22 / 2011 : AsyncTask

1. Graphics

The graphics domain is one of the most important, so it comes first in this document. The personality of a phone, and its main differentiator today, is the UI.
iOS and Android both use the concepts of windows, views and layers, but with different meanings.

1.1. Windows

Windows are used to contain a view hierarchy.
On iOS, windows are used by the developer to access the different screens; iOS can support several screens (external or internal). iOS itself uses windows in the standard way: to allow different processes to share the screen, for instance the status bar or the notification tray.
On Android, a window is linked to an activity. Generally the window uses the full screen, but some activities may have a smaller window floating on top of other windows. Activities are described in the section about process management. Examples of windows are the status bar, the wallpaper, the launcher and alerts.
So on both OSes windows serve similar functions, but from the point of view of the 3rd party developer they are used a bit differently.

Android Screenshot
In the previous Android screenshot, you can see 4 windows : the status bar, the animated wallpaper (full screen), the menu and the search bar.

1.2. Views

A UI is organized as a hierarchy of views in both OSes. Each view has its own coordinate system, and the drawing code for a view is expressed in the view's coordinates. Views are responsible for handling events.
In iOS, a view is also a layer. A layer in iOS has memory allocated and managed by the composition engine, and is different from an Android layer.
In Android, a view does not have to own any memory. It is an abstraction for event handling and coordinates; views generally share a buffer belonging to their activity's window.
The Android views that do own a buffer are SurfaceView (or one of its subclasses such as GLSurfaceView, RSSurfaceView or VideoView) and TextureView.
TextureView is new to ICS and is useful for OpenGL and video content.
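To make this concrete, here is a minimal custom Android view (a sketch; the class name BadgeView is hypothetical). It only overrides onDraw: it owns no buffer of its own, and it draws in its own local coordinates into the buffer shared with the rest of the activity's window.

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

public class BadgeView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public BadgeView(Context context) {
        super(context);
        paint.setColor(Color.RED);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // Coordinates are local to this view; the pixels end up in the
        // window buffer shared with the other views of the activity.
        canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, getWidth() / 4f, paint);
    }
}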

1.3. Layers

Layers in iOS are objects managed by the composition engine: Core Animation. Core Animation is a patented technology at the center of the iPhone user experience. Layers in iOS are very similar to iOS views and can be seen as lightweight views.
Layers in Android are different: they are caches for a view. When a view is dirty and has to redraw itself, it may force other views (and their sub-trees of views) to redraw themselves, since they share the same native window. But if those other views have not modified their content, recomputing it may be expensive when the content is complex. So the content can be cached in a bitmap, or in an OpenGL texture when hardware acceleration is enabled in the system.
In some ways, iOS views are all cached since they are layers with an associated memory buffer. iOS also offers the possibility of caching a whole view hierarchy instead of just one view. It is not clear to me whether the Android layer for a view V is a cache just for V or for the full view hierarchy starting from V.

1.4. Surfaces

Surfaces are an Android concept.

1.4.1. Android surfaces and composition

A first category of surface is created through a view and interacts with the view hierarchy: SurfaceView and its subclasses GLSurfaceView, RSSurfaceView and VideoView.
In a SurfaceView, the surface is managed through the SurfaceHolder interface; a class implementing this interface manages the memory backing a surface.
But views created with SurfaceView have a problem: they do not live in the application's window. They punch a hole in this window to reveal the new window created by the SurfaceView. As a consequence, SurfaceViews are not very convenient for animations. They are useful for displaying OpenGL or video content, since the view content can be refreshed without having to redraw the application's window, but they cannot be transformed efficiently (moved, scaled, rotated …).
For that reason, a new category of views has been introduced in ICS: TextureView and its subclass RSTextureView. They can be used only in a hardware accelerated window. They behave as normal views in the view hierarchy and can be easily and efficiently animated by the GPU, so they are very interesting for video and 3D.
In addition to that, a view can be cached using a layer: a software layer (a normal bitmap) or a hardware layer (a texture managed by the GPU, when hardware acceleration of the window or view is enabled).
A layer for a view V can be used, for instance, to cache a complex view hierarchy starting at V. It can also be used when some filtering has to be applied to the content after rendering.
Hardware accelerated layers are particularly useful for animations.
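In code, asking for a layer is explicit (a sketch, assuming ICS's view APIs; myView stands for whatever view is about to be animated):

// Cache the view's rendering into a GPU texture before animating it.
myView.setLayerType(View.LAYER_TYPE_HARDWARE, null);
// ... run the animation ...
// Release the cache once the animation is finished.
myView.setLayerType(View.LAYER_TYPE_NONE, null);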
Hardware acceleration means that instead of describing the view content as an array of pixels, the content is described as a list of OpenGL commands: the display list. So in hardware accelerated mode, instead of sending a huge array of pixels to the GPU, Android only needs to send a (hopefully smaller) list of display commands. This list does not have to be regenerated by the CPU each time the screen is composited; it is regenerated only when the view content has been invalidated. This is useful for animations. Of course, it depends on your scene: if you have lots of textures, the GPU still has to load the same amount of memory.
Android can use a Paint object to apply some effects to a view when it is being rendered.
Obviously, the legacy of previous Android versions is visible here: it is a bit confusing.

1.4.2. iOS Layers and composition

iOS is conceptually much simpler: there are only Core Animation layers, which are roughly equivalent to an Android view with a hardware accelerated layer cache. UIViews in iOS are backed by Core Animation layers.
There are fundamentally only 3 kinds of CALayer: CALayer, CAEAGLLayer and CATiledLayer.
CAEAGLLayer uses a customized version of EGL to create and manage the EGL surface (the A in EAGL is for Apple).
CATiledLayer is a subclass of CALayer providing a way to asynchronously provide tiles of the layer's content, potentially cached at multiple levels of detail. As more data is required by the renderer, the layer's drawLayer:inContext: method is called on one or more background threads to supply the drawing operations to fill in one tile of data. iOS can use Core Image filters to apply effects to a view when it is being rendered.
It is not clear what a CALayer's memory actually holds: an OpenGL texture or a display list? The presence of the shouldRasterize property in the API, and other hints, suggests that iOS may choose the best representation depending on how the layer is used. But I really have no idea.

1.5. Animation

1.5.1. Animations in ICS

Before ICS, there was no really efficient mechanism for animations. A SurfaceView cannot be efficiently transformed. Normal views share a buffer with the application's window, so animating their content leads to redraws of other parts of the UI. It is possible to cache the content of a view in a bitmap to avoid regenerating the view content when only view properties (like position or transparency) change, but this bitmap is drawn into the parent view and thus still forces some redraws.
The introduction of hardware acceleration and texture views enables new possibilities where a view can be animated without forcing the redraw of other views. A new animation framework, the property animator, is introduced to take advantage of those features.
The combination of the new property animation system, hardware layers and texture views is in fact close to the Core Animation framework of the original iOS (2007) with its CALayers.
Previous versions of Android used view animation (and ICS still supports it). With view animation you can only animate views, and only a very restricted set of view properties. Moreover, only the view drawing is changed: events are still delivered at the original view position unless some special logic is coded to handle it. Indeed, the view properties are never changed; only the way the view is drawn by its container changes.
With the new property animation you can animate properties of any object (not just views). One can easily control how the values are interpolated over time. For a visual animation this is important because one often wants, for instance, to slow the animation down towards the end: a linear interpolation is not always the most visually pleasing. So with property animations one can control the interpolation.
It is possible to group animations so that some take place in parallel (in the same group) and some occur sequentially (one group after another). Groups can be nested. This is done with AnimatorSet. The frame rate of an animation can also be controlled, etc.
When a hardware accelerated view is animated, Android often just has to replay the display list without asking the view to recompute it, which is much faster. Of course it depends on the animation, but for most view properties (like rotation), animating is equivalent to generating a few OpenGL commands before replaying the view's display list. Hardware accelerated layers can also be animated without forcing the redraw of other views (I have not read that software layers can be used for this; my understanding is that a software layer has to be blitted into the native window, and thus the native window has to be redrawn).
To animate a property there are three choices:
  • The object has setFoo and getFoo methods to change and read the value of its member foo, or a wrapper around the object provides those methods;
  • If that is not possible, a listener can be attached to a ValueAnimator; this listener is responsible for changing the object's field based on the ValueAnimator's current value;
  • The property may be exposed as a member field of class Property.
In addition, if I am not wrong, the animation itself runs on the main UI thread.
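As an illustration, here is roughly what the first choice looks like with the new android.animation framework (a sketch; myView and the values are arbitrary):

ObjectAnimator rotate = ObjectAnimator.ofFloat(myView, "rotationY", 0f, 180f);
rotate.setInterpolator(new DecelerateInterpolator());   // slow down towards the end

ObjectAnimator fade = ObjectAnimator.ofFloat(myView, "alpha", 1f, 0.5f);

AnimatorSet set = new AnimatorSet();                    // group the two animations
set.playTogether(rotate, fade);                         // run them in parallel
set.setDuration(300);                                   // milliseconds
set.start();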

1.5.2. Animations in iOS

Core Animation has been present since the very first iPhone in 2007 and is very similar to the property animator, with some interesting improvements.
In Core Animation one can similarly animate properties like opacity, coordinate transforms, background colors, etc. There is an equivalent of an animation group (a Core Animation transaction), and something equivalent to listeners to react to animation events and choreograph a complex animation. It is also possible to specify interpolators to control the animation curve, for effects like slowing down at the end of an animation.
In addition, there are two main differences compared to the Android property animator: there are implicit animations, and the model is asynchronous (animation is done on a different thread from the UI one).
In iOS, if the developer writes
myView.opacity = 0.0;
the system starts animating the view to fade it out. The opacity slowly changes from 1.0 to 0.0, but the developer does not see it: the developer works with the final value directly. The animation takes place in parallel on the presentation tree, which is managed by a different thread; the developer only accesses the layer tree.
If at any time during the animation the developer decides to change the value of opacity again, it is reflected in the animation. Let's assume the current opacity on the presentation tree is 0.6 and the developer sets it back to 1.0 on the layer tree: the animation will then automatically animate the opacity from 0.6 to 1.0 instead of continuing from 0.6 to 0.0.
So with implicit animations (an explicit model is also supported) it is very easy to start and change an animation. In both models the animation is asynchronous and takes place in the presentation tree. The interaction model is simple and does not require the developer to use semaphores.
So, any event (like a touch event) can very easily and very quickly modify the current animation or pause it.
iOS uses Objective-C. In Objective-C it is possible to programmatically ask whether a class supports a given method by giving the method name as a string. Building on this, iOS (like Mac OS X) implements a flexible key-value system where it is possible to change the value of properties, or react to changes of properties, by naming them at runtime. For instance, the key “a.b.opacity” names the member opacity of member b of member a of the current object. Using key-value coding it is very easy to wrap new behavior around an existing graph of objects.
It is a bit more flexible than the reflection currently used by the ICS animation framework.

1.5.3. Callbacks

With animations, it is possible to listen to events like the start or the end of an animation, which is useful to choreograph complex animations. In Android, this is done by registering listeners whose methods are executed when the event occurs.
The Java language supports inner classes, which are generally used to implement callbacks.
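In Android such a callback typically looks like this (a sketch; fadeOut and fadeIn are assumed to be Animator fields built as in the earlier example):

fadeOut.addListener(new AnimatorListenerAdapter() {   // anonymous inner class as callback
    @Override
    public void onAnimationEnd(Animator animation) {
        fadeIn.start();                                // chain the next step of the choreography
    }
});
fadeOut.start();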
In iOS, the C language has been extended with the concept of a block: a closure. Blocks are similar to inner classes but need less boilerplate. The iOS APIs have been updated to use blocks as much as possible. Blocks are also important for SMP in the iOS world, as explained in the section about process management.

1.6. OpenGL

1.6.1. Android

ICS supports OpenGL ES 1.1 and OpenGL ES 2.0. There are Java APIs as well as Native Development Kit (NDK) support.
GLSurfaceView is provided for OpenGL support. It offers a continuous render mode and a dirty render mode where the view is refreshed only when needed.
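A typical setup looks like this (a sketch; MyRenderer is a hypothetical class written by the developer that implements GLSurfaceView.Renderer, and context is the enclosing activity):

GLSurfaceView glView = new GLSurfaceView(context);
glView.setEGLContextClientVersion(2);                      // request an OpenGL ES 2.0 context
glView.setRenderer(new MyRenderer());                      // onSurfaceCreated / onDrawFrame live here
glView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY); // dirty mode: draw only on request

// later, when the scene has changed:
glView.requestRender();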
Calling GL APIs from Java adds overhead due to the JNI glue (Java Native Interface). GL code written in native code cannot easily access the Java data structures describing the 3D scene. So native code means more work, which may not always be justified.
So Android introduces Renderscript, whose first goal is to simplify and optimize 3D graphics compared to Java for common uses (not games). Renderscript provides portability and automatic Java wrapper generation (to ease communication with the Java world).
An RSSurfaceView is introduced to support RS drawing. The RS framework introduces a new Mesh data structure to describe 3D objects (sets of triangles) and to exchange those descriptions more easily between the Java and RS worlds. RS can also specify shaders for the rendering and additional settings like blending modes, and it supports samplers. In short, RS is not OpenGL, but it introduces abstractions similar to those available in OpenGL.
GLSurfaceView does not provide enough control over the EGL API to share data between contexts, which can be useful for large data like textures (at least I have not found anything about it).
ICS introduces the new TextureView and RSTextureView, which can be used with OpenGL and have none of the problems related to SurfaceView (they are true views). They work well with the new animation framework.

1.6.2. iOS

iOS supports OpenGL ES 1.1 and OpenGL ES 2.0. iOS uses native APIs directly, so there is no need for a technology like Renderscript.
EGL is not directly supported. Instead, Apple has developed a very similar API: EAGL. EAGL supports the concept of share groups to share objects like textures and buffers between different contexts, similar to what the EGL API provides.
The OpenGL support is integrated with Core Animation: the OpenGL ES renderbuffer is also a Core Animation layer.
Rendering can be done on demand for views that do not change very often, or in an animation loop. iOS provides the CADisplayLink object for animation loops: a display link is a Core Animation object that synchronizes drawing with the refresh rate of the screen.

1.7. Display

1.7.1. Android

TV out is supported by Android but is not under the control of the developer. It is possible to send video to an external display, and that's all.
There is also a cloning mode, available over HDMI for instance. In that case, and contrary to the video case, the external display shows the same thing as the internal one.

1.7.2. iOS

iOS supports several screens with different content on each. From a hardware point of view, only one external screen over HDMI or AirPlay (wireless display) is supported; from a software point of view, the API supports any number of external screens.

1.8. Comparison Summary

With hardware layers and the new property animator, ICS graphics is finally close to the level of the 2007 iPhone. Some problems remain, such as animation threading.
And of course, applications have to be updated to benefit from the improvements.

2. Process management / SMP

There are lots of differences between Android and iOS in process management and SMP. It is the domain where the two OSes differ the most, and where Android is the most interesting.

2.1. Process management

On iOS, an application (from the point of view of the user) is a process. There is only one entry point: an application is monolithic. Applications are delivered as signed archives containing assets (pictures, sounds, UI descriptions …) along with metadata about the application.
iOS applications can be asked to do things by other applications using the standard URL mechanism. A URL is not just an http:// string to access a web resource; URL means Uniform Resource Locator, and a URL can be used to access a resource in any domain. Each application can define its own URL scheme, like mailto:, tel:, skype:, etc. Some URL schemes are recognized by the OS (like tel or mailto), others are really application specific.
On Android, an application is also delivered as a signed package containing metadata (the manifest). But an Android application is not monolithic with only one entry point: it can contain several kinds of components (activities, services, content providers and broadcast receivers).

2.1.1. Activity

An activity is basically what controls a window. For instance, an activity could manage a calendar view. Android provides mechanisms to select and launch an activity: intents and intent filters. An intent is similar to a URL scheme but formatted differently. The advantage of intents and intent filters is that there is a mechanism to publish the available activities and select one. In iOS, if you have several browsers installed, you can't decide which one will handle the http scheme. In iOS, if an application (let's say Skype) implements the skype URL scheme, there is no way to let other applications know it: they must have hardcoded this information or refer to an external database like http://handleopenurl.com
With intent filters and implicit intents, one can replace a standard activity (for instance the calendar view) with an activity from another application. So one could replace the default Android calendar activity with a calendar coming from another application. In that case, the new calendar activity may run in a different process (the process in which an activity runs is controlled through metadata in the manifest of the Android application). So Android is more customizable than iOS thanks to this mechanism.
(I don't know if this is actually possible with the calendar :-) I am just using it as an example. It is theoretically possible if the calendar exposes a clear API. And even if it is not yet possible in practice, there are other parts of Android that can be customized with the same mechanism.)
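For reference, launching "whatever activity handles this" with an implicit intent takes a couple of lines (a sketch; the URL and chooser title are arbitrary):

// Ask Android to find an activity whose intent filter matches ACTION_VIEW + http.
Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse("http://www.example.com"));
startActivity(intent);

// Or let the user pick among all the matching activities:
startActivity(Intent.createChooser(intent, "Open with"));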
Customization like this imposes much more frequent use of inter-process communication (IPC). So there is a tension between sandboxing (see the section about security) and the use of IPC: when IPC is widely used, it is difficult to be sure that the sandboxes won't be compromised. iOS and Android have totally different philosophies about it.

2.1.2. Services

On iOS and Android, everything is done to avoid having too many processes running in the background, because that is bad for battery life and user experience. So when an activity (Android) or process (iOS) goes to the background, it is asked to save its state and is then "paused" by the OS. At any time the OS can quit it (to free some memory) or relaunch it; the activity or process should be able to restore its state from the saved data. The goal is to give the user the illusion that applications are always running. But in some cases there is a need to do something in the background, and the mechanisms defined by Android and iOS are quite different.

Android background activity

In Android, the application has to be split into several components: background work has to be done in a service, and a service has no UI.
An activity in the background is either in the paused or the stopped state. In those states the activity can be killed at any time, so there is no guarantee that processing can run for as long as it is needed. That's why a service must be used: its life cycle is different.
A service exposes some functionality that should run in the background. Several components can bind to it (communicate with it). The service does not have to run in its own process, and most of the time it does not. For instance, a music player can be split into an activity managing the user interface and a service playing the sound, both running in the same process. The service can be linked to a status icon in the status bar and is then considered a foreground service. As a consequence, Android will not kill the process containing the activity in a low memory condition, because the activity's process also contains the service.
If the service were in another process, the activity/service communication would be inter-process communication, which is less efficient. But that is not a requirement: when the service runs in the same process as the activity, plain function calls can be used.
So generally a service is not a process (it can be, but it is not mandatory). It is not a server, and it is not a thread. A service is a way to expose functions to other components, and a way to tell Android that some work has to be done in the background and thus that the process should not be killed.
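A minimal service is just another component declared in the manifest and started by some other component (a sketch; PlaybackService is a hypothetical name and the actual work is omitted):

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class PlaybackService extends Service {
    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // Kick off the background work here (e.g. on a thread the service creates).
        return START_STICKY;        // ask Android to recreate the service if it gets killed
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;                // no binding in this sketch
    }
}

// From an activity, running in the same process by default:
// startService(new Intent(this, PlaybackService.class));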
There are other components (broadcast receivers and content providers) but they won't be described in this document.
Android uses Linux cgroups to schedule foreground and background components differently: the foreground activity is more likely to get CPU time than background components.

iOS background activity

In iOS, the full application runs in the background. But like in Android, the application must tell the OS that it needs to do something in the background, otherwise it can be quit at any time. In Android this is done through services; in iOS it is done through metadata in the application package. This metadata (flags) enables specific background use-cases. So, contrary to Android where a service can do anything, iOS imposes a few restrictions to preserve battery and user experience.
The main use-cases are (several use-cases can be combined) :
  • Executing a finite-length Task in the Background : it is using blocks and grand central dispatch. Only the blocks queued on a specific queue are scheduled. Other threads from the app are not scheduled. The time to finish the processing is limited. But any processing can be done (access to web servers etc …) ;
  • Local notifications (similar to some kind of timers) ;
  • Audio (audio playing and recording) ;
  • Location with 2 mechanisms depending on the battery usage and accuracy : significant location change or background locations (GPS) ;
  • VoIP ;
  • Communication with external accessories ;
  • Newsstand download
The Android use-cases that can't be covered with a combination of the above are: camera in the background, sensors in the background, and server access in the background (like monitoring tweets) beyond a limited amount of time. For the latter use-case, iOS favors offloading the work to the cloud.
Once an iOS application is allowed to run in the background (for instance for audio or VoIP), it can do a lot of things, like accessing web servers. So people who say that iOS multitasking is not true multitasking are not accurate: it is, both from a technical point of view and from a user experience point of view. But some use-cases are either forbidden or limited in time.

2.2. SMP

To benefit from an SMP system, several threads must be available and actually use the CPUs. Often the threads are just paused (process in the background) or waiting on an event; what's needed are computation threads. Some native applications, like the web browser, are multithreaded, but to really benefit from SMP it must be possible to easily exploit the parallelism hidden in the applications of the Android Market or the iOS App Store.
Unfortunately, threads are not really the right abstraction for parallelizing an application. They are an implementation detail: an important one, but something that should be hidden if possible because they are difficult to use. That's why the OSes provide other abstractions to make the use of threads more convenient. The goals of those abstractions are:
  • Minimize the restructuring of the code required to parallelize ;
  • Minimize the number of computation threads on each core ;
  • Maximize the load of the computation threads so that they are not just waiting.
A thread pool is often used, and abstractions are built on top of it to simplify common use-cases.
Since the concept of a closure is very important to the iOS approach, and since closures have already been quickly mentioned in this document, this is the right place to explain what a closure is for people who don't know.

2.2.1. Closures

Let's consider a block of code that the developer would like to parallelize: 

Block of code
The block of code refers to local variables that may also be used by another block. The block to be parallelized may take a long time to finish, so if it is executed asynchronously, the enclosing function f may already have returned. As a consequence, to parallelize it, the simple function f has to be changed: the local variables used by the block must be saved somewhere else and survive their containing function f. The local variables may refer to objects in memory and not just be simple types like int or float, so some memory management must be added.
A closure is a function with state: a function depending on its arguments and on other variables that are not passed as arguments and are "open". Their values must be supplied by an environment. In the above example, the block would be transformed into a function with no arguments but depending on several open variables. The function only has a meaning when the open variables are given values by an environment.
Several calls to the f function would thus create several closures sharing the same code but with different values for the open variables.
The closure concept must be supported by the language and the compiler. It implies that some local variables cannot live on the stack, or that the stack must have a more complex, tree-like structure.
In iOS, the C and Objective-C languages have been extended with the concept of a block. A block is a closure, and the compiler has been modified to support it.
In Java, it is more difficult to change the virtual machine, but a form of closure is supported through inner classes. References to local variables may be replaced with method calls in some cases and thus be less efficient. In addition to (final) local variables on the stack, inner classes can also access the member variables of the containing object. The problem with an inner class is that you still have to wrap your function into a new class, which is less convenient.
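A small plain-Java example of this "closure as inner class" pattern (the names are arbitrary):

Runnable makeTask(final int jobId) {        // 'final' so the inner class may capture it
    return new Runnable() {                 // anonymous inner class playing the role of a closure
        @Override
        public void run() {
            // jobId survives even after makeTask() has returned
            System.out.println("Running job " + jobId);
        }
    };
}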

2.2.2. Android SMP

A Java thread can be used directly; the function to be executed on the thread must be wrapped into a Runnable object. A common use-case is to do some processing on a thread and then update the UI when the processing is done. AsyncTask is used for this: it must be subclassed and two methods implemented, one run on a worker thread and the other run on the UI thread when the worker has finished.
The java.util.concurrent package contains abstractions for working with threads: thread pools can be created with executor services, and there are thread-safe data structures.
Thread pools can be created with either a fixed or a variable number of threads. In the latter case, the goal is to minimize the number of running threads while ensuring that there is always a thread available to do the work.
There is also the scheduled thread pool, which can execute work after a delay or periodically.
The units of work to be run on the threads have to be wrapped into Runnable or Callable objects.
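A typical AsyncTask looks like this (a sketch; loadFromNetwork and resultView are hypothetical placeholders for the real work and a real widget):

import android.os.AsyncTask;

class DownloadTask extends AsyncTask<String, Void, String> {
    @Override
    protected String doInBackground(String... urls) {
        return loadFromNetwork(urls[0]);   // runs on a worker thread
    }

    @Override
    protected void onPostExecute(String result) {
        resultView.setText(result);        // runs back on the UI thread
    }
}

// new DownloadTask().execute("http://www.example.com/data");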

2.2.3. iOS SMP

In iOS it is possible to use threads directly, but several other technologies are also available, like Grand Central Dispatch and operation queues.
In iOS, developers never have to create new classes just to wrap functions to be run on a thread. There are two other ways: blocks (closures) and selectors. A selector is a specific feature of the Objective-C language, so a few words about it are required.
In the usual object oriented languages like C++ or Java, a method call is noted:
a.draw()
The method draw is hard coded. Polymorphism may select a different draw method according to the dynamic type of a, but that's the only flexibility available.
In Objective-C, a method call is not really a method call, although the compiler generates method calls most of the time. Instead, Objective-C uses the syntax:
[a draw]
to specify that the object a is sent the message draw. This message handler may be implemented as a method draw for optimization, but it is not required.
A message may contain data. For instance, a drawCircle method specifying the position and radius of the circle may be written as:
[canvas drawCircleAtPos: thePos withRadius: r]
the message in that case is drawCircleAtPos:withRadius:
Message names can be saved in variables, etc.
To manipulate message names, Objective-C provides the @selector construct, which returns an identifier for a given message name.
So the previous message would be identified by
@selector(drawCircleAtPos:withRadius:)
In Objective-C, you can decide that a message sent to an object will be handled on another thread. So there is no need to create new classes to execute a function on another thread: any message handler of an object can potentially be executed on another thread, with or without delay.
There are several APIs to do this but as an example :
performSelector:onThread:withObject:waitUntilDone:
This message is recognized by any object in the Objective C system. There are several variants and some of them are able to support delayed execution.
iOS also supports operation queues, which provide abstractions to ease the management of asynchronous operations:
  • Support for the establishment of graph-based dependencies between operation objects. These dependencies prevent a given operation from running until all of the operations on which it depends have finished running.
  • Support for an optional completion block, which is executed after the operation's main task finishes.
  • Support for monitoring changes to the execution state of your operations using key-value notifications.
  • Support for prioritizing operations and thereby affecting their relative execution order.
  • Support for canceling semantics that allow you to halt an operation while it is executing.
In addition, iOS supports Grand Central Dispatch (GCD). This technology takes the thread management code you would normally write in your own applications and moves it down to the system level. All you have to do is define the tasks (blocks) you want to execute and add them to an appropriate dispatch queue. GCD takes care of creating the needed threads and of scheduling your tasks to run on those threads. Because the thread management is now part of the system, GCD provides a holistic approach to task management and execution, providing better efficiency than traditional threads.
The work units submitted to GCD are blocks. With GCD, serial code like:
printf("Do some work here.\n");
printf("The first block may or may not have run.\n");
printf("Do some more work here.\n");
printf("Both blocks have completed.\n");
can be transformed into a parallel code with:
dispatch_async(myCustomQueue, ^{
    printf("Do some work here.\n");
});

printf("The first block may or may not have run.\n");

dispatch_sync(myCustomQueue, ^{
    printf("Do some more work here.\n");
});
printf("Both blocks have completed.\n");
So in iOS there are lots of tools to ease the work of the developer, and with minimal changes to the source it is possible to dispatch many units of work for asynchronous execution. Threads are hidden, and synchronization is mostly not based on semaphores, which are used as little as possible (but can still be required in some cases).
This is made possible by a global approach: changes to the programming language to support closures, system-level thread pool management with kernel support (GCD), and the semantics of Objective-C (messages instead of function calls).
The move to GCD and blocks started more than a year ago, because blocks are more natural for lots of APIs even on single-core systems (callbacks). Apple has moved a lot of iOS APIs to blocks.

2.3. Compute APIs (OpenCL, Renderscript …)

This section can be skipped. I am trying to imagine what we will get in the future.
OpenCL is not available on iOS 5.0, and Renderscript does not yet run on the GPU nor dispatch compute work to the GPU; for graphics, current Renderscript can only call OpenGL functions. One can expect Renderscript on the GPU in the future, and OpenCL on the iOS side.
On the iOS side, OpenCL should work well with Grand Central Dispatch, because that is already the case on Mac OS: the same GCD API can be used to dispatch OpenCL kernels to the GPU. This API is much simpler than using the OpenCL API directly. So in the Apple world, GCD provides a unified API to dispatch all kinds of work to different kinds of cores.
OpenCL allows more control over the GPU than Renderscript: there are ways to control where data is located in the memory hierarchy (important for efficiency when the GPU has local memory), and there are synchronization primitives (barriers, etc.).
In Renderscript there is nothing like that. Synchronization happens when data is exchanged between the RS world and the Java world (I have not found anything else, so I assume there is no other way right now).
There is nothing about GPU local memory either. So if RS is to use the GPU efficiently, one can expect some evolution towards the OpenCL model; otherwise RS will get portability but sacrifice performance.
Renderscript can also be used to parallelize work on an SMP system, through the rsForEach call.
rsForEach applies an RS script to all the elements of an allocation (array), assuming that the processing of each element is independent of the others. The work items are dispatched to a thread pool. So rsForEach is another way to use an SMP system on Android.
In iOS, the equivalent of rsForEach can easily be implemented using GCD and a dispatch_async inside a loop body.
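On the Java side, a comparable data-parallel dispatch can be written by hand with java.util.concurrent (a sketch; process() is a hypothetical per-element function and data an existing array):

// Inside a method; uses java.util.concurrent.{ExecutorService, Executors, CountDownLatch}.
int cores = Runtime.getRuntime().availableProcessors();
ExecutorService pool = Executors.newFixedThreadPool(cores);
final CountDownLatch done = new CountDownLatch(data.length);

for (int i = 0; i < data.length; i++) {
    final int index = i;
    pool.execute(new Runnable() {
        public void run() {
            data[index] = process(data[index]);   // each element is independent
            done.countDown();
        }
    });
}
done.await();      // wait for all work items (throws InterruptedException)
pool.shutdown();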

2.4. Comparison Summary

Android applications are not monolithic, and the concepts introduced in Android allow much more customization of the system. That's a major difference from iOS and one big positive point for Android. I'd like this point to be promoted more often on the web, instead of the focus being on areas where Android is still weak.
The support for parallelization looks less advanced on the Android side and requires more boilerplate.

3. Video

In this section, like in the audio one later, I have discovered some big limitations of Android that surprise me. I am probably wrong and have missed some key features.
Also, those domains are big, with code on the Java side and on the native side. The native side is a bit confusing since the Android implementation is not fully compliant with the Khronos specification: some features are missing and some customizations have been added by Android. So it is easy to get things wrong without experimenting with the code.
That being said, now you can read the video and audio sections :-) and don't get mad if you think I am wrong.

3.1. Video playing

3.1.1. Android

The only public API in Android for video playback and manipulation is MediaPlayer. MediaPlayer provides the standard playback controls and the ability to fetch data from different sources (like a file or the network). MediaPlayer decodes the stream into a surface, so it is possible to access the video frames and apply additional transformations to them.
Timestamps are added by the MediaPlayer to the Surface.
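The basic playback path looks like this (a sketch; error handling is omitted, and the path and the SurfaceHolder named holder are assumed to exist):

MediaPlayer player = new MediaPlayer();
player.setDataSource("/sdcard/Movies/clip.mp4");   // local file or network URL
player.setDisplay(holder);                         // SurfaceHolder of a SurfaceView
player.prepare();                                  // synchronous; use prepareAsync() for streams
player.start();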
With the new ICS support for OpenMAX AL in the NDK, it is possible to create MPEG-2 transport streams in software, which can then be processed and decoded by the platform. For instance, the application may receive an encrypted stream, decrypt it, and generate an MPEG-2 transport stream to be decoded by the platform.
Android does not yet provide a compliant implementation of OpenMAX AL 1.0.1, because it does not implement all of the features required by any one of the profiles.
Note that OpenMAX AL is not the same thing as the OpenAL used on iOS; OpenSL ES can be seen as a lighter version of iOS's OpenAL.

3.1.2. iOS

In iOS there is also a Media Player framework providing features similar to Android's, with the addition of integration with the iPod library and iTunes Store (and its DRM) and support for AirPlay to stream video over WiFi to a TV.
But there are also several APIs coming from the AV Foundation framework that provide more control over video playback. AV Foundation is one of several frameworks that can be used to play, create and edit time-based audiovisual media.
AVPlayer can decode video streams (local or network) to Core Animation layers.

3.2. Video (and Image) Recording

3.2.1. Android

This is the responsibility of the Camera framework. An application can launch the Camera activity to request a picture or a video. But there are also APIs to control video and image recording directly: creating a camera with a preview based on the SurfaceView class. A SurfaceTexture can also be used as a texture for the preview, so it is possible to access the preview frames and apply effects to them. But it is not possible to access the full-resolution frames, only the preview (or I missed the feature; first surprise for me).
The recorded stream is saved into a file.
There are APIs for face detection.
The camera APIs include functions for taking pictures. For recording a video, the additional MediaRecorder class must be used; this class can only record video coming from the camera.
There is no API to record (encode) video coming from another source, for instance video generated by the application.
And the captured stream from the camera is not accessible (only the preview).
So the big limitation that surprises me is that post-processing by an application is limited: I can't access the video stream, and even if I could, I could not encode it (since MediaRecorder only encodes streams coming from the camera).
So if there are hardware accelerators on the SoC to encode video, they can't be used from the application since there is no API for that. The developer may have to use something like ffmpeg to encode a stream.
Have I missed something here? Or does it really mean that current video chat applications do their video processing on the ARM processor?
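For reference, the camera-only recording path described above looks roughly like this (a sketch; error handling is omitted and previewHolder is an assumed SurfaceHolder):

MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);   // video can only come from the camera
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setOutputFile("/sdcard/Movies/capture.mp4");
recorder.setPreviewDisplay(previewHolder.getSurface());
recorder.prepare();
recorder.start();
// ... later ...
recorder.stop();
recorder.release();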

3.2.2. iOS

Like on Android, there is a camera app that can be used from any other application to quickly record a video or take a picture. The OS cannot save to an SD card since there is none, but photos can be sent to Photo Stream in the cloud. Note that this is different from backup to the cloud, which is supported by both iOS and Android.
But with AV Foundation, iOS is more flexible. It is possible to configure capture sessions connecting different sources and sinks:

Capture Session
The stream can be post-processed by the developer, since it is possible to access the video frames and apply any processing to them.
It is also possible to generate a stream in software and encode it into a video file.

3.3. Video Editing

3.3.1. Android

The media framework is too limited: there is no way to encode a user-generated stream. With the OpenMAX AL support, you can create an MPEG-2 transport stream, but nothing is provided to create the content of the transport stream, such as the H.264 packets. So either those packets come from a file or the network (with a possible additional layer of encryption), or they have to be generated in software without any help from the hardware accelerators present on the platform.
So there is no way to read a video, apply custom application effects and re-encode it. Video editing also requires ways to edit the stream: remove parts of it, edit the metadata, etc.
Some APIs are available in the android.media.videorecorder package, but they are still private and not documented on the Android web site.
Those APIs introduce the concept of a MediaItem. Media items can be combined to create a video, and the combinations can apply effects and transitions. But a MediaItem is itself opaque: it can't be created or processed in software; it has to come from an already encoded video stream.

3.3.2. iOS

Through AV Foundation it is possible to do video editing from any application. AV Foundation provides high level APIs to edit a video stream: effects, transitions, editing, metadata, etc. In addition, it is possible to access all the frames of a video, process them in software and re-encode the stream. Video processing can even be done live while capturing from the camera: each frame can be processed in software and encoded to a file using formats supported by AV Foundation.

3.4. Effects

3.4.1. Android

Effects are provided by the android.media.effect package. Effects are applied to OpenGL ES 2.0 textures.
Effects can be used to improve a photo (red eye removal, histogram equalization), but also for fun or for more complex use-cases like face detection. Face detection can also be used directly (through the camera API) without requiring access to an OpenGL ES 2.0 texture.
Most effects are implemented as OpenGL ES shaders.
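Applying one of the built-in effects is done on a GL thread, texture in / texture out (a sketch assuming the input and output texture ids, width and height already exist):

EffectContext effectContext = EffectContext.createWithCurrentGlContext();
EffectFactory factory = effectContext.getFactory();

Effect grayscale = factory.createEffect(EffectFactory.EFFECT_GRAYSCALE);
grayscale.apply(inputTextureId, width, height, outputTextureId);   // runs as a GPU shader
grayscale.release();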
Internally, there is the possibility of creating a filter graph described in a custom language. The parsing and graph management are done by the android.filterfw packages, but those APIs are not available to the Android developer: they are still private and not documented in the Android 4.0 platform.

3.4.2. iOS

Effects are provided by the Core Image framework, which is less powerful than on OS X since in the iOS version it is not (yet) possible to write new filters. It is however possible to combine the existing ones and benefit from Core Image's just-in-time compilation and optimization.
In addition to Core Image, some low level image processing functions are provided by the Accelerate library: histograms, morphological filters, convolutions ...
Core Image applies effects to a CIImage: although a CIImage object has image data associated with it, it is not an image. You can think of a CIImage object as an image “recipe”: it has all the information necessary to produce an image, but Core Image doesn't actually render it until it is told to do so. This “lazy evaluation” approach allows Core Image to operate as efficiently as possible. A CIImage is a way to create a graph of filters.
It is possible to create a CIImage from several kinds of buffers: image buffers, video buffers, OpenGL textures, raw data, URLs, etc.
In addition to standard filters, Core Image provides transition filters that depend on a time parameter: useful for movies.
Finally, like on Android, there are more complex filters like face detection ...
The sets of filters on iOS and Android are different, although there is some overlap.

3.5. Comparison Summary

Video editing is not possible for the 3rd party developer on Android, since the relevant APIs are still private and limited. It is only possible to encode a video stream coming from the camera. More flexible use-cases must use libraries like ffmpeg and do the encoding and/or decoding on the ARM. The ICS API does not yet allow the 3rd party developer to benefit from any hardware accelerator to build video processing applications.
The Android effect API focuses on pictures. There is only one video effect: animated background. No transition effects are available to the developer; they are private to the video editing framework in the android.media.videorecorder package.
Effects can be efficiently combined on iOS using Core Image. On Android the effect graph is not visible to the developer and is still a private API, and it is not as sophisticated as the one in iOS (no JIT).
So Android looks a lot more limited than iOS here. But there is the possibility that I am totally wrong and have missed some important features in ICS.

4. Audio

In this section, I have discovered some big limitations of Android that surprise me. I am probably wrong and have missed some key features.
Also, the domain is big, with code on the Java side and on the native side. The native side is a bit confusing since the Android implementation is not fully compliant with the Khronos specification: some features are missing and some customizations have been added by Android. So it is easy to get things wrong without experimenting with the code.
That being said, now you can read the audio sections :-) and don't get mad if you think I am wrong.

4.1. Audio Playing

4.1.1. Android

Audio playback is done through the Android MediaPlayer, or through the AudioManager for system sounds.
In addition, the NDK supports OpenSL ES.
MIDI playback is based on OpenSL ES, which also provides support for 3D sound.
It is possible to synthesize PCM samples and play them using OpenSL ES through buffer queues. It is also possible to decode an audio stream to PCM buffers, with some limitations: decode-to-PCM supports pause and initial seek, but volume control, effects, looping and playback rate are not supported.
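On the Java side, the closest public equivalent for playing synthesized PCM is AudioTrack, not OpenSL ES (a sketch; the samples array is assumed to contain 16-bit PCM generated by the application):

int sampleRate = 44100;
int bufferSize = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        bufferSize, AudioTrack.MODE_STREAMING);

track.play();
track.write(samples, 0, samples.length);   // raw PCM in, no encoder available on this path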
Android does not yet provide a compliant implementation of OpenSL ES 1.0.1, because it does not implement all of the features required by any one of the profiles.
The supported audio outputs are:
  • Speaker
  • Earpiece
  • Headset
  • Bluetooth A2DP and SCO
  • Docking station
  • Memory (PCM and with some limitations)
Memory is not a real audio output, but there are ways to output audio into a memory buffer (with some limitations).
The developer can only control speaker on/off and Bluetooth SCO on/off; other audio routes depend on the connected peripherals. Connection to a docking station uses USB audio.
Lots of constants are defined in AudioSystem, but AudioManager must now be used instead, and AudioManager does not reuse all of the AudioSystem constants for defining audio outputs and inputs.
So I am confused here ...

4.1.2. iOS

iOS can play audio using the Media Player framework. There are classes to simplify common cases like playing alerts and short sound files. With those simple classes it is possible to play several sounds, but without precise synchronization; the playback level of each sound can be configured. With Audio Queue Services it is possible to have better control: synchronization, and playback level at buffer resolution.
3D sound is supported with OpenAL (not to be confused with OpenMAX AL; OpenAL can be seen as a more complete library than OpenSL ES).
Audio packets can come from many different sources, like memory and files.
Audio on iOS (and Mac OS X) is built on top of Core Audio, a low latency, high performance framework. Low latency requires kernel support (real-time threads).
The iOS audio architecture is very complete:

iOS Audio Architecture
The supported audio outputs are :
  • LineOut
  • Headphones
  • BluetoothHFP
  • BluetoothA2DP
  • BuiltInReceiver (speaker when in call)
  • BuiltInSpeaker (speaker when hands free)
  • USBAudio
  • HDMI
  • AirPlay
  • Memory
Memory is not really an output controlled by the audio session, but there are ways to output the audio to a memory buffer.
The audio output is mainly controlled through the audio session and depends on the connected peripherals. Some customization is possible for the developer: the speaker and Bluetooth outputs can be selected.

4.2. Audio Session

4.2.1. Android

Audio control (alerts, routes, notifications) is done through the AudioManager. For setting audio routes, ICS supports setSpeakerphoneOn() and setBluetoothScoOn(); the previous routing APIs are deprecated.
For the audio mode, ICS supports normal, in call, in communication and ringtone.
Android supports dedicated notifications for audio focus, and broadcast intents for more general notifications about audio routing.
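In code, the audio focus and routing controls look like this (a sketch; focusListener is an assumed OnAudioFocusChangeListener and the call is made from an activity or service):

AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);

int result = audioManager.requestAudioFocus(focusListener,
        AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);
if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    // Safe to start playback.
}

audioManager.setSpeakerphoneOn(true);                     // route audio to the speaker
audioManager.setMode(AudioManager.MODE_IN_COMMUNICATION); // one of the supported modes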

4.2.2. iOS

The audio session defines the audio behavior of an app:
  • Do you intend to mix your application's sounds with those from other applications (such as the iPod), or do you intend to silence other audio?
  • How do you want your application to respond to an audio interruption, such as a Clock alarm?
  • How should your application respond when a user plugs in or unplugs a headset?
iOS has 6 standard audio session categories, which can be customized if the standard behavior is not what the developer wants:
  • Three for playback
  • One for recording
  • One that supports playback and recording
  • One for offline audio processing
In this list, there is one particularly interesting category:
  • Measurement, where the automatic gain control on the microphone is disabled.

4.3. Audio Transform and effects

4.3.1. Android

Media players are part of an audio session, and the audio session is used to apply audio effects: an audio effect is applied to the audio content of an audio session. OpenSL ES provides volume, rate and pitch controls, music player effects such as equalizer, bass boost, preset reverberation and stereo widening, as well as advanced 3D effects such as Doppler, environmental reverberation and virtualization.
The way OpenSL ES works does not make it easy to extend. For instance, if you want to enable the bass boost effect, you have to enable the bass boost interface on the player object or on the output mix object; bass boost is not an object that you can add to an audio processing graph.
And the implementation of those objects is hidden from the developer.
So the only way to add custom effects is to escape from the OpenSL ES world using PCM buffers and process them on the CPU.
Android does not provide real-time threads for audio.
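On the Java side, the same engine-level effects are exposed by the android.media.audiofx package; enabling one on a player's audio session looks like this (a sketch; player is an existing MediaPlayer):

BassBoost bassBoost = new BassBoost(0 /* priority */, player.getAudioSessionId());
if (bassBoost.getStrengthSupported()) {
    bassBoost.setStrength((short) 600);   // range 0..1000
}
bassBoost.setEnabled(true);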

4.3.2. iOS

Audio units in iOS allow the building of complex audio processing applications:

Excellent responsiveness.

Because you have access to a realtime priority thread in an audio unit render callback function, your audio code is as close as possible to the metal. Synthetic musical instruments and realtime simultaneous voice I/O benefit the most from using audio units directly.

Dynamic reconfiguration.

The audio processing graph API, built around the AUGraph opaque type, lets you dynamically assemble, reconfigure, and rearrange complex audio processing chains in a thread-safe manner, all while processing audio.

Audio Processing Graph
iOS provides several default audio units; others can be developed by third party developers.

I/O Units

  • The Remote I/O unit provides low latency access to the hardware and format conversion between the hardware format and the application format;
  • The Voice-Processing I/O unit extends the Remote I/O unit by adding acoustic echo cancelation for use in a VoIP or voice-chat application. It also provides automatic gain correction, adjustment of voice-processing quality, and muting.
  • The Generic Output unit does not connect to audio hardware but rather provides a mechanism for sending the output of a processing chain to your application. You would typically use the Generic Output unit for offline audio processing.

Effect Units

  • The iPod Equalizer, the same equalizer used by the built-in iPod app. This audio unit offers a set of preset equalization curves such as Bass Booster, Pop, and Spoken Word.

Mixer Units

  • The Multichannel Mixer unit provides mixing for any number of mono or stereo streams, with a stereo output.
  • The 3D Mixer unit is the foundation upon which OpenAL is built. OpenAL is a higher level API.

4.4. Audio Recording

4.4.1. Android

OpenSL ES provides audio recording features, and the Java MediaRecorder uses OpenSL ES.
The audio sources are the microphone (tuned or not for voice recognition), voice call (downlink and/or uplink) and the camcorder.
It looks like there is no support for USB audio or Bluetooth as recording sources. Similarly, it does not look like it is possible to record audio generated by the application into a compressed format.
Audio synthesis can be done with OpenSL ES through buffer queues, but buffer queues only support PCM: there is no way to encode that data to other formats like AAC or MP3. Indeed, Android's OpenSL ES does not yet support the URI data locator for encoders (only for players).
Only the Android MediaRecorder can generate formats other than PCM, but it can't use PCM buffers as input.
So this is the second limitation that surprises me, and I may have missed the APIs: it looks like it is not possible to easily encode a stream to MP3 or a similar format.

4.4.2. iOS

Audio inputs in iOS:
  • LineIn
  • BuiltInMic
  • HeadsetMic
  • BluetoothHFP
  • USBAudio
  • Memory
So iOS does not give access to the audio of a voice call (for phone calls; a VoIP application could record the audio coming from the data network).
The selected audio input depends on the audio mode, the connected peripherals and some developer control (selection of BluetoothHFP); other inputs are selected according to the kind of peripheral connected.
It is possible to encode audio generated by an application to a compressed format; Core Audio provides all of the needed flexibility.

4.5. Comparison Summary

Like for video, in Android the only way to do custom audio processing is to do it on the ARM side with PCM. There is no way, through the Android API, to directly encode a PCM stream to other formats (MP3, AAC). So the encoding has to be done on the ARM with a library embedded in the application, which means that any HW acceleration available on the SoC is not used.
Custom audio effects can only be implemented as ARM processing on PCM samples.
Android audio does not use real-time threads and, as a consequence, an audio route involving the ARM will have a high latency.
iOS provides full flexibility: real-time threads, decoders, encoders (with HW support) and full customization of the audio processing chain.
Of course, as I said in the introduction of this section: I may be wrong and I am not totally sure of this conclusion. Additional checks and exploration of this domain will be needed.

5. Sensors and Location

5.1. Sensors

5.1.1. Android

Sensors can be read with the SensorManager. The developer registers for sensor events (value and accuracy changes).
Sensors are not disabled automatically (for instance when the screen turns off). They have a huge impact on battery life and Google strongly advises disabling them when not needed.
The API defines constants for the different types of sensors: ACCELEROMETER, AMBIENT_TEMPERATURE, GRAVITY, GYROSCOPE, LIGHT, LINEAR_ACCELERATION, MAGNETIC_FIELD, ORIENTATION, PRESSURE, PROXIMITY, RELATIVE_HUMIDITY, ROTATION_VECTOR and TEMPERATURE (all prefixed with TYPE_ in the API).
When registering for sensor events, the sampling rate can only be chosen among four presets: normal, UI, game, fastest. So, there is no fine control of the sampling frequency and no numerical value, which can be a problem if the developer wants, for instance, to implement sensor fusion algorithms.
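As an illustration, here is a minimal registration sketch (my own example) using one of those presets, with the listener unregistered in onPause() as advised above:

// Registering for accelerometer events with one of the predefined rate constants.
import android.app.Activity;
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class AccelActivity extends Activity implements SensorEventListener {
    private SensorManager sensorManager;
    private Sensor accelerometer;

    @Override protected void onResume() {
        super.onResume();
        sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
        accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        // No numerical frequency: only NORMAL, UI, GAME or FASTEST.
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this); // sensors drain the battery if left enabled
    }

    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0], y = event.values[1], z = event.values[2];
        // use the values...
    }

    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}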
Android provides a generic sensor API but, at the same time, this API contains several helper functions that depend on the kind of sensor.
Indeed, SensorManager contains APIs for computing an altitude from a pressure or for converting between different rotation representations (quaternion, rotation vector, matrix ...). It may have been cleaner to put those sensor-specific APIs somewhere else.
For motion, it is not clear whether Android already applies a sensor fusion algorithm or not.
For the magnetic compass, detecting a low accuracy and asking the user to move the device in a figure-8 motion to calibrate the sensor is the responsibility of the developer. The value returned by the API for the magnetic sensor is assumed to be the calibrated one, and when the reported accuracy is too low the developer should take an action. Calibrated means: removing as much as possible of the bias introduced by the device itself. From what I have read on the web to understand this a bit more, managing the magnetic compass on Android is a bit of a nightmare because of those calibration issues.

5.1.2. iOS

Some sensors are used for motion estimation and are part of the CoreMotion framework.
CoreMotion handles the accelerometer, the gyroscope and the magnetometer.
CoreMotion can return the raw magnetic field, which includes the Earth's field plus the surrounding field plus the device's own bias. The framework can also provide a filtered magnetic field with the device bias removed. iOS occasionally asks the user to move the device in a specific way to recalibrate the magnetic sensor and estimate the device bias. So, calibrating the sensor is not the responsibility of the developer, and the developer has access to both the unfiltered and the filtered values.
It is possible to define an update interval for each sensor. The interval is specified in seconds (a float value, so milliseconds are possible). The sensor updates can be handled in a block on a Grand Central Dispatch queue. Or, if periodic sampling is not needed, the latest sensor values are available as read-only properties of the CoreMotion manager object.
A Device Motion «sensor» is also available: it is the device motion resulting from the sensor fusion of the other motion sensors.
The proximity sensor is part of UIDevice and has a dedicated API. It looks like there is no public API to access the ambient light sensor.

5.2. Location

5.2.1. Android

You don't always need GPS-level accuracy. Moreover, GPS can be slow to get a fix when it can get one at all (satellites must be visible), and it is not very good for the battery. So, it is possible to request a location from the network provider (based upon Wi-Fi and cellular base stations). In that case, the client of the API requests how often the location must be reported (minimum time between updates) and the minimum distance change between updates.
The Android application must maintain its own best estimate based on the measurements coming from the different providers (network or GPS).
It is possible to register a proximity alert monitoring a circular region around a point. When the device is sleeping, the checks only occur every 4 minutes.
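A minimal sketch of those two Android APIs (my own example; the coordinates, radius, times and PendingIntent are placeholders):

// Network-based location updates plus a proximity alert on a circular region.
import android.app.PendingIntent;
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

public class LocationHelper {
    public void start(Context context, PendingIntent proximityIntent) {
        LocationManager lm = (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        LocationListener listener = new LocationListener() {
            public void onLocationChanged(Location location) { /* keep the best estimate */ }
            public void onStatusChanged(String provider, int status, Bundle extras) { }
            public void onProviderEnabled(String provider) { }
            public void onProviderDisabled(String provider) { }
        };
        // At most one update every 60 s, and only if the device moved at least 100 m.
        lm.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 60000, 100, listener);
        // Circular region of 500 m around a point, no expiration (-1).
        lm.addProximityAlert(48.8584, 2.2945, 500, -1, proximityIntent);
    }
}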

5.2.2. iOS

Location is managed with the CoreLocation framework. A main difference with Android is the Significant Location Change service. It uses the cellular radio to be notified of significant location changes, so you can track location with a very low impact on the battery: if you don't move, the cellular modem does not wake up the application processor. Without such a service, the application processor would have to measure the position periodically, as in Android, and notify the application when needed.
The other service (the Standard Location Service) is similar to what's available in Android, but the APIs to control it are different. You don't specify a GPS or network provider; you specify the required accuracy. So, there is no logic to implement on the application side to reach a specific accuracy: iOS transparently selects the best method based upon the required accuracy. You can also specify the minimum distance to move before receiving a notification.
In addition to that, iOS supports the monitoring of shape-based regions: geo-fencing. The regions are handled by the OS even if the application has been stopped; the application is relaunched when a region is entered or exited. iOS uses features like significant location changes to minimize battery usage. Regions are currently limited to circles.

5.3. Comparison summary

iOS focuses on battery usage for sensors and location. Sensors are forbidden in background mode (contrary to Android, where it is nevertheless strongly advised to stop them), and location can rely on the Significant Location Change service.
Motion in iOS is managed in a more coherent way (a dedicated class) and with sensor fusion and automatic calibration. Geo-fencing is more advanced in iOS (better battery usage and more complex regions monitored).
Android exposes some additional simple sensors (pressure, relative humidity, ambient temperature, light).

6. Security

6.1. Android

Android uses the standard Unix security model based upon user and group IDs. Each application is given a different user ID and thus cannot, by default, access data created by other applications.
Android also uses a capability model where access to some APIs is authorized only if the corresponding permissions have been granted at installation time. Those permissions are checked at different places in the system. Some permission checks are done through the standard Linux group mechanism (Bluetooth, Internet, mass storage); other checks are done in the System process. Applications reach the system APIs through an IPC implemented with the binder, and the System process checks the caller's permissions when it receives a remote call through the binder. The checks are spread across the APIs and done in user space.
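Here is a minimal sketch (my own example, with a hypothetical permission name) of how a privileged service can check the caller's permission when it receives a binder call:

// Checking the caller's permission while handling an incoming binder transaction.
import android.app.Service;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.os.IBinder;

public class MyPrivilegedService extends Service {

    // Called while servicing a request from a client process.
    void doPrivilegedWork() {
        // "com.example.permission.DO_WORK" is a hypothetical permission name.
        if (checkCallingPermission("com.example.permission.DO_WORK")
                != PackageManager.PERMISSION_GRANTED) {
            throw new SecurityException("Caller lacks the DO_WORK permission");
        }
        // ... perform the operation on behalf of the caller ...
    }

    @Override public IBinder onBind(Intent intent) {
        return null; // a real service would return an AIDL stub calling doPrivilegedWork()
    }
}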
A capability model can be seen as a fine-grained sandboxing mechanism. In the Linux world, sandboxing is generally implemented with SELinux or AppArmor, which provide a different model: rule-based execution.
Android comes with several Java classes for cryptography, digital signature management and DRM.
New DRM schemes can be integrated with the Android DRM framework.
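For instance, the standard javax.crypto classes are directly available. A minimal AES sketch (my own example, key management deliberately simplified):

// AES encryption of a byte array with the standard javax.crypto classes.
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class CryptoExample {
    public static byte[] encrypt(byte[] clearText) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key); // a random IV is generated here
        return cipher.doFinal(clearText);      // keep cipher.getIV() for decryption
    }
}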

6.2. iOS

iOS implements a different kind of sandboxing. The sandbox is a set of policies enforced by the kernel, described in a specific language (based on TinyScheme), that defines what an application is allowed to do or not (rule-based execution). A simple example coming from Mac OS X: the root user cannot delete or corrupt the Time Machine backup; only the Time Machine process can. So, the security model based upon user and group IDs is replaced by a more flexible model based upon mandatory access control and security rules.
Here is a very simple example of a sandbox security profile:
;;
;; named - sandbox profile
;; Copyright (c) 2006-2007 Apple Inc. All Rights reserved.
;;
;; WARNING: The sandbox rules in this file currently constitute
;; Apple System Private Interface and are subject to change at any time and
;; without notice. The contents of this file are also auto-generated and not
;; user editable; it may be overwritten at any time.
;;
(version 1)
(debug deny)
(import "bsd.sb")
(deny default)
(allow process*)
(deny signal)
(allow sysctl-read)
(allow network*)
;; Allow named-specific files
(allow file-write* file-read-data file-read-metadata
(regex "^(/private)?/var/run/named\\.pid$"
"^/Library/Logs/named\\.log$"))
(allow file-read-data file-read-metadata
(regex "^(/private)?/etc/rndc\\.key$
"^(/private)?/etc/resolv\\.conf$ "
"^(/private)?/etc/named\\.conf$ "
"^(/private)?/var/named/"))
As a consequence, in iOS, all user applications run with the same «mobile» user ID. Access to some specific features can only be done on behalf of a more privileged process (with a different security profile). So, the mechanism is similar to the Android way (an IPC to a more privileged process), but the security rules are enforced by the kernel.
iOS comes with APIs for cryptography and digital signature management, but the DRM scheme is not customizable: only Apple's FairPlay DRM is supported.
iOS is supporting data protection for files : Data protection is using the built-in encryption hardware present on specific devices (such as the iPhone 3GS and iPhone 4) to store files in an encrypted format on disk. While the user's device is locked, protected files are inaccessible even to the app that created them. The user must explicitly unlock the device (by entering the appropriate passcode) at least once before the app can access one of its protected files.
Data protection is just an extended attribute to set on the file; everything is then transparently handled by the OS. A file attribute is something like «creation date» or «file size». The iOS file system (like the Mac OS X one) supports arbitrary extended attributes to record any kind of file metadata.

6.3. Comparison summary

There are implementation differences but the philosophy is similar.
iOS is more paranoid: a stronger sandboxing is enforced, and traceability is guaranteed since all applications loaded on the platform must be signed by Apple (they come from the App Store) in addition to being signed by the developer.
In both OS, there is the concept of privileged server / services doing some work on behalf of the application.
iOS provides some abstractions to ease data encryption.

7. Speech

7.1. Android

Android has text-to-speech and speech recognition, and the APIs are available to developers. The recognition engine is used for dictation. Through Intents, it is possible to launch a voice web search and get the results. It looks like no other voice commands are exposed to developers through the speech APIs.
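A minimal sketch of both APIs (my own example): launching the recognition activity for dictation and speaking a string with TextToSpeech.

// Speech recognition through an Intent and text-to-speech through TextToSpeech.
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;
import android.speech.tts.TextToSpeech;

public class SpeechDemoActivity extends Activity implements TextToSpeech.OnInitListener {
    private static final int SPEECH_REQUEST = 1;
    private TextToSpeech tts;

    void startDictation() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        startActivityForResult(intent, SPEECH_REQUEST); // results arrive in onActivityResult()
    }

    void startSpeaking() {
        tts = new TextToSpeech(this, this); // onInit() is called when the engine is ready
    }

    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            tts.speak("Hello from Android", TextToSpeech.QUEUE_FLUSH, null);
        }
    }
}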

7.2. iOS

iOS (as of 5.0) has no API for text-to-speech or speech recognition. Those features are totally under the control of the OS.

8. External peripherals

8.1. Bluetooth

8.1.1. Android

Android offers a lot of visibility on the Bluetooth standard. There are APIs to scan for accessories, pair with them, open sockets or work with Bluetooth profiles.
Some Android phones ship with support for Bluetooth 4.0 Low Energy accessories instead of NFC (for example the Droid Razr).
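A minimal sketch of that visibility (my own example): discovering devices and opening an RFCOMM socket using the standard Serial Port Profile UUID.

// Scanning for devices and opening an RFCOMM socket to one of them.
import java.util.UUID;
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothSocket;

public class BluetoothClient {
    private static final UUID SPP_UUID =
            UUID.fromString("00001101-0000-1000-8000-00805F9B34FB"); // Serial Port Profile
    private final BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();

    public void scan() {
        // Results are delivered asynchronously as BluetoothDevice.ACTION_FOUND broadcasts.
        adapter.startDiscovery();
    }

    public BluetoothSocket connect(BluetoothDevice device) throws java.io.IOException {
        adapter.cancelDiscovery(); // discovery slows down a connection attempt
        BluetoothSocket socket = device.createRfcommSocketToServiceRecord(SPP_UUID);
        socket.connect();          // pairing is requested by the system if needed
        return socket;             // then use socket.getInputStream() / getOutputStream()
    }
}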

8.1.2. iOS

The developer can select Bluetooth as an input or output of the audio chains.
iOS 5.0 added an API to support the new Bluetooth 4.0 Low Energy accessories: the CoreBluetooth framework provides the abstractions needed to interact with them.
Other uses of Bluetooth go through the External Accessory framework, which also handles external accessories connected to the 30-pin dock connector (USB). Those external accessories (Bluetooth or USB) must conform to a protocol defined by Apple. The External Accessory framework is described in the USB section.

8.2. NFC

8.2.1. Android

There are two major use cases when working with NDEF data on Android:
  • Reading NDEF data from an NFC tag
  • Beaming NDEF messages from one device to another with Android Beam™
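A minimal sketch of the tag-reading case (my own example): the activity receives an ACTION_NDEF_DISCOVERED intent carrying the NDEF messages of the tag.

// Reading NDEF messages delivered to the activity by the NFC dispatch system.
import android.app.Activity;
import android.content.Intent;
import android.nfc.NdefMessage;
import android.nfc.NfcAdapter;
import android.os.Parcelable;

public class TagReaderActivity extends Activity {
    @Override protected void onNewIntent(Intent intent) {
        super.onNewIntent(intent);
        if (NfcAdapter.ACTION_NDEF_DISCOVERED.equals(intent.getAction())) {
            Parcelable[] raw = intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
            if (raw != null) {
                for (Parcelable p : raw) {
                    NdefMessage message = (NdefMessage) p;
                    // message.getRecords() gives the NDEF records of the tag
                }
            }
        }
    }
}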

8.2.2. iOS

NFC is not currently supported on iOS.

8.3. USB

8.3.1. Android

Android supports both host and device (accessory) modes. It gives some visibility on the USB protocol itself: enumeration, control endpoint, bulk transfers.
Android also defines a new USB class for developing Android accessories: the Android accessory protocol.
USB audio is supported as an output for docking stations. It is not clear whether it is also supported as an input.
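A minimal host-mode sketch (my own example; permission handling and error checks are omitted, and the first endpoint is assumed to be a bulk OUT endpoint):

// Enumerating USB devices and doing a bulk transfer with the host API.
import java.util.HashMap;
import android.content.Context;
import android.hardware.usb.UsbDevice;
import android.hardware.usb.UsbDeviceConnection;
import android.hardware.usb.UsbEndpoint;
import android.hardware.usb.UsbInterface;
import android.hardware.usb.UsbManager;

public class UsbBulkExample {
    public int sendBulk(Context context, byte[] data) {
        UsbManager manager = (UsbManager) context.getSystemService(Context.USB_SERVICE);
        HashMap<String, UsbDevice> devices = manager.getDeviceList();    // enumeration
        UsbDevice device = devices.values().iterator().next();
        UsbInterface intf = device.getInterface(0);
        UsbEndpoint endpoint = intf.getEndpoint(0);                      // assumed bulk OUT endpoint
        UsbDeviceConnection connection = manager.openDevice(device);    // user permission required first
        connection.claimInterface(intf, true);
        int sent = connection.bulkTransfer(endpoint, data, data.length, 1000); // 1 s timeout
        connection.releaseInterface(intf);
        connection.close();
        return sent;
    }
}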

8.3.2. iOS

iOS gives access to USB in several different ways. Some are controlled by the OS, like USB audio / MIDI or access to USB mass storage devices (for photos). USB audio / MIDI peripherals are made available in the audio chains, but the protocol is hidden from the developer.
Other uses of USB go through the External Accessory framework. Communication with accessories is done through input and output data streams, and the underlying physical layer (Bluetooth or USB) is hidden. Applications in the App Store declare the accessory protocols they support; when an accessory is connected, iOS launches the corresponding application or suggests one from the App Store.
External accessories must support an Apple defined protocol : http://developer.apple.com/programs/mfi/

8.4. Comparison Summary

Android gives more visibility on the kind of interface (Bluetooth or USB). It requires more work from the developer, but there is much better support for the raw protocols, so it is the platform of choice for peripheral development.
For USB, developers are limited to bulk transfers: USB audio is not controllable from the USB API, which focuses on bulk and interrupt transfers. Since a big part of the USB protocol is exposed, Android does not require peripherals specifically designed for it. But it puts on the end user the burden of finding the application with the right USB «driver» to control the connected peripheral.
iOS abstracts the interface (Bluetooth or USB) and requires a specific protocol from the peripheral, except in the case of audio. The main reason for this specific protocol is to ease discovery: when a peripheral is connected, the applications which can handle it are proposed to the user (I have never tested it).
USB audio use cases are handled by the OS and the details are hidden from the developer as much as possible.
NFC or Bluetooth 4.0 Low Energy peripherals? Android supports NFC in its API, but not all recent Android phones have decided to include it; some have instead decided to support Bluetooth 4.0. iOS has so far focused on Bluetooth 4.0 for Low Energy peripherals (also recently introduced in the latest Macs).

9. Data Management

Data management is independent of the features provided by the SoC, contrary to domains like audio or video which are highly dependent on them.
Data management is the set of abstractions used to encode the data model of an application, save it and back it up.

9.1. Android

9.1.1. Data Model

In Android, the data model of an application is a graph of Java objects. Android provides no specific abstractions or tools for specifying the data model. Of course, there are useful implementation tools (lists, queues, dictionaries), but the most difficult part of a data model is defining the meaning of the data: there are a lot of properties and invariants that must be preserved in a graph of objects modelling a domain of activity.
An Android application can use SQLite and try to specify the data model as SQL tables. But SQLite is not a fully relational database, so the invariants and properties still have to be enforced by Java code. In addition, there is the usual mismatch between a relational model and an object model.
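A minimal sketch of that situation (my own example, with a hypothetical «notes» table): the schema lives in SQL, but an invariant that SQLite cannot easily express (here, a non-empty title) still has to be checked in Java.

// Part of a data model stored in SQLite, with an invariant enforced in Java.
import android.content.ContentValues;
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class NotesDbHelper extends SQLiteOpenHelper {
    public NotesDbHelper(Context context) { super(context, "notes.db", null, 1); }

    @Override public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE notes (_id INTEGER PRIMARY KEY, title TEXT NOT NULL, body TEXT)");
    }

    @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS notes");
        onCreate(db);
    }

    public long insertNote(String title, String body) {
        if (title == null || title.length() == 0) {
            // NOT NULL does not forbid empty strings: this invariant lives in Java.
            throw new IllegalArgumentException("a note must have a non-empty title");
        }
        ContentValues values = new ContentValues();
        values.put("title", title);
        values.put("body", body);
        return getWritableDatabase().insert("notes", null, values);
    }
}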
I am sure there are lots of frameworks in the Java world providing that kind of feature (if some readers can point me to a few good ones?), but no standard one comes with the Android SDK.

9.1.2. Synchronization and backup

Android does not provide any specific tools or abstractions for synchronizing documents between devices; each application must implement its own system.
There are some backup abstractions that the application must explicitly use to support backup. Said differently: the backup is the responsibility of each application instead of being the responsibility of Android. Android provides the API, the framework and the events, but the application must implement the backup when asked by the framework.
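A minimal sketch of that application-side work (my own example; «settings» is a hypothetical SharedPreferences file and the agent must be declared with android:backupAgent in the manifest):

// A backup agent provided by the application itself.
import android.app.backup.BackupAgentHelper;
import android.app.backup.SharedPreferencesBackupHelper;

public class MyBackupAgent extends BackupAgentHelper {
    @Override public void onCreate() {
        // Back up one SharedPreferences file under the key "prefs".
        addHelper("prefs", new SharedPreferencesBackupHelper(this, "settings"));
    }
}

The application also has to call BackupManager.dataChanged() whenever its data changes so that the framework schedules a backup pass.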

9.2. iOS

9.2.1. Data Model

iOS has the same standard tools as Android: a graph of objects implemented in Objective-C, and libraries of implementation abstractions (queues, lists, dictionaries, etc.).
SQLite is also supported, with the same relational/object mismatch problem.
But iOS also comes with a very powerful system: CoreData. CoreData is a huge and sophisticated framework that is difficult to summarize in a few words.
The CoreData framework provides generalized and automated solutions to common tasks associated with object life-cycle and object graph management, including persistence. Its features include:

Change tracking and undo support.

Core Data provides built-in management of undo and redo beyond basic text editing.

Relationship maintenance.

Core Data manages change propagation, including maintaining the consistency of relationships among objects.

Futures (faulting).

Core Data can reduce the memory overhead of your program by lazily loading objects. It also supports partially materialized futures, and copy-on-write data sharing.

Automatic validation of property values.

Core Data's managed objects extend the standard key-value coding validation methods that ensure that individual values lie within acceptable ranges so that combinations of values make sense.

Schema migration.

Dealing with a change to your application's schema can be difficult, in terms of both development effort and runtime resources. Core Data's schema migration tools simplify the task of coping with schema changes, and in some cases allow you to perform extremely efficient in-place schema migration.

Optional integration with the application's controller layer to support user interface synchronization.

Core Data provides the NSFetchedResultsController object on iOS.

Full, automatic, support for key-value coding and key-value observing.

In addition to synthesizing key-value coding and key-value observing compliant accessor methods for attributes, Core Data synthesizes the appropriate collection accessors for to-many relationships.

Grouping, filtering, and organizing data in memory and in the user interface.

Automatic support for storing objects in external data repositories.

Sophisticated query compilation.

Instead of writing SQL, you can create complex queries by associating an NSPredicate object with a fetch request. NSPredicate provides support for basic functions, correlated subqueries, and other advanced SQL. With Core Data, it also supports proper Unicode, locale-aware searching, sorting, and regular expressions.

Merge policies.

Core Data provides built in version tracking and optimistic locking to support automatic multi-writer conflict resolution.
Core Data is not a relational database or a relational database management system (RDBMS).
Core Data provides an infrastructure for change management and for saving objects to and retrieving them from storage. It can use SQLite as one of its persistent store types. It is not, though, in and of itself a database. (To emphasize this point: you could for example use just an in-memory store in your application. You could use Core Data for change tracking and management, but never actually save any data in a file.)
The CoreData schemas are defined graphically and the schema description is saved as an asset of the iOS application package.

CoreData schema editor

CoreData fetch request editor
A fetch request can be used from the application code to fetch objects corresponding to a query from the object graph (with lazy loading from mass storage if needed).

9.2.2. Synchronization and backup

The backup is the responsibility of iOS. It is transparent to the applications, which can only specify that some files do not need to be saved (like caches).
iCloud provides lots of features for synchronization between devices. Conflict management (and merging) is the responsibility of the application.
CoreData can be used with an SQLite store in iCloud. In that case, no SQLite database is transmitted: only log files, to ensure that the state of the remote SQLite store stays synchronized with the local one.

9.3. Comparison Summary

Clearly, iOS is much more advanced on the topic of data management, which is often a very time-consuming and tricky part of application development.

10. Conclusion

I have not covered all the domains but I think I have covered the key ones.
Lots of very famous tech web sites regularly publish posts titled "Why X is the best mobile phone OS" or "Why is A > B" … So, now that I have answered the "Why" a bit, I am sure you want to know what X, A and B are.
Do you really think that I am going to summarize a comparison between two complex OS with just one bit of information? It does not make sense, and I dislike posts like that from bloggers who pretend to be journalists.
In complex systems, there are different possible choices and different tradeoffs … Android and iOS have made some different choices. They do not target exactly the same people, and there is room for both OS in the market.
The Android screenshot comes from the excellent «Android: A visual history» on The Verge.
The iOS architecture screenshots come from the public iOS documentation.
The sandbox file example comes from Mac OS X.
The Android screenshot and the extracts from the Apple documentation and OS are not covered by the Creative Commons licence of this site. I have included them assuming it is a fair (and limited) use as an illustration.
So, which OS is the best?
AAPL GOOGL MS
