
How TV Should Be

Let’s say we wanted to reinvent TV (like everyone expects Apple to) – how would we go about it?

Right now I subscribe to some number of cable packages (let’s count the Beeb’s licence fee as one of those…).  I may also download from the various iPlayer-alikes, and buy content (mainly from iTunes in my case).

What if Apple (or a rival) could convince the TV companies to allow people to have pay-as-you-go subscriptions – something along these lines:

At any time you may purchase one month of our channel’s programming – which you can watch whenever you want during the month – but when the month’s up, it’s gone.  That isn’t too different from the iPlayer and standard cable models.

Also, you can buy any of our top programmes – either as a rental (a week to watch, but it vanishes 48 hours after you’ve started watching… in the same vein as Sky Box Office) or for good (as with iTunes).

And since we let you do that, why not let you buy or rent films…

We would probably start getting new channels – after all, Netflix fits into the new monthly model.

We could also add ‘Oyster card’-like deals (if you rent enough of our content, everything else you buy from us over a given period is free).
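The ‘Oyster card’-like capping idea above could be sketched in a few lines – a hypothetical illustration only, with all prices and caps invented:

```python
# Hypothetical sketch of Oyster-style capping for TV rentals: once a
# viewer's rental spend in a billing period reaches the cap, further
# rentals in that period are free.  Prices and caps are invented.

def charge_for_rental(spent_so_far, rental_price, period_cap):
    """Return the amount actually charged for one rental."""
    if spent_so_far >= period_cap:
        return 0.0                      # cap already reached: rental is free
    # Charge only up to the cap, never beyond it
    return min(rental_price, period_cap - spent_so_far)

def bill_period(rental_prices, period_cap):
    """Total billed for all of a period's rentals under the cap."""
    total = 0.0
    for price in rental_prices:
        total += charge_for_rental(total, price, period_cap)
    return total
```

So three £3 rentals under a £7 cap cost £3, £3, then £1 – and everything after that is free.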

All we need is for the TV set to have a single payment mechanism which everyone agrees to use.  Which is where the monetisation strategy comes in.  Do this well and, in time, you’ll be able to give widescreen tellies away for free.

The TV could recommend programmes that you can watch for free – or programmes you would like which will cost you money (there are ad dollars to be made with the latter).   Add the social media features I’ve suggested before, and you have a new TV model, one which democratises the platform in a range of interesting ways.

But of course, this isn’t the future.  Because in the future, the new programme will be the app.  We know this, but it’ll be a long time before the channels figure it out.  If the right TV manufacturer gets this right now, they can get everyone on board… and then become the app store!

Windows Out To The Clouds

It was probably at some point between the release of the iPhone and the release of the iPad that the change started happening.  Until that time, we had windows on our computers – be they Windows, Mac or Linux.  Sure, our small digital devices didn’t have windows, but they were small, trivial things… or, in the case of Symbian devices, they at least looked like they wanted to have windows.  But on the iPad we have no windows.  And in the next version of Windows, the windows have been demoted to sit behind the windowless Metro interface (except on server editions, where the recommendation is to drop windows entirely and stick to the command line).

So where have the windows gone?

It used to be that, if we wanted to do many things at once, we did one in each window.  Each web page had its own window.  Each application had its own window.  But gradually we realised that all these windows were becoming hard to manage.  For web browsers, we invented tabs, while more and more people moved to doing email on their phones.  Gradually we took control back.

What is happening is that we are keeping fewer windows open on each screen.  And effectively the screens are becoming the new windows – we now flick between tasks not by alt-tabbing, but by picking up – or focusing on – the display showing the thing we want to look at.

It’s also clear that we are using different types of display for different things.  I have two monitors for programming on.  One has a landscape orientation – which is useful for some things – whereas the other has a portrait orientation, which is useful for displaying long documents and web pages, not to mention getting lots of code on the screen when I’m programming.  I also have a phone.  It’s smaller and less obtrusive, but it tells me when I have new emails and tweets – and it often acts as a timer and a desk clock.  Unlike my monitors, it moves with me wherever I am.  Also on my desk is a Kindle – it has an e-ink screen, which is much more comfortable if I want to read books (unfortunately, it is terrible for reading reference books – but that’s a different story).

Increasingly too, all these devices are connecting to the same data.  My email is on my phone, but I can also get to it by logging into Gmail in a web browser – on my monitors at work, and even – if absolutely necessary – on my Kindle.  The same is true of many documents (which I tend to store in Google Docs).  My phone has Citrix Receiver, which lets me get to various parts of my work IT infrastructure.

Less and less does the computer under my desk seem a particularly useful thing.  In the old days, I would install software on it.  Now, not so much – more and more of what I use runs on someone else’s CPU somewhere out there in the cloud.  The day can’t be far off when I abandon any idea of installing software on my own PC and, if I need to install software, install it on a virtual machine hosted somewhere in the cloud.

So our devices are more and more becoming windows on the cloud.  Our apps and our data live out there, in the nebulous somewhere of data centres and network-connected disks.  Ideally, I don’t have to know where my data is at all – I just have to hope it can get to wherever the application which wants to process it is.

Increasingly I doubt the software on the devices we use matters all that much.  The hardware matters – it can change a device’s usability.  And maybe the device drivers which expose this hardware to the world matter.  But once the cloud can see the device driver, the cloud can do what it wants, and the software on the device stops mattering.

We’re not at this place yet – but it’s where we are going.  And as the cost of devices drops, we’ll find ourselves there.

For this to work we’ll need:

  • Ways to access the full capabilities of a device from the cloud.  Standardised ways, where possible.
  • Ubiquitous low cost network connectivity
  • Better data sharing between cloud applications
  • Better data movement around the cloud (either through clever caching, or just higher bandwidth)
  • Some way to pay for everything we use – which is to say, a way to get a ‘phone bill’ for all of this
  • More cheap, small, portable, capable devices to act as windows on the cloud.


A Sandbox In Every Walled Garden

The Apple world seems to be throwing a wobbly at the announcement that sandboxing will be required for all apps in the Mac OS App Store.  People are discussing what might be a better solution, and whether we might eventually only ever be able to install applications from the walled garden of the Apple store.

It all gave me a sense of déjà vu.  Weren’t we having the same thoughts over in Microsoft land a few months ago?  Are Apple developers really that out of touch with what is going on on other platforms that they haven’t noticed the parallels?  Apparently yes – I haven’t seen the words Metro or WinRT mentioned in this discussion.  Which is odd, because surely how the competition is trying to solve the same problem – and going down the paths which have Apple devs so up in arms – can feed into their strategy for how to approach our brave new world.

So, herewith, a cheatsheet aimed at showing the parallels:

What Have Apple Announced?

Mac OS apps for the Mac OS App Store will have to implement sandboxing – which is to say, they will have to list a set of capabilities that their app requires, and then not make any calls which require capabilities they have not listed.  It appears (from people saying that this is currently buggy and affects AppleScript) that this is enforced at runtime.
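For a concrete flavour, the capability list for a sandboxed Mac app takes the form of a small property list of entitlements.  A rough sketch – the exact set of keys an app needs will vary, and this fragment is illustrative rather than a complete entitlements file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Opt the app into the sandbox -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- Each capability the app uses must be declared up front -->
    <key>com.apple.security.network.client</key>
    <true/>
    <key>com.apple.security.files.user-selected.read-write</key>
    <true/>
</dict>
</plist>
```

Any call requiring a capability not declared here is the kind of thing the runtime enforcement would refuse.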

What Have Apple Not Announced That People Are Scared Of?

It might seem only a short step from there to declaring the App Store the only way to install apps on your Mac.  And only a short step from there to giving Apple a kill switch for every Mac OS application.

What Have People Suggested As Alternatives?

Certification.  Specifically, having per-developer certificates signed by Apple, so that if someone does something bad, Apple can revoke their trust in that developer’s certificate.  And ditching the whole sandboxing idea.

What Have Microsoft Announced?

If you want to use the new Metro UI, you can only use a subset of Win32 calls alongside calls to the new WinRT runtime.  Furthermore, you must specify, at compile time, the set of capabilities your app will be using.
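By way of illustration, a Metro-style app declares those capabilities in its package manifest.  A sketch of the relevant fragment, going by the Windows 8 developer previews (the capability names shown are real manifest values, but which ones a given app needs is invented here):

```xml
<!-- Fragment of a Package.appxmanifest: the app declares, at build
     time, every capability it intends to use. -->
<Capabilities>
  <Capability Name="internetClient" />
  <Capability Name="musicLibrary" />
</Capabilities>
```

An app that then made a call requiring, say, webcam access would fail the certification tests described below.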

The only way to install your apps will be via the new Windows App Store.  (This isn’t strictly true… there is a way to install developer-signed apps on a developer’s own machines, and we are expecting to hear of a way for enterprises to install apps, which will presumably amount to more than just the App Store.)

To get your apps into the App Store, your app will have to pass a set of tests.  Microsoft will run these tests, but they also provide them to developers, so that developers know whether they will pass.  Once MS have validated that you pass the tests, MS will sign your app and put it in the App Store.

One of the tests that MS will provide ensures you make no calls which are not allowed by the set of capabilities you have requested.  In short, the sandboxing is enforced prior to signing, rather than at runtime.  (This has some security issues, specifically in the area of self-modifying code.  I presume MS plan to handle this via legal and social means, rather than technical ones.)

Oh, and all your old Win32 Apps will continue to run unaffected – but not via Metro.  You will even be able to install Win32 apps via the App Store.

Have Microsoft Ever Done Anything Like This Before?

Yes.  We’ve had driver signing for years – each driver type has its own set of functions it is allowed to call, and there are any number of testing hoops you have to jump through in order to pick up a signature.  Just check my Twitter stream to see how much pain WHQL causes me every so often.

This means Microsoft have experience of the real-world implications of trying to manage a certification and signing scheme.  The main implication being “for every rule we lay down, there are exceptions” – generally many exceptions; there seem to be as many special cases as there are drivers.  Half of the fun of passing WHQL is convincing Microsoft that a set of rules they require drivers to obey is wrong, or insufficient, or just plain shouldn’t apply to your driver.  The good thing is that Microsoft can usually be convinced.  Eventually.

Now, I’ve no idea if Microsoft’s driver signing experience will feed into their app signing experience, but there are enough similarities between the processes for me to guess there has been some communication between departments on this issue.

Are Microsoft Doing Anything That Seems Wrong?

The biggest problem seems to be requiring that things are signed.  Because once you require apps to be signed, you need to sign every script you run (or just sign the scripting languages – in which case you’ve lost most of the security you were aiming for).  It looks like the solution to this involves dev certificates, which so far are only available via Visual Studio.  So all development will involve Visual Studio in one way or another.  (Incidentally, PowerShell has had signed scripts since day one – maybe there is some intention to integrate that architecture – but I don’t see a straightforward path.)  It may be that all scripting will stay on the Win32 side of the fence.

Is There Anything Apple Could Learn From Microsoft?

Firstly, MS are allowing old apps to continue to work with no changes.  There is no Win32 walled garden.  All the changes are only for people who want to use the new WinRT hotness.  Now, we’ve no idea if anybody will want to use WinRT, but MS do seem to be providing us with a world where people get to make the choice between two different environments.

MS are also allowing the sale of old-style apps via their App Store.  It seems that this will be more ‘providing a link to your company’s website’ and less ‘a fully integrated install experience’ for Win32 apps.  As far as I can see, it’s a way MS can make money while still saying ‘do this at your own risk’.  I’m guessing here that anything distributed this way may have to be an MSI – if so, you might just be giving the App Store the ability to uninstall apps which turn out to be dangerous.

MS realise that there are exceptions: that app stores and enterprises won’t mix (think bespoke software), that admins have to have some control over what users install, that perhaps some software won’t fit into the model they are testing for.  Apple have always wanted to provide the user with the best experience, whereas MS are more about providing the developer with the best way to ship their software.  Apple is about fitting in around how Apple work, whereas MS is more about MS fitting in around how your application works – and we see this in the attitude towards sign-time vs runtime tests for sandboxing.  With Metro, MS is trying to learn from Apple; Apple could probably stand to learn a few things from MS too.

Are There Any Other Thoughts?

Moving to OS X bought Apple a whole load of developers who wanted Unixy tools on a reliable machine with a nice UI.  OS X comes with many, many scripting languages which are able to access the core of the system and do everything a compiled program can do.  Do we honestly think that Apple are going to restrict those scripting languages so that scripts can no longer access the system?  That one move would cause a major rupture in the dev community and harm Apple significantly.  MS can get away with it (if you want to develop for Metro, use IronPython on top of the CLR and you’re happy – if you want to run a script, there’s Win32), but without a new hotness to tie all these changes to, Apple would just be taking developers’ favourite toys away – and suffering the tantrums that follow.

Of course, most Mac users don’t know or care about what a scripting language is.  These will be the people who use the App Store.  Just as iTunes makes it easier to get music (so fewer and fewer people bother buying CDs and ripping them to fill their iPod), the App Store makes it easier to get your apps – your average user won’t consider getting apps any other way.  There is no need to restrict the techie few that the Mac software ecology depends upon.

And Is Signing the Answer?

Signing isn’t a flawless solution to all your problems.  Assume you have a killer app your system depends upon – let’s say ‘Photoshop’ for the Mac.  Assume the manufacturers of Photoshop were to bring out another piece of software Apple didn’t like (I’ll call it ‘flush’).  If Apple wanted to revoke the signature for flush, they could either revoke the signature on every release of flush ever made (and on the new applications ‘flish’ and ‘flosh’ that might be submitted thereafter), or they could revoke the developer’s signature and lose their killer app (and annoy many customers in the process).
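The two revocation options above can be sketched as a toy trust check – purely illustrative, with the developer and app names invented:

```python
# Hypothetical sketch of the two revocation granularities discussed
# above: per-app revocation and per-developer revocation.  Apps are
# identified by (developer, app_name); all names are invented.

def is_allowed(developer, app_name, revoked_apps, revoked_developers):
    """An app runs unless its own signature, or its developer's, is revoked."""
    if developer in revoked_developers:
        return False    # nuclear option: kills everything the developer ships
    if (developer, app_name) in revoked_apps:
        return False    # surgical option: kills just this app
    return True
```

The trade-off falls straight out of the code: revoking only (‘adobe’, ‘flush’) leaves the hypothetical ‘flish’ untouched, while revoking ‘adobe’ wholesale takes ‘Photoshop’ down with it.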

Signing also requires that certificate lists are kept up to date on every system involved (or that you have reliable internet connectivity all the time).

But signing does allow for technical, legal and social means of deciding which apps to allow to run.

Most notably, though – signing is really, really irritating to have to do all the time – especially if you’re scripting.  I can’t see it as a real solution to the problem if you want to keep developers hanging around.  What you need is to just let people write their scripts and get on with using their machines…  By all means make developers jump through some sort of hoop once to be able to script and install their own software (let’s say by joining a group, or turning off a particular feature of their user account), but don’t come up with a technical solution that will only irritate.