Archive for the ‘Apple’ Category:


The Rebirth of the PC

People are talking about the death of the desktop PC, while Rob Enderle is talking about its rebirth.  I’m conflicted about both stories.  I think they miss the trends which will really shape how we come to think of the PC in the future.

Looking at the market now, there are desktops, laptops, tablets and phones.  We also have vague attempts to cross genres, with Windows 8 trying to reach between tablet and laptop, while iOS and Android reach between tablet and phone.  But this isn’t the future; this is a market still trying to figure itself out.  I’m going to limit my predictions to a particular segment of the market – the segment currently dominated by the desktop PC.

The reasons we have desktops are:

  • They are more powerful than laptops
  • They are tied to a single desk, so that management can control where we work (and where our data stays)
  • They are more comfortable to use than laptops or tablets (at least for keyboard entry and pixel perfect design)

However, the game is changing.  The question of power is becoming moot.  Machines seem to be gaining power (or reducing power consumption) faster than applications are taking it up.  There is less and less need for more powerful machines.  And where more powerful machines are needed in a company, it doesn’t make sense to hide them under individuals’ desks.  It makes more sense to put them in the datacenter, allocating processing power to the people who need it.

In short, we don’t need computers under our desks, we need reasonably dumb clients.  Network computers.  Oracle could have told you that years ago.

That said, dumb clients never quite seem to happen.  And the reason is that smart is so cheap there is no point in tying yourself down, limiting yourself to this year’s dumb.

Tying the computer to the desk is increasingly seen as a limitation rather than a benefit.  It doesn’t just prevent working from home; it also prevents hotdesking and simple team re-orgs.  More interesting to companies are technologies which let them keep data in controlled locations – and again, the same technologies which let people work from home also keep data in the cloud, locking it there so that it is harder to misuse.  This argument for the desktop PC is gone.

Comfort is more important.  But by comfort we specifically mean comfort for typists and mouse operators.  Tablets are going to cut into the market for mouse operators, and combinations of gesture and speech technologies will gradually reduce the advantage of the power user’s keyboard.  Text entry will probably remain best done by keyboard for the time being.  But the comfort aspects are changing.  My bet is we will see an increase in big screens angled for touch rather than display, while tablets are used for on-screen reading.  Keyboards will remain for people who do a lot of typing, but on-screen keyboards will be commonplace for the everyday user.

So – by my reckoning we will have (probably private) cloud data, with applications running on virtual machines which live in the datacenter, distributed to big screens (and still some keyboards) on users’ desks.

This isn’t a particularly impressive point of view.  It’s the core of the business plans of a number of companies playing in that field.

But what is missing from this view is the PC.  As I said: there might be big monitors acting as displays for clients, but client doesn’t mean dumb.

Smart is cheap.  We could probably power the monitors running smart clients – and some local personal, and personalized, computing – from our phones.  We could certainly do it from our laptops.  But we won’t.  Because we won’t want to become tied down to them.

We will want our tablets and laptops to be able to carry on doing what we were doing from our desktops – but that’s an entirely different issue.  Indeed, since I’ve suggested we might want to run some personal programs locally, it suggests we need something on our desktop to mediate this.

It has felt, recently, that the IT industry is moving away from letting us own our own devices.  That the Apples and Microsofts want to control what our computers run.  Some have shouted ‘conspiracy’, but from what I know of the people making these decisions, the reason is hands down ‘usability’ tied with ‘security’.  However, there is a new breed of entrant in the market which cares little about this usability thing – the Raspberry Pis and Android dongles.  Smart, but cheap.  You – not any company – control what you do with these devices.  They are yours.  And in a company environment, they can quite happily sit in a DMZ while running software that gets full access to the corporate intranet.

The desktop computer could easily be something along these lines.  No need to make the devices limited.  No need to limit what they are able to do.  All you need to limit is their access to privileged data and privileged servers.  These devices become the hub to which you connect whatever hardware and whatever display are appropriate for the job.  I can keep my keyboard.  Designers can have their Wacom digitisers.

But you also make sure that these devices can be accessed from outside the corporate network – but only the things running locally on them.  This might require a bit of local virtualization to do well, but Xen on ARM is making significant progress – so we’re nearly there.

This is my bet about the desktop.  Small, smart, configurable devices tied in with private cloud services, and whatever UI hardware you need.

But my next bet is we won’t even notice this is happening.  These devices will start turning up in the corporation without the CTO or CIO giving permission.  At first it’ll be techies – and the occasional person using an old phone or tablet as a permanent device.  But gradually it will become more common – and devices will be sold with this sort of corporate use in mind.  You’ll get remote client software preinstalled with simple user interfaces for the common user.  They’ll come into their own as corporations start mandating the use of remote desktops and sucking everything into the cloud – taking advantage of the same networks that the engineering services teams have been forced to make available for phones and pads.

The desktop PC will stay.  It will stay because we want more, better, personal control of our work lives.

When the network computer does, finally, make the inroads we have been promised, it will have been smuggled in, not ordered.

(Oh, and we won’t call them desktops, we won’t call them PCs.  We will think of them as something different.  We’ll call them dongles, or DTBs (Desk Top Boxes), or personal clients, or something else.  This is going to happen without anyone noticing.  It might happen differently from the way I’ve suggested, but ultimately, our desktops will be low-powered, small devices, which give users more control over their computing experience.  They’ll probably run Linux or Android – or maybe some Mac OS/iOS variant if Apple decide to get in on the game.  And while companies will eventually provide them, the first ones through the door will belong to the employees.)

I don’t want my, I don’t want my, I don’t want my Apple TV

In the late nineties, I worked for a dot com startup doing some early work in the digital set top box space.  Video streaming, personalization, web browsing.  It was the sort of thing which only became popular in the home about a decade later.  We were too early (and probably too incompetent).

These days it’s popular to think that the TV set is due for a change.  Some sort of revolutionary rethinking in line with what Apple have done to the tablet computer, the phone and the mp3 player.  Apple are usually considered to be the people who will lead this revolution (the rumours it will happen any day now have been around for years).  Others think Google might manage it.  And I’ve suggested Amazon could be the dark horse.

But the more I think about revolutionizing the TV, the more I realise, I don’t want it to happen.  At least not like a TV version of the iPhone.

There are a few things I have realized about the television:

1. It’s a device for multiple people to watch at the same time
2. It’s about showing pictures and playing sound.
3. UIs for TVs are hard.  And generally ugly.  Your best bet up till now has been to control things with an IR remote control.  Ownership of the remote, and losing the remote, have become the clichés of ancient stand-up comedy routines.  We are just about entering the period when people might consider replacing their remote controls with mobile phones and tablet computers.
4. No one wants to browse the internet, read their email or post to twitter through their TV.  We might want to browse the web in order to get to YouTube or some other video playing site, but generally people prefer to read things they can hold in their hands.

It has gradually become clear to me that the home user isn’t going to be looking for a magic box – or for extra capabilities of their TV – which will allow it to take advantage of all the new content opportunities the web provides.  No.  They are just going to use their TV to watch programs together.  They won’t be installing apps on their TV.  They won’t be browsing the web on it.  And they won’t be controlling their viewing with the TV’s remote.  They will be doing everything from their phone or tablet.

Think about it for a moment.  You can already watch TV on your phone.  And with AirPlay you can send anything you’re watching to your TV.  This is fine for an ‘all Apple’ household, but until lots of people get in on the game, I don’t see this as the future.

No, the future comes with WiFi Direct and Miracast (plus a lot of extra work).

I’ve explained WiFi Direct and Miracast elsewhere, but to put it simply: Miracast lets you beam video from your phone – or from any other device – to your TV.  It’s like a wireless HDMI cable.

So imagine, if you would, the TV of the future.  It will be a box with no buttons, just a lovely display and a power supply.  Inside, it will be WiFi Direct ready.  (Hopefully WiFi Direct has some sort of wake-on-LAN functionality, so that you can plug your TV in and put it in a low-power mode awaiting a connection.  If it doesn’t, we’ll stick a discreet pairing button on the top.)

You come in with your phone, or tablet.  You install an app – which might be something like iPlayer, Hulu or Netflix, but might also be a specialist app, perhaps ‘Game of Thrones’.  How you pay for this (one-off, or subscription) is up to the app publisher.  The app publisher can also decide if the app contains all the audio/visual data, or if the data will be streamed from some external source.  You play the app, and are offered a number of screens to play the video on.  You select the TV and you are away.  The video is streamed from your phone to the TV set – or, better, streamed straight from the source to the TV set.

This world is already (just about) possible with Miracast.  But it isn’t quite enough.  Here are some ways we can improve on things.

Your friend is also watching TV with you, and decides to turn the volume up a bit.  The volume is a feature of the TV, so your friend needs to tell the TV to play sounds a bit louder.  So your friend reaches for his phone.  Now, he doesn’t live at your house, so he won’t have an app for controlling your TV.  There are two solutions:
1. We insist every TV provides a common interface, so that lots of people will make TV control apps.  In that case, he can just pair with the TV and control it that way.  But this sort of standardisation doesn’t seem to work well, so the odds are low.  My preferred alternative is to encourage the following:
2. When your friend pairs his phone with the TV, he is told there is a web service available (providing a web server ought to be a common feature of WiFi Direct devices that need to be interacted with) and goes straight to the front page.  There he is given a web UI, and a link to download a better app from whichever app stores the TV company have chosen to support.

What would be even better is if the web app worked by communicating with a simple web service.  Each web service could be different, but so long as they were simple, hackers could work out how they functioned.  As a result, they could develop control apps which work with hundreds of different TV sets – just as multi-set remote controls work today.  In short, everyone would have an app which could quickly work out how to control whatever TV it came into contact with – while also having a web app UI as a workaround in case of failure.
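To make that concrete, here is a minimal sketch (in Python, standard library only) of the kind of simple web service a TV might expose.  The endpoint names (`/status`, `/volume`) and the JSON payloads are my invention, not any real standard – the point is just that an interface this simple is easy for third parties to discover and wrap in control apps.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory stand-in for the TV's real state.
state = {"volume": 50, "muted": False}

def clamp_volume(level):
    """Keep a requested volume inside the TV's 0-100 range."""
    return max(0, min(100, int(level)))

class TVControlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET /status returns the current state as JSON.
        if self.path == "/status":
            body = json.dumps(state).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        # POST /volume with a body like {"level": 70} sets the volume.
        if self.path == "/volume":
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length))
            state["volume"] = clamp_volume(payload.get("level", state["volume"]))
            self.send_response(204)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

# To serve for real, the TV would run something like:
#   HTTPServer(("", 8080), TVControlHandler).serve_forever()
```

A control app (or my friend’s phone browser) would then only need to make two plain HTTP calls – which is exactly the sort of interface that gets reverse-engineered into multi-set control apps.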

So, this is fine for controlling the TV.  But what about if my friend wanted to pause the show in order to say something?

My suggestion is that along with WiFi Direct linking devices, you want to make some other information available.  Possibly provided by a web service as above – but ideally in a more standardized way.  I would want the TV to tell me which device was currently streaming data to it.  And I would want to be able to join that WiFi Direct group, to communicate with the sender.  Finally, I would like the sending device to also provide me with a web interface – so that I could control it remotely too.

In short, the TV becomes far more dumb than your average Apple TV box is today, and you rely on the smarts of the tablets that control it.  Especially since the apps on the tablets can ensure a far better user experience in the process.

From here we need to consider other devices.  I’m pretty sure the PVR as it stands will die.  Broadcast TV will gradually wither, and the PVR won’t be supported.  But until this happens, the PVR and cable box will be part of the home entertainment system.  And increasingly we will get video servers which hold the video data of films we have purchased – or even, perhaps, caches for external video providers.  In any event, we will control these devices the same way we control the TV: pairing via WiFi Direct, then a web UI and potential app downloads to get to the functionality.  These boxes will stream the video straight to the TV.

We also need to consider audio.  Right now many homes have a TV with speakers, and also a hi-fi of some sort.  Let’s rethink this: add a few wireless speakers, and let them be sent audio by a protocol similar to Miracast (but perhaps with some additional syncing technology).  Your phone could even become a remote wireless speaker – especially useful if you want to attach some headphones without laying out wires.

At this point we have everything we need to allow app writers to revolutionise television.  I still feel there is a lack of a central TV guide – but perhaps that will be forthcoming now we know we have personal touch interfaces and no longer have to assume everything will be controlled via the screen.

Whatever happens, we don’t need smart TVs.  We just need good displays, and sensible use of wireless technology.  The Apple TV as it stands is both too smart and not up to the job.  Let’s make it simpler, and make the interactions between devices work well.

Wearing it on my sleeve

[Images: ScratchInput; Steve Mann self-portrait]

Wearable computing.  That’s what we called it, back in the late nineties when I was at university.  It seemed like a great idea: never being away from my computer, instant ability to connect to the internet.  We wondered about the best way to do it – I fantasised about a belt which could hold a twenty-four-hour battery pack, some sort of input device – perhaps using combinations of buttons to let me type – or maybe a joypad split 50:50 between my trouser pockets (though the thought of what using that might look like was an issue) and, of course, some sort of output device strapped to my arm.

Later in life I got a Nokia Communicator.  These days I have an ageing Android phone, and I’m well behind the times with wearable computing.  The phone is now doing the job of the wearable computer – it does everything we wanted and more, in a more sensible and more acceptable-looking way.  The reason I’m behind the times is that wearable computing has become fashionable.  It’s about being up to date, more than it is about the technology.  I’d bet the people drooling over the latest iPhone weren’t impressed by the technology like we all were a few years ago – they just wanted something new and cool.  And that’s cool like a pair of jeans, not cool like the demo of Xen on ARM I saw the other day.

But I don’t want to talk about the new iPhone, because it’s a step improvement, not a game changer.

I want to talk about the iPod Nano.  Because the iPod Nano has been changed from a square to a rectangle.  And this interests me no end – because you can no longer put it into a watch strap and use it as a watch.  And this seems to me to be a weird decision from Apple.

Now, I’m not going to say the iPod nano was the publicly acceptable face of a phase of wearable computing we haven’t yet reached – mainly because I never saw anyone wearing one as a watch.  But those watch straps sell.  And some people love their Nano watches.  And Apple must have been aware of this – because they sell the watch straps in their stores.

And I can’t believe Apple were unaware of the Pebble watch which was causing a lot of buzz earlier this year.  I can’t believe Apple don’t want a part of that market, somewhere down the line.

And so, the only reason I can think of for stopping people from using the iPod as a watch is that Apple have plans (possibly vague plans, but plans nonetheless) to enter that market.  Amongst the possible ideas I can think of are an iPhone on your arm (unlikely – watches make for ungainly telephones), an iPod touch on your arm (plausible) or an Apple TV on your arm (interesting concept, bordering on the plausible).  Battery size would be the big issue for all of these, but we aren’t so far away from it being possible.

I began pondering on the names:

iArm would cause trademark conflicts with ARM

iWatch sounds horrible – unless you’re talking about Apple TV on your arm

iBand has potential.  And brings to mind the various flexible displays which are coming close to commercial production, along with a clever magnetic ‘smart strap’ inspired by the iPad smart case.

If I’m right, and the iPhone is effectively uninteresting now, and the people pushing back the boundaries don’t feel like the iPhone is the place to work, then Apple have got to be looking at something new.  And Apple tends to do best when they become the first people to see the advantages of using new technologies to make a step change in existing markets (think of the micro hard drive for the original iPod, the larger sized solid state memory for the iPod nano, the capacitive touchscreen & multitouch for the iPhone or the retina display).  Right now the wearable watch is taking off (slowly, but step by step its happening) and a half decent low power flexible waterproof screen would be a game changer – especially if done with the design genius of Apple.

It’s only a thought, but Apple’s rise to dominance has always been about mobility and individuality.  We all know that the iMac and the Mac Pro are unloved, while the MacBook (especially the Air), the iPod and the iPhone are where Apple’s heart is.  Apple TV never really fit in this slot – it felt like a horizontal extension of iTunes rather than something genuinely new.  It isn’t Apple’s core.  An iWatch – that just might be.

Could Apple be getting out of the watch market, so that when they enter it, they are doing something new, on their own?

The Ubiquitous Tablet

I’m not going to say anything about the new range of Kindles yet – that deserves consideration alongside whatever comes from Microsoft and Apple in the next month or so.  I do want to talk about the trend which is becoming clear with the pricing of the Kindle Fire: tablets are becoming cheaper.  Tablets are going to continue to get cheaper.  We will stop considering tablets as expensive pieces of technology, and start considering them part of our lives – like we do with phones and wrist watches.

Here is my prediction:  Fairly soon, we will all own lots of tablets.  We will leave tablets littered around the house and workplace, and we will use whichever tablet is closest to us when we want to do something.

My key assumption here is that tablet UI development is not dead.  One day, we will probably settle on a fairly common UI pattern for tablets – much as we have with the desktop metaphor for PCs – but it took us 15 years to firmly settle on the PC UI, and I’m going to guess there is another half decade before we come close to doing the same with tablets.

So what does this mean for how tablets should develop:

1.  We will not store our data on tablets.  We may cache our data on tablets, but the data will be stored in the cloud (or – possibly – on a server you own.  I think the cloud is more likely, but the geek in me likes the idea of being able to control my own data)

2.  Since I don’t think there will be just one brand of tablet, any more than there is just one brand of notebook (yes, you are allowed to use notebooks which are not Moleskines, just like you are allowed to use tablets which are not iPads), and since tablets will be used interchangeably, this brings native apps into question.  I don’t think native apps will die, but I think they will become less ubiquitous.  More and more, I foresee people using JavaScript- and HTML-based apps which they can access from any of their tablets.  Native apps will exist for a few purposes:

  • Games – assuming games are not streamed from your media centre box or somesuch, many games will remain native apps
  • Turning a particular tablet into a particular thing.  If I buy a 32″ tablet and decide ‘this will be my TV set’, then I might buy a specific native TV guide app for it.  In this case, the app will be an app you don’t want to move between devices – so it will be installed on a per device basis (perhaps with an access control list of approved users)

It is just possible that Android apps will become the default – but that seems unlikely.  Since you will want your personal collection of apps to move with you between devices (not having to install every app on every device), I think there will probably initially be space for an app which acts as an installer for these new apps in some way.  I don’t quite know how this will work – I’m guessing we’ll see it on Android first, followed by Windows, then Apple last.

3. Multi-account tablets are not the way forward.  With tablets just lying around to be used this seems non-obvious, but my thought is that tablets should be neither multi- nor single-account; they should have no account.  What I want is to go to a friend’s house I have never visited before, pick up his tablet and start using it – with all my apps there waiting for me.  If all the data (including your set of apps) is stored in the cloud, this isn’t a pipe dream; all it would take is some form of federated log-in – I expect the best way to do this will be by bumping your NFC-enabled phone up against the tablet.

You might worry that not having accounts with passwords might mean tablets get stolen.  I don’t share this worry.  Tablets are cheap; for most of the tablets we will leave lying around and lend to friends, no one will bother stealing them any more than they would steal the crockery from your dinner table.  Expensive tablets can still have some sort of PIN locking mechanism before they let you in.

 

In thinking about this new tablet world, I’m wondering how far off we are.  Right now, I can’t see any reason why companies wouldn’t stick six iPad minis or Nexus 7s in each of their meeting rooms, to let people get to that email they need on the spur of the moment without having to bring in their laptop (and all the associated distractions).  Since these are special tablets with a special purpose (sitting in a meeting room), we might also want to install some sort of video conferencing app on them – each person having their own camera and being able to look whoever is speaking in the eye (or quickly switch to another speaker and send a sidebar message) might well make multi-site videoconferences work.

We haven’t yet seen the impact of the tablet on the world.  It will be a different impact from the PC’s – more like the impact of the mobile phone, but without needing the mobility, since ubiquity and cheapness work just as well.  My predictions are probably conservative – but we’ll see them happening, and they’ll probably begin happening in the next few months.  Give it five years, and the idea of not having a tablet to hand will be as strange as going anywhere without your mobile.

 

The Art of Being Invisible

[Image: Invisible Man]

Recently Citrix commissioned a survey into the public perception of cloud computing, and it went ever so slightly viral.  Which was presumably the intent – to get magazines and websites to publish articles which link Citrix with cloud computing, rather than actually to learn anything new about the cloud.  I have nothing against this – Citrix is a big player in the growing cloud, but anyone who hasn’t noticed this (and many haven’t) probably still considers them to be ‘those Metaframe people’ – so any PR that works is probably a good thing.

What I found out from watching this unfold was:

Not many people writing articles about surveys actually link to the original source.

Even when I got to the original source, I wasn’t able to locate the survey questions people were given, or the responses to those questions – just the results, as digested by the company.  Which means I have absolutely no idea of the context in which to put the results.

Most people who actually reported on the article didn’t seem to care.  They pretty much parroted the press release data.  Again, as I would have expected – that seems to be what tech journalism is all about.  But it would be nice to see more people out there who get some interesting data and actually think about it – and its implications – before writing anything.

And finally, as the survey suggests:  Not many people know what cloud computing is.

Which isn’t a surprise, because it is a made-up term which loosely describes a whole bunch of tech industry trends.  In short, I think we can safely say it comes from those vague technical drawings of infrastructure where you might draw a few data centers, each with a bunch of servers and storage inside, then link them by straight lines to a picture of a cloud – often with the words ‘The Internet’ inside, to suggest the data centers were connected together via someone else’s infrastructure.  As people increasingly host their technology on someone else’s infrastructure, rather than in bits of a datacenter maintained by company employees, we say that technology is in the cloud.

The public don’t know about this.  And frankly they don’t care.

And also they shouldn’t.

My day job is developing a key part of the infrastructure for the cloud.  Without it big parts of what we call the cloud wouldn’t work – or at best would have to work in a very different and less good way.  You will almost certainly have used part of this product in some way today.  And you probably don’t even realise it, or care.  So why don’t I care that no-one knows about the cloud?  Why don’t I wish more people would love my work and sing its praises?

Because, if I do my job well, my work is invisible.  Every time you notice anything about my work, any time you worry that it exists in any way, shape, or form, you’re keeping me up at night because I’m not doing my job well.

I’ll give you an example:  Electricity.  To get electricity there are power stations, huge networks of wires, substations, transformers, all ending up at a plug socket in your house.  You don’t notice these.  You don’t care.  Unless – that is – it all stops working… or perhaps you have some technical problem like trying to run a 110 volt appliance in the UK.  If electricity wasn’t invisible – if we had to ring up and request enough power for our TV set to run, then we would care more – and enjoy our lives a little bit less.

Cloud computing is actually all about making computing into a utility, just like electricity.  It is about not having to worry about where servers are.  It is about not having to worry about where your data is.  Now, some people have to worry about electricity – if you’ve ever set up a data center, you’ll know that you need to start caring about all sorts of issues which don’t worry the home owner.  Similarly, if you work in the IT industry, you’ll have all sorts of worries about aspects of cloud computing which end users simply shouldn’t ever have to care about.

So if you ask a man in the street about the cloud, he should remain more worried about the sort of cloud which rains on him.  And to determine how worried he should be, he’ll probably ask Siri on his iPhone.  And not care about how Siri takes his voice input and uses vast numbers of computers to respond to it with data generated by meteorological offices who process big data over vast grids of computers.  He won’t worry about anything which goes on in between, any more than he worries about how to charge his iPhone when he gets home.

Consumers already have their heads in the cloud.  They don’t realise it.  And they don’t care.  Because they are already used to it.  To them the cloud isn’t anything new; it’s just how things are these days.  As for companies and programmers – we need to make the cloud less and less obvious, less and less difficult.  One shouldn’t need to think about doing something in the cloud, because that should be the easiest way to do things.  We have to take the blocks of code we put together and make them blocks which work across the cloud as seamlessly as they currently work across CPU cores.  We need to stop thinking in terms of individual computers and individual locations – and those of us who build the code need to make it easier and easier to do this.
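As a toy illustration of that last point, consider how Python’s `concurrent.futures` already lets the same block of work run unchanged across local threads and cores.  The hope is that a cloud-backed executor with the same interface could one day slot in without touching the work itself – that distributed executor is purely hypothetical here; only the local version is shown.

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(text):
    """The 'block of code' - it knows nothing about where it runs."""
    return len(text.split())

documents = [
    "the cloud is just someone else's computers",
    "smart is cheap",
    "dumb clients never quite seem to happen",
]

# Today: schedule the work across local threads/cores.
# Tomorrow (hypothetically): swap in a cloud-backed executor with the
# same map() interface, and neither word_count nor this code changes.
with ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(word_count, documents))

print(counts)  # [7, 3, 7]
```

The point is the shape of the abstraction: the location of the computation lives entirely in the executor, which is exactly the invisibility the rest of this post argues for.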

We are already on our way.  But would I want to be the number one cloud computing company?  No, I would want to be the number one computing company – because once everyone is in the cloud, the cloud vanishes, and we are back playing the same game we always played.

 

Use Case : Information Capture

When stumbling around trying to figure out which combination of tablet / laptop / phone makes the most sense for me, I find it useful to consider the use cases which, at the moment, my devices don’t quite meet.  The most obvious thing that I’m missing is a good information capture device.  Here are the situations where it would be useful:

I’m called into a meeting – I need to be able to access websites during the meeting – and perhaps run GoToMeeting or WebEx, so it’ll need decent web browsing facilities.  I’ll also need to be able to access (and maybe write) emails.  Finally, I’ll want to be able to type notes as quickly as possible, without looking at the keyboard (so I’ll need the sort of feedback which only a physical keyboard can give, and I’ll want a full-size keyboard, for comfort).

I’m at a conference.  I want to take notes in all of the sessions. So I need good battery life, and I also need to be able to type with the device either in my hands or on my lap.  So far, I’ve not managed to find a device which is as comfortable for taking notes on as a keyboard, and many tablets with keyboards won’t rest nicely in my lap.  Later, I’m going to want to transform these notes into documents.

After a day at a conference, I’m in my hotel room.  When I travel, I prefer not to take my main computer with me – I prefer to have something cheap, something which doesn’t have all my data on it (I have backups, so I could stomach the loss of my data – and a good quantity of my data is in the cloud, but still, I don’t want the inconvenience).  Recent experience has shown that my windows tablet (with docking station and bluetooth keyboard) does well here (though isn’t that cheap – still any tablet & wireless keyboard combo would clearly do almost as well).

As an added benefit, it would be nice to be able to make doodles and drawings to accompany my notes, using a pen or stylus.

My suggested solutions are:

Carry a laptop, and some form of extended power.  This would be a good reason to buy a MacBook Air.  But an Air, or an ultrabook, would not meet my criterion of being a cheap device.

Carry a netbook and some form of extended power.  Then also use a tablet for the things tablets are better for.  The problem here is that netbook keyboards are not as big as I would like.

Carry a tablet, and then use one of those pens which record what you write for making notes.  This isn’t a bad plan – although I doubt the OCR capabilities of the pen’s software.  And we could probably achieve the same thing with pen, paper and a travel scanner.

What would seem to me a better idea would be a tablet case which has a built in keyboard, and is designed to work as a laptop.  Extra marks if it can contain extra batteries to increase tablet lifetime.  We’re not just talking a tablet dock, we’re talking about something specifically designed for using on your lap, like a laptop.  The idea of it being a case more or less rules out Android devices – they are just too different from one another; you would end up with some half-functioning system equivalent to those suction pads you use to attach phones and GPS units to car windows.

The right sort of thing already exists for the iPad – consider for example the keyboard case from clamcase.com.  There is a Japanese company selling a notebook case which also sports a battery, but this doesn’t seem to have worldwide availability yet.

I wonder, however, if this is an opportunity for Windows OEMs who were blindsided by Surface.  Surface, no matter how funky the magnetic keyboard thing is, won’t work from your lap.  Yet a Windows RT device would meet all my needs as described above.  A WinRT laptop would have a niche that Surface can’t quite touch – and a slightly higher spec variant could easily come with a stylus.  Ultimately, a device like that might mean I would rarely, if ever, need to use a proper laptop for anything.  And also that my life in conferences, meetings, and bland corporate hotel rooms would be much improved.

 

Given the 7 inch tablet, do we still need phones?

The advantage of the 7 inch tablet over the 10 inch is that it can be taken everywhere.  My kindle (7 inch) slips nicely into a suit or coat pocket.  I’m sure it would slip just as nicely into many handbags or briefcases.  If you have a device that goes everywhere with you, and which can do cellular communication, why not use it as your phone?

You, like me, might grasp that holding a 7 inch tablet to your face like a phone is a non-starter.  And you, like me, might feel that a Bluetooth headset isn’t something you want to have pinned to your ear all the time.  So the 7 inch phone is likely to make you look faintly ridiculous.  Maybe it’ll become fashionable, but I’m getting old and grumpy, and it is clearly more sensible to hold something chocolate bar sized to the side of your head than something paperback book sized.  If only because your arm will hurt less.

My initial thought was:  What if you could have a Bluetooth handset?  Just a microphone and speaker in a chocolate bar sized box, with almost infinite battery life, talking just to the tablet?  I can see a market for this.

But I can also see a market for something else:

Take the same box.  Put proper cellular communications and a cheap ARM processor inside.  And lots of battery.  Don’t give it a screen, because the owner will already have a tablet in most places they go.  But do give it Siri.  Or something like Siri.

Ladies and gentlemen, I give you the iPhone nano.

The iPhone nano will do all the jobs that a phone is good at – calling people, letting you make notes, and recording and reminding you about appointments.  But it won’t do the things that a 7 inch tablet can do better (email, web browsing, reading ebooks, watching video).  And while your 7 inch tablet is quite portable – and will go to work, to clients and to the pub with you – your nano will go everywhere with you, so you’ll always be in touch: not just in the office, but also in the gym and in the park playing with your kids.

Without tablets, the iPhone nano doesn’t make much sense.

And, given that I own a kindle, a smart phone and a 10 inch tablet, for me the 7 inch tablet doesn’t make much sense.

But a 7 inch tablet and an iPhone nano – that seems to make perfect sense to me.

 

So, given the 7 inch tablet, do we need phones?  Yes.  Most of us will need phones.  But will we need smart phones?  Maybe, but not today’s smart phone.

And, for all I know, Apple might just have spotted just this path, and already be moving in that direction.

Microsoft Surface For Windows 8 – is it a good idea?

Some quick and initial thoughts on MS releasing their own Surface tablets:

Q. Did I expect this?

A. A week ago, no.  A day ago, I thought it was a possibility, based on the ideas below.  I still thought that an ARM tablet for developers to have early access to was more likely.

Q. Will their OEM partners mind?

A. Yes.  Yes they will.  And they may well bitch and moan a bit.  But let me ask you a few more questions:

Assuming Microsoft really are betting the consumer shop on Windows 8 (and it seems they are), do they actually have to compete with anyone other than Apple?

If Microsoft are competing with Apple, will they (based on previous experience of the OEMs) have a better chance if they make design decisions about hardware?

Would their OEM partners mind if today MS announced that they could license XBox?

Q. Will OEM partners keep on manufacturing tablets?

A. Yes.  Probably.  If I told you you could go out and sell your own iPad-compatible device, do you think you might consider it?  If MS is clever they will design one device (well, two – one for ARM, one for Intel) and put it at the sweet spot, price wise, for the home user.  Other OEMs can fill the niches on price, power or features.  My bet is that they will.  A bigger question is:  if MS are successful, how long will they feel the need to support their OEMs as much as they do today in the consumer segment?

Q. Can MS function as a hardware company?

A. They don’t have to.  They are no more a hardware company than Apple.  Or indeed than Dell.  All their hardware is going to be built by the Foxconns and DNIs of the world.  What MS are is a brand label, a design house, a venture capitalist, an advertising agency and end user support.

Q. Can MS keep prices low?

A. They would be stupid not to.  Each tablet sold is the loss of one Windows licence fee, so that’s how much profit they need to make on the tablets.  Meanwhile, by keeping quality high and prices low, they will be telling their OEM partners the prices they need to aim for.  There was no other way MS would be able to ensure that the pricing of Windows tablets would be competitive with the iPad.

Q. Overall?

A. MS are adapting to a new marketplace – rather slowly, but more skillfully than I would have expected a year ago.  They really do seem to be betting their consumer shop – but they are trying their best to stack the deck in their favour.  Will it work?  I think there is a good chance they will carve out a strong position, albeit not the market leading position they used to have.  With this new hardware strategy, they are playing an interesting game: will licensing their OS to other manufacturers be a bigger win than the amount it costs to support said manufacturers?  Interestingly, Apple played this game once, and that gamble didn’t pay off.

Oh, and I don’t think this affects the corporate / enterprise space at all (at this point).

Q. Will MS’s history mean they only repeat the bits of Apple’s history that they want?

A. Watch this space.

 

More Frighteningly Ambitious

Continuing my discussion of Paul Graham’s frightening ambitious ideas:

The Next Steve Jobs

I don’t see how a company can set out to be the next Apple, or how an individual can set out to be the next Steve Jobs.  This isn’t the way the world works.  Apple didn’t set out to be the Apple of today.  Sure, perhaps, early on, Jobs saw the plausibility of turning computers into household appliances, but I’m guessing he wasn’t thinking of the devices we have today – because back in the early eighties, they weren’t thinkable… and Jobs was a realist – a special type of realist who knew just how far reality could be distorted in his favour at any particular point in time.  And Jobs didn’t set out to be Jobs.  Not the Jobs we knew at the end.  That Jobs was created by the successes and failures of the younger, brasher, less tidied up Jobs.

But more than anything, I don’t think Jobs would have set out to be the next anything – he would have set out to be the first Steve Jobs.

Now – there is absolutely space for people to try to bring better design to the tech industry.  And there is space for people who want to move on the capabilities of existing technologies.  These are things we need to see.  What Jobs had was a combination of good design, a step forward in capabilities and a strong brand behind him.  The strong brand was important – the strong brand is what gave Jobs the clout to get the entertainment and telecoms industries moving into step with him.

Getting a strong brand is hard.  But these days it’s easier.  Facebook might, potentially, have some of the clout we are talking about, and it’s still young.  But to become a strong brand quickly requires a low cost of entry for the users – and that pretty much precludes being involved in making innovative consumer electronics.

So the future of design is going to start in software.  It’ll be when one of the guys behind some particularly popular and well designed website says “screw this – I don’t want you making my site ugly” to advertisers, and finds another way to make money – possibly by extending his brand into the physical world – that we’ll see changes happening…

Though the other place I would look to is Kickstarter and Etsy.  There are more and more iPhone cases and iPad covers that exude beauty.  What if one of these designers were to build a wrapper around something cheap and generic (say, the Raspberry Pi) and turn it into something better?  I don’t know what that something better might be, but we are at a place where design-first development of products is looking plausible.

Bring Back Moore’s Law

To be honest, I’m not hopeful that someone is going to come out and say “Look at my new compiler, it avoids all the problems with parallel processing”.  But my experience is that you never have to solve all of the problems, just some of them.

That said, I don’t think Moore’s law is the problem that needs to be solved, when it comes to parallelism.  I think scalability is the problem.  You want a program that runs as well on 12 cores as it does on 1 core – that’s Moore’s law being brought back [we all know Moore's law hasn't gone anywhere in hardware - I'm talking about getting software to take advantage of it] – but you also want a program that runs as well on a million cloud based servers as it does on one core.  That is a different problem.  And it’s a problem we’re not close to solving.  So it really is frighteningly ambitious.
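The single-machine half of that ambition is already within reach: write the per-item work as a pure function, and the same source runs unchanged whether the pool has 1 worker or 12.  A minimal sketch – the `classify` function and the data are invented for illustration, and for genuinely CPU-bound work you would swap `ThreadPoolExecutor` for `ProcessPoolExecutor`, which shares the same interface:

```python
from concurrent.futures import ThreadPoolExecutor

def classify(n):
    """A stand-in for any pure, per-item unit of work."""
    return (n * n) % 7

def run(data, workers):
    """Identical source code whatever the worker count -
    only the pool size changes, not the program."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves input order, so results are deterministic
        return list(pool.map(classify, data))
```

The point of the sketch is the scalability claim in miniature: `run(data, 1)` and `run(data, 12)` produce identical results from identical code.  The frightening part is making that property hold when the pool is a million servers rather than a dozen threads.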

Programming languages, as they have taken off in real world usage, have gone from being wrappers around assembly language [C] to being more and more abstract [C++, Java] and usable [Python, Ruby], and less worried about the processor’s control flow and more worried about the user’s [JavaScript].  Operating systems used to just cover over the complexities of the CPU; now they provide more and more abstraction – to the extent that we even have hypervisors – operating systems for operating systems.  But operating systems still work like CPUs do.

There is another layer of abstraction to be jumped to.  Abstraction over the cloud.

We have various parts of this.  Hadoop is the sort of engine we need inside such an OS.  The web provides us with a user interface to it.  But we don’t have the full tools.  What should happen is this:  I write a program which handles a user’s request, processes it and provides a response.  A simple program.  One that doesn’t worry about what else is happening.  Perhaps I write more programs to handle background activities and the like.  And I set all these programs running on ‘my cloud’ – something which I access through a browser, develop on through a browser and which looks like one big computer to me.  The cloud takes my code, and does all the work.  It figures out what the complexities are, what things my code requires, and where my code needs to scale by being broken down into multiple jobs.  And it compiles the code, and runs it appropriately – probably recompiling sections of the code in response to runtime analysis of modules.  The user doesn’t have to understand how file storage is spread across a billion disks – just as right now I don’t have to understand my single disk’s sector sizes and rotation speeds.

And yes – if it turned out that my cloud was a single core mobile phone, then, yeah – why shouldn’t it be able to target that too?

All of this is possible, it’s just a huge and frightening task.  If someone were to take it on, the world would look a very different place immediately.
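As a toy, single-process stand-in for that programming model: the programmer writes simple, stateless handlers; the runtime decides everything about where and how they execute.  Everything below – the `handler` registry and the `dispatch` function – is invented for illustration; in a real system, `dispatch` is where the distribution, scheduling and runtime recompilation would hide behind the same interface:

```python
# The programmer's side: simple programs that take a request and
# return a response, with no knowledge of scheduling or scale.
HANDLERS = {}

def handler(name):
    """Register a stateless request handler with the runtime."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@handler("greet")
def greet(request):
    # A simple program: handles a request, processes it, responds.
    return {"message": f"hello, {request['user']}"}

# The runtime's side: in a real cloud OS this call would pick machines,
# split the job into parallel pieces, and recompile hot code paths.
# Here it just looks up the handler and calls it.
def dispatch(name, request):
    return HANDLERS[name](request)

print(dispatch("greet", {"user": "ben"}))  # {'message': 'hello, ben'}
```

The design point is the boundary: nothing about scale leaks into `greet`, so the runtime is free to run it on one phone or a million servers.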

Ongoing Diagnosis

The problem with healthcare monitoring is that – unlike most of the other ideas – it requires hardware.  And hardware is hard to make, expensive to ship, breaks, and is generally quite big.  So the problem is making light, cheap healthcare sensors.  Which is something I’m absolutely not qualified to talk about.

But two things I do know about hardware are – it is cheaper to make hardware which is dumb, and it is cheaper to make hardware which is produced en masse.

Dumb hardware simply needs to communicate with software which can do the real processing – and combine the information from lots of sensors to build a bigger picture.  It may be that the market is not in making the sensors, but in being the best diagnosis engine – combining the inputs from lots of sensors and looking them up against a database.
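A minimal sketch of that split – every sensor name, range and reading here is invented for illustration.  The dumb hardware emits bare (name, value) pairs; the smart software holds the database of normal ranges and does the comparison:

```python
# The "database" side of the diagnosis engine: normal ranges per sensor.
# These names and numbers are illustrative, not clinical guidance.
NORMAL_RANGES = {
    "temp_c": (36.1, 37.5),
    "pulse_bpm": (50, 100),
}

def flag_anomalies(readings):
    """Combine raw (name, value) pairs from dumb sensors and
    return those that fall outside their normal range."""
    flagged = []
    for name, value in readings:
        lo, hi = NORMAL_RANGES[name]
        if not (lo <= value <= hi):
            flagged.append((name, value))
    return flagged

readings = [("temp_c", 38.2), ("pulse_bpm", 72)]
print(flag_anomalies(readings))  # [('temp_c', 38.2)]
```

All the value lives in the ranges table and the combining logic, not in the sensors – which is exactly why the sensors can stay dumb and cheap.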

Getting the first sensors cheap enough is a bigger problem.  Were I going into this area, I would be looking at the developing world.  Right now, parts of the world are crying out for a doctor in a box.  It doesn’t need to be small enough to fit into your mobile phone or training shoe – just into the back of a Toyota Hilux.  But if it can be made fairly cheap, the market is out there – and there are Bill and Melinda Gateses who will pay you to make your product – and to make it cheaper and smaller, and more efficient.

We could revolutionise first world healthcare (and that is probably where the big bucks are), but while we are developing this, we might accidentally make the world a far better place.

I don’t know healthcare.  I don’t know everything that can be done.  But I know that is the sort of accident I would like my startup to have.

Amazon on Your High Street

Word on the street (you know, that information super highstreet you have these days) is that Amazon are planning on opening brick and mortar stores.  How does this fit in with my previous suggestion that the new opportunity on the high street is the Apple Store for brands (especially publishers)?

At first, quite well you might think.  Amazon are a publisher, and have a range of their own products they may wish to support or add value to.  And you’d be right – this might just about work.  But Amazon isn’t a unified brand like Apple is… so while they might be able to bring author speeches and Kindle Fire support, unless they really go in for the ‘community cafe’ approach I think publishers need to adopt, I don’t see it being a rip-roaring success.

Because the products they sell are not the thing that makes Amazon Amazon – so showcasing the products isn’t going to be a big success.  What makes Amazon Amazon is exactly the opposite of products – Amazon doesn’t care about what it sells – Amazon cares that it is able to sell lots of everything you might possibly need, at a better price than everyone else, and just as conveniently.  Amazon doesn’t tie the user to the product, it ties the user to the convenience (which is why I’m an Amazon Prime junkie).

So I don’t see an Amazon bookshop – or an Amazon iStore – being a success.  But what if Amazon went down the convenience route?  Right now, I get next day delivery (Prime junkie, see).  But what if I want a book or product right now?  Amazon could handle this… they could buy into out of town shopping park stores and fill them with books – both for browsing and with Argos-like warehousing behind.  Now when I buy a book, Amazon could offer me “Pick it up right now from …”.  Moreover, they could also offer “Pick it up this afternoon from …” – which might give me access to a far wider range of books (it would be easier, since Amazon would only have to ship from warehouse to specific shops).

Now add 24 hour opening, and a place from which I could collect all my Amazon deliveries (since some people don’t work in an office where they can easily have parcels sent to them), and we have even greater convenience – and even less caring about what the product is they are selling.

Sure they could still use the space to promote their authors and their electrical goods.  Sure, it would certainly be the place you would go to if your Kindle broke.  But it would be Amazon, not Apple.  And for Amazon, being Amazon would – I suspect – be a better bet.

 

 

© Ben.Cha.lmers.co.uk