Archive for the ‘Technology Futures’ Category:


After one week of Android Wear

I decided right away to pick up an Android Wear watch – I’ve been interested in wearables since I bought a fitbit a year or so ago, and Android Wear seemed to be the next logical step.  The watch I bought was the Samsung Gear Live, but I suspect most of my comments are likely to be relevant to Android Wear generally.  So, after a week of playing with it, here are my initial thoughts:

Battery Life:  I take off my watch every evening, so plugging it into a charger has been no problem.  I suspect I might occasionally forget, so I’m possibly going to need a backup watch for those days.  The battery seems to easily last through my waking hours, but I’m a bit concerned about travelling with the watch as, when flying to the west coast, I’ve had days bordering on 24 hours long – which I’m not sure it will cope with.

Maps: Many people have identified the directions feature as being one of the watch’s best.  And they are right – a brief buzz on your arm every time you need to turn a corner is much less obtrusive than having to hold a phone out in front of you.  But it isn’t suitable for driving, which is a shame, because that’s what the app defaults to.  I haven’t yet figured out how to get public transport directions on the watch – which is a big shame, because live bus times (along with directions to walk to the nearest bus stop) would be a big win.

“OK Google”: the voice recognition is quite impressive – and certainly up to sending text messages and (using the Bunting app) tweets.  However, voice control of the watch and phone leaves a little something to be desired.  To start with, you’re only ever going to say “OK Google” when showing the watch off, or in a car or your own home – so it is really best for hands-free usage, where you don’t particularly want to have to press any buttons on the watch to do anything.  It is rather good for starting music playing (“OK Google, play My Life Story” gets my phone playing mixes of Jake Shillingford’s finest), but it can’t pause or skip tracks by voice – which would be handy when driving.  It’s also not great for launching apps – I use an application called DogCatcher for playing podcasts – and when I ask my watch to launch it, it opens the Play Store page for the free version, rather than noticing I have an app with that name installed on my phone.

Range:  I’ve actually been quite impressed with how far away I can get from my phone and still have the watch working.  This has two advantages:  Most of the time, I leave my phone on my desk charging when I’m at work.  The range means I can get notifications from it in nearby meeting rooms and the office kitchen, which is handy.  The range also means I can contact my phone from anywhere in my house… should I lose my phone, a quick ‘OK Google, play music’ will help me track it down.

Apps: I’ve tried a few so far.  Bunting, a neat tool for working with Twitter, and Evernote are my two favourites.  IFTTT lets you add buttons to trigger tasks – I’ve added a few for putting my phone’s ringer onto silent, for instance.  But I’m sure more IFTTT functionality would make the watch more useful.  App-wise, there is lots of scope for more development here.

Notifications:  You probably want to cut down the number of notifications you receive on your phone if you use an Android Wear watch.  But that’s a good thing.  It is quite smart at filtering out notifications you don’t need.  Overall, notifications coming to the watch are the most important part of the Android Wear experience, and it is probably the place app developers should spend their time improving their apps and integrating with Wear.

Fitness features:  The step tracker just works, and lets you set a daily goal.  Fine, but nothing special.  The heart rate monitor requires you to stand still while you use it – so not great for tracking how much effort you should be spending when running or walking.

The watch faces:  There are a selection of faces to choose from, and they are fine.  But there isn’t yet a face which displays an analogue clock with day and date on the screen.  I believe it is possible to write new faces, so I’m waiting for one to turn up which meets my specifications.  As far as moving between low power and high visibility modes goes, the watch is quite good at getting it right, but not perfect.  Since you need to be in high visibility mode to use voice commands, this is a bit of a distraction when driving.  The visibility of the watch screen in the sun isn’t great, but despite some sunny days, I haven’t needed to cup my hand over the screen to tell the time from an analogue watch face.

Media Control:  This was the biggest surprise – a use for the Wear I hadn’t thought of.  I’m a big user of Netflix with my Chromecast at home, and of DogCatcher for podcasts in my car.  Both of these apps put up notifications when they are playing, to allow you minimal control… and in both cases these controls turn up on the watch face.  So should I want to pause a track or a film, I just tap my watch – no need to dig around for my phone.  While there is scope to improve these features further, they are already the functionality I use the most.

My conclusion is:  The watch isn’t perfect – and in a year or two, if the wearables sector takes off, we’ll probably have much better models which are more suited to day to day use.  That said, it meets my needs, and exceeds my expectations so far.  Most of the downsides I’ve mentioned are software issues, so I expect the watch on my arm to become more powerful as time progresses.  We are still in an early adopter phase for wearables, but at this point you can see a viable consumer product peeping out from the future.

Replacing RSS

The death of Google Reader has made it clear to me that there is a gap in the market.

No, not a gap for another RSS feed reader, that has been well and truly satisfied by the mass of new contenders for the previously unloved RSS Reader throne.

The gap is a gap for control over your information.

You see, the reason – we have to assume – that Google dropped Reader is that they think Google+ is where all your reading should happen.  Facebook is much the same.  Like Reader, it is both a reading and a writing platform.  And when you develop a platform you can read and write to, you have very little incentive to keep it open so that other people can read from it, or write to it, elsewhere.

Oddly, we tend to try to resolve this by imagining new read/write platforms that emulate Facebook and Google+, but are more open.  And any such system is destined to fail because of network effects.  Our friends and families are already using Facebook, so if we want to read what they have to say, and be read by them, then we have to go to Facebook.

But what if we decided to break the link between reading and writing?

Most of my Facebook posts are actually just copied from Twitter.  But Twitter is read/write, just like Facebook (and becoming more so every day).  Also, I don’t actually care where people read my posts – I just care that they are available to be read.  But it would be better if they were also available to the world in other, better, more open ways.  Ways which are under the control of the author, not the platform provider.

Now, the way we used to do this was via blogs.  And blogs are a good thing – but I suspect blogging is to some extent dying, and the loss of Reader might, potentially, hasten this demise.

My suggestion is that someone produce a write-only microblogging, blogging, photo and video sharing platform.  The business model is simple – you charge people a recurring subscription for their account, and you ensure they are in control of their data.  But you make it easy for them to share the data they’ve provided.  You make the system automatically able to post to Facebook, Twitter, LinkedIn, Pinterest, whatever network you want.  You also make sure it provides RSS feeds, so that people who want to aggregate content can do so.  And you make sure it offers an ‘email people’ option, so you can push your content to friends and family who haven’t yet grasped what the internets are for.  You probably also want to allow people to read your content if they come to your site directly.  You could also provide the ability to be smarter about deciding who gets to see what content – by letting the publishing platform understand how private your various accounts on different social networking sites are.
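The RSS side of such a platform is the easiest part to picture.  Here’s a minimal sketch, in Python, of rendering posts as an RSS 2.0 feed that aggregators could poll – the `Post` type and `build_feed` function are hypothetical illustrations, not any real service’s API:

```python
# Minimal sketch of the RSS output of a hypothetical publishing platform.
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    link: str
    body: str

def build_feed(site_title: str, site_link: str, posts: list[Post]) -> str:
    """Render posts as a minimal RSS 2.0 feed that any aggregator can poll."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = site_title
    ET.SubElement(channel, "link").text = site_link
    for post in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = post.title
        ET.SubElement(item, "link").text = post.link
        ET.SubElement(item, "description").text = post.body
    return ET.tostring(rss, encoding="unicode")

feed = build_feed("My Posts", "https://example.com",
                  [Post("Hello", "https://example.com/1", "First post")])
print(feed)
```

The same `Post` objects could feed the Facebook, Twitter and email outputs – the feed is just one renderer among several.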

This would never be popular.  Facebook and its kin are good enough for most people.  But every time they betray their users by tightening their lock-in, or tilting things further in favour of the advertisers, a social publishing platform would make it easier for people to begin to opt out.  And it would mean more content was available outside the gated walls of the big social networks, which would be good for those of us who prefer more powerful and personalised ways of accessing interesting things to read.

Technology Isn’t For Me Any More

Yesterday, I was sitting with my nephew and niece as they were playing with their iPads.  I wasn’t so shocked at the skill and dexterity with which they were manipulating their games (indeed, the 3 year old was barely able to play Sonic the Hedgehog – pah, when I was her age I… hadn’t ever seen or touched any form of computing device.  I didn’t even have teletext or a Nintendo Game & Watch.) as I was shocked by the fact that to them the iPad will have always been the least technically advanced computers have ever been.

To the generation born only two or three years ago, will the iPad be remembered in the same way as we remember the ZX81?  Perhaps – although it might be better to compare the iPad to the better-established consumer technologies of my youth:  the iPad will be remembered by them in the same way the massive, fake wood-panelled black and white CRT TV set (which later became my BBC B’s monitor) is remembered by me in the age of PVRs, HD, Netflix and YouTube.

Talking of YouTube, apparently the youth of today are using it to show each other what they bought on trips to shopping malls.  Which seems boring and asinine until you think more about what is happening.  In my youth, you established your social status by constantly, hour after hour, hanging around with your friends and engaging in a million small acts of oneupmanship – or in my case, generally ignoring your friends, fiddling with computers, and praying that something like the internet would come along to ensure you never had to have any real social contact with anyone if you didn’t want to.  In the new world, they are doing the same – but they are trying to one-up the world.  YouTube has become the school yard; people are jockeying for social status and celebrity on a global scale.

And this isn’t abnormal – this is totally normal to them.  Their peer-reviewed value is now not down to who they got off with at the school disco or where they sit during assembly, but how many people liked their new profile picture.  And as they grow up and take more influential roles in society (and that’s only ten or so years away, folks – I won’t even be close to retiring, barring a lottery win or decent set of share options), these attitudes will be shaping our world and our cultural currency.  Even for someone like me who has lived their life online as much as possible, the culture shock will be crippling.  I probably need to quickly invent a virtual lawn so that I can attempt to keep those kids off it.

But there are upsides too.

The new users of technology are not going to be satisfied with the iPads and iPhones of today.  They are going to be confused about why the rest of the world doesn’t work like the iPad does.  Why are TV remotes so clunky?  Why do I actually have to be present at a particular place for my lessons, lectures and job?  Driving is hard – why can’t I tell the car where to go and let it take me?  Shouldn’t Tesco know what shopping I’ve used and have refilled my fridge while I’m at work?  Unpacking is so irritating!  And probably lots of other things too – things I’m so used to that I can’t conceive of being any different.  I’m past the point where I’m going to be driving new technology (unless I happen to develop it), but the coming consumers will be looking at the world with fresh – already bored by the amazing futuristic world we live in – eyes.

As for me, while my niece – thanks to television and computer games – is already better than me at speaking Spanish, I can still outplay her at Sonic the Hedgehog – at least for a few more months.  After that I’m a relic – a walking dinosaur who will be harking back to the days of loading Elite from cassette tape, when music came on little shiny discs and when phones were mainly about talking to people.  She will be creating the world, and, at best, I’ll be responsible for implementing her demands until the government her generation elects decides it’ll be more cost effective to ship me off to Dignitas.

But the world will keep on changing.  And the future she creates will be amazing.

 

The Rebirth of the PC

People are talking about the death of the desktop PC, while Rob Enderle is talking about its rebirth.  I’m conflicted about both these stories.  I think they are missing the trends which will really shape how we come to think of the PC in the future.

Looking at the market now, there are desktops, there are laptops, there are tablets and there are phones.  We also have vague attempts to cross genres, with Windows 8 trying to reach between tablet and laptop, while iOS and Android reach between tablet and phone.  But this isn’t the future; this is a market still trying to figure itself out.  I’m going to limit my predictions to a particular segment of the market – the segment which is currently dominated by the desktop PC.

The reasons we have desktops are:

  • They are more powerful than laptops
  • They are tied to a single desk, so that management can control where we work (and where our data stays)
  • They are more comfortable to use than laptops or tablets (at least for keyboard entry and pixel perfect design)

However, the game is changing.  The question of power is becoming moot.  Machines seem to be gaining power (or reducing power consumption) faster than applications are taking it up.  There is less and less need for more powerful machines.  And, where more powerful machines are needed in a company, it doesn’t make sense to hide them under individuals’ desks.  It makes more sense to put them in the datacenter, allocating processing power to the people that need it.

In short, we don’t need computers under our desks, we need reasonably dumb clients.  Network computers.  Oracle could have told you that years ago.

That said, dumb clients never quite seem to happen.  And the reason for this is that smart is so cheap there is no point in tying yourself down, limiting yourself to this year’s dumb.

Tying the computer to the desk is increasingly being seen as a limitation rather than a benefit.  It doesn’t just prevent working from home; it also prevents hotdesking and simple team re-orgs.  More interesting to companies are technologies which let them keep data in controlled locations – and again, the same technologies which let people work from home also keep data in the cloud – locking it there so that it is harder to misuse.  This argument for the desktop PC is gone.

Comfort is more important.  But by comfort we specifically mean comfort for typists and mouse operators.  Tablets are going to cut into the market for mouse operators, and combinations of gesture and speech technologies will gradually reduce the advantage of the power user’s keyboard.  Text entry will probably remain best done by keyboard for the time being.  But the comfort aspects are changing.  My bet is we will see an increase in big screens angled for touch rather than display, while tablets are used for on-screen reading.  Keyboards will remain for people who do a lot of typing, but onscreen keyboards will be commonplace for the everyday user.

So – by my reckoning we will have (probably private) cloud data, and applications running on virtual machines which live in the datacenter, distributed to big screens (and still some keyboards) on users’ desks.

This isn’t a particularly impressive point of view.  It’s the core of the business plans of a number of companies playing in that field.

But what is missing from the view is the PC.  As I said: there might be big monitors acting as displays for clients, but client doesn’t mean dumb.

Smart is cheap.  We could probably power the monitors running smart clients – and some local personal, and personalized, computing – from our phones.  We could certainly do it from our laptops.  But we won’t.  Because we won’t want to become tied down to them.

We will want our tablets and laptops to be able to carry on doing what we were doing from our desktops – but that’s an entirely different issue.  Indeed, since I’ve suggested we might want to run some personal programs locally, we need something on our desktop to mediate this.

It has felt, recently, that the IT industry is moving away from letting us own our own devices.  That the Apples and Microsofts want to control what our computers run.  Some have shouted ‘conspiracy’, but from what I know of the people making these decisions, the reason is hands down ‘usability’, tied with ‘security’.  However, there is a new breed of entrant in the market which cares little about this usability thing – the Raspberry Pis and Android dongles.  Smart, but cheap.  You – not any company – control what you do with these devices.  They are yours.  And in a company environment, they can quite happily sit in a DMZ, while they run software that gets full access to the corporate intranet.

The desktop computer could easily be something along these lines.  No need to make the devices limited.  No need to limit what they are able to do.  All you need to limit is their access to privileged data and privileged servers.  These devices become the hub to which you connect whatever hardware and whatever display are appropriate for the job.  I can keep my keyboard.  Designers can have their Wacom digitisers.

But you also make sure that these devices can be accessed from outside the corporate network – but only the things running locally on them.  This might require a bit of local virtualization to do well, but Xen on ARM is making significant progress – so we’re nearly there.

This is my bet about the desktop.  Small, smart, configurable devices tied in with private cloud services, and whatever UI hardware you need.

But my next bet is we won’t even notice this is happening.  These devices will start turning up in the corporation without the CTO or CIO giving permission.  At first it’ll be techies – and the occasional person using an old phone or tablet as a permanent device.  But gradually it will become more common – and devices will be sold with this sort of corporate use in mind.  You’ll get remote client software preinstalled, with simple user interfaces for the common user.  They’ll come into their own as corporations start mandating the use of remote desktops and sucking everything into the cloud – taking advantage of the same networks that the engineering services teams have been forced to make available for phones and pads.

The desktop PC will stay.  It will stay because we want more, better, personal control of our work lives.

When the network computer does, finally, make the inroads we have been promised, it will have been smuggled in, not ordered.

(Oh, and we won’t call them desktops, and we won’t call them PCs.  We will think of them as something different.  We’ll call them dongles, or DTBs (Desk Top Boxes), or personal clients, or something else.  This is going to happen without anyone noticing.  It might happen differently from the way I’ve suggested, but ultimately, our desktops will be low-powered, small devices which give users more control over their computing experience.  They’ll probably run Linux or Android – or maybe some Mac OS/iOS variant if Apple decide to get in on the game.  And while companies will eventually provide them, the first ones through the door will belong to the employees.)

What is Miracast?

I’m sitting with a bunch of friends, at Greg’s house.  Greg being a friend.  One of the bunch of friends.  Who I’m sitting with.  You get the picture.  I mention a particularly hilarious YouTube clip.  You’ve probably seen it, it’s the one with the cat.  Oddly neither Greg, nor any of my other friends have seen it.  So, in an effort to educate them, I summon all twelve of us to crowd around my phone and begin to play it for them.

This is life 12% of the way through the twenty-first century.

But what if things were different?  What if, instead of playing the video on my phone, I could beam it to Greg’s TV?

That’s where Miracast comes in.  With Miracast I could do just that.  We could all watch YouTube cats to our heart’s content from the comfort of our La-Z-Boys.

So, why isn’t it here yet?

Well, as I write this, Miracast is quite a new standard.  And in a standard’s early days it takes time for things to begin to work well together.  But let’s look at what Miracast actually does:

Firstly, we have to find out about the TV.  And we don’t want to go through all the hassle of connecting to Greg’s home network.  So we use Wi-Fi Direct (which I’ve explained elsewhere) to create a peer-to-peer connection.

Now, for Miracast, both devices have to be Wi-Fi Direct compatible and both devices have to support Miracast.  So it will be some time before Greg gets all the bits of kit necessary.  Nevertheless, several companies have certified Miracast TV adapters, so we might be able to start playing with this quite soon.

There have been previous attempts to create video streaming solutions:  Apple have their proprietary AirPlay – which does the job, but requires you to tie yourself both to an infrastructure network (though I’m hearing rumours that this is going to change soon – hopefully in a WiFi Direct compatible way) and to Apple proprietary devices (and I’m hearing this will change just after snowflakes decide to remove their travel advisory about visiting Hell).  Everyone else has been playing with DLNA – but the manufacturers of DLNA devices have failed to play nicely with one another, and DLNA relies on everyone being able to decode every format of video.

So what Miracast does is specify one video format (H.264 – which is pretty widely used) and then provide HDCP DRM wrappers around it which are identical to those used by cabled interfaces.  Miracast essentially becomes a cable in the ether.

And this will solve all our AV problems, right?

Well, Miracast doesn’t support audio only (which is a bit of an oversight), but the Wi-Fi Alliance certification does at least mean there will be some interop testing going on.  The key thing to remember, though, is that Miracast is just a virtual wire – it doesn’t control who can access a particular device, or allow you to control anything about the device other than what is shown on the screen.  In short, it’s a technology which could well be useful in home AV, but it isn’t the complete solution.

I don’t want my, I don’t want my, I don’t want my Apple TV

In the late nineties, I worked for a dot com startup doing some early work in the digital set top box space.  Video streaming, personalization, web browsing.  It was the sort of thing which only became popular in the home about a decade later.  We were too early (and probably too incompetent).

These days it’s popular to think that the TV set is due for a change.  Some sort of revolutionary rethinking in line with what Apple have done to the tablet computer, the phone and the mp3 player.  Apple are usually considered to be the people who will lead this revolution (the rumours it will happen any day now have been around for years).  Others think Google might manage it.  And I’ve suggested Amazon could be the dark horse.

But the more I think about revolutionizing the TV, the more I realise, I don’t want it to happen.  At least not like a TV version of the iPhone.

There are a few things I have realized about the television:

1. It’s a device for multiple people to watch at the same time
2. It’s about showing pictures and playing sound.
3. UIs for TVs are hard.  And generally ugly.  Your best bet up till now has been to control things with an IR remote control.  Ownership of the remote, and losing the remote, have become the clichés of ancient stand-up comedy routines.  We are just about entering the period when people might consider replacing their remote controls with mobile phones and tablet computers.
4. No one wants to browse the internet, read their email or post to twitter through their TV.  We might want to browse the web in order to get to YouTube or some other video playing site, but generally people prefer to read things they can hold in their hands.

It has gradually become clear to me that the home user isn’t going to be looking for a magic box – or for extra capabilities of their TV – which will allow it to take advantage of all the new content opportunities the web provides.  No.  They are just going to use their TV to watch programs with other people, together.  They won’t be installing apps on their TV. They won’t be browsing the web on it.  And they won’t be controlling their viewing with the TV’s remote.  They will be doing everything from their phone or tablet.

Think about it for a moment.  You can already watch TV on your phone.  And with AirPlay you can send anything you’re watching to your TV.  This is fine for an ‘all Apple’ household, but until lots of people get in on the game, I don’t see this as the future.

No, the future comes with Wi-Fi Direct and Miracast (plus a lot of extra work).

I’ve explained Wi-Fi Direct and Miracast elsewhere, but to put it simply:  Miracast lets you beam video from your phone – or from any other device – to your TV.  It’s like a wireless HDMI cable.

So imagine, if you would, the TV of the future.  It will be a box with no buttons, just a lovely display and a power supply.  Inside, it will be Wi-Fi Direct ready.  (Hopefully Wi-Fi Direct has some sort of wake-on-LAN functionality, so that you can plug your TV in and put it in a low power mode awaiting a connection.  If it doesn’t, we’ll stick a discrete pairing button on the top.)

You come in with your phone, or tablet.  You install an app – which might be something like iPlayer, Hulu or Netflix, but might also be a specialist app, perhaps ‘Game of Thrones’.  How you pay for this (one off, or subscription) is up to the app publisher.  The app publisher can also decide if the app contains all the audio/visual data, or if the data will be streamed from some external source.  You play the app, and are offered a number of screens to play the video on.  You select the TV and you are away.  The video is streamed from your phone to the TV set… or, better, straight from the source to the TV set.

This world is already (just about) possible with Miracast.  But it isn’t quite enough.  Here are some ways we can improve on things.

Your friend is also watching TV with you, and decides to turn the volume up a bit.  The volume is a feature of the TV, so your friend needs to tell the TV to play sounds a bit louder.  So your friend reaches for his phone.  Now, he doesn’t live at your house, so he won’t have an app for controlling your TV.  There are two solutions:
1. We insist every TV provides a common interface, so that lots of people will make TV control apps.  In which case, he can then just pair with the TV and control it that way.  But this sort of standardisation doesn’t seem to work well.  So the odds are low.  My preferred alternative is to encourage the following:
2. When your friend pairs his phone with the TV, he is told there is a web service available (providing a web server ought to be a common feature of Wi-Fi Direct devices that need to be interacted with) and goes straight to the front page.  At the front page he is given a web UI, and a link to download a better app from whichever app stores the TV company have chosen to support.

What would be even better is if the web app worked by communicating with a simple web service.  Each web service could be different, but so long as they were simple, hackers could work out how they functioned.  As a result, they could develop control apps which work with hundreds of different TV sets – just like multi-set remote controls work today.  In short, everyone would have an app which would quickly be able to decide how to control whatever TV they came into contact with – while also having a web UI workaround in case of failure.
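To make the idea concrete, here’s a rough Python sketch of the sort of simple web service a TV might expose.  Everything here is hypothetical – the endpoints, the `FakeTV` state – it’s just one shape such a control interface could take:

```python
# Sketch of a minimal TV-control web service. The endpoints (/volume/up,
# /pause, etc.) are invented for illustration, not any real TV's API.
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeTV:
    """Toy TV state the handler mutates - stands in for real hardware."""
    def __init__(self):
        self.volume = 10
        self.paused = False

def dispatch(tv: FakeTV, path: str) -> str:
    """Map a request path to a TV action; return a status string."""
    if path == "/volume/up":
        tv.volume = min(tv.volume + 1, 100)
    elif path == "/volume/down":
        tv.volume = max(tv.volume - 1, 0)
    elif path == "/pause":
        tv.paused = not tv.paused
    else:
        return "unknown command"
    return f"volume={tv.volume} paused={tv.paused}"

class ControlHandler(BaseHTTPRequestHandler):
    tv = FakeTV()  # one shared TV, as there is one physical set

    def do_POST(self):
        body = dispatch(self.tv, self.path).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To actually serve: HTTPServer(("", 8080), ControlHandler).serve_forever()
```

The point is how little surface area there is: a handful of self-describing paths is enough for a third-party control app to probe and drive, which is exactly why a simple service beats a rich proprietary protocol here.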

So, this is fine for controlling the TV.  But what about if my friend wanted to pause the show in order to say something?

My suggestion is that along with Wi-Fi Direct linking devices, you want to make some other information available.  Possibly provided by a web service as above – but ideally in a more standardized way.  I would want the TV to tell me which device was currently streaming data to it.  And I would want to be able to join that Wi-Fi Direct group, to communicate with the sender.  Finally, I would like the sending device to also provide me with a web interface – so that I could control it remotely too.

In short, the TV becomes far more dumb than your average Apple TV box is today, and you rely on the smarts of the tablets that control it.  Especially since the apps on the tablets can ensure a far better user experience in the process.

From here we need to consider other devices.  I’m pretty sure the PVR, as is, will die.  Broadcast TV will gradually wither, and the PVR won’t be supported.  But until this happens, the PVR and cable box will be part of the home entertainment system.  And increasingly we will get video servers which will hold the video data of films we have purchased – or even, perhaps, caches for external video providers.  In any event, we will control these devices in the same way we control the TV: pairing via Wi-Fi Direct, then a web UI and potential app downloads to get to the functionality.  These boxes will stream the video straight to the TV.

We also need to consider audio.  Right now many homes have a TV with speakers, and also a HiFi of some sort.  Let’s rethink this: add a few wireless speakers, and let them be sent audio by a protocol similar to Miracast (but perhaps with some additional syncing technology).  Your phone could even become a remote wireless speaker – especially useful if you want to attach some headphones without laying out wires.

At this point we have everything we need to allow app writers to revolutionise television.  I still feel there is a lack of a central TV guide – but perhaps that will be forthcoming now we know we have personal touch interfaces and no longer have to assume everything will be controlled via the screen.

Whatever, we don’t need smart TVs.  We just need good displays, and sensible use of wireless technology.  The Apple TV, as is, is both too smart and not up to the job.  Let’s make it simpler, and make the interactions between devices work well.

What is Wi-Fi Direct?

Here is the problem:  I have two devices – let’s say my phone and a printer.  On my phone I’ve got an email containing e-tickets to a show I want to see.  All I have to do is print them out on the printer.

Not a problem, right?  My phone and my printer are both connected to the wireless network in my house.  So long as my phone can find my printer (and let’s say it can – using UPnP or Bonjour, for instance) and knows how to talk to my printer (this is always a big ‘if’ – but let’s assume for the moment it can), then my phone can drive the printer over the wireless network, and my e-tickets can be printed out.  Fine.

Now imagine I’m at a hotel.  I’ve still got my phone, but we’re now talking about the hotel’s printer.  The simple solution might be to join the hotel’s wireless network.  But that might cost money.  Or at the very least be inconvenient.  And imagine I’m not in the hotel at all, I’m in a branch of QwikPrint.  I’m not even sure I would trust their network.  I’m standing in the same room as the printer – I shouldn’t have to jump through hoops.

This is where Wi-Fi Direct comes in.

The way I use Wi-Fi Direct might be something like this:  I press a button on the printer, then I go to the Wi-Fi Direct setup on my phone.  There I see the name of the printer (I know this, because there is a label on the printer just above the button I pressed).  I select the name, and we are connected.  Now I can print straight to that printer (with all the provisos about knowing how to drive it we had before).  I haven’t had to join a network.  I’m talking straight to the printer.

How Wi-Fi Direct works is simple.  One device decides to act as if it is a wireless router.  The other device connects to it, but only if certain security considerations are met – such as the button on the printer being pressed.  In fact only one of the devices – the one pretending to be a router – needs to know anything about Wi-Fi Direct.  The other just thinks it is joining a WPA2 network.  So there is a bit of backwards compatibility built in.  And for the record, this isn’t just a rehashing of the old ad-hoc WiFi networks we knew (and generally tried to ignore).  This is an infrastructure-mode, 802.11n connection – just like your standard WiFi – and can reach the same potential speeds: up to 250 Mbps.

There are a couple of extra twiddles.  While service discovery works using Bonjour or UPnP – just like on a normal WiFi network – Wi-Fi Direct also provides a little bit of extra service discovery magic, which means you can find services without the devices having associated with one another or obtained an IP address.  This means that in my hypothetical print shop, I know that it’s a printer I’m connecting to.
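As a toy illustration of the probe-and-reply idea behind this kind of service discovery – this is emphatically not the Wi-Fi Direct protocol itself, just a minimal sketch over loopback UDP, with a made-up message format and port number:

```python
import socket
import threading

DISCOVERY_PORT = 37020  # arbitrary port picked for this sketch

def serve_probes(sock, stop):
    """Toy 'printer': answer discovery probes with a service description."""
    while not stop.is_set():
        try:
            data, addr = sock.recvfrom(1024)
        except socket.timeout:
            continue
        if data == b"WHO_IS_THERE?":
            sock.sendto(b"printer: LobbyPrinter-3000", addr)

def discover():
    """Toy 'phone': shout a probe and collect the first reply."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    sock.sendto(b"WHO_IS_THERE?", ("127.0.0.1", DISCOVERY_PORT))
    reply, _ = sock.recvfrom(1024)
    sock.close()
    return reply.decode()

# Bind the printer's socket up front so the probe can't race the listener.
printer_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
printer_sock.bind(("127.0.0.1", DISCOVERY_PORT))
printer_sock.settimeout(0.5)
stop = threading.Event()
thread = threading.Thread(target=serve_probes, args=(printer_sock, stop))
thread.start()

service = discover()  # the 'phone' finds the printer without any router
stop.set()
thread.join()
printer_sock.close()
print(service)
```

The real thing uses 802.11 probe and action frames rather than UDP packets, but the shape of the exchange – ask the air who is there, get back a service type before ever joining a network – is the same.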

So, why is this a big deal?

Well, if people support Wi-Fi Direct (and the early signs are good), it means we are getting close to the point where home automation might be possible.  Previously, had I decided to buy a washing machine or a thermostat which I wanted to control remotely, I would have had a few, quite unsatisfactory choices:

1. I could connect the device to my home WiFi network.  This would be a pain, often involving configuring IP addresses, or at least ensuring the device knew the network’s name.  It would certainly require a more complex display than my thermostat or washing machine currently has
2. My washing machine or thermostat might act as its own WiFi router.  In which case I would have to switch network every time I talked to it
3. My washing machine or thermostat might be physically wired to my hub.  Lots more unnecessary wiring running through my house
4. They could have mobile telephony built in, and I would have to connect to them via an external website (a security nightmare, not to mention expensive, and probably requiring me to pay a subscription)
5. It could all be done via Bluetooth.  But that has range issues.  And is slow.

With Wi-Fi Direct, the UI on the washing machine or thermostat would be a button.  The UI on the phone would be a drop-down list.  And I would have a connection.  In short, the biggest problems of home automation are resolved, and everything else becomes a matter of money and software.
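That button-plus-window flow can be sketched as a toy state machine – a hedged model of WPS-style push-button pairing, not the real 802.11 handshake; the class and the window length here are invented for illustration:

```python
import time

class PushButtonDevice:
    """Toy model of push-button pairing: the device (washing machine,
    thermostat) only accepts a new peer for a short window after its
    pairing button is pressed."""

    WINDOW_SECONDS = 120  # pairing window; chosen arbitrarily here

    def __init__(self, name):
        self.name = name
        self.button_pressed_at = None
        self.paired_peers = []

    def press_button(self):
        """The one-button UI on the appliance."""
        self.button_pressed_at = time.monotonic()

    def request_pairing(self, peer_name):
        """A phone asks to connect; succeeds only inside the window."""
        if self.button_pressed_at is None:
            return False
        if time.monotonic() - self.button_pressed_at > self.WINDOW_SECONDS:
            return False
        self.paired_peers.append(peer_name)
        self.button_pressed_at = None  # one pairing per button press
        return True

washer = PushButtonDevice("washing machine")
assert not washer.request_pairing("my phone")  # no button pressed yet
washer.press_button()
assert washer.request_pairing("my phone")      # inside the window: paired
assert not washer.request_pairing("burglar")   # window already consumed
```

The point of the sketch is that physical presence (being able to press the button) substitutes for typing a password – which is exactly why the appliance gets away with no screen and no keyboard.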

Training Marketplace

I am all too aware of the things I don’t know.  And, from time to time, I decide to do something about it.  Often, this involves me picking up a book and reading, and for pure learning of information the site Memrise is fantastic, but from time to time you can’t beat actually being taught something by another person.

We all have things we are capable of teaching – everyone has an area of expertise where they are better than most people.  In theory, therefore, we should all be picking up some extra cash by teaching others what we know.

But we aren’t.  Because at the moment, the cost of selling what we know, of reaching other people who want to know what we know, of realising there are people out there who want to know what we know, has been too high.

It used to be the case that we binned unwanted Christmas presents.  Or gave them to a charity shop.  Now we flog them on eBay.  By creating a marketplace, the cost of getting rid of things we don’t want has gone down.  But marketplaces are not just for physical things – AirBnB lets you sell space, and, in its own way, Intrade lets you sell knowledge.  Kaggle is doing a roaring business in selling algorithm improvements.

I’ve not yet found something similar for selling training.

What I’m thinking about is something that starts off looking like TripAdvisor.  You search for the skill you want to learn, and then narrow down the search by what you are willing to pay.  You might also select a location you want to learn in, and consider features such as whether you want to learn on your own or in a group, in person or via Skype or a webinar system.  I can even see the potential of offering automated online training courses here, with or without the benefit of a human advisor.

You then sort and list the opportunities – by price, distance or rating.  And you allow people to rate any training they experience.
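The search-narrow-sort step above can be sketched in a few lines – all the offer data, field names and weightings here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class TrainingOffer:
    skill: str          # what is being taught
    price: float        # per session, in pounds
    distance_km: float  # from the learner's chosen location
    rating: float       # average learner rating out of 5
    group: bool         # group course or one-to-one

offers = [
    TrainingOffer("python", 40.0, 3.0, 4.7, group=True),
    TrainingOffer("python", 65.0, 1.0, 4.9, group=False),
    TrainingOffer("python", 25.0, 12.0, 3.8, group=True),
    TrainingOffer("welding", 55.0, 5.0, 4.5, group=False),
]

def search(offers, skill, max_price=None):
    """Narrow by skill, then by what the learner is willing to pay."""
    results = [o for o in offers if o.skill == skill]
    if max_price is not None:
        results = [o for o in results if o.price <= max_price]
    return results

# Sort the matches by rating (best first), then price (cheapest first).
matches = sorted(search(offers, "python", max_price=50.0),
                 key=lambda o: (-o.rating, o.price))
print([(o.price, o.rating) for o in matches])
```

Swapping the sort key for distance or price gives the other listings the post describes; the ratings themselves would come from the review step.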

This, on its own, would be a fantastic resource.  A resource which could be monetized in all the usual ways – referral fees, sponsorship, advertising.

But you could take this marketplace further.  Perhaps there are skills people want which it might not always be worthwhile running training courses for.  Just as on the stock market you have both offers and bids, in a training marketplace you could have not only training offered, but training wanted – set up an advert saying where you are and what training you want when, and let prospective trainers offer it to you (and provide a payment system so that the site can rake off 5%).  Make things even easier by making it possible for people to forward these skill requests to other people they know – via Facebook, LinkedIn, Twitter, whatever – let people get their whole network involved, because for a market to work well, you want as many eyeballs as possible looking at it.
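The offers-and-bids side, with the site’s rake, might look something like this – a minimal sketch where the matching rule, names and prices are all invented:

```python
def match_requests(requests, offers, commission=0.05):
    """Pair 'training wanted' adverts with trainer offers for the same
    skill within budget, and work out the site's cut on each match."""
    matches = []
    for req in requests:
        for offer in offers:
            if offer["skill"] == req["skill"] and offer["price"] <= req["budget"]:
                matches.append({
                    "learner": req["learner"],
                    "trainer": offer["trainer"],
                    "price": offer["price"],
                    "site_fee": round(offer["price"] * commission, 2),
                })
                break  # first acceptable offer wins in this sketch
    return matches

requests = [{"learner": "Alice", "skill": "welding", "budget": 60.0}]
offers = [{"trainer": "Bob", "skill": "welding", "price": 55.0}]
matched = match_requests(requests, offers)
print(matched)
```

A real exchange would rank competing offers (by price or rating) rather than taking the first, but the bid/ask shape – and the 5% fee on the cleared price – is the whole of the business model.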


Wearing it on my sleeve


Wearable computing.  That’s what we called it, back in the late nineties when I was at university.  It seemed like a great idea: never being away from my computer, instant ability to connect to the internet.  We wondered about the best way to do it – I was fantasising about a belt which could hold a twenty-four hour battery pack, some sort of input device – perhaps using combinations of buttons to let me type – or maybe a joypad split 50:50 between my trouser pockets (though the thought of what using that might look like was an issue) and, of course, some sort of output device strapped to my arm.

Later in life I got a Nokia Communicator.  These days I have an ageing Android phone, and I’m well behind the times with wearable computing.  The phone is now doing the job of the wearable computer – it does everything we wanted and more, in a more sensible and more acceptable-looking way.  The reason I’m behind the times is that wearable computing has become fashionable.  It’s about being up to date, more than it is about the technology.  I’d bet the people drooling over the latest iPhone weren’t impressed by the technology like we all were a few years ago – they just wanted something new and cool.  And that’s cool like a pair of jeans, not cool like the demo of Xen on ARM I saw the other day.

But I don’t want to talk about the new iPhone, because it’s a step improvement, not a game changer.

I want to talk about the iPod Nano.  Because the iPod Nano has been changed from a square to a rectangle.  And this interests me no end – because you can no longer put it into a watch strap and use it as a watch.  And this seems to me to be a weird decision from Apple.

Now, I’m not going to say the iPod nano was the publicly acceptable face of a phase of wearable computing we haven’t yet reached – mainly because I never saw anyone wearing one as a watch.  But those watch straps sell.  And some people love their Nano watches.  And Apple must have been aware of this – because they sell the watch straps in their stores.

And I can’t believe Apple were unaware of the Pebble watch which was causing a lot of buzz earlier this year.  I can’t believe Apple don’t want a part of that market, somewhere down the line.

And so, the only reason I can think of for stopping people from using the iPod as a watch is that Apple have plans (possibly vague plans, but plans nonetheless) to enter that market.  Amongst the possible ideas I can think of are an iPhone on your arm (unlikely – watches make for ungainly telephones), an iPod touch on your arm (plausible) or an Apple TV on your arm (an interesting concept, bordering on the plausible).  Battery size would be the big issue for all of these, but we aren’t so far away from it being possible.

I began pondering the names:

iArm would cause trademark conflicts with ARM

iWatch sounds horrible – unless you’re talking about Apple TV on your arm

iBand has potential.  And brings to mind the various flexible displays which are coming close to commercial production, along with a clever magnetic ‘smart strap’ inspired by the iPad smart case.

If I’m right, and the iPhone is effectively uninteresting now, and the people pushing back the boundaries don’t feel like the iPhone is the place to work, then Apple have got to be looking at something new.  And Apple tends to do best when they become the first people to see the advantages of using new technologies to make a step change in existing markets (think of the micro hard drive for the original iPod, the larger sized solid state memory for the iPod nano, the capacitive touchscreen & multitouch for the iPhone or the retina display).  Right now the wearable watch is taking off (slowly, but step by step its happening) and a half decent low power flexible waterproof screen would be a game changer – especially if done with the design genius of Apple.

It’s only a thought, but Apple’s rise to dominance has always been about mobility and individuality.  We all know that the iMac and the Mac Pro are unloved, while the MacBook (especially the Air), the iPod and the iPhone are where Apple’s heart is.  Apple TV never really fit in this slot – it felt like a horizontal extension of iTunes rather than something genuinely new.  It isn’t Apple’s core.  An iWatch – that just might be.

Could Apple be getting out of the watch market, so that when they enter it, they are doing something new, on their own?

The Ubiquitous Tablet

I’m not going to say anything about the new range of Kindles yet – that deserves consideration alongside whatever comes from Microsoft and Apple in the next month or so.  I do want to talk about the trend which is becoming clear with the pricing of the Kindle Fire:  Tablets are becoming cheaper.  Tablets are going to continue to get cheaper.  We will stop considering tablets as expensive pieces of technology, and start considering them part of our lives – like we do with phones and wrist watches.

Here is my prediction:  Fairly soon, we will all own lots of tablets.  We will leave tablets littered around the house and workplace, and we will use whichever tablet is closest to us when we want to do something.

My key assumption here is that tablet UI development is not dead.  That one day, we will probably settle on a fairly common UI pattern for tablets – much as we have with the desktop metaphor for PCs – but it took us 15 years to firmly settle on the PC UI – and I’m going to guess there is another half decade before we come close to doing the same with tablets.

So what does this mean for how tablets should develop:

1.  We will not store our data on tablets.  We may cache our data on tablets, but the data will be stored in the cloud (or – possibly – on a server you own.  I think the cloud is more likely, but the geek in me likes the idea of being able to control my own data)

2.  Since I don’t think there will be just one brand of tablet, any more than there is just one brand of notebook (yes, you are allowed to use notebooks which are not Moleskines, just like you are allowed to use tablets which are not iPads), and since tablets will be used interchangeably, this brings native apps into question.  I don’t think native apps will die, but I think they will become less ubiquitous.  More and more, I foresee people using JavaScript- and HTML-based apps which they can access from any of their tablets.  Native apps will exist for a few purposes:

  • Games – assuming games are not streamed from your media centre box or somesuch, many games will remain native apps
  • Turning a particular tablet into a particular thing.  If I buy a 32″ tablet and decide ‘this will be my TV set’, then I might buy a specific native TV guide app for it.  In this case, the app will be an app you don’t want to move between devices – so it will be installed on a per device basis (perhaps with an access control list of approved users)

It is just possible that Android apps will become the default – but that seems unlikely.  Since you will want your personal collection of apps to move with you between devices (not having to install every app on every device), I think there will probably be initially space for an app which acts as an installer for these new apps in some way.  I don’t quite know how this will work – I’m guessing we’ll see it on Android first, followed by Windows, then Apple last.

3. Multi-account tablets are not the way forward.  With tablets just lying around to be used this seems non-obvious, but my thought is that tablets should be neither multi- nor single-account: they should have no account.  What I want is to go to a friend’s house I have never visited before, pick up his tablet and start using it – with all my apps there waiting for me.  If all the data (including your set of apps) is stored in the cloud, this isn’t a pipe dream; all it would take is some form of federated log in – I expect the best way to do this will be by bumping your NFC-enabled phone up against the tablet.
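A hedged sketch of that federated log-in step: the phone signs the user’s identity with a secret it shares with the cloud, and the account-less tablet just relays the token (think of the NFC bump as the hand-over).  Everything here – the secret, the token format – is invented for illustration; a real scheme would also need nonces and expiry to stop replay:

```python
import hmac
import hashlib

# Hypothetical per-user secret shared between the phone and the cloud.
# The tablet never sees it and never stores an account of its own.
USER_SECRET = b"alices-secret-key"

def phone_make_token(user_id, secret):
    """The phone signs the user id; this stands in for the NFC bump."""
    sig = hmac.new(secret, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{sig}"

def cloud_verify_token(token, secret):
    """The cloud checks the signature and, if it is valid, returns the
    user whose apps and data the tablet should fetch."""
    user_id, sig = token.split(":", 1)
    expected = hmac.new(secret, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None

token = phone_make_token("alice", USER_SECRET)
user = cloud_verify_token(token, USER_SECRET)
print(user)
assert cloud_verify_token("alice:forged-signature", USER_SECRET) is None
```

The tablet’s job reduces to passing the token along and caching whatever the cloud sends back – which is exactly why it can be anonymous, shared hardware.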

You might worry that not having accounts with passwords might mean tablets get stolen.  I don’t share this worry.  Tablets are cheap; for most of the tablets we will leave lying around and lend to friends, nobody will bother stealing them any more than they would steal the crockery from the dinner table.  Expensive tablets can still have some sort of PIN locking mechanism before they let you in.


In thinking about this new, tablet, world, I’m wondering how far off we are.  Right now, I can’t see any reason why companies wouldn’t stick six iPad minis or Nexus 7s in each of their meeting rooms, to allow people to get to that email they need on the spur of the moment without having to bring in their laptop (and all the associated distractions).  Since these are special tablets with a special purpose (sitting in a meeting room), we might also want to install some sort of video conferencing app on them – each person having their own camera and being able to look whoever is speaking in the eye (or quickly go to another speaker and send a sidebar message) might well make multi-site videoconferences work.

We haven’t yet seen the impact of the tablet on the world.  It will be a different impact from the PC’s – more like the impact of the mobile phone, but without needing the mobility, since ubiquity and cheapness work just as well.  My predictions are probably conservative – but we’ll see them happening, and they’ll probably begin happening in the next few months.  Give it five years, and the idea of not having a tablet to hand will be as strange as going anywhere without your mobile.


© Ben.Cha.lmers.co.uk