Archive for the ‘Google’ Category:


After one week of Android Wear

I decided right away to pick up an Android Wear watch – I’ve been interested in wearables since I bought a fitbit a year or so ago, and Android Wear seemed to be the next logical step.  The watch I bought was the Samsung Gear Live, but I suspect most of my comments are likely to be relevant for most of Android Wear.  So, after a week of playing with it, here are my initial thoughts:

Battery Life:  I take off my watch every evening, so plugging it into a charger has been no problem.  I suspect I might occasionally forget, so I’m possibly going to need a backup watch for those days.  The battery seems to easily last through my waking hours, but I’m a bit concerned about travelling with the watch as, when flying to the west coast, I’ve had days bordering on 24 hours long – which I’m not sure it will cope with.

Maps: Many people have identified the directions feature as one of the watch’s best.  And they are right – a brief buzz on your arm every time you need to turn a corner is much less obtrusive than having to hold a phone out in front of you.  But it isn’t suitable for driving, which is a shame, because that’s what the app defaults to.  I haven’t yet figured out how to get public transport directions on the watch – which is a big shame, because live bus times (along with directions to walk to the nearest bus stop) would be a big win.

“OK Google”: the voice recognition is quite impressive – and certainly up to sending text messages and (using the Bunting app) tweets.  However, voice control of the watch and phone leaves a little something to be desired.  To start with, you’re only ever going to say “OK Google” when showing the watch off, or in a car or your own home – so it is really best for hands-free usage, where you don’t particularly want to have to press any buttons on the watch to do anything.  It is rather good for starting music (“OK Google, play My Life Story” gets my phone playing mixes of Jake Shillingford’s finest), but you can’t pause or skip tracks by voice – which would be handy when driving.  It’s also not great for launching apps – I use an application called DogCatcher for playing podcasts – and when I ask my watch to launch it, it opens the Play Store page for the free version, rather than noticing I have an app with that name installed on my phone.

Range:  I’ve actually been quite impressed with how far away I can get from my phone and still have the watch working.  This has two advantages.  Most of the time, I leave my phone on my desk charging when I’m at work; the range means I can get notifications from it in nearby meeting rooms and the office kitchen, which is handy.  The range also means I can contact my phone from anywhere in my house… should I lose my phone, a quick ‘OK Google, play music’ will help me track it down.

Apps: I’ve tried a few so far.  Bunting, a neat tool for working with Twitter, and Evernote are my two favourites.  IFTTT lets you add buttons to trigger tasks – I’ve added a few for putting my phone’s ringer onto silent, for instance – but I’m sure more IFTTT functionality would make the watch more useful.  App-wise, there is lots of scope for more development here.

Notifications:  You probably want to cut down the number of notifications you receive on your phone if you use an Android Wear watch.  But that’s a good thing.  It is quite smart at filtering out notifications you don’t need.  Overall, notifications coming to the watch are the most important part of the Android Wear experience, and that is probably where app developers should spend their time improving their apps and integrating with Wear.

Fitness features:  The step tracker just works, and lets you set a daily goal.  Fine, but nothing special.  The heart rate monitor requires you to stand still while you use it – so not great for tracking how much effort you should be spending when running or walking.

The watch faces:  There are a selection of faces to choose from, and they are fine.  But there isn’t yet a face which displays an analogue clock with day and date on the screen.  I believe it is possible to write new faces, so I’m waiting for one to turn up which meets my specifications.  As far as moving between low power and high visibility modes goes, the watch is quite good at getting it right, but not perfect.  Since you need to be in high visibility mode to use voice commands, this is a bit of a distraction when driving.  The visibility of the watch screen in the sun isn’t great, but despite some sunny days, I haven’t yet needed to cup my hand over the screen just to tell the time.

Media Control:  This was the biggest surprise – a use I hadn’t thought of for the Wear.  I’m a big user of Netflix with my Chromecast at home, and of DogCatcher for podcasts in my car.  Both of these apps put up notifications while they are playing, to allow you minimal control… and in both cases these controls turn up on the watch face.  So should I want to pause a track or a film, I just tap my watch – no need to dig around for my phone.  While there is scope to improve these features further, they are already the functionality I use the most.

My conclusion is:  The watch isn’t perfect – and in a year or two, if the wearables sector takes off, we’ll probably have much better models which are more suited to day to day use.  That said, it meets my needs, and exceeds my expectations so far.  Most of the downsides I’ve mentioned are software issues, so I expect the watch on my arm to become more powerful as time progresses.  We are still in an early adopter phase for wearables, but at this point you can see a viable consumer product peeping out from the future.

Replacing RSS

The death of Google Reader has made it clear to me that there is a gap in the market.

No, not a gap for another RSS feed reader, that has been well and truly satisfied by the mass of new contenders for the previously unloved RSS Reader throne.

The gap is a gap for control over your information.

You see, the reason we have to assume Google have dropped Reader is that they think Google+ is where all your reading should happen.  Facebook is much the same.  Like Reader, it is both a reading and a writing platform.  And when you develop a platform you can read and write to, you have very little incentive to keep it open so that other people can read from it, or write to it, elsewhere.

Oddly, we tend to try to resolve this by imagining new read/write platforms that emulate Facebook and Google+, but are more open.  And any such system is destined to fail because of network effects.  Our friends and families are already using Facebook, so if we want to read what they have to say, and be read by them, then we have to go to Facebook.

But what if we decided to break the link between reading and writing?

Most of my Facebook posts are actually just copied from Twitter.  But Twitter is read/write, just like Facebook (and becoming more so every day).  Also, I don’t actually care where people read my posts – I just care that they are available to read.  But it would be better if they were also available to the world in other, better, more open ways.  Ways which are under the control of the author, not the reader provider.

Now, the way we used to do this was via blogs.  And blogs are a good thing – but I suspect blogging is to some extent dying, and the loss of Reader might potentially hasten this demise.

My suggestion is that someone produce a write-only microblogging, blogging, photo and video sharing platform.  The business model is simple – you charge people a recurring subscription for their account, and you ensure they are in control of their data.  But you make it easy for them to share the data they’ve provided.  You make the system able to post automatically to Facebook, Twitter, LinkedIn, Pinterest, whatever network you want.  You also make sure it provides RSS feeds, so that people who want to aggregate content can do so.  And you make sure it offers an ‘email people’ option, so you can push your content to friends and family who haven’t yet grasped what the internets are for.  You probably also want to allow people to read your content if they come to your site directly.  You could also provide the ability to be better at deciding who gets to see what content – by letting the publishing platform understand how private your various accounts on different social networking sites are.
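The RSS part of this is the easy bit to sketch.  A minimal feed generator, using only Python’s standard library, might look like the following – `make_feed` and the example posts are my own invention for illustration, not taken from any real platform:

```python
# Sketch: turn a list of posts into an RSS 2.0 feed that aggregators
# (the Reader replacements of the world) can poll.
import xml.etree.ElementTree as ET
from email.utils import formatdate

def make_feed(title, link, posts):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    ET.SubElement(channel, "description").text = "Posts from " + title
    for post in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = post["title"]
        ET.SubElement(item, "link").text = post["link"]
        ET.SubElement(item, "description").text = post["body"]
        # formatdate() produces the RFC 822 date format RSS expects
        ET.SubElement(item, "pubDate").text = formatdate(post["timestamp"])
    return ET.tostring(rss, encoding="unicode")

feed = make_feed("My Microblog", "https://example.com", [
    {"title": "Hello", "link": "https://example.com/1",
     "body": "First post", "timestamp": 1372636800},
])
```

The same post objects could then feed the Facebook/Twitter/email push paths, each of which would be a separate adapter on top of the one write-only store.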

This would never be popular.  Facebook and its kin are good enough for most people.  But every time they betray their users by tightening their lock-in, or tilting things further in favour of the advertisers, a social publishing platform would make it easier for people to begin to opt out.  And it would mean more content was available outside the gated walls of the big social networks, which would be good for those of us who prefer more powerful and personalised ways of accessing interesting things to read.

A debugging war story

The bug has been around for most of the last year.  It’s intermittent and a pain to replicate.  We hadn’t heard anything about it from our customers, but our test system threw the errors up when we performed our nightly runs.  And the failures were all slightly different; different tests were failing each time.

All we knew was that we were blue-screening, and that the problem was something to do with memory corruption.

The first thing to do was look at the crash dumps.  And the good news was that our drivers weren’t on the stack.  The bad news was that more and more testing showed the problem only occurred when our drivers were loaded.  You see, the problem was clearly a memory corruption – but the crash wasn’t happening at the time memory was being corrupted – no, the crash was happening when the memory got freed. You couldn’t tell who corrupted the memory, just that it got corrupted.

We tried driver verifier.  As soon as we put verifier to work on the driver we suspected, all the problems went away.

We did find that close to where the corruptions occurred there was often a memory pool allocated with the tag ‘HAL’.  What was interesting about this pool, which looked like some sort of mapping between addresses and page frame numbers, was that it seemed to have one entry too many – it had overflowed the space allocated for it.  The good news was it wasn’t one of our pools.  The bad news – I was beginning to suspect something like a double free of memory had caused this situation to arise.

Because we thought our driver might be causing this, we added all the instrumentation in the world to its memory allocations and frees.  But this didn’t show anything up.  The driver seemed to be working perfectly.

We were close to giving up.

One of our test engineers went through the test logs, and came up with a set of situations most likely to cause the problem.  With a bit of effort he made a reproduction of the bug that could happen in about an hour – much better than the six-hour repro we had earlier.  One of the things he found was that the issue mainly happened on Windows 2008 SP2 32-bit.

We then went through ruling out any number of potential hypotheses – everything from ‘Was a DVD in the drive at the time?’ to ‘Does it only happen on machines with 2 CPUs?’.  Once we had ruled out the impossible, whatever remained, however unlikely, was sure to be the cause.

Unfortunately, we ended up with the same suspicious driver.  And the same lack of clues.

Not knowing where else to look, I tried the reproduction on a checked build of 2008 SP2.  I didn’t hold out much hope.  We frequently use checked builds in developing our code, and this issue looked timing-specific – the checked build was going to play havoc with the timing.

I installed the drivers, rebooted, and:

Assertion failed: RaidIsRegionInitialized

OK.  Great.  What now?  Google was our friend.  Well, almost.  We found two results.  One was an MSDN page which didn’t mention anything about this.  The other wasn’t clear, but had a few lines of hope:

“You may need to call GetUncachedExtension, even if you’re not going to use it. IIRC, on Win7 Storport would allocate the DMA adapter object during the GetUncachedExtension context. Your adapter likely doesn’t have any DMA restrictions, so Storport probably doesn’t really need the DMA adapter object, which is why everything works without the call.”

http://lists.openfabrics.org/pipermail/nvmewin/2012-March/000075.html

And, as it turned out, we did.  We did need to call GetUncachedExtension, even though there was no reason for us to do so.

One line fixed our storport driver, removed the bugs, fixed everything.

A year of irritating, intermittent bluescreens gone.  And a good explanation to help us understand what, roughly, was happening: Microsoft Windows was freeing memory which we had never asked it to allocate.  More or less a double free.

It’s astounding how often my job ultimately comes down to being a Google Monkey.  But there was a lot of work to lead us to Google.  And some bad luck too – we used checked builds a lot, but – it turns out – not the 2k8 checked build, which was the one that had the assertion.  We only used that this time because 2k8 was part of the repro we found.

But figuring this out is something that our team (and it was absolutely a team effort) can be proud of.

Today is a good day to code.

The Rebirth of the PC

People are talking about the death of the desktop PC, while Rob Enderle is talking about its rebirth.  I’m conflicted about both these stories.  I think they are missing the trends which will really shape how we come to think of the PC in the future.

Looking at the market now, there are desktops, there are laptops, there are tablets and there are phones.  We also have vague attempts to cross genres, with Windows 8 trying to reach between tablet and laptop, while iOS and Android reach between tablet and phone.  But this isn’t the future, this is a market still trying to figure itself out.  I’m going to limit my predictions to a particular segment of the market – the segment which is currently dominated by the desktop PC.

The reasons we have desktops are:

  • They are more powerful than laptops
  • They are tied to a single desk, so that management can control where we work (and where our data stays)
  • They are more comfortable to use than laptops or tablets (at least for keyboard entry and pixel perfect design)

However, the game is changing.  The question of power is becoming moot.  Machines seem to be gaining power (or reducing power consumption) faster than applications are taking it up.  There is less and less need for more powerful machines.  And, where more powerful machines are needed in a company, it doesn’t make sense to hide them under individuals’ desks.  It makes more sense to put them in the datacenter, allocating processing power to the people that need it.

In short, we don’t need computers under our desks, we need reasonably dumb clients.  Network computers.  Oracle could have told you that years ago.

That said, dumb clients never quite seem to happen.  And the reason is that smart is so cheap there is no point in tying yourself down, limiting yourself to this year’s dumb.

Tying the computer to the desk is increasingly being seen as a limitation rather than a benefit.  It doesn’t just prevent working from home, it also prevents hotdesking and simple team re-orgs.  What is more interesting to companies are technologies which let them keep data in controlled locations – and again, the same technologies which let people work from home are also keeping data in the cloud, but locking it there so that it is harder to misuse.  This argument for the desktop PC is gone.

Comfort is more important.  But by comfort we specifically mean comfort for typists and mouse operators.  Tablets are going to cut into the market for mouse operators, and combinations of gesture and speech technologies will gradually reduce the advantage of the power user’s keyboard.  Text entry will probably remain best done by keyboard for the time being.  But the comfort aspects are changing.  My bet is we will see an increase in big screens, angled for touch rather than display, while tablets are used for on-screen reading.  Keyboards will remain for people who do a lot of typing, but on-screen keyboards will be commonplace for the everyday user.

So – by my reckoning we will have (probably private) cloud data, and applications running on virtual machines which live in the datacenter, distributed to big screens (and still some keyboards) on users’ desks.

This isn’t a particularly impressive point of view.  It’s the core of the business plans of a number of companies already playing in that field.

But what is missing from this view is the PC.  As I said: there might be big monitors acting as displays for clients, but client doesn’t mean dumb.

Smart is cheap.  We could probably power the monitors running smart clients – and some local personal, and personalized, computing – from our phones.  We could certainly do it from our laptops.  But we won’t.  Because we won’t want to become tied down to them.

We will want our tablets and laptops to be able to carry on doing what we were doing from our desktops – but that’s an entirely different issue.  Indeed, since I’ve suggested we might want to run some personal programs locally, it suggests we need something on our desktop to mediate this.

It has felt, recently, that the IT industry is moving away from letting us own our own devices.  That the Apples and Microsofts want to control what our computers run.  Some have shouted ‘conspiracy’, but from what I know of the people making these decisions, the reason is hands down ‘usability’, tied with ‘security’.  However, there is a new breed of entrant in the market which cares little about this usability thing – the Raspberry Pis and Android dongles.  Smart, but cheap.  You – not any company – control what you do with these devices.  They are yours.  And in a company environment, they can quite happily sit in a DMZ while running software that gets full access to the corporate intranet.

The desktop computer could easily be something along these lines.  No need to make the devices limited.  No need to limit what they are able to do.  All you need to limit is their access to privileged data and privileged servers.  These devices become the hub to which you connect whatever hardware and whatever display are appropriate for the job.  I can keep my keyboard.  Designers can have their Wacom digitisers.

But you also make sure that these devices can be accessed from outside the corporate network – but only the things running locally on them.  This might require a bit of local virtualization to do well, but Xen on ARM is making significant progress – so we’re near.

This is my bet about the desktop.  Small, smart, configurable devices tied in with private cloud services, and whatever UI hardware you need.

But my next bet is we won’t even notice this is happening.  These devices will start turning up in the corporation without the CTO or CIO giving permission.  At first it’ll be techies – and the occasional person using an old phone or tablet as a permanent device.  But gradually it will become more common – and devices will be sold with this sort of corporate use in mind.  You’ll get remote client software preinstalled, with simple user interfaces for the common user.  They’ll come into their own as corporations start mandating the use of remote desktops and sucking everything into the cloud – taking advantage of the same networks that the engineering services teams have been forced to make available for phones and pads.

The desktop PC will stay.  It will stay because we want more, better, personal control of our work lives.

When the network computer does, finally, make the inroads we have been promised, it will have been smuggled in, not ordered.

(Oh, and we won’t call them desktops, we won’t call them PCs.  We will think of them as something different.  We’ll call them dongles, or DTBs (Desk Top Boxes), or personal clients, or something else.  This is going to happen without anyone noticing.  It might happen differently from the way I’ve suggested, but ultimately, our desktops will be low powered, small devices, which give users more control over their computing experience.  They’ll probably run Linux or Android – or maybe some MacOS/iOS variant if Apple decide to get in on the game.  And while companies will eventually provide them, the first ones through the door will belong to the employees.)

I don’t want my, I don’t want my, I don’t want my Apple TV

In the late nineties, I worked for a dot com startup doing some early work in the digital set top box space.  Video streaming, personalization, web browsing.  It was the sort of thing which only became popular in the home about a decade later.  We were too early (and probably too incompetent).

These days it’s popular to think that the TV set is due for a change.  Some sort of revolutionary rethinking in line with what Apple have done to the tablet computer, the phone and the mp3 player.  Apple are usually considered to be the people who will lead this revolution (the rumours it will happen any day now have been around for years).  Others think Google might manage it.  And I’ve suggested Amazon could be the dark horse.

But the more I think about revolutionizing the TV, the more I realise, I don’t want it to happen.  At least not like a TV version of the iPhone.

There are a few things I have realized about the television:

1. It’s a device for multiple people to watch at the same time
2. It’s about showing pictures and playing sound.
3. UIs for TVs are hard.  And generally ugly.  Your best bet up till now has been to control things with an IR remote control.  Ownership of the remote, and losing the remote, have become the clichés of ancient stand-up comedy routines.  We are just about entering the period when people might consider replacing their remote controls with mobile phones and tablet computers.
4. No one wants to browse the internet, read their email or post to twitter through their TV.  We might want to browse the web in order to get to YouTube or some other video playing site, but generally people prefer to read things they can hold in their hands.

It has gradually become clear to me that the home user isn’t going to be looking for a magic box – or for extra capabilities of their TV – which will allow it to take advantage of all the new content opportunities the web provides.  No.  They are just going to use their TV to watch programs with other people, together.  They won’t be installing apps on their TV. They won’t be browsing the web on it.  And they won’t be controlling their viewing with the TV’s remote.  They will be doing everything from their phone or tablet.

Think about it for a moment.  You can already watch TV on your phone.  And with airplay you can send anything you’re watching to your TV.  This is fine for an ‘all Apple’ household, but until lots of people get in on the game, I don’t see this as the future.

No, the future comes with WiFi Direct and Miracast (plus a lot of extra work).

I’ve explained WiFi Direct and Miracast elsewhere, but to put it simply: Miracast lets you beam video from your phone – or from any other device – to your TV.  It’s like a wireless HDMI cable.

So imagine, if you would, the TV of the future.  It will be a box with no buttons, just a lovely display and a power supply.  Inside, it will be WiFi Direct ready.  (Hopefully WiFi Direct has some sort of wake-on-LAN functionality, so that you can plug your TV in and put it in a low power mode awaiting a connection.  If it doesn’t, we’ll stick a discrete pairing button on the top.)

You come in with your phone, or tablet.  You install an app – which might be something like iPlayer, Hulu or Netflix, but might also be a specialist app, perhaps ‘Game of Thrones’.  How you pay for this (one-off, or subscription) is up to the app publisher.  The app publisher can also decide if the app contains all the audio/visual data, or if the data will be streamed from some external source.  You play the app, and are offered a number of screens to play the video on.  You select the TV and you are away.  The video is streamed from your phone – or, better, straight from the source – to the TV set.

This world is already (just about) possible with Miracast.  But it isn’t quite enough.  Here are some ways we can improve on things.

Your friend is also watching TV with you, and decides to turn the volume up a bit.  The volume is a feature of the TV, so your friend needs to tell the TV to play sounds a bit louder.  So your friend reaches for his phone.  Now, he doesn’t live at your house, so he won’t have an app for controlling your TV.  There are two solutions:
1. We insist every TV provides a common interface, so that lots of people will make TV control apps.  In which case, he can just pair with the TV and control it that way.  But this sort of standardisation doesn’t seem to work well, so the odds are low.  My preferred alternative is to encourage the following:
2. When your friend pairs his phone with the TV, he is told there is a web service available (providing a web server ought to be a common feature of WiFi Direct devices that need to be interacted with) and goes straight to the front page.  At the front page he is given a web UI, and a link to download a better app from whichever app stores the TV company have chosen to support.

What would be even better is if the web app worked by communicating with a simple web service.  Each web service could be different, but so long as they were simple, hackers could work out how they functioned.  And as a result could develop control apps which work with hundreds of different TV sets – just like multi-set remote controls work today.  In short, everyone would have an app which would quickly be able to decide how to control whatever TV they came into contact with – while also having a web app UI workaround in case of failure.

So, this is fine for controlling the TV.  But what about if my friend wanted to pause the show in order to say something?

My suggestion is that along with WiFi Direct linking devices, you want to make some other information available.  Possibly provided by a web service as above – but ideally in a more standardized way.  I would want the TV to tell me which device was currently streaming data to it.  And I would want to be able to join that WiFi Direct group, to communicate with the sender.  Finally I would like the sending device to also provide me with a web interface – so that I could control it remotely too.

In short, the TV becomes far more dumb than your average Apple TV box is today, and you rely on the smarts of the tablets that control it.  Especially since the apps on the tablets can ensure a far better user experience in the process.

From here we need to consider other devices.  I’m pretty sure the PVR as it is will die.  Broadcast TV will gradually wither, and the PVR won’t be supported.  But until this happens, the PVR and cable box will be part of the home entertainment system.  And increasingly we will get video servers which will hold the video data of films we have purchased – or even, perhaps, caches for external video providers.  In any event, we will control these devices the same way we control the TV: pairing via WiFi Direct, then a web UI and potential app downloads to get to the functionality.  These boxes will stream the video straight to the TV.

We also need to consider audio.  Right now many homes have a TV with speakers, and also a HiFi of some sort.  Let’s rethink this: add a few wireless speakers, and let them be sent audio by a protocol similar to Miracast (but perhaps with some additional syncing technology).  Your phone could even become a remote wireless speaker – especially useful if you want to attach some headphones without laying out wires.

At this point we have everything we need to allow app writers to revolutionise television.  I still feel there is a lack of a central TV guide – but perhaps that will be forthcoming now we know we have personal touch interfaces and no longer have to assume everything will be controlled via the screen.

Whatever happens, we don’t need smart TVs.  We just need good displays, and sensible use of wireless technology.  The Apple TV as it stands is both too smart and not up to the job.  Let’s make it simpler, and make the interactions between devices work well.

The Art of Being Invisible


Recently Citrix commissioned a survey into the public perception of cloud computing, and it went ever so slightly viral.  Which was presumably the intent – to get magazines and websites to publish articles which link Citrix with cloud computing, rather than actually to learn anything new about the cloud.  I have nothing against this – Citrix is a big player in the growing cloud, but anyone who hasn’t noticed this (and many haven’t) probably still considers them to be ‘those Metaframe people’ – so any PR that works is probably a good thing.

What I found out from watching this unfold was:

Not many people writing articles about surveys actually link to the original source

Even when I got to the original source, I wasn’t able to locate the survey questions people were given, or the responses to those questions – just the results, as digested by the company.  Which means I have absolutely no idea of the context in which to put the results.

Most people who reported on the survey didn’t seem to care.  They pretty much parroted the press release data.  Again, as I would have expected – that seems to be what tech journalism is all about.  But it would be nice to see more people out there who get some interesting data and actually think about it – and its implications – before writing anything.

And finally, as the survey suggests:  Not many people know what cloud computing is.

Which isn’t a surprise, because it is a made-up term which loosely describes a whole bunch of tech industry trends.  In short, I think we can safely say it comes from those vague technical drawings of infrastructure where you might draw a few data centers, each with a bunch of servers and storage inside, then link them by straight lines to a picture of a cloud – often with the words ‘The Internet’ inside, to suggest the data centers were connected together via someone else’s infrastructure.  As people increasingly host their technology on someone else’s infrastructure, rather than in bits of a datacenter maintained by company employees, we say that technology is in the cloud.

The public don’t know about this.  And frankly they don’t care.

And also they shouldn’t.

My day job is developing a key part of the infrastructure for the cloud.  Without it big parts of what we call the cloud wouldn’t work – or at best would have to work in a very different and less good way.  You will almost certainly have used part of this product in some way today.  And you probably don’t even realise it, or care.  So why don’t I care that no-one knows about the cloud?  Why don’t I wish more people would love my work and sing its praises?

Because, if I do my job well, my work is invisible.  Every time you notice anything about my work, any time you worry that it exists in any way, shape, or form, you’re keeping me up at night because I’m not doing my job well.

I’ll give you an example:  Electricity.  To get electricity there are power stations, huge networks of wires, substations, transformers, all ending up at a plug socket in your house.  You don’t notice these.  You don’t care.  Unless – that is – it all stops working… or perhaps you have some technical problem like trying to run a 110 volt appliance in the UK.  If electricity wasn’t invisible – if we had to ring up and request enough power for our TV set to run, then we would care more – and enjoy our lives a little bit less.

Cloud computing is actually all about making computing into a utility, just like electricity.  It is about not having to worry about where servers are.  It is about not having to worry about where your data is.  Now, some people have to worry about electricity – if you’ve ever set up a data center, you’ll know that you need to start caring about all sorts of issues which don’t worry the home owner.  Similarly, if you work in the IT industry, you’ll have all sorts of worries about aspects of cloud computing which end users simply shouldn’t ever have to care about.

So if you ask a man in the street about the cloud – he should remain more worried about the sort of cloud which rains on him.  And, to determine how worried he should be, he’ll probably ask Siri on his iPhone.  And not care about how Siri takes his voice input, and uses vast numbers of computers to respond to it with data generated by meteorological offices who process big data over vast grids of computers.  He won’t worry about anything which goes on in between, any more than he worries about how to charge his iPhone when he gets home.

Consumers already have their heads in the cloud.  They don’t realise it.  And they don’t care.  Because they are already used to it.  To them the cloud isn’t anything new, it’s just how things are these days.  As for companies and programmers – we need to make the cloud less and less obvious, less and less difficult.  One shouldn’t need to think about doing something in the cloud, because that should be the easiest way to do things.  We have to take the blocks of code we put together, and make them blocks which work across the cloud as seamlessly as they currently work across CPU cores.  We need to stop thinking in terms of individual computers and individual locations – and those of us who build the code need to make it easier and easier to do this.
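To make the “blocks which work across the cloud as seamlessly as across CPU cores” point concrete, here is a minimal Python sketch (the `word_count` task and the documents are invented purely for illustration): today the parallelism seam is an executor interface, and a distributed pool could, in principle, sit behind the same `map` call.

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(text):
    # The unit of work: count the words in one document.
    return len(text.split())

documents = ["the cloud is just other computers",
             "blocks of code should not care where they run"]

# Today: parallelism across local threads/cores.  A distributed
# executor exposing the same interface could replace this line
# without changing the calling code.
with ThreadPoolExecutor(max_workers=2) as executor:
    counts = list(executor.map(word_count, documents))

print(counts)  # [6, 9]
```

The design point is that code written against the executor interface never names individual machines – which is exactly the kind of invisibility argued for above.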

We are already on our way.  But would I want to be the number one cloud computing company?  No, I would want to be the number one computing company – because once everyone is in the cloud, the cloud vanishes, and we are back playing the same game we always played.

 

Rethinking Social Networks : How To Replace Facebook


It seems like Facebook has got sufficiently sticky that we will never be able to usurp it from its position.  Altavista felt that way once, but all it took was for a new startup to come along and do things better.  Let’s say we want to usurp Facebook – how would we do it?

The first thing we have to do is make money.  Even if we want to get VCs involved, I think they would still want to see some sort of monetization plan.  I also think that, right now, app.net are right – you don’t want the money to come from adverts.  However, app.net seem to suggest the solution is charging the user a subscription.  I’m not sure about that.  If you want to create the ideal Facebook killer, you want to get lots of people there – and a subscription is a gatekeeper.

I have a different idea to monetize the social network (a plan which, incidentally, encourages it to be a better platform too):  the network is funded by an app store.  This might seem odd, until you realise that almost all publishing on the network could be an app – and that only apps sold via the app store could interact with the various APIs.  Apps could either be ad-supported (in which case, we would take a cut of the advertising), in-app purchase supported (in which case we would take a cut of the purchasing), or price supported (we get a cut, you get the picture).

To explain further – we would create a social network where you would get access to read anything posted to you – and perhaps to post twitter-size posts to up to 100 followers.  This would suit most people.  If you want to add pictures to your post, you’ll need to buy the ‘add pictures’ app.  If you want to have more followers, or to be able to push your posts to particular people, we’ll provide apps.  Want to write longer articles?  We can provide the means.  Want to do something we can’t even think of?  We’ll make an API so that other people can do it – so long as they follow the rules of our framework (and our app store guidelines).  Want to use your phone to read?  You can do it for free from our mobile site, or, if you need an app, there will be one (but it will be ad supported, to pay for the cost of development).  Want to post from your phone?  That’ll be an in-app purchase.
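The tiered-feature idea above amounts to gating API access by purchased apps.  A minimal sketch – every name here (the `User` class, the feature strings) is hypothetical, invented purely to illustrate the shape of the check:

```python
# Free tier: everyone can read, and post short messages.
FREE_FEATURES = {"read", "short_post"}

class User:
    def __init__(self, purchased_apps=()):
        self.purchased_apps = set(purchased_apps)

    def can_use(self, feature):
        # A feature is available if it is free, or if the user has
        # bought the corresponding app from the network's app store.
        return feature in FREE_FEATURES or feature in self.purchased_apps

basic = User()
power = User(purchased_apps={"add_pictures", "long_articles"})

print(basic.can_use("short_post"))    # True
print(basic.can_use("add_pictures"))  # False
print(power.can_use("add_pictures"))  # True
```

Third-party apps would sit behind the same check – which is what ties the app store to the APIs as the single monetization point.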

Most of these apps would be a one-off purchase.  We might also charge for storage above a limit (I’ve long believed storage should usually be a one-off purchase price – if you’re making people rent storage, you should probably be thinking about making people pay for something like data transmission instead).  We might charge a recurring fee for some ‘enterprise level’ features – but only to skim lots of income from big companies.  People will keep coming back.  People will want multiple accounts.  Each account will need apps.  We will keep making money – but we will be making it from our biggest fans – from the people who want to pay us.

So we have a monetization plan.  How do we get people to the new service?

The answer is:  we make it easy.

Facebook seems to provide a few services

  • Find and keep up with old friends (or at least don’t lose track of them totally)
  • Keep up with current friends, and arrange activities
  • Stay in touch with celebrities
  • Do some amount of microblogging
  • Play multiplayer games
  • Store & publish photos

My guess is we don’t want to replicate all of these – at least not to attract people.  I suggest right off that we don’t worry about the finding and keeping up with old friends aspect.  That’ll come to the new platform when enough people are there.  Celebrities will do the same.  We want to be a good platform for them to blog on, but not spend our time trying to encourage them.

The app store monetization strategy suggests games are a good thing to support.  It isn’t my interest, but it will attract people.

The other area to support strongly is microblogging and publishing of photos.  Now this is harder – why blog on a platform which no-one uses?  My answer is we make it better, and we make it easier to share.  Anyone can read things you publish to the world (and there is no reason why you can’t syndicate such content to other social network feeds, along with a linkback).  What if you just want to publish to a small group?  You could always use email to share your content.  Not just to link to our site, but to share what you are writing.  We have no need for people to come to our site – unless they want to use it to publish – so why not work on making the mailbox the hub of the social experience?  Of course, people are not going to want your tweets in tiny one line emails, so how about trying to create some sort of ‘what I’m up to’ life journal digest you can send out.  Tweets for followers, longer blogs & photo albums to email readers.

Of course, any email address we send your digest to, we remember.  If you come to our site later, and log on with that email address, it will be pre-populated with all the people who have sent you their digests.  Because each email would have to offer you the opportunity of turning the digests off, the link to do this would encourage you to log in with your email address – and show you what is available.  You might also consider allowing the links to take you directly to your own page (in the zero-login, cookie-only format I described a few days ago) – this might have problems though, as I suspect these links and emails might be very forwardable.  That said, commenting by replying to emails, Facebook-style, would have to be supported.
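The pre-population mechanism might look something like this sketch – the function names and addresses are invented for illustration, not a real design:

```python
from collections import defaultdict

# sender -> set of recipient addresses, recorded as each digest goes out
digest_recipients = defaultdict(set)

def record_digest(sender, recipients):
    for address in recipients:
        digest_recipients[sender].add(address)

def prepopulate_feed(new_user_email):
    # Everyone who has ever sent this address a digest becomes a
    # suggested contact when the address is first used to log in.
    return sorted(sender for sender, rcpts in digest_recipients.items()
                  if new_user_email in rcpts)

record_digest("alice@example.com", ["carol@example.com", "dave@example.com"])
record_digest("bob@example.com", ["carol@example.com"])

print(prepopulate_feed("carol@example.com"))
# ['alice@example.com', 'bob@example.com']
```

The point is that the social graph is bootstrapped from email traffic the users were already generating – no cold-start empty feed.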

This wouldn’t be an overnight success – but it would provide a pathway to something which could grab people virally, and wouldn’t require people to use the site themselves unless they wanted to.  And to get people to want to use the site?  Well, it would simply have to be better for them to use than Facebook - and given how hard Facebook seems to be trying to drive people like me away, that can’t be too difficult.

 

Rethinking Social Networks : The App.Net move


Social Networks are high in people’s minds right now.  Twitter is annoying its developers, trying to become an island rather than the convenient platform it used to be.  Facebook is a mess, a jumble of confusing options, an unfriendly interface, and adverts jumping out at every corner – it reminds me more of the pre-Google Altavista than anything else.  And there is a reaction to this.  The Diaspora project seems to have gone nowhere, but newcomer App.Net has hit a Kickstarter target – and, by getting enough people to make a cash commitment, has become interesting.

App.Net makes two points:

  • At the moment, the customers of social networking sites are not the users, but the advertisers.  So long as the users are tied in, they will remain, and their eyeballs will be able to be exchanged for the contents of advertisers wallets.  A social network designed for users needs to be funded by the users – they need to be the customers
  • What makes a social network work is when it ceases to be a website and becomes a platform

It’s worth describing two geek fallacies before we continue:

Fallacy 1:  Any good internet project is distributed in nature.

This is the flaw of Diaspora.  Geeks love hard distributed systems problems, but they take away from the user the simplicity of going to a single place – the same place as everyone else – to get what they want.  Distributed technologies such as social media require people to provide servers – but these servers have to be paid for, so people will charge.  Charging isn’t too bad, except any such server must, by its nature, be a commodity; there is little room for differentiation.  It is hard to see why anyone would want to get into this game – see the decline of Usenet servers as an example.

Fallacy 2: It is all about the platform

UIs are for wusses.  What matters is the clever technology underneath.  This is both true, and false.  What matters to most users is getting the features they are looking for – it doesn’t matter if the backend has some hyper-clever architecture or runs in Spectrum BASIC, so long as it does the job and keeps out of the way.  Geeks think differently – they want to know that their lives are going to remain easy as they interact with the system over time, so they design platforms which you can build good products on top of, but don’t care that much about the product.  I fear this might be what app.net are doing.  I hope I’m proven wrong.

Where app.net have been clever is in using Kickstarter for some cash.  Not because they needed the cash (if you can convince that number of individuals to pony up $50, you can probably convince some investors to do likewise).  Getting the cash gave app.net some publicity, because Kickstarter is hot right now, and social networks are causing consternation – and for a social network to get going, it needs publicity.  But it also got a number of people to tie themselves into the service – and the sort of people who would fund a new social network are early adopters, the thought leaders in the social sphere, and this could be very important to app.net’s growth.

But it could be more important to the people who paid for the developers licence.

Right now, if I wanted to try something new and interesting in the social world, I would seriously consider tying it in with app.net – because it’s a small market of exactly the sort of people you want playing with your fresh idea.

I don’t think there is anything special about app.net in itself, but I expect it to be a breeding ground for interesting social graph based applications.  So in app.net’s case, perhaps by building the platform, they are doing the right thing, even if it isn’t the right thing for them.

Incidentally, I have a number of thoughts about the next moves that could be made in social networking – I’ll be writing about them over the next few days.

Use Case : Information Capture

When stumbling around trying to figure out which combination of tablet / laptop / phone makes the most sense for me, I find it useful to consider the use cases which, at the moment, my devices don’t quite meet.  The most obvious thing that I’m missing is a good information capture device.  Here are the situations where it would be useful:

I’m called into a meeting – I need to be able to access web sites during the meeting – and perhaps run GoToMeeting or WebEx, so it’ll need to have decent web browsing facilities.  I’ll also need to be able to access (and maybe write) emails.  Finally, I’ll want to be able to type notes as quickly as possible, without looking at the keyboard (so I’ll need the sort of feedback which only a physical keyboard can give, and I’ll want a full size keyboard, for comfort).

I’m at a conference.  I want to take notes in all of the sessions. So I need good battery life, and I also need to be able to type with the device either in my hands or on my lap.  So far, I’ve not managed to find a device which is as comfortable for taking notes on as a keyboard, and many tablets with keyboards won’t rest nicely in my lap.  Later, I’m going to want to transform these notes into documents.

After a day at a conference, I’m in my hotel room.  When I travel, I prefer not to take my main computer with me – I prefer to have something cheap, something which doesn’t have all my data on it (I have backups, so I could stomach the loss of my data – and a good quantity of my data is in the cloud, but still, I don’t want the inconvenience).  Recent experience has shown that my windows tablet (with docking station and bluetooth keyboard) does well here (though it isn’t that cheap – still, any tablet & wireless keyboard combo would clearly do almost as well).

As an added benefit, it would be nice to be able to make doodles and drawings to accompany my notes, using a pen or stylus.

My suggested solutions are:

Carry a laptop, and some form of extended power.  This would be a good reason to buy a MacBook Air.  But an Air, or an ultrabook, would not meet my criterion of being a cheap device.

Carry a netbook and some form of extended power.  Then also use a tablet for the things tablets are better for.  The problem here is that netbook keyboards are not as big as I would like.

Carry a tablet, and then use one of those pens which record what you write for making notes:  This isn’t a bad plan – although I doubt the OCR capabilities of the pen’s software.  And we could probably achieve the same thing with pen, paper and a travel scanner.

What would seem to me a better idea would be a tablet case which has a built-in keyboard, and is designed to work as a laptop.  Extra marks if it can contain extra batteries to increase tablet lifetime.  We’re not just talking a tablet dock, we’re talking about something specifically designed for using on your lap, like a laptop.  The idea of it being a case more or less rules out Android devices – they are just too different from one another; you would end up with some half-functioning system equivalent to those suction pads you use to attach phones and GPS units to car windows.

The right sort of thing already exists for the iPad – consider, for example, the keyboard case from clamcase.com.  There is a Japanese company selling a notebook case which also sports a battery, but this doesn’t seem to have worldwide availability yet.

I wonder, however, if this is an opportunity for Windows OEMs who were blindsided by Surface.  Surface, no matter how funky the magnetic keyboard thing is, won’t work from your lap.  Yet a Windows RT device would meet all my needs as described above.  A WinRT laptop would have a niche that Surface can’t quite touch – and a slightly higher spec variant could easily come with a stylus.  Ultimately, a device like that might mean I would rarely, if ever, need to use a proper laptop for anything.  And also that my life in conferences, meetings, and bland corporate hotel rooms would be much improved.

 

Thinking About Filters

I recently wrote about the idea that one might prefer to use a filter, rather than an inbox.  For clarity, I thought I would add a few additional thoughts.

There is already one filter which is fairly widely used – Google News.  It’s my opinion that an inbox / filter of the sort I am describing would end up looking quite a lot like Google News.  As the filter scoured the web (or took in the results of other web scourers) for content, it would collect similar content together, much as Google News collects news stories together.  As a user, I would choose one of these areas to ‘zoom in’ on, which would give me access to a priority-ordered list of potential content to read.  My choice would then both help identify the sort of thing I wanted to read more of, and eliminate identical and nearly identical articles.
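The “eliminate identical and nearly identical articles” step could be sketched with a crude word-overlap (Jaccard) measure – a real filter would use something far more robust, and the threshold here is an arbitrary assumption:

```python
def words(text):
    return set(text.lower().split())

def similar(a, b, threshold=0.6):
    # Jaccard similarity: shared words / total distinct words.
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb) >= threshold

def collapse(articles):
    # Greedily assign each article to the first group it resembles,
    # so near-duplicates end up shown once.
    groups = []
    for article in articles:
        for group in groups:
            if similar(article, group[0]):
                group.append(article)
                break
        else:
            groups.append([article])
    return groups

articles = [
    "new tablet announced with keyboard case",
    "keyboard case tablet announced new",      # near-duplicate
    "bus times now live in the city",
]
groups = collapse(articles)
print(len(groups))  # 2
```

Clicking into one of the resulting groups is the ‘zoom in’ action described above, and doubles as a signal about what you want more of.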

With this in mind, I might think a page would have different sections such as ‘incoming mail’, todos, and groups of things to read.  Exactly what appeared in those groups would depend on a large number of factors, including time of day, day of week, what I’ve been doing recently, where I am physically located, and which computer I’m using.

Search would stop being ‘find data in an index’ and would become ‘open certain parts of the filter, and bias towards certain terms’.  Search terms would still be biased towards things that the filter has learned about you (so a UK-centric user searching for football would find information about soccer rather than American football – or certainly ranked higher).

I’ve talked about using external services to get more data – in effect it would work like this:  in the general use of the filter, I would be building up a personal index of pages I visit (or read RSS feeds of, say) and ‘close’ pages – pages closely linked to those.  When I search, I would open the filter to find examples of those pages which contain those search terms.  However, the filter would also contact some known external sites – let’s say Google and Wikipedia – to see what pages they have to offer.  The filter would then read those pages and add them to the general corpus of pages it knows about.  They would then have the chance of showing up in the filter’s search (but would not necessarily show up if you already have content which looks better for your needs).

I said that the filter could run on your home PC, or in the cloud.  In retrospect this was wrong.  It would have to run in the cloud.  I have a large number of devices, and more and more I want all my devices to sync together – the cloud is the place where this can be done.  Similarly, some of my devices are too dumb to run a sufficiently complicated filter, so again, we are looking at something running in the cloud.

When we start talking about things running in the cloud, a threat looms – what makes this different from, say, Google or Facebook?  I think my answer is that Google and Facebook hook the user by providing useful services, and in return get lots of data about the user.  This data is then used to sell targeted advertising.

In the filter model, things are slightly more complicated.  The filter begins to act a bit like a huge distributed market – people will push an advert (or what I’m going to consider ‘sponsored content’) to the user, offering to pay a certain amount if the user clicks on it.  The user (or, more reasonably, the user’s filter) returns how much it is willing to charge if the user clicks on the content.  For other content, the user may offer to pay for it, and the content provider may set a charge.  In short, we are instituting a micropayment system – one which doesn’t require the user to actually put any money forward, if they are getting enough sponsored content that they are prepared to read.  The filter can increasingly make it clear that watching adverts is necessary if the user wants to continue reading things – or that the user can inject some cash of their own.

In any event, the advertisers will be paying the user rather than the equivalent of Google (the filter service provider).  The user will then pay the filter service provider from their amassed micropayments.  What this ultimately means is that the user of the filter becomes the customer – so the filter service is set up with the customer (and not the advertiser) in mind.  Indeed, it becomes the filter service’s mission to ensure the customer sees as few adverts as possible, while enabling them to continue viewing the type of content they want (if the customer wants to see newly released movies, they are going to be watching lots of adverts, or injecting quite a lot of their own cash).

A final side effect of this is that whichever company builds a filter like this will become a major micropayment player and clearing house.
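The bid/ask exchange described above reduces to a toy settlement rule.  All names and amounts below are invented for illustration; no real payment system is implied:

```python
def settle(offer, ask):
    """An advertiser offers `offer` per click; the user's filter
    demands `ask`.  The sponsored content is shown (and the user
    credited) only when the offer covers the ask."""
    if offer >= ask:
        return ("show", offer)
    return ("hide", 0.0)

# The user's filter accumulates micropayments across many offers.
balance = 0.0
for offer, ask in [(0.05, 0.02), (0.01, 0.02), (0.10, 0.02)]:
    decision, credit = settle(offer, ask)
    balance += credit

print(round(balance, 2))  # 0.15
```

The balance is what the user later pays the filter service provider from – which is the inversion that makes the user, not the advertiser, the customer.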

It occurs to me that this – rather than me-too plays such as Google Play or Google+ – is what Google should be working on now.  Whoever does manage to introduce the right type of filter engine could easily out-Google Google, just as Google out-Altavista’d Altavista.

© Ben.Cha.lmers.co.uk