Robot IFA, here we come? 

Just in case you hadn't noticed, or simply had better things to do, the Beeb is this week carrying an extended focus on the subject of robots, artificial intelligence, and how more and more of human functionality is being taken over by the darn things.  We've seen them handling engineering work on pipelines, pushing crates of milk around warehouses, loafing around visitor centres and assisting in care homes.  As if the elderly in our society actually need less human contact.

Throughout, the Beeb's mantra is at least consistent.  This technology is going to put people out of jobs, so the line runs.  The list of jobs becoming robotised is increasing all the time, but the perplexing thing is that we are told that this development is 'inevitable', as if robots are a kind of naturally-developing by-product of evolution, which simply can't help itself.  And it is worth observing that, underpinning this line of argument, is the presupposition that human beings are merely machines, that the human 'mind' is just a function of physical brain-stuff.  This is a reductionist argument, and folks with a more thoughtful, nuanced view of the world might want to take issue with that kind of outlook.

The Beeb's treatment of the subject may be more science fiction than real science, but it does highlight at least one aspect of relevance to us, as IFAs:  how much of what we do can be subcontracted out to a black box, and how much of it is down to us, the human IFAs?  It could well be that we are selling ourselves short if we 'reduce' the value of what we do to the output of some kind of clever algorithm.  This is the "Computer says No" methodology - we click a few fields on the computer screen, press the 'Izzy-whizzy let's get bizzy' button, sit back and wait for LEDs to flash and disk drives to hum, before the thing spews out some sort of printout which must - necessarily - be regarded as authoritative in some absolute sense.  After all, we're paying for all these systems, and may as well make use of them.

If these kinds of intelligent systems are an effective replicator of IFA expertise, then it makes me wonder what we were up to before they came along.  Most of us have had the experience of carefully fine-tuning the research parameters in our software, only for it to produce a result which leaves us responding, "Damn - Lucky Rabbits Insurance can't possibly be the right answer".  And we start all over again, because we know that, whatever it is the client needs, it's not Lucky Rabbits Insurance.

Clearly, intelligent systems are here to stay, in various shapes and forms - so it's important for us to think about what it is that we have to offer, as human beings, that cannot be roboticised in the way the Beeb fondly imagines.  The following aspects of our advisory role come to mind:

  • Voluntarily making our time available to those who need it;
  • Empathising with our clients as they wrestle with pressing needs and priorities;
  • Assisting with difficult choices;
  • Modelling ethics in practice;
  • Arriving at reliable diagnoses based upon soft data (drawing on years of experience) as well as hard data (which robots handle better);
  • Choosing to step outside the box when circumstances justify it;
  • Mentoring, teaching, counselling, representing another person;
  • Freely choosing not to work with those whose behaviour and attitude are antithetical to our values.

I'm sure you can think of more.  The trick seems to be to define our professional role in terms of those things which are uniquely and definably 'human', rather than rely entirely on the more mechanistic aspects of our work.

For ValidPathers...

There's guidance on the website covering the use of research software and systems - but you'll need to be logged in.

Update

Developments on this subject at the FCA

Kevin Moss, 16/09/2015